ATLAS Colloquium
Tuesdays, 11:30 am - 12:30 pm
Each week during the fall and spring semesters, the ATLAS Colloquium features dynamic speakers from academia and industry who work in fields of interest to the creative technology and design community. Whether artists, creatives, entrepreneurs or free spirits, these speakers share their interdisciplinary experience and knowledge in an intimate, small-group setting.
The ATLAS Colloquium is organized and curated by Ellen Do, professor of computer science and director of the ACME Lab. Students have the option of taking the one-credit ATLAS Seminar ATLS-7000-001 to earn credit for attending colloquia.
Talks are free and open to the public.
Location: Attend in person in ATLS 208 (The Hackery)
Online: Most events are also accessible via Zoom.
Recordings: Colloquia are often recorded and posted on the ATLAS Institute's YouTube channel.
Spring 2026 Colloquia
Check out the Colloquium Schedule Google Doc for more updates.

Music, Minds, and Machines: An Interdisciplinary Approach for the Advancement of the Modern Musician
Speaker: Torin Hopkins
Tuesday, January 20, 2026, 11:30am - 12:30pm MT
Abstract: Music is an ancient practice deeply rooted in our history and biology. However, modern developments in artificial intelligence are increasingly displacing, rather than enhancing, this creative act. Inspired by over 15 years of music education experience, I argue that it is our responsibility to build bespoke tools that address the varied needs of those participating in the act of making music. By grounding these tools in our shared human mechanisms while preserving the flexibility needed for personal expression, we can help musicians not only achieve but exceed their goals.
Through examples of co-creative AI and brain-computer interface-based systems, I demonstrate an interdisciplinary approach comprising four iterative phases: 1) Discovery, 2) Assessing Requirements, 3) Building, and 4) Validation. This structure creates opportunities for research threads to cross-pollinate and foster innovation—from understanding needs and neural mechanisms to deploying co-creative systems in performances and therapeutic settings. Blending practices from Human-Computer Interaction, Computer Science, Neuroscience, and Music, I demonstrate how we can use machines to not only generate music, but better understand ourselves and our relationship to it.
Bio: Torin Hopkins is a musician, researcher, and educator who focuses on designing technology to advance musicianship, musical practice, and cognitive health. He recently completed a postdoctoral research fellowship in computer science at the National University of Singapore, where he collaborated with the school of medicine to develop AI-assisted music therapies for stroke rehabilitation and language learning. He received a triple PhD from the University of Colorado, Boulder in neuroscience, cognitive science, and creative technology & design, where he studied how we learn music and built systems incorporating AI-driven musicians, brain-computer interfaces, networked extended reality environments, and novel performance paradigms. Over his career he has also worked in neurobiology (Southern Illinois School of Medicine), cognitive neuroscience (Evanston Hospital), music education (over 15 years of practice), AI and VR research (Ericsson Research), and music health research (Studios Education, Victor Wooten Center for Music and Nature).

Robot Design Games, Seriously: Playful Co-design Methods in Human-Robot Interaction for Vulnerable Populations
Speaker: Sawyer Collins
Tuesday, January 27, 2026, 11:30am - 12:30pm MT
Abstract: Robots are increasingly being deployed in healthcare, from the Moxi robot that delivers items in hospitals to the Paro seal-like robot that provides support to older adults. Within the context of mental health, there is an overwhelming need to tailor the care plan to the individual, taking into account their symptoms, experiences, and comorbidities. However, this presents a challenge to robot designers, as making a robot customizable can be expensive and technically difficult. As such, in my own research designing the Therabot™, a socially assistive robot (SAR), for individuals living with depression, I have used a variety of co-design methods to understand what is truly necessary for SARs to be realistically customizable for that user group. My results showed that participants were often open to integrating a SAR into their care plan but had difficulty envisioning the robot and its use cases, suggesting the need for a new co-design method to foster a relaxed, creative design atmosphere.
My work, through the exploration of qualitative methods in co-design, shows that Design Games help future users envision a future with a SAR that supports them through roleplay, gamification, and design fiction. In this presentation, I will discuss the co-design process behind the Therabot™ robot for depression management, as well as the development and use of design games that supported these outcomes. I present two novel design games that I developed: a tabletop role-playing game and a card game, each of which supports the extraction of insights for robot design. These co-design methods used gamification and play to increase participants' comfort and enhance creativity during the design process. Further, through these case studies, I will motivate a new class of design methods I term “Serious Design Games,” a novel design game framework that supports mutual learning between participants and researchers.

Applied Research in Audio Signal Processing and a Few Reflections
Speaker: David Romblom
Tuesday, February 10, 2026, 11:30am - 12:30pm MT
Abstract: In this talk, I'll briefly outline my own path through E-mu Systems, Universal Audio, and Sennheiser Research, to creating a research team at Dysonics (now part of Google), and finally my role at Apple. Based on this outline, I propose that our career paths are piecewise linear trajectories built from the opportunities available at each inflection point, and that the availability of these opportunities depends heavily on the project work and relationships that we're all building right now.
Bio: David Romblom received Music and Electrical Engineering degrees from the University of Michigan, a Master's in Media Arts and Technology from the University of California at Santa Barbara, and his Ph.D. in Music Technology from McGill University. He has worked for E-mu Systems, Universal Audio, Sennheiser Research, Dysonics (now Google), and finally the Vision Products Group at Apple. His interests center on perceptually-transparent approximations of acoustic reality. In his spare time he enjoys playing music, cycling, and hiking with Julie, Rocket, and Echo.

Your science is cool. Why do you suck at talking about it?
Speaker: Paige Hoel
Tuesday, February 17, 2026, 11:30am - 12:30pm MT
Abstract: The key to effective science communication lies not in the topic but in the storytelling. But what makes a great story? Science communication continually takes a back seat in the scientific world, partly due to time constraints, partly due to lack of motivation, and partly due to the difficulty of seeing your own work as a story. In this talk, I will explore how simple, engaging storytelling can make science accessible and compelling, using my science communication channel (@liloceanpaige) and other great communicators as case studies. I’ll argue that sharing scientific knowledge goes far beyond academic papers and, in the current political climate, is our responsibility.
Bio: Paige Hoel is an oceanographer and emerging voice in science communication. Her social media channel (@liloceanpaige) brings millions of viewers into the world of ocean and climate science in a fun and approachable way. Through her channel she has served as a live correspondent for a national morning show in Canada, worked as a subject matter expert for the History Channel, and partnered with industry leaders such as Discovery and Surfline. Paige built her channels through trial and error, at the suggestion of her students while she was teaching an intro to climate change class during graduate school. Paige completed her undergraduate degree at UC Santa Barbara and her Ph.D. at UCLA (conferred June 2024), where she studied anthropogenic influences on coastal water quality through high-resolution biogeochemical modeling. She currently resides here in Boulder (job searching) after having her position at NOAA rescinded in January 2025 during the federal RIF.

Netwalking: Community-Based Computer Vision
Speaker: David Hunter
Tuesday, February 24, 2026, 11:30am - 12:30pm MT
Abstract: We are surrounded by cameras and whether through government agencies, private companies, or by ourselves on social media, we live in a de-facto surveillance society. Despite this proliferation of cameras, we rarely harness the potential of distributed vision for our collective benefit. As designers and technologists this presents an opportunity: can we find use-cases for this technology that empower communities and shape society positively?
In this workshop we will go on a short "Netwalk" for a visceral experience of community-based computer vision, followed by a group design challenge to generate our own ideas, sketch solutions, and reflect on the practical and ethical challenges of distributed, shared, and automated vision.
Bio: David Hunter is a multidisciplinary designer, PhD candidate at the ATLAS Institute and member of the ACME Lab, researching explorable interfaces to data.

The Human Mind Behind the Machine: Neural Foundations of Programming
Speaker: Yun-Fei Liu
Tuesday, March 3, 2026, 11:30am - 12:30pm MT
Abstract: As artificial intelligence becomes increasingly capable of generating working code from everyday language, the human role shifts toward structuring problems and evaluating solutions. To enable productive human-AI collaboration, it is now time to understand how programming is implemented in the human mind and brain behind the machines.
Programming is a recent cultural invention that our brains did not evolve for. So, how do we learn to think in code? One possibility is that programming adapts the brain’s language network, much like programming ability emerges in large “language” models through intensive training on text. Alternatively, programming may build upon pre-existing neural systems for logical reasoning.
In this talk, I present evidence for the latter. Brain imaging studies show that programming code primarily engages the reasoning network, where algorithmic structure is represented. Such representations exist even before formal programming instruction, when people read plain-English descriptions of algorithms. After instruction, the same neural patterns are reused to represent programming code. Furthermore, behavioral evidence shows that reasoning ability, rather than language ability, predicts programming learning outcomes. Meanwhile, the language network plays a complementary role. It produces an initial interpretation of the code, which the reasoning network then elaborates into a structured mental model.
Together, these findings provide a foundation for understanding how neural representations of algorithms develop over time, both within a naturalistic programming session and throughout the development of programming expertise. This work has implications for programming education and for designing tools that expand creative expression through technology.
Bio: Yun-Fei Liu received his PhD in cognitive neuroscience from Johns Hopkins University, where he is currently a postdoctoral researcher in the lab of Dr. Marina Bedny. He began with a broad interest in the neuroscience of reading, exploring the neural bases of natural Chinese reading and braille literacy in blind individuals during his graduate training. His work on programming grew out of a simple question, partly inspired by his undergraduate background in electrical engineering: how does the brain “read” code? That curiosity gradually developed into a broader research program on how the brain engages in computational thinking.
He uses functional MRI, behavioral experiments, and computational analyses to investigate these questions and is continually interested in new methodologies and interdisciplinary collaborations. In parallel with his academic research, Liu works as a data analyst for the Medical Evidence Project at the Center for Scientific Integrity, where he develops large-scale databases of medical reviews and analytical tools to identify potential errors in the scientific literature.

Toward Installing Abilities into Users’ Nervous Systems
Speaker: Yudai Tanaka
Thursday, March 5, 2026, 11:30am - 12:30pm MT
Abstract: Computers provide instant access to information through audiovisual and language-driven media. Yet they remain fundamentally limited in empowering users' embodied abilities. For example, watching videos or querying an LLM cannot directly address how the body should feel and act, whether for learning new skills or receiving physical assistance on demand. So what is the bottleneck? Looking across 60 years of computer interface history, I argue that the bottleneck lies in how computers present information to users’ sensory systems. The eyes and ears are non-contact senses, whereas the skin requires physical contact to perceive. Conventional haptic technologies are therefore insufficient to support bodily senses. Even simple hardware attached to users' fingerpads encumbers their ability to use their hands naturally, limiting broader deployment.
To build interfaces that truly empower embodied abilities, my research shifts focus beyond the skin to the nervous system. Rather than presenting information externally, my approach directly delivers electrical signals that neurons respond to, driving perception and action. This talk provides an overview of how this new class of computer interfaces enables: (1) presenting embodied information while supporting natural touch perception (CHI 2023); (2) augmenting abilities while preserving users’ sense of agency through computational adaptation (CHI 2026); and (3) providing a path toward general-purpose embodied interfaces (CHI 2024). Together, these works demonstrate the technical feasibility of orchestrating physiological and perceptual complexity to deliver computer interfaces that move toward “installing” abilities. I conclude by discussing the broader implications of my approach, with applications ranging from skill learning and accessibility to rehabilitation.
Bio: Yudai Tanaka is a PhD candidate in the Department of Computer Science at the University of Chicago. His research explores computer interfaces that empower users’ physical abilities by interfacing directly with their nervous systems. These interfaces are envisioned as a foundation for the next generation of interactive systems, with applications ranging from skill learning and accessibility to rehabilitation. Yudai has published 14 full papers in top Human-Computer Interaction venues, including ACM CHI and UIST, receiving a Best Paper Award (CHI ’23) and two Best Paper Honorable Mentions (CHI ’24, UIST ’24). He was also recognized as a Google PhD Fellow and a Siebel Scholar. His research has been featured in IEEE Spectrum, New Scientist, and CBS.

Electrons in, Emotions out: My Adventures in Entertainment Robotics
Speaker: Morgan Pope
Tuesday, March 10, 2026, 11:30am - 12:30pm MT
Abstract: A chronological tour of a career spent using technology to make people feel things.
Bio: Morgan Pope did his graduate work in a bioinspired robotics lab where he made robots jump, perch, fly, and climb in various combinations. He then spent eight years at Walt Disney Imagineering R&D trying to push the field of Audio-Animatronics towards more dynamic performances. Now he works at a startup, Familiar Machines & Magic, which he can't say too much about.
Fall 2025 Colloquia
Check out the Colloquium Schedule Google Doc for more updates.
Synth: AI-Powered Materials Discovery
Speaker: Grant Zukel
Tuesday, August 26, 11:30am - 12:30pm MT
Abstract: The discovery of new functional materials has traditionally been slow, expensive, and dominated by trial-and-error. Conventional “composition-first” approaches start with a candidate formula, simulate or synthesize it, and only then evaluate whether it meets the desired properties. Too often, it doesn’t.
Synth offers a new paradigm: a property-first materials discovery platform. By starting from the properties we want — rather than a fixed composition — Synth uses physics-informed candidate generation based on peer-reviewed science libraries, surrogate property models, and GPT-5’s multi-perspective expert reasoning to rapidly narrow the vast search space. The system prioritizes candidates by thermodynamic stability (energy above the convex hull, Ehull) and synthesis likelihood, producing visualized, expert-style reports and 3D structures.
Synth integrates established computational chemistry libraries (pymatgen, ASE, matminer, scikit-learn, and others) with GPT-5 reasoning to create a reproducible, auditable workflow. It produces actionable insights for researchers while learning continuously from user feedback and experimental data. The result is a faster, smarter way to move from target properties to viable candidate materials — enabling researchers to explore new frontiers in superconductivity, dielectrics, catalysts, and beyond.
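To make the property-first ranking idea above concrete, here is a minimal, hypothetical sketch in Python. It is not Synth's actual code: the `Candidate` fields, the 0.1 eV/atom stability cutoff, and the tie-breaking order are invented for illustration only.

```python
# Hypothetical sketch of property-first candidate ranking (NOT Synth's
# implementation). Candidates carry a thermodynamic stability value
# (energy above the convex hull, eV/atom; lower is more stable) and an
# estimated synthesis likelihood (0-1; higher is more synthesizable).
from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    e_hull: float             # energy above convex hull, eV/atom
    synth_likelihood: float   # estimated probability of synthesizability

def rank_candidates(candidates, max_e_hull=0.1):
    """Discard clearly unstable candidates, then rank the rest:
    lowest e_hull first, highest synthesis likelihood as tie-breaker.
    The cutoff and ordering are illustrative choices, not from the talk."""
    viable = [c for c in candidates if c.e_hull <= max_e_hull]
    return sorted(viable, key=lambda c: (c.e_hull, -c.synth_likelihood))

pool = [
    Candidate("BaTiO3", 0.00, 0.95),
    Candidate("XyZ9",   0.45, 0.10),   # hypothetical, far above the hull
    Candidate("SrTiO3", 0.02, 0.90),
]
ranked = rank_candidates(pool)
print([c.formula for c in ranked])  # unstable XyZ9 is filtered out
```

In a real pipeline the `e_hull` values would come from a phase-diagram analysis (e.g. via pymatgen, one of the libraries the abstract names) rather than being supplied by hand.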
Bio: Grant Zukel is the founder of Synth, an AI-assisted materials discovery platform. His background spans philosophy, software engineering, military intelligence, DevOps, and international telecom. Grant began programming at age 8, built software for multi-million-dollar companies by 14, and later served in military intelligence, where he developed software to process satellite feeds and enhance mission targeting.
After the military, he became a DevOps engineer, eventually leading AI infrastructure projects at AT&T and founding a DevOps school that trained and placed over 100 engineers into the industry. Today, he serves as a Director of DevOps while also leading an investment group with projects in agriculture, energy, and telecommunications across Africa.
As Chairman and President of YourTel Inc., Grant developed a graphene concrete battery technology to power remote telecom towers. The challenges of designing such advanced materials inspired him to create Synth — a property-first AI materials discovery platform. His mission is to accelerate materials innovation by combining physics-grounded machine learning with reasoning capabilities to bridge disciplines and shorten the path from discovery to application.
Who is Kurt Smith? Why is he here? And what in the world is vagalometry?
Speaker: Kurt Smith
Tuesday, September 2, 11:30am - 12:30pm MT
Abstract: In this colloquium talk, Kurt will provide an overview of his career and discuss a career-long research interest in human-machine interactions that promote improved healthcare and wellness. Specifically, Kurt will introduce the wild-eyed notion of vagalometry as a field of study: the phenomenology of human-machine interaction, pursued with the express vision of enabling a neuroceptive-type relationship that supports wellness and sustainability for the individual and for society.
Bio: From music to engineering, Kurt Smith, D.Sc., a pioneer in the medtech industry, made a remarkable professional pivot that was incredibly fortunate for patients and the medtech industry as a whole. Initially aspiring to be a singer and songwriter, he reevaluated his path when he recognized the challenges of achieving a sustainable and profitable career in the music industry. During a fortuitous meeting with his college counselor, Smith learned that thanks to his passion for math and the numerous related courses he had already completed, he was just a year-and-a-half away from obtaining an electrical engineering degree. This helped convince him to make the shift, and he went on to complete undergraduate and graduate degrees at Southern Illinois University as well as a Doctor of Science in electrical & biomedical engineering from Washington University in St. Louis.
Engineering a long and storied career over the past four decades, Kurt went on to spend lengthy stints at Medtronic and other leading medtech firms, and helped found 20 ventures—many of which are still active growth programs at Medtronic and other organizations. He holds more than 30 patents and has earned numerous awards and accolades, including being named an AIMBE Fellow and Medtronic Bakken Fellow. Along the way, he’s touched well over 10 million patients with products he’s invented and developed. He also has held numerous board appointments and been published extensively.
Eager to share his knowledge with the next generation, Smith started the Engineering Entrepreneurship program at the University of Colorado, Boulder, which is now a minor offered in the School of Engineering. He also has served as an adjunct professor at the Johns Hopkins University School of Engineering, St. Louis University School of Medicine, and Southern Illinois University and developed and taught an Innovation Short Course in Shanghai for several years as part of the Shanghai Jiao Tong University and University of Michigan Engineering School Joint Program.
While his list of accomplishments is impressive, what Smith says he’s most proud of in this accomplished career is how he’s changed the lives of hundreds of young innovators and entrepreneurs through mentoring, leadership and teaching. “I’ve always focused on creating a strong culture wherever I’ve gone, aiming to inspire a ‘can-do’ attitude where people support and really care for each other,” he says. “The heart of developing a culture is ensuring there’s an environment of safety where people feel confident that they can fully be themselves.”
While he has more than made his mark on the medtech industry, Smith didn’t abandon his musical dreams: he built and ran a professional recording studio for fifteen years and is now in the process of putting out his second album as a singer-songwriter. Among other hobbies, he enjoys all the hiking and trail running that Colorado offers. With a certificate in Organizational Chaplaincy from the Upaya Zen Center Chaplaincy Program, Smith also continues his practice and involvement with the center in Santa Fe.
Tangible Interfaces for Creative Learning
Speakers: Krithik Ranjan and Anika Mahajan
Tuesday, September 9, 11:30am - 12:30pm MT
Abstract: Tangible interfaces offer a powerful approach for engaging learners in computational experiences, fostering intuitive, collaborative, and constructionist-driven learning. There is a wide landscape of such learning technologies that support creative learning—tools and interfaces that immerse learners in computational learning through playful and open-ended means—in both research and commercial contexts. In this talk, we share how we evaluated several such existing tools to investigate how they engage learners and enable creative learning. We highlight the (1) diversity of learning goals and creative opportunities in these interfaces, (2) a taxonomy of tangible interaction utilized in them, and (3) a spectrum of tinkerability we developed based on our analysis. We developed this design space to provide insights for researchers, designers, and educators, and inform the future development of tangible, open-ended learning experiences.
The second half of this talk will be an interactive workshop with one such learning interface—OrbitSim—that we have developed for astronomy classrooms in collaboration with the Department of Physics at the University of Denver. OrbitSim is a paper-based, tangible toolkit for creating learner-driven orbital simulations to visualize and experiment with Kepler’s laws of planetary motion. Come create your own planetary orbits with OrbitSim! For those attending online, please download and print the paper templates from here if you would like to play.
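For readers unfamiliar with the physics OrbitSim teaches, Kepler's third law relates a planet's orbital period to the size of its orbit. The short Python sketch below is not part of the OrbitSim toolkit; it simply illustrates the law itself for a Sun-centered orbit.

```python
# Kepler's third law for an orbit around the Sun:
# T^2 = 4 * pi^2 * a^3 / (G * M), so the period grows as a^(3/2).
import math

GM_SUN = 1.327e20  # standard gravitational parameter of the Sun, m^3/s^2

def orbital_period_days(semi_major_axis_m):
    """Return the orbital period (in days) for a given semi-major axis."""
    period_s = 2 * math.pi * math.sqrt(semi_major_axis_m**3 / GM_SUN)
    return period_s / 86400.0

# Earth: semi-major axis of about 1 AU gives roughly one year.
print(round(orbital_period_days(1.496e11)))  # prints 365
```

Doubling the semi-major axis lengthens the period by a factor of 2^(3/2), about 2.83, which is exactly the kind of relationship learners can discover by building orbits with the toolkit.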
Bio: Krithik Ranjan is a 4th-year Ph.D. candidate in Creative Technology and Design at the ATLAS Institute. His research develops tools for creative computational learning in a tangible, paper-based medium. He is interested in empowering young and adult learners to be producers of technology by leveraging familiar everyday materials and a “no-cost” approach to creative computing.
Anika Mahajan is a 2nd-year Ph.D. student in Creative Technology and Design at the ATLAS Institute. She wants to create new educational technologies for astrophysics/astronomy. She is particularly interested in combining the benefits of interaction and immersion within planetariums.
Beyond Reality: Crafting the Future of Human Interaction in Augmented Reality
Speaker: You-Jin Kim
Tuesday, September 16, 11:30am - 12:30pm MT
Abstract: In this talk, Dr. You-Jin Kim will explore his pioneering research in the Dynamic Reality Lab (DRL), where user inputs such as EEG, EMG, natural locomotion, and eye gaze are leveraged to make digital content interaction in augmented reality (AR) reflect human intent and expression. He will showcase his projects, such as Dynamic Theater, World Tracer, FractalBrain, Spatial Orchestra, and Reality Distortion Room, to illustrate how narratives unfold through user interaction in AR. The presentation will emphasize how these initiatives extend human interactivity by seamlessly blending physical and virtual worlds, providing new insights into the future of human-computer interaction, particularly within the field of engineering.
Bio: You-Jin Kim is an Assistant Professor and director of the Dynamic Reality Lab (DRL) at Texas A&M University, as well as a core faculty member in the Visual Computing and Interactive Media PhD program. The DRL combines research in engineering and the arts to enhance human interactivity in the blended world of physical and virtual objects, including augmented reality (AR) navigation and immersive theatrical environments. The lab specializes in human-computer interaction in wide-area AR environments, utilizing technologies like EEG, EMG, and eye-tracking. You-Jin envisions the future of virtual production and entertainment through experimental spatial computing techniques.
Circular Futures: Prototyping through Eco Materials and Digital Fabrication
Speaker: Beth Ferguson
Tuesday, September 23, 11:30am - 12:30pm MT
Abstract: Beth Ferguson will share recent research and design work from the Circular Futures Lab at the University of California, Davis, which investigates climate solutions through ecological design, digital fabrication, and sustainable materials. One of the lab’s key projects, the Eco Materials Library, provides practical guidance for Makerspaces to create their own collections of regenerative and waste-based materials, including biomaterials, upcycled resources, and locally sourced natural materials like clay and bamboo. These materials offer new opportunities for low-carbon product design and architecture. Ferguson will also present her new Hybrid Basket project, which integrates craft with computational design through the fusion of handwoven reed and custom 3D-printed support rings. The forms explore structure and material relationships within fiber vessels, leveraging digital fabrication to expand the possibilities of woven forms. Together, these projects reflect the Circular Futures Lab’s commitment to rethinking material culture through systems thinking, craft, and climate resilience.
Bio: Beth Ferguson is an ecological designer whose work integrates solar engineering, active mobility, digital craft and sustainable materials to advance climate resilience and reduce carbon emissions. She is an Associate Professor in the Department of Design at the University of California Davis, where she directs the Circular Futures Lab. Her projects have been exhibited internationally at Ars Electronica, Dutch Design Week, Otago Museum, and Centre d’Arts Santa Mònica, and nationally at SXSW, the ZERO1 Biennial, TEDxPresidio, and the Exploratorium. Her work has been featured in Fast Company, BBC News, The New York Times, Wired Italy, and Radio New Zealand. Ferguson has held residencies at the Autodesk Technology Center in San Francisco, Haystack Mountain School of Crafts, and the American Arts Incubator in New Zealand.
The Crooked Path of Innovation
Speaker: Joshua Seiden
Tuesday, September 30, 11:30am - 12:30pm MT
Abstract: The talk will discuss the path that successful innovation takes in small and even extremely large enterprises. Successful innovation and product development are a function of three critical variables: Technology, Environment, and Business Opportunity. To be successful, innovators must be flexible in each of these areas and willing to pivot across all three. How does an organization or team do that? In this talk, we’ll see real-world examples of this process and talk about how teams can leverage it to improve their likelihood of success.
Bio: Josh has spent more than three decades in the cable, broadband and technology industry, leading engineering teams that developed and scaled video distribution platforms, content delivery networks, and advanced advertising systems. Most recently, he served as the head of Comcast Labs, where he focused on commercializing emerging technologies that drive innovation for both Comcast and the broader industry. During that time, his team built tools and applications leveraging robotics, artificial intelligence, computer vision and data science. Beyond corporate leadership, Josh has partnered with startups to help them translate breakthrough ideas into sustainable businesses, combining technical expertise with an entrepreneurial mindset.
Josh is currently the founder of Ten Mile Ventures where he focuses on helping early-stage startups formulate their technologies and businesses as they grow and scale.
Making Things Meaningful
Speaker: Andy Carle
Tuesday, October 7, 11:30am - 12:30pm MT
Abstract: In this talk, Andy will reflect on his time spent in the Internet of Things industry, outlining an extended (and arguably quixotic) quest to make consumer electronics more personally meaningful to their users. The journey begins with a discussion of Kinoma Create, an IoT prototyping toolkit that Andy invented and brought to market to help Makers create gadgets for friends and family. The next stop is Moddable, a startup he cofounded based on a vision of ethical consumer electronics that can be maintained and modified by users. Navigating the good and bad of the Moddable experiment brings us from Silicon Valley to Colorado, where Andy has paused to work on things more meaningful to himself. He will conclude by exploring his current world of semi-professional curling and inviting the audience to brainstorm how technology might be woven meaningfully into the sport.
Bio: Andy Carle, PhD, is Curler in Residence at Rock Creek Curling, currently on sabbatical from a career as a prototype engineer, lecturer, product lead, and user experience strategist. He is a cofounder of Moddable Tech, the former Director of the Kinoma Team at Marvell Semiconductor, an author of the ECMA-419 ECMAScript embedded systems API specification, and a former member of the Visiting Faculty in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. He holds a PhD in Computer Science from Berkeley, where he focused on Human-Computer Interaction and Computer Science Education.
Andy is a passionate advocate for centering end users, societal good, and ethical design in both product development and computer science education. His work has made it easier for designers with strong ideas to bring them to life, including curriculum design tools that lead to more interactive and accessible college courses, prototyping tools for consumer electronics designers with limited embedded engineering skills, and an embedded JavaScript SDK that enables and encourages manufacturers to leave room for application designers to expand their products. Today, as a semi-professional curler, Andy is experimenting with novel teaching techniques for novice and advanced curlers, while also developing innovative curling technologies.

Digital Critical Maps
Speaker: Xavier Barriga-Abril
Tuesday, October 14, 11:30am - 12:30pm MT
Abstract: If everyone experiences walking through a city differently, who decides what its representation looks like, what it reveals, and what it conceals? Historically, maps have mediated dialogues between physical spaces and the social, political, economic, and biocultural forces that shape them. This talk presents ongoing research on designing digital maps as both artistic and political acts, questioning dominant spatial representations while engaging with decolonial perspectives. Through performative methodologies and data-driven self-portraits, my work explores more intimate relationships with urban data. At the same time, collaborative and participatory approaches have led to an ecology of interfaces (from interactive visualizations to public sculptures) that extend these conversations into shared civic spaces. Join us on a journey across diverse digital and material interfaces that sense and represent the city in novel ways, integrating speculative cartography, data physicalization, and autoethnography to imagine more plural and situated forms of mapping.
Bio: I am a designer, artist, and researcher exploring the convergence of technology and art. I am currently an Assistant Professor at PUCE University in Ecuador, in the School of Architecture, Design, and Arts. During the first two quarters of 2025, I served as a Visiting Teaching Assistant Professor in Emergent Digital Practices at the University of Denver. My academic work includes over fifteen publications in internationally recognized journals, and my artistic practice has been exhibited in Mexico, Spain, the United States, the United Kingdom, and Ecuador—earning recognition through national and international awards. My research focuses on critical maps, electronic fabrication, and creative coding. This year, I have also been developing new work within the Regenerative Food Futures project, funded by IRD France.

Engineering social interactions through music and interpersonal synchrony
Speaker: Tal-Chen Rabinowitch
Tuesday, October 21, 11:30am - 12:30pm MT
Abstract: A growing body of research highlights the potential of music and interpersonal synchrony to enhance social and emotional interactions. Musical interaction—particularly when individuals collaboratively make music—has been linked to increased group cohesion, cooperation, and emotional empathy. Similarly, interpersonal synchrony, where individuals coordinate their movements or actions, has been shown to foster social bonding, trust, and emotional connection. These positive outcomes are well-documented, and understanding the underlying mechanisms offers valuable insights into their broader social significance. However, despite the extensive literature on their benefits, relatively few studies address potential negative consequences associated with musical engagement and synchrony. Concerns include the possibility of fostering blind obedience, conformism, reduced creativity, or even aggression toward out-group members under certain conditions. This duality suggests that musical and rhythmic interactions may not be inherently beneficial but are influenced by contextual and sociocultural factors. In this talk, I will explore the theoretical frameworks explaining the mechanisms behind these dual outcomes, integrating sociological perspectives to elucidate how and when music and synchrony promote positive or negative social effects. I will present empirical evidence supporting these notions, including recent preliminary data from ongoing experiments in my lab. These findings contribute to understanding the conditions that enhance the beneficial impacts of musical and rhythmic engagement on social cohesion while mitigating potential drawbacks. I will also discuss future research and specifically the potential of incorporating new technologies in this kind of behavioral research.
Bio: Dr. Tal-Chen Rabinowitch is an associate professor at the School of Creative Arts Therapies, the director of the "Music & Social Development Lab" at the University of Haifa, and currently a visiting scholar at the ATLAS Institute, University of Colorado, Boulder. She is deeply interested in understanding the role music plays in children's social and emotional development. She has a Bachelor's degree in Psychology and Musicology from the Hebrew University of Jerusalem and a B.Mus in flute performance from the Jerusalem Academy of Music and Dance. She then completed her Master's in Music Cognition at the Hebrew University of Jerusalem, followed by a PhD at the Centre for Music and Science at the University of Cambridge. Her postdoctoral training included work in the labs of Professors Ariel Knafo (Hebrew University of Jerusalem) and Andrew Meltzoff (Institute for Learning & Brain Sciences, University of Washington). She returned to Israel and to the University of Haifa in 2018.

Fundamental Regulatory Strategies for Medical Extended Reality
Speaker: Aubrey Shick
Tuesday, October 28, 11:30am - 12:30pm MT
Abstract: Medical extended reality (Med XR) has arrived, with over 92 FDA-authorized devices already on the market. But many innovators still underestimate the regulatory foundations required for market adoption and regulatory success. This session introduces the fundamental strategies every Med XR team should know: how to determine if a product is a medical device, when wellness vs. device claims apply, and which FDA pathways (510(k), De Novo, Breakthrough) are most relevant. We’ll also discuss how to frame quality, usability, and evidence decisions in ways that anticipate regulatory review, even in the absence of XR-specific guidance. Beyond covering the basics, we’ll spotlight the “unknown unknowns,” or blind spots, that often derail promising projects, so attendees leave with the right questions to guide both regulatory and business strategy.
Learning Objectives
Distinguish non-regulated vs. medical device claims and classify Med XR products with confidence.
Recognize the primary FDA pathways available for Med XR technologies.
Frame quality, usability, and evidence decisions in ways that strengthen regulatory alignment.
Identify common blind spots (“unknown unknowns”) and develop the right questions to guide cohesive regulatory and go-to-market strategy.
Format
45-minute talk + 15-minute Q&A.
Bio: Aubrey Shick is a former Sr. Digital Health Advisor at FDA’s Digital Health Center of Excellence and principal consultant at Launch and Logic. She specializes in regulatory strategy for AI/ML, XR, and digital mental health technologies, with expertise in aligning product design and FDA’s regulatory expectations.

Understanding the human genome to drive interdisciplinary research
Speaker: Sakaiza Rasolofomanana-Rajery
Tuesday, November 4, 11:30am - 12:30pm MT
Abstract: The human genome holds the information that dictates how the body functions in a sequence of base nucleotides (DNA). About 1–2% of the DNA codes for proteins, whereas the remaining 98–99%, non-coding DNA, contains elements that regulate various cellular functions. Both the coding and non-coding parts of the genome work together through biological pathways to carry out their functions. In this talk, I will discuss how our understanding of the entire human genome (coding and noncoding) can be used to relate a given phenotype to biological function by introducing the concept of the Pathway-Level Information Extractor (PLIER). PLIER is a computational method that extracts groups of genes co-expressed in the same context (modules) and that share the same function. These gene modules can then be used in various ways to connect a phenotype of interest to underlying biological pathways. This approach incorporates gene-gene interactions and enables hypothesis generation, advances our understanding of human health and disease, and helps identify potential directions for therapeutic innovation. Using Type 1 diabetes as an example, I will show one way to extract insights from genetic data and start a conversation on how a similar approach can be applied in interdisciplinary ways.
Bio: Sakaiza is a PhD student in Human Medical Genetics and Genomics (HMGG) at the Anschutz Medical Campus (AMC), School of Medicine. She is eager to use her understanding of the human genome to solve real-world problems and values collaboration with people in fields different from her own. Besides her involvement in health research, Sakaiza is passionate about open science, entrepreneurship, education, and art, and is always looking for ways to incorporate these concepts into her work. Sakaiza graduated from Smith College with a BA in Data Science and Chemistry. She has since worked as a research assistant in the CU Denver Math Department and the AMC Department of Biomedical Informatics (DBMI). Her work involved using various high-throughput datasets to understand the role of Mediterranean diets in cardiovascular health.
Another project she contributed to used unsupervised machine learning techniques to explore how infant diet impacts the microbiome. In 2024, Sakaiza joined the HMGG program as a PhD student and is now conducting her thesis research in the Pividori Lab (DBMI), where she uses computational analysis to uncover the latency of autoimmune diseases by exploring molecular mechanisms involved in their initiation and progression.

Touching Change: Haptic Co-Regulation as an Adjunct to Cognitive Reappraisal
Speaker: Preeti Vyas
Tuesday, November 11, 11:30am - 12:30pm MT
Abstract: Emotion regulation (ER) is vital to resilience and overall well-being, yet adaptive ER strategies like cognitive reappraisal are often difficult to access in real-world stressful moments. Touch can play an essential role in facilitating these cognitive ER processes. In this talk, Preeti will present the CHORA (Comforting Haptic Co-Regulating Adjunct) framework, which identifies how affective haptic systems can function as adjuncts to reappraisal and proposes mechanisms through which they can scaffold ER. Drawing on evidence from neuroscience and cognitive science research, the framework outlines how haptic cues can directly evoke alternative appraisals, indirectly support cognitive flexibility by calming arousal and shifting attention, and gradually entrain users to internalize reappraisal skills over time. The talk concludes with a roadmap for designing and evaluating haptic adjuncts to emotion regulation, highlighting applications in mental health support and opportunities for interdisciplinary collaboration across haptics, robotics, and affective computing. You might also hear insights on designing and conducting empirical in-lab studies to evaluate these complex ER constructs.
Bio: Preeti Vyas is a 6th-year PhD Candidate in Computer Science at the University of British Columbia, working in the SPIN Lab with Prof. Karon MacLean. With a background in Electronics Engineering (B.Tech, NIT Bhopal) and Electrical Engineering (M.Sc., McGill University), she brings an interdisciplinary perspective spanning social robotics, affective haptics, and human-centred design. Her doctoral research investigates how haptic interactions with zoomorphic social robots can support emotion regulation. She has published in venues such as UIST, CHI, TOH, Haptics Symposium, EuroHaptics, and NYAS. Preeti is also an HCI educator with 8+ years of teaching and mentoring experience. She is deeply committed to advancing emotion regulation research, believing that technologies which scaffold emotional resilience can have a transformative impact on everyday life and well-being. Beyond research, she explores human emotions through expressive poetry, visual arts, and open mic performances, practices that continually inform and inspire her scientific work. She is entering the job market this year and is looking for opportunities that align with the interdisciplinary nature of her academic research.

Reshaping Memory, Reimagining Futures: Art and Technology in Practice
Speaker: Eunsun Choi
Tuesday, November 18, 11:30am - 12:30pm MT
Abstract: Through this talk, Eunsun will present her work across diverse mediums and technologies, sharing her perspective as an individual artist who engages with technology as both a tool and a subject of inquiry. Her practice is project-based, and she will introduce a range of works that illustrate different approaches. Her artistic research focuses on how art can reshape memory and history—how we live in the present, and how we might speculate on the future. Much of this inquiry is grounded in her own difficult and sometimes negative experiences shaped by society, which she reimagines through humor, interaction, and movement. The audience will see how her conceptual investigations have developed into traditional sculptural forms, installation-based works, and screen-based projects that expand into the technological realm.
Bio: Eunsun Choi is a multidisciplinary artist from Korea, currently based in Seattle. She received her MFA from Hunter College and is pursuing a Ph.D. in the DXARTS program at the University of Washington. Her recent solo installations and exhibitions include presentations at 4Culture, the Seattle Art Fair, Out of the Box Gallery in Seoul, and the Thomas Hunter Project Space in New York City. Her work has also been featured in numerous group exhibitions across New York State, Queens, Brooklyn, and South Korea.
Choi has participated in several residencies, including the PLAYA Awarded Residency in Summer Lake, Oregon; the NY+20 Residency in Chengdu, China; the Sculpture Space Residency in Utica, New York; and the Hunter College Ceramic Residency in New York City. In addition, as part of the Jeju Island Artist Collective, she received the NYFA Queens Art Fund and the City Artist Corps Grant in 2021, through which the collective curated multiple group exhibitions in Korea and Japan.

From Survival to Storytelling: The Journey of Serenity Forge
Speaker: Zhenghua Yang
Tuesday, December 2, 11:30am - 12:30pm MT
Abstract: Zhenghua Yang (Z) is the founder and CEO of Serenity Forge, a values-driven video game development and publishing studio based in Broomfield, Colorado. A Forbes 30 Under 30 recipient and TEDx speaker, Z has created and published award-winning games that have reached millions of players worldwide, with exhibitions at the Smithsonian Institution and the Denver Botanic Gardens. His notable projects include Doki Doki Literature Club Plus, Slay the Princess, To the Moon, LISA, and the recently announced Fractured Blooms.
Bio: Zhenghua Yang (Z) is the founder and CEO of Serenity Forge, creating meaningful and emotionally impactful experiences that challenge the way you think.