ATLAS Institute faculty members have diverse research interests that span computing, art, learning, chemistry, design, robotics, music and beyond.
A list of recent publications by ATLAS faculty and students is available.
A computational design tool for smart textile weaving that blends features from traditional weaving software with circuit design tools.
Designing interfaces that use augmented reality technology to superimpose data on robots, making working with them more intuitive.
Empirical research in human-computer interaction and computer-supported cooperative work, with a focus on philanthropic informatics.
Building a low-cost chemical synthesis robot, with the ultimate goal of automating the most tedious and time-consuming chemistry lab operations.
Scalable interface technologies for supervising aerial robots, as well as new algorithms for communicating robot state to users.
Exploring how shaping carbon-coated paper with manual and digital fabrication processes can create functional and aesthetic physical interfaces.
Exploring 3D printing as an interactive medium with a fast and reconstructable shape formation system.
Exploring how intentional engagements with chance can be implemented in digital fabrication systems.
A software framework that enables educators and other subject experts to create tactile representations that combine audio descriptions with kinetic motion.
Exploring ways for robots to detect specific kinds of human group interactions so they can behave appropriately.
Thin polymer sheets filled with vegetable oil and configured using a modified CNC machine show promise for flexible pumps, valves and actuators, opening the door to new shape-changing interfaces that are low-cost, modular, soft and conformable.
Furniture music is an interactive experience that meshes furniture placement with soundscape manipulation.
The IRON Lab explores new ways to use inexpensive consumer robotics to make computer interactions more tangible, useful and efficient.
Video and animation are common ways of delivering concepts that cannot be easily communicated through text. This visual information is often inaccessible to blind and visually impaired persons, and alternative representations such as Braille and audio may leave out important details. Audio-haptic displays allow complex spatial information to be presented along with accompanying descriptions.
Exploring how advances in augmented reality technologies are creating a new design space for long-distance telepresence communication through virtual avatars.
Shape displays are a promising approach to general-purpose shape-changing interfaces, but most existing pin-based shape displays focus on interactions at the scale of a human hand. LiftTiles, a modular and reconfigurable room-scale shape display, consists of an array of retractable, inflatable actuators that are compact and lightweight, each extending up to 1.5 m to allow for large-scale shape transformation.
The Light Orchard is an interactive installation that invites people to walk into a grove of futuristic trees, lit with color. The trees sense the presence of people in their space and can respond in many different ways. Users can play games, watch animations, and interact with simulations that let them collaborate, learn, and play together.
The Luminous Science project is an interdisciplinary exploration of learning about the biochemistry of a garden through art-making.
A prototype system for presenting information to people with vision impairments that combines a touchscreen tablet, static tactile overlays and several small, mobile robots.
To improve teamwork and trust between robots and humans, researchers explore human movement and actions as expressions of attitudes towards robots, ultimately helping robots understand what people think of them.
This project aims to catalyze innovation in smart textiles by creating next generation rapid prototyping tools that integrate circuit and textile design.
Autonomous navigation of outdoor trails remains a challenging problem. This project addresses it through virtual-to-real-world transfer learning, using a variety of deep learning models trained to classify the direction of a trail in an image.
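As a loose illustration of the virtual-to-real transfer idea (not the project's actual models or data), the sketch below fits a trivial nearest-centroid classifier on hypothetical feature vectors from a simulated "virtual" domain, then applies it to features from the "real" domain. The deep networks in the project play the role this stand-in plays here; every name, class label, and number below is made up for illustration.

```python
import math

# Hypothetical trail-direction classes, standing in for the project's labels.
CLASSES = ["left", "straight", "right"]

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(virtual_data):
    """Fit one centroid per class, using only simulated-domain data."""
    return {cls: centroid(pts) for cls, pts in virtual_data.items()}

def classify(model, feature):
    """Assign the class whose centroid is nearest to the given feature.

    This is where the 'transfer' happens in the sketch: the model was fit
    only on virtual data but is applied to a real-world feature vector.
    """
    return min(model, key=lambda cls: math.dist(model[cls], feature))

# Toy 'virtual' training set (entirely invented).
virtual = {
    "left": [(-1.0, 0.2), (-0.8, -0.1)],
    "straight": [(0.0, 1.0), (0.1, 0.9)],
    "right": [(0.9, 0.1), (1.1, -0.2)],
}
model = train(virtual)

# A 'real-world' feature vector near the left centroid.
print(classify(model, (-0.7, 0.0)))  # → left
```

In the actual project, the simulated domain would supply rendered trail images, the features would come from a convolutional network rather than hand-picked 2-D points, and classification would predict which way the trail turns in each frame.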