Computer scientist envisions a world where robots have that human touch
Just mention the words “drone” or “robot” and some will conjure unsettling visions of a future in which computers threaten to take over the world.
Dan Szafir, a professor in the Department of Computer Science and ATLAS Institute, envisions a day when robots make beds at understaffed nursing homes, drones fly over fields providing precise measurements of crop yields, and flying automatons hover around the International Space Station, handling mundane chores so astronauts can tend to more important tasks.
Rather than seeing such intelligent machines as replacements for people (as is so often the fear), Szafir views them as integral collaborators, able to help DIY-ers with household projects.
“The ultimate goal is to design robots that can better support human activities—to improve usability, efficiency, and how much people enjoy interacting with them,” Szafir says.
With an undergraduate degree in history and a PhD in computer science from the University of Wisconsin-Madison, Szafir arrived at CU in 2015 with a reputation—at age 27—as a key player in the burgeoning multidisciplinary study of human-robot interaction.
“There are a lot of good technology people and a lot of good social scientists, but individuals who bridge the gap between the two are rare. Dan is one of them,” says Bilge Mutlu, an assistant professor at UW and Szafir’s mentor.
Remotely controlled robots have long been used in factories, bomb disposal and space exploration. But as they transition to more complex, autonomous and intimate work alongside people—vacuuming homes like iRobot's Roomba, or assisting shoppers like Lowe's new robotic greeters—it's becoming critical that humans and robots understand each other better, Szafir says.
With funding from NASA, the National Science Foundation and Intel, Szafir has rolled out several new research initiatives.
One aims to improve robots’ ability to understand nonverbal cues, like eye gaze, hand gestures and changes in voice intonation. “As people, we are coded to use gestures. It’s something we do naturally, and we are very good at untangling what they mean,” Szafir says. Robots, not so much. For instance, he explains, if you’re working on a car with a friend, you might say, “Hey, can you grab that wrench?” while pointing or glancing at the toolbox across the room. If your co-worker were a robot, you’d have to say: “Next, I need the 7 mm wrench. It is on this particular table in this particular place. Go pick it up and put it in my hand.”
Szafir and his graduate students will first videotape teams of human volunteers building something in the lab, painstakingly documenting their verbal and nonverbal cues. Next, he hopes to develop probabilistic models (if a human gestures like X, there’s a 90 percent likelihood she means Y) that could someday be used to develop software for more intuitive robots.
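The kind of probabilistic model Szafir describes can be sketched as a simple conditional-probability table mapping an observed gesture to its likely meanings. The gestures, intents and probabilities below are invented for illustration only; a real model would be estimated from the annotated video of the lab sessions.

```python
# Minimal sketch of a probabilistic gesture-to-intent model.
# All gestures, intents, and probabilities are hypothetical placeholders;
# in practice they would be learned from annotated recordings.

# P(intent | gesture): for each observed gesture, the likelihood of
# each possible meaning.
GESTURE_MODEL = {
    "point_at_toolbox": {"fetch_tool": 0.90, "look_there": 0.10},
    "open_palm":        {"hand_me_item": 0.75, "stop": 0.25},
    "glance_at_part":   {"inspect_part": 0.60, "fetch_part": 0.40},
}

def most_likely_intent(gesture):
    """Return the most probable intent for a gesture, with its probability."""
    intents = GESTURE_MODEL[gesture]
    intent = max(intents, key=intents.get)
    return intent, intents[intent]

intent, p = most_likely_intent("point_at_toolbox")
print(f"{intent} ({p:.0%})")  # fetch_tool (90%)
```

A lookup like this mirrors the "if a human gestures like X, there's a 90 percent likelihood she means Y" idea: the robot picks the highest-probability interpretation rather than requiring an explicit verbal command.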
He’s also exploring ways to design robots so humans can better predict their actions. “Right now, drones are loud, very robotic looking and hard to predict,” he says. “People find that unsettling.”
Szafir is also developing ways robots, drones and hand-held consumer devices can interact, sharing information gleaned from their myriad sensors to paint a fuller picture for a remote human user. Can’t make it to that football game? “We could potentially combine footage from drones overhead, ESPN, and pictures and videos from your friends’ cell phones to create a full, reconstructed 3D map of the environment and port it back to you at home using a virtual reality device. You’d get the sense that you were right there,” Szafir says.
Sound like science fiction? Maybe so. But Szafir, well aware that some are creeped out by his chosen field, believes the potential for good far outweighs the potential for harm.