Rise of the Robots: Machines Custom-Made to Care for You
The next generation of robots is poised to acquire the very thing that makes us human: our empathy.
Brian is ugly. His face sags under the weight of many worries. He’s rarely seen without an unflattering blue University of Toronto baseball cap and T-shirt. Stationary, legless, he perches on his stand like a strange mechanical bird, three fingers on each hand. Goldie Nejat, Brian’s creator and an assistant professor in the University of Toronto’s Department of Mechanical and Industrial Engineering, says he looks this way because he’s still a prototype and also because, when fashioning him from silicone rubber, metal and wire, she kept in mind the “uncanny valley” effect. This phenomenon was first noted in 1970 by Japanese roboticist Masahiro Mori, who found that our affection for robots rises as they become more lifelike, until we meet one that’s uncannily realistic, at which point our emotions plummet into a valley of disgust. We prefer robots to look like robots.
It’s crucial that Brian doesn’t weird people out because he’s a socially assistive robot, intended for use in long-term care facilities. Thanks to a speech synthesizer, Brian is able to vary the pitch and tone of his voice. Servomotors help him rearrange his face (which looks delicate and somehow reconstructed, as if belonging to a burn victim) into a smile or a frown. His affect-detection software can decode the mood behind your words and body language. The sum of these parts is a 70-kilogram motivational coach; he’s designed to entice seniors into a memory card game to combat dementia, and to talk them into eating a meal to fight chronic undernutrition.
Unveiled in 2009, Brian has made great strides in his programming: he’s now wittier and better at picking up cues. The only robot of his kind in Canada, Brian is also among the first health-care robots in the industry to possess social intelligence: he can, on the fly, alter his behaviour in response to a user’s emotions. Brian is here to reassure, encourage, congratulate. Assistive robotics is one of the fastest-growing areas in the field, and as its products evolve into interactive, emotional humanoids, health-care experts have shown a keen interest. After years of field tests, Nejat and her team are designing other helpmates like Brian, anticipating that friendly, autonomous robots will eventually become an integral part of daily life.
Although its roots stretch back to the 1940s, the contemporary field of social robotics began in earnest during the late 1990s with machines like Kismet, a talking head whose synthetic nervous system allowed it to shift its mouth, eyes and eyebrows into an array of natural-seeming expressions. While social-robotics research has continued to roll out new breakthroughs (real-time speech recognition, laser sensors to steer self-directed machines), the promise of robots like Brian comes at a critical demographic moment. According to the 2011 census, nearly 15 per cent of Canadians are now 65 or older, a number expected to climb to almost 20 per cent by 2031. At the same time, some economists warn of a skills crisis in health care: the Canadian Chamber of Commerce reports that we will face a shortfall of 60,000 nurses over the next decade.
Robots like Brian, says Nejat, can help. Some will provide companionship for seniors or pester them to take their medication; others will be more hands-on, keeping tabs on post-op patients or teaching those with mobility issues to walk again. Some will be on the market for daily home use, while others will operate as part of a fleet owned by a single institution, with a different robot attending to a patient’s every individual need. “That’s our objective,” Nejat says. “Ten, 20 years down the line, you will see these robots in private homes, nursing homes and hospitals.”
But how close are we to a future of Brians: self-aware computers that will take the shape of lifelike men and women; that will play with our children and tend to our parents; that will know us, love us, grieve with us? And is that a future we even want?
(Photo: Sylvain Dumais, Robot Design and Conception by Daniel Finkelstein and Phillipe Savard)
The growing excitement over social robots has spawned dozens of varieties, many of them originating, unsurprisingly, in Japan. The country’s renowned capacity for technological innovation has resulted in a thriving robotics industry and a widespread comfort with the machines, which range from robo-babies (used to train prospective parents) to sexbots that moan and simulate orgasm.
While Japanese companies have experimented with a number of humanlike androids in hospitals and assisted-living homes, the country’s most famous social robot is Paro, a plush, sensor-packed baby seal that blinks, wriggles its head and tail, recognizes simple sentences and squeals adorably when petted. Now in its eighth generation and costing less than $6,000 (several thousand have been sold in Japan, Europe, the United States and Canada), Paro provides the benefits of animal-assisted therapy (lower stress levels, emotional support) without the demands of a live pet. Firms have also started to focus their artificial-intelligence (AI) research on more pragmatic goals. The Nara Institute of Science and Technology recently tested a two-armed machine capable of pulling a T-shirt over a user’s head and shoulders. Designed to help dress the elderly and disabled, the robot can quickly adjust its movements to accommodate tremors or a slumping posture.
American researchers aren’t far behind the Japanese. Sporting eyebrows, eye-socket cameras and a rubbery red grin, Bandit (conceived at the University of Southern California’s Viterbi School of Engineering) uses its articulated arms to direct physiotherapy exercises for recovering stroke victims. Bandit closely monitors patients’ progress and, if appropriate, cheers them on (“Are you working up a sweat yet?”). The Yale Social Robotics Laboratory is making waves with Nico, a robot able to observe itself and its environment and then integrate that data into an independent awareness of where it sits in three-dimensional space, a key skill for machines that might, one day, live in homes and work around the random movements of the people they serve.
Canada enjoys a pioneering history in robotics, with the Canadarm (which first entered space in 1981 and whose various iterations participated in some 90 shuttle missions) serving as the most iconic example of that legacy. Space tourism, NASA’s new Mars rover and drilling for water on the moon are among the many projects Canadian firms are working on; space-related robotics contracts brought in $127 million in 2011. Yet that reputation, according to Andrew Goldenberg, director of the Robotics and Automation Laboratory at the University of Toronto, has yet to catch up to other fields of robotics research. “Good companies in Canada tend toward space,” says Goldenberg, but he believes that’s about to change. As our population continues to age and with the global market for elder-care technology expected to hit $4 billion by 2015, social robots will become an attractive investment. “From a business point of view, this is the best possible way for a robotics company to get involved,” Goldenberg says.
While Brian may be the country’s most advanced prototype, he’s not alone. Goldenberg’s own company, Engineering Services Inc., is developing a mobile robot with assistive applications. Students at Simon Fraser University used cellphones to build Cally and Callo, two 16-centimetre-tall robots that dance, cry or throw tantrums in response to text messages and incoming calls. And, in perhaps the most Canadian project yet, students at the University of Manitoba created a toddler-size hockey-playing robot called Jennifer (named after the Olympic gold medallist Jennifer Botterill), able to skate, stickhandle and shoot pucks into a net.
As works in progress, many of these pseudo-humans are innocuous. But as the technology improves, robots like Brian, stranded in a purgatory between mindless servitude and self-awareness, will become less like themselves and more like us.
Two of Nejat’s students offer a demonstration. One, Derek McColl, sits at a table, facing the machine. On the table are 16 square cards, turned face down. Overhead, hanging like a desk lamp, a camera feeds Brian an image of the cards. The robot whirs to life, motors humming, his right arm jerking into a wave, his claw opening. “Hi. My name is Brian,” he says, jaw moving slightly. His voice is surprisingly smooth, more telephone operator than Stephen Hawking.
“I really enjoy playing the memory game,” Brian continues. “Let me show you what I can do. While playing a game, I can provide instructions. Please flip over a card.”
McColl turns a card: a basketball.
“I can provide help if you get stuck. For example, I can help you find the matching card if I have seen it during the game. Please flip over another card.”
McColl does as he is told. This time, the card pictures a book.
“When you do not get a match, I provide encouragement. Those are interesting cards, but they are not the same. Please flip back the cards and try again. I know you can do this.”
Along with providing cognitive stimulation, Brian is using the card game to keep track of a patient’s mental functions, storing vital clues about the onset or progress of dementia.
McColl turns toward me. “Part of it is monitoring the state of the person,” he says, “so...”
Brian interjects. “When you don’t pay attention to the game,” he says, his voice modulating down a tone or two, “I get sad.” His rubbery face elongates, and his eyelids flutter and lower. Cameras in Brian’s chest, which peek through his T-shirt, have noticed that McColl has turned away, triggering a facial-recognition algorithm that decides whether he is happy, angry or, in this case, distracted. Brian’s response? A guilt trip, which Nejat and her team have learned is often the most effective way to coax compliance.
Brian never stops watching and listening. He will treat different people in different ways, based on his contact with them. But he can himself be a tough guy to read. The human face has over 40 muscles; Nejat studied anatomy to better fine-tune the robot’s expressions. Yet Brian’s happiness seems scarcely different from his sadness, with his vocal tones doing much of the emotional legwork.
McColl and the other student, Geoffrey Louie, prepare the next demonstration. On the table they place a tray of food equipped with sensors beneath the plates to help Brian calculate how much his companion eats. On the butt of the fork are infrared LEDs that signal, to retrofitted Nintendo Wii controllers on Brian’s shoulder, when the robot’s companion lifts the utensil to his or her mouth. Louie sits across from Brian.
“Hi. My name is Brian,” the robot says. “You look very nice today. Please join me for lunch. Today’s menu includes some rice, chicken, apple slices and water. What a beautiful day it is today. I’m glad I get to spend some of it with you.”
“You, too, Brian,” Louie says, chuckling.
“That’s a good helping of food you have there. Please take a bite.”
Louie digs his fork into the rice.
“Knock, knock,” Brian says.
“Who’s there?” Louie asks.
“Let us in and we’ll tell you. Hee hee hee.” Brian smiles and lifts his claw to his mouth in an approximation of cheekiness. Louie turns away.
“Please have some water that is here on your tray,” Brian says, noticing Louie’s distracted body movement. The robot’s face shifts inscrutably as he points at the cup.
“He had a sad expression there,” McColl tells me.
The great challenge of social robotics is acclimatizing humans to these new caretakers. “This thing just creeps me out,” wrote one YouTube commenter of Nexi, a big-eyed, highly expressive robot developed at MIT. Academics have produced mixed findings on how readily users might embrace such machines. A 2012 study from the Georgia Institute of Technology asked 21 people between the ages of 65 and 93 to watch a video of an assistive robot. Afterwards, most said they wouldn’t mind a robot taking care of drudge work, like tidying up or reminding them to take medication, but drew the line at intimate tasks, like eating, dressing and social activities. Unfortunately, those happen to be the key areas in which social roboticists hope their machines can be of service.
Goldenberg admits his own review of studies on social robots has led to concerns about their widespread adoption. “The acceptance of these robots by the elderly, at whom this field is aiming, is not entirely positive,” he explains. “As long as people are given the freedom to be on their own, they do not prefer robots.”
Nejat, however, says her studies with Brian have produced much happier results. In 2011, Brian was placed in a geriatric-care facility in Toronto, where residents were invited to play the memory card game with him. According to Nejat, the charm offensive worked. The majority, whose ages ranged from 57 to 100, not only liked Brian’s ability to express different emotions and his humanlike voice and appearance, but also enjoyed hanging out with him. Some came back for repeat visits and even struck up conversations with Brian.
Last year, Nejat and her team studied how elderly users interacted with Brian at mealtime. Once again, some participants couldn’t help falling for him. They giggled at his jokes and asked if he was hungry. Nejat’s research suggests that humans are hard-wired to bestow agency on objects. Confronted with a robot that behaves in lifelike ways, our default response is affection-a willingness to suspend disbelief and engage with a machine as if it were a person.
Novelty probably plays a role here. It’s one thing to play a game or eat a meal with a robot once or twice; it’s quite another for a machine to take over your nurse’s day-to-day tasks. A good health-care worker doesn’t just dole out medication, recall appointments and read charts; sympathy and consideration are also part of the job. Even in Japan, despite a $93-million government investment in robotic home care, assistive robots have struggled to find solid commercial markets. One company, Tmsuk, discontinued a humanoid companion in 2011 due to lack of interest. “We want humans caring for us, not machines,” a user reportedly responded.
For such a young field, social robotics has already attracted harsh critics. In her recent book Alone Together, MIT professor Sherry Turkle suggests that assistive robots represent yet another encroachment of technology into our personal lives and offer only the veneer of care: an easy answer to complex 21st-century problems. “The idea of sociable robots suggests that we might navigate intimacy by skirting it,” Turkle writes. “People seem comforted by the belief that if we alienate or fail each other, robots will be there, programmed to provide simulations of love.”
For Turkle, the much-feared robot takeover is real, but it’s not quite the technological Armageddon we were led to expect by science fiction. In the end, we will simply surrender our most fundamental human characteristic: our empathy. Intertwined with the motorized chimeras enlisted to care for our vulnerable, we will further retreat from engagement with others. Social robotics could ultimately prove one of the most anti-social legacies we leave future generations.
Nejat has heard the Turklesque panic before but thinks it ignores the reality of the so-called “silver tsunami.” The Alzheimer Society of Canada reports that 747,000 Canadians suffer from dementia or some kind of cognitive impairment, a number that will more than double by 2031. How will our overburdened health-care system look after them? The dilemma, argues Nejat, is real, and we will need daring technological solutions. In any case, she says, Brian won’t replace health workers; he will just assist them. “He can’t do what a person does,” she explains. “He’s there to do repetitive, time-consuming tasks, freeing health-care professionals to do higher-level work.”
Nejat’s pragmatism is partly rooted in a basic axiom of robotics: the more human you try to make robots, the steeper the technological climb. Crafting a robot able to think with empathy, memory and insight is immensely difficult. Brian can respond to stimuli and chat with users, but that’s basically a sophisticated illusion. According to Andrea Kuszewski, a California-based behavioural psychologist who works in AI development, at issue is the difference between how humans and computers learn. “The challenge of programming AI is that you input all the correct answers and then you let it do its thing,” she says. “Humans are more of a trial-and-error type of machine. An AI will run into a situation it’s never seen before, and if there’s no answer in the database, it’s stumped. But we learn through context.”
However, the most stubborn obstacle to creating compassionate bipedal companions isn’t software but hardware. In recent years, advances in computer science have far outpaced progress in mechanical engineering. While it’s now possible to program a superintelligent computer like Watson, the IBM system that won Jeopardy! in 2011, we can’t yet turn Watson into a movable, responsive encyclopedia like C-3PO. “The electromechanical components used in making robots are not as evolved as the data processing,” Goldenberg says. “It’s difficult to make electromechanical devices that are sufficiently dextrous. Robots are heavy structures. The weight of motors and actuators is very high relative to their ability to lift or move.”
Goldenberg’s point becomes especially clear when, after the demonstration, Nejat introduces me to one of Brian’s brothers, a lean orange-and-grey machine with wheels, arms and vaguely humanoid features: a friendlier-looking Optimus Prime. It’s an H20 Wireless Networked Autonomous Humanoid Mobile Robot, manufactured by a company called Dr. Robot.
Nejat’s students are working on improving its autonomy. Replicating the dual miracles of being human (our minds and the grace with which we move through meatspace) has forced many roboticists to specialize, breaking down our abilities into their component parts and then building machines to mimic them.
“Brian is stationary,” Nejat says. “People come to him. This one,” she says, pointing to Optimus Prime, “is more mobile. It can move from room to room. You can have a team of robots in a nursing home, a hospital, doing different activities. That one,” she adds, pointing at Brian, “focuses on movement and facial expressions. This one is more about doing the tasks.”
Those robot-accomplished tasks have put us on the cusp of a new era. Or not. For the moment, the dream of a walking, sentient bot, operating on its own steam, will continue to lag behind the imagination, relegated to the realm of jetpacks and flying cars. Brian is a marvel, but he resembles a theme-park automaton. His impressive responsiveness is possible only within the strict confines of what the computer that serves as his brain has learned. While his offspring will outstrip him in every respect, they will likely continue to feign emotions they don’t feel. And yet, if Nejat’s research is any indication, we will end up showing as much resistance to those future robots as we have to video games, computers and cellphones.
In the lab, I can’t stop staring at Brian. I know he can never see me the way I see him. I know he can never be an emotional, empathetic creature, with a will and consciousness of his own. I know exactly where I stand with him. Yet, again and again, as he chides McColl and tells bad jokes, I scour Brian’s face for signs of life.