January 7, 1997


Laugh and Your Computer
Will Laugh With You, Someday

By DANIEL GOLEMAN

Illustration: Christine Thompson

The phrase "user friendly" is about to take on a more literal meaning: Computer scientists are creating machines that can recognize their users' most intimate moods and respond like an empathetic friend.

To be sure, the idea of a machine cognizant of that human Achilles' heel, emotion, can conjure more sinister images -- like HAL, the savvy, menacing computer in "2001," whose fear that he would be unplugged led him to kill all but one crew member on a space mission. Yet as Jan. 12, 1997 -- the birth date of HAL in Arthur C. Clarke's novel -- approaches, scientists have already laid pieces of the technical groundwork for such machines, a development welcomed by some and alarming to others.

While the specter of robotic Frankenstein monsters captures the popular imagination, computer scientists offer benign visions of a more humanlike technology, animating gentle cousins of Oz's Tin Man. They foresee a time when computers in automobiles will sense when drivers are getting too drowsy or impatient and will respond by delivering wake-up messages or by producing soothing music. Empathic computer tutors will notice when their pupils are getting frustrated or overwhelmed, and so offer encouraging words and make lessons easier. Wearable computers could warn people with chronic conditions like severe asthma or high blood pressure when they are becoming too overwrought.

Bits and pieces of this emotionally attuned cyber-future already exist. Computer scientists at the Georgia Institute of Technology in Atlanta have developed a computer system that can recognize simple facial expressions of emotions like surprise and sadness.

At Northwestern University in Evanston, Ill., and at Carnegie-Mellon University in Pittsburgh, engineers have designed programs that converse with people and respond appropriately to their emotions. And at the Massachusetts Institute of Technology, in Cambridge, where much of the work in what is being called "affective computing" is under way, a computer worn around the waist monitors its wearer's every shift of mood.

No one claims that these more sensitive machines will come close to replicating full human emotion. And some skeptics question whether the work to mimic emotion in machines is worth the effort.

Pat Billingsley, an expert in human-machine interfaces at the Merritt Group in Williamsburg, Mass., said: "People don't want a computer that cares about their mood so much as one that makes what they're trying to do easier. You want a very predictable system, one you can rely on to behave the same way time after time -- you wouldn't want your computer to be too emotional."

One impetus for building these more sensitive computers is widespread frustration with the doltishness of present models. "Today's computers are emotionally impaired," said Dr. Roz Picard, a computer scientist at M.I.T. who is leading the effort there to bring emotion to the all-too-rational universe of computing. "They blather on and on with pages of output whether or not you care. If they could recognize emotions like interest, pleasure and distress, and respond accordingly, they'd be more like an intelligent, friendly companion."

Picard and her associates at M.I.T.'s Media Lab are developing prototypes of such sensitive machines that are not just portable but wearable. "A computer that monitors your emotions might be worn on your shoulders, waist, in your shoes, anywhere," Picard said. "It could sense from your muscle tension or the lightness of your step how you're feeling and alert you if, say, you're getting too stressed. Or share that information with people you wanted to know, like your doctor or your spouse."
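As a rough illustration of that idea, here is a minimal Python sketch; the sensor readings, weights and alert threshold are all invented for illustration, not taken from the M.I.T. prototypes:

```python
# A hypothetical sketch of the kind of wearable stress monitor Picard
# describes. The sensor names, weights and threshold are invented;
# they are not from the M.I.T. prototypes.

def stress_score(muscle_tension, step_lightness):
    """Combine two made-up readings, each scaled 0 to 1: tense muscles
    and a heavy step both push the score up."""
    return 0.6 * muscle_tension + 0.4 * (1.0 - step_lightness)

def check_wearer(muscle_tension, step_lightness, threshold=0.7):
    score = stress_score(muscle_tension, step_lightness)
    if score > threshold:
        return f"Alert: stress at {score:.2f} -- you may be getting too stressed."
    return f"Stress at {score:.2f}: nothing to report."

print(check_wearer(muscle_tension=0.9, step_lightness=0.2))
```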

One immediate step toward warmer-seeming machines is giving them more emotionally realistic voices. While computerized speech has come across at best as a monotonous drone, computer users may be cheered by progress in designing automated voices with the capacity for more realistic emotional inflection.

Such nuance "adds flavor and meaning to what we say," Picard said, adding: "With these abilities computers can communicate in a more natural, pleasant way. Monotonous voice-reminder systems could vary their voices, for example, to flag urgent information."

While warmer voices signal a small start, much of the work deals with more sophisticated aspects of emotional astuteness.

Perhaps the most progress has come in creating machines that can read human emotion, a technical challenge similar to having them recognize handwritten words or speech.

Emotions like fear, sadness and anger each announce themselves through a unique signature of changes in facial muscles, vocal inflection, physiological arousal and other such cues. Building on techniques of pattern recognition already used for computer comprehension of words and images, Dr. Irfan Essa, a computer scientist at Georgia Tech, has constructed a computer system that can read people's emotions from changes in their facial expression.

The system uses a special camera that converts changes in facial muscles into digitized renderings of energy patterns; the computer compares each pattern to that of the person with a neutral expression. In pilot tests with people making deliberate expressions of emotions like anger, fear and surprise, the computer read the emotions with up to 98 percent accuracy.
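Essa's system is far more sophisticated, but the baseline-and-template idea can be caricatured in a few lines of Python; the "energy patterns" below are invented numbers, and a nearest-template match stands in for the real pattern-recognition machinery:

```python
import numpy as np

# A toy illustration of the baseline-plus-template idea, not Essa's
# actual system: each expression is a vector of facial "energy"
# changes relative to the person's neutral face, and an unknown
# expression is labeled by its nearest stored template.

TEMPLATES = {                      # invented energy patterns
    "anger":    np.array([0.9, 0.1, 0.7]),
    "fear":     np.array([0.4, 0.8, 0.6]),
    "surprise": np.array([0.2, 0.9, 0.1]),
}

def classify(frame, neutral):
    """Subtract the neutral baseline, then pick the closest template."""
    delta = frame - neutral
    return min(TEMPLATES, key=lambda e: np.linalg.norm(delta - TEMPLATES[e]))

neutral = np.array([0.1, 0.1, 0.1])
frame   = np.array([0.3, 1.0, 0.2])    # brows raised, mouth open
print(classify(frame, neutral))        # -> "surprise"
```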

But just as computers that comprehend spoken words require that the speaker enunciate clearly one word at a time, the emotion-reading computer cannot yet detect the rapid, free flow of spontaneous feelings as mirrored in the face.

That, Essa said, is the next step, a more daunting technical challenge: "What we've done so far is just the very first step in building a machine that can read emotions."

A more elusive trick is for a computer to know how to respond once it has recognized an emotion.

A prototype program for a computer that can do this has been developed at the Institute for the Learning Sciences at Northwestern University, under the direction of Dr. Andrew Ortony, a computer scientist.

"The question was, could you get a computer to reason about people's emotions, like Star Trek's Mr. Spock, who can infer anger without being able to experience it?" Ortony said . Working with Dr. Paul O'Rorke, a computer scientist at the University of California at Irvine, Ortony designed a computer program called "AbMal," which has a rational understanding of emotions. AbMal, for example, can realize that gloating occurs when a person is happy about someone else's distress, or that hope and fear arise because people anticipate success or failure.

Such emotionally smart programs are essential to the next step, constructing machines that react like another person to an individual's emotions -- in other words, give a semblance of empathy. One approach to creating empathic machines is being taken by Clark Elliott, a computer scientist at DePaul University in Chicago. Dr. Elliott's computer program, "The Affective Reasoner," can talk with people about their emotional experiences. The program, which Dr. Elliott hopes will evolve into uses like friendly computer tutors, can comprehend simple sentences and understand the emotions they imply or describe. Then it responds like an understanding friend.

The program has 70 agents, or characters: cartoon-like faces on a computer screen that can morph to express different emotions, like turning red and shaking to show extreme anger.

"I can say to an agent in the program, 'Sam, I'm worried about my test,' and Sam will recognize what 'worried' means," Elliott said. "Sam might respond, 'Clark, you're my friend. I'm sorry you're worried. I hope your test goes well.' Right now Sam's emotional acuity is more advanced than his language ability: he doesn't know what a test is, but he knows how to respond when you're worried."

In a test of the Affective Reasoner's ability to express different emotions, Elliott had people listen to the computer voice and then to an actor, each giving emotional nuance like anger or remorse to ambiguous or nonsensical sentences like "I picked up katapia in Timbuktu."

"People could correctly guess the emotion being expressed by the actor around 50 percent of the time, while they guessed correctly with the computer about 70 percent of the time," Elliott said.

Virtual reality games are another arena where work has advanced, in accord with an animator's maxim that the portrayal of emotions in cartoon characters adds the illusion of life. Dr. Joseph Bates, a computer scientist at Carnegie-Mellon University, has created a virtual reality game in which the human participant interacts with "woggles," characters that have emotional reactions to what goes on.

"Video games and virtual reality games so far are emotional deserts," Bates said. "The next challenge is to give characters emotional reactions. So the woggles have unique personalities. For instance, one woggle hates fights -- he gets sad and tries to stop them. That makes them more lifelike."

Beyond the appeal of more cuddly or alluring machines, computer scientists see another reason to bring feelings to computing. Paradoxically, a bit of emotion might make computers smarter, their intelligence less artificial and more like that of a person.

Because computers have no intrinsic sense of what within a mass of data is more important and what is irrelevant, they can waste huge amounts of time looking at every bit of information.

For that reason, experts in artificial intelligence are grappling with how to grant a computer a human-like ability to realize what information matters.

And that brings them back to emotions.

"It's become clear that something is missing in the purely cognitive, just-the-facts, problem-solving approach to modeling intelligence in a computer," said Dr. Fonya Montalvo, a computer scientist in Nahant, Mass. "From vision research I was doing at M.I.T., it was clear that people get a sense of what's important when they see a scene, while computers don't. They go through every bit of information without knowing what's salient. In humans it's our emotions that flag for attention what is important."

If computers had something akin to emotion, "they could be more efficient thinking machines," Montalvo said, adding, "Artificial intelligence has largely ignored the crucial role of emotions in their models of the human mind."
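Montalvo's point can be sketched in Python with invented numbers: give each input an emotional salience score, and a program can attend to the few items that matter instead of scanning everything with equal care:

```python
# A sketch of emotion as a salience filter, with invented scores:
# instead of examining every input equally, the program processes
# only the most emotionally charged items.

inputs = [
    ("shadow on the wall",       0.10),
    ("ball rolling into street", 0.90),   # alarming: high salience
    ("parked car",               0.20),
    ("child chasing the ball",   0.95),
]

def attend(items, budget=2):
    """Process only the `budget` most salient items, most urgent first."""
    return [name for name, salience
            in sorted(items, key=lambda x: -x[1])[:budget]]

print(attend(inputs))   # -> ['child chasing the ball', 'ball rolling into street']
```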

As early as the 1960's, one of the first to suggest that intelligence in machines would need to mimic emotions was Dr. Herbert Simon, a Nobel laureate and pioneer in artificial intelligence. But Simon's insight has been largely ignored in the field for the last 30 years, save for a few lone voices. One of those who has taken up Simon's call is Dr. Aaron Sloman, a philosopher at the School of Computing Science at the University of Birmingham in England.

The best design for robots with intelligence, Sloman said, includes "a mechanism that can deploy resources as changing circumstances demand," adding, "The parallel in the human or animal mind is emotions; a machine as smart as a human would probably need something similar."

Sloman added, "If there is an intelligent robot crossing a dangerous bridge, it needs a state like anxiety that will put aside other, irrelevant concerns and focus on the danger at hand. Then, after it has crossed safely, it can allow its attention to roam more freely, a state something like relief."
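A minimal Python sketch of that control idea, with the states and background tasks invented for illustration:

```python
# A minimal sketch of the control mechanism Sloman describes (states
# and tasks invented): an "anxiety" state suppresses background goals
# until the danger has passed, then attention is released again.

class Robot:
    def __init__(self):
        self.state = "calm"
        self.suspended = []
        self.background_tasks = ["map surroundings", "chat", "plan recharge"]

    def enter_danger(self, hazard):
        self.state = "anxious"
        self.suspended = self.background_tasks   # put other concerns aside
        self.background_tasks = []
        return f"Anxious: focusing only on {hazard}."

    def danger_passed(self):
        self.state = "relieved"
        self.background_tasks = self.suspended   # attention roams again
        return "Relieved: resuming background tasks."

bot = Robot()
print(bot.enter_danger("dangerous bridge"))
print(bot.danger_passed())
```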

But Sloman said that while he and his colleagues had begun constructing artificial intelligence programs with emotion-like features, "the practical work is far behind the theory -- what we've been able to construct so far is very primitive."

As computer scientists work to develop the technology that will allow computers to read and express emotions, some are debating whether the results will be more like the charming and affable C-3PO in George Lucas's "Star Wars" or like HAL, who, as Picard puts it, "can not only pass the Turing Test," to impersonate human intelligence seamlessly, "but also kill the person who gives it."

One way to ensure that machines with heart stay benign, Picard proposes in an essay in "Hal's Legacy," a book published this month by The M.I.T. Press, would be to require that the design of emotionally astute computers place primary importance on preserving human life. But, she adds, the need for such safeguards lies far in the future.

More to the point, Picard said: "The question is, do we really want to build an intelligent, friendly machine? I'm not sure people are ready for computers with emotions."



Copyright 1997 The New York Times Company