Amazon Says It Can Detect Fear on Your Face. You Scared?

The company updates its Rekognition suite with an algorithm that can tell if you’re afraid. Researchers say such emotion detectors don’t work very well.

Amazon announced a breakthrough from its AI experts Monday: Their algorithms can now read fear on your face, at a cost of $0.001 per image—or less if you process more than 1 million images.

The news sparked interest because Amazon is at the center of a political tussle over the accuracy and regulation of facial recognition. Amazon sells a facial-recognition service, part of a suite of image-analysis features called Rekognition, to customers that include police departments. Another Rekognition service tries to discern the gender of faces in photos. The company said Monday that the gender feature had been improved—apparently a response to research showing it was much less accurate for people with darker skin.

Rekognition has been assessing emotions in faces along a sliding scale for seven categories: “happy,” “sad,” “angry,” “surprised,” “disgusted,” “calm,” and “confused.” Fear, added Monday, is the eighth.
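For developers, those scores surface through Rekognition’s DetectFaces operation, which returns a confidence value for each emotion label on every face it finds. Here is a minimal sketch using the boto3 Python SDK; it assumes AWS credentials are already configured, and “face.jpg” is a hypothetical local image.

```python
import boto3

# Sketch: ask Rekognition for full face attributes, including emotion scores.
# Assumes AWS credentials are configured; "face.jpg" is a placeholder image.
client = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # "ALL" includes the Emotions attribute
    )

for face in response["FaceDetails"]:
    # Each face carries a list of emotion labels with confidence scores
    # (0-100) -- the sliding scale described above. "FEAR" is now among them.
    for emotion in face["Emotions"]:
        print(f"{emotion['Type']}: {emotion['Confidence']:.1f}")
```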

Amazon isn't the first company to offer developers access to algorithms that claim to detect emotions. Microsoft has had a similar offering since 2015; its service looks for a comparable list of emotions, adding “contempt” but omitting “confused.” Google has offered its own version since 2016.
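Microsoft’s service, as its Face API was documented at the time, returns a score for each emotion category from a REST endpoint. A hedged sketch in Python; the endpoint, subscription key, and image URL below are placeholders, not real values.

```python
import requests

# Sketch of Microsoft's Face API emotion attribute, as documented circa 2019.
# ENDPOINT, KEY, and the image URL are placeholders.
ENDPOINT = "https://YOUR_RESOURCE.cognitiveservices.azure.com"
KEY = "YOUR_SUBSCRIPTION_KEY"

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "emotion"},
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/face.jpg"},
)

for face in response.json():
    # One score per category: anger, contempt, disgust, fear,
    # happiness, neutral, sadness, and surprise.
    print(face["faceAttributes"]["emotion"])
```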

Amazon declined to detail how customers are using emotion recognition. Online documentation for Rekognition warns that the service “is not a determination of the person’s internal emotional state and should not be used in such a way.” But on its Rekognition website, Amazon, whose ecommerce business has squeezed brick-and-mortar retailers in part via deep data on consumers, suggests that stores could feed live images of shoppers into its face-analysis tools to track emotional and demographic trends at different retail locations over time.

Even as Amazon, Google, and Microsoft charge ahead with algorithms that intuit feelings, psychologists warn that trying to read emotions from facial expressions is fundamentally misguided.

A study published in February by UC Berkeley researchers found that accurately reading someone’s emotions in a video requires attending not just to their face but also to their body language and surroundings. Software offered by tech companies generally analyzes each face in isolation.

Another study, published last month, took more direct and devastating aim at emotion-detection software. Psychologists reviewed more than 1,000 published findings about facial expressions and emotion and concluded there was no evidence that facial expressions reliably communicate emotion on their own, undermining the core assumption of emotion-detection software.

“It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts,” the authors wrote.

An online demo of Google's cloud-image analysis service shows how its AI software attempts to identify objects in photos and read facial expressions to discern emotions.


Rumman Chowdhury, who leads work on responsible AI at Accenture, says the situation is an example of the industry not pausing to think through the limitations of its technology. Even if software could read faces accurately, the idea of collapsing the richness of human feeling into a handful of categories for all people and contexts doesn’t make much sense, she says. But hype about the power of AI has led many people inside and outside the tech industry to be overconfident about what computers can do.

“To most programmers, as long as the output is something reasonable and the accuracy looks OK on some measure, it’s considered to be fine,” she says. Customers told that AI is more powerful than ever are unlikely to check the foundation of the claims, Chowdhury says.

As with facial recognition, easier access to emotion-recognition algorithms seems to be causing the technology to spread more widely, including into law enforcement.

In July, Oxygen Forensics, which sells software that the FBI and others use to extract data from smartphones, added facial recognition and emotion detection to its product. Lee Reiber, Oxygen’s chief operating officer, says the features were added to help investigators sort through the hundreds or thousands of images that often turn up during digital evidence gathering.

Officers can now search for a specific face in an evidence trove, or cluster images of the same person together. They can also filter faces by race or age group, and emotions such as “joy” and “anger.” Reiber says visual tools can help investigators do their work more quickly, even if they are less than perfect, and that the investigative process means leads are always checked multiple ways. “I want to take as many pieces as possible and put them together to paint a picture,” he says.

The number of commercial emotion-detection programs is growing, but they don’t appear to be very widely used. Oxygen Forensics added facial recognition and emotion detection using software from Rank One, a startup that has contracts with law enforcement. But when WIRED contacted Rank One CEO Brendan Klare, he was unaware that Oxygen Forensics had implemented emotion detection in addition to facial recognition.

Klare says the emotion detector has so far not proved popular. “The market’s pretty limited at the moment, and it’s not clear to us if it will ever pay off as a feature,” he says. “It’s not something that is that big right now.”

The changing focus of emotion-recognition startup Affectiva illustrates the challenge. The company emerged in 2009 from an MIT project trying to help people with autism understand people around them. It won funding from investors that include advertising giant WPP and launched products to help marketers measure audience reaction to commercials and other content. More recently, the company has focused on improving car safety, for example, through technology to spot when drivers are sleepy or angry. Affectiva announced $26 million in funding earlier this year, with auto parts manufacturer Aptiv as lead investor. The company declined to comment.

At least one big tech company appears to have decided that emotion recognition isn’t worth the effort. IBM competes with Amazon and Microsoft in cloud computing and facial recognition but does not offer emotion detection. An IBM spokesperson said the company does not plan to offer such a service.

Google does not offer facial recognition, a decision it says resulted from an internal ethical review raising concerns that the technology could be used to infringe privacy. But the company’s AI cloud services will detect and analyze faces in photos, estimating age, gender, and four emotions: joy, sorrow, anger, and surprise.
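Those four emotions come back from Cloud Vision’s face-detection feature as likelihood ratings rather than numeric scores. A minimal sketch using the google-cloud-vision Python client; it assumes application default credentials are set up, and the image URL is a placeholder.

```python
from google.cloud import vision

# Sketch: Cloud Vision face detection returns a likelihood rating for each
# of four emotions. Assumes application default credentials are configured;
# the image URL is a placeholder.
client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "https://example.com/face.jpg"

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Each field is a Likelihood enum, VERY_UNLIKELY through VERY_LIKELY.
    print("joy:", face.joy_likelihood.name)
    print("sorrow:", face.sorrow_likelihood.name)
    print("anger:", face.anger_likelihood.name)
    print("surprise:", face.surprise_likelihood.name)
```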

Google says its emotion-detection features passed through the same review process that nixed facial recognition. The company has also decided that it’s OK to apply the technology to personal photos of its users.

Searching for “happiness,” “surprise,” or “anger” in Google’s Photos app will surface images with appropriate facial expressions. It will also look for “fear.”

