Emotion-detection in AI should be regulated, AI Now says

Credit to Author: Lisa Vaas | Date: Mon, 16 Dec 2019 10:42:34 +0000

What has AI done to you today?

Perhaps it’s making a holy mess of interpreting your physical attributes? Has technology that supposedly reads your micro-expressions, tone of voice, or the way you walk to determine your inner emotional state been used to figure out whether you’d be a good hire? Whether you’re in pain and should get medication? Whether you’re paying attention in class?

According to Professor Kate Crawford, a co-founder of New York University’s AI Now Institute, AI is increasingly being used to do all of those things, in spite of the field having been built on “markedly shaky foundations”.

AI Now is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies. It’s been publishing reports on such issues for a number of years.

In its most recent annual report, the institute calls for legislation to regulate emotion detection – or, to use its more formal name, “affect recognition”:

Regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities…. Given the contested scientific foundations of affect recognition technology – a subclass of facial recognition that claims to detect things such as personality, emotions, mental health, and other interior states – it should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school.

How do you even know if you’re being sized up by AI?

Whether we realize it or not – and it’s likely “not” – AI is being used to manage and control workers and to decide which job candidates get assessed, how they’re ranked, and whether or not they’re hired. AI has, in fact, been a hot technology in the hiring market for years.

Without comprehensive legislation, there’s no transparency into how the technology is used or into the research and algorithms that go into these products – and that’s in spite of the fact that AI isn’t some purely mathematical, even-handed set of algorithms.

Rather, it’s been shown to be biased against people of color and women, and biased in favor of people who look like the engineers who train the software. And, to its credit, 14 months ago Amazon scrubbed plans for an AI recruiting engine after its researchers found out that the tool didn’t like women.

According to AI Now, the affect recognition sector of AI is growing like mad: at this point, it might already be worth as much as $20 billion (£15.3 billion). But as Crawford told the BBC, it’s based on junk science:

At the same time as these technologies are being rolled out, large numbers of studies are showing that there is …no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks.

She suggests that part of the problem is that some AI software makers may be basing their products on dusty research: she pointed to the work of Paul Ekman, a psychologist who proposed in the 1960s that there are only six basic emotions, expressed via facial expressions.

Ekman’s work has been disproved by subsequent studies, Crawford said. In fact, there’s far greater variability in facial expressions, both in how many emotions there are and in how people express them.

It changes across cultures, across situations, and even across a single day.

Companies are selling emotion-detection technologies – particularly to law enforcement – in spite of the fact that they often don’t work. One example: AI Now pointed to research from ProPublica that found that schools, prisons, banks, and hospitals have installed microphones purporting to detect stress and aggression before violence erupts.

The technology isn’t very reliable: it’s been shown to interpret rough, higher-pitched sounds, such as coughing, as aggression.

Are any regulatory bodies paying attention to this mess?

Kudos to Illinois. It’s often an early mover on privacy matters, as evidenced by the Illinois Biometric Information Privacy Act (BIPA) – legislation that protects people from unwarranted facial recognition or storage of their faceprints without consent.

Unsurprisingly, Illinois is the only state that’s passed legislation pushing back against the secrecy of AI systems, according to AI Now. The Artificial Intelligence Video Interview Act, scheduled to go into effect in January 2020, mandates that employers notify job candidates when artificial intelligence is used in video interviewing, explain how the AI system works and what characteristics it uses to evaluate an applicant’s fitness for the position, and obtain the applicant’s consent to be evaluated by AI before the video interview starts. Employers must also limit access to the videos and destroy all copies within 30 days of an applicant’s request.

Don’t throw out the baby

AI Now mentioned a number of emotion-detection technology companies that are cause for concern. But at least one of them, HireVue, defended itself, telling Reuters that its hiring technology has actually helped to reduce human bias. Said spokeswoman Kim Paone:

Many job candidates have benefited from HireVue’s technology to help remove the very significant human bias in the existing hiring process.

Emotion detection is also being used by some call centers to determine when callers are getting upset or to detect fraud based on voice characteristics.

Meanwhile, there are those working with emotion detection who agree that it needs regulating… with nuance, and with sensitivity to the good the technology can do. One such is Emteq – a firm working to integrate emotion-detecting technology into virtual-reality headsets to help people recovering from facial paralysis brought on by, for example, strokes or car accidents.

The BBC quotes founder Charles Nduka:

One needs to understand the context in which the emotional expression is being made. For example, a person could be frowning their brow not because they are angry but because they are concentrating or the sun is shining brightly and they are trying to shield their eyes. Context is key, and this is what you can’t get just from looking at computer vision mapping of the face.

Yes, let’s regulate it, he said. But please, lawmakers, don’t hamper the work we’re doing with emotion detection in medicine.

If things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater.
