Google Says Its New PaliGemma 2 AI Models Can Identify Emotions. Should We Be Worried? (techcrunch.com)
"Google says its new AI model family has a curious feature: the ability to 'identify' emotions," writes TechCrunch. And that's raising some concerns...
Announced on Thursday, the PaliGemma 2 family of models can analyze images, enabling the AI to generate captions and answer questions about people it "sees" in photos. "PaliGemma 2 generates detailed, contextually relevant captions for images," Google wrote in a blog post shared with TechCrunch, "going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene." Emotion recognition doesn't work out of the box, and PaliGemma 2 has to be fine-tuned for the purpose. Nonetheless, experts TechCrunch spoke with were alarmed at the prospect of an openly available emotion detector...
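For readers who want to poke at the captioning behavior themselves, the checkpoints are published on Hugging Face and load through the standard transformers classes. Below is a minimal sketch; the checkpoint id and the "caption en" prompt convention are assumptions taken from the public model cards rather than anything specific to this article, and, as noted above, nothing here produces emotion labels unless the model is fine-tuned for that task.

```python
# Minimal PaliGemma 2 captioning sketch (assumed checkpoint id and prompt
# format; check the model cards on Hugging Face for exact conventions).
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed: one of the published checkpoints
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Any test image; this URL is just a placeholder.
url = "https://example.com/photo.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "caption en" is the captioning prompt used in the PaliGemma documentation.
inputs = processor(text="caption en", images=image, return_tensors="pt").to(model.device)
prompt_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=50, do_sample=False)

# Strip the prompt tokens and decode only the newly generated caption.
print(processor.decode(out[0][prompt_len:], skip_special_tokens=True))
```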
"Emotion detection isn't possible in the general case, because people experience emotion in complex ways," Mike Cook, a research fellow at Queen Mary University specializing in AI, told TechCrunch. "Of course, we do think we can tell what other people are feeling by looking at them, and lots of people over the years have tried, too, like spy agencies or marketing companies. I'm sure it's absolutely possible to detect some generic signifiers in some cases, but it's not something we can ever fully 'solve.'" The unsurprising consequence is that emotion-detecting systems tend to be unreliable and biased by the assumptions of their designers... "Interpreting emotions is quite a subjective matter that extends beyond use of visual aids and is heavily embedded within a personal and cultural context," said Heidy Khlaaf, chief AI scientist at the AI Now Institute, a nonprofit that studies the societal implications of artificial intelligence. "AI aside, research has shown that we cannot infer emotions from facial features alone...."
The biggest apprehension around open models like PaliGemma 2, which is available from a number of hosts, including AI dev platform Hugging Face, is that they'll be abused or misused, which could lead to real-world harm. "If this so-called emotional identification is built on pseudoscientific presumptions, there are significant implications in how this capability may be used to further — and falsely — discriminate against marginalized groups such as in law enforcement, human resourcing, border governance, and so on," Khlaaf said.
Those concerns were echoed by Sandra Wachter, a professor in data ethics and AI at the Oxford Internet Institute, who told TechCrunch that with models like this, "I can think of myriad potential issues... that can lead to a dystopian future, where your emotions determine if you get the job, a loan, and if you're admitted to uni."
"Emotion detection isn't possible in the general case, because people experience emotion in complex ways," Mike Cook, a research fellow at Queen Mary University specializing in AI, told TechCrunch. "Of course, we do think we can tell what other people are feeling by looking at them, and lots of people over the years have tried, too, like spy agencies or marketing companies. I'm sure it's absolutely possible to detect some generic signifiers in some cases, but it's not something we can ever fully 'solve.'" The unsurprising consequence is that emotion-detecting systems tend to be unreliable and biased by the assumptions of their designers... "Interpreting emotions is quite a subjective matter that extends beyond use of visual aids and is heavily embedded within a personal and cultural context," said Heidy Khlaaf, chief AI scientist at the AI Now Institute, a nonprofit that studies the societal implications of artificial intelligence. "AI aside, research has shown that we cannot infer emotions from facial features alone...."
The biggest apprehension around open models like PaliGemma 2, which is available from a number of hosts, including AI dev platform Hugging Face, is that they'll be abused or misused, which could lead to real-world harm. "If this so-called emotional identification is built on pseudoscientific presumptions, there are significant implications in how this capability may be used to further — and falsely — discriminate against marginalized groups such as in law enforcement, human resourcing, border governance, and so on," Khlaaf said.
Those concerrns were echoed by a professor in data ethics and AI at the Oxford Internet Institute, Sandra Wachter, who gave this quote to TechCrunch. With models like this, "I can think of myriad potential issues... that can lead to a dystopian future, where your emotions determine if you get the job, a loan, and if you're admitted to uni."
Should we be worried? (Score:5, Insightful)
If you are, it will know.
Perhaps, but (Score:2)
It can't read my, can't read my
No, it can't read my poker face.
feed it (Score:5, Funny)
Not worried, but... (Score:2)
...somewhere between annoyed and amused.
While groups like DeepMind with AlphaFold are doing useful work, the rest of the AI world seems committed to producing useless, annoying crap generators.
No (Score:2)
Well, depends. If you are an aggressive fuckup, this thing may start lying to you. But maybe that is exactly what you want.
About mounting job losses due to automation? Yes (Score:2)
Re:About mounting job losses due to automation? Ye (Score:4, Funny)
Hahahaha, no indeed. But the capacity for hallucination of the not-smart part of the human race is impressive.
Startup Opportunity (Score:3)
Re: (Score:2)
Not necessarily doable with this. Statistical classifiers have a tendency to react to bizarre clues, even when they get it right.
Re: (Score:2)
Not necessarily doable with this. Statistical classifiers have a tendency to react to bizarre clues, even when they get it right.
Yea, but if you can get a few ten million in funding...
Re: (Score:2)
Re: (Score:2)
If it worked well enough, it could be used to train people to practice deception without the need for a human trainer. Useful for actors, undercover agents, sales people, swindlers .... or maybe everyone, all the time.
No, humans are a critical part. AI can't spend the millions of funding like a human can, and a human is needed to scam, err sell, investors on the idea over an expensive meal.
But what will it make (Score:4, Funny)
of my resting bitch face?
Re: (Score:2)
I hate this! (Score:2)
Re: I hate this! (Score:2)
There will be more, rest assured.
My cat (Score:2)
loves to attack just when you think he's calm and friendly. I'd love a version of this thingy that can read my cat.
believe in science (Score:1)
If built on scientific presumptions, it will only discriminate against the far-right and other undesirables on the wrong side of history.
I'm on the Autism Spectrum (Score:1)
Re: (Score:2)
The flip side is: what emotions will it attribute to neurodivergents?
Re: (Score:2)
"I'm not even sure how accurate it was. I took it to the trains store; it said everyone was sad."
Tell me how I feel .. (Score:1)
In No Way Should Anyone Trust It (Score:2)