In its hunt for possible insurance fraud, Lemonade Insurance is facing an avalanche of criticism on social media for bragging about the artificial intelligence system it uses to reject claims.
The insurer Lemonade Insurance set out to show off its digital process and artificial intelligence systems on Twitter, but was met with a bitter response. The company has since tried to defend itself against accusations of discrimination and bias.
“When a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal signals that traditional insurers can’t,” the company stated in the Twitter thread that sparked the outrage.
In that thread, the company claims that its system records and analyzes 100 times more data than traditional insurance companies, without explaining what data it means or how and when it is collected. With all this data, the artificial intelligence would produce a risk estimate for each case in its search for fraud.
Like These? pic.twitter.com/TloOtdxHWR
– I like growth stocks (@GrowthLike) May 26, 2021
Following the criticism, Lemonade Insurance has deleted the thread, although it remains available on the Web Archive and some users have shared screenshots. In the messages, the company boasts of the financial gains it has achieved with this system, explaining that it previously paid out more than it earned and that, with this AI, it has now been able to considerably reduce its loss ratio by rejecting a large share of its clients’ claims.
“It’s incredibly insensitive to celebrate how your business saves money by not paying claims (in some cases to people who are probably having the worst day of their lives),” Caitlin Seeley George, campaign manager for the digital rights group Fight for the Future, told Recode.
Jon Callas, director of technology projects at the Electronic Frontier Foundation, described these claims as “pseudoscience” and “phrenology” to ZDNet. He denies that AI is ready to do what Lemonade Insurance asks of it and stresses that other companies investing millions in similar systems have ended up with biases based on gender or skin color.
The insurer has apologized on social media and denies that its AI’s findings are applied automatically. “Our systems do not evaluate claims based on background, gender, appearance, skin tone, disability, or any physical characteristics (nor do we evaluate any of these by proxy),” it said.
So, we deleted this awful thread which caused more confusion than anything else. TL;DR: We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims (phrenology / physiognomy) (1/4)
– Lemonade (@Lemonade_Inc) May 26, 2021
Still, statements made a year ago by Shai Wininger, co-founder and COO of Lemonade, now resonate louder than ever: “At Lemonade, one million customers translates into billions of data points, fueling our AI at ever-increasing speed.”
Artificial intelligence experts emphasize that this technology is not yet ready to detect the emotional or mental state of a person going through a traumatic moment such as a house fire or a car accident. Navin Thadani, CEO of digital accessibility company Evinced, acknowledges that “AI is meant to do things better, faster, more efficiently, with fewer errors than human interaction, but what it lacks is human judgment, understanding, and consideration of factors beyond what it is programmed to evaluate.”