Microsoft retires controversial AI that can guess your emotions
Tech giant warns that ‘new guardrails’ are required for artificial intelligence
Microsoft has announced that it will halt sales of an artificial intelligence service that can predict a person’s age, gender and even emotions.
The tech giant cited ethical concerns surrounding the facial recognition technology, which it claimed could subject people to “stereotyping, discrimination, or unfair denial of services”.
In a blog post published on Tuesday, Microsoft outlined the measures it would take to ensure its Face API is developed and used responsibly.
“To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup,” wrote Sarah Bird, a product manager at Microsoft’s Azure AI.
“Detection of these attributes will no longer be available to new customers beginning 21 June, 2022, and existing customers have until 30 June, 2023, to discontinue use of these attributes before they are retired.”
Microsoft’s Face API was used by companies like Uber to verify that the driver using the app matched the account on file. However, unionised drivers in the UK called for it to be removed after it failed to recognise legitimate drivers.
The technology also raised fears about potential misuse in other settings, such as firms using it to monitor applicants during job interviews.
Despite retiring the product for customers, Microsoft will continue to use the controversial technology within at least one of its products. An app for people with visual impairments called Seeing AI will still make use of the machine vision capabilities.
Microsoft also announced that it would be making updates to its ‘Responsible AI Standard’ – an internal playbook that guides its development of AI products – in order to mitigate the “socio-technical risks” posed by the technology.
The update process involved consultations with researchers, engineers, policy experts and anthropologists to help understand which safeguards can prevent discrimination.
“We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” wrote Natasha Crampton, Microsoft’s chief responsible AI officer, in a separate blog post.
“We believe that industry, academia, civil society, and government need to collaborate to advance the state-of-the-art and learn from one another... Better, more equitable futures will require new guardrails for AI.”