Microsoft is phasing out public access to a number of AI-powered facial analysis tools, including one that claims to identify a subject's emotion from videos and pictures.
Such "emotion recognition" tools have been criticized by experts. They argue not only that facial expressions thought to be universal actually differ across populations, but also that it is unscientific to equate external displays of emotion with internal feelings.
"Companies can say whatever they want, but the data are clear," Lisa Feldman Barrett, a professor of psychology at Northeastern University who conducted a review into the subject of AI-powered emotion recognition, told The Verge in 2019. "They can detect a scowl, but that's not the same thing as detecting anger."
The decision is part of a larger overhaul of Microsoft's AI ethics policies. The company's updated Responsible AI Standards (first outlined in 2019) emphasize accountability for who uses its services and greater human oversight of where these tools are applied.
In practical terms, this means Microsoft will limit access to some features of its facial recognition services (known as Azure Face) and remove others entirely. Users will have to apply to use Azure Face for facial identification, for example, telling Microsoft exactly how and where they'll be deploying its systems. Some use cases with less harmful potential (like automatically blurring faces in images and videos) will remain open-access.
In addition to removing public access to its emotion recognition tool, Microsoft is also retiring Azure Face's ability to identify "attributes such as gender, age, smile, facial hair, hair, and makeup."
"Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of 'emotions,' the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability," wrote Microsoft's chief responsible AI officer, Natasha Crampton, in a blog post announcing the news.
Microsoft says that it will stop offering these features to new customers from today, June 21st, while existing customers will have their access revoked on June 30th, 2023.
However, while Microsoft is retiring public access to these features, it will continue using them in at least one of its own products: an app named Seeing AI that uses machine vision to describe the world for people with visual impairments.
In a blog post, Microsoft's principal group product manager for Azure AI, Sarah Bird, said that tools such as emotion recognition "can be valuable when used for a set of controlled accessibility scenarios." It's not clear if these tools will be used in any other Microsoft products.
Microsoft is also introducing similar restrictions to its Custom Neural Voice feature, which lets customers create AI voices based on recordings of real people (sometimes known as an audio deepfake).
The tool "has exciting potential in education, accessibility, and entertainment," writes Bird, but she notes that it "is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners." Microsoft says that in the future, it will limit access to the feature to "managed customers and partners" and "ensure the active participation of the speaker when creating a synthetic voice."