UN official wants moratorium on AI tools that violate human rights
Michelle Bachelet, the U.N.’s High Commissioner for Human Rights, called for a moratorium on the sale and use of artificial intelligence (AI) systems that she said can pose a risk to human rights.
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said.
The U.N. commissioner argued that AI now reaches “almost every corner of our physical and mental lives and even emotional states,” affecting who receives certain public services or gets to be a candidate for employment.
“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” she said.
In a report accompanying Bachelet’s remarks, the UN Human Rights Office highlighted concerns surrounding the use of AI in law enforcement. The organization pointed to how AI is sometimes used in forecasting tools to profile certain individuals.
“Rights affected include the rights to privacy, to a fair trial, to freedom from arbitrary arrest and detention and the right to life,” the report read.
The report also noted how AI is used to identify people in public, remarking on instances of “erroneous identification” that can occur and on how the technology can be used to profile people based on their “ethnicity, race, national origin, gender and other characteristics.”
“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real. This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks,” Bachelet added.
In the report, Bachelet’s office recommended that countries recognize the importance of preserving human rights in the development of AI systems, ban AI that cannot comply with international human rights law, ensure transparency in the use of AI, commit to combating discrimination in its use, and adopt policies that prevent or limit the technology’s negative impacts on human rights.
The report also called for a moratorium on the use of “biometric recognition technologies in public spaces” until the recommendations mentioned can be implemented.
Facial recognition technology has long been criticized for misidentifying women and people of color, leading to false arrests.
In June, the Government Accountability Office reported that six federal agencies had used facial recognition technology to identify protesters attending racial justice protests last year. The Bureau of Alcohol, Tobacco, Firearms and Explosives, the U.S. Capitol Police, the FBI, the U.S. Marshals Service, the U.S. Park Police and the U.S. Postal Inspection Service are all believed to have used this technology.
Following the protests last year, many tech companies, including Microsoft, IBM and Amazon, suspended the sale of facial recognition technology to law enforcement.