She called for a moratorium on artificial intelligence systems that could put human rights at risk - at least until stronger safeguards are in place internationally.
"We cannot afford to continue playing catch-up regarding AI - allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact," she said in a statement.
The remarks came alongside the publication of a report by the U.N. Human Rights Council analyzing the human rights risks posed by a range of AI-powered technologies - including profiling, automated decision-making and machine learning. The consequences of unfettered proliferation of such technologies could be "catastrophic," Bachelet said.
The report also pointed out that data sets used by AI can have historical racial and ethnic biases embedded, which can perpetuate, or enhance, discrimination.
Many AI tools seek to predict outcomes, assess risk and provide insights into patterns of behavior on an individual or societal scale. The report raised warnings of a "digital welfare dystopia" in which data-matching could automate decisions about welfare benefits entitlements, loan access or home visits from child-care services - with human rights implications.
Technologies used by law enforcement, including national security and border management officials, are particularly fraught. AI systems can mine criminal arrest records, crime statistics, social media posts and travel records to profile people and identify sites of increased criminal or even terrorist activity, triggering criminal interventions, "even though AI assessments by themselves should not be seen as a basis for reasonable suspicion," the report argues.
Bachelet did not call for an outright ban on facial recognition technology - using human features including face, fingerprint, iris and voice to identify individuals - but urged a moratorium on the use of real-time remote biometric recognition until rights provisions can be agreed upon.
The report did not call out any countries by name, but AI technologies in some places around the world have raised human rights flags in recent years, according to experts.
China has come under sharp criticism for mass surveillance, using AI technology with few checks - particularly in the Xinjiang region, where the Chinese Communist Party has for decades systematically sought to oppress and assimilate members of the Uyghur ethnic minority group.
Chinese tech giant Huawei tested AI systems, using facial recognition technology, that would send automated "Uyghur alarms" to police once a camera detected a member of the minority group, The Post reported last year. Huawei responded that the language used to describe the capability had been "completely unacceptable," yet the company had advertised ethnicity-tracking efforts.
Technology can allow authorities to systematically identify and track individuals in public spaces, affecting the right to freedom of expression, of peaceful assembly and of movement, Bachelet said.
Fear of such surveillance affected protesters in Myanmar this year, Reuters reported. In March, Human Rights Watch criticized the Myanmar military junta's use of a public camera system, provided by Huawei, that used facial and license plate recognition to alert the government to individuals on a "wanted list."
Human Rights Watch last year denounced a system in Buenos Aires that published personal data, including photos, of child suspects with open arrest warrants. The information was used by facial recognition software operating in some city subway stations, the organization said.
Bachelet's statement echoed growing global concerns. Portland, Oregon, last September passed a broad ban on facial recognition technology, including uses by local police. The European Commission in April proposed a ban on the use of AI for tracking individuals and ranking their behavior. Amnesty International launched the "Ban the Scan" initiative to ban the use of facial recognition by New York City government agencies.
"The power of AI to serve people is undeniable, but so is AI's ability to feed human rights violations at an enormous scale with virtually no visibility," Bachelet said, calling for greater transparency, systematic assessment and monitoring of the effects of AI. "Action is needed now to put human rights guardrails on the use of AI, for the good of all of us."
Published: September 16, 2021