UN Human Rights Chief Calls for Freeze on Some Artificial Intelligence Systems

    15 September 2021

    The United Nations human rights chief is warning that the use of artificial intelligence technology presents a threat to human rights.

    The U.N. High Commissioner for Human Rights, Michelle Bachelet, called for a freeze on the use of artificial intelligence, or AI, technology. That includes face-scanning systems that track people in public places.

    She said countries should ban AI computer programs that do not observe international human rights law.

    FILE - U.N. High Commissioner for Human Rights Michelle Bachelet attends a session of the Human Rights Council at the United Nations in Geneva, Switzerland, September 13, 2021.

    Applications that should be banned include government "social scoring" systems that judge people based on their behavior. She also said some AI-based tools that organize people into groups based on their ethnicity or sex should not be permitted.

    AI-based technologies can be a force for good, Bachelet said. But she added that they can also have harmful effects if human rights are not considered.

    Her comments came with the announcement of a U.N. report that examines how countries and businesses have used AI systems. It warns that AI systems affect people's lives and livelihoods if measures to prevent discrimination and other harms are not in place.

    The human rights chief did not call for a complete ban of facial recognition technology. But she said governments should halt the scanning of people's faces in real time until they can show the technology is accurate and meets privacy and data protection standards.

    Countries were not named in the report. The report warns of tools that try to find out a person's emotional and mental condition by looking at their facial expressions or body movements. It says such systems can give incorrect results and lack scientific support.

    "The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests...risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial," the report says.

    The report's recommendations repeat the thinking of many political leaders in Western democracies. They want to realize gains from AI's economic and societal possibilities. But they worry about the dependability of tools that can track and keep information on individuals and make recommendations about jobs, loans and education.

    I'm Mario Ritter, Jr.

    Jamey Keaten and Matt O'Brien reported this story for The Associated Press. Mario Ritter Jr. adapted it for VOA Learning English. Ashley Thompson was the editor.


    Words in This Story

    artificial intelligence –n. an area of computer science that deals with giving machines the ability to seem like they have human intelligence

    scanning –n. the use of machines to copy information about a physical object and store it for study and record-keeping

    track –v. to follow and observe, especially in an effort to find evidence

    application –n. a computer program that carries out a specific job

    accurate –adj. free from mistakes

    standard –n. a level of quality or of being correct that is acceptable or desirable

    authorities –n. (pl.) people who have power to make decisions and enforce rules and laws

    undermine –v. to make someone or something weaker or less effective, often in a secret or slow way

    societal –adj. related to society

    We want to hear from you. Write to us in the Comments section.