Musk, Experts Urge AI Pause, Citing ‘Risks to Society’


    30 March 2023

    Hundreds of artificial intelligence experts and industry leaders are urging a suspension of the development of some AI technology. They say that the most powerful AI technology could present extreme risks to humanity and social order.

    The group released an open letter about the issue this week. It referenced the recent release of a fourth version of the popular AI program ChatGPT.

    "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than" ChatGPT-4," the letter says.

    FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. (AP Photo/Michael Dwyer, File)

    The product comes from Microsoft-backed developer OpenAI. It can hold human-like conversations and perform creative tasks.

    "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter continues.

    The non-profit group Future of Life Institute released the letter signed by about a thousand AI scientists, experts and industry members, including Elon Musk.

    The Musk Foundation is the main financial backer of Future of Life. The institute also receives money from the London-based group Founders Pledge and the Silicon Valley Community Foundation.

    Elon Musk is one of the co-founders of OpenAI. His electric car company, Tesla, uses AI in models with self-driving systems.

    Musk has been critical of efforts to regulate the self-driving system. But now, he hopes an agency will be created to make sure the development of AI serves the public.

    "It is ... deeply hypocritical for Elon Musk to sign on given how hard Tesla has fought against" AI regulation in his self-driving cars, said James Grimmelmann. He is a professor of digital and information law at Cornell University.

    Last month, Tesla recalled more than 362,000 of its U.S. vehicles. The company had to update software after U.S. regulators said the driver assistance system could cause crashes. At the time, Musk tweeted that using the word "recall" for a software update is "just flat wrong!"

    However, Grimmelmann did not oppose the idea of a temporary pause. "A pause is a good idea," he said, "but the letter is vague and doesn't take the regulatory problems seriously."

    The letter suggests shared safety measures could be developed during the proposed suspension. It also calls on developers to work with policymakers on governance.

    The letter noted dangers especially linked to "human-competitive intelligence."

    The writers ask, "Should we develop nonhuman minds that might eventually outnumber, outsmart...and replace us?" They also say that such decisions should not be made by "unelected tech leaders."

    Yoshua Bengio, often described as one of the "godfathers of AI," was also a signer. Stuart Russell, a lead researcher in the field, put his name on the letter as well. Business leaders who signed include Stability AI CEO Emad Mostaque.

    The concerns come as U.S. lawmakers begin to question ChatGPT's effect on national security and education. Europol, the European Union's police agency, recently warned about the possible misuse of the system in phishing attempts, disinformation and crime.

    Gary Marcus is a professor at New York University who signed the letter. He said development should slow until more is learned. "The letter isn't perfect, but the spirit is right: we need to slow down until we better understand" the technology, he said.

    Since its release last year, ChatGPT has led other companies like Google to create similar AI systems.

    Suresh Venkatasubramanian is a professor at Brown University and former assistant director in the White House Office of Science and Technology Policy.

    He said that much of the power to create these systems is in the hands of a few large companies.

    "That's how these models are, they're hard to build and they're hard to democratize."

    Dan Novak adapted this story for VOA Learning English based on reporting by Reuters.

    ________________________________________________________________

    Words in This Story

    confident — adj. having a feeling or belief that you can do something well or succeed at something

    positive — adj. good or useful

    regulate — v. to make rules or laws that control something

    hypocritical — adj. behaving in a way that does not agree with the beliefs one claims to have about what is right

    vague — adj. not clear in meaning

    recall — v. to ask buyers to return a product so that a problem with it can be fixed

    phishing — n. the practice of sending emails pretending to be from real companies in order to get individuals to reveal personal information