Elon Musk, along with other technologists, urges a pause on training AI systems

Image credit: NTB/Carina Johansen via REUTERS

Elon Musk is one of more than 1,000 tech leaders and researchers urging a pause on advanced AI development because of the “profound risks to society and humanity” it presents.

Per The New York Times, the open letter, published through the non-profit Future of Life Institute, expresses grave concern about “AI systems with human-competitive intelligence.” Development of advanced AI systems has accelerated rapidly over the past few years, and the letter warns that the labs behind these projects are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict or reliably control.”

Former presidential candidate Andrew Yang, Apple co-founder Steve Wozniak, and Bulletin of the Atomic Scientists president Rachel Bronson are among the notable signatories of the open letter. Concern is already mounting over how AI may affect the arts and music, and the technology poses a significant threat to some job sectors in the United States. Notably absent from the letter was the signature of OpenAI CEO Sam Altman.

“Should we let machines flood our information channels with propaganda and untruth?” reads the letter. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The letter calls for a pause of at least six months on the training of all systems “more powerful than GPT-4,” the OpenAI-developed large language model introduced earlier this month. “This pause should be public and verifiable, and include all key actors,” reads the letter. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”