An open letter issued by the non-profit Future of Life Institute has been signed by more than 1,000 technology and artificial intelligence heavyweights, including Elon Musk, chief of US electric vehicle giant Tesla and social media giant Twitter, Apple co-founder Steve Wozniak, and many other researchers.
(New York, Reuters) A group of artificial intelligence experts and industry executives has issued an open letter calling on all artificial intelligence laboratories to immediately pause, for at least six months, the training of systems more powerful than GPT-4, on the grounds that such systems pose potential risks to society and humanity.
The letter was released by the Future of Life Institute and signed by more than 1,000 technology and artificial intelligence heavyweights, including Elon Musk, chief of US electric vehicle giant Tesla and social media giant Twitter, Apple co-founder Steve Wozniak, Stability AI chief executive Emad Mostaque, and many researchers from the artificial intelligence company DeepMind.
The letter says that extensive research shows artificial intelligence systems with human-competitive intelligence may pose profound risks to society and humanity. In recent months, artificial intelligence laboratories have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control, and no corresponding planning or management has been put in place.
The letter says that artificial intelligence systems are now becoming competitive with humans at general tasks, and that humanity must ask itself: "Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber us, outsmart us, and make us obsolete and replace us?"
The letter calls on all artificial intelligence laboratories to pause for at least six months the training of systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and impose a moratorium.
The letter demands that the development of advanced artificial intelligence be suspended until shared safety protocols are formulated, implemented, and audited by independent experts. "Powerful artificial intelligence systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Europol: ChatGPT could be abused for cybercrime
Europol, the European Union's law enforcement agency, warned on Monday (March 27) that the artificial intelligence chatbot ChatGPT could be abused for phishing, cybercrime, and the spread of disinformation. ChatGPT was developed by the artificial intelligence research company OpenAI; since its launch last year, competitors have accelerated the development of similar language processing tools.
As of Reuters' publication deadline, OpenAI founder Sam Altman had not signed the open letter, and OpenAI did not respond to Reuters' request for comment.
Gary Marcus, emeritus professor at New York University and one of the open letter's signatories, said: "This letter is not perfect, but its spirit is right: we must slow down until we understand the consequences more clearly ... They (artificial intelligence systems) could cause serious harm ... The big companies are increasingly secretive about what they are doing, which makes it difficult for society to defend against whatever harm may materialize."