The 2024 Nobel Prize in Physics was awarded to John J. Hopfield of Princeton University and Geoffrey E. Hinton of the University of Toronto, Canada, "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Hinton is known as a "godfather" of artificial intelligence (AI), yet he has constantly reminded the public that AI may threaten human survival.
Hopfield and Hinton have been researching artificial neural networks since the 1980s. Hopfield proposed a new neural network model, the "Hopfield network," and Hinton went on to build deeply on neural network theory. In 1986, Hinton and two co-authors published a paper on learning representations by back-propagating errors, laying a cornerstone of deep-learning theory. The back-propagation algorithm was subsequently applied to multilayer neural networks, proving the method highly effective for machine learning; its best-known descendant is the deep-learning model behind ChatGPT.
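The "Hopfield network" named above can be sketched in a few lines of NumPy: patterns are stored via a Hebbian weight rule, and a corrupted input is pulled back to the nearest stored pattern by repeated sign updates. This is a minimal illustrative sketch of the model class, not code from Hopfield's paper; the bipolar encoding, network size, and demo pattern are assumptions made for the example.

```python
import numpy as np

# Minimal Hopfield-network sketch: bipolar units (+1/-1),
# Hebbian storage, asynchronous sign updates.

def train(patterns):
    """Store patterns with the Hebbian rule: W = (1/n) * sum_p p p^T, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, sweeps=5):
    """Repeatedly push each unit toward lower energy until the state settles."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one pattern, flip two of its bits, and let the network restore it.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1   # corrupt bit 0
noisy[3] *= -1   # corrupt bit 3
recovered = recall(W, noisy)
print(recovered)  # settles back to the stored pattern
```

Because stored patterns sit at minima of the network's energy function, a partially corrupted input rolls "downhill" to the complete memory, which is why such networks are described as associative memories.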
Hinton joined Google in 2013. For his contributions to deep learning, he came to be known, together with Yoshua Bengio and Yann LeCun, as one of the "three giants of deep learning," and in 2018 the three shared the Turing Award. Yet Hinton has kept issuing warnings about AI. He advanced the clear idea of steering AI systems so that their behavior serves the interests and intended goals of their designers, and is therefore regarded as a leading voice for "AI alignment."
In May 2023, Hinton left Google. Soon afterward he posted on the X (formerly Twitter) platform that he had left Google so that he could "openly talk about the dangers of AI." He has said he partly regrets the AI work he did: "I console myself with the normal excuse: if I hadn't done it, somebody else would have... It is hard to see how you can prevent the bad actors from using it (AI) for bad things."
Humans may fall behind
Hinton has spoken bluntly about AI's dangers: it is entirely possible for AI to pursue its assigned goals by means harmful to humans, and in the future different AIs may come to compete with one another. Hinton also rebuts the notion that a runaway AI can simply be stopped by pulling the plug: an AI that surpasses human intelligence can manipulate humans through language, and may try to persuade us not to flip the off switch. Facing AI that grows ever smarter through competition, humans may fall behind.
Hinton's remarks are not alarmist. There have already been many cases of AI fraud and abuse: fake nude images of Biden (the sitting US president) and images sexually exploiting underage girls have appeared online; AI software has been used to defraud individuals and institutions of their property; and AI has been used to manipulate and steer voters. Moreover, AI risks reach into every aspect of social and personal life, including politics, finance, the counterfeiting of all kinds of products, national security, and personal privacy and property.
Hence Hany Farid, associate dean of the School of Information at the University of California, Berkeley, has said: "When news, images, audio, video, everything can be faked, then in that world, nothing is real."
The international community has reached a consensus on the dangers of AI and on how people can use it safely. On November 24, 2021, the 41st session of the UNESCO General Conference adopted the Recommendation on the Ethics of Artificial Intelligence, the first global agreement on AI ethics in history, with all 193 member states represented at the conference.
It is also the first global framework that sets ethical requirements for the use of AI. The Recommendation holds that the development and application of AI must embody four core values: respecting, protecting and promoting human rights and human dignity; promoting the flourishing of the environment and ecosystems; ensuring diversity and inclusiveness; and building peaceful, just and interdependent human societies.
On AI's threats to humans, the Recommendation also sets out defensive principles: AI should be proportionate and do no harm, and should also ensure safety and security, fairness and non-discrimination, privacy and data protection, transparency and explainability, human oversight and determination, and mutual benefit.
This also means the international community has reached a consensus on what may and may not be done in AI research, development and application. AI work that follows the principles of proportionality (appropriateness), safety, fairness, transparency, privacy protection and justice may proceed; otherwise it may not.
Overall, from the standpoint of proportionality, that is, appropriateness and reasonableness, any research on or use of an AI product should be able to justify both the choice to use an AI system at all and the choice of a particular AI method. Under the UN recommendation on AI ethics, every company and individual developing an AI product should in the future have it reviewed and approved by an ethics committee, just as biomedical research and hospital treatment today must be reviewed and approved by an ethics committee whenever life and safety are involved. In particular, technologies like deepfakes that can cause public panic, damage and loss, or even disaster, must be avoided. Such review should be a mandatory procedure and step in AI development now and in the future.
Now that the creators of artificial neural networks and machine learning have won the Nobel Prize, Hinton's warning that AI may bring a crisis of human survival deserves even more attention; what is more, he believes that companies tend to care more about the profits AI generates than about its safety. Hinton's reminder has been echoed and supported by other professionals. Former OpenAI chief scientist Ilya Sutskever was one of Hinton's most accomplished students. In November 2023 a "boardroom coup" took place at OpenAI, in which the board briefly announced the firing of company CEO Sam Altman, known as the "father of ChatGPT." The main reason was that Altman and chief technology officer Mira Murati held different views about an AI product: Altman reportedly believed that a powerful AI model of the company, Q*, might threaten humanity and had to be suspended, while Murati believed it could be developed and brought to market as soon as possible. Sutskever sided with the view that AI safety must come first. In May of this year, Sutskever resigned from OpenAI, and a month later he announced the founding of a new company, SSI (Safe Superintelligence Inc.), saying he would devote himself to the pursuit of safe superintelligence.
In the future, any institution or individual researching AI or bringing AI products to market should be evaluated against the UN Recommendation on the Ethics of Artificial Intelligence, and should remain subject to supervision after the products reach society. Only in this way can AI benefit society to the greatest extent and harm it to the least.
The author is a scholar based in Beijing.