Faced with a technology running wild like an unbridled horse, the European Parliament passed the draft Artificial Intelligence Act in 2023, requiring manufacturers to prevent AI from generating illegal content, being abused, or spreading falsehoods. The draft set the tone for AI governance worldwide and won applause from publishers and creators, but the technology industry argues it will stifle innovation.
Generative AI, after "reading" a large number of brain MRI scans, produced an algorithm that successfully decoded the radio stories a subject was listening to at the time of the scan. Building on this, the AI went further, inferring from the brain images the stories unfolding in the subject's mind. For Alexander Huth, the University of Texas at Austin professor who led the study, more than ten years of hard work had finally borne fruit. It was a major step forward for neuroscience, one expected to help paralyzed people communicate again. But soon afterwards he began to worry that a technology able to read the brain had intruded into the most private realm of human life.
As early as 20 years ago, paralyzed patients with implanted microelectrodes could already move a computer mouse and control mechanical limbs by thought. Ten years ago, scientists succeeded in implanting false memories in the brains of laboratory mice. In recent years, neuroscience has leapt ahead with the help of AI. Yet more and more scientists worry that this medical technology can not only "read minds" but might also be used to manipulate thoughts, or allow police to interfere with a suspect's thinking during interrogation.
The machine's autonomous decisions remain a black box
Huth's subjects knew full well that they were taking part in an experiment; his instruments are expensive and complex, demanding large investments of money and time, hardly a field most people will ever encounter. AI, however, has already seeped into daily life. ChatGPT, built on a large language model (LLM), appeared at the end of 2022; it can generate emails and official documents in a way that mimics human thinking and sentiment, and some content farms use it to churn out news. Text-to-image and text-to-video technology followed close behind: the Sora software OpenAI released a few days ago can generate animations and realistic film clips from short prompts.
Scientists have studied AI for more than half a century, but the recent debate has grown hotter and the warnings louder. What stuns people is not the novelty of the technology itself; after all, our generation has already been through the baptism of computers, the internet, smartphones and social media. It is the speed of AI's development and the depth of its influence that leave even the scientists inside the field unable to ignore the potential threat.
In 1972, Geoffrey Hinton, then still a graduate student, used neural network theory to build mathematical models for analyzing data, laying the foundations of deep learning. At the time, few believed that the whimsical ideas of the man later hailed as the "father of artificial intelligence" would ever bear fruit. Only a few years ago he still thought AI would need at least 30 to 50 years to become smarter than humans; current developments suggest that moment may arrive much sooner.
In order to speak freely, Hinton resigned from his post as a vice president at Google last year. His immediate concern is that AI is fueling fake news, fake images and fake videos, and he worries the technology will cost many people their jobs. In the long run, he fears that an AI able to write its own algorithms could slip beyond human control, and even threaten human survival.
The stock market frenzy driven by AI has drowned out Hinton's warnings, and ever more powerful systems keep being trained. Since the Industrial Revolution, humans have pursued automation that saves labor and raises efficiency, but AI's great leap pushes machines toward "autonomy": no longer merely recognizing and analyzing on command, they can make decisions on their own, decisions that are a black box humans may not be able to understand.
From digital Leninism to surveillance capitalism
Messages posted online, conversations with voice assistants, shared photos, navigation routes, surveillance cameras everywhere: the AI black box devours this information day and night with an insatiable appetite. Much of the raw data is used without the original authors' consent; often they do not even know their information has been captured, used and reused. AI's rapid progress rests on the big data accumulated on the internet over the past 20 years. The internet began as something free, open and democratic, then drifted toward closure and concentration, letting tech giants reap a fortune from users who, for the sake of convenience, unknowingly hand over massive amounts of data for free.
While American players such as Facebook, Google, Amazon and Microsoft kept growing, the Chinese government saw that digital technology's peculiar concentration of power bears not only on economic benefit but also on social control and national security. It therefore built the Great Firewall to block information from abroad while harnessing technology to drive economic development. During the pandemic, on top of nearly all-pervasive epidemic-control measures, digital technology monitored the people's every move, creating what the German sinologist Sebastian Heilmann calls "digital Leninism."
Faced with this runaway technology, the European Parliament passed the draft Artificial Intelligence Act in 2023. It requires manufacturers to prevent AI from generating illegal content, being abused or spreading falsehoods, and to disclose the sources of the data used to train their models. It also demands safeguards against discrimination and restricts technologies classed as high risk, such as tools that sway voters' opinions and facial recognition. The draft set the tone for AI governance worldwide and won applause from publishers and creators, but the technology industry argues it will stifle innovation.
In the era of the Industrial Revolution, capitalists hoped to maximize profits through free market forces, opposing rules that restricted child labor, protected workers or reduced pollution. Today's tech capitalists have a new reason to resist government oversight: Western regulations that tie their hands, they argue, will hand the advantage to unconstrained Chinese competitors. Yet Shoshana Zuboff, a social psychologist at Harvard University, points out that after the terrorist attacks of September 11, 2001, a U.S. government desperate for intelligence reached a tacit understanding with technology companies, letting them scoop up information of every kind and shaping more than 20 years of "surveillance capitalism." The Chinese government, for its part, uses information technology to shore up authoritarian rule, crossing the red line of privacy and wielding a surveillance capitalism that commands both information and cognition.
The EU act's exclusion of military use is like a mantis trying to stop a chariot
Shaped by the imagery of science fiction and film, and by the very term AI, people's imagination of this technology tends to be anthropomorphic: robots, whether doting virtual lovers or killers who never blink. A more fitting term for AI would be "learning machine"; however smart and efficient it becomes, it is still a machine in a metal box. And some of the earliest motives for building such machines were military. Virtual voice assistants grew out of research at the U.S. Department of Defense, and wartime research into tanks that need no soldier at the wheel helped pave the way for the birth of the electric-car maker Tesla.
The European Artificial Intelligence Act, grounded in the EU's single market, is like a mantis trying to stop a chariot: it leaves out the military uses the EU would rather not touch, even though they carry the highest risk. Using AI to identify enemies and targets is already common on the battlefield, though many countries stipulate that the order to attack must be made by a soldier, not by an algorithm. Israel's Iron Dome defense system, however, can already decide on its own to launch a missile, and holds even broader authority. China understands AI's importance on the battlefield, especially after watching Russia's weaknesses in the war in Ukraine; it is actively developing unmanned combat vehicles and pushing real-time sensors and neural networks for identification, tracking, analysis and attack, aimed at the enemy's logistics and command networks.
Hinton left the United States in the 1980s to teach in Canada because he firmly opposed using AI on the battlefield, the so-called "robot soldiers," and refused funding from the Pentagon. "It's hard to see how you can prevent bad actors from using it for bad things," he said. The thornier ethical problem is that a technology this powerful could bring disaster even in the hands of the so-called good.
The author is a Taiwanese journalist living in Italy and holds a doctorate in sociology.