A case in which a man in China was defrauded of 4.3 million yuan (RMB, about S$810,000) through artificial intelligence (AI) face- and voice-changing technology came to light at the end of last month, drawing attention to the hidden dangers of AI technology and exposing the fact that the relevant legal protections remain in a "vacuum zone".
Experts interviewed believe that the rise of AI has upended the public's assumption that seeing is believing, and suggest that the authorities could establish a trust mechanism at the source of the data to tackle the problem of fakery at its root.
Police in many parts of China have issued warnings on their official social media accounts over the past two weeks, saying that criminals are using AI technology to generate fake content and carry out fraud.
Police in Inner Mongolia disclosed an AI fraud case in May this year: using AI deepfake technology, scammers "changed their face and voice" to impersonate an acquaintance of the victim on a video call, defrauding him of 4.3 million yuan.
AI deepfake technology began to spread a few years ago, but it has advanced rapidly over the past year; combined with other generative AI technologies, such as voice synthesizers and large language models, the fakes it produces are increasingly lifelike.
Fu Chengchen, a senior criminal lawyer at Hubei Today Law Firm, estimates that about 90 per cent of the AI fraud currently on the market involves using AI to impersonate victims' relatives and friends. In addition, some people use the technology to fabricate salacious rumours, generating obscene images and videos for blackmail and extortion, or to spread such content.
He said such activities can constitute criminal offences: "As long as AI is used to commit a crime, it will always fall under one of the 483 offences in Chinese criminal law."
Hong Zhengjun, a senior research fellow at the Centre of Excellence for National Security at the S. Rajaratnam School of International Studies, pointed out in an interview that impersonation scams existed long before AI, but AI technology makes authenticity far harder to judge. People are emotional creatures, he said, inclined to believe what they can see and hear; AI's deep-synthesis technology has upended the assumption that seeing is believing, making such scams more likely to succeed.
To regulate the development of AI, the Chinese government introduced rules in January this year requiring services that offer AI content-generation functions to label generated content conspicuously, so as to avoid public confusion or misunderstanding. The Cyberspace Administration of China also released a draft regulation on generative AI in April this year, which among other things prohibits the illegal acquisition, disclosure, and use of personal information, privacy, and trade secrets.
Fu Chengchen believes that existing Chinese legislation on AI-generated content is still in a "vacuum zone", clearly lagging behind the technology's rapid development. The law is always playing catch-up with the new world, he said, and legislative updates will help minimise AI's side effects and keep the technology developing sustainably.
The fast-changing nature of AI technology and its capacity for self-iteration pose many challenges for regulators. Some commentators believe AI itself could be used to test whether content was generated by AI.
Zhu Feida, an associate professor at the School of Computing and Information Systems of Singapore Management University, judged that such a solution cannot work in the long run: the deep-learning and adversarial techniques behind AI's "generative adversarial networks" mean that AI keeps getting better at producing fakes indistinguishable from the real thing, locking detection and generation in an arms race in which each advance on one side is outdone by the other.
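As a rough illustration of the adversarial dynamic Zhu Feida describes, below is a minimal sketch of a generative adversarial network in Python (assuming PyTorch is available; the toy data and all names are illustrative, not drawn from the article). A "forger" network and a "detector" network are trained against each other, so every improvement in detection directly drives an improvement in generation:

```python
import torch
import torch.nn as nn

# Toy task: learn to mimic samples from a 1-D Gaussian ("authentic" content).
def real_data(n):
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # forger
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detector

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1. Train the detector to tell real samples from generated ones.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()  # freeze the forger for this step
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # 2. Train the forger to fool the (now slightly better) detector.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))  # forger wants a "real" verdict
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```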
Rather than monitoring for and screening out false content, Zhu Feida suggested establishing a trust mechanism at the source of AI-generated content. The entity behind the content is ultimately a person, he explained; if such a mechanism can bind a person's identity to the data he produces, it will suppress the incentive to generate false content at its root and allow law enforcement to trace counterfeit content to its source more accurately.
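One way such identity-to-data binding might work in practice is content signing. The sketch below (assuming the Python `cryptography` package; the step of registering a key against a verified real-world identity is a hypothetical platform responsibility not covered here) signs content with the creator's private key, so any copy can later be verified and traced:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator side: the key pair stands in for a verified personal identity.
# (How the public key gets bound to a real person is assumed to be handled
# elsewhere, e.g. by a platform or a government authority.)
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"bytes of a video, image, or article..."
signature = private_key.sign(content)  # bind the identity to the data

# Verifier side (platform, reader, or law enforcement): check the origin
# before trusting the content; any tampering invalidates the signature.
try:
    public_key.verify(signature, content)
    print("Valid: content traceable to the registered key holder.")
except InvalidSignature:
    print("Invalid: content was altered or its origin cannot be verified.")
```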
Zhu Feida said a trust mechanism would raise the cost of forgery, but it would also mean that supervision of China's cyberspace could become stricter. "This is really a matter of perspective," he said. "Some people take the view that if you don't believe the content you post is fake, you needn't worry about the government binding your identity and your data together."
Hong Zhengjun suggested that the authorities impose harsher penalties on criminals who abuse the technology. He said: "AI is like a knife. Its existence and use are not in themselves illegal, but those who use it to commit crimes should be punished more severely."
As for how the public should guard against AI fraud, the experts interviewed believe people should strengthen their critical thinking, protect their personal information, and improve their understanding of AI technology. More importantly, on receiving any request involving money, they should verify it through multiple channels to ensure the source of the information is reliable.
Hong Zhengjun pointed out that the most practical thing people can do is to report immediately when they discover that their own or their relatives' and friends' social media accounts have been stolen or impersonated.
He said: "Don't think that others have been posing for you. Every fake account is the way that scammers may sin. Your friends are impersonated, which means that people around you may become the goal of scammers."