Chinese official media have published an article warning the public.

A news weekly published by Xinhua News Agency reported on Sunday (July 2) that police in northern China recently disclosed a fraud case involving AI "face-swapping". Notably, the case represents not only a new type of online fraud but also fraud with a political dimension.

According to the report, the scammer "swapped his face" to impersonate a leading cadre known to the victim during a WeChat video chat, claiming that a friend needed to pay a project deposit and asking to route the money through the victim's company's corporate account.

In the video call, the scammer's swapped face looked natural and the voice sounded realistic, so the victim let down his guard. Only after later phoning the official to confirm did he realize he had been deceived.

The report also mentioned another case: a local cadre in northern China recently discovered that someone was using his name to add his relatives and friends on WeChat.

The cadre said: "After adding my relatives and friends, the scammer also video-called them. The calls lasted only a few seconds, but my friends felt the voice and image looked somewhat like me."

The cadre believes the scammer's next step would be to ask for money, so in recent days he has posted several messages to his WeChat Moments to warn everyone.

Industry insiders said that, given enough image and audio material, criminals can use AI face-swapping and voice-synthesis software to forge a virtual persona and commit fraud. Because of the nature of their work, the facial and voice biometrics of some leading cadres are relatively easy to obtain, so the potential harm from AI-based imitation is correspondingly greater.

They also reminded all sectors of society to raise their awareness of fraud prevention, and urged the relevant departments to step up law enforcement against AI-"assisted" fraud cases in accordance with the law and to seriously pursue the legal responsibility of those involved.

Pei Zhiyong, a security expert at Qi Anxin, said that professional-grade AI "deepfakes" currently require powerful hardware in a laboratory environment. This means "AI face-swap" fraud is unlikely to pose a systemic risk in the near term, but it is still necessary to plan ahead and guard against such attacks.