As Deepfake Fraud Permeates China, Authorities Target Political Challenges Posed By AI

Chinese authorities are cracking down on political and fraud cases driven by deepfakes, created with face- and voice-changing software that tricks targets into believing they are video chatting with a loved one or another trusted person.

How good are the deepfakes? Good enough to trick an executive at a Fuzhou tech company in Fujian province who almost lost $600,000 to a person he thought was a friend claiming to need a quick cash infusion.

The entire transaction took less than 10 minutes from the first contact via the phone app WeChat to police stopping the online bank transfer, which happened after the target called authorities upon learning his real friend had never requested the loan, according to Sina Technology.

Despite public outcry over such AI-driven fraud, some experts say Beijing appears more concerned about the political challenges that deepfakes may pose, as shown by newly implemented regulations on “deep synthesis” management that outlaw activities that “endanger national security and interests and damage the national image.”

The rapid development of artificial intelligence has carried deepfake technology from research labs to mass-market entertainment applications in just a few years.

In a 2017 demonstration of the risks, a video created by University of Washington researchers showed then-U.S. President Barack Obama appearing to say things he had never said.

Two years later, Chinese smartphone apps like Zao let users swap their faces with celebrities so they could appear as if they were in a movie. Zao was removed from app stores in 2019 and Avatarify, another popular Chinese face-swapping app, was also banned in 2021, likely for violation of privacy and portrait rights, according to Chinese media.

Pavel Goldman-Kalaydin, head of artificial intelligence and machine learning at SumSub, a Berlin-based global antifraud company, explained how easy it is with a personal computer or smartphone to make a video in which a person appears to say things he or she never would.

“To create a deepfake, a fraudster uses a real person’s document, taking a photo of it and turning it into a 3D persona,” he said. “The problem is that the technology, it is becoming more and more democratized. Many people can use it. … They can create many deepfakes, and they try to bypass these checks that we try to enforce.”

Subbarao Kambhampati, professor at the School of Computing and Augmented Intelligence at Arizona State University, said in a telephone interview he was surprised by the apparent shift from voice cloning to deepfake video calling by scammers in China. He compared that to a rise in voice-cloning phone scams in the U.S.

“Audio alone, you’re more easily fooled, but audio plus video, it would be little harder to fool you. But apparently they’re able to do it,” Kambhampati said, adding that it is harder to make a video that appears trustworthy.

“Subconsciously we look at people’s faces … and realize that they’re not exactly behaving the way we normally see them behave in terms of their facial expressions.”

Experts say that AI fraud will become more sophisticated.

“We don’t expect the problem to go away. The biggest solution … is education, let people understand the days of trusting your ears and eyes are over, and you need to keep that in the back of your mind,” Kambhampati said.

The Internet Society of China issued a warning in May, calling on the public to be more vigilant as AI face-swapping and voice-changing scams and slander become more common.

The Wall Street Journal reported on June 4 that local governments across China have begun to crack down on false information generated by artificial intelligence chatbots. Much of the false content designed as clickbait is similar to authentic material on topics that have already attracted public attention.

To regulate “deep synthesis” content, China’s administrative measures implemented on January 10 require service providers to “conspicuously mark” AI-generated content that “may cause public confusion or misidentification” so that users can tell authentic media content from deepfakes.

China’s practice of requiring technology platforms to “watermark” deepfake content has been widely discussed internationally.

Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace, noted that deepfake regulations place the onus on the companies that develop and operate these technologies.

“If enforced well, the regulations could make it harder for criminals to get their hands on these AI tools,” he said in an email to VOA Mandarin. “It could throw up some hurdles to this kind of fraud.”

But he also said that much depends on how Beijing implements the regulations and whether bad actors can obtain AI tools outside China.

“So, it’s not a problem with the technology,” said SumSub’s Goldman-Kalaydin. “It is always a problem with the usage of the technology. So, you can regulate the usage, but not the technology.”

James Lewis, senior vice president of the strategic technologies program at the Center for Strategic and International Studies in Washington, told VOA Mandarin, “Chinese law needs to be modernized for changes in technology, and I know the Chinese are thinking about that. So, the cybercrime laws you have will probably catch things like deepfakes. What will be hard to handle is the volume and the sophistication of the new products, but I know the Chinese government is very worried about fraud and looking for ways to get control of it.”

Others suggest that in regulating AI, political stability is a bigger concern for the Chinese government.

“I think they have a stronger incentive to work on the political threats than they do for fraud,” said Bill Drexel, an associate fellow for the Technology and National Security Program at the Center for a New American Security.

In May, the hashtag #AIFraudEruptingAcrossChina trended on China’s social media platform Weibo. The hashtag has since been censored, according to The Wall Street Journal, suggesting authorities are discouraging discussion of AI-driven fraud.

“So even we can see from this incident, once it appeared that the Chinese public was afraid that there was too much AI-powered fraud, they censored,” Drexel told VOA Mandarin.

He continued, “The fact that official state-run media initially reported these incidents and then later discussion of it was censored just goes to show that they do ultimately care about covering themselves politically more than they care about addressing fraud.”

Adrianna Zhang contributed to this report.
