Myanmar cracks down on flow of information by blocking VPNs

BANGKOK — Myanmar’s military government has launched a major effort to block free communication on the internet, shutting off access to virtual private networks — known as VPNs — which can be used to circumvent blockages of banned websites and services. 

The attempt to restrict access to information began at the end of May, according to mobile phone operators, internet service providers, a major opposition group, and media reports. 

The military government that took power in February 2021 after ousting the elected government of Aung San Suu Kyi has made several attempts to throttle traffic on the internet, especially in the months immediately after its takeover. 

Reports in local media say the crackdown on internet usage includes random street searches of people’s mobile phones to check for VPN applications, with fines imposed if any are found. It is unclear whether the fines are an official measure. 

25 arrested for having VPNs

On Friday, the Burmese-language service of U.S. government-funded Radio Free Asia reported that about 25 people from Myanmar’s central coastal Ayeyarwady region were arrested and fined by security forces this week after VPN apps were found on their mobile phones. Radio Free Asia is a sister news outlet to Voice of America. 

As the army faces strong challenges from pro-democracy guerrillas across the country in what amounts to a civil war, it has also made a regular practice of shutting down civilian communications in areas where fighting is taking place. While this may serve tactical purposes, it also makes it hard for evidence of alleged human rights abuses to become public. 

According to a report released last month by Athan, a freedom of expression advocacy group in Myanmar, nearly 90 of 330 townships across the country have had internet access or phone service — or both — cut off by authorities. 

Resistance that arose to the 2021 army takeover relied heavily on social media, especially Facebook, to organize street protests. As nonviolent resistance escalated into armed struggle and other independent media were shut down or forced underground, the need for online information increased. 

The resistance scored a victory in cybersphere when Facebook and other major social media platforms banned members of the Myanmar military because of their alleged violations of human and civil rights, and blocked ads from most military-linked commercial entities. 

Users unable to connect

This year, widely used free VPN services started failing at the end of May, with users receiving messages that they could not be connected, cutting them off from social media platforms such as Facebook and WhatsApp, as well as some websites.

VPNs route users’ connections to their desired sites through third-party servers over encrypted tunnels, making it almost impossible for internet service providers and snooping governments to see what users are actually connecting to. 
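The mechanism described above can be illustrated with a small, self-contained simulation. This is a toy model, not a real VPN protocol: the function names, the dictionary fields and the "encryption" stand-in are all invented for illustration. The point it demonstrates is that without a tunnel the network operator observes the true destination, while with a tunnel it observes only the VPN server's address.

```python
# Toy model of why a VPN hides a user's destination from an ISP.
# "visible_to_isp" represents what a network operator can observe about
# a connection; all names here are illustrative, not a real protocol.

def direct_request(destination: str) -> dict:
    # Without a VPN, the ISP sees the actual destination of the connection.
    return {"visible_to_isp": destination, "payload": f"GET / {destination}"}

def vpn_request(destination: str, vpn_server: str) -> dict:
    # With a VPN, the ISP sees only a connection to the VPN server;
    # the true destination travels inside the encrypted tunnel.
    encrypted = f"<encrypted:{destination}>"  # stand-in for real encryption
    return {"visible_to_isp": vpn_server, "payload": encrypted}

if __name__ == "__main__":
    print(direct_request("facebook.com")["visible_to_isp"])
    print(vpn_request("facebook.com", "vpn.example.net")["visible_to_isp"])
```

This also shows why blocking VPNs is effective censorship: once the tunnel endpoints themselves are blocked, the user has no intermediary left to hide behind.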

Internet users, including online retail sellers, have been complaining for the past two weeks about slowdowns, saying they were not able to watch or upload videos and posts or send messages easily. 

Operators of Myanmar’s top telecom companies MPT, Ooredoo, Atom and the military-backed Mytel, as well as fiber internet services, told The Associated Press on Friday that access to Facebook, Instagram, X, WhatsApp and VPN services was banned nationwide at the end of May on the order of the Transport and Communications Ministry. 

The AP tried to contact a spokesperson for the Transport and Communications Ministry for comment but received no response. 

The operators said VPNs are not currently authorized for use, but suggested users try rotating through different services to see if any work. 

A test by the AP of more than two dozen VPN apps found that only one could hold a connection, and it was slow. 

The military government has not yet publicly announced the ban on VPNs. 

World leaders discuss AI as China’s digital influence in Latin America grows  

WASHINGTON — Pope Francis, originally from Argentina, spoke Friday about the ethics of artificial intelligence at the G7 summit at a time when China has been rolling out its own AI standards and building technological infrastructure in developing nations, including Latin America.

The annual meeting of the Group of Seven industrialized nations held in the Puglia region of Italy this week focused on topics that included economic security and artificial intelligence.

On Friday, Francis became the first pope to speak at a G7 summit. He spoke about AI and its ethical implications and the need to balance technological progress with values.

“Artificial intelligence could enable a democratization of access to knowledge, the exponential advancement of scientific research, and the possibility of giving demanding and arduous work to machines,” he said.

But Francis also warned that AI “could bring with it a greater injustice between advanced and developing nations, or between dominant and oppressed social classes.”

Technology and security experts have noted that AI is becoming an increasingly geopolitical issue, particularly as the U.S. and China compete in regions such as Latin America.

“There will be the promotion of [China’s] standards for AI in other countries and the U.S. will be doing the same thing, so we will have bifurcation, decoupling of these standards,” Handel Jones, the chief executive of International Business Strategies Inc., told VOA.

To decrease reliance on China, U.S. tech companies are looking to Mexico to buy AI-related hardware, and Taiwan-based Foxconn has been investing hundreds of millions of dollars in building manufacturing facilities in Mexico to meet that need.

Huawei’s projects

At the same time, Chinese telecommunications giant Huawei has been implementing telecommunications and cloud infrastructure in Latin America. The company recently reported a 10.9% increase in revenue in that region in 2023. The United States has sanctioned Huawei because of national security concerns.

“I would argue that Huawei is developing the infrastructure in the region [Latin America] in which it can deploy its type of AI solutions,” said Evan Ellis, Latin American studies research professor at the U.S. Army War College’s Strategic Studies Institute.

Ellis elaborated on the potential security concerns with Huawei’s AI solutions, explaining to VOA how China may be able to use integrated AI solutions such as facial recognition for potentially “nefarious purposes,” such as recognizing consumer behavioral patterns.

Jones emphasized the potential security threat to the West of China implementing AI in Latin America.

“The negative [side] of AI is that you can get control, and you can also influence, so how you control thought processes and media, and so on … that’s something which is very much a part of the philosophy of the China government,” Jones said.

Jones added that China is moving rapidly to build up its AI capabilities.

“Now, they claim it’s defensive. But again, who knows what’s going to happen five years from now? But if you’ve got the strength, would you use it? And how would you use it? And of course, AI is going to be a critical part of any future military activities,” he said.

In May, China launched a three-year action plan to set standards in AI and to position itself as a global leader in the emerging tech space.

‘Rig the game’

“Once you can set standards, you rig the game to lock in basically your own way of doing things, and so it becomes a mutually reinforcing thing,” Ellis said.

“In some ways you can argue that the advance of AI in the hands of countries that are not democratic helps to enable the apparent success of statist solution,” he added. “It strengthens the allure of autocratic systems and taking out protections and privacy away from the individual that at the end of the day pose fundamental threats to the human rights and democracy.”

The Chinese Embassy in Washington did not immediately respond to VOA’s request for comment about analysts’ concerns related to security as China’s digital influence grows in Latin America.

But in a previous statement to VOA about AI, Chinese Embassy spokesperson Liu Pengyu said, “The Global AI Governance Initiative launched by President Xi Jinping puts forward that we should uphold the principles of mutual respect, equality and mutual benefit in AI development, and oppose drawing ideological lines.”

Liu said China supports “efforts to develop AI governance frameworks, norms and standards based on broad consensus and with full respect for policies and practices among countries.”

Parsifal D’Sola, founder and executive director of the Andres Bello Foundation’s China Latin America Research Center, said Huawei has been transparent with how it “manipulates information, [and] what it shares back with China.”

“The way Huawei operates does pose certain risks even for national security, but on the other hand … it’s cheaper, it has great service … [and it provides] infrastructure in areas of the [countries] that do not have access,” D’Sola said.

Experts said countries in Latin America seem less worried about the geopolitical battle between the United States and China and more concerned about efficiency.

“Security is part of the conversation, but development is much more important,” D’Sola said. “Economic development, infrastructure development, is a key priority for – I don’t want to say every country, but I would say most countries in the region.”

As China and countries in the West continue to discuss the implications of AI, Chinasa T. Okolo, an expert in AI and a fellow at the Brookings Institution, said one of the challenges of creating regulatory guidelines for this emerging technology is whether lawmakers can keep up with the speed of technological advancement.

“We don’t necessarily know its full capacity, and so it’s kind of hard to predict,” Okolo said, “and so by the time that, you know, regulators or policymakers have drafted up some sort of legal framework, it could already be outdated, and so governments have to kind of be aware of this and move quickly in terms of implementing effective and robust AI regulations.”

Pope Francis, in his speech, acknowledged the rapid technological advancement of AI.

“It is precisely this powerful technological progress that makes artificial intelligence at the same time an exciting and fearsome tool and demands a reflection that is up to the challenge it presents,” he said, adding that it goes without saying that the benefits or harm that AI will bring depends on how it is used.

New ‘crypto bill’ could mainstream digital currencies in US

The lack of laws governing digital currencies has slowed their expansion in the United States. Cryptocurrency investors tell VOA’s Deana Mitchell they are encouraged that the U.S. House of Representatives is considering a new legal framework for electronic money.

AI copyright fight turns to disclosing original content

Artists and other creators say their works have been used to build the multibillion-dollar generative AI industry without any compensation for them. Matt Dibble reports on a proposed U.S. law that would force AI companies to reveal their sources.

Google AI Gemini parrots China’s propaganda

Washington — VOA’s Mandarin Service recently took Google’s artificial intelligence assistant Gemini for a test drive by asking it dozens of questions in Mandarin, but when it was asked about topics including China’s human rights abuses in Xinjiang or street protests against the country’s controversial COVID policies, the chatbot went silent.

Gemini’s responses to questions about problems in the United States and Taiwan, on the other hand, parroted Beijing’s official positions.

Gemini, Google’s large language model launched late last year, is blocked in China. The California-based tech firm had quit the Chinese market in 2010 in a dispute over censorship demands.

Congressional lawmakers and experts tell VOA that they are concerned about Gemini’s pro-Beijing responses and are urging Google and other Western companies to be more transparent about their AI training data.

Parroting Chinese propaganda

When asked to describe China’s top leader Xi Jinping and the Chinese Communist Party, Gemini gave answers that were indistinguishable from Beijing’s official propaganda.

Gemini called Xi “an excellent leader” who “will lead the Chinese people continuously toward the great rejuvenation of the Chinese nation.”

Gemini said that the Chinese Communist Party “represents the fundamental interest of the Chinese people,” a claim the CCP itself maintains.

On Taiwan, Gemini also mirrored Beijing’s talking points, saying the United States has recognized China’s claim to sovereignty over the self-governed island democracy.

The U.S. only acknowledges Beijing’s position but does not recognize it.

Silent on sensitive topics

During VOA’s testing, Gemini had no problem criticizing the United States. But when similar questions were asked about China, Gemini refused to answer.

When asked about human rights concerns in the U.S., Gemini listed a plethora of issues, including gun violence, government surveillance, police brutality and socioeconomic inequalities. Gemini cited a report released by the Chinese government.

But when asked to explain the criticisms of Beijing’s Xinjiang policies, Gemini said it did not understand the question.

According to estimates from rights groups, more than 1 million Uyghurs in Xinjiang have been placed in internment camps as part of a campaign by Beijing to counter terrorism and extremism. Beijing calls the facilities where Uyghurs and other ethnic minorities are being held vocational training centers.

When asked if COVID lockdowns in the U.S. had led to public protests, Gemini gave an affirmative response as well as two examples. But when asked if similar demonstrations took place in China, Gemini said it could not help with the question.

China’s strict COVID controls on movement inside the country, and Beijing’s internet censorship of criticism of those controls, sparked nationwide street protests in late 2022. News about the protests was heavily censored inside China.

Expert: training data likely the problem

Google touts Gemini as its “most capable” AI model. It supports over 40 languages and can “seamlessly understand” different types of information, including text, code, audio, image and video. Google says Gemini will be incorporated into the company’s other services, such as its search engine, advertising and browser.

Albert Zhang, a cybersecurity analyst at the Australian Strategic Policy Institute, told VOA that the root cause of Gemini’s pro-Beijing responses could be the data used to train the AI assistant.

In an emailed response to VOA, Zhang said it is likely that the data used to train Gemini “contained mostly Chinese text created by the Chinese government’s propaganda system.”

He said that according to a paper published by Google in 2022, some of Gemini’s data likely came from Chinese social media, public forums and web documents.

“These are all sources the Chinese government has flooded with its preferred narratives and we may be seeing the impact of this on large language models,” he said.

By contrast, when Gemini was asked in English the same questions about China, its responses were much more neutral, and it did not refuse to answer any of the questions.

Yaqiu Wang, research director for China at Freedom House, a Washington-based advocacy organization, told VOA that the case with Gemini is “a reminder that generative AI tools influenced by state-controlled information sources could serve as force multipliers for censorship.”

In a statement to VOA, a Google spokesperson said that Gemini was “designed to offer neutral responses that don’t favor any political ideology, viewpoint, or candidate. This is something that we’re constantly working on improving.”

When asked about the Chinese language data Google uses to train Gemini, the company declined to comment.

US lawmakers concerned

Lawmakers from both parties in Congress have expressed concerns over VOA’s findings on Gemini.

Mark Warner, chairman of the Senate Intelligence Committee, told VOA that he is worried about Beijing potentially utilizing AI for disinformation, “whether that’s by poisoning training data used by Western firms, coercing major technology companies, or utilizing AI systems in service of covert influence campaigns.”

Marco Rubio, vice chairman of the committee, warned that “AI tools that uncritically repeat Beijing’s talking points are doing the bidding of the Chinese Communist Party and threatens the tremendous opportunity that AI offers.”

Congressman Michael McCaul, who chairs the House Committee on Foreign Affairs, is worried about the national security and foreign policy implications of the “blatant falsehoods” in Gemini’s answers.

“U.S. companies should not censor content according to CCP propaganda guidelines,” he told VOA in a statement.

Raja Krishnamoorthi, ranking member on the House Select Committee on the Chinese Communist Party, urged Google and other Western tech companies to improve AI training.

“You should try to screen out or filter out subjects or answers or data that has somehow been manipulated by the CCP,” he told VOA. “And you have to also make sure that you test these models thoroughly before you publish them.”

VOA reached out to China’s embassy in Washington for comment but did not receive a response as of publication.

Google’s China problems

In February, a user posted on social media platform X that Gemini refused to generate an image of a Tiananmen Square protester from 1989.

In 2022, a study by a Washington think tank showed that Google and YouTube put Chinese state media content about Xinjiang and COVID origins in prominent positions in search results.

According to media reports in 2018, Google was developing a search engine specifically tailored for the Chinese market that would conform to Beijing’s censorship demands.

That project was canceled a year later.

Yihua Lee contributed to this report.

Some US families opt to raise teens sans social media

WESTPORT, Connecticut — Kate Bulkeley’s pledge to stay off social media in high school worked at first. She watched the benefits pile up: She was getting excellent grades. She read lots of books. The family had lively conversations around the dinner table and gathered for movie nights on weekends.

Then, as sophomore year got under way, the unexpected problems surfaced. She missed a student government meeting arranged on Snapchat. Her Model U.N. team communicates on social media, too, causing her scheduling problems. Even the Bible Study club at her Connecticut high school uses Instagram to communicate with members.

Gabriela Durham, a high school senior in Brooklyn, says navigating high school without social media has made her who she is today. She is a focused, organized, straight-A student. Not having social media has made her an “outsider,” in some ways. That used to hurt; now, she says, it feels like a badge of honor.

With the damaging consequences of social media increasingly well documented, many parents are trying to raise their children with restrictions or blanket bans. Teenagers themselves are aware that too much social media is bad for them, and some are initiating social media “cleanses” because of the toll it takes on mental health and grades.

This is a tale of two families, social media and the ever-present challenge of navigating high school. It’s about what kids do when they can’t extend their Snapstreaks or shut their bedroom doors and scroll through TikToks past midnight. It’s about what families discuss when they’re not having screen-time battles. It’s also about persistent social ramifications.

The journeys of both families show the rewards and pitfalls of trying to avoid social media in a world that is saturated by it.

Concerns about children and phone use are not new. But there is a growing realization among experts that the COVID-19 pandemic fundamentally changed the relationship kids have with social media. As youth coped with isolation and spent excessive time online, the pandemic effectively carved out a much larger space for social media in the lives of American children.

Social media is where many kids turn to forge their emerging identities, to seek advice, to unwind and relieve stress. In this era of parental control apps and location tracking, social media is where this generation is finding freedom.

It is also increasingly clear that the more time youth spend online, the higher the risk of mental health problems.

Kids who use social media for more than three hours a day face double the risk of depression and anxiety, according to studies cited by U.S. Surgeon General Vivek Murthy, who issued an extraordinary public warning last spring about the risks of social media to young people.

The Bulkeleys and Gabriela’s mother, Elena Romero, both set strict rules starting when their kids were young and still in elementary school. They delayed giving phones until middle school and declared no social media until 18. They educated the girls, and their younger siblings, on the impact of social media on young brains, on online privacy concerns, on the dangers of posting photos or comments that can come back to haunt you.

At school, on the subway and at dance classes around New York City, Gabriela is surrounded by reminders that social media is everywhere — except on her phone.

Growing up without it has meant missing out on things. Everyone but you gets the same jokes, practices the same TikTok dances, is up on the latest viral trends. When Gabriela was younger, that felt isolating; at times, it still does. But now, she sees not having social media as freeing.

“From my perspective, as an outsider,” she says, “it seems like a lot of kids use social media to promote a facade. And it’s really sad.”

There is also friend drama on social media and a lack of honesty, humility and kindness that she feels lucky to be removed from.

Gabriela is a dance major at the Brooklyn High School of the Arts. Senior year got intense with college and scholarship applications capped by getting to perform at Broadway’s Shubert Theatre in March as part of a city showcase of high school musicals.

“My kids’ schedules will make your head spin,” Romero says. On school days, they’re up at 5:30 a.m. and out the door by 7. Romero drives the girls to their three schools scattered around Brooklyn, then takes the subway into Manhattan, where she teaches mass communications at the Fashion Institute of Technology.

In New York City, it’s common for kids to get phones early in elementary school, but Romero waited until each daughter reached middle school and started taking public transportation home alone.

In the upscale suburb of Westport, Connecticut, the Bulkeleys have faced questions about bending their rules. But not for the reason they had anticipated.

Kate was perfectly content to not have social media. Her parents figured at some point she might resist their ban because of peer pressure or fear of missing out. But the 15-year-old sees it as a waste of time. She describes herself as academic, introverted and focused on building up extracurricular activities.

That’s why she needed Instagram.

“I needed it to be co-president of my Bible Study Club,” Kate explains.

As Kate’s sophomore year started, she told her parents that she was excited to be leading a variety of clubs but needed social media to do her job. “It was the school that really drove the fact that we had to reconsider our rule about no social media,” says Steph Bulkeley, Kate’s mother.

Schools talk the talk about limiting screen time and the dangers of social media, says her dad, Russ Bulkeley. But technology is rapidly becoming part of the school day. Kate’s high school and their 13-year-old daughter Sutton’s middle school have cell phone bans that aren’t enforced. Teachers will ask them to take out their phones to photograph material during class time.

The Bulkeleys aren’t on board with that but feel powerless to change it.

Ultimately they gave in to Kate’s plea for Instagram because they trust her, and because she’s too busy to devote much time to social media.

Netflix’s recipe for success includes ‘secret sauce’ spiced with tech savvy

LOS GATOS, California — Although its video streaming service sparkles with a Hollywood sheen, Netflix still taps its roots in Silicon Valley to stay a step ahead of traditional TV and movie studios.

The Los Gatos, California, company, based more than 300 miles away from Hollywood, frequently reaches into its technological toolbox without viewers even realizing it. It often just uses a few subtle twists on the knobs of viewer recommendations to help keep its 270 million worldwide subscribers satisfied at a time when most of its streaming rivals are seeing waves of cancellations from inflation-weary subscribers.

Even when hit TV series like “The Crown” or “Bridgerton” have wide appeal, Netflix still tries to cater to the divergent tastes of its vast audience. One part of that recipe includes tailoring summaries and trailers about its smorgasbord of shows to fit the personal interests of each viewer.

So, someone who likes romance might see a plot summary or video trailer for “The Crown” highlighting the relationship between Princess Diana and Charles, while another viewer more into political intrigue may be shown a clip of Queen Elizabeth in a meeting with Margaret Thatcher.

For an Oscar-nominated film like “Nyad,” a lover of action might see a trailer of the title character immersed in water during one of her epic swims, while a comedy fan might see a lighthearted scene featuring some amusing banter between the two stars, Annette Bening and Jodie Foster.

Netflix is able to pull off these variations through the deep understanding of viewing habits it gleans from crunching the data from subscribers’ histories with its service — including those of customers who signed up in the late 1990s when the company launched with a DVD-by-mail service that continued to operate until last September.

“It is a secret sauce for us, no doubt,” Eunice Kim, Netflix’s chief product officer, said while discussing the nuances of the ways Netflix tries to reel different viewers into watching different shows. “The North Star we have every day is keep people engaged, but also make sure they are incredibly satisfied with their viewing experiences.”

As part of that effort, Netflix is rolling out a redesign of the home page that greets subscribers when they are watching the streaming service on a TV screen. The changes are meant to package all the information that might appeal to a subscriber’s tastes in a more concise format to reduce the “gymnastics with their eyes,” said Patrick Flemming, Netflix’s senior director of member product.

What Netflix is doing with its previews may seem like a small thing, but it can make a huge difference, especially as people looking to save money start to limit the number of streaming services they have.

Last year, video streaming services collectively suffered about 140 million account cancellations, a 35% increase from 2022 and nearly triple the volume in 2020, when the COVID-19 pandemic created a boom in demand for entertainment from people corralled at home, according to numbers compiled by the research firm Antenna.

Netflix doesn’t disclose its cancellation, or churn, rate, but last year its streaming service gained 30 million subscribers — marking its second-biggest annual increase behind its own growth spurt during the 2020 pandemic lockdowns.

Part of last year’s subscription growth flowed from a crackdown on viewers who had been freeloading off Netflix subscribers who shared their account passwords. But the company is also benefiting from the technological know-how that helps it to keep funneling shows to customers who like them and make them think the service is worth the money, according to J. Christopher Hamilton, an assistant professor of television, radio and film at Syracuse University.

“What they have been doing is pretty ingenious and very, very strategic,” Hamilton said. “They are definitely ahead of the legacy media companies who are trying to do some of the same things but just don’t have the level of sophistication, experience nor the history of the data in their archives.”

Netflix’s nerdy heritage once was mocked by an entertainment industry that looked down at the company’s geekdom.

Not long after that put-down, Netflix began mining its viewing data to figure out how to produce a slate of original programming that would attract more subscribers — an ambitious expansion that forced Time Warner (now rolled into Warner Bros. Discovery) and other long-established entertainment companies such as Walt Disney Co. into a mad scramble to build their own streaming services.

Although those expansions initially attracted hordes of subscribers, they also produced massive losses that led to management shakeups and drastic cutbacks, including the abrupt closure of a CNN streaming service. 

What Netflix is doing with technology to retain subscribers to boost its fortunes — the company’s profit rose 20% to $5.4 billion last year — now is widening the divide with rival services still trying to stanch their losses.

Disney’s 4-year-old streaming service recently became profitable after an overhaul engineered by CEO Bob Iger, but he thinks more work will be required to catch up with Netflix.

Netflix isn’t going to help its rivals by divulging its secrets, but the slicing and dicing generally starts with getting a grasp on which viewers tend to gravitate to certain genres — the broad categories include action, adventure, anime, fantasy, drama, horror, comedy, romance and documentary — and then diving deeper from there.

In some instances, Netflix’s technology will even try to divine a viewer’s mood at any given time by analyzing what titles are being browsed or clicked on. In other instances, it’s relatively easy for the technology to figure out how to make a film or TV series as appealing as possible to specific viewers.

If Netflix’s data shows a subscriber has watched a lot of Hindi productions, it would be almost a no-brainer to feature clips of Bollywood actress Alia Bhatt in a role she played in the U.S. film “Heart of Stone” instead of the movie’s lead actress, Gal Gadot.
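The kind of genre-affinity logic the article describes can be sketched in a few lines. This is a hypothetical illustration, not Netflix's actual system: the titles, genre labels and trailer variants are invented. The idea is simply to tally genres across a viewer's watch history and serve the trailer variant matching the viewer's strongest genre.

```python
# Illustrative sketch of genre-based trailer personalization.
# All data and function names are hypothetical.
from collections import Counter

def genre_affinity(watch_history, title_genres):
    """Tally genres across everything the viewer has watched."""
    counts = Counter()
    for title in watch_history:
        counts.update(title_genres.get(title, []))
    return counts

def pick_trailer(variants, affinity):
    """Choose the trailer variant whose genre the viewer watches most."""
    best_genre = max(variants, key=lambda g: affinity.get(g, 0))
    return variants[best_genre]

title_genres = {
    "Bridgerton": ["romance", "drama"],
    "The Diplomat": ["political", "drama"],
    "Love Is Blind": ["romance"],
}
crown_trailers = {
    "romance": "Diana and Charles clip",
    "political": "Elizabeth and Thatcher clip",
}

affinity = genre_affinity(["Bridgerton", "Love Is Blind"], title_genres)
print(pick_trailer(crown_trailers, affinity))  # Diana and Charles clip
```

A romance-heavy history surfaces the Diana-and-Charles cut; a history full of political dramas would flip the choice to the Elizabeth-and-Thatcher cut, which is the behavior the article attributes to Netflix's personalization.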

US lawmakers call for scrutiny of NewsBreak app over Chinese origins

WASHINGTON AND LONDON — Three U.S. lawmakers have called for more scrutiny of NewsBreak, a popular news aggregation app in the United States, after Reuters reported it has Chinese origins and has used artificial intelligence tools to produce erroneous stories.

The Reuters story drew upon previously unreported court documents related to copyright infringement, cease-and-desist emails and a 2022 company memo registering concerns about “AI-generated stories” to identify at least 40 instances in which NewsBreak’s use of AI tools affected the communities it strives to serve.

“The only thing more terrifying than a company that deals in unchecked, artificially generated news, is one with deep ties to an adversarial foreign government,” said Senator Mark Warner, a Democrat who chairs the Intelligence Committee.

“This is yet another example of the serious threat posed by technologies from countries of concern. It’s also a stark reminder that we need a holistic approach to addressing this threat — we simply cannot win the game of whack-a-mole with individual companies,” he said.

The lawmakers expressed concerns about NewsBreak’s current and historical links to Chinese investors, as well as the company’s presence in China, where many of its engineers are based.

In response to a request from Reuters for comment about the lawmakers’ statements, NewsBreak said it was an American company: “NewsBreak is a U.S. company and always has been. Any assertion to the contrary is not true,” a spokesperson said.

NewsBreak launched in the U.S. in 2015 as a subsidiary of Yidian, a Chinese news aggregation app. Both companies were founded by Jeff Zheng, the CEO of NewsBreak, and the companies share a U.S. patent registered in 2015 for an “Interest Engine” algorithm, which recommends news content based on a user’s interests and location, Reuters reported.

Yidian in 2017 received praise from ruling Communist Party officials in China for its efficiency in disseminating government propaganda. Reuters found no evidence that NewsBreak censored or produced news that was favorable to the Chinese government.

“This report brings to light serious questions about NewsBreak, its historical relationship with an entity that assisted the CCP, and to Chinese state-linked media,” said Representative Raja Krishnamoorthi, the top Democrat on the House select committee on China, in a reference to Yidian and its former investor, state-linked media outlet Phoenix New Media.

Americans have the right to “full transparency” about any connections to the CCP from news distributors, Krishnamoorthi said, particularly with regard to the use of “opaque algorithms” and artificial intelligence tools to produce news.

Reuters reported the praise Yidian received from the Communist Party in 2017 but was unable to establish that NewsBreak has any current ties with the party.

U.S. Representative Elise Stefanik, a Republican, said IDG Capital’s backing of NewsBreak indicated the app “deserves increased scrutiny.”

“We cannot allow our foreign adversaries access to American citizen’s data to weaponize them against America’s interests,” she said.

NewsBreak is a privately held start-up, whose primary backers are private equity firms San Francisco-based Francisco Partners and Beijing-based IDG Capital, Reuters reported. In February, IDG Capital was added to a list of dozens of Chinese companies the Pentagon said were allegedly working with Beijing’s military.

IDG Capital has previously said it has no association with the Chinese military and does not belong on that list. It declined to comment on the lawmakers’ reaction.

A spokesperson for Francisco Partners, which has previously declined to answer questions from Reuters on their investment in NewsBreak, described the story as “false and misleading” but declined to provide details beyond saying the description of them as a “primary backer” of NewsBreak was incorrect because their investment was less than 10%.

They did not provide documentation to prove the size of the holding. NewsBreak has told Reuters as recently as May 13 that Francisco Partners is NewsBreak’s primary investor. NewsBreak did not respond to two requests late Friday asking for documentation supporting the assertion.

22 Chinese nationals sentenced to prison in Zambia for cybercrimes

LUSAKA, Zambia — A Zambian court on Friday sentenced 22 Chinese nationals to long prison terms for cybercrimes that included internet fraud and online scams targeting Zambians as well as people from Singapore, Peru and the United Arab Emirates.

The Magistrates Court in the capital, Lusaka, sentenced them to terms ranging from seven to 11 years. The court also fined them between $1,500 and $3,000 after they pleaded guilty on Wednesday to charges of computer-related misrepresentation, identity fraud and illegally operating a network or service. A man from Cameroon also was sentenced and fined on the same charges.

They were part of a group of 77 people, the majority of them Zambians, arrested in April over what police described as a “sophisticated internet fraud syndicate.”

Nason Banda, director-general of the Drug Enforcement Commission, said investigations began after authorities noticed a spike in cyber-related fraud cases and many people complained about inexplicably losing money from their mobile phones or bank accounts.

Officers from the commission, police, the immigration department and the anti-terrorism unit in April swooped on a Chinese-run business in an upmarket suburb of Lusaka, arresting the 77, including those sentenced Friday. Authorities recovered over 13,000 local and foreign mobile phone SIM cards, two firearms and 78 rounds of ammunition during the raid.

The business, named Golden Top Support Services, had employed “unsuspecting” Zambians aged between 20 and 25 to use the SIM cards to engage “in deceptive conversations with unsuspecting mobile users across various platforms such as WhatsApp, Telegram, chat rooms and others, using scripted dialogues,” Banda said in April after the raid. The locals were freed on bail.

LogOn: Swarms of drones can be managed by one person

The U.S. military says large groups of drones and ground robots can be managed by a single person without added stress to the operator. In this week’s episode of LogOn, VOA’s Julie Taboh reports the technologies may be beneficial for civilian uses, too. Videographer and video editor: Adam Greenbaum

Many Americans still shying away from EVs despite Biden’s push, poll finds

Washington — Many Americans still aren’t sold on going electric for their next car purchase. High prices and a lack of easy-to-find charging stations are major sticking points, a new poll shows.  

About 4 in 10 U.S. adults say they would be at least somewhat likely to buy an EV the next time they buy a car, according to the poll by The Associated Press-NORC Center for Public Affairs Research and the Energy Policy Institute at the University of Chicago, while 46% say they are not too likely or not at all likely to purchase one.  

The poll results, which echo an AP-NORC poll from last year, show that President Joe Biden’s election-year plan to dramatically raise EV sales is running into resistance from American drivers. Only 13% of U.S. adults say they or someone in their household owns or leases a gas-hybrid car, and just 9% own or lease an electric vehicle.  

Caleb Jud of Cincinnati said he’s considering an EV, but may end up with a plug-in hybrid — if he goes electric. While Cincinnati winters aren’t extremely cold, “the thought of getting stuck in the driveway with an EV that won’t run is worrisome, and I know it wouldn’t be an issue with a plug-in hybrid,″ he said. Freezing temperatures can slow chemical reactions in EV batteries, depleting power and reducing driving range.

A new rule from the Environmental Protection Agency requires that about 56% of all new vehicle sales be electric by 2032, along with at least 13% plug-in hybrids or other partially electric cars. Auto companies are investing billions in factories and battery technology in an effort to speed up the switch to EVs to cut pollution, fight climate change — and meet the deadline.  

EVs are a key part of Biden’s climate agenda. Republicans led by presumptive nominee Donald Trump are turning it into a campaign issue.  

Younger people are more open to eventually purchasing an EV than older adults. More than half of those under 45 say they are at least “somewhat” likely to consider an EV purchase. About 32% of those over 45 are somewhat likely to buy an EV, the poll shows.  

But only 21% of U.S. adults say they are “very” or “extremely” likely to buy an EV for their next car, according to the poll, and 21% call it somewhat likely. Worries about cost are widespread, as are other practical concerns.  

Range anxiety, the idea that EVs cannot go far enough on a single charge and may leave a driver stranded, continues to be a major reason why many Americans do not purchase electric vehicles.  

About half of U.S. adults cite worries about range as a major reason not to buy an EV. About 4 in 10 say a major strike against EVs is that they take too long to charge or they don’t know of any public charging stations nearby.  

Concern about range is leading some to consider gas-engine hybrids, which allow driving even when the battery runs out. Jud, a 33-year-old operations specialist and political independent, said a hybrid “is more than enough for my about-town shopping, dropping my son off at school” and other uses.  

With EV prices declining, cost would not be a factor, Jud said — a minority view among those polled. Nearly 6 in 10 adults cite cost as a major reason why they would not purchase an EV.  

Price is a bigger concern among older adults.  

The average price for a new EV was $52,314 in February, according to Kelley Blue Book. That’s down by 12.8% from a year earlier, but still higher than the average price for all new vehicles of $47,244, the report said.

Jose Valdez of San Antonio owns three EVs, including a new Mustang Mach-E. With a tax credit and other incentives, the sleek new car cost about $49,000, Valdez said. He thinks it’s well worth the money.  

“People think they cost an arm and a leg, but once they experience (driving) an EV, they’ll have a different mindset,” said Valdez, a retired state maintenance worker. 

The 45-year-old Republican said he does not believe in climate change. “I care more about saving green” dollars, he said, adding that he loves the EV’s quiet ride and the fact he doesn’t have to pay for gas or maintenance. EVs have fewer parts than gas-powered cars and generally cost less to maintain. Valdez installed his home charger himself for less than $700 and uses it for all three family cars, the Mustang and two older Ford hybrids.

With a recently purchased converter, he can also charge at a nearby Tesla supercharger station, Valdez said.  

About half of those who say they live in rural areas cite lack of charging infrastructure as a major factor in not buying an EV, compared with 4 in 10 of those living in urban communities.  

Daphne Boyd, of Ocala, Florida, has no interest in owning an EV. There are few public chargers near her rural home “and EVs don’t make any environmental sense,″ she said, citing precious metals that must be mined to make batteries, including in some countries that rely on child labor or other unsafe conditions. She also worries that heavy EV batteries increase wear-and-tear on tires and make the cars less efficient. Experts say extra battery weight can wear on tires but say proper maintenance and careful driving can extend tire life.  

Boyd, a 54-year-old Republican and self-described farm wife, said EVs may eventually make economic and environmental sense, but “they’re not where they need to be” to convince her to buy one now or in the immediate future.

Ruth Mitchell, a novelist from Eureka Springs, Arkansas, loves her EV. “It’s wonderful — quiet, great pickup, cheap to drive. I rave about it on Facebook,″ she said.

Mitchell, a 70-year-old Democrat, charges her Chevy Volt hybrid at home but says there are several public chargers near her house. She’s not looking for a new car, Mitchell said, but when she does it will be electric: “I won’t drive anything else.”

South Africa’s first retrofitted electric minibus taxi exceeds expectations

Minibus taxis are everywhere in South Africa, and all of them run on gasoline. But engineers at one university are hoping to change that as they are getting better-than-expected results from their all-electric minibus taxi. Vicky Stark has the story from Cape Town, South Africa.

Next Boeing CEO should understand past mistakes, airlines boss says 

DUBAI — The next CEO of Boeing should have an understanding of what led to its current crisis and be prepared to look outside for examples of best industrial practices, the head of the International Air Transport Association said on Sunday.

U.S. planemaker Boeing is engulfed in a sprawling safety crisis, exacerbated by a January mid-air panel blowout on a near-new 737 MAX plane. CEO Dave Calhoun is due to leave the company by the end of the year as part of a broader management shake-up, but Boeing has not yet named a replacement.

“It is not for me to say who should be running Boeing. But I think an understanding of what went wrong in the past, that’s very important,” IATA Director General Willie Walsh told Reuters TV at an airlines conference in Dubai, adding that Boeing was taking the right steps.

IATA represents more than 300 airlines or around 80% of global traffic.

“Our industry benefits from learning from mistakes, and sharing that learning with everybody,” he said, adding that this process should include “an acknowledgement of what went wrong, looking at best practice, looking at what others do.”

He said it was critical that the industry has a culture “where people feel secure in putting their hands up and saying things aren’t working the way they should do.”

Boeing is facing investigations by U.S. regulators, possible prosecution for past actions and slumping production of its strongest-selling jet, the 737 MAX.

‘Right steps’

Calhoun, a Boeing board member since 2009 and former GE executive, was brought in as CEO in 2020 to help turn the planemaker around following two fatal crashes involving the MAX, its strongest-selling jet.

But the planemaker has lost market share to competitor Airbus, with its stock losing nearly 32% of its value this year as MAX production plummeted this spring.

“The industry is frustrated by the problems as a result of the issues that Boeing have encountered. But personally, I’m pleased to see that they are taking the right steps,” Walsh said.

Delays in the delivery of new jets from both Boeing and Airbus are part of wider problems in the aerospace supply chain and aircraft maintenance industry complicating airline growth plans.

Walsh said supply chain problems are not easing as fast as airlines want and could last into 2025 or 2026.

“It’s probably a positive that it’s not getting worse, but I think it’s going to be a feature of the industry for a couple of years to come,” he said.

Earlier this year IATA brought together a number of airlines and manufacturers to discuss ways to ease the situation, Walsh said.

“We’re trying to ensure that there’s an open dialogue and honesty,” between them, he said.

‘Open source’ investigators use satellites to identify burned Darfur villages

Investigators using satellite imagery to document the war in western Sudan’s Darfur region say 72 villages were burned down in April, the most they have seen since the conflict began. Henry Wilkins talks with the people who do this research about how so-called open-source investigations could be crucial in holding those responsible for the violence to account.

Robot will try to remove nuclear debris from Japan’s destroyed reactor

TOKYO — The operator of Japan’s destroyed Fukushima Daiichi nuclear power plant demonstrated Tuesday how a remote-controlled robot would retrieve tiny bits of melted fuel debris from one of three damaged reactors later this year for the first time since the 2011 meltdown.

Tokyo Electric Power Company Holdings plans to deploy a “telesco-style” extendable pipe robot into Fukushima Daiichi No. 2 reactor to test the removal of debris from its primary containment vessel by October.

That work is more than two years behind schedule. The removal of melted fuel was supposed to begin in late 2021 but has been plagued with delays, underscoring the difficulty of recovering from the magnitude 9.0 quake and tsunami in 2011.

During the demonstration at the Mitsubishi Heavy Industries’ shipyard in Kobe, western Japan, where the robot has been developed, a device equipped with tongs slowly descended from the telescopic pipe to a heap of gravel and picked up a granule.

TEPCO plans to remove less than 3 grams (0.1 ounce) of debris in the test at the Fukushima plant.

“We believe the upcoming test removal of fuel debris from Unit 2 is an extremely important step to steadily carry out future decommissioning work,” said Yusuke Nakagawa, a TEPCO group manager for the fuel debris retrieval program. “It is important to proceed with the test removal safely and steadily.”

About 880 tons of highly radioactive melted nuclear fuel remain inside the three damaged reactors. Critics say the 30- to 40-year cleanup target set by the government and TEPCO for Fukushima Daiichi is overly optimistic. The damage in each reactor is different, and plans must accommodate their conditions.

Better understanding the melted fuel debris inside the reactors is key to their decommissioning. TEPCO deployed four mini drones into the No. 1 reactor’s primary containment vessel earlier this year to capture images from areas that robots had not reached.

New cars in California could alert drivers when they break the speed limit

SACRAMENTO, California — California could eventually join the European Union in requiring all new cars to alert drivers when they break the speed limit, a proposal aimed at reducing traffic deaths that would likely impact motorists across the country should it become law.

The federal government sets safety standards for vehicles nationwide, which is why most cars now beep at drivers if their seat belt isn’t fastened. A bill in the California Legislature — which passed its first vote in the state Senate on Tuesday — would go further by requiring all new cars sold in the state by 2032 to beep at drivers when they exceed the speed limit by at least 16 kph.

“Research has shown that this does have an impact in getting people to slow down, particularly since some people don’t realize how fast that their car is going,” said state Sen. Scott Wiener, a Democrat from San Francisco and the bill’s author.

The bill narrowly passed Tuesday, an indication of the tough road it could face. Republican state Sen. Brian Dahle said he voted against it in part because he said sometimes people need to drive faster than the speed limit in an emergency.

“It’s just a nanny state that we’re causing here,” he said.

While the goal is to reduce traffic deaths, the legislation would likely impact all new car sales in the U.S. That’s because California’s auto market is so large that car makers would likely just make all of their vehicles comply with the state’s law.

California often throws its weight around to influence national — and international — policy. California has set its own emission standards for cars for decades, rules that more than a dozen other states have also adopted. And when California announced it would eventually ban the sale of new gas-powered cars, major automakers soon followed with their own announcement to phase out fossil-fuel vehicles.

The technology, known as intelligent speed assistance, uses GPS technology to compare a vehicle’s speed with a dataset of posted speed limits. Once the car is at least 16 kph over the speed limit, the system would emit “a brief, one-time visual and audio signal to alert the driver.”

It would not require California to maintain a list of posted speed limits. That would be left to manufacturers. It’s likely these maps would not include local roads or recent changes in speed limits, resulting in conflicts.

The bill states that if the system receives conflicting information about the speed limit, it must use the higher limit.
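As described above, the system compares a GPS-derived vehicle speed against mapped speed limits, alerts once the driver is at least 16 kph over, and falls back to the higher value when map sources disagree. A minimal sketch of that decision logic follows; the function names, data shapes and code structure are illustrative assumptions, since the bill describes behavior rather than an implementation.

```python
# Illustrative sketch of the intelligent speed assistance logic
# described in the California bill. All names here are hypothetical;
# the bill specifies behavior, not an implementation.

ALERT_THRESHOLD_KPH = 16  # alert once the driver is at least this far over


def effective_speed_limit(limits_kph):
    """When map sources report conflicting posted limits for the same
    road, the bill requires the system to use the higher limit."""
    return max(limits_kph)


def should_alert(vehicle_speed_kph, limits_kph):
    """Return True if a brief, one-time visual and audio alert is due."""
    limit = effective_speed_limit(limits_kph)
    return vehicle_speed_kph >= limit + ALERT_THRESHOLD_KPH
```

For example, under this sketch a car traveling 120 kph on a road where one map source reports a 100 kph limit and another reports 110 kph would not trigger an alert, because the higher limit (110) plus the 16 kph threshold is 126 kph.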

The technology is not new and has been used in Europe for years. Starting later this year, the European Union will require all new cars sold there to have the technology — although drivers would be able to turn it off.

The National Highway Traffic Safety Administration estimates that 10% of all car crashes reported to police in 2021 were speeding-related, and speeding-related fatalities rose 8% that year. Speeding was an especially severe problem in California, where 35% of traffic fatalities were speeding-related, the second-highest rate in the country, according to a legislative analysis of the proposal.

Last year, the National Transportation Safety Board recommended that federal regulators require all new cars to alert drivers when speeding. The recommendation came after a January 2022 crash in which a man with a history of speeding violations ran a red light at more than 100 miles per hour and hit a minivan, killing himself and eight other people.

The NTSB has no regulatory authority and can only make recommendations.

Attempts to regulate AI’s hidden hand in Americans’ lives flounder

DENVER — The first attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and floundering in statehouses nationwide.

Only one of seven bills aimed at preventing AI’s penchant to discriminate when making consequential decisions — including who gets hired, money for a home or medical care — has passed. Colorado Gov. Jared Polis hesitantly signed the bill on Friday.

Colorado’s bill and those that faltered in Washington, Connecticut and elsewhere faced battles on many fronts: clashes between civil rights groups and the tech industry, lawmakers wary of wading into a technology few yet understand, and governors worried about being the odd state out and spooking AI startups.

Polis signed Colorado’s bill “with reservations,” saying in a statement he was wary of regulations dousing AI innovation. The bill has a two-year runway and can be altered before it takes effect.

“I encourage (lawmakers) to significantly improve on this before it takes effect,” Polis wrote.

Colorado’s proposal, along with its six sister bills, is complex, but it would broadly require companies to assess the risk of discrimination from their AI and inform customers when AI was used to help make a consequential decision for them.

The bills are separate from more than 400 AI-related bills that have been debated this year. Most are aimed at slices of AI, such as the use of deepfakes in elections or to make pornography.

The seven bills are more ambitious, applying across major industries and targeting discrimination, one of the technology’s most perverse and complex problems.

“We actually have no visibility into the algorithms that are used, whether they work or they don’t, or whether we’re discriminated against,” said Rumman Chowdhury, AI envoy for the U.S. Department of State who previously led Twitter’s AI ethics team.

While anti-discrimination laws are already on the books, those who study AI discrimination say it’s a different beast, which the U.S. is already behind in regulating.

“The computers are making biased decisions at scale,” said Christine Webber, a civil rights attorney who has worked on class action lawsuits over discrimination including against Boeing and Tyson Foods. Now, Webber is nearing final approval on one of the first-in-the-nation settlements in a class action over AI discrimination.

“Not, I should say, that the old systems were perfectly free from bias either,” said Webber. But “any one person could only look at so many resumes in the day. So you could only make so many biased decisions in one day and the computer can do it rapidly across large numbers of people.”

When you apply for a job, an apartment or a home loan, there’s a good chance AI is assessing your application: sending it up the line, assigning it a score or filtering it out. It’s estimated as many as 83% of employers use algorithms to help in hiring, according to the Equal Employment Opportunity Commission.

AI itself doesn’t know what to look for in a job application, so it’s taught based on past resumes. The historical data that is used to train algorithms can smuggle in bias.

Amazon, for example, worked on a hiring algorithm that was trained on old resumes: largely male applicants. When assessing new applicants, it downgraded resumes with the word “women’s” or that listed women’s colleges because they were not represented in the historical data — the resumes — it had learned from. The project was scuttled.

Webber’s class action lawsuit alleges that an AI system that scores rental applications disproportionately assigned lower scores to Black or Hispanic applicants. A study found that an AI system built to assess medical needs passed over Black patients for special care.

Studies and lawsuits have allowed a glimpse under the hood of AI systems, but most algorithms remain veiled. Americans are largely unaware that these tools are being used, polling from Pew Research shows. Companies generally aren’t required to explicitly disclose that an AI was used.

“Just pulling back the curtain so that we can see who’s really doing the assessing and what tool is being used is a huge, huge first step,” said Webber. “The existing laws don’t work if we can’t get at least some basic information.”

That’s what Colorado’s bill, along with another surviving bill in California, is trying to change. The bills, including a flagship proposal in Connecticut that was killed amid opposition from the governor, are largely similar.

Colorado’s bill will require companies using AI to help make consequential decisions for Americans to annually assess their AI for potential bias; implement an oversight program within the company; tell the state attorney general if discrimination was found; and inform customers when an AI was used to help make a decision for them, including an option to appeal.

Labor unions and academics fear that a reliance on companies overseeing themselves means it’ll be hard to proactively address discrimination in an AI system before it’s done damage. Companies are fearful that forced transparency could reveal trade secrets, including in potential litigation, in this hyper-competitive new field.

AI companies also pushed for, and generally received, a provision that only allows the attorney general, not citizens, to file lawsuits under the new law. Enforcement details have been left up to the attorney general.

While larger AI companies have more or less been on board with these proposals, a group of smaller Colorado-based AI companies said the requirements might be manageable by behemoth AI companies, but not by budding startups.

“We are in a brand new era of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us into definitions and restricts our use of technology while this is forming is just going to be detrimental to innovation.”

All agreed, along with many AI companies, that what’s formally called “algorithmic discrimination” is critical to tackle. But they said the bill as written falls short of that goal. Instead, they proposed beefing up existing anti-discrimination laws.

Chowdhury worries that lawsuits are too costly and time-consuming to be an effective enforcement tool, and that laws should go beyond even what Colorado is proposing. Chowdhury and other academics have instead proposed accredited, independent organizations that can explicitly test AI algorithms for potential bias.

“You can understand and deal with a single person who is discriminatory or biased,” said Chowdhury. “What do we do when it’s embedded into the entire institution?”

China’s Digital Silk Road exports internet technology, controls

WASHINGTON — China promotes its help to Southeast Asian countries in modernizing their digital landscapes through investments in infrastructure as part of its “Digital Silk Road.” But rights groups say Beijing is also exporting its model of authoritarian governance of the internet through censorship, surveillance and controls.

China’s state media this week announced Chinese electrical appliance manufacturer Midea Group jointly built its first overseas 5G factory in Thailand with Thai mobile operator AIS, Chinese telecom service provider China Unicom and tech giant Huawei.

The 208,000-square-meter smart factory will have its own 5G network, Xinhua news agency reported.

Earlier this month, Beijing reached an agreement with Cambodia to establish a Digital Law Library of the Association of Southeast Asian Nations (ASEAN) Inter-Parliamentary Assembly. Cambodia’s Khmer Times said the objective is to “expand all-round cooperation in line with the strategic partnership and building a common destiny community.”

But parallel to China’s state media-promoted technology investments, rights groups say Beijing is also helping countries in the region to build what they call “digital authoritarian governance.”

Article 19, an international human rights organization dedicated to promoting freedom of expression globally and named after Article 19 of the Universal Declaration of Human Rights, said in an April report that the purpose of the Digital Silk Road is not solely to promote China’s technology industry. The report, “China: The rise of digital repression in the Indo-Pacific,” says Beijing is also using its technology to reshape the region’s standards of digital freedom and governance to increasingly match its own.

VOA contacted the Chinese Embassy in the U.S. for a response but did not receive one by the time of publication.

Model of digital governance

Looking at case studies of Cambodia, Malaysia, Nepal and Thailand, the Article 19 report says Beijing is spreading China’s model of digital governance along with Chinese technology and investments from companies such as Huawei, ZTE and Alibaba.

Michael Caster, Asia digital program manager with Article 19, told VOA, “China has been successful at providing a needed service, in the delivery of digital development toward greater connectivity, but also in making digital development synonymous with the adoption of PRC [People’s Republic of China]-style digital governance, which is at odds with international human rights and internet freedom principles, by instead promoting notions of total state control through censorship and surveillance, and digital sovereignty away from universal norms.”

The group says in Thailand, home to the world’s largest overseas Chinese community, agreements with China bolstered internet controls imposed after Thailand’s 2014 coup, and it notes that Bangkok has since been considering a China-style Great Firewall, the censorship mechanism Beijing uses to control online content.

In Nepal, the report notes security and intelligence-sharing agreements with China and concerns that Chinese security camera technology is being used to surveil exiled Tibetans, the largest such group outside India.

The group says Malaysia’s approach to information infrastructure appears to increasingly resemble China’s model, citing Kuala Lumpur’s cybersecurity law passed in April and its partnering with Chinese companies whose technology has been used for repressing minorities inside China.

Most significantly, Article 19 says China is involved at “all levels” of Cambodia’s digital ecosystem. Huawei, which is facing increasing bans in Western nations over cybersecurity concerns, has a monopoly on cloud services in Cambodia.

While Chinese companies say they would not hand over private data to Beijing, experts doubt they would have any choice because of national security laws.

Internet gateway

Phnom Penh announced a decree in 2021 to build a National Internet Gateway similar to China’s Great Firewall, restricting the Cambodian people’s access to Western media and social networking sites.

“That we have seen the normalization of a China-style Great Firewall in some of the countries where China’s influence is most pronounced or its digital development support strongest, such as with Cambodia, is no coincidence,” Caster said.

The Cambodian government says the portal will strengthen national security and help combat tax fraud and cybercrime. But the Internet Society, a U.S.- and Switzerland-based nonprofit internet freedom group, says it would allow the government to monitor individual internet use and transactions, and to trace identities and locations.

Kian Vesteinsson, a senior researcher for technology and democracy with rights group Freedom House, told VOA, “The Chinese Communist Party and companies that are aligned with the Chinese state have led a charge internationally to push for internet fragmentation. And when I say internet fragmentation, I mean these efforts to carve out domestic internets that are isolated from global internet traffic.”

Despite Chinese support and investment, Vesteinsson notes that Cambodia has not yet implemented the plan for a government-controlled internet.

“Building the Chinese model of digital authoritarianism into a country’s internet infrastructure is extraordinarily difficult. It’s expensive. It requires technical capacity. It requires state capacity, and all signs point to the Cambodian government struggling on those fronts.”

Vesteinsson says while civil society and foreign political pressure play a role, business concerns are also relevant as requirements to censor online speech or spy on users create costs for the private sector.

“These governments that are trying to cultivate e-commerce should keep in mind that a legal environment that is free from these obligations to do censorship and surveillance will be more appealing to companies that are evaluating whether to start up domestic operations,” he said.

Article 19’s Caster says countries concerned about China’s authoritarian internet model spreading should do more to support connectivity and internet development worldwide.

“This support should be based on human rights law and internet freedom principles,” he said, “to prevent China from exploiting internet development needs to position its services – and often by extension its authoritarian model – as the most accessible option.”

China will hold its annual internet conference in Beijing July 9-11. China’s Xinhua news agency reports this year’s conference will discuss artificial intelligence, digital government, information technology application innovation, data security and international cooperation.

Adrianna Zhang contributed to this report.

IS turns to artificial intelligence for advanced propaganda amid territorial defeats

Washington — With major military setbacks in recent years, supporters of the Islamic State terror group are increasingly relying on artificial intelligence (AI) to generate online propaganda, experts said.

A new form of propaganda developed by IS supporters is broadcasting news bulletins with AI-generated anchors in multiple languages.

The Islamic State Khorasan (ISKP) group, an IS affiliate active in Afghanistan and Pakistan, produced a video featuring an AI-generated anchorman appearing to read the news following an IS-claimed attack in Afghanistan's Bamiyan province on May 17 that killed four people, including three Spanish tourists.

The digital image posing as an anchor spoke Pashto and had features resembling those of local residents in Bamiyan, according to The Khorasan Diary, a website dedicated to news and analysis on the region.

Another AI-generated propaganda video by Islamic State appeared on Tuesday with a different digital male news anchor announcing IS’s responsibility for a car bombing in Kandahar, Afghanistan.

“These extremists are very effective in spreading deepfake propaganda,” said Roland Abi Najem, a cybersecurity expert based in Kuwait.

He told VOA that a group like IS was already effective in producing videos with Hollywood-level quality, and the use of AI has made such production more accessible for them.

“AI now has easy tools to use to create fake content whether it’s text, photo, audio or video,” Abi Najem said, adding that with AI, “you only need data, algorithms and computing power, so anyone can create AI-generated content from their houses or garages.”

IS began producing AI-generated news bulletins four days after an attack at a Moscow music hall on March 22 killed some 145 people. IS claimed responsibility for the attack.

In that video, IS used a “fake” AI-generated news anchor talking about the Moscow attack, experts told The Washington Post last week.

Mona Thakkar, a research fellow at the International Center for the Study of Violent Extremism, said pro-IS supporters have been using character-generation techniques and text-to-speech AI tools to produce translated news bulletins of IS’s Amaq news agency.

“These efforts have garnered positive responses from other users, reflecting that, through future collaborative efforts, many supporters could produce high quality and sophisticated AI-powered propaganda videos for IS of longer durations with better graphics and more innovative techniques,” she told VOA.

Thakkar said she recently came across some pro-IS Arabic-speaking supporters on Telegram who were recommending to other supporters “that beginners use AI image generator bots on Telegram to maintain the high quality of images as the bots are very easy and quick to produce such images.”

AI-generated content for recruitment

While IS’s ability to project power has largely decreased following its territorial defeats in Syria and Iraq, experts say supporters of the terror group believe artificial intelligence offers an alternative way to promote their extremist ideology.

“Their content has mainly focused on showing that they’re still powerful,” said Abi Najem. “With AI-generated content now, they can choose certain celebrities that have influence, especially on teenagers, by creating deepfake videos.”

“So first they manipulate these people by creating believable content, then they begin recruiting them,” he said.

In a recent article published on the Global Network on Extremism and Technology, researcher Daniel Siegel said generative AI technology has had a profound impact on how extremist organizations engage in influence operations online, including the use of AI-generated Muslim religious songs, known as nasheeds, for recruitment purposes.

“The strategic deployment of extremist audio deepfake nasheeds, featuring animated characters and internet personalities, marks a sophisticated evolution in the tactics used by extremists to broaden the reach of their content,” he wrote.

Siegel said that other radical groups like al-Qaida and Hamas have also begun using AI to generate content for their supporters.

Cybersecurity expert Abi Najem said he believes the cheap technology will increase the availability of AI-generated content by extremist groups on the internet.

“While currently there are no stringent regulations on the use of AI, it will be very challenging for governments to stop extremist groups from exploiting these platforms for their own gain,” he said.

This story originated in VOA’s Kurdish Service.

Australian researchers unveil device that harvests water from the air

SYDNEY — A device that absorbs water from air to produce drinkable water was officially launched in Australia Wednesday.

Researchers say the so-called Hydro Harvester, capable of producing up to 1,000 liters of drinkable water a day, could be “lifesaving during drought or emergencies.”

The device absorbs water from the atmosphere. Solar energy, or heat harnessed from sources such as industrial processes, is used to generate hot, humid air, which is then allowed to cool, producing water for drinking or irrigation.

The Australian team said that unlike other commercially available atmospheric water generators, their invention works by heating air instead of cooling it.

Laureate Professor Behdad Moghtaderi, a chemical engineer and director of the University of Newcastle’s Centre for Innovative Energy Technologies, told VOA how the technology operates.  

“Hydro Harvester uses an absorbing material to absorb and dissolve moisture from air. So essentially, we use renewable energy, let’s say, for instance, solar energy or waste heat. We basically produce supersaturated, hot, humid air out of the system,” Moghtaderi said. “When you condense water contained in that air you would have the drinking water at your disposal.”

The researchers say the device can produce enough drinking water each day to sustain a small rural town of up to 400 people. It could also help farmers keep livestock alive during droughts.

Moghtaderi says the technology could be used in parts of the world where water is scarce.

Researchers were motivated by the fact that Australia is an arid country.

“More than 2 billion people around the world, they are in a similar situation where they do not have access to, sort of, high-quality water and they deal with water scarcity,” Moghtaderi said.

Trials of the technology will be conducted in several remote Australian communities this year.

The World Economic Forum, an international research organization, says “water scarcity continues to be a pervasive global challenge.”

It believes that atmospheric water generation technology is a “promising emergency solution that can immediately generate drinkable water using moisture in the air.”

However, it cautions that generally the technology is not cheap, and estimates that one mid-sized commercial unit can cost between $30,000 and $50,000.