Australia proposes ‘world-leading’ ban on social media for children under 16

SYDNEY — The Australian government will legislate for a ban on social media for children under 16, Prime Minister Anthony Albanese said on Thursday, in what it calls a world-leading package of measures that could become law late next year.

Australia is trialing an age-verification system to help block children from accessing social media platforms, part of a package of measures that includes some of the toughest controls imposed by any country to date.

“Social media is doing harm to our kids and I’m calling time on it,” Albanese told a news conference.

Albanese cited the risks to physical and mental health of children from excessive social media use, in particular the risks to girls from harmful depictions of body image, and misogynist content aimed at boys.

“If you’re a 14-year-old kid getting this stuff, at a time where you’re going through life’s changes and maturing, it can be a really difficult time, and what we’re doing is listening and then acting,” he said.

A number of countries have already vowed to curb social media use by children through legislation, though Australia’s policy is one of the most stringent.

No jurisdiction has yet used age-verification methods such as biometrics or government identification to enforce a social media age cut-off; both are among the approaches Australia is trialing.

Australia’s proposals include other world firsts: the highest age limit set by any country, with no exemption for parental consent and none for pre-existing accounts.

Legislation will be introduced into the Australian parliament this year, with the laws coming into effect 12 months after being passed by lawmakers, Albanese said.

The opposition Liberal Party has expressed support for a ban.

“The onus will be on social media platforms to demonstrate they are taking reasonable steps to prevent access,” Albanese said. “The onus won’t be on parents or young people.”

“What we are announcing here and what we will legislate will be truly world-leading,” Communications Minister Michelle Rowland said.

Rowland said affected platforms would include Meta Platforms’ Instagram and Facebook, as well as ByteDance’s TikTok and Elon Musk’s X. Alphabet’s YouTube would likely also fall within the scope of the legislation, she added.

TikTok declined to comment, while Meta, Alphabet and X did not respond to requests for comment.

The Digital Industry Group, a representative body that includes Meta, TikTok, X and Alphabet’s Google as members, said the measure could encourage young people to explore darker, unregulated parts of the internet while cutting their access to support networks.

“Keeping young people safe online is a top priority … but the proposed ban for teenagers to access digital platforms is a 20th Century response to 21st Century challenges,” said DIGI Managing Director Sunita Bose.

“Rather than blocking access through bans, we need to take a balanced approach to create age-appropriate spaces, build digital literacy and protect young people from online harm,” she added.

France last year proposed a ban on social media for those under 15, though users were able to avoid the ban with parental consent.

The United States has for decades required technology companies to seek parental consent to access the data of children under 13, leading to most social media platforms banning those under that age from accessing their services.

French families sue TikTok over alleged failure to remove harmful content

PARIS — Seven French families have filed a lawsuit against social media giant TikTok, accusing the platform of exposing their adolescent children to harmful content that led to two of them taking their own lives at 15, their lawyer said on Monday.

The lawsuit alleges TikTok’s algorithm exposed the seven teenagers to videos promoting suicide, self-harm and eating disorders, lawyer Laure Boutron-Marmion told broadcaster franceinfo.

The families are taking joint legal action in the Créteil judicial court. Boutron-Marmion said it was the first such grouped case in Europe.

“The parents want TikTok’s legal liability to be recognized in court,” she said, adding: “This is a commercial company offering a product to consumers who are, in addition, minors. They must, therefore, answer for the product’s shortcomings.”

TikTok, like other social media platforms, has long faced scrutiny over the policing of content on its app.

Like Meta’s Facebook and Instagram, TikTok faces hundreds of lawsuits in the U.S. accusing the platforms of enticing and addicting millions of children, damaging their mental health.

TikTok could not immediately be reached for comment on the allegations.

The company has previously said it takes issues linked to children’s mental health seriously. CEO Shou Zi Chew told U.S. lawmakers this year that the company has invested in measures to protect young people who use the app.

US tech firms warn Vietnam’s planned law would hamper data centers, social media

HANOI, Vietnam — U.S. tech companies have warned Vietnam’s government that a draft law to tighten rules on data protection and limit data transfers abroad would hamper social media platforms and data center operators from growing their businesses in the country.

The Southeast Asian nation with a population of 100 million is one of the world’s largest markets for Facebook and other online platforms, and is aiming to exponentially increase its data center industry with foreign investment in coming years.

The draft law “will make it challenging for tech companies, social media platforms and data center operators to reach the customers that rely on them daily,” said Jason Oxman, who heads the Information Technology Industry Council (ITI), a trade association representing big tech companies including Meta, Google and data center operator Equinix.

The draft law, now being discussed in parliament, is also designed to ease authorities’ access to information and was championed by the Ministry of Public Security, Vietnamese and foreign officials said.

Neither the Ministry of Public Security nor the information ministry responded to email and phone requests for comment.

Vietnam’s parliament is discussing the law in its current month-long session and is scheduled to pass it on Nov. 30 “if eligible,” according to its program, which is subject to changes.

Existing Vietnamese regulations already limit cross-border transfers of data under some circumstances, but they are rarely enforced.

It is unclear how the new law, if adopted, would impact foreign investment in the country. Reuters reported in August that Google was considering setting up a large data center in southern Vietnam before the draft law was presented in parliament.

Research firm BMI had said Vietnam could become a major regional player in the data center industry as limits on foreign ownership are set to end next year.

Among the provisions of the draft law is prior authorization for the transfer overseas of “core data” and “important data,” which are currently vaguely defined.

“That will hinder foreign business operations,” Oxman told Reuters.

Tech companies and other firms favor cross-border data flows to cut costs and improve services, but multiple jurisdictions, including the European Union and China, have limited those transfers, saying that allows them to better protect privacy and sensitive information.

Under the draft law, companies will have to share data with Vietnam’s ruling Communist Party and state organizations in multiple, vaguely defined cases including for “fulfilling a specific task in the public interest.”

The U.S. tech industry has raised concerns with Vietnamese authorities over “the undue expansion of government access to data,” Oxman said.

The new law “would cause significant compliance challenges for most private sector companies,” said Adam Sitkoff, executive director of the American Chamber of Commerce in Hanoi, noting talks were underway to persuade authorities to “reconsider the rushed legislative process” for the law.

California attempts to regulate election deepfakes

The state of California has passed several laws attempting to regulate artificial intelligence, including AI used to create realistic-looking but manipulated audio or video, known as a deepfake. The aim is to counter misinformation in this U.S. election season, but the laws have raised concerns about free speech. From California, Genia Dulot has our story.

Thousands of passenger flight signals jammed over war zones in Ukraine, Middle East

The navigation systems of thousands of passenger aircraft are being disrupted every day as they fly close to conflict zones, according to researchers. They are warning that the blocking or “spoofing” technology behind it could put lives at risk. Henry Ridgwell has more from London.

Residents in Ethiopia’s Oromia region report network disruptions as government forces fight rebels

ADAMA, ETHIOPIA — Residents in Ethiopia’s Oromia region say access to phone communication and internet service has been disrupted for months as government forces fight against two rebel groups.

The disruption of mobile phone calls and internet data has been concentrated in conflict-hit Oromia zones, where government forces have engaged in fighting against the Oromo Liberation Army, or the OLA.

A resident of Wadera Wereda in the Guji Zone of southern Oromia, who spoke to VOA on condition of anonymity for safety reasons, said phone and internet data connections have been cut in his area due to the fighting.

He said there was fighting on Monday and the week before in Wadera Wereda, where regional security personnel including local police were killed. Other residents confirmed the same clashes without giving specific casualty figures. Local authorities could not be reached for comment.

The data outage and network disruptions were also reported in the North Shewa Zone administration of Oromia region.

“The zone has been under network blockade for the last two months due to the insurgency,” said a second resident from Dera Wereda in North Shewa, who also sought anonymity due to safety reasons.

Residents also said people who lost their SIM cards or wanted replacements could not get them at local telecom offices because the conflict has disrupted supplies. Network disruptions have also affected schools in the area that access materials online.

The second resident said his school had to transfer all of its grade-12 students this year to a neighboring wereda due to the lack of service.

“We cannot manage to send their details and credentials to relevant bodies,” with the downed service, he told VOA in a phone interview.

Journalists have waited for hours to speak to residents in Kelem Welega Zone, whose network is down during morning hours. One resident traveled to Dembi Dolo, about 620 kilometers west of the capital, Addis Ababa, to speak with the media about the network outages.

The disruptions date to the start of the yearslong fighting between federal forces and the OLA in 2019. In one of the deadliest recent attacks, suspected OLA fighters killed as many as 17 pro-government militiamen in the West Shewa Zone of Oromia on October 17, according to residents and local officials.

A second rebel group, Fano, is fighting in the neighboring Amhara region, and the conflict spills across the border between the two regions.

Residents say as the intensity of the clashes increases, the network situation becomes worse, as the government resorts to shutting down communication.

“It’s a very unfortunate tactic that is usually used by governments that are struggling with legitimacy issues,” said Horn of Africa security analyst Samira Gaid.

“It only serves to convince the masses that the government has something to hide. Rather than controlling the narrative or news reporting, it elevates mistrust in government, adds to misinformation and disinformation, and contributes to groups becoming more covert with their communications,” she told VOA.

Ethiopia’s state-run communication outlets have not responded to repeated VOA requests for comment.

Speaking at a press conference in Addis Ababa last month, Frehiwot Tamiru, CEO of Ethio Telecom, admitted that such problems exist in conflict areas. She declined to give specific answers, referring reporters to other government entities.

In June, the company said it had repaired and restored service to dozens of mobile stations that had been damaged in the western region of the country.

This story originated in VOA’s Horn of Africa Service.

Chinese online retailer Temu faces EU probe into rogue traders, illegal goods

LONDON — The European Union is investigating Chinese online retailer Temu over suspicions it’s failing to prevent the sale of illegal products, the 27-nation bloc’s executive arm said on Thursday.

The European Commission opened its investigation five months after adding Temu to the list of “very large online platforms” needing the strictest level of scrutiny under the bloc’s Digital Services Act. It’s a wide-ranging rulebook designed to clean up online platforms and keep internet users safe, with the threat of hefty fines.

Temu began entering Western markets only in the past two years and has grown in popularity by offering cheap goods, from clothing to home products, shipped from sellers in China. The company, owned by PDD Holdings, which also runs the popular Chinese e-commerce platform Pinduoduo, now has 92 million users in the EU.

Temu said it “takes its obligations under the DSA seriously, continuously investing to strengthen our compliance system and safeguard consumer interests on our platform.”

“We will cooperate fully with regulators to support our shared goal of a safe, trusted marketplace for consumers,” the company said in a statement.

European Commission Executive Vice President Margrethe Vestager said in a press release that Brussels wants to make sure products sold on Temu’s platform “meet EU standards and do not harm consumers.”

EU enforcement will “guarantee a level playing field and that every platform, including Temu, fully respects the laws that keep our European market safe and fair for all,” she said.

The commission’s investigation will look into whether Temu’s systems are doing enough to crack down on “rogue traders” selling “noncompliant goods” amid concerns that they are able to swiftly reappear after being suspended. The commission didn’t single out specific illegal products that were being sold on the platform.

Regulators are also examining the risks from Temu’s “addictive design,” including “game-like” reward programs, and what the company is doing to mitigate those risks.

Also under investigation is Temu’s compliance with two other DSA requirements: giving researchers access to data and transparency on recommender systems. Companies must detail how they recommend content and products and give users at least one option to see recommendations that are not based on their personal profile and preferences.
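In practice, that last requirement means a platform must be able to rank items without consulting the user’s profile at all. The Python sketch below is a minimal illustration of such a toggle, with invented fields and data; it is not Temu’s or any platform’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    popularity: float  # global, profile-free signal (e.g., sales rank)
    affinity: dict     # hypothetical per-user interest scores

def rank(items, user_id=None, personalized=True):
    """Rank items; with personalized=False, ignore the user profile
    entirely and fall back to a global signal shared by all users."""
    if personalized and user_id is not None:
        key = lambda it: it.affinity.get(user_id, 0.0)
    else:
        key = lambda it: it.popularity  # same ordering for every user
    return sorted(items, key=key, reverse=True)

catalog = [
    Item("garden lights", 0.7, {"u1": 0.9}),
    Item("phone case", 0.9, {"u1": 0.2}),
]
print([i.name for i in rank(catalog, "u1")])                # profile-based
print([i.name for i in rank(catalog, personalized=False)])  # profile-free
```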

Temu now has the chance to respond to the commission, which can decide to impose a fine or drop the case if the company makes changes or can prove that the suspicions aren’t valid.

Brussels has been cracking down on tech companies since the DSA took effect last year. It has also opened an investigation into another e-commerce platform, AliExpress, as well as social media sites such as X and TikTok, which bowed to pressure after the commission demanded answers about a new rewards feature.

Temu has also faced scrutiny in the United States, where a congressional report last year accused the company of failing to prevent goods made by forced labor from being sold on its platform.

Musk’s X ineffective against surge of US election misinformation, report says

The crowd-sourced fact-checking feature of Elon Musk’s X, Community Notes, is “failing to counter false claims” about the U.S. election, the Center for Countering Digital Hate (CCDH) said in a report Wednesday.

Of the 283 misleading posts that CCDH analyzed on the platform, 209, or 74%, did not show accurate Community Notes correcting false and misleading claims about the elections to all X users, the report said.

“The 209 misleading posts in our sample that did not display available Community Notes to all users have amassed 2.2 billion views,” CCDH said, urging the company to invest in safety and transparency.

X did not immediately respond to a Reuters request for comment.

X launched its “Community Notes” feature last year, which allows users to comment on posts to flag false or misleading content, in effect crowd-sourcing fact checking to users rather than a dedicated team of fact checkers.

The report comes after X earlier this year lost a lawsuit it had brought against CCDH over the group’s research documenting a rise in hate speech on the social media platform.

Social media platforms, including X, have been under scrutiny for years over the spread of misinformation and conspiracy theories, including false information about elections and vaccines.

Secretaries of state from five U.S. states urged billionaire Musk in August to fix X’s AI chatbot, saying it had spread misinformation related to the November 5 election.

Musk, who endorsed Republican presidential candidate Donald Trump in July, himself has been accused of spreading misinformation. Polls show Trump is in a tight race with Democratic Vice President Kamala Harris.

China launches new crew to its space station as it seeks to expand exploration

JIUQUAN, China — China declared a “complete success” after it launched a new three-person crew to its orbiting space station early Wednesday as the country seeks to expand its exploration of outer space with missions to the moon and beyond.

The Shenzhou-19 spaceship carrying the trio blasted off from the Jiuquan Satellite Launch Center in northwest China at 4:27 a.m. local time atop a Long March-2F rocket, the backbone of China’s crewed space missions.

“The crew condition is good and the launch has been successful,” the state broadcaster China Central Television announced.

China built its own space station after being excluded from the International Space Station, mainly because of U.S. concerns over the space program’s control by the People’s Liberation Army, the Chinese Communist Party’s military arm. China’s moon program is part of a growing rivalry with the U.S. and others, including Japan and India.

The team of two men and one woman will replace the astronauts who have lived on the Tiangong space station for the last six months. They are expected to stay until April or May of next year.

The new mission commander, Cai Xuzhe, went to space in the Shenzhou-14 mission in 2022, while the other two, Song Lingdong and Wang Haoze, are first-time space travelers, born in the 1990s.

Song was an air force pilot and Wang an engineer with the China Aerospace Science and Technology Corporation. Wang will be the crew’s payload specialist and the third Chinese woman aboard a crewed mission.

Besides putting a space station into orbit, the Chinese space agency has landed an explorer on Mars. It aims to put a person on the moon before 2030, which would make China the second nation after the United States to do so. It also plans to build a research station on the moon and has already returned rock and soil samples from the little-explored far side of the moon in a global first.

The U.S. still leads in space exploration and plans to land astronauts on the moon for the first time in more than 50 years, though NASA pushed the target date back to 2026 earlier this year.

The new crew will perform spacewalks and install new equipment to protect the station from space debris, some of which was created by China.

According to NASA, large pieces of debris have been created by “satellite explosions and collisions.” China’s firing of a rocket to destroy a redundant weather satellite in 2007 and the “accidental collision of American and Russian communications satellites in 2009 greatly increased the amount of large debris in orbit,” it said.

China’s space authorities say they have measures in place in case their astronauts have to return to Earth earlier than planned.

China launched its first crewed mission in 2003, becoming only the third nation to do so after the former Soviet Union and the United States. The space program is a source of enormous national pride and a hallmark of China’s technological advances over the past two decades.

Companies find solutions to power EVs in energy-challenged Africa

NAIROBI, KENYA — Some companies are coming up with creative ways of making electric vehicles a more realistic option in power-challenged areas of Africa.

Countries in Africa have been slow adopters of battery-powered vehicles because finding reliable sources of electricity is a challenge in many places.

The Center for Strategic and International Studies described Africa as “the most energy-deficient continent in the world” and said that any progress made in electricity access in the last five years has been reversed by the pandemic and population growth.

Onesmus Otieno, for one, regrets trading in his diesel-powered motorbike for an electric one. He earns his living making deliveries and ferrying passengers around Nairobi, Kenya’s capital, with his bike.

The two-wheeled taxis popularly known as “boda boda” in Swahili are commonly used in Kenya and throughout Africa. Kenyan authorities recently introduced the electric bikes to phase out diesel ones. Otieno is among the few riders who adopted them, but he said finding a place to charge his bike has been a headache.

Sometimes the battery dies while he is carrying a customer, he said, while a charging station is far away. So, he has to end that trip and cancel other requests.

To address the problem, Chinese company Beijing Sebo created a mobile application that allows users of EVs to request a charge through the app. Then, charging equipment is brought to the user’s location.

Lin Lin, general manager for overseas business of Beijing Sebo, said because the company produces the equipment, it can control costs.

“We can deploy the product … in any country they need, and they don’t need to build or fix charging stations,” Lin said. “We can move to the location of the user, and we can bring electricity to electric vehicles.”

Lin said the mobile charging vans use electricity generated from solid waste and can charge up to five cars at one time for about $7 per vehicle — less for a motorbike.

Countries in Africa have been slow to adopt electric vehicles because there is a lack of infrastructure to support the technology, analysts say. The cost of EVs is another barrier, said clean energy expert Ajay Mathur.

“Yes, the capital cost is more,” Mathur said. “The first cost is more, but you recover it in about six years or so. We are at the beginning of the revolution.”

Electric motor bike maker Spiro offers a battery-swapping service in several countries to address the lack of EV infrastructure.

But studies show that for many African countries, access to reliable and affordable electricity remains a challenge. There are frequent power cuts, outages and voltage fluctuations in several regions.

Companies such as Beijing Sebo and Spiro are finding ways around the lack of power in Africa.

“We want to solve the problem of charging anxiety anywhere you are,” Lin said.

This story originated in VOA’s Mandarin Service.

US finalizes rule restricting investment in Chinese tech firms

The Treasury Department on Monday finalized a new rule meant to prevent U.S.-based people and companies from investing in the development of a range of advanced technologies in China, thereby preventing Beijing from accessing cutting-edge expertise and equipment.

The rule, which implements an executive order signed by President Joe Biden in 2023, focuses particularly on advanced semiconductors and microelectronics and the equipment used to make them, technology used in quantum computing, and artificial intelligence systems.

When it takes effect on January 2, the rule will prohibit certain transactions in semiconductors, microelectronics and artificial intelligence. It also establishes mandatory reporting requirements for transactions that are not banned outright.

In the field of quantum computing, the rule is more far-reaching, banning all transactions “related to the development of quantum computers or production of any critical components required to produce a quantum computer,” as well as the development of other quantum systems. Unlike in AI and semiconductors, no quantum-related transactions may proceed simply by being reported to the government.

The rule also announced the creation of the Office of Global Transactions within Treasury’s Office of Investment Security, which will administer the Outbound Investment Security Program.

Justification and opposition

“Artificial intelligence, semiconductors, and quantum technologies are fundamental to the development of the next generation of military, surveillance, intelligence and certain cybersecurity applications like cutting-edge code-breaking computer systems or next generation fighter jets,” Paul Rosen, assistant secretary for investment security, said in a statement.

“This Final Rule takes targeted and concrete measures to ensure that U.S. investment is not exploited to advance the development of key technologies by those who may use them to threaten our national security,” Rosen said.

Beijing has repeatedly complained about U.S. technology policy, arguing that the U.S. is dedicated to preventing China’s rise as a global power. In a press conference on Tuesday, Chinese Foreign Ministry spokesperson Lin Jian reiterated China’s longstanding objections to U.S. efforts to withhold advanced technology from Chinese companies.

“China deplores and rejects the U.S.’s Final Rule to curb investment in China,” Lin said. “China has protested to the U.S. and will take all measures necessary to firmly defend its lawful rights and interests.”

Not just equipment

The language of the rule frequently notes that it applies to transactions with “countries of concern,” but the specific language in the text makes it plain that the targets of the rule are companies and individuals doing business in mainland China as well as the special administrative regions of Hong Kong and Macao.

The Final Rule’s ban on transactions is not limited to the physical transfer of finished goods and machinery in the specified fields. Explanatory documents released on Monday make it clear that several intangible benefits are also covered.

Countries of concern “are exploiting or have the ability to exploit certain United States outbound investments, including certain intangible benefits that often accompany United States investments and that help companies succeed,” an informational statement accompanying the rule said. “These intangible benefits include enhanced standing and prominence, managerial assistance, investment and talent networks, market access, and enhanced access to additional financing.”

Signaling to US companies

The onus will be on U.S. companies to comply with the new rule, Stephen Ezell, vice president for global innovation policy at the Information Technology & Innovation Foundation, told VOA.

“This is the U.S. government signaling to U.S. entities and investors that they need to think twice about making investments on the prohibited transaction side of the equation that would advance China’s capabilities in these areas,” Ezell said.

He added that the impact of the rule on investment in Chinese technology companies would have effects far beyond any reduction in funding.

“It’s not just the dollars,” he said. “A key target here is getting at the intangible benefits that come with those investments, such as managerial capability, talent networks.” He described that loss as “very significant.”

Closing loopholes

In an email exchange with VOA, Daniel Gonzales, a senior scientist at the RAND Corporation, explained that the purpose of the rule was, in part, to prevent U.S. investment firms from supporting Chinese firms in the development of certain kinds of technology.

“These rules were put in place after many episodes where U.S. [venture capital] companies helped to transfer or nurture advanced technologies that have relevant military capabilities,” Gonzales wrote. “One particular case was that of TikTok and its AI algorithms, which were developed with the help of Sequoia Capital of California.”

Sequoia did not break any laws in assisting TikTok, Gonzales said. But “it has since become known to U.S. authorities that TikTok does possess an AI algorithm that has a variety of applications, some of which have military implications. This new rule is intended to close this loophole.”

Gonzales said the U.S. government’s concern with quantum computing is also born of worries about Chinese offensive capabilities.

“Chinese researchers are working on developing quantum computer algorithms that can break encryption codes used by the U.S. government and the U.S. financial sector to protect private and confidential information,” he wrote. “China has several startup companies working to develop more powerful quantum computers. This new rule is intended to prevent the leakage of U.S. quantum technology to China through U.S. VCs.”

Cryptocurrency promoters on X amplify China-aligned disinformation

WASHINGTON — A group of accounts that regularly promote cryptocurrency-related content on X has amplified messages from Chinese official accounts and from “Spamouflage,” a China-linked disinformation operation that covertly pushes Beijing’s propaganda at Western social media users.

Spamouflage accounts are bots pretending to be authentic users that promote narratives aligned with Beijing’s talking points on issues such as the COVID-19 pandemic, China’s human rights record, the war in Ukraine and the conflict in Gaza.

The cryptocurrency accounts were discovered by a joint investigation between VOA Mandarin and DoubleThink Lab, a Taiwan-based social media analytics firm.

DoubleThink Lab’s analysis uncovered 1,153 accounts that primarily repost news and promotions about cryptocurrency and are likely bots deployed by engagement boosting services to raise their clients’ visibility on social media.

The findings suggest that some official Chinese X accounts and the Spamouflage operation have been using the same amplification services, which further indicates a link between the Chinese state and Spamouflage.

Beijing has repeatedly denied any attempts to spread disinformation in the United States and other countries.

From cryptocurrency to Spamouflage

A review of the accounts in the VOA-DTL investigation shows that the majority of the posts were about cryptocurrency. Users regularly repost content from some of the biggest cryptocurrency accounts on X, such as ChainGPT and LondonRealTV, which belongs to British podcaster Brian Rose.

But these accounts have also shared content from at least 17 Spamouflage accounts that VOA and DTL have been tracking.

VOA recently reported on Spamouflage networks’ adoption of antisemitic tropes and conspiracy theories.

Spamouflage was first detected by Graphika, a U.S.-based social media analytics firm, which coined the name because the operation’s political posts were interspersed with innocuous but spam-like content, such as TikTok videos and scenery photographs, camouflaging the operation’s goal of influencing public opinion.

All of the cryptocurrency accounts have reposted content from a Spamouflage account named “Watermelon cloth” at least once. A review revealed that “Watermelon cloth” regularly posted content critical of social inequality in the United States and of the Ukrainian and Israeli governments, while praising China’s economic achievements and its leadership in solving international issues.

In one post, the account peddled the conspiracy theory that Washington was developing biological weapons in Ukraine.

“The outbreak of the Russo-Ukrainian war brought out an ‘unspeakable secret’ in the United States. US biological laboratory in Ukraine exposed,” the post said. X recently suspended Watermelon cloth’s account.

Since Watermelon cloth’s first post in March 2023, its content has been reposted nearly 2,600 times, half of them by the cryptocurrency accounts. Most of the remaining reposts were by Spamouflage or other botlike accounts, according to data collected by DoubleThink Lab. The investigation also found that the cryptocurrency accounts’ amplification nearly tripled a post’s views on average.

Robotic behavior

All 1,153 cryptocurrency accounts have demonstrated patterns that strongly suggest they are bots instead of human users.

They were created in batches on specific dates. On April 6 alone, 152 of them were registered on X.

Over 99% of their content was reposts. A study of their repost behavior on September 24 showed that all of the reposts took place within the first hour after the original content was posted, and that each wave of reposts was completed within six seconds, an indication of coordinated action.
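Those timing patterns are simple enough to screen for automatically. The Python sketch below is a minimal illustration, not DoubleThink Lab’s actual method: it flags waves in which several distinct accounts repost the same item within a few seconds, with the sample records and thresholds invented for the example.

```python
from collections import defaultdict

# Each record: (account_id, original_post_id, repost_unix_timestamp).
# Hypothetical sample data; real inputs would come from platform exports.
reposts = [
    ("acct_001", "post_A", 1727164800.0),
    ("acct_002", "post_A", 1727164802.5),
    ("acct_003", "post_A", 1727164804.9),
    ("acct_004", "post_A", 1727171000.0),
]

BURST_WINDOW_SECONDS = 6   # DTL observed waves completing within six seconds
MIN_BURST_SIZE = 3         # assumed threshold for calling a wave coordinated

def find_coordinated_posts(records):
    """Group reposts by original post, then slide a short time window
    over each timeline and flag windows with many distinct accounts."""
    by_post = defaultdict(list)
    for account, post, ts in records:
        by_post[post].append((ts, account))

    flagged = {}
    for post, events in by_post.items():
        events.sort()
        start = 0
        for end in range(len(events)):
            # Shrink the window until it spans at most BURST_WINDOW_SECONDS.
            while events[end][0] - events[start][0] > BURST_WINDOW_SECONDS:
                start += 1
            accounts = {account for _, account in events[start:end + 1]}
            if len(accounts) >= MIN_BURST_SIZE:
                # Keep the largest wave seen for this post.
                if len(accounts) > len(flagged.get(post, ())):
                    flagged[post] = sorted(accounts)
    return flagged

print(find_coordinated_posts(reposts))
# {'post_A': ['acct_001', 'acct_002', 'acct_003']}
```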

At least one such account offered engagement boosting services in its bio with two Telegram links for interested customers. VOA Mandarin contacted the service seller through the links but did not receive a response.

Chinese official accounts amplified

The cryptocurrency group has also promoted posts from Chinese official accounts, including several that belong to Chinese local governments, state media and at least one Chinese diplomat.

The Jinan International Communication Center was the third most amplified account whose posts the cryptocurrency group shared. Its content was reposted over 2,200 times.

The Jinan International Communication Center was established in 2022 to promote the history and culture of Jinan, capital of the Shandong province in Eastern China, to the rest of the world as part of Beijing’s “Tell China’s Story Well” propaganda initiative.

A local state media account boasted in an article last year that Jinan was the third most influential Chinese city on X, which was then called Twitter.

Other Chinese cities, including Xiamen and Ningbo, and provinces, such as Anhui and Jilin, had their official accounts amplified by the cryptocurrency group.

Other amplified accounts include Xi’s Moments, a state media project propagating Chinese leader Xi Jinping’s speeches and official activities; China Retold, a media group organized by pro-Beijing politicians in Hong Kong; and the English-language state-owned newspaper China Daily.

Zhang Heqing, a cultural counselor at the Chinese Embassy in Pakistan, was the sole Chinese diplomat whose posts were promoted by the cryptocurrency group.

DoubleThink Lab wrote in an analysis of the data and findings that Chinese official accounts and the Spamouflage operation have “likely” used the same content boosting services, which explains why they were amplified by the same group of cryptocurrency accounts.

The Chinese Embassy in Washington, D.C., declined to answer specific questions about what appears to be a connection between the cryptocurrency group, Chinese official accounts and Spamouflage.

But in a written statement, spokesperson Liu Pengyu rejected the notion that China has used disinformation campaigns to influence social media users in the U.S.

“Such allegations are full of malicious speculations against China, which China firmly opposes,” the statement said.

US military, intelligence agencies ordered to embrace AI

WASHINGTON — The Pentagon and U.S. intelligence agencies have new marching orders — to more quickly embrace and deploy artificial intelligence as a matter of national security.

U.S. President Joe Biden signed the directive, part of a new national security memorandum, on Thursday. The goal is to make sure the United States remains a leader in AI technology while also aiming to prevent the country from falling victim to AI tools wielded by adversaries like China.

The memo, which calls AI “an era-defining technology,” also lays out guidelines that the White House says are designed to prevent the use of AI to harm civil liberties or human rights.

The new rules will “ensure that our national security agencies are adopting these technologies in ways that align with our values,” a senior administration official told reporters, speaking about the memo on the condition of anonymity before its official release.

The official added that a failure to more quickly adopt AI “could put us at risk of a strategic surprise by our rivals.”

“Because countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities using artificial intelligence, it’s particularly imperative that we accelerate our national security community’s adoption and use of cutting-edge AI,” the official said.

But some civil liberties advocates are raising concerns that the new guidelines lack sufficient safeguards.

“Despite acknowledging the considerable risks of AI, this policy does not go nearly far enough to protect us from dangerous and unaccountable AI systems,” according to a statement from the American Civil Liberties Union’s Patrick Toomey.

“National security agencies must not be left to police themselves as they increasingly subject people in the United States to powerful new technologies,” said Toomey, who serves as deputy director of ACLU’s National Security Project.

The new guidelines build on an executive order issued last year that directed all U.S. government agencies to craft policies for how they intend to use AI.

They also seek to address issues that could hamper Washington’s ability to more quickly incorporate AI into national security systems.

Provisions outlined in the memo call for a range of actions to protect the supply chains that produce the advanced computer chips critical for AI systems. The memo also calls for additional steps to combat economic espionage and prevent U.S. adversaries or non-U.S. companies from stealing critical innovations.

“We have to get this right, because there is probably no other technology that will be more critical to our national security in the years ahead,” said White House National Security Adviser Jake Sullivan, addressing an audience at the National Defense University in Washington on Thursday.

“The stakes are high,” he said. “If we don’t act more intentionally to seize our advantages, if we don’t deploy AI more quickly and more comprehensively to strengthen our national security, we risk squandering our hard-earned lead.

“We could have the best team but lose because we didn’t put it on the field,” he added.

Although the memo prioritizes the implementation of AI technologies to safeguard U.S. interests, it also directs officials to work with allies and others to create a stable framework for use of AI technologies across the globe.

“A big part of the national security memorandum is actually setting out some basic principles,” Sullivan said, citing ongoing talks with the G-7 and AI-related resolutions at the United Nations.

“We need to ensure that people around the world are able to seize the benefits and mitigate the risks,” he said.

AI decodes oinks and grunts to keep pigs happy in Danish study

VIPPEROD, Denmark — European scientists have developed an artificial intelligence algorithm capable of interpreting pig sounds, aiming to create a tool that can help farmers improve animal welfare.

The algorithm could potentially alert farmers to negative emotions in pigs, thereby improving their well-being, according to Elodie Mandel-Briefer, a behavioral biologist at the University of Copenhagen who is co-leading the study.

The scientists, from universities in Denmark, Germany, Switzerland, France, Norway and the Czech Republic, used thousands of recorded pig sounds in different scenarios, including play, isolation and competition for food, to find that grunts, oinks, and squeals reveal positive or negative emotions.

While many farmers already have a good understanding of the well-being of their animals by watching them in the pig pen, existing tools mostly measure their physical condition, said Mandel-Briefer.

“Emotions of animals are central to their welfare, but we don’t measure it much on farms,” she said.

The algorithm demonstrated that pigs kept in outdoor, free-range or organic farms with the ability to roam and dig in the dirt produced fewer stress calls than conventionally raised pigs. The researchers believe that this method, once fully developed, could also be used to label farms, helping consumers make informed choices.

“Once we have the tool working, farmers can have an app on their phone that can translate what their pigs are saying in terms of emotions,” Mandel-Briefer said.

Short grunts typically indicate positive emotions, while long grunts often signal discomfort, such as when pigs push each other by the trough. High-frequency sounds like screams or squeals usually mean the pigs are stressed, for instance when they are in pain, fighting or separated from each other.
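Those rules of thumb can be expressed as a simple feature-based classifier. The Python sketch below only illustrates the mapping the researchers describe, not their actual machine-learned model; the duration and frequency thresholds are invented for the example.

```python
import numpy as np

# Illustrative thresholds only; the study's real model is learned from data.
MAX_SHORT_GRUNT_S = 0.4      # assumed cutoff between short and long grunts
SQUEAL_MIN_HZ = 2000.0       # assumed floor for "high-frequency" calls

def classify_call(samples: np.ndarray, sample_rate: int) -> str:
    """Label one pig call from its duration and dominant frequency."""
    duration = len(samples) / sample_rate
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak_hz = freqs[np.argmax(spectrum)]

    if peak_hz >= SQUEAL_MIN_HZ:
        return "stress (scream/squeal)"    # pain, fights, separation
    if duration <= MAX_SHORT_GRUNT_S:
        return "positive (short grunt)"
    return "discomfort (long grunt)"       # e.g., jostling at the trough

# Usage: a synthetic 0.2 s, 300 Hz tone stands in for a short grunt.
rate = 16000
t = np.linspace(0.0, 0.2, int(rate * 0.2), endpoint=False)
print(classify_call(np.sin(2 * np.pi * 300.0 * t), rate))  # -> positive
```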

The scientists used these findings to create an algorithm that employs AI.

“Artificial intelligence really helps us to both process the huge amount of sounds that we get, but also to classify them automatically,” Mandel-Briefer said.

China space plan highlights commitment to space exploration, analysts say

Chinese officials recently released a 25-year space exploration plan that details five major scientific themes and 17 priority areas for scientific breakthroughs, with one goal: to make China a world leader in space by 2050 and a key competitor with the U.S. for decades to come.

Last week, the Chinese Academy of Sciences, the China National Space Administration, and the China Manned Space Agency jointly released a space exploration plan for 2024 through 2050.

It includes searching for extraterrestrial life, exploring Mars, Venus and Jupiter, sending space crews to the moon and building an international lunar research station by 2035.

Clayton Swope, deputy director of the Aerospace Security Project at the Center for Strategic and International Studies, says the plan highlights China’s long-term commitment and answers some lingering questions as well.

“I think a lot of experts have wondered if China would continue to invest in space, particularly in science and exploration, given a lot of economic uncertainties in China … but this is a sign that they’re committed,” Swope said.

The plan reinforces a “commitment to really look at space science and exploration in the long term and not just short term,” he added.

The plan outlines Beijing’s goal to send astronauts to the moon by 2030, retrieve the first samples from Mars and complete a mission to the Jupiter system in the next few years. It also outlines three phases of development, each with specific goals for space exploration and key scientific discoveries.

The extensive plan is not only a statement that Beijing can compete with the U.S. in high-tech industries, it is also a way of boosting national pride, analysts say. 

“Space in particular has a huge public awareness, public pride,” says Nicholas Eftimiades, a retired senior intelligence officer and senior fellow at the Atlantic Council, a Washington-based think tank. “It emboldens the Chinese people, gives them a strong sense of nationalism and superiority, and that’s what the main focus of the Beijing government is.”

Swope agrees.

“I think it’s [China’s long-term space plan] a manifestation of China’s interest and desire from a national prestige and honor standpoint to really show that it’s a player on the international stage up there with the United States,” he said.

Antonia Hmaidi, a senior analyst at the Mercator Institute for China Studies, told VOA in an email response that “China’s space focus goes back to the 1960s” and that “China has also been very successful at meeting its own goals and timelines.”

In recent years China has carried out several successful space science missions, including Chang’e-4, which made the world’s first soft landing and roving exploration on the far side of the moon; Chang’e-5, China’s first mission to return samples from the moon; and Tianwen-1, which left the first Chinese imprints on Mars.

In addition to these missions, Beijing has implemented several programs aimed at increasing space-related scientific discovery, particularly through the launch of scientific satellites.

Since 2011, China has developed and launched scientific satellites including Dark Matter Particle Explorer, Quantum Experiments at Space Scale, Advanced Space-based Solar Observatory, and the Einstein Probe.

While China continues to make progress with space exploration and scientific discovery, according to Swope, there is still a way to go before it catches up to the United States.

“China is undeniably the number 2 space power in the world today, behind the United States,” he said. “The United States is still by far the most important in a lot of measures and metrics, including in science and exploration.”

Eftimiades said one key reason the United States has maintained its lead in the space race is the success of Washington’s private, commercial aerospace companies.

“The U.S. private industry has got the jump on China,” Eftimiades said. “There’s no type of industrial control, industrial plan. In fact, Congress and administration shy away from that completely.”

Unlike in the United States, large space entities in China are often state-owned, such as the China Aerospace Science and Technology Corporation, Eftimiades said.

He adds that one advantage of China’s space entities being state-owned is the ability for the Chinese government to “direct their industries toward specific objectives.” At the same time, having bureaucracy involved with state-owned enterprises leads to less “cutting-edge technology.”

This year, China has focused on growing its space presence relative to the U.S. by conducting more orbital launches. 

Beijing planned to conduct 100 orbital launches this year, according to the state-owned China Aerospace Science and Technology Corporation, which was to conduct 70 of them. However, as of October 15, China had completed 48 orbital launches.

Last week, SpaceX announced it had launched its 100th rocket of the year and had another liftoff just hours later. The private company is aiming for 148 launches this year.

Earlier this year the U.S. Department of Defense implemented its first Commercial Space Integration Strategy, which outlines the department’s efforts to take technologies produced in the private sector and apply them to U.S. national security needs.

In a statement on the strategy, the Department of Defense said it would work closely with private and commercial space companies known for innovation and scalable production.

“The strategy is based on the premise that the commercial space sector’s innovative capabilities, scalable production and rapid technology refresh rates provide pathways to enhance the resilience of DOD space capabilities and strengthen deterrence,” the statement said.

Many space technologies have military applications, Swope said.

“A lot of things that are done in space have a dual use, so [space technologies] may be primarily used for scientific purposes, but also could be used to design and build and test some type of weapons technology,” Swope said.

Hmaidi says China’s newest space plan stands out for what it doesn’t have.

“The most interesting and striking part about China’s newest space plan to me was the narrow focus on basic science over military goals,” she told VOA in an email. “However, we know from open-source research that China is also very active in military space development.”

“This plan contains only one part of China’s space planning, namely the part that is unlikely to have direct military utility, while not mentioning other missions with direct military utility like its low-earth orbit internet program,” Hmaidi explained.

Chinese official urges Apple to continue ‘deepening’ presence in China

A top Chinese official has urged tech giant Apple to deepen its presence and investment in innovation in the world’s second largest economy at a time when supply chains and companies are shifting production and operations away from China.

As U.S.-China geopolitical tensions simmer and tech competition between Beijing and Western countries intensifies, foreign investment in China shrank in 2023 to its lowest level in three decades, according to government statistics.

The United States has banned the export of advanced technology to China, and Beijing’s crackdown on spying, carried out in the name of national security, has spooked investors.

On Wednesday, Jin Zhuanglong – China’s Minister for Industry and Information Technology – told Apple CEO Tim Cook he hoped that, “Apple will continue to deepen its presence in the Chinese market,” urging Cook to “increase investment in innovation, grow alongside Chinese firms, and share in the dividends of high-quality investment,” according to a ministry statement.

At the meeting Jin also discussed “Apple’s development in China, network data security management, (and) cloud services,” according to the statement.

China has the world’s largest market for smartphones, and Apple is a leading competitor. But the iPhone maker has been losing market share there to a growing number of local rivals.

Apple ranked sixth among smartphone vendors in China in the second quarter of this year with a 16% market share, down three positions from the same period last year, according to analysis firm Canalys, AFP reported.

Jin also repeated a frequent pledge from officials in Beijing that China would strive to provide a “better environment” for global investors and “continue to expand high-level opening up.”

Cook’s trip to China was his second of the year. His posts on the X-like Chinese social media platform Weibo showed he visited an Apple store in downtown Beijing, visited an organic farm, and toured ancient neighborhoods with prominent artists such as local photographer Chen Man.

Cook added that he met with students from China’s Agricultural University and Zhejiang University to receive feedback on how iPhones and iPads can help farmers adopt more sustainable practices. 

Some information in this report came from Reuters and AFP.

‘Garbage in, garbage out’: AI fails to debunk disinformation, study finds

WASHINGTON — When it comes to combating disinformation ahead of the U.S. presidential elections, artificial intelligence and chatbots are failing, a media research group has found.

The latest audit by the research group NewsGuard found that generative AI tools struggle to effectively respond to false narratives.

In its latest audit of 10 leading chatbots, compiled in September, NewsGuard found that AI repeated misinformation 18% of the time and offered a nonresponse 38.33% of the time, a combined “fail rate” of more than 56%.

“These chatbots clearly struggle when it comes to handling prompt inquiries related to news and information,” said McKenzie Sadeghi, the audit’s author. “There’s a lot of sources out there, and the chatbots might not be able to discern between which ones are reliable versus which ones aren’t.”

NewsGuard has a database of false news narratives that circulate, encompassing global wars and U.S. politics, Sadeghi told VOA.

Every month, researchers feed trending false narratives into leading chatbots in three forms: innocent user prompts, leading questions and “bad actor” prompts. From there, the researchers measure whether the AI repeats, fails to respond to, or debunks the claims.

AI repeats false narratives mostly in response to bad actor prompts, which mirror the tactics used by foreign influence campaigns to spread disinformation. Around 70% of the instances where AI repeated falsehoods were in response to bad actor prompts, as opposed to leading prompts or innocent user prompts.
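NewsGuard has not published code, but the protocol it describes amounts to a small scoring loop: pose each false narrative in the three prompt styles, label each response, and aggregate. The Python sketch below illustrates that structure only; query_chatbot and label_response are placeholders standing in for the vendors’ actual APIs and the analysts’ judgments.

```python
# Minimal sketch of the audit structure NewsGuard describes; the chatbot
# call and the response-labeling step are stand-ins, not NewsGuard's code.

PROMPT_STYLES = ["innocent user", "leading question", "bad actor"]

def query_chatbot(narrative: str, style: str) -> str:
    """Placeholder: a real audit would call each chatbot's API here."""
    raise NotImplementedError

def label_response(response: str) -> str:
    """Placeholder: analysts decide whether a response repeats the false
    narrative ('repeat'), declines ('nonresponse') or debunks it."""
    raise NotImplementedError

def audit(narratives: list[str]) -> dict[str, float]:
    counts = {"repeat": 0, "nonresponse": 0, "debunk": 0}
    total = 0
    for narrative in narratives:
        for style in PROMPT_STYLES:
            counts[label_response(query_chatbot(narrative, style))] += 1
            total += 1
    rates = {k: 100.0 * v / total for k, v in counts.items()}
    # NewsGuard's "fail rate" combines repeats and nonresponses.
    rates["fail"] = rates["repeat"] + rates["nonresponse"]
    return rates
```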

Foreign influence campaigns are able to take advantage of such flaws, according to the Office of the Director of National Intelligence. Russia, Iran and China have used generative AI to “boost their respective U.S. election influence efforts,” according to an intelligence report released last month.

As an example of how easily AI chatbots can be misled, Sadeghi cited a NewsGuard study in June that found AI would repeat Russian disinformation if it “masqueraded” as coming from an American local news source.

From myths about migrants to falsehoods about FEMA, the spread of disinformation and misinformation has been a consistent theme throughout the 2024 election cycle.

“Misinformation isn’t new, but generative AI is definitely amplifying these patterns and behaviors,” Sejin Paik, an AI researcher at Georgetown University, told VOA.

Because the technology behind AI is constantly changing and evolving, it is often unable to detect erroneous information, Paik said. This leads to not only issues with the factuality of AI’s output, but also the consistency.

NewsGuard also found that two-thirds of “high quality” news sites block generative AI models from using their media coverage. As a result, AI often has to learn from lower-quality, misinformation-prone news sources, according to the watchdog.

This can be dangerous, experts say. Much of the non-paywalled media that AI trains on is either “propaganda” or “deliberate strategic communication,” media scholar Matt Jordan told VOA.

“AI doesn’t know anything: It doesn’t sift through knowledge, and it can’t evaluate claims,” Jordan, a media professor at Penn State, told VOA. “It just repeats based on huge numbers.”

AI has a tendency to repeat “bogus” news because statistically, it tends to be trained on skewed and biased information, he added. He called this a “garbage in, garbage out model.”

NewsGuard aims to set the standard for measuring accuracy and trustworthiness in the AI industry through monthly surveys, Sadeghi said.

The sector is growing fast, even as issues around disinformation are flagged. The generative AI industry has experienced monumental growth in the past few years. OpenAI’s ChatGPT currently reports 200 million weekly users, more than double from last year, according to Reuters.

The growth in popularity of these tools leads to another problem in their output, according to Anjana Susarla, a professor in Responsible AI at Michigan State University. Since there is such a high quantity of information going in — from users and external sources — it is hard to detect and stop the spread of misinformation.

Many users are still willing to believe the outputs of these chatbots are true, Susarla said.

“Sometimes, people can trust AI more than they trust human beings,” she told VOA.

The solution to this may be bipartisan regulation, she added. She hopes that the government will encourage social media platforms to regulate malicious misinformation.

Jordan, on the other hand, believes the solution is with media audiences.

“The antidote to misinformation is to trust in reporters and news outlets instead of AI,” he told VOA. “People sometimes think that it’s easier to trust a machine than it is to trust a person. But in this case, it’s just a machine spewing out what untrustworthy people have said.”

Microsoft to allow autonomous AI agent development starting next month

Microsoft will allow customers to build autonomous artificial intelligence agents starting in November, the software giant said on Monday, in its latest move to tap the booming technology.

The company is positioning autonomous agents — programs that, unlike chatbots, require little human intervention — as “apps for an AI-driven world,” capable of handling client inquiries, identifying sales leads and managing inventory.

Other big technology firms such as Salesforce have also touted the potential of such agents, tools that some analysts say could provide companies with an easier path to monetizing the billions of dollars they are pouring into AI.

Microsoft said its customers can use Copilot Studio – an application that requires little knowledge of computer code – to create autonomous agents in public preview from November. It is using several AI models developed in-house and by OpenAI for the agents.

The company is also introducing 10 ready-to-use agents that can help with routine tasks ranging from supply chain management to expense tracking and client communications.

In one demo, McKinsey & Co, which had early access to the tools, created an agent that can manage client inquiries by checking interaction history, identifying the consultant for the task and scheduling a follow-up meeting.

“The idea is that Copilot [the company’s chatbot] is the user interface for AI,” Charles Lamanna, corporate vice president of business and industry Copilot at Microsoft, told Reuters.

“Every employee will have a Copilot, their personalized AI agent, and then they will use that Copilot to interface and interact with the sea of AI agents that will be out there.”

Tech giants are facing investor pressure to show returns on their significant AI investments. Microsoft’s shares fell 2.8% in the September quarter, underperforming the S&P 500, but remain more than 10% higher for the year.

Some concerns have arisen in recent months about the pace of Copilot adoption, with research firm Gartner saying in August that its survey of 152 IT organizations showed the vast majority had not progressed their Copilot initiatives past the pilot stage.

Tiny Caribbean island of Anguilla turns AI boom into digital gold mine

The artificial intelligence boom has benefited chatbot makers, computer scientists and Nvidia investors. It’s also providing an unusual windfall for Anguilla, a tiny island in the Caribbean.

ChatGPT’s debut nearly two years ago heralded the dawn of the AI age and kicked off a digital gold rush as companies scrambled to stake their own claims by acquiring websites that end in .ai.

That’s where Anguilla comes in. The British territory was allotted control of the .ai internet address in the 1990s. It was one of hundreds of obscure top-level domains assigned to individual countries and territories based on their names. While the domains are supposed to indicate a website has a link to a particular region or language, it’s not always a requirement.

Google uses google.ai to showcase its artificial intelligence services while Elon Musk uses x.ai as the homepage for his Grok AI chatbot. Startups like AI search engine Perplexity have also snapped up .ai web addresses, redirecting users from the .com version.

Anguilla’s earnings from web domain registration fees quadrupled last year to $32 million, fueled by the surging interest in AI. The income now accounts for about 20% of Anguilla’s total government revenue. Before the AI boom, it hovered at around 5%.

Anguilla’s government, which uses the gov.ai home page, collects a fee every time an .ai web address is renewed. The territory signed a deal Tuesday with a U.S. company to manage the domains amid explosive demand, but the fees aren’t expected to change. It also gets paid when new addresses are registered and expired ones are sold off. Some sites have fetched tens of thousands of dollars.

The money directly boosts the economy of Anguilla, which is just 91 square kilometers and has a population of about 16,000. Blessed with coral reefs, clear waters and palm-fringed white sand beaches, the island is a haven for uber-wealthy tourists. Still, many residents are underprivileged, and tourism has been battered by the pandemic and, before that, a powerful hurricane.

Anguilla doesn’t have its own AI industry, though Premier Ellis Webster hopes it will one day become a hub for the technology. He said it was just luck that Anguilla, and not nearby Antigua, was assigned the .ai domain in 1995, because both places had those letters in their names.

Webster said the money takes the pressure off government finances and helps fund key projects but cautioned that “we can’t rely on it solely.”

“You can’t predict how long this is going to last,” Webster said in an interview with the AP. “And so I don’t want to have our economy and our country and all our programs just based on this. And then all of a sudden there’s a new fad comes up in the next year or two, and then we are left now having to make significant expenditure cuts, removing programs.”

To help keep up with the explosive growth in domain registrations, Anguilla said Tuesday it’s signing a deal with a U.S.-based domain management company, Identity Digital, to help manage the effort. They said the agreement will mean more revenue for the government while improving the resilience and security of the web addresses.

Identity Digital, which also manages Australia’s .au domain, expects to migrate all .ai domain services to its systems by the start of next year, Identity Digital Chief Strategy Officer Ram Mohan said in an interview.

A local software entrepreneur had helped Anguilla set up its registry system decades earlier.

There are now more than 533,000 .ai web domains, an increase of more than 10-fold since 2018. The International Monetary Fund said in a May report that the earnings will help diversify the economy, “thus making it more resilient to external shocks.”

Webster expects domain-related revenue to rise further and says it could even double this year from last year’s $32 million.

He said the money will finance the airport’s expansion, free medical care for senior citizens and completion of a vocational technology training center at Anguilla’s high school.

The income also provides “budget support” for other projects the government is eyeing, such as a national development fund it could quickly tap for hurricane recovery efforts. The island normally relies on assistance from its administrative power, Britain, which comes with conditions, Webster said.

Mohan said working with Identity Digital will also defend against cyber crooks trying to take advantage of the hype around artificial intelligence.

He cited the example of Tokelau, an island in the Pacific Ocean, whose .tk addresses became notoriously associated with spam and phishing after outsourcing its registry services.

“We worry about bad actors taking something, sticking a .ai to it, and then making it sound like they are much bigger or much better than what they really are,” Mohan said, adding that the company’s technology will quickly take down shady sites.

Another benefit is that .ai websites will no longer need to connect to the government’s digital infrastructure through a single internet cable to the island, which left them vulnerable to digital bottlenecks or physical disruptions.

Instead, they’ll use the company’s globally distributed servers, which will make the sites faster to access because the servers are closer to users.

“It goes from milliseconds to microseconds,” Mohan said.

Drone maker DJI sues Pentagon over Chinese military listing

WASHINGTON — China-based DJI sued the U.S. Defense Department on Friday for adding the drone maker to a list of companies allegedly working with Beijing’s military, saying the designation is wrong and has caused the company significant financial harm.

DJI, the world’s largest drone manufacturer, which sells more than half of all U.S. commercial drones, asked a U.S. district judge in Washington to order its removal from the Pentagon list designating it as a “Chinese military company,” saying it “is neither owned nor controlled by the Chinese military.”

Being placed on the list serves as a warning to U.S. entities and companies about the national security risks of doing business with the designated firms.

DJI’s lawsuit says that because of the Defense Department’s “unlawful and misguided decision,” it has “lost business deals, been stigmatized as a national security threat, and been banned from contracting with multiple federal government agencies.”

The company added that “U.S. and international customers have terminated existing contracts with DJI and refuse to enter into new ones.”

The Defense Department did not immediately respond to a request for comment.

DJI said on Friday it filed the lawsuit after the Defense Department did not engage with the company over the designation for more than 16 months, saying it “had no alternative other than to seek relief in federal court.”

Amid strained ties between the world’s two biggest economies, the updated list is one of numerous actions Washington has taken in recent years to highlight and restrict Chinese companies that it says may strengthen Beijing’s military.

Many major Chinese firms are on the list, including aviation company AVIC, memory chipmaker YMTC, China Mobile and energy company CNOOC.

In May, lidar manufacturer Hesai Group filed a suit challenging the Pentagon’s Chinese military designation for the company. On Wednesday, the Pentagon removed Hesai from the list but said it would immediately relist the China-based firm on national security grounds.

DJI is facing growing pressure in the United States.

Earlier this week DJI told Reuters that Customs and Border Protection is stopping imports of some DJI drones from entering the United States, citing the Uyghur Forced Labor Prevention Act.

DJI said no forced labor is involved at any stage of its manufacturing.

U.S. lawmakers have repeatedly raised concerns that DJI drones pose data transmission, surveillance and national security risks, something the company rejects.

Last month, the U.S. House voted to bar new DJI drones from operating in the U.S. The bill awaits U.S. Senate action. The Commerce Department said last month it is seeking comments on whether to impose restrictions on Chinese drones that would effectively ban them in the U.S. — similar to proposed Chinese vehicle restrictions.

Residents on Kenya’s coast use app to track migratory birds

The Tana River delta on the Kenyan coast includes a vast range of habitats and a remarkably productive ecosystem, says UNESCO. It is also home to many bird species, including some that are nearly threatened. Residents are helping local conservation efforts with an app called eBird. Juma Majanga reports.

US prosecutors see rising threat of AI-generated child sex abuse imagery

U.S. federal prosecutors are stepping up their pursuit of suspects who use artificial intelligence tools to manipulate or create child sex abuse images, as law enforcement fears the technology could spur a flood of illicit material.

The U.S. Justice Department has brought two criminal cases this year against defendants accused of using generative AI systems, which create text or images in response to user prompts, to produce explicit images of children.

“There’s more to come,” said James Silver, the chief of the Justice Department’s Computer Crime and Intellectual Property Section, predicting further similar cases.

“What we’re concerned about is the normalization of this,” Silver said in an interview. “AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”

The rise of generative AI has sparked concerns at the Justice Department that the rapidly advancing technology will be used to carry out cyberattacks, boost the sophistication of cryptocurrency scammers and undermine election security. 

Child sex abuse cases mark some of the first times that prosecutors are trying to apply existing U.S. laws to alleged crimes involving AI, and even successful convictions could face appeals as courts weigh how the new technology may alter the legal landscape around child exploitation. 

Prosecutors and child safety advocates say generative AI systems can allow offenders to morph and sexualize ordinary photos of children and warn that a proliferation of AI-produced material will make it harder for law enforcement to identify and locate real victims of abuse.

The National Center for Missing and Exploited Children, a nonprofit group that collects tips about online child exploitation, receives an average of about 450 reports each month related to generative AI, according to Yiota Souras, the group’s chief legal officer.

That’s a fraction of the average of 3 million monthly reports of overall online child exploitation the group received last year.

Untested ground

Cases involving AI-generated sex abuse imagery are likely to tread new legal ground, particularly when an identifiable child is not depicted.

Silver said in those instances, prosecutors can charge obscenity offenses when child pornography laws do not apply.

Prosecutors indicted Steven Anderegg, a software engineer from Wisconsin, in May on charges including transferring obscene material. Anderegg is accused of using Stable Diffusion, a popular text-to-image AI model, to generate images of young children engaged in sexually explicit conduct and sharing some of those images with a 15-year-old boy, according to court documents.

Anderegg has pleaded not guilty and is seeking to dismiss the charges by arguing that they violate his rights under the U.S. Constitution, court documents show.

He has been released from custody while awaiting trial. His attorney was not available for comment.

Stability AI, the maker of Stable Diffusion, said the case involved a version of the AI model that was released before the company took over the development of Stable Diffusion. The company said it has made investments to prevent “the misuse of AI for the production of harmful content.”

Federal prosecutors also charged a U.S. Army soldier with child pornography offenses in part for allegedly using AI chatbots to morph innocent photos of children he knew to generate violent sexual abuse imagery, court documents show.

The defendant, Seth Herrera, pleaded not guilty and has been ordered held in jail to await trial. Herrera’s lawyer did not respond to a request for comment.

Legal experts said that while sexually explicit depictions of actual children are covered under child pornography laws, the landscape around obscenity and purely AI-generated imagery is less clear. 

The U.S. Supreme Court in 2002 struck down as unconstitutional a federal law that criminalized any depiction, including computer-generated imagery, appearing to show minors engaged in sexual activity. 

“These prosecutions will be hard if the government is relying on the moral repulsiveness alone to carry the day,” said Jane Bambauer, a law professor at the University of Florida who studies AI and its impact on privacy and law enforcement.

Federal prosecutors have secured convictions in recent years against defendants who possessed sexually explicit images of children that also qualified as obscene under the law. 

Advocates are also focusing on preventing AI systems from generating abusive material. 

Two nonprofit advocacy groups, Thorn and All Tech Is Human, secured commitments in April from some of the largest players in AI including Alphabet’s Google, Amazon.com, Facebook and Instagram parent Meta Platforms, OpenAI and Stability AI to avoid training their models on child sex abuse imagery and to monitor their platforms to prevent its creation and spread. 

“I don’t want to paint this as a future problem, because it’s not. It’s happening now,” said Rebecca Portnoff, Thorn’s director of data science.

“As far as whether it’s a future problem that will get completely out of control, I still have hope that we can act in this window of opportunity to prevent that.”

Deepfakes featuring deceased terrorists spread radical propaganda

In a year with over 60 national elections worldwide, concerns are high that individuals and entities are using deepfake images and recordings to contribute to the flood of election misinformation. VOA’s Rio Tuasikal reports on some potentially dangerous videos made using generative AI.

Watchdog: ‘Serious questions’ over Meta’s handling of anti-immigrant posts

Meta’s independent content watchdog said Thursday there were “serious questions” about how the social media giant deals with anti-immigrant content, particularly in Europe. 

The Oversight Board, established by Meta in 2020 and sometimes called its “supreme court,” launched a probe after seeing a “significant number” of appeals over anti-immigrant content. 

The board has chosen two symbolic cases — one from Germany and the other from Poland — to assess whether Meta, which owns Facebook and Instagram, is following human rights law and its own policies on hate speech. 

Helle Thorning-Schmidt, co-chair of the board and a former Danish prime minister, said it was “critical” to get the balance right between free speech and protection of vulnerable groups. 

“The high number of appeals we get on immigration-related content from across the EU tells us there are serious questions to ask about how the company handles issues related to this, including the use of coded speech,” she said in a statement. 

The first piece of content to be assessed by the board was posted in May on a Facebook page claiming to be the official account of Poland’s far-right Confederation party. 

An image depicts Polish Prime Minister Donald Tusk looking through a peephole with a black man approaching him from behind, accompanied by text suggesting his government would allow immigration to surge. 

Meta rejected an appeal from a user to take down the post despite the text including a word considered by some as a racial slur. 

In the other case, an apparently AI-generated image was posted on a German Facebook page showing a blond-haired blue-eyed woman, a German flag and a stop sign. 

The accompanying text likens immigrants to “gang rape specialists.”  

A user complained, but Meta decided not to remove the post.

“The board selected these cases to address the significant number of appeals, especially from Europe, against content that shares views on immigration in ways that may be harmful towards immigrants,” the watchdog said in a statement. 

The board said it wanted to hear from the public and would spend “the next few weeks” discussing the issue before publishing its decision. 

Decisions by the board, funded by a trust set up by Meta, are not binding, though the company has promised to follow its rulings. 

China says unidentified foreign company conducted illegal mapping services 

BEIJING — China’s state security ministry said that a foreign company had been found to have illegally conducted geographic mapping activities in the country under the guise of autonomous driving research and outsourcing to a licensed Chinese mapping firm.

The ministry did not disclose the names of either company in a statement on its WeChat account on Wednesday.

The foreign company, ineligible for geographic surveying and mapping activities in China, “purchased a number of cars and equipped them with high-precision radar, GPS, optical lenses and other gear,” read the statement.

In addition to directly instructing the Chinese company to conduct surveying and mapping in many Chinese provinces, the foreign company appointed foreign technicians to give “practical guidance” to mapping staffers with the Chinese firm, enabling the latter to transfer its acquired data overseas, the ministry alleged.

Most of the data the foreign company collected has been determined to be state secrets, according to the ministry, which said state security organs, together with relevant departments, had carried out joint law enforcement activities.

The affected companies and relevant responsible personnel have been held legally accountable, the state security ministry said, without elaborating.

China strictly regulates mapping activities and data, which are key to developing autonomous driving, due to national security concerns. No foreign firm is qualified to conduct mapping in China, and data collected in China by vehicles made by foreign automakers such as Tesla must be stored locally.

The U.S. Commerce Department has also proposed prohibiting Chinese software and hardware in connected and autonomous vehicles on American roads due to national security concerns.

Also on Wednesday, a Chinese cybersecurity industry group recommended that Intel products sold in China should be subject to a security review, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests.

Chinese cyber association calls for review of Intel products sold in China 

BEIJING — Intel products sold in China should be subject to a security review, the Cybersecurity Association of China (CSAC) said on Wednesday, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests. 

While CSAC is an industry group rather than a government body, it has close ties to the Chinese state, and the raft of accusations against Intel, published in a long post on its official WeChat account, could trigger a security review from China’s powerful cyberspace regulator, the Cyberspace Administration of China (CAC).

“It is recommended that a network security review is initiated on the products Intel sells in China, so as to effectively safeguard China’s national security and the legitimate rights and interests of Chinese consumers,” CSAC said. 

Last year, the CAC barred domestic operators of key infrastructure from buying products made by U.S. memory chipmaker Micron Technology Inc after deeming the company’s products had failed its network security review. 

Intel did not immediately respond to a request for comment. The company’s shares were down 2.7% in U.S. premarket trading.

EU AI Act checker reveals Big Tech’s compliance pitfalls

LONDON — Some of the most prominent artificial intelligence models are falling short of European regulations in key areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.

The EU had long debated new AI regulations before OpenAI released ChatGPT to the public in late 2022. The record-breaking popularity and ensuing public debate over the supposed existential risks of such models spurred lawmakers to draw up specific rules around “general-purpose” AIs.

Now a new tool designed by Swiss startup LatticeFlow and partners, and supported by European Union officials, has tested generative AI models developed by big tech companies like Meta and OpenAI across dozens of categories in line with the bloc’s wide-sweeping AI Act, which is coming into effect in stages over the next two years.

Awarding each model a score between 0 and 1, a leaderboard published by LatticeFlow on Wednesday showed models developed by Alibaba, Anthropic, OpenAI, Meta and Mistral all received average scores of 0.75 or above.

However, the company’s “Large Language Model (LLM) Checker” uncovered some models’ shortcomings in key areas, spotlighting where companies may need to divert resources in order to ensure compliance.

Companies failing to comply with the AI Act will face fines of 35 million euros ($38 million) or 7% of global annual turnover.

Mixed results

At present, the EU is still trying to establish how the AI Act’s rules around generative AI tools like ChatGPT will be enforced, convening experts to craft a code of practice governing the technology by spring 2025.

But LatticeFlow’s test, developed in collaboration with researchers at Swiss university ETH Zurich and Bulgarian research institute INSAIT, offers an early indicator of specific areas where tech companies risk falling short of the law.

For example, discriminatory output has been a persistent issue in the development of generative AI models, reflecting human biases around gender, race and other areas when prompted.

When testing for discriminatory output, LatticeFlow’s LLM Checker gave OpenAI’s “GPT-3.5 Turbo” a relatively low score of 0.46. For the same category, Alibaba Cloud’s “Qwen1.5 72B Chat” model received only a 0.37.

Testing for “prompt hijacking,” a type of cyberattack in which hackers disguise a malicious prompt as legitimate to extract sensitive information, the LLM Checker awarded Meta’s “Llama 2 13B Chat” model a score of 0.42. In the same category, French startup Mistral’s “8x7B Instruct” model received 0.38.

“Claude 3 Opus,” a model developed by Google-backed Anthropic, received the highest average score, 0.89.

The test was designed in line with the text of the AI Act, and will be extended to encompass further enforcement measures as they are introduced. LatticeFlow said the LLM Checker would be freely available for developers to test their models’ compliance online.

Petar Tsankov, the firm’s CEO and cofounder, told Reuters the test results were positive overall and offered companies a roadmap for fine-tuning their models in line with the AI Act.

“The EU is still working out all the compliance benchmarks, but we can already see some gaps in the models,” he said. “With a greater focus on optimizing for compliance, we believe model providers can be well-prepared to meet regulatory requirements.”

Meta declined to comment. Alibaba, Anthropic, Mistral, and OpenAI did not immediately respond to requests for comment.

While the European Commission cannot verify external tools, the body has been informed throughout the LLM Checker’s development and described it as a “first step” in putting the new laws into action.

A spokesperson for the European Commission said: “The Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements.”

‘Age of electricity’ to follow looming fossil fuel peak, IEA says

LONDON — The world is on the brink of a new age of electricity with fossil fuel demand set to peak by the end of the decade, meaning surplus oil and gas supplies could drive investment into green energy, the International Energy Agency said on Wednesday.

But it also flagged a high level of uncertainty as conflicts embroil the oil and gas-producing Middle East and Russia and as countries representing half of global energy demand have elections in 2024.

“In the second half of this decade, the prospect of more ample – or even surplus – supplies of oil and natural gas, depending on how geopolitical tensions evolve, would move us into a very different energy world,” IEA Executive Director Fatih Birol said in a release alongside its annual report.

Surplus fossil fuel supplies would likely lead to lower prices and could enable countries to dedicate more resources to clean energy, moving the world into an “age of electricity,” Birol said.

In the nearer term, there is also the possibility of reduced supplies should the Middle East conflict disrupt oil flows.

The IEA said such conflicts highlighted the strain on the energy system and the need for investment to speed up the transition to “cleaner and more secure technologies.”

A record-high level of clean energy came online globally last year, the IEA said, including more than 560 gigawatts (GW) of renewable power capacity. Around $2 trillion is expected to be invested in clean energy in 2024, almost double the amount invested in fossil fuels.

In its scenario based on current government policies, global oil demand peaks before 2030 at just less than 102 million barrels/day (mb/d), and then falls back to 2023 levels of 99 mb/d by 2035, largely because of lower demand from the transport sector as electric vehicle use increases.

The report also lays out the likely impact on future oil prices if stricter environmental policies are implemented globally to combat climate change.

In the IEA’s current policies scenario, oil prices decline to $75 per barrel in 2050 from $82 per barrel in 2023.

That compares to $25 per barrel in 2050 should government actions fall in line with the goal of cutting energy sector emissions to net zero by then.

Although the report forecasts an increase in demand for liquefied natural gas (LNG) of 145 billion cubic meters (bcm) between 2023 and 2030, it said this would be outpaced by an increase in export capacity of around 270 bcm over the same period.

“The overhang in LNG capacity looks set to create a very competitive market at least until this is worked off, with prices in key importing regions averaging $6.5-8 per million British thermal units (mmBtu) to 2035,” the report said.

Asian LNG prices, regarded as an international benchmark, are currently around $13 per mmBtu.

Report: Iran cyberattacks against Israel surge after Gaza war

Israel has become the top target of Iranian cyberattacks since the start of the Gaza war last year, while Tehran had focused primarily on the United States before the conflict, Microsoft said Tuesday.

“Following the outbreak of the Israel-Hamas war, Iran surged its cyber, influence, and cyber-enabled influence operations against Israel,” Microsoft said in an annual report.

“From October 7, 2023, to July 2024, nearly half of the Iranian operations Microsoft observed targeted Israeli companies,” said the Microsoft Digital Defense Report.

From July to October 2023, only 10 percent of Iranian cyberattacks targeted Israel, while 35 percent aimed at American entities and 20 percent at the United Arab Emirates, according to the U.S. software giant.

Since the war started, Iran has launched numerous social media operations with the aim of destabilizing Israel.

“Within two days of Hamas’ attack on Israel, Iran stood up several new influence operations,” Microsoft said.

An account called “Tears of War” impersonated Israeli activists critical of Prime Minister Benjamin Netanyahu’s handling of a crisis over scores of hostages taken by Hamas, according to the report.

An account called “KarMa”, created by an Iranian intelligence unit, claimed to represent Israelis calling for Netanyahu’s resignation. 

Iran also began impersonating partners after the war started, Microsoft said.

Iranian services created a Telegram account using the logo of the military wing of Hamas to spread false messages about the hostages in Gaza and threaten Israelis, Microsoft said. It was not clear if Iran acted with Hamas’s consent, it added.

“Iranian groups also expanded their cyber-enabled influence operations beyond Israel, with a focus on undermining international political, military, and economic support for Israel’s military operations,” the report said.

The Hamas terror attack on October 7, 2023, resulted in the deaths of 1,206 people, mostly civilians, according to an AFP tally of official Israeli figures, including hostages killed in captivity.  

Israel’s retaliatory military campaign in Gaza has killed 42,289 people, the majority civilians, according to the health ministry in the Hamas-run territory. The U.N. has described the figures as reliable.