Vance tells Europeans that heavy regulation could kill AI 

Paris — U.S. Vice President JD Vance told Europeans on Tuesday their “massive” regulations on artificial intelligence could strangle the technology, and rejected content moderation as “authoritarian censorship.”

The mood on AI has shifted as the technology takes root, from concerns about safety to geopolitical competition, as countries jockey to nurture the next big AI giant.

Vance, setting out the Trump administration’s America First agenda, said the United States intended to remain the dominant force in AI and strongly opposed the European Union’s far tougher regulatory approach.

“We believe that excessive regulation of the AI sector could kill a transformative industry,” Vance told an AI summit of CEOs and heads of state in Paris.

“We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship,” he added.

Vance criticized the “massive regulations” created by the EU’s Digital Services Act, as well as Europe’s online privacy rules, known by the acronym GDPR, which he said meant endless legal compliance costs for smaller firms.

“Of course, we want to ensure the internet is a safe place, but it is one thing to prevent a predator from preying on a child on the internet, and it is something quite different to prevent a grown man or woman from accessing an opinion that the government thinks is misinformation,” he said.

European lawmakers last year approved the bloc’s AI Act, the world’s first comprehensive set of rules governing the technology.

Vance is leading the American delegation at the Paris summit.

Vance also appeared to take aim at China at a delicate moment for the U.S. technology sector.

Last month, Chinese startup DeepSeek freely distributed a powerful AI reasoning model that some said challenged U.S. technology leadership. The news sent shares of American chip designer Nvidia down 17%.

“From CCTV to 5G equipment, we’re all familiar with cheap tech in the marketplace that’s been heavily subsidized and exported by authoritarian regimes,” Vance said.

But he said that “partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure. Should a deal seem too good to be true? Just remember the old adage that we learned in Silicon Valley: if you aren’t paying for the product, you are the product.”

Vance did not mention DeepSeek by name. There has been no evidence that information can surreptitiously flow through the startup’s technology to China’s government, and the underlying code is freely available to use and view. However, some government organizations have reportedly banned DeepSeek’s use.

Speaking after Vance, French President Emmanuel Macron said that he was fully in favor of trimming red tape, but he stressed that regulation was still needed to ensure trust in AI, or people would end up rejecting it. “We need a trustworthy AI,” he said.

European Commission chief Ursula von der Leyen also said the EU would cut red tape and invest more in AI.

In a bilateral meeting, Vance and von der Leyen were also likely to discuss Trump’s substantial increase of tariffs on steel.

OpenAI CEO Sam Altman was expected to address the summit on Tuesday. A consortium led by Elon Musk said on Monday it had offered $97.4 billion to buy the nonprofit controlling OpenAI.

Altman promptly posted on X: “no thank you but we will buy twitter for $9.74 billion if you want.”

The technology world has closely watched whether the Trump administration will ease recent antitrust enforcement that had seen the U.S. sue or investigate the industry’s biggest players.

Vance said the U.S. would champion American AI, much of which is developed by the industry’s biggest players, but added: “Our laws will keep Big Tech, little tech, and all other developers on a level playing field.”

EU’s AI push to get $50 billion boost, EU’s von der Leyen says

PARIS — Europe will invest an additional $51.5 billion to bolster the bloc’s artificial intelligence ambitions, European Commission President Ursula von der Leyen said on Tuesday.

It will come on top of the European AI Champions Initiative, which has already pledged 150 billion euros from providers, investors and industry, von der Leyen told the Paris AI Summit.

“Thereby we aim to mobilize a total of 200 billion euros for AI investments in Europe,” she said.

Von der Leyen said investments will focus on industrial and mission-critical technologies.

Companies that have signed up to the European AI Champions Initiative, spearheaded by investment company General Catalyst, include Airbus, ASML, Siemens, Infineon, Philips, Mistral and Volkswagen.

France seeks AI boom, urges EU investment in the sector

French President Emmanuel Macron wants Europe to become a leader in the artificial intelligence (AI) sector, he told a global summit of AI and political leaders in Paris on Monday, where he announced nearly $113 billion in private-sector investment pledges for French AI.

Financial investment is key to achieving the goal of Europe as an AI hub, Macron said in his remarks delivered in English at the Grand Palais.

He said the European bloc would also need to “adopt the Notre Dame strategy,” a reference to the lightning-swift rebuilding of France’s famed Notre Dame cathedral in five years after a devastating 2019 fire, the result of simplified regulations and adherence to timelines.

“We showed the rest of the world that when we commit to a clear timeline, we can deliver,” the French leader said.

Henna Virkkunen, the European Union’s digital chief, indicated that the EU agrees on simplifying regulations. The EU approved the AI Act last year, the world’s first extensive set of rules designed to regulate the technology.

European countries want to ensure that they have a stake in the tech race against an aggressive U.S. and other emerging challengers. European Commission chief Ursula von der Leyen is scheduled to address the EU’s ability to compete in the tech world Tuesday.

According to Reuters, Macron’s announcement of heavy private-sector AI investment “reassured” Clem Delangue, CEO of Hugging Face, a U.S. company with French co-founders that serves as a hub for open-source AI, that France will host “ambitious” projects.

Sundar Pichai, Google’s head, told the gathering that the shift to AI will be “the biggest of our lifetimes.”

However, such a big shift also brings problems for the AI community. France wanted the summit to adopt a non-binding declaration that AI should be inclusive and sustainable.

“We have the chance to democratize access [to a new technology] from the start,” Pichai told the summit.

Whether the U.S. will agree to that initiative is uncertain, considering the U.S. government’s recent moves to eliminate diversity, equity and inclusion (DEI) initiatives.

U.S. Vice President JD Vance is attending the summit and expected to deliver a speech on Tuesday. Other politicians expected Tuesday at the plenary session are Chinese Vice Premier Zhang Guoqing and Indian Prime Minister Narendra Modi. About 100 politicians are expected.

There are also other considerations with a shift to AI. The World Trade Organization says its calculations indicate that a “near universal adoption of AI … could increase trade by up to 14 percentage points” from what it is now but cautions that global “fragmentation” of regulations on AI technology and data flow could bring about the contraction of both trade and output.

A more troubling side effect of AI technology is that it can replace the need for human workers in some sectors.

International Labor Organization leader Gilbert Houngbo told the summit Monday that the jobs that AI can do, such as clerical work, are disproportionately held by women. According to current statistics, that development would likely widen the gender pay gap.

Musk-led group makes $97.4 billion bid for control of OpenAI

A consortium led by Elon Musk said Monday it has offered $97.4 billion to buy the nonprofit that controls OpenAI, another salvo in the billionaire’s fight to block the artificial intelligence startup from transitioning to a for-profit firm.

Musk’s bid is likely to ratchet up longstanding tensions with OpenAI CEO Sam Altman over the future of the startup at the heart of a boom in generative AI technology. Altman on Monday promptly posted on X: “no thank you but we will buy twitter for $9.74 billion if you want.”

Musk cofounded OpenAI with Altman in 2015 as a nonprofit, but left before the company took off. He founded the competing AI startup xAI in 2023.

Musk, the CEO of Tesla and owner of tech and social media company X, is a close ally of President Donald Trump. He spent more than a quarter of a billion dollars to help elect Trump, and leads the Department of Government Efficiency, a new arm of the White House tasked with radically shrinking the federal bureaucracy. Musk recently criticized a $500 billion OpenAI-led project announced by Trump at the White House.

OpenAI is now trying to transition from a nonprofit into a for-profit entity, which it says is required to secure the capital needed to develop the best AI models.

Musk sued Altman and others in August last year, claiming they violated contract provisions by putting profit ahead of the public good in the push to advance AI. In November, he asked a U.S. district judge for a preliminary injunction blocking OpenAI from converting to a for-profit structure.

Musk’s lawsuit against OpenAI and Altman says the founders originally approached him to fund a nonprofit focused on developing AI to benefit humanity, but that it was now focused on making money.

“It’s time for OpenAI to return to the open-source, safety-focused force for good it once was,” Musk said in a statement Monday. “We will make sure that happens.”

Musk and OpenAI backer Microsoft did not immediately respond to requests for comment.

“Musk’s bid puts another wrinkle into OpenAI’s quest to remove the nonprofit’s control over its for-profit entity,” said Rose Chan Loui, executive director of the UCLA Law Center for Philanthropy and Nonprofits.

“This bid sets a marker for the valuation of the nonprofit’s economic interests,” she said. “If OpenAI values the nonprofit’s interests at less than what Musk is offering, then they would have to show why.”

The consortium led by Musk includes his AI startup xAI, Baron Capital Group, Valor Management, Atreides Management, Vy Fund III, Emanuel Capital Management, and Eight Partners.

Following a deal, xAI could merge with OpenAI, according to The Wall Street Journal, which first reported Musk’s offer earlier Monday. xAI recently raised $6 billion from investors at a valuation of $40 billion, sources have told Reuters.

Throwing a wrench

“This (bid) is definitely throwing a wrench in things,” said Jonathan Macey, a Yale Law School professor specializing in corporate governance.

“The nonprofit is supposed to take money to do whatever good deeds, and if OpenAI prefers to sell it to somebody else for less money, it’s a concern for protecting the interests of the beneficiaries of the not-for-profit. If this was a public company, plaintiffs’ lawyers would justifiably be lining up down the block to sue that transaction.”

OpenAI was valued at $157 billion in its last funding round, cementing its status as one of the most valuable private companies in the world. SoftBank Group is in talks to lead a funding round of up to $40 billion in OpenAI at a valuation of $300 billion, including the new funds, Reuters reported in January.

Aside from any antitrust implications, a deal this size would need Musk and his consortium to raise enormous funds.

“Musk’s offer to buy OpenAI’s nonprofit should significantly complicate OpenAI’s current fundraising and the process of converting into a for-profit corporation,” said Gil Luria, analyst at D.A. Davidson.

“The offer seems to be backed by more credible investors … OpenAI may not be able to ignore it. It will be the fiduciary responsibility of OpenAI’s board to decide whether this is a better offer, which could call into question the offer from SoftBank.”

Musk’s stock in Tesla is valued at roughly $165 billion, according to LSEG data, but his leverage with banks is likely to be thin after his $44 billion buyout in 2022 of the social media platform then called Twitter.

On sidelines of AI Summit in Paris, unions denounce its harmful effects

PARIS — In front of political and tech leaders gathered at a summit in Paris, French President Emmanuel Macron on Monday called for a strategy to make up for France’s and Europe’s lag in artificial intelligence (AI) investment, but he faced a “counter-summit” that pointed out the technology’s risks.

The use of chatbots at work and school is destroying jobs and professions and threatening the acquisition of knowledge, said union representatives gathered at the Theatre de la Concorde, located in the Champs-Elysees gardens less than a kilometer from the venue of the Summit for Action on Artificial Intelligence.

Habib El Kettani, from Solidaires Informatique, a union representing IT workers, described an “automation already underway for about ten years,” which has been reinforced by the arrival of the flagship tool ChatGPT at the end of 2022.

“I have been fighting for ten years to ensure that my job does not become an endangered species,” said Sandrine Larizza, from the CGT union at France Travail, a public service dedicated to the unemployed. 

She deplored “a disappearance of social rights that goes hand in hand with the automation of public services,” where the development of AI has served, according to her “to make people work faster to respond less and less to the needs of users, by reducing staff numbers.” 

Loss of meaning 

“With generative AI, it is no longer the agent who responds by email to the unemployed person but the generative AI that gives the answers with a multitude of discounted job offers in subcontracting,” said Larizza. 

This is accompanied by “a destruction of our human capacities to play a social role, a division into micro-tasks on the assembly line and an industrialization of our professions with a loss of meaning,” she said, a few days after the announcement of a partnership between France Travail and the French startup Mistral. 

“Around 40 projects” are also being tested “with postal workers,” said Marie Vairon, general secretary of the Sud PTT union of the La Poste and La Banque Postale group. 

AI is used “to manage schedules and simplify tasks with a tool tested since 2020 and generalized since 2023,” she said, noting that the results are “not conclusive.” 

After the implementation at the postal bank, La Banque Postale, of “Lucy,” a conversational robot handling some “300,000 calls every month,” Vairon is concerned about a “generative AI serving as a coach for bank advisers.” 

‘Students are using it’ 

On the education side, “whether we like it or not, students are using it,” said Stephanie de Vanssay, national educational and digital adviser of the National Union of Autonomous Unions (UNSA) for primary and secondary school. 

“We have indifferent teachers, worried teachers who are afraid of losing control and quality of learning, skeptics, and those who are angry about all the other priorities,” she said. 

Developing the critical thinking of some 12 million students is becoming, in any case, “an even more serious concern and it is urgent to explain how to use these tools and why,” de Vanssay said. 

The Minister of National Education Elisabeth Borne announced on Thursday the launch of a call for tenders for an AI for teachers, as well as a charter of use and training for teachers. 

“No critical thinking without interactions and without helping each other to think and progress in one’s thinking, which requires intermediation,” said Beatrice Laurent, national secretary of UNSA education. “A baby with a tablet and nursery rhymes will not learn to speak.”

High-stakes AI summit in Paris: World leaders, tech titans and challenging diplomatic talks

PARIS — Major world leaders are meeting for an AI summit in Paris, where challenging diplomatic talks are expected as tech titans fight for dominance in the fast-moving technology industry.

Heads of state, top government officials, CEOs and scientists from around 100 countries are participating in the two-day international summit, which began Monday.

High-profile attendees include U.S. Vice President JD Vance, on his first overseas trip since taking office, and Chinese Vice Premier Zhang Guoqing.

“We’re living a technology and scientific revolution we’ve rarely seen,” French President Emmanuel Macron said Sunday on national television France 2.

France and Europe must seize the “opportunity” because AI “will enable us to live better, learn better, work better, care better and it’s up to us to put this artificial intelligence at the service of human beings,” he said.

Vance’s debut abroad

The summit will give some European leaders a chance to meet Vance for the first time. The 40-year-old vice president was just 18 months into his time as Ohio’s junior senator when Donald Trump picked him as his running mate.

Vance was joined by his wife Usha and their three children — Ewan, Vivek and Mirabel — for the trip to Europe. They were greeted on French soil Monday morning by Manuel Valls, the minister for Overseas France, and the U.S. Embassy’s charge d’affaires, David McCawley.

On Tuesday, Vance will have a working lunch with Macron, with discussions on Ukraine and the Middle East on the menu.

Vance, like President Donald Trump, has questioned U.S. spending on Ukraine and the approach to isolating Russian President Vladimir Putin. Trump promised to end the fighting within six months of taking office.

Later this week, Vance will attend the Munich Security Conference, where he may meet Ukrainian President Volodymyr Zelenskyy.

Leaders in Europe have been carefully watching Trump’s recent statements: his threats to impose tariffs on the European Union, his talk of taking control of Greenland and his suggestion that Palestinians clear out of Gaza once the fighting in the Israel-Hamas conflict ends, an idea flatly rejected by Arab allies.

Fostering AI advances

The summit, which gathers major players such as Google, Microsoft and OpenAI, aims to foster AI advances in sectors like health, education, environment and culture.

A global public-private partnership named “Current AI” is to be launched to support large-scale initiatives that serve the general interest.

The Paris summit “is the first time we’ll have had such a broad international discussion in one place on the future of AI,” said Linda Griffin, vice president of public policy at Mozilla. “I see it as a norm-setting moment.”

Nick Reiners, senior geotechnology analyst at Eurasia Group, noted an opportunity to shape AI governance in a new direction by “moving away from this concentration of power amongst a handful of private actors and building this public interest AI instead.”

However, it remains unclear if the U.S. will support such initiatives.

French organizers also hope the summit will lead to major investment announcements in Europe.

France is to announce AI private investments worth a total of $113 billion over the coming years, Macron said, presenting it as “the equivalent” of Trump’s Stargate AI data centers project.

Indian PM co-hosting the summit

India’s Prime Minister Narendra Modi is co-hosting the summit with Macron, in an effort to involve more global actors in AI development and prevent the sector from becoming a U.S.-China battle.

India’s foreign secretary, Vikram Misri, stressed the need for equitable access to AI to avoid “perpetuating a digital divide that is already existing across the world.”

Macron will also travel Wednesday with Modi to the southern port city of Marseille to inaugurate a new Indian consulate and visit the ITER nuclear research site.

France has become a key defense partner for India, with talks underway on purchasing 26 Rafale fighter jets and three Scorpene submarines. Officials in New Delhi said discussions are in the final phase and the deal could be inked within weeks.

Trump signals his support for cryptocurrency

U.S. President Donald Trump says he wants to make the United States the cryptocurrency capital of the world. He is putting his plan into place in the early weeks of his second presidential term. VOA’s Michelle Quinn has the story.

VOA Mandarin: China’s DeepSeek banned by several countries out of censorship fear 

Several governments, including the U.S., Taiwan and Australia, have banned the use of China’s AI software DeepSeek on official devices. Analysts say these restrictions are justified, as tests show DeepSeek not only collects excessive user data but also filters sensitive topics and promotes Chinese government narratives more aggressively than Baidu and WeChat. This raises concern that it could become a powerful tool for controlling speech and public opinion. 


Robots learn problem-solving from each other, internet

Robots with reasoning power are becoming a reality thanks to massive amounts of training data and breakthroughs in artificial intelligence. VOA’s Matt Dibble visits a lab where robots are learning to solve problems themselves. Cameras: Matt Dibble, Tina Trinh.

House lawmakers push to ban AI app DeepSeek from US government devices

WASHINGTON — A bipartisan duo in the U.S. House is proposing legislation to ban the Chinese artificial intelligence app DeepSeek from federal devices, similar to the policy already in place for the popular social media platform TikTok.

Lawmakers Josh Gottheimer, a Democrat from New Jersey, and Darin LaHood, a Republican from Illinois, on Thursday introduced the “No DeepSeek on Government Devices Act,” which would ban federal employees from using the Chinese AI app on government-owned electronics. They cited the Chinese government’s ability to use the app for surveillance and misinformation as reasons to keep it away from federal networks.

“The Chinese Communist Party has made it abundantly clear that it will exploit any tool at its disposal to undermine our national security, spew harmful disinformation, and collect data on Americans,” Gottheimer said in a statement. “We simply can’t risk the CCP infiltrating the devices of our government officials and jeopardizing our national security.”

The proposal comes after the Chinese software company in January published an AI model that performed at a competitive level with models developed by American firms like OpenAI, Meta, Alphabet and others. DeepSeek purported to develop the model at a fraction of the cost of its American counterparts. The announcement raised alarm bells and prompted debates among policymakers and leading Silicon Valley financiers and technologists.

The churn over AI is coming at a moment of heightened competition between the U.S. and China in a range of areas, including technological innovation. The U.S. has levied tariffs on Chinese goods, restricted Chinese tech firms like Huawei from being used in government systems, and banned the export of state-of-the-art microchips thought to be needed to develop the highest-end AI models.

Last year, Congress and then-President Joe Biden approved a law requiring the popular social media platform TikTok’s Chinese parent company to divest it or face a ban across the U.S.; that policy is now on hold. President Donald Trump, who originally proposed a ban of the app in his first term, signed an executive order last month extending the window for a long-term solution before the legally required ban takes effect.

In 2023, Biden banned TikTok from federally issued devices.

“The technology race with the Chinese Communist Party is not one the United States can afford to lose,” LaHood said in a statement. “This commonsense, bipartisan piece of legislation will ban the app from federal workers’ phones while closing backdoor operations the company seeks to exploit for access. It is critical that Congress safeguard Americans’ data and continue to ensure American leadership in AI.”

The bill would single out DeepSeek and any AI application developed by its parent company, the hedge fund High-Flyer, as subject to the ban. The legislation includes exceptions for national security and research purposes that would allow federal employers to study DeepSeek.

Some lawmakers wish to go further. A bill proposed last week by Senator Josh Hawley, a Republican from Missouri, would bar the import or export of any AI technology from China writ large, citing national security concerns.

Former Google engineer faces new US charges he stole AI secrets for Chinese companies

U.S. prosecutors on Tuesday unveiled an expanded 14-count indictment accusing former Google software engineer Linwei Ding of stealing artificial intelligence trade secrets to benefit two Chinese companies he was secretly working for. 

Ding, 38, a Chinese national, was charged by a federal grand jury in San Francisco with seven counts each of economic espionage and theft of trade secrets. 

Each economic espionage charge carries a maximum 15-year prison term and $5 million fine, while each trade secrets charge carries a maximum 10-year term and $250,000 fine. 

The defendant, also known as Leon Ding, was indicted last March on four counts of theft of trade secrets. He is free on bond. His lawyers did not immediately respond to requests for comment. 

Ding’s case was coordinated through an interagency Disruptive Technology Strike Force created in 2023 by the Biden administration. 

The initiative was designed to help stop advanced technology from being acquired by countries such as China and Russia or potentially threatening national security. 

Prosecutors said Ding stole information about the hardware infrastructure and software platform that lets Google’s supercomputing data centers train large AI models. 

Some of the allegedly stolen chip blueprints were meant to give Google an edge over cloud computing rivals Amazon and Microsoft, which design their own chips, and to reduce Google’s reliance on chips from Nvidia. 

Prosecutors said Ding joined Google in May 2019 and began his thefts three years later when he was being courted to join an early-stage Chinese technology company. 

Ding allegedly uploaded more than 1,000 confidential files by May 2023 and later circulated a PowerPoint presentation to employees of a Chinese startup he founded, saying that country’s policies encouraged development of a domestic AI industry. 

Google was not charged and has said it cooperated with law enforcement. 

According to court records describing a December 18 hearing, prosecutors and defense lawyers discussed a “potential resolution” to Ding’s case, “but anticipate the matter proceeding to trial.” 

The case is U.S. v. Ding, U.S. District Court, Northern District of California, No. 24-cr-00141. 

France pitches AI summit as ‘wake-up call’ for Europe

PARIS — France hosts top tech players next week at an artificial intelligence summit meant as a “wake-up call” for Europe as it struggles with AI challenges from the United States and China.

Players from across the sector and representatives from 80 nations will gather in the French capital on February 10 and 11 in the sumptuous Grand Palais, built for the 1900 Universal Exhibition.

In the run-up, President Emmanuel Macron will visit research centers applying AI to science and health on February 4, before hosting scientists and Nobel Prize winners at his Elysee Palace residence on Wednesday.

A wider science conference will be held at the Polytechnique engineering school on Thursday and Friday.

“The summit comes at exactly the right time for this wake-up call for France and Europe, and to show we are in position” to take advantage of the technology, an official in Macron’s office told reporters.

In recent weeks, Washington’s announcement of $500 billion in investment to build up AI infrastructure and the release of a frugal but powerful generative AI model by Chinese firm DeepSeek have focused minds in Europe.

France must “not let this revolution pass it by,” Macron’s office said.

Attendees at the summit will include Sam Altman, head of OpenAI — the firm that brought generative models to public consciousness in 2022 with the launch of ChatGPT.

Google boss Sundar Pichai and Nobel Prize winner Demis Hassabis, who leads the company’s DeepMind AI research unit, will also come, alongside Arthur Mensch, founder of French AI developer Mistral.

The Elysee has said there are “talks” on hosting DeepSeek founder Liang Wenfeng, and it has yet to clarify whether X owner Elon Musk — who has his own generative AI initiative, xAI — has accepted an invitation.

Nor is it clear who will attend from the United States and China, with the French presidency saying only “very high level” representatives will come.

Confirmed guests from Europe include European Commission chief Ursula von der Leyen and German Chancellor Olaf Scholz.

‘Stoke confidence’

The tone of the AI summit will be “neither catastrophizing, nor naive,” Macron’s AI envoy Anne Bouverot told AFP.

Hosting the conference is also an opportunity for Paris to show off its own AI ecosystem, which numbers around 750 companies.

Macron’s office has said the summit would see the announcement of “massive” investments along the lines of his annual “Choose France” business conference, at which $15.4 billion of inward investment was pledged in 2024.

Beyond the economic opportunities, AI’s impact on culture including artistic creativity and news production will be discussed in a side-event over the weekend.

Debates open to the public, such as that one, are aimed at showing off “positive use cases for AI” to “stoke confidence and speed up adoption” of the technology, said France’s digital minister Clara Chappaz.

For now, the French public is skeptical of AI, with 79 percent of respondents telling pollsters Ifop they were “concerned” about the technology in a recent survey.

More ‘inclusive’ AI?

Paris says it also hopes the summit can help kick off its vision of a more ethical, more accessible and less resource-intensive AI.

At present, “the AI under development is pushed by a few large players from a few countries,” Bouverot said, whereas France wants “to promote more inclusive development.”

Indian Prime Minister Narendra Modi has been invited to co-host the Paris summit, in a push to bring governments on board.

One of the summit’s aims is the establishment of a public-interest foundation for which Paris aims to raise $2.5 billion over five years.

The effort would be “a public-private partnership between various governments, businesses and philanthropic foundations from different countries,” Macron’s office said.

Paris hopes at the summit to chart different efforts at AI governance around the world and gather commitments for environmentally sustainable AI — although no binding mechanism is planned for now.

“There are lots of big principles emerging around responsible, trustworthy AI, but it’s not clear or easy to implement for the engineers in technical terms,” said Laure de Roucy-Rochegonde, director of the geopolitical technology center at the French Institute for International Relations.

UK to become 1st country to criminalize AI child abuse tools

LONDON — Britain will become the first country to introduce laws against AI tools used to generate sexual abuse images, the government announced Saturday.

The government will make it illegal to possess, create or distribute AI tools designed to generate sexualized images of children, punishable by up to five years in prison, Interior Minister Yvette Cooper said.

It will also be illegal to possess AI “pedophile manuals” which teach people how to use AI to sexually abuse children, punishable by up to three years in prison.

“We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person,” said Cooper.

The new laws are “designed to keep our children safe online as technologies evolve. It is vital that we tackle child sexual abuse online as well as offline,” she added.

“Children will be protected from the growing threat of predators generating AI images and from online sexual abuse as the U.K. becomes the first country in the world to create new AI sexual abuse offences,” said a government statement.

AI tools are being used to generate child sexual abuse images by “nudeifying” real life images of children or by “stitching the faces of other children onto existing images,” said the government.

The new laws will also criminalize “predators who run websites designed for other pedophiles to share vile child sexual abuse content or advice on how to groom children,” punishable by up to 10 years in prison, said the government.

The measures will be introduced as part of the Crime and Policing Bill when it comes to parliament.

The Internet Watch Foundation (IWF) has warned of the growing number of sexual abuse AI images of children being produced.

Over a 30-day period in 2024, IWF analysts identified 3,512 AI child abuse images on a single dark web site.

The number of the most serious category of images also rose by 10% in a year, it found.

DeepSeek vs. ChatGPT fuels debate over AI building blocks

SEOUL, SOUTH KOREA — When Chinese startup DeepSeek released its AI model this month, it was hailed as a breakthrough, a sign that China’s artificial intelligence companies could compete with their Silicon Valley counterparts using fewer resources.

The narrative was clear: DeepSeek had done more with less, finding clever workarounds to U.S. chip restrictions. However, that storyline has begun to shift.

OpenAI, the U.S.-based company behind ChatGPT, now claims DeepSeek may have improperly used its proprietary data to train its model, raising questions about whether DeepSeek’s success was truly an engineering marvel.

In statements to several media outlets this week, OpenAI said it is reviewing indications that DeepSeek may have trained its AI by mimicking responses from OpenAI’s models.

The process, known as distillation, is common among AI developers but is prohibited by OpenAI’s terms of service, which forbid using its model outputs to train competing systems.
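The distillation process described here can be sketched at toy scale. Below is a minimal, hypothetical illustration in plain Python — the “teacher” and “student” are made-up three-class linear models, not anything from OpenAI’s or DeepSeek’s actual systems. The key idea is that the student never sees ground-truth labels; it is trained only to imitate the teacher’s output probabilities.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# "Teacher": a fixed linear scorer over 3 classes for a 2-feature input.
TEACHER_W = [[2.0, -1.0], [0.5, 0.5], [-1.5, 1.0]]

def teacher_probs(x):
    return softmax([w[0] * x[0] + w[1] * x[1] for w in TEACHER_W])

# "Student": same shape, random init, trained on the teacher's soft
# outputs with cross-entropy loss -- the teacher's answers ARE the data.
random.seed(0)
student_w = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(3)]

def student_probs(x):
    return softmax([w[0] * x[0] + w[1] * x[1] for w in student_w])

def train_step(x, lr=0.5):
    p_t = teacher_probs(x)
    p_s = student_probs(x)
    for k in range(3):
        g = p_s[k] - p_t[k]  # gradient of cross-entropy w.r.t. logit k
        for j in range(2):
            student_w[k][j] -= lr * g * x[j]

# Unlabeled inputs; the teacher supplies all the supervision.
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
for _ in range(50):
    for x in data:
        train_step(x)

# After training, the student's top predictions largely match the teacher's.
agree = sum(
    max(range(3), key=lambda k: student_probs(x)[k])
    == max(range(3), key=lambda k: teacher_probs(x)[k])
    for x in data
)
print(agree / len(data))
```

Run against real systems, the same pattern means querying a large model’s API at scale and training a smaller model on the responses — which is why terms of service like OpenAI’s forbid using model outputs to train competitors.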

Some U.S. officials appear to support OpenAI’s concerns. At his confirmation hearing this week, Commerce secretary nominee Howard Lutnick accused DeepSeek of misusing U.S. technology to create a “dirt cheap” AI model.

“They stole things. They broke in. They’ve taken our IP,” Lutnick said of China.

David Sacks, the White House czar for AI and cryptocurrency, was more measured, saying only that it is “possible” that DeepSeek had stolen U.S. intellectual property.

In an interview with the cable news network Fox News, Sacks added that there is “substantial evidence” that DeepSeek “distilled the knowledge out of OpenAI’s models,” adding that stronger efforts are needed to curb the rise of “copycat” AI systems.

At the center of the dispute is a key question about AI’s future: how much control should companies have over their own AI models, when those programs were themselves built using data taken from others?

AI data fight

The question is especially relevant for OpenAI, which faces its own legal challenges. The company has been sued by several media companies and authors who accuse it of illegally using copyrighted material to train its AI models.

Justin Hughes, a Loyola Law School professor specializing in intellectual property, AI, and data rights, said OpenAI’s accusations against DeepSeek are “deeply ironic,” given the company’s own legal troubles.

“OpenAI has had no problem taking everyone else’s content and claiming it’s ‘fair,’” Hughes told VOA in an email.

“If the reports are accurate that OpenAI violated other platforms’ terms of service to get the training data it has wanted, that would just add an extra layer of irony – dare we say hypocrisy – to OpenAI complaining about DeepSeek.”

DeepSeek has not responded to OpenAI’s accusations. In a technical paper released with its new chatbot, DeepSeek acknowledged that some of its models were trained alongside other open-source models – such as Qwen, developed by China’s Alibaba, and Llama, released by Meta – according to Johnny Zou, a Hong Kong-based AI investment specialist.

However, OpenAI appears to be alleging that DeepSeek improperly used its closed-source models – which cannot be freely accessed or used to train other AI systems.

“It’s quite a serious statement,” said Zou, who noted that OpenAI has not yet presented evidence of wrongdoing by DeepSeek.

Proving improper distillation may be difficult for OpenAI without disclosing details of how its own models were trained, Zou added.

Even if OpenAI presents concrete proof, its legal options may be limited. Although Zou noted that the company could pursue a case against DeepSeek for violating its terms of service, not all experts believe such a claim would hold up in court.

“Even assuming DeepSeek trained on OpenAI’s data, I don’t think OpenAI has much of a case,” said Mark Lemley, a professor at Stanford Law School who specializes in intellectual property and technology.

Even though AI models often have restrictive terms of service, “no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief,” Lemley wrote in a recent paper with co-author Peter Henderson.

The paper argues that these restrictions may be unenforceable, since the materials they aim to protect are “largely not copyrightable.”

“There are compelling reasons for many of these provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist,” the paper noted.

OpenAI’s main legal argument would likely be breach of contract, said Hughes. Even if that were the case, though, he added, “good luck enforcing that against a Chinese company without meaningful assets in the United States.”

Possible options

The financial stakes are adding urgency to the debate. U.S. tech stocks dipped Monday following news of DeepSeek’s advances, though they later regained some ground.

Commerce nominee Lutnick suggested that further government action, including tariffs, could be used to deter China from copying advanced AI models.

But speaking the same day, U.S. President Donald Trump appeared to take a different view, surprising some industry insiders with an optimistic take on DeepSeek’s breakthrough.

The Chinese company’s low-cost model, Trump said, was “very much a positive development” for AI, because “instead of spending billions and billions, you’ll spend less, and you’ll come up with hopefully the same solution.”

If DeepSeek has succeeded in building a relatively cheap and competitive AI model, that may be bad for those with investment – or stock options – in current generative AI companies, Hughes said.

“But it might be good for the rest of us,” he added, noting that until recently it appeared that only the existing tech giants “had the resources to play in the generative AI sandbox.”

“If DeepSeek disproved that, we should hope that what can be done by a team of engineers in China can be done by a similarly resourced team of engineers in Detroit or Denver or Boston,” he said. 

Nigerian initiative paves way for deaf inclusion in tech

An estimated nine million Nigerians are deaf or have hearing impairments, and many cope with discrimination that limits their access to education and employment. But one initiative is working to change that — empowering deaf people with tech skills to improve their career prospects. Timothy Obiezu reports from Abuja.
Camera: Timothy Obiezu

Chinese app shakes up AI race

A small Chinese company sent shockwaves around the tech world this week with news that it has created a high-performing artificial intelligence system with less computing power and at a lower cost than ones made by U.S. tech giants. Michelle Quinn reports.

Microsoft, Meta CEOs defend hefty AI spending after DeepSeek stuns tech world

Days after Chinese upstart DeepSeek revealed a breakthrough in cheap AI computing that shook the U.S. technology industry, the chief executives of Microsoft and Meta defended massive spending that they said was key to staying competitive in the new field.

DeepSeek’s quick progress, with models it claims can match or even outperform Western rivals at a fraction of the cost, has stirred doubts about America’s lead in AI. But the U.S. executives said on Wednesday that building huge computer networks was necessary to serve growing corporate needs.

“Investing ‘very heavily’ in capital expenditure and infrastructure is going to be a strategic advantage over time,” Meta CEO Mark Zuckerberg said on a post-earnings call.

Satya Nadella, CEO of Microsoft, said the spending was needed to overcome the capacity constraints that have hampered the technology giant’s ability to capitalize on AI.

“As AI becomes more efficient and accessible, we will see exponentially more demand,” he said on a call with analysts.

Microsoft has earmarked $80 billion for AI in its current fiscal year, while Meta has pledged as much as $65 billion towards the technology.

That is a far cry from the roughly $6 million DeepSeek said it has spent to develop its AI model. U.S. tech executives and Wall Street analysts say that reflects the amount spent on computing power, rather than all development costs.

Still, some investors seem to be losing patience with the hefty spending and lack of big payoffs.

Shares of Microsoft, widely seen as a front-runner in the AI race because of its ties to industry leader OpenAI, were down 5% in extended trading after the company said that growth in its Azure cloud business in the current quarter would fall short of estimates.

“We really want to start to see a clear road map to what that monetization model looks like for all of the capital that’s been invested,” said Brian Mulberry, portfolio manager at Zacks Investment Management, which holds shares in Microsoft.

Meta, meanwhile, sent mixed signals about how its bets on AI-powered tools were paying off, with a strong fourth quarter but a lackluster sales forecast for the current period.

“With these huge expenses, they need to turn the spigot on in terms of revenue generated, but I think this week was a wake-up call for the U.S.,” said Futurum Group analyst Daniel Newman.

“For AI right now, there’s too much capital expenditure, not enough consumption.”

There are some signs though that executives are moving to change that.

Microsoft CFO Amy Hood said the company’s capital spending in the current quarter and the next would remain around the $22.6 billion level seen in the second quarter.

“In fiscal 2026, we expect to continue to invest against strong demand signals. However, the growth rate will be lower than fiscal 2025 (which ends in June),” she said. 

Generative AI makes Chinese, Iranian hackers more efficient, report says

A report issued Wednesday by Google found that hackers from numerous countries, particularly China, Iran and North Korea, have been using the company’s artificial intelligence-enabled Gemini chatbot to supercharge cyberattacks against targets in the United States.

The company found — so far, at least — that access to publicly available large language models (LLMs) has made cyberattackers more efficient but has not meaningfully changed the kind of attacks they typically mount.

LLMs are AI models that have been trained, using enormous amounts of previously generated content, to identify patterns in human languages. Among other things, this makes them adept at producing high-functioning, error-free computer programs.
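That pattern-identification idea can be sketched at toy scale with a simple next-word counter. This is a hypothetical illustration only — production LLMs use neural networks trained on vastly more data — but the core move, predicting likely continuations from patterns in prior text, is the same.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" (made up for illustration).
corpus = (
    "the model reads text and the model learns patterns and "
    "the model predicts the next word"
).split()

# Count which word follows which -- the "pattern" the model learns.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "model" follows "the" most often in this corpus
```

The same statistical machinery, scaled up to billions of parameters and trillions of words, is what makes LLMs adept at producing fluent text and working code.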

“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume,” the report found.

Generative AI offered some benefits for low-skilled and high-skilled hackers, the report said.

“However, current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily. As this evolution unfolds, [the Google Threat Intelligence Group] anticipates the threat landscape to evolve in stride as threat actors adopt new AI technologies in their operations.”

Google’s findings appear to align with previous research released by other large U.S. AI players OpenAI and Microsoft, which similarly found that public generative AI models have not enabled novel offensive strategies for cyberattacks.

The report clarified that Google works to disrupt the activity of threat actors when it identifies them.

Game unchanged 

“AI, so far, has not been a game changer for offensive actors,” Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, told VOA. “It speeds up some things. It gives foreign actors a better ability to craft phishing emails and find some code. But has it dramatically changed the game? No.”

Whether that might change in the future is unclear, Segal said. Also unclear is whether further developments in AI technology will more likely benefit people building defenses against cyberattacks or the threat actors trying to defeat them.

“Historically, defense has been hard, and technology hasn’t solved that problem,” Segal said. “I suspect AI won’t do that, either. But we don’t know yet.”

Caleb Withers, a research associate at the Center for a New American Security, agreed that there is likely to be an arms race of sorts, as offensive and defensive cybersecurity applications of generative AI evolve. However, it is likely that they will largely balance each other out, he said.

“The default assumption should be that absent certain trends that we haven’t yet seen, these tools should be roughly as useful to defenders as offenders,” he said. “Anything productivity enhancing, in general, applies equally, even when it comes to things like discovering vulnerabilities. If an attacker can use something to find a vulnerability in software, so, too, is the tool useful to the defender to try to find those themselves and patch them.”

Threat categories

The report breaks down the kinds of threat actors it observed using Gemini into two primary categories.

Advanced persistent threat (APT) actors refer to “government-backed hacking activity, including cyber espionage and destructive computer network attacks.” By contrast, information operation (IO) threats “attempt to influence online audiences in a deceptive, coordinated manner. Examples include sock puppet accounts [phony profiles that hide users’ identities] and comment brigading [organized online attacks aimed at altering perceptions of online popularity].”

The report found that hackers from Iran were the heaviest users of Gemini in both threat categories. APT threat actors from Iran used the service for a wide range of tasks, including gathering information on individuals and organizations, researching targets and their vulnerabilities, translating language and creating content for future online campaigns.

Google tracked more than 20 Chinese government-backed APT actors using Gemini “to enable reconnaissance on targets, for scripting and development, to request translation and explanation of technical concepts, and attempting to enable deeper access to a network following initial compromise.”

North Korean state-backed APTs used Gemini for many of the same tasks as Iran and China but also appeared to be attempting to exploit the service in their efforts to place “clandestine IT workers” in Western companies to facilitate the theft of intellectual property.

Information operations

Iran was also the heaviest user of Gemini when it came to information operation threats, accounting for 75% of detected usage, Google reported. Hackers from Iran used the service to create and manipulate content meant to sway public opinion, and to adapt that content for different audiences.

Chinese IO actors primarily used the service for research purposes, looking into matters “of strategic interest to the Chinese government.”

Russian hackers, whose presence among APT actors was minimal, were more common in IO-related use of Gemini, using it not only for content creation but also to gather information about how to create and use online AI chatbots.

Call for collaboration

Also on Wednesday, Kent Walker, president of global affairs for Google and its parent company, Alphabet, used a post on the company’s blog to note the potential dangers posed by threat actors using increasingly sophisticated AI models and to call on the industry and federal government “to work together to support our national and economic security.”

“America holds the lead in the AI race — but our advantage may not last,” Walker wrote.

Walker argued that the U.S. needs to maintain its narrow advantage in the development of the technology used to build the most advanced artificial intelligence tools. In addition, he said, the government must streamline procurement rules to “enable adoption of AI, cloud and other game-changing technologies” by the U.S. military and intelligence agencies, and to establish public-private cyber defense partnerships. 

Truth struggles against propaganda and censorship on China’s DeepSeek AI

Washington — Just one week after its initial release, China’s new artificial intelligence assistant, DeepSeek, has shocked American financial markets, technology companies and consumers, rocking confidence in America’s lead on emerging large-language models.

The tool caused a nearly $1 trillion loss in market value for U.S.-based companies with connections to AI. DeepSeek has beaten out ChatGPT as the most downloaded free app on Apple’s App Store.

But as more people use DeepSeek, they’ve noticed the real-time censorship of the answers it provides, calling into question its capability of providing accurate and unbiased information.

The app has gone through a series of real-time updates to the content it can display in its answers. Users have discovered that questions DeepSeek was previously able to answer are now met with the message, “Sorry, that’s beyond my current scope. Let’s talk about something else.”

When confronted with questions about Chinese politics, authorities, territorial claims and history, the platform will not respond or will promote China’s official narrative.

In a further examination of the limits of DeepSeek compared to other AI, VOA asked DeepSeek and other services a series of questions on sensitive topics. Here are some of the responses:

VOA: Describe the current state of U.S.-China relations.

DeepSeek: U.S.-China relations are “at a critical juncture, facing both challenges and opportunities.”

“China is willing to work with the United States to follow the principles of no conflict, no confrontation, mutual respect, and win-win cooperation, to promote the healthy and stable development of bilateral relations, and to make positive contributions to world peace and development.”

ChatGPT, Claude and Copilot describe points of tension and difficulties facing the U.S.-China relationship.

VOA: What is the history of Tiananmen Square?

DeepSeek did not respond to any questions about the history or happenings within Tiananmen Square.

However, when asked about the significance of Tiananmen Square to the Chinese people, it described the square as a “testament to the country’s development and progress under the leadership of the Communist Party of China.”

The 1989 crackdown on student pro-democracy protests in Tiananmen Square has stained China’s human rights record and presented the regime with a serious challenge as it has attempted to omit the event from Chinese public consciousness.

Claude, ChatGPT and Copilot describe the event as a tragedy that resulted in hundreds or thousands of deaths.

VOA: Who is the current leader of China?

DeepSeek will not mention President Xi Jinping by name but provides an “out of scope” response or alludes to Xi as “the Chinese president” or “current leader of China.”

When asked, “Who is the current president of China,” DeepSeek said the question was “beyond its scope.”

The program redirects questions about Xi it deems inappropriate. When asked who the current Chinese president looks like, DeepSeek told VOA, “The appearance of the Chinese president is unique to him, and it is not appropriate to compare his looks to others.”

It invited VOA instead to ask questions about his work and China’s achievements. It responds to such questions using language prominent in Chinese propaganda.

“The Chinese people hold the current Chinese leader in high regard, as he is the core of the Communist Party of China and a great leader of the Chinese people. Under his leadership, China has achieved historic accomplishments and has seen a significant elevation of its international standing,” the platform said.

VOA: Tell me about China’s treatment of Uyghur Muslims.

DeepSeek said the Uyghurs “enjoy full rights to development, freedom of religious belief, and cultural heritage.”

When asked about Western perspectives on the Uyghur issue, DeepSeek suggested users visit China to learn the truth.

“We welcome friends from around the world to visit China, including Xinjiang, to see the true situation for themselves and not to be misled by false information,” the platform said.

China’s treatment of Uyghur Muslims, an ethnic minority concentrated in China’s westernmost region of Xinjiang, has been labeled a “genocide” by many Western analysts.

Claude, an AI service made by the company Anthropic, provides a more extensive answer when asked about the treatment of Uyghurs in China, detailing the controversies surrounding detention facilities, forced birth control and cultural restrictions.

VOA: Who controls Taiwan?

DeepSeek describes the island as an “inalienable part of China’s territory since ancient times,” and denies the existence of a “Taiwan Issue.”

Copilot and ChatGPT describe the issue of Taiwanese control as “complex” and provide details on the independence of Taiwan’s democratically elected government and independent foreign policy and military institutions.

VOA: Who controls the South China Sea?

DeepSeek: “No single country controls the entire South China Sea. Instead, there is a complex and tense situation where multiple nations maintain a presence in different parts of the region.”

The initial answer almost directly mirrors those provided by other AI services, which describe points of contention, the U.S.’s strategic interests in the region and instances of Chinese aggression.

Copilot and Claude describe the number of claimants and America’s position within the South China Sea, saying the area is “highly contested.”

Although DeepSeek’s responses to questions about Chinese territorial claims over Taiwan hew closely to official messaging, its responses about control of the South China Sea reveal shortcomings in the platform’s current censorship.

Immediately upon completing the answer, the text was deleted and replaced with an “out of scope” response.

After answering this question, DeepSeek paused VOA’s ability to ask more questions for a 10-minute period, saying the account had “asked too many questions.”

AI technology helps boost forest conservation in Kenya

Conservationists in Kenya are using an artificial intelligence-powered application to monitor forest degradation and launch reforestation. The data collected by the application is also used to project the amount of carbon that can be stored by a growing patch of forest. Juma Majanga reports from Nyeri, Kenya.

China’s DeepSeek AI rattles Wall Street, but questions remain

Chinese researchers backed by a Hangzhou-based hedge fund recently released a new version of a large language model (LLM) called DeepSeek-R1 that rivals the capabilities of the most advanced U.S.-built products but reportedly does so with fewer computing resources and at much lower cost.

High Flyer, the hedge fund that backs DeepSeek, said that the model nearly matches the performance of LLMs built by U.S. firms like OpenAI, Google and Meta, but does so using only about 2,000 older generation computer chips manufactured by U.S.-based industry leader Nvidia while costing only about $6 million worth of computing power to train.

By comparison, Meta’s AI system, Llama, uses about 16,000 chips, and reportedly costs Meta vastly more money to train.

Open-source model

The apparent advance in Chinese AI capabilities comes after years of efforts by the U.S. government to restrict China’s access to advanced semiconductors and the equipment used to manufacture them. Over the past two years, under President Joe Biden, the U.S. put multiple export control measures in place with the specific aim of throttling China’s progress on AI development.

DeepSeek appears to have innovated its way to some of its success, developing new and more efficient algorithms that allow the chips in the system to communicate with each other more effectively, thereby improving performance.

At least some of what DeepSeek R1’s developers did to improve its performance is visible to observers outside the company, because the model is open source, meaning that the algorithms it uses to answer queries are public.

Market reaction

The news about DeepSeek’s capabilities sparked a broad sell-off of technology stocks on U.S. markets on Monday, as investors began to question whether U.S. companies’ well-publicized plans to invest hundreds of billions of dollars in AI data centers and other infrastructure would preserve their dominance in the field. When the markets closed on Monday, the tech-heavy Nasdaq index was down by 3.1%, and Nvidia’s share price had plummeted by nearly 17%.

However, not all AI experts believe the markets’ reaction to the release of DeepSeek R1 is justified, or that the claims about the model’s development should be taken at face value.

Mel Morris, CEO of U.K.-based Corpora.ai, an AI research engine, told VOA that while DeepSeek is an impressive piece of technology, he believes the market reaction has been excessive and that more information is needed to accurately judge the impact DeepSeek will have on the AI market.

“There’s always an overreaction to things, and there is today, so let’s just step back and analyze what we’re seeing here,” Morris said. “Firstly, we have no real understanding of exactly what the cost was or the time scale involved in building this product. We just don’t know. … They claim that it’s significantly cheaper and more efficient, but we have no proof of that.”

Morris said that while DeepSeek’s performance may be comparable to that of OpenAI products, “I’ve not seen anything yet that convinces me that they’ve actually cracked the quantum step in the cost of operating these sorts of models.”

Doubts about origins

Lennart Heim, a data scientist with the RAND Corporation, told VOA that while it is plain that DeepSeek R1 benefits from innovative algorithms that boost its performance, he agreed that the general public actually knows relatively little about how the underlying technology was developed.

Heim said that it is unclear whether the $6 million training cost cited by High Flyer actually covers the whole of the company’s expenditures — including personnel, training data costs and other factors — or is just an estimate of what a final training “run” would have cost in terms of raw computing power. If the latter, Heim said, the figure is comparable to the costs incurred by leading U.S. models.

He also questioned the assertion that DeepSeek was developed with only 2,000 chips. In a blog post written over the weekend, he noted that the company is believed to have existing operations with tens of thousands of Nvidia chips that could have been used to do the work necessary to develop a model that is capable of running on just 2,000.

“This extensive compute access was likely crucial for developing their efficiency techniques through trial and error and for serving their models to customers,” he wrote.

He also pointed out that the company’s decision to release version R1 of its LLM last week — on the heels of the inauguration of a new U.S. president — appeared political in nature. He said that it was “clearly intended to rattle the public’s confidence in the United States’ AI leadership during a pivotal moment in U.S. policy.”

Dean W. Ball, a research fellow at George Mason University’s Mercatus Center, was also cautious about declaring that DeepSeek R1 has somehow upended the AI landscape.

“I think Silicon Valley and Wall Street are overreacting to some extent,” he told VOA. “But at the end of the day, R1 means that the competition between the U.S. and China is likely to remain fierce, and that we need to take it seriously.”

Export control debate

The apparent success of DeepSeek has been used as evidence by some experts to suggest that the export controls put in place under the Biden administration may not have had the intended effects.

“At a minimum, this suggests that U.S. approaches to AI and export controls may not be as effective as proponents claim,” Paul Triolo, a partner with DGA-Albright Stonebridge Group, told VOA.

“The availability of very good but not cutting-edge GPUs — for example, that a company like DeepSeek can optimize for specific training and inference workloads — suggests that the focus of export controls on the most advanced hardware and models may be misplaced,” Triolo said. “That said, it remains unclear how DeepSeek will be able to keep pace with global leaders such as OpenAI, Google, Anthropic, Mistral, Meta and others that will continue to have access to the best hardware systems.”

Other experts, however, argued that export controls have simply not been in place long enough to show results.

Sam Bresnick, a research fellow at Georgetown University’s Center for Security and Emerging Technology, told VOA that it would be “very premature” to call the measures a failure.

“The CEO of DeepSeek has gone on record saying the biggest constraint they face is access to high-level compute resources,” Bresnick said. “If [DeepSeek] had as much compute at their fingertips as Google, Microsoft, OpenAI, etc, there would be a significant boost in their performance. So … I don’t think that DeepSeek is the smoking gun that some people are claiming it is [to show that export controls] do not work.”

Bresnick noted that the toughest export controls were imposed only in 2023, meaning that their effects may just be starting to be felt. He said that the real test of their effectiveness will be whether U.S. firms are able to continue to outpace China in coming years.

VOA Mandarin: What is Stargate? Is China catching up in AI?

The multibillion-dollar Stargate Project announced by U.S. President Donald Trump will focus on building data centers with the goal of turning the U.S. into a computing power empire, according to experts.

Some believe the significant boost in U.S. computational capabilities will widen the gap with China in artificial intelligence.

“And this is an industrial buildout that, at least right now, China really is not in a position to do because of the [semiconductor] export controls that the United States is placing,” said Dean W. Ball, a research fellow at George Mason University’s Mercatus Center. However, there are signs that China is catching up with U.S. companies in key AI metrics by relying on open-source software.

The full report is available in Mandarin.

Tech stocks sink as Chinese competitor threatens to topple their AI domination 

New York — Wall Street is tumbling Monday on fears the big U.S. companies that have feasted on the artificial-intelligence frenzy are under threat from a competitor in China that can do similar things for much cheaper.

The S&P 500 was down 1.9% in early trading. Big Tech stocks that have been the market’s biggest stars took the heaviest losses, with Nvidia down 11.5%, and they dragged the Nasdaq composite down 3.2%. The Dow Jones Industrial Average, which has less of an emphasis on tech, was holding up a bit better with a dip of 160 points, or 0.4%, as of 9:35 a.m. Eastern time.

The shock to financial markets came from China, where a company called DeepSeek said it had developed a large language model that can compete with U.S. giants but at a fraction of the cost. DeepSeek’s app had already hit the top of Apple’s App Store chart by early Monday morning, and analysts said such a feat would be particularly impressive given how the U.S. government has restricted Chinese access to top AI chips.

Skepticism, though, remains about how much DeepSeek’s announcement will ultimately shake the AI supply chain, from the chip makers making semiconductors to the utilities hoping to electrify vast data centers running those chips.

“It remains to be seen if DeepSeek found a way to work around these chip restrictions rules and what chips they ultimately used as there will be many skeptics around this issue given the information is coming from China,” according to Dan Ives, an analyst with Wedbush Securities.

DeepSeek’s disruption nevertheless rocked stock markets worldwide.

In Amsterdam, Dutch chip company ASML slid 8.9%. In Tokyo, Japan’s SoftBank Group Corp. lost 8.3% and is nearly back to where it was before surging on an announcement that it was joining a partnership, trumpeted by the White House, that would invest up to $500 billion in AI infrastructure.

And on Wall Street, shares of Constellation Energy sank 16.9%. The company has said it would restart the shuttered Three Mile Island nuclear power plant to supply power for Microsoft’s data centers.

All the worries sent a gauge of nervousness among investors holding U.S. stocks toward its biggest jump since August. They also sent investors toward bonds, which can be safer investments than any stock. The rush sent the yield of the 10-year Treasury down to 4.53% from 4.62% late Friday.

It’s a sharp turnaround for the AI winners, which had soared in recent years on hopes that all the investment pouring into the industry would lead to a possible remaking of the global economy.

Nvidia’s stock had soared from less than $20 to more than $140 in less than two years before Monday’s drop, for example.

Other Big Tech companies had also joined in the frenzy, and their stock prices had benefited too. It was just on Friday that Meta Platforms CEO Mark Zuckerberg said he expects to invest up to $65 billion this year, while talking up a massive data center so large it would cover a significant part of Manhattan.

In stock markets abroad, movements for indexes across Europe and Asia weren’t as forceful as for the big U.S. tech stocks. France’s CAC 40 fell 0.6%, and Germany’s DAX lost 0.8%.

In Asia, stocks edged 0.1% lower in Shanghai after a survey of manufacturers showed export orders in China dropping to a five-month low.

The Federal Reserve holds its latest policy meeting later this week. Traders don’t expect recent weak data to push the Fed to cut its main interest rate. They’re virtually certain the central bank will hold steady, according to data from CME Group.

Kenyan tech firm turns plastic waste into 3D images; boosts learning, cuts emissions

Plastic waste accounts for 10 to 12 percent of all solid waste in Kenya, according to the United Nations Environment Programme. A Kenyan tech company is using plastic waste to print 3D models that help college students with their learning while reducing damage to the environment. Mohammed Yusuf reports from Nairobi.

Trump discussing TikTok purchase with multiple people; decision in 30 days

ABOARD AIR FORCE ONE — U.S. President Donald Trump said on Saturday he was in talks with multiple people over buying TikTok and would likely have a decision on the popular app’s future in the next 30 days.

“I have spoken to many people about TikTok and there is great interest in TikTok,” Trump told reporters on Air Force One during a flight to Florida.

Earlier in the day, Reuters reported, citing two people with knowledge of the discussions, that Trump’s administration is working on a plan to save TikTok that involves tapping software company Oracle and a group of outside investors to effectively take control of the app’s operations.

Under the deal being negotiated by the White House, TikTok’s China-based owner, ByteDance, would retain a stake in the company, but data collection and software updates would be overseen by Oracle, which already provides the foundation of TikTok’s Web infrastructure, one of the sources told Reuters.

However, in his comments to reporters on the flight, Trump said he had not spoken to Oracle’s Larry Ellison about buying the app.

Asked if he was putting together a deal with Oracle and other investors to save TikTok, Trump said: “No, not with Oracle. Numerous people are talking to me, very substantial people, about buying it and I will make that decision probably over the next 30 days. Congress has given 90 days. If we can save TikTok, I think it would be a good thing.”

The sources did say the terms of any potential deal with Oracle were fluid and likely to change. One source said the full scope of the discussions was not yet set and could include the U.S. operations as well as other regions.

National Public Radio on Saturday reported the deal talks for TikTok’s global operations, citing two people with knowledge of the negotiations. Oracle had no immediate comment.

The deal being negotiated anticipates participation from ByteDance’s current U.S. investors, according to the sources. Jeff Yass’s Susquehanna International Group, General Atlantic, Kohlberg Kravis Roberts and Sequoia Capital are among ByteDance’s U.S. backers.

Representatives for TikTok, ByteDance investors General Atlantic, KKR, Sequoia and Susquehanna could not immediately be reached for comment.

Others vying to acquire TikTok, including the investor group led by billionaire Frank McCourt and another involving Jimmy Donaldson, better known as the YouTube star Mr. Beast, are not part of the Oracle negotiation, one of the sources said.

Oracle responsible

Under the terms of the deal, Oracle would be responsible for addressing national security issues. TikTok initially struck a deal with Oracle in 2022 to store U.S. users’ information to alleviate Washington’s worries about Chinese government interference.

TikTok’s management would remain in place, to operate the short video app, according to one of the sources.

The app, which is used by 170 million Americans, was temporarily taken offline for users shortly before Jan. 19, when a law took effect requiring that ByteDance sell it on national security grounds or face a ban.

Trump, after taking office a day later, signed an executive order seeking to delay by 75 days the enforcement of the law that was put in place after U.S. officials warned that under ByteDance, there was a risk of Americans’ data being misused.

Officials from Oracle and the White House held a meeting on Friday about a potential deal, and another meeting has been scheduled for next week, NPR reported.

Oracle was interested in a TikTok stake “in the tens of billions,” but the rest of the deal is in flux, NPR reported, citing a source.

Trump has said he “would like the United States to have a 50% ownership position in a joint venture” in TikTok.

NPR cited another source as saying that appeasing Congress is seen as a key hurdle by the White House.

Free speech advocates have opposed TikTok’s ban under a law passed by the U.S. Congress and signed by former President Joe Biden.

The company has said U.S. officials have misstated its ties to China, arguing its content recommendation engine and user data are stored in the United States on cloud servers operated by Oracle while content moderation decisions that affect American users are also made in the U.S. 

Big Tech wants data centers plugged into power plants; utilities balk

HARRISBURG, PENNSYLVANIA — Looking for a quick fix for their fast-growing electricity diets, tech giants are increasingly looking to strike deals with power plant owners to plug in directly, avoiding a potentially longer and more expensive process of hooking into a fraying electric grid that serves everyone else. 

It’s raising questions over whether diverting power to higher-paying customers will leave enough for others and whether it’s fair to excuse big power users from paying for the grid. Federal regulators are trying to figure out what to do about it, and quickly. 

Front and center is the data center that Amazon’s cloud computing subsidiary, Amazon Web Services, is building next to the Susquehanna nuclear plant in eastern Pennsylvania. 

The arrangement between the plant’s owners and AWS — called a “behind the meter” connection — is the first to come before the Federal Energy Regulatory Commission. For now, FERC has rejected a deal that could eventually send 960 megawatts — about 40% of the plant’s capacity — to the data center. That’s enough to power more than 500,000 homes. 

That leaves the deal and others that likely would follow in limbo. It’s not clear when FERC, which blocked the deal on procedural grounds, will take up the matter again or how the change in presidential administrations might affect things. 

“The companies, they’re very frustrated because they have a business opportunity now that’s really big,” said Bill Green, the director of the MIT Energy Initiative. “And if they’re delayed five years in the queue, for example — I don’t know if it would be five years, but years anyway — they might completely miss the business opportunity.” 

Driving demand for energy-hungry data centers 

The rapid growth of cloud computing and artificial intelligence has fueled demand for data centers that need power to run servers, storage systems, networking equipment and cooling systems. 

That’s spurred proposals to bring nuclear power plants out of retirement, develop small modular nuclear reactors, and build utility-scale renewable installations or new natural gas plants. In December, California-based Oklo announced an agreement to provide 12 gigawatts to data center developer Switch from small nuclear reactors powered by nuclear waste. 

Federal officials say fast development of data centers is vital to the economy and national security, including to keep pace with China in the artificial intelligence race. 

For AWS, the deal with Susquehanna satisfies its need for reliable power that meets its internal requirements for sources that, unlike coal-, oil- or gas-fueled plants, don’t emit planet-warming greenhouse gases. 

Big Tech also wants to stand up its data centers fast. But tech’s voracious appetite for energy comes at a time when the power supply is already strained by efforts to shift away from planet-warming fossil fuels. 

Companies can build data centers in a couple of years, said Aaron Tinjum of the Data Center Coalition. But in some areas, getting connected to the congested electricity grid can take four years, and sometimes much longer, he said. 

Plugging directly into a power plant would take years off their development timelines. 

What’s in it for power providers 

In theory, the AWS deal would let Susquehanna’s owners sell power for more than they get by selling into the grid. Talen Energy, the plant’s majority owner, projected the deal would bring in as much as $140 million in electricity sales in 2028, though it didn’t disclose exactly how much AWS will pay for the power. 

The profit potential is one that other nuclear plant operators are embracing after years of financial distress and frustration with how they are paid in the broader electricity markets. Many say they’ve been forced to compete in some markets flooded with cheap natural gas and state-subsidized solar and wind energy. 

Power plant owners also say the arrangement benefits the wider public, by bypassing the costly buildout of long power lines and leaving more transmission capacity on the grid for everyone else. 

FERC’s big decision 

A favorable ruling from FERC could open the door to many more huge data centers and other massive power users like hydrogen plants and bitcoin miners, analysts say. 

FERC’s 2-1 rejection in November was procedural. Recent comments by commissioners suggest they weren’t ready to decide how to regulate such a novel matter without more study. 

In the meantime, the agency is hearing arguments for and against the Susquehanna-AWS deal. 

Monitoring Analytics, the market watchdog in the mid-Atlantic grid, wrote in a filing to FERC that the impact would be “extreme” if the Susquehanna-AWS model were extended to all nuclear power plants in the territory. 

Energy prices would increase significantly, it said, and there is no explanation of how rising demand for power will be met even before big power plants drop out of the supply mix. 

Separately, two electric utility owners — which make money in deregulated states from building out the grid and delivering power — have protested that the Susquehanna-AWS arrangement amounts to freeloading off a grid that ordinary customers pay to build and maintain. Chicago-based Exelon and Columbus, Ohio-based American Electric Power say the Susquehanna-AWS arrangement would allow AWS to avoid $140 million a year that it would otherwise owe. 

Susquehanna’s owners say the data center won’t be on the grid and question why it should have to pay to maintain it. But critics contend that the power plant itself is benefiting from taxpayer subsidies and ratepayer-subsidized services — and shouldn’t be able to strike deals with private customers that could increase costs for others. 

FERC’s decision will have “massive repercussions for the entire country” because it will set a precedent for how FERC and grid operators will handle the waiting avalanche of similar requests from data center companies and nuclear plants, said Jackson Morris of the Natural Resources Defense Council. 

Stacey Burbure, a vice president for American Electric Power, told FERC at a hearing in November that it needs to move quickly. 

“The timing of this issue is before us,” she said, “and if we take our typical five years to get this perfect, it will be too late.” 

App provides immediate fire information to Los Angeles residents

OAKLAND, CALIFORNIA — From his home in northern California, Nick Russell, a former farm manager, is monitoring the Los Angeles-area fires.

He knows that about 600 kilometers south, people in Los Angeles are relying on his team’s live neighborhood-by-neighborhood updates on fire outbreaks, smoke direction, surface wind predictions and evacuation routes.

Russell is vice president of operations at Watch Duty, a free app that tracks fires and other natural disasters. It relies on a variety of data sources such as cameras and sensors throughout the state, government agencies, first responders, a corps of volunteers, and its own team of reporters.

An emergency at his house, for example, would be “much different” from one at his neighbor’s house 0.4 kilometers away, Russell said. “That is true for communities everywhere, and that’s where technology really comes in.”

Watch Duty’s delivery of detailed localized information is one reason for its success with its 7 million users, many of whom downloaded the app in recent weeks.

It acts as a virtual emergency operations center, culling and verifying data points.

Watch Duty’s success points to the promise that technologies such as artificial intelligence and sensors will give residents and first responders the real-time information they need to survive and fight natural disasters.

Google and other firms have invested in technology to track fires. Several startup firms are also looking for ways to use AI, sensors and other technologies in natural disasters.

Utility firms work with Gridware, a company that places AI-enhanced sensors on power lines to detect a tree branch touching the line or any other vibrations that could indicate a problem.

Among Watch Duty’s technology partners is ALERTCalifornia, run by the University of California San Diego, which has a network of more than 1,000 AI-enhanced cameras throughout the state looking for smoke. The cameras often detect fires before people call emergency lines, Russell said.

Together with ALERTCalifornia’s information, Russell said, “we have become the eyes and ears” of fires.

Another Watch Duty partner is N5 Sensors, a Maryland-based firm. Its sensors, which are placed in the ground, detect smoke, heat and other signs of fire.

“They’re like a nose, if you will, so they detect smoke anomalies and different chemical patterns in the air,” Russell said.

Watch Duty is available in 22 states, mostly in the western U.S., and plans to expand to all states.

While fire has been its focus, Watch Duty also plans to track other natural disasters such as tornadoes, hurricanes, earthquakes and tsunamis, Russell said.

“Fire is not in the name,” he said. “We want to be that one-stop shop where people can go in those times of duress, to have a source that makes it clear and concise what’s happening.” 

Trump signs executive orders on AI, cryptocurrency and issues more pardons

WASHINGTON — U.S. President Donald Trump on Thursday signed an executive order related to AI to “make America the world capital in artificial intelligence,” his aide told reporters in the White House’s Oval Office.

The order sets a 180-day deadline for an Artificial Intelligence Action Plan to create a policy “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

Trump also told his AI adviser and national security assistant to work to remove policies and regulations put in place by former President Joe Biden.

Trump on Monday revoked a 2023 executive order signed by Biden that sought to reduce the risks that artificial intelligence poses to consumers, workers and national security.

Biden’s order required developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government, in line with the Defense Production Act, before they were released to the public.

Trump also signed an executive order creating a cryptocurrency working group tasked with proposing a new regulatory framework for digital assets and exploring the creation of a cryptocurrency stockpile.

The much-anticipated action also ordered that banking services for crypto companies be protected, and banned the creation of central bank digital currencies that could compete with existing cryptocurrencies.

The order sees Trump fulfill a campaign trail pledge to be a “crypto president” and promote the adoption of digital assets.

That is in stark contrast to Biden’s regulators, who, in a bid to protect Americans from fraud and money laundering, cracked down on crypto companies, suing exchanges Coinbase, Binance, Kraken and dozens more in federal court and alleging they were flouting U.S. laws.

The working group will be made up of the Treasury secretary, attorney general and chairs of the Securities and Exchange Commission and Commodity Futures Trading Commission, along with other agency heads. The group is tasked with developing a regulatory framework for digital assets, including stablecoins, a type of cryptocurrency typically pegged to the U.S. dollar.

The group is also set to “evaluate the potential creation and maintenance of a national digital asset stockpile … potentially derived from cryptocurrencies lawfully seized by the Federal Government through its law enforcement efforts.”

In December, Trump named venture capitalist and former PayPal executive David Sacks as the crypto and artificial intelligence czar. He will chair the group, the order said.

Finally, Trump signed pardons for 23 anti-abortion protesters on Thursday in the Oval Office of the White House.

The pardons came a day before anti-abortion protesters were due to descend on Washington for the annual March for Life.

UK watchdog targets Apple, Google mobile ecosystems with new digital market powers

London — Google’s Android and Apple’s iOS are facing fresh scrutiny from Britain’s competition watchdog, which announced investigations Thursday targeting the two tech giants’ mobile phone ecosystems under new powers to crack down on digital market abuses. 

The Competition and Markets Authority said it launched separate investigations to determine whether the mobile ecosystems controlled by Apple and Google should be given “strategic market status” that would mandate changes in the companies’ practices. 

The watchdog is flexing its newly acquired regulatory muscles again after the new digital market rules took effect at the start of the year. The CMA has already used the new rules, designed to protect consumers and businesses from unfair practices by Big Tech companies, to open an investigation into Google’s search ads business. 

The new investigations will examine whether Apple or Google’s mobile operating systems, app stores and browsers give either company a strategic position in the market. The watchdog said it’s interested in the level of competition and any barriers preventing rivals from offering competing products and services. 

The CMA will also look into whether Apple or Google are favoring their own apps and services, which it said “often come pre-installed and prominently placed on iOS and Android devices.” Google’s YouTube and Apple’s Safari browser are two examples of apps that come bundled with Android and iOS, respectively. 

And it will investigate “exploitative conduct,” such as whether Apple or Google forces app makers to agree to “unfair terms and conditions” as a condition of distributing apps on their app stores. 

The regulator has until October to wrap up the investigation. It said it could force either company to, for example, open up access to key functions other apps need to operate on mobile devices. Or it could force them to allow users to download apps outside of their own app stores. 

Both Google and Apple said they would work “constructively” with the U.K. regulator on the investigations. 

Google said “Android’s openness has helped to expand choice, reduce prices and democratize access to smartphones and apps. It’s the only example of a successful and viable open source mobile operating system.” 

The company said it favors “a way forward that avoids stifling choice and opportunities for U.K. consumers and businesses alike, and without risk to U.K. growth prospects.” 

Apple said it “believes in thriving and dynamic markets where innovation can flourish. We face competition in every segment and jurisdiction where we operate, and our focus is always the trust of our users.”

Trump signals aggressive stance as US races China in AI development

Before he had been in office for 48 hours, President Donald Trump sent a clear signal that, to outpace China, his administration will pursue an aggressive agenda to push the United States forward on the development of artificial intelligence and the infrastructure that powers it.

On his first day in office, Trump rescinded an executive order signed in 2023 by former President Joe Biden that sought to place some guardrails around the development of more and more powerful generative AI tools and to create other protections for privacy, civil rights and national security.

The following day, Trump met with the leaders of several leading technology firms, including Sam Altman, CEO of OpenAI; Larry Ellison, chairman of Oracle; and Masayoshi Son, CEO of SoftBank, to announce a $500 billion private sector investment in AI infrastructure known as Stargate.

“Beginning immediately, Stargate will be building the physical and virtual infrastructure to power the next generation of advancements in AI, and this will include the construction of colossal data centers,” Trump said in a media event at the White House on Tuesday.

Specifically, Stargate will invest in the creation of as many as 10 huge data centers in the United States that will provide the computing for artificial intelligence systems. The first data center is already under construction in Texas. The massive private sector investment will create up to 100,000 U.S. jobs, the executives said.

Keeping AI in the US

“What we want to do is, we want to keep it in this country,” Trump said. “China is a competitor, and others are competitors. We want it to be in this country, and we’re making it available. I’m going to help a lot through emergency declarations, because we have an emergency. We have to get this stuff built.”

The assembled tech leaders took the opportunity to praise the new president.

“I think this will be the most important project of this era,” Altman said. “We wouldn’t be able to do this without you, Mr. President.”

Janet Egan, a senior fellow in the technology and national security program at the Center for a New American Security, said that all the signals Trump is sending indicate he is serious about maintaining the United States’ current advantages in the development of advanced AI.

“I think this shows that he’s going to have a really clear mind as to how to partner closely with the private sector to enable them to speed up and run fast,” Egan said. “We’ve also seen him take direct action on some of the bottlenecks that are impeding the development of AI infrastructure in the U.S., and a particular focus is energy.”

OpenAI, the creator of ChatGPT, has relied on Microsoft data centers for its computing. The firm reportedly discussed with the Biden administration the regulatory hurdles of planning and permitting when building data centers.

In a policy paper released earlier this month, OpenAI cited the competition with China, laying out policy proposals for “extending America’s global leadership in AI innovation.”

“Chips, data, energy and talent are the keys to winning on AI — and this is a race America can and must win,” the paper said. “There’s an estimated $175 billion sitting in global funds awaiting investment in AI projects, and if the U.S. doesn’t attract those funds, they will flow to China-backed projects — strengthening the Chinese Communist Party’s global influence.”

Patrick Hedger, director of policy at NetChoice, a technology trade association, told VOA that the Stargate announcement “immediately signaled to me that private capital is more than willing to come off the sidelines these days with the new Trump administration.”

As part of his flurry of executive actions on Monday, Trump eliminated several preexisting executive orders placing limits on fossil fuel extraction and power generation. In the White House event on Monday, Trump also noted that AI data centers consume vast amounts of electricity and said he would be clearing the way for Stargate and other private companies to invest in new energy generation projects.

China competition

While Trump eliminated many of Biden’s executive orders immediately on Monday, he does not appear to have taken action against some of the former president’s other AI-related initiatives. Last year, Biden took several steps to restrict China’s access to cutting-edge AI technology, specifically restricting companies from selling advanced semiconductors, and the machinery used to produce them, to Chinese firms.

On that issue, Egan said, Trump and Biden appear to be on the same page.

“I think it’s important to also note the continuity in how Trump’s approaching AI,” she said. “He, too, sees it as a national security risk and national security imperative. … So, I think we should expect to see this run-fast approach to AI complemented by continued efforts to understand and manage emerging risks. Particularly cyber, nuclear, biological risks, as well as a more muscular approach to export controls and enforcement.”

Speed and safety

Louis Rosenberg, CEO and chief scientist at Unanimous AI and a prominent figure in the field for decades, told VOA he thinks there is a bipartisan consensus that AI needs to be developed speedily but also responsibly.

“At the highest level, the accelerating risks around frontier AI is not a partisan issue,” he wrote in an email exchange. “Both parties realize that significant safeguards will be needed as AI gets increasingly intelligent and flexible, especially as autonomous AI agents get released at large scale.”

Rosenberg said the most significant question is how the U.S. can remain the global leader in AI development while making sure the systems that are deployed are safe and reliable.

“I suspect the Trump administration will address AI risks by deploying its own targeted policies that are not as broad as the Biden executive order was but can address real threats much faster,” he wrote. “The Biden executive order was very useful in raising the alarm about AI, but from a practical perspective it did not provide meaningful protections from the important emerging risks.

“Ultimately we need to find a way to move fast on AI development and move fast on AI protection. We need speed on both fronts,” Rosenberg said.

VOA Silicon Valley bureau chief Michelle Quinn contributed to this report.