UK to become 1st country to criminalize AI child abuse tools

LONDON — Britain will become the first country to introduce laws against AI tools used to generate sexual abuse images, the government announced Saturday.

The government will make it illegal to possess, create or distribute AI tools designed to generate sexualized images of children, punishable by up to five years in prison, interior minister Yvette Cooper revealed.

It will also be illegal to possess AI “pedophile manuals” which teach people how to use AI to sexually abuse children, punishable by up to three years in prison.

“We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person,” said Cooper.

The new laws are “designed to keep our children safe online as technologies evolve. It is vital that we tackle child sexual abuse online as well as offline,” she added.

“Children will be protected from the growing threat of predators generating AI images and from online sexual abuse as the U.K. becomes the first country in the world to create new AI sexual abuse offences,” said a government statement.

AI tools are being used to generate child sexual abuse images by “nudeifying” real life images of children or by “stitching the faces of other children onto existing images,” said the government.

The new laws will also criminalize “predators who run websites designed for other pedophiles to share vile child sexual abuse content or advice on how to groom children,” punishable by up to ten years in prison, said the government.

The measures will be introduced as part of the Crime and Policing Bill when it comes to parliament.

The Internet Watch Foundation (IWF) has warned of the growing number of sexual abuse AI images of children being produced.

Over a 30-day period in 2024, IWF analysts identified 3,512 AI child abuse images on a single dark web site.

The number of images in the most serious category also rose by 10% in a year, it found.

DeepSeek vs. ChatGPT fuels debate over AI building blocks

SEOUL, SOUTH KOREA — When Chinese startup DeepSeek released its AI model this month, it was hailed as a breakthrough, a sign that China’s artificial intelligence companies could compete with their Silicon Valley counterparts using fewer resources.

The narrative was clear: DeepSeek had done more with less, finding clever workarounds to U.S. chip restrictions. However, that storyline has begun to shift.

OpenAI, the U.S.-based company behind ChatGPT, now claims DeepSeek may have improperly used its proprietary data to train its model, raising questions about whether DeepSeek’s success was truly an engineering marvel.

In statements to several media outlets this week, OpenAI said it is reviewing indications that DeepSeek may have trained its AI by mimicking responses from OpenAI’s models.

The process, known as distillation, is common among AI developers but is prohibited by OpenAI’s terms of service, which forbid using its model outputs to train competing systems.
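For readers unfamiliar with the technique, distillation in its generic form means training a smaller “student” model to reproduce the outputs of a larger “teacher” model. The sketch below is a minimal, hypothetical illustration in PyTorch; the toy networks, temperature and random data are assumptions for demonstration only and do not describe how OpenAI’s or DeepSeek’s systems are actually built or used.

```python
# Minimal, generic sketch of knowledge distillation (toy example).
# The networks, temperature and inputs are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution

for step in range(200):
    x = torch.randn(32, 16)              # stand-in for real inputs
    with torch.no_grad():
        teacher_logits = teacher(x)      # "responses" from the larger model
    student_logits = student(x)

    # Train the student to match the teacher's output distribution.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```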

Some U.S. officials appear to support OpenAI’s concerns. At his confirmation hearing this week, Commerce secretary nominee Howard Lutnick accused DeepSeek of misusing U.S. technology to create a “dirt cheap” AI model.

“They stole things. They broke in. They’ve taken our IP,” Lutnick said of China.

David Sacks, the White House czar for AI and cryptocurrency, was more measured, saying only that it is “possible” that DeepSeek had stolen U.S. intellectual property.

In an interview with Fox News, Sacks said there is “substantial evidence” that DeepSeek “distilled the knowledge out of OpenAI’s models,” adding that stronger efforts are needed to curb the rise of “copycat” AI systems.

At the center of the dispute is a key question about AI’s future: how much control should companies have over their own AI models, when those programs were themselves built using data taken from others?

AI data fight

The question is especially relevant for OpenAI, which faces its own legal challenges. The company has been sued by several media companies and authors who accuse it of illegally using copyrighted material to train its AI models.

Justin Hughes, a Loyola Law School professor specializing in intellectual property, AI, and data rights, said OpenAI’s accusations against DeepSeek are “deeply ironic,” given the company’s own legal troubles.

“OpenAI has had no problem taking everyone else’s content and claiming it’s ‘fair,'” Hughes told VOA in an email.

“If the reports are accurate that OpenAI violated other platforms’ terms of service to get the training data it has wanted, that would just add an extra layer of irony – dare we say hypocrisy – to OpenAI complaining about DeepSeek.”

DeepSeek has not responded to OpenAI’s accusations. In a technical paper released with its new chatbot, DeepSeek acknowledged that some of its models were trained alongside other open-source models – such as Qwen, developed by China’s Alibaba, and Llama, released by Meta – according to Johnny Zou, a Hong Kong-based AI investment specialist.

However, OpenAI appears to be alleging that DeepSeek improperly used its closed-source models – which cannot be freely accessed or used to train other AI systems.

“It’s quite a serious statement,” said Zou, who noted that OpenAI has not yet presented evidence of wrongdoing by DeepSeek.

Proving improper distillation may be difficult for OpenAI without disclosing details of how its own models were trained, Zou added.

Even if OpenAI presents concrete proof, its legal options may be limited. Although Zou noted that the company could pursue a case against DeepSeek for violating its terms of service, not all experts believe such a claim would hold up in court.

“Even assuming DeepSeek trained on OpenAI’s data, I don’t think OpenAI has much of a case,” said Mark Lemley, a professor at Stanford Law School who specializes in intellectual property and technology.

Even though AI models often have restrictive terms of service, “no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief,” Lemley wrote in a recent paper with co-author Peter Henderson.

The paper argues that these restrictions may be unenforceable, since the materials they aim to protect are “largely not copyrightable.”

“There are compelling reasons for many of these provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist,” the paper noted.

OpenAI’s main legal argument would likely be breach of contract, said Hughes. Even if that were the case, though, he added, “good luck enforcing that against a Chinese company without meaningful assets in the United States.”

Possible options

The financial stakes are adding urgency to the debate. U.S. tech stocks dipped Monday following news of DeepSeek’s advances, though they later regained some ground.

Commerce nominee Lutnick suggested that further government action, including tariffs, could be used to deter China from copying advanced AI models.

But speaking the same day, U.S. President Donald Trump appeared to take a different view, surprising some industry insiders with an optimistic take on DeepSeek’s breakthrough.

The Chinese company’s low-cost model, Trump said, was “very much a positive development” for AI, because “instead of spending billions and billions, you’ll spend less, and you’ll come up with hopefully the same solution.”

If DeepSeek has succeeded in building a relatively cheap and competitive AI model, that may be bad for those with investments – or stock options – in current generative AI companies, Hughes said.

“But it might be good for the rest of us,” he added, noting that until recently it appeared that only the existing tech giants “had the resources to play in the generative AI sandbox.”

“If DeepSeek disproved that, we should hope that what can be done by a team of engineers in China can be done by a similarly resourced team of engineers in Detroit or Denver or Boston,” he said. 

Nigerian initiative paves way for deaf inclusion in tech

An estimated nine million Nigerians are deaf or have hearing impairments, and many cope with discrimination that limits their access to education and employment. But one initiative is working to change that — empowering deaf people with tech skills to improve their career prospects. Timothy Obiezu reports from Abuja.
Camera: Timothy Obiezu

Meta agrees to pay Trump $25 million to settle lawsuit over blocking of his accounts, media report

According to the report, roughly $22 million of that sum will go toward funding Trump’s presidential library, with $3 million covering legal costs for Trump and the other parties to the suit.

Chinese app shakes up AI race

A small Chinese company sent shockwaves around the tech world this week with news that it has created a high-performing artificial intelligence system with less computing power and at a lower cost than ones made by U.S. tech giants. Michelle Quinn reports.

Microsoft, Meta CEOs defend hefty AI spending after DeepSeek stuns tech world

Days after Chinese upstart DeepSeek revealed a breakthrough in cheap AI computing that shook the U.S. technology industry, the chief executives of Microsoft and Meta defended massive spending that they said was key to staying competitive in the new field.

DeepSeek’s rapid progress, with models it claims can match or even outperform Western rivals at a fraction of the cost, has stirred doubts about America’s lead in AI, but the U.S. executives said Wednesday that building huge computer networks was necessary to serve growing corporate needs.

“Investing ‘very heavily’ in capital expenditure and infrastructure is going to be a strategic advantage over time,” Meta CEO Mark Zuckerberg said on a post-earnings call.

Satya Nadella, CEO of Microsoft, said the spending was needed to overcome the capacity constraints that have hampered the technology giant’s ability to capitalize on AI.

“As AI becomes more efficient and accessible, we will see exponentially more demand,” he said on a call with analysts.

Microsoft has earmarked $80 billion for AI in its current fiscal year, while Meta has pledged as much as $65 billion towards the technology.

That is a far cry from the roughly $6 million DeepSeek said it has spent to develop its AI model. U.S. tech executives and Wall Street analysts say that reflects the amount spent on computing power, rather than all development costs.

Still, some investors seem to be losing patience with the hefty spending and lack of big payoffs.

Shares of Microsoft, widely seen as a front-runner in the AI race because of its ties to industry leader OpenAI, were down 5% in extended trading after the company said that growth in its Azure cloud business in the current quarter would fall short of estimates.

“We really want to start to see a clear road map to what that monetization model looks like for all of the capital that’s been invested,” said Brian Mulberry, portfolio manager at Zacks Investment Management, which holds shares in Microsoft.

Meta, meanwhile, sent mixed signals about how its bets on AI-powered tools were paying off, with a strong fourth quarter but a lackluster sales forecast for the current period.

“With these huge expenses, they need to turn the spigot on in terms of revenue generated, but I think this week was a wake-up call for the U.S.,” said Futurum Group analyst Daniel Newman.

“For AI right now, there’s too much capital expenditure, not enough consumption.”

There are some signs, though, that executives are moving to change that.

Microsoft CFO Amy Hood said the company’s capital spending in the current quarter and the next would remain around the $22.6 billion level seen in the second quarter.

“In fiscal 2026, we expect to continue to invest against strong demand signals. However, the growth rate will be lower than fiscal 2025 (which ends in June),” she said. 

Generative AI makes Chinese, Iranian hackers more efficient, report says

A report issued Wednesday by Google found that hackers from numerous countries, particularly China, Iran and North Korea, have been using the company’s artificial intelligence-enabled Gemini chatbot to make their cyberattacks against targets in the United States more efficient.

The company found — so far, at least — that access to publicly available large language models (LLMs) has made cyberattackers more efficient but has not meaningfully changed the kind of attacks they typically mount.

LLMs are AI models that have been trained, using enormous amounts of previously generated content, to identify patterns in human languages. Among other things, this makes them adept at producing high-functioning, error-free computer programs.
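As a rough illustration of that pattern-learning idea, the toy script below trains a tiny character-level model to predict the next character in a short text. The corpus, model size and training settings are purely illustrative assumptions and bear no resemblance to the scale of real LLMs such as Gemini.

```python
# Toy sketch of next-token (here, next-character) prediction, the training
# objective behind LLMs. Everything here is a scaled-down assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in corpus; real LLMs train on enormous amounts of text.
text = "the quick brown fox jumps over the lazy dog " * 50
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

context_len = 8
model = nn.Sequential(
    nn.Embedding(len(chars), 32),              # map characters to vectors
    nn.Flatten(),                              # concatenate the context window
    nn.Linear(32 * context_len, len(chars)),   # score each possible next character
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    # Sample random windows of text and learn to predict the next character.
    starts = torch.randint(0, len(data) - context_len - 1, (64,))
    context = torch.stack([data[int(s): int(s) + context_len] for s in starts])
    target = data[starts + context_len]
    loss = loss_fn(model(context), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```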

“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume,” the report found.

Generative AI offered some benefits for low-skilled and high-skilled hackers, the report said.

“However, current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily. As this evolution unfolds, [the Google Threat Intelligence Group] anticipates the threat landscape to evolve in stride as threat actors adopt new AI technologies in their operations.”

Google’s findings appear to agree with previous research released by other major U.S. AI players, OpenAI and Microsoft, which similarly found that public generative AI models have not enabled novel offensive strategies for cyberattacks.

The report clarified that Google works to disrupt the activity of threat actors when it identifies them.

Game unchanged 

“AI, so far, has not been a game changer for offensive actors,” Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, told VOA. “It speeds up some things. It gives foreign actors a better ability to craft phishing emails and find some code. But has it dramatically changed the game? No.”

Whether that might change in the future is unclear, Segal said. Also unclear is whether further developments in AI technology will more likely benefit people building defenses against cyberattacks or the threat actors trying to defeat them.

“Historically, defense has been hard, and technology hasn’t solved that problem,” Segal said. “I suspect AI won’t do that, either. But we don’t know yet.”

Caleb Withers, a research associate at the Center for a New American Security, agreed that there is likely to be an arms race of sorts, as offensive and defensive cybersecurity applications of generative AI evolve. However, it is likely that they will largely balance each other out, he said.

“The default assumption should be that absent certain trends that we haven’t yet seen, these tools should be roughly as useful to defenders as offenders,” he said. “Anything productivity enhancing, in general, applies equally, even when it comes to things like discovering vulnerabilities. If an attacker can use something to find a vulnerability in software, so, too, is the tool useful to the defender to try to find those themselves and patch them.”

Threat categories

The report breaks down the kinds of threat actors it observed using Gemini into two primary categories.

Advanced persistent threat (APT) activity refers to “government-backed hacking activity, including cyber espionage and destructive computer network attacks.” By contrast, information operation (IO) threats “attempt to influence online audiences in a deceptive, coordinated manner. Examples include sock puppet accounts [phony profiles that hide users’ identities] and comment brigading [organized online attacks aimed at altering perceptions of online popularity].”

The report found that hackers from Iran were the heaviest users of Gemini in both threat categories. APT threat actors from Iran used the service for a wide range of tasks, including gathering information on individuals and organizations, researching targets and their vulnerabilities, translating language and creating content for future online campaigns.

Google tracked more than 20 Chinese government-backed APT actors using Gemini “to enable reconnaissance on targets, for scripting and development, to request translation and explanation of technical concepts, and attempting to enable deeper access to a network following initial compromise.”

North Korean state-backed APTs used Gemini for many of the same tasks as their Iranian and Chinese counterparts but also appeared to be attempting to exploit the service in their efforts to place “clandestine IT workers” in Western companies to facilitate the theft of intellectual property.

Information operations

Iran was also the heaviest user of Gemini when it came to information operation threats, accounting for 75% of detected usage, Google reported. Hackers from Iran used the service to create and manipulate content meant to sway public opinion, and to adapt that content for different audiences.

Chinese IO actors primarily used the service for research purposes, looking into matters “of strategic interest to the Chinese government.”

Russian hackers, whose presence in the APT category was minimal, were more prominent in IO-related use of Gemini, using it not only for content creation but also to gather information about how to create and use online AI chatbots.

Call for collaboration

Also on Wednesday, Kent Walker, president of global affairs for Google and its parent company, Alphabet, used a post on the company’s blog to note the potential dangers posed by threat actors using increasingly sophisticated AI models and to call on industry and the federal government “to work together to support our national and economic security.”

“America holds the lead in the AI race — but our advantage may not last,” Walker wrote.

Walker argued that the U.S. needs to maintain its narrow advantage in the development of the technology used to build the most advanced artificial intelligence tools. In addition, he said, the government must streamline procurement rules to “enable adoption of AI, cloud and other game-changing technologies” by the U.S. military and intelligence agencies, and to establish public-private cyber defense partnerships.