Demonstration held in Berlin against cooperation with Alternative for Germany

Around 80,000 people took to the streets of Berlin on February 2 to demonstrate against cooperation with the far-right Alternative for Germany (AfD), local media report.

The demonstration stretched from the Victory Column to the CDU party headquarters.

Demonstrators first gathered in front of the Reichstag. SPD leaders Lars Klingbeil and Saskia Esken, as well as SPD general secretary Matthias Miersch, also attended. Klingbeil said he “wanted to send a signal that there is a strong democratic center in this country, and that right-wing extremists should have no say in politics.”

The demonstration’s slogan was “Uprising of the decent – we are the defense!” Many participants carried posters and banners reading, among other things: “Fritz, listen to your mother,” “5 minutes to 1933,” and “No to Merz in February.”

Journalist Michel Friedman, who had left the CDU in protest a few days earlier, reminded everyone at the opening of the rally of the promise that “the dignity of every person is inviolable.”

The demonstration was called by the organization Campact, DGB Berlin-Brandenburg, and Fridays for Future. After the opening rally, a march was planned to the Konrad-Adenauer-Haus, the CDU party headquarters in Berlin’s Tiergarten district.

People also took to the streets in Regensburg, Ulm, Kiel, Potsdam, and Braunschweig.

Earlier, the Bundestag passed a CDU/CSU motion to tighten migration policy with the votes of the far-right Alternative for Germany.

The vote was a “watershed” because it broke the long-standing consensus among German political parties against cooperating with the far right.

For this, CDU leader Friedrich Merz came under a barrage of criticism from centrist parties that could potentially become his coalition partners after the February 23 elections.

 

“They’ll gently wag their tails” – Putin shared his view of Trump’s relations with European leaders

Putin’s activity coincides with statements by representatives of the new U.S. administration about their intention to reach a lasting ceasefire in the Russian-Ukrainian war as soon as possible.

UK to become 1st country to criminalize AI child abuse tools

LONDON — Britain will become the first country to introduce laws against AI tools used to generate sexual abuse images, the government announced Saturday.

The government will make it illegal to possess, create or distribute AI tools designed to generate sexualized images of children, punishable by up to five years in prison, interior minister Yvette Cooper revealed.

It will also be illegal to possess AI “pedophile manuals” which teach people how to use AI to sexually abuse children, punishable by up to three years in prison.

“We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person,” said Cooper.

The new laws are “designed to keep our children safe online as technologies evolve. It is vital that we tackle child sexual abuse online as well as offline,” she added.

“Children will be protected from the growing threat of predators generating AI images and from online sexual abuse as the U.K. becomes the first country in the world to create new AI sexual abuse offences,” said a government statement.

AI tools are being used to generate child sexual abuse images by “nudeifying” real life images of children or by “stitching the faces of other children onto existing images,” said the government.

The new laws will also criminalize “predators who run websites designed for other pedophiles to share vile child sexual abuse content or advice on how to groom children,” punishable by up to ten years in prison, said the government.

The measures will be introduced as part of the Crime and Policing Bill when it comes to parliament.

The Internet Watch Foundation (IWF) has warned of the growing number of sexual abuse AI images of children being produced.

Over a 30-day period in 2024, IWF analysts identified 3,512 AI child abuse images on a single dark web site.

The number of images in the most serious category also rose by 10% in a year, it found.

DeepSeek vs. ChatGPT fuels debate over AI building blocks

SEOUL, SOUTH KOREA — When Chinese startup DeepSeek released its AI model this month, it was hailed as a breakthrough, a sign that China’s artificial intelligence companies could compete with their Silicon Valley counterparts using fewer resources.

The narrative was clear: DeepSeek had done more with less, finding clever workarounds to U.S. chip restrictions. However, that storyline has begun to shift.

OpenAI, the U.S.-based company behind ChatGPT, now claims DeepSeek may have improperly used its proprietary data to train its model, raising questions about whether DeepSeek’s success was truly an engineering marvel.

In statements to several media outlets this week, OpenAI said it is reviewing indications that DeepSeek may have trained its AI by mimicking responses from OpenAI’s models.

The process, known as distillation, is common among AI developers but is prohibited by OpenAI’s terms of service, which forbid using its model outputs to train competing systems.
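The idea behind distillation can be illustrated with a toy example: a “student” model is trained not on original labeled data but on the outputs of an existing “teacher” model. This is a minimal sketch only; real AI distillation involves neural networks and soft probability distributions, and all names below are illustrative, not drawn from any company’s actual systems.

```python
# Toy illustration of distillation: a student learns to mimic a teacher
# by querying it, then fitting itself to the teacher's outputs.
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for a large existing model: maps inputs to outputs.
    return 3.0 * x + 1.0

# Step 1: query the teacher to collect input/output pairs.
inputs = rng.uniform(-1, 1, size=100)
targets = teacher(inputs)  # the teacher's outputs become training labels

# Step 2: fit a small student model to reproduce those outputs.
A = np.stack([inputs, np.ones_like(inputs)], axis=1)
(slope, intercept), *_ = np.linalg.lstsq(A, targets, rcond=None)

# The student now imitates the teacher's behavior without ever seeing
# the data the teacher was originally trained on.
print(round(slope, 2), round(intercept, 2))
```

The point of contention in the OpenAI dispute is step 1: collecting a teacher’s outputs at scale to train a competitor is exactly what OpenAI’s terms of service forbid.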

Some U.S. officials appear to support OpenAI’s concerns. At his confirmation hearing this week, Commerce secretary nominee Howard Lutnick accused DeepSeek of misusing U.S. technology to create a “dirt cheap” AI model.

“They stole things. They broke in. They’ve taken our IP,” Lutnick said of China.

David Sacks, the White House czar for AI and cryptocurrency, was more measured, saying only that it is “possible” that DeepSeek had stolen U.S. intellectual property.

In an interview with the cable news network Fox News, Sacks added that there is “substantial evidence” that DeepSeek “distilled the knowledge out of OpenAI’s models,” adding that stronger efforts are needed to curb the rise of “copycat” AI systems.

At the center of the dispute is a key question about AI’s future: how much control should companies have over their own AI models, when those programs were themselves built using data taken from others?

AI data fight

The question is especially relevant for OpenAI, which faces its own legal challenges. The company has been sued by several media companies and authors who accuse it of illegally using copyrighted material to train its AI models.

Justin Hughes, a Loyola Law School professor specializing in intellectual property, AI, and data rights, said OpenAI’s accusations against DeepSeek are “deeply ironic,” given the company’s own legal troubles.

“OpenAI has had no problem taking everyone else’s content and claiming it’s ‘fair,'” Hughes told VOA in an email.

“If the reports are accurate that OpenAI violated other platforms’ terms of service to get the training data it has wanted, that would just add an extra layer of irony – dare we say hypocrisy – to OpenAI complaining about DeepSeek.”

DeepSeek has not responded to OpenAI’s accusations. In a technical paper released with its new chatbot, DeepSeek acknowledged that some of its models were trained alongside other open-source models – such as Qwen, developed by China’s Alibaba, and Llama, released by Meta – according to Johnny Zou, a Hong Kong-based AI investment specialist.

However, OpenAI appears to be alleging that DeepSeek improperly used its closed-source models – which cannot be freely accessed or used to train other AI systems.

“It’s quite a serious statement,” said Zou, who noted that OpenAI has not yet presented evidence of wrongdoing by DeepSeek.

Proving improper distillation may be difficult without OpenAI disclosing details of how its own models were trained, Zou added.

Even if OpenAI presents concrete proof, its legal options may be limited. Although Zou noted that the company could pursue a case against DeepSeek for violating its terms of service, not all experts believe such a claim would hold up in court.

“Even assuming DeepSeek trained on OpenAI’s data, I don’t think OpenAI has much of a case,” said Mark Lemley, a professor at Stanford Law School who specializes in intellectual property and technology.

Even though AI models often have restrictive terms of service, “no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief,” Lemley wrote in a recent paper with co-author Peter Henderson.

The paper argues that these restrictions may be unenforceable, since the materials they aim to protect are “largely not copyrightable.”

“There are compelling reasons for many of these provisions to be unenforceable: they chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist,” the paper noted.

OpenAI’s main legal argument would likely be breach of contract, said Hughes. Even if that were the case, though, he added, “good luck enforcing that against a Chinese company without meaningful assets in the United States.”

Possible options

The financial stakes are adding urgency to the debate. U.S. tech stocks dipped Monday following news of DeepSeek’s advances, though they later regained some ground.

Commerce nominee Lutnick suggested that further government action, including tariffs, could be used to deter China from copying advanced AI models.

But speaking the same day, U.S. President Donald Trump appeared to take a different view, surprising some industry insiders with an optimistic take on DeepSeek’s breakthrough.

The Chinese company’s low-cost model, Trump said, was “very much a positive development” for AI, because “instead of spending billions and billions, you’ll spend less, and you’ll come up with hopefully the same solution.”

If DeepSeek has succeeded in building a relatively cheap and competitive AI model, that may be bad for those with investments – or stock options – in current generative AI companies, Hughes said.

“But it might be good for the rest of us,” he added, noting that until recently it appeared that only the existing tech giants “had the resources to play in the generative AI sandbox.”

“If DeepSeek disproved that, we should hope that what can be done by a team of engineers in China can be done by a similarly resourced team of engineers in Detroit or Denver or Boston,” he said. 

Nigerian initiative paves way for deaf inclusion in tech

An estimated nine million Nigerians are deaf or have hearing impairments, and many cope with discrimination that limits their access to education and employment. But one initiative is working to change that — empowering deaf people with tech skills to improve their career prospects. Timothy Obiezu reports from Abuja.
Camera: Timothy Obiezu