Drone maker DJI sues Pentagon over Chinese military listing

WASHINGTON — China-based DJI sued the U.S. Defense Department on Friday for adding the drone maker to a list of companies allegedly working with Beijing’s military, saying the designation is wrong and has caused the company significant financial harm.

DJI, the world’s largest drone manufacturer, which sells more than half of all commercial drones in the U.S., asked a U.S. District Court judge in Washington to order its removal from the Pentagon list designating it as a “Chinese military company,” saying it “is neither owned nor controlled by the Chinese military.”

Placement on the list serves as a warning to U.S. entities and companies about the national security risks of doing business with the designated firms.

DJI’s lawsuit says that because of the Defense Department’s “unlawful and misguided decision,” it has “lost business deals, been stigmatized as a national security threat, and been banned from contracting with multiple federal government agencies.”

The company added that “U.S. and international customers have terminated existing contracts with DJI and refuse to enter into new ones.”

The Defense Department did not immediately respond to a request for comment.

DJI said on Friday it filed the lawsuit after the Defense Department did not engage with the company over the designation for more than 16 months, saying it “had no alternative other than to seek relief in federal court.”

Amid strained ties between the world’s two biggest economies, the updated list is one of numerous actions Washington has taken in recent years to highlight and restrict Chinese companies that it says may strengthen Beijing’s military.

Many major Chinese firms are on the list, including aviation company AVIC, memory chip maker YMTC, China Mobile and energy company CNOOC.

In May, lidar manufacturer Hesai Group filed a suit challenging the Pentagon’s Chinese military designation for the company. On Wednesday, the Pentagon removed Hesai from the list but said it would immediately relist the China-based firm on national security grounds.

DJI is facing growing pressure in the United States.

Earlier this week DJI told Reuters that Customs and Border Protection is stopping imports of some DJI drones from entering the United States, citing the Uyghur Forced Labor Prevention Act.

DJI said no forced labor is involved at any stage of its manufacturing.

U.S. lawmakers have repeatedly raised concerns that DJI drones pose data transmission, surveillance and national security risks, something the company rejects.

Last month, the U.S. House voted to bar new DJI drones from operating in the U.S. The bill awaits U.S. Senate action. The Commerce Department said last month it is seeking comments on whether to impose restrictions on Chinese drones that would effectively ban them in the U.S. — similar to proposed Chinese vehicle restrictions.

Denmark to give Ukraine a new military aid package

“The government is working to decide in the near future on further procurement directly from Ukraine’s defense industry, particularly in the area of drones”

Pentagon: U.S. forces did not take direct part in the Israeli operation that killed the Hamas leader

At the same time, a Pentagon spokesperson noted that U.S. intelligence used to locate hostages held by Hamas contributed to Israel’s understanding of where the group’s leaders might be

Residents on Kenya’s coast use app to track migratory birds

The Tana River delta on the Kenyan coast includes a vast range of habitats and a remarkably productive ecosystem, says UNESCO. It is also home to many bird species, including some that are near threatened. Residents are helping local conservation efforts with an app called eBird. Juma Majanga reports.

US prosecutors see rising threat of AI-generated child sex abuse imagery

U.S. federal prosecutors are stepping up their pursuit of suspects who use artificial intelligence tools to manipulate or create child sex abuse images, as law enforcement fears the technology could spur a flood of illicit material.

The U.S. Justice Department has brought two criminal cases this year against defendants accused of using generative AI systems, which create text or images in response to user prompts, to produce explicit images of children.

“There’s more to come,” said James Silver, the chief of the Justice Department’s Computer Crime and Intellectual Property Section, predicting further similar cases.

“What we’re concerned about is the normalization of this,” Silver said in an interview. “AI makes it easier to generate these kinds of images, and the more that are out there, the more normalized this becomes. That’s something that we really want to stymie and get in front of.”

The rise of generative AI has sparked concerns at the Justice Department that the rapidly advancing technology will be used to carry out cyberattacks, boost the sophistication of cryptocurrency scammers and undermine election security. 

Child sex abuse cases mark some of the first times that prosecutors are trying to apply existing U.S. laws to alleged crimes involving AI, and even successful convictions could face appeals as courts weigh how the new technology may alter the legal landscape around child exploitation. 

Prosecutors and child safety advocates say generative AI systems can allow offenders to morph and sexualize ordinary photos of children and warn that a proliferation of AI-produced material will make it harder for law enforcement to identify and locate real victims of abuse.

The National Center for Missing and Exploited Children, a nonprofit group that collects tips about online child exploitation, receives an average of about 450 reports each month related to generative AI, according to Yiota Souras, the group’s chief legal officer.

That is a small fraction of the roughly 3 million reports of overall online child exploitation the group received on average each month last year.

Untested ground

Cases involving AI-generated sex abuse imagery are likely to tread new legal ground, particularly when an identifiable child is not depicted.

Silver said in those instances, prosecutors can charge obscenity offenses when child pornography laws do not apply.

Prosecutors indicted Steven Anderegg, a software engineer from Wisconsin, in May on charges including transferring obscene material. Anderegg is accused of using Stable Diffusion, a popular text-to-image AI model, to generate images of young children engaged in sexually explicit conduct and sharing some of those images with a 15-year-old boy, according to court documents.

Anderegg has pleaded not guilty and is seeking to dismiss the charges by arguing that they violate his rights under the U.S. Constitution, court documents show.

He has been released from custody while awaiting trial. His attorney was not available for comment.

Stability AI, the maker of Stable Diffusion, said the case involved a version of the AI model that was released before the company took over the development of Stable Diffusion. The company said it has made investments to prevent “the misuse of AI for the production of harmful content.”

Federal prosecutors also charged a U.S. Army soldier with child pornography offenses in part for allegedly using AI chatbots to morph innocent photos of children he knew to generate violent sexual abuse imagery, court documents show.

The defendant, Seth Herrera, pleaded not guilty and has been ordered held in jail to await trial. Herrera’s lawyer did not respond to a request for comment.

Legal experts said that while sexually explicit depictions of actual children are covered under child pornography laws, the landscape around obscenity and purely AI-generated imagery is less clear. 

The U.S. Supreme Court in 2002 struck down as unconstitutional a federal law that criminalized any depiction, including computer-generated imagery, appearing to show minors engaged in sexual activity. 

“These prosecutions will be hard if the government is relying on moral repulsiveness alone to carry the day,” said Jane Bambauer, a law professor at the University of Florida who studies AI and its impact on privacy and law enforcement.

Federal prosecutors have secured convictions in recent years against defendants who possessed sexually explicit images of children that also qualified as obscene under the law. 

Advocates are also focusing on preventing AI systems from generating abusive material. 

Two nonprofit advocacy groups, Thorn and All Tech Is Human, secured commitments in April from some of the largest players in AI — including Alphabet’s Google, Amazon.com, Facebook and Instagram parent Meta Platforms, OpenAI and Stability AI — to avoid training their models on child sex abuse imagery and to monitor their platforms to prevent its creation and spread.

“I don’t want to paint this as a future problem, because it’s not. It’s happening now,” said Rebecca Portnoff, Thorn’s director of data science.

“As far as whether it’s a future problem that will get completely out of control, I still have hope that we can act in this window of opportunity to prevent that.”

Deepfakes featuring deceased terrorists spread radical propaganda

In a year with over 60 national elections worldwide, concerns are high that individuals and entities are using deepfake images and recordings to contribute to the flood of election misinformation. VOA’s Rio Tuasikal reports on some potentially dangerous videos made using generative AI.

Watchdog: ‘Serious questions’ over Meta’s handling of anti-immigrant posts

Meta’s independent content watchdog said Thursday there were “serious questions” about how the social media giant deals with anti-immigrant content, particularly in Europe. 

The Oversight Board, established by Meta in 2020 and sometimes called its “supreme court,” launched a probe after seeing a “significant number” of appeals over anti-immigrant content. 

The board has chosen two symbolic cases — one from Germany and the other from Poland — to assess whether Meta, which owns Facebook and Instagram, is following human rights law and its own policies on hate speech. 

Helle Thorning-Schmidt, co-chair of the board and a former Danish prime minister, said it was “critical” to get the balance right between free speech and protection of vulnerable groups. 

“The high number of appeals we get on immigration-related content from across the EU tells us there are serious questions to ask about how the company handles issues related to this, including the use of coded speech,” she said in a statement. 

The first piece of content to be assessed by the board was posted in May on a Facebook page claiming to be the official account of Poland’s far-right Confederation party. 

An image depicts Polish Prime Minister Donald Tusk looking through a peephole with a black man approaching him from behind, accompanied by text suggesting his government would allow immigration to surge. 

Meta rejected an appeal from a user to take down the post despite the text including a word considered by some as a racial slur. 

In the other case, an apparently AI-generated image was posted on a German Facebook page showing a blond-haired blue-eyed woman, a German flag and a stop sign. 

The accompanying text likens immigrants to “gang rape specialists.”  

A user complained, but Meta decided not to remove the post.

“The board selected these cases to address the significant number of appeals, especially from Europe, against content that shares views on immigration in ways that may be harmful towards immigrants,” the watchdog said in a statement. 

The board said it wanted to hear from the public and would spend “the next few weeks” discussing the issue before publishing its decision. 

Decisions by the board, funded by a trust set up by Meta, are not binding, though the company has promised to follow its rulings. 

Serb who concealed war crimes accusations sentenced in the U.S. to prison followed by deportation

A Croatian court found that during an attack by ethnic Serbs on the town of Petrinja, Croatia, on September 16, 1991, Jugoslav Vidic cut off the hand of a civilian, Stjepan Komes, and left him to bleed to death

China says unidentified foreign company conducted illegal mapping services 

BEIJING — China’s state security ministry said that a foreign company had been found to have illegally conducted geographic mapping activities in the country under the guise of autonomous driving research and outsourcing to a licensed Chinese mapping firm.

The ministry did not disclose the names of either company in a statement on its WeChat account on Wednesday.

The foreign company, ineligible for geographic surveying and mapping activities in China, “purchased a number of cars and equipped them with high-precision radar, GPS, optical lenses and other gear,” read the statement.

In addition to directly instructing the Chinese company to conduct surveying and mapping in many Chinese provinces, the foreign company appointed foreign technicians to give “practical guidance” to mapping staffers with the Chinese firm, enabling the latter to transfer its acquired data overseas, the ministry alleged.

Most of the data the foreign company collected has been determined to be state secrets, according to the ministry, which said state security organs, together with relevant departments, had carried out joint law enforcement activities.

The affected companies and relevant responsible personnel have been held legally accountable, the state security ministry said, without elaborating.

China strictly regulates mapping activities and data, which are key to developing autonomous driving, due to national security concerns. No foreign firm is qualified to conduct mapping in China, and data collected in China by vehicles made by foreign automakers such as Tesla must be stored locally.

The U.S. Commerce Department has also proposed prohibiting Chinese software and hardware in connected and autonomous vehicles on American roads due to national security concerns.

Also on Wednesday, a Chinese cybersecurity industry group recommended that Intel products sold in China should be subject to a security review, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests.

Chinese cyber association calls for review of Intel products sold in China 

BEIJING — Intel products sold in China should be subject to a security review, the Cybersecurity Association of China (CSAC) said on Wednesday, alleging the U.S. chipmaker has “constantly harmed” the country’s national security and interests. 

While CSAC is an industry group rather than a government body, it has close ties to the Chinese state and the raft of accusations against Intel, published in a long post on its official WeChat group, could trigger a security review from China’s powerful cyberspace regulator, the Cyberspace Administration of China (CAC). 

“It is recommended that a network security review is initiated on the products Intel sells in China, so as to effectively safeguard China’s national security and the legitimate rights and interests of Chinese consumers,” CSAC said. 

Last year, the CAC barred domestic operators of key infrastructure from buying products made by U.S. memory chipmaker Micron Technology Inc after deeming the company’s products had failed its network security review. 

Intel did not immediately respond to a request for comment. The company’s shares were down 2.7% in U.S. premarket trading.  


Biden to convene the “European quartet” during his visit to Germany, media report

The “European quartet” had planned to meet last week with Ukrainian President Volodymyr Zelenskyy at the Ramstein air base in Germany. The meeting was canceled after Biden postponed his trip because of a hurricane in the U.S.