China Unveils Proposed New Law Overseeing Artificial Intelligence Products

China’s internet regulator has unveiled a proposed law that would require makers of new artificial intelligence, or AI, products to submit to security assessments before public release.

The draft law released Tuesday by the Cyberspace Administration of China says that content generated by future AI products must reflect the country’s “core socialist values” and not encourage subversion of state power.  

The draft law also says AI content must not promote discrimination based on ethnicity, race or gender, and must not provide false information.  

The proposed law is expected to take effect sometime this year. The regulations come as several China-based tech companies, including Alibaba, JD.com and Baidu, have released a flurry of new so-called generative AI products that can mimic human speech and generate content such as images and text. The technology has surged in popularity since San Francisco-based OpenAI introduced ChatGPT last November.  

Some information for this report came from Reuters and Agence France-Presse.

Australia Aims to Make Industry More Resilient Against Cyberattacks

The Australian government is asking major banks and other institutions to take part in ‘wargaming’ exercises to test how they would respond to cyberattacks. It follows recent mass data theft attacks on several large companies, which compromised the data of millions of Australians.     

Australia is preparing for potential cyberattacks on critical services including hospitals, the banking system and the electricity grid.  

Home Affairs Minister Clare O’Neil warned Tuesday that recent high-profile hacks on the telecommunications and health insurance sectors, which have affected millions of people, “were just the tip of the iceberg.”   

The government is setting up a series of drills with large organizations to help them respond to security breaches.   

Anna Bligh, chief executive of the Australian Banking Association, an industry body, told the Australian Broadcasting Corp. Tuesday that cybersecurity drills organized by the government will make the sector more resilient. 

“How would the whole system cope if one of the very large companies were taken down by a cyber threat?” Bligh asked. “The sort of scale and sophistication of the threat is now moving into something that we haven’t seen before. So, it is a very timely move.  This is now potentially a significant threat to the national security of the country.”   

A major Australian financial services company revealed Tuesday that criminals who stole sensitive customer information last month have demanded a ransom.   

The cyberattack on Latitude Financial resulted in the theft of 14 million customer records, including financial statements, driver’s license numbers and passport numbers.   

The company said that, in line with government policy, it would not pay a ransom to prevent the data from being leaked or sold online.   

The Australian government is considering updated cybersecurity legislation that would impose new obligations and standards to protect data across industry and government departments.   

However, officials have warned that cyber criminals are becoming more professional, powerful and effective. 

News Presenter Generated with AI Appears in Kuwait

A Kuwaiti media outlet has unveiled a virtual news presenter generated using artificial intelligence, with plans for it to read online bulletins.   

“Fedha” appeared on the Twitter account of the Kuwait News website Saturday as an image of a woman, her light-colored hair uncovered, wearing a black jacket and white T-shirt.   

“I’m Fedha, the first presenter in Kuwait who works with artificial intelligence at Kuwait News. What kind of news do you prefer? Let’s hear your opinions,” she said in classical Arabic.   

The site is affiliated with the Kuwait Times, founded in 1961 as the Gulf region’s first English-language daily.   

Abdullah Boftain, deputy editor-in-chief for both outlets, said the move is a test of AI’s potential to offer “new and innovative content.”   

In the future Fedha could adopt the Kuwaiti accent and present news bulletins on the site’s Twitter account, which has 1.2 million followers, he said.   

“Fedha is a popular, old Kuwaiti name that refers to silver, the metal. We always imagine robots to be silver and metallic in color, so we combined the two,” Boftain said.    

The presenter’s blonde hair and light-colored eyes reflect the oil-rich country’s diverse population of Kuwaitis and expatriates, according to Boftain.    

“Fedha represents everyone,” he said.   

Her initial 13-second video generated a flood of reactions on social media, including from journalists. 

The rapid global rise of AI has raised the promise of benefits, such as in health care and the elimination of mundane tasks, but also fears over its potential to spread disinformation and to threaten certain jobs and artistic integrity.   

Kuwait ranked 158 out of 180 countries and territories in the Reporters Without Borders (RSF) 2022 Press Freedom Index. 

Reports: Tesla Plans Shanghai Factory for Power Storage

Electric car maker Tesla Inc. plans to build a factory in Shanghai to produce power-storage devices for sale worldwide, state media reported Sunday.

Plans call for annual production of 10,000 Megapack units, according to the Xinhua News Agency and state television. They said the company made the announcement at a signing ceremony in Shanghai, where Tesla operates an auto factory.

The factory is due to break ground in the third quarter of this year and start production in the second quarter of 2024, the reports said.

Tesla didn’t immediately respond to requests for information.

Mayor in Australia Ready to Sue over Alleged AI Chatbot Defamation

A mayor in Australia’s Victoria state said Friday he may sue the maker of the artificial intelligence writing tool ChatGPT after it falsely claimed he’d served time in prison for bribery. Hepburn Shire Council Mayor Brian Hood was incorrectly identified as the guilty party in a corruption case in the early 2000s.

Brian Hood was the whistleblower in a corruption scandal involving a company partly owned by the Reserve Bank of Australia. Several people were charged, but Hood was not one of them. That did not stop ChatGPT, an automated writing service powered by artificial intelligence, from generating an article that cast him as a culprit who was jailed for his part in a conspiracy to bribe foreign officials to win currency printing contracts.

Hood only found out after friends told him, he told the Australian Broadcasting Corp. He then used the chatbot software to see the story for himself.

“After making the inquiry, it generated five or six paragraphs of information. The really disturbing thing was that some of the paragraphs were accurate, and then there were other paragraphs that described things that were completely incorrect. It told me that I’d been charged with very serious criminal offenses, that I’d been convicted of them and that I had spent 30 months in jail,” he said.

Hood said that if OpenAI, a U.S.-based company that owns the chatbot, does not correct the false claims, he will sue.

It would be the first defamation lawsuit against the automated service.

However, a new version of ChatGPT reportedly avoids the mistakes of its predecessor. It reportedly correctly explains that Hood was a whistleblower who was praised for his actions. Hood’s lawyers say that the defamatory material, which damages the mayor’s reputation, still exists and their efforts to have the mistakes rectified would continue.

A disclaimer on the ChatGPT program warns users that it “may produce inaccurate information about people, places, or facts”.  The technology has exploded in popularity around the world.

OpenAI has yet to comment publicly on the allegations.

Google has announced the launch of its rival to ChatGPT, Bard. Meta, which owns WhatsApp, Facebook and Instagram, launched its own AI chatbot, BlenderBot, in the United States last year, while Baidu, the Chinese tech company, has said it is working on an advanced version of its chatbot, Ernie.

Samsung Cutting Memory Chip Production as Profit Slides

Samsung Electronics said Friday it is cutting production of its computer memory chips in an apparent effort to reduce inventory as it forecast another quarter of sluggish profit. 

The South Korean technology giant, in a regulatory filing, said it has been reducing the production of certain memory products by unspecified “meaningful levels” to optimize its manufacturing operations, adding it has sufficient supplies of those chips to meet demand fluctuations. 

The company predicted an operating profit of $455 million for the three months through March, which would be a 96% decline from the same period a year earlier. It said sales during the quarter likely fell 19% to $47.7 billion. 

Samsung, which will release its finalized first quarter earnings later this month, said the demand for its memory chips declined as a weak global economy depressed consumer spending on technology products and forced business clients to adjust their inventories to nurse worsening finances. 

Samsung had reported a near 70% drop in profit for the October-December quarter, which partially reflected how global events like Russia’s war on Ukraine and high inflation have rattled technology markets. 

SK Hynix, another major South Korean semiconductor producer, said this week that it sold $1.7 billion in bonds that can be exchanged for the company’s shares to help fund its purchases of chipmaking materials as it weathers the industry’s downswing. SK Hynix had reported an operating loss of $1.28 billion for the October-December period, which marked its first quarterly deficit since 2012. 

“While we have lowered our short-term production plans, we expect solid demand for the mid- to long-term, so we will continue to invest in infrastructure to secure essential levels in clean room capacities and expand investment in research and development to strengthen our technology leadership,” Samsung said. 

Samsung last month announced plans to invest $227 billion over the next 20 years as part of an ambitious South Korean project to build the world’s largest semiconductor manufacturing base near the capital, Seoul. 

The chip-making “mega cluster,” which will be established in Gyeonggi province by 2042, will be anchored by five new semiconductor plants built by Samsung near its existing manufacturing hub. It will aim to attract 150 other companies producing materials and components or designing high-tech chips, according to South Korea’s government. 

The South Korean plan comes as other technology powerhouses, including the United States, Japan and China, are building up their domestic chip manufacturing, deploying protectionist measures, tax cuts and sizeable subsidies to lure investments. 

Artemis Crew Looking Forward to Restarting NASA’s Moon Program

The last time humans were on the moon was in 1972. Now NASA is preparing to put astronauts back on the lunar surface in 2025, if all goes as scheduled. VOA’s Alexander Kruglyakov spoke with the crew that will take part in the first of those missions: a planned flight around the moon in November 2024.

FBI Targets Users in Crackdown on Darknet Marketplaces

Darknet users, beware: If you frequent criminal marketplaces in the internet’s underbelly, think again. Chances are you’re in the FBI’s crosshairs. 

The FBI is cracking down on sites that peddle everything from guns to stolen personal data, and it is not only going after the sites’ administrators but also their users.  

A recent surge in ransomware attacks and other malicious cyber activities has fueled the effort to shut down services that cater to online criminals.  

But shutting down the marketplaces has proven ineffective. With each takedown, a new iteration pops up, drawing users with it. That is why the FBI is eyeing both the operators and the users of these sites.   

“We’re not only trying to attack the supply side, but we’re also attacking the demand side with the users,” a senior FBI official said during a Wednesday briefing on the agency’s takedown of Genesis Market, a large online criminal marketplace. “There’s consequences if you’re going to be using these types of sites to engage in this type of activity.” 

The darknet, the hidden part of the internet that can only be accessed by a special browser, has long been home to various criminal marketplaces and forums. 

One type of criminal marketplace there specializes in buying and selling illegal items, such as drugs, firearms and fraudulently obtained gift cards. 

Another type of market trades in sensitive data, such as stolen credit cards, bank account details and other information that can be used for criminal activity. These sites are known as “data stores.”  

In recent years, a new breed of cyber criminals has emerged. Known as “initial access brokers,” these criminals specialize in selling access to compromised computer networks. Among their customers: ransomware gangs.  

The takedown on Tuesday of Genesis Market, a 5-year-old criminal marketplace described by officials as an “initial access broker,” offers a window into this type of cyber-criminal activity. 

It also shows how the FBI is increasingly going after users of criminal marketplaces and not just their administrators.  

U.S. officials said Genesis Market was not only a seller of stolen account access credentials but was also “one of the most prolific” initial access brokers operating on the darknet.  

Describing it as a “key enabler of ransomware,” the Justice Department said Genesis Market sold “the type of access sought by ransomware actors to attack computer networks in the United States and around the world.”  

The site went dark on Tuesday after the FBI, working with law enforcement agencies in nearly 20 countries, including the U.K. and Canada, took it offline and arrested nearly 120 people. 

In a statement, Attorney General Merrick Garland hailed the operation as “an unprecedented takedown of a major criminal marketplace that enabled cybercriminals to victimize individuals, businesses, and governments around the world.” 

Genesis is one of two popular cyber-criminal marketplaces taken down by the FBI in the past month.   

In March, the FBI shut down Breach Forums, a criminal forum and marketplace that boasted more than 340,000 members. On the Breach Forums website, users discussed tools and techniques for hacking and exploiting hacked information, according to the Justice Department. 

“We’re going after the users who leverage a service like Genesis Market, and we are doing that on a global scale,” the FBI official said. 

To take down Genesis Market, the FBI and its international law enforcement partners seized its servers and domains.  

In doing so, the FBI was able to obtain information about 59,000 individual user accounts, a senior Justice Department official said during the briefing.  

The information included usernames, passwords, email accounts, secure messenger accounts and user histories, the official said.  

“And those records helped law enforcement uncover the true identities of many of the users,” the official said.  

The users ran the gamut from online fraudsters to ransomware criminals.  

Some of the users were in the U.S., officials said, declining to provide any other details about them. They were among the 119 people arrested around the world in connection with the Genesis Market takedown.  

Is Social Media Bad for Kids? What We Know

The push to legally restrict children’s access to social media in the United States is gaining steam. So far, however, researchers say there are both negative and positive aspects of minors using the platforms, as VOA’s Veronica Balderas Iglesias found out.

US Chip Controls Threaten China’s Technology Ambitions

Furious at U.S. efforts that cut off access to technology to make advanced computer chips, China’s leaders appear to be struggling to figure out how to retaliate without hurting their own ambitions in telecoms, artificial intelligence and other industries.

Chinese leader Xi Jinping’s government sees the chips — which are used in everything from phones to kitchen appliances to fighter jets — as crucial assets in its strategic rivalry with Washington and efforts to gain wealth and global influence. Chips are the center of a “technology war,” a Chinese scientist wrote in an official journal in February.

China has its own chip foundries, but they supply only low-end processors used in autos and appliances. The U.S. government, starting under President Donald Trump, has been cutting off access to a growing array of tools to make chips for computer servers, AI and other advanced applications. Japan and the Netherlands have joined in limiting access to technology they say might be used to make weapons.

Xi, in unusually pointed language, accused Washington in March of trying to block China’s development with a campaign of “containment and suppression.” He called on the public to “dare to fight.”

Despite that, Beijing has been slow to retaliate against U.S. companies, possibly to avoid disrupting Chinese industries that assemble most of the world’s smartphones, tablet computers and other consumer electronics. They import more than $300 billion worth of foreign chips every year.

Investing in self-reliance

The ruling Communist Party is throwing billions of dollars at trying to accelerate chip development and reduce the need for foreign technology.

China’s loudest complaint: It is blocked from buying a machine available only from a Dutch company, ASML, that uses ultraviolet light to etch circuits into silicon chips on a scale measured in nanometers, or billionths of a meter. Without that, Chinese efforts to make transistors faster and more efficient by packing them more closely together on fingernail-size slivers of silicon are stalled.

Making processor chips requires some 1,500 steps and technologies owned by U.S., European, Japanese and other suppliers.

“China won’t swallow everything. If damage occurs, we must take action to protect ourselves,” the Chinese ambassador to the Netherlands, Tan Jian, told the Dutch newspaper Financieele Dagblad.

“I’m not going to speculate on what that might be,” Tan said. “It won’t just be harsh words.”

The conflict has prompted warnings the world might split into separate spheres with incompatible technology standards that mean computers, smartphones and other products from one region wouldn’t work in others. That would raise costs and might slow innovation.

“The bifurcation in technological and economic systems is deepening,” Prime Minister Lee Hsien Loong of Singapore said at an economic forum in China last month. “This will impose a huge economic cost.”

U.S.-Chinese relations are at their lowest level in decades due to disputes over security, Beijing’s treatment of Hong Kong and of Muslim ethnic minorities, territorial disputes, and China’s multibillion-dollar trade surpluses.

Chinese industries will “hit a wall” in 2025 or 2026 if they can’t get next-generation chips or the tools to make their own, said Handel Jones, a tech industry consultant.

China “will start falling behind significantly,” said Jones, CEO of International Business Strategies.

EV batteries as leverage

Beijing might have leverage, though, as the biggest source of batteries for electric vehicles, Jones said.

Chinese battery giant CATL supplies U.S. and European automakers. Ford Motor Co. plans to use CATL technology in a $3.5 billion battery factory in Michigan.

“China will strike back,” Jones said. “What the public might see is China not giving the U.S. batteries for EVs.”

On Friday, Japan increased pressure on Beijing by joining Washington in imposing controls on exports of chipmaking equipment. The announcement didn’t mention China, but the trade minister said Tokyo doesn’t want its technology used for military purposes.

A Chinese Foreign Ministry spokeswoman, Mao Ning, warned Japan that “weaponizing sci-tech and trade issues” would “hurt others as well as oneself.”

Hours later, the Chinese government announced an investigation of the biggest U.S. memory chip maker, Micron Technology Inc., a key supplier to Chinese factories. The Cyberspace Administration of China said it would look for national security threats in Micron’s technology and manufacturing but gave no details.

The Chinese military also needs semiconductors for its development of stealth fighter jets, cruise missiles and other weapons.

Chinese alarm grew after President Joe Biden in October expanded controls imposed by Trump on chip manufacturing technology. Biden also barred Americans from helping Chinese manufacturers with some processes.

To nurture Chinese suppliers, Xi’s government is stepping up support that industry experts say already amounts to as much as $30 billion a year in research grants and other subsidies.

Biden Eyes AI Dangers, Says Tech Companies Must Make Sure Products are Safe

U.S. President Joe Biden said on Tuesday it remains to be seen whether artificial intelligence (AI) is dangerous, but underscored that technology companies had a responsibility to ensure their products were safe before making them public. 

Biden told science and technology advisers that AI could help in addressing disease and climate change, but it was also important to address potential risks to society, national security and the economy. 

“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” he said at the start of a meeting of the President’s Council of Advisors on Science and Technology. When asked if AI was dangerous, he said, “It remains to be seen. It could be.” 

Biden spoke on the same day that his predecessor, former President Donald Trump, surrendered in New York over charges stemming from a probe into hush money paid to a porn actor. 

Biden declined to comment on Trump’s legal woes, and Democratic strategists say his focus on governing will create a politically advantageous split screen of sorts as his former rival, a Republican, deals with his legal challenges. 

The president said social media had already illustrated the harm that powerful technologies can do without the right safeguards. 

“Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people,” Biden said.  

He reiterated a call for Congress to pass bipartisan privacy legislation to put limits on the personal data that technology companies collect, ban advertising targeted at children, and prioritize health and safety in product development. 

Shares of companies that employ AI dropped sharply before Biden’s meeting, although the broader market was also selling off on Tuesday.  

Shares of AI software company C3.ai Inc. were down 24%, erasing more than half of a nearly 40% four-session winning streak through Monday. Thailand security firm Guardforce AI fell 29%, data analytics firm BigBear.ai was down 16% and conversation intelligence company SoundHound AI was down 13% late on Tuesday.  

AI is becoming a hot topic for policymakers. 

The tech ethics group Center for AI and Digital Policy has asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, which has wowed and appalled users with its human-like abilities to generate written responses to requests. 

Democratic U.S. Senator Chris Murphy has urged society to pause as it considers the ramifications of AI. 

Last year the Biden administration released a blueprint for an AI “Bill of Rights” to help ensure users’ rights are protected as technology companies design and develop AI systems.  

US-Trained Woman Teaching Digital Skills to Children in Rural Kenya

The digital divide is one of the biggest challenges to education in sub-Saharan Africa, where the United Nations says nearly 90% of students lack access to household computers, and 82% to the internet. In Kenya, the aid group TechLit Africa aims to change that by building scores of computer labs. Juma Majanga reports from Mogotio, Kenya.

Ukraine’s Destruction Brought to Life Through Virtual Reality Exhibit

An exhibition currently on display in Poland uses virtual reality to show the level of destruction Russia’s war has brought on Ukraine. For some visitors, the VR videos that can be viewed at the “Through the War” display have been overwhelming. Lesia Bakalets reports from Warsaw. Daniil Batushchak contributed.

TikTok Fined $15.9M by UK Watchdog for Misuse of Kids’ Data

Britain’s privacy watchdog hit TikTok with a multimillion-dollar penalty Tuesday for misusing children’s data and violating other protections for users’ personal information.

The Information Commissioner’s Office said it issued a fine of $15.9 million to the short-video sharing app, which is wildly popular with young people.

It’s the latest example of tighter scrutiny that TikTok and its parent, Chinese technology company ByteDance, are facing in the West, where governments are increasingly concerned about risks that the app poses to data privacy and cybersecurity.

The British watchdog, which was investigating data breaches between May 2018 and July 2020, said TikTok allowed as many as 1.4 million children in the U.K. under 13 to use the app in 2020, despite the platform’s own rules prohibiting children that young from setting up accounts.

TikTok didn’t adequately identify and remove children under 13 from the platform, the watchdog said. And even though it knew younger children were using the app, TikTok failed to get consent from their parents to process their data, as required by Britain’s data protection laws, the agency said.

“There are laws in place to make sure our children are as safe in the digital world as they are in the physical world. TikTok did not abide by those laws,” Information Commissioner John Edwards said in a press release.

TikTok collected and used personal data of children who were inappropriately given access to the app, he said.

“That means that their data may have been used to track them and profile them, potentially delivering harmful, inappropriate content at their very next scroll,” Edwards said.

The company said it disagreed with the watchdog’s decision.

“We invest heavily to help keep under 13s off the platform and our 40,000-strong safety team works around the clock to help keep the platform safe for our community,” TikTok said in a statement. “We will continue to review the decision and are considering next steps.”

TikTok says it has improved its sign-up system since the breaches happened, no longer allowing users to simply declare they are old enough and instead looking for other signs that an account is used by someone under 13.

The penalty also covered other breaches of U.K. data privacy law.

The watchdog said TikTok failed to properly inform people about how their data is collected, used and shared in an easily understandable way. Without this information, it’s unlikely that young users would be able “to make informed choices” about whether and how to use TikTok, it said.

TikTok also failed to ensure personal data of British users was processed lawfully, fairly and transparently, the regulator said.

TikTok initially faced a 27 million-pound fine, which was reduced after the company persuaded regulators to drop other charges.

U.S. regulators in 2019 fined TikTok, previously known as Musical.ly, $5.7 million in a case that involved similar allegations of unlawful collection of children’s personal information.

Also Tuesday, Australia became the latest country to ban TikTok from its government devices, with authorities from the European Union to the United States concerned that the app could share data with the Chinese government or push pro-Beijing narratives.

U.S. lawmakers are also considering forcing a sale or even banning it outright as tensions with China grow.

Australia Bans TikTok on Government Devices

Australia said Tuesday it will ban TikTok on government devices, joining a growing list of Western nations cracking down on the Chinese-owned app due to national security fears.   

Attorney-General Mark Dreyfus said the decision followed advice from the country’s intelligence agencies and would begin “as soon as practicable”.   

Australia is the last member of the secretive Five Eyes security alliance to pursue a government TikTok ban, joining its allies the United States, Britain, Canada and New Zealand.   

France, the Netherlands and the European Commission have made similar moves.   

Dreyfus said the government would approve some exemptions on a “case-by-case basis” with “appropriate security mitigations in place”.   

Cybersecurity experts have warned that the app — which boasts more than one billion global users — could be used to hoover up data that is then shared with the Chinese government.   

Surveys have estimated that as many as seven million Australians use the app — or about a quarter of the population.   

In a security notice outlining the ban, the Attorney-General’s Department said TikTok posed “significant security and privacy risks” stemming from the “extensive collection of user data”.   

China condemned the ban, saying it had “lodged stern representations” with Canberra over the move and urging Australia to “provide Chinese companies with a fair, transparent and non-discriminatory business environment”.   

“China has always maintained that the issue of data security should not be used as a tool to generalize the concept of national security, abuse state power and unreasonably suppress companies from other countries,” foreign ministry spokesperson Mao Ning said.   

‘No-brainer’    

But Fergus Ryan, an analyst with the Australian Strategic Policy Institute, said stripping TikTok from government devices was a “no-brainer”.   

“It’s been clear for years that TikTok user data is accessible in China,” Ryan told AFP.    

“Banning the use of the app on government phones is a prudent decision given this fact.”   

The security concerns are underpinned by a 2017 Chinese law that requires local firms to hand over personal data to the state if it is relevant to national security.   

Beijing has denied these reforms pose a threat to ordinary users.   

China “has never and will not require companies or individuals to collect or provide data located in a foreign country, in a way that violates local law”, the foreign ministry’s Mao said in March.   

‘Rooted in xenophobia’   

TikTok has said such bans are “rooted in xenophobia”, while insisting that it is not owned or operated by the Chinese government.    

The company’s Australian spokesman Lee Hunter said it would “never” give data to the Chinese government.   

“No one is working harder to make sure this would never be a possibility,” he told Australia’s Channel Seven.   

But the firm acknowledged in November that some employees in China could access European user data, and in December it said employees had used the data to spy on journalists.   

The app is typically used to share short, lighthearted videos and has exploded in popularity in recent years.   

Many government departments were initially eager to use TikTok as a way to connect with a younger demographic that is harder to reach through traditional media channels.   

New Zealand banned TikTok from government devices in March, saying the risks were “not acceptable in the current New Zealand Parliamentary environment”.    

Earlier this year, the Australian government announced it would be stripping Chinese-made CCTV cameras from politicians’ offices due to security concerns. 

Virgin Orbit Files for Bankruptcy, Seeks Buyer

Virgin Orbit, the satellite launch company founded by Richard Branson, has filed for Chapter 11 bankruptcy and will sell the business, the firm said in a statement Tuesday.   

The California-based company said last week it was laying off 85% of its employees — around 675 people — to reduce expenses due to its inability to secure sufficient funding.   

Virgin Orbit suffered a major setback earlier this year when an attempt to launch the first rocket into space from British soil ended in failure.   

The company had organized the mission with the UK Space Agency and Cornwall Spaceport to launch nine satellites into space.   

On Tuesday, the firm said “it commenced a voluntary proceeding under Chapter 11 of the U.S. Bankruptcy Code… in order to effectuate a sale of the business” and intended to use the process “to maximize value for its business and assets.”   

Last month, Virgin Orbit suspended operations for several days while it held funding negotiations and explored strategic opportunities.   

But at an all-hands meeting on Thursday, CEO Dan Hart told employees that operations would cease “for the foreseeable future,” U.S. media reported at the time.   

“While we have taken great efforts to address our financial position and secure additional financing, we ultimately must do what is best for the business,” Hart said in the company statement on Tuesday.   

“We believe that the cutting-edge launch technology that this team has created will have wide appeal to buyers as we continue in the process to sell the Company.”   

Founded by Branson in 2017, the firm developed “a new and innovative method of launching satellites into orbit,” while “successfully launching 33 satellites into their precise orbit,” Hart added.   

Virgin Orbit’s shares on the New York Stock Exchange were down 3% at 19 cents on Monday evening. 

Germany Could Block ChatGPT if Needed, Says Data Protection Chief

Germany could follow in Italy’s footsteps by blocking ChatGPT over data security concerns, the German commissioner for data protection told the Handelsblatt newspaper in comments published on Monday.

Microsoft-backed OpenAI took ChatGPT offline in Italy on Friday after the national data agency banned the chatbot temporarily and launched an investigation into a suspected breach of privacy rules by the artificial intelligence application. 

“In principle, such action is also possible in Germany,” Ulrich Kelber said, adding that this would fall under state jurisdiction. He did not, however, outline any such plans. 

Kelber said that Germany has requested further information from Italy on its ban. Privacy watchdogs in France and Ireland said they had also contacted the Italian data regulator to discuss its findings. 

“We are following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU data protection authorities in relation to this matter,” said a spokesperson for Ireland’s Data Protection Commissioner (DPC). 

OpenAI had said on Friday that it actively works to reduce personal data in training its AI systems. 

While the Irish DPC is the lead EU regulator for many global technology giants under the bloc’s “one stop shop” data regime, it is not the lead regulator for OpenAI, which has no offices in the EU.

The privacy regulator in Sweden said it has no plans to ban ChatGPT nor is it in contact with the Italian watchdog.

The Italian investigation into OpenAI was launched after a cybersecurity breach last week led to people being shown excerpts of other users’ ChatGPT conversations and their financial information. 

It accused OpenAI of failing to check the age of ChatGPT’s users, who are supposed to be aged 13 or above. Italy is the first Western country to take action against a chatbot powered by artificial intelligence. 

For a nine-hour period, the exposed data included first and last names, billing addresses, credit card types, credit card expiration dates and the last four digits of credit card numbers, according to an email sent by OpenAI to one affected customer and seen by the Financial Times.

NASA to Reveal Crew for 2024 Flight Around the Moon

NASA is set to reveal on Monday the names of the astronauts — three Americans and a Canadian — who will fly around the Moon next year, a prelude to returning humans to the lunar surface for the first time in half a century.   

The mission, Artemis II, is scheduled to take place in November 2024 with the four-person crew circling the Moon but not landing on it.   

As part of the Artemis program, NASA aims to send astronauts to the Moon in 2025 — more than five decades after the historic Apollo missions ended in 1972.   

Besides putting the first woman and first person of color on the Moon, the U.S. space agency hopes to establish a lasting human presence on the lunar surface and eventually launch a voyage to Mars.   

NASA administrator Bill Nelson said this week at a “What’s Next Summit” hosted by Axios that he expected a crewed mission to Mars by the year 2040.  

The four members of the Artemis II crew will be announced at an event at 10:00 am (1500 GMT) at the Johnson Space Center in Houston.   

The 10-day Artemis II mission will test NASA’s powerful Space Launch System rocket as well as the life-support systems aboard the Orion spacecraft.   

The first Artemis mission wrapped up in December with an uncrewed Orion capsule returning safely to Earth after a 25-day journey around the Moon.   

During the trip around Earth’s orbiting satellite and back, Orion logged well over 1.6 million kilometers and went farther from Earth than any previous habitable spacecraft.   

Nelson was also asked at the Axios summit whether NASA could stick to its timetable of landing astronauts on the south pole of the Moon in late 2025.   

“Space is hard,” Nelson said. “You have to wait until you know that it’s as safe as possible, because you’re living right on the edge.   

“So I’m not so concerned with the time,” he said. “We’re not going to launch until it’s right.”   

Only 12 people — all of them white men — have set foot on the Moon. 

Congolese Student’s Device Makes Science Fiction Reality

A student in Congo has developed a tool that allows people to control or move objects using their brain signals. Andre Ndambi visited the department of engineering at the University of Kinshasa and has this story narrated by Salem Solomon. Jean-Louis Mafema contributed.

Twitter Pulls ‘Verified’ Check Mark From Main New York Times Account

Twitter has removed the verification check mark on the main account of The New York Times, one of CEO Elon Musk’s most despised news organizations.

The removal comes as many of Twitter’s high-profile users are bracing for the loss of the blue check marks that helped verify their identity and distinguish them from impostors on the social media platform.

Musk, who owns Twitter, set a deadline of Saturday for verified users to buy a premium Twitter subscription or lose the checks on their profiles. The Times said in a story Thursday that it would not pay Twitter for verification of its institutional accounts.

Early Sunday, Musk tweeted that the Times’ check mark would be removed. Later he posted disparaging remarks about the newspaper, which has aggressively reported on Twitter and on flaws with partially automated driving systems at Tesla, the electric car company, which he also runs.

Other Times accounts such as its business news and opinion pages still had either blue or gold check marks Sunday, as did multiple reporters for the news organization.

“We aren’t planning to pay the monthly fee for check mark status for our institutional Twitter accounts,” the Times said in a statement Sunday. “We also will not reimburse reporters for Twitter Blue for personal accounts, except in rare instances where this status would be essential for reporting purposes.”

The Associated Press, which has said it also will not pay for the check marks, still had them on its accounts at midday Sunday.

Twitter did not answer emailed questions Sunday about the removal of The New York Times check mark.

The cost of keeping the check marks ranges from $8 a month for individual web users to a starting price of $1,000 monthly to verify an organization, plus $50 monthly for each affiliate or employee account. Twitter does not verify the individual accounts to ensure they are who they say they are, as was the case with the previous blue check doled out to public figures and others during the platform’s pre-Musk administration.

While the cost of Twitter Blue subscriptions might seem like nothing for Twitter’s most famous commentators, celebrity users from basketball star LeBron James to Star Trek’s William Shatner have balked at joining. Seinfeld actor Jason Alexander pledged to leave the platform if Musk takes his blue check away.

The White House is also passing on enrolling in premium accounts, according to a memo sent to staff. While Twitter has granted a free gray mark for President Joe Biden and members of his Cabinet, lower-level staff won’t get Twitter Blue benefits unless they pay for it themselves.

“If you see impersonations that you believe violate Twitter’s stated impersonation policies, alert Twitter using Twitter’s public impersonation portal,” said the staff memo from White House official Rob Flaherty.

Alexander, the actor, said there are bigger issues in the world but without the blue mark, “anyone can allege to be me” so if he loses it, he’s gone.

“Anyone appearing with it=an imposter. I tell you this while I’m still official,” he tweeted.

After buying Twitter for $44 billion in October, Musk has been trying to boost the struggling platform’s revenue by pushing more people to pay for a premium subscription. But his move also reflects his assertion that the blue verification marks have become an undeserved or “corrupt” status symbol for elite personalities, news reporters and others granted verification for free by Twitter’s previous leadership.

Along with shielding celebrities from impersonators, one of Twitter’s main reasons for introducing the blue check mark about 14 years ago was to verify politicians, activists, people who suddenly find themselves in the news and little-known journalists at small publications around the globe, serving as an extra tool to curb misinformation from impostor accounts. Most “legacy blue checks” are not household names and weren’t meant to be.

One of Musk’s first product moves after taking over Twitter was to launch a service granting blue checks to anyone willing to pay $8 a month. But it was quickly inundated by impostor accounts, including those impersonating Nintendo, pharmaceutical company Eli Lilly and Musk’s businesses Tesla and SpaceX, so Twitter had to temporarily suspend the service days after its launch.

The relaunched service costs $8 a month for web users and $11 a month for users of its iPhone or Android apps. Subscribers are supposed to see fewer ads, be able to post longer videos and have their tweets featured more prominently. 

Dutch Refinery to Feed Airlines’ Thirst for Clean Fuel 

Scaffolding and green pipes envelop a refinery in the port of Rotterdam where Finnish giant Neste is preparing to significantly boost production of sustainable aviation fuel. 

Switching to non-fossil aviation fuels that produce less net greenhouse gas emissions is key to plans to decarbonize air transport, a significant contributor to global warming. 

Neste, the largest global producer of sustainable aviation fuel (SAF), uses cooking oil and animal fat at this Dutch refinery. 

Sustainable aviation fuels are being made from different sources such as municipal waste, leftovers from the agricultural and forestry industries, crops and plants, and even hydrogen. 

These technologies are still developing, and the product is more expensive. 

But these fuels will help airlines reduce CO2 emissions by up to 80%, according to the International Air Transport Association. 

Global output of SAF was 250,000 tons last year, less than 0.1% of the more than 300 million tons of aviation fuel used during that period. 

“It’s a drop in the ocean but a significant drop,” said Matti Lehmus, CEO of Neste. 

“We’ll be growing drastically our production from 100,000 tons to 1.5 million tons next year,” he added. 

There clearly is demand. 

The European Union plans to impose the use of a minimum amount of sustainable aviation fuel by airlines, rising from 2% in 2025 to 6% in 2030 and at least 63% in 2050. 

Neste has another site for SAF in Singapore, which will start production in April. 

“With the production facilities of Neste in Rotterdam and Singapore, we can meet the mandate for [the] EU in 2025,” said Jonathan Wood, the company’s vice president for renewable aviation. 

Vincent Etchebehere, director for sustainable development at Air France, said that “between now and 2030, there will be more demand than supply of SAF.” 

Need to mature technologies 

Air France-KLM has reached a deal with Neste for a supply of 1 million tons of sustainable aviation fuel between 2023 and 2030. 

It has also lined up 10-year agreements with U.S. firm DG Fuels for 600,000 tons and with TotalEnergies for 800,000 tons. 

At the Rotterdam site, two giant storage tanks of 15,000 cubic meters are yet to be painted. 

They’re near a quay where the fuel will be transported by boat to feed Amsterdam’s Schiphol airport and airports in Paris. 

The Franco-Dutch group has already taken steps to cut its carbon footprint, using 15% of the global SAF output last year — or 0.6% of its fuel needs. 

Neste’s Lehmus said there was a great need to “mature the technologies” to make sustainable aviation fuel from diverse sources such as algae, lignocellulose and synthetic fuels. 

Air France CEO Anne Rigail said the price of sustainable aviation fuel was as important as its production. 

Sustainable fuel costs 3,500 euros ($3,800) a ton globally but only $2,000 in the United States thanks to government subsidies. In France, it costs 5,000 euros a ton. 

“We need backing and we really think the EU can do more,” said Rigail. 

Italy Temporarily Blocks ChatGPT Over Privacy Concerns

Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government’s privacy watchdog said Friday.

The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company from processing Italian users’ data.

U.S.-based OpenAI, which developed the chatbot, said late Friday night it has disabled ChatGPT for Italian users at the government’s request. The company said it believes its practices comply with European privacy laws and hopes to make ChatGPT available again soon.

While some public schools and universities around the world have blocked ChatGPT from their local networks over student plagiarism concerns, Italy’s action is “the first nation-scale restriction of a mainstream AI platform by a democracy,” said Alp Toker, director of the advocacy group NetBlocks, which monitors internet access worldwide.

The restriction affects the web version of ChatGPT, popularly used as a writing assistant, but is unlikely to affect software applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.

The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

The Italian watchdog said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users’ data or face a fine of up to either 20 million euros (nearly $22 million) or 4% of annual global revenue.

The agency’s statement cited the EU’s General Data Protection Regulation and pointed to a recent data breach involving ChatGPT “users’ conversations” and information about subscriber payments.

OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat history.

“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company had said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”

Italy’s privacy watchdog, known as the Garante, also questioned whether OpenAI had legal justification for its “massive collection and processing of personal data” used to train the platform’s algorithms. And it said ChatGPT can sometimes generate — and store — false information about individuals.

Finally, it noted there’s no system to verify users’ ages, exposing children to responses “absolutely inappropriate to their age and awareness.”

OpenAI said in response that it works “to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals.”

“We also believe that AI regulation is necessary — so we look forward to working closely with the Garante and educating them on how our systems are built and used,” the company said.

The Italian watchdog’s move comes as concerns grow about the artificial intelligence boom. A group of scientists and tech industry leaders published a letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall to give time for society to weigh the risks.

The president of Italy’s privacy watchdog agency told Italian state TV Friday evening he was one of those who signed the appeal. Pasquale Stanzione said he did so because “it’s not clear what aims are being pursued” ultimately by those developing AI.

Namibia Looks East for Green Hydrogen Partnerships

The administrator of the National Energy Administration of China, Zhang Jinhua, on Friday paid a visit to Namibian President Hage Geingob. The visit is aimed at establishing cooperation in the area of green hydrogen production.

Namibia is positioning itself as a future green hydrogen producer to attract investment from the globe’s leading and fastest growing producer of renewable energy — China.

James Mnyupe, Namibia’s green hydrogen commissioner and economic adviser to the president, told VOA that although Namibia has not signed a partnership with China on green hydrogen, officials are looking to the Asian country as a critical partner. But it isn’t talking to China alone.

“We have an MOU [Memo of Understanding] with Europe; we are also discussing possibilities of collaboration with the United States,” he said. “If you look at any of these green hydrogen projects as I mentioned, simply they will use components from all over the world.”

He said in the face of rising energy demands around the globe and increased tensions between the East and West, Namibia will not be drawn into picking sides. He was referring to the conflict in Ukraine and its effect on international relations.

“So today Europe’s biggest trading partner is China, China’s biggest markets are the U.S. and Europe so if Namibia trades with Europe, China or the U.S. for that matter, that is not a reason for involving Namibia in any political or conflict-related discussions between those countries,” he said.

Presidential spokesperson Alfredo Hengari said the visit by U.S. Ambassador to Namibia Randy Berry on Tuesday was aimed at cementing relations in major areas of interest, among them green hydrogen and oil exploration.

“Namibia is making tremendous advances in the areas of green energy but also in hydrocarbons,” he said. “American companies are drilling off the coast of the Republic of Namibia and so it was a courtesy visit just to emphasize increasing cooperation in these areas.”

Speaking through an interpreter Friday, the administrator of China’s National Energy Administration said China is ready to partner with Namibia in all areas of green hydrogen.

Hydrogen is an alternative fuel that industrialized nations hope can help them reach their ambitious goal of net-zero carbon emissions by 2050.

Mnyupe said Namibia is looking to learn from China’s experience in producing renewable energy and renewable energy components. Friday’s visit is an indication of China’s interest in partnering with Namibia and participating in the country’s green-hydrogen value chain.

Call for Pause in AI Development May Fall on Deaf Ears

A group of influential figures from Silicon Valley and the larger tech community released an open letter this week calling for a pause in the development of powerful artificial intelligence programs, arguing that they present unpredictable dangers to society.

The organization that created the open letter, the Future of Life Institute, said the recent rollout of increasingly powerful AI tools by companies like OpenAI, IBM and Google demonstrates that the industry is “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The signatories of the letter, including Elon Musk, founder of Tesla and SpaceX, and Steve Wozniak, co-founder of Apple, called for a six-month halt to all development work on large language model AI projects.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

The letter does not call for a halt to all AI-related research but focuses on extremely large systems that assimilate vast amounts of data and use it to solve complex tasks and answer difficult questions.

However, experts told VOA that commercial competition between different AI labs, and a broader concern that Western companies could fall behind China in the race to develop more advanced applications of the technology, make any significant pause in development unlikely.

Chatbots offer window

While artificial intelligence is present in day-to-day life in myriad ways, including algorithms that curate social media feeds, systems used to make credit decisions in many financial institutions and facial recognition increasingly used in security systems, large language models have increasingly taken center stage in the discussion of AI.

In its simplest form, a large language model is an AI system that analyzes large amounts of textual data and uses a set of parameters to predict the next word in a sentence. However, models of sufficient complexity, operating with billions of parameters, are able to model human language, sometimes with uncanny accuracy.

In November of last year, OpenAI released a program called ChatGPT (Chat Generative Pre-trained Transformer) to the general public. Based on the underlying GPT-3.5 model, the program allows users to enter text through a web browser and returns responses nearly instantaneously.

ChatGPT was an immediate sensation, as users used it to generate everything from complex computer code to poetry. Though it was quickly apparent that the program frequently returned false or misleading information, the potential for it to disrupt any number of sectors of life, from academia to customer service systems to national defense, was clear.

Microsoft has since integrated ChatGPT into its search engine, Bing. More recently, Google has rolled out its own AI-supported search capability, known as Bard.

GPT-4 as benchmark

In the letter calling for a pause in development, the signatories use GPT-4 as a benchmark. GPT-4 is an AI tool developed by OpenAI that is more powerful than the model behind the original ChatGPT. It is currently in limited release. The moratorium being called for in the letter is on systems “more powerful than GPT-4.”

One problem, though, is that it is not precisely clear what “more powerful” means in this context.

“There are other models that, in computational terms, are much less large or powerful, but which have very powerful potential impacts,” Bill Drexel, an associate fellow with the AI Safety and Stability program at the Center for a New American Security (CNAS), told VOA. “So there are much smaller models that can potentially help develop dangerous pathogens or help with chemical engineering — really consequential models that are much smaller.”

Limited capabilities

Edward Geist, a policy researcher at the RAND Corporation and the author of the forthcoming book Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare, told VOA that it is important to understand both what programs like GPT-4 are capable of and what they are not.

For example, he said, OpenAI has made clear in technical data provided to potential commercial customers that once the model is trained on a set of data, there is no clear way to teach it new facts or otherwise update it without completely retraining the system. Additionally, it does not appear able to perform tasks that require “evolving” memory, such as reading a book.

“There are, sort of, glimmerings of an artificial general intelligence,” he said. “But then you read the report, and it seems like it’s missing some features of what I would consider even a basic form of general intelligence.”

Geist said that he believes many of those warning about the dangers of AI are “absolutely earnest” in their concerns, but he is not persuaded that those dangers are as severe as they believe.

“The gap between that super-intelligent self-improving AI that has been postulated in those conjectures, and what GPT-4 and its ilk can actually do seems to be very broad, based on my reading of OpenAI’s technical report about it.”

Commercial and security concerns

James A. Lewis, senior vice president and director of the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS), told VOA he is skeptical that the open letter will have much effect, for reasons as varied as commercial competition and concerns about national security.

Asked what he thinks the chances are of the industry agreeing to a pause in research, he said, “Zero.”

“You’re asking Microsoft to not compete with Google?” Lewis said. “They’ve been trying for decades to beat Google on search engines, and they’re on the verge of being able to do it. And you’re saying, let’s take a pause? Yeah, unlikely.”

Competition with China

More broadly, Lewis said, improvements in AI will be central to progress in technology related to national defense.

“The Chinese aren’t going to stop because Elon Musk is getting nervous,” Lewis said. “That will affect [Department of Defense] thinking. If we’re the only ones who put the brakes on, we lose the race.”

Drexel, of CNAS, agreed that China is unlikely to feel bound by any such moratorium.

“Chinese companies and the Chinese government would be unlikely to agree to this pause,” he said. “If they agreed, they’d be unlikely to follow through. And in any case, it’d be very difficult to verify whether or not they were following through.”

He added, “The reason why they’d be particularly unlikely to agree is because — particularly on models like GPT-4 — they feel and recognize that they are behind. [Chinese President] Xi Jinping has said numerous times that AI is a really important priority for them. And so catching up and surpassing [Western companies] is a high priority.”

Li Ang Zhang, an information scientist with the RAND Corporation, told VOA he believes a blanket moratorium is a mistake.

“Instead of taking a fear-based approach, I’d like to see a better thought-out strategy towards AI governance,” he said in an email exchange. “I don’t see a broad pause in AI research as a tenable strategy, but I think this is a good way to open a conversation on what AI safety and ethics should look like.”

He also said that a moratorium might disadvantage the U.S. in future research.

“By many metrics, the U.S. is a world leader in AI,” he said. “For AI safety standards to be established and succeed, two things must be true. The U.S. must maintain its world lead in both AI and safety protocols. What happens after six months? Research continues, but now the U.S. is six months behind.”

Is Banning TikTok Constitutional?

U.S. lawmakers and officials are ratcheting up threats to ban TikTok, saying the Chinese-owned video-sharing app used by millions of Americans poses a threat to privacy and U.S. national security.

But free speech advocates and legal experts say an outright ban would likely face a constitutional hurdle: the First Amendment right to free speech.

“If passed by Congress and enacted into law, a nationwide ban on TikTok would have serious ramifications for free expression in the digital sphere, infringing on Americans’ First Amendment rights and setting a potent and worrying precedent in a time of increased censorship of internet users around the world,” a coalition of free speech advocacy organizations wrote in a letter to Congress last week, urging a solution short of an outright ban.

The plea came as U.S. lawmakers grilled TikTok CEO Shou Chew over concerns the Chinese government could exploit the platform’s user data for espionage and influence operations in the United States.

TikTok, which bills itself as a “platform for free expression” and a “modern-day version of the town square,” says it has more than 150 million users in the United States.

But the platform is owned by ByteDance, a Beijing-based company, and U.S. officials have raised concerns that the Chinese government could utilize the app’s user data to influence and spy on Americans.

Aaron Terr, director of public advocacy at the Foundation for Individual Rights and Expression, said while there are legitimate privacy and national security concerns about TikTok, the First Amendment implications of a ban so far have received little public attention.

“If nothing else, it’s important for that to be a significant part of the conversation,” Terr said in an interview. “It’s important for people to consider alongside national security concerns.”

To be sure, the First Amendment is not absolute. There are types of speech that are not protected by the amendment. Among them: obscenity, defamation and incitement.

But the Supreme Court has also made it clear there are limits on how far the government can go to regulate speech, even when it involves a foreign adversary or when the government argues that national security is at stake.

In a landmark 1965 case, the Supreme Court invalidated a law that prevented Americans from receiving foreign mail the government deemed “communist political propaganda.”

In another consequential case involving a defamation lawsuit brought against The New York Times, the court ruled that even an “erroneous statement” enjoyed some constitutional protection.

“And that’s relevant because here, one of the reasons that Congress is concerned about TikTok is the potential that the Chinese government could use it to spread disinformation,” said Caitlin Vogus, deputy director of the Free Expression Project at the Center for Democracy and Technology, one of the signatories of the letter to Congress.

Proponents of a ban deny a prohibition would run afoul of the First Amendment.

“This is not a First Amendment issue, because we’re not trying to ban booty videos,” Republican Senator Marco Rubio, a longtime critic of TikTok, said on the Senate floor on Monday.

ByteDance, TikTok’s parent company, is beholden to the Chinese Communist Party, Rubio said.

“So, if the Communist Party goes to ByteDance and says, ‘We want you to use that algorithm to push these videos on Americans to convince them of whatever,’ they have to do it. They don’t have an option,” Rubio said.

The Biden administration has reportedly demanded that ByteDance divest itself of TikTok or face a possible ban.

TikTok denies the allegations and says it has taken measures to protect the privacy and security of its U.S. user data.

Rubio is sponsoring one of several competing bills that envision different pathways to a TikTok ban.

A House bill called the Deterring America’s Technological Adversaries Act would empower the president to shut down TikTok.

A Senate bill called the RESTRICT Act would authorize the Commerce Department to investigate information and communications technologies to determine whether they pose national security risks.

This would not be the first time the U.S. government has attempted to ban TikTok.

In 2020, then-President Donald Trump issued an executive order declaring a national emergency that would have effectively shut down the app.

In response, TikTok sued the Trump administration, arguing that the executive order violated its due process and First Amendment rights.

While courts did not weigh in on the question of free speech, they blocked the ban on the grounds that Trump’s order exceeded statutory authority by targeting “informational materials” and “personal communication.”

Allowing the ban would “have the effect of shutting down, within the United States, a platform for expressive activity used by about 700 million individuals globally,” including more than 100 million Americans, federal judge Wendy Beetlestone wrote in response to a lawsuit brought by a group of TikTok users.

A fresh attempt to ban TikTok, whether through legislation or executive action, would likely trigger a First Amendment challenge from the platform, as well as its content creators and users, according to free speech advocates. And the case could end up before the Supreme Court.

In determining the constitutionality of a ban, courts would likely apply a judicial review test known as “intermediate scrutiny,” Vogus said.

“It would still mean that any ban would have to be justified by an important governmental interest and that a ban would have to be narrowly tailored to address that interest,” Vogus said. “And I think that those are two significant barriers to a TikTok ban.”

But others say a “content-neutral” ban would pass muster with the Supreme Court.

“To pass content-neutral laws, the government would need to show that the restraint on speech, if any, is narrowly tailored to serve a ‘significant government interest’ and leaves open reasonable alternative avenues for expression,” Joel Thayer, president of the Digital Progress Institute, wrote in a recent column in The Hill online newspaper.

In Congress, even as the push to ban TikTok gathers steam, there are lone voices of dissent.

One is progressive Democrat Alexandria Ocasio-Cortez. Another is Democratic Representative Jamaal Bowman, himself a prolific TikTok user.

Opposition to TikTok, Bowman said, stems from “hysteria” whipped up by a “Red scare around China.”

“Our First Amendment gives us the right to speak freely and to communicate freely, and TikTok as a platform has created a community and a space for free speech for 150 million Americans and counting,” Bowman, who has more than 180,000 TikTok followers, said recently at a rally held by TikTok content creators.

Instead of singling out TikTok, Bowman said, Congress should enact new legislation to ensure social media users are safe and their data secure.

Russia Using TikTok to Push Pro-Moscow Narrative on Ukraine

New data suggests that at least some U.S. adversaries are taking advantage of the hugely popular TikTok video-sharing app for influence operations.

A report Thursday by the Alliance for Securing Democracy (ASD) finds Russia “has been using the app to push its own narrative” in its effort to undermine Western support for Ukraine.

“Based on our analysis, some users are engaging more with Russian state media than other, more reputable independent news outlets on the platform,” according to the report by the U.S.-based election security advocate that tracks official state actors and state-backed media.

“More TikTok users follow RT than The New York Times,” it said.

The ASD report found that as of March 22, there were 78 Russian-funded news outlets on TikTok with a total of more than 14 million followers.

It also found that despite a commitment from TikTok to label the accounts as belonging to state-controlled media, 31 of the accounts were not labeled.

Yet even labeling the accounts seemed to have little impact on their ability to gain an audience.

“By some measures, including the performance of top posts, labeled Russian state media accounts are reaching larger audiences on TikTok than other platforms,” the report said. “RIA Novosti’s top TikTok post so far in 2023 has more than 5.6 million views. On Twitter, its top post has fewer than 20,000 views.”

The report on Russian state media’s use of TikTok comes as U.S. officials are again voicing concern about the potential for TikTok to be used for disinformation campaigns and foreign influence operations.

“Just a tremendous number of people in the United States use TikTok,” John Plumb, the principal cyber adviser to the U.S. secretary of defense, told members of a House Armed Services subcommittee, warning of “the control China may have to direct information through it” and use it as a “misinformation platform.”

“This provides a foreign nation a platform for information operations,” U.S. Cyber Command’s General Paul Nakasone added, noting that TikTok has 150 million users in the United States.

“One-third of the adult population receives their news from this app,” he said. “One-sixth of our children are saying they’re constantly on this app.”

TikTok, owned by China-based ByteDance, has sought to push back against the concerns.

“Let me state this unequivocally: ByteDance is not an agent of China or any other country,” TikTok CEO Shou Zi Chew told U.S. lawmakers during a hearing last week.

“We do not promote or remove content at the request of the Chinese government,” he said, trying to downplay fears about the company’s data collection practices and Chinese laws that would require the company to share that information with the Chinese government if asked.

U.S. lawmakers, intelligence and security officials, however, have their doubts.

The top Republican on the Senate Intelligence Committee, Marco Rubio, earlier this month warned that TikTok is “probably one of the most valuable surveillance tools on the planet.”

A day later, Cyber Command’s Nakasone told members of the House Intelligence Committee that TikTok is like a “loaded gun,” while FBI Director Christopher Wray warned that TikTok’s recommendation algorithm “could be used to conduct influence operations.”

“That’s not something that would be easily detected,” he added.

Chinese Hacking Group Highly Active, US Cybersecurity Firm Says

A Chinese hacking group that is likely state-sponsored and has been linked previously to attacks on U.S. state government computers is highly active and focusing on a broad range of targets that may be of strategic interest to China’s government and security services, a private American cybersecurity firm said in a report Thursday.

The hacking group, which the report calls RedGolf, overlaps so closely with groups tracked by other security companies under the names APT41 and BARIUM that they are thought to be either the same or very closely affiliated, said Jon Condra, director of strategic and persistent threats for Insikt Group, the threat research division of Massachusetts-based cybersecurity company Recorded Future.

Following up on previous reports of APT41 and BARIUM activities and monitoring the targets that were attacked, Insikt Group said it had identified a cluster of domains and infrastructure “highly likely used across multiple campaigns by RedGolf” over the past two years.

“We believe this activity is likely being conducted for intelligence purposes rather than financial gain due to the overlaps with previously reported cyberespionage campaigns,” Condra said in an emailed response to questions from The Associated Press.

China’s Foreign Ministry denied the accusations, saying, “This company has produced false information on so-called ‘Chinese hacker attacks’ more than once in the past. Their relevant actions are groundless accusations, far-fetched and lack professionalism.”

Chinese authorities have consistently denied any form of state-sponsored hacking, instead saying China itself is a major target of cyberattacks.

APT41 was implicated in a 2020 U.S. Justice Department indictment that accused Chinese hackers of targeting more than 100 companies and institutions in the U.S. and abroad, including social media and video game companies, universities and telecommunications providers.

In its analysis, Insikt Group said it found evidence that RedGolf “remains highly active” in a wide range of countries and industries, “targeting aviation, automotive, education, government, media, information technology and religious organizations.”

Insikt Group did not identify specific victims of RedGolf, but said it was able to track scanning and exploitation attempts targeting different sectors with a version of the KEYPLUG backdoor malware also used by APT41.

Insikt said it had identified several other malicious tools used by RedGolf in addition to KEYPLUG, “all of which are commonly used by many Chinese state-sponsored threat groups.”

In 2022, the cybersecurity firm Mandiant reported that APT41 was responsible for breaches of the networks of at least six U.S. state governments, also using KEYPLUG.

In that case, APT41 exploited a previously unknown vulnerability in an off-the-shelf commercial web application used by 18 states for animal health management, according to Mandiant, which is now owned by Google. It did not identify which states’ systems were compromised.

Mandiant called APT41 “a prolific cyber threat group that carries out Chinese state-sponsored espionage activity in addition to financially motivated activity potentially outside of state control.”

Cyber intelligence companies use different tracking methodologies and often name the threats they identify differently, but Condra said APT41, BARIUM and RedGolf “likely refer to the same set of threat actor or group(s)” due to similarities in their online infrastructure, tactics, techniques and procedures.

“RedGolf is a particularly prolific Chinese state-sponsored threat actor group that has likely been active for many years against a wide range of industries globally,” he said.

“The group has shown the ability to rapidly weaponize newly reported vulnerabilities and has a history of developing and using a large range of custom malware families.”

Tech Leaders Sign Letter Calling for ‘Pause’ to Artificial Intelligence 

An open letter signed by Elon Musk, Apple co-founder Steve Wozniak and other prominent high-tech experts and industry leaders is calling on the artificial intelligence industry to pause for six months to develop safety protocols for the technology.

The letter — which as of early Thursday had been signed by nearly 1,400 people — was drafted by the Future of Life Institute, a nonprofit group dedicated to “steering transformative technologies away from extreme, large-scale risks and towards benefiting life.”

In the letter, the group notes the rapidly developing capabilities of AI technology and how it has surpassed human performance in many areas. The group cites the example of AI developed to create new drug treatments, which could just as easily be used to create deadly pathogens.

Perhaps most significantly, the letter points to the recent introduction of GPT-4, a program developed by San Francisco-based company OpenAI, as a benchmark for its concerns.

GPT stands for Generative Pre-trained Transformer, a type of language model that uses deep learning to generate human-like conversational text.

The company has said GPT-4, its latest version, is more accurate and human-like and has the ability to analyze and respond to images. The firm says the program has passed a simulated bar exam, the test that allows someone to become a licensed attorney.

In its letter, the group maintains that such powerful AI systems should be developed “only once we are confident that their effects will be positive and their risks will be manageable.”

Noting the potential a program such as GPT-4 could have to create disinformation and propaganda, the letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter says AI labs and independent experts should use the pause “to jointly develop and implement a set of shared safety protocols for advanced AI design and development that will ensure they are safe beyond a reasonable doubt.”

Meanwhile, another group has taken its concerns about the negative potential for GPT-4 a step further.

The nonprofit Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission on Thursday calling on the agency to suspend further deployment of the system and launch an investigation.

In its complaint, the group said the technical description of the GPT-4 system provided by its own makers describes almost a dozen major risks posed by its use, including “disinformation and influence operations, proliferation of conventional and unconventional weapons,” and “cybersecurity.”

Some information for this report was provided by The Associated Press and Reuters.

Biden Says GOP Policies Would Surrender Tech Economy to China

President Joe Biden said Tuesday that Republicans’ ideas for cutting the budget could undermine U.S. manufacturing and help China dominate the world economy. 

Speaking at a semiconductor maker in North Carolina to highlight his own policies, Biden is trying to shape public sentiment as he faces off with House Speaker Kevin McCarthy, R-Calif., over raising the federal government’s legal borrowing capacity. 

McCarthy sent a letter to Biden on Tuesday saying that talks should start about possible spending cuts in return for the debt limit increase.  

Biden has said Republicans need to put forth their own budget plan before negotiations start. Without an agreement, the federal government could default on its financial obligations. 

The president tried to ratchet up pressure on Tuesday by saying that the GOP demands on the budget would only empower China, the country’s key geopolitical rival.  

Being tough on China has been a core part of the identity of former President Donald Trump, who is seeking to return to the White House in 2024, and his Make America Great Again movement. The Democratic president said Republican objections to his policies would instead strengthen China. 

“It would mean ceding the future of innovation and technology to China,” Biden told the crowd. “I’ve got news for you and for MAGA Republicans in Congress: not on my watch. We’re not going to let them undo all the progress we made.” 

Biden’s trip to Wolfspeed follows the Durham-based company announcing plans last September to build a $5 billion manufacturing facility in Chatham County that is expected to create 1,800 new jobs. The company is the world’s leading producer of silicon carbide chips. Biden had won passage last July of a $280 billion legislative package known as the CHIPS Act, which was intended to boost the U.S. semiconductor industry and scientific research. 

It’s nothing new for the Biden administration to highlight the CHIPS Act, the $1.9 trillion COVID relief bill, the $1 trillion infrastructure legislation and a roughly $375 billion climate bill — major legislation that the Democratic administration steered into law before Democrats lost control of the House. 

But now, just weeks after Biden unveiled his own budget — it includes $2.6 trillion in new spending — his administration is looking for chances to lean into its battle with Republicans over spending priorities and who has the better ideas to steward the U.S. economy in the years to come.  

Republicans have rejected Biden’s budget but have yet to unveil their own counteroffer to the Democrats’ blueprint, which is built around tax increases on the wealthy and serves as a vision statement of sorts for Biden’s yet-to-be-declared 2024 reelection campaign. 

His trip is part of a larger effort to draw attention to his policies, which have been overshadowed by high inflation. 

Besides Biden’s visit to Wolfspeed, Vice President Kamala Harris, first lady Jill Biden and other senior administration officials will fan out to 20 states over the next three weeks to highlight the impact of Biden’s economic agenda, according to the White House. 

Biden has said he intends to run for a second term but has yet to formally launch his reelection campaign. 

His effort to highlight legislative victories could also give him an opportunity to present voters with images of an administration focused on governing as Trump braces for a possible indictment over alleged hush money payments made during his 2016 campaign. 

Trump narrowly won North Carolina in 2020. Among the other states that Biden and administration officials will be visiting in the weeks ahead are Georgia, Michigan, Pennsylvania, Nevada and Wisconsin — crucial battlegrounds that Biden won in 2020 and states expected to be competitive again in 2024. 

Curbed by US Sanctions, Huawei Unveils New 4G Smartphones

At a March 23 product launch in Shanghai, Chinese tech giant Huawei unveiled its signature P60 series of smartphones with high-end cameras and its Mate X3 series mobile phones equipped with folding screens.

There were demonstrations. There were speeches. But something was missing from the Huawei offerings: 5G, which gives phones the speedy internet access wanted by many consumers in North America, Europe and Asia.

The smartphones also lack access to Google’s Android operating system and popular Western apps such as Google Maps.

The launch quieted “rumors that it is considering selling off its handset business, thus showcasing the company’s resilience amid U.S. government restrictions,” according to the government-affiliated China Daily.

Yu Chengdong, CEO of Huawei’s device business group, said at the event, “We have experienced four years of winter under sanctions. Now, the spring has come, and we are excited about the future.”

In 2020, Huawei briefly surpassed Apple and Samsung to become the world’s largest smartphone seller when its market share peaked at 18%, according to market tracker Canalys.

Then the Trump administration imposed successive rounds of U.S. export controls.

By 2022, Huawei had a 2% share of the global smartphone market, with most of its sales in China.

Now the Biden administration is considering banning all technology exports to Huawei.

And its smartphone business today shows how the Shenzhen-based company, a major supplier of equipment used in 5G telecommunications networks, still relies on American technology for some key components.

According to a December 2022 report by Counterpoint, a Hong Kong-based analyst firm, Huawei used up its stockpile of homegrown advanced chips for smartphones, leaving it with a market share of zero for the final three quarters of the year.

“They suffered a steep drop in profits. They have a lot of damage to the brand,” James Lewis, senior vice president, Pritzker chair and director of the Strategic Technologies Program at the Center for Strategic and International Studies, told VOA Mandarin. “I think it’s a mixed bag that Huawei was never going to give up. The Chinese government was never going to let Huawei go out of business, so they’ve found ways to keep selling things. Most of what they sell is 4G or earlier.”

Huawei founder Ren Zhengfei said in a February 24 speech that the Chinese tech giant has survived U.S. sanctions by substituting components locally.

He said, “We completed a process of redesigning over 4,000 circuit boards as well as finding local suppliers for more than 13,000 components the company needs for our products within three years.”

Paul Triolo, senior vice president for China and technology policy lead at Albright Stonebridge Group, a business consulting firm, said Cold War-era tools such as export controls carry the risk of unintended consequences.

In an email, Triolo told VOA Mandarin, “If the result of the ‘small yard, high fence’ policy over the next decade is to significantly slow technology innovation and massively incentivize the development of a large rival technology ecosystem, then the US approach will be judged to have failed, with many losers. Any short-term national security gain will be very hard if not impossible to measure while the short-term pain, particularly for US technology companies, will be substantial, as will the long-term consequences to global innovation systems.”

After Huawei was caught stealing trade secrets and evading U.S. bans on transferring technology to Iran, and was suspected, though never proved, to be an arm of the Chinese intelligence services, the U.S. began imposing a series of controls. Since 2019, these have cut off Huawei’s supply of chips from U.S. companies and its access to U.S. technology tools to design its own chips and have them manufactured by its partners.

The Biden administration is considering tightening export control measures against Huawei and completely banning all business dealings with the company, including banning exports to Huawei’s suppliers and middlemen.

For now, vendors selling less-desirable technologies such as 4G phones can still apply to the U.S. Department of Commerce for a license to do business with Huawei. The Commerce Department has approved billions of dollars in such sales from U.S. suppliers, including Intel Corp., which sells chips used in Huawei laptops, and Qualcomm Inc., which supplies chips for 4G smartphones.

Ren said in the speech last month that Huawei invested $23.8 billion in research and development in 2022. “As our profitability improves, we will continue to increase research and development expenditures.”

He added that the company has established its own enterprise resource planning system called MetaERP. Set to launch in April, it will help run its core business functions including finance, supply chain and manufacturing operations.

Lewis said Huawei had been able to circumvent some U.S. controls.

“They have a plan on how to recover, and they’re actually making it work. It doesn’t work in a lot of countries, but it works in Latin America. It works in Africa.”

This means the U.S. will need to refine its strategy on Huawei, Lewis said.

“It has to look at how does it match Huawei, how does it match China in the Southern Hemisphere,” he said. “So the Latin Americans are buying from China and from Huawei. Huawei has Africa pretty much sewn up. So, it’s really a question of how you undo that. And the answer is, you need to do it through development aid, and I don’t know if Western countries are willing to spend.”