Tesla data helped police after truck explosion; experts have privacy concerns

NEW YORK — Your car is spying on you. 

That is one takeaway from the fast, detailed data that Tesla collected on the driver of one of its Cybertrucks that exploded in Las Vegas, Nevada, earlier this week. Data privacy experts say the deep dive by Elon Musk’s company was impressive but also shines a spotlight on a difficult question as vehicles become less like cars and more like computers on wheels. 

“You might want law enforcement to have the data to crack down on criminals but can anyone have access to it?” said Jodi Daniels, CEO of privacy consulting firm Red Clover Advisors.  

Many of the latest cars not only know where you’ve been and where you are going, but also often have access to your contacts, your call logs, your texts and other sensitive information thanks to cell phone syncing. 

The data collected by Musk’s electric car company after the Cybertruck packed with fireworks burst into flames in front of the Trump International Hotel Wednesday proved valuable to police in helping track the driver’s movements. 

Within hours of the New Year’s Day explosion that burned the driver beyond recognition and injured seven people, Tesla was able to track Matthew Livelsberger’s movements in detail from Denver to Las Vegas — and confirm that the problem was explosives in the truck, not the truck itself. Tesla used data collected from charging stations and from onboard software. 

“I have to thank Elon Musk, specifically,” Las Vegas Metropolitan Police Department Sheriff Kevin McMahill told reporters.  

Some privacy experts were less enthusiastic. 

“It reveals the kind of sweeping surveillance going on,” said David Choffnes, executive director of the Cybersecurity and Privacy Institute at Northeastern University in Boston. “When something bad happens, it’s helpful, but it’s a double-edged sword. Companies that collect this data can abuse it.” 

General Motors, for instance, was sued in August by the Texas attorney general for allegedly selling data from 1.8 million drivers to insurance companies without their consent. 

Cars equipped with cameras to enable self-driving features have added a new security risk. Tesla itself came under fire after Reuters reported how employees from 2019 through 2022 shared drivers’ sensitive videos and recordings with each other, including videos of road rage incidents and, in one case, nudity. 

Tesla did not respond to emailed questions about its privacy policy. On its website, Tesla says it follows strict rules for keeping names and information private. 

“No one but you would have knowledge of your activities, location, or a history of where you’ve been,” according to a statement. “Your information is kept private and secure.” 

Auto analyst Sam Abuelsamid of Telemetry Insight said he doesn’t think Tesla is “especially worse” than other auto companies in handling customer data, but he is still concerned. 

“This is one of the biggest ethical issues we have around modern vehicles. They’re connected,” he said. “Consumers need to have control over their data.” 

Tensions were high when the Cybertruck parked at the front doors of Trump’s hotel began smoking, then burst into flames. Just hours earlier, the driver of another vehicle rented through the same peer-to-peer car rental service, Turo, had killed 15 people after slamming into a crowd in New Orleans, Louisiana, in what law enforcement is calling a terrorist attack. 

Shortly before 1 p.m., the Las Vegas police announced they were investigating a second incident. 

“The whole Tesla senior team is investigating this matter right now,” Musk wrote on X. “Will post more information as soon as we learn anything.” 

Over the next few hours, Tesla was able to piece together Livelsberger’s journey over five days and four states by tracking, among other things, his recharging stops in various locations, including Monument, Colorado, Albuquerque, New Mexico, and Flagstaff, Arizona. 

Apple to pay $95M to settle lawsuit accusing Siri of eavesdropping

Apple has agreed to pay $95 million to settle a lawsuit accusing the privacy-minded company of deploying its virtual assistant Siri to eavesdrop on people using its iPhone and other trendy devices.

The proposed settlement filed Tuesday in an Oakland, California, federal court would resolve a five-year-old lawsuit revolving around allegations that Apple surreptitiously activated Siri to record conversations through iPhones and other devices equipped with the virtual assistant for more than a decade.

The alleged recordings occurred even when people didn’t seek to activate the virtual assistant with the trigger words, “Hey, Siri.” Some of the recorded conversations were then shared with advertisers in an attempt to sell their products to consumers more likely to be interested in the goods and services, the lawsuit asserted.

The allegations about a snoopy Siri contradicted Apple’s long-running commitment to protect the privacy of its customers — a crusade that CEO Tim Cook has often framed as a fight to preserve “a fundamental human right.”

Apple isn’t acknowledging any wrongdoing in the settlement, which still must be approved by U.S. District Judge Jeffrey White. Lawyers in the case have proposed scheduling a February 14 court hearing in Oakland to review the terms.

If the settlement is approved, tens of millions of consumers who owned iPhones and other Apple devices from Sept. 17, 2014, through the end of last year could file claims. Each consumer could receive up to $20 per Siri-equipped device covered by the settlement, although the payment could be reduced or increased, depending on the volume of claims. Only 3% to 5% of eligible consumers are expected to file claims, according to estimates in court documents.

Eligible consumers will be limited to seeking compensation on a maximum of five devices.

The settlement represents a sliver of the $705 billion in profits that Apple has pocketed since September 2014. It’s also a small fraction of the roughly $1.5 billion that the lawyers representing consumers had estimated Apple could have been required to pay if the company had been found to have violated wiretapping and other privacy laws at trial. 

The attorneys who filed the lawsuit may seek up to $29.6 million from the settlement fund to cover their fees and other expenses, according to court documents.

US appeals court blocks Biden administration effort to restore net neutrality rules

WASHINGTON — A U.S. appeals court ruled on Thursday that the Federal Communications Commission did not have legal authority to reinstate landmark net neutrality rules.

The decision is a blow to the outgoing Biden administration that had made restoring the open internet rules a priority. President Joe Biden signed a 2021 executive order encouraging the FCC to reinstate the rules.

A three-judge panel of the Cincinnati-based 6th U.S. Circuit Court of Appeals said the FCC lacked authority to reinstate the rules initially implemented in 2015 by the agency under Democratic former President Barack Obama, but then repealed by the commission in 2017 under Republican former President Donald Trump.

Net-neutrality rules require internet service providers to treat internet data and users equally rather than restricting access, slowing speeds or blocking content for certain users. The rules also forbid special arrangements in which ISPs give improved network speeds or access to favored users.

The court cited the Supreme Court’s June decision in a case known as Loper Bright, which overturned a 1984 precedent that had given deference to government agencies in interpreting the laws they administer, in the latest ruling to curb the authority of federal agencies. “Applying Loper Bright means we can end the FCC’s vacillations,” the court ruled.

The decision leaves in place state net neutrality rules adopted by California and others but may end more than 20 years of efforts to give federal regulators sweeping oversight over the internet.

FCC Chair Jessica Rosenworcel called on Congress to act after the decision. “Consumers across the country have told us again and again that they want an internet that is fast, open, and fair. With this decision it is clear that Congress now needs to heed their call, take up the charge for net neutrality, and put open internet principles in federal law,” Rosenworcel said in a statement.

The FCC voted in April along party lines to reassume regulatory oversight of broadband internet and reinstate open internet rules. Industry groups filed suit and convinced the court to temporarily block the rules while it considered the case.

Incoming FCC Chair Brendan Carr voted against the reinstatement last year. He did not immediately comment on Thursday.

Former FCC Chair Ajit Pai said the court ruling should mean the end of efforts to reinstate the rules, and a focus shift to “what actually matters to American consumers – like improving Internet access and promoting online innovation.”

The Trump administration is unlikely to appeal the decision but net-neutrality advocates could seek review by the Supreme Court.

The rules would have given the FCC new tools to crack down on Chinese telecom companies and the ability to monitor internet service outages.

A group representing companies including Amazon.com, Apple, Alphabet and Meta Platforms had backed the FCC net-neutrality rules, while USTelecom, an industry group whose members include AT&T and Verizon, last year called reinstating net neutrality “entirely counterproductive, unnecessary, and an anti-consumer regulatory distraction.”

VOA Mandarin: What cards does China hold in US-China tech, trade battles?  

Beijing has launched a series of retaliatory actions against U.S. technological sanctions, including cutting off supplies of rare earth elements and punishing American companies operating in China. U.S. President-elect Donald Trump has repeatedly warned of additional tariffs on Chinese exports, and analysts believe he will further tighten technological restrictions on China. What other cards might Beijing play on the 2025 U.S.-China trade and technology battlefield? 

Click here for the full story in Mandarin.

VOA Mandarin: Quantum technology a key battleground in US-China competition 

Quantum computing is emerging as a revolutionary technology capable of solving complex problems that traditional computers cannot address. The U.S. leads in quantum innovation, driven by companies like Google and IBM, robust government funding and top-tier research institutions. China, however, has rapidly advanced through massive state-led investments, dominating global quantum patents and establishing specialized research centers. 

Click here for the full story in Mandarin.

Losing your kids to doom scrolling? Greece is building government app for that

ATHENS, GREECE — Greece announced plans on Monday to enhance parental oversight of mobile devices in 2025 through a government-operated app that will provide digital age verification and browsing controls. 

Dimitris Papastergiou, the minister of digital governance, said the Kids Wallet app, due to launch in March, was aimed at safeguarding children under the age of 15 from the risks of excessive and inappropriate internet use. 

The app will be run by a widely used government services platform and operate in conjunction with an existing smartphone app for adults to carry digital identification documents. 

“It’s a big change,” Papastergiou told reporters, adding that the app would integrate advanced algorithms to monitor usage and apply strict authentication processes. 

“The Kids Wallet application will do two main things: It will make parental control much easier, and it will be our official national tool for verifying the age of users,” he said. 

A survey published this month by Greek research organization KMOP found that 76.6% of children ages 9-12 have access to the internet via personal devices, 58.6% use social media daily, and 22.8% have encountered inappropriate content. 

Many lack awareness of basic safety tools such as the block and report buttons, authors of the study said. 

Papastergiou said the government was hoping to have the children’s app preinstalled on smartphones sold in Greece by the end of 2025. 

While facing criticism from some digital rights and religious groups, government-controlled apps and online services — many introduced during the pandemic — are generally popular in Greece, as they are seen as a way of bypassing historically slow bureaucratic procedures. 

The planned online child protection measures would go further than regulations already in place in several European countries by introducing more direct government involvement. 

They will also help hold social media platforms more accountable for enforcing age controls, Papastergiou said. 

“What’s the elephant in the room? Clearly, it’s how we define and verify a person’s age,” he said. “When you have an [online] age check, you might have a 14-year-old claiming they are 18. Or you could have someone who actually is a genuine 20-year-old. … Now we can address that.”

US Treasury: Chinese hackers remotely accessed workstations, documents

WASHINGTON — Chinese hackers remotely accessed several U.S. Treasury Department workstations and unclassified documents after compromising a third-party software service provider, the agency said Monday. 

The department did not provide details on how many workstations had been accessed or what sort of documents the hackers may have obtained, but it said in a letter to lawmakers revealing the breach that “at this time there is no evidence indicating the threat actor has continued access to Treasury information.” 

“Treasury takes very seriously all threats against our systems, and the data it holds,” the department said. “Over the last four years, Treasury has significantly bolstered its cyber defense, and we will continue to work with both private and public sector partners to protect our financial system from threat actors.” 

The department said it learned of the problem on Dec. 8 when a third-party software service provider, BeyondTrust, flagged that hackers had stolen a key used by the vendor that helped it override the system and gain remote access to several employee workstations. 

The compromised service has since been taken offline, and there’s no evidence that the hackers still have access to department information, Aditi Hardikar, an assistant Treasury secretary, said in the letter Monday to leaders of the Senate Banking Committee. 

The department said it was working with the FBI and the Cybersecurity and Infrastructure Security Agency, and that the hack had been attributed to Chinese culprits. It did not elaborate.

AI technology helps level playing field for students with disabilities

For Makenzie Gilkison, spelling is such a struggle that a word like rhinoceros might come out as “rineanswsaurs” or sarcastic as “srkastik.” 

The 14-year-old from suburban Indianapolis can sound out words, but her dyslexia makes the process so draining that she often struggles with comprehension.

“I just assumed I was stupid,” she recalled of her early grade school years. 

But assistive technology powered by artificial intelligence has helped her keep up with classmates. Last year, Makenzie was named to the National Junior Honor Society. She credits a customized AI-powered chatbot, a word prediction program and other tools that can read for her. 

“I would have just probably given up if I didn’t have them,” she said. 

New tech, countless possibilities

Artificial intelligence holds the promise of helping countless students with a range of visual, speech, language and hearing impairments to execute tasks that come easily to others. Schools everywhere have been wrestling with how and where to incorporate AI, but many are fast-tracking applications for students with disabilities. 

Getting the latest technology into the hands of students with disabilities is a priority for the U.S. Education Department, which has told schools they must consider whether students need tools like text-to-speech and alternative communication devices. New rules from the Department of Justice also will require schools and other government entities to make apps and online content accessible to those with disabilities. 

There is concern about how to ensure that students using AI — including those with disabilities — are still learning. 

Students can use artificial intelligence to organize jumbled thoughts into an outline, summarize complicated passages, or even translate Shakespeare into common English. And computer-generated voices that can read passages for visually impaired and dyslexic students are becoming less robotic and more natural. 

“I’m seeing that a lot of students are kind of exploring on their own, almost feeling like they’ve found a cheat code in a video game,” said Alexis Reid, an educational therapist in the Boston area who works with students with learning disabilities. But in her view, it is far from cheating: “We’re meeting students where they are.” 

Programs fortify classroom lessons 

Ben Snyder, a 14-year-old freshman from Larchmont, New York, who was recently diagnosed with a learning disability, has been increasingly using AI to help with homework. 

“Sometimes in math, my teachers will explain a problem to me, but it just makes absolutely no sense,” he said. “So if I plug that problem into AI, it’ll give me multiple different ways of explaining how to do that.” 

He likes a program called Question AI. Earlier in the day, he asked the program to help him write an outline for a book report — a task he completed in 15 minutes that otherwise would have taken him an hour and a half because of his struggles with writing and organization. But he does think using AI to write the whole report crosses a line. 

“That’s just cheating,” Ben said. 

Schools weigh pros, cons 

Schools have been trying to balance the technology’s benefits against the risk that it will do too much. If a special education plan sets reading growth as a goal, the student needs to improve that skill. AI can’t do it for them, said Mary Lawson, general counsel at the Council of the Great City Schools. 

But the technology can help level the playing field for students with disabilities, said Paul Sanft, director of a Minnesota-based center where families can try out different assistive technology tools and borrow devices. 

“There are definitely going to be people who use some of these tools in nefarious ways. That’s always going to happen,” Sanft said. “But I don’t think that’s the biggest concern with people with disabilities, who are just trying to do something that they couldn’t do before.” 

Another risk is that AI will track students into less rigorous courses of study. And, because it is so good at identifying patterns, AI might be able to figure out a student has a disability. Having that disclosed by AI and not the student or their family could create ethical dilemmas, said Luis Perez, the disability and digital inclusion lead at CAST, formerly the Center for Applied Specialized Technology. 

Schools are using the technology to help students who struggle academically, even if they do not qualify for special education services. In Iowa, a new law requires students deemed not proficient — about a quarter of them — to get an individualized reading plan. As part of that effort, the state’s education department spent $3 million on an AI-driven personalized tutoring program. When students struggle, a digital avatar intervenes. 

Educators anticipate more tools 

The U.S. National Science Foundation is funding AI research and development. One effort, the National AI Institute for Exceptional Education, is developing tools to help children with speech and language difficulties. It is headquartered at the University at Buffalo, which did pioneering work on handwriting recognition that helped the U.S. Postal Service save hundreds of millions of dollars by automating mail processing. 

“We are able to solve the postal application with very high accuracy. When it comes to children’s handwriting, we fail very badly,” said Venu Govindaraju, the director of the institute. He sees it as an area that needs more work, along with speech-to-text technology, which isn’t as good at understanding children’s voices, particularly if there is a speech impediment. 

Sorting through the sheer number of programs developed by education technology companies can be a time-consuming challenge for schools. Richard Culatta, CEO of the International Society for Technology in Education, said the nonprofit launched an effort this fall to make it easier for districts to vet what they are buying and ensure it is accessible. 

Mother sees potential

Makenzie wishes some of the tools were easier to use. Sometimes a feature will inexplicably be turned off, and she will be without it for a week while the tech team investigates. The challenges can be so cumbersome that some students resist the technology entirely. 

But Makenzie’s mother, Nadine Gilkison, who works as a technology integration supervisor at Franklin Township Community School Corporation in Indiana, said she sees more promise than downside. 

In September, her district rolled out chatbots to help special education students in high school. She said teachers, who sometimes struggled to provide students the help they needed, became emotional when they heard about the program. Until now, students were reliant on someone to help them, unable to move ahead on their own. 

“Now we don’t need to wait anymore,” she said. 

Elon Musk vows ‘war’ over H-1B visas in rift with Trump supporters

WEST PALM BEACH, FLORIDA — Elon Musk, the billionaire CEO of Tesla and SpaceX, vowed to go to “war” to defend the H-1B visa program for foreign tech workers late Friday amid a dispute between President-elect Donald Trump’s longtime supporters and his most recently acquired backers from the tech industry. 

In a post on social media platform X, Musk said, “The reason I’m in America along with so many critical people who built SpaceX, Tesla and hundreds of other companies that made America strong is because of H1B.” 

“I will go to war on this issue the likes of which you cannot possibly comprehend,” he added. 

Musk, a naturalized U.S. citizen born in South Africa, has held an H-1B visa, and his electric-car company Tesla obtained 724 of the visas this year. H-1B visas are typically for three-year periods, though holders can extend them or apply for green cards. 

Musk’s tweet was directed at Trump’s supporters and immigration hardliners who have increasingly pushed for the H-1B visa program to be scrapped amid a heated debate over immigration and the place of skilled immigrants and foreign workers brought into the country on work visas. 

Trump has so far remained silent on the issue. The Trump transition did not respond to a request for comment on Musk’s tweets and the H-1B visa debate. 

In the past, Trump has expressed a willingness to provide more work visas to skilled workers. He has also promised to deport all immigrants who are in the U.S. illegally, deploy tariffs to help create more jobs for American citizens, and severely restrict immigration. 

The issue highlights how tech leaders such as Musk — who has taken an important role in the presidential transition, advising on key personnel and policy areas — are now drawing scrutiny from Trump’s base. 

The U.S. tech industry relies on the government’s H-1B visa program to hire foreign skilled workers to help run its companies, a labor force that critics say undercuts wages for American citizens. 

The dispute was set off this week by far-right activists who criticized Trump’s selection of Sriram Krishnan, an Indian American venture capitalist, to be an adviser on artificial intelligence, saying he would have influence on the Trump administration’s immigration policies. 

On Friday, Steve Bannon, a longtime Trump confidant, criticized “big tech oligarchs” for supporting the H-1B program and cast immigration as a threat to Western civilization. 

In response, Musk and many other tech billionaires drew a line between what they view as legal immigration and illegal immigration. 

Musk has spent more than a quarter of a billion dollars helping Trump get elected president in November. He has posted regularly this week about the lack of homegrown talent to fill all the needed positions within American tech companies.  

Internet is rife with fake reviews – will AI make it worse?

The emergence of generative artificial intelligence tools that allow people to efficiently produce novel and detailed online reviews with almost no work has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say. 

Phony reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded on private social media groups between fake review brokers and businesses willing to pay. Sometimes, such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback. 

But AI-infused text generation tools, popularized by OpenAI’s ChatGPT, enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts. 

The deceptive practice, which is illegal in the U.S., is carried out year-round but becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews to help them purchase gifts. 

Where fakes are appearing 

Fake reviews are found across a wide range of industries, from e-commerce, lodging and restaurants to services such as home repairs, medical care and piano lessons. 

The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said it started to see AI-generated reviews show up in large numbers in mid-2023 and they have multiplied ever since. 

For a report released this month, the Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were partly or entirely AI-generated. 

“It’s just a really, really good tool for these review scammers,” said Maury Blackman, an investor and adviser to tech startups, who reviewed the Transparency Company’s work and is set to lead the organization starting Jan. 1. 

In August, software company DoubleVerify said it was observing a “significant increase” in mobile phone and smart TV apps with reviews crafted by generative AI. The reviews often were used to deceive customers into installing apps that could hijack devices or run ads constantly, the company said. 

The following month, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the marketplace with fraudulent reviews. 

The FTC, which this year banned the sale or purchase of fake reviews, said some of Rytr’s subscribers used the tool to produce hundreds and perhaps thousands of reviews for garage door repair companies, sellers of “replica” designer handbags and other businesses. 

Likely on prominent online sites, too 

Max Spero, CEO of AI detection company Pangram Labs, said the software his company uses has detected with almost certainty that some AI-generated appraisals posted on Amazon bubbled up to the top of review search results because they were so detailed and appeared to be well thought out. 

But determining what is fake or not can be challenging. External parties can fall short because they don’t have “access to data signals that indicate patterns of abuse,” Amazon has said. 

Pangram Labs has done detection for some prominent online sites, which Spero declined to name because of nondisclosure agreements. He said he evaluated Amazon and Yelp independently. 

Many of the AI-generated comments on Yelp appeared to be posted by individuals who were trying to publish enough reviews to earn an “Elite” badge, which is intended to let users know they should trust the content, Spero said. 

The badge provides access to exclusive events with local business owners. Fraudsters also want it so their Yelp profiles can look more realistic, said Kay Dean, a former federal criminal investigator who runs a watchdog group called Fake Review Watch. 

To be sure, just because a review is AI-generated doesn’t necessarily mean it’s fake. Some consumers might experiment with AI tools to generate content that reflects their genuine sentiments. Some non-native English speakers say they turn to AI to make sure they use accurate language in the reviews they write. 

“It can help with reviews [and] make it more informative if it comes out of good intentions,” said Michigan State University marketing professor Sherry He, who has researched fake reviews. She says tech platforms should focus on the behavioral patterns of bad actors, which prominent platforms already do, instead of discouraging legitimate users from turning to AI tools. 

What companies are doing 

Prominent companies are developing policies for how AI-generated content fits into their systems for removing phony or abusive reviews. Some already employ algorithms and investigative teams to detect and take down fake reviews but are giving users some flexibility to use AI. 

Spokespeople for Amazon and Trustpilot, for example, said they would allow customers to post AI-assisted reviews as long as they reflect their genuine experience. Yelp has taken a more cautious approach, saying its guidelines require reviewers to write their own copy. 

“With the recent rise in consumer adoption of AI tools, Yelp has significantly invested in methods to better detect and mitigate such content on our platform,” the company said in a statement. 

The Coalition for Trusted Reviews, which Amazon, Trustpilot, employment review site Glassdoor, and travel sites Tripadvisor, Expedia and Booking.com launched last year, said that even though deceivers may put AI to illicit use, the technology also presents “an opportunity to push back against those who seek to use reviews to mislead others.” 

“By sharing best practice and raising standards, including developing advanced AI detection systems, we can protect consumers and maintain the integrity of online reviews,” the group said. 

The FTC’s rule banning fake reviews, which took effect in October, allows the agency to fine businesses and individuals who engage in the practice. Tech companies hosting such reviews are shielded from the penalty because they are not legally liable under U.S. law for the content that outsiders post on their platforms. 

Tech companies, including Amazon, Yelp and Google, have sued fake review brokers they accuse of peddling counterfeit reviews on their sites. The companies say their technology has blocked or removed a huge swath of suspect reviews and suspicious accounts. However, some experts say they could be doing more. 

“Their efforts thus far are not nearly enough,” said Dean of Fake Review Watch. “If these tech companies are so committed to eliminating review fraud on their platforms, why is it that I, one individual who works with no automation, can find hundreds or even thousands of fake reviews on any given day?” 

Spotting fake reviews 

Consumers can try to spot fake reviews by watching out for a few possible warning signs, according to researchers. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product’s full name or model number is another potential giveaway. 

When it comes to AI, research conducted by Balazs Kovacs, a Yale University professor of organizational behavior, has shown that people can’t tell the difference between AI-generated and human-written reviews. Some AI detectors may also be fooled by shorter texts, which are common in online reviews, the study said. 

However, there are some “AI tells” that online shoppers and service seekers should keep in mind. Pangram Labs says reviews written with AI are typically longer, highly structured and include “empty descriptors,” such as generic phrases and attributes. The writing also tends to include cliches like “the first thing that struck me” and “game-changer.”

US proposes cybersecurity rules to limit impact of health data leaks

Health care organizations may be required to bolster their cybersecurity to better prevent sensitive information from being leaked by cyberattacks like the ones that hit Ascension and UnitedHealth, a senior White House official said Friday.

Anne Neuberger, the U.S. deputy national security adviser for cyber and emerging technology, told reporters that proposed requirements are necessary in light of the massive number of Americans whose data has been affected by large breaches of health care information. The proposals include encrypting data so it cannot be accessed, even if leaked, and requiring compliance checks to ensure networks meet cybersecurity rules.

The full proposed rule was posted to the Federal Register on Friday, and the Department of Health and Human Services posted a more condensed breakdown on its website.

Neuberger said the health care information of more than 167 million people was affected in 2023 as a result of cybersecurity incidents.

The proposed rule from the Office for Civil Rights (OCR) within HHS would update standards under the Health Insurance Portability and Accountability Act and would cost an estimated $9 billion in the first year, and $6 billion in years two through five, Neuberger said.

“We’ve made some significant proposals that we think will improve cybersecurity and ultimately everyone’s health information, if any of these proposals are ultimately finalized,” an OCR spokesperson told Reuters late Friday. The next step in the process is a 60-day public comment period before any final decisions will be made.

Large health care breaches caused by hacking and ransomware have increased by 89% and 102%, respectively, since 2019, Neuberger said.

“In this job, one of the most concerning and really troubling things we deal with is hacking of hospitals, hacking of health care data,” she said.

Hospitals have been forced to operate manually and Americans’ sensitive health care data, mental health information and other information are “being leaked on the dark web with the opportunity to blackmail individuals,” Neuberger said.

Trump asks court to delay possible TikTok ban until he can weigh in as president

U.S. President-elect Donald Trump asked the Supreme Court on Friday to pause the potential TikTok ban from going into effect until his administration can pursue a “political resolution” to the issue.

The request came as TikTok and the Biden administration filed opposing briefs to the court, in which the company argued the court should strike down a law that could ban the platform by January 19 while the government emphasized its position that the statute is needed to eliminate a national security risk.

“President Trump takes no position on the underlying merits of this dispute. Instead, he respectfully requests that the court consider staying the act’s deadline for divestment of January 19, 2025, while it considers the merits of this case,” said Trump’s amicus brief, which supported neither party in the case.

The filings come ahead of oral arguments scheduled for January 10 on whether the law, which requires TikTok to divest from its China-based parent company or face a ban, unlawfully restricts speech in violation of the First Amendment.

Earlier this month, a panel of three federal judges on the U.S. Court of Appeals for the District of Columbia Circuit unanimously upheld the statute, leading TikTok to appeal the case to the Supreme Court.

The brief from Trump said he opposes banning TikTok at this juncture and “seeks the ability to resolve the issues at hand through political means once he takes office.”

Massive Chinese espionage scheme hit 9th telecom firm, US says

WASHINGTON — A sprawling Chinese espionage campaign hacked a ninth U.S. telecom firm, a top White House official said Friday.

The Chinese hacking blitz known as Salt Typhoon gave officials in Beijing access to private texts and phone conversations of an unknown number of Americans. The White House earlier this month said the attack affected at least eight telecommunications companies and dozens of nations.

Anne Neuberger, the deputy national security adviser for cyber and emerging technologies, told reporters Friday that a ninth victim was identified after the administration released guidance to companies about how to hunt for Chinese culprits in their networks.

The update from Neuberger is the latest development in a massive hacking operation that alarmed national security officials, exposed cybersecurity vulnerabilities in the private sector and laid bare China’s hacking sophistication.

The hackers compromised the networks of telecommunications companies to obtain customer call records and gain access to the private communications of “a limited number of individuals.” Although the FBI has not publicly identified any of the victims, officials believe senior U.S. government officials and prominent political figures are among those whose communications were accessed.

Neuberger said officials did not yet have a precise sense of how many Americans overall were affected by Salt Typhoon, in part because the Chinese were careful about their techniques, but a “large number” were in or near Washington.

Officials believe the goal of the hackers was to identify who owned the phones and, if they were “government targets of interest,” spy on their texts and phone calls, she said.

The FBI said most of the people targeted by the hackers are “primarily involved in government or political activity.”

Neuberger said the episode highlighted the need for required cybersecurity practices in the telecommunications industry, something the Federal Communications Commission is to take up at a meeting next month.

“We know that voluntary cybersecurity practices are inadequate to protect against China, Russia and Iran hacking of our critical infrastructure,” she said.

The Chinese government has denied responsibility for the hacking.

Ukraine tech company presents latest military simulators

Russia’s invasion has pushed Ukrainian tech companies working with defense simulation technology to seriously compete in global markets. One such company is SKIFTECH, which specializes in high-tech military simulators. Iryna Solomko visited the company’s production site in Kyiv. Anna Rice narrates the story. Camera: Pavlo Terekhov

Japan Airlines suffers delays after carrier reports cyberattack

TOKYO — Japan Airlines reported a cyberattack on Thursday that caused delays to domestic and international flights but later said it had found and addressed the cause.

The airline, Japan’s second biggest after All Nippon Airways (ANA), said 24 domestic flights had been delayed by more than half an hour.

Public broadcaster NHK said problems with the airline’s baggage check-in system had caused delays at several Japanese airports but no major disruption was reported.

“We identified and addressed the cause of the issue. We are checking the system recovery status,” Japan Airlines (JAL) said in a post on social media platform X.

“Sales for both domestic and international flights departing today have been suspended. We apologize for any inconvenience caused,” the post said.

A JAL spokesperson told AFP earlier the company had been subjected to a cyberattack.

Japanese media said it may have been a so-called DDoS attack aimed at overwhelming and disrupting a website or server.

Network disruption began at 7:24 a.m. Thursday (2224 GMT Wednesday), JAL said in a statement, adding that there was no impact on the safety of its operations.

Then “at 8:56 a.m., we temporarily isolated the router (a device for exchanging data between networks) that was causing the disruption,” it said.

JAL shares fell as much as 2.5% in morning trade after the news emerged, before recovering slightly.

The airline is just the latest Japanese firm to be hit by a cyberattack.

Japan’s space agency JAXA was targeted in 2023, although no sensitive information about rockets or satellites was accessed.

The same year one of Japan’s busiest ports was hit by a ransomware attack blamed on the Russia-based Lockbit group.

In 2022, a cyberattack at a Toyota supplier forced the top-selling automaker to halt operations at domestic plants.

More recently, the popular Japanese video-sharing website Niconico came under a large cyberattack in June.

Report on January collision

Separately, a transport ministry committee tasked with probing a fatal January 2024 collision involving a JAL passenger jet released an interim report on Wednesday blaming human error for the incident that killed five people.

The collision at Tokyo’s Haneda Airport was with a coast guard plane carrying six crew members — five of whom were killed — that was on a mission to deliver relief supplies to a quake-hit central region of Japan.

According to the report, the smaller plane’s pilot mistook an air traffic control officer’s instructions to mean authorization had been given to enter the runway.

The captain was also “in a hurry” at the time because the coast guard plane’s departure was 40 minutes behind schedule, the report said.

The traffic controller failed to notice the plane had intruded into the runway, oblivious even to an alarm system warning against its presence.

All 379 people on board the JAL Airbus escaped just before the aircraft was engulfed in flames.

Iran cyberspace council votes to lift ban on WhatsApp

TEHRAN, IRAN — Iran’s top council responsible for safeguarding the internet voted Tuesday to lift a ban on the popular messaging application WhatsApp, which has been subject to restrictions for over two years, state media reported. 

“The ban on WhatsApp and Google Play was removed by unanimous vote of the members of the Supreme Council of Cyberspace,” the official IRNA news agency said. 

The council is headed by the president, and its members include the parliament speaker, the head of the judiciary and several ministers. 

It was not immediately clear when the decision would come into force. 

‘Restrictions … achieved nothing but anger’

The move has sparked a debate in Iran, with critics of the restrictions arguing the controls were costly for the country.  

“The restrictions have achieved nothing but anger and added costs to people’s lives,” presidential adviser Ali Rabiei said on X Tuesday. 

“President Masoud Pezeshkian believes in removing restrictions and does not consider the bans to be in the interest of the people and the country. All experts also believe that this issue is not beneficial to the country’s security,” Vice President Mohammad Javad Zarif said on Tuesday. 

Lifting restrictions ‘a gift to enemies’

Others, however, warned against lifting the restrictions.  

The reformist Shargh daily on Tuesday reported that 136 lawmakers in the 290-member parliament sent a letter to the council saying the move would be a “gift to [Iran’s] enemies.”  

The lawmakers called for allowing access to restricted online platforms only “if they are committed to the values of Islamic society and comply with the laws of” Iran.  

Iranian officials have in the past called for the foreign companies that own popular international apps to open representative offices in Iran. 

Meta, the American giant that owns Facebook, Instagram and WhatsApp, has said it had no intention of setting up offices in the Islamic republic, which remains under U.S. sanctions. 

Iranians have over the years grown accustomed to using virtual private networks, or VPNs, to bypass internet restrictions.  

Other popular social media platforms, including Facebook, X and YouTube, remain blocked after being banned in 2009. 

Telegram was also banned by a court order in April 2018. 

Instagram and WhatsApp were added to the list of blocked applications following nationwide protests that erupted after the September 2022 death in custody of Mahsa Amini.  

Amini, a 22-year-old Iranian Kurd, was arrested for an alleged breach of Iran’s dress code for women. 

Hundreds of people, including dozens of security personnel, were killed in the subsequent months-long nationwide protests, and thousands of demonstrators were arrested. 

Pezeshkian, who took office in July, had vowed during his campaign to ease the long-standing internet restrictions. 

In the past several years, Iran has introduced domestic applications to supplant popular foreign ones. 

VOA Mandarin: Trump’s new AI policy seeks to loosen regulations, support innovation, defeat China

U.S. President-elect Donald Trump has vowed to repeal President Joe Biden’s executive order on artificial intelligence security, setting the stage for deregulation of AI companies by nominating pro-business, pro-startup Silicon Valley leaders.

The nomination of Jacob Helberg, an outspoken China critic, for a key State Department post indicates Trump’s intention to lead over China in AI, according to analysts.

“We’re likely to see quite a great focus on countering China when it comes to AI – beating China when it comes to having the most advanced AI capabilities,” says Ruby Scanlon, a researcher on technology and national security at the Center for a New American Security.

Click here for the full story in Mandarin.

Albanian PM says TikTok ban was not ‘rushed reaction to a single incident’

TIRANA, ALBANIA — Albania’s prime minister said Sunday the ban on TikTok his government announced a day earlier was “not a rushed reaction to a single incident.”

Edi Rama said Saturday the government will shut down TikTok for one year, accusing the popular video service of inciting violence and bullying, especially among children.

Authorities have held 1,300 meetings with teachers and parents since the November stabbing death of a teenager by another teen after a quarrel that started on social media apps. Ninety percent of those teachers and parents approve of the ban on TikTok.

“The ban on TikTok for one year in Albania is not a rushed reaction to a single incident, but a carefully considered decision made in consultation with parent communities in schools across the country,” said Rama.

Following Tirana’s decision, TikTok asked for “urgent clarity from the Albanian government” in the case of the stabbed teenager. The company said it had “found no evidence that the perpetrator or victim had TikTok accounts, and multiple reports have in fact confirmed videos leading up to this incident were being posted on another platform, not TikTok.”

“To claim that the killing of the teenage boy has no connection to TikTok because the conflict didn’t originate on the platform demonstrates a failure to grasp both the seriousness of the threat TikTok poses to children and youth today and the rationale behind our decision to take responsibility for addressing this threat,” Rama said.

“Albania may be too small to demand that TikTok protect children and youth from the frightening pitfalls of its algorithm,” he said, blaming TikTok for “the reproduction of the unending hell of the language of hatred, violence, bullying and so on.”

Albanian children comprise the largest group of TikTok users in the country, according to domestic researchers.

Many youngsters in Albania did not approve of the ban.

“We disclose our daily life and entertain ourselves, that is, we exploit it during our free time,” said Samuel Sulmani, an 18-year-old in the town of Rreshen, 75 kilometers north of the capital Tirana, on Sunday. “We do not agree with that because that’s a deprivation for us.”

But Albanian parents have been increasingly concerned following reports of children taking knives and other objects to school to use in quarrels or cases of bullying promoted by stories they see on TikTok.

“Our decision couldn’t be clearer: Either TikTok protects the children of Albania, or Albania will protect its children from TikTok,” Rama said.

Albania to shut down TikTok for 1 year, says platform promotes violence among children

TIRANA, ALBANIA — Albania’s prime minister said Saturday the government will shut down the video service TikTok for one year, blaming it for inciting violence and bullying, especially among children. 

Albanian authorities held 1,300 meetings with teachers and parents following the stabbing death of a teenager in mid-November by another teen after a quarrel that started on TikTok. 

Prime Minister Edi Rama, speaking at a meeting with teachers and parents, said TikTok “would be fully closed for all. … There will be no TikTok in the Republic of Albania.” Rama said the shutdown would begin sometime next year. 

It was not immediately clear if TikTok has a representative in Albania. 

In an email response Saturday to a request for comment, TikTok asked for “urgent clarity from the Albanian government” on the case of the stabbed teenager. The company said it had “found no evidence that the perpetrator or victim had TikTok accounts, and multiple reports have in fact confirmed videos leading up to this incident were being posted on another platform, not TikTok.” 

Albanian children comprise the largest group of TikTok users in the country, according to domestic researchers. 

There has been increasing concern from Albanian parents after reports of children taking knives and other objects to school to use in quarrels or cases of bullying promoted by stories they see on TikTok. 

TikTok’s operations in China, where its parent company is based, are different, “promoting how to better study, how to preserve nature … and so on,” according to Rama. 

Albania is too small a country to impose on TikTok a change of its algorithm so that it does not promote “the reproduction of the unending hell of the language of hatred, violence, bullying and so on,” Rama’s office wrote in an email response to The Associated Press’ request for comment. Rama’s office said that in China TikTok “prevents children from being sucked into this abyss.” 

Authorities have set up a series of protective measures at schools, starting with an increased police presence, training programs and closer cooperation with parents. 

Rama said Albania would follow how the company and other countries react to the one-year shutdown before deciding whether to allow the company to resume operations in Albania. 

Not everyone agreed with Rama’s decision to close TikTok. 

“The dictatorial decision to close the social media platform TikTok … is a grave act against freedom of speech and democracy,” said Ina Zhupa, a lawmaker of the main opposition Democratic Party. “It is a pure electoral act and abuse of power to suppress freedoms.” 

Albania holds parliamentary elections next year. 

Trump wants US to dominate AI as industry weighs benefits, risks

Generative artificial intelligence companies are racing to build on the popularity of programs like ChatGPT, but AI regulation has not kept pace with the technology. Now, an incoming administration could favor U.S. domination over risk mitigation. Tina Trinh reports.

US slow to react to pervasive Chinese hacking, experts say

As new potential threats from Chinese hackers were identified this week, the federal government issued one of its strongest warnings to date about the need for Americans — and in particular government officials and other “highly targeted” individuals — to secure their communications against eavesdropping and interception.

The warning came as news was breaking about a Commerce Department investigation into the possibility that computer network routers manufactured by the Chinese firm TP-Link may pose a threat to the millions of U.S. businesses, households and government agencies that use them.

Also on Wednesday, Congress took long-awaited steps toward funding a program that will purge other Chinese technology from U.S. telecommunications systems. The so-called rip-and-replace program targets gear manufactured by Chinese firms Huawei and ZTE.

Too far behind

While experts said the recent actions are a step in the right direction, they warned that U.S. policymakers have been extremely slow to react to a mountain of evidence that Chinese hackers have long been targeting essential communications and infrastructure systems in the U.S.

The lack of action has persisted despite law enforcement and intelligence agencies repeatedly sounding alarms.

In January, while testifying before the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party, FBI Director Christopher Wray said, “There has been far too little public focus on the fact that [People’s Republic of China] hackers are targeting our critical infrastructure — our water treatment plants, our electrical grid, our oil and natural gas pipelines, our transportation systems. And the risk that poses to every American requires our attention now.”

A year previously, Wray had warned lawmakers on the House Appropriations Committee that his investigators were badly outnumbered.

“To give you a sense of what we’re up against, if each one of the FBI’s cyber agents and intel analysts focused exclusively on the China threat, Chinese hackers would still outnumber FBI Cyber personnel by at least 50-to-1,” Wray said.

Decades of complexity

Part of the problem, experts said, is that it is difficult for policymakers to summon the political will to make changes that could be disruptive to the lives and livelihoods of U.S. citizens in the absence of public concern about the problem.

“It still remains very, very difficult to impress upon average, typical everyday citizens the gravity of Chinese espionage, or the extent of it,” said Bill Drexel, a fellow with the Technology and National Security Program at the Center for a New American Security.

He contrasted the relatively muted public response to the recent revelation of a Chinese hacking operation known as Salt Typhoon, which compromised mobile telephone networks throughout the country, with the uproar that accompanied the far less serious appearance of a Chinese spy balloon over the U.S. mainland in 2023.

“That just goes to show this … problem where really grave issues that are intangible — that are just in cyberspace — are really hard to wrap our minds around,” Drexel told VOA.

“For four decades, we intertwined our supply chains very deeply with China, and our digital systems became more and more complex, allowing more and more compounding ways to be hacked, to be compromised,” Drexel said.

“We’ve just started to try to change course on this stuff,” he added. “But there’s so much momentum for so long on these issues, and they continue to compound in complexity, such that it’s just really hard to catch up.”

Warning ‘highly targeted’ Americans

The Cybersecurity and Infrastructure Security Agency (CISA) issued guidance on Wednesday, reporting that it “has identified cyber espionage activity by People’s Republic of China (PRC) government-affiliated threat actors targeting commercial telecommunications infrastructure.”

It continued, “This activity enabled the theft of customer call records and the compromise of private communications for a limited number of highly targeted individuals.”

The warning appeared to be related to the Salt Typhoon hack that, according to government investigators, compromised all the major mobile phone carriers in the U.S., giving the Chinese government extraordinary access to the communications among millions of Americans.

The five-page CISA document outlines steps that the agency advises all Americans, but particularly those most likely to be targeted, to take immediately.

The first is to immediately curtail use of standard mobile communications platforms, such as voice calls and Short Message Service (SMS) texting. Instead, the agency advises Americans to restrict their communications to free messaging platforms that offer end-to-end encryption, such as Signal, which support one-on-one and group chats, as well as voice and video calls. Data sent with end-to-end encryption is extremely difficult to decrypt, even if a malicious actor is able to intercept it during transmission.

Among the other advice CISA offered was to avoid using SMS messages for multifactor authentication by switching to apps that provide authenticator codes or, where possible, adopting hardware-based security keys for highly sensitive accounts. Other recommendations included the use of complex and random passwords stored in password manager software, as well as platform-specific suggestions for iPhone and Android users.

TP-Link concerns

On Wednesday, The Wall Street Journal reported, and other outlets subsequently confirmed, that the Commerce, Justice and Defense departments are investigating reports that computer routers manufactured by Shenzhen-based TP-Link are one vector of attack for Chinese hackers.

TP-Link currently dominates the market for computer routers in the U.S., with nearly two-thirds of total market share. In October, a report from Microsoft revealed that a Chinese hacking operation it identified as CovertNetwork-1658 had compromised thousands of TP-Link routers to create a network used by “multiple Chinese threat actors” to gain illicit access to computer networks around the world.

The Journal’s reporting also revealed that the Commerce Department is considering a ban on the sale of TP-Link routers in the U.S. next year, an action that could significantly disrupt the U.S. market for networking hardware.

Rip and replace

Congress on Wednesday took long-delayed action to address a different potential threat from China, allocating $3 billion to a program that will remove telecommunications equipment manufactured by Huawei and ZTE from rural telecommunications networks in the U.S.

Funding for the rip-and-replace program arrives years after the U.S. identified the two companies as posing a potential threat.

Beginning in the first Trump administration and continuing during Joe Biden’s time in office, the U.S. pressured allies around the world to keep Huawei and ZTE 5G cellular communications equipment out of their networks, in some cases threatening to stop sharing sensitive intelligence with allies that failed to comply.

Bluesky could become target of foreign disinformation, experts warn

WASHINGTON — Experts on cybersecurity and online foreign influence campaigns are urging social media company Bluesky, whose app has exploded in popularity in recent weeks, to step up moderation to counter potential state-sponsored influence efforts.

Over the past month, Bluesky, a microblogging platform with roots in Twitter, has seen one of its biggest surges in new user registrations since it was publicly released in February. More than 25 million users are now on the platform, close to half of whom joined after the 2024 U.S. presidential election.

Rose Wang, Bluesky’s chief operating officer, said in a recent interview that Bluesky does not intend to push any political ideologies.

“We have no political viewpoint that we are trying to promote,” she said in early December.

Exploiting users’ political leanings

Many who joined Bluesky have cited user experience as one of the reasons for migrating from social media platform X. They also have said they joined the platform after Election Day because they are critics of Elon Musk and President-elect Donald Trump. Some commentators in the U.S. have questioned whether Bluesky is risking becoming an echo chamber of the left.

Some experts contend the platform’s liberal-leaning users could be exploited by foreign propagandists. Joe Bodnar, who tracks foreign influence operations for the Institute for Strategic Dialogue, told VOA Mandarin that Russian propaganda often appeals to the anti-establishment left in the U.S. on contentious topics, like Gaza, gun violence and America’s global dominance.

“The Kremlin wants to make those arguments even louder,” Bodnar said. “Sometimes that means they play to the left.”

So far, at least three accounts that belong to RT, a Russia-controlled media outlet, have joined Bluesky. Sputnik Brazil is also actively posting on the platform.

VOA Mandarin found that at least two Chinese accounts that belong to state broadcaster CGTN have joined the platform.

Bluesky does not assign verification labels. One way to authenticate an account is for the person or organization to link it to the domain of its official website.
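In practice, that link is made through ordinary DNS or a well-known web file: an account claiming a domain as its handle publishes its decentralized identifier (DID) in a TXT record at _atproto.<domain>, which anyone can look up. The sketch below shows roughly how such a check could be performed with the dnspython library; the domain and DID are placeholders, and this is an illustration rather than Bluesky’s own verification code.

```python
# Rough check of Bluesky-style domain verification with dnspython 2.x
# (pip install dnspython). The handle's domain publishes a TXT record at
# _atproto.<domain> whose value is "did=<the account's DID>"; matching that
# value against the DID the account advertises ties the handle to the website.
import dns.resolver

def resolve_handle_did(domain: str) -> str | None:
    """Return the DID published for `domain`, or None if no record is found."""
    try:
        answers = dns.resolver.resolve(f"_atproto.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("did="):
            return text[len("did="):]
    return None

# Placeholder values for illustration only.
claimed_did = "did:plc:exampleexampleexample"
published = resolve_handle_did("example.com")
print("handle verified" if published == claimed_did else "no match")
```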

There are at least four other accounts that claim to be Chinese state media outlets, including China Daily, the Global Times and People’s Daily. None of the three publications replied to VOA’s emails inquiring about these accounts’ authenticity.

Additionally, Beijing has played heavily to the Western left on certain global issues. China has consistently called for a ceasefire in Gaza and blamed the West for supporting Israel.

But those familiar with Chinese and Russian state media say Bluesky’s left-leaning user base actually could give Beijing and Moscow a hard time pushing their narratives.

“Bluesky isn’t the most hospitable place for Russian narratives,” Bodnar said.

Sean Haines, a British national who used to work for Chinese state media outlets, shared similar opinions in a recent blog post about Bluesky.

“With its predominately Western liberal leaning, the platform also will be an uphill challenge for those looking to push overtly nationalistic viewpoints,” he wrote.

Most of the Chinese and Russian state media accounts have only hundreds of followers; RT en Espanol tops the list with nearly 7,000.

Could ‘decentralization’ be detrimental?

China and Russia have been finding ways to reach the American public through covert disinformation operations on social media. During this year’s election, disinformation campaigns connected to China and Russia promoted claims that cast doubt on the integrity of the voting process.

Similar tactics could soon be coming to Bluesky.

“I don’t think Bluesky is more vulnerable to influence campaigns than X or other social networks,” Jennifer Victoria Scurrell, a researcher on AI-supported influence operations, told VOA Mandarin. But Scurrell, of ETH Zurich’s Center for Security Studies, said Bluesky’s decentralized moderation approach is flawed.

Jack Dorsey, a co-founder of Twitter, started Bluesky as an internal Twitter project to give users more power over moderation. Bluesky became an independent company in 2021.

“Our mission is to develop and drive large-scale technologies of open and decentralized public conversation,” the company says on its website.

To do that, Bluesky “decentralized” its moderation authority, giving users tools to customize their experience on the site.

Bluesky offers a universal basic moderation setting for every user, which labels content such as extremism, misinformation, fake accounts and adult content. Users can choose whether to see the labeled content, and they can report content or accounts they believe violate the platform’s guidelines.

On top of that, users get to create their own moderation settings to label or filter out certain content and accounts. Other users can subscribe to these customized settings, should they choose.
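Conceptually, each of those community-run moderation services is a stream of labels that subscribers can opt into, with each subscriber deciding whether a given label hides a post or merely flags it. The following is a simplified, hypothetical model of how such subscriptions might compose; it is plain Python for illustration, not Bluesky’s actual AT Protocol code, and every name in it is invented.

```python
# Simplified model of composable, subscriber-chosen moderation labels.
# This illustrates the concept only; it is not Bluesky's implementation.
from dataclasses import dataclass, field

@dataclass
class Labeler:
    """A moderation service that tags posts with labels such as 'state-affiliated'."""
    name: str
    rules: dict[str, str]  # substring -> label (toy stand-in for real classifiers)

    def label(self, post: str) -> set[str]:
        return {lab for needle, lab in self.rules.items() if needle in post.lower()}

@dataclass
class UserSettings:
    """A user's chosen labelers and how each label should be treated."""
    subscriptions: list[Labeler] = field(default_factory=list)
    actions: dict[str, str] = field(default_factory=dict)  # label -> "hide" or "warn"

    def evaluate(self, post: str) -> str:
        labels: set[str] = set()
        for labeler in self.subscriptions:
            labels |= labeler.label(post)
        if any(self.actions.get(lab) == "hide" for lab in labels):
            return "hidden"
        if any(self.actions.get(lab) == "warn" for lab in labels):
            return "shown with warning"
        return "shown"

# One community-run labeler; the user subscribes and decides what its labels do.
state_media = Labeler("state-media-watch", {"rt.com": "state-affiliated"})
me = UserSettings(subscriptions=[state_media], actions={"state-affiliated": "warn"})
print(me.evaluate("New report via RT.com on the conflict"))  # shown with warning
```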

Scurrell, who helps test security weaknesses for OpenAI as a contractor, told VOA Mandarin the decentralized approach to moderation could be a double-edged sword.

“Societal values are diverse, contextual and local, which makes decentralized moderation an appealing concept,” she wrote in her replies to VOA.

She warned that outsourcing content moderation to users, though, “raises serious concerns” because the approach would give bad actors the same amount of power as normal users.

“What happens if an entire node is taken over by malicious actors spreading disinformation or manipulative content,” she wrote, or “if the system gets hijacked by an army of bots?”

VOA Mandarin emailed Bluesky a list of detailed questions about its moderation policy against potential foreign influence attempts but did not receive a response.

Experts have urged Bluesky to implement measures to counter potential foreign influence campaigns.

In a recent blog post, Sarah Cook, an independent China watcher and former China director at Freedom House, urged Bluesky to label state media accounts, a practice employed by many social media companies, so users know of these accounts’ ties to foreign governments.

Eugenio Benincasa, an expert on Chinese cyber threats at ETH Zurich, asserts that studying how Chinese tech companies help Beijing surveil social media platforms and manipulate online discussions can help Bluesky better prepare.

“It is crucial to thoroughly study the evolving influence tactics enabled by tools like public opinion monitoring systems to identify vulnerabilities that may have been overlooked or are emerging, in order to develop effective safeguards,” Benincasa said.

US cyber watchdog seeks switch to encrypted apps following ‘Salt Typhoon’ hacks

WASHINGTON — The U.S. cybersecurity watchdog CISA is telling senior American government officials and politicians to immediately switch to end-to-end encrypted messaging following intrusions at major American telecoms blamed on Chinese hackers. 

In written guidance released on Wednesday, the Cybersecurity and Infrastructure Security Agency said “individuals who are in senior government or senior political positions” should “immediately review and apply” a series of best practices around the use of mobile devices. 

The first recommendation: “Use only end-to-end encrypted communications.” 

End-to-end encryption — a data protection technique that aims to make data unreadable by anyone except its sender and its recipient — is baked into various chat apps, including Meta Platforms’ WhatsApp, Apple’s iMessage, and the privacy-focused app Signal. Corporate offerings that support end-to-end encryption include Microsoft’s Teams and Zoom Communications’ meetings.

CISA’s message is the latest in a series of increasingly stark warnings issued by American officials in the wake of dramatic hacks of U.S. telecom companies by a group dubbed “Salt Typhoon.” 

Last week, Democratic Senator Ben Ray Lujan said, “this attack likely represents the largest telecommunications hack in our nation’s history.” 

U.S. officials have blamed China for the hacking. Beijing routinely denies allegations of cyberespionage. 

US Supreme Court to consider TikTok bid to halt ban

WASHINGTON — The U.S. Supreme Court decided on Wednesday to hear a bid by TikTok and its China-based parent company, ByteDance, to block a law intended to force the sale of the short-video app by January 19 or face a ban on national security grounds. 

The justices did not immediately act on an emergency request from TikTok, ByteDance and some of the app’s users who post content on the platform for an injunction to halt the looming ban, opting instead to hear arguments on the matter on January 10.

The challengers are appealing a lower court’s ruling that upheld the law. TikTok is used by about 170 million Americans. 

Congress passed the measure in April and President Joe Biden, a Democrat, signed it into law. The Justice Department had said that as a Chinese company, TikTok poses “a national-security threat of immense depth and scale” because of its access to vast amounts of data on American users, from locations to private messages, and its ability to secretly manipulate content that Americans view on the app. TikTok has said it poses no imminent threat to U.S. security.  

TikTok and ByteDance asked the Supreme Court on December 16 to pause the law, which they said violates free speech protections under the U.S. Constitution’s First Amendment.  

TikTok on Wednesday said it was pleased the court will take up the issue. “We believe the court will find the TikTok ban unconstitutional so the over 170 million Americans on our platform can continue to exercise their free speech rights,” the company said. 

The companies said that being shuttered for even one month would cause TikTok to lose about a third of its U.S. users and undermine its ability to attract advertisers and recruit content creators and employee talent. 

The U.S. Court of Appeals for the District of Columbia Circuit in Washington on December 6 rejected the First Amendment arguments by the companies.  

In their filing to the Supreme Court, TikTok and ByteDance said that “if Americans, duly informed of the alleged risks of ‘covert’ content manipulation, choose to continue viewing content on TikTok with their eyes wide open, the First Amendment entrusts them with making that choice, free from the government’s censorship.” 

Senate Republican leader Mitch McConnell on Wednesday, in a brief filed with the Supreme Court, urged the court to reject any delay, comparing TikTok to a hardened criminal. 

A U.S. ban on TikTok would make the company far less valuable to ByteDance and its investors, and hurt businesses that depend on TikTok to drive their sales. 

Republican President-elect Donald Trump, who unsuccessfully tried to ban TikTok during his first term in the White House in 2020, has reversed his stance and promised during the presidential race this year that he would try to save TikTok. Trump said on December 16 that he has “a warm spot in my heart for TikTok” and that he would “take a look” at the matter.

Trump takes office on January 20, the day after the TikTok deadline under the law. 

In its decision, the D.C. Circuit wrote, “The First Amendment exists to protect free speech in the United States. Here the government acted solely to protect that freedom from a foreign adversary nation and to limit that adversary’s ability to gather data on people in the United States.” 

TikTok has denied that it has shared or ever would share U.S. user data, accusing U.S. lawmakers in the lawsuit of advancing speculative concerns. It has characterized the ban as a “radical departure from this country’s tradition of championing an open Internet.”

The dispute comes at a time of growing trade tensions between the world’s two biggest economies, after the Biden administration placed new restrictions on the Chinese chip industry and China responded by banning exports to the United States of gallium, germanium and antimony, metals used in making high-tech microchips.

The U.S. law would bar providing certain services to TikTok and other foreign adversary-controlled apps, including offering them through app stores such as those run by Apple and Alphabet’s Google, effectively preventing TikTok’s continued use in the U.S. unless ByteDance divests it by the deadline.

An unimpeded ban could open the door to a future crackdown on other foreign-owned apps. In 2020, Trump had also tried to ban WeChat, owned by Chinese company Tencent, but was blocked by the courts.

Senators urge US House to pass Kids Online Safety Act

A bipartisan effort to protect children from the harms of social media is running out of time in this session of the U.S. Congress. If passed, the Kids Online Safety Act would institute safeguards for minors’ personal data online. But free speech advocates and some Republicans are concerned the bill could lead to censorship. VOA’s Congressional Correspondent Katherine Gypson has more. Kim Lewis contributed to this story.

Congo files criminal complaints against Apple in Europe over conflict minerals

PARIS — The Democratic Republic of Congo has filed criminal complaints against Apple subsidiaries in France and Belgium, accusing the tech firm of using conflict minerals in its supply chain, lawyers for the Congolese government told Reuters.

Congo is a major source of tin, tantalum and tungsten, so-called 3T minerals used in computers and mobile phones. But some artisanal mines are run by armed groups involved in massacres of civilians, mass rapes, looting and other crimes, according to U.N. experts and human rights groups. 

Apple does not directly source primary minerals and says it audits suppliers, publishes findings and funds bodies that seek to improve mineral traceability. 

Apple last year said it had “no reasonable basis for concluding” its products contain illegally exported minerals from conflict-hit zones. The tech giant has insisted it carefully verifies the origin of materials in its output. 

Its 2023 filing on conflict minerals to the U.S. Securities and Exchange Commission said none of the smelters or refiners of 3T minerals or gold in its supply chain had financed or benefited armed groups in Congo or neighboring countries. 

But international lawyers representing Congo argue that Apple uses minerals pillaged from Congo and laundered through international supply chains, which they say renders the firm complicit in crimes taking place in Congo. 

In parallel complaints filed to the Paris prosecutor’s office and to a Belgian investigating magistrate’s office on Monday, Congo accuses local subsidiaries Apple France, Apple Retail France and Apple Retail Belgium of a range of offenses. 

These include covering up war crimes and the laundering of tainted minerals, handling stolen goods, and carrying out deceptive commercial practices to assure consumers supply chains are clean. 

“It is clear that the Apple group, Apple France and Apple Retail France know very well that their minerals supply chain relies on systemic wrongdoing,” says the French complaint, after citing U.N. and rights reports on conflict in east Congo. 

Belgium had a particular moral duty to act because looting of Congo’s resources began during the 19th-century colonial rule of its King Leopold II, Congo’s Belgian lawyer Christophe Marchand said. 

“It is incumbent on Belgium to help Congo in its effort to use judicial means to end the pillaging,” he said. 

The complaints, prepared by the lawyers on behalf of Congo’s justice minister, make allegations not just against the local subsidiaries but against the Apple group as a whole. 

France and Belgium were chosen because of their perceived strong emphasis on corporate accountability. Judicial authorities in both nations will decide whether to investigate the complaints further and bring criminal charges. 

In an unrelated case in March, a U.S. federal court rejected an attempt by private plaintiffs to hold Apple, Google, Tesla, Dell and Microsoft accountable for what the plaintiffs described as their dependence on child labor in Congolese cobalt mines. 

Minerals fuel violence 

Since the 1990s, Congo’s mining heartlands in the east have been devastated by waves of fighting between armed groups, some backed by neighboring Rwanda, and the Congolese military. 

Millions of civilians have died and been displaced. 

Competition for minerals is one of the main drivers of conflict as armed groups sustain themselves and buy weapons with the proceeds of exports, often smuggled via Rwanda, according to U.N. experts and human rights organizations. 

Rwanda denies benefiting from the trade, dismissing the allegations as unfounded. 

Among the appendices to Congo’s legal complaint in France was a statement issued by the U.S. State Department in July, expressing concerns about the role of the illicit trade in minerals from Congo, including tantalum, in financing conflict. 

The statement was a response to requests from the private sector for the U.S. government to clarify potential risks associated with manufacturing products using minerals extracted, transported or exported from eastern Congo, Rwanda and Uganda. 

Congo’s complaints focus on ITSCI, a metals industry-funded monitoring and certification scheme designed to help companies perform due diligence on suppliers of 3T minerals exported from Congo, Rwanda, Burundi and Uganda. 

Congo’s lawyers argue that ITSCI has been discredited, including by the Responsible Minerals Initiative (RMI) of which Apple is a member, and that Apple nevertheless uses ITSCI as a fig leaf to falsely present its supply chain as clean. 

The RMI, whose members include more than 500 companies, announced in 2022 it was removing ITSCI from its list of approved traceability schemes. 

In July, it said it was prolonging the suspension until at least 2026, saying ITSCI had not provided field observations from high-risk sites or explained how it was responding to an escalation of violence in North Kivu province, which borders Rwanda and is a key 3T mining area. 

ITSCI criticized the RMI’s own processes and defended its work in Congo as reliable. It has also rejected allegations in a 2022 report by campaigning group Global Witness entitled “The ITSCI Laundromat,” cited in Congo’s legal complaint in France, that it was complicit in the false labeling of minerals from conflict zones as coming from mines located in peaceful areas. 

Apple mentioned ITSCI five times in its 2023 filing on conflict minerals. The filing also made multiple mentions of the RMI, in which Apple said it had continued active participation and leadership, but it did not mention the RMI’s removal of ITSCI from its list of approved schemes.

In its July statement, the U.S. State Department said flaws in traceability schemes have not garnered sufficient engagement and attention to bring about the changes needed.

Robert Amsterdam, a U.S.-based lawyer for Congo, said the French and Belgian complaints were the first criminal complaints by the Congolese state against a major tech company, describing them as a “first salvo” only. 

Some information for this report came from Agence France-Presse. 

EU investigates TikTok over Romanian presidential election

LONDON — European Union regulators said Tuesday they’re investigating whether TikTok breached the bloc’s digital rulebook by failing to deal with risks to Romania’s presidential election, which has been thrown into turmoil over allegations of electoral violations and Russian meddling.

The European Commission is escalating its scrutiny of the popular video-sharing platform after Romania’s top court canceled results of the first round of voting that resulted in an unknown far-right candidate becoming the front-runner.

The court made its unprecedented decision after authorities in the European Union and NATO member country declassified documents alleging Moscow organized a sprawling social media campaign to promote a long-shot candidate, Calin Georgescu.

“Following serious indications that foreign actors interfered in the Romanian presidential elections by using TikTok, we are now thoroughly investigating whether TikTok has violated the Digital Services Act by failing to tackle such risks,” European Commission President Ursula von der Leyen said in a press release. “It should be crystal clear that in the EU, all online platforms, including TikTok, must be held accountable.”

The European Commission is the 27-nation European Union’s executive arm and enforces the bloc’s Digital Services Act, a sweeping set of regulations intended to clean up social media platforms and protect users from risks such as election-related misinformation. It ordered TikTok earlier this month to retain all information related to the election.

In the first round of voting on November 24, Georgescu was an outsider among the 13 candidates but ended up topping the polls. He was due to face a pro-EU reformist rival in a runoff before the court canceled the results.

The declassified files alleged that there was an “aggressive promotion campaign” to boost Georgescu’s popularity, including payments worth a total of $381,000 to TikTok influencers to promote him on the platform.

TikTok said it has “protected the integrity” of its platform through more than 150 elections around the world and is continuing to address these “industry-wide challenges.”

“TikTok has provided the European Commission with extensive information regarding these efforts, and we have transparently and publicly detailed our robust actions,” it said in a statement.

The commission said its investigation will focus on TikTok’s content recommendation systems, especially on risks related to “coordinated inauthentic manipulation or automated exploitation.” It’s also looking at TikTok’s policies on political advertisements and “paid-for political content.”

TikTok said it doesn’t accept paid political ads and “proactively” removes content for violating policies on misinformation.

The investigation could result in TikTok making changes to fix the problems, or in fines of up to 6% of the company’s total global revenue.

Incoming FCC chair is big tech critic who worries about China

President-elect Donald Trump has nominated Brendan Carr to lead the Federal Communications Commission, which regulates communications in the United States. Carr, an FCC commissioner since 2017, has taken aim at big tech and China’s influence on U.S. communications. VOA’s Dora Mekouar reports.

Hackers demand ransom from Rhode Islanders after data breach

The personal and bank information of hundreds of thousands of Rhode Island residents, including Social Security numbers, was likely stolen by an international cybercriminal group demanding a ransom, state officials said on Saturday.

In what Rhode Island officials described as extortion, the hackers threatened to release the stolen information unless they were paid an undisclosed amount of money. 

The breach affects people who use the state’s government assistance programs, including the Supplemental Nutrition Assistance Program, or SNAP, Temporary Assistance for Needy Families and healthcare purchased through the state’s HealthSource RI, Governor Dan McKee announced on Friday.

Hackers gained access earlier this month to RIBridges, the state’s online portal for obtaining social services, the governor’s office said in a statement, but the breach was not confirmed by the system’s vendor, Deloitte, until Friday.

“Deloitte confirmed that there is a high probability that a cybercriminal has obtained files with personally identifiable information from RIBridges,” the governor’s office said in a statement on Saturday. 

A representative from McKee’s office was not immediately available to Reuters for comment. 

Anyone who has applied for or received benefits through those programs since 2016 could be affected. 

The state directed Deloitte to shut down RIBridges to remediate the threat; until the system is back up, anyone applying for new benefits will have to do so on paper.

Households believed to have been affected will receive a letter from the state notifying them of the problem and explaining steps to be taken to help protect their data and bank accounts. 

US court rejects TikTok request to temporarily halt pending US ban

WASHINGTON — A U.S. appeals court on Friday rejected an emergency bid by TikTok to temporarily block a law that would require its Chinese parent company ByteDance to divest the short-video app by January 19 or face a ban on the app.

TikTok and ByteDance on Monday filed the emergency motion with the U.S. Court of Appeals for the District of Columbia Circuit, asking for more time to make their case to the U.S. Supreme Court. Friday’s ruling means that TikTok must now move quickly to the Supreme Court in an attempt to halt the pending ban.

The companies had warned that without court action, the law will “shut down TikTok — one of the nation’s most popular speech platforms — for its more than 170 million domestic monthly users.”

“The petitioners have not identified any case in which a court, after rejecting a constitutional challenge to an Act of Congress, has enjoined the Act from going into effect while review is sought in the Supreme Court,” the D.C. Circuit said.

TikTok did not immediately respond to a request for comment.

Under the law, TikTok will be banned unless ByteDance divests it by January 19. The law also gives the U.S. government sweeping powers to ban other foreign-owned apps that could raise concerns about collection of Americans’ data.

The U.S. Justice Department argues “continued Chinese control of the TikTok application poses a continuing threat to national security.”

TikTok says the Justice Department has misstated the social media app’s ties to China, arguing that its content recommendation engine and user data are stored in the U.S. on cloud servers operated by Oracle, and that content moderation decisions affecting U.S. users are made in the U.S.