Wimbledon tennis tournament replaces line judges with AI in break with tradition

LONDON — That long-held Wimbledon tradition of line judges dressed in elegant uniforms is no more. 

The All England Club announced Wednesday that artificial intelligence will be used to make the “out” and “fault” calls at the championships from 2025. 

Wimbledon organizers said the decision to adopt live electronic line calling was made following extensive testing at the 2024 tournament and “builds on the existing ball-tracking and line-calling technology that has been in place for many years.” 

“We consider the technology to be sufficiently robust and the time is right to take this important step in seeking maximum accuracy in our officiating,” said Sally Bolton, chief executive of the All England Club. “For the players, it will offer them the same conditions they have played under at a number of other events on tour.” 

Bolton said Wimbledon had a responsibility to “balance tradition and innovation.” 

“Line umpires have played a central role in our officiating setup at the championships for many decades,” she said, “and we recognize their valuable contribution and thank them for their commitment and service.” 

Line-calling technology has long been used at Wimbledon and other tennis tournaments to call whether serves are in or out. 

The All England Club also said Wednesday that the ladies’ and gentlemen’s singles finals will be scheduled to take place at the later time of 4 p.m. local time on the second Saturday and Sunday, respectively — and after doubles finals on those days. 

Bolton said the moves have been made to ensure the day of the finals “builds towards the crescendo of the ladies’ and gentlemen’s singles finals, with our champions being crowned in front of the largest possible worldwide audience.”

 

US states sue TikTok, saying it harms young users

NEW YORK/WASHINGTON — TikTok faces new lawsuits filed by 13 U.S. states and the District of Columbia on Tuesday, accusing the popular social media platform of harming and failing to protect young people.

The lawsuits, filed separately in New York, California, the District of Columbia and 11 other states, expand Chinese-owned TikTok’s legal fight with U.S. regulators and seek new financial penalties against the company.

The states accuse TikTok of using intentionally addictive software designed to keep children watching as long and as often as possible, and of misrepresenting the effectiveness of its content moderation.

“TikTok cultivates social media addiction to boost corporate profits,” California Attorney General Rob Bonta said in a statement. “TikTok intentionally targets children because they know kids do not yet have the defenses or capacity to create healthy boundaries around addictive content.”

TikTok seeks to maximize the amount of time users spend on the app in order to target them with ads, the states said.

“Young people are struggling with their mental health because of addictive social media platforms like TikTok,” said New York Attorney General Letitia James.

TikTok said on Tuesday that it strongly disagreed with the claims, “many of which we believe to be inaccurate and misleading,” and that it was disappointed the states chose to sue “rather than work with us on constructive solutions to industrywide challenges.”

TikTok provides safety features that include default screentime limits and privacy defaults for minors under 16, the company said.

Washington, D.C., Attorney General Brian Schwalb alleged that TikTok operates an unlicensed money transmission business through its livestreaming and virtual currency features.

“TikTok’s platform is dangerous by design. It’s an intentionally addictive product that is designed to get young people addicted to their screens,” Schwalb said in an interview.

Washington’s lawsuit accused TikTok of facilitating sexual exploitation of underage users, saying TikTok’s livestreaming and virtual currency “operate like a virtual strip club with no age restrictions.”

Illinois, Kentucky, Louisiana, Massachusetts, Mississippi, New Jersey, North Carolina, Oregon, South Carolina, Vermont and Washington state also sued on Tuesday.

In March 2022, eight states, including California and Massachusetts, launched a nationwide probe of TikTok's impact on young people.

The U.S. Justice Department sued TikTok in August for allegedly failing to protect children’s privacy on the app. Other states, including Utah and Texas, previously sued TikTok for failing to protect children from harm. TikTok on Monday rejected the allegations in a court filing.

TikTok’s Chinese parent company, ByteDance, is battling a U.S. law that could ban the app in the United States.

Pioneers in artificial intelligence win the Nobel Prize in physics 

STOCKHOLM — Two pioneers of artificial intelligence — John Hopfield and Geoffrey Hinton — won the Nobel Prize in physics Tuesday for helping create the building blocks of machine learning, a technology that is revolutionizing the way we work and live but also creates new threats to humanity, one of the winners said.

Hinton, who is known as the “godfather of artificial intelligence,” is a citizen of Canada and Britain who works at the University of Toronto. Hopfield is an American working at Princeton.

“This year’s two Nobel Laureates in physics have used tools from physics to develop methods that are the foundation of today’s powerful machine learning,” the Nobel committee said in a press release.

Ellen Moons, a member of the Nobel committee at the Royal Swedish Academy of Sciences, said the two laureates “used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets.”

She said that such networks have been used to advance research in physics and “have also become part of our daily lives, for instance in facial recognition and language translation.”

Hinton predicted that AI will end up having a “huge influence” on civilization, bringing improvements in productivity and health care.

“It would be comparable with the Industrial Revolution,” he said in the open call with reporters and the officials from the Royal Swedish Academy of Sciences.

“Instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us. And it’s going to be wonderful in many respects,” Hinton said. “But we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.”

The Nobel committee that honored the science behind machine learning and AI also mentioned fears about its possible flipside. Moons said that while it has “enormous benefits, its rapid development has also raised concerns about our future. Collectively, humans carry the responsibility for using this new technology in a safe and ethical way for the greatest benefit of humankind.”

Hinton shares those concerns. He quit a role at Google so he could more freely speak about the dangers of the technology he helped create.

On Tuesday, he said he was shocked at the honor.

“I’m flabbergasted. I had no idea this would happen,” he said when reached by the Nobel committee on the phone.

There was no immediate reaction from Hopfield.

Hinton, now 76, in the 1980s helped develop a technique known as backpropagation that has been instrumental in training machines how to “learn.”

His team at the University of Toronto later wowed peers by using a neural network to win the prestigious ImageNet computer vision competition in 2012. That win spawned a flurry of copycats, giving birth to the rise of modern AI.

Hopfield, 91, created an associative memory that can store and reconstruct images and other types of patterns in data, the Nobel committee said.

Hinton used Hopfield’s network as the foundation for a new network that uses a different method, known as the Boltzmann machine, that the committee said can learn to recognize characteristic elements in a given type of data.
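The associative memory the committee describes can be sketched in a few lines of Python. This is purely an illustrative toy, not the laureates' actual work: a binary pattern is stored in a weight matrix with a Hebbian learning rule, then recovered from a corrupted cue by repeatedly updating units toward the stored state.

```python
import numpy as np

# Toy Hopfield-style associative memory (illustrative sketch only):
# store binary +1/-1 patterns, then reconstruct one from a noisy cue.

def train(patterns):
    # patterns: array of shape (num_patterns, n) with entries +1/-1
    n = patterns.shape[1]
    w = patterns.T @ patterns / n   # Hebbian outer-product rule
    np.fill_diagonal(w, 0)          # no self-connections
    return w

def recall(w, state, steps=10):
    # Synchronously update all units until the state stops changing.
    for _ in range(steps):
        new = np.where(w @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
w = train(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]                # corrupt the cue by flipping one bit
print(np.array_equal(recall(w, noisy), pattern))  # True
```

With one stored pattern, a single update step pulls the corrupted cue back to the memorized state; storing many patterns in a small network eventually degrades recall, which is part of what the original physics analysis quantified.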

Six days of Nobel announcements opened Monday with Americans Victor Ambros and Gary Ruvkun winning the medicine prize for their discovery of tiny bits of genetic material that serve as on and off switches inside cells that help control what the cells do and when they do it. If scientists can better understand how they work and how to manipulate them, it could one day lead to powerful treatments for diseases like cancer.

The physics prize carries a cash award of 11 million Swedish kronor ($1 million) from a bequest left by the award’s creator, Swedish inventor Alfred Nobel. The laureates are invited to receive their awards at ceremonies on Dec. 10, the anniversary of Nobel’s death.

Nobel announcements continue with the chemistry prize on Wednesday and literature on Thursday. The Nobel Peace Prize will be announced Friday and the economics award on Oct. 14.

China-connected spamouflage networks spread antisemitic disinformation

WASHINGTON — Spamouflage networks with connections to China are posting antisemitic conspiracy theories on social media, casting doubt on Washington’s independence from alleged Jewish influence and the integrity of the two U.S. presidential candidates, a joint investigation by VOA Mandarin and Taiwan’s Doublethink Lab, a social media analytics firm, has found.

The investigation has so far uncovered more than 30 such X posts, many of which claim or suggest that core American political institutions, including the White House and Congress, have pledged loyalty to or are controlled by Jewish elites and the Israeli government.

One post shows a graphic of 18 U.S. officials of Jewish descent, including Secretary of State Antony Blinken, Treasury Secretary Janet Yellen, and the head of the Homeland Security Department, Alejandro Mayorkas, and asks: “Jews only make up 2% of the U.S. population, so why do they have so many representatives in important government departments?!”

Another post shows a cartoon depicting Vice President Kamala Harris, the Democratic candidate for president, and her opponent, Donald Trump, having their tongues tangled together and wrapped around an Israeli flagpole. The post proclaims that “no matter who of them comes to power, they will not change their stance on Judaism.”

Most of the 32 posts analyzed by VOA Mandarin and Doublethink Lab were posted during July and August. The posts came from three spamouflage accounts, two of which were previously reported by VOA.

Each of the three accounts leads its own spamouflage network. The three networks consist of 140 accounts, which amplify content from the three main accounts, or seeders.

A spamouflage network is a state-sponsored operation disguised as the work of authentic social media users to spread pro-government narratives and disinformation while discrediting criticism from adversaries.

Jasper Hewitt, a digital intelligence analyst at Doublethink Lab, told VOA Mandarin that the impact of these antisemitic posts has been limited, as most of them failed to reach real users, despite having garnered over 160,000 views.

U.S. officials have cast China as one of the major threats looking to disrupt this year’s election. Beijing, however, has repeatedly denied these allegations and urged Washington to “not make an issue of China in the election.”

Tuvia Gering, a nonresident fellow at the Atlantic Council’s Global China Hub, has closely followed antisemitic disinformation coming from China. He told VOA Mandarin that Beijing isn’t necessarily hostile toward Jews, but antisemitic conspiracy theories have historically been a handy tool against Western countries.

“You can trace its origins back to the Cold War, when the Soviet Union promoted antisemitic conspiracy theories all over the world just to instigate in Western societies,” Gering said, “because it divides them from within and it casts the West in a bad light in a strategic competition. [It’s] the same thing you see here [with China].”

Antisemitic speech floods Chinese internet

Similar antisemitic narratives about U.S. politics posted by the spamouflage accounts have long been flourishing on the Chinese internet.

An article that received thousands of likes and reposts on Chinese social media app WeChat claims that “Jewish capital” has completed its control of the American political sphere “through infiltration, marriages, campaign funds and lobbying.”

The article also brings up the Jewish heritage of many current and former U.S. officials and their families as evidence of the alleged Jewish takeover of America.

“The wife of the U.S. president is Jewish, the son-in-law of the former U.S. president is Jewish, the mother of the previous former U.S. president was Jewish, the U.S. Secretary of State is Jewish, the U.S. Secretary of Treasury is Jewish, the Deputy Secretary of State, the Attorney General … are all Jewish,” it wrote.

In fact, first lady Jill Biden is Roman Catholic, and the mother of former President Barack Obama was raised as a Christian. The others named are Jewish.

Conspiracy theories and misinformation abounded on the Chinese internet after the U.S. House of Representatives passed a bill in May that would empower the Department of Education to adopt a new set of standards when investigating antisemitism in educational programs.

Articles and videos assert that the bill marks the death of America because it “definitively solidifies the superior and unquestionable position of the Jews in America,” claiming falsely that anyone who’s labeled an antisemite will be arrested.

One video with more than 1 million views claimed that the New Testament of the Bible would be deemed illegal under the bill. And since all U.S. presidents took their inaugural oath with the Bible, the bill allegedly invalidates the legitimacy of the commander in chief. None of that is true.

The Chinese public hasn’t historically been hostile toward Jews. A 2014 survey published by the Anti-Defamation League, a U.S.-based group against antisemitism, found that only 20% of the participants from China harbored an antisemitic attitude.

But when the Israel-Hamas conflict broke out a year ago, the otherwise heavily censored Chinese social media was flooded with antisemitic comments and praise for Nazi leader Adolf Hitler.

The Chinese government has dismissed criticism of antisemitism on its internet. When asked about it at a news conference last year, Wang Wenbin, then the spokesperson of the Foreign Ministry, said that “China’s laws unequivocally prohibit disseminating information on extremism, ethnic hatred, discrimination and violence via the internet.”

But online hate speech against Jews has hardly disappeared. Eric Liu, a former censor for Chinese social media platform Weibo who now monitors online censorship, told VOA Mandarin that whenever Israel is in the news, there is a surge in online antisemitism.

Just last month, after dozens of members of the Lebanon-based militant group Hezbollah were killed by explosions of their pagers, Chinese online commentators acidly condemned Israel and Jews.

The attack “proves that Jews are the most terrifying and cowardly people,” one Weibo user wrote. “They are self-centered and believe themselves to be superior, when in fact they are considered the most indecent and shameless. When the time comes, it’s going to be blood for blood.”

Australia’s online dating industry agrees to code of conduct to protect users

MELBOURNE, Australia — A code of conduct will be enforced on the online dating industry to better protect Australian users after research found that three in four people suffer some form of sexual violence through the platforms, Australia’s government said on Tuesday.

Bumble, Grindr and Match Group Inc., a Texas-based company that owns platforms including Tinder, Hinge, OKCupid and Plenty of Fish, have agreed to the code that took effect on Tuesday, Communications Minister Michelle Rowland said.

The platforms, which account for 75% of the industry in Australia, have until April 1 to implement the changes before they are strictly enforced, Rowland said.

The code requires the platforms’ systems to detect potential incidents of online-enabled harm and demands that the accounts of some offenders be terminated.

Complaint and reporting mechanisms are to be made prominent and transparent. A new rating system will show users how well platforms are meeting their obligations under the code.

The government called for a code of conduct last year after the Australian Institute of Criminology research found that three in four users of dating apps or websites had experienced some form of sexual violence through these platforms in the five years through 2021.

“There needs to be a complaint-handling process. This is a pretty basic feature that Australians would have expected in the first place,” Rowland said on Tuesday.

“If there are grounds to ban a particular individual from utilizing one of those platforms, if they’re banned on one platform, they’re blocked on all platforms,” she added.

Match Group said it had already introduced new safety features on Tinder, including photo and identification verification to prevent bad actors from accessing the platform while giving users more confidence in the authenticity of their connections.

The platform uses artificial intelligence to issue real-time warnings about potentially offensive language in opening lines and to advise users to pause before sending.

“This is a pervasive issue, and we take our responsibility to help keep users safe on our platform very seriously,” Match Group said in a statement on Wednesday.

Match Group said it would continue to collaborate with the government and the industry to “help make dating safer for all Australians.”

Bumble said it shared the government’s hope of eliminating gender-based violence and was grateful for the opportunity to work with the government and industry on what the platform described as a “world-first dating code of practice.”

“We know that domestic and sexual violence is an enormous problem in Australia, and that women, members of LGBTQ+ communities, and First Nations are the most at risk,” a Bumble statement said.

“Bumble puts women’s experiences at the center of our mission to create a world where all relationships are healthy and equitable, and safety has been central to our mission from day one,” Bumble added.

Grindr said in a statement it was “honored to participate in the development of the code and shares the Australian government’s commitment to online safety.”

All the platforms helped design the code.

Platforms that have not signed up include Happn, Coffee Meets Bagel and Feeld.

The government expects the code will enable Australians to make better informed choices about which dating apps are best equipped to provide a safe dating experience.

The government has also warned the online dating industry that it will legislate if the operators fail to keep Australians safe on their platforms.

Arkansas sues YouTube over claims it’s fueling mental health crisis

LITTLE ROCK, Arkansas — Arkansas sued YouTube and parent company Alphabet on Monday, saying the video-sharing platform is made deliberately addictive and is fueling a mental health crisis among youth in the state.

Attorney General Tim Griffin’s office filed the lawsuit in state court, accusing the companies of violating the state’s deceptive trade practices and public nuisance laws. The lawsuit claims the site is addictive and has resulted in the state spending millions on expanded mental health and other services for young people.

“YouTube amplifies harmful material, doses users with dopamine hits, and drives youth engagement and advertising revenue,” the lawsuit said. “As a result, youth mental health problems have advanced in lockstep with the growth of social media, and in particular, YouTube.”

Alphabet’s Google, which owns the video service and is also named as a defendant in the case, denied the lawsuit’s claims.

“Providing young people with a safer, healthier experience has always been core to our work. In collaboration with youth, mental health and parenting experts, we built services and policies to provide young people with age-appropriate experiences, and parents with robust controls,” Google spokesperson Jose Castaneda said in a statement. “The allegations in this complaint are simply not true.”

YouTube requires users under 17 to get their parent’s permission before using the site, while accounts for users younger than 13 must be linked to a parental account. But it is possible to watch YouTube without an account, and kids can easily lie about their age.

The lawsuit is the latest in an ongoing push by state and federal lawmakers to highlight the impact that social media sites have on younger users. U.S. Surgeon General Vivek Murthy in June called on Congress to require warning labels on social media platforms about their effects on young people’s lives, like those now mandatory on cigarette boxes.

Arkansas last year filed similar lawsuits against TikTok and Facebook parent company Meta, claiming the social media companies were misleading consumers about the safety of children on their platforms and protections of users’ private data. Those lawsuits are still pending in state court.

Arkansas also enacted a law requiring parental consent for minors to create new social media accounts, though that measure has been blocked by a federal judge.

Along with TikTok, YouTube is one of the most popular sites for children and teens. Both sites have been questioned in the past for hosting, and in some cases promoting, videos that encourage gun violence, eating disorders and self-harm.

YouTube in June changed its policies about firearm videos, prohibiting any videos demonstrating how to remove firearm safety devices. Under the new policies, videos showing homemade guns, automatic weapons and certain firearm accessories like silencers will be restricted to users 18 and older.

Arkansas’ lawsuit claims that YouTube’s algorithms steer youth to harmful adult content, and that it facilitates the spread of child sexual abuse material.

The lawsuit doesn’t seek specific damages, but asks that YouTube be ordered to fund prevention, education and treatment for “excessive and problematic use of social media.”

California governor vetoes bill to create first-in-nation AI safety measures

SACRAMENTO, California — California Governor Gavin Newsom on Sunday vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models.

The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said.

Earlier in September, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal “can have a chilling effect on the industry.”

The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers.

The legislation is among a host of bills passed by the legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.

The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.

“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.

A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure’s supporters.

But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.

Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.

Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline in August. The bills would have required AI developers to label AI-generated content and ban discrimination from AI tools used to make employment decisions.

The governor said earlier this summer he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s top 50 AI companies are located in the state.

He has promoted California as an early adopter, saying the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier in September, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.

But even with Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.

“They are going to potentially either copy it or do something similar next legislative session,” Rice said. “So it’s not going away.”

Brazil imposes new fine, demands payments before letting X resume

SAO PAULO/BRASILIA, Brazil — Brazil’s Supreme Court said on Friday that social platform X still needs to pay just over $5 million in pending fines, including a new one, before it will be allowed to resume its service in the country, according to a court document.

Earlier this week, the Elon Musk-owned U.S. firm told the court it had complied with orders to stop the spread of misinformation and asked it to lift a ban on the platform. 

But Judge Alexandre de Moraes responded on Friday with a ruling that X and its legal representative in Brazil must still agree to pay a total of $3.4 million in pending fines that were previously ordered by the court. 

In his decision, the judge said the court could use funds already frozen from X and Starlink accounts in Brazil, but to do so the satellite company, also owned by Musk, had to drop its pending appeal against the asset freeze.

The judge also imposed a new $1.8 million fine related to a brief period last week when X became available again for some users in Brazil.

X, formerly known as Twitter, did not immediately respond to a request for comment. 

According to a person close to X, the tech firm will likely pay all the fines but will consider challenging the fine that was imposed by the court after the platform ban.  

X has been suspended since late August in Brazil, one of its largest and most coveted markets, after Moraes ruled it had failed to comply with orders related to restricting hate speech and naming a local legal representative. 

Musk, who had denounced the orders as censorship and called Moraes a “dictator,” backed down and started to reverse his position last week, when X lawyers said the platform tapped a local representative and would comply with court rulings. 

In Friday’s decision, Moraes said that X had proved it had now blocked accounts as ordered by the court and had named the required legal representative in Brazil. 

CrowdStrike executive apologizes to Congress for July global tech outage

WASHINGTON — An executive at cybersecurity company CrowdStrike apologized in testimony to Congress for sparking a global technology outage over the summer. 

“We let our customers down,” said Adam Meyers, who leads CrowdStrike’s threat intelligence division, in a hearing before a U.S. House cybersecurity subcommittee Tuesday. 

Austin, Texas-based CrowdStrike has blamed a bug in an update that allowed its cybersecurity systems to push bad data out to millions of customer computers, setting off a global tech outage in July that grounded flights, took TV broadcasts off air and disrupted banks, hospitals and retailers. 

“Everywhere Americans turned, basic societal functions were unavailable,” House Homeland Security Committee Chairman Mark Green said. “We cannot allow a mistake of this magnitude to happen again.” 

The Tennessee Republican likened the impact of the outage to an attack “we would expect to be carefully executed by a malicious and sophisticated nation-state actor.” 

“We’re deeply sorry and we are determined to prevent this from ever happening again,” Meyers told lawmakers while laying out the technical missteps that led to the outage of about 8.5 million computers running Microsoft’s Windows operating system. 

Meyers said he wanted to “underscore that this was not a cyberattack” but was, instead, caused by a faulty “rapid-response content update” focused on addressing new threats. The company has since bolstered its content update procedures, he said. 

The company still faces a number of lawsuits from people and businesses that were caught up in July’s mass outage. 

Former executive gets 2 years in prison for role in FTX fraud

NEW YORK — Caroline Ellison, a former top executive in Sam Bankman-Fried’s fallen FTX cryptocurrency empire, was sentenced to two years in prison on Tuesday after she apologized repeatedly to everyone hurt by a fraud that stole billions of dollars from investors, lenders and customers. 

U.S. District Judge Lewis A. Kaplan said Ellison’s cooperation was “very, very substantial” and “remarkable.” 

But he said a prison sentence was necessary because she had participated in what might be the “greatest financial fraud ever perpetrated in this country and probably anywhere else” or at least close to it. 

He said in such a serious case, he could not let cooperation be a get-out-of-jail-free card, even when it was clear that Bankman-Fried had become “your kryptonite.” 

“I’ve seen a lot of cooperators in 30 years here,” he said. “I’ve never seen one quite like Ms. Ellison.”

She was ordered to report to prison on November 7. 

Ellison, 29, pleaded guilty nearly two years ago and testified against Bankman-Fried for nearly three days at a trial last November. 

At sentencing, she emotionally apologized to anyone hurt by the fraud that stretched from 2017 through 2022. 

“I’m deeply ashamed of what I’ve done,” she said, fighting through tears to say she was “so, so sorry” to everyone she had harmed directly or indirectly. 

She did not speak as she left Manhattan federal court, surrounded by lawyers. 

In a court filing, prosecutors had called her testimony the “cornerstone of the trial” against Bankman-Fried, 32, who was found guilty of fraud and sentenced to 25 years in prison. 

In court Tuesday, Assistant U.S. Attorney Danielle Sassoon called for leniency, saying her testimony was “devastating and powerful proof” against Bankman-Fried. 

The prosecutor said Ellison’s time on the witness stand was very different from that of Bankman-Fried, who she said was “evasive, even contemptuous, and unable to answer questions directly” when he testified. 

Attorney Anjan Sahni asked the judge to spare his client from prison, citing “unusual circumstances,” including her off-and-on romantic relationship with Bankman-Fried and the damage caused when her “whole professional and personal life came to revolve” around him. 

FTX was one of the world’s most popular cryptocurrency exchanges, known for its Super Bowl TV ad and its extensive lobbying campaign in Washington before it collapsed in 2022. 

U.S. prosecutors accused Bankman-Fried and other executives of looting customer accounts on the exchange to make risky investments, make millions of dollars of illegal political donations, bribe Chinese officials, and buy luxury real estate in the Caribbean. 

Ellison was chief executive at Alameda Research, a cryptocurrency hedge fund controlled by Bankman-Fried that was used to process some customer funds from FTX. 

As the business began to falter, Ellison divulged the massive fraud to employees who worked for her even before FTX filed for bankruptcy, trial evidence showed. 

Ultimately, she also spoke extensively with criminal and civil U.S. investigators. 

Sassoon said prosecutors were impressed that Ellison did not “jump into the lifeboat” to escape her crimes but instead spent nearly two years fully cooperating. 

Since testifying at Bankman-Fried’s trial, Ellison has engaged in extensive charity work, written a novel, and worked with her parents on a math enrichment textbook for advanced high school students, according to her lawyers. 

They said she also now has a healthy romantic relationship and has reconnected with high school friends she had lost touch with while she worked for and sometimes dated Bankman-Fried from 2017 until late 2022. 

Biden proposes banning Chinese vehicles from US roads with software crackdown 

WASHINGTON — The U.S. Commerce Department on Monday proposed prohibiting key Chinese software and hardware in connected vehicles on American roads due to national security concerns — a move that would effectively bar nearly all Chinese cars from entering the U.S. market.

The planned regulation, first reported by Reuters, would also force American and other major automakers in the coming years to remove key Chinese software and hardware from vehicles in the United States.

The Biden administration has raised serious concerns about the collection of data by Chinese companies on U.S. drivers and infrastructure through connected vehicles as well as about potential foreign manipulation of vehicles connected to the internet and navigation systems. The White House ordered an investigation into the potential dangers in February.

The prohibitions would prevent testing of self-driving cars on U.S. roads by Chinese automakers and extend to vehicle software and hardware produced by other U.S. foreign adversaries including Russia.

“When foreign adversaries build software to make a vehicle, that means it can be used for surveillance, can be remotely controlled, which threatens the privacy and safety of Americans on the road,” Commerce Secretary Gina Raimondo told a briefing.

“In an extreme situation, a foreign adversary could shut down or take control of all their vehicles operating in the United States all at the same time causing crashes, blocking roads.”

The move is a significant escalation in the United States’ ongoing restrictions on Chinese vehicles, software and components. Earlier this month, the Biden administration locked in steep tariff hikes on Chinese imports, including a 100% duty on electric vehicles as well as new hikes on EV batteries and key minerals.

There are relatively few Chinese-made cars or light-duty trucks imported into the United States. But Raimondo said the department is acting “before suppliers, automakers and car components linked to China or Russia become commonplace and widespread in the U.S. automotive sector… We’re not going to wait until our roads are filled with cars and the risk is extremely significant before we act.”

Nearly all newer cars and trucks are considered “connected” with onboard network hardware that allows internet access, allowing them to share data with devices both inside and outside the vehicle.

A senior administration official confirmed the proposal would effectively ban all existing Chinese light-duty cars and trucks from the U.S. market, but added it would allow Chinese automakers to seek “specific authorizations” for exemptions.

The United States has ample evidence of China prepositioning malware in critical American infrastructure, White House National Security Adviser Jake Sullivan told the same briefing.

“With potentially millions of vehicles on the road, each with 10- to 15-year lifespans, the risk of disruption and sabotage increases dramatically,” Sullivan said.

The Chinese Embassy in Washington last month criticized planned action to limit Chinese vehicle exports to the United States: “China urges the U.S. to earnestly abide by market principles and international trade rules, and create a level playing field for companies from all countries. China will firmly defend its lawful rights and interests.”

The proposal calls for making software prohibitions effective in the 2027 model year while the hardware ban would take effect in the 2030 model year or January 2029.

The Commerce Department is giving the public 30 days to comment on the proposal and hopes to finalize it by Jan. 20. The rules would apply to all on-road vehicles but exclude agricultural or mining vehicles not used on public roads.

The Alliance For Automotive Innovation, a group representing major automakers including General Motors, Toyota, Volkswagen and Hyundai, has warned that changing hardware and software would take time.

The group noted connected vehicle hardware and software are developed around the world, including China, but could not detail to what extent Chinese-made components are prevalent in U.S. models.

US to propose ban on Chinese software, hardware in connected vehicles, sources say

WASHINGTON — The U.S. Commerce Department is expected on Monday to propose prohibiting Chinese software and hardware in connected and autonomous vehicles on American roads due to national security concerns, two sources told Reuters.

The Biden administration has raised serious concerns about the collection of data by Chinese companies on U.S. drivers and infrastructure as well as the potential foreign manipulation of vehicles connected to the internet and navigation systems.

The proposed regulation would ban the import and sale of vehicles from China with key communications or automated driving system software or hardware, said the two sources, who declined to be identified because the decision had not been publicly disclosed.

The move is a significant escalation in the United States’ ongoing restrictions on Chinese vehicles, software and components. Last week, the Biden administration locked in steep tariff hikes on Chinese imports, including a 100% duty on electric vehicles as well as new hikes on EV batteries and key minerals.

Commerce Secretary Gina Raimondo said in May the risks of Chinese software or hardware in connected U.S. vehicles were significant.

“You can imagine the most catastrophic outcome theoretically if you had a couple million cars on the road and the software were disabled,” she said.

President Joe Biden in February ordered an investigation into whether Chinese vehicle imports pose national security risks over connected-car technology — and if that software and hardware should be banned in all vehicles on U.S. roads.

“China’s policies could flood our market with its vehicles, posing risks to our national security,” Biden said earlier. “I’m not going to let that happen on my watch.”

The Commerce Department plans to give the public 30 days to comment before any finalization of the rules, the sources said. Nearly all newer vehicles on U.S. roads are considered “connected.” Such vehicles have onboard network hardware that allows internet access, allowing them to share data with devices both inside and outside the vehicle.

The department also plans to propose making the prohibitions on software effective in the 2027 model year and the ban on hardware would take effect in January 2029 or the 2030 model year. The prohibitions in question would include vehicles with certain Bluetooth, satellite and wireless features as well as highly autonomous vehicles that could operate without a driver behind the wheel.

A bipartisan group of U.S. lawmakers in November raised alarm about Chinese auto and tech companies collecting and handling sensitive data while testing autonomous vehicles in the United States.

The prohibitions would extend to other foreign U.S. adversaries, including Russia, the sources said.

A trade group representing major automakers including General Motors, Toyota Motor, Volkswagen, Hyundai and others had warned that changing hardware and software would take time.

The carmakers noted their systems “undergo extensive pre-production engineering, testing, and validation processes and, in general, cannot be easily swapped with systems or components from a different supplier.”

The Commerce Department declined to comment on Saturday. Reuters first reported, in early August, details of a plan that would have the effect of barring the testing of autonomous vehicles by Chinese automakers on U.S. roads. There are relatively few Chinese-made light-duty vehicles imported into the United States.

The White House on Thursday signed off on the final proposal, according to a government website. The rule is aimed at ensuring the security of the supply chain for U.S. connected vehicles. It will apply to all vehicles on U.S. roads, but not to agricultural or mining vehicles, the sources said.

Biden noted that most cars are connected like smartphones on wheels, linked to phones, navigation systems, critical infrastructure and to the companies that made them.

California governor signs law to protect children from social media addiction

SACRAMENTO, California — California will make it illegal for social media platforms to knowingly provide addictive feeds to children without parental consent beginning in 2027 under a new law Governor Gavin Newsom signed Friday. 

California follows New York state, which passed a law earlier this year allowing parents to block their kids from getting social media posts suggested by a platform’s algorithm. Utah has passed laws in recent years aimed at limiting children’s access to social media, but those have faced challenges in court. 

The California law will take effect in a state home to some of the largest technology companies in the world. Similar proposals have failed to pass in recent years, but Newsom signed a first-in-the-nation law in 2022 barring online platforms from using users’ personal information in ways that could harm children. 

It is part of a growing push in states across the country to try to address the impact of social media on the well-being of children. 

“Every parent knows the harm social media addiction can inflict on their children — isolation from human contact, stress and anxiety, and endless hours wasted late into the night,” Newsom, a Democrat, said in a statement. “With this bill, California is helping protect children and teenagers from purposely designed features that feed these destructive habits.” 

The law bans platforms from sending notifications without permission from parents to minors between midnight and 6 a.m., and between 8 a.m. and 3 p.m. on weekdays from September through May, when children are typically in school. The legislation also makes platforms set children’s accounts to private by default. 

Opponents of the legislation say it could inadvertently prevent adults from accessing content if they cannot verify their age. Some argue it would threaten online privacy by making platforms collect more information on users. 

The law defines an “addictive feed” as a website or app “in which multiple pieces of media generated or shared by users are, either concurrently or sequentially, recommended, selected, or prioritized for display to a user based, in whole or in part, on information provided by the user, or otherwise associated with the user or the user’s device,” with some exceptions. 

The subject garnered renewed attention in June when U.S. Surgeon General Vivek Murthy called on Congress to require warning labels on social media platforms about their impact on young people. Attorneys general in 42 states endorsed the plan in a letter sent to Congress last week. 

State Senator Nancy Skinner, a Democrat representing Berkeley who wrote the California law, said that “social media companies have designed their platforms to addict users, especially our kids.” 

“With the passage of SB 976, the California Legislature has sent a clear message: When social media companies won’t act, it’s our responsibility to protect our kids,” she said in a statement.

China-connected spamouflage impersonated Dutch cartoonist

WASHINGTON — Based on the posts of an X account that bears the name of Dutch cartoonist Bart van Leeuwen, a profile picture of his face and short professional bio, one would think the Amsterdam-based artist is a staunch supporter of China and fierce critic of the United States.

In one post, the account blasts what it calls Washington’s “fallacies against the Chinese economy,” accompanied by a cartoon from the Global Times — a Beijing-controlled media outlet — showing Uncle Sam aiming but failing to hit a target emblazoned with the words “China’s economy.”

In another, the account reposts a Chinese propaganda video about the country’s rubber-stamp legislature, writing “today’s China is closely connected with the world, blending with each other, and achieving mutual success.”

But Van Leeuwen didn’t make the posts. In fact, this account doesn’t even belong to him.

It belongs to a China-connected network on X of “spamouflage” accounts, which pretend to be the work of real people but are in reality controlled by robots sending out messages designed to shape public opinion.

China has repeatedly rejected reports that it seeks to influence U.S. presidential elections, describing such claims as “fabricated.”

VOA Mandarin and DoubleThink Lab (DTL), a Taiwanese social media analytics firm, uncovered the fake Van Leeuwen account during a joint investigation into a network of spamouflage accounts working on behalf of the Chinese government.

The network, consisting of at least nine accounts, propagated Beijing’s talking points on issues including human rights abuses in China’s western Xinjiang region, territorial disputes with countries in the South China Sea and U.S. tariffs on Chinese goods.

Fake account contradicts real artist

Van Leeuwen confirmed in an interview with VOA Mandarin that he had nothing to do with and was not aware of the fake account.

“It’s ironic that my identity, being a political cartoonist, is being used for political propaganda,” he told VOA in a written statement.

The real Van Leeuwen is an award-winning cartoonist whose work has been published by news outlets around the world, such as the Las Vegas Review-Journal, the Korea Times, Sing Tao Daily in Hong Kong and Gulf Today in the United Arab Emirates.

He specializes in editorial cartoons, whose main subjects include global politics, elections in the U.S. and Russia’s invasion of Ukraine. Several of his past illustrations made fun of Chinese leader Xi Jinping’s economic policies and the opaqueness of Beijing’s inner political struggles.

After being contacted by VOA Mandarin, a spokesman from X said the fake account has been suspended.

Other than finding irony in being impersonated by a Chinese propaganda bot, Van Leeuwen said the incident also worries him.

“This example once again highlights the need for far-reaching measures regarding the restriction of social media,” Van Leeuwen wrote in his statement, “especially with irresponsible people like Elon Musk at the helm.”

After purchasing what was then called Twitter in 2022, the Tesla and SpaceX CEO vowed to reduce the prevalence of bots on the platform, but many users complain it has become even worse.

Musk, the world’s richest person, is a so-called “free speech absolutist,” opposing almost all censorship of people voicing their views. Critics say his policy allows racist and false information to flourish on X.

Former President Donald Trump has praised Musk’s business acumen and said he plans to have the man who may become the world’s first trillionaire head a commission on government efficiency if he is reelected in November.

Network of spamouflage accounts

Before its suspension, the X account that impersonated Van Leeuwen had close to 1,000 followers, more than Van Leeuwen’s real X account. It was registered in 2013, but its first post came only last year. The account’s early posts were mostly encouraging and inspiring words in Chinese. It also posted many dance videos.

Gradually, the account started to mix in more and more political narratives, criticizing the U.S. and defending China. It often reposted content from another spamouflage account called “Grey World.”

“Grey World” used a photo of an attractive Asian woman as its profile picture. Most of its posts were supportive of Beijing’s talking points. It regularly posted videos and cartoons from Chinese state media. It also posted several of Van Leeuwen’s cartoons about American politics.

VOA Mandarin and DTL’s investigation identified “Grey World” as the main spamouflage account in a network of nine such accounts. Other accounts in the network, including the fake Van Leeuwen account, amplified “Grey World” by reposting its content.

But posts from “Grey World” had limited reach on X, despite having tens of thousands of followers. For example, between August 18 and September 1, its most popular post, a diatribe against Washington’s Indo-Pacific strategy, was viewed a little over 10,000 times but only had 35 reposts and 65 likes.

After the suspension of the fake Van Leeuwen account, X also shut down the “Grey World” account.

The spamouflage network is not the first linked to China.

In April, British researchers released a report saying Chinese nationalist trolls were posing as American supporters of Trump on X to try to exploit domestic divisions ahead of the U.S. election.

U.S. federal prosecutors in 2023 accused China’s Ministry of Public Security of having a covert social media propaganda campaign that also aimed to influence U.S. elections.

Researchers at Facebook’s parent company Meta said it was the largest known covert propaganda operation ever identified on that platform and Instagram, reported Rolling Stone magazine.

Network analysis firm Graphika called the pro-Chinese network “Spamouflage Dragon,” part of a campaign it identified in early 2020 that was at the time posting content that praised Beijing’s policies and attacked those of then-President Trump.

US targets second major Chinese hacking group

WASHINGTON — The United States has identified and taken down a botnet campaign by China-directed hackers to further infiltrate American infrastructure as well as a variety of internet-connected devices. 

FBI Director Christopher Wray announced the disruption of what he called Flax Typhoon during a cyber summit Wednesday in Washington, describing it as part of a much larger campaign by Beijing. 

“Flax Typhoon hijacked Internet-of-Things devices like cameras, video recorders and storage devices — things typically found across both big and small organizations,” Wray said. “And about half of those hijacked devices were located here in the U.S.” 

Wray said the hackers, working under the guise of an information security company called the Integrity Technology Group, collected information from corporations, media organizations, universities and government agencies. 

“They used internet-connected devices — this time, hundreds of thousands of them — to create a botnet that helped them compromise systems and exfiltrate confidential data,” he said. 

But Flax Typhoon’s operations were disrupted last week when the FBI, working with allies and under court orders, took control of the botnet and pursued the hackers when they tried to switch to a backup system. 

“We think the bad guys finally realized that it was the FBI and our partners that they were up against,” Wray said. “And with that realization, they essentially burned down their new infrastructure and abandoned their botnet.” 

Wray said Flax Typhoon appeared to build on the exploits and tactics of another China-linked hacking group, known as Volt Typhoon, which was identified by Microsoft in May of last year. 

Volt Typhoon used office network equipment, including routers, firewalls and VPN hardware, to infiltrate and disrupt communications infrastructure in Guam, home to key U.S. military facilities. 

VOA has reached out to the Chinese Embassy in Washington for comment. 

The FBI and the U.S. Cybersecurity and Infrastructure Security Agency have previously warned that Chinese government-directed hackers, like Volt Typhoon, have been positioning themselves to launch destructive cyberattacks that could jeopardize the physical safety of Americans. 

Following Wednesday’s announcement by the FBI, the U.S. National Security Agency (NSA) issued an advisory encouraging anyone with a device that was compromised by Flax Typhoon to apply needed patches. 

It said that as of this past June, the Flax Typhoon botnet was making use of more than 260,000 devices in North America, Europe, Africa and Southeast Asia. 

The NSA said almost half of the compromised devices were in the U.S. Another 18 countries, including Vietnam, Bangladesh, Albania, China, South Africa and India, were also impacted.

‘End of an era’: UK to shut last coal-fired power plant 

RATCLIFFE-ON-SOAR, United Kingdom — Ratcliffe-on-Soar Power Station has dominated the landscape of the English East Midlands for nearly 60 years, looming over the small town of the same name and serving as a landmark on the M1 motorway between Derby and Nottingham.  

At the mainline railway station serving the nearby East Midlands Airport, its giant cooling towers rise up seemingly within touching distance of the track and platform.  

But at the end of this month, the site in central England will close its doors, signaling the end of polluting coal-powered electricity in the UK, a landmark first for a G7 nation.   

“It’ll seem very strange because it has always been there,” said David Reynolds, a 74-year-old retiree who saw the site being built as a child before it began operations in 1967.  

“When I was younger you could go down certain parts and you saw nothing but coal pits,” he told AFP.   

Energy transition 

Coal has played a vital part in British economic history, powering the Industrial Revolution of the 18th and 19th centuries that made the country a global superpower, and creating London’s infamous choking smog.  

Even into the 1980s, it still represented 70% of the country’s electricity mix before its share declined in the 1990s.   

In the last decade the fall has been even sharper: coal’s share slumped to 38% in 2013, 5% in 2018 and just 1% last year. 

In 2015, the then Conservative government said that it intended to shut all coal-fired power stations by 2025 to reduce carbon emissions.  

Jess Ralston, head of energy at the Energy and Climate Intelligence Unit think-tank, said the UK’s 2030 clean-energy target was “very ambitious.”  

But she added: “It sends a very strong message that the UK is taking climate change as a matter of great importance and also that this is only the first step.”  

By last year, natural gas represented a third of the UK’s electricity production, while a quarter came from wind power and 13% from nuclear power, according to electricity operator National Grid ESO.  

“The UK managed to phase coal out so quickly largely through a combination of economics and then regulations,” Ralston said.   

“So larger power plants like coal plants had regulations put on them because of all the sulphur dioxide, nitrous oxides, all the emissions coming from the plant and that meant that it was no longer economically attractive to invest in those sorts of plants.”  

The new Labour government launched its flagship green energy plan after its election win in July, with the creation of a publicly owned body to invest in offshore wind, tidal power and nuclear power.  

The aim is to make Britain a superpower once more, this time in “clean energy.”  

As such, Ratcliffe-on-Soar’s closure on September 30 is a symbolic step in the UK’s ambition to decarbonize electricity by 2030, and become carbon neutral by 2050.   

It will make the country the first in the G7 of rich nations to do away entirely with coal power electricity.  

Italy plans to do so by next year, France in 2027, Canada in 2030 and Germany in 2038. Japan and the United States have no set dates.   

‘End of an era’ 

In recent years, Ratcliffe-on-Soar Power Station, which had the potential to power two million homes, has been used only when big spikes in electricity use were expected, such as during a cold snap in 2022 or the 2023 heatwave.  

Its last delivery of 1,650 tons of coal at the start of this summer barely supplied 500,000 homes for eight hours.    

“It’s like the end of an era,” said Becky, 25, serving £4 pints behind the bar of the Red Lion pub in nearby Kegworth.  

Her father works at the power station and will be out of a job. September 30 is likely to stir up strong emotions for him and the other 350 remaining employees.   

“It’s their life,” she said.  

Nothing remains of the world’s first coal-fired power station, which was built by Thomas Edison in central London in 1882, three years after his invention of the electric light bulb.  

The same fate is slated for Ratcliffe-on-Soar: the site’s German owner, Uniper, said it will be completely dismantled “by the end of the decade.”  

In its place will be a new development — a “carbon-free technology and energy hub,” the company said.

EU court confirms Qualcomm’s antitrust fine, with minor reduction

BRUSSELS — Europe’s second-top court largely confirmed on Wednesday an EU antitrust fine imposed on U.S. chipmaker Qualcomm, revising it down slightly to $265.5 million (238.7 million euros) from an initial 242 million euros.

The European Commission imposed the fine in 2019, saying that Qualcomm sold its chipsets below cost between 2009 and 2011, in a practice known as predatory pricing, to thwart British phone software maker Icera, which is now part of Nvidia Corp.

Qualcomm had argued that the 3G baseband chipsets singled out in the case accounted for just 0.7% of the Universal Mobile Telecommunications System (UMTS) market and so it was not possible for it to exclude rivals from the chipset market.

The Court made “a detailed examination of all the pleas put forward by Qualcomm, rejecting them all in their entirety, with the exception of a plea concerning the calculation of the amount of the fine, which it finds to be well founded in part,” the Luxembourg-based General Court said.

Qualcomm can appeal on points of law to the EU Court of Justice, Europe’s highest.

The chipmaker did not immediately reply to an emailed Reuters request for comment.

The company convinced the same court two years ago to throw out a $1.1 billion antitrust fine handed down in 2018 for paying billions of dollars to Apple from 2011 to 2016 to use only its chips in all its iPhones and iPads in order to block out rivals such as Intel Corp.

The EU watchdog subsequently declined to appeal the judgment.

Big Tech, calls for looser rules await new EU antitrust chief 

BRUSSELS — Teresa Ribera will have to square up to Big Tech, banks and airlines if confirmed as Europe’s new antitrust chief, while juggling calls for looser rules to help create EU champions.

Nominated Tuesday by European Commission President Ursula von der Leyen for the high-profile antitrust post, Ribera has been Spain’s minister for ecological transition since 2018.

The 55-year-old Spanish socialist, one of Europe’s most ambitious policymakers on climate change, will have to secure European Parliament approval before taking up her post.

As competition commissioner, she will be able to approve or veto multi-billion euro mergers or slap hefty fines on companies seeking to bolster their market power by throttling smaller rivals or illegally teaming up to fix prices.

One of her biggest challenges will be to ensure that Amazon, Apple, Alphabet’s Google, Microsoft and Meta comply with landmark rules aimed at reining in their power and giving consumers more choice.

Apple, Google and Meta are firmly in outgoing EU antitrust chief Margrethe Vestager’s crosshairs for falling short of complying with the Digital Markets Act.

Another challenge will be how to deal with the increasing popularity of artificial intelligence amid concerns about Big Tech leveraging its existing dominance.

Ribera may ramp up a crackdown on non-EU state subsidies begun by Vestager aimed at preventing foreign companies from acquiring EU businesses or taking part in EU public tenders with unfair state support.

Recent rulings from Europe’s highest court, which backed the Commission’s $14.5 billion tax order to Apple, and its $2.7 billion antitrust fine against Google, could embolden Ribera to take a tough line against antitrust violations.

That would mean she would be in no hurry to ease up on antitrust rules, despite Mario Draghi’s call to boost EU industrial champions so that they are able to compete with U.S. and Chinese competitors.

Ribera was also named on Tuesday as executive vice president of a clean, just and competitive energy transition, tasked with ensuring that Europe achieves its green goals.

Her credentials include negotiating deals last year among EU countries on emissions limits for trucks and a contentious upgrade of EU power market rules.

 

France uses tough, untested cybercrime law to target Telegram’s Durov

PARIS — When French prosecutors took aim at Telegram boss Pavel Durov, they had a trump card to wield – a tough new law with no international equivalent that criminalizes tech titans whose platforms allow illegal products or activities.

The so-called LOPMI law, enacted in January 2023, has placed France at the forefront of a group of nations taking a sterner stance on crime-ridden websites. But the law is so recent that prosecutors have yet to secure a conviction.

With the law still untested in court, France’s pioneering push to prosecute figures like Durov could backfire if its judges balk at penalizing tech bosses for alleged criminality on their platforms.

A French judge placed Durov under formal investigation last month, charging him with various crimes, including the 2023 offence: “Complicity in the administration of an online platform to allow an illicit transaction, in an organized gang,” which carries a maximum 10-year sentence and a $556,300 fine.

Being under formal investigation does not imply guilt or necessarily lead to trial, but indicates judges think there’s enough evidence to proceed with the probe. Investigations can last years before being sent to trial or dropped.

Durov, out on bail, denies Telegram was an “anarchic paradise.” Telegram has said it “abides by EU laws,” and that it’s “absurd to claim that a platform or its owner are responsible for abuse of that platform.”

In a radio interview last week, Paris Prosecutor Laure Beccuau hailed the 2023 law as a powerful tool for battling organized crime groups who are increasingly operating online.

The law appears to be unique. Eight lawyers and academics told Reuters they were unaware of any other country with a similar statute.

“There is no crime in U.S. law directly analogous to that and none that I’m aware of in the Western world,” said Adam Hickey, a former U.S. deputy assistant attorney general who established the Justice Department’s (DOJ) national security cyber program.

Hickey, now at U.S. law firm Mayer Brown, said U.S. prosecutors could charge a tech boss as a “co-conspirator or an aider and abettor of the crimes committed by users” but only if there was evidence the “operator intends that its users engage in, and himself facilitates, criminal activities.”

He cited the 2015 conviction of Ross Ulbricht, whose Silk Road website hosted drug sales. U.S. prosecutors argued Ulbricht “deliberately operated Silk Road as an online criminal marketplace … outside the reach of law enforcement,” according to the DOJ. Ulbricht got a life sentence.

Timothy Howard, a former U.S. federal prosecutor who put Ulbricht behind bars, was “skeptical” Durov could be convicted in the United States without proof he knew about the crimes on Telegram, and actively facilitated them – especially given Telegram’s vast, mainly law-abiding user base.

“Coming from my experience of the U.S. legal system,” he said, the French law appears “an aggressive theory.”

Michel Séjean, a French professor of cyber law, said the toughened legislation in France came after authorities grew exasperated with companies like Telegram.

“It’s not a nuclear weapon,” he said. “It’s a weapon to prevent you from being impotent when faced with platforms that don’t cooperate.”

Tougher laws

The 2023 law traces its origins to a 2020 French interior ministry white paper, which called for major investment in technology to tackle growing cyber threats.

It was followed by a similar law in November 2023, which included a measure allowing the real-time geolocation of people suspected of serious crimes by remotely activating their devices. A proposal to turn on suspects’ device cameras and microphones so that investigators could watch or listen in was struck down by France’s Constitutional Council.

These new laws have given France some of the world’s toughest tools for tackling cybercrime, as demonstrated by Durov’s arrest on French soil, said Sadry Porlon, a French lawyer specializing in communication technology law.

Tom Holt, a cybercrime professor at Michigan State University, said LOPMI “is a potentially powerful and effective tool if used properly,” particularly in probes into child sexual abuse images, credit card trafficking and distributed denial of service attacks, which target businesses or governments.

Armed with fresh legislative powers, the ambitious J3 cybercrime unit at the Paris prosecutor’s office, which is overseeing the Durov probe, is now involved in some of France’s most high-profile cases.

In June, the J3 unit shut down Coco, an anonymized chat forum cited in over 23,000 legal proceedings since 2021 for crimes including prostitution, rape and homicide.

Coco played a central role in a current trial that has shocked France.

Dominique Pelicot, 71, is accused of recruiting dozens of men on Coco to rape his wife, whom he had knocked out with drugs. Pelicot, who is expected to testify this week, has admitted his guilt, while 50 other men are on trial for rape.

Coco’s owner, Isaac Steidel, is suspected of a crime similar to the one Durov is charged with: “Provision of an online platform to allow an illicit transaction by an organized gang.”

Steidel’s lawyer, Julien Zanatta, declined to comment.

AI videos of US leaders singing Chinese go viral in China

WASHINGTON — “I love you, China. My dear mother,” former U.S. President Donald Trump, standing in front of a mic at a lectern, appears to sing in perfect Mandarin.

“I cry for you, and I also feel proud for you,” Vice President Kamala Harris, Trump’s Democratic opponent in this year’s election, appears to respond, also in perfect Mandarin. Trump lets out a smile as he listens to the lyrics.

The video has received thousands of likes and tens of thousands of reposts on Douyin, the Chinese version of TikTok.

“These two are almost as Chinese as it gets,” one comment says.

Neither Trump nor Harris knows Mandarin. And the duet shown in the video has never happened. But recently, deepfake videos, frequently featuring top U.S. leaders, including President Joe Biden, singing Chinese pop songs, have gone viral on the Chinese internet.

Some of the videos have found their way to social media platforms not available in China, such as Instagram, TikTok and X.

U.S. intelligence officials and experts have long warned that China and other foreign adversaries are using generative AI in disinformation efforts to disrupt and influence the 2024 presidential election.

“There has been an increased use of Chinese AI-generated content in recent months, attempting to influence and sow division in the U.S. and elsewhere,” a Microsoft report on China’s disinformation threat said in April.

Few of the people who saw the videos of the American leaders singing in Chinese, however, were convinced that they were real, based on what users wrote in the comments. The videos themselves do not contain misinformation, either.

Instead, these videos and their popularity reflect, at least in part, a sense of cultural confidence among Chinese netizens in an age of ever-intensifying U.S.-China competition, observers told VOA Mandarin.

By making the likes of Biden and Trump sing whatever Chinese songs the creators of the videos want them to sing, they can “culturally domesticate powerful Americans,” said Alexa Pan, a researcher on China’s AI industry for ChinaTalk, an influential newsletter about China and technology.

“Making fun of U.S. leaders might be especially politically acceptable to and popular with Chinese viewers,” she said.

Political opponents sing about friendship

Videos of American leaders singing in Chinese started to spread on Chinese social media in May. In many of the videos featuring Biden and Trump, creators made the two politically opposed men sing songs about friendship.

After Biden announced his withdrawal from the presidential race in July, one viral video had him sing to Trump, “Actually I don’t want to leave. Actually, I want to stay. I want to stay with you through every spring, summer, autumn and winter,” to which Trump appeared to sing, “You have to believe me. It won’t take long before we can spend our whole life together.”

“Crying eyes,” one Chinese netizen commented sarcastically. “They must have gotten along really well.”

Another such video posted on Instagram received mostly positive reactions. Some users said it was a stark contrast to the bitterness that has permeated U.S. politics.

“Made me laugh,” an Instagram user wrote. “Wouldn’t that be so refreshing to actually have them sing like that together?”

Easy to make

After reviewing some of the videos, Pan, of ChinaTalk, told VOA Mandarin that she believes they were quite easy to make.

Obvious flaws in the videos, including body parts occasionally blending into the background, suggest they were created with simple AI technology, Pan said.

“One could generate these videos on the many AI text-to-video generation platforms available in China,” she wrote in an e-mail.

On the Chinese internet, there are countless tutorials on how to make AI-generated videos using popular lip-syncing AI models, such as MuseTalk, released by Chinese tech giant Tencent, and SadTalker, developed by Xi’an Jiaotong University, a research-focused university in northwestern China.

One Douyin account reviewed by VOA Mandarin has pumped out over 200 videos of American leaders singing in Chinese since May. One of the account’s videos was even reposted by the Iranian embassy.

Chinese leaders off-limits

The release of ChatGPT by OpenAI in 2022 triggered a global AI frenzy, with China among the leading countries developing the technology. The United Nations said in July that China had filed the most patent applications for generative AI, with the U.S. a distant second.

On the Chinese internet, the obsession has been particularly strong with deepfakes, which can be used to manipulate videos, images and audio of people to make them appear to say or sing things that they have not actually uttered.

Some deepfake videos are made mostly for fun, as is the case with Biden and Trump singing Chinese songs. But there have also been abuses of the technology. Earlier this year, web users in China stole a Ukrainian girl’s image and turned her into a “Russian beauty” to sell goods online.

China has released strict regulations on deepfakes. A 2022 law states that the technology cannot be used to “endanger the national security and interests, harm the image of the nation, harm the societal public interest, disturb economic or social order, or harm the lawful rights and interests of others.”

Yang Han, an Australian commentator who used to work for China’s Foreign Ministry, told VOA Mandarin that the prominence of U.S. leaders and the absence of Chinese leaders in these viral AI videos reflect a lack of political free speech in China.

He said that it reminds him of a joke that former U.S. President Ronald Reagan used to tell during the Cold War.

“An American and a Russian compare with each other whose country has more freedom,” Yang said, relaying the joke. “The American says he can stand in front of the White House and call Reagan stupid. The Russian dismisses it and says he can also stand in front of the Kremlin and call Reagan stupid.”

Robot begins removing Fukushima nuclear plant’s melted fuel

TOKYO — An extendable robot entered a damaged reactor at Japan’s Fukushima nuclear power plant on Tuesday, beginning a two-week, high-stakes mission to retrieve, for the first time, a tiny amount of melted fuel debris from the bottom.

The robot’s trip into the Unit 2 reactor is a crucial initial step for what comes next — a daunting, decades-long process to decommission the plant and deal with large amounts of highly radioactive melted fuel inside three reactors that were damaged by a massive earthquake and tsunami in 2011. Specialists hope the robot will help them learn more about the status of the cores and the fuel debris.

Here is an explanation of how the robot works, its mission, significance and what lies ahead as the most challenging phase of the reactor cleanup begins.

What is the fuel debris?

Nuclear fuel in the reactor cores melted after the magnitude 9.0 earthquake and tsunami in March 2011 caused the Fukushima Daiichi nuclear plant’s cooling systems to fail. The melted fuel dripped down from the cores and mixed with internal reactor materials such as zirconium, stainless steel, electrical cables, broken grates and concrete around the supporting structure and at the bottom of the primary containment vessels.

The reactor meltdowns caused the highly radioactive, lava-like material to spatter in all directions, greatly complicating the cleanup. The condition of the debris also differs in each reactor.

Tokyo Electric Power Company Holdings, or TEPCO, which manages the plant, says an estimated 880 tons of molten fuel debris remains in the three reactors, but some experts say the amount could be larger.

What is the robot’s mission?

Workers will use five 1.5-meter-long pipes connected in sequence to maneuver the robot through an entry point in the Unit 2 reactor’s primary containment vessel. The robot itself can extend about 6 meters inside the vessel. Once inside, it will be maneuvered remotely by operators at another building at the plant because of the fatally high radiation emitted by the melted debris.

The front of the robot, equipped with tongs, a light and a camera, will be lowered by a cable to a mound of melted fuel debris. It will then snip off and collect a bit of the debris — less than 3 grams. The small amount is meant to minimize radiation dangers.

The robot will then back out to the place it entered the reactor, a roundtrip journey that will take about two weeks.

The mission takes that long because the robot must make extremely precise maneuvers to avoid hitting obstacles or getting stuck in passageways. That has happened to earlier robots.

TEPCO is also limiting daily operations to two hours to minimize the radiation risk for workers in the reactor building. Eight six-member teams will take turns, with each group allowed to stay a maximum of about 15 minutes.

What do officials hope to learn?

Sampling the melted fuel debris is “an important first step,” said Lake Barrett, who led the cleanup after the 1979 disaster at the U.S. Three Mile Island nuclear plant for the Nuclear Regulatory Commission and is now a paid adviser for TEPCO’s Fukushima decommissioning.

While the melted fuel debris has been kept cool and has stabilized, the aging of the reactors poses potential safety risks, and the melted fuel needs to be removed and relocated to a safer place for long-term storage as soon as possible, experts say.

An understanding of the melted fuel debris is essential to determine how best to remove it, store it and dispose of it, according to the Japan Atomic Energy Agency.

Experts expect the sample will also provide more clues about how exactly the meltdown 13 years ago played out, some of which is still a mystery.

The melted fuel sample will be kept in secure canisters and sent to multiple laboratories for more detailed analysis. If the radiation level exceeds a set limit, the robot will take the sample back into the reactor.

“It’s the start of a process. It’s a long, long road ahead,” Barrett said in an online interview. “The goal is to remove the highly radioactive material, put it into engineered canisters … and put those in storage.”

For this mission, the robot’s small tong can only reach the upper surface of the debris. The pace of the work is expected to pick up in the future as more experience is gained and robots with additional capabilities are developed.

What’s next?

TEPCO will have to “probe down into the debris pile, which is over a meter thick, so you have to go down and see what’s inside,” Barrett said, noting that at Three Mile Island, the debris on the surface was very different from the material deeper inside. He said multiple samples from different locations must be collected and analyzed to better understand the melted debris and develop necessary equipment, such as stronger robots for future larger-scale removal.

Compared to collecting a tiny sample for analysis, it will be a more difficult challenge to develop and operate robots that can cut larger chunks of melted debris into pieces and put that material into canisters for safe storage.

There are also two other damaged reactors, Unit 1 and Unit 3, which are in worse condition and will take even longer to deal with. TEPCO plans to deploy a set of small drones in Unit 1 for a probe later this year and is developing even smaller “micro” drones for Unit 3, which is filled with a larger amount of water.

Separately, hundreds of spent fuel rods remain in unenclosed cooling pools on the top floors of Units 1 and 2, a potential safety risk if there is another major quake. Removal of spent fuel rods has been completed at Unit 3.

When will the decommissioning be finished?

Removal of the melted fuel was initially planned to start in late 2021 but has been delayed by technical issues, underscoring the difficulty of the process. The government says decommissioning is expected to take 30-40 years, while some experts say it could take as long as 100 years.

Others are pushing for an entombment of the plant, as at Chernobyl after its 1986 explosion, to reduce radiation levels and risks for plant workers.

That won’t work at the seaside Fukushima plant, Barrett says.

“You’re in a high seismic area, you’re in a high-water area, and there are a lot of unknowns in those (reactor) buildings,” he said. “I don’t think you can just entomb it and wait.”

Apple faces challenges in Chinese market against Huawei’s tri-fold phone

Taipei, Taiwan — The U.S.-China technology war is playing out in the smartphone market in China, where global rivals Apple and Huawei released new phones this week. Industry experts say Apple, which lacks home-field advantage, faces many challenges in defending its market share in the country.

The biggest highlight of the iPhone 16 is its artificial intelligence system, dubbed Apple Intelligence, while the Huawei Mate XT features innovative tri-fold screen technology. But at a starting price of RMB 19,999, about $2,810, the Mate XT will cost about three times as much as the iPhone 16.

According to data from VMall, Huawei’s official shopping site, nearly 5.74 million people in China preordered the Mate XT as of late Thursday, 5½ days after Huawei began accepting preorders.

But in a survey conducted on the Chinese microblogging site Weibo by Radio France International, half of the 9,200 respondents said they would not purchase a Mate XT because the price is prohibitive. An additional 3,500 said they are not in the market for a new phone now.

“I suggest that Huawei release some products that ordinary people can afford,” a Weibo user wrote under the name “Diamond Man Yang Dong Feng.”

The iPhone 16 is not available for preorder until Friday, but some e-commerce vendors in China have promised to deliver the new devices to consumers within half a day to two days of sale.

In the competition between Apple and Huawei, the iPhone 16 has some inherent disadvantages, said Shih-Fang Chiu, a senior industry analyst at the Taiwan Institute of Economic Research.

“Apple’s strength is information security and privacy, but this is difficult to achieve in the Chinese market, where the government can control the data in China’s market to a relatively high degree. In the era of AI mobile phones, this will bring challenges to Apple’s development in the Chinese market,” Chiu said.

Apple’s AI service on the iPhone 16 will roll out gradually, first in English and in other languages later this year. The Chinese version will not be available until 2025.

There are other challenges Apple faces as well, Chiu added, such as regulatory controls, consumer sentiment favoring local brands and weakening spending power amid China’s economic slowdown.

According to Counterpoint Research’s statistics, Huawei held a market share of 15% in the second quarter of 2024, surpassing Apple’s 14% market share. That compares with Apple’s 17.3% share in 2023 as reported by the industry research firm International Data Corporation China, or IDC China.

Ryan Reith, program vice president for IDC’s Mobile Device Tracker suite, said in a written response to VOA that the iPhone 16 has not made significant hardware upgrades, and that AI applications alone are not attractive because consumers already have GPT and other AI solutions.

AI applications present another hurdle. Analyst Chih-Yen Tai said the iPhone 16’s AI services involve personal data collection, information application and cloud computing, which will require collaboration with Chinese service providers.

That, along with bans in recent years on Chinese civil servants and employees at state-owned enterprises using their iPhones at work, will affect sales of Apple products, said Tai, the deputy director of the Center for Science and Technology Policy Evaluation at Chung-Hua Institution for Economic Research in Taipei.

“China’s patriotism has led to a strong number of preorders” for Huawei’s tri-fold phones, Tai said.

“The competitors in China will sell the idea [to consumers] that iPhones will soon be edged out of the premium smartphone market. So, in the next stage, the affordable iPhone versions will be the key to whether it [Apple] can return to China or its previous glorious sales era,” Tai said.

Tzu-Ang Chen, a senior consultant in the digital technology industry in Taipei, said use of Huawei’s HarmonyOS operating system surpassed that of Apple’s iOS in China in the first quarter of this year, representing China’s determination to “go its own way” and create “one world, two systems.”

“The U.S.-China technology war has extended to smartphones,” Chen said. “iPhone sales in China will get worse and worse, obviously because Huawei is doing better, and coupled with patriotism, Apple’s position in the hearts of 1.4 billion people will never return.”

He said that as China seeks to develop pro-China markets among member countries of the Belt and Road Initiative in Southeast Asia, the Middle East and Africa, China-made mobile phones may become their first choice.

VOA’s Adrianna Zhang contributed to this report.