New Zealand to loosen gene editing regulation, make commercialization easier

WELLINGTON, New Zealand — The New Zealand government said Tuesday that it would introduce new legislation to make it easier for companies and researchers to develop and commercialize products using gene technologies such as gene editing. 

Science, Innovation and Technology Minister Judith Collins said in a statement that rules and time-consuming processes have made research outside the lab almost impossible. 

“These changes will bring New Zealand up to global best practice and ensure we can capitalize on the benefits,” she said. 

Under current regulations, genetically modified organisms (GMOs) cannot be released from containment without going through a complex and rigorous approval process, and the required standard is difficult to meet. Furthermore, gene editing is treated the same as genetic modification even when it doesn’t involve the introduction of foreign DNA. 

Under the new law, low-risk gene editing techniques that produce changes indistinguishable from conventional breeding will be exempt from regulation, local authorities will no longer be able to prevent the use of GMOs in their regions, and a new regulator will oversee the industry. 

“This is a major milestone in modernizing gene technology laws to enable us to improve health outcomes, adapt to climate change, deliver massive economic gains and improve the lives of New Zealanders,” Collins said. 

The government hopes to have the legislation passed and the regulator in operation by the end of 2025.

China test-flies biggest cargo drone as low-altitude economy takes off

BEIJING — Engineers sent China’s biggest-yet cargo drone on a test run over the weekend while a helicopter taxi took to the skies on a soon-to-open 100-km route to Shanghai, marking new milestones for the country’s expanding low-altitude economy.

Packing a payload capacity of 2 metric tons, the twin-engine aircraft took off on Sunday for an inaugural flight of about 20 minutes in southwestern Sichuan province, state media said, citing developer Sichuan Tengden Sci-tech Innovation Co.

China’s civilian drone makers are testing larger payloads as the government pushes to build a low-altitude economy, with the country’s aviation regulator envisioning a $279-billion industry by 2030, a fourfold expansion from 2023.

The Tengden-built drone, with a wingspan of 16.1 meters and a height of 4.6 meters, is slightly larger than the world’s most popular light aircraft, the four-seat Cessna 172.

The trial run followed the maiden flight in June of a cargo drone developed by state-owned Aviation Industry Corp of China (AVIC), the leading aerospace enterprise.

AVIC’s HH-100 has a payload capacity of 700 kilograms and a flight radius of 520 km. Next year, AVIC plans to test its biggest cargo drone, the TP2000, which can carry up to 2 tons of cargo and fly four times farther than the HH-100.

China has already begun commercial deliveries by drone.

In May, cargo drone firm Phoenix Wings, part of delivery giant SF Express, started delivering fresh fruit from the island province of Hainan to southern Guangdong, using Fengzhou-90 drones developed by SF, a unit of S.F. Holding 002352.SZ.

Cargo drones promise shorter delivery times and lower transport costs, Chinese industry insiders say, while widening deliveries to sites lacking conventional aviation facilities, such as rooftop spaces in heavily built-up cities.

They could also ferry people on taxi services.

China’s drivers fret as robotaxis pick up pace – and passengers

WUHAN, China — Liu Yi is among China’s 7 million ride-hailing drivers. A 36-year-old Wuhan resident, he started driving part-time this year when construction work slowed in the face of a nationwide glut of unsold apartments.

Now he predicts another crisis as he stands next to his car watching neighbors order driverless taxis.

“Everyone will go hungry,” he said of Wuhan drivers competing against robotaxis from Apollo Go, a subsidiary of technology giant Baidu 9888.HK.

Baidu and the Ministry of Industry and Information Technology declined comment.

Ride-hailing and taxi drivers are among the first workers globally to face the threat of job loss from artificial intelligence as thousands of robotaxis hit Chinese streets, economists and industry experts said.

Self-driving technology remains experimental, but China has moved aggressively to green-light trials compared with the U.S., which is quick to launch investigations and suspend approvals after accidents.

At least 19 Chinese cities are running robotaxi and robobus tests, disclosures show. Seven have approved tests without human-driver monitors by at least five industry leaders: Apollo Go, Pony.ai, WeRide, AutoX and SAIC Motor 600104.SS.

Apollo Go has said it plans to deploy 1,000 robotaxis in Wuhan by year-end and operate in 100 cities by 2030.

Pony.ai, backed by Japan’s Toyota Motor 7203.T, operates 300 robotaxis and plans 1,000 more by 2026. Its vice president has said robotaxis could take five years to become sustainably profitable, at which point they will expand “exponentially.”

WeRide is known for autonomous taxis, vans, buses and street sweepers. AutoX, backed by e-commerce leader Alibaba Group 9988.HK, operates in cities including Beijing and Shanghai. SAIC has been operating robotaxis since the end of 2021.

“We’ve seen an acceleration in China. There’s certainly now a rapid pace of permits being issued,” said Boston Consulting Group managing director Augustin Wegscheider. “The U.S. has been a lot more gradual.”

Alphabet’s GOOGL.O Waymo is the only U.S. firm operating uncrewed robotaxis that collect fares. It has over 1,000 cars in San Francisco, Los Angeles and Phoenix but could grow to “thousands,” said a person with knowledge of its operations.

Cruise, backed by General Motors GM.N, restarted testing in April after one of its vehicles hit a pedestrian last year.

Cruise said it operates in three cities with safety its core mission. Waymo did not respond to a request for comment.

“There’s a clear contrast between U.S. and China” with robotaxi developers facing far more scrutiny and higher hurdles in the U.S., said former Waymo CEO John Krafcik.

Robotaxis spark safety concerns in China, too, but fleets proliferate as authorities approve testing to support economic goals. Last year, President Xi Jinping called for “new productive forces,” setting off regional competition.

Beijing announced testing in limited areas in June and Guangzhou said this month it would open roads citywide to self-driving trials.

Some Chinese firms have sought to test autonomous cars in the U.S. but the White House is set to ban vehicles with China-developed systems, said people briefed on the matter.

Boston Consulting’s Wegscheider compared China’s push to develop autonomous vehicles to its support of electric vehicles.

“Once they commit,” he said, “they move pretty fast.”

‘Stupid radishes’

China has 7 million registered ride-hailing drivers, up from 4.4 million two years ago, official data showed. With ride-hailing providing last-resort jobs during the economic slowdown, the side effects of robotaxis could prompt the government to tap the brakes, economists said.

In July, discussion of job loss from robotaxis soared to the top of social media searches with hashtags including, “Are driverless cars stealing taxi drivers’ livelihoods?”

In Wuhan, Liu and other ride-hailing drivers call Apollo Go vehicles “stupid radishes” – a pun on the brand’s name in local dialect – saying they cause traffic jams.

Liu worries, too, about the impending introduction of Tesla’s TSLA.O “Full Self-Driving” system – which still requires human drivers – and the automaker’s robotaxi ambitions.

“I’m afraid that after the radishes come,” he said, “Tesla will come.”

Wuhan driver Wang Guoqiang, 63, sees a threat to workers who can least afford disruption.

“Ride-hailing is work for the lowest class,” he said, as he watched an Apollo Go vehicle park in front of his taxi. “If you kill off this industry, what is left for them to do?”

Baidu declined to comment on the drivers’ concerns and referred Reuters to comments in May by Chen Zhuo, Apollo Go’s general manager. Chen said the firm would become “the world’s first commercially profitable” autonomous-driving platform.

Apollo Go loses almost $11,000 a car annually in Wuhan, Haitong International Securities estimated. A lower-cost model could enable per-vehicle annual profit of nearly $16,000, the securities firm said. By contrast, a ride-hailing car earns about $15,000 total for the driver and platform.

‘Already at the forefront’

Automating jobs could benefit China in the long run given a shrinking population, economists said.

“In the short run, there must be a balance in speed between the creation of new jobs and the destruction of old jobs,” said Tang Yao, associate professor of applied economics at Peking University. “We do not necessarily need to push at the fastest speed, as we are already at the forefront.”

Eastern Pioneer Driving School 603377.SS has more than halved its number of instructors since 2019, to about 900. Instead, it has teachers at a Beijing control center remotely monitoring students in 610 cars equipped with computer instruction tools.

Computers score students on every wheel turn and brake tap, and virtual reality simulators coach them on navigating winding roads. Massive screens provide real-time analysis of driver tasks, such as one student’s 82% parallel-parking pass rate.

Zhang Yang, the school’s intelligent-training director, said the machines have done well.

“The efficiency, pass rate and safety awareness have greatly improved.”

UN approves its first treaty targeting cybercrime

UNITED NATIONS — U.N. member states on Thursday approved a treaty targeting cybercrime, the body’s first such text, despite fierce opposition from human rights activists who have warned of potential surveillance dangers.

After three years of negotiations and a final two-week session in New York, members approved the United Nations Convention Against Cybercrime by consensus, and it will now be submitted to the General Assembly for formal adoption.

“I consider the documents … adopted. Thank you very much, bravo to all!” Algerian diplomat Faouzia Boumaiza Mebarki, chairwoman of the treaty drafting committee, said to applause.

The committee was set up, despite U.S. and European opposition, following an initial move in 2017 by Russia.

The new treaty would enter into force once it has been ratified by 40 member nations and aims to “prevent and combat cybercrime more efficiently and effectively,” notably regarding child sexual abuse imagery and money laundering.

Hailing a “landmark convention,” South Africa’s delegate said, “the provisions of technical assistance and capacity building offer much needed support to countries with less developed cyber infrastructures.”

But the treaty’s detractors — an unusual alliance of human rights activists and big tech companies — denounce it as being far too broad in scope, claiming it could amount to a global “surveillance” treaty and be used for repression.

In particular, the text provides that a state may, in order to investigate any crime punishable by a minimum of four years’ imprisonment under its domestic law, ask the authorities of another nation for any electronic evidence linked to the crime, and also request data from internet service providers.

Warning of an “unprecedented multilateral tool for surveillance,” Deborah Brown of Human Rights Watch told AFP the treaty “will be a disaster for human rights and is a dark moment for the UN.”

“This treaty is effectively a legal instrument of repression,” she said. “It can be used to crack down on journalists, activists, LGBT people, free thinkers, and others across borders.”

Human rights clause

Nick Ashton-Hart heads the Cybersecurity Tech Accord delegation to the treaty talks, representing more than 100 technology companies, including Microsoft and Meta.

“Regrettably,” he said Thursday, the committee “adopted a convention without addressing many of the major flaws identified by civil society, the private sector, or even the U.N.’s own human rights body.”

“Wherever it is implemented the Convention will be harmful to the digital environment generally and human rights in particular,” he told AFP, calling for nations not to sign or implement it.

Some nations, however, are complaining that the treaty actually includes too many human rights safeguards.

A few days ago, Russia, which has historically supported the drafting process, complained the treaty had become “oversaturated with human rights safeguards,” while accusing countries of pursuing “narrow self-serving goals under the banner of democratic values.”

During Thursday’s session, Iran attempted to have several clauses with “inherent flaws” deleted.

One paragraph in question stipulated that “nothing in this Convention shall be interpreted as permitting suppression of human rights or fundamental freedoms,” such as “freedoms of expression, conscience, opinion, religion or belief.”

The deletion request was rejected with 102 votes against, 23 in favor (including Russia, India, Sudan, Venezuela, Syria, North Korea and Libya) and 26 abstentions.

Neither Iran nor any other country, however, chose to prevent approval by consensus. 

Microsoft: Iran accelerating cyber activity in apparent bid to influence US election

NEW YORK — Iran is ramping up online activity that appears intended to influence the upcoming U.S. election, in one case targeting a presidential campaign with an email phishing attack, Microsoft said Friday.

Iranian actors also have spent recent months creating fake news sites and impersonating activists, laying the groundwork to stoke division and potentially sway American voters this fall, especially in swing states, the technology giant found.

The findings in Microsoft’s newest threat intelligence report show how Iran, which has been active in recent U.S. campaign cycles, is evolving its tactics for another election that’s likely to have global implications. The report goes a step beyond anything U.S. intelligence officials have disclosed, giving specific examples of Iranian groups and the actions they have taken so far. Iran’s United Nations mission denied it had plans to interfere or launch cyberattacks in the U.S. presidential election.

The report doesn’t specify Iran’s intentions besides sowing chaos in the United States, though U.S. officials have previously hinted that Iran particularly opposes former President Donald Trump. U.S. officials also have expressed alarm about Tehran’s efforts to seek retaliation for a 2020 strike on an Iranian general that was ordered by Trump. This week, the Justice Department unsealed criminal charges against a Pakistani man with ties to Iran who’s alleged to have hatched assassination plots targeting multiple officials, potentially including Trump.

The report also reveals how Russia and China are exploiting U.S. political polarization to advance their own divisive messaging in a consequential election year.

Microsoft’s report identified four examples of recent Iranian activity that the company expects to increase as November’s election draws closer.

First, a group linked to Iran’s Revolutionary Guard in June targeted a high-ranking U.S. presidential campaign official with a phishing email, a form of cyberattack often used to gather sensitive information, according to the report, which didn’t identify which campaign was targeted. The group concealed the email’s origins by sending it from the hacked email account of a former senior adviser, Microsoft said.

Days later, the Iranian group tried to log into an account that belonged to a former presidential candidate but wasn’t successful, Microsoft’s report said. The company notified those who were targeted.

In a separate example, an Iranian group has been creating websites that pose as U.S.-based news sites targeted to voters on opposite sides of the political spectrum, the report said.

One fake news site that lends itself to a left-leaning audience insults Trump by calling him “raving mad” and suggests he uses drugs, the report said. Another site meant to appeal to Republican readers centers on LGBTQ issues and gender-affirming surgery.

A third example Microsoft cited found that Iranian groups are impersonating U.S. activists, potentially laying the groundwork for influence operations closer to the election.

Finally, another Iranian group in May compromised an account owned by a government employee in a swing state, the report said. It was unclear whether that cyberattack was related to election interference efforts.

Iran’s U.N. mission sent The Associated Press an emailed statement: “Iran has been the victim of numerous offensive cyber operations targeting its infrastructure, public service centers, and industries. Iran’s cyber capabilities are defensive and proportionate to the threats it faces. Iran has neither the intention nor plans to launch cyber attacks. The U.S. presidential election is an internal matter in which Iran does not interfere.”

The Microsoft report said that as Iran escalates its cyber influence, Russia-linked actors also have pivoted their influence campaigns to focus on the U.S. election, while actors linked to the Chinese Communist Party have taken advantage of pro-Palestinian university protests and other current events in the U.S. to try to raise U.S. political tensions.

Microsoft said it has continued to monitor how foreign foes are using generative AI technology. The increasingly cheap and easy-to-access tools can generate lifelike fake images, photos and videos in seconds, prompting concern among some experts that they will be weaponized to mislead voters this election cycle.

While many countries have experimented with AI in their influence operations, the company said, those efforts haven’t had much impact so far. The report said as a result, some actors have “pivoted back to techniques that have proven effective in the past — simple digital manipulations, mischaracterization of content, and use of trusted labels or logos atop false information.”

Microsoft’s report aligns with recent warnings from U.S. intelligence officials, who say America’s adversaries appear determined to seed the internet with false and incendiary claims ahead of November’s vote.

Top intelligence officials said last month that Russia continues to pose the greatest threat when it comes to election disinformation, while there are indications that Iran is expanding its efforts and China is proceeding cautiously when it comes to 2024.

Iran’s efforts seem aimed at undermining candidates seen as being more likely to increase tension with Tehran, the officials said. That’s a description that fits Trump, whose administration ended a nuclear deal with Iran, reimposed sanctions and ordered the killing of the top Iranian general.

The influence efforts also coincide with a time of high tensions between Iran and Israel, whose military the U.S. strongly supports.

Director of National Intelligence Avril Haines said last month that the Iranian government has covertly supported American protests over Israel’s war against Hamas in Gaza. Groups linked to Iran have posed as online activists, encouraged protests and provided financial support to some protest groups, Haines said.

America’s foes, Iran among them, have a long history of seeking to influence U.S. elections. In 2020, groups linked to Iran sent emails to Democratic voters in an apparent effort to intimidate them into voting for Trump, intelligence officials said.

President Maduro suspends X social network in Venezuela for 10 days

CARACAS, Venezuela — President Nicolás Maduro said he has ordered a 10-day block on access to X in Venezuela, accusing the owner Elon Musk of using the social network to promote hatred after the country’s disputed presidential election.

Associated Press journalists in Caracas found that by Thursday night posts had stopped loading on X on two private telephone services and state-owned Movilnet.

“Elon Musk is the owner of X and has violated all the rules of the social network itself,” said Maduro in a speech following a march by pro-government groups. Maduro alleged Musk “has incited hatred.”

Maduro also accused the social network of being used by his opponents to create political unrest.

Venezuela’s president said he had signed a resolution “with the proposal made by CONATEL, the National Telecommunications Commission, which has decided to remove the social network X, formerly known as Twitter, from circulation in Venezuela for 10 days so that they can present their documents.” Maduro did not provide more details about the process taken against X.

X’s press office did not immediately respond to an email from AP requesting comment.

“X out for 10 days! Elon Musk out!” Maduro said.

The president’s announcement comes after Maduro and Musk exchanged accusations over Venezuela’s disputed July 28 presidential election. Electoral authorities declared Maduro the winner but have yet to produce voting tallies. Meanwhile, the opposition claims to have collected records from more than 80% of the 30,000 electronic voting machines nationwide showing the winner was their candidate, Edmundo González.

Musk used the social network to accuse the self-proclaimed socialist leader of a “great electoral fraud.”

“Shame on the dictator Maduro,” Musk said on Monday in a post.

Since the election, Maduro has expressed the need to “regulate” social networks in Venezuela.

Maduro also charged that the platform was used by his adversaries to threaten the families of his followers and political allies, military personnel and police officers, and to generate a state of anxiety in Venezuela.

World’s largest 3D-printed neighborhood nears completion in Texas

GEORGETOWN, Texas — As with any desktop 3D printer, the Vulcan printer pipes out material layer by layer to build an object – except this printer is more than 45 feet (13.7 m) wide, weighs 4.75 tons and prints residential homes.

This summer, the robotic printer from ICON is finishing the last few of 100 3D-printed houses in Wolf Ranch, a community in Georgetown, Texas, about 30 miles from Austin.

ICON began printing the walls of what it says is the world’s largest 3D-printed community in November 2022. Compared to traditional construction, the company says that 3D printing homes is faster, less expensive, requires fewer workers, and minimizes construction material waste.

“It brings a lot of efficiency to the trade market,” said ICON senior project manager Conner Jenkins. “So, where there were maybe five different crews coming in to build a wall system, we now have one crew and one robot.”

After concrete powder, water, sand and other additives are mixed together and pumped into the printer, a nozzle squeezes out the concrete mixture like toothpaste onto a brush, building up layer by layer along a pre-programmed path that creates corduroy-effect walls.

The single-story three- to four-bedroom homes take about three weeks to finish printing, with the foundation and metal roofs installed traditionally.

Jenkins said the concrete walls are designed to be resistant to water, mold, termites and extreme weather.

Lawrence Nourzad, a 32-year-old business development director, and his girlfriend Angela Hontas, a 29-year-old creative strategist, purchased a Wolf Ranch home earlier this summer.

“It feels like a fortress,” Nourzad said, adding that he was confident it would be resilient to most tornadoes.

The walls also provide strong insulation from the Texas heat, the couple said, keeping the interior temperature cool even when the air conditioner wasn’t on full blast.

There was one other thing the 3D-printed walls seemed to protect against, however: a solid wireless internet connection.

“Obviously these are really strong, thick walls. And that’s what provides a lot of value for us as homeowners and keeps this thing really well-insulated in a Texas summer, but signal doesn’t transfer through these walls very well,” Nourzad said.

To alleviate this issue, an ICON spokeswoman said most Wolf Ranch homeowners use mesh internet routers, which broadcast a signal from multiple units placed throughout a home, versus a traditional router which sends a signal from one device.

The 3D-printed homes at Wolf Ranch, called the “Genesis Collection” by developers, range in price from around $450,000 to close to $600,000. Developers said a little more than one quarter of the 100 homes have been sold.

ICON, which 3D-printed its first home in Austin in 2018, hopes to one day take its technology to the Moon. NASA, as part of its Artemis Moon exploration program, has contracted ICON to develop a construction system capable of building landing pads, shelters, and other structures on the lunar surface.

Drones warn New Yorkers about storm dangers

NEW YORK — Gone is the bullhorn. Instead, New York City emergency management officials have turned high-tech, using drones to warn residents about potential threatening weather.

With a buzzing sound in the background, a drone equipped with a loudspeaker flies over homes warning people who live in basement or ground-floor apartments about impending heavy rains.

“Be prepared to leave your location,” said the voice from the sky in footage released Tuesday by the city’s emergency management agency. “If flooding occurs, do not hesitate.”

About five teams with multiple drones each were deployed to specific neighborhoods prone to flooding. Zach Iscol, the city’s emergency management commissioner, said the messages were being relayed in multiple languages. They were expected to continue until the weather impacted the drone flights.

Flash floods have been deadly for New Yorkers living in basement apartments, which can quickly fill up in a deluge. Eleven people drowned in such homes in 2021 amid rain from the remnants of Hurricane Ida.

The drones are in addition to other forms of emergency messaging, including social media, text alerts and a system that reaches more than 2,000 community-based organizations throughout the city that serve senior citizens, people with disabilities and other groups.

“You know, we live in a bubble, and we have to meet people where they are in notifications so they can be prepared,” New York City Mayor Eric Adams said at a press briefing on Tuesday.

Adams is a self-described “tech geek” whose administration has tapped drone technology to monitor large gatherings as well as to search for sharks on beaches. Under his watch, the city’s police department also briefly toyed with using a robot to patrol the Times Square subway station, and it has sometimes deployed a robotic dog to dangerous scenes, including the Manhattan parking garage that collapsed in 2023.

Musk’s X sues advertisers over alleged ‘massive advertiser boycott’

WICHITA FALLS, Texas — Elon Musk’s social media platform X has sued a group of advertisers, alleging that a “massive advertiser boycott” deprived the company of billions of dollars in revenue and violated antitrust laws.

The company formerly known as Twitter filed the lawsuit Tuesday in a federal court in Texas against the World Federation of Advertisers and member companies Unilever, Mars, CVS Health and Orsted.

It accused the advertising group’s brand safety initiative, called the Global Alliance for Responsible Media, of helping to coordinate a pause in advertising after Musk bought Twitter for $44 billion in late 2022 and overhauled its staff and policies.

Musk posted about the lawsuit on X on Tuesday, saying “now it is war” after two years of being nice and “getting nothing but empty words.”

X CEO Linda Yaccarino said in a video announcement that the lawsuit stemmed in part from evidence uncovered by the U.S. House Judiciary Committee, which she said showed a “group of companies organized a systematic illegal boycott” against X.

The Republican-led committee had a hearing last month looking at whether current laws are “sufficient to deter anticompetitive collusion in online advertising.”

The lawsuit’s allegations center on the early days of Musk’s Twitter takeover and not a more recent dispute with advertisers that came a year later.

In November 2023, about a year after Musk bought the company, a number of advertisers began fleeing X over concerns about their ads showing up next to pro-Nazi content and hate speech on the site in general, with Musk inflaming tensions with his own posts endorsing an antisemitic conspiracy theory.

Musk later said those fleeing advertisers were engaging in blackmail and, using a profanity, essentially told them to go away.

The Belgium-based World Federation of Advertisers and representatives for CVS, Orsted, Mars and Unilever didn’t immediately respond to requests for comment Tuesday.

A top Unilever executive testified at last month’s congressional hearing, defending the British consumer goods company’s practice of choosing to put ads on platforms that won’t harm its brand.

“Unilever, and Unilever alone, controls our advertising spending,” said prepared written remarks by Herrish Patel, president of Unilever USA. “No platform has a right to our advertising dollar.”

Google loses massive antitrust case over its search dominance

WASHINGTON — A judge on Monday ruled that Google’s ubiquitous search engine has been illegally exploiting its dominance to squash competition and stifle innovation in a seismic decision that could shake up the internet and hobble one of the world’s best-known companies.

The highly anticipated decision issued by U.S. District Judge Amit Mehta comes nearly a year after the start of a trial pitting the U.S. Justice Department against Google in the country’s biggest antitrust showdown in a quarter century.

After reviewing reams of evidence that included testimony from top executives at Google, Microsoft and Apple during last year’s 10-week trial, Mehta issued his potentially market-shifting decision three months after the two sides presented their closing arguments in early May.

“After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly,” Mehta wrote in his 277-page ruling.

It represents a major setback for Google and its parent, Alphabet Inc., which had steadfastly argued that its popularity stemmed from consumers’ overwhelming desire to use a search engine so good at what it does that it has become synonymous with looking things up online.

Google’s search engine currently processes an estimated 8.5 billion queries per day worldwide, nearly doubling its daily volume from 12 years ago, according to a recent study released by the investment firm BOND.

Google almost certainly will appeal the decision in a process that ultimately may land in the U.S. Supreme Court.

For now, the decision vindicates antitrust regulators at the Justice Department, which filed its lawsuit nearly four years ago while Donald Trump was still president and has been escalating its efforts to rein in Big Tech’s power during President Joe Biden’s administration.

The case depicted Google as a technological bully that methodically has thwarted competition to protect a search engine that has become the centerpiece of a digital advertising machine that generated nearly $240 billion in revenue last year. Justice Department lawyers argued that Google’s monopoly enabled it to charge advertisers artificially high prices while also enjoying the luxury of not having to invest more time and money into improving the quality of its search engine — a lax approach that hurt consumers.

As expected, Mehta’s ruling focused on the billions of dollars Google spends every year to install its search engine as the default option on new cellphones and tech gadgets. In 2021 alone, Google spent more than $26 billion to lock in those default agreements, Mehta said in his ruling.

Google ridiculed those allegations, noting that consumers have historically changed search engines when they become disillusioned with the results they were getting. For instance, Yahoo — now a minor player on the internet — was the most popular search engine during the 1990s before Google came along.

Mehta said the evidence at trial showed the importance of the default settings. He noted that Microsoft’s Bing search engine has 80% share of the search market on the Microsoft Edge browser. The judge said that shows other search engines can be successful if Google is not locked in as the predetermined default option.

Still, Mehta credited the quality of Google’s product as an important part of its dominance, as well, saying flatly that “Google is widely recognized as the best [general search engine] available in the United States.”

Mehta’s conclusion that Google has been running an illegal monopoly sets up another legal phase to determine what sorts of changes or penalties should be imposed to reverse the damage done and restore a more competitive landscape.

Besides boosting Microsoft’s Bing search engine, the outcome could hurt Google at a critical pivot point that is tilting technology in the age of artificial intelligence. Both Microsoft and Google are among the early leaders in AI in a battle that now could be affected by Mehta’s market-rattling decision.

Microsoft CEO Satya Nadella was one of the Justice Department’s star witnesses; his testimony covered his frustration with Google’s deals with the likes of Apple, which made it nearly impossible for the Bing search engine to make any headway even though Microsoft had poured more than $100 billion into improvements since 2009.

“You get up in the morning, you brush your teeth, and you search on Google,” Nadella said at one point in his testimony. “Everybody talks about the open web, but there is really the Google web.”

Nadella also expressed fear that it might take an antitrust crackdown to ensure the situation didn’t get worse as AI becomes a bigger force in search.

Google still faces other legal threats besides this one, both in the U.S. and abroad. In September, a federal trial is scheduled to begin in Virginia over the Justice Department’s allegations that Google’s advertising technology constitutes an illegal monopoly.

Secretaries of state urge Elon Musk to fix AI chatbot spreading election misinformation on X

CHICAGO — Five secretaries of state are urging Elon Musk to fix an AI chatbot on the social media platform X, saying in a letter sent Monday that it has spread election misinformation.

The top election officials from Michigan, Minnesota, New Mexico, Pennsylvania and Washington told Musk that X’s AI chatbot, Grok, produced false information about state ballot deadlines shortly after President Joe Biden dropped out of the 2024 presidential race.

While Grok is available only to subscribers to the premium versions of X, the misinformation was shared across multiple social media platforms and reached millions of people, according to the letter. The bogus ballot deadline information from the chatbot also referenced Alabama, Indiana, Ohio and Texas, although their secretaries of state did not sign the letter. Grok continued to repeat the false information for 10 days before it was corrected, the secretaries said.

The letter urged X to immediately fix the chatbot “to ensure voters have accurate information in this critical election year.” That would include directing Grok to send users to CanIVote.org, a voting information website run by the National Association of Secretaries of State, when asked about U.S. elections.

“In this presidential election year, it is critically important that voters get accurate information on how to exercise their right to vote,” Minnesota Secretary of State Steve Simon said in a statement. “Voters should reach out to their state or local election officials to find out how, when, and where they can vote.”

X did not respond to a request for comment.

Grok debuted last year for X premium and premium plus subscribers and was touted by Musk as a “rebellious” AI chatbot that will answer “spicy questions that are rejected by most other AI systems.”

Social media platforms have faced mounting scrutiny for their role in spreading misinformation, including about elections. The letter also warned that inaccuracies are to be expected for AI products, especially chatbots such as Grok that are based on large language models.

“As tens of millions of voters in the U.S. seek basic information about voting in this major election year, X has the responsibility to ensure all voters using your platform have access to guidance that reflects true and accurate information about their constitutional right to vote,” the secretaries wrote in the letter.

Since Musk bought Twitter in 2022 and renamed it X, watchdog groups have raised concerns over a surge in hate speech and misinformation being amplified on the platform, as well as the reduction of content moderation teams, the elimination of misinformation features and the censoring of journalists critical of Musk.

Experts say the moves represent a regression from progress made by social media platforms attempting to better combat political disinformation after the 2016 U.S. presidential contest and could precipitate a worsening misinformation landscape ahead of this year’s November elections.

US expected to propose barring Chinese software in autonomous vehicles

WASHINGTON — The U.S. Commerce Department is expected to propose barring Chinese software in autonomous and connected vehicles in the coming weeks, according to sources briefed on the matter.

The Biden administration plans to issue a proposed rule that would bar Chinese software in vehicles in the United States with Level 3 automation and above, which would have the effect of also banning testing on U.S. roads of autonomous vehicles produced by Chinese companies.

The administration, in a previously unreported decision, also plans to propose barring vehicles with Chinese-developed advanced wireless communications modules from U.S. roads, the sources added.

Under the proposal, automakers and suppliers would need to verify that none of their connected vehicle or advanced autonomous vehicle software was developed in a “foreign entity of concern” like China, the sources said.

The Commerce Department said last month it planned to issue proposed rules on connected vehicles in August and expected to impose limits on some software made in China and other countries deemed adversaries.

Asked for comment, a Commerce Department spokesperson said on Sunday that the department “is concerned about the national security risks associated with connected technologies in connected vehicles.”

The department’s Bureau of Industry and Security will issue a proposed rule that “will focus on specific systems of concern within the vehicle. Industry will also have a chance to review that proposed rule and submit comments.”

The Chinese Embassy in Washington did not immediately comment, but the Chinese foreign ministry has previously urged the United States “to respect the laws of the market economy and principles of fair competition.” It argues Chinese cars are popular globally because they emerged from fierce market competition and are technologically innovative.

On Wednesday, the White House and State Department hosted a meeting with allies and industry leaders to “jointly address the national security risks associated with connected vehicles,” the department said. Sources said officials disclosed details of the administration’s planned rule.

The meeting included officials from the United States, Australia, Canada, the European Union, Germany, India, Japan, the Republic of Korea, Spain, and the United Kingdom who “exchanged views on the data and cybersecurity risks associated with connected vehicles and certain components.”

Also known as conditional driving automation, Level 3 involves technology that allows drivers to engage in activities behind the wheel, such as watching movies or using smartphones, but only under some limited conditions.

In November, a group of U.S. lawmakers raised alarm about Chinese companies collecting and handling sensitive data while testing autonomous vehicles in the United States and asked questions of 10 major companies including Baidu, Nio, WeRide, Didi Chuxing, Xpeng, Inceptio, Pony.ai, AutoX, Deeproute.ai and Qcraft.

The letters said that in the 12 months ended November 2022, Chinese AV companies test-drove more than 450,000 miles in California. In July 2023, Transportation Secretary Pete Buttigieg said his department had national security concerns about Chinese autonomous vehicle companies in the United States.

The administration is worried about connected vehicles using the driver monitoring system to listen to or record occupants, or to take control of the vehicle itself.

“The national security risks are quite significant,” Commerce Secretary Gina Raimondo said in May. “We decided to take action because this is really serious stuff.”

China’s proposal to create a cyber ID system faces criticism

TAIPEI, Taiwan — Concern is rising among China’s more than 1 billion internet users over a government proposal portrayed as a step to protect their personal information and fight against fraud. Many fear the plan would do the opposite.

China’s Ministry of Public Security and the Cyberspace Administration issued the draft “Measures for the Administration of National Network Identity Authentication Public Services” on July 26.

According to the proposal, Chinese netizens would be able to apply for virtual IDs on a voluntary basis to “minimize the excessive collection and retention of citizens’ personal information by online platforms” and “protect personal information.”

While many netizens appear to agree in their posts that companies have too much access to their personal information, others fear the cyber ID proposal, if implemented, will simply allow the government to more easily track them and control what they can say online.

Beijing lawyer Wang Cailiang said on Weibo: “My opinion is short: I am not in favor of this. Please leave a little room for citizens’ privacy.”

Shortly after the proposal was published, Tsinghua University law professor Lao Dongyan posted on her Weibo account, “The cyber IDs are like installing monitors to watch everyone’s online behavior.”

Her post has since disappeared, along with many other negative comments that can only be found on foreign social media platforms like X and Free Weibo, an anonymous and unblocked search engine established in 2012 to capture and save posts censored by China’s Sina Weibo or deleted by users.

A Weibo user under the name “Liu Jiming” said, “The authorities solemnly announced [the proposal] and solicited public opinions while blocking people from expressing their opinions. This clumsy show of democracy is really shocking.”

Beijing employs a vast network of censors to block and remove politically sensitive content, known by critics as the Great Firewall.

Since 2017, China has required internet service and content providers to verify users’ real names through national IDs, allowing authorities to more easily trace and track online activities and posts to the source.

Chinese internet experts say netizens can make that harder by using others’ accounts, providers, IDs and names on various platforms. But critics fear a single cyber ID would close those gaps in the Great Firewall.

Zola, a network engineer and well-known citizen journalist originally from China’s Hunan province who naturalized in Taiwan, told VOA, “The control of the cyber IDs is a superpower because you don’t only know a netizen’s actual name, but also the connection between the netizen and the cybersecurity ID.”

Mr. Li, a Shanghai-based dissident who did not want to disclose his full name because of the issue’s sensitivity, told VOA that the level of surveillance by China’s internet police has long been beyond imagination. He said the new proposal is a way for authorities to tell netizens that the surveillance will be more overt “just to intimidate and warn you to behave.”

Some netizens fear China could soon change the cyber ID system from a voluntary program to a requirement for online access.

A Weibo user under the name “Fang Zhifu” warned that in the future, if “the cyber ID is revoked, it will be like being sentenced to death in the cyber world.”

Meanwhile, China’s Ministry of Public Security and Cyberspace Administration say they are soliciting public opinion on the cyber ID plan until August 25.

Turkey blocks access to Instagram, gives no reason

ANKARA, Turkey — Turkey’s communications authority blocked access to the social media platform Instagram on Friday, the latest instance of a clampdown on websites in the country.

The Information and Communication Technologies Authority, which regulates the internet, announced the block early Friday but did not provide a reason. Sabah newspaper, which is close to the government, said access was blocked in response to Instagram removing posts by Turkish users that expressed condolences over the killing of Hamas political leader Ismail Haniyeh.

It came days after Fahrettin Altun, the presidential communications director and an aide to President Recep Tayyip Erdogan, criticized the Meta-owned platform for preventing users in Turkey from posting messages of condolence for Haniyeh.

Unlike its Western allies, Turkey does not consider Hamas to be a terror organization. A strong critic of Israel’s military actions in Gaza, Erdogan has described the group as “liberation fighters.”

The country is observing a day of mourning for Haniyeh on Friday, during which flags will be flown at half-staff.

Turkey has a track record of censoring social media and websites. Hundreds of thousands of domains have been blocked since 2022, according to the Freedom of Expression Association, a nonprofit organization that brings together lawyers and human rights activists. The video-sharing platform YouTube was blocked from 2007 to 2010.

Online misinformation fuels tensions over deadly Southport stabbing attack

LONDON — Within hours of a stabbing attack in northwest England that killed three young girls and wounded several more children, a false name of a supposed suspect was circulating on social media. Hours after that, violent protesters were clashing with police outside a nearby mosque.

Police say the name was fake, as were rumors that the 17-year-old suspect was an asylum-seeker who had recently arrived in Britain. Detectives say the suspect charged Thursday with murder and attempted murder was born in the U.K., and British media including the BBC have reported that his parents are from Rwanda.

That information did little to slow the lightning spread of the false name or stop right-wing influencers pinning the blame on immigrants and Muslims.

“There’s a parallel universe where what was claimed by these rumors were the actual facts of the case,” said Sunder Katwala, director of British Future, a think tank that looks at issues including integration and national identity. “And that will be a difficult thing to manage.”

Local lawmaker Patrick Hurley said the result was “hundreds of people descending on the town, descending on Southport from outside of the area, intent on causing trouble — either because they believe what they’ve written, or because they are bad faith actors who wrote it in the first place, in the hope of causing community division.”

One of the first outlets to report the false name, Ali Al-Shakati, was Channel 3 Now, an account on the X social media platform that purports to be a news channel. A Facebook page of the same name says it is managed by people in Pakistan and the U.S. A related website on Wednesday showed a mix of possibly AI-generated news and entertainment stories, as well as an apology for “the misleading information” in its article on the Southport stabbings.

By the time the apology was posted, the incorrect identification had been repeated widely on social media.

“Some of the key actors are probably just generating traffic, possibly for monetization,” said Katwala. The misinformation was then spread further by “people committed to the U.K. domestic far right,” he said.

Governments around the world, including Britain’s, are struggling with how to curb toxic material online. U.K. Home Secretary Yvette Cooper said Tuesday that social media companies “need to take some responsibility” for the content on their sites.

Katwala said that social platforms such as Facebook and X worked to “de-amplify” false information in real time after mass shootings at two mosques in Christchurch, New Zealand, in 2019.

Since Elon Musk, a self-styled free-speech champion, bought X, it has gutted teams that once fought misinformation on the platform and restored the accounts of banned conspiracy theorists and extremists.

Rumors have swirled in the relative silence of police over the attack. Merseyside Police issued a statement saying the reported name for the suspect was incorrect, but have provided little information about him other than his age and birthplace of Cardiff, Wales.

Under U.K. law, suspects are not publicly named until they have been charged and those under 18 are usually not named at all. That has been seized on by some activists to suggest the police are withholding information about the attacker.

Tommy Robinson, founder of the far-right English Defense League, accused police of “gaslighting” the public. Nigel Farage, a veteran anti-immigration politician who was elected to Parliament in this month’s general election, posted a video on X speculating “whether the truth is being withheld from us” about the attack.

Brendan Cox, whose lawmaker wife Jo Cox was murdered by a far-right attacker in 2016, said Farage’s comments showed he was “nothing better than a Tommy Robinson in a suit.”

“It is beyond the pale to use a moment like this to spread your narrative and to spread your hatred, and we saw the results on Southport’s streets last night,” Cox told the BBC.

AI-backed autonomous robots monitor construction progress

The construction industry is finding new uses for artificial intelligence. In a multi-story building project in the northwestern U.S. city of Seattle, autonomous robots are tasked with documenting progress and detecting potential hazards. VOA’s Natasha Mozgovaya has the story.

Manipulated video shared by Musk mimics Harris’ voice, raising concerns about AI in politics

NEW YORK — A manipulated video that mimics the voice of Vice President Kamala Harris saying things she did not say is raising concerns about the power of artificial intelligence to mislead with Election Day about three months away.

The video gained attention after tech billionaire Elon Musk shared it on his social media platform X on Friday evening without explicitly noting it was originally released as parody.

The video uses many of the same visuals as a real ad that Harris, the likely Democratic presidential nominee, released last week launching her campaign. But the video swaps out the voice-over audio with another voice that convincingly impersonates Harris.

“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the voice says in the video. It claims Harris is a “diversity hire” because she is a woman and a person of color, and it says she doesn’t know “the first thing about running the country.” The video retains “Harris for President” branding. It also adds in some authentic past clips of Harris.

Mia Ehrenberg, a Harris campaign spokesperson, said in an email to The Associated Press: “We believe the American people want the real freedom, opportunity and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump.”

The widely shared video is an example of how lifelike AI-generated images, videos or audio clips have been utilized both to poke fun and to mislead about politics as the United States draws closer to the presidential election. It exposes how, as high-quality AI tools have become far more accessible, there remains a lack of significant federal action so far to regulate their use, leaving rules guiding AI in politics largely to states and social media platforms.

The video also raises questions about how to best handle content that blurs the lines of what is considered an appropriate use of AI, particularly if it falls into the category of satire.

The original user who posted the video, a YouTuber known as Mr Reagan, has disclosed both on YouTube and on X that the manipulated video is a parody. But Musk’s post, which has been viewed more than 123 million times, according to the platform, only includes the caption “This is amazing” with a laughing emoji.

X users who are familiar with the platform may know to click through Musk’s post to the original user’s post, where the disclosure is visible. Musk’s caption does not direct them to do so.

While some participants in X’s “community note” feature to add context to posts have suggested labeling Musk’s post, no such label had been added to it as of Sunday afternoon. Some users online questioned whether his post might violate X’s policies, which say users “may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

The policy has an exception for memes and satire as long as they do not cause “significant confusion about the authenticity of the media.”

Musk endorsed former President Donald Trump, the Republican nominee, earlier this month. Neither Mr Reagan nor Musk immediately responded to emailed requests for comment Sunday.

Two experts who specialize in AI-generated media reviewed the fake ad’s audio and confirmed that much of it was generated using AI technology.

One of them, University of California, Berkeley, digital forensics expert Hany Farid, said the video shows the power of generative AI and deepfakes.

“The AI-generated voice is very good,” he said in an email. “Even though most people won’t believe it is VP Harris’ voice, the video is that much more powerful when the words are in her voice.”

He said generative AI companies that make voice-cloning tools and other AI tools available to the public should do better to ensure their services are not used in ways that could harm people or democracy.

Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid, saying he thought many people would be fooled by the video.

“I don’t think that’s obviously a joke,” Weissman said in an interview. “I’m certain that most people looking at it don’t assume it’s a joke. The quality isn’t great, but it’s good enough. And precisely because it feeds into preexisting themes that have circulated around her, most people will believe it to be real.”

Weissman, whose organization has advocated for Congress, federal agencies and states to regulate generative AI, said the video is “the kind of thing that we’ve been warning about.”

Other generative AI deepfakes, in both the U.S. and elsewhere, have sought to influence voters with misinformation, humor or both.

In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig an election and raise the price of beer days before the vote. In Louisiana in 2022, a political action committee’s satirical ad superimposed a Louisiana mayoral candidate’s face onto an actor portraying him as an underachieving high school student.

Congress has yet to pass legislation on AI in politics, and federal agencies have only taken limited steps, leaving most existing U.S. regulation to the states. More than one-third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.

Beyond X, other social media companies also have created policies regarding synthetic and manipulated media shared on their platforms. Users on the video platform YouTube, for example, must reveal whether they have used generative artificial intelligence to create videos or face suspension.

Can tech help solve the Los Angeles homeless crisis? Finding shelter may someday be a click away

LOS ANGELES — Billions of dollars have been spent on efforts to get homeless people off the streets in California, but outdated computer systems with error-filled data are all too often unable to provide even basic information like where a shelter bed is open on any given night, inefficiencies that can lead to dire consequences.

The problem is especially acute in Los Angeles, where more than 45,000 people — many suffering from serious mental illness, substance addictions or both — live in litter-strewn encampments that have spread into virtually every neighborhood, and where rows of rusting RVs line entire blocks.

Even in the state that is home to Silicon Valley, technology has not kept up with the long-running crisis. In an age when anyone can book a hotel room or rent a car with a few strokes on a mobile phone, no system exists that provides a comprehensive listing of available shelter beds in Los Angeles County, home to more than 1 in 5 unhoused people in the U.S.

Mark Goldin, chief technology officer for Better Angels United, a nonprofit group, described L.A.’s technology as “systems that don’t talk to one another, lack of accurate data, nobody on the same page about what’s real and isn’t real.”

The systems can’t answer “exactly how many people are out there at any given time. Where are they?” he said.

For people living on the streets, the ramifications can determine whether someone sleeps another night outside or not, a distinction that can be life-threatening.

“They are not getting the services to the people at the time that those people either need the service, or are mentally ready to accept the services,” said Adam Miller, a tech entrepreneur and CEO of Better Angels.

The problems were evident at a filthy encampment in the city’s Silver Lake neighborhood, where Sara Reyes, executive director of SELAH Neighborhood Homeless Coalition, led volunteers distributing water, socks and food to homeless people, including one who appeared unconscious.

She gave out postcards with the address of a nearby church where the coalition provides hot food and services. A small dog bolted out of a tent, frantically barking, while a disheveled man wearing a jacket on a blistering hot day shuffled by a stained mattress.

At the end of the visit Reyes began typing notes into her mobile phone, which would later be retyped into a coalition spreadsheet and eventually copied again into a federal database.

“Anytime you move it from one medium to another, you can have data loss. We know we are not always getting the full picture,” Reyes said. The “victims are the people the system is supposed to serve.”

The technology has sputtered while the homeless population has soared. Some ask how the problem can be combated without reliable data on its scope. An annual tally of homeless people in the city recently found a slight decline in the population, but some experts question the accuracy of the data, and tents and encampments can be seen just about everywhere.

Los Angeles Mayor Karen Bass has pinpointed shortcomings with technology as among the obstacles she faces in homelessness programs and has described the city’s efforts to slow the crisis as “building the plane while flying it.”

She said earlier this year that three to five homeless people die every day on the streets of L.A.

On Thursday, Gov. Gavin Newsom ordered state agencies to start removing homeless encampments on state land in his boldest action yet following a Supreme Court ruling allowing cities to enforce bans on sleeping outside in public spaces.

There is currently no uniform practice for caseworkers to collect and enter information into databases on the homeless people they interview, including notes taken on paper. The result: Information can be lost or recorded incorrectly, and it becomes quickly outdated with the lag time between interviews and when it’s entered into a database. 

The main federal data system, known as the Homeless Management Information System, or HMIS, was designed as a desktop application, making it difficult to operate on a mobile phone.

“One of the reasons the data is so bad is because what the case managers do by necessity is they take notes, either on their phones or on scrap pieces of paper or they just try to remember it, and they don’t typically input it until they get back to their desk” hours, days, a week or even longer afterward, Miller said.

Every organization that coordinates services for homeless people uses an HMIS program to comply with data collection and reporting standards mandated by the U.S. Department of Housing and Urban Development. But the systems are not all compatible.

Sam Matonik, associate director of data at L.A.-based People Assisting the Homeless, a major service provider, said his organization is among those that must reenter data because Los Angeles County uses a proprietary data system that does not talk to the HMIS system.  

“Once you’re manually double-entering things, it opens the door for all sorts of errors,” Matonik said. “Small numerical errors are the difference between somebody having shelter and not.”

Bevin Kuhn, acting deputy chief of analytics for the Los Angeles Homeless Services Authority, the agency that coordinates homeless housing and services in Los Angeles County, said work is underway to create a database of 23,000 beds by the end of the year as part of technology upgrades.

For case managers, “just seeing … the general bed availability is challenging,” Kuhn said.

Among other changes are a reboot of the HMIS system to make it more compatible with mobile apps and the development of a way to measure whether caseworkers are entering data in a timely manner, Kuhn said.

It’s not uncommon for a field worker to encounter a homeless person in crisis who needs immediate attention, which can create delays in collecting data. Los Angeles Homeless Services Authority aims for data to be entered in the system within 72 hours, but that benchmark is not always met.

In hopes of filling the void, Better Angels assembled a team experienced in building large-scale software applications. They are constructing a mobile-friendly prototype for outreach workers — to be donated to participating groups in Los Angeles County — that will be followed by systems for shelter operators and a comprehensive shelter bed database.

Since homeless people are transient and difficult to locate for follow-up services, one feature would create a map of places where an individual had been encountered, allowing case managers to narrow the search.

Services are often available, but the problem is linking them with a homeless person in real time. So a data profile would show the services the individual has received in the past and any medical issues, and make it easy to contact health workers if needed.

As a secondary benefit — if enough agencies and providers agree to participate — the software could produce analytical information and data visualizations, spotlighting where homeless people are moving around the county and where they have gathered in concentrations.

One key goal for the prototypes: ease of use even for workers with scant digital literacy. Information entered into the app would be immediately uploaded to the database, eliminating the need for redundant reentries while keeping information up to date.

Time is often critical. Once a shelter bed is located, there is a 48-hour window for the spot to be claimed, which Reyes says happens only about half the time. The technology is so inadequate, the coalition sometimes doesn’t learn a spot is open until it has expired.

She has been impressed with the speed of the Better Angels app, which is in testing, and believes it would cut down on the number of people who miss the housing window, as well as create more reliability for people trying to obtain services.

“I’m hoping Better Angels helps us put the human back into this whole situation,” Reyes said.  

US claims TikTok collected user views on issues like abortion, gun control

WASHINGTON — In a fresh broadside against one of the world’s most popular technology companies, the Justice Department late Friday accused TikTok of harnessing the capability to gather bulk information on users based on views on divisive social issues like gun control, abortion and religion.

Government lawyers wrote in a brief filed to the federal appeals court in Washington that TikTok and its Beijing-based parent company ByteDance used an internal web-suite system called Lark to enable TikTok employees to speak directly with ByteDance engineers in China.

TikTok employees used Lark to send sensitive data about U.S. users, information that has wound up being stored on Chinese servers and accessible to ByteDance employees in China, federal officials said.

One of Lark’s internal search tools, the filing states, permits ByteDance and TikTok employees in the U.S. and China to gather information on users’ content or expressions, including views on sensitive topics, such as abortion or religion. Last year, The Wall Street Journal reported TikTok had tracked users who watched LGBTQ content through a dashboard the company said it had since deleted.

The new court documents represent the government’s first major defense in a consequential legal battle over the future of the popular social media platform, which is used by more than 170 million Americans. Under a law signed by President Joe Biden in April, the company could face a ban in a few months if it doesn’t break ties with ByteDance.

The measure was passed with bipartisan support after lawmakers and administration officials expressed concerns that Chinese authorities could force ByteDance to hand over U.S. user data or sway public opinion towards Beijing’s interests by manipulating the algorithm that populates users’ feeds.

The Justice Department warned, in stark terms, of the potential for what it called “covert content manipulation” by the Chinese government, saying the algorithm could be designed to shape content that users receive.

“By directing ByteDance or TikTok to covertly manipulate that algorithm, China could for example further its existing malign influence operations and amplify its efforts to undermine trust in our democracy and exacerbate social divisions,” the brief states.

The concern, they said, is more than theoretical, alleging that TikTok and ByteDance employees are known to engage in a practice called “heating” in which certain videos are promoted in order to receive a certain number of views. While this capability enables TikTok to curate popular content and disseminate it more widely, U.S. officials posit it can also be used for nefarious purposes.

Justice Department officials are asking the court to allow them to file a classified version of the legal brief, which won’t be accessible to the two companies.

Nothing in the redacted brief “changes the fact that the Constitution is on our side,” TikTok spokesperson Alex Haurek said in a statement.

“The TikTok ban would silence 170 million Americans’ voices, violating the 1st Amendment,” Haurek said. “As we’ve said before, the government has never put forth proof of its claims, including when Congress passed this unconstitutional law. Today, once again, the government is taking this unprecedented step while hiding behind secret information. We remain confident we will prevail in court.”

In the redacted version of the court documents, the Justice Department said another tool triggered the suppression of content based on the use of certain words. Certain policies of the tool applied to ByteDance users in China, where the company operates a similar app called Douyin that follows Beijing’s strict censorship rules.

But Justice Department officials said other policies may have been applied to TikTok users outside of China. TikTok was investigating the existence of these policies and whether they had ever been used in the U.S. in or around 2022, officials said.

The government points to the Lark data transfers to explain why federal officials do not believe that Project Texas, TikTok’s $1.5 billion mitigation plan to store U.S. user data on servers owned and maintained by the tech giant Oracle, is sufficient to guard against national security concerns.

In its legal challenge against the law, TikTok has heavily leaned on arguments that the potential ban violates the First Amendment because it bars the app from continued speech unless it attracts a new owner through a complex divestment process. It has also argued divestment would change the speech on the platform because a new social platform would lack the algorithm that has driven its success.

In its response, the Justice Department argued TikTok has not raised any valid free speech claims, saying the law addresses national security concerns without targeting protected speech and that China and ByteDance, as foreign entities, aren’t shielded by the First Amendment.

TikTok has also argued the U.S. law discriminates based on viewpoint, citing statements from some lawmakers critical of what they viewed as an anti-Israel tilt on the platform during Israel’s war in Gaza.

Justice Department officials dispute that argument, saying the law at issue reflects their ongoing concern that China could weaponize technology against U.S. national security, a fear they say is made worse by demands that companies under Beijing’s control turn over sensitive data to the government. They say TikTok, under its current operating structure, is required to be responsive to those demands.

Oral arguments in the case are scheduled for September.

US, Taiwan, China race to improve military drone technology  

WASHINGTON — This week, as Taiwan was preparing for the start of its Han Kuang military exercises, its air defense system detected a Chinese drone circling the island. This was the sixth time that China had sent a drone to operate around Taiwan since 2023.

Drones like the one that flew around Taiwan, which are tasked with dual-pronged missions of reconnaissance and intimidation, are just a small part of a broader trend that is making headlines from Ukraine to the Middle East to the Taiwan Strait and is changing the face of warfare. 

The increasing role that unmanned aerial vehicles, or UAVs, play and rising concern about a Chinese invasion of democratically ruled Taiwan are pushing Washington, Beijing and Taipei to improve the sophistication, adaptability and cost-effectiveness of drone technology.

‘Hellscape’ strategy

Last August, the Pentagon launched a $1 billion Replicator Initiative to create air, sea and land drones in the “multiple thousands,” according to the Defense Department’s Innovation Unit. The Pentagon aims to build that force of drones by August 2025.

The initiative is part of what U.S. Admiral Samuel Paparo recently described to The Washington Post as a “hellscape” strategy, which aims to counter a Chinese invasion of Taiwan through the deployment of thousands of unmanned drones in the air and sea between the island and China.

“The benefits of unmanned systems are that you get cheap, disposable mass that’s low cost. If a drone gets shot down, the only people that are crying about it are the accountants,” said Zachary Kallenborn, a policy fellow at George Mason University. “You can use them at large amounts of scale and overwhelm your opponents as well as degrade their defensive capabilities.”

The hellscape strategy, he added, aims to use lots of cheap drones to try to hold back China from attacking Taiwan.

Drone manufacturing supremacy

China has its own plans under way and is the world’s largest manufacturer of commercial drones. In a news briefing after Paparo’s remarks to the Post, Beijing warned Washington that it was playing with fire.

“Those who clamor for turning others’ homeland into hell should get ready for burning in hell themselves,” said Senior Colonel Wu Qian, spokesperson for the Chinese defense ministry.

“The People’s Liberation Army is able to fight and win in thwarting external interference and safeguarding our national sovereignty and territorial integrity. Threats and intimidation never work on us,” Wu said.

China’s effort to expand its use of drones has been bolstered, analysts say, by leader Xi Jinping’s emphasis on technology and modernization in the military, something he highlighted at a top-level party meeting last week.

“China’s military is developing more than 50 types of drones with varying capabilities, amassing a fleet of tens of thousands of drones, potentially 10 times larger than Taiwan and the U.S. combined,” Michael Raska, assistant professor at Singapore’s Nanyang Technological University, told VOA in an email. “This quantitative edge currently fuels China’s accelerating military modernization, with drones envisioned for everything from pre-conflict intel gathering to swarming attacks.”

Analysts add that China’s commercial drone manufacturing supremacy aids its military in the push for drone development. China’s DJI dominates in production and sale of household drones, accounting for 76% of the worldwide consumer market in 2021.

The scale of production and low price of DJI drones could put China in an advantageous position in a potential drone war, analysts say.

“In Russia and Ukraine, if you have a lot of drones – even if they’re like the commercial off-the-shelf things, DJI drones you can buy at Costco – and you throw hundreds of them at an air defense system, that’s going to create a large problem,” said Major Emilie Stewart, a research analyst at the China Aerospace Studies Institute.

China denies it is seeking to use commercial UAV technology for future conflicts.

“China has always been committed to maintaining global security and regional stability and has always opposed the use of civilian drones for military purposes,” Liu Pengyu, spokesperson for the Chinese Embassy in Washington, told VOA. “We are firmly opposed to the U.S.’s military ties with Taiwan and its effort of arming Taiwan.”

Drone force

With assistance from its American partners, pressure from China and lessons from Ukraine, Taiwan has been pushing to develop its own domestic drone warfare capabilities.

The United States has played a pivotal role in Taiwan’s drone development, and just last week it pledged to sell $360 million of attack drones to the Taipei Economic and Cultural Representative Office, or TECRO, Taiwan’s de facto embassy in Washington.

“Taiwan will continue to build a credible deterrence and work closely with like-minded partners, including the United States, to preserve peace and stability in the region,” TECRO told VOA when asked about the collaboration between Taipei and Washington. “We have no further information to share at this moment.”

The effort to incorporate drones into its defense is crucial for Taiwan, said Eric Chan, a senior nonresident fellow at the Global Taiwan Institute.

“The biggest immediate effects of the U.S. coming into this mass UAV game is to give Taiwan a bigger advantage to be able to, first, detect their enemy and, second, help them build a backstop to their own capabilities as well,” Chan said.

With the potential for China to deploy drones in an urban conflict environment, Taiwan recognizes the importance of stepping up its counter-drone defense systems.

“After multiple intrusions of Chinese drones in outlying islands, the Taiwan Ministry of Defense now places great emphasis on anti-drone capabilities,” said Yu-Jiu Wang, chief executive of Tron Future, an anti-drone company working with the Taiwanese military.

The demand is one that Wang said his company is willing and ready to fill.

Video game performers to strike over artificial intelligence concerns

LOS ANGELES — Hollywood’s video game performers voted Thursday to go on strike, throwing part of the entertainment industry into another work stoppage after talks for a new contract with major game studios broke down over artificial intelligence protections. 

The strike — the second for video game voice actors and motion capture performers under the Screen Actors Guild-American Federation of Television and Radio Artists — will begin at 12:01 a.m. Friday. The move comes after nearly two years of negotiations with gaming giants, including divisions of Activision, Warner Bros. and Walt Disney Co., over a new interactive media agreement. 

SAG-AFTRA negotiators say gains have been made over wages and job safety in the video game contract, but that the studios will not make a deal over the regulation of generative AI. Without guardrails, game companies could train AI to replicate an actor’s voice, or create a digital replica of their likeness without consent or fair compensation, the union said. 

Fran Drescher, the union’s president, said in a prepared statement that members would not approve a contract that would allow companies to “abuse AI.” 

“Enough is enough. When these companies get serious about offering an agreement our members can live — and work — with, we will be here, ready to negotiate,” Drescher said. 

A representative for the studios did not immediately respond to an email seeking comment. 

The global video game industry generates well over $100 billion in profit annually, according to game market forecaster Newzoo. The people who design and bring those games to life are the driving force behind that success, SAG-AFTRA said. 

“Eighteen months of negotiations have shown us that our employers are not interested in fair, reasonable AI protections, but rather flagrant exploitation,” said Interactive Media Agreement Negotiating Committee Chair Sarah Elmaleh. 

Last month, union negotiators told The Associated Press that the game studios refused to “provide an equal level of protection from the dangers of AI for all our members” — specifically, movement performers. 

Members voted overwhelmingly last year to give leadership the authority to strike. Concerns about how movie studios will use AI helped fuel last year’s film and television strikes by the union, which lasted four months. 

The last interactive contract, which expired in November 2022, did not provide protections around AI but secured a bonus compensation structure for voice actors and performance capture artists after an 11-month strike that began in October 2016. That work stoppage marked the first major labor action from SAG-AFTRA following the merger of Hollywood’s two largest actors unions in 2012.

The video game agreement covers more than 2,500 “off-camera (voiceover) performers, on-camera (motion capture, stunt) performers, stunt coordinators, singers, dancers, puppeteers, and background performers,” according to the union. 

Amid the tense interactive negotiations, SAG-AFTRA created a separate contract in February that covered indie and lower-budget video game projects. The tiered-budget independent interactive media agreement contains some of the protections on AI that video game industry titans have rejected.