Musk’s X Delays Access to Content on Reuters, NY Times, Social Media Rivals

Social media company X, formerly known as Twitter, delayed access to links to content on the Reuters and New York Times websites as well as rivals like Bluesky, Facebook and Instagram, according to a Washington Post report on Tuesday.

Clicking a link on X to one of the affected websites resulted in a delay of about five seconds before the webpage loaded, The Washington Post reported, citing tests it conducted on Tuesday. Reuters also saw a similar delay in tests it ran.

By late Tuesday afternoon, X appeared to have eliminated the delay. When contacted for comment, X confirmed the delay was removed but did not elaborate.

Billionaire Elon Musk, who bought Twitter in October 2022, has previously lashed out at news organizations and journalists who have reported critically on his companies, which include Tesla and SpaceX. Twitter has previously prevented users from posting links to competing social media platforms.

Reuters could not establish the precise time when X began delaying links to some websites.

A user on Hacker News, a tech forum, posted about the delay earlier on Tuesday and wrote that X began delaying links to the New York Times on Aug. 4. On that day, Musk criticized the publication’s coverage of South Africa and accused it of supporting calls for genocide. Reuters has no evidence that the two events are related.

A spokesperson for the New York Times said it has not received an explanation from X about the link delay.

“While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” the spokesperson said on Tuesday.

A Reuters spokesperson said: “We are aware of the report in the Washington Post of a delay in opening links to Reuters stories on X. We are looking into the matter.”

Bluesky, an X rival that has Twitter co-founder Jack Dorsey on its board, did not reply to a request for comment.

Meta, which owns Facebook and Instagram, did not immediately respond to a request for comment.

Google to Train 20,000 Nigerians in Digital Skills

Google plans to train 20,000 Nigerian women and youth in digital skills and provide a grant of $1.6 million to help the government create 1 million digital jobs in the country, its Africa executives said on Tuesday. 

Nigeria plans to create digital jobs for its teeming youth population, Vice President Kashim Shettima told Google Africa executives during a meeting in Abuja. Shettima did not provide a timeline for creating the jobs. 

Google Africa executives said a grant from its philanthropic arm in partnership with Data Science Nigeria and the Creative Industry Initiative for Africa will facilitate the program. 

Shettima said Google’s initiative aligned with the government’s commitment to increase youth participation in the digital economy. The government is also working with the country’s banks on the project, Shettima added. 

Google director for West Africa Olumide Balogun said the company would commit funds and provide digital skills to women and young people in Nigeria and also enable startups to grow, which will create jobs. 

Google is committed to investing in digital infrastructure across Africa, Charles Murito, Google Africa’s director of government relations and public policy, said during the meeting, adding that digital transformation can be a job enabler. 

Fiction Writers Fear Rise of AI, Yet See It as a Story

For a vast number of book writers, artificial intelligence is a threat to their livelihood and the very idea of creativity. More than 10,000 of them endorsed an open letter from the Authors Guild this summer, urging AI companies not to use copyrighted work without permission or compensation.

At the same time, AI is a story to tell, and no longer just science fiction.

As present in the imagination as politics, the pandemic, or climate change, AI has become part of the narrative for a growing number of novelists and short story writers who only need to follow the news to imagine a world upended.

“I’m frightened by artificial intelligence, but also fascinated by it. There’s a hope for divine understanding, for the accumulation of all knowledge, but at the same time there’s an inherent terror in being replaced by non-human intelligence,” said Helen Phillips, whose upcoming novel “Hum” tells of a wife and mother who loses her job to AI.

“We’ve been seeing more and more about AI in book proposals,” said Ryan Doherty, vice president and editorial director at Celadon Books, which recently signed Fred Lunzker’s novel “Sike,” featuring an AI psychiatrist.

“It’s the zeitgeist right now. And whatever is in the cultural zeitgeist seeps into fiction,” Doherty said. 

Other AI-themed novels expected in the next two years include Sean Michaels’ “Do You Remember Being Born?” — in which a poet agrees to collaborate with an AI poetry company; Bryan Van Dyke’s “In Our Likeness,” about a bureaucrat and a fact-checking program with the power to change facts; and A.E. Osworth’s “Awakened,” about a gay witch and her titanic clash with AI.

Crime writer Jeffrey Diger, known for his thrillers set in contemporary Greece, is working on a novel touching upon AI and the metaverse, the outgrowth of being “continually on the lookout for what’s percolating on the edge of societal change,” he said.

Authors are invoking AI to address the most human questions.

In Sierra Greer’s “Annie Bot,” the title character is an AI mate designed for a human male. Greer said the novel was a way to explore her character’s “urgent desire to please,” adding that a robot girlfriend enabled her “to explore desire, respect, and longing in ways that felt very new and strange to me.”

Amy Shearn’s “Animal Instinct” has its origins in the pandemic and in her personal life; she was recently divorced and had begun using dating apps.

“It’s so weird how, with apps, you start to feel as if you’re going person-shopping,” she said. “And I thought, wouldn’t it be great if you could really pick and choose the best parts of all these people you encounter and sort of cobble them together to make your ideal person?”

“Of course,” she added, “I don’t think anyone actually knows what their ideal person is, because so much of what draws us to mates is the unexpected, the ways in which people surprise us. That said, it seemed like an interesting premise for a novel.”

Some authors aren’t just writing about AI, but openly working with it.

Earlier this year, journalist Stephen Marche used AI to write the novella “Death of An Author,” for which he drew upon everyone from Raymond Chandler to Haruki Murakami. Screenwriter and humorist Simon Rich collaborated with Brent Katz and Josh Morgenthau for “I Am Code,” a thriller in verse that came out this month and was generated by the AI program “code-davinci-002.” (Filmmaker Werner Herzog reads the audiobook edition.)

Osworth, who is trans, wanted to address comments by “Harry Potter” author J.K. Rowling that have offended many in the trans community, and to wrest from her the power of magic. At the same time, they worried the fictional AI in their book sounded too human, and decided AI should speak for AI.

Osworth devised a crude program, based on the writings of Machiavelli among others, that would turn out a more mechanical kind of voice.

“I like to say that ChatGPT is a Ferrari, while what I came up with is a skateboard with one square wheel. But I was much more interested in the skateboard with one square wheel,” they said.

Michaels centers his new novel on a poet named Marian, in homage to poet Marianne Moore, and an AI program called Charlotte. He said the novel is about parenthood, labor, community, and “this technology’s implications for art, language and our sense of identity.”

Believing the spirit of “Do You Remember Being Born?” called for the presence of actual AI text, he devised a program that would generate prose and poetry, and uses an alternate format in the novel so readers know when he’s using AI.

In one passage, Marian is reviewing some of her collaboration with Charlotte.

“The preceding day’s work was a collection of glass cathedrals. I reread it with alarm. Turns of phrase I had mistaken for beautiful, which I now found unintelligible,” Michaels writes. “Charlotte had simply surprised me: I would propose a line, a portion of a line, and what the system spat back upended my expectations. I had been seduced by this surprise.”

And now AI speaks: “I had mistaken a fit of algorithmic exuberance for the truth.”

Chinese Surveillance Firm Selling Cameras With ‘Skin Color Analytics’

IPVM, a U.S.-based security and surveillance industry research group, says the Chinese surveillance equipment maker Dahua is selling cameras with what it calls a “skin color analytics” feature in Europe, raising human rights concerns. 

In a report released on July 31, IPVM said “the company defended the analytics as being a ‘basic feature of a smart security solution.'” The report is behind a paywall, but IPVM provided a copy to VOA Mandarin. 

Dahua’s ICC Open Platform guide for “human body characteristics” includes “skin color/complexion,” according to the report. In what Dahua calls a “data dictionary,” the company says that the “skin color types” that Dahua analytic tools would target are “yellow,” “black” and “white.” VOA Mandarin verified this on Dahua’s Chinese website.

The IPVM report also says that skin color detection is mentioned in the “Personnel Control” category, a feature Dahua touts as part of its Smart Office Park solution intended to provide security for large corporate campuses in China.  

Charles Rollet, co-author of the IPVM report, told VOA Mandarin by phone on August 1, “Basically what these video analytics do is that, if you turn them on, then the camera will automatically try and determine the skin color of whoever passes, whoever it captures in the video footage. 

“So that means the camera is going to be guessing or attempting to determine whether the person in front of it … has black, white or yellow — in their words — skin color,” he added.  

VOA Mandarin contacted Dahua for comment but did not receive a response. 

The IPVM report said that Dahua is selling cameras with the skin color analytics feature in three European nations. Each has a recent history of racial tension: Germany, France and the Netherlands.

‘Skin color is a basic feature’

Dahua said its skin tone analysis capability was an essential function in surveillance technology.  

 In a statement to IPVM, Dahua said, “The platform in question is entirely consistent with our commitments to not build solutions that target any single racial, ethnic, or national group. The ability to generally identify observable characteristics such as height, weight, hair and eye color, and general categories of skin color is a basic feature of a smart security solution.”  

IPVM said the company had previously denied offering the feature, and that skin color detection is uncommon in mainstream surveillance tech products.

In many Western nations, there has long been a controversy over errors due to skin color in surveillance technologies for facial recognition. Identifying skin color in surveillance applications raises human rights and civil rights concerns.  

“So it’s unusual to see it for skin color because it’s such a controversial and ethically fraught field,” Rollet said.  

Anna Bacciarelli, technology manager at Human Rights Watch (HRW), told VOA Mandarin that Dahua technology should not contain skin tone analytics.   

“All companies have a responsibility to respect human rights, and take steps to prevent or mitigate any human rights risks that may arise as a result of their actions,” she said in an email.

“Surveillance software with skin tone analytics poses a significant risk to the right to equality and non-discrimination, by allowing camera owners and operators to racially profile people at scale — likely without their knowledge, infringing privacy rights — and should simply not be created or sold in the first place.”  

Dahua denied that its surveillance products are designed to enable racial identification. On the website of its U.S. company, Dahua says, “contrary to allegations that have been made by certain media outlets, Dahua Technology has not and never will develop solutions targeting any specific ethnic group.” 

However, in February 2021, IPVM and the Los Angeles Times reported that Dahua provided a video surveillance system with “real-time Uyghur warnings” to the Chinese police that included eyebrow size, skin color and ethnicity.  

IPVM’s 2018 statistical report shows that since 2016, Dahua and another Chinese video surveillance company, Hikvision, have won contracts worth $1 billion from the government of China’s Xinjiang region, a center of Uyghur life.

The U.S. Federal Communications Commission determined in 2022 that the products of Chinese technology companies such as Dahua and Hikvision, which have close ties to Beijing, posed a threat to U.S. national security.

The FCC banned sales of these companies’ products in the U.S. “for the purpose of public safety, security of government facilities, physical security surveillance of critical infrastructure, and other national security purposes,” but not for other purposes.  

Before the U.S. sales bans, Hikvision and Dahua ranked first and second among global surveillance and access control firms, according to The China Project.  

‘No place in a liberal democracy’

On June 14, the European Parliament adopted its negotiating position on the draft EU Artificial Intelligence Act, a step toward completely banning the use of facial recognition systems in public places.

“We know facial recognition for mass surveillance from China; this technology has no place in a liberal democracy,” Svenja Hahn, a German member of the European Parliament and Renew Europe Group, told Politico.  

Bacciarelli of HRW said in an email she “would seriously doubt such racial profiling technology is legal under EU data protection and other laws. The General Data Protection Regulation, a European Union regulation on information privacy, limits the collection and processing of sensitive personal data, including personal data revealing racial or ethnic origin and biometric data, under Article 9. Companies need to make a valid, lawful case to process sensitive personal data before deployment.”

“The current text of the draft EU AI Act bans intrusive and discriminatory biometric surveillance tech, including real-time biometric surveillance systems; biometric systems that use sensitive characteristics, including race and ethnicity data; and indiscriminate scraping of CCTV data to create facial recognition databases,” she said.  

In Western countries, companies are developing AI software for identifying race primarily as a marketing tool for selling to diverse consumer populations. 

The Wall Street Journal reported in 2020 that American cosmetics company Revlon had used recognition software from AI start-up Kairos to analyze how consumers of different ethnic groups use cosmetics, raising concerns among researchers that racial recognition could lead to discrimination.  

The U.S. government has long prohibited sectors such as healthcare and banking from discriminating against customers based on race. IBM, Google and Microsoft have restricted the provision of facial recognition services to law enforcement.  

Twenty-four states, counties and municipal governments in the U.S. have prohibited government agencies from using facial recognition surveillance technology. New York City, Baltimore, and Portland, Oregon, have even restricted the use of facial recognition in the private sector.  

Some civil rights activists have argued that racial identification technology is error-prone and could have adverse consequences for those being monitored. 

Rollet said, “If the camera is filming at night or if there are shadows, it can misclassify people.”  

Caitlin Chin is a fellow at the Center for Strategic and International Studies, a Washington think tank where she researches technology regulation in the United States and abroad. She emphasized that while Western technology companies mainly use facial recognition for business, Chinese technology companies are often happy to assist government agencies in monitoring the public.  

She told VOA Mandarin in an August 1 video call, “So this is something that’s both very dehumanizing but also very concerning from a human rights perspective, in part because if there are any errors in this technology that could lead to false arrests, it could lead to discrimination, but also because the ability to sort people by skin color on its own almost inevitably leads to people being discriminated against.”  

She also said that in general, especially when it comes to law enforcement and surveillance, people with darker skin have been disproportionately tracked and disproportionately surveilled, “so these Dahua cameras make it easier for people to do that by sorting people by skin color.”  

Virgin Galactic Flies Its First Tourists to the Edge of Space

Virgin Galactic rocketed to the edge of space with its first tourists Thursday, including a former British Olympian who bought his ticket 18 years ago and a mother-daughter duo from the Caribbean.

The space plane glided back to a runway landing at Spaceport America in the New Mexico desert, after a brief flight that gave passengers a few minutes of weightlessness.

Cheers erupted from families and friends watching from below when the craft’s rocket motor fired after it was released from the plane that had carried it aloft. The rocket ship reached about 88 kilometers high.

Richard Branson’s company expects to begin offering monthly trips to customers on its winged space plane, joining Jeff Bezos’ Blue Origin and Elon Musk’s SpaceX in the space tourism business.

Virgin Galactic passenger Jon Goodwin, who was among the first to buy a ticket in 2005, said he had faith that he would someday make the trip. The 80-year-old athlete — he competed in canoeing in the 1972 Olympics — has Parkinson’s disease and wants to be an inspiration to others.

“I hope it shows them that these obstacles can be the start rather than the end to new adventures,” he said in a statement.

Ticket prices were $200,000 when Goodwin signed up. The cost is now $450,000.

He was joined by sweepstakes winner Keisha Schahaff, 46, a health coach from Antigua, and her daughter, Anastatia Mayers, 18, a student at Scotland’s University of Aberdeen. Also on board: two pilots and the company’s astronaut trainer.

It was Virgin Galactic’s seventh trip to space since 2018, but the first with a ticket-holder. Branson, the company’s founder, hopped on board for the first full-size crew ride in 2021. Italian military and government researchers soared in June on the first commercial flight. About 800 people are currently on Virgin Galactic’s waiting list, according to the company.

Virgin Galactic’s rocket ship launches from the belly of an airplane, not from the ground, and requires two pilots in the cockpit. Once the mothership reaches a height of about 15 kilometers, the space plane is released and fires its rocket motor to make the final push to just over 80 kilometers up. Passengers can unstrap from their seats, float around the cabin for a few minutes and take in the sweeping views of Earth, before the space plane glides back home and lands on a runway.

In contrast, the capsules used by SpaceX and Blue Origin are fully automated and parachute back down.

Like Virgin Galactic, Blue Origin aims for the fringes of space, quick ups-and-downs from West Texas. Blue Origin has launched 31 people so far, but flights are on hold following a rocket crash last fall. The capsule, carrying experiments but no passengers, landed intact.

SpaceX is the only private company flying customers all the way to orbit, charging a much heftier price, too: tens of millions of dollars per seat. It has already flown three private crews. NASA is its biggest customer, relying on SpaceX to ferry its astronauts to and from the International Space Station since 2020.

People have been taking on adventure travel for decades, the risks underscored by the recent implosion of the Titan submersible that killed five passengers on their way down to view the Titanic wreckage. Virgin Galactic suffered its own casualty in 2014 when its rocket plane broke apart during a test flight, killing one pilot. Yet space tourists are still lining up, ever since the first one rocketed into orbit in 2001 with the Russians.

Branson, who lives in the British Virgin Islands, watched Thursday’s flight from a party in Antigua. He had held a virtual lottery to establish a pecking order for the company’s first 50 customers — dubbed the Founding Astronauts. Virgin Galactic said the group agreed Goodwin would go first, given his age and his Parkinson’s.

China to Require all Apps to Share Business Details in New Oversight Push

China will require all mobile app providers in the country to file business details with the government, its information ministry said, marking Beijing’s latest effort to keep the industry on a tight leash. 

The Ministry of Industry and Information Technology (MIIT) said late on Tuesday that apps without proper filings will be punished after the grace period that will end in March next year, a move that experts say would potentially restrict the number of apps and hit small developers hard. 

You Yunting, a lawyer with Shanghai-based DeBund Law Offices, said the order is effectively requiring approvals from the ministry. The new rule is primarily aimed at combating online fraud but it will impact all apps in China, he said. 

Rich Bishop, co-founder of app publishing firm AppInChina, said the new rule is also likely to affect foreign-based developers which have been able to publish their apps easily through Apple’s App Store without showing any documentation to the Chinese government. 

Bishop said that in order to comply with the new rules, app developers now must either have a company in China or work with a local publisher.  

Apple did not immediately reply to a request for comment. 

The iPhone maker pulled over a hundred artificial intelligence (AI) apps from its App Store last week to comply with regulations after China introduced a new licensing regime for generative AI apps.

The ministry’s notice also said entities “engaged in internet information services through apps in such fields as news, publishing, education, film and television and religion should also submit relevant documents.” 

The requirement could affect the availability of popular social media apps such as X, Facebook and Instagram. Use of such apps is not allowed in China, but they can still be downloaded from app stores, enabling Chinese users to access them when traveling overseas.

China already requires mobile games to obtain licenses before they launch in the country, and it had purged tens of thousands of unlicensed games from various app stores in 2020. 

Tencent’s WeChat, China’s most popular online social platform, said on Wednesday that mini apps, apps that can be opened within WeChat, must also follow the new rules. 

The company said that new mini apps must complete the filing before launch starting in September, while existing mini apps have until the end of March.

US to Restrict High-Tech Investment in China

U.S. President Joe Biden is planning Wednesday to impose restrictions on U.S. investments in some high-tech industries in China.

Biden’s expected executive order could again heighten tensions between the U.S., the world’s biggest economy, and No. 2 China after a period in which leaders of the two countries have held several discussions aimed at airing their differences and seeking common ground.

The new restrictions would limit U.S. investments in such high-tech sectors in China as quantum computing, artificial intelligence and advanced semiconductors, but apparently not in the broader Chinese economy, which recently has been struggling to advance.

In a trip to China in July, Treasury Secretary Janet Yellen told Chinese Premier Li Qiang, “The United States will, in certain circumstances, need to pursue targeted actions to protect its national security. And we may disagree in these instances.”

Trying to protect its own security interests in the Indo-Pacific region and across the globe, National Security Adviser Jake Sullivan said in April that the U.S. has implemented “carefully tailored restrictions on the most advanced semiconductor technology exports” to China.

“Those restrictions are premised on straightforward national security concerns,” he said. “Key allies and partners have followed suit, consistent with their own security concerns.”

Sullivan said the restrictions are not, as Beijing has claimed, a “technology blockade.”

US Launches Contest to Use AI to Prevent Government System Hacks

The White House on Wednesday said it had launched a multimillion-dollar cyber contest to spur use of artificial intelligence to find and fix security flaws in U.S. government infrastructure, in the face of growing use of the technology by hackers for malicious purposes.  

“Cybersecurity is a race between offense and defense,” said Anne Neuberger, the U.S. government’s deputy national security adviser for cyber and emerging technology.

“We know malicious actors are already using AI to accelerate identifying vulnerabilities or build malicious software,” she added in a statement to Reuters.

Numerous U.S. organizations, from health care groups to manufacturing firms and government institutions, have been the target of hacking in recent years, and officials have warned of future threats, especially from foreign adversaries.  

Neuberger’s comments about AI echo those Canada’s cybersecurity chief Samy Khoury made last month. He said his agency had seen AI being used for everything from creating phishing emails and writing malicious computer code to spreading disinformation.

The two-year contest includes around $20 million in rewards and will be led by the Defense Advanced Research Projects Agency, the U.S. government body in charge of creating technologies for national security, the White House said.

Google, Anthropic, Microsoft, and OpenAI — the U.S. technology firms at the forefront of the AI revolution — will make their systems available for the challenge, the government said.

The contest signals official attempts to tackle an emerging threat that experts are still trying to fully grasp. In the past year, U.S. firms have launched a range of generative AI tools such as ChatGPT that allow users to create convincing videos, images, texts, and computer code. Chinese companies have launched similar models to catch up.

Experts say such tools could make it far easier to, for instance, conduct mass hacking campaigns or create fake profiles on social media to spread false information and propaganda.  

“Our goal with the DARPA AI challenge is to catalyze a larger community of cyber defenders who use the participating AI models to race faster – using generative AI to bolster our cyber defenses,” Neuberger said.

The Open Source Security Foundation (OpenSSF), a U.S. group of experts trying to improve open source software security, will be in charge of ensuring the “winning software code is put to use right away,” the U.S. government said. 

Zoom, Symbol of Remote Work Revolution, Wants Workers Back in Office Part-time

The company whose name became synonymous with remote work is joining the growing return-to-office trend.

Zoom, the video conferencing pioneer, is asking employees who live within a 50-mile radius of its offices to work onsite two days a week, a company spokesperson confirmed in an email. The statement said the company has decided that “a structured hybrid approach – meaning employees that live near an office need to be onsite two days a week to interact with their teams – is most effective for Zoom.”

The new policy, which will be rolled out in August and September, was first reported by the New York Times, which said Zoom CEO Eric Yuan fielded questions from employees unhappy with the new policy during a Zoom meeting last week.

Zoom, based in San Jose, California, saw explosive growth during the first year of the COVID-19 pandemic as companies scrambled to shift to remote work, and even families and friends turned to the platform for virtual gatherings. But that growth has stagnated as the pandemic threat has ebbed.

Shares of Zoom Video Communications Inc. have tumbled hard since peaking early in the pandemic, from $559 apiece in October 2020, to below $70 on Tuesday. Shares have slumped more than 10% to start the month of August. In February, Zoom laid off about 1,300 people, or about 15% of its workforce.

Google, Salesforce and Amazon are among major companies that have also stepped up their return-to-office policies despite a backlash from some employees.

Similarly to Zoom, many companies are asking their employees to show up to the office only part time, as hybrid work shapes up to be a lasting legacy of the pandemic. Since January, the average weekly office occupancy rate in 10 major U.S. cities has hovered around 50%, dipping below that threshold during the summer months, according to Kastle Systems, which measures occupancy through entry swipes.

LogOn: Police Recruit AI to Analyze Police Body-Camera Footage

U.S. police reform advocates have long argued that police-worn body cameras will help reduce officers’ excessive use of force and work to build public trust. But the millions of hours of footage that so-called “body cams” generate are difficult for police supervisors to monitor. As Shelley Schlender explains, artificial intelligence may help.

Pope Warns Against Potential Dangers of Artificial Intelligence

Pope Francis on Tuesday called for a global reflection on the potential dangers of artificial intelligence (AI), noting the new technology’s “disruptive possibilities and ambivalent effects.”  

Francis, who is 86 and said in the past he does not know how to use a computer, issued the warning in a message for the next World Day of Peace of the Catholic Church, falling on New Year’s Day.  

The Vatican released the message well in advance, as is customary.

The pope “recalls the need to be vigilant and to work so that a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded,” it reads.  

“The urgent need to orient the concept and use of artificial intelligence in a responsible way, so that it may be at the service of humanity and the protection of our common home, requires that ethical reflection be extended to the sphere of education and law,” it adds.  

Back in 2015, Francis acknowledged being “a disaster” with technology, but he has also called the internet, social networks and text messages “a gift of God,” provided that they are used wisely.  

In 2020, the Vatican joined forces with tech giants Microsoft and IBM to promote the ethical development of AI and call for regulation of intrusive technologies such as facial recognition.

US Tech Groups Back TikTok in Challenge to Montana State Ban

Two tech groups on Monday backed TikTok Inc in its lawsuit seeking to block enforcement of a Montana state ban on use of the short video sharing app before it takes effect on January 1.

NetChoice, a national trade association that includes major tech platforms, and Chamber of Progress, a tech-industry coalition, said in a joint court filing that “Montana’s effort to cut Montanans off from the global network of TikTok users ignores and undermines the structure, design, and purpose of the internet.”

TikTok, which is owned by China’s ByteDance, filed a suit in May seeking to block the first-of-its-kind U.S. state ban on several grounds, arguing it violates the First Amendment free speech rights of the company and users.

Analysts Say Use of Spyware During Conflict Is Chilling

The use of sophisticated spyware to hack into the devices of journalists and human rights defenders during a period of conflict in Armenia has alarmed analysts.

A joint investigation by digital rights organizations, including Amnesty International, found evidence of the surveillance software on devices belonging to 12 people, including a former government spokesperson.

The apparent targeting took place between October 2020 and December 2022, including during key moments in the Nagorno-Karabakh conflict, Amnesty reported.

The region has been at the center of a decades-long dispute between Azerbaijan and Armenia, which have fought two wars over the mountainous territory.

Elina Castillo Jiménez, a digital surveillance researcher at Amnesty International’s Security Laboratory, told VOA that her organization’s research — published earlier this year — confirmed that at least a dozen public figures in Armenia were targeted, including a former spokesperson for the Ministry of Foreign Affairs and a representative of the United Nations.

Others had reported on the conflict, including for VOA’s sister network Radio Free Europe/Radio Liberty; provided analysis; had sensitive conversations related to the conflict; or in some cases worked for organizations known to be critical of the government, the researchers found.

“The conflict may have been one of the reasons for the targeting,” Castillo said.

If, as Amnesty and others suspect, the timing is connected to the conflict, it would mark the first documented use of Pegasus in the context of an international conflict.

Researchers have found previously that Pegasus was used extensively in Azerbaijan to target civil society representatives, opposition figures and journalists, including the award-winning investigative reporter Khadija Ismayilova.

VOA reached out via email to the embassies of Armenia and Azerbaijan in Washington for comment but as of publication had not received a response.

Pegasus is a spyware marketed to governments by the Israeli digital security company NSO Group. The global investigative collaboration, The Pegasus Project, has been tracking the spyware’s use against human rights defenders, critics and others.

Since 2021, the U.S. government has imposed measures on NSO over the hacking revelations, saying its tools were used for “transnational repression.” U.S. actions include export limits on NSO Group and a March 2023 executive order that restricts the U.S. government’s use of commercial spyware like Pegasus.

VOA reached out to the NSO Group for comment but as of publication had not received a response.

Castillo said that Pegasus has the capability to infiltrate both iOS and Android phones.

Pegasus spyware is a “zero-click” mobile surveillance program. It can attack devices without any interaction from the individual who is targeted, gaining complete control over a phone or laptop and in effect transforming it into a spying tool against its owner, she said.

“The way that Pegasus operates is that it is capable of using elements within your iPhones or Androids,” said Castillo. “Imagine that it embed(s) something in your phone, and through that, then it can take control over it.”

The implications of the spyware are not lost on Ruben Melikyan. The lawyer, based in Armenia’s capital, Yerevan, is among those whose devices were infected.

An outspoken government critic, Melikyan has represented a range of opposition parliamentarians and activists.

The lawyer said he has concerns that the software could have allowed hackers to gain access to his data and information related to his clients.

“As a lawyer, my phone contained confidential information, and its compromise made me uneasy, particularly regarding the protection of my current and former clients’ rights,” he said.

Melikyan told VOA that his phone had been targeted twice: in May 2021, when he was monitoring Armenian elections, and again during a tense period in the Armenia and Azerbaijan conflict in December 2022.

Castillo said she believes targeting individuals with Pegasus is a violation of “international humanitarian law” and that evidence shows it is “an absolute menace to people doing human rights work.”

She said the researchers are not able to confirm who commissioned the use of the spyware, but “we do believe that it is a government customer.”

When the findings were released this year, an NSO Group spokesperson said it was unable to comment but that earlier allegations of “improper use of our technologies” had led to the termination of contracts.

Amnesty International researchers are also investigating the potential use of another commercial spyware, Predator, which was found on Armenian servers.

“We have the evidence that suggests that it was used. However, further investigation is needed,” Castillo said, adding that their findings so far suggest that Pegasus is just “one of the threats against journalists and human rights defenders.”

This story originated in VOA’s Armenia Service.

US Mom Blames Face Recognition Technology for Flawed Arrest

A mother is suing the city of Detroit, saying unreliable facial recognition technology led to her being falsely arrested for carjacking while she was eight months pregnant. 

Porcha Woodruff was getting her two children ready for school the morning of February 16 when a half-dozen police officers showed up at her door to arrest her, taking her away in handcuffs, the 32-year-old Detroit woman said in a federal lawsuit.

“They presented her with an arrest warrant for robbery and carjacking, leaving her baffled and assuming it was a joke, given her visibly pregnant state,” her attorney wrote in a lawsuit accusing the city of false arrest. 

The suit, filed Thursday, argues that police relied on facial recognition technology that should not be trusted, given “inherent flaws and unreliability, particularly when attempting to identify Black individuals” such as Woodruff.

Some experts say facial recognition technology is more prone to error when analyzing the faces of people of color.

In a statement Sunday, the Wayne County prosecutor’s office said the warrant that led to Woodruff’s arrest was on solid ground, NBC News reported.

“The warrant was appropriate based upon the facts,” it said.

The case began in late January, when police investigating a reported carjacking by a gunman used imagery from a gas station’s security video to track down a woman believed to have been involved in the crime, according to the suit.

Facial recognition analysis from the video identified Woodruff as a possible match, the suit said.

Woodruff’s picture from a 2015 arrest was in a set of photos shown to the carjacking victim, who picked her out, according to the lawsuit.

Woodruff was freed on bond the day of her arrest and the charges against her were later dropped due to insufficient evidence, the civil complaint maintained. 

“This case highlights the significant flaws associated with using facial recognition technology to identify criminal suspects,” the suit argued.

Woodruff’s suit seeks unspecified financial damages plus legal fees. 

US Scientists Repeat Fusion Ignition Breakthrough

U.S. scientists have achieved net energy gain in a fusion reaction for the second time since December, the Lawrence Livermore National Laboratory said on Sunday.

Scientists at the California-based lab repeated the fusion ignition breakthrough in an experiment in the National Ignition Facility (NIF) on July 30 that produced a higher energy yield than in December, a Lawrence Livermore spokesperson said.

Final results are still being analyzed, the spokesperson added.

Lawrence Livermore achieved a net energy gain in a fusion experiment using lasers on Dec. 5, 2022. The scientists focused a laser on a target of fuel to fuse two light atoms into a denser one, releasing the energy.

That experiment briefly achieved what’s known as fusion ignition by generating 3.15 megajoules of energy output after the laser delivered 2.05 megajoules to the target, the Energy Department said.

In other words, it produced more energy from fusion than the laser energy used to drive it, the department said.
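The figures quoted above imply a target gain just over one — the threshold for ignition. A quick back-of-the-envelope check, using only the numbers reported by the Energy Department:

```python
# Energy delivered to the target by the laser, and energy released by fusion,
# as reported for the Dec. 5, 2022 experiment (in megajoules).
laser_input_mj = 2.05
fusion_output_mj = 3.15

# Target gain: fusion energy out divided by laser energy in.
# Ignition requires this ratio to exceed 1.
gain = fusion_output_mj / laser_input_mj
print(f"Target gain: {gain:.2f}")  # roughly 1.54
```

Note this ratio counts only the laser energy that reached the target, not the far larger amount of electricity needed to power the laser itself.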

The Energy Department called it “a major scientific breakthrough decades in the making that will pave the way for advancements in national defense and the future of clean power.”

Scientists have known for about a century that fusion powers the sun and have pursued developing fusion on Earth for decades. Such a breakthrough could one day help curb climate change if companies can scale up the technology to a commercial level in the coming decades.

Musk Says Fight with Zuckerberg Will be Live-Streamed on X

Elon Musk said in a social media post that his proposed cage fight with Meta (META.O) CEO Mark Zuckerberg would be live-streamed on social media platform X, formerly known as Twitter. 

The social media moguls have been egging each other into a mixed martial arts cage match in Las Vegas since June.

“Zuck v Musk fight will be live-streamed on X. All proceeds will go to charity for veterans,” Musk said in a post on X early on Sunday morning, without giving any further details.

Earlier on Sunday, Musk had said on X that he was “lifting weights throughout the day, preparing for the fight”, adding that he did not have time to work out so brings the weights to work.

When a user on X asked Musk the point of the fight, Musk responded by saying “It’s a civilized form of war. Men love war.”

Meta did not respond to a Reuters request for comment on Musk’s post. 

The brouhaha began when Musk said in a June 20 post that he was “up for a cage match” with Zuckerberg, who is trained in jiujitsu.

A day later, Zuckerberg, 39, who has posted pictures of matches he has won on his company’s Instagram platform, asked Musk, 51, to “send location” for the proposed throwdown, to which Musk replied “Vegas Octagon”, referring to an events center where mixed martial arts (MMA) championship bouts are held.

Musk then said he would start training if the cage fight took shape. 

AI Anxiety: Workers Fret Over Uncertain Future

The tidal wave of artificial intelligence (AI) barrelling toward many professions has generated deep anxiety among workers fearful that their jobs will be swept away — and the mental health impact is rising.

The launch in November 2022 of ChatGPT, the generative AI platform capable of handling complex tasks on command, marked a tech landmark as AI started to transform the workplace.

“Anything new and unknown is anxiety-producing,” Clare Gustavsson, a New York therapist whose patients have shared concerns about AI, told AFP.

“The technology is growing so fast, it is hard to gain sure footing.”

Legal assistants, programmers, accountants and financial advisors are among those professions feeling threatened by generative AI that can quickly create human-like prose, computer code, articles or expert insight.

Goldman Sachs analysts see generative AI impacting, if not eliminating, some 300 million jobs, according to a study published in March.

“I anticipate that my job will become obsolete within the next 10 years,” Eric, a bank teller, told AFP, declining to give his last name.

“I plan to change careers. The bank I work for is expanding AI research.”

Trying to ‘embrace the unknown’

New York therapist Meris Powell told AFP of an entertainment professional worried about AI being used in film and television production — a threat to actors and screenwriters that is a flashpoint in strikes currently gripping Hollywood.

“It’s mainly people who are in creative fields who are at the forefront of that concern,” Gustavsson said.

AI is bringing with it a level of apprehension matched by climate change and the COVID-19 pandemic, she contended.

But she said that she tries to get patients to “embrace the unknown” and find ways to use new technology to their advantage.

For one graphic animator in New York, the career-threatening shock came from seeing images generated by AI-infused software such as Midjourney and Stable Diffusion that rivaled the quality of those created by humans.

“People started to realize that some of the skills they had developed and specialized in could possibly be replaced by AI,” she told AFP, adding she had honed her coding skills, but now feels even that has scant promise in an AI world.

“I’ll probably lean into more of a management-level role,” she said. “It’s just hard because there are a lot less of those positions.

“Before I would just pursue things that interested me and skills that I enjoy. Now I feel more inclined to think about what’s actually going to be useful and marketable in the future.”

Peter Vukovic, who has been chief technology officer at several startups, expects just one percent or less of the population to benefit from AI.

“For the rest, it’s a gray area,” Vukovic, who lives in Bosnia, said. “There is a lot of reason for 99 percent of people to be concerned.”

AI is focused on efficiency and making money, but it could be channeled to serve other purposes, Vukovic said.

“What’s the best way for us to use this?” he asked. “Is it really just to automate a bunch of jobs?”

NASA Back in Touch With Voyager 2 After ‘Interstellar Shout’

NASA has succeeded in reestablishing full contact with Voyager 2 by using its highest-power transmitter to send an “interstellar shout” that righted the distant probe’s antenna orientation, the space agency said Friday.

Launched in 1977 to explore the outer planets and serve as a beacon of humanity to the wider universe, the probe is currently more than 19.9 billion kilometers from our planet — well beyond the solar system. 

A series of planned commands sent to the spaceship on July 21 mistakenly caused the antenna to point 2 degrees away from Earth, compromising its ability to send and receive signals and endangering its mission.

The situation was not expected to be resolved until at least Oct. 15 when Voyager 2 was scheduled to carry out an automated realignment maneuver.

But Tuesday, engineers enlisted the help of multiple Earth observatories that form the Deep Space Network to detect a carrier or “heartbeat” wave from Voyager 2, though the signal was still too faint to read the data it carried.

In an update on Friday, NASA’s Jet Propulsion Laboratory (JPL), which built and operates the probe, said it had succeeded in a longshot effort to send instructions that righted the craft.

“The Deep Space Network used the highest-power transmitter to send the command (the 100-kw S-band uplink from the Canberra site) and timed it to be sent during the best conditions during the antenna tracking pass in order to maximize possible receipt of the command by the spacecraft,” Voyager project manager Suzanne Dodd told AFP.

This so-called “interstellar shout” required 18.5 hours traveling at light speed to reach Voyager, and it took 37 hours for mission controllers to learn whether the command worked, JPL said in a statement.
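Those travel times are consistent with the distance quoted above. A rough check, assuming light speed of about 299,792 km per second:

```python
# One-way light time to Voyager 2, from the distance quoted above.
distance_km = 19.9e9           # ~19.9 billion km from Earth
speed_of_light_kms = 299_792   # km per second

one_way_hours = distance_km / speed_of_light_kms / 3600
round_trip_hours = 2 * one_way_hours

print(f"One-way: {one_way_hours:.1f} h")        # ~18.4 h (reported as 18.5)
print(f"Round trip: {round_trip_hours:.1f} h")  # ~36.9 h (reported as 37)
```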

The probe began returning science and telemetry data at 12:29 a.m. Eastern Time on Friday, “indicating it is operating normally and that it remains on its expected trajectory,” JPL added.

‘Golden records’

Voyager 2 left the protective magnetic bubble provided by the sun, called the heliosphere, in December 2018, and is currently traveling through the space between the stars.

Before leaving our solar system, it explored Jupiter and Saturn, and became the first and so far only spacecraft to visit Uranus and Neptune.

Voyager 2’s twin, Voyager 1, was mankind’s first spacecraft to enter the interstellar medium, in 2012, and is currently almost 24 billion kilometers from Earth.

Both carry “Golden Records” — 30-centimeter, gold-plated copper disks intended to convey the story of our world to extraterrestrials.

These include a map of our solar system, a piece of uranium that serves as a radioactive clock allowing recipients to date the spaceship’s launch, and symbols that convey how to play the record.

The contents of the discs, selected for NASA by a committee chaired by legendary astronomer Carl Sagan, include encoded images of life on Earth, as well as music and sounds that can be played using an included stylus.

For now, the Voyagers continue to transmit scientific data to Earth, though their power banks are expected to eventually be depleted sometime after 2025.

They will then continue to wander the Milky Way, potentially for eternity, in silence. 

Australian Lawmakers Highlight Social Media’s Threat to National Security

A parliamentary committee investigating foreign interference in Australia has found that Chinese apps TikTok and WeChat could present major security risks.

In April, Australia said it would ban TikTok on government devices because of security fears. 

Lawmakers in Australia have sounded the alarm about the nefarious rise of social media and its power to spread disinformation and undermine trust. 

The Senate Select Committee on Foreign Interference through Social Media said that foreign interference was Australia’s most pressing national security threat. The parliamentary inquiry in Canberra found that the increased use of social media, including Chinese-owned apps TikTok and WeChat, could “corrupt our decision-making, political discourse and societal norms.”   

The report stated that “the Chinese government can require these social media companies to secretly cooperate with Chinese intelligence agencies.” 

Committee makes recommendations

The committee in Canberra has made 17 recommendations, including extending an April 2023 ban on TikTok on Australian government issued devices to include WeChat, with the threat of fines and nationwide bans if the apps breach transparency guidelines.   

Senator James Paterson is the head of the committee as well as Shadow Cyber Security Minister. He told the Australian Broadcasting Corp. Wednesday that the apps were guilty of spreading disinformation.  

“It is absolutely rife and it is occurring on all social media platforms,” said Paterson. “It is absolutely critical that any social media platform operating in Australia of any scale is able to be subject to Australian laws and regulation, and the oversight of our regulatory agencies and our parliament.”   

The Canberra government said it was considering all the committee’s recommendations. A government spokesperson asserted that foreign governments have used social media to harass diaspora and spread disinformation.  

TikTok responds

In a statement, TikTok said that while it disagreed with the way it had been characterized by the parliamentary inquiry, it welcomed the committee’s decision to not recommend an outright ban.   

It added that TikTok remained “committed to continuing an open and transparent dialogue with all levels of Australian government.” 

There has been no comment, so far, from WeChat.   

Meta, which owns Facebook, had previously told the inquiry that it had removed more than 200 foreign interference operations since 2017.  The U.S. company has warned that the internet’s democratic principles were increasingly being challenged by “strong forces.” 

Meta to Ask EU Users’ Consent to Share Data for Targeted Ads

Social media giant Meta on Tuesday said it intends to ask European Union-based users to give their consent before allowing targeted advertising on its networks including Facebook, bowing to pressure from European regulators.

It said the changes were to address “evolving and emerging regulatory requirements” amid a bruising tussle with the Irish Data Protection Commission that oversees EU data rules in Ireland, out of which Meta runs its European operations.

European regulators in January had dismissed the previous legal basis — “legitimate interest” — Meta had used to justify gathering users’ personal data for targeted advertising.

Currently, users joining Facebook and Instagram by default have that permission turned on, feeding their data to Meta so it can generate billions of dollars from such ads.

“Today, we are announcing our intention to change the legal basis that we use to process certain data for behavioral advertising for people in the EU, EEA [European Economic Area] and Switzerland from ‘Legitimate Interests’ to ‘Consent’,” Meta said in a blog post.

Meta added it will share more information in the months ahead as it continues to “constructively engage” with regulators.

“There is no immediate impact to our services in the region. Once this change is in place, advertisers will still be able to run personalized advertising campaigns to reach potential customers and grow their businesses,” it said.

Meta and other U.S. Big Tech companies have been hit by massive fines over their business practices in the EU in recent years and have been impacted by the need to comply with the bloc’s strict data privacy regulations.

Further effects are expected from the EU’s landmark Digital Markets Act, which bans anti-competitive behavior by the so-called “gatekeepers” of the internet.

Amazon Adds US-Wide Video Telemedicine Visits to Its Virtual Clinic

Amazon is adding video telemedicine visits in all 50 states to a virtual clinic it launched last fall, as the e-commerce giant pushes deeper into care delivery.

Amazon said Tuesday that customers can visit its virtual clinic around the clock through Amazon’s website or app. There, they can compare prices and response times before picking a telemedicine provider from several options.

The clinic, which doesn’t accept insurance, launched last fall with a focus on text message-based consultations. Those remain available in 34 states.

Virtual care, or telemedicine, exploded in popularity during the COVID-19 pandemic. It has remained popular as a convenient way to check in with a doctor or deal with relatively minor health issues like pink eye.

Amazon says its clinic offers care for more than 30 common health conditions. Those include sinus infections, acne, COVID-19 and acid reflux. The clinic also offers treatments for motion sickness, seasonal allergies and several sexual health conditions, including erectile dysfunction.

It also provides birth control and emergency contraception.

Chief Medical Officer Dr. Nworah Ayogu said in a blog post that the clinic aims to remove barriers to help people treat “everyday health concerns.”

“As a doctor, I’ve seen firsthand that patients want to be healthy but lack the time, tools, or resources to effectively manage their care,” Ayogu wrote.

Amazon said messaging-based consultations cost $35 on average while video visits cost $75.

That’s cheaper than the cost of many in-person visits with a doctor, which can run over $100 for people without insurance or coverage that makes them pay a high deductible.

While virtual visits can improve access to help, some doctors worry that they also lead to care fragmentation and can make it harder to track a patient’s overall health. That could happen if a patient has a regular doctor who doesn’t learn about the virtual visit from another provider.

In addition to virtual care, Amazon also sells prescription drugs through its Amazon Pharmacy business and has been building its presence with in-patient care.

Earlier this year, Amazon also closed a $3.9 billion acquisition of the membership-based primary care provider One Medical, which had about 815,000 customers and 214 medical offices in more than 20 markets.

One Medical offers both in-person care and virtual visits.

Anti-monopoly groups had called on the Federal Trade Commission to block the deal, arguing it would endanger patient privacy and help make the retailer more dominant in the marketplace. The agency didn’t block the deal but said it won’t rule out future challenges.

That deal was the first acquisition made under Amazon CEO Andy Jassy, who took over from founder Jeff Bezos in 2021. Jassy sees health care as a growth opportunity for the company.

Flashing ‘X’ Sign Removed From Former Twitter’s Headquarters

A brightly flashing “X” sign has been removed from the San Francisco headquarters of the company formerly known as Twitter just days after it was installed. 

The San Francisco Department of Building Inspection said Monday it received 24 complaints about the unpermitted structure over the weekend. Complaints included concerns about its structural safety and illumination. 

The Elon Musk-owned company, which has been rebranded as X, had removed the Twitter sign and iconic blue bird logo from the building last week. That work was temporarily paused because the company did not have the necessary permits. For a time, the “er” at the end of “Twitter” remained up due to the abrupt halt of the sign takedown. 

The city of San Francisco had opened a complaint and launched an investigation into the giant “X” sign, which was installed Friday on top of the downtown building as Musk continues his rebrand of the social media platform. 

The chaotic rebrand of Twitter’s building signage is similar to the haphazard way in which the Twitter platform is being turned into X. While the X logo has replaced Twitter on many parts of the site and app, remnants of Twitter remain. 

Representatives for X did not immediately respond to a message for comment Monday. 

China Curbs Drone Exports, Citing Ukraine, Concern About Military Use

China imposed restrictions Monday on exports of long-range civilian drones, citing Russia’s war in Ukraine and concern that drones might be converted to military use. 

Chinese leader Xi Jinping’s government is friendly with Moscow but says it is neutral in the 18-month-old war. It has been stung by reports that both sides might be using Chinese-made drones for reconnaissance and possibly attacks. 

Export controls will take effect Tuesday to prevent use of drones for “non-peaceful purposes,” the Ministry of Commerce said in a statement. It said exports still will be allowed but didn’t say what restrictions it would apply. 

China is a leading developer and exporter of drones. DJI Technology Co., one of the global industry’s top competitors, announced in April 2022 it was pulling out of Russia and Ukraine to prevent its drones from being used in combat. 

“The risk of some high specification and high-performance civilian unmanned aerial vehicles being converted to military use is constantly increasing,” the Ministry of Commerce said. 

Restrictions will apply to drones that can fly beyond the natural sight distance of their operators or stay aloft for more than 30 minutes, that have attachments capable of throwing objects, and that weigh more than seven kilograms (15½ pounds), according to the ministry. 

“Since the crisis in Ukraine, some Chinese civilian drone companies have voluntarily suspended their operations in conflict areas,” the Ministry of Commerce said. It accused the United States and Western media of spreading “false information” about Chinese drone exports. 

The government defended its dealings Friday with Russia as “normal economic and trade cooperation” after a U.S. intelligence report said Beijing possibly provided equipment used in Ukraine that might have military applications. 

The report cited Russian customs data that showed Chinese state-owned military contractors supplied drones, navigation equipment, fighter jet parts and other goods. 

The Biden administration has warned Beijing of unspecified consequences if it supports the Kremlin’s war effort. Last week’s report didn’t say whether any of the trade cited might trigger U.S. retaliation. 

Xi and Russian President Vladimir Putin declared before the February 2022 invasion that their governments had a “no-limits” friendship. Beijing has blocked efforts to censure Moscow in the United Nations and has repeated Russian justifications for the attack. 

China has “always opposed the use of civilian drones for military purposes,” the Ministry of Commerce said. “The moderate expansion of drone control by China this time is an important measure to demonstrate the responsibility of a responsible major country.” 

The Ukrainian government appealed to DJI in March 2022 to stop selling drones it said the Russian military was using to target missile attacks. DJI rejected claims it leaked data on Ukraine’s military positions to Russia. 

AM Radio Fights to Keep Its Spot on US Car Dashboards

The number of AM radio stations in the United States is dwindling. Over the decades, mainstream broadcasters have moved to the FM band — especially music stations — to take advantage of FM’s superior audio fidelity. Now, there is a new threat to America’s remaining 4,000 AM stations. Some automakers want to kick AM off their dashboard radios.

In Dimmitt, in the state of Texas, that has Nancy and Todd Whalen worried. For eight years, they’ve owned KDHN-AM 1470, on the air since 1963. The Whalens are heard live on the station’s morning show and are KDHN’s sole employees.

“We came here to Dimmitt and told people that we wanted to give them something to be proud of. And we feel like what we’ve done and what we continue to do is provide that, not just for Dimmitt but for all the small towns in the area that no longer have local radio stations,” Nancy said.

KDHN, known as “The Twister,” also has received a Federal Communications Commission license for an FM (frequency modulation) translator, limited to 250 watts, which simulcasts the AM (amplitude modulation) signal. But the 500-watt AM signal covers more territory — about a 160-kilometer (99-mile) radius — compared with the 30-kilometer (19-mile) reach of the FM signal.

“The AM radio station is everything for us,” Nancy Whalen said. “We just turned on the FM translator, it’ll be two years in September. But the AM signal has been our bread and butter since the beginning.”

Where the profit is

Some urban station owners have decided it is more profitable to sell the real estate on which their antenna towers sit rather than continue to try to make money from commercials targeting a dwindling audience. That is what happened to KDWN in Las Vegas, Nevada, which was authorized by the FCC to transmit the maximum 50,000 watts allowed for AM stations. Corporate owner Audacy sold its 15-hectare (37-acre) transmission site on desert land last year to a real estate developer for $40 million and then switched off the powerful AM station, which had listeners across the entire Western U.S. at night.

Unlike FM band stations, which are limited to line-of-sight reception by the laws of physics, lower-frequency AM signals bounce off the ionosphere after sunset, giving them a range of hundreds and sometimes thousands of kilometers. FM stations have a greater audio frequency range, as they are allowed a wider bandwidth compared with AM stations. The most popular formats for the remaining AM stations in the United States are news/talk programming and sports, followed by country music.
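The line-of-sight limit on FM can be sketched with a standard textbook approximation for the radio horizon; the 4.12 coefficient and the 100-meter tower height here are illustrative assumptions, not figures from the stations in this story:

```python
import math

# Approximate radio horizon for a line-of-sight (FM/VHF) signal:
# d ≈ 4.12 * sqrt(h) gives the horizon distance in kilometers for an
# antenna height h in meters, assuming standard atmospheric refraction.
# (Illustrative only; real FM coverage also depends on power and terrain.)
def radio_horizon_km(antenna_height_m: float) -> float:
    return 4.12 * math.sqrt(antenna_height_m)

# A hypothetical 100-meter FM tower reaches roughly 41 km to the horizon,
# while an AM signal bouncing off the ionosphere at night can travel
# hundreds of kilometers farther.
print(f"{radio_horizon_km(100):.0f} km")
```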

Todd Whalen said audio quality is not an issue for his KDHN listeners.

“Our AM signal actually sounds as good as an FM signal because we have a state-of-the-art transmitter and processing,” he explained.

Recently, some major auto manufacturers announced plans to stop including AM radios in new vehicles, contending electric vehicle motor systems cause interference with reception, making stations unlistenable and, thus, the AM band obsolete.

Legislative response

Broadcasters and lawmakers object.

U.S. Senator Amy Klobuchar, a Minnesota Democrat, posted a video to Twitter about legislation she co-sponsored that would require vehicle manufacturers to include AM receivers in all new vehicles.

The Senate Committee on Commerce, Science, and Transportation approved via voice vote Thursday the “AM For Every Vehicle Act,” sending it to the Senate floor for consideration.

“Maybe people don’t understand how rural works, but a lot of people drive long distances to get to their town, to visit their friends,” Klobuchar said in her online video. She added she did not think auto manufacturers “understand how important AM radio is to people today.”

People like Rodney Hunter, who manages two grain silo sites in Tulia and Edmonson, Texas, said news on AM radio about corn, cotton, wheat and cattle is critical.

“I’ve had at least three farmers that called in today and said they heard on the radio that the markets are up. And without AM radio that would not be possible,” he told VOA on a recent morning at the grain silo in Tulia when a halt to grain shipments from Ukraine was causing a surge in prices of some agricultural commodities.

“Farmers are in their pickups or in their tractors, and they’re going up and down the road,” Hunter said. Relying on AM radio reception in vehicles “is just a lot handier” than trying to get crop-related news online.

Different languages

A five-hour drive southeast of Tulia found Joann Whang, in Carrollton, tuned in to another AM station. She’s not a farmer, but a pharmacist — listening to Korean-language KKDA-AM 730.

“My friend told me about it,” she said. “At first, I thought a Korean radio station is usually for the older generation, but it was actually pretty interesting. You can get all the information and highlights and even K-pop [music].”

The station is owned by the DK Media Group, which also publishes two Korean-language weekly newspapers in the Dallas-Fort Worth area. The company’s president, Stephanie Min Kim, said having no AM radios in new cars would imperil ethnic broadcasters who cannot afford the scarce and more expensive FM licenses.

“We feel that it is our duty to help and support our Korean immigrants integrate into American society,” said Kim, a former broadcaster at KBS in South Korea. “So, we invite experts from the law, health care and education to provide practical and useful information” over the station’s airwaves.

“More than 40% of radio listening is done in the car,” Kim said. “So, I think AM radio is facing a potential existential threat.”

That existential threat also affects another Dallas-area station — KHSE at 700 on the AM dial.

The station, known as Radio Caravan, with announcers speaking in Hindi, Tamil, English and other languages, plays South Asian music and provides information about community events.

While Radio Caravan also simulcasts on FM from a site 50 kilometers (31 miles) north of Dallas, that transmission does not have the reach of the 1,500-watt AM station whose transmitter and antenna array are located at a different site, also about 50 kilometers northeast of downtown Dallas.

“I don’t think AM can ever go away,” said Radio Caravan program host Aparna Ragnan, who suggested that auto manufacturers find a way to minimize the noise interference in electric vehicles instead of stopping installation of AM receivers in new cars and trucks.

Content is key

The inferior audio range of AM is not really an issue, said Radio Caravan’s station manager, Vaibhav Sheth.

“It’s the content that matters,” according to Sheth, who also noted that AM stations are a critical link for the alerts sent by the nationwide Emergency Alert System.

“Those sirens go off and your regular programming is interrupted, and when there’s an emergency, whether it’s a tornado warning, whether it’s a child abduction, whatever it is that’s happening, it goes to the AM frequency,” he said.

Some radio stations, including those struggling with personnel costs to fill 24 hours of programming, are beginning to use artificial intelligence, known as AI, which can grab real-time information, such as weather forecasts and sports scores, and use cloned announcer voices to make the computer-generated content sound live.

Kim at DK Media Group said AI might be valuable for some content, such as commercials, but she did not see it replacing empathetic voices interacting with the community in live programming.

“We are human beings,” Kim said.

The Whalens said they have not considered AI, even though they could use extra help at their “mom ‘n’ pop”-style station, which also broadcasts some local high school sports.

“We like being live in the studio. There’s just a different energy and a different feel,” said Nancy Whalen. “I think people listening can tell that over the radio. Artificial Intelligence is just that, and it’s not going to give the listener what they’re really looking for.”

Her husband, Todd, agreed. “We don’t want to be a canned radio station, because there’s a lot of canned stations out there.”

AM Radio Fights to Keep Its Spot on US Car Dashboards

There has been a steady decline in the number of AM radio stations in the United States. Over the decades, urban and mainstream broadcasters have moved to the FM band, which has better audio fidelity, although more limited range. Now, there is a new threat to the remaining AM stations. Some automakers want to kick AM off their dashboard radios, deeming it obsolete. VOA’s chief national correspondent, Steve Herman, in the state of Texas, has been tuning in to some traditional rural stations, as well as those broadcasting in languages other than English in the big cities. Camera – Steve Herman and Jonathan Zizzo.

FBI Warns About China Theft of US AI Technology

China is pilfering U.S.-developed artificial intelligence (AI) technology to advance its own AI ambitions and to conduct foreign influence operations, senior FBI officials said Friday.

The officials said China and other U.S. adversaries are targeting American businesses, universities and government research facilities to get their hands on cutting-edge AI research and products.

“Nation-state adversaries, particularly China, pose a significant threat to American companies and national security by stealing our AI technology and data to advance their own AI programs and enable foreign influence campaigns,” a senior FBI official said during a background briefing call with reporters.

China has a national plan to surpass the U.S. as the world’s top AI power by 2030, but U.S. officials say much of its progress is based on stolen or otherwise acquired U.S. technology.

“What we’re seeing is efforts across multiple vectors, across multiple industries, across multiple avenues to try to solicit and acquire U.S. technology … to be able to re-create and develop and advance their AI programs,” the senior FBI official said.

The briefing was aimed at giving the FBI’s view of the threat landscape, not to react to any recent events, officials said.

FBI Director Christopher Wray sounded the alarm about China’s AI intentions at a cybersecurity summit in Atlanta on Wednesday. He warned that after “years stealing both our innovation and massive troves of data,” the Chinese are well-positioned “to use the fruits of their widespread hacking to power, with AI, even more powerful hacking efforts.”

China has denied the allegations.

The senior FBI official briefing reporters said that while the bureau remains focused on foreign acquisition of U.S. AI technology and talent, it has concern about future threats from foreign adversaries who exploit that technology.

“However, if and when the technology is acquired, their ability to deploy it in an instance such as [the 2024 presidential election] is something that we are concerned about and do closely monitor.”

With the recent surge in AI use, the U.S. government is grappling with its benefits and risks. At a White House summit earlier this month, top AI executives agreed to institute guidelines to ensure the technology is developed safely.

Even as the technology evolves, cybercriminals are actively using AI in a variety of ways, from creating malicious code to crafting convincing phishing emails and carrying out insider trading of securities, officials said.

“The bulk of the caseload that we’re seeing now and the scope of activity has in large part been on criminal actor use and deployment of AI models in furtherance of their traditional criminal schemes,” the senior FBI official said.

Violent extremists and traditional terrorist actors are experimenting with various AI tools to build explosives, he warned.

“Some have gone as far as to post information about their engagements with the AI models and the success which they’ve had defeating security measures in most cases or in a number of cases,” he said.

The FBI has observed a wave of fake AI-generated websites with millions of followers that carry malware to trick unsuspecting users, he said. The bureau is investigating the websites.

Wray cited a recent case in which a Dark Net user created malicious code using ChatGPT.

The user “then instructed other cybercriminals on how to use it to re-create malware strains and techniques based on common variants,” Wray said.

“And that’s really just the tip of the iceberg,” he said. “We assess that AI is going to enable threat actors to develop increasingly powerful, sophisticated, customizable and scalable capabilities — and it’s not going to take them long to do it.”

Prospect of AI Producing News Articles Concerns Digital Experts 

Google’s work developing an artificial intelligence tool that would produce news articles is concerning some digital experts, who say such devices risk inadvertently spreading propaganda or threatening source safety. 

The New York Times reported last week that Google is testing a new product, known internally by the working title Genesis, that employs artificial intelligence, or AI, to produce news articles.

Genesis can take in information, like details about current events, and create news content, the Times reported. Google already has pitched the product to the Times and other organizations, including The Washington Post and News Corp, which owns The Wall Street Journal.

The launch of the generative AI chatbot ChatGPT last fall has sparked debate about how artificial intelligence can and should fit into the world — including in the news industry.

AI tools can help reporters research by quickly analyzing data and extracting it from PDF files in a process known as scraping. AI can also help journalists fact-check sources.

But the apprehension — including potentially spreading propaganda or ignoring the nuance humans bring to reporting — appears to be weightier. These worries extend beyond Google’s Genesis tool to encapsulate the use of AI in news gathering more broadly.

If AI-produced articles are not carefully checked, they could unwittingly include disinformation or misinformation, according to John Scott-Railton, who researches disinformation at the Citizen Lab in Toronto.  

“It’s sort of a shame that the places that are the most friction-free for AI to scrape and draw from — non-paywalled content — are the places where disinformation and propaganda get targeted,” Scott-Railton told VOA. “Getting people out of the loop does not make spotting disinformation easier.”

Paul M. Barrett, deputy director at New York University’s Stern Center for Business and Human Rights, agrees that artificial intelligence can turbocharge the dissemination of falsehoods. 

“It’s going to be easier to generate myths and disinformation,” he told VOA. “The supply of misleading content is, I think, going to go up.”

In an emailed statement to VOA, a Google spokesperson said, “In partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help their journalists with their work.”

“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” the spokesperson said. “Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

The implications for a news outlet’s credibility are another important consideration regarding the use of artificial intelligence.

News outlets are presently struggling with a credibility crisis. Half of Americans believe that national news outlets try to mislead or misinform audiences through their reporting, according to a February report from Gallup and the Knight Foundation.

“I’m puzzled that anyone thinks that the solution to this problem is to introduce a much less credible tool, with a much shakier command of facts, into newsrooms,” said Scott-Railton, who was previously a Google Ideas fellow.

Reports show that AI chatbots regularly produce responses that are entirely wrong or made up. AI researchers refer to this habit as a “hallucination.”

Digital experts are also cautious about what security risks may be posed by using AI tools to produce news articles. Anonymous sources, for instance, might face retaliation if their identities are revealed.

“All users of AI-powered systems need to be very conscious of what information they are providing to the system,” Barrett said.

“The journalist would have to be cautious and wary of disclosing to these AI systems information such as the identity of a confidential source, or, I would say, even information that the journalist wants to make sure doesn’t become public,” he said. 

Scott-Railton said he thinks AI probably has a future in most industries, but it’s important not to rush the process, especially in news. 

“What scares me is that the lessons learned in this case will come at the cost of well-earned reputations, will come at the cost of factual accuracy when it actually counts,” he said.  

Vietnam Orders Social Media Firms to Cut ‘Toxic’ Content Using AI

HO CHI MINH CITY, VIETNAM – Vietnam’s demand that international social media firms use artificial intelligence to identify and remove “toxic” online content is part of an ever-expanding and alarming campaign to pressure overseas platforms to suppress freedom of speech in the country, rights groups, experts and activists say.

Vietnam is a lucrative market for overseas social media platforms. Of the country’s population of nearly 100 million, there are 75.6 million Facebook users, according to Singapore-based research firm Data Reportal. And since Vietnamese authorities have rolled out tighter restrictions on online content and ordered social media firms to remove content the government deems anti-state, social media sites have largely complied with government demands to silence online critiques of the government, experts and rights groups told VOA.

“Toxic” is a term used broadly to refer to online content which the state deems to be false, violent, offensive, or anti-state, according to local media reports.

During a mid-year review conference on June 30, Vietnam’s Information Ministry ordered international tech firms to use artificial intelligence to find and remove so-called toxic content automatically, according to a report from state-run broadcaster Vietnam Television. Details have not been revealed on how or when companies must comply with the new order.

Le Quang Tu Do, the head of the Authority of Broadcasting and Electronic Information, had noted during an April 6 news conference that Vietnamese authorities have economic, technical and diplomatic tools to act against international platforms, according to a local media report. According to the report he said the government could cut off social platforms from advertisers, banks, and e-commerce, block domains and servers, and advise the public to cease using platforms with toxic content.

“The point of these measures is for international platforms without offices in Vietnam, like Facebook and YouTube, to abide by the law,” Do said.

Pat de Brun, Amnesty International’s deputy director of Amnesty Tech, told VOA the latest demand is consistent with Vietnam’s yearslong strategy to increase pressure on social media companies. De Brun said it is the government’s broad definition of what is toxic, rather than use of artificial intelligence, that is of most human rights concern because it silences speech that can include criticism of government and policies.

“Vietnamese authorities have used exceptionally broad categories to determine content that they find inappropriate and which they seek to censor. … Very, very often this content is protected speech under international human rights law,” de Brun said. “It’s really alarming to see that these companies have relented in the face of this pressure again and again.”

During the first half of this year, Facebook removed 2,549 posts, YouTube removed 6,101 videos, and TikTok took down 415 links, according to an Information Ministry statement.

Online suppression

Nguyen Khac Giang, a research fellow at Singapore’s ISEAS-Yusof Ishak Institute, told VOA that heightened online censorship has been led by the conservative faction within Vietnam’s Communist Party, which gained power in 2016.

Nguyen Phu Trong was elected as general secretary in 2016, putting a conservative in the top position within the one-party state. Along with Trong, other conservative-minded leaders rose within government the same year, pushing out reformists, Giang said. Efforts to control the online sphere led to 2018’s Law on Cybersecurity, which expands government control of online content and attempts to localize user data in Vietnam. The government also established Force 47 in 2017, a military unit with reportedly 10,000 members assigned to monitor online space.

On July 19, local media reported that the information ministry proposed taking away the internet access of people who commit violations online, especially via livestream on social media sites.

Activists often see their posts removed or lose access to their accounts, and the government arrests Vietnamese bloggers, journalists and critics living in the country for their online speech. They are often charged under Article 117 of Vietnam’s Criminal Code, which criminalizes “making, storing, distributing or disseminating information, documents and items against the Socialist Republic of Vietnam.”

According to The 88 Project, a U.S.-based human rights group, 191 activists are in jail in Vietnam, many of whom have been arrested for online advocacy and charged under Article 117.

“If you look at the way that social media is controlled in Vietnam, it is very starkly contrasted with what happened before 2016,” Giang said. “What we are seeing now is only a signal of what we’ve been seeing for a long time.”

Giang said the government order is a tool to pressure social media companies to use artificial intelligence to limit content, but he warned that online censorship and limits on public discussion could cause political instability by eliminating a channel for public feedback.

“The story here is that they want the social media platforms to take more responsibility for whatever happens on social media in Vietnam,” Giang said. “If they don’t allow people to report on wrongdoings … how can the [government] know about it?”

Vietnamese singer and dissident Do Nguyen Mai Khoi, now living in the United States, has been contacting Facebook since 2018 for activists who have lost accounts or had posts censored, or are the victims of coordinated online attacks by pro-government Facebook users. Although she has received some help from the company in the past, responses to her requests have become more infrequent.

“[Facebook] should use their leverage,” she added. “If Vietnam closed Facebook, everyone would get angry and there’d be a big wave of revolution or protests.”

Representatives of Meta Platforms Inc., Facebook’s parent company, did not respond to VOA requests for comment.

Vietnam is also a top concern in the region for the harsh punishment of online speech, said Dhevy Sivaprakasam, Asia Pacific policy counsel at Access Now, a nonprofit defending digital rights.

“I think it’s one of the most egregious examples of persecution on the online space,” she said.