Internet is rife with fake reviews – will AI make it worse?

The emergence of generative artificial intelligence tools that allow people to efficiently produce novel and detailed online reviews with almost no work has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say. 

Phony reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded on private social media groups between fake review brokers and businesses willing to pay. Sometimes, such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback. 

But AI-infused text generation tools, popularized by OpenAI’s ChatGPT, enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts. 

The deceptive practice, which is illegal in the U.S., is carried out year-round but becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews to help them purchase gifts. 

Where fakes are appearing 

Fake reviews are found across a wide range of industries, from e-commerce, lodging and restaurants to services such as home repairs, medical care and piano lessons. 

The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said it started to see AI-generated reviews show up in large numbers in mid-2023 and they have multiplied ever since. 

For a report released this month, the Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were partly or entirely AI-generated. 

“It’s just a really, really good tool for these review scammers,” said Maury Blackman, an investor and adviser to tech startups, who reviewed the Transparency Company’s work and is set to lead the organization starting Jan. 1. 

In August, software company DoubleVerify said it was observing a “significant increase” in mobile phone and smart TV apps with reviews crafted by generative AI. The reviews often were used to deceive customers into installing apps that could hijack devices or run ads constantly, the company said. 

The following month, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the marketplace with fraudulent reviews. 

The FTC, which this year banned the sale or purchase of fake reviews, said some of Rytr’s subscribers used the tool to produce hundreds and perhaps thousands of reviews for garage door repair companies, sellers of “replica” designer handbags and other businesses. 

Likely on prominent online sites, too 

Max Spero, CEO of AI detection company Pangram Labs, said his company’s software has determined with near certainty that some AI-generated reviews posted on Amazon bubbled up to the top of review search results because they were so detailed and appeared to be well thought out. 

But determining what is fake or not can be challenging. External parties can fall short because they don’t have “access to data signals that indicate patterns of abuse,” Amazon has said. 

Pangram Labs has done detection for some prominent online sites, which Spero declined to name because of nondisclosure agreements. He said he evaluated Amazon and Yelp independently. 

Many of the AI-generated comments on Yelp appeared to be posted by individuals who were trying to publish enough reviews to earn an “Elite” badge, which is intended to let users know they should trust the content, Spero said. 

The badge provides access to exclusive events with local business owners. Fraudsters also want it so their Yelp profiles can look more realistic, said Kay Dean, a former federal criminal investigator who runs a watchdog group called Fake Review Watch. 

To be sure, just because a review is AI-generated doesn’t necessarily mean it’s fake. Some consumers might experiment with AI tools to generate content that reflects their genuine sentiments. Some non-native English speakers say they turn to AI to make sure they use accurate language in the reviews they write. 

“It can help with reviews [and] make it more informative if it comes out of good intentions,” said Michigan State University marketing professor Sherry He, who has researched fake reviews. She said tech platforms should focus on the behavioral patterns of bad actors, as prominent platforms already do, instead of discouraging legitimate users from turning to AI tools. 

What companies are doing 

Prominent companies are developing policies for how AI-generated content fits into their systems for removing phony or abusive reviews. Some already employ algorithms and investigative teams to detect and take down fake reviews but are giving users some flexibility to use AI. 

Spokespeople for Amazon and Trustpilot, for example, said they would allow customers to post AI-assisted reviews as long as they reflect their genuine experience. Yelp has taken a more cautious approach, saying its guidelines require reviewers to write their own copy. 

“With the recent rise in consumer adoption of AI tools, Yelp has significantly invested in methods to better detect and mitigate such content on our platform,” the company said in a statement. 

The Coalition for Trusted Reviews, which Amazon, Trustpilot, employment review site Glassdoor, and travel sites Tripadvisor, Expedia and Booking.com launched last year, said that even though deceivers may put AI to illicit use, the technology also presents “an opportunity to push back against those who seek to use reviews to mislead others.” 

“By sharing best practice and raising standards, including developing advanced AI detection systems, we can protect consumers and maintain the integrity of online reviews,” the group said. 

The FTC’s rule banning fake reviews, which took effect in October, allows the agency to fine businesses and individuals who engage in the practice. Tech companies hosting such reviews are shielded from the penalty because they are not legally liable under U.S. law for the content that outsiders post on their platforms. 

Tech companies, including Amazon, Yelp and Google, have sued fake review brokers they accuse of peddling counterfeit reviews on their sites. The companies say their technology has blocked or removed a huge swath of suspect reviews and suspicious accounts. However, some experts say they could be doing more. 

“Their efforts thus far are not nearly enough,” said Dean of Fake Review Watch. “If these tech companies are so committed to eliminating review fraud on their platforms, why is it that I, one individual who works with no automation, can find hundreds or even thousands of fake reviews on any given day?” 

Spotting fake reviews 

Consumers can try to spot fake reviews by watching out for a few possible warning signs, according to researchers. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product’s full name or model number is another potential giveaway. 

When it comes to AI, research conducted by Balazs Kovacs, a Yale University professor of organizational behavior, has shown that people can’t tell the difference between AI-generated and human-written reviews. Some AI detectors may also be fooled by shorter texts, which are common in online reviews, the study said. 

However, there are some “AI tells” that online shoppers and service seekers should keep in mind. Pangram Labs says reviews written with AI are typically longer, highly structured and include “empty descriptors,” such as generic phrases and attributes. The writing also tends to include cliches like “the first thing that struck me” and “game-changer.”
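
Those cues can be combined into a rough screening heuristic. The sketch below is a minimal illustration of that idea, not Pangram Labs’ actual detector; the phrase list, length threshold and weights are assumptions chosen for demonstration.

```python
# Rough heuristic that scores a review for common "AI tells."
# Illustrative only: the phrase list, length threshold and weights
# are assumptions for demonstration, not Pangram Labs' actual model.

AI_TELL_PHRASES = (
    "the first thing that struck me",
    "game-changer",
    "elevate your experience",
    "in conclusion",
)

def ai_tell_score(review: str) -> float:
    """Return a score in [0, 1]; higher means more AI-like cues."""
    text = review.lower()
    score = 0.0
    # Each cliche found adds to the score.
    score += 0.25 * sum(phrase in text for phrase in AI_TELL_PHRASES)
    # AI-written reviews tend to be long...
    if len(text.split()) > 150:
        score += 0.25
    # ...and highly structured (bulleted or numbered lines).
    if any(line.lstrip().startswith(("-", "*", "1.")) for line in text.splitlines()):
        score += 0.25
    return min(score, 1.0)

if __name__ == "__main__":
    sample = ("This blender is a game-changer. The first thing that "
              "struck me was the build quality.")
    print(f"AI-tell score: {ai_tell_score(sample):.2f}")  # prints 0.50
```

Even a perfect version of such a score could only flag reviews for closer inspection; as the Yale study above shows, no single cue is conclusive on its own.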

US proposes cybersecurity rules to limit impact of health data leaks

Health care organizations may be required to bolster their cybersecurity to better prevent sensitive information from being leaked by cyberattacks like the ones that hit Ascension and UnitedHealth, a senior White House official said Friday.

Anne Neuberger, the U.S. deputy national security adviser for cyber and emerging technology, told reporters that proposed requirements are necessary in light of the massive number of Americans whose data has been affected by large breaches of health care information. The proposals include encrypting data so it cannot be accessed, even if leaked, and requiring compliance checks to ensure networks meet cybersecurity rules.
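
To make the encryption proposal concrete: encrypting records at rest means stored data stays unreadable without a separately held key, even if the files themselves are exfiltrated. Below is a minimal sketch using the Fernet API from Python’s `cryptography` package; the inline key handling is a simplification, since a real deployment would rely on a managed key service, key rotation and access auditing.

```python
# Minimal sketch of encrypting a health record at rest with Fernet
# (symmetric authenticated encryption from the `cryptography` package).
# Illustrative only: in practice the key lives in a key vault, never
# alongside the data it protects.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real system, fetched from a key vault
cipher = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "hypertension"}'
encrypted = cipher.encrypt(record)   # safe to store; unreadable if leaked

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(encrypted) == record
```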

The full proposed rule was posted to the Federal Register on Friday, and the Department of Health and Human Services posted a more condensed breakdown on its website.

Neuberger said the health care information of more than 167 million people was affected in 2023 as a result of cybersecurity incidents.

The proposed rule from the Office for Civil Rights (OCR) within HHS would update standards under the Health Insurance Portability and Accountability Act and would cost an estimated $9 billion in the first year, and $6 billion in years two through five, Neuberger said.

“We’ve made some significant proposals that we think will improve cybersecurity and ultimately everyone’s health information, if any of these proposals are ultimately finalized,” an OCR spokesperson told Reuters late Friday. The next step in the process is a 60-day public comment period before any final decisions are made.

Large health care breaches caused by hacking and ransomware have increased by 89% and 102%, respectively, since 2019, Neuberger said.

“In this job, one of the most concerning and really troubling things we deal with is hacking of hospitals, hacking of health care data,” she said.

Hospitals have been forced to operate manually and Americans’ sensitive health care data, mental health information and other information are “being leaked on the dark web with the opportunity to blackmail individuals,” Neuberger said.

Trump asks court to delay possible TikTok ban until he can weigh in as president

U.S. President-elect Donald Trump asked the Supreme Court on Friday to pause the potential TikTok ban from going into effect until his administration can pursue a “political resolution” to the issue.

The request came as TikTok and the Biden administration filed opposing briefs to the court, in which the company argued the court should strike down a law that could ban the platform by January 19 while the government emphasized its position that the statute is needed to eliminate a national security risk.

“President Trump takes no position on the underlying merits of this dispute. Instead, he respectfully requests that the court consider staying the act’s deadline for divestment of January 19, 2025, while it considers the merits of this case,” said Trump’s amicus brief, which supported neither party in the case.

The filings come ahead of oral arguments scheduled for January 10 on whether the law, which requires TikTok to divest from its China-based parent company or face a ban, unlawfully restricts speech in violation of the First Amendment.

Earlier this month, a panel of three federal judges on the U.S. Court of Appeals for the District of Columbia Circuit unanimously upheld the statute, leading TikTok to appeal the case to the Supreme Court.

The brief from Trump said he opposes banning TikTok at this juncture and “seeks the ability to resolve the issues at hand through political means once he takes office.”

Massive Chinese espionage scheme hit 9th telecom firm, US says

WASHINGTON — A sprawling Chinese espionage campaign hacked a ninth U.S. telecom firm, a top White House official said Friday.

The Chinese hacking blitz known as Salt Typhoon gave officials in Beijing access to private texts and phone conversations of an unknown number of Americans. The White House earlier this month said the attack affected at least eight telecommunications companies and dozens of nations.

Anne Neuberger, the deputy national security adviser for cyber and emerging technologies, told reporters Friday that a ninth victim was identified after the administration released guidance to companies about how to hunt for Chinese culprits in their networks.

The update from Neuberger is the latest development in a massive hacking operation that alarmed national security officials, exposed cybersecurity vulnerabilities in the private sector and laid bare China’s hacking sophistication.

The hackers compromised the networks of telecommunications companies to obtain customer call records and gain access to the private communications of “a limited number of individuals.” Although the FBI has not publicly identified any of the victims, officials believe senior U.S. government officials and prominent political figures are among those whose communications were accessed.

Neuberger said officials did not yet have a precise sense of how many Americans overall were affected by Salt Typhoon, in part because the Chinese were careful about their techniques, but a “large number” were in or near Washington.

Officials believe the goal of the hackers was to identify who owned the phones and, if they were “government targets of interest,” spy on their texts and phone calls, she said.

The FBI said most of the people targeted by the hackers are “primarily involved in government or political activity.”

Neuberger said the episode highlighted the need for required cybersecurity practices in the telecommunications industry, something the Federal Communications Commission is to take up at a meeting next month.

“We know that voluntary cybersecurity practices are inadequate to protect against China, Russia and Iran hacking of our critical infrastructure,” she said.

The Chinese government has denied responsibility for the hacking.

Ukraine tech company presents latest military simulators

Russia’s invasion has pushed Ukrainian tech companies working with defense simulation technology to seriously compete in global markets. One such company is SKIFTECH, which specializes in high-tech military simulators. Iryna Solomko visited the company’s production site in Kyiv. Anna Rice narrates the story. Camera: Pavlo Terekhov

Japan Airlines suffers delays after carrier reports cyberattack

TOKYO — Japan Airlines reported a cyberattack on Thursday that caused delays to domestic and international flights but later said it had found and addressed the cause.

The airline, Japan’s second biggest after All Nippon Airways (ANA), said 24 domestic flights had been delayed by more than half an hour.

Public broadcaster NHK said problems with the airline’s baggage check-in system had caused delays at several Japanese airports but no major disruption was reported.

“We identified and addressed the cause of the issue. We are checking the system recovery status,” Japan Airlines (JAL) said in a post on social media platform X.

“Sales for both domestic and international flights departing today have been suspended. We apologize for any inconvenience caused,” the post said.

A JAL spokesperson told AFP earlier the company had been subjected to a cyberattack.

Japanese media said it may have been a distributed denial-of-service (DDoS) attack, in which a flood of traffic overwhelms and disrupts a website or server.

Network disruption began at 7:24 a.m. Thursday (2224 GMT Wednesday), JAL said in a statement, adding that there was no impact on the safety of its operations.

Then “at 8:56 a.m., we temporarily isolated the router (a device for exchanging data between networks) that was causing the disruption,” it said.

Report on January collision

JAL shares fell as much as 2.5% in morning trade after the news emerged, before recovering slightly.

The airline is just the latest Japanese firm to be hit by a cyberattack.

Japan’s space agency JAXA was targeted in 2023, although no sensitive information about rockets or satellites was accessed.

The same year, one of Japan’s busiest ports was hit by a ransomware attack blamed on the Russia-based Lockbit group.

In 2022, a cyberattack at a Toyota supplier forced the top-selling automaker to halt operations at domestic plants.

More recently, the popular Japanese video-sharing website Niconico came under a large cyberattack in June.

Separately, a transport ministry committee tasked with probing a fatal January 2024 collision involving a JAL passenger jet released an interim report on Wednesday blaming human error for the incident that killed five people.

The collision at Tokyo’s Haneda Airport was with a coast guard plane that was on a mission to deliver relief supplies to a quake-hit region of central Japan; five of its six crew members were killed.

According to the report, the smaller plane’s pilot mistook an air traffic control officer’s instructions to mean authorization had been given to enter the runway.

The captain was also “in a hurry” at the time because the coast guard plane’s departure was 40 minutes behind schedule, the report said.

The traffic controller failed to notice that the plane had entered the runway, missing even an alarm system warning of its presence.

All 379 people on board the JAL Airbus escaped just before the aircraft was engulfed in flames.

“Hype, speculation”: Kazakhstan’s Senate speaker on the theory that Russian air defenses downed the passenger jet

After the December 25 plane crash in Aktau, video recordings emerged showing damage to the Azerbaijan Airlines aircraft. The skin of the surviving wreckage in the tail section bore marks resembling damage from the fragmentation elements of an anti-aircraft missile.