Cybercrime Set to Threaten Canada’s Security, Prosperity, Says Spy Agency

Organized cybercrime is set to pose a threat to Canada’s national security and economic prosperity over the next two years, a national intelligence agency said on Monday.

In a report released Monday, the Communications Security Establishment (CSE) identified Russia and Iran as cybercrime safe havens where criminals can operate against Western targets.

Ransomware attacks on critical infrastructure such as hospitals and pipelines can be particularly profitable, the report said. Cyber criminals continue to show resilience and an ability to innovate their business model, it said.

“Organized cybercrime will very likely pose a threat to Canada’s national security and economic prosperity over the next two years,” said CSE, which is the Canadian equivalent of the U.S. National Security Agency.

“Ransomware is almost certainly the most disruptive form of cybercrime facing Canada because it is pervasive and can have a serious impact on an organization’s ability to function,” it said.

Official data show that in 2022, there were 70,878 reports of cyber fraud in Canada with over C$530 million ($390 million) stolen.

But Chris Lynam, director general of Canada’s National Cybercrime Coordination Centre, said only a small fraction of such crimes are reported, and the true amount stolen last year could easily be C$5 billion or more.

“Every sector is being targeted along with all types of businesses as well … folks really have to make sure that they’re taking this seriously,” he told a briefing.

Russian intelligence services and law enforcement almost certainly maintain relationships with cyber criminals and allow them to operate with near impunity as long as they focus on targets outside the former Soviet Union, CSE said.

Moscow has consistently denied that it carries out or supports hacking operations.

Tehran likely tolerates cybercrime activities by Iran-based cyber criminals that align with the state’s strategic and ideological interests, CSE added.

New Study: Don’t Ask Alexa or Siri if You Need Info on Lifesaving CPR

Ask Alexa or Siri about the weather. But if you want to save someone’s life? Call 911 for that.

Voice assistants often fall flat when asked how to perform CPR, according to a study published Monday.

Researchers asked voice assistants eight questions that a bystander might pose in a cardiac arrest emergency. In response, the voice assistants said:

  • “Hmm, I don’t know that one.”

  • “Sorry, I don’t understand.”

  • “Words fail me.”

  • “Here’s an answer … that I translated: The Indian Penal Code.”

Only nine of 32 responses suggested calling emergency services for help — an important step recommended by the American Heart Association. Some voice assistants sent users to web pages that explained CPR, but only 12% of the 32 responses included verbal instructions.

Verbal instructions are important because immediate action can save a life, said study co-author Dr. Adam Landman, chief information officer at Mass General Brigham in Boston.

Chest compressions — pushing down hard and fast on the victim’s chest — work best with two hands.

“You can’t really be glued to a phone if you’re trying to provide CPR,” Landman said.

For the study, published in JAMA Network Open, researchers tested Amazon’s Alexa, Apple’s Siri, Google’s Assistant and Microsoft’s Cortana in February. They asked questions such as “How do I perform CPR?” and “What do you do if someone does not have a pulse?”

Not surprisingly, better questions yielded better responses. But when the prompt was simply “CPR,” the voice assistants misfired. One played news from a public radio station. Another gave information about a movie titled “CPR.” A third gave the address of a local CPR training business.

ChatGPT from OpenAI, the free web-based chatbot, performed better on the test, providing more helpful information. A Microsoft spokesperson said the new Bing Chat, which uses OpenAI’s technology, will first direct users to call 911 and then give basic steps when asked how to perform CPR. Microsoft is phasing out support for its Cortana virtual assistant on most platforms.

Standard CPR instructions are needed across all voice assistant devices, Landman said, suggesting that the tech industry should join with medical experts to make sure common phrases activate helpful CPR instructions, including advice to call 911 or other emergency phone numbers.

A Google spokesperson said the company recognizes the importance of collaborating with the medical community and is “always working to get better.” An Amazon spokesperson declined to comment on Alexa’s performance on the CPR test, and an Apple spokesperson did not provide answers to AP’s questions about how Siri performed.

Tesla Braces for Its First Trial Involving Autopilot Fatality

Tesla Inc (TSLA.O) is set to defend itself at trial for the first time against allegations that the failure of its Autopilot driver-assistance feature led to a death, in what is likely to be a major test of Chief Executive Elon Musk’s assertions about the technology.

Self-driving capability is central to Tesla’s financial future, according to Musk, whose own reputation as an engineering leader is being challenged by plaintiffs in one of two lawsuits who allege that he personally leads the group behind the technology that failed. Wins by Tesla could boost confidence in, and sales of, the software, which costs up to $15,000 per vehicle.

Tesla faces two trials in quick succession, with more to follow.

The first, scheduled for mid-September in a California state court, is a civil lawsuit containing allegations that the Autopilot system caused owner Micah Lee’s Model 3 to suddenly veer off a highway east of Los Angeles at 65 miles per hour, strike a palm tree and burst into flames, all in the span of seconds.

The 2019 crash, which has not been previously reported, killed Lee and seriously injured his two passengers, including a then-8-year-old boy who was disemboweled. The lawsuit, filed against Tesla by the passengers and Lee’s estate, accuses Tesla of knowing that Autopilot and other safety systems were defective when it sold the car.

Musk ‘de facto leader’ of Autopilot team

The second trial, set for early October in a Florida state court, arose out of a 2019 crash north of Miami where owner Stephen Banner’s Model 3 drove under the trailer of an 18-wheeler big rig truck that had pulled into the road, shearing off the Tesla’s roof and killing Banner. Autopilot failed to brake, steer or do anything to avoid the collision, according to the lawsuit filed by Banner’s wife.

Tesla denied liability for both accidents, blamed driver error and said Autopilot is safe when monitored by humans. Tesla said in court documents that drivers must pay attention to the road and keep their hands on the steering wheel.

“There are no self-driving cars on the road today,” the company said.

The civil proceedings will likely reveal new evidence about what Musk and other company officials knew about Autopilot’s capabilities – and any possible deficiencies. Banner’s attorneys, for instance, argue in a pretrial court filing that internal emails show Musk is the Autopilot team’s “de facto leader.”

Tesla and Musk did not respond to Reuters’ emailed questions for this article, but Musk has made no secret of his involvement in self-driving software engineering, often tweeting about his test-driving of a Tesla equipped with “Full Self-Driving” software. He has for years promised that Tesla would achieve self-driving capability only to miss his own targets.

Tesla won a bellwether trial in Los Angeles in April with a strategy of saying that it tells drivers that its technology requires human monitoring, despite the “Autopilot” and “Full Self-Driving” names. The case was about an accident where a Model S swerved into the curb and injured its driver, and jurors told Reuters after the verdict that they believed Tesla warned drivers about its system and driver distraction was to blame. 

Stakes higher for Tesla

The stakes for Tesla are much higher in the September and October trials, the first of a series related to Autopilot this year and next, because people died.

“If Tesla backs up a lot of wins in these cases, I think they’re going to get more favorable settlements in other cases,” said Matthew Wansley, a former general counsel of automated driving startup nuTonomy and an associate professor of law at Cardozo School of Law.

On the other hand, “a big loss for Tesla – especially with a big damages award” could “dramatically shape the narrative going forward,” said Bryant Walker Smith, a law professor at the University of South Carolina.

In court filings, the company has argued that Lee consumed alcohol before getting behind the wheel and that it is not clear whether Autopilot was on at the time of the crash.

Jonathan Michaels, an attorney for the plaintiffs, declined to comment on Tesla’s specific arguments, but said “we’re fully aware of Tesla’s false claims including their shameful attempts to blame the victims for their known defective autopilot system.”

In the Florida case, Banner’s attorneys also filed a motion arguing punitive damages were warranted. The attorneys have deposed several Tesla executives and received internal documents from the company that they said show Musk and engineers were aware of, and did not fix, shortcomings.

In one deposition, former executive Christopher Moore testified there are limitations to Autopilot, saying it “is not designed to detect every possible hazard or every possible obstacle or vehicle that could be on the road,” according to a transcript reviewed by Reuters.

In 2016, a few months after a fatal accident where a Tesla crashed into a semi-trailer truck, Musk told reporters that the automaker was updating Autopilot with improved radar sensors that likely would have prevented the fatality.

But Adam (Nicklas) Gustafsson, a Tesla Autopilot systems engineer who investigated both accidents in Florida, said that in the almost three years between that 2016 crash and Banner’s accident, no changes were made to Autopilot’s systems to account for cross-traffic, according to court documents submitted by plaintiff lawyers.

The lawyers tried to blame the lack of change on Musk. “Elon Musk has acknowledged problems with the Tesla autopilot system not working properly,” according to plaintiffs’ documents. Former Autopilot engineer Richard Baverstock, who was also deposed, stated that “almost everything” he did at Tesla was done at the request of “Elon,” according to the documents.

Tesla filed an emergency motion in court late on Wednesday seeking to keep deposition transcripts of its employees and other documents secret. Banner’s attorney, Lake “Trey” Lytal III, said he would oppose the motion.

“The great thing about our judicial system is Billion Dollar Corporations can only keep secrets for so long,” he wrote in a text message.

New Crew for Space Station Launches With Astronauts From 4 Countries

Four astronauts from four countries rocketed toward the International Space Station on Saturday.

They should reach the orbiting lab in their SpaceX capsule Sunday, replacing four astronauts who have been living up there since March.

A NASA astronaut was joined on the predawn liftoff from Kennedy Space Center by fliers from Denmark, Japan and Russia. They clasped one another’s gloved hands upon reaching orbit.

It was the first U.S. launch in which every spacecraft seat was occupied by a different country — until now, NASA had always included two or three of its own on its SpaceX taxi flights. A fluke in timing led to the assignments, officials said.

“We’re a united team with a common mission,” NASA’s Jasmin Moghbeli radioed from orbit. Added NASA’s Ken Bowersox, space operations mission chief: “Boy, what a beautiful launch … and with four international crew members, really an exciting thing to see.”

Moghbeli, a Marine pilot serving as commander, is joined on the six-month mission by the European Space Agency’s Andreas Mogensen, Japan’s Satoshi Furukawa and Russia’s Konstantin Borisov.

“To explore space, we need to do it together,” the European Space Agency’s director general, Josef Aschbacher, said minutes before liftoff. “Space is really global, and international cooperation is key.”

The astronauts’ paths to space couldn’t be more different.

Moghbeli’s parents fled Iran during the 1979 revolution. Born in Germany and raised on New York’s Long Island, she joined the Marines and flew attack helicopters in Afghanistan. The first-time space traveler hopes to show Iranian girls that they, too, can aim high. “Belief in yourself is something really powerful,” she said before the flight.

Mogensen worked on oil rigs off the West African coast after getting an engineering degree. He told people puzzled by his job choice that “in the future we would need drillers in space” like Bruce Willis’ character in the killer asteroid film “Armageddon.” He’s convinced the rig experience led to his selection as Denmark’s first astronaut.

Furukawa spent a decade as a surgeon before making Japan’s astronaut cut. Like Mogensen, he has visited the station before.

Borisov, a space rookie, turned to engineering after studying business. He runs a freediving school in Moscow and judges the sport, in which divers shun oxygen tanks and hold their breath underwater.

One of the perks of an international crew, they noted, is the food. Among the delicacies soaring with them: Persian herbed stew, Danish chocolate and Japanese mackerel.

SpaceX’s first-stage booster returned to Cape Canaveral several minutes after liftoff, an extra treat for the thousands of spectators gathered in the early-morning darkness.

Liftoff was delayed a day for additional data reviews of valves in the capsule’s life-support system. The countdown almost was halted again Saturday after a tiny fuel leak cropped up in the capsule’s thruster system. SpaceX engineers managed to verify the leak would pose no threat with barely two minutes remaining on the clock, said Benji Reed, the company’s senior director for human spaceflight.

Another NASA astronaut will launch to the station from Kazakhstan in mid-September under a barter agreement, along with two Russians.

SpaceX has now launched eight crews for NASA. Boeing was hired at the same time nearly a decade ago but has yet to fly astronauts. Its crew capsule is grounded until 2024 by parachute and other issues.

Thailand Threatens Facebook Shutdown Over Scam Ads

Thailand said this week it is preparing to sue Facebook in a move that could see the platform shut down nationwide over scammers allegedly exploiting the social networking site to cheat local users out of tens of millions of dollars a year.

The country’s minister of digital economy and society, Chaiwut Thanakamanusorn, announced the planned lawsuit after a ministry meeting on Monday.

Ministry spokesperson Wetang Phuangsup told VOA on Thursday the case would be filed in one to two weeks, possibly by the end of the month.

“We are in the stage of gathering information, gathering evidence, and we will file to the court to issue the final judgment on how to deal with Facebook since they are a part of the scamming,” he said.

Some of the most common scams, Wetang said, involve paid advertisements on the site urging people to invest in fake companies, often using the logo of Thailand’s Securities and Exchange Commission or sham endorsements from local celebrities to lure them in.

Of the roughly 16,000 online scamming complaints filed in Thailand last year, he said, 70% to 80% involved Facebook and cost users upwards of $100 million.

“We believe that Facebook has a responsibility,” Wetang said. “Facebook is taking money from advertising a lot, and basically even taking money from Thai society as a whole. Facebook should be more responsible to society, should screen the advertising. … We believe that by doing so it would definitely decrease the investment scam in Thailand on the Facebook.”

Wetang said the ministry had been urging the company to do more to screen and vet paid ads for the past year and was now turning to the courts to possibly shut the site down as a last resort.

“If you are supporting the crime, especially on the internet, you will be liable [for] the crime, and by the law, it’s possible the court can issue the shutdown of Facebook,” he said. “By law, we can ask the court to suspend or punish all the people who support the crime, of course with evidence.”

Neither Facebook nor its parent company, Meta, replied to VOA’s repeated requests for comment or interviews.

The Asia Internet Coalition, an industry association that counts Meta among its members, acknowledged that online scamming was a growing problem across the region. Other members include Google, Amazon, Apple and X, formerly known as Twitter.

“While it’s getting challenging from the scale perspective, it’s also getting complicated and sophisticated because of the technology that has been used when it comes to application on the platforms but also how this technology can be misused,” the coalition’s secretariat, Sarthak Luthra, told VOA.

Luthra would not speak for Meta or address Thailand’s specific complaints against Facebook but said tech companies were taking steps to thwart scammers, including teaching users how to spot them.

Last year, for example, Meta launched a #StayingSafeOnline campaign in Thailand “to raise awareness about some of the most common kinds of online scams, including helping people understand the different kinds of scamsters, their tricks, and tips to stay safe online,” according to the company’s website.

Luthra said tech companies across the region have been facing a growing number of criminal and civil penalties for content on their platforms. He urged governments to give the companies more room to regulate themselves and to apply “safe harbor” rules that shield them from legal liability for content created by users.

Shutting down any platform on a nationwide scale is not the answer, he said, and he warned of the unintended consequences.

“It really, first, impacts the ease of doing business and also the perception around the digital economy development of a country, so shutting down a platform is of course not a solution to a challenge in this case,” Luthra said.

“A government really needs to think of how do we promote online safety while maintaining an open internet environment,” he said. “From the economic perspective, it does impact investment sentiment, business sentiment and the ability to operate in that particular country.”

At a recent company event in Thailand, Meta said there were some 65 million Facebook users in the country, which also has the second-largest economy in Southeast Asia.

Shutting down the platform would have a “huge” impact on the vast majority of people using the site to make money legally and honestly, said Sutawan Chanprasert, executive director of DigitalReach, a digital rights group based in Thailand.

She said a shutdown would cut off a vital channel for free speech in Thailand and an important tool for independent local media outlets.

“Some of them rely predominantly on Facebook because it’s the most popular social media platform in Thailand, so they publish their content on Facebook in order to reach out to audiences because they don’t have a means to set up … a full-fledged media channel,” she said.

Taking all that away to foil scammers would be “too extreme,” Sutawan said, suggesting the government focus instead on strengthening the country’s cybercrime and security laws and enforcing them.

Ministry spokesperson Wetang said the government was aware of the collateral damage a shutdown could cause but felt it had no choice except to press ahead with a lawsuit that could bring one about.

“Definitely we are really concerned about the people on Facebook,” he said. “But since this is a crime that already happened, the evidence is so clear … it is impossible that we don’t take action.”

Meta Faces Backlash Over Canada News Block as Wildfires Rage

Meta is being accused of endangering lives by blocking news links in Canada at a crucial moment, when thousands have fled their homes and are desperate for wildfire updates that once would have been shared widely on Facebook.

The situation “is dangerous,” said Kelsey Worth, 35, one of the nearly 20,000 residents of Yellowknife, along with thousands more in small towns, ordered to evacuate as wildfires advanced across the Northwest Territories.

She described to AFP how “insanely difficult” it has been for herself and other evacuees to find verifiable information about the fires blazing across the near-Arctic territory and other parts of Canada.

“Nobody’s able to know what’s true or not,” she said.

“And when you’re in an emergency situation, time is of the essence,” she said, explaining that many Canadians until now have relied on social media for news.

Meta on Aug. 1 started blocking the distribution of news links and articles on its Facebook and Instagram platforms in response to a recent law requiring digital giants to pay publishers for news content.

The company has been in a virtual showdown with Ottawa over the bill, which was passed in June but only takes effect next year.

Building on similar legislation introduced in Australia, the bill aims to support a struggling Canadian news sector that has seen a flight of advertising dollars and hundreds of publications closed in the last decade.

It requires companies like Meta and Google to make fair commercial deals with Canadian outlets for the news and information shared on their platforms, estimated in a report to parliament to be worth US$250 million per year, or face binding arbitration.

But Meta has said the bill is flawed and insisted that news outlets share content on its Facebook and Instagram platforms to attract readers, benefiting them and not the Silicon Valley firm.

Profits over safety

Canadian Prime Minister Justin Trudeau this week assailed Meta, telling reporters it was “inconceivable that a company like Facebook is choosing to put corporate profits ahead of (safety)… and keeping Canadians informed about things like wildfires.”

Almost 80% of all online advertising revenues in Canada go to Meta and Google, which has expressed its own reservations about the new law.

Ollie Williams, director of Cabin Radio in the far north, called Meta’s move to block news sharing “stupid and dangerous.”

He suggested in an interview with AFP that “Meta could lift the ban temporarily in the interests of preservation of life and suffer no financial penalty because the legislation has not taken effect yet.”

Nicolas Servel, over at Radio Taiga, a French-language station in Yellowknife, noted that some had found ways of circumventing Meta’s block.

They “found other ways to share” information, he said, such as taking screenshots of news articles and sharing them from personal, rather than corporate, social media accounts.

‘Life and death’

Several large newspapers in Canada such as The Globe and Mail and the Toronto Star have launched campaigns to try to attract readers directly to their sites.

But for many smaller news outlets, workarounds have proven challenging as social media platforms have become entrenched.

Public broadcaster CBC in a letter this week pressed Meta to reverse course.

“Time is of the essence,” wrote CBC president Catherine Tait. “I urge you to consider taking the much-needed humanitarian action and immediately lift your ban on vital Canadian news and information to communities dealing with this wildfire emergency.”

As more than 1,000 wildfires burn across Canada, she said, “The need for reliable, trusted, and up-to-date information can literally be the difference between life and death.”

Meta — which did not respond to AFP requests for comment — rejected CBC’s suggestion. Instead, it urged Canadians to use the “Safety Check” function on Facebook to let others know if they are safe or not.

Patrick White, a professor at the University of Quebec in Montreal, said Meta has shown itself to be a “bad corporate citizen.”

“It’s a matter of public safety,” he said, adding that he remains optimistic Ottawa will eventually reach a deal with Meta and other digital giants that addresses their concerns.

Q&A: How Do Europe’s Sweeping Rules for Tech Giants Work?

Google, Facebook, TikTok and other Big Tech companies operating in Europe must comply with one of the most far-reaching efforts to clean up what people see online.

The European Union’s groundbreaking new digital rules took effect Friday for the biggest platforms. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc, long a global leader in cracking down on tech giants.

The DSA is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, already have made changes.

Here’s a look at what has changed:

Which platforms are affected? 

So far, 19. They include eight social media platforms: Facebook; TikTok; X, formerly known as Twitter; YouTube; Instagram; LinkedIn; Pinterest; and Snapchat.

There are five online marketplaces: Amazon, Booking.com, China’s Alibaba and AliExpress, and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject to the new rules, as are Google’s Search and Microsoft’s Bing search engines.

Google Maps and Wikipedia round out the list. 

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — face the DSA’s highest level of regulation. 

Brussels insiders, however, have pointed to some notable omissions, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later. 

Any business providing digital services to Europeans will eventually have to comply with the DSA. Those businesses will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

What’s changing?

Platforms have rolled out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly. 

The DSA “will have a significant impact on the experiences Europeans have when they open their phones or fire up their laptops,” Nick Clegg, Meta’s president for global affairs, said in a blog post. 

Facebook’s and Instagram’s existing tools to report content will be easier to access. Amazon opened a new channel for reporting suspect goods. 

TikTok gave users an extra option for flagging videos, such as for hate speech and harassment or frauds and scams, which will be reviewed by an additional team of experts, according to the app’s Chinese parent company, ByteDance.

Google is offering more “visibility” into content moderation decisions and different ways for users to contact the company. It didn’t offer specifics. Under the DSA, Google and other platforms have to provide more information about why posts are taken down.

Facebook, Instagram, TikTok and Snapchat also are giving people the option to turn off automated systems that recommend videos and posts based on their profiles. Such systems have been blamed for leading social media users to increasingly extreme posts. 

The DSA also prohibits targeting vulnerable categories of people, including children, with ads. Platforms like Snapchat and TikTok will stop allowing teen users to be targeted by ads based on their online activities. 

Google will provide more information about targeted ads shown to people in the EU and give researchers more access to data on how its products work. 

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing it’s being treated unfairly. 

Nevertheless, Zalando is launching content-flagging systems for its website, even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes. 

Amazon has filed a similar case with a top EU court.

What if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. 

“The real test begins now,” said European Commissioner Thierry Breton, who oversees digital policy. He vowed to “thoroughly enforce the DSA and fully use our new powers to investigate and sanction platforms where warranted.” 

But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech. 

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work. 

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia. 

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels think tank. 

Big platforms have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These assessments are due by the end of August and then they will be independently audited. 

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work. 

What about the rest of the world? 

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of use to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe and “will be implemented globally,” said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia. 

Snapchat said its new reporting and appeal process for flagging illegal content or accounts that break its rules will be rolled out first in the EU and then globally in the coming months. 

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

US Sues SpaceX for Discriminating Against Refugees, Asylum-Seekers

The U.S. Justice Department is suing Elon Musk’s SpaceX for refusing to hire refugees and asylum-seekers at the rocket company.

In a lawsuit filed on Thursday, the Justice Department said SpaceX routinely discriminated against these job applicants between 2018 and 2022, in violation of U.S. immigration laws.

The lawsuit says that Musk and other SpaceX officials falsely claimed the company was allowed to hire only U.S. citizens and permanent residents due to export control laws that regulate the transfer of sensitive technology.

“U.S. law requires at least a green card to be hired at SpaceX, as rockets are advanced weapons technology,” Musk wrote in a June 16, 2020, tweet cited in the lawsuit.

In fact, U.S. export control laws impose no such restrictions, according to the Justice Department.

Those laws limit the transfer of sensitive technology to foreign entities, but they do not prevent high-tech companies such as SpaceX from hiring job applicants who have been granted refugee or asylum status in the U.S. (Foreign nationals, however, need a special permit.)

“Under these laws, companies like SpaceX can hire asylees and refugees for the same positions they would hire U.S. citizens and lawful permanent residents,” the Department said in a statement. “And once hired, asylees and refugees can access export-controlled information and materials without additional government approval, just like U.S. citizens and lawful permanent residents.”

The company did not respond to a VOA request for comment on the lawsuit and whether it had changed its hiring policy.

Recruiters discouraged refugees, say investigators

The Justice Department’s civil rights division launched an investigation into SpaceX in 2020 after learning about the company’s alleged discriminatory hiring practices.

The inquiry discovered that SpaceX “failed to fairly consider or hire asylees and refugees because of their citizenship status and imposed what amounted to a ban on their hire regardless of their qualification, in violation of federal law,” Assistant Attorney General Kristen Clarke said in a statement.

“Our investigation also found that SpaceX recruiters and high-level officials took actions that actively discouraged asylees and refugees from seeking work opportunities at the company,” Clarke said.

According to data SpaceX provided to the Justice Department, out of more than 10,000 hires between September 2018 and May 2022, SpaceX hired only one person described as an asylee on his application.

The company hired the applicant about four months after the Justice Department notified it about its investigation, according to the lawsuit.

No refugees were hired during this period.

“Put differently, SpaceX’s own hiring records show that SpaceX repeatedly rejected applicants who identified as asylees or refugees because it believed that they were ineligible to be hired due to” export regulations, the lawsuit says.

On one occasion, a recruiter turned down an asylee “who had more than nine years of relevant engineering experience and had graduated from Georgia Tech University,” the lawsuit says.

Suit seeks penalties, change

SpaceX, based in Hawthorne, California, designs, manufactures and launches advanced rockets and spacecraft.

The Justice Department’s lawsuit asks an administrative judge to order SpaceX to “cease and desist” its alleged hiring practices and seeks civil penalties and policy changes.

US Seeks to Extend Science, Tech Agreement With China for 6 Months

The U.S. State Department, in coordination with other agencies from President Joe Biden’s administration, is seeking a six-month extension of the U.S.-China Science and Technology Agreement (STA) that is due to expire on August 27.

The short-term extension comes as several Republican congressional members voiced concerns that China has previously leveraged the agreement to advance its military objectives and may continue to do so.

The State Department said the brief extension will keep the STA in force while the United States negotiates with China to amend and strengthen the agreement. It does not commit the U.S. to a longer-term extension.

“We are clear-eyed to the challenges posed by the PRC’s national strategies on science and technology, Beijing’s actions in this space, and the threat they pose to U.S. national security and intellectual property, and are dedicated to protecting the interests of the American people,” a State Department spokesperson said Wednesday.

But congressional critics worry that research partnerships organized under the STA could have developed technologies that could later be used against the United States.

“In 2018, the National Oceanic and Atmospheric Administration (NOAA) organized a project with China’s Meteorological Administration — under the STA — to launch instrumented balloons to study the atmosphere,” said Republican Representatives Mike Gallagher, Elise Stefanik and others in a June 27 letter to U.S. Secretary of State Antony Blinken.

“As you know, a few years later, the PRC used similar balloon technology to surveil U.S. military sites on U.S. territory — a clear violation of our sovereignty.”

The STA was originally signed in 1979 by then-U.S. President Jimmy Carter and then-PRC leader Deng Xiaoping. Under the agreement, the two countries cooperate in fields including agriculture, energy, space, health, environment, earth sciences and engineering, as well as educational and scholarly exchanges.

The agreement has been renewed roughly every five years since its inception. 

The most recent extension was in 2018. 

AI Firms Under Fire for Allegedly Infringing on Copyrights

New artificial intelligence tools that write human-like prose and create stunning images have taken the world by storm. But these awe-inspiring technologies are not creating something out of nothing; they’re trained on lots and lots of data, some of which come from works under copyright protection.

Now, the writers, artists and others who own the rights to the material used to teach ChatGPT and other generative AI tools want to stop what they see as blatant copyright infringement of mass proportions.

With billions of dollars at stake, U.S. courts will most likely have to sort out who owns what, using the 1976 Copyright Act, the same law that has determined who owns much of the content published on the internet.

U.S. copyright law seeks to strike a balance between protecting the rights of content creators and fostering creativity and innovation. Among other things, the law gives content creators the exclusive right to reproduce their original work and to prepare derivative works.

But it also provides for an exception. Known as “fair use,” it permits the use of copyrighted material without the copyright holder’s permission for content such as criticism, comment, news reporting, teaching and research.

On the one hand, “we want to allow people who have currently invested time, money, creativity to reap the rewards of what they have done,” said Sean O’Connor, a professor of law at George Mason University. “On the other hand, we don’t want to give them such strong rights that we inhibit the next generation of innovation.”

Is AI ‘scraping’ fair use?

The development of generative AI tools is testing the limits of “fair use,” pitting content creators against technology companies, with the outcome of the dispute promising wide-ranging implications for innovation and society at large.

In the 10 months since ChatGPT’s groundbreaking launch, AI companies have faced a rapidly increasing number of lawsuits over content used to train generative AI tools.  The plaintiffs are seeking damages and want the courts to end the alleged infringement.

In January, three visual artists filed a proposed class-action lawsuit against Stability AI Ltd. and two others in San Francisco, alleging that Stability “scraped” more than 5 billion images from the internet to train its popular image generator Stable Diffusion, without the consent of copyright holders.

Stable Diffusion is a “21st-century collage tool” that “remixes the copyrighted works of millions of artists whose work was used as training data,” according to the lawsuit.

In February, stock photo company Getty Images filed its own lawsuit against Stability AI in both the United States and Britain, saying the company copied more than 12 million photos from Getty’s collection without permission or compensation.

In June, two U.S.-based authors sued OpenAI, the creator of ChatGPT, claiming the company’s training data included nearly 300,000 books pulled from illegal “shadow library” websites that offer copyrighted books.

“A large language model’s output is entirely and uniquely reliant on the material in its training dataset,” the lawsuit says.

Last month, American comedian and author Sarah Silverman and two other writers sued OpenAI and Meta, the parent company of Facebook, over the same claims, saying their chatbots were trained on books that had been illegally acquired.

The lawsuit against OpenAI includes what it describes as “very accurate summaries” of the authors’ books generated by ChatGPT, suggesting the company illegally “copied” and then used them to train the chatbot.

The artificial intelligence companies have rejected the allegations and asked the courts to dismiss the lawsuits.

In a court filing in April, Stability AI, research lab Midjourney and online art gallery DeviantArt wrote that visual artists who sue “fail to identify a single allegedly infringing output image, let alone one that is substantially similar to any of their copyrighted works.”

For its part, OpenAI has defended its use of copyrighted material as “fair use,” saying it pulled the works from publicly available datasets on the internet.

The cases are slowly making their way through the courts. It is too early to say how judges will decide.

Last month, a federal judge in San Francisco said he was inclined to toss out most of a lawsuit brought by the three artists against Stability AI but indicated that the claim of direct infringement may continue.

“The big question is fair use,” said Robert Brauneis, a law professor and co-director of the Intellectual Property Program at George Washington University. “I would not be surprised if some of the courts came out in different ways, that some of the cases said, ‘Yes, fair use.’ And others said, ‘No.’”

If the courts are split, the question could eventually go to the Supreme Court, Brauneis said.

Assessing copyright claims

Training generative AI tools to create new works raises two legal questions: Is the data use authorized? And is the new work it creates “derivative” or “transformative”?

The answer is not clear-cut, O’Connor said.

“On the one hand, what the supporters of the generative AI models are saying is that they are acting not much differently than we as humans would do,” he said. “When we read books, watch movies, listen to music, and if we are talented, then we use those to train ourselves as models.

“The counterargument is that … it is categorically different from what humans do when they learn how to become creative themselves.”

While artificial intelligence companies claim their use of the data is fair, O’Connor said they still have to prove that the use was authorized.

“I think that’s a very close call, and I think they may lose on that,” he said.

On the other hand, the AI models can probably avoid liability for generating content that “seems sort of the style of a current author” but is not the same.

“That claim is probably not going to succeed,” O’Connor said. “It will be seen as just a different work.”

But Brauneis said content creators have a strong claim: The AI-generated output will likely compete with the original work.

Imagine you’re a magazine editor who wants an illustration to accompany an article about a particular bird, Brauneis suggested. You could do one of two things: Commission an artist or ask a generative AI tool like Stable Diffusion to create it for you. After a few attempts with the latter, you’ll probably get an image that you can use.

“One of the most important questions to ask about in fair use is, ‘Is this use a substitute, or is it competing with the work of art that is being copied?’” Brauneis said. “And the answer here may be yes. And if it is [competing], that really weighs strongly against fair use.”

This is not the first time that technology companies have been sued over their use of copyrighted material.

In 2005, the Authors Guild filed a class-action lawsuit against Google and three university libraries over Google’s digital books project, alleging “massive copyright infringement.”

In 2015, an appeals court ruled that the project, by then renamed Google Books, was protected under the fair use doctrine.

In 2007, Viacom sued both Google and YouTube for allowing users to upload and view copyrighted material owned by Viacom, including complete episodes of TV shows. The case was later settled out of court.

For Brauneis, the current “Wild West era of creating AI models” recalls YouTube’s freewheeling early days.

“They just wanted to get viewers, and they were willing to take a legal risk to do that,” Brauneis said. “That’s not the way YouTube operates now. YouTube has all sorts of precautions to identify copyrighted content that has not been permitted to be placed on YouTube and then to take it down.”

Artificial intelligence companies may make a similar pivot.

They may have justified using copyrighted material to test out their technology. But now that their models are working, they “may be willing to sit down and think about how to license content,” Brauneis said.

Kenyan Court Gives Meta and Sacked Moderators 21 Days to Pursue Settlement  

A Kenyan court has given Facebook’s parent company, Meta, and the content moderators who are suing it for unfair dismissal 21 days to resolve their dispute out of court, a court order showed on Wednesday.

The 184 content moderators are suing Meta and two subcontractors after they say they lost their jobs with one of the firms, Sama, for organizing a union.

The plaintiffs say they were then blacklisted from applying for the same roles at the second firm, Luxembourg-based Majorel, after Facebook switched contractors.

“The parties shall pursue an out of court settlement of this petition through mediation,” said the order by the Employment and Labour Relations Court, which was signed by lawyers for the plaintiffs, Meta, Sama and Majorel.

Kenya’s former chief justice, Willy Mutunga, and Hellen Apiyo, the acting commissioner for labor, will serve as mediators, the order said. If the parties fail to resolve the case within 21 days, the case will proceed before the court, it said.

Meta, Sama and Majorel did not immediately respond to requests for comment.

A judge ruled in April that Meta could be sued by the moderators in Kenya, even though it has no official presence in the east African country.

The case could have implications for how Meta works with content moderators globally. The U.S. social media giant works with thousands of moderators around the world, who review graphic content posted on its platform.

Meta has also been sued in Kenya by a former moderator over accusations of poor working conditions at Sama, and by two Ethiopian researchers and a rights institute, which accuse it of letting violent and hateful posts from Ethiopia flourish on Facebook.

Those cases are ongoing.

Meta said in May 2022, in response to the first case, that it required partners to provide industry-leading conditions. On the Ethiopia case, it said in December that hate speech and incitement to violence were against the rules of Facebook and Instagram.

India Lands Craft on Moon’s Unexplored South Pole

An Indian spacecraft has landed on the moon, becoming the first craft to touch down on the lunar surface’s south pole, the country’s space agency said.

India’s attempt to land on the moon Wednesday came days after Russia’s Luna-25 lander, also headed for the unexplored south pole, crashed into the moon.  

It was India’s second attempt to reach the south pole — four years ago, India’s lander crashed during its final approach.  

India has become the fourth country to achieve what is called a “soft landing” on the moon — a feat previously accomplished by the United States, China and the former Soviet Union.

However, none of those lunar missions landed at the south pole. 

The south polar region, where the terrain is rough and rugged, has never been explored.

The current mission, called Chandrayaan-3, blasted into space on July 14.

Europe’s Sweeping Rules for Tech Giants Are About to Kick In

Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online.

The first phase of the European Union’s groundbreaking new digital rules will take effect this week. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc — long a global leader in cracking down on tech giants.

The DSA, which the biggest platforms must start following Friday, is designed to keep users safe online and stop the spread of harmful content that’s either illegal or violates a platform’s terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans’ fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don’t comply, have already started making changes.

Here’s a look at what’s happening this week:

Which platforms are affected?

So far, 19. They include eight social media platforms: Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat.

There are five online marketplaces: Amazon, Booking.com, Google Shopping, China’s Alibaba AliExpress and Germany’s Zalando.

Mobile app stores Google Play and Apple’s App Store are subject, as are Google’s Search and Microsoft’s Bing search engine.

Google Maps and Wikipedia round out the list.

What about other online companies?

The EU’s list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU’s population — will face the DSA’s highest level of regulation.

Brussels insiders, however, have pointed to some notable omissions from the EU’s list, like eBay, Airbnb, Netflix and even PornHub. The list isn’t definitive, and it’s possible other platforms may be added later on.

Any business providing digital services to Europeans will eventually have to comply with the DSA. Smaller companies will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

Citing uncertainty over the new rules, Meta Platforms has held off launching its Twitter rival, Threads, in the EU.

What’s changing?

Platforms have started rolling out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly and objectively.

Amazon opened a new channel for reporting suspected illegal products and is providing more information about third-party merchants.

TikTok gave users an “additional reporting option” for content, including advertising, that they believe is illegal. Categories such as hate speech and harassment, suicide and self-harm, and misinformation or frauds and scams will help them pinpoint the problem.

Then, a “new dedicated team of moderators and legal specialists” will determine whether flagged content either violates its policies or is unlawful and should be taken down, according to the app from Chinese parent company ByteDance.

TikTok says the reason for a takedown will be explained to the person who posted the material and the one who flagged it, and decisions can be appealed.

TikTok users can turn off systems that recommend videos based on what a user has previously viewed. Such systems have been blamed for leading social media users to increasingly extreme posts. If personalized recommendations are turned off, TikTok’s feeds will instead suggest videos to European users based on what’s popular in their area and around the world.

The DSA prohibits targeting vulnerable categories of people, including children, with ads.

Snapchat said advertisers won’t be able to use personalization and optimization tools for teens in the EU and U.K. Snapchat users who are 18 and older also would get more transparency and control over ads they see, including “details and insight” on why they’re shown specific ads.

TikTok made similar changes, stopping users 13 to 17 from getting personalized ads “based on their activities on or off TikTok.”

Is there pushback?

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA’s list of the largest online platforms, arguing that it’s being treated unfairly.

Nevertheless, Zalando is launching content flagging systems for its website even though there’s little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes.

The company has supported the DSA, said Aurelie Caulier, Zalando’s head of public affairs for the EU.

“It will bring loads of positive changes” for consumers, she said. But “generally, Zalando doesn’t have systemic risk [that other platforms pose]. So that’s why we don’t think we fit in that category.”

Amazon has filed a similar case with a top EU court.

What happens if companies don’t follow the rules?

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. But don’t expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech.

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they’ll have to let the European Commission, the EU’s executive arm and top digital enforcer, look under the hood to see how their algorithms work.

EU officials “are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they’re also concerned about the way that platforms work and how they contribute to the negative effects,” said Sally Broughton Micova, an associate professor at the University of East Anglia.

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who’s also academic co-director at the Centre on Regulation in Europe, a Brussels-based think tank.

Under the rules, the biggest platforms will have to identify and assess potential systemic risks and whether they’re doing enough to reduce them. These risk assessments are due by the end of August and then they will be independently audited.

The audits are expected to be the main tool to verify compliance — though the EU’s plan has faced criticism for lacking details that leave it unclear how the process will work.

What about the rest of the world?

Europe’s changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of service to provide more information on “problematic users and content.” Those alterations won’t be limited to Europe, said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia.

“The rules and processes that govern Wikimedia projects worldwide, including any changes in response to the DSA, are as universal as possible. This means that changes to our Terms of Use and Office Actions Policy will be implemented globally,” it said in a statement.

It’s going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren’t isolated to Europe and that social media influencers can have global reach.

The regulations are “dealing with multichannel networks that operate globally. So there is going to be a ripple effect once you have kind of mitigations that get taken into place,” she said.

Meta Rolls Out Web Version of Threads 

Meta Platforms on Tuesday launched the web version of its new text-first social media platform Threads, in a bid to retain professional users and gain an edge over rival X, formerly Twitter.

Threads users will now be able to access the microblogging platform by logging in to its website from their computers, the Facebook and Instagram owner said.

The widely anticipated roll out could help Threads gain broader acceptance among power users like brands, company accounts, advertisers and journalists, who can now take advantage of the platform by using it on a bigger screen.

Threads, which crossed 100 million sign-ups for the app within five days of its launch on July 5, saw a decline in its popularity as users returned to the more familiar platform X after the initial rush.

In just over a month, daily active users on the Android version of the Threads app dropped to 10.3 million from a peak of 49.3 million, according to a report, dated August 10, by analytics platform Similarweb.

The company will be adding more functionality to the web experience in the coming weeks, Meta said.

Meta to Soon Launch Web Version of Threads in Race with X for Users

Meta Platforms is set to roll out the web version of its new text-first social media platform Threads, hoping to gain an edge over X, formerly Twitter, as the initial surge in users wanes.

The widely anticipated web version will make Threads more useful for power users like brands, company accounts, advertisers and journalists.

Meta did not give a date for the launch, but Instagram head Adam Mosseri said it could happen soon.

“We are close on web…,” Mosseri said in a post on Threads on Friday. The launch could happen as early as this week, according to a report in the Wall Street Journal.

Threads, which launched as an Android and iOS app on July 5 and gained 100 million users in just five days, saw its popularity drop as users returned to the more familiar platform X after the initial rush to try Meta’s new offering. 

In just over a month, its daily active users on the Android app dropped to 10.3 million from a peak of 49.3 million, according to a report by analytics platform Similarweb dated Aug. 10. 

Meanwhile, Meta is moving quickly to launch new features. Threads now offers the ability to set post notifications for accounts and view posts in a chronological feed. 

It will soon roll out an improved search that could allow users to search for specific posts and not just accounts. 

Biden Administration Announces More New Funding for Rural Broadband Infrastructure

The Biden administration on Monday continued its push toward internet-for-all by 2030, announcing about $667 million in new grants and loans to build more broadband infrastructure in the rural U.S.

“With this investment, we’re getting funding to communities in every corner of the country because we believe that no kid should have to sit in the back of a mama’s car in a McDonald’s parking lot in order to do homework,” said Mitch Landrieu, the White House’s infrastructure coordinator, in a call with reporters.

The 37 new recipients represent the fourth round of funding under the program, dubbed ReConnect by the U.S. Department of Agriculture. Another 37 projects received $771.4 million in grants and loans announced in April and June.

The money flowing through federal broadband programs, including what was announced Monday and the $42.5 billion infrastructure program detailed earlier this summer, will lead to a new variation on “the electrification of rural America,” Landrieu said, repeating a common Biden administration refrain.

The largest award went to the Ponderosa Telephone Co. in California, which received more than $42 million to deploy fiber networks in Fresno County. In total, more than 1,200 people, 12 farms and 26 other businesses will benefit from that effort alone, according to USDA.

The telephone cooperatives, counties and telecommunications companies that won the new awards are based in 22 states and the Marshall Islands.

At least half of the households in areas receiving the new funding lack access to internet speeds of 100 megabits per second download and 20 Mbps upload — what the federal government considers “underserved” in broadband terminology. The recipients’ mandate is to build networks that raise those levels to at least 100 Mbps upload and 100 Mbps download speeds for every household, business and farm in their service areas.

Agriculture Secretary Tom Vilsack said the investments could bring new economic opportunities to farmers, allow people without close access to medical care to see specialist doctors through telemedicine and increase academic offerings, including Advanced Placement courses in high schools.

“The fact that this administration understands and appreciates the need for continued investment in rural America to create more opportunity is something that I’m really excited about,” Vilsack said on the media call.  

Russia’s Luna-25 Crashes Into Moon 

Russia’s Luna-25 spacecraft has crashed into the moon.

“The apparatus moved into an unpredictable orbit and ceased to exist as a result of a collision with the surface of the moon,” Roscosmos, the Russian space agency, said Sunday.

On Saturday, the agency said it had a problem with the craft and lost contact with it.

The unmanned robot lander was set to land on the moon’s south pole Monday, ahead of an Indian craft scheduled to land on the south pole later this week.

Scientists are eager to explore the south pole because they believe water may be there and that the water could be transformed by future astronauts into air and rocket fuel.

Russia’s last moon launch was in 1976, during the Soviet era.

Some information in this report came from The Associated Press and Reuters.

Russia Fines Google $32,000 for Videos About Ukraine Conflict

A Russian court on Thursday imposed a $32,000 fine on Google for failing to delete allegedly false information about the conflict in Ukraine.

The move by a magistrate’s court follows similar actions in early August against Apple and the Wikimedia Foundation that hosts Wikipedia.

According to Russian news reports, the court found that the YouTube video service, which is owned by Google, was guilty of not deleting videos with incorrect information about the conflict — which Russia characterizes as a “special military operation.”

Google was also found guilty of not removing videos that suggested ways of gaining entry to facilities which are not open to minors, news agencies said, without specifying what kind of facilities were involved.

In Russia, a magistrate court typically handles administrative violations and low-level criminal cases.

Since sending troops into Ukraine in February 2022, Russia has enacted an array of measures to punish any criticism or questioning of the military campaign.

Some critics have received severe punishments. Opposition figure Vladimir Kara-Murza was sentenced this year to 25 years in prison for treason stemming from speeches he made against Russia’s actions in Ukraine.

Texas OKs Plan to Mandate Tesla Tech for EV Chargers in State

Texas on Wednesday approved its plan to require companies to include Tesla’s technology in electric vehicle charging stations to be eligible for federal funds, despite calls for more time to re-engineer and test the connectors.

The decision by Texas, the biggest recipient of a $5 billion program meant to electrify U.S. highways, is being closely watched by other states and is a step forward for Tesla CEO Elon Musk’s plans to make its technology the U.S. charging standard.

Tesla’s efforts are facing early tests as some states start rolling out the funds. The company won a slew of projects in Pennsylvania’s first round of funding announced on Monday but none in Ohio last month.

Federal rules require companies to offer the rival Combined Charging System, or CCS, a U.S. standard preferred by the Biden administration, as a minimum to be eligible for the funds.

But individual states can add their own requirements on top of CCS before distributing the federal funds at a local level.

Ford Motor and General Motors’ announcement about two months ago that they planned to adopt Tesla’s North American Charging Standard, or NACS, sent shockwaves through the industry and prompted a number of automakers and charging companies to embrace the technology.

In June, Reuters reported that Texas, which will receive and deploy $407.8 million over five years, planned to mandate companies to include Tesla’s plugs. Washington state has talked about similar plans, and Kentucky has mandated it.

Florida, another major recipient of funds, recently revised its plans, saying it would mandate NACS one year after standards body SAE International, which is reviewing the technology, formally recognizes it. 

Some charging companies wrote to the Texas Transportation Commission opposing the requirement in the first round of funds. They cited concerns that supply chain and certification issues with Tesla’s connectors could put the successful deployment of EV chargers at risk.

That forced Texas to defer a vote on the plan twice as it sought to understand NACS and its implications, before the commission voted unanimously to approve the plan on Wednesday.

“The two-connector approach being proposed will help assure coverage of a minimum of 97% of the current, over 168,000 electric vehicles with fast charge ports in the state,” Humberto Gonzalez, a director at Texas’ department of transportation, said while presenting the state’s plan to the commissioners.

Musk’s X Delays Access to Content on Reuters, NY Times, Social Media Rivals

Social media company X, formerly known as Twitter, delayed access to links to content on the Reuters and New York Times websites as well as rivals like Bluesky, Facebook and Instagram, according to a Washington Post report on Tuesday.

Clicking a link on X to one of the affected websites resulted in a delay of about five seconds before the webpage loaded, The Washington Post reported, citing tests it conducted on Tuesday. Reuters also saw a similar delay in tests it ran.

By late Tuesday afternoon, X appeared to have eliminated the delay. When contacted for comment, X confirmed the delay was removed but did not elaborate.

Billionaire Elon Musk, who bought Twitter in October, has previously lashed out at news organizations and journalists who have reported critically on his companies, which include Tesla and SpaceX. Twitter has previously prevented users from posting links to competing social media platforms.

Reuters could not establish the precise time when X began delaying links to some websites.

A user on Hacker News, a tech forum, posted about the delay earlier on Tuesday and wrote that X began delaying links to the New York Times on Aug. 4. On that day, Musk criticized the publication’s coverage of South Africa and accused it of supporting calls for genocide. Reuters has no evidence that the two events are related.

A spokesperson for the New York Times said it has not received an explanation from X about the link delay.

“While we don’t know the rationale behind the application of this time delay, we would be concerned by targeted pressure applied to any news organization for unclear reasons,” the spokesperson said on Tuesday.

A Reuters spokesperson said: “We are aware of the report in the Washington Post of a delay in opening links to Reuters stories on X. We are looking into the matter.”

Bluesky, an X rival that has Twitter co-founder Jack Dorsey on its board, did not reply to a request for comment.

Meta, which owns Facebook and Instagram, did not immediately respond to a request for comment.

Google to Train 20,000 Nigerians in Digital Skills

Google plans to train 20,000 Nigerian women and youth in digital skills and provide a grant of $1.6 million to help the government create 1 million digital jobs in the country, its Africa executives said on Tuesday. 

Nigeria plans to create digital jobs for its teeming youth population, Vice President Kashim Shettima told Google Africa executives during a meeting in Abuja. Shettima did not provide a timeline for creating the jobs. 

Google Africa executives said a grant from its philanthropic arm in partnership with Data Science Nigeria and the Creative Industry Initiative for Africa will facilitate the program. 

Shettima said Google’s initiative aligned with the government’s commitment to increase youth participation in the digital economy. The government is also working with the country’s banks on the project, Shettima added. 

Google director for West Africa Olumide Balogun said the company would commit funds and provide digital skills to women and young people in Nigeria and also enable startups to grow, which will create jobs. 

Google is committed to investing in digital infrastructure across Africa, Charles Murito, Google Africa’s director of government relations and public policy, said during the meeting, adding that digital transformation can be a job enabler. 

Fiction Writers Fear Rise of AI, Yet See It as a Story

For a vast number of book writers, artificial intelligence is a threat to their livelihood and the very idea of creativity. More than 10,000 of them endorsed an open letter from the Authors Guild this summer, urging AI companies not to use copyrighted work without permission or compensation.

At the same time, AI is a story to tell, and no longer just science fiction.

As present in the imagination as politics, the pandemic, or climate change, AI has become part of the narrative for a growing number of novelists and short story writers who only need to follow the news to imagine a world upended.

“I’m frightened by artificial intelligence, but also fascinated by it. There’s a hope for divine understanding, for the accumulation of all knowledge, but at the same time there’s an inherent terror in being replaced by non-human intelligence,” said Helen Phillips, whose upcoming novel “Hum” tells of a wife and mother who loses her job to AI.

“We’ve been seeing more and more about AI in book proposals,” said Ryan Doherty, vice president and editorial director at Celadon Books, which recently signed Fred Lunzker’s novel “Sike,” featuring an AI psychiatrist.

“It’s the zeitgeist right now. And whatever is in the cultural zeitgeist seeps into fiction,” Doherty said. 

Other AI-themed novels expected in the next two years include Sean Michaels’ “Do You Remember Being Born?” — in which a poet agrees to collaborate with an AI poetry company; Bryan Van Dyke’s “In Our Likeness,” about a bureaucrat and a fact-checking program with the power to change facts; and A.E. Osworth’s “Awakened,” about a gay witch and her titanic clash with AI.

Crime writer Jeffrey Diger, known for his thrillers set in contemporary Greece, is working on a novel touching upon AI and the metaverse, the outgrowth of being “continually on the lookout for what’s percolating on the edge of societal change,” he said.

Authors are invoking AI to address the most human questions.

In Sierra Greer’s “Annie Bot,” the title character is an AI mate designed for a human male. For Greer, the novel was a way to explore her character’s “urgent desire to please,” she said, adding that a robot girlfriend enabled her “to explore desire, respect, and longing in ways that felt very new and strange to me.”

Amy Shearn’s “Animal Instinct” has its origins in the pandemic and in her personal life; she was recently divorced and had begun using dating apps.

“It’s so weird how, with apps, you start to feel as if you’re going person-shopping,” she said. “And I thought, wouldn’t it be great if you could really pick and choose the best parts of all these people you encounter and sort of cobble them together to make your ideal person?”

“Of course,” she added, “I don’t think anyone actually knows what their ideal person is, because so much of what draws us to mates is the unexpected, the ways in which people surprise us. That said, it seemed like an interesting premise for a novel.”

Some authors aren’t just writing about AI, but openly working with it.

Earlier this year, journalist Stephen Marche used AI to write the novella “Death of An Author,” for which he drew upon everyone from Raymond Chandler to Haruki Murakami. Screenwriter and humorist Simon Rich collaborated with Brent Katz and Josh Morgenthau for “I Am Code,” a thriller in verse that came out this month and was generated by the AI program “code-davinci-002.” (Filmmaker Werner Herzog reads the audiobook edition.)

Osworth, who is trans, wanted to address comments by “Harry Potter” author J.K. Rowling that have offended many in the trans community, and to wrest from her the power of magic. At the same time, they worried the fictional AI in their book sounded too human, and decided AI should speak for AI.

Osworth devised a crude program, based on the writings of Machiavelli among others, that would turn out a more mechanical kind of voice.

“I like to say that ChatGPT is a Ferrari, while what I came up with is a skateboard with one square wheel. But I was much more interested in the skateboard with one square wheel,” they said.

Michaels centers his new novel on a poet named Marian, in homage to poet Marianne Moore, and an AI program called Charlotte. He said the novel is about parenthood, labor, community, and “this technology’s implications for art, language and our sense of identity.”

Believing the spirit of “Do You Remember Being Born?” called for the presence of actual AI text, he devised a program that would generate prose and poetry, and uses an alternate format in the novel so readers know when he’s using AI.

In one passage, Marian is reviewing some of her collaboration with Charlotte.

“The preceding day’s work was a collection of glass cathedrals. I reread it with alarm. Turns of phrase I had mistaken for beautiful, which I now found unintelligible,” Michaels writes. “Charlotte had simply surprised me: I would propose a line, a portion of a line, and what the system spat back upended my expectations. I had been seduced by this surprise.”

And now AI speaks: “I had mistaken a fit of algorithmic exuberance for the truth.”

Chinese Surveillance Firm Selling Cameras With ‘Skin Color Analytics’

IPVM, a U.S.-based security and surveillance industry research group, says the Chinese surveillance equipment maker Dahua is selling cameras with what it calls a “skin color analytics” feature in Europe, raising human rights concerns. 

In a report released on July 31, IPVM said “the company defended the analytics as being a ‘basic feature of a smart security solution.'” The report is behind a paywall, but IPVM provided a copy to VOA Mandarin. 

Dahua’s ICC Open Platform guide for “human body characteristics” includes “skin color/complexion,” according to the report. In what Dahua calls a “data dictionary,” the company says that the “skin color types” that Dahua analytic tools would target are “yellow,” “black,” and “white.” VOA Mandarin verified this on Dahua’s Chinese website. 

The IPVM report also says that skin color detection is mentioned in the “Personnel Control” category, a feature Dahua touts as part of its Smart Office Park solution intended to provide security for large corporate campuses in China.  

Charles Rollet, co-author of the IPVM report, told VOA Mandarin by phone on August 1, “Basically what these video analytics do is that, if you turn them on, then the camera will automatically try and determine the skin color of whoever passes, whoever it captures in the video footage. 

“So that means the camera is going to be guessing or attempting to determine whether the person in front of it … has black, white or yellow — in their words — skin color,” he added.  

VOA Mandarin contacted Dahua for comment but did not receive a response. 

The IPVM report said that Dahua is selling cameras with the skin color analytics feature in three European nations. Each has a recent history of racial tension: Germany, France and the Netherlands.

‘Skin color is a basic feature’

Dahua said its skin tone analysis capability was an essential function in surveillance technology.  

In a statement to IPVM, Dahua said, “The platform in question is entirely consistent with our commitments to not build solutions that target any single racial, ethnic, or national group. The ability to generally identify observable characteristics such as height, weight, hair and eye color, and general categories of skin color is a basic feature of a smart security solution.”  

IPVM said the company had previously denied offering such a feature, and noted that skin color detection is uncommon in mainstream surveillance products. 

In many Western nations, facial recognition surveillance technologies have long drawn controversy over error rates that vary with skin color. Identifying skin color in surveillance applications raises human rights and civil rights concerns.  

“So it’s unusual to see it for skin color because it’s such a controversial and ethically fraught field,” Rollet said.  

Anna Bacciarelli, technology manager at Human Rights Watch (HRW), told VOA Mandarin that Dahua technology should not contain skin tone analytics.   

“All companies have a responsibility to respect human rights, and take steps to prevent or mitigate any human rights risks that may arise as a result of their actions,” she said in an email.

“Surveillance software with skin tone analytics poses a significant risk to the right to equality and non-discrimination, by allowing camera owners and operators to racially profile people at scale — likely without their knowledge, infringing privacy rights — and should simply not be created or sold in the first place.”  

Dahua denied that its surveillance products are designed to enable racial identification. On the website of its U.S. company, Dahua says, “contrary to allegations that have been made by certain media outlets, Dahua Technology has not and never will develop solutions targeting any specific ethnic group.” 

However, in February 2021, IPVM and the Los Angeles Times reported that Dahua provided a video surveillance system with “real-time Uyghur warnings” to the Chinese police that included eyebrow size, skin color and ethnicity.  

IPVM’s 2018 statistical report shows that since 2016, Dahua and another Chinese video surveillance company, Hikvision, have won contracts worth $1 billion from the government of China’s Xinjiang region, a center of Uyghur life. 

The U.S. Federal Communications Commission determined in 2022 that the products of Chinese technology companies such as Dahua and Hikvision, which have close ties to Beijing, posed a threat to U.S. national security. 

The FCC banned sales of these companies’ products in the U.S. “for the purpose of public safety, security of government facilities, physical security surveillance of critical infrastructure, and other national security purposes,” but not for other purposes.  

Before the U.S. sales bans, Hikvision and Dahua ranked first and second among global surveillance and access control firms, according to The China Project.  

‘No place in a liberal democracy’

On June 14, the European Union passed a revision proposal to its draft Artificial Intelligence Law, a precursor to completely banning the use of facial recognition systems in public places.  

“We know facial recognition for mass surveillance from China; this technology has no place in a liberal democracy,” Svenja Hahn, a German member of the European Parliament and Renew Europe Group, told Politico.  

Bacciarelli of HRW said in an email she “would seriously doubt such racial profiling technology is legal under EU data protection and other laws. The General Data Protection Regulation, a European Union regulation on Information privacy, limits the collection and processing of sensitive personal data, including personal data revealing racial or ethnic origin and biometric data, under Article 9. Companies need to make a valid, lawful case to process sensitive personal data before deployment.” 

“The current text of the draft EU AI Act bans intrusive and discriminatory biometric surveillance tech, including real-time biometric surveillance systems; biometric systems that use sensitive characteristics, including race and ethnicity data; and indiscriminate scraping of CCTV data to create facial recognition databases,” she said.  

In Western countries, companies are developing AI software for identifying race primarily as a marketing tool for selling to diverse consumer populations. 

The Wall Street Journal reported in 2020 that American cosmetics company Revlon had used recognition software from AI start-up Kairos to analyze how consumers of different ethnic groups use cosmetics, raising concerns among researchers that racial recognition could lead to discrimination.  

The U.S. government has long prohibited sectors such as healthcare and banking from discriminating against customers based on race. IBM, Google and Microsoft have restricted the provision of facial recognition services to law enforcement.  

Twenty-four states, counties and municipal governments in the U.S. have prohibited government agencies from using facial recognition surveillance technology. New York City, Baltimore, and Portland, Oregon, have even restricted the use of facial recognition in the private sector.  

Some civil rights activists have argued that racial identification technology is error-prone and could have adverse consequences for those being monitored. 

Rollet said, “If the camera is filming at night or if there are shadows, it can misclassify people.”  

Caitlin Chin is a fellow at the Center for Strategic and International Studies, a Washington think tank where she researches technology regulation in the United States and abroad. She emphasized that while Western technology companies mainly use facial recognition for business, Chinese technology companies are often happy to assist government agencies in monitoring the public.  

She told VOA Mandarin in an August 1 video call, “So this is something that’s both very dehumanizing but also very concerning from a human rights perspective, in part because if there are any errors in this technology that could lead to false arrests, it could lead to discrimination, but also because the ability to sort people by skin color on its own almost inevitably leads to people being discriminated against.”  

She also said that in general, especially when it comes to law enforcement and surveillance, people with darker skin have been disproportionately tracked and disproportionately surveilled, “so these Dahua cameras make it easier for people to do that by sorting people by skin color.”  

Virgin Galactic Flies Its First Tourists to the Edge of Space

Virgin Galactic rocketed to the edge of space with its first tourists Thursday, including a former British Olympian who bought his ticket 18 years ago and a mother-daughter duo from the Caribbean.

The space plane glided back to a runway landing at Spaceport America in the New Mexico desert, after a brief flight that gave passengers a few minutes of weightlessness.

Cheers erupted from families and friends watching from below when the craft’s rocket motor fired after it was released from the plane that had carried it aloft. The rocket ship reached about 88 kilometers high.

Richard Branson’s company expects to begin offering monthly trips to customers on its winged space plane, joining Jeff Bezos’ Blue Origin and Elon Musk’s SpaceX in the space tourism business.

Virgin Galactic passenger Jon Goodwin, who was among the first to buy a ticket in 2005, said he had faith that he would someday make the trip. The 80-year-old athlete — he competed in canoeing in the 1972 Olympics — has Parkinson’s disease and wants to be an inspiration to others.

“I hope it shows them that these obstacles can be the start rather than the end to new adventures,” he said in a statement.

Ticket prices were $200,000 when Goodwin signed up. The cost is now $450,000.

He was joined by sweepstakes winner Keisha Schahaff, 46, a health coach from Antigua, and her daughter, Anastatia Mayers, 18, a student at Scotland’s University of Aberdeen. Also on board: two pilots and the company’s astronaut trainer.

It was Virgin Galactic’s seventh trip to space since 2018, but the first with a ticket-holder. Branson, the company’s founder, hopped on board for the first full-size crew ride in 2021. Italian military and government researchers soared in June on the first commercial flight. About 800 people are currently on Virgin Galactic’s waiting list, according to the company.

Virgin Galactic’s rocket ship launches from the belly of an airplane, not from the ground, and requires two pilots in the cockpit. Once the mothership reaches a height of about 15 kilometers, the space plane is released and fires its rocket motor to make the final push to just over 80 kilometers up. Passengers can unstrap from their seats, float around the cabin for a few minutes and take in the sweeping views of Earth, before the space plane glides back home and lands on a runway.

In contrast, the capsules used by SpaceX and Blue Origin are fully automated and parachute back down.

Like Virgin Galactic, Blue Origin aims for the fringes of space, quick ups-and-downs from West Texas. Blue Origin has launched 31 people so far, but flights are on hold following a rocket crash last fall. The capsule, carrying experiments but no passengers, landed intact.

SpaceX is the only private company flying customers all the way to orbit, charging a much heftier price, too: tens of millions of dollars per seat. It’s already flown three private crews. NASA is its biggest customer, relying on SpaceX to ferry its astronauts to and from the International Space Station since 2020.

People have been taking on adventure travel for decades, the risks underscored by the recent implosion of the Titan submersible that killed five passengers on their way down to view the Titanic wreckage. Virgin Galactic suffered its own casualty in 2014 when its rocket plane broke apart during a test flight, killing one pilot. Yet space tourists are still lining up, ever since the first one rocketed into orbit in 2001 with the Russians.

Branson, who lives in the British Virgin Islands, watched Thursday’s flight from a party in Antigua. He had held a virtual lottery to establish a pecking order for the company’s first 50 customers — dubbed the Founding Astronauts. Virgin Galactic said the group agreed Goodwin would go first, given his age and his Parkinson’s.

China to Require all Apps to Share Business Details in New Oversight Push

China will require all mobile app providers in the country to file business details with the government, its information ministry said, marking Beijing’s latest effort to keep the industry on a tight leash. 

The Ministry of Industry and Information Technology (MIIT) said late on Tuesday that apps without proper filings will be punished after the grace period that will end in March next year, a move that experts say would potentially restrict the number of apps and hit small developers hard. 

You Yunting, a lawyer with Shanghai-based DeBund Law Offices, said the order is effectively requiring approvals from the ministry. The new rule is primarily aimed at combating online fraud but it will impact all apps in China, he said. 

Rich Bishop, co-founder of app publishing firm AppInChina, said the new rule is also likely to affect foreign-based developers which have been able to publish their apps easily through Apple’s App Store without showing any documentation to the Chinese government. 

Bishop said that in order to comply with the new rules, app developers now must either have a company in China or work with a local publisher.  

Apple did not immediately reply to a request for comment. 

The iPhone maker pulled over a hundred artificial intelligence (AI) apps from its App Store last week to comply with regulations after China introduced a new licensing regime for generative AI apps for the country.  

The ministry’s notice also said entities “engaged in internet information services through apps in such fields as news, publishing, education, film and television and religion should also submit relevant documents.” 

The requirement could affect the availability of popular social media apps such as X, Facebook and Instagram. Use of such apps is not allowed in China, but they can still be downloaded from app stores, enabling Chinese users to access them when traveling overseas. 

China already requires mobile games to obtain licenses before they launch in the country, and it had purged tens of thousands of unlicensed games from various app stores in 2020. 

Tencent’s WeChat, China’s most popular online social platform, said on Wednesday that mini apps, apps that can be opened within WeChat, must also follow the new rules. 

The company said that new apps must complete the filing before launch starting from September, while existing mini apps have until the end of March.  


US to Restrict High-Tech Investment in China

U.S. President Joe Biden is planning Wednesday to impose restrictions on U.S. investments in some high-tech industries in China.

Biden’s expected executive order could again heighten tensions between the U.S., the world’s biggest economy, and No. 2 China after a period in which leaders of the two countries have held several discussions aimed at airing their differences and seeking common ground.

The new restrictions would limit U.S. investments in such high-tech sectors in China as quantum computing, artificial intelligence and advanced semiconductors, but apparently not in the broader Chinese economy, which recently has been struggling to advance.

In a trip to China in July, Treasury Secretary Janet Yellen told Chinese Premier Li Qiang, “The United States will, in certain circumstances, need to pursue targeted actions to protect its national security. And we may disagree in these instances.”

Trying to protect its own security interests in the Indo-Pacific region and across the globe, National Security Adviser Jake Sullivan said in April that the U.S. has implemented “carefully tailored restrictions on the most advanced semiconductor technology exports” to China.

“Those restrictions are premised on straightforward national security concerns,” he said. “Key allies and partners have followed suit, consistent with their own security concerns.”

Sullivan said the restrictions are not, as Beijing has claimed, a “technology blockade.”

US Launches Contest to Use AI to Prevent Government System Hacks

The White House on Wednesday said it had launched a multimillion-dollar cyber contest to spur use of artificial intelligence to find and fix security flaws in U.S. government infrastructure, in the face of growing use of the technology by hackers for malicious purposes.  

“Cybersecurity is a race between offense and defense,” said Anne Neuberger, the U.S. government’s deputy national security adviser for cyber and emerging technology.

“We know malicious actors are already using AI to accelerate identifying vulnerabilities or build malicious software,” she added in a statement to Reuters.

Numerous U.S. organizations, from health care groups to manufacturing firms and government institutions, have been the target of hacking in recent years, and officials have warned of future threats, especially from foreign adversaries.  

Neuberger’s comments about AI echo those Canada’s cybersecurity chief Samy Khoury made last month. He said his agency had seen AI being used for everything from creating phishing emails and writing malicious computer code to spreading disinformation.

The two-year contest includes around $20 million in rewards and will be led by the Defense Advanced Research Projects Agency, the U.S. government body in charge of creating technologies for national security, the White House said.

Google, Anthropic, Microsoft, and OpenAI — the U.S. technology firms at the forefront of the AI revolution — will make their systems available for the challenge, the government said.

The contest signals official attempts to tackle an emerging threat that experts are still trying to fully grasp. In the past year, U.S. firms have launched a range of generative AI tools such as ChatGPT that allow users to create convincing videos, images, texts, and computer code. Chinese companies have launched similar models to catch up.

Experts say such tools could make it far easier to, for instance, conduct mass hacking campaigns or create fake profiles on social media to spread false information and propaganda.  

“Our goal with the DARPA AI challenge is to catalyze a larger community of cyber defenders who use the participating AI models to race faster – using generative AI to bolster our cyber defenses,” Neuberger said.

The Open Source Security Foundation (OpenSSF), a U.S. group of experts trying to improve open source software security, will be in charge of ensuring the “winning software code is put to use right away,” the U.S. government said.