SpaceX Sends Saudi Astronauts, Including Nation’s 1st Woman in Space, to International Space Station

Saudi Arabia’s first astronauts in decades rocketed toward the International Space Station on a chartered multimillion-dollar flight Sunday. 

SpaceX launched the ticket-holding crew, led by a retired NASA astronaut now working for the company that arranged the trip, from Kennedy Space Center. Also on board: a U.S. businessman who now owns a sports car racing team. 

The four should reach the space station in their capsule Monday morning; they’ll spend just over a week there before returning home with a splashdown off the Florida coast. 

Sponsored by the Saudi Arabian government, Rayyanah Barnawi, a stem cell researcher, became the first woman from the kingdom to go to space. She was joined by Ali al-Qarni, a fighter pilot with the Royal Saudi Air Force. 

They’re the first from their country to ride a rocket since a Saudi prince launched aboard shuttle Discovery in 1985. In a quirk of timing, they’ll be greeted at the station by an astronaut from the United Arab Emirates. 

“Hello from outer space! It feels amazing to be viewing Earth from this capsule,” Barnawi said after settling into orbit. 

Added al-Qarni: “As I look outside into space, I can’t help but think this is just the beginning of a great journey for all of us.” 

Rounding out the visiting crew: Knoxville, Tennessee’s John Shoffner, former driver and owner of a sports car racing team that competes in Europe, and chaperone Peggy Whitson, the station’s first female commander, who holds the U.S. record for most accumulated time in space: 665 days and counting. 

“It was a phenomenal ride,” Whitson said after reaching orbit. Her crewmates clapped their hands in joy. 

It’s the second private flight to the space station organized by Houston-based Axiom Space. The first was last year by three businessmen, with another retired NASA astronaut. The company plans to start adding its own rooms to the station in another few years, eventually removing them to form a stand-alone outpost available for hire. 

Axiom won’t say how much Shoffner and Saudi Arabia are paying for the planned 10-day mission. The company had previously cited a ticket price of $55 million each. 

NASA’s latest price list shows per-person, per-day charges of $2,000 for food and up to $1,500 for sleeping bags and other gear. Need to get your stuff to the space station in advance? Figure roughly $10,000 per pound ($20,000 per kilogram), the same fee for trashing it afterward. Need your items back intact? Double the price. 
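
Those per-day and per-pound rates are easy to turn into a rough bill. Below is a minimal back-of-the-envelope sketch in Python using only the figures quoted above; the 50 pounds of cargo is a hypothetical illustration, and treating “double the price” as twice the upmass fee is an interpretation, not something the price list spells out.

```python
# Back-of-the-envelope station-stay bill using the per-person rates quoted above.
# The 10-day stay matches the planned mission length; the 50 lb of cargo is a
# hypothetical illustration, not a reported figure.

FOOD_PER_DAY = 2_000      # food, per person per day ($)
GEAR_PER_DAY = 1_500      # sleeping bags and other gear, per day, upper bound ($)
FREIGHT_PER_LB = 10_000   # sending cargo up ($ per pound); same fee to trash it
RETURN_MULTIPLIER = 2     # "Need your items back intact? Double the price."

def stay_cost(days: int, cargo_lb: float, return_cargo: bool) -> int:
    """Rough per-person cost of a station stay under the quoted rates."""
    room_and_board = days * (FOOD_PER_DAY + GEAR_PER_DAY)
    if return_cargo:
        # One reading of "double the price": twice the upmass fee.
        freight = cargo_lb * FREIGHT_PER_LB * RETURN_MULTIPLIER
    else:
        # Cargo goes up, then is trashed at the same per-pound fee.
        freight = cargo_lb * (FREIGHT_PER_LB + FREIGHT_PER_LB)
    return int(room_and_board + freight)

print(f"${stay_cost(10, 50, return_cargo=True):,}")  # $1,035,000
```

Even on those rough numbers, the freight dwarfs the room and board.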

At least the email and video links are free. 

The guests will have access to most of the station as they conduct experiments, photograph Earth and chat with schoolchildren back home, demonstrating how kites fly in space when attached to a fan. 

After decades of shunning space tourism, NASA now embraces it, with two private missions planned each year. The Russian Space Agency has been doing it, off and on, for decades. 

“Our job is to expand what we do in low-Earth orbit across the globe,” said NASA’s space station program manager Joel Montalbano. 

SpaceX’s first-stage booster landed back at Cape Canaveral eight minutes after liftoff — a special treat for the launch day crowd, which included about 60 Saudis. 

“It was a very, very exciting day,” said Axiom’s Matt Ondler. 

Early Warning Systems Send Disaster Deaths Plunging, UN Says

Weather-related disasters have surged over the past 50 years, causing swelling economic damage even as early warning systems have meant dramatically fewer deaths, the United Nations said Monday. 

Extreme weather, climate and water-related events caused 11,778 reported disasters between 1970 and 2021, new figures from the U.N.’s World Meteorological Organization (WMO) show. 

Those disasters killed just over 2 million people and caused $4.3 trillion in economic losses. 

“The most vulnerable communities unfortunately bear the brunt of weather, climate and water-related hazards,” WMO chief Petteri Taalas said in a statement. 

The report found that more than 90% of reported deaths worldwide due to disasters in the 51-year period occurred in developing countries. 

But the agency also said improved early warning systems and coordinated disaster management had significantly reduced the human casualty toll. 

In a report issued two years ago covering disaster-linked deaths and losses between 1970 and 2019, WMO noted that at the beginning of that period the world was seeing more than 50,000 such deaths each year. 

By the 2010s, the disaster death toll had dropped below 20,000 annually. 

And in its update of that report, WMO said Monday that 22,608 disaster deaths were recorded globally in 2020 and 2021 combined. 

‘Early warnings save lives’ 

Cyclone Mocha, which wreaked havoc in Myanmar and Bangladesh last week, exemplifies this, Taalas said. 

Mocha “caused widespread devastation … impacting the poorest of the poor,” he said. 

But while Myanmar’s junta has put the death toll from the cyclone at 145, Taalas pointed out that during similar disasters in the past, “both Myanmar and Bangladesh suffered death tolls of tens and even hundreds of thousands of people.” 

“Thanks to early warnings and disaster management, these catastrophic mortality rates are now thankfully history. Early warnings save lives,” he added. 

The U.N. has launched a plan to ensure all nations are covered by disaster early warning systems by the end of 2027. 

Endorsing that plan is among the top strategic priorities at the meeting of WMO’s decision-making body, the World Meteorological Congress, which opens Monday. 

To date, only half of countries have such systems in place. 

Surging economic losses 

WMO meanwhile warned that while deaths have plunged, the economic losses incurred when weather, climate and water extremes hit have soared. 

The agency previously recorded economic losses that increased sevenfold between 1970 and 2019, rising from $49 million per day during the first decade to $383 million per day in the final one. 

Wealthy countries have been hardest hit by far in monetary terms.  

The United States alone incurred $1.7 trillion in losses, or 39% of the economic losses globally from disasters since 1970. 

But while the dollar figures on losses suffered in poorer nations were not particularly high, they were far higher in relation to the size of their economies, WMO noted. 

Developed nations accounted for more than 60% of losses from weather, climate and water disasters, but in more than four-fifths of cases, the economic losses were equivalent to less than 0.1% of gross domestic product (GDP). 

And no disasters saw reported economic losses greater than 3.5% of the respective GDPs. 

By comparison, in 7% of the disasters that hit the world’s least developed countries, losses equivalent to more than 5% of their GDP were reported, with several disasters causing losses equivalent to nearly a third of GDP. 

And for small island developing states, one-fifth of disasters brought economic losses of more than 5% of GDP, with some causing losses equivalent to 100% of GDP. 

SpaceX Launching Saudi Astronauts on Private Flight to Space Station

SpaceX’s next private flight to the International Space Station awaited takeoff Sunday, weather and rocket permitting.

The passengers include Saudi Arabia’s first astronauts in decades, as well as a Tennessee businessman who started his own sports car racing team. They’ll be led by a retired NASA astronaut who now works for the company that arranged the 10-day trip.

It’s the second charter flight organized by Houston-based Axiom Space. The company would not say how much the latest tickets cost; it previously cited per-seat prices of $55 million.

With its Falcon rocket already on the pad, SpaceX targeted a liftoff late Sunday afternoon from NASA’s Kennedy Space Center. It’s the same spot where Saudi Arabia’s first astronaut, a prince, soared in 1985.

Representing the Saudi Arabian government this time are Rayyanah Barnawi, a stem cell researcher set to become the kingdom’s first woman in space, and Royal Saudi Air Force fighter pilot Ali al-Qarni.

Rounding out the crew: John Shoffner, the racecar buff; and Peggy Whitson, who holds the U.S. record for the most accumulated time in space at 665 days.

Iraq Rebuilding Efforts Get High-Tech Boost

It’s been more than a decade since the end of the Iraq War. Much of the country still bears the scars of the U.S.-led invasion. But Iraqis today are working to clean up their country, and some have turned to technology for help. VOA’s Arash Arabasadi has more.

China Tells Tech Manufacturers: Stop Using US-Made Micron Chips

Stepping up a feud with Washington over technology and security, China’s government Sunday told users of computer equipment deemed sensitive to stop buying products from the biggest U.S. memory chipmaker, Micron Technology Inc. 

Micron products have unspecified “serious network security risks” that pose hazards to China’s information infrastructure and affect national security, the Cyberspace Administration of China said on its website. Its six-sentence statement gave no details. 

“Operators of critical information infrastructure in China should stop purchasing products from Micron Co.,” the agency said. 

The United States, Europe and Japan are reducing Chinese access to advanced chipmaking and other technology they say might be used in weapons at a time when President Xi Jinping’s government has threatened to attack Taiwan and is increasingly assertive toward Japan and other neighbors. 

Chinese officials have warned of unspecified consequences but appear to be struggling to find ways to retaliate without hurting China’s smartphone producers and other industries, or its efforts to develop its own processor chip suppliers. 

An official review of Micron under China’s increasingly stringent information security laws was announced April 4, hours after Japan joined Washington in imposing restrictions on Chinese access to technology to make processor chips on security grounds. 

Foreign companies have been rattled by police raids on two consulting firms, Bain & Co. and Capvision, and a due diligence firm, Mintz Group. Chinese authorities have declined to explain the raids but said foreign companies are obliged to obey the law. 

Business groups and the U.S. government have appealed to authorities to explain newly expanded legal restrictions on information and how they will be enforced. 

Sunday’s announcement appeared to try to reassure foreign companies. 

“China firmly promotes high-level opening up to the outside world and, as long as it complies with Chinese laws and regulations, welcomes enterprises and various platform products and services from various countries to enter the Chinese market,” the cyberspace agency said. 

Xi accused Washington in March of trying to block China’s development. He called on the public to “dare to fight.” 

Despite that, Beijing has been slow to retaliate, possibly to avoid disrupting Chinese industries that assemble most of the world’s smartphones, tablet computers and other consumer electronics. They import more than $300 billion worth of foreign chips every year. 

Beijing is pouring billions of dollars into trying to accelerate chip development and reduce the need for foreign technology. Chinese foundries can supply low-end chips used in autos and home appliances but can’t support smartphones, artificial intelligence and other advanced applications. 

The conflict has prompted warnings the world might decouple or split into separate spheres with incompatible technology standards that mean computers, smartphones and other products from one region wouldn’t work in others. That would raise costs and might slow innovation. 

U.S.-Chinese relations are at their lowest level in decades due to disputes over security, Beijing’s treatment of Hong Kong and Muslim ethnic minorities, territorial disputes and China’s multibillion-dollar trade surpluses. 

G7 Calls for ‘Responsible’ Use of Generative AI

The world must urgently assess the impact of generative artificial intelligence, G7 leaders said Saturday, announcing they will launch discussions this year on “responsible” use of the technology.

A working group will be set up to tackle issues from copyright to disinformation, the seven leading economies said in a final communique released during a summit in Hiroshima, Japan.

Text generation tools such as ChatGPT, image creators and music composed using AI have sparked delight, alarm and legal battles as creators accuse them of scraping material without permission.

Governments worldwide are under pressure to move quickly to mitigate the risks, with the chief executive of ChatGPT’s OpenAI telling U.S. lawmakers this week that regulating AI was essential.

“We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors,” the G7 statement said.

“We task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner … for discussions on generative AI by the end of this year,” it said.

“These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”

The new working group will be organized in cooperation with the OECD group of developed countries and the Global Partnership on Artificial Intelligence (GPAI), the statement added.

On Tuesday, OpenAI CEO Sam Altman testified before a U.S. Senate panel and urged Congress to impose new rules on big tech.

He insisted that generative AI developed by his company would one day “address some of humanity’s biggest challenges, like climate change and curing cancer.”

However, “we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said.

European Parliament lawmakers this month also took a first step towards EU-wide regulation of ChatGPT and other AI systems.

The text is to be put to the full parliament next month for adoption before negotiations with EU member states on a final law.

“While rapid technological change has been strengthening societies and economies, the international governance of new digital technologies has not necessarily kept pace,” the G7 said.

For AI and other emerging technologies including immersive metaverses, “the governance of the digital economy should continue to be updated in line with our shared democratic values,” the group said.

Among others, these values include fairness, respect for privacy and “protection from online harassment, hate and abuse,” it added.

US Supreme Court Lets Twitter Off Hook in Terror Lawsuit Over Istanbul Massacre

The U.S. Supreme Court on Thursday refused to clear a path for victims of attacks by militant organizations to hold social media companies liable under a federal anti-terrorism law for failing to prevent the groups from using their platforms, handing a victory to Twitter.

The justices, in a unanimous decision, reversed a lower court’s ruling that had revived a lawsuit against Twitter by the American relatives of Nawras Alassaf, a Jordanian man killed in a 2017 attack on an Istanbul nightclub during New Year’s celebrations that was claimed by the Islamic State militant group. 

The case was one of two that the Supreme Court weighed in its current term aimed at holding internet companies accountable for contentious content posted by users – an issue of growing concern for the public and U.S. lawmakers. 

The justices on Thursday, in a similar case against Google-owned YouTube, part of Alphabet Inc, sidestepped ruling on a bid to narrow a federal law protecting internet companies from lawsuits for content posted by their users — called Section 230 of the Communications Decency Act. 

That case involved an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was fatally shot in an Islamic State attack in Paris in 2015, against a lower court’s decision to throw out their lawsuit. 

The Istanbul massacre on Jan. 1, 2017, killed Alassaf and 38 others. His relatives accused Twitter of aiding and abetting the Islamic State, which claimed responsibility for the attack, by failing to police the platform for the group’s accounts or posts in violation of a federal law called the Anti-Terrorism Act that enables Americans to recover damages related to “an act of international terrorism.” 

Twitter and its backers had said that allowing lawsuits like this would threaten internet companies with liability for providing widely available services to billions of users because some of them may be members of militant groups, even as the platforms regularly enforce policies against terrorism-related content. 

The case hinged on whether the family’s claims sufficiently alleged that the company knowingly provided “substantial assistance” to an “act of international terrorism” that would allow the relatives to maintain their suit and seek damages under the anti-terrorism law.

After a judge dismissed the lawsuit, the San Francisco-based 9th U.S. Circuit Court of Appeals in 2021 allowed it to proceed, concluding that Twitter had refused to take “meaningful steps” to prevent Islamic State’s use of the platform. 

President Joe Biden’s administration supported Twitter, saying the Anti-Terrorism Act imposes liability for assisting a terrorist act and not for “providing generalized aid to a foreign terrorist organization” with no causal link to the act at issue. 

In the Twitter case, the 9th Circuit did not consider whether Section 230 barred the family’s lawsuit. Google and Meta’s Facebook, also defendants, did not formally join Twitter’s appeal.

Islamic State called the Istanbul attack revenge for Turkish military involvement in Syria. The main suspect, Abdulkadir Masharipov, an Uzbek national, was later captured by police.

Twitter in court papers has said that it has terminated more than 1.7 million accounts for violating rules against “threatening or promoting terrorism.” 

Montana Becomes First US State to Ban TikTok

Montana Governor Greg Gianforte on Wednesday signed legislation to ban Chinese-owned TikTok from operating in the state, making it the first U.S. state to ban the popular short video app.

Montana will make it unlawful for Google’s and Apple’s app stores to offer the TikTok app within its borders. The ban takes effect January 1, 2024.

TikTok has over 150 million American users, but a growing number of U.S. lawmakers and state officials are calling for a nationwide ban on the app over concerns about potential Chinese government influence on the platform.

In March, a congressional committee grilled TikTok chief executive Shou Zi Chew about whether the Chinese government could access user data or influence what Americans see on the app.

Gianforte, a Republican, said the bill will further “our shared priority to protect Montanans from Chinese Communist Party surveillance.”

TikTok, owned by Chinese tech company ByteDance, said in a statement the bill “infringes on the First Amendment rights of the people of Montana by unlawfully banning TikTok,” adding that they “will defend the rights of our users inside and outside of Montana.”

The company has previously denied that it ever shared data with the Chinese government and has said it would not do so if asked.

Montana, which has a population of just over 1 million people, said TikTok could face fines of $10,000 for each violation, plus additional fines of $10,000 per day, if it violates the ban. Apple and Google could face the same fines if they violate it.

The ban will likely face numerous legal challenges on the grounds that it violates the First Amendment free speech rights of users. An attempt by then-President Donald Trump to ban new downloads of TikTok and WeChat through a Commerce Department order in 2020 was blocked by multiple courts and never took effect.

TikTok’s free speech allies include several Democratic members of Congress, including Representative Alexandria Ocasio-Cortez, and First Amendment groups such as the American Civil Liberties Union.

Gianforte also prohibited the use on government-issued devices of any social media application that collects and provides personal information or data to foreign adversaries.

TikTok is working on an initiative called Project Texas, which creates a standalone entity to store American user data in the U.S. on servers operated by U.S. tech company Oracle.

‘It’s the Algorithms’: YouTube Sent Violent Gun Videos to 9-Year-Olds, Study Finds

When researchers at a nonprofit that studies social media wanted to understand the connection between YouTube videos and gun violence, they set up accounts on the platform that mimicked the behavior of typical boys living in the United States.

They simulated two 9-year-olds who liked video games. The accounts were identical, except that one clicked on the videos recommended by YouTube, and the other ignored the platform’s suggestions.

The account that clicked on YouTube’s suggestions was soon flooded with graphic videos about school shootings, tactical gun training videos and how-to instructions on making firearms fully automatic. One video featured an elementary school-age girl wielding a handgun; another showed a shooter using a .50-caliber gun to fire on a dummy head filled with lifelike blood and brains. Many of the videos violate YouTube’s policies against violent or gory content.

About a dozen a day

The findings show that despite YouTube’s rules and content moderation efforts, the platform is failing to stop the spread of frightening videos that could traumatize vulnerable children — or send them down dark roads of extremism and violence.

“Video games are one of the most popular activities for kids. You can play a game like ‘Call of Duty’ without ending up at a gun shop — but YouTube is taking them there,” said Katie Paul, director of the Tech Transparency Project, the research group that published its findings about YouTube on Tuesday. “It’s not the video games, it’s not the kids. It’s the algorithms.”

The accounts that followed YouTube’s suggested videos received 382 different firearms-related videos in a single month, or about 12 per day. The accounts that ignored YouTube’s recommendations still received some gun-related videos, but only 34 in total.

The researchers also created accounts mimicking 14-year-old boys; those accounts also received similar levels of gun- and violence-related content.

One of the videos recommended to the accounts was titled “How a Switch Works on a Glock (Educational Purposes Only).” YouTube later removed the video after determining it violated its rules, but an almost identical video popped up two weeks later under a slightly altered name and remains available.

A spokeswoman for YouTube defended the platform’s protections for children and noted that it requires users younger than 17 to get parental permission before using the site; accounts for users younger than 13 are linked to a parental account.

“We offer a number of options for younger viewers,” the company wrote in an emailed statement, “… which are designed to create a safer experience for tweens and teens.”

Shooters glorify violence

Along with TikTok, the video-sharing platform is one of the most popular sites for children and teens. Both sites have been criticized in the past for hosting, and in some cases promoting, videos that encourage gun violence, eating disorders and self-harm. Critics of social media have also pointed to the links between social media, radicalization and real-world violence.

The perpetrators behind many recent mass shootings have used social media and video streaming platforms to glorify violence or even livestream their attacks. In a post on YouTube, the shooter behind the 2018 attack that killed 17 in Parkland, Florida, wrote “I’m going to be a professional school shooter.”

The neo-Nazi gunman who killed eight people earlier this month at a Dallas-area shopping center also had a YouTube account that included videos about assembling rifles, the serial killer Jeffrey Dahmer and a clip from a school shooting scene in a television show.

YouTube has already removed some of the videos identified by researchers at the Tech Transparency Project, but in other instances the content remains available. Many big tech companies rely on automated systems to flag and remove content that violates their rules, but Paul said the findings from the group’s report show that greater investments in content moderation are needed.

In the absence of federal regulation, social media companies must do more to enforce their own rules, said Justin Wagner, director of investigations at Everytown for Gun Safety, a leading gun control advocacy organization. Wagner’s group also said the Tech Transparency Project’s report shows the need for tighter age restrictions on firearms-related content.

Similar concerns have been raised about TikTok after earlier reports showed the platform was recommending harmful content to teens.

TikTok has defended its site and its policies, which prohibit users younger than 13. Its rules also prohibit videos that encourage harmful behavior; users who search for content about topics including eating disorders automatically receive a prompt offering mental health resources.

ChatGPT’s Chief Testifies Before US Congress as Concerns Grow About AI Risks

The head of the artificial intelligence company that makes ChatGPT told the U.S. Congress on Tuesday that government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman testified at a Senate hearing Tuesday.

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. ChatGPT is a free chatbot tool that answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone, trained on Blumenthal’s floor speeches, reciting remarks written by ChatGPT after he asked the chatbot how he would open the hearing.

The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them.

Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel’s ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”

Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In her prepared remarks, IBM’s Montgomery asked Congress to take a “precision regulation” approach.

“This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.

US Announces Charges Related to Efforts by Russia, China, Iran to Steal Technology

U.S. law enforcement officials on Tuesday announced a series of criminal cases exposing the relentless efforts by Russia, China and Iran to steal sensitive U.S. technologies.  

The five cases, which spanned a wide range of protected U.S. technologies, were brought by a new “strike force” created earlier this year to deter foreign adversaries from obtaining advanced U.S. innovation.

“These charges demonstrate the Justice Department’s commitment to preventing sensitive technology from falling into the hands of foreign adversaries, including Russia, China, and Iran,” said Assistant Attorney General Matthew Olsen, who leads the Justice Department’s National Security Division and co-heads the strike force. 

Some of the cases announced on Tuesday go back several years, but Olsen said the “threat is as significant as ever.” 

Two of the cases involve Russia.

In New York, prosecutors charged a Greek national with smuggling U.S. military and dual-use technologies, including advanced electronics and testing equipment, to Russia through the Netherlands and France. Nikolaos “Nikos” Bogonikolos was arrested last week in France, and prosecutors said they’ll seek his extradition. 

In a second case, two Russian nationals – Oleg Sergeyevich Patsulya and Vasilii Sergeyevich Besedin – were arrested in Arizona on May 11 in connection with illegally shipping civilian aircraft parts from the United States to Russian airlines. 

Patsulya and Besedin, both residents of Florida, allegedly used their U.S.-based limited liability company to purchase and send the parts, according to court documents.

The three other cases center on China and Iran.

In New York, prosecutors charged a Chinese national with conspiring to provide materials to Iran’s ballistic missile program. 

Xiangjiang Qiao, an employee of a Chinese company sanctioned for its role in the proliferation of weapons of mass destruction, allegedly conspired to furnish isostatic graphite, a material used in the production of intercontinental ballistic missiles, to Iran. 

Liming Li, a California resident, was arrested on May 6 on charges of stealing “smart manufacturing” technologies from two companies he worked at and providing them to businesses in China.

Li allegedly offered to help Chinese companies build “their own capabilities,” a federal prosecutor said.

He was arrested at Ontario International Airport after arriving on a flight from Taiwan and has since been in federal custody, the Justice Department said.

The fifth case announced on Tuesday dates back to 2018 and accuses a former Apple software engineer of stealing the company’s proprietary research on autonomous systems, including self-driving cars. The defendant took a flight to China on the day the FBI searched his house. 

The charges and arrests stem from the work of the Disruptive Technology Strike Force, a joint effort between the Justice and Commerce departments. 

The initiative, announced in February, leverages the expertise of the FBI, Homeland Security Investigations (HSI) and 14 U.S. attorneys’ offices. 

Olsen said the cases brought by the strike force “demonstrate the breadth and complexity of the threats we face, as well as what is at stake.” 

“And they show our ability to accelerate investigations and surge our collective resources to defend against these threats,” Olsen said at a press conference.

STEM Courses in Rural Kenya Open Doors for Girls With Disabilities

Studying science, technology, engineering, and math — or STEM — can be a challenge for girls in rural Africa, especially those with disabilities. In Kenya, an aid group called The Action Foundation is helping to change that by providing remote STEM courses for girls with hearing, visual and physical impairments. Ahmed Hussein reports from Wajir County, Kenya. Camera: Ahmed Hussein

AI Presents Political Peril for 2024 With Threat to Mislead Voters

Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election. 

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away. 

No more. 

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low. 

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen. 

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.” 

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence. 

Here are a few: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race. 

“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.” 

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool. 

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text “What if the weakest president we’ve ever had was re-elected?” 

A series of AI-generated images follows: Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic. 

“An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024,” reads the ad’s description from the RNC. 

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust. 

“What happens if an international entity — a cybercriminal or a nation state — impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.” 

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries. 

AI images appearing to show Trump’s mug shot also fooled some social media users even though the former president didn’t take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin. 

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact. 

Some states have offered their own proposals for addressing concerns about deepfakes. 

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other. 

“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive.” 

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them “a deception” with “no place in legitimate, ethical campaigns.” 

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too. 

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward. 

Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails — all typically tedious tasks on campaigns. 

“The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket,” he said. 

Bolivian EV Startup Hopes Tiny Car Will Make It Big in Lithium-Rich Country

On a recent, cold morning, Dr. Carlos Ortuño hopped into a tiny electric car to go check on a patient in the outskirts of Bolivia’s capital of La Paz, unsure if the vehicle would be able to handle the steep, winding streets of the high-altitude city. 

“I thought that because of the city’s topography it was going to struggle, but it’s a great climber,” said Ortuño about his experience driving a Quantum, the first EV to have ever been made in Bolivia. “The difference from a gasoline-powered vehicle is huge.” 

Ortuño’s home visit aboard a car the size of a golf cart was part of a government-sponsored program that brings doctors to patients living in neighborhoods far from the city center. The “Doctor in your house” program was launched last month by the municipality of La Paz using a fleet of six EVs manufactured by Quantum Motors, the country’s sole producer of electric cars. 

“It is a pioneering idea. It helps protect the health of those in need, while protecting the environment and supporting local production,” La Paz Mayor Iván Arias said. 

The program could also help boost Quantum Motors, a company launched four years ago by a group of entrepreneurs who believe EVs will transform the auto industry in Bolivia, a lithium-rich country, where cheap, subsidized imported gasoline is still the norm. 

Built like a box, the Quantum moves at no more than 35 mph (56 kph), can be recharged from a household outlet and can travel 50 miles (80 kilometers) before a recharge. Its creators hope the $7,600 car will help revive dreams of a lithium-powered economy and make electric cars something the masses will embrace. 

“E-mobility will prevail worldwide in the next few years, but it will be different in different countries,” says José Carlos Márquez, general manager of Quantum Motors. “Tesla will be a dominant player in the U.S., with its speedy, autonomous cars. But in Latin America, cars will be more compact, because our streets are more similar to those of Bombay and New Delhi than to those of California.” 

But the company’s quest to boost e-mobility in the South American country has been challenging. In the four years since it released its first EVs, Quantum Motors has sold barely 350 cars in Bolivia and an undisclosed number of units in Peru and Paraguay. The company is also set to open a factory in Mexico later this year, although no further details have been provided on the scope of production there. 

Still, Quantum Motors’ bet on battery-powered cars makes sense when it comes to Bolivia’s resources. With an estimated 21 million tons, Bolivia has the world’s largest reserve of lithium, a key component in electric batteries, but it has yet to extract — and industrialize — its vast resources of the metal. 

In the meantime, the large majority of vehicles in circulation are still powered by fossil fuels, and the government continues to pour millions of dollars into subsidizing imported fuel that it then sells at half price on the domestic market. 

“The Quantum (car) might be cheap, but I don’t think it has the capacity of a gasoline-powered car,” says Marco Antonio Rodriguez, a car mechanic in La Paz, although he acknowledges people might change their mind once the government puts an end to gasoline subsidies. 

Despite the challenges ahead, the makers of the Quantum car are hopeful that programs like “Médico en tu casa” (“Doctor in your house”), which is scheduled to double in size and extend to other neighborhoods next year, will help boost production and churn out more EVs across the region. 

“We are ready to grow,” said Márquez. “Our inventory has been sold out through July.” 

As Net Tightens, Iranians Pushed to Take Up Homegrown Apps

Banned from using popular Western apps, Iranians have been left with little choice but to take up state-backed alternatives, as the authorities tighten internet restrictions for security reasons following months of protests.

Iranians are accustomed to using virtual private networks, or VPNs, to evade restrictions and access prohibited websites or apps, including the U.S.-based Facebook, Twitter and YouTube.

The authorities went as far as imposing total internet blackouts during the protests that erupted after the September death of 22-year-old Mahsa Amini, following her arrest for an alleged breach of the Islamic republic’s dress code for women.

Connections are back up and running again, and even those who are tech-savvy are being corralled into using the apps approved by the authorities such as Neshan for navigation and Snapp! to hail a car ride.

As many as 89 million people have signed up to Iranian messaging apps including Bale, Ita, Rubika and Soroush, the government says, but not everyone is keen on making the switch.

“The topics that I follow and the friends who I communicate with are not on Iranian platforms,” said Mansour Roghani, a resident of the capital, Tehran.

“I use Telegram and WhatsApp and, if my VPN still allows me, I’ll check Instagram,” the former municipality employee said, adding that he has not installed domestic apps as replacements.

Integration

At the height of the deadly Amini protests in October, the Iranian government cited security concerns as it moved to restrict internet access and added Instagram and WhatsApp to its long list of blocked applications.

“No one wants to limit the internet and we can have international platforms” if the foreign companies agree to introduce representative offices in Iran, Telecommunications Minister Issa Zarepour said last month.

Meta, the American giant that owns Facebook, Instagram and WhatsApp, has said it has no intention of setting up offices in the Islamic republic, which remains under crippling U.S. sanctions.

The popularity of the state-sanctioned apps may not be what it seems, however: the government has encouraged people to install them by shifting essential online public services to the homegrown platforms, which are often state-funded.

In addition, analysts say, Iranian users have online safety concerns when using the approved local apps.

“We have to understand they have needs,” said Amir Rashidi, director of digital rights and security at the New York-based Miaan Group.

“As an Iranian citizen, what would you do if registering for university is only based on one of these apps? Or what would you do if you need access to government services?” he said.

The locally developed apps lack a “clear privacy policy,” according to software developer Keikhosrow Heydari-Nejat.

“I have installed some of the domestic messaging apps on a separate phone, not the one that I am using every day,” the 23-year-old said, adding he had done so to access online government services.

“If they (government) shut the internet down, I will keep them installed but I will visit my friends in person,” he said.

Interconnection 

In a further effort to push people onto the domestic platforms, the telecommunications ministry connected the four major messaging apps, enabling users to communicate across the platforms.

“Because the government is going for the maximum number of users, they are trying to connect these apps,” the analyst Rashidi said, adding all the domestic platforms “will enjoy financial and technical support.”

Iran has placed restrictions on apps such as Facebook and Twitter since 2009, following protests over disputed presidential elections.

In November 2019, Iran imposed nationwide internet restrictions during protests sparked by surprise fuel price hikes.

A homegrown internet network, the National Information Network (NIN), which is around 60% completed, will allow domestic platforms to operate independently of global networks.

One platform already benefiting from the highly filtered domestic network is Snapp!, an app similar to U.S. ride-hailing service Uber that has 52 million users — more than half the country’s population.

But Rashidi said the NIN will give Tehran greater control to “shut down the internet with less cost” once completed.

Off-Grid Solar Brings Light, Time, Income to Remotest Indonesia Villages

As Tamar Ana Jawa wove a red sarong in the fading sunlight, her neighbor switched on a light bulb dangling from the sloping tin roof. It was just one bulb powered by a small solar panel, but in this remote village that means a lot. In some of the world’s most remote places, off-grid solar systems are bringing villagers like Jawa more hours in the day, more money and more social gatherings.

Before electricity came to the village, a little less than two years ago, the day ended when the sun went down. Villagers in Laindeha, on the island of Sumba in eastern Indonesia, would set aside the mats they were weaving or coffee they were sorting to sell at the market as the light faded.

A few families who could afford them would start noisy generators that rumbled into the night, emitting plumes of smoke. Some people wired lightbulbs to old car batteries, which would quickly die or burn out appliances, as they had no regulator. Children sometimes studied by makeshift oil lamps, but these occasionally burned down homes when knocked over by the wind.

That’s changed since grassroots social enterprise projects have brought small, individual solar panel systems to Laindeha and villages like it across the island.

For Jawa, it means much-needed extra income. When her husband died of a stroke in December 2022, Jawa wasn’t sure how she would pay for her children’s schooling. But when a neighbor got electric lighting shortly after, she realized she could continue weaving clothes for the market late into the evening.

“It used to be dark at night, now it’s bright until morning,” the 30-year-old mother of two said, carefully arranging and pushing red threads at the loom. “So tonight, I work … to pay for the children.”

Around the world, hundreds of millions of people live in communities without regular access to power, and off-grid solar systems like these are bringing limited electricity to such places years before power grids reach them.  

Some 775 million people globally lacked access to electricity in 2022, according to the International Energy Agency. Sub-Saharan Africa and South Asia are home to some of the largest populations without access to electricity. Not having electricity at home keeps people in poverty, the U.N. and World Bank wrote in a 2021 report. It’s hard for very poor people to get electricity, according to the report, and it’s hard for people who don’t have it to participate in the modern economy.

Indonesia has brought electricity to millions of people in recent years, going from 85% to nearly 97% coverage between 2005 and 2020, according to World Bank data. But there are still more than half a million people in Indonesia living in places the grid doesn’t reach.

While barriers still remain, experts say off-grid solar programs on the island could be replicated across the vast archipelago nation, bringing renewable energy to remote communities.

Now, villagers frequently gather in the evening to continue the day’s work, watch television shows on cellphones charged by the panels and help children do homework in light bright enough to read by.

“I couldn’t really study at night before,” said Antonius Pekambani, a 17-year-old student in Ndapaymi village, east Sumba. “But now I can.”

Solar power is still fairly rare in Indonesia. While the country has targeted more solar as part of its climate goals, there has been limited progress due to regulations that don’t allow households to sell power back to the grid, ruling out a way of defraying the cost that has helped people afford solar in other parts of the world.

That’s where grassroots organizations like Sumba Sustainable Solutions, based in eastern Sumba since 2019, saw potential to help. Working with international donors to help subsidize the cost, it provides imported home solar systems, which can power light bulbs and charge cellphones, for monthly payments equivalent to $3.50 over three years.
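
For scale, here is a minimal sketch of the household arithmetic those terms imply; the $3.50 monthly rate and the three-year term are the reported figures, and the rest follows from them.

```python
# What a household pays in total under the reported plan:
# $3.50 a month for three years.
MONTHLY_PAYMENT_USD = 3.50
TERM_MONTHS = 3 * 12

total_paid = MONTHLY_PAYMENT_USD * TERM_MONTHS   # $126.00 over the term
per_day = total_paid / (3 * 365)                 # roughly $0.12 a day

print(f"total: ${total_paid:.2f}, per day: ~${per_day:.2f}")
```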

The organization also offers solar-powered appliances such as wireless lamps and grinding machines. It said it has distributed over 3,020 solar light systems and 62 mills across the island, reaching more than 3,000 homes.

Imelda Pindi Mbitu, a 46-year-old mother of five living in Walatungga, said she used to spend whole days grinding corn kernels and coffee beans between two rocks to sell at the local market; now, she takes it to a solar-powered mill shared by the village.

“With manual milling, if I start in the morning I can only finish in the afternoon. I can’t do anything else,” she said sitting in her wooden home. “If you use the machine, it’s faster. So now I can do other things.”

Similar schemes in other places, including Bangladesh and sub-Saharan Africa, have helped provide electricity for millions, according to the World Bank.

But some smaller off-grid solar systems like these don’t provide the same amount of power as grid access. While cellphones, light bulbs and mills remain charged, the systems don’t generate enough power for a large sound system or a church.

Off-grid solar projects face hurdles too, said Jetty Arlenda, an engineer with Sumba Sustainable Solutions.

The organization’s scheme is heavily reliant upon donors to subsidize the cost of solar equipment, which many rural residents would be unable to afford at their market cost. Villagers without off-grid solar panels are stuck on waitlists while Sumba Sustainable Solutions looks for more funding. They’re hoping for support from Indonesia’s $20 billion Just Energy Transition Partnership deal, which is being negotiated by numerous developed nations and international financial institutions.

There have also been issues with recipients failing to make payments, especially as the island deals with locust outbreaks diminishing villagers’ crops and livelihoods. And when solar systems break, they need imported parts that can be hard to come by.

Audio Book Narrators Say AI Is Already Taking Away Business

As people brace for the disruptive impact of artificial intelligence on jobs and everyday living, those in the world of audio books say their field is already being transformed.

AI has the ability to create human-sounding recordings — at assembly-line speed — while bypassing at least part of the services of the human professionals who for years have made a living with their voices.

Many of them are already seeing a sharp drop-off in business.

Tanya Eby has been a full-time voice actor and professional narrator for 20 years. She has a recording studio in her home.

But in the past six months she has seen her workload fall by half. Her bookings now run only through June, while in a normal year they would extend through August.

Many of her colleagues report similar declines.

While other factors could be at play, she told AFP, “It seems to make sense that AI is affecting all of us.”

There is no label identifying AI-assisted recordings as such, but professionals say thousands of audio books currently in circulation use “voices” generated from a databank.

Among the most cutting-edge, DeepZen offers rates that can slash the cost of producing an audio book to a quarter or less of that of a traditional project.

The small London-based company draws from a database it created by recording the voices of several actors who were asked to speak in a variety of emotional registers.

“Every voice that we are using, we sign a license agreement, and we pay for the recordings,” said DeepZen CEO Kamis Taylan.

For every project, he added, “we pay royalties based on the work that we do.”

Not everyone respects that standard, said Eby.

“All these new companies are popping up who are not as ethical,” she said, and some use voices found in databases without paying for them.

“There’s that gray area” being exploited by several platforms, Taylan acknowledged.

“They take your voice, my voice, five other people’s voices combined that just creates a separate voice… They say that it doesn’t belong to anybody.”

All the audio book companies contacted by AFP denied using such practices.

Speechki, a Texas-based start-up, uses both its own recordings and voices from existing databanks, said CEO Dima Abramov.

But that is done only after a contract has been signed covering usage rights, he said.

Future of coexistence?

The five largest U.S. publishing houses did not respond to requests for comment.

But professionals contacted by AFP said several traditional publishers are already using so-called generative AI, which can create texts, images, videos and voices from existing content — without human intervention.

“Professional narration has always been, and will remain, core to the Audible listening experience,” said a spokesperson for that Amazon subsidiary, a giant in the American audio book sector.

“However, as text-to-speech technology improves, we see a future in which human performances and text-to-speech generated content can coexist.”

The giants of U.S. technology, deeply involved in the fast-developing field of AI, are all pursuing the promising business of digitally narrated audio books.

‘Accessible to all’

Early this year, Apple announced it was moving into AI-narrated audio books, a move it said would make the “creation of audio books more accessible to all,” notably independent authors and small publishers.

Google is offering a similar service, which it describes as “auto-narration.”

“We have to democratize the publishing industry, because only the most famous and the big names are getting converted into audio,” said Taylan. 

“Synthetic narration just opened the door for old books that have never been recorded, and all the books from the future that never will be recorded because of the economics,” added Speechki’s Abramov.

Given the costs of human-based recording, he added, only some five percent of all books are turned into audio books.

But Abramov insisted that the growing market would also benefit voice actors.

“They will make more money, they will make more recordings,” he said. 

The human element

“The essence of storytelling is teaching humanity how to be human. And we feel strongly that that should never be given to a machine to teach us about how to be human,” said Emily Ellet, an actor and audio book narrator who cofounded the Professional Audiobook Narrators Association (PANA).

“Storytelling,” she added, “should remain human entirely.”

Eby underlined a frequent criticism of digitally generated recordings. 

When compared to a human recording, she said, an AI product “lacks in emotional connectivity.”

Eby said she fears, however, that people will grow accustomed to the machine-generated version, “and I think that’s quietly what’s kind of happening.”

Her wish is simply “that companies would let listeners know that they’re listening to an AI-generated piece… I just want people to be honest about it.”

Elon Musk Names NBCUniversal’s Yaccarino as New Twitter CEO

Billionaire tech entrepreneur Elon Musk on Friday named NBCUniversal executive Linda Yaccarino as the chief executive officer of social media giant Twitter.

From his own Twitter account Friday, Musk wrote, “I am excited to welcome Linda Yaccarino as the new CEO of Twitter! (She) will focus primarily on business operations, while I focus on product design and new technology.” 

He said Yaccarino would help transform Twitter, whose parent company is now called X Corp., into “an everything app” called X.

On Thursday, Musk teased Yaccarino’s hiring, saying only “she” will start in six to eight weeks.  

Yaccarino had worked in advertising and media sales at NBCUniversal since 2011 and had served as chairperson of global advertising since October 2020. The company announced her departure earlier Friday.

Analysts say Yaccarino’s background could be key to Twitter’s future. Since Musk acquired Twitter last October, he has taken some controversial steps, such as loosening controls on the spread of false information and laying off nearly 80% of its staff, which prompted advertisers to flee.

No comment from Yaccarino on her hiring was immediately available.

Some information for this report was provided by The Associated Press and Reuters. 

Apple to Launch First Online Store in Vietnam

Apple will launch its first online store in Vietnam next week, the company said Friday, hoping to cash in on the country’s young and tech-savvy population.

The iPhone maker is among a host of global tech giants, including Intel, Samsung and LG, that have chosen Vietnam for assembly of their products.

But up to now, the Silicon Valley giant has sold its products in Vietnam’s market of 100 million people via authorized resellers.

“We’re honored to be expanding in Vietnam,” said Deirdre O’Brien, Apple’s senior vice president of retail, in an online statement in Vietnamese.

The country’s communist government says it wants 85 percent of its adult population to have access to a smartphone by 2025, up from the current 73 percent.

Less than a third of the country’s mobile users have an iPhone, according to market research platform Statista.

Through online stores, “clients in Vietnam can discover products and connect with our experienced experts,” O’Brien said in the statement.

The production of accessories and assembly of mobile phones account for up to 70 percent of electronics manufacturing in Vietnam. Products are mainly for export.

Official figures show that Vietnam’s mobile phone production industry recorded an import-export turnover of $114 billion last year, a third of the country’s total import-export revenue.

Stunning Mosaic of Baby Star Clusters Created From 1 Million Telescope Shots

Astronomers have created a stunning mosaic of baby star clusters hiding in our galactic backyard.

The montage, published Thursday, reveals five vast stellar nurseries less than 1,500 light-years away. A light-year is about 9.5 trillion kilometers.

To come up with their atlas, scientists pieced together more than 1 million images taken over five years by the European Southern Observatory in Chile. The observatory’s infrared survey telescope was able to peer through clouds of dust and discern infant stars.

“We can detect even the faintest sources of light, like stars far less massive than the sun, revealing objects that no one has ever seen before,” University of Vienna’s Stefan Meingast, the lead author, said in a statement.

The observations, conducted from 2017 to 2022, will help researchers better understand how stars evolve from dust, Meingast said.

The findings, appearing in the journal Astronomy and Astrophysics, complement observations by the European Space Agency’s star-mapping Gaia spacecraft, orbiting nearly 1.5 million kilometers away.

Gaia focuses on optical light, missing most of the objects obscured by cosmic dust, the researchers said.

Will Artificial Intelligence Take Away Jobs? Not Many for Now, Says Expert

The growing abilities of artificial intelligence have left many observers wondering how AI will impact people’s jobs and livelihoods. One expert in the field predicts it won’t have much effect, at least in the short term.  

The topic was a point of discussion at the annual TED conference held recently in Vancouver.   

In a world where students’ term papers can now be written by artificial intelligence, paintings can be generated by merely uttering words, and an AI-generated version of your favorite celebrity can appear on screen, the impact of this new technology is starting to be felt across societies, sparking both wonderment and concern.

While artificial intelligence has yet to become pervasive in everyday life, the rumblings of what could be a looming economic earthquake are growing stronger.

Gary Marcus is a professor emeritus of psychology and neural science at New York University who helped ride-sharing company Uber adopt the rapidly developing technology.

An author and host of the podcast “Humans versus Machines,” Marcus says AI’s economic impact is limited for now, although some jobs have already been threatened by the technology, such as commercial animators for electronic gaming.

Speaking with VOA after a recent conference held by TED, the nonprofit devoted to spreading ideas, Marcus said jobs that require manual labor will be safe, for now.

“We’re not going to see blue-collar jobs replaced, I think, as quickly as some people had talked about,” Marcus predicted. “So we still don’t have driverless cars, even though people have talked about that for years. Anybody that does something with their hands is probably safe right now, because we don’t really know how to make robots that sophisticated when it comes to dealing with the real world.”

Another TED speaker, Sal Khan, is the founder of Khanmigo, artificial intelligence-powered software designed to help educate children. He is optimistic about AI’s potential economic impact as a driver of wealth creation.

“Will it cause mass dislocations in the job market? I actually don’t know the answer to that,” Khan said, adding that “It will create more wealth, more productivity.” 

The legal profession could also be boosted by AI if the technology prompts new litigation; copyright attorneys could especially benefit.

Tom Graham and his company, Metaphysic.ai, artificially recreate famous actors and athletes so they do not need to be physically in front of a camera or microphone in order to appear in films, TV shows or commercials.

His company is behind the popular fake videos of actor Tom Cruise that have gone viral on social media.

He says the legal system will play a role in protecting people from being recreated without their permission.

Graham, who has a law degree from Harvard University, has applied to the U.S. Copyright Office to register the real-life version of himself.            

“We did that because you’re looking for legal institutions that exist today, that could give you some kind of protection or remedy,” Graham explained, “It’s just, if there’s no way to enforce it, then it’s not really a thing.”                                

Gary Marcus is urging the formation of an international organization to oversee and monitor artificial intelligence.   

He emphasized the need to “get a lot of smart people together, from the companies, from the government, but also scientists, philosophers, ethicists…” 

“I think it’s really important that we as a globe, think all these things through,” Marcus concluded, “And don’t just leave it to like 190 governments doing whatever random thing they do without really understanding the science.”     

The popular AI chatbot ChatGPT has gained widespread attention in recent months but is not yet a moneymaker. Its maker, OpenAI, lost more than $540 million in 2022.

Elon Musk and Tesla Break Ground on Massive Texas Lithium Refinery

Tesla Inc on Monday broke ground on a Texas lithium refinery that CEO Elon Musk said should produce enough of the battery metal to build about 1 million electric vehicles (EVs) by 2025, making it the largest North American processor of the material. 

The facility will push Tesla outside its core focus of building automobiles and into the complex area of lithium refining and processing, a step Musk said was necessary if the auto giant was to meet its ambitious EV sales targets. 

“As we look ahead a few years, a fundamental choke point in the advancement of electric vehicles is the availability of battery grade lithium,” Musk said at the ground-breaking ceremony on Monday, with dozers and other earth-moving equipment operating in the background. 

Musk said Tesla aimed to finish construction of the factory next year and then reach full production about a year later. 

The move will make Tesla the only major automaker in North America that will refine its own lithium. Currently, China dominates the processing of many critical minerals, including lithium. 

“Texas wants to be able to be self-reliant, not dependent upon any foreign hostile nation for what we need. We need lithium,” Texas Governor Greg Abbott said at the ceremony. 

Musk did not specify the volume of lithium the facility would process each year, although he said the automaker would continue to buy the metal from its vendors, which include Albemarle Corp and Livent Corp. 

“We intend to continue to use suppliers of lithium, so it’s not that Tesla will do all of it,” Musk said. 

Albemarle plans to build a lithium processing facility in South Carolina that will refine 100,000 tons of the metal each year, with construction slated to begin next year and the facility coming online sometime later this decade. 

Musk did not say where Tesla will source the rough form of lithium known as spodumene concentrate that will be processed at the facility, although Tesla has supply deals with Piedmont Lithium Inc and others. 

‘Clean operations’

Tesla said it would eschew the lithium industry’s conventional refining process, which relies on sulfuric acid and other strong chemicals, in favor of materials that were less harsh on the environment, such as soda ash. 

“You could live right in the middle of the refinery and not suffer any ill effect. So they’re very clean operations,” Musk said, although local media reports said some environmental advocates had raised concerns over the facility. 

Monday’s announcement was not the first time that Tesla has attempted to venture into lithium production. Musk in 2020 told shareholders that Tesla had secured rights to 10,000 acres in Nevada where it aimed to produce lithium from clay deposits, which had never been done before on a commercial scale. 

While Musk boasted that the company had developed a proprietary process to sustainably produce lithium from those Nevada clay deposits, Tesla has not yet deployed the process. 

Musk has urged entrepreneurs to enter the lithium refining business, saying it is like “minting money.” 

“We’re begging you. We don’t want to do it. Can someone please?” he said during a conference call last month. 

Tesla said last month that a recent plunge in prices of lithium and other commodities would aid its bruised margins in the second half of the year.

The refinery is the latest expansion by Tesla into Texas after the company moved its headquarters there from California in 2021. Musk’s other companies, including SpaceX and The Boring Company, also have operations in Texas. 

“We are proud that he calls Texas home,” Abbott said, saying Tesla and Musk are “Texas’s economic juggernauts.” 

Congress Eyes New Rules for Tech

Most Democrats and Republicans agree that the federal government should better regulate the biggest technology companies, particularly social media platforms. But there is little consensus on how it should be done. 

Concerns have skyrocketed about China’s ownership of TikTok, and parents have grown increasingly worried about what their children are seeing online. Lawmakers have introduced a slew of bipartisan bills, boosting hopes of compromise. But any effort to regulate the mammoth industry would face major obstacles as technology companies have fought interference. 

Noting that many young people are struggling, President Joe Biden said in his February State of the Union address that “it’s time” to pass bipartisan legislation to impose stricter limits on the collection of personal data and ban targeted advertising to children. 

“We must finally hold social media companies accountable for the experiment they are running on our children for profit,” Biden said.

A look at some of the areas of potential regulation: 

Children’s safety

Several House and Senate bills would try to make social media, and the internet in general, safer for children who will inevitably be online. Lawmakers cite numerous examples of teenagers who have taken their own lives after cyberbullying or have died engaging in dangerous behavior encouraged on social media. 

In the Senate, at least two bills are focused on children’s online safety. Legislation by Senators Richard Blumenthal, a Connecticut Democrat, and Marsha Blackburn, a Tennessee Republican, approved by the chamber’s Commerce Committee last year would require social media companies to be more transparent about their operations and enable child safety settings by default. Minors would have the option to disable addictive product features and algorithms that push certain content. 

The idea, the senators say, is that platforms should be “safe by design.” The legislation, which Blumenthal and Blackburn reintroduced last week, would also obligate social media companies to prevent certain dangers to minors — including promotion of suicide, disordered eating, substance abuse, sexual exploitation and other illegal behaviors. 

A second bill introduced last month by four senators — Democratic Senators Brian Schatz of Hawaii and Chris Murphy of Connecticut and Republican Senators Tom Cotton of Arkansas and Katie Britt of Alabama — would take a more aggressive approach, prohibiting children under 13 from using social media platforms and requiring parental consent for teenagers. It would also prohibit companies from recommending content through algorithms for users under 18.

Critics of the bills, including some civil rights groups and advocacy groups aligned with tech companies, say the proposals could threaten teens’ online privacy and prevent them from accessing content that could help them, such as resources for those considering suicide or grappling with their sexual and gender identity. 

“Lawmakers should focus on educating and empowering families to control their online experience,” said Carl Szabo of NetChoice, a group aligned with Meta, TikTok, Google and Amazon, among other companies. 

Data privacy 

Biden’s State of the Union remarks appeared to be a nod toward legislation by Senators Ed Markey, a Massachusetts Democrat, and Bill Cassidy, a Louisiana Republican, that would expand child privacy protections online, prohibiting companies from collecting personal data from younger teenagers and banning targeted advertising to children and teens. The bill, also reintroduced last week, would create an “eraser button” allowing parents and kids to eliminate personal data, when possible. 

A broader House effort would attempt to give adults as well as children more control over their data with what lawmakers call a “national privacy standard.” Legislation that passed the House Energy and Commerce Committee last year would try to minimize the data collected and make it illegal to target ads to children, preempting state laws that have tried to put privacy restrictions in place. But the bill, which would also have given consumers more rights to sue over privacy violations, never reached the House floor.

Prospects for the House legislation are unclear now that Republicans have the majority.

TikTok, China 

Lawmakers introduced a raft of bills to either ban TikTok or make it easier to ban it after a combative March House hearing in which lawmakers from both parties grilled TikTok CEO Shou Zi Chew over his company’s ties to China’s communist government, data security and harmful content on the app. 

Chew attempted to assure lawmakers that the hugely popular video-sharing app prioritizes user safety and should not be banned because of its Chinese connections. But the testimony gave new momentum to the efforts. 

Soon after the hearing, Missouri Senator Josh Hawley, a Republican, tried to force a Senate vote on legislation that would ban TikTok from operating in the United States. But he was blocked by a fellow Republican, Kentucky Senator Rand Paul, who said that a ban would violate the Constitution and anger the millions of voters who use the app. 

Another bill sponsored by Republican Senator Marco Rubio of Florida would, like Hawley’s bill, ban U.S. economic transactions with TikTok, but it would also create a new framework for the executive branch to block any foreign apps deemed hostile. His bill is co-sponsored by Representatives Raja Krishnamoorthi, an Illinois Democrat, and Mike Gallagher, a Wisconsin Republican. 

There is broad Senate support for bipartisan legislation sponsored by Senate Intelligence Committee Chairman Mark Warner, a Virginia Democrat, and South Dakota Senator John Thune, the No. 2 Senate Republican, that does not specifically call out TikTok but would give the Commerce Department power to review and potentially restrict foreign threats to technology platforms.

The White House has signaled it would back that bill, but its prospects are uncertain. 

Artificial intelligence 

A newer question for Congress is whether lawmakers should move to regulate artificial intelligence as rapidly developing and potentially revolutionary products like the AI chatbot ChatGPT, which can in many ways mimic human behavior, begin to enter the marketplace.

Senate Democratic leader Chuck Schumer of New York has made the emerging technology a priority, arguing that the United States needs to stay ahead of China and other countries that are eyeing regulations on AI products. He has been working with AI experts and has released a general framework for what regulation could look like, including increased disclosure of the people and data involved in developing the technology and greater transparency about how the bots arrive at their responses.

The White House has been focused on the issue as well, with a recent announcement of a $140 million investment to establish seven new AI research institutes. Vice President Kamala Harris met Thursday with the heads of Google, Microsoft and other companies developing AI products.

New Twitter Rules Expose Election Offices to Spoof Accounts

Tracking down accurate information about Philadelphia’s elections on Twitter used to be easy. The account for the city commissioners who run elections, @phillyvotes, was the only one carrying a blue check mark, a sign of authenticity.

But ever since the social media platform overhauled its verification service last month, the check mark has disappeared. That’s made it harder to distinguish @phillyvotes from a list of random accounts not run by the elections office but with very similar names.

The election commission applied weeks ago for a gray check mark — Twitter’s new symbol to help users identify official government accounts — but has yet to hear back from Twitter, commission spokesman Nick Custodio said. It’s unclear whether @phillyvotes is an eligible government account under Twitter’s new rules.

That’s troubling, Custodio said, because Pennsylvania has a primary election May 16 and the commission uses its account to share important information with voters in real time. If the account remains unverified, it will be easier to impersonate – and harder for voters to trust – heading into Election Day.

Impostor accounts on social media are among many concerns election security experts have heading into next year’s presidential election. Experts have warned that foreign adversaries or others may try to influence the election, either through online disinformation campaigns or by hacking into election infrastructure.

Election administrators across the country have struggled to figure out the best way to respond after Twitter owner Elon Musk threw the platform’s verification service into disarray, given that Twitter has been among their most effective tools for communicating with the public.

Some are taking other steps allowed by Twitter, such as buying check marks for their profiles or applying for a special label reserved for government entities, but success has been mixed. Election and security experts say the inconsistency of Twitter’s new verification system is a misinformation disaster waiting to happen.

“The lack of clear, at-a-glance verification on Twitter is a ticking time bomb for disinformation,” said Rachel Tobac, CEO of the cybersecurity company SocialProof Security. “That will confuse users – especially on important days like election days.”

The blue check marks that Twitter once doled out to notable celebrities, public figures, government entities and journalists began disappearing from the platform in April. To replace them, Musk told users that anyone could pay $8 a month for an individual blue check mark or $1,000 a month for a gold check mark as a “verified organization.”

The policy change quickly opened the door for pranksters to pose convincingly as celebrities, politicians and government entities, which could no longer be identified as authentic. While some impostor accounts were clear jokes, others created confusion.

Fake accounts posing as Chicago Mayor Lori Lightfoot, the city’s Department of Transportation and the Illinois Department of Transportation falsely claimed the city was closing one of its main thoroughfares to private traffic. The fake accounts used the same photos, biographical text and home page links as the real ones. Their posts amassed hundreds of thousands of views before being taken down.

Twitter’s new policy invites government agencies and certain affiliated organizations to apply to be labeled as official with a gray check. But at the state and local level, qualifying agencies are limited to “main executive office accounts and main agency accounts overseeing crisis response, public safety, law enforcement, and regulatory issues,” the policy says.

The rules do not mention agencies that run elections. So while the main Philadelphia city government account quickly received its gray check mark last month, the local election commission has not heard back.

Election offices in four of the country’s five most populous counties — Cook County in Illinois, Harris County in Texas, Maricopa County in Arizona and San Diego County — remain unverified, a Twitter search shows. Maricopa, which includes Phoenix, has been targeted repeatedly by election conspiracy theorists as the most populous and consequential county in one of the most closely divided political battleground states.

Some counties contacted by The Associated Press said they have minimal concerns about impersonation or plan to apply for a gray check later, but others said they already have applied and have not heard back from Twitter.

Even some state election offices are waiting for government labels. Among them is the office of Maine Secretary of State Shenna Bellows.

In an April 24 email to Bellows’ communications director reviewed by The Associated Press, a Twitter representative wrote that there was “nothing to do as we continue to manually process applications from around the world.” The representative added in a later email that Twitter stands “ready to swiftly enforce any impersonation, so please don’t hesitate to flag any problematic accounts.”

An email sent to Twitter’s press office and a company safety officer requesting comment was answered only with an autoreply of a poop emoji.

“Our job is to reinforce public confidence,” Bellows told the AP. “Even a minor setback, like no longer being able to ensure that our information on Twitter is verified, contributes to an environment that is less predictable and less safe.”

Some government accounts, including the one representing Pennsylvania’s second-largest county, have purchased blue checks because they were told it was required to continue advertising on the platform.

Allegheny County posts ads for elections and jobs on Twitter, so the blue check mark “was necessary,” said Amie Downs, the county’s communications director.

When anyone can buy verification and when government accounts are not consistently labeled, the check mark loses its meaning, Colorado Secretary of State Jena Griswold said.

Griswold’s office received a gray check mark to maintain trust with voters, but she told the AP she would not buy verification for her personal Twitter account because “it doesn’t carry the same weight” it once did.

Custodio, at the Philadelphia elections commission, said his office would not buy verification either, even if it gets denied a gray check.

“The blue or gold check mark just verifies you as a paid subscriber and does not verify identity,” he said.

Experts and advocates tracking election discourse on social media say Twitter’s changes do not just incentivize bad actors to run disinformation campaigns — they also make it harder for well-meaning users to know what’s safe to share.

“Because Twitter is dropping the ball on verification, the burden will fall on voters to double check that the information they are consuming and sharing is legitimate,” said Jill Greene, voting and elections manager for Common Cause Pennsylvania.

That dampens an aspect of Twitter that until now had been seen as one of its strengths – allowing community members to rally together to elevate authoritative information, said Mike Caulfield, a research scientist at the University of Washington’s Center for an Informed Public.

“The first rule of a good online community user interface is to ‘help the helpers.’ This is the opposite of that,” Caulfield said. “It takes a community of people who want to help boost good information, and robs them of the tools to make fast, accurate decisions.”

Buffett Shares Good News on Profits, AI Thoughts at Meeting

Billionaire Warren Buffett said artificial intelligence may change the world in all sorts of ways, but new technology won’t take away opportunities for investors, and he’s confident America will continue to prosper over time.

Buffett and his partner Charlie Munger are spending all day Saturday answering questions at Berkshire Hathaway’s annual meeting inside a packed Omaha arena.

“New things coming along doesn’t take away the opportunities. What gives you the opportunities is other people doing dumb things,” said Buffett, who had a chance to try out ChatGPT when his friend Bill Gates showed it to him a few months back.

Buffett reiterated his long-term optimism about the prospects for America even with the bitter political divisions today.

“The problem now is that partisanship has moved more towards tribalism, and in tribalism you don’t even hear the other side,” he said.

Both Buffett and Munger said the United States will benefit from having an open trading relationship with China, so both countries should be careful not to exacerbate the tensions between them because the stakes are too high for the world.

“Everything that increases the tension between these two countries is stupid, stupid, stupid,” Munger said. And whenever either country does something stupid, he said the other country should respond with incredible kindness.

The chance to listen to the two men answer all sorts of questions about business and life attracts people from all over the world to Omaha, Nebraska. Some of the shareholders feel a particular urgency to attend now because Buffett and Munger are both in their 90s.

“Charlie Munger is 99. I just wanted to see him in person. It’s on my bucket list,” said 40-year-old Sheraton Wu from Vancouver. “I have to attend while I can.”

“It’s a once in a lifetime opportunity,” said Chloe Lin, who traveled from Singapore to attend the meeting for the first time and learn from the two legendary investors.

One of the few concessions Buffett makes to his age is that he no longer tours the exhibit hall before the meeting. In years past, he would be mobbed by shareholders trying to snap a picture with him while a team of security officers worked to manage the crowd. Munger has used a wheelchair for several years, but both men are still sharp mentally.

But in a nod to the concerns about their age, Berkshire showed a series of clips of questions about succession from past meetings dating back to the first one they filmed in 1994. Two years ago, Buffett finally said that Greg Abel will eventually replace him as CEO although he has no plans to retire. Abel already oversees all of Berkshire’s noninsurance businesses.

Buffett assured shareholders that he has total confidence in Abel to lead Berkshire in the future, and he doesn’t have a second choice for the job because Abel is remarkable in his own right. But he said much of what Abel will have to do is just maintain Berkshire’s culture and keep making similar decisions.

“Greg understands capital allocation as well as I do. He will make these decisions on the same framework that I use,” Buffett said.

Abel followed that up by assuring the crowd that he knows how Buffett and Munger have handled things for nearly six decades and “I don’t really see that framework changing.”

Not everyone at the meeting is a fan, though. Outside the arena, pilots from Berkshire’s NetJets protested over the lack of a new contract, and pro-life groups carried signs declaring “Buffett’s billions kill millions” to object to his many charitable donations to abortion rights groups.

Berkshire Hathaway said Saturday morning that it made $35.5 billion, or $24,377 per Class A share, in the first quarter. That’s more than 6 times last year’s $5.58 billion, or $3,784 per share.

But Buffett has long cautioned that those bottom line figures can be misleading for Berkshire because the wide swings in the value of its investments — most of which it rarely sells — distort the profits. In this quarter, Berkshire sold only $1.7 billion of stocks while recording a $27.4 billion paper investment gain. Part of this year’s investment gains included a $2.4 billion boost related to Berkshire’s planned acquisition of the majority of the Pilot Travel Centers truck stop company’s shares in January.

Buffett says Berkshire’s operating earnings that exclude investments are a better measure of the company’s performance. By that measure, Berkshire’s operating earnings grew nearly 13% to $8.065 billion, up from $7.16 billion a year ago.

The three analysts surveyed by FactSet expected Berkshire to report operating earnings of $5,370.91 per Class A share.

Buffett came close to giving a formal outlook Saturday when he told shareholders that he expects Berkshire’s operating profits to grow this year even though the economy is slowing down and many of its businesses will sell less in 2023. He said Berkshire will profit from rising interest rates on its holdings, and the insurance market looks good this year.

This year’s first quarter was relatively quiet compared to a year ago when Buffett revealed that he had gone on a $51 billion spending spree at the start of last year, snapping up stocks like Occidental Petroleum, Chevron and HP. Buffett’s buying slowed through the rest of last year with the exception of a number of additional Occidental purchases.

At the end of this year’s first quarter, Berkshire held $130.6 billion in cash, up from about $128.59 billion at the end of last year. But it did spend $4.4 billion during the quarter to repurchase its own shares.

Berkshire’s insurance unit, which includes Geico and a number of large reinsurers, recorded a $911 million operating profit, up from $167 million last year, driven by a rebound in Geico’s results. Geico benefited from charging higher premiums and from a reduction in advertising spending and claims.

But Berkshire’s BNSF railroad and its large utility unit did report lower profits. BNSF earned $1.25 billion, down from $1.37 billion, as the number of shipments it handled dropped 10% after it lost a big customer and imports slowed at the West Coast ports. The utility division added $416 million, down from last year’s $775 million.

Besides those major businesses, Berkshire owns an eclectic assortment of dozens of other businesses, including a number of retail and manufacturing firms such as See’s Candy and Precision Castparts.

Berkshire again faces pressure from activist investors urging the company to do more to catalog its climate change risks in a companywide report. Shareholders were expected to brush that measure and all the other shareholder proposals aside Saturday afternoon because Buffett and the board oppose them, and Buffett controls more than 30% of the vote.

But even as they resist detailing climate risks, a number of Berkshire’s subsidiaries are working to reduce their carbon emissions, including its railroad and utilities. The company’s Clayton Homes unit is showing off a new home design this year that will meet strict energy efficiency standards from the Department of Energy and come pre-equipped for solar power to be added later.

Google Plans to Make Search More ‘Human,’ Says Wall Street Journal

Google is planning to make its search engine more “visual, snackable, personal and human,” with a focus on serving young people globally, The Wall Street Journal reported on Saturday, citing documents.

The move comes as artificial intelligence (AI) applications such as ChatGPT are rapidly gaining in popularity, highlighting a technology that could upend the way businesses and society operate.

The tech giant will nudge its service further away from “10 blue links,” the traditional format for presenting search results, and plans to incorporate more human voices as part of the shift, the report said.

At its annual I/O developer conference in the coming week, Google is expected to debut new features that allow users to carry out conversations with an AI program, a project code-named “Magi,” The Wall Street Journal added, citing people familiar with the matter.

Generative AI has become a buzzword this year, with applications capturing the public’s fancy and sparking a rush among companies to launch similar products they believe will change the nature of work.

Google, part of Alphabet Inc., did not immediately respond to Reuters’ request for comment.

Could AI Pen ‘Casablanca’? Screenwriters Take Aim at ChatGPT

When Greg Brockman, the president and co-founder of ChatGPT maker OpenAI, was recently extolling the capabilities of artificial intelligence, he turned to “Game of Thrones.”

Imagine, he said, if you could use AI to rewrite the ending of that not-so-popular finale. Maybe even put yourself into the show.

“That is what entertainment will look like,” said Brockman.

Less than six months after the release of ChatGPT, generative artificial intelligence is already prompting widespread unease throughout Hollywood. Concern over chatbots writing or rewriting scripts is one of the leading reasons TV and film screenwriters took to picket lines earlier this week.

Though the Writers Guild of America is striking for better pay in an industry where streaming has upended many of the old rules, AI looms as a rising anxiety.

“AI is terrifying,” said Danny Strong, the “Dopesick” and “Empire” creator. “Now, I’ve seen some of ChatGPT’s writing and as of now I’m not terrified because Chat is a terrible writer. But who knows? That could change.”

AI chatbots, screenwriters say, could potentially be used to spit out a rough first draft with a few simple prompts (“a heist movie set in Beijing”). Writers would then be hired, at a lower pay rate, to punch it up.

Screenplays could also be slyly generated in the style of known writers. What about a comedy in the voice of Nora Ephron? Or a gangster film that sounds like Mario Puzo? You won’t get anything close to “Casablanca” but the barest bones of a bad Liam Neeson thriller isn’t out of the question.

The WGA’s basic agreement defines a writer as a “person,” and only a human’s work can be copyrighted. But even though no one’s about to see a “By AI” writing credit at the beginning of a movie, there are myriad ways that generative AI could be used to craft outlines, fill in scenes and mock up drafts.

“We’re not totally against AI,” says Michael Winship, president of the WGA East and a news and documentary writer. “There are ways it can be useful. But too many people are using it against us and using it to create mediocrity. They’re also in violation of copyright. They’re also plagiarizing.”

The guild is seeking more safeguards on how AI can be applied to screenwriting. It says the studios are stonewalling on the issue. The Alliance of Motion Picture and Television Producers, which bargains on behalf of production companies, has offered to meet annually with the guild to go over definitions around the fast-evolving technology.

“It’s something that requires a lot more discussion, which we’ve committed to doing,” the AMPTP said in an outline of its position released Thursday.

Experts say the struggle screenwriters are now facing with generative AI is just the beginning. The World Economic Forum this week released a report predicting that nearly a quarter of all jobs will be disrupted by AI over the next five years.

“It’s definitely a bellwether in the workers’ response to the potential impacts of artificial intelligence on their work,” says Sarah Myers West, managing director of the nonprofit AI Now Institute, which has lobbied the government to enact more regulation around AI. “It’s not lost on me that a lot of the most meaningful efforts in tech accountability have been a product of worker-led organizing.”

AI has already filtered into nearly every part of moviemaking. It’s been used to de-age actors, remove swear words from scenes in post-production, supply viewing recommendations on Netflix and posthumously bring back the voices of Anthony Bourdain and Andy Warhol.

The Screen Actors Guild, set to begin its own bargaining with the AMPTP this summer, has said it’s closely following the evolving legal landscape around AI.

“Human creators are the foundation of the creative industries, and we must ensure that they are respected and paid for their work,” the actors union said.

The implications for screenwriting are only just being explored. Actors Alan Alda and Mike Farrell recently reconvened to read through a new scene from “M*A*S*H” written by ChatGPT. The results weren’t terrible, though they weren’t so funny, either.

“Why have a robot write a script and try to interpret human feelings when we already have studio executives who can do that?” deadpanned Alda.

Writers have long been among the most notoriously exploited talents in Hollywood. The films they write usually don’t get made. If they do, they’re often rewritten many times over. Raymond Chandler once wrote that “the very nicest thing Hollywood can possibly think to say to a writer is that he is too good to be only a writer.”

Screenwriters are accustomed to being replaced. Now, they see a new, readily available and inexpensive competitor in AI — albeit one with a slightly more tenuous grasp of the human condition.

“Obviously, AI can’t do what writers and humans can do. But I don’t know that they believe that, necessarily,” says screenwriter Jonterri Gadson (“A Black Lady Sketch Show”). “There needs to be a human writer in charge and we’re not trying to be gig workers, just revising what AI does. We need to tell the stories.”

Dramatizing their plight as man vs. machine surely doesn’t hurt the WGA’s cause in public opinion. The writers are wrestling with the threat of AI just as concern widens over how hurriedly generative AI products have been thrust into society.

Geoffrey Hinton, an AI pioneer, recently left Google in order to speak freely about its potential dangers. “It’s hard to see how you can prevent the bad actors from using it for bad things,” Hinton told The New York Times.

“What’s especially scary about it is nobody, including a lot of the people who are involved with creating it, seem to be able to explain exactly what it’s capable of and how quickly it will be capable of more,” says actor-screenwriter Clark Gregg.

The writers find themselves in the awkward position of negotiating on a newborn technology with the potential for radical effect. Meanwhile, AI-crafted songs by “Fake Drake” or “Fake Eminem” continue to circulate online.

“They’re afraid that if the use of AI to do all this becomes normalized, then it becomes very hard to stop the train,” says James Grimmelmann, a professor of digital and information law at Cornell University. “The guild is in the position of trying to imagine lots of different possible futures.”

In the meantime, chanting demonstrators are hoisting signs with messages aimed at a digital foe. Seen on the picket lines: “ChatGPT doesn’t have childhood trauma”; “I heard AI refuses to take notes”; and “Wrote ChatGPT this.”

Hate Passwords? You’re in Luck — Google Is Sidelining Them

Good news for all the password-haters out there: Google has taken a big step toward making them an afterthought by adding “passkeys” as a more straightforward and secure way to log into its services. 

Here’s what you need to know: 

What are passkeys?  

Passkeys offer a safer alternative to passwords and texted confirmation codes. Users won’t ever see them directly; instead, an online service like Gmail will use them to communicate directly with a trusted device such as your phone or computer to log you in. 

All you’ll have to do is verify your identity on the device using a PIN unlock code, biometrics such as a fingerprint or face scan, or a more sophisticated physical security dongle.

Google designed its passkeys to work with a variety of devices, so you can use them on iPhones, Macs and Windows computers, as well as Google’s own Android phones. 

Why are passkeys necessary?  

Thanks to clever hackers and human fallibility, passwords are just too easy to steal or defeat. And making them more complex just opens the door to users defeating themselves. 

For starters, many people choose passwords they can remember — and easy-to-recall passwords are also easy to hack. For years, analysis of hacked password caches found that the most common password in use was “password123.” A more recent study by the password manager NordPass found that it’s now just “password.” This isn’t fooling anyone. 

Passwords are also frequently compromised in security breaches. Stronger passwords are more secure, but only if you choose ones that are unique, complex and non-obvious. And once you’ve settled on “erVex411$%” as your password, good luck remembering it. 

In short, passwords put security and ease of use directly at odds. Software-based password managers, which can create and store complex passwords for you, are valuable tools that can improve security. But even password managers have a master password you need to protect, and that plunges you back into the swamp. 

In addition to sidestepping all those problems, passkeys have one additional advantage over passwords. They’re specific to particular websites, so scammer sites can’t steal a passkey from a dating site and use it to raid your bank account. 
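
Under the hood, passkeys are an application of the W3C WebAuthn standard, which lets a website ask the browser to create an origin-bound key pair on a trusted device. The TypeScript sketch below is purely illustrative and is not Google’s actual implementation: the domain, user details and client-side challenge are placeholder assumptions, and a real deployment would receive its challenge from the server.

```typescript
// Minimal, illustrative WebAuthn sketch (assumptions: example.com domain,
// placeholder user; real sites fetch the challenge from their server).
async function createPasskey(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      // Placeholder challenge; in production this random value must come
      // from the server so the server can verify the response.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      // The passkey is bound to this origin, which is why a scam site
      // cannot reuse it elsewhere.
      rp: { id: "example.com", name: "Example" },
      user: {
        id: new TextEncoder().encode("user-1234"), // hypothetical user ID
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // a discoverable credential, i.e. a passkey
        userVerification: "required", // triggers the PIN, fingerprint or face prompt
      },
    },
  });
  // The private key never leaves the device; the site stores only the
  // public half of the new credential.
  console.log("Created credential:", credential);
}
```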

How do I start using passkeys?  

The first step is to enable them for your Google account. On any trusted phone or computer, open the browser and sign into your Google account. Then visit the page g.co/passkeys and click the option to “start using passkeys.” Voila! The passkey feature is now activated for that account. 

If you’re on an Apple device, you’ll first be prompted to set up the Keychain app if you’re not already using it; it securely stores passwords and now passkeys, as well. 

The next step is to create the actual passkeys that will connect your trusted device. If you’re using an Android phone that’s already logged into your Google account, you’re most of the way there; Android phones are automatically ready to use passkeys, though you still have to enable the function first. 

On the same Google account page noted above, look for the “Create a passkey” button. Pressing it will open a window and let you create a passkey either on your current device or on another device. There’s no wrong choice; the system will simply notify you if that passkey already exists. 

If you’re on a PC that can’t create a passkey, it will open a QR code that you can scan with the ordinary cameras on iPhones and Android devices. You may have to move the phone closer until the message “Set up passkey” appears on the image. Tap that and you’re on your way. 

And then what?  

From that point on, signing into Google will only require you to enter your email address. If you’ve set up passkeys properly, you’ll simply get a message on your phone or other device asking for your fingerprint, your face or a PIN.
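
For the curious, that sign-in step corresponds to the other half of the same web standard: the site requests an assertion and the device prompts for your fingerprint, face or PIN. Again, this is a hedged sketch with placeholder values rather than Google’s code; the challenge would come from the server.

```typescript
// Illustrative sign-in with an existing passkey (same placeholder caveats).
async function signInWithPasskey(): Promise<void> {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder
      rpId: "example.com",          // must match the origin the passkey was created for
      userVerification: "required", // the biometric or PIN prompt described above
    },
  });
  // The site verifies the signed challenge against the stored public key.
  console.log("Assertion:", assertion);
}
```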

Of course, your password is still there. But if passkeys take off, odds are good you won’t be needing it very much. You may even choose to delete it from your account someday. 

‘Godfather of AI’ Quits Google to Warn of the Technology’s Dangers

A computer scientist often dubbed “the godfather of artificial intelligence” has quit his job at Google to speak out about the dangers of the technology, U.S. media reported Monday.

Geoffrey Hinton, who created a foundational technology for AI systems, told The New York Times that advancements made in the field posed “profound risks to society and humanity.”

“Look at how it was five years ago and how it is now,” he was quoted as saying in the piece, which was published on Monday. “Take the difference and propagate it forwards. That’s scary.”

Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation.

“It is hard to see how you can prevent the bad actors from using it for bad things,” he told The Times.

Jobs could be at risk

In 2022, Google and OpenAI — the startup behind the popular AI chatbot ChatGPT — started building systems using much larger amounts of data than before.

Hinton told The Times he believed these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.

“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he told the paper.

While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.

AI “takes away the drudge work” but “might take away more than that,” he told The Times.

Concern about misinformation

The scientist also warned about the potential spread of misinformation created by AI, telling The Times that the average person will “not be able to know what is true anymore.”

Hinton notified Google of his resignation last month, The Times reported.

Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to U.S. media.

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI,” the statement added.

“We’re continually learning to understand emerging risks while also innovating boldly.”

In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.

An open letter, signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.

Hinton did not sign that letter at the time, but told The New York Times that scientists should not “scale this up more until they have understood whether they can control it.”