Instagram blurring nudity in messages to protect teens, fight sexual extortion

LONDON — Instagram says it’s deploying new tools to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages.

The social media platform said in a blog post Thursday that it’s testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

Instagram and other social media companies have faced growing criticism for not doing enough to protect young people. Mark Zuckerberg, the CEO of Instagram’s owner Meta Platforms, apologized to the parents of victims of such abuse during a Senate hearing earlier this year.

Meta, which is based in Menlo Park, California, also owns Facebook and WhatsApp, but the nudity blur feature won’t be added to messages sent on those platforms.

Instagram said scammers often use direct messages to ask for “intimate images.” To counter this, it will soon start testing out a nudity-protection feature for direct messages that blurs any images with nudity “and encourages people to think twice before sending nude images.”

“The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” Instagram said.

The feature will be turned on by default globally for teens under 18. Adult users will get a notification encouraging them to activate it.

Images with nudity will be blurred with a warning, giving users the option to view them. They’ll also get an option to block the sender and report the chat.

People sending direct messages with nudity will get a message reminding them to be cautious when sending “sensitive photos.” They’ll also be informed that they can unsend the photos if they change their mind, but that there’s a chance others may have already seen them.

As with many of Meta’s tools and policies around child safety, critics saw the move as a positive step, but one that does not go far enough.

“I think the tools announced can protect senders, and that is welcome. But what about recipients?” said Arturo Béjar, former engineering director at the social media giant who is known for his expertise in curbing online harassment. He said 1 in 8 teens receives an unwanted advance on Instagram every seven days, citing internal research he compiled while at Meta that he presented in November testimony before Congress. “What tools do they get? What can they do if they get an unwanted nude?”

Béjar said “things won’t meaningfully change” until there is a way for a teen to say they’ve received an unwanted advance, and there is transparency about it.

Instagram said it’s working on technology to help identify accounts that could potentially be engaging in sexual extortion scams, “based on a range of signals that could indicate sextortion behavior.”

To stop criminals from connecting with young people, it’s also taking measures including not showing the “message” button on a teen’s profile to potential sextortion accounts, even if they already follow each other, and testing new ways to hide teens from these accounts.

In January, the FBI warned of a “huge increase” in sextortion cases targeting children — including financial sextortion, where someone threatens to release compromising images unless the victim pays. The targeted victims are primarily boys between the ages of 14 and 17, but the FBI said any child can become a victim. In the six-month period from October 2022 to March 2023, the FBI saw a more than 20% increase in reporting of financially motivated sextortion cases involving minor victims compared to the same period in the previous year.

Swarms of drones can be managed by a single person

The U.S. military says large groups of drones and ground robots can be managed by just one person without added stress to the operator. As VOA’s Julie Taboh reports, the technologies may be beneficial for civilian uses, too. VOA footage by Adam Greenbaum.

Indiana aspires to become next great tech center

INDIANAPOLIS, INDIANA — Semiconductors, or microchips, are critical to almost everything electronic used in the modern world. In 1990, the United States produced about 40% of the world’s semiconductors. As manufacturing migrated to Asia, U.S. production fell to about 12%.

“During COVID, we got a wake-up call. It was like [a] Sputnik moment,” explained Mark Lundstrom, an engineer who has worked with microchips much of his life. 

The 2020 global coronavirus pandemic slowed production in Asia, creating a ripple through the global supply chain and leading to shortages of everything from phones to vehicles. Lundstrom said increasing U.S. reliance on foreign chip manufacturers exposed a major weakness. 

“We know that AI is going to transform society in the next several years, it requires extremely powerful chips. The most powerful leading-edge chips,” he said.

Today, Lundstrom is the acting dean of engineering at Purdue University in West Lafayette, Indiana, a leader in cutting-edge semiconductor development, which has new importance amid the emerging field of artificial intelligence.

“If we fall behind in AI, the consequences are enormous for the defense of our country, for our economic future,” Lundstrom told VOA. 

Amid the buzz of activity in a laboratory on Purdue’s campus, visitors can get a vision of what the future might look like in microchip technology. 

“The key metrics of the performance of the chips actually are the size of the transistors, the devices, which is the building block of the computer chips,” said Zhihong Chen, director of Purdue’s Birck Nanotechnology Center, where engineers work around the clock to push microchip technology into the future. 

“We are talking about a few atoms in each silicon transistor these days. And this is what this whole facility is about,” Chen said. “We are trying to make the next generation transistors better devices than current technologies. More powerful and more energy-efficient computer chips of the future.” 

Not just RVs anymore

Because of Purdue’s efforts, along with those on other university campuses in the state, Indiana believes it’s an attractive location for manufacturers looking to build new microchip facilities. 

“Purdue University alone, a top four-ranked engineering school, offers more engineers every year than the next top three,” said Eric Holcomb, Indiana’s Republican governor. “When you have access to that kind of talent, when you have access to the cost of doing business in the state of Indiana, that’s why people are increasingly saying, Indiana.” 

Holcomb is in the final year of his eight-year tenure in the state’s top position. He wants to transform Indiana into more than the recreational vehicle, or RV, capital of the country.

“We produce about plus-80% of all the RV production in North America in one state,” he told VOA. “We are not just living up to our reputation as being the number one manufacturing state per capita in America, but we are increasingly embracing the future of mobility in America.” 

Holcomb is spearheading an effort to make Indiana the next great technology center as the U.S. ramps up investment in domestic microchip development and manufacturing.  “If we want to compete globally, we have to get smarter and healthier and more equipped, and we have to continue to invest in our quality of place,” Holcomb told VOA in an interview. 

His vision is shared by other lawmakers, including U.S. Senator Todd Young of Indiana, who co-sponsored the bipartisan CHIPS and Science Act, which commits more than $50 billion in federal funding for domestic microchip development. 

‘We are committed’

Indiana is now home to one of 31 designated U.S. technology and innovation hubs, helping it qualify for hundreds of millions of dollars in grants designed to attract technology-driven businesses. 

“The signal that it sends to the rest of the world [is] that we are in it, we are committed, and we are focused,” said Holcomb. “We understand that economic development, economic security and national security complement one another.” 

Indiana’s efforts are paying off. 

In April, South Korean microchip manufacturer SK Hynix announced it was planning to build a $4 billion facility near Purdue University that would produce next-generation high-bandwidth memory, or HBM, chips, which are critical for artificial intelligence applications.

The facility, slated to start operating in 2028, could create more than 1,000 new jobs. While U.S. chip manufacturer SkyWater also plans to invest nearly $2 billion in Indiana’s new LEAP Innovation District near Purdue, the state recently lost a bid to host chipmaker Intel, which selected Ohio for two new factories.

“Companies tend to like to go to locations where there is already that infrastructure, where that supply chain is in place,” Purdue’s Lundstrom said. “That’s a challenge for us, because this is a new industry for us. So, we have a chicken-and-egg problem that we have to address, and we are beginning to address that.”

Lundstrom said the CHIPS and Science Act and the federal money that comes with it are helping Indiana ramp up to compete with other U.S. locations already known for microchip development, such as Silicon Valley in California and Arizona. 

What could help Indiana gain an edge is its natural resources — plenty of land and water, and regular weather patterns, all crucial for the sensitive processes used to manufacture microchips at large production centers.

Indiana aspires to become next great tech hub

The Midwestern state of Indiana aspires to become the next great technology center as the United States ramps up investment in domestic microchip development and manufacturing. VOA’s Kane Farabaugh has more from Indianapolis. Videographer: Kane Farabaugh, Adam Greenbaum

Ukrainian civilians help build up their country’s drone fleet

Inexpensive first-person view – or radio-controlled – drones have become a powerful weapon in Ukraine’s war against Russian invaders. As the country presses the West for more military aid, many Ukrainian civilians are stepping in to help by making homemade attack drones. Lesia Bakalets has the story from Kyiv.

With $6.6B to Arizona hub, Biden touts big steps in US chipmaking

Washington; Flagstaff, Arizona — President Joe Biden on Monday announced a $6.6 billion grant to Taiwan’s top chip manufacturer to produce semiconductors in the southwestern U.S. state of Arizona, supporting plans that include a third facility and will bring the foreign tech giant’s investment in the state to $65 billion.

Biden said the move aims to perk up a decades-old slump in American chip manufacturing. Taiwan Semiconductor Manufacturing Company (TSMC), which is based on the Chinese-claimed island, holds more than half of the global market share in chip manufacturing.

The new facility, Biden said, will put the U.S. on track to produce 20% of the world’s leading-edge semiconductors by 2030.

“I was determined to turn that around, and thanks to my CHIPS and Science Act — a key part of my Investing in America agenda — semiconductor manufacturing and jobs are making a comeback,” Biden said in a statement.

U.S. production of this American-born technology has fallen steeply in recent decades, said Andy Wang, dean of engineering at Northern Arizona University.

“As a nation, we used to produce 40% of microchips for the whole world,” he told VOA. “Now, we produce less than 10%.”

A single semiconductor transistor is smaller than a grain of sand. But billions of them, packed neatly together, can connect the world through a mobile phone, control sophisticated weapons of war and satellites that orbit the Earth, and someday may even drive a car.

The immense value of these tiny chips has fueled fierce competition between the U.S. and China.

The U.S. Department of Commerce has taken several steps to hamper China’s efforts to build its own chip industry. Those include export controls and new rules to prevent “foreign countries of concern” — which it said includes China, Iran, North Korea and Russia — from benefiting from funding from the CHIPS and Science Act.

While analysts are divided over whether Taiwan’s dominance of this critical industry makes it more or less vulnerable to Chinese aggression, they agree it confers significant global status on the island.

“It is debatable what, if any, role Taiwan’s semiconductor manufacturing prowess plays in deterrence,” said David Sacks, an analyst who focuses on U.S.-China relations at the Council on Foreign Relations. “What is not debatable is how devastating an attack on Taiwan would be for the global economy.”

Biden did not mention U.S. adversaries in his statement, but he noted the impact of Monday’s announcement, saying it “represent(s) a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.”

VOA met with engineers in the new technological hub state, who said the legislation addresses a key weakness in American chip manufacturing.

“We’ve just gotten in the cycle of the last 15 to 20 years, where innovation has slowed down,” said Todd Achilles, who teaches innovation, strategy and policy analysis at the University of California-Berkeley. “It’s all about financial results, investor payouts and stock buybacks. And we’ve lost that innovation muscle. And the CHIPS Act — pulling that together with the CHIPS Act — is the perfect opportunity to restore that.”

The White House says this new investment could create 25,000 construction and manufacturing jobs. Academics say they’re churning out workers at a rapid pace, but that America still lacks talent.

“Our engineering college is the largest in the country, with over 33,000 enrolled students, and still we’re hearing from companies across the semiconductor industry that they’re not able to get the talent they need in time,” Zachary Holman, vice dean for research and innovation at Arizona State University, told VOA.

And as the American industry stretches to keep pace, it races against a technical trend known as Moore’s Law: the observation that the number of transistors in a computer chip doubles about every two years. As a result, cutting-edge chips get ever smaller as they grow in computing power.
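As a rough illustration of how quickly that doubling compounds (a back-of-the-envelope sketch with hypothetical numbers, not figures from the article), the rule can be written as a simple function:

```python
# Moore's Law as a rough rule of thumb: transistor counts double about every two years.
def projected_transistors(count_today: float, years_ahead: float) -> float:
    """Project a chip's transistor count if the total doubles every two years."""
    return count_today * 2 ** (years_ahead / 2)

# Hypothetical example: a 10-billion-transistor chip today would, at that pace,
# imply roughly 320 billion transistors a decade from now (2 ** 5 = 32x).
print(projected_transistors(10e9, 10))
```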

TSMC in 2022 broke ground on a facility that makes the smallest chip currently available, coming in at 3 nanometers — that’s just wider than a strand of DNA.

Reporter Levi Stallings contributed to this report from Flagstaff, Arizona.

With $6.6B to Arizona hub, Biden touts big steps in US chipmaking

President Joe Biden on Monday announced a $6.6 billion grant to Taiwan’s top chip manufacturer for semiconductor manufacturing in Arizona, which includes a third facility that will bring the tech giant’s investment in the state to $65 billion. VOA’s White House correspondent Anita Powell reports from Washington, with reporter Levi Stallings in Flagstaff, Arizona.

Experts fear Cambodian cybercrime law could aid crackdown

PHNOM PENH, CAMBODIA — The Cambodian government is pushing ahead with a cybercrime law experts say could be wielded to further curtail freedom of speech amid an ongoing crackdown on dissent. 

The cybercrime draft is the third controversial internet law authorities have pursued in the past year as the government, led by new Prime Minister Hun Manet, seeks greater oversight of internet activities. 

Obtained by VOA in both English and Khmer language versions, the latest draft of the cybercrime law is marked “confidential” and contains 55 articles. It lays out various offenses punishable by fines and jail time, including defamation, using “insulting, derogatory or rude language,” and sharing “false information” that could harm Cambodia’s public order and “traditional culture.”  

The law would also allow authorities to collect and record internet traffic data, in real time, of people under investigation for crimes, and would criminalize online material that “depicts any act or activity … intended to stimulate sexual desire” as pornography. 

Digital rights and legal experts who reviewed the law told VOA that its vague language, wide-ranging categories of prosecutable speech and lack of protections for citizens fall short of international standards, instead providing the government more tools to jail dissenters, opposition members, women and LGBTQ+ people. 

The law has been in the works since 2016; earlier drafts, which sparked similar criticism, leaked in 2020 and 2021, but no version had surfaced publicly since then. Authorities hope to enact the law by the end of the year.

“This cybercrime bill offers the government even more power to go after people expressing dissent,” Kian Vesteinsson, a senior research analyst for technology at the human rights organization Freedom House, told VOA.  

“These vague provisions around defamation, insults and disinformation are ripe for abuse, and we know that Cambodian authorities have deployed similarly vague criminal provisions in other contexts,” Vesteinsson said. 

Cambodian law already considers defamation a criminal offense, but the cybercrime draft would make it punishable by up to six months in jail, plus a fine of up to $5,000. The “false information” clause — defined as sharing information that “intentionally harms national defense, national security, relations with other countries, economy, public order, or causes discrimination, or affects traditional culture” — carries a three- to five-year sentence and a fine of up to $25,000.

Daron Tan, associate international legal adviser at the International Commission of Jurists, told VOA the defamation and false information articles do not comply with the International Covenant on Civil and Political Rights, to which Cambodia is a party, and that the United Nations Human Rights Committee is “very clear that imprisonment is never the appropriate penalty for defamation.” 

“It’s a step very much in the wrong direction,” Tan said. “We are very worried that this would expand the laws that the government can use against its critics.” 

Chea Pov, the deputy head of Cambodia’s National Police and former director of the Ministry of Interior’s Anti-Cybercrime Department that is overseeing the drafting process, told VOA the law “doesn’t restrict your rights” and claimed the U.S. companies which reviewed it “didn’t raise concerns.”  

Google, Meta and Amazon, which the government has said were involved in drafting the law, did not respond to requests for comment. 

“If you say something based on evidence, there is no problem,” Pov said. “But if there is no evidence, [you] defame others, which is also stated in the criminal law … we don’t regard this as a restriction.”  

The law also makes it illegal to use technology to display, trade, produce or disseminate pornography, or to advertise a “product or service mixed with pornography” online. Pornography is defined as anything that “describes a genital or depicts any act or activity involving a sexual organ or any part of the human body, animal, or object … or other similar pornography that is intended to stimulate sexual desire or cause sexual excitement.” 

Experts say this broad category is likely to be disproportionately deployed against women and LGBTQ+ people. 

Cambodian authorities have often rebuked or arrested women for dressing “too sexily” on social media, singing sexual songs or using suggestive speech. In 2020, an online clothes and cosmetics seller received a six-month suspended sentence after posting provocative photos; in another incident, a policewoman was forced to publicly apologize for posting photos of herself breastfeeding. 

Naly Pilorge, outreach director at Cambodian human rights organization Licadho, told VOA the draft law “could lead to more rights violations against women in the country.” 

“This vague definition of ‘pornography’ poses a serious threat to any woman whose online activity the government decides may ‘cause sexual excitement,’” Pilorge said. “The draft law does not acknowledge any legitimate artistic or educational purposes to depict or describe sexual organs, posing another threat to freedom of expression.” 

In March, authorities said they hosted civil society organizations to revisit the draft. They plan to complete the drafting process and send the law to Parliament for passage before the end of the year, according to Pov, the deputy head of police. 

Soeung Saroeun, executive director of the NGO Forum on Cambodia, told VOA “there was no consultation on each article” at the recent meeting. 

“The NGO representatives were unable to analyze and present their inputs,” said Saroeun, echoing concerns about its contents. “How is it [possible]? We need to debate on this.” 

The cybercrime law has resurfaced as the government works to complete two other draft internet laws, one covering cybersecurity and the other personal data protection. Experts have critiqued the drafts as providing expanded police powers to seize computer systems and making citizens’ data vulnerable to hacking and surveillance. 

Authorities have also sought to create a national internet gateway that would require traffic to run through centralized government servers, though the status of that project has been unclear since early 2022 when the government said it faced delays. 

Biden administration announces $6.6 billion to ensure leading-edge microchips are built in US 

WILMINGTON, Del. — The Biden administration pledged on Monday to provide up to $6.6 billion so that a Taiwanese semiconductor giant can expand the facilities it is already building in Arizona and better ensure that the most-advanced microchips are produced domestically for the first time. 

Commerce Secretary Gina Raimondo said the funding for Taiwan Semiconductor Manufacturing Co. means the company can expand on its existing plans for two facilities in Phoenix and add a third, newly announced production hub. 

“These are the chips that underpin all artificial intelligence, and they are the chips that are the necessary components for the technologies that we need to underpin our economy,” Raimondo said on a call with reporters, adding that they were vital to the “21st century military and national security apparatus.” 

The funding is tied to a sweeping 2022 law that President Joe Biden has celebrated and which is designed to revive U.S. semiconductor manufacturing. Known as the CHIPS and Science Act, the $280 billion package is aimed at sharpening the U.S. edge in military technology and manufacturing while minimizing the kinds of supply disruptions that occurred in 2021, after the start of the coronavirus pandemic, when a shortage of chips stalled factory assembly lines and fueled inflation. 

The Biden administration has promised tens of billions of dollars to support construction of U.S. chip foundries and reduce reliance on Asian suppliers, which Washington sees as a security weakness. 

“Semiconductors – those tiny chips smaller than the tip of your finger – power everything from smartphones to cars to satellites and weapons systems,” Biden said in a statement. “TSMC’s renewed commitment to the United States, and its investment in Arizona represent a broader story for semiconductor manufacturing that’s made in America and with the strong support of America’s leading technology firms to build the products we rely on every day.” 

Taiwan Semiconductor Manufacturing Co. produces nearly all of the leading-edge microchips in the world and plans to eventually do so in the U.S. 

It began construction of its first facility in Phoenix in 2021, and started work on a second hub last year, with the company increasing its total investment in both projects to $40 billion. The third facility should be producing microchips by the end of the decade and will see the company’s commitment increase to a total of $65 billion, Raimondo said. 

The investments would put the U.S. on track to produce roughly 20% of the world’s leading-edge chips by 2030, and Raimondo said they should help create 6,000 manufacturing jobs and 20,000 construction jobs, as well as thousands of positions with suppliers and other chip-related industries connected to the Arizona projects.

The potential incentives announced Monday include $50 million to help train the workforce in Arizona to be better equipped to work in the new facilities. Additionally, approximately $5 billion of proposed loans would be available through the CHIPS and Science Act. 

“TSMC’s commitment to manufacture leading-edge chips in Arizona marks a new chapter for America’s semiconductor industry,” Lael Brainard, director of the White House National Economic Council, told reporters. 

The announcement came as U.S. Treasury Secretary Janet Yellen is traveling in China. Senior administration officials were asked on the call with reporters if the Biden administration gave China a heads-up on the coming investment, given the delicate geopolitics surrounding Taiwan. The officials said only that their focus in making Monday’s announcement was solely on advancing U.S. manufacturing.

“We are thrilled by the progress of our Arizona site to date,” C.C. Wei, CEO of TSMC, said in a statement, “and are committed to its long-term success.”

Exclusive: Russian company supplies military with microchips despite denials

PENTAGON — Russian microchip company AO PKK Milandr continued to provide microchips to the Russian armed forces at least several months after Russia invaded Ukraine, despite public denials by company director Alexey Novoselov of any connection with Russia’s military.

A formal letter obtained by VOA dated February 10, 2023, shows a sale request for 4,080 military-grade microchips for the Russian military. The request was sent to Milandr CEO S.V. Tarasenko by a deputy commander of the 546 military representation of the Russian Ministry of Defense and the commercial director of Russian manufacturer NPO Poisk, calling for delivery by April 2023, more than a year into the war.

The letter instructs Milandr to provide three types of microchip components to NPO Poisk, a well-established Russian defense manufacturer that makes detonators for weapons used by the Russian Armed Forces.

“Each of these three circuits that you have in the table on the document, each one of them is classed as a military-grade component … and each of these is manufactured specifically by Milandr,” said Denys Karlovskyi, a research fellow at the London-based Royal United Services Institute for Defense and Security Studies. VOA shared the document with him to confirm its authenticity.

In addition to Milandr CEO Tarasenko, the letter is addressed to I.A. Shvid, a commander of the Russian Defense Ministry’s 514 military representation.

Karlovskyi says this inclusion shows that Milandr, like Poisk, appears to have a Russian commander from the Defense Ministry’s oversight unit assigned to it — a clear indicator that a company is part of Russia’s defense industry.

Milandr, headquartered near Moscow in an area known as “Soviet Silicon Valley,” was sanctioned by the United States in November 2022 for its illegal procurement of microelectronic components using front companies.

In the statement announcing the 2022 sanctions against Milandr and more than three dozen other entities and individuals, U.S. Treasury Secretary Janet Yellen said, “The United States will continue to expose and disrupt the Kremlin’s military supply chains and deny Russia the equipment and technology it needs to wage its illegal war against Ukraine.”

Karlovskyi said that in Russia’s database of public contracts, Milandr is listed in more than 500 contracts, supplying numerous state-owned and military-grade enterprises, including Ural Optical Mechanical Plant, Concern Avtomatika and Izhevsk Electromechanical Plant, or IEMZ Kupol, which also have been sanctioned by the United States.

“It clearly suggests that this entity is a crucial node in Russia’s military supply chain,” Karlovskyi told VOA.

Novoselov, Milandr’s current director, told Bloomberg News last August that he was not aware of any connections to the Russian military.

“I don’t know any military persons who would be interested in our product,” he told Bloomberg in a phone interview, adding that the company mostly produces electric power meters.

The U.S. allegations are “like a fantasy,” he said. “The United States’ State Department, they suppose that every electronics business in Russia is focused on the military. I think that is funny.”

But a U.S. defense official told VOA that helping Russia’s military kill tens of thousands of people in an illegal invasion “is no laughing matter.”

“The company is fueling microchips for missiles and heavily armored vehicles that are used to continue the war in Ukraine,” said the defense official, who spoke to VOA on the condition of anonymity due to the sensitivities of discussing U.S. intelligence.

Milandr’s co-founder Mikhail Pavlyuk was also sanctioned during the summer of 2022 for his involvement in microchip smuggling operations and was caught stealing from Milandr. Pavlyuk fled Russia and has claimed he was not involved.

Officials estimate that 500,000 Ukrainian and Russian troops have been killed or injured in the war, with tens of thousands of Ukrainian civilians killed in the fighting.

“There are consequences to their actions, and the U.S. will persist to expose and disrupt the Kremlin’s supply chain,” the U.S. defense official said.

US, Europe Issue Strictest Rules Yet on AI

WASHINGTON — In recent weeks, the United States, Britain and the European Union have issued the strictest regulations yet on the use and development of artificial intelligence, setting a precedent for other countries.

This month, the United States and the U.K. signed a memorandum of understanding allowing for the two countries to partner in the development of tests for the most advanced artificial intelligence models, following through on commitments made at the AI Safety Summit last November.

These actions come on the heels of the European Parliament’s March vote to adopt its first set of comprehensive rules on AI. The landmark decision sets out a wide-ranging set of laws to regulate this exploding technology.

At the time, Brando Benifei, co-rapporteur on the Artificial Intelligence Act plenary vote, said, “I think today is again an historic day on our long path towards regulation of AI. … The first regulation in the world that is putting a clear path towards a safe and human-centric development of AI.”

The new rules aim to protect citizens from dangerous uses of AI, while exploring its boundless potential.

Beth Noveck, professor of experiential AI at Northeastern University, expressed enthusiasm about the rules.

“It’s really exciting that the EU has passed really the world’s first … binding legal framework addressing AI. It is, however, not the end; it is really just the beginning.”

The new rules will be applied according to risk level: the higher the risk, the stricter the rules.

“It’s not regulating the tech,” she said. “It’s regulating the uses of the tech, trying to prohibit and to restrict and to create controls over the most malicious uses — and transparency around other uses.

“So things like what China is doing around social credit scoring, and surveillance of its citizens, unacceptable.”

Noveck described what she called “high-risk uses” that would be subject to scrutiny. Those include uses of AI tools that could deprive people of their liberty, as well as uses in employment.

“Then there are lower risk uses, such as the use of spam filters, which involve the use of AI or translation,” she said. “Your phone is using AI all the time when it gives you the weather; you’re using Siri or Alexa, we’re going to see a lot less scrutiny of those common uses.”

But as AI experts point out, new laws just create a framework for a new model of governance on a rapidly evolving technology.

Dragos Tudorache, co-rapporteur on the AI Act plenary vote, said, “Because AI is going to have an impact that we can’t only measure through this act, we will have to be very mindful of this evolution of the technology in the future and be prepared.”

In late March, the Biden administration issued the first government-wide policy to mitigate the risks of artificial intelligence while harnessing its benefits.

The announcement followed President Joe Biden’s executive order last October, which called on federal agencies to lead the way toward better governance of the technology without stifling innovation.

“This landmark executive order is testament to what we stand for: safety, security, trust, openness,” Biden said at the time, “proving once again that America’s strength is not just the power of its example, but the example of its power.”

Looking ahead, experts say the challenge will be to update rules and regulations as the technology continues to evolve.

Hybrids, electric vehicles shine at New York auto show

The 2024 New York International Auto Show kicked off in Manhattan in late March — and visitors have until April 7 to admire some of the coolest new car technology. Evgeny Maslov has the story, narrated by Anna Rice. Camera: Michael Eckels.

Scathing federal report rips Microsoft for response to Chinese hack

BOSTON — In a scathing indictment of Microsoft corporate security and transparency, a Biden administration-appointed review board issued a report Tuesday saying “a cascade of errors” by the tech giant let state-backed Chinese cyber operators break into email accounts of senior U.S. officials including Commerce Secretary Gina Raimondo.

The Cyber Safety Review Board, created in 2021 by executive order, describes shoddy cybersecurity practices, a lax corporate culture and a lack of sincerity about the company’s knowledge of the targeted breach, which affected multiple U.S. agencies that deal with China.

It concluded that “Microsoft’s security culture was inadequate and requires an overhaul” given the company’s ubiquity and critical role in the global technology ecosystem. Microsoft products “underpin essential services that support national security, the foundations of our economy, and public health and safety.”

The panel said the intrusion, discovered in June by the State Department and dating to May, “was preventable and should never have occurred,” and it blamed its success on “a cascade of avoidable errors.” What’s more, the board said, Microsoft still doesn’t know how the hackers got in.

The panel made sweeping recommendations, including urging Microsoft to put on hold adding features to its cloud computing environment until “substantial security improvements have been made.”

It said Microsoft’s CEO and board should institute “rapid cultural change,” including publicly sharing “a plan with specific timelines to make fundamental, security-focused reforms across the company and its full suite of products.”

In a statement, Microsoft said it appreciated the board’s investigation and would “continue to harden all our systems against attack and implement even more robust sensors and logs to help us detect and repel the cyber-armies of our adversaries.”

In all, the state-backed Chinese hackers broke into the Microsoft Exchange Online email of 22 organizations and more than 500 individuals around the world — including the U.S. ambassador to China, Nicholas Burns — accessing some cloud-based email boxes for at least six weeks and downloading some 60,000 emails from the State Department alone, the 34-page report said. Three think tanks and foreign government entities, including a number of British organizations, were among those compromised, it said.

The board, convened by Homeland Security Secretary Alejandro Mayorkas in August, accused Microsoft of making inaccurate public statements about the incident — including issuing a statement saying it believed it had determined the likely root cause of the intrusion “when, in fact, it still has not.” Microsoft did not update that misleading blog post, published in September, until mid-March, after the board repeatedly asked if it planned to issue a correction, it said.

The board also expressed concern about a separate hack disclosed by the Redmond, Washington, company in January, this one of email accounts — including those of an undisclosed number of senior Microsoft executives and an undisclosed number of Microsoft customers — and attributed to state-backed Russian hackers.

The board lamented “a corporate culture that deprioritized both enterprise security investments and rigorous risk management.”

The Chinese hack was initially disclosed in July by Microsoft in a blog post and carried out by a group the company calls Storm-0558. That same group, the panel noted, has been engaged in similar intrusions — compromising cloud providers or stealing authentication keys so it can break into accounts — since at least 2009, targeting companies including Google, Yahoo, Adobe, Dow Chemical and Morgan Stanley.

Microsoft noted in its statement that the hackers involved are “well-resourced nation state threat actors who operate continuously and without meaningful deterrence.”

The company said that it recognized that recent events “have demonstrated a need to adopt a new culture of engineering security in our own networks,” and added that it had “mobilized our engineering teams to identify and mitigate legacy infrastructure, improve processes, and enforce security benchmarks.”

US, Britain announce partnership on AI safety, testing

WASHINGTON — The United States and Britain on Monday announced a new partnership on the science of artificial intelligence safety, amid growing concerns about upcoming next-generation versions.

Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November.

“We all know AI is the defining technology of our generation,” Raimondo said. “This partnership will accelerate both of our institutes’ work across the full spectrum to address the risks of our national security concerns and the concerns of our broader society.”

Britain and the United States are among countries establishing government-led AI safety institutes.

Britain said in October its institute would examine and test new types of AI, while the United States said in November it was launching its own safety institute to evaluate risks from so-called frontier AI models and is now working with 200 companies and entities.

Under the formal partnership, Britain and the United States plan to perform at least one joint testing exercise on a publicly accessible model and are considering exploring personnel exchanges between the institutes. Both are working to develop similar partnerships with other countries to promote AI safety.

“This is the first agreement of its kind anywhere in the world,” Donelan said. “AI is already an extraordinary force for good in our society and has vast potential to tackle some of the world’s biggest challenges, but only if we are able to grip those risks.”

Generative AI, which can create text, photos and videos in response to open-ended prompts, has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans, with catastrophic effects.

In a joint interview with Reuters on Monday, Raimondo and Donelan said urgent joint action was needed to address AI risks.

“Time is of the essence because the next set of models are about to be released, which will be much, much more capable,” Donelan said. “We have a focus on the areas that we are dividing and conquering and really specializing.”

Raimondo said she would raise AI issues at a meeting of the U.S.-EU Trade and Technology Council in Belgium Thursday.

The Biden administration plans to soon announce additions to its AI team, Raimondo said. “We are pulling in the full resources of the U.S. government.”

Both countries plan to share key information on capabilities and risks associated with AI models and systems and technical research on AI safety and security.

In October, Biden signed an executive order that aims to reduce the risks of AI. In January, the Commerce Department said it was proposing to require U.S. cloud companies to determine whether foreign entities are accessing U.S. data centers to train AI models.

Britain said in February it would spend more than 100 million pounds ($125.5 million) to launch nine new AI research hubs and train regulators about the technology.

Raimondo said she was especially concerned about the threat of AI applied to bioterrorism or a nuclear war simulation.

“Those are the things where the consequences could be catastrophic and so we really have to have zero tolerance for some of these models being used for that capability,” she said.

Kia Recalls 427,000 Telluride SUVs; Could Roll Away While Parked

New York — Kia is recalling more than 427,000 of its Telluride SUVs due to a defect that may cause the cars to roll away while they’re parked.

According to documents published by the National Highway Traffic Safety Administration, the intermediate shaft and right front driveshaft of certain 2020-2024 Tellurides may not be fully engaged. Over time, this can lead to “unintended vehicle movement” while the cars are in park — increasing potential crash risks.

Kia America decided to recall all 2020-2023 model year and select 2024 model year Tellurides earlier this month, NHTSA documents show. At the time, no injuries or crashes were reported.

Improper assembly is suspected to be the cause of the shaft engagement problem — with the recall covering 2020-2024 Tellurides that were manufactured between Jan. 9, 2019, and Oct. 19, 2023. Kia America estimates that 1% have the defect.

To remedy this issue, recall documents say, dealers will update the affected cars’ electronic parking brake software and replace any damaged intermediate shafts for free. Owners who already incurred repair expenses will also be reimbursed.

In the meantime, drivers of the impacted Tellurides are instructed to manually engage the emergency brake before exiting the vehicle. Drivers can also confirm if their specific vehicle is included in this recall and find more information using the NHTSA site and/or Kia’s recall lookup platform.

Owner notification letters are otherwise set to be mailed out on May 15, with dealer notification beginning a few days prior.

The Associated Press reached out to Irvine, California-based Kia America for further comment Sunday. No comment was received.

Gmail Revolutionized Email 20 Years Ago

San Francisco — Google co-founders Larry Page and Sergey Brin loved pulling pranks, so they began rolling out outlandish ideas every April Fool’s Day not long after starting their company more than a quarter century ago. One year, Google posted a job opening for a Copernicus research center on the moon. Another year, the company said it planned to roll out a “scratch and sniff” feature on its search engine.

The jokes were consistently over-the-top, and people learned to laugh them off as another example of Google mischief. That’s why Page and Brin decided to unveil something no one would believe was possible 20 years ago on April Fool’s Day.

It was Gmail, a free service boasting 1 gigabyte of storage per account, an amount that sounds almost pedestrian in an age of 1-terabyte iPhones. But it sounded like a preposterous amount of email capacity back then, enough to store about 13,500 emails before running out of space compared to just 30 to 60 emails in the then-leading webmail services run by Yahoo and Microsoft. That translated into 250 to 500 times more email storage space.
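For readers who want to check those figures, here is a minimal back-of-the-envelope calculation; the average message size and the rival mailbox sizes are assumptions for illustration, not numbers from Google or the article:

```python
# Rough arithmetic behind the storage comparison above.
AVG_EMAIL_KB = 75                 # assumed average size of a 2004-era email
GMAIL_MB = 1024                   # Gmail's original 1 gigabyte, in megabytes
RIVAL_MB = (2, 4)                 # mailbox sizes implied by "250 to 500 times" more storage

print(GMAIL_MB * 1024 // AVG_EMAIL_KB)      # roughly 14,000 messages fit in 1 GB
for mb in RIVAL_MB:
    print(mb * 1024 // AVG_EMAIL_KB)        # roughly 27 to 54 messages in a 2-4 MB mailbox
```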

Besides the quantum leap in storage, Gmail also came equipped with Google’s search technology so users could quickly retrieve a tidbit from an old email, photo or other personal information stored on the service. It also automatically threaded together a string of communications about the same topic, so everything flowed together as if it was a single conversation.

“The original pitch we put together was all about the three ‘S’s’ — storage, search and speed,” said former Google executive Marissa Mayer, who helped design Gmail and other company products before later becoming Yahoo’s CEO.

It was such a mind-bending concept that shortly after The Associated Press published a story about Gmail late on the afternoon of April Fool’s 2004, readers began calling and emailing to inform the news agency it had been duped by Google’s pranksters.

“That was part of the charm, making a product that people won’t believe is real. It kind of changed people’s perceptions about the kinds of applications that were possible within a web browser,” former Google engineer Paul Buchheit recalled during a recent AP interview about his efforts to build Gmail.

It took three years to do as part of a project called “Caribou” — a reference to a running gag in the Dilbert comic strip. “There was something sort of absurd about the name Caribou, it just made me laugh,” said Buchheit, the 23rd employee hired at a company that now employs more than 180,000 people.

The AP knew Google wasn’t joking about Gmail because an AP reporter had been abruptly asked to come down from San Francisco to the company’s Mountain View, California, headquarters to see something that would make the trip worthwhile.

After arriving at a still-developing corporate campus that would soon blossom into what became known as the “Googleplex,” the AP reporter was ushered into a small office where Page was wearing an impish grin while sitting in front of his laptop computer.

Page, then just 31 years old, proceeded to show off Gmail’s sleekly designed inbox and demonstrated how quickly it operated within Microsoft’s now-retired Explorer web browser. And he pointed out there was no delete button featured in the main control window because it wouldn’t be necessary, given Gmail had so much storage and could be so easily searched. “I think people are really going to like this,” Page predicted.

As with so many other things, Page was right. Gmail now has an estimated 1.8 billion active accounts — each one now offering 15 gigabytes of free storage bundled with Google Photos and Google Drive. Even though that’s 15 times more storage than Gmail initially offered, it’s still not enough for many users who rarely see the need to purge their accounts, just as Google hoped.

The digital hoarding of email, photos and other content is why Google, Apple and other companies now make money from selling additional storage capacity in their data centers. (In Google’s case, it charges anywhere from $30 annually for 200 gigabytes of storage to $250 annually for 5 terabytes of storage). Gmail’s existence is also why other free email services and the internal email accounts that employees use on their jobs offer far more storage than was fathomed 20 years ago.

“We were trying to shift the way people had been thinking because people were working in this model of storage scarcity for so long that deleting became a default action,” Buchheit said.

Gmail was a game changer in several other ways while becoming the first building block in the expansion of Google’s internet empire beyond its still-dominant search engine.

After Gmail came Google Maps and Google Docs with word processing and spreadsheet applications. Then came the acquisition of video site YouTube, followed by the introduction of the Chrome browser and the Android operating system that powers most of the world’s smartphones. With Gmail’s explicitly stated intention to scan the content of emails to get a better understanding of users’ interests, Google also left little doubt that digital surveillance in pursuit of selling more ads would be part of its expanding ambitions.

Although it immediately generated a buzz, Gmail started out with a limited scope because Google initially only had enough computing capacity to support a small audience of users.

But that scarcity created an air of exclusivity around Gmail that drove feverish demand for elusive invitations to sign up. At one point, invitations to open a Gmail account were selling for $250 apiece on eBay. “It became a bit like a social currency, where people would go, ‘Hey, I got a Gmail invite, you want one?’” Buchheit said.

Although signing up for Gmail became increasingly easier as more of Google’s network of massive data centers came online, the company didn’t begin accepting all comers to the email service until it opened the floodgates as a Valentine’s Day present to the world in 2007.

Swedish Embassy Exhibit Highlights Uses of Artificial Intelligence

WASHINGTON — Artificial Intelligence for good is the subject of a new exhibit at the Embassy of Sweden in Washington, showing how Swedish companies and organizations are using AI for a more open society, a healthier world, and a greener planet.

Ambassador Urban Ahlin told an embassy reception that Sweden’s broad collaboration across industry, academia and government makes it a leader in applying AI in public-interest areas, such as clean tech, social sciences, medical research, and greener food supply chains. That includes tracking the mood and health of cows.

Fitbit for cows

It is technology developed by DeLaval, a producer of dairy and farming machinery. The firm’s Market Solution Manager in North America, Joaquin Azocar, says the small wearable device, about the size of an earring, fits in a cow’s ear and tracks the animal’s movements 24/7, much like a Fitbit.

The ear-mounted tags send out signals to receivers across the farm. DeLaval’s artificial intelligence system analyzes the data and looks for correlations in patterns, trends, and deviations in the animals’ activities, to predict if a cow is sick, in heat, or not eating well.

Azocar, a trained veterinarian, says that when dairy farmers are alerted sooner to changes in their animals’ behavior, they can provide treatment earlier, which translates to less recovery time.
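The pattern-deviation idea behind such alerts is easy to sketch. The following toy example (hypothetical activity numbers and a generic threshold rule, not DeLaval’s actual algorithm) flags an animal whose daily activity falls well below its own recent baseline:

```python
import numpy as np

# Hypothetical daily step counts for one cow; the last value is today's reading.
activity = np.array([820, 790, 805, 811, 798, 760, 430])

baseline = activity[:-1].mean()     # the cow's own recent average
spread = activity[:-1].std()        # how much it normally varies

if activity[-1] < baseline - 3 * spread:
    print("Alert: activity far below this cow's baseline - possible illness")
```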

AI helping in childbirth

There are also advances in human health. The Pelvic Floor AI project, still in development, is an AI-based solution to identify high-risk cases of pelvic floor injury and facilitate timely interventions to prevent and limit harm.

It was developed by a team of gynecologists and women’s health care professionals from Sweden’s Sahlgrenska University Hospital to help the nearly 20% of women who experience injury to their pelvic floor during childbirth.

The exhibition “is a great way to showcase the many ways AI is being adapted and used, in medicine and in many other areas,” said exhibition attendee Jesica Lindgren, general counsel for international consulting firm BlueStar Strategies. “It’s important to know how AI is evolving and affecting our everyday life.”

Green solutions using AI

The exhibition includes examples of what AI can do about climate change, including rising sea levels and declining biodiversity.

AirForestry is developing technology “for precise forestry that will select and harvest trees fully autonomously.” The firm says that “harvesting the right trees in the right place could significantly improve overall carbon sequestration and resilience.”

AI & the defense industry

Outlining the development of artificial intelligence for the defense industry, the exhibit admits that this “can be controversial.”

“There are exciting possibilities to use AI to solve problems that cannot be solved using traditional algorithms due to their complexity and limitations in computational power,” the exhibit states. “But it requires thorough consideration of how AI should and shouldn’t be utilized. Proactively engaging in AI research is necessary to understand the technology’s capabilities and limitations and help shape its ethical standards.”

AI and privacy

Exhibition participant Quentin Black is an engineer with Axis Communications, an industry leader in video surveillance. He said the project came out of the General Data Protection Regulation, or GDPR, an EU policy that protects the privacy of citizens in public whose images could be picked up by video surveillance cameras.

The regulations surrounding privacy are stricter in Europe than they are in the U.S., Black said.

“In the U.S. the public doesn’t really have an expectation of privacy; there’s cameras everywhere. In Europe, it’s different.” That regulation inspired Axis Communications to develop AI that provides privacy, he explained.

Black pointed to a large monitor divided into four windows, to show how AI is being used to set up four different filters to provide privacy.

The Axis Live Privacy Shield remotely monitors activities both indoors and outdoors while safeguarding privacy in real time. The technology is downloadable and free, to provide privacy to people and/or environments, using a variety of filters.

In the monitor on display in the exhibition, Black explained the four quadrants. The upper right window of the monitor displays privacy with a full color block out of all humans, using AI to distinguish the difference between the people and the environment.

The upper left window provides privacy to the person’s head. The bottom left corner provides pixelization, or a mosaic, of the person’s entire body and the immediate environment surrounding the person. And the bottom right corner shows blockage of the environment, so “an inverse of the personal privacy,” Black explained.

“So, if it was a top secret facility, or you want to see the people walking up to your door without a view of your neighbor’s house, this is where this can be applied.”
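As a concrete picture of what a pixelization filter does, here is a minimal sketch that mosaics a rectangular region of an image. It is a generic NumPy illustration, not Axis’s implementation, and real systems such as Live Privacy Shield detect people automatically rather than taking a hand-drawn box:

```python
import numpy as np

def mosaic(frame: np.ndarray, box: tuple, block: int = 16) -> np.ndarray:
    """Return a copy of an H x W x 3 image with the (x0, y0, x1, y1) box pixelated."""
    x0, y0, x1, y1 = box
    out = frame.copy()
    for y in range(y0, y1, block):
        for x in range(x0, x1, block):
            tile = out[y:min(y + block, y1), x:min(x + block, x1)]
            tile[:] = tile.mean(axis=(0, 1))   # flatten each tile to its average color
    return out

# Hypothetical example: pixelate a person-sized box in a synthetic video frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
anonymized = mosaic(frame, box=(200, 100, 320, 400))
```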

Tip of the iceberg

“I think that AI is on everybody’s thoughts, and what I appreciate about the House of Sweden’s approach in this exhibition is highlighting a thoughtful, scientific, business-oriented and human-oriented perspective on AI in society today,” said Molly Steenson, President and CEO of the American Swedish Institute.

Though AI and machine learning have been around since the 1950s, she says it is only now that we are seeing “the contemporary upswing and acceleration of AI, especially generative AI in things like large language models.”

“So, while large companies and tech companies might want us to speed up and believe that it is only scary or it is only good, I think it’s a lot more nuanced than that,” she said.

Aborted Space Launch Sees Success on Second Try

A space launch aborted only to find success days later. Plus, Japan makes a push into private spaceflight, and NASA really wants you to see the solar eclipse — but safety first. VOA’s Arash Arabasadi brings us The Week in Space.

TikTok Bill Faces Uncertain Fate in Senate

WASHINGTON — The young voices in the messages left for North Carolina Senator Thom Tillis were laughing, but the words were ominous.

“OK, listen, if you ban TikTok I will find you and shoot you,” one said, giggling and talking over other young voices in the background. “I’ll shoot you and find you and cut you into pieces.” Another threatened to kill Tillis, and then take their own life.

Tillis’s office says it has received around 1,000 calls about TikTok since the House passed legislation this month that would ban the popular app if its China-based owner doesn’t sell its stake. TikTok has been urging its users — many of whom are young — to call their representatives, even providing an easy link to the phone numbers. “The government will take away the community that you and millions of other Americans love,” read one pop-up message from the company when users opened the app.

Tillis, who supports the House bill, reported the call to the police. “What I hated about that was it demonstrates the enormous influence social media platforms have on young people,” he said in an interview.

While more aggressive than most, TikTok’s extensive lobbying campaign is the latest attempt by the tech industry to head off any new legislation — and it’s a fight the industry usually wins. For years Congress has failed to act on bills that would protect users’ privacy, protect children from online threats, make companies more liable for their content and put loose guardrails around artificial intelligence, among other things.

“I mean, it’s almost embarrassing,” says Senate Intelligence Committee Chairman Mark Warner, D-Va., a former tech executive who is also supporting the TikTok bill and has long tried to push his colleagues to regulate the industry. “I would hate for us to maintain our perfect zero batting average on tech legislation.”

Some see the TikTok bill as the best chance for now to regulate the tech industry and set a precedent, if a narrow one focused on just one company. President Joe Biden has said he would sign the House bill, which overwhelmingly passed 362-65 this month after a rare 50-0 committee vote moving it to the floor.

But it’s already running into roadblocks in the Senate, where there is little unanimity on the best approach to ensure that China doesn’t access private data from the app’s 170 million U.S. users or influence them through its algorithms.

Other factors are holding the Senate back. The tech industry is broad and falls under the jurisdiction of several different committees. Plus, the issues at play don’t fall cleanly on partisan lines, making it harder for lawmakers to agree on priorities and how legislation should be written. Senate Commerce Committee Chairwoman Maria Cantwell, D-Wash., has so far been reluctant to embrace the TikTok bill, for example, calling for hearings first and suggesting that the Senate may want to rewrite it.

“We’re going through a process,” Cantwell said. “It’s important to get it right.”

Warner, on the other hand, says the House bill is the best chance to get something done after years of inaction. And he says that the threatening calls from young people are a good example of why the legislation is needed: “It makes the point, do we really want that kind of messaging being able to be manipulated by the Communist Party of China?”

Some lawmakers are worried that blocking TikTok could anger millions of young people who use the app, a crucial segment of voters in November’s election. But Warner says “the debate has shifted” from talk of an outright ban a year ago to the House bill, which would force ByteDance Ltd., the Chinese technology firm that wholly owns TikTok, to sell the app for it to continue operating in the United States.

Vice President Kamala Harris, in a television interview that aired Sunday, acknowledged the popularity of the app and that it has become an income stream for many people. She said the administration does not intend to ban TikTok but instead deal with its ownership. “We understand its purpose and its utility and the enjoyment that it gives a lot of folks,” Harris told ABC’s “This Week.”

Republicans are divided. While most of them support the TikTok legislation, others are wary of overregulation and the government targeting one specific entity.

“The passage of the House TikTok ban is not just a misguided overreach; it’s a draconian measure that stifles free expression, tramples constitutional rights, and disrupts the economic pursuits of millions of Americans,” Kentucky Sen. Rand Paul posted on X, formerly Twitter.

Hoping to persuade their colleagues to support the bill, Democratic Sen. Richard Blumenthal of Connecticut and Republican Sen. Marsha Blackburn of Tennessee have called for intelligence agencies to declassify information about TikTok and China’s ownership that has been provided to senators in classified briefings.

“It is critically important that the American people, especially TikTok users, understand the national security issues at stake,” the senators said in a joint statement.

Blumenthal and Blackburn have separate legislation they have been working on for several years aimed at protecting children’s online safety, but the Senate has yet to vote on it. Efforts to regulate online privacy have also stalled, as has legislation to make technology companies more liable for the content they publish.

And an effort by Senate Majority Leader Chuck Schumer, D-N.Y., to quickly move legislation that would regulate the burgeoning artificial intelligence industry has yet to show any results.

Schumer has said very little about the TikTok bill or whether he might put it on the Senate floor.

“The Senate will review the legislation when it comes over from the House,” was all he would say after the House passed the bill.

South Dakota Sen. Mike Rounds, a Republican who has worked with Schumer on the artificial intelligence effort, says he thinks the Senate can eventually pass a TikTok bill, even if it’s a different version. He says the classified briefings “convinced the vast majority of members” that they have to address the collection of data from the app and TikTok’s ability to push out misinformation to users.

“I think it’s a clear danger to our country if we don’t act,” he said. “It does not have to be done in two weeks, but it does have to be done.”

Rounds says he and Schumer are still holding regular meetings on artificial intelligence, as well, and will soon release some of their ideas publicly. He says he’s optimistic that the Senate will eventually act to regulate the tech industry.

“There will be some areas that we will not try to get into, but there are some areas that we have very broad consensus on,” Rounds says.

Tillis says senators may have to continue laying the groundwork for a while and educating colleagues on why some regulation is needed, with an eye toward passing legislation in the next Congress.

“It can’t be the wild, wild west,” Tillis said.

At UN, Nations Cooperate Toward Safe, Trustworthy AI Systems

United Nations — The U.N. General Assembly adopted by consensus Thursday a first-of-its-kind resolution addressing the potential of artificial intelligence to accelerate progress toward sustainable development, while emphasizing the need for safe, secure and trustworthy AI systems.

The initiative, led by the United States, seeks to manage AI’s risks while utilizing its benefits.

“Today as the U.N. and AI finally intersect, we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than to let it govern us,” said U.S. Ambassador Linda Thomas-Greenfield. “So let us reaffirm that AI will be created and deployed through the lens of humanity and dignity, safety and security, human rights and fundamental freedoms.”

The Biden administration said it took more than three months to negotiate what it characterized as a “baseline set of principles” around AI, engaging with 120 countries and incorporating feedback from many of them, including China, which was one of the 123 co-sponsors of the text.

While General Assembly resolutions are not legally binding, they reflect the political consensus of the international community.

The resolution recognizes the disparities in technological development between developed and developing countries and stresses the need to bridge the digital divide so everyone can equitably access the benefits of AI.

It also outlines measures for responsible AI governance, including the development of regulatory frameworks, capacity building initiatives and support for research and innovation. The resolution encourages international collaboration to address the evolving challenges and opportunities AI technologies pose, with a focus on advancing sustainable development goals.

U.S. Vice President Kamala Harris welcomed adoption of the resolution, saying all nations must be guided by a common set of understandings on the use of AI systems.

“Too often, in past technological revolutions, the benefits have not been shared equitably, and the harms have been felt by a disproportionate few,” she said in a statement. “This resolution establishes a path forward on AI where every country can both seize the promise and manage the risks of AI.”

At the World Economic Forum meetings in Davos, Switzerland, in January, U.N. Secretary-General Antonio Guterres expressed concern about the risk of unintended consequences with “every new iteration of generative AI.” He said it has “enormous potential” for sustainable development but also the potential to worsen inequality.

“And some powerful tech companies are already pursuing profits with a clear disregard for human rights, personal privacy and social impact,” he said at the time.

The U.N. chief created an AI advisory body last year, and it will publish its final report ahead of the U.N.’s Summit of the Future in September.

Reddit, the Self-Anointed ‘Front Page of the Internet,’ Jumps 55% in Wall Street Debut

NEW YORK — Reddit soared in its Wall Street debut as investors pushed the value of the company close to $9 billion seconds after it began trading on the New York Stock Exchange.

Reddit, which priced its IPO at $34 a share, debuted Thursday afternoon at $47 a share. The going price has climbed even higher since, with shares for the self-anointed “front page of the internet” soaring more than 55% as of around 1:20 p.m. ET.

The IPO will test the quirky company’s ability to overcome a nearly 20-year history colored by uninterrupted losses, management turmoil and occasional user backlashes to build a sustainable business.

“The supply is pretty limited and there’s strong demand, so my sense is that this is going to be a hot IPO,” Reena Aggarwal, director of Georgetown University’s Psaros Center for Financial Markets and Policy, said ahead of Reddit’s trading Thursday. “The good news for Reddit is it’s a hot market.”

Still, she also anticipates Reddit’s IPO to be volatile. Even with a sizeable “pop,” it’s possible that some might sell their shares to reap their gains soon after, potentially causing prices to drift.

The interest surrounding Reddit stems largely from a devoted audience that religiously visits the service to discuss a potpourri of subjects ranging from silly memes to existential worries, as well as to get recommendations from like-minded people.

About 76 million users checked into one of Reddit’s roughly 100,000 communities in December, according to the regulatory disclosures required before the San Francisco company goes public. Reddit set aside up to 1.76 million of 15.3 million shares being offered in the IPO for users of its service.

Per the usual IPO custom, the remaining shares are expected to be bought primarily by mutual funds and other institutional investors betting Reddit is ready for prime time in finance.

Reddit’s moneymaking potential also has attracted some prominent supporters, including OpenAI CEO Sam Altman, who accumulated a stake as an early investor that has made him one of the company’s biggest shareholders. Altman owns 12.2 million shares of Reddit stock, according to the company’s IPO disclosures.

Other early investors in Reddit have included PayPal co-founder Peter Thiel, Academy Award-winning actor Jared Leto and rapper Snoop Dogg. None of them are listed among Reddit’s largest shareholders heading into the IPO.

By the tech industry’s standards, Reddit remains extraordinarily small for a company that has been around as long as it has.

Reddit has never profited from its broad reach while piling up cumulative losses of $717 million. That number has swollen from cumulative losses of $467 million in December 2021 when the company first filed papers to go public before aborting that attempt.

In the recent documents filed for its revived IPO, Reddit attributed the losses to a fairly recent focus on finding new ways to boost revenue.

Not long after it was born, Reddit was sold to magazine publisher Conde Nast for $10 million in a deal that meant the company didn’t need to run as a standalone business. Even after Conde Nast parent Advance Magazine Publishers spun off Reddit in 2011, the company said in its IPO filing that it didn’t begin to focus on generating revenue until 2018.

Those efforts, mostly centered around selling ads, have helped the social platform increase its annual revenue from $229 million in 2020 to $804 million last year. But the San Francisco-based company also posted combined losses of $436 million from 2020 through 2023.

Reddit outlined a strategy in its filing calling for even more ad sales on a service that it believes will be a powerful marketing magnet for companies because so many people search for product recommendations there.

The company also is hoping to bring in more money by licensing access to its content in deals similar to the $60 million that Google recently struck to help train its artificial intelligence models. That ambition, though, faced an almost immediate challenge when the U.S. Federal Trade Commission opened an inquiry into the arrangement.

Since Thursday just marks Reddit’s first day on the public market, Aggarwal stresses that the first key measure of success will boil down to the company’s next earnings call.

“As a public company now they have to report a lot more … in the next earnings release,” she said. “I’m sure the market will watch that carefully.”

Reddit also experienced tumultuous bouts of instability in leadership that may scare off prospective investors. Company co-founders Steve Huffman and Alexis Ohanian — also the husband of tennis superstar Serena Williams — both left Reddit in 2009 while Conde Nast was still in control, only to return years later.

Huffman, 40, is now CEO, but how he got the job serves as a reminder of how messy things can get at Reddit. The change in command occurred in 2015 after Ellen Pao resigned as CEO amid a nasty user backlash to the banning of several communities and the firing of Reddit’s talent director. Even though Ohanian said he was primarily responsible for the firing and the bans, Pao was hit with most of the vitriol.

Although his founder’s letter leading up to this IPO didn’t mention it, Huffman touched upon the company’s past turmoil in another missive included in a December 2021 filing attempt that was subsequently canceled.

“We lived these challenges publicly and have the scars, learnings, and policy updates to prove it,” Huffman wrote in 2021. “Our history influences our future. There will undoubtedly be more challenges to come.”

US Senate Considers Bill That Could Ban TikTok in United States

The White House is urging senators to quickly begin considering a bill that would address national security concerns related to the social media app TikTok. The House approved the measure earlier this week. VOA Congressional Correspondent Katherine Gypson reports. Camera: Saqib Ul Islam.