Dispute over threat of extinction posed by AI looms over surging industry
While experts disagree about whether AI poses an existential threat, Dan Hendrycks, a researcher and the director of the Center for AI Safety, or CAIS, is among those who believe the technology could destroy humanity in a variety of ways.
A bad actor could gain possession of a future version of generative AI, ask it for instructions on how to make a biological weapon and set off devastation, he told ABC News. Or, he added, the efficiency delivered by AI could force widespread business adoption, leaving the global economy in its thrall; alternatively, it could also worsen the spread of misinformation and disinformation.
The end of humanity, Hendrycks said, is hardly a remote possibility. "If I see international coordination doesn't happen, or much of it, it'll be more likely than not that we go extinct," he added.
Experts who lend credence to the threat told ABC News that the massive potential risks require urgent attention and stiff oversight, while skeptics warned that grave forecasts fuel a misunderstanding of AI's capabilities and distract from current harms caused by the technology.
In recent months, dire warnings about the massive threat posed by AI have ascended from the corridors of computer science departments to the halls of Congress.
An open letter written in May by CAIS warned that AI poses a "risk of extinction" akin to pandemics or nuclear war, featuring signatures from hundreds of researchers and industry leaders like OpenAI CEO Sam Altman and Demis Hassabis, the CEO of Google DeepMind, the tech giant's AI division.
Altman, whose company developed the viral AI sensation ChatGPT, told a Senate subcommittee in May: "If this technology goes wrong, it can go quite wrong." In an interview with ABC News in March, Altman said, "I think people should be happy that we are a little bit scared of this."
OpenAI and Google did not immediately respond to ABC News' request for comment.
MORE: AI leaders warn the technology poses 'risk of extinction' like pandemics and nuclear war
Other AI luminaries, however, have balked. Yann LeCun, chief AI scientist at Meta, told the MIT Technology Review that fear of an AI takeover is "preposterously ridiculous." Sarah Myers West, managing director of the nonprofit AI Now Institute, told ABC News: "A lot of this is more rhetoric than grounded analysis."
The divide over the existential threat posed by AI looms over recent advances in the technology as it sweeps across institutions from manufacturing to mass entertainment, prompting disagreement about the pace of development and the focus of possible regulation.
"We're looking to experts to tell us," said Jeffrey Sonnenfeld, a professor of management at Yale University who convenes gatherings of top CEOs. "But experts are split on this."
As AI develops, however, an imperative for onlookers is clear, he said: "We can't sit on the sidelines."
Concern about the risks posed by AI has drawn greater attention lately in response to major breakthroughs like ChatGPT, which reached 100 million users within two months of its launch in November.
Microsoft launched a version of its Bing search engine in March that offers responses delivered by GPT-4, the latest model underpinning ChatGPT. Rival search company Google in February announced an AI model called Bard.
"AI in previous instantiations was a largely invisible system," Myers West said. "It wasn't something we interacted with in a tangible way. This has had a really visceral effect on the broader public in that it's contributing to this wave of both excitement and a tremendous amount of anxiety."
MORE: Is AI coming for your job? ChatGPT renews fears
Doomsday forecasts, however, lack granular specifics and overstate the potential for self-awareness to form within generative AI like ChatGPT, which scans text from across the internet and strings words together based on statistical probability, Myers West said.
"At present, essentially the way these systems work is akin to applied statistics, so they don't have any capacity for deeper understanding, don't have any capacity for empathy and certainly not sentience," Myers West added.
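The "applied statistics" Myers West describes can be illustrated with a deliberately tiny sketch: count which word most often follows another in a corpus, then emit the most frequent successor. (This toy bigram model and its sample corpus are illustrative assumptions, not how any production system like ChatGPT is actually built, which uses vastly larger neural networks; but the core idea of choosing words by statistical likelihood is the same.)

```python
from collections import defaultdict, Counter

# Toy corpus standing in for "text from across the internet"
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (bigram statistics)
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

Nothing in this procedure involves understanding or intent; the model simply reproduces whichever continuation was most frequent in its training text, which is the point skeptics make about sentience claims.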
But the risk posed by AI stems from its potential to exceed human intelligence rather than mimic it, Stuart Russell, an AI researcher at the University of California, Berkeley, who co-authored a study on societal-scale dangers of the technology, told ABC News.
"If you make systems that are more intelligent than humans, they will have more power over the world than we do, just as we have more power over the world than other species on earth," he said.
Acknowledging a lack of specifics in some prominent messages about the extreme risks, such as the open letter released by CAIS in May, Russell said: "Once you get into specifics, you end up with arguments about which is the most plausible." Hendrycks, of CAIS, added: "Since AI touches on many aspects of society, we end up finding there are many, many risk sources."
Key remedies, such as robust government oversight and liability for AI developers, can help deter a range of catastrophic scenarios, Hendrycks said. "We don't need to know exactly what's going to happen to make interventions to reduce risk," he said.
MORE: Can artificial intelligence help stop mass shootings?
To be sure, while acknowledging the risks of AI, experts heralded its potential benefits. Proponents of AI say the technology could increase productivity, automate unpleasant or mundane tasks and afford the opportunity to focus on creative and innovative endeavors. AI has been touted as an aid for endeavors ranging from the fight against climate change to the diagnosis of cancer.
Senate Majority Leader Chuck Schumer, D-N.Y., released a framework last month outlining four pillars that he hopes will guide future bipartisan legislation governing AI: security, accountability, protecting our foundations and explainability.
The framework is not legislative text, and it's not clear how long it will take for Congress to begin putting together legislative proposals. There has not yet been any comprehensive legislation introduced in Congress to deal with regulating AI, though a bicameral group of lawmakers introduced a proposal last month that would create a blue-ribbon commission to study AI's impact.
"We have no choice but to acknowledge that AI's changes are coming, and in many cases are already here. We ignore them at our own peril," Schumer said last month in prepared remarks at the Center for Strategic and International Studies.
President Joe Biden, meanwhile, appeared last month at a roundtable event focused on AI in California, describing artificial intelligence as something that has "enormous promise and its risks."
An effort to ward off hypothetical long-term dangers could distract from present-day damage caused by AI, said Isabelle Jones, campaign outreach manager for Stop Killer Robots, which aims to establish an international agreement prohibiting the use of autonomous weapons.
"I think that to purely focus on the future is to the detriment of the existing harms that are coming about or that there's an immediate risk of," Jones told ABC News.
Policymakers can address current and future dangers at the same time, Russell said, just as they do in combating climate change. "I think the narrative that you can either do one or the other but can't do both is actually poisonous."
Regardless of whether they believe or question forecasts of extreme risk, experts who spoke with ABC News called on the government to regulate the technology.
"My biggest thing is regulation and international coordination," Hendrycks said. Jones cited international accords on issues like nuclear proliferation as a model for reining in autonomous weapons.
Still, Myers West said, differences could again arise on the issue of specifics. "The devil is in the details," she said.
ABC News' Alison Pecorin contributed reporting.