
Artificial intelligence could pose an “extinction-level” threat to humanity and the US government must intervene, the report warns

(CNN) –– A new report commissioned by the US State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, and warns that time is running out for the federal government to avert disaster.

The findings were based on interviews with more than 200 people over a year, including senior executives at leading artificial intelligence companies, cybersecurity researchers, weapons of mass destruction experts and national security officials in government.

The report, published this week by Gladstone AI, states flatly that the most advanced artificial intelligence systems could, in the worst case, “represent an extinction risk for the human species.”

A US State Department official confirmed to CNN that the agency commissioned the report as part of its ongoing assessment of how AI aligns with its goal of protecting US interests at home and abroad. However, the official stressed that the report does not represent the views of the US government.

The report’s warning comes as another reminder that while AI’s potential continues to captivate investors and the public, it also poses real risks.

“Artificial intelligence is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries and overcome challenges we once thought were insurmountable,” Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.

“But it can also pose serious risks, including catastrophic risks, that we need to be aware of,” Harris said. “And a growing body of evidence — including research and empirical analysis published at the world’s leading AI conferences — suggests that above a certain threshold of capability, artificial intelligence can potentially become uncontrollable.”

White House spokesperson Robyn Patterson said President Joe Biden’s executive order on artificial intelligence was “the most significant step any government in the world has taken to embrace the promise and manage the risks of AI.”

“The President and Vice President will continue to work with our international partners to urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

“Clear and urgent need” to intervene

Researchers warn of two central risks that AI poses broadly.

First, Gladstone AI noted, the most advanced AI systems could be weaponized to inflict potentially irreversible damage. Second, the report says there are private concerns within AI labs that they could at some point “lose control” of the very systems they are developing, with “potentially devastating consequences for global security.”

“The rise of AI and AGI (artificial general intelligence) has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report asserts, adding that there is a risk of an AI “arms race,” conflict and “fatal accidents on the scale of weapons of mass destruction.”

To counter that threat, the Gladstone AI report calls for dramatic new measures, including the launch of a new AI agency, the imposition of “emergency” regulatory safeguards and limits on how much computing power can be used to train AI models.

“There is a clear and urgent need for intervention by the United States government,” the report’s authors wrote.

The report notes that although the work was produced for the State Department, its views “do not necessarily reflect” those of the agency or the US government as a whole.

A government notice issued by the State Department’s Office of Nonproliferation and Disarmament Funding in 2022 shows that the government offered to pay up to $250,000 for assessments of “proliferation and security risks” related to advanced artificial intelligence.

Security concerns

Gladstone AI CEO Harris said the “unprecedented level of access” his team had to officials in the public and private sectors led to startling conclusions. Gladstone AI said it spoke with technical and leadership teams at ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

“Along the way, we learned some serious things,” Harris said in a video posted on Gladstone AI’s website announcing the report. “Behind the scenes, the security situation in advanced AI looks woefully inadequate in relation to the national security threats that AI may very soon pose.”

The Gladstone AI report says competitive pressures are driving companies to accelerate AI development “at the expense of security,” increasing the likelihood that the most advanced AI systems will be “stolen” and “weaponized” against the United States.

The findings add to a growing list of warnings about the existential risks posed by AI, including from some of the industry’s most powerful figures.

About a year ago, Geoffrey Hinton, known as the “godfather of artificial intelligence,” quit his job at Google and denounced the technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction in the next three decades.

Hinton and dozens of other AI industry leaders, academics and others signed a statement in June 2023 that said “mitigating the risk of extinction from AI should be a global priority.”

Business leaders are increasingly concerned about these risks, even as they invest billions of dollars in AI. Last year, 42% of CEOs surveyed at the Yale CEO Summit said that artificial intelligence has the potential to destroy humanity within five to ten years.

Human-like abilities to learn

In its report, Gladstone AI highlighted several prominent people who have warned about the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chairwoman Lina Khan, and a former top OpenAI executive.

According to Gladstone AI, some employees of AI companies privately share similar concerns.

“One individual at a well-known AI lab expressed the view that if a specific next-generation AI model were ever released as open access, this would be ‘horribly bad,’” the report said, “because the model’s potential persuasive capabilities could ‘break democracy’ if they were ever leveraged in areas such as election interference or voter manipulation.”

Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the chance that an AI incident could lead to “global and irreversible effects” in 2024. The estimates ranged from 4% to as high as 20%, according to the report, which noted that the estimates were informal and likely subject to significant bias.

One of the biggest wild cards is the speed with which AI is evolving, particularly AGI, a hypothetical form of artificial intelligence with human-like or even superhuman learning capabilities.

The report says AGI is considered a “major driver of catastrophic loss-of-control risk” and notes that OpenAI, Google DeepMind, Anthropic and Nvidia have all publicly stated that AGI could be reached by 2028, though others believe it is much further away.

Gladstone AI notes that disagreements over AGI timelines make it difficult to develop policies and safeguards, and there is a risk that regulation “could prove detrimental” if the technology develops more slowly than expected.
