Technology

Researchers have created the first AI-powered computer worm

⇧ (video) You may also like this partner content

A group of researchers at Cornell Tech recently developed a computer worm that leverages generative AI to spread more easily. This malware replicates and spreads autonomously from one AI system to another. Although an attack with such malware has never been detected so far, researchers have warned of the possibility of it happening soon, given the current technological context.

This new computer worm was named “Morris II” in reference to Morris, one of the first computer worms, which caused significant damage to the Internet nearly thirty years ago. This malware can break some security protections of AI systems and steal personal data contained in emails.

One of the study’s authors, Cornell Tech’s Ben Nasi, spoke to the media recently Wired For details on this computing feat. During the interview, he also warned about the vulnerability of current AI systems. ” Now you have the ability to launch a new type of cyber attack that has never been seen before.“, he warned.

“Prompt against self-fertilization”

To create the computer worm, the researchers used what they call a “self-reproducing anti-prompt” — a prompt is an instruction given to the AI ​​to generate a response. This technique involves creating a prompt that, once processed by the AI ​​system, generates a new prompt in response. Thus, the infected system is asked to generate a series of instructions in its responses.

The worm attacks through AI-assisted messaging systems. For their experiment, the team built an experimental email system using generative AI technologies such as OpenAI’s ChatGPT, Google’s Gemini, and an open source AI model called LLaVA. They then created a message containing a self-reproducing prompt and made sure it was integrated into the database used by the AI. Thus, when a query was sent, the system generated a response based on “poisoned” data.

Each response generated can serve as a new infection vector, spreading to other AI systems when sent to another person via the messaging system. Therefore, it can easily infect new systems, creating a self-propagating cycle. In addition to purely textual prompts, worms can be embedded as hidden prompts in images to infect email systems.

Legitimate concerns

The process is not limited to worm propagation. It can also extract various types of sensitive information from emails. “This can be names, phone numbers, credit card numbers, social security numbers, anything considered confidential», Nasi explains to the mediaWired.

The findings by this team of researchers raise obvious concerns regarding computer security. Indeed, the effects of this type of attack go beyond minor inconveniences, potentially leading to privacy violations, all kinds of fraud, and other harmful consequences for end users. The authors of the study warn that as AI becomes more accessible and its understanding expands, the potential for its use with malicious intent increases.

See also

Figure Openai Humanoid Robot

According to Nasi, the purpose of this research is not to criticize the weaknesses of current AI models, but above all to highlight the urgency of strengthening their security. Additionally, the team has already reported the results to OpenAI and Google.

Source: ComPromptMized

To learn more about how the Morris II computer virus works (© YouTube/Ben Nassi):

Source link

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button