
Hackers linked to the governments of Russia, North Korea and China are using ChatGPT

Hackers have not been shy about using artificial intelligence. Hackers linked to the Russian, Chinese, Iranian and North Korean governments have used ChatGPT to identify vulnerabilities in computer systems, prepare “phishing” operations and disable antivirus software, OpenAI and Microsoft reported in documents published Wednesday.

In a message posted on its site, OpenAI indicates that it has “disrupted” the use of generative artificial intelligence (AI) by these state-affiliated actors, which were identified in collaboration with Microsoft Threat Intelligence, a unit that tracks the cybersecurity threats companies may face. “OpenAI accounts identified as associated with these actors have been closed,” said the creator of the generative AI interface ChatGPT.

Emerald Sleet, a North Korean hacker group, and Crimson Sandstorm, a group affiliated with the Iranian Revolutionary Guards, used chatbots to create documents that could be used for “phishing,” according to the report. “Phishing” involves approaching Internet users under a false identity in order to illegally obtain passwords, codes, identifiers or other non-public information and documents from them. According to Microsoft, Crimson Sandstorm also used large language models (LLMs), the basis of a generative AI interface, to better understand how to disable antivirus software.

Refusing to help a hacker group close to Beijing

As for the Charcoal Typhoon group, which is believed to be close to the Chinese authorities, it used ChatGPT to try to find vulnerabilities in anticipation of possible computer attacks.

“The goal of the partnership between Microsoft and OpenAI is to ensure the safe and responsible use of technologies powered by artificial intelligence, such as ChatGPT,” says the Redmond (Washington State) group, which indicates that it has contributed to strengthening the protection of OpenAI’s large language models (LLMs). The report noted that the interface refused to help another hacker group close to the Chinese government, Salmon Typhoon, generate computer code for hacking purposes, thereby “adhering to ethical rules” built into the software.

“Understanding how the most advanced threat actors use our programs for malicious purposes informs us of practices that may become more prevalent in the future,” says OpenAI. “We will not be able to block every malicious attempt,” the company warns. “But by continuing to innovate, collaborate and share, we make it harder for bad actors to go unnoticed.”
