AI Tools like ChatGPT Are Being Used for Malware Development

Using AI in healthcare has many advantages, such as accelerating drug development and analyzing medical images. However, the same AI systems that benefit healthcare can also be put to malicious use, including malware development. The Health Sector Cybersecurity Coordination Center (HC3) recently released an analyst note outlining the potential for hackers to use artificial intelligence tools for this purpose, and evidence is mounting that AI tools are already being misused.

AI systems have advanced to the point where they can generate human-like text with remarkable fluency and creativity, including working computer code. One AI tool that has attracted enormous attention in recent weeks is ChatGPT. The chatbot, created by OpenAI, produces human-like responses to user prompts and surpassed 1 million users in December. The tool has been put to countless uses, from writing poetry, songs, books, web articles, and email messages to passing medical licensing and bar exam questions.
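ChatGPT launched as a free web chatbot, but the same family of OpenAI models can also be queried programmatically. As a purely benign illustration of how a single request yields polished text, here is a minimal sketch using OpenAI's completions API as it existed at the time; the model name, parameters, and prompt are illustrative assumptions, and the library interface has since changed:

```python
# Illustrative only: generating human-like text through OpenAI's
# completions API (openai Python library, circa late 2022).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3.5-era model name
    prompt="Write a short, polite email reminding staff about annual HIPAA training.",
    max_tokens=150,
    temperature=0.7,  # controls how varied the generated text is
)

print(response.choices[0].text.strip())
```

The quality of the output depends almost entirely on the prompt, which is precisely what makes such tools attractive for both legitimate and illegitimate writing tasks.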

Given ChatGPT's remarkable popularity, security researchers began probing its capabilities to determine how easily the tool could be put to malicious use. Several researchers found that, despite terms of use prohibiting potentially harmful applications, the chatbot could be used to create persuasive phishing emails free of the grammatical and spelling errors that often give such messages away. ChatGPT and other AI tools can therefore be used for phishing and social engineering, opening these attacks to a much wider range of attackers while also making them more effective.

One of the biggest concerns is the use of AI tools to accelerate malware development. IBM researchers demonstrated the possibility with an AI-powered proof of concept called DeepLocker, which combines highly targeted and evasive attack techniques that allow the malware to conceal its intent until it reaches a specific victim. The malicious payload is unleashed only when the AI model identifies the target through indicators such as geolocation or facial and voice recognition.

Asking ChatGPT outright to compose a phishing email or write malware results in a refusal, since such requests violate its terms of use; however, a series of seemingly innocent requests can achieve the same ends. Check Point researchers demonstrated that a complete infection flow can be created with ChatGPT. They used ChatGPT to write a persuasive phishing email impersonating a hosting company to deliver a malicious payload, used OpenAI's code-writing model, Codex, to produce VBA code for an Excel attachment, and also had Codex generate a fully functional reverse shell. Check Point's Threat Intelligence Group Manager, Sergey Shykevich, said that threat actors with minimal technical expertise could build malicious tools using ChatGPT, while sophisticated cybercriminals could make their day-to-day operations far more efficient, for example by generating the various components of an infection chain.
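To make the mechanics concrete without reproducing anything harmful, the sketch below sends a deliberately benign code-writing prompt through the same completions endpoint Codex was exposed on; the model name and parameters are assumptions based on OpenAI's public documentation of the period, not Check Point's actual queries:

```python
# Illustrative only: code generation through OpenAI's Codex model,
# shown with a deliberately harmless task (email-format validation).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

response = openai.Completion.create(
    model="code-davinci-002",  # assumed Codex model name of the period
    prompt="# A Python function that checks whether a string looks like an email address\n",
    max_tokens=120,
    temperature=0,  # deterministic output suits code generation
)

print(response.choices[0].text)
```

The dual-use problem is evident here: the mechanism is identical whether the prompt asks for an email validator or, as in the Check Point research, components of an infection chain; the only difference is the request itself.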

Hackers are already using OpenAI's tools to create malware. One hacker used them to write a multi-layer Python encryption/decryption script that could readily serve as ransomware, and another developed a data stealer capable of searching for, copying, compressing, and exfiltrating sensitive data. While AI systems have many benefits, these tools will inevitably be put to malicious use. At present, the cybersecurity community has not developed mitigations or defenses against the use of these tools for malware development, and stopping their misuse may not be possible.

About Christine Garcia
Christine Garcia is the staff writer at Calculated HIPAA. Christine has several years of experience writing about healthcare sector issues, with a focus on compliance and cybersecurity, and has developed in-depth knowledge of HIPAA regulations. You can contact Christine at [email protected] or follow her on Twitter at https://twitter.com/ChrisCalHIPAA