Cybercriminals turn to ChatGPT for help in their nefarious activities

News Desk

At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. 

However, ChatGPT has also added a new twist to the modern cyber threat landscape, as it quickly became apparent that its code-generation capabilities can help less-skilled threat actors launch cyberattacks with little effort.

Check Point Research’s (CPR) first blog on this topic showed how ChatGPT could be used to create a full infection chain, from crafting a convincing spear-phishing email to running a reverse shell that accepts commands in English. The question this raises is whether it is only a hypothetical threat, or whether there are already individuals or groups using OpenAI technologies for malicious activities.

Cybercriminals are already leveraging OpenAI to develop malicious tools, according to CPR’s review of several major underground hacking communities. As expected, some of these cases made it abundantly clear that many of the cybercriminals using OpenAI have little or no technical expertise. Although the tools shown in these cases are basic, it is only a matter of time before more sophisticated threat actors refine the way they employ AI-based tools for malicious purposes.

Case 1 – Creating an Infostealer

A thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum on December 29, 2022. The publisher of the thread claimed he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications about common malware. He later shared the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, and uploads them to a hardcoded FTP server.


Figure 1 – Cybercriminal showing how he created an infostealer using ChatGPT

Examination of the script confirms the statements made by the cybercriminal. This malware is a basic stealer that looks for 12 commonly used file types (e.g. MS Office documents, PDFs, and images) on the system. If any relevant files are detected, the malware copies them to a temporary directory, compresses them into a zip file, and sends them over the internet. It’s worth mentioning that the actor did not take measures to encrypt or transmit the files securely, so the files could potentially be accessed by third parties.

The second example this actor produced with ChatGPT is a straightforward Java snippet. It downloads PuTTY, a well-known SSH and telnet client, and then runs it covertly on the machine using PowerShell. Naturally, this script could be modified to download and execute any software, including common malware families.


Figure 2 – Proof that he created a Java program that downloads PuTTY and runs it using PowerShell

This threat actor’s prior forum activity includes sharing several scripts, such as automation of the post-exploitation phase and a C++ program that attempts to phish for user credentials. In addition, he actively shares cracked versions of SpyNote, an Android RAT. Overall, this individual appears to be a technically oriented threat actor, and the purpose of his posts is to show less technically capable cybercriminals how to use ChatGPT for malicious purposes, with real examples they can immediately use.

Case 2 – Creating an Encryption Tool

On December 21, 2022, a threat actor dubbed USDoD posted a Python script, which he emphasized was the first script he ever created.


Figure 3 – Cybercriminal dubbed USDoD posts a multi-layer encryption tool

When another cybercriminal commented that the style of the code resembled OpenAI code, USDoD confirmed that OpenAI gave him a “nice [helping] hand to finish the script with a nice scope.”


Figure 4 – Confirmation that the multi-layer encryption tool was created using OpenAI

Analysis confirmed that the script indeed performs cryptographic operations; specifically, it is a mix of signing, encryption and decryption functions. At first glance, the script seems benign, but it implements a variety of functions:

  • The first part of the script generates a cryptographic key (specifically, an elliptic-curve key on the curve ed25519) that is used for signing files.
  • The second part of the script includes functions that use a hard-coded password to encrypt files on the system, applying the Blowfish and Twofish algorithms concurrently in a hybrid mode. These functions allow the user to encrypt all files in a specific directory, or a given list of files.
  • The script also uses RSA keys and certificates stored in PEM format, MAC signing, and the blake2 hash function to compare hashes.

It is important to note that the script also includes all of the decryption functions corresponding to the encryption routines. The script has two primary functions: one encrypts a single file and appends a message authentication code (MAC) to the end of the file, and the other encrypts a hardcoded path and decrypts a list of files it receives as an argument.
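To make the building blocks above more concrete, here is a minimal, benign sketch of two of the listed components: generating an ed25519 key to sign a file, and computing a blake2 hash for integrity comparison. It uses the standard hashlib module and the widely available Python cryptography package; it is an illustration only, not the actor’s script, and the file name is a placeholder. The symmetric encryption and MAC routines the actor combined with these follow the same library-driven pattern and are omitted here.

# Benign sketch of two components described above: ed25519 file signing and
# blake2 hashing. This is NOT the actor's script; "example.txt" is a placeholder.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def blake2_digest(path: str) -> str:
    """Return the blake2b hex digest of a file, e.g. for integrity comparison."""
    h = hashlib.blake2b()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def sign_file(path: str):
    """Generate an ed25519 key pair and sign the file's contents."""
    private_key = Ed25519PrivateKey.generate()
    with open(path, "rb") as fh:
        data = fh.read()
    return private_key.public_key(), private_key.sign(data)

if __name__ == "__main__":
    public_key, signature = sign_file("example.txt")    # placeholder file
    with open("example.txt", "rb") as fh:
        public_key.verify(signature, fh.read())          # raises InvalidSignature if tampered
    print("blake2b:", blake2_digest("example.txt"))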

There are benign uses for all of the aforementioned functionality. However, this script can easily be modified to encrypt a user’s machine without any user interaction. For instance, once the remaining script and syntax issues are fixed, the code could be turned into ransomware.

Although it appears that USDoD is not a software developer and has limited technical expertise, he is a well-known and active participant in the underground community. USDoD is involved in various illegal activities such as selling access to hacked companies and stolen databases. One notable example is the reportedly leaked InfraGard database that USDoD recently shared.


Figure 5 – USDoD’s previous illicit activity, which involved publication of the InfraGard database

Case 3 – Using ChatGPT for Fraud Activity

Another example of the use of ChatGPT for fraudulent activity was posted on New Year’s Eve of 2022, and it demonstrated a different type of cybercriminal activity. While our first two examples focused more on the malware-oriented use of ChatGPT, this example shows a discussion with the title “Abusing ChatGPT to create Dark Web Marketplaces scripts.”

This thread showcases how the cybercriminal demonstrates the ease of building a Dark Web marketplace using ChatGPT. The marketplace’s primary role in the underground illegal economy is to provide a platform for automated trading of illegal or stolen items such as stolen accounts, payment cards, malware, drugs and ammunition, with all transactions done in cryptocurrency. To demonstrate how ChatGPT can be used for these purposes, the cybercriminal has shared a piece of code that utilizes a third-party API to obtain current cryptocurrency prices (Monero, Bitcoin and Ethereum) as part of the Dark Web marketplace’s payment system.
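The screenshot does not show which price service the snippet called; as a rough sketch of the same idea, the code below queries CoinGecko’s public “simple price” endpoint for Monero, Bitcoin and Ethereum. The choice of endpoint and coin identifiers is an assumption made for illustration, not a detail taken from the actor’s code.

# Rough sketch of a third-party price lookup like the one described above.
# CoinGecko's public API is used here purely as an illustrative example;
# it is not necessarily the service the actor's code called.
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def fetch_prices(coins=("monero", "bitcoin", "ethereum"), vs_currency="usd"):
    """Return current spot prices for the given coins from a public API."""
    params = {"ids": ",".join(coins), "vs_currencies": vs_currency}
    response = requests.get(COINGECKO_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"bitcoin": {"usd": 43000.0}, ...}

if __name__ == "__main__":
    for coin, quote in fetch_prices().items():
        print(f"{coin}: {quote['usd']} USD")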


Figure 6 – Threat actor using ChatGPT to create Dark Web marketplace scripts

Several threat actors also opened discussions in additional underground forums about how to use ChatGPT for fraudulent schemes. Most of this discussion centred on generating random art with another OpenAI technology (DALL·E 2) and selling it online on platforms such as Etsy. Another topic discussed was how to create an e-book or a short chapter on a specific topic (using ChatGPT) and sell the content online.

Figure 7 – Multiple threads in underground forums on how to use ChatGPT for fraud activity

Summary

It’s still too early to say whether ChatGPT will replace other popular tools as the preferred choice among Dark Web users. However, the cybercriminal community has already shown significant interest in this latest development and is embracing it to produce malicious programmes. CPR will continue to monitor this activity throughout 2023.

Lastly, perhaps the most effective way to learn about potential ChatGPT abuse is to ask the chatbot itself. When queried about the ways it could be abused, it returned an intriguing response.

Figure 8 – ChatGPT’s response when asked about the ways it could be abused
