How Cybercriminals Are Using ChatGPT to Create Malware

ChatGPT is being used by cybercriminals to create malware.

According to cybersecurity specialists, cybercriminals have begun leveraging OpenAI's artificially intelligent chatbot ChatGPT to quickly build hacking tools. Scammers are also testing ChatGPT's ability to build other chatbots designed to impersonate young women in order to ensnare targets, according to one specialist who monitors criminal forums.

Many early ChatGPT users worried that the app, which went viral in the days following its December release, could write malicious code capable of spying on users' keystrokes or encrypting their data. According to a report from Israeli security firm Check Point, underground criminal forums have now caught on. In one forum post reviewed by Check Point, a hacker who had previously distributed Android malware showcased code, developed with ChatGPT, that stole files of interest, compressed them, and sent them across the internet. The same poster demonstrated another tool that planted a backdoor on a computer and could upload further malware to an infected machine.

Another user on the same forum posted Python code that could encrypt files, claiming that OpenAI's app had helped them build it; they said it was the very first script they had ever written. Such code can be used for entirely innocuous purposes, Check Point's analysis noted, but it could also "simply be updated to encrypt someone's system fully without any user intervention," similar to how ransomware works. Check Point highlighted that the same forum member had previously offered access to hacked enterprise servers and stolen data.

One user also considered "abusing" ChatGPT by having it help design features for a dark web marketplace similar to Silk Road or AlphaBay. As a proof of concept, the user showed how easily the chatbot could create an app that monitored cryptocurrency values for a hypothetical payment system; a sketch of what such a script might look like appears below.

Not all of the abuse involves malicious code. According to Alex Holden, head of cyber intelligence firm Hold Security, dating scammers are also using ChatGPT to construct convincing personas. "They want to construct chatbots to impersonate largely girls in order to advance in talks with their marks," he explained. "They're attempting to automate idle chit-chat." At the time of publication, OpenAI had not responded to a request for comment.
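To put the marketplace example in perspective, here is a minimal sketch of the kind of harmless price-monitoring script described above. It is not the forum user's actual code; the CoinGecko endpoint, the coin list, and the one-minute polling interval are all illustrative assumptions.

# A hypothetical sketch of a cryptocurrency price monitor of the sort the
# forum user asked ChatGPT to produce. The data source (CoinGecko's public
# API) and the coins tracked are assumptions, not details from the post.
import time
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def fetch_prices(coins=("bitcoin", "ethereum"), currency="usd"):
    """Return the latest spot prices for the given coins."""
    resp = requests.get(
        COINGECKO_URL,
        params={"ids": ",".join(coins), "vs_currencies": currency},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"bitcoin": {"usd": 16850.12}, ...}

if __name__ == "__main__":
    while True:
        for coin, quote in fetch_prices().items():
            print(f"{coin}: ${quote['usd']:,.2f}")
        time.sleep(60)  # poll once a minute

On its own, this is an ordinary API-polling script; the concern Check Point raises is how readily such building blocks can be chained into a larger illicit system.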

While the ChatGPT-coded tools appeared "quite rudimentary," Check Point said it was only a matter of time before more "skilled" hackers found ways to exploit the AI. According to Rik Ferguson, vice president of security intelligence at American cybersecurity firm Forescout, ChatGPT does not yet appear capable of coding anything as complex as the major ransomware strains seen in significant hacking incidents in recent years, such as Conti, infamous for its use in the breach of Ireland's national health system. Even so, Ferguson said, OpenAI's tool will lower the barrier to entry for newcomers to the criminal market by producing more basic but still potent malware.

He also expressed concern that, rather than being used to write code that steals victims' data, ChatGPT could be used to help build websites and bots that trick users into handing over their information. It has the potential to "industrialize the design and personalization of harmful web sites, highly-targeted phishing attacks, and social engineering-based scams," according to Ferguson. Check Point threat intelligence expert Sergey Shykevich told Forbes that ChatGPT will be a "wonderful tool" for Russian hackers who don't speak English to craft legitimate-looking phishing emails. As for safeguards against criminal use of ChatGPT, Shykevich said they would eventually, and "sadly," have to be enforced through regulation. OpenAI has implemented filters that flag obvious requests to build malware with policy-violation notifications, but hackers and journalists have found ways around those safeguards. Firms such as OpenAI, Shykevich said, may ultimately have to be legally compelled to train their AI to detect such abuse.
