Security researchers have reported that both experienced and novice cybercriminals are using ChatGPT to create hacking tools and code.
In one such instance, Israeli security firm Check Point discovered a thread on a well-known underground hacking forum in which a hacker claimed to be testing the famous AI chatbot to "recreate malware strains".
The hacker later compressed the Android malware created with ChatGPT and distributed it across the internet. According to Forbes, the spyware is capable of stealing files of interest.
The same hacker also demonstrated another program that could install a backdoor on a computer, allowing the infected PC to download additional malware.
In their analysis of the problem, Check Point noticed that some cybercriminals were using ChatGPT to write their first scripts. On the aforementioned forum, another user uploaded Python code that he said could encrypt files and had been created with ChatGPT, claiming that the hacking tools and code were the first of their kind.
While such code can be employed for legitimate purposes, Check Point warned that it could "simply be updated to encrypt someone's PC without any user intervention."
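Check Point's write-up does not publish the script itself, but a minimal, deliberately insecure sketch illustrates the kind of symmetric encrypt/decrypt routine such a tool implements. Everything below is illustrative, not the actual forum code, and the toy XOR keystream is not a secure cipher; a real tool would use a vetted library.

```python
# Illustrative sketch only: a toy symmetric file-encryption routine of the
# kind described in the report. The XOR-keystream construction here is NOT
# cryptographically secure and is shown purely to explain the mechanism.
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (toy construction)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        # Hash the key with a counter to extend the stream in 32-byte blocks.
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])


def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


plaintext = b"important file contents"
ciphertext = xor_crypt(plaintext, b"secret-key")
# The same operation with the same key restores the original bytes.
assert xor_crypt(ciphertext, b"secret-key") == plaintext
```

The point Check Point makes is visible in the structure: once a script can transform file bytes like this, pointing it at a victim's directories and withholding the key turns "benign" encryption code into ransomware.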
Although ChatGPT-coded hacking tools seemed "quite rudimentary," the security firm emphasized that it is "just a matter of time until more sophisticated threat actors modify the way they exploit AI-based hacking tools for harm."
In a third instance of fraudulent ChatGPT use detected by Check Point, a hacker demonstrated how the AI chatbot could be used to set up a Dark Web marketplace. The hacker revealed that he had used ChatGPT to develop a piece of code that fetches the most recent Bitcoin prices from a third-party API for use in the marketplace's payment mechanism.
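The report does not identify the API or the code, but a hedged sketch of such a price-fetching helper might look like the following. The endpoint and response shape here follow the public CoinGecko "simple price" API as an assumed example; the function and variable names are all illustrative.

```python
# Hedged sketch: fetching the latest Bitcoin price from a third-party API.
# The CoinGecko endpoint below is an assumed example of such an API, not
# the one from the forum post; error handling is kept minimal.
import json
from urllib.request import urlopen

COINGECKO_URL = (
    "https://api.coingecko.com/api/v3/simple/price"
    "?ids=bitcoin&vs_currencies=usd"
)


def parse_btc_price(payload: str) -> float:
    """Extract the USD price from a CoinGecko-style JSON response."""
    return float(json.loads(payload)["bitcoin"]["usd"])


def latest_btc_price(url: str = COINGECKO_URL) -> float:
    """Fetch and parse the current price (requires network access)."""
    with urlopen(url, timeout=10) as resp:
        return parse_btc_price(resp.read().decode())


# Offline example of the response shape the parser expects:
sample = '{"bitcoin": {"usd": 43250.12}}'
print(parse_btc_price(sample))  # 43250.12
```

As the example suggests, the "payment mechanism" code is unremarkable on its own: a handful of lines gluing a public price API to a checkout flow, which is precisely why such snippets are easy for inexperienced actors to produce with ChatGPT.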
The creator of ChatGPT, OpenAI, has put in place several safeguards that block blatant requests for the AI to create malware. However, the chatbot has come under even greater scrutiny since security researchers and journalists discovered that it could produce error-free, grammatically accurate phishing emails.
OpenAI did not immediately respond to a request for comment.
"Cybercriminals are attracted to ChatGPT. Recently, there has been evidence that hackers are beginning to utilize it to create harmful malware. Given that ChatGPT provides hackers with a solid starting point, it can speed up the process," said Sergey Shykevich, manager of Check Point's Threat Intelligence Group.
ChatGPT can be put to both benign and malicious use; for example, it can help engineers write code.
A threat actor submitted a Python script on December 21, 2022, emphasizing that it was the first script he had ever written.
When another cybercriminal observed that the style of the code resembled OpenAI code, the hacker acknowledged that OpenAI gave him a "good (helping) hand to finish the script with a great scope".
According to the research, this could indicate that cybercriminals with little to no programming experience may use ChatGPT to create dangerous tools, growing into full-fledged cybercriminals with real technical expertise.
Even though the tools Check Point analyzed are quite simple, Shykevich asserted that it won't be long before more experienced threat actors improve how they employ AI-based tools.
The creator of ChatGPT, OpenAI, is seeking funding at a valuation of close to US$30 billion.
Microsoft has previously invested US$1 billion in OpenAI and is now promoting ChatGPT-powered applications for handling practical tasks.