Artificial intelligence, along with machine learning, addresses many cybersecurity challenges, while every good apocalyptic sci-fi film and book presents a dystopian future for the field.
Beyond the inclusion of AI in many vendor slide decks as a way of positioning next-generation technology and a more automated, advanced approach, the concept is now becoming mainstream. With the launch of OpenAI's ChatGPT, a platform that enables conversational dialogue, answering questions, engaging in discussion and providing detailed responses, the topic of dystopia burst into public consciousness in a whole new way at the end of 2022.
This advancement beyond a glorified Google search has sparked the interest of students seeking to expedite essay responses without the time or need to read source materials, followed by teachers seeking to automate marking in similar fashion.
Anyone who has been frustrated by a website chatbot while trying to get help or an answer to an even vaguely complex question found solace in conversing with a computer. Without a doubt, it has opened up a world of possibilities for AI to influence how we as individuals interact with technology daily, rather than just being a mysterious black box powering systems ranging from weather forecasting to space rockets.
Inevitably, the potential impact on cybersecurity quickly became a key topic of discussion on both sides. For attackers, there was an instant opportunity to transform basic, often childlike phishing text into more professional prose. It also offers the opportunity to automate engagement with multiple potential victims attempting to escape the ransomware trap they have fallen into.
Could this also provide an opportunity for defenders to revolutionise processes such as code verification or phishing education? It is still in the early stages, and it certainly has flaws, but it has expanded the debate about how AI can change the cybersecurity industry.
"It's a terrifyingly good system," says Dave Barnett, Cloudflare's head of secure access service edge (SASE) and email security for Europe, the Middle East, and Africa (EMEA).
Serious concerns
Barnett, however, emphasises the serious concerns that have arisen. "The information security community should spend more time considering the implications. It used to be fairly simple to recognise when we were being duped by 419 scams, delivery payment SMS or business email compromise, because they all appeared fake to humans," he says.
"Could artificial intelligence deceive us? We must also be cautious of certain data security risks, such as where the data goes, who controls it and who processes it. Humans are naturally inquisitive, so if we start talking to computers like they're people, we're going to share information we shouldn't. Finally, could this be a solution to the IT skills shortage? If OpenAI can write code in a long-forgotten language, it will undoubtedly be of great assistance."
Ronnie Tokazowski, principal threat advisor at Cofense, says the chance to be creative with the platform was something to be enjoyed. "Creating AI-generated rap lyrics about world peace and UFO disclosures is cheeky and fun; however, it is possible to trick the AI into giving you the information you're looking for."
Including safeguards in any application build is critical, and security by design for AI will always be preferable to a post-build security wrapper. That appears to be the developers' goal, as "the AI's intent is good and does not want to create phishing simulations". Asking in various ways (such as removing the word "phishing") still failed to yield a positive response. Being inventive, however, produced results.
"ChatGPT also provides overall advice about verifying and staying safe before using any gift card as a form of payment," he says.