Generative AI is a potent tool that can be used for good or ill in the digital sphere. When ChatGPT launched and the technology shot to fame, experts began to consider what it would mean for cybersecurity. Happily, we have not yet seen the technology used in significant attacks by bad actors, but security professionals have been showcasing clever applications of generative AI to strengthen defenses.
PassGPT, a novel model based on OpenAI's GPT-2 architecture that can generate and guess passwords, was created by a team of researchers from ETH Zürich, the Swiss Data Science Center, and SRI International in New York. The model was trained on millions of credentials exposed in past breaches, most notably the infamous RockYou leak. PassGPT reportedly guesses 20% more previously unseen passwords than the most advanced GAN-based models.
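To make the approach concrete, here is a minimal sketch of what fine-tuning GPT-2 on a password list might look like with the Hugging Face transformers library. It is not PassGPT's actual training pipeline (the researchers use their own tokenization and hyperparameters, and PassGPT works at the character level, whereas stock GPT-2 uses subword tokens); "passwords.txt" is a hypothetical placeholder for a leaked-credential corpus.

```python
# Minimal sketch: fine-tuning stock GPT-2 on a newline-separated password
# list. Not PassGPT's real pipeline; "passwords.txt" is a placeholder.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# One password per line; each line becomes a short training sequence.
dataset = load_dataset("text", data_files={"train": "passwords.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=32)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="passgpt-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=64),
    train_dataset=tokenized,
    # mlm=False gives the standard causal (next-token) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```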
Although some of its capabilities may sound frightening, PassGPT is intended to help users build stronger, more complex passwords and to predict likely passwords from partial inputs. The model employs a method called progressive sampling, which constructs passwords one character at a time, making the generated passwords more difficult to decipher. It also outperforms earlier models built on generative adversarial networks (GANs), in which two competing networks train against each other, one producing fake samples and the other trying to tell them apart from real ones.
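The core of progressive sampling is a simple loop: the model emits one token at a time, each conditioned on everything sampled so far. The sketch below illustrates that loop with stock GPT-2, so the tokens are subwords rather than the single characters PassGPT uses; the prompt string and the length cap are arbitrary choices for illustration.

```python
# Sketch of progressive (left-to-right) sampling: draw one token at a
# time, each conditioned on the sequence generated so far. PassGPT does
# this per character; stock GPT-2 shown here works on subword tokens.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer("password: ", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(12):  # cap the length of the generated guess
        logits = model(ids).logits[0, -1]      # distribution over next token
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, 1)  # sample rather than argmax
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Sampling from the full distribution at each step, rather than always taking the most likely token, is what lets the model enumerate many plausible candidates instead of repeating one guess.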
According to Javi Rando, one of PassGPT's authors, the model can also calculate the likelihood of any given password and analyze its strengths and weaknesses. He added that the model can flag passwords that look strong by conventional standards yet are easy to predict with generative approaches, that it handles passwords in multiple languages, and that it can generalize to passwords not included in its training data.
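Scoring a password this way amounts to summing the log-probability of each of its tokens under the model. The sketch below shows one way that could be done with a causal language model; it again uses stock GPT-2 as a stand-in for PassGPT, and the sample passwords are arbitrary.

```python
# Sketch: scoring a candidate password by its log-likelihood under a
# causal LM. More negative scores mean the model finds the string
# harder to predict, i.e. a "stronger" password against this model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(password: str) -> float:
    ids = tokenizer(password, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean per-token
        # cross-entropy; scale by the number of predicted tokens to get
        # the total log-probability of the string.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

for pw in ["password123", "Tr0ub4dor&3", "x!9#Qv@2Lp"]:
    print(pw, round(log_likelihood(pw), 2))
```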
PassGPT illustrates how LLMs can be adapted to different applications and domains using varied data sources. And it is not the first time researchers on the right side of the law have trained generative AI on data of illicit origin. Previously, researchers used dark web data to train a model dubbed DarkBERT to find ransomware leak sites and monitor illicit information sharing.