Major Chatbot Data Leaks: Star Health Telegram Data Breach, Samsung Ban, and More

Telegram, Gemini, ChatGPT AI chatbots under fire for data breaches.
Major Chatbot Data Leaks: Star Health Telegram Data Breach, Samsung Ban, and More
Published on

The hacker has leaked the details of Star Health's sensitive customer data using Telegram chatbots, which has been demonstrated as the most shocking incident.

This is one of the cases that highlights the growth of the misuse of chatbots in data exploitation, which has been happening in previous cases involving companies and respective industries.

Data Exposure by Chatbots in Telegram by Hacker Ends Star Health's Business

A security researcher, Jason Parker, recently informed Reuters of the existence of Telegram chatbots available for sale that sold private customer data from Star Health. The exposed personal information includes names and addresses, with health details, and even sensitive medical reports. Some documents dated as recently as July 2024.

Responding to this, Star Health claimed there was no "widespread compromise" and "Sensitive customer data remains secure." But the popular news site Reuters was able to download hundreds of files, which demonstrated details about the customers' medical diagnoses, ID cards, policy numbers, and so much more. The hacker behind the chatbots, known by the alias xenZen, declared the possession of information concerning 31 million customers over 7.24 terabytes available piecemeal basis through the chatbots and for sale in bulk.

Star Health Reactions Towards the Data Breach

Star Health replied that it was appropriately informed regarding the data breach on August 13, 2024. The company contacted the local authorities, and efforts were made to notify the Cybercrime Department of Tamil Nadu and the Federal Cybersecurity Agency of India, CERT-In. "Privacy is held in high regard, and customers are cooperating with law enforcement to find a way out of this incident," states Star Health.

While this is a guarantee, the breach has raised numerous concerns for the policyholders. Sandeep TS is one such policyholder who found medical records relating to the diagnosis and blood tests of his one-year-old daughter leaked through the chattiest of bots. Tax account, ultrasound imaging, and other pertinent documents related to Pankaj Subhash Malhotra also went in for exposure. Until Reuters contacted them, none of the customers had heard of this case.

Leaks Through Chatbots: Few Earlier Cases

1. A glitch in ChatGPT Leaks User Chat Titles

In March 2023, a glitch enabled users to view the chat titles from other users using ChatGPT. Though OpenAI CEO Sam Altman denied any access to content from chats, it still irked users about privacy since the glitch printed out titles like "Chinese Socialism Development." Within a couple of hours, OpenAI disabled the chatbot and promised a postmortem of the technical findings.

2. Google Gemini Advanced Data Leak

It was discovered that Google's Gemini AI has a weakness especially when used with Google Workspace or Gemini API. The chatbot accidentally leaked private data such as passwords if asked for in a certain way. But when asked directly, it refused to give away the passphrase. However, indirect questions made it leak sensitive information. This weakness mattered a lot given how safe Google AI is about data.

3. GPT-3.5 Turbo Leaks Personal Information

Recently, at the close of 2023, it was found that researchers could extract personal email addresses, including 30 from the New York Times employees, from GPT-3.5 Turbo. This break allowed the model's restrictions on queries about privacy to be bypassed, but above all, it immediately became clear that ChatGPT could leak private information if sufficient adjustments were made. This implies that alarms have already been sounded as to how such generative AI tools may leak confidential information.

4. OpenAI Data Leak and Hacker Attack

The second case saw OpenAI taken to task over a hacking incident where a hacker stole the conversations of users whose proposals and presentations of sensitive business were left visible for anyone to see. Despite the great protocol followed by the users, it was traced back to Sri Lanka, marking the second major leak since the security bug that exposed the payment information of OpenAI in its incident of March 2023.

5. Samsung Bans ChatGPT Over Sensitive Data Leak

Samsung banned the chatbot tool from accessing its internal files after sensitive engineering codes from one of its employees escaped. The company thus reviewed and fortified security practices for the use of AI systems because of the prospect of data leakage with confidential information to some AI tools. It expressed the growing potential for raising alarm among organizations on the uses of AI tools in the corporate setups and data privacy.

Related Stories

No stories found.
logo
Analytics Insight
www.analyticsinsight.net