When ChatGPT and similar chatbots first became widely available, the concern in the cybersecurity world was how the technology could be used to launch cyberattacks. Indeed, it didn’t take long before threat actors figured out how to bypass the safety checks and use ChatGPT to write malicious code.
It now seems that the tables have turned. Instead of attackers using ChatGPT to cause cyber incidents, the technology itself has become the target. OpenAI, which developed the chatbot, confirmed a data breach caused by a vulnerability in an open-source library the service depends on, according to Security Week. The breach took the service offline until it was fixed.
An Overnight Success
ChatGPT’s popularity was evident from its release in late 2022. Everyone from writers to software developers wanted to experiment with the chatbot. Despite its imperfect responses (some of its prose was clunky or clearly plagiarized), ChatGPT quickly became the fastest-growing consumer app in history, reaching over 100 million monthly users by January 2023. Within a month of its release, approximately 13 million people were using the AI technology daily. By comparison, TikTok, another extremely popular app, took nine months to reach similar user numbers.
One cybersecurity analyst compared ChatGPT to a Swiss Army knife, saying that the technology’s wide variety of useful applications is a big reason for its rapid early popularity.
The Data Breach
Whenever an app or technology becomes popular, it’s only a matter of time until threat actors target it. In the case of ChatGPT, the exposure came via a vulnerability in the Redis open-source library, which allowed users to see the chat history of other active users.
Open-source libraries are used “to develop dynamic interfaces by storing readily accessible and frequently used routines and resources, such as classes, configuration data, documentation, help data, message templates, pre-written code and subroutines, type specifications and values,” according to a definition from Heavy.AI. OpenAI uses Redis to cache user information for faster recall and access. Because thousands of contributors develop and access open-source code, it’s easy for vulnerabilities to open up and go unnoticed. Threat actors know this, which is why attacks on open-source libraries have increased by 742% since 2019.
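To make that caching pattern concrete, below is a minimal sketch of user-scoped caching with the redis Python client. The key scheme (chat:history:<user_id>), the get_chat_history and load_history_from_database functions and the five-minute TTL are all illustrative assumptions, not OpenAI’s actual code.

```python
import json

import redis


def load_history_from_database(user_id: str) -> list:
    """Stand-in for the real datastore lookup (hypothetical)."""
    return []


# Connection settings are placeholders for this sketch.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def get_chat_history(user_id: str) -> list:
    """Return one user's chat history, preferring the Redis cache."""
    cache_key = f"chat:history:{user_id}"  # key scoped to a single user

    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    history = load_history_from_database(user_id)
    r.setex(cache_key, 300, json.dumps(history))  # cache for five minutes
    return history
```

Note that correctly scoped keys alone could not have prevented the incident: the flaw was reportedly in how the underlying client library handled connections, so a request could receive cached data belonging to another user even though the application asked for the right key.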
In the grand scheme of things, the ChatGPT exploit was minor, and OpenAI patched the bug within days of discovery. But even a minor cyber incident can create a lot of damage.
The chat-history exposure also turned out to be only the surface. As OpenAI’s researchers dug deeper, they discovered that the same vulnerability was likely responsible for exposing payment information for a few hours before ChatGPT was taken offline.
“It was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number and credit card expiration date. Full credit card numbers were not exposed at any time,” OpenAI said in a release about the incident.
AI, Chatbots and Cybersecurity
The data leakage in ChatGPT was addressed swiftly and appears to have caused little damage, with impacted paying subscribers making up less than 1% of its users. However, the incident could be a harbinger of the risks facing chatbots and their users in the future.
Already there are privacy concerns surrounding the use of chatbots. Mark McCreary, the co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN that ChatGPT and chatbots are like the black box in an airplane. The AI technology stores vast amounts of data and then uses that information to generate responses to questions and prompts. And anything in the chatbot’s memory becomes fair game for other users.
For example, chatbots can record a single user’s notes on any topic and then summarize that information or search for more details. But if those notes include sensitive data (an organization’s intellectual property or sensitive customer information, for instance), that data enters the chatbot library, and the user no longer has control over it.
Tightening Restrictions on AI Use
Because of privacy concerns, some businesses and even entire countries are clamping down. JPMorgan Chase, for example, has restricted employees’ use of ChatGPT, partly due to the company’s controls around third-party software and applications and partly out of concern for the security of financial information entered into the chatbot. And Italy cited the data privacy of its citizens in its decision to temporarily block the application nationwide, with officials pointing to compliance with GDPR as the central concern.
Experts also expect threat actors to use ChatGPT to create sophisticated, realistic phishing emails. Gone are the poor grammar and odd phrasing that have been the tell-tale signs of a phishing scam. Now, chatbots can mimic native speakers with targeted messages. ChatGPT is also capable of seamless language translation, which will be a game-changer for foreign adversaries.
A similarly dangerous tactic is using AI to create disinformation and conspiracy campaigns, and the implications of this usage could go beyond cyber risks. Researchers used ChatGPT to write an op-ed, and the result was similar to content found on InfoWars or other well-known websites peddling conspiracy theories.
OpenAI Responding to Some Threats
Each evolution of chatbots will create new cyber threats, whether through more sophisticated language abilities or through sheer popularity, making the technology both a prime target and a potential attack vector. To that end, OpenAI is taking steps to prevent future data breaches within the application, offering a bug bounty of up to $20,000 to anyone who discovers unreported vulnerabilities.
However, The Hacker News reported, “the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs.” So it sounds like OpenAI wants to harden the technology against outside attacks but is doing little to prevent the chatbot from being the source of cyberattacks.
ChatGPT and other chatbots are going to be major players in the cybersecurity world. Only time will tell if the technology will be the victim of attacks or the source.
If you are experiencing cybersecurity issues or an incident, contact X-Force to help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.