When ChatGPT gets hacked: How to safeguard your data in the generative AI era

ChatGPT can perform a wide range of tasks. It can compose poems, brainstorm blog ideas, and rapidly produce drafts of everything from short stories to research papers (albeit with dubious levels of accuracy due to its tendency to “hallucinate”). What it can’t do, however, is guarantee that the data you feed into it won’t fall into the clutches of cybercriminals.

Bad actors inevitably target any program or system businesses rely on, probing for cracks in its defenses – and generative AI solutions are no exception. ChatGPT has already experienced security issues: Hackers have used malware to steal credentials for thousands of accounts, capitalizing on the demand for the sensitive information users disclose in their prompts.

If you plan on utilizing ChatGPT or another generative AI tool to transform your business processes, you must also take steps to address the new security vulnerabilities presented by these programs. Here’s everything you should know about the potential data security dangers of AI and what you can do to keep your risk level as low as possible.

The cybersecurity hazards of generative AI

Cybercriminals have already utilized “info stealing” malware to pilfer over 100,000 ChatGPT account credentials and put them up for sale on the dark web, according to the Singapore-based cybersecurity company Group-IB. Since ChatGPT maintains a chat history containing all the prompts and information you’ve entered, a compromised account could potentially expose company secrets.

For example, Samsung has run into issues with employees feeding sensitive information to ChatGPT, according to an April 2023 Gizmodo article. One individual copy-pasted the source code for a faulty semiconductor database, while another submitted proprietary code seeking assistance with fixing equipment, as reported by local Korean media.

In addition to hacking and human error, flaws in the program itself are also cause for concern. In March 2023, OpenAI disclosed that a bug in an open-source library allowed some ChatGPT users to see snippets of other users’ chat histories. The same bug might have exposed payment-related information (including first and last names, email addresses, payment addresses, credit card types, card expiration dates, and the final four digits of users’ cards) for 1.2 percent of ChatGPT Plus subscribers active during a particular nine-hour period. OpenAI has since patched the bug and taken steps to improve its system’s information security.

How to leverage AI without exposing sensitive data

The info-stealing malware that harvests login credentials operates by infecting an end user’s computer via phishing or another method and then collecting data from web browsers, email, instant messaging programs, and other systems and software on the machine, according to Group-IB. Accordingly, to reduce your odds of data exposure via ChatGPT and other generative AI systems, you should take the following preventive steps:

    • Conduct security awareness training to teach your team how to recognize phishing attempts.
    • Teach your staff about best practices for setting strong passwords.
    • Require your team to use a password manager.
    • Implement multi-factor authentication (MFA) for all accounts.

Additionally, you should instruct employees not to submit confidential or proprietary information to ChatGPT and other generative AI programs. Some companies have chosen to warn their staff members about the dangers of sharing information with AI, while others have banned the use of these tools altogether, according to Gizmodo.

If you’d like to explore your options for cybersecurity solutions and services like MFA, awareness training, vulnerability management, and more, we can connect you with leading managed security service providers (MSSPs) in our partner network. With our decades of experience in IT and background in the managed cybersecurity solution space, we can offer expert insights and rapidly pinpoint the best options for your business based on your pain points, requirements, and objectives.

Get started today by calling 877-599-3999 or emailing sales@stratospherenetworks.com to schedule a meeting with our consultants.

Contact Us

We will handle your contact details in line with our Privacy Policy. If you prefer not to receive marketing emails from Stratosphere Networks, you can opt out of all marketing communications or customize your preferences here.