Imagine receiving an email from your boss that seems a little off. Maybe they’re requesting a password or financial information they should already be able to access without your help. You remember your security awareness training on how to spot phishing and scrutinize the message for typos and bad grammar.
There should be obvious signs if the email is a phishing attempt, right? Unfortunately, that’s not necessarily true anymore, thanks to generative AI. Just as businesses have seized the opportunity to bolster content creation efforts with the capabilities of large language models (LLMs), cybercriminals have also jumped at the chance to enhance their phishing campaigns with tools like ChatGPT.
“Hackers are always testing new technologies for ways to subvert it for their own ends, and generative AI is no different,” Gartner Risk & Audit Practice Research Director Ran Xu said in a press release.
That means you must adjust your cybersecurity strategy to account for AI-generated phishing emails. Here’s what you should know about how AI has changed phishing and steps you can take to keep your breach risk low despite supercharged social engineering tactics.
Generative AI phishing: Why it’s harder to spot scams in the ChatGPT era
Even before generative AI entered the picture, phishing attempts were rampant. In 2020, 86 percent of organizations had at least one end user attempt to connect to a phishing site, according to Cisco’s 2021 Cyber Security Threat Trends report.
Before widespread access to AI, however, scammers had to manually write the copy for fraudulent messages and could be limited by time and language skills. Today, they can rely on large language models to instantly generate tons of grammatically flawless text, making it much harder to spot suspicious messages, according to The Wall Street Journal.
Additionally, generative AI can easily take on different tones and personalities. Hackers can task LLMs with crafting an email written as if it’s from your boss or colleague.
Generative AI has subsequently fueled what Xu of Gartner calls “the industrialization of advanced phishing attacks.” Bad actors immediately seized the opportunity to utilize this cutting-edge technology for nefarious purposes after the release of ChatGPT in Q4 2022, with mentions of generative AI on the dark web skyrocketing into the thousands by Q1 2023, according to Bain & Company. The AI revolution is here for criminal enterprises as well as legitimate ones.
How to combat AI-powered phishing attempts
The good news is that cybercriminals aren’t the only ones with generative AI on their side. At Black Hat USA 2023, security researchers presented evidence that LLMs that hadn’t been trained with security data could still identify suspicious emails, according to KnowBe4. Cybersecurity professionals can, therefore, fight fire with fire by flagging phishing attempts with AI.
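To make the "fight fire with fire" idea concrete, here is a minimal sketch of an AI-assisted email triage pipeline: wrap the message in a classification prompt, ask a model for a verdict, and route flagged messages for human review. The `llm_classify` function is a hypothetical stand-in for a real LLM API call from whatever provider you use; a trivial keyword heuristic replaces it here only so the sketch is self-contained and runnable.

```python
def build_prompt(email_body: str) -> str:
    """Wrap an email in a classification prompt for an LLM."""
    return (
        "You are a security assistant. Classify the following email as "
        "PHISHING or LEGITIMATE, and briefly explain why.\n\n"
        f"Email:\n{email_body}"
    )

def llm_classify(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call (e.g., a
    # chat-completion request). A trivial keyword heuristic keeps
    # this sketch self-contained -- a real deployment would send
    # the prompt to a model instead.
    indicators = ("verify your account", "urgent", "wire transfer")
    return "PHISHING" if any(k in prompt.lower() for k in indicators) else "LEGITIMATE"

def flag_email(email_body: str) -> bool:
    """Return True when the message should be routed for human review."""
    return llm_classify(build_prompt(email_body)) == "PHISHING"
```

In practice you would treat the model’s verdict as one signal among several (sender reputation, link analysis, SPF/DKIM results) rather than an automatic block, since classifiers of any kind produce false positives.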
You can also implement other safeguards against phishing messages and data breaches. Here are just a few steps you can take, according to the Cybersecurity and Infrastructure Security Agency (CISA) and our IT experts.
- Enable multi-factor authentication (MFA) to keep hackers out even if they obtain login info.
- Conduct regular security awareness training to ensure your employees know other signs of phishing (e.g., spoofed email addresses).
- Enact a zero-trust approach to security so anyone who accesses your network won’t get far.
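The MFA recommendation above usually comes down to verifying a second factor such as a time-based one-time password. As an illustration of what that verification involves, here is a minimal standard-library sketch of TOTP (RFC 6238, built on the HOTP truncation from RFC 4226); production systems should use a vetted library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int = None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step               # number of elapsed time steps
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_code(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )
```

Even with MFA in place, note that attackers increasingly phish for one-time codes themselves, which is why it belongs alongside the training and zero-trust measures above rather than replacing them.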
The right managed cybersecurity service provider can help you stay up to speed on rapidly changing IT security solutions and threats. With our background as a managed security service provider (MSSP), we can help you modernize your strategy and keep your breach risk low in the face of evolving threats.
Get started today by calling 877-599-3999 or emailing sales@stratospherenetworks.com to connect with our advisors.