An employee at a multinational finance firm received an email that appeared to be from the company’s chief financial officer (CFO) instructing the recipient to participate in a confidential transaction, according to CNN. The worker initially suspected fraud, but a subsequent video call with the CFO and colleagues convinced him to transfer approximately $25.6 million.
However, it turned out that scammers had created the video call with deepfake technology, and the email was fraudulent after all. This is one example of how generative AI, or genAI, has fueled sophisticated scams.
While generative artificial intelligence has made a positive impact on how we live and work in various ways, it also has its drawbacks. Aside from deepfakes, a significant disadvantage of the proliferation of genAI is that cybercriminals can churn out phishing emails faster and without the grammatical and spelling errors that used to serve as tell-tale signs of malicious messages.
If you’re an IT or business leader, that means it’s more difficult than ever to safeguard your organization against scammers and identify AI-generated phishing emails. Here’s what you should know about how genAI has transformed phishing and what you can do to spot and combat AI-powered fraud.
Generative AI phishing: The dark side of large language models
In a May 2024 webinar hosted by Stratosphere Networks and eSentire, John Moretti, a principal solutions architect for eSentire, noted that hackers have seized the opportunities presented by genAI to streamline phishing campaigns and ransomware attacks.
Bad actors can now utilize AI to efficiently generate almost flawless, highly targeted spear phishing messages. The days when you could spot a suspicious overture due to subpar writing are over.
“We actually saw a phishing attack where the email was almost totally legitimate,” Moretti said during the webinar. “There was one character that was off.”
Research has shown that large language models (LLMs) like ChatGPT can completely automate phishing campaigns, lowering the cost of launching these types of criminal exploits by over 95 percent, according to the May 2024 Harvard Business Review article “AI Will Increase the Quantity — and Quality — of Phishing Scams.”
These AI-crafted messages also have a high rate of success: European security awareness training provider SoSafe found that 78 percent of people open AI-authored phishing emails, and 21 percent go further by clicking on links or attachments.
Consequently, genAI has supported an explosion in phishing, with a 4,151 percent increase in malicious emails between ChatGPT’s debut in November 2022 and March 2024, according to The State of Phishing 2024 report from SlashNext.
What are the warning signs of AI-generated phishing lures?
Unfortunately, as previously noted, generative AI’s language skills mean that watching for grammatical and spelling errors will no longer save you from falling for phishing attempts. Still, there are common indicators that a message might not be legitimate.
Here’s what to watch out for, according to the Federal Bureau of Investigation (FBI) and Cybersecurity and Infrastructure Security Agency (CISA).
- Email addresses or links that are slightly off (e.g., “amazan dot com”); see the sketch below for one way to flag these automatically
- Urgent or emotional language
- Suspicious-looking shortened URLs or attachments
- Requests for money or personal/financial information
- Requests for login credentials
- Unfamiliar senders
- Unusual tone or language from familiar senders
If you’re not sure whether a message is legitimate, contact the sender through a channel you already trust, such as a phone number on file, to confirm the request. Delete any phishing emails and block the senders.
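The first item on the list above, addresses and links that are slightly off, is one of the few warning signs that lends itself to a simple automated check. The following Python sketch flags sender domains that sit suspiciously close to, but don’t exactly match, domains you trust. It’s a minimal illustration rather than a production filter, and the trusted-domain list and similarity threshold are assumptions you’d tune for your own environment.

```python
# Minimal sketch: flag sender domains that resemble, but don't match, trusted ones.
# TRUSTED_DOMAINS and the 0.8 threshold are illustrative placeholders.
import difflib

TRUSTED_DOMAINS = {"amazon.com", "yourcompany.com"}

def closest_trusted(domain: str) -> tuple[str, float]:
    """Return the most similar trusted domain and its similarity ratio (0 to 1)."""
    best_match, best_ratio = "", 0.0
    for trusted in TRUSTED_DOMAINS:
        ratio = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if ratio > best_ratio:
            best_match, best_ratio = trusted, ratio
    return best_match, best_ratio

def is_lookalike(sender: str) -> bool:
    """Flag senders whose domain is a near miss for a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain
    _, ratio = closest_trusted(domain)
    return ratio >= 0.8  # "amazan.com" vs. "amazon.com" scores 0.9

print(is_lookalike("billing@amazan.com"))   # True: one character off
print(is_lookalike("support@amazon.com"))   # False: exact trusted domain
```

A real email security gateway does far more than this, but even a basic similarity check catches the “amazan dot com” style of typo-squatting that the FBI and CISA warn about.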
How your business can combat phishing emails fueled by AI
To protect your organization’s IT network and data from AI-fueled phishing campaigns, our IT experts and CISA recommend the following actions:
- Turn on multi-factor authentication (MFA) to block hackers even if they steal login credentials.
- Mandate routine security awareness training for your staff to ensure they have the latest info about spotting suspicious messages.
- Adopt a zero-trust approach to security to minimize the damage in the event of a breach.
You can also fight fire with fire by deploying genAI for email security, according to the TechTarget article “Generative AI is making phishing attacks more dangerous.” However, utilizing an internal AI model to scan incoming emails might not be affordable for all organizations.
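If you want to experiment with that approach, here’s a minimal sketch of the general idea: asking a large language model to score an inbound message for phishing risk. It assumes the OpenAI Python SDK (openai 1.x) and an OPENAI_API_KEY in your environment; the model name, prompt, and risk labels are placeholder choices rather than recommendations from the article, and sending message content to an external API raises the same cost and privacy questions as running an internal model.

```python
# Minimal sketch: ask an LLM to rate an inbound email's phishing risk.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the
# environment. The model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def phishing_risk(subject: str, body: str) -> str:
    """Return the model's LOW/MEDIUM/HIGH risk rating with a one-line reason."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an email security analyst. Rate the phishing risk of "
                    "the following email as LOW, MEDIUM, or HIGH and give one short reason."
                ),
            },
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content

print(phishing_risk(
    "Urgent wire transfer",
    "Please process a confidential payment today and do not discuss it with anyone.",
))
```

In practice, a check like this would sit behind your mail gateway and feed a quarantine or alerting workflow rather than print to the console.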
With our background as a former managed security service provider (MSSP), we can leverage our expertise as well as our extensive partner network to rapidly identify the best managed cybersecurity services and solutions to protect your business from phishing and other threats.
Start today by calling 877-599-3999 or emailing sales@stratospherenetworks.com. You can also jumpstart your search for the best IT security solutions by taking our free assessment.