Generative AI is being manipulated to aid cybercriminals in business email compromise (BEC) attacks, with its ability to instantly generate personalised text being exploited to write convincing phishing emails that are less likely to be flagged as spam. For instance, it can be asked to write in a formal, academic register, producing more sophisticated messages that stand a higher chance of making it through filters, and lowering the barrier to entry for cybercriminals whose first language isn’t English.
Cybercriminals are also sharing tactics to ‘jailbreak’ ChatGPT, tricking the AI into bypassing its built-in restrictions and responding to prompts on illegal or morally questionable subjects. Some hackers are taking things a step further and creating their own versions of ChatGPT: tools designed specifically for cybercrime.
WormGPT is one such tool, having been trained on datasets focused on the creation of malware. When prompted by the article’s author, it wrote a convincing email impersonating a company’s CEO, asking an account manager to urgently pay a fraudulent invoice, exactly the kind of message used in a business email compromise attack. The experiment highlights the danger posed by generative AI tools such as WormGPT.
Why does this matter for businesses?
- To counter the new threats posed by these malicious adaptations and misuses of AI, businesses should ensure employees are trained to spot BEC attacks.
- Organisations should also strengthen their email verification processes so that alerts are raised when external senders claim to be employees of the business, and so that messages containing keywords such as “wire transfer” and “urgent” are automatically flagged, as sketched below.
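By way of illustration, here is a minimal sketch of such a sender-and-keyword check, built on Python’s standard-library email module. The keyword list, the example.com internal domain, and the flag_bec_indicators function name are all illustrative assumptions, not a vetted ruleset; a production deployment would sit in the mail gateway and use rules tuned to the organisation.

```python
import email
import re
from email.utils import parseaddr

# Keywords commonly associated with BEC attempts; this exact list is an
# illustrative assumption, not a vetted production ruleset.
SUSPICIOUS_KEYWORDS = {"wire transfer", "urgent", "invoice", "payment"}

# The organisation's legitimate sending domain; "example.com" is a placeholder.
INTERNAL_DOMAIN = "example.com"


def flag_bec_indicators(raw_message: str) -> list[str]:
    """Return a list of reasons this message warrants manual review."""
    msg = email.message_from_string(raw_message)
    reasons = []

    # An external sender using an internal-looking display name is a classic
    # BEC pattern: the name reads as an employee, but the domain is not ours.
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if display_name and domain and domain != INTERNAL_DOMAIN:
        reasons.append(
            f"external domain '{domain}' with display name '{display_name}'"
        )

    # Collect the plain-text body so the subject and body can be scanned together.
    body = ""
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                body += part.get_payload(decode=True).decode(errors="replace")
    else:
        payload = msg.get_payload(decode=True)
        if payload:
            body = payload.decode(errors="replace")

    # Flag any whole-word keyword match in the subject line or body.
    text = f"{msg.get('Subject', '')}\n{body}".lower()
    for kw in SUSPICIOUS_KEYWORDS:
        if re.search(r"\b" + re.escape(kw) + r"\b", text):
            reasons.append(f"keyword match: '{kw}'")

    return reasons


if __name__ == "__main__":
    # A hypothetical message mimicking the WormGPT-style CEO fraud described above.
    sample = (
        "From: Jane Doe <jane.doe@lookalike-domain.net>\n"
        "Subject: Urgent wire transfer needed today\n\n"
        "Please process the attached invoice immediately.\n"
    )
    for reason in flag_bec_indicators(sample):
        print("FLAG:", reason)
```

Run on the sample message, the sketch flags both the external domain paired with an employee-style display name and the urgency keywords; either signal alone would route the message for human review rather than straight to the recipient.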