Cyberattacks increased by 7% in the first quarter of this year, and cybercriminals' ability to exploit ChatGPT may have contributed to this rise. This article explores whether it's possible to turn the tables and use generative AI to create incident response (IR) plans that successfully mitigate attacks - something that could prove especially helpful in today's economic climate, where budgets are tight.

However, the author quickly discovered that even after feeding ChatGPT information about a fictional organisation's structure and the platforms it used, the IR plan it generated was too generic to be useful. An IR plan needs to set out a concrete, step-by-step approach for a particular organisation, but ChatGPT could only offer a general overview of what an ideal plan should include.
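The article doesn't reproduce the author's prompts, and the experiment was presumably run in the ChatGPT web interface rather than through code. As a rough illustration of the kind of request involved, though, here is a minimal sketch using the OpenAI Python SDK; the organisation profile, model name and prompt wording are all invented for this example.

```python
# Minimal sketch only: an invented organisation profile and a single prompt
# asking for a tailored IR plan, via the OpenAI Python SDK (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical profile of a fictional organisation, stood in for illustration.
org_profile = (
    "Mid-sized UK retailer, ~400 staff. Microsoft 365 for identity and email, "
    "a Shopify storefront, on-premises warehouse systems, a small in-house IT "
    "team and an outsourced SOC."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are an experienced cybersecurity incident response planner.",
        },
        {
            "role": "user",
            "content": (
                "Draft a step-by-step incident response plan tailored to this "
                f"organisation:\n{org_profile}"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Even with this kind of organisational detail supplied, the output described in the article remained a boilerplate outline rather than a plan specific to the business.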

ChatGPT can't move beyond generic responses because it lacks the adaptability and analytical judgement of a real human expert. It might provide more constructive, tailored advice if it were given detailed information about the inner workings of a business - but that information is often sensitive, and handing it over carries its own risks.

As it stands, generative AI is more effective as a tool for aiding cybercriminals than for helping safeguard against them. But its abilities are improving at pace - so this may start to change.

 

Why does this matter for businesses?

  • Developing a response to cybersecurity threats is a critical task for businesses, and one they shouldn't expect to automate anytime soon. The human touch - the ability to lead and engage a team, develop a dynamic plan, and respond to unexpected events - is still key.

 

Read the full article