Researchers at the University of Missouri found that large language models get passing marks as ethical hackers. The researchers asked ChatGPT and Gemini questions from the Certified Ethical Hacker exam - for instance, to explain a man-in-the-middle attack, which both models answered successfully.
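For readers unfamiliar with that exam question, the sketch below is a toy illustration of the man-in-the-middle concept: a relay that sits between a client and a server, forwarding traffic in both directions while silently observing it. The addresses and the plain-TCP setup are illustrative assumptions for a local demo, not anything taken from the study or the exam.

```python
# Toy man-in-the-middle relay: the "victim" client connects to us,
# we connect to the real server, and we forward bytes both ways while
# logging everything in transit. Addresses below are hypothetical.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)    # where the client is tricked into connecting (assumed)
UPSTREAM_ADDR = ("127.0.0.1", 9090)  # the real server (assumed)

def pipe(src: socket.socket, dst: socket.socket, label: str) -> None:
    """Forward bytes from src to dst, logging what passes through."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(f"[{label}] {data!r}")  # the attacker's vantage point
        dst.sendall(data)

def handle(client: socket.socket) -> None:
    upstream = socket.create_connection(UPSTREAM_ADDR)
    # Relay in both directions; neither endpoint knows a middleman exists.
    threading.Thread(target=pipe, args=(client, upstream, "client->server"),
                     daemon=True).start()
    pipe(upstream, client, "server->client")
    client.close()
    upstream.close()

def main() -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```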
If the LLMs gave a wrong answer, the researchers prodded them to take another guess, which in some cases produced the correct one. The chatbots were also asked to explain their responses, and the study assessed them on accuracy as well as comprehensiveness, clarity, and conciseness. ChatGPT scored 80.8% and Gemini 82.6%.
These results are promising, as such models could boost the productivity of ethical hackers, but both were still wrong in a significant number of cases. The job of the ethical hacker seems to be safe from AI disruption - at least for now!
View the Report