Executives, including CISOs, find themselves in a tricky position: employees and business units are swiftly embracing generative AI, but existing policies and processes do not yet reflect the power and perils of these tools. Many people within organisations use generative AI for day-to-day tasks such as writing code, drafting communications, and streamlining business processes, yet this innovative, much-hyped, and widely misunderstood technology carries risks that must be taken into account. In this report, the Team8 CISO Village, a community of CISOs from the world’s leading enterprises, examines the enterprise risks of GenAI. The following have been identified as high-risk:

Data privacy and confidentiality 

  • A major fear among businesses is that their sensitive information, IP, and source code, among other things, could be accessed by competitors once shared with an AI service. While there are risks inherent in sending this information outside the organisation’s servers, it won’t be seen by competitors - at least, not in the current version of ChatGPT - as the model is not retrained in real time, and businesses can opt out of their data being used to train future models. A sketch of one complementary safeguard, redacting sensitive content before it leaves the organisation, follows below.
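The snippet below is a minimal sketch, in Python, of such a pre-submission redaction pass. The patterns, labels, and redact() helper are hypothetical examples for illustration only, not something prescribed by the report or exposed by any vendor API; a real deployment would use a dedicated DLP or data-classification engine rather than a handful of regexes.

    import re

    # Illustrative patterns only; a production system would use a
    # proper DLP/classification engine, not a short regex list.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace likely-sensitive substrings before the prompt
        leaves the organisation's boundary."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
        return prompt

    if __name__ == "__main__":
        raw = "Email jane.doe@example.com, key sk-abcdefabcdefabcd1234"
        print(redact(raw))
        # -> Email [REDACTED-EMAIL], key [REDACTED-API_KEY]

The value of such a wrapper is architectural: every outbound prompt passes through a single choke point where policy can be applied, logged, and audited.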

Enterprise, SaaS, and third-party security

  • Generative AI is now so commonly integrated with third-party applications that data will be shared in new and unpredictable ways. If these tools are not sufficiently secure, a breach could leak sensitive data, such as customer information.

AI behavioural vulnerabilities 

  • By jailbreaking the AI, attackers can force it to perform actions it would otherwise be restricted from performing. For example, a chatbot could be manipulated into harming other companies by returning maliciously crafted results to them (see the sketch below).
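To make that attack surface concrete, here is a minimal, assumption-laden sketch of an input guardrail in Python. The deny-list and the check_prompt() helper are hypothetical; pattern matching alone cannot stop a determined attacker, so production systems typically layer model-based classifiers and output filtering on top of checks like this.

    import re

    # Hypothetical deny-list of common jailbreak phrasings; attackers
    # routinely rephrase, so treat this as a first filter, not a guarantee.
    JAILBREAK_PATTERNS = [
        re.compile(r"ignore (all|any|previous) (instructions|rules)", re.I),
        re.compile(r"pretend (you are|to be) .* (unrestricted|without rules)", re.I),
        re.compile(r"developer mode", re.I),
    ]

    def check_prompt(user_input: str) -> bool:
        """Return True if the input looks safe enough to forward to the model."""
        return not any(p.search(user_input) for p in JAILBREAK_PATTERNS)

    if __name__ == "__main__":
        print(check_prompt("Summarise this contract"))           # True
        print(check_prompt("Ignore previous instructions ..."))  # False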

Software security vulnerabilities

  • GenAI carries the same risks as any traditional software a business uses, as well as new ones arising from vulnerabilities unique to AI systems.

Legal and regulatory

  • Enterprises incorporating GenAI into processes that involve Personally Identifiable Information (PII) must adhere to global data privacy regulations, and failing to do so creates legal or regulatory exposure.

When dealing with these risks, the first step is to review existing policies, determine whether they capture the risks posed by AI, and update them accordingly to reflect the new threats.

Collaboration is key to developing new policy statements. The company’s CISO needs to work closely with its AI/ML team, if one exists, to maintain a clear picture of the threat landscape. Other stakeholders should also be involved through working groups, as generative AI raises additional risks around data privacy, IP exposure, and regulation, among others.

Organisations can then make decisions around their use of AI based on their appetite for risk, alongside enforcing practices such as:

  • Opting out of prompt information being used to train future models
  • Accepting the provider’s 30-day data retention policy (OpenAI, for example, retains API data for up to 30 days for abuse monitoring)
  • Ensuring all employees undergo risk awareness training

Why does this matter for businesses?

  • With this new technology quickly becoming ingrained in business operations, it’s critical that users don’t become so dazzled by its possibilities that they overlook the risks. Generative AI’s prevalence makes it an attractive target for hackers - businesses need to take stringent steps to protect themselves.
