AI assistants: a new challenge for CISOs

In the past year, workplaces have been radically transformed by the new capabilities of artificial intelligence (AI). A recent McKinsey global survey on AI reports that 65% of respondents say their organization regularly uses generative AI, nearly double the share from the previous survey.

This increase reveals a real need within organizations. AI assistants simplify everyday work: writing meeting notes and emails, developing code, building marketing strategies, or even managing company finances. They can also help develop compliance strategies by actively monitoring regulatory changes, evaluating an organization’s practices, and identifying areas for improvement.

CISOs and data protection officers therefore face a very real danger: how do they protect company data and intellectual property against the risks of exposure to external service providers through these generative AI platforms?

Curiosity isn’t always a bad thing

Many executives have considered blocking generative AI tools from their systems altogether, but this approach risks hindering the organization’s ability to innovate, creating a culture of mistrust among employees, or even fueling the phenomenon of “shadow AI,” i.e. the unapproved use of AI applications running outside company systems.

Such an approach already seems outdated. Our research shows that AI assistants are already widely integrated into everyday tasks within organizations, and with the widespread rollout of solutions such as Microsoft Copilot, their use is expected to grow further in 2024.

So, instead of simply blocking these assistants, CISOs can enforce security policies continuously with intelligent data loss prevention (DLP) tools that allow AI applications to be used safely. These tools ensure that sensitive company information is not included in queries submitted to AI applications, protecting critical data and preventing unauthorized access, leakage and other misuse.
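To make this concrete, here is a minimal, purely illustrative sketch of such a pre-submission check in Python. The pattern names, the screen_prompt helper and the example rules are assumptions made for illustration, not the API of any real DLP product, which would rely on far richer detection methods:

```python
import re

# Hypothetical, simplified detection rules; real DLP engines combine
# classifiers, document fingerprinting and exact data matching.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_secret": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "confidential_marker": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, triggered_rules) for a query destined for an external AI assistant."""
    triggered = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]
    return (not triggered, triggered)

# Example: block the request before it ever leaves the company network.
allowed, rules = screen_prompt("Summarise this confidential board memo: ...")
if not allowed:
    print(f"Query blocked by DLP policy (rules triggered: {rules})")
```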

CISOs can also review the applications employees use, restricting access to those that do not meet the organization’s needs or that pose a risk.
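As a sketch of what such a review can feed into, the snippet below shows a default-deny policy table that a proxy or CASB layer might consult. The domains and categories are invented for the example; in practice this enforcement lives in dedicated security tooling, not in a standalone script:

```python
# Invented example domains; a real policy would be maintained in the
# organisation's secure web gateway or CASB, not hard-coded here.
AI_APP_POLICY = {
    "copilot.microsoft.com": "allowed",              # reviewed and approved
    "assistant.example-ai.com": "allowed_with_dlp",  # approved, prompts screened first
    "random-ai-notes.app": "blocked",                # failed the vendor review
}

def access_decision(domain: str) -> str:
    # Default-deny: anything not explicitly reviewed is treated as shadow AI.
    return AI_APP_POLICY.get(domain, "blocked")

print(access_decision("copilot.microsoft.com"))   # allowed
print(access_decision("brand-new-ai-tool.io"))    # blocked
```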

Once an AI assistant of interest to the company has been identified, the CISO should verify that the vendor offers sufficient guarantees of reliability and evaluate its data processing policy.

This evaluation involves several questions; a simple way to record the answers is sketched after the list below:

● Data processing practices: What happens to the data entered by the employee? Understanding how the provider manages and protects data is critical to ensuring its privacy and security. A World Economic Forum survey found that 95% of cybersecurity incidents result from human error, and entrusting sensitive data to an external AI assistant can exacerbate this risk.

Furthermore, by feeding these tools with data, an organization risks unwittingly contributing to the development of AI models that could eventually overshadow it. There may be scenarios in which confidential data or information about its business operations ends up in the hands of competing companies, posing significant risks to the organization’s competitiveness and market position.

● Is the model behind the service private or public? Is it developed by the vendor itself or based on a third-party solution? Many AI assistant apps used by employees rely on external services or subcontractors, and it’s easy to use an application without knowing that its infrastructure is built on a publicly available platform.

CISOs generally understand the costs associated with these AI technologies, but free or low-cost options have to generate revenue through other means. One of those means is, for example, selling the data or information generated by the use of the application. It is therefore important to read the terms of use very carefully to ensure the protection and privacy of sensitive data.

● What happens to the results? Are they used to train other models? Many AI vendors don’t just use input to train their models: they also use output.

This cycle generates increasingly complex data paths that can cause applications to accidentally expose sensitive company information or even touch upon copyright and intellectual property issues. This can significantly complicate data protection planning in the supply chain.
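One way to make these three questions operational, sketched below with illustrative field names rather than any standard schema, is to record each vendor review in a simple structure that the governance process can track over time:

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    # Fields mirror the questions above; adapt them to your own review process.
    vendor: str
    retains_prompts: bool       # what happens to the data employees enter?
    trains_on_inputs: bool      # are inputs reused to train the vendor's models?
    trains_on_outputs: bool     # are generated outputs also fed back into training?
    model_ownership: str        # "in-house", "third-party" or "public platform"
    revenue_model: str          # e.g. "subscription" vs "free, data-monetised"

    def acceptable(self) -> bool:
        # Hypothetical acceptance rule: no training on company data, and the
        # model must not run on an uncontrolled public platform.
        return not (self.trains_on_inputs or self.trains_on_outputs) and \
            self.model_ownership != "public platform"

review = VendorAssessment(
    vendor="ExampleAssist",
    retains_prompts=True,
    trains_on_inputs=False,
    trains_on_outputs=True,
    model_ownership="third-party",
    revenue_model="subscription",
)
print(review.acceptable())  # False: generated outputs are fed back into training
```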

Internal vigilance is essential

Pending clearer legislative guidance on AI, CISOs and data protection officers must develop a culture of self-regulation and ethical practice around AI within their organizations. With the proliferation of AI assistants, action must be taken now to assess the impact of these tools in the workplace.

Every employee will soon be performing many everyday tasks in tandem with a smart assistant. This should encourage companies to establish internal governance committees, not only to analyze the tools and their applications, but also to discuss the ethics of AI, review their processes, and refine their strategy ahead of widespread adoption and the publication of comprehensive regulations.

Employees in all sectors, regardless of their level, can benefit from these AI assistants, which, as Bill Gates describes them, are set to “radically change the way we live, whether online or offline.”

From a CISO’s perspective, the key to unlocking the full potential of these tools lies in responsible governance: adequate team training, combined with an AI governance committee responsible for the strategy, mechanisms, and approaches to handling data.
