Generative AI: Are companies neglecting security?

ChatGPT, Bard, MidJourney. Unless you’ve been living in a cave for the past few months, these names most likely mean something to you. In fact, in 2023, most business sectors, functions, or positions have devoted a significant portion of their thinking to envisioning how they will be able to use these types of generative AI tools.

But beyond the media hype, what are the concrete reactions of companies to the rise in popularity of generative AI? Have they managed to balance the risks and benefits it offers?

Integrating Generative AI: Rapid Adoption

Business investment in generative AI solutions is booming worldwide. According to IDC forecasts, these investments will increase by $127 billion between 2023 and 2027. The observation is clear: companies around the world are increasingly turning to GenAI tools.

Our research shows that nearly all (95%) IT leaders are already using generative AI tools in one way or another. Nearly four in five companies (78%) use generative AI for data analysis (the primary use case for these companies).

More than half use generative AI in R&D (55%) and marketing (53%), while just over two in five use these tools to streamline end-user tasks (44%) and logistics (41%).

Staying competitive in today’s digital world requires keeping up with the latest technology, but it is also important to find a balance, and security is a key factor to weigh.

Despite these high adoption rates, 89% of surveyed IT leaders admit that their organization considers generative AI technologies a security risk. Almost half (48%) believe the threat may now outweigh the capabilities these tools can offer.

This finding underscores the gap between what they think and what they actually do: only 5% say their company limits the use of these tools, preferring to wait and watch as the technology develops, or blocks them completely. Early adoption of generative AI appears to be less thoughtful than we would like in terms of risk management.

Between fear and action: the great chasm

The main concerns of companies that have not yet adopted generative AI include the fear of losing sensitive data, a lack of resources to monitor its use, and a limited understanding of the associated risks and benefits.

When a new technology emerges, it’s critical to understand the specific security issues it poses so they don’t overshadow its potential.

Slowly but surely

If we take a closer look at who is advocating for the rapid adoption of generative AI, the results are surprising: 59% of IT leaders say they are driving this adoption themselves, while only 21% are responding to requests from their business leaders.

Interest from employees as a whole is even more limited at just 5%. The situation appears to be less about the “pressure” to introduce new technologies and more about the “desire” of IT teams to keep up with digital innovation.

Indeed, if IT teams themselves are driving this early adoption, IT managers and business leaders can rest a little easier.

In other words, we can strategically delay the adoption of generative AI by giving IT teams the time they need, a real window of opportunity, to improve their security measures before vulnerabilities turn into crises.
But the window is closing, as 51% of respondents expect interest in GenAI tools to grow significantly by the end of the year.

A total ban on its use is not a solution either, as companies would then suffer a significant competitive disadvantage. It is better to choose a slightly slower, more strategic, and more systematic execution. The old adage “make haste slowly” takes on its full meaning here.

Turning the tables: from threat to opportunity

The rules around generative AI management are critical, because how, where, and why the tool is used will be specific to your business. If governance guidelines are not yet established or need to be strengthened, an excellent approach is to bring together a cross-functional group of experts from across the business (not just IT) to form a dedicated team.

This team will be able to formulate security and privacy policies for new generative AI implementations. It will then make decisions about implementing solutions and address current gaps in technological understanding.

At the end of the day, generative AI management comes down to data protection. The first crucial step is therefore to classify your data, especially since only 46% of respondents are currently confident that all of their data is classified. Data that is properly categorized by sensitivity level is far easier to protect.
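As a purely illustrative sketch of what this first step can look like in practice, classification can start as simply as tagging text by sensitivity and gating what is allowed to reach an external GenAI tool. The labels, regex patterns, and the `allow_genai_input` helper below are hypothetical, not part of any specific product:

```python
import re

# Hypothetical sensitivity labels, ordered from least to most restrictive.
LABELS = ["public", "internal", "confidential", "restricted"]

# Illustrative patterns that force a higher sensitivity label.
PATTERNS = {
    "restricted": [r"\b\d{3}-\d{2}-\d{4}\b"],       # US SSN-like identifiers
    "confidential": [r"[\w.+-]+@[\w-]+\.[\w.]+"],   # e-mail addresses
}

def classify(text: str) -> str:
    """Return the most restrictive label triggered by the text."""
    for label in reversed(LABELS):                  # check most restrictive first
        for pattern in PATTERNS.get(label, []):
            if re.search(pattern, text):
                return label
    return "internal"                               # default when nothing matches

def allow_genai_input(text: str, max_label: str = "internal") -> bool:
    """Gate: only data at or below max_label may be sent to an external tool."""
    return LABELS.index(classify(text)) <= LABELS.index(max_label)

print(allow_genai_input("Quarterly roadmap draft"))        # True
print(allow_genai_input("Contact jane.doe@example.com"))   # False: confidential
```

In a real deployment this logic would sit behind a proxy or data loss prevention layer rather than in application code, and the pattern list would come from the governance team’s policies.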

Every new technology brings new use cases. With the right security measures in place, businesses can harness the potential of generative AI safely and responsibly, turning what initially looks like a threat into an opportunity.
