Embrace the opportunity in shadow AI

Ivanti’s May 2025 survey found that 42% of UK employees now use generative AI (GenAI) at work, up from 26% in 2024. Respondents said it made their work easier and tasks quicker to complete, making them more productive. It’s easy to see why the technology has so quickly become popular, to the benefit of both employees and their employers.

But many of those 42% who are plugged into GenAI said they were using their own personally selected AI tools, either instead of or alongside those sanctioned and managed by their employer. This is what’s known as shadow AI. Employees open their own accounts and use them without the knowledge or permission of their employer.

What lurks in the shadows?

Shadow AI is a subset of shadow IT, in which people use their own tools in their jobs because they’re seen as better or more convenient than the work-issue toolset. Early examples included employees taking work home on a USB key to use on their own PC, or emailing it to their personal accounts; or people using unauthorised-but-slick apps to help complete tasks.

Since the data in shadow IT isn’t held on company-controlled devices and services, IT departments can’t manage it to meet regulatory requirements or their own data policies. Those personal accounts might gain access to the company’s intellectual property, for example. They might contain regulated data, such as customer information, which the company is legally obliged to protect. More broadly, they may hold sensitive information the company would rather keep confidential.

Creating opportunities with shadow AI

Conservative IT managers have generally faced an uphill battle combatting shadow IT. If people feel that a tool is helping them do a better job, more easily, they’ll use it, whatever bans or technical measures are put in place. Much of the IT we use today — smartphones, cloud services, remote access to files — was once banned as shadow IT at many companies. Then, steadily, those technologies found their way into the approved toolset because their advantages were so compelling.

The AI revolution is here. It will lead to extensive retooling and offer employers the opportunity to reset the shadow IT dynamic.

Shadow AI ought to be recognised as the opportunity it is. IT managers should consult closely and openly with employees, without fear of censure, to understand which tools are in use and why they are so popular. By finding compliant paths and setting up the right safeguards, spending can be redirected from tools that aren’t used to those which are in demand. Productivity can be increased by giving people software they want and like to use. Consider this a form of market research, establishing a toolset that will deliver productivity benefits because the selected apps are popular and effective.

How can you lead by example?

This shift to a more democratic model of software procurement has been progressing organically over decades. Once, enterprise software was uniformly cumbersome to use; now it is much friendlier, thanks to the influence of modern consumer apps. Business-specific cloud services have been embraced as a way to work from anywhere, on anything, without data leakage. Cybersecurity has evolved to become less onerous for employees, and less dependent on prescribed device restrictions. GenAI needs to join the fold of managed services: many providers do offer a gated, premium option that doesn’t ingest user inputs, but these need to be managed by IT departments, not individual employees. By permitting these tools in a team’s tech stack, employers can better monitor usage, encourage efficiency, and train employees on safety and best practices.

At DeepL, we’re very conscious and intentional about serving two quite different audiences with our Language AI solutions. On the one hand, we strive to delight end-users by making sure our products are quick and easy to use, and by providing reliable results. At the same time, we know that reassuring CISOs is absolutely essential; trust is key to adoption. We comply with GDPR and hold ISO, C5 and SOC 2 Type II security certifications; information security is core to how we operate.

Looking ahead

One difficulty for companies in the GenAI space is that trust is hard-won and needs to be nurtured. Solid data protection and reliability are prerequisites for enterprise customers. And providers need to build this in from the ground up: once trust is broken, it is difficult to rebuild.

Amid the spread of AI development and deployment, we’re pushing decidedly towards maturity in the market. We have an opportunity to improve enterprise software by creating opportunities for collaboration. For those IT leaders who offer an amnesty to learn from their employees, and build tool access around this grass-roots intelligence, the potential for productivity gains is immense.

Ed Crook

Ed Crook is VP Strategy & Operations at DeepL, a global leader in Language AI technology. DeepL is currently used by over 100,000 businesses and organisations and millions of individuals worldwide, who rely on its cutting-edge technology to communicate on a global scale.
