AI adoption has not stalled so much as it has diffused across organisations, moving from centrally managed deployments into everyday tools, workflows and development environments. As these capabilities have become embedded in widely used software and accessible development resources, their use increasingly takes place outside formal approval processes.
This is shadow AI: the use or integration of AI systems without the knowledge or approval of IT and security teams. A significant share of AI activity now falls into this category, leaving leadership without a clear view of where and how it is being applied.
The concern is exposure. Employees adopt tools to accelerate routine work, yet unapproved systems may process sensitive data, fall outside regulatory boundaries, or rely on security models that are difficult to interrogate. Because these capabilities are often built into familiar systems, their use is less visible, which makes governance more difficult and allows risk to accumulate without clear points of intervention.
The risk is systemic
Treating shadow AI as a matter of employee behaviour understates the problem, because the more significant risk lies in how AI enters the enterprise stack without consistent scrutiny.
Embedded AI features within enterprise applications can operate with limited visibility. Internal teams may deploy open-source AI solutions that are unverified or unsupported, while third-party or partner models can be integrated into systems without adequate vetting. Even internally developed models introduce risk when their lifecycle is not properly managed, as they may degrade over time through model drift or become vulnerable to manipulation of training data through data poisoning.
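As a rough illustration of the lifecycle monitoring this demands, the sketch below compares a production feature distribution against its training-time baseline to flag possible drift. The data, sample sizes, statistical test and alert threshold are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of one way to surface model drift: compare the distribution
# of a production feature against its training-time baseline. Values,
# sample sizes and the alert threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Baseline: feature distribution captured when the model was trained.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Recent production data: in this example the distribution has shifted.
production_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution no longer matches the training baseline.
statistic, p_value = stats.ks_2samp(training_feature, production_feature)

DRIFT_ALERT_THRESHOLD = 0.01  # illustrative; tune to the model's risk profile
if p_value < DRIFT_ALERT_THRESHOLD:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```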
What emerges is a distributed risk environment in which exposure builds through a series of local decisions across systems, vendors and teams, rather than from a single point of failure.
Governance must follow where decisions are made
Attempting to contain this through IT or security functions alone is unlikely to succeed, given the scale and dispersion of AI adoption. Many of the decisions that introduce risk sit outside central technology teams, which requires governance to extend into those domains.
A federated model allows security, compliance, procurement, development and business units to operate within a shared framework of standards and accountability. Decisions involving AI, whether vendor selection, model deployment or feature integration, should be subject to consistent controls regardless of origin.
Human oversight, however, does not scale at the pace at which AI systems operate. Automated governance mechanisms are therefore required to monitor usage, enforce policy and flag deviations in real time, forming part of the core infrastructure that supports AI oversight.
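A minimal sketch of what such an automated check might look like follows, assuming the organisation maintains an allow-list of approved AI endpoints and records usage events somewhere it can inspect. The endpoint names, event fields and policy rules are hypothetical.

```python
# Minimal sketch of automated policy enforcement over AI usage events.
# The allow-list entries, event format and rules are illustrative assumptions.
from dataclasses import dataclass

APPROVED_AI_ENDPOINTS = {
    "https://ai.internal.example.com/v1",   # hypothetical private deployment
    "https://approved-vendor.example.com",  # hypothetical vetted vendor
}

@dataclass
class AIUsageEvent:
    user: str
    endpoint: str
    data_classification: str  # e.g. "public", "internal", "confidential"

def evaluate(event: AIUsageEvent) -> list[str]:
    """Return the policy violations raised by a single usage event."""
    violations = []
    if event.endpoint not in APPROVED_AI_ENDPOINTS:
        violations.append(f"unapproved endpoint: {event.endpoint}")
    if event.data_classification == "confidential" and not event.endpoint.startswith(
        "https://ai.internal.example.com"
    ):
        violations.append("confidential data sent outside private infrastructure")
    return violations

# Flag a deviation for review rather than silently blocking it.
event = AIUsageEvent("jdoe", "https://public-chatbot.example.com", "confidential")
for violation in evaluate(event):
    print(f"FLAG [{event.user}] {violation}")
```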
AI risk should be engineered
AI introduces distinct risks, but it does not require a separate rulebook. Established engineering practices remain directly applicable, particularly in relation to testing, validation, version control and continuous monitoring.
Where organisations encounter difficulty is in applying these disciplines consistently to AI systems, especially across model validation, deployment and ongoing oversight. When these controls are weak or unevenly applied, issues tend to accumulate until they surface as operational or compliance failures.
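As one illustration of applying an ordinary testing discipline to a model release, the sketch below gates promotion on held-out accuracy. The model, data and threshold are stand-ins; real validation suites would cover far more than a single metric.

```python
# Minimal sketch of a pre-deployment validation gate for a candidate model.
# The accuracy threshold, model and holdout data are illustrative assumptions.
def validate_release(model, holdout_inputs, holdout_labels, min_accuracy=0.90):
    """Block promotion if the candidate model underperforms on held-out data."""
    predictions = [model(x) for x in holdout_inputs]
    correct = sum(p == y for p, y in zip(predictions, holdout_labels))
    accuracy = correct / len(holdout_labels)
    if accuracy < min_accuracy:
        raise RuntimeError(
            f"Validation failed: accuracy {accuracy:.2%} below {min_accuracy:.0%}"
        )
    return accuracy

# Example with a stand-in model: an even/odd classifier on toy data.
toy_model = lambda x: x % 2
inputs = list(range(100))
labels = [x % 2 for x in inputs]
print(f"Release approved at {validate_release(toy_model, inputs, labels):.2%} accuracy")
```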
External partners can provide support through validation, advisory and managed tooling, although accountability for governance remains internal.
Control is determined by where AI runs
The distinction between public and private AI environments carries direct implications for data control. When sensitive information is entered into public models, there is limited visibility into how that data is stored, used or retained, which creates clear risks around privacy and security.
Private AI infrastructure provides an alternative in which organisations retain control over data, training processes and model behaviour within a defined environment. This allows systems to be monitored, updated and aligned with internal policies and regulatory requirements.
Without this level of control, governance efforts remain constrained, as policies have limited effect when underlying systems lack transparency.
Visibility is the prerequisite for advantage
Efforts to eliminate shadow AI are rarely effective, as they tend to push usage further out of view rather than bring it under control. A more effective approach is to bring these activities into scope so that AI systems become visible, traceable and governed.
This depends on clear frameworks applied across functions, automation to scale oversight, and infrastructure that supports control. When these conditions are in place, organisations can support experimentation within defined boundaries while maintaining a clear line of sight over risk.
Organisations that achieve this balance are better positioned to turn AI from a source of risk into a controlled and reliable capability.
Simone Larsson
Simone Larsson is Head of Enterprise AI, EMEA at Lenovo.
Hande Sahin-Bahceci
Hande Sahin-Bahceci is Infrastructure Solutions & AI Marketing Manager at Lenovo.


