Embracing Innovation Without Compromising Hybrid Cloud Security

Artificial intelligence is no longer a concept on the horizon. It is here, scaling quickly and reshaping how businesses operate. Organisations are adopting it to move faster, work smarter, and deliver better outcomes. And with global investment expected to surpass $749 billion by 2028, AI is becoming essential to staying competitive.

But the infrastructure behind this transformation is under pressure. Hybrid cloud remains the foundation for AI, balancing the performance of public cloud with the control of private environments. Yet recent data from the 2025 Hybrid Cloud Security Survey shows that the foundation is cracking. Breach rates are up 17 percent this year, and 91 percent of surveyed Security and IT leaders acknowledge they are making compromises in their security strategies.

Whilst innovation is critical to keep pace with the competition, that innovation must be built on a foundation of security. Without visibility, without clean, high-quality data, and without control over risk, the promise of AI becomes much harder to realise. Rather than slowing down, organisations should focus on building a strategy that enables teams to move forward with clarity and hasten the pace of innovation.

The burden on existing infrastructure

AI is fundamentally changing how data moves, behaves, and introduces risk across hybrid cloud environments. One in three organisations report that their network data volumes have more than doubled over the past two years due to AI workloads, placing serious strain on already stretched infrastructure. Tool-to-tool traffic is burgeoning, monitoring systems are becoming overwhelmed, and a growing share of that traffic is encrypted, making it increasingly difficult to inspect and secure.

The challenge is not only one of scale but also of complexity: 58 percent of organisations have seen a rise in AI-powered attacks, including more sophisticated phishing campaigns and deepfake-based impersonation, while 47 percent report an increase in targeted threats against their own large language models (LLMs).

The reality is that existing security tools were never intended to navigate this level of sophistication. Designed for a pre-AI environment, conventional tools struggle to keep up with today’s demands. As AI reshapes the threat landscape, it is becoming clear that existing architectures no longer offer sufficient protection. New risk calls for a new approach.

Doubling down on visibility

As hybrid cloud environments grow more complex, many organisations are making compromises just to maintain momentum. But in this environment, not all compromises carry the same weight. Visibility and data quality, both essential to securing and managing hybrid cloud infrastructure, are being deprioritised in ways that exacerbate the underlying issues. Nearly half of organisations still lack complete visibility across their environments, particularly into lateral and encrypted traffic, where movement most often goes undetected. At the same time, 46 percent report that they do not have clean, high-quality data to support secure AI workload deployment.

These are huge gaps. They undercut the foundation needed to secure an AI-enabled future. Without complete visibility, it becomes nearly impossible to spot hidden threats, understand where data is going, or detect misuse in real time. This is especially critical as public cloud, once the default engine of agility, is now seen as the riskiest environment by 70 percent of security and IT leaders. Blind spots in encrypted traffic, fragmented tools, and governance concerns are forcing a reconsideration of how and where AI workloads are deployed.

Security leaders are being asked to deliver outcomes in environments where they lack full control, while the scale and speed of AI adoption continue to rise. In this context, visibility is not something organisations can afford to trade away. It is the control point that makes all others possible and the only way to manage risk with confidence in a landscape where complexity is perpetually increasing.

Resetting the strategy

To meet the demands of AI securely and at scale, organisations need a stronger foundation, one that is built for the pace and complexity of what’s next. Deep observability provides that reset. Rather than adding another tool to an already fragmented stack, it reframes the entire approach, bringing visibility, context, and control back to the centre of the security strategy.

By fusing network-derived telemetry with traditional metric, event, log, and trace (MELT) data, deep observability delivers real-time insight into how data is moving, where it is going, and how it is being used. It reveals threats conventional tools often miss, especially in encrypted and lateral traffic, and it enables security teams to act with clarity. This is particularly important when deploying AI workloads, where visibility into model behaviour, data flows, and performance is essential to managing both security and operational risk. With 88 percent of security and IT leaders now recognising deep observability as critical to securing AI, the direction is clear.
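
To make that fusion concrete, the sketch below correlates network-derived flow records with conventional log events to surface large encrypted lateral transfers that logs alone would miss. It is a minimal illustration only: the FlowRecord and LogEvent structures, the 10.x internal-address convention, and the volume threshold are all hypothetical assumptions, not a description of any vendor's API.

```python
# Hypothetical sketch: fusing network-derived telemetry with MELT-style
# log events. All structures, names, and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class FlowRecord:           # network-derived telemetry, e.g. from a tap or mirror
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int
    encrypted: bool         # payload is TLS; flow metadata is still visible

@dataclass
class LogEvent:             # traditional MELT-side signal from the host itself
    host_ip: str
    message: str

def correlate(flows: list[FlowRecord], events: list[LogEvent],
              internal_prefix: str = "10.",
              volume_threshold: int = 50_000_000) -> list[str]:
    """Flag hosts whose own logs are silent but whose network telemetry
    shows large encrypted east-west (lateral) transfers."""
    logged_hosts = {e.host_ip for e in events}
    alerts = []
    for f in flows:
        lateral = (f.src_ip.startswith(internal_prefix)
                   and f.dst_ip.startswith(internal_prefix))
        if (lateral and f.encrypted and f.bytes_sent > volume_threshold
                and f.src_ip not in logged_hosts):
            alerts.append(f"unlogged encrypted lateral transfer: "
                          f"{f.src_ip} -> {f.dst_ip}:{f.dst_port} "
                          f"({f.bytes_sent} bytes)")
    return alerts

if __name__ == "__main__":
    flows = [FlowRecord("10.0.1.5", "10.0.2.9", 443, 120_000_000, True)]
    events = [LogEvent("10.0.3.7", "scheduled backup completed")]
    print(correlate(flows, events))
```

The point of the example is the vantage, not the rule itself: the host's logs say nothing, but the network-level record of the encrypted transfer remains observable, which is what allows the two signals to be cross-checked.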

Staying secure doesn’t mean slowing down. It means adapting the security strategy to support innovation from the inside out. When complete visibility is embedded at the foundation, organisations can move quickly and securely without losing sight of the risks that come with scale and innovation.

Mark Jow

Mark Jow is Technical Evangelist EMEA at Gigamon, helping organisations harness the power of deep observability – ensuring they run fast, stay secure and continue to innovate at pace.
