There is no denying that major cloud outages make global news headlines. In fact, all three of the largest hyperscalers, AWS, Microsoft Azure and Google Cloud, experienced notable outages in 2025. These incidents caused severe disruption at huge cost to businesses worldwide, and they serve as a warning to cloud-invested digital businesses not to place too much dependence on the cloud for the uptime their essential workloads need.
The impact of even a minor outage extends well beyond lost sales to productivity loss, recovery costs and longer-term damage to reputation. Last year’s newsworthy outages sent shockwaves around the globe, with huge financial costs to large and small companies alike. And while the cloud is a powerful technology that is here to stay, it is no surprise that many companies are considering hybrid models that can provide robust, reliable business continuity in the event of a cloud outage.
No cloud is perfect
Overall, the growth of cloud services, and the opportunities these afford to businesses, are heralded as a major technology success story. One key industry report found that over 75% of Fortune 500 companies rely on AWS, and more than 55% on Microsoft Azure. Worldwide end-user spending on public cloud services alone was expected to reach $723.4 billion in 2025, with double-digit growth across all cloud segments.
However, it is a myth that any major cloud provider can offer assurance of 100% uptime. Quite simply, this service level does not exist. Any cloud provider, at any time, can experience downtime lasting from minutes to many hours, and the heavyweight hyperscalers are not exempt. Industry data indicates that AWS, Azure and Google Cloud experienced around 100 service outages between them in the 12 months from August 2024 to August 2025.
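To put that in perspective, even the strongest published availability tiers still permit measurable downtime. The short Python illustration below makes the arithmetic concrete; the percentages are generic availability tiers used for illustration, not any specific provider’s SLA commitment:

    # Downtime budget implied by common availability tiers.
    # The percentages are generic illustrations, not provider SLAs.
    HOURS_PER_YEAR = 24 * 365

    for sla in (99.0, 99.9, 99.99, 99.999):
        downtime_minutes = HOURS_PER_YEAR * 60 * (1 - sla / 100)
        print(f"{sla}% uptime still allows ~{downtime_minutes:,.0f} minutes of downtime a year")

Even at ‘five nines’, the budget works out to roughly five minutes of downtime a year, and real incidents routinely blow past such allowances.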
While the cloud’s compute power is not in question, the key issue is whether it should be the sole point of dependency for critical business workloads. Organisations operating with a cloud-only infrastructure need to pay attention to this, especially those whose IT architectures have become markedly more distributed in recent years, with growing volumes of data generated, used and stored at the edge.
From retail to healthcare and manufacturing, these edge environments typically demand continuous uptime. And while the compute power of hyperscale cloud brings significant benefit, there is also operational risk to address when critical services rely on remote connectivity. When a key cloud provider suffers service disruption, the impact can be magnified across interconnected systems. For many edge locations, simply waiting for centralised services to recover is not a feasible option.
Why edge reinforces cloud
The good news is that there is a workable way for organisations to achieve reliability, resilience and scale right at the edge. While cloud is essential for scale, analytics and collaboration, decentralised infrastructures built on edge computing and storage are a very effective way to keep services running locally if the cloud goes down. Deployed this way, edge reinforces cloud infrastructure rather than replacing it.
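As a minimal sketch of the pattern, the Python example below records events durably at the edge first and forwards them to the cloud only when it is reachable. The ingest endpoint is hypothetical, and a production system would add batching, authentication and retry policies:

    # Minimal sketch of a 'local-first' edge write path. Writes always
    # succeed locally; cloud sync is best-effort and resumes when
    # connectivity returns. The endpoint URL is a hypothetical example.
    import json, sqlite3, urllib.request

    CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # hypothetical

    db = sqlite3.connect("edge_buffer.db")
    db.execute("CREATE TABLE IF NOT EXISTS outbox (payload TEXT)")

    def record(event: dict) -> None:
        """Persist the event locally first, so an outage never loses data."""
        db.execute("INSERT INTO outbox VALUES (?)", (json.dumps(event),))
        db.commit()

    def sync() -> None:
        """Drain the local outbox to the cloud; stop quietly if unreachable."""
        rows = db.execute("SELECT rowid, payload FROM outbox").fetchall()
        for rowid, payload in rows:
            req = urllib.request.Request(CLOUD_ENDPOINT, payload.encode(),
                                         {"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=5)
            except OSError:
                return  # cloud is down; local service continues unaffected
            db.execute("DELETE FROM outbox WHERE rowid = ?", (rowid,))
            db.commit()

The design choice is the point: the local site remains the system of record during an outage, and the cloud catches up afterwards, rather than the other way round.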
Edge computing is already an established market and is fast becoming mainstream. The global market is estimated to grow from $28.5 billion in 2026 to $263.8 billion in 2035, a compound annual growth rate of 28%. With hardware innovation delivering data centre-level performance in compact, lower-cost formats, edge is becoming more accessible and much less costly than in previous years. More advanced management tools also make smaller sites easier to operate than in the past.
Reaching a fine-tuned balance
To roll out this type of hybrid infrastructure, organisations need to work in tandem with their service providers to agree a practical approach to cloud and storage design that fits their specific business requirements. It is crucial to remember that there is no one-size-fits-all model.
Rather than assuming a ‘cloud only’ or ‘cloud first’ design is best, organisations can bring another alternative into the equation: on-premises edge computing that is ‘cloud enabled’ without being over-reliant on the cloud. If a cloud outage occurs, internet connectivity fails or local access to the cloud is disrupted, edge architectures are an effective way to bridge the gap. With the technology becoming more straightforward to deploy, service providers should be ready to identify companies that require edge architectures, or that could benefit from their use.
This deliberate design approach, and the way cloud and edge complement each other, is especially notable in AI deployments. Hyperscale cloud environments are ideally suited to large-scale model training, storage and centralised analytics, while lightweight models deployed at the edge enable faster, local decision making.
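As a sketch of that split, the snippet below serves a cloud-trained model locally with ONNX Runtime, so predictions need no network round trip. The ‘model.onnx’ file, its single output and its input format are assumptions for illustration:

    # Sketch of cloud-trained, edge-served inference. Assumes a model
    # trained centrally and exported to 'model.onnx' (hypothetical),
    # with a single output; input name is read from the model itself.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx")  # loaded from local disk

    def infer(sample: np.ndarray) -> np.ndarray:
        """Run a prediction entirely on the edge device: no cloud call."""
        input_name = session.get_inputs()[0].name
        (output,) = session.run(None, {input_name: sample.astype(np.float32)})
        return output

Training and retraining stay in the cloud, where the scale lives; inference stays local, where the uptime requirement lives.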
It is clear that edge computing offers many data-driven organisations a way of adjusting their architectures so that critical operations keep working even when the cloud is down. This is not a rejection of cloud. It is a way of embracing it, in a prudent and strategic move to minimise future disruption and protect business operations against costly downtime.
Mark Christie
Mark Christie is Director of Technical Services at StorMagic. With over 15 years’ experience in technical services, Mark has deep expertise in SvSAN, and now leads the Technical Services team across pre-sales and support. A graduate of Oxford Brookes University, he holds multiple certifications and has broad experience across storage, virtualisation and infrastructure technologies.


