
Depending on who you listen to, the “modern” data stack is either about to be replaced or is no longer “modern” at all. The concept is certainly not new, but the rise of AI-powered applications and agentic AI is dramatically redefining the demands placed on data stacks. Rather than becoming irrelevant, the modern data stack is a major consideration as organisations attempt to embrace AI and increase decision velocity.
Organisations must think carefully about how their data stacks are structured so that their AI agents can operate effectively and deliver accurate, insightful information as quickly as possible.
First, it’s important to qualify what we mean by the data stack. At its core, according to IBM, it is: “…integrated, cloud-based tools and technologies that enable the collection, ingestion, storage, cleaning, transformation, analysis and governance of data.”
Agentic AI – a way to structure and manage the use of AI – requires a data stack that can perform these tasks at even greater speed, and that stack is key to delivering on AI’s promise of acting autonomously within an ethical and secure framework. For the data stack to perform well, good data is essential. Beyond the broad agreement that “garbage in, garbage out” applies to the quality of AI decision making, there is growing acceptance that organisations must adopt a “data-first” mindset. Without a unified picture of all the data within their application environments, organisations will not be able to interrogate that data effectively using agentic tools.
Agentic AI creates more pressure on the data pipeline
Agentic AI places greater pressure on the flow of data given the sheer volume of tasks it completes, which can add layers of complexity to the data stack. Aside from calling on data in different formats, the AI may interrogate data across different platforms – on-premises, hybrid and public cloud. This can create latency challenges, slowing down decision making, and every time data is changed or moved, performance is affected. On top of that, how do you future-proof the data stack when AI is evolving so rapidly? Interoperability and open APIs become essential to enable integration of the latest developments.
McKinsey suggests that IT architectures must become “agent-native” if they are to exploit the opportunities of agentic AI. Indeed, it goes as far as saying that this requires process reinvention: “…it involves rearchitecting the entire task flow from the ground up. That includes reordering steps, reallocating responsibilities between humans and agents, and designing the process to fully exploit the strengths of agentic AI…”
To get to that point, organisations must ensure they have a data-mature IT infrastructure, as agentic AI will completely change the traditional human-machine relationship. As McKinsey suggests: “In such a model, systems are no longer organised around screens and forms but around machine-readable interfaces, autonomous workflows, and agent-led decision flows.”
With such large volumes of structured, unstructured and semi-structured data, organisations must ensure there is clear ownership of the data lakes within these data stacks. Ownership delivers oversight of the expertise embedded in the data, enables monitoring for accuracy, and sets rules for knowing where the data has come from. Without this commitment, alongside effective governance, the value of the information in your data stack can be undermined.
This can be addressed by treating your data as a product, which forces you to understand the purpose and value of data to your organisation. For example, we operate in the public sector and professional services industries, and we have what we call industry models for each of these sectors to help our customers quickly implement our enterprise resource planning (ERP) software. Because there are common processes, regulations and tasks in these industries, we have created packaged functionality that customers can adopt and tailor to their specific operating requirements. This is a form of productising data and functionality: understanding its purpose and how it can be used systematically to offer value to customers.

Similarly, any organisation could examine the data within its environment to see how it could be packaged to offer value to employees who want to make fast, accurate decisions with the support of agentic AI tools. For instance, could you package up data from across the whole organisation on patterns of customer engagement with your company? That data could be used to evaluate customer loyalty, willingness to recommend your brand, and likelihood of buying more of your offerings.
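To make the idea concrete, here is a minimal sketch in Python of what a “data product” contract might look like. Every name and field is hypothetical, chosen only to illustrate the point above: a packaged dataset carries an owner, a purpose, a schema and provenance alongside the data itself.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DataProduct:
    """A packaged dataset with explicit ownership, purpose and provenance."""
    name: str
    owner: str                # the accountable team, not just a system
    purpose: str              # why this data exists as a product
    sources: List[str]        # provenance: where the data comes from
    schema: Dict[str, str]    # column name -> type, the published contract
    quality_checks: List[str] = field(default_factory=list)

# Example: packaging customer-engagement data drawn from across the organisation
customer_engagement = DataProduct(
    name="customer_engagement",
    owner="customer-insights-team",
    purpose="Evaluate loyalty, advocacy and likelihood of repeat purchase",
    sources=["crm.interactions", "support.tickets", "web.analytics"],
    schema={"customer_id": "string", "channel": "string",
            "engaged_at": "timestamp", "sentiment": "float"},
    quality_checks=["customer_id is never null",
                    "engaged_at falls within the last 3 years"],
)
```

The design choice this illustrates is that the contract, not the underlying tables, is what consumers (human or agent) depend on.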
Context is key
However, the most important priority is to focus on building context around data; without it, agentic AI will struggle. This is particularly true in an enterprise environment, where governance and regulation mean organisations must have confidence that agents are being given a proper framework. Agents must be taught to understand best practices, like standard data formats and data quality checks, as well as sector-specific knowledge such as compliance with regulations like DORA (the Digital Operational Resilience Act) if you are a financial services institution. They also need to know company-specific naming protocols and security policies.
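As a rough illustration of what such a framework can mean in code, the sketch below shows a simple pre-flight check a pipeline might run before handing a record to an agent. The naming convention, field names and rules are assumptions for the example, not a prescribed standard.

```python
import re
from datetime import datetime

# Assumed company naming protocol: lower_snake_case field names.
NAMING_PROTOCOL = re.compile(r"^[a-z][a-z0-9_]*$")

def check_record(record: dict, required_fields: set) -> list:
    """Return a list of issues; an empty list means the record may go to an agent."""
    issues = []
    missing = required_fields - record.keys()
    if missing:
        issues.append(f"missing required fields: {sorted(missing)}")
    for key in record:
        if not NAMING_PROTOCOL.match(key):
            issues.append(f"field name violates naming protocol: {key!r}")
    # Standard data format check: timestamps must be ISO 8601.
    ts = record.get("reported_at")
    if ts is not None:
        try:
            datetime.fromisoformat(ts)
        except (TypeError, ValueError):
            issues.append(f"reported_at is not ISO 8601: {ts!r}")
    return issues

print(check_record({"CustomerID": "42"}, {"customer_id", "reported_at"}))
# -> flags the missing fields and the non-conforming field name
```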
This is where metadata is crucial, and organisations must think carefully about how they create context around their data. Applications such as ERP contain significant, valuable insights that can help agentic AI systems reach better decisions, because the data is stored with metadata tags. Some commentators are starting to talk about such applications becoming “systems of knowledge” that help to improve the accuracy and effectiveness of AI tools. Without this understanding, agents will struggle: enterprise processes are highly structured, and when making decisions an agent can only be right or wrong. Agents will get answers wrong if they do not have enough context to guide their decision making.
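A minimal sketch of what that metadata might look like in practice follows; the keys and values are purely illustrative, not a standard or any vendor’s schema.

```python
# Hypothetical metadata envelope an agent could consult before trusting a dataset.
dataset_metadata = {
    "dataset": "invoices_2024",
    "system_of_record": "ERP",
    "owner": "finance-data-team",
    "lineage": ["erp.invoices", "erp.customers"],   # where the data came from
    "last_validated": "2024-11-02",
    "regulatory_scope": ["DORA"],                   # regulations that apply
    "definitions": {"net_amount": "invoice total excluding VAT"},
}

def agent_can_use(meta: dict, required_scope: str) -> bool:
    """Allow an agent to act only on data with known lineage and the right scope."""
    return bool(meta.get("lineage")) and required_scope in meta.get("regulatory_scope", [])

print(agent_can_use(dataset_metadata, "DORA"))  # True: lineage known, scope matches
```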
Some commentators have suggested that a System of Agents will completely shake up how enterprise software works, because agents will change the way work is done, exactly as McKinsey has suggested. Historically, there have certainly been issues with back-office applications. Systems of Record are often accused of being user-unfriendly, so there is a case for agentic AI to reshape how users interact with software. I have described this as “Ambient ERP”, where autonomous agents operate invisibly in the background and only call upon users when needed.
However, to reach that point requires a significant amount of work, not least on the modern data stack. Every organisation must change its relationship with data, creating a data-centric culture and improving data literacy. Unless you understand the data within your organisation, how to use it and how to protect it, agentic systems will find it hard to exploit its potential.

Claus Jepsen
Claus Jepsen is CTO at Unit4, building cloud-based, super-scalable solutions and bringing innovative technologies such as AI, chatbots, and predictive analytics to ERP. Prior to joining Unit4, Claus was VP of Software Development, Technology at Infor, where he led the Infor Mobile Platform and Infor Intelligent Open Network (ION) technology development teams.