The next generation of intelligence relies on context, not compute


For years, AI progress has been measured by scale: larger models, expanding datasets, wider context windows. Each new breakthrough carries the same promise that feeding systems enough data will yield sharper insights.

Outside of training, that logic is beginning to crack. As models absorb lengthier prompts, their reliability comes into question: with more information to choose from, the system is more likely to fixate on the wrong detail.

Researchers have coined this phenomenon context rot: as an AI system processes a growing volume of information, extraneous details clutter its working memory. The knock-on effects can include less precise outputs, inflated costs, and a gradual erosion of trust.

A recent Microsoft experiment, an AI-led “Magentic Marketplace”, laid bare exactly how AI can fail here. Ece Kamar, the lab’s managing director, noted that “current models are actually getting really overwhelmed by having too many options.”

The rise of context rot

Most enterprise data resides in documents—PDFs, reports, and internal files that are chopped into chunks for retrieval-augmented generation (RAG). When a user asks a question, the system retrieves semantically similar passages and sends them to the large language model (LLM) as context.

The catch is that similarity isn’t the same as relevance. A fragment can look like a match yet miss key definitions or exceptions; stripped of its surrounding context, it may be little more than noise. The AI ends up juggling too much information without understanding which parts truly matter.
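A toy sketch makes the gap between similarity and relevance concrete. The chunk texts and embedding vectors below are invented for illustration (real pipelines use learned embeddings), but the ranking logic is the same: the closest vector wins, whether or not it is the chunk the answer actually depends on.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors, the usual RAG ranking score."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: chunk B "sounds like" the query, but chunk A
# holds the definition the answer actually hinges on.
chunks = {
    "A: 'Confidential Information' is defined in Section 2.1": [0.2, 0.9, 0.1],
    "B: the parties shall keep confidential information secret": [0.9, 0.3, 0.1],
}
query = [0.95, 0.25, 0.05]  # "what must be kept secret?"

ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query), reverse=True)
print(ranked[0])  # the closest-sounding chunk wins, sufficient or not
```

Here chunk B is retrieved first purely because its vector sits nearer the query, even though the definition in chunk A is what the answer needs.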

The fix isn’t to cram in more text; it’s to find text that’s more relevant to the business question at hand. This means equipping AI with a knowledge layer that reflects how the world really works: a network of entities and relationships, not disconnected data points.

Reasoning in relationships, not documents

Humans don’t reason in documents, but in relationships. A knowledge graph captures those connections explicitly: people, places, products, and the links between them.

When data is stored and searched as a graph, retrieval shifts from “closest approximate match” to “best supported answer.” A legal assistant, for example, might ask about a contract clause. A keyword or vector search could return one clause that looks relevant, while a graph-based system understands that the clause belongs to a larger definition and retrieves all related sections. The answer is more complete and contextualised, and the model no longer has to stitch information together across disconnected chunks. The end result is that the model needs far fewer tokens to generate a relevant answer.
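The difference can be sketched in a few lines. The clause identifiers and edge names below are hypothetical, but they show the core move: a retriever that follows explicit relationships returns the matched clause together with the definitions and exceptions it depends on, rather than the single closest chunk.

```python
from collections import deque

# Hypothetical contract graph: edges are explicit relationships, not
# similarity scores. Clause 7.2 depends on a definition and an exception.
edges = {
    "clause_7.2": ["definition_2.1", "exception_7.3"],
    "definition_2.1": [],
    "exception_7.3": [],
}

def retrieve_with_context(start, graph):
    """Breadth-first expansion: return the matched node plus every node
    it is connected to, however many hops away."""
    seen = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.append(node)
            queue.extend(graph.get(node, []))
    return seen

print(retrieve_with_context("clause_7.2", edges))
# A pure similarity search might surface only "clause_7.2" on its own.
```

Because the expansion follows edges rather than re-embedding text, the related sections arrive as one connected bundle of context.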

Building trust with graphs

Transparency is another major advantage of graphs. Vector embeddings, the numerical representations AI uses to link similar words and concepts, are powerful for machines but completely unreadable by humans. A graph, by contrast, can be visualised in a way that makes sense to people: it records the exact chain of facts the system used to reach a conclusion, along with the sources and permissions involved.

That traceability is invaluable in regulated environments. It’s much easier to justify a decision when you can show the path through the data that led to it, rather than just point to a cluster of opaque numbers. Built-in governance and explainability make graph-based AI enterprise-ready and trustworthy.
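As a rough illustration of that traceability (the field names and sources here are invented, not any product’s schema), each hop in the graph can carry the relationship used and the document it came from, so an answer arrives with its own audit trail:

```python
# Illustrative provenance chain: every fact records where it came from.
facts = [
    {"from": "Clause 7.2", "rel": "USES_TERM",
     "to": "Confidential Information", "source": "contract.pdf, p. 4"},
    {"from": "Confidential Information", "rel": "DEFINED_IN",
     "to": "Section 2.1", "source": "contract.pdf, p. 1"},
]

def explain(chain):
    """Render the reasoning path as text an auditor could check line by line."""
    return "\n".join(
        f"{f['from']} -[{f['rel']}]-> {f['to']} (source: {f['source']})"
        for f in chain
    )

print(explain(facts))
```

A reviewer can walk the printed chain hop by hop, which is exactly the kind of justification a cluster of embedding coordinates cannot provide.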

Don’t wait for GPT-6

Some leaders ask why they should worry about context when future models will be smarter. It’s true that large language models are improving quickly. But no matter how capable they become, they will never be trained on your private enterprise data.

A foundation model also works a bit like a search engine with extraordinary reasoning capabilities but no index of your company’s information. It can generate answers, but without being fed the right context, it can’t know which parts of your knowledge are authoritative, up to date, or most relevant to the question. Even when LLMs reach double-digit versions, they’ll still need a structured, secure way to access what’s unique to a business.

That’s why the bottleneck for AI adoption is shifting from compute power to data organisation. The key question is no longer “Which model should I use?” It’s “How well is my knowledge organised?”

Making graphs work for you

Graph databases once had a reputation for being hard to learn. That was true a decade ago, when teams had to invent their own schemas from scratch. Two changes have made them far more accessible.

First, the Graph Query Language (GQL) is now an international ISO standard, the first new database query language to be standardised since SQL decades ago. GQL gives engineers a shared declarative language for working with graph data, one that complements SQL rather than competing with it. Standardisation brings improved interoperability, clearer documentation, and a well-defined skill set to hire for.
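For flavour, an illustrative GQL query might look like the following; the labels and relationship types are invented for this example, not taken from any real schema:

```gql
MATCH (c:Clause {id: '7.2'})-[:USES_TERM]->(t:Term)-[:DEFINED_IN]->(s:Section)
RETURN c.id, t.name, s.id
```

Like SQL, the query declares what to fetch, the pattern of nodes and relationships, rather than spelling out how to traverse the storage layer.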

Second, thanks to AI, modern graph platforms now automate work that previously required specialised expertise. Assisted modelling, domain templates, and hybrid search that seamlessly blends vector and graph queries are now AI-powered and accelerated with agents. It’s a step change in ease of use and deployment. Teams spend less time hand-crafting data structures and more time asking real business questions.

Harnessing the knowledge layer

The smartest organisations are recognising that the strongest AI outcomes come from pairing capable models with well-structured, connected, and contextualised knowledge. The model is the reasoning engine; the graph is the framework that holds the right facts in place.

When retrieval is shaped by connections, it delivers higher quality context and better results. LLMs can spend less effort bridging gaps and more on producing accurate, explainable reasoning. Responses sharpen, latency falls, and costs follow suit. More than anything, users begin to trust what they are being told.

We’re shifting from an era defined by raw compute to one defined by organised context. Longer prompts and bigger models will continue to play a role, but structure, clarity, and connectedness will carry more weight than before.

If you want AI that’s consistent, fast, and reliable, the path forward isn’t “bigger.” It’s better organised.


Emil Eifrem

Emil Eifrem is a Swedish technology entrepreneur and Co-Founder and CEO of Neo4j, widely recognised for advancing graph database innovation. A passionate coder from an early age, he launched his first open-source project as a teenager and has since become a leading voice in graph technology.
