Model Context Protocols: what they are and how you can use them

AI innovation is moving faster than ever, with leaps and bounds being made in the field on what seems like a weekly basis. Beyond the innovation that is directly related to or produced by AI, there is also a wave of new and nascent supporting technologies that are extending what’s possible with AI. One such technology is Model Context Protocols (MCPs), which enable us to connect systems and applications in ways that unlock the exchange and flow of information like never before.

To use a hardware analogy, MCPs work much like USB ports on devices, insofar as they allow us to connect a multitude of other devices and peripherals to our main device. In a similar fashion, MCPs allow us to plug different tools and pieces of software into AI-centric applications, such as connecting ChatGPT to GitHub, but through a bespoke UI.
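To make the analogy concrete, here is a minimal sketch of what one of those pluggable tools could look like on the server side. It assumes the FastMCP helper from the official MCP Python SDK; the server name, the tool, and its data are purely illustrative.

```python
# Minimal sketch of an MCP server exposing one tool that an AI
# application could "plug into". Assumes the official MCP Python SDK
# (pip install mcp); the tool and its data are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("github-demo")

@mcp.tool()
def list_open_pull_requests(repo: str) -> list[str]:
    """Return titles of open pull requests for a repository (stubbed)."""
    # A real server would call the GitHub API here; placeholder data
    # keeps the sketch self-contained.
    return [f"Example pull request in {repo}"]

if __name__ == "__main__":
    # Serve over stdio so a host application can launch this server as
    # a subprocess and connect to it.
    mcp.run()
```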

The business case for MCP servers

The immediate potential of MCP servers lies in their ability to support a new way of working for today’s digital natives. Those within this cohort, and those that will follow, will have begun their professional lives in the era of Gen AI, with the likes of ChatGPT, Claude, Perplexity, and Midjourney as familiar to them as the traditional Microsoft Office suite of applications is to previous generations. AI is now everywhere, and MCP servers are a technology that enables Gen AI tools to be embedded seamlessly into other applications, providing a layer of abstraction from the core technology.

Using a financial services example, MCP servers allow users to interface with financial products and services without having to log in or connect directly to a financial application. In the future, users will be able to simply ask a chatbot a question, via a personalized UI, about their personal finances or business account and receive the answer. Similarly, businesses will be able to use the same technology internally to help with their daily workflows. For example, a payment operator could use a chatbot to find out how many payment errors have occurred over a set period; a loan provider could check the number of loans approved over the year against rejected applications; and an HR professional could ask about the average salary for a role they are advertising, and so on.
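As a rough illustration of how such a question might be answered under the hood, the sketch below shows a host application connecting to an MCP server and calling a hypothetical count_payment_errors tool. It assumes the official MCP Python SDK; the server script, tool name, and arguments are invented for the example.

```python
# Client-side sketch: connect to an MCP server over stdio, discover its
# tools, and call a hypothetical "count_payment_errors" tool. Assumes
# the official MCP Python SDK; server script and tool are illustrative.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["payments_server.py"])
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Discover which tools the server exposes.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])
            # Ask the server how many payment errors occurred in a period.
            result = await session.call_tool(
                "count_payment_errors",
                {"start_date": "2025-01-01", "end_date": "2025-03-31"},
            )
            print(result.content)

asyncio.run(main())
```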

Ensuring trust in MCP servers

Of course, the necessary permissions and security protocols will have to be in place when it comes to accessing different applications and systems for data extraction, and this also applies to organizations that provide access to their own systems and applications via MCP servers. Ensuring best practice when building MCP servers with regard to security, discoverability, and reliability is essential to avoid vulnerabilities.
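One simple way to enforce this on the server side, shown purely as an illustrative sketch (the permission names and datasets are hypothetical and not part of the MCP specification), is to check the caller’s granted scopes before any data is extracted:

```python
# Illustrative sketch of a permission check ahead of data extraction.
# Permission names, datasets, and records are hypothetical.
REQUIRED_PERMISSION = {
    "payment_errors": "read:payments",
    "account_balances": "read:accounts",
}

def extract(dataset: str, caller_permissions: set[str]) -> list[dict]:
    """Return records from `dataset` only if the caller is authorised."""
    needed = REQUIRED_PERMISSION.get(dataset)
    if needed is None or needed not in caller_permissions:
        raise PermissionError(f"Access to '{dataset}' denied")
    # Placeholder result; a real MCP server tool would query the
    # underlying system of record here.
    return [{"dataset": dataset, "records": []}]

# Allowed: the caller holds the payments scope.
print(extract("payment_errors", {"read:payments"}))
```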

As the use of agents becomes the norm, and we move closer to unlocking fully autonomous agentic AI systems, developers are going to bear the responsibility for controlling which MCP servers agents can access. Building your own MCP servers can provide greater certainty when it comes to security, but if agents are given the power to access other MCP servers, we must be mindful of the risks and ensure due diligence.
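In practice, that control can start with something as simple as an explicit allowlist of vetted servers, sketched below with hypothetical server names:

```python
# Illustrative sketch: restrict agents to an explicit allowlist of
# vetted MCP servers before any connection is attempted.
ALLOWED_MCP_SERVERS = {
    "internal-payments",    # built and operated in-house
    "internal-hr-metrics",  # reviewed and approved internally
}

def can_connect(server_name: str) -> bool:
    """Allow connections only to servers on the vetted allowlist."""
    return server_name in ALLOWED_MCP_SERVERS

for candidate in ("internal-payments", "unvetted-third-party"):
    status = "allowed" if can_connect(candidate) else "blocked"
    print(f"{candidate}: {status}")
```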

Risks and rewards

New technologies that extend what is possible with AI are emerging all the time. In just the last year, we have seen an explosion of interest in agents and agentic AI systems, reasoning models, and RAG systems, and now we are looking at what’s possible with MCP servers and agent-to-agent (A2A) protocols. There are always risks when venturing into the unknown, but the core principles of ethics, security, reliability, governance, and scalability must be observed when building solutions and systems with new technologies such as MCP servers.

These principles must also evolve as architectural paradigms shift, particularly in line with increased automation. It’s a great example of the self-perpetuating nature of technological innovation. Much like the move from the Ford Model T to the high-performance vehicles of today, which have evolved to incorporate numerous safety features, the evolution of software reveals new ways in which we can improve security. MCP servers, and increasingly A2A protocols, are the latest technology that allows us to extend the capabilities of AI and discover new avenues of innovation, and we must continue to diligently assess the risk profile of every nascent technology and evolve accordingly.

Adam Lieberman, Chief AI Officer, Finastra

Adam Lieberman

Adam Lieberman is Chief AI Officer at Finastra. Leveraging his background in mathematics and computer science, Adam is responsible for applying cutting-edge machine learning research and development to innovate in the financial services industry. He is a firm believer that innovation is key, and he works with his data science teams to use the latest emerging technologies to conceptualize and quickly turn proofs of concept into production-grade products and services across all of Finastra’s financial lines of business.
