Over recent years, Generative AI (Gen AI) has dominated technology headlines and bubbled up into mainstream news. Each week brings new advancements that push the boundaries of this technology, transforming industries and reshaping our future. According to a report from SNS Insider, the global AI market was valued at $178.6 billion in 2023 and is expected to reach nearly $2.5 trillion by 2032, growing at a compound annual growth rate (CAGR) of 33.89%!
Following the rise of Gen AI, the focus has shifted to “Agents.” Agentic AI represents a more advanced implementation, where AI systems not only interpret inputs as instructions but also take actions to achieve specific goals.
Taking this concept one step further, the introduction of AI orchestration tools such as CrewAI, PydanticAI, Microsoft AutoGen, and LangGraph has led to the development of what is known as the Multi-Agentic Architecture pattern. This approach deploys multiple agents, each assigned a specific task, much like a team of individuals working collaboratively. By leveraging the strengths of various agents, teams and organizations can achieve larger and more efficient output than any single agent could alone.
AI orchestration tools can be granted access to external resources to assist in accomplishing their tasks, such as web crawlers, Retrieval-Augmented Generation (RAG) searches, vision tools, and file readers. With the current emphasis on agents, it has become evident that building a custom integration for each of these interactions is challenging and carries significant maintenance overhead.
Throughout the evolution of technology, software engineers have adapted to changing standards and protocols to enable effective communication between software systems. The internet boom of the ’90s brought protocols such as HTTP and TCP/IP, enabling seamless communication across networks. The widespread adoption of REST architecture standardized how web services interact, promoting simplicity and scalability. This continuous evolution reflects the industry’s commitment to improving efficiency, reliability, and collaboration in software development.
Now, to meet today’s need for standards and protocols, enter the Model Context Protocol (MCP).
What is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard released by Anthropic in November 2024. This groundbreaking standard streamlines how applications provide richer context to Large Language Models (LLMs). MCP simplifies the way AI assistants communicate with external data sources and tools, eliminating the need for custom integration code or sifting through API documentation.
In essence, MCP addresses the challenge of AI integration by creating a common language that all system components can speak, akin to being in a room where everyone speaks the same language, eliminating the need for a translator.
How Does it Work?
MCP is inspired by traditional client-server architecture, in which a client sends a request to an API server that parses the request, queries a database, and returns a response. MCP mimics this architecture by establishing a two-way communication path between an MCP Client (client) and an MCP Server (server).

We’ve visualized the flow in a graphic above, but let’s also talk through it.
The MCP Host can be an LLM-powered assistant such as Claude, an AI-integrated IDE like Windsurf, or any other LLM-powered application. At any given time, the MCP Host needs a path to interact with multiple data sources. As applications scale and must interact with multiple services simultaneously, the codebase grows more complex and accumulates a larger number of dependencies to maintain.
The MCP Client establishes and manages the connection to the MCP Servers. Each MCP Server is configured to communicate with a specific service, such as a database, the local filesystem, or a third-party tool.
The MCP Host calls the MCP Client to make a request to one of the servers. Upon receiving this request, the MCP Server responds with a list of the actions it can perform on behalf of the client.
When the client then invokes one of those actions, the server translates the request into the native API calls required to query the data or perform the specified action (all of which is abstracted away from the client). Finally, the MCP Server sends the result back to the MCP Client as output.
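To make this flow concrete, here is a minimal client-side sketch using the official MCP Python SDK (the `mcp` package). The server script name (`server.py`) and the `add` tool are hypothetical placeholders, assuming a local MCP Server that exposes such a tool:

```python
# client_sketch.py - a minimal MCP client flow using the official Python
# SDK (pip install mcp); "server.py" and the "add" tool are hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Tell the client how to launch the MCP Server as a local subprocess.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Handshake: client and server negotiate capabilities.
            await session.initialize()

            # Ask the server what actions it can perform on our behalf.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke one of the advertised actions; the server translates
            # this into whatever native API call is required.
            result = await session.call_tool("add", arguments={"a": 2, "b": 3})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```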
MCP focuses on exposing capabilities in three forms:
- Resources
- Tools
- Prompts
MCP Resources
MCP Resources can be structured data, repositories, company documents, or anything else the LLM can leverage as trusted knowledge. This is comparable to Retrieval-Augmented Generation (RAG), a common data retrieval technique that allows the LLM to access relevant information and gain external context, typically via semantic or keyword search. With MCP, however, the data is fetched directly from the source, without the need for additional plugins or libraries.
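As a sketch, the MCP Python SDK’s FastMCP helper can expose a document as a resource; the `docs://` URI scheme and the file location here are hypothetical:

```python
# resource_sketch.py - exposing trusted knowledge as an MCP Resource via
# FastMCP; the "docs://" scheme and "company_docs" folder are assumptions.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("knowledge-server")

@mcp.resource("docs://{name}")
def get_document(name: str) -> str:
    """Return the raw text of a company document for the LLM to use as context."""
    return Path("company_docs", f"{name}.md").read_text()

if __name__ == "__main__":
    mcp.run()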
MCP Tools
MCP Tools are specialized functions or actions that integrate into an MCP Server, significantly enhancing the capabilities of an LLM. These tools enable dynamic operations like creating, listing, updating, or deleting resources. For instance, an LLM could take notes and save them to your local filesystem. A more entertaining example might be a Spotify MCP Server that allows the LLM to request to play or queue songs via a prompt and generate a brief synopsis of the song’s origin while it plays. With these tools, LLMs become agents, capable of performing complex tasks rather than merely generating output.
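Here is a minimal sketch of the note-taking example above, again using FastMCP; the notes directory is a hypothetical placeholder:

```python
# tool_sketch.py - the note-taking example as an MCP Tool via FastMCP;
# the "notes" directory is a hypothetical placeholder.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-server")

@mcp.tool()
def save_note(filename: str, content: str) -> str:
    """Save LLM-written notes to the local filesystem and confirm the path."""
    notes_dir = Path("notes")
    notes_dir.mkdir(exist_ok=True)
    path = notes_dir / filename
    path.write_text(content)
    return f"Saved note to {path}"

if __name__ == "__main__":
    mcp.run()
```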
MCP Prompts
MCP Prompts are the least utilized of the three methods but can still prove valuable. They are typically used for running consistent, repeatable queries against custom data. With MCP Prompts, the MCP Client sends a templated prompt to the MCP Server. The application abstracts that custom query or API call away from end-users and developers, returning only the query results to the user.
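A sketch of such a templated prompt with FastMCP; the report template shown is a hypothetical example:

```python
# prompt_sketch.py - a templated MCP Prompt via FastMCP; the weekly-report
# template is a hypothetical example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prompt-server")

@mcp.prompt()
def weekly_report(project: str) -> str:
    """A consistent, repeatable query the application can reuse verbatim."""
    return (
        f"Summarize this week's activity for project {project}. "
        "Include open risks, completed milestones, and next steps."
    )

if __name__ == "__main__":
    mcp.run()
```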
Out of the three methods, Resources and Tools make up the vast majority of MCP use cases. Let’s discuss some of those use cases.
Use-Case: Document-Based Knowledge Management with SharePoint MCP Server
An organization has a large repository of documents related to various projects, standard operating procedures (SOPs), and case studies. By leveraging a SharePoint MCP Server, it can enhance knowledge management and streamline direct access to critical information. The knowledge management application establishes a connection with the MCP Server, which in turn has access to the document repository containing the project documents, SOPs, and case studies.
When a user requests information or needs to perform a document-related task, the application sends a request to the MCP Server via an MCP Client. The MCP Server processes this request and retrieves the relevant documents from the repository.
The retrieved documents are then made available to the LLM within the knowledge management application. The LLM processes the documents, extracting key information and generating summaries or detailed responses based on the user’s query.
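What might such a server look like? Here is a hedged sketch with FastMCP; a local folder stands in for the SharePoint document library, which a production server would instead reach through SharePoint’s APIs (for example, Microsoft Graph):

```python
# sharepoint_sketch.py - hypothetical document-retrieval MCP Server; a
# local folder stands in for the SharePoint library, which a real server
# would reach through SharePoint's APIs (e.g., Microsoft Graph).
from pathlib import Path

from mcp.server.fastmcp import FastMCP

DOCS = Path("document_repository")  # stand-in for the SharePoint repository
mcp = FastMCP("sharepoint-docs")

@mcp.tool()
def search_documents(query: str) -> list[str]:
    """Return the names of SOPs, project docs, and case studies matching a query."""
    return [
        doc.name
        for doc in DOCS.glob("*.txt")
        if query.lower() in doc.read_text(errors="ignore").lower()
    ]

@mcp.resource("repo://{name}")
def read_document(name: str) -> str:
    """Return a document's full text so the LLM can summarize or answer from it."""
    return (DOCS / name).read_text(errors="ignore")

if __name__ == "__main__":
    mcp.run()
```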
Use-Case: Synthetic Data Generation with MCP
A financial institution is developing an anomaly detection system but cannot use real transaction data for testing due to its sensitivity. To address this, MCP can be leveraged to generate synthetic transaction data. The LLM application sends a prompt to the MCP Server detailing the required data schema along with a small example. The MCP Server, equipped with a file reader tool, accesses the development team’s sample data directory, generates the synthetic dataset based on the prompt, and writes the new test data into the sample data directory. This synthetic data is then integrated into the code repository for testing, ensuring privacy and compliance while providing realistic data for the anomaly detection system.
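A sketch of what that server’s tools might look like, again with FastMCP. The directory and tool names are assumptions; note that the LLM, not the server, generates the records, and the server simply exposes file reading and writing:

```python
# synthetic_data_sketch.py - hypothetical sketch of the synthetic-data
# workflow: a file-reader tool exposes a sample schema, and a writer tool
# lands the LLM-generated dataset in the sample-data directory.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

SAMPLE_DIR = Path("sample_data")  # the team's sample-data directory (assumed)
mcp = FastMCP("synthetic-data")

@mcp.tool()
def read_sample(filename: str) -> str:
    """File reader: return a small example file so the LLM can mimic its schema."""
    return (SAMPLE_DIR / filename).read_text()

@mcp.tool()
def write_test_data(filename: str, content: str) -> str:
    """Write the LLM-generated synthetic records into the sample-data directory."""
    SAMPLE_DIR.mkdir(exist_ok=True)
    path = SAMPLE_DIR / filename
    path.write_text(content)
    return f"Wrote synthetic dataset to {path}"

if __name__ == "__main__":
    mcp.run()
```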
Use-Case: Cloud Administration with Azure MCP Server
A cloud administrator needs to integrate an AI Ops layer on top of Azure CLI management tools so that engineers and project managers can query Azure resources to track utilization and cost, without needing deep cloud domain expertise to execute complicated commands.
Many service providers have already created MCP Servers to integrate with their services. Azure, for example, has gained traction within the MCP ecosystem by releasing its own MCP Server. Azure MCP creates a direct connection to your Azure resources and abstracts away the need to interact directly with Azure’s native APIs. By implementing Azure MCP in the environment, engineers and project managers gain the capability to query Azure resources using natural language. This can be taken one step further with MCP Prompts: templated prompt messages that an application can call at any time to completely automate mundane administrative tasks.
What Makes MCP Different From Other Current Tools?
While MCP may well revolutionize the way AI systems interact with data sources, the fundamental process of retrieving data from external sources through LLM-powered applications is not new. The use cases above could have been achieved without MCP. A very prominent player in this domain today is LangChain. At face value, both MCP and LangChain address the challenge of integrating LLMs into software systems, but they pursue distinctly different design objectives.
LangChain is a Python and JavaScript framework for integrating application logic with LLMs. It emphasizes agent orchestration and chaining AI workflows, and it offers numerous connectors for integrating APIs and retrieving external data. MCP is much simpler in what it provides developers: it is not a framework with out-of-the-box tools and connectors; its ultimate objective is a universal protocol for data access.
Is MCP Ready for Adoption?
MCP is being implemented today but may not yet be enterprise-ready straight out of the box. Early adopters like GitHub, Slack, Airtable, and Postgres already have open-source MCP Servers available for use. The MCP community is rapidly expanding, with many individual contributors creating their own servers. Currently, most MCP Servers must be installed on local machines. However, recent experimental projects by the major hyperscalers aim to make MCP a portable, cloud-compatible solution. Alongside Azure’s new MCP Server offering, other players like Docker have released a curated MCP catalog and toolkit, and AWS has announced that MCP can now be utilized through AWS Lambda!
Each implementation of MCP necessitates thoughtful architectural decisions concerning application and system design. Key security considerations include robust authentication and credential management, establishing trust in MCP Servers, ensuring runtime environment security, preventing consent fatigue attacks, and maintaining data protection and compliance. Regular auditing and timely updates are also essential to address evolving security standards.
With careful consideration, MCP can in fact be adopted into an enterprise environment. Despite the need for further standardization of security measures, MCP represents a significant advancement in the AI industry. It is redefining the way system architects approach concepts such as AI orchestration and Multi-Agentic Architecture, and it is among the first breakthroughs in standardizing how AI capabilities integrate into a software environment. This protocol is poised to revolutionize how applications are built to interact with AI assistants and LLMs in the very near future.
Christian Moore is a Cloud Engineer for Insight Global’s professional services division, Evergreen. To learn more about Evergreen’s capabilities, check out our suite of technical services.