This is a cache of https://www.elastic.co/search-labs/blog/mcp-current-state. It is a snapshot of the page at 2025-06-16T00:57:42.706+0000.
The current state of MCP (Model Context Protocol) - Elasticsearch Labs

The current state of MCP (Model Context Protocol)

Learn about MCP, project updates, features, security challenges, emerging use-cases, and how to tinker around with Elastic’s Elasticsearch MCP server.

Elasticsearch has native integrations to industry leading Gen AI tools and providers. Check out our webinars on going Beyond RAG Basics, or building prod-ready apps Elastic Vector Database.

To build the best search solutions for your use case, start a free cloud trial or try Elastic on your local machine now.

I recently attended the MCP Developers Summit in San Francisco and it’s clear that Model Context Protocol (MCP) is quickly becoming a foundational building block for AI agents and context-rich AI Applications. In this post, I’ll go over key updates from the event, emerging use cases, what’s on the horizon for MCP, and how to tinker around with Elastic’s Elasticsearch MCP server.

What is Model Context Protocol (MCP)?

For those unfamiliar, Model Context Protocol is an open standard that offers a structured, bi-directional way to connect AI models into various data sources and tools, enabling them to generate more relevant and informed responses. It’s commonly referred to as a “USB-C port for AI applications.”

Here is an architectural diagram that highlights its bi-directional nature:

This is a significant shift for AI practitioners, as one of the main challenges for scaling AI applications is having to build custom integrations for each new data source. MCP offers a sustainable, reusable architecture for managing and supplying context to models. It’s model-agnostic, server-agnostic and fully open source.

MCP is the latest iteration in a lineage of API specifications looking to standardize integration between applications. In the past, we had OpenAPI for RESTful services, GraphQL for data querying, and gRPC for micro-service communication. MCP not only shares the structured rigor of these older specs but also brings it into a generative AI setting, making it easier to plug agents into different systems without custom connectors. In many ways, MCP aims to do for AI agents what HTTP did for the web. Just as HTTP standardized communication between browsers and websites, MCP seeks to standardize how AI agents interact with the world of data around them.

MCP vs. other agent protocols

The agent protocol landscape is rapidly expanding, with over a dozen emerging standards competing to define how agents interact. Laurie Voss from LlamaIndex describes how most can be categorized into 2 types: inter-agent protocols that focus on agents talking to each other and context-oriented protocols like MCP that focus on delivering structured context to LLMs.

Other popular protocols like Google’s A2A (Agent to Agent), Cisco and IBM’s ACP (Agent Communication Protocol), and Agora, aim to enable agent to agent negotiations, coalition building and even decentralized identity systems. MCP takes a bit more pragmatic approach in that as it focuses on how agents access tools and data and not necessarily how they talk to each other (although MCP could also enable that in the future in different ways).

Currently, what sets MCP apart is its traction and momentum. Like React in the early days of frontend frameworks, MCP started with a niche problem and now sits as one of the most adopted and extensible agent protocols in practice.

Summit recap: Evolving priorities for MCP

The summit featured speakers from contributors at Anthropic, Okta, OpenAI, AWS, GitHub and many others. The talks ranged from core protocol enhancements to real-world implementations and outlined both immediate and long-term priorities. These talks reflected a shift away from early experimentation and simple tool calling to building trustworthy, scalable and modular AI systems using MCP as the foundation.

Several speakers teased a future where MCP is more than just protocol plumbing, it can become the foundation of an AI-native web. Just like how JavaScript enabled users to click and interact with web pages, MCP could enable agents to carry out those same actions on our behalf. For example, in e-commerce, instead of users manually navigating to a website to shop, they could simply tell an agent to log in, find a specific product, add it to their cart and check out.

This isn’t just pure speculation and hype either; PayPal showcased their new agent toolkit and MCP server at the summit, which enables this exact agentic commerce experience. With MCP providing secure and reliable access to tools and data sources, agents won’t just read the web, they’ll be able to act on it. Today, MCP is already a powerful standard with a lot of momentum, and down the road, it could become the standard of AI-enhanced user interactions across the web.

MCP project updates: Transport, elicitation, and structured tooling

Jerome Swannack, a core contributor to MCP, shared a few updates to the protocol spec from the last 6 months. The main goals of these changes are:

  1. To enable remote MCP with the addition of Streamable HTTP
  2. To enable richer agent interaction models with the addition of Elicitation and Tool Output Schemas

With MCP being open source, changes like Streamable HTTP are already available for developers to implement. Elicitation and Tool Output Schemas are currently unreleased; they are in the draft stage and may evolve.

Streamable HTTP (released on 03-26-2025): An impactful technical update was the introduction of streamable HTTP as a new transport mechanism. This replaces server-sent events (SSE) with a more scalable, bi-directional model that supports chunked transfer encoding and progressive message delivery over a single HTTP connection. This enables you to deploy MCP servers on cloud infrastructure like AWS Lambda and support enterprise network constraints without long-lived connections or the need for polling.

Elicitation (unreleased, draft stage): Elicitation allows servers to define a schema for how they want the context structured from a client. Essentially, the server can describe what it needs and the kind of input it expects. This has a few implications: For server builders, they can build more complex agentic interactions. For client builders, they can implement dynamic UIs that adapt to these schemas. However, elicitation should not be used to extract sensitive or personally identifiable information from users. Developers should follow best practices to make sure elicitation prompts stay safe and appropriate, especially as MCP matures. This ties into broader security concerns that we will discuss later in this post.

Tool Output Schemas (unreleased, future addition): This concept allows the client and the LLM to know the tool output shapes ahead of time. Tool output schemas let developers describe what a tool is expected to return. These schemas address one of the main limitations of direct tool calling, which is the inefficient use of the context window. The context window is considered one of the most important resources when working with LLMs and when you directly call a tool, it returns raw content that entirely gets pushed into the LLM’s context. Tool Output Schemas can help you make better use of your tokens and the context window by allowing the MCP server to provide structured data. There is currently no documentation on this concept, but it is under review by the MCP team.

Together, these new updates and future additions will help MCP become a more modular, typed and production-ready agent protocol.

Underused power features: Sampling and roots

While not new to the MCP specification, both sampling and roots were highlighted during the keynote. These two primitives are currently overlooked and underexplored, but can significantly contribute to richer, more secure interactions between agents.

Sampling - Servers can request completions from the client: Sampling allows MCP servers to request completions from the client-side LLM. This adds to the bidirectional nature of the protocol, where the server isn’t just responding to requests; it can prompt and ask the client’s model to generate a response. This allows the client to maintain full control over the cost, security, and which model the MCP server uses. So in the case of using an external MCP server with a preconfigured model, you won’t need to provide your own API keys or configure your own subscription to that model, as the server can just prompt the model already connected to the client. This enables more complex and interactive agent behaviors.

Roots - Scoped access to resources: Roots were designed to provide a way for clients to inform servers about relevant resources and workspaces to focus on. This is powerful for setting the scope in which servers operate. It’s important to note that roots are “informational and not strictly enforcing”, meaning that they do not define entitlements or permissions for MCP servers or agents. In other words, you cannot rely on roots alone to prevent a server or agent from executing certain tools or performing write actions. With roots, permissions should still be handled on the client-side with mechanisms for user approvals. Also, developers should still be mindful of using servers that are designed to respect the boundaries set by roots and use best practices.

Authentication for agents: OAuth 2.1 and protected metadata

This section focuses on OAuth 2.1, which is the latest iteration of OAuth 2.0 that removes insecure flows and consolidates best practices.

OAuth support was a highly anticipated topic, especially since security and scalability are seen as the major roadblocks preventing MCP from becoming the standard for connecting agents to tools. Aaron Parecki (OAuth 2.1 editor and identity standards expert at Okta) discussed how MCP can adopt a clean, scalable OAuth flow that offloads most of the complexity from server developers.

In this implementation, OAuth responsibilities can be split between the MCP client and the server. Most of the authentication flow is initiated and handled by the MCP client, only involving the server at the end to receive and verify the secure token. This split helps solve a critical scaling problem of how to authenticate across many tools without requiring developers to configure every single connection and ensures that MCP server developers don’t have to become OAuth experts.

Two key highlights from the talk:

  1. Protected Resource Metadata: MCP servers can publish a JSON file describing their purpose, endpoints and authentication methods. This allows clients to start OAuth flows with just the server URL, simplifying the connection process. Learn more: Let’s fix OAuth in MCP
  2. Support for IDPs and SSO: Enterprises can integrate identity providers to manage access centrally. This is a win for both user experience and security. Users wouldn’t need to click through 10 different consent screens and security teams can have observability into each connection.

By pushing the OAuth logic to the client and relying on metadata from servers, the MCP ecosystem avoids a major bottleneck. This aligns MCP more closely with how modern APIs are secured in today’s production environments.

Additional reading: OAuth 2 Simplified.

Security challenges in a composable ecosystem

New developments also come with new attack surfaces. Arjun Sambamoorthy from Cisco lists several key threats in the MCP landscape, including:

ThreatDescriptionRemediation & best practices
Prompt injection & tool poisoningA way to inject a malicious prompt inside the LLM system context or tool description, causing the LLM to perform unintended actions like reading files or leaking data.Use tools like MCP Scan to perform checks on tool metadata. Validate descriptions and parameters before including them in prompts. Lastly, consider implementing user approvals for high-risk tools. For more details, see the OWASP Prompt Injection guide in the additional reading list below the table.
Sampling attacksIn the context of MCP, sampling opens the door for the MCP server to do prompt injection attacks on the LLM.Disable sampling for untrusted servers and consider adding human-in-the-loop approvals for sampling requests.
Malicious MCP serversIn current collections of MCP servers, it is hard to vet each and every one to ensure safety. Rogue servers can quietly collect and expose your data to malicious actors.Only connect to MCP servers from trusted registries or internal lists. Run third-party servers in containers with sandboxing.
Malicious MCP install toolsCommand line installers and scripts are convenient for quickly implementing MCP servers or tools, but you could end up installing unverified, compromised code.Install in sandboxed environments and validate package signatures. Never auto-update from unverified sources.

To further combat this, Arjun suggests a trusted MCP registry to handle all of the verifications (a topic that was front and center—for more details, see the top two items in the reading list below), as well as using this security checklist.

Additional reading:

What’s next: Registries, governance, and ecosystem

A centralized MCP registry is in development and it was one of the most consistently discussed topics at the summit. The current server ecosystem suffers from fragmentation, low trust and discoverability. It’s hard for developers to find MCP servers, verify what they do and install them safely, especially in a decentralized ecosystem where metadata can be incomplete or spoofed.

A centralized registry addresses these pain points directly by acting as a trusted source of truth, improving discoverability, ensuring integrity of server metadata and reducing the risk of installing malicious tools.

The goals of the MCP registry are:

  • Offering a single source of truth for server metadata (what a server does, how to authenticate, install it and call it)
  • Getting rid of incomplete third-party registries and fragmentation so that when a server wants to be registered, it doesn’t have to update every single other registry on the internet.
  • Providing a server registration flow that includes a CLI tool and a server.json file that contains the metadata mentioned before.

The broader hope is that a trusted registry will help scale the ecosystem safely, enabling developers to build and share new tools confidently.

Governance was another top-of-mind issue for Anthropic. They made it clear that MCP should remain open and community-driven, but scaling that governance model is still a work in progress. They are currently looking for help in that arena and ask anybody who has experience with governance in open-sourced protocols to reach out. This leads to the other topic I wanted to mention. Throughout the event, the speakers emphasized that the ecosystem can only grow with contributions from developers within. There needs to be a concentrated effort in order to make MCP the new web standard and stand out from the other popular agent protocols.

MCP in the real world: Case studies and demos

Several organizations shared how MCP is already being used in practical applications:

  • PayPal - MCP Server for Agentic Commerce: PayPal showcased its new agent-toolkit and MCP server, which can fundamentally change a user's shopping experience. Instead of scouring social media to find items, compare prices and check out, users can chat with an agent that connects to the PayPal MCP server to handle all of those actions.
  • EpicAI.pro - Jarvis: Developments in MCP get us closer and closer to having a real life Jarvis type assistant. For those unfamiliar with the Iron Man movies, Jarvis is an AI assistant that uses natural language, responds to multi-modal inputs, has zero latency when responding, is proactive in anticipating the needs of the user, manages integrations automatically and can context switch between devices and locations. If we imagine Jarvis as a physical robot assistant, MCP gives Jarvis “hands” or the ability to handle complex tasks.
  • Postman - MCP Server Generator: Provides a shopping cart experience for API requests where you can pick different API requests, put them in a basket and download the entire basket as an MCP server.
  • Bloomberg - Bloomberg solved a key bottleneck in enterprise GenAI development. With almost 10,000 engineers, they needed a standardized way to integrate tools and agents across teams. With MCP, they transformed their internal tools into modular, remote-first components that agents can easily call on a unified interface. This enabled their engineers to contribute tools across the organization while AI teams focused on building agents instead of custom integrations. Bloomberg now supports scalable, secure agent workflows that unlock full interoperability with the MCP ecosystem. Bloomberg hasn’t linked any public resources, but this is what they presented in public at the summit.
  • Block - Block uses MCP to power Goose, an internal AI agent that enables employees to automate tasks across engineering, sales, marketing and more. They built over 60 MCP servers for tools like Git, Snowflake, Jira and Google Workspace to enable natural language interaction with the systems they use every day. Employees at Block now use Goose to query data, detect fraud, manage incidents, navigate internal processes and more, all without having to write code. MCP has helped Block scale AI adoption across many job functions in just 2 months.
  • AWS - AWS MCP Servers: AWS presented a fun Dungeons and Dragons themed MCP server that simulates rolling dice, tracks past rolls and returns results using Streamable HTTP. This lightweight example highlighted how easy it is to build and deploy MCP servers using AWS tools and infrastructure like Lambda and Fargate. They also introduced Strands SDK, an open-source toolkit for building multi-modal agents that interact with MCP servers.

Elastic’s MCP server

You can tinker around and use Elastic’s MCP server today. Note that this is currently in preview. The available tools exposed by the MCP server include:

  • list_indices: List all available Elasticsearch indices
  • get_mappings: Get field mappings for a specific Elasticsearch index
  • search: Perform an Elasticsearch search with the provided query DSL
  • get_shards: Get shard information for all or specific indices

The tools above already give you access to some powerful functionality and Elasticsearch’s core strength, which is search at scale. list_indices allows your agent to discover what data you have, get_mappings enables it to understand the structure and field types and search allows it to send queries with the full power of Elasticsearch DSL. This simple pattern already unlocks enterprise-grade search capabilities and includes features like complex aggregations.

We’re continuing to explore more advanced use cases and we welcome any feedback, contributions and suggestions on our Discuss forum or via issues on the GitHub repo. We have an awesome community of contributors, and one already created an unofficial MCP server for Kibana that exposes all of the Kibana endpoints. This allows you to leverage the powerful features Kibana offers, such as creating dashboards, through a chat interface like Claude desktop. Check out the full repository here. (Note that this is a community-maintained project and not an official product of Elastic.)

Conclusion

The MCP Dev Summit made it clear that MCP is shaping the way these AI agents interact with one another and with the world of data around them. Whether you’re connecting an agent to enterprise data or designing fully autonomous agents, MCP offers a standardized, composable way to integration that is quickly becoming useful at scale. From transport protocols and security patterns to registries and governance, the MCP ecosystem is maturing quickly. MCP will continue to be open and community-driven, so developers today have a chance to shape its evolution.

Related content

Ready to build state of the art search experiences?

Sufficiently advanced search isn’t achieved with the efforts of one. Elasticsearch is powered by data scientists, ML ops, engineers, and many more who are just as passionate about search as your are. Let’s connect and work together to build the magical search experience that will get you the results you want.

Try it yourself