The Model Context Protocol crossed 97 million monthly SDK downloads in March 2026. When Anthropic first released MCP in late 2024, it got roughly 100,000 downloads in its first month. That 970x growth in 18 months is not a vanity metric — it reflects genuine adoption by teams building production AI agents. I’ve been integrating MCP servers into Claude-based workflows since early 2025, and the shift from “experimental protocol” to “de facto standard” has been dramatic. This guide covers where the ecosystem actually stands today: the governance changes, the real enterprise adoption numbers, and the technical problems that still aren’t solved.
What Is the MCP Ecosystem in 2026?
The Model Context Protocol (MCP) ecosystem is the collection of tools, servers, clients, governance bodies, and standards that have grown around Anthropic’s open protocol for connecting AI models to external data sources and tools. In 2026, MCP has 10,000–12,000 public servers across GitHub, npm, and PyPI, 97 million monthly SDK downloads, and support from every major AI provider — OpenAI, Google, Microsoft, and Anthropic. It operates under Linux Foundation governance through the Agentic AI Foundation (AAIF), making it a community standard rather than a vendor-controlled protocol. The practical effect: when you build an MCP server today, it works with Claude, GPT-4o, Gemini 2.5, Cursor, Windsurf, and dozens of other clients without modification. That interoperability is why 67% of enterprise AI teams are now using or actively evaluating MCP for production deployments, with Fortune 500 companies including Block, Bloomberg, Amazon, and Pinterest running live MCP integrations.
How Did MCP Grow So Fast?
MCP’s 970x growth in 18 months is the fastest protocol adoption I’ve seen in developer tooling since Docker. In March 2025, MCP SDK downloads were around 1 million per month. By September 2025, the number crossed 40 million. By March 2026, it hit 97 million. Three factors drove this:
Network effects from client adoption. Once Cursor integrated MCP in early 2025, every Cursor user became a potential MCP server consumer. Windsurf followed. Then Continue, Zed, and a dozen other editors. When Claude.ai added MCP support for all users in mid-2025, the addressable market for MCP servers exploded overnight. Server developers could now write once and reach every major AI coding tool.
Framework integration. LangChain, LlamaIndex, AutoGen, and CrewAI all shipped MCP adapters in 2025. This meant existing agent frameworks could use the growing catalog of MCP servers without any custom integration work. A team already using LangChain for RAG could add Brave Search, GitHub, or Linear MCP servers in minutes.
The self-reinforcing catalog. Anthropic published the official MCP server list, then the community took over. When a developer builds a useful MCP server and publishes it, other developers discover it, which creates demand for more servers. The catalog grew from a few dozen official servers to 10,000+ community contributions in under a year.
The growth is real, but it comes with problems I’ll cover in the scaling challenges section.
The Linux Foundation Governance Shift
In December 2025, Anthropic donated MCP to the Agentic AI Foundation (AAIF) under the Linux Foundation. This was the most significant event in MCP’s history, and its implications are still playing out.
Before the donation, MCP was a protocol Anthropic controlled. Anthropic wrote the spec, released updates, and made decisions about the protocol’s direction. That’s appropriate for an early-stage protocol but creates vendor lock-in risk at scale. Enterprises don’t build critical infrastructure on protocols owned by a single company.
The AAIF structure changes that. With 146+ member organizations contributing to governance, protocol changes now require broader consensus. Google, Microsoft, and IBM joined as founding members alongside Anthropic. This means future MCP updates go through a more deliberate process — slower in some ways, but far more trustworthy for long-term enterprise planning.
The practical effect for developers: the protocol is now genuinely multi-vendor. When you see MCP listed as a requirement in enterprise AI RFPs, it’s not “Anthropic’s protocol” — it’s an industry standard. That distinction matters enormously for procurement teams.
How Many MCP Servers Are There?
The ecosystem currently has approximately 10,000–12,000 public MCP servers as of April 2026. That number understates the real footprint because many enterprise teams build private servers that never appear in public catalogs.
Breaking down the server types by transport method: 67% run over local STDIO (most common for developer tools, privacy-sensitive workloads), 28% use Streamable HTTP (the new standard for remote, multi-user deployments), and roughly 5% still use the deprecated SSE (Server-Sent Events) transport. If you’re building a new MCP server in 2026, Streamable HTTP is the right choice — SSE is being phased out, and STDIO doesn’t work well for shared, hosted deployments.
The server categories with the most growth: DevOps/infrastructure (Kubernetes, Terraform, GitHub, GitLab), data retrieval (databases, search APIs, document stores), and communication platforms (Slack, Linear, Notion, Jira). The weakest coverage is still in financial systems and healthcare data — not surprising given the compliance complexity.
For production-grade deployments, see our DevOps MCP servers guide for the specific servers worth using and our comparison of MCP gateways for managing multiple servers at scale.
Enterprise Adoption: Who Is Actually Using MCP?
The 67% adoption rate across enterprise AI teams reflects real deployments, not just evaluations. Named production users include:
Block (formerly Square): Running MCP servers for financial data retrieval, with Claude agents querying transaction histories and flagging anomalies. They published a case study in early 2026 describing how MCP replaced their custom API integration layer.
Bloomberg: Using MCP to connect AI agents to their proprietary financial data terminals. The key advantage: Bloomberg engineers can update MCP server behavior without modifying every AI client that uses it.
Amazon: Multiple internal MCP deployments for AWS tooling, including a server that exposes CloudWatch metrics and another for Bedrock model management.
Pinterest: Using MCP to connect image generation pipelines to their internal asset database. Their team contributed several MCP extensions around multi-modal context handling.
The 80% of Fortune 500 companies deploying active AI agents in production by 2026 — per industry analyst estimates — are overwhelmingly connecting those agents via MCP. It’s become the assumption, not the exception.
Transport Evolution: From SSE to Streamable HTTP
This is technically dense but matters if you’re building MCP infrastructure. The original MCP spec used two transports: STDIO for local processes and SSE (Server-Sent Events) for remote connections. SSE worked but had limitations — it’s a unidirectional streaming protocol that required workarounds for bidirectional communication.
In early 2026, the MCP spec introduced Streamable HTTP as the new standard for remote connections. Streamable HTTP is a cleaner approach: standard HTTP for request/response, with HTTP streaming for long-running operations and server-side events. It’s easier to debug, works better with existing infrastructure (load balancers, CDNs, API gateways), and doesn’t require WebSocket-like persistent connections.
If you have existing SSE-based MCP servers, they’ll continue to work but clients are increasingly defaulting to Streamable HTTP. Most MCP server frameworks (the TypeScript SDK and Python SDK) support both transports with a configuration flag. The migration effort is usually minimal. For a detailed guide on deploying Streamable HTTP in production, see our MCP production deployment guide.
The Context Window Problem No One Talks About
Here’s a real issue I’ve run into building MCP-heavy agent systems: tool definition bloat consumes your context budget faster than you’d expect.
A concrete example: 3 MCP servers (GitHub, Linear, and Slack) with a combined total of 25 tools consumed 143,000 of a 200,000 token context window — 72% — just for tool definitions. That left only 57,000 tokens for the actual conversation, code, and reasoning.
The math gets worse as you add servers. A developer with 10 MCP servers might exhaust their entire context just loading tool definitions, before the agent has processed a single line of code.
The solutions being developed:
- Tool registry with lazy loading: Only load tool definitions when the agent determines a tool is relevant
- Tool summarization: Compress tool schemas to shorter descriptions at the cost of some accuracy
- Server composition: Build “meta-servers” that aggregate related tools under fewer, higher-level interfaces
None of these are fully solved. The MCP working group has a proposal for dynamic tool discovery (load tool specs on demand) but it hasn’t shipped yet. For now, be selective about which MCP servers you load for a given agent workflow.
Security Gaps: The Authentication Problem
The uncomfortable reality: most MCP servers in production don’t implement proper authentication. A 2026 security analysis found that only 8.5% of community MCP servers implement OAuth 2.1, which is the required authentication standard per the MCP spec.
The security risks this creates:
- Prompt injection via MCP responses: A malicious document retrieved by an MCP server can include instructions that override the agent’s behavior
- Tool poisoning: Compromised MCP servers can return manipulated data that leads agents to take harmful actions
- Static credentials exposure: Many MCP servers embed API keys in their configuration rather than using dynamic credential vaulting
CVE-2025-6514 was the first major MCP-specific security vulnerability, exploiting a trust boundary issue in how some MCP clients handle server-provided context. Most clients patched it quickly, but it illustrated that the security model needs work.
For production deployments: enforce OAuth 2.1 between your MCP clients and servers, never embed static API credentials in server configs, and treat all data returned by MCP servers as untrusted input before passing it to agents. See our MCP OAuth authentication guide for implementation details.
MCP vs A2A: Complementary, Not Competing
The “MCP vs A2A” framing is wrong. They’re complementary protocols. MCP handles the tool/resource layer — connecting AI models to external systems like databases, APIs, and file systems. Google’s A2A (Agent-to-Agent) protocol handles communication between AI agents — how one agent delegates tasks to another, how agents report results, and how multi-agent systems coordinate.
Both are now under the Linux Foundation umbrella. In practice, a well-architected multi-agent system uses both: MCP for each agent’s tool integrations, A2A for orchestrating work across agents. Think of MCP as the “how do I call an external API” protocol and A2A as “how do I call another agent.”
The “WebMCP” proposals floating around in early 2026 try to bridge these further by standardizing how agents expose their own capabilities as MCP-compatible servers, making agent-to-agent communication work through the existing MCP tooling. Expect this to mature throughout 2026.
Gartner’s 75% Prediction and What It Actually Means
Gartner projects that 75% of API gateway vendors will have MCP features by end of 2026. Having spent time with several gateway vendors, I think Gartner is slightly optimistic on timing but right on direction.
The rationale: MCP has become the expected interface for AI agents to call external services. If your API gateway doesn’t support MCP, agents using Claude, GPT, or Gemini can’t easily call the APIs behind it. Gateway vendors have a commercial incentive to add MCP support — it makes their platform relevant to the AI agent workload.
The vendors leading this: Kong, AWS API Gateway (which added MCP passthrough in Q1 2026), and several specialized “MCP gateway” startups. The pattern I see is “MCP passthrough” (route MCP calls to existing REST APIs) becoming the default offering, with more sophisticated features like tool schema generation and traffic shaping coming later.
How to Get Started with MCP in 2026
If you’re building a new MCP server or integrating existing ones, the practical starting point:
For server builders:
- Use the official TypeScript or Python MCP SDK (both updated for Streamable HTTP)
- Default to Streamable HTTP transport for any server that will be used by multiple users
- Implement OAuth 2.1 from the start — retrofitting auth is painful
- Keep your tool count low — aim for under 10 tools per server, use the “fewer, more powerful” principle
For agent builders:
- Be selective about which MCP servers you load — context budget matters
- Use the MCP Inspector tool for local debugging before production
- Implement error handling for MCP server failures — agents should degrade gracefully
- Consider a caching layer for high-frequency MCP calls to reduce latency
For teams:
- Run a private MCP registry for internal servers
- Establish a review process for any community MCP server you use — check authentication and error handling
- Monitor token consumption by MCP tool definitions and set budgets accordingly
For a complete walkthrough of building a production MCP server, see our Python MCP server tutorial.
Frequently Asked Questions
Is MCP controlled by Anthropic? No longer. Since December 2025, MCP is governed by the Agentic AI Foundation (AAIF) under the Linux Foundation, with 146+ member organizations including Google, Microsoft, IBM, and Anthropic. Protocol changes require multi-vendor consensus.
How many MCP servers exist in 2026? Approximately 10,000–12,000 public servers across GitHub, npm, and PyPI as of April 2026. Many more enterprise-private servers exist that aren’t publicly listed. Growth from the initial dozen Anthropic-built servers happened in under 18 months.
Does MCP work with GPT and Gemini, not just Claude? Yes. OpenAI, Google, Microsoft, and Anthropic all support MCP. Most major AI coding tools — Cursor, Windsurf, Continue, Zed — also support it. The Linux Foundation governance ensures it remains multi-vendor.
What’s the difference between MCP and A2A? MCP connects AI agents to external tools and resources (APIs, databases, file systems). Google’s A2A protocol coordinates communication between AI agents. They’re complementary — a multi-agent system typically uses both. Both are now under Linux Foundation governance.
What’s the biggest practical problem with MCP right now? Context window bloat. Loading tool definitions for multiple MCP servers can consume 50–70%+ of your model’s context window before the agent has done any actual work. The protocol needs better solutions for dynamic tool loading, which the working group is actively developing.
