The Model Context Protocol (MCP) has emerged as a highly discussed innovation in AI integration since Anthropic introduced it in late 2024. If you're engaged with the AI community, you've likely encountered numerous developer opinions on the subject. Some hail it as revolutionary, while others are quick to highlight its limitations. In truth, both perspectives hold some validity.
A pattern I've observed with MCP adoption is that initial skepticism often evolves into appreciation: This protocol addresses genuine architectural issues that other methods do not. Below, I've compiled a list of questions that mirror the discussions I've had with fellow developers considering implementing MCP in production environments.
1. Why should I use MCP over the alternatives?
Most developers contemplating MCP are likely already acquainted with solutions such as OpenAI's custom GPTs, vanilla function calling, Responses API with function calling, and fixed connections to services like Google Drive. The real question isn't whether MCP entirely replaces these solutions — technically, you could integrate the Responses API with function calling that still interfaces with MCP. What's crucial is the resulting tech stack.
Amid the buzz surrounding MCP, here's the straightforward truth: It isn't a monumental technical advancement. MCP essentially "encapsulates" existing APIs in a manner comprehensible to large language models (LLMs). Many services already possess an OpenAPI spec that models can utilize. For small-scale or personal projects, the argument that MCP "isn't that significant" is fairly reasonable.
The practical advantage becomes clear when constructing an analysis tool that must connect to data sources across various ecosystems. Without MCP, you'd need to create custom integrations for each data source and each LLM you intend to support. With MCP, you establish the data source connections once, allowing any compatible AI client to utilize them.
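That "connect once, reuse everywhere" property comes from MCP standardizing tool discovery over JSON-RPC 2.0. The sketch below shows the shape of that exchange in plain Python, without any SDK; the tool name and schema are hypothetical, illustrative stand-ins for whatever data source you'd actually expose.

```python
import json

# Illustrative sketch of MCP's JSON-RPC 2.0 tool discovery. Any
# MCP-compatible client issues the same tools/list request, regardless
# of which LLM sits behind it — that is the "build once" advantage.

def tools_list_request(request_id: int) -> str:
    """Serialize a tools/list request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

def example_tools_list_response(request_id: int) -> dict:
    """A hypothetical server response: one tool, described by JSON Schema."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [
                {
                    "name": "query_sales_db",  # hypothetical tool name
                    "description": "Run a read-only query against the sales database",
                    "inputSchema": {
                        "type": "object",
                        "properties": {"sql": {"type": "string"}},
                        "required": ["sql"],
                    },
                }
            ]
        },
    }
```

Because the discovery format is fixed by the protocol, a new AI client gets your entire tool catalog for free instead of requiring another bespoke integration.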
2. Local vs. remote MCP deployment: What are the actual trade-offs in production?
This is where the distinction between reference servers and reality becomes evident. Local MCP deployment using the stdio transport is incredibly straightforward to set up: Spawn a subprocess for each MCP server and communicate through stdin/stdout. This is excellent for a technical audience but challenging for everyday users.
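The stdio setup really is this small. The sketch below assumes a hypothetical server launched as `python my_mcp_server.py` and shows the core mechanics: spawn the child process, then exchange newline-delimited JSON-RPC messages over its pipes.

```python
import json
import subprocess

# Sketch of the local stdio transport. The server command is a
# hypothetical placeholder; messages are newline-delimited JSON-RPC
# exchanged over the child process's stdin/stdout.

def frame_message(payload: dict) -> bytes:
    """Encode one JSON-RPC message as a single UTF-8 line."""
    return (json.dumps(payload) + "\n").encode("utf-8")

def start_server(command: list[str]) -> subprocess.Popen:
    """Spawn an MCP server as a child process with piped stdio."""
    return subprocess.Popen(
        command,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# Usage (not run here):
# proc = start_server(["python", "my_mcp_server.py"])
# proc.stdin.write(frame_message(
#     {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}))
# proc.stdin.flush()
# response = json.loads(proc.stdout.readline())
```

Simple for developers, but it also shows why this model doesn't travel well to everyday users: it assumes a local runtime and a terminal-adjacent workflow.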
Remote deployment naturally addresses scaling but introduces complexities related to transport. The initial HTTP+SSE approach was superseded by a March 2025 streamable HTTP update, which aims to simplify matters by routing everything through a single /messages endpoint. Nevertheless, the transport details themselves aren't what most companies likely to develop MCP servers need to worry about.
Yet, a few months later, support remains inconsistent. Some clients still anticipate the old HTTP+SSE configuration, while others adapt to the new approach — so if you're deploying today, you'll likely need to support both. Protocol detection and dual transport support are essential.
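The protocol-detection logic can stay simple on the client side. A minimal sketch, assuming hypothetical endpoint paths: probe the newer streamable HTTP endpoint first, and fall back to the legacy HTTP+SSE pair if the server doesn't recognize it.

```python
# Sketch of dual-transport support, with hypothetical endpoint paths.
# The client probes the newer streamable HTTP transport first and falls
# back to the legacy HTTP+SSE configuration if the probe is rejected.

STREAMABLE_ENDPOINT = "/messages"                 # single endpoint (new style)
LEGACY_ENDPOINTS = {"events": "/sse", "post": "/messages"}  # old HTTP+SSE pair

def choose_transport(probe_status: int) -> str:
    """Pick a transport from the HTTP status of a probe POST to /messages."""
    if probe_status in (404, 405):
        # Server doesn't serve the streamable endpoint: assume legacy HTTP+SSE.
        return "http+sse"
    return "streamable-http"
```

Supporting both costs little, and it buys compatibility with clients on either side of the March 2025 transition.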
Authorization is another factor to consider with remote deployments. The OAuth 2.1 integration involves mapping tokens between external identity providers and MCP sessions. While this adds complexity, it is manageable with appropriate planning.
3. How can I be sure my MCP server is secure?
This is arguably the largest gap between the MCP hype and what you actually need to address for production. Most demonstrations or examples you'll encounter use local connections without authentication, or they gloss over security by stating “it uses OAuth.”
The MCP authorization spec does utilize OAuth 2.1, a well-established open standard. However, implementation can vary. For production deployments, focus on the essentials:
- Proper scope-based access control aligned with your actual tool boundaries
- Direct (local) token validation
- Audit logs and monitoring for tool use
Nevertheless, the primary security concern with MCP pertains to tool execution itself. Many tools require (or believe they require) extensive permissions to be functional, leading to broad scope design (like a blanket “read” or “write”). Even without an overly broad approach, your MCP server may access sensitive data or execute privileged operations — so, when uncertain, adhere to the best practices recommended in the latest MCP auth draft spec.
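The alternative to blanket "read"/"write" scopes is per-tool scope mapping. A minimal sketch, with hypothetical tool and scope names: each tool declares the narrow scope it needs, and the server checks the validated token's scopes before executing anything, denying unknown tools by default.

```python
# Minimal sketch of scope-based access control for MCP tool calls,
# using hypothetical tool and scope names. Each tool maps to the
# narrowest scope that covers it, rather than one blanket permission.

TOOL_SCOPES = {
    "read_report": "reports:read",     # hypothetical tool -> scope mapping
    "delete_report": "reports:delete",
}

def authorize_tool_call(tool_name: str, token_scopes: set[str]) -> bool:
    """Allow a call only if the token carries the tool's exact scope."""
    required = TOOL_SCOPES.get(tool_name)
    if required is None:
        return False  # unknown tools are denied by default
    return required in token_scopes
```

Pair a check like this with audit logging of every allowed and denied call, and you cover most of the production essentials listed above.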
4. Is MCP worth investing resources and time into, and will it be around for the long term?
This question strikes at the core of any adoption decision: Why should I invest in a temporary trend when the AI landscape is evolving rapidly? What assurance do you have that MCP will remain a viable option (or even exist) in a year, or even six months?
Consider MCP's adoption by major players: Google supports it with its Agent2Agent protocol, Microsoft has integrated MCP with Copilot Studio and is incorporating built-in MCP features for Windows 11, and Cloudflare is eager to assist you in launching your first MCP server on their platform. Likewise, ecosystem growth is promising, with numerous community-developed MCP servers and official integrations from well-known platforms.
In summary, the learning curve is not steep, and the implementation burden is manageable for most teams or solo developers. It delivers on its promises. So, why should I be cautious about embracing the hype?
MCP is fundamentally crafted for current-gen AI systems, assuming a human supervises a single-agent interaction. Multi-agent and autonomous tasking are two areas MCP doesn’t address; to be fair, it doesn’t need to. However, if you're seeking an evergreen yet still cutting-edge approach, MCP isn’t it. It’s about standardizing something that desperately needs consistency, not venturing into uncharted territory.
5. Are we about to witness the “AI protocol wars”?
Indications suggest potential tension ahead for AI protocols. While MCP has carved out a sizable audience by being an early entrant, there's ample evidence it won't stand alone for long.
Consider Google’s Agent2Agent (A2A) protocol launch with over 50 industry partners. It complements MCP, but the timing — just weeks after OpenAI publicly embraced MCP — doesn’t seem coincidental. Was Google developing an MCP rival upon seeing the leading name in LLMs adopt it? Perhaps a pivot was the right decision. But it’s hardly speculative to think that, with features like multi-LLM sampling soon to be released for MCP, A2A and MCP may become competitors.
Then there’s the perspective from today’s skeptics about MCP being a “wrapper” rather than a genuine advancement for API-to-LLM communication. This is another factor that will become more evident as consumer-facing applications transition from single-agent/single-user interactions into the domain of multi-tool, multi-user, multi-agent tasking. What MCP and A2A don’t address will become a battleground for a new breed of protocol altogether.
For teams bringing AI-powered projects to production today, the prudent strategy is likely hedging protocols. Implement what is effective now while designing for adaptability. If AI makes a generational leap and leaves MCP behind, your efforts won’t be wasted. The investment in standardized tool integration will yield immediate benefits, but keep your architecture adaptable for whatever emerges next.
Ultimately, the developer community will determine MCP's relevance. It’s the MCP projects in production, not specification elegance or market buzz, that will dictate whether MCP (or another protocol) remains dominant in the next AI hype cycle. And frankly, that’s probably how it should be.
Meir Wahnon is a co-founder at Descope.
