Data Management Vendors Can't Succeed Without MCP Support
Key Lessons from the First Year of MCP.
One year post-launch, the Model Context Protocol (MCP) has transcended its origins as an open-source framework to become the foundational layer for enterprise AI integration. For you, as a data management vendor, this is not merely a technological shift but an existential market realignment.
Support for MCP is no longer a differentiating feature; it is the price of admission. This report details how MCP has reshaped the competitive landscape, why its absence makes your platform a "data dead end," and what strategic steps you must take beyond basic compliance to secure your long-term viability.
The era of proprietary, walled-garden data access is over. The age of the interconnected, agentic fabric has begun, and MCP is its core protocol.

I. The Genesis of a Standard: From Anthropic’s Code to Your Core Offering
The launch of the Model Context Protocol by Anthropic on November 25, 2024, was a direct response to a critical and growing pain point in the enterprise: the immense complexity and cost of connecting AI to data.
For years, you have built powerful platforms for data ingestion, transformation, and governance. Yet, when your customers sought to leverage this data within the new generation of AI, specifically large language models (LLMs) and autonomous AI agents, they faced a formidable integration challenge.

AI models are stateless; they possess no inherent knowledge of your customers' unique business operations, customer data, or financial records. To be valuable, they must be connected to these proprietary data sources.
Before MCP, this meant your customers' development teams were engaged in a Sisyphean task of building and maintaining custom, point-to-point integrations. Each new agent, each new data source, required a new handcrafted pipeline. This process was:
Slow: Dragging development cycles from weeks to months.
Expensive: Consuming vast developer resources on "plumbing" rather than innovation.
Unscalable: Creating a brittle, unmanageable web of connections.
Ungoverned: Making it nearly impossible to enforce consistent data security and access policies.
MCP entered this fray as an open-source protocol designed to standardise this connection layer. In essence, it provides a universal specification for building MCP servers.
These servers act as secure bridges, creating a standardised communication channel between AI models (like OpenAI's GPTs or Anthropic's Claude) and the data sources you manage, be they SQL databases, data lakehouses, or SaaS applications.
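To make the bridge concrete, here is a minimal, dependency-free sketch of the server side of that handshake. The tools/list and tools/call method names follow the MCP specification's JSON-RPC style, but the query_sales tool, its schema, and its canned result are invented for illustration; a production server would be built on an official MCP SDK rather than hand-rolled dispatch.

```python
import json

# Hypothetical tool the server exposes; a real deployment would query
# the vendor's warehouse instead of returning canned data.
def query_sales(region: str) -> dict:
    return {"region": region, "total": 1_250_000}

TOOLS = {
    "query_sales": {
        "description": "Return total sales for a region.",
        "inputSchema": {"type": "object",
                        "properties": {"region": {"type": "string"}}},
        "fn": query_sales,
    },
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC-style request the way an MCP server would."""
    if request["method"] == "tools/list":
        # Advertise every tool except its private implementation.
        result = {"tools": [{k: v for k, v in t.items() if k != "fn"}
                            | {"name": name}
                            for name, t in TOOLS.items()]}
    elif request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        result = tool["fn"](**request["params"]["arguments"])
    else:
        result = {"error": "unknown method"}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "query_sales",
                          "arguments": {"region": "EU"}}})
print(json.dumps(call["result"]))
```

The point of the pattern is that any MCP-speaking client can discover and invoke the tool without knowing anything about the warehouse behind it.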

When analysts like David Menninger of ISG compare MCP to the advent of Open Database Connectivity (ODBC), they are not being hyperbolic. ODBC created a universal standard for applications to access database management systems, unleashing a wave of innovation in business intelligence and enterprise software.
MCP is poised to do the same for AI. It is the USB-C port for the agentic economy, a universal adapter that eliminates the need for a drawer full of proprietary cables.
For you, this means your platform’s value is no longer judged solely on its internal capabilities, but on its ability to plug and play seamlessly within this new, open ecosystem.
II. The New Table Stakes: Why MCP Support is Non-Negotiable
The market's adoption of MCP has been breathtakingly rapid. Within a year, it has moved from a promising technical preview to a core requirement. The consensus among industry leaders and analysts is unequivocal: supporting MCP is now a baseline expectation.
1. The Shift from Differentiator to Disqualifier
Sumeet Agrawal, Vice President of Product Management at Informatica, crystallises this sentiment by labelling MCP "table-stakes functionality." This is a critical strategic concept for you to internalise. A year ago, announcing MCP support could have been a headline at your user conference.
Today, its presence is assumed. The competitive differentiation has inverted. You do not gain significant new customers by having MCP support; you will absolutely lose them by not having it.
It has become a disqualifying factor in procurement processes. Enterprise buyers, wary of vendor lock-in and technological dead ends, will simply not consider a data platform that is not "agent-ready."
2. The Peril of Invisibility
Michael Ni, an analyst at Constellation Research, offers an even starker warning: "Invisible vendors get replaced." In a world where business processes are increasingly orchestrated by AI agents, your data platform must be discoverable and actionable by these autonomous systems.
If an agent cannot natively connect to your platform via MCP to retrieve a customer record, analyse a sales trend, or validate a data quality metric, then your platform is functionally invisible to the most advanced layer of the enterprise tech stack.
You become a silo of dormant data, bypassed by the very intelligence that is driving decision-making. In this new paradigm, being bypassed is the first step toward being replaced.
3. The Ecosystem Play
The leading cloud providers (AWS, Microsoft Azure, and Google Cloud) were among the first to throw their weight behind MCP. They were quickly followed by a wave of major data platform vendors throughout 2025, including Snowflake, Databricks, Alation, Confluent, and Oracle.
This creates a powerful network effect: as more tools and platforms adopt MCP, the value of the entire network increases. For you, resisting this trend is akin to trying to build a social network that cannot send emails.
You are isolating yourself from the ecosystem where innovation is happening at pace. Your customers do not want to build their AI future on an island; they want to be on the continent, with well-maintained highways connecting all their critical assets.

III. The Tangible Impact: From Weeks to Hours
To understand why MCP has garnered such fervent support, you must look at its practical impact on development velocity. The pre-MCP development process for an AI agent was a test of endurance.
Consider the task of building a simple agent to monitor IT costs and generate alerts. A developer would need to:
Write custom code to connect to the cloud billing API.
Build another set of custom logic to authenticate and query the internal financial database for budget allocations.
Develop a complex prompting strategy to get the LLM to understand the context from these two disparate sources.
Manually handle error checking and retry logic for both connections.
This process was, as Informatica's Agrawal confirms from his own development experience, "very complicated prompting" that could take weeks.
With MCP, this paradigm shatters. The same agent can be built in hours.
Here’s how:
MCP Servers as Standardised Components: Your platform would host (or provide easy access to) pre-built MCP servers for common data sources: one for your data warehouse, another for a CRM system, and so on.
Unified Tool Discovery: The AI development environment (be it your own or a third-party tool like Claude Desktop) discovers all available tools through the MCP protocol.
The agent doesn't see custom code; it sees a standardised menu of available actions: get_customer_data, analyze_sales_trend, check_IT_spend.
Simplified Agent Logic: The developer no longer codes integrations. They simply instruct the agent, in natural language, to "use the check_IT_spend tool and compare it to the get_budget_allocation tool, then alert me if we are over 90% of the budget." The MCP protocol handles the complex orchestration of calling these tools, retrieving the data, and presenting it to the model in a structured context.
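Stripped of the LLM planning step, the agent-side logic amounts to a few lines. In this sketch the two tools are stubbed locally and the spend and budget figures are invented; in practice both tools would be discovered from MCP servers at runtime and the plan would come from the model rather than being hard-coded.

```python
# Illustrative agent-side logic. check_IT_spend and get_budget_allocation
# stand in for tools discovered over MCP; the figures are invented.
def check_IT_spend() -> float:          # tool on the billing MCP server
    return 46_500.0

def get_budget_allocation() -> float:   # tool on the finance MCP server
    return 50_000.0

# The "standardised menu" of actions the agent sees after discovery.
DISCOVERED_TOOLS = {t.__name__: t for t in
                    (check_IT_spend, get_budget_allocation)}

def run_agent(threshold: float = 0.9) -> str:
    """Compare IT spend to budget and alert past the threshold."""
    spend = DISCOVERED_TOOLS["check_IT_spend"]()
    budget = DISCOVERED_TOOLS["get_budget_allocation"]()
    ratio = spend / budget
    if ratio > threshold:
        return f"ALERT: IT spend at {ratio:.0%} of budget"
    return f"OK: IT spend at {ratio:.0%} of budget"

print(run_agent())
```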
This is the power Ni describes as turning "development from artisanal craftsmanship into platform engineering."
Your role as a vendor evolves from providing raw data access to providing a curated set of powerful, governed, and agent-ready tools. You are not just a data repository; you are an intelligence provider for an autonomous workforce.
IV. The Evolving Battlefield: MCP as a Foundation, Not a Finish Line
Declaring support for MCP is only the beginning. The strategic battle is now shifting to the depth, breadth, and security of your MCP implementation.
1. The Maturity of Your MCP Support

As David Menninger notes, vendor support for MCP exists on a spectrum. Your initial implementation is just the first step. The leaders in this space are already advancing to more sophisticated tiers:
Tier 1: Basic Connectivity. Providing MCP servers that allow agents to run simple queries against your platform. This is the bare minimum.
Tier 2: Advanced Tooling. Embedding in-line calls to LLMs directly within your native query languages (e.g., SQL). This allows for complex operations like sentiment analysis or data classification to be invoked as a standard function within a larger data pipeline.
Tier 3: Unstructured Data Mastery. Extending MCP support beyond structured tables to allow agents to natively query and comprehend documents, images, and videos stored within your data lakehouses. This unlocks a vastly larger portion of the enterprise data estate.
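As a concrete illustration of Tier 2, the sketch below registers a model call as an ordinary SQL function so it can be invoked inline within a query. It uses SQLite and a keyword-matching stub in place of a real LLM; a vendor implementation would route the function call to a hosted model instead.

```python
import sqlite3

# Stub standing in for an LLM call; a real Tier 2 implementation would
# send the text to a hosted model and return its classification.
def classify_sentiment(text: str) -> str:
    return "positive" if "love" in text.lower() else "negative"

conn = sqlite3.connect(":memory:")
# Expose the model call as a one-argument SQL function named sentiment().
conn.create_function("sentiment", 1, classify_sentiment)
conn.execute("CREATE TABLE reviews (body TEXT)")
conn.executemany("INSERT INTO reviews VALUES (?)",
                 [("I love this product",), ("Shipping was slow",)])

# The model is now invoked per row, inside an ordinary SQL pipeline.
rows = conn.execute("SELECT body, sentiment(body) FROM reviews").fetchall()
print(rows)
```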
Your product roadmap must explicitly address how you will progress through these tiers. A stagnant MCP implementation will be overtaken within quarters.
2. The Interoperability Horizon: Agent-to-Agent (A2A) Protocol
MCP solves the data connection problem, but it does not solve the agent collaboration problem. Recognising this, Google Cloud launched the Agent2Agent (A2A) Protocol in April 2025. While MCP connects an agent to a tool (your data platform), A2A provides a standard for agents to communicate with each other.

Is A2A a requirement today? Not in the same urgent way as MCP. As Dwarak Rajagopal of Snowflake notes, most enterprises are still focused on building individual, capable agents. However, the direction of travel is clear.
The future of enterprise AI is not a single, monolithic agent but a swarm of specialised agents working in concert: a "sales agent" negotiating a deal by consulting a "pricing agent" and a "legal compliance agent."
The vendors who are future-proofing their strategies, including AWS, Databricks, Microsoft, and Snowflake, are already supporting both MCP and A2A.
For you, this means your platform must not only be queryable by agents but must also be capable of hosting or spawning its own specialised agents that can participate in these multi-agent workflows. Your data platform should be an active participant in the agentic ecosystem, not a passive data source.
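The delegation pattern, though not the A2A wire format itself, can be sketched in a few lines. The agent names, tasks, and policies below are invented purely to show one agent consulting specialists it does not own.

```python
# Not the A2A wire format; an illustration of the delegation pattern
# where a sales agent consults pricing and compliance specialists.
class PricingAgent:
    def handle(self, task: dict) -> dict:
        discount = 0.10 if task["deal_size"] > 100_000 else 0.0
        return {"max_discount": discount}

class ComplianceAgent:
    def handle(self, task: dict) -> dict:
        return {"approved": task["region"] != "embargoed"}

class SalesAgent:
    def __init__(self, peers: dict):
        self.peers = peers  # directory of reachable peer agents

    def negotiate(self, deal: dict) -> dict:
        price = self.peers["pricing"].handle(deal)
        legal = self.peers["compliance"].handle(deal)
        return {"proceed": legal["approved"],
                "discount": price["max_discount"]}

sales = SalesAgent({"pricing": PricingAgent(),
                    "compliance": ComplianceAgent()})
result = sales.negotiate({"deal_size": 250_000, "region": "EU"})
print(result)
```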

3. The Critical Gap: Governance and Security
This is the most significant challenge and opportunity for MCP and for you. The protocol, in its current form, excels at connectivity but is agnostic to the critical enterprise functions of governance, security, and semantics.

The Identity and Policy Gap: As Michael Ni points out, MCP "still lacks the depth of identity support, policy enforcement and ... the shared semantics enterprises need." If an AI agent requests sensitive customer PII through your MCP server, how do you, in a standardised way, authenticate the agent's authority to act on behalf of a user?
How do you enforce fine-grained access policies (e.g., "this agent can only see European customer data") at the protocol level? Currently, this is pushed back to the vendor to implement in a non-standard way, creating fragility.
The Security Vulnerability: Sumeet Agrawal highlights a tangible threat: "One key opportunity for MCP to improve is on security." The potential for malicious actors to poison an MCP server or manipulate the tool-calling process to feed an agent incorrect data is a real risk.
Without a standardised way to authenticate tool endpoints and authorise actions, the entire retrieval-augmented generation (RAG) pipeline is vulnerable.
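One way to close part of this gap today, pending standardisation, is to sign tool manifests so agents can detect a poisoned or substituted server. The sketch below uses an HMAC over the manifest; the secret, tool name, and endpoints are placeholders, and a production design would add key rotation and per-tenant keys.

```python
import hashlib
import hmac
import json

# Placeholder secret; in production this comes from key infrastructure.
SECRET = b"rotate-me-in-production"

def sign_manifest(manifest: dict) -> str:
    """HMAC the canonical JSON form of a tool manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, signature: str) -> bool:
    """Constant-time check that a manifest has not been tampered with."""
    return hmac.compare_digest(sign_manifest(manifest), signature)

manifest = {"name": "check_IT_spend",
            "endpoint": "https://mcp.example.com"}
sig = sign_manifest(manifest)

assert verify_manifest(manifest, sig)              # untouched: accepted
manifest["endpoint"] = "https://evil.example.com"  # poisoned server
assert not verify_manifest(manifest, sig)          # tampering detected
```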
This governance gap is your strategic opening. You cannot wait for the MCP standard to evolve. You must build robust, proprietary governance layers on top of your MCP implementation. This includes:
Agent-aware authentication and auditing.
Dynamic data masking and row-level security that is invoked transparently through MCP calls.
Tool-level permissioning that dictates which agents can call which data tools.
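A minimal sketch of such a governance layer, with invented agent IDs, tools, and masking rules, might look like this: every tool call passes through a gate that checks the agent's permissions and masks PII before anything reaches the model.

```python
# Sketch of a vendor-side governance wrapper around MCP tool calls.
# Agent IDs, tools, and the masking rules are invented for illustration.
PERMISSIONS = {
    "sales_agent":   {"get_customer_data"},
    "billing_agent": {"check_IT_spend"},
}

def mask_pii(record: dict) -> dict:
    """Dynamic masking applied transparently to every tool result."""
    return {k: ("***" if k in {"email", "phone"} else v)
            for k, v in record.items()}

def get_customer_data(customer_id: int) -> dict:
    return {"id": customer_id, "name": "Acme GmbH",
            "email": "ops@acme.example", "region": "EU"}

def governed_call(agent_id: str, tool: str, **kwargs):
    """Tool-level permissioning: deny, invoke, then mask."""
    if tool not in PERMISSIONS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} may not call {tool}")
    result = globals()[tool](**kwargs)
    return mask_pii(result) if isinstance(result, dict) else result

record = governed_call("sales_agent", "get_customer_data", customer_id=7)
print(record["email"])  # the agent never sees the raw address
```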
By offering the most secure, governed, and policy-driven MCP implementation, you can transition from a vendor who merely supports the standard to one who defines its enterprise-grade future.
V. Conclusion and Strategic Recommendations
The introduction of the Model Context Protocol marks a watershed moment. It has fundamentally redefined the relationship between data and AI, establishing a new and non-negotiable requirement for your platform.

Your customers are no longer just people; they are increasingly AI agents, and you must build your products to serve them.
To navigate this new landscape, your strategy must be aggressive and forward-looking:
Treat MCP as Core Infrastructure, Not a Feature. Integrate it deeply into your product's architecture. Your MCP support should be a first-class citizen, with the same level of performance, reliability, and monitoring as your core query engines.
Advance Your MCP Maturity. Do not stop at basic connectivity. Publicly roadmap your progression to advanced tooling and unstructured data support. Show your customers you are committed to leading, not just complying.
Embrace the Multi-Agent Future. Begin experimenting with and planning for A2A support. Explore how your platform can not only serve data to agents but also host specialised data management agents that can be orchestrated within larger workflows.
Differentiate Through Governance. Invest heavily in building the most secure and policy-rich governance layer for your MCP implementation. In a world increasingly worried about AI risks, "Secure MCP" will be a powerful market message. Solve the identity, policy, and security challenges that the base protocol has yet to fully address.
The message is clear: your data platform's relevance in the age of AI is contingent on its connectedness. MCP is the language of that connection. If you are not fluent, you are not in the conversation. The time for observation is over; the time for strategic, decisive action is now.

Warm regards,
Shen and Team