Agent-to-Agent Protocols and Why Interoperability Will Define the Next Billion-Dollar AI Exits

There's no denying the artificial intelligence sphere is ripe for investment, but certain areas of AI are capturing more attention—and more capital—than others. Companies that define infrastructure standards like Model Context Protocol (MCP) and Agent-to-Agent Protocol (A2A) typically command higher valuations, alongside cybersecurity and data-moat applications.

Higher Valuations for AI Infrastructure Standards

Major players like Anthropic and OpenAI have completed record-breaking funding rounds, but startups are also raising huge stacks of cash. According to Aventis, in 2024, AI startups raising pre-seed capital had a median pre-money valuation of $3.6 million, while those raising seed capital had a median of $10 million.

Series A rounds for AI startups had a median valuation of $45.7 million, while Series B rounds had a median of $366.5 million. At Series C, the median climbed to $795.2 million.

It's easy to see why. Enterprise automation is being transformed by the rise of agentic AI: models that can act independently, interpreting context and setting goals on their own. To operate effectively, however, agents need a standardized way to communicate, which is where protocols like MCP and A2A come in.

Venture capital firms active in the A2A and MCP space include AIFund, Menlo Ventures, Pioneer Square Labs, Raiven Capital, and Boldstart Ventures, though this is far from a comprehensive list.

MCP and A2A as Connectivity Standards

Milton He Yan, founder of CoreSpeed Inc., described industry standards like MCP and A2A as "the USB-C ports of the agent economy." He explained that without these standards, every device is proprietary, but with them, everything connects.

"Standards like MCP and A2A are foundational," he added. "They solve one of the biggest bottlenecks in AI today—interoperability. Instead of every company building fragile, one-off integrations, MCP and A2A create a universal 'language' for models, tools, and agents. This reduces integration costs, improves reliability, and enables an ecosystem where innovation compounds. Without standards, the AI landscape risks fragmentation; with them, we can build a true agent-native internet."

Building a Foundation for AI

Like CoreSpeed, venture capital firm AIFund describes this as the "USB moment" for these two standards and the Agent Communication Protocol (ACP). The firm calls the three emerging AI infrastructure protocols "the foundational protocols enabling AI agents to communicate effectively."

According to AIFund, agent protocols grew out of large language models' (LLMs) ability to interact with external tools, an advancement that let LLMs go beyond generating text and become systems capable of interacting with a variety of applications, including databases and APIs.

"Early agent workflows involved chaining multiple LLM interactions together, but the industry quickly realized the need for standardized communication methods between agents and external systems," explained AIFund.

How MCP and A2A Could Do for AI What HTTP Did for the Web

Industry standards for AI can also be compared to HTTP's role on the web. According to Yan, HTTP unlocked the web by giving everyone a common language; similarly, MCP and A2A will give agents a universal way to talk, connect, and grow.

"HTTP unified how browsers and servers communicated, and that single standard unlocked the modern web," he added. "MCP and A2A aim to do the same for AI by enabling seamless communication between agents and systems. Once interoperability is solved, developers can focus on creating higher-level applications instead of plumbing. That shift is exactly what allowed the web to scale—and it's what will allow AI agents to scale globally."

Anthropic developed MCP, which connects AI models to data sources, APIs, and external tools. So far, it's been adopted by OpenAI, Google DeepMind, and Microsoft Copilot Studio, according to AIFund. MCP is the first step for any LLM application that requires integration with external tools, enabling natural language to interface with complex systems. Using MCP, agents can discover the capabilities of different tools and use them effectively, much as a web browser uses HTTP to request resources from a server.
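For a sense of what this looks like on the wire, below is a minimal sketch of the JSON-RPC 2.0 messages MCP defines for tool discovery and invocation. The method names and message shapes follow the published spec, but the get_weather tool and its schema are invented for illustration.

```python
import json

# MCP runs over JSON-RPC 2.0. A client first asks a server which tools
# it exposes, then invokes one by name. The "get_weather" tool and its
# schema are hypothetical, for illustration only.

list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server responds with a catalog in which each tool carries a
# machine-readable JSON Schema describing its inputs:
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Return current weather for a city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# Armed with that schema, the model-side client can invoke the tool:
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

print(json.dumps(call_tool_request, indent=2))
```

Because the schema travels with the tool, any MCP-aware model can read the catalog and call the tool correctly without a bespoke integration, which is the interoperability point Yan makes above.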

Google developed A2A, which standardizes communication between AI agents. The protocol supports peer-to-peer collaboration between specialized agents and enables automatic discovery of new agents and their capabilities on a network. A2A also provides a framework through which agents can not only share information but also delegate tasks and even work together to solve complex problems, not unlike the way web applications interact via APIs built on HTTP.
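That discovery step centers on the "Agent Card," a JSON document an agent publishes (conventionally at /.well-known/agent.json) so peers can learn what it does and how to reach it. The sketch below shows a trimmed, illustrative card for a hypothetical translation agent; the endpoint URL and the agent itself are invented, and the field set is a simplified subset of what the A2A specification defines.

```python
import json

# An illustrative A2A Agent Card. A peer fetches this document, matches
# the skill it needs, and then delegates a task to the advertised URL,
# rather than being hard-wired to one specific collaborator.
agent_card = {
    "name": "translator-agent",  # hypothetical agent
    "description": "Translates documents between English and German.",
    "url": "https://agents.example.com/translator",  # hypothetical endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "translate-doc",
            "name": "Document translation",
            "description": "Translate a document while preserving formatting.",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```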

Interoperability

Much attention has focused on individual AI tools like ChatGPT, used one-on-one between a user and a model. However, Yan believes the real power of AI comes not from isolated models but from collaboration. He said interoperability transforms single agents into ecosystems.

"Interoperability is critical because AI won't succeed in silos," Yan explained. "A single agent is powerful, but true value comes when multiple agents, tools, and models collaborate. Interoperability ensures that these interactions are predictable, secure, and composable. It's the difference between isolated apps and a networked ecosystem."

He added that AI interoperability will enable enterprises to reduce vendor lock-in while unlocking compound value. Yan also said that developers will benefit from accelerated cycles of innovation and reduced friction during the building process.

What's Next for MCP and A2A?

Looking forward, Everest Group sees three potential scenarios for the future of agent protocols. In the first, a single standard wins out and unifies agent traffic, much as HTTP unified web traffic.

The second, which the firm sees as most likely, is "layered coexistence," meaning MCP helps move information between AI tools, while A2A enables specialists to trade tasks back and forth. Development tools like CoreSpeed and AutoGen would hide most of the complexity, enabling teams to focus more on the workflow.

The third scenario, according to Everest Group, is a "protocol mesh," in which vendors keep evolving their own dialects, forcing companies to use adapters or gateways to translate messages in real time.

As a fourth possibility, Yan envisions AI interoperability along the lines of the Internet of Things: "an internet of agents," with common protocols at the base, orchestration in the middle, and a thriving marketplace on top.

"The future of interoperability will look like a layered AI ecosystem: standardized protocols at the bottom (MCP, A2A), orchestration and infra platforms in the middle, and a flourishing marketplace of agents on top," he explained. "Much like the internet stack, interoperability will make agents portable, composable, and trusted across environments. In the next few years, we'll see interoperability become the invisible infrastructure that allows AI agents to work together at scale—turning today's isolated experiments into tomorrow's interconnected agent economy."
