AI's "HTTP Moment" Is Here: Are You Ready?
The next transformative moment in open source AI may already be here — and it’s called the Model Context Protocol (MCP). Developed by Anthropic, MCP is emerging as a foundational communications layer for AI systems, akin to what HTTP did for the web. To be clear, there are other open specifications for AI communication and data sharing. We will probably end up agreeing on several of them, much as we also rely on ICMP, DNS, and ARP for the Internet.
That said, there is no W3C standards body for AI (yet) and MCP is rapidly gathering momentum to fill some of the vacuum. Both OpenAI and Google have integrated it into their AI offerings. Whether MCP can attain the same universal acceptance is less a matter of technology and more a matter of laying the foundation for neutral governance required for standardization and broad adoption.
Pull back the cover on any widely accepted standard or open source technology that stands the test of time, and you will find some form of neutral governance — with a foundation, a consortium, or some other body created for the purpose of keeping it open, fair, and free to use.
Having a rock-solid foundation of standards and community governance is not antithetical to profits. Actually, it is critical for growing the pie and enabling greater profits for a wider array of players. Here’s an example. OpenAI is quickly realizing that its killer revenue generator will be ChatGPT and the application layer. This is why Sam Altman is talking about open sourcing models again and will likely follow through.
Altman sees how DeepSeek R1 sparked innovation and understands how he can benefit from it without sacrificing stickiness. When R1 hit and the DeepSeek consumer app launched in the U.S., it rocketed up to number two on the app download charts and then fell way back — far behind ChatGPT. With the announcement of enhanced memory, OpenAI is building a moat based on a better product experience and the ability to recall user conversations and preferences. In this scenario, OpenAI building on a solid foundation of open source and open standards will likely accelerate the technology innovation of the underlying models.
This is also why a ban on open models or excluding foreign companies from participating in AI governance and contributions will backfire. You can’t block or control models with a mandate once they are in the wild. That will only hurt companies that miss out on the innovation and ability to learn from others. We have already seen how that works out with DeepSeek, which not only built breakthrough models but did so using pioneering techniques that ultimately will benefit all AI companies.
Investors and business leaders have long recognized the strategic value of giving away complementary technologies to strengthen their own market position. Google did this by open-sourcing Kubernetes, effectively neutralizing rivals' infrastructure advantages. Meta is taking a similar approach with its open release of the Llama AI models. These moves allow companies to monetize the areas where they truly differentiate — their moats. In the same vein, open source AI, supported by robust governance and a foundational layer of communication protocols like MCP, could be the ultimate complement to open up the broader business opportunity. Doing so will accelerate AI progress, with massive returns in both innovation and revenue to companies that have defensible moats like OpenAI, Anthropic and Meta.
Here’s the backstory. Last November, Anthropic announced MCP. Initially, it was a way for users of Anthropic’s desktop app to connect Claude to data sources on their local machines. However, the intent was clearly well beyond that. The simple, clear design and easy integration with existing technology standards (JSON, etc.) indicated that MCP was meant to create a universal communications layer between AI models and external sources of information (tools, data, other models).
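To make the idea of a universal communications layer concrete, here is a minimal sketch, in Python with only the standard library, of the kind of JSON-RPC exchange MCP defines between a client and a tool server. The method and field names follow the published MCP specification as I understand it, and the tool itself is hypothetical, so treat the details as illustrative rather than authoritative.

```python
import json

# A client asks an MCP server what tools it offers. MCP messages are
# plain JSON-RPC 2.0; the transport (stdio, HTTP) is pluggable.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server answers with machine-readable tool descriptions, so any
# model that speaks MCP can discover and invoke them.
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_files",  # hypothetical example tool
                "description": "Search local files by keyword.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# The client, acting on the model's behalf, then invokes a tool by name.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_files", "arguments": {"query": "quarterly report"}},
}

for message in (list_tools_request, list_tools_response, call_tool_request):
    print(json.dumps(message, indent=2))
```

Because the envelope is ordinary JSON, almost any runtime can participate, which goes a long way toward explaining how quickly MCP has spread.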
MCP got a huge boost when OpenAI added support for the protocol in late March 2025 and announced it would standardize on MCP. Then on April 9, Google DeepMind leader Demis Hassabis voiced support for MCP and said the DeepMind team looked forward to working on the protocol with the MCP team. Google CEO Sundar Pichai echoed that support, signaling that the third large AI player was on board and giving MCP strong momentum. Lately, folks have been saying MCP is the next HTTP — the communications protocol that governs interoperability of systems connected to the Internet.
Just as AI is moving faster than previous big technology shifts, the move towards creating shared standards and governance must also accelerate to keep up. Artificial intelligence is the third technology “phase change” that I have lived through at the Linux Foundation. The previous two were Linux and open source operating systems, and then containers and Kubernetes. Linux arguably took two decades before it became fully embedded as the OS standard. After Google incubated Kubernetes internally and released it, at least five years passed before we knew it was going to be the standard for container orchestration. AI is a different beast — its adoption and evolution are currently measured in months, not years or decades.
Unlike with Linux and Kubernetes, we already recognize there will not be one single open source model to rule them all. There will be a huge array of many thousands of viable open source models, some building on the back of leading models like Llama 4 and DeepSeek R1, and some built for completely different but just as advanced and remarkably focused use cases. The ARC Foundation’s Evo 2 DNA analysis and generation model is one of those.
Compared to Linux and Kubernetes, the phase shift with AI will likely have an even more profound impact on the way technology works. That’s because intelligence sits much closer to what makes us human, and it offers a higher level of utility than mere access or abstracted calculation that still requires heavy human direction. In particular, agentic AI promises an incredible world where our wishes become the command of systems — booking just the right restaurants for an anniversary, identifying novel pathways for drug research by cross-referencing scientific papers with advanced molecular imaging, or creating self-healing networks that “understand” the context of traffic and prioritize critical communication. Even these examples are simplistic, given we don’t really know what use cases AI will create — just as we never imagined all the potential use cases revealed in other technology phase changes.
To get there, we need an MCP — or something like it — to stitch it all together. But you can’t wave a magic wand and make that happen.
But we need more than MCP. In fact, all of this goes WAY beyond MCP and open source models: the standards, models, and protocols that matter must become predictable and trusted, with neutral and democratic governance.
If you go back to Linux, it was originally embraced by enterprises and technology companies that didn’t want to pay to use expensive proprietary operating systems that shipped new features slowly and were not transparent or open. They adopted Linux as a business strategy to allow them to compete on a more level playing field with Sun and other proprietary vendors, and to create a bigger overall pie. With Kubernetes, Google smartly contributed the container orchestration framework to the Linux Foundation to create a neutral layer for cloud computing and level the playing field. It may have taken some years, but once Google chose the path of open and neutral, Kubernetes thrived, became the most widely adopted orchestration standard, and generated an enormous and still fast-growing ecosystem. For us to all fully benefit from AI, agreeing on MCP is not enough. Just acknowledging that open source AI models have won is only a partial victory.
The real lesson from MCP, then, is this — we have hit an inflection point where it’s clear that we all have to agree on the basic rules of the game, in standards, licenses, and other shared mechanisms for interoperability and cooperation. Not having these rules in place with credible governance means even the best efforts will be duplicative, inefficient, and subject to the whims of powerful players. Open source models might thrive. But assembling them into meaningful stacks for applications and workflows — with transparent licenses that enterprises can get comfortable with — will remain expensive and fraught with risk of technical debt and backwards compatibility until we can all get on the same AI page.
I am not advocating ONLY for MCP. Standards can and should compete on their technical merits, but at the end of the day end users need standards that facilitate integration, and end users and vendors decide which standards matter through their adoption. One complementary project to MCP is AGNTCY, which is being developed and maintained by a collective led by Cisco Systems. Google introduced another open standard on April 9, Agent2Agent (A2A), a protocol that defines rules for how enterprise AI agents interact and communicate, though A2A does not yet enjoy the same widespread backing. Other open source efforts like LangChain and OpenAPI have tried to tackle pieces of this interoperability issue. All are flavors of the same quest — one API to enable the vast majority of use cases for AI communication, data sharing, and interaction. While they may have some overlapping aspects, each also tends to add a new feature or use case, and ultimately, the industry will have to figure out how to come together around these efforts.
However, we will likely end up with only a small set of core communications protocols for AI, just as we have a well defined core set of protocols for the Internet. There will probably also be more specialized communications standards, just as we see MQTT for message buses and gRPC for cloud-native applications. Some developers continue to argue that OpenAPI and building atop existing HTTP calls and conventions is a better bet to make AI integration easier, with less toil for developers and fewer disruptions or failure modes for users.
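To illustrate that argument, here is a hedged sketch of how a capability might be described with a plain OpenAPI operation and turned into an ordinary HTTP request, with no new protocol in the path. The endpoint, spec fragment, and operation are hypothetical; the point is only to show why some developers see existing HTTP conventions as the lower-toil route.

```python
from urllib.parse import urlencode
from urllib.request import Request

# A fragment of a hypothetical OpenAPI description, written as a Python
# dict rather than YAML for brevity.
openapi_fragment = {
    "paths": {
        "/search": {
            "get": {
                "operationId": "searchFiles",
                "parameters": [
                    {
                        "name": "query",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
            }
        }
    }
}

# An agent that already understands OpenAPI can translate that
# description directly into a normal HTTP request.
base_url = "https://example.com/api"  # hypothetical server
params = urlencode({"query": "quarterly report"})
request = Request(f"{base_url}/search?{params}", method="GET")

# The agent would send this with urllib.request.urlopen(); here we just
# show the resulting call.
print(request.get_method(), request.full_url)
```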
What needs to come now is the plumbing and the glue — agreement and consensus on how the wide variety of AI models, AI applications, and AI tooling will connect and interoperate. We also need a way to categorize AI models to simplify adoption and make it easy to understand what is permitted with each model. (We try to provide that with the LF AI & Data Foundation’s Model Openness Framework.)
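As a purely illustrative example of what such categorization could look like in machine-readable form, the sketch below records which components of a model are released and under which licenses; a framework like the Model Openness Framework can then evaluate a record like this. The field names, the model, and the completeness check are my own assumptions, not the actual MOF schema.

```python
# Illustrative only: the field names and the check below are assumptions,
# not the Model Openness Framework's actual schema.
model_record = {
    "model": "example-model-7b",  # hypothetical model name
    "components": {
        "weights": "apache-2.0",
        "inference_code": "mit",
        "training_code": None,        # not released
        "training_data": None,        # not released
        "evaluation_results": "cc-by-4.0",
    },
}

released = [name for name, lic in model_record["components"].items() if lic]
withheld = [name for name, lic in model_record["components"].items() if not lic]

print(f"{model_record['model']}: {len(released)} of "
      f"{len(model_record['components'])} components openly licensed")
print("withheld:", ", ".join(withheld) or "none")
```

A shared, machine-readable record along these lines is what would let enterprises see at a glance what they are permitted to do with each model.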
Right now, we have a fragmented array of proposed standards, largely controlled by single enterprises or small consortiums, with no clear governance process and no clear guarantees of neutrality and continuity. We have a dizzying array of license types and open arguments over what can and cannot be labeled open source AI. (We also have chaos in AI scraping, which is harming a lot of open source projects — a discussion for another day).
To match the success of Linux, Kubernetes, and the Internet, we need to follow their evolution into neutral, democratic communities governed for the good of all, embracing a common set of definitions for the things they care about. The MCP Moment and the momentum behind similar standards are symptoms of pressure building up. This doesn’t mean that putting all the pieces in place will feel fast.
On that note, I’ll leave you with the advice of Kubernetes co-founder Craig McLuckie, who wrote this in a LinkedIn post based on his experience:
“If I had one piece of advice for the folks at Anthropic it would be: ‘there is power in giving away your legos’. Communities start to identify with the technology more than the vendor. Thoughtfully and carefully bring people into the center of the ecosystem and find ways to cede control over time. Anthropic’s behavior to this point would indicate they are indeed very community-centric; so far so good.
If I had one piece of advice for vendors with interest in MCP (and frustration with the pace of community development) it would be: ‘if you want to go fast, go alone. If you want to go far, go together’. Chop wood, carry water and earn your stripes in the community. Be patient.”
MCP is not the endgame — but it may be the starting line. To realize the full promise of agentic, interoperable AI, we must rally around open standards, neutral governance, and shared responsibility. The time to lay the foundation is now.