Model Context Protocol: The Missing Layer Between AI and Your Apps
Open Spaces is Gun.io’s deep dive into tech through the eyes of our engineering community. This edition comes from community member Tim Kleier—Tech Lead (and veteran backend/full-stack engineer & solutions architect), whose toolkit runs on JavaScript, Ruby on Rails, and PostgreSQL. Tim unpacks Model Context Protocol (MCP), the “USB-C for AI” that lets LLMs like Claude or ChatGPT plug into calendars, CRMs, and codebases to drive real-time, personalized workflows. Whether you’re on a morning run or buried in terminal windows, MCP shows how the next wave of AI will act on your data, not just talk about it.
You’re out on your morning run, earbuds in, enjoying the crisp air. Suddenly, you remember you have a meeting right when you get into the office. You tap your earbuds and say:
“Hey Claude, what’s my first meeting today?”
Claude responds:
“8:30 a.m. with Acme Corp. Would you like a quick brief?”
You reply:
“Yes. Summarize the HubSpot notes and remind me of key meeting objectives.”
While you continue your run, Claude pulls the latest HubSpot notes on Acme Corp, summarizes them, and reads back your key objectives for the meeting.
This seamless interaction isn’t a distant future—it’s the emerging reality enabled by something called Model Context Protocol (MCP).
For a wider view on why that matters, see future of AI assistance.
The Current Challenge with LLMs
Large Language Models (LLMs) like Claude and ChatGPT have transformed how we interact with information. However, they have a significant limitation: they lack direct access to real-time, personalized data. In other words, they are blind to the outside world.
Without structured access to tools and context, their responses can be generic and disconnected from specific user needs. Integrating LLMs with various tools (calendars, CRMs, email clients) often requires custom, brittle solutions. There’s a pressing need for a standardized, secure way to connect LLMs with the diverse data sources and applications users rely on daily.
Introducing the Model Context Protocol (MCP)
MCP is an open standard designed to bridge the gap between LLMs and the tools they need to interact with. You can think of MCP as the USB-C of AI applications—a universal connector that allows LLMs to interface with various systems seamlessly.
Among MCP’s key features: it is a single open specification that any model or tool vendor can implement; tools are discovered at runtime, so an LLM can learn what a server offers without hard-coded integrations; and the model stays cleanly separated from the systems it acts on.
How MCP Works
MCP operates on a client-server architecture with three key components: hosts (LLM applications, such as Claude, that initiate connections), clients (connectors inside the host that maintain one-to-one sessions with servers), and servers (lightweight programs that expose tools and data from a particular system).
The diagram below shows a sample architecture of MCP interactions from Claude to our CRM (HubSpot) and calendar (Google).
This interaction between the MCP host and servers is done dynamically, based on tool discovery, where the LLM uses the conversation to choose the right tools for the job.
Obviously, each of these tools needs input, like a calendar ID for retrieving events or a HubSpot account ID, but that’s a little too in-depth for this introductory article. The point is that MCP is a powerful architecture that simplifies interaction with the functional capabilities of other systems.
Real-World Applications
With MCP, the possibilities for enhancing AI interactions are vast: morning meeting briefs assembled from your calendar and CRM, assistants that answer questions from your codebase, and any workflow where an LLM needs to act on live, personalized data rather than just talk about it.
Who Builds What?
Rolling out MCP can involve backend engineers, DevOps, and AI specialists. Deciding whether you need an ML engineer or a strong backend dev? See AI/ML engineer vs. backend developer for a quick framework.
What’s Next in This Series
This article is the first in a series exploring the practical applications of MCP. Stay tuned as we delve deeper, with code examples, best practices, and insights to help you harness the full potential of MCP in your projects.