What is the Model Context Protocol (MCP)? A 2026 primer
TL;DR: The Model Context Protocol (MCP) is an open protocol, introduced by Anthropic in November 2024, that lets large language models connect to external tools and data through a single JSON-RPC 2.0 interface.
Instead of writing a custom connector for every model-and-tool pair, you build one MCP server per tool, and any MCP-compatible host - Claude Desktop, Cursor, VS Code, n8n, Zapier - can use it.
For automation builders, MCP is the on-demand counterpart to event-driven automation. Zapier and Make trigger workflows when something happens. MCP gives the AI hands to call workflows when it decides to.
How MCP works
MCP uses JSON-RPC 2.0 over a stateful connection. The current specification (2025-11-25) defines three roles:
- Hosts - the LLM application the user interacts with. Claude Desktop, Cursor, ChatGPT, Claude Code.
- Clients - the connectors embedded in the host; each client maintains a dedicated 1:1 connection to a single server.
- Servers - the small services that expose capabilities. One server per tool: a GitHub server, a Slack server, a Postgres server, a filesystem server.
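On the wire, every interaction is a JSON-RPC 2.0 message. A minimal sketch of a tool invocation and its reply - the method name `tools/call` comes from the spec, but the tool name and arguments below are made up for illustration:

```python
import json

# JSON-RPC 2.0 request a client sends to invoke a server tool.
# "tools/call" is the method defined by the MCP spec; "create_issue"
# and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_issue",
        "arguments": {"title": "Fix login bug", "repo": "acme/web"},
    },
}

# A successful response echoes the same id and carries a result payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Created issue #42"}],
        "isError": False,
    },
}

wire = json.dumps(request)           # what actually crosses the transport
print(json.loads(wire)["method"])    # -> tools/call
```

The transport underneath varies (stdio for local servers, HTTP for remote ones), but the message shapes stay the same.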

Each server exposes some combination of three primitives:
- Resources - read-only context: a file, a database row, a Notion page.
- Prompts - templated, reusable prompts the user can pick from.
- Tools - functions the model can call. "Create a Linear issue", "send a Slack message", "run this SQL query".
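During discovery, a server advertises each tool with a name, a description, and a JSON Schema for its inputs; the host relays these to the model so it knows what it can call and with what arguments. A hypothetical entry from a `tools/list` response - the field names follow the spec, the values are illustrative:

```python
# One entry a server might return from the MCP "tools/list" method.
# inputSchema is plain JSON Schema; the model reads the name and
# description to decide when to call the tool.
tool = {
    "name": "run_query",
    "description": "Run a read-only SQL query against the analytics DB.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "SQL to execute"},
        },
        "required": ["sql"],
    },
}

print(tool["inputSchema"]["required"])  # -> ['sql']
```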
The host is responsible for asking the user before invoking any tool, and for not transmitting resource data without consent. MCP tools can run arbitrary code, so the spec is explicit that consent and authorization are the host's job, not the protocol's.
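What that host-side gate looks like can be sketched in a few lines - everything here is hypothetical (the spec mandates the behaviour, not the code), with the prompt and the tool-calling function injected so the sketch runs non-interactively:

```python
def invoke_with_consent(tool_name, arguments, call_tool, ask_user=input):
    """Host-side gate: never send tools/call without explicit user approval.

    `call_tool` stands in for however the host actually dispatches the
    JSON-RPC request; `ask_user` defaults to a terminal prompt.
    """
    answer = ask_user(f"Allow tool '{tool_name}' with {arguments}? [y/N] ")
    if answer.strip().lower() != "y":
        return {"isError": True,
                "content": [{"type": "text", "text": "User declined"}]}
    return call_tool(tool_name, arguments)

# Demo with stubbed collaborators so this runs without stdin:
denied = invoke_with_consent(
    "run_query", {"sql": "SELECT 1"},
    call_tool=lambda name, args: {"isError": False, "content": []},
    ask_user=lambda prompt: "n",
)
print(denied["isError"])  # -> True
```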
Why MCP exists: the M-by-N problem
Before MCP, every LLM application that wanted to talk to a third-party tool needed a custom integration: ChatGPT plus Slack, Claude plus Notion, Cursor plus GitHub, and so on for every pair. With M models and N tools, that's M times N integrations. Each one re-implements authentication, error handling, and capability discovery.
MCP collapses M-by-N into M-plus-N. One MCP server per tool serves every MCP-compatible host. The protocol takes inspiration from the Language Server Protocol, which did the same thing for code editors and programming languages a decade ago. Once Microsoft, JetBrains, and Vim all spoke LSP, every language got first-class support in every editor for free.
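The arithmetic behind the collapse is easy to check. With illustrative counts (these are not a market survey), pairwise connectors grow multiplicatively while MCP components grow additively:

```python
hosts, tools = 6, 50  # illustrative counts only

pairwise_connectors = hosts * tools  # one bespoke integration per model-tool pair
mcp_components = hosts + tools       # one client side per host, one server per tool

print(pairwise_connectors)  # -> 300
print(mcp_components)       # -> 56
```

Every new host or tool then adds one component instead of a whole new row or column of integrations.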
The same dynamic is now playing out across AI hosts. As of early 2026, over 500 public MCP servers exist - covering GitHub, Slack, PostgreSQL, Stripe, Figma, Docker, Kubernetes, and most of the long tail. Anthropic, OpenAI, and Google DeepMind all support the protocol; OpenAI added MCP to ChatGPT in September 2025.
MCP vs Zapier, Make, and n8n
This is where most explainers get muddled. Zapier and MCP are not competing protocols - they solve different problems.
| Tool | Trigger model | Best for |
|---|---|---|
| Zapier / Make | Event-driven (a thing happens, the workflow runs) | "When a Stripe payment lands, post in Slack and create a Linear issue" |
| n8n | Event-driven, with self-hosting and code branches | Same as above, plus custom logic and on-prem data |
| MCP | On-demand (the AI calls the tool when needed) | "Look up the latest Stripe payments and draft a refund email" - in chat, by request |

The two approaches stack rather than replace. A real automation often uses both: a Zapier trigger fires on the payment event, while an MCP server lets the support agent query Stripe ad-hoc when a user asks a question. Zapier even ships Zapier MCP, which exposes its 9,000-app catalog to AI hosts on demand.
How automation builders use MCP today
You don't need to write code to start. Three concrete entry points:
- n8n MCP nodes. A self-hosted n8n install ships an MCP Client node (calls external MCP servers) and an MCP Server Trigger node (exposes one of your n8n workflows as an MCP tool). The latter is the interesting one: any workflow you have already built becomes available to any AI host that speaks MCP.
- Zapier MCP. If you already have Zaps wired up to your apps, Zapier MCP lets the AI use those connections without re-building auth.
- Local servers in Claude Desktop or Cursor. The fastest "hello world" is a filesystem MCP server pointed at a local folder. From there the model can read your files, summarise them, and edit them in place. The next step up is an MCP server for a real tool like GitHub or Postgres.
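The filesystem "hello world" is a few lines of host configuration. A sketch of a `claude_desktop_config.json` entry, assuming Node.js is installed, the official filesystem server package, and a local path of your choosing:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents/notes"
      ]
    }
  }
}
```

On restart, the host launches the server over stdio and the folder's contents become available as resources and tools.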
For agent builders working with frameworks like LangChain, MCP is a cleaner alternative to writing one-off tool wrappers: the same server works across every agent and host, so the wrapper code is written once instead of per framework. Anyone who has watched a LangChain agent burn through its iteration cap juggling near-duplicate tools knows that tool sprawl is half the cost; MCP standardises the surface.
Is MCP the future of integrations?
It is plausibly the future of AI-to-tool integrations specifically. The 2026 roadmap from the protocol team focuses on transport scalability, agent-to-agent communication, and enterprise governance - the gaps that today still push some teams to build custom adapters. None of those problems are unsolvable; they are the same problems LSP worked through between 2016 and 2020.
For event-driven business automation, Zapier, Make, and n8n are not going anywhere. The interesting design decision in 2026 is no longer "Zapier or n8n" but "where in the stack should the AI sit, and how does it call the tools you already have?" MCP is the answer the industry is converging on.
FAQ
Who created MCP and when?
Anthropic announced MCP in November 2024 as an open protocol. The current specification is dated 2025-11-25, and the reference implementation lives on GitHub.
What is the difference between MCP and a regular API?
An API is the underlying interface a service exposes. MCP is a thin standard layer on top: it tells AI hosts how to discover the service's capabilities, how to authenticate, and how to call those capabilities consistently. An MCP server for GitHub still calls the GitHub REST API underneath; it just packages the calls in a way every MCP-compatible model can use without bespoke code.
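To make the layering concrete, here is a hedged sketch of what an MCP tool handler does: it receives a schema-validated call and delegates to the ordinary REST API underneath. The tool name and result shape follow the pattern above, but this is stdlib-only illustration, not any real server's code - a production server would use an MCP SDK and proper auth handling:

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"  # the ordinary REST API underneath

def handle_tool_call(name, arguments, token):
    """Dispatch one MCP tools/call to the REST endpoint it wraps.

    The MCP layer adds nothing to the HTTP call itself; it only
    standardises how the tool is discovered and invoked.
    """
    if name != "create_issue":
        return {"isError": True,
                "content": [{"type": "text", "text": f"unknown tool: {name}"}]}

    req = urllib.request.Request(
        f"{GITHUB_API}/repos/{arguments['repo']}/issues",
        data=json.dumps({"title": arguments["title"]}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # plain REST call underneath
        issue = json.load(resp)
    return {"isError": False,
            "content": [{"type": "text",
                         "text": f"Created issue #{issue['number']}"}]}
```

Strip away the MCP envelope and what remains is the same `POST /repos/{owner}/{repo}/issues` call any script would make.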
Is MCP the same as Zapier?
No. Zapier is event-driven workflow automation - a Zap fires when a trigger happens. MCP is on-demand tool access for AI - the model calls a tool when it decides one is needed. The two stack: Zapier MCP exposes Zapier's app catalog to MCP hosts, so an AI can use any of the 9,000 apps Zapier already supports.
Do I need to write code to use MCP with n8n?
No. n8n's MCP Client and MCP Server Trigger nodes are configured visually, like any other n8n node. You point the client at an MCP server URL, or you wrap an existing workflow with the trigger node and copy the resulting MCP endpoint into Claude Desktop or another host.
How is MCP different from ChatGPT plugins or function calling?
Function calling is a model feature - a way for one model to invoke a function the developer defined in the same application. MCP is a protocol between separate processes, so the same MCP server works across Claude, ChatGPT, Cursor, and any other host without rewriting wrappers. ChatGPT plugins were OpenAI-specific and were largely superseded; OpenAI now supports MCP directly.
Is MCP secure?
Security is the host's responsibility, not the protocol's. The spec requires hosts to obtain explicit user consent before sharing data with a server or invoking a tool, and to treat tool descriptions as untrusted unless the server is itself trusted. In practice that means: only install MCP servers from sources you trust, and treat every tool invocation as a potential side effect.