# n8n-mcp
An MCP server that gives Claude, Cursor, and any MCP-compatible agent nine tools for n8n. Generate workflow JSON, lint for the silent failures, and - the wedge - diagnose failed executions node by node so the agent stops guessing.
## What it gives your agent

Nine tools, split between stateless (work on pasted JSON, no n8n instance needed) and live (drive a real n8n instance via REST, gated on `N8N_API_URL` + `N8N_API_KEY`).
| Tool | Type | Purpose |
|---|---|---|
| `n8n_generate_workflow` | stateless | Plain-English description in, importable workflow JSON out. Detects AI-Agent intent and emits proper LangChain clusters. |
| `n8n_lint_workflow` | stateless | Catches the silent failures: deprecated node types, AI Agent missing a language model, IF v1 schema, broken connections across all connection types. |
| `n8n_explain_execution` | stateless | Paste a failed execution JSON, get back per-node findings: which nodes returned zero items, which expressions failed to resolve, where the data was dropped. |
| `n8n_scaffold_node` | stateless | Description in, a single `INodeType` TypeScript file out, ready to drop into a custom n8n package. |
| `n8n_list_workflows` | live | Paginate workflows. Filter by active state, tags, or name. |
| `n8n_get_workflow` | live | Fetch a workflow by id for the agent to inspect or modify. |
| `n8n_create_workflow` | live | POST a workflow to your instance. Strips read-only fields automatically. |
| `n8n_activate_workflow` | live | Flip the active state on or off without leaving the agent. |
| `n8n_list_executions` | live | Browse executions; pass `includeData: true` to pipe the full body into `n8n_explain_execution`. |
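For orientation, this is roughly what a stateless call looks like on the wire, using the standard MCP `tools/call` request shape. The argument name `workflowJson` is an illustrative assumption, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "n8n_lint_workflow",
    "arguments": {
      "workflowJson": "{ \"nodes\": [], \"connections\": {} }"
    }
  }
}
```

Your MCP host builds and sends this for you; you never write it by hand in normal use.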
## What makes it different

### Debugging-first, not breadth-first

Other n8n MCP servers index the entire node catalog (20+ tools). This one focuses on failure modes - the diagnoses an LLM cannot make on its own from a workflow dump.
### Per-node execution diagnosis

`n8n_explain_execution` calls out which node returned zero items, which expressions failed to resolve, and which branch the IF/Switch took. It targets the #1 n8n debugging pain point: silent data loss between nodes.
### Opinionated AI-Agent topology

The generator emits proper LangChain clusters - `ai_languageModel` / `ai_memory` / `ai_tool` connections wired upward to the agent, not via `main`. The output imports cleanly on n8n 1.x.
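As a sketch of what "wired upward, not via main" means in n8n 1.x workflow JSON (node names here are illustrative): the chat model connects to the agent over the `ai_languageModel` connection type, while only the trigger uses `main`:

```json
{
  "connections": {
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [{ "node": "AI Agent", "type": "ai_languageModel", "index": 0 }]
      ]
    },
    "When chat message received": {
      "main": [
        [{ "node": "AI Agent", "type": "main", "index": 0 }]
      ]
    }
  }
}
```

A chat model wired over `main` instead imports fine but leaves the agent without a language model - exactly the class of mistake the linter is built to catch.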
### Lint catches the silent killers

`Function` vs `Code`, `spreadsheetFile` vs `convertToFile`, IF v1 schema, AI Agent without a chat model, Webhook missing a `webhookId` - the errors n8n accepts on import and that bite you on first run.
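For example, a node still typed as the deprecated Function node (superseded by the Code node) imports without complaint. A minimal sketch of such a node - the kind of thing the linter flags:

```json
{
  "name": "Transform",
  "type": "n8n-nodes-base.function",
  "typeVersion": 1,
  "position": [460, 300],
  "parameters": {
    "functionCode": "return items;"
  }
}
```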
### Stateless tools work offline

Generate, lint, and explain don't need an n8n instance: paste JSON in chat, get answers. The REST tools switch on when you set the API key.
### Paired Agent Skill

Ships with a `SKILL.md` that teaches the model when to use which tool, plus references split out so they don't bloat the prompt. The skill is portable across MCP-capable harnesses.
Install in 3 steps
-
Install the npm package (requires Node 20+)
npm install -g @automatelab/n8n-mcp -
Add the server to your MCP host config (
~/.cursor/mcp.jsonfor Cursor,claude_desktop_config.jsonfor Claude Desktop){ "mcpServers": { "n8n": { "command": "npx", "args": ["-y", "@automatelab/n8n-mcp"], "env": { "N8N_API_URL": "https://your-n8n.example.com", "N8N_API_KEY": "n8n_..." } } } }Theenvblock is optional - the 4 stateless tools work without it. Get an API key in n8n at Settings → API → Create API key. -
Restart your MCP host. The nine
n8n_*tools appear in the MCP panel.
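Since the `env` block is optional, a stateless-only setup (generate, lint, explain, scaffold - no live instance) reduces to the same server entry without it:

```json
{
  "mcpServers": {
    "n8n": {
      "command": "npx",
      "args": ["-y", "@automatelab/n8n-mcp"]
    }
  }
}
```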
Works with Claude Desktop, Claude Code, Cursor, Cline, and any MCP-compatible agent harness. Full reference: the GitHub readme.
## FAQ

### What does it cost?

### Do I need a live n8n instance?

Not for the four stateless tools. The five live tools are gated on `N8N_API_URL` + `N8N_API_KEY` - set those when you want the agent to drive a real instance.

### Which agents does it work with?

Claude Desktop, Claude Code, Cursor, Cline, and any MCP-compatible agent harness.

### How is this different from czlonkowski/n8n-mcp?

`explain_execution` is the wedge, the generator is opinionated about AI-Agent topology, and the lint encodes the silent failure modes. Different niches; many users will run both.

### What does it not do?
## Want it wired into your n8n setup?

We use n8n-mcp daily to build, lint, and debug our own workflows. If you want it set up and tuned for your stack - or a custom workflow built end to end - we can do that.

**Get in touch**