# AI-SEO MCP
A free Model Context Protocol server that gives Claude, Cursor, and any MCP-compatible agent 13 tools to audit, score, and rewrite pages for AI-citation eligibility. Built for content teams who want their pages to show up inside ChatGPT, Perplexity, Google AI Overviews, and Claude with web access, not just on page one of Google.
```bash
npx -y @automatelab/ai-seo-mcp
```
## What it gives your agent
Thirteen tools, grouped into six families. Every tool returns the same structured finding shape: severity, category, where on the page, what to change, estimated impact. Auditors get scores plus a prioritized fix list. Rewriters use MCP sampling on the host model, with a graceful fallback to prompt templates when sampling is not available.
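As a sketch, that finding shape looks something like this (field names are inferred from the description above, not copied from the package's type definitions):

```typescript
// Illustrative sketch of the shared finding shape; the real field
// names may differ - check the tool output for exact keys.
interface Finding {
  severity: "critical" | "warning" | "info"; // hypothetical severity levels
  category: string;        // e.g. "schema", "robots", "freshness"
  location: string;        // where on the page the issue sits
  fix: string;             // what to change
  estimatedImpact: number; // rough effect on the 0-100 score
}
```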
| Tool | Family | Purpose |
|---|---|---|
| `audit_page` | audit | Composite AI-SEO audit. Rolls eight dimensions (schema, technical, structure, robots, freshness, authority, entity density, sitemap) into a 0-100 score plus a ranked fix list. |
| `audit_schema` | audit | Validates JSON-LD against Schema.org rules and AI-citation best practice. Flags deprecated patterns. |
| `audit_canonical` | audit | Checks canonical link integrity, trailing-slash hygiene, and og:url consistency. |
| `check_robots` | check | Parses robots.txt and reports per-crawler allow/disallow for 10+ AI crawlers. Surfaces the GPTBot-blocked-but-OAI-SearchBot-allowed trap. |
| `check_sitemap` | check | Validates XML sitemaps: presence, URL count, lastmod freshness, image/video extensions. |
| `check_technical` | check | Audits `<head>` tags: canonical, OpenGraph, Twitter Card, hreflang, HTTPS, noindex, title hygiene. |
| `score_ai_overview_eligibility` | score | Scores a page's probability of appearing in Google AI Overviews using current correlation factors. |
| `score_citation_worthiness` | score | Scores how citable a page or text block is for Perplexity, ChatGPT, Google AI Overviews, and Claude. |
| `generate_llms_txt` | llms.txt | Generates llms.txt (and optionally llms-full.txt) from a domain's sitemap. |
| `validate_llms_txt` | llms.txt | Lints an existing llms.txt for spec compliance and broken links. |
| `extract_entities` | entities | Extracts named entities, sameAs links, and a citation-density score from a page. |
| `rewrite_for_aeo` | rewrite | Rewrites content for Answer Engine Optimization: BLUF structure, FAQ format, schema additions. |
| `rewrite_for_geo` | rewrite | Rewrites content for Generative Engine Optimization: entity definitions, comparison tables, synthesis-ready structure. |
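Once connected, a host agent invokes any of these through the standard MCP `tools/call` request; for example, for `audit_page` (the `url` argument name is an assumption - check the tool's declared input schema):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "audit_page",
    "arguments": { "url": "https://example.com/my-post" }
  }
}
```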
## What makes it different
### Built for AI search, not classic SEO
Lighthouse will not flag missing FAQPage schema. Search Console will not tell you GPTBot is allowed but OAI-SearchBot is blocked. Ahrefs will not score citation worthiness. This MCP audits the signals AI assistants actually use to decide who to cite.
### Deterministic rubrics, not opaque scores
Every score is the sum of explicit, published checks. Every finding carries a severity, a category, the exact location on the page, the fix to apply, and an impact estimate. If you do not agree with a finding, the rule is visible and editable.
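Conceptually, every score reduces to something like this minimal sketch (hypothetical checks and weights, not the package's published rule set):

```typescript
// Minimal sketch of a deterministic rubric: every check is an
// explicit, inspectable rule with a fixed weight. Rules are hypothetical.
type Check = { id: string; weight: number; passes: (html: string) => boolean };

const checks: Check[] = [
  { id: "has-faq-schema", weight: 10, passes: (h) => h.includes('"FAQPage"') },
  { id: "has-canonical", weight: 5, passes: (h) => h.includes('rel="canonical"') },
  // ...more published checks
];

function score(html: string): number {
  const max = checks.reduce((sum, c) => sum + c.weight, 0);
  const earned = checks
    .filter((c) => c.passes(html))
    .reduce((sum, c) => sum + c.weight, 0);
  return Math.round((earned / max) * 100); // 0-100, fully reproducible
}
```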
### Vendor-agnostic, no API keys
Audits ChatGPT, Perplexity, Google AI Overviews, Claude, and Microsoft Copilot citation signals from the same toolset. No registration. No paid-tier gates. Works the day you install it.
### Polite by default
Every request goes through one fetch path that respects robots.txt, identifies itself honestly, sleeps between requests to the same host, and caps response size. The MCP is an auditor, not a scraper.
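In outline, that fetch path behaves like the following sketch (helper names, the user-agent string, and the one-second delay are assumptions; the real values come from the env vars described under Install):

```typescript
// Sketch of a single polite fetch path. UA string, delay, and byte
// cap are illustrative; the real ones come from env vars.
const UA = "ai-seo-mcp/1.0"; // hypothetical user-agent
const lastHit = new Map<string, number>();

async function politeFetch(url: string, maxBytes = 2_000_000): Promise<string> {
  const host = new URL(url).host;
  const since = Date.now() - (lastHit.get(host) ?? 0);
  if (since < 1000) {
    // sleep so the same host is never hit more than once a second
    await new Promise((r) => setTimeout(r, 1000 - since));
  }
  lastHit.set(host, Date.now());
  // (robots.txt check omitted here; the real path refuses disallowed URLs)
  const res = await fetch(url, { headers: { "user-agent": UA } });
  const body = await res.text();
  return body.slice(0, maxBytes); // cap response size (approximate: chars, not bytes)
}
```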
### 10+ AI crawlers covered
GPTBot, OAI-SearchBot, ChatGPT-User, ClaudeBot, anthropic-ai, PerplexityBot, Perplexity-User, Google-Extended, Applebot-Extended, Bytespider, Meta-ExternalAgent. Updated as Anthropic, OpenAI, Google, and Perplexity publish new agents.
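The GPTBot/OAI-SearchBot trap mentioned above looks like this in a robots.txt - the training crawler is blocked while the search crawler is still allowed, which sites often set without realizing the two are different agents (illustrative file):

```
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Allow: /
```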
### Rewriters use the host model
`rewrite_for_aeo` and `rewrite_for_geo` use MCP sampling, so the rewrite is done by whichever model your client speaks to (Claude, GPT, Llama). The MCP supplies the rubric and the constraints; your model does the writing.
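On the wire, that is the MCP `sampling/createMessage` request sent from server to client; a simplified example (prompt text abbreviated, field values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      { "role": "user", "content": { "type": "text", "text": "Rewrite this passage in BLUF form: ..." } }
    ],
    "systemPrompt": "Follow the AEO rubric: answer first, evidence second.",
    "maxTokens": 2000
  }
}
```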
## Install in 3 steps

1. Install with npx (Node 20 or later).

   ```bash
   npx -y @automatelab/ai-seo-mcp
   ```

2. Add the server to your MCP host config (`~/.cursor/mcp.json` for Cursor, `claude_desktop_config.json` for Claude Desktop).

   ```json
   {
     "mcpServers": {
       "ai-seo": {
         "command": "npx",
         "args": ["-y", "@automatelab/ai-seo-mcp"]
       }
     }
   }
   ```

   No API keys required. All five env vars (`USER_AGENT`, `FETCH_TIMEOUT_MS`, `MAX_BYTES`, `RESPECT_ROBOTS`, `INTER_REQUEST_DELAY_MS`) are optional with sensible defaults; see the example after this section.

3. Restart your MCP host. The 13 tools appear in the MCP panel, ready to use.
Works with Claude Desktop, Claude Code, Cursor, Cline, Continue, and any MCP-compatible agent harness. Full reference: the GitHub readme.
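For example, to slow requests down and raise the size cap, set the optional env vars in the same config block (values illustrative):

```json
{
  "mcpServers": {
    "ai-seo": {
      "command": "npx",
      "args": ["-y", "@automatelab/ai-seo-mcp"],
      "env": {
        "INTER_REQUEST_DELAY_MS": "2000",
        "MAX_BYTES": "5000000"
      }
    }
  }
}
```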
## Example workflow
Ask Claude: "Run an AI-SEO audit on https://example.com/my-post and tell me the top three
things to fix." Claude calls audit_page, gets back an eight-dimension score with
prioritized findings, and reports them in order of impact. For a "missing FAQPage schema"
finding, Claude can then call rewrite_for_aeo on a passage and return a citation-ready
answer block with the JSON-LD wrapper already applied. The whole loop runs without any
API keys, against any public URL.
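The JSON-LD wrapper mentioned above is the standard schema.org FAQPage shape; an abbreviated example of what a citation-ready block looks like:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Answer Engine Optimization?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer Engine Optimization structures content so AI assistants can quote it directly..."
      }
    }
  ]
}
```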
## FAQ
**What is the AI-SEO MCP?**
A free Model Context Protocol server that gives any MCP-compatible agent 13 tools to audit, score, and rewrite pages for AI-citation eligibility.

**Do I need an API key?**
No. There is no registration and no paid tier; the server works the day you install it.

**How is this different from Lighthouse, Search Console, or Ahrefs?**
Those tools audit classic SEO signals. This MCP audits what AI assistants actually read: per-crawler robots.txt (GPTBot vs OAI-SearchBot can have different rules on the same domain), llms.txt presence and validity, sameAs entity links, and the structure generative models extract answers from.

**Does it actually rewrite content, or just suggest changes?**
It rewrites. The rewrite tools (`rewrite_for_aeo` and `rewrite_for_geo`) use MCP sampling so the host model - Claude, GPT, or whichever your client uses - performs the rewrite under the rubric the MCP supplies. If your client does not support sampling yet, the MCP falls back to returning a prompt-template output the agent can run inline.

**Which agents does it work with?**
Claude Desktop, Claude Code, Cursor, Cline, Continue, and any other MCP-compatible agent harness.

**What does it cost?**
Nothing. The server is free, with no API keys and no paid-tier gates.
## Want it wired into your publishing pipeline?
We use the AI-SEO MCP to audit every post we ship. If you want it set up against your CMS, sitemap, or content workflow, or a full AI-SEO audit done end to end on a site you own, we can do that.
Get in touch