Launching the AI-SEO MCP: 13 tools to get your pages cited by AI assistants

A free MCP server that gives Claude, Cursor, and any MCP agent 13 tools to audit, score, and rewrite pages for AI-citation eligibility.

AI-SEO MCP: a free Model Context Protocol server with 13 tools to get pages cited by AI assistants.

TL;DR. We published @automatelab/ai-seo-mcp, a free MCP server with 13 tools to audit, score, and rewrite pages for AI-citation eligibility. It runs in Claude Desktop, Claude Code, Cursor, and any MCP-compatible agent. No API keys, MIT license. Full reference: the product page.

Classic SEO tools tell you how to rank on Google's blue links. They do not tell you whether ChatGPT will cite your page, whether Perplexity will quote you, or whether your robots.txt accidentally blocks the crawler that feeds Google's AI Overviews. The AI-SEO MCP exists to fill that gap. It gives any MCP-capable agent a structured way to ask: is this page set up to get cited by AI assistants, and if not, what should change?

What it does

The server exposes 13 tools across five families: audit, check, score, llms.txt, and rewrite. Every audit returns the same structured finding shape: severity, category, where on the page, what to change, estimated impact. Scores are deterministic sums of explicit checks, not black-box numbers. Rewriters use MCP sampling, so the host model performs the actual rewrite under the rubric the server supplies, with a prompt-template fallback when the client does not implement sampling.
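To make that shape concrete, here is an illustrative sketch of a single finding. The field names and values are our guess at the structure described above, not the server's exact output schema:

```json
{
  "severity": "high",
  "category": "schema",
  "location": "head",
  "finding": "No FAQPage JSON-LD detected",
  "recommendation": "Add a FAQPage block covering the post's Q&A section",
  "estimatedImpact": "medium"
}
```

Because every tool emits the same shape, an agent can merge findings from audit_page, audit_schema, and check_technical into one ranked fix list.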

It is vendor-agnostic on both ends. It audits citation signals for ChatGPT, Perplexity, Google AI Overviews, Claude with web access, and Microsoft Copilot from the same toolset. And it works with any MCP host, so the same setup runs identically in Claude Desktop, Claude Code, Cursor, Cline, and Continue.

Tool surface

  • audit_page: composite eight-dimension AI-SEO audit returning a 0-100 score and a ranked fix list.
  • audit_schema: validates JSON-LD against Schema.org rules and AI-citation best practice.
  • audit_canonical: checks canonical link integrity, trailing-slash hygiene, and og:url consistency.
  • check_robots: parses robots.txt and reports per-crawler allow/disallow for 10+ AI crawlers, including the GPTBot vs OAI-SearchBot split that trips most sites up.
  • check_sitemap: validates XML sitemap presence, URL count, lastmod freshness, and image/video extensions.
  • check_technical: head-tag audit for canonical, OpenGraph, Twitter Card, hreflang, HTTPS, noindex, and title hygiene.
  • score_ai_overview_eligibility: scores a page's probability of appearing in Google AI Overviews using current correlation factors.
  • score_citation_worthiness: scores how citable a page or text block is across the major answer engines.
  • generate_llms_txt: generates llms.txt (and optionally llms-full.txt) from a domain's sitemap.
  • validate_llms_txt: lints an existing llms.txt for spec compliance and broken links.
  • extract_entities: pulls named entities, sameAs links, and a citation-density score from a page.
  • rewrite_for_aeo and rewrite_for_geo: rewrite content for Answer Engine and Generative Engine Optimization. BLUF (bottom line up front) structure, FAQ format, entity definitions, and the schema additions an AI assistant needs to extract a clean answer.
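The GPTBot vs OAI-SearchBot split that check_robots flags deserves a concrete sketch. OpenAI uses GPTBot for model-training crawls and OAI-SearchBot for ChatGPT search, so a site that blocks GPTBot with a blanket rule can silently opt out of being cited. An illustrative robots.txt that blocks training crawls but stays citable:

```txt
# Disallow model-training crawls
User-agent: GPTBot
Disallow: /

# Allow the crawler that powers ChatGPT search citations
User-agent: OAI-SearchBot
Allow: /
```

check_robots reports this per-crawler, so the distinction shows up explicitly rather than being buried in a generic allow/disallow summary.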

Install

Node 20 or later. The package is on the public npm registry, so the standard MCP install pattern works:

{
  "mcpServers": {
    "ai-seo": {
      "command": "npx",
      "args": ["-y", "@automatelab/ai-seo-mcp"]
    }
  }
}

Add the block to claude_desktop_config.json for Claude Desktop, ~/.cursor/mcp.json for Cursor, or the equivalent file for your host. Restart the host and the 13 tools appear in the MCP panel. No API keys. All five environment variables (USER_AGENT, FETCH_TIMEOUT_MS, MAX_BYTES, RESPECT_ROBOTS, INTER_REQUEST_DELAY_MS) are optional with sensible defaults.
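If you do want to override a default, the standard MCP config env block applies. The variable names are the five listed above; the values here are illustrative:

```json
{
  "mcpServers": {
    "ai-seo": {
      "command": "npx",
      "args": ["-y", "@automatelab/ai-seo-mcp"],
      "env": {
        "USER_AGENT": "my-audit-bot/1.0",
        "FETCH_TIMEOUT_MS": "15000",
        "RESPECT_ROBOTS": "true"
      }
    }
  }
}
```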

Every outbound request goes through one fetch path that respects robots.txt, identifies itself honestly via User-Agent, sleeps between requests to the same host, and caps response size. The server is an auditor, not a scraper.
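Two of those politeness rules are simple enough to sketch as pure functions. This is our illustration of the behavior, not the server's actual code: robots.txt Disallow lines are prefix matches against the request path, and response bodies are truncated at a byte cap.

```typescript
// Illustrative sketch of two pieces of a polite fetch path:
// robots Disallow prefix matching and a response-size cap.
// A real fetch path would also set an honest User-Agent and
// sleep between requests to the same host.

/** True if `path` is not covered by any Disallow prefix rule.
 *  An empty Disallow line means "allow everything". */
function isPathAllowed(disallowRules: string[], path: string): boolean {
  return !disallowRules.some(
    (rule) => rule.length > 0 && path.startsWith(rule)
  );
}

/** Truncate a response body to at most `maxBytes` bytes. */
function capBytes(body: Uint8Array, maxBytes: number): Uint8Array {
  return body.length <= maxBytes ? body : body.slice(0, maxBytes);
}
```

The cap matters in practice: without it, a single multi-megabyte page could stall an audit that should take seconds.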

Example

Ask Claude: "Run an AI-SEO audit on https://example.com/my-post and tell me the top three things to fix." The agent calls audit_page, gets back an eight-dimension score with prioritized findings, and reports them in order of impact. If the top finding is a missing FAQPage schema, the agent can then call rewrite_for_aeo on a passage and return a citation-ready answer block with the JSON-LD wrapper already applied. The whole loop runs against any public URL, with no API keys at any step.

We use the same loop on every post we publish, against our own staging URLs, before anything ships.