Indexed knowledge base
Your docs, help-centre articles, FAQs, past tickets, and any internal wiki you point us at, ingested and chunked into a vector store with source-of-truth links preserved.
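As a rough illustration of that ingestion step, here is a minimal chunker sketch in Python. The window sizes, dict shape, and example URL are placeholder assumptions, not our production pipeline:

```python
# Illustrative sketch: split a document into overlapping chunks and keep
# the source URL on every chunk so answers can cite it later.
# Chunk size/overlap and the dict shape are assumptions for this sketch.

def chunk_document(text: str, source_url: str, size: int = 400, overlap: int = 80):
    """Return [{text, source}, ...] using fixed-size overlapping windows."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append({
            "text": text[start:start + size],
            "source": source_url,  # source-of-truth link preserved per chunk
        })
    return chunks

chunks = chunk_document("A" * 1000, "https://docs.example.com/billing")
print(len(chunks), chunks[0]["source"])
```

In production the chunks are embedded and upserted into the vector store with this metadata attached; the point here is only that every chunk carries its source link from day one.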
An AI support agent is a retrieval-augmented chatbot grounded in your own docs, tickets, and product knowledge. We build, evaluate, and deploy yours: it answers customer questions in your tone, deflects the repetitive tier-1 traffic that’s eating your team, and escalates cleanly when it’s out of its depth.
Indicative timeline and starting price. Final scope and quote agreed after the intro call.
Support ticket volumes are growing roughly 14–20% year-on-year across mid-market SaaS while support-team headcount stays flat (Zendesk CX Trends, 2025). The teams below feel it first.
A grounded chat agent built on Claude or frontier OpenAI models. It cites its sources in every answer, refuses rather than hallucinates when the docs don’t cover a question, and escalates to a human with full context.
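A minimal sketch of that answer-or-escalate gate: respond only when retrieval returns chunks above a similarity threshold, otherwise hand off. The threshold, field names, and URLs are made up for illustration, not a real API:

```python
# Hypothetical grounding gate: answer only from chunks that retrieved
# above a similarity threshold; escalate with context when nothing does.

THRESHOLD = 0.75  # assumed cutoff, tuned per project in practice

def answer_or_escalate(question, retrieved):
    """retrieved: list of (score, chunk) pairs from the vector store."""
    grounded = [chunk for score, chunk in retrieved if score >= THRESHOLD]
    if not grounded:
        # No grounding: refuse to answer and pass the question to a human.
        return {"action": "escalate", "reason": "no grounding", "context": question}
    # Grounded: answer, citing the deduplicated source links.
    return {"action": "answer",
            "citations": sorted({chunk["source"] for chunk in grounded})}

hit = (0.9, {"source": "https://docs.example.com/refunds"})
miss = (0.4, {"source": "https://docs.example.com/sso"})
print(answer_or_escalate("How do refunds work?", [hit, miss]))
```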
Embedded chat widget on your site plus an API endpoint your existing helpdesk (Intercom, Zendesk, Help Scout, Front, custom) can call. Slack and WhatsApp on request.
A test set built from your real tickets, automated quality checks on every prompt change, and a small dashboard showing deflection rate, escalation rate, and the questions you’re still missing answers for. Grounded RAG agents typically resolve 40–60% of tier-1 inquiries end-to-end (Intercom State of AI in Customer Service, 2025); the dashboard tracks where you actually land.
One working session with your support and product leads. We pull a sample of recent tickets, map the question categories, and identify the doc gaps that will limit the agent’s ceiling before any code is written.
We stand up the vector store, ingest your sources, and ship a first version of the agent against a real account. You can chat with it by end of week one.
We build a test set from your real tickets, run it against the agent, and tune retrieval, the system prompt, and refusal behaviour until quality clears the bar you set in week one.
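The quality gate in this step can be sketched as follows; `run_agent`, the case format, and the thresholds are hypothetical stand-ins for whatever harness the project uses:

```python
# Sketch of the week-two gate: replay labelled test cases built from real
# tickets and reject a prompt change if accuracy or refusal regresses.
# run_agent is a stand-in for a call to the deployed agent.

def passes_gate(cases, run_agent, min_accuracy=0.85, min_refusal=0.95):
    """Return True only if both answer accuracy and refusal rate clear the bar."""
    answered = [c for c in cases if c["expect"] == "answer"]
    refusals = [c for c in cases if c["expect"] == "refuse"]
    acc = sum(run_agent(c["q"]) == c["gold"] for c in answered) / len(answered)
    ref = sum(run_agent(c["q"]) == "refuse" for c in refusals) / len(refusals)
    return acc >= min_accuracy and ref >= min_refusal

cases = [
    {"q": "How do I reset my password?", "expect": "answer", "gold": "Use the reset link."},
    {"q": "What is your CEO's salary?", "expect": "refuse", "gold": None},
]
fake_agent = {"How do I reset my password?": "Use the reset link.",
              "What is your CEO's salary?": "refuse"}.get
print(passes_gate(cases, fake_agent))
```

Wiring a check like this into CI is what lets prompt changes ship quickly without silently degrading refusal behaviour.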
Embed the widget, wire the API into your helpdesk, ship the dashboard, and walk your team through the runbook. You leave week three with a live agent and a quote for any add-ons.
Indicative timeline. Larger knowledge bases or non-trivial helpdesk integrations can stretch this; we confirm dates after the kickoff session.
Starting from €4,500*, payable 50% on kickoff and 50% on handover. Scope and final price are agreed in writing after the intro call — no surprise add-ons.
Optional retainer from €750/month covers monitoring, content updates, and prompt tuning as your product changes. Cancel any time.
Industry benchmarks put the loaded cost of a human-handled tier-1 ticket between €8 and €25 (Forrester); a working agent typically pays back the build inside the first quarter.
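The payback claim is simple arithmetic. Taking the low end of both quoted ranges (€8 per ticket, 40% deflection) and an assumed volume of 1,000 tier-1 tickets a month:

```python
# Back-of-envelope payback using the figures quoted above as assumptions.
monthly_tickets = 1_000   # hypothetical volume for illustration
deflection = 0.40         # low end of the 40-60% range
cost_per_ticket = 8.0     # low end of the EUR 8-25 range
build_price = 4_500.0     # starting price quoted above

monthly_saving = monthly_tickets * deflection * cost_per_ticket
payback_months = build_price / monthly_saving
print(f"{monthly_saving:.0f} EUR/month saved, payback in {payback_months:.1f} months")
```

Even at the pessimistic end of every range, the build pays back in under two months; higher volumes or deflection only shorten that.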
*Starting price reflects a focused single-product knowledge base and a standard helpdesk integration. Multi-product or custom-stack builds quoted after scoping.
The whole architecture is built to prevent it. Answers are grounded in retrieved chunks from your sources, the system prompt instructs the model to refuse when context is missing, and the eval suite blocks prompt changes that regress refusal behaviour. It’s never zero risk, but it’s a different category from a generic chatbot.
Default is Claude (Anthropic) or frontier OpenAI, picked per project based on your data residency, latency, and cost requirements. You own the model choice and can swap providers post-handover.
In infrastructure you control. Vector store and orchestration deploy in your cloud account (AWS, GCP, Azure) by default; we can also run on managed providers like Pinecone or Supabase if you prefer.
Three numbers: deflection rate (tickets the agent fully resolved), escalation quality (does the human get useful context?), and refusal accuracy (does it know when to say “I don’t know”?). Baseline is captured in week one; we report against it on handover.
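Computed from a flat ticket log, those three numbers look roughly like this; the field names are illustrative, since real helpdesk exports differ:

```python
# Minimal computation of the three handover metrics from a ticket log.
# Field names (resolved_by_agent, outside_docs, ...) are assumptions.

def handover_metrics(log):
    total = len(log)
    escalated = [t for t in log if not t["resolved_by_agent"]]
    uncovered = [t for t in log if t["outside_docs"]]  # docs had no answer
    return {
        "deflection_rate": (total - len(escalated)) / total,
        "escalation_quality": (sum(t["context_attached"] for t in escalated)
                               / len(escalated) if escalated else 1.0),
        "refusal_accuracy": (sum(t["agent_refused"] for t in uncovered)
                             / len(uncovered) if uncovered else 1.0),
    }

log = [
    {"resolved_by_agent": True,  "outside_docs": False, "context_attached": False, "agent_refused": False},
    {"resolved_by_agent": True,  "outside_docs": False, "context_attached": False, "agent_refused": False},
    {"resolved_by_agent": False, "outside_docs": True,  "context_attached": True,  "agent_refused": True},
    {"resolved_by_agent": False, "outside_docs": True,  "context_attached": False, "agent_refused": False},
]
print(handover_metrics(log))
```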
Anything under €50 in model spend during the build is on us. Production usage is billed by the provider directly to your account — no markup, no reseller margin.
Three options: (1) take the repo and run it internally, (2) keep us on retainer for monitoring and content updates, (3) scope a follow-on build (voice, additional channels, multilingual expansion). No pressure to continue.
Tell us about your product, your support volume, and where the docs live. We’ll come back within one business day with a proposed scope and a quote.
Open the contact form