Engineering teams using AI coding agents (Claude Code, Copilot, Cursor, etc.) repeatedly hit the same gotchas and environment-specific issues, wasting tokens and developer time re-discovering solutions.
A hosted SaaS platform where AI agents automatically capture and share verified solutions ('knowledge units') across an org, with human-in-the-loop review, RBAC, audit logs, and SSO — solving the trust/governance/security problems the open-source PoC explicitly punts on.
Subscription — tiered by seats/agents. Free tier for small teams, paid tiers for SSO, audit, governance, private hosting.
The pain is real but diffuse. Teams DO re-discover solutions and waste tokens, but it's currently tolerated as a cost of doing business with AI agents. It's a 'paper cut times a thousand' problem, not a 'hair on fire' problem. Most eng leaders aren't yet tracking agent-knowledge-loss as a metric. The 221 HN upvotes show interest but not desperation. Pain will intensify as agent usage scales — this is a timing bet.
TAM estimate: ~50K engineering orgs globally with 50-500+ devs, maybe 10-20% heavily using AI agents today (growing to 60%+ in 2-3 years). At $20-50/seat/month and 100 seats avg, that's $100-250M near-term SAM, growing to $500M-1B as agent adoption matures. Not massive today, but the growth trajectory is strong. The 'enterprise AI tooling' budget is expanding rapidly.
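A back-of-envelope check on these figures (a hypothetical helper; all inputs are the assumptions stated above: ~50K orgs, 100 seats average, $20-50/seat/month, 10-20% adoption today). The quoted $100-250M near-term SAM corresponds to the conservative end of the adoption range at $20-40/seat; the full range of assumptions reaches toward the $500M-1B maturity figure.

```python
# Back-of-envelope SAM check. All inputs are the memo's assumptions,
# not measured data.
def annual_sam(orgs, adoption, seats_per_org, price_per_seat_month):
    """Serviceable addressable market in dollars per year."""
    return orgs * adoption * seats_per_org * price_per_seat_month * 12

conservative = annual_sam(50_000, 0.10, 100, 20)  # $120M/yr
aggressive = annual_sam(50_000, 0.20, 100, 50)    # $600M/yr
mature = annual_sam(50_000, 0.60, 100, 30)        # ~$1.08B/yr at 60% adoption
```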
This is the weakest link. Enterprises will pay for SSO/RBAC/audit (table stakes for procurement), but the core value prop — 'your agents share knowledge' — is hard to quantify in dollar terms. The ROI story is saved tokens plus saved developer re-discovery time, but neither is tracked well today, so you'd need to prove measurable time savings in a pilot. You're also competing against free CLAUDE.md files and the 'just write better docs' mindset. Expect 3-6 month enterprise sales cycles.
Core MVP is a knowledge base with a structured API, HITL review UI, and agent integrations. A strong solo dev could build this in 6-8 weeks: vector store for knowledge units, REST/MCP API for agent queries/contributions, simple web UI for review/approval, basic auth. The hard part isn't building it — it's the integrations (Claude Code, Cursor, Copilot all have different extension models) and making the capture-and-query loop seamless enough that agents actually use it without developer intervention.
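The capture→review→query loop described above can be sketched in a few dozen lines. This is a minimal illustration under stated assumptions, not the actual product: all class and method names are hypothetical, and the keyword search stands in for the vector store.

```python
# Minimal sketch of the HITL knowledge loop: agents contribute units,
# a human reviews them, and only approved units are returned by queries.
# Names are hypothetical; a real build would back this with a vector store.
from dataclasses import dataclass, field
import uuid

@dataclass
class KnowledgeUnit:
    problem: str                # the gotcha an agent hit
    solution: str               # the proposed fix
    status: str = "pending"     # pending -> approved/rejected via review UI
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

class KnowledgeStore:
    def __init__(self):
        self._units: dict[str, KnowledgeUnit] = {}

    def contribute(self, problem: str, solution: str) -> str:
        """Agent-facing: submit a unit; it waits in the review queue."""
        unit = KnowledgeUnit(problem, solution)
        self._units[unit.id] = unit
        return unit.id

    def review(self, unit_id: str, approved: bool) -> None:
        """Human-facing: approve or reject a pending unit."""
        self._units[unit_id].status = "approved" if approved else "rejected"

    def query(self, term: str) -> list[KnowledgeUnit]:
        """Agent-facing: naive keyword match over APPROVED units only
        (stand-in for vector similarity search)."""
        return [u for u in self._units.values()
                if u.status == "approved" and term.lower() in u.problem.lower()]
```

The key invariant, per the solution description, is that nothing an agent submits is queryable until a human approves it.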
Nobody owns this space yet. Mozilla CQ validated the concept but explicitly left enterprise features on the table. The native agent memory files (CLAUDE.md etc.) are primitive and per-repo. Pieces/Swimm/Unblocked are adjacent but not purpose-built. There IS a real gap for a managed, enterprise-grade, agent-native knowledge platform. However, the gap exists partly because the market is nascent — first-mover advantage could evaporate if Anthropic, GitHub, or Cursor build this natively.
Strong subscription fit. Knowledge accumulates over time (switching cost increases), seat-based pricing scales with team growth, and governance/audit features are inherently ongoing needs. Usage-based pricing on agent queries could add expansion revenue. The flywheel — more knowledge units → more agent value → more seats — is a good SaaS dynamic.
- +Clear gap between OSS proof-of-concept (Mozilla CQ) and enterprise needs — classic open-core/managed-service play
- +Tailwinds are strong: AI agent adoption is accelerating and enterprise governance demand always follows adoption
- +Network effects within an org: platform gets more valuable as more teams and agents contribute knowledge
- +Built-in switching costs: accumulated verified knowledge units create lock-in
- +The 'picks and shovels' positioning means you benefit regardless of which AI agent wins
- !PLATFORM RISK (critical): Anthropic, GitHub, or Cursor could build org-wide agent memory natively — one product announcement kills your market. Claude's memory features and Copilot's knowledge bases are already moving this direction.
- !Chicken-and-egg problem: platform is only valuable with sufficient knowledge units, but agents won't contribute until the platform is valuable. Cold start is brutal.
- !Enterprise sales cycle mismatch: 3-6 month sales cycles with a product targeting a behavior (agent knowledge sharing) that most orgs haven't formalized yet. You'd be selling a painkiller for a pain they haven't diagnosed.
- !Integration fragility: you need to work with Claude Code, Cursor, Copilot, etc. — each has different extension models, and they can break your integration at any time
- !Proving ROI is hard: 'saved tokens and re-discovery time' is fuzzy. CFOs want concrete numbers.
Open-source 'Stack Overflow for agents' — captures and shares knowledge units that AI coding agents can query. The exact project this idea is based on.
AI-powered developer productivity tool that saves, enriches, and retrieves code snippets and context across IDEs and workflows.
Swimm auto-generates and maintains internal documentation tied to code. Backstage is Spotify's developer portal. Both aim to reduce knowledge silos in engineering orgs.
Built-in per-repo configuration files where teams store instructions, patterns, and gotchas for AI coding agents. Ships free with each respective tool.
AI-powered codebase understanding tools. Unblocked indexes code, docs, Slack, tickets to answer engineering questions. Greptile provides an API for AI to understand codebases.
MCP server + web dashboard targeting Claude Code users specifically (highest power-user density, MCP is a natural integration point). MVP scope: (1) MCP tool that agents call to search/contribute knowledge units, (2) web UI for HITL review — engineers approve/reject/edit agent-submitted solutions before they enter the shared pool, (3) org-level knowledge store (Postgres + vector embeddings), (4) basic API key auth. Skip SSO/RBAC/audit for MVP — sell to a single team first, not the enterprise. Prove the capture→review→reuse loop works before adding governance layers.
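The retrieval step of the org-level store above could look like the sketch below: cosine similarity between a query embedding and stored unit embeddings. The vectors here are toy hand-rolled values for illustration; a real build would call an embedding model and keep vectors in Postgres with pgvector, as the MVP scope suggests.

```python
# Toy similarity ranking over knowledge-unit embeddings (illustrative only;
# production would use an embedding model + pgvector, not hand-made vectors).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, units, k=3):
    """Return the k unit texts most similar to the query embedding.
    `units` is a list of (text, embedding) pairs."""
    scored = [(cosine(query_vec, vec), text) for text, vec in units]
    return [text for _, text in sorted(scored, reverse=True)[:k]]
```

This is the function the MCP search tool would wrap: the agent sends a query, the server embeds it, ranks approved units, and returns the top matches.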
Free for up to 5 users / 1 repo (prove value) → Team tier at $25/seat/month (multi-repo, team sharing, basic analytics) → Enterprise at $50/seat/month (SSO, RBAC, audit logs, private hosting, SLA) → Platform tier with usage-based pricing on agent API calls for orgs with 500+ devs. Upsell path: compliance reporting, knowledge quality analytics, custom integrations.
8-12 weeks to MVP, 4-6 months to first paying customer. The gap is not build time — it's finding a design partner willing to pilot with their agents and proving measurable value. Target a mid-size startup (100-300 devs) already deep on Claude Code, not a Fortune 500. First revenue likely comes from a team-tier sale at $2-5K/month, not enterprise.
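A quick consistency check (hypothetical helper) between the pricing ladder and the first-revenue estimate: at the $25/seat Team tier, the quoted $2-5K/month first sale implies roughly 80-200 paid seats, which fits a 100-300 dev design partner with partial rollout.

```python
# Team-tier monthly revenue at the $25/seat price from the pricing section.
def team_mrr(seats, price_per_seat=25):
    return seats * price_per_seat

low, high = team_mrr(80), team_mrr(200)   # $2,000 and $5,000 per month
```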
- “companies use similar tech stack across all their projects and their engineers solve similar problems over and over again”
- “bigger, complex problems to solve in the future around data privacy, governance”
- “human in the loop (HITL) via a UI in the browser, before they're allowed to appear in queries”