8.0criticalSTRONG GO

AgentGuard

Security scanner that detects prompt injection vulnerabilities in AI agent workflows before they ship.

SaaS
The Gap

AI agent systems have permissions bolted on as an afterthought, leaving them vulnerable to hidden prompt injections via HTML comments, markdown files, and other data channels agents consume.

Solution

A CI/CD-integrated scanner that audits agent tool permissions, tests for indirect prompt injection vectors (hidden HTML, markdown, user-supplied data), and flags overly broad data access before deployment.

Feasibility Scores
Pain Intensity9/10

This is a hair-on-fire problem for any team shipping AI agents to production. The Reddit thread signals (332 upvotes, 146 comments on 'AI clownpocalypse') show visceral developer anxiety. Indirect prompt injection is a known, demonstrated, and currently unsolved problem. Every major AI agent framework (LangChain, CrewAI, AutoGen, Claude tools) has this vulnerability class. The pain is real, current, and escalating as agents get more permissions. Security teams are actively blocking agent deployments over this.

Market Size7/10

TAM for AI security tools broadly is $7-10B+ by 2028. The agent-specific security niche is smaller today but growing exponentially as agent adoption increases. SAM is every engineering team building agent products — estimated 50K-200K teams globally by 2026. At $200-500/month average, that's $120M-$1.2B SAM. SOM for a startup in year 1-2 is realistically $1-5M ARR. Not a massive standalone market yet, but perfectly sized for a focused startup to own the niche before incumbents wake up.

Willingness to Pay7/10

Security tooling has proven willingness-to-pay (Snyk, Semgrep, etc. built billion-dollar businesses). DevSecOps budgets are real. However, many target users are early-stage AI startups with limited budgets, and the 'shift-left' security mindset for AI is still nascent. Enterprise teams will pay $500-2K/month without blinking, but the market is split between budget-conscious startups and big enterprises. Freemium-to-enterprise is the right play. Score would be 8-9 if the market were more mature.

Technical Feasibility8/10

A solo dev can absolutely build an MVP CLI scanner in 4-8 weeks. Core components: static analysis of agent config files (tool permissions, data access scopes), a library of known indirect injection test payloads (hidden HTML, markdown injection, unicode tricks), and CI/CD integration (GitHub Actions, GitLab CI). The hard part is reducing false positives and keeping up with novel attack vectors, but an MVP with a curated rule set is very buildable. No need for ML initially — rule-based scanning plus known payload testing gets you 80% of value. The founder likely already understands the attack surface deeply.

Competition Gap8/10

This is the key insight: existing players focus on runtime detection (Lakera, PromptArmor) or model-level testing (Garak, Protect AI). NOBODY is doing shift-left, CI/CD-integrated scanning of agent workflow architectures specifically — auditing tool permissions, scanning data pipelines for injection vectors, and catching overly-broad access before deployment. The gap between 'firewall at runtime' and 'scan before you ship' is exactly where Snyk/Semgrep won in traditional AppSec. This is a genuine whitespace.

Recurring Potential9/10

Perfect subscription fit. Agent codebases change constantly, new vulnerabilities emerge weekly, compliance requires continuous monitoring. The 'continuous scanning' model is proven in DevSecOps (Snyk has $300M+ ARR). Every code push needs re-scanning. Compliance reports need monthly regeneration. New attack vectors require rule updates. This is naturally recurring, not a one-time purchase.

Strengths
  • +Genuine whitespace — no one is doing shift-left CI/CD scanning for agent-specific vulnerabilities; existing tools focus on runtime or model-level, not pre-deployment architecture auditing
  • +Proven playbook — this is 'Snyk/Semgrep but for AI agents', a pattern that has created multiple billion-dollar companies in traditional AppSec
  • +Timing is perfect — agent adoption is exploding in 2025-2026 while security tooling lags 12-18 months behind, creating a window for a focused startup
  • +Open-source CLI as wedge creates developer trust and community flywheel, paid SaaS for teams/compliance is a proven conversion funnel
  • +Pain signals are strong and specific — the Reddit thread quotes are textbook problem-awareness signals from technical users who are your exact buyers
Risks
  • !Incumbents could add agent scanning as a feature: Lakera, Snyk, or Semgrep could ship a competing feature in 3-6 months once the market is validated — speed to community adoption is critical
  • !The attack surface is evolving so fast that maintaining scanner accuracy (low false positives, high true positives) requires constant research investment — one bad scan result erodes trust quickly
  • !Market timing risk: if major agent frameworks (LangChain, etc.) build in security-by-default or if a high-profile breach triggers heavy-handed regulation, the market could bifurcate in unpredictable ways
  • !Developer adoption of security tools is notoriously hard — developers resist friction in CI/CD, so the scanner must be fast (<30s) and low-noise or it gets disabled
Competition
Lakera Guard

API-based prompt injection detection and content filtering for LLM applications. Provides a real-time API that sits between user input and the LLM to detect and block prompt injections, data leakage, and toxic content.

Pricing: Free tier (10K API calls/month
Gap: Focused on runtime input filtering, NOT on pre-deployment CI/CD scanning of agent architectures. Does not audit agent tool permissions or scan for indirect injection vectors in data pipelines (HTML comments, markdown files). No static analysis of agent workflow configurations.
Protect AI (NB Defense, Guardian, ModelScan)

Suite of open-source and enterprise tools for ML/AI security including model scanning, LLM vulnerability assessment, and supply chain security for AI artifacts.

Pricing: Open-source tools free, enterprise Guardian platform custom pricing (est. $50K-200K+/year
Gap: Focused heavily on model-level and supply chain security rather than agent-specific workflow vulnerabilities. Does not specifically audit agent tool permissions, indirect prompt injection via data channels, or provide CI/CD-native scanning for agent architectures. Enterprise-heavy, not accessible to small teams.
Robust Intelligence (acquired by Cisco, now part of Cisco AI Defense)

AI firewall and continuous validation platform that tests AI models for vulnerabilities, bias, and security issues. Provides guardrails for LLM applications.

Pricing: Enterprise pricing only (est. $100K+/year as part of Cisco security suite
Gap: Now buried inside Cisco's enterprise security stack — inaccessible to startups and small teams. No developer-first CLI tool, no open-source component, no specific focus on agent workflow architecture auditing or indirect prompt injection via data channels. Slow enterprise sales cycles.
PromptArmor

Security platform focused specifically on detecting prompt injection attacks in LLM applications, including indirect prompt injection detection.

Pricing: Usage-based API pricing, free tier for testing, paid tiers from ~$99/month
Gap: Runtime detection focus rather than shift-left CI/CD integration. Does not provide static analysis of agent tool permission configurations, no automated scanning of agent data pipelines pre-deployment, limited to API-based detection rather than developer workflow integration.
Garak (NVIDIA open-source)

Open-source LLM vulnerability scanner that probes language models for weaknesses including prompt injection, data leakage, hallucination, and toxicity.

Pricing: Free and open-source
Gap: Focused on probing the LLM itself, NOT on auditing the agent architecture around it. Does not scan for indirect injection via HTML/markdown data sources, does not audit tool permission scoping, no awareness of agent workflow topology. It tests the model, not the system. No SaaS dashboard, compliance reports, or team features.
MVP Suggestion

Open-source CLI tool (Python or Go) that: (1) scans agent configuration files for overly-broad tool permissions and flags dangerous access patterns, (2) runs a curated suite of 50-100 indirect prompt injection test payloads (hidden HTML comments, markdown injection, unicode exploits, data-channel attacks) against agent endpoints, (3) outputs a security report with severity ratings and remediation suggestions, (4) integrates as a GitHub Action with a single YAML line. Ship with support for LangChain, CrewAI, and OpenAI Agents SDK configs out of the box. Do NOT build the SaaS dashboard for MVP — just the CLI and GitHub Action.

Monetization Path

Free open-source CLI scanner (developer adoption and community) → Paid Team tier at $99-299/month (team dashboards, historical trends, Slack alerts, policy enforcement) → Enterprise tier at $500-2000/month (compliance reports for SOC2/ISO27001/EU AI Act, SSO, custom policies, API access, SLA) → Scale via partnerships with agent framework vendors (LangChain, CrewAI) and cloud marketplaces (AWS, Azure)

Time to Revenue

8-14 weeks to first dollar. Weeks 1-6: build and ship open-source CLI with GitHub Action. Weeks 6-10: build community, get 500-1000 GitHub stars, publish blog posts on specific agent vulnerabilities found. Weeks 10-14: launch paid Team tier for teams that adopted the CLI. First paying customers likely come from early CLI adopters who need team features or compliance output. Could accelerate to 4-6 weeks if you launch with a waitlist and a compelling demo video showing a real agent exploit being caught.

What people are saying
  • Hidden HTML comments that agents can see but users can't, and the fix still isn't deployed
  • We keep bolting permissions onto these agent systems as an afterthought
  • glaring security concerns and concrete examples of vulnerabilities
  • Let's give it access to all of my personal data and give it the ability to act with it