Current AI agent frontends like Claude Code support only one level of CLI permission authorization, forcing users to grant absurdly broad access like 'az vm:*' when they only need a few specific subcommands
A permission middleware that sits between AI agents and tools/CLIs/APIs, offering fine-grained, policy-based access control with audit logging and approval workflows
Subscription: free tier for individual devs; paid tiers for teams with audit logs, SSO, and policy templates
The pain is real but currently felt by a narrow audience — power users running AI agents with cloud CLI access. Most devs haven't hit this wall yet because they're not giving agents broad tool access. Pain will intensify dramatically as agent autonomy increases over the next 6-12 months. The 'az vm:*' example resonates strongly with anyone who's tried it.
TAM is currently small but growing fast. ~500K developers are actively using AI coding agents with tool access today, with perhaps 10% hitting permission pain points. At $20/seat/mo for teams, near-term SAM is ~$12M/year. But this could 10x in 18 months as agentic AI goes mainstream in enterprise. The ceiling is high; the floor right now is low.
Individual devs will not pay — they'll use workarounds or accept coarse permissions. Enterprise security/compliance teams WILL pay, but the sales cycle is long. The gap between 'annoying' and 'must fix with budget' is the core risk. You need to sell to the CISO or platform engineering lead, not the individual dev. Free tier adoption won't easily convert to paid.
A solo dev can build an MVP proxy/middleware in 4-8 weeks that intercepts MCP tool calls and applies policy rules. The hard parts: supporting the explosion of CLI argument patterns (az, gh, aws all have different structures), keeping up with MCP protocol changes, and handling edge cases in command parsing. The 'last mile' of CLI subcommand parsing is deceptively complex. A basic version is feasible; a production-grade one for enterprise is 3-6 months.
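To make the parsing difficulty concrete, here is a minimal sketch of the naive approach: split a raw command, treat the leading non-flag tokens as the subcommand chain, and glob-match it against a rule. The `depth` heuristic and pattern style are illustrative assumptions, and the sketch already shows the long-tail problem the note warns about: positional arguments (e.g. a resource name right after a subcommand) are indistinguishable from subcommands without per-tool grammars.

```python
import shlex
from fnmatch import fnmatch

def subcommand_path(command: str, depth: int = 3) -> str:
    """Extract the tool + subcommand chain from a raw CLI string,
    stopping at the first flag-like token. A fixed depth is a crude
    heuristic: real CLIs (az, gh, aws) each need their own grammar."""
    path = []
    for tok in shlex.split(command):
        if tok.startswith("-") or len(path) == depth:
            break
        path.append(tok)
    return " ".join(path)

def matches(command: str, pattern: str) -> bool:
    """Glob-match the extracted subcommand path against a rule
    pattern such as 'az vm *' or 'az vm delete'."""
    return fnmatch(subcommand_path(command), pattern)
```

Note that `matches("gh pr merge 42", ...)` only works here because `depth=3` happens to cut off the positional `42`; a production parser can't rely on that.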
This is the strongest dimension. Nobody is purpose-built for this. OPA is too generic. AI security companies focus on prompts, not tools. Agent frontends have primitive permission models. The specific intersection of 'fine-grained CLI/API permission control for AI agents' is genuinely unoccupied. First credible product here owns the category.
Natural subscription model. Policies need ongoing management, audit logs accumulate, team seats scale. Enterprise compliance requirements create lock-in. Per-seat pricing for teams aligns value with usage. High retention once embedded in workflow.
- +Genuinely unoccupied niche — no purpose-built competitor exists yet
- +Riding a massive tailwind (AI agent adoption in enterprise)
- +Clear enterprise upgrade path with audit, SSO, and compliance features
- +MCP is becoming a standard, making integration points well-defined
- +Pain signal is specific, articulable, and worsening over time
- !Anthropic/OpenAI/Cursor could ship native fine-grained permissions in a single release, killing the standalone market overnight
- !Willingness to pay is unproven — individual devs won't pay and enterprise sales takes 6+ months
- !CLI argument parsing across hundreds of tools is a long-tail complexity nightmare
- !MCP protocol is still evolving — building on shifting ground
- !Small current market means slow initial traction; you're betting on timing
Native permission system in Claude Code that allows/denies tool access at the command level
General-purpose policy engine using Rego language. Can theoretically be wired to enforce fine-grained access control on any API or CLI call. Styra offers managed OPA with a UI.
AI security platforms focused on prompt injection defense, data loss prevention, and guardrails for LLM inputs/outputs. Monitor and filter what goes in and out of AI models.
Open-source framework for adding validation and guardrails to LLM outputs. Validators check if outputs meet criteria before being returned to users.
Various open-source MCP server wrappers on GitHub that add basic auth or scoping to MCP tool access. Examples include mcp-proxy projects and gateway patterns emerging in the ecosystem.
MCP proxy server that sits between Claude Code and MCP tool servers. Config file (YAML/JSON) where you define allow/deny rules at the subcommand level (e.g., 'az vm list: allow, az vm delete: deny'). CLI to manage policies. Local audit log of all tool calls with allow/deny decisions. Ship as a single binary or npm package. Target Claude Code users first since they have the most acute pain. Skip UI, skip teams, skip SSO — just nail the single-dev policy engine.
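A minimal sketch of what the decision logic behind that config file could look like. The rule names, the parsed-policy shape, and the deny-over-allow precedence are all illustrative assumptions, not a spec:

```python
from fnmatch import fnmatch

# Hypothetical policy, as it might look after parsing the YAML/JSON
# config file described above.
POLICY = {
    "allow": ["az vm list", "az vm show", "gh pr *"],
    "deny": ["az vm delete", "az group delete"],
    "default": "deny",  # unmatched subcommands fall through to this
}

def decide(subcommand: str) -> str:
    """Return 'allow' or 'deny' for a subcommand path.
    Deny rules win over allow rules; anything unmatched
    gets the default action."""
    if any(fnmatch(subcommand, p) for p in POLICY["deny"]):
        return "deny"
    if any(fnmatch(subcommand, p) for p in POLICY["allow"]):
        return "allow"
    return POLICY["default"]
```

Deny-wins precedence with a deny-by-default fallback keeps the failure mode safe: a missing rule blocks a tool call rather than silently allowing it, which matters when the caller is an autonomous agent.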
Free open-source CLI for individual devs (build community + GitHub stars) -> Pro tier at $15/mo for advanced policies, cloud audit log dashboard, and Slack/email approval workflows -> Team tier at $30/seat/mo for centralized policy management, SSO, and compliance exports -> Enterprise at custom pricing for org-wide deployment, SOC2 audit support, and on-prem option
3-5 months. First 6-8 weeks building open-source MVP and getting community traction. Months 2-4 iterating based on feedback and adding team features. First paying customers likely month 4-5, from early adopters who hit the pain in production and need audit logs for compliance.
- “Most agent frontends only give you one level deep of CLI commands to authorize”
- “It is absurd to grant Claude Code permission to az vm:*”
- “setting granular permissions per tool”
- “complex CLIs like GitHub, Azure, etc. it just doesn't scale well”