Oversized PRs (e.g., 93 commits from someone who doesn't maintain the code) waste reviewer time, AI-generated code dumps are illegible, and there's no automated way to enforce PR hygiene standards
GitHub App that scores PRs on size, commit hygiene, author context, AI-generation likelihood, and readability — blocks merge or flags for team leads when thresholds are exceeded
Freemium — free for public repos, $15/user/month for private repos and team dashboards
The pain is real and visceral — the Reddit thread with 828 upvotes proves engineers hate reviewing massive, low-quality PRs. But it's an intermittent annoyance, not a daily crisis for most teams. The pain spikes when it happens, but many teams tolerate it through social norms. AI-generated code dumps are making it worse and more frequent, though.
TAM: ~30M professional developers on GitHub/GitLab, ~5M in orgs large enough to care about PR hygiene (50+ devs). At $15/user/month, theoretical SAM is ~$900M/year. Realistic early market is DevOps-mature teams at mid-market companies — probably 500K-1M seats addressable in years 1-3, so $90-180M realistic SAM.
This is the weak spot. Engineering managers feel this pain but dev tooling budgets are scrutinized. $15/user/month is in the right range but competes with CodeRabbit, Codacy, and other dev tools for budget. Many teams will try to solve this with Danger rules or CI scripts for free. The 'AI detection' angle is the strongest willingness-to-pay driver — that's novel and hard to DIY.
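The DIY alternative mentioned above is real competition because it is genuinely cheap to build. A minimal CI-script gate might look like the sketch below — the `PR_LINES`/`PR_COMMITS` environment variables and the thresholds are hypothetical stand-ins for values a team would pull from their CI provider's event payload:

```python
import os
import sys

# Illustrative limits; a real team would tune these to their codebase.
MAX_LINES = 800
MAX_COMMITS = 30

def check(lines_changed: int, commit_count: int) -> list:
    """Return a list of violations for a crude PR-size gate."""
    problems = []
    if lines_changed > MAX_LINES:
        problems.append(f"PR touches {lines_changed} lines (limit {MAX_LINES})")
    if commit_count > MAX_COMMITS:
        problems.append(f"PR has {commit_count} commits (limit {MAX_COMMITS})")
    return problems

if __name__ == "__main__":
    # PR_LINES / PR_COMMITS are assumed to be injected by the CI pipeline.
    issues = check(int(os.environ.get("PR_LINES", "0")),
                   int(os.environ.get("PR_COMMITS", "0")))
    for msg in issues:
        print(f"::error::{msg}")  # GitHub Actions error-annotation syntax
    sys.exit(1 if issues else 0)
```

Twenty lines of script covers the size-gating piece for free; it's the AI detection, author history, and dashboards that a script can't easily replicate.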
A solo dev can absolutely build an MVP in 4-8 weeks. GitHub App API is well-documented, PR metadata (size, commits, author history) is trivially accessible. Commit hygiene scoring is straightforward heuristics. AI-generation detection is the hardest part — likely needs an LLM classifier, but even simple heuristics (uniform commit messages, no iterative refinement pattern, unusual code style consistency) would work for v1.
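To make the "straightforward heuristics" claim concrete, here is a sketch of what v1 scoring could look like. All thresholds, junk-word lists, and the uniformity signal are illustrative assumptions, not a spec:

```python
from dataclasses import dataclass

@dataclass
class PRMeta:
    additions: int
    deletions: int
    files_changed: int
    commit_messages: list  # list of commit message strings

def size_grade(pr: PRMeta) -> str:
    """Grade PR size A-F from lines changed and files touched (cutoffs are illustrative)."""
    lines = pr.additions + pr.deletions
    if lines <= 100 and pr.files_changed <= 5:
        return "A"
    if lines <= 400:
        return "B"
    if lines <= 1000:
        return "C"
    return "F"

def commit_hygiene_score(pr: PRMeta) -> float:
    """Fraction of commit messages that look intentional (not 'wip', 'fix', or empty)."""
    junk = {"wip", "fix", "fixes", "update", "updates", "."}
    if not pr.commit_messages:
        return 0.0
    good = sum(
        1 for m in pr.commit_messages
        if m.strip() and m.strip().lower() not in junk and len(m.split()) >= 3
    )
    return good / len(pr.commit_messages)

def ai_likelihood_heuristic(pr: PRMeta) -> float:
    """Crude v1 signal: many commits starting with the same word suggests a
    generated batch rather than an iterative human refinement pattern."""
    msgs = [m.strip().lower() for m in pr.commit_messages if m.strip()]
    if len(msgs) < 5:
        return 0.0
    first_words = [m.split()[0] for m in msgs]
    return max(first_words.count(w) for w in set(first_words)) / len(first_words)
```

Each function consumes only metadata already returned by the GitHub pulls and commits endpoints, which is why a solo dev can ship this fast.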
This is the strongest signal. Nobody owns the 'PR-level quality gate' category. Existing tools either do code-level review (CodeRabbit, Codacy) or require manual setup (Danger). The specific combination of PR size gating + AI-detection + author trust scoring + team dashboards does not exist as a product. The AI-generation detection angle is particularly timely and unaddressed.
Textbook SaaS. Once installed on a repo, it runs on every PR forever. Teams won't uninstall it — the value compounds as it builds author history and team baselines. Per-seat pricing scales naturally with team size. GitHub Marketplace makes distribution and billing trivial.
- +Clear competitive gap — no one owns PR-level quality gating as a product category
- +AI-generation detection is a timely, novel hook that creates urgency and press potential
- +Technically feasible MVP in 4-8 weeks using GitHub App APIs and simple heuristics
- +Strong viral mechanics — shows up in every PR, visible to entire team, natural word-of-mouth
- +Natural land-and-expand: install on one repo, spread to org
- !GitHub/GitLab could ship native PR size limits or AI-detection features, killing the category overnight
- !Willingness-to-pay is unproven — teams may cobble together free CI scripts instead of paying $15/seat
- !AI-generation detection accuracy is hard — false positives will erode trust fast, and detection gets harder as AI code improves
- !Selling to engineering managers requires navigating procurement at mid-large companies, lengthening sales cycles
- !Cultural resistance — some teams view PR gating as friction, not value
AI-powered code review bots (e.g., CodeRabbit) that auto-review PRs with inline comments, summaries, and suggestions
Open-source CI plugins (e.g., Danger) that run rules against PRs — e.g., warn if a PR is too large, missing a description, or has no tests
Automated code review platforms (e.g., Codacy) covering code quality, security, duplication, and complexity
Stacked PR workflow tools that encourage smaller, incremental PRs through better UX
Built-in GitHub features for requiring approvals, status checks, and code ownership
GitHub App that runs on every PR and posts a quality scorecard comment: PR size grade (lines changed, files touched, commit count), commit hygiene score (message quality, squash-worthiness), author context (first-time contributor flag, recent activity), and a basic AI-likelihood heuristic. Configurable thresholds that block merge via GitHub status checks. No dashboard yet — just the PR comment and status check. Ship to GitHub Marketplace in 4 weeks.
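The merge-blocking piece uses GitHub's commit-status API (`POST /repos/{owner}/{repo}/statuses/{sha}`), which branch protection can then require. A hedged sketch — the context name, thresholds, and score-folding logic below are assumptions for illustration:

```python
import json
import urllib.request

def scorecard_status(size_grade: str, hygiene_score: float, ai_likelihood: float,
                     max_grade: str = "C", min_hygiene: float = 0.5,
                     max_ai: float = 0.8) -> dict:
    """Fold the three scores into a GitHub commit-status payload.
    Letter grades compare lexically ('A' < 'C' < 'F'), so <= works as a threshold."""
    passed = (size_grade <= max_grade
              and hygiene_score >= min_hygiene
              and ai_likelihood <= max_ai)
    return {
        "state": "success" if passed else "failure",
        "context": "pr-hygiene/scorecard",  # hypothetical status-check name
        "description": (f"size {size_grade}, hygiene {hygiene_score:.0%}, "
                        f"AI-likelihood {ai_likelihood:.0%}"),
    }

def post_status(token: str, owner: str, repo: str, sha: str, payload: dict) -> dict:
    """POST the status to GitHub; the token needs repo (or statuses:write) access."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the status check is a first-class GitHub primitive, "block merge when thresholds are exceeded" is a branch-protection toggle for the customer, not custom enforcement code.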
Free for public repos and up to 3 private repos (land) → $15/user/month for unlimited private repos + configurable thresholds + Slack alerts (convert) → $30/user/month for team dashboards, historical analytics, org-wide policies, SSO, and API access (expand) → Enterprise tier with custom AI-detection models, SAML, and audit logs
6-10 weeks. 4 weeks to MVP, 2 weeks listed on GitHub Marketplace with the free tier to drive initial installs, then convert early adopters to paid within the first month. First paying customer is realistic within 8-10 weeks with targeted outreach to engineering managers complaining about AI-generated PRs on Twitter/Reddit.
- “93 commits in one PR by a person who isn't regularly maintaining the code should be illegal”
- “never bothered to answer comments or do anything else”
- “merged by himself, no approvals”