Score: 6.4 · medium · Verdict: CONDITIONAL GO

TestPulse

A dashboard that tracks team testing investment over time and alerts when QA attention is declining post-incident.

Category: DevTools
Target: Engineering managers and VPs of Engineering at mid-size SaaS companies (50-500 engineers)
The Gap

Teams only prioritize testing right after a bad release, then gradually deprioritize it until the next incident — a costly boom-bust cycle.

Solution

Integrates with CI/CD, issue trackers, and test suites to measure testing velocity, coverage trends, and QA engagement over time. Surfaces a 'testing health score' and sends automated nudges to engineering leads when investment drops below a threshold, breaking the reactive cycle.

Revenue Model

Subscription SaaS — free tier for small teams, $500-2k/mo per org for dashboards, alerts, and integrations

Feasibility Scores
Pain Intensity: 6/10

The boom-bust testing cycle is real and universally acknowledged — the Reddit signals confirm this. However, it's a 'slow bleed' problem, not a hair-on-fire one. Teams feel the pain acutely only after incidents, which is exactly the cycle you're trying to break. The challenge: the people who'd buy this (eng managers) feel the pain episodically, not constantly. You're selling prevention to people who only buy after the fire.

Market Size: 6/10

Target: mid-size SaaS companies with 50-500 engineers. Estimated ~15,000-25,000 such companies globally. At $500-2k/mo, TAM is roughly $90M-$600M/year. Realistic serviceable market is much smaller — maybe 2,000-5,000 orgs that are mature enough to care about testing metrics but not so large they build internally. Decent niche but not massive.

Willingness to Pay: 5/10

This is the weakest link. Engineering teams already pay for Codecov, SonarQube, Datadog, and LinearB. TestPulse would need to prove ROI on top of existing tooling spend. The 'nudging' value prop is behavioral, not technical — harder to justify in procurement. $500-2k/mo is reasonable for the target, but the buyer (VP Eng) needs a clear incident-cost-reduction narrative. Many orgs will try to cobble this together with existing dashboards + Slack reminders.

Technical Feasibility: 8/10

Very buildable. Core is API integrations (GitHub, GitLab, Jira, CI providers) + time-series aggregation + threshold-based alerting. No ML required for MVP — simple trend lines and configurable thresholds. A solo dev with CI/CD and data pipeline experience could ship an MVP in 6-8 weeks. The hardest part is the breadth of integrations, not depth.

Competition Gap: 7/10

Nobody owns 'testing health over time' as a category. Codecov tracks coverage per-commit but not investment trends. LinearB tracks engineering allocation but not QA-specific behavioral patterns. The incident-correlation angle (tracking whether post-incident testing commitments actually stick) is genuinely novel. Gap is real, but the risk is that LinearB or Datadog adds this as a feature in a quarter.

Recurring Potential: 8/10

Natural subscription fit — the value is continuous monitoring and alerting, not one-time analysis. Once integrated into CI/CD and tied to team workflows, switching costs are moderate. The alerting/nudging creates ongoing engagement. Health scores need to be checked regularly. Strong retention mechanics if the product delivers visible behavior change.

Strengths
  • +Genuinely unserved niche — no one owns 'testing investment tracking over time' despite universal recognition of the problem
  • +The incident-correlation angle is a compelling and novel hook that existing tools don't offer
  • +Natural integration points with tools teams already use (GitHub, CI, Jira) lower adoption friction
  • +Strong narrative for VP Eng buyers: 'prove your team maintains quality discipline, not just after fires'
  • +Technically straightforward MVP — no exotic infrastructure or ML needed
Risks
  • !Feature-not-product risk: LinearB, Datadog, or Codecov could add a 'testing trends' dashboard and kill the standalone market
  • !Selling prevention is hard — buyers are most motivated right after an incident (exactly when they don't need the tool yet)
  • !The 'nudging' value prop may feel like surveillance to ICs, creating bottom-up resistance that blocks adoption
  • !Proving ROI is indirect — 'we prevented incidents that didn't happen' is a tough sell at renewal time
  • !Integration breadth (many CI providers, test frameworks, issue trackers) creates high maintenance burden for a solo dev
Competition
Codecov

Code coverage reporting tool that integrates with CI/CD to track test coverage percentages per commit and PR, with coverage diff visibility and status checks.

Pricing: Free for open source, $10/user/month for teams, enterprise pricing available
Gap: Purely coverage-focused — no temporal trend analysis of testing investment, no alerting on declining QA engagement, no connection to incident data, no 'testing health score' concept, no nudging/behavioral layer for engineering leads
SonarQube / SonarCloud

Code quality and security platform that tracks code smells, bugs, vulnerabilities, and test coverage with quality gates that can block merges.

Pricing: Free community edition, SonarCloud from $10/month, enterprise from $20k+/year
Gap: Static point-in-time analysis — doesn't track team behavioral trends over time, no incident-correlation, no concept of 'testing attention declining,' no proactive alerting to managers about QA deprioritization patterns
LinearB / Jellyfish / Sleuth (Engineering Intelligence Platforms)

Engineering metrics platforms that track developer productivity, DORA metrics, cycle time, deployment frequency, and engineering investment allocation.

Pricing: LinearB free tier available, paid from $20/dev/month; Jellyfish enterprise-only ($50k+/year)
Gap: Testing is a small subset of what they track — none offer a dedicated 'testing health score,' no incident-to-QA-investment correlation, no automated nudging when testing drops, QA metrics are surface-level (coverage %) not behavioral
Allure TestOps

Test management and analytics platform that aggregates test results across frameworks, tracks flaky tests, and provides test execution dashboards.

Pricing: Free open-source reporter, TestOps from $30/user/month, enterprise pricing
Gap: Focused on test execution results, not testing investment trends — no tracking of whether teams are writing fewer tests over time, no correlation with incidents, no management-level alerting on declining QA attention
Datadog CI Visibility / Test Visibility

Part of Datadog's observability suite — tracks CI pipeline performance, test execution times, flaky tests, and test suite health within the broader monitoring ecosystem.

Pricing: Included in Datadog CI Visibility at $20/committer/month on top of existing Datadog subscription
Gap: Optimized for CI performance, not QA culture — no temporal analysis of testing investment patterns, no incident-triggered tracking of post-mortem follow-through, no behavioral nudging, buried inside a massive platform rather than purpose-built for eng managers
MVP Suggestion

GitHub + one CI provider (GitHub Actions) + one issue tracker (Jira or Linear) integration only. Dashboard showing: (1) weekly test-added/test-modified velocity, (2) coverage trend over 90 days, (3) post-incident testing follow-through score (did the team actually write the tests they committed to in the post-mortem?). Slack alerts when testing velocity drops below team's own 30-day average. Skip the 'health score' for MVP — show raw trends and let managers draw conclusions. Target 5 design partners who recently had a bad release.
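
The post-incident follow-through score, the most novel of the three MVP metrics, reduces to matching post-mortem commitments against merged test changes. A minimal sketch, assuming each post-mortem action item names the test path it promises (the item IDs and paths are hypothetical):

```python
def followthrough_score(committed_tests, merged_test_paths):
    """Fraction of post-mortem test commitments that landed as merged
    test changes. `committed_tests` maps action-item id -> promised
    test path; all names here are hypothetical examples."""
    if not committed_tests:
        return 1.0  # nothing promised, nothing overdue
    landed = sum(1 for path in committed_tests.values()
                 if path in merged_test_paths)
    return landed / len(committed_tests)

# Two of three tests promised in the post-mortem actually merged:
score = followthrough_score(
    {"PM-101": "tests/test_checkout.py",
     "PM-102": "tests/test_refunds.py",
     "PM-103": "tests/test_auth.py"},
    {"tests/test_checkout.py", "tests/test_refunds.py"},
)
print(round(score, 2))  # → 0.67
```

A real implementation would need fuzzier matching (commitments rarely name exact file paths), but even this crude version gives design partners a number to react to.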

Monetization Path

Free tier: single repo, 30-day history, basic trends → Paid ($500/mo): org-wide dashboard, unlimited history, Slack/email alerts, multi-repo → Enterprise ($2k/mo): SSO, incident-tracker correlation, post-mortem follow-through tracking, custom thresholds, API access → Scale: annual contracts with engineering orgs, expand to 'engineering discipline' platform beyond just testing

Time to Revenue

10-14 weeks. 6-8 weeks to MVP with GitHub Actions + Jira integration, 2-3 weeks of design partner iteration, then convert 1-2 design partners to paid within 4 weeks. First revenue likely month 3-4. The sales cycle for this buyer (VP Eng) is typically 2-6 weeks with a champion.

What people are saying
  • appetite for testing is always highest the week after a bad release and then slowly fades until the next one
  • the thing everyone agrees is important but nobody prioritizes until it's too late
  • quality was something teams wanted to invest in... but only after something breaks in production