Proxmox lacks a polished, easy-to-configure shared storage solution, which is a key blocker for enterprise adoption.
A software appliance or Proxmox plugin that provides a GUI-driven setup for shared FC/iSCSI storage pools, with built-in health monitoring, failover, and compatibility testing.
Subscription per node or per cluster, with a free tier for small clusters
The Reddit thread with 947 upvotes and the direct quote about FC/iSCSI storage being the blocker for market domination are a strong signal. Every Proxmox-vs-VMware comparison thread surfaces this exact pain point. IT teams migrating from VMware have existing FC/iSCSI SANs and find Proxmox's integration with them frustrating and manual. This is a real, frequent, and vocal pain.
TAM is meaningful but bounded. Proxmox has ~800K installs, but only a fraction are clustered SMB/enterprise environments needing shared storage (est. 50K–100K clusters). At $50–200/node/month, addressable revenue is roughly $30–120M/year. Not a unicorn market, but a solid niche. The VMware migration tailwind is real but will plateau within 2–3 years.
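The revenue range above can be sanity-checked with simple arithmetic; the assumption here (roughly one billed node-equivalent per cluster, at the $50–100/month end of the band) is illustrative, not a claim about actual deployment sizes:

```python
# Back-of-envelope TAM check for the estimate above.
# Assumption (illustrative): each cluster bills about one node-equivalent.

def tam_per_year(clusters: int, dollars_per_node_month: int) -> int:
    """Annualized addressable revenue in dollars."""
    return clusters * dollars_per_node_month * 12

low = tam_per_year(50_000, 50)      # 50K clusters at $50/month
high = tam_per_year(100_000, 100)   # 100K clusters at $100/month
print(f"${low // 10**6}M-${high // 10**6}M per year")  # -> $30M-$120M per year
```

Larger average cluster sizes or higher per-node pricing push the ceiling up, but the plateau of the VMware migration wave caps how long the top of that range stays addressable.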
Proxmox users already pay for subscriptions ($110–$700/socket/year) and storage solutions like StarWind ($1,500–$5,000/node/year). Enterprises migrating from VMware are accustomed to paying for storage tooling. However, the Proxmox community has a strong open-source-first culture — a free tier is essential, and pricing must undercut VMware ecosystem alternatives significantly. The sweet spot is likely $30–100/node/month.
This is the hardest dimension. Building a reliable FC/iSCSI storage layer with failover is non-trivial systems programming. FC requires kernel-level work (LIO/SCST target frameworks), hardware compatibility testing across HBA vendors, and deep Linux storage stack expertise. iSCSI is more approachable but still complex: MPIO, CHAP authentication, and failover all have to work correctly. A Proxmox plugin also requires understanding Proxmox's Perl-based storage plugin API. A solo dev with strong Linux storage experience could build an iSCSI-only MVP in 8–12 weeks, but FC support and production-grade failover would take 4–6 months minimum. This is not a weekend project.
The gap is clear and well-defined. No existing product combines: (1) native Proxmox GUI integration, (2) FC + iSCSI target support, (3) wizard-driven setup, (4) rich health monitoring, and (5) automatic failover. Open-E has protocol coverage but no Proxmox integration. Ceph has integration but no FC/iSCSI. StarWind has iSCSI + HA but no FC and no Proxmox plugin. The intersection of these requirements is genuinely unserved.
Storage management is inherently subscription-worthy — it runs 24/7, needs ongoing monitoring/updates, and customers cannot easily switch once data is on the platform. Per-node or per-cluster pricing is industry standard and well-accepted. Health monitoring, firmware compatibility updates, and failover testing provide ongoing value that justifies renewals. Very high lock-in once deployed in production.
- +Genuine, vocal, and growing market gap validated by strong community signal (947 upvotes) and the VMware migration wave
- +No existing product combines Proxmox-native integration with FC/iSCSI support and polished GUI — the intersection is completely unserved
- +High recurring revenue potential with strong lock-in once deployed in production storage infrastructure
- +Timing is excellent — Proxmox adoption is surging and enterprise buyers are actively shopping for solutions right now
- !Technical complexity is high — FC/iSCSI target implementation with failover requires deep Linux storage kernel expertise, not typical SaaS/web dev skills
- !Proxmox itself could build this — they have the codebase access, the customer relationships, and an obvious slot for it on their enterprise roadmap
- !Hardware compatibility matrix is a support nightmare — every FC HBA, iSCSI initiator, and switch vendor combination needs testing
- !Small addressable market ceiling — this is a niche within a niche (clustered Proxmox with SAN storage needs), unlikely to exceed $50–100M ARR even with strong execution
Distributed storage system natively integrated into Proxmox, providing block (RBD) and file (CephFS) storage pooled across cluster nodes.
Creates highly available shared storage by mirroring local disks between Proxmox nodes and presenting them as iSCSI targets. Purpose-built for 2–3 node HA clusters where Ceph is overkill.
ZFS-based storage OS that turns hardware into a NAS/SAN appliance, presenting shared storage to Proxmox via iSCSI, NFS, or SMB. Available as free software or enterprise appliances.
ZFS-based storage management OS for building enterprise SAN/NAS, supporting iSCSI, Fibre Channel, NFS, and SMB targets with a web GUI and HA clustering.
Software-defined storage using DRBD block-level replication with an official Proxmox plugin. Creates distributed storage pools from local disks across cluster nodes.
Start with iSCSI-only (skip FC for MVP). Build a Proxmox plugin that adds a 'Shared Storage Wizard' tab to the Proxmox web UI. The wizard auto-discovers local disks, creates ZFS pools, configures LIO iSCSI targets, sets up MPIO, and presents the storage to all cluster nodes — all in under 10 clicks. Add a health dashboard showing IOPS, latency, capacity, and disk SMART status per target. Ship as a .deb package installable via apt. Free for 2-node clusters, paid for 3+.
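A minimal sketch of what the wizard would drive under the hood, assuming a ZFS zvol exported through LIO via `targetcli`. All names here (the pool `tank`, the zvol `vm-disks`, the IQN) are hypothetical examples, and a production plugin would likely talk to LIO through rtslib/configfs and configure CHAP plus per-initiator ACLs rather than the open demo settings shown:

```python
# Illustrative sketch of the provisioning steps the wizard would automate.
# Assumptions: ZFS and targetcli (LIO) are installed; all names are examples.

def provisioning_commands(pool: str, zvol: str, size: str, iqn: str) -> list[str]:
    """Build the shell commands that export a ZFS zvol as a LIO iSCSI target."""
    backstore = f"{pool}-{zvol}"
    return [
        # 1. Create a ZFS volume (block device) to back the iSCSI LUN
        f"zfs create -V {size} {pool}/{zvol}",
        # 2. Register the zvol as a LIO block backstore
        f"targetcli /backstores/block create name={backstore} dev=/dev/zvol/{pool}/{zvol}",
        # 3. Create the iSCSI target and attach the backstore as a LUN
        f"targetcli /iscsi create {iqn}",
        f"targetcli /iscsi/{iqn}/tpg1/luns create /backstores/block/{backstore}",
        # 4. Demo-mode access for illustration only; a real deployment would
        #    set up CHAP and per-initiator ACLs here instead
        f"targetcli /iscsi/{iqn}/tpg1 set attribute authentication=0 generate_node_acls=1",
    ]

for cmd in provisioning_commands("tank", "vm-disks", "200G",
                                 "iqn.2024-01.example.com:proxmox-shared"):
    print(cmd)
```

Building the commands as data rather than executing them inline keeps the provisioning logic unit-testable without root or real hardware, which matters for the compatibility-testing story this product is selling.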
Free tier (2 nodes, basic monitoring) → Pro ($50/node/month: 3+ nodes, advanced monitoring, email alerts, priority updates) → Enterprise ($150/node/month: FC support, HA failover, phone support, SLA, compliance reporting) → Managed Service (host the management plane, charge per TB managed)
3–5 months to first dollar. Month 1–2: iSCSI MVP with Proxmox plugin. Month 3: beta with 10–20 Proxmox community power users (free). Month 4: launch paid tier on Proxmox forum and r/Proxmox. Month 5: first paying customers. Expect 6–12 months to reach $5K MRR given the enterprise sales cycle.
- “If Proxmox had a good file system so that you can easily do shared Fibre Channel or iSCSI storage...they would be taking over the market right now”