Purpose-built GPU infrastructure for mid-market AI companies. 1,000–20,000 GPU clusters, 100% reserved, silicon-agnostic, live in 4–6 months.
2,500 funded AI startups. 50–100 active infrastructure buyers. Zero purpose-built options for dedicated GPU clusters at this scale.
Mid-market AI companies need 1,000–10,000 GPUs on 6–12 month timelines.
Nobody serves them. Until now.
Meridian delivers dedicated GPU clusters through a 100% reserved instance model. Every rack is contracted before deployment. Zero utilization risk.
Annual commitments at fixed rates. Revenue guaranteed before we rack a single GPU. No spot exposure.
NVIDIA, AMD, TPU, FPGA. Ethernet-only networking (RoCE v2). Optimized per workload, not locked to one vendor.
Inference draws 50–70% of TDP. PCIe interconnect is sufficient, Ethernet networking is adequate. Lower power and cooling load than training clusters.
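The lower draw translates directly into facility sizing. A back-of-envelope sketch of the math, where the cluster size, TDP, and PUE figures are illustrative assumptions, not Meridian specs:

```python
# Back-of-envelope power sizing for an inference cluster.
# All figures below are illustrative assumptions, not Meridian specs.

def cluster_power_mw(num_gpus: int, tdp_watts: float,
                     draw_fraction: float, pue: float) -> float:
    """Estimated facility power in MW: GPU draw scaled by PUE overhead."""
    it_load_w = num_gpus * tdp_watts * draw_fraction
    return it_load_w * pue / 1e6

# Hypothetical: 5,000 GPUs at 700 W TDP, drawing 60% of TDP
# (mid-range of the 50-70% figure), with an assumed 1.3 PUE:
print(round(cluster_power_mw(5000, 700, 0.6, 1.3), 2))  # → 2.73
```

At a training-style 100% draw the same cluster would need roughly 4.6 MW, which is the gap the power and cooling claim rests on.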
N. Virginia, Dallas, Phoenix, Sacramento. ERCOT power in Texas plus regulated utility power in the other markets. Diversified across grids, not one campus bet.
Project-level equity through ring-fenced SPVs. Founders retain 100% OpCo control, IP, and customer relationships.
GPUs + Ethernet into existing colo shells. No construction risk. LOIs secured before deployment begins.
Five layers of best-in-class partners. The most performance-focused, vendor-neutral inference offering on the market.
Liquid cooling retrofit, powered-shell buildout, 24/7 DC ops, Phase 2 greenfield construction.
Ethernet-based inference fabric (RoCE v2). No InfiniBand. Silicon agnosticism preserved.
GPU residual value. Salvage at month 36 recovers 30% of initial CapEx, funding the hardware refresh cycle.
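The residual-value assumption can be sketched in a few lines. The 30% recovery rate is from the deck; the dollar figures are hypothetical placeholders:

```python
# Sketch of the stated residual-value assumption: hardware salvaged at
# month 36 recovers 30% of initial CapEx, offsetting the next refresh.
# Dollar amounts are hypothetical, not Meridian figures.

SALVAGE_RATE = 0.30  # stated recovery at month 36

def salvage_proceeds(initial_capex: float) -> float:
    return initial_capex * SALVAGE_RATE

def net_refresh_cost(initial_capex: float, refresh_capex: float) -> float:
    """Refresh spend remaining after salvage proceeds are applied."""
    return refresh_capex - salvage_proceeds(initial_capex)

# Hypothetical: $100M initial buy, $120M refresh at month 36.
print(round(salvage_proceeds(100e6) / 1e6, 1))         # → 30.0
print(round(net_refresh_cost(100e6, 120e6) / 1e6, 1))  # → 90.0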
Full DCIM: power, cooling, environmental sensors, capacity planning, predictive analytics.
Positron, NVIDIA, AMD, Google TPU, FPGA. Best tok/s/watt per workload. Zero vendor lock-in.
= Most performance-focused, vendor-neutral inference offering available today.
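Silicon selection on tok/s/watt reduces to a simple ranking. A minimal sketch, where the accelerator names, throughputs, and power draws are placeholders rather than benchmark results:

```python
# Ranking silicon candidates for one workload on tokens/sec/watt.
# Names and numbers are illustrative placeholders, not benchmarks.

def tok_per_s_per_watt(tokens_per_s: float, avg_power_w: float) -> float:
    return tokens_per_s / avg_power_w

candidates = {
    "accelerator_a": tok_per_s_per_watt(12_000, 400),  # 30.0
    "accelerator_b": tok_per_s_per_watt(18_000, 700),  # ~25.7
}
best = max(candidates, key=candidates.get)
print(best)  # → accelerator_a
```

The higher raw-throughput part loses here: per-watt efficiency, not peak tok/s, drives the per-workload pick.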
A $49.8B market with no purpose-built solution for the fastest-growing segment. We built the infrastructure mid-market AI companies have been waiting for.
Training dominated the last cycle. Inference dominates the next one. The mid-market is where that shift lands first — and hardest.
Hyperscalers won't touch sub-150 MW deals → 1,000–10,000 GPU clusters, purpose-sized
Spot markets can't guarantee SLAs or dedicated capacity → 100% reserved model, zero utilization volatility
2–4 year build timelines kill product roadmaps → 4–6 month deploy into existing colo shells
Dedicated GPU clusters deployed in 4–6 months. 100% reserved capacity, zero utilization risk, silicon-agnostic. Built for AI companies that can't afford to wait.