GPU Pricing
Train & infer faster with on-demand NVIDIA GPUs
Spin up managed workbenches or raw VMs in under 90 seconds – billed to the minute, no commitments.
Managed Workbench Instances
All the tools, none of the setup.
Minute-level billing – stop paying for idle time
Launch up to 8 GPUs per instance
Pre-built stacks: JupyterLab, VS Code, API endpoint & SSH
Running Docker? Choose Bare-Metal VMs
| GPU Type | VRAM | RAM | vCPUs | $/hour |
|---|---|---|---|---|
| H200 SXM | 141GB | 200GB | 16 | $3.80 |
| H100 SXM | 80GB | 200GB | 16 | $2.69 |
| A6000 | 48GB | 32GB | 7 | $0.79 |
| RTX6000 Ada | 48GB | 128GB | 32 | $0.99 |
| L4 | 24GB | 124GB | 32 | $0.44 |
| A100 | 40GB | 112GB | 16 | $1.29 |
| A100-80GB | 80GB | 112GB | 16 | $1.49 |
On-Demand VMs
Full root access, billed per minute. Zero lock-in.
Need 25+ GPUs or multi-month reservations?
| GPU Type | RAM | VRAM | vCPUs | $/hour |
|---|---|---|---|---|
| H200 | 200GB | 141GB | 16 | $3.80 |
| H100 | 200GB | 80GB | 16 | $2.69 |
| A100-80GB | 112GB | 80GB | 16 | $1.49 |
| A100 | 112GB | 40GB | 16 | $1.29 |
| L4 | 124GB | 24GB | 32 | $0.44 |
Prices shown per GPU. Multi-GPU configurations (up to 8x) available at launch.
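The billing model is simple to sketch: prices are per GPU, billing is to the minute, and an instance can hold up to 8 GPUs. Below is a minimal cost estimator using the rates from the table above; the dictionary and the `estimate_cost` helper are illustrative names, not part of any API, and rates should be verified against the current pricing page.

```python
# Per-minute, per-GPU cost estimate: hourly rate / 60 * minutes * GPU count.
# Rates ($/GPU-hour) copied from the on-demand VM table above; verify
# current rates before relying on these numbers.
HOURLY_RATES_USD = {
    "H200": 3.80,
    "H100": 2.69,
    "A100-80GB": 1.49,
    "A100": 1.29,
    "L4": 0.44,
}

def estimate_cost(gpu_type: str, minutes: float, num_gpus: int = 1) -> float:
    """Estimated cost in USD for a run billed to the minute."""
    if not 1 <= num_gpus <= 8:  # instances support up to 8 GPUs
        raise ValueError("num_gpus must be between 1 and 8")
    return HOURLY_RATES_USD[gpu_type] / 60 * minutes * num_gpus

# Example: a 90-minute run on 4x H100
print(f"${estimate_cost('H100', 90, 4):.2f}")
```

Because billing is per minute rather than per hour, a 10-minute experiment on an H100 costs roughly $0.45 instead of a full hour's $2.69.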
Long-term commitment? Get a custom quote.
Running multi-GPU inference? Read our guide on scaling with data parallelism (DP), pipeline parallelism (PP), and tensor parallelism (TP).
Trusted by 3,000+ researchers and 50+ startups worldwide
99.9% uptime SLA