## GPU naming
The Compute Clear API uses two levels of GPU identification:
| Level | Example | Scope |
|---|---|---|
| Family name | H100 | Matches all variants of H100 (SXM5, PCIe, etc.) |
| Canonical name | nvidia-h100-sxm5-80gb | Matches one exact GPU variant |
Use family names for broad searches. Use canonical names when you need a specific interconnect or VRAM configuration.
## Discovering available GPUs
```bash
curl https://supply-api.compute-index.com/available_gpus \
  -H "Authorization: Bearer $TOKEN"
```
The response maps family names to their canonical variants:
```json
{
  "H100": ["nvidia-h100-sxm5-80gb", "nvidia-h100-pcie-80gb"],
  "A100": ["nvidia-a100-sxm4-80gb", "nvidia-a100-pcie-80gb", "nvidia-a100-pcie-40gb"],
  "L40S": ["nvidia-l40s-pcie-48gb"]
}
```
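As a sketch, the response can be parsed to expand a family name into its canonical variants. The mapping is copied from the example response; the helper name is illustrative, not part of the API:

```python
import json

# Example /available_gpus response (copied from above).
AVAILABLE_GPUS_JSON = """
{
  "H100": ["nvidia-h100-sxm5-80gb", "nvidia-h100-pcie-80gb"],
  "A100": ["nvidia-a100-sxm4-80gb", "nvidia-a100-pcie-80gb", "nvidia-a100-pcie-40gb"],
  "L40S": ["nvidia-l40s-pcie-48gb"]
}
"""

def canonical_variants(family: str, mapping: dict[str, list[str]]) -> list[str]:
    """Return all canonical GPU names for a family, or [] if the family is unknown."""
    return mapping.get(family, [])

mapping = json.loads(AVAILABLE_GPUS_JSON)
print(canonical_variants("H100", mapping))
# ['nvidia-h100-sxm5-80gb', 'nvidia-h100-pcie-80gb']
```

Resolving a family name client-side like this is useful when you want to compare prices across variants before committing to one canonical name.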
## Using GPU types in offers
Both formats work in `/get_offers`:
```yaml
# Family name: matches all H100 variants
compute:
  gpu_type: H100
```

```yaml
# Canonical name: matches only the SXM5 80GB variant
compute:
  gpu_type: nvidia-h100-sxm5-80gb
```
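The two-level matching rule can be sketched in Python. The helper name and inline mapping are illustrative; the mapping would come from `/available_gpus`:

```python
def gpu_type_matches(requested: str, offer_gpu: str,
                     families: dict[str, list[str]]) -> bool:
    """True if an offer's canonical GPU name satisfies a requested gpu_type.

    `requested` may be a family name (matches any variant in that family)
    or a canonical name (matches exactly one variant).
    """
    if requested in families:            # family name, e.g. "H100"
        return offer_gpu in families[requested]
    return requested == offer_gpu        # canonical name: exact match only

families = {"H100": ["nvidia-h100-sxm5-80gb", "nvidia-h100-pcie-80gb"]}
print(gpu_type_matches("H100", "nvidia-h100-pcie-80gb", families))                   # True
print(gpu_type_matches("nvidia-h100-sxm5-80gb", "nvidia-h100-pcie-80gb", families))  # False
```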
## Popular GPU families
| Family | Use case | Typical price range |
|---|---|---|
| H100 | Large model training, inference | $2-5/GPU/hr |
| H200 | Next-gen training, high-memory | $3-6/GPU/hr |
| A100 | Training, fine-tuning | $1-3/GPU/hr |
| L40S | Inference, mixed workloads | $1-2/GPU/hr |
| L4 | Cost-effective inference | $0.30-1/GPU/hr |
| RTX 4090 | Budget training, inference | $0.30-0.80/GPU/hr |
Prices vary by vendor, region, and contract type. Use `/get_offers` with `max_price_per_gpu_hour` to find options within your budget.
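A minimal sketch of the budget filter applied client-side, mirroring what `max_price_per_gpu_hour` does server-side. The offer tuples and prices below are made up for illustration:

```python
# Hypothetical offers: (canonical GPU name, vendor, price in $/GPU/hr).
offers = [
    ("nvidia-h100-sxm5-80gb", "vendor-a", 4.50),
    ("nvidia-h100-pcie-80gb", "vendor-b", 2.80),
    ("nvidia-a100-sxm4-80gb", "vendor-c", 1.90),
]

def within_budget(offers, max_price_per_gpu_hour):
    """Keep offers at or under the hourly budget, cheapest first."""
    kept = [o for o in offers if o[2] <= max_price_per_gpu_hour]
    return sorted(kept, key=lambda o: o[2])

print(within_budget(offers, 3.00))
```

With a $3/GPU/hr cap, only the PCIe H100 and the A100 offers survive, cheapest first.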
## Filtering by vendor and region
Combine GPU selection with vendor and region filters:
```yaml
vendor: nebius
region: EU
compute:
  gpu_type: H100
  gpu_count: 8
max_price_per_gpu_hour: 4
contract_type: ondemand
```
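If you are calling the API programmatically rather than from YAML, the same filter can be expressed as a JSON request body. Field names are taken from the example above; the nesting of the `compute` block is an assumption based on the earlier snippets:

```python
import json

# The YAML filter above as a /get_offers request body (nesting assumed).
offer_filter = {
    "vendor": "nebius",
    "region": "EU",
    "compute": {
        "gpu_type": "H100",
        "gpu_count": 8,
    },
    "max_price_per_gpu_hour": 4,
    "contract_type": "ondemand",
}

print(json.dumps(offer_filter, indent=2))
```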