Add your RunPod, Lambda, Vast.ai, and TensorDock API keys. Submit a job and we route it to the cheapest available GPU across all your providers.
Your keys. Your accounts. Your billing. We just make it smarter.
Three steps. You keep full control of your provider accounts.
Add your RunPod, Lambda, Vast.ai, or TensorDock API keys in the dashboard.
You stay in control of billing with each provider.
Tell us what GPU you need, your image, and your command. One API for all your providers.
We find the cheapest available GPU across all your connected providers and launch instantly.
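The routing step is easy to picture: collect a price quote from every connected provider, keep the ones that match your GPU and are actually available, and launch on the cheapest. A minimal sketch with made-up quotes (provider names and prices here are illustrative, not live data):

```python
# Hypothetical per-provider H100 quotes, $/hr (illustrative only)
quotes = [
    {"provider": "runpod", "gpu": "H100", "price": 2.86, "available": True},
    {"provider": "lambda", "gpu": "H100", "price": 3.10, "available": True},
    {"provider": "vast",   "gpu": "H100", "price": 2.40, "available": False},
]

def cheapest(quotes, gpu):
    """Pick the lowest-priced available offer for the requested GPU."""
    offers = [q for q in quotes if q["gpu"] == gpu and q["available"]]
    return min(offers, key=lambda q: q["price"]) if offers else None

best = cheapest(quotes, "H100")
print(f"Launching on {best['provider']} at ${best['price']}/hr")
# → Launching on runpod at $2.86/hr
```

Note the cheapest listing (vast at $2.40) loses because it is unavailable; availability is checked before price.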
Connect your accounts with any of these GPU clouds. We route across all of them.
Built-in simulated provider that mimics the real GPU job lifecycle.
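What a simulated provider does, in miniature: step a job through the same states a real GPU job would pass through, without provisioning any hardware. A rough sketch (the state names and `poll` method here are assumptions for illustration, not HarvestGPU's actual API):

```python
# Assumed lifecycle states; a real provider would also model failure paths
LIFECYCLE = ["queued", "provisioning", "running", "completed"]

class SimulatedJob:
    """Advances through the lifecycle one state per poll."""

    def __init__(self, job_id):
        self.job_id = job_id
        self._states = iter(LIFECYCLE)
        self.status = next(self._states)

    def poll(self):
        # Stay at "completed" once the lifecycle is exhausted
        self.status = next(self._states, "completed")
        return self.status

job = SimulatedJob("abc123")
print(job.status)                      # queued
while job.poll() != "completed":
    print(job.status)                  # provisioning, then running
print(job.status)                      # completed
```

This is what makes end-to-end testing possible without touching a paid provider account.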
from harvestgpu import HarvestGPU
gpu = HarvestGPU(api_key="hg_live_...")
# Find cheapest H100
job = gpu.run(
    gpu="H100",
    image="pytorch/pytorch:latest",
    command="python train.py",
    budget_max=2.50,
)
print(f"Running on {job.provider} at ${job.cost_per_hour}/hr")
# Running on runpod at $2.86/hr
# Install
$ pip install harvestgpu
# Authenticate
$ harvestgpu auth login --key hg_live_...
# Launch a job
$ harvestgpu run --gpu H100 \
    --image pytorch/pytorch:latest \
    --command "python train.py"
Job abc123 started on runpod (H100) at $2.86/hr
# Check status
$ harvestgpu jobs status abc123
Status: running | Runtime: 1h 23m | Cost: $3.95
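The running cost is just the hourly rate times elapsed time. The $3.95 above matches the $2.86/hr rate if fractional cents are truncated rather than rounded (an assumption about billing behavior, shown here for illustration):

```python
import math

rate_per_hour = 2.86         # $/hr, from the job above
minutes = 1 * 60 + 23        # 1h 23m of runtime
cost = rate_per_hour / 60 * minutes
# Truncate to the cent (assumption: billing never rounds up)
print(f"${math.floor(cost * 100) / 100:.2f}")  # $3.95
```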
# Submit a job
$ curl -X POST https://api.harvestgpu.dev/api/v1/jobs \
    -H "Authorization: Bearer hg_live_..." \
    -H "Content-Type: application/json" \
    -d '{
      "gpu": "H100",
      "image": "pytorch/pytorch:latest",
      "command": "python train.py",
      "priority": "cost"
    }'
# Response
{
  "status": "ok",
  "data": {
    "job_id": "abc123",
    "provider": "RunPod",
    "cost_per_hour": 2.86,
    "status": "running"
  }
}
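The same request works from Python with only the standard library, no SDK required. The endpoint, headers, and body below mirror the curl call above; the `build_job_request`/`submit_job` helper names are our own, and the network call is left to the caller:

```python
import json
import urllib.request

API_URL = "https://api.harvestgpu.dev/api/v1/jobs"

def build_job_request(api_key, gpu, image, command, priority="cost"):
    """Assemble the POST request shown in the curl example."""
    body = json.dumps({
        "gpu": gpu,
        "image": image,
        "command": command,
        "priority": priority,
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def submit_job(api_key, **kwargs):
    """Send the request and decode the JSON response envelope."""
    with urllib.request.urlopen(build_job_request(api_key, **kwargs)) as resp:
        return json.load(resp)
```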
Connect your GPU provider accounts in 30 seconds. We find the cheapest option across all of them.
Get Started — Free