Your GPU powers
the AI revolution

Join GPU owners sharing their spare compute with AI developers worldwide. If you're running Ollama, you're 2 minutes from earning money and powering the next generation of AI.

RTX 4090 (24 GB VRAM): $0.75/hr
A100 80GB: $2.50/hr
H100 SXM (80 GB VRAM): $3.80/hr

Start earning in 2 minutes

1. Run Ollama

Already running Ollama? Skip this step. Not yet? One command installs it:

curl -fsSL https://ollama.com/install.sh | sh
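Not sure whether Ollama is already up? A quick check, using Ollama's default port (11434) and its standard /api/tags endpoint, which is safe to run either way:

```shell
# Is Ollama already serving on its default port (11434)?
# /api/tags lists installed models; a successful response means it's live.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  STATUS="running"
else
  STATUS="not detected"
fi
echo "ollama: $STATUS"
```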
2. List your GPU

Tell us your GPU model and which models you serve, then set your price. Done.

3. Earn passively

Renters find your listing and connect through our proxy; you earn per hour or per token. We handle billing.

What your GPU earns

Your GPU earns nothing when it's idle. List it and it pays for itself.

GPU Hourly rate Earnings (8 hrs rented/day) Monthly (~30 days)
RTX 3090 $0.35/hr $2.80/day ~$84/mo
RTX 4090 $0.75/hr $6.00/day ~$180/mo
A100 80GB $2.50/hr $20.00/day ~$600/mo
H100 SXM $3.80/hr $30.40/day ~$912/mo
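The table above is plain arithmetic: hourly rate times hours rented per day, times roughly 30 days a month. A quick sanity check of the RTX 4090 row:

```shell
# Monthly earnings = rate/hr x hours rented/day x ~30 days.
RATE=0.75   # RTX 4090 hourly rate from the table
HOURS=8     # otherwise-idle hours rented out per day
DAYS=30     # approximate days per month
MONTHLY=$(awk -v r="$RATE" -v h="$HOURS" -v d="$DAYS" 'BEGIN { printf "%.0f", r * h * d }')
echo "RTX 4090: ~\$${MONTHLY}/mo"
```

Swap in $0.35, $2.50, or $3.80 to reproduce the other rows.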

Built for GPU owners, not cloud companies

You set the price

Your GPU, your rules. Set hourly rates, per-token pricing, or both.

Ollama native

No Docker, no networking config. If Ollama runs, you're ready.

Direct payouts

Stripe Connect deposits earnings directly to your bank. No invoicing.

Health monitoring

We check your endpoint every 2 minutes. If it goes down, we pause automatically.

Need GPU compute? Skip the cloud.

Browse available GPUs, click Rent, get an Ollama API endpoint instantly. Pay by the hour or by the token. No setup, no SSH, no Docker. Just connect and run your models.
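A minimal sketch of what connecting looks like, assuming the marketplace hands you a proxied URL after you click Rent (the endpoint below is a placeholder). The request body follows Ollama's standard /api/generate API, so existing Ollama clients work unchanged; here we just build and print the request:

```shell
# Placeholder for the endpoint you'd receive after renting a GPU.
ENDPOINT="https://proxy.example.com/v1/abc123/api/generate"
# Standard Ollama generate request: model name, prompt, and stream=false
# to get a single JSON response instead of a token stream.
PAYLOAD='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'
echo "POST $ENDPOINT"
echo "$PAYLOAD"
# To actually send it:  curl "$ENDPOINT" -d "$PAYLOAD"
```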

Browse marketplace

The world's first Ollama-native GPU marketplace

AWS charges $3/hr for an A100. Your neighbor's gaming rig runs the same models for $0.75. The GPU sharing revolution isn't coming — it's here. Every GPU you list makes AI more accessible, more affordable, and more decentralized.

Join the revolution