PRICING

Simple plans for all your needs

Serverless is the most cost-efficient way to manage your compute spend.
We built the first API for serverless ML compute so you can prototype and deploy your models quickly and easily.
Forget about servers, forever.

Want to train and run predictions? Choose Prototyping
Want to deploy your models? Choose Deployment

Prototyping
Train and run predictions while exploring the best models for your dataset.
Beginner
For students, practitioners and newcomers to the AI world
Free
Unlimited training runs
Unlimited predictions
Visualise everything
Up to 1 project
M60 & T4 GPUs. We select the best GPU for your model
10GB of Data Storage
Professional
Coming Soon
For professionals with focus on productivity & speed
$10 per hour of compute
Unlimited training runs
Unlimited predictions
Visualise everything
Unlimited projects
M60, T4, V100 GPUs. We select the best GPU for your model
10TB of Data Storage
Up to 12 concurrent training runs & predictions
Multi-GPU support
TPU support
Enterprise
For maximum speed, security and compliance controls
Contact Sales
Unlimited training runs
Unlimited predictions
Visualise everything
Unlimited projects
M60, T4, V100 & other GPUs. Flexibility to choose the best GPU for your model
Unlimited Data Storage
Unlimited concurrent training runs & predictions
Multi-GPU support
TPU support
On-premise support
24/7 support
Deployment
Deploy your models and keep track of how well they perform
Standard
Deployment suite for beginners & practitioners
$0 per GPU / month
+ per-hour compute cost of your selected Prototyping plan (see example below)
T4 GPUs
Inference <1.5s*
Unlimited models
Keep track of model health

* Maximum average inference latency
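Illustrative example of how the two charges combine (usage figures are hypothetical; prices are taken from the plans above): pairing the Standard deployment plan with the Professional prototyping plan and using 20 hours of compute in a month would come to $0 + 20 × $10 = $200 for that month.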

Ludicrous
Coming Soon
Deployment suite for extremely fast applications
$250 per GPU / month
+ per-hour compute cost of your selected Prototyping plan
Private GPU Cluster
Inference <150ms*
Unlimited models
Keep track of model health
Private VPC
Custom subdomain

* Maximum average inference latency

Enterprise
Deployment suite with maximum speed, security and compliance controls
Contact Sales
Private GPU Cluster
Inference <100ms*
Unlimited models
Keep track of model health
Private VPC
Custom subdomain
Enhanced Security & full SLA
24/7 support

* Maximum average inference latency