The API integrates with PyTorch, TensorFlow 2.0, MXNet, and Hugging Face 🤗.
Our API automatically connects your ML task to top-tier servers.
Our infrastructure scales to meet high-speed and high-bandwidth requirements.
Save money and the environment by paying only for compute time.
Build your ML model and upload it with our API. Then monitor the model's performance in the dashboard.
Train or predict with your model from your Python environment of choice. Behind the scenes, we'll spin up the best hardware. Instantly.
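The upload-then-predict workflow above can be sketched in a few lines of Python. Note this is a hypothetical illustration only: the class and method names below (`NpuClient`, `upload_model`, `predict`) are stand-ins, not the library's actual API, and the remote calls are stubbed out locally.

```python
class NpuClient:
    """Illustrative stand-in for a serverless ML compute client.

    A real client would authenticate with an API token and send
    requests to remote GPU workers; here everything runs locally.
    """

    def __init__(self, token):
        self.token = token
        self.models = {}

    def upload_model(self, name, weights):
        # A real client would upload the serialized model to the service.
        self.models[name] = weights
        return name

    def predict(self, name, x):
        # A real client would run inference on a remote GPU and
        # return the result; here we evaluate a toy linear model.
        w, b = self.models[name]
        return w * x + b


client = NpuClient(token="YOUR_API_TOKEN")
model_id = client.upload_model("linear-demo", weights=(2.0, 1.0))
print(client.predict(model_id, 3.0))  # → 7.0
```

The shape of the workflow is the point: build a model locally, upload it once, then call predict (or train) from any Python environment while the service picks the hardware.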
We'll train or run prediction on the fastest GPU for your task.
Your data will be stored in AWS S3 buckets.
After parsing your model, a Neuro algorithm automatically finds and assigns the fastest GPU for your task. If you want to select a specific GPU, contact us about an enterprise solution.
You can use our npu library in any Python environment, including Jupyter notebooks.
THE API for serverless ML compute