trainML Documentation

Serverless ML Infrastructure

Serverless GPUs

Run the training and inference you need on demand, with no instance management, reservations, or time restrictions.

Complete ML Pipelines

Run all stages of your machine learning pipeline, from R&D to training to batch and real-time inference.

Declare Cloud Independence

Seamlessly run workloads on any infrastructure, anywhere, with CloudBender™.

Federated Inference

Deliver inference results securely inside your customer's environment without losing control of your valuable intellectual property.

Automation First

Purposefully designed to integrate with upstream applications through our Python SDK.
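
As a minimal sketch of what driving a job from an upstream application could look like with the Python SDK: the `TrainML` client, the `jobs.create` coroutine, and the parameter names below are assumptions for illustration, not a definitive interface; consult the SDK reference for the exact calls.

```python
import asyncio

from trainml import TrainML  # assumed import path for the trainML Python SDK


async def launch_training_job():
    # Assumed client entry point and method names, shown for illustration only.
    client = TrainML()
    job = await client.jobs.create(
        name="resnet-cifar10-training",     # hypothetical job name
        type="training",
        gpu_type="RTX 3090",                # hypothetical GPU type
        gpu_count=1,
        disk_size=10,                       # GB of scratch disk (assumed unit)
        workers=[
            "python train.py --epochs 10",  # command each worker runs
        ],
    )
    # Assumed helper for blocking until the serverless job completes.
    await job.wait_for("finished")
    return job


if __name__ == "__main__":
    asyncio.run(launch_training_job())
```

Because the SDK is asynchronous, upstream applications can launch and monitor many jobs concurrently without managing any instances themselves.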

Easy Model/Checkpoint Management

Store unlimited custom model or checkpoint versions and attach them to training or inference jobs with ease.
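
A hedged sketch of storing a model version and attaching it to an inference job through the Python SDK follows; the `models.create` call, the `source_type` values, and the `model=dict(...)` job parameter are assumptions based on typical SDK patterns, not confirmed signatures.

```python
import asyncio

from trainml import TrainML  # assumed import path for the trainML Python SDK


async def deploy_model_version():
    client = TrainML()  # assumed client entry point

    # Upload a local directory as a new stored model version (names assumed).
    model = await client.models.create(
        name="sentiment-classifier-v3",     # hypothetical model name
        source_type="local",                # assumed value for local uploads
        source_uri="~/models/sentiment-v3", # hypothetical local path
    )
    await model.wait_for("ready")           # assumed readiness wait

    # Attach the stored model to an inference job by reference (assumed shape).
    job = await client.jobs.create(
        name="sentiment-batch-inference",
        type="inference",
        gpu_type="RTX 3090",
        gpu_count=1,
        disk_size=10,
        model=dict(source_type="trainml", source_uri=model.id),
        workers=["python predict.py"],
    )
    return job


if __name__ == "__main__":
    asyncio.run(deploy_model_version())
```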