Custom Docker Images as Job Environments
Customers with prebuilt Docker images can now use them as the job environment for any job type.
REST Endpoints for Inference
The trainML platform has been extended to support deploying models as REST API endpoints. These fully managed endpoints give you the real-time predictions you need for production applications without having to worry about servers, certificates, networking, or web development.
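Once an endpoint is deployed, any HTTP client can request predictions from it. As a minimal sketch, assuming a JSON input schema, the address and payload below are hypothetical placeholders rather than actual trainML values; substitute the details from your own deployment:

# Hypothetical endpoint address and payload; use the values from your deployment.
curl -X POST https://<your-endpoint-address>/predict \
  -H "Content-Type: application/json" \
  -d '{"inputs": [[0.25, 0.64, 0.11]]}'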
Automatic Dependency Installation
trainML jobs now accept lists of packages that will be installed using apt, pip, or conda as part of the job creation process, and automatically install any dependencies found in the requirements.txt file in the root of the model code working directory.
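For instance, placing a standard pip requirements.txt like the following at the root of your model code ensures the listed packages are installed before the job starts (the packages and version pins here are purely illustrative):

pandas>=1.1
scikit-learn==0.24.1
tqdm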
Consolidated Account Billing
Now your entire team or organization can share a single credit balance managed by a central account.
Load Model Code Directly From Your Laptop
You can now start any job type from model code stored on your local computer without committing the code to a git repository. In combination with the trainML CLI, starting a notebook from your local computer is as simple as:
trainml job create notebook --model-dir ~/model-code --data-dir ~/data "My Notebook"
Start Training Models With One Line
With the trainML CLI installed, a single command launches a GPU-backed notebook ready for model training:
trainml job create notebook "name"
RTX 3090 (BFGPU) Instances Now Available
Enjoy the "big ferocious" performance of NVIDIA's Ampere-based RTX 3090 for less than $1 an hour. Supplies are limited so reserve one while you can.
Build Full Machine Learning Pipelines with Inference Jobs
The trainML platform has been extended to support batch inference jobs, enabling customers to use trainML for all stages of the machine learning pipeline that require GPU acceleration.
Store Training Results Directly on the Platform
The trainML platform now allows customers to store models permanently and reuse them in as many notebook and training jobs as desired.
Dataset Viewing
You can now view summary details of a dataset's contents from the user interface.