· 2 min read

You can now convert notebooks directly into training jobs to run independent training experiments while working on your projects. In contrast to copying the notebook into another notebook job, training jobs run autonomously, send their output to the location you specify, and automatically terminate when finished.
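For readers who script their workflow instead of using the web interface, the conversion might look something like the sketch below. It assumes the trainml Python SDK, and the `jobs.get` and `copy` calls along with their keyword arguments are illustrative assumptions, not the documented signatures.

```python
import asyncio
from trainml import TrainML  # assumes the trainml Python SDK is installed

async def convert_notebook(notebook_id: str):
    client = TrainML()
    # Look up the running notebook job (method name is an assumption).
    notebook = await client.jobs.get(notebook_id)
    # Create an autonomous training job from the notebook.  The `copy`
    # call and its keyword arguments are illustrative, not the documented
    # signature; per the announcement, the job runs on its own, sends
    # output to the location you specify, and terminates when finished.
    training_job = await notebook.copy(
        name="experiment-1",
        type="training",
        worker_commands=["python train.py --epochs 50"],  # hypothetical script
        output_uri="s3://my-bucket/experiment-1",  # hypothetical destination
    )
    return training_job

asyncio.run(convert_notebook("my-notebook-id"))  # hypothetical job ID
```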

· 2 min read

trainML notebooks can now be forked into new instances to enable easy parallel experimentation. Unlike other cloud notebooks, forking a trainML notebook copies the entire working directory: all datasets, checkpoints, and other data carry over into the new notebook.
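A rough sketch of fanning one notebook out into several forks for parallel experiments, again assuming the trainml Python SDK; treating `copy` as the fork operation and the method names shown are assumptions, not the documented API.

```python
import asyncio
from trainml import TrainML  # assumes the trainml Python SDK is installed

async def fork_notebook(notebook_id: str, experiment_names: list):
    client = TrainML()
    notebook = await client.jobs.get(notebook_id)  # method name is an assumption
    forks = []
    for name in experiment_names:
        # Per the announcement, each fork receives the entire working
        # directory, including datasets and checkpoints, so each copy can
        # evolve independently of the original.
        forks.append(await notebook.copy(name=name))
    return forks

asyncio.run(fork_notebook("my-notebook-id", ["variant-a", "variant-b"]))
```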

· 4 min read

Persistent Datasets just got even better. Not only can you use the same dataset across many jobs in parallel at no additional charge, you can now also attach multiple datasets to a single job for free. If that weren't enough, you can now dynamically change the datasets attached to any notebook job as your needs evolve through the model development process. Additionally, more options have been added for job base environments, allowing you to save time and storage quota by using specific versions of popular frameworks.
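As an illustration of attaching several datasets and pinning a framework version at job creation, here is a minimal sketch assuming the trainml Python SDK; the keyword structure (`data`, `datasets`, `environment`), the dataset names, and the environment identifier are assumptions rather than documented values.

```python
import asyncio
from trainml import TrainML  # assumes the trainml Python SDK is installed

async def main():
    client = TrainML()
    # Attach multiple persistent datasets to one notebook at creation time.
    # The keyword structure below mirrors the SDK's general shape, but the
    # names are assumptions, not the documented signature.
    notebook = await client.jobs.create(
        name="multi-dataset-notebook",
        type="notebook",
        gpu_type="RTX 2080 Ti",
        disk_size=10,
        data=dict(
            datasets=[
                dict(name="dataset-a", type="existing"),  # hypothetical names
                dict(name="dataset-b", type="existing"),
            ],
        ),
        # Pin a specific framework version rather than the generic image;
        # the identifier here is illustrative.
        environment=dict(type="PYTORCH_PY38_17"),
    )
    return notebook

asyncio.run(main())
```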

· 4 min read

Customers using trainML to compete in Kaggle competitions or to analyze public Kaggle datasets can now populate trainML datasets directly from Kaggle competitions or datasets. They can also automatically load their Kaggle account credentials into notebook and training jobs for competition or kernel submissions.
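A minimal sketch of populating a dataset from a Kaggle competition, assuming the trainml Python SDK; the `source_type` and `source_uri` values shown are assumptions based on this announcement, not documented parameters.

```python
import asyncio
from trainml import TrainML  # assumes the trainml Python SDK is installed

async def main():
    client = TrainML()
    # Create a trainML dataset sourced directly from a Kaggle competition,
    # using the Kaggle credentials stored with your trainML account.
    dataset = await client.datasets.create(
        name="titanic",
        source_type="kaggle",
        source_uri="titanic",  # competition slug (hypothetical format)
    )
    # Wait until the download completes (method name is an assumption).
    await dataset.wait_for("ready")
    return dataset

asyncio.run(main())
```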