Run real-time inference workloads on NVIDIA Jetson fully managed by CloudBender™.
How It Works
Devices are edge inference nodes that contain a system-on-a-chip (SoC) accelerator and run a single, always-on inference model. Currently supported devices include:
- NVIDIA Jetson AGX Xavier
- NVIDIA Jetson Xavier NX
- NVIDIA Jetson AGX Orin
- NVIDIA Jetson Orin NX
- NVIDIA Jetson Orin Nano
CloudBender-compatible devices can be purchased through trainML. Contact us for a quote.
Devices are only supported in Physical regions configured with a centralized storage node. If you already have compute nodes in a region configured with local storage, you must create a new region to add devices.
Obtain the device's
minion_id from the sticker on the device or by attaching a display to it. Select
View Region from the action menu of the new device's physical region. Once on the region dashboard, click the
Add button on the Devices grid. Enter a name for the new device and its minion ID, then confirm.
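If a console session is available on the device, the minion ID can also be read programmatically. The sketch below assumes the standard SaltStack minion ID file location (`/etc/salt/minion_id`); the path and helper name are assumptions, not part of the CloudBender product:

```python
from pathlib import Path
from typing import Optional

# Standard SaltStack location for the minion ID file (an assumption here).
MINION_ID_PATH = Path("/etc/salt/minion_id")

def read_minion_id(path: Path = MINION_ID_PATH) -> Optional[str]:
    """Return the device's minion ID if the file exists, else None."""
    if not path.exists():
        return None
    return path.read_text().strip()
```

Running this on the device and copying the returned string avoids transcription errors from the sticker.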
You will be navigated back to the region's dashboard. The new device will finalize its provisioning process and may restart. When the process is complete, the device will automatically enter the
Active state. Once the device is active, you must set a device configuration that specifies the inference model to run. Once you have created a Device Configuration, select
Set Device Config from the action menu. Select the desired device configuration and click
Select. Once the configuration is set, deploy the inference model to the device by selecting it on the grid and clicking
Deploy Config on the toolbar, or select
Deploy Latest Config from the device's action menu.
When the deployment is complete, the
Inference Status will show
running, and the configuration status will indicate the date it was last deployed. While running, the inference job will have access to the SoC accelerator and any
media devices plugged into the device.
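On Linux-based devices like the Jetson family, attached cameras typically appear as V4L2 nodes under `/dev/video*`. A minimal sketch for enumerating them from inside the inference job (the helper name is hypothetical):

```python
import glob

def list_video_devices() -> list:
    """Enumerate V4L2 video device nodes (e.g. USB or CSI cameras)
    visible to the running job."""
    return sorted(glob.glob("/dev/video*"))
```

An empty list usually means no camera is plugged in or the device node was not passed through to the job.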
Devices incur a fixed fee of 15 credits per month.
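The fee is per device, so fleet cost scales linearly. A trivial sketch of the billing math (the helper name is hypothetical):

```python
# Fixed per-device fee from the pricing note above.
CREDITS_PER_DEVICE_PER_MONTH = 15

def fleet_monthly_cost(num_devices: int) -> int:
    """Total monthly cost in credits for a fleet of devices."""
    return num_devices * CREDITS_PER_DEVICE_PER_MONTH

# e.g. four devices cost 60 credits/month
```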