Up and Running with Distributed Training

Distributed training is a common challenge for many machine learning and deep learning teams.

This webinar will demystify scaling notebooks from a single node to distributed training. It will focus on a real application and feature plenty of live coding. Join us!

Topics will include:
- How to set up a Gradient notebook and production environment
- How to load dependencies and packages
- How to convert single-node experiments to multi-node
- How to schedule jobs using the integrated job runner
- How to take advantage of data and model parallelism
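To give a flavor of the data-parallelism topic above, here is a minimal, framework-agnostic sketch in plain Python. It is an illustration only (the webinar's actual framework and APIs are not specified here): each hypothetical worker computes a gradient on its own shard of the batch, and the gradients are averaged, mimicking the all-reduce step that libraries such as PyTorch's DistributedDataParallel perform.

```python
# Sketch of data parallelism: each worker computes a gradient on its
# shard of the batch, then gradients are averaged (an "all-reduce")
# so every worker applies the identical update.

def local_gradient(w, shard):
    # Gradient of mean squared error for the model y = w * x
    # over one worker's shard of (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    # Average the per-worker gradients, as an all-reduce would.
    return sum(grads) / len(grads)

def parallel_step(w, shards, lr=0.01):
    grads = [local_gradient(w, s) for s in shards]  # one per worker
    return w - lr * all_reduce_mean(grads)

# Two simulated "workers", each holding half of a batch from y = 3x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]
shards = [data[:2], data[2:]]

w = 0.0
for _ in range(200):
    w = parallel_step(w, shards)
print(round(w, 2))  # → 3.0, the true slope
```

Model parallelism, also covered in the session, instead splits the model itself across devices; the sharding pattern above applies to the data, not the parameters.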

This webinar took place on Wed Mar 11, 2020. Filling out the form to the right will provide you with a direct link to the recording.