
Fundamentals of Deep Learning for Multi-GPUs (9/27 and 9/28)
This workshop meets twice:
Tuesday, September 27, 12:30-4:30 PM
Wednesday, September 28, 1:00-4:30 PM
This 2-day workshop teaches you techniques for training deep neural networks on multiple GPUs to shorten the training time required for data-intensive applications. Working with deep learning tools, frameworks, and workflows for neural network training, you'll learn how to implement multi-GPU training in PyTorch so you can reduce the complexity of writing efficient distributed software and maintain accuracy when training a model across many GPUs.
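As a taste of the kind of technique covered, below is a minimal sketch of data-parallel training with PyTorch's DistributedDataParallel (DDP). The toy model, dataset, and hyperparameters are illustrative placeholders, not the workshop's actual materials.

# Minimal data-parallel training sketch with PyTorch DistributedDataParallel.
# The model, dataset, and hyperparameters below are placeholders for illustration.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy dataset; the DistributedSampler shards it across processes.
    data = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    # Toy model wrapped in DDP, which synchronizes gradients across GPUs.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle differently each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with, for example, torchrun --nproc_per_node=4 train_ddp.py, each process drives one GPU while DDP averages gradients across them, so the model stays consistent as training scales out.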
Workshop format: Interactive presentation with hands-on exercises
Target audience: This workshop is intended for researchers who would like to use multiple GPUs to train deep learning models in PyTorch.
Knowledge prerequisites: Participants should be comfortable with training deep learning models using a single GPU.
Hardware/software prerequisites: Bring a laptop and power cable. All of the hands-on exercises will be done in a web browser on the NVIDIA cloud.
Learning objectives: Participants will learn the theory and practice of training deep learning models using multiple GPUs.
Speakers
Srivathsan Koundinyan
NVIDIA
Sri works as a data scientist at NVIDIA. He holds a Ph.D. in computer science from Stanford University.
Hosted By
Co-hosted with: GradFUTURES