Distributed training in tf.keras with W&B

Sayak Paul · 9 Apr 2020

Distributed training is particularly useful when you have very large datasets and training costs need to scale with them. At that point it becomes unrealistic to train on a single hardware accelerator (a single GPU in this case), hence the need for distributed training.

Learn how to use tf.distribute.MirroredStrategy to distribute your tf.keras training workloads across multiple GPUs, and how to track those runs with Weights & Biases.
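
Below is a minimal sketch of what such a setup can look like. The dataset, model architecture, hyperparameters, and project name are illustrative assumptions rather than the exact configuration used in the report; the key pattern is building and compiling the model inside the strategy scope and passing WandbCallback to model.fit.

```python
# A minimal sketch: tf.distribute.MirroredStrategy + W&B logging.
# Dataset, model, and project name below are placeholders.
import tensorflow as tf
import wandb
from wandb.keras import WandbCallback

wandb.init(project="distributed-training")  # hypothetical project name

# MirroredStrategy replicates the model on every visible GPU
# and aggregates gradients across replicas.
strategy = tf.distribute.MirroredStrategy()
print(f"Number of devices: {strategy.num_replicas_in_sync}")

# Scale the global batch size with the number of replicas.
BATCH_SIZE = 64 * strategy.num_replicas_in_sync

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
train_ds = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(10_000)
    .batch(BATCH_SIZE)
)

# Model creation and compilation must happen inside the strategy
# scope so that variables are mirrored across GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(32, 32, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

# WandbCallback streams metrics to the Weights & Biases dashboard.
model.fit(train_ds, epochs=5, callbacks=[WandbCallback()])
```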

Check out the live dashboard and report.