1 Mar 2020 / Stacey Svetlichnaya, Deep Learning Engineer

Easy Data-Parallel Distributed Training in Keras

Did you know you can massively accelerate model training with a single Keras utility wrapper function? It's especially useful if you're laser-focused on one experimental direction while extra GPUs sit idle on your system. Discover the magic trick of data-parallel distributed training.
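The teaser doesn't show code, but here is a minimal sketch of the idea, assuming the wrapper in question is Keras's multi_gpu_model utility (the data-parallel helper available in Keras 2.x at the time); the model, data, and GPU count below are placeholders, and the script needs at least two visible GPUs to run:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# Define the model once, as usual.
model = Sequential([
    Dense(256, activation="relu", input_shape=(100,)),
    Dense(10, activation="softmax"),
])

# Data parallelism: each incoming batch is split into sub-batches, one per GPU,
# the replicas run in parallel, and the outputs are merged back on the CPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(optimizer="adam", loss="categorical_crossentropy")

# Placeholder data so the sketch runs end to end.
x = np.random.random((1024, 100))
y = np.random.random((1024, 10))

# batch_size is the global batch; with gpus=2 each replica sees 64 examples.
parallel_model.fit(x, y, epochs=2, batch_size=128)

If this is indeed the wrapper used, one caveat worth knowing: save checkpoints from the original template model (model.save(...)), not from the parallel wrapper returned by multi_gpu_model.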


