1 Mar 2020 / Stacey Svetlichnaya, Deep Learning Engineer

Easy Data-Parallel Distributed Training in Keras

Did you know you can dramatically cut model training time with a single Keras utility wrapper function? This is especially useful if you're laser-focused on one experimental direction while extra GPUs sit idle on your system. Discover the magic trick of data-parallel distributed training:
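As a rough sketch of what that wrapper looks like in practice (assuming the function in question is `keras.utils.multi_gpu_model`, the data-parallel wrapper available in Keras 2.x / TensorFlow up to 2.1; the toy model, dummy data, and GPU count below are placeholders for illustration):

```python
# Minimal sketch of data-parallel training with keras.utils.multi_gpu_model.
# Assumes a machine with 2 GPUs; model and data are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import multi_gpu_model

# Build a single-device model exactly as you normally would.
model = keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(784,)),
    layers.Dense(10, activation="softmax"),
])

# Wrap it: a replica of the model runs on each GPU, each replica processes
# a slice of every batch, and the results are merged back on the CPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Dummy data for illustration; use a batch size large enough to give
# each GPU a meaningful sub-batch.
x = np.random.rand(1024, 784).astype("float32")
y = np.random.randint(0, 10, size=(1024,))
parallel_model.fit(x, y, batch_size=256, epochs=2)
```

The appeal is that the training loop stays untouched: you build and debug the model on one device, then add a single wrapping call to spread each batch across every available GPU.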
