7 Jun 2019 / Nick Bardy

Multi-GPU Hyperparameter Sweeps in Three Simple Steps

Hyperparameter sweeps automatically test different configurations of your model. They address a wide range of needs, from running experiments under different test conditions, to exploring a dataset, to large-scale hyperparameter tuning.

Setting up the infrastructure for these sweeps can be tedious, so we've built W&B Sweeps to be simple to set up and flexible to deploy. Inspired by Google's Vizier, we've implemented a wide range of features, including Bayesian optimization and Hyperband early stopping. Integration is simple: if you have a machine learning script that runs on the command line, you're ready to go.

Step 1: Select Hyperparameters

First, select the hyperparameters you want to sweep over. Set them up in a YAML file, as detailed further in the sweep docs.
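
For reference, here's a minimal sweep.yaml sketch. The script name (train.py), the metric (val_loss), and the parameter ranges are illustrative assumptions, not prescriptions; see the sweep docs for the full set of options.

program: train.py          # training script to run (assumed name)
method: bayes              # search strategy: grid, random, or bayes
metric:
  name: val_loss           # metric your script logs via wandb.log (assumed)
  goal: minimize
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  batch_size:
    values: [32, 64, 128]

Then initialize your project and create the sweep from this file: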

wandb init # Initialize your project repo
wandb sweep sweep.yaml # returns your SWEEP_ID



Step 2: Launch Agents

Grab your sweep ID from the output of the command above and launch some agents to begin running your sweep.

wandb agent mcg70107

Sweep agents can run in any environment where wandb is installed. If you have multiple GPUs on your machine, launch one agent per GPU by setting the CUDA_VISIBLE_DEVICES environment variable.

CUDA_VISIBLE_DEVICES=0 wandb agent mcg70107
CUDA_VISIBLE_DEVICES=1 wandb agent mcg70107
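
If you have more GPUs, a small shell loop can launch one background agent per device. This sketch assumes four GPUs and the same sweep ID as above:

# Launch one background agent per GPU
for gpu in 0 1 2 3; do
  CUDA_VISIBLE_DEVICES=$gpu wandb agent mcg70107 &
done
wait  # block until all agents finish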

Step 3: Visualize Training

Running hyperparameter sweeps has opened up new possibilities in my research. Recently I've been using them as a tool to explore new datasets, for example ShapeNet for 3D semantic segmentation.

Here's a sweep where I explored learning rate variations proposed in a few papers.

These two papers inspired the approaches I took in my sweep.
