We’re excited to launch W&B Sweeps: a powerful, efficient way to do hyperparameter tuning and optimization.
With just a few lines of code, Sweeps automatically searches through high-dimensional hyperparameter spaces to find your best-performing model, with very little effort on your part.
Here’s how you can launch sophisticated hyperparameter sweeps in 3 simple steps.
First, let’s install the Weights & Biases library and add it to your training script.

pip install wandb

import wandb
from wandb.keras import WandbCallback

# initialize a W&B run; the sweep agent supplies the hyperparameters via wandb.config
wandb.init()
config = wandb.config

# define model architecture
# ...

# compile the model
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# add the WandbCallback() to log metrics during training
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=config.epochs,
          callbacks=[WandbCallback()])
You can define powerful sweeps simply by creating a YAML file that specifies the parameters to search through, the search strategy, and the optimization metric.
Here’s an example (sweep.yaml):

program: train.py
method: random
metric:
  name: val_loss
  goal: minimize
parameters:
  optimizer:
    values: ["adam", "sgd"]
  hidden_layer_size:  # parameter name is illustrative
    values: [96, 128, 148]
Let’s break this YAML file down: method sets the search strategy, metric names the value to optimize (and whether to minimize or maximize it), and parameters lists the hyperparameters and the candidate values to search through.
You can find a list of all the configuration options in the W&B Sweeps documentation.
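Beyond discrete values lists, the sweep config also supports continuous ranges. For example, a hypothetical learning_rate parameter (the name and bounds here are illustrative, not from the example above) can be sampled between a min and max:

```yaml
parameters:
  learning_rate:   # illustrative parameter, sampled between min and max
    min: 0.0001
    max: 0.1
```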
Run wandb sweep with the config file you created in step 1:

wandb sweep sweep.yaml

This creates your sweep, and returns both a unique identifier (SWEEP_ID) and a URL to track all your runs.
It’s time to launch our sweep and train some models!
You can do so by calling wandb agent with the SWEEP_ID you got from step 2.
wandb agent SWEEP_ID
This will start training models with different hyperparameter combinations and return a URL where you can track the sweep’s progress. You can launch multiple agents concurrently. Each of these agents will fetch parameters from the W&B server and use them to train the next model.
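To make the search space concrete, here is a toy sketch in plain Python (no W&B required) of the grid of combinations agents would cover for the example values above; the parameter names are assumptions for illustration:

```python
from itertools import product

# Hypothetical parameter grid mirroring the example sweep config;
# the parameter names are illustrative assumptions.
param_space = {
    "optimizer": ["adam", "sgd"],
    "hidden_layer_size": [96, 128, 148],
}

# A grid search visits every combination exactly once; random and
# Bayesian strategies sample from the same space instead.
combinations = [
    dict(zip(param_space, values))
    for values in product(*param_space.values())
]

for combo in combinations:
    print(combo)

# 2 optimizers x 3 layer sizes = 6 runs in total
print(len(combinations))
```

Each agent asks the W&B server for the next combination to try, so several agents can work through this space in parallel without duplicating runs.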
And voila! That's all there is to running a hyperparameter sweep!
Let’s see how we can extract insights about our model from sweeps next.
This plot maps hyperparameter values to model metrics. It’s useful for homing in on the combinations of hyperparameters that led to the best model performance.
The hyperparameter importance plot surfaces which hyperparameters were the best predictors of, and most highly correlated with, desirable values for your metrics.
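As a rough intuition for the correlation side of this plot, here is a toy example (not W&B’s actual implementation) that computes the linear correlation between one hyperparameter and a metric across a handful of hypothetical runs:

```python
# Toy illustration: Pearson correlation between one hyperparameter's
# values and a metric across hypothetical sweep runs.
def pearson(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

# hypothetical runs: larger hidden layers gave higher validation accuracy
hidden_layer_sizes = [96, 128, 148, 96, 128, 148]
val_accuracy = [0.81, 0.86, 0.90, 0.80, 0.87, 0.91]

# strongly positive: this hyperparameter would rank as important
print(pearson(hidden_layer_sizes, val_accuracy))
```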
These visualizations can help you save both time and resources when running expensive hyperparameter optimizations, by homing in on the parameters (and value ranges) that matter most and are therefore worth further exploration.
We created a simple training script and a few flavors of sweep configs for you to play with. We highly encourage you to give these a try. This repo also has examples to help you try more advanced sweep features like Bayesian optimization, Hyperband, and Hyperopt.