14 Jan 2020 / Lavanya Shukla, ML engineer at Weights & Biases

Find The Most Important Hyperparameters In Seconds

We're excited to launch the hyperparameter importance panel. It surfaces which of your hyperparameters were the best predictors of, and most strongly correlated with, desirable values of your metrics.

Correlation is the linear correlation between a hyperparameter and the chosen metric (in this case val_loss): a high positive correlation means that when the hyperparameter takes higher values, the metric tends to as well, and vice versa. Correlation is a useful metric to look at, but it can't capture second-order interactions between inputs, and it gets messy when comparing inputs with wildly different ranges.

Therefore we also calculate an importance metric: we train a random forest with the hyperparameters as inputs and the metric as the target output, and report the random forest's feature importance values.
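For intuition, here's a minimal sketch of that calculation (not the panel's exact implementation), assuming a hypothetical pandas DataFrame of sweep results with one row per run:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical sweep results: hyperparameter columns plus the metric.
runs = pd.DataFrame({
    "learning_rate": [1e-3, 1e-2, 3e-3, 1e-4, 3e-2],
    "batch_size":    [32, 64, 128, 32, 256],
    "epochs":        [10, 20, 10, 40, 20],
    "val_loss":      [0.41, 0.35, 0.38, 0.52, 0.47],
})

X = runs.drop(columns=["val_loss"])  # hyperparameters as inputs
y = runs["val_loss"]                 # metric as the target output

# Fit a random forest and report its feature importances.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
for name, score in sorted(zip(X.columns, forest.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")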

The idea for this technique was inspired by a conversation with Jeremy Howard, who has pioneered the use of random forest feature importances to explore hyperparameter spaces at Fast.ai. We highly recommend you check out his phenomenal lecture (and these notes) to learn more about the motivation behind this analysis.

Let’s create one so you can see how useful this panel is for yourself.

Creating A Hyperparameter Importance Panel

Go to your Weights & Biases Project. If you don’t have one, you can use this project.

From your project page, click Add Visualization.


Then choose Parameter Importance.

Voilà! There’s your chart. You don’t need to write any new code, other than integrating Weights & Biases into your project as you normally do. This means all your Weights & Biases projects already have a Parameter Importance panel, ready to be added.

Let’s break this down.

Interpreting A Hyperparameter Importance Panel

The panel shows you all the parameters passed to the wandb.config object in your training script. Next, it shows the feature importances and correlations of these config parameters with respect to the model metric you select (val_loss in this case).
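Here's a minimal sketch of that integration, with a hypothetical project name, hypothetical hyperparameter values, and a stand-in for the real training step:

import random
import wandb

# Hypothetical project name and hyperparameter values.
wandb.init(project="my-project", config={
    "learning_rate": 1e-3,
    "batch_size": 64,
    "epochs": 10,
    "optimizer": "adam",
})
config = wandb.config  # the panel reads hyperparameters from here

for epoch in range(config.epochs):
    val_loss = random.random()         # stand-in for a real training step
    wandb.log({"val_loss": val_loss})  # the panel reads metrics from here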

Importance

The importance column shows you the degree to which each hyperparameter was useful in predicting the chosen metric. We can imagine a scenario in which we start by tuning a plethora of hyperparameters and use this plot to home in on which ones merit further exploration. Subsequent sweeps can then be limited to the most important hyperparameters, finding a better model faster and more cheaply.

Note: We calculate these importances using a tree-based model rather than a linear model, as tree-based models are more tolerant of both categorical data and data that isn't normalized.

In the panel above we can see that epochs, learning_rate, batch_size and weight_decay were fairly important. As a next step, we might run another sweep exploring more fine-grained values of these hyperparameters. Interestingly, while learning_rate and batch_size were important, they weren't very well correlated with the output.

This brings us to correlations.

Correlations

Correlations capture linear relationships between individual hyperparameters and metric values. They answer the question: is there a significant relationship between a hyperparameter, say the SGD optimizer, and my val_loss? (In this case, the answer is yes.) Correlation values range from -1 to 1, where positive values represent positive linear correlation, negative values represent negative linear correlation, and a value of 0 represents no correlation. A magnitude greater than 0.7 generally represents strong correlation.
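Here's a minimal sketch of how you could compute such correlations yourself, assuming a hypothetical table of runs; a categorical choice like the optimizer is one-hot encoded so each value (e.g. optimizer=sgd) gets its own correlation with val_loss:

import pandas as pd

# Hypothetical sweep results: one row per run.
runs = pd.DataFrame({
    "epochs":    [10, 20, 10, 40, 20],
    "optimizer": ["sgd", "adam", "rmsprop", "sgd", "nadam"],
    "val_loss":  [0.41, 0.35, 0.38, 0.30, 0.47],
})

# One-hot encode the categorical optimizer column, then report the
# linear correlation of every input with val_loss.
encoded = pd.get_dummies(runs, columns=["optimizer"], dtype=float)
print(encoded.corr()["val_loss"].drop("val_loss"))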

We might use this graph to further explore the values that have a higher correlation to our metric (in this case we might pick stochastic gradient descent or adam over rmsprop or nadam), or train for more epochs.

Quick note on interpreting correlations:

The disparities between importance and correlations result from the fact that importance accounts for interactions between hyperparameters, whereas correlation only measures the effects of individual hyperparameters on metric values. Additionally, correlations capture only linear relationships, whereas importances can capture more complex ones.
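To make the difference concrete, here's a hypothetical example: a hyperparameter with a sweet spot in the middle of its range has near-zero linear correlation with the metric, yet a random forest still ranks it as by far the most important input:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
lr = rng.uniform(0.0, 1.0, n)        # hypothetical: sweet spot at 0.5
dropout = rng.uniform(0.0, 1.0, n)   # hypothetical: pure noise, no effect
val_loss = (lr - 0.5) ** 2 + rng.normal(0.0, 0.01, n)

# Linear correlation misses the symmetric, quadratic relationship...
print("corr(lr, val_loss):", np.corrcoef(lr, val_loss)[0, 1])  # close to 0

# ...but the random forest importance still flags lr as the driver.
X = np.column_stack([lr, dropout])
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, val_loss)
print("importances [lr, dropout]:", forest.feature_importances_)  # lr dominates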

As you can see, both importance and correlations are powerful tools for understanding how your hyperparameters influence model performance. We hope this panel helps you capture these insights and home in on a powerful model faster.
