13 Jan 2020 / Lavanya Shukla, ML engineer at Weights & Biases

Visualize LightGBM Performance in One Line of Code

Gradient boosted decision trees (GBDTs) are the state of the art for building predictive models on structured data.

LightGBM, a gradient boosting framework by Microsoft, has recently dethroned XGBoost to become the go-to GBDT algorithm (along with CatBoost). It outperforms XGBoost in training speed, memory usage, and the size of datasets it can handle. LightGBM achieves this by using histogram-based algorithms that bucket continuous features into discrete bins during training.
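
For example, the number of histogram bins is controlled by the max_bin parameter, which defaults to 255. Here's a sketch of a params dict we'll reuse below; the other values are illustrative defaults, not tuned settings:

# max_bin caps how many discrete bins each continuous feature is bucketed into;
# fewer bins means faster training and lower memory use, at some cost in split precision.
params = {
    "objective": "binary",   # illustrative task choice
    "metric": "auc",         # logged per iteration by the callback below
    "max_bin": 255,          # LightGBM's default number of histogram bins
    "num_leaves": 31,
    "learning_rate": 0.1,
}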

We want to make it incredibly easy for people to look under the hood of their models, so we built a callback that helps you visualize your LightGBM model's performance in just one line of code.

# Import the callback
import lightgbm as lgb
import wandb
from wandb.lightgbm import wandb_callback

wandb.init(project="my-lightgbm-project")  # start a W&B run to log to

# Add the callback; lgb.train expects an lgb.Dataset, not a raw feature matrix
train_data = lgb.Dataset(X_train, label=y_train)
lgb.train(params, train_data, valid_sets=[train_data], callbacks=[wandb_callback()])

You can see the results in your Weights & Biases dashboard.


Once you've trained multiple models, you can compare their performance side by side in a single dashboard.
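
For instance, you might sweep a hyperparameter in a loop, starting a fresh run for each setting so the dashboard shows one curve per model. The project name and parameter grid here are hypothetical:

import lightgbm as lgb
import wandb
from wandb.lightgbm import wandb_callback

train_data = lgb.Dataset(X_train, label=y_train)
valid_data = lgb.Dataset(X_valid, label=y_valid, reference=train_data)

for num_leaves in [15, 31, 63]:  # hypothetical grid
    params = {"objective": "binary", "metric": "auc", "num_leaves": num_leaves}
    run = wandb.init(project="my-lightgbm-project", config=params, reinit=True)
    lgb.train(params, train_data, valid_sets=[valid_data], callbacks=[wandb_callback()])
    run.finish()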


We encourage you to give it a try on your own models, or via this Colab notebook.

If you liked this, we also have a callback for XGBoost.
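
Usage mirrors the LightGBM callback. A minimal sketch, assuming dtrain is an xgb.DMatrix you've already built:

import xgboost as xgb
import wandb
from wandb.xgboost import wandb_callback

wandb.init(project="my-xgboost-project")  # hypothetical project name

# Same pattern: pass the callback to xgb.train
bst = xgb.train(params, dtrain, callbacks=[wandb_callback()])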
