ML Tools for Kaggle
Make it to the leaderboard faster with experiment tracking and hyperparameter optimization
Optimize quickly
Run lots of quick experiments to find the winning model fast. With just a couple lines of code, track and compare results in minutes.
Real-time debugging
Visualize predictions during training and identify common cases where different models perform poorly.
Centralized dashboard
Use our free, hosted dashboard to seamlessly compare accuracy across different versions of your models, from any machine.
Add a couple lines to your script and see results immediately.
Get started →
Features
Dashboard
Iterate on model architecture quickly
In a Kaggle competition, you need to iterate faster than the competition by running lots of quick experiments. Use the dashboard to track your model's performance and predictions in real time.

Answer questions fast: which learning rate worked best? Did adding BatchNorm help? Am I overfitting, or using too many GPU resources?

Track your model performance →

import torch
import torch.nn as nn

import wandb

# Start tracking this script as a run
wandb.init(project="not-iris")

# Inside your training loop, log any metric
wandb.log({"acc": accuracy, "val_acc": val_accuracy})

Fast Integration
Get started in minutes
You can integrate W&B with just a few lines of code. We're framework agnostic, so you can use anything from scikit-learn and XGBoost to TensorFlow and PyTorch.

Integrate W&B in 5 mins →
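
A minimal sketch of what that can look like with Keras (the model, x_train, and y_train below stand in for your own code, and the config values are illustrative):

import wandb
from wandb.keras import WandbCallback

# Start a run and record your config
wandb.init(project="not-iris", config={"epochs": 10, "batch_size": 32})

# The callback streams losses and metrics to your dashboard as you train
model.fit(x_train, y_train,
          epochs=wandb.config.epochs,
          batch_size=wandb.config.batch_size,
          callbacks=[WandbCallback()])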
Sweeps
Hyperparameter optimization
Optimize your model hyperparameters, and you could go from bronze to gold on the leaderboard.

Add a few lines of code to track your configuration and metrics, then try W&B Sweeps to automatically launch dozens of parallel experiments that search the hyperparameter space for you.

Launch a sweep in 2 minutes →
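
As a sketch, a random-search sweep over two hyperparameters might look like this (the search space is illustrative, and train is assumed to be your own training function that reads wandb.config and logs val_acc):

import wandb

# Define the search strategy, the metric to optimize, and the search space
sweep_config = {
    "method": "random",  # also supports "grid" and "bayes"
    "metric": {"name": "val_acc", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"values": [1e-2, 1e-3, 1e-4]},
        "batch_size": {"values": [32, 64, 128]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="not-iris")

# Launch 20 experiments, each calling train() with a new config
wandb.agent(sweep_id, function=train, count=20)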
GPU Metrics
Resource efficient model training
GPU resources are expensive, and winning a Kaggle competition means being as resource efficient (read: frugal) as possible. W&B automatically tracks your GPU and other resource usage, so you can eliminate the most resource-intensive models and hyperparameters from your experiments.

See an example dashboard →
Rich Media
Debug performance in real time
Debug models by visualizing the predictions they're making in real time. W&B supports logging images, videos, audio, tables, HTML, metrics, plots, molecules, 3D objects, and point clouds with one line of code.

These visualizations help you understand where your model is failing, where it performs best, and which failure cases are most common.

Visualize model predictions →
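
For example, logging a batch of predictions as captioned images takes one line (images, preds, and targets below are placeholders for your own data):

import wandb
wandb.init(project="not-iris")

# Each wandb.Image appears in the dashboard while the run is still training
wandb.log({"predictions": [
    wandb.Image(img, caption=f"pred: {pred}, label: {target}")
    for img, pred, target in zip(images, preds, targets)
]})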
Save models forever
Reproduce your results any time
Save everything you need to reproduce your models (weights, architecture, predictions, code) to a safe place in the cloud.

You don't have to re-train a model to revisit it: simply view its performance days, weeks, or even months later. Before the final submission deadline, you can compare the performance of all the models you trained over the previous months and download the predictions from the best-performing one for submission.

Save & restore a model →
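
A minimal sketch of saving a file with a run and pulling it back later (the filename and run path are placeholders):

import wandb
wandb.init(project="not-iris")

# After training, write your weights to disk, then sync the file
# to the cloud alongside the run's metrics and config
wandb.save("model.pt")

# Weeks later, restore the same file from the finished run
weights_file = wandb.restore("model.pt", run_path="user/not-iris/run_id")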

import wandb
wandb.init(project="not-iris")

# ROC
wandb.sklearn.plot_roc(y_true, y_probas, labels)

# PR
wandb.sklearn.plot_precision_recall(y_true, y_probas, labels)

# Feature Importance
wandb.sklearn.plot_feature_importances(model, feature_names)

Custom plots
Native support for your favorite plots
Visualize your favorite plots with just one line of code.

Out of the box, we support ROC and PR curves, confusion matrices, learning curves, feature importance plots, calibration curves, and many more.

See a live example →
Share results
Contribute to the Kaggle community
Whether you're entering competitions with teammates or writing a cool kernel that explains your model to the Kaggle community, W&B can help bring your models to life with Reports. Reports are like blog posts that come alive: your readers can interact with model predictions at various epochs, explore the results of hyperparameter sweeps, and more.

Reports serve as a centralized repository of the metrics, models, hyperparameters, predictions, and accompanying notes behind your work, giving you a bird's-eye view of your machine learning workflow: all the pieces that went into building the model, so you can reproduce your results.

They make it easy to explain how the model works, and share your results with collaborators, competition organizers, or even your boss.

A report on predicting protein structures →
Use Cases
CURRENT EVENTS
COVID-19 Research using PyTorch
In this tutorial, we provide a boilerplate template for anyone who'd like to engage in research on COVID-19 datasets.
TUTORIAL
Debugging Neural Networks
See what makes a neural network underperform, and try debugging techniques including visualizing the gradients and parameters.
COMPUTER VISION
Can GANs Be Detected?
Learn how CNNDetection works and train a classifier to detect images that are generated by many different models.
TUTORIAL
Visualize Scikit Model Performance
Learn how to visualize your scikit-learn model's performance with just a few lines of code, and try live plots in this interactive report.
AUTONOMOUS VEHICLES
The View from the Driver's Seat
Check out a reproducible model for semantic segmentation for scene parsing on the Berkeley Deep Drive 100K dataset.
BIOLOGY
Predicting Protein Structures
Despite recent advancements in ML, protein structure prediction remains one of the "holy grails" of molecular biology.
COMPUTER VISION
Exploring Neural Style Transfer
We'll go through the neural style transfer algorithm by Gatys et al., implement it, and track it using the W&B library.
TUTORIAL
Effects of Weights Initialization
Compare a plethora of weight initialization methods for neural nets, and learn a simple recipe for initializing your model's weights.
ADVANCED
Distributed Training
Explore data-parallel distributed training in Keras. In this report you can see the effect of different GPU count configurations.