As an engineer at W&B, nothing would make me happier than supporting the open source ML community with useful tools, which is why our product is and always will be free for open source projects. One of the reasons I joined W&B was the opportunity to contribute back to the community, so I would love for our product to provide value to the innovative projects out there pushing the field forward.
So to all the open source projects out there, I have some good news: you no longer need an account to submit runs to W&B! In the next sections, I’ll walk you through how to instrument a training script in a Jupyter notebook and then go over how to instrument a Python codebase. At the end of each section you’ll have training scripts logging to W&B, giving you access to our product’s experiment tracking and data visualization functionality — all without an account!
Let’s instrument a script that trains an MNIST classifier written using Keras. We’ll be using this notebook as a reference. Feel free to make a copy of the notebook by selecting File → Save a copy in Drive.
That’s it! Your training script is set up with W&B, so now any new training jobs you run in your notebook will be automatically tracked.
If you have a project that uses TensorFlow, PyTorch, Keras, XGBoost or Fast.ai then this is a perfect opportunity to instrument it with W&B. In just a few easy steps, you’ll be able to track every experiment you launch and monitor your models as they train.
# Keras
from wandb.keras import WandbCallback
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])

# TensorFlow
import tensorflow as tf
wandb.tensorflow.log(tf.summary.merge_all())

# XGBoost
import xgboost as xgb
from wandb.xgboost import wandb_callback
bst = xgb.train(param_list, d_train, callbacks=[wandb_callback()])

# Fast.ai
from wandb.fastai import WandbCallback
learn = cnn_learner(data, model, callback_fns=WandbCallback)
Now you can launch your training script just as you normally would. The first time you run your script after integrating with W&B, you’ll get a prompt like the following:
Once you select the first option, your training script will proceed, and W&B will track the run in the background. Click the run link included in the output to monitor your model's performance in real time.
That’s it! W&B will now automatically track all of your experiments, so you can just focus on building groundbreaking new models.
We're building lightweight, flexible experiment tracking tools for deep learning. Add a couple of lines to your Python script, and we'll keep track of your hyperparameters and output metrics, making it easy to compare runs and see the whole history of your progress. Think of us like GitHub for deep learning.