Never lose your progress again.

Save everything you need to compare and reproduce models — in 5 minutes.

Create a free account
Track your machine learning workflow
Central repository for your model pipelines
Save everything you need to compare and reproduce models — architecture, hyperparameters, weights, model predictions, GPU usage, git commits, and even datasets. You can save experiment & dataset files directly to W&B or store pointers to your own storage.
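For example, a minimal sketch of both approaches using the artifacts API (the project name, file name, and bucket path below are placeholders):

import wandb

# Start a run (project name is a placeholder)
run = wandb.init(project="my-project")

# Save an experiment file directly to W&B
wandb.save("model.h5")

# Or version a dataset while storing only a pointer to your own storage
artifact = wandb.Artifact("training-data", type="dataset")
artifact.add_reference("s3://my-bucket/datasets/train/")  # placeholder bucket path
run.log_artifact(artifact)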
Visualize predictions
Debug performance in real time
Log model predictions to see how your model is performing in real time, and identify problem areas during training. We support rich media including images, video, audio, bounding boxes, segmentation maps, and 3D objects.
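As a rough sketch, logging image predictions each epoch looks like this (sample_predictions here is a placeholder for your own iterable of image, prediction, and label data):

import wandb

wandb.init(project="debug-predictions")  # placeholder project name

# Log a handful of predictions as rich media;
# sample_predictions is a placeholder for (image, prediction, label) tuples
examples = [wandb.Image(img, caption=f"pred: {pred}, truth: {label}")
            for img, pred, label in sample_predictions]
wandb.log({"examples": examples})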
Collaborative reports
Share high level updates and detailed work logs
It's never been easier to share updates with your coworkers, stakeholders, or even the whole world. Explain how your model works, show graphs of how your model versions improved, discuss bugs, and demonstrate progress towards milestones with Reports.

See an example report →
Hyperparameter optimization
Try dozens of model versions quickly with Sweeps
Optimize models with our massively scalable hyperparameter search tool. Sweeps are lightweight, fast to set up, and plug into your existing infrastructure for running models.
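A sweep can be defined and launched in a few lines, as in this minimal sketch (the metric name, parameter ranges, project name, and train function are placeholders):

import wandb

# Placeholder sweep definition: tune learning rate and batch size to minimize loss
sweep_config = {
    "method": "bayes",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [32, 64, 128]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="my-sweeps")  # placeholder project

def train():
    # Each trial gets hyperparameters chosen by the sweep via the run config
    with wandb.init() as run:
        lr = run.config.learning_rate
        # ... train your model with lr, then log the metric being optimized
        wandb.log({"loss": 0.0})  # placeholder value

# Run trials on this machine; agents can also run on your own infrastructure
wandb.agent(sweep_id, function=train)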
System metrics
CPU and GPU usage across runs
System metrics are generated automatically for every run, so you can visualize live stats like GPU utilization, identify training bottlenecks, and avoid wasting expensive resources.
Fast and scalable
Robust, flexible experiment tracking at scale
We handle millions of models every month for teams doing some of the most cutting-edge deep learning research.
Get started in 5 minutes
Add a few lines to your script to start logging results.
Our lightweight integration works with any Python script.
# Flexible integration for any Python script
import tensorflow as tf  # TF1-style flags supply the run config below
import wandb

# 1. Start a W&B run
wandb.init(config=tf.flags.FLAGS, sync_tensorboard=True)

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics over time to visualize performance
wandb.log({"loss": loss})
import wandb

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# 3. Log gradients and model parameters
wandb.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
  ...  
  if batch_idx % args.log_interval == 0:      
    # 4. Log metrics to visualize performance
    wandb.log({"loss": loss})
import wandb
from wandb.keras import WandbCallback

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Define your Keras model here

# 3. Log layer dimensions and metrics over time
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
import wandb
wandb.init(project="visualize-sklearn")

# Model training here

# Log classifier visualizations
wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test, y_pred, y_probas, labels, model_name='SVC', feature_names=None)

# Log regression visualizations
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test,  model_name='Ridge')

# Log clustering visualizations
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels=None, model_name='KMeans')
# 1. Import the wandb library
import wandb

# 2. Run a script with the Trainer, which automatically logs losses, evaluation metrics, model topology and gradients
!python run_glue.py \
 --model_name_or_path bert-base-uncased \
 --task_name MRPC \
 --data_dir $GLUE_DIR/$TASK_NAME \
 --do_train \
 --evaluate_during_training \
 --max_seq_length 128 \
 --per_gpu_train_batch_size 32 \
 --learning_rate 2e-5 \
 --num_train_epochs 3 \
 --output_dir /tmp/$TASK_NAME/ \
 --overwrite_output_dir \
 --logging_steps 50
import wandb
import xgboost

# 1. Start a new run
wandb.init(project="visualize-models", name="xgboost")

# 2. Add the callback
bst = xgboost.train(param, xg_train, num_round, watchlist, callbacks=[wandb.xgboost.wandb_callback()])

# Get predictions
pred = bst.predict(xg_test)
import wandb
import lightgbm as lgb
from wandb.lightgbm import wandb_callback

# 1. Start a W&B run
wandb.init(project="visualize-models", name="lightgbm")

# 2. Add the wandb callback
gbm = lgb.train(params,
                lgb_train,
                num_boost_round=20,
                valid_sets=[lgb_eval],
                valid_names=['validation'],
                callbacks=[wandb_callback()])

# Get predictions (X_test is your held-out feature matrix)
pred = gbm.predict(X_test)
import wandb
from fastai2.callback.wandb import *

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Automatically log model metrics
learn.fit(..., cbs=WandbCallback())
Explore a live dashboard
Getting Started
Advice from experts
See how teams working on cutting-edge deep learning projects use W&B to train, collaborate on, and debug their workflows.

"W&B was fundamental for launching our internal machine learning systems, as it enables collaboration across various teams."

Hamel Husain
GitHub

"W&B allows us to scale up insights from a single researcher to the entire team and from a single machine to thousands."

Wojciech Zaremba
Cofounder of OpenAI

"W&B is a key piece of our fast-paced, cutting-edge, large-scale research workflow: great flexibility, performance, and user experience."

Adrien Gaidon
Toyota Research

Have questions on getting started? Ask us on Slack.

Never lose your progress again.

Create a free account