Developer tools for machine learning

Experiment tracking, hyperparameter optimization, model and dataset versioning

Track, compare, and visualize ML experiments with 5 lines of code.

Try a live notebook →
# Flexible integration for any Python script
import wandb

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics over time to visualize performance
wandb.log({"loss": loss})
# TensorFlow
import tensorflow as tf
import wandb

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics over time to visualize performance
with tf.Session() as sess:
  # ...
  wandb.tensorflow.log(tf.summary.merge_all())
# PyTorch
import wandb

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# 3. Log gradients and model parameters
wandb.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
  ...  
  if batch_idx % args.log_interval == 0:      
    # 4. Log metrics to visualize performance
    wandb.log({"loss": loss})
# Keras
import wandb
from wandb.keras import WandbCallback

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Define and compile the model here

# 3. Log layer dimensions and metrics over time
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
# Scikit-learn
import wandb
wandb.init(project="visualize-sklearn")

# Model training here

# Log classifier visualizations
wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test,
                              y_pred, y_probas, labels,
                              model_name='SVC', feature_names=None)

# Log regression visualizations
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name='Ridge')

# Log clustering visualizations
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels=None, model_name='KMeans')
# Hugging Face Transformers
# 1. Install the wandb library
pip install wandb

# 2. Run a script with the Trainer, which automatically logs losses,
#    evaluation metrics, model topology, and gradients
python run_glue.py \
 --model_name_or_path bert-base-uncased \
 --task_name MRPC \
 --data_dir $GLUE_DIR/$TASK_NAME \
 --do_train \
 --evaluate_during_training \
 --max_seq_length 128 \
 --per_gpu_train_batch_size 32 \
 --learning_rate 2e-5 \
 --num_train_epochs 3 \
 --output_dir /tmp/$TASK_NAME/ \
 --overwrite_output_dir \
 --logging_steps 50
# XGBoost
import wandb
import xgboost

# 1. Start a new run
wandb.init(project="visualize-models", name="xgboost")

# 2. Add the callback
bst = xgboost.train(param, xg_train, num_round, watchlist,
                    callbacks=[wandb.xgboost.wandb_callback()])

# Get predictions
pred = bst.predict(xg_test)

Central dashboard

A system of record for your model results

Add a few lines to your script, and each time you train a new version of your model, you'll see a new experiment stream live to your dashboard.

Learn more →
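
For example, here is a minimal sketch of a training script that streams to the dashboard; the project name, run name, and logged values below are placeholders:

import wandb

# Each invocation of the script appears as a separate run on the project dashboard
run = wandb.init(project="gpt-3", name="baseline-v2", tags=["baseline"])

for epoch in range(10):
    # ... train for one epoch, then log metrics (placeholder values)
    wandb.log({"epoch": epoch, "loss": 1.0 / (epoch + 1)})

run.finish()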

Hyperparameter sweeps

Try dozens of model versions quickly

Optimize models with our massively scalable hyperparameter search tool. Sweeps are lightweight, fast to set up, and plug in to your existing infrastructure for running models.

Learn more →
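
As a rough sketch, a sweep is defined by a config dictionary and launched with an agent; the train() function, parameter ranges, and project name below are illustrative placeholders:

import wandb

sweep_config = {
    "method": "random",  # also "grid" or "bayes"
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    # The agent injects the sampled hyperparameters into wandb.config
    wandb.init()
    lr = wandb.config.learning_rate
    # ... train the model with lr, then log the result (placeholder value)
    wandb.log({"loss": 0.0})

sweep_id = wandb.sweep(sweep_config, project="gpt-3")
wandb.agent(sweep_id, function=train, count=10)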

Artifact tracking

Lightweight model and dataset versioning

Save every detail of your end-to-end machine learning pipeline — data preparation, data versioning, training, and evaluation.

Learn more →
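
For instance, a dataset directory can be versioned as an artifact in a few lines; the project, artifact name, and path below are placeholders:

import wandb

run = wandb.init(project="gpt-3", job_type="dataset-upload")

# Version a local dataset directory as an artifact
artifact = wandb.Artifact("training-data", type="dataset")
artifact.add_dir("./data")
run.log_artifact(artifact)

run.finish()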

Interactive reports

Explore results and share findings

It's never been easier to share project updates. Explain how your model works, show graphs of how model versions improved, discuss bugs, and demonstrate progress towards milestones.

Learn more →

Collaboration

Seamlessly share progress across projects

Manage team projects with a lightweight, central system of record. It's easy to hand off projects when every experiment is automatically well documented and saved centrally.

Learn more →

Governance

Protect and manage valuable IP

Use this central platform to reliably track all your organization's machine learning models, from experimentation to production.

Learn more →

Data provenance

Reliable records for auditing models

Capture all the inputs and outputs of every step in your pipeline, so you can trace exactly which dataset version, code, and hyperparameters produced each model.

Learn more →
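
A minimal sketch of how a training run might record which dataset version it consumed and which model it produced; the artifact names and file paths are placeholders:

import wandb

run = wandb.init(project="gpt-3", job_type="train")

# Declare the exact dataset version this run depends on; the lineage is recorded
dataset = run.use_artifact("training-data:v3")
data_dir = dataset.download()

# ... train on the files in data_dir ...

# Log the resulting model as a new artifact linked to this run
model_artifact = wandb.Artifact("gpt-3-model", type="model")
model_artifact.add_file("model.pt")
run.log_artifact(model_artifact)

run.finish()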

“W&B is a key piece of our fast-paced, cutting-edge, large-scale workflow.”
— Adrien Gaidon, Toyota Research Institute

Example Projects

Once you’re using W&B to track and visualize ML experiments, it’s easy to create a report to showcase your work.

View gallery →

Trusted by 40,000+ machine learning practitioners
at 100+ companies and research institutions

Wojciech Zaremba
Cofounder of OpenAI

"W&B allows us to scale up insights from a single researcher to the entire team and from a single machine to thousands."

Hamel Husain
GitHub

"W&B was fundamental for launching our internal machine learning systems, as it enables collaboration across various teams."

Stay connected with the ML community

Never lose track of another ML project.