If you're using fastai, it's now easier than ever to log, visualize, and compare your experiments. Just import wandb and add our callback:
import wandb
from wandb.fastai import WandbCallback

wandb.init()
learn = cnn_learner(data, model, callback_fns=WandbCallback)
Add wandb, and you'll get a powerful, persistent, and shareable dashboard for exploring your results and comparing experiments. Here are a few snapshots from a semantic segmentation project where I'm comparing ground truth and predictions.
I'm able to look at example outputs, visually compare versions of my model, and identify anomalies.
Here are some graphs from the same fastai project. I like this as an alternative to TensorBoard or TensorBoardX because W&B keeps the hyperparameters, metric graphs, and checkpointed model versions organized automatically. I can send a link to share my findings, and collaborators can explore my results independently without relying on my screenshots of local TensorBoard instances. It's also nice to know that the results are always saved in the cloud, so I never have to dig through messy local files.
If you'd like to see the fastai integration on a quick example problem, clone my image classification repo and try your hand at classifying Simpsons characters. Give it a try →