2 Feb 2020 / Nick Bardy

Visualizing 3D Bounding Boxes

When I started working with self-driving datasets, I grew frustrated with how challenging it was to understand the results of my work. A single 3D visualization can reveal insights that you’ll never find in a terminal full of metrics. I wanted to make it easy to render these visuals for any project and dataset.

Now, with just a few lines of code, you can log your data, see the 3D visualization, and share it with a link.

import wandb

# points_rgb: numpy array of points, one x, y, z (+ optional r, g, b) row each.
# boxes: numpy array of dicts, each with a box's 8 "corners", "label", "color".
wandb.log({
    "point_clouds_with_bb": wandb.Object3D({
        "type": "lidar/beta",
        "points": points_rgb,
        "boxes": boxes,
    })
})
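
Building points_rgb is just a matter of stacking columns. As a minimal sketch (assuming xyz is a hypothetical (N, 3) numpy array of lidar returns), here's one way to color every point by its height:

import numpy as np

z = xyz[:, 2]                                          # point heights
z_norm = (z - z.min()) / max(z.max() - z.min(), 1e-6)  # scale to [0, 1]
colors = np.stack([255 * z_norm,                       # red grows with height
                   np.zeros_like(z_norm),
                   255 * (1 - z_norm)], axis=1)        # blue fades with height
points_rgb = np.hstack([xyz, colors])                  # (N, 6): x, y, z, r, g, b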

See the live W&B report ->

I tried this W&B logging on the new Lyft self-driving dataset. Using their baseline clustering algorithm, I compared its results with the ground-truth labels, immediately finding some interesting results.

(Red: Prediction, Green: Ground Truth)
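
Each box in the boxes array is a dict carrying its own corners, label, and color, which is what makes the red/green comparison above possible. Here's a minimal sketch; corners_of, predictions, and ground_truth are hypothetical stand-ins for the dataset's own annotation format:

import numpy as np

def boxes_for_wandb(annotations, color, label):
    # corners_of is a hypothetical helper returning the 8 [x, y, z]
    # corners of one annotated box.
    return [
        {"corners": corners_of(ann), "label": label, "color": color}
        for ann in annotations
    ]

# Red for the clustering predictions, green for the ground-truth labels.
boxes = np.array(
    boxes_for_wandb(predictions, color=(255, 0, 0), label="prediction")
    + boxes_for_wandb(ground_truth, color=(0, 255, 0), label="ground truth")
)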

The clustering algorithm struggled to make sense of car orientation. A lidar sensor only captures points from one direction, so the point cloud is dense on the side facing the sensor and missing on the far side. As a result, the predicted bounding boxes end up skewed toward the plane facing the sensor.

This reveals issues not just with the model, but with the dataset itself. Above is one of many examples of possibly mislabeled data.

Try it yourself. I'm excited to see what you come up with!

Play with the results ->

See the docs ->
