Like most deep learning problems, computer vision for autonomous driving improves with more labeled data, but labeling is often prohibitively slow and expensive. In a CVPR 2017 paper, Tinghui Zhou's team presents an unsupervised framework for estimating depth and camera motion from monocular video, such as footage from a car dashboard camera. They train two networks jointly: one predicts depth from a single frame, and the other predicts camera motion between the current frame and nearby frames (e.g. the previous and next frames). The supervision signal is view synthesis: the nearby frames are warped according to the predicted depth and motion, and the networks are penalized by how poorly the warped frames reconstruct the current view. At test time, only the first network is needed, enabling depth perception from a single photo. Dive into examples and details in the full report.
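The view-synthesis supervision can be sketched in a few lines. The snippet below is a minimal, hypothetical NumPy illustration of the photometric loss: given a predicted depth map for the target frame, a predicted target-to-source camera transform, and known intrinsics, it warps the source frame into the target view (with simple nearest-neighbor sampling) and measures the reconstruction error. The function names, shapes, and sampling scheme are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def inverse_warp(src, depth, pose, K):
    """Warp a source frame into the target view (nearest-neighbor sampling).

    src:   (H, W) source image (grayscale for simplicity; illustrative only)
    depth: (H, W) predicted depth of the *target* frame
    pose:  (4, 4) predicted target-to-source camera transform
    K:     (3, 3) camera intrinsics
    """
    H, W = depth.shape
    # Pixel grid of the target frame in homogeneous coordinates.
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(float)
    # Back-project target pixels to 3D points in the target camera frame.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    # Transform into the source camera frame and project with the intrinsics.
    src_cam = (pose @ cam_h)[:3]
    src_pix = K @ src_cam
    src_pix = src_pix[:2] / np.clip(src_pix[2:], 1e-6, None)
    # Nearest-neighbor lookup, clamped to the image bounds.
    u = np.clip(np.round(src_pix[0]).astype(int), 0, W - 1)
    v = np.clip(np.round(src_pix[1]).astype(int), 0, H - 1)
    return src[v, u].reshape(H, W)

def photometric_loss(target, src, depth, pose, K):
    # Mean absolute difference between the target frame and the
    # source frame warped into the target view.
    warped = inverse_warp(src, depth, pose, K)
    return np.abs(target - warped).mean()

# Toy check: with an identity pose and identical frames, the warp is a
# no-op and the loss is zero.
H, W = 8, 8
img = np.random.rand(H, W)
depth = np.ones((H, W))
K = np.array([[4.0, 0, W / 2], [0, 4.0, H / 2], [0, 0, 1]])
loss = photometric_loss(img, img, depth, np.eye(4), K)
print(loss)  # 0.0 for the identity pose
```

In training, the gradient of this loss flows back into both the depth and the motion predictions, which is what lets the two networks supervise each other without any labels.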