Boris Dayma, Colorizing Wizard

Posted on October 18, 2018 by Carey Phelps, Product Lead at Weights & Biases

The Colorizer Challenge

Boris Dayma, from Houston, TX, was one of the champions in our summer colorizer competition. He developed a neural network to take black-and-white images and turn them into beautiful, full-color renderings. Take a moment to compare the black-and-white photos with their colorized versions below.

How could you predict what colors each flower would be? To do it by hand, you would need to research each flower and make educated guesses about the palette and arrangement of the bouquet. When black-and-white films are colorized, artists painstakingly imagine the colors for each frame and paint them in by hand. We challenged researchers to colorize black-and-white photos of flowers with neural networks, and our own results weren't great.

Defining a good loss function for a colorizer is hard, because the easiest way to minimize the distance between the predicted color and the correct color is to guess something in the middle of all the plausible colors, which ends up being brown.
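
To see why, note that the single color minimizing mean squared error against a set of equally plausible colors is simply their average, and averaging saturated colors yields a muddy, desaturated tone. Here is a minimal NumPy sketch of that effect; the candidate colors are illustrative:

```python
# A minimal sketch (not Boris' code) of why a plain MSE loss pushes a
# colorizer toward muddy outputs: the single prediction that minimizes
# mean squared error over several plausible colors is their average.
import numpy as np

# Hypothetical set of equally plausible flower colors (RGB, 0-255).
plausible_colors = np.array([
    [220,  40,  60],   # red
    [240, 200,  60],   # yellow
    [130,  60, 180],   # purple
    [250, 250, 250],   # white
], dtype=float)

# The MSE-optimal single guess is simply the mean of the candidates.
mse_optimal = plausible_colors.mean(axis=0)
print(mse_optimal)  # approximately [210, 137.5, 137.5]: a washed-out, brownish tone
```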

Before he left for a two-week vacation in Brazil, Boris printed out a stack of published papers on colorizers. He leafed through them on the plane, read through implementations on the beach, and formulated a concept for how to approach the problem so that when he got back to the US, he could hit the ground running.

He kept careful track of his model training process, using real-time loss curves from Weights & Biases to identify outliers and cut off training runs early when they weren't performing well.
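
For context, here is a minimal sketch of what that kind of live tracking looks like with the wandb Python library; the project name, config values, and placeholder losses are illustrative, not Boris' actual training code:

```python
# A minimal sketch of streaming loss curves to Weights & Biases.
import wandb

# Hypothetical run configuration, echoing the settings discussed below.
wandb.init(project="colorizer", config={"layers": 6, "initial_filters": 32})

for epoch in range(10):
    # In a real run these would come from the training and validation loops;
    # here they are placeholder values so the sketch runs end to end.
    train_loss = 1.0 / (epoch + 1)
    val_loss = 1.2 / (epoch + 1)
    # Each wandb.log call streams metrics to the live dashboard, so a run
    # that diverges or plateaus can be spotted and stopped early.
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_loss": val_loss})

wandb.finish()
```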


Boris' Method

The black-and-white images are in the RGB color space by default, so Boris converted them to the YCrCb space. That makes one channel just the brightness of the image, which simplifies the problem to predicting only the Cr and Cb channels. He built his own architecture, inspired by U-Net, MobileNets, and ResNet for image segmentation. Boris also cleaned the training data, found more images of flowers to fill out the training set, and added some data augmentation: random cropping and a vertical flip.
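
Here is a minimal sketch of that color-space setup using OpenCV; the file name is illustrative and this is not Boris' code:

```python
# Split an image into luminance (Y) and chroma (Cr, Cb) with OpenCV.
import cv2

bgr = cv2.imread("flower.jpg")                    # OpenCV loads images as BGR
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# y is effectively the black-and-white photo, so the network only has to
# predict cr and cb; the full-color image is recovered by merging the
# predicted chroma back with the original luminance.
recolored = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
```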

  • Baseline - 5 layers: The first baseline run was set with 5 layers and 32 initial filters.
  • Baseline + upconvolution: Using up-convolution instead of up-sampling did not bring any improvement, and it significantly increased the model size.
  • 6 layers - weight decay: Using weight decay led to training that was too slow, even after decreasing its contribution to the total loss several times.
  • 6 layers: The best results were obtained with 6 layers, 32 initial filters and no regularization.
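
To make the encoder/decoder idea concrete, here is a minimal U-Net-style sketch in Keras with 32 initial filters and plain up-sampling, as in the runs above; the depth, block design, and MobileNet/ResNet-inspired details of Boris' actual model are not reproduced here:

```python
# A minimal sketch of a U-Net-style colorizer: luminance (Y) in, chroma
# (Cr, Cb) out. Not Boris' architecture; shapes and depth are illustrative.
from tensorflow.keras import layers, Model

def build_colorizer(input_size=256, initial_filters=32, depth=4):
    inputs = layers.Input((input_size, input_size, 1))   # Y channel
    x, skips = inputs, []

    # Encoder: double the filters at each level and keep skip connections.
    for level in range(depth):
        x = layers.Conv2D(initial_filters * 2 ** level, 3,
                          padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    x = layers.Conv2D(initial_filters * 2 ** depth, 3,
                      padding="same", activation="relu")(x)

    # Decoder: plain up-sampling (no up-convolution), then concatenate the
    # matching skip connection from the encoder.
    for level in reversed(range(depth)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skips[level]])
        x = layers.Conv2D(initial_filters * 2 ** level, 3,
                          padding="same", activation="relu")(x)

    # Two output channels for Cr and Cb, assuming chroma scaled to [0, 1].
    outputs = layers.Conv2D(2, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

model = build_colorizer()
model.compile(optimizer="adam", loss="mse")
```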

Results

His results were far better than the sepia outputs we were getting with our naive model. You can learn more about his process and results in his Weights & Biases project. Check out a sample of his results: he trained the model to accurately colorize this thistle flower purple and the background grass green, no easy feat!

Our Celebration

We were delighted by his results and flew Boris out to meet the team and take a ride with Shivon Zilis. He spent the afternoon eating ice cream and experiencing the newest version of Tesla's autopilot features!

Do you want the glory, the prestige, and the free ice cream? Email us at contest@wandb.com to hear about our next epic challenge!

Try Weights & Biases now →