Classifying ASL Digits

Posted on October 2, 2018 by Connor and Trent, Data Scientists

Classifying ASL digits is a step above MNIST - getting a baseline model working is straightforward, but pushing accuracy past 90% isn't easy. I built a model on the ASL digits dataset (https://github.com/ardamavi/Sign-Language-Digits-Dataset) to demonstrate the power and ease of machine learning to my coworkers, and I had a single day to do it. The Keras library combined with the Weights & Biases platform made the task even quicker than expected. Check out the finished project on GitHub (https://github.com/18sheimanr/ASL_digits).

I started with a standard image-classification network: two convolution-plus-pooling blocks, then a fully connected hidden layer before the 10-neuron output layer. I assumed such a simple dataset wouldn't need much downscaling, since the model would converge quickly either way. However, training was a little slow on 64x64 images, so I scaled them down to 32x32, which didn't hurt accuracy and sped up learning. After that, the model's accuracy converged in about 15 epochs, roughly a minute of training.
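A minimal sketch of that architecture in Keras looks something like this. The two conv-plus-pool blocks, the hidden layer, and the 10-neuron output come from the description above; the filter counts, kernel sizes, hidden-layer width, and single-channel input are illustrative placeholders rather than the exact values from the final model:

```python
# Rough sketch of the starting architecture: two conv+pool blocks,
# one fully connected hidden layer, and a 10-neuron softmax output.
# Layer sizes are placeholders; input assumes 32x32 grayscale images.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),    # fully connected hidden layer
    layers.Dense(10, activation="softmax"),  # one neuron per digit class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```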

With the images downscaled and prototyping fast, it was time to tweak the model layers and hyperparameters to maximize accuracy. This is where Weights & Biases (WandB) comes in.

Weights & Biases is a fantastic platform, especially for teams working on one model. It is almost like version control for deep learning, and its many visualization tools make it useful for individuals, too. It takes only two pieces of code to wire WandB into Keras:

  1. wandb.init()
  2. WandbCallback()
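Put together, the integration looks roughly like this. The project name, hyperparameter values, and data variable names are placeholders, not the ones from my actual run:

```python
# Minimal WandB integration for a Keras training script.
# Project name, config values, and X_train/X_val etc. are placeholders.
import wandb
from wandb.keras import WandbCallback

# 1. Start a run and record hyperparameters in wandb.config
wandb.init(project="asl-digits",
           config={"epochs": 15, "batch_size": 32, "img_size": 32})
config = wandb.config

# 2. Add the callback so metrics are logged to WandB every epoch
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=config.epochs,
          batch_size=config.batch_size,
          callbacks=[WandbCallback()])
```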

As in the sketch above, one should also use wandb.config to store hyperparameters. Those hyperparameters are reported to WandB, so you can track changes and see how each one affects accuracy (or loss). WandB even records each run's training time, so you can be sure your model stays efficient. I used a parallel coordinates plot of all the key hyperparameters alongside validation accuracy:

WandB also provides a loss graph for all runs:

I checked the project page, which contains these figures, every few runs. The parallel coordinates plot showing how the hyperparameters affected accuracy helped me tweak the network in the right direction.

After many tweaks, I selected the configuration with the best accuracy and stuck with it. It only took around an hour, and my best validation accuracy was 95%! Some of the actual classification results can be found on the WandB run page: all you have to do is give WandbCallback() some validation data and labels for the classes, roughly as in the sketch below, and it will show you real predictions on individual images.
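A rough sketch of that callback, assuming the wandb.keras callback and placeholder variable names for the validation split:

```python
# Sketch: logging example predictions to WandB.
# X_val, y_val are placeholder names for the held-out validation split.
from wandb.keras import WandbCallback

callback = WandbCallback(
    data_type="image",                           # log predictions as images
    validation_data=(X_val, y_val),              # samples to predict on
    labels=[str(digit) for digit in range(10)],  # class names for digits 0-9
)

model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=15,
          callbacks=[callback])
```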

Try Weights & Biases now →