How Efficient is EfficientNet?

Ajay Arasanipalai, Student at the University of Illinois at Urbana-Champaign

If you’ve looked at the state-of-the-art leaderboards for ImageNet anytime recently, you’ve probably seen a whole lot of this thing called “EfficientNet.”


Now, considering that we’re talking about a dataset of 14 million images, which is probably a bit more than you took on your last family vacation, take the prefix “Efficient” with a fat pinch of salt. But what makes the EfficientNet family special is that its members easily outperform other architectures with a similar computational cost.

In this article, we’ll discuss the core principles that govern the EfficientNet family. Primarily, we’ll explore an idea called compound scaling — a technique for scaling up a neural network’s depth, width, and input resolution together, so that it makes the best use of whatever computational budget you have.
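To make this concrete, here is a minimal sketch of the compound scaling rule. The coefficients α, β, and γ are the ones reported in the EfficientNet paper; everything else (the function name, the printed summary) is just illustrative.

```python
# A minimal sketch of compound scaling from the EfficientNet paper.
# alpha, beta, gamma are the paper's reported coefficients for scaling
# depth, width, and input resolution; they satisfy
# alpha * beta**2 * gamma**2 ≈ 2, so each increment of the compound
# coefficient phi roughly doubles FLOPs.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution


def compound_scale(phi: int):
    """Return (depth, width, resolution) multipliers for a given
    compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi


if __name__ == "__main__":
    for phi in range(4):
        d, w, r = compound_scale(phi)
        print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```

Picking a larger φ gives you a bigger (and more accurate) model at a predictable increase in cost, which is exactly how the B0 through B7 variants are generated from the same baseline architecture.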

In this report, I’ll present the results I got from trying the various EfficientNet scales on a dataset much smaller than ImageNet — one that is more representative of real-world workloads. You’ll also be able to interactively visualize the results and answer the question that the title of this post asks: how efficient is EfficientNet?

Let’s begin.

