In the words of Yann LeCun, Generative Adversarial Networks (GANs) are "the most interesting idea in machine learning in the last 10 years". This is not surprising, since GANs have been able to generate almost anything: high-resolution images of people "resembling" celebrities, building layouts and blueprints, and even memes.
Their strength lies in their remarkable ability to model complex distributions. Autoencoders have aspired to the same versatility, but (at least until now) they have lacked the generative power of GANs and have historically learned entangled representations.
The authors of the paper draw inspiration from recent progress in GANs and propose a novel autoencoder that addresses these fundamental limitations.
In this report, we'll dive deeper and find out how.