In the past few years, we have seen tremendous development in self-supervised learning, specifically for computer vision problems. While natural language processing has benefited from self-supervised learning for a long time, it wasn't until recently that computer vision systems started to see the real impact of self-supervised learning paradigms. Works like MoCo and PIRL demonstrated the kind of benefits self-supervised systems can bring to the table for computer vision problems.
This year, Chen et al. published their paper A Simple Framework for Contrastive Learning of Visual Representations (SimCLR for short), which presented a simpler yet effective framework for training computer vision models in a self-supervised way.
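At the core of SimCLR is a contrastive objective: two augmented views of the same image should produce similar representations, while views of different images should be pushed apart. The sketch below is a minimal NumPy illustration of the NT-Xent (normalized temperature-scaled cross-entropy) loss used in the paper; the function name, batch layout, and default temperature are my own choices for clarity, not the authors' reference code.

```python
import numpy as np

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """Illustrative NT-Xent loss for a batch of paired views.

    z_i, z_j: (N, D) arrays of projection-head outputs, where row k of
    z_i and row k of z_j come from two augmentations of the same image.
    """
    n = z_i.shape[0]
    z = np.concatenate([z_i, z_j], axis=0)            # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for sample k is its other view, at index (k + N) mod 2N.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Row-wise cross-entropy over similarities to all other samples.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

In practice this loss is computed on GPU in a framework such as PyTorch or TensorFlow, but the arithmetic is the same: the lower the loss, the closer each pair of positive views sits relative to all the negatives in the batch.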
In this report, I present some findings from a minimal implementation of SimCLR trained on a subset of the ImageNet dataset.