TL;DR let’s train a network on a rare visual language together—join us!
Weights & Biases makes collaborative deep learning easy for teams: it organizes experiments and notes in a shared workspace, tracks all the code, standardizes all other inputs and outputs, and handles the boring parts (like plotting results) so you can focus on the most exciting part: solving an interesting and meaningful puzzle in conversation with others.
With public benchmarks, we want to explore collaboration at the broader community level: how can individual effort and ideas be made maximally accessible to and useful for the field? We've added wandb to the Kuzushiji-MNIST dataset (kmnist): images of 10 different characters from a classical Japanese cursive script. In three commands (at the bottom of this post), you can set up and run your own version, play with hyperparameters, and visualize model performance in your browser.
We chose this dataset because it is a fresh reimagining of the well-known baseline of handwritten digits (mnist). It preserves the technical simplicity of mnist and offers more creative headroom, since the solution space is less explored and visual intuition is unreliable (only a few experts can read Kuzushiji). Mnist generalization ends at 10 digits; kmnist extends to Kuzushiji-49 (270,912 images, 49 characters) and the heavily imbalanced Kuzushiji-Kanji (140,426 images, 3832 characters, some with 12 distinct variants). While mnist is essentially solved, kmnist can help us understand the structure of a disappearing language and digitize ~300,000 old Japanese books (see this paper for more details).
To incentivize initial work on the kmnist benchmark, we're offering $1000 in compute credits to the contributor who achieves the highest validation accuracy within six weeks (by October 8th). We hope you will use them to make something awesome!
We are developing benchmarks to encourage clear documentation, synthesis of background and new ideas, and compression of research effort. We can build better and faster by starting with a team at the top of a collectively reinforced foundation instead of alone at the bottom of a pile of papers and blog posts. We hope you'll help us nudge the machine learning world in this direction by collaborating on the benchmark here.
We wanted to make it ridiculously easy to participate. You can go to https://app.wandb.ai/wandb/kmnist/benchmark or follow these commands:
1. Get the code and training data:
> git clone https://github.com/wandb/kmnist
> cd kmnist/benchmarks && pip install -r requirements.txt
> ./init.sh kmnist
2. Launch your first training run:
> python cnn_kmnist.py --quick_run
We're building lightweight, flexible experiment tracking tools for deep learning. Add a couple of lines to your Python script, and we'll keep track of your hyperparameters and output metrics, making it easy to compare runs and see the whole history of your progress. Think of us like GitHub for deep learning.
We are building our library of deep learning articles, and we're delighted to feature the work of community members. Contact Carey to learn about opportunities to share your research and insights.