In this tutorial we'll walk through a simple convolutional neural network to classify the images in the Simpsons dataset using Keras.
We’ll also set up Weights & Biases to log model metrics, inspect performance, and share findings about the best architecture for the network. In this example we're using Google Colab as a convenient hosted environment, but you can run your own training scripts from anywhere and visualize metrics with W&B's experiment tracking tool.
Start out by installing the experiment tracking library and setting up your free W&B account:
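If you're working in a Colab notebook, that setup looks roughly like this (outside Colab, run the same commands in a terminal without the leading `!`):

```python
# Install the wandb client and log in to your free W&B account
!pip install wandb
!wandb login
```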
# Initialize a new wandb run
import wandb
wandb.init(project="simpsons")  # the project name here is just an example

# Default values for hyper-parameters
config = wandb.config  # config holds and saves the hyperparameters and inputs
config.learning_rate = 0.01
config.batch_size = 128
config.epochs = 25  # illustrative value; the original default isn't shown in the post
config.activation = 'relu'
config.optimizer = 'nadam'
Below, we define a simplified version of a VGG19 model in Keras, and add the following lines of code to log model metrics, visualize performance and outputs, and track our experiments easily:
# Define the model architecture - This is a simplified version of the VGG19 architecture
model = tf.keras.models.Sequential()
# Sets of Conv2D, Conv2D, MaxPooling2D layers with 32 and 64 filters
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), padding='same',
                                 activation='relu', input_shape=input_shape))
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
# Flatten the convolution outputs (a matrix) so we can feed them into the fully connected layers (a vector)
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
# Define the optimizer
optimizer = tf.keras.optimizers.Nadam(learning_rate=config.learning_rate, beta_1=0.9, beta_2=0.999, clipnorm=1.0)
model.compile(loss="categorical_crossentropy", optimizer=optimizer, metrics=['accuracy'])
# Fit the model to the training data, streaming batches from an augmentation generator
from wandb.keras import WandbCallback  # logs metrics and example predictions to W&B during training
datagen = tf.keras.preprocessing.image.ImageDataGenerator()  # augmentation settings aren't shown in the original snippet
model.fit(datagen.flow(X_train, y_train, batch_size=config.batch_size),
          steps_per_epoch=len(X_train) // config.batch_size, epochs=config.epochs,
          validation_data=(X_test, y_test), verbose=0,
          callbacks=[WandbCallback(data_type="image", validation_data=(X_test, y_test),
                                   labels=character_names)])
In this section we make predictions and add wandb.log() to log custom images - in this case our test images with predicted probabilities overlaid on top.
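A minimal sketch of that step, assuming `character_names` from the dataset-loading code and using the model trained above (the exact overlay formatting in the original notebook may differ):

```python
# Predict on a handful of test images and log them to W&B with the
# predicted character and its probability as the caption
predictions = model.predict(X_test[:16])
examples = []
for image, probs in zip(X_test[:16], predictions):
    caption = f"{character_names[probs.argmax()]} ({probs.max():.2f})"
    examples.append(wandb.Image(image, caption=caption))
wandb.log({"predictions": examples})
```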
Click through to a single run to see more details. For example, on this run page you can see the performance metrics I logged when I ran this script.
The overview tab picks up a link to the code. In this case, it's a link to the Google Colab. If you're running a script from a git repo, we'll pick up the SHA of the latest git commit and give you a link to that version of the code in your own GitHub repo.
The System tab on the run page lets you visualize how resource-efficient your model was, monitoring GPU, memory, CPU, disk, and network usage in one spot.
As you can see, running sweeps is super easy! We highly encourage you to fork the accompanying notebook, tweak the parameters, or try the model with your own dataset!
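If you'd like a starting point for a sweep of your own, a minimal configuration might look something like this (the parameter values and the `train` wrapper function are illustrative, not taken from the original notebook):

```python
# Sketch of a random-search sweep over the hyperparameters stored in wandb.config above
sweep_config = {
    "method": "random",
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"values": [0.001, 0.01, 0.1]},
        "batch_size": {"values": [64, 128, 256]},
        "activation": {"values": ["relu", "elu"]},
    },
}
sweep_id = wandb.sweep(sweep_config, project="simpsons")
# `train` would wrap the model definition and fitting code from this tutorial in a single function
wandb.agent(sweep_id, function=train)
```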