Monitor progress of your Keras based neural network using Tensorboard

In the past few weeks I’ve been breaking my brain over a way to automatically answer questions using a neural network. I have a working version, but debugging a neural network is a nightmare.

Neural networks are, by their very nature, hard to reason about. You can’t really find out how or why something happened in a neural network; they are too complex for that. There’s also a real art to selecting the right number of layers, the right number of neurons per layer, and the optimizer you should use.

There is, however, a great tool called Tensorboard that makes things a little easier, and it works with Keras, the higher-level neural network library that I happen to use.

What is Tensorboard?

Tensorflow, the deep learning framework from Google, comes with a great tool called Tensorboard for debugging the algorithms you build on top of it.

Tensorboard

It hosts a website on your local machine on which you can monitor things like accuracy and the loss function, and visualize the computational graph that Tensorflow is running based on what you defined in Keras.

Installing Tensorboard

Tensorboard is a separate tool you need to install on your computer. You can install Tensorboard using pip, the Python package manager:

pip install tensorboard

Collecting run data from your Keras program

With Tensorboard installed you can start collecting data from your Keras program. For example, if you have the following network defined:

from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential()
model.add(Dense(10, input_shape=(784,)))
model.add(Activation('softmax'))
model.compile(optimizer='sgd', loss='categorical_crossentropy')
model.fit(x_train, y_train, verbose=1)

This neural network is compiled with a stochastic gradient descent (SGD) optimizer and a categorical crossentropy loss function. Finally, the network is trained using a labelled dataset.
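As a side note, the categorical crossentropy loss expects labels to be one-hot encoded. Keras ships a helper for this (`keras.utils.to_categorical`), but the idea can be sketched in plain NumPy:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Convert integer class labels to one-hot vectors."""
    encoded = np.zeros((len(labels), num_classes))
    # Set a 1 at each row's class index.
    encoded[np.arange(len(labels)), labels] = 1
    return encoded

# Three samples with classes 2, 0 and 1, out of 3 classes:
print(one_hot([2, 0, 1], 3))
```

In practice you would pass the result of `to_categorical(y_train)` (or an equivalent encoding) as `y_train` to `model.fit`.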

When you run this code, there’s no good way to monitor how well training is going over time. That can be problematic if you’re spending multiple hours training a network, only to discover that it leads to nothing.

Tensorboard can help solve this problem. For this you need to modify your code a little bit:

from time import time
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.callbacks import TensorBoard
model = Sequential()
model.add(Dense(10, input_shape=(784,)))
model.add(Activation('softmax'))
model.compile(optimizer='sgd', loss='categorical_crossentropy')
tensorboard = TensorBoard(log_dir="logs/{}".format(time()))
model.fit(x_train, y_train, verbose=1, callbacks=[tensorboard])

You need to create a new TensorBoard callback instance and point it to a log directory where the data for this run should be collected. Next, you need to modify the fit call so that it includes the tensorboard callback.

Monitoring progress

Now that you have a tensorboard instance hooked up you can start to monitor the program by executing the following command in a separate terminal:

tensorboard --logdir=logs/

Notice that the logdir setting points to the root of your log directory. I told the Tensorboard callback to write to a subfolder based on a timestamp. Writing to a separate folder for each run is necessary so you can compare different runs. When you point Tensorboard to the root of the log folder, it will automatically pick up all the runs you perform. You can then select the runs you want to see on the website.

Multiple runs in Tensorboard

Where to find more information

This post covers the basics of adding Tensorboard as a monitoring tool to your Keras program. For more information, check out these websites: