Here's why Microsoft Cognitive Toolkit is the coolest deep learning framework around

Published 2/16/2018 9:00:17 AM
Filed under Machine Learning

The Microsoft Cognitive Toolkit (CNTK) supports loading and saving models in the Open Neural Network Exchange (ONNX) format, which enables you as a developer to run your trained network from Java and C# with much better overall performance than you can get with Python.

I personally feel that this is the way forward. Train your models in Python and use them from C# and Java.

Let's explore how this would work for a typical model.

Train and validate the model in Python

Let's start with the most important step: building the model. For this we're going to use Keras, an open source deep learning framework.

Keras makes it really simple to build a neural network. CNTK has a good API of its own, but Keras makes it even better.

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()

model.add(Dense(hidden_size, input_dim=input_size, activation='relu'))
model.add(Dense(hidden_size, activation='relu'))
model.add(Dense(output_size, activation='softmax'))

model.compile(optimizer='rmsprop',
    loss='categorical_crossentropy',
    metrics=['accuracy'])
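Here hidden_size, input_size and output_size are hyperparameters you pick for your data. The final softmax layer turns the network's raw scores into a probability distribution over the output classes; a minimal numpy sketch of what that layer computes (the score values are made up):

```python
import numpy as np

def softmax(scores):
    """Turn raw output scores into probabilities that sum to 1."""
    exps = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return exps / exps.sum()

# Hypothetical raw scores for three output neurons.
probabilities = softmax(np.array([2.0, 1.0, 0.1]))
```

The class with the highest raw score also gets the highest probability, which is why softmax is the standard choice for classification outputs.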

Once built, we train the model by executing the fit method on the model instance.

model.fit(features, labels)
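Because we compiled with categorical_crossentropy, fit expects the features as a matrix of floats and the labels in one-hot form to match the softmax output. A sketch with made-up data (the shapes and sizes here are assumptions for illustration, not from the original model):

```python
import numpy as np

num_samples, input_size, output_size = 100, 4, 3

# Random features: one row per sample, one column per input neuron.
features = np.random.rand(num_samples, input_size).astype(np.float32)

# Integer class labels converted to one-hot vectors for categorical_crossentropy.
class_ids = np.random.randint(0, output_size, size=num_samples)
labels = np.eye(output_size, dtype=np.float32)[class_ids]
```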

Now that we have it trained we can run predictions using the predict method. It accepts a set of features (a list of floating point numbers) and outputs another list of floating point numbers representing the prediction of the network.

prediction = model.predict(sample_features)
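Since the softmax output is one probability per class, the predicted class is simply the index of the largest value. A small numpy illustration (the probabilities are made up):

```python
import numpy as np

# A hypothetical prediction for one sample with three output neurons.
prediction = np.array([[0.1, 0.7, 0.2]])

predicted_class = prediction.argmax(axis=1)[0]  # index of the most likely class
confidence = prediction.max(axis=1)[0]          # probability of that class
```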

As any data scientist knows, you want to make sure that your model works correctly. For this you can use the evaluate method:

loss, accuracy = model.evaluate(eval_features, eval_labels)
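Under the hood, accuracy is the fraction of samples whose most likely class matches the label, and categorical crossentropy is the negative log of the probability assigned to the true class. A sketch of both metrics with numpy (the prediction values are illustrative):

```python
import numpy as np

predictions = np.array([[0.8, 0.1, 0.1],
                        [0.2, 0.5, 0.3]])
true_classes = np.array([0, 2])  # the second sample is misclassified

# Fraction of samples where the highest-probability class matches the label.
accuracy = np.mean(predictions.argmax(axis=1) == true_classes)

# Mean negative log probability assigned to the true class.
loss = -np.mean(np.log(predictions[np.arange(len(true_classes)), true_classes]))
```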

Once the model is working as expected we need to export it, so we can use it from C#. For this we're going to use the ONNX format.

Export your model to ONNX

The ONNX format is a new file format for describing neural networks and other machine learning models. It is meant as an intermediate representation.

What does this mean? A model exported to ONNX from TensorFlow can be imported into CNTK. That's useful because it promotes the exchange of models between companies, research facilities and developers in general. It doesn't matter which framework you use: as long as you and your coworker both use the ONNX format, you can share your models.

To export a model trained with Keras you need to write a small piece of code:

import cntk as C

func = model.model.outputs[0]
func.save('model.onnx', format=C.ModelFormat.ONNX)

Remember we compiled the model earlier and trained it. The compile operation creates a graph in the model attribute. This graph has arguments, inputs and outputs. We need to grab the first output from the model attribute and save this as a function to disk.

Why the first output? Neural networks can have multiple output layers, which result in multiple outputs. We however have just one output in our model. The output in this case points to the previous layer, which in turn points to the previous layer, etc. Essentially, when you have the output you have the neural network as a function.

You can save this function to disk, using the save method. This method accepts a format argument with the value C.ModelFormat.ONNX so that the function is stored correctly.

Using your model from C#

Now that you have the model in ONNX format, let's load it into a C# application and use it to generate predictions.

Microsoft made things easy in this area. There's a NuGet package, CNTK.CPUOnly, which is the C# equivalent of the CNTK framework. It features both a training and an evaluation API.

We are going to use the evaluation API to make some predictions.

First up, add a using statement for the CNTK package.

using CNTK;

Next load up the model from disk:

var function = Function.Load("model.onnx", _deviceDescriptor, ModelFormat.ONNX);

The Load method needs a device for the model to be evaluated on. I've left its initialization out of this snippet for clarity; you'll find the device descriptor in the full sample code at the end of this post.

You can use the following code to initialize the device descriptor:

_deviceDescriptor = DeviceDescriptor.CPUDevice;

This results in a model that looks very similar to the function object we had in the Python application. To use the function you call function.Evaluate(inputs, outputs, deviceDescriptor). The Evaluate method accepts inputs, outputs and a device descriptor.

To obtain the inputs, you need to build a mapping between the input variables used by your model and values you want to provide for those variables.

var inputVariable = _function.Arguments.Single();
var inputs = new Dictionary<Variable, Value>();

var inputValue = Value.CreateBatch(inputVariable.Shape, 
    InputVector(inputVariable.Shape[0]), _deviceDescriptor);

inputs.Add(inputVariable, inputValue);

We grab the first argument that is available on the function. This is the original input layer that was generated by Keras when we built the model.

To provide a value to the input layer, we need to create a dictionary of variables and corresponding value instances.

The input value we're using is generated with the following function:

IEnumerable<float> InputVector(int size)
{
    // Generate a fresh random value for every element.
    // (Enumerable.Repeat would evaluate NextDouble once and repeat the same value.)
    return Enumerable.Range(0, size).Select(_ => (float)_random.NextDouble());
}

In short, a list of random floating point numbers. Normally these would be the real input values for the neural network, encoded as floats.

Now that we have the inputs, we need to describe the expected outputs as well.

var outputs = new Dictionary<Variable, Value>();
outputs.Add(_function.Output, null);

The outputs are described as a dictionary of variables and values. We provide the output layer of our neural network as the variable, but don't assign a value to it. That's intentional: we don't want to provide a value, we want to get one back.

So we have inputs and outputs for our network. Let's take a look at how to use them.

_function.Evaluate(inputs, outputs, _deviceDescriptor);

The Evaluate function automatically replaces the entry for the output variable with the output generated by the function.

To get access to the actual prediction we need to retrieve it from the output dictionary like so:

var prediction = outputs[_function.Output].GetDenseData<float>(_function.Output);

The prediction variable now contains a list of lists of floats: essentially a 2D matrix where the rows represent the results for individual samples and the columns represent the values of the individual neurons in the output layer of our neural network.

Other languages are supported as well

The sample I've shown above is written in C#. However, that is not the only language supported by CNTK. You can do the exact same thing in Java.

The importance of ONNX for professional AI solutions

Why would you want to train your model in Python and use it in Java or C#?
Python, as many know, is rather slow, while C# and Java are much better suited for running workloads like web applications.

Conversely, Java and C# are not well suited for training models; the tools Python provides for working with data are much better.

I want the best experience for training and the best performance for serving the model. Combining the two makes for a much faster and higher quality solution.

Interested? Give it a spin and let me know how you fare!