Over the years, the Applied Machine Learning Group here at the University of Waikato has worked on a number of projects that involved applying deep learning algorithms to image problems. Setting up deep learning frameworks is always a slow and painstaking process, with all the library dependencies (CUDA, cuDNN, numpy, etc.) needing to be just right. To speed things up, we have developed (and maintain) a number of Python libraries and Docker images that can be used for various deep learning tasks. Here we show examples of how you can use these ready-to-use images to train your own models on your datasets and how to make predictions with them.

The use of Docker made it a lot easier and faster to apply algorithms to new datasets. If you have not used Docker before, we recommend having a look at our introduction called Docker for Data Scientists.

The following domains are covered by the examples:

These tutorials use shorter training times (i.e., fewer epochs/iterations/steps) to arrive at a model faster. However, this also means that the quality of the model will be lower. You will need to experiment with the training duration (and probably other hyperparameters) to achieve good performance. The point of these tutorials is to get you going with deep learning.

Note on I/O#

To keep things simple, these examples use file-polling when making predictions, i.e., looking for images in an input directory and writing the predictions (along with the original input images) to another directory.
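A minimal sketch of one such polling pass might look as follows. The directory layout, the `.png` glob, and the placeholder prediction are assumptions for illustration; a real pipeline would run model inference where indicated:

```python
import shutil
from pathlib import Path


def poll_once(in_dir: Path, out_dir: Path) -> list[str]:
    """Process all images currently in in_dir, writing predictions to
    out_dir and moving the originals alongside them."""
    processed = []
    for img in sorted(in_dir.glob("*.png")):
        # A real pipeline would run model inference here; we just write
        # a placeholder prediction next to the image.
        prediction = f"prediction for {img.name}"
        (out_dir / (img.stem + ".txt")).write_text(prediction)
        # Move the original input image into the output directory so it
        # is not picked up again on the next polling pass.
        shutil.move(str(img), out_dir / img.name)
        processed.append(img.name)
    return processed
```

In a long-running service, this function would be called in a loop with a short sleep between passes.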

However, this may not be optimal when using SSDs, as they can wear out quickly when processing large amounts of files on a 24/7 basis. Chaining models into pipelines can exacerbate things even further.

To alleviate wear and tear on the hardware, these frameworks also allow processing images via a Redis in-memory database backend. In this case, the models listen for images being broadcast on a Redis channel, pick them up, make predictions and then broadcast the predictions on another Redis channel. This allows for the construction of efficient, low-latency processing pipelines.

When using this approach, the predictions are commonly broadcast in a JSON format, which can be easily processed in most programming languages.

Redis itself has clients available in a wide range of programming languages as well.
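Consuming such JSON predictions from a Redis channel can be sketched in a few lines of Python. The channel name `predictions` and the payload keys below are assumptions for illustration; the actual format depends on the framework in use:

```python
import json


def parse_prediction(raw: bytes) -> dict:
    """Decode a JSON prediction message received from a Redis channel.

    The payload layout (e.g., keys such as "label" and "score") is an
    illustrative assumption; consult the framework's documentation for
    the actual format it broadcasts.
    """
    return json.loads(raw.decode("utf-8"))


if __name__ == "__main__":
    # Requires the redis-py package and a running Redis server.
    import redis

    r = redis.Redis(host="localhost", port=6379)
    pubsub = r.pubsub()
    pubsub.subscribe("predictions")  # assumed channel name
    for message in pubsub.listen():
        # pub/sub delivers subscription confirmations too; only
        # "message" entries carry actual payloads.
        if message["type"] == "message":
            print(parse_prediction(message["data"]))
```

The same pattern works from other languages, since Redis clients exist for most of them and JSON parsing is ubiquitous.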

Of course, you can also run a Redis server in a Docker container:

docker run --net=host --name redis-server -d redis
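With the server running, broadcasting an image for the models to pick up takes only a few lines. This is a sketch: the channel name `images` and the file name are assumptions, and `client` is expected to be a `redis.Redis` instance from the redis-py package:

```python
def publish_image(client, channel: str, path: str) -> int:
    """Broadcast the raw bytes of an image file on a Redis channel.

    `client` is expected to be a redis.Redis instance (redis-py).
    Returns the number of subscribers that received the message.
    """
    with open(path, "rb") as f:
        return client.publish(channel, f.read())


if __name__ == "__main__":
    # Requires the redis-py package and the Redis container started above.
    import redis

    client = redis.Redis(host="localhost", port=6379)
    publish_image(client, "images", "example.png")  # assumed names
```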

Note on Windows#

The instructions in these tutorials were written for and executed on a Linux machine. However, you should be able to replicate these on Windows as well. Check out these instructions to get you set up.