// The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
/*
    This is an example illustrating the use of the deep learning tools from the
    dlib C++ Library.  I'm assuming you have already read the dnn_mnist_ex.cpp
    example.  So in this example program I'm going to go over a number of more
    advanced parts of the API, including:
        - Using multiple GPUs
        - Training on large datasets that don't fit in memory
        - Defining large networks
        - Accessing and configuring layers in a network
*/

#include <dlib/dnn.h>
#include <iostream>
#include <dlib/data_io.h>

using namespace std;
using namespace dlib;

// ----------------------------------------------------------------------------------------

// Let's start by showing how you can conveniently define large networks.  The
// most important tools for doing this are C++'s alias templates.  These let us
// define new layer types that are combinations of a bunch of other layers.
// These will form the building blocks for more complex networks.

// So let's begin by defining the building block of a residual network (see
// Figure 2 in Deep Residual Learning for Image Recognition by He, Zhang, Ren,
// and Sun).  You can see a few things in this statement.  The most obvious is
// that we have combined a bunch of layers into the name "base_res".  You can
// also see the use of the tag1 layer.  This layer doesn't do any computation.
// It exists solely so other layers can refer to it.  In this case, the
// add_prev1 layer looks for the tag1 layer and will take the tag1 output and
// add it to the input of the add_prev1 layer.  This combination allows us to
// implement skip and residual style networks.  We have also made base_res
// parameterized by BN, which will let us insert different batch normalization
// layers.
template