Commit b16cc99e authored by Davis King

Added comments about using multiple GPUs

parent ca11d108
@@ -4,6 +4,7 @@
dlib C++ Library. I'm assuming you have already read the dnn_mnist_ex.cpp
example. So in this example program I'm going to go over a number of more
advanced parts of the API, including:
- Using multiple GPUs
- Training on large datasets that don't fit in memory
- Defining large networks
- Accessing and configuring layers in a network
@@ -201,7 +202,13 @@ int main(int argc, char** argv) try
// The dnn_trainer will use SGD by default, but you can tell it to use
// different solvers like adam.
dnn_trainer<net_type,adam> trainer(net,adam(0.001));
//dnn_trainer<net_type,adam> trainer(net,adam(0.001));
// Also, if you have multiple graphics cards you can tell the trainer to use
// them together to make the training faster. For example, replacing the
// above constructor call with this one would cause it to use GPU cards 0
// and 1.
dnn_trainer<net_type,adam> trainer(net,adam(0.001), {0,1});
trainer.be_verbose();
trainer.set_synchronization_file("mnist_resnet_sync", std::chrono::seconds(100));
// While the trainer is running it keeps an eye on the training error. If
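For context outside the diff, here is a minimal, self-contained sketch of the multi-GPU trainer setup this commit documents. It is not the actual dnn_introduction2_ex.cpp program: the tiny network and the empty training vectors are placeholders, and the adam(0.001) solver argument simply mirrors the constructor call in the diff. The point it illustrates is the third constructor argument, a list of CUDA device ids, which tells the dnn_trainer to use those graphics cards together during training.

#include <dlib/dnn.h>
#include <chrono>
#include <vector>

using namespace dlib;

// Placeholder network; the real example uses a much deeper ResNet-style net.
using net_type = loss_multiclass_log<
                     fc<10,
                     relu<fc<84,
                     input<matrix<unsigned char>>>>>>;

int main()
{
    // Placeholders: in the real example these are loaded from the MNIST files.
    std::vector<matrix<unsigned char>> training_images;
    std::vector<unsigned long>         training_labels;

    net_type net;

    // The third argument lists the CUDA devices to use; {0,1} means GPU cards
    // 0 and 1 are used together, as in the constructor call added above.
    dnn_trainer<net_type,adam> trainer(net, adam(0.001), {0,1});
    trainer.be_verbose();
    trainer.set_synchronization_file("mnist_resnet_sync", std::chrono::seconds(100));

    trainer.train(training_images, training_labels);
}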