- Added matlab_object to the mex wrapper. Now you can have parameters that are arbitrary matlab objects.
- Added support for loading RGBA JPEG images.
- DNN additions:
    - Added test_one_step() to the dnn_trainer. This allows you to do automatic early stopping based on observing the loss on held-out data.
    - Added the loss_metric_ loss layer.
    - Added the loss_mean_squared_ loss layer.
    - Added the loss_mean_squared_multioutput_ loss layer.
    - Made the dnn_trainer automatically reload from the last good state if a loss of NaN is encountered.
    - Added the l2normalize_ layer.
    - Added is_vector() for tensor objects.
    - Made alias_tensor usable when it is const.
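The dnn_trainer's NaN recovery can be pictured as snapshotting the last parameters that produced a finite loss and rolling back to them when a NaN appears. The following is a minimal standalone sketch of that idea, not dlib's actual implementation; toy_trainer and its members are hypothetical names.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Toy illustration of "reload from the last good state on NaN loss":
// before applying an update we remember the current parameters, and if
// the loss computed from the updated parameters comes back NaN we
// restore the remembered snapshot instead of continuing from garbage.
struct toy_trainer
{
    std::vector<double> params{0.0};
    std::vector<double> last_good{0.0};
    int reloads = 0;

    // loss: the loss observed at the current params; update: the step to apply.
    void step(double loss, double update)
    {
        if (std::isnan(loss))
        {
            params = last_good;  // reload the last state with a finite loss
            ++reloads;
            return;
        }
        last_good = params;      // loss is finite, so this state is "good"
        params[0] += update;
    }
};
```

In the real trainer the snapshot also covers the solver state, and training simply continues from the restored point.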
Non-Backwards Compatible Changes:
- Changed the loss layer interface to use two typedefs, output_label_type and training_label_type, instead of a single label_type. This way, the label type used during training can be distinct from the type output by the network.
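The shape of the new convention can be sketched as follows. This is a simplified illustration, not dlib's full loss-layer interface; example_loss is a hypothetical name, and the particular types chosen here mirror a metric-learning style loss that trains on integer identity labels while the network outputs embedding vectors.

```cpp
#include <cassert>
#include <type_traits>
#include <vector>

// Under the old interface a loss layer exposed one `label_type` used for
// both training and inference. Under the new interface the two roles are
// split: training_label_type is what you pass to the trainer, and
// output_label_type is what the network produces at inference time.
struct example_loss
{
    typedef unsigned long training_label_type;     // e.g. an identity label
    typedef std::vector<float> output_label_type;  // e.g. an embedding vector
};
```

Code written against the old single label_type typedef must be updated to name one of the two new typedefs explicitly.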
Bug fixes:
- Fixed compiler warnings and errors on newer compilers.
- Fixed a bug in the repeat layer that caused it to throw exceptions in some cases.
- Fixed MATLAB crashing when an error message from a mex file contained the % character, since MATLAB interprets % as the start of a printf()-style format sequence.
- Fixed compile time error in random_subset_selector::swap()
- Fixed missing implementation of map_input_to_output() and map_output_to_input() in the concat_ layer.
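The % crash fix above amounts to escaping the message before it reaches a printf()-style consumer. A minimal sketch of that kind of escaping is below; escape_percent is a hypothetical helper, not the function dlib actually uses.

```cpp
#include <cassert>
#include <string>

// Double every '%' so a printf()-style routine prints a literal percent
// sign instead of treating it as the start of a format sequence.
std::string escape_percent(const std::string& msg)
{
    std::string out;
    for (char c : msg)
    {
        out += c;
        if (c == '%')
            out += '%';   // "%" becomes "%%"
    }
    return out;
}
```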
Other:
- Minor usability improvements to DNN API.
- Wrote replacements for set_tensor() and scale_tensor(). The previous versions called into cuDNN, but the cuDNN routines for these operations are horrifically slow, well over 100x slower than they should be, which is surprising given how trivial these functions are.
- Made the dnn_trainer's detection of, and backtracking from, situations with increasing loss more robust. Now it will never get stuck backtracking over and over; instead, it backtracks only a few times in a row before letting SGD run unimpeded.
- Improved C++11 detection and enabling, especially on OS X.
- Made dlib::thread_pool use std::thread and join the threads in thread_pool's destructor. The previous implementation allocated threads to dlib::thread_pool from dlib's global thread pooler, which sometimes caused annoying behavior when dlib was used as part of a MATLAB mex file.