- 06 Jan, 2019 1 commit
Juha Reunanen authored
* Add concat_prev layer, and a U-net example for semantic segmentation (see the sketch after this list)
* Allow supplying the mini-batch size as a command-line parameter
* Decrease default mini-batch size from 30 to 24
* Resize t1, if needed
* Use DenseNet-style blocks instead of residual learning
* Increase default mini-batch size to 50
* Increase default mini-batch size from 50 to 60
* Resize even during the backward step, if needed
* Use resize_bilinear_gradient for the backward step
* Fix function call ambiguity problem
* Clear destination before adding gradient
* Works OK-ish
* Add more U-tags
* Tweak default mini-batch size
* Define a simpler network when using the Microsoft Visual C++ compiler; clean up the DenseNet stuff (leaving it for a later PR)
* Decrease default mini-batch size from 24 to 23
* Define separate dnn filenames for MSVC++ and other compilers
* Add documentation for the resize_to_prev layer; move the implementation so that it comes after mult_prev
* Fix previous typo
* Minor formatting changes
* Reverse the ordering of levels
* Increase the learning-rate stopping criterion back to 1e-4 (was 1e-8)
* Use more U-tags even on Windows
* Minor formatting
* The latest MSVC 2017 builds fast, so there's no need to limit the depth any longer
* Tweak default mini-batch size again
* Even though the latest MSVC can now build the extra layers, that does not mean we should add them!
* Fix naming
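The core idea of this PR, U-net style skip connections between encoder and decoder levels, can be illustrated with dlib's pre-existing layer templates. The sketch below is a deliberately simplified assumption, not the example's actual network: it uses the stock `concat2` and `upsample` layers rather than the PR's new `concat_prev`/`resize_to_prev` layers, so it only works when the input dimensions are even and `upsample<2>` exactly undoes the 2x2 pooling.

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// 3x3 conv -> batch norm -> relu, the basic building block.
template <long N, typename SUBNET>
using block = relu<bn_con<con<N,3,3,1,1,SUBNET>>>;

// A one-level "U": remember the full-resolution features under tag1,
// downsample, compute deeper features, upsample back to full resolution,
// and concatenate the two paths channel-wise.
using net_type =
    loss_multiclass_log_per_pixel<
    con<21,1,1,1,1,             // 1x1 conv to per-pixel class scores
    block<64,
    concat2<tag1, tag2,         // skip connection: 64 + 128 = 192 channels
    tag2<upsample<2,            // decoder path, back at full resolution
    block<128,
    max_pool<2,2,2,2,           // encoder path, half resolution
    tag1<block<64,
    input<matrix<rgb_pixel>>>>>>>>>>>>;
```

Removing the even-dimension restriction is exactly what the resizing work in the bullets above ("Resize t1, if needed", "Use resize_bilinear_gradient for the backward step") is about: the new layers resize the tensors when the upsampled output does not exactly match the tagged encoder output.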
- 18 Dec, 2017 1 commit
Davis King authored
…the code, but it helps Visual Studio use less RAM when building the example, and might make AppVeyor not crash. It's also a slightly cleaner way to write the code anyway.
- 15 Nov, 2017 2 commits
Davis King authored
Juha Reunanen authored
* Add an example of semantic segmentation using the PASCAL VOC2012 dataset
* Add a note about the Debug Information Format when using MSVC
* Make the upsampling layers residual as well
* Fix declaration order
* Use a wider net
* trainer.set_iterations_without_progress_threshold(5000); // (was 20000; see the trainer sketch after this list)
* Add residual_up
* Process entire directories of images (just easier to use)
* Simplify the network structure so that builds finish on Visual Studio (faster, or at all)
* Remove the training example from CMakeLists, because it's too much for the 32-bit MSVC++ compiler to handle
* Remove the probably-now-unnecessary set_dnn_prefer_smallest_algorithms call
* Review fix: remove the batch normalization layer from right before the loss
* Review fix: point out that only the Visual C++ compiler has problems. Also expand the instructions on how to run MSBuild.exe to circumvent the problems.
* Review fix: use dlib::match_endings
* Review fix: use dlib::join_rows. Also add some comments, and instructions on where to download the pre-trained net from.
* Review fix: make formatting comply with dlib style conventions
* Review fix: output training parameters
* Review fix: remove #ifndef __INTELLISENSE__
* Review fix: use std::string instead of char*
* Review fix: update interpolation_abstract.h to say that extract_image_chips can now take the interpolation method as a parameter
* Fix whitespace formatting
* Add more comments
* Fix finding image files for inference
* Resize the inference test output to the size of the input; add clarifying remarks
* Resize the net output even in calculate_accuracy
* After all, crop the net output instead of resizing it by interpolation
* For clarity, add an empty line in the console output
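For context, here is a hedged sketch of how the trainer settings referenced in the bullets look in dlib code. The `net_type` alias below is a tiny stand-in (an assumption, not the example's real encoder-decoder network), but the trainer calls themselves (`set_iterations_without_progress_threshold`, printing the trainer to output its parameters) are the stock dlib API the review fixes refer to.

```cpp
#include <dlib/dnn.h>
#include <iostream>
using namespace dlib;

// Tiny stand-in network; the real example defines a much deeper net.
// 21 output channels = the PASCAL VOC class count.
using net_type = loss_multiclass_log_per_pixel<
                 con<21,1,1,1,1,
                 relu<con<8,3,3,1,1,
                 input<matrix<rgb_pixel>>>>>>;

int main()
{
    net_type net;

    // SGD with the weight decay and momentum most dlib examples use.
    dnn_trainer<net_type> trainer(net, sgd(0.0001, 0.9));
    trainer.set_learning_rate(0.1);
    trainer.set_min_learning_rate(1e-4);

    // The change from this PR: give up on the current learning rate after
    // 5000 iterations without progress (was 20000).
    trainer.set_iterations_without_progress_threshold(5000);
    trainer.be_verbose();

    // "Review fix: output training parameters."
    std::cout << trainer << std::endl;

    // ... load the PASCAL VOC2012 data and call trainer.train_one_step(...)
}
```

A lower progress threshold makes the trainer shrink the learning rate sooner, trading some final accuracy for much shorter training runs on a dataset of this size.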