1. 23 Nov, 2017 2 commits
    • merged · 66a5a9c4 (Davis King)
    • Made the loss dumping between learning rate changes a little more relaxed · 9a8f3121 (Davis King)
      In particular, rather than always dumping exactly the last 400 loss values, it now
      dumps 400 plus 10% of the loss buffer.  This way, the size of the dump scales with
      the steps-without-progress threshold.  This is better because when the user sets
      that threshold to something larger, more loss values probably need to be examined
      to determine that training should stop, so dumping more in that case ought to be
      better.
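      The dump-size rule described in the commit above can be sketched as follows. This is a minimal, self-contained illustration; the function name and the use of a plain deque for the loss buffer are hypothetical stand-ins, not dlib's actual internals.

      ```cpp
      #include <cstddef>
      #include <deque>
      #include <iostream>

      // Hypothetical helper illustrating the rule from commit 9a8f3121:
      // dump the last 400 loss values plus 10% of the loss buffer, so the
      // dump grows with the steps-without-progress threshold (which
      // determines how large the buffer is), capped at the buffer size.
      std::size_t num_losses_to_dump(const std::deque<double>& losses)
      {
          const std::size_t base  = 400;
          const std::size_t extra = losses.size() / 10;  // 10% of the buffer
          const std::size_t want  = base + extra;
          return want < losses.size() ? want : losses.size();
      }

      int main()
      {
          // E.g. a threshold of 10000 steps implies a 10000-entry buffer,
          // so 400 + 1000 = 1400 values would be dumped.
          std::deque<double> losses(10000, 0.5);
          std::cout << num_losses_to_dump(losses) << "\n";
          return 0;
      }
      ```

      With a larger threshold (and hence a larger buffer) the 10% term dominates, which matches the commit's intent that slower-to-judge training runs get proportionally bigger dumps.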
  2. 22 Nov, 2017 1 commit
  3. 21 Nov, 2017 7 commits
  4. 20 Nov, 2017 3 commits
  5. 19 Nov, 2017 10 commits
  6. 18 Nov, 2017 6 commits
  7. 17 Nov, 2017 3 commits
  8. 16 Nov, 2017 1 commit
  9. 15 Nov, 2017 6 commits
    • Removed unneeded assert · 9f6ad63b (Davis King)
    • b84e2123 (Davis King)
    • Minor tweaks to spec · 36392bb2 (Davis King)
    • merged · 483e6ab4 (Davis King)
    • Add semantic segmentation example (#943) · e48125c2 (Juha Reunanen)
      * Add example of semantic segmentation using the PASCAL VOC2012 dataset
      
      * Add note about Debug Information Format when using MSVC
      
      * Make the upsampling layers residual as well
      
      * Fix declaration order
      
      * Use a wider net
      
      * trainer.set_iterations_without_progress_threshold(5000); // (was 20000)
      
      * Add residual_up
      
      * Process entire directories of images (just easier to use)
      
      * Simplify network structure so that builds finish even on Visual Studio (faster, or at all)
      
      * Remove the training example from CMakeLists, because it's too much for the 32-bit MSVC++ compiler to handle
      
      * Remove the probably-now-unnecessary set_dnn_prefer_smallest_algorithms call
      
      * Review fix: remove the batch normalization layer from right before the loss
      
      * Review fix: point out that only the Visual C++ compiler has problems.
      Also expand the instructions on how to run MSBuild.exe to circumvent the problems.
      
      * Review fix: use dlib::match_endings
      
      * Review fix: use dlib::join_rows. Also add some comments, and instructions where to download the pre-trained net from.
      
      * Review fix: make formatting comply with dlib style conventions.
      
      * Review fix: output training parameters.
      
      * Review fix: remove #ifndef __INTELLISENSE__
      
      * Review fix: use std::string instead of char*
      
      * Review fix: update interpolation_abstract.h to say that extract_image_chips can now take the interpolation method as a parameter
      
      * Fix whitespace formatting
      
      * Add more comments
      
      * Fix finding image files for inference
      
      * Resize inference test output to the size of the input; add clarifying remarks
      
      * Resize net output even in calculate_accuracy
      
      * After all crop the net output instead of resizing it by interpolation
      
      * For clarity, add an empty line in the console output
    • Sebastian Höffner
  10. 14 Nov, 2017 1 commit