- 20 Nov, 2017 1 commit
Davis King authored

- 19 Nov, 2017 10 commits

Juha Reunanen authored
* Problem: the log loss may become infinite if g[idx] goes to zero. Solution: limit the input of the log function to 1e-6 (or more).
* Parameterize the safe_log epsilon limit, and make the default value 1e-10.
Davis King authored
Davis King authored
upper_bound_function::add(). Also fixed some issues in the solver.
Davis E. King authored

Davis E. King authored

Davis King authored

Davis E. King authored

Davis E. King authored

Davis E. King authored

Davis E. King authored

- 18 Nov, 2017 6 commits

Davis King authored
updated without needing to resolve the whole QP.
Davis King authored

Davis King authored

Davis King authored
specification and this means "make the filter cover the whole input image dimension". So it's just an easy way to make a filter sized exactly so that it will have one output along that dimension.
Davis King authored

Davis King authored

- 17 Nov, 2017 3 commits

Kino authored
* generic_image all the way: tried to hunt down and correct the functions that were using a non-generic_image approach to dlib's generic images
* generic image fix: had to change a couple of const_image_view to non-const versions so that array access is possible in the rest of the code
* same
* back to sanity
Amin Cheloh authored

Davis King authored

- 16 Nov, 2017 1 commit

Davis King authored
- 15 Nov, 2017 6 commits

Davis King authored
Davis King authored

Davis King authored

Davis King authored
Juha Reunanen authored
* Add example of semantic segmentation using the PASCAL VOC2012 dataset
* Add note about Debug Information Format when using MSVC
* Make the upsampling layers residual as well
* Fix declaration order
* Use a wider net
* trainer.set_iterations_without_progress_threshold(5000); // (was 20000)
* Add residual_up
* Process entire directories of images (just easier to use)
* Simplify network structure so that builds finish even on Visual Studio (faster, or at all)
* Remove the training example from CMakeLists, because it's too much for the 32-bit MSVC++ compiler to handle
* Remove the probably-now-unnecessary set_dnn_prefer_smallest_algorithms call
* Review fix: remove the batch normalization layer from right before the loss
* Review fix: point out that only the Visual C++ compiler has problems. Also expand the instructions on how to run MSBuild.exe to circumvent the problems.
* Review fix: use dlib::match_endings
* Review fix: use dlib::join_rows. Also add some comments, and instructions on where to download the pre-trained net from.
* Review fix: make formatting comply with dlib style conventions
* Review fix: output training parameters
* Review fix: remove #ifndef __INTELLISENSE__
* Review fix: use std::string instead of char*
* Review fix: update interpolation_abstract.h to say that extract_image_chips can now take the interpolation method as a parameter
* Fix whitespace formatting
* Add more comments
* Fix finding image files for inference
* Resize inference test output to the size of the input; add clarifying remarks
* Resize net output even in calculate_accuracy
* After all, crop the net output instead of resizing it by interpolation
* For clarity, add an empty line in the console output
Sebastian Höffner authored

- 14 Nov, 2017 5 commits

Davis King authored
Sean Warren authored
* Determine the LAPACK Fortran linking convention in CMake: look for a LAPACK function both with and without a trailing underscore. This allows use of CLAPACK on Windows, where functions are decorated but fortran_id.h otherwise assumes they are not.
* Use enable_preprocessor_switch for LAPACK decoration detection
* Add LAPACK decoration defines to config.h.in
* Use the correct variable for lapack_libraries
Davis King authored
Davis King authored
find_global_maximum() and global_function_search.
Davis King authored
general templates in dlib::relational_operators. I did this because the templates in dlib::relational_operators sometimes cause clashes with other code in irritating ways.
- 13 Nov, 2017 3 commits

Davis King authored

Davis King authored

Davis King authored
- 12 Nov, 2017 1 commit
Davis King authored

- 11 Nov, 2017 4 commits

Davis King authored

Davis King authored
Juha Reunanen authored
* Add capability to train scale-variant MMOD models
* Review fixes: change bool scale_invariant to strongly typed enum, etc.
* Add serialization and deserialization of assumed_input_layer_type
* Fix code formatting
* Rename things as per review feedback
* Review fix: move enum use_image_pyramid outside mmod_options
* Continue execution with net, if deserialization of shape predictor fails
* Revert "Continue execution with net, if deserialization of shape predictor fails" (this reverts commit 8ea4482c043b5b98b97ed5b78bfc6916a1e2a453)
Pierre Fenoll authored