- 11 Nov, 2017 10 commits
  - Davis King authored
  - Davis King authored
  - Juha Reunanen authored:
    * Add capability to train scale-variant MMOD models
    * Review fixes: change bool scale_invariant to strongly typed enum, etc.
    * Add serialization and deserialization of assumed_input_layer_type
    * Fix code formatting
    * Rename things as per review feedback
    * Review fix: move enum use_image_pyramid outside mmod_options
    * Continue execution with net, if deserialization of shape predictor fails
    * Revert "Continue execution with net, if deserialization of shape predictor fails" (reverts commit 8ea4482c043b5b98b97ed5b78bfc6916a1e2a453)
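    A minimal sketch of what the scale-variant option might look like in user code. The use_image_pyramid enum value and the mmod_options constructor overload shown here are assumptions based on the bullet points above, not something stated in this log:

    ```cpp
    #include <dlib/dnn.h>
    #include <vector>

    // Hypothetical sketch: build MMOD options for a scale-variant detector,
    // i.e. one that is NOT meant to be run over an image pyramid. The
    // use_image_pyramid enum and this constructor overload are assumed here.
    dlib::mmod_options make_scale_variant_options(
        const std::vector<std::vector<dlib::mmod_rect>>& training_boxes)
    {
        return dlib::mmod_options(dlib::use_image_pyramid::no, training_boxes);
    }
    ```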
  - Pierre Fenoll authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored:
    output 2 more statistics, which are the mean absolute error and the standard deviation of the absolute error. This means these functions now return 4D rather than 2D vectors. I also made test_regression_function() take a non-const reference to the regression function so that DNN objects can be tested.
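    As a rough illustration of the new 4-element return value, a sketch of how one might read it (the exact ordering of the entries shown in the comments is an assumption, not taken from this log):

    ```cpp
    #include <dlib/svm.h>
    #include <iostream>
    #include <vector>

    // Hypothetical sketch: evaluate a trained regression function. The result now
    // has 4 entries instead of 2; the assumed layout is mean squared error,
    // correlation, mean absolute error, and standard deviation of the absolute error.
    template <typename func_type, typename sample_type>
    void report_accuracy(
        func_type& f,  // non-const reference, so DNN objects can be tested too
        const std::vector<sample_type>& samples,
        const std::vector<double>& targets)
    {
        const dlib::matrix<double,1,4> stats = dlib::test_regression_function(f, samples, targets);
        std::cout << "MSE:                 " << stats(0) << "\n";
        std::cout << "correlation:         " << stats(1) << "\n";
        std::cout << "mean absolute error: " << stats(2) << "\n";
        std::cout << "stddev of abs error: " << stats(3) << "\n";
    }
    ```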
  - Davis King authored
- 09 Nov, 2017 1 commit
  - Sean Warren authored:
    * Remove explicit specification of library path in dlib.cmake; enables side-by-side multi-configuration builds on Windows
    * Add dlib_LIBS for backwards compatibility
- 08 Nov, 2017 1 commit
  - OtacilioNeto authored:
    This fix, suggested by davisking, makes unit tests more reliable. Fixes issue https://github.com/davisking/dlib/issues/925.
- 06 Nov, 2017 1 commit
  - Davis King authored
- 05 Nov, 2017 6 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored:
    0.5*MSE. The only thing this affects is the logging messages printed during training, which were confusing since the reported loss was half the size you would expect.
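    In other words, with $N$ samples, targets $y_i$ and predictions $\hat{y}_i$, the value printed in the training log was previously

    $$\frac{1}{2N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2,$$

    i.e. half the mean squared error $\frac{1}{N}\sum_{i=1}^{N}(y_i-\hat{y}_i)^2$ one would expect to see (this reading is inferred from the note above).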
  - Davis King authored
  - Davis King authored
  - Davis King authored
- 04 Nov, 2017 1 commit
  - Gilles Rochefort authored
- 02 Nov, 2017 4 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored:
    atomics. This makes the timing code a lot more precise.
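    For context, a minimal sketch of dlib's block timers (dlib/timing.h), which is presumably the timing code referred to above; that attribution is an assumption:

    ```cpp
    #include <dlib/timing.h>
    #include <cmath>

    // Some work worth timing.
    double do_work()
    {
        double s = 0;
        for (int i = 1; i < 1000000; ++i)
            s += std::sqrt(static_cast<double>(i));
        return s;
    }

    int main()
    {
        // Accumulate time spent in a named block; per the note above, these
        // counters are now maintained with atomics.
        dlib::timing::start(0, "do_work");
        const double result = do_work();
        dlib::timing::stop(0);

        // Print a summary of all timing blocks recorded so far.
        dlib::timing::print();
        return result > 0 ? 0 : 1;  // keep the result observable
    }
    ```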
  - Davis King authored:
    minutes in the output.
- 01 Nov, 2017 1 commit
  - Davis King authored:
    as input layer specifications. This will create input tensors with K channels.
- 29 Oct, 2017 8 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored:
    be smaller. Instead, they now behave like std::vector in that they just change their nominal size but keep the same memory, only reallocating if they are resized to something larger than their underlying memory block. This change makes some uses of dlib faster; in particular, running networks on a large set of images of differing sizes will now run faster since there won't be any GPU reallocations, which are notoriously slow.
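    A small illustration of the behavior described above, using resizable_tensor's set_size():

    ```cpp
    #include <dlib/dnn.h>

    int main()
    {
        dlib::resizable_tensor t;

        t.set_size(1, 3, 400, 400);  // allocates a block big enough for 1x3x400x400 floats

        // Shrinking now just changes the nominal size; the existing, larger
        // memory block is kept (like shrinking a std::vector).
        t.set_size(1, 3, 200, 200);

        // Growing back to something that still fits in the original block
        // therefore doesn't trigger a new allocation either.
        t.set_size(1, 3, 400, 400);

        // Only growing beyond the underlying block forces a reallocation.
        t.set_size(1, 3, 800, 800);
        return 0;
    }
    ```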
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
- 28 Oct, 2017 7 commits
  - Davis King authored:
    when we switched everything to std::shared_ptr. It turns out std::shared_ptr has some surprising limitations. This change fixes a bug where the program would sometimes crash or hang during shutdown.
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored:
    included in the edge graph. If it isn't, then the output labels from chinese_whispers would be missing faces in this degenerate case. So basically this fixes a bug where chinese_whispers(), when called from Python, would sometimes return a labels array that doesn't include labels for all the inputs.
  - Davis King authored:
    included in the edge graph. If it isn't, then the output labels from chinese_whispers would be missing faces in this degenerate case.
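    A rough sketch of the idea behind these two fixes on the C++ side: chinese_whispers() only produces labels for nodes that appear in the edge list, so a face with no near neighbors needs at least a self-edge to receive a label. How the actual fix inserts the missing nodes is not shown in this log; this is only an illustration:

    ```cpp
    #include <dlib/clustering.h>
    #include <vector>

    int main()
    {
        const unsigned long num_faces = 3;

        // Edges between faces whose descriptors are close enough to match.
        std::vector<dlib::sample_pair> edges;
        edges.push_back(dlib::sample_pair(0, 1));
        // Face 2 matched nobody, so on its own it would never appear in the
        // edge graph and would get no label at all.

        // The idea behind the fix: make sure every face is present in the
        // graph, e.g. by giving each one a self-edge.
        for (unsigned long i = 0; i < num_faces; ++i)
            edges.push_back(dlib::sample_pair(i, i));

        std::vector<unsigned long> labels;
        dlib::chinese_whispers(edges, labels);

        // labels.size() now equals num_faces: one label per input face.
        return labels.size() == num_faces ? 0 : 1;
    }
    ```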