- 27 May, 2016 1 commit
  - Fm authored
- 26 May, 2016 5 commits
  - Evgeniy Fominov authored
  - Fm authored
  - Fm authored
  - Fm authored: ...https://github.com/davisking/dlib
  - Davis King authored: ...dimensions of its inputs rather than always outputting a tensor that has the dimensions of its immediate predecessors.
- 24 May, 2016 4 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored: ...find_max_box_constrained(). Now the bounds can be empty for some variables. (See the sketch after this block.)
  - Davis King authored
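A minimal sketch of what the relaxed find_max_box_constrained() call might look like. The objective, the numbers, and the reading of an "empty" bound as lower == upper for a variable are assumptions for illustration, not taken from the commit:

```cpp
#include <dlib/optimization.h>
#include <iostream>
using namespace dlib;

typedef matrix<double,0,1> column_vector;

// Simple concave objective to maximize (illustrative only).
double f(const column_vector& x) { return sum(x) - dot(x, x); }

int main()
{
    column_vector x(2), lower(2), upper(2);
    x     = 0.5, 0.5;
    lower = -1.0, 0.2;
    upper =  1.0, 0.2;   // the box for x(1) is "empty": lower == upper pins that variable

    find_max_box_constrained(lbfgs_search_strategy(10),
                             objective_delta_stop_strategy(1e-9),
                             f, derivative(f), x, lower, upper);

    std::cout << "solution:\n" << x << std::endl;
    return 0;
}
```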
- 23 May, 2016 5 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored: ...caused by num_computational_layers being wrong when tag layers were placed as the first layer. These visit functions being wrong also caused multi-GPU support to not work on such networks.
  - Davis King authored: ...std::async(), since std::async() creates new threads with each invocation, which in turn causes objects with thread_local storage duration to be reconstructed each time. This is problematic because the CUDA context objects for cuBLAS and cuDNN get reconstructed over and over, slowing things down and generally using more resources than necessary. (See the sketch after this block.)
  - Davis King authored
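The cost described above can be reproduced with nothing but the standard library. The sketch below is illustrative (the struct and counts are made up); it shows that a thread_local object is constructed again on every std::async() call that lands on a fresh thread, whereas a persistent worker thread would construct it once:

```cpp
#include <future>
#include <iostream>
#include <atomic>

std::atomic<int> constructions{0};

// Stand-in for a per-thread CUDA context handle (illustrative only).
struct fake_context
{
    fake_context() { ++constructions; }
};

void do_work()
{
    thread_local fake_context ctx;   // reconstructed on every brand-new thread
    (void)ctx;
}

int main()
{
    for (int i = 0; i < 100; ++i)
        std::async(std::launch::async, do_work).get();

    // With std::async() each call typically runs on a fresh thread, so the
    // thread_local context can be constructed up to 100 times.  A persistent
    // worker (e.g. a thread pool) constructs it only once per worker thread.
    std::cout << "contexts constructed: " << constructions << std::endl;
    return 0;
}
```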
- 22 May, 2016 6 commits
  - Davis King authored
  - Davis King authored
  - Davis King authored: ...layers. Updated the solvers to support this.
  - Davis King authored
  - Davis King authored
  - Davis King authored: ...functions.
- 20 May, 2016 1 commit
  - Davis King authored
- 19 May, 2016 1 commit
  - Davis King authored
- 17 May, 2016 2 commits
- 16 May, 2016 7 commits
  - Davis King authored: Made LIB_INSTALL_DIR only appear when building dlib as an installable library, not when using dlib in another CMake project.
  - Davis King authored: ...for each layer if you have passed a tensor through the net. (See the sketch after this block.)
  - Davis King authored: ...interface.
  - Davis King authored
  - Davis King authored: ...progress" estimate. I also renamed the get/set functions for the shrink amount to have a consistent name and use the word "factor" instead of "amount".
  - Davis King authored
  - Davis King authored
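If the entry above about passing a tensor through the net refers to inspecting per-layer tensors after a forward pass, the usual dlib pattern looks roughly like the sketch below; the network definition and the layer index are made up for illustration:

```cpp
#include <dlib/dnn.h>
#include <iostream>
using namespace dlib;

// A small made-up network, just to have something to pass a tensor through.
using net_type = loss_multiclass_log<fc<10, relu<fc<32, input<matrix<float>>>>>>;

int main()
{
    net_type net;
    matrix<float> sample = zeros_matrix<float>(8, 8);

    net(sample);  // forward pass; per-layer outputs are now available

    // Look at the output tensor of layer 2 (the inner fc layer in this net).
    const tensor& t = layer<2>(net).get_output();
    std::cout << t.num_samples() << "x" << t.k() << "x"
              << t.nr() << "x" << t.nc() << std::endl;
    return 0;
}
```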
- 15 May, 2016 2 commits
  - Davis King authored
  - Davis King authored: ...object as an input. This allows the solvers to exhibit more complex behavior that depends on the specific layer. It also removes the learning rate from the solver's parameter set and pushes it entirely into the core training code. This also removes the need for the separate "step size", which previously was multiplied with the output of the solvers. Most of the code is still the same; in the core and trainer code the step_size variables have just been renamed to learning_rate. The dnn_trainer's relevant member functions have also been renamed. The examples have been updated to reflect these API changes. I also cleaned up the resnet definition and added better downsampling.
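A rough illustration of the renamed trainer interface. The network definition and the numbers are placeholders, and the exact member function names below are those of current dlib rather than a quote from this revision:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Placeholder network, only here so the trainer has something to train.
using net_type = loss_multiclass_log<fc<10, relu<fc<32, input<matrix<float>>>>>>;

int main()
{
    net_type net;

    // The solver is constructed without a learning rate; here sgd takes
    // (weight decay, momentum).  The learning rate now lives in the trainer.
    dnn_trainer<net_type> trainer(net, sgd(0.0005f, 0.9f));

    trainer.set_learning_rate(0.1);                // previously the "step size"
    trainer.set_min_learning_rate(1e-5);
    trainer.set_learning_rate_shrink_factor(0.1);  // shrink "factor", not "amount"
    return 0;
}
```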
- 14 May, 2016 2 commits
  - Davis King authored
  - Davis King authored: ...skip layers and add_prev style layers. In particular, in-place layers now only overwrite the gradient information in their child layer if they are operating in in-place mode. Otherwise, they add their gradients to their child layers. It should also be noted that it's safe for in-place layers to overwrite gradients when in in-place mode, since their child layers are inaccessible while in-place layers operate in in-place mode. This prevents any other layers from trying to add to the child layer, thereby avoiding the possibility of layer interference. So the bug this change fixes is that, when not in in-place mode, the child layers are still accessible, but in-place layers were *still* overwriting child gradients.
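A schematic of the rule described above (this is not dlib's actual code, just an illustration of overwrite-versus-accumulate):

```cpp
#include <vector>
#include <cstddef>

// Illustrative stand-in for a tensor of gradients.
using grad_tensor = std::vector<float>;

// Sketch of an in-place capable layer's backward pass.
void backward(const grad_tensor& my_gradient, grad_tensor& child_gradient, bool in_place)
{
    if (in_place)
    {
        // Safe to overwrite: in in-place mode no other layer can reach the child,
        // so nothing else will try to add its own contribution.
        child_gradient = my_gradient;
    }
    else
    {
        // The child may also receive gradients from other layers (e.g. skip or
        // add_prev style layers), so accumulate rather than overwrite.
        for (std::size_t i = 0; i < child_gradient.size(); ++i)
            child_gradient[i] += my_gradient[i];
    }
}

int main() { return 0; }
```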
- 13 May, 2016 4 commits
  - Davis King authored: ...if so. This way, if you have a long-running mex file, it will be killable if it is periodically printing.
  - Davis King authored
  - Davis King authored: ...using the same seed.
  - Davis King authored