- 25 Aug, 2017 6 commits
-
Deniz Evrenci authored
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
- 24 Aug, 2017 2 commits
-
Evgeniy Fominov authored
-
ipeterson authored
The PNG_LIBRARY variable set by CMake's FindPNG module does not include the zlib dependency. This causes the CHECK_FUNCTION_EXISTS(png_create_read_struct LIBPNG_IS_GOOD) check to fail with linker errors, and dlib then falls back to its internal copy of libpng. Updated the build to use the PNG_LIBRARIES variable instead, which also adds both the libpng and zlib libraries to dlib_needed_libraries.
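A minimal CMake sketch of the fix, assuming dlib's FindPNG-based configuration; the dlib_needed_libraries bookkeeping here is a simplified stand-in for what dlib's CMakeLists.txt actually does:

```cmake
include(CheckFunctionExists)
find_package(PNG)

# PNG_LIBRARY names only libpng itself, so linking png_create_read_struct
# fails for lack of zlib symbols. PNG_LIBRARIES also carries zlib.
set(CMAKE_REQUIRED_LIBRARIES ${PNG_LIBRARIES})
check_function_exists(png_create_read_struct LIBPNG_IS_GOOD)

if (LIBPNG_IS_GOOD)
   # Record both libpng and zlib as link dependencies.
   list(APPEND dlib_needed_libraries ${PNG_LIBRARIES})
endif()
```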
-
- 23 Aug, 2017 3 commits
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
- 22 Aug, 2017 4 commits
-
Davis King authored
-
Davis King authored
-
Davis King authored
Fixed linker errors when building the Python bindings on Windows. This fixes a bug that was introduced in a recent PR. Also fixed compiler errors that occurred in Visual Studio.
-
Davis King authored
-
- 21 Aug, 2017 6 commits
-
Davis King authored
in variance when the learning rate resets.
-
Davis King authored
-
Davis King authored
-
Davis King authored
coordinate mappings work on networks that contain skip layers.
-
Davis King authored
to 500. Now that we have better management of the loss value history in the trainer, it's much more sensible to have a larger value here.
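For context, a hedged sketch of how this threshold is set through dlib's dnn_trainer; the network type here is a placeholder, not something from the commit:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Placeholder network: a tiny classifier, just to give the trainer a type.
using net_type = loss_multiclass_log<fc<10, input<matrix<float>>>>;

int main()
{
    net_type net;
    dnn_trainer<net_type> trainer(net);

    // The default is now 500 iterations; raise it if your loss is so noisy
    // that progress takes longer than that to show up.
    trainer.set_iterations_without_progress_threshold(2000);
}
```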
-
Davis King authored
when it determines that there have been a lot of steps without progress and shrinks the learning rate. Instead, it removes only the oldest 100. The problem with the old way of removing all the loss values in the history was that, if you set the steps-without-progress threshold to a very high number, you would often observe that the last few learning rates were obviously not making progress; however, since all the previous loss values had been forgotten, the trainer needed to fully repopulate its loss history from scratch before it could figure this out. This new scheme stops the trainer from wasting time on this excessive optimization of obviously useless mini-batches.
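The gist of the new behavior as a self-contained sketch, not dlib's actual trainer code; the container type and the constant 100 simply mirror the description above:

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>

// Hypothetical stand-in for the trainer's loss history bookkeeping.
void on_learning_rate_shrink(std::deque<double>& previous_loss_values)
{
    // Old behavior: previous_loss_values.clear() forgot everything, so a
    // still-stalled run had to refill the entire history before the
    // steps-without-progress test could fire again.

    // New behavior: drop only the oldest 100 values and keep the rest, so
    // evidence of stalling carries over across learning rate shrinks.
    const std::size_t to_drop =
        std::min<std::size_t>(100, previous_loss_values.size());
    previous_loss_values.erase(previous_loss_values.begin(),
                               previous_loss_values.begin() + to_drop);
}
```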
-
- 20 Aug, 2017 1 commit
-
Davis King authored
which it was in all ways except that you couldn't deserialize it directly as you would expect. This has now been fixed.
-
- 19 Aug, 2017 4 commits
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
Adam Geitgey authored
-
- 18 Aug, 2017 2 commits
-
Davis King authored
-
Davis King authored
across scales regardless of the input image size. Previously, if you gave it really large or really small images, it was biased towards producing only large or only small patches, respectively.
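A schematic of the idea rather than dlib's cropper code: draw the patch size log-uniformly between fixed bounds, so every scale is equally likely independent of the source image dimensions. All names and the bounds here are illustrative:

```cpp
#include <cmath>
#include <random>

// Illustrative only: sample a patch side length log-uniformly between fixed
// bounds, so each scale octave is equally likely whether the input image is
// huge or tiny (the old scheme tied the distribution to the image size).
long sample_patch_size(long min_size, long max_size, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(std::log((double)min_size),
                                             std::log((double)max_size));
    return (long)std::round(std::exp(u(rng)));
}
```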
-
- 16 Aug, 2017 2 commits
-
Davis King authored
-
Davis King authored
-
- 15 Aug, 2017 6 commits
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
Davis King authored
-
Davis King authored
concat layer's backward() method. It was assigning the gradient to previous layers instead of adding it, as required by the layer interface specification. This change also noticeably speeds up concat layers, since only one CUDA kernel launch now happens per concat operation rather than one kernel launch for each sample in a tensor.
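The contract the fix restores, shown as a toy CPU sketch rather than dlib's CUDA kernel: backward() must accumulate into the upstream gradient, because several layers may deposit gradients into the same tensor:

```cpp
#include <cstddef>
#include <vector>

// Toy stand-in for one branch of a concat layer's backward pass over flat
// buffers. offset locates this branch's slice inside the concatenated tensor.
void concat_backward_branch(const std::vector<float>& gradient_output,
                            std::size_t offset,
                            std::vector<float>& gradient_input)
{
    for (std::size_t i = 0; i < gradient_output.size(); ++i)
        gradient_input[offset + i] += gradient_output[i];  // add, never assign
}
```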
-
- 14 Aug, 2017 1 commit
-
Davis King authored
-
- 12 Aug, 2017 2 commits
-
Davis King authored
-
Davis King authored
-
- 11 Aug, 2017 1 commit
-
Davis King authored
-