    Made the loss dumping between learning rate changes a little more relaxed. In · 9a8f3121
    Davis King authored
    particular, rather than always dumping exactly the last 400 loss values, it
    now dumps 400 + 10% of the loss buffer.  This way, the size of the dump grows
    with the steps-without-progress threshold.  This is better because when the
    user sets the steps-without-progress threshold to something larger, it
    probably means more loss values need to be examined to decide that training
    should stop, so dumping more in that case ought to be more useful.