Commit 66166c67, authored by Davis King:

    Changed the solver interface to take the learning rate and the layer details
    object as an input.  This allows the solvers to exhibit more complex behavior
    that depends on the specific layer.  It also removes the learning rate from the
    solver's parameter set and pushes it entirely into the core training code,
    which in turn removes the need for the separate "step size" that was previously
    multiplied with the output of the solvers.
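
A minimal sketch of a solver written against the interface described above: the
training code hands the solver the learning rate, the layer details object, and
the parameter gradient, and the solver returns the update to apply.  The exact
signature and the plain_sgd name are assumptions for illustration, not copied
from dlib's headers; dlib's built-in solvers also handle momentum, weight decay,
and serialization, which are omitted here.

#include <dlib/dnn.h>

// Hypothetical solver conforming to the interface described in this commit:
// it receives the learning rate and the layer object along with the parameter
// gradient, and returns the update tensor that the core training code adds to
// the layer's parameters.
class plain_sgd
{
public:
    plain_sgd() = default;

    template <typename layer_type>
    const dlib::tensor& operator() (
        const float learning_rate,
        const layer_type& l,
        const dlib::tensor& params_grad
    )
    {
        // The layer object is available for layer-specific behavior (e.g.
        // reading a per-layer multiplier); this simple sketch ignores it.
        (void)l;

        // Size the update buffer to match the parameter gradient.
        update.copy_size(params_grad);

        // Plain gradient descent: scale the gradient by -learning_rate so the
        // returned tensor can be added directly to the parameters.
        const float* g = params_grad.host();
        float* u = update.host();
        for (size_t i = 0; i < update.size(); ++i)
            u[i] = -learning_rate * g[i];

        return update;
    }

private:
    dlib::resizable_tensor update;
};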
    
    Most of the code is unchanged; in the core and trainer code the step_size
    variables have simply been renamed to learning_rate, and the dnn_trainer's
    relevant member functions have been renamed accordingly, as sketched below.
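
A brief, hedged example of the renamed trainer calls, assuming
set_learning_rate() and set_min_learning_rate() are the members that replaced
the old step-size based names; the tiny network and the data vectors are
placeholders for illustration only.

#include <dlib/dnn.h>
#include <vector>
using namespace dlib;

// A small placeholder network type, just enough to instantiate a trainer.
using net_type = loss_multiclass_log<fc<10,relu<fc<84,input<matrix<unsigned char>>>>>>;

void train_example(
    const std::vector<matrix<unsigned char>>& images,
    const std::vector<unsigned long>& labels
)
{
    net_type net;
    dnn_trainer<net_type> trainer(net, sgd());

    // These calls use the learning_rate naming this commit introduces.
    trainer.set_learning_rate(0.01);
    trainer.set_min_learning_rate(0.00001);
    trainer.set_mini_batch_size(128);

    trainer.train(images, labels);
}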
    
    The examples have been updated to reflect these API changes.  I also cleaned up
    the resnet definition and added better downsampling.
Changed example file: dnn_mnist_advanced_ex.cpp
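
The cleaned-up resnet definition lives in dnn_mnist_advanced_ex.cpp.  Below is a
simplified sketch of the residual / downsampling-residual pattern in dlib's
layer template syntax; the definitions in the actual example are more general
and its parameters may differ, so treat this as an illustration of the idea
rather than a copy of the example.

#include <dlib/dnn.h>
using namespace dlib;

// Building block: two 3x3 convolutions with batch normalization and relu.
// The first convolution takes the stride so the same block can be reused
// for downsampling.
template <int N, int stride, typename SUBNET>
using block = bn_con<con<N,3,3,1,1,relu<bn_con<con<N,3,3,stride,stride,SUBNET>>>>>;

// Identity residual unit: the block's output is added back onto its input.
template <int N, typename SUBNET>
using residual = relu<add_prev1<block<N,1,tag1<SUBNET>>>>;

// Downsampling residual unit: the block runs with stride 2 while the skip
// connection is average pooled 2x2, so both branches shrink by the same
// factor before being added together.
template <int N, typename SUBNET>
using residual_down = relu<add_prev2<avg_pool<2,2,2,2,skip1<tag2<block<N,2,tag1<SUBNET>>>>>>>;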