1. 08 May, 2017 1 commit
  2. 22 Jan, 2017 2 commits
  3. 23 Aug, 2016 1 commit
  4. 25 Jun, 2016 3 commits
  5. 22 Jun, 2016 2 commits
  6. 11 Jun, 2016 1 commit
  7. 24 May, 2016 1 commit
  8. 23 May, 2016 1 commit
  9. 16 May, 2016 1 commit
  10. 15 May, 2016 1 commit
    • Davis King authored · 66166c67
      Changed the solver interface to take the learning rate and the layer details
      object as an input.  This allows the solvers to exhibit a more complex behavior
      that depends on the specific layer.  It also removes the learning rate from the
      solver's parameter set and pushes it entirely into the core training code.
      This also removes the need for the separate "step size" which previously was
      multiplied with the output of the solvers.
      
      Most of the code is still the same, and in the core and trainer the step_size
      variables have just been renamed to learning_rate.  The dnn_trainer's relevant
      member functions have also been renamed.
      
      The examples have been updated to reflect these API changes.  I also cleaned up
      the resnet definition and added better downsampling.
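      (A hedged sketch of the updated solver interface is shown after this commit list.)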
  11. 08 May, 2016 1 commit
  12. 05 May, 2016 1 commit
  13. 29 Apr, 2016 1 commit
  14. 19 Apr, 2016 2 commits
  15. 16 Apr, 2016 1 commit
  16. 12 Apr, 2016 4 commits
  17. 11 Apr, 2016 1 commit
  18. 10 Apr, 2016 1 commit
  19. 09 Apr, 2016 1 commit
  20. 27 Mar, 2016 3 commits
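
The commit dated 15 May, 2016 above describes the reworked solver contract: the trainer now hands each solver the current learning rate together with the layer's details object, and the solver's return value is applied as the parameter update directly, with no separate step size multiplier. Below is a minimal sketch of what a solver and the renamed dnn_trainer calls might look like under that contract. It is an illustration only: the plain_sgd class and the tiny network are hypothetical, and the exact operator() signature and helper overloads are assumptions modeled on the commit description rather than quotes from dlib's headers.

// A minimal sketch of a custom solver written against the interface described
// in the commit above: the trainer passes the current learning rate and the
// layer's details object into the solver, and the solver's return value is
// used as the parameter update directly (no separate "step size" multiplier).
// The plain_sgd class and the tiny network below are hypothetical examples,
// not code taken from dlib itself.
#include <dlib/dnn.h>
#include <iostream>
#include <string>

using namespace dlib;

class plain_sgd
{
public:
    // Called by the trainer for each layer that has parameters.  The learning
    // rate comes from the trainer; the layer details object l is available if
    // a solver wants per-layer behavior (unused in this sketch).
    template <typename layer_type>
    const tensor& operator() (
        const float learning_rate,
        const layer_type& /*l*/,
        const tensor& params_grad
    )
    {
        v.copy_size(params_grad);
        float* out = v.host();
        const float* g = params_grad.host();
        for (size_t i = 0; i < v.size(); ++i)
            out[i] = -learning_rate * g[i];   // update = -learning_rate * gradient
        return v;
    }

    // The trainer also expects solvers to be serializable and printable; these
    // minimal overloads mirror how dlib's bundled solvers are written.
    friend void serialize(const plain_sgd&, std::ostream& out) { dlib::serialize(std::string("plain_sgd"), out); }
    friend void deserialize(plain_sgd&, std::istream& in) { std::string ver; dlib::deserialize(ver, in); }
    friend std::ostream& operator<<(std::ostream& out, const plain_sgd&) { return out << "plain_sgd"; }

private:
    resizable_tensor v;   // buffer holding the update returned to the trainer
};

int main()
{
    // Hypothetical one-layer network, just to show the renamed trainer calls.
    using net_type = loss_multiclass_log<fc<10, input<matrix<float>>>>;
    net_type net;

    // The learning rate now lives entirely in the trainer, not in the solver.
    dnn_trainer<net_type, plain_sgd> trainer(net, plain_sgd());
    trainer.set_learning_rate(0.01);       // renamed from the old step size setter
    trainer.set_min_learning_rate(1e-5);
}

Under this split, schedule logic (for example, shrinking the learning rate as training progresses) belongs to the core training code, while a solver only decides how to turn a gradient and the trainer-supplied rate into an update, possibly in a way that depends on the specific layer it is updating.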