- 24 Jan, 2016 (1 commit)
  - Davis King authored: …unit tests for the routines supporting this feature.
- 23 Jan, 2016 (3 commits)
  - Davis King authored: …implementation of assign_conv_bias_gradient().
  - Davis King authored
  - Davis King authored: …wrong outputs sometimes.
- 22 Jan, 2016 (1 commit)
  - Davis King authored
- 18 Jan, 2016 (1 commit)
  - Davis King authored
- 12 Jan, 2016 (1 commit)
  - Davis King authored
- 10 Jan, 2016 (3 commits)
  - Davis King authored
  - Davis King authored
  - Davis King authored: …network (which would needlessly double VRAM usage). I also added a set_synchronization_file() method so you can tell it to automatically synchronize itself to disk every so often during training. This makes resuming an interrupted training session trivially easy.
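A minimal usage sketch of set_synchronization_file() with dlib's dnn_trainer, assuming present-day dlib names; the toy network type and filename are placeholders:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Placeholder network; any dlib network type works the same way.
using net_type = loss_multiclass_log<fc<10, relu<fc<84, input<matrix<float>>>>>>;

int main()
{
    net_type net;
    dnn_trainer<net_type> trainer(net);
    // Checkpoint the trainer's full state to disk every 5 minutes.
    // If "my_sync_file" already exists, the trainer reloads it first,
    // so an interrupted run resumes right where it left off.
    trainer.set_synchronization_file("my_sync_file", std::chrono::minutes(5));
    // ... trainer.train(training_samples, training_labels); ...
}
```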
- 09 Jan, 2016 (7 commits)
  - Davis King authored
  - Davis King authored: …the loss isn't being reduced. There is also a stopping condition now based on how large the current learning rate is: training stops when the learning rate gets small enough that it is clear no progress is being made. (A sketch of these knobs follows this list.)
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
  - Davis King authored
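For the learning-rate based stopping condition, the relevant dnn_trainer knobs in current dlib look roughly like this; the exact names in the January 2016 tree may have differed:

```cpp
#include <dlib/dnn.h>
using namespace dlib;
using net_type = loss_multiclass_log<fc<10, input<matrix<float>>>>;

int main()
{
    net_type net;
    dnn_trainer<net_type> trainer(net);
    // If the loss shows no progress for this many steps, the trainer
    // assumes the current rate is exhausted and shrinks it...
    trainer.set_iterations_without_progress_threshold(2000);
    trainer.set_learning_rate_shrink_factor(0.1);
    // ...and training stops once the rate has shrunk below this value.
    trainer.set_min_learning_rate(1e-5);
}
```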
- 08 Jan, 2016 (3 commits)
  - Davis King authored
  - Davis King authored
  - Davis King authored
- 05 Jan, 2016 (2 commits)
  - Davis King authored
  - Davis King authored
- 04 Jan, 2016 (4 commits)
  - Davis King authored: …explicitly gave it a solver.
  - Davis King authored
  - Davis King authored
  - Davis King authored: …the number of threads and blocks rather than using the hard coded numbers I had in there. This makes some functions noticeably faster. Also added a dot() function that is fully asynchronous.
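The launch-configuration change amounts to deriving grid and block counts from the data size instead of fixed constants. A hypothetical stand-alone sketch of that arithmetic (the helper name and caps are made up for illustration):

```cpp
#include <algorithm>
#include <cstddef>

struct launch_config { int num_blocks; int threads_per_block; };

// Pick a CUDA-style launch configuration from the number of elements,
// rather than hard coding the block/grid counts. Kernels are assumed
// to walk a grid-stride loop, so capping the grid size is safe.
launch_config pick_launch(std::size_t num_items)
{
    const int threads = 512;
    const std::size_t blocks = (num_items + threads - 1) / threads;
    const std::size_t max_blocks = 1024;  // arbitrary cap for the sketch
    return { static_cast<int>(std::min(blocks, max_blocks)), threads };
}
```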
- 03 Jan, 2016 (1 commit)
  - Davis King authored
- 02 Jan, 2016 (1 commit)
  - Davis King authored
- 01 Jan, 2016 (3 commits)
  - Davis King authored
  - Davis King authored: …networks. This mostly involved removing really deep template recursions, since those upset the compiler when you make really deep networks. (A toy illustration follows this list.)
  - Davis King authored
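A toy illustration (not dlib code) of why depth expressed through nested template instantiations strains the compiler: each layer adds a level of recursive instantiation, and compilers cap that depth.

```cpp
// Each level of "depth" is another recursive template instantiation.
template <int N> struct tower { static const int depth = tower<N - 1>::depth + 1; };
template <>      struct tower<0> { static const int depth = 0; };

// Fine for shallow towers, but something like tower<5000>::depth can
// blow past the compiler's template recursion limit (-ftemplate-depth).
static_assert(tower<100>::depth == 100, "shallow instantiation is fine");
```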
- 31 Dec, 2015 (3 commits)
  - Davis King authored: …method to more efficiently give the input gradient in some instances.
  - Davis King authored: …network is allocated on the heap rather than resulting in really large stack usage for large networks. (A sketch of the technique follows this list.)
  - Davis King authored
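A minimal sketch of the heap-allocation technique in isolation; the struct is hypothetical, and dlib's change applies the same idea to its network objects:

```cpp
#include <memory>

struct big_network
{
    float params[50'000'000];  // ~200 MB, far larger than a typical thread stack
};

int main()
{
    // big_network net;                          // would likely overflow the stack
    auto net = std::make_unique<big_network>();  // heap allocation avoids that
    net->params[0] = 1.0f;
}
```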
- 25 Dec, 2015 (1 commit)
  - Davis King authored
- 24 Dec, 2015 (5 commits)
  - Davis King authored: …tensors with different sizes and it will zero pad them as needed. (See the first sketch after this list.)
  - Davis King authored
  - Davis King authored: …define combination layers made out of other combination layers without being hassled by the compiler. (See the second sketch after this list.)
  - Davis King authored
  - Davis King authored
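First, a simplified 1-D stand-in for the zero-padded addition semantics described above; dlib's actual routine operates on 4-D tensors:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Add two sequences elementwise, treating the shorter one as if it
// were zero padded out to the longer length.
std::vector<float> add_zero_padded(const std::vector<float>& a,
                                   const std::vector<float>& b)
{
    std::vector<float> out(std::max(a.size(), b.size()), 0.0f);
    for (std::size_t i = 0; i < a.size(); ++i) out[i] += a[i];
    for (std::size_t i = 0; i < b.size(); ++i) out[i] += b[i];
    return out;
}
```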
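Second, a sketch of nesting combination layers, using present-day dlib idioms (layer sizes are arbitrary): a combination layer is just an alias template over its subnetwork, and after this change such aliases compose freely.

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// A combination layer is an alias template parameterized on the subnetwork.
template <typename SUBNET> using block = relu<fc<32, SUBNET>>;

// A combination layer built out of other combination layers.
template <typename SUBNET> using double_block = block<block<SUBNET>>;

using net_type = loss_multiclass_log<fc<10, double_block<input<matrix<float>>>>>;
```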