- Feb 14, 2018
  Tim O'Donnell authored
- Feb 09, 2018
  Tim O'Donnell authored
- Feb 08, 2018
  Tim O'Donnell authored
  Performance enhancement for faster start time: reuse compiled networks even if their regularizations differ. This is safe because regularization should not affect predictions.
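The caching idea above can be sketched as follows. This is an illustration of the concept, not mhcflurry's actual code, and the config key names (`l1_regularization`, `l2_regularization`) are assumptions: strip regularization settings from the cache key, since they affect training but not the forward pass.

```python
import json

# Cache of compiled networks, keyed by architecture. Sketch only; the
# regularization key names below are assumptions, not mhcflurry's real keys.
NETWORK_CACHE = {}

REGULARIZATION_KEYS = ("l1_regularization", "l2_regularization")

def cache_key(config):
    # Drop regularization settings before serializing: they change how a
    # network is trained, not what it predicts, so two configs that differ
    # only in these fields can safely share one compiled network.
    stripped = {k: v for k, v in config.items() if k not in REGULARIZATION_KEYS}
    return json.dumps(stripped, sort_keys=True)

def get_or_compile(config, compile_fn):
    # compile_fn is a hypothetical "build and compile a network" callable.
    key = cache_key(config)
    if key not in NETWORK_CACHE:
        NETWORK_CACHE[key] = compile_fn(config)
    return NETWORK_CACHE[key]
```

With this key, two configs differing only in `l2_regularization` trigger a single compilation, which is where the start-time saving comes from.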
- Feb 06, 2018
  Tim O'Donnell authored
- Dec 19, 2017
  Tim O'Donnell authored
- Dec 10, 2017
  Tim O'Donnell authored
- Dec 01, 2017
  Tim O'Donnell authored
- Nov 28, 2017
  Tim O'Donnell authored
- Nov 25, 2017
  Tim O'Donnell authored
- Nov 15, 2017
  Tim O'Donnell authored
  * First cut of supporting quantiles ("percent ranks") per allele. Closes #87.
  * Some refactoring of amino_acid.py. This builds on #114.
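The per-allele percent-rank idea can be sketched as an empirical CDF, one per allele. This is an illustration of the concept, not mhcflurry's actual implementation; the class name and methods here are hypothetical.

```python
import numpy as np

class PercentRank:
    """Empirical-CDF sketch: map raw predictions for one allele to
    percent ranks (0-100) relative to a reference distribution."""

    def fit(self, reference_values):
        # Sort once; transform() is then a binary search per query value.
        self.sorted_values = np.sort(np.asarray(reference_values, dtype=float))
        return self

    def transform(self, values):
        # Fraction of reference values <= each query value, as a percentage.
        ranks = np.searchsorted(self.sorted_values, values, side="right")
        return 100.0 * ranks / len(self.sorted_values)
```

A predictor would hold one such transform per allele, fitted on that allele's predictions over some large reference set of peptides, so the same raw score can map to different percent ranks for different alleles.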
- Nov 14, 2017
  Tim O'Donnell authored
- Nov 13, 2017
  Tim O'Donnell authored
- Aug 03, 2017
  Tim O'Donnell authored
  This should give a substantial prediction speed improvement, as identified by @rohanpai.
  * Add borrow_cached_network() method to Class1NeuralNetwork
  * Use cached models in Class1NeuralNetwork.predict
  * Set names for all neural network layers so the model JSON is identical for identical architectures
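The borrowing scheme can be sketched like this. The `Network` class below is a stand-in for the compiled Keras model, not mhcflurry's actual code; only the function name `borrow_cached_network` comes from the commit message. Deterministic layer names are what make the architecture JSON identical for identical architectures, so string equality of the JSON works as a cache key.

```python
# One compiled network per architecture JSON (sketch only).
_NETWORK_CACHE = {}

class Network:
    """Stand-in for a compiled Keras model."""
    def __init__(self, architecture_json):
        self.architecture_json = architecture_json
        self.weights = None

    def set_weights(self, weights):
        self.weights = weights

def borrow_cached_network(architecture_json, weights):
    # Reuse the compiled network for this architecture if one exists;
    # only the cheap per-call step (assigning weights) is repeated.
    if architecture_json not in _NETWORK_CACHE:
        _NETWORK_CACHE[architecture_json] = Network(architecture_json)
    network = _NETWORK_CACHE[architecture_json]
    network.set_weights(weights)
    return network
```

The trade-off is that a borrowed network's weights are only valid until the next borrower of the same architecture sets its own, which is fine for synchronous prediction loops.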
- May 25, 2017
  Tim O'Donnell authored
- May 24, 2017
  Tim O'Donnell authored
- May 22, 2017
  Tim O'Donnell authored
- May 21, 2017
  Tim O'Donnell authored