- Sep 15, 2016
  - Tim O'Donnell authored: Lazily putting this all in one commit.
    * infrastructure for downloading datasets and published trained models (the `mhcflurry-downloads` command)
    * docs and scripts (in `downloads-generation`) to generate the published datasets and trained models
    * parallelized cross-validation and model training implementation, including support for imputation (based on the old mhcflurry-cloud repo, which is now gone)
    * a single front-end script for class1 allele-specific cross-validation and model training / testing (`mhcflurry-class1-allele-specific-cv-and-train`)
    * refactor how we deal with hyper-parameters and how we instantiate Class1BindingPredictors
    * make Class1BindingPredictor pickleable and remove the old serialization code
    * move code particular to class 1 allele-specific predictors into its own submodule
    * remove unused code, including arg parsing, plotting, and ensembles
    * bump the binding prediction threshold for the Titin1 epitope from 500 to 700, as this test was sporadically failing (see test_known_class1_epitopes.py)
    * attempt to make tests involving randomness somewhat more reproducible by setting the numpy random seed
    * update README
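One item above makes Class1BindingPredictor pickleable. As a minimal sketch of what that enables, the toy class below (hypothetical; not the real Class1BindingPredictor API) survives a pickle round-trip, which is what lets a trained model be saved and reloaded without custom serialization code:

```python
import pickle

class ToyPredictor:
    """Stand-in for a pickleable predictor. Class1BindingPredictor itself
    has a different interface; this class is purely illustrative."""
    def __init__(self, allele, hyperparameters):
        self.allele = allele
        self.hyperparameters = hyperparameters

    def predict(self, peptides):
        # Placeholder scoring: a real predictor runs a neural network here.
        return [len(p) * 50.0 for p in peptides]

# Pickle round-trip: serialize to bytes, then restore an equivalent object.
model = ToyPredictor("HLA-A*02:01", {"layers": [64]})
restored = pickle.loads(pickle.dumps(model))
assert restored.allele == model.allele
assert restored.predict(["SIINFEKL"]) == model.predict(["SIINFEKL"])
```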
  - Tim O'Donnell authored: Remove experiments and notebooks
  - Tim O'Donnell authored
- Sep 01, 2016
  - Tim O'Donnell authored: Split python2 into separate Dockerfile to speed up build (docker hub …
  - Tim O'Donnell authored
- Aug 30, 2016
  - Tim O'Donnell authored: dockerize take 2
  - Tim O'Donnell authored
  - Tim O'Donnell authored: This is an attempt to have a single docker image that can be used as a base for cloud runs (with mhcflurry-cloud) and eventually as a way for users to experiment with mhcflurry. The plan is to have it built automatically at https://hub.docker.com/r/hammerlab/mhcflurry/
    * supports CPU and, theoretically, GPU (not tested)
    * supports python 2 and 3
    * putting the Dockerfile in the root of our repo lets us copy the current checkout of the mhcflurry repo into the image instead of pulling it from github, so it works with branches and non-released versions
    * by default it runs a python 3 jupyter notebook on port 8888 with mhcflurry and some convenience packages installed
    * does not try to train any models or run tests; models will eventually be downloaded from google cloud storage once we have that working
    * it's pretty hefty unfortunately, around 2 GB
    Also: removed the pin on keras<1.0 in requirements.txt
- Aug 03, 2016
  - Jeff Hammerbacher authored: Add notebook that builds simple model running on TF backend
  - Jeff Hammerbacher authored
  - Jeff Hammerbacher authored
  - Alex Rubinsteyn authored
- Aug 01, 2016
  - Jeff Hammerbacher authored
  - Jeff Hammerbacher authored
  - Jeff Hammerbacher authored
  - Jeff Hammerbacher authored
- Jul 12, 2016
  - Alex Rubinsteyn authored: Didn't get to multiple outputs but did some experiments on shared embedding
  - Alex Rubinsteyn authored
- Jun 24, 2016
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
- Jun 23, 2016
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
- Jun 03, 2016
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored: Fixed commandline script for making predictions
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored: Stratified cross-validation
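Stratified cross-validation keeps the label distribution similar across folds, which matters when binders and non-binders are imbalanced. A minimal pure-Python sketch of the idea (the function name and round-robin scheme are illustrative, not mhcflurry's actual implementation):

```python
from collections import defaultdict

def stratified_kfold(labels, n_folds):
    """Assign each sample index to a fold, round-robin within each label
    group, so every fold receives a similar label distribution."""
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    folds = [[] for _ in range(n_folds)]
    for indices in by_label.values():
        for i, idx in enumerate(indices):
            folds[i % n_folds].append(idx)
    return folds

# Example: binders (1) and non-binders (0) spread over 3 folds.
labels = [1, 1, 1, 0, 0, 0, 1, 0, 1]
folds = stratified_kfold(labels, 3)
# Every sample lands in exactly one fold.
assert sorted(i for fold in folds for i in fold) == list(range(len(labels)))
```

In practice a library routine such as scikit-learn's StratifiedKFold does the same job with shuffling and edge-case handling.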
- Jun 02, 2016
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored: Added batch normalization between layers, changed default weight initialization
- Jun 01, 2016
  - Alex Rubinsteyn authored: Added batch normalization between layers and changed default initialization to divide by the sum of fan_in + fan_out instead of just fan_in
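Scaling initial weights by fan_in + fan_out rather than fan_in alone is the Glorot/Xavier scheme, which balances signal variance in both the forward and backward pass. A sketch comparing the two scales (the exact constants and distribution mhcflurry/Keras used may differ; these are illustrative):

```python
import math
import random

def fan_in_scale(fan_in):
    # Older default: standard deviation scaled by 1 / fan_in only.
    return math.sqrt(1.0 / fan_in)

def glorot_scale(fan_in, fan_out):
    # Glorot/Xavier-style: scale by the sum fan_in + fan_out, as the
    # commit describes (constant 2 assumed for a normal distribution).
    return math.sqrt(2.0 / (fan_in + fan_out))

def init_weights(fan_in, fan_out, seed=0):
    # Draw a fan_in x fan_out weight matrix with the Glorot scale.
    rng = random.Random(seed)
    scale = glorot_scale(fan_in, fan_out)
    return [[rng.gauss(0.0, scale) for _ in range(fan_out)]
            for _ in range(fan_in)]

# For a square layer the two scales coincide; for a widening layer the
# Glorot scale is smaller, damping the larger output dimension.
assert abs(glorot_scale(100, 100) - fan_in_scale(100)) < 1e-12
assert glorot_scale(32, 512) < fan_in_scale(32)
```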
- May 21, 2016
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored
  - Alex Rubinsteyn authored