- Mar 24, 2017
  - Tim O'Donnell authored
  - Tim O'Donnell authored
- Sep 26, 2016
  - Tim O'Donnell authored
- Sep 16, 2016
  - Tim O'Donnell authored
- Sep 15, 2016
  - Tim O'Donnell authored
Lazily putting this all in one commit:

* infrastructure for downloading datasets and published trained models (the `mhcflurry-downloads` command)
* docs and scripts (in `downloads-generation`) to generate the published datasets and trained models
* parallelized cross-validation and model-training implementation, including support for imputation (based on the old mhcflurry-cloud repo, which is now gone)
* a single front-end script for class1 allele-specific cross validation and model training / testing (`mhcflurry-class1-allele-specific-cv-and-train`)
* refactor how we deal with hyper-parameters and how we instantiate Class1BindingPredictors
* make Class1BindingPredictor pickleable and remove the old serialization code
* move code particular to class1 allele-specific predictors into its own submodule
* remove unused code, including arg parsing, plotting, and ensembles
* bump the binding prediction threshold for the Titin1 epitope from 500 to 700, as this test was sporadically failing (see test_known_class1_epitopes.py)
* attempt to make tests involving randomness more reproducible by setting the numpy random seed
* update README
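The reproducibility point above (seeding numpy's random number generator so tests behave the same on every run) can be sketched roughly as follows; this is a generic illustration, not the actual mhcflurry test code, and the `make_reproducible` helper is hypothetical:

```python
import numpy as np

def make_reproducible(seed=42):
    # Hypothetical helper: seed numpy's global RNG so any test
    # that draws random numbers gives identical results per run.
    np.random.seed(seed)

# Two runs with the same seed produce identical draws.
make_reproducible(0)
first = np.random.rand(3)
make_reproducible(0)
second = np.random.rand(3)
assert (first == second).all()
```

Seeding the global RNG at the start of each test keeps results stable across runs, at the cost of hiding behavior that only shows up under other random states.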
- Sep 01, 2016
  - Tim O'Donnell authored
- Aug 30, 2016
  - Tim O'Donnell authored
  - Tim O'Donnell authored
This is an attempt to have a single Docker image that can be used as a base for cloud runs (with mhcflurry-cloud) and also eventually as a way for users to experiment with mhcflurry. The plan is to have this built automatically at https://hub.docker.com/r/hammerlab/mhcflurry/

* supports CPU and, in theory, GPU (not tested)
* supports Python 2 and 3
* putting the Dockerfile in the root of our repo lets us copy the current checkout of the mhcflurry repo into the image instead of pulling it from GitHub, so it works with branches and non-released versions
* by default it runs a Python 3 Jupyter notebook on port 8888 with mhcflurry and some convenience packages installed
* does not try to train any models or run tests; models will eventually be downloaded from Google Cloud Storage once we have that working
* unfortunately the image is fairly large, around 2 GB

Also: removed the pin on keras<1.0 in requirements.txt.