Commit 9d5d36b0 authored by Tim O'Donnell

Comment tweaks

parent c3cf271d
......@@ -2,15 +2,13 @@
This download contains trained MHC Class I allele-specific MHCflurry models. The training data used is in the [data_combined_iedb_kim2014](../data_combined_iedb_kim2014) MHCflurry download. We first select network hyperparameters for each allele individually using cross validation over the models enumerated in [models.py](models.py). The best hyperparameter settings are selected by averaging AUC (at the 500 nM cutoff), F1, and Kendall's Tau over the training folds. We then train the production models over the full training set using the selected hyperparameters.
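As a rough sketch of this per-allele selection step (not the project's actual selection code; the metric dictionary layout and function names below are invented for illustration):

```python
import numpy as np

def combined_score(fold_metrics):
    """Average AUC (at the 500 nM cutoff), F1, and Kendall's Tau across
    the cross-validation folds for one hyperparameter setting.

    fold_metrics: list of dicts, one per fold, e.g.
        {"auc": 0.91, "f1": 0.80, "tau": 0.55}
    """
    per_fold = [np.mean([m["auc"], m["f1"], m["tau"]]) for m in fold_metrics]
    return float(np.mean(per_fold))

def select_best_setting(results_by_setting):
    """results_by_setting maps a hashable hyperparameter setting to its
    list of per-fold metric dicts; returns the highest-scoring setting."""
    return max(results_by_setting,
               key=lambda setting: combined_score(results_by_setting[setting]))
```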
The training script supports multi-node parallel execution using the [dask-distributed](https://distributed.readthedocs.io/en/latest/) library. To enable this, pass the IP and port of the dask scheduler to the training script with the '--dask-scheduler' option. The GENERATE.sh script passes all arguments to the training script so you can just give it as an argument to GENERATE.sh.
The training script supports multi-node parallel execution using the [kubeface](https://github.com/hammerlab/kubeface) library.
We run dask distributed on Google Container Engine using Kubernetes as described [here](https://github.com/hammerlab/dask-distributed-on-kubernetes).
To use kubeface, you should create a Google Storage bucket and pass it below with the --storage-prefix argument.
To generate this download we run:
```
# If you are running dask distributed using our kubernetes config, you can use the DASK_IP one liner below.
# Otherwise, just set it to the IP of the dask scheduler.
./GENERATE.sh \
--cv-folds-per-task 10 \
--backend kubernetes \
......
......@@ -187,6 +187,9 @@ def cross_validation_folds(
lambda kwargs: impute_and_select_allele(**kwargs),
imputation_args)
# Here _replace is a method on namedtuples that returns a new namedtuple
# with the specified field set to the given value and all other fields
# unchanged from the original. (A short standalone example follows this
# hunk.)
return [
result_fold._replace(imputed_train=imputation_result)
for (result_fold, imputation_result) in zip(
......
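For readers unfamiliar with `_replace`, here is a minimal standalone illustration; the `Fold` namedtuple and its field names are invented for the example and are not the training code's actual types:

```python
from collections import namedtuple

# Hypothetical namedtuple, for illustration only.
Fold = namedtuple("Fold", ["train", "test", "imputed_train"])

fold = Fold(train=[1, 2, 3], test=[4], imputed_train=None)

# _replace returns a new namedtuple; the original is left untouched.
updated = fold._replace(imputed_train=[5, 6])

print(fold.imputed_train)     # None
print(updated.imputed_train)  # [5, 6]
```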
......@@ -69,6 +69,9 @@ def test_small_run():
"--num-local-threads", "1",
]
if KUBEFACE_INSTALLED:
# If kubeface is installed, this command will use it by default. In that
# case, we want the kubeface storage written to a local path rather than
# assuming the existence of a Google Storage bucket. (A generic sketch
# of this distinction follows this hunk.)
args.extend(["--storage-prefix", "/tmp/"])
print("Running cv_and_train_command with args: %s " % str(args))
......
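As a generic sketch of the distinction the test comment relies on (this is not kubeface's actual API; `describe_storage_prefix` is an invented helper): a --storage-prefix that is a plain filesystem path such as /tmp/ keeps everything local, while a gs:// prefix would require a Google Storage bucket.

```python
def describe_storage_prefix(prefix):
    """Classify a storage prefix like the one passed via --storage-prefix.

    Invented helper for illustration, not kubeface's implementation.
    """
    if prefix.startswith("gs://"):
        return "google-storage"
    return "local-filesystem"

assert describe_storage_prefix("/tmp/") == "local-filesystem"
assert describe_storage_prefix("gs://some-bucket/prefix") == "google-storage"
```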