Select Git revision
  • 1.7.0
  • 1.7.0b
  • 1.7.0c
  • 2017-11-13
  • 20180114
  • 201911
  • 20200121_pre_presentation_refactor
  • 202010-phospho
  • 3-flank-variant
  • add-class1-ensemble
  • add-class1-ensemble-rebased
  • add-hyperparameters-base
  • antigen-presentation-updates
  • arch2
  • cartesian-product-cli
  • dataset-object
  • docs-pan-motifs
  • fix-commandline-predict-peptide-lengths
  • gh-pages
  • improved-allele-parsing
  • pre-2.1
  • v2.0.1
  • v2.0.0
  • pre-2.0
  • 1.6.1
  • pre-1.7.0
  • pre_presentation_refactor
  • 1.6.0
  • 1.4.0
  • pre-1.4.0
  • pan-dev1
  • 1.2.2
  • 1.2.0
  • pre-1.2.1
  • pre-1.2
  • pre-1.1
  • 1.0.0
  • pre-1.0
  • 0.9.2
  • 0.9.1
Commit history (newest first)
  • [publish] Update README.md
  • [master] updated and removed unnecessary code leftover from mhcflurry for migration to GitHub
  • Update class1_processing_neural_network.py
  • [arch2] finished training with new architecture variation
  • Training new architecture
  • Stopped training because the number of filters used for peptide terminals was taking too long for the 512-filter hyperparameter setting. Revised code to combine the union of terminal-amino-acid-containing kernels with the stacked two 1D convolutions; the filter number now matches the number of post-conv dense layers, like the first step in the stack
  • Merge branch 'skillman-lawrence.1-master-patch-28781' into 'master'
  • Update mhc_rank/class1_processing_neural_network.py
  • Adjusted architecture to not need padding on ends prior to first conv layer
  • [recover_lost_data] Adjusted architecture to avoid padding in initial Conv1D
  • fixed benchmarking notebook
  • Merge branch 'recover_lost_data' into 'master'
  • finished training of models with learned embeddings
  • Training results from day 7
  • results from day 6 of training
  • Added Jupyter NB for statistical analyses & results from day 5 of training
  • Results from 4th day of training
  • adding training results from day 3
  • adding results from most recent training interval
  • updated get_preds.py post testing
  • Debugged get_preds.py
  • finished script to run predictions as SLURM jobs
  • models from first round of training + some files to enable predictions to be run as SLURM jobs
  • finished debugging for learned embeddings and began training
  • corrected typo and kicked off all training jobs in ap_models3
  • Debugged new changes and kicked off 1st job of new trainings as a test
  • Adjusted sequence preprocessing to work with the ability to learn embeddings
  • Removed scripts relating to PSSM
  • [manuscript] adjusted some thought processes and removed conclusions / experiments that either aren't valid or we aren't doing anymore
  • Updated the name of the model / approach
  • added functionality to model creation that allows for learned embeddings (or a combination of learned and hard-coded). Also adjusted amino_acid.py to make X's index 0 so that it can be used as a mask (see the sketch after this list)
  • created hyperparameter files and set up training for models with learned embeddings
  • added hyperparam options for learned embeddings
  • created benchmarking dataset template
  • added manuscript LaTeX files
  • deleted original mhcflurry and old script
  • Deleted mhcflurry/__init__.py, mhcflurry/allele_encoding.py, mhcflurry/amino_acid.py, mhcflurry/calibrate_percentile_ranks_command.py, mhcflurry/class1_affinity_predictor.py, mhcflurry/class1_neural_network.py, mhcflurry/class1_presentation_predictor.py, mhcflurry/class1_processing_neural_network.py, mhcflurry/class1_processing_predictor.py, mhcflurry/cluster_parallelism.py, mhcflurry/common.py, mhcflurry/custom_loss.py, mhcflurry/data_dependent_weights_initialization.py, mhcflurry/downloads.py, mhcflurry/downloads.yml, mhcflurry/downloads_command.py, mhcflurry/encodable_sequences.py, mhcflurry/ensemble_centrality.py, mhcflurry/fasta.py, mhcflurry/flanking_encoding.py, mhcflurry/hyperparameters.py, mhcflurry/local_parallelism.py, mhcflurry/percent_rank_transform.py, mhcflurry/predict_command.py, mhcflurry/predict_scan_command.py, mhcflurry/random_negative_peptides.py, mhcflurry/regression_target.py, mhcflurry/scoring.py, mhcflurry/select_allele_specific_models_command.py, mhcflurry/select_pan_allele_models_command.py, mhcflurry/select_processing_models_command.py, mhcflurry/testing_utils.py, mhcflurry/train_allele_specific_models_command.py, mhcflurry/train_pan_allele_models_command.py, mhcflurry/train_presentation_models_command.py, mhcflurry/train_processing_models_command.py, mhcflurry/version.py, and train_APmodels.sh
  • removed original mhcflurry2.0 files
  • adding training data from AP models with cross-entropy loss
  • altered training init to work with SLURM and corrected typo (BLOSUM --> BLOSUM62) in hyperparameters, specifically encoding options
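Several of the commits above describe the same architectural change: replacing hard-coded amino-acid encodings (e.g. BLOSUM62) with learned embeddings, reserving index 0 (the 'X' symbol in amino_acid.py) as a mask value, and removing the padding previously added before the first Conv1D layer. The snippet below is a minimal sketch of that pattern, assuming a TensorFlow/Keras stack; the function name, layer sizes, and vocabulary ordering are illustrative assumptions rather than the repository's actual code.

```python
# Minimal sketch (assumed, not the repository's actual code) of a peptide
# encoder with a learned amino-acid embedding, index 0 ('X') used as a mask,
# and no padding before the first Conv1D layer.
import tensorflow as tf
from tensorflow.keras import layers, models

# 'X' is placed at index 0 so that it doubles as the padding/mask symbol.
AMINO_ACIDS = "XACDEFGHIKLMNPQRSTVWY"


def build_peptide_encoder(max_len=15, embed_dim=16, filters=64, kernel_size=3):
    peptide = layers.Input(shape=(max_len,), dtype="int32", name="peptide")

    # Learned embedding instead of a hard-coded encoding such as BLOSUM62.
    embedded = layers.Embedding(
        input_dim=len(AMINO_ACIDS),
        output_dim=embed_dim,
        name="learned_aa_embedding",
    )(peptide)

    # Use index 0 ('X') as an explicit mask: zero out those positions.
    # Conv1D does not consume Keras' built-in masks, so the mask is applied
    # by multiplication rather than Embedding(mask_zero=True).
    mask = tf.cast(tf.not_equal(peptide, 0), embedded.dtype)[:, :, tf.newaxis]
    masked = embedded * mask

    # padding="valid": no padding is added before the first convolution.
    x = layers.Conv1D(filters, kernel_size, padding="valid", activation="relu")(masked)
    x = layers.Conv1D(filters, kernel_size, padding="valid", activation="relu")(x)

    x = layers.GlobalMaxPooling1D()(x)
    score = layers.Dense(1, activation="sigmoid", name="processing_score")(x)
    return models.Model(inputs=peptide, outputs=score)


if __name__ == "__main__":
    build_peptide_encoder().summary()
```

Applying the mask by elementwise multiplication keeps padded positions at zero before the convolutions, which is one way to honor the reserved index without relying on Keras mask propagation; the commits also mention combining learned and hard-coded encodings, which would amount to concatenating a fixed encoding (such as BLOSUM62 rows) with the learned embedding and is omitted here.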