Closes #740. Updates the lung model regression test to use the latest parameters and to train for a substantial number of steps, to ensure that training is progressing as expected. A small number of epochs and a smaller data subset are used because running a full training run is not feasible; the new test still runs on real data, but in under 30 minutes.
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
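For illustration only, a self-contained sketch of this style of check, with a toy model, synthetic data and an arbitrary threshold standing in for the real lung model test:

```python
# Hypothetical sketch, not the actual InnerEye test: train briefly on a small data
# subset and assert that the loss is clearly decreasing, as a cheap regression check.
import torch
import torch.nn as nn


def test_training_progresses_on_subset() -> None:
    torch.manual_seed(0)
    # Synthetic stand-in for a small subset of the real dataset.
    true_weights = torch.randn(10, 1)
    inputs = torch.randn(64, 10)
    targets = inputs @ true_weights + 0.01 * torch.randn(64, 1)

    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    losses = []
    for _ in range(200):  # "substantial number of steps", but still fast
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())

    # Training should make clear progress on the subset.
    assert losses[-1] < 0.1 * losses[0]
```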
Bypassing branch protections for the failing PyTest check: the failing test is `Tests/SSL/test_ssl_containers.py::test_innereye_ssl_container_cifar10_resnet_simclr`, because the download URL for CIFAR currently appears to be broken (returning a 500 server error).
While we updated DeepMIL for the Panda dataset to work with the latest changes, we did not update DeepMIL for the TCGA CRCK dataset.
This PR updates how the encoded tiles are cached and how the checkpoints of the DeepMIL model are saved and loaded.
No additional tests are required, since these are the same functions that we use for the Panda dataset, and a test already exists for all of them.
Lastly, the PR updates the cudatoolkit version; Anton and I found that this was the root cause of all our problems with DDP.
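As a rough illustration of the tile-caching idea (the cache layout and the encoder interface are assumptions, not the actual DeepMIL code):

```python
# Sketch: cache encoded tiles on disk and reuse them on subsequent runs.
from pathlib import Path

import torch


def encode_tiles_with_cache(encoder: torch.nn.Module,
                            tiles: torch.Tensor,
                            cache_file: Path) -> torch.Tensor:
    """Return encoded tiles, loading them from cache_file if it already exists."""
    if cache_file.exists():
        return torch.load(cache_file)
    with torch.no_grad():
        encoded = encoder(tiles)
    cache_file.parent.mkdir(parents=True, exist_ok=True)
    torch.save(encoded, cache_file)
    return encoded
```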
AzureML jobs from failed previous PR builds do not get cancelled and keep consuming resources. The build now cancels all queued and running jobs before starting new ones.
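A minimal sketch of that kind of cleanup with the AzureML SDK, assuming the workspace config is available locally and using a hypothetical experiment name:

```python
# Sketch: cancel leftover queued/running AzureML runs before submitting new ones.
from azureml.core import Experiment, Workspace

workspace = Workspace.from_config()                    # reads config.json for the workspace
experiment = Experiment(workspace, name="pr-builds")   # hypothetical experiment name

for run in experiment.get_runs():
    if run.get_status() in ("Queued", "Preparing", "Starting", "Running"):
        print(f"Cancelling run {run.id} ({run.get_status()})")
        run.cancel()
```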
* initial commit
* updating the build
* flake8
* update main page
* add the links
* try to fix the env
* update build, gitignore and remove duplicate license
* update gitignore again
* Adding to changelog
* conda activate
* update again
* wrong instruction
* add data quality
* rephrase
* first pass on Readme.md
* switch from our to the, and clarify the cxr datasets
* move content to a separate markdown file
* move additional content to config readme file
* finish updating dataquality readme
* Rename
* PR comment
* todos
* changed default dir for cifar10 dataset
Co-authored-by: Ozan Oktay <ozan.oktay@microsoft.com>
Removing the Windows branch of our testing, as we have been experiencing intermittent random failures on Windows, which could be the result of a change in the image on the test machines.
Closes #541
- Enable regression tests on text and binary files that are either produced by the job or uploaded to the run context (see the sketch after this list)
- Add a large set of these regression test files to all models in PR builds
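For illustration, a simplified sketch of comparing produced files against checked-in expected files (the directory layout and file-type handling are assumptions, not the actual InnerEye code):

```python
# Sketch: compare files produced by a job against checked-in expected files.
import filecmp
from pathlib import Path
from typing import List


def compare_folders(expected_dir: Path, actual_dir: Path) -> List[str]:
    """Return a list of mismatches between expected and actual output files."""
    errors: List[str] = []
    for expected_file in expected_dir.rglob("*"):
        if not expected_file.is_file():
            continue
        actual_file = actual_dir / expected_file.relative_to(expected_dir)
        if not actual_file.exists():
            errors.append(f"Missing: {actual_file}")
        elif expected_file.suffix in (".txt", ".csv", ".json"):
            # Text files are compared as strings, everything else byte-by-byte.
            if expected_file.read_text() != actual_file.read_text():
                errors.append(f"Text file differs: {actual_file}")
        elif not filecmp.cmp(expected_file, actual_file, shallow=False):
            errors.append(f"Binary file differs: {actual_file}")
    return errors
```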
Add necessary tooling and examples for running fastMRI reconstruction models.
- Script to create and run an Azure Data Factory to download the raw data, and place them into a storage account
- Detailed examples to run the VarNet model from the fastMRI GitHub repo
- Ability to work with fixed mounting points for datasets
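For the fixed mounting points, a minimal sketch with the AzureML SDK v1 of what requesting a fixed mount location can look like (the dataset name, entry script, compute target, environment and mount path are all assumptions for illustration):

```python
# Sketch: mount a registered FileDataset at a fixed path on the compute target.
from azureml.core import Dataset, Environment, ScriptRunConfig, Workspace

workspace = Workspace.from_config()
dataset = Dataset.get_by_name(workspace, name="fastmri_raw")   # hypothetical dataset name
mounted = dataset.as_named_input("fastmri").as_mount(path_on_compute="/datasets/fastmri")

config = ScriptRunConfig(source_directory=".",
                         script="train.py",                    # hypothetical entry script
                         arguments=["--data_dir", mounted],    # resolves to the mount path at runtime
                         compute_target="gpu-cluster",         # hypothetical cluster name
                         environment=Environment.get(workspace, "AzureML-Minimal"))
```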
This PR changes the codepath so all models trained on AzureML are registered. The codepath previously allowed only segmentation models (subclasses of `SegmentationModelBase`) to be registered. Models are registered after a training run or if the `only_register_model` flag is set. Models may be legacy InnerEye config-based models or may be defined using the `LightningContainer` class.
The PR also removes the AzureRunner conda environment. The full InnerEye conda environment is needed to submit a training job to AzureML.
It splits the `TrainHelloWorldAndHelloContainer` job in the PR build into two jobs, `TrainHelloWorld` and `TrainHelloContainer`. It adds a pytest marker `after_training_hello_container` for tests that can be run after training is finished in the `TrainHelloContainer` job.
This will solve the issue of model registration in #377 and #398.
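As an illustration of the registration step on the AzureML run (the model name and output folder are assumptions, not the exact InnerEye call):

```python
# Sketch: register the trained model on the current AzureML run, independent of model type.
from azureml.core import Run

run = Run.get_context()
model = run.register_model(model_name="my_model",              # hypothetical name
                           model_path="outputs/final_model")   # hypothetical folder in run outputs
print(f"Registered {model.name}, version {model.version}")
```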
- The `use_gpu` flag for container models was not picked up correctly, so runs always executed without a GPU
- When running inference for container models with the `test_step` method, PL would fail when running on more than 1 GPU (see the sketch after this list)
- Adds an extra test that runs the HelloContainer model in AzureML
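For illustration, a rough sketch of restricting `test_step` inference to a single GPU with PyTorch Lightning 1.x (the model and data module are placeholders, and `gpus` is the 1.x Trainer argument):

```python
# Sketch: run inference via test_step on a single GPU, even when more GPUs are available.
import pytorch_lightning as pl


def run_inference(model: pl.LightningModule, data: pl.LightningDataModule) -> None:
    # Restrict testing to one GPU to avoid the multi-GPU test_step issue described above.
    trainer = pl.Trainer(gpus=1)
    trainer.test(model, datamodule=data)
```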
* Add auto-restart
* Change handling of checkpoints and clean-up
* Save last k recovery checkpoints
* Log epoch for keeping last ckpt
* Keeping k last checkpoints
* Add possibility to recover from particular checkpoint
* Update tests
* Check k recovery
* Re-add skipif
* Correct pick up of recovery runs and add test
* Correct pick up of recovery runs and add test
* Remove all start epochs
* Remove all start epochs
* Simplify run recovery logic
* Fix it
* Merge conflicts import errors
* Fix it
* Fix tests in test_scalar_model.py
* Fix tests in test_model_util.py
* Fix tests in test_scalar_model.py
* Fix tests in test_model_training.py
* Avoid forcing the user to log epoch
* Fix test_get_checkpoints
* Fix test_checkpoint_handling.py
* Fix callback
* Update CHANGELOG.md
* Self PR review comments
* Fix more tests
* Fix argument in test
* Mypy
* Update InnerEye-DeepLearning.iml
* Update InnerEye-DeepLearning.iml
* Fix mypy errors
* Address PR comment
* Typo
* mypy fix
* just style
* Fix the bug in PL
* Add back the test
* Missing import
* CHANGELOG.md
* Fix it
* Only plugin if more than one gpu
* Only plugin if more than one gpu
* Mypy
* Mypy again
* Fix cross validation results downloading for classification models
* Fix
* Fix it
* Back to main
* CHANGELOG.md
* try this out
* Add new build step
* Update
* roll back
* push again
* Update build-pr.yml
* Update GlaucomaPublic.py
* Update plot_cross_validation.py
* Changing model import
* Write model files was not working as expected
* Wrong indent
* Fix the aggregation code and add a test
* Update the environment.yml
* Fix it
* Attempt to fix test in build PR
* Update build-pr.yml
* Update GlaucomaPublic.py
* Delete model_paper_glaucoma.py
* Update model_util.py
* Update GlaucomaPublic.py
* Just format
* Add additional tests
* Add additional tests
* Add test for check_count for ensemble
* Style
* Rename to more meaningful
* Adding cross validation fold to test metrics dict except for ensemble
* Only download ensemble to CV if segmentation model
* Add explicit possible labels for tests
* Delete unnecessary files
* Only compute val metrics if this is not ensemble run
* Adapt test
* Improve for segmentation model too
* Update again
* Fix it
* Update PR build
* Update config to avoid clashing import
* Update config to avoid clashing import
* Update config to avoid clashing import
* Flake8
* Update test
* Try out new env
* Try out spawn instead
* Back to main
* Update CHANGELOG.md
* Try out fix mentioned in PL issue
* Roll back weird fix
* Test files to match true structure of cv
* Add new tests to check the CV folder
* Roll back wrong commit
* Flake8
* Flake8
* Update doc PR comment
* Add docstring
* Fallback runs needed to be updated
* Update build-pr
* Update linux test
* Commented out by mistake
* Mypy
* don't need to change mypy
* Update InnerEye/ML/deep_learning_config.py
Co-authored-by: Anton Schwaighofer <antonsc@microsoft.com>
* Update InnerEye/ML/model_training.py
Co-authored-by: Anton Schwaighofer <antonsc@microsoft.com>
* Update azure-pipelines/build-pr.yml
Co-authored-by: Anton Schwaighofer <antonsc@microsoft.com>
* Custom type for complex signature
* Simplify signature for aggregate and create metrics
* Update
* Need to skip train 2 nodes
* Add warning in CHANGELOG.md
* Mark
* Fix multi-node with one gpu
* Update CHANGELOG.md
* Move to 1.2.7
* reformat
* reformat
* linesep
* reformat
* Type declaration beginning PR comment
Co-authored-by: Anton Schwaighofer <antonsc@microsoft.com>
At present, external contributors don't have any insight into why the PR builds fail, because they run on ADO. This PR moves some of the basic checks to GitHub Actions, where they are fully visible: Flake8, mypy, and training the HelloWorld model.
- Coverage reporting complains that it does not like the HTML output folder.
- Exclude the Tests* folders from the report, so that the overall coverage figures make more sense
* test
* fix test
* download fix
* create separate model folder
* fixing tests
* making HD check better
* Tests
* inverted logic
* registering on parent run
* docu
- Make file structure consistent across normal training and training when InnerEye is a submodule
- Add test coverage for the file structure of registered models
- Add documentation about what the registered model structure looks like
- If multiple Conda files are used in an InnerEye run, they are merged into one environment file for deployment (a minimal merge sketch follows this list). The complicated merge inside of `run_scoring` could be deprecated in principle, but we are leaving it in place in case it is needed for legacy models.
- Add test coverage for `submit_for_inference`: Previous test was using a hardcoded legacy model, meaning that any changes to model structure could have broken the script
- The test for `submit_for_inference` is no longer submitted from the big AzureML run, shortening the runtime of that part of the PR build. Instead, it is triggered after the `TrainViaSubmodule` part of the build. The corresponding AzureML experiment is no longer `model_inference`, but the same experiment as all other AzureML runs.
- The test for `submit_for_inference` was previously running on the expensive `training-nd24` cluster, now on the cheaper `nc12`.
- `submit_for_inference` now correctly uses the `score.py` file that is inside of the model, rather than copying it from the repository root.
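As a rough illustration of the Conda merge idea (a simplification, not the actual InnerEye merge code):

```python
# Sketch: merge two Conda environment files into one environment file for deployment.
from pathlib import Path
from typing import Any, Dict, List

import yaml


def merge_conda_files(file_a: Path, file_b: Path, output_file: Path) -> None:
    env_a: Dict[str, Any] = yaml.safe_load(file_a.read_text())
    env_b: Dict[str, Any] = yaml.safe_load(file_b.read_text())
    dependencies: List[Any] = []
    seen = set()
    for dep in env_a.get("dependencies", []) + env_b.get("dependencies", []):
        # Pip sub-sections come through as dicts and are kept as-is;
        # duplicate string entries (e.g. "numpy=1.19") are dropped.
        if isinstance(dep, str):
            if dep in seen:
                continue
            seen.add(dep)
        dependencies.append(dep)
    merged = {
        "name": env_a.get("name", "merged"),
        "channels": list(dict.fromkeys(env_a.get("channels", []) + env_b.get("channels", []))),
        "dependencies": dependencies,
    }
    output_file.write_text(yaml.safe_dump(merged))
```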
- Rename the `TestOutputDirectories` class because it is picked up by pytest as something it expects to contain tests
- Switch fields to using `Path`, rather than `str`
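For illustration, a minimal sketch of such a Path-based output-folder class (the class and field names are illustrative, not the actual renamed class):

```python
# Sketch: test output folders held as Path fields instead of str.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class OutputFolderForTests:
    root_dir: Path

    def create_file_or_folder_path(self, name: str) -> Path:
        return self.root_dir / name
```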
- Compute aggregate metrics over the whole training run
- Get allocated and reserved memory (a logging sketch follows below)
- Store aggregate metrics in AzureML
Note: diagnostic metrics are no longer stored in AzureML; TensorBoard is better suited for vast amounts of metrics.
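A minimal sketch of reading the memory figures with PyTorch and logging them as AzureML run metrics (the metric names are assumptions for illustration):

```python
# Sketch: log peak GPU memory usage to the current AzureML run.
import torch
from azureml.core import Run

run = Run.get_context()
if torch.cuda.is_available():
    allocated_gb = torch.cuda.max_memory_allocated() / 2 ** 30
    reserved_gb = torch.cuda.max_memory_reserved() / 2 ** 30
    run.log("MaxGpuMemAllocatedGB", allocated_gb)
    run.log("MaxGpuMemReservedGB", reserved_gb)
```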
- Marks tests as `gpu`, `cpu_and_gpu` or `azureml`. Tests marked `gpu` and `azureml` are not run in the normal test set, only in the AzureML run triggered by the PR builds. Long tests like `test_submit_for_inference` are no longer run as part of the main set (see the sketch after this list).
- Cleans up pytest.ini
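For illustration, this is roughly how such markers are applied and filtered (the test bodies are placeholders; the marker names match the list above):

```python
# Sketch: marking tests so they only run in the appropriate environment.
import pytest


@pytest.mark.gpu
def test_needs_gpu() -> None:
    ...


@pytest.mark.azureml
def test_runs_only_in_azureml() -> None:
    ...

# The normal test set would then be invoked with something like:
#   pytest -m "not gpu and not azureml"
```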
- Separates the logic used to determine from what checkpoint/checkpoint path we will recover
- Separates model creation and model checkpoint loading from optimizer creation and checkpoint loading, and keeps all of this under the `ModelAndInfo` class.
- Optimizers are now created after the model is moved to the GPU. Fixes #198 (see the sketch after this list)
- Test added to train_via_submodule.yml which continues training from a previous run using run recovery.
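A minimal sketch of the ordering that this enforces (the checkpoint keys and optimizer choice are assumptions, not the exact `ModelAndInfo` format):

```python
# Sketch: move the model to the GPU first, then create the optimizer,
# then restore both from a recovery checkpoint.
from pathlib import Path

import torch


def restore_model_and_optimizer(model: torch.nn.Module,
                                checkpoint_path: Path) -> torch.optim.Optimizer:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)  # move to GPU *before* creating the optimizer
    optimizer = torch.optim.Adam(model.parameters())
    checkpoint = torch.load(checkpoint_path, map_location=device)
    model.load_state_dict(checkpoint["model_state_dict"])
    optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
    return optimizer
```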
Add the capability to not check in the complete `settings.yml` file, and to fill in the missing values via an `InnerEyePrivateSettings.yml` file in the repository root.
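A minimal sketch of that fallback logic (the file locations and the merge rule are simplifications, not the actual InnerEye behaviour):

```python
# Sketch: fill in settings that are missing from the checked-in settings.yml
# using a private settings file in the repository root.
from pathlib import Path
from typing import Any, Dict

import yaml


def load_settings(repo_root: Path) -> Dict[str, Any]:
    settings: Dict[str, Any] = yaml.safe_load((repo_root / "InnerEye" / "settings.yml").read_text()) or {}
    private_file = repo_root / "InnerEyePrivateSettings.yml"
    if private_file.exists():
        private = yaml.safe_load(private_file.read_text()) or {}
        # Only fill in values that are missing or empty in the checked-in file.
        for key, value in private.items():
            if settings.get(key) in (None, ""):
                settings[key] = value
    return settings
```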