This PR changes the codepath so all models trained on AzureML are registered. The codepath previously allowed only segmentation models (subclasses of `SegmentationModelBase`) to be registered. Models are registered after a training run or if the `only_register_model` flag is set. Models may be legacy InnerEye config-based models or may be defined using the LightningContainer class.
The PR also removes the AzureRunner conda environment. The full InnerEye conda environment is needed to submit a training job to AzureML.
It splits the `TrainHelloWorldAndHelloContainer` job in the PR build into two jobs, `TrainHelloWorld` and `TrainHelloContainer`. It adds a pytest marker `after_training_hello_container` for tests that can be run after training is finished in the `TrainHelloContainer` job.
This fixes the model registration issues described in #377 and #398.
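As a rough illustration of the new behavior, the registration rule can be sketched as below. `RunConfig`, its field names, and `should_register_model` are hypothetical stand-ins for illustration only, not the actual InnerEye API:

```python
# Sketch: decide whether to register a model after an AzureML run.
# Previously only SegmentationModelBase subclasses were registered;
# now all model types (InnerEye config-based or LightningContainer) are.
from dataclasses import dataclass


@dataclass
class RunConfig:
    is_training_run: bool        # a training run just finished
    only_register_model: bool    # register an existing model without training


def should_register_model(config: RunConfig) -> bool:
    # Register after training, or when registration is explicitly requested.
    return config.is_training_run or config.only_register_model


print(should_register_model(RunConfig(True, False)))   # True
print(should_register_model(RunConfig(False, True)))   # True
print(should_register_model(RunConfig(False, False)))  # False
```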
* Add auto-restart
* Change handling of checkpoints and clean-up
* Save last k recovery checkpoints
* Log epoch for keeping last ckpt
* Keeping k last checkpoints
* Add possibility to recover from particular checkpoint
* Update tests
* Check k recovery
* Re-add skipif
* Correct pick up of recovery runs and add test
* Remove all start epochs
* Simplify run recovery logic
* Fix it
* Merge conflicts import errors
* Fix it
* Fix tests in test_scalar_model.py
* Fix tests in test_model_util.py
* Fix tests in test_scalar_model.py
* Fix tests in test_model_training.py
* Avoid forcing the user to log epoch
* Fix test_get_checkpoints
* Fix test_checkpoint_handling.py
* Fix callback
* Update CHANGELOG.md
* Self PR review comments
* Fix more tests
* Fix argument in test
* Mypy
* Update InnerEye-DeepLearning.iml
* Fix mypy errors
* Address PR comment
* Typo
* mypy fix
* just style
- Make file structure consistent across normal training and training when InnerEye is a submodule
- Add test coverage for the file structure of registered models
- Add documentation on what the model structure looks like
- If multiple Conda files are used in an InnerEye run, they are merged into a single environment file for deployment. The complicated merge inside `run_scoring` could in principle be deprecated, but it is left in place in case it is needed for legacy models.
- Add test coverage for `submit_for_inference`: the previous test used a hardcoded legacy model, meaning that any change to the model structure could have broken the script.
- The test for `submit_for_inference` is no longer submitted from the big AzureML run, shortening the runtime of that part of the PR build. Instead, it is triggered after the `TrainViaSubmodule` part of the build. The corresponding AzureML experiment is no longer `model_inference`, but the same experiment as all other AzureML runs.
- The test for `submit_for_inference` was previously running on the expensive `training-nd24` cluster, now on the cheaper `nc12`.
- `submit_for_inference` now correctly uses the `score.py` file that is inside of the model, rather than copying it from the repository root.
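The merge of multiple Conda environment definitions can be sketched as below. The real code operates on YAML files; plain dicts are used here to keep the example self-contained, and the naive duplicate handling (no version-conflict resolution) is an illustration, not the actual InnerEye merge logic:

```python
# Sketch: merge several Conda environment definitions into one file
# for deployment, deduplicating channels and dependencies.
# Note: this naive merge does not resolve version conflicts between files.
from typing import Any, Dict, List


def merge_conda_envs(envs: List[Dict[str, Any]], name: str = "merged") -> Dict[str, Any]:
    channels: List[str] = []
    dependencies: List[Any] = []
    for env in envs:
        for channel in env.get("channels", []):
            if channel not in channels:
                channels.append(channel)
        for dep in env.get("dependencies", []):
            if dep not in dependencies:
                dependencies.append(dep)
    return {"name": name, "channels": channels, "dependencies": dependencies}


env1 = {"channels": ["defaults"], "dependencies": ["python=3.7", "pip"]}
env2 = {"channels": ["defaults", "pytorch"], "dependencies": ["pip", "pytorch=1.8"]}
merged = merge_conda_envs([env1, env2])
print(merged["dependencies"])  # ['python=3.7', 'pip', 'pytorch=1.8']
```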
- Rename the `TestOutputDirectories` class because it is picked up by pytest as something it expects to contain tests
- Switch fields to using `Path`, rather than `str`
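The switch from `str` to `Path` fields can be sketched as follows; the class and method names here are illustrative, not necessarily the renamed InnerEye class:

```python
# Sketch: a test-output folder holder whose fields are pathlib.Path,
# replacing error-prone string concatenation with Path arithmetic.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class OutputFolderForTests:
    root_dir: Path

    def create_file_or_folder_path(self, name: str) -> Path:
        # The "/" operator joins path components portably.
        return self.root_dir / name


folders = OutputFolderForTests(root_dir=Path("/tmp/outputs"))
print(folders.create_file_or_folder_path("model.ckpt"))
```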
- Compute aggregate metrics over the whole training run
- Get allocated and reserved memory
- Store aggregate metrics in AzureML
Note that diagnostic metrics are no longer stored in AzureML; TensorBoard is better suited to vast amounts of metrics.
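The aggregation step can be sketched as below. The function and metric names are illustrative; in real code, allocated and reserved GPU memory can be read via `torch.cuda.memory_allocated()` and `torch.cuda.memory_reserved()`:

```python
# Sketch: compute aggregate metrics over a whole training run from
# per-epoch values, so that only the aggregates are logged to AzureML.
from typing import Dict, List


def aggregate_training_metrics(per_epoch: List[Dict[str, float]]) -> Dict[str, float]:
    aggregates: Dict[str, float] = {}
    for key in per_epoch[0].keys():
        values = [epoch[key] for epoch in per_epoch]
        aggregates[f"{key}_mean"] = sum(values) / len(values)
        aggregates[f"{key}_max"] = max(values)
    return aggregates


history = [{"loss": 1.0, "gpu_mem_gb": 4.0},
           {"loss": 0.5, "gpu_mem_gb": 6.0}]
print(aggregate_training_metrics(history))
# {'loss_mean': 0.75, 'loss_max': 1.0, 'gpu_mem_gb_mean': 5.0, 'gpu_mem_gb_max': 6.0}
```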
- Separates the logic that determines which checkpoint (or checkpoint path) to recover from
- Separates model creation and model checkpoint loading from optimizer creation and optimizer checkpoint loading, keeping all of this in the `ModelAndInfo` class
- Optimizers are now created after the model has been moved to the GPU. Fixes #198
- A test was added to `train_via_submodule.yml` that continues training from a previous run using run recovery.
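Selecting the checkpoint to resume from, while keeping only the last k recovery checkpoints, can be sketched as below. The file naming scheme (`recovery_epoch=<n>.ckpt`) and function name are assumptions, not the actual InnerEye code:

```python
# Sketch: among the retained recovery checkpoints in a folder, pick the
# most recent one to resume training from.
import re
import tempfile
from pathlib import Path
from typing import List, Optional, Tuple


def find_recovery_checkpoint(folder: Path, k: int = 2) -> Optional[Path]:
    # Assumed naming scheme: "recovery_epoch=<n>.ckpt".
    pattern = re.compile(r"recovery_epoch=(\d+)\.ckpt")
    candidates: List[Tuple[int, Path]] = []
    for file in folder.glob("*.ckpt"):
        match = pattern.fullmatch(file.name)
        if match:
            candidates.append((int(match.group(1)), file))
    candidates.sort()
    # Keep only the last k checkpoints; resume from the most recent one.
    last_k = candidates[-k:]
    return last_k[-1][1] if last_k else None


tmp = Path(tempfile.mkdtemp())
for epoch in (3, 7, 11):
    (tmp / f"recovery_epoch={epoch}.ckpt").touch()
print(find_recovery_checkpoint(tmp).name)  # recovery_epoch=11.ckpt
```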
Add the capability to leave values out of the checked-in `settings.yml` file, and fill in the missing values via an `InnerEyePrivateSettings.yml` file in the repository root.
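The overlay logic can be sketched as below. The real code reads `settings.yml` and `InnerEyePrivateSettings.yml`; plain dicts are used here to keep the example runnable without PyYAML, and the precedence rule shown is an assumption:

```python
# Sketch: fill settings that are missing from the checked-in file with
# values from a private, uncommitted settings file.
from typing import Any, Dict, Optional


def merge_settings(checked_in: Dict[str, Any],
                   private: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    # Values present in the checked-in file win; the private file only
    # fills the gaps (keys that are absent or set to None).
    merged = dict(private or {})
    merged.update({k: v for k, v in checked_in.items() if v is not None})
    return merged


settings = {"workspace_name": "my-workspace", "subscription_id": None}
private = {"subscription_id": "private-sub-id", "tenant_id": "private-tenant-id"}
print(merge_settings(settings, private))
```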
* PR builds throw repeated "mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library." warnings. This is a known issue with MKL, see pytorch/pytorch#37377.
* Avoid logging noise from Urllib
* Typos and documentation fixes
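Suppressing the logging noise can be sketched as below; `urllib3` is the logger name commonly used by the urllib family via `requests`, but the exact set of loggers silenced here is an assumption:

```python
# Sketch: raise the log level of noisy HTTP client loggers so that only
# warnings and errors reach the run logs.
import logging

for noisy_logger in ("urllib3", "requests"):
    logging.getLogger(noisy_logger).setLevel(logging.WARNING)

print(logging.getLogger("urllib3").level == logging.WARNING)  # True
```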
Most git-related information was previously expected in commandline arguments, populated by the pipelines. Change this to read the information via gitpython, so that the branch info is also correct for runs submitted from a user's local machine. This fixes #151
Improve documentation around queuing training runs and run recovery. Write run recovery ID to a file for later use in pipelines.
This PR reworks `mypy_runner.py`, both to ensure all files are checked and to speed up the process (from about 3 minutes to about 12 seconds in the PR build). Rather than processing one file at a time, mypy is called repeatedly with `--verbose` set, and the logs are (silently) checked to see which files have been visited. Visited files are excluded from the set still to be checked, and mypy is invoked again on the remaining ones until none are left (or until no further files are visited, though this should not and does not seem to happen).
Care is taken to ensure that this script can also be called when this repo is present as a submodule (assumed to be called innereye-deeplearning as usual). When this is the case, we do not check the files inside the submodule, as we assume they have already been checked as part of the build process here.
It is also now possible to provide the script with a specific list of files to check, by supplying them on the command line.
Running this new version turned up a couple of previously undetected type issues, which are also fixed here.
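The log-scanning step can be sketched as below. The exact shape of mypy's `--verbose` output is an assumption here (lines resembling `LOG:  Parsing /repo/a.py (a)`); the real script's parsing may differ:

```python
# Sketch: extract the set of files mypy visited from its --verbose log,
# so they can be excluded from the next mypy invocation.
import re
from typing import Set


def files_visited(mypy_verbose_output: str) -> Set[str]:
    visited: Set[str] = set()
    for line in mypy_verbose_output.splitlines():
        # Assumed verbose line format: "LOG:  Parsing /repo/a.py (a)"
        match = re.search(r"Parsing (\S+\.py)", line)
        if match:
            visited.add(match.group(1))
    return visited


log = ("LOG:  Parsing /repo/a.py (a)\n"
       "LOG:  Parsing /repo/b.py (b)\n"
       "LOG:  Build finished")
print(sorted(files_visited(log)))  # ['/repo/a.py', '/repo/b.py']
```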