InnerEye-DeepLearning

Overview

This is a deep learning toolbox to train models on medical images (or more generally, 3D images). It integrates seamlessly with cloud computing in Azure.

On the modelling side, this toolbox supports

  • Segmentation models
  • Classification and regression models
  • Adding cloud support to any PyTorch Lightning model, via a bring-your-own-model setup (a minimal sketch follows this list)
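
To give a rough idea of the bring-your-own-model route, the sketch below wraps a plain PyTorch Lightning model and data module in a container class that the InnerEye runner can pick up. It is a minimal sketch, not the authoritative API: the LightningContainer import path and the create_model / get_data_module hook names are assumptions made for illustration, so please treat the project documentation as the reference for the exact interface.

# Minimal sketch of a bring-your-own-model container (names are assumptions).
import torch
from pytorch_lightning import LightningDataModule, LightningModule
from torch.utils.data import DataLoader, TensorDataset

# Assumed import path for the container base class; check the documentation.
from InnerEye.ML.lightning_container import LightningContainer


class TinyRegressor(LightningModule):
    # A plain Lightning model: one linear layer trained with an MSE loss.
    def __init__(self) -> None:
        super().__init__()
        self.layer = torch.nn.Linear(16, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


class TinyDataModule(LightningDataModule):
    # Random tensors standing in for a real dataset.
    def train_dataloader(self) -> DataLoader:
        data = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))
        return DataLoader(data, batch_size=8)


class TinyRegressorContainer(LightningContainer):
    # The container tells the runner which model and data module to train.
    def create_model(self) -> LightningModule:
        return TinyRegressor()

    def get_data_module(self) -> LightningDataModule:
        return TinyDataModule()

With such a container in place, the runner would be pointed at the container class by name (for example --model=TinyRegressorContainer), and the same run can then be submitted to AzureML without further code changes.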

On the user side, this toolbox focuses on enabling machine learning teams to achieve more. It is cloud-first and relies on Azure Machine Learning Services (AzureML) for execution, bookkeeping, and visualization. Taken together, this gives:

  • Traceability: AzureML keeps a full record of all experiments that were executed, including a snapshot of the code. Tags are added to the experiments automatically; they can later help to filter and find old experiments.
  • Transparency: All team members have access to each other's experiments and results.
  • Reproducibility: Two model training runs using the same code and data will result in exactly the same metrics. All sources of randomness like multithreading are controlled for.
  • Cost reduction: Using AzureML, all compute (virtual machines, VMs) is requested at the time of starting the training job, and freed up at the end. Idle VMs will not incur costs. In addition, Azure low priority nodes can be used to further reduce costs (up to 80% cheaper).
  • Scale out: Large numbers of VMs can be requested easily to cope with a burst in jobs.

Despite the cloud focus, all training and model testing works just as well on local compute, which is important for model prototyping, debugging, and in cases where the cloud can't be used. In particular, if you already have GPU machines available, you will be able to utilize them with the InnerEye toolbox.

In addition, our toolbox supports:

  • Cross-validation using AzureML's built-in support, where the models for individual folds are trained in parallel. This is particularly important for the long-running training jobs often seen with medical images.
  • Hyperparameter tuning using Hyperdrive.
  • Building ensemble models.
  • Easy creation of new models via a configuration-based approach and inheritance from an existing architecture (see the sketch after this list).
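
To illustrate the configuration-based approach mentioned in the last bullet, the sketch below derives a new model from an existing configuration and only overrides a couple of settings. It is a minimal sketch under stated assumptions: the import path of the HelloWorld configuration and the num_epochs / l_rate parameter names are taken for illustration, so the model-building documentation remains the reference for the real fields.

# Minimal sketch of a new model created by inheriting from an existing
# configuration; the import path and parameter names are assumptions.
from InnerEye.ML.configs.segmentation.HelloWorld import HelloWorld


class HelloWorldLonger(HelloWorld):
    # Same architecture and data as HelloWorld, but trained for more
    # epochs with a smaller learning rate.
    def __init__(self) -> None:
        super().__init__()
        self.num_epochs = 20
        self.l_rate = 1e-4

Placed in a folder that the runner searches for configurations, such a class can then be selected by name, for example with --model=HelloWorldLonger; the exact selection mechanism and the full set of configuration options are described in the model-building documentation.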

Once training in AzureML is done, the models can be deployed from within AzureML.

Getting started

We recommend using our toolbox with Linux or with the Windows Subsystem for Linux (WSL2). Much of the core functionality works fine on Windows, but PyTorch's full feature set is only available on Linux. Read more about WSL here.

Clone the repository into a subfolder of the current directory:

git clone --recursive https://github.com/microsoft/InnerEye-DeepLearning
cd InnerEye-DeepLearning
git lfs install
git lfs pull

After that, you need to set up your Python environment:

  • Install conda or miniconda for your operating system.
  • Create a Conda environment from the environment.yml file in the repository root, and activate it:
conda env create --file environment.yml
conda activate InnerEye
  • If the environment creation fails with odd error messages on a Windows machine, please continue here.

Now try to run the HelloWorld segmentation model - that's a very simple model that will train for 2 epochs on any machine, no GPU required. You need to set the PYTHONPATH environment variable to point to the repository root first. Assuming that your current directory is the repository root folder, on Linux bash that is:

export PYTHONPATH=`pwd`
python InnerEye/ML/runner.py --model=HelloWorld

(Note the "backtick" around the pwd command, this is not a standard single quote!)

On Windows:

set PYTHONPATH=%cd%
python InnerEye/ML/runner.py --model=HelloWorld

If that works: Congratulations! You have successfully built your first model using the InnerEye toolbox.

If it fails, please check the troubleshooting page on the Wiki.

Further detailed instructions, including setup in Azure, are here:

  1. Setting up your environment
  2. Training a Hello World segmentation model
  3. Setting up Azure Machine Learning
  4. Creating a dataset
  5. Building models in Azure ML
  6. Sample Segmentation and Classification tasks
  7. Debugging and monitoring models
  8. Model diagnostics
  9. Move a model to a different workspace
  10. Working with FastMRI models
  11. Active label cleaning and noise robust learning toolbox

Deployment

We offer a companion set of open-source tools that help to integrate trained CT segmentation models with clinical software systems:

  • The InnerEye-Gateway is a Windows service running in a DICOM network that can route anonymized DICOM images to an inference service.
  • The InnerEye-Inference component offers a REST API that integrates with the InnerEye-Gateway to run inference on InnerEye-DeepLearning models.

Details can be found here.

(Deployment architecture diagram: docs/deployment.png)

More information

  1. Project InnerEye
  2. Releases
  3. Changelog
  4. Testing
  5. How to do pull requests
  6. Contributing

Licensing

MIT License

You are responsible for the performance, the necessary testing, and, if needed, any regulatory clearance for any of the models produced by this toolbox.

Acknowledging usage of Project InnerEye OSS tools

When using Project InnerEye open-source software (OSS) tools, please acknowledge with the following wording:

This project used Microsoft Research's Project InnerEye open-source software tools (https://aka.ms/InnerEyeOSS).

Contact

If you have any feature requests, or find issues in the code, please create an issue on GitHub.

Please send an email to InnerEyeInfo@microsoft.com if you would like further information about this project.

Publications

Oktay O., Nanavati J., Schwaighofer A., Carter D., Bristow M., Tanno R., Jena R., Barnett G., Noble D., Rimmer Y., Glocker B., O'Hara K., Bishop C., Alvarez-Valle J., Nori A.: Evaluation of Deep Learning to Augment Image-Guided Radiotherapy for Head and Neck and Prostate Cancers. JAMA Netw Open. 2020;3(11):e2027426. doi:10.1001/jamanetworkopen.2020.27426

Bannur S., Oktay O., Bernhardt M., Schwaighofer A., Jena R., Nushi B., Wadhwani S., Nori A., Natarajan K., Ashraf S., Alvarez-Valle J., Castro D. C.: Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs. ICML 2021 Workshop on Interpretable Machine Learning in Healthcare. https://arxiv.org/abs/2107.06618

Bernhardt M., Castro D. C., Tanno R., Schwaighofer A., Tezcan K. C., Monteiro M., Bannur S., Lungren M., Nori S., Glocker B., Alvarez-Valle J., Oktay O.: Active label cleaning for improved dataset quality under resource constraints. https://www.nature.com/articles/s41467-022-28818-3. Accompanying code: InnerEye-DataQuality

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

This toolbox is maintained by the Microsoft Medical Image Analysis team.