InnerEye Inference API

Enables inference and deployment of InnerEye-DeepLearning (https://github.com/microsoft/InnerEye-deeplearning) models as an async REST API on Azure.
Introduction

InnerEye-Inference is an Azure App Service web app in Python that runs inference on medical imaging models trained with the InnerEye-DeepLearning toolkit.

You can also integrate this with DICOM using the InnerEye-Gateway.

Getting Started

Installing Conda or Miniconda

Download a Conda or Miniconda installer for your platform and run it.

Creating a Conda environment

Note that in order to create the Conda environment you will need to have build tools installed on your machine. If you are running Windows, they should already be installed with the Conda distribution.

You can install build tools on Ubuntu (and Debian-based distributions) by running
sudo apt-get install build-essential
If you are running CentOS/RHEL distributions, you can install the build tools by running
sudo yum install gcc gcc-c++ kernel-devel make

Start the conda prompt for your platform. In that prompt, navigate to your repository root and run

  • conda env create --file environment.yml
  • conda activate inference

Configuration

Create a script named set_environment.sh to set your environment variables; it can be executed on Linux. The app reads this file if the environment variables are not already present.

#!/bin/bash
# Secrets; the CUSTOMCONNSTR_ prefix exposes them as custom connection strings in App Service
export CUSTOMCONNSTR_AZUREML_SERVICE_PRINCIPAL_SECRET=  # secret of the service principal used to access the AzureML workspace
export CUSTOMCONNSTR_API_AUTH_SECRET=  # shared secret expected in the API_AUTH_SECRET request header
# AzureML workspace settings
export CLUSTER=  # name of the compute cluster that runs inference
export WORKSPACE_NAME=  # AzureML workspace name
export EXPERIMENT_NAME=  # experiment under which inference runs are submitted
export RESOURCE_GROUP=  # resource group containing the workspace
export SUBSCRIPTION_ID=  # Azure subscription id
export APPLICATION_ID=  # application (client) id of the service principal
export TENANT_ID=  # Azure Active Directory tenant id
# Image data settings (see the Images section below)
export DATASTORE_NAME=  # datastore to which image data is copied
export IMAGE_DATA_FOLDER=  # folder in the datastore that holds the image data

Run with source set_environment.sh

Running flask app locally

  • flask run to test it locally; a minimal sequence is sketched below
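
A minimal sketch of a local run, assuming the Conda environment and the set_environment.sh script from the previous sections:

conda activate inference
source set_environment.sh
flask run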

Testing flask app locally

The app can be tested locally using curl.

Ping

To check that the server is running, issue this command from a local shell:

curl -i -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" http://localhost:5000/v1/ping

This should produce an output similar to:

HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 11:50:20 GMT

Start

To test DICOM image segmentation of a file, first create Tests/TestData/HN.zip containing a zipped copy of the test DICOM files in Tests/TestData/HN.
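
One way to build the archive, assuming the standard zip utility is installed (a sketch; here the files are placed at the root of the archive):

cd Tests/TestData/HN
zip ../HN.zip *
cd ../../..

Then, assuming there is a model PassThroughModel:4 registered in the AzureML workspace, issue this command: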

curl -i \
    -X POST \
    -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" \
    --data-binary @Tests/TestData/HN.zip \
    http://localhost:5000/v1/model/start/PassThroughModel:4

This should produce an output similar to:

HTTP/1.0 201 CREATED
Content-Type: text/plain
Content-Length: 33
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 13:00:13 GMT

api_inference_1629291609_fb5dfdf9

Here api_inference_1629291609_fb5dfdf9 is the run id for the newly submitted inference job.

Results

To monitor the progress of the previously submitted inference job, issue this command:

curl -i \
    -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" \
    --head \
    http://localhost:5000/v1/model/results/api_inference_1629291609_fb5dfdf9 \
    --next \
    -H "API_AUTH_SECRET: <val of CUSTOMCONNSTR_API_AUTH_SECRET>" \
    --output "HN_rt.zip" \
    http://localhost:5000/v1/model/results/api_inference_1629291609_fb5dfdf9

If the run is still in progress then this should produce output similar to:

HTTP/1.0 202 ACCEPTED
Content-Type: text/html; charset=utf-8
Content-Length: 0
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 13:45:20 GMT

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0

If the run is complete then this should produce an output similar to:

HTTP/1.0 200 OK
Content-Type: application/zip
Content-Length: 131202
Server: Werkzeug/1.0.1 Python/3.7.3
Date: Wed, 18 Aug 2021 14:01:27 GMT

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  128k  100  128k    0     0   150k      0 --:--:-- --:--:-- --:--:--  150k

and download the inference result as a zipped DICOM-RT file to HN_rt.zip.
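
To poll until the job finishes, a simple loop over the same endpoint also works. This is a sketch, assuming the run id from above and an arbitrary 60-second interval; a failed run may return a status other than 200 or 202, so you may want to handle that case as well:

RUN_ID=api_inference_1629291609_fb5dfdf9
SECRET="<val of CUSTOMCONNSTR_API_AUTH_SECRET>"
until [ "$(curl -s -o /dev/null -w '%{http_code}' \
        -H "API_AUTH_SECRET: $SECRET" \
        http://localhost:5000/v1/model/results/$RUN_ID)" = "200" ]; do
    sleep 60
done
curl -s -H "API_AUTH_SECRET: $SECRET" \
    --output "HN_rt.zip" \
    http://localhost:5000/v1/model/results/$RUN_ID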

Running flask app in Azure

  • Install Azure CLI: curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
  • Login: az login --use-device-code
  • Deploy: az webapp up --sku S1 --name test-python12345 --subscription <your_subscription_name> -g InnerEyeInference --location <your region>
  • In the Azure portal go to Monitoring > Log Stream for debugging logs

Deployment build

If you would like to reproduce the automatic deployment of the service for testing purposes:

  • az ad sp create-for-rbac --name "<name>" --role contributor --scope /subscriptions/<subs>/resourceGroups/InnerEyeInference --sdk-auth
  • The previous command will return a JSON object; its content is the value for the secrets.AZURE_CREDENTIALS variable used in .github/workflows/deploy.yml
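
For reference, the object returned by az ad sp create-for-rbac --sdk-auth contains, alongside several endpoint URLs, fields of roughly this shape (values redacted):

{
  "clientId": "<GUID>",
  "clientSecret": "<secret>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>"
}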

Images

During inference, the image data zip file is copied to the IMAGE_DATA_FOLDER in the AzureML workspace's DATASTORE_NAME datastore. At the end of inference, the copied image data zip file is overwritten with a simple line of text. At present we cannot delete these files. If you would like these overwritten files removed from your datastore, you can add a policy to delete items from the datastore after a period of time; we recommend 7 days.
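
If the datastore is backed by Azure Blob Storage, a lifecycle management rule can implement such a policy. A sketch, assuming a storage account named mystorageaccount behind the datastore and an images/ blob prefix for the copied data (both names are illustrative):

az storage account management-policy create \
    --account-name mystorageaccount \
    --resource-group <storage_account_resource_group> \
    --policy '{
      "rules": [{
        "enabled": true,
        "name": "delete-image-data-after-7-days",
        "type": "Lifecycle",
        "definition": {
          "actions": {"baseBlob": {"delete": {"daysAfterModificationGreaterThan": 7}}},
          "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["images/"]}
        }
      }]
    }'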

Help and Bug Reporting

  1. Guidelines for how to report a bug.

Licensing

MIT License

You are responsible for the performance and any necessary testing or regulatory clearances for any models generated.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
