Co-authored-by: Javier <jaalvare@microsoft.com>
Co-authored-by: Anton Schwaighofer <antonsc@microsoft.com>
Clemens Siebler 2021-02-03 10:03:19 +01:00, committed by GitHub
Parent b415367431
Commit 5d5e55e314
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
2 changed files: 12 additions and 5 deletions


@@ -48,7 +48,7 @@ class Prostate(ProstateBase):
     def __init__(self) -> None:
         super().__init__(
             ground_truth_ids=["femur_r", "femur_l", "rectum", "prostate"],
-            azure_dataset_id="id-of-your-blob-containing-prostate-data")
+            azure_dataset_id="name-of-your-AML-dataset-with-prostate-data")
 ```
 The allowed parameters and their meanings are defined in [`SegmentationModelBase`](/InnerEye/ML/config.py).
 The class name must be the same as the basename of the file containing it, so `Prostate.py` must contain `Prostate`.
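The configuration pattern shown above can be sketched as a self-contained example. Note that `ProstateBase` below is a minimal stand-in so the snippet runs without the InnerEye package installed; the real base class lives in the InnerEye repository and accepts many more parameters:

```python
# Minimal stand-in for InnerEye's ProstateBase (illustrative only): it just
# records the two parameters discussed in the text.
class ProstateBase:
    def __init__(self, ground_truth_ids=None, azure_dataset_id=None) -> None:
        self.ground_truth_ids = ground_truth_ids or []
        self.azure_dataset_id = azure_dataset_id


# The class name matches the file basename, so this would live in Prostate.py.
class Prostate(ProstateBase):
    def __init__(self) -> None:
        super().__init__(
            ground_truth_ids=["femur_r", "femur_l", "rectum", "prostate"],
            azure_dataset_id="name-of-your-AML-dataset-with-prostate-data")
```

The runner discovers the class by matching the class name against the file name, which is why the two must agree.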
@@ -64,12 +64,14 @@ class HeadAndNeck(HeadAndNeckBase):
     def __init__(self) -> None:
         super().__init__(
             ground_truth_ids=["parotid_l", "parotid_r", "smg_l", "smg_r", "spinal_cord"],
-            azure_dataset_id="id-of-your-blob-containing-prostate-data")
+            azure_dataset_id="name-of-your-AML-dataset-with-prostate-data")
 ```
 ### Training a new model
-* Set up your model configuration as above.
+* Set up your model configuration as above and update `azure_dataset_id` to the name of your Dataset in the AML workspace.
+It is enough to put your dataset into blob storage. The dataset should be contained in a folder at the root of the datasets container.
+The InnerEye runner will check if there is a dataset in the AzureML workspace already, and if not, generate it directly from blob storage.
 * Train a new model, for example `Prostate`:
 ```shell script
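# The training command described in the docs (assumption: run from the root of
# the InnerEye-DeepLearning checkout, with its conda environment active;
# --azureml=True submits the run to AzureML instead of executing locally)
python InnerEye/ML/runner.py --model=Prostate --azureml=True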


@@ -64,8 +64,13 @@ You can skip this if you have chosen automatic deployment above.
 We recommend using [low priority](https://docs.microsoft.com/en-us/azure/batch/batch-low-pri-vms) clusters, since
 they only cost a fraction of the dedicated VMs.
-As a reference, the Prostate model and the Head and Neck model require VMs with 4 GPUs with at least 16GB of memory
-per GPU, for example `Standard_ND24s`, `Standard_NC24s_v3` or `Standard_NC24s_v2`.
+As a reference:
+* The Prostate, HeadAndNeck, and the Lung model require VMs with 4 GPUs with at least 24GB of memory
+per GPU, for example `Standard_ND24s` (4 GPUs, 24GB per GPU).
+* It is possible to train all of these models on machines with fewer GPUs, or GPUs with less memory. If using GPUs with
+less memory, some model parameters will need to be adjusted. As a starting point, we would suggest reducing the `train_batch_size`,
+and if that is not sufficient, reducing the `crop_size`, bearing in mind though that the size of the crops has a large
+impact on the model's accuracy.
 You need to ensure that your Azure subscription actually has a quota for accessing GPU machines. To see your quota,
 find your newly created AzureML workspace in the [Azure portal](http://portal.azure.com), using the search bar at the
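The tuning advice in the diff above (reduce `train_batch_size` first, shrink `crop_size` only as a last resort) might look like the following sketch. `SegmentationModelBase` and all parameter values here are illustrative stand-ins, not InnerEye's actual class or defaults:

```python
# Illustrative stand-in for InnerEye's SegmentationModelBase: only the two
# memory-relevant parameters discussed in the text are modeled.
class SegmentationModelBase:
    def __init__(self, train_batch_size: int = 8,
                 crop_size: tuple = (64, 224, 224), **kwargs) -> None:
        self.train_batch_size = train_batch_size
        self.crop_size = crop_size


class ProstateLowMemory(SegmentationModelBase):
    def __init__(self) -> None:
        # First lever: halve the batch size (hypothetical value).
        # Second lever, only if training still runs out of GPU memory:
        # shrink crop_size, accepting that smaller crops can noticeably
        # reduce the model's accuracy.
        super().__init__(train_batch_size=4, crop_size=(56, 192, 192))
```

Keeping the overrides in a separate config class, rather than editing the base model, makes it easy to compare runs on large and small GPUs.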