Shorten the most frequent commandline options, rename settings file (#232)

Rename commandline options: --submit_to_azureml -> --azureml, --is_train -> --train, --gpu_cluster_name -> --cluster
Rename train_variables.yml -> settings.yml
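As a quick illustration of the renamed options (a sketch only; the model and cluster names below are just examples taken from other files in this diff), a run that was previously submitted with

    python InnerEye/ML/runner.py --submit_to_azureml=True --model=BasicModel2Epochs --is_train=True --gpu_cluster_name=training-nd24

would now be submitted with

    python InnerEye/ML/runner.py --azureml=True --model=BasicModel2Epochs --train=True --cluster=training-nd24

and the Azure settings formerly kept in InnerEye/train_variables.yml now live in InnerEye/settings.yml.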
Anton Schwaighofer 2020-09-21 17:40:05 +01:00, committed by GitHub
Parent ad77f95d24
Commit 3e8b92d0f1
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
38 changed files: 99 additions and 97 deletions

View file

@@ -12,7 +12,7 @@
     <option name="ADD_CONTENT_ROOTS" value="true" />
     <option name="ADD_SOURCE_ROOTS" value="true" />
     <option name="SCRIPT_NAME" value="InnerEye/Azure/azure_runner.py" />
-    <option name="PARAMETERS" value="--model=Lung --is_train=True --hyperdrive=True --number_of_cross_validation_splits=10" />
+    <option name="PARAMETERS" value="--model=Lung --train=True --hyperdrive=True --number_of_cross_validation_splits=10" />
     <option name="SHOW_COMMAND_LINE" value="false" />
     <option name="EMULATE_TERMINAL" value="false" />
     <option name="MODULE_MODE" value="false" />

View file

@@ -12,7 +12,7 @@
     <option name="ADD_CONTENT_ROOTS" value="true" />
     <option name="ADD_SOURCE_ROOTS" value="true" />
     <option name="SCRIPT_NAME" value="InnerEye/ML/runner.py" />
-    <option name="PARAMETERS" value="--submit_to_azureml=True --model=BasicModel2Epochs --is_train=True --tensorboard=True" />
+    <option name="PARAMETERS" value="--azureml=True --model=BasicModel2Epochs --train=True --tensorboard=True" />
     <option name="SHOW_COMMAND_LINE" value="false" />
     <option name="EMULATE_TERMINAL" value="false" />
     <option name="MODULE_MODE" value="false" />

View file

@@ -12,7 +12,7 @@
     <option name="ADD_CONTENT_ROOTS" value="true" />
     <option name="ADD_SOURCE_ROOTS" value="true" />
     <option name="SCRIPT_NAME" value="InnerEye/ML/runner.py" />
-    <option name="PARAMETERS" value="--model=GlaucomaPublic --is_train=True" />
+    <option name="PARAMETERS" value="--model=GlaucomaPublic" />
     <option name="SHOW_COMMAND_LINE" value="false" />
     <option name="EMULATE_TERMINAL" value="false" />
     <option name="MODULE_MODE" value="false" />

View file

@@ -34,8 +34,8 @@ class VMPriority(Enum):
     Dedicated = 'dedicated'
-# The name of the submit_to_azureml property of AzureConfig
-AZURECONFIG_SUBMIT_TO_AZUREML = "submit_to_azureml"
+# The name of the "azureml" property of AzureConfig
+AZURECONFIG_SUBMIT_TO_AZUREML = "azureml"
 @dataclass(frozen=True)
@@ -54,7 +54,7 @@ class GitInformation:
 class AzureConfig(GenericConfig):
     """
     Azure related configurations to set up valid workspace. Note that for a parameter to be settable (when not given
-    on the command line) to a value from train_variables.yaml, its default here needs to be None and not the empty
+    on the command line) to a value from settings.yml, its default here needs to be None and not the empty
     string, and its type will be Optional[str], not str.
     """
     subscription_id: str = param.String(doc="The ID of your Azure subscription.")
@@ -72,14 +72,16 @@ class AzureConfig(GenericConfig):
     resource_group: str = param.String(None, doc="The Azure resource group that contains the AzureML workspace.")
     docker_shm_size: str = param.String("440g", doc="The shared memory in the docker image for the AzureML VMs.")
     hyperdrive: bool = param.Boolean(False, doc="If True, use AzureML HyperDrive for run execution.")
-    gpu_cluster_name: str = param.String(doc="GPU cluster to use when running inside AzureML.")
+    cluster: str = param.String(doc="The name of the GPU cluster inside the AzureML workspace, that should "
+                                    "execute the job.")
     pip_extra_index_url: str = \
         param.String(doc="An additional URL where PIP packages should be loaded from.")
-    submit_to_azureml: bool = param.Boolean(False, doc="If True, submit the executing script to run on AzureML.")
+    azureml: bool = param.Boolean(False, doc="If True, submit the executing script to run on AzureML.")
     tensorboard: bool = param.Boolean(False, doc="If True, then automatically launch TensorBoard to monitor the"
                                                  " latest submitted AzureML run.")
-    is_train: bool = param.Boolean(True,
-                                   doc="If True, train a new model. If False, run inference on an existing model.")
+    train: bool = param.Boolean(True,
+                                doc="If True, train a new model. If False, run inference on an existing model. For "
+                                    "inference, you need to specify a --run_recovery_id=... as well.")
     model: str = param.String(doc="The name of the model to train/test.")
     register_model_only_for_epoch: Optional[int] = param.Integer(None,
                                                                  doc="If set, and run_recovery_id is also set, "
@@ -248,7 +250,7 @@ class SourceConfig:
     def set_script_params_except_submit_flag(self) -> None:
         """
         Populates the script_param field of the present object from the arguments in sys.argv, with the exception
-        of the "submit_to_azureml" flag.
+        of the "azureml" flag.
         """
         args = sys.argv[1:]
         submit_flag = f"--{AZURECONFIG_SUBMIT_TO_AZUREML}"
@@ -258,10 +260,10 @@ class SourceConfig:
             arg = args[i]
             if arg.startswith(submit_flag):
                 if len(arg) == len(submit_flag):
-                    # The argument list contains something like ["--submit_to_azureml", "True]: Skip 2 entries
+                    # The argument list contains something like ["--azureml", "True]: Skip 2 entries
                     i = i + 1
                 elif arg[len(submit_flag)] != "=":
-                    # The argument list contains a flag like "--submit_to_azureml_foo": Keep that.
+                    # The argument list contains a flag like "--azureml_foo": Keep that.
                     retained_args.append(arg)
             else:
                 retained_args.append(arg)

View file

@@ -93,7 +93,7 @@ def set_run_tags(run: Run, azure_config: AzureConfig, model_config_overrides: st
         "tag": azure_config.tag,
         "model_name": azure_config.model,
         "friendly_name": azure_config.user_friendly_name,
-        "execution_mode": ModelExecutionMode.TRAIN.value if azure_config.is_train else ModelExecutionMode.TEST.value,
+        "execution_mode": ModelExecutionMode.TRAIN.value if azure_config.train else ModelExecutionMode.TEST.value,
         RUN_RECOVERY_ID_KEY_NAME: azure_util.create_run_recovery_id(run=run),
         RUN_RECOVERY_FROM_ID_KEY_NAME: azure_config.run_recovery_id,
         "build_number": str(azure_config.build_number),
@@ -168,7 +168,7 @@ def create_and_submit_experiment(
           f"--run_recovery_id={recovery_id}")
     print(f"The run recovery ID has been written to this file: {recovery_file}")
     print("==============================================================================")
-    if azure_config.tensorboard and azure_config.submit_to_azureml:
+    if azure_config.tensorboard and azure_config.azureml:
         print("Starting TensorBoard now because you specified --tensorboard")
         monitor(monitor_config=AMLTensorBoardMonitorConfig(run_ids=[run.id]), azure_config=azure_config)
     else:
@@ -297,7 +297,7 @@ def create_estimator_from_configs(workspace: Workspace, azure_config: AzureConfi
         source_directory=source_config.root_folder,
         entry_script=entry_script_relative_path,
         script_params=source_config.script_params,
-        compute_target=azure_config.gpu_cluster_name,
+        compute_target=azure_config.cluster,
         # Use blob storage for storing the source, rather than the FileShares section of the storage account.
         source_directory_data_store=workspace.datastores.get(WORKSPACE_DEFAULT_BLOB_STORE_NAME),
         inputs=estimator_inputs,

View file

@@ -95,4 +95,4 @@ def main(yaml_file_path: Path) -> None:
 if __name__ == '__main__':
-    main(fixed_paths.TRAIN_YAML_FILE)
+    main(fixed_paths.SETTINGS_YAML_FILE)

View file

@@ -119,14 +119,14 @@ class ModelProcessing(Enum):
     (3) Inference on an ensemble model taking place in a HyperDrive child run that trained one of the component
     models of the ensemble and whose cross validation index is 0.
     (4) Inference on a single or ensemble model created in an another run specified by the value of run_recovery_id.
-    * Scenario (1) happens when we train a model (is_train=True) with number_of_cross_validation_splits=0. In this
+    * Scenario (1) happens when we train a model (train=True) with number_of_cross_validation_splits=0. In this
     case, the value of ModelProcessing passed around is DEFAULT.
-    * Scenario (2) happens when we train a model (is_train=True) with number_of_cross_validation_splits>0. In this
+    * Scenario (2) happens when we train a model (train=True) with number_of_cross_validation_splits>0. In this
     case, the value of ModelProcessing passed around is DEFAULT in each of the child runs while training and running
     inference on its own single model. However, the child run whose cross validation index is 0 then goes on to
     carry out Scenario (3), and does more processing with ModelProcessing value ENSEMBLE_CREATION, to create and
     register the ensemble model, run inference on it, and upload information about the ensemble model to the parent run.
-    * Scenario (4) happens when we do an inference-only run (is_train=False), and specify an existing model with
+    * Scenario (4) happens when we do an inference-only run (train=False), and specify an existing model with
     run_recovery_id (and necessarily number_of_cross_validation_splits=0, even if the recovered run was a HyperDrive
     one). This model may be either a single one or an ensemble one; in both cases, a ModelProcessing value of DEFAULT is
     used.

View file

@@ -49,8 +49,8 @@ VISUALIZATION_NOTEBOOK_PATH = os.path.join("ML", "visualizers", "gradcam_visuali
 PROJECT_SECRETS_FILE = "InnerEyeTestVariables.txt"
 INNEREYE_PACKAGE_ROOT = repository_root_directory(INNEREYE_PACKAGE_NAME)
-TRAIN_YAML_FILE_NAME = "train_variables.yml"
-TRAIN_YAML_FILE = INNEREYE_PACKAGE_ROOT / TRAIN_YAML_FILE_NAME
+SETTINGS_YAML_FILE_NAME = "settings.yml"
+SETTINGS_YAML_FILE = INNEREYE_PACKAGE_ROOT / SETTINGS_YAML_FILE_NAME
 MODEL_INFERENCE_JSON_FILE_NAME = 'model_inference_config.json'
 AZURE_RUNNER_ENVIRONMENT_YAML_FILE_NAME = "azure_runner.yml"

View file

@@ -112,4 +112,4 @@ def main(yaml_file_path: Path) -> None:
 if __name__ == '__main__':
-    main(yaml_file_path=fixed_paths.TRAIN_YAML_FILE)
+    main(yaml_file_path=fixed_paths.SETTINGS_YAML_FILE)

View file

@@ -303,7 +303,7 @@ class MLRunner:
             ml_util.validate_dataset_paths(self.model_config.local_dataset)
         # train a new model if required
-        if self.azure_config.is_train:
+        if self.azure_config.train:
             with logging_section("Model training"):
                 model_train(self.model_config, run_recovery)
         else:
@@ -350,7 +350,7 @@ class MLRunner:
         model (from the run we recovered) should already have been registered, so we should only
         do so if this run is specifically for that purpose.
         """
-        return self.azure_config.is_train or self.azure_config.register_model_only_for_epoch is not None
+        return self.azure_config.train or self.azure_config.register_model_only_for_epoch is not None
     def decide_registration_epoch_without_evaluating(self) -> Optional[int]:
         """

View file

@@ -264,7 +264,7 @@ class Runner:
             # force hyperdrive usage if performing cross validation
             self.azure_config.hyperdrive = True
         run_object: Optional[Run] = None
-        if self.azure_config.submit_to_azureml:
+        if self.azure_config.azureml:
             run_object = self.submit_to_azureml()
         else:
             self.run_in_situ()
@@ -401,7 +401,7 @@ def run(project_root: Path,
 def main() -> None:
     run(project_root=fixed_paths.repository_root_directory(),
-        yaml_config_file=fixed_paths.TRAIN_YAML_FILE,
+        yaml_config_file=fixed_paths.SETTINGS_YAML_FILE,
         post_cross_validation_hook=default_post_cross_validation_hook)

View file

@@ -122,8 +122,8 @@ class PlotCrossValidationConfig(GenericConfig):
     ignore_subjects: List[int] = param.List(None, class_=int, bounds=(1, None), allow_None=True, instantiate=False,
                                             doc="List of the subject ids to ignore from the results")
     is_zero_index: bool = param.Boolean(True, doc="If True, start cross validation split indices from 0 otherwise 1")
-    train_yaml_path: str = param.String(default=str(fixed_paths.TRAIN_YAML_FILE),
-                                        doc="Path to train_variables.yml file containing the Azure configuration "
+    train_yaml_path: str = param.String(default=str(fixed_paths.SETTINGS_YAML_FILE),
+                                        doc="Path to settings.yml file containing the Azure configuration "
                                             "for the workspace")
     _azure_config: Optional[AzureConfig] = \
         param.ClassSelector(class_=AzureConfig, allow_None=True,

View file

@@ -30,7 +30,7 @@ class SubmitForInferenceConfig(GenericConfig):
     model_id: str = param.String(doc="Id of model, e.g. Prostate:123")
     image_file: Path = param.ClassSelector(class_=Path, doc="Image file to segment, ending in .nii.gz")
     yaml_file: Path = param.ClassSelector(
-        class_=Path, doc="File containing subscription details, typically your train_variables.yml")
+        class_=Path, doc="File containing subscription details, typically your settings.yml")
     download_folder: Optional[Path] = param.ClassSelector(default=None,
                                                           class_=Path,
                                                           doc="Folder into which to download the segmentation result")

View file

@@ -7,6 +7,6 @@ variables:
   resource_group: 'InnerEye-DeepLearning'
   docker_shm_size: '440g'
   workspace_name: 'InnerEye-DeepLearning'
-  gpu_cluster_name: 'training-nd24'
+  cluster: 'training-nd24'
   model_configs_namespace: ''
   extra_code_directory: ''

View file

@@ -7,7 +7,7 @@ import sys
 from pathlib import Path
-# This file here mimics how the InnerEye code would be used as a git submodule. The test script will
+# This file here mimics how the InnerEye code would be used as a git submoTestdule. The test script will
 # copy the InnerEye code to a folder Submodule. The test will then invoke the present file as a runner,
 # and train a model in AzureML.
@@ -40,7 +40,7 @@ def main() -> None:
     from InnerEye.Common import fixed_paths
     print(f"Repository root: {repository_root}")
     runner.run(project_root=repository_root,
-               yaml_config_file=fixed_paths.TRAIN_YAML_FILE,
+               yaml_config_file=fixed_paths.SETTINGS_YAML_FILE,
               post_cross_validation_hook=None)

View file

@@ -20,7 +20,7 @@ def test_git_info() -> None:
     Test if git branch information can be read correctly.
     """
     logging_to_stdout(log_level=logging.DEBUG)
-    azure_config = AzureConfig.from_yaml(fixed_paths.TRAIN_YAML_FILE)
+    azure_config = AzureConfig.from_yaml(fixed_paths.SETTINGS_YAML_FILE)
     azure_config.project_root = project_root
     assert azure_config.build_branch == ""
     assert azure_config.build_source_id == ""
@@ -41,7 +41,7 @@ def test_git_info_from_commandline() -> None:
     """
     Test if git branch information can be overriden on the commandline
     """
-    azure_config = AzureConfig.from_yaml(fixed_paths.TRAIN_YAML_FILE)
+    azure_config = AzureConfig.from_yaml(fixed_paths.SETTINGS_YAML_FILE)
     azure_config.project_root = project_root
     azure_config.build_branch = "branch"
     azure_config.build_source_id = "id"

View file

@@ -82,7 +82,7 @@ def test_create_runner_parser(with_config: bool) -> None:
     Check that default and non-default arguments are set correctly and recognized as default/non-default.
     """
     azure_parser = create_runner_parser(SegmentationModelBase if with_config else None)
-    args_list = ["--model=Lung", "--is_train=False", "--l_rate=100.0",
+    args_list = ["--model=Lung", "--train=False", "--l_rate=100.0",
                  "--unknown=1", "--subscription_id", "Test1", "--tenant_id=Test2",
                  "--application_id", "Test3", "--datasets_storage_account=Test4",
                  "--log_level=INFO",
@@ -90,13 +90,13 @@ def test_create_runner_parser(with_config: bool) -> None:
                  "--pip_extra_index_url=foo"]
     with mock.patch("sys.argv", [""] + args_list):
         parser_result = parse_args_and_add_yaml_variables(azure_parser,
-                                                          yaml_config_file=fixed_paths.TRAIN_YAML_FILE)
+                                                          yaml_config_file=fixed_paths.SETTINGS_YAML_FILE)
     azure_config = AzureConfig(**parser_result.args)
     # These values have been set on the commandline, to values that are not the parser defaults.
     non_default_args = {
         "datasets_storage_account": "Test4",
-        "is_train": False,
+        "train": False,
         "model": "Lung",
         "subscription_id": "Test1",
         "application_id": "Test3",
@@ -151,7 +151,7 @@ def test_azureml_submit_constant() -> None:
 def test_source_config_set_params() -> None:
     """
     Check that commandline arguments are set correctly when submitting the script to AzureML.
-    In particular, the submit_to_azureml flag should be omitted, irrespective of how the argument is written.
+    In particular, the azureml flag should be omitted, irrespective of how the argument is written.
     """
     s = SourceConfig(root_folder="", entry_script="something.py", conda_dependencies_files=[])
@@ -166,7 +166,7 @@ def test_source_config_set_params() -> None:
     with mock.patch("sys.argv", ["", "some", "--param", "1", f"--{AZURECONFIG_SUBMIT_TO_AZUREML}", "False", "more"]):
         s.set_script_params_except_submit_flag()
         assert_has_params("some --param 1 more")
-    # Arguments where submit_to_azureml is just the prefix should not be removed.
+    # Arguments where azureml is just the prefix should not be removed.
     with mock.patch("sys.argv", ["", "some", f"--{AZURECONFIG_SUBMIT_TO_AZUREML}foo", "False", "more"]):
         s.set_script_params_except_submit_flag()
         assert_has_params(f"some --{AZURECONFIG_SUBMIT_TO_AZUREML}foo False more")

View file

@@ -64,18 +64,18 @@ def test_read_variables_from_yaml() -> None:
     Test that variables are read from a yaml file correctly.
     """
     # this will return a dictionary of all variables in the yaml file
-    yaml_path = full_azure_test_data_path('dummy_train_variables.yml')
+    yaml_path = full_azure_test_data_path('settings.yml')
     vars_dict = secrets_handling.read_variables_from_yaml(yaml_path)
     assert vars_dict == {'some_key': 'some_val'}
     # YAML file missing "variables" key should raise key error
+    fail_yaml_path = full_azure_test_data_path('settings_with_missing_section.yml')
     with pytest.raises(KeyError):
-        fail_yaml_path = full_azure_test_data_path('dummy_train_missing_variables.yml')
         secrets_handling.read_variables_from_yaml(fail_yaml_path)
 def test_parse_yaml() -> None:
-    assert os.path.isfile(fixed_paths.TRAIN_YAML_FILE)
-    variables = read_variables_from_yaml(fixed_paths.TRAIN_YAML_FILE)
+    assert os.path.isfile(fixed_paths.SETTINGS_YAML_FILE)
+    variables = read_variables_from_yaml(fixed_paths.SETTINGS_YAML_FILE)
     # Check that there are at least two of the variables that we know of
     tenant_id = "tenant_id"
     assert tenant_id in variables

View file

@@ -33,7 +33,7 @@ def test_create_ml_runner_args(is_default_namespace: bool,
     model_configs_namespace = "Tests.ML.configs"
     model_name = "DummyModel"
-    args_list = [f"--model={model_name}", "--is_train=True", "--l_rate=100.0",
+    args_list = [f"--model={model_name}", "--train=True", "--l_rate=100.0",
                  "--norm_method=Simple Norm", "--subscription_id", "Test1", "--tenant_id=Test2",
                  "--application_id", "Test3", "--datasets_storage_account=Test4", "--datasets_container", "Test5",
                  "--pytest_mark", "gpu", f"--output_to={outputs_folder}"]
@@ -42,7 +42,7 @@ def test_create_ml_runner_args(is_default_namespace: bool,
     with mock.patch("sys.argv", [""] + args_list):
         with mock.patch("InnerEye.ML.deep_learning_config.is_offline_run_context", return_value=is_offline_run):
-            runner = Runner(project_root=project_root, yaml_config_file=fixed_paths.TRAIN_YAML_FILE)
+            runner = Runner(project_root=project_root, yaml_config_file=fixed_paths.SETTINGS_YAML_FILE)
             runner.parse_and_load_model()
     azure_config = runner.azure_config
     model_config = runner.model_config
@@ -107,7 +107,7 @@ def test_read_yaml_file_into_args(test_output_dirs: TestOutputDirectories) -> No
     with mock.patch("sys.argv", ["", "--model=Lung"]):
         # Default behaviour: Application ID (service principal) should be picked up from YAML
         runner1 = Runner(project_root=fixed_paths.repository_root_directory(),
-                         yaml_config_file=fixed_paths.TRAIN_YAML_FILE)
+                         yaml_config_file=fixed_paths.SETTINGS_YAML_FILE)
         runner1.parse_and_load_model()
         assert len(runner1.azure_config.application_id) > 0
         # When specifying a dummy YAML file that does not contain the application ID, it should not

View file

@@ -248,7 +248,7 @@ def test_run_ml_with_sequence_model(use_combined_model: bool,
                                     segmentations=np.random.randint(0, 2, SCAN_SIZE))
     with mock.patch('InnerEye.ML.utils.io_util.load_image_in_known_formats', return_value=image_and_seg):
         azure_config = get_default_azure_config()
-        azure_config.is_train = True
+        azure_config.train = True
         MLRunner(config, azure_config).run()
@@ -449,7 +449,7 @@ def test_run_ml_with_multi_label_sequence_model(test_output_dirs: TestOutputDire
     config.num_epochs = 1
     config.max_batch_grad_cam = 1
     azure_config = get_default_azure_config()
-    azure_config.is_train = True
+    azure_config.train = True
     MLRunner(config, azure_config).run()
     # The metrics file should have one entry per epoch per subject per prediction target,
     # for all the 3 prediction targets.

View file

@@ -260,7 +260,7 @@ def test_image_encoder_with_segmentation(test_output_dirs: TestOutputDirectories
                                  segmentations=np.ones(scan_size, dtype=np.uint8))
     with mock.patch('InnerEye.ML.utils.io_util.load_image_in_known_formats', return_value=image_and_seg):
         azure_config = get_default_azure_config()
-        azure_config.is_train = True
+        azure_config.train = True
         MLRunner(config, azure_config).run()
     # No further asserts here because the models are still in experimental state. Most errors would come
     # from having invalid model architectures, which would throw runtime errors during training.

View file

@@ -153,7 +153,7 @@ def test_run_ml_with_classification_model(test_output_dirs: TestOutputDirectorie
     """
     logging_to_stdout()
     azure_config = get_default_azure_config()
-    azure_config.is_train = True
+    azure_config.train = True
     train_config: ScalarModelBase = ModelConfigLoader[ScalarModelBase]() \
         .create_model_config_from_name(model_name)
     train_config.number_of_cross_validation_splits = number_of_offline_cross_validation_splits
@@ -198,7 +198,7 @@ def test_run_ml_with_segmentation_model(test_output_dirs: TestOutputDirectories)
     train_config.perform_validation_and_test_set_inference = True
     train_config.set_output_to(test_output_dirs.root_dir)
     azure_config = get_default_azure_config()
-    azure_config.is_train = True
+    azure_config.train = True
     MLRunner(train_config, azure_config).run()
@@ -216,14 +216,14 @@ def test_runner1(test_output_dirs: TestOutputDirectories) -> None:
     output_root = str(test_output_dirs.root_dir)
     args = ["",
             "--model", model_name,
-            "--is_train", "True",
+            "--train", "True",
             "--random_seed", str(set_from_commandline),
             "--non_image_feature_channels", scalar1,
             "--output_to", output_root,
             ]
     with mock.patch("sys.argv", args):
         config, _ = runner.run(project_root=fixed_paths.repository_root_directory(),
-                               yaml_config_file=fixed_paths.TRAIN_YAML_FILE)
+                               yaml_config_file=fixed_paths.SETTINGS_YAML_FILE)
     assert isinstance(config, ScalarModelBase)
     assert config.model_name == "DummyClassification"
     assert config.get_effective_random_seed() == set_from_commandline
@@ -241,12 +241,12 @@ def test_runner2(test_output_dirs: TestOutputDirectories) -> None:
     output_root = str(test_output_dirs.root_dir)
     args = ["",
             "--model", "DummyClassification",
-            "--is_train", "True",
+            "--train", "True",
             "--output_to", output_root,
             ]
     with mock.patch("sys.argv", args):
         config, _ = runner.run(project_root=fixed_paths.repository_root_directory(),
-                               yaml_config_file=fixed_paths.TRAIN_YAML_FILE)
+                               yaml_config_file=fixed_paths.SETTINGS_YAML_FILE)
     assert isinstance(config, ScalarModelBase)
     assert config.name.startswith("DummyClassification")

View file

@@ -21,7 +21,7 @@ from Tests.Common.test_util import DEFAULT_MODEL_ID_NUMERIC
 def test_submit_for_inference() -> None:
     args = ["--image_file", "Tests/ML/test_data/train_and_test_data/id1_channel1.nii.gz",
             "--model_id", DEFAULT_MODEL_ID_NUMERIC,
-            "--yaml_file", "InnerEye/train_variables.yml",
+            "--yaml_file", "InnerEye/settings.yml",
             "--download_folder", "."]
     seg_path = Path(DEFAULT_RESULT_IMAGE_NAME)
     if seg_path.exists():

View file

@@ -32,7 +32,7 @@ def runner_config() -> AzureConfig:
     """
     config = get_default_azure_config()
     config.model = ""
-    config.is_train = False
+    config.train = False
     config.datasets_container = ""
     return config

View file

@@ -21,7 +21,7 @@ def test_visualize_commandline1() -> None:
     new_dataset = "new_dataset"
     assert default_config.azure_dataset_id != new_dataset
     with mock.patch("sys.argv", ["", f"--azure_dataset_id={new_dataset}"]):
-        updated_config, runner_config, _ = get_configs(default_config, yaml_file_path=fixed_paths.TRAIN_YAML_FILE)
+        updated_config, runner_config, _ = get_configs(default_config, yaml_file_path=fixed_paths.SETTINGS_YAML_FILE)
     assert updated_config.azure_dataset_id == new_dataset
     # These two values were not specified on the commandline, and should be at their original values.
     assert updated_config.norm_method == old_photonorm

View file

@@ -170,9 +170,9 @@ def get_model_loader(namespace: Optional[str] = None) -> ModelConfigLoader[Segme
 def get_default_azure_config() -> AzureConfig:
     """
-    Gets the Azure-related configuration options, using the default settings file train_variables.yaml.
+    Gets the Azure-related configuration options, using the default settings file settings.yaml.
     """
-    return AzureConfig.from_yaml(yaml_file_path=fixed_paths.TRAIN_YAML_FILE)
+    return AzureConfig.from_yaml(yaml_file_path=fixed_paths.SETTINGS_YAML_FILE)
 def get_default_workspace() -> Workspace:

View file

@@ -416,7 +416,7 @@ def test_run_ml_with_multi_label_sequence_in_crossval(test_output_dirs: TestOutp
     config.num_epochs = 1
     config.number_of_cross_validation_splits = 2
     azure_config = get_default_azure_config()
-    azure_config.is_train = True
+    azure_config.train = True
     MLRunner(config, azure_config).run()

View file

@@ -1,7 +1,7 @@
 name: PR-$(Date:yyyyMMdd)$(Rev:-r)
 variables:
   model: 'BasicModel2Epochs'
-  is_train: 'True'
+  train: 'True'
   more_switches: '--log_level=DEBUG'
   run_recovery_id: ''
   tags: 'PR'
@@ -23,8 +23,8 @@ jobs:
 - job: TrainInAzureML
   variables:
-  - template: ../InnerEye/train_variables.yml
-  - name: gpu_cluster_name
+  - template: ../InnerEye/settings.yml
+  - name: cluster
     value: 'training-nc12'
   pool:
     vmImage: 'ubuntu-18.04'
@@ -43,8 +43,8 @@ jobs:
 - job: TrainInAzureMLViaSubmodule
   variables:
-  - template: ../InnerEye/train_variables.yml
-  - name: gpu_cluster_name
+  - template: ../InnerEye/settings.yml
+  - name: cluster
     value: 'training-nc12'
   pool:
     vmImage: 'ubuntu-18.04'

View file

@@ -7,7 +7,7 @@ steps:
     full_branch_name=$(Build.SourceBranch)
     branch_name_without_prefix=${full_branch_name#$branch_prefix}
     source_version_message="`echo $(Build.SourceVersionMessage) | tr -dc 'A-Za-z0-9_ ' | cut -c1-120`"
-    python ./InnerEye/ML/runner.py --submit_to_azureml=True --model="$(model)" --is_train="$(is_train)" $(more_switches) --number_of_cross_validation_splits="$(number_of_cross_validation_splits)" --wait_for_completion="${{parameters.wait_for_completion}}" --pytest_mark="${{parameters.pytest_mark}}" --gpu_cluster_name="$(gpu_cluster_name)" --user_friendly_name="$(user_friendly_name)" --run_recovery_id="$(run_recovery_id)" --tag="$(tags)" --build_number=$(Build.BuildId) --build_user="$(Build.RequestedFor)" --build_branch="$branch_name_without_prefix" --build_source_id="$(Build.SourceVersion)" --build_source_message="$source_version_message" --build_source_author="$(Build.SourceVersionAuthor)" --build_source_repository="$(Build.Repository.Name)"
+    python ./InnerEye/ML/runner.py --azureml=True --model="$(model)" --train="$(train)" $(more_switches) --number_of_cross_validation_splits="$(number_of_cross_validation_splits)" --wait_for_completion="${{parameters.wait_for_completion}}" --pytest_mark="${{parameters.pytest_mark}}" --cluster="$(cluster)" --user_friendly_name="$(user_friendly_name)" --run_recovery_id="$(run_recovery_id)" --tag="$(tags)" --build_number=$(Build.BuildId) --build_user="$(Build.RequestedFor)" --build_branch="$branch_name_without_prefix" --build_source_id="$(Build.SourceVersion)" --build_source_message="$source_version_message" --build_source_author="$(Build.SourceVersionAuthor)" --build_source_repository="$(Build.Repository.Name)"
   env:
     PYTHONPATH: $(Build.SourcesDirectory)/
     APPLICATION_KEY: $(InnerEyeDeepLearningServicePrincipalKey)

View file

@@ -4,7 +4,7 @@ steps:
   # Create a directory structure with a runner script and the InnerEye submodule, starting training from there.
   # Then do a recovery run in AzureML to see if that works well.
-  # python $(Agent.TempDirectory)/InnerEye/TestSubmodule/test_submodule_runner.py --run_recovery_id=`cat most_recent_run.txt` --start_epoch=2 --num_epochs=4 --perform_validation_and_test_set_inference=False --submit_to_azureml=True --model="$(model)" --is_train="$(is_train)" $(more_switches) --wait_for_completion="${{parameters.wait_for_completion}}" --gpu_cluster_name="$(gpu_cluster_name)" --tag="$(tags)" --build_number=$(Build.BuildId) --build_user="$(Build.RequestedFor)" --build_branch="$branch_name_without_prefix" --build_source_id="$(Build.SourceVersion)" --build_source_message="$source_version_message" --build_source_author="$(Build.SourceVersionAuthor)" --build_source_repository="$(Build.Repository.Name)"
+  # python $(Agent.TempDirectory)/InnerEye/TestSubmodule/test_submodule_runner.py --run_recovery_id=`cat most_recent_run.txt` --start_epoch=2 --num_epochs=4 --perform_validation_and_test_set_inference=False --azureml=True --model="$(model)" --train="$(train)" $(more_switches) --wait_for_completion="${{parameters.wait_for_completion}}" --cluster="$(cluster)" --tag="$(tags)" --build_number=$(Build.BuildId) --build_user="$(Build.RequestedFor)" --build_branch="$branch_name_without_prefix" --build_source_id="$(Build.SourceVersion)" --build_source_message="$source_version_message" --build_source_author="$(Build.SourceVersionAuthor)" --build_source_repository="$(Build.Repository.Name)"
 - bash: |
     source activate AzureRunner
     mkdir $(Agent.TempDirectory)/InnerEye/
@@ -15,7 +15,7 @@ steps:
     full_branch_name=$(Build.SourceBranch)
     branch_name_without_prefix=${full_branch_name#$branch_prefix}
     source_version_message="`echo $(Build.SourceVersionMessage) | tr -dc 'A-Za-z0-9_ ' | cut -c1-120`"
-    python $(Agent.TempDirectory)/InnerEye/TestSubmodule/test_submodule_runner.py --perform_validation_and_test_set_inference=False --submit_to_azureml=True --model="$(model)" --is_train="$(is_train)" $(more_switches) --wait_for_completion="${{parameters.wait_for_completion}}" --gpu_cluster_name="$(gpu_cluster_name)" --tag="$(tags)" --build_number=$(Build.BuildId) --build_user="$(Build.RequestedFor)" --build_branch="$branch_name_without_prefix" --build_source_id="$(Build.SourceVersion)" --build_source_message="$source_version_message" --build_source_author="$(Build.SourceVersionAuthor)" --build_source_repository="$(Build.Repository.Name)"
+    python $(Agent.TempDirectory)/InnerEye/TestSubmodule/test_submodule_runner.py --perform_validation_and_test_set_inference=False --azureml=True --model="$(model)" --train="$(train)" $(more_switches) --wait_for_completion="${{parameters.wait_for_completion}}" --cluster="$(cluster)" --tag="$(tags)" --build_number=$(Build.BuildId) --build_user="$(Build.RequestedFor)" --build_branch="$branch_name_without_prefix" --build_source_id="$(Build.SourceVersion)" --build_source_message="$source_version_message" --build_source_author="$(Build.SourceVersionAuthor)" --build_source_repository="$(Build.Repository.Name)"
   env:
     PYTHONPATH: $(Agent.TempDirectory)/InnerEye
     APPLICATION_KEY: $(InnerEyeDeepLearningServicePrincipalKey)

View file

@@ -8,7 +8,7 @@ We recommend the latter as it offers more flexibility and better separation of c
 create a directory `InnerEyeLocal` beside `InnerEye`.
 As well as your configurations (dealt with below) you will need these files:
-* `train_variables.yml`: A file similar to `InnerEye\train_variables.yml` containing all your Azure settings.
+* `settings.yml`: A file similar to `InnerEye\settings.yml` containing all your Azure settings.
 The value of `extra_code_directory` should (in our example) be `'InnerEyeLocal'`,
 and model_configs_namespace should be `'InnerEyeLocal.ML.configs'`.
 * A folder like `InnerEyeLocal` that contains your additional code, and model configurations.
@@ -24,7 +24,7 @@ def main() -> None:
     current = os.path.dirname(os.path.realpath(__file__))
     project_root = Path(os.path.realpath(os.path.join(current, "..", "..")))
     runner.run(project_root=project_root,
-               yaml_config_file=project_root / "relative/path/to/train_variables.yml",
+               yaml_config_file=project_root / "relative/path/to/settings.yml",
               post_cross_validation_hook=None)
@@ -50,7 +50,7 @@ class Prostate(ProstateBase):
 ```
 The allowed parameters and their meanings are defined in [`SegmentationModelBase`](/InnerEye/ML/config.py).
 The class name must be the same as the basename of the file containing it, so `Prostate.py` must contain `Prostate`.
-In `train_variables.yml`, set `model_configs_namespace` to `InnerEyeLocal.ML.configs` so this config
+In `settings.yml`, set `model_configs_namespace` to `InnerEyeLocal.ML.configs` so this config
 is found by the runner.
 ### Training a new model
@@ -59,11 +59,11 @@ is found by the runner.
 * Train a new model, for example `Prostate`:
 ```shell script
-python InnerEyeLocal/ML/runner.py --submit_to_azureml=True --model=Prostate --is_train=True
+python InnerEyeLocal/ML/runner.py --azureml=True --model=Prostate --train=True
 ```
 Alternatively, you can train the model on your current machine if it is powerful enough. In
-this case, you should specify `--submit_to_azureml=False`, and instead of specifying
+this case, you would simply omit the `azureml` flag, and instead of specifying
 `azure_dataset_id` in the class constructor, you can instead use `local_dataset="my/data/folder"`,
 where the folder `my/data/folder` contains a `dataset.csv` file and subfolders `0`, `1`, `2`, ...,
 one for each image.
@@ -137,9 +137,9 @@ run recovery ID without the final underscore and digit.
 ### Testing an existing model
-As for continuing training, but set `--is_train` to `False`. Thus your command should look like this:
+As for continuing training, but set `--train` to `False`. Thus your command should look like this:
 ```shell script
-python Inner/ML/runner.py --submit_to_azureml=True --model=Prostate --is_train=False --gpu_cluster_name=my_cluster_name \
+python Inner/ML/runner.py --azureml=True --model=Prostate --train=False --cluster=my_cluster_name \
 --run_recovery_id=foo_bar:foo_bar_12345_abcd --start_epoch=120
 ```
@@ -147,7 +147,7 @@ Alternatively, to submit an AzureML run to apply a model to a single image on yo
 you can use the script `submit_for_inference.py`, with a command of this form:
 ```shell script
 python InnerEye/Scripts/submit_for_inference.py --image_file ~/somewhere/ct.nii.gz --model_id Prostate:555 \
---yaml_file ../somewhere_else/train_variables.yml --download_folder ~/my_existing_folder
+--yaml_file ../somewhere_else/settings.yml --download_folder ~/my_existing_folder
 ```
 ### Model Ensembles

View file

@@ -23,7 +23,7 @@ standard Linux or Windows machines.
 The main entry point into the code is [`InnerEye/ML/runner.py`](/InnerEye/ML/runner.py). The code takes its
 configuration elements from commandline arguments and a settings file,
-[`InnerEye/train_variables.yml`](/InnerEye/train_variables.yml).
+[`InnerEye/settings.yml`](/InnerEye/settings.yml).
 A password for the (optional) Azure Service
 Principal is read from `InnerEyeTestVariables.txt` in the repository root directory. The file
@@ -33,7 +33,7 @@ APPLICATION_KEY=<app key for your AML workspace>
 ```
 For developing and running your own models, you will probably find it convenient to create your own variants of
-`runner.py` and `train_variables.yml`, as detailed in the page on [model building](building_models.md).
+`runner.py` and `settings.yml`, as detailed in the page on [model building](building_models.md).
 To quickly access both runner scripts for local debugging, we created template PyCharm run configurations, called
 "Template: Azure runner" and "Template: ML runner". If you want to execute the runners on your machine, then

View file

@@ -9,7 +9,7 @@ submodule.
 If you go down the second route, here's the list of files you will need in your project (that's the same as those
 given in [this document](building_models.md))
 * `environment.yml`: Conda environment with python, pip, pytorch
-* `train_variables.yml`: A file similar to `InnerEye\train_variables.yml` containing all your Azure settings
+* `settings.yml`: A file similar to `InnerEye\settings.yml` containing all your Azure settings
 * A folder like `ML` that contains your additional code, and model configurations.
 * A file `ML/runner.py` that invokes the InnerEye training runner, but that points the code to your environment and Azure
 settings; see the [Building models](building_models.md) instructions for details.

View file

@@ -24,7 +24,7 @@ see [Setting up AzureML](setting_up_aml.md#step-4-create-a-storage-account-for-y
 ### Setting up training
 1. Set up a directory outside of InnerEye to holds your configs, as in
 [Setting Up Training](building_models.md#setting-up-training). After this step, you should have a folder InnerEyeLocal
-beside InnerEye with files train_variables.yml and ML/runner.py.
+beside InnerEye with files `settings.yml` and `ML/runner.py`.
 ### Creating the classification model configuration
 The full configuration for the Glaucoma model is at InnerEye/ML/configs/classification/GlaucomaPublic.
@@ -40,13 +40,13 @@ class GlaucomaPublicExt(GlaucomaPublic):
     def __init__(self) -> None:
         super().__init__(azure_dataset_id="name_of_your_dataset_on_azure")
 ```
-1. In `train_variables.yml`, set `model_configs_namespace` to `InnerEyeLocal.ML.configs` so this config
+1. In `settings.yml`, set `model_configs_namespace` to `InnerEyeLocal.ML.configs` so this config
 is found by the runner. Set `extra_code_directory` to `InnerEyeLocal`.
 ### Start Training
 Run the following to start a job on AzureML
 ```
-python InnerEyeLocal/ML/runner.py --submit_to_azureml=True --model=GlaucomaPublicExt --is_train=True
+python InnerEyeLocal/ML/runner.py --azureml=True --model=GlaucomaPublicExt --train=True
 ```
 See [Model Training](building_models.md) for details on training outputs, resuming training, testing models and model ensembles.
@@ -74,7 +74,7 @@ see [Setting up AzureML](setting_up_aml.md#step-4-create-a-storage-account-for-y
 ### Setting up training
 1. Set up a directory outside of InnerEye to holds your configs, as in
 [Setting Up Training](building_models.md#setting-up-training). After this step, you should have a folder InnerEyeLocal
-beside InnerEye with files train_variables.yml and ML/runner.py.
+beside InnerEye with files settings.yml and ML/runner.py.
 ### Creating the segmentation model configuration
 The full configuration for the Lung model is at InnerEye/ML/configs/segmentation/Lung.
@@ -90,13 +90,13 @@ class LungExt(Lung):
     def __init__(self) -> None:
         super().__init__(azure_dataset_id="name_of_your_dataset_on_azure")
 ```
-1. In `train_variables.yml`, set `model_configs_namespace` to `InnerEyeLocal.ML.configs` so this config
+1. In `settings.yml`, set `model_configs_namespace` to `InnerEyeLocal.ML.configs` so this config
 is found by the runner. Set `extra_code_directory` to `InnerEyeLocal`.
 ### Start Training
 Run the following to start a job on AzureML
 ```
-python InnerEyeLocal/ML/runner.py --submit_to_azureml=True --model=LungExt --is_train=True
+python InnerEyeLocal/ML/runner.py --azureml=True --model=LungExt --train=True
 ```
 See [Model Training](building_models.md) for details on training outputs, resuming training, testing models and model ensembles.


@ -10,7 +10,7 @@ In short, you will need to:
* Optional: Register your application to create a Service Principal Object.
* Optional: Set up a storage account to store your datasets. You may already have such a storage account, or you may
want to re-use the storage account that is created with the AzureML workspace - in both cases, you can skip this step.
* Update your [settings.yml](/InnerEye/settings.yml) file and KeyVault with your own credentials.
Once you're done with these steps, you will be ready for the next steps described in [Creating a dataset](https://github.com/microsoft/InnerEye-createdataset),
[Building models in Azure ML](building_models.md) and
@ -75,8 +75,8 @@ low priority nodes, click on the "Request Quota" button at the bottom of the pag
Details about creating compute clusters can be found
[here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#set-up-in-azure-machine-learning-studio).
Note down the name of your compute cluster - this will later go into the `cluster` entry of your settings
file `settings.yml`.
### Step 3 (Optional): Create a Service Principal Authentication object.
@ -105,7 +105,7 @@ To create the Service Principal:
a few minutes. Click on the resource to access its properties. In particular, you will need the application ID.
You can find this ID in the `Overview` tab (accessible from the list on the left of the page).
Note it down for later - this will go into the `application_id` entry of your settings
file `settings.yml`.
1. You need to create an application secret to access the resources managed by this service principal.
On the pane on the left find `Certificates & Secrets`. Click on `+ New client secret` (bottom of the page), note down your token.
Warning: this token is only displayed once, at creation time; you will not be able to view it again later.
@ -169,11 +169,11 @@ on your local machine:
- Create a file called `InnerEyeTestVariables.txt` in the root directory of your git repository, and add a line
`DATASETS_ACCOUNT_KEY=TheKeyThatYouJustCopied`.
- Copy the name of the datasets storage account into the field `datasets_storage_account` of your settings file
`settings.yml`.
### Step 6: Update the variables in `settings.yml`
The [settings.yml](/InnerEye/settings.yml) file is used to store your Azure setup. In order to be able to
train your model you will need to update this file using the settings for your Azure subscription.
1. You will first need to retrieve your `tenant_id`. You can find your tenant id by navigating to
`Azure Active Directory > Properties > Tenant ID` (use the search bar above to access the `Azure Active Directory`
@ -181,12 +181,12 @@ resource. Copy and paste the GUID to the `tenant_id` field of the `.yml` file. M
[here](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant).
2. You then need to retrieve your subscription id. In the search bar look for `Subscriptions`. Then in the subscriptions list,
look for the subscription you are using for your workspace. Copy the value of the `Subscription ID` in the corresponding
field of [settings.yml](/InnerEye/settings.yml).
3. Copy the application ID of your Service Principal that you retrieved earlier (cf. Step 3) to the `application_id` field.
If you did not set up a Service Principal, fill that field with an empty string or leave it out altogether.
6. Update the `resource_group:` field with your resource group name (created in Step 1).
7. Update the `workspace_name:` field with the name of the AzureML workspace created in Step 1.
8. Update the `cluster:` field with the name of your own compute cluster (Step 2). If you chose automatic
deployment, this cluster will be called "NC24-LowPrio".
Leave all other fields as they are for now.
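
Taken together, the relevant part of `settings.yml` could end up looking roughly like the sketch below. All values are placeholders; key names not spelled out verbatim in the steps above (for example `subscription_id` and the enclosing `variables:` section) are assumptions about the file layout:

```yaml
variables:
  tenant_id: '00000000-0000-0000-0000-000000000000'        # item 1 above
  subscription_id: '00000000-0000-0000-0000-000000000000'  # item 2 above; key name assumed
  application_id: ''                                        # item 3; empty if no Service Principal
  resource_group: 'MyInnerEyeResourceGroup'                 # item 6
  workspace_name: 'MyInnerEyeWorkspace'                     # item 7
  cluster: 'NC24-LowPrio'                                   # item 8: compute cluster from Step 2
  datasets_storage_account: 'mydatasetsstorage'             # only if you set up a datasets storage account
```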


@ -113,7 +113,7 @@ for requirements_line in pip_list:
if is_dev_package:
    published_package_name += "-dev"
    package_data[INNEREYE_PACKAGE_NAME] += [
        fixed_paths.SETTINGS_YAML_FILE_NAME
    ]
    print("\n ***** NOTE: This package is built for development purpose only. DO NOT RELEASE THIS! *****")
    print(f"\n ***** Will install dev package data: {package_data} *****\n")
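
For context, the sketch below shows how a `package_data` dictionary like the one above is typically handed to setuptools. This is not the repository's actual `setup()` call; the placeholder values merely mirror the variables visible in the diff.

```python
# Illustrative only: placeholder values standing in for variables built up earlier in setup.py.
from setuptools import setup, find_packages

published_package_name = "innereye-dev"        # assumed example name
package_data = {"InnerEye": ["settings.yml"]}  # assumed contents of the dict shown above

setup(
    name=published_package_name,
    packages=find_packages(),
    # package_data ships non-Python files (such as the settings YAML) inside the built package.
    package_data=package_data,
)
```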