* Add script submit_for_inference.py to submit the job; see change to docs/building_models.md for how to use it.
* Modify run_scoring.py so it can be called with a command-line argument --model-id instead of a project_root argument, and call it this way in the job.
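A minimal sketch of how the new command-line handling might look. The `parse_args` helper and the exact flag spelling `--project-root` are assumptions for illustration; only `run_scoring.py`, `--model-id`, and the legacy `project_root` argument come from this change description.

```python
import argparse

def parse_args(argv=None):
    # Hypothetical sketch: accept either --model-id (new style) or
    # --project-root (legacy); exactly one of the two must be given.
    parser = argparse.ArgumentParser(description="Run scoring for a model")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--model-id", help="ID of the registered model to score with")
    group.add_argument("--project-root", help="Legacy: path to the project root")
    return parser.parse_args(argv)
```

With this shape, the job submitted by submit_for_inference.py would invoke the script as `python run_scoring.py --model-id <id>`.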
* Also modify run_scoring.py to handle the case where CONDA_DEFAULT_ENV is not set (because we are running in a Docker image that already has the correct environment, which is created by submit_for_inference.py).
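One way the missing-variable case could be handled is to read the variable defensively and let callers skip any environment-switching logic when it is absent. The helper name `conda_env_or_none` is hypothetical; the environment variable and the Docker rationale are from the change description.

```python
import os

def conda_env_or_none():
    """Return the active conda environment name, or None when unset.

    CONDA_DEFAULT_ENV is typically unset inside a Docker image that was
    built with the correct environment already in place; in that case
    callers can skip any conda activation or environment checks.
    """
    return os.environ.get("CONDA_DEFAULT_ENV")
```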
* Implement a workaround in merge_conda_dependencies (needed by the submission script) to avoid a bug in azureml-sdk; the bug should be fixed soon, and the workaround can then be removed.
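The nature of the azureml-sdk bug is not detailed here, so the following is only a hypothetical sketch of what merging conda environment definitions without relying on the SDK helper could look like: combine the parsed environment.yml dictionaries manually, preserving order and dropping duplicates.

```python
def merge_conda_dependencies(env_dicts):
    # Hypothetical sketch: merge parsed conda environment.yml dictionaries
    # ourselves, preserving first-seen order and skipping duplicate
    # channels/dependencies, rather than delegating to the buggy SDK path.
    merged = {"channels": [], "dependencies": []}
    for env in env_dicts:
        for channel in env.get("channels", []):
            if channel not in merged["channels"]:
                merged["channels"].append(channel)
        for dep in env.get("dependencies", []):
            if dep not in merged["dependencies"]:
                merged["dependencies"].append(dep)
    return merged
```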
* Change the value of RUN_OUTPUTS_DIR_NAME from "run_outputs" to DEFAULT_AML_UPLOAD_DIR, which is "outputs". This means that output files are automatically uploaded to blob storage and are all visible in AzureML.
* Add a check at the end of the run method in runner.py that the run_outputs directory is indeed the default upload directory (this will always succeed at the moment); if it is not, upload the directory explicitly.
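A sketch of what that check could look like. The function name `ensure_outputs_uploaded` is an assumption; the `run` argument is expected to behave like an `azureml.core.Run` (i.e. provide `upload_folder`), and is duck-typed here so the sketch stays self-contained.

```python
import os

def ensure_outputs_uploaded(run, run_outputs_dir, default_upload_dir="outputs"):
    # Hypothetical sketch: if the run-outputs directory is the default
    # AzureML upload directory, its files are uploaded automatically and
    # nothing needs to be done; otherwise, upload the folder explicitly.
    if os.path.basename(os.path.normpath(run_outputs_dir)) == default_upload_dir:
        return False  # automatic upload applies
    run.upload_folder(name=os.path.basename(run_outputs_dir), path=str(run_outputs_dir))
    return True
```

With RUN_OUTPUTS_DIR_NAME now set to DEFAULT_AML_UPLOAD_DIR, the explicit-upload branch is currently unreachable, matching the note above that the check always succeeds at the moment.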
* Improve the documentation on the model output directory so that it covers everything that is stored there.