Further test updates
- Updates to Linux template
- Fixed thumbnail generation on Linux
- Added image label to pool display
- Added images to setup instructions
- Additional documentation
@@ -228,3 +228,4 @@ packages/Newtonsoft.Json.7.0.1/lib/net40/Newtonsoft.Json.xml
packages/Newtonsoft.Json.7.0.1/lib/net35/Newtonsoft.Json.xml
packages/Newtonsoft.Json.7.0.1/lib/net20/Newtonsoft.Json.xml
.vs/config/applicationhost.config
tests/test_scene/juggernaut.ma
@@ -127,14 +127,17 @@
    </Compile>
    <Compile Include="azure_batch_maya\plug-in\AzureBatch.py" />
    <Compile Include="azure_batch_maya\scripts\ui\ui_shared.py" />
    <Compile Include="docs\submitting_jobs.md">
      <SubType>Code</SubType>
    </Compile>
    <Compile Include="package.py" />
    <Compile Include="tests\test_assets.py">
      <SubType>Code</SubType>
    </Compile>
    <Compile Include="tests\test_environment.py">
    <Compile Include="tests\test_extensions.py">
      <SubType>Code</SubType>
    </Compile>
    <Compile Include="tests\test_history.py">
    <Compile Include="tests\test_integration.py">
      <SubType>Code</SubType>
    </Compile>
    <Compile Include="tests\test_jobwatcher.py">
@@ -163,7 +166,10 @@
    <Folder Include="azure_batch_maya\scripts\tools\" />
    <Folder Include="azure_batch_maya\scripts\ui\" />
    <Folder Include="azure_batch_maya\templates\" />
    <Folder Include="docs\" />
    <Folder Include="docs\images\" />
    <Folder Include="tests\" />
    <Folder Include="tests\test_scene\" />
  </ItemGroup>
  <ItemGroup>
    <Content Include="azure_batch_maya\icons\btn_background.png" />
README.md
@@ -2,11 +2,11 @@

This project demonstrates cloud rendering using the Azure Batch service with integrated licensing for Maya and Arnold.

Please note that the Azure Batch licensing service for Maya is currently running as a gated public preview.
For more information and to register, please see [rendering.azure.com](rendering.azure.com).
Please note that the Azure Batch licensing service for Maya is currently in preview.
For more information and to register your interest, please see [rendering.azure.com](rendering.azure.com).

In order to run this sample, you will need to [create an Azure Batch account](https://docs.microsoft.com/azure/batch/batch-account-create-portal).
You will also need a storage account. You will have the option to create a storage account as part of the Batch account setup for use as "Auto Storage".
You will also need a storage account. You will have the option to create a storage account as part of the Batch account setup for use as `Auto Storage`.
You can use this, or you can [set up a storage account independently](https://docs.microsoft.com/azure/storage/storage-create-storage-account).

@@ -19,7 +19,7 @@ To install the plug-in:

1. Run Maya
2. Open Window > Settings/Preferences > Plug-in Manager
3. Click 'Browse'
3. Click `Browse`
5. Navigate to and select azure_batch_maya/plug-in/AzureBatch.py.
6. Once activated, the plug-in shelf will have appeared in the UI.

@@ -27,6 +27,8 @@ The first time the plug-in is loaded, you will be prompted to agree to some term
The downloading and installing of the Python dependencies may take a few minutes, after which you will need to close and reopen Maya to
ensure the updated dependencies are loaded correctly.

![](./docs/images/install_dependencies.png)


## Authentication

@@ -34,14 +36,23 @@ Before using the plug-in, it will need to be authenticated using your Azure Batc
In order to retrieve this information:

1. Open the Azure management portal (ms.portal.azure.com).
2. Select Azure Batch Accounts in the left-hand menu. This can be found under "More Services" in the "Compute" category.
3. Select your account in the list. Copy and paste the account URL into "Service" field of the plug-in UI. Paste the account name into the "Batch Account" field.
4. In the portal, select "Keys" on the left-hand menu. Copy and paste one of the access keys into the "Batch Key" field in the plug-in.
5. Return to the management portal home, and select Storage Accounts from the left-hand menu. This can be found under "More Services" in the "Storage" category.
6. Select your account from the list. Copy and paste the account name into the "Storage Account" field.
7. In the portal, select "Access Keys" on the left-hand menu. Copy and paste one of the access keys into the "Storage Key" field.
8. Click "Authenticate".
2. Select Azure Batch Accounts in the left-hand menu. This can be found under `More Services` in the `Compute` category.
3. Select your account in the list. Copy and paste the account URL into the `Service` field of the plug-in UI. Paste the account name into the `Batch Account` field.
4. In the portal, select `Keys` on the left-hand menu. Copy and paste one of the access keys into the `Batch Key` field in the plug-in.
5. Return to the management portal home, and select Storage Accounts from the left-hand menu. This can be found under `More Services` in the `Storage` category.
6. Select your account from the list. Copy and paste the account name into the `Storage Account` field.
7. In the portal, select `Access Keys` on the left-hand menu. Copy and paste one of the access keys into the `Storage Key` field.
8. Click `Authenticate`.

![](./docs/images/authentication.png)

## Using the Azure Batch plug-in

- [Job configuration](./docs/submitting_jobs.md#job-configuration)
- [Managing assets](./docs/submitting_jobs.md#managing-assets)
- [Environment configuration](./docs/submitting_jobs.md#environment-configuration)
- [Managing Pools](./docs/submitting_jobs.md#managing-pools)
- [Monitoring jobs](./docs/submitting_jobs.md#monitoring-jobs)

## License

@@ -186,7 +186,10 @@ class AzureBatchAssets(object):
            handle.write("loadPlugin \"{}\";\n".format(plugin))
            handle.write("dirmap -en true;\n")
            for local, remote in pathmap.items():
                full_remote_path = "X:\\\\" + remote(os_flavor)
                if os_flavor == 'Windows':
                    full_remote_path = "X:\\\\" + remote(os_flavor)
                else:
                    full_remote_path = "/X/" + remote(os_flavor)
                parsed_local = local.replace('\\', '\\\\')
                handle.write("dirmap -m \"{}\" \"{}\";\n".format(parsed_local, full_remote_path))
            handle.write("}")
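The hunk above writes a MEL pre-render script that remaps local asset paths onto the mounted render drive (`X:` on Windows, `/X` on Linux) via Maya's `dirmap` command. A standalone sketch of that logic, assuming a `pathmap` of local path to a callable returning the remote relative path (the function name and `pathmap` shape are illustrative, not the plug-in's actual API):

```python
def write_dirmap_script(handle, pathmap, os_flavor):
    """Emit MEL dirmap commands mapping local asset paths to the
    mounted remote location: 'X:' drive on Windows, '/X' on Linux."""
    handle.write("dirmap -en true;\n")
    for local, remote in pathmap.items():
        if os_flavor == 'Windows':
            # Double-escaped so the backslashes survive into the MEL string.
            full_remote_path = "X:\\\\" + remote(os_flavor)
        else:
            full_remote_path = "/X/" + remote(os_flavor)
        # Backslashes in the local path must also be escaped for MEL.
        parsed_local = local.replace('\\', '\\\\')
        handle.write("dirmap -m \"{}\" \"{}\";\n".format(parsed_local, full_remote_path))
```

Running Maya with this script as a pre-render hook makes references to the local paths resolve against the mounted assets on the render node.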
@@ -376,13 +379,14 @@ class Assets(object):
        ref_file = os.path.basename(ref_path)
        ref_dir = os.path.dirname(ref_path)
        pattern = ('*' in ref_path or '[0-9]' in ref_path)
        self._log.debug("Checking pattern asset: {0}".format(pattern))
        self._log.debug("Searching for asset path: {}".format(ref_path))
        self._log.debug("Checking pattern asset: {}".format(pattern))
        if pattern:
            path_matches = glob.glob(ref_path)
            if path_matches:
                self.pathmaps[ref_dir] = utils.get_remote_file_path(ref_path)
                self._log.debug("Mapping this path {} to {}".format(ref_path, self.pathmaps[ref_dir]))
                self._log.debug("Found matches: {0}".format(path_matches))
                self._log.debug("Found matches: {}".format(path_matches))
                return path_matches
        elif os.path.exists(ref_path):
            self.pathmaps[ref_dir] = utils.get_remote_file_path(ref_path)
@@ -396,7 +400,7 @@ class Assets(object):
            if path_matches:
                self.pathmaps[ref_dir] = utils.get_remote_file_path(alt_path)
                self._log.debug("Mapping this path {} to {}".format(ref_path, self.pathmaps[ref_dir]))
                self._log.debug("Found matches: {0}".format(path_matches))
                self._log.debug("Found matches: {}".format(path_matches))
                return path_matches
        elif os.path.exists(alt_path):
            self.pathmaps[ref_dir] = utils.get_remote_file_path(alt_path)
@@ -408,7 +412,7 @@ class Assets(object):
            alt_path = os.path.join(_root, ref_file)
            if pattern:
                path_matches = glob.glob(alt_path)
                self._log.debug("Found matches: {0}".format(path_matches))
                self._log.debug("Found matches: {}".format(path_matches))
                if path_matches:
                    self.pathmaps[ref_dir] = utils.get_remote_file_path(alt_path)
                    self._log.debug("Mapping this path {} to {}".format(ref_path, self.pathmaps[ref_dir]))
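The three hunks above all follow the same search strategy: treat references containing wildcards as glob patterns, try the original location first, then retry the bare filename under each user-supplied search root. A minimal, self-contained sketch of that strategy (`resolve_reference` and its signature are assumptions for illustration, not the plug-in's API):

```python
import glob
import os


def resolve_reference(ref_path, search_paths):
    """Resolve an asset reference, which may contain glob wildcards.

    Tries the path as given, then the basename under each search root.
    Returns a list of matching paths, empty if nothing was found.
    """
    pattern = '*' in ref_path or '[0-9]' in ref_path
    candidates = [ref_path] + [
        os.path.join(root, os.path.basename(ref_path)) for root in search_paths]
    for candidate in candidates:
        if pattern:
            matches = glob.glob(candidate)
            if matches:
                return matches
        elif os.path.exists(candidate):
            return [candidate]
    return []
```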
@@ -193,7 +193,7 @@ def _add_cmd_prefix(task, os_flavor):
        # TODO: Do we need windows shell escaping?
        task.command_line = 'cmd /c "{}"'.format(task.command_line)  #.replace('\"','\\\\\"')
    elif os_flavor == pool_utils.PoolOperatingSystemFlavor.LINUX:
        task.command_line = '/bin/bash -c {}'.format(task.command_line)
        task.command_line = '/bin/bash -c \'set -e; set -o pipefail; {}; wait\''.format(task.command_line)
    else:
        raise ValueError("Unknown pool OS flavor: " + os_flavor)
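The Linux change here wraps the task in `set -e; set -o pipefail` so a failure in any stage of a piped command fails the whole task, rather than being masked by the exit status of the pipeline's last stage. A sketch of the wrapping, using plain strings where the real code dispatches on `pool_utils.PoolOperatingSystemFlavor`:

```python
def add_cmd_prefix(command_line, os_flavor):
    """Wrap a task command line for the target OS shell.

    On Linux, 'set -e' aborts on the first failing command and
    'set -o pipefail' propagates failures from any stage of a pipeline.
    """
    if os_flavor == 'Windows':
        return 'cmd /c "{}"'.format(command_line)
    elif os_flavor == 'Linux':
        return "/bin/bash -c 'set -e; set -o pipefail; {}; wait'".format(command_line)
    else:
        raise ValueError("Unknown pool OS flavor: " + os_flavor)
```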
@@ -584,10 +584,13 @@ def _parse_template(template_str, template_obj, parameters):
    current_index = string_end + 1
    updated_json += template_str[current_index:]
    try:
        updated_json = updated_json.encode('string_escape').replace('\\\\','\\')
    except LookupError:
        pass
    return json.loads(updated_json)
        return json.loads(updated_json)
    except ValueError as exp:
        try:
            return json.loads(updated_json.encode('string_escape').replace('\\\\','\\'))
        except LookupError:
            raise ValueError("Unable to load JSON template {}, error: {}".format(
                updated_json, str(exp)))


def _process_resource_files(request, fileutils):
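Worth noting: the `string_escape` codec exists only on Python 2; on Python 3 `str.encode('string_escape')` raises `LookupError`, which is why both the old and new versions guard with `except LookupError`. A simplified sketch of the new retry logic (the function name is illustrative):

```python
import json


def load_template_json(updated_json):
    """Parse template JSON; on failure, retry after unescaping backslashes.

    The retry mirrors the Python 2 'string_escape' fallback; the codec does
    not exist on Python 3, where the LookupError path re-raises a ValueError
    carrying the original parse error.
    """
    try:
        return json.loads(updated_json)
    except ValueError as exp:
        try:
            cleaned = updated_json.encode('string_escape').replace('\\\\', '\\')
        except LookupError:
            raise ValueError("Unable to load JSON template {}, error: {}".format(
                updated_json, str(exp)))
        return json.loads(cleaned)
```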
@@ -123,7 +123,6 @@ class AzureBatchEnvironment(object):
        license_servers.extend([v['id'] for v in LICENSES if v['label'] == name])
        return license_servers

    def set_image(self, image):
        self._session.store_image(image)
@@ -134,6 +133,17 @@ class AzureBatchEnvironment(object):
        selected_image = self.ui.get_image()
        return dict(MAYA_IMAGES[selected_image])

    def get_image_label(self, image_ref):
        """Retrieve the image label from the data in a pool image
        reference object.
        """
        pool_image = [k for k, v in MAYA_IMAGES.items() if v['offer'] == image_ref.offer]
        if pool_image:
            return pool_image[0]
        else:
            self._log.debug("Pool using unknown image reference: {}".format(image_ref.offer))
            return ""

    def get_vm_sku(self):
        return self.ui.get_sku()
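`get_image_label` is a reverse lookup from a pool's image reference back to the UI label keyed in `MAYA_IMAGES`. A sketch with an illustrative catalogue (the labels and offer names below are placeholders, not the plug-in's real data):

```python
# Hypothetical image catalogue mapping UI labels to marketplace image data.
MAYA_IMAGES = {
    'Batch Windows Preview': {'offer': 'rendering-windows2016'},
    'Batch CentOS Preview': {'offer': 'rendering-centos73'},
}


def get_image_label(offer):
    """Return the UI label for a pool image offer, or '' if unknown."""
    matches = [label for label, data in MAYA_IMAGES.items() if data['offer'] == offer]
    return matches[0] if matches else ''
```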
@@ -125,6 +125,7 @@ class AzureBatchPools(object):
            self.selected_pool.set_created(pool.creation_time)
            self.selected_pool.set_licenses(pool.application_licenses)
            self.selected_pool.set_vm_sku(pool.vm_size)
            self.selected_pool.set_image(self.environment.get_image_label(pool.virtual_machine_configuration.image_reference))
            maya.refresh()
        except Exception as exp:
            self._log.warning(str(exp))
@@ -1,5 +1,3 @@
#!/usr/bin/env python

# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
@@ -44,7 +42,10 @@ if __name__ == '__main__':

    task_id = os.environ['AZ_BATCH_TASK_ID']
    output_file = os.path.join(thumb_dir, task_id + '_thumb.png')
    commands = ['magick', 'convert', input_file, '-thumbnail', '200x150', output_file]
    commands = ['convert', input_file, '-thumbnail', '200x150', output_file]
    if os.name == 'nt':
        commands.insert(0, 'magick')
    print("Running imagemagick: {}".format(commands))
    conversion = subprocess.Popen(commands, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    conversion.wait()
    if conversion.returncode != 0:
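The change above is what fixes thumbnail generation on Linux: the Linux rendering images expose ImageMagick as a bare `convert` binary, while on Windows the tools sit behind the `magick` launcher. A sketch of just the command construction, kept separate from the `subprocess` call so it can be checked without ImageMagick installed (the helper name is illustrative):

```python
import os


def build_thumbnail_command(input_file, output_file, windows=None):
    """Build the ImageMagick argv for a 200x150 thumbnail.

    On Windows ImageMagick 7 is invoked via the 'magick' launcher;
    on Linux 'convert' is called directly.
    """
    if windows is None:
        windows = os.name == 'nt'
    commands = ['convert', input_file, '-thumbnail', '200x150', output_file]
    if windows:
        commands.insert(0, 'magick')
    return commands
```

The resulting list can be passed straight to `subprocess.Popen`, as in the hunk above.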
@@ -230,6 +230,12 @@ class AzureBatchPoolInfo(object):
        """
        maya.text(self._vm_sku, edit=True, label=" {0}".format(value))

    def set_image(self, value):
        """Set the image name running on the VM.
        :param str value: The VM image.
        """
        maya.text(self._image, edit=True, label=" {0}".format(value))

    def on_expand(self):
        """Command for the expanding of the pool reference frame layout.
        Loads latest details for the specified pool and populates UI.
@@ -240,6 +246,7 @@ class AzureBatchPoolInfo(object):
        self._created = self.display_info("Created: ")
        self._state = self.display_info("State: ")
        self._tasks = self.display_info("Tasks per VM: ")
        self._image = self.display_info("Image: ")
        self._allocation = self.display_info("Allocation State: ")
        self._licenses = self.display_info("Licenses: ")
        self._vm_sku = self.display_info("VM type: ")
@@ -97,15 +97,25 @@
      ],
      "repeatTask": {
        "displayName": "Frame {0}",
        "commandLine": "mkdir /X;mount -rbind $AZ_BATCH_JOB_PREP_WORKING_DIR/assets /X;render -renderer [parameters('renderer')] -verb -preRender renderPrep -rd \"$AZ_BATCH_TASK_WORKING_DIR/images\" -s {0} -e {0} \"[parameters('sceneFile')]\";err=$?;$AZ_BATCH_JOB_PREP_WORKING_DIR/thumbnail.py $err;exit $err",
        "userIdentity": {
          "autoUser": {
            "scope": "task",
            "elevationLevel": "admin"
          }
        },
        "commandLine": "sudo mkdir -m a=rwx -p \"/X\";sudo mount --rbind $AZ_BATCH_JOB_PREP_WORKING_DIR/assets /X;Render -renderer [parameters('renderer')] -verb -preRender renderPrep -rd \"$AZ_BATCH_TASK_WORKING_DIR/images\" -s {0} -e {0} \"[parameters('sceneFile')]\";err=$?;python /mnt/resource/batch/tasks/workitems/[parameters('outputs')]/job-1/jobpreparation/wd/thumbnail.py $err;sudo umount \"/X\";exit $err",
        "environmentSettings": [
          {
            "name": "MAYA_SCRIPT_PATH",
            "value": "$AZ_BATCH_JOB_PREP_WORKING_DIR/scripts"
            "value": "/mnt/resource/batch/tasks/workitems/[parameters('outputs')]/job-1/jobpreparation/wd/scripts"
          },
          {
            "name": "FLEXLM_TIMEOUT",
            "value": "5000000"
          },
          {
            "name": "MAYA_RENDER_DESC_PATH",
            "value": "/opt/solidangle/mtoa/2017/"
          }
        ],
        "outputFiles": [
Binary image diffs: 8 new images added under docs/images (13 KiB, 14 KiB, 15 KiB, 38 KiB, 37 KiB, 20 KiB, 15 KiB, 19 KiB).
@@ -0,0 +1,84 @@
# Submitting render jobs using the Azure Batch plug-in

## Job configuration

The `Submit` tab of the plug-in is where you will configure the basic parameters for the job.
The plug-in will detect which rendering engine you're currently using and display the applicable settings.
The plug-in will also warn you if your selected render engine is not supported.

If you load a new scene while the plug-in is open, click the `Refresh` button to make sure the settings are updated.

### Scene parameters

The basic parameters for your job (start frame, end frame, output prefix etc.) will be automatically set based on the render settings of the current scene.
You can override these using the submission UI - any changes you make here will not be persisted back to the scene file render settings, so you can make changes
on a job-by-job basis without needing to re-upload the scene file.

### Pool options

You have the option to determine which VM pool the render will run on. You can pick an existing VM pool, or create a new pool.
When creating a new pool, you can choose to make an `auto pool`, which will run exclusively for this job, and will automatically shut down
on completion of the render. Alternatively you can create a new persistent pool that will remain available after the render has completed and can
be used for any number of jobs. For more information on managing persistent pools see [managing pools](#managing-pools).

![](./images/submit.png)


## Managing assets

When you load the plug-in it will scan the scene file for any external file references. These references are then displayed in the `Assets` tab.
If a referenced path cannot be resolved, the plug-in will attempt to locate the file in a few default locations (for example the scene file location, current
project sourceimages directory and the current working directory).
If the asset still cannot be located, it will be listed with a warning icon.

![](./images/missing_assets.png)

If you know the location of any unresolved file references, you can click the warning icon and you will be prompted to add a search path. The plug-in will then
use this search path to attempt to resolve any missing assets. You can add any number of additional search paths.

![](./images/found_assets.png)

If you are aware of asset references that the plug-in has not detected, additional files or directories of files can be added using the `Add Files` and
`Add Directory` buttons.
If you load a new scene while the plug-in is open, be sure to click `Refresh` to make sure the scene is re-parsed for references.

### Uploading assets to a project

When you submit a render, all referenced files (as displayed in the `Assets` tab) will be automatically uploaded. You also have the option to upload the asset
files independently of a job submission, using the `Upload` button on the `Assets` tab.
Assets will be uploaded to a project container in Azure storage, where their file structure will be maintained. This project will be named after the current
Maya project by default. An asset project can be referenced by any number of render jobs, and all assets that have been uploaded to a project will be available to a job that
references that project (regardless of whether a particular asset was explicitly referenced by that scene).
To change the asset project referenced by your next job, simply change the name in the `Project` field in the `Assets` tab.
If there are referenced files that you wish to exclude from uploading, simply deselect them using the green button beside the listing.


## Environment configuration

The `Env` tab will allow you to configure the VMs to be deployed when you create a pool. You can select the image to be run (Windows or CentOS) and the VM type. You can see a list
of VM types and their descriptions in [this article](https://azure.microsoft.com/pricing/details/virtual-machines/series/).
You can also select whether you wish to have the pool licensed for Maya and Arnold. Maya licensing will be enabled by default, whereas Arnold licensing will be enabled if Arnold
is detected as the active render engine (you may need to click `Refresh` to get the latest scene settings).
If you wish to render using your own license server, you can deselect the license options and configure your license endpoint by adding the applicable environment variables in the table.
Note that you will be billed for the licensing of applications for the entire up-time of VMs running in the pool, regardless of whether they are being used for rendering.

![](./images/environment.png)

## Managing Pools

Any persistent pools that have been deployed previously for render jobs from the plug-in can be managed in the `Pools` tab.
Selecting a listed pool will display the current state of the pool, including how many nodes are running, the selected VM image, the VM type and
the licenses that are deployed to that pool.

You can also delete the pool and resize the number of VMs in the pool. A pool can be left at 0 nodes to avoid incurring costs between workloads.

![](./images/pools.png)

## Monitoring jobs

Once you have submitted a job, you can monitor it both via the [Azure Management Portal](http://ms.portal.azure.com/) and the `Jobs` tab of the plug-in.
Selecting a listed job will display the current state of the job. You can also use this tab to cancel and delete jobs, as well as download the outputs and rendering logs.
To download outputs, use the `Outputs` field to set the desired destination directory, and click the center button (with the gear icon) to start a background process that will
watch the job and download outputs as it progresses. You can close Maya without disrupting the download.

![](./images/jobs.png)
@@ -0,0 +1,41 @@
{
  "templateMetadata": {
    "description": "A test application template that makes use of multiple parameters after properly declaring them."
  },
  "jobManagerTask": {
    "id":"mytask1",
    "commandLine":"myprogram.exe",
    "resourceFiles": [ {
      "blobSource":"http://mystorage1.blob.core.windows.net/scripts/myprogram.exe?st=2013-08-09T08%3a49%3a37.0000000Z&se=2013-08-10T08%3a49%3a37.0000000Z&sr=c&sp=d&si=YWJjZGTVMZw%3d%3d&sig= %2bSzBm0wi8xECuGkKw97wnkSZ%2f62sxU%2b6Hq6a7qojIVE%3d",
      "filePath":"myprogram.exe"
    },
    {
      "blobSource":"http://mystorage1.blob.core.windows.net/scripts/test.txt?st=2013-08-09T08%3a49%3a37.0000000Z&se=2013-08-10T08%3a49%3a37.0000000Z&sr=c&sp=d&si=YWJjZGTVMZw%3d%3d&sig= %2bSzBm0wi8xECuGkKw97wnkSZ%2f62sxU%2b6Hq6a7qojIVE%3d",
      "filePath":"[parameters('blobName')]"
    } ],
    "environmentSettings": [ {
      "name":"myvariable",
      "value":"myvalue"
    } ],
    "constraints": {
      "maxWallClockTime":"PT1H",
      "maxTaskRetryCount":0,
      "retentionTime":"PT1H"
    },
    "killJobOnCompletion":false,
    "runElevated":false,
    "runExclusive":true
  },
  "metadata": [ {
    "name":"myproperty",
    "value":"[parameters('keyValue')]"
  } ],
  "parameters": {
    "blobName" : {
      "type": "string"
    },
    "keyValue" : {
      "type": "string"
    }
  }
}
@@ -0,0 +1,10 @@
{
  "templateInfo": {
    "description": "A test application template that specifies the prohibited property 'applicationTemplate'."
  },
  "applicationTemplateInfo": {
    "filePath" : "sample\\path"
  }
}
@@ -0,0 +1,8 @@
{
  "templateInfo": {
    "description": "A test application template that specifies prohibited property 'id'."
  },
  "id" : "jobid"
}
@@ -0,0 +1,10 @@
{
  "templateInfo": {
    "description": "A test application template that specifies prohibited property 'poolInfo'."
  },
  "poolInfo": {
    "poolId" : "swimming"
  }
}
@@ -0,0 +1,9 @@
{
  "templateInfo": {
    "description": "A test application template that specifies the prohibited property 'priority'."
  },
  "displayName": "Static Application Template",
  "priority": 100
}
@@ -0,0 +1,33 @@
{
  "templateMetadata": {
    "description": "A test application template that has no parameters and has exactly the same result every time."
  },
  "jobManagerTask": {
    "id": "jobManager",
    "displayName": "jobManagerDisplay",
    "commandLine": "cmd /c dir /s",
    "resourceFiles": [
      {
        "blobSource": "https://testacct.blob.core.windows.net/",
        "filePath": "filePath"
      }
    ],
    "environmentSettings": [
      {
        "name": "name1",
        "value": "value1"
      },
      {
        "name": "name2",
        "value": "value2"
      }
    ],
    "constraints": {
      "maxWallClockTime": "PT1H"
    },
    "killJobOnCompletion": false,
    "runElevated": false
  }
}
@@ -0,0 +1,9 @@
{
  "templateInfo": {
    "description": "A test application template that specifies the unsupported properties 'fluxCapacitorModel' and 'vehicleMarque'."
  },
  "fluxCapacitorModel": "DocBrown55",
  "vehicleMarque": "deLorean"
}
@@ -0,0 +1,41 @@
{
  "templateMetadata": {
    "description": "A test application template that declares a property with no specified type."
  },
  "jobManagerTask": {
    "id":"mytask1",
    "commandLine":"myprogram.exe",
    "resourceFiles": [ {
      "blobSource":"http://mystorage1.blob.core.windows.net/scripts/myprogram.exe?st=2013-08-09T08%3a49%3a37.0000000Z&se=2013-08-10T08%3a49%3a37.0000000Z&sr=c&sp=d&si=YWJjZGTVMZw%3d%3d&sig= %2bSzBm0wi8xECuGkKw97wnkSZ%2f62sxU%2b6Hq6a7qojIVE%3d",
      "filePath":"myprogram.exe"
    },
    {
      "blobSource":"http://mystorage1.blob.core.windows.net/scripts/test.txt?st=2013-08-09T08%3a49%3a37.0000000Z&se=2013-08-10T08%3a49%3a37.0000000Z&sr=c&sp=d&si=YWJjZGTVMZw%3d%3d&sig= %2bSzBm0wi8xECuGkKw97wnkSZ%2f62sxU%2b6Hq6a7qojIVE%3d",
      "filePath":"[parameters('blobName')]"
    } ],
    "environmentSettings": [ {
      "name":"myvariable",
      "value":"myvalue"
    } ],
    "constraints": {
      "maxWallClockTime":"PT1H",
      "maxTaskRetryCount":0,
      "retentionTime":"PT1H"
    },
    "killJobOnCompletion":false,
    "runElevated":false,
    "runExclusive":true
  },
  "metadata": [ {
    "name":"myproperty",
    "value":"[parameters('keyValue')]"
  } ],
  "parameters": {
    "blobName" : {
      "defaultValue": "name"
    },
    "keyValue" : {
      "type": "string"
    }
  }
}
@@ -0,0 +1,17 @@
{
  "jobId": {
    "value": "helloworld"
  },
  "poolId": {
    "value": "xplatTestPool"
  },
  "outputFileStorageUrl": {
    "value": "<blob url with sas>"
  },
  "taskStart": {
    "value": 1
  },
  "taskEnd": {
    "value": 3
  }
}
@@ -0,0 +1,104 @@
{
  "parameters": {
    "inputFileGroup": {
      "type": "string",
      "defaultValue": "convert_data",
      "metadata": {
        "description": "The auto-storage group where the input data is stored"
      }
    },
    "outputFileStorageUrl": {
      "type": "string",
      "metadata": {
        "description": "The SAS URL for a container where outputs will be stored"
      }
    },
    "inputType": {
      "type": "string",
      "defaultValue": "wav",
      "metadata": {
        "description": "The extension of the input data"
      }
    },
    "poolId": {
      "type": "string",
      "defaultValue": "ffmpeg-pool",
      "metadata": {
        "description": "The id of Azure Batch pool which runs the job"
      }
    },
    "jobId": {
      "type": "string",
      "metadata": {
        "description": "The id of Azure Batch job"
      }
    },
    "taskStart": {
      "type": "int",
      "metadata": {
        "description": "The sweep start parameter"
      }
    },
    "taskEnd": {
      "type": "int",
      "metadata": {
        "description": "The sweep end parameter"
      }
    }
  },
  "job": {
    "type": "Microsoft.Batch/batchAccounts/jobs",
    "apiVersion": "2016-12-01",
    "properties": {
      "id": "[parameters('jobId')]",
      "constraints": {
        "maxWallClockTime": "PT5H",
        "maxTaskRetryCount": 1
      },
      "poolInfo": {
        "poolId": "[parameters('poolId')]"
      },
      "taskFactory": {
        "type": "parametricSweep",
        "parameterSets": [
          {
            "start": "[parameters('taskStart')]",
            "end": "[parameters('taskEnd')]",
            "step": 1
          }
        ],
        "repeatTask": {
          "commandLine": "ffmpeg -y -i sample{0}.[parameters('inputType')] -acodec libmp3lame output.mp3",
          "resourceFiles": [
            {
              "source": {
                "fileGroup": "[parameters('inputFileGroup')]",
                "prefix": "sample{0}.[parameters('inputType')]"
              }
            }
          ],
          "outputFiles": [
            {
              "filePattern": "output.mp3",
              "destination": {
                "container": {
                  "path": "audio{0}.mp3",
                  "containerSas": "[parameters('outputFileStorageUrl')]"
                }
              },
              "uploadDetails": {
                "taskStatus": "TaskSuccess"
              }
            }
          ],
          "packageReferences": [
            {
              "type": "aptPackage",
              "id": "ffmpeg"
            }
          ]
        }
      }
    }
  }
}
@@ -0,0 +1,40 @@
{
  "parameters": {
    "jobId": {
      "type": "string",
      "defaultValue": "ffmpegpool",
      "metadata": {
        "description": "The name of the Azure Batch job"
      }
    },
    "poolId": {
      "type": "string",
      "metadata": {
        "description": "The name of the Azure Batch pool which runs the job"
      }
    }
  },
  "job": {
    "type": "Microsoft.Batch/batchAccounts/jobs",
    "apiVersion": "2016-12-01",
    "properties": {
      "id": "[parameters('jobId')]",
      "poolInfo": {
        "poolId": "[parameters('poolId')]"
      },
      "taskFactory": {
        "type": "taskCollection",
        "tasks": [
          {
            "id" : "mytask1",
            "commandLine": "cmd /c echo hello1"
          },
          {
            "id" : "mytask2",
            "commandLine": "cmd /c echo hello2"
          }
        ]
      }
    }
  }
}
@@ -0,0 +1,5 @@
{
  "poolName": {
    "value": "testpool1"
  }
}
@@ -0,0 +1,57 @@
{
  "parameters": {
    "vmSize": {
      "type": "string",
      "metadata": {
        "description": "The size of the virtual machines that runs the application"
      },
      "defaultValue": "STANDARD_D1",
      "allowedValues": [
        "STANDARD_A1",
        "STANDARD_A2",
        "STANDARD_A3",
        "STANDARD_A4",
        "STANDARD_D1",
        "STANDARD_D2",
        "STANDARD_D3",
        "STANDARD_D4"
      ]
    },
    "vmCount": {
      "type": "int",
      "defaultValue": 3,
      "metadata": {
        "description": "The number of the virtual machines"
      }
    },
    "poolName": {
      "type": "string",
      "defaultValue": "ffmpegpool",
      "metadata": {
        "description": "The name of Azure Batch pool"
      }
    }
  },
  "variables": {
    "osType": {
      "publisher": "Canonical",
      "offer": "UbuntuServer",
      "sku": "15.10",
      "version": "latest"
    }
  },
  "pool": {
    "type": "Microsoft.Batch/batchAccounts/pools",
    "apiVersion": "2016-12-01",
    "properties": {
      "id": "[parameters('poolName')]",
      "virtualMachineConfiguration": {
        "imageReference": "[variables('osType')]",
        "nodeAgentSKUId": "batch.node.debian 8"
      },
      "vmSize": "[parameters('vmSize')]",
      "vmCount": "[parameters('vmCount')]",
      "enableAutoScale": false
    }
  }
}
@@ -0,0 +1 @@
1
@@ -0,0 +1,93 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------

import os
import json
import time
import uuid

from environment import MAYA_IMAGES
import utils

import batch_extensions as batch
from batch_extensions import models
from batch_extensions.batch_auth import SharedKeyCredentials
from azure.storage.blob.blockblobservice import BlockBlobService

STORAGE_ACCOUNT = os.environ['AZURE_STORAGE_ACCOUNT']
STORAGE_KEY = os.environ['AZURE_STORAGE_ACCESS_KEY']
BATCH_ENDPOINT = os.environ['AZURE_BATCH_ENDPOINT']
BATCH_ACCOUNT = os.environ['AZURE_BATCH_ACCOUNT']
BATCH_KEY = os.environ['AZURE_BATCH_ACCESS_KEY']
SAMPLE_DIR = os.path.join(os.path.dirname(__file__), 'test_scene')
TEMPLATE_DIR = os.path.abspath('azure_batch_maya/templates')
SCRIPT_DIR = os.path.abspath('azure_batch_maya/scripts/tools')
POOL_ID = ""  # The OS of the pool determines whether the job runs with the Linux or Windows templates.


def os_flavor(pool_image):
    windows_offers = [value['offer'] for value in MAYA_IMAGES.values() if 'windows' in value['node_sku_id']]
    linux_offers = [value['offer'] for value in MAYA_IMAGES.values() if value['offer'] not in windows_offers]
    if pool_image.offer in windows_offers:
        return 'Windows'
    elif pool_image.offer in linux_offers:
        return 'Linux'
    else:
        raise ValueError('Selected pool is not using a valid Maya image.')


if __name__ == '__main__':
    # Set up the storage and batch clients
    storage_client = BlockBlobService(STORAGE_ACCOUNT, STORAGE_KEY, endpoint_suffix="core.windows.net")
    credentials = SharedKeyCredentials(BATCH_ACCOUNT, BATCH_KEY)
    client = batch.BatchExtensionsClient(credentials, base_url=BATCH_ENDPOINT, storage_client=storage_client)

    # Set up the test render input data
    scene_file = 'juggernaut.ma'
    maya_data = 'maya-data-{}'.format(uuid.uuid4())
    client.file.upload(SAMPLE_DIR, maya_data, flatten=True)
    client.file.upload(os.path.join(SCRIPT_DIR, 'generate_thumbnails.py'), maya_data, flatten=True)

    # Reference the existing pool; its image determines the target OS flavor
    pool_ref = client.pool.get(POOL_ID)
    pool_os = os_flavor(pool_ref.virtual_machine_configuration.image_reference)
    pool_info = {'poolId': POOL_ID}

    # Create a job with an application template reference
    job_id = 'maya_test_{}_{}'.format(pool_os.lower(), uuid.uuid4())
    batch_parameters = {'id': job_id}
    batch_parameters['displayName'] = "Maya Integration Test using {}".format(pool_os)
    batch_parameters['metadata'] = [{"name": "JobType", "value": "Maya"}]
    template_file = os.path.join(TEMPLATE_DIR, 'arnold-basic-{}.json'.format(pool_os.lower()))
    batch_parameters['applicationTemplateInfo'] = {'filePath': template_file}
    application_params = {}
    batch_parameters['applicationTemplateInfo']['parameters'] = application_params

    application_params['outputs'] = job_id
    application_params['sceneFile'] = utils.format_scene_path(scene_file, pool_os)
    application_params['projectData'] = maya_data
    application_params['assetScript'] = client.file.generate_sas_url(maya_data, 'asset_map_{}.mel'.format(pool_os.lower()))
    application_params['thumbScript'] = client.file.generate_sas_url(maya_data, 'generate_thumbnails.py')
    application_params['frameStart'] = 1
    application_params['frameEnd'] = 3
    application_params['frameStep'] = 1
    application_params['renderer'] = 'arnold'

    batch_parameters['poolInfo'] = pool_info
    new_job = client.job.jobparameter_from_json(batch_parameters)
    client.job.add(new_job)

    # Poll until the job completes, then download the outputs and clean up
    while True:
        time.sleep(15)
        job = client.job.get(job_id)
        print("Watching job: {}".format(job.state))
        if job.state == models.JobState.completed:
            client.file.download(SAMPLE_DIR, job_id)
            break

    client.job.delete(job_id)
    client.file.delete_group(maya_data)
    client.file.delete_group(job_id)
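The script's `os_flavor()` helper classifies a pool image by whether its offer belongs to a Windows or Linux Maya image. The logic can be exercised standalone; the `MAYA_IMAGES` entries below are illustrative stand-ins for the real mapping defined in `environment.py`:

```python
# Illustrative subset of the MAYA_IMAGES mapping (real values live in environment.py).
MAYA_IMAGES = {
    'Windows 2016': {'offer': 'rendering-windows2016', 'node_sku_id': 'batch.node.windows amd64'},
    'CentOS 7.3':   {'offer': 'rendering-centos73',    'node_sku_id': 'batch.node.centos 7'},
}

def os_flavor(offer):
    """Classify an image offer as Windows or Linux, mirroring the script's helper."""
    windows_offers = [v['offer'] for v in MAYA_IMAGES.values() if 'windows' in v['node_sku_id']]
    linux_offers = [v['offer'] for v in MAYA_IMAGES.values() if v['offer'] not in windows_offers]
    if offer in windows_offers:
        return 'Windows'
    if offer in linux_offers:
        return 'Linux'
    raise ValueError('Selected pool is not using a valid Maya image.')
```

The Windows check keys off the node agent SKU rather than the offer name itself, so any offer not backed by a Windows node agent falls through to the Linux list.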
@@ -212,6 +212,7 @@ class AzureTestBatchPools(unittest.TestCase):
    def test_pools_create(self):
        pool_obj = None
        def call(func, new_pool):
            global pool_obj
            self.assertTrue(callable(func))
            pool_obj = new_pool
            self.assertEqual(new_pool.target_dedicated_nodes, 5)

@@ -226,7 +227,7 @@ class AzureTestBatchPools(unittest.TestCase):
        self.mock_self.environment.get_image.return_value = {
            'publisher': 'foo', 'sku': 'bar', 'offer': 'baz', 'node_sku_id': 'sku_id'}
        AzureBatchPools.create_pool(self.mock_self, 5, "test job")
        self.mock_self.batch.pool.add.assert_called_with(pool_obj)
        self.mock_self.batch.pool.add.assert_called_with(mock.ANY)

    @mock.patch("pools.maya")
    def test_pools_resize(self, mock_maya):
@@ -0,0 +1,8 @@
global proc renderPrep()
{
    dirmap -en true;
    dirmap -m "C:\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\sourceimages" "/X/C/Users/antisch/Documents/maya/projects/juggernaut/sourceimages";
    dirmap -m "C:\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\scenes" "/X/C/Users/antisch/Documents/maya/projects/juggernaut/scenes";
    dirmap -m "H:/hsm/vault/hdr" "/X/C/Users/antisch/Documents/maya/projects/juggernaut/sourceimages";
    dirmap -m "P:/hsm/asset/character/juggernaut/lookdev/sourceimages" "/X/C/Users/antisch/Documents/maya/projects/juggernaut/sourceimages";
}
@@ -0,0 +1,8 @@
global proc renderPrep()
{
    dirmap -en true;
    dirmap -m "C:\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\sourceimages" "X:\\C\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\sourceimages";
    dirmap -m "C:\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\scenes" "X:\\C\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\scenes";
    dirmap -m "H:/hsm/vault/hdr" "X:\\C\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\sourceimages";
    dirmap -m "P:/hsm/asset/character/juggernaut/lookdev/sourceimages" "X:\\C\\Users\\antisch\\Documents\\maya\\projects\\juggernaut\\sourceimages";
}
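The two `renderPrep()` asset maps above differ only in the style of the remapped target paths (POSIX-style for the Linux pool, escaped drive paths for Windows). A generator along these lines could emit either variant from a single path mapping; `generate_asset_map` is a hypothetical sketch, not the plug-in's actual implementation:

```python
def generate_asset_map(mappings, os_flavor):
    """Emit a MEL renderPrep() proc that remaps asset paths via dirmap.

    mappings:  dict of original path -> remapped path (POSIX-style).
    os_flavor: 'Windows' or 'Linux'; Windows targets get MEL-escaped backslashes.
    """
    lines = ["global proc renderPrep()", "{", "    dirmap -en true;"]
    for src, dest in mappings.items():
        if os_flavor == 'Windows':
            # Each '/' becomes an escaped backslash ('\\') in the MEL source.
            dest = dest.replace('/', '\\\\')
        lines.append('    dirmap -m "{}" "{}";'.format(src, dest))
    lines.append("}")
    return "\n".join(lines)

mel = generate_asset_map({"H:/hsm/vault/hdr": "/X/C/project/sourceimages"}, 'Linux')
```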