Init repo: copy content from azure-cli repo.

This commit is contained in:
xingwu1 2017-03-01 15:23:31 -08:00
Parent 094bab5110
Commit b74a47282b
186 changed files with 10474 additions and 0 deletions

7
.flake8 Normal file

@ -0,0 +1,7 @@
[flake8]
max-line-length = 100
max-complexity = 10
ignore =
E501,
F401,
C901

13
.gitattributes vendored Normal file

@ -0,0 +1,13 @@
# Set the default behavior (used when a rule below doesn't match)
* text=auto
*.sln -text
*.ico -text
*.bmp -text
*.png -text
*.snk -text
*.mht -text
*.pickle -text
# Some Windows-specific files should always be CRLF
*.bat eol=crlf

76
.gitignore vendored Normal file

@ -0,0 +1,76 @@
# Python cache
__pycache__/
*.pyc
# Virtual environment
env/
env27/
# PTVS analysis
.ptvs/
# Build results
obj/
dist/
MANIFEST
# Result of running python setup.py install/pip install -e
RECORD.txt
build/
*.egg-info/
# Test results
TestResults/
# Credentials
credentials_real.json
testsettings_local.json
servicebus_settings_real.py
storage_settings_real.py
legacy_mgmt_settings_real.py
mgmt_settings_real.py
app_creds_real.py
# User-specific files
*.suo
*.user
*.sln.docstates
.vs/
# Windows image file caches
Thumbs.db
ehthumbs.db
# Folder config file
Desktop.ini
# Recycle Bin used on file shares
$RECYCLE.BIN/
# Mac desktop service store files
.DS_Store
.idea
src/build
*.iml
/doc/_build
/doc/sphinx/_build
/.vs/config/applicationhost.config
.vscode/settings.json
.vscode/.ropeproject/
.project
.pydevproject
# Azure deployment credentials
*.pubxml
# Auxiliary files
command_coverage.txt
# Test artifacts
private_config.json
scripts/smart_create_gen/config.ini
test_results/
# Code coverage
.coverage

11
.travis.yml Normal file

@ -0,0 +1,11 @@
sudo: false
language: python
python:
- "2.7"
- "3.5"
- "3.6"
install:
- pip install -qqq virtualenv # used by package_verify script
- python scripts/dev_setup.py
script:
- ./scripts/build.sh

9
CONTRIBUTING.rst Normal file

@ -0,0 +1,9 @@
Contribute Code
===================================
This project has adopted the `Microsoft Open Source Code of Conduct <https://opensource.microsoft.com/codeofconduct/>`__.
For more information see the `Code of Conduct FAQ <https://opensource.microsoft.com/codeofconduct/faq/>`__ or contact `opencode@microsoft.com <mailto:opencode@microsoft.com>`__ with any additional questions or comments.
If you would like to become an active contributor to this project, please
follow the instructions provided in `Microsoft Azure Projects Contribution Guidelines <http://azure.github.io/guidelines.html>`__.

52
Dockerfile Normal file

@ -0,0 +1,52 @@
#---------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
#---------------------------------------------------------------------------------------------
# This Dockerfile uses the latest code from the Git repo.
# Clone the repo, then run 'docker build' with this Dockerfile to get the latest versions of
# *all* CLI modules as in the Git repo.
FROM python:3.5.2-alpine
WORKDIR azure-cli
COPY . /azure-cli
# pip wheel - required for CLI packaging
# jmespath-terminal - we include jpterm as a useful tool
RUN pip install --upgrade pip wheel jmespath-terminal
# bash gcc openssl-dev libffi-dev musl-dev - dependencies required for CLI
# jq - we include jq as a useful tool
# openssh - included for ssh-keygen
# ca-certificates
# wget - required for installing jp
RUN apk update && apk add bash gcc openssl-dev libffi-dev musl-dev jq openssh ca-certificates wget openssl && update-ca-certificates
# We also install jp
RUN wget https://github.com/jmespath/jp/releases/download/0.1.2/jp-linux-amd64 -qO /usr/local/bin/jp && chmod +x /usr/local/bin/jp
# 1. Build packages and store in tmp dir
# 2. Install the cli and the other command modules that weren't included
RUN /bin/bash -c 'TMP_PKG_DIR=$(mktemp -d); \
for d in src/azure-cli src/azure-cli-core src/azure-cli-nspkg src/command_modules/azure-cli-*/; \
do cd $d; python setup.py bdist_wheel -d $TMP_PKG_DIR; cd -; \
done; \
MODULE_NAMES=""; \
for m in src/command_modules/azure-cli-*/; \
do MODULE_NAMES="$MODULE_NAMES $(echo $m | cut -d '/' -f 3)"; \
done; \
pip install azure-cli $MODULE_NAMES -f $TMP_PKG_DIR;'
# Tab completion
RUN echo -e "\
_python_argcomplete() {\n\
local IFS='\v'\n\
COMPREPLY=( \$(IFS=\"\$IFS\" COMP_LINE=\"\$COMP_LINE\" COMP_POINT=\"\$COMP_POINT\" _ARGCOMPLETE_COMP_WORDBREAKS=\"\$COMP_WORDBREAKS\" _ARGCOMPLETE=1 \"\$1\" 8>&1 9>&2 1>/dev/null 2>/dev/null) )\n\
if [[ \$? != 0 ]]; then\n\
unset COMPREPLY\n\
fi\n\
}\n\
complete -o nospace -F _python_argcomplete \"az\"\n\
" > ~/.bashrc
WORKDIR /
CMD bash

13
LICENSE.txt Normal file

@ -0,0 +1,13 @@
Azure CLI
Copyright (c) Microsoft Corporation
All rights reserved.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

181
README.md Normal file

@ -0,0 +1,181 @@
# Microsoft Azure CLI 2.0
[![Python](https://img.shields.io/pypi/pyversions/azure-cli.svg?maxAge=2592000)](https://pypi.python.org/pypi/azure-cli)
[![Travis](https://travis-ci.org/Azure/azure-cli.svg?branch=master)](https://travis-ci.org/Azure/azure-cli)
A great cloud needs great tools; we're excited to introduce *Azure CLI 2.0*, our next generation multi-platform command line experience for Azure.
## Installation
```bash
$ curl -L https://aka.ms/InstallAzureCli | bash
```
Please refer to the [install guide](https://docs.microsoft.com/en-us/cli/azure/install-az-cli2) for detailed install instructions.
A list of common install issues and their resolutions is available at [install troubleshooting](https://github.com/Azure/azure-cli/blob/master/doc/install_troubleshooting.md).
### Developer Installation (see below)
- [Docker](#docker)
- [Nightly Builds](#nightly-builds)
- [Developer Setup](#developer-setup)
## Usage
```bash
$ az [ group ] [ subgroup ] [ command ] {parameters}
```
### Get Started
Please refer to the ["get started" guide](https://docs.microsoft.com/en-us/cli/azure/get-started-with-az-cli2) for in-depth instructions.
For usage and help content, pass in the `-h` parameter, for example:
```bash
$ az storage -h
$ az vm create -h
```
### Highlights
Here are a few features and concepts that can help you get the most out of the Azure CLI 2.0 Preview.
![Azure CLI 2.0 Highlight Reel](doc/assets/AzBlogAnimation4.gif)
The following examples use the `--output table` format. You can change your default output format with the `az configure` command.
#### Tab Completion
We support tab-completion for groups, commands, and some parameters.
```bash
# looking up resource group and name
$ az vm show -g [tab][tab]
AccountingGroup RGOne WebPropertiesRG
$ az vm show -g WebPropertiesRG -n [tab][tab]
StoreVM Bizlogic
$ az vm show -g WebPropertiesRG -n Bizlogic
```
#### Query
You can use the `--query` parameter and the [JMESPath](http://jmespath.org/) query syntax to customize your output.
```bash
$ az vm list --query "[?provisioningState=='Succeeded'].{ name: name, os: storageProfile.osDisk.osType }"
Name Os
---------------------- -------
storevm Linux
bizlogic Linux
demo32111vm Windows
dcos-master-39DB807E-0 Linux
```
#### Creating a VM
The following block creates a new resource group in the 'westus' region, then creates a new Ubuntu VM. We automatically provide a series of smart defaults, such as setting up SSH with your `~/.ssh/id_rsa.pub` key. For more details, try `az vm create -h`.
```bash
$ az group create -l westus -n MyGroup
Name Location
------- ----------
MyGroup westus
$ az vm create -g MyGroup -n MyVM --image ubuntults
MacAddress ResourceGroup PublicIpAddress PrivateIpAddress
----------------- --------------- ----------------- ------------------
00-0D-3A-30-B2-D7 MyGroup 52.160.111.118 10.0.0.4
$ ssh 52.160.111.118
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.19.0-65-generic x86_64)
System information as of Thu Sep 15 20:47:31 UTC 2016
System load: 0.39 Memory usage: 2% Processes: 80
Usage of /: 39.6% of 1.94GB Swap usage: 0% Users logged in: 0
jasonsha@MyVM:~$
```
#### More Samples and Snippets
For more usage examples, take a look at our [GitHub samples repo](http://github.com/Azure/azure-cli-samples) or [https://docs.microsoft.com/en-us/cli/azure/overview](https://docs.microsoft.com/en-us/cli/azure/overview).
## Reporting issues and feedback
If you encounter any bugs with the tool please file an issue in the [Issues](https://github.com/Azure/azure-cli/issues) section of our GitHub repo.
To provide feedback from the command line, try the `az feedback` command!
## Developer Installation
### Docker
**This install does not support the component feature.**
We maintain a Docker image preconfigured with the Azure CLI. Run the latest automated Docker build with the command below.
```bash
$ docker run -v ${HOME}:/root -it azuresdk/azure-cli-python:latest
```
All command modules are included in this version as the image is built directly from the Git repository.
You can also get a specific version of Azure CLI 2.0 via Docker.
```bash
$ docker run -v ${HOME}:/root -it azuresdk/azure-cli-python:<version>
```
See our [Docker tags](https://hub.docker.com/r/azuresdk/azure-cli-python/tags/) for available versions.
### Nightly Builds
Install nightly builds with pip in a virtual environment.
```bash
$ pip install --pre azure-cli --extra-index-url https://azureclinightly.blob.core.windows.net/packages
```
- Builds happen at 21:00:00 PDT each night. They are published shortly afterwards.
- Whilst all command modules are built each night, not all are included on install.
- Install additional components with:
```
$ export AZURE_COMPONENT_PACKAGE_INDEX_URL=https://azureclinightly.blob.core.windows.net/packages
```
```
$ az component update --add <component_name> --private
```
- To view the list of installed packages, run ``az component list``
## Developer Setup
If you would like to set up a development environment and contribute to the CLI, see
[Configuring Your Machine](https://github.com/Azure/azure-cli/blob/master/doc/configuring_your_machine.md).
## Contribute Code
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
If you would like to become an active contributor to this project, please
follow the instructions provided in [Microsoft Azure Projects Contribution Guidelines](http://azure.github.io/guidelines.html).
## License
```
Azure CLI
Copyright (c) Microsoft Corporation
All rights reserved.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```

65
appveyor.yml Normal file

@ -0,0 +1,65 @@
branches:
only:
- master
init:
- git config --global core.autocrlf false
environment:
PYTHON: "C:\\Python35"
access_token:
secure: VMFbecLLHzDq/09YDPbcM0VDDSwwgY57vr5GXK6cZZ4Ti/Xs5RZoylzV8MMr1350
install:
- SET PATH=%PYTHON%;%PYTHON%\Scripts;%PATH%
- python scripts/dev_setup.py
- python -m pip install sphinx
build_script:
- ps: |
$docSourceRaw = Get-Content -Raw -Path "$env:APPVEYOR_BUILD_FOLDER\doc\sphinx\azhelpgen\doc_source_map.json" | ConvertFrom-Json
$docFiles = $docSourceRaw | Get-Member -type NoteProperty | %{ $docSourceRaw."$($_.Name)" }
$docFiles = $docFiles | select -Unique
$fileCommitMap = @{}
foreach($file in $docFiles)
{
$commitId = (git rev-list -1 HEAD $file)
if($commitId -match '^[0-9a-f]{40}$')
{
$date = Get-Date -Date (git log --pretty=format:%cd -n 1 --date=iso $file)
$date = $date.ToUniversalTime()
$date = Get-Date $date -format G
$fileCommitMap[$file] = @{}
$fileCommitMap[$file]["commit"] = $commitId
$fileCommitMap[$file]["date"] = $date
}
else
{
Write-Host -ForegroundColor Red "Failed to get commit id for $file"
$host.SetShouldExit(-1)
}
}
$fileCommitMap | ConvertTo-Json | Out-File "$env:APPVEYOR_BUILD_FOLDER\doc\sphinx\doc_commit_map.json"
- cd doc\sphinx
- make xml
- mkdir %TEMP%\azure-cli-xml2yml
- cd %TEMP%\azure-cli-xml2yml
- nuget install azure.cli.doc.xml2yml -Source https://ci.appveyor.com/nuget/azure-docs-cli-pre-process
- cd azure.cli.doc.xml2yml*\tools
- AzCliDocPreprocessor -s "%APPVEYOR_BUILD_FOLDER%\doc\sphinx\_build\xml\ind.xml" -d "%TEMP%\azure-cli-xml2yml\yml-output" -r "https://github.com/%APPVEYOR_REPO_NAME%" -b "%APPVEYOR_REPO_BRANCH%" -c "%APPVEYOR_BUILD_FOLDER%\doc\sphinx\doc_commit_map.json" -v 1
artifacts:
- path: doc\sphinx\_build
on_success:
- git config --global credential.helper store
- ps: Add-Content "$env:USERPROFILE\.git-credentials" "https://$($env:access_token):x-oauth-basic@github.com`n"
- git config --global user.email "%GIT_USER_EMAIL%"
- git config --global user.name "%GIT_USER_NAME%"
- git clone --depth 5 -q --branch=%TARGET_BRANCH% %GIT_CONTENT_REPO_URL% %TEMP%\azure-cli-content
- if exist %TEMP%\azure-cli-content\%YML_OUTPUT_FOLDER% ((cd %TEMP%\azure-cli-content\%YML_OUTPUT_FOLDER%) & (del %FILES_TO_DELETE% /f /q) & (for /d %%p IN (*) do rmdir "%%p" /s /q))
- cd %TEMP%\azure-cli-content
- SETLOCAL EnableDelayedExpansion & robocopy %TEMP%\azure-cli-xml2yml\yml-output %TEMP%\azure-cli-content\%YML_OUTPUT_FOLDER% /e & IF !ERRORLEVEL! EQU 1 (exit 0) ELSE (IF !ERRORLEVEL! EQU 3 (exit 0) ELSE (exit 1))
- ps: .\merge-toc.ps1 "$env:YML_OUTPUT_FOLDER" "$env:CONCEPTUAL_DOC_FOLDER"
- git add -A
- git diff --quiet --exit-code --cached || git commit -m "Update Document Content" && git push origin %TARGET_BRANCH% && appveyor AddMessage "Document Updated"

Binary data
doc/assets/AzBlogAnimation4.gif Normal file

Binary file not shown. (Size: 3.5 MiB)


@ -0,0 +1,186 @@
Authoring Command Modules
=========================
The document provides instructions and guidelines on how to author command modules. For other help, please see the following:
**Module Authoring**:<br>You are here!
**Command Authoring**:<br>https://github.com/Azure/azure-cli/blob/master/doc/authoring_command_modules/authoring_commands.md
**Help Authoring**:<br>https://github.com/Azure/azure-cli/blob/master/doc/authoring_help.md
**Test Authoring**:<br>https://github.com/Azure/azure-cli/blob/master/doc/recording_vcr_tests.md
<a name="heading_set_up"></a>Set Up
------
Create your dev environment if you haven't already. Here's how:
Clone the repo, enter the repo directory, then create your virtual environment.
For example:
```
git clone https://github.com/Azure/azure-cli.git
cd azure-cli
virtualenv env
source env/bin/activate
python scripts/dev_setup.py
```
After this, you should be able to run the CLI with `az`.
[Author your command module...](#heading_author_command_mod)
Now, install your command module into the environment with pip in editable mode.
Where `<path_to_your_command_module>` is the path to the directory containing your `setup.py` file.
```
pip install -e <path_to_your_command_module>
```
If installation was successful, you should be able to run `pip list` and see your command module.
```
$ pip list
...
azure-cli-example (0.0.1, /Users/myuser/Repos/azure-cli-example)
...
```
Also, you can run `az` and if your command module contributes any commands, they should appear.
If your commands aren't showing up in `az`, run `az --debug` to help debug; an exception may have been thrown while attempting to load your module.
<a name="heading_author_command_mod"></a>Authoring command modules
------
Currently, all command modules should start with `azure-cli-`.
When the CLI loads, it searches for installed packages that start with that prefix.
The `example_module_template` directory gives a basic command module with one command.
Command modules should have the following structure:
```
.
|-- README.rst
|-- azure
| |-- __init__.py
| `-- cli
| |-- __init__.py
| `-- command_modules
| |-- __init__.py
| `-- <MODULE_NAME>
| `-- __init__.py
`-- setup.py
```
**Create an \_\_init__.py for your module**
In the \_\_init__ file, two methods need to be defined:
- `load_commands` - Uses the file in the 'Writing a Command' section below to load the commands.
- `load_params` - Uses the file in the 'Customizing Arguments' section below to load parameter customizations.
```Python
def load_params(command):
import azure.cli.command_modules.<module_name>._params
def load_commands():
import azure.cli.command_modules.<module_name>.commands
```
Next, write the command itself:
```python
from azure.cli.commands import cli_command
def example(my_required_arg, my_optional_arg='MyDefault'):
'''Returns the params you passed in.
:param str my_required_arg: The argument that is required
'''
result = {'a': my_required_arg, 'b': my_optional_arg}
return result
cli_command('example', example)
```
The snippet above shows what it takes to author a basic command.
1. Import `cli_command` from `azure.cli.commands`
This holds the core logic for creating commands.
2. Use `cli_command` to create your command
The only required parameters to this method are:
- `name` Name of the command
- `operation` The callable that will execute for that command
3. Define the callable that will execute
The CLI inspects the callable to determine required params, defaults and help text and more.
Try out the example to see these in action!
When running the command with the `--help` flag, you should see the command.
You can also now execute the command for yourself.
```
$ az example --help
Command
az example
Arguments
--my-required-arg [Required]: The argument that is required.
--my-optional-arg : Default: MyDefault.
...
$ az example --my-required-arg abc
{
"a": "abc",
"b": "MyDefault"
}
```
Testing
-------
Run your command module's unit tests with:
```
python -m unittest discover -s <path_to_your_command_module>/azure/cli/command_modules/<module_name>/tests
```
Check code style with pylint:
```
pylint -r n <path_to_your_command_module>/azure
```
Packaging/Publishing
--------------------
### Packaging
You have finished your command module and now want to package and publish it.
Make sure you are in the directory for your command module.
Create the package by running:
```
python setup.py sdist
```
This will create a `dist` directory that contains your package.
### Publishing
You are now ready to publish the package to PyPI or a private PyPI server of your choice.
There are many ways to publish but here is a template using Twine.
```bash
echo "[distutils]
index-servers =
cli-pypi
[cli-pypi]
repository: <PYPI_REPO_URL>
username: <PYPI_REPO_USERNAME>
password: <PYPI_REPO_PASSWORD>
" > ~/.pypirc
pip install twine
# Uploads the packages in dist/ to the server referred to by cli-pypi.
twine upload -r cli-pypi dist/*
```
### Installing your published command module
If you published the package publicly, simply use `az component update --add example`.
If you published it to a private server, use `az component update --add example --private`.
NOTE:
- Don't include the `azure-cli-` prefix when installing a command module.


@ -0,0 +1,198 @@
Authoring Commands
=============================
The document provides instructions and guidelines on how to author individual commands.
**Overview**
The basic process of adding commands is presented below, and elaborated upon later in this document.
1. Create an \_\_init__.py file for your command module.
2. Write your command as a standard Python function.
3. Register your command using the `cli_command` (or similar) function.
4. Write up your command's help entry.
5. Use the `register_cli_argument` function to add the following enhancements to your arguments, as needed:
- option names, including short names
- validators, actions or types
- choice lists
- completers
**Writing a Command**
Write your command as a simple function, specifying your arguments as the parameter names.
When choosing names, it is recommended that you look at similar commands and follow those naming conventions to take advantage of any aliasing that may already be in place. For example, you should choose `resource_group_name` over `rg`, `resource_group` or some other shorthand, because this parameter is globally aliased and you will inherit the `-g` short option and the completer.
If you specify a default value in your function signature, this will flag the argument as optional and will automatically display the default value in the help text for the command. Any parameters that do not have a default value are required and will automatically appear in help with the [Required] label. The required and default behaviors for arguments can be overridden if needed with the `register_cli_argument` function (see Argument Customization below) but this is not generally needed.
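For illustration, here is a minimal sketch of such a function (the 'widget' names and the default location are hypothetical, not from an actual module):
```Python
def create_widget(resource_group_name, widget_name, location='westus'):
    '''Create a widget in the given resource group.

    :param str resource_group_name: Globally aliased; inherits the -g short option and completer.
    :param str widget_name: Required, because it has no default value.
    :param str location: Optional; the default is shown automatically in help.
    '''
    # A real command would call into an SDK client here; returning the values
    # keeps the sketch self-contained.
    return {'resourceGroup': resource_group_name, 'name': widget_name, 'location': location}
```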
**Registering Commands**
Before your command can be used in the CLI, it must be registered. Insert the following statement in your file:
```Python
from azure.cli.commands import cli_command
```
The signature of this method is
```Python
def cli_command(module_name, name, operation, client_factory=None, transform=None, table_transformer=None, confirmation=None):
```
You will generally only specify `name`, `operation` and possibly `table_transformer`.
- `module_name` - The name of the module that is registering the command (e.g. `azure.cli.command_modules.vm.commands`). Typically this will be `__name__`.
- `name` - String uniquely naming your command and placing it within the command hierarchy. It will be the string that you would type at the command line, omitting `az` (ex: access your command at `az mypackage mycommand` using a name of `mypackage mycommand`).
- `operation` - The handler that will be executed. Format is `<module_to_import>#<attribute_list>`
- For example if `operation='azure.mgmt.compute.operations.virtual_machines_operations#VirtualMachinesOperations.get'`, the CLI will import `azure.mgmt.compute.operations.virtual_machines_operations`, get the `VirtualMachinesOperations` attribute and then the `get` attribute of `VirtualMachinesOperations`.
- `table_transformer` (optional) - Supply a callable that takes, transforms and returns a result for table output.
- `confirmation` (optional) - Supply True to enable default confirmation, a callable that takes the command arguments as a dict and returns a boolean, or a string to use as the prompt.
At this point, you should be able to access your command using `az [name]` and access the built-in help with `az [name] -h/--help`. Your command will automatically be 'wired up' with the global parameters.
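For illustration, a registration call using this signature might look like the following sketch (the command name and operation path are hypothetical, not from an actual module):
```Python
from azure.cli.commands import cli_command

# __name__ identifies the registering module; the operation string follows the
# '<module_to_import>#<attribute_list>' format described above.
cli_command(__name__,
            'mypackage widget show',
            'azure.mgmt.example.operations.widgets_operations#WidgetsOperations.get')
```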
**Write Your Help Entry**
See the following for guidance on writing a help entry: https://github.com/Azure/azure-cli/blob/master/doc/authoring_help.md
**Customizing Arguments**
There are a number of customizations that you can make to the arguments of a command that alter their behavior within the CLI. To modify/enhance your command arguments, use the `register_cli_argument` method from the `azure.cli.commands` package. For the standard modules, these entries are contained within a file called `_params.py`.
The signature of this method is
```Python
def register_cli_argument(scope, dest, arg_type=None, **kwargs):
```
- `dest` - This string is the name of the parameter you wish to modify, as specified in the function signature.
- `scope` - This string is the level at which your customizations are applied. For example, consider the case where you have commands `az mypackage command1` and `az mypackage command2`, which both have a parameter `my_param`.
```Python
register_cli_argument('mypackage', 'my_param', ...) # applies to BOTH command1 and command2
```
But
```Python
register_cli_argument('mypackage command1', 'my_param', ...) # applies to command1 but not command2
```
Like CSS rules, modifications are applied in order from generic to specific.
```Python
register_cli_argument('mypackage', 'my_param', ...) # applies to both command1 and command2
register_cli_argument('mypackage command2', 'my_param', ...) # command2 inherits and builds upon the previous changes
```
- `arg_type` - An instance of the `azure.cli.commands.CliArgumentType` class. This essentially serves as a named, reusable packaging of the `kwargs` that modify your command's argument. It is useful when you want to reuse an argument definition, but is generally not required. It is most commonly used for name type parameters.
- `kwargs` - Most likely, you will simply specify keyword arguments in `register_cli_argument` that will accomplish what you need. Any `kwargs` specified will override or extend the definition in `arg_type`, if provided.
The following keyword arguments are supported (a combined sketch follows the list):
- `options_list` - By default, your argument will be exposed as an option in hyphenated form (ex: `my_param` becomes `--my-param`). If you would like to change the option string without changing the parameter name, and/or add a short option, specify the `options_list` kwarg. This is a tuple of two string values, one for a standard option string, and the other for an optional short string. (Ex: `options_list=('--myparam', '-m')`)
- `validator` - The name of a callable that takes the function namespace as a parameter. Allows you to perform any custom logic or validation on the entire namespace prior to command execution. Validators are executed after argument parsing, and thus after `type` and `action` have been applied. However, because the order in which validators are executed is random, you should not have multiple validators modifying the same parameter within the namespace.
- `completer` - The name of a callable that takes the parameters `(prefix, action, parsed_args, **kwargs)` and returns a list of completion values.
Additionally, the following `kwargs`, supported by argparse, are supported as well:
- `nargs` - See https://docs.python.org/3/library/argparse.html#nargs
- `action` - See https://docs.python.org/3/library/argparse.html#action
- `const` - See https://docs.python.org/3/library/argparse.html#const
- `default` - See https://docs.python.org/3/library/argparse.html#default. Note that the default value is inferred from the parameter's default value in the function signature. If specified, this will override that value.
- `type` - See https://docs.python.org/3/library/argparse.html#type
- `choices` - See https://docs.python.org/3/library/argparse.html#choices. If specified this will also serve as a value completer for people using tab completion.
- `required` - See https://docs.python.org/3/library/argparse.html#required. Note that this value is inferred from the function signature depending on whether or not the parameter has a default value. If specified, this will override that value.
- `help` - See https://docs.python.org/3/library/argparse.html#help. Generally you should avoid adding help text in this way, instead opting to create a help file as described above.
- `metavar` - See https://docs.python.org/3/library/argparse.html#metavar
- `id_part` - See below the section on Supporting the IDs Parameter.
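To tie these together, here is a hypothetical registration combining several of the kwargs above (the scopes, parameter names and choices are illustrative only, not from an actual module):
```Python
from azure.cli.commands import CliArgumentType, register_cli_argument

# A reusable arg_type for 'name'-style parameters.
widget_name_type = CliArgumentType(options_list=('--name', '-n'), help='Name of the widget.')

register_cli_argument('mypackage widget', 'widget_name', arg_type=widget_name_type)
register_cli_argument('mypackage widget create', 'tier',
                      options_list=('--tier', '-t'),
                      choices=['basic', 'standard'],  # also drives tab completion
                      default='basic')
```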
Supporting the IDs Parameter
=============================
Most ARM resources can be identified by an ID. In many cases, for example show and delete commands, it may be more useful to copy and paste an ID to identify the target resource instead of having to specify the names of the resource group, the resource, and the parent resource (if any).
Azure CLI 2.0 Preview supports exposing an `--ids` parameter that will parse a resource ID into its constituent named parts so that this parsing need not be done as part of a client script. Additionally `--ids` will accept a _list_ of space separated IDs, allowing the client to loop the command over each ID.
Enabling this functionality only requires the command author specify the appropriate values for `id_part` in their calls to `register_cli_argument`.
Consider the following simplified example for NIC IP config.
```Python
def show_nic_ip_config(resource_group_name, nic_name, ip_config_name):
# retrieve and return the IP config
register_cli_command('network nic ip-config show', ...#show_nic_ip_config, ...)
register_cli_argument('network nic ip-config', 'nic_name', help='The NIC name.')
register_cli_argument('network nic ip-config', 'ip_config_name', options_list=('--name', '-n'), help='The IP config name.')
```
The help output for this command would be:
```
Arguments
--name -n : The IP config name.
--nic-name : The NIC name.
--resource-group -g: Name of resource group.
```
Now let's specify values for the `id_part` kwarg in the calls to `register_cli_argument`:
```Python
def show_nic_ip_config(resource_group_name, nic_name, ip_config_name):
# retrieve and return the IP config
register_cli_command('network nic ip-config show', ...#show_nic_ip_config, ...)
register_cli_argument('network nic ip-config', 'nic_name', id_part='name', help='The NIC name.')
register_cli_argument('network nic ip-config', 'ip_config_name', id_part='child_name', options_list=('--name', '-n'), help='The IP config name.')
```
The help output becomes:
```
Arguments
Resource Id Arguments
--ids : One or more resource IDs. If provided, no other 'Resource Id' arguments
should be specified.
--name -n : The IP config name.
--nic-name : The NIC name.
--resource-group -g: Name of resource group.
```
Now the user may identify the target IP config by specifying either the resource group, NIC and IP config names or by simply pasting in the ID for the IP config itself.
A couple things to note:
- Currently, `--ids` is not exposed for any command that is called 'create', even if it is configured properly.
- The supported values for `id_part` are: `name`, `child_name`, and `grandchild_name`.
Generic Update Commands
=============================
The update commands within the CLI expose a set of generic update arguments: `--add`, `--remove` and `--set`. These allow the user to manipulate properties of an object in a consistent way, even when the command does not expose dedicated option flags for them. The method which exposes these arguments is `cli_generic_update_command` in the `azure.cli.core.commands.arm` package. The signature of this method is:
```Python
def cli_generic_update_command(name, getter, setter, factory=None, setter_arg_name='parameters',
table_transformer=None, child_collection_prop_name=None,
child_collection_key='name', child_arg_name='item_name',
custom_function_op=None):
```
For many commands, you will only specify `name`, `getter`, `setter` and `factory`.
- `name` - Same as registering a command with `cli_command(...)`.
- `getter` - A method which returns an instance of the object being updated.
- `setter` - A method which takes an instance of the object and updates it.
- `factory` (optional) - Any client object upon which the getter and setter rely. If omitted, then the getter and setter are responsible for creating their own client objects as needed.
- `setter_arg_name` (optional) - This is the name of the parameter through which the object instance is passed to the setter method. By default it is `parameters` because many Azure SDK APIs use this convention. If your setter uses a different name, specify it here.
- `custom_function` (optional) - A method which accepts the object being updated (must be named `instance`) and returns that object. This is commonly used to process convenience options which may be added to the command by listing them in the method signature, similar to a purely custom method. The difference is that a custom command function returns the command result while a generic update custom function returns only the object being updated. A simple custom function might look like:
```Python
def my_custom_function(instance, item_name, custom_arg=None):
if custom_arg:
instance.property = custom_arg
return instance
```
- `table_transformer` (optional) - Same as `cli_command(...)`
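As a rough sketch (assuming a hypothetical 'widget' resource and client object; these names are not from an actual module), the getter and setter referred to above might look like this:
```Python
def get_widget(client, resource_group_name, widget_name):
    # Getter: returns the current instance of the object being updated.
    return client.widgets.get(resource_group_name, widget_name)

def set_widget(client, resource_group_name, widget_name, parameters):
    # Setter: applies the (possibly modified) instance; the argument is named
    # 'parameters' to match the default setter_arg_name described above.
    return client.widgets.create_or_update(resource_group_name, widget_name, parameters)
```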
**Working With Child Collections and Properties (Advanced)**
Sometimes you will want to write commands that operate on child resources and it may be that these child resources don't have dedicated getters and setters. In these cases, you must rely on the getter and setter of the parent resource. For these cases, `cli_generic_update_command` has three additional parameters:
- `child_collection_prop_name` - the name of the child collection property. For example, if object `my_parent` has a child collection called `my_children` that you would access using `my_parent.my_children` then the name you would use is 'my_children'.
- `child_collection_key_name` - Most child collections in Azure are lists of objects (as opposed to dictionaries) which will have a property in them that serves as the key. This is the name of that key property. By default it is 'name'. In the above example, if an entry in the `my_children` collection has a key property called `my_identifier` then the value you would supply is 'my_identifier'.
- `child_arg_name` - If you want to refer to the child object key (the property identified by `child_collection_key_name`) inside a custom function, you should specify the argument name you use in your custom function. By default, this is called `item_name`. In the above example, where our child object had a key called `my_identifier`, you could refer to this property within your custom function through the `item_name` property, or specify something different.
**Logic Flow**
A simplified understanding of the generic update flow is as follows:
```Python
instance = getter(...) # retrieve the object
if custom_function:
instance = custom_function(...) # apply custom logic
instance = _process_generic_updates(...) # apply generic updates, which will overwrite custom logic in the event of a conflict
return setter(instance) # update the instance and return the result
```


@ -0,0 +1,3 @@
Microsoft Azure CLI 'example' Command Module
==================================


@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import pkg_resources
pkg_resources.declare_namespace(__name__)


@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import pkg_resources
pkg_resources.declare_namespace(__name__)


@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import pkg_resources
pkg_resources.declare_namespace(__name__)


@ -0,0 +1,16 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# Add command module logic to this package.
from azure.cli.commands import cli_command
def example(my_required_arg, my_optional_arg='MyDefault'):
'''Returns the params you passed in.
:param str my_required_arg: The argument that is required
'''
result = {'a': my_required_arg, 'b': my_optional_arg}
return result
cli_command('example', example)


@ -0,0 +1,50 @@
#!/usr/bin/env python
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from codecs import open
from setuptools import setup
# The full list of classifiers is available at
# https://pypi.python.org/pypi?%3Aaction=list_classifiers
CLASSIFIERS = [
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
]
DEPENDENCIES = [
]
with open('README.rst', 'r', encoding='utf-8') as f:
README = f.read()
setup(
name='azure-cli-example',
version='0.0.1',
description='Microsoft Azure Command-Line Tools Example Command Module',
long_description=README,
license='MIT',
author='Example Author',
author_email='author@example.com',
url='https://github.com/example/repo',
classifiers=CLASSIFIERS,
namespace_packages = [
'azure',
'azure.cli',
'azure.cli.command_modules',
],
packages=[
'azure.cli.command_modules.example',
],
install_requires=DEPENDENCIES,
)

109
doc/authoring_help.md Normal file

@ -0,0 +1,109 @@
# Project AZ Help System #
Help authoring for commands is done in a number of places, all of which are contained in the Az code base. Some help text comes from product code, but it can be overridden using a YAML-based help authoring system. The YAML-based system is the recommended way to update command and group help text.
## YAML Help Authoring ##
The YAML syntax is described [here](http://www.yaml.org/spec/1.2/spec.html "here").
To override help for a given command:
1. Find the command's module. Example: "az account clear".
   1. Search the code base for "account clear".
   2. Search result: src/command_modules/azure-cli-**profile**/azure/cli/command_modules/**profile**/commands.py.
   3. The result shows "account clear" is in the "profile" module.
2. Using the module name, find the YAML help file, which follows the path pattern:
   1. src/command_modules/azure-cli-**[module name]**/azure/cli/command_modules/**[module name]**/_help.py.
   2. If the file doesn't exist, it can be created.
3. Find or create a help entry with the name of the command/group you want to document. See the example below.
### Example YAML help file, _help.py ###
<pre>
#---------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
#---------------------------------------------------------------------------------------------
from azure.cli.help_files import helps
#pylint: disable=line-too-long
helps['account clear'] = """
type: command
short-summary: Clear account
long-summary: Longer summary of clearing account
parameters:
- name: --account-name -n
type: string
short-summary: 'Account name'
long-summary: |
Longer summary with newlines preserved. Preserving newlines is helpful for paragraph breaks.
populator-commands:
- az account list
- These indicate where values can be retrieved for input to this command
- name: --another-parameter
short-summary: These parameter names must match what is shown in the command's CLI help output, including abbreviation.
examples:
- name: Document a parameter that doesn't exist
text: >
You will get an error when you show help for the command stating there is an extra parameter.
- name: Collapse whitespace in YAML
text: >
The > character collapses multiple lines into a single line, which is good for on-screen wrapping.
"""
</pre>
You can also document groups using the same format.
<pre>
helps['account'] = """
type: group
short-summary: The account group
long-summary: Longer summary of account
examples:
- name: Clear an account
text: Description
- name: Choose your current account
text: az account set...
"""
</pre>
# Tips to write effective help for your command
- Make sure the doc contains all the details that someone unfamiliar with the API needs to use the command.
- Examples are worth a thousand words. Provide examples that cover common use cases.
- Don't use "etc". Sometimes it makes sense to spell out a list completely. Sometimes it works to say "like ..." instead of "..., etc".
- The short summary for a group should start with "Commands to...".
- Use active voice. For example, say "Update web app configurations" instead of "Updates web app configurations" or "Updating web app configurations".
- Refer to the CLI as "Azure CLI 2.0 (Preview)". We'll drop "(Preview)" when the CLI GAs.
- Don't use highly formal language. If you imagine that another dev sat down with you and you were telling him what he needs to know to use the command, that's exactly what you need to write, in those words.
# Testing Authored Help #
To verify the YAML help is correctly formatted, the command/group's help command must be executed at runtime. For example, to verify "az account clear", run the command "az account clear -h" and verify the text.
Runtime is also when help authoring errors will be reported, such as documenting a parameter that doesn't exist. Errors will only show when the CLI help is executed, so verifying the CLI help is required to ensure your authoring is correct.
# Other Help Authoring #
Commands without YAML usually still have help text. Where does it come from? These sections briefly outline where Az help text comes from.
Authoring note: it is not recommended to use the product code to author command/group help--YAML is the recommended way (see above). This information is provided for completeness and may be useful for fixing small typos in existing help text.
## Help Layers ##
Command help starts with its raw SDK docstring text, if available. Non-SDK commands may have their own docstring. Code can specify values that replace the SDK/docstring contents. YAML is the final override for help content and is the recommended way for authoring command and group help. Note that group help can only be authored via YAML.
Here are the layers of Project Az help, with each layer overriding the layer below it:
| Help Display |
|----------------|
| YAML Authoring |
| Code Specified |
| Docstring |
| SDK Text |
## Page titles for command groups ##
Page titles for your command groups, as generated from the source, are simply the command syntax ("az vm"), but we use friendly titles on the published pages, such as "Virtual machines - az vm". To do that, we add the friendly part of the page title to [titlemapping.json](https://github.com/Azure/azure-docs-cli-python/blob/master/titleMapping.json) in the azure-docs-cli-python repo. When you add a new command group, make sure to update the mapping.

306
doc/azure2az_commands.rst Normal file

@ -0,0 +1,306 @@
Azure XPlat CLI to Azure CLI 2.0 commands
=========================================
Below is a list of common Azure XPlat CLI commands and their Azure CLI 2.0 Preview equivalents.
Services included:
* Account
* Network
* Storage
* VM
========================================================== ==========================================================
Azure XPlat CLI Azure CLI 2.0
========================================================== ==========================================================
azure account clear az account clear
azure account list az account list
azure account set az account set
azure account show az account show
azure location list az account list-locations
azure network application-gateway address-pool create az network application-gateway address-pool create
azure network application-gateway address-pool delete az network application-gateway address-pool delete
azure network application-gateway create az network application-gateway create
azure network application-gateway delete az network application-gateway delete
azure network application-gateway frontend-ip create az network application-gateway frontend-ip create
azure network application-gateway frontend-ip delete az network application-gateway frontend-ip delete
azure network application-gateway frontend-port create az network application-gateway frontend-port create
azure network application-gateway frontend-port delete az network application-gateway frontend-port delete
azure network application-gateway http-listener create az network application-gateway http-listener create
azure network application-gateway http-listener delete az network application-gateway http-listener delete
azure network application-gateway http-settings create az network application-gateway http-settings create
azure network application-gateway http-settings delete az network application-gateway http-settings delete
azure network application-gateway list az network application-gateway list
azure network application-gateway probe create az network application-gateway probe create
azure network application-gateway probe delete az network application-gateway probe delete
azure network application-gateway rule create az network application-gateway rule create
azure network application-gateway rule delete az network application-gateway rule delete
azure network application-gateway set az network application-gateway update
azure network application-gateway show az network application-gateway show
azure network application-gateway ssl-cert create az network application-gateway ssl-cert create
azure network application-gateway ssl-cert delete az network application-gateway ssl-cert delete
azure network application-gateway start az network application-gateway start
azure network application-gateway stop az network application-gateway stop
azure network application-gateway url-path-map create az network application-gateway url-path-map create
azure network application-gateway url-path-map delete az network application-gateway url-path-map delete
azure network application-gateway url-path-map rule create az network application-gateway url-path-map rule create
azure network application-gateway url-path-map rule delete az network application-gateway url-path-map rule delete
azure network dns record-set add-record az network dns record-set [ptr|mx|*] add
azure network dns record-set create az network dns record-set create
azure network dns record-set delete az network dns record-set delete
azure network dns record-set delete-record az network dns record-set [ptr|mx|*] remove
azure network dns record-set set az network dns record-set update
azure network dns record-set show az network dns record-set show
azure network dns zone create az network dns zone create
azure network dns zone delete az network dns zone delete
azure network dns zone list az network dns zone list
azure network dns zone set az network dns zone update
azure network dns zone show az network dns zone show
azure network express-route authorization create az network express-route circuit-auth create
azure network express-route authorization delete az network express-route circuit-auth delete
azure network express-route authorization list az network express-route circuit-auth list
azure network express-route authorization set az network express-route circuit-auth update
azure network express-route authorization show az network express-route circuit-auth show
azure network express-route circuit create az network express-route circuit create
azure network express-route circuit delete az network express-route circuit delete
azure network express-route circuit list az network express-route circuit list
azure network express-route circuit set az network express-route circuit update
azure network express-route circuit show az network express-route circuit show
azure network express-route peering create az network express-route circuit-peering create
azure network express-route peering delete az network express-route circuit-peering delete
azure network express-route peering list az network express-route circuit-peering list
azure network express-route peering set az network express-route circuit-peering update
azure network express-route peering show az network express-route circuit-peering show
azure network express-route provider list az network express-route service-provider list
azure network lb address-pool create az network lb address-pool create
azure network lb address-pool delete az network lb address-pool delete
azure network lb address-pool list az network lb address-pool list
azure network lb create az network lb create
azure network lb delete az network lb delete
azure network lb frontend-ip create az network lb frontend-ip create
azure network lb frontend-ip delete az network lb frontend-ip delete
azure network lb frontend-ip list az network lb frontend-ip list
azure network lb frontend-ip set az network lb frontend-ip update
azure network lb inbound-nap-pool delete az network lb inbound-nap-pool delete
azure network lb inbound-nat-pool create az network lb inbound-nat-pool create
azure network lb inbound-nat-pool list az network lb inbound-nat-pool list
azure network lb inbound-nat-pool set az network lb inbound-nat-pool update
azure network lb inbound-nat-rule create az network lb inbound-nat-rule create
azure network lb inbound-nat-rule delete az network lb inbound-nat-rule delete
azure network lb inbound-nat-rule list az network lb inbound-nat-rule list
azure network lb inbound-nat-rule set az network lb inbound-nat-rule update
azure network lb list az network lb list
azure network lb probe create az network lb probe create
azure network lb probe delete az network lb probe delete
azure network lb probe list az network lb probe list
azure network lb probe set az network lb probe update
azure network lb rule create az network lb rule create
azure network lb rule delete az network lb rule delete
azure network lb rule list az network lb rule list
azure network lb rule set az network lb rule update
azure network lb set az network lb update
azure network lb show az network lb show
azure network local-gateway create az network local-gateway create
azure network local-gateway delete az network local-gateway delete
azure network local-gateway list az network local-gateway list
azure network local-gateway set az network local-gateway update
azure network local-gateway show az network local-gateway show
azure network nic create az network nic create
azure network nic delete az network nic delete
azure network nic ip-config address-pool create az network nic ip-config address-pool add
azure network nic ip-config address-pool delete az network nic ip-config address-pool remove
azure network nic ip-config create az network nic ip-config create
azure network nic ip-config delete az network nic ip-config delete
azure network nic ip-config inbound-nat-rule create az network nic ip-config inbound-nat-rule add
azure network nic ip-config inbound-nat-rule delete az network nic ip-config inbound-nat-rule remove
azure network nic ip-config list az network nic ip-config list
azure network nic ip-config set az network nic ip-config update
azure network nic ip-config show az network nic ip-config show
azure network nic list az network nic list
azure network nic set az network nic update
azure network nic show az network nic show
azure network nsg create az network nsg create
azure network nsg delete az network nsg delete
azure network nsg list az network nsg list
azure network nsg rule create az network nsg rule create
azure network nsg rule delete az network nsg rule delete
azure network nsg rule list az network nsg rule list
azure network nsg rule set az network nsg rule update
azure network nsg rule show az network nsg rule show
azure network nsg set az network nsg update
azure network nsg show az network nsg show
azure network public-ip create az network public-ip create
azure network public-ip delete az network public-ip delete
azure network public-ip list az network public-ip list
azure network public-ip set az network public-ip update
azure network public-ip show az network public-ip show
azure network route-table create az network route-table create
azure network route-table delete az network route-table delete
azure network route-table list az network route-table list
azure network route-table route create az network route-table route create
azure network route-table route delete az network route-table route delete
azure network route-table route list az network route-table route list
azure network route-table route set az network route-table route update
azure network route-table route show az network route-table route show
azure network route-table set az network route-table update
azure network route-table show az network route-table show
azure network traffic-manager endpoint create az network traffic-manager endpoint create
azure network traffic-manager endpoint delete az network traffic-manager endpoint delete
azure network traffic-manager endpoint set az network traffic-manager endpoint update
azure network traffic-manager endpoint show az network traffic-manager endpoint show
azure network traffic-manager profile create az network traffic-manager profile create
azure network traffic-manager profile delete az network traffic-manager profile delete
azure network traffic-manager profile is-dns-available az network traffic-manager profile check-dns
azure network traffic-manager profile list az network traffic-manager profile list
azure network traffic-manager profile set az network traffic-manager profile update
azure network traffic-manager profile show az network traffic-manager profile show
azure network vnet create az network vnet create
azure network vnet delete az network vnet delete
azure network vnet list az network vnet list
azure network vnet set az network vnet update
azure network vnet show az network vnet show
azure network vnet subnet create az network vnet subnet create
azure network vnet subnet delete az network vnet subnet delete
azure network vnet subnet list az network vnet subnet list
azure network vnet subnet set az network vnet subnet update
azure network vnet subnet show az network vnet subnet show
azure network vpn-connection create az network vpn-connection create
azure network vpn-connection delete az network vpn-connection delete
azure network vpn-connection list az network vpn-connection list
azure network vpn-connection set az network vpn-connection update
azure network vpn-connection shared-key reset az network vpn-connection shared-key reset
azure network vpn-connection shared-key set az network vpn-connection shared-key update
azure network vpn-connection shared-key show az network vpn-connection shared-key show
azure network vpn-connection show az network vpn-connection show
azure network vpn-gateway create az network vpn-gateway create
azure network vpn-gateway delete az network vpn-gateway delete
azure network vpn-gateway list az network vpn-gateway list
azure network vpn-gateway revoked-cert create az network vpn-gateway revoked-cert create
azure network vpn-gateway revoked-cert delete az network vpn-gateway revoked-cert delete
azure network vpn-gateway root-cert create az network vpn-gateway root-cert create
azure network vpn-gateway root-cert delete az network vpn-gateway root-cert delete
azure network vpn-gateway set az network vpn-gateway update
azure network vpn-gateway show az network vpn-gateway show
azure storage account check az storage account check-name
azure storage account connectionstring show az storage account show-connection-string
azure storage account create az storage account create
azure storage account delete az storage account delete
azure storage account keys list az storage account keys list
azure storage account keys renew az storage account keys renew
azure storage account list az storage account list
azure storage account sas create az storage account generate-sas
azure storage account set az storage account update
azure storage account show az storage account show
azure storage account usage show az storage account show-usage
azure storage blob copy start az storage blob copy start
azure storage blob copy show az storage blob show
azure storage blob copy stop az storage blob copy cancel
azure storage blob delete az storage blob delete
azure storage blob download az storage blob download
azure storage blob lease acquire az storage blob lease acquire
azure storage blob lease break az storage blob lease break
azure storage blob lease change az storage blob lease change
azure storage blob lease release az storage blob lease release
azure storage blob lease renew az storage blob lease renew
azure storage blob list az storage blob list
azure storage blob sas create az storage blob generate-sas
azure storage blob show az storage blob show
azure storage blob snapshot az storage blob snapshot
azure storage blob update az storage blob update
azure storage blob upload az storage blob upload
azure storage container create az storage container create
azure storage container delete az storage container delete
azure storage container lease acquire az storage container lease acquire
azure storage container lease break az storage container lease break
azure storage container lease change az storage container lease change
azure storage container lease release az storage container lease release
azure storage container lease renew az storage container lease renew
azure storage container list az storage container list
azure storage container policy create az storage container policy create
azure storage container policy delete az storage container policy delete
azure storage container policy list az storage container policy list
azure storage container policy set az storage container policy update
azure storage container policy show az storage container policy show
azure storage container sas create az storage container generate-sas
azure storage container set az storage container set-permission
azure storage container show az storage container show
azure storage container show az storage container show-permission
azure storage cors delete az storage cors clear
azure storage cors set az storage cors add
azure storage cors show az storage cors list
azure storage directory create az storage directory create
azure storage directory delete az storage directory delete
azure storage file copy start az storage file copy start
azure storage file copy show az storage file show
azure storage file copy stop az storage file copy cancel
azure storage file delete az storage file delete
azure storage file download az storage file download
azure storage file list az storage file list
azure storage file sas create az storage file generate-sas
azure storage file upload az storage file upload
azure storage logging set az storage logging update
azure storage logging show az storage logging show
azure storage metrics set az storage metrics update
azure storage metrics show az storage metrics show
azure storage queue create az storage queue create
azure storage queue delete az storage queue delete
azure storage queue list az storage queue list
azure storage queue policy create az storage queue policy create
azure storage queue policy delete az storage queue policy delete
azure storage queue policy list az storage queue policy list
azure storage queue policy set az storage queue policy update
azure storage queue policy show az storage queue policy show
azure storage queue sas create az storage queue generate-sas
azure storage queue show az storage queue metadata show
azure storage share create az storage share create
azure storage share delete az storage share delete
azure storage share list az storage share list
azure storage share policy create az storage share policy create
azure storage share policy delete az storage share policy delete
azure storage share policy list az storage share policy list
azure storage share policy set az storage share policy set
azure storage share policy show az storage share policy show
azure storage share sas create az storage share sas create
azure storage share set az storage share update
azure storage share show az storage share show
azure storage table create az storage table create
azure storage table delete az storage table delete
azure storage table list az storage table list
azure storage table policy create az storage table policy create
azure storage table policy delete az storage table policy delete
azure storage table policy list az storage table policy list
azure storage table policy set az storage table policy update
azure storage table policy show az storage table policy show
azure storage table sas create az storage table generate-sas
azure vm capture az vm capture
azure vm create az vm create
azure vm deallocate az vm deallocate
azure vm delete az vm delete
azure vm disk attach az vm disk attach
azure vm disk attach-new az vm disk attach-new
azure vm disk detach az vm disk detach
azure vm extension get az vm extension get
azure vm extension set az vm extension set
azure vm extension-image list az vm extension image list
azure vm extension-image list-types az vm extension image list-names
azure vm extension-image list-versions az vm extension image list-versions
azure vm extension-image show az vm extension image show
azure vm generalize az vm generalize
azure vm get-instance-view az vm get-instance-view
azure vm get-serial-output az vm boot-diagnostics get-boot-log
azure vm image list az vm image list
azure vm image list-offers az vm image list-offers
azure vm image list-publishers az vm image list-publishers
azure vm image list-skus az vm image list-skus
azure vm image show az vm image show
azure vm list az vm list
azure vm list-usage az vm list-usage
azure vm redeploy az vm redeploy
azure vm reset-access az vm access
azure vm restart az vm restart
azure vm set az vm update
azure vm show az vm show
azure vm sizes az vm list-sizes
azure vm start az vm start
azure vm stop az vm stop
========================================================== ==========================================================

Просмотреть файл

@ -0,0 +1,140 @@
# Application templates
Application templates provide a way to partition the details of a job into two parts.
All of the details about how the job should be processed are moved into the **application template**, creating a reusable definition that is independent of a particular account. Application templates are parameterized to allow the processing to be customized without requiring modification of the template itself.
The job itself retains all of the account specific configuration for the job, specifying the appropriate pool, any runtime constraints and so on.
To link the two, the job now references the required application template and supplies any parameters required to customize processing for the needs of the current user.
## Example job
This sample Blender job specifies rendering of a Blender scene using application templates.
```json
{
"id": "blenderjob",
"displayName": "Blender Sample using Application Templates",
"poolInfo": {
"poolid" : "blender-pool"
},
"applicationTemplateInfo" : {
"filePath" : "render-template.json",
"parameters" : {
"jobName": "blender_render",
"blendFile": "scene.blend",
"frameStart": 1,
"frameEnd": 100,
"outputFileStorageUrl": "https://storage.blob.core.windows.net/blender-outputs"
}
}
}
```
The `applicationTemplateInfo` element gives the `filePath` to the application template and provides all the `parameters` required to configure the application template for use.
### Samples
The following samples use application templates:
* [Task per file](samples/hello-world/task-per-file)
* [Blender with application templates](samples/blender-appTemplate)
## Job schema changes
The batch job gains the following element, used to specify which application template should be used:
| Element name | Required | Type | Description |
| ----------------------- | -------- | ------------ | ------------------------------------------------------------------------------------------------------ |
| applicationTemplateInfo | Optional | Complex Type | Identifies an application template and supplies parameter values for expansion when the job is created |
### applicationTemplateInfo
This new complex object is used to specify the application template used and to provide any parameters required by the templates.
| Element name | Required | Type | Description |
| ------------ | --------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| filePath | Mandatory | String | Location of an application template in json format on the local client filesystem. <br/> Relative paths are resolved from the directory containing the `job.json` file; specifying `template.json` will look for a file in the same folder as the job itself. |
| parameters | Optional | Collection | A set of parameter values for use by the template, listed in standard JSON syntax. <br/> Must be supplied if the specified template defines any parameters without a defaultValue. |
## Application template schema
An application template broadly follows the existing schema for a Batch job, but with modifications to reflect that it is parameterized and contains only part of a full job.
### New properties
These newly introduced properties are used to define the templating capabilities.
| Element name | Required | Type | Description |
| ---------------- | -------- | ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| templateMetadata | Optional | Complex Type | Additional information about the template itself. <br/>Included for documentation purposes only. None of this information is passed through to the generated job. |
| parameters | Optional | Dictionary | A (potentially empty) dictionary of parameter definitions, indexed by the name of the property. |
### templateMetadata
Though completely optional and not actually used by the Xplat-CLI, template metadata is supported to allow templates to be somewhat self-documenting. These properties are recommended for each template.
| Element name | Required | Type | Description |
| ------------ | ----------- | ------- | --------------------------------------------------------------------------------------------- |
| description | Optional | String | A simple description of the functionality provided by the template. |
| author | Optional | String | The name or email address of the template author. |
| dateUpdated  | Optional    | String  | A human-readable message (a date or a version number) indicating when the template was last modified.|
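For example, a template might include a metadata block along these lines (the values are purely illustrative):
```json
"templateMetadata": {
    "description": "Renders a Blender scene as a parametric sweep over frames",
    "author": "contoso-rendering-team@example.com",
    "dateUpdated": "2017-02-15"
}
```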
### parameters
These ARM-style parameter definitions specify the parameters consumed by the template. Every parameter used by the template must be pre-defined in this collection.
| Element name | Required | Type | Description |
| ------------ | ------------ | ---------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| type         | Required     | String     | Specifies the data type of the parameter. <br/> One of `int`, `string` or `bool` only. <br/> Other parameter types are not supported in our initial release.   |
| defaultValue | Optional | `<type>` | Provides a default value for the parameter. <br/> This value will be used if no value is provided by the end user. <br/> Must be a value compatible with/convertible to **type**. |
| metadata | Optional | Dictionary | A list of name-value pairs of additional information. <br/> We recommend supplying a **description** for every parameter. |
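A parameter definition following the table above might be sketched like this (the name and values are illustrative only):
```json
"parameters": {
    "frameEnd": {
        "type": "int",
        "defaultValue": 100,
        "metadata": {
            "description": "Index of the last frame to render"
        }
    }
}
```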
### Reserved for application template use
When an application template is referenced by a job, these properties are reserved for use by the application template and may not be directly used on the job itself.
| Element name | Required | Type | Description |
| ------------------------- | --------- | ------------ | --------------------------------------------------------------------------------------------------------------- |
| jobManagerTask | Optional | Complex Type | Specifies details of a Job Manager task to be launched when the job is started. |
| jobPreparationTask | Optional | Complex Type | Specifies the Job Preparation task. |
| jobReleaseTask | Optional | Complex Type | Specifies the Job Release task. |
| commonEnvironmentSettings | Optional | Collection | A list of common environment variable settings. |
| usesTaskDependencies | Optional | Boolean | Specifies whether tasks in the job can define dependencies on each other. |
| onAllTasksComplete | Optional | String | Specifies an action the Batch service should take when all tasks in the job are in the completed state. |
| onTaskFailure | Optional | String | Specifies an action the Batch service should take when any task in the job fails. |
| taskFactory               | Optional  | Complex Type | Reference to a task factory which creates task(s) for the current job. <br/> *New feature in the Xplat-CLI.* |
### Reserved for job use
These job properties define the environment in which the job is run. These details are specific to the Batch account and the user running the job, and may not be specified on the application template. This table explains why these properties are not permitted on an application template.
| Element name | Type | Description |
| ----------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| id | String | A string that uniquely identifies the job within the account. <br/> Specifying the unique identifier of a job on a reusable template does not make sense. |
| displayName | String | A display name for the job. |
| priority | Int32 | The priority of the job. <br/> Prioritization of jobs is the responsibility of the end user, not an application template author. |
| constraints | Complex Type | Specifies the execution constraints for the job. <br/>The selection of appropriate constraints requires knowledge of the specific job being processed (e.g. size) and associated pool. |
| poolInfo                 | Complex Type | Specifies the pool on which the Batch service runs the job's tasks. <br/> Establishing and managing a pool is the responsibility of the end user, not an application template author, not least because a forgotten pool might result in unexpected costs.   |
| applicationTemplateInfo | Complex Type | A reference to an application template that defines the computation for this job. <br/> Creating a chain of templates will not be supported in the initial release. |
### Shared properties
Both the application template and the referencing job may specify **metadata** about the job.
| Element name | Required | Type | Description |
| ------------ | -------- | ------------ | --------------------------------------------------------------- |
| metadata | Optional | Collection | A list of name-value pairs associated with the job as metadata. |
The two lists will be merged, allowing for local management properties defined on the job (such as cost-center or department) to be merged with any properties defined on the template.
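As an illustrative sketch (names and values are hypothetical), a job might carry its own management metadata that is merged with whatever the template defines:
```json
"metadata": [
    { "name": "costCenter", "value": "CC-1234" },
    { "name": "department", "value": "render-farm" }
]
```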
Additional metadata will be created by the **Xplat-CLI** when processing the template to allow details of the job to be traced back to the original template. All these items will use the reserved prefix `az_batch:`.
| Item | Type | Description |
| ---------------------------- | ------ | ---------------------------------------------------------------------------- |
| `az_batch:template_filePath` | String | The fully qualified file path to the template used when the job was created. |

Просмотреть файл

@ -0,0 +1,223 @@
# Input files
## Input file upload
We have introduced a new command that allows a user to upload job input data directly
to the storage account linked to their Azure Batch account.
The uploaded files are stored under a name, or `file group`, that can be referenced by any job run
within that Batch account. For examples of how to use the data in a file group, see [Referencing input data](#referencing-input-data).
The data is also stored in a way that preserves the local directory structure, allowing
it to be recreated on the compute node.
Example local input data:
```
data/img_processing/1.png
data/img_processing/2.png
data/img_processing/3.png
data/alpha.png
data/rgb.png
data/1.png
```
Example command:
```bash
azure batch file upload C:\data\**\*.png raw-images
azure batch file upload /tmp/data/**/*.png raw-images
```
In the above commands, all the PNG files in the `data` directory (C:\\data or /tmp/data) will be uploaded to a linked storage account under
a file group named "raw-images". Any subdirectory structure relative to the `data` directory will
be retained.
If any of the data already exists in the file group, only files that have been modified since the previous upload will be uploaded, overwriting the existing copies.
Resulting structure:
```
raw-images/img_processing/1.png
raw-images/img_processing/2.png
raw-images/img_processing/3.png
raw-images/alpha.png
raw-images/rgb.png
raw-images/1.png
```
### Other options
#### flatten
This will discard the local directory structure and upload files to a flattened structure.
Example command:
```bash
azure batch file upload C:\data\**\*.png raw-images --flatten
azure batch file upload /tmp/data/**/*.png raw-images --flatten
```
Resulting structure:
```
raw-images/1.png
raw-images/2.png
raw-images/3.png
raw-images/alpha.png
raw-images/rgb.png
```
Notice that in the above example we have a name collision. When `flatten` is used, multiple files with the same name that were
previously distinguished by their file paths are not supported, regardless of the content of the files.
Files of the same name are compared by their date of last modification, and only the most recently modified file is stored.
#### path
This will upload any data into a subdirectory within the file group. This directory will also
be created on the compute node. Example command:
```bash
azure batch file upload C:\data\**\*.png raw-images --path first_pass
azure batch file upload /tmp/data/**/*.png raw-images --path first_pass
```
Resulting structure:
```
raw-images/first_pass/img_processing/1.png
raw-images/first_pass/img_processing/2.png
raw-images/first_pass/img_processing/3.png
raw-images/first_pass/alpha.png
raw-images/first_pass/rgb.png
raw-images/first_pass/1.png
```
You can also combine `path` and `flatten` like so:
```bash
azure batch file upload C:\data\**\*.png raw-images --path first_pass --flatten
azure batch file upload /tmp/data/**/*.png raw-images --path first_pass --flatten
```
Resulting structure:
```
raw-images/first_pass/1.png
raw-images/first_pass/2.png
raw-images/first_pass/3.png
raw-images/first_pass/alpha.png
raw-images/first_pass/rgb.png
```
## Referencing input data
Input data stored in linked storage under a file group can be simply referenced by a task
(including Job Preparation and Release tasks, Job Manager tasks and Pool Start tasks)
by using some new ResourceFile properties.
Example input data in the file group `raw-images`:
```
raw-images/first_pass/img_processing/1.png
raw-images/first_pass/img_processing/2.png
raw-images/first_pass/img_processing/3.png
raw-images/first_pass/alpha.png
raw-images/first_pass/rgb.png
raw-images/first_pass/1.png
image-config-data/first_pass/img_config_2016.cfg
image-config-data/second_pass/img_config_2016.cfg
```
Example JSON:
```json
"resourceFiles": [
{
"source": {
"fileGroup": "raw-images",
}
},
{
"source": {
"fileGroup": "image-config-data",
}
}
]
```
With the above configuration, all of the data found in the specified file groups will be included with the job.
Files on node (where `wd` refers to the task's current working directory):
```
wd/first_pass/img_processing/1.png
wd/first_pass/img_processing/2.png
wd/first_pass/img_processing/3.png
wd/first_pass/alpha.png
wd/first_pass/rgb.png
wd/first_pass/1.png
wd/first_pass/img_config_2016.cfg
wd/second_pass/img_config_2016.cfg
```
The data can be further filtered by including a prefix. This prefix can be a
whole filename, partial filename, or a subdirectory.
```json
"resourceFiles": [
{
"source": {
"fileGroup": "raw-images",
"prefix": "first_pass/img_"
}
},
{
"source": {
"fileGroup": "image-config-data",
"prefix": "first_pass/img_config_2016.cfg"
}
}
]
```
Files on node:
```
wd/first_pass/img_processing/1.png
wd/first_pass/img_processing/2.png
wd/first_pass/img_processing/3.png
wd/first_pass/img_config_2016.cfg
```
Finally, you can also specify the path to which the files will be downloaded on the
compute node. If the source reference is a single file, the specified `filePath` is assumed
to include the filename. Otherwise, if the source references multiple files,
`filePath` will be treated as a directory.
Where `filePath` is used as a directory, any directory structure already associated with the input data is retained in full and recreated within the specified directory.
```json
"resourceFiles": [
{
"source": {
"fileGroup": "raw-images",
"prefix": "first_pass/img_"
},
"filePath": "images"
},
{
"source": {
"fileGroup": "image-config-data",
"prefix": "first_pass/img_config_2016.cfg"
},
"filePath": "images/first_pass.cfg"
}
]
```
Files on node:
```
wd/images/first_pass/img_processing/1.png
wd/images/first_pass/img_processing/2.png
wd/images/first_pass/img_processing/3.png
wd/images/first_pass.cfg
```
## Samples
The following samples use the upload feature to make files available for processing:
* [Blender](samples/blender)
* [Task Per File](samples/hello-world/task-per-file)
* [OCR](samples/ocr)

Просмотреть файл

@ -0,0 +1,92 @@
# Output files
When adding a task, you can now declare a list of output files to be automatically uploaded to
an Azure Storage container of your choice.
An output file description can be added to a task or Job Manager task (or the taskFactory.repeatTask):
```json
{
"id" : "2",
"commandLine": "ffmpeg -i sampleVideo2.mkv -vcodec copy -acodec copy outputVideo2.mp4 -y",
"outputFiles": [
{
"filePattern": "outputVideo2.mp4",
"destination": {
"container": {
"path": "mytask2output.mp4",
"containerSas": "https://storage.blob.core.windows.net/container?sv=2015-04-05sig=tAp0r3I3SV5PbjpZ5CIjvuo1jdUs5xW"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
},
{
"filePattern": "../stderr.txt",
"destination": {
"container": {
"path": "2_error.log",
"containerSas": "https://storage.blob.core.windows.net/container?sv=2015-04-05sig=tAp0r3I3SV5PbjpZ5CIjvuo1jdUs5xW"
}
},
"uploadDetails": {
"taskStatus": "TaskFailure"
}
}
]
}
```
Multiple output file descriptions can be included to cover different file patterns and different upload circumstances.
In the above example, if the process completes successfully (exits with code 0), the output video will be uploaded;
otherwise the error log is uploaded for debugging.
## Options
| Property | Required | Type | Description |
| ------------- | --------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| filePattern | Mandatory | String | The name of the file or files to be uploaded. This could be an absolute path, or a path relative to the task working directory. This can be a single file, or a pattern using wildcards (`**` and `*`). |
| destination | Mandatory | Complex Type | The destination to which the output files specified in `filePattern` will be uploaded. |
| uploadDetails | Mandatory | Complex Type | The details regarding the upload conditions. |
### destination
| Property | Required | Type | Description |
| --------- | --------- | ------------ | ------------------------------------ |
| container | Mandatory | Complex Type | Details of the destination container |
### container
| Property | Required | Type | Description |
| ------------ | --------- | ------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| path         | Optional  | String | Path within the container to which data will be uploaded. If `filePattern` refers to multiple files, `path` will be considered a virtual directory within the container. Otherwise `path` will be considered to include the filename used in storage.  |
| containerSas | Mandatory | String | The SAS URL to the storage container used to hold the output data. The SAS must have write permissions. <br/> Note: A SAS URL to your entire storage account will not work, nor will one that has expired. |
### uploadDetails
| Property | Required | Type | Description |
| ---------- | --------- | ------- | ------------------------------------------------------------ |
| taskStatus | Mandatory | String  | Specifies the circumstances under which output files should be persisted. |
Available options for `taskStatus` are:
* `TaskSuccess` - Upload if the task completed with an exit code of zero.
* `TaskFailure` - Upload if the task completed with a nonzero exit code.
* `TaskComplete` - Upload always (irrespective of the exit code of the task).
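For example, to always persist a log file regardless of how the task exits, an output file description could use `TaskComplete` (a sketch; the paths and SAS URL are placeholders):
```json
{
    "filePattern": "../stdout.txt",
    "destination": {
        "container": {
            "path": "logs/stdout.txt",
            "containerSas": "<writable container SAS URL>"
        }
    },
    "uploadDetails": {
        "taskStatus": "TaskComplete"
    }
}
```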
## Samples
The following samples automatically upload their output files as they complete:
* [Task Per File](samples/hello-world/task-per-file)
* [Blender](samples/blender)
* [Blender with application templates](samples/blender-appTemplate)
* [FFMpeg](samples/ffmpeg)
* [OCR](samples/ocr)
## Troubleshooting
### Files do not upload to blob storage
If there are no files uploaded to blob storage when your task completes, check error messages in an `uploadlog.txt` file on the node that ran the task. (You can do this from the [Azure portal](https://portal.azure.com)).

Просмотреть файл

@ -0,0 +1,130 @@
# Support for package managers
There are many existing third-party package managers which streamline the installation of applications.
For example, Chocolatey on Windows and APT on Ubuntu/Debian allow users to easily install a wide range of applications,
including Java, Python, NodeJS, R, and many more.
By integrating with these package managers, we can empower users to install their applications on Batch nodes
without having to master each application's installation procedure. Three different package managers are currently
supported.
### Chocolatey
Chocolatey is a package manager for Windows that includes installations for many common applications/runtimes,
including: Java, Python, NodeJS, R, FFmpeg, Blender, etc.
### Advanced Package Tool (APT)
APT (apt-get) is a package manager used by some Linux distros including Ubuntu, Debian, and Fedora.
Some supported packages include Java, NodeJS, R, OpenFOAM.
### Yellowdog Updater, Modified (Yum)
Yum is a package manager used by some Linux distros including Red Hat Enterprise Linux, Fedora, CentOS.
Some supported packages include Java, NodeJS, R, OpenFOAM.
## Referencing packages
Packages can be referenced in a Batch pool (including autopool) or in the task template of a
task factory (`taskFactory.repeatTask`).
An example of an APT package reference in an IaaS pool:
```json
"pool": {
"id": "my-ffmpeg-pool",
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
},
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
},
"vmSize": "STANDARD_D1",
"targetDedicated": 5,
"enableAutoScale": false,
"packageReferences": [
{
"type": "aptPackage",
"id": "ffmpeg"
}
]
}
```
Another example of a package reference used in a task factory:
```json
"job": {
"id": "my-ffmpeg-job",
"constraints": {
"maxWallClockTime": "PT5H",
"maxTaskRetryCount": 3
},
"poolInfo": {
"poolId": "my-ffmpeg-pool"
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": 1,
"end": 500,
"step": 1
}
],
"repeatTask": {
"commandLine": "ffmpeg -i sampleVideo{0}.mkv -vcodec copy -acodec copy output{0}.mp4 -y",
"packageReferences": [
{
"type": "aptPackage",
"id": "ffmpeg"
}
]
}
}
}
```
### Options
Different options are available depending on the chosen package manager.
The required `type` field determines which package manager should be used to install the package.
This depends on the chosen OS of the compute nodes. Available options:
#### APT
| Property | Required | Type | Description |
| --------- | --------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `type` | Mandatory | string | Must be `"aptPackage"`<br/> The package will be installed using **apt-get**. <br/>Compatible with Ubuntu, Debian and Fedora. |
| `id` | Mandatory | string | Name or ID of the package to be installed as identified according to the package repository. <br/> Currently only packages found in the default repository are supported. |
| `version` | Optional | string | Specific version of a package to be installed. If not set, the latest version will be installed. |
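As a sketch, pinning a specific version of an APT package might look like this (the package id and version string are illustrative only):
```json
"packageReferences": [
    {
        "type": "aptPackage",
        "id": "ffmpeg",
        "version": "7:2.8.11-0ubuntu0.16.04.1"
    }
]
```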
#### Chocolatey
| Property | Required | Type | Description |
| --------------------- | --------- | ------- | ------------------------------------------------------------------------------------------------------------------------- |
| `type` | Mandatory | string | Must be `"chocolateyPackage"`<br/> The package will be installed using **Chocolatey**. <br/>Only compatible with Windows. |
| `id` | Mandatory | string | Name or ID of the package to be installed as identified according to the package repository. |
| `version` | Optional | string | Specific version of a package to be installed. If not set, the latest version will be installed. |
| `allowEmptyChecksums` | Optional | Boolean | If `true`, Chocolatey will install packages without a checksum for validation. Default is `false`. |
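A Chocolatey reference for a Windows pool might be sketched as follows (the package id is an example only):
```json
"packageReferences": [
    {
        "type": "chocolateyPackage",
        "id": "ffmpeg",
        "allowEmptyChecksums": true
    }
]
```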
#### Yum
| Property | Required | Type | Description |
| ----------------- | --------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `type` | Mandatory | string | Must be `"yumPackage"`<br/> The package will be installed using **Yum**. <br/>Compatible with Red Hat, CentOS and Fedora. |
| `id` | Mandatory | string | Name or ID of the package to be installed as identified according to the package repository. <br/> Currently only packages found in the default RPM repository are supported. |
| `version` | Optional | string | Specific version of a package to be installed. If not set, the latest version will be installed. |
| `disableExcludes` | Optional | string | Allows the user to specify packages that might otherwise be excluded by VM configuration (e.g. kernel packages). |
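And a Yum reference might look like this sketch (the package id and `disableExcludes` value are examples only):
```json
"packageReferences": [
    {
        "type": "yumPackage",
        "id": "ffmpeg",
        "disableExcludes": "main"
    }
]
```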
## Samples
The following samples use package managers to install software for use:
* [FFMpeg](samples/ffmpeg)
* [OCR](samples/ocr)

Просмотреть файл

@ -0,0 +1,14 @@
# New Features for Azure Batch
These experimental features provide new ways to work with Azure Batch without needing to write your own applications. See the [included samples](samples) for some ideas on use.
* [Input data upload to Batch linked storage accounts](inputFiles.md#input-file-upload)
* [Input data references using linked storage accounts](inputFiles.md#referencing-input-data)
* [Automatic persistence of task output files to Azure Storage](outputFiles.md)
* [Job and pool templates with parameterization](templates.md)
* [Task factories for automatic task generation on job submission](taskFactories.md)
* [Container-based workflows through integration with Batch Shipyard (Docker)](shipyard.md)
* [Split job configuration and management with reusable application templates](application-templates.md)
* [Easy software installation via package managers](packages.md)
See also the [available samples](samples).

Просмотреть файл

@ -0,0 +1,57 @@
{
"parameters": {
"vmSize": {
"type": "string",
"defaultValue": "STANDARD_D1",
"metadata": {
"description": "The size of the virtual machines that run the application"
},
"allowedValues": [
"STANDARD_A1",
"STANDARD_A2",
"STANDARD_A3",
"STANDARD_A4",
"STANDARD_D1",
"STANDARD_D2",
"STANDARD_D3",
"STANDARD_D4"
]
},
"vmCount": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "The number of virtual machines desired in the pool"
}
},
"poolId": {
"type": "string",
"defaultValue": "blender-pool",
"metadata": {
"description": "The unique id of the newly created Azure Batch pool"
}
}
},
"variables": {
"osType": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
}
},
"pool": {
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('poolId')]",
"virtualMachineConfiguration": {
"imageReference": "[variables('osType')]",
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
},
"vmSize": "[parameters('vmSize')]",
"targetDedicated": "[parameters('vmCount')]",
"enableAutoScale": false
}
}
}

Просмотреть файл

@ -0,0 +1,76 @@
# Azure Batch Blender using application templates
This sample shows how to render with **Blender** using an *application template* to separate the logic of processing from administration and management.
With an *application template*, the processing steps required for the job are defined in a separate file - see `render-template.json`, which is appropriately parameterized. The job itself references the template, supplies any required parameter values and specifies the pool on which the job is to run.
## Features used by this sample
* [Split job configuration and management with reusable application templates](../../application-templates.md)
* [Parametric sweep task factory](../../taskFactories.md#parametric-sweep)
* [Automatic persistence of task output files to Azure Storage](../../outputFiles.md)
## Prerequisites
You will need an Azure Batch account with a linked Azure Storage account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
You will need a Blender scene to render.
## Setup Pool
The sample specifies an Azure Batch Pool with the id `blender-pool` - if you don't already have one, run this command:
```bash
azure batch pool create --template pool.json
```
The default settings in `pool.json` specify a pool named `blender-pool` containing **3** **STANDARD_D1** virtual machines.
**You are billed for your Azure Batch pools, so don't forget to delete this pool through the [Azure portal](https://portal.azure.com) when you're done.**
If you want to use an existing pool to run the job, modify the `render-job.json` file to specify the unique id of your pool.
## Upload files
To upload your blender files:
```bash
azure batch file upload <path> blender-data
```
Run this command on a folder containing the Blender files you want to process. `blender-data` is the default value of the `sceneData` parameter in the job template. If you upload your files to a different file group, be sure to provide that file group name for the `sceneData` parameter when creating your job (see the next section).
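For example, if you uploaded to a file group named `my-scene-files` (a hypothetical name), the `parameters` block of `render-job.json` could supply that value alongside the others:
```json
"parameters": {
    "sceneData": "my-scene-files",
    "blendFile": "scene.blend",
    "outputPrefix": "blenderjob",
    "frameStart": 1,
    "frameEnd": 100,
    "outputFileStorageUrl": "<container SAS URL>"
}
```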
## Create the job
Edit the `render-job.json` file to specify appropriate parameters for the job:
| Parameter | Description |
| -------------------- | -------------------------------------------------------- |
| blendFile | The Blender scene file to be rendered |
| sceneData | The file group where the input data is stored |
| outputPrefix | The prefix to use when naming the rendered outputs |
| frameStart | Index of the first frame to render |
| frameEnd | Index of the last frame to render |
| outputFileStorageUrl | The SAS URL for a container where outputs will be stored |
When you are ready to run the job, use this command:
```bash
azure batch job create --json-file render-job.json
```
## Monitor the job
You can use this command to monitor the tasks in the job and their progress:
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `render-job.json` | Specifies the job to run, including the pool to use and any runtime constraints. <br/> Does not contain any of the job logic - instead it has an `applicationTemplateInfo` element that specifies the template to use (see `filePath`) and any parameters required by the template (see `parameters`). |
| `render-template.json` | Template file describing the required processing, making use of the experimental **taskFactory** feature. |

Просмотреть файл

@ -0,0 +1,17 @@
{
"id": "blenderjob",
"displayName": "Blender",
"poolInfo": {
"poolId" : "blender-pool"
},
"applicationTemplateInfo" : {
"filePath" : "render-template.json",
"parameters" : {
"blendFile": "scene.blend",
"frameStart": 1,
"frameEnd": 100,
"outputFileStorageUrl": "https://storage.blob.core.windows.net/blender-outputs",
"outputPrefix": "blenderjob"
}
}
}

Просмотреть файл

@ -0,0 +1,79 @@
{
"templateMetadata" : {
"description" : "Sample application template for working with Blender."
},
"parameters": {
"blendFile": {
"type": "string",
"metadata": {
"description": "The Blender scene file to be rendered"
}
},
"sceneData": {
"type": "string",
"defaultValue": "blender-data",
"metadata": {
"description": "The file group where the input data is stored"
}
},
"outputPrefix": {
"type": "string",
"metadata": {
"description": "The prefix to use when naming the rendered outputs"
}
},
"frameStart": {
"type": "int",
"metadata": {
"description": "Index of the first frame to render"
}
},
"frameEnd": {
"type": "int",
"metadata": {
"description": "Index of the last frame to render"
}
},
"outputFileStorageUrl": {
"type": "string",
"metadata": {
"description": "The SAS URL for a container where outputs will be stored"
}
}
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": "[parameters('frameStart')]",
"end": "[parameters('frameEnd')]",
"step": 1
}
],
"repeatTask": {
"displayName": "Frame {0}",
"commandLine": "blender -b \"[parameters('blendFile')]\" -o \"[parameters('outputPrefix')]_####\" -f {0} -t 0",
"resourceFiles": [
{
"source": {
"fileGroup": "[parameters('sceneData')]"
}
}
],
"outputFiles": [
{
"filePattern": "[parameters('outputPrefix')]_*",
"destination": {
"container": {
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
}
]
}
},
"onAllTasksComplete": "terminateJob"
}

Просмотреть файл

@ -0,0 +1,17 @@
{
"jobName": {
"value": "blender_render"
},
"blendFile": {
"value": "scene.blend"
},
"frameStart": {
"value": 1
},
"frameEnd": {
"value": 100
},
"outputFileStorageUrl": {
"value": "https://storage.blob.core.windows.net/blender-outputs"
}
}

Просмотреть файл

@ -0,0 +1,64 @@
# Azure Batch Blender
This sample shows how to use **Blender** to render using a parameterized template that specifies a `virtualMachineConfiguration`-based autopool.
## Features used by this sample
* [Job template with parameterization](../../templates.md)
* [Parametric sweep task factory](../../taskFactories.md#parametric-sweep)
* [Automatic persistence of task output files to Azure Storage](../../outputFiles.md)
## Prerequisites
You will need an Azure Batch account with a linked Azure Storage account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
You will need a Blender scene to render.
## Upload files
To upload your blender files:
```bash
azure batch file upload <path> blender-data
```
Run this command on a folder containing the Blender files you want to process. `blender-data` is the default value of the `sceneData` parameter in the job template. If you upload your files to a different file group, be sure to provide that file group name for the `sceneData` parameter when creating your job (see the next section).
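For example, if you uploaded to a file group named `my-scene-files` (a hypothetical name), add a matching entry to `parameters.json`:
```json
"sceneData": {
    "value": "my-scene-files"
}
```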
## Create the job
Modify the `parameters.json` file to specify appropriate parameters for the job. The full set of available parameters is defined in `render.json`:
| Parameter | Required | Description |
| -------------------- | --------- | ------------------------------------------------------------------------------------------- |
| blendFile | Mandatory | File name of the Blender scene to be rendered |
| sceneData | Optional | Name of the file group where the input data is stored. <br/> Defaults to `blender-data`. |
| numberNodes | Optional | Number of nodes in the Azure Batch pool where the job will run. <br/> Defaults to `5`. |
| vmSize | Optional | Size of the virtual machines that run the application. <br/> Defaults to `STANDARD_A1`. |
| jobName | Mandatory | Prefix of the name of the Azure Batch job, also used to prefix rendered outputs. |
| frameStart | Mandatory | Index of the first frame to render. |
| frameEnd | Mandatory | Index of the last frame to render. |
| outputFileStorageUrl | Mandatory | SAS URL for a container where outputs will be stored. |
When you are ready to run the job, use this command:
```bash
azure batch job create --template render.json --parameters parameters.json
```
This job uses an **autopool** which will automatically be deleted once the job reaches the Completed state. If the job never reaches the Completed state (e.g. when tasks are unable to run because of scheduling errors, or errors downloading files from storage), you will continue to be charged for the pool. In this case, you may want to use the [Azure portal](https://portal.azure.com) to manually delete the pool to ensure you're not billed unnecessarily.
## Monitor the job
You can use this command to monitor the tasks in the job and their progress:
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `render.json` | Specifies the job to run, including a parametric sweep task factory, the autopool definition, parameter and variable definitions. |
| `parameters.json` | Provides parameter values used to create the actual job that will run. |

Просмотреть файл

@ -0,0 +1,138 @@
{
"parameters": {
"blendFile": {
"type": "string",
"metadata": {
"description": "The Blender scene file to be rendered"
}
},
"sceneData": {
"type": "string",
"defaultValue": "blender-data",
"metadata": {
"description": "The file group where the input data is stored"
}
},
"numberNodes": {
"type": "int",
"defaultValue": 5,
"metadata": {
"description": "The number of nodes in the Azure Batch pool where the job will run"
}
},
"vmSize": {
"type": "string",
"metadata": {
"description": "The size of the virtual machines that run the application"
},
"defaultValue": "STANDARD_A1",
"allowedValues": [
"STANDARD_A1",
"STANDARD_A2",
"STANDARD_A3",
"STANDARD_A4",
"STANDARD_D1",
"STANDARD_D2",
"STANDARD_D3",
"STANDARD_D4"
]
},
"jobName": {
"type": "string",
"metadata": {
"description": "The prefix of the name of the Azure Batch job, also used to prefix rendered outputs"
}
},
"frameStart": {
"type": "int",
"metadata": {
"description": "The first frame to render"
}
},
"frameEnd": {
"type": "int",
"metadata": {
"description": "The last frame to render"
}
},
"outputFileStorageUrl": {
"type": "string",
"metadata": {
"description": "The SAS URL for a container where outputs will be stored"
}
}
},
"variables": {
"osType": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
},
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"properties": {
"id": "[parameters('jobName')]_[parameters('frameStart')]-[parameters('frameEnd')]",
"displayName": "[parameters('blendFile')]",
"onAllTasksComplete": "terminateJob",
"poolInfo": {
"autoPoolSpecification": {
"autoPoolIdPrefix": "blender_pool",
"poolLifetimeOption": "job",
"keepAlive": false,
"pool": {
"displayName": "Blender pool running: [parameters('jobName')]",
"vmSize": "[parameters('vmSize')]",
"virtualMachineConfiguration": "[variables('osType')]",
"packageReferences": [
{
"type": "aptPackage",
"id": "blender"
}
],
"targetDedicated": "[parameters('numberNodes')]",
"enableAutoScale": false
}
}
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": "[parameters('frameStart')]",
"end": "[parameters('frameEnd')]",
"step": 1
}
],
"repeatTask": {
"displayName": "Frame {0}",
"commandLine": "blender -b \"[parameters('blendFile')]\" -o \"[parameters('jobName')]_####\" -f {0} -t 0",
"resourceFiles": [
{
"source": {
"fileGroup": "[parameters('sceneData')]"
}
}
],
"outputFiles": [
{
"filePattern": "[parameters('jobName')]_*",
"destination": {
"container": {
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
}
]
}
}
}
}
}

Просмотреть файл

@ -0,0 +1,43 @@
{
"parameters": {
"poolId": {
"type": "string",
"defaultValue": "docker-caffe",
"metadata": {
"description": "The id of Azure Batch pool"
}
},
"jobId": {
"type": "string",
"defaultValue": "docker-caffe-job",
"metadata": {
"description": "The id of Azure Batch job"
}
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('jobId')]",
"poolInfo": {
"poolId": "[parameters('poolId')]"
},
"taskFactory": {
"type": "taskCollection",
"tasks": [
{
"id": "task01",
"commandLine": "/caffe/run_mnist.sh",
"clientExtensions": {
"dockerOptions": {
"image": "alfpark/caffe:cpu",
"remove_container_after_exit": true
}
}
}
]
}
}
}
}

Просмотреть файл

@ -0,0 +1,33 @@
{
"parameters": {
"poolId": {
"type": "string",
"defaultValue": "docker-caffe",
"metadata": {
"description": "The id of Azure Batch pool"
}
}
},
"pool": {
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('poolId')]",
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS"
}
},
"vmSize": "STANDARD_D1_V2",
"targetDedicated": 1,
"maxTasksPerNode": 1,
"clientExtensions": {
"dockerOptions": {
"image": "alfpark/caffe:cpu"
}
}
}
}
}

Просмотреть файл

@ -0,0 +1,59 @@
# Caffe on Docker via Batch Shipyard Integration
This sample shows how to run Caffe in a Docker container on a Batch compute node.
## Features used by this sample
* [Batch Shipyard integration and Docker support](../../shipyard.md)
## Prerequisites
* You will need an Azure Batch account with a linked Azure Storage account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
* You will need to install Batch Shipyard on your local machine. See the help page on [Batch Shipyard integration and Docker support](../../shipyard.md) for details.
* You must agree to the [Caffe license](https://github.com/BVLC/caffe/blob/master/LICENSE) prior to use.
## Create the pool
The `pool.docker.caffe.json` file contains the following parameters. You may optionally create a `parameters.json` file to provide your own values.
| Parameter | Required | Description |
| --------- | --------- | --------------------------------------------------------------------------------------------------- |
| poolId | Optional | Unique id of the Azure Batch pool to create. Defaults to `docker-caffe` if not otherwise specified. |
When you are ready to create the pool, use this command:
```bash
azure batch pool create --template pool.docker.caffe.json [--parameters parameters.json]
```
**You are billed for your Azure Batch pools, so don't forget to delete this pool through the [Azure portal](https://portal.azure.com) when you're done.**
## Create the job
The `job.docker.caffe.json` file contains the following parameters. You may optionally create a `parameters.json` file to provide your own values.
| Parameter | Required | Description |
| --------- | --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| jobId | Optional | Unique id of the Azure Batch job to create. Defaults to `docker-caffe-job` if not otherwise specified. |
| poolId | Optional | Unique id of the Azure Batch pool which runs the job. Must match the `poolId` used when you created the pool (see above). Defaults to `docker-caffe` if not otherwise specified. |
When you are ready to create the job, use this command:
```bash
azure batch job create --template job.docker.caffe.json [--parameters parameters.json]
```
## Monitor the job
You can use this command to monitor the task in the job and its progress:
``` bash
azure batch task show --job-id <jobid> --id 'task01'
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `pool.docker.caffe.json` | A template defining the pool which will run the job. The pool contains a single compute node which is configured for running Docker containers. |
| `job.docker.caffe.json` | A template defining the job and a task. The task will run Caffe in a Docker container. |

Просмотреть файл

@ -0,0 +1,17 @@
{
"jobId": {
"value": "<job id>"
},
"inputFileGroup": {
"value": "<file group name>"
},
"outputFileStorageUrl": {
"value": "<blob url with sas>"
},
"taskStart": {
"value": <the first id of the files to convert>
},
"taskEnd": {
"value": <the last id of the files to convert>
}
}

Просмотреть файл

@ -0,0 +1,78 @@
{
"parameters": {
"inputFileGroup": {
"type": "string",
"metadata": {
"description": "The auto-storage group where the input data is stored"
}
},
"outputFileStorageUrl": {
"type": "string",
"metadata": {
"description": "The SAS URL of the Azure storage container where the output files will be stored"
}
},
"poolId": {
"type": "string",
"defaultValue": "ffmpeg-pool",
"metadata": {
"description": "The id of the Azure Batch pool which will run the job"
}
},
"jobId": {
"type": "string",
"metadata": {
"description": "The id of Azure Batch job"
}
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('jobId')]",
"constraints": {
"maxWallClockTime": "PT5H",
"maxTaskRetryCount": 1
},
"poolInfo": {
"poolId": "[parameters('poolId')]"
},
"taskFactory": {
"type": "taskPerFile",
"source": {
"fileGroup": "[parameters('inputFileGroup')]"
},
"repeatTask": {
"commandLine": "ffmpeg -y -i {fileName} -acodec libmp3lame {fileNameWithoutExtension}.mp3",
"resourceFiles": [
{
"blobSource": "{url}",
"filePath" : "{fileName}"
}
],
"outputFiles": [
{
"filePattern": "{fileNameWithoutExtension}.mp3",
"destination": {
"container": {
"path": "{fileNameWithoutExtension}.mp3",
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
}
],
"packageReferences": [
{
"type": "aptPackage",
"id": "ffmpeg"
}
]
}
}
}
}
}

Просмотреть файл

@ -0,0 +1,103 @@
{
"parameters": {
"inputFileGroup": {
"type": "string",
"metadata": {
"description": "The auto-storage group where the input data is stored"
}
},
"outputFileStorageUrl": {
"type": "string",
"metadata": {
"description": "The SAS URL for a container where outputs will be stored"
}
},
"inputType": {
"type": "string",
"defaultValue": "wav",
"metadata": {
"description": "The extension of the input data"
}
},
"poolId": {
"type": "string",
"defaultValue": "ffmpeg-pool",
"metadata": {
"description": "The id of Azure Batch pool which runs the job"
}
},
"jobId": {
"type": "string",
"metadata": {
"description": "The id of Azure Batch job"
}
},
"taskStart": {
"type": "int",
"metadata": {
"description": "The sweep start parameter"
}
},
"taskEnd": {
"type": "int",
"metadata": {
"description": "The sweep end parameter"
}
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('jobId')]",
"constraints": {
"maxWallClockTime": "PT5H",
"maxTaskRetryCount": 1
},
"poolInfo": {
"poolId": "[parameters('poolId')]"
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": "[parameters('taskStart')]",
"end": "[parameters('taskEnd')]",
"step": 1
}
],
"repeatTask": {
"commandLine": "ffmpeg -y -i sample{0}.[parameters('inputType')] -acodec libmp3lame output.mp3",
"resourceFiles": [
{
"source": {
"fileGroup": "[parameters('inputFileGroup')]",
"prefix": "sample{0}.[parameters('inputType')]"
}
}
],
"outputFiles": [
{
"filePattern": "output.mp3",
"destination": {
"container": {
"path": "audio{0}.mp3",
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
}
],
"packageReferences": [
{
"type": "aptPackage",
"id": "ffmpeg"
}
]
}
}
}
}
}

Просмотреть файл

@ -0,0 +1,57 @@
{
"parameters": {
"vmSize": {
"type": "string",
"defaultValue": "STANDARD_D1",
"metadata": {
"description": "The size of the virtual machines that runs the application"
},
"allowedValues": [
"STANDARD_A1",
"STANDARD_A2",
"STANDARD_A3",
"STANDARD_A4",
"STANDARD_D1",
"STANDARD_D2",
"STANDARD_D3",
"STANDARD_D4"
]
},
"vmCount": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "The number of the virtual machines"
}
},
"poolId": {
"type": "string",
"defaultValue": "ffmpeg-pool",
"metadata": {
"description": "The id of Azure Batch pool"
}
}
},
"variables": {
"osType": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
}
},
"pool": {
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('poolId')]",
"virtualMachineConfiguration": {
"imageReference": "[variables('osType')]",
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
},
"vmSize": "[parameters('vmSize')]",
"targetDedicated": "[parameters('vmCount')]",
"enableAutoScale": false
}
}
}

Просмотреть файл

@ -0,0 +1,115 @@
# Azure Batch FFMpeg Pool/Job Template
This sample shows how to use `ffmpeg` to convert one kind of media file (`WAV`) to another type of media file (`MP3`). Two approaches are shown, one using a *parametric sweep task factory* and one using a *task per file task factory*.
## Features used by this sample
* [Job template with parameterization](../../templates.md)
* [Automatic persistence of task output files to Azure Storage](../../outputFiles.md)
* [Easy software installation via package managers](../../packages.md)
* [Parametric sweep task factory](../../taskFactories.md#parametric-sweep)
* [Task per file task factory](../../taskFactories.md#task-per-file)
## Prerequisites
You will need an Azure Batch account with a linked Azure Storage account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
## Create a pool
Create your pool using the default settings:
``` bash
azure batch pool create --template pool.json
```
The default settings create a pool named `ffmpeg-pool` with **3** x **STANDARD_D1** virtual machines.
If you want to change the default values of the pool creation, create a JSON file to supply the parameters of your pool. If you have a large number of media files to convert, you should use a larger pool or bigger VMs in the pool.
In order to create the pool with your own configurations, run instead:
``` bash
azure batch pool create --template pool.json --parameters <your settings JSON file>
```
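For example, a parameters file (with a name of your choosing, such as `pool.parameters.json`) that requests a larger pool of bigger VMs might look like the following sketch; the values are only illustrative:
```json
{
  "poolId": {
    "value": "ffmpeg-pool"
  },
  "vmSize": {
    "value": "STANDARD_D3"
  },
  "vmCount": {
    "value": 10
  }
}
```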
**You are billed for your Azure Batch pools, so don't forget to delete this pool through the [Azure portal](https://portal.azure.com) when you're done.**
## Using a parametric sweep for processing
### Upload files
Upload your WAV media files by running this command on a folder containing media files (`*.wav`).
``` bash
azure batch file upload <path> <group>
```
The parametric sweep expects the files to be named `sample1.wav`, `sample2.wav`, `sample3.wav` and so on - each with the prefix `sample` and an increasing index number. It's important that your files are sequentially numbered with no gaps.
### Configure parametric sweep parameters
Modify `job.parameters.json` to supply parameters to the template. If you want to configure other options of the job, such as the pool id, look in the `job.sweep.json` parameters section to see what options are available. A filled-in example follows the table below.
| Parameter            | Required  | Description |
| -------------------- | --------- | ----------- |
| poolId               | Optional  | Name of the Azure Batch Pool to use for processing. <br/> Must match the pool you created earlier. Default value if not otherwise specified: `ffmpeg-pool`. |
| inputFileGroup       | Mandatory | Name of the file group in your storage account containing the files to process. <br/> Must match the name of the group used in the `azure batch file upload` command earlier. |
| outputFileStorageUrl | Mandatory | SAS enabled URL to a writable storage container for the output files. |
| jobId                | Mandatory | Unique id of the job for processing. <br/> Must not duplicate the `id` of any existing job. |
| taskStart            | Mandatory | The index # of the first file for processing. <br/>Must match the index of the first WAV file you uploaded earlier. <br/>e.g. specify `1` to reference `sample1.wav`. |
| taskEnd              | Mandatory | The index # of the last file for processing. <br/>Must match the index of the last WAV file you uploaded earlier. <br/> e.g. specify `10` to reference `sample10.wav`. |
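For example, a filled-in `job.parameters.json` for the sweep might look like the following sketch; all values are illustrative and the SAS URL is a placeholder:
```json
{
  "inputFileGroup": {
    "value": "ffmpeg-input"
  },
  "outputFileStorageUrl": {
    "value": "https://<storage account>.blob.core.windows.net/<container>?<sas token>"
  },
  "jobId": {
    "value": "ffmpeg-sweep-job"
  },
  "taskStart": {
    "value": 1
  },
  "taskEnd": {
    "value": 10
  }
}
```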
### Run the job with tasks generated by a parametric sweep
To create your job and tasks:
``` bash
azure batch job create --template job.sweep.json --parameters job.parameters.json
```
## Using a task per file for processing
### Upload files
Upload your WAV media files by running this command on a folder containing media files (`*.wav`):
``` bash
azure batch file upload <path> <group>
```
Unlike the sample using parametric sweep, there's no requirement for your filenames to conform to a specific pattern.
### Configure task per file parameters
Modify the `job.parameters.json` file to supply parameters to the template. If you want to configure other options of the job, such as the pool id, you can look in the `job.perFile.json` parameters section to see what options are available. A filled-in example follows the table below.
| Parameter | Required | Description |
| -------------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| poolId | Optional | Name of the Azure Batch Pool to use for processing. <br/> Must match the pool you created earlier. Default value if not otherwise specified: `ffmpeg-pool`. |
| inputFileGroup | Mandatory | Name of the file group in your storage account containing the files to process. <br/> Must match the name of the group used in the `azure batch file upload` command earlier. |
| outputFileStorageUrl | Mandatory | SAS enabled URL to a writable storage container for the output files. |
| jobId | Mandatory | Unique id of the job for processing. <br/> Must not duplicate the `id` of any existing job. |
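For the per-file variant the same file needs only the job id, file group and output URL; again, the values below are illustrative:
```json
{
  "inputFileGroup": {
    "value": "ffmpeg-input"
  },
  "outputFileStorageUrl": {
    "value": "https://<storage account>.blob.core.windows.net/<container>?<sas token>"
  },
  "jobId": {
    "value": "ffmpeg-perfile-job"
  }
}
```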
### Run the job with tasks generated per input file
To create your job and tasks:
``` bash
azure batch job create --template job.perFile.json --parameters job.parameters.json
```
## Monitor the job
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
The outputs of the tasks will be uploaded to the Azure Storage container which you specified as the individual tasks complete.
The target container will contain a new virtual directory for each task that ran.
## Structure of the sample
| File | Content |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `pool.json` | A template for creating a pool for processing with `ffmpeg`. <br/> By default, this template will create a pool called `ffmpeg-pool` containing **3** x **STANDARD_D1** virtual machines.<br/> You will need to create a parameter file (with suggested name `pool.parameters.json`) if you want to customize the pool. |
| `job.sweep.json` | A template for creating a job that uses a parametric sweep to process a set of sequentially numbered input files with `ffmpeg`. |
| `job.parameters.json` | Specifies values for the parametric sweep parameters defined in the file `job.sweep.json`. You will need to provide values for the placeholders present in this file before creating your job. |
| `job.perFile.json` | A template for creating a job that uses a per-file task factory to process a set of input files with `ffmpeg`. |


@ -0,0 +1,58 @@
{
"parameters": {
"jobId": {
"type": "string",
"metadata": {
"description": "The id of Azure Batch job"
}
},
"poolId": {
"type": "string",
"metadata": {
"description": "The id of Azure Batch pool which runs the job"
}
},
"taskStart": {
"type": "int",
"metadata": {
"description": "The sweep start parameter"
}
},
"taskEnd": {
"type": "int",
"metadata": {
"description": "The sweep end parameter"
}
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('jobId')]",
"constraints": {
"maxWallClockTime": "PT5H",
"maxTaskRetryCount": 1
},
"poolInfo": {
"poolId": "[parameters('poolId')]"
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": "[parameters('taskStart')]",
"end": "[parameters('taskEnd')]",
"step": 1
}
],
"repeatTask": {
"commandLine": "/bin/bash -c 'echo Hello world from task {0}'",
"constraints": {
"retentionTime":"PT1H"
}
}
}
}
}
}


@ -0,0 +1,14 @@
{
"jobId": {
"value": "<Job id>"
},
"poolId": {
"value": "<Pool id>"
},
"taskStart": {
"value": <Task start value>
},
"taskEnd": {
"value": <Task end value>
}
}


@ -0,0 +1,41 @@
{
"parameters": {
"vmSize": {
"type": "string",
"defaultValue": "STANDARD_D1_V2",
"metadata": {
"description": "The size of the virtual machines that run the application"
}
},
"vmCount": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "The number of virtual machines"
}
},
"poolId": {
"type": "string",
"metadata": {
"description": "The id of the Azure Batch pool"
}
}
},
"pool": {
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('poolId')]",
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS"
},
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
},
"vmSize": "[parameters('vmSize')]",
"targetDedicated": "[parameters('vmCount')]"
}
}
}


@ -0,0 +1,5 @@
{
"poolId": {
"value": "<Pool id>"
}
}


@ -0,0 +1,75 @@
# Azure Batch Pool/Job Template
This sample shows how to create a pool, and run a parametric sweep job on it, using *parameterized* templates for both the pool and the job.
## Features used by this sample
* [Pool and job templates with parameterization](../../templates.md)
* [Parametric sweep task factory](../../taskFactories.md#parametric-sweep)
* [Task per file task factory](../../taskFactories.md#task-per-file)
## Prerequisites
You will need an Azure Batch account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
## Preparation
Modify the parameters specified in `pool.parameters.json` to configure your pool. Available parameters are defined in `pool.json`:
| Parameter | Required | Description |
| --------- | --------- | --------------------------------------------------------------------------------------------------------------------------- |
| poolId | Mandatory | Unique id of the Azure Batch pool to create. |
| vmCount | Optional | Number of virtual machines. <br/> Defaults to **3** if not otherwise specified. |
| vmSize | Optional | Size of the virtual machines that run the application. <br/> Defaults to **STANDARD_D1_V2** if not otherwise specified. |
Modify the parameters specified in `job.parameters.json` as appropriate to configure your job. Available parameters are defined in `job.json` (a filled-in example follows the table):
| Parameter | Required | Description |
| --------- | --------- | -------------------------------------------------------------------------------------------------------------- |
| jobId | Mandatory | Unique id of the Azure Batch job to create. |
| poolId | Mandatory | Unique id of Azure Batch pool which runs the job. <br/> Must match the `poolId` used for the pool (see above). |
| taskStart | Mandatory | Start index of the parametric sweep. |
| taskEnd | Mandatory | Finishing index (inclusive) of the parametric sweep. |
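For instance, a filled-in `job.parameters.json` could look like the following sketch; note that `taskStart` and `taskEnd` are integers, so their values are not quoted (all values are illustrative):
```json
{
  "jobId": {
    "value": "parametricsweep-job"
  },
  "poolId": {
    "value": "parametricsweep-pool"
  },
  "taskStart": {
    "value": 1
  },
  "taskEnd": {
    "value": 5
  }
}
```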
## Run commands
To create your pool:
``` bash
azure batch pool create --template pool.json --parameters pool.parameters.json
```
**You are billed for your Azure Batch pools, so don't forget to delete this pool through the [Azure portal](https://portal.azure.com) when you're done.**
To create your job:
``` bash
azure batch job create --template job.json --parameters job.parameters.json
```
## Monitor the job
You can use this command to monitor the tasks in the job and their progress:
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `pool.json` | A template for defining a new pool. |
| `pool.parameters.json` | Provides values for the parameters defined in `pool.json`. <br/> You will need to provide a value for `poolId` before pool creation will succeed. If you do not want to use the default values for `vmCount` or `vmSize`, add values for those parameters to this file before creating the pool. |
| `job.json` | A template for a new job. |
| `job.parameters.json` | Provides values for the parameters defined in `job.json`. <br/> You will need to provide actual values for these parameters before job creation will succeed. |
Note that the **taskFactory** feature used in `job.json` is an experimental feature currently only available through the XPlat CLI.
## Troubleshooting
### "The value provided for one of the properties in the request body is invalid."
This error will occur during pool creation if you have not modified the `pool.parameters.json` to provide a legal pool id.
This error will occur during job creation if you have not modified the parameters in `job.parameters.json` to specify the job id, pool id and so on.
In either case, review the `azure.err` listed in the logs to see more details about the error.


@ -0,0 +1,21 @@
{
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "helloworld-job",
"poolInfo": {
"poolId": "helloworld-pool"
},
"taskFactory": {
"type": "taskCollection",
"tasks": [
{
"id" : "mytask1",
"commandLine": "cmd /c echo Hello world"
}
]
}
}
}
}


@ -0,0 +1,15 @@
{
"pool": {
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2016-12-01",
"properties": {
"id": "helloworld-pool",
"cloudServiceConfiguration": {
"osFamily": "4",
"targetOSVersion": "*"
},
"vmSize": "small",
"targetDedicated": 1
}
}
}


@ -0,0 +1,60 @@
# Azure Batch Pool/Job Template
This sample shows how to create a pool, and run a simple job on it using templates.
The pool is a standard *cloud service configuration* pool with a single small virtual machine.
The job contains just one task, echoing the greeting "Hello World" to standard output.
## Features used by this sample
* [Pool and job templates with parameterization](../../templates.md)
* [Task collection task factory](../../taskFactories.md#task-collection)
## Prerequisites
You will need an Azure Batch account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
## Create Pool
To create your pool:
```bash
azure batch pool create --template pool.json
```
The template specifies an Azure Batch Pool with the id `helloworld-pool` that contains a single small virtual machine running Windows.
**You are billed for your Azure Batch pools, so don't forget to delete this pool through the [Azure portal](https://portal.azure.com) when you're done.**
## Create Job
To create your job:
``` bash
azure batch job create --template job.json
```
The template specifies a job with one task, printing *Hello World* to standard output.
## Monitor the job
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| ----------- | ------------------------------------------------- |
| `pool.json` | A template specifying the pool to create. |
| `job.json` | A template specifying the job to run on the pool. |
To change either the pool or the job, modify the details within the `properties` element of the template in the appropriate file.
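For example, a sketch of `pool.json` modified to use a larger node size and two nodes might look like this (the `medium` size and the node count are just illustrative choices):
```json
{
  "pool": {
    "type": "Microsoft.Batch/batchAccounts/pools",
    "apiVersion": "2016-12-01",
    "properties": {
      "id": "helloworld-pool",
      "cloudServiceConfiguration": {
        "osFamily": "4",
        "targetOSVersion": "*"
      },
      "vmSize": "medium",
      "targetDedicated": 2
    }
  }
}
```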
For more information on the properties available when creating a new pool, see [*Add a pool*](https://docs.microsoft.com/rest/api/batchservice/pool) from the [Batch REST API](https://docs.microsoft.com/rest/api/batchservice/) reference.
For more information on the properties available when creating jobs and tasks see [*Add a job*](https://docs.microsoft.com/rest/api/batchservice/job) and [*Add a task*](https://docs.microsoft.com/rest/api/batchservice/task).
Note that the **taskFactory** feature used in `job.json` is not documented as a part of the REST API as it is currently an experimental feature only available through the XPlat CLI.


@ -0,0 +1,31 @@
{
"id": "taskperfileapptemplate",
"displayName": "Task Per File with Application Template",
"applicationTemplateInfo" : {
"filePath" : "template.json",
"parameters" : {
"testData" : "<linked storage group name>",
"outputFileStorageUrl" : "<linked blob storage container url with write access>"
}
},
"poolInfo": {
"autoPoolSpecification": {
"autoPoolIdPrefix": "movement",
"poolLifetimeOption": "job",
"keepAlive": false,
"pool": {
"vmSize": "STANDARD_D1_V2",
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
},
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
},
"targetDedicated": 3
}
}
}
}


@ -0,0 +1,92 @@
# Azure Batch data movement using application templates
This sample shows how to create a job using an *application template* to separate the logic of processing from administration and management. This sample also demonstrates how to use `ResourceFiles` and `OutputFiles` to automatically download files to the virtual machine and to upload the output after the task completes.
With an application template, the processing steps required for the job are defined in a separate file (see the file `template.json`) which is appropriately parameterized. The job itself references the template, supplies any required parameter values and specifies the pool on which the job is to run.
This particular *application template* runs a simple commandline (`cat {fileName}`) for each of the files found in a specified file group from blob storage.
## Features used by this sample
* [Split job configuration and management with reusable application templates](../../../application-templates.md)
* [Task per file task factory](../../../taskFactories.md#task-per-file)
* [Automatic persistence of task output files to Azure Storage](../../../outputFiles.md)
## Prerequisites
You will need an Azure Batch account with a linked Azure Storage account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
## Upload files
To upload a folder of files run this command:
``` bash
azure batch file upload <path> <group>
```
| Parameter | Description |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `<path>` | File specification of the files to upload. Relative paths are resolved relative to the current directory. |
| `<group>` | Name for the file group that will be created in blob storage. <br/>When you view the file group in the Azure portal it will have the prefix `fgrp-` followed by the name specified here. |
For more information see the documentation on [input files](../../../inputFiles.md).
## Preparation
Fill out the parameter placeholders in `job.json`:
| Parameter | Required | Description |
| ---------------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| testData | Mandatory | The same name as you used for `<group>` when you uploaded files in the previous step.<br/>Note that this does not include the `fgrp-` prefix visible when you view blob storage through the Azure portal. |
| outputFileStorageUrl | Mandatory | A valid (non-expired) writable SAS URL for a blob storage container (use the Azure portal to generate this). |
To customize the job id or any of the details of the autopool, modify the appropriate details in `job.json`. These are not parameterized because they are not specified in the template file.
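Once the placeholders are filled in, the relevant excerpt of `job.json` might look like the following sketch; the group name and SAS URL are illustrative:
```json
{
  "applicationTemplateInfo": {
    "filePath": "template.json",
    "parameters": {
      "testData": "documents",
      "outputFileStorageUrl": "https://<storage account>.blob.core.windows.net/<container>?<sas token>"
    }
  }
}
```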
## Run commands
To create your job, run the following command:
``` bash
azure batch job create --json-file job.json
```
This job uses an **autopool** which will automatically be deleted once the job reaches the Completed state. If the job never reaches the Completed state (e.g. when tasks are unable to run because of scheduling errors, or errors downloading files from storage), you will continue to be charged for the pool. In this case, you may want to use the [Azure portal](https://portal.azure.com) to manually delete the pool to ensure you're not billed unnecessarily.
## Monitor the job
You can use this command to monitor the tasks in the job and their progress:
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `template.json` | Specifies an application template, containing all of the logic for the job we are going to run and any required parameters. |
| `job.json` | Defines the job to run by referencing the template file `template.json` and providing values for appropriate parameters. <br/> It also specifies the pool to use for the job, in this sample an auto pool containing **3** **STANDARD_D1_V2** virtual machines. |
## Troubleshooting
### "One of the specified Azure Blob(s) is not found"
If the preparation tasks for the job fail with the error *"One of the specified Azure Blob(s) is not found"*, verify that the resource file URLs specified for the file egress scripts are still correct (these URLs are dependent on the branch structure in the git repo for the XPlat CLI and may change without warning).
To check these URLs with the Azure Batch Portal, select the *Preparation Tasks* details page for your job then click the link next to *Resource Files*. Another pane will open showing all the associated resource files and their URLs. Check that none of these return a 404 (not found) result in your browser.
If any of these files return a 404, you will need to point your installation to the correct files from github.com, as follows:
1. Go to [the github repository](https://github.com/Azure/azure-xplat-cli) (`https://github.com/Azure/azure-xplat-cli`)
2. Check the following branches (in order) to find one that contains the file `lib/commands/batch/fileegress/batchfileuploader.py`.
* master
* dev
* batch-beta
* batch-beta-dev
3. Browse your installation of the XPlat CLI and open the file `lib/commands/batch/batch.templateUtil._js` in a Unicode-aware developers' text editor (such as [Visual Studio Code](https://code.visualstudio.com/), [Notepad++](https://notepad-plus-plus.org/) or [Vim](http://www.vim.org/)).
4. Modify the assignment of `batchTemplateUtils.rootFileUploadUrl` (around line #34) to specify the branch you found above; the branch is the last part of the string.
5. Save the file and recreate your job from the command line.
To illustrate, this assignment specifies the branch `batch-beta`:
``` javascript
batchTemplateUtils.rootFileUploadUrl = 'https://raw.githubusercontent.com/Azure/azure-xplat-cli/batch-beta';
```


@ -0,0 +1,49 @@
{
"templateMetadata" : {
"description" : "Sample application template for data movement."
},
"parameters": {
"testData": {
"type": "string",
"metadata": {
"description": "The linked storage group where the input data is stored."
}
},
"outputFileStorageUrl": {
"type": "string",
"metadata": {
"description": "A storage SAS URL to a linked blob storage container with write access."
}
}
},
"onAllTasksComplete": "terminateJob",
"taskFactory": {
"type": "taskPerFile",
"source" : {
"fileGroup" : "[parameters('testData')]"
},
"repeatTask": {
"commandLine": "/bin/bash -c 'cat {fileName}'",
"resourceFiles": [
{
"blobSource" : "{url}",
"filePath" : "{fileName}"
}
],
"outputFiles": [
{
"filePattern": "$AZ_BATCH_TASK_DIR/stdout.txt",
"destination": {
"container": {
"path": "output-{fileNameWithoutExtension}.txt",
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
}
]
}
}
}


@ -0,0 +1,103 @@
{
"parameters": {
"jobId": {
"type": "string",
"metadata": {
"description": "The id of Azure Batch job"
}
},
"poolId": {
"type": "string",
"defaultValue": "helloworld-pool",
"metadata": {
"description": "The id of Azure Batch pool which runs the job"
}
},
"vmSize": {
"type": "string",
"defaultValue": "STANDARD_D1_V2",
"metadata": {
"description": "The size of the virtual machines that run the application"
}
},
"vmCount": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "The number of virtual machines"
}
},
"testData": {
"type": "string",
"metadata": {
"description": "The auto-storage group where the input data is stored"
}
},
"outputFileStorageUrl": {
"type": "string",
"metadata": {
"description": "A storage SAS URL to a container with write access"
}
}
},
"variables": {
"osType": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS",
"version": "latest"
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('jobId')]",
"onAllTasksComplete": "terminateJob",
"poolInfo": {
"autoPoolSpecification": {
"autoPoolIdPrefix": "[parameters('poolId')]",
"poolLifetimeOption": "job",
"keepAlive": false,
"pool": {
"vmSize": "[parameters('vmSize')]",
"virtualMachineConfiguration": {
"imageReference": "[variables('osType')]",
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
},
"targetDedicated": "[parameters('vmCount')]"
}
}
},
"taskFactory": {
"type": "taskPerFile",
"source" : {
"fileGroup" : "[parameters('testData')]"
},
"repeatTask": {
"commandLine": "/bin/bash -c 'cat {fileName}'",
"resourceFiles": [
{
"blobSource" : "{url}",
"filePath" : "{fileName}"
}
],
"outputFiles": [
{
"filePattern": "$AZ_BATCH_TASK_DIR/stdout.txt",
"destination": {
"container": {
"path": "output-{fileName}",
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
}
]
}
}
}
}
}


@ -0,0 +1,11 @@
{
"jobId": {
"value": "<Job id>"
},
"testData": {
"value": "<Auto-storage group name>"
},
"outputFileStorageUrl": {
"value": "<Blob url with sas>"
}
}


@ -0,0 +1,67 @@
# Azure Batch data processing with a task factory
This sample shows how to use a task factory to process a set of files uploaded into storage. The job runs on a VirtualMachineConfiguration-based autopool and demonstrates how to use ResourceFiles and OutputFiles to automatically download input files to the virtual machine and upload output files from it.
## Features used by this sample
* [Input data upload to Batch linked storage accounts](../../../inputFiles.md#input-file-upload)
* [Pool and job templates with parameterization](../../../templates.md)
* [Task per file task factory](../../../taskFactories.md#task-per-file)
* [Automatic persistence of task output files to Azure Storage](../../../outputFiles.md)
## Prerequisites
You will need an Azure Batch account with a linked Azure Storage account. See [Create an Azure Batch account using the Azure portal](https://docs.microsoft.com/azure/batch/batch-account-create-portal) for details.
## Upload files
To upload a folder of files, run this command:
``` bash
azure batch file upload <path> <group>
```
`<path>` should point to a folder containing some text files you want to upload for processing. You'll need to quote the path if it contains spaces.
`<group>` is a name to use for these files. You'll use the same group name when configuring your job later on.
## Preparation
Modify `job.parameters.json` to set the parameters for job creation. The following parameters are available:
| Parameter | Required | Description |
| -------------------- | --------- | --------------------------------------------------------------------------------------------------------------- |
| jobId | Mandatory | The id of the Azure Batch job. |
| poolId | Optional | The id of the Azure Batch pool which runs the job. <br/>Defaults to `helloworld-pool` if not specified. |
| vmSize | Optional | The size of the virtual machines that run the application. <br/> Defaults to `STANDARD_D1_V2` if not specified. |
| vmCount | Optional | The number of virtual machines in the auto pool. <br/> Defaults to `3` if not specified. |
| testData | Mandatory | The auto-storage group where the input data is stored. |
| outputFileStorageUrl | Mandatory | A storage SAS URL to a container with write access. |
If you want to configure other options of the job, such as the pool id, you can look in the `job.json` parameters section to see what options are available.
At minimum, legal values must be provided for all mandatory parameters.
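For example, a filled-in `job.parameters.json` might look like the following sketch; the group name is illustrative and the SAS URL is a placeholder:
```json
{
  "jobId": {
    "value": "taskperfile-job"
  },
  "testData": {
    "value": "documents"
  },
  "outputFileStorageUrl": {
    "value": "https://<storage account>.blob.core.windows.net/<container>?<sas token>"
  }
}
```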
## Run commands
To create your job:
``` bash
azure batch job create --template job.json --parameters job.parameters.json
```
This job uses an **autopool** which will automatically be deleted once the job reaches the Completed state. If the job never reaches the Completed state (e.g. when tasks are unable to run because of scheduling errors, or errors downloading files from storage), you will continue to be charged for the pool. In this case, you may want to use the [Azure portal](https://portal.azure.com) to manually delete the pool to ensure you're not billed unnecessarily.
## Monitor the job
You can use this command to monitor the tasks in the job and their progress:
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| --------------------- | -------------------------------------------------------------------------------------------------- |
| `job.json` | A template for the job to run, including parameter definitions and a **taskPerFile** task factory. |
| `job.parameters.json` | Provides values for the parameters defined in `job.json`. |


@ -0,0 +1,62 @@
{
"parameters": {
"vmCount": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "The number of virtual machines to execute the multi-instance task on"
}
},
"inputFileGroup": {
"type": "string",
"defaultValue": "mpi",
"metadata": {
"description": "The auto-storage group where the MS-MPI program and its dependencies are stored"
}
},
"poolId": {
"type": "string",
"defaultValue": "MultiInstanceSamplePool",
"metadata": {
"description": "The id of Azure Batch pool which runs the job"
}
},
"jobId": {
"type": "string",
"defaultValue": "MultiInstanceSampleJob",
"metadata": {
"description": "The id of Azure Batch job"
}
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('jobId')]",
"poolInfo": {
"poolId": "[parameters('poolId')]"
},
"taskFactory": {
"type": "taskCollection",
"tasks": [
{
"id" : "mpi",
"commandLine": "cmd /c mpiexec.exe -c 1 -wdir %AZ_BATCH_TASK_SHARED_DIR% MPIHelloWorld.exe",
"multiInstanceSettings": {
"numberOfInstances": "[parameters('vmCount')]",
"coordinationCommandLine": "cmd /c start cmd /c smpd.exe -d",
"commonResourceFiles": [
{
"source": {
"fileGroup": "[parameters('inputFileGroup')]"
}
}
]
}
}
]
}
}
}
}


@ -0,0 +1,71 @@
{
"parameters": {
"vmSize": {
"type": "string",
"metadata": {
"description": "The size of the virtual machines that run the application"
},
"defaultValue": "Small",
"allowedValues": [
"Small",
"Medium",
"Large",
"ExtraLarge",
"STANDARD_A8",
"STANDARD_A9"
]
},
"vmCount": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "The number of the virtual machines"
}
},
"poolId": {
"type": "string",
"defaultValue": "MultiInstanceSamplePool",
"metadata": {
"description": "The id of Azure Batch pool"
}
},
"appPackageId": {
"type": "string",
"defaultValue": "MSMPI",
"metadata": {
"description": "The id of application package which contains the MS-MPI setup"
}
},
"appPackageVersion": {
"type": "string",
"defaultValue": "1.0",
"metadata": {
"description": "The version of application package which contains the MS-MPI setup"
}
}
},
"pool": {
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('poolId')]",
"cloudServiceConfiguration": {
"osFamily": "4"
},
"vmSize": "[parameters('vmSize')]",
"targetDedicated": "[parameters('vmCount')]",
"enableInterNodeCommunication": true,
"maxTasksPerNode": 1,
"applicationPackageReferences": [
{
"applicationId": "[parameters('appPackageId')]"
}
],
"startTask": {
"commandLine": "cmd /c %AZ_BATCH_APP_PACKAGE_[parameters('appPackageId')]#[parameters('appPackageVersion')]%\\MSMpiSetup.exe -unattend -force",
"runElevated": true,
"waitForSuccess": true
}
}
}
}


@ -0,0 +1,103 @@
# Azure Batch MPI Template
This sample shows how to use `MS-MPI` to run MPI work.
## Features used by this sample
* [Pool and job templates with parameterization](../../templates.md)
* [Task collection factory](../../taskFactories.md#task-collection)
## Prerequisites
You must have an Azure Batch account set up with a linked Azure Storage account.
You will need an MS-MPI program for the multi-instance task to execute. We provide the [MPIHelloWorld sample project](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/MultiInstanceTasks/MPIHelloWorld) for you to compile and to use as your MS-MPI program. Build a release version of `MPIHelloWorld.exe` so that you don't have to include any additional dependencies (for example, `msvcp140d.dll` or `vcruntime140d.dll`).
## Create application package
To successfully run this sample, you must first create an [application package](https://docs.microsoft.com/azure/batch/batch-application-packages) containing [MSMpiSetup.exe](https://msdn.microsoft.com/library/bb524831.aspx) (installed on a pool's compute nodes with a start task).
The following example commands can be used to create the application package:
First, create the application package `MSMPI` itself:
```bash
azure batch application create --application-id MSMPI --account-name <account name> --resource-group <resource group>
```
You will need to supply your own values for `<account name>` and `<resource group>`.
Create a zip file containing `MSMpiSetup.exe` (make sure this file is at the root of the zip file). Then create version `1.0` of the application `MSMPI`:
```bash
azure batch application package create --application-id MSMPI --version 1.0 --account-name <account name> --resource-group <resource group> --package-file <local path to MSMpiSetup.exe zip file>
```
Then, activate the application package `MSMPI:1.0`:
```bash
azure batch application package activate --application-id MSMPI --version 1.0 --account-name <account name> --resource-group <resource group> --format zip
```
Finally, set the application default version to `1.0`:
```bash
azure batch application set --application-id MSMPI --default-version 1.0 --account-name <account name> --resource-group <resource group>
```
## Create a pool
Create your pool:
```bash
azure batch pool create --template pool.json
```
The default settings in `pool.json` specify a pool named `MultiInstanceSamplePool` containing **3** **small** virtual machines.
If you want to change the default values of the pool creation, create a JSON file to supply the parameters of your pool and include it on your command line:
```bash
azure batch pool create --template pool.json --parameters <your settings JSON file>
```
**You are billed for your Azure Batch pools, so don't forget to delete this pool through the [Azure portal](https://portal.azure.com) when you're done.**
## Upload files
Upload the `MPIHelloWorld.exe` application and its dependencies from a folder:
```bash
azure batch file upload <path> mpi
```
`mpi` is the default value of the inputFileGroup parameter in the job template. If you upload your files to a different file group, be sure to provide this value for the inputFileGroup parameter when creating your job (see the next section).
## Create a job with an MPI task
To create your job with default settings:
```bash
azure batch job create --template job.json
```
If you want to configure other options of the job, such as the pool id, you can look in the `job.json` parameters section to see what options are available.
| Parameter | Required | Description |
| -------------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| poolId | Optional | Name of the Azure Batch Pool to use for processing. <br/> Must match the pool you created earlier. Default value if not otherwise specified: `MultiInstanceSamplePool`. |
| inputFileGroup | Optional | Name of the file group in your storage account containing the files to process. <br/> Must match the name of the group used in the `azure batch file upload` command earlier. <br/> Default value if not otherwise specified: `mpi`. |
| vmCount | Optional | The number of VM instances to execute the multi-instance task on. <br/> It must be less than or equal to the pool's VM count. Default value if not otherwise specified: 3 |
| jobId | Mandatory | Unique id of the job for processing. <br/> Must not duplicate the `id` of any existing job. |
To create a job with a different configuration:
```bash
azure batch job create --template job.json --parameters <your settings JSON file>
```
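For example, a settings file that overrides the job id and the number of instances might look like the following sketch (the values are illustrative; `vmCount` must not exceed the pool's VM count):
```json
{
  "jobId": {
    "value": "MultiInstanceSampleJob01"
  },
  "vmCount": {
    "value": 2
  }
}
```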
## Monitor the job
You can use this command to monitor the tasks in the job and their progress:
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.


@ -0,0 +1,92 @@
{
"parameters": {
"jobId": {
"type": "string",
"metadata": {
"description": "The id of the Azure Batch job"
}
},
"poolId": {
"type": "string",
"defaultValue": "ocrpool",
"metadata": {
"description": "The id of the Azure Batch pool to run on"
}
},
"inputFileGroup": {
"type": "string",
"metadata": {
"description": "The file group containing the input files"
}
},
"outputFileStorageUrl": {
"type": "string",
"metadata": {
"description": "A storage SAS URL to a container with write access."
}
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('jobId')]",
"poolInfo": {
"poolId": "[parameters('poolId')]"
},
"taskFactory": {
"type": "taskPerFile",
"source" : {
"fileGroup" : "[parameters('inputFileGroup')]"
},
"repeatTask": {
"commandLine": "gs -dQUIET -dSAFER -dBATCH -dNOPAUSE -dNOPROMPT -sDEVICE=pngalpha -sOutputFile={fileNameWithoutExtension}-%03d.png -r250 {fileName} && for f in *.png; do tesseract $f ${{f%.*}}; done",
"resourceFiles": [
{
"source": {
"fileGroup": "[parameters('inputFileGroup')]",
"prefix": "{fileName}"
}
}
],
"outputFiles": [
{
"filePattern": "*.txt",
"destination": {
"container": {
"path": "text",
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskSuccess"
}
},
{
"filePattern": "../*.txt",
"destination": {
"container": {
"path": "output",
"containerSas": "[parameters('outputFileStorageUrl')]"
}
},
"uploadDetails": {
"taskStatus": "TaskFailure"
}
}
],
"packageReferences": [
{
"type": "aptPackage",
"id": "ghostscript"
},
{
"type": "aptPackage",
"id": "tesseract-ocr"
}
]
}
}
}
}
}


@ -0,0 +1,11 @@
{
"jobId": {
"value": "ocr_demo_01"
},
"inputFileGroup": {
"value": "<Auto-storage group name>"
},
"outputFileStorageUrl": {
"value": "<the storage url to hold output file>"
}
}


@ -0,0 +1,43 @@
{
"parameters": {
"vmSize": {
"type": "string",
"metadata": {
"description": "The size of the virtual machines that run the application"
},
"defaultValue": "STANDARD_D1_V2"
},
"vmCount": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "The number of virtual machines"
}
},
"poolId": {
"type": "string",
"defaultValue": "ocrpool",
"metadata": {
"description": "The name of the Azure Batch pool"
}
}
},
"pool": {
"type": "Microsoft.Batch/batchAccounts/pools",
"apiVersion": "2016-12-01",
"properties": {
"id": "[parameters('poolId')]",
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS"
},
"nodeAgentSKUId": "batch.node.ubuntu 16.04"
},
"vmSize": "[parameters('vmSize')]",
"targetDedicated": "[parameters('vmCount')]",
"enableAutoScale": false
}
}
}


@ -0,0 +1,85 @@
# Azure Batch OCR job template
This sample shows how to use `ghostscript` and `tesseract-ocr` to transform PDF files into plain text files (`.txt`). It does this in two stages:
1. Use [`ghostscript`](https://ghostscript.com/) to convert a PDF to a set of PNG files (one for each page of the PDF).
2. Use [`tesseract-ocr`](https://github.com/tesseract-ocr) to convert the PNG images into plain text files (`.txt`).
## Features used by this sample
* [Pool and job templates with parameterization](../../templates.md)
* [Task per file task factory](../../taskFactories.md#task-per-file)
* [Automatic persistence of task output files to Azure Storage](../../outputFiles.md)
* [Easy software installation via package managers](../../packages.md)
## Prerequisites
You must have an Azure Batch account set up with a linked Azure Storage account.
## Create a pool
To create your pool:
```bash
azure batch pool create --template pool.json
```
The default settings in `pool.json` specify a pool named `ocrpool` containing **3** **STANDARD_D1_V2** virtual machines.
If you want to change the default values of the pool creation, you can create a JSON file to supply the parameters of your pool. If you have a large number of files to convert, you should use a larger pool or bigger VMs in the pool.
In order to create the pool with your own configurations, run:
```bash
azure batch pool create --template pool.json --parameters <your settings JSON file>
```
**You are billed for your Azure Batch pools, so don't forget to delete this pool through the [Azure portal](https://portal.azure.com) when you're done.**
## Upload files
To upload your PDF files:
```bash
azure batch file upload <path> <group>
```
Run this command on a folder containing the PDF files you want to process.
## Create a job and tasks
Edit the `job.parameters.json` file to supply parameters to the template. If you want to configure other options of the job, such as the pool id, you can look in the `job.json` parameters section to see what options are available. A filled-in example follows the table below.
| Parameter | Required | Description |
| -------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| jobId | Mandatory | The id of the Azure Batch job. |
| poolId | Optional | The id of the Azure Batch pool to run on. <br/> Must match the id of the pool you created earlier. <br/> Default value if not otherwise specified: `ocrpool` |
| inputFileGroup | Mandatory | The file group containing the input files. <br/> Must match the name of the file group used by your `azure batch file upload` command earlier. |
| outputFileStorageUrl | Mandatory | A storage SAS URL to a **container** with write access. <br/> A general SAS url to blob storage will not work. |
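For example, a filled-in `job.parameters.json` might look like the following sketch; the file group name is illustrative and the container SAS URL is a placeholder:
```json
{
  "jobId": {
    "value": "ocr_demo_01"
  },
  "inputFileGroup": {
    "value": "ocr-pdfs"
  },
  "outputFileStorageUrl": {
    "value": "https://<storage account>.blob.core.windows.net/<container>?<sas token>"
  }
}
```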
## Run the job
To create your job and tasks:
```bash
azure batch job create --template job.json --parameters job.parameters.json
```
The outputs of the tasks will be uploaded to the Azure Storage container which you specified as the individual tasks complete.
The target container will contain a new virtual directory for each task that ran.
## Monitor the job
You can use this command to monitor the tasks in the job and their progress:
``` bash
azure batch task list --job-id <jobid>
```
You can also use the [Azure portal](https://portal.azure.com) or [Batch Explorer](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/BatchExplorer) for monitoring.
## Structure of the sample
| File | Content |
| --------------------- | ------------------------------------------------------------------------------------------------------ |
| `pool.json` | A template for creating the pool required for OCR processing. |
| `job.json`            | A template for the job to run, including parameter definitions and a **taskPerFile** task factory.     |
| `job.parameters.json` | Provides values for the parameters defined in `job.json`. |


@ -0,0 +1,108 @@
# Azure Batch Samples
Here is a collection of samples demonstrating the new features in this Batch preview CLI.
## Hello World Samples
These samples use the new features in very simple ways to make it easier to see how these features might fit into your workflow.
### [Create pool and job](hello-world/create-pool-and-job)
Create a pool and then run a job with a single task. Both the pool and the job are defined using templates with hard coded values.
Features used:
* [Pool and job templates with parameterization](../templates.md)
* [Task collection task factory](../taskFactories.md#task-collection)
### [Create pool and job with templates](hello-world/create-pool-and-job-with-templates)
Create a pool and then run a job with a single task. Both the pool and the job are defined using parameterized templates. Parameter values used to fill out the templates are stored in separate files that are easy to modify as required.
Features used:
* [Pool and job templates with parameterization](../templates.md)
* [Parametric sweep task factory](../taskFactories.md#parametric-sweep)
* [Task per file task factory](../taskFactories.md#task-per-file)
### [Task per file](hello-world/task-per-file)
Run a specific piece of processing independently across a set of files that are uploaded into storage. The job is specified as a template accepting parameters.
Features used:
* [Input data upload to Batch linked storage accounts](../inputFiles.md#input-file-upload)
* [Pool and job templates with parameterization](../templates.md)
* [Task per file task factory](../taskFactories.md#task-per-file)
* [Automatic persistence of task output files to Azure Storage](../outputFiles.md)
### [Task per file with application template](hello-world/task-per-file-with-app-template)
Run a specific piece of processing independently across a set of files that are uploaded into storage. The actual processing involved is split out into a separate *application template*. The job itself references the template while specifying parameters, pool information and other management details. Application templates are intended to be flexible and reusable across a number of jobs.
Features used:
* [Split job configuration and management with reusable application templates](../application-templates.md)
* [Task per file task factory](../taskFactories.md#task-per-file)
* [Automatic persistence of task output files to Azure Storage](../outputFiles.md)
## More Complex Samples
These samples show how to use the new features with real world applications.
### [FFmpeg](ffmpeg)
FFmpeg is an open-source command line tool for processing multimedia files. This is a sample demonstrating audio compression with Azure Batch on a large number of numerically-named files using a parametric sweep.
Features used:
* [Job template with parameterization](../templates.md)
* [Automatic persistence of task output files to Azure Storage](../outputFiles.md)
* [Easy software installation via package managers](../packages.md)
* [Parametric sweep task factory](../taskFactories.md#parametric-sweep)
* [Task per file task factory](../taskFactories.md#task-per-file)
### [OCR](ocr)
OCR (Optical Character Recognition) is the process of extracting text from images, such as scanned documents. This sample demonstrates the batch processing of PDF files.
Features used:
* [Pool and job templates with parameterization](../templates.md)
* [Parametric sweep task factory](../taskFactories.md#parametric-sweep)
* [Automatic persistence of task output files to Azure Storage](../outputFiles.md)
* [Easy software installation via package managers](../packages.md)
### [MPI](mpi)
This sample demonstrates running an MPI task across multiple compute nodes using the **multiInstanceSettings** feature.
### [Blender](blender)
Blender is an open-source 3D content creation suite. This sample demonstrates distributed rendering on Azure Batch.
Features used:
* [Job template with parameterization](../templates.md)
* [Parametric sweep task factory](../taskFactories.md#parametric-sweep)
* [Automatic persistence of task output files to Azure Storage](../outputFiles.md)
* [Easy software installation via package managers](../packages.md)
### [Blender (Application Template)](blender-appTemplate)
A variation of the [Blender](blender) sample that uses an application template to separate job definition and management.
Features used:
* [Job template with parameterization](../templates.md)
* [Parametric sweep task factory](../taskFactories.md#parametric-sweep)
* [Automatic persistence of task output files to Azure Storage](../outputFiles.md)
### [Docker - Caffe](docker)
Caffe is an open-source deep learning framework. This sample demonstrates configuration of Caffe via Docker integration using Shipyard.


@ -0,0 +1,139 @@
# Batch Shipyard integration and Docker Support
The Azure Batch preview CLI integrates with Batch Shipyard to allow you to provision Batch compute nodes with Docker containers and to schedule Docker workloads. For more information on Batch Shipyard, see its [GitHub page](https://github.com/azure/batch-shipyard).
**Note:** This feature is only available on Linux VMs.
In order to use Docker containers with the Azure Batch preview CLI, the following prerequisites must also be installed locally:
- You must install Python (2.7 or 3.3+), and it must be available on the PATH.
- You must install Batch Shipyard. See the [installation guide](https://github.com/Azure/batch-shipyard/blob/2.2.0/docs/01-batch-shipyard-installation.md) for more details. The Azure Batch CLI currently targets Batch Shipyard version 2.2.0. Compatibility with other versions is not guaranteed.
## Provisioning a pool to use Docker containers
To provision Batch compute nodes that use Docker containers, you must supply a `clientExtensions.dockerOptions` definition in the JSON request you pass to the `batch pool create` command (either with `--json-file` or `--template`). The following is the `dockerOptions` schema:
`dockerOptions`:
* `image`: (required, string) The Docker image to install on every compute node when the pool is created.
* `registry`: (optional, object) Configure Docker image distribution options from public/private Docker Hub.
* `hub`: (optional, object) Docker Hub login settings. This does not need to be supplied if pulling from public Docker repositories.
* `username`: (required, string) The username to log in with.
* `password`: (required, string) The password to log in with.
* `private`: (optional, object) Controls settings for private registries.
* `allowPublicPullOnMissing`: (required, bool) Whether to allow pass-through of Docker image retrieval to public Docker Hub if the image is missing in the private registry.
* `sharedDataVolumes`: (optional, array) Configures the initialization of persistent shared storage volumes. Each array item has the following properties:
* `name`: (required, string) The name used to identify the shared data volume. This name is created by the user and only used as a reference in dockerOptions definitions on pool and task bodies.
* `volumeType`: (required, string) The type of the shared data volume. Currently, the only supported value is "azurefile".
* `azureFileShareName`: (required, string) The Azure File share name. Note that this share must already be created in the Batch account's linked storage account. The linked storage account must also be in the same region as the Batch account. For more information on linked storage accounts, see [this article](https://azure.microsoft.com/documentation/articles/batch-account-create-portal/#linked-azure-storage-account).
The following is an example pool specification with its clientExtensions.dockerOptions set.
```json
{
"pool": {
"id": "myPool",
"virtualMachineConfiguration": {
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04.0-LTS"
}
},
"vmSize": "STANDARD_D1_V2",
"targetDedicated": 1,
"maxTasksPerNode": 1,
"clientExtensions": {
"dockerOptions": {
"image": "batch/images:abc",
"sharedDataVolumes": [
{
"name": "myShare",
"volumeType": "azurefile",
"azureFileShareName": "batchclishare"
}
]
}
}
}
}
```
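The example above pulls a public image. To pull a private image from Docker Hub, a `registry.hub` section as described above can be added to `dockerOptions`; the following sketch uses placeholder values for the image name and credentials:
```json
{
  "clientExtensions": {
    "dockerOptions": {
      "image": "<docker hub user>/<private image>:latest",
      "registry": {
        "hub": {
          "username": "<docker hub user>",
          "password": "<docker hub password>"
        }
      }
    }
  }
}
```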
## Scheduling a Docker workflow
To schedule Docker workflows, you must supply a `clientExtensions.dockerOptions` definition on your task bodies within your `taskFactory` when using the `batch job create` command. The following is the `dockerOptions` schema:
`dockerOptions`:
* `image`: (required, string) The Docker image to use for this task. This image must be available on the pool that the task is scheduled against.
* `additionalDockerRunOptions`: (optional, array) The additional `docker run` option strings to pass to the Docker daemon when starting the container.
* `dataVolumes`: (optional, array) The data volumes to mount in the container. Each array item has the following properties:
* `hostPath`: (optional, string) The path on the host node which will be mounted in the container.
* `containerPath`: (required, string) The path in the container where the data volume will be mounted/found.
* `sharedDataVolumes`: (optional, array) The persisted shared storage volumes to use in the container. Each array item has the following properties:
* `name`: (required, string) The name used to identify the shared data volume. A volume with this name must have been configured on the pool that the task is scheduled against (i.e. If you reference a shared data volume with name 'myShare' on your task, then you must have a shared data volume with name 'myShare' defined on your pool).
* `volumeType`: (required, string) The type of the shared data volume. Currently, the only supported value is "azurefile".
* `containerPath`: (required, string) The path in the container where the shared data volume will be mounted.
* `removeContainerAfterExit`: (optional, bool) Whether the container should be automatically removed after the task completes. If unspecified, the default value of false will be used.
* `useHostInfiniband`: (optional, bool) Whether the container requires access to the Infiniband/RDMA devices on the host. Note that this will automatically force the container to use the host network stack. If unspecified, the default value of false will be used.
The following is an example job specification which uses a `taskCollection` task factory where clientExtensions.dockerOptions are specified:
```json
{
"job": {
"id": "myJob",
"poolInfo": {
"poolId": "myPool"
},
"taskFactory": {
"type": "taskCollection",
"tasks": [
{
"id": "task01",
"commandLine": "/opt/runWorkflow.sh",
"clientExtensions": {
"dockerOptions": {
"image": "batch/images:abc",
"removeContainerAfterExit": true,
"sharedDataVolumes": [
{
"name": "myShare",
"volumeType": "azurefile",
"containerPath": "/tmp/sharefiles"
}
]
}
}
}
]
}
}
}
```
The following is an example job specification which uses a `parametricSweep` task factory where clientExtensions.dockerOptions are specified:
```json
{
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"properties": {
"id": "myJob",
"poolInfo": {
"poolId": "myPool"
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": "1",
"end": "100",
"step": 1
}
],
"repeatTask": {
"commandLine": "/opt/workflow{0}.sh",
"clientExtensions": {
"dockerOptions": {
"image": "batch/images:abc",
"removeContainerAfterExit": true
}
}
}
}
}
}
}
```


@ -0,0 +1,316 @@
# Task factories
Task factories provide a way for a job and all its tasks to be created in one command instead
of calling `azure batch task create` for each task.
**Note:** If the CLI should lose connectivity during the addition of tasks, the operation will not be completed and the job
will continue with a partial set of tasks. The remainder of the tasks must be added manually using `azure batch task create`.
There are currently three kinds of task factories:
* Task Collection - tasks are explicitly defined as a part of the job
* Parametric Sweep - a set of tasks are created by substituting a range or sequence of values into a template
* Per File - a template task is replicated for each available input file
See below for details.
## Task collection
This task factory is where each task is individually specified according to the
[Batch API schema](https://msdn.microsoft.com/library/azure/dn820105.aspx).
The `task collection` task factory most closely mirrors the Batch task creation API.
An example of a `task collection` task factory:
```json
"job": {
"id": "my-ffmpeg-job",
"constraints": {
"maxWallClockTime": "PT5H",
"maxTaskRetryCount": 3
},
"poolInfo": {
"poolId": "my-ffmpeg-pool"
},
"jobPreparationTask": {
"commandLine" : "sudo apt-get install ffmpeg -y",
"runElevated": true,
"waitForSuccess": true
},
"taskFactory": {
"type": "taskCollection",
"tasks": [
{
"id" : "mytask1",
"commandLine": "ffmpeg -i sampleVideo1.mkv -vcodec copy -acodec copy output.mp4 -y",
},
{
"id" : "mytask2",
"commandLine": "ffmpeg -i sampleVideo2.mkv -vcodec copy -acodec copy output.mp4 -y",
}
]
}
}
```
### Samples
The following samples use the task collection task factory:
* [MPI](samples/mpi)
## Parametric sweep
The `parametric sweep` task factory creates a set of tasks by substituting a range or sequence
of values into a template. Substitutions can be made in most attributes of the task, but are most commonly
made in the `commandLine` attribute or `resourceFiles` collection.
Currently the following task attributes are not supported in a parametric sweep task factory:
- `id`: The ID of a task will be automatically generated.
- `dependsOn`: Dependencies between tasks within a factory, or tasks created by other means are not yet supported.
An example:
```json
"job": {
"id": "my-ffmpeg-job",
"poolInfo": {
"poolId": "my-ffmpeg-pool"
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": 1,
"end": 500,
"step": 1
}
],
"repeatTask": {
"commandLine": "ffmpeg -i sampleVideo{0}.mkv -vcodec copy -acodec copy output{0}.mp4 -y",
}
}
}
```
The range of values used to create the tasks is set in `parameterSets`. The first task to be created is represented
by the `start` field, and the last that could potentially be created is represented by the `end` field. Whether
this last task is created depends on the chosen increment size: the value of `step`.
For example, a parametric sweep with a `start` of 5, an `end` of 10 and a `step` of 3 will produce two tasks using the values 5 and 8.
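Expressed as a task factory fragment, that sweep might look like the sketch below (the command line shown is just a placeholder):
```json
"taskFactory": {
    "type": "parametricSweep",
    "parameterSets": [
        {
            "start": 5,
            "end": 10,
            "step": 3
        }
    ],
    "repeatTask": {
        "commandLine": "/bin/echo value {0}"
    }
}
```
This factory generates two tasks, one substituting 5 and one substituting 8 for `{0}`.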
Multiple `parameterSets` can be defined to produce multi-dimensional parametric sweeps.
The task template into which the parameter or parameters will be substituted is defined in `repeatTask`. Substitutions are achieved
through the use of placeholders. A placeholder for parameter substitutions is represented by `{0}`. The number 0 here represents
the index of the parameter set to be substituted. Where a literal `{` or `}` character is required, it can be escaped
by duplicating it: `{{` or `}}`. The parameter can also be padded with zeros to a maximum length of 9 characters by using the format
`{0:4}` where the number 0 represents the index of the parameter set and the parameter will be zero-padded to 4 characters, e.g.: `0001`.
The above task factory would be expanded into the following tasks:
```
"tasks": [
{
"id" : "0",
"commandLine": "ffmpeg -i sampleVideo1.mkv -vcodec copy -acodec copy output1.mp4 -y",
},
{
"id" : "1",
"commandLine": "ffmpeg -i sampleVideo2.mkv -vcodec copy -acodec copy output2.mp4 -y",
},
{
...
},
{
"id" : "499",
"commandLine": "ffmpeg -i sampleVideo500.mkv -vcodec copy -acodec copy output500.mp4 -y",
}
]
```
An example of a task factory with a two-dimensional sweep with zero-padding:
```json
"job": {
"id": "my-ffmpeg-job",
"poolInfo": {
"poolId": "my-ffmpeg-pool"
},
"taskFactory": {
"type": "parametricSweep",
"parameterSets": [
{
"start": 1,
"end": 500,
"step": 1
},
{
"start": 500,
"end": 1000,
"step": 500
}
],
"repeatTask": {
"commandLine": "ffmpeg -i sampleVideo_{0:3}.mkv -vcodec copy -acodec copy scale={1}:{1} output_x{1}_{0:3}.mp4 -y",
}
}
}
```
The following tasks would be created:
```
"tasks": [
{
"id" : "0",
"commandLine": "ffmpeg -i sampleVideo_001.mkv -vcodec copy -acodec copy scale=500:500 output_x500_001.mp4 -y",
},
{
"id" : "1",
"commandLine": "ffmpeg -i sampleVideo_001.mkv -vcodec copy -acodec copy scale=1000:1000 output_x1000_001_.mp4 -y",
},
{
"id" : "2",
"commandLine": "ffmpeg -i sampleVideo_002.mkv -vcodec copy -acodec copy scale=500:500 output_x500_002.mp4 -y",
},
{
"id" : "3",
"commandLine": "ffmpeg -i sampleVideo_002.mkv -vcodec copy -acodec copy scale=1000:1000 output_x1000_002.mp4 -y",
},
{
...
},
{
"id" : "998",
"commandLine": "ffmpeg -i sampleVideo500.mkv -vcodec copy -acodec copy scale=500:500 output_x500_500.mp4 -y",
},
{
"id" : "999",
"commandLine": "ffmpeg -i sampleVideo500.mkv -vcodec copy -acodec copy scale=1000:1000 output_x1000_500.mp4 -y",
}
]
```
### Samples
The following samples use the parametric sweep task factory:
* [Blender](samples/blender)
* [Blender with application templates](samples/blender-appTemplate)
* [Create Pool and Job with templates](samples/hello-world/create-pool-and-job-with-templates)
* [FFMpeg](samples/ffmpeg)
* [OCR](samples/ocr)
## Task per file
The `task per file` task factory generates a task per input file from a file group or Azure storage container. Substitutions can be made in most attributes of the task, but are most commonly
made in the `commandLine` attribute, `resourceFiles` collection or `taskOutput` attribute.
Currently the following task attributes are not supported in a task per file task factory:
- `id`: The ID of a task will be automatically generated.
- `dependsOn`: Dependencies between tasks within a factory, or tasks created by other means are not yet supported.
An example:
```json
"job": {
"id": "my-ffmpeg-job",
"poolInfo": {
"poolId": "my-ffmpeg-pool"
},
"taskFactory": {
"type": "taskPerFile",
"source": {
"fileGroup": "raw-images"
},
"repeatTask": {
"commandLine": "ffmpeg -i {fileName} -vcodec copy -acodec copy {fileNameWithoutExtension}.mp4 -y",
"resourceFiles": [
{
"blobSource": "{url}",
"filePath" : "{fileName}"
}
]
}
}
}
```
The list of files used to create the tasks is set in `source`. As with the `resourceFiles` property, there are two ways to specify the file container in Azure Storage.
1. Specify the name of a file group to reference data stored in a linked storage account.
2. Specify the full container URL, including a SAS key that has List and Read permissions. For example:
```json
"source": {
"containerUrl": "https://storage.blob.core.windows.net/container?sv=2015-04-05sig=tAp0r3I3SV5PbjpZ5CIjvuo1jdUs5xW"
}
```
The files can be further filtered by including a prefix. The prefix can be a partial filename or a subdirectory. If no prefix is specified, all of the files in the container will be used to create tasks. An example using a prefix:
```json
"source": {
"fileGroup": "raw-images",
"prefix": "first_pass/img_"
}
```
The task template into which the file URL/name will be substituted is defined in `repeatTask`. Substitutions are achieved
through the use of placeholders. A placeholder for name substitutions is represented by `{keyword}`, where the keyword identifies
which part of the file URL is to be substituted. The supported keywords are:
| Keyword | Note | Example |
| ---------------------------- | ---------------------------------------------------- | -------------------------------------------------------------- |
| `{url}` | The full URL of file location | http://account.blob.azure.com/container/path/blob.ext?sasToken |
| `{filePath}` | The file name including the path (virtual directory) | path/blob.ext |
| `{fileName}` | The file name only, without path | blob.ext |
| `{fileNameWithoutExtension}` | The file name without last extension | blob |
Where a literal `{` or `}` character is required, it can be escaped by duplicating it: `{{` or `}}`.
For example, if the files in file group are:
```
raw-images/first_pass/mov_processing/1.mkv
raw-images/first_pass/mov_processing/2.mkv
raw-images/first_pass/alpha.mkv
```
The above task factory would be expanded into the following tasks:
```json
"tasks": [
{
"id" : "0",
"commandLine": "ffmpeg -i 1.mkv -vcodec copy -acodec copy 1.mp4 -y",
"resourceFiles": [
{
"blobSource": "http://account.blob.azure.com/raw-images/first_pass/mov_processing/1.mkv?sasToken",
"filePath" : "1.mkv"
}
]
},
{
"id" : "1",
"commandLine": "ffmpeg -i 2.mkv -vcodec copy -acodec copy 2.mp4 -y",
"resourceFiles": [
{
"blobSource": "http://account.blob.azure.com/raw-images/first_pass/mov_processing/2.mkv?sasToken",
"filePath" : "2.mkv"
}
]
},
{
"id" : "2",
"commandLine": "ffmpeg -i alpha.mkv -vcodec copy -acodec copy alpha.mp4 -y",
"resourceFiles": [
{
"blobSource": "http://account.blob.azure.com/raw-images/first_pass/alpha.mkv?sasToken",
"filePath" : "alpha.mkv"
}
]
}
]
```
### Samples
The following samples use the task per file task factory:
* [Task Per File](samples/hello-world/task-per-file)
* [Create Pool and Job with templates](samples/hello-world/create-pool-and-job-with-templates)
* [FFMpeg](samples/ffmpeg)

Просмотреть файл

@ -0,0 +1,158 @@
# Job and pool templates with parameterization
Two new options have been introduced to the `batch job create` and `batch pool create` commands:
- `--template`
- The path to a JSON file with the template for either a Batch job or pool.
- `--parameters`
  - The path to a JSON file containing parameter values. This option is ignored unless `--template` is also specified.
```bash
azure batch job create --template <JSON template> --parameters <JSON parameter values>
```
The format of this template draws on the structure of an Azure Resource Manager template.
The structure of the template consists of the following sections:
- `parameters`: (Optional) Values that are provided during execution to customize the entity.
- `variables`: (Optional) Values that are used as JSON fragments in the template.
- `job` or `pool`: (Required) The entity description to be created.
For further reading on ARM templates, see [Authoring Azure Resource Manager templates](https://azure.microsoft.com/documentation/articles/resource-group-authoring-templates).
## Samples
The following samples make use of job and/or pool templates:
* [Create Pool and Job with templates](samples/hello-world/create-pool-and-job-with-templates)
* [Task Per File](samples/hello-world/task-per-file)
* [Blender](samples/blender)
* [FFMpeg](samples/ffmpeg)
* [MPI](samples/mpi)
* [OCR](samples/ocr)
## Parameters
This section consists of a set of parameter definitions, with accompanying metadata and constraints.
For example:
```json
{
"parameters": {
"poolId": {
"type": "string",
"metadata": {
"description": "The ID of Azure Batch pool which runs the job"
}
}
}
}
```
Parameters can be referenced using ARM-template parameter syntax: `[parameters('parameterName')]`.
The following options for a parameter are currently supported (a short sketch combining several of them appears after this list):
- `type`
- `int`, `string` or `bool`
- `defaultValue`
- `allowedValues`
- `minValue`
- `maxValue`
- `minLength`
- `maxLength`
- `metadata`
- `description`
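For example, a sketch of parameter definitions combining several of these options (all names and values here are hypothetical):
```json
{
    "parameters": {
        "taskCount": {
            "type": "int",
            "defaultValue": 10,
            "minValue": 1,
            "maxValue": 100,
            "metadata": {
                "description": "The number of tasks to generate"
            }
        },
        "vmSize": {
            "type": "string",
            "defaultValue": "STANDARD_D1",
            "allowedValues": ["STANDARD_D1", "STANDARD_D2"],
            "metadata": {
                "description": "The size of the virtual machines in the pool"
            }
        }
    }
}
```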
## Variables
In this section you can construct complex JSON fragments that can be used throughout the template. Typically
variables also include references to values defined in `parameters`. Variables can be referenced using ARM-template variable syntax: `[variables('variableName')]`.
For example:
```json
{
"variables": {
"pool": {
"poolInfo": {
"poolId": "[parameters('poolId')]"
}
}
}
}
```
## Defining the job or pool entity
The job or pool to be created adheres to the same schema as the request body found in the
[Batch API documentation](https://msdn.microsoft.com/library/azure/dn820110.aspx) and is wrapped in a `properties` layer consistent
with the structure of an [ARM resource](https://azure.microsoft.com/documentation/articles/resource-group-authoring-templates/#resources).
A `type` field is also present, referencing the entity type to be created. Other ARM options (for example `location` or `dependsOn`) will be ignored.
Valid `type` options are:
- `"Microsoft.Batch/batchAccounts/jobs"`
- `"Microsoft.Batch/batchAccounts/pools"`
**Note:** Unlike true ARM templates, Batch CLI templates must contain only a single definition for either a Batch job or pool.
Additionally, not all ARM template syntax is supported. We currently support the following expressions and functions:
- `parameters()`: A function to inject the value of a defined parameter into the JSON.
- Example: `"id": "[parameters('jobId')]"`
- `variables()`: A function to inject the fragment of a defined variable into the JSON.
- Example: `"poolInfo": "[variables('autoPool')]"`
- `concat()`: A function to join two strings together.
  - Example: `"displayName": "[concat('Processing: ', parameters('inputName'))]"`
Example templates and their accompanying parameter files can be found at
[Documentation/BatchDocumentation/samples](Documentation/BatchDocumentation/samples).
A simple parameterized template might look like this:
```json
{
"parameters": {
"poolId": {
"type": "string",
"defaultValue": "testpool",
"metadata": {
"description": "The ID of the Batch pool on which to run the job"
}
},
"jobId": {
"type": "string",
"metadata": {
"description": "The ID of the Batch job"
}
}
},
"job": {
"type": "Microsoft.Batch/batchAccounts/jobs",
"properties": {
"id": "[parameters('jobId')]",
"poolInfo": {
"poolId": "[parameters('poolId')]"
}
}
}
}
```
You could then run this template with the following command:
```bash
azure batch job create --template my-simple-job.json
```
The values of the parameters will either use a default value if supplied,
or the CLI will interactively prompt you to provide an input value.
Alternatively, you can supply these parameter values in a separate file, like so:
```json
{
"jobId": {
"value": "test_job"
},
"poolId": {
"value": "ubuntu_16_04"
}
}
```
You can then pass this parameter file to the job create command:
```bash
azure batch job create --template my-simple-job.json --parameters my-input-values.json
```
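Pool templates follow the same pattern. A rough sketch of a minimal pool template is shown below; the property names follow the Batch pool API, and the values are placeholders rather than recommendations:
```json
{
    "pool": {
        "type": "Microsoft.Batch/batchAccounts/pools",
        "properties": {
            "id": "testpool",
            "vmSize": "STANDARD_D1",
            "targetDedicated": 2,
            "cloudServiceConfiguration": {
                "osFamily": "4"
            }
        }
    }
}
```
Such a template could then be deployed with `azure batch pool create --template my-simple-pool.json`.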

Просмотреть файл

@ -0,0 +1,130 @@
Setting up your development environment
========================================
The Azure Python CLI project's sources are located on GitHub (https://github.com/Azure/azure-cli/). In order to contribute to the project, you are expected to:
- Have a GitHub account. For Microsoft contributors, follow the guidelines on https://opensourcehub.microsoft.com/ to create, configure and link your account
- Fork the https://github.com/Azure/azure-cli/ repository into your private GitHub account
- Create pull requests against the https://github.com/azure/azure-cli repository to get your code changes merged into the project repository.
## Preparing your machine
1. Install Python 3.5.x from http://python.org. Please note that the version of Python that comes preinstalled on OSX is 2.7.
2. Clone your repository and check out the master branch.
3. Create a new virtual environment `env` for Python 3.5 in the root of your clone. You can do this by running:
##### Windows
```BatchFile
python -m venv <clone root>\env
```
##### OSX/Ubuntu (bash)
```Shell
python -m venv <clone root>/env
```
4. Activate the env virtual environment by running:
##### Windows
```BatchFile
<clone root>\env\scripts\activate.bat
```
##### OSX/Ubuntu (bash)
```Shell
. <clone root>/env/bin/activate
```
5. Install the dependencies and load the command modules as local packages using pip.
```Shell
python scripts/dev_setup.py
```
6. Add `<clone root>\src` to your PYTHONPATH environment variable:
##### Windows
```BatchFile
set PYTHONPATH=<clone root>\src;%PYTHONPATH%
```
##### OSX/Ubuntu (bash)
```Shell
export PYTHONPATH=<clone root>/src:${PYTHONPATH}
```
7. Set up tab completion (OSX/Ubuntu only).
Open a bash or zsh window and run:
```Shell
source az.completion.sh
```
## Configuring your IDE
#### Visual Studio (Windows only)
1. Install Python Tools for Visual Studio. As of 2/18/2016, the current version (PTVS 2.2) can be found at http://microsoft.github.io/PTVS/.
2. Open the azure-cli.pyproj project
You should now be able to launch your project by pressing F5 (start debugging).
#### Visual Studio Code (Any platform)
Experimental steps – we still haven't been able to get virtual environments to work well with VS Code.
1. Install VS Code
2. Install (one of) the python extension(s) (https://marketplace.visualstudio.com/items?itemName=donjayamanne.python)
Debugging should now work (including stepping and setting breakpoints).
The repo has a launch.json file that will launch the version of Python that is first on your path.
## Running the CLI
#### Command line
1. Activate your virtual environment if not already done
##### OSX/Ubuntu (bash):
```Shell
source <clone root>/env/bin/activate
```
##### Windows:
```BatchFile
<clone root>\env\scripts\activate.bat
```
2. Invoke the CLI using:
##### OSX/Ubuntu (bash):
```Shell
az
```
##### Windows:
```BatchFile
<clone root>\az.bat [commands]
```
which is equivalent to the following:
```BatchFile
<clone root>\src\python -m azure.cli [commands]
```
## Running Tests
#### Command line
##### Windows:
Provided your PYTHONPATH was set correctly, you can run the tests from your `<clone root>` directory.
To test the core of the CLI:
```BatchFile
python -m unittest discover -s src/azure-cli/azure/cli/tests
```
To test the command modules:
```BatchFile
python scripts/command_modules/test.py
```
To check for pylint errors in the core of the CLI:
```BatchFile
pylint src/azure
```
To check the command modules for pylint errors:
```Batch
python scripts/command_modules/pylint.py
```
Additionally, the `lintall.bat` script runs pylint for the core CLI and all command modules, and the `testall.bat` script runs their tests.
#### VS Code
Under construction...
#### Visual Studio
Select `Test > Windows > Test Explorer` and click `Run All` in the Test Explorer pane.

Просмотреть файл

@ -0,0 +1,27 @@
# Debug in Visual Studio Code
## Prerequisite
* Visual Studio Code [Link](http://code.visualstudio.com/)
* Visual Studio Code Python Extension [Link](https://marketplace.visualstudio.com/items?itemName=donjayamanne.python)
* Python 2.7 or 3.5
* Set up development environment [Link](https://github.com/Azure/azure-cli/blob/master/doc/configuring_your_machine.md)
## Quick start
1. Start VS Code at the root of the `azure-cli` source code folder.
2. Switch to [debug panel](https://code.visualstudio.com/Docs/editor/debugging). (CMD + Shift + D)
3. Select one of the debug configurations in the dropdown on top of the debug panel.
4. Start debugging (Press F5 or click the play button)
## Configuration
The `launch.json` file under the `.vscode` folder is pre-configured to run `az --help` and break into the debugger immediately. You can update it to execute the scenario you need.
1. Set the `stopOnEntry` property to `false` to stop only at the breakpoints you have set.
2. Update the `args` array to run the command you want. Each argument must be a separate element, i.e. the arguments are split on spaces (see the sketch after this list).
3. Choose between the external terminal and the integrated terminal. The latter can be toggled with Ctrl + `.
4. Set `config.python.pythonPath` in either your user settings or workspace settings to pick the version of Python to be executed.
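As a minimal sketch, a configuration exercising these settings might look like the following; the program path and the argument values are placeholders, and the exact set of supported keys depends on the version of the Python extension:
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "az (example arguments)",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "program": "${workspaceRoot}/src/azure-cli/azure/cli/__main__.py",
            "args": ["batch", "job", "list", "--help"]
        }
    ]
}
```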
## Reference
- `launch.json` schema: https://code.visualstudio.com/Docs/editor/debugging

Просмотреть файл

@ -0,0 +1,14 @@
# How to generate the Sphinx help file output #
## Set up environment ##
1. Inside a Python virtual environment, run `pip install sphinx`
2. If Az isn't set up in this virtual environment, run `python scripts\dev_setup.py` from azure-cli
## Run Sphinx ##
1. From azure-cli\doc\sphinx, run `make xml`
## Retrieve output ##
1. XML output is stored in azure-cli\doc\sphinx\_build\xml\ind.xml

Просмотреть файл

@ -0,0 +1,91 @@
Linux Install Prerequisites
===========================
Some native Linux packages are required when installing the CLI with:
- Interactive install script
- ``pip``
The commands to run to install the dependencies for some common distributions are listed below.
* Ubuntu
* [12.04 LTS](#ubuntu-1204-lts)
* [14.04 LTS](#ubuntu-1404-lts-and-bash-on-windows-build-14362)
* [15.10](#ubuntu-1510)
* [16.04 LTS](#ubuntu-1604-lts)
* Debian
* [7](#debian-7)
* [8](#debian-8)
* CentOS
* [6.5 - 6.7](#centos-65--66--67)
* [7.1 - 7.2](#centos-71--72)
* RHEL
* [6.7](#redhat-rhel-67)
* [7.2](#redhat-rhel-72)
* [SUSE](#suse-opensuse-132)
* [CoreOS](#coreos-stable-899150--beta-101010--alpha-101010)
### Ubuntu 12.04 LTS
Python 2.7.3 should be already on the machine.
```
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev
```
### Ubuntu 14.04 LTS and BASH on Windows (Build 14362+)
Python 2.7.6 should be already on the machine.
```
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev
```
### Ubuntu 15.10
Python 2.7.10 should be already on the machine.
```
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev build-essential
```
### Ubuntu 16.04 LTS
Python 2.7.11 should be already on the machine.
```
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev build-essential
```
### Debian 7
Python 2.7.3 should be already on the machine.
```
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev
```
### Debian 8
Python 2.7.9 should be already on the machine.
```
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev python-dev build-essential
```
### CentOS 6.5 / 6.6 / 6.7
Not supported with the default version of Python (2.6.6) on the machine.
### CentOS 7.1 / 7.2
Python 2.7.5 should be already on the machine.
```
sudo yum check-update; sudo yum install -y gcc libffi-devel python-devel openssl-devel
```
### RedHat RHEL 6.7
Not supported with the default version of Python (2.6.6) on the machine.
### RedHat RHEL 7.2
Python 2.7.5 should be already on the machine.
```
sudo yum check-update; sudo yum install -y gcc libffi-devel python-devel openssl-devel
```
### SUSE OpenSUSE 13.2
Python 2.7.8 should be already on the machine.
```
sudo zypper refresh && sudo zypper --non-interactive install gcc libffi-devel python-devel openssl-devel
```
### CoreOS Stable-899.15.0 / Beta-1010.1.0 / Alpha-1010.1.0
Python is not installed by default.

Просмотреть файл

@ -0,0 +1,146 @@
Install Troubleshooting
=======================
Before posting an issue, please review our list of [common issues](https://github.com/Azure/azure-cli/issues?q=label%3AFAQ+is%3Aclosed).
These are issues we have closed because we cannot address them within the CLI due to platform or language limitations.
Upgrade from 0.1.0b10 causes 'KeyError: Azure' error
----------------------------------------------------
On Python 2, it's recommended to upgrade with the `--ignore-installed` flag:
`pip install --upgrade --ignore-installed azure-cli`.
Alternatively, use the interactive install script.
See [#1540](https://github.com/Azure/azure-cli/issues/1540#issue-195125878)
Error: 'Could not find a version that satisfies the requirement azure-cli'
--------------------------------------------------------------------------
The error message from pip usually means a very old version of pip is installed.
Run `pip --version` to confirm. [Latest pip version](https://pip.pypa.io/en/stable/news/)
Upgrade `pip` with ``$ pip install --upgrade pip`` or install with the ``--pre`` flag.
See [#1308](https://github.com/Azure/azure-cli/issues/1308#issuecomment-260413613)
'X509' object has no attribute '_x509'
--------------------------------------
If you run into an ``AttributeError: 'X509' object has no attribute '_x509'`` error, downgrade your version of the requests library from 2.12.1 to 2.11.1.
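For example, one way to pin the older version (assuming pip manages the environment the CLI is installed in):
```shell
pip install requests==2.11.1
```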
See [#1360](https://github.com/Azure/azure-cli/issues/1360)
Windows - 'FileNotFoundError' error on install
----------------------------------------------
Verify that the file path quoted in the error has more than 260 characters.
If so, the installation files exceed the 260 character limit for file paths on Windows.
This can be resolved by installing the CLI in a directory with a shorter path so that the Windows maximum file path length is not exceeded.
See [#1221](https://github.com/Azure/azure-cli/issues/1221#issuecomment-258290204)
Ubuntu 12.04 LTS - Known warning
--------------------------------
You may see the following warning message during install and execution of `az`.
```
/usr/local/az/envs/default/local/lib/python2.7/site-packages/pip/pep425tags.py:30: RuntimeWarning: invalid Python installation: unable to open /usr/az/envs/default/lib/python2.7/config/Makefile (No such file or directory)
warnings.warn("{0}".format(e), RuntimeWarning)
```
See [#348](https://github.com/Azure/azure-cli/issues/348)
See also [pypa/pip#1074](https://github.com/pypa/pip/issues/1074)
Errors with curl redirection
----------------------------
If you get an error with the curl command regarding the `-L` parameter or an error saying `Object Moved`, try using the full url instead of the aka.ms url:
```shell
# If you see this:
$ curl -L https://aka.ms/InstallAzureCli | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 175 100 175 0 0 562 0 --:--:-- --:--:-- --:--:-- 560
bash: line 1: syntax error near unexpected token `<'
'ash: line 1: `<html><head><title>Object moved</title></head><body>
# Try this instead:
$ curl https://azurecliprod.blob.core.windows.net/install | bash
```
Errors on install with cffi or cryptography
-------------------------------------------
If you get errors on install on **OS X**, upgrade pip by typing:
```shell
pip install --upgrade --force-reinstall pip
```
If you get errors on install on **Debian or Ubuntu** such as the examples below,
install libssl-dev and libffi-dev by typing:
```shell
sudo apt-get update && sudo apt-get install -y libssl-dev libffi-dev
```
Also install Python Dev for your version of Python.
Python 2:
```shell
sudo apt-get install -y python-dev
```
Python 3:
```shell
sudo apt-get install -y python3-dev
```
Ubuntu 15 may require `build-essential` also:
```shell
sudo apt-get install -y build-essential
```
**Example Errors**
```shell
Downloading cffi-1.5.2.tar.gz (388kB)
100% |################################| 389kB 3.9MB/s
Complete output from command python setup.py egg_info:
No working compiler found, or bogus compiler options
passed to the compiler from Python's distutils module.
See the error messages above.
(If they are about -mno-fused-madd and you are on OS/X 10.8,
see http://stackoverflow.com/questions/22313407/ .)
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-77i2fido/cffi/
```
```shell
#include <openssl/e_os2.h>
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Failed building wheel for cryptography
```
See Stack Overflow question - [Failed to install Python Cryptography package with PIP and setup.py](http://stackoverflow.com/questions/22073516/failed-to-install-python-cryptography-package-with-pip-and-setup-py)

Просмотреть файл

@ -0,0 +1,41 @@
The following commands showcase how to perform some common operations available in the preview.
```shell
# Login to Azure using DeviceAuth
$ az login
# Create a resource group
$ az group create -l westus -n demo100
# Find common vm images
$ az vm image list
# Search all vm images, takes about 20s
$ az vm image list --all
# simplified help experience
$ az vm create --help
# Create a simple Linux VM using SSH
# --admin-user will be looked-up from the ~/.ssh/id_rsa.pub comments
# --location will be looked-up from the resource group
# --authentication-type will be set to SSH for Linux images and password for Windows images
$ az vm create -g Demo100 -n Demo100VM --image UbuntuLTS
# Lists all the IP addresses for all of the virtual machines in the given resource group
$ az vm list-ip-addresses -g Demo100
# Export a resource group to an ARM template
$ az group export -n Demo100 > template.json
$ cat template.json | less
# learning to query with JPTerm (OSX: brew install jpterm)
# try out [].{name:name, os:storageProfile.osDisk.osType}
$ az vm list | jpterm
# List all VMs and their OS type in a 'grep-able' format
$ az vm list -g demo100 --query "[].[name,storageProfile.osDisk.osType]" --out tsv
Demo100VM Linux
Demo300VM Linux
```

111
doc/recording_vcr_tests.md Normal file
Просмотреть файл

@ -0,0 +1,111 @@
Recording Command Tests with VCR.py
========================================
Azure CLI uses the VCR.py library to record the HTTP messages exchanged during a program run and play them back at a later time, making it useful for creating command-level scenario tests. These tests can be replayed later without any network activity, allowing us to detect regressions in the handling of parameters and in the compatibility between the Azure CLI and the Python SDK.
## Overview
Each command module has a `tests` folder with a file called: `test_<module>_commands.py`. This is where you will define tests.
Tests all derive from the `VCRTestBase` class found in `azure.cli.core.test_utils.vcr_test_base`. This class exposes the VCR tests using the standard Python `unittest` framework and allows the tests to be discovered by and debugged in Visual Studio.
The majority of tests, however, inherit from the `ResourceGroupVCRTestBase` class, as this handles creating and tearing down the test resource group automatically, helping to ensure that tests can be recorded and cleaned up without manual creation or deletion of resources.
After adding your test, run it. The test driver will automatically detect the test is unrecorded and record the HTTP requests and responses in a cassette .yaml file. If the test succeeds, the cassette will be preserved and future playthroughs of the test will come from the cassette.
If the tests are run on TravisCI, any tests which cannot be replayed will automatically fail.
## Authoring Tests
To create a new test, simply create a class in the `test_<module>_commands.py` file with the following structure:
```Python
class MyTestClass(ResourceGroupVCRTestBase): # or VCRTestBase in special circumstances
def __init__(self, test_method):
# TODO: replace MyTestClass with your class name
super(MyTestClass, self).__init__(__file__, test_method, debug=False, run_live=False, skip_setup=False, skip_teardown=False)
def test_my_test_class(self): # TODO: rename to 'test_<your name here>'
self.execute()
def body(self):
        pass  # TODO: insert your test logic here
def set_up(self):
super(MyTestClass, self).set_up() # if you need custom logic, be sure to call the base class version first
# TODO: Optional setup logic here (will not be replayed on playback)
def tear_down(self):
# TODO: Optional tear down logic here (will not be replayed on playback)
super(MyTestClass, self).tear_down() # if you need custom logic, call the base class version last
```
The `debug`, `run_live`, `skip_setup` and `skip_teardown` parameters in the `__init__` method are shown with their defaults and can be omitted. `debug` is the equivalent of specifying `debug=True` for all calls to `cmd` in the test (see below). Specifying `run_live=True` will cause the test to always be run with actual HTTP requests, ignoring VCR entirely. `skip_setup` and `skip_teardown` can be useful during test creation to avoid repeatedly creating and deleting resource groups.
The `set_up` and `tear_down` methods are optional and can be omitted. For `ResourceGroupVCRTestBase` these have default implementations which set up a test resource group and tear it down after the recording completes. Any commands used in these methods are only executed during a live or recorded test. These sections are skipped during playback, so your test body should not rely on any logic within these methods.
A number of helper methods are available for structuring your script tests.
#### cmd(command_string, checks=None, allowed_exceptions=None, debug=False)
This method executes a given command and returns the output. If the output is in JSON format, the method will return the results as a JSON object for easier manipulation.
The `debug` parameter can be specified as `True` on a single call to `cmd` or in the init of the test class. Turning this on will print the command string, the results and, if a failure occurred, the failure.
The `allowed_exceptions` parameter allows you to specify one or more (as a list) exception messages that will allow the test to still pass. Exception types are not used because the CLI wraps many types of errors in a `CLIError`. There are some tests where a specific exception is intended. Add the exception message to this list to allow the test to continue successfully in the presence of this message.
The `checks` parameter allows you to specify one or more (as a list) checks to automatically validate the output. A number of Check objects exist for this purpose. You can create your own as long as they implement the compare method (see existing checks for examples):
##### JMESPathCheck(query, expected_result)
Use the JMESPathCheck object to validate the result using any valid JMESPath query. This is useful for checking that the JSON result has fields you were expecting, arrays of certain lengths etc. See www.jmespath.org for guidance on writing JMESPath queries.
###### Usage
```
JMESPathCheck(query, expected_result)
```
- `query` - JMESPath query as a string.
- `expected_result` - The expected result from the JMESPath query (see [jmespath.search()](https://github.com/jmespath/jmespath.py#api))
###### Example
The example below shows how you can use a JMESPath query to validate the values from a command.
When calling `cmd(command_string, checks)` you can pass in just one `JMESPathCheck` or a list of them.
```Python
self.cmd('vm list-ip-addresses --resource-group myResourceGroup', checks=[
JMESPathCheck('length(@)', 1),
JMESPathCheck('[0].virtualMachine.name', 'myVMName')
])
```
##### NoneCheck()
Use this to verify that the output contains nothing. Note that this is different from `checks=None` which will skip any validation.
##### StringCheck(expected_result)
Matches string output to expected.
##### BooleanCheck(expected_result)
Compares truthy responses (True, 'true', 1, etc.) to a Boolean True or False.
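A short sketch of how these checks might be combined inside a test `body`; the commands and expected values are purely illustrative:
```Python
def body(self):
    # exact string comparison against the command output
    self.cmd('account show --query name --output tsv', checks=StringCheck('my-subscription'))
    # truthy comparison against a Boolean
    self.cmd('group exists --name myResourceGroup', checks=BooleanCheck(True))
    # assert that the command produced no output at all
    self.cmd('group delete --name myResourceGroup --yes', checks=NoneCheck())
```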
#### set_env(variable_name, value)
This method is a wrapper around `os.environ` and simply sets an environment variable to the specified value.
#### pop_env(variable_name)
Another wrapper around `os.environ` this pops the value of the indicated environment variable.
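For instance, a test that depends on an environment variable might manage it like this (the variable name and value are only examples):
```Python
def body(self):
    # make the value visible to the commands run by this test
    self.set_env('MY_TEST_SETTING', 'example-value')
    self.cmd('account list')
    # remove it again so later tests are unaffected
    self.pop_env('MY_TEST_SETTING')
```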
## Test Issues
Here are some common issues that occur when authoring tests that you should be aware of.
- **Non-deterministic results**: If you find that a test will pass on some playbacks but fail on others, there are a couple possible things to check:
1. check if your command makes use of concurrency.
2. check your parameter aliasing (particularly if it complains that a required parameter is missing that you know is there)
- **Paths**: When including paths in your tests as parameter values, always wrap them in double quotes. While quoting isn't necessary when running from the command line (depending on your shell environment), omitting the quotes will likely cause issues with the test framework.

Просмотреть файл

@ -0,0 +1,17 @@
Releasing Components
====================
To create a release for a component, create a PR with title `Release <component-name> <version>`
e.g. 'Release azure-cli-vm 0.1.1'
The 'Release' label should be added to the PR.
PR checklist:
- [ ] The PR title (commit) has format `Release <component-name> <version>`.
- [ ] `setup.py` has been modified with the same version as in the PR title.
- [ ] If required, `__version__` defined in any `__init__.py` should also be modified to match.
- [ ] `HISTORY.rst` has been modified with appropriate release notes.
When the PR is approved and merged, the component will be released and available on PyPI.

225
doc/sphinx/Makefile Normal file
Просмотреть файл

@ -0,0 +1,225 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " epub3 to make an epub3"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " dummy to check syntax errors of document sources"
.PHONY: clean
clean:
rm -rf $(BUILDDIR)/*
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/az.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/az.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/az"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/az"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: epub3
epub3:
$(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
@echo
@echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
.PHONY: dummy
dummy:
$(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
@echo
@echo "Build finished. Dummy builder generates no files."

Просмотреть файл

@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import pkg_resources
pkg_resources.declare_namespace(__name__)

Просмотреть файл

@ -0,0 +1,115 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import argparse
import json
from docutils import nodes
from docutils.statemachine import ViewList
from sphinx.util.compat import Directive
from sphinx.util.nodes import nested_parse_with_titles
from azure.cli.core.application import APPLICATION, Configuration
import azure.cli.core._help as _help
app = APPLICATION
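# Run every command with '-h' up front (ignoring any errors) so that the full
# parser tree and help content are loaded before the directive below walks them.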
for cmd in app.configuration.get_command_table():
try:
app.execute(cmd.split() + ['-h'])
except:
pass
class AzHelpGenDirective(Directive):
def make_rst(self):
INDENT = ' '
DOUBLEINDENT = INDENT * 2
parser_dict = {}
_store_parsers(app.parser, parser_dict)
doc_source_map = _load_doc_source_map()
help_files = []
for cmd, parser in parser_dict.items():
help_file = _help.GroupHelpFile(cmd, parser) if _is_group(parser) else _help.CommandHelpFile(cmd, parser)
help_file.load(parser)
help_files.append(help_file)
help_files = sorted(help_files, key=lambda x: x.command)
for help_file in help_files:
is_command = isinstance(help_file, _help.CommandHelpFile)
yield '.. cli{}:: {}'.format('command' if is_command else 'group', help_file.command if help_file.command else 'az') #it is top level group az if command is empty
yield ''
yield '{}:summary: {}'.format(INDENT, help_file.short_summary)
yield '{}:description: {}'.format(INDENT, help_file.long_summary)
if not is_command:
top_group_name = help_file.command.split()[0] if help_file.command else 'az'
yield '{}:docsource: {}'.format(INDENT, doc_source_map[top_group_name] if top_group_name in doc_source_map else '')
else:
top_command_name = help_file.command.split()[0] if help_file.command else ''
if top_command_name in doc_source_map:
yield '{}:docsource: {}'.format(INDENT, doc_source_map[top_command_name])
yield ''
if is_command and help_file.parameters:
group_registry = _help.ArgumentGroupRegistry(
[p.group_name for p in help_file.parameters if p.group_name])
for arg in sorted(help_file.parameters,
key=lambda p: group_registry.get_group_priority(p.group_name)
+ str(not p.required) + p.name):
yield '{}.. cliarg:: {}'.format(INDENT, arg.name)
yield ''
yield '{}:required: {}'.format(DOUBLEINDENT, arg.required)
short_summary = arg.short_summary or ''
possible_values_index = short_summary.find(' Possible values include')
short_summary = short_summary[0:possible_values_index
if possible_values_index >= 0 else len(short_summary)]
short_summary = short_summary.strip()
yield '{}:summary: {}'.format(DOUBLEINDENT, short_summary)
yield '{}:description: {}'.format(DOUBLEINDENT, arg.long_summary)
if arg.choices:
yield '{}:values: {}'.format(DOUBLEINDENT, ', '.join(sorted([str(x) for x in arg.choices])))
if arg.default and arg.default != argparse.SUPPRESS:
yield '{}:default: {}'.format(DOUBLEINDENT, arg.default)
if arg.value_sources:
yield '{}:source: {}'.format(DOUBLEINDENT, ', '.join(arg.value_sources))
yield ''
yield ''
if len(help_file.examples) > 0:
for e in help_file.examples:
yield '{}.. cliexample:: {}'.format(INDENT, e.name)
yield ''
yield DOUBLEINDENT + e.text
yield ''
def run(self):
node = nodes.section()
node.document = self.state.document
result = ViewList()
for line in self.make_rst():
result.append(line, '<azhelpgen>')
nested_parse_with_titles(self.state, result, node)
return node.children
def setup(app):
app.add_directive('azhelpgen', AzHelpGenDirective)
def _store_parsers(parser, d):
for s in parser.subparsers.values():
d[_get_parser_name(s)] = s
if _is_group(s):
for c in s.choices.values():
d[_get_parser_name(c)] = c
_store_parsers(c, d)
def _load_doc_source_map():
with open('azhelpgen/doc_source_map.json') as open_file:
return json.load(open_file)
def _is_group(parser):
return getattr(parser, '_subparsers', None) is not None \
or getattr(parser, 'choices', None) is not None
def _get_parser_name(s):
return (s._prog_prefix if hasattr(s, '_prog_prefix') else s.prog)[3:]

Просмотреть файл

@ -0,0 +1,32 @@
{
"az": "src/command_modules/azure-cli-profile/azure/cli/command_modules/profile/_help.py",
"configure": "src/command_modules/azure-cli-configure/azure/cli/command_modules/configure/_help.py",
"feedback": "src/command_modules/azure-cli-feedback/azure/cli/command_modules/feedback/_help.py",
"login": "src/command_modules/azure-cli-profile/azure/cli/command_modules/profile/_help.py",
"logout": "src/command_modules/azure-cli-profile/azure/cli/command_modules/profile/_help.py",
"account": "src/command_modules/azure-cli-profile/azure/cli/command_modules/profile/_help.py",
"ad": "src/command_modules/azure-cli-role/azure/cli/command_modules/role/_help.py",
"acr": "src/command_modules/azure-cli-acr/azure/cli/command_modules/acr/_help.py",
"acs": "src/command_modules/azure-cli-acs/azure/cli/command_modules/acs/_help.py",
"appservice": "src/command_modules/azure-cli-appservice/azure/cli/command_modules/appservice/_help.py",
"batch": "src/command_modules/azure-cli-batch/azure/cli/command_modules/batch/_help.py",
"cloud": "src/command_modules/azure-cli-cloud/azure/cli/command_modules/cloud/_help.py",
"component": "src/command_modules/azure-cli-component/azure/cli/command_modules/component/_help.py",
"container": "src/command_modules/azure-cli-container/azure/cli/command_modules/container/_help.py",
"feature": "src/command_modules/azure-cli-resource/azure/cli/command_modules/resource/_help.py",
"group": "src/command_modules/azure-cli-resource/azure/cli/command_modules/resource/_help.py",
"iot": "src/command_modules/azure-cli-iot/azure/cli/command_modules/iot/_help.py",
"keyvault": "src/command_modules/azure-cli-keyvault/azure/cli/command_modules/keyvault/_help.py",
"network": "src/command_modules/azure-cli-network/azure/cli/command_modules/network/_help.py",
"policy": "src/command_modules/azure-cli-resource/azure/cli/command_modules/resource/_help.py",
"provider": "src/command_modules/azure-cli-resource/azure/cli/command_modules/resource/_help.py",
"redis": "src/command_modules/azure-cli-redis/azure/cli/command_modules/redis/_help.py",
"resource": "src/command_modules/azure-cli-resource/azure/cli/command_modules/resource/_help.py",
"role": "src/command_modules/azure-cli-role/azure/cli/command_modules/role/_help.py",
"sql": "src/command_modules/azure-cli-sql/azure/cli/command_modules/sql/help.py",
"storage": "src/command_modules/azure-cli-storage/azure/cli/command_modules/storage/_help.py",
"tag": "src/command_modules/azure-cli-resource/azure/cli/command_modules/resource/_help.py",
"taskhelp": "src/command_modules/azure-cli-taskhelp/azure/cli/command_modules/taskhelp/_help.py",
"vm": "src/command_modules/azure-cli-vm/azure/cli/command_modules/vm/_help.py",
"vmss": "src/command_modules/azure-cli-vm/azure/cli/command_modules/vm/_help.py"
}

Просмотреть файл

@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import pkg_resources
pkg_resources.declare_namespace(__name__)

Просмотреть файл

@ -0,0 +1,67 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import copy
from docutils import nodes
from sphinx import addnodes
from sphinx.directives import ObjectDescription
from sphinx.util.compat import Directive
from sphinx.util.docfields import Field
cli_field_types = [
Field('summary', label='Summary', has_arg=False,
names=('summary', 'shortdesc')),
Field('description', label='Description', has_arg=False,
names=('description', 'desc', 'longdesc'))
]
class CliBaseDirective(ObjectDescription):
def handle_signature(self, sig, signode):
signode += addnodes.desc_addname(sig, sig)
return sig
def needs_arglist(self):
return False
def add_target_and_index(self, name, sig, signode):
signode['ids'].append(name)
def get_index_text(self, modname, name):
return name
class CliGroupDirective(CliBaseDirective):
doc_field_types = copy.copy(cli_field_types)
doc_field_types.append(
Field('docsource', label='Doc Source', has_arg=False,
names=('docsource', 'documentsource'))
)
class CliCommandDirective(CliBaseDirective):
doc_field_types = copy.copy(cli_field_types)
doc_field_types.append(
Field('docsource', label='Doc Source', has_arg=False,
names=('docsource', 'documentsource'))
)
class CliArgumentDirective(CliBaseDirective):
doc_field_types = copy.copy(cli_field_types)
doc_field_types.extend([
Field('required', label='Required', has_arg=False,
names=('required')),
Field('values', label='Allowed values', has_arg=False,
names=('values', 'choices', 'options')),
Field('default', label='Default value', has_arg=False,
names=('default')),
Field('source', label='Values from', has_arg=False,
names=('source', 'sources'))
])
class CliExampleDirective(CliBaseDirective):
pass
def setup(app):
app.add_directive('cligroup', CliGroupDirective)
app.add_directive('clicommand', CliCommandDirective)
app.add_directive('cliarg', CliArgumentDirective)
app.add_directive('cliexample', CliExampleDirective)

345
doc/sphinx/conf.py Normal file
Просмотреть файл

@ -0,0 +1,345 @@
#!/usr/bin/env python3
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# -*- coding: utf-8 -*-
#
# az documentation build configuration file, created by
# sphinx-quickstart on Fri Jul 8 14:34:03 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.getcwd())
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.doctest',
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinx.ext.viewcode',
'sphinx.ext.autodoc',
'cligroup.cligroup',
'azhelpgen.azhelpgen'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'ind'
# General information about the project.
project = 'az'
copyright = '2016, msft'
author = 'msft'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#
# today = ''
#
# Else, today_fmt is used as the format for a strftime call.
#
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#
# html_title = 'az v1.0'
# A shorter title for the navigation bar. Default is the same as html_title.
#
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#
# html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#
# html_extra_path = []
# If not None, a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# The empty string is equivalent to '%b %d, %Y'.
#
# html_last_updated_fmt = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#
# html_additional_pages = {}
# If false, no module index is generated.
#
# html_domain_indices = True
# If false, no index is generated.
#
# html_use_index = True
# If true, the index is split into individual pages for each letter.
#
# html_split_index = False
# If true, links to the reST sources are added to the pages.
#
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
#
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' users can specify a custom `jieba` dictionary path.
#
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'azdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'az.tex', 'az Documentation',
'msft', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False
# If true, show page references after internal links.
#
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
#
# latex_appendices = []
# If false, no module index is generated.
#
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'az', 'az Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'az', 'az Documentation',
author, 'az', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#
# texinfo_appendices = []
# If false, no module index is generated.
#
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#
# texinfo_no_detailmenu = False

2
doc/sphinx/ind.rst Normal file
View file

@ -0,0 +1,2 @@
.. azhelpgen::

281
doc/sphinx/make.bat Normal file
View file

@ -0,0 +1,281 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. epub3 to make an epub3
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
echo. dummy to check syntax errors of document sources
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
REM Check if sphinx-build is available; fall back to the Python module if not
%SPHINXBUILD% 1>NUL 2>NUL
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
:sphinx_ok
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\az.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\az.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "epub3" (
%SPHINXBUILD% -b epub3 %ALLSPHINXOPTS% %BUILDDIR%/epub3
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub3 file is in %BUILDDIR%/epub3.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
if "%1" == "dummy" (
%SPHINXBUILD% -b dummy %ALLSPHINXOPTS% %BUILDDIR%/dummy
if errorlevel 1 exit /b 1
echo.
echo.Build finished. Dummy builder generates no files.
goto end
)
:end

231
doc/xplat_cli_migration.md Normal file
View file

@ -0,0 +1,231 @@
# Migrating from Azure XPlat CLI to Azure CLI 2.0
With the introduction of our Azure CLI 2.0 preview, customers now have a
choice of Azure command-line tools. While our Azure CLI 2.0 is built to be
easier to use and automate, please consider the following before starting:
* Both Azure CLIs can be installed and used side by side
* The Azure CLI 2.0 will not support ASM/Classic mode services
* Scripts are not compatible between both CLIs
## Why consider trying the Azure CLI 2.0?
Regardless of the tool or vendor, it is important to ensure you benefit when
adding a new tool to your toolbox. While the following covers only a small
number of differences between these two products, we expect customers to benefit
from the following:
* Clean outputs for common workflows
* `--out table` for simplified human output
* `--out json` and `--out jsonc` for JSON outputs
* `--out tsv` for interacting with tools like grep, AWK, and jq
* Improved and consistent in-tool documentation with `-h`
* Includes descriptions for groups, commands, and parameters
* Indicates where to look up required values
* Examples and links to web content
* Improved command-line productivity
* Use `[tab][tab]` to lookup parameters, including resource groups and names (only supported in BASH and BASH on Windows)
* Work with either Azure resource ID values (`--ids`) _or_ resource group and name (`-g -n`)
* Built in client-side query engine powered by JMESPath
* Service support
* Our preview meets or exceeds XPlat CLI functionality for Compute, Storage, Network, RBAC, and ARM
* More services coming online soon!
While we believe the above list is compelling, it's important to remember
**the Azure CLI 2.0 only supports ARM mode**. If you are managing ASM/Classic
resources, you must use the Azure XPlat CLI.
## Getting both CLIs set up side by side
First, run `azure --version` and ensure you are using `0.10.5` or later, as
this is required for sharing your credentials between both CLIs. If you installed
using NPM, upgrade with `npm upgrade -g azure-cli`. If you used an installer,
we recommend downloading the latest installer to upgrade.
To install the Azure CLI 2.0, follow the steps for your preferred platform or
environment on our [Installation Guide](https://github.com/Azure/azure-cli/blob/master/README.rst#installation).
Once installed, you can run `az configure` and follow the steps to set up your default output format.
Then run `az login` to log in using device authentication. Once this step is complete, you should be authenticated to use both CLIs.
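Putting those steps together, here is a minimal sketch of the setup sequence (assuming the XPlat CLI was installed through NPM):
```
$ azure --version          # confirm 0.10.5 or later so credentials can be shared
$ npm upgrade -g azure-cli # only needed for NPM-based installs
$ az configure             # choose a default output format
$ az login                 # device authentication; both CLIs can now use the account
```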
## Important new concepts in the Azure CLI 2.0
Here is a quick list of some new and changed concepts that can help you understand the new tool.
* Interactive Concepts
* Use `az configure` to set up your default output format
* You will find help to be generally more useful; try `az vm create -h` for an example
* Positional parameters are not supported; use `az vm list -g MyGroup` instead of `azure vm list MyGroup`
* Automation and Scripting Concepts
* You can refer to resources using their Azure resource ID with `--ids` or with the resource group and name using `-g [rg] -n [name]`
* Use `--query "[expression]"` to extract single values
* Use `--out tsv` to get plain (no mark-up) value output
* Use `@-` to pipe values, such as `az vm list --query [0].id --out tsv | az vm show --ids @-`
* Service Specific Concepts
* VM power state is no longer included in `az vm list`; use `az vm get-instance-view` instead, as sketched below
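For example, here is a sketch of looking up a VM's power state with `az vm get-instance-view`; the resource names are hypothetical, and the index of the power-state entry in `instanceView.statuses` can vary between VMs:
```
$ az vm get-instance-view -g MyGroup -n VM-Web --query "instanceView.statuses[1].displayStatus" --out tsv
VM running
```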
## Moving scripts from XPlat CLI to Azure CLI 2.0
Generally, converting a script from XPlat to Az follows these steps:
1. Switch `azure` commands to `az` commands
2. Update commands to use new input values
3. Update script to use new output formats
4. Use `--query` to extract values reliably
Below, we break down each of these steps.
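As a quick end-to-end illustration of those four steps (a sketch only, using hypothetical resource names and a field borrowed from the examples later in this document), the same lookup before and after conversion might look like:
```
# XPlat CLI: positional arguments, JSON output filtered with jq
$ OS_TYPE=$(azure vm show MyGroup MyVM --json | jq -r '.storageProfile.osDisk.osType')

# Azure CLI 2.0: parameter flags, built-in --query, raw value via --out tsv
$ OS_TYPE=$(az vm show -g MyGroup -n MyVM --query "storageProfile.osDisk.osType" --out tsv)
```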
### Finding and switching to `az` commands
While most commands keep the same group and command names between the Azure XPlat CLI and the Azure CLI 2.0, we've built an [azure to az conversion table](https://github.com/Azure/azure-cli/blob/master/doc/azure2az_commands.rst) for common commands.
#### Set vs. Update
Mutate operations now use the `update` verb instead of `set`. The XPlat CLI
exposed some common operations as dedicated parameters, such as:
```
$ azure vm set -g MyGroup -n MyName --nic-ids $MyNicID
$ azure vm set -g MyGroup -n MyName --tags myTagName=MyTagValue
```
The Azure CLI 2.0 `update` commands work generically against the resource, for example:
```
$ az vm update -g MyGroup -n MyName --add networkProfile.networkInterfaces primary=false id=$MyNicID
$ az vm update -g MyGroup -n MyName --set tags.myTagName=MyTagValue
```
#### Commands with complex input
Some commands that required complex input, such as JSON strings and documents,
have been changed to add or remove single values, instead of requiring users to
pull down the document, modify it, and set the new document. For these commands,
see the help (`-h`) for details on the specific command.
An example of this is `azure storage cors set` being replaced by `az storage cors add`.
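For instance, rather than uploading a whole CORS document, you add one rule at a time. The sketch below is illustrative only; the storage account name is hypothetical and the parameter set may differ by version, so check `az storage cors add -h` for the exact options:
```
$ az storage cors add --services b --methods GET PUT --origins '*' \
    --allowed-headers '*' --exposed-headers '*' --max-age 200 \
    --account-name mystorageaccount --account-key $STORAGE_KEY
```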
### Updating input values
Once you have identified the `az` commands required for your script, you immediately
notice changes to how inputs are handled. The Azure CLI 2.0 does not accept
'positional parameters' such as `azure vm show MyRG MyName`, but instead requires
parameter flags: `az vm show -g MyRG -n MyName`.
In addition, when an input value is missing, we will show an error indicating the
missing parameters, instead of prompting the user automatically:
```
$ az vm show
az vm show: error: (--name --resource-group | --ids) are required
```
In addition to using resource groups and names (`-g -n`), you can also refer to
resources directly by ID value using `--ids`:
```
$ MyVar=$(az vm list --query [0].id --out tsv)
$ echo $MyVar
/subscriptions/xxxx/resourceGroups/VMGROUP1/providers/Microsoft.Compute/virtualMachines/VM-Data
$ az vm show --ids $MyVar --out table
ResourceGroup Name VmId Location ProvisioningState
--------------- ------- ------------------------------------ ---------- -------------------
VMGROUP1 VM-Data 63edd6a0-2796-49e6-acc1-ad3f8bd94f13 westus Succeeded
```
When working with files, you can use the `@` symbol to indicate the contents of a file or file descriptor.
```
$ az role create --role-definition @MyOnCallRoleDef.json
```
> **TIP** Use `@-` as short-hand to pass STDIN as a value.
### Working with output formats
The Azure CLI 2.0 supports 4 primary output formats:
1. json - standard JSON formatted object graphs
2. jsonc - colorized JSON
3. tsv - provides "UNIX-style" output (fields delimited with tabs, records with newlines)
4. table - simplified human-readable output
You can set your default output format with the `az configure` command or on a
per-command basis using the `--out` parameter.
Tips:
* Use `--out tsv` for raw output that is easy to parse with command-line tools
* Use `--out json` for outputting object graphs (nested objects); both `tsv` and `table` only show fields from the outermost object.
* Avoid using `--out jsonc` programmatically, as not all tools accept the ANSI escape codes that provide color in the shell
* Currently, `--out table` does not work with some formatted outputs.
```
$ az vm list --query [0].name --out json
"VM-Data"
$ az vm list --query [0].name --out tsv
VM-Data
$ az vm list --query [0].name --out table
Result
--------
VM-Data
```
### Filtering down output values
A common pattern in Azure XPlat CLI scripts is using command-line tools, such as
AWK, grep, and jq, to extract values from output documents:
```
$ azure vm list --json \
| jq -r '.[].storageProfile.osDisk.vhd.uri' \
| cut -d / -f3 \
| cut -d . -f1 \
| uniq -c \
| sort -bgr
$ MY_SUBSCRIPTION_ID=$(azure account show --json | jq -r '.[0].id')
```
With the Azure CLI 2.0, you can now use the `--query '[expression]'` parameter and the [JMESPath](http://jmespath.org/)
query language to extract values.
```
$ az vm list --query "[].{name:name,os:storageProfile.osDisk.osType}" --out table
Name Os
------------- -------
VM-Data Windows
VM-StagingWeb Linux
VM-Web Linux
MyNewVM Linux
$ az vm list --query "[].{name:name,os:storageProfile.osDisk.osType}" --out tsv
VM-Data Windows
VM-StagingWeb Linux
VM-Web Linux
MyNewVM Linux
$ az vm list --query "[].{name:name,os:storageProfile.osDisk.osType}" --out json
[
{
"name": "VM-Data",
"os": "Windows"
},
{
"name": "VM-StagingWeb",
"os": "Linux"
},
{
"name": "VM-Web",
"os": "Linux"
},
{
"name": "MyNewVM",
"os": "Linux"
}
]
```
You can also extract single values. Using `--out tsv` avoids the quotation marks that JSON output would add:
```
az vm list --query "[0].id" --out tsv
/subscriptions/xxxx/resourceGroups/VMGROUP1/providers/Microsoft.Compute/virtualMachines/VM-Web
```

19
pylintrc Normal file
View file

@ -0,0 +1,19 @@
[MESSAGES CONTROL]
# For all codes, run 'pylint --list-msgs' or go to 'http://pylint-messages.wikidot.com/all-codes'
# C0111 Missing docstring
# C0103 Invalid %s name "%s"
# I0011 Warning locally suppressed using disable-msg
# W0511 fixme
# R0401 Cyclic import (because of https://github.com/PyCQA/pylint/issues/850)
disable=C0111,C0103,I0011,W0511,R0401
[VARIABLES]
# Tells whether we should check for unused import in __init__ files.
init-import=yes
[DESIGN]
# Maximum number of locals for function / method body
max-locals=25
# Maximum number of branch for function / method body
max-branches=20
[SIMILARITIES]
min-similarity-lines=10

18
requirements.txt Normal file
View file

@ -0,0 +1,18 @@
adal==0.4.3
applicationinsights==0.10.0
argcomplete==1.8.0
colorama==0.3.7
jmespath
mock==1.3.0
nose==1.3.7
paramiko==2.0.2
pip==9.0.1
pygments==2.1.3
pylint==1.5.4
pyOpenSSL==16.1.0
pyyaml==3.11
requests==2.9.1
setuptools==30.4.0
six==1.10.0
tabulate==0.7.5
vcrpy==1.10.3

5
scripts/__init__.py Normal file
View file

@ -0,0 +1,5 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------

28
scripts/_common.py Normal file
View file

@ -0,0 +1,28 @@
#!/usr/bin/env python
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# common utilities for scripts
from __future__ import print_function
def get_repo_root():
"""
Returns the root path to this repository. The root is where .git folder is.
"""
import os.path
here = os.path.dirname(os.path.realpath(__file__))
while not os.path.exists(os.path.join(here, '.git')):
here = os.path.dirname(here)
return here
if __name__ == '__main__':
print(get_repo_root())

View file

@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
"""Command linting automation code"""

View file

@ -0,0 +1,90 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from __future__ import print_function
import pkgutil
import argparse
import os
import sys
import json
import yaml
from importlib import import_module
from automation.utilities.path import filter_user_selected_modules_with_tests
from azure.cli.core.application import APPLICATION, Application
from azure.cli.core.application import Configuration
from azure.cli.core.commands import load_params, _update_command_definitions
from azure.cli.core.help_files import helps
def dump_no_help(modules):
    """Report commands, groups, and parameters missing help text; return 1 if any are found."""
cmd_table = APPLICATION.configuration.get_command_table()
exit_val = 0
for cmd in cmd_table:
cmd_table[cmd].load_arguments()
for mod in modules:
try:
import_module('azure.cli.command_modules.' + mod).load_params(mod)
        except Exception as ex:
            # keep going, but report which module failed to load parameters and why
            print("EXCEPTION: {}: {}".format(mod, ex))
_update_command_definitions(cmd_table)
command_list = []
subgroups_list = []
parameters = {}
for cmd in cmd_table:
if not cmd_table[cmd].description and cmd not in helps:
command_list.append(cmd)
exit_val = 1
group_name = " ".join(cmd.split()[:-1])
if group_name not in helps:
exit_val = 1
if group_name not in subgroups_list:
subgroups_list.append(group_name)
param_list = []
for key in cmd_table[cmd].arguments:
if not cmd_table[cmd].arguments[key].type.settings.get('help'):
exit_val = 1
param_list.append(cmd_table[cmd].arguments[key].name)
if param_list:
parameters[cmd] = param_list
for cmd in helps:
diction_help = yaml.load(helps[cmd])
if "short-summary" in diction_help and "type" in diction_help:
if diction_help["type"] == "command" and cmd in command_list:
command_list.remove(cmd)
elif diction_help["type"] == "group" and cmd in subgroups_list:
subgroups_list.remove(cmd)
if "parameters" in diction_help:
for param in diction_help["parameters"]:
if "short-summary" in param and param["name"].split()[0] in parameters:
parameters.pop(cmd, None)
data = {
"subgroups" : subgroups_list,
"commands" : command_list,
"parameters" : parameters
}
print(json.dumps(data, indent=2, sort_keys=True))
return exit_val
if __name__ == '__main__':
try:
mods_ns_pkg = import_module('azure.cli.command_modules')
installed_command_modules = [modname for _, modname, _ in
pkgutil.iter_modules(mods_ns_pkg.__path__)]
except ImportError:
pass
exit_value = dump_no_help(installed_command_modules)
sys.exit(exit_value)

View file

@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
"""Code coverage related automation code"""

View file

@ -0,0 +1,144 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import collections
import os
import os.path
import sys
from azure.cli.core.test_utils.vcr_test_base import COMMAND_COVERAGE_CONTROL_ENV
import automation.tests.nose_helper as automation_tests
import automation.utilities.path as automation_path
# pylint: disable=too-few-public-methods
class CommandCoverageContext(object):
FILE_NAME = 'command_coverage.txt'
def __init__(self, data_file_path):
self._data_file_path = os.path.join(data_file_path, self.FILE_NAME)
def __enter__(self):
os.environ[COMMAND_COVERAGE_CONTROL_ENV] = self._data_file_path
automation_path.make_dirs(os.path.dirname(self.coverage_file_path))
with open(self.coverage_file_path, 'w') as f:
f.write('')
return self
def __exit__(self, exc_type, exc_val, exc_tb):
del os.environ[COMMAND_COVERAGE_CONTROL_ENV]
@property
def coverage_file_path(self):
return self._data_file_path
def run_command_coverage(modules):
test_result_dir = automation_path.get_test_results_dir(with_timestamp=True, prefix='cmdcov')
data_file = os.path.join(test_result_dir, 'cmdcov.data')
# run tests to generate executed command list
run_nose = automation_tests.get_nose_runner(test_result_dir, parallel=False)
with CommandCoverageContext(data_file) as context:
for name, path in modules:
run_nose(name, path)
print('BEGIN: Full executed commands list')
for line in open(context.coverage_file_path):
sys.stdout.write(line)
print('END: Full executed commands list')
# pylint: disable=too-few-public-methods
class CoverageContext(object):
def __init__(self):
from coverage import Coverage
self._cov = Coverage(cover_pylib=False)
self._cov.start()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self._cov.stop()
def run_code_coverage(modules):
# create test results folder
test_results_folder = automation_path.get_test_results_dir(with_timestamp=True, prefix='cover')
# get test runner
run_nose = automation_tests.get_nose_runner(
test_results_folder, code_coverage=True, parallel=False)
# run code coverage on each project
for name, _, test_path in modules:
with CoverageContext():
run_nose(name, test_path)
import shutil
shutil.move('.coverage', os.path.join(test_results_folder, '.coverage.{}'.format(name)))
def coverage_command_rundown(log_file_path):
import azure.cli.core.application
config = azure.cli.core.application.Configuration([])
azure.cli.core.application.APPLICATION = azure.cli.core.application.Application(config)
existing_commands = set(config.get_command_table().keys())
command_counter = collections.defaultdict(lambda: 0)
for line in open(log_file_path, 'r'):
command = line.split(' -', 1)[0].strip()
if command:
command_counter[command] += 1
print('COUNT\tCOMMAND')
for c in sorted(command_counter.keys()):
print('{}\t{}'.format(command_counter[c], c))
print('\nUncovered commands:')
for c in sorted(existing_commands - set(command_counter.keys())):
print(c)
def main():
import argparse
parser = argparse.ArgumentParser('Code coverage tools')
parser.add_argument('--command-coverage', action='store_true', help='Run command coverage')
parser.add_argument('--code-coverage', action='store_true', help='Run code coverage')
parser.add_argument('--module', action='append', dest='modules',
help='The modules to run coverage. Multiple modules can be fed.')
parser.add_argument('--command-rundown', action='store',
help='Analyze a command coverage test result.')
args = parser.parse_args()
selected_modules = automation_path.filter_user_selected_modules(args.modules)
if not selected_modules:
parser.print_help()
sys.exit(1)
if not args.code_coverage and not args.command_coverage and not args.command_rundown:
parser.print_help()
sys.exit(1)
if args.command_rundown:
coverage_command_rundown(args.command_rundown)
sys.exit(0)
if args.code_coverage:
run_code_coverage(selected_modules)
if args.command_coverage:
run_command_coverage(selected_modules)
sys.exit(0)
if __name__ == '__main__':
main()

View file

@ -0,0 +1,56 @@
Examples of use
===============
NOTE:
Wheel and Twine are required.
Use the command below to install them if not installed.
pip install --upgrade wheel twine
Show help
---------
```
$ python -m automation.release.run -h
```
Build azure-cli-core
--------------------
```
$ python -m automation.release.run -c azure-cli-core
```
Build azure-cli-core without patching the version number
--------------------------------------------------------
```
$ python -m automation.release.run -c azure-cli-core --no-version-patch
```
Build & Publish azure-cli-core to test PyPI
-------------------------------------------
```
$ export TWINE_USERNAME=<user>
$ export TWINE_PASSWORD=<pass>
$ python -m automation.release.run -c azure-cli-core -r https://testpypi.python.org/pypi
```
Build & Publish azure-cli-core to public PyPI
---------------------------------------------
```
$ export TWINE_USERNAME=<user>
$ export TWINE_PASSWORD=<pass>
$ python -m automation.release.run -c azure-cli-core -r https://pypi.python.org/pypi
```
Examples of checking for component changes since git tag
========================================================
List changes for all components since all-v0.1.0b11
---------------------------------------------------
```
$ python -m automation.release.check -s all-v0.1.0b11
```
List changes for azure-cli-core since all-v0.1.0b11
---------------------------------------------------
```
$ python -m automation.release.check -c azure-cli-core -s all-v0.1.0b11
```

View file

@ -0,0 +1,6 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
"""Component/Package release automation code"""

View file

@ -0,0 +1,65 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from __future__ import print_function
import sys
import re
import argparse
from subprocess import check_call, CalledProcessError
from ..utilities.path import get_repo_root, get_all_module_paths
REGEX_COMPONENT_NAME = re.compile(r"([a-z\-]*)-([0-9])")
def error_exit(msg):
print('ERROR: '+msg, file=sys.stderr)
sys.exit(1)
def check_component_revisions(component_name, r_start, r_end):
for comp_name, comp_path in get_all_module_paths():
if comp_name == component_name:
revision_range = "{}..{}".format(r_start, r_end)
try:
check_call(["git", "log",
"--pretty=format:'%C(yellow)%h %Cred%ad %Cblue%an%Cgreen%d %Creset%s'",
revision_range, "--", comp_path, ":(exclude)*/tests/*"],
cwd=get_repo_root())
except CalledProcessError as e:
error_exit(str(e))
return
    error_exit("No component found with name '{}'".format(component_name))
def check_all_component_revisions(r_start, r_end):
for comp_name, _ in get_all_module_paths():
print('<<< {} >>>'.format(comp_name))
check_component_revisions(comp_name, r_start, r_end)
print()
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description="Check for changes made to a component since a git commit or tag. Empty "
"response means no changes.")
parser.add_argument('--component', '-c',
help='Component name (e.g. azure-cli, azure-cli-vm, etc.). If not '
'specified and --git-revision-start doesn\'t start with the component '
'name, all component changes are shown.')
parser.add_argument('--git-revision-start', '-s', required=True,
help="Git tag (or commit) to use as the start of the revision range. "
"(e.g. release-azure-cli-vm-0.1.0)")
parser.add_argument('--git-revision-end', '-e', default='HEAD',
help='Git tag (or commit) to use as the end of the revision range.')
args = parser.parse_args()
if args.git_revision_start.startswith('azure-cli') and not args.component:
args.component = re.match(REGEX_COMPONENT_NAME, args.git_revision_start).group(1)
if args.component:
check_component_revisions(args.component,
args.git_revision_start,
args.git_revision_end)
else:
check_all_component_revisions(args.git_revision_start,
args.git_revision_end)

View file

@ -0,0 +1,86 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
from __future__ import print_function
import argparse
import os
import tempfile
import tarfile
from subprocess import check_call
from .version_patcher import VersionPatcher
from ..utilities.path import get_all_module_paths
SOURCE_ARCHIVE_NAME = 'source.tar.gz'
def save_source_in_archive(pkg_path, dest):
with tarfile.open(os.path.join(dest, SOURCE_ARCHIVE_NAME), 'w:gz') as tar:
tar.add(pkg_path, arcname=os.path.basename(pkg_path))
def build(pkg_path, dest):
"""
pkg_path - Full path to directory of the package to build
dest - Destination for the built package
"""
check_call(['python', 'setup.py', 'sdist', '-d', dest, 'bdist_wheel', '-d', dest], cwd=pkg_path)
def release(pkg_dir, repo):
"""Release all packages in a directory"""
pkgs = [os.path.join(pkg_dir, f) for f in os.listdir(pkg_dir) if f != SOURCE_ARCHIVE_NAME]
for pkg in pkgs:
check_call(['twine', 'register', '--repository-url', repo, '--repository', repo, pkg])
check_call(['twine', 'upload', '--repository-url', repo, '--repository', repo, pkg])
def run_build_release(component_name, repo, dest=None, use_version_patch=True):
"""
component_name - The full component name (e.g. azure-cli, azure-cli-core, azure-cli-vm, etc.)
"""
for comp_name, comp_path in get_all_module_paths():
if comp_name == component_name:
pkg_dir = dest or tempfile.mkdtemp()
patcher = VersionPatcher(use_version_patch, component_name, comp_path)
patcher.patch()
save_source_in_archive(comp_path, pkg_dir)
build(comp_path, pkg_dir)
patcher.unpatch()
print("Built '{}' to '{}'".format(comp_name, pkg_dir))
if repo:
release(pkg_dir, repo)
return
raise ValueError("No component found with name '{}'".format(component_name))
if __name__ == '__main__':
parser = argparse.ArgumentParser(
description="Automated build and release of a component. To only build, don't specify the"
" repo parameter. The environment variables TWINE_USERNAME and TWINE_PASSWORD "
"are required if releasing.")
parser.add_argument('--component', '-c', required=True,
help='Component name (e.g. azure-cli, azure-cli-vm, etc.)')
parser.add_argument('--no-version-patch', action='store_false',
help="By default, we patch the version number of the package to remove "
"'+dev' if it exists.")
parser.add_argument('--repo', '-r',
help='Repository URL for release (e.g. https://pypi.python.org/pypi, '
'https://testpypi.python.org/pypi)')
parser.add_argument('--dest',
help='Directory to store assets. By default, a temp directory is used.')
args = parser.parse_args()
if args.repo:
assert os.environ.get('TWINE_USERNAME') and os.environ.get('TWINE_PASSWORD'), \
"Set TWINE_USERNAME and TWINE_PASSWORD environment variables to authentication with " \
"PyPI repository."
if args.dest:
args.dest = os.path.abspath(args.dest)
assert os.path.isdir(args.dest), "Directory '{}' does not exist".format(args.dest)
run_build_release(args.component,
args.repo,
args.dest,
args.no_version_patch)

View file

@ -0,0 +1,77 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import fileinput
import os
import sys
class VersionPatcher(object):
"""Manages patching the version of a package to remove '+dev' from the version."""
def __init__(self, use_version_patch, component_name, component_path):
self.use_version_patch = use_version_patch
self.component_name = component_name
self.component_path = component_path
self.setup_py = os.path.join(component_path, 'setup.py')
self.backup_setup_py_version = None
# These two modules also have version defined in the __init__.py file
# These versions have to be kept in sync.
if self.component_name == 'azure-cli':
self.init_py_path = os.path.join(self.component_path, 'azure', 'cli', '__init__.py')
elif self.component_name == 'azure-cli-core':
self.init_py_path = os.path.join(self.component_path, 'azure', 'cli', 'core',
'__init__.py')
else:
self.init_py_path = None
self.backup_init_version = None
def _patch_setup_py(self):
for _, line in enumerate(fileinput.input(self.setup_py, inplace=1)):
if line.startswith('VERSION'):
self.backup_setup_py_version = line
# apply version patch
sys.stdout.write(line.replace('+dev', ''))
else:
sys.stdout.write(line)
def _unpatch_setup_py(self):
for _, line in enumerate(fileinput.input(self.setup_py, inplace=1)):
if line.startswith('VERSION'):
# restore original version
sys.stdout.write(self.backup_setup_py_version)
else:
sys.stdout.write(line)
def _patch_init_py(self):
for _, line in enumerate(fileinput.input(self.init_py_path, inplace=1)):
if line.startswith('__version__'):
self.backup_init_version = line
# apply init version patch
sys.stdout.write(line.replace('+dev', ''))
else:
sys.stdout.write(line)
def _unpatch_init_py(self):
for _, line in enumerate(fileinput.input(self.init_py_path, inplace=1)):
if line.startswith('__version__'):
# restore original init version
sys.stdout.write(self.backup_init_version)
else:
sys.stdout.write(line)
def patch(self):
if not self.use_version_patch:
return
self._patch_setup_py()
if self.init_py_path:
self._patch_init_py()
def unpatch(self):
if not self.use_version_patch:
return
self._unpatch_setup_py()
if self.init_py_path:
self._unpatch_init_py()

View file

@ -0,0 +1,4 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------

View file

@ -0,0 +1,35 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
import sys
import os
import subprocess
import automation.utilities.path as automation_path
INSTALL_COMMAND = 'python -m pip install -e {}'
def install_modules():
    all_modules = automation_path.get_command_modules_paths()
print('Installing command modules')
print('Modules: {}'.format(', '.join(name for name, _ in all_modules)))
failures = []
for name, path in all_modules:
try:
subprocess.check_call(INSTALL_COMMAND.format(path).split())
except subprocess.CalledProcessError as err:
# exit code is not zero
failures.append('Failed to install {}. Error message: {}'.format(name, err.output))
for f in failures:
print(f)
return not any(failures)
if __name__ == '__main__':
sys.exit(0 if install_modules() else 1)

View file

@ -0,0 +1,4 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------

Some files were not shown because too many files have changed in this diff.