normalize eol to lf and set core.autocrlf to auto by default (#16)
* normalize eol to lf and set core.autocrlf to auto by default * adding sjyang18 to contributor list
This commit is contained in:
Parent
26567c4e4c
Commit
53ab4e3db5
.gitattributes

@ -1,2 +1,5 @@
- # Declare files that will always have LF line endings on checkout.
- *.sh text eol=lf
+ # Set the default behavior, in case people don't have core.autocrlf set.
+ * text=auto
+
+ # Declare *.sh files that will always have LF line endings on checkout.
+ *.sh text eol=lf
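With `* text=auto` as the default, existing working copies do not renormalize themselves automatically. A sketch of the usual follow-up for contributors, assuming a clone of this repository:

```bash
# Re-apply the new .gitattributes rules to files already checked in
git add --renormalize .
git commit -m "Normalize line endings to LF"
```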
@ -26,6 +26,7 @@ Azure Orbital analytics has benefited from following developers (in alphabetic order):
  - [Mandar Inamdar](https://github.com/mandarinamdar)
  - [Nikhil Manchanda](https://github.com/SlickNik)
  - [Safiyah Sadiq](https://github.com/safiyahs)
+ - [Seokwon Yang](https://github.com/sjyang18)
  - [Sushil Kumar](https://github.com/sushilkm)
  - [Tatyana Pearson](https://github.com/tpearson02)
  - [Taylor Corbett](https://github.com/TaylorCorbett)
deploy/README.md (590 changed lines)

@ -1,296 +1,296 @@
# Prerequisites

The deployment script uses the following tools. Follow the links provided to install them on the computer from which you will execute the script.

- [bicep](https://docs.microsoft.com/azure/azure-resource-manager/bicep/install)
- [az cli](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [docker cli](https://docs.docker.com/get-docker/)
- [jq](https://stedolan.github.io/jq/download/)

- The scripts are executed in a bash shell, so if you are using a computer with a Windows-based operating system, install a [WSL](https://docs.microsoft.com/windows/wsl/about) environment to execute them.

- The user performing the deployment of the bicep template and the associated scripts should have the `Contributor` role assigned on the subscription to which the resources are being deployed.

- This solution assumes that no Policies deployed to your tenant prevent the resources from being deployed.

- Get the repository to find the scripts. Clone the repository using the following command:

```bash
git clone git@github.com:Azure/Azure-Orbital-Analytics-Samples.git
```

You will need the [git](https://github.com/git-guides/install-git) CLI to clone the repository.

Alternatively, you can use Azure Cloud Shell (Bash) to deploy this sample solution to your Azure subscription.
# How do the scripts work?

The shell script runs an `az cli` command to invoke the `bicep` tool.

This command receives the bicep template as input and converts it into an intermediate ARM template, which is then submitted to the Azure APIs to create the Azure resources.
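That conversion step can also be run in isolation; a minimal sketch, assuming `main.bicep` is in the current directory:

```bash
# Transpile the bicep template into the ARM (JSON) template that gets
# submitted to the Azure APIs; this writes main.json next to main.bicep.
az bicep build --file main.bicep
```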
# Executing the script

Before executing the script, log in to Azure using the `az` CLI and set the subscription in which you want to provision the resources:

```bash
az login
az account set -s <subscription_id>
```
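Before running the deployment you can confirm that the intended subscription is active; a quick check using only the `az` CLI:

```bash
# Show the name and id of the subscription az will deploy into
az account show --query "{name:name, id:id}" -o table
```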
The script is written to be executed with minimal input. It requires the following:

- `environmentCode`, which serves as the prefix for infrastructure service names.
- `location`, the Azure region in which the infrastructure is deployed.
To install the infrastructure, execute the `install.sh` script as follows:

```bash
./deploy/install.sh <environmentCode> <location> <envTag>
```

Default values for the parameters are provided in the script itself.

Argument | Required | Sample value
----------|-----------|-------
environmentCode | yes | aoi
location | yes | westus
envTag | no | synapse\-\<environmentCode\>
For example:

```bash
./deploy/install.sh aoi-demo westus demo
```

Note: Currently, this deployment does not deploy Azure Database for PostgreSQL for post-analysis.
# Using bicep template

You can also use the bicep template directly instead of the `install.sh` script.

To deploy the resources using the bicep template, use the following command:

```bash
az deployment sub create -l <region_name> -n <deployment_name> -f main.bicep -p location=<region_name> environmentCode=<environment_name_prefix> environment=<tag_value>
```

For example:

```bash
az deployment sub create -l <region> -n aoi -f main.bicep -p location=<region> environmentCode=aoi-demo environment=devSynapse
```
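Once submitted, the deployment can be polled from the CLI; a minimal sketch, assuming the deployment name `aoi` from the example above:

```bash
# Poll the subscription-level deployment until it reports Succeeded or Failed
az deployment sub show -n aoi --query properties.provisioningState -o tsv
```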
# Verifying infrastructure resources

Once the setup has been executed, check for the following resource groups and resources to confirm successful execution.

The following resource groups and resources should be created by the command `./deploy/install.sh aoi-demo`; a CLI sketch for checking them follows the list.
- `aoi-demo-data-rg`

  This resource group houses data resources.

  - Storage account named `rawdata<6-character-random-string>` to store raw input data for pipelines.
  - Keyvault named `aoi-demo-data-kv` to store credentials as secrets.

- `aoi-demo-monitor-rg`

  This resource group houses monitoring resources.

  - App Insights instance named `aoi-demo-monitor-appinsights` for monitoring.
  - Log Analytics workspace named `aoi-demo-monitor-workspace` to store monitoring data.

- `aoi-demo-network-rg`

  This resource group houses networking resources.

  - Virtual network named `aoi-demo-vnet`, which has 3 subnets:
    - `pipeline-subnet`
    - `data-subnet`
    - `orchestration-subnet`
  - It also has a list of network security groups to restrict access on the network.

- `aoi-demo-orc-rg`

  This resource group houses pipeline orchestration resources.

  - Batch Account named `aoi-demoorcbatchact`.
  - Storage account named `batchacc<6-character-random-string>` used as the Batch Account's auto storage.

  Also, go to the Batch Account and switch to the pools blade. Look for one or more pools created by the bicep template. Make sure the resizing of the pools completed without any errors.

  - Errors while resizing the pools are indicated by a red exclamation icon next to the pool. The most common failures are related to VM quota limitations.
  - Resizing may take a few minutes. Pools that are resizing show `0 -> 1` under the dedicated nodes column. Pools that have completed resizing show the final number of dedicated nodes.

  Wait for all pools to complete resizing before moving to the next steps.

  Note: The Bicep template adds the Synapse workspace's Managed Identity to the Batch Account as `Contributor`. Alternatively, Custom Role Definitions can be used to assign the Synapse workspace's Managed Identity to the Batch Account with the required Azure RBAC operations.

  - Keyvault named `aoi-demo-orc-kv`.
  - User managed identity `aoi-demo-orc-umi` for access and authentication.
  - Azure Container Registry instance named `aoi-demoorcacr` to store container images.

- `aoi-demo-pipeline-rg`

  This resource group houses Synapse pipeline resources.

  - Keyvault instance named `aoi-demo-pipeline-kv` to hold secrets for the pipeline.
  - Storage account named `synhns<6-character-random-string>` for the Synapse workspace.
  - Synapse workspace named `aoi-demo-pipeline-syn-ws` to hold pipeline resources.
  - Synapse spark pool `pool<6-character-random-string>` to run analytics.
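A minimal CLI sketch for these checks, assuming the `aoi-demo` environment code from the example; the Batch account and pool names are placeholders to substitute:

```bash
# List the resource groups created with the aoi-demo prefix
az group list --query "[?starts_with(name, 'aoi-demo')].name" -o tsv

# Inspect a Batch pool's allocation state and any resize errors
az batch account login --resource-group aoi-demo-orc-rg --name <batch-account-name>
az batch pool show --pool-id <pool-name> \
    --query "{state:allocationState, errors:resizeErrors}" -o json
```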
# Load the Custom Vision Model to your Container Registry

There are three ways to load an AI Model with this pipeline:

a. Use the publicly hosted Custom Vision Model as GitHub Packages.

No additional steps are required for this approach. The Custom Vision Model is a containerized image that can be pulled with `docker pull ghcr.io/azure/azure-orbital-analytics-samples/custom_vision_offline:latest`. The [Specification document](../src/aimodels/custom_vision_object_detection_offline/specs/custom_vision_object_detection.json) in this repository already points to the publicly hosted GitHub Registry.
b. Download the publicly hosted Custom Vision Model and host it on your Container Registry.

Run the shell commands below to pull and push the image to your Container Registry.

```bash
docker pull ghcr.io/azure/azure-orbital-analytics-samples/custom_vision_offline:latest

docker tag ghcr.io/azure/azure-orbital-analytics-samples/custom_vision_offline:latest <container-registry-name>.azurecr.io/custom_vision_offline:latest

az acr login --name <container-registry-name>

docker push <container-registry-name>.azurecr.io/custom_vision_offline:latest
```

Update the `algImageName` value in the [Specification document](../src/aimodels/custom_vision_object_detection_offline/specs/custom_vision_object_detection.json) to point to the new image location.
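Since `jq` is already a prerequisite, that edit can be scripted; a hypothetical one-liner, assuming the specification file path from this repository:

```bash
# Point algImageName at the image pushed to your own registry
spec=src/aimodels/custom_vision_object_detection_offline/specs/custom_vision_object_detection.json
jq '.algImageName = "<container-registry-name>.azurecr.io/custom_vision_offline:latest"' "$spec" \
    > "$spec.tmp" && mv "$spec.tmp" "$spec"
```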
c. BYOM (Bring-your-own-Model) and host it on your Container Registry.

If you have the image locally, run the shell commands below to push the image to your Container Registry.

```bash
docker tag custom_vision_offline:latest <container-registry-name>.azurecr.io/custom_vision_offline:latest

az acr login --name <container-registry-name>

docker push <container-registry-name>.azurecr.io/custom_vision_offline:latest
```

Update the `algImageName` value in the [Specification document](../src/aimodels/custom_vision_object_detection_offline/specs/custom_vision_object_detection.json) to point to the new image location.
Note: When using a private Container Registry, update the `containerSettings` property in your [Custom Vision Object Detection v2](/src/workflow/pipeline/Custom%20Vision%20Object%20Detection%20v2.json) pipeline and add the following sub-property in order to authenticate to the Container Registry:

```json
"registry": {
    "registryServer": "",
    "username": "",
    "password": ""
}
```
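If the admin user is enabled on your Container Registry, these values can be retrieved from the CLI; a sketch, assuming admin access:

```bash
# Retrieve the login server and admin credentials for the registry block above
az acr show --name <container-registry-name> --query loginServer -o tsv
az acr credential show --name <container-registry-name> \
    --query "{username:username, password:passwords[0].value}" -o json
```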
The [Specification document](../src/aimodels/custom_vision_object_detection_offline/specs/custom_vision_object_detection.json) and [Configuration file](../src/aimodels/custom_vision_object_detection_offline/config/config.json) are required to run the Custom Vision Model.

- Specification document - This solution defines a framework for running AI Models as containerized solutions in a standardized way. A Specification document works as a contract definition document to run an AI Model.

- Configuration file - Each AI Model may require one or more parameters to run. These parameters, driven by the end users, are passed to the AI Model in the form of a configuration file. The schema of the configuration file is specific to the AI Model, hence we provide a template for the end user to plug in their values.
# Configuring the Resources

The next step is to configure your resources and set them up with the required dependencies, like Python files, library requirements and so on, before importing the Synapse pipeline. Run the `configure.sh` script below to perform the configuration:

```bash
./deploy/configure.sh <environmentCode>
```
# Packaging the Synapse Pipeline

To package the Synapse pipeline, run the `package.sh` script with the following syntax:

```bash
./deploy/package.sh <environmentCode>
```

Once the above step completes, a zip file is generated. Upload the generated zip file to your Synapse Studio by following the steps below:

1. Open the Synapse Studio.
2. Switch to the Integrate tab on the left.
3. At the top of the left pane, click on the "+" dropdown and select "Import resources from support files".
4. When prompted to select a file, pick the zip file generated in the previous step.
5. The pipelines and their dependencies are imported to the Synapse Studio. Validate the imported components for any errors.
6. Click "Publish all" and wait for the imported components to be published.
## Running the pipeline

Before starting the pipeline, prepare the storage account in the `<environmentCode>-data-rg` resource group by creating a container for the pipeline run.

- Create a new container for every pipeline run. Make sure the container name does not exceed 8 characters.

- Under the newly created container, add two folders. One folder named `config` with the following configuration files:

  - [Specification document](../src/aimodels/custom_vision_object_detection_offline/specs/custom_vision_object_detection.json) configuration file that is provided by the AI Model partner.
  - [Config file](../src/aimodels/custom_vision_object_detection_offline/config/config.json) specific to the AI Model that contains parameters to be passed to the AI Model.
  - [Config file](../src/transforms/spark-jobs/raster_crop/config/config-aoi.json) for the Crop transformation that contains the Area of Interest to crop to.
  - [Config file](../src/transforms/spark-jobs/raster_convert/config/config-img-convert-png.json) for the GeoTiff to PNG transform.
  - [Config file](../src/transforms/spark-jobs/pool_geolocation/config/config-pool-geolocation.json) for the pool geolocation transform, which converts image coordinates to geolocation coordinates.

  Another folder named `raw` with the sample GeoTiff to be processed by the pipeline. You can use this [GeoTiff file](https://aoigeospatial.blob.core.windows.net/public/samples/sample_4326.tif) hosted as a sample, or any GeoTiff with a CRS of EPSG:4326.

  When using this sample file, update your [Crop Transform's Config file](../src/transforms/spark-jobs/raster_crop/config/config-aoi.json) with a bbox of `[-117.063550, 32.749467, -116.999386, 32.812946]`. A sketch for staging the container from the CLI follows.
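A hypothetical staging sequence using `az storage`, assuming a container named `run00001` (within the 8-character limit) and local `config` and `raw` folders holding the files above:

```bash
# Authenticate the data-plane calls with the account key
export AZURE_STORAGE_ACCOUNT=<storage-account-name>
export AZURE_STORAGE_KEY=<storage-account-key>

# Create the per-run container
az storage container create --name run00001

# Upload the configuration files into the config folder
az storage blob upload-batch --destination run00001 --destination-path config --source ./config

# Upload the sample GeoTiff into the raw folder
az storage blob upload --container-name run00001 --name raw/sample_4326.tif --file ./sample_4326.tif
```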
To run the pipeline, open the Synapse Studio for the Synapse workspace that you have created and follow the steps listed below.

- Open the `E2E Custom Vision Model Flow` and click on the debug button.

- When presented with the parameters, fill out the values. The table below details what each parameter represents.

| Parameter | Description |
|--|--|
| Prefix | The Storage container name created in the [Running the pipeline section](#running-the-pipeline) that hosts the raw data |
| StorageAccountName | Name of the Storage Account in the `<environmentCode>-data-rg` resource group that hosts the raw data |
| StorageAccountKey | Access key of the Storage Account in the `<environmentCode>-data-rg` resource group that hosts the raw data |
| BatchAccountName | Name of the Batch Account in the `<environmentCode>-orc-rg` resource group that runs the AI Model |
| BatchJobName | Job name within the Batch Account in the `<environmentCode>-orc-rg` resource group that runs the AI Model |
| BatchLocation | Location of the Batch Account in the `<environmentCode>-orc-rg` resource group that runs the AI Model |
- Once the parameters are entered, click OK to submit and kick off the pipeline.

- Wait for the pipeline to complete.
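The `StorageAccountKey` parameter above can be fetched from the CLI rather than copied from the portal; a sketch, assuming the `aoi-demo` environment code:

```bash
# Fetch the primary access key of the raw-data storage account
az storage account keys list --resource-group aoi-demo-data-rg \
    --account-name <storage-account-name> --query "[0].value" -o tsv
```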
# Cleanup Script

We have a cleanup script to clean up the resource groups, and thus the resources, provisioned using the `environmentCode`.
As discussed above, the `environmentCode` is used as the prefix to generate resource group names, so the cleanup script deletes the resource groups with those generated names.

Execute the cleanup script as follows:

```bash
./deploy/cleanup.sh <environmentCode>
```

For example:

```bash
./deploy/cleanup.sh aoi-demo
```
To keep a specific resource group (and its resources) from being deleted, set the corresponding `NO_DELETE_*_RESOURCE_GROUP` environment variable to true:

```bash
NO_DELETE_DATA_RESOURCE_GROUP=true
NO_DELETE_MONITORING_RESOURCE_GROUP=true
NO_DELETE_NETWORKING_RESOURCE_GROUP=true
NO_DELETE_ORCHESTRATION_RESOURCE_GROUP=true
NO_DELETE_PIPELINE_RESOURCE_GROUP=true
./deploy/cleanup.sh <environmentCode>
```

# Attributions And Disclaimers
- The [GeoTiff file](https://aoigeospatial.blob.core.windows.net/public/samples/sample_4326.tif) provided as a sample is attributed to NAIP Imagery available via the [Planetary Computer](https://planetarycomputer.microsoft.com). It is covered under [USDA](https://ngda-imagery-geoplatform.hub.arcgis.com).
@ -1,14 +1,14 @@

name: aoi-env
channels:
  - conda-forge
  - defaults
dependencies:
  - gdal=3.3.0
  - pip>=20.1.1
  - azure-storage-file-datalake
  - libgdal
  - shapely
  - pyproj
  - pip:
    - rasterio
    - geopandas
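The diff does not show this file's path; assuming it is saved as `environment.yml`, the environment can be created with conda:

```bash
# Create and activate the aoi-env environment defined above
conda env create -f environment.yml
conda activate aoi-env
```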
@ -1,129 +1,129 @@

// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

// List of required parameters
param environmentCode string
param environmentTag string
param projectName string
param location string

// Name parameters for infrastructure resources
param dataResourceGroupName string = ''
param pipelineResourceGroupName string = ''
param pipelineLinkedSvcKeyVaultName string = ''
param keyvaultName string = ''
param rawDataStorageAccountName string = ''

// Parameters with default values for Keyvault
param keyvaultSkuName string = 'Standard'
param objIdForKeyvaultAccessPolicyPolicy string = ''
param keyvaultCertPermission array = [
  'All'
]
param keyvaultKeyPermission array = [
  'All'
]
param keyvaultSecretPermission array = [
  'All'
]
param keyvaultStoragePermission array = [
  'All'
]
param keyvaultUsePublicIp bool = true
param keyvaultPublicNetworkAccess bool = true
param keyvaultEnabledForDeployment bool = true
param keyvaultEnabledForDiskEncryption bool = true
param keyvaultEnabledForTemplateDeployment bool = true
param keyvaultEnablePurgeProtection bool = true
param keyvaultEnableRbacAuthorization bool = false
param keyvaultEnableSoftDelete bool = true
param keyvaultSoftDeleteRetentionInDays int = 7

// Parameters with default values for Synapse and its Managed Identity
param synapseMIStorageAccountRoles array = [
  'ba92f5b4-2d11-453d-a403-e96b0029c9fe'
  '974c5e8b-45b9-4653-ba55-5f855dd0fb88'
]
param synapseMIPrincipalId string = ''

var namingPrefix = '${environmentCode}-${projectName}'
var dataResourceGroupNameVar = empty(dataResourceGroupName) ? '${namingPrefix}-rg' : dataResourceGroupName
var nameSuffix = substring(uniqueString(dataResourceGroupNameVar), 0, 6)
var keyvaultNameVar = empty(keyvaultName) ? '${namingPrefix}-kv' : keyvaultName
var rawDataStorageAccountNameVar = empty(rawDataStorageAccountName) ? 'rawdata${nameSuffix}' : rawDataStorageAccountName

module keyVault '../modules/akv.bicep' = {
  name: '${namingPrefix}-akv'
  params: {
    environmentName: environmentTag
    keyVaultName: keyvaultNameVar
    location: location
    skuName: keyvaultSkuName
    objIdForAccessPolicyPolicy: objIdForKeyvaultAccessPolicyPolicy
    certPermission: keyvaultCertPermission
    keyPermission: keyvaultKeyPermission
    secretPermission: keyvaultSecretPermission
    storagePermission: keyvaultStoragePermission
    usePublicIp: keyvaultUsePublicIp
    publicNetworkAccess: keyvaultPublicNetworkAccess
    enabledForDeployment: keyvaultEnabledForDeployment
    enabledForDiskEncryption: keyvaultEnabledForDiskEncryption
    enabledForTemplateDeployment: keyvaultEnabledForTemplateDeployment
    enablePurgeProtection: keyvaultEnablePurgeProtection
    enableRbacAuthorization: keyvaultEnableRbacAuthorization
    enableSoftDelete: keyvaultEnableSoftDelete
    softDeleteRetentionInDays: keyvaultSoftDeleteRetentionInDays
  }
}

module rawDataStorageAccount '../modules/storage.bicep' = {
  name: '${namingPrefix}-raw-data-storage'
  params: {
    storageAccountName: rawDataStorageAccountNameVar
    environmentName: environmentTag
    location: location
    isHnsEnabled: true
    storeType: 'raw'
  }
}

module rawDataStorageAccountFileShare '../modules/file-share.bicep' = {
  name: '${namingPrefix}-raw-data-storage-fileshare'
  params: {
    storageAccountName: rawDataStorageAccountNameVar
    shareName: 'volume-a'
  }
  dependsOn: [
    rawDataStorageAccount
  ]
}

module rawDataStorageAccountCredentials '../modules/storage.credentials.to.keyvault.bicep' = {
  name: '${namingPrefix}-raw-data-storage-credentials'
  params: {
    environmentName: environmentTag
    storageAccountName: rawDataStorageAccountNameVar
    keyVaultName: pipelineLinkedSvcKeyVaultName
    keyVaultResourceGroup: pipelineResourceGroupName
  }
  dependsOn: [
    rawDataStorageAccount
  ]
}

module synapseIdentityForStorageAccess '../modules/storage-role-assignment.bicep' = [ for (role, index) in synapseMIStorageAccountRoles: {
  name: '${namingPrefix}-synapse-id-kv-${index}'
  params: {
    resourceName: rawDataStorageAccountNameVar
    principalId: synapseMIPrincipalId
    roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${role}'
  }
  dependsOn: [
    rawDataStorageAccount
  ]
}]

output rawStorageAccountName string = rawDataStorageAccountNameVar
output rawStorageFileEndpointUri string = rawDataStorageAccount.outputs.fileEndpointUri
output rawStoragePrimaryKey string = rawDataStorageAccount.outputs.primaryKey
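This is a resource-group-scoped module that the deployment composes with the others; it can also be dry-run on its own. A hypothetical sketch, assuming the file is saved as `data.bicep` (its path is not shown in this diff) and that the parameter values are illustrative:

```bash
# Preview the changes this module would make, without deploying anything
az deployment group what-if --resource-group aoi-demo-data-rg \
    --template-file data.bicep \
    --parameters environmentCode=aoi-demo environmentTag=demo projectName=data location=westus
```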
@ -1,35 +1,35 @@

// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

// List of required parameters
param environmentCode string
param environmentTag string
param projectName string
param location string

// Parameters with default values for Monitoring
var namingPrefix = '${environmentCode}-${projectName}'

module workspace '../modules/log-analytics.bicep' = {
  name: '${namingPrefix}-workspace'
  params: {
    environmentName: environmentTag
    workspaceName: '${namingPrefix}-workspace'
    location: location
  }
}

module appinsights '../modules/appinsights.bicep' = {
  name: '${namingPrefix}-appinsights'
  params: {
    environmentName: environmentTag
    applicationInsightsName: '${namingPrefix}-appinsights'
    location: location
    workspaceId: '/subscriptions/${subscription().subscriptionId}/resourceGroups/${resourceGroup().name}/providers/Microsoft.OperationalInsights/workspaces/${namingPrefix}-workspace'
  }
  dependsOn: [
    workspace
  ]
}

output workspaceId string = workspace.outputs.workspaceId
@ -1,63 +1,63 @@

// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

// List of required parameters
param environmentCode string
param environmentTag string
param projectName string
param Location string

// Parameters with default values for Virtual Network
param virtualNetworkName string = ''
param vnetAddressPrefix string = '10.5.0.0/16'
param pipelineSubnetAddressPrefix string = '10.5.1.0/24'
param dataSubnetAddressPrefix string = '10.5.2.0/24'
param orchestrationSubnetAddressPrefix string = '10.5.3.0/24'

var namingPrefix = '${environmentCode}-${projectName}'

module vnet '../modules/vnet.bicep' = {
  name: virtualNetworkName
  params: {
    environmentName: environmentTag
    virtualNetworkName: virtualNetworkName
    location: Location
    addressPrefix: vnetAddressPrefix
  }
}

module pipelineSubnet '../modules/subnet.bicep' = {
  name: '${namingPrefix}-pipeline-subnet'
  params: {
    vNetName: virtualNetworkName
    subnetName: 'pipeline-subnet'
    subnetAddressPrefix: pipelineSubnetAddressPrefix
  }
  dependsOn: [
    vnet
  ]
}

module dataSubnet '../modules/subnet.bicep' = {
  name: '${namingPrefix}-data-subnet'
  params: {
    vNetName: virtualNetworkName
    subnetName: 'data-subnet'
    subnetAddressPrefix: dataSubnetAddressPrefix
  }
  dependsOn: [
    pipelineSubnet
  ]
}

module orchestrationSubnet '../modules/subnet.bicep' = {
  name: '${namingPrefix}-orchestration-subnet'
  params: {
    vNetName: virtualNetworkName
    subnetName: 'orchestration-subnet'
    subnetAddressPrefix: orchestrationSubnetAddressPrefix
  }
  dependsOn: [
    dataSubnet
  ]
}
@ -1,303 +1,303 @@
|
|||
// Copyright (c) Microsoft Corporation.
|
||||
// Licensed under the MIT license.
|
||||
|
||||
// List of required parameters
|
||||
param environmentCode string
|
||||
param environmentTag string
|
||||
param projectName string
|
||||
param location string
|
||||
|
||||
param synapseMIPrincipalId string
|
||||
|
||||
// Guid to role definitions to be used during role
|
||||
// assignments including the below roles definitions:
|
||||
// Contributor
|
||||
param synapseMIBatchAccountRoles array = [
|
||||
'b24988ac-6180-42a0-ab88-20f7382dd24c'
|
||||
]
|
||||
|
||||
// Name parameters for infrastructure resources
|
||||
param orchestrationResourceGroupName string = ''
|
||||
param keyvaultName string = ''
|
||||
param batchAccountName string = ''
|
||||
param batchAccountAutoStorageAccountName string = ''
|
||||
param acrName string = ''
|
||||
param uamiName string = ''
|
||||
|
||||
param pipelineResourceGroupName string
|
||||
param pipelineLinkedSvcKeyVaultName string
|
||||
|
||||
// Mount options
|
||||
param mountAccountName string
|
||||
param mountAccountKey string
|
||||
param mountFileUrl string
|
||||
|
||||
// Parameters with default values for Keyvault
|
||||
param keyvaultSkuName string = 'Standard'
|
||||
param objIdForKeyvaultAccessPolicyPolicy string = ''
|
||||
param keyvaultCertPermission array = [
|
||||
'All'
|
||||
]
|
||||
param keyvaultKeyPermission array = [
|
||||
'All'
|
||||
]
|
||||
param keyvaultSecretPermission array = [
|
||||
'All'
|
||||
]
|
||||
param keyvaultStoragePermission array = [
|
||||
'All'
|
||||
]
|
||||
param keyvaultUsePublicIp bool = true
|
||||
param keyvaultPublicNetworkAccess bool = true
|
||||
param keyvaultEnabledForDeployment bool = true
|
||||
param keyvaultEnabledForDiskEncryption bool = true
|
||||
param keyvaultEnabledForTemplateDeployment bool = true
|
||||
param keyvaultEnablePurgeProtection bool = true
|
||||
param keyvaultEnableRbacAuthorization bool = false
|
||||
param keyvaultEnableSoftDelete bool = true
|
||||
param keyvaultSoftDeleteRetentionInDays int = 7
|
||||
|
||||
// Parameters with default values for Batch Account
|
||||
param allowedAuthenticationModesBatchSvc array = [
|
||||
'AAD'
|
||||
'SharedKey'
|
||||
'TaskAuthenticationToken'
|
||||
]
|
||||
param allowedAuthenticationModesUsrSub array = [
|
||||
'AAD'
|
||||
'TaskAuthenticationToken'
|
||||
]
|
||||
|
||||
param batchAccountAutoStorageAuthenticationMode string = 'StorageKeys'
|
||||
param batchAccountPoolAllocationMode string = 'BatchService'
|
||||
param batchAccountPublicNetworkAccess bool = true
|
||||
|
||||
// Parameters with default values for Data Fetch Batch Account Pool
|
||||
param batchAccountCpuOnlyPoolName string = 'data-cpu-pool'
|
||||
param batchAccountCpuOnlyPoolVmSize string = 'standard_d2s_v3'
|
||||
param batchAccountCpuOnlyPoolDedicatedNodes int = 1
|
||||
param batchAccountCpuOnlyPoolImageReferencePublisher string = 'microsoft-azure-batch'
|
||||
param batchAccountCpuOnlyPoolImageReferenceOffer string = 'ubuntu-server-container'
|
||||
param batchAccountCpuOnlyPoolImageReferenceSku string = '20-04-lts'
|
||||
param batchAccountCpuOnlyPoolImageReferenceVersion string = 'latest'
|
||||
param batchAccountCpuOnlyPoolStartTaskCommandLine string = '/bin/bash -c "apt-get update && apt-get install -y python3-pip && pip install requests && pip install azure-storage-blob && pip install pandas"'
|
||||
|
||||
|
||||
param batchLogsDiagCategories array = [
|
||||
'allLogs'
|
||||
]
|
||||
param batchMetricsDiagCategories array = [
|
||||
'AllMetrics'
|
||||
]
|
||||
param logAnalyticsWorkspaceId string
|
||||
|
||||
// Parameters with default values for ACR
|
||||
param acrSku string = 'Standard'
|
||||
|
||||
var namingPrefix = '${environmentCode}-${projectName}'
|
||||
var orchestrationResourceGroupNameVar = empty(orchestrationResourceGroupName) ? '${namingPrefix}-rg' : orchestrationResourceGroupName
|
||||
var nameSuffix = substring(uniqueString(orchestrationResourceGroupNameVar), 0, 6)
|
||||
var uamiNameVar = empty(uamiName) ? '${namingPrefix}-umi' : uamiName
|
||||
var keyvaultNameVar = empty(keyvaultName) ? '${namingPrefix}-kv' : keyvaultName
|
||||
var batchAccountNameVar = empty(batchAccountName) ? '${environmentCode}${projectName}batchact' : batchAccountName
|
||||
var batchAccountAutoStorageAccountNameVar = empty(batchAccountAutoStorageAccountName) ? 'batchacc${nameSuffix}' : batchAccountAutoStorageAccountName
|
||||
var acrNameVar = empty(acrName) ? '${environmentCode}${projectName}acr' : acrName
|
||||
|
||||
module keyVault '../modules/akv.bicep' = {
|
||||
name: '${namingPrefix}-akv'
|
||||
params: {
|
||||
environmentName: environmentTag
|
||||
keyVaultName: keyvaultNameVar
|
||||
location: location
|
||||
skuName:keyvaultSkuName
|
||||
objIdForAccessPolicyPolicy: objIdForKeyvaultAccessPolicyPolicy
|
||||
certPermission:keyvaultCertPermission
|
||||
keyPermission:keyvaultKeyPermission
|
||||
secretPermission:keyvaultSecretPermission
|
||||
storagePermission:keyvaultStoragePermission
|
||||
usePublicIp: keyvaultUsePublicIp
|
||||
publicNetworkAccess:keyvaultPublicNetworkAccess
|
||||
enabledForDeployment: keyvaultEnabledForDeployment
|
||||
enabledForDiskEncryption: keyvaultEnabledForDiskEncryption
|
||||
enabledForTemplateDeployment: keyvaultEnabledForTemplateDeployment
|
||||
enablePurgeProtection: keyvaultEnablePurgeProtection
|
||||
enableRbacAuthorization: keyvaultEnableRbacAuthorization
|
||||
enableSoftDelete: keyvaultEnableSoftDelete
|
||||
softDeleteRetentionInDays: keyvaultSoftDeleteRetentionInDays
|
||||
}
|
||||
}
|
||||
|
||||
module batchAccountAutoStorageAccount '../modules/storage.bicep' = {
|
||||
name: '${namingPrefix}-batch-account-auto-storage'
|
||||
params: {
|
||||
storageAccountName: batchAccountAutoStorageAccountNameVar
|
||||
environmentName: environmentTag
|
||||
location: location
|
||||
storeType: 'batch'
|
||||
}
|
||||
}
|
||||
|
||||
module batchStorageAccountCredentials '../modules/storage.credentials.to.keyvault.bicep' = {
  name: '${namingPrefix}-batch-storage-credentials'
  params: {
    environmentName: environmentTag
    storageAccountName: batchAccountAutoStorageAccountNameVar
    keyVaultName: keyvaultNameVar
    keyVaultResourceGroup: resourceGroup().name
    secretNamePrefix: 'Batch'
  }
  dependsOn: [
    keyVault
    batchAccountAutoStorageAccount
  ]
}

module uami '../modules/managed.identity.user.bicep' = {
  name: '${namingPrefix}-umi'
  params: {
    environmentName: environmentTag
    location: location
    uamiName: uamiNameVar
  }
}

module batchAccount '../modules/batch.account.bicep' = {
  name: '${namingPrefix}-batch-account'
  params: {
    environmentName: environmentTag
    location: location
    batchAccountName: toLower(batchAccountNameVar)
    userManagedIdentityId: uami.outputs.uamiId
    userManagedIdentityPrincipalId: uami.outputs.uamiPrincipalId
    allowedAuthenticationModes: batchAccountPoolAllocationMode == 'BatchService' ? allowedAuthenticationModesBatchSvc : allowedAuthenticationModesUsrSub
    autoStorageAuthenticationMode: batchAccountAutoStorageAuthenticationMode
    autoStorageAccountName: batchAccountAutoStorageAccountNameVar
    poolAllocationMode: batchAccountPoolAllocationMode
    publicNetworkAccess: batchAccountPublicNetworkAccess
    keyVaultName: keyvaultNameVar
  }
  dependsOn: [
    uami
    batchAccountAutoStorageAccount
    keyVault
  ]
}

module synapseIdentityForBatchAccess '../modules/batch.account.role.assignment.bicep' = [for role in synapseMIBatchAccountRoles: {
  name: '${namingPrefix}-batch-account-role-assgn'
  params: {
    resourceName: toLower(batchAccountNameVar)
    principalId: synapseMIPrincipalId
    roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${role}'
  }
  dependsOn: [
    batchAccount
  ]
}]

module batchAccountPoolCheck '../modules/batch.account.pool.exists.bicep' = {
  name: '${namingPrefix}-batch-account-pool-exists'
  params: {
    batchAccountName: batchAccountNameVar
    batchPoolName: batchAccountCpuOnlyPoolName
    userManagedIdentityName: uami.name
    userManagedIdentityResourcegroupName: resourceGroup().name
    location: location
  }
  dependsOn: [
    batchAccountAutoStorageAccount
    batchAccount
  ]
}
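
// The exists-check feeds batchPoolExists into the pool module below so that
// re-running the deployment does not try to recreate an existing pool
// (presumably implemented as a deployment script running under the UAMI).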
module batchAccountCpuOnlyPool '../modules/batch.account.pools.bicep' = {
  name: '${namingPrefix}-batch-account-data-fetch-pool'
  params: {
    batchAccountName: batchAccountNameVar
    batchAccountPoolName: batchAccountCpuOnlyPoolName
    vmSize: batchAccountCpuOnlyPoolVmSize
    fixedScaleTargetDedicatedNodes: batchAccountCpuOnlyPoolDedicatedNodes
    imageReferencePublisher: batchAccountCpuOnlyPoolImageReferencePublisher
    imageReferenceOffer: batchAccountCpuOnlyPoolImageReferenceOffer
    imageReferenceSku: batchAccountCpuOnlyPoolImageReferenceSku
    imageReferenceVersion: batchAccountCpuOnlyPoolImageReferenceVersion
    startTaskCommandLine: batchAccountCpuOnlyPoolStartTaskCommandLine
    azureFileShareConfigurationAccountKey: mountAccountKey
    azureFileShareConfigurationAccountName: mountAccountName
    azureFileShareConfigurationAzureFileUrl: mountFileUrl
    azureFileShareConfigurationMountOptions: '-o vers=3.0,dir_mode=0777,file_mode=0777,sec=ntlmssp'
    azureFileShareConfigurationRelativeMountPath: 'S'
    batchPoolExists: batchAccountPoolCheck.outputs.batchPoolExists
  }
  dependsOn: [
    batchAccountAutoStorageAccount
    batchAccount
    batchAccountPoolCheck
  ]
}
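
// The pool above mounts the data module's Azure Files share on every node via
// SMB 3.0 at relative mount path 'S'; the wide-open dir/file modes keep the
// sample simple and are worth tightening for production use.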

module acr '../modules/acr.bicep' = {
  name: '${namingPrefix}-acr'
  params: {
    environmentName: environmentTag
    location: location
    acrName: acrNameVar
    acrSku: acrSku
  }
}

module acrCredentials '../modules/acr.credentials.to.keyvault.bicep' = {
  name: '${namingPrefix}-acr-credentials'
  params: {
    environmentName: environmentTag
    acrName: acrNameVar
    keyVaultName: keyvaultNameVar
  }
  dependsOn: [
    keyVault
    acr
  ]
}

module batchAccountCredentials '../modules/batch.account.to.keyvault.bicep' = {
  name: '${namingPrefix}-batch-account-credentials'
  params: {
    environmentName: environmentTag
    batchAccoutName: toLower(batchAccountNameVar)
    keyVaultName: pipelineLinkedSvcKeyVaultName
    keyVaultResourceGroup: pipelineResourceGroupName
  }
  dependsOn: [
    keyVault
    batchAccount
  ]
}

module batchDiagnosticSettings '../modules/batch-diagnostic-settings.bicep' = {
  name: '${namingPrefix}-batch-diag-settings'
  params: {
    batchAccountName: batchAccountNameVar
    logs: [for category in batchLogsDiagCategories: {
      category: null
      categoryGroup: category
      enabled: true
      retentionPolicy: {
        days: 30
        enabled: false
      }
    }]
    metrics: [for category in batchMetricsDiagCategories: {
      category: category
      enabled: true
      retentionPolicy: {
        days: 30
        enabled: false
      }
    }]
    workspaceId: logAnalyticsWorkspaceId
  }
  dependsOn: [
    batchAccount
  ]
}
@@ -1,250 +1,250 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

// List of required parameters
param environmentCode string
param environmentTag string
param projectName string
param location string

// Name parameters for infrastructure resources
param synapseResourceGroupName string = ''
param keyvaultName string = ''
param synapseHnsStorageAccountName string = ''
param synapseWorkspaceName string = ''
param synapseSparkPoolName string = ''
param synapseSqlAdminLoginPassword string = ''

// Parameters with default values for Keyvault
param keyvaultSkuName string = 'Standard'
param objIdForKeyvaultAccessPolicyPolicy string = ''
param keyvaultCertPermission array = [
  'All'
]
param keyvaultKeyPermission array = [
  'All'
]
param keyvaultSecretPermission array = [
  'All'
]
param keyvaultStoragePermission array = [
  'All'
]
param keyvaultUsePublicIp bool = true
param keyvaultPublicNetworkAccess bool = true
param keyvaultEnabledForDeployment bool = true
param keyvaultEnabledForDiskEncryption bool = true
param keyvaultEnabledForTemplateDeployment bool = true
param keyvaultEnablePurgeProtection bool = true
param keyvaultEnableRbacAuthorization bool = false
param keyvaultEnableSoftDelete bool = true
param keyvaultSoftDeleteRetentionInDays int = 7

// Parameters with default values for Synapse Workspace
param synapseHnsStorageAccountFileSystem string = 'users'
param synapseSqlAdminLogin string = 'sqladmin'
param synapseFirewallAllowEndIP string = '255.255.255.255'
param synapseFirewallAllowStartIP string = '0.0.0.0'
param synapseAutoPauseEnabled bool = true
param synapseAutoPauseDelayInMinutes int = 15
param synapseAutoScaleEnabled bool = true
param synapseAutoScaleMinNodeCount int = 1
param synapseAutoScaleMaxNodeCount int = 5
param synapseCacheSize int = 0
param synapseDynamicExecutorAllocationEnabled bool = false
param synapseIsComputeIsolationEnabled bool = false
param synapseNodeCount int = 0
param synapseNodeSize string = 'Medium'
param synapseNodeSizeFamily string = 'MemoryOptimized'
param synapseSparkVersion string = '3.1'
param synapseGitRepoAccountName string = ''
param synapseGitRepoCollaborationBranch string = 'main'
param synapseGitRepoHostName string = ''
param synapseGitRepoLastCommitId string = ''
param synapseGitRepoVstsProjectName string = ''
param synapseGitRepoRepositoryName string = ''
param synapseGitRepoRootFolder string = '.'
param synapseGitRepoVstsTenantId string = subscription().tenantId
param synapseGitRepoType string = ''

param synapseCategories array = [
  'SynapseRbacOperations'
  'GatewayApiRequests'
  'SQLSecurityAuditEvents'
  'BuiltinSqlReqsEnded'
  'IntegrationPipelineRuns'
  'IntegrationActivityRuns'
  'IntegrationTriggerRuns'
]

// Parameters with default values for Synapse and its Managed Identity
param synapseMIStorageAccountRoles array = [
  'ba92f5b4-2d11-453d-a403-e96b0029c9fe'
  '974c5e8b-45b9-4653-ba55-5f855dd0fb88'
]
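
// The GUIDs above are built-in Azure role definition IDs; they correspond to
// Storage Blob Data Contributor and Storage Queue Data Contributor
// respectively (well-known IDs, noted here for readability).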

param logAnalyticsWorkspaceId string

var namingPrefix = '${environmentCode}-${projectName}'
var synapseResourceGroupNameVar = empty(synapseResourceGroupName) ? '${namingPrefix}-rg' : synapseResourceGroupName
var nameSuffix = substring(uniqueString(synapseResourceGroupNameVar), 0, 6)
var keyvaultNameVar = empty(keyvaultName) ? '${namingPrefix}-kv' : keyvaultName
var synapseHnsStorageAccountNameVar = empty(synapseHnsStorageAccountName) ? 'synhns${nameSuffix}' : synapseHnsStorageAccountName
var synapseWorkspaceNameVar = empty(synapseWorkspaceName) ? '${namingPrefix}-syn-ws' : synapseWorkspaceName
var synapseSparkPoolNameVar = empty(synapseSparkPoolName) ? 'pool${nameSuffix}' : synapseSparkPoolName
var synapseSqlAdminLoginPasswordVar = empty(synapseSqlAdminLoginPassword) ? 'SynapsePassword!${nameSuffix}' : synapseSqlAdminLoginPassword
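
// Note: the fallback SQL admin password above is deterministic (uniqueString
// over the resource group name), which is convenient for a sample but
// predictable; pass synapseSqlAdminLoginPassword explicitly for real use.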

module keyVault '../modules/akv.bicep' = {
  name: '${namingPrefix}-akv'
  params: {
    environmentName: environmentTag
    keyVaultName: keyvaultNameVar
    location: location
    skuName: keyvaultSkuName
    objIdForAccessPolicyPolicy: objIdForKeyvaultAccessPolicyPolicy
    certPermission: keyvaultCertPermission
    keyPermission: keyvaultKeyPermission
    secretPermission: keyvaultSecretPermission
    storagePermission: keyvaultStoragePermission
    usePublicIp: keyvaultUsePublicIp
    publicNetworkAccess: keyvaultPublicNetworkAccess
    enabledForDeployment: keyvaultEnabledForDeployment
    enabledForDiskEncryption: keyvaultEnabledForDiskEncryption
    enabledForTemplateDeployment: keyvaultEnabledForTemplateDeployment
    enablePurgeProtection: keyvaultEnablePurgeProtection
    enableRbacAuthorization: keyvaultEnableRbacAuthorization
    enableSoftDelete: keyvaultEnableSoftDelete
    softDeleteRetentionInDays: keyvaultSoftDeleteRetentionInDays
    usage: 'linkedService'
  }
}

module synapseHnsStorageAccount '../modules/storage.hns.bicep' = {
  name: '${namingPrefix}-hns-storage'
  params: {
    storageAccountName: synapseHnsStorageAccountNameVar
    environmentName: environmentTag
    location: location
    storeType: 'synapse'
  }
}

module synapseWorkspace '../modules/synapse.workspace.bicep' = {
  name: '${namingPrefix}-workspace'
  params: {
    environmentName: environmentTag
    location: location
    synapseWorkspaceName: synapseWorkspaceNameVar
    hnsStorageAccountName: synapseHnsStorageAccountNameVar
    hnsStorageFileSystem: synapseHnsStorageAccountFileSystem
    sqlAdminLogin: synapseSqlAdminLogin
    sqlAdminLoginPassword: synapseSqlAdminLoginPasswordVar
    firewallAllowEndIP: synapseFirewallAllowEndIP
    firewallAllowStartIP: synapseFirewallAllowStartIP
    keyVaultName: keyvaultNameVar
    gitRepoAccountName: synapseGitRepoAccountName
    gitRepoCollaborationBranch: synapseGitRepoCollaborationBranch
    gitRepoHostName: synapseGitRepoHostName
    gitRepoLastCommitId: synapseGitRepoLastCommitId
    gitRepoVstsProjectName: synapseGitRepoVstsProjectName
    gitRepoRepositoryName: synapseGitRepoRepositoryName
    gitRepoRootFolder: synapseGitRepoRootFolder
    gitRepoVstsTenantId: synapseGitRepoVstsTenantId
    gitRepoType: synapseGitRepoType
  }
  dependsOn: [
    synapseHnsStorageAccount
    keyVault
  ]
}

module synapseIdentityForStorageAccess '../modules/storage-role-assignment.bicep' = [for (role, roleIndex) in synapseMIStorageAccountRoles: {
  name: '${namingPrefix}-synapse-id-kv-${roleIndex}'
  params: {
    resourceName: synapseHnsStorageAccountNameVar
    principalId: synapseWorkspace.outputs.synapseMIPrincipalId
    roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${role}'
  }
  dependsOn: [
    synapseHnsStorageAccount
    synapseWorkspace
  ]
}]

module synapseIdentityKeyVaultAccess '../modules/akv.policy.bicep' = {
  name: '${namingPrefix}-synapse-id-kv'
  params: {
    keyVaultName: keyvaultNameVar
    policyOps: 'add'
    objIdForPolicy: synapseWorkspace.outputs.synapseMIPrincipalId
  }
  dependsOn: [
    synapseWorkspace
  ]
}

module synapseSparkPool '../modules/synapse.sparkpool.bicep' = {
  name: '${namingPrefix}-sparkpool'
  params: {
    environmentName: environmentTag
    location: location
    synapseWorkspaceName: synapseWorkspaceNameVar
    sparkPoolName: synapseSparkPoolNameVar
    autoPauseEnabled: synapseAutoPauseEnabled
    autoPauseDelayInMinutes: synapseAutoPauseDelayInMinutes
    autoScaleEnabled: synapseAutoScaleEnabled
    autoScaleMinNodeCount: synapseAutoScaleMinNodeCount
    autoScaleMaxNodeCount: synapseAutoScaleMaxNodeCount
    cacheSize: synapseCacheSize
    dynamicExecutorAllocationEnabled: synapseDynamicExecutorAllocationEnabled
    isComputeIsolationEnabled: synapseIsComputeIsolationEnabled
    nodeCount: synapseNodeCount
    nodeSize: synapseNodeSize
    nodeSizeFamily: synapseNodeSizeFamily
    sparkVersion: synapseSparkVersion
  }
  dependsOn: [
    synapseHnsStorageAccount
    synapseWorkspace
  ]
}

module synapseDiagnosticSettings '../modules/synapse-diagnostic-settings.bicep' = {
  name: '${namingPrefix}-synapse-diag-settings'
  params: {
    synapseWorkspaceName: synapseWorkspaceNameVar
    logs: [for category in synapseCategories: {
      category: category
      categoryGroup: null
      enabled: true
      retentionPolicy: {
        days: 30
        enabled: false
      }
    }]
    workspaceId: logAnalyticsWorkspaceId
  }
  dependsOn: [
    synapseWorkspace
  ]
}

module pkgDataStorageAccountCredentials '../modules/storage.credentials.to.keyvault.bicep' = {
  name: '${namingPrefix}-pkgs-storage-credentials'
  params: {
    environmentName: environmentTag
    storageAccountName: synapseHnsStorageAccountNameVar
    keyVaultName: keyvaultNameVar
    keyVaultResourceGroup: resourceGroup().name
    secretNamePrefix: 'Packages'
  }
  dependsOn: [
    keyVault
    synapseHnsStorageAccount
  ]
}

output synapseMIPrincipalId string = synapseWorkspace.outputs.synapseMIPrincipalId
@@ -1,174 +1,174 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

targetScope = 'subscription'

@description('Location for all the resources to be deployed')
param location string

@minLength(3)
@maxLength(8)
@description('Prefix to be used for naming all the resources in the deployment')
param environmentCode string

@description('Environment will be used as Tag on the resource group')
param environment string

@description('Used for naming of the network resource group and its resources')
param networkModulePrefix string = 'network'

@description('Used for naming of the data resource group and its resources')
param dataModulePrefix string = 'data'

@description('Used for naming of the monitor resource group and its resources')
param monitorModulePrefix string = 'monitor'

@description('Used for naming of the pipeline resource group and its resources')
param pipelineModulePrefix string = 'pipeline'

@description('Used for naming of the orchestration resource group and its resources')
param orchestrationModulePrefix string = 'orc'

var networkResourceGroupName = '${environmentCode}-${networkModulePrefix}-rg'
var dataResourceGroupName = '${environmentCode}-${dataModulePrefix}-rg'
var monitorResourceGroupName = '${environmentCode}-${monitorModulePrefix}-rg'
var pipelineResourceGroupName = '${environmentCode}-${pipelineModulePrefix}-rg'
var orchestrationResourceGroupName = '${environmentCode}-${orchestrationModulePrefix}-rg'

module networkResourceGroup 'modules/resourcegroup.bicep' = {
  name: networkResourceGroupName
  scope: subscription()
  params: {
    environmentName: environment
    resourceGroupName: networkResourceGroupName
    resourceGroupLocation: location
  }
}

module networkModule 'groups/networking.bicep' = {
  name: '${networkModulePrefix}-module'
  scope: resourceGroup(networkResourceGroup.name)
  params: {
    projectName: networkModulePrefix
    Location: location
    environmentCode: environmentCode
    environmentTag: environment
    virtualNetworkName: '${environmentCode}-vnet'
  }
  dependsOn: [
    networkResourceGroup
  ]
}

module monitorResourceGroup 'modules/resourcegroup.bicep' = {
  name: monitorResourceGroupName
  scope: subscription()
  params: {
    environmentName: environment
    resourceGroupName: monitorResourceGroupName
    resourceGroupLocation: location
  }
}

module monitorModule 'groups/monitoring.bicep' = {
  name: '${monitorModulePrefix}-module'
  scope: resourceGroup(monitorResourceGroup.name)
  params: {
    projectName: monitorModulePrefix
    location: location
    environmentCode: environmentCode
    environmentTag: environment
  }
  dependsOn: [
    networkModule
  ]
}

module pipelineResourceGroup 'modules/resourcegroup.bicep' = {
  name: pipelineResourceGroupName
  scope: subscription()
  params: {
    environmentName: environment
    resourceGroupName: pipelineResourceGroupName
    resourceGroupLocation: location
  }
}

module pipelineModule 'groups/pipeline.bicep' = {
  name: '${pipelineModulePrefix}-module'
  scope: resourceGroup(pipelineResourceGroup.name)
  params: {
    projectName: pipelineModulePrefix
    location: location
    environmentCode: environmentCode
    environmentTag: environment
    logAnalyticsWorkspaceId: monitorModule.outputs.workspaceId
  }
  dependsOn: [
    networkModule
    monitorModule
  ]
}

module dataResourceGroup 'modules/resourcegroup.bicep' = {
  name: dataResourceGroupName
  scope: subscription()
  params: {
    environmentName: environment
    resourceGroupName: dataResourceGroupName
    resourceGroupLocation: location
  }
}

module dataModule 'groups/data.bicep' = {
  name: '${dataModulePrefix}-module'
  scope: resourceGroup(dataResourceGroup.name)
  params: {
    projectName: dataModulePrefix
    location: location
    environmentCode: environmentCode
    environmentTag: environment
    synapseMIPrincipalId: pipelineModule.outputs.synapseMIPrincipalId
    pipelineResourceGroupName: pipelineResourceGroup.name
    pipelineLinkedSvcKeyVaultName: '${environmentCode}-${pipelineModulePrefix}-kv'
  }
  dependsOn: [
    networkModule
    pipelineModule
  ]
}

module orchestrationResourceGroup 'modules/resourcegroup.bicep' = {
  name: orchestrationResourceGroupName
  scope: subscription()
  params: {
    environmentName: environment
    resourceGroupName: orchestrationResourceGroupName
    resourceGroupLocation: location
  }
}

module orchestrationModule 'groups/orchestration.bicep' = {
  name: '${orchestrationModulePrefix}-module'
  scope: resourceGroup(orchestrationResourceGroup.name)
  params: {
    projectName: orchestrationModulePrefix
    location: location
    environmentCode: environmentCode
    environmentTag: environment
    logAnalyticsWorkspaceId: monitorModule.outputs.workspaceId
    mountAccountKey: dataModule.outputs.rawStoragePrimaryKey
    mountAccountName: dataModule.outputs.rawStorageAccountName
    mountFileUrl: '${dataModule.outputs.rawStorageFileEndpointUri}volume-a'
    pipelineResourceGroupName: pipelineResourceGroup.name
    pipelineLinkedSvcKeyVaultName: '${environmentCode}-${pipelineModulePrefix}-kv'
    synapseMIPrincipalId: pipelineModule.outputs.synapseMIPrincipalId
  }
  dependsOn: [
    pipelineModule
    networkModule
    monitorModule
  ]
}
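
For reference, the subscription-scoped template above can be deployed directly with `az deployment sub create`; a minimal sketch, assuming the entry point is saved as main.bicep and using placeholder parameter values (install.sh presumably wraps an equivalent invocation):

```bash
# Deploy the subscription-scoped entry point; environmentCode must be
# 3-8 characters per the @minLength/@maxLength constraints above.
az deployment sub create \
  --location <region> \
  --template-file main.bicep \
  --parameters location=<region> environmentCode=<prefix> environment=<tag>
```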
@@ -1,25 +1,25 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param location string = resourceGroup().location
param acrName string
param acrSku string = 'Standard'
param adminUserEnabled bool = true
param publicNetworkAccess bool = true
param zoneRedundancy bool = false
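
// Admin user is enabled by default so the registry username/password can be
// read via listCredentials() and pushed to Key Vault (see
// acr.credentials.to.keyvault.bicep below); disable it if authenticating
// with AAD tokens instead.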
resource containerRegistry 'Microsoft.ContainerRegistry/registries@2021-12-01-preview' = {
  name: acrName
  location: location
  tags: {
    environment: environmentName
  }
  sku: {
    name: acrSku
  }
  properties: {
    adminUserEnabled: adminUserEnabled
    publicNetworkAccess: (publicNetworkAccess) ? 'Enabled' : 'Disabled'
    zoneRedundancy: (zoneRedundancy) ? 'Enabled' : 'Disabled'
  }
}
@@ -1,44 +1,44 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param acrName string
param keyVaultName string
param containerRegistryLoginServerSecretName string = 'RegistryServer'
param containerRegistryUsernameSecretName string = 'RegistryUserName'
param containerRegistryPasswordSecretName string = 'RegistryPassword'
param utcValue string = utcNow()
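
// utcValue salts the nested deployment names below so each run produces
// uniquely named deployments of akv.secrets.bicep, presumably to force the
// secret values to be refreshed on every deployment.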

resource containerRepository 'Microsoft.ContainerRegistry/registries@2021-12-01-preview' existing = {
  name: acrName
}

module acrLoginServerNameSecret './akv.secrets.bicep' = {
  name: 'acr-login-server-name-${utcValue}'
  params: {
    environmentName: environmentName
    keyVaultName: keyVaultName
    secretName: containerRegistryLoginServerSecretName
    secretValue: containerRepository.properties.loginServer
  }
}

module acrUsernameSecret './akv.secrets.bicep' = {
  name: 'acr-username-${utcValue}'
  params: {
    environmentName: environmentName
    keyVaultName: keyVaultName
    secretName: containerRegistryUsernameSecretName
    secretValue: containerRepository.listCredentials().username
  }
}

module acrPasswordSecret './akv.secrets.bicep' = {
  name: 'acr-password-${utcValue}'
  params: {
    environmentName: environmentName
    keyVaultName: keyVaultName
    secretName: containerRegistryPasswordSecretName
    secretValue: containerRepository.listCredentials().passwords[0].value
  }
}
@@ -1,76 +1,76 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param keyVaultName string
param location string = resourceGroup().location
param skuName string = 'Standard'
param objIdForAccessPolicyPolicy string = ''
param usage string = 'general'

param certPermission array = [
  'get'
]
param keyPermission array = [
  'get'
]
param secretPermission array = [
  'get'
]
param storagePermission array = [
  'get'
]
param usePublicIp bool = true
param publicNetworkAccess bool = true
param enabledForDeployment bool = true
param enabledForDiskEncryption bool = true
param enabledForTemplateDeployment bool = true
param enablePurgeProtection bool = true
param enableRbacAuthorization bool = false
param enableSoftDelete bool = true
param softDeleteRetentionInDays int = 7

resource akv 'Microsoft.KeyVault/vaults@2021-11-01-preview' = {
  name: keyVaultName
  location: location
  tags: {
    environment: environmentName
    usage: usage
  }
  properties: {
    accessPolicies: [
      {
        objectId: !empty(objIdForAccessPolicyPolicy) ? objIdForAccessPolicyPolicy : '${reference(resourceGroup().id, '2021-04-01', 'Full').subscriptionId}'
        permissions: {
          certificates: certPermission
          keys: keyPermission
          secrets: secretPermission
          storage: storagePermission
        }
        tenantId: subscription().tenantId
      }
    ]
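    // Note: when objIdForAccessPolicyPolicy is empty, objectId above falls
    // back to the subscription ID, which is not an AAD object ID; that entry
    // appears to act as an inert placeholder, so callers should pass a real
    // principal object ID to grant any access.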

    enabledForDeployment: enabledForDeployment
    enabledForDiskEncryption: enabledForDiskEncryption
    enabledForTemplateDeployment: enabledForTemplateDeployment
    enablePurgeProtection: enablePurgeProtection
    enableRbacAuthorization: enableRbacAuthorization
    enableSoftDelete: enableSoftDelete
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: (usePublicIp) ? 'Allow' : 'Deny'
      ipRules: []
      virtualNetworkRules: []
    }
    publicNetworkAccess: (publicNetworkAccess) ? 'Enabled' : 'Disabled'
    sku: {
      family: 'A'
      name: skuName
    }
    softDeleteRetentionInDays: softDeleteRetentionInDays
    tenantId: subscription().tenantId
  }
}

output vaultUri string = akv.properties.vaultUri
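Every permission array above defaults to `get` only, so callers needing broader rights must pass their own arrays. Note also that when `objIdForAccessPolicyPolicy` is empty, the template falls back to the subscription ID, which is not a valid AAD object ID, so real deployments should always supply a principal. A minimal invocation sketch; the module file name and values here are assumptions:

```bicep
// Hypothetical invocation of the vault template above.
module keyVault './akv.bicep' = {
  name: 'deploy-akv'
  params: {
    environmentName: 'dev'
    keyVaultName: 'kvorbitaldev'
    // Pass a real AAD object ID; the template's fallback value is not one.
    objIdForAccessPolicyPolicy: '00000000-0000-0000-0000-000000000000'
    secretPermission: [
      'get'
      'list'
      'set'
    ]
  }
}
```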
@@ -1,66 +1,66 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param keyVaultName string
param policyOps string
param objIdForPolicy string = ''
param certPermission array = [
  'Get'
  'List'
  'Update'
  'Create'
  'Import'
  'Delete'
  'Recover'
  'Backup'
  'Restore'
  'ManageContacts'
  'ManageIssuers'
  'GetIssuers'
  'ListIssuers'
  'SetIssuers'
  'DeleteIssuers'
]
param keyPermission array = [
  'Get'
  'List'
  'Update'
  'Create'
  'Import'
  'Delete'
  'Recover'
  'Backup'
  'Restore'
]
param secretPermission array = [
  'Get'
  'List'
  'Set'
  'Delete'
  'Recover'
  'Backup'
  'Restore'
]

resource akv 'Microsoft.KeyVault/vaults@2021-11-01-preview' existing = {
  name: keyVaultName
}

resource akvAccessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2021-11-01-preview' = {
  name: policyOps
  parent: akv
  properties: {
    accessPolicies: [
      {
        applicationId: null
        objectId: objIdForPolicy
        permissions: {
          certificates: certPermission
          keys: keyPermission
          secrets: secretPermission
        }
        tenantId: subscription().tenantId
      }
    ]
  }
}
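`policyOps` becomes the name of the `vaults/accessPolicies` child resource, and that API only accepts `add`, `replace`, or `remove`, so callers effectively choose the operation through the name. A sketch of an invocation that grants a principal the default secret permissions; the file name and object ID are illustrative:

```bicep
// Hypothetical invocation of the access-policy template above.
module grantSecretsAccess './akv.policy.bicep' = {
  name: 'grant-secrets-access'
  params: {
    keyVaultName: 'kvorbitaldev'
    policyOps: 'add'                                       // one of: add | replace | remove
    objIdForPolicy: '11111111-1111-1111-1111-111111111111' // principal's AAD object ID
  }
}
```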
@@ -1,22 +1,22 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param keyVaultName string
param secretName string
param secretValue string

resource akv 'Microsoft.KeyVault/vaults@2021-11-01-preview' existing = {
  name: keyVaultName
}

resource akvSecret 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = {
  name: secretName
  parent: akv
  properties: {
    value: secretValue
  }
  tags: {
    environment: environmentName
  }
}
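This is the leaf module that the other templates in this commit invoke as `./akv.secrets.bicep`. A direct invocation looks like the following; the values are illustrative, and real callers feed `secretValue` from `listKeys()`/`listCredentials()` expressions as shown elsewhere in this commit:

```bicep
module sampleSecret './akv.secrets.bicep' = {
  name: 'sample-secret'
  params: {
    environmentName: 'dev'
    keyVaultName: 'kvorbitaldev'
    secretName: 'SampleSecret'
    secretValue: 'not-a-real-secret' // illustrative placeholder
  }
}
```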
@@ -1,22 +1,22 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param applicationInsightsName string
param location string
param workspaceId string
param appInsightsKind string = 'web'
param appInsightsType string = 'web'

resource applicationInsights 'Microsoft.Insights/components@2020-02-02-preview' = {
  name: applicationInsightsName
  location: location
  tags: {
    environment: environmentName
  }
  kind: appInsightsKind
  properties: {
    Application_Type: appInsightsType
    WorkspaceResourceId: workspaceId
  }
}
@@ -1,33 +1,33 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param batchAccountName string

param logs array
param metrics array
param storageAccountId string = ''
param workspaceId string = ''
param serviceBusId string = ''

param logAnalyticsDestinationType string = ''
param eventHubAuthorizationRuleId string = ''
param eventHubName string = ''

resource existingResource 'Microsoft.Batch/batchAccounts@2021-06-01' existing = {
  name: batchAccountName
}

resource symbolicname 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: '${existingResource.name}-diag'
  scope: existingResource
  properties: {
    eventHubAuthorizationRuleId: empty(eventHubAuthorizationRuleId) ? null : eventHubAuthorizationRuleId
    eventHubName: empty(eventHubName) ? null : eventHubName
    logAnalyticsDestinationType: empty(logAnalyticsDestinationType) ? null : logAnalyticsDestinationType
    logs: logs
    metrics: metrics
    serviceBusRuleId: empty(serviceBusId) ? null : serviceBusId
    storageAccountId: empty(storageAccountId) ? null : storageAccountId
    workspaceId: empty(workspaceId) ? null : workspaceId
  }
}
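`logs` and `metrics` are passed straight through to the diagnostic-settings resource, so they must already have the shape that API expects. A sketch of plausible values; `ServiceLog` and `AllMetrics` are the usual category names for Azure Batch, but verify them against your account before relying on this:

```bicep
// Assumed array shapes for the logs/metrics parameters above.
var batchDiagnosticLogs = [
  {
    category: 'ServiceLog'
    enabled: true
  }
]
var batchDiagnosticMetrics = [
  {
    category: 'AllMetrics'
    enabled: true
  }
]
```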
@@ -1,114 +1,114 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param location string = resourceGroup().location
param environmentName string
param batchAccountName string
param userManagedIdentityId string
param type string = 'batch'
param allowedAuthenticationModes array = [
  'AAD'
  'SharedKey'
  'TaskAuthenticationToken'
]
param keyVaultName string = ''
param autoStorageAuthenticationMode string = 'StorageKeys'
param autoStorageAccountName string
param poolAllocationMode string = 'BatchService'
param publicNetworkAccess bool = true
param assignRoleToUserManagedIdentity string = 'Owner'
param userManagedIdentityPrincipalId string

param objIdForPolicy string = 'f520d84c-3fd3-4cc8-88d4-2ed25b00d27a'

param certPermission array = []
param keyPermission array = []
param secretPermission array = [
  'Get'
  'List'
  'Set'
  'Delete'
  'Recover'
]

var policyOpsVar = 'add'

resource keyVault 'Microsoft.KeyVault/vaults@2021-11-01-preview' existing = if (toLower(poolAllocationMode) == 'usersubscription') {
  name: keyVaultName
}

resource autoStorageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' existing = {
  name: autoStorageAccountName
}

resource akvAccessPolicy 'Microsoft.KeyVault/vaults/accessPolicies@2021-11-01-preview' = {
  name: policyOpsVar
  parent: keyVault
  properties: {
    accessPolicies: [
      {
        applicationId: null
        objectId: objIdForPolicy
        permissions: {
          certificates: certPermission
          keys: keyPermission
          secrets: secretPermission
        }
        tenantId: subscription().tenantId
      }
    ]
  }
}

resource batchAccount 'Microsoft.Batch/batchAccounts@2021-06-01' = {
  name: batchAccountName
  location: location
  tags: {
    environment: environmentName
    type: type
  }
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${userManagedIdentityId}': {}
    }
  }
  properties: {
    allowedAuthenticationModes: allowedAuthenticationModes
    autoStorage: toLower(poolAllocationMode) == 'usersubscription' ? null : {
      authenticationMode: autoStorageAuthenticationMode
      storageAccountId: autoStorageAccount.id
    }
    encryption: {
      keySource: 'Microsoft.Batch'
    }
    poolAllocationMode: poolAllocationMode
    publicNetworkAccess: (publicNetworkAccess) ? 'Enabled' : 'Disabled'
    keyVaultReference: toLower(poolAllocationMode) == 'usersubscription' ? {
      id: keyVault.id
      url: keyVault.properties.vaultUri
    } : null
  }
  dependsOn: [
    akvAccessPolicy
  ]
}

var role = {
  owner: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635'
  contributor: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c'
  reader: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7'
}

resource assignRole 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
  name: guid(batchAccount.id, userManagedIdentityPrincipalId, role[toLower(assignRoleToUserManagedIdentity)])
  scope: batchAccount
  properties: {
    principalId: userManagedIdentityPrincipalId
    roleDefinitionId: role[toLower(assignRoleToUserManagedIdentity)]
  }
}

output batchAccountId string = batchAccount.id
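`assignRoleToUserManagedIdentity` is lower-cased and used as a key into the `role` map, so only `Owner`, `Contributor`, or `Reader` (in any casing) are valid; any other value fails when the template is evaluated. A sketch of wiring it to the managed-identity module that appears later in this commit; the module file names and literal values are assumptions:

```bicep
// Hypothetical wiring; 'uami' is an instance of the
// managed-identity template later in this commit.
module batch './batch.account.bicep' = {
  name: 'deploy-batch-account'
  params: {
    environmentName: 'dev'
    batchAccountName: 'batchorbitaldev'
    autoStorageAccountName: 'stbatchorbitaldev'
    userManagedIdentityId: uami.outputs.uamiId
    userManagedIdentityPrincipalId: uami.outputs.uamiPrincipalId
    assignRoleToUserManagedIdentity: 'Contributor' // instead of the default Owner
  }
}
```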
@@ -1,43 +1,43 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param batchAccountName string
param batchPoolName string
param userManagedIdentityName string
param userManagedIdentityResourcegroupName string
param location string = resourceGroup().location
param utcValue string = utcNow()

resource queryuserManagedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' existing = {
  scope: resourceGroup(userManagedIdentityResourcegroupName)
  name: userManagedIdentityName
}

resource runPowerShellInlineWithOutput 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: 'runPowerShellInlineWithOutput${utcValue}'
  location: location
  kind: 'AzurePowerShell'
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${queryuserManagedIdentity.id}': {}
    }
  }
  properties: {
    forceUpdateTag: utcValue
    azPowerShellVersion: '6.4'
    scriptContent: '''
      param([string] $batchAccountName, [string] $batchPoolName)
      Write-Output $output
      $DeploymentScriptOutputs = @{}
      $batchContext = Get-AzBatchAccount -AccountName $batchAccountName
      $DeploymentScriptOutputs = Get-AzBatchPool -Id $batchPoolName -BatchContext $batchContext
    '''
    arguments: '-batchAccountName ${batchAccountName} -batchPoolName ${batchPoolName}'
    timeout: 'PT1H'
    cleanupPreference: 'OnSuccess'
    retentionInterval: 'P1D'
  }
}

output batchPoolExists bool = contains(runPowerShellInlineWithOutput.properties, 'outputs')
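The deployment script publishes whatever `Get-AzBatchPool` returns through `$DeploymentScriptOutputs`, and the module's single output just tests whether any outputs were produced, so the identity it runs under needs read access to the Batch account. The intended wiring, sketched below with hypothetical file names and values, feeds that boolean into the pool template that follows so an existing pool is left untouched:

```bicep
// Sketch: probe for the pool first, create it only if absent.
module checkPool './batch.pool.exists.bicep' = {
  name: 'check-batch-pool'
  params: {
    batchAccountName: 'batchorbitaldev'
    batchPoolName: 'orbital-pool'
    userManagedIdentityName: 'uami-orbital'
    userManagedIdentityResourcegroupName: 'rg-orbital-dev'
  }
}

module pool './batch.account.pool.bicep' = {
  name: 'deploy-batch-pool'
  params: {
    batchAccountName: 'batchorbitaldev'
    batchAccountPoolName: 'orbital-pool'
    batchPoolExists: checkPool.outputs.batchPoolExists
    vmSize: 'Standard_D4s_v3'
    imageReferenceOffer: 'ubuntu-server-container'
    imageReferenceSku: '20-04-lts'
    imageReferenceVersion: 'latest'
  }
}
```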
@@ -1,121 +1,121 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param batchAccountName string
param batchAccountPoolName string
param batchPoolExists bool = false
param vmSize string

// Pool configuration
param imageReferencePublisher string = 'microsoft-azure-batch'
param imageReferenceOffer string
param imageReferenceSku string
param imageReferenceVersion string
param containerImageNames array = []
param containerRegistryPassword string = ''
param containerRegistryServer string = ''
param containerRegistryUsername string = ''
param nodeAgentSkuId string = 'batch.node.ubuntu 20.04'
param nodePlacementConfigurationPolicy string = ''
param batchAccountPoolDisplayName string = batchAccountPoolName
param interNodeCommunication bool = false

// Mount options
param azureFileShareConfigurationAccountKey string = ''
param azureFileShareConfigurationAccountName string = ''
param azureFileShareConfigurationAzureFileUrl string = ''
param azureFileShareConfigurationMountOptions string = ''
param azureFileShareConfigurationRelativeMountPath string = ''
param publicIPAddressConfigurationProvision string = ''

param fixedScaleResizeTimeout string = 'PT15M'
param fixedScaleTargetDedicatedNodes int = 1
param fixedScaleTargetLowPriorityNodes int = 0
param startTaskCommandLine string = ''
param startTaskEnvironmentSettings array = []
param startTaskMaxTaskRetryCount int = 0
param startTaskAutoUserElevationLevel string = 'Admin'
param startTaskautoUserScope string = 'Pool'
param startTaskWaitForSuccess bool = true
param taskSchedulingPolicy string = 'Pack'
param taskSlotsPerNode int = 1

resource batchAccount 'Microsoft.Batch/batchAccounts@2021-06-01' existing = {
  name: batchAccountName
}

resource batchAccountPool 'Microsoft.Batch/batchAccounts/pools@2021-06-01' = if (!batchPoolExists) {
  name: batchAccountPoolName
  parent: batchAccount
  properties: {
    vmSize: vmSize
    deploymentConfiguration: {
      virtualMachineConfiguration: {
        imageReference: {
          offer: imageReferenceOffer
          publisher: imageReferencePublisher
          sku: imageReferenceSku
          version: imageReferenceVersion
        }
        containerConfiguration: {
          containerImageNames: containerImageNames
          containerRegistries: (empty(containerRegistryServer)) ? null : [
            {
              password: containerRegistryPassword
              registryServer: containerRegistryServer
              username: containerRegistryUsername
            }
          ]
          type: 'DockerCompatible'
        }
        nodeAgentSkuId: nodeAgentSkuId
        nodePlacementConfiguration: (empty(nodePlacementConfigurationPolicy)) ? {} : {
          policy: nodePlacementConfigurationPolicy
        }
      }
    }
    displayName: batchAccountPoolDisplayName
    interNodeCommunication: (interNodeCommunication) ? 'Enabled' : 'Disabled'

    mountConfiguration: (empty(azureFileShareConfigurationAccountName)) ? [] : [
      {
        azureFileShareConfiguration: {
          accountKey: azureFileShareConfigurationAccountKey
          accountName: azureFileShareConfigurationAccountName
          azureFileUrl: azureFileShareConfigurationAzureFileUrl
          mountOptions: azureFileShareConfigurationMountOptions
          relativeMountPath: azureFileShareConfigurationRelativeMountPath
        }
      }
    ]
    networkConfiguration: (empty(publicIPAddressConfigurationProvision)) ? {} : {
      publicIPAddressConfiguration: {
        provision: publicIPAddressConfigurationProvision
      }
    }
    scaleSettings: {
      fixedScale: {
        resizeTimeout: fixedScaleResizeTimeout
        targetDedicatedNodes: fixedScaleTargetDedicatedNodes
        targetLowPriorityNodes: fixedScaleTargetLowPriorityNodes
      }
    }
    startTask: (empty(startTaskCommandLine)) ? {} : {
      commandLine: startTaskCommandLine
      environmentSettings: startTaskEnvironmentSettings
      userIdentity: {
        autoUser: {
          elevationLevel: startTaskAutoUserElevationLevel
          scope: startTaskautoUserScope
        }
      }
      maxTaskRetryCount: startTaskMaxTaskRetryCount
      waitForSuccess: startTaskWaitForSuccess
    }
    taskSchedulingPolicy: {
      nodeFillType: taskSchedulingPolicy
    }
    taskSlotsPerNode: taskSlotsPerNode
  }
}
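Note that `containerRegistryPassword` is an ordinary string parameter rather than `@secure()`, so it can surface in deployment history, and it also cannot be bound directly with Key Vault's `getSecret()` (which requires a secure parameter). A caller that wants to keep the value out of logs can declare its own secure parameter and forward it; everything in this sketch is illustrative:

```bicep
// Hypothetical caller-side forwarding of the registry password.
@secure()
param registryPassword string

module poolWithContainers './batch.account.pool.bicep' = {
  name: 'deploy-pool-containers'
  params: {
    batchAccountName: 'batchorbitaldev'
    batchAccountPoolName: 'orbital-pool'
    vmSize: 'Standard_D4s_v3'
    imageReferenceOffer: 'ubuntu-server-container'
    imageReferenceSku: '20-04-lts'
    imageReferenceVersion: 'latest'
    containerRegistryServer: 'myorbitalacr.azurecr.io'
    containerRegistryUsername: 'myorbitalacr'
    containerRegistryPassword: registryPassword
    containerImageNames: [
      'myorbitalacr.azurecr.io/vision-model:latest'
    ]
  }
}
```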
@@ -1,26 +1,26 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param batchAccoutName string
param keyVaultName string
param keyVaultResourceGroup string
param secretNamePrefix string = 'Geospatial'
param utcValue string = utcNow()

var batchAccountNameSecretNameVar = '${secretNamePrefix}BatchAccountKey'

resource batchAccount 'Microsoft.Batch/batchAccounts@2021-06-01' existing = {
  name: batchAccoutName
}

module storageAccountNameSecret './akv.secrets.bicep' = {
  name: 'batch-account-key-${utcValue}'
  scope: resourceGroup(keyVaultResourceGroup)
  params: {
    environmentName: environmentName
    keyVaultName: keyVaultName
    secretName: batchAccountNameSecretNameVar
    secretValue: batchAccount.listKeys().primary
  }
}
@@ -1,18 +1,18 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param storageAccountName string
param shareName string
param accessTier string = 'TransactionOptimized'
param enabledProtocols string = 'SMB'

resource fileShare 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-08-01' = {
  name: '${storageAccountName}/default/${shareName}'
  properties: {
    accessTier: accessTier
    enabledProtocols: enabledProtocols
    metadata: {}
    shareQuota: 5120
  }
}
@@ -1,22 +1,22 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param workspaceName string
param location string
param sku string = 'pergb2018'

resource workspace 'Microsoft.OperationalInsights/workspaces@2020-10-01' = {
  name: workspaceName
  location: location
  tags: {
    environment: environmentName
  }
  properties: {
    sku: {
      name: sku
    }
  }
}

output workspaceId string = workspace.id
@@ -1,19 +1,19 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param location string = resourceGroup().location
param uamiName string
param environmentName string

resource uami 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
  name: uamiName
  location: location
  tags: {
    environment: environmentName
  }
}

output uamiId string = uami.id
output uamiPrincipalId string = uami.properties.principalId
output uamiClientId string = uami.properties.clientId
output uamiTenantId string = uami.properties.tenantId
@@ -1,19 +1,19 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

targetScope = 'subscription'

param environmentName string
param resourceGroupName string
param resourceGroupLocation string

resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: resourceGroupName
  location: resourceGroupLocation
  tags: {
    environment: environmentName
  }
}

output rgLocation string = rg.location
output rgId string = rg.id
@@ -1,22 +1,22 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param principalId string
param roleDefinitionId string

param resourceName string

param roleAssignmentId string = guid(principalId, roleDefinitionId, resourceName)

resource existingResource 'Microsoft.Storage/storageAccounts@2021-08-01' existing = {
  name: resourceName
}

resource symbolicname 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
  name: roleAssignmentId
  scope: existingResource
  properties: {
    principalId: principalId
    roleDefinitionId: roleDefinitionId
  }
}
@@ -1,40 +1,40 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param storageAccountName string
param location string = resourceGroup().location
param environmentName string
param storeType string = 'data'
param storageSku string = 'Standard_GRS'
param storageKind string = 'StorageV2'
param public_access string = 'Enabled'
param isHnsEnabled bool = false

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
  name: storageAccountName
  location: location
  tags: {
    environment: environmentName
    store: storeType
  }
  sku: {
    name: storageSku
  }
  kind: storageKind
  properties: {
    isHnsEnabled: isHnsEnabled
    accessTier: 'Hot'
    publicNetworkAccess: public_access
    networkAcls: {
      resourceAccessRules: []
      bypass: 'AzureServices'
      virtualNetworkRules: []
      ipRules: []
      defaultAction: (public_access == 'Enabled') ? 'Allow' : 'Deny'
    }
  }
}

output storageAccountId string = storageAccount.id
output fileEndpointUri string = storageAccount.properties.primaryEndpoints.file
output primaryKey string = storageAccount.listKeys().keys[0].value
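Exposing `primaryKey` as a template output means the key is readable from the deployment's history, which is why the companion module later in this commit copies it into Key Vault instead of passing it around. A minimal invocation; `storage.bicep` is the path the HNS wrapper in this commit uses, and the values are illustrative:

```bicep
module dataStore './storage.bicep' = {
  name: 'deploy-data-storage'
  params: {
    storageAccountName: 'storbitaldata'
    environmentName: 'dev'
    storageSku: 'Standard_LRS' // override the Standard_GRS default
  }
}
```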
@@ -1,42 +1,42 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param storageAccountName string
param containerName string
param containerPublicAccess string = 'None'
param containerDeleteRetentionPolicyEnabled bool = true
param containerDeleteRetentionPolicyDays int = 7
param deleteRetentionPolicyEnabled bool = true
param deleteRetentionPolicyDays int = 7
param isVersioningEnabled bool = false

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' existing = {
  name: storageAccountName
}

resource storageAccountService 'Microsoft.Storage/storageAccounts/blobServices@2021-08-01' = {
  name: 'default'
  parent: storageAccount
  properties: {
    containerDeleteRetentionPolicy: {
      days: containerDeleteRetentionPolicyDays
      enabled: containerDeleteRetentionPolicyEnabled
    }
    deleteRetentionPolicy: {
      days: deleteRetentionPolicyDays
      enabled: deleteRetentionPolicyEnabled
    }
    isVersioningEnabled: isVersioningEnabled
  }
}

resource stgAcctBlobSvcsContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-08-01' = {
  name: containerName
  parent: storageAccountService
  properties: {
    publicAccess: containerPublicAccess
  }
}

output containerId string = stgAcctBlobSvcsContainer.id
@@ -1,26 +1,26 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param storageAccountName string
param keyVaultName string
param keyVaultResourceGroup string
param secretNamePrefix string = 'Geospatial'
param utcValue string = utcNow()

var storageAccountKeySecretNameVar = '${secretNamePrefix}StorageAccountKey'

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' existing = {
  name: storageAccountName
}

module storageAccountKeySecret './akv.secrets.bicep' = {
  name: '${toLower(secretNamePrefix)}-storage-account-key-${utcValue}'
  scope: resourceGroup(keyVaultResourceGroup)
  params: {
    environmentName: environmentName
    keyVaultName: keyVaultName
    secretName: storageAccountKeySecretNameVar
    secretValue: storageAccount.listKeys().keys[0].value
  }
}
@@ -1,27 +1,27 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param storageAccountName string
param location string = resourceGroup().location
param environmentName string
param storeType string = 'data'
param storageSku string = 'Standard_GRS'
param storageKind string = 'StorageV2'
param public_access string = 'Enabled'

module hnsEnabledStorageAccount 'storage.bicep' = {
  name: '${storageAccountName}-hns'
  params: {
    storageAccountName: storageAccountName
    location: location
    environmentName: environmentName
    storageSku: storageSku
    storageKind: storageKind
    isHnsEnabled: true
    public_access: public_access
    storeType: storeType
  }
}

output storageAccountId string = hnsEnabledStorageAccount.outputs.storageAccountId
@@ -1,17 +1,17 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param vNetName string
param subnetName string
param subnetAddressPrefix string
param serviceEndPoints array = []

// Subnet with RT and NSG
resource subnet 'Microsoft.Network/virtualNetworks/subnets@2020-06-01' = {
  name: '${vNetName}/${subnetName}'
  properties: {
    addressPrefix: subnetAddressPrefix
    serviceEndpoints: serviceEndPoints
  }
}
@@ -1,32 +1,32 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param synapseWorkspaceName string

param logs array
param storageAccountId string = ''
param workspaceId string = ''
param serviceBusId string = ''

param logAnalyticsDestinationType string = ''
param eventHubAuthorizationRuleId string = ''
param eventHubName string = ''

resource existingResource 'Microsoft.Synapse/workspaces@2021-06-01' existing = {
  name: synapseWorkspaceName
}

resource symbolicname 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: '${existingResource.name}-diag'
  scope: existingResource
  properties: {
    eventHubAuthorizationRuleId: empty(eventHubAuthorizationRuleId) ? null : eventHubAuthorizationRuleId
    eventHubName: empty(eventHubName) ? null : eventHubName
    logAnalyticsDestinationType: empty(logAnalyticsDestinationType) ? null : logAnalyticsDestinationType
    logs: logs
    metrics: []
    serviceBusRuleId: empty(serviceBusId) ? null : serviceBusId
    storageAccountId: empty(storageAccountId) ? null : storageAccountId
    workspaceId: empty(workspaceId) ? null : workspaceId
  }
}
@@ -1,55 +1,55 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param location string = resourceGroup().location
param synapseWorkspaceName string
param sparkPoolName string
param autoPauseEnabled bool = true
param autoPauseDelayInMinutes int = 15
param autoScaleEnabled bool = true
param autoScaleMinNodeCount int = 1
param autoScaleMaxNodeCount int = 5
param cacheSize int = 0
param dynamicExecutorAllocationEnabled bool = false
param isComputeIsolationEnabled bool = false
param nodeCount int = 0
param nodeSize string
param nodeSizeFamily string
param sparkVersion string = '3.1'
param poolId string = 'default'

resource synapseWorspace 'Microsoft.Synapse/workspaces@2021-06-01' existing = {
  name: synapseWorkspaceName
}

resource synapseSparkPool 'Microsoft.Synapse/workspaces/bigDataPools@2021-06-01' = {
  name: sparkPoolName
  location: location
  tags: {
    environment: environmentName
    poolId: poolId
  }
  parent: synapseWorspace
  properties: {
    autoPause: {
      delayInMinutes: autoPauseDelayInMinutes
      enabled: autoPauseEnabled
    }
    autoScale: {
      enabled: autoScaleEnabled
      maxNodeCount: autoScaleMaxNodeCount
      minNodeCount: autoScaleMinNodeCount
    }
    cacheSize: cacheSize
    dynamicExecutorAllocation: {
      enabled: dynamicExecutorAllocationEnabled
    }
    isComputeIsolationEnabled: isComputeIsolationEnabled
    nodeCount: nodeCount
    nodeSize: nodeSize
    nodeSizeFamily: nodeSizeFamily
    sparkVersion: sparkVersion
  }
}
@@ -1,84 +1,84 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

param environmentName string
param location string = resourceGroup().location
param synapseWorkspaceName string
param hnsStorageAccountName string
param hnsStorageFileSystem string = 'users'
param sqlAdminLogin string = 'sqladmin'
param sqlAdminLoginPassword string
param firewallAllowEndIP string = '255.255.255.255'
param firewallAllowStartIP string = '0.0.0.0'

param gitRepoAccountName string = ''
param gitRepoCollaborationBranch string = 'main'
param gitRepoHostName string = ''
param gitRepoLastCommitId string = ''
param gitRepoVstsProjectName string = ''
param gitRepoRepositoryName string = ''
param gitRepoRootFolder string = '.'
param gitRepoVstsTenantId string = subscription().tenantId
param gitRepoType string = ''

param keyVaultName string = ''
param synapseSqlAdminPasswordSecretName string = 'synapse-sqladmin-password'
param utcValue string = utcNow()
param workspaceId string = 'default'

resource hnsStorage 'Microsoft.Storage/storageAccounts@2021-08-01' existing = {
  name: hnsStorageAccountName
}

resource synapseWorspace 'Microsoft.Synapse/workspaces@2021-06-01' = {
  name: synapseWorkspaceName
  location: location
  tags: {
    environment: environmentName
    workspaceId: workspaceId
  }
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    defaultDataLakeStorage: {
      resourceId: hnsStorage.id
      accountUrl: hnsStorage.properties.primaryEndpoints.dfs
      filesystem: hnsStorageFileSystem
    }
    sqlAdministratorLogin: sqlAdminLogin
    sqlAdministratorLoginPassword: sqlAdminLoginPassword
    workspaceRepositoryConfiguration: (empty(gitRepoType)) ? {} : {
      accountName: gitRepoAccountName
      collaborationBranch: gitRepoCollaborationBranch
      hostName: gitRepoHostName
      lastCommitId: gitRepoLastCommitId
      projectName: gitRepoVstsProjectName
      repositoryName: gitRepoRepositoryName
      rootFolder: gitRepoRootFolder
      tenantId: gitRepoVstsTenantId
      type: gitRepoType
    }
  }
}

resource synapseWorkspaceFwRules 'Microsoft.Synapse/workspaces/firewallRules@2021-06-01' = {
  name: 'allowAll'
  parent: synapseWorspace
  properties: {
    endIpAddress: firewallAllowEndIP
    startIpAddress: firewallAllowStartIP
  }
}

module synapseSqlAdminPasswordSecret './akv.secrets.bicep' = if (!empty(keyVaultName)) {
  name: 'synapse-sqladmin-password-${utcValue}'
  params: {
    environmentName: environmentName
    keyVaultName: keyVaultName
    secretName: synapseSqlAdminPasswordSecretName
    secretValue: sqlAdminLoginPassword
  }
}

output synapseMIPrincipalId string = synapseWorspace.identity.principalId
@@ -1,25 +1,25 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.

// Name of the VNET.
param virtualNetworkName string
param location string = resourceGroup().location
param environmentName string
param addressPrefix string

resource vnet 'Microsoft.Network/virtualNetworks@2020-06-01' = {
  name: virtualNetworkName
  location: location
  tags: {
    environment: environmentName
  }
  properties: {
    addressSpace: {
      addressPrefixes: [
        addressPrefix
      ]
    }
    subnets: []
  }
}
@@ -1,98 +1,98 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import os
import re
import argparse
import shutil


# collect args
parser = argparse.ArgumentParser(description='Arguments required to run packaging function')
parser.add_argument('--raw_storage_account_name', type=str, required=True, help='Name of the Raw data hosting Storage Account')
parser.add_argument('--synapse_storage_account_name', type=str, required=True, help='Name of the Synapse Storage Account')
parser.add_argument('--synapse_pool_name', type=str, required=True, help='Name of the Synapse pool in the Synapse workspace to use as default')
parser.add_argument('--batch_storage_account_name', type=str, required=True, help='Name of the Batch Storage Account')
parser.add_argument('--batch_account', type=str, required=True, help="Batch Account name")
parser.add_argument('--linked_key_vault', type=str, required=True, help="Key Vault to be added as Linked Service")
parser.add_argument('--location', type=str, required=True, help="Batch Account Location")

# parse args
args = parser.parse_args()

def replace(tokens_map: dict, body: str):

    # use regex to identify tokens in the files. Tokens are in the format __token_name__;
    # the same token can occur multiple times in the same file
    tokenizer = re.compile(r"([\w\'\-]+|\s+|.?)")

    # replace tokens with actual values
    swap = lambda x: '{0}'.format(tokens_map.get(x)) if x in tokens_map else x

    # find all and replace
    result = ''.join(swap(st) for st in tokenizer.findall(body))

    return result

def package(tokens_map: dict):

    script_dirname = os.path.dirname(__file__)
    src_folder_path = os.path.join(script_dirname, '..', 'src', 'workflow')
    package_folder_path = os.path.join(os.getcwd(), 'package')

    # folder permission mode (currently unused)
    mode = 0o766

    # if the package folder already exists, delete it before starting a new iteration
    if os.path.exists(package_folder_path):
        shutil.rmtree(package_folder_path)

    # copy the folder structure from the src/workflow folder before replacing the
    # tokens with values
    shutil.copytree(src_folder_path, package_folder_path)

    # the set of folder names is fixed for Synapse pipelines, hence they are hardcoded
    for folder in ['linkedService', 'sparkJobDefinition', 'pipeline', 'bigDataPool']:

        # iterate through all files
        for file in os.listdir(f'{package_folder_path}/{folder}'):

            # check whether the file is in json format or not
            if file.endswith(".json"):

                file_path = os.path.join(package_folder_path, folder, file)

                with open(file_path, 'r') as f:

                    # token-replaced string held in memory
                    token_replaced_file_content = replace(tokens_map, f.read())

                with open(file_path, 'w') as file_write:

                    if token_replaced_file_content is not None:

                        # write the token-replaced string back to the file
                        file_write.write(token_replaced_file_content)

    # zip the folder contents to package.zip
    shutil.make_archive('package', 'zip', package_folder_path)

    # finally, clean up the package folder
    if os.path.exists(package_folder_path):
        shutil.rmtree(package_folder_path)

if __name__ == "__main__":

    # list of tokens and their values to be replaced
    tokens_map = {
        '__raw_data_storage_account__': args.raw_storage_account_name,
        '__batch_storage_account__': args.batch_storage_account_name,
        '__batch_account__': args.batch_account,
        '__linked_key_vault__': args.linked_key_vault,
        '__synapse_storage_account__': args.synapse_storage_account_name,
        '__synapse_pool_name__': args.synapse_pool_name,
        '__location__': args.location
    }

    # invoke package method
    package(tokens_map)
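For reference, the `replace()` helper above works by splitting a file body into word-level tokens and swapping any token found in the map; because `_` is part of `\w`, a `__token_name__` placeholder survives as a single token. A minimal standalone sketch of that behavior (the map value below is hypothetical):

```python
import re

def replace(tokens_map: dict, body: str) -> str:
    # word-like runs, whitespace runs, or single punctuation characters
    tokenizer = re.compile(r"([\w\'\-]+|\s+|.?)")
    return ''.join(str(tokens_map.get(tok, tok)) for tok in tokenizer.findall(body))

# hypothetical token value, for illustration only
print(replace({'__batch_account__': 'mybatchacct'},
              '"batchAccountName": "__batch_account__"'))
# prints: "batchAccountName": "mybatchacct"
```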
@@ -1,7 +1,7 @@
{
    "prob_cutoff": 0.25,
    "tag_type": "pool",
    "bbox_color": "red",
    "bbox_width": 1,
    "json": true
}
@@ -1,22 +1,22 @@
{
    "algImageName": "ghcr.io/azure/azure-orbital-analytics-samples/custom_vision_offline:latest",
    "containerName": "pool",
    "containerReference": "custom_vision_object_detection",
    "mountedDirectory": "/data",
    "submissionDirectory": "in",
    "resultsDirectory": "out",
    "logsDirectory": "logs",
    "modelPython": "./custom_vision.py",
    "vaultUri": "__vault_uri__",
    "contextFileName": "config.json",
    "cpu": 3,
    "memory": 14,
    "gpu": "",
    "validations": [
        {
            "validator": "FileExtensionValidator",
            "expected": ".png",
            "value": "*.*"
        }
    ]
}
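The `validations` array in the spec above declares checks applied to submitted files. The sketch below shows one plausible shape for such a check; the `FileExtensionValidator` name comes from the config, but this implementation is an assumption for illustration, not the actual class used by the solution:

```python
import fnmatch

class FileExtensionValidator:
    # hypothetical implementation: verify that every file matching the
    # "value" pattern carries the "expected" extension
    def __init__(self, expected: str, value: str):
        self.expected = expected  # e.g. ".png"
        self.pattern = value      # e.g. "*.*"

    def validate(self, file_names: list) -> bool:
        candidates = fnmatch.filter(file_names, self.pattern)
        return all(name.endswith(self.expected) for name in candidates)

print(FileExtensionValidator(".png", "*.*").validate(["tile_001.png", "tile_002.png"]))  # True
```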
@@ -1,142 +1,142 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import logging
import os
import argparse
import shutil
from pathlib import Path

from notebookutils import mssparkutils

PKG_PATH = Path(__file__).parent
PKG_NAME = PKG_PATH.name

# collect args
parser = argparse.ArgumentParser(description='Arguments required to run copy noop function')
parser.add_argument('--storage_account_name', type=str, required=True, help='Name of the storage account where the input data resides')
parser.add_argument('--storage_account_key', required=True, help='Key to the storage account where the input data resides')

parser.add_argument('--src_container', type=str, required=False, help='Source container in Azure Storage')
parser.add_argument('--src_fileshare', type=str, required=False, help='Source File share in Azure Storage')
parser.add_argument('--src_folder', default=None, required=True, help='Source folder path in Azure Storage Container or File Share')

parser.add_argument('--dst_container', type=str, required=False, help='Destination container in Azure Storage')
parser.add_argument('--dst_fileshare', type=str, required=False, help='Destination File share in Azure Storage')
parser.add_argument('--dst_folder', default=None, required=True, help='Destination folder path in Azure Storage Container or File Share')

parser.add_argument('--folders_to_create', action='append', required=False, help='Folders to create in container or file share')


# parse args
args = parser.parse_args()


def copy(src_mounted_path: str,
         src_unmounted_path: str,
         dst_mounted_path: str,
         dst_unmounted_path: str,
         dst_folder: str,
         folders: list):

    # create only if it does not already exist
    if not os.path.isdir(f'{dst_unmounted_path}') and dst_unmounted_path.startswith('https'):
        mssparkutils.fs.mkdirs(dst_unmounted_path)

    dst_path = dst_mounted_path.replace(f'/{dst_folder}', '')

    # folders are not required, so do not try to iterate
    # over them when none were passed
    if folders is not None:
        for folder in folders:
            logger.info(f"creating folder path {dst_path}/{folder}")

            # create only if it does not already exist
            if not os.path.isdir(f'{dst_path}/{folder}'):
                os.makedirs(f'{dst_path}/{folder}')

    # mssparkutils.fs.cp only works when source and destination are of the
    # same type, i.e. storage container to storage container
    logger.info(f"copying from {src_mounted_path} to {dst_mounted_path}")

    # using shutil to copy a directory or individual files as needed
    if os.path.isdir(src_mounted_path):
        shutil.copytree(src_mounted_path, dst_mounted_path, dirs_exist_ok=True)
    else:
        shutil.copy(src_mounted_path, dst_mounted_path)

    logger.info("finished copying")

def map_source(storage_account_name: str,
               storage_account_key: str,
               container_name: str,
               fileshare_name: str,
               folder_path: str):

    # the unmounted path refers to the storage account path that is not mounted to the /synfs/{job_id}/{file_share_name} path
    unmounted_path = ''

    jobId = mssparkutils.env.getJobId()

    # if a container name is specified, then the mapping / mount is for a container in an azure storage account
    if container_name:

        unmounted_path = f'abfss://{container_name}@{storage_account_name}.dfs.core.windows.net/{folder_path}'

        mssparkutils.fs.unmount(f'/{container_name}')

        mssparkutils.fs.mount(
            f'abfss://{container_name}@{storage_account_name}.dfs.core.windows.net',
            f'/{container_name}',
            {"accountKey": storage_account_key}
        )

        mounted_path = f'/synfs/{jobId}/{container_name}/{folder_path}'

    # if a file share is specified, then the mapping / mount is for a file share in an azure storage account
    elif fileshare_name:

        unmounted_path = f'https://{fileshare_name}@{storage_account_name}.file.core.windows.net/{folder_path}'

        mssparkutils.fs.unmount(f'/{fileshare_name}')

        mssparkutils.fs.mount(
            f'https://{fileshare_name}@{storage_account_name}.file.core.windows.net/{folder_path}',
            f'/{fileshare_name}',
            {"accountKey": storage_account_key}
        )

        mounted_path = f'/synfs/{jobId}/{fileshare_name}/{folder_path}'

    return mounted_path, unmounted_path

if __name__ == "__main__":

    # enable logging
    logging.basicConfig(
        level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(name)s:%(message)s"
    )

    logger = logging.getLogger("copy_noop")

    # map / mount the source container / file share in the azure storage account
    src_mounted_path, src_unmounted_path = map_source(args.storage_account_name,
                                                      args.storage_account_key,
                                                      args.src_container,
                                                      args.src_fileshare,
                                                      args.src_folder)

    # map / mount the destination container / file share in the azure storage account
    dst_mounted_path, dst_unmounted_path = map_source(args.storage_account_name,
                                                      args.storage_account_key,
                                                      args.dst_container,
                                                      args.dst_fileshare,
                                                      args.dst_folder)

    # the copy method supports three scenarios:
    # 1. source container to destination container
    # 2. source container to destination file share
    # 3. source file share to destination file share
    # source file share to destination container is not supported at this time
    copy(src_mounted_path, src_unmounted_path, dst_mounted_path, dst_unmounted_path, args.dst_folder, args.folders_to_create)
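The mounted / unmounted path pair returned by `map_source` above follows a fixed convention: the `abfss` (or `https`) URL addresses the storage account directly, while `/synfs/{job_id}/{mount_point}` is where Synapse exposes the mount locally. A small sketch of that mapping with hypothetical names:

```python
# hypothetical values, for illustration only
storage_account, container, folder, job_id = 'rawdatastore', 'raw', 'input', '12'

unmounted_path = f'abfss://{container}@{storage_account}.dfs.core.windows.net/{folder}'
mounted_path = f'/synfs/{job_id}/{container}/{folder}'

print(unmounted_path)  # abfss://raw@rawdatastore.dfs.core.windows.net/input
print(mounted_path)    # /synfs/12/raw/input
```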
@@ -1,130 +1,130 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import os, argparse, sys
import json
import glob
from osgeo import gdal
import logging
from notebookutils import mssparkutils

from pathlib import Path

# collect args
parser = argparse.ArgumentParser(description='Arguments required to run convert to png function')
parser.add_argument('--storage_account_name', type=str, required=True, help='Name of the storage account where the input data resides')
parser.add_argument('--storage_account_key', required=True, help='Key to the storage account where the input data resides')
parser.add_argument('--storage_container', type=str, required=True, help='Container under which the input data resides')
parser.add_argument('--src_folder_name', default=None, required=True, help='Folder containing the source file for cropping')
parser.add_argument('--config_file_name', required=True, help='Config file name')

# parse args
args = parser.parse_args()

def convert_directory(
    input_path,
    output_path,
    config_file,
    logger,
    default_options={"format": "png", "metadata": False},
):
    gdal.UseExceptions()

    logger.info("looking for config file: %s", config_file)

    # copy the defaults so repeated calls do not mutate the shared default dict
    translate_options_dict = dict(default_options)
    logger.debug("default config options: %s", translate_options_dict)

    try:
        # read config file
        with open(config_file, "r") as config:
            config_file_dict = json.load(config)
            logger.debug("read in %s", config_file_dict)
            translate_options_dict.update(config_file_dict)
    except Exception as e:
        # the config file is missing or there is an issue reading it
        logger.error("error reading config file %s", e)
        sys.exit(1)

    logger.info("using config options: %s", translate_options_dict)

    keep_metadata = translate_options_dict.pop("metadata")

    opt = gdal.TranslateOptions(**translate_options_dict)

    logger.debug("looking for input files in %s", input_path)
    for in_file in os.scandir(input_path):
        in_name = in_file.name
        logger.info("ingesting file %s", in_file.path)
        # ! this is a landmine; will error for files w/o extension but with '.', and for formats with spaces
        out_name = os.path.splitext(in_name)[0] + "." + translate_options_dict["format"]
        out_path = os.path.join(output_path, out_name)
        try:
            # call gdal to convert the file format
            gdal.Translate(out_path, in_file.path, options=opt)
        except Exception as e:
            logger.error("gdal error: %s", e)
            sys.exit(1)
        else:
            logger.info("successfully translated %s", out_path)

    # check whether we need to carry over the geo-coordinates / metadata files
    if not keep_metadata:
        xml_glob = os.path.join(output_path, "*.aux.xml")
        logger.debug(f"deleting metadata files that match {xml_glob}")
        for xml_file in glob.glob(xml_glob):
            logger.debug(f"deleting metadata file {xml_file}")
            os.remove(xml_file)


if __name__ == "__main__":

    # enable logging
    logging.basicConfig(
        level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(name)s:%(message)s"
    )
    logger = logging.getLogger("image_convert")

    # unmount any previously mounted storage account container
    mssparkutils.fs.unmount(f'/{args.storage_container}')

    mssparkutils.fs.mount(
        f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net',
        f'/{args.storage_container}',
        {"accountKey": args.storage_account_key}
    )

    jobId = mssparkutils.env.getJobId()

    input_path = f'/synfs/{jobId}/{args.storage_container}/{args.src_folder_name}'
    config_path = f'/synfs/{jobId}/{args.storage_container}/config/{args.config_file_name}'
    output_path = f'/synfs/{jobId}/{args.storage_container}'

    logger.debug(f"input data directory {input_path}")
    logger.debug(f"output data directory {output_path}")
    logger.debug(f"config file path {config_path}")

    convert_directory(input_path, output_path, config_path, logger)

    # scan the directory to find tif files that were converted to png file format
    for in_file in os.scandir(input_path):

        # the tif file extension is removed so that the same file name can be reused for the png
        file_name = os.path.basename(in_file.path).replace('.tif', '')

        copy_src_file_name = f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/{file_name}'
        copy_dst_file_name = f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/convert/{file_name}'

        # move the source png file to the destination path
        mssparkutils.fs.mv(
            f'{copy_src_file_name}.png',
            f'{copy_dst_file_name}.png',
            True
        )

        # move the source xml (geo-coordinates) to the destination path
        mssparkutils.fs.mv(
            f'{copy_src_file_name}.png.aux.xml',
            f'{copy_dst_file_name}.png.aux.xml',
            True
        )
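The config file read by `convert_directory` above is plain JSON: the `metadata` key controls whether the `*.aux.xml` sidecar files are kept, and every remaining key is forwarded to `gdal.TranslateOptions` as a keyword argument. A sketch of that split with a hypothetical config:

```python
# hypothetical config contents, for illustration only
config = {"format": "png", "metadata": True}

# "metadata" is popped off before the remaining keys are handed to GDAL
keep_metadata = config.pop("metadata")

print(keep_metadata)  # True
print(config)         # {'format': 'png'} -> gdal.TranslateOptions(format='png')
```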
@@ -1,132 +1,132 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import os, argparse, sys
import shapely as shp
import shapely.geometry as geo
from osgeo import gdal
from notebookutils import mssparkutils

from pathlib import Path

sys.path.append(os.getcwd())

import utils

# collect args
parser = argparse.ArgumentParser(description='Arguments required to run crop function')
parser.add_argument('--storage_account_name', type=str, required=True, help='Name of the storage account where the input data resides')
parser.add_argument('--storage_account_key', required=True, help='Key to the storage account where the input data resides')
parser.add_argument('--storage_container', type=str, required=True, help='Container under which the input data resides')
parser.add_argument('--src_folder_name', default=None, required=True, help='Folder containing the source file for cropping')
parser.add_argument('--config_file_name', required=True, help='Config file name')

# parse args
args = parser.parse_args()

def crop(storage_account_name: str,
         storage_account_key: str,
         storage_container: str,
         src_folder_name: str,
         config_file_name: str):
    '''
    Crops the GeoTiff to the Area of Interest (AOI)

    Inputs:
        storage_account_name - Name of the storage account where the input data resides
        storage_account_key - Key to the storage account where the input data resides
        storage_container - Container under which the input data resides
        src_folder_name - Folder containing the source file for cropping
        config_file_name - Config file name

    Output:
        Cropped GeoTiff saved into the user specified directory
    '''
    # enable logging
    logger = utils.init_logger("stac_download")

    gdal.UseExceptions()

    mssparkutils.fs.unmount(f'/{storage_container}')

    mssparkutils.fs.mount(
        f'abfss://{storage_container}@{storage_account_name}.dfs.core.windows.net',
        f'/{storage_container}',
        {"accountKey": storage_account_key}
    )

    jobId = mssparkutils.env.getJobId()

    input_path = f'/synfs/{jobId}/{storage_container}/{src_folder_name}'
    config_path = f'/synfs/{jobId}/{storage_container}/config/{config_file_name}'
    output_path = f'/synfs/{jobId}/{storage_container}/crop'

    logger.debug(f"input data directory {input_path}")
    logger.debug(f"output data directory {output_path}")
    logger.debug(f"config file path {config_path}")

    try:
        # parse config file
        config = utils.parse_config(config_path)
    except Exception:
        exit(1)

    # get the aoi for cropping from the config file
    geom = config.get("geometry")
    bbox = config.get("bbox")

    if (geom is not None) and (bbox is not None):
        logger.error('found both "geometry" and "bbox"')
        exit(1)
    elif (geom is None) and (bbox is None):
        logger.error('found neither "geometry" nor "bbox"')
        exit(1)

    try:
        aoi = geo.asShape(geom) if bbox is None else geo.box(*bbox)
    except Exception as e:
        logger.error(f"error parsing config:{e}")
        exit(1)

    if aoi.is_empty:
        logger.error(f"empty area of interest {aoi.wkt}")
        exit(1)

    logger.debug(f"using aoi '{aoi}'")

    input_files = []

    # list all the files in the folder that will be part of the crop
    files = mssparkutils.fs.ls(f'abfss://{storage_container}@{storage_account_name}.dfs.core.windows.net/{src_folder_name}')
    for file in files:
        if not file.isDir:
            input_files.append(file)

    # crop the raster files
    utils.crop_images(input_files, f'abfss://{storage_container}@{storage_account_name}.dfs.core.windows.net/{src_folder_name}', input_path, output_path, aoi)

    for file in input_files:
        # this is the newly created cropped file path on the local host
        temp_src_path = file.path.replace(f'/{src_folder_name}', '/')

        # this is the destination path (storage account) that the newly
        # created cropped file will be moved to
        perm_src_path = file.path.replace(f'/{src_folder_name}/', '/crop/').replace(os.path.basename(file.path), 'output.tif')

        mssparkutils.fs.mv(
            temp_src_path,
            perm_src_path,
            True
        )

if __name__ == "__main__":

    print("Starting Tiling Process")

    crop(args.storage_account_name,
         args.storage_account_key,
         args.storage_container,
         args.src_folder_name,
         args.config_file_name)

    print("Tiling Process Completed")
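The crop config above must carry exactly one of `geometry` (a GeoJSON object) or `bbox` (a `[minx, miny, maxx, maxy]` list in EPSG:4326). A sketch of how the AOI is built from a config; the bounds below are hypothetical:

```python
import shapely.geometry as geo

# hypothetical config: a bbox over part of Seattle, in EPSG:4326
config = {"bbox": [-122.35, 47.58, -122.20, 47.68]}

geom, bbox = config.get("geometry"), config.get("bbox")
aoi = geo.shape(geom) if bbox is None else geo.box(*bbox)

print(aoi.bounds)  # (-122.35, 47.58, -122.2, 47.68)
```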
@@ -1,136 +1,136 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import json
import logging
import logging.config
import pyproj
import rasterio as rio
import rasterio.mask
import shapely as shp
import shapely.geometry
from notebookutils import mssparkutils

from pathlib import Path
from shapely.ops import transform

def parse_config(config_path: str):
    LOGGER.info(f"reading config file {config_path}")

    # read the config file
    try:
        with open(config_path, "r") as file:
            config = json.load(file)
            LOGGER.info(f"using configuration {config}")
    except Exception as e:
        LOGGER.error(f"error reading config file:{e}")
        raise

    return config

def area_sq_km(area: shp.geometry.base.BaseGeometry, src_crs) -> float:
    tfmr = pyproj.Transformer.from_crs(src_crs, {'proj': 'cea'}, always_xy=True)
    return transform(tfmr.transform, area).area / 1e6

def crop_images(
    images: any,
    input_path: Path,
    local_input_path: str,
    output_path: Path,
    aoi: shp.geometry.base.BaseGeometry,
):
    for image in images:
        LOGGER.info(f"starting on file {image}")
        LOGGER.debug(f"input path {input_path}")
        LOGGER.debug(f"local input path {local_input_path}")
        image_path = image.path.replace(input_path, local_input_path)

        LOGGER.debug(f"image path {image_path}")

        with rio.open(image_path, "r") as img_src:
            LOGGER.debug(f"opening file {image.name}")
            dst_meta = img_src.meta

            crs_src = img_src.crs
            src_shape = img_src.shape
            src_area = area_sq_km(shp.geometry.box(*img_src.bounds), crs_src)

            # convert the aoi boundary to the image's native CRS;
            # shapely is (x,y) coord order, but it's (lat, long) for WGS84,
            # so force consistency with always_xy
            tfmr = pyproj.Transformer.from_crs("epsg:4326", crs_src, always_xy=True)
            aoi_src = transform(tfmr.transform, aoi)

            # possible changes - better decision making on nodata choices here
            #! and use a better choice than 0 for floats and signed ints
            data_dst, tfm_dst = rio.mask.mask(
                img_src, [aoi_src], crop=True, nodata=0
            )

            dst_meta.update(
                {
                    "driver": "gtiff",
                    "height": data_dst.shape[1],
                    "width": data_dst.shape[2],
                    "alpha": "unspecified",
                    "nodata": 0,
                    "transform": tfm_dst,
                }
            )

            out_meta_str = str(dst_meta).replace("\n", "")
            LOGGER.debug(f"using options for destination image {out_meta_str}")
            local_output_path = output_path.replace('/crop', '')
            rel_local_path = image_path.replace(local_input_path, '')
            dst_path = f'{local_output_path}/{rel_local_path}'

            with rio.open(dst_path, "w", **dst_meta) as img_dst:
                img_dst.write(data_dst)

                dst_area = area_sq_km(shp.geometry.box(*img_dst.bounds), crs_src)
                dst_shape = img_dst.shape

            LOGGER.debug(f"source dimensions {src_shape} and area (sq km) {src_area}")
            LOGGER.debug(f"destination dimensions {dst_shape} and area (sq km) {dst_area}")

            LOGGER.info(f"saved cropped image to {dst_path}")


##########################################################################################
# logging
##########################################################################################


LOGGER = None


def init_logger(name: str = __name__, level: int = logging.DEBUG) -> logging.Logger:
    config = {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "standard": {"format": "%(asctime)s:[%(levelname)s]:%(name)s:%(message)s"},
        },
        "handlers": {
            f"{name}_hdl": {
                "level": level,
                "formatter": "standard",
                "class": "logging.StreamHandler",
                # 'stream': 'ext://sys.stdout',  # default is stderr
            },
        },
        "loggers": {
            name: {"propagate": False, "handlers": [f"{name}_hdl"], "level": level},
        },
    }
    logging.config.dictConfig(config)
    global LOGGER
    LOGGER = logging.getLogger(name)
    return LOGGER


def default_logger():
    if LOGGER is None:
        init_logger()
    return LOGGER
@@ -1,101 +1,101 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import argparse
import logging

from osgeo import gdal
from notebookutils import mssparkutils


# collect args
parser = argparse.ArgumentParser(description='Arguments required to run mosaic function')
parser.add_argument('--storage_account_name', type=str, required=True, help='Name of the storage account where the input data resides')
parser.add_argument('--storage_account_key', required=True, help='Key to the storage account where the input data resides')
parser.add_argument('--storage_container', type=str, required=True, help='Container under which the input data resides')
parser.add_argument('--src_folder_name', default=None, required=True, help='Folder containing the source files to mosaic')

# parse args
args = parser.parse_args()


def mosaic_tifs(input_path: str,
                output_path: str,
                files: list):
    '''
    Stitches two or more GeoTiffs into one single large GeoTiff

    Inputs:
        input_path  - Folder (on the mounted container) holding the source GeoTiffs
        output_path - Folder (on the mounted container) that receives the mosaicked GeoTiff
        files       - List of input file names (with extension)

    Output:
        Single large GeoTiff saved into the user specified storage account
    '''
    print("file names are listed below")
    print(files)

    gdal.UseExceptions()

    # two or more files to be mosaicked are passed as a list of full paths
    files_to_mosaic = [f"{input_path}/{file}" for file in files]

    temp_output_path = output_path.replace('/mosaic', '')

    # gdal's Warp method is called to perform the mosaicking
    g = gdal.Warp(f'{temp_output_path}/output.tif', files_to_mosaic, format="GTiff", options=["COMPRESS=LZW", "TILED=YES"])

    # close file and flush to disk
    g = None


if __name__ == "__main__":

    # enable logging
    logging.basicConfig(
        level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(name)s:%(message)s"
    )
    logger = logging.getLogger("image_mosaic")

    # mount storage account container
    mssparkutils.fs.unmount(f'/{args.storage_container}')

    mssparkutils.fs.mount(
        f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net',
        f'/{args.storage_container}',
        {"accountKey": args.storage_account_key}
    )

    jobId = mssparkutils.env.getJobId()

    input_path = f'/synfs/{jobId}/{args.storage_container}/{args.src_folder_name}'
    output_path = f'/synfs/{jobId}/{args.storage_container}/mosaic'

    logger.debug(f"input data directory {input_path}")
    logger.debug(f"output data directory {output_path}")

    # list the files in the source folder path under the storage account's container
    files = mssparkutils.fs.ls(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/{args.src_folder_name}')
    input_files = []
    for file in files:
        if not file.isDir and file.name.endswith('.TIF'):
            input_files.append(file.name)

    print("Starting Mosaicking Process")

    # mosaic method is called
    mosaic_tifs(input_path, output_path, input_files)

    # gdal.Warp writes the mosaicked output to the container root; move it
    # to the permanent mosaic folder in the mounted storage account container
    mssparkutils.fs.mv(
        f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/output.tif',
        f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/mosaic/output.tif',
        True
    )

    print("Mosaicking Process Completed")
@@ -1,154 +1,154 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import os, math, argparse
from pathlib import Path
from PIL import Image, UnidentifiedImageError
import shutil
import logging
from osgeo import gdal
from notebookutils import mssparkutils

Image.MAX_IMAGE_PIXELS = None

# Collect args
parser = argparse.ArgumentParser(description='Arguments required to run tiling function')
parser.add_argument('--storage_account_name', type=str, required=True, help='Name of the storage account where the input data resides')
parser.add_argument('--storage_account_key', required=True, help='Key to the storage account where the input data resides')
parser.add_argument('--storage_container', type=str, required=True, help='Container under which the input data resides')
parser.add_argument('--src_folder_name', default=None, required=True, help='Folder where the input data is stored')
parser.add_argument('--file_name', type=str, required=True, help='Input file name to be tiled (with extension)')
parser.add_argument('--tile_size', type=str, required=True, help='Tile size')

# Parse args
args = parser.parse_args()

# Define functions
def tile_img(input_path: str,
             output_path: str,
             file_name: str,
             tile_size):
    '''
    Tiles/chips images into a user defined size using the tile_size parameter

    Inputs:
        input_path  - Folder (on the mounted container) holding the input image
        output_path - Folder (on the mounted container) that receives the image chips
        file_name   - Input file name to be tiled (with extension)
        tile_size   - Tile size in pixels

    Output:
        All image chips saved into the user specified directory
    '''

    gdal.UseExceptions()

    print("Getting tile size")
    tile_size = int(tile_size)
    print(f"Tile size retrieved - {tile_size}")

    try:
        print("Getting image")
        img = Image.open(str(Path(input_path) / file_name))
        print("Image Retrieved")

        print("Determining Tile width")
        n_tile_width = list(range(0, math.floor(img.size[0] / tile_size)))
        print(f"Tile width {n_tile_width}")
        print("Determining Tile height")
        n_tile_height = list(range(0, math.floor(img.size[1] / tile_size)))
        print(f"Tile height {n_tile_height}")
        tile_combinations = [(a, b) for a in n_tile_width for b in n_tile_height]

        print("Processing tiles")
        for tile_tuple in tile_combinations:
            print("Getting starting coordinates")
            x_start_point = tile_tuple[0] * tile_size
            y_start_point = tile_tuple[1] * tile_size
            print(f"Got Starting Coordinates - {x_start_point},{y_start_point}")

            print("Cropping Tile")
            crop_box = (x_start_point, y_start_point, x_start_point + tile_size, y_start_point + tile_size)
            tile_crop = img.crop(crop_box)
            print("Tile Cropped")

            print("Getting tile name")
            img_name = os.path.basename(file_name)
            tile_name = img_name.rsplit('.', 1)
            tile_name = '.'.join([tile_name[0], 'tile', str(tile_tuple[0]), str(tile_tuple[1]), tile_name[1]])
            print(f"Retrieved Tile name - {tile_name}")

            print(f"Saving Tile - {tile_name}")
            tile_crop.save(str(Path(output_path) / tile_name))
            print(f"Saved Tile - {tile_name}")
    except UnidentifiedImageError:
        print("File is not an image, copying to destination directory")
        # derive the name from file_name here: the exception is raised by
        # Image.open before the loop-local img_name is ever assigned
        img_name = os.path.basename(file_name)
        sourcePath = str(Path(input_path) / img_name)
        destinationPath = str(Path(output_path) / img_name)

        print(f"Copying file from {sourcePath} to {destinationPath}")
        shutil.copyfile(sourcePath, destinationPath)
        print(f"Copied file from {sourcePath} to {destinationPath}")


def process_img_folder(args):
    '''
    Function to process all the images in a given source directory

    Input:
        args - command line arguments passed to the file

    Output:
        Nothing returned. Processed images placed in the output directory
    '''
    for img_name in os.listdir(args.path_to_input_img):

        print('Processing', str(img_name))

        tile_img(args.path_to_input_img, args.path_to_output, img_name, args.tile_size)

        print(f"{img_name} finished processing")


if __name__ == "__main__":

    # enable logging
    logging.basicConfig(
        level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(name)s:%(message)s"
    )
    logger = logging.getLogger("image_tiling")

    # mount the storage account
    mssparkutils.fs.unmount(f'/{args.storage_container}')

    mssparkutils.fs.mount(
        f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net',
        f'/{args.storage_container}',
        {"accountKey": args.storage_account_key}
    )

    jobId = mssparkutils.env.getJobId()

    input_path = f'/synfs/{jobId}/{args.storage_container}/{args.src_folder_name}'
    output_path = f'/synfs/{jobId}/{args.storage_container}/tiles'

    logger.debug(f"input data directory {input_path}")
    logger.debug(f"output data directory {output_path}")

    print("Starting Tiling Process")

    # create a placeholder file so the destination folder path exists
    mssparkutils.fs.put(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/tiles/__processing__.txt', 'started tiling ...', True)

    try:
        tile_img(input_path, output_path, args.file_name, args.tile_size)
        # remove the placeholder on a successful run
        mssparkutils.fs.rm(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/tiles/__processing__.txt', True)
    except:
        # flag the failure in the placeholder file before re-raising
        mssparkutils.fs.append(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/tiles/__processing__.txt', 'tiling errored out', True)
        raise

    print("Tiling Process Completed")
@@ -1,80 +1,78 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import argparse
import logging

from osgeo import gdal
from notebookutils import mssparkutils

dst_folder_name = 'warp'

# collect args
parser = argparse.ArgumentParser(description='Arguments required to run warp function')
parser.add_argument('--storage_account_name', type=str, required=True, help='Name of the storage account where the input data resides')
parser.add_argument('--storage_account_key', required=True, help='Key to the storage account where the input data resides')
parser.add_argument('--storage_container', type=str, required=True, help='Container under which the input data resides')
parser.add_argument('--src_folder_name', default=None, required=True, help='Folder containing the source files to warp')

# parse args
args = parser.parse_args()


def warp(
    input_path: str,
    output_path: str,
    file_name: str):

    gdal.UseExceptions()

    # specify options and run Warp
    kwargs = {'format': 'GTiff', 'dstSRS': '+proj=lcc +datum=WGS84 +lat_1=25 +lat_2=60 +lat_0=42.5 +lon_0=-100 +x_0=0 +y_0=0 +units=m +no_defs', 'srcSRS': '+proj=longlat +datum=WGS84 +no_defs'}
    ds = gdal.Warp(f'{output_path}/output_warp.tif', f'{input_path}/{file_name}', **kwargs)

    # close file and flush to disk
    ds = None


if __name__ == "__main__":

    # enable logging
    logging.basicConfig(
        level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(name)s:%(message)s"
    )
    logger = logging.getLogger("image_warp")

    # unmount any previously mounted storage account
    mssparkutils.fs.unmount(f'/{args.storage_container}')

    # mount the storage account containing data required for this transform
    mssparkutils.fs.mount(
        f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net',
        f'/{args.storage_container}',
        {"accountKey": args.storage_account_key}
    )

    jobId = mssparkutils.env.getJobId()

    input_path = f'/synfs/{jobId}/{args.storage_container}/{args.src_folder_name}'
    output_path = f'/synfs/{jobId}/{args.storage_container}/{dst_folder_name}'

    logger.debug(f"input data directory {input_path}")
    logger.debug(f"output data directory {output_path}")

    # create a temporary placeholder file so the destination folder path is available
    mssparkutils.fs.put(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/{dst_folder_name}/__processing__.txt', 'started warping ...', True)

    try:
        files = mssparkutils.fs.ls(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/{args.src_folder_name}')

        for file in files:
            if not file.isDir and file.name.endswith('.tif'):
                warp(input_path, output_path, file.name)

        # clean up the temporary placeholder on a successful run
        mssparkutils.fs.rm(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/{dst_folder_name}/__processing__.txt', True)
    except:
        # flag the failure in the placeholder file before re-raising
        mssparkutils.fs.append(f'abfss://{args.storage_container}@{args.storage_account_name}.dfs.core.windows.net/{dst_folder_name}/__processing__.txt', 'warp errored out', True)
        raise
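The `warp` transform reprojects from geographic WGS84 (`+proj=longlat`) into a Lambert conformal conic centered on North America. A minimal sketch of the same `gdal.Warp` call, with hypothetical file names:

```python
# Minimal sketch of the reprojection performed by warp(): geographic WGS84
# into the Lambert conformal conic used above. File names are hypothetical.
from osgeo import gdal

gdal.UseExceptions()
lcc = ('+proj=lcc +datum=WGS84 +lat_1=25 +lat_2=60 +lat_0=42.5 '
       '+lon_0=-100 +x_0=0 +y_0=0 +units=m +no_defs')
ds = gdal.Warp("warped.tif", "input.tif", format="GTiff",
               srcSRS="+proj=longlat +datum=WGS84 +no_defs", dstSRS=lcc)
ds = None  # close and flush to disk
```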
@@ -1,56 +1,56 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import argparse
import shutil

from osgeo import gdal
from notebookutils import mssparkutils

# collect args
parser = argparse.ArgumentParser(description='Arguments required to run vector feature')
parser.add_argument('--storage_account_name', type=str, required=True, help='Name of the storage account where the input data resides')
parser.add_argument('--storage_account_key', default=None, required=True, help='Key to the storage account where the input data resides')
parser.add_argument('--storage_account_src_container', type=str, required=True, help='Container under which the input data resides')
parser.add_argument('--storage_account_dst_container', default=None, required=True, help='Container where the output data will be saved')
# assumed arguments: extract_features_from_gpkg expects source and destination
# folder names that the original parser never declared
parser.add_argument('--src_folder_name', default=None, required=True, help='Folder where the input data is stored')
parser.add_argument('--dst_folder_name', default=None, required=True, help='Folder where the output data will be saved')
parser.add_argument('--file_name', type=str, required=True, help='Input file name to be processed (with extension)')

# parse args
args = parser.parse_args()


def extract_features_from_gpkg(storage_account_name: str, storage_account_key: str, storage_account_src_container: str, src_storage_folder: str, storage_account_dst_container: str, dst_storage_folder: str, file_name: str):

    gdal.UseExceptions()

    # unmount any storage container previously mounted to this path
    mssparkutils.fs.unmount("/aoi")

    print(f"abfss://{storage_account_dst_container}@{storage_account_name}.dfs.core.windows.net")

    # mount the storage container containing data required for this transform
    mssparkutils.fs.mount(
        f"abfss://{storage_account_dst_container}@{storage_account_name}.dfs.core.windows.net",
        "/aoi",
        {"accountKey": storage_account_key}
    )

    # set storage account information for the source TIF data
    gdal.SetConfigOption('AZURE_STORAGE_ACCOUNT', storage_account_name)
    gdal.SetConfigOption('AZURE_STORAGE_ACCESS_KEY', storage_account_key)

    # specify options and run Warp
    kwargs = {'format': 'GTiff', 'dstSRS': '+proj=lcc +datum=WGS84 +lat_1=25 +lat_2=60 +lat_0=42.5 +lon_0=-100 +x_0=0 +y_0=0 +units=m +no_defs', 'srcSRS': '+proj=longlat +datum=WGS84 +no_defs'}
    ds = gdal.Warp('output_warp.tif', f'/vsiadls/{storage_account_src_container}/{src_storage_folder}/{file_name}', **kwargs)

    # close file and flush to disk
    ds = None

    jobId = mssparkutils.env.getJobId()

    # copy the output file from the local host to the storage account
    # container that is mounted to this host
    shutil.copy("output_warp.tif", f"/synfs/{jobId}/aoi/output_warp.tif")


if __name__ == "__main__":

    # unpack the parsed arguments to match the function signature
    extract_features_from_gpkg(
        args.storage_account_name,
        args.storage_account_key,
        args.storage_account_src_container,
        args.src_folder_name,
        args.storage_account_dst_container,
        args.dst_folder_name,
        args.file_name)
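Unlike the other transforms, this script reads the source TIF straight out of ADLS through GDAL's `/vsiadls` virtual file system, authenticated by the two `gdal.SetConfigOption` calls. A minimal sketch with hypothetical account, container and blob names:

```python
# Sketch of reading a blob directly through GDAL's /vsiadls virtual file
# system, as the script above does. The account name, key, container and
# blob path are hypothetical placeholders.
from osgeo import gdal

gdal.UseExceptions()
gdal.SetConfigOption('AZURE_STORAGE_ACCOUNT', 'mystorageaccount')
gdal.SetConfigOption('AZURE_STORAGE_ACCESS_KEY', '<account-key>')
ds = gdal.Open('/vsiadls/mycontainer/raw/sample.tif')
print(ds.RasterXSize, ds.RasterYSize)  # raster dimensions read remotely
ds = None
```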
@@ -1,33 +1,33 @@
{
    "name": "__synapse_pool_name__",
    "location": "__location__",
    "properties": {
        "autoPause": {
            "enabled": true,
            "delayInMinutes": 15
        },
        "autoScale": {
            "enabled": true,
            "maxNodeCount": 4,
            "minNodeCount": 3
        },
        "nodeCount": 0,
        "nodeSize": "Medium",
        "nodeSizeFamily": "MemoryOptimized",
        "sparkVersion": "3.1",
        "libraryRequirements": {
            "content": "name: aoidemo\r\nchannels:\r\n - conda-forge\r\n - defaults\r\ndependencies:\r\n - gdal=3.3.0\r\n - pip>=20.1.1\r\n - azure-storage-file-datalake\r\n - libgdal\r\n - shapely\r\n - pyproj\r\n - pip:\r\n - rasterio\r\n - geopandas",
            "filename": "environment.yml",
            "time": "2022-02-22T00:52:46.8995063Z"
        },
        "isComputeIsolationEnabled": false,
        "sparkConfigProperties": {
            "configurationType": "File",
            "filename": "config.txt",
            "content": "spark.storage.synapse.linkedServiceName \"AOI Geospatial v2\"\rfs.azure.account.oauth.provider.type com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider",
            "time": "2022-02-22T00:52:46.8995063Z"
        },
        "sessionLevelPackagesEnabled": true,
        "annotations": []
    }
}
@@ -1,12 +1,12 @@
source(allowSchemaDrift: true,
     validateSchema: false,
     ignoreNoFilesFound: false,
     documentForm: 'arrayOfDocuments') ~> source
source sink(allowSchemaDrift: true,
     validateSchema: false,
     skipDuplicateMapInputs: true,
     skipDuplicateMapOutputs: true,
     store: 'cache',
     format: 'inline',
     output: true,
     saveOrder: 1) ~> sink
@@ -1,37 +1,37 @@
{
    "name": "ReadSpecDocumentFlow",
    "properties": {
        "type": "MappingDataFlow",
        "typeProperties": {
            "sources": [
                {
                    "dataset": {
                        "referenceName": "spec_doc_specification",
                        "type": "DatasetReference"
                    },
                    "name": "source"
                }
            ],
            "sinks": [
                {
                    "name": "sink"
                }
            ],
            "transformations": [],
            "scriptLines": [
                "source(allowSchemaDrift: true,",
                "     validateSchema: false,",
                "     ignoreNoFilesFound: false,",
                "     documentForm: 'arrayOfDocuments') ~> source",
                "source sink(allowSchemaDrift: true,",
                "     validateSchema: false,",
                "     skipDuplicateMapInputs: true,",
                "     skipDuplicateMapOutputs: true,",
                "     store: 'cache',",
                "     format: 'inline',",
                "     output: true,",
                "     saveOrder: 1) ~> sink"
            ]
        }
    }
}
@@ -1,33 +1,33 @@
{
    "name": "rawtifs",
    "properties": {
        "linkedServiceName": {
            "referenceName": "AOI Data Storage Account v2",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "containername": {
                "type": "string"
            },
            "folderpath": {
                "type": "string"
            }
        },
        "annotations": [],
        "type": "Binary",
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "folderPath": {
                    "value": "@dataset().folderpath",
                    "type": "Expression"
                },
                "container": {
                    "value": "@dataset().containername",
                    "type": "Expression"
                }
            }
        }
    },
    "type": "Microsoft.Synapse/workspaces/datasets"
}
@@ -1,41 +1,41 @@
{
    "name": "spec_doc_specification",
    "properties": {
        "linkedServiceName": {
            "referenceName": "AOI Data Storage Account v2",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "filename": {
                "type": "string"
            },
            "folderpath": {
                "type": "string"
            },
            "containername": {
                "type": "string"
            }
        },
        "annotations": [],
        "type": "Json",
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "fileName": {
                    "value": "@dataset().filename",
                    "type": "Expression"
                },
                "folderPath": {
                    "value": "@dataset().folderpath",
                    "type": "Expression"
                },
                "container": {
                    "value": "@dataset().containername",
                    "type": "Expression"
                }
            }
        },
        "schema": {}
    },
    "type": "Microsoft.Synapse/workspaces/datasets"
}
@@ -1,23 +1,23 @@
{
    "name": "AOI Batch Storage",
    "properties": {
        "annotations": [],
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=__batch_storage_account__;EndpointSuffix=core.windows.net;",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AOI Pipeline Key Vault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "PackageStorageAccountKey"
            }
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    },
    "type": "Microsoft.Synapse/workspaces/linkedservices"
}
@@ -1,29 +1,29 @@
{
    "name": "AOI Batch",
    "properties": {
        "annotations": [],
        "type": "AzureBatch",
        "typeProperties": {
            "batchUri": "https://__batch_account__.__location__.batch.azure.com",
            "poolName": "data-cpu-pool",
            "accountName": "__batch_account__",
            "linkedServiceName": {
                "referenceName": "AOI Batch Storage",
                "type": "LinkedServiceReference"
            },
            "accessKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AOI Pipeline Key Vault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "GeospatialBatchAccountKey"
            }
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    },
    "type": "Microsoft.Synapse/workspaces/linkedservices"
}
@@ -1,23 +1,23 @@
{
    "name": "AOI Data Storage Account v2",
    "properties": {
        "annotations": [],
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=__raw_data_storage_account__;EndpointSuffix=core.windows.net;",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AOI Pipeline Key Vault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "GeospatialStorageAccountKey"
            }
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    },
    "type": "Microsoft.Synapse/workspaces/linkedservices"
}
@@ -1,24 +1,24 @@
{
    "name": "AOI Geospatial v2 FS",
    "properties": {
        "annotations": [],
        "type": "AzureFileStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=__raw_data_storage_account__;EndpointSuffix=core.windows.net;",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "AOI Pipeline Key Vault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "GeospatialStorageAccountKey"
            },
            "fileShare": "volume-a"
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    },
    "type": "Microsoft.Synapse/workspaces/linkedservices"
}
@@ -1,15 +1,15 @@
{
    "name": "AOI Geospatial v2",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "annotations": [],
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://__raw_data_storage_account__.dfs.core.windows.net"
        },
        "connectVia": {
            "referenceName": "AutoResolveIntegrationRuntime",
            "type": "IntegrationRuntimeReference"
        }
    }
}
@@ -1,11 +1,11 @@
{
    "name": "AOI Pipeline Key Vault",
    "type": "Microsoft.Synapse/workspaces/linkedservices",
    "properties": {
        "annotations": [],
        "type": "AzureKeyVault",
        "typeProperties": {
            "baseUrl": "https://__linked_key_vault__.vault.azure.net/"
        }
    }
}
@@ -1,377 +1,377 @@
{
    "name": "Custom Vision Model Transforms v2",
    "properties": {
        "activities": [
            {
                "name": "GetFilesToMosaic",
                "type": "GetMetadata",
                "dependsOn": [],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "dataset": {
                        "referenceName": "rawtifs",
                        "type": "DatasetReference",
                        "parameters": {
                            "containername": {
                                "value": "@pipeline().parameters.Prefix",
                                "type": "Expression"
                            },
                            "folderpath": "raw"
                        }
                    },
                    "fieldList": [
                        "childItems"
                    ],
                    "storeSettings": {
                        "type": "AzureBlobStorageReadSettings",
                        "recursive": true,
                        "enablePartitionDiscovery": false
                    },
                    "formatSettings": {
                        "type": "BinaryReadSettings"
                    }
                }
            },
            {
                "name": "Crop",
                "type": "SparkJob",
                "dependsOn": [
                    {
                        "activity": "More than one GeoTiff",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "sparkJob": {
                        "referenceName": "Crop",
                        "type": "SparkJobDefinitionReference"
                    },
                    "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_crop/src/crop.py",
                    "args": [
                        "--storage_account_name",
                        "@pipeline().parameters.StorageAccountName",
                        "--storage_account_key",
                        "@pipeline().parameters.StorageAccountKey",
                        "--storage_container",
                        "@pipeline().parameters.Prefix",
                        "--src_folder_name",
                        "@variables('CropSourceFolder')",
                        "--config_file_name",
                        "config-aoi.json"
                    ],
                    "targetBigDataPool": {
                        "referenceName": "__synapse_pool_name__",
                        "type": "BigDataPoolReference"
                    },
                    "executorSize": "Medium",
                    "conf": {
                        "spark.dynamicAllocation.minExecutors": 2,
                        "spark.dynamicAllocation.maxExecutors": 3
                    },
                    "driverSize": "Medium",
                    "numExecutors": 2
                }
            },
            {
                "name": "Convert",
                "type": "SparkJob",
                "dependsOn": [
                    {
                        "activity": "Crop",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "sparkJob": {
                        "referenceName": "Convert",
                        "type": "SparkJobDefinitionReference"
                    },
                    "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_convert/src/convert.py",
                    "args": [
                        "--storage_account_name",
                        "@pipeline().parameters.StorageAccountName",
                        "--storage_account_key",
                        "@pipeline().parameters.StorageAccountKey",
                        "--storage_container",
                        "@pipeline().parameters.Prefix",
                        "--src_folder_name",
                        "crop",
                        "--config_file_name",
                        "config-img-convert-png.json"
                    ],
                    "targetBigDataPool": {
                        "referenceName": "__synapse_pool_name__",
                        "type": "BigDataPoolReference"
                    },
                    "executorSize": "Medium",
                    "conf": {
                        "spark.dynamicAllocation.minExecutors": 2,
                        "spark.dynamicAllocation.maxExecutors": 3
                    },
                    "driverSize": "Medium",
                    "numExecutors": 2
                }
            },
            {
                "name": "Tiling",
                "type": "SparkJob",
                "dependsOn": [
                    {
                        "activity": "Convert",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "userProperties": [],
                "typeProperties": {
                    "sparkJob": {
                        "referenceName": "Tiling",
                        "type": "SparkJobDefinitionReference"
                    },
                    "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_tiling/src/tiling.py",
                    "args": [
                        "--storage_account_name",
                        "@pipeline().parameters.StorageAccountName",
                        "--storage_container",
                        "@pipeline().parameters.Prefix",
                        "--src_folder_name",
                        "convert",
                        "--file_name",
                        "output.png",
                        "--tile_size",
                        "512",
                        "--storage_account_key",
                        "@pipeline().parameters.StorageAccountKey"
                    ],
                    "targetBigDataPool": {
                        "referenceName": "__synapse_pool_name__",
                        "type": "BigDataPoolReference"
                    },
                    "executorSize": "Medium",
                    "conf": {
                        "spark.dynamicAllocation.minExecutors": 2,
                        "spark.dynamicAllocation.maxExecutors": 3
                    },
                    "driverSize": "Medium",
                    "numExecutors": 2
                }
            },
            {
                "name": "More than one GeoTiff",
                "type": "IfCondition",
                "dependsOn": [
                    {
                        "activity": "For Each File to Mosaic",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "userProperties": [],
                "typeProperties": {
                    "expression": {
                        "value": "@greater(length(activity('GetFilesToMosaic').output.childItems),1)",
                        "type": "Expression"
                    },
                    "ifFalseActivities": [
                        {
                            "name": "Set Crop Source Folder to raw",
                            "type": "SetVariable",
                            "dependsOn": [],
                            "userProperties": [],
                            "typeProperties": {
                                "variableName": "CropSourceFolder",
                                "value": "raw"
                            }
                        }
                    ],
                    "ifTrueActivities": [
                        {
                            "name": "Mosaic",
                            "type": "SparkJob",
                            "dependsOn": [],
                            "policy": {
                                "timeout": "7.00:00:00",
                                "retry": 0,
                                "retryIntervalInSeconds": 30,
                                "secureOutput": false,
                                "secureInput": false
                            },
                            "userProperties": [],
                            "typeProperties": {
                                "sparkJob": {
                                    "referenceName": "Mosaic",
                                    "type": "SparkJobDefinitionReference"
                                },
                                "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_mosaic/src/mosaic.py",
                                "args": [
                                    "--storage_account_name",
                                    "@pipeline().parameters.StorageAccountName",
                                    "--storage_account_key",
                                    "@pipeline().parameters.StorageAccountKey",
                                    "--storage_container",
                                    "@pipeline().parameters.Prefix",
                                    "--src_folder_name",
                                    "raw"
                                ],
                                "targetBigDataPool": {
                                    "referenceName": "__synapse_pool_name__",
                                    "type": "BigDataPoolReference"
                                },
                                "executorSize": "Medium",
                                "conf": {
                                    "spark.dynamicAllocation.minExecutors": 2,
                                    "spark.dynamicAllocation.maxExecutors": 3
|
||||
},
|
||||
"driverSize": "Medium",
|
||||
"numExecutors": 2
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Set Crop Source Folder to mosaic",
|
||||
"type": "SetVariable",
|
||||
"dependsOn": [
|
||||
{
|
||||
"activity": "Mosaic",
|
||||
"dependencyConditions": [
|
||||
"Succeeded"
|
||||
]
|
||||
}
|
||||
],
|
||||
"userProperties": [],
|
||||
"typeProperties": {
|
||||
"variableName": "CropSourceFolder",
|
||||
"value": "mosaic"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "For Each File to Mosaic",
|
||||
"type": "ForEach",
|
||||
"dependsOn": [
|
||||
{
|
||||
"activity": "GetFilesToMosaic",
|
||||
"dependencyConditions": [
|
||||
"Succeeded"
|
||||
]
|
||||
}
|
||||
],
|
||||
"userProperties": [],
|
||||
"typeProperties": {
|
||||
"items": {
|
||||
"value": "@activity('GetFilesToMosaic').output.childItems",
|
||||
"type": "Expression"
|
||||
},
|
||||
"isSequential": true,
|
||||
"activities": [
|
||||
{
|
||||
"name": "Set Mosaic File Names",
|
||||
"type": "SetVariable",
|
||||
"dependsOn": [
|
||||
{
|
||||
"activity": "Store Temp Mosaic File Names",
|
||||
"dependencyConditions": [
|
||||
"Succeeded"
|
||||
]
|
||||
}
|
||||
],
|
||||
"userProperties": [],
|
||||
"typeProperties": {
|
||||
"variableName": "MosaicFileNames",
|
||||
"value": {
|
||||
"value": "@concat(variables('TempMosaicFileNames'), if(equals(variables('TempMosaicFileNames'), ''),'',','), item().name)",
|
||||
"type": "Expression"
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"name": "Store Temp Mosaic File Names",
|
||||
"type": "SetVariable",
|
||||
"dependsOn": [],
|
||||
"userProperties": [],
|
||||
"typeProperties": {
|
||||
"variableName": "TempMosaicFileNames",
|
||||
"value": {
|
||||
"value": "@variables('MosaicFileNames')",
|
||||
"type": "Expression"
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
],
|
||||
"parameters": {
|
||||
"Prefix": {
|
||||
"type": "string",
|
||||
"defaultValue": ""
|
||||
},
|
||||
"StorageAccountName": {
|
||||
"type": "string",
|
||||
"defaultValue": ""
|
||||
},
|
||||
"StorageAccountKey": {
|
||||
"type": "string",
|
||||
"defaultValue": ""
|
||||
}
|
||||
},
|
||||
"variables": {
|
||||
"FunctionCompleted": {
|
||||
"type": "String",
|
||||
"defaultValue": "None"
|
||||
},
|
||||
"FunctionError": {
|
||||
"type": "String"
|
||||
},
|
||||
"MosaicFileNames": {
|
||||
"type": "String"
|
||||
},
|
||||
"TempMosaicFileNames": {
|
||||
"type": "String"
|
||||
},
|
||||
"CropSourceFolder": {
|
||||
"type": "String"
|
||||
}
|
||||
},
|
||||
"annotations": [],
|
||||
"lastPublishTime": "2022-03-06T06:06:58Z"
|
||||
},
|
||||
"type": "Microsoft.Synapse/workspaces/pipelines"
|
||||
}
Diff for this file is not shown because of its large size.
@@ -1,114 +1,114 @@
{
    "name": "E2E Custom Vision Model Flow",
    "properties": {
        "activities": [
            {
                "name": "Transforms",
                "type": "ExecutePipeline",
                "dependsOn": [],
                "userProperties": [],
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "Custom Vision Model Transforms v2",
                        "type": "PipelineReference"
                    },
                    "waitOnCompletion": true,
                    "parameters": {
                        "Prefix": {
                            "value": "@pipeline().parameters.Prefix",
                            "type": "Expression"
                        },
                        "StorageAccountName": {
                            "value": "@pipeline().parameters.StorageAccountName",
                            "type": "Expression"
                        },
                        "StorageAccountKey": {
                            "value": "@pipeline().parameters.StorageAccountKey",
                            "type": "Expression"
                        }
                    }
                }
            },
            {
                "name": "Custom Vision Object Detection",
                "type": "ExecutePipeline",
                "dependsOn": [
                    {
                        "activity": "Transforms",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "userProperties": [],
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "Custom Vision Object Detection v2",
                        "type": "PipelineReference"
                    },
                    "waitOnCompletion": true,
                    "parameters": {
                        "Prefix": {
                            "value": "@pipeline().parameters.Prefix",
                            "type": "Expression"
                        },
                        "BatchName": {
                            "value": "@pipeline().parameters.BatchAccountName",
                            "type": "Expression"
                        },
                        "JobName": {
                            "value": "@pipeline().parameters.BatchJobName",
                            "type": "Expression"
                        },
                        "BatchLocation": {
                            "value": "@pipeline().parameters.BatchLocation",
                            "type": "Expression"
                        },
                        "StorageAccountName": {
                            "value": "@pipeline().parameters.StorageAccountName",
                            "type": "Expression"
                        },
                        "StorageAccountKey": {
                            "value": "@pipeline().parameters.StorageAccountKey",
                            "type": "Expression"
                        }
                    }
                }
            }
        ],
        "parameters": {
            "Prefix": {
                "type": "string",
                "defaultValue": ""
            },
            "StorageAccountName": {
                "type": "string",
                "defaultValue": ""
            },
            "StorageAccountKey": {
                "type": "string",
                "defaultValue": ""
            },
            "BatchAccountName": {
                "type": "string",
                "defaultValue": ""
            },
            "BatchJobName": {
                "type": "string",
                "defaultValue": ""
            },
            "BatchLocation": {
                "type": "string",
                "defaultValue": ""
            }
        },
        "variables": {
            "Storage_Account_Conn_String": {
                "type": "String"
            }
        },
        "annotations": [],
        "lastPublishTime": "2022-03-06T05:42:39Z"
    },
    "type": "Microsoft.Synapse/workspaces/pipelines"
}
@@ -1 +1 @@
{"publishBranch":"workspace_test"}
@@ -1,29 +1,29 @@
{
    "name": "Convert",
    "properties": {
        "targetBigDataPool": {
            "referenceName": "__synapse_pool_name__",
            "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "3.1",
        "language": "python",
        "jobProperties": {
            "name": "Convert",
            "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_convert/src/convert.py",
            "conf": {
                "spark.dynamicAllocation.enabled": "false",
                "spark.dynamicAllocation.minExecutors": "1",
                "spark.dynamicAllocation.maxExecutors": "2",
                "spark.autotune.trackingId": "72aef2fd-aaae-40ed-8a09-7b2e87353ace"
            },
            "args": [],
            "jars": [],
            "files": [],
            "driverMemory": "56g",
            "driverCores": 8,
            "executorMemory": "56g",
            "executorCores": 8,
            "numExecutors": 2
        }
    }
}
@@ -1,29 +1,30 @@
{
    "name": "Copy noop",
    "properties": {
        "targetBigDataPool": {
            "referenceName": "__synapse_pool_name__",
            "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "3.1",
        "language": "python",
        "jobProperties": {
            "name": "Copy noop",
            "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/copy_noop/src/main.py",
            "conf": {
                "spark.dynamicAllocation.enabled": "false",
                "spark.dynamicAllocation.minExecutors": "1",
                "spark.dynamicAllocation.maxExecutors": "2",
                "spark.autotune.trackingId": "01767b3a-cede-4abf-8b79-52cb6d0ff80d"
            },
            "args": [],
            "jars": [],
            "files": [],
            "driverMemory": "56g",
            "driverCores": 8,
            "executorMemory": "56g",
            "executorCores": 8,
            "numExecutors": 2
        }
    }
}
@@ -1,31 +1,31 @@
{
    "name": "Crop",
    "properties": {
        "targetBigDataPool": {
            "referenceName": "__synapse_pool_name__",
            "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "3.1",
        "language": "python",
        "jobProperties": {
            "name": "Crop",
            "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_crop/src/crop.py",
            "conf": {
                "spark.dynamicAllocation.enabled": "false",
                "spark.dynamicAllocation.minExecutors": "1",
                "spark.dynamicAllocation.maxExecutors": "2",
                "spark.autotune.trackingId": "f4cbbafe-9d98-476f-9bd4-e5bfc7bad06c"
            },
            "args": [],
            "jars": [],
            "files": [
                "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_crop/src/utils.py"
            ],
            "driverMemory": "56g",
            "driverCores": 8,
            "executorMemory": "56g",
            "executorCores": 8,
            "numExecutors": 2
        }
    }
}
@@ -1,29 +1,29 @@
{
    "name": "Mosaic",
    "properties": {
        "targetBigDataPool": {
            "referenceName": "__synapse_pool_name__",
            "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "3.1",
        "language": "python",
        "jobProperties": {
            "name": "Mosaic",
            "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_mosaic/src/mosaic.py",
            "conf": {
                "spark.dynamicAllocation.enabled": "false",
                "spark.dynamicAllocation.minExecutors": "3",
                "spark.dynamicAllocation.maxExecutors": "3",
                "spark.autotune.trackingId": "811de002-982f-4b4b-9732-147d3565c502"
            },
            "args": [],
            "jars": [],
            "files": [],
            "driverMemory": "56g",
            "driverCores": 8,
            "executorMemory": "56g",
            "executorCores": 8,
            "numExecutors": 3
        }
    }
}
@@ -1,31 +1,31 @@
{
    "name": "Pool Geolocation",
    "properties": {
        "targetBigDataPool": {
            "referenceName": "__synapse_pool_name__",
            "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "3.1",
        "language": "python",
        "jobProperties": {
            "name": "Pool Geolocation",
            "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/pool_geolocation/src/main.py",
            "conf": {
                "spark.dynamicAllocation.enabled": "false",
                "spark.dynamicAllocation.minExecutors": "1",
                "spark.dynamicAllocation.maxExecutors": "2",
                "spark.autotune.trackingId": "0d715b42-8d99-4e74-8a24-860c7275f387"
            },
            "args": [],
            "jars": [],
            "files": [
                "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/pool_geolocation/src/utils.py"
            ],
            "driverMemory": "56g",
            "driverCores": 8,
            "executorMemory": "56g",
            "executorCores": 8,
            "numExecutors": 2
        }
    }
}
@@ -1,29 +1,29 @@
{
    "name": "Tiling",
    "properties": {
        "targetBigDataPool": {
            "referenceName": "__synapse_pool_name__",
            "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "3.1",
        "language": "python",
        "jobProperties": {
            "name": "Tiling",
            "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_tiling/src/tiling.py",
            "conf": {
                "spark.dynamicAllocation.enabled": "false",
                "spark.dynamicAllocation.minExecutors": "1",
                "spark.dynamicAllocation.maxExecutors": "2",
                "spark.autotune.trackingId": "65be76e5-ef21-47ec-be7a-38039b2abfd4"
            },
            "args": [],
            "jars": [],
            "files": [],
            "driverMemory": "56g",
            "driverCores": 8,
            "executorMemory": "56g",
            "executorCores": 8,
            "numExecutors": 2
        }
    }
}
@@ -1,29 +1,29 @@
{
    "name": "Warp",
    "properties": {
        "targetBigDataPool": {
            "referenceName": "__synapse_pool_name__",
            "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "3.1",
        "language": "python",
        "jobProperties": {
            "name": "Warp",
            "file": "abfss://spark-jobs@__synapse_storage_account__.dfs.core.windows.net/raster_warp/src/warp.py",
            "conf": {
                "spark.dynamicAllocation.enabled": "false",
                "spark.dynamicAllocation.minExecutors": "3",
                "spark.dynamicAllocation.maxExecutors": "3",
                "spark.autotune.trackingId": "335dd1ad-fc75-4734-ad92-03a79e9ad399"
            },
            "args": [],
            "jars": [],
            "files": [],
            "driverMemory": "56g",
            "driverCores": 8,
            "executorMemory": "56g",
            "executorCores": 8,
            "numExecutors": 3
        }
    }
}