Mirror of https://github.com/microsoft/AzureTRE.git
Tessferrandez/move to mkdocs (#885)
* re-organize docs
* fix links
* configure navigation
* fix formatting in md files
* fix formatting in docs
* add workflow and link to github pages
This commit is contained in:
Parent
b342efcceb
Commit
f63292252b
@@ -0,0 +1,17 @@
name: Publish docs via Github Pages

on:
  push:
    branches: [develop, main]

jobs:
  build:
    name: Deploy docs
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: 3.x
      - run: pip install -r docs/requirements.txt
      - run: mkdocs gh-deploy --force
@@ -0,0 +1,28 @@
source "https://rubygems.org"

gem "github-pages", group: :jekyll_plugins

# Windows and JRuby do not include zoneinfo files, so bundle the tzinfo-data gem
# and associated library.
platforms :mingw, :x64_mingw, :mswin, :jruby do
  gem "tzinfo", "~> 1.2"
  gem "tzinfo-data"
end

# Performance-booster for watching directories on Windows
gem "wdm", "~> 0.1.1", :platforms => [:mingw, :x64_mingw, :mswin]

# gem "minima", "~> 2.5"

group :jekyll_plugins do
  gem "jekyll-paginate"
  gem 'jekyll-sitemap'
  gem 'jekyll-gist'
  gem "jekyll-feed"
  gem "jemoji"
  gem "jekyll-include-cache"
  gem "jekyll-algolia"
end

# vulnerability found
gem "kramdown", ">= 2.3.1"
@@ -1,5 +1,7 @@
# Azure Trusted Research Environment

[Full documentation](https://microsoft.github.io/AzureTRE/)

## Project Status

The aim is to bring together learnings from past customer engagements where TREs have been built into a single reference solution. This is a solution accelerator aiming to be a great starting point for a customized TRE solution. You're encouraged to download and customize the solution to meet your requirements.

@@ -18,10 +20,6 @@ Workspaces can be configured with a variety of tools to enable tasks such as the

A successful Trusted Research Environment enables users to be as productive, if not more productive, than they would be working in environments without strict information governance controls.

## Documentation

See [Index](./docs/index.md) to get started.

## Support

For details of support expectations, please review our [Support Policy](./SUPPORT.md).
Binary file not shown.
After Width: | Height: | Size: 3.3 KiB
@@ -1,15 +1,16 @@
# Overview
# Azure TRE Architecture

The Azure Trusted Research Environment (TRE) consists of multiple components, all encapsulated in networks with restricted ingress and egress traffic. There is one network for the management components and one network per Workspace. All traffic has to be explicitly allowed by the Application Gateway or the Firewall.

![Architecture overview](./assets/archtecture-overview.png)
![Architecture overview](../assets/archtecture-overview.png)

The Azure TRE management plane consists of two groups of components:

- API & Composition Service
- Shared Services

> Shared Services is still work in progress. Please see [#23](https://github.com/microsoft/AzureTRE/issues/23), [#22](https://github.com/microsoft/AzureTRE/issues/21), & [#21](https://github.com/microsoft/AzureTRE/issues/21)
!!! todo
    Shared Services is still a work in progress. Please see [#23](https://github.com/microsoft/AzureTRE/issues/23), [#22](https://github.com/microsoft/AzureTRE/issues/22), & [#21](https://github.com/microsoft/AzureTRE/issues/21)

The TRE API is a service that users can interact with to request changes to workspaces, e.g., to create, update, or delete workspaces and workspace services inside each workspace. The Composition Service does the actual work of mutating the state of each Workspace, including the Workspace Services.

@@ -22,22 +23,18 @@ Shared Services are services available to all Workspaces. **Source Mirror** can

The Composition Service is responsible for managing and mutating Workspaces and Workspace Services.

A Workspace is an instance of a Workspace Template. A Workspace Template is implemented as a [Porter](https://porter.sh/) bundle - read more about [Authoring workspace templates](./authoring-workspace-templates.md).
A Workspace is an instance of a Workspace Template. A Workspace Template is implemented as a [Porter](https://porter.sh/) bundle - read more about [Authoring workspace templates](../tre-workspace-authors/authoring-workspace-templates.md).

A Porter bundle is a fully encapsulated, versioned bundle with everything needed (binaries, scripts, IaC templates, etc.) to provision an instance of a Workspace Template.

The [TRE Administrator](./user-roles.md#tre-administrator) can register a Porter bundle to use the Composition Service to provision instances of the Workspace Templates.
The [TRE Administrator](user-roles.md#tre-administrator) can register a Porter bundle to use the Composition Service to provision instances of the Workspace Templates.

This requires:

1. The Porter bundle to be pushed to the Azure Container Registry (ACR).
1. Registering the Workspace through the API.

Details on how to [register a Workspace Template](registering-workspace-templates.md).

### Provision a Workspace

![Composition Service](./assets/composition-service.png)
Details on how to [register a Workspace Template](../tre-workspace-authors/registering-workspace-templates.md).

The Composition Service consists of multiple components.

@@ -48,6 +45,10 @@ The Composition Service consists of multiple components.
| Service Bus | [Azure Service Bus](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview) responsible for reliable delivery of messages between components. |
| Resource Processor | Responsible for starting the process of mutating a Workspace via a Workspace Template. |

## Provisioning a Workspace

![Composition Service](../assets/composition-service.png)

The flow to provision a Workspace is as follows (the flow is the same for all kinds of mutations to a Workspace):

1. An HTTP request to the TRE API to create a new Workspace. The request contains information like the name of the Workspace, the Workspace Template to use, and the parameters required for the Workspace Template (Workspace Templates can expose the parameters via a JSON Schema).

@@ -77,23 +78,20 @@ The flow to provision a Workspace is as follows (the flow is the same for all ki
porter install --reference msfttreacr.azurecr.io/bundles/BaseWorkspaceTemplate:1.0 --params param1=value1 --cred azure.json
```

Deployments are carried out against the Azure Subscription using a User Assigned Managed Identity. The `azure.json` tells Porter where the credential information can be found and for the Resource Processor they are set as environment variables (Base Workspace Template [azure.json](../templates/workspaces/base/azure.json)).
Deployments are carried out against the Azure Subscription using a User Assigned Managed Identity. The `azure.json` tells Porter where the credential information can be found, and for the Resource Processor they are set as environment variables.

Porter bundle actions are required to be idempotent, so if a deployment fails, the Resource Processor can retry.

> The Resource Processor is a Docker container running on a Linux VM scale set.

1. The Porter Docker bundle is pulled from the Azure Container Registry (ACR) and executed.
1. The Porter bundle executes against Azure Resource Manager to provision Azure resources. Any kind of infrastructure as code framework like ARM, Terraform, or Pulumi can be used or scripted via PowerShell or Azure CLI.
1. State and output management is handled via Azure Storage Containers, which keep persistent state between executions of a bundle for the same Workspace.

> Currently, the bundle keeps state between executions in a Storage Container (TF state) passed in as a parameter to the bundle. An enhancement issue [#536](https://github.com/microsoft/AzureTRE/issues/536) exists to configure Porter state management.

1. For the time being, the Porter bundle updates Firewall rules directly, setting egress rules. An enhancement to implement a Shared Firewall service is planned ([#23](https://github.com/microsoft/AzureTRE/issues/23)).
1. The Resource Processor sends events to the `deploymentstatus` queue on state changes and informs if the deployment succeeded or failed.
1. The status of a Porter bundle execution is received.
1. The status of a Porter bundle execution is updated in the Configuration Store.

## Network architecture
!!! info
    The Resource Processor is a Docker container running on a Linux VM scale set.

See [networking](./networking.md).
!!! todo
    Currently, the bundle keeps state between executions in a Storage Container (TF state) passed in as a parameter to the bundle. An enhancement issue [#536](https://github.com/microsoft/AzureTRE/issues/536) exists to configure Porter state management.
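
As a hedged illustration of step 1 in the provisioning flow above, the sketch below shows the kind of HTTP request a client could send to the TRE API to create a Workspace. The endpoint path, payload field names, and token handling are assumptions for illustration only, not the API's confirmed contract; the template name and parameters echo the examples used elsewhere in these docs.

```python
# Hypothetical sketch of the provisioning request described in step 1 above.
# The route ("/api/workspaces") and payload field names are assumptions.
import requests

api_base_url = "https://mytre.westeurope.cloudapp.azure.com"  # assumed TRE FQDN
access_token = "<token acquired for the TRE API app registration>"

payload = {
    # Which Workspace Template to instantiate, plus the parameters it exposes
    # (a template can describe its parameters via a JSON Schema).
    "workspaceType": "tre-workspace-base",
    "properties": {
        "display_name": "Research project X",
        "address_space": "192.168.3.0/24",
    },
}

response = requests.post(
    f"{api_base_url}/api/workspaces",
    json=payload,
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()
print(response.json())  # the API tracks the request; the Composition Service does the work
```
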
@@ -2,27 +2,6 @@

The TRE API is a service that users can interact with to request changes to workspaces e.g., to create, update, delete workspaces and workspace services inside each workspace.

*Table of contents:*

* [Prerequisites](#prerequisites)
* [Tools](#tools)
* [Azure resources](#azure-resources)
* [Creating resources (Bash)](#creating-resources-bash)
* [Configuration](#configuration)
* [Auth](#auth)
* [State store](#state-store)
* [Service Bus](#service-bus)
* [Logging and monitoring](#logging-and-monitoring)
* [Service principal for API process identity](#service-principal-for-api-process-identity)
* [Running API](#running-api)
* [Develop and run locally](#develop-and-run-locally)
* [Develop and run in dev container](#develop-and-run-in-dev-container)
* [Deploy with Docker](#deploy-with-docker)
* [Unit tests](#unit-tests)
* [Implementation](#implementation)
* [Auth in code](#auth-in-code)
* [Workspace requests](#workspace-requests)

## Prerequisites

### Tools

@@ -36,7 +15,7 @@ The TRE API is a service that users can interact with to request changes to work
* You can use the [Cosmos DB Emulator](https://docs.microsoft.com/azure/cosmos-db/local-emulator?tabs=cli%2Cssl-netstd21) for testing locally
* [Azure Service Bus](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview)
* Service principal for the API to access Azure services such as Azure Service Bus
* AAD applications (for the API and Swagger UI) - see [Authentication & authorization](../docs/auth.md) for more information
* AAD applications (for the API and Swagger UI) - see [Authentication & authorization](../../tre-admins/deploying-the-tre/auth.md) for more information

#### Creating resources (Bash)

@@ -93,7 +72,8 @@ az role assignment create \
--scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ServiceBus/namespaces/$SERVICE_BUS_NAMESPACE
```

> Keep in mind that Azure role assignments may take up to five minutes to propagate.
!!! caution
    Keep in mind that Azure role assignments may take up to five minutes to propagate.

## Configuration

@@ -107,14 +87,14 @@ az role assignment create \

### Auth

The TRE API depends on [TRE API](#tre-api) and [TRE Swagger UI](#tre-swagger-ui) app registrations. The API requires the environment variables listed in the table below to be present. See [Authentication and authorization](../docs/auth.md) for more information.
The TRE API depends on [TRE API](../../tre-admins/deploying-the-tre/auth.md#tre-api) and [TRE Swagger UI](../../tre-admins/deploying-the-tre/auth.md#tre-swagger-ui) app registrations. The API requires the environment variables listed in the table below to be present. See [Authentication and authorization](../../tre-admins/deploying-the-tre/auth.md) for more information.

| Environment variable name | Description |
| ------------------------- | ----------- |
| `AAD_TENANT_ID` | The tenant ID of the Azure AD. |
| `API_CLIENT_ID` | The application (client) ID of the [TRE API](../docs/auth.md#tre-api) service principal. |
| `API_CLIENT_SECRET` | The application password (client secret) of the [TRE API](../docs/auth.md#tre-api) service principal. |
| `SWAGGER_UI_CLIENT_ID` | The application (client) ID of the [TRE Swagger UI](../docs/auth.md#tre-swagger-ui) service principal. |
| `API_CLIENT_ID` | The application (client) ID of the [TRE API](../../tre-admins/deploying-the-tre/auth.md#tre-api) service principal. |
| `API_CLIENT_SECRET` | The application password (client secret) of the [TRE API](../../tre-admins/deploying-the-tre/auth.md#tre-api) service principal. |
| `SWAGGER_UI_CLIENT_ID` | The application (client) ID of the [TRE Swagger UI](../../tre-admins/deploying-the-tre/auth.md#tre-swagger-ui) service principal. |

See also: [Auth in code](#auth-in-code)

@@ -125,8 +105,8 @@ See also: [Auth in code](#auth-in-code)
| `STATE_STORE_ENDPOINT` | The Cosmos DB endpoint. Use `localhost` with an emulator. Example value: `https://localhost:8081` |
| `STATE_STORE_KEY` | The Cosmos DB key. Use only with localhost emulator. |
| `COSMOSDB_ACCOUNT_NAME` | The Cosmos DB account name. |
| `SUBSCRIPTION_ID` | The Azure Subscription ID where Cosmos DB is lcoated. |
| `RESOURCE_GROUP_NAME` | The Azure Resource Group name where Cosmos DB is lcoated. |
| `SUBSCRIPTION_ID` | The Azure Subscription ID where Cosmos DB is located. |
| `RESOURCE_GROUP_NAME` | The Azure Resource Group name where Cosmos DB is located. |

### Service Bus

@@ -134,14 +114,14 @@ See also: [Auth in code](#auth-in-code)
| ------------------------- | ----------- |
| `SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE` | Example value: `<your namespace>.servicebus.windows.net` |
| `SERVICE_BUS_RESOURCE_REQUEST_QUEUE` | The queue for resource request messages sent by the API. Example value: `workspacequeue` |
| `SERVICE_BUS_DEPLOYMENT_STATUS_UPDATE_QUEUE` | The queue for deployment status update messages sent by [Resource Processor](../resource_processor/vmss_porter/readme.md) and received by the API. Example value: `deploymentstatus` |
| `SERVICE_BUS_DEPLOYMENT_STATUS_UPDATE_QUEUE` | The queue for deployment status update messages sent by [Resource Processor](resource-processor.md) and received by the API. Example value: `deploymentstatus` |

### Logging and monitoring

| Environment variable name | Description |
| ------------------------- | ----------- |
| `APPLICATIONINSIGHTS_CONNECTION_STRING` | Application Insights connection string - can be left blank when debugging locally. |
| `APPINSIGHTS_INSTRUMENTATIONKEY` | pplication Insights instrumentation key - can be left blank when debugging locally. |
| `APPINSIGHTS_INSTRUMENTATIONKEY` | Application Insights instrumentation key - can be left blank when debugging locally. |
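
For illustration only, the sketch below shows how a consumer could read the deployment status queue using the environment variables documented in the tables above. It is not the actual api_app implementation; it assumes the `azure-servicebus` and `azure-identity` packages and an identity holding the Azure Service Bus Data Receiver role.

```python
# Illustrative sketch only (not the actual api_app code): receive deployment
# status updates using the environment variables described in the tables above.
# Assumes `pip install azure-servicebus azure-identity` and a signed-in identity
# with the "Azure Service Bus Data Receiver" role on the namespace.
import os

from azure.identity import DefaultAzureCredential
from azure.servicebus import ServiceBusClient

namespace = os.environ["SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE"]
status_queue = os.environ["SERVICE_BUS_DEPLOYMENT_STATUS_UPDATE_QUEUE"]

credential = DefaultAzureCredential()
with ServiceBusClient(namespace, credential) as client:
    with client.get_queue_receiver(queue_name=status_queue) as receiver:
        # Pull a small batch of status messages and settle them.
        for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print("Deployment status update:", str(message))
            receiver.complete_message(message)
```
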
### Service principal for API process identity

@@ -252,7 +232,7 @@ AccessService (access_service.py) <─── AADAccessService (aad_access_servic
fastapi.security.OAuth2AuthorizationCodeBearer <─── AzureADAuthorization (aad_authentication.py)
```

All the sensitive routes (API calls that can query sensitive data or modify resources) in the TRE API depend on having a "current user" authenticated. E.g., in [`/api_app/api/routes/workspaces.py`](api/routes/workspaces.py):
All the sensitive routes (API calls that can query sensitive data or modify resources) in the TRE API depend on having a "current user" authenticated. E.g., in `/api_app/api/routes/workspaces.py`:

```python
router = APIRouter(dependencies=[Depends(get_current_user)])

@@ -260,11 +240,11 @@ router = APIRouter(dependencies=[Depends(get_current_user)])

Where `APIRouter` is part of the [FastAPI](https://fastapi.tiangolo.com/) framework.

The user details, once authenticated, are stored as an instance of the custom [User](./services/authentication.py) class.
The user details, once authenticated, are stored as an instance of the custom `User` class.
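
As a hedged, self-contained sketch of the pattern described above (not the repository's actual code), the example below wires a stubbed current-user dependency into an `APIRouter`; the token validation is deliberately left out and the `User` fields are assumptions.

```python
# Minimal illustrative sketch of the "current user" dependency pattern.
# Not the repository's implementation: token validation is stubbed and the
# User fields are assumptions.
from dataclasses import dataclass, field
from typing import List

from fastapi import APIRouter, Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2AuthorizationCodeBearer

oauth2_scheme = OAuth2AuthorizationCodeBearer(
    authorizationUrl="https://login.microsoftonline.com/<tenant>/oauth2/v2.0/authorize",
    tokenUrl="https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token",
)


@dataclass
class User:
    object_id: str
    display_name: str
    roles: List[str] = field(default_factory=list)


async def get_current_user(token: str = Depends(oauth2_scheme)) -> User:
    # A real implementation would validate the AAD-issued JWT and map its
    # claims onto the User; here we only sketch the shape of the dependency.
    if not token:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED)
    return User(object_id="<oid claim>", display_name="<name claim>")


# Every route on this router requires an authenticated "current user".
router = APIRouter(dependencies=[Depends(get_current_user)])


@router.get("/workspaces")
async def list_workspaces(user: User = Depends(get_current_user)):
    return {"requested_by": user.display_name, "workspaces": []}


app = FastAPI()
app.include_router(router, prefix="/api")
```
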

## Workspace requests

Some workspace routes require an `authConfig` field in the request body. The AAD specific implementation expects a dictionary inside the `data` field to contain the application (client) ID of the [app registration associated with the workspace](../docs/auth.md#workspaces):
Some workspace routes require an `authConfig` field in the request body. The AAD specific implementation expects a dictionary inside the `data` field to contain the application (client) ID of the [app registration associated with the workspace](../../tre-admins/deploying-the-tre/auth.md#workspaces):

```json
{

@@ -277,11 +257,12 @@ Some workspace routes require `authConfig` field in the request body. The AAD sp
}
```

> **Note:** The app registration for a workspace is not created by the API. One needs to be present (created manually) before using the API to provision a new workspace.
!!! caution
    The app registration for a workspace is not created by the API. One needs to be present (created manually) before using the API to provision a new workspace.

## Network requirements

To be able to run the TRE API it needs to acccess the following resource outside the Azure TRE VNET via explicit allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.
To be able to run, the TRE API needs to access the following resources outside the Azure TRE VNET via explicitly allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.

| Service Tag / Destination | Justification |
| --- | --- |

@@ -22,7 +22,7 @@ To work locally checkout the source code and run:
pip install -r ./vmss_porter/requirements.txt
```

If you use visual studio code you can set up your launch.json to include the follwing block which will enable launching and debugging.
If you use Visual Studio Code you can set up your `launch.json` to include the following block, which will enable launching and debugging.

```json
{

@@ -55,25 +55,25 @@ If you use visual studio code you can set up your launch.json to include the fol

When working locally we use a service principal (SP). This SP needs enough permissions to be able to talk to Service Bus and to deploy resources into the subscription. That means the service principal needs Owner access to the subscription (`ARM_SUBSCRIPTION_ID`) and also needs **Azure Service Bus Data Sender** and **Azure Service Bus Data Receiver** on the Service Bus namespace defined above (`SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE`).

Once the above is setup you can simulate receiving messages from service bus by going to service bus explorer on the portal and using a message payload for SERVICE_BUS_RESOURCE_REQUEST_QUEUE as follows
Once the above is set up, you can simulate receiving messages from Service Bus by going to the Service Bus Explorer in the portal and using a message payload for `SERVICE_BUS_RESOURCE_REQUEST_QUEUE` as follows:

```json
{"action": "install", "id": "a8911125-50b4-491b-9e7c-ed8ff42220f9", "name": "tre-workspace-base", "version": "0.1.0", "parameters": {"azure_location": "westeurope", "workspace_id": "20f9", "tre_id": "myfavtre", "address_space": "192.168.3.0/24"}}
```

This will trigger receiving of messages and you can freely debug the code by setting breakpoints as desired.
This will trigger receiving of messages, and you can freely debug the code by setting breakpoints as desired.
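
If you would rather script this than use the Service Bus Explorer, a hedged alternative is to send the same payload with the `azure-servicebus` SDK, as sketched below; this assumes the packages are installed and the identity used has the **Azure Service Bus Data Sender** role mentioned above.

```python
# Illustrative alternative to the portal's Service Bus Explorer: send the sample
# resource request payload to the request queue with the azure-servicebus SDK.
# Assumes `pip install azure-servicebus azure-identity` and a signed-in identity
# holding the "Azure Service Bus Data Sender" role on the namespace.
import json
import os

from azure.identity import DefaultAzureCredential
from azure.servicebus import ServiceBusClient, ServiceBusMessage

payload = {
    "action": "install",
    "id": "a8911125-50b4-491b-9e7c-ed8ff42220f9",
    "name": "tre-workspace-base",
    "version": "0.1.0",
    "parameters": {
        "azure_location": "westeurope",
        "workspace_id": "20f9",
        "tre_id": "myfavtre",
        "address_space": "192.168.3.0/24",
    },
}

namespace = os.environ["SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE"]
queue_name = os.environ["SERVICE_BUS_RESOURCE_REQUEST_QUEUE"]

credential = DefaultAzureCredential()
with ServiceBusClient(namespace, credential) as client:
    with client.get_queue_sender(queue_name=queue_name) as sender:
        sender.send_messages(ServiceBusMessage(json.dumps(payload)))
```
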

## Porter Azure plugin

Resource Processor uses [Porter Azure plugin](https://github.com/getporter/azure-plugins) to store Porter data in TRE management storage account. The storage container, named `porter`, is created during the bootstrapping phase of TRE deployment. [`run.sh`](./run.sh) script generates `config.toml` file in Porter home folder to enable the Azure plugin when the image is started.
The Resource Processor uses the [Porter Azure plugin](https://github.com/getporter/azure-plugins) to store Porter data in the TRE management storage account. The storage container, named `porter`, is created during the bootstrapping phase of TRE deployment. The `/resource_processor/run.sh` script generates a `config.toml` file in the Porter home folder to enable the Azure plugin when the image is started.

## Debugging deployed processor on Azure

Check the section **Checking the Virtual Machine Scale Set(VMSS) instance running resource processor** in [debugging and troubleshooting guide](../../docs/ops_debugging_troubleshooting.md)
Check the section **Checking the Virtual Machine Scale Set (VMSS) instance running resource processor** in the [debugging and troubleshooting guide](../../tre-admins/troubleshooting-guide.md).

## Network requirements

To be able to run the Resource Processer it needs to acccess the following resource outside the Azure TRE VNET via explicit allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.
To be able to run, the Resource Processor needs to access the following resources outside the Azure TRE VNET via explicitly allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.

| Service Tag | Justification |
| --- | --- |
@@ -2,7 +2,7 @@

Trusted Research Environments (TRE) enforce a secure boundary around distinct workspaces to enable information governance controls to be enforced.

![Concepts](./assets/treconcepts.png)
![Concepts](../assets/treconcepts.png)

A Trusted Research Environment (typically one per organization, or one per department in large organizations) consists of

@@ -22,7 +22,7 @@ Following are more detailed descriptions of the TRE concepts

## Application components of the TRE

A TRE consist of multiple processes orchestrating managing workspaces and services. These are components that enable researchers and TRE admins to provision and manage workspaces in a self-service manner. The components are of relevance for [Azure administrators](./user-roles.md#Azure-administrator), [TRE service integrator](./user-roles.md#TRE-service-integrator) and [TRE developers](./user-roles.md#Azure-TRE-developer).
A TRE consists of multiple processes orchestrating and managing workspaces and services. These are components that enable researchers and TRE admins to provision and manage workspaces in a self-service manner. The components are of relevance for [Azure administrators](user-roles.md#Azure-administrator), [TRE service integrator](user-roles.md#TRE-service-integrator) and [TRE developers](user-roles.md#Azure-TRE-developer).

### Composition Service

@@ -58,7 +58,7 @@ Workspaces can be enhanced with one or more building blocks called **workspace s

Multiple workspaces can be created within a single Trusted Research Environment to create the required separation for your projects.

Each workspace has [workspace users](./user-roles.md): one workspace owner, and one or more workspace researchers that can access the data and workspace services in the workspace. The workspace owner is also considered a workspace researcher.
Each workspace has [workspace users](user-roles.md): one workspace owner, and one or more workspace researchers that can access the data and workspace services in the workspace. The workspace owner is also considered a workspace researcher.

## Workspace Service

@@ -91,6 +91,7 @@ The templates describe the porter bundles used, and the input parameters needed

To use a template and deploy a resource, the template needs to be registered in the TRE. This is done using the TRE API.

> **Note:** Once a template is registered it can be used multiple times to deploy multiple workspaces, workspace services etc.
!!! tip
    Once a template is registered, it can be used multiple times to deploy multiple workspaces, workspace services, etc.

If you want to author your own workspace, workspace service, or user resource template, consult the [template authoring guide](./authoring-workspace-templates.md)
If you want to author your own workspace, workspace service, or user resource template, consult the [template authoring guide](../tre-workspace-authors/authoring-workspace-templates.md).
@@ -1,14 +1,15 @@
# Networking
# Network Architecture

The Trusted Research Environment (TRE) network topology is based on [hub-spoke](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). The TRE Management VNET ([Azure Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview)) is the central hub and each workspace is a spoke.

> Note: TRE Management is referred to as **core** in scripts and code.
!!! note
    TRE Management is referred to as **core** in scripts and code.

![Network architecture](./assets/network-architecture.png)
![Network architecture](../assets/network-architecture.png)

Azure TRE VNETs are segregated, allowing limited traffic between the TRE Management VNET and Workspace VNETs. The security rules are managed by the `nsg-ws` network security group. See [workspace network security groups (NSG)](#workspaces) further down.

The Core VNET is further devided into subnets.
The Core VNET is further divided into subnets.

| Subnet | Description |
| -------| ----------- |

@@ -20,10 +21,10 @@ The Core VNET is further devided into subnets.
| `SharedSubnet` | Shared Services subnet for all things shared by TRE Core and Workspaces, such as the Source Mirror Shared Service and Package Mirror Shared Service. |

All subnets (Core and Workspace subnets) have a default route which directs egress traffic to the Azure Firewall, to ensure only explicitly allowed destinations on the Internet can be accessed.
There are couple of exceptions
There are a couple of exceptions:

- `AzureFirewallSubnet` as it hosts the Azure Firewall which routes traffic to the Internet.
- `AzureBastionSubnet` as it hosts [Azure Bastion](https://azure.microsoft.com/en-us/services/azure-bastion) which is the management jumpbox within the VNET with Internet access.
- `AzureBastionSubnet` as it hosts [Azure Bastion](https://azure.microsoft.com/en-us/services/azure-bastion) which is the management jump box within the VNET with Internet access.
- `AppGwSubnet` as it hosts the Azure Application Gateway which has to be able to ping the health endpoints, e.g. the TRE API.

## Ingress and egress

@@ -32,18 +33,18 @@ Ingress traffic from the Internet is only allowed through the Application Gatewa

Egress traffic is routed through the Azure Firewall with a few exceptions, and by default all ingress and egress traffic is denied unless explicitly allowed.

The explicitly allowed egress trafic is described here:
The explicitly allowed egress traffic is described here:

- [Resource Processor](../resource_processor/vmss_porter/readme.md#network-requirements)
- [TRE API](../api_app/README.md#network-requirements)
- [Gitea Shared Service](../templates/shared_services/gitea/readme.md#network-requirements)
- [Nexus Shared Service](../templates/shared_services/sonatype-nexus/readme.md#network-requirements)
- [Resource Processor](composition-service/resource-processor.md#network-requirements)
- [TRE API](composition-service/api.md#network-requirements)
- [Gitea Shared Service](shared-services/gitea.md#network-requirements)
- [Nexus Shared Service](shared-services/nexus.md#network-requirements)

## Network security groups

### TRE Management/core

Network security groups (NSG) and their security rules for TRE core resources are defined in [`/templates/core/terraform/network/network_security_groups.tf`](../templates/core/terraform/network/network_security_groups.tf).
Network security groups (NSG) and their security rules for TRE core resources are defined in `/templates/core/terraform/network/network_security_groups.tf`.

| Network security group | Associated subnet(s) |
| ---------------------- | -------------------- |

@@ -60,6 +61,7 @@ Azure TRE VNETs are segregated allowing limited traffic between the TRE Manageme
- Outbound traffic to Internet allowed on HTTPS port 443 (next hop Azure Firewall).
- All other outbound traffic denied.

> In Azure, traffic between subnets are allowed except explicitly denied.

Each of these rules can be managed per workspace.

!!! caution
    In Azure, traffic between subnets is allowed unless explicitly denied.
@@ -14,9 +14,9 @@ In order to connect to the gitea admin console use the user "gitea_admin". The u

## Network requirements

To be able to run the Gitea Shared Service it need to be able to acccess the following resource outside the Azure TRE VNET via explicit allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.
To be able to run, the Gitea Shared Service needs to access the following resources outside the Azure TRE VNET via explicitly allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.

| Service Tag / Destionation | Justification |
| Service Tag / Destination | Justification |
| --- | --- |
| AzureActiveDirectory | Authorize the signed in user against Azure Active Directory. |
| AzureContainerRegistry | Pull the Gitea container image, as it is located in Azure Container Registry. |
@@ -1,6 +1,6 @@
# Nexus Shared Service

This service allows users in workspaces to access external software packages in a secure manner by relying on Sonatype Nexus (RepoManager).
This service allows users in workspaces to access external software packages securely by relying on Sonatype Nexus (RepoManager).
Documentation on Nexus can be found here: [https://help.sonatype.com/repomanager3/](https://help.sonatype.com/repomanager3/).

## Deploy

@@ -12,14 +12,14 @@ To deploy set `DEPLOY_NEXUS=true` in `templates/core/.env`.
1. Wait for the application to come online - it can take a few minutes... and then navigate to its homepage. You can find the URL by looking at the management resource group and finding a web application whose name starts with nexus.
1. Retrieve the initial admin password from the "admin.password" file. You will find it by going to the TRE management resource group -> storage account named "stg\<TRE-ID\>" -> "File Shares" -> "nexus-data".
1. Use the password to log in to Nexus and go through the initial setup wizard. You can allow anonymous access because the purpose of this service is to use publicly available software packages.
1. On the admin screen, add **proxy** repositories as needed. Note that other types of repositories might be a way to move data in/out workspaces and you should not allow that.
1. On the admin screen, add **proxy** repositories as needed. Note that other types of repositories might be a way to move data in and out of workspaces, and you should not allow that.
1. Finally, share the repository addresses with your users.

## Network requirements

To be able to run the Nexus Shared Service it need to be able to acccess the following resource outside the Azure TRE VNET via explicit allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.
To be able to run, the Nexus Shared Service needs to access the following resources outside the Azure TRE VNET via explicitly allowed [Service Tags](https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview) or URLs.

| Service Tag / Destionation | Justification |
| Service Tag / Destination | Justification |
| --- | --- |
| AzureActiveDirectory | Authorize the signed in user against Azure Active Directory. |
| AzureContainerRegistry | Pull the Nexus container image, as it is located in Azure Container Registry. |
@@ -37,7 +37,8 @@ Now, let's open the cloned repository in Visual Studio Code and connect to the d
AzureTRE> code .
```

> Visual Studio Code should recognize the available development container and ask you to open the folder using it. For additional details on connecting to remote containers, please see the [Open an existing folder in a container](https://code.visualstudio.com/docs/remote/containers#_quick-start-open-an-existing-folder-in-a-container) quickstart.
!!! tip
    Visual Studio Code should recognize the available development container and ask you to open the folder using it. For additional details on connecting to remote containers, please see the [Open an existing folder in a container](https://code.visualstudio.com/docs/remote/containers#_quick-start-open-an-existing-folder-in-a-container) quickstart.

When you start the development container for the first time, the container will be built. This usually takes a few minutes.

@@ -49,17 +50,21 @@ The `/devops/.env` file contains configuration variables for the shared manageme

You need to provide values for the following variables:

* `LOCATION` - The Azure region to deploy to
* `MGMT_RESOURCE_GROUP_NAME` - Resource group name
* `MGMT_STORAGE_ACCOUNT_NAME` - Storage account name
* `ACR_NAME` - Container registry name
* `ARM_SUBSCRIPTION_ID` - Azure subscription id
| VARIABLE | DESCRIPTION |
| -- | -- |
| `LOCATION` | The Azure region to deploy to |
| `MGMT_RESOURCE_GROUP_NAME` | Resource group name |
| `MGMT_STORAGE_ACCOUNT_NAME` | Storage account name |
| `ACR_NAME` | Container registry name |
| `ARM_SUBSCRIPTION_ID` | Azure subscription id |

Comment out the following variables by starting the line with a hash `#`.

* `ARM_TENANT_ID`
* `ARM_CLIENT_ID`
* `ARM_CLIENT_SECRET`
```cmd
# ARM_TENANT_ID=...
# ARM_CLIENT_ID=...
# ARM_CLIENT_SECRET=...
```

The rest of the variables can have their default values. You should now have a `.env` file that looks similar to below.

@@ -85,7 +90,8 @@ PORTER_OUTPUT_CONTAINER_NAME=porterout
DEBUG="false"
```

> To retrieve your Azure subscription id, you can use the `az` command line interface available in the development container. In the terminal window in Visual Studio Code, type `az login` followed by `az account show` to see your default subscription. Please refer to `az account -help` for further details on how to change your active subscription if desired.
!!! tip
    To retrieve your Azure subscription id, you can use the `az` command line interface available in the development container. In the terminal window in Visual Studio Code, type `az login` followed by `az account show` to see your default subscription. Please refer to `az account --help` for further details on how to change your active subscription if desired.

## Set environment configuration variables of the Azure TRE instance

@@ -101,21 +107,25 @@ Use the terminal window in Visual Studio Code to execute the following script fr
/workspaces/tre> az login
```

> note: in case you have several subscriptions and would like to change your default subscription use ```az account set --subscription desired_subscription_id```
!!! note
    In case you have several subscriptions and would like to change your default subscription, use `az account set --subscription desired_subscription_id`

```bash
/workspaces/tre> ./scripts/aad-app-reg.sh -n aztreqs -r https://aztreqs.westeurope.cloudapp.azure.com/oidc-redirect
```

> Note: `aztreqs` is a placeholder for the unique name you have to choose for your Azure TRE instance. Likewise `westeurope` is a placeholder for the location where the resources will be deployed, this should match the value you set on the location variable in the previous step.
!!! note
    `aztreqs` is a placeholder for the unique name you have to choose for your Azure TRE instance. Likewise, `westeurope` is a placeholder for the location where the resources will be deployed; this should match the value you set for the location variable in the previous step.

With the output from the `aad-app-reg.sh` script, you can now provide the required values for the following variables in the `/templates/core/.env` configuration file:

* `TRE_ID` - The identifier for your Azure TRE instance. Will be used for naming Azure resources. Needs to be globally unique and less than 12 characters.
* `AAD_TENANT_ID` - The Azure AD tenant id
* `API_CLIENT_ID` - Service principal id for the API
* `API_CLIENT_SECRET` - Client secret for the API
* `SWAGGER_UI_CLIENT_ID` - Service principal id for the Swagger (Open API) UI
| VARIABLE | DESCRIPTION |
| -- | -- |
| `TRE_ID` | The identifier for your Azure TRE instance. Will be used for naming Azure resources. Needs to be globally unique and less than 12 characters. |
| `AAD_TENANT_ID` | The Azure AD tenant id |
| `API_CLIENT_ID` | Service principal id for the API |
| `API_CLIENT_SECRET` | Client secret for the API |
| `SWAGGER_UI_CLIENT_ID` | Service principal id for the Swagger (Open API) UI |

All other variables can have their default values for now. You should now have a `.env` file that looks similar to below.

@@ -189,5 +199,5 @@ Open your browser and navigate to the `/api/docs` route of the API: `https://<a
## Next steps

* Deploy a new workspace for Azure Machine Learning
* [Enable users to access the Azure TRE instance](./auth.md#enabling-users)
* [Create a new workspace template](./authoring-workspace-templates.md)
* [Enable users to access the Azure TRE instance](tre-admins/deploying-the-tre/auth.md#enabling-users)
* [Create a new workspace template](tre-workspace-authors/authoring-workspace-templates.md)
@@ -1,25 +1,25 @@
# Azure TRE documentation

* Overview
  * [Concepts](./concepts.md)
  * [User roles](./user-roles.md)
  * [Architecture](./architecture.md)
  * [Networking](./networking.md)
  * [Concepts](azure-tre-overview/concepts.md)
  * [User roles](azure-tre-overview/user-roles.md)
  * [Architecture](azure-tre-overview/architecture.md)
  * [Networking](azure-tre-overview/networking.md)
  * [Logical data model](./logical-data-model.md)
* Getting started
  * [Dev environment](./dev-environment.md)
  * [Authentication & authorization](./auth.md)
  * [Dev environment](tre-developers/dev-environment.md)
  * [Authentication & authorization](tre-admins/deploying-the-tre/auth.md)
  * The two ways of provisioning an instance of Azure TRE:
    1. [GitHub Actions workflows (CI/CD)](./workflows.md)
    1. [Quickstart](./deployment-quickstart.md)/[Manual deployment](./manual-deployment.md)
    1. [GitHub Actions workflows (CI/CD)](tre-admins/deploying-the-tre/workflows.md)
    1. [Quickstart](./deployment-quickstart.md)/[Manual deployment](tre-admins/deploying-the-tre/manual-deployment.md)
* Composition Service components
  * [API](../api_app/README.md)
  * [Resource Processor](../resource_processor/README.md)
  * [End-to-end tests](../e2e_tests/README.md)
  * [API](azure-tre-overview/composition-service/api.md)
  * [Resource Processor](azure-tre-overview/composition-service/resource-processor.md)
  * [End-to-end tests](tre-developers/end-to-end-tests.md)
* Workspaces and workspace services
  * [Authoring workspace templates](./authoring-workspace-templates.md)
  * [Registering workspace templates](./registering-workspace-templates.md)
  * [Firewall rules](./firewall-rules.md)
  * [Authoring workspace templates](tre-workspace-authors/authoring-workspace-templates.md)
  * [Registering workspace templates](tre-workspace-authors/registering-workspace-templates.md)
  * [Firewall rules](tre-workspace-authors/firewall-rules.md)

## Repository structure
@@ -0,0 +1,6 @@
markdown
mdx-truly-sane-lists
mkdocs
mkdocs-material
mkdocs-material-extensions
mkdocstrings
@@ -5,10 +5,10 @@ This document describes the authentication and authorization (A&A) of deployed A

## App registrations

App registrations (represented by service principals) define the privileges enabling access to the TRE system (e.g., [API](../api_app/README.md)) as well as the workspaces.
App registrations (represented by service principals) define the privileges enabling access to the TRE system (e.g., [API](../../azure-tre-overview/composition-service/api.md)) as well as the workspaces.

<!-- markdownlint-disable-next-line MD013 -->
It is recommended to run the [`/scripts/aad-app-reg.sh`](../scripts/aad-app-reg.sh) script to create the two main app registrations: **TRE API** and **TRE Swagger UI**. This automatically sets up the app registrations with the required permissions to run Azure TRE. The script will create an app password (client secret) for the **TRE API** app; make sure to take note of it in the script output as it is only shown once.
It is recommended to run the `/scripts/aad-app-reg.sh` script to create the two main app registrations: **TRE API** and **TRE Swagger UI**. This automatically sets up the app registrations with the required permissions to run Azure TRE. The script will create an app password (client secret) for the **TRE API** app; make sure to take note of it in the script output as it is only shown once.

Alternatively, you can also choose to create the app registrations manually via the Azure Portal - see [Quickstart: Register an application with the Microsoft identity platform](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app) for instructions. The required setup with permissions is documented below.

@@ -25,7 +25,7 @@ The **TRE API** app registration defines the permissions, scopes and app roles f
| API/permission name | Type | Description | Admin consent required | Status | TRE usage |
| ------------------- | ---- | ----------- | ---------------------- | ------ | --------- |
| Microsoft Graph/Directory.Read.All (`https://graph.microsoft.com/Directory.Read.All`) | Application* | Allows the app to read data in your organization's directory, such as users, groups and apps, without a signed-in user. | Yes | Granted for *[directory name]* | Used e.g., to retrieve app registration details, user associated app roles etc. |
| Microsoft Graph/User.Read.All (`https://graph.microsoft.com/User.Read.All`) | Application* | Allows the app to read user profiles without a signed in user. | Yes | Granted for *[directory name]* | Reading user role assignments to check that the user has permissions to execute an action e.g., to view workspaces. See [`aad_access_service.py`](../api_app/services/aad_access_service.py). |
| Microsoft Graph/User.Read.All (`https://graph.microsoft.com/User.Read.All`) | Application* | Allows the app to read user profiles without a signed in user. | Yes | Granted for *[directory name]* | Reading user role assignments to check that the user has permissions to execute an action e.g., to view workspaces. See `/api_app/services/aad_access_service.py`. |

*) See the difference between [delegated and application permission](https://docs.microsoft.com/graph/auth/auth-concepts#delegated-and-application-permissions) types.

@@ -47,7 +47,7 @@ See [Microsoft Graph permissions reference](https://docs.microsoft.com/graph/per

The **TRE API** app registration requires no redirect URLs defined, or anything else for that matter. From a security standpoint, it should be noted that public client flows should not be allowed (see the image below, taken from the app registration authentication blade in the Azure Portal).

![Allow public client flows - No](./assets/app-reg-authentication-allow-public-client-flows-no.png)
![Allow public client flows - No](../../assets/app-reg-authentication-allow-public-client-flows-no.png)

### TRE Swagger UI

@@ -71,21 +71,20 @@ The **TRE API** app registration requires no redirect URLs defined or anything e

Redirect URLs:

* `https://<app name>.<location>.cloudapp.azure.com/docs/oauth2-redirect`
* `http://localhost:8000/docs/oauth2-redirect` - For local testing
- `https://<app name>.<location>.cloudapp.azure.com/docs/oauth2-redirect`
- `http://localhost:8000/docs/oauth2-redirect` - For local testing

The Swagger UI is a public client, so public client flows need to be enabled:

![Allow public client flows - Yes](./assets/app-reg-authentication-allow-public-client-flows-yes.png)
![Allow public client flows - Yes](../../assets/app-reg-authentication-allow-public-client-flows-yes.png)

### TRE e2e test

The **TRE e2e test** app registration is used to authorize end-to-end test scenarios. It has no scopes or app roles defined.

> **Note:**
>
> * This app registration is only needed and used for **testing**
> * As of writing this, there is no automated way provided for creating the **TRE e2e test** app registration, so it needs to be created manually.
!!! note
    - This app registration is only needed and used for **testing**
    - As of writing this, there is no automated way provided for creating the **TRE e2e test** app registration, so it needs to be created manually.

#### API permissions - TRE e2e test

@@ -102,16 +101,20 @@ The **TRE e2e test** app registration is used to authorize end-to-end test scena

In the **TRE e2e test** app registration go to Authentication -> Add platform -> Select Mobile & Desktop and add:

* `https://login.microsoftonline.com/common/oauth2/nativeclient`
* `msal<TRE e2e test app registration application (client) ID>://auth`
```cmd
https://login.microsoftonline.com/common/oauth2/nativeclient
msal<TRE e2e test app registration application (client) ID>://auth
```

![Add auth platform](assets/aad-add-auth-platform.png)
![Add auth platform](../../assets/aad-add-auth-platform.png)

1. Allow public client flows (see the image below). This enables the end-to-end tests to use a username and password combination to authenticate.

> **Note:** this should never be allowed for a production environment as it poses a security risk.
![Allow public client flows - Yes](../../assets/app-reg-authentication-allow-public-client-flows-yes.png)

!!! warning
    Public client flows should never be allowed for a production environment as this poses a security risk.

![Allow public client flows - Yes](./assets/app-reg-authentication-allow-public-client-flows-yes.png)
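
For illustration only, this is roughly how an end-to-end test could use the public client flow enabled above to acquire a token with a username and password via MSAL; the scope value is an assumption about what the TRE API app registration exposes, and, as the warning notes, this flow should never be enabled in production.

```python
# Hedged sketch of the username/password (ROPC) flow that the e2e tests rely on.
# Requires `pip install msal`; all placeholder values and the scope are assumptions.
import msal

tenant_id = "<AAD tenant ID>"
e2e_client_id = "<TRE e2e test app registration (client) ID>"
api_client_id = "<TRE API app registration (client) ID>"

app = msal.PublicClientApplication(
    e2e_client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
)

result = app.acquire_token_by_username_password(
    username="<end-to-end test user UPN>",
    password="<end-to-end test user password>",
    scopes=[f"api://{api_client_id}/user_impersonation"],  # assumed exposed scope
)

if "access_token" in result:
    print("Token acquired for the TRE API")
else:
    print("Token request failed:", result.get("error_description"))
```
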

#### End-to-end test user

@@ -125,7 +128,8 @@ The end-to-end test should be added to **TRE Administrator** role exposed by the

Access to workspaces is also controlled using app registrations - one per workspace. The configuration of the app registration depends on the nature of the workspace, but this section covers the typical minimum settings.

> **Note:** The app registration for a workspace is not created by the [API](../api_app/README.md). One needs to be present (created manually) before using the API to provision a new workspace.
!!! caution
    The app registration for a workspace is not created by the [API](../../azure-tre-overview/composition-service/api.md). One needs to be present (created manually) before using the API to provision a new workspace.

#### Authentication - Workspaces

@@ -153,8 +157,8 @@ For a user to gain access to the system, they have to:

When these requirements are met, the user can sign in using their credentials and use their privileges to use the API, log in to the workspace environment etc. based on their specific roles.

![User linked with app registrations](./assets/aad-user-linked-with-app-regs.png)
![User linked with app registrations](../../assets/aad-user-linked-with-app-regs.png)

The users can also be linked via the Enterprise application view:

![Adding users to Enterprise application](./assets/adding-users-to-enterprise-application.png)
![Adding users to Enterprise application](../../assets/adding-users-to-enterprise-application.png)
@@ -12,7 +12,8 @@ az account list
az account set --subscription <subscription ID>
```

> **Note:** When running locally the credentials of the logged-in user will be used to deploy the infrastructure. Hence, it is essential that the user has enough permissions to deploy all resources.
!!! caution
    When running locally, the credentials of the logged-in user will be used to deploy the infrastructure. Hence, it is essential that the user has enough permissions to deploy all resources.

See [Sign in with Azure CLI](https://docs.microsoft.com/cli/azure/authenticate-azure-cli) for more details.

@@ -26,11 +27,9 @@ A service principal needs to be created to authorize CI/CD workflows to provisio
az ad sp create-for-rbac --name "sp-aztre-core" --role Owner --scopes /subscriptions/<subscription_id> --sdk-auth
```

1. Save the JSON output

* Locally - as you will need it later.
* Create a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) called `AZURE_CREDENTIALS` with the JSON output.
1. Save the JSON output locally - as you will need it later.
1. Create a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) called `AZURE_CREDENTIALS` with the JSON output.

## Create app registrations

Create app registrations for auth based on the [Authentication & authorization](./auth.md) guide.
Create app registrations for auth based on the [Authentication & authorization](auth.md) guide.
@ -6,58 +6,59 @@ By following this guide you will deploy a new Azure TRE instance for development
|
|||
|
||||
### Bootstrap and create prerequisite resources
|
||||
|
||||
1. By now you should have a [developer environment](./dev-environment.md) set up
|
||||
1. Create app registrations for auth; follow the [Authentication & authorization](./auth.md) guide
|
||||
1. By now you should have a [developer environment](../../tre-developers/dev-environment.md) set up
|
||||
1. Create app registrations for auth; follow the [Authentication & authorization](auth.md) guide
|
||||
|
||||
### Configure variables
|
||||
|
||||
Before running any of the scripts, the configuration variables need to be set. This is done in an `.env` file, and this file is read and parsed by the scripts.
|
||||
|
||||
> **Note:** the `.tfvars` file is not used, this is intentional. The `.env` file format is easier to parse, meaning we can use the values for bash scripts and other purposes.
|
||||
!!! info
|
||||
The `.tfvars` file is not used, this is intentional. The `.env` file format is easier to parse, meaning we can use the values for bash scripts and other purposes.
|
||||
|
||||
Copy [/devops/.env.sample](../devops/.env.sample) to `/devops/.env`.
|
||||
1. Copy `/devops/.env.sample` to `/devops/.env`.
|
||||
|
||||
```cmd
|
||||
cp devops/.env.sample devops/.env
|
||||
```
|
||||
```cmd
|
||||
cp devops/.env.sample devops/.env
|
||||
```
|
||||
|
||||
Then, open the `.env` file in a text editor and set the values for the required variables described in the table below:
|
||||
Then, open the `.env` file in a text editor and set the values for the required variables described in the table below:
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `LOCATION` | The Azure location (region) for all resources. |
|
||||
| `MGMT_RESOURCE_GROUP_NAME` | The shared resource group for all management resources, including the storage account. |
|
||||
| `MGMT_STORAGE_ACCOUNT_NAME` | The name of the storage account to hold the Terraform state and other deployment artifacts. |
|
||||
| `TERRAFORM_STATE_CONTAINER_NAME` | The name of the blob container to hold the Terraform state *Default value is `tfstate`.* |
|
||||
| `IMAGE_TAG` | The default tag for Docker images that will be pushed to the container registry and deployed with the Azure TRE. |
|
||||
| `ACR_NAME` | A globally unique name for the Azure Container Registry (ACR) that will be created to store deployment images. |
|
||||
| `ARM_SUBSCRIPTION_ID` | *Optional for manual deployment. If not specified the `az cli` selected subscription will be used.* The Azure subscription ID for all resources. |
|
||||
| `ARM_CLIENT_ID` | *Optional for manual deployment without logged-in credentials.* The client whose azure identity will be used to deploy the solution. |
|
||||
| `ARM_CLIENT_SECRET` | *Optional for manual deployment without logged-in credentials.* The password of the client defined in `ARM_CLIENT_ID`. |
|
||||
| `ARM_TENANT_ID` | *Optional for manual deployment. If not specified the `az cli` selected subscription will be used.* The AAD tenant of the client defined in `ARM_CLIENT_ID`. |
|
||||
| `PORTER_OUTPUT_CONTAINER_NAME` | The name of the storage container where to store the workspace/workspace service deployment output. Workspaces and workspace templates are implemented using [Porter](https://porter.sh) bundles - hence the name of the variable. The storage account used is the one defined in `STATE_STORAGE_ACCOUNT_NAME`. |
|
||||
| `DEBUG` | If set to "true" disables purge protection of keyvault. |
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `LOCATION` | The Azure location (region) for all resources. |
|
||||
| `MGMT_RESOURCE_GROUP_NAME` | The shared resource group for all management resources, including the storage account. |
|
||||
| `MGMT_STORAGE_ACCOUNT_NAME` | The name of the storage account to hold the Terraform state and other deployment artifacts. |
|
||||
| `TERRAFORM_STATE_CONTAINER_NAME` | The name of the blob container to hold the Terraform state. *Default value is `tfstate`.* |
|
||||
| `IMAGE_TAG` | The default tag for Docker images that will be pushed to the container registry and deployed with the Azure TRE. |
|
||||
| `ACR_NAME` | A globally unique name for the Azure Container Registry (ACR) that will be created to store deployment images. |
|
||||
| `ARM_SUBSCRIPTION_ID` | *Optional for manual deployment. If not specified the `az cli` selected subscription will be used.* The Azure subscription ID for all resources. |
|
||||
| `ARM_CLIENT_ID` | *Optional for manual deployment without logged-in credentials.* The client whose Azure identity will be used to deploy the solution. |
|
||||
| `ARM_CLIENT_SECRET` | *Optional for manual deployment without logged-in credentials.* The password of the client defined in `ARM_CLIENT_ID`. |
|
||||
| `ARM_TENANT_ID` | *Optional for manual deployment. If not specified the tenant of the `az cli` selected subscription will be used.* The AAD tenant of the client defined in `ARM_CLIENT_ID`. |
|
||||
| `PORTER_OUTPUT_CONTAINER_NAME` | The name of the storage container in which the workspace/workspace service deployment output is stored. Workspaces and workspace templates are implemented using [Porter](https://porter.sh) bundles - hence the name of the variable. The storage account used is the one defined in `STATE_STORAGE_ACCOUNT_NAME`. |
|
||||
| `DEBUG` | If set to `true`, disables purge protection of the key vault. |
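As a rough sketch, a populated `/devops/.env` might then look like the following (all values below are hypothetical placeholders; substitute your own):

```cmd
LOCATION=westeurope
MGMT_RESOURCE_GROUP_NAME=rg-mytre-mgmt
MGMT_STORAGE_ACCOUNT_NAME=mytremgmtstorage
TERRAFORM_STATE_CONTAINER_NAME=tfstate
IMAGE_TAG=0.1.0
ACR_NAME=mytreacr
```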
|
||||
|
||||
Copy [/templates/core/.env.sample](../templates/core/.env.sample) to `/templates/core/.env` and set values for all variables described in the table below:
|
||||
1. Copy `/templates/core/.env.sample` to `/templates/core/.env` and set values for all variables described in the table below:
|
||||
|
||||
```cmd
|
||||
cp templates/core/.env.sample templates/core/.env
|
||||
```
|
||||
```cmd
|
||||
cp templates/core/.env.sample templates/core/.env
|
||||
```
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `TRE_ID` | A globally unique identifier. `TRE_ID` can be found in the resource names of the Azure TRE instance; for example, a `TRE_ID` of `mytre-dev-3142` will result in a resource group name for Azure TRE instance of `rg-mytre-dev-3142`. This must be less than 12 characters. Allowed characters: Alphanumeric, underscores, and hyphens. |
|
||||
| `CORE_ADDRESS_SPACE` | The address space for the Azure TRE core virtual network. `/22` or larger. |
|
||||
| `TRE_ADDRESS_SPACE` | The address space for the whole TRE environment virtual network where workspaces networks will be created (can include the core network as well). E.g. `10.0.0.0/12`|
|
||||
| `API_IMAGE_TAG` | The tag of the API image. Make it the same as `IMAGE_TAG` above.|
|
||||
| `RESOURCE_PROCESSOR_VMSS_PORTER_IMAGE_TAG` | The tag of the resource processor image. Make it the same as `IMAGE_TAG` above.|
|
||||
| `GITEA_IMAGE_TAG` | The tag of the Gitea image. Make it the same as `IMAGE_TAG` above.|
|
||||
| `SWAGGER_UI_CLIENT_ID` | Generated when following auth guide. Client ID for swagger client to make requests. |
|
||||
| `AAD_TENANT_ID` | Generated when following auth guide. Tenant id against which auth is performed. |
|
||||
| `API_CLIENT_ID` | Generated when following auth guide. Client id of the "TRE API". |
|
||||
| `API_CLIENT_SECRET` | Generated when following auth guide. Client secret of the "TRE API". |
|
||||
| `DEPLOY_GITEA` | If set to `false` disables deployment of the Gitea shared service. |
|
||||
| `DEPLOY_NEXUS` | If set to `false` disables deployment of the Nexus shared service. |
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `TRE_ID` | A globally unique identifier. `TRE_ID` can be found in the resource names of the Azure TRE instance; for example, a `TRE_ID` of `mytre-dev-3142` will result in a resource group name for Azure TRE instance of `rg-mytre-dev-3142`. This must be less than 12 characters. Allowed characters: Alphanumeric, underscores, and hyphens. |
|
||||
| `CORE_ADDRESS_SPACE` | The address space for the Azure TRE core virtual network. `/22` or larger. |
|
||||
| `TRE_ADDRESS_SPACE` | The address space for the whole TRE environment virtual network where workspaces networks will be created (can include the core network as well). E.g. `10.0.0.0/12`|
|
||||
| `API_IMAGE_TAG` | The tag of the API image. Make it the same as `IMAGE_TAG` above.|
|
||||
| `RESOURCE_PROCESSOR_VMSS_PORTER_IMAGE_TAG` | The tag of the resource processor image. Make it the same as `IMAGE_TAG` above.|
|
||||
| `GITEA_IMAGE_TAG` | The tag of the Gitea image. Make it the same as `IMAGE_TAG` above.|
|
||||
| `SWAGGER_UI_CLIENT_ID` | Generated when following auth guide. Client ID for swagger client to make requests. |
|
||||
| `AAD_TENANT_ID` | Generated when following auth guide. Tenant id against which auth is performed. |
|
||||
| `API_CLIENT_ID` | Generated when following auth guide. Client id of the "TRE API". |
|
||||
| `API_CLIENT_SECRET` | Generated when following auth guide. Client secret of the "TRE API". |
|
||||
| `DEPLOY_GITEA` | If set to `false` disables deployment of the Gitea shared service. |
|
||||
| `DEPLOY_NEXUS` | If set to `false` disables deployment of the Nexus shared service. |
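As a rough sketch, `/templates/core/.env` might then look like the following (all values are hypothetical placeholders; the client IDs, secret and tenant ID come from the auth guide):

```cmd
TRE_ID=mytre-dev-3142
CORE_ADDRESS_SPACE=10.1.0.0/22
TRE_ADDRESS_SPACE=10.0.0.0/12
API_IMAGE_TAG=0.1.0
RESOURCE_PROCESSOR_VMSS_PORTER_IMAGE_TAG=0.1.0
GITEA_IMAGE_TAG=0.1.0
SWAGGER_UI_CLIENT_ID=<swagger-ui-client-id>
AAD_TENANT_ID=<aad-tenant-id>
API_CLIENT_ID=<tre-api-client-id>
API_CLIENT_SECRET=<tre-api-client-secret>
DEPLOY_GITEA=true
DEPLOY_NEXUS=true
```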
|
||||
|
||||
### Deploy
|
||||
|
||||
|
@ -84,7 +85,8 @@ The Azure TRE is initially deployed with an invalid self-signed SSL certificate.
|
|||
make letsencrypt
|
||||
```
|
||||
|
||||
Note that there are rate limits with Let's Encrypt, so this should not be run when not needed.
|
||||
!!! caution
|
||||
There are rate limits with Let's Encrypt, so this should not be run when not needed.
|
||||
|
||||
## Details of deployment and infrastructure
|
||||
|
||||
|
@ -102,7 +104,9 @@ A bootstrap script is used to create the initial storage account and resource gr
|
|||
|
||||
You can do this step using the following command, but as stated above it is already part of ``make all``.
|
||||
|
||||
- `make bootstrap`
|
||||
```cmd
|
||||
make bootstrap
|
||||
```
|
||||
|
||||
This script should never need running a second time even if the other management resources are modified.
|
||||
|
||||
|
@ -110,7 +114,9 @@ This script should never need running a second time even if the other management
|
|||
|
||||
The deployment of the rest of the shared management resources is done via Terraform and the various `.tf` files in the root of this repo.
|
||||
|
||||
- `make mgmt-deploy`
|
||||
```cmd
|
||||
make mgmt-deploy
|
||||
```
|
||||
|
||||
This Terraform creates & configures the following:
|
||||
|
||||
|
@ -166,7 +172,7 @@ curl https://<azure_tre_fqdn>/api/health
|
|||
|
||||
1. Once logged in, click `Try it out` on the `POST` `/api/workspace-templates` operation:
|
||||
|
||||
![Post Workspace Template](./assets/post-template.png)
|
||||
![Post Workspace Template](../../assets/post-template.png)
|
||||
|
||||
1. Paste the payload json generated earlier into the `Request body` field, then click `Execute`. Review the server response.
|
||||
|
||||
|
@ -176,10 +182,10 @@ curl https://<azure_tre_fqdn>/api/health
|
|||
|
||||
Now that we have published and registered a base workspace bundle, we can use the deployed API to create a base workspace.
|
||||
|
||||
<!-- markdownlint-disable-next-line MD013 -->
|
||||
> **Note:** All routes are auth protected. Click the green **Authorize** button to receive a token for swagger client.
|
||||
!!! info
|
||||
All routes are auth protected. Click the green **Authorize** button to receive a token for the Swagger client.
|
||||
|
||||
As explained in the [auth guide](auth.md), every workspace has a corresponding app registration which can be created using the helper script [../scripts/workspace-app-reg.py](../scripts/workspace-app-reg.py). Multiple workspaces can share an app registration.
|
||||
As explained in the [auth guide](auth.md), every workspace has a corresponding app registration which can be created using the helper script `/scripts/workspace-app-reg.py`. Multiple workspaces can share an app registration.
|
||||
|
||||
Running the script will report the app ID of the generated app, which needs to be used in the POST body below.
|
||||
|
||||
|
@ -204,8 +210,8 @@ The API will report the ``workspace_id`` of the created workspace, which can be
|
|||
|
||||
You can also follow the progress in Azure portal as various resources come up.
|
||||
|
||||
<!-- markdownlint-disable-next-line MD013 -->
|
||||
> To query the status using the API your user needs to have TREResearcher or TREOwner role assigned to the app.
|
||||
!!! info
|
||||
To query the status using the API, your user needs to have the TREResearcher or TREOwner role assigned to the app.
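As a sketch, once you have a token you can also poll the workspace from the command line with `curl` (the exact route shape is an assumption here; the Swagger UI shows the definitive endpoints):

```cmd
curl -H "Authorization: Bearer <token>" https://<azure_tre_fqdn>/api/workspaces/<workspace_id>
```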
|
||||
|
||||
## Deleting the Azure TRE deployment
|
||||
|
|
@ -2,16 +2,17 @@
|
|||
|
||||
## Setup instructions
|
||||
|
||||
These are onetime configuration steps required to set up the GitHub Actions workflows (pipelines). After the steps the [TRE deployment workflow](../.github/workflows/deploy_tre.yml) is ready to run.
|
||||
These are one-time configuration steps required to set up the GitHub Actions workflows (pipelines). After these steps, the TRE deployment workflow (`/.github/workflows/deploy_tre.yml`) is ready to run.
|
||||
|
||||
1. Create service principal and set their [repository secrets](https://docs.github.com/en/actions/reference/encrypted-secrets) as explained in [Bootstrapping](./bootstrapping.md#create-service-principals)
|
||||
1. Create app registrations for auth based on the [Authentication & authorization](./auth.md) guide
|
||||
1. Create the service principals and set their [repository secrets](https://docs.github.com/en/actions/reference/encrypted-secrets) as explained in [Bootstrapping](bootstrapping.md#create-service-principals)
|
||||
1. Create app registrations for auth based on the [Authentication & authorization](auth.md) guide
|
||||
1. Set other repository secrets as explained in the table below
|
||||
|
||||
*Required repository secrets for the CI/CD.*
|
||||
|
||||
| Secret name | Description |
|
||||
| ----------- | ----------- |
|
||||
| `AZURE_CREDENTIALS` | Explained in [Bootstrapping - Create service principals](./bootstrapping.md#create-service-principals). Main service principal credentials output. |
|
||||
| `AZURE_CREDENTIALS` | Explained in [Bootstrapping - Create service principals](bootstrapping.md#create-service-principals). Main service principal credentials output. |
|
||||
| `TF_STATE_CONTAINER` | The name of the blob container to hold the Terraform state. By convention the value is `tfstate`. |
|
||||
| `MGMT_RESOURCE_GROUP` | The name of the shared resource group for all Azure TRE core resources. |
|
||||
| `STATE_STORAGE_ACCOUNT_NAME` | The name of the storage account to hold the Terraform state and other deployment artifacts. E.g. `mystorageaccount`. |
|
||||
|
@ -21,13 +22,13 @@ These are onetime configuration steps required to set up the GitHub Actions work
|
|||
| `TRE_ID` | A globally unique identifier. `TRE_ID` can be found in the resource names of the Azure TRE instance; for example, a `TRE_ID` of `tre-dev-42` will result in a resource group name for Azure TRE instance of `rg-tre-dev-42`. This must be less than 12 characters. Allowed characters: Alphanumeric, underscores, and hyphens. |
|
||||
| `CORE_ADDRESS_SPACE` | The address space for the Azure TRE core virtual network. E.g. `10.1.0.0/22`. Recommended `/22` or larger. |
|
||||
| `TRE_ADDRESS_SPACE` | The address space for the whole TRE environment virtual network where workspaces networks will be created (can include the core network as well). E.g. `10.0.0.0/12`|
|
||||
| `SWAGGER_UI_CLIENT_ID` | The application (client) ID of the [TRE Swagger UI](./auth.md#tre-swagger-ui) service principal. |
|
||||
| `SWAGGER_UI_CLIENT_ID` | The application (client) ID of the [TRE Swagger UI](auth.md#tre-swagger-ui) service principal. |
|
||||
| `AAD_TENANT_ID` | The tenant ID of the Azure AD. |
|
||||
| `API_CLIENT_ID` | The application (client) ID of the [TRE API](./auth.md#tre-api) service principal. |
|
||||
| `API_CLIENT_SECRET` | The application password (client secret) of the [TRE API](./auth.md#tre-api) service principal. |
|
||||
| `API_CLIENT_ID` | The application (client) ID of the [TRE API](auth.md#tre-api) service principal. |
|
||||
| `API_CLIENT_SECRET` | The application password (client secret) of the [TRE API](auth.md#tre-api) service principal. |
|
||||
| `DEPLOY_GITEA` | If set to `false` disables deployment of the Gitea shared service. |
|
||||
| `DEPLOY_NEXUS` | If set to `false` disables deployment of the Nexus shared service. |
|
||||
| `TEST_APP_ID` | The application (client) ID of the [E2E Test app](./auth.md#tre-e2e-test) service principal. |
|
||||
| `TEST_USER_NAME` | The username of the [E2E Test User](./auth.md#end-to-end-test-user). |
|
||||
| `TEST_USER_PASSWORD` | The password of the [E2E Test User](./auth.md#end-to-end-test-user). |
|
||||
| `TEST_WORKSPACE_APP_ID` | The application (client) ID of the [Workspaces app](./auth.md#workspaces) service principal. |
|
||||
| `TEST_APP_ID` | The application (client) ID of the [E2E Test app](auth.md#tre-e2e-test) service principal. |
|
||||
| `TEST_USER_NAME` | The username of the [E2E Test User](auth.md#end-to-end-test-user). |
|
||||
| `TEST_USER_PASSWORD` | The password of the [E2E Test User](auth.md#end-to-end-test-user). |
|
||||
| `TEST_WORKSPACE_APP_ID` | The application (client) ID of the [Workspaces app](auth.md#workspaces) service principal. |
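If you prefer the command line over the GitHub UI, the secrets can also be set with the GitHub CLI, for example (a sketch; the values shown are placeholders):

```cmd
gh secret set AZURE_CREDENTIALS < azure_credentials.json
gh secret set TF_STATE_CONTAINER --body "tfstate"
gh secret set MGMT_RESOURCE_GROUP --body "rg-mytre-mgmt"
```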
|
|
@ -14,7 +14,7 @@ However, you can enable **DEBUG=true** in the configuration settings of the API
|
|||
1. Click New Application Setting.
|
||||
1. In the new dialog box, set Name = DEBUG and Value = true.
|
||||
|
||||
![API Debug True](./assets/api_debug_true.png)
|
||||
![API Debug True](../assets/api_debug_true.png)
|
||||
|
||||
With DEBUG mode enabled, when an error occurs at the API level the response will include a detailed error message, which should help in understanding why the payload was not accepted.
|
||||
|
||||
|
@ -22,7 +22,7 @@ With DEBUG mode enabled when an error occurs at the API level it will display a
|
|||
|
||||
You should also check that the version you are debugging/troubleshooting is the one actually deployed on the App Service. This can be checked using the Deployment Center. You can also follow the logs generated by the container in the Logs tab.
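You can also stream the container logs from the command line (a sketch; substitute the name of the API App Service and its resource group):

```cmd
az webapp log tail --name <api-app-name> --resource-group <resource-group>
```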
|
||||
|
||||
![Deployment Center](./assets/deployment_center.png)
|
||||
![Deployment Center](../assets/deployment_center.png)
|
||||
|
||||
## Checking the Service Bus
|
||||
|
||||
|
@ -42,7 +42,7 @@ which should eventually change as the message flows through the system. If the m
|
|||
1. Select the Service Bus from deployed resources and click Entities > Queues > workspacequeue.
|
||||
1. Select the Service Bus Explorer and the Peek tab to check for hanging messages.
|
||||
|
||||
![Service Bus](./assets/sb.png)
|
||||
![Service Bus](../assets/sb.png)
|
||||
|
||||
## Checking the logs in App Insights
|
||||
|
||||
|
@ -54,7 +54,7 @@ traces
|
|||
| where message contains tracking_id or operation_Id == tracking_id | sort by timestamp desc
|
||||
```
|
||||
|
||||
![App Insights](./assets/app_insights.png)
|
||||
![App Insights](../assets/app_insights.png)
|
||||
|
||||
For a successful deployment, the last message (at the top, since the order is timestamp descending) should look something like
|
||||
|
||||
|
@ -68,15 +68,16 @@ It should also be evident from the message flow where the current processing is
|
|||
|
||||
If you see messages hanging in the service bus queue then the resource processor is not up and running. Verify that the VMSS instance is up and healthy.
|
||||
|
||||
![VMSS Running](./assets/vmss_running.png)
|
||||
![VMSS Running](../assets/vmss_running.png)
|
||||
|
||||
The processor runs in a VNet, so you cannot connect to it directly. If the instance is up, you need to connect to it using Bastion. Bastion is already deployed; use the username ``adminuser``, and the password stored in the key vault under the secret ``resource-processor-vmss-password``.
|
||||
|
||||
> **Note:** You cannot see secrets unless you are added to a suitable Access Policy for the keyvault.
|
||||
!!! info
|
||||
You cannot see secrets unless you are added to a suitable Access Policy for the keyvault.
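If your account has been granted access, a sketch of reading the password with the Azure CLI (the `kv-<tre-id>` vault name pattern is an assumption; use the key vault in your core resource group):

```cmd
az keyvault secret show --vault-name kv-<tre-id> --name resource-processor-vmss-password --query value -o tsv
```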
|
||||
|
||||
![VMSS Password](./assets/vmss_password.png)
|
||||
![VMSS Password](../assets/vmss_password.png)
|
||||
|
||||
![Bastion](./assets/bastion.png "Bastion")
|
||||
![Bastion](../assets/bastion.png "Bastion")
|
||||
|
||||
After logging in, check the status of cloud-init, which is used to bootstrap the machine with Docker and start the processor. The log files for cloud-init are
|
||||
|
||||
|
@ -97,7 +98,8 @@ docker run -v /var/run/docker.sock:/var/run/docker.sock --env-file .env --name r
|
|||
|
||||
**runner_image:tag** can be obtained using ``docker ps``
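For example, to list just the images of the running containers:

```cmd
docker ps --format "{{.Image}}"
```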
|
||||
|
||||
> **Note:** All logs which you see from the resource processor should also be transferred to the App Insights instance as noted above, so it is not essential to follow the progress by logging into the instance. Logging into the instance and starting a container manually is helpful in live debugging.
|
||||
!!! info
|
||||
All logs which you see from the resource processor should also be transferred to the App Insights instance as noted above, so it is not essential to follow the progress by logging into the instance. Logging into the instance and starting a container manually is helpful in live debugging.
|
||||
|
||||
### Updating the running container
|
||||
|
|
@ -9,13 +9,13 @@ The supported development environments for Azure TRE are:
|
|||
Regardless of the development environment you choose, you will still need to fulfill the following prerequisites:
|
||||
|
||||
* [An Azure subscription](https://azure.microsoft.com/)
|
||||
* [Azure Active Directory (AAD)](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-whatis) with service principals created as explained in [Authentication & authorization](./auth.md)
|
||||
* [Azure Active Directory (AAD)](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-whatis) with service principals created as explained in [Authentication & authorization](../tre-admins/deploying-the-tre/auth.md)
|
||||
|
||||
### Obtain the source
|
||||
|
||||
Copy the source or clone the repository to your local machine or choose to use the pre-configured [dev container](#dev-container) via [GitHub Codespaces](https://github.com/features/codespaces).
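For example, to clone the repository:

```cmd
git clone https://github.com/microsoft/AzureTRE.git
cd AzureTRE
```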
|
||||
|
||||
![Clone options](../docs/assets/clone_options.png)
|
||||
![Clone options](../assets/clone_options.png)
|
||||
|
||||
## Dev container
|
||||
|
|
@ -8,13 +8,13 @@ The supported development environment for Azure TRE is devcontainer. Please see
|
|||
|
||||
### Prerequisites
|
||||
|
||||
1. Authentication and Authorization configuration set up as noted [here](auth.md)
|
||||
1. Authentication and Authorization configuration set up as noted [here](../tre-admins/deploying-the-tre/auth.md)
|
||||
1. A deployed Azure TRE environment.
|
||||
|
||||
## Running the Unit and E2E tests
|
||||
|
||||
To run the Unit and E2E tests follow the [testing documentation](testing.md).
|
||||
To run the Unit and E2E tests follow the [testing documentation](end-to-end-tests.md).
|
||||
|
||||
### Running the TRE API locally
|
||||
|
||||
To run the TRE API locally follow the steps noted [here](../api_app/README.md).
|
||||
To run the TRE API locally follow the steps noted [here](../azure-tre-overview/composition-service/api.md).
|
|
@ -9,13 +9,13 @@ To run the E2E tests locally:
|
|||
| ------------------------- | ----------- | ------------- |
|
||||
| `RESOURCE_LOCATION` | The `LOCATION` of the deployed Azure TRE environment. | `eastus` |
|
||||
| `TRE_ID` | The Azure TRE instance name - used for deployment of resources (can be set to anything when debugging locally). | `mytre-dev-3142` |
|
||||
| `RESOURCE` | The application (client) ID of the [TRE API](../docs/auth.md#tre-api) service principal. | |
|
||||
| `RESOURCE` | The application (client) ID of the [TRE API](../tre-admins/deploying-the-tre/auth.md#tre-api) service principal. | |
|
||||
| `AUTH_TENANT_ID` | The tenant ID of the Azure AD. | |
|
||||
| `CLIENT_ID` | The application (client) ID of the [E2E Test app](../docs/auth.md#tre-e2e-test) service principal. | |
|
||||
| `CLIENT_ID` | The application (client) ID of the [E2E Test app](../tre-admins/deploying-the-tre/auth.md#tre-e2e-test) service principal. | |
|
||||
| `SCOPE` | Scope(s) for the token. | `api://<TRE API app client ID>/Workspace.Read api://<TRE API app client ID>/Workspace.Write` |
|
||||
| `USERNAME` | The username of the [E2E User](../docs/auth.md#end-to-end-test-user). | |
|
||||
| `PASSWORD` | The password of the [E2E User](../docs/auth.md#end-to-end-test-user). | |
|
||||
| `AUTH_APP_CLIENT_ID` | The application (client) ID of the [workspaces app](auth.md#workspaces). | |
|
||||
| `USERNAME` | The username of the [E2E User](../tre-admins/deploying-the-tre/auth.md#end-to-end-test-user). | |
|
||||
| `PASSWORD` | The password of the [E2E User](../tre-admins/deploying-the-tre/auth.md#end-to-end-test-user). | |
|
||||
| `AUTH_APP_CLIENT_ID` | The application (client) ID of the [workspaces app](../tre-admins/deploying-the-tre/auth.md#workspaces). | |
|
||||
| `ACR_NAME` | The name of the TRE container registry. | |
|
||||
|
||||
1. Run the E2E tests:
|
|
@ -7,7 +7,8 @@ Workspace authors are free to choose the technology stack for provisioning resou
|
|||
|
||||
This document describes the requirements and the process to author a template.
|
||||
|
||||
> **Tip:** Use [the base workspace bundle](../templates/workspaces/base/README.md) and [others](../templates/workspaces/README.md) as reference or as the basis for the new bundle.
|
||||
!!! tip
|
||||
Use [the base workspace bundle](../workspace-templates/workspaces/base.md) as reference or as the basis for the new bundle.
|
||||
|
||||
To create a bundle from scratch, follow the Porter [Quickstart Guide](https://porter.sh/quickstart/) (the [`porter create` CLI command](https://porter.sh/cli/porter_create/) will generate a new bundle in the current directory).
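For instance, to scaffold a new bundle in an empty directory (a sketch; the directory name is arbitrary):

```cmd
mkdir my-workspace-template
cd my-workspace-template
porter create
```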
|
||||
|
||||
|
@ -74,7 +75,8 @@ Any **custom parameters** are picked up by Azure TRE API and will be queried fro
|
|||
|
||||
### Output
|
||||
|
||||
> **TBD:** After a workspace with virtual machines is implemented this section can be written based on that. ([Outputs in Porter documentation](https://porter.sh/author-bundles/#outputs) to be linked here too.)
|
||||
!!! todo
|
||||
After a workspace with virtual machines is implemented this section can be written based on that. ([Outputs in Porter documentation](https://porter.sh/author-bundles/#outputs) to be linked here too.)
|
||||
|
||||
### Actions
|
||||
|
||||
|
@ -115,7 +117,7 @@ The deployment runner of Azure TRE supports the following [Porter mixins](https:
|
|||
* [arm](https://porter.sh/mixins/arm/)
|
||||
* [terraform](https://github.com/getporter/terraform-mixin)
|
||||
|
||||
To add support for additional mixins including custom ones, [the Porter installation script of TRE](../devops/scripts/install_porter.sh) needs to be modified.
|
||||
To add support for additional mixins including custom ones, the TRE Porter installation script `/devops/scripts/install_porter.sh` needs to be modified.
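For illustration, a mixin is typically added with a line such as the following (a sketch; the `az` mixin is just an example — check the existing entries in the script for the exact pattern and any version pinning):

```cmd
porter mixins install az
```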
|
||||
|
||||
## Versioning
|
||||
|
||||
|
@ -125,4 +127,4 @@ TRE does not provide means to update an existing workspace to a newer version. I
|
|||
|
||||
## Publishing workspace bundle
|
||||
|
||||
See [Registering workspace templates](./registering-workspace-templates.md).
|
||||
See [Registering workspace templates](registering-workspace-templates.md).
|
|
@ -4,7 +4,7 @@ To deploy a new type of Workspace, we need to register a Workspace Template usin
|
|||
|
||||
## Porter Bundles
|
||||
|
||||
Porter bundles can either be registered interactively using the Swagger UI or automatically using the utility script (useful in CI/CD scenarios). The script is provided at: [../devops/scripts/publish_register_bundle.sh](../devops/scripts/publish_register_bundle.sh).
|
||||
Porter bundles can either be registered interactively using the Swagger UI or automatically using the utility script (useful in CI/CD scenarios). The script is provided at `/devops/scripts/publish_register_bundle.sh`.
|
||||
|
||||
The script can also be used to generate the payload required by the API without actually calling the API. The script carries out the following actions:
|
||||
|
||||
|
@ -24,14 +24,14 @@ The script can also be used to generate the payload required by the API without
|
|||
1. Log into the Swagger UI by clicking `Authorize`, then `Authorize` again. You will be redirected to the login page.
|
||||
1. Once logged in, click `Try it out` on the `POST` `/api/workspace-templates` operation:
|
||||
|
||||
![Post Workspace Template](./assets/post-template.png)
|
||||
![Post Workspace Template](../assets/post-template.png)
|
||||
|
||||
1. Paste the payload json generated earlier into the `Request body` field, then click `Execute`. Review the server response.
|
||||
1. To verify registration of the template, perform a `GET` operation on `/api/workspace-templates`. The name of the template should now be listed.
|
||||
|
||||
### Registration using script
|
||||
|
||||
To use the script to automatically register the template, a user that does not require an interactive login must be created as per the [e2e test user documentation here](auth.md#tre-e2e-test).
|
||||
To use the script to automatically register the template, a user that does not require an interactive login must be created as per the [e2e test user documentation here](../tre-admins/deploying-the-tre/auth.md#tre-e2e-test).
|
||||
|
||||
The script needs to be executed from within the bundle directory, for example `/templates/workspaces/azureml_devtestlabs/`.
|
||||
|
||||
|
@ -51,4 +51,5 @@ Options:
|
|||
|
||||
In addition to generating the payload, the script posts the payload to the `/api/workspace-templates` endpoint. Once registered, the template can be retrieved by a `GET` operation on `/api/workspace-templates`.
|
||||
|
||||
> The same procedure can be followed to register workspace service templates and user resource templates
|
||||
!!! tip
|
||||
Follow the same procedure to register workspace service templates and user resource templates.
|
|
@ -0,0 +1,31 @@
|
|||
# Guacamole User Resource Service bundle (Windows 10)
|
||||
|
||||
This is a User Resource Service template. It contains a Windows 10 virtual machine for use by TRE researchers, accessed via a [Guacamole server](https://guacamole.apache.org/).
|
||||
It blocks all inbound and outbound traffic to the internet and allows only RDP connections from within the vnet.
|
||||
|
||||
## Firewall Rules
|
||||
|
||||
Please be aware that the following Firewall rules are opened for the workspace when this service is deployed:
|
||||
|
||||
- Inbound connectivity from within the VNET to the RDP port
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A base workspace bundle installed](../workspaces/base.md)
|
||||
- [A guacamole workspace service bundle installed](../workspace-services/guacamole.md)
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
1. Create a copy of `templates/workspace_services/guacamole/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
| `PARENT_SERVICE_ID` | The unique identifier of this service's parent (a Guacamole service). |
|
||||
|
||||
1. Build and install the Guacamole Service bundle
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/guacamole/user_resources/guacamole-azure-win10vm
|
||||
make porter-install DIR=./templates/workspace_services/guacamole/user_resources/guacamole-azure-win10vm
|
||||
```
|
|
@ -1,4 +1,4 @@
|
|||
# Azure machine Learning Service bundle
|
||||
# Azure Machine Learning Service bundle
|
||||
|
||||
See: [https://azure.microsoft.com/services/machine-learning/](https://azure.microsoft.com/services/machine-learning/)
|
||||
|
||||
|
@ -25,17 +25,21 @@ Service Tags:
|
|||
- Storage.`{AzureRegion}`
|
||||
- AzureContainerRegistry
|
||||
|
||||
## Manual Deployment
|
||||
## Prerequisites
|
||||
|
||||
1. Prerequisites for deployment:
|
||||
- [A base workspace bundle installed](../../workspaces/base)
|
||||
- [A base workspace bundle installed](../../../templates/workspaces/base)
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
1. Create a copy of `templates/workspace_services/azureml/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
|
||||
1. Build and install the Azure ML Service bundle
|
||||
- `make porter-build DIR=./templates/workspace_services/azureml`
|
||||
- `make porter-install DIR=./templates/workspace_services/azureml`
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/azureml
|
||||
make porter-install DIR=./templates/workspace_services/azureml
|
||||
```
|
|
@ -2,24 +2,29 @@
|
|||
|
||||
See: [https://azure.microsoft.com/services/devtest-lab/](https://azure.microsoft.com/services/devtest-lab/)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A base workspace bundle installed](../../../templates/workspaces/base)
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
1. Prerequisites for deployment:
|
||||
- [A base workspace bundle installed](../../workspaces/base)
|
||||
|
||||
1. Create a copy of `templates/workspace_services/devtestlabs/.env.sample` with the name `.env` and update with the Workspace ID used when deploying the base workspace.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
|
||||
1. Build and install the Azure DevTest Labs Service bundle
|
||||
- `make porter-build DIR=./templates/workspace_services/devtestlabs`
|
||||
- `make porter-install DIR=./templates/workspace_services/devtestlabs`
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/devtestlabs
|
||||
make porter-install DIR=./templates/workspace_services/devtestlabs
|
||||
```
|
||||
|
||||
## Create and expose a VM via the Firewall
|
||||
|
||||
When this service used without a virtual desktop gateway it might be necessary to manually create and expose a VM via the TRE firewall. This method of exposing VMs is not recomended for large scale deployments given there will be multiple resources and rules to manually manage.
|
||||
When this service is used without a virtual desktop gateway, it might be necessary to manually create and expose a VM via the TRE firewall. This method of exposing VMs is not recommended for large-scale deployments given there will be multiple resources and rules to manually manage.
|
||||
|
||||
1. Create a DevTest Labs VM and open a port in the TRE firewall using the script provided.
|
||||
|
||||
|
@ -40,4 +45,4 @@ When this service used without a virtual desktop gateway it might be necessary t
|
|||
|
||||
```
|
||||
|
||||
2. Using the details provided by the script and a remote desktop connection client connect to the VM.
|
||||
2. Using the details provided by the script and a remote desktop connection client, connect to the VM.
|
|
@ -0,0 +1,33 @@
|
|||
# Guacamole Service bundle
|
||||
|
||||
See: [https://guacamole.apache.org/](https://guacamole.apache.org/)
|
||||
|
||||
## Firewall Rules
|
||||
|
||||
Please be aware that the following Firewall rules are opened for the workspace when this service is deployed:
|
||||
|
||||
URLs:
|
||||
|
||||
!!! todo
|
||||
Add firewall rules
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A base workspace bundle installed](../../../templates/workspaces/base)
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
|
||||
1. Create a copy of `templates/workspace_services/guacamole/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
| `GUACAMOLE_IMAGE_TAG` | Image tag of the Guacamole server |
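For example (a sketch; both values are placeholders):

```cmd
WORKSPACE_ID=ab12
GUACAMOLE_IMAGE_TAG=dev-001
```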
|
||||
|
||||
1. Build and install the Guacamole Service bundle
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/guacamole
|
||||
make porter-install DIR=./templates/workspace_services/guacamole
|
||||
```
|
Before Width: | Height: | Size: 157 KiB After Width: | Height: | Size: 157 KiB
|
@ -0,0 +1,73 @@
|
|||
# InnerEye DeepLearning Service Bundle
|
||||
|
||||
See: [https://github.com/microsoft/InnerEye-DeepLearning](https://github.com/microsoft/InnerEye-DeepLearning)
|
||||
|
||||
## Firewall Rules
|
||||
|
||||
Please be aware that the following Firewall rules are opened for the workspace when this service is deployed. These are all dependencies needed by InnerEye to be able to develop and train models:
|
||||
|
||||
URLs:
|
||||
|
||||
- *.anaconda.com
|
||||
- *.anaconda.org
|
||||
- binstar-cio-packages-prod.s3.amazonaws.com
|
||||
- github.com
|
||||
- *pypi.org
|
||||
- *pythonhosted.org
|
||||
- github-cloud.githubusercontent.com
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A workspace with an Azure ML Service bundle installed](../../../templates/workspace_services/azureml)
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
1. Create a copy of `templates/workspace_services/innereye_deeplearning/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
| `AZUREML_WORKSPACE_NAME` | Name of the Azure ML workspace deployed as part of the Azure ML workspace service prerequisite. |
|
||||
| `AZUREML_ACR_ID` | Azure resource ID of the Azure Container Registry deployed as part of the Azure ML workspace service prerequisite. |
|
||||
|
||||
1. Build and install the InnerEye Deep Learning Service bundle
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/innereye_deeplearning
|
||||
make porter-publish DIR=./templates/workspace_services/innereye_deeplearning
|
||||
make porter-install DIR=./templates/workspace_services/innereye_deeplearning
|
||||
```
|
||||
|
||||
## Running the InnerEye HelloWorld on AML Compute Cluster
|
||||
|
||||
1. Log onto a VM in the workspace, open PowerShell and run:
|
||||
|
||||
```cmd
|
||||
git clone https://github.com/microsoft/InnerEye-DeepLearning
|
||||
cd InnerEye-DeepLearning
|
||||
git lfs install
|
||||
git lfs pull
|
||||
conda init
|
||||
conda env create --file environment.yml
|
||||
```
|
||||
|
||||
1. Restart PowerShell and navigate to the "InnerEye-DeepLearning" folder
|
||||
|
||||
```cmd
|
||||
conda activate InnerEye
|
||||
```
|
||||
|
||||
1. Open Azure Storage Explorer and connect to your Storage Account using name and access key
|
||||
1. On the storage account, create a container named ```datasets``` with a folder named ```hello_world```
|
||||
1. Copy `dataset.csv` file from `Tests/ML/test_data/dataset.csv` to the `hello_world` folder
|
||||
1. Copy the whole `train_and_test_data` folder from `Test/ML/test_data/train_and_test_data` to the `hello_world` folder
|
||||
1. Update the following variables in `InnerEye/settings.yml`: subscription_id, resource_group, workspace_name, cluster (see [AML setup](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/setting_up_aml.md) for more details).
|
||||
1. Open your browser to ml.azure.com, log in, select the right subscription and AML workspace, and then navigate to `Data stores`. Create a new datastore named `innereyedatasets` and link it to your storage account and datasets container.
|
||||
1. Then, back in PowerShell, run
|
||||
|
||||
```cmd
|
||||
python InnerEye/ML/runner.py --model=HelloWorld --azureml=True
|
||||
```
|
||||
|
||||
1. The runner will provide you with a link and ask you to open it to log in. Copy the link, open it in a browser (Edge) on the DSVM, and log in. The run will continue after login.
|
||||
1. In your browser navigate to ml.azure.com and open the `Experiments` tab to follow the progress of the training
|
|
@ -0,0 +1,63 @@
|
|||
# InnerEye Inference service bundle
|
||||
|
||||
See: [https://github.com/microsoft/InnerEye-Inference](https://github.com/microsoft/InnerEye-Inference)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- [A workspace with an InnerEye Deep Learning bundle installed](../workspaces/inner-eye-deep-learning.md)
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
|
||||
1. Create a service principal with contributor rights over the subscription. This will be replaced with a Managed Identity in the future:
|
||||
|
||||
```cmd
|
||||
az ad sp create-for-rbac --name <sp-name> --role Contributor --scopes /subscriptions/<subscription-id>
|
||||
```
|
||||
|
||||
1. Create a copy of `templates/workspace_services/innereye_inference/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
| `AZUREML_WORKSPACE_NAME` | Name of the Azure ML workspace deployed as part of the Azure ML workspace service prerequisite. |
|
||||
| `AZUREML_ACR_ID` | ID of the Azure Container Registry deployed as part of the Azure ML workspace service prerequisite. |
|
||||
| `INFERENCE_SP_CLIENT_ID` | Service principal client ID used by the inference service to connect to Azure ML. Use the output from the step above. |
|
||||
| `INFERENCE_SP_CLIENT_SECRET` | Service principal client secret used by the inference service to connect to Azure ML. Use the output from the step above. |
|
||||
|
||||
1. Build and deploy the InnerEye Inference service
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/innereye_inference
|
||||
make porter-install DIR=./templates/workspace_services/innereye_inference
|
||||
```
|
||||
|
||||
1. Log onto a VM in the workspace and run:
|
||||
|
||||
```cmd
|
||||
git clone https://github.com/microsoft/InnerEye-Inference
|
||||
cd InnerEye-Inference
|
||||
az webapp up --name <inference-app-name> -g <resource-group-name>
|
||||
```
|
||||
|
||||
## Configuring and testing inference service
|
||||
|
||||
The workspace service provisions an App Service Plan and an App Service for hosting the inference webapp. The webapp will be integrated into the workspace network, allowing it to connect to the AML workspace. Following the setup you will need to:
|
||||
|
||||
1. Create a new container in your storage account for storing inference images called `inferencedatastore`.
|
||||
1. Create a new folder in that container called `imagedata`.
|
||||
1. Navigate to ml.azure.com > `Datastores` and create a new datastore named `inferencedatastore`, connecting it to the newly created container.
|
||||
1. The key used for authentication is the `inference_auth_key` provided as an output of the service deployment.
|
||||
1. Test the service by sending a GET or POST command using curl or Invoke-WebRequest:
|
||||
|
||||
Simple ping:
|
||||
|
||||
```cmd
|
||||
Invoke-WebRequest https://yourservicename.azurewebsites.net/v1/ping -Headers @{'Accept' = 'application/json'; 'API_AUTH_SECRET' = 'your-secret-1234-1123445'}
|
||||
```
|
||||
|
||||
Test connection with AML:
|
||||
|
||||
```cmd
|
||||
Invoke-WebRequest https://yourservicename.azurewebsites.net/v1/model/start/HelloWorld:1 -Method POST -Headers @{'Accept' = 'application/json'; 'API_AUTH_SECRET' = 'your-secret-1234-1123445'}
|
||||
```
|
|
@ -0,0 +1,48 @@
|
|||
# Azure ML and Dev Test Labs Workspace
|
||||
|
||||
This deploys a TRE workspace with the following services:
|
||||
|
||||
- [Azure ML](../../../templates/workspace_services/azureml)
|
||||
- [Azure Dev Test Labs](../../../templates/workspace_services/devtestlabs)
|
||||
|
||||
Please follow the above links to learn more about how to access the services and any firewall rules that they will open in the workspace.
|
||||
|
||||
## Manual deployment
|
||||
|
||||
1. Publish the bundles required for this workspace:
|
||||
|
||||
*Base Workspace*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspaces/base
|
||||
make porter-publish DIR=./templates/workspaces/base
|
||||
```
|
||||
|
||||
*Azure ML Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/azureml
|
||||
make porter-publish DIR=./templates/workspace_services/azureml
|
||||
```
|
||||
|
||||
*DevTest Labs Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/devtestlabs
|
||||
make porter-publish DIR=./templates/workspace_services/devtestlabs
|
||||
```
|
||||
|
||||
1. Create a copy of `workspaces/azureml_devtestlabs/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
|
||||
| `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`|
|
||||
|
||||
1. Build and install the workspace:
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspaces/azureml_devtestlabs
|
||||
make porter-publish DIR=./templates/workspaces/azureml_devtestlabs
|
||||
make porter-install DIR=./templates/workspaces/azureml_devtestlabs
|
||||
```
|
|
@ -0,0 +1,21 @@
|
|||
# Azure TRE base workspace
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- A TRE environment
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
1. Create a copy of `/templates/workspaces/base/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
|
||||
| `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`|
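A minimal sketch of the `.env`, using the example values from the table above:

```cmd
WORKSPACE_ID=ab12
ADDRESS_SPACE=192.168.1.0/24
```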
|
||||
|
||||
1. Build and deploy the base workspace
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspaces/base
|
||||
make porter-install DIR=./templates/workspaces/base
|
||||
```
|
|
@ -0,0 +1,71 @@
|
|||
# InnerEye Deep Learning and Inference Workspace
|
||||
|
||||
This deploys a TRE workspace with the following services:
|
||||
|
||||
- [Azure ML](../../../templates/workspace_services/azureml)
|
||||
- [Azure Dev Test Labs](../../../templates/workspace_services/devtestlabs)
|
||||
- [InnerEye deep learning](../../../templates/workspace_services/innereye_deeplearning)
|
||||
- [InnerEye Inference](../../../templates/workspace_services/innereye_inference)
|
||||
|
||||
Follow the links to learn more about how to access the services and any firewall rules that they will open in the workspace.
|
||||
|
||||
## Manual deployment
|
||||
|
||||
1. Publish the bundles required for this workspace:
|
||||
|
||||
*Base Workspace*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspaces/base
|
||||
make porter-publish DIR=./templates/workspaces/base
|
||||
```
|
||||
|
||||
*Azure ML Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/azureml
|
||||
make porter-publish DIR=./templates/workspace_services/azureml
|
||||
```
|
||||
|
||||
*DevTest Labs Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/devtestlabs
|
||||
make porter-publish DIR=./templates/workspace_services/devtestlabs
|
||||
```
|
||||
|
||||
*InnerEye Deep Learning Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/innereye_deeplearning
|
||||
make porter-publish DIR=./templates/workspace_services/innereye_deeplearning
|
||||
```
|
||||
|
||||
*InnerEye Inference Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/innereye_inference
|
||||
make porter-publish DIR=./templates/workspace_services/innereye_inference
|
||||
```
|
||||
|
||||
1. Create a service principal with contributor rights over Azure ML:
|
||||
|
||||
```cmd
|
||||
az ad sp create-for-rbac --name <sp-name> --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>
|
||||
```
|
||||
|
||||
1. Create a copy of `workspaces/innereye_deeplearning_inference/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
|
||||
| `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`|
|
||||
| `INFERENCE_SP_CLIENT_ID` | Service principal client ID used by the inference service to connect to Azure ML. Use the output from the step above. |
|
||||
| `INFERENCE_SP_CLIENT_SECRET` | Service principal client secret used by the inference service to connect to Azure ML. Use the output from the step above. |
|
||||
|
||||
1. Build and install the workspace:
|
||||
|
||||
```cmd
|
||||
make porter-publish DIR=./templates/workspaces/innereye_deeplearning_inference
|
||||
make porter-install DIR=./templates/workspaces/innereye_deeplearning_inference
|
||||
```
|
|
@ -0,0 +1,56 @@
|
|||
# InnerEye Deep Learning Workspace
|
||||
|
||||
This deploys a TRE workspace with the following services:
|
||||
|
||||
- [Azure ML](../../../templates/workspace_services/azureml)
|
||||
- [Azure Dev Test Labs](../../../templates/workspace_services/devtestlabs)
|
||||
- [InnerEye deep learning](../../../templates/workspace_services/innereye_deeplearning)
|
||||
|
||||
Follow the links to learn more about how to access the services and any firewall rules that they will open in the workspace.
|
||||
|
||||
## Manual deployment
|
||||
|
||||
1. Publish the bundles required for this workspace:
|
||||
|
||||
*Base Workspace*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspaces/base
|
||||
make porter-publish DIR=./templates/workspaces/base
|
||||
```
|
||||
|
||||
*Azure ML Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/azureml
|
||||
make porter-publish DIR=./templates/workspace_services/azureml
|
||||
```
|
||||
|
||||
*DevTest Labs Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/devtestlabs
|
||||
make porter-publish DIR=./templates/workspace_services/devtestlabs
|
||||
```
|
||||
|
||||
*InnerEye Deep Learning Service*
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspace_services/innereye_deeplearning
|
||||
make porter-publish DIR=./templates/workspace_services/innereye_deeplearning
|
||||
```
|
||||
|
||||
1. Create a copy of `workspaces/innereye_deeplearning/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
|
||||
| `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`|
|
||||
|
||||
1. Build and install the workspace:
|
||||
|
||||
```cmd
|
||||
make porter-build DIR=./templates/workspaces/innereye_deeplearning
|
||||
make porter-publish DIR=./templates/workspaces/innereye_deeplearning
|
||||
make porter-install DIR=./templates/workspaces/innereye_deeplearning
|
||||
```
|
|
@ -0,0 +1,11 @@
|
|||
{% extends "base.html" %}
|
||||
|
||||
{% block analytics %}
|
||||
<script type="text/javascript">
|
||||
(function(c,l,a,r,i,t,y){
|
||||
c[a]=c[a]||function(){(c[a].q=c[a].q||[]).push(arguments)};
|
||||
t=l.createElement(r);t.async=1;t.src="https://www.clarity.ms/tag/"+i;
|
||||
y=l.getElementsByTagName(r)[0];y.parentNode.insertBefore(t,y);
|
||||
})(window, document, "clarity", "script", "7gescazz1m");
|
||||
</script>
|
||||
{% endblock %}
|
|
@ -0,0 +1,79 @@
|
|||
site_name: Azure TRE
|
||||
site_url: https://github.com/microsoft/AzureTRE
|
||||
site_description: Azure TRE
|
||||
site_author: Microsoft
|
||||
|
||||
repo_url: https://github.com/microsoft/AzureTre/
|
||||
edit_uri: ""
|
||||
|
||||
theme:
|
||||
name: material
|
||||
custom_dir: mkdocs-overrides
|
||||
font:
|
||||
text: Roboto
|
||||
code: Roboto Mono
|
||||
palette:
|
||||
scheme: default
|
||||
primary: blue grey
|
||||
accent: indigo
|
||||
logo: assets/ms_icon.png
|
||||
favicon: assets/ms_icon.png
|
||||
features:
|
||||
- navigation.instant
|
||||
- navigation.indexes
|
||||
|
||||
plugins:
|
||||
- search
|
||||
|
||||
markdown_extensions:
|
||||
- meta
|
||||
- admonition
|
||||
- pymdownx.highlight
|
||||
- pymdownx.superfences
|
||||
- pymdownx.pathconverter
|
||||
- pymdownx.tabbed
|
||||
- mdx_truly_sane_lists
|
||||
- pymdownx.tasklist
|
||||
|
||||
nav:
|
||||
- Azure TRE Overview:
|
||||
- Concepts: 'azure-tre-overview/concepts.md'
|
||||
- User Roles: 'azure-tre-overview/user-roles.md'
|
||||
- Architecture: 'azure-tre-overview/architecture.md'
|
||||
- Network Architecture: 'azure-tre-overview/networking.md'
|
||||
- Composition Service:
|
||||
- API: 'azure-tre-overview/composition-service/api.md'
|
||||
- Resource Processor: 'azure-tre-overview/composition-service/resource-processor.md'
|
||||
- Shared Services:
|
||||
- Gitea (Source Mirror): 'azure-tre-overview/shared-services/gitea.md'
|
||||
- Nexus (Package Mirror): 'azure-tre-overview/shared-services/nexus.md'
|
||||
- Deployment Quickstart: 'deployment-quickstart.md'
|
||||
- TRE Admins:
|
||||
- Deploying the TRE:
|
||||
- Authentication and Authorization: 'tre-admins/deploying-the-tre/auth.md'
|
||||
- Bootstrapping: 'tre-admins/deploying-the-tre/bootstrapping.md'
|
||||
- Manual Deployment: 'tre-admins/deploying-the-tre/manual-deployment.md'
|
||||
- Workflow Deployment: 'tre-admins/deploying-the-tre/workflows.md'
|
||||
- Troubleshooting Guide: 'tre-admins/troubleshooting-guide.md'
|
||||
- TRE Developers:
|
||||
- Developer Guide: 'tre-developers/developer-guide.md'
|
||||
- Development Environment: 'tre-developers/dev-environment.md'
|
||||
- End to End Tests: 'tre-developers/end-to-end-tests.md'
|
||||
- TRE Workspace Authors:
|
||||
- Authoring Workspace Templates: 'tre-workspace-authors/authoring-workspace-templates.md'
|
||||
- Firewall Rules: 'tre-workspace-authors/firewall-rules.md'
|
||||
- Registering Workspace Templates: 'tre-workspace-authors/registering-workspace-templates.md'
|
||||
- Workspace Templates:
|
||||
- Workspaces:
|
||||
- Base: 'workspace-templates/workspaces/base.md'
|
||||
- Azure ML DevTest Labs: 'workspace-templates/workspaces/azure-ml-dev-test-labs.md'
|
||||
- InnerEye Deep Learning: 'workspace-templates/workspaces/inner-eye-deep-learning.md'
|
||||
- InnerEye Inferencing: 'workspace-templates/workspaces/inner-eye-deep-learning-inferencing.md'
|
||||
- Workspace Services:
|
||||
- Azure ML: 'workspace-templates/workspace-services/azure-ml.md'
|
||||
- DevTest Labs: 'workspace-templates/workspace-services/dev-test-labs.md'
|
||||
- Guacamole: 'workspace-templates/workspace-services/guacamole.md'
|
||||
- InnerEye Deep Learning: 'workspace-templates/workspace-services/inner-eye-deep-learning.md'
|
||||
- InnerEye Inferencing: 'workspace-templates/workspace-services/inner-eye-inference.md'
|
||||
- User Resources:
|
||||
- Guacamole Win10 VM: 'workspace-templates/user-resources/guacamole-win10-vm.md'
|
|
@ -5,3 +5,4 @@ pre-commit==2.13.0
|
|||
-r api_app/requirements.txt
|
||||
-r api_app/requirements-dev.txt
|
||||
-r resource_processor/vmss_porter/requirements.txt
|
||||
-r docs/requirements.txt
|
||||
|
|
|
@ -1,27 +0,0 @@
|
|||
# Guacamole Service bundle
|
||||
|
||||
See: [https://guacamole.apache.org/](https://guacamole.apache.org/)
|
||||
|
||||
## Firewall Rules
|
||||
|
||||
Please be aware that the following Firewall rules are opened for the workspace when this service is deployed:
|
||||
|
||||
URLs:
|
||||
|
||||
TBD
|
||||
|
||||
## Manual Deployment
|
||||
|
||||
1. Prerequisites for deployment:
|
||||
- [A base workspace bundle installed](../../workspaces/base)
|
||||
|
||||
1. Create a copy of `templates/workspace_services/guacamole/.env.sample` with the name `.env` and update the variables with the appropriate values.
|
||||
|
||||
| Environment variable name | Description |
|
||||
| ------------------------- | ----------- |
|
||||
| `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
|
||||
| `GUACAMOLE_IMAGE_TAG` | Image tag of the Guacamole server |
|
||||
|
||||
1. Build and install the Guacamole Service bundle
|
||||
- `make porter-build DIR=./templates/workspace_services/guacamole`
|
||||
- `make porter-install DIR=./templates/workspace_services/guacamole`
|
|
@ -1,27 +0,0 @@
# Guacamole User Resource Service bundle (Windows 10)

This is a User Resource Service template. It contains a Windows 10 virtual machine that TRE researchers connect to through a [Guacamole server](https://guacamole.apache.org/).
The virtual machine blocks all inbound and outbound traffic to the internet and allows only RDP connections from within the VNet.

## Firewall Rules

Please be aware that the following Firewall rules are opened for the workspace when this service is deployed:

Inbound connectivity from within the VNet to the RDP port.

## Manual Deployment

1. Prerequisites for deployment:

    - [A base workspace bundle installed](../../workspaces/base)
    - [A Guacamole workspace service bundle installed](../guacamole)

1. Create a copy of `templates/workspace_services/guacamole/.env.sample` with the name `.env` and update the variables with the appropriate values (an illustrative example follows these steps).

    | Environment variable name | Description |
    | ------------------------- | ----------- |
    | `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
    | `PARENT_SERVICE_ID` | The unique identifier of this service's parent (a Guacamole service). |

1. Build and install the Guacamole User Resource Service bundle:

    - `make porter-build DIR=./templates/workspace_services/guacamole/user_resources/guacamole-azure-win10vm`
    - `make porter-install DIR=./templates/workspace_services/guacamole/user_resources/guacamole-azure-win10vm`

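Again purely as an illustration (placeholder values, not from the original docs), the `.env` for this user resource might look like:

```
# .env for the Guacamole Windows 10 user resource (placeholder values)
WORKSPACE_ID=ab12                                       # 4 character ID of the base workspace
PARENT_SERVICE_ID=00000000-0000-0000-0000-000000000000  # ID of the parent Guacamole workspace service
```
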
@ -1,57 +0,0 @@
# InnerEye DeepLearning Service Bundle

See: [https://github.com/microsoft/InnerEye-DeepLearning](https://github.com/microsoft/InnerEye-DeepLearning)

## Firewall Rules

Please be aware that the following Firewall rules are opened for the workspace when this service is deployed. These are all dependencies needed by InnerEye to be able to develop and train models:

URLs:

- *.anaconda.com
- *.anaconda.org
- binstar-cio-packages-prod.s3.amazonaws.com
- github.com
- *pypi.org
- *pythonhosted.org
- github-cloud.githubusercontent.com

## Manual Deployment

1. Prerequisites for deployment:

    - [A workspace with an Azure ML Service bundle installed](../azureml)

1. Create a copy of `templates/workspace_services/innereye_deeplearning/.env.sample` with the name `.env` and update the variables with the appropriate values (an illustrative example follows these steps).

    | Environment variable name | Description |
    | ------------------------- | ----------- |
    | `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
    | `AZUREML_WORKSPACE_NAME` | Name of the Azure ML workspace deployed as part of the Azure ML workspace service prerequisite. |
    | `AZUREML_ACR_ID` | Azure resource ID of the Azure Container Registry deployed as part of the Azure ML workspace service prerequisite. |

1. Build and install the InnerEye Deep Learning Service bundle:

    - `make porter-build DIR=./templates/workspace_services/innereye_deeplearning`
    - `make porter-publish DIR=./templates/workspace_services/innereye_deeplearning`
    - `make porter-install DIR=./templates/workspace_services/innereye_deeplearning`

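To illustrate, a filled-in `.env` for this service might look like the following (all values are placeholders; the Azure ML workspace name and the ACR resource ID come from your Azure ML workspace service deployment):

```
# .env for the InnerEye Deep Learning workspace service (placeholder values)
WORKSPACE_ID=ab12
AZUREML_WORKSPACE_NAME=<your-aml-workspace-name>
AZUREML_ACR_ID=/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerRegistry/registries/<acr-name>
```
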
## Running the InnerEye HelloWorld on AML Compute Cluster

1. Log onto a VM in the workspace, open PowerShell and run:

    - ```git clone https://github.com/microsoft/InnerEye-DeepLearning```
    - ```cd InnerEye-DeepLearning```
    - ```git lfs install```
    - ```git lfs pull```
    - ```conda init```
    - ```conda env create --file environment.yml```
    - Restart PowerShell and navigate to the "InnerEye-DeepLearning" folder
    - ```conda activate InnerEye```
    - Open Azure Storage Explorer and connect to your storage account using its name and access key
    - On the storage account, create a container named ```datasets``` with a folder named ```hello_world```
    - Copy the ```dataset.csv``` file from ```Tests/ML/test_data/dataset.csv``` to the "hello_world" folder
    - Copy the whole ```train_and_test_data``` folder from ```Tests/ML/test_data/train_and_test_data``` to the "hello_world" folder
    - Update the following variables in ```InnerEye/settings.yml```: subscription_id, resource_group, workspace_name, cluster (see [AML setup](https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/setting_up_aml.md) for more details; a sketch of these fields follows this list)
    - Open your browser at ml.azure.com, log in, select the right subscription and AML workspace, and then navigate to "Datastores". Create a new datastore named "innereyedatasets" and link it to your storage account and the datasets container.
    - Back in PowerShell, run ```python InnerEye/ML/runner.py --model=HelloWorld --azureml=True```
    - The runner will provide you with a link and ask you to open it to log in. Copy the link, open it in a browser (Edge) on the DSVM and log in. The run will continue after login.
    - In your browser, navigate to ml.azure.com and open the "Experiments" tab to follow the progress of the training.

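For reference, the fields called out above in ```InnerEye/settings.yml``` would be filled in roughly as below. All values are placeholders, and the exact file layout is defined by the InnerEye-DeepLearning repository, so keep the surrounding structure of the ```settings.yml``` that ships with it and only change these entries:

```yaml
# Placeholder values for the fields referenced above (InnerEye/settings.yml)
subscription_id: 00000000-0000-0000-0000-000000000000
resource_group: <resource-group-of-the-aml-workspace>
workspace_name: <your-aml-workspace-name>
cluster: <your-compute-cluster-name>
```
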
@ -1,50 +0,0 @@
# InnerEye Inference service bundle

See: [https://github.com/microsoft/InnerEye-Inference](https://github.com/microsoft/InnerEye-Inference)

## Manual Deployment

1. Prerequisites for deployment:

    - [A workspace with an InnerEye Deep Learning bundle installed](../innereye_deeplearning)

1. Create a service principal with contributor rights over the subscription. This will be replaced with a Managed Identity in the future. The output of this command supplies the `INFERENCE_SP_*` values below (see the note after these steps):

    ```cmd
    az ad sp create-for-rbac --name <sp-name> --role Contributor --scopes /subscriptions/<subscription-id>
    ```

1. Create a copy of `templates/workspace_services/innereye_inference/.env.sample` with the name `.env` and update the variables with the appropriate values.

    | Environment variable name | Description |
    | ------------------------- | ----------- |
    | `WORKSPACE_ID` | The 4 character unique identifier used when deploying the base workspace bundle. |
    | `AZUREML_WORKSPACE_NAME` | Name of the Azure ML workspace deployed as part of the Azure ML workspace service prerequisite. |
    | `AZUREML_ACR_ID` | ID of the Azure Container Registry deployed as part of the Azure ML workspace service prerequisite. |
    | `INFERENCE_SP_CLIENT_ID` | Service principal client ID used by the inference service to connect to Azure ML. Use the output from the step above. |
    | `INFERENCE_SP_CLIENT_SECRET` | Service principal client secret used by the inference service to connect to Azure ML. Use the output from the step above. |

1. Build and deploy the InnerEye Inference service:

    - `make porter-build DIR=./templates/workspace_services/innereye_inference`
    - `make porter-install DIR=./templates/workspace_services/innereye_inference`

1. Log onto a VM in the workspace and run:

    ```cmd
    git clone https://github.com/microsoft/InnerEye-Inference
    cd InnerEye-Inference
    az webapp up --name <inference-app-name> -g <resource-group-name>
    ```

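As a reference point, `az ad sp create-for-rbac` prints a JSON object; its `appId` and `password` fields are the values to use for `INFERENCE_SP_CLIENT_ID` and `INFERENCE_SP_CLIENT_SECRET` respectively. The exact set of fields can vary with the Azure CLI version, and the values below are placeholders:

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "<sp-name>",
  "password": "<generated-client-secret>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
```
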
## Configuring and testing the inference service

The workspace service provisions an App Service Plan and an App Service for hosting the inference web app. The web app is integrated into the workspace network, allowing it to connect to the AML workspace. Following the setup you will need to:

- Create a new container called `inferencedatastore` in your storage account for storing inference images.
- Create a new folder called `imagedata` in that container.
- Navigate to ml.azure.com, open `Datastores` and create a new datastore named `inferencedatastore` connected to the newly created container.
- The key used for authentication is the `inference_auth_key` provided as an output of the service deployment.
- Test the service by sending a GET or POST command using curl or Invoke-WebRequest (a curl equivalent of the ping is sketched after this list):
    - Simple ping:
      ```Invoke-WebRequest https://yourservicename.azurewebsites.net/v1/ping -Headers @{'Accept' = 'application/json'; 'API_AUTH_SECRET' = 'your-secret-1234-1123445'}```
    - Test connection with AML:
      ```Invoke-WebRequest https://yourservicename.azurewebsites.net/v1/model/start/HelloWorld:1 -Method POST -Headers @{'Accept' = 'application/json'; 'API_AUTH_SECRET' = 'your-secret-1234-1123445'}```

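For completeness, a curl version of the simple ping test might look like this (the hostname and secret are placeholders, as above):

```cmd
curl -H "Accept: application/json" -H "API_AUTH_SECRET: your-secret-1234-1123445" https://yourservicename.azurewebsites.net/v1/ping
```
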
@ -4,9 +4,9 @@ In this folder Workspace Templates are located. These Templates for the Composit

| Template name | Description |
| --- | --- |
| [base](./base/README.md) | A base template that deploys a Resource Group, Virtual network, Subnets ... A good base to extend. |
| [azureml_devtestlabs](./azureml_devtestlabs/readme.md) | Deploys Azure Machine Learning & Dev Test Labs. |
| [innereye_deeplearning](./innereye_deeplearning/readme.md) | Deploys InnerEye Deep learning. |
| [innereye_deeplearning_inference](./innereye_deeplearning_inference/readme.md) | Deploys InnerEye inference service. |
| [base](../../docs/workspace-templates/workspaces/base.md) | A base template that deploys a Resource Group, Virtual network, Subnets ... A good base to extend. |
| [AzureML dev test labs](../../docs/workspace-templates/workspaces/azure-ml-dev-test-labs.md) | Deploys Azure Machine Learning & Dev Test Labs. |
| [InnerEye deep learning](../../docs/workspace-templates/workspaces/inner-eye-deep-learning.md) | Deploys InnerEye Deep learning. |
| [InnerEye deep learning inference](../../docs/workspace-templates/workspaces/inner-eye-deep-learning-inferencing.md) | Deploys InnerEye inference service. |

To customize or author new Workspace Templates read the [Authoring Workspace Templates](../../docs/authoring-workspace-templates.md).
To customize or author new Workspace Templates read the [Authoring Workspace Templates](../../docs/tre-workspace-authors/authoring-workspace-templates.md).

@ -1,37 +0,0 @@
# Azure ML and Dev Test Labs Workspace

This deploys a TRE workspace with the following services:

- [Azure ML](../../workspace_services/azureml)
- [Azure Dev Test Labs](../../workspace_services/devtestlabs)

Please follow the above links to learn more about how to access the services and any firewall rules that they will open in the workspace.

## Manual deployment

1. Publish the bundles required for this workspace:

    - Base Workspace

      `make porter-build DIR=./templates/workspaces/base`
      `make porter-publish DIR=./templates/workspaces/base`

    - Azure ML Service

      `make porter-build DIR=./templates/workspace_services/azureml`
      `make porter-publish DIR=./templates/workspace_services/azureml`

    - DevTest Labs Service

      `make porter-build DIR=./templates/workspace_services/devtestlabs`
      `make porter-publish DIR=./templates/workspace_services/devtestlabs`

1. Create a copy of `workspaces/azureml_devtestlabs/.env.sample` with the name `.env` and update the variables with the appropriate values.

    | Environment variable name | Description |
    | ------------------------- | ----------- |
    | `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
    | `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`. |

1. Build and install the workspace:

    `make porter-build DIR=./templates/workspaces/azureml_devtestlabs`
    `make porter-publish DIR=./templates/workspaces/azureml_devtestlabs`
    `make porter-install DIR=./templates/workspaces/azureml_devtestlabs`

@ -1,18 +0,0 @@
# Azure TRE base workspace

## Prerequisites

- A TRE environment

## Manual Deployment

1. Create a copy of `/templates/workspaces/base/.env.sample` with the name `.env` and update the variables with the appropriate values (an illustrative example follows these steps).

    | Environment variable name | Description |
    | ------------------------- | ----------- |
    | `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
    | `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`. |

1. Build and deploy the base workspace:

    - `make porter-build DIR=./templates/workspaces/base`
    - `make porter-install DIR=./templates/workspaces/base`

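As an illustration only (both values are placeholders drawn from the examples in the table above), the base workspace `.env` could be filled in like this:

```
# .env for the base workspace (placeholder values)
WORKSPACE_ID=ab12             # 4 alphanumeric characters; appears in resource names such as rg-<tre-id>-ab12
ADDRESS_SPACE=192.168.1.0/24  # CIDR range for the workspace virtual network
```
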
@ -1,42 +0,0 @@
# InnerEye Deep Learning Workspace

This deploys a TRE workspace with the following services:

- [Azure ML](../../workspace_services/azureml)
- [Azure Dev Test Labs](../../workspace_services/devtestlabs)
- [InnerEye deep learning](../../workspace_services/innereye_deeplearning)

Please follow the above links to learn more about how to access the services and any firewall rules that they will open in the workspace.

## Manual deployment

1. Publish the bundles required for this workspace:

    - Base Workspace

      `make porter-build DIR=./templates/workspaces/base`
      `make porter-publish DIR=./templates/workspaces/base`

    - Azure ML Service

      `make porter-build DIR=./templates/workspace_services/azureml`
      `make porter-publish DIR=./templates/workspace_services/azureml`

    - DevTest Labs Service

      `make porter-build DIR=./templates/workspace_services/devtestlabs`
      `make porter-publish DIR=./templates/workspace_services/devtestlabs`

    - InnerEye Deep Learning Service

      `make porter-build DIR=./templates/workspace_services/innereye_deeplearning`
      `make porter-publish DIR=./templates/workspace_services/innereye_deeplearning`

1. Create a copy of `workspaces/innereye_deeplearning/.env.sample` with the name `.env` and update the variables with the appropriate values.

    | Environment variable name | Description |
    | ------------------------- | ----------- |
    | `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
    | `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`. |

1. Build and install the workspace:

    `make porter-build DIR=./templates/workspaces/innereye_deeplearning`
    `make porter-publish DIR=./templates/workspaces/innereye_deeplearning`
    `make porter-install DIR=./templates/workspaces/innereye_deeplearning`

@ -1,54 +0,0 @@
# InnerEye Deep Learning and Inference Workspace

This deploys a TRE workspace with the following services:

- [Azure ML](../../workspace_services/azureml)
- [Azure Dev Test Labs](../../workspace_services/devtestlabs)
- [InnerEye deep learning](../../workspace_services/innereye_deeplearning)
- [InnerEye Inference](../../workspace_services/innereye_inference)

Please follow the above links to learn more about how to access the services and any firewall rules that they will open in the workspace.

## Manual deployment

1. Publish the bundles required for this workspace:

    - Base Workspace

      `make porter-build DIR=./templates/workspaces/base`
      `make porter-publish DIR=./templates/workspaces/base`

    - Azure ML Service

      `make porter-build DIR=./templates/workspace_services/azureml`
      `make porter-publish DIR=./templates/workspace_services/azureml`

    - DevTest Labs Service

      `make porter-build DIR=./templates/workspace_services/devtestlabs`
      `make porter-publish DIR=./templates/workspace_services/devtestlabs`

    - InnerEye Deep Learning Service

      `make porter-build DIR=./templates/workspace_services/innereye_deeplearning`
      `make porter-publish DIR=./templates/workspace_services/innereye_deeplearning`

    - InnerEye Inference Service

      `make porter-build DIR=./templates/workspace_services/innereye_inference`
      `make porter-publish DIR=./templates/workspace_services/innereye_inference`

1. Create a service principal with contributor rights over Azure ML (the output supplies the `INFERENCE_SP_*` values below; see the example after these steps):

    ```cmd
    az ad sp create-for-rbac --name <sp-name> --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>
    ```

1. Create a copy of `workspaces/innereye_deeplearning_inference/.env.sample` with the name `.env` and update the variables with the appropriate values.

    | Environment variable name | Description |
    | ------------------------- | ----------- |
    | `WORKSPACE_ID` | A 4 character unique identifier for the workspace for this TRE. `WORKSPACE_ID` can be found in the resource names of the workspace resources; for example, a `WORKSPACE_ID` of `ab12` will result in a resource group name for workspace of `rg-<tre-id>-ab12`. Allowed characters: Alphanumeric. |
    | `ADDRESS_SPACE` | The address space for the workspace virtual network. For example `192.168.1.0/24`. |
    | `INFERENCE_SP_CLIENT_ID` | Service principal client ID used by the inference service to connect to Azure ML. Use the output from the step above. |
    | `INFERENCE_SP_CLIENT_SECRET` | Service principal client secret used by the inference service to connect to Azure ML. Use the output from the step above. |

1. Build and install the workspace:

    `make porter-publish DIR=./templates/workspaces/innereye_deeplearning_inference`
    `make porter-install DIR=./templates/workspaces/innereye_deeplearning_inference`

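To make the mapping concrete, a filled-in `.env` for this workspace could look like the following. All values are placeholders; `INFERENCE_SP_CLIENT_ID` and `INFERENCE_SP_CLIENT_SECRET` come from the `appId` and `password` fields printed by the `az ad sp create-for-rbac` command above:

```
# .env for the InnerEye Deep Learning and Inference workspace (placeholder values)
WORKSPACE_ID=ab12
ADDRESS_SPACE=192.168.1.0/24
INFERENCE_SP_CLIENT_ID=00000000-0000-0000-0000-000000000000
INFERENCE_SP_CLIENT_SECRET=<generated-client-secret>
```
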