This commit is contained in:
Pericles Alves 2021-02-05 18:25:29 -08:00
Parent 0571871f83
Commit 92d87028fe
3 changed files with 251 additions and 0 deletions

View file

@ -368,6 +368,9 @@ folder we provide an example of a custom adapter written in TypeScript that forw
This adapter uses a client automatically generated with [AutoRest](https://github.com/Azure/autorest) using the Bridge swagger available
under `Docs/swagger.json`. The example also contains the necessary ARM template to deploy it as a sidecar with the Bridge.
We also provide an example of how to deploy multiple adapters (`Samples/MultipleAdapterDeployment`). In this setup,
each adapter is deployed as a separate container and requests are routed based on the path.
The diagrams below illustrate the original Bridge architecture and how a custom adapter fits into it:
![Original architecture](Docs/Assets/original-architecture.png "original architecture")

View file

@ -0,0 +1,49 @@
# Multiple adapter deployment
The `deploy-multi-adapter.sh` script in this folder can be used to deploy multiple adapters to an existing
Device Bridge instance (to deploy the Bridge for the first time, use the template available in the root of the repository).
If the number of adapters is the same as in the current instance, the update is done in place (containers are restarted with their new versions). If new adapters are added, the container group is deleted and recreated with the correct configuration.
## Parameters
Before deploying, replace the parameters at the top of the script file with the appropriate values.
The script will look for an existing instance of the Device Bridge in the provided resource group.
## Adapter configuration and request routing
Each adapter is deployed as a separate container. The webserver is configured to route external requests to
each adapter based on the first part of the request path. For instance, consider the parameters below:
```bash
adapterImages=("myacr.io/adapterimage1" "myacr.io/adapterimage2" "myacr.io/adapterimage3")
adapterPathPrefixes=("adapter1" "adapter2" "adapter3")
```
All requests whose path starts with `/adapter1/` (e.g., https://mybridge.azurecontainer.io/adapter1/message) will be
routed to the adapter running image `myacr.io/adapterimage1`. Requests whose path starts with `/adapter2/` will
be routed to `myacr.io/adapterimage2`, and so on.
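The prefix match can be sketched as a small TypeScript helper (hypothetical, for illustration only; `routeIndex` and `adapterPort` are not part of the Bridge): given a request path and the configured prefixes, it returns the index of the matching adapter, which also determines its container port (3000 + index in the deployment script).

```typescript
const adapterPathPrefixes = ["adapter1", "adapter2", "adapter3"];

// Index of the adapter that should handle the path, or -1 if no prefix
// matches (the webserver responds 404 in that case).
function routeIndex(path: string, prefixes: string[]): number {
    const first = path.split('/').filter(part => part.length > 0)[0];
    return prefixes.findIndex(prefix => prefix === first);
}

// The deployment script exposes adapter i externally on port 3000 + i.
function adapterPort(index: number): number {
    return 3000 + index;
}

// e.g. routeIndex("/adapter2/message", adapterPathPrefixes) → 1, port 3001
```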
## Ports and environment variables
Each adapter will be given a port for incoming external requests. It will also have an internal port
visible to the local container network, so it can receive requests from other adapters or the bridge core. These and
other parameters are passed to the container as environment variables as follows:
- `PORT`: port on which the adapter should listen for external requests. The webserver will route to this port any request that
matches the path prefix of the adapter.
- `INTERNAL_PORT`: port that is only exposed to the internal network and that can be used, for instance, to
listen for requests from the Bridge core.
- `BRIDGE_PORT`: internal port on which the Bridge core container is listening. Requests for core operations, such as sending
telemetry or subscribing to events, should be sent to this port (e.g., `http://localhost:{BRIDGE_PORT}/devices/{deviceId}/messages/events`).
- `PATH_PREFIX`: the path used by the webserver to route external requests to this adapter. The adapter code should listen for
requests whose first path component equals this prefix.
The following is an example of an adapter file written in TypeScript that uses these variables:
```typescript
import express from 'express';
const port = process.env['PORT'];
const pathPrefix = process.env['PATH_PREFIX'];
const externalApp = express();
externalApp.post(`/${pathPrefix}/event`, async (req, res, next) => res.sendStatus(200));
externalApp.listen(port, () => console.log(`External server listening at http://localhost:${port}`));
```
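The example above only listens for external requests. The sketch below shows how an adapter might also use `INTERNAL_PORT` and `BRIDGE_PORT`. It makes a few assumptions: Node's built-in `http` module instead of express, and a request body shape of `{ data: ... }` for telemetry, which should be checked against `Docs/swagger.json`.

```typescript
import * as http from 'http';

const internalPort = process.env['INTERNAL_PORT'];
// 5001 is the default BRIDGE_PORT set by the deployment script.
const bridgePort = process.env['BRIDGE_PORT'] ?? '5001';

// URL of the Bridge core telemetry operation for a given device.
function telemetryUrl(port: string, deviceId: string): string {
    return `http://localhost:${port}/devices/${deviceId}/messages/events`;
}

// Forwards a telemetry payload to the Bridge core over the internal network.
// NOTE: the { data: ... } body shape is an assumption; see Docs/swagger.json.
function sendTelemetry(deviceId: string, payload: object): void {
    const req = http.request(telemetryUrl(bridgePort, deviceId), {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
    });
    req.end(JSON.stringify({ data: payload }));
}

// Internal server: reachable only from other containers in the group,
// e.g. for requests sent by the Bridge core or sibling adapters.
const internalApp = http.createServer((req, res) => {
    res.writeHead(200);
    res.end();
});

// Listen only when the deployment actually provides an internal port.
if (internalPort) {
    internalApp.listen(Number(internalPort), () =>
        console.log(`Internal server listening on port ${internalPort}`));
}
```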

View file

@ -0,0 +1,199 @@
#!/bin/bash
####################################################################
# Deploys multiple adapters to an existing device bridge instance. #
# Requests are routed to each adapter based on a path prefix. #
####################################################################
# Replace the parameters below with the appropriate values
####################################################################
resourceGroup="<resource-group>"
acrServer="<acr-server>"
acrUsername="<acr-username>"
acrPassword="<acr-password>"
logAnalyticsId="<log-analytics-workspace-id>"
logAnalyticsKey="<log-analytics-workspace-key>"
adapterImages=("<adapter-image-1>" "<adapter-image-2>" "<adapter-image-3>")
adapterPathPrefixes=("<path-prefix-1>" "<path-prefix-2>" "<path-prefix-3>")
####################################################################
# Check for existing bridge resources in the resource group (we locate the resources by name)
echo "Fetching existing Bridge resources..."
containerGroupName=$(az container list --resource-group $resourceGroup --query "[?starts_with(name, 'iotc-container-groups-')].{name:name}[?"'!'"contains(name, '-setup-')]" --output tsv)
echo "Bridge container group:" $containerGroupName
location=$(az container show --resource-group $resourceGroup --name $containerGroupName --query location --output tsv)
dnsNameLabel=$(az container show --resource-group $resourceGroup --name $containerGroupName --query 'ipAddress.dnsNameLabel' --output tsv)
bridgeContainerName=$(az container show --resource-group $resourceGroup --name $containerGroupName --query "containers[?starts_with(name, 'iotc-bridge-container-')][name]" --output tsv)
bridgeImage=$(az container show --resource-group $resourceGroup --name $containerGroupName --query "containers[?starts_with(name, 'iotc-bridge-container-')][image]" --output tsv)
keyvaultName=$(az keyvault list --resource-group $resourceGroup --query "[?starts_with(name, 'iotc-kv-')].[name]" --output tsv)
echo "Key Vault:" $keyvaultName
storageAccountName=$(az storage account list --resource-group $resourceGroup --query "[?starts_with(name, 'iotcsa')].[name]" --output tsv)
storageAccountKey=$(az storage account keys list --resource-group $resourceGroup --account-name $storageAccountName --query '[0].value' -o tsv)
echo "Storage account:" $storageAccountName
# Generate Caddyfile
caddyFile="${dnsNameLabel}.${location}.azurecontainer.io
route {
"
adapterCount=${#adapterImages[@]}
for ((i=0;i<adapterCount;i++)); do
caddyFile="${caddyFile} reverse_proxy /${adapterPathPrefixes[i]}/* localhost:3$(printf %03d $i)
"
done
caddyFile="${caddyFile} respond 404
}"
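# For example, with adapterPathPrefixes=("adapter1" "adapter2" "adapter3")
# the generated Caddyfile would look like this (external ports follow the
# 3000 + i scheme used below):
#
#   <dns-label>.<location>.azurecontainer.io
#   route {
#       reverse_proxy /adapter1/* localhost:3000
#       reverse_proxy /adapter2/* localhost:3001
#       reverse_proxy /adapter3/* localhost:3002
#       respond 404
#   }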
# -w 0 keeps the base64 output on a single line so it remains a valid YAML scalar
base64Caddyfile=$(echo "$caddyFile" | base64 -w 0)
echo -e "\nCaddyfile config"
echo -e "----------------"
echo "$caddyFile"
yamlFile=$(mktemp)
# Add first part of YAML deployment file
echo "type: Microsoft.ContainerInstance/containerGroups
apiVersion: 2019-12-01
name: \"${containerGroupName}\"
location: \"${location}\"
identity:
  type: SystemAssigned
properties:
  sku: Standard
  containers:" >> "$yamlFile"
# Add the YAML deployment definition for each adapter
for ((i=0;i<adapterCount;i++)); do
    adapterDefinition="  - name: \"${adapterPathPrefixes[i]}\"
    properties:
      image: \"${adapterImages[i]}\"
      ports:
      - port: 3$(printf %03d $i)
      - port: 4$(printf %03d $i)
      environmentVariables:
      - name: PORT
        value: \"3$(printf %03d $i)\"
      - name: INTERNAL_PORT
        value: \"4$(printf %03d $i)\"
      - name: BRIDGE_PORT
        value: \"5001\"
      - name: PATH_PREFIX
        value: \"${adapterPathPrefixes[i]}\"
      resources:
        requests:
          memoryInGB: 0.5
          cpu: 0.5"
    echo "${adapterDefinition}" >> "$yamlFile"
done
# Add the last part of the YAML file
echo "  - name: \"${bridgeContainerName}\"
    properties:
      image: \"${bridgeImage}\"
      ports:
      - port: 5001
      environmentVariables:
      - name: MAX_POOL_SIZE
        value: \"50\"
      - name: DEVICE_RAMPUP_BATCH_SIZE
        value: \"150\"
      - name: DEVICE_RAMPUP_BATCH_INTERVAL_MS
        value: \"1000\"
      - name: KV_URL
        value: \"https://${keyvaultName}.vault.azure.net/\"
      - name: PORT
        value: \"5001\"
      resources:
        requests:
          memoryInGB: 1.5
          cpu: 0.8
  - name: caddy-ssl-server
    properties:
      image: caddy:latest
      command:
      - \"caddy\"
      - \"run\"
      - \"--config\"
      - \"/mnt/caddyfile\"
      - \"--adapter\"
      - \"caddyfile\"
      ports:
      - protocol: TCP
        port: 443
      - protocol: TCP
        port: 80
      environmentVariables: []
      resources:
        requests:
          memoryInGB: 0.5
          cpu: 0.2
      volumeMounts:
      - name: data
        mountPath: \"/data\"
      - name: config
        mountPath: \"/config\"
      - name: caddyfile
        mountPath: \"/mnt\"
  volumes:
  - name: data
    azureFile:
      shareName: bridge
      storageAccountName: \"${storageAccountName}\"
      storageAccountKey: \"${storageAccountKey}\"
  - name: config
    azureFile:
      shareName: bridge
      storageAccountName: \"${storageAccountName}\"
      storageAccountKey: \"${storageAccountKey}\"
  - name: caddyfile
    secret:
      caddyfile: \"${base64Caddyfile}\"
  initContainers: []
  restartPolicy: Always
  imageRegistryCredentials:
  - server: \"${acrServer}\"
    username: \"${acrUsername}\"
    password: \"${acrPassword}\"
  diagnostics:
    logAnalytics:
      workspaceId: \"${logAnalyticsId}\"
      workspaceKey: \"${logAnalyticsKey}\"
  ipAddress:
    ports:
    - protocol: TCP
      port: 443
    type: Public
    dnsNameLabel: \"${dnsNameLabel}\"
  osType: Linux" >> "$yamlFile"
echo -e "\nYAML deployment definition"
echo -e "-------------------------"
cat "$yamlFile"
# Deploy final YAML
echo -e "\nTrying to update containers in place..."
updateResult=$(az container create --resource-group $resourceGroup --file $yamlFile 2>&1)
# If the update changes the number of adapters, the container group needs to be deleted and recreated
if [[ $updateResult == *"BadRequestError"*"delete it first and then create a new one"* ]]; then
echo "Updates can't be made in place. Will delete and recreate the container group"
az container delete --resource-group $resourceGroup --name $containerGroupName
az container create --resource-group $resourceGroup --file $yamlFile
identity=$(az container show --resource-group $resourceGroup --name $containerGroupName --query identity.principalId --out tsv)
az keyvault set-policy --name $keyvaultName --resource-group $resourceGroup --object-id $identity --secret-permissions get list
else
echo "$updateResult"
echo "Container group successfully updated"
fi
rm "$yamlFile"