Merge pull request #4 from Azure/anfurgiu/init

Anfurgiu/init
This commit is contained in:
Drew Furgiuele 2022-10-11 14:45:31 -04:00 committed by GitHub
Parents 43d8084842 c6543a2934
Commit 3cb6e0aa7d
21 changed files with 596 additions and 172 deletions

6
.gitignore vendored
View file

@@ -350,3 +350,9 @@ MigrationBackup/
# Ionide (cross platform F# VS Code tools) working folder
.ionide/
# Local Development Resources/Artifacts
.DS_Store
azure/bootstrap-local.bicep
oms-deployments/omenvironment-pgsql.yml
oms-deployments/omenvironment.yml

View file

@@ -1,6 +1,6 @@
# QuickStart Guide: Sterling Order Management on Azure
This repository provides deployment guidance and best practices for running IBM Sterling Order Management (OMS) on Azure Redhat OpenShift (ARO) in the Azure public cloud. This guide was written and tested with Azure RedHat OpenShift 4.9.9 and OMS Version X.XX
This repository provides deployment guidance and best practices for running IBM Sterling Order Management (OMS) on Azure Redhat OpenShift (ARO) in the Azure public cloud. This guide was written and tested with Azure RedHat OpenShift 4.9.9.
> 🚧 **NOTE**: The scripts contained within this repo were written with the intention of testing various configurations and integrations on Azure. They allow you to quickly deploy the required infrastructure on Azure so that you can migrate an existing OMS to Azure, or start fresh with new development.
@@ -37,15 +37,15 @@ This repository provides deployment guidance and best practices for running IBM
- [Create Required Database User & Assign Permissions](#create-required-database-user--assign-permissions)
- [Update Maximum Connections to Azure PostgreSQL Database (if applicable)](#update-maximum-connections-to-azure-postgresql-database-if-applicable)
- [Create OMS Secret](#create-oms-secret)
- [Create MQ Bindings ConfigMap](#create-mq-bindings-configmap)
- [Create MQ Bindings ConfigMap (if needed)](#create-mq-bindings-configmap-if-needed)
- [Create Required PVC(s)](#create-required-pvcs)
- [Create RBAC Role](#create-rbac-role)
- [Pushing (and pulling) your containers to an Azure Container Registry](#pushing-and-pulling-your-containers-to-an-azure-container-registry)
- [SSL Connections and Keystore/Truststore Configuration](#ssl-connections-and-keystoretruststore-configuration)
- [Step 7: Create IBM Entitlement Key Secret](#step-7-create-ibm-entitlement-key-secret)
- [Step 8: Deploying OMS](#step-8-deploying-oms)
- [Deploying OMS Via the OpenShift Operator](#deploying-oms-via-the-openshift-operator)
- [Step 8: Post Deployment Tasks](#step-8-post-deployment-tasks)
- [Deploying OMS Via the OpenShift Operator](#deploying-oms-via-the-openshift-operator)
- [Step 9: Post Deployment Tasks](#step-9-post-deployment-tasks)
- [Right-sizing / Resizing your ARO Cluster](#right-sizing--resizing-your-aro-cluster)
- [Licensing your DB2 and MQ Instances](#licensing-your-db2-and-mq-instances)
- [Migrating Your Data](#migrating-your-data)
@@ -54,21 +54,20 @@ This repository provides deployment guidance and best practices for running IBM
## What's in this repository?
This repository serves two purposes: first, it is designed to give you an idea of what sort of architecture you can consider deploying into your Azure subscription to support
running your Sterling Order Management workload(s) as well as best practice considerations for scale, performance, and security.
This repository serves two purposes: first, it is designed to give you an idea of what sort of architecture you can consider deploying into your Azure subscription to support running your Sterling Order Management workload(s) as well as best practice considerations for scale, performance, and security.
Secondly, there are a series of sample deployment templates and configuration scripts designed to get you up and running with an environment ready for you to deploy your existing Sterling OMS resources into. These resources are broken out into the following directories within this repository:
- ./azure - Contains a series of .bicep files that can be used to bootstrap a reference deployment of all the required Azure resources for your deployment
- ./config - Contains files used by the installer examples or Azure automation scripts to configure services or other requirements of the platform:
- activemq - Contains sample Dockerfile for creating an ActiveMQ container, and deployment subfolders for sample deployments in Azure Container Instances and Azure RedHat OpenShift
- azure-file-storage - Contains artifacts for configuring Azure File Storage CSI drivers in Azure RedHat OpenShift
- db2 - Contains a sample response file (.rsp) for silent, unattended installs of DB2
- installers - Automation scripts used by the bootstrap installer
- mq - Contains instructions for deploying HA-native MQ containers inside of Azure Kubernetes Service
- oms - Contains sample .yaml files for configuring OMS volumes, claims, pull secrets, and RBAC
- operators - Contains OpenShift operator deployment .yaml files
- ./datamigration - Contains a sample Azure Data Factory Pipeline and instructions for helping migrate DB2 data to PostgreSQL
- [```./azure```](./azure/README.md) - Contains a series of .bicep files that can be used to bootstrap a reference deployment of all the required Azure resources for your deployment
- [```./config```](./config/) - Contains files used by the installer examples or Azure automation scripts to configure services or other requirements of the platform:
- [```./config/activemq```](./config/activemq/) - Contains sample Dockerfile for creating an ActiveMQ container, and deployment subfolders for sample deployments in Azure Container Instances and Azure RedHat OpenShift
- [```./config/azure-file-storage```](./config/azure-file-storage/) - Contains artifacts for configuring Azure File Storage CSI drivers in Azure RedHat OpenShift
- [```./config/db2```](./config/db2/) - Contains a sample response file (.rsp) for silent, unattended installs of DB2
- [```./config/installers```](./config/installers/) - Automation scripts used by the bootstrap installer
- [```./config/mq```](./config/mq) - Contains instructions for deploying HA-native MQ containers inside of Azure Kubernetes Service
- [```./config/oms```](./config/oms/) - Contains sample .yaml files for configuring OMS volumes, claims, pull secrets, and RBAC
- [```./config/operators```](./config/operators/) - Contains OpenShift operator deployment .yaml files
- [```./datamigration```](./datamigration/README.md) - Contains a sample Azure Data Factory Pipeline and instructions for helping migrate DB2 data to PostgreSQL
If you are interested in a bootstrap environment to deploy Sterling OMS into, please see this README that explains more: [Sterling Azure Bootstrap Resources](./azure/README.md)
@@ -101,20 +100,20 @@ To successfully install and configure OMS on Azure, you'll need to make sure you
* An active Azure subscription
* A quota of at least 40 vCPU allowed for your VM type(s) of choice. Request a quota increase if needed.
* You will need subscription owner permissions for the deployment.
* A target resource group to deploy to
* You will need to deploy a JMS-based messaging system into your environment. Most likely, this is IBM MQ, but there are other alternatives. As such, you can:
* Deploy Virtual Machines configured with appropriate storage and install the messaging components yourself, OR
* Deploy MQ in an Azure Kubernetes Cluster (or ARO) with a High Availability configuration, OR
* Deploy one or more alternative JMS Broker nodes in Azure Container Instances
* You will need to deploy a backend database as part of your environment. Depending on your chosen platform, you currently have the following options:
* For PostgreSQL:
* The most recent Operator for IBM Sterling OMS has support for PostgreSQL. As such, you can deploy Azure Database for PostgreSQL - Flexible Server in your Azure subscription. Azure Database for PostgreSQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. You can read more about Flexible Server here: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/overview
* For IBM DB2:
* You can obtain a licensed copy of DB2 from the IBM Passport Advantage Website: https://www-112.ibm.com/software/howtobuy/passportadvantage/paoreseller/LoginPage?abu=
* IBM also offers a community edition of DB2 for testing and development purposes: https://www.ibm.com/products/db2-database/developers
* You can place this installation media on an Azure Storage Account and download the images to your Virtual Machines to install the software
* For Oracle:
* IBM provides guidance around configuring Oracle for Sterling OMS: https://www.ibm.com/products/db2-database/developers
* The images provided for OMS do not include Oracle drivers; your images will need to be updated with these drivers. For more information, see this support document:
* For PostgreSQL:
* The most recent Operator for IBM Sterling OMS has support for PostgreSQL. As such, you can deploy Azure PostgreSQL Database Flexible Server in your Azure subscription
* The Azure CLI: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli
* OpenShift Command Line Tools (oc): https://mirror.openshift.com/pub/openshift-v4/clients/ocp/
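If you need to install ```oc``` on a Linux machine, here is a minimal sketch using the latest client from that mirror (the install path is an assumption; this mirrors what the dev VM cloud-init in this repo does):
```bash
# Download and unpack the latest OpenShift CLI from the public mirror
wget -nv https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz -O /tmp/openshift-client-linux.tar.gz
tar -xzf /tmp/openshift-client-linux.tar.gz -C /tmp
# Put oc on the PATH (destination path is an assumption; adjust as needed)
sudo mv /tmp/oc /usr/local/bin/oc
```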
@@ -131,10 +130,11 @@ At a minimum, your Azure environment should contain a resource group that contai
- development (/28): this subnet can be used to deploy developer virtual machines, if needed, to develop, test, and deploy OMS customized container images securely to the Azure Container Registry.
- endpoints (/25): this subnet exists for hosting private endpoints for Azure services such as storage accounts, container registries, and other services to provide private connectivity.
- data (/26): this subnet should be used to deploy Azure PostgreSQL Flexible Server, as that service requires a delegated subnet
- anf (/24): this subnet should be delegated to Azure NetApp Files (if you deploy DB2 on virtual machines)
- Note: This is by no means a complete or exhaustive list; depending on other components you wish to deploy, you should plan and/or expand your address spaces and subnets as needed
2. Azure Premium Files storage account: For hosting MQ Queue Manager data
3. Azure Virtual Machines:
- (If needed) At least one Virtual Machine to host IBM DB2. For production scenarios, you should consider configuring more than one host and using high availability for the instances. More information on this configuration can be found here:
- (If needed) At least one Virtual Machine to host IBM DB2. For production scenarios, you should consider configuring more than one host and using high availability for the instances. More information on this configuration (as well as performance guidelines) can be found here: https://learn.microsoft.com/en-us/azure/virtual-machines/workloads/sap/dbms_guide_ibm
- (If needed) At least one Virtual Machine to host IBM MQ. For production scenarios, you should consider configuring more than one host and using a shared storage location (such as Azure Premium Files) for the queue storage
- A Jump Box VM: This machine should be deployed and configured with any management tools you'll need to administer your environment.
- Development VM(s): Machines that can be used by developers to connect to any required cluster, data, or queue resources inside the virtual network
@@ -176,6 +176,9 @@ Once all of the networking requirements are met, you should install Azure RedHat
You can create a new cluster through the Azure Portal, or from the Azure CLI:
```bash
#Note: Set these variable values as needed
RESOURCEGROUP=""
CLUSTER=""
az aro create --resource-group $RESOURCEGROUP --name $CLUSTER --vnet aro-vnet --master-subnet master-subnet --worker-subnet worker-subnet --client-id <your application client ID> --client-secret <your generated secret>
```
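Once the cluster is provisioned, you can retrieve the kubeadmin credentials and console URL with the CLI; a typical follow-up, reusing the variables above:
```bash
# List the kubeadmin credentials for the new cluster
az aro list-credentials --resource-group $RESOURCEGROUP --name $CLUSTER
# Print the web console URL
az aro show --resource-group $RESOURCEGROUP --name $CLUSTER --query "consoleProfile.url" -o tsv
```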
@@ -221,7 +224,7 @@ cd /var/ibm/db2/V11.5/install/pcmk
sudo ./db2cppcmk -i
```
For more information, please refer to this documentation about building a highly-available DB2 instance in Azure: https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-ibm-db2-luw
For more information, please refer to this documentation about building a highly-available DB2 instance in Azure: https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-ibm-db2-luw. Additional considerations, such as performance and scaling options for DB2 on Azure can be found here: https://learn.microsoft.com/en-us/azure/virtual-machines/workloads/sap/dbms_guide_ibm
### Configure your Azure PostgreSQL Database (if applicable)
@@ -438,7 +441,7 @@ oc create -f /tmp/oms-secret-updated.yaml
rm /tmp/oms-secret-updated
```
### Create MQ Bindings ConfigMap
### Create MQ Bindings ConfigMap (if needed)
Users of IBM MQ for their messaging platform will need to create a configuration map in their OMS namespace that contains queue binding information. After you have configured your queue managers and created your JMS bindings, you need to obtain a copy of your ```.bindings``` file. Next, you'll create your configuration map with the following command:
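As a minimal sketch (the ConfigMap name and namespace here are assumptions; match them to your OMEnvironment configuration):

```bash
# Create a ConfigMap in the OMS namespace from the queue manager's .bindings file
oc create configmap oms-bindings --from-file=.bindings -n oms
```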
@@ -521,11 +524,11 @@ oc create secret docker-registry ibm-entitlement-key --docker-username=cp --dock
Once you have your Azure environment built, you are now prepared to deploy your OMEnvironment using the IBM Sterling Order Management Operator. You'll first install the operator from the IBM Catalog, then use the operator to deploy your OMEnvironment.
## Deploying OMS Via the OpenShift Operator
### Deploying OMS Via the OpenShift Operator
Once the operator is deployed, you can now deploy your OMEnvironment provided you have met all the pre-requisites. For more information about the installation process and available options (as well as sample configuration files) please visit: https://www.ibm.com/docs/en/order-management-sw/10.0?topic=operator-installing-order-management-software-by-using
## Step 8: Post Deployment Tasks
## Step 9: Post Deployment Tasks
Once your environment is set up and configured, please consider the following steps to complete your installation.

View file

@@ -7,7 +7,15 @@ In this folder, you can find resources that can help you get up to speed quickly
## Updating cloud init file(s) (Optional)
There are a series of cloud-init files in this repository that are used during different deployment steps to "stage" a virtual machine with different software packages, custom installers, and other steps. If you'd like to modify a particular VM's cloud-init script, simply modify the commands in the relevant yaml file that is referenced in each bicep template. The results will be loaded at deployment time, and are "asynchronous" (meaning that the scripts will run after the resources are created, but any subsequent deployments do not wait for these post-creation scripts to run).
There are a series of cloud-init files in this repository that are used during different deployment steps to "stage" a virtual machine with different software packages, custom installers, and other steps. If you'd like to modify a particular VM's cloud-init script, simply modify the commands in the yaml file that corresponds to the relevant VM template.
Once you finish your changes, you'll need to put the resulting data into an inline string in the template. You can convert your file to the relevant string by using the following command:
```bash
awk -v ORS='\\n' '1' <filename>.yaml
```
This will output the resulting string to your console; place this in the relevant ```var cloudInitData = ''``` line in your template. **Note**: Be mindful of escaping single quotes in your strings!
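A minimal end-to-end sketch, assuming a file named ```cloud-init-db2.yaml```:

```bash
# Flatten the cloud-init file into a single \n-delimited string
CLOUD_INIT=$(awk -v ORS='\\n' '1' cloud-init-db2.yaml)
# Bicep escapes a literal single quote as \' ; escape before pasting into cloudInitData
printf '%s\n' "${CLOUD_INIT//\'/\\\'}"
```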
### More Deployment Options
@@ -16,6 +24,16 @@ In addition to this bootstrap resource, note that configurations for message bro
- [Using AKS for Native HA IBM MQ](../config/mq/README.md)
- [Using ActiveMQ in ARO or Azure Container Instances](../config/activemq/README.md)
### Monitoring (via Log Analytics)
This deployment will ask whether you want to add a Log Analytics workspace to your deployment. If you choose to do so, you will get a new Log Analytics workspace in your target resource group, and some of the deployed resources will have their logs and metrics pre-configured to be sent there:
* Azure Premium Files Storage
* Azure Database for PostgreSQL - Flexible Server
* Azure Container Registry
Any Azure Virtual Machines deployed will NOT be configured to send data to the workspace, as they may require an agent to be installed on the VM (which is not currently part of this template). You should consider adding these where appropriate (or modifying this deployment to include them).
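If you do want a VM's data in the workspace, one option (a sketch; the resource names here are assumptions) is installing the Azure Monitor agent through the CLI:

```bash
# Install the Azure Monitor agent extension on an existing Linux VM (names assumed)
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name omsvmdb2-1 \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --enable-auto-upgrade true
```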
## Preparing to deploy
### Service Principal

View file

@@ -71,7 +71,7 @@ resource aksCluster 'Microsoft.ContainerService/managedClusters@2022-07-02-previ
networkProfile: {
networkPlugin: 'azure'
networkPolicy: 'azure'
loadBalancerSku: 'Basic'
loadBalancerSku: 'Standard'
serviceCidr: serviceCidr
dnsServiceIP: dnsServiceIP
dockerBridgeCidr: '172.17.0.1/16'

View file

@@ -72,17 +72,10 @@ param subnetVMPrefix string
param subnetDataName string
@description('Data subnet address space')
param subnetDataPrefix string
//@description('If installing MQ as part of this deployment, provide the filename of the MQ tar.gz file')
//param mqInstallerArchiveName string
//@description('If installing DB2 as part of this deployment, provide the filename of the DB2 tar.gz file')
//param db2InstallerArchiveName string
//@description('If installing DB2 and/or MQ as part of this deployment, provide the the storage account name where the installers can be downloaded from')
//param installerStorageAccountName string
//@description('If installing DB2 and/or MQ as part of this deployment, provide the the storage account container name where the installers can be downloaded from')
//param installerContainerName string
//@description('If installing DB2 and/or MQ as part of this deployment, provide the the a SAS token with read and list permissions to the container with the binaries')
//@secure()
//param installerSASToken string
@description('Azure NetApp Files Subnet Name')
param subnetANFName string
@description('Azure NetApp Files Subnet Address Space')
param subnetANFPrefix string
@description('The name of the Azure Premium File Share to create for your MQ instance')
param mqsharename string
@description('The name of the outbound NAT gateway for your virtual machines')
@@ -107,19 +100,30 @@ param devVMName string
param registryName string
@description('Which OMS Version (image) to deploy')
param whichOMS string
//@description('If installing DB2, the name of the empty database to be created')
//param db2DatabaseName string
//@description('If installing DB2, name of the schema to be created in your new, empty database')
//param db2SchemaName string
@description('Your IBM Entitlement Key')
param ibmEntitlementKey string
@description('Storage Account Name Prefix')
param storageNamePrefix string
@description('Azure NetApp Files Account Name')
param anfName string
@description('Azure NetApp Files Data Volume Size (GB)')
param db2DataSizeGB int
param loadBalancerName string
param db2lbprivateIP string
param logAnalyticsWorkspaceName string
@description('Do you want to create a VMs for DB2? (Y/N)?')
@description('Do you want to deploy a Log Analytics Workspace as part of this deployment? (Y/N)?')
@allowed([
'Y'
'N'
])
param deployLogAnalytics string
@description('Do you want to create VMs and Azure NetApp Files for DB2? (Y/N)?')
@allowed([
'Y'
'N'
@@ -159,6 +163,8 @@ module network 'networking.bicep' = {
subnetDataName: subnetDataName
location: location
gatewayName: gatewayName
subnetANFName: subnetANFName
subnetANFPrefix: subnetANFPrefix
}
}
@@ -199,6 +205,8 @@ module postgreSQL 'postgresFlexible.bicep' = if (installPostgres == 'Y' || insta
adminPassword: adminPassword
subnetDataName: subnetDataName
virtualNetworkName: vnetName
deployLogAnalytics: deployLogAnalytics
logAnalyticsWorkSpaceName: logAnalyticsWorkspaceName
}
dependsOn:[
network
@@ -213,6 +221,8 @@ module containerRegistery 'containerregistry.bicep' = {
location: location
registryname: registryName
vnetName: vnetName
deployLogAnalytics: deployLogAnalytics
logAnalyticsWorkSpaceName: logAnalyticsWorkspaceName
}
dependsOn:[
network
@@ -228,6 +238,8 @@ module premiumStorage 'storage.bicep' = {
vnetName: vnetName
location: location
mqsharename: mqsharename
deployLogAnalytics: deployLogAnalytics
logAnalyticsWorkSpaceName: logAnalyticsWorkspaceName
}
dependsOn:[
network
@@ -250,6 +262,22 @@ module bastionHost 'bastion.bicep' = {
]
}
module anf 'netappfiles.bicep' = if (installdb2vm == 'Y' || installdb2vm == 'y') {
name: 'netappfiles'
scope: resourceGroup()
params: {
anfName: anfName
location: location
db2vmprefix: db2VirtualMachineNamePrefix
dataVolGB: db2DataSizeGB
virtualNetworkName: vnetName
anfSubnetName: '${anfName}-vnet'
}
dependsOn: [
network
]
}
module loadbalancer 'loadbalancer.bicep' = if (installdb2vm == 'Y' || installdb2vm == 'y') {
name: 'db2-lb'
scope: resourceGroup()
@@ -283,17 +311,16 @@ module db2vm1 'db2.bicep' = if (installdb2vm == 'Y' || installdb2vm == 'y') {
adminUsername: adminUsername
adminPassword: adminPassword
zone: '1'
//installerStorageAccountName: installerStorageAccountName
//installerContainerName: installerContainerName
//installerSASToken: installerSASToken
//db2InstallerArchiveName: db2InstallerArchiveName
//loadBalancerName: loadBalancerName
//db2DatabaseName: db2DatabaseName
//db2SchemaName: db2SchemaName
anfAccountName: anfName
anfPoolName: '${db2VirtualMachineNamePrefix}-1'
loadBalancerName: loadBalancerName
clientID: clientID
clientSecret: clientSecret
}
dependsOn: [
network
//loadbalancer
loadbalancer
anf
]
}
@@ -305,28 +332,27 @@ module db2vm2 'db2.bicep'= if (installdb2vm == 'Y' || installdb2vm == 'y') {
params: {
branchName: branchName
location: location
networkInterfaceName: '${db2VirtualMachineNamePrefix}-1-nic'
networkSecurityGroupName: '${db2VirtualMachineNamePrefix}-1-nsg'
networkInterfaceName: '${db2VirtualMachineNamePrefix}-2-nic'
networkSecurityGroupName: '${db2VirtualMachineNamePrefix}-2-nsg'
networkSecurityGroupRules:networkSecurityGroupRules
subnetName: subnetVMName
virtualNetworkName: vnetName
virtualMachineName: '${db2VirtualMachineNamePrefix}-1'
virtualMachineName: '${db2VirtualMachineNamePrefix}-2'
osDiskType: osDiskType
virtualMachineSize: db2VirtualMachineSize
adminUsername: adminUsername
adminPassword: adminPassword
zone: '1'
//installerStorageAccountName: installerStorageAccountName
//installerContainerName: installerContainerName
//installerSASToken: installerSASToken
//db2InstallerArchiveName: db2InstallerArchiveName
//loadBalancerName: loadBalancerName
//db2DatabaseName: db2DatabaseName
//db2SchemaName: db2SchemaName
zone: '3'
anfAccountName: anfName
anfPoolName: '${db2VirtualMachineNamePrefix}-2'
loadBalancerName: loadBalancerName
clientID: clientID
clientSecret: clientSecret
}
dependsOn: [
network
//loadbalancer
loadbalancer
anf
]
}
@@ -348,12 +374,8 @@ module mqvm1 'mq.bicep' = if (installmqvm == 'Y' || installmqvm == 'y') {
adminUsername: adminUsername
adminPassword: adminPassword
zone: '1'
//installerStorageAccountName: installerStorageAccountName
//installerContainerName: installerContainerName
//installerSASToken: installerSASToken
storageNamePrefix: storageNamePrefix
mqsharename: mqsharename
//mqInstallerArchiveName: mqInstallerArchiveName
branchName: branchName
}
dependsOn: [
@@ -367,23 +389,19 @@ module mqvm3 'mq.bicep' = if (installmqvm == 'Y' || installmqvm == 'y') {
scope: resourceGroup()
params: {
location: location
networkInterfaceName: '${mqVirtualMachineName}-1-nic'
networkSecurityGroupName: '${mqVirtualMachineName}-1-nsg'
networkInterfaceName: '${mqVirtualMachineName}-2-nic'
networkSecurityGroupName: '${mqVirtualMachineName}-2-nsg'
networkSecurityGroupRules:networkSecurityGroupRules
subnetName: subnetWorkerNodeName
virtualNetworkName: vnetName
virtualMachineName: '${mqVirtualMachineName}-1'
virtualMachineName: '${mqVirtualMachineName}-2'
osDiskType: osDiskType
virtualMachineSize: mqVirtualMachineSize
adminUsername: adminUsername
adminPassword: adminPassword
zone: '1'
//installerStorageAccountName: installerStorageAccountName
//installerContainerName: installerContainerName
//installerSASToken: installerSASToken
zone: '3'
storageNamePrefix: storageNamePrefix
mqsharename: mqsharename
//mqInstallerArchiveName: mqInstallerArchiveName
branchName: branchName
}
dependsOn: [
@@ -408,6 +426,7 @@ module devvm 'devvm.bicep' = {
adminUsername: adminUsername
adminPassword: adminPassword
zone: '1'
branchName: branchName
}
dependsOn: [
network

View file

@@ -4,29 +4,24 @@ runcmd:
- export ADMIN_USERNAME=${adminUsername}
- export DB2_ADMIN_PASSWORD=${adminPassword}
- export DB2_FENCED_PASSWORD=${adminPassword}
- #export INSTALLER_STORAGEACCOUNT_NAME=${installerStorageAccountName}
- #export INSTALLER_STORAGECONTAINER_NAME=${installerContainerName}
- #export INSTALLER_SAS_TOKEN="${installerSASToken}"
- #export DB2_INSTALLER_ARCHIVE_FILENAME=${db2InstallerArchiveName}
- #export DB2_DATABASE_NAME=${db2DatabaseName}
- #export DB2_SCHEMA_NAME=${db2SchemaName}
- export RESOURCE_GROUP=${resourceGroupName}
- export VM_NAME=${virtualMachineName}
- export ANF_ACCOUNT_NAME=${anfAccountName}
- export ANF_POOL_NAME=${anfPoolName}
- export BRANCH_NAME=${branchName}
- mkdir ~/.azure/
- echo '{"subscriptionId":"${subscriptionID}","clientId":"${clientID}","clientSecret":"${clientSecret}","tenantId":"${tenantID}","resourceGroup":"${resourceGroupName}"}' > ~/.azure/osServicePrincipal.json
- sudo yum -y install libstdc++.i686 libXmu.i686 libacl.i686 ncurses-libs.i686 ncurses-compat-libs.i686 motif.i686 xterm libmount.i686 libgcc.i686 libnsl.i686 libXdmcp.i686 libxcrypt.i686 libXdmcp libnsl psmisc elfutils-libelf-devel make pam-devel
- sudo yum -y install ksh mksh
- sudo yum -y install jq
- sudo yum -y install java-1.8.0-openjdk
- sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
- sudo dnf install -y python3-dnf-plugin-versionlock
- sudo yum install -y nfs-utils
- sudo wget https://aka.ms/downloadazcopy-v10-linux -O /tmp/azcopy.tar.gz
- sudo tar -xvf /tmp/azcopy.tar.gz -C /tmp
- sudo mv /tmp/azcopy_linux* /tmp/azcopy
- sudo sed -i 's/enforcing/disabled/g' /etc/selinux/config /etc/selinux/config
- sudo parted /dev/sdb --script mklabel gpt mkpart xfspart xfs 0% 100%
- sudo mkfs.xfs /dev/sdb1
- sudo partprobe /dev/sdb1
- sudo mkdir /db2data
- sudo mount /dev/sdb1 /db2data
- FSTAB="$(blkid | grep sdb1 | awk '$0=$2' | sed 's/"//g') /db2data xfs defaults,nofail 1 2"
- sudo su -c "echo $FSTAB >> /etc/fstab"
- #[ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/installers/install-db2-from-storageaccount.sh", -O, /tmp/install-db2-from-storageaccount.sh ]
- #chmod +x /tmp/install-db2-from-storageaccount.sh
- #sudo -E /tmp/install-db2-from-storageaccount.sh
- [ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/db2/configure-db2-anf-storage.sh", -O, /tmp/configure-db2-anf-storage.sh ]
- chmod +x /tmp/configure-db2-anf-storage.sh
- sudo -E /tmp/configure-db2-anf-storage.sh

View file

@@ -1,9 +1,6 @@
#cloud-config
runcmd:
- #export INSTALLER_STORAGEACCOUNT_NAME=${installerStorageAccountName}
- #export INSTALLER_STORAGECONTAINER_NAME=${installerContainerName}
- #export INSTALLER_SAS_TOKEN="${installerSASToken}"
- #export MQ_INSTALLER_ARCHIVE_FILENAME=${mqInstallerArchiveName}
- sudo yum update
- sudo yum install -y nfs-utils
- sudo yum install -y java-1.8.0-openjdk
@@ -16,7 +13,4 @@ runcmd:
- sudo useradd app
- sudo wget https://aka.ms/downloadazcopy-v10-linux -O /tmp/azcopy.tar.gz
- sudo tar -xvf /tmp/azcopy.tar.gz -C /tmp
- sudo mv /tmp/azcopy_linux* /tmp/azcopy
- #[ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/installers/install-mq-from-storageaccount.sh", -O, /tmp/install-mq-from-storageaccount.sh ]
- #chmod +x /tmp/install-mq-from-storageaccount.sh
- #sudo -E /tmp/install-mq-from-storageaccount.sh
- sudo mv /tmp/azcopy_linux* /tmp/azcopy

View file

@@ -3,9 +3,12 @@ param registryname string
param location string
param vnetName string
param subnetEndpointsName string
param deployLogAnalytics string
param logAnalyticsWorkSpaceName string
var vnetId = resourceId(resourceGroup().name, 'Microsoft.Network/virtualNetworks', vnetName)
var subnetReference = '${vnetId}/subnets/${subnetEndpointsName}'
//var logAnalyticsId = resourceId(resourceGroup().name, 'insights-integration/providers/Microsoft.OperationalInsights/workspaces', logAnalyticsWorkSpaceName)
resource registry_resource 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
name: registryname
@@ -72,3 +75,28 @@ resource registry_private_zone_group 'Microsoft.Network/privateEndpoints/private
]
}
}
resource acrLogAnalyticsSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (deployLogAnalytics == 'Y' || deployLogAnalytics == 'y') {
name: registry_resource.name
scope: registry_resource
properties: {
logAnalyticsDestinationType: 'AzureDiagnostics'
logs: [
{
category: 'allLogs'
enabled: true
}
{
category: 'audit'
enabled: true
}
]
metrics: [
{
category: 'AllMetrics'
enabled: true
}
]
workspaceId: resourceId(resourceGroup().name, 'insights-integration/providers/Microsoft.OperationalInsights/workspaces', logAnalyticsWorkSpaceName)
}
}

View file

@@ -7,25 +7,34 @@ param virtualNetworkName string
param virtualMachineName string
param osDiskType string
param virtualMachineSize string
// param db2vmprefix string
param adminUsername string
@secure()
param adminPassword string
param zone string
param anfAccountName string
param anfPoolName string
//param installerStorageAccountName string
//param installerContainerName string
//@secure()
//param installerSASToken string
//param loadBalancerName string
param loadBalancerName string
//param db2InstallerArchiveName string
param branchName string
//param db2DatabaseName string
//param db2SchemaName string
param clientID string
@secure()
param clientSecret string
var subscriptionID = subscription().subscriptionId
var resourceGroupName = resourceGroup().name
var tenantID = tenant().tenantId
//var nsgId = resourceId(resourceGroup().name, 'Microsoft.Network/networkSecurityGroups', networkSecurityGroupName)
var vnetId = resourceId(resourceGroup().name, 'Microsoft.Network/virtualNetworks', virtualNetworkName)
var subnetRef = '${vnetId}/subnets/${subnetName}'
//var cloudInitData = '#cloud-config\n\nruncmd:\n - export ADMIN_USERNAME=${adminUsername}\n - export DB2_ADMIN_PASSWORD=${adminPassword}\n - export DB2_FENCED_PASSWORD=${adminPassword}\n - export INSTALLER_STORAGEACCOUNT_NAME=${installerStorageAccountName}\n - export INSTALLER_STORAGECONTAINER_NAME=${installerContainerName}\n - export INSTALLER_SAS_TOKEN="${installerSASToken}"\n - export DB2_INSTALLER_ARCHIVE_FILENAME=${db2InstallerArchiveName}\n - export DB2_DATABASE_NAME=${db2DatabaseName}\n - export DB2_SCHEMA_NAME=${db2SchemaName}\n - export BRANCH_NAME=${branchName}\n - sudo yum -y install libstdc++.i686 libXmu.i686 libacl.i686 ncurses-libs.i686 ncurses-compat-libs.i686 motif.i686 xterm libmount.i686 libgcc.i686 libnsl.i686 libXdmcp.i686 libxcrypt.i686 libXdmcp libnsl psmisc elfutils-libelf-devel make pam-devel\n - sudo yum -y install ksh mksh\n - sudo yum -y install java-1.8.0-openjdk\n - sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm\n - sudo dnf install -y python3-dnf-plugin-versionlock\n - sudo wget https://aka.ms/downloadazcopy-v10-linux -O /tmp/azcopy.tar.gz\n - sudo tar -xvf /tmp/azcopy.tar.gz -C /tmp\n - sudo mv /tmp/azcopy_linux* /tmp/azcopy\n - sudo sed -i \'s/enforcing/disabled/g\' /etc/selinux/config /etc/selinux/config\n - sudo parted /dev/sdb --script mklabel gpt mkpart xfspart xfs 0% 100%\n - sudo mkfs.xfs /dev/sdb1\n - sudo partprobe /dev/sdb1 \n - sudo mkdir /db2data\n - sudo mount /dev/sdb1 /db2data\n - FSTAB="$(blkid | grep sdb1 | awk \'$0=$2\' | sed \'s/"//g\') /db2data xfs defaults,nofail 1 2"\n - sudo su -c "echo $FSTAB >> /etc/fstab"\n - [ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/installers/install-db2-from-storageaccount.sh", -O, /tmp/install-db2-from-storageaccount.sh ]\n - chmod +x /tmp/install-db2-from-storageaccount.sh\n - sudo -E /tmp/install-db2-from-storageaccount.sh\n'
var cloudInitData = '#cloud-config\n\nruncmd:\n - export ADMIN_USERNAME=${adminUsername}\n - export DB2_ADMIN_PASSWORD=${adminPassword}\n - export DB2_FENCED_PASSWORD=${adminPassword}\n - export RESOURCE_GROUP=${resourceGroupName}\n - export VM_NAME=${virtualMachineName}\n - export ANF_ACCOUNT_NAME=${anfAccountName}\n - export ANF_POOL_NAME=${anfPoolName}\n - export BRANCH_NAME=${branchName}\n - mkdir ~/.azure/\n - echo \'{"subscriptionId":"${subscriptionID}","clientId":"${clientID}","clientSecret":"${clientSecret}","tenantId":"${tenantID}","resourceGroup":"${resourceGroupName}"}\' > ~/.azure/osServicePrincipal.json\n - sudo yum -y install libstdc++.i686 libXmu.i686 libacl.i686 ncurses-libs.i686 ncurses-compat-libs.i686 motif.i686 xterm libmount.i686 libgcc.i686 libnsl.i686 libXdmcp.i686 libxcrypt.i686 libXdmcp libnsl psmisc elfutils-libelf-devel make pam-devel\n - sudo yum -y install ksh mksh\n - sudo yum -y install jq\n - sudo yum -y install java-1.8.0-openjdk\n - sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm\n - sudo dnf install -y python3-dnf-plugin-versionlock\n - sudo yum install -y nfs-utils\n - sudo wget https://aka.ms/downloadazcopy-v10-linux -O /tmp/azcopy.tar.gz\n - sudo tar -xvf /tmp/azcopy.tar.gz -C /tmp\n - sudo mv /tmp/azcopy_linux* /tmp/azcopy\n - sudo sed -i \'s/enforcing/disabled/g\' /etc/selinux/config /etc/selinux/config\n - [ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/db2/configure-db2-anf-storage.sh", -O, /tmp/configure-db2-anf-storage.sh ]\n - chmod +x /tmp/configure-db2-anf-storage.sh\n - sudo -E /tmp/configure-db2-anf-storage.sh\n'
resource networkInterfaceName_resource 'Microsoft.Network/networkInterfaces@2018-10-01' = {
@@ -40,11 +49,11 @@ resource networkInterfaceName_resource 'Microsoft.Network/networkInterfaces@2018
id: subnetRef
}
privateIPAllocationMethod: 'Dynamic'
//loadBalancerBackendAddressPools: [
//{
// id: resourceId('Microsoft.Network/loadBalancers/backendAddressPools', loadBalancerName, '${loadBalancerName}-bep')
//}
//]
loadBalancerBackendAddressPools: [
{
id: resourceId('Microsoft.Network/loadBalancers/backendAddressPools', loadBalancerName, '${loadBalancerName}-bep')
}
]
}
}
]
@@ -61,29 +70,29 @@ resource networkInterfaceName_resource 'Microsoft.Network/networkInterfaces@2018
*/
}
resource datadisk_resource 'Microsoft.Compute/disks@2021-12-01' = {
name: '${virtualMachineName}-db2data'
location: location
sku: {
name: 'Premium_LRS'
}
zones: [
zone
]
properties: {
creationData: {
createOption: 'Empty'
}
diskSizeGB: 256
diskIOPSReadWrite: 1100
diskMBpsReadWrite: 125
encryption: {
type: 'EncryptionAtRestWithPlatformKey'
}
networkAccessPolicy: 'AllowAll'
publicNetworkAccess: 'Enabled'
}
}
//resource datadisk_resource 'Microsoft.Compute/disks@2021-12-01' = {
// name: '${virtualMachineName}-db2data'
// location: location
// sku: {
// name: 'Premium_LRS'
// }
// zones: [
// zone
// ]
// properties: {
// creationData: {
// createOption: 'Empty'
// }
// diskSizeGB: 256
// diskIOPSReadWrite: 1100
// diskMBpsReadWrite: 125
// encryption: {
// type: 'EncryptionAtRestWithPlatformKey'
// }
// networkAccessPolicy: 'AllowAll'
// publicNetworkAccess: 'Enabled'
// }
//}
/*
resource networkSecurityGroupName_resource 'Microsoft.Network/networkSecurityGroups@2019-02-01' = {
@@ -103,18 +112,18 @@ resource virtualMachineName_resource 'Microsoft.Compute/virtualMachines@2021-03-
vmSize: virtualMachineSize
}
storageProfile: {
dataDisks: [
{
createOption: 'Attach'
deleteOption: 'Detach'
lun: 0
managedDisk: {
id: datadisk_resource.id
}
toBeDetached: false
writeAcceleratorEnabled: false
}
]
//dataDisks: [
// {
// createOption: 'Attach'
// deleteOption: 'Detach'
// lun: 0
// managedDisk: {
// id: datadisk_resource.id
// }
// toBeDetached: false
// writeAcceleratorEnabled: false
// }
//]
osDisk: {
createOption: 'FromImage'
managedDisk: {
@@ -139,8 +148,8 @@ resource virtualMachineName_resource 'Microsoft.Compute/virtualMachines@2021-03-
computerName: virtualMachineName
adminUsername: adminUsername
adminPassword: adminPassword
//customData: base64(cloudInitData)
customData: base64(loadTextContent('cloud-init-db2.yaml'))
customData: base64(cloudInitData)
//customData: base64(loadTextContent('cloud-init-db2.yaml'))
linuxConfiguration: {
disablePasswordAuthentication: false
}

View file

@@ -11,13 +11,14 @@ param adminUsername string
@secure()
param adminPassword string
param zone string
param branchName string
var nsgId = resourceId(resourceGroup().name, 'Microsoft.Network/networkSecurityGroups', networkSecurityGroupName)
var vnetId = resourceId(resourceGroup().name, 'Microsoft.Network/virtualNetworks', virtualNetworkName)
var subnetRef = '${vnetId}/subnets/${subnetName}'
//var cloudInitData = '#cloud-config\n\nruncmd:\n - sudo apt-get update -y \n - sudo apt-get install -y ca-certificates curl gnupg lsb-release\n - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg\n - echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null\n - sudo apt-get update -y\n - sudo apt-get -y install docker-ce docker-ce-cli containerd.io\n - curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3\n - chmod 700 get_helm.sh\n - ./get_helm.sh\n - sudo usermod -aG docker $USER\n'
var cloudInitData = '#cloud-config\n\nruncmd:\n - sudo apt-get update -y \n - sudo apt-get install -y ca-certificates curl gnupg lsb-release\n - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg\n - echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null\n - sudo apt-get update -y\n - sudo apt-get -y install docker-ce docker-ce-cli containerd.io\n - curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3\n - chmod 700 get_helm.sh\n - ./get_helm.sh\n - sudo usermod -aG docker $USER\n - mkdir /tmp/OCPInstall\n - [ wget, -nv, "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz", -O, /tmp/OCPInstall/openshift-client-linux.tar.gz ]\n - tar xvf /tmp/OCPInstall/openshift-client-linux.tar.gz -C /tmp/OCPInstall\n - sudo cp /tmp/OCPInstall/oc /usr/bin'
resource networkInterfaceName_resource 'Microsoft.Network/networkInterfaces@2018-10-01' = {
@@ -96,8 +97,8 @@ resource virtualMachineName_resource 'Microsoft.Compute/virtualMachines@2021-03-
computerName: virtualMachineName
adminUsername: adminUsername
adminPassword: adminPassword
//customData: base64(cloudInitData)
customData: base64(loadTextContent('cloud-init-jumpbox.yaml'))
customData: base64(cloudInitData)
//customData: base64(loadTextContent('cloud-init-jumpbox.yaml'))
linuxConfiguration: {
disablePasswordAuthentication: false
}

View file

@@ -26,7 +26,7 @@ var subnetRef = '${vnetId}/subnets/${subnetName}'
var subscriptionID = subscription().subscriptionId
var resourceGroupName = resourceGroup().name
var tenantID = tenant().tenantId
//var cloudInitData = '#cloud-config\n\nruncmd:\n - echo "Setting environment variables..."\n - export OMS_NAMESPACE=${omsNamespace}\n - export ARO_CLUSTER=${aroName}\n - export WHICH_OMS=${whichOMS}\n - export BRANCH_NAME=${branchName}\n - export LOCATION=${location}\n - export ADMIN_PASSWORD=${adminPassword}\n - export IBM_ENTITLEMENT_KEY=${ibmEntitlementKey}\n - export ACR_NAME=${acrName}\n - mkdir ~/.azure/\n - echo \'{"subscriptionId":"${subscriptionID}","clientId":"${clientID}","clientSecret":"${clientSecret}","tenantId":"${tenantID}","resourceGroup":"${resourceGroupName}"}\' > ~/.azure/osServicePrincipal.json\n - echo "Running system update..."\n - sudo dnf update -y\n - echo "System update completed!"\n - echo "Getting latest configuration script..."\n - [ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/installers/configure-aro-and-requirements.sh", -O, /tmp/configure-aro-and-requirements.sh ]\n - chmod +x /tmp/configure-aro-and-requirements.sh\n - echo "Running configuration script..."\n - sudo -E /tmp/configure-aro-and-requirements.sh\n'
var cloudInitData = '#cloud-config\n\nruncmd:\n - echo "Setting environment variables..."\n - export OMS_NAMESPACE=${omsNamespace}\n - export ARO_CLUSTER=${aroName}\n - export WHICH_OMS=${whichOMS}\n - export BRANCH_NAME=${branchName}\n - export LOCATION=${location}\n - export ADMIN_PASSWORD=${adminPassword}\n - export IBM_ENTITLEMENT_KEY=${ibmEntitlementKey}\n - export ACR_NAME=${acrName}\n - mkdir ~/.azure/\n - echo \'{"subscriptionId":"${subscriptionID}","clientId":"${clientID}","clientSecret":"${clientSecret}","tenantId":"${tenantID}","resourceGroup":"${resourceGroupName}"}\' > ~/.azure/osServicePrincipal.json\n - echo "Running system update..."\n - sudo dnf update -y\n - echo "System update completed!"\n - echo "Getting latest configuration script..."\n - [ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/installers/configure-aro-and-requirements.sh", -O, /tmp/configure-aro-and-requirements.sh ]\n - chmod +x /tmp/configure-aro-and-requirements.sh\n - echo "Running configuration script..."\n - sudo -E /tmp/configure-aro-and-requirements.sh\n - echo "Getting pgsql tools/configuration script..."\n - [ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/installers/setup-pgsql-tools.sh", -O, /tmp/setup-pgsql-tools.sh ]\n - echo "Running pgsql installation script..."\n - sudo -E /tmp/setup-pgsql-tools.sh'
resource networkInterfaceName_resource 'Microsoft.Network/networkInterfaces@2018-10-01' = {
@@ -105,8 +105,8 @@ resource virtualMachineName_resource 'Microsoft.Compute/virtualMachines@2021-03-
computerName: virtualMachineName
adminUsername: adminUsername
adminPassword: adminPassword
customData: base64(loadTextContent('cloud-init-jumpbox.yaml'))
//customData: base64(cloudInitData)
//customData: base64(loadTextContent('cloud-init-jumpbox.yaml'))
customData: base64(cloudInitData)
linuxConfiguration: {
disablePasswordAuthentication: false
}

View file

@@ -23,7 +23,7 @@ param branchName string
var nsgId = resourceId(resourceGroup().name, 'Microsoft.Network/networkSecurityGroups', networkSecurityGroupName)
var vnetId = resourceId(resourceGroup().name, 'Microsoft.Network/virtualNetworks', virtualNetworkName)
var subnetRef = '${vnetId}/subnets/${subnetName}'
//var cloudInitData = '#cloud-config\nruncmd:\n - export INSTALLER_STORAGEACCOUNT_NAME=${installerStorageAccountName}\n - export INSTALLER_STORAGECONTAINER_NAME=${installerContainerName}\n - export INSTALLER_SAS_TOKEN="${installerSASToken}"\n - export MQ_INSTALLER_ARCHIVE_FILENAME=${mqInstallerArchiveName}\n - sudo yum update\n - sudo yum install -y nfs-utils\n - sudo yum install -y java-1.8.0-openjdk\n - sudo mkdir /MQHA\n - sudo mount -t nfs ${storageNamePrefix}prm.file.core.windows.net:/${storageNamePrefix}prm/${mqsharename} /MQHA -o vers=4,minorversion=1,sec=sys\n - sudo echo "${storageNamePrefix}prm.file.core.windows.net:/${storageNamePrefix}prm/${mqsharename} /MQHA nfs rw,hard,noatime,nolock,vers=4,tcp,_netdev 0 0" >> /etc/fstab \n - sudo mkdir -p /MQHA/logs\n - sudo mkdir -p /MQHA/qmgrs\n - sudo groupadd mqclient\n - sudo useradd app\n - sudo wget https://aka.ms/downloadazcopy-v10-linux -O /tmp/azcopy.tar.gz\n - sudo tar -xvf /tmp/azcopy.tar.gz -C /tmp\n - sudo mv /tmp/azcopy_linux* /tmp/azcopy\n - [ wget, -nv, "https://raw.githubusercontent.com/Azure/sterling/${branchName}/config/installers/install-mq-from-storageaccount.sh", -O, /tmp/install-mq-from-storageaccount.sh ]\n - chmod +x /tmp/install-mq-from-storageaccount.sh\n - sudo -E /tmp/install-mq-from-storageaccount.sh\n'
var cloudInitData = '#cloud-config\n\nruncmd:\n - sudo yum update\n - sudo yum install -y nfs-utils\n - sudo yum install -y java-1.8.0-openjdk\n - sudo mkdir /MQHA\n - sudo mount -t nfs ${storageNamePrefix}prm.file.core.windows.net:/${storageNamePrefix}prm/${mqsharename} /MQHA -o vers=4,minorversion=1,sec=sys\n - sudo echo "${storageNamePrefix}prm.file.core.windows.net:/${storageNamePrefix}prm/${mqsharename} /MQHA nfs rw,hard,noatime,nolock,vers=4,tcp,_netdev 0 0" >> /etc/fstab \n - sudo mkdir -p /MQHA/logs\n - sudo mkdir -p /MQHA/qmgrs\n - sudo groupadd mqclient\n - sudo useradd app\n - sudo wget https://aka.ms/downloadazcopy-v10-linux -O /tmp/azcopy.tar.gz\n - sudo tar -xvf /tmp/azcopy.tar.gz -C /tmp\n - sudo mv /tmp/azcopy_linux* /tmp/azcopy'
resource networkInterfaceName_resource 'Microsoft.Network/networkInterfaces@2018-10-01' = {
@@ -102,8 +102,8 @@ resource virtualMachineName_resource 'Microsoft.Compute/virtualMachines@2021-03-
computerName: virtualMachineName
adminUsername: adminUsername
adminPassword: adminPassword
//customData: base64(cloudInitData)
customData: base64(loadTextContent('cloud-init-mq.yaml'))
customData: base64(cloudInitData)
//customData: base64(loadTextContent('cloud-init-mq.yaml'))
linuxConfiguration: {
disablePasswordAuthentication: false
}

248
azure/netappfiles.bicep Normal file
View file

@@ -0,0 +1,248 @@
param anfName string
param location string
param db2vmprefix string
param dataVolGB int
//param logVolGB int
param virtualNetworkName string
param anfSubnetName string
var dataVolBytes = dataVolGB * 1073741824 // convert GiB to bytes
//var logVolBytes = logVolGB * 1073741824
var vnetId = resourceId(resourceGroup().name, 'Microsoft.Network/virtualNetworks', virtualNetworkName)
var subnetReference = '${vnetId}/subnets/${anfSubnetName}'
resource anfAccount 'Microsoft.NetApp/netAppAccounts@2022-03-01' = {
name: anfName
location: location
properties: {
encryption: {
keySource: 'Microsoft.NetApp'
}
}
}
resource db2vm1Pool 'Microsoft.NetApp/netAppAccounts/capacityPools@2022-03-01' = {
name: '${anfAccount.name}/${db2vmprefix}-1'
location: location
properties: {
serviceLevel: 'Ultra'
size: 4398046511104 // 4 TiB (the ANF capacity pool minimum)
qosType: 'Auto'
encryptionType: 'Single'
coolAccess: false
}
}
resource db2vm2Pool 'Microsoft.NetApp/netAppAccounts/capacityPools@2022-03-01' = {
name: '${anfAccount.name}/${db2vmprefix}-2'
location: location
properties: {
serviceLevel: 'Ultra'
size: 4398046511104 // 4 TiB (the ANF capacity pool minimum)
qosType: 'Auto'
encryptionType: 'Single'
coolAccess: false
}
}
resource db2vm1Datavol 'Microsoft.NetApp/netAppAccounts/capacityPools/volumes@2022-03-01' = {
name: '${db2vm1Pool.name}/${db2vmprefix}-1-data'
location: location
properties: {
serviceLevel: 'Ultra'
creationToken: '${db2vmprefix}-1-data'
usageThreshold: dataVolBytes
exportPolicy: {
rules: [
{
ruleIndex: 1
unixReadOnly: false
unixReadWrite: true
cifs: false
nfsv3: false
nfsv41: true
allowedClients: '0.0.0.0/0'
kerberos5ReadOnly: false
kerberos5ReadWrite: false
kerberos5iReadOnly: false
kerberos5iReadWrite: false
kerberos5pReadOnly: false
kerberos5pReadWrite: false
hasRootAccess: true
chownMode: 'Restricted'
}
]
}
protocolTypes: [
'NFSv4.1'
]
subnetId: subnetReference
networkFeatures: 'Basic'
snapshotDirectoryVisible: true
kerberosEnabled: false
securityStyle: 'Unix'
smbEncryption: false
smbContinuouslyAvailable: false
encryptionKeySource: 'Microsoft.NetApp'
ldapEnabled: false
unixPermissions: '0770'
volumeSpecName: 'generic'
coolAccess: false
avsDataStore: 'Disabled'
isDefaultQuotaEnabled: false
enableSubvolumes: 'Disabled'
}
}
/*
//Log File Mount; maybe needed, maybe not.
resource db2vm1Logvol 'Microsoft.NetApp/netAppAccounts/capacityPools/volumes@2022-03-01' = {
name: '${anfAccount.name}/${db2vm1Pool}/${db2vmprefix}-1-log'
location: location
properties: {
serviceLevel: 'Ultra'
creationToken: '${db2vmprefix}-1-log'
usageThreshold: logVolBytes
exportPolicy: {
rules: [
{
ruleIndex: 1
unixReadOnly: false
unixReadWrite: true
cifs: false
nfsv3: false
nfsv41: true
allowedClients: '0.0.0.0/0'
kerberos5ReadOnly: false
kerberos5ReadWrite: false
kerberos5iReadOnly: false
kerberos5iReadWrite: false
kerberos5pReadOnly: false
kerberos5pReadWrite: false
hasRootAccess: true
chownMode: 'Restricted'
}
]
}
protocolTypes: [
'NFSv4.1'
]
subnetId: subnetReference
networkFeatures: 'Basic'
snapshotDirectoryVisible: true
kerberosEnabled: false
securityStyle: 'Unix'
smbEncryption: false
smbContinuouslyAvailable: false
encryptionKeySource: 'Microsoft.NetApp'
ldapEnabled: false
unixPermissions: '0770'
volumeSpecName: 'generic'
coolAccess: false
avsDataStore: 'Disabled'
isDefaultQuotaEnabled: false
enableSubvolumes: 'Disabled'
}
}
*/
resource db2vm2Datavol 'Microsoft.NetApp/netAppAccounts/capacityPools/volumes@2022-03-01' = {
name: '${db2vm2Pool.name}/${db2vmprefix}-2-data'
location: location
properties: {
serviceLevel: 'Ultra'
creationToken: '${db2vmprefix}-2-data'
usageThreshold: dataVolBytes
exportPolicy: {
rules: [
{
ruleIndex: 1
unixReadOnly: false
unixReadWrite: true
cifs: false
nfsv3: false
nfsv41: true
allowedClients: '0.0.0.0/0'
kerberos5ReadOnly: false
kerberos5ReadWrite: false
kerberos5iReadOnly: false
kerberos5iReadWrite: false
kerberos5pReadOnly: false
kerberos5pReadWrite: false
hasRootAccess: true
chownMode: 'Restricted'
}
]
}
protocolTypes: [
'NFSv4.1'
]
subnetId: subnetReference
networkFeatures: 'Basic'
snapshotDirectoryVisible: true
kerberosEnabled: false
securityStyle: 'Unix'
smbEncryption: false
smbContinuouslyAvailable: false
encryptionKeySource: 'Microsoft.NetApp'
ldapEnabled: false
unixPermissions: '0770'
volumeSpecName: 'generic'
coolAccess: false
avsDataStore: 'Disabled'
isDefaultQuotaEnabled: false
enableSubvolumes: 'Disabled'
}
}
/*
//Log File Mount; maybe needed, maybe not.
resource db2vm2Logvol 'Microsoft.NetApp/netAppAccounts/capacityPools/volumes@2022-03-01' = {
name: '${anfAccount.name}/${db2vm2Pool}/${db2vmprefix}-2-log'
location: location
properties: {
serviceLevel: 'Ultra'
creationToken: '${db2vmprefix}-2-log'
usageThreshold: logVolBytes
exportPolicy: {
rules: [
{
ruleIndex: 1
unixReadOnly: false
unixReadWrite: true
cifs: false
nfsv3: false
nfsv41: true
allowedClients: '0.0.0.0/0'
kerberos5ReadOnly: false
kerberos5ReadWrite: false
kerberos5iReadOnly: false
kerberos5iReadWrite: false
kerberos5pReadOnly: false
kerberos5pReadWrite: false
hasRootAccess: true
chownMode: 'Restricted'
}
]
}
protocolTypes: [
'NFSv4.1'
]
subnetId: subnetReference
networkFeatures: 'Basic'
snapshotDirectoryVisible: true
kerberosEnabled: false
securityStyle: 'Unix'
smbEncryption: false
smbContinuouslyAvailable: false
encryptionKeySource: 'Microsoft.NetApp'
ldapEnabled: false
unixPermissions: '0770'
volumeSpecName: 'generic'
coolAccess: false
avsDataStore: 'Disabled'
isDefaultQuotaEnabled: false
enableSubvolumes: 'Disabled'
}
}
*/

View file

@@ -30,6 +30,8 @@ param subnetVMName string
param subnetVMPrefix string
param subnetDataPrefix string
param subnetDataName string
param subnetANFPrefix string
param subnetANFName string
resource vnet 'Microsoft.Network/virtualNetworks@2021-03-01' = {
name: vnetName
@@ -75,7 +77,21 @@ resource vnet 'Microsoft.Network/virtualNetworks@2021-03-01' = {
}
]
}
}
}
{
name: subnetANFName
properties: {
addressPrefix: subnetANFPrefix
delegations: [
{
name: 'NetAppDelegation'
properties: {
serviceName: 'Microsoft.Netapp/volumes'
}
}
]
}
}
{
name: subnetEndpointsName
properties: {

View file

@@ -9,7 +9,7 @@
"value": "oms"
},
"whichOMS":{
"value": "icr.io/cpopen/ibm-oms-pro-case-catalog:v1.0"
"value": "icr.io/cpopen/ibm-oms-pro-case-catalog:v1.0.1-220921-0820"
},
"ibmEntitlementKey" : {
"value": ""
@@ -79,6 +79,12 @@
"subnetDataPrefix": {
"value": "10.0.6.0/26"
},
"subnetANFName:" : {
"value" : "anf"
},
"subnetANFPrefix": {
"value": "10.0.6.0/26"
},
"osDiskType": {
"value": "Premium_LRS"
},
@@ -98,26 +104,14 @@
"value": "omsvmdb2"
},
"db2VirtualMachineSize": {
"value": "Standard_E8s_v4"
"value": "Standard_E16ds_v4"
},
"db2InstallerArchiveName" : {
"value": "v11.5.7_linuxx64_server_dec.tar.gz"
},
"db2DatabaseName": {
"value" : "omsDB2"
},
"db2SchemaName": {
"value": "oms"
},
"mqVirtualMachineName": {
"value": "omsvmmq"
},
"mqVirtualMachineSize": {
"value": "Standard_B2ms"
},
"mqInstallerArchiveName" : {
"value": "IBM_MQ_9.2.0_LINUX_X86-64_TRIAL.tar.gz"
},
"storageNamePrefix": {
"value": "omsfiles"
},
@@ -134,7 +128,7 @@
"value": "omsvmgateway"
},
"db2lbprivateIP":{
"value": "10.0.4.100"
"value": "10.0.5.50"
},
"registryName":{
"value": "omsacr01"
@@ -155,10 +149,19 @@
"value": "13"
},
"postgreSQLVMClass": {
"value": "Standard_E2ds_v4"
"value": "Standard_E16ds_v4"
},
"postgreSQLEdition": {
"value": "MemoryOptimized"
},
"logAnalyticsWorkspaceName": {
"value": "omsLogAnalytics"
},
"anfName" : {
"value": "omsanf"
},
"db2DataSizeGB" : {
"value" : 1000
}
}
}

View file

@@ -6,13 +6,17 @@ param postgreSQLVersion string
param postgreSQLVMClass string
param postgreSQLEdition string
param adminUserName string
@secure()
param adminPassword string
param subnetDataName string
param virtualNetworkName string
param postgreSQLName string
param deployLogAnalytics string
param logAnalyticsWorkSpaceName string
var vnetId = resourceId(resourceGroup().name, 'Microsoft.Network/virtualNetworks', virtualNetworkName)
var subnetReference = '${vnetId}/subnets/${subnetDataName}'
//var logAnalyticsId = resourceId(resourceGroup().name, 'insights-integration/providers/Microsoft.OperationalInsights/workspaces', logAnalyticsWorkSpaceName)
resource postgresprivatednszone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
name: 'omspostgres.private.postgres.database.azure.com'
@@ -62,3 +66,24 @@ resource postgressql 'Microsoft.DBforPostgreSQL/flexibleServers@2021-06-01' = {
registry_private_zone_link
]
}
resource pgLogAnalyticsSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (deployLogAnalytics == 'Y' || deployLogAnalytics == 'y') {
name: postgressql.name
scope: postgressql
properties: {
logAnalyticsDestinationType: 'AzureDiagnostics'
logs: [
{
category: 'PostgreSQLLogs'
enabled: true
}
]
metrics: [
{
category: 'AllMetrics'
enabled: true
}
]
workspaceId: resourceId(resourceGroup().name, 'insights-integration/providers/Microsoft.OperationalInsights/workspaces', logAnalyticsWorkSpaceName)
}
}

View file

@@ -14,9 +14,13 @@ param vnetName string
param mqsharename string
param logAnalyticsWorkSpaceName string
param deployLogAnalytics string
// Some variables to grab the details we need
var vnetId = resourceId(resourceGroup().name, 'Microsoft.Network/virtualNetworks', vnetName)
var subnetReference = '${vnetId}/subnets/${subnetEndpointsName}'
//var logAnalyticsId = resourceId(resourceGroup().name, 'insights-integration/providers/Microsoft.OperationalInsights/workspaces', logAnalyticsWorkSpaceName)
// More performant and lower latency storage for databases, Kafka and
// other resources.
@@ -192,3 +196,34 @@ resource mq_file_share 'Microsoft.Storage/storageAccounts/fileServices/shares@20
shareQuota: 100
}
}
resource storageLogAnalyticsSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = if (deployLogAnalytics == 'Y' || deployLogAnalytics == 'y') {
name: storage_premium.name
scope: storage_premium
properties: {
logAnalyticsDestinationType: 'AzureDiagnostics'
logs: [
{
category: 'StorageDelete'
enabled: true
}
{
category: 'StorageRead'
enabled: true
}
{
category: 'StorageWrite'
enabled: true
}
]
metrics: [
{
category: 'Transaction'
enabled: true
}
]
workspaceId: resourceId(resourceGroup().name, 'insights-integration/providers/Microsoft.OperationalInsights/workspaces', logAnalyticsWorkSpaceName)
}
}

View file

@@ -0,0 +1,22 @@
echo "==== AZURE CLI INSTALL ===="
#Azure CLI Install
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
echo -e "[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc" | sudo tee /etc/yum.repos.d/azure-cli.repo
sudo dnf -y install azure-cli
#Azure CLI Login
echo "==== AZURE CLI LOGIN ===="
az login --service-principal -u $(cat ~/.azure/osServicePrincipal.json | jq -r .clientId) -p $(cat ~/.azure/osServicePrincipal.json | jq -r .clientSecret) --tenant $(cat ~/.azure/osServicePrincipal.json | jq -r .tenantId) --output none && az account set -s $(cat ~/.azure/osServicePrincipal.json | jq -r .subscriptionId) --output none
#Make mount folder and get anf IP/volume mapping
mkdir /db2data
export data_mount_ip="$(az netappfiles volume list -g $RESOURCE_GROUP --account-name $ANF_ACCOUNT_NAME --pool-name $ANF_POOL_NAME -o json | jq -r '.[] | select (.name | contains(env.ANF_POOL_NAME) and contains("data")).mountTargets[0].ipAddress')"
export data_mount_vol_name="$(az netappfiles volume list -g $RESOURCE_GROUP --account-name $ANF_ACCOUNT_NAME --pool-name $ANF_POOL_NAME -o json | jq -r '.[] | select (.name | contains(env.ANF_POOL_NAME) and contains("data")).creationToken')"
fstab="$data_mount_ip:/$data_mount_vol_name /db2data nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=4.1,tcp,_netdev 0 0"
sudo su -c "echo $fstab >> /etc/fstab"
sudo mount -a
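A quick way to confirm the volume landed after the script runs (a verification sketch, not part of the script itself):

```bash
# Confirm the ANF export is mounted at /db2data
df -h /db2data
mount | grep /db2data
```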

View file

@@ -32,12 +32,12 @@ Once you have your address space and/or subnet configured you can proceed. The i
### Deploy AKS
To deploy your cluster, you can either do so through the Azure Portal, or use the provided sample bicep file to quickly stand up your cluster. The provided bicep and parameters file contains everything you should need to get started, but you can certainly customize the template for your needs (such as initial node pool sizes, VM sizes, etc).
To deploy your cluster, you can either do so through the Azure Portal, or use the provided sample bicep file to quickly stand up your cluster. There is a provided bicep file that contains everything you should need to get started, but you can certainly customize the template for your needs (such as initial node pool sizes, VM sizes, etc).
To deploy the provided example, simply use the Azure CLI's ```az deployment group create``` command:
To deploy the provided example, simply use the Azure CLI's ```az deployment group create``` command from the repository directory:
```bash
az deployment group create --resource-group <resource group name> --name MQAKS --template-file ./aks-mq.bicep
az deployment group create --resource-group <resource group name> --name MQAKS --template-file ./azure/aks.bicep
```
You will be prompted for values for things like the cluster name, location, and more. When the deployment finishes, you can get the cluster credentials added to your local ```kubeconfig``` by using the following commands:
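A typical invocation (resource group and cluster name are placeholders):

```bash
# Merge the AKS cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group <resource group name> --name <cluster name>
```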
@@ -73,7 +73,7 @@ az role assignment create --assignee "<managed identity object ID>" --role "Cont
For persistent storage, IBM MQ requires NFS-backed shared storage. For this reason, it is recommended to use Azure Premium File storage with NFS shares in your cluster. To enable this, you must first create an NFS storage class using the ```file.csi.azure.com``` provisioner. A sample .yaml file is available in this repository (```azurefile-premium-nfs-storageclass.yaml```) to help you.
```bash
kubectl apply -f ./azurefile-premium-nfs-storagecass.yaml
kubectl apply -f ./azurefile-premium-nfs-storageclass.yaml
```
### Create Config Maps / Secrets

View file

@@ -21,6 +21,8 @@ This ADF pipeline is provided for demonstration and testing purposes only; you *
Before you begin, you need to make sure that you have a source DB2 database that contains your data and a target Azure PostgreSQL database. You will also need access to a user account that can read all the source tables, as well as write to the target tables on the Postgres side. Furthermore, this tool only works on the assumption that your tables exist in both places. As such, it's best to do an OMS deployment and use the new deployment to create your empty schema.
**Database/Schema/Object Ownership Note:** If you choose NOT to use the same user/role that OMS will be using when you migrate your data (or create your database/schema/objects), note that you should take the time to change ownership of the objects prior to deploying OMS, especially if you need to run any DataManagement tasks (as part of an upgrade/fixpack).
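For PostgreSQL targets, one way to do that (a sketch; the role and connection values here are assumptions) is ```REASSIGN OWNED``` run through psql:

```bash
# Hand ownership of everything the migration role created over to the OMS role
psql "host=<server>.postgres.database.azure.com dbname=oms user=<admin user>" \
  -c 'REASSIGN OWNED BY migration_user TO oms_user;'
```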
### Define Migration Plan
This process works off of a defined migration plan in the form of a configuration file. A sample file is provided in this repository, but the general idea is an array of JSON objects that contain the source table to copy, the source schema of the table, and whether the target table on the destination side should be truncated. If you'd like to specify a "WHERE" clause for your source data (for instance, all data up until a particular date/time), you can write that SQL statement in the "whereClause" section (an empty where clause will copy all table data).
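As a sketch of what a single entry might look like (the field names here are assumptions, not the pipeline's actual schema; see the sample file in this repository for the real format):

```bash
# Write a single-entry migration plan (hypothetical field names)
cat > migration-plan.json <<'EOF'
[
  {
    "sourceSchema": "OMS",
    "sourceTable": "YFS_ITEM",
    "truncateTarget": true,
    "whereClause": ""
  }
]
EOF
```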

Binary data
docs/images/SterlingNetworkDiagram.png Normal file → Executable file

Binary file not shown.

Before

Width:  |  Height:  |  Size: 138 KiB

After

Width:  |  Height:  |  Size: 144 KiB