Kubernetes based pool provider implementation for Azure DevOps pipelines
k8s-poolprovider

Introduction

When using multi-tenant hosted pools, jobs sometimes remain queued because all the agents are occupied; conversely, physical resources stay allocated even when very few requests need to be served. Both situations hurt performance and utilization. To address these problems, we have implemented a Kubernetes based poolprovider that provides elasticity for agent pools.

This feature is in private preview. We recommend that you do not use this in production.

The k8s-poolprovider uses a Kubernetes cluster as the build infrastructure.

This repository contains the implementation of two major helm charts -

1. k8s-poolprovidercrd :

This helm chart installs all the resources required for configuring the Kubernetes poolprovider on a Kubernetes cluster. It first installs the controller, implemented using the Operator SDK, which handles lifecycle management of the poolprovider resources deployed in the cluster. As soon as the user applies the custom resource yaml, i.e. azurepipelinespool_cr.yaml, the controller instantiates the external resources such as the webserver deployment, service, and buildkit pods. The controller also handles reinitialization and reconfiguration at runtime if any changes are observed in the configured instances. The user can modify the custom resource file azurepipelinespool_cr.yaml as required: for example, to specify a modified controller container image, change the number of buildkit pod instances, or add customised agent container images; refer to the CRD specification for details.
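
For illustration only, applying a customised custom resource could look roughly like the sketch below; the apiVersion, kind, and field names (controllerImage, buildkitReplicaCount, agentPools) are assumptions made for this example, and azurepipelinespool_cr.yaml together with the CRD specification defines the actual schema.

# Illustrative sketch only -- the field names below are assumptions, not the real CRD schema.
kubectl apply -f - <<EOF
apiVersion: dev.azure.com/v1alpha1       # assumed API group/version
kind: AzurePipelinesPool                 # assumed kind
metadata:
  name: azurepipelinespool
spec:
  controllerImage: "<custom-controller-image>"    # modified controller container image
  buildkitReplicaCount: 2                         # number of buildkit pod instances
  agentPools:                                     # customised agent container images
    - name: "linux"
      image: "<custom-agent-image>"
EOF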

2. k8s-certmanager :

This helm chart installs the resources required for configuring the load balancer endpoint with HTTPS support.

Approach 1 - User provides an existing certificate and key

In this approach, the helm chart installs the ingress resource that configures the rules routing traffic to the internal webserver already installed by the previous helm chart. It assumes the user has already created a tls-secret with the existing certificate and key.

Approach 2 - Use Let's Encrypt to create a valid certificate and key

In this approach, the helm chart installs the ClusterIssuer and Certificate resources along with the ingress resource; a sketch of such a ClusterIssuer is shown below.
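
For reference, a Let's Encrypt ClusterIssuer created with cert-manager typically looks like the sketch below. It is shown here with the cert-manager.io/v1 API; the exact apiVersion depends on the cert-manager version installed, the e-mail address is a placeholder, and the chart in this repository provisions an equivalent resource together with a Certificate.

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # replace with a real contact address
    privateKeySecretRef:
      name: letsencrypt-prod              # secret that stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                    # solve HTTP-01 challenges through the nginx ingress
EOF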

In order to set up your Kubernetes cluster as the build infrastructure, you need to

  1. Configure the poolprovider on the Kubernetes cluster
  2. Add an agent pool configured as a Kubernetes poolprovider

1. Configure the poolprovider on the Kubernetes cluster

  1. Install k8s-poolprovidercrd helm chart
    helm install k8s-poolprovidercrd --name-template k8spoolprovidercrd --set "azurepipelines.VSTS_SECRET=sharedsecretval" --set "app.namespace=namespaceval"
    sharedsecretval - Value must be at least 16 characters long
    namespaceval - Namespace where all the poolprovider resources will be deployed
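
    For example, with illustrative values for the shared secret and the namespace:

    helm install k8s-poolprovidercrd --name-template k8spoolprovidercrd --set "azurepipelines.VSTS_SECRET=mysharedsecret12345" --set "app.namespace=azuredevops"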

  2. Apply the poolprovider custom resource yaml
    kubectl apply -f azurepipelinespool_cr.yaml

  3. Run helm install stable/nginx-ingress
    helm install stable/nginx-ingress --generate-name --namespace $namespaceval
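
    Note : If the stable chart repository is not yet configured on the machine running helm, it may need to be added first (the URL below is the current home of the archived stable charts):

    helm repo add stable https://charts.helm.sh/stable
    helm repo update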

  4. Execute the following commands to link the ingress service public IP with a valid DNS name
    For Azure, the following set of commands is used (the ingress IP returned by kubectl is captured and then used to look up the corresponding Azure public IP resource):

    ingressip=$(kubectl get service -l app=nginx-ingress --namespace=namespaceval -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
    publicpid=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$ingressip')].[id]" --output tsv)

    az network public-ip update --ids $publicpid --dns-name dnsname

    Note : You can learn more about the az network public-ip update command here

  5. If you want to use Let's Encrypt, run helm install for cert-manager (a sample installation is sketched below); otherwise create a TLS secret from your existing certificate and key
    kubectl create secret tls tls-secret --key keypath --cert certpath -n namespaceval
    keypath - Path of the existing key
    certpath - Path of the existing certificate
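
    A typical cert-manager installation with helm looks like the following; the jetstack chart repository and the installCRDs flag are as documented by the cert-manager project, but exact flags may vary between cert-manager versions:

    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true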

  6. Install k8s-certmanager helm chart
    helm install k8s-certmanager --name-template k8spoolprovidercert --set "configvalues.dnsname=fqdn" --set "letsencryptcert.val=false" --set "app.namespace=namespaceval"

    fqdn - Fully qualified domain name for which the key and certificate are generated
    namespaceval - Namespace where all the poolprovider resources will be deployed; this must be the same value provided in Step 1

A user can configure an Azure Kubernetes Service (AKS) cluster using the provided setup script -

Note - Before running the script, the user needs to be logged in with az login. If using an existing AKS cluster, the user also needs to fetch access credentials for the managed Kubernetes cluster using the az aks get-credentials command. Refer here for the command documentation.
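
For example (the resource group and AKS cluster names below are placeholders):

az login
az aks get-credentials --resource-group <resource-group> --name <aks-cluster-name>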

Approach 1 - User provides an existing certificate and key

./setup.sh -s "sharedsecret" -d "dnsname" -u "useletsencrypt" -k "keypath" -c "certificate path"

Approach 2 - Use Let's Encrypt to create a valid certificate and key

./setup.sh -s "sharedsecret" -d "dnsname" -u "useletsencrypt"

Description of option arguments passed in the script
  -d : (string) dnsname (mandatory) ex: testdomainname
  -u : (bool - true|false) (mandatory) uses Let's Encrypt if set to true; otherwise pass the existing certificate path
  -k : (string) indicates existing key path; used when -u is set to false
  -c : (string) indicates existing certificate path; used when -u is set to false
  -n : (string) namespace (optional)
  -s : (string) sharedsecret (mandatory)
  -h : help

Note : As part of the setup script we bind the public IP of the ingress with the DNS name provided by the user. The script currently uses az commands to perform this operation; if you want to configure a cluster other than AKS, please change those commands.

2. Add an agent pool configured as a Kubernetes poolprovider

  1. Run the PowerShell script poolprovidersetup.ps1

    ./poolprovidersetup.ps1

    Description of option arguments passed in the script
     URI : Account URI to be configured for poolprovider
     PATToken : PAT token for the account
     PoolName : AgentPool name to be configured as Kubernetes poolprovider
     DNSName : Same DNS name with which the key and secrets are generated
     Sharedsecret : Secret value of at least 16 characters; needs to be the exact same value as provided while configuring the cluster
     TargetSize : Target parallelism required in agent pool
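
     Assuming the script exposes the values above as named parameters, an invocation could look like this (all values below are placeholders):

     ./poolprovidersetup.ps1 -URI "https://dev.azure.com/<organization>" -PATToken "<pat-token>" -PoolName "<agent-pool-name>" -DNSName "<fqdn>" -Sharedsecret "<16+ character secret>" -TargetSize 2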
    

This repo will have telemetry enabled at a later point in time to monitor usage of this task by individuals/organisations.