Deleted traefik template files

This commit is contained in:
Bahram Rushenas 2022-08-29 10:39:48 -07:00
Parent c7e3d8e4b4
Commit f24aa4ce36
4 changed files: 8 additions and 352 deletions

View file

@@ -4,31 +4,31 @@ This folder contains manifest files and other artifacts to deploy common service
Example of shared services could be third-party services such as [Traefik](https://doc.traefik.io/traefik/v1.7/user-guide/kubernetes/?msclkid=2309fcb3b1bc11ec92c03b099f5d4e1c), [Prisma defender](https://docs.paloaltonetworks.com/prisma/prisma-cloud) and [Splunk](https://github.com/splunk/splunk-connect-for-kubernetes) or open source services such as [NGINX](https://www.nginx.com/resources/glossary/kubernetes-ingress-controller), [KEDA](https://keda.sh), [External-dns](https://github.com/kubernetes-sigs/external-dns#:~:text=ExternalDNS%20supports%20multiple%20DNS%20providers%20which%20have%20been,and%20we%20have%20limited%20resources%20to%20test%20changes.), [Cert-manager](https://cert-manager.io/docs/) or [Istio](https://istio.io/).

This **shared-services** directory is the root of the GitOps configuration directory. The Kubernetes manifest files included in the subdirectories are expected to be deployed via our in-cluster Flux operator. They are our AKS cluster's baseline configurations. The Flux operator is bootstrapped as part of the cluster deployment through the Bicep or Terraform IaC workflows.

The **namespaces** directory contains the configuration for each namespace and the resources created under those namespaces:

* Namespace **cluster-baseline-settings**:
  * [Kured](#kured)
  * Azure AD Pod Identity
  * Kubernetes RBAC Role Assignments through Azure AD Groups (_optional_)
* Namespace **kube-system**:
  * Azure Monitor Prometheus Scraping
* Namespace **traefik**:
  * Ingress Controller [Traefik](#Traefik)
* Namespace **a0008**:
  * Ingress Network Policy
  * RBAC settings specific to this namespace (_optional_)

The first three namespaces are workload agnostic and tend to all cluster-wide configuration concerns, while the fourth one is workload specific. Typically, workload-specific configuration settings are controlled by the application teams through their own GitHub repos and GitOps solution, which may differ from the Flux setup used here to configure the cluster.

The **cluster** directory contains the configuration that applies to the entire cluster (such as ClusterRole and ClusterRoleBinding), rather than to individual namespaces.

## Private bootstrapping repository

Typically, your bootstrapping repository wouldn't be a public-facing repository like this one, but instead a private GitHub or Azure DevOps repo. The Flux operator deployed with the cluster supports private git repositories as your bootstrapping source. In addition to requiring network line of sight to the repository from your cluster's nodes, you'll also need to ensure that you've provided the necessary credentials. Typically this comes in the form of certificate-based SSH or a personal access token (PAT), both ideally scoped as read-only to the repo with no additional permissions.
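
As a hedged sketch of the SSH option (the file name and key comment below are placeholders, and the exact way the private key is supplied to Flux depends on your setup), a read-only deploy key can be generated like this:

```shell
# Sketch: generate an SSH keypair to use as a read-only deploy key for the
# private bootstrapping repo. File name and comment are placeholders.
ssh-keygen -t ed25519 -N "" -C "flux-readonly" -f ./flux-deploy-key

# Register flux-deploy-key.pub as a read-only deploy key on the repo
# (GitHub or Azure DevOps), and supply the private half to the Flux
# operator as its git credential.
```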

To configure the settings for the GitHub repo that you want Flux to pull from, update the cluster parameter file in your forked repo prior to deploying it:
* If you are using Terraform, modify the [`flux.tfvars`](../../IaC/terraform/configuration/workloads/flux.tfvars) file.
* If you are using Bicep, modify the [`cluster.parameters.json`](../../IaC/bicep/rg-spoke/cluster.parameters.json) file.
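
As an illustrative sketch only, such settings typically name the repo URL, branch, and path to reconcile. The variable names below are placeholders, not the actual keys in the linked files; consult those files for the real names:

```
# Hypothetical tfvars sketch -- the variable names are placeholders.
gitops_url    = "git@github.com:your-org/your-gitops-repo.git"
gitops_branch = "main"
gitops_path   = "shared-services"
```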
@@ -54,6 +54,7 @@ To deploy traefik into your cluster through GitOps using flux follow these steps
Note that most of the parameters requested above will only be available to you after the deployment of your cluster.

## Kured
Kured is included as a solution to handle occasional required reboots from daily OS patching. No customization is required to get this service started.

This open-source software component is only needed if you require a managed rebooting solution between weekly [node image upgrades](https://docs.microsoft.com/azure/aks/node-image-upgrade). Building a process around deploying node image upgrades [every week](https://github.com/Azure/AKS/releases) satisfies most organizations' weekly patching cadence requirements. Combined with the fact that most security patches on Linux do not require reboots, this leaves your cluster in a well-supported state. If weekly node image upgrades satisfy your business requirements, remove Kured from this solution by deleting [`kured.yaml`](./cluster-baseline-settings/kured.yaml). If, however, weekly patching using node image upgrades is not sufficient and you need to respond to daily security updates that mandate a reboot ASAP, then a solution like Kured will help you achieve that objective. **Kured is not supported by Microsoft Support.**
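
For context on what Kured does once deployed: it runs as a DaemonSet and watches each node for the `/var/run/reboot-required` sentinel file that Debian/Ubuntu writes when a patch needs a restart, then cordons, drains, and reboots the node. A sketch of the relevant container arguments (these are real Kured flags, but this is only an excerpt, not a full manifest):

```
# Excerpt of a Kured DaemonSet container spec (sketch, not the full manifest).
command:
  - /usr/bin/kured
  - --period=1h            # how often each node checks for the reboot sentinel
  - --reboot-days=sat,sun  # optionally confine reboots to a maintenance window
  - --start-time=2:00      # window start (node-local time)
  - --end-time=5:00        # window end
```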

View file

@@ -1,18 +0,0 @@
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: podmi-ingress-controller-identity
  namespace: a0008
spec:
  type: 0
  resourceID: ${TRAEFIK_USER_ASSIGNED_IDENTITY_RESOURCE_ID}
  clientID: ${TRAEFIK_USER_ASSIGNED_IDENTITY_CLIENT_ID}
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: podmi-ingress-controller-binding
  namespace: a0008
spec:
  azureIdentity: podmi-ingress-controller-identity
  selector: podmi-ingress-controller
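
The binding above takes effect through labels: any pod in `a0008` whose `aadpodidbinding` label matches the binding's `selector` value is assigned the user-assigned managed identity (the Traefik deployment deleted in this same commit carries exactly that label). A minimal sketch of the opt-in:

```
# Sketch: a pod opts in to the identity by carrying the selector value
# from the AzureIdentityBinding as its aadpodidbinding label.
metadata:
  labels:
    aadpodidbinding: podmi-ingress-controller
```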

View file

@@ -1,22 +0,0 @@
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aks-ingress-tls-secret-csi-akv
  namespace: a0008
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"
    useVMManagedIdentity: "false"
    keyvaultName: ${KEYVAULT_NAME_AKS_BASELINE}
    objects: |
      array:
        - |
          objectName: traefik-ingress-internal-aks-ingress-tls
          objectAlias: tls.crt
          objectType: cert
        - |
          objectName: traefik-ingress-internal-aks-ingress-tls
          objectAlias: tls.key
          objectType: secret
    tenantId: ${TENANTID_AZURERBAC_AKS_BASELINE}

View file

@@ -1,305 +0,0 @@
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-ingress-controller
  namespace: a0008
  labels:
    app.kubernetes.io/name: traefik-ingress-ilb
    app.kubernetes.io/instance: traefik-ingress-ilb
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
  namespace: a0008
  labels:
    app.kubernetes.io/name: traefik-ingress-ilb
    app.kubernetes.io/instance: traefik-ingress-ilb
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-watch-workloads
  namespace: a0008
  labels:
    app.kubernetes.io/name: traefik-ingress-ilb
    app.kubernetes.io/instance: traefik-ingress-ilb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: a0008
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-ingress-config
  namespace: a0008
  labels:
    app.kubernetes.io/name: traefik-ingress-ilb
    app.kubernetes.io/instance: traefik-ingress-ilb
data:
  traefik.toml: |
    [metrics]
      [metrics.prometheus]
        entryPoint = "metrics"
        addEntryPointsLabels = true
        addServicesLabels = true
    [accessLog]
      filePath = "/data/access.log"
      bufferingSize = 100
    [global]
      # prevent Traefik from checking newer versions in production
      checknewversion = false
      # prevent Traefik from collecting and sending stats from production
      sendanonymoususage = false
    [log]
      level = "ERROR"
      format = "json"
    [api]
      dashboard = false
    [providers]
      # Configuration reload frequency:
      # * duration that Traefik waits for, after a configuration reload, before taking into account any new configuration refresh event
      # * the most recent one is taken into account, and all the previous others are dropped.
      providersThrottleDuration = 10
      [providers.file]
        filename = "/config/traefik.toml"
        watch = true
      # Traefik provider that supports the native Kubernetes Ingress specification
      # and derives the corresponding dynamic configuration from it. https://kubernetes.io/docs/concepts/services-networking/ingress/
      [providers.kubernetesingress]
        ingressClass = "traefik-internal"
        namespaces = ["a0008"]
        [providers.kubernetesIngress.ingressEndpoint]
          publishedService = "a0008/traefik-ingress-service"
    # Enable gzip compression
    [http.middlewares]
      [http.middlewares.gzip-compress.compress]
      [http.middlewares.app-gateway-snet.ipWhiteList]
        sourceRange = ["10.240.5.0/24"]
    [entryPoints]
      [entryPoints.metrics]
        address = ":8082"
      [entryPoints.traefik]
        address = ":9000"
      [entryPoints.websecure]
        address = ":8443"
        [entryPoints.websecure.forwardedHeaders]
          trustedIPs = ["10.240.5.0/24"]
        [entryPoints.websecure.http.tls]
          options = "default"
    [ping]
      entryPoint = "traefik"
    [tls]
      # without duplicating this cert config and with SNI enabled, Traefik won't
      # find the certificates for your host. This may be a Traefik issue.
      [[tls.certificates]]
        certFile = "/certs/tls.crt"
        keyFile = "/certs/tls.key"
        stores = ["default"]
      [tls.stores]
        [tls.stores.default]
          [tls.stores.default.defaultCertificate]
            # without specifying your certs here, Traefik will create its own
            # certificate
            certFile = "/certs/tls.crt"
            keyFile = "/certs/tls.key"
      [tls.options.default]
        minVersion = "VersionTLS12"
        sniStrict = true
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: a0008
  labels:
    app.kubernetes.io/name: traefik-ingress-ilb
    app.kubernetes.io/instance: traefik-ingress-ilb
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "snet-clusteringressservices"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.4.4
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: traefik-ingress-ilb
    app.kubernetes.io/instance: traefik-ingress-ilb
  ports:
    - port: 443
      name: "https"
      targetPort: "websecure"
      protocol: "TCP"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-ingress-controller
  namespace: a0008
  labels:
    app.kubernetes.io/name: traefik-ingress-ilb
    app.kubernetes.io/instance: traefik-ingress-ilb
    aadpodidbinding: podmi-ingress-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik-ingress-ilb
      app.kubernetes.io/instance: traefik-ingress-ilb
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8082"
      labels:
        app.kubernetes.io/name: traefik-ingress-ilb
        app.kubernetes.io/instance: traefik-ingress-ilb
        aadpodidbinding: podmi-ingress-controller
    spec:
      hostNetwork: false
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - traefik-ingress-ilb
              topologyKey: "kubernetes.io/hostname"
      containers:
        # PRODUCTION READINESS CHANGE REQUIRED
        # This image should be sourced from a non-public container registry, such as the
        # one deployed alongside this reference implementation:
        #   az acr import --source docker.io/library/traefik:v2.5.3 -n <your-acr-instance-name>
        # and then set this to
        #   image: <your-acr-instance-name>.azurecr.io/library/traefik:v2.5.3
        # In order to use the public image, replace the image setting with the following line:
        #   - image: docker.io/library/traefik:v2.5.3
        - image: ${ACR_NAME_AKS_BASELINE}.azurecr.io/library/traefik:v2.5.3
          name: traefik-ingress-controller
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 200m
              memory: 128Mi
          readinessProbe:
            httpGet:
              path: /ping
              port: "traefik"
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            httpGet:
              path: /ping
              port: "traefik"
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          ports:
            - name: "traefik"
              containerPort: 9000
              protocol: TCP
            - name: "websecure"
              containerPort: 8443
              protocol: TCP
            - name: "metrics"
              containerPort: 8082
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 65532
            runAsNonRoot: true
            runAsUser: 65532
          volumeMounts:
            - name: data
              mountPath: /data
            - name: config
              mountPath: /config
              readOnly: true
            - name: ssl-csi
              mountPath: /certs
              readOnly: true
          args:
            - --configfile=/config/traefik.toml
      volumes:
        - name: config
          configMap:
            name: traefik-ingress-config
        - name: ssl-csi
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-ingress-tls-secret-csi-akv
        - name: data
          emptyDir: {}
      nodeSelector:
        agentpool: npuser01