* build & publish dependent charts

* don't update chart

* fix chart template
This commit is contained in:
Vishwanath 2021-10-20 16:51:03 -07:00 committed by GitHub
Parent 34b885ad3a
Commit b2e4259cfe
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
41 changed files: 1873 additions and 5 deletions

40
.github/workflows/build-and-push-dependent-helm-charts vendored Normal file

@ -0,0 +1,40 @@
name: build-and-push-dependent-helm-chart
on:
  workflow_dispatch:
    inputs:
      dependentHelmChartName:
        description: 'Dependent HELM chart name. This must be `kube-state-metrics` or `prometheus-node-exporter`'
        required: true
jobs:
  build-and-push-dependent-HELM-chart:
    runs-on: ubuntu-latest
    steps:
      - name: Set-workflow-initiator
        run: echo "Initiated by - ${GITHUB_ACTOR}"
      - name: Show-versions-On-deployment-machine
        run: lsb_release -a && helm version
      - name: Set-Helm-OCI-Experimental-feature
        run: echo "HELM_EXPERIMENTAL_OCI=1" >> $GITHUB_ENV
      - name: Set-ACR-Registry
        run: echo "ACR_REGISTRY=containerinsightsprod.azurecr.io" >> $GITHUB_ENV
      - name: Set-ACR-Repository-HELM-Chart
        run: echo "ACR_REPOSITORY_HELM=/public/azuremonitor/containerinsights/cidev" >> $GITHUB_ENV
      - name: Set-MCR-HELM-CHART-Repository-Root
        run: echo "MCR_REPOSITORY_ROOT=mcr.microsoft.com/azuremonitor/containerinsights/cidev/" >> $GITHUB_ENV
      - name: Lint-HELM-Chart
        run: helm lint ./otelcollector/deploy/dependentcharts/${{ github.event.inputs.dependentHelmChartName }}/
      - name: Run-trivy-scanner-on-HELM-chart-fs
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: "./otelcollector/deploy/dependentcharts/${{ github.event.inputs.dependentHelmChartName }}/"
          format: 'table'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
          timeout: '5m0s'
      - name: Package-HELM-chart
        run: cd ./otelcollector/deploy/dependentcharts/ && rm -f ./${{ github.event.inputs.dependentHelmChartName }}/*.md && helm package ./${{ github.event.inputs.dependentHelmChartName }}/
      - name: Login-to-ACR-thru-Helm
        run: helm registry login containerinsightsprod.azurecr.io --username ${{ secrets.RCA_PS_DI }} --password ${{ secrets.RCA_PS_CES }}
      - name: Publish-Helm-chart-to-ACR
        run: cd ./otelcollector/deploy/dependentcharts/ && helm push ${{ github.event.inputs.dependentHelmChartName }}-*.tgz oci://${{ env.ACR_REGISTRY }}${{ env.ACR_REPOSITORY_HELM }}
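A rough way to verify a publish (the chart name and version below are simply the ones vendored in this commit) is to pull the pushed chart back from the registry using Helm's OCI support:

```console
export HELM_EXPERIMENTAL_OCI=1
helm pull oci://containerinsightsprod.azurecr.io/public/azuremonitor/containerinsights/cidev/kube-state-metrics --version 3.5.2
```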

3
.gitignore vendored

@ -4,7 +4,8 @@
*.dll
*.so
*.dylib
otelcollector
*.tgz
#otelcollector
promconfigvalidator
# Test binary, built with `go test -c`


@ -32,7 +32,3 @@ dependencies:
version: "1.18.*"
repository: https://prometheus-community.github.io/helm-charts
condition: scrapeTargets.nodeExporter


@ -0,0 +1,28 @@
- **Main branch builds:** ![Builds on main branch](https://github.com/Azure/prometheus-collector/actions/workflows/build-and-push-image-and-chart.yml/badge.svg?branch=main&event=push)
- **PR builds:** ![PRs to main branch](https://github.com/Azure/prometheus-collector/actions/workflows/build-and-push-image-and-chart.yml/badge.svg?branch=main&event!=push)
# Instructions for taking newer versions of our dependent charts
We have a dependency on the external kube-state-metrics and prometheus-node-exporter charts. The source for both dependent charts is under otelcollector/deploy/dependentcharts, in their respective folders.
We will periodically take updated charts (and images) for these dependencies. Below is an outline of the steps involved in updating these dependencies to a later version. The MSFT OSS upstream team will produce safe images for each release of the above two projects. We (the Container Insights team) will consume those images and produce charts for these two projects.
#### Step 1 : Check for updated versions of these charts in the repos below -
- [Kube-state-metrics](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-state-metrics)
- [Prometheus node exporter](https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus-node-exporter/)
Note: You can update one chart or both, depending on what needs to be updated/refreshed.
Chart version and image version are different. You can check the latest chart & image versions for kube-state-metrics [here](https://github.com/kubernetes/kube-state-metrics/) and for Prometheus-node-exporter [here](https://github.com/prometheus/node_exporter/). Every chart version goes hand-in-hand with the image version released along with it, so please don't change the image version used by any given chart version. The only change we need to make in the chart is to pick up the image from the MSFT OSS container registry (the same image version as in the chart version we pick).
The OSS MSFT repository for kube-state-metrics is [here](https://azcuindexer.azurewebsites.net/repositories/oss/kubernetes/kube-state-metrics) and for Prometheus-node-exporter is [here](https://azcuindexer.azurewebsites.net/repositories/oss/prometheus/node-exporter). Ensure that the image tag used in the chart is indeed available in the MSFT OSS container repository for the corresponding chart.
After taking the latest chart(s), the only change required is updating the default value of `image.repository` in values.yaml to the following (see the example snippet below) -
kube-state-metrics : `mcr.microsoft.com/oss/kubernetes/kube-state-metrics`
prometheus-node-exporter : `mcr.microsoft.com/oss/prometheus/node-exporter`
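For illustration, the resulting `image` section of each chart's values.yaml would look roughly like this (the tags shown are the ones vendored in this commit; keep whatever tag the upstream chart release pins):

```yaml
# kube-state-metrics/values.yaml
image:
  repository: mcr.microsoft.com/oss/kubernetes/kube-state-metrics
  tag: v2.2.0           # tag pinned by the upstream chart release
  pullPolicy: IfNotPresent

# prometheus-node-exporter/values.yaml
image:
  repository: mcr.microsoft.com/oss/prometheus/node-exporter
  tag: v1.2.2           # tag pinned by the upstream chart release
  pullPolicy: IfNotPresent
```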
#### Step 2 : Create a PR for the chart update only. Please keep this PR separate from other changes.
#### Step 3 : After the PR is approved and merged, trigger the chart build & push through the action `build-and-push-dependent-helm-charts`. The parameter is one chart name, i.e. `prometheus-node-exporter` or `kube-state-metrics`, depending on what is being updated/refreshed. If you want to update both, trigger this action twice (one after another). Currently, these charts will be packaged and pushed to our cidev ACR repository (which will be reconciled with MCR).
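As a sketch, the dispatch can also be done from the GitHub CLI (it can equally be triggered from the Actions tab; the workflow name below is the one defined in the new workflow file):

```console
gh workflow run build-and-push-dependent-helm-chart \
  -f dependentHelmChartName=kube-state-metrics
```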
#### Step 4 : Once the dependent chart(s) is/are packaged and pushed to our MCR, update our Prometheus collector chart's Chart-template.yaml with the correct chart version(s) for the dependency(ies) updated, and create a PR.
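A rough sketch of what the updated dependencies block in Chart-template.yaml might look like (the versions are the chart versions vendored in this commit; only the `scrapeTargets.nodeExporter` condition key is visible in this diff, the kube-state-metrics key is an assumption):

```yaml
dependencies:
  - name: kube-state-metrics
    version: "3.5.2"                            # chart version just published
    condition: scrapeTargets.kubeStateMetrics   # assumed condition key
  - name: prometheus-node-exporter
    version: "2.2.0"                            # chart version just published
    condition: scrapeTargets.nodeExporter
```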


@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@ -0,0 +1,19 @@
apiVersion: v2
name: kube-state-metrics
description: Install kube-state-metrics to generate and expose cluster-level metrics
keywords:
- metric
- monitoring
- prometheus
- kubernetes
type: application
version: 3.5.2
appVersion: 2.2.0
home: https://github.com/kubernetes/kube-state-metrics/
sources:
- https://github.com/kubernetes/kube-state-metrics/
maintainers:
- name: tariq1890
  email: tariq.ibrahim@mulesoft.com
- name: mrueg
  email: manuel@rueg.eu


@ -0,0 +1,6 @@
approvers:
- tariq1890
- mrueg
reviewers:
- tariq1890
- mrueg


@ -0,0 +1,68 @@
# kube-state-metrics Helm Chart
Installs the [kube-state-metrics agent](https://github.com/kubernetes/kube-state-metrics).
## Get Repo Info
```console
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```
_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
## Install Chart
```console
helm install [RELEASE_NAME] prometheus-community/kube-state-metrics [flags]
```
_See [configuration](#configuration) below._
_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
## Uninstall Chart
```console
helm uninstall [RELEASE_NAME]
```
This removes all the Kubernetes components associated with the chart and deletes the release.
_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
## Upgrading Chart
```console
helm upgrade [RELEASE_NAME] prometheus-community/kube-state-metrics [flags]
```
_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
### Migrating from stable/kube-state-metrics and kubernetes/kube-state-metrics
You can upgrade in-place:
1. [get repo info](#get-repo-info)
1. [upgrade](#upgrading-chart) your existing release name using the new chart repo
## Upgrading to v3.0.0
v3.0.0 includes kube-state-metrics v2.0, see the [changelog](https://github.com/kubernetes/kube-state-metrics/blob/release-2.0/CHANGELOG.md) for major changes on the application-side.
The upgraded chart now has the following changes:
* Dropped support for helm v2 (helm v3 or later is required)
* `collectors` key was renamed to `resources`
* `namespace` key was renamed to `namespaces`
## Configuration
See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments:
```console
helm show values prometheus-community/kube-state-metrics
```
You may also run `helm show values` on this chart's [dependencies](#dependencies) for additional options.
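For example, individual values can be overridden at install time (an illustrative override using keys from this chart's values.yaml):

```console
helm install kube-state-metrics prometheus-community/kube-state-metrics \
  --set replicas=2 \
  --set prometheus.monitor.enabled=true
```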


@ -0,0 +1,10 @@
kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
The exposed metrics can be found here:
https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md#exposed-metrics
The metrics are exported on the HTTP endpoint /metrics on the listening port.
In your case, {{ template "kube-state-metrics.fullname" . }}.{{ template "kube-state-metrics.namespace" . }}.svc.cluster.local:{{ .Values.service.port }}/metrics
They are served either as plaintext or protobuf depending on the Accept header.
They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint.
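As a quick sanity check (the service name and namespace are placeholders for whatever the release renders), the endpoint can be inspected with a port-forward:

```console
kubectl port-forward svc/<kube-state-metrics-fullname> 8080:8080 -n <namespace>
curl -s http://localhost:8080/metrics | head
```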


@ -0,0 +1,47 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kube-state-metrics.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kube-state-metrics.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "kube-state-metrics.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "kube-state-metrics.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "kube-state-metrics.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,23 @@
{{- if and .Values.rbac.create .Values.rbac.useClusterRole -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
{{- if .Values.rbac.useExistingRole }}
name: {{ .Values.rbac.useExistingRole }}
{{- else }}
name: {{ template "kube-state-metrics.fullname" . }}
{{- end }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.serviceAccountName" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
{{- end -}}


@ -0,0 +1,157 @@
apiVersion: apps/v1
{{- if .Values.autosharding.enabled }}
kind: StatefulSet
{{- else }}
kind: Deployment
{{- end }}
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
replicas: {{ .Values.replicas }}
{{- if .Values.autosharding.enabled }}
serviceName: {{ template "kube-state-metrics.fullname" . }}
volumeClaimTemplates: []
{{- end }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: "{{ .Release.Name }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 8 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
hostNetwork: {{ .Values.hostNetwork }}
serviceAccountName: {{ template "kube-state-metrics.serviceAccountName" . }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsGroup: {{ .Values.securityContext.runAsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
{{- if .Values.autosharding.enabled }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- end }}
args:
{{- if .Values.extraArgs }}
{{- range .Values.extraArgs }}
- {{ . }}
{{- end }}
{{- end }}
{{- if .Values.service.port }}
- --port={{ .Values.service.port | default 8080}}
{{- end }}
{{- if .Values.collectors }}
- --resources={{ .Values.collectors | join "," }}
{{- end }}
{{- if .Values.metricLabelsAllowlist }}
- --metric-labels-allowlist={{ .Values.metricLabelsAllowlist | join "," }}
{{- end }}
{{- if .Values.metricAnnotationsAllowList }}
- --metric-annotations-allowlist={{ .Values.metricAnnotationsAllowList | join "," }}
{{- end }}
{{- if .Values.metricAllowlist }}
- --metric-allowlist={{ .Values.metricAllowlist | join "," }}
{{- end }}
{{- if .Values.metricDenylist }}
- --metric-denylist={{ .Values.metricDenylist | join "," }}
{{- end }}
{{- if .Values.namespaces }}
- --namespaces={{ tpl (.Values.namespaces | join ",") $ }}
{{- end }}
{{- if .Values.autosharding.enabled }}
- --pod=$(POD_NAME)
- --pod-namespace=$(POD_NAMESPACE)
{{- end }}
{{- if .Values.kubeconfig.enabled }}
- --kubeconfig=/opt/k8s/.kube/config
{{- end }}
{{- if .Values.selfMonitor.telemetryHost }}
- --telemetry-host={{ .Values.selfMonitor.telemetryHost }}
{{- end }}
- --telemetry-port={{ .Values.selfMonitor.telemetryPort | default 8081 }}
{{- if .Values.kubeconfig.enabled }}
volumeMounts:
- name: kubeconfig
mountPath: /opt/k8s/.kube/
readOnly: true
{{- end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
ports:
- containerPort: {{ .Values.service.port | default 8080}}
{{- if .Values.selfMonitor.enabled }}
- containerPort: {{ .Values.selfMonitor.telemetryPort | default 8081 }}
{{- end }}
livenessProbe:
httpGet:
path: /healthz
port: {{ .Values.service.port | default 8080}}
initialDelaySeconds: 5
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /
port: {{ .Values.service.port | default 8080}}
initialDelaySeconds: 5
timeoutSeconds: 5
{{- if .Values.resources }}
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- end }}
{{- if .Values.containerSecurityContext }}
securityContext:
{{ toYaml .Values.containerSecurityContext | indent 10 }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.kubeconfig.enabled}}
volumes:
- name: kubeconfig
secret:
secretName: {{ template "kube-state-metrics.fullname" . }}-kubeconfig
{{- end }}


@ -0,0 +1,15 @@
{{- if .Values.kubeconfig.enabled -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "kube-state-metrics.fullname" . }}-kubeconfig
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
type: Opaque
data:
config: '{{ .Values.kubeconfig.secret }}'
{{- end -}}


@ -0,0 +1,20 @@
{{- if .Values.podDisruptionBudget -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
{{ toYaml .Values.podDisruptionBudget | indent 2 }}
{{- end -}}


@ -0,0 +1,42 @@
{{- if .Values.podSecurityPolicy.enabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.podSecurityPolicy.annotations }}
annotations:
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
privileged: false
volumes:
- 'secret'
{{- if .Values.podSecurityPolicy.additionalVolumes }}
{{ toYaml .Values.podSecurityPolicy.additionalVolumes | indent 4 }}
{{- end }}
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
{{- end }}


@ -0,0 +1,22 @@
{{- if and .Values.podSecurityPolicy.enabled .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: psp-{{ template "kube-state-metrics.fullname" . }}
rules:
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if semverCompare "> 1.15.0-0" $kubeTargetVersion }}
- apiGroups: ['policy']
{{- else }}
- apiGroups: ['extensions']
{{- end }}
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "kube-state-metrics.fullname" . }}
{{- end }}


@ -0,0 +1,19 @@
{{- if and .Values.podSecurityPolicy.enabled .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: psp-{{ template "kube-state-metrics.fullname" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: psp-{{ template "kube-state-metrics.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.serviceAccountName" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
{{- end }}


@ -0,0 +1,190 @@
{{- if and (eq .Values.rbac.create true) (not .Values.rbac.useExistingRole) -}}
{{- range (split "," .Values.namespaces) }}
---
apiVersion: rbac.authorization.k8s.io/v1
{{- if eq $.Values.rbac.useClusterRole false }}
kind: Role
{{- else }}
kind: ClusterRole
{{- end }}
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" $ }}
helm.sh/chart: {{ $.Chart.Name }}-{{ $.Chart.Version }}
app.kubernetes.io/managed-by: {{ $.Release.Service }}
app.kubernetes.io/instance: {{ $.Release.Name }}
name: {{ template "kube-state-metrics.fullname" $ }}
{{- if eq $.Values.rbac.useClusterRole false }}
namespace: {{ . }}
{{- end }}
rules:
{{ if has "certificatesigningrequests" $.Values.collectors }}
- apiGroups: ["certificates.k8s.io"]
resources:
- certificatesigningrequests
verbs: ["list", "watch"]
{{ end -}}
{{ if has "configmaps" $.Values.collectors }}
- apiGroups: [""]
resources:
- configmaps
verbs: ["list", "watch"]
{{ end -}}
{{ if has "cronjobs" $.Values.collectors }}
- apiGroups: ["batch"]
resources:
- cronjobs
verbs: ["list", "watch"]
{{ end -}}
{{ if has "daemonsets" $.Values.collectors }}
- apiGroups: ["extensions", "apps"]
resources:
- daemonsets
verbs: ["list", "watch"]
{{ end -}}
{{ if has "deployments" $.Values.collectors }}
- apiGroups: ["extensions", "apps"]
resources:
- deployments
verbs: ["list", "watch"]
{{ end -}}
{{ if has "endpoints" $.Values.collectors }}
- apiGroups: [""]
resources:
- endpoints
verbs: ["list", "watch"]
{{ end -}}
{{ if has "horizontalpodautoscalers" $.Values.collectors }}
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
{{ if has "ingresses" $.Values.collectors }}
- apiGroups: ["extensions", "networking.k8s.io"]
resources:
- ingresses
verbs: ["list", "watch"]
{{ end -}}
{{ if has "jobs" $.Values.collectors }}
- apiGroups: ["batch"]
resources:
- jobs
verbs: ["list", "watch"]
{{ end -}}
{{ if has "limitranges" $.Values.collectors }}
- apiGroups: [""]
resources:
- limitranges
verbs: ["list", "watch"]
{{ end -}}
{{ if has "mutatingwebhookconfigurations" $.Values.collectors }}
- apiGroups: ["admissionregistration.k8s.io"]
resources:
- mutatingwebhookconfigurations
verbs: ["list", "watch"]
{{ end -}}
{{ if has "namespaces" $.Values.collectors }}
- apiGroups: [""]
resources:
- namespaces
verbs: ["list", "watch"]
{{ end -}}
{{ if has "networkpolicies" $.Values.collectors }}
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs: ["list", "watch"]
{{ end -}}
{{ if has "nodes" $.Values.collectors }}
- apiGroups: [""]
resources:
- nodes
verbs: ["list", "watch"]
{{ end -}}
{{ if has "persistentvolumeclaims" $.Values.collectors }}
- apiGroups: [""]
resources:
- persistentvolumeclaims
verbs: ["list", "watch"]
{{ end -}}
{{ if has "persistentvolumes" $.Values.collectors }}
- apiGroups: [""]
resources:
- persistentvolumes
verbs: ["list", "watch"]
{{ end -}}
{{ if has "poddisruptionbudgets" $.Values.collectors }}
- apiGroups: ["policy"]
resources:
- poddisruptionbudgets
verbs: ["list", "watch"]
{{ end -}}
{{ if has "pods" $.Values.collectors }}
- apiGroups: [""]
resources:
- pods
verbs: ["list", "watch"]
{{ end -}}
{{ if has "replicasets" $.Values.collectors }}
- apiGroups: ["extensions", "apps"]
resources:
- replicasets
verbs: ["list", "watch"]
{{ end -}}
{{ if has "replicationcontrollers" $.Values.collectors }}
- apiGroups: [""]
resources:
- replicationcontrollers
verbs: ["list", "watch"]
{{ end -}}
{{ if has "resourcequotas" $.Values.collectors }}
- apiGroups: [""]
resources:
- resourcequotas
verbs: ["list", "watch"]
{{ end -}}
{{ if has "secrets" $.Values.collectors }}
- apiGroups: [""]
resources:
- secrets
verbs: ["list", "watch"]
{{ end -}}
{{ if has "services" $.Values.collectors }}
- apiGroups: [""]
resources:
- services
verbs: ["list", "watch"]
{{ end -}}
{{ if has "statefulsets" $.Values.collectors }}
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
{{ end -}}
{{ if has "storageclasses" $.Values.collectors }}
- apiGroups: ["storage.k8s.io"]
resources:
- storageclasses
verbs: ["list", "watch"]
{{ end -}}
{{ if has "validatingwebhookconfigurations" $.Values.collectors }}
- apiGroups: ["admissionregistration.k8s.io"]
resources:
- validatingwebhookconfigurations
verbs: ["list", "watch"]
{{ end -}}
{{ if has "volumeattachments" $.Values.collectors }}
- apiGroups: ["storage.k8s.io"]
resources:
- volumeattachments
verbs: ["list", "watch"]
{{ end -}}
{{ if has "verticalpodautoscalers" $.Values.collectors }}
- apiGroups: ["autoscaling.k8s.io"]
resources:
- verticalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,27 @@
{{- if and (eq .Values.rbac.create true) (eq .Values.rbac.useClusterRole false) -}}
{{- range (split "," $.Values.namespaces) }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" $ }}
helm.sh/chart: {{ $.Chart.Name }}-{{ $.Chart.Version }}
app.kubernetes.io/managed-by: {{ $.Release.Service }}
app.kubernetes.io/instance: {{ $.Release.Name }}
name: {{ template "kube-state-metrics.fullname" $ }}
namespace: {{ . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
{{- if (not $.Values.rbac.useExistingRole) }}
name: {{ template "kube-state-metrics.fullname" $ }}
{{- else }}
name: {{ $.Values.rbac.useExistingRole }}
{{- end }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.serviceAccountName" $ }}
namespace: {{ template "kube-state-metrics.namespace" $ }}
{{- end -}}
{{- end -}}


@ -0,0 +1,42 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
annotations:
{{- if .Values.prometheusScrape }}
prometheus.io/scrape: '{{ .Values.prometheusScrape }}'
{{- end }}
{{- if .Values.service.annotations }}
{{- toYaml .Values.service.annotations | nindent 4 }}
{{- end }}
spec:
type: "{{ .Values.service.type }}"
ports:
- name: "http"
protocol: TCP
port: {{ .Values.service.port | default 8080}}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
targetPort: {{ .Values.service.port | default 8080}}
{{ if .Values.selfMonitor.enabled }}
- name: "metrics"
protocol: TCP
port: {{ .Values.selfMonitor.telemetryPort | default 8081 }}
targetPort: {{ .Values.selfMonitor.telemetryPort | default 8081 }}
{{ end }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.service.loadBalancerIP }}"
{{- end }}
selector:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}


@ -0,0 +1,18 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
name: {{ template "kube-state-metrics.serviceAccountName" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
{{- if .Values.serviceAccount.annotations }}
annotations:
{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
{{- end }}
imagePullSecrets:
{{ toYaml .Values.serviceAccount.imagePullSecrets | indent 2 }}
{{- end -}}


@ -0,0 +1,50 @@
{{- if .Values.prometheus.monitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
{{- if .Values.prometheus.monitor.additionalLabels }}
{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
{{- end }}
{{- if .Values.customLabels }}
{{ toYaml .Values.customLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
endpoints:
- port: http
{{- if .Values.prometheus.monitor.honorLabels }}
honorLabels: true
{{- end }}
{{- if .Values.prometheus.monitor.metricRelabelings }}
metricRelabelings:
{{- toYaml .Values.prometheus.monitor.metricRelabelings | nindent 6 }}
{{- end }}
{{- if .Values.prometheus.monitor.relabelings }}
relabelings:
{{- toYaml .Values.prometheus.monitor.relabelings | nindent 6 }}
{{- end }}
{{ if .Values.selfMonitor.enabled }}
- port: metrics
{{- if .Values.prometheus.monitor.honorLabels }}
honorLabels: true
{{- end }}
{{- if .Values.prometheus.monitor.metricRelabelings }}
metricRelabelings:
{{- toYaml .Values.prometheus.monitor.metricRelabelings | nindent 6 }}
{{- end }}
{{- if .Values.prometheus.monitor.relabelings }}
relabelings:
{{- toYaml .Values.prometheus.monitor.relabelings | nindent 6 }}
{{- end }}
{{ end }}
{{- end }}


@ -0,0 +1,29 @@
{{- if and .Values.autosharding.enabled .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- apps
resourceNames:
- {{ template "kube-state-metrics.fullname" . }}
resources:
- statefulsets
verbs:
- get
- list
- watch
{{- end }}


@ -0,0 +1,20 @@
{{- if and .Values.autosharding.enabled .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
labels:
app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "kube-state-metrics.serviceAccountName" . }}
namespace: {{ template "kube-state-metrics.namespace" . }}
{{- end }}


@ -0,0 +1,214 @@
# Default values for kube-state-metrics.
prometheusScrape: true
image:
repository: mcr.microsoft.com/oss/kubernetes/kube-state-metrics
tag: v2.2.0
pullPolicy: IfNotPresent
imagePullSecrets: []
# - name: "image-pull-secret"
# If set to true, this will deploy kube-state-metrics as a StatefulSet and the data
# will be automatically sharded across <.Values.replicas> pods using the built-in
# autodiscovery feature: https://github.com/kubernetes/kube-state-metrics#automated-sharding
# This is an experimental feature and there are no stability guarantees.
autosharding:
enabled: false
replicas: 1
# List of additional cli arguments to configure kube-state-metrics
# for example: --enable-gzip-encoding, --log-file, etc.
# all the possible args can be found here: https://github.com/kubernetes/kube-state-metrics/blob/master/docs/cli-arguments.md
extraArgs: []
service:
port: 8080
# Default to clusterIP for backward compatibility
type: ClusterIP
nodePort: 0
loadBalancerIP: ""
annotations: {}
customLabels: {}
hostNetwork: false
rbac:
# If true, create & use RBAC resources
create: true
# Set to a role name to use an existing role - skipping role creation - but still creating the serviceaccount and rolebinding to it, with the role name set here.
# useExistingRole: your-existing-role
# If set to false - run without cluster-admin privileges - ONLY works if namespaces is also set (if useExistingRole is set, this name is used as the ClusterRole or Role to bind to)
useClusterRole: true
serviceAccount:
# Specifies whether a ServiceAccount should be created, require rbac true
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
# Reference to one or more secrets to be used when pulling images
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# ServiceAccount annotations.
# Use case: AWS EKS IAM roles for service accounts
# ref: https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html
annotations: {}
prometheus:
monitor:
enabled: false
additionalLabels: {}
namespace: ""
honorLabels: false
metricRelabelings: []
relabelings: []
## Specify if a Pod Security Policy for kube-state-metrics must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
##
podSecurityPolicy:
enabled: false
annotations: {}
## Specify pod annotations
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
##
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
additionalVolumes: []
securityContext:
enabled: true
runAsGroup: 65534
runAsUser: 65534
fsGroup: 65534
## Specify security settings for a Container
## Allows overrides and additional options compared to (Pod) securityContext
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
containerSecurityContext: {}
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
## Affinity settings for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
affinity: {}
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Annotations to be added to the pod
podAnnotations: {}
## Assign a PriorityClassName to pods if set
# priorityClassName: ""
# Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
podDisruptionBudget: {}
# Comma-separated list of metrics to be exposed.
# This list comprises exact metric names and/or regex patterns.
# The allowlist and denylist are mutually exclusive.
metricAllowlist: []
# Comma-separated list of metrics not to be enabled.
# This list comprises exact metric names and/or regex patterns.
# The allowlist and denylist are mutually exclusive.
metricDenylist: []
# Comma-separated list of additional Kubernetes label keys that will be used in the resource's
# labels metric. By default the metric contains only name and namespace labels.
# To include additional labels, provide a list of resource names in their plural form and Kubernetes
# label keys you would like to allow for them (Example: '=namespaces=[k8s-label-1,k8s-label-n,...],pods=[app],...)'.
# A single '*' can be provided per resource instead to allow any labels, but that has
# severe performance implications (Example: '=pods=[*]').
metricLabelsAllowlist: []
# - namespaces=[k8s-label-1,k8s-label-n]
# Comma-separated list of Kubernetes annotation keys that will be used in the resource's
# labels metric. By default the metric contains only name and namespace labels.
# To include additional annotations provide a list of resource names in their plural form and Kubernetes
# annotation keys you would like to allow for them (Example: '=namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...)'.
# A single '*' can be provided per resource instead to allow any annotations, but that has
# severe performance implications (Example: '=pods=[*]').
metricAnnotationsAllowList: []
# - pods=[k8s-annotation-1,k8s-annotation-n]
# Available collectors for kube-state-metrics.
# By default, all available resources are enabled, comment out to disable.
collectors:
- certificatesigningrequests
- configmaps
- cronjobs
- daemonsets
- deployments
- endpoints
- horizontalpodautoscalers
- ingresses
- jobs
- limitranges
- mutatingwebhookconfigurations
- namespaces
- networkpolicies
- nodes
- persistentvolumeclaims
- persistentvolumes
- poddisruptionbudgets
- pods
- replicasets
- replicationcontrollers
- resourcequotas
- secrets
- services
- statefulsets
- storageclasses
- validatingwebhookconfigurations
- volumeattachments
# - verticalpodautoscalers # not a default resource, see also: https://github.com/kubernetes/kube-state-metrics#enabling-verticalpodautoscalers
# Enabling kubeconfig will pass the --kubeconfig argument to the container
kubeconfig:
enabled: false
# base64 encoded kube-config file
secret:
# Comma-separated list of namespaces to be enabled for collecting resources. By default all namespaces are collected.
namespaces: ""
## Override the deployment namespace
##
namespaceOverride: ""
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 64Mi
# requests:
# cpu: 10m
# memory: 32Mi
## Provide a k8s version to define apiGroups for podSecurityPolicy Cluster Role.
## For example: kubeTargetVersionOverride: 1.14.9
##
kubeTargetVersionOverride: ""
# Enable self metrics configuration for service and Service Monitor
# Default values for telemetry configuration can be overridden
selfMonitor:
enabled: false
# telemetryHost: 0.0.0.0
# telemetryPort: 8081


@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj


@ -0,0 +1,18 @@
apiVersion: v2
appVersion: 1.2.2
description: A Helm chart for prometheus node-exporter
name: prometheus-node-exporter
version: 2.2.0
type: application
home: https://github.com/prometheus/node_exporter/
sources:
- https://github.com/prometheus/node_exporter/
keywords:
- node-exporter
- prometheus
- exporter
maintainers:
- email: gianrubio@gmail.com
  name: gianrubio
- name: vsliouniaev
- name: bismarck


@ -0,0 +1,50 @@
# Prometheus Node Exporter
Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.
This chart bootstraps a prometheus [Node Exporter](http://github.com/prometheus/node_exporter) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Get Repo Info
```console
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
```
_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._
## Install Chart
```console
helm install [RELEASE_NAME] prometheus-community/prometheus-node-exporter
```
_See [configuration](#configuration) below._
_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
## Uninstall Chart
```console
helm uninstall [RELEASE_NAME]
```
This removes all the Kubernetes components associated with the chart and deletes the release.
_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
## Upgrading Chart
```console
helm upgrade [RELEASE_NAME] [CHART] --install
```
_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
## Configuring
See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run these configuration commands:
```console
helm show values prometheus-community/prometheus-node-exporter
```
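For example, an illustrative override of a documented value at install time:

```console
helm install node-exporter prometheus-community/prometheus-node-exporter \
  --set prometheus.monitor.enabled=true
```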


@ -0,0 +1,3 @@
service:
  targetPort: 9102
  port: 9102


@ -0,0 +1,15 @@
1. Get the application URL by running these commands:
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ template "prometheus-node-exporter.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus-node-exporter.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ template "prometheus-node-exporter.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w {{ template "prometheus-node-exporter.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ template "prometheus-node-exporter.namespace" . }} {{ template "prometheus-node-exporter.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ template "prometheus-node-exporter.namespace" . }} -l "app={{ template "prometheus-node-exporter.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9100 to use your application"
kubectl port-forward --namespace {{ template "prometheus-node-exporter.namespace" . }} $POD_NAME 9100
{{- end }}


@ -0,0 +1,66 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "prometheus-node-exporter.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "prometheus-node-exporter.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/* Generate basic labels */}}
{{- define "prometheus-node-exporter.labels" }}
app: {{ template "prometheus-node-exporter.name" . }}
heritage: {{.Release.Service }}
release: {{.Release.Name }}
chart: {{ template "prometheus-node-exporter.chart" . }}
{{- if .Values.podLabels}}
{{ toYaml .Values.podLabels }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "prometheus-node-exporter.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "prometheus-node-exporter.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "prometheus-node-exporter.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Allow the release namespace to be overridden for multi-namespace deployments in combined charts
*/}}
{{- define "prometheus-node-exporter.namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,188 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "prometheus-node-exporter.fullname" . }}
namespace: {{ template "prometheus-node-exporter.namespace" . }}
labels: {{ include "prometheus-node-exporter.labels" . | indent 4 }}
spec:
selector:
matchLabels:
app: {{ template "prometheus-node-exporter.name" . }}
release: {{ .Release.Name }}
{{- if .Values.updateStrategy }}
updateStrategy:
{{ toYaml .Values.updateStrategy | indent 4 }}
{{- end }}
template:
metadata:
labels: {{ include "prometheus-node-exporter.labels" . | indent 8 }}
{{- if .Values.podAnnotations }}
annotations:
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
spec:
automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }}
serviceAccountName: {{ template "prometheus-node-exporter.serviceAccountName" . }}
{{- if .Values.securityContext }}
securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- if .Values.extraInitContainers }}
initContainers:
{{ toYaml .Values.extraInitContainers | nindent 6 }}
{{- end }}
containers:
- name: node-exporter
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- --path.procfs=/host/proc
- --path.sysfs=/host/sys
{{- if .Values.hostRootFsMount }}
- --path.rootfs=/host/root
{{- end }}
- --web.listen-address=$(HOST_IP):{{ .Values.service.port }}
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 12 }}
{{- end }}
{{- with .Values.containerSecurityContext }}
securityContext: {{ toYaml . | nindent 12 }}
{{- end }}
env:
- name: HOST_IP
{{- if .Values.service.listenOnAllInterfaces }}
value: 0.0.0.0
{{- else }}
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
{{- end }}
ports:
- name: {{ .Values.service.portName }}
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: {{ .Values.service.port }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.service.port }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumeMounts:
- name: proc
mountPath: /host/proc
readOnly: true
- name: sys
mountPath: /host/sys
readOnly: true
{{- if .Values.hostRootFsMount }}
- name: root
mountPath: /host/root
mountPropagation: HostToContainer
readOnly: true
{{- end }}
{{- if .Values.extraHostVolumeMounts }}
{{- range $_, $mount := .Values.extraHostVolumeMounts }}
- name: {{ $mount.name }}
mountPath: {{ $mount.mountPath }}
readOnly: {{ $mount.readOnly }}
{{- if $mount.mountPropagation }}
mountPropagation: {{ $mount.mountPropagation }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.sidecarVolumeMount }}
{{- range $_, $mount := .Values.sidecarVolumeMount }}
- name: {{ $mount.name }}
mountPath: {{ $mount.mountPath }}
readOnly: true
{{- end }}
{{- end }}
{{- if .Values.configmaps }}
{{- range $_, $mount := .Values.configmaps }}
- name: {{ $mount.name }}
mountPath: {{ $mount.mountPath }}
{{- end }}
{{- if .Values.secrets }}
{{- range $_, $mount := .Values.secrets }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.sidecars }}
{{ toYaml .Values.sidecars | indent 8 }}
{{- if .Values.sidecarVolumeMount }}
volumeMounts:
{{- range $_, $mount := .Values.sidecarVolumeMount }}
- name: {{ $mount.name }}
mountPath: {{ $mount.mountPath }}
readOnly: {{ $mount.readOnly }}
{{- end }}
{{- end }}
{{- end }}
hostNetwork: {{ .Values.hostNetwork }}
hostPID: {{ .Values.hostPID }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
{{- with .Values.dnsConfig }}
dnsConfig:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: proc
hostPath:
path: /proc
- name: sys
hostPath:
path: /sys
{{- if .Values.hostRootFsMount }}
- name: root
hostPath:
path: /
{{- end }}
{{- if .Values.extraHostVolumeMounts }}
{{- range $_, $mount := .Values.extraHostVolumeMounts }}
- name: {{ $mount.name }}
hostPath:
path: {{ $mount.hostPath }}
{{- end }}
{{- end }}
{{- if .Values.sidecarVolumeMount }}
{{- range $_, $mount := .Values.sidecarVolumeMount }}
- name: {{ $mount.name }}
emptyDir:
medium: Memory
{{- end }}
{{- end }}
{{- if .Values.configmaps }}
{{- range $_, $mount := .Values.configmaps }}
- name: {{ $mount.name }}
configMap:
name: {{ $mount.name }}
{{- end }}
{{- end }}
{{- if .Values.secrets }}
{{- range $_, $mount := .Values.secrets }}
- name: {{ $mount.name }}
secret:
secretName: {{ $mount.name }}
{{- end }}
{{- end }}


@ -0,0 +1,18 @@
{{- if .Values.endpoints }}
apiVersion: v1
kind: Endpoints
metadata:
name: {{ template "prometheus-node-exporter.fullname" . }}
namespace: {{ template "prometheus-node-exporter.namespace" . }}
labels:
{{ include "prometheus-node-exporter.labels" . | indent 4 }}
subsets:
- addresses:
{{- range .Values.endpoints }}
- ip: {{ . }}
{{- end }}
ports:
- name: {{ .Values.service.portName }}
port: 9100
protocol: TCP
{{- end }}


@ -0,0 +1,35 @@
{{- if .Values.prometheus.monitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "prometheus-node-exporter.fullname" . }}
namespace: {{ template "prometheus-node-exporter.namespace" . }}
labels: {{ include "prometheus-node-exporter.labels" . | indent 4 }}
{{- if .Values.prometheus.monitor.additionalLabels }}
{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
{{- end }}
spec:
selector:
matchLabels:
app: {{ template "prometheus-node-exporter.name" . }}
release: {{ .Release.Name }}
endpoints:
- port: {{ .Values.service.portName }}
scheme: {{ $.Values.prometheus.monitor.scheme }}
{{- if $.Values.prometheus.monitor.bearerTokenFile }}
bearerTokenFile: {{ $.Values.prometheus.monitor.bearerTokenFile }}
{{- end }}
{{- if $.Values.prometheus.monitor.tlsConfig }}
tlsConfig: {{ toYaml $.Values.prometheus.monitor.tlsConfig | nindent 8 }}
{{- end }}
{{- if .Values.prometheus.monitor.proxyUrl }}
proxyUrl: {{ .Values.prometheus.monitor.proxyUrl}}
{{- end }}
{{- if .Values.prometheus.monitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.prometheus.monitor.scrapeTimeout }}
{{- end }}
{{- if .Values.prometheus.monitor.relabelings }}
relabelings:
{{ toYaml .Values.prometheus.monitor.relabelings | indent 6 }}
{{- end }}
{{- end }}


@ -0,0 +1,15 @@
{{- if .Values.rbac.create }}
{{- if .Values.rbac.pspEnabled }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: psp-{{ template "prometheus-node-exporter.fullname" . }}
labels: {{ include "prometheus-node-exporter.labels" . | indent 4 }}
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames:
- {{ template "prometheus-node-exporter.fullname" . }}
{{- end }}
{{- end }}


@ -0,0 +1,17 @@
{{- if .Values.rbac.create }}
{{- if .Values.rbac.pspEnabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: psp-{{ template "prometheus-node-exporter.fullname" . }}
labels: {{ include "prometheus-node-exporter.labels" . | indent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: psp-{{ template "prometheus-node-exporter.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "prometheus-node-exporter.fullname" . }}
namespace: {{ template "prometheus-node-exporter.namespace" . }}
{{- end }}
{{- end }}


@ -0,0 +1,56 @@
{{- if .Values.rbac.create }}
{{- if .Values.rbac.pspEnabled }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "prometheus-node-exporter.fullname" . }}
namespace: {{ template "prometheus-node-exporter.namespace" . }}
labels: {{ include "prometheus-node-exporter.labels" . | indent 4 }}
{{- if .Values.rbac.pspAnnotations }}
annotations:
{{ toYaml .Values.rbac.pspAnnotations | indent 4 }}
{{- end}}
spec:
privileged: false
# Required to prevent escalations to root.
# allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
#requiredDropCapabilities:
# - ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
- 'hostPath'
hostNetwork: true
hostIPC: false
hostPID: true
hostPorts:
- min: 0
max: 65535
runAsUser:
# Permits the container to run with root privileges as well.
rule: 'RunAsAny'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 0
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 0
max: 65535
readOnlyRootFilesystem: false
{{- end }}
{{- end }}


@ -0,0 +1,23 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "prometheus-node-exporter.fullname" . }}
namespace: {{ template "prometheus-node-exporter.namespace" . }}
{{- if .Values.service.annotations }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
labels: {{ include "prometheus-node-exporter.labels" . | indent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
{{- if ( and (eq .Values.service.type "NodePort" ) (not (empty .Values.service.nodePort)) ) }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
name: {{ .Values.service.portName }}
selector:
app: {{ template "prometheus-node-exporter.name" . }}
release: {{ .Release.Name }}


@ -0,0 +1,18 @@
{{- if .Values.rbac.create -}}
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "prometheus-node-exporter.serviceAccountName" . }}
namespace: {{ template "prometheus-node-exporter.namespace" . }}
labels:
app: {{ template "prometheus-node-exporter.name" . }}
chart: {{ template "prometheus-node-exporter.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
annotations:
{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
imagePullSecrets:
{{ toYaml .Values.serviceAccount.imagePullSecrets | indent 2 }}
{{- end -}}
{{- end -}}


@ -0,0 +1,184 @@
# Default values for prometheus-node-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: mcr.microsoft.com/oss/prometheus/node-exporter
tag: v1.2.2
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 9100
targetPort: 9100
nodePort:
portName: metrics
listenOnAllInterfaces: true
annotations:
prometheus.io/scrape: "true"
prometheus:
monitor:
enabled: false
additionalLabels: {}
namespace: ""
scheme: http
bearerTokenFile:
tlsConfig: {}
## proxyUrl: URL of a proxy that should be used for scraping.
##
proxyUrl: ""
relabelings: []
scrapeTimeout: 10s
## Customize the updateStrategy if set
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 200m
# memory: 50Mi
# requests:
# cpu: 100m
# memory: 30Mi
serviceAccount:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
annotations: {}
imagePullSecrets: []
automountServiceAccountToken: false
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
containerSecurityContext: {}
# capabilities:
# add:
# - SYS_TIME
rbac:
## If true, create & use RBAC resources
##
create: true
## If true, create & use Pod Security Policy resources
## https://kubernetes.io/docs/concepts/policy/pod-security-policy/
pspEnabled: true
pspAnnotations: {}
# for deployments that have node_exporter deployed outside of the cluster, list
# their addresses here
endpoints: []
# Expose the service to the host network
hostNetwork: true
# Share the host process ID namespace
hostPID: true
## If true, node-exporter pods mounts host / at /host/root
##
hostRootFsMount: true
## Assign a group of affinity scheduling rules
##
affinity: {}
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchFields:
# - key: metadata.name
# operator: In
# values:
# - target-host-name
# Annotations to be added to node exporter pods
podAnnotations:
# Fix for very slow GKE cluster upgrades
cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
# Extra labels to be added to node exporter pods
podLabels: {}
# Custom DNS configuration to be added to prometheus-node-exporter pods
dnsConfig: {}
# nameservers:
# - 1.2.3.4
# searches:
# - ns1.svc.cluster-domain.example
# - my.dns.search.suffix
# options:
# - name: ndots
# value: "2"
# - name: edns0
## Assign a nodeSelector if operating a hybrid cluster
##
nodeSelector: {}
# beta.kubernetes.io/arch: amd64
# beta.kubernetes.io/os: linux
tolerations:
- effect: NoSchedule
operator: Exists
## Assign a PriorityClassName to pods if set
# priorityClassName: ""
## Additional container arguments
##
extraArgs: []
# - --collector.diskstats.ignored-devices=^(ram|loop|fd|(h|s|v)d[a-z]|nvme\\d+n\\d+p)\\d+$
# - --collector.textfile.directory=/run/prometheus
## Additional mounts from the host
##
extraHostVolumeMounts: []
# - name: <mountName>
# hostPath: <hostPath>
# mountPath: <mountPath>
# readOnly: true|false
# mountPropagation: None|HostToContainer|Bidirectional
## Additional configmaps to be mounted.
##
configmaps: []
# - name: <configMapName>
# mountPath: <mountPath>
secrets: []
# - name: <secretName>
# mountPath: <mountPatch>
## Override the deployment namespace
##
namespaceOverride: ""
## Additional containers for exporting metrics to a text file
##
sidecars: []
## - name: nvidia-dcgm-exporter
## image: nvidia/dcgm-exporter:1.4.3
## Volume for sidecar containers
##
sidecarVolumeMount: []
## - name: collector-textfiles
## mountPath: /run/prometheus
## readOnly: false
## Additional InitContainers to initialize the pod
##
extraInitContainers: []

17
otelcollector/opentelemetry-collector-builder/.gitignore vendored Normal file

@ -0,0 +1,17 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
*.tgz
otelcollector
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/