Operator changes to support CRD (#554)

This commit is contained in:
rashmichandrashekar 2023-10-18 16:10:28 -07:00 committed by GitHub
Parent c9d49d56ce
Commit 07cb97561a
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
784 changed files: 330507 additions and 2474 deletions

0
.gitmodules vendored Normal file
View File

View File

@@ -2,6 +2,7 @@ trigger:
branches:
include:
- main
pr:
autoCancel: true
branches:
@@ -29,6 +30,8 @@ jobs:
pool:
name: Azure-Pipelines-CI-Test-EO
steps:
- checkout: self
submodules: true
- bash: |
if [ $(IS_PR) == "True" ]; then
BRANCH_NAME=$(System.PullRequest.SourceBranch)
@@ -46,6 +49,10 @@ jobs:
# Truncating to 128 characters as it is required by docker
LINUX_IMAGE_TAG=$(echo "${LINUX_IMAGE_TAG}" | cut -c1-128)
# Truncating this to 124 to add the cfg suffix
LINUX_IMAGE_TAG_PREFIX=$(echo "${LINUX_IMAGE_TAG}" | cut -c1-124)
LINUX_CONFIG_READER_IMAGE_TAG=$LINUX_IMAGE_TAG_PREFIX-cfg
# Truncating this to 113 to add the ref app suffixes
LINUX_REF_APP_IMAGE_TAG_PREFIX=$(echo "${LINUX_IMAGE_TAG}" | cut -c1-113)
LINUX_REF_APP_GOLANG_IMAGE_TAG=$LINUX_REF_APP_IMAGE_TAG_PREFIX-ref-app-golang
@@ -55,6 +62,11 @@ jobs:
WINDOWS_IMAGE_TAG_PREFIX=$(echo "${LINUX_IMAGE_TAG}" | cut -c1-115)
WINDOWS_IMAGE_TAG=$WINDOWS_IMAGE_TAG_PREFIX-win
# Truncating this to 112 characters to add the targetallocator suffix ("-targetallocator" is 16 chars; 112+16=128)
TARGET_ALLOCATOR_IMAGE_TAG_PREFIX=$(echo "${LINUX_IMAGE_TAG}" | cut -c1-112)
TARGET_ALLOCATOR_IMAGE_TAG=$TARGET_ALLOCATOR_IMAGE_TAG_PREFIX-targetallocator
# Truncating this to 107 to add the windows ref app suffixes
WIN_REF_APP_IMAGE_TAG_PREFIX=$(echo "${LINUX_IMAGE_TAG}" | cut -c1-107)
WIN_REF_APP_GOLANG_IMAGE_TAG=$WIN_REF_APP_IMAGE_TAG_PREFIX-win-ref-app-golang
@@ -65,6 +77,8 @@ jobs:
WINDOWS_2022_BASE_IMAGE_VERSION=ltsc2022
LINUX_FULL_IMAGE_NAME=$ACR_REGISTRY$ACR_REPOSITORY:$LINUX_IMAGE_TAG
TARGET_ALLOCATOR_FULL_IMAGE_NAME=$ACR_REGISTRY$ACR_REPOSITORY:$TARGET_ALLOCATOR_IMAGE_TAG
LINUX_CONFIG_READER_FULL_IMAGE_NAME=$ACR_REGISTRY$ACR_REPOSITORY:$LINUX_CONFIG_READER_IMAGE_TAG
WINDOWS_FULL_IMAGE_NAME=$ACR_REGISTRY$ACR_REPOSITORY:$WINDOWS_IMAGE_TAG
HELM_FULL_IMAGE_NAME=$ACR_REGISTRY$ACR_REPOSITORY_HELM/$HELM_CHART_NAME:$SEMVER
ARC_HELM_FULL_IMAGE_NAME=$ACR_REGISTRY$ACR_REPOSITORY_HELM/$ARC_HELM_CHART_NAME:$SEMVER
@@ -76,6 +90,9 @@ jobs:
echo "##vso[build.updatebuildnumber]$SEMVER"
echo "##vso[task.setvariable variable=SEMVER;isOutput=true]$SEMVER"
echo "##vso[task.setvariable variable=LINUX_FULL_IMAGE_NAME;isOutput=true]$LINUX_FULL_IMAGE_NAME"
echo "##vso[task.setvariable variable=TARGET_ALLOCATOR_IMAGE_TAG;isOutput=true]$TARGET_ALLOCATOR_IMAGE_TAG"
echo "##vso[task.setvariable variable=TARGET_ALLOCATOR_FULL_IMAGE_NAME;isOutput=true]$TARGET_ALLOCATOR_FULL_IMAGE_NAME"
echo "##vso[task.setvariable variable=LINUX_CONFIG_READER_FULL_IMAGE_NAME;isOutput=true]$LINUX_CONFIG_READER_FULL_IMAGE_NAME"
echo "##vso[task.setvariable variable=WINDOWS_FULL_IMAGE_NAME;isOutput=true]$WINDOWS_FULL_IMAGE_NAME"
echo "##vso[task.setvariable variable=LINUX_REF_APP_GOLANG_FULL_IMAGE_NAME;isOutput=true]$LINUX_REF_APP_GOLANG_FULL_IMAGE_NAME"
echo "##vso[task.setvariable variable=LINUX_REF_APP_PYTHON_FULL_IMAGE_NAME;isOutput=true]$LINUX_REF_APP_PYTHON_FULL_IMAGE_NAME"
@@ -227,6 +244,8 @@ jobs:
# This is necessary because of: https://github.com/moby/moby/issues/37965
DOCKER_BUILDKIT: 1
steps:
- checkout: self
submodules: true
- task: CodeQL3000Init@0
displayName: 'SDL: init codeql'
@@ -242,14 +261,14 @@ jobs:
make
condition: or(eq(variables.IS_PR, true), eq(variables.IS_MAIN_BRANCH, true))
workingDirectory: $(Build.SourcesDirectory)/otelcollector/opentelemetry-collector-builder/
displayName: "SDL: build otelcollector, promconfigvalidator, and fluent-bit plugin for scanning"
displayName: "SDL: build otelcollector, promconfigvalidator, targetallocator, and fluent-bit plugin for scanning"
- task: BinSkim@4
displayName: 'SDL: run binskim'
condition: or(eq(variables.IS_PR, true), eq(variables.IS_MAIN_BRANCH, true))
inputs:
InputType: 'CommandLine'
arguments: 'analyze --rich-return-code $(Build.SourcesDirectory)/otelcollector/opentelemetry-collector-builder/otelcollector $(Build.SourcesDirectory)/otelcollector/prom-config-validator-builder/promconfigvalidator $(Build.SourcesDirectory)/otelcollector/fluent-bit/src/out_appinsights.so'
arguments: 'analyze --rich-return-code $(Build.SourcesDirectory)/otelcollector/opentelemetry-collector-builder/otelcollector $(Build.SourcesDirectory)/otelcollector/prom-config-validator-builder/promconfigvalidator $(Build.SourcesDirectory)/otelcollector/otel-allocator/targetallocator $(Build.SourcesDirectory)/otelcollector/fluent-bit/src/out_appinsights.so'
- task: Gosec@1
displayName: 'SDL: run gosec'
@@ -420,6 +439,176 @@ jobs:
GdnBreakGdnToolSemmle: true
GdnBreakGdnToolSemmleSeverity: 'Warning'
- job: TargetAllocator
displayName: Build target allocator image
pool:
name: Azure-Pipelines-CI-Test-EO
dependsOn: common
variables:
TARGET_ALLOCATOR_FULL_IMAGE_NAME: $[ dependencies.common.outputs['setup.TARGET_ALLOCATOR_FULL_IMAGE_NAME'] ]
# This is necessary because of: https://github.com/moby/moby/issues/37965
DOCKER_BUILDKIT: 1
steps:
- checkout: self
persistCredentials: true
- bash: |
mkdir -p $(Build.ArtifactStagingDirectory)/targetallocator
# Necessary due to https://stackoverflow.com/questions/60080264/docker-cannot-build-multi-platform-images-with-docker-buildx
sudo apt-get update && sudo apt-get -y install qemu binfmt-support qemu-user-static
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker buildx create --name dockerbuilder
docker buildx use dockerbuilder
docker login containerinsightsprod.azurecr.io -u $(ACR_USERNAME) -p $(ACR_PASSWORD)
if [ "$(Build.Reason)" != "PullRequest" ]; then
docker buildx build . --platform=linux/amd64,linux/arm64 --file Dockerfile -t $(TARGET_ALLOCATOR_FULL_IMAGE_NAME) --metadata-file $(Build.ArtifactStagingDirectory)/targetallocator/metadata.json --push
docker pull $(TARGET_ALLOCATOR_FULL_IMAGE_NAME)
else
# Build multiarch image to make sure there are no issues
docker buildx build . --platform=linux/amd64,linux/arm64 --file Dockerfile -t $(TARGET_ALLOCATOR_FULL_IMAGE_NAME) --metadata-file $(Build.ArtifactStagingDirectory)/targetallocator/metadata.json
# Load in amd64 image to run vulnerability scan
docker buildx build . --file Dockerfile -t $(TARGET_ALLOCATOR_FULL_IMAGE_NAME) --metadata-file $(Build.ArtifactStagingDirectory)/targetallocator/metadata.json
fi
MEDIA_TYPE=$(docker manifest inspect -v $(TARGET_ALLOCATOR_FULL_IMAGE_NAME) | jq '.Descriptor.mediaType')
DIGEST=$(docker manifest inspect -v $(TARGET_ALLOCATOR_FULL_IMAGE_NAME) | jq '.Descriptor.digest')
SIZE=$(docker manifest inspect -v $(TARGET_ALLOCATOR_FULL_IMAGE_NAME) | jq '.Descriptor.size')
cat <<EOF >>$(Build.ArtifactStagingDirectory)/targetallocator/payload.json
{"targetArtifact":{"mediaType":$MEDIA_TYPE,"digest":$DIGEST,"size":$SIZE}}
EOF
workingDirectory: $(Build.SourcesDirectory)/otelcollector/otel-allocator
displayName: "Build: build and push target allocator image to dev ACR"
- bash: |
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
trivy image --ignore-unfixed --no-progress --severity HIGH,CRITICAL,MEDIUM --exit-code 1 $(TARGET_ALLOCATOR_FULL_IMAGE_NAME)
workingDirectory: $(Build.SourcesDirectory)
displayName: "Build: run trivy scan"
condition: eq(variables.IS_PR, false)
- task: EsrpCodeSigning@3
displayName: "ESRP CodeSigning for TargetAllocator"
inputs:
ConnectedServiceName: "ESRPServiceConnectionForPrometheusImages"
FolderPath: $(Build.ArtifactStagingDirectory)/targetallocator/
Pattern: "*.json"
signConfigType: inlineSignParams
inlineOperation: |
[
{
"keyCode": "CP-469451",
"operationSetCode": "NotaryCoseSign",
"parameters": [
{
"parameterName": "CoseFlags",
"parameterValue": "chainunprotected"
}
],
"toolName": "sign",
"toolVersion": "1.0"
}
]
- bash: |
set -euxo pipefail
curl -LO "https://github.com/oras-project/oras/releases/download/v1.0.0/oras_1.0.0_linux_amd64.tar.gz"
mkdir -p oras-install/
tar -zxf oras_1.0.0_*.tar.gz -C oras-install/
sudo mv oras-install/oras /usr/local/bin/
rm -rf oras_1.0.0_*.tar.gz oras-install/
oras attach $(TARGET_ALLOCATOR_FULL_IMAGE_NAME) \
--artifact-type 'application/vnd.cncf.notary.signature' \
./payload.json:application/cose \
-a "io.cncf.notary.x509chain.thumbprint#S256=[\"79E6A702361E1F60DAA84AEEC4CBF6F6420DE6BA\"]"
workingDirectory: $(Build.ArtifactStagingDirectory)/targetallocator/
displayName: "ORAS Push Artifacts in $(Build.ArtifactStagingDirectory)/targetallocator/"
condition: eq(variables.IS_MAIN_BRANCH, true)
- job: Linux_ConfigReader
displayName: Build linux image for config reader
pool:
name: Azure-Pipelines-CI-Test-EO
dependsOn: common
variables:
LINUX_CONFIG_READER_FULL_IMAGE_NAME: $[ dependencies.common.outputs['setup.LINUX_CONFIG_READER_FULL_IMAGE_NAME'] ]
# This is necessary because of: https://github.com/moby/moby/issues/37965
DOCKER_BUILDKIT: 1
steps:
- bash: |
mkdir -p $(Build.ArtifactStagingDirectory)/linuxcfgreader
# Necessary due to https://stackoverflow.com/questions/60080264/docker-cannot-build-multi-platform-images-with-docker-buildx
sudo apt-get update && sudo apt-get -y install qemu binfmt-support qemu-user-static
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
docker buildx create --name dockerbuilder
docker buildx use dockerbuilder
docker login containerinsightsprod.azurecr.io -u $(ACR_USERNAME) -p $(ACR_PASSWORD)
if [ "$(Build.Reason)" != "PullRequest" ]; then
docker buildx build . --platform=linux/amd64,linux/arm64 --file ./build/linux/configuration-reader/Dockerfile -t $(LINUX_CONFIG_READER_FULL_IMAGE_NAME) --metadata-file $(Build.ArtifactStagingDirectory)/linuxcfgreader/metadata.json --push
docker pull $(LINUX_CONFIG_READER_FULL_IMAGE_NAME)
else
# Build multiarch image to make sure there are no issues
docker buildx build . --platform=linux/amd64,linux/arm64 --file ./build/linux/configuration-reader/Dockerfile -t $(LINUX_CONFIG_READER_FULL_IMAGE_NAME) --metadata-file $(Build.ArtifactStagingDirectory)/linuxcfgreader/metadata.json
# Load in amd64 image to run vulnerability scan
docker buildx build . --file ./build/linux/configuration-reader/Dockerfile -t $(LINUX_CONFIG_READER_FULL_IMAGE_NAME) --metadata-file $(Build.ArtifactStagingDirectory)/linuxcfgreader/metadata.json
fi
MEDIA_TYPE=$(docker manifest inspect -v $(LINUX_CONFIG_READER_FULL_IMAGE_NAME) | jq '.Descriptor.mediaType')
DIGEST=$(docker manifest inspect -v $(LINUX_CONFIG_READER_FULL_IMAGE_NAME) | jq '.Descriptor.digest')
SIZE=$(docker manifest inspect -v $(LINUX_CONFIG_READER_FULL_IMAGE_NAME) | jq '.Descriptor.size')
cat <<EOF >>$(Build.ArtifactStagingDirectory)/linuxcfgreader/payload.json
{"targetArtifact":{"mediaType":$MEDIA_TYPE,"digest":$DIGEST,"size":$SIZE}}
EOF
workingDirectory: $(Build.SourcesDirectory)/otelcollector/
displayName: "Build: build and push configuration reader image to dev ACR"
- bash: |
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
trivy image --ignore-unfixed --no-progress --severity HIGH,CRITICAL,MEDIUM --exit-code 1 $(LINUX_CONFIG_READER_FULL_IMAGE_NAME)
trivy image --ignore-unfixed --no-progress --severity HIGH,CRITICAL,MEDIUM --exit-code 1 $(KUBE_STATE_METRICS_IMAGE)
workingDirectory: $(Build.SourcesDirectory)
displayName: "Build: run trivy scan"
condition: eq(variables.IS_PR, false)
- task: EsrpCodeSigning@3
displayName: "ESRP CodeSigning for Config Reader"
inputs:
ConnectedServiceName: "ESRPServiceConnectionForPrometheusImages"
FolderPath: $(Build.ArtifactStagingDirectory)/linuxcfgreader/
Pattern: "*.json"
signConfigType: inlineSignParams
inlineOperation: |
[
{
"keyCode": "CP-469451",
"operationSetCode": "NotaryCoseSign",
"parameters": [
{
"parameterName": "CoseFlags",
"parameterValue": "chainunprotected"
}
],
"toolName": "sign",
"toolVersion": "1.0"
}
]
- bash: |
set -euxo pipefail
curl -LO "https://github.com/oras-project/oras/releases/download/v1.0.0/oras_1.0.0_linux_amd64.tar.gz"
mkdir -p oras-install/
tar -zxf oras_1.0.0_*.tar.gz -C oras-install/
sudo mv oras-install/oras /usr/local/bin/
rm -rf oras_1.0.0_*.tar.gz oras-install/
oras attach $(LINUX_CONFIG_READER_FULL_IMAGE_NAME) \
--artifact-type 'application/vnd.cncf.notary.signature' \
./payload.json:application/cose \
-a "io.cncf.notary.x509chain.thumbprint#S256=[\"79E6A702361E1F60DAA84AEEC4CBF6F6420DE6BA\"]"
workingDirectory: $(Build.ArtifactStagingDirectory)/linuxcfgreader/
displayName: "ORAS Push Artifacts in $(Build.ArtifactStagingDirectory)/linuxcfgreader/"
condition: eq(variables.IS_MAIN_BRANCH, true)
- job: Windows2019
displayName: "Build windows 2019 image"
pool:
@@ -738,7 +927,7 @@ jobs:
else
echo "-e error failed to login to az with managed identity credentials"
exit 1
fi
fi
ACCESS_TOKEN=$(az account get-access-token --resource $RESOURCE_AUDIENCE --query accessToken -o json)
if [ $? -eq 0 ]; then
@@ -746,7 +935,7 @@ jobs:
else
echo "-e error get access token from resource:$RESOURCE_AUDIENCE failed."
exit 1
fi
fi
ACCESS_TOKEN=$(echo $ACCESS_TOKEN | tr -d '"' | tr -d '"\r\n')
ARC_API_URL="https://eastus2euap.dp.kubernetesconfiguration.azure.com"
@@ -767,7 +956,7 @@ jobs:
inputs:
azureSubscription: 'ContainerInsights_Build_Subscription(9b96ebbd-c57a-42d1-bbe9-b69296e4c7fb)'
scriptType: 'bash'
scriptLocation: 'inlineScript'
scriptLocation: 'inlineScript'
inlineScript: |
az config set extension.use_dynamic_install=yes_without_prompt
az k8s-extension update --name azuremonitor-metrics --resource-group ci-dev-arc-wcus --cluster-name ci-dev-arc-wcus --cluster-type connectedClusters --version $HELM_SEMVER --release-train pipeline
@@ -788,6 +977,7 @@ jobs:
HELM_FULL_IMAGE_NAME: $[ dependencies.common.outputs['setup.HELM_FULL_IMAGE_NAME'] ]
steps:
- checkout: self
submodules: true
persistCredentials: true
- bash: |

View File

@@ -0,0 +1,64 @@
{
"$schema": "http://schema.express.azure.com/schemas/2015-01-01-alpha/RolloutParameters.json",
"contentVersion": "1.0.0.0",
"wait": [
{
"name": "waitSdpBakeTime",
"properties": {
"duration": "PT24H"
}
}
],
"shellExtensions": [
{
"name": "PushAgentToACR",
"type": "ShellExtensionType",
"properties": {
"maxexecutiontime": "PT1H"
},
"package": {
"reference": {
"path": "artifacts.tar.gz"
}
},
"launch": {
"command": [
"/bin/bash",
"pushAgentToAcr.sh"
],
"environmentVariables": [
{
"name": "ACR_REGISTRY",
"value": "__ACR_REGISTRY__"
},
{
"name": "PROD_ACR_REPOSITORY",
"value": "__PROD_ACR_AGENT_REPOSITORY__"
},
{
"name": "MCR_REGISTRY",
"value": "__MCR_REGISTRY__"
},
{
"name": "PROD_MCR_REPOSITORY",
"value": "__PROD_MCR_AGENT_REPOSITORY__"
},
{
"name": "DEV_MCR_REPOSITORY",
"value": "__DEV_MCR_AGENT_REPOSITORY__"
},
{
"name": "IMAGE_TAG",
"value": "__CONFIGREADER_TAG__"
}
],
"identity": {
"type": "userAssigned",
"userAssignedIdentities": [
"__MANAGED_IDENTITY__"
]
}
}
}
]
}

View File

@@ -0,0 +1,64 @@
{
"$schema": "http://schema.express.azure.com/schemas/2015-01-01-alpha/RolloutParameters.json",
"contentVersion": "1.0.0.0",
"wait": [
{
"name": "waitSdpBakeTime",
"properties": {
"duration": "PT24H"
}
}
],
"shellExtensions": [
{
"name": "PushAgentToACR",
"type": "ShellExtensionType",
"properties": {
"maxexecutiontime": "PT1H"
},
"package": {
"reference": {
"path": "artifacts.tar.gz"
}
},
"launch": {
"command": [
"/bin/bash",
"pushAgentToAcr.sh"
],
"environmentVariables": [
{
"name": "ACR_REGISTRY",
"value": "__ACR_REGISTRY__"
},
{
"name": "PROD_ACR_REPOSITORY",
"value": "__PROD_ACR_AGENT_REPOSITORY__"
},
{
"name": "MCR_REGISTRY",
"value": "__MCR_REGISTRY__"
},
{
"name": "PROD_MCR_REPOSITORY",
"value": "__PROD_MCR_AGENT_REPOSITORY__"
},
{
"name": "DEV_MCR_REPOSITORY",
"value": "__DEV_MCR_AGENT_REPOSITORY__"
},
{
"name": "IMAGE_TAG",
"value": "__TARGETALLOCATOR_TAG__"
}
],
"identity": {
"type": "userAssigned",
"userAssignedIdentities": [
"__MANAGED_IDENTITY__"
]
}
}
}
]
}

View File

@@ -32,6 +32,20 @@
"actions": [ "Shell/PushAgentToACR" ],
"dependsOn": [ ]
},
{
"name": "PushTargetAllocator",
"targetType": "ServiceResource",
"targetName": "PushTargetAllocator",
"actions": [ "Shell/PushAgentToACR" ],
"dependsOn": [ ]
},
{
"name": "PushConfigReader",
"targetType": "ServiceResource",
"targetName": "PushConfigReader",
"actions": [ "Shell/PushAgentToACR" ],
"dependsOn": [ ]
},
{
"name": "PushKSMChart",
"targetType": "ServiceResource",

View File

@@ -53,6 +53,14 @@
"find": "__WINDOWS_TAG__",
"replaceWith": "$(WindowsTag)"
},
{
"find": "__TARGETALLOCATOR_TAG__",
"replaceWith": "$(TargetAllocatorTag)"
},
{
"find": "__TARGETALLOCATOR_TAG__",
"replaceWith": "$(ConfigReaderTag)"
},
{
"find": "__PROD_MCR_AGENT_REPOSITORY__",
"replaceWith": "$(ProdMCRAgentRepository)"

View File

@@ -50,6 +50,16 @@
"InstanceOf": "ShellExtension",
"RolloutParametersPath": "Parameters\\PrometheusCollector.Windows.Parameters.json"
},
{
"Name": "PushTargetAllocator",
"InstanceOf": "ShellExtension",
"RolloutParametersPath": "Parameters\\PrometheusCollector.TargetAllocator.Parameters.json"
},
{
"Name": "PushConfigReader",
"InstanceOf": "ShellExtension",
"RolloutParametersPath": "Parameters\\PrometheusCollector.ConfigReader.Parameters.json"
},
{
"Name": "Push1PHelmChart",
"InstanceOf": "ShellExtension",

View File

@@ -1,23 +1,18 @@
# Check for HIGH/CRITICAL & MEDIUM CVEs. HIGH/CRITICAL to be fixed asap, MEDIUM is best effort
# Ignore these CVEs, but continue scanning to catch other vulns. Note: this will ignore these CVEs globally
# CRITICAL
# =========== CRITICAL ================
# none
# =========== HIGH ================
# HIGH - otelcollector
CVE-2023-2253
CVE-2023-28840
# HIGH - promconfigvalidator
CVE-2023-2253
CVE-2023-28840
# none
# =========== MEDIUM ================
# MEDIUM - otelcollector
CVE-2023-28841
CVE-2023-28842
CVE-2023-40577
# MEDIUM - promconfigvalidator
CVE-2023-28841
CVE-2023-28842
CVE-2023-40577
# MEDIUM - go vulnerabilities
CVE-2023-39325
CVE-2023-3978
CVE-2023-44487

1
NOTICE
View File

@@ -4,6 +4,7 @@ This repository incorporates material as listed below or described in the code.
OpenTelemetry Collector
https://github.com/open-telemetry/opentelemetry-collector
https://github.com/open-telemetry/opentelemetry-operator/
Apache License

View File

@@ -40,6 +40,8 @@ apiVersion: v1
kind: Service
metadata:
name: prometheus-reference-service
labels:
app: prometheus-reference-app
spec:
selector:
app: prometheus-reference-app

View File

@@ -54,14 +54,15 @@ COPY ./logrotate/crontab /etc/crontab
COPY ./scripts/livenessprobe.sh $tmpdir/microsoft/liveness/livenessprobe.sh
COPY ./configmapparser/*.rb $tmpdir/microsoft/configmapparser/
COPY ./configmapparser/default-prom-configs/*.yml $tmpdir/microsoft/otelcollector/default-prom-configs/
COPY ./opentelemetry-collector-builder/collector-config-default.yml ./opentelemetry-collector-builder/collector-config-template.yml ./opentelemetry-collector-builder/PROMETHEUS_VERSION $tmpdir/microsoft/otelcollector/
COPY ./opentelemetry-collector-builder/collector-config-default.yml ./opentelemetry-collector-builder/collector-config-template.yml ./opentelemetry-collector-builder/collector-config-replicaset.yml ./opentelemetry-collector-builder/PROMETHEUS_VERSION $tmpdir/microsoft/otelcollector/
COPY --from=otelcollector-builder /src/opentelemetry-collector-builder/otelcollector $tmpdir/microsoft/otelcollector/
COPY --from=otelcollector-builder /src/goversion.txt $tmpdir/goversion.txt
COPY --from=prom-config-validator-builder /src/prom-config-validator-builder/promconfigvalidator $tmpdir/
COPY ./scripts/setup.sh ./scripts/main.sh $tmpdir/
COPY ./scripts/*.sh $tmpdir/
COPY ./metricextension/me.config ./metricextension/me_internal.config ./metricextension/me_ds.config ./metricextension/me_ds_internal.config /usr/sbin/
COPY ./telegraf/telegraf-prometheus-collector.conf $tmpdir/telegraf/
COPY ./telegraf/ $tmpdir/telegraf/
COPY ./fluent-bit/fluent-bit.conf ./fluent-bit/fluent-bit-daemonset.conf ./fluent-bit/fluent-bit-parsers.conf $tmpdir/fluent-bit/
COPY --from=fluent-bit-builder /src/out_appinsights.so $tmpdir/fluent-bit/bin/
COPY ./react /static/react
@@ -73,7 +74,7 @@ COPY ./build/linux/rpm-repos/ /etc/yum.repos.d/
ARG TARGETARCH
RUN tdnf clean all
RUN tdnf repolist --refresh
RUN tdnf update
RUN tdnf update -y
RUN tdnf install -y wget sudo net-tools cronie vim ruby-devel logrotate procps-ng busybox diffutils curl
RUN mkdir /busybin && busybox --install /busybin
RUN chmod 775 /etc/cron.daily/logrotate

View File

@@ -0,0 +1,108 @@
FROM --platform=$BUILDPLATFORM mcr.microsoft.com/oss/go/microsoft/golang:1.19 as prom-config-validator-builder
WORKDIR /src
RUN apt-get update && apt-get install gcc-aarch64-linux-gnu -y
COPY ./prom-config-validator-builder/go.mod ./prom-config-validator-builder/go.sum ./prom-config-validator-builder/
COPY ./prometheusreceiver/go.mod ./prometheusreceiver/go.sum ./prometheusreceiver/
WORKDIR /src/prometheusreceiver
RUN go version
RUN go mod download
WORKDIR /src/prom-config-validator-builder
RUN go mod download
COPY ./prom-config-validator-builder /src/prom-config-validator-builder
COPY ./prometheusreceiver /src/prometheusreceiver
ARG TARGETOS TARGETARCH
RUN if [ "$TARGETARCH" = "arm64" ] ; then CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o promconfigvalidator . ; else CGO_ENABLED=1 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o promconfigvalidator . ; fi
FROM --platform=$BUILDPLATFORM mcr.microsoft.com/oss/go/microsoft/golang:1.19 as configuration-reader-builder
WORKDIR /src
RUN apt-get update && apt-get install gcc-aarch64-linux-gnu -y
COPY ./configuration-reader-builder/go.mod ./configuration-reader-builder/go.sum ./configuration-reader-builder/
RUN go version > goversion.txt
WORKDIR /src/configuration-reader-builder
RUN go mod download
COPY ./configuration-reader-builder /src/configuration-reader-builder
ARG TARGETOS TARGETARCH
RUN if [ "$TARGETARCH" = "arm64" ] ; then CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o configurationreader . ; else CGO_ENABLED=1 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o configurationreader . ; fi
FROM mcr.microsoft.com/cbl-mariner/base/core:2.0 as builder
LABEL description="Azure Monitor Prometheus metrics collector - configuration reader sidecar"
LABEL maintainer="ciprometheus@microsoft.com"
ENV OS_TYPE "linux"
ENV tmpdir /opt
COPY ./logrotate/logrotate /etc/cron.daily/logrotate
COPY ./logrotate/crontab /etc/crontab
COPY ./scripts/livenessprobe-configreader.sh $tmpdir/microsoft/liveness/livenessprobe-configreader.sh
COPY ./configmapparser/*.rb $tmpdir/microsoft/configmapparser/
COPY ./configmapparser/default-prom-configs/*.yml $tmpdir/microsoft/otelcollector/default-prom-configs/
COPY ./opentelemetry-collector-builder/collector-config-default.yml ./opentelemetry-collector-builder/collector-config-template.yml ./opentelemetry-collector-builder/PROMETHEUS_VERSION $tmpdir/microsoft/otelcollector/
COPY --from=configuration-reader-builder /src/goversion.txt $tmpdir/goversion.txt
COPY --from=prom-config-validator-builder /src/prom-config-validator-builder/promconfigvalidator $tmpdir/
COPY --from=configuration-reader-builder /src/configuration-reader-builder/configurationreader $tmpdir/
COPY ./scripts/*.sh $tmpdir/
COPY ./LICENSE $tmpdir/microsoft
COPY ./NOTICE $tmpdir/microsoft
COPY ./build/linux/rpm-repos/ /etc/yum.repos.d/
ARG TARGETARCH
RUN tdnf clean all
RUN tdnf repolist --refresh
RUN tdnf update -y
RUN tdnf install -y wget sudo net-tools cronie vim ruby-devel logrotate procps-ng busybox diffutils curl
RUN mkdir /busybin && busybox --install /busybin
RUN chmod 775 /etc/cron.daily/logrotate
RUN chmod 775 $tmpdir/*.sh;
RUN sync;
RUN $tmpdir/setup-configreader.sh ${TARGETARCH}
FROM mcr.microsoft.com/cbl-mariner/distroless/base:2.0
ENV PATH="/busybin:${PATH}"
ENV OS_TYPE "linux"
# files
COPY --from=builder /opt /opt
COPY --from=builder /etc /etc
COPY --from=builder /busybin /busybin
COPY --from=builder /var/lib/logrotate /var/lib/logrotate
COPY --from=builder /var/spool/cron /var/spool/cron
# executables
COPY --from=builder /usr/bin/ruby /usr/bin/ruby
COPY --from=builder /usr/lib/ruby /usr/lib/ruby
COPY --from=builder /usr/bin/inotifywait /usr/bin/inotifywait
COPY --from=builder /usr/bin/bash /usr/bin/bash
COPY --from=builder /usr/sbin/busybox /usr/sbin/busybox
COPY --from=builder /usr/sbin/crond /usr/sbin/crond
COPY --from=builder /usr/bin/vim /usr/bin/vim
COPY --from=builder /usr/share/vim /usr/share/vim
COPY --from=builder /usr/sbin/logrotate /usr/sbin/logrotate
COPY --from=builder /usr/bin/gzip /usr/bin/
COPY --from=builder /usr/bin/curl /usr/bin/
COPY --from=builder /bin/sh /bin/sh
# bash dependencies
COPY --from=builder /lib/libreadline.so.8 /lib/
COPY --from=builder /usr/lib/libncursesw.so.6 /usr/lib/libtinfo.so.6 /usr/lib/
# inotifywait dependencies
COPY --from=builder /lib/libinotifytools.so.0 /lib/
# crond dependencies
COPY --from=builder /lib/libselinux.so.1 /lib/libpam.so.0 /lib/libc.so.6 /lib/libpcre.so.1 /lib/libaudit.so.1 /lib/libcap-ng.so.0 /lib/
# vim dependencies
COPY --from=builder /lib/libm.so.6 /lib/libtinfo.so.6 /lib/
# ruby dependencies
COPY --from=builder /usr/lib/libruby.so.3.1 /usr/lib/libz.so.1 /usr/lib/libgmp.so.10 /usr/lib/libcrypt.so.1 /usr/lib/libm.so.6 /usr/lib/
# ruby re2 dependencies
COPY --from=builder /usr/lib/libre2.so.0a /usr/lib/libstdc++.so.6 /usr/lib/libgcc_s.so.1 /usr/lib/libz.so.1 /usr/lib/libgmp.so.10 /usr/lib/libcrypt.so.1 /usr/lib/libm.so.6 /usr/lib/
# logrotate dependencies
COPY --from=builder /lib/libselinux.so.1 /lib/libpopt.so.0 /lib/libpcre.so.1 /lib/
# curl dependencies
COPY --from=builder /lib/libcurl.so.4 /lib/libz.so.1 /lib/libc.so.6 /lib/libnghttp2.so.14 /lib/libssh2.so.1 /lib/libssl.so.1.1 /lib/libcrypto.so.1.1 /lib/libgssapi_krb5.so.2 /lib/libzstd.so.1 /lib/
COPY --from=builder /usr/lib/libkrb5.so.3 /usr/lib/libk5crypto.so.3 /usr/lib/libcom_err.so.2 /usr/lib/libkrb5support.so.0 /usr/lib/libresolv.so.2 /usr/lib/
# sh dependencies
COPY --from=builder /lib/libreadline.so.8 /lib/libc.so.6 /usr/lib/libncursesw.so.6 /usr/lib/libtinfo.so.6 /lib/
RUN [ "/bin/bash", "-c", "chmod 644 /etc/crontab" ]
RUN [ "/bin/bash", "-c", "chown root.root /etc/crontab" ]
RUN [ "/bin/bash", "-c", "chmod 755 /etc/cron.daily/logrotate" ]
ENTRYPOINT [ "/bin/bash" ]
CMD [ "/opt/main-configreader.sh" ]

View File

@@ -0,0 +1,576 @@
#!/usr/local/bin/ruby
# frozen_string_literal: true
require "tomlrb"
require "deep_merge"
require "yaml"
require_relative "ConfigParseErrorLogger"
LOGGING_PREFIX = "prometheus-config-merger-with-operator"
@configMapMountPath = "/etc/config/settings/prometheus/prometheus-config"
@promMergedConfigPath = "/opt/promMergedConfig.yml"
@mergedDefaultConfigPath = "/opt/defaultsMergedConfig.yml"
@replicasetControllerType = "replicaset"
@daemonsetControllerType = "daemonset"
@configReaderSidecarContainerType = "configreadersidecar"
@supportedSchemaVersion = true
@defaultPromConfigPathPrefix = "/opt/microsoft/otelcollector/default-prom-configs/"
@regexHashFile = "/opt/microsoft/configmapparser/config_def_targets_metrics_keep_list_hash"
@regexHash = {}
@sendDSUpMetric = false
@intervalHashFile = "/opt/microsoft/configmapparser/config_def_targets_scrape_intervals_hash"
@intervalHash = {}
@kubeletDefaultFileRsSimple = @defaultPromConfigPathPrefix + "kubeletDefaultRsSimple.yml"
@kubeletDefaultFileRsAdvanced = @defaultPromConfigPathPrefix + "kubeletDefaultRsAdvanced.yml"
@kubeletDefaultFileDs = @defaultPromConfigPathPrefix + "kubeletDefaultDs.yml"
@kubeletDefaultFileRsAdvancedWindowsDaemonset = @defaultPromConfigPathPrefix + "kubeletDefaultRsAdvancedWindowsDaemonset.yml"
@corednsDefaultFile = @defaultPromConfigPathPrefix + "corednsDefault.yml"
@cadvisorDefaultFileRsSimple = @defaultPromConfigPathPrefix + "cadvisorDefaultRsSimple.yml"
@cadvisorDefaultFileRsAdvanced = @defaultPromConfigPathPrefix + "cadvisorDefaultRsAdvanced.yml"
@cadvisorDefaultFileDs = @defaultPromConfigPathPrefix + "cadvisorDefaultDs.yml"
@kubeproxyDefaultFile = @defaultPromConfigPathPrefix + "kubeproxyDefault.yml"
@apiserverDefaultFile = @defaultPromConfigPathPrefix + "apiserverDefault.yml"
@kubestateDefaultFile = @defaultPromConfigPathPrefix + "kubestateDefault.yml"
@nodeexporterDefaultFileRsSimple = @defaultPromConfigPathPrefix + "nodeexporterDefaultRsSimple.yml"
@nodeexporterDefaultFileRsAdvanced = @defaultPromConfigPathPrefix + "nodeexporterDefaultRsAdvanced.yml"
@nodeexporterDefaultFileDs = @defaultPromConfigPathPrefix + "nodeexporterDefaultDs.yml"
@prometheusCollectorHealthDefaultFile = @defaultPromConfigPathPrefix + "prometheusCollectorHealth.yml"
@windowsexporterDefaultRsSimpleFile = @defaultPromConfigPathPrefix + "windowsexporterDefaultRsSimple.yml"
@windowsexporterDefaultDsFile = @defaultPromConfigPathPrefix + "windowsexporterDefaultDs.yml"
@windowskubeproxyDefaultFileRsSimpleFile = @defaultPromConfigPathPrefix + "windowskubeproxyDefaultRsSimple.yml"
@windowskubeproxyDefaultDsFile = @defaultPromConfigPathPrefix + "windowskubeproxyDefaultDs.yml"
@podannotationsDefaultFile = @defaultPromConfigPathPrefix + "podannotationsDefault.yml"
@windowskubeproxyDefaultRsAdvancedFile = @defaultPromConfigPathPrefix + "windowskubeproxyDefaultRsAdvanced.yml"
@kappiebasicDefaultFileDs = @defaultPromConfigPathPrefix + "kappieBasicDefaultDs.yml"
def parseConfigMap
begin
# Check to see if config map is created
if (File.file?(@configMapMountPath))
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Custom prometheus config exists")
config = File.read(@configMapMountPath)
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Successfully parsed configmap for prometheus config")
return config
else
ConfigParseErrorLogger.logWarning(LOGGING_PREFIX, "Custom prometheus config does not exist, using only default scrape targets if they are enabled")
return ""
end
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while parsing configmap for prometheus config: #{errorStr}. Custom prometheus config will not be used. Please check configmap for errors")
return ""
end
end
def loadRegexHash
begin
@regexHash = YAML.load_file(@regexHashFile)
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception in loadRegexHash for prometheus config: #{errorStr}. Keep list regexes will not be used")
end
end
def loadIntervalHash
begin
@intervalHash = YAML.load_file(@intervalHashFile)
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception in loadIntervalHash for prometheus config: #{errorStr}. Scrape interval will not be used")
end
end
def isConfigReaderSidecar
if !ENV["CONTAINER_TYPE"].nil? && !ENV["CONTAINER_TYPE"].empty?
currentContainerType = ENV["CONTAINER_TYPE"].strip.downcase
if !currentContainerType.nil? && currentContainerType == @configReaderSidecarContainerType
return true
end
end
return false
end
def UpdateScrapeIntervalConfig(yamlConfigFile, scrapeIntervalSetting)
begin
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Updating scrape interval config for #{yamlConfigFile}")
config = YAML.load(File.read(yamlConfigFile))
scrapeIntervalConfig = scrapeIntervalSetting
# Iterate through each scrape config and update scrape interval config
if !config.nil?
scrapeConfigs = config["scrape_configs"]
if !scrapeConfigs.nil? && !scrapeConfigs.empty?
scrapeConfigs.each { |scfg|
scrapeCfgs = scfg["scrape_interval"]
if !scrapeCfgs.nil?
scfg["scrape_interval"] = scrapeIntervalConfig
end
}
cfgYamlWithScrapeConfig = YAML::dump(config)
File.open(yamlConfigFile, "w") { |file| file.puts cfgYamlWithScrapeConfig }
end
end
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while updating scrape interval config in default target file - #{yamlConfigFile} : #{errorStr}. The Scrape interval will not be used")
end
end
def AppendMetricRelabelConfig(yamlConfigFile, keepListRegex)
begin
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Adding keep list regex or minimal ingestion regex for #{yamlConfigFile}")
config = YAML.load(File.read(yamlConfigFile))
keepListMetricRelabelConfig = [{ "source_labels" => ["__name__"], "action" => "keep", "regex" => keepListRegex }]
# Iterate through each scrape config and append metric relabel config for keep list
if !config.nil?
scrapeConfigs = config["scrape_configs"]
if !scrapeConfigs.nil? && !scrapeConfigs.empty?
scrapeConfigs.each { |scfg|
metricRelabelCfgs = scfg["metric_relabel_configs"]
if metricRelabelCfgs.nil?
scfg["metric_relabel_configs"] = keepListMetricRelabelConfig
else
scfg["metric_relabel_configs"] = metricRelabelCfgs.concat(keepListMetricRelabelConfig)
end
}
cfgYamlWithMetricRelabelConfig = YAML::dump(config)
File.open(yamlConfigFile, "w") { |file| file.puts cfgYamlWithMetricRelabelConfig }
end
end
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while appending metric relabel config in default target file - #{yamlConfigFile} : #{errorStr}. The keep list regex will not be used")
end
end
def AppendRelabelConfig(yamlConfigFile, relabelConfig, keepRegex)
begin
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Adding relabel config for #{yamlConfigFile}")
config = YAML.load(File.read(yamlConfigFile))
# Iterate through each scrape config and append metric relabel config for keep list
if !config.nil?
scrapeConfigs = config["scrape_configs"]
if !scrapeConfigs.nil? && !scrapeConfigs.empty?
scrapeConfigs.each { |scfg|
relabelCfgs = scfg["relabel_configs"]
if relabelCfgs.nil?
scfg["relabel_configs"] = relabelConfig
else
scfg["relabel_configs"] = relabelCfgs.concat(relabelConfig)
end
}
cfgYamlWithRelabelConfig = YAML::dump(config)
File.open(yamlConfigFile, "w") { |file| file.puts cfgYamlWithRelabelConfig }
end
end
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while appending relabel config in default target file - #{yamlConfigFile} : #{errorStr}. The keep list regex will not be used")
end
end
# Get the list of default configs to be included in the otel's prometheus config
def populateDefaultPrometheusConfig
begin
# check if running in daemonset or replicaset
currentControllerType = ""
if !ENV["CONTROLLER_TYPE"].nil? && !ENV["CONTROLLER_TYPE"].empty?
currentControllerType = ENV["CONTROLLER_TYPE"].strip.downcase
end
advancedMode = false #default is false
windowsDaemonset = false #default is false
# get current mode (advanced or not...)
if !ENV["MODE"].nil? && !ENV["MODE"].empty?
currentMode = ENV["MODE"].strip.downcase
if currentMode == "advanced"
advancedMode = true
end
end
# get if windowsdaemonset is enabled or not (ie. WINMODE env = advanced or not...)
if !ENV["WINMODE"].nil? && !ENV["WINMODE"].empty?
winMode = ENV["WINMODE"].strip.downcase
if winMode == "advanced"
windowsDaemonset = true
end
end
defaultConfigs = []
if !ENV["AZMON_PROMETHEUS_KUBELET_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_KUBELET_SCRAPING_ENABLED"].downcase == "true"
kubeletMetricsKeepListRegex = @regexHash["KUBELET_METRICS_KEEP_LIST_REGEX"]
kubeletScrapeInterval = @intervalHash["KUBELET_SCRAPE_INTERVAL"]
if (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
if advancedMode == false
UpdateScrapeIntervalConfig(@kubeletDefaultFileRsSimple, kubeletScrapeInterval)
if !kubeletMetricsKeepListRegex.nil? && !kubeletMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@kubeletDefaultFileRsSimple, kubeletMetricsKeepListRegex)
end
defaultConfigs.push(@kubeletDefaultFileRsSimple)
elsif windowsDaemonset == true && @sendDSUpMetric == true
UpdateScrapeIntervalConfig(@kubeletDefaultFileRsAdvancedWindowsDaemonset, kubeletScrapeInterval)
defaultConfigs.push(@kubeletDefaultFileRsAdvancedWindowsDaemonset)
elsif @sendDSUpMetric == true
UpdateScrapeIntervalConfig(@kubeletDefaultFileRsAdvanced, kubeletScrapeInterval)
defaultConfigs.push(@kubeletDefaultFileRsAdvanced)
end
else
if advancedMode == true && currentControllerType == @daemonsetControllerType && (windowsDaemonset == true || ENV["OS_TYPE"].downcase == "linux")
UpdateScrapeIntervalConfig(@kubeletDefaultFileDs, kubeletScrapeInterval)
if !kubeletMetricsKeepListRegex.nil? && !kubeletMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@kubeletDefaultFileDs, kubeletMetricsKeepListRegex)
end
contents = File.read(@kubeletDefaultFileDs)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
contents = contents.gsub("$$OS_TYPE$$", ENV["OS_TYPE"])
File.open(@kubeletDefaultFileDs, "w") { |file| file.puts contents }
defaultConfigs.push(@kubeletDefaultFileDs)
end
end
end
if !ENV["AZMON_PROMETHEUS_COREDNS_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_COREDNS_SCRAPING_ENABLED"].downcase == "true" && (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
corednsMetricsKeepListRegex = @regexHash["COREDNS_METRICS_KEEP_LIST_REGEX"]
corednsScrapeInterval = @intervalHash["COREDNS_SCRAPE_INTERVAL"]
UpdateScrapeIntervalConfig(@corednsDefaultFile, corednsScrapeInterval)
if !corednsMetricsKeepListRegex.nil? && !corednsMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@corednsDefaultFile, corednsMetricsKeepListRegex)
end
defaultConfigs.push(@corednsDefaultFile)
end
if !ENV["AZMON_PROMETHEUS_CADVISOR_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_CADVISOR_SCRAPING_ENABLED"].downcase == "true"
cadvisorMetricsKeepListRegex = @regexHash["CADVISOR_METRICS_KEEP_LIST_REGEX"]
cadvisorScrapeInterval = @intervalHash["CADVISOR_SCRAPE_INTERVAL"]
if (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
if advancedMode == false
UpdateScrapeIntervalConfig(@cadvisorDefaultFileRsSimple, cadvisorScrapeInterval)
if !cadvisorMetricsKeepListRegex.nil? && !cadvisorMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@cadvisorDefaultFileRsSimple, cadvisorMetricsKeepListRegex)
end
defaultConfigs.push(@cadvisorDefaultFileRsSimple)
elsif @sendDSUpMetric == true
UpdateScrapeIntervalConfig(@cadvisorDefaultFileRsAdvanced, cadvisorScrapeInterval)
defaultConfigs.push(@cadvisorDefaultFileRsAdvanced)
end
else
if advancedMode == true && ENV["OS_TYPE"].downcase == "linux" && currentControllerType == @daemonsetControllerType
UpdateScrapeIntervalConfig(@cadvisorDefaultFileDs, cadvisorScrapeInterval)
if !cadvisorMetricsKeepListRegex.nil? && !cadvisorMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@cadvisorDefaultFileDs, cadvisorMetricsKeepListRegex)
end
contents = File.read(@cadvisorDefaultFileDs)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
File.open(@cadvisorDefaultFileDs, "w") { |file| file.puts contents }
defaultConfigs.push(@cadvisorDefaultFileDs)
end
end
end
if !ENV["AZMON_PROMETHEUS_KUBEPROXY_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_KUBEPROXY_SCRAPING_ENABLED"].downcase == "true" && (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
kubeproxyMetricsKeepListRegex = @regexHash["KUBEPROXY_METRICS_KEEP_LIST_REGEX"]
kubeproxyScrapeInterval = @intervalHash["KUBEPROXY_SCRAPE_INTERVAL"]
UpdateScrapeIntervalConfig(@kubeproxyDefaultFile, kubeproxyScrapeInterval)
if !kubeproxyMetricsKeepListRegex.nil? && !kubeproxyMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@kubeproxyDefaultFile, kubeproxyMetricsKeepListRegex)
end
defaultConfigs.push(@kubeproxyDefaultFile)
end
if !ENV["AZMON_PROMETHEUS_APISERVER_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_APISERVER_SCRAPING_ENABLED"].downcase == "true" && (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
apiserverMetricsKeepListRegex = @regexHash["APISERVER_METRICS_KEEP_LIST_REGEX"]
apiserverScrapeInterval = @intervalHash["APISERVER_SCRAPE_INTERVAL"]
UpdateScrapeIntervalConfig(@apiserverDefaultFile, apiserverScrapeInterval)
if !apiserverMetricsKeepListRegex.nil? && !apiserverMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@apiserverDefaultFile, apiserverMetricsKeepListRegex)
end
defaultConfigs.push(@apiserverDefaultFile)
end
if !ENV["AZMON_PROMETHEUS_KUBESTATE_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_KUBESTATE_SCRAPING_ENABLED"].downcase == "true" && (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
kubestateMetricsKeepListRegex = @regexHash["KUBESTATE_METRICS_KEEP_LIST_REGEX"]
kubestateScrapeInterval = @intervalHash["KUBESTATE_SCRAPE_INTERVAL"]
UpdateScrapeIntervalConfig(@kubestateDefaultFile, kubestateScrapeInterval)
if !kubestateMetricsKeepListRegex.nil? && !kubestateMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@kubestateDefaultFile, kubestateMetricsKeepListRegex)
end
contents = File.read(@kubestateDefaultFile)
contents = contents.gsub("$$KUBE_STATE_NAME$$", ENV["KUBE_STATE_NAME"])
contents = contents.gsub("$$POD_NAMESPACE$$", ENV["POD_NAMESPACE"])
File.open(@kubestateDefaultFile, "w") { |file| file.puts contents }
defaultConfigs.push(@kubestateDefaultFile)
end
if !ENV["AZMON_PROMETHEUS_NODEEXPORTER_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_NODEEXPORTER_SCRAPING_ENABLED"].downcase == "true"
nodeexporterMetricsKeepListRegex = @regexHash["NODEEXPORTER_METRICS_KEEP_LIST_REGEX"]
nodeexporterScrapeInterval = @intervalHash["NODEEXPORTER_SCRAPE_INTERVAL"]
if (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
if advancedMode == true && @sendDSUpMetric == true
UpdateScrapeIntervalConfig(@nodeexporterDefaultFileRsAdvanced, nodeexporterScrapeInterval)
contents = File.read(@nodeexporterDefaultFileRsAdvanced)
contents = contents.gsub("$$NODE_EXPORTER_NAME$$", ENV["NODE_EXPORTER_NAME"])
contents = contents.gsub("$$POD_NAMESPACE$$", ENV["POD_NAMESPACE"])
File.open(@nodeexporterDefaultFileRsAdvanced, "w") { |file| file.puts contents }
defaultConfigs.push(@nodeexporterDefaultFileRsAdvanced)
elsif advancedMode == false
UpdateScrapeIntervalConfig(@nodeexporterDefaultFileRsSimple, nodeexporterScrapeInterval)
if !nodeexporterMetricsKeepListRegex.nil? && !nodeexporterMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@nodeexporterDefaultFileRsSimple, nodeexporterMetricsKeepListRegex)
end
contents = File.read(@nodeexporterDefaultFileRsSimple)
contents = contents.gsub("$$NODE_EXPORTER_NAME$$", ENV["NODE_EXPORTER_NAME"])
contents = contents.gsub("$$POD_NAMESPACE$$", ENV["POD_NAMESPACE"])
File.open(@nodeexporterDefaultFileRsSimple, "w") { |file| file.puts contents }
defaultConfigs.push(@nodeexporterDefaultFileRsSimple)
end
else
if advancedMode == true && ENV["OS_TYPE"].downcase == "linux" && currentControllerType == @daemonsetControllerType
UpdateScrapeIntervalConfig(@nodeexporterDefaultFileDs, nodeexporterScrapeInterval)
if !nodeexporterMetricsKeepListRegex.nil? && !nodeexporterMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@nodeexporterDefaultFileDs, nodeexporterMetricsKeepListRegex)
end
contents = File.read(@nodeexporterDefaultFileDs)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_EXPORTER_TARGETPORT$$", ENV["NODE_EXPORTER_TARGETPORT"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
File.open(@nodeexporterDefaultFileDs, "w") { |file| file.puts contents }
defaultConfigs.push(@nodeexporterDefaultFileDs)
end
end
end
if !ENV["AZMON_PROMETHEUS_KAPPIEBASIC_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_KAPPIEBASIC_SCRAPING_ENABLED"].downcase == "true"
kappiebasicMetricsKeepListRegex = @regexHash["KAPPIEBASIC_METRICS_KEEP_LIST_REGEX"]
kappiebasicScrapeInterval = @intervalHash["KAPPIEBASIC_SCRAPE_INTERVAL"]
if (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
#do nothing -- kappie is not supported to be scraped automatically outside the ds. if needed, customer can disable this ds target, and enable rs scraping thru custom config map
elsif currentControllerType == @daemonsetControllerType #kappie scraping will be turned ON by default only when in MAC/addon mode (for both windows & linux)
if advancedMode == true && !ENV["MAC"].nil? && !ENV["MAC"].empty? && ENV["MAC"].strip.downcase == "true" #&& ENV["OS_TYPE"].downcase == "linux"
UpdateScrapeIntervalConfig(@kappiebasicDefaultFileDs, kappiebasicScrapeInterval)
if !kappiebasicMetricsKeepListRegex.nil? && !kappiebasicMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@kappiebasicDefaultFileDs, kappiebasicMetricsKeepListRegex)
end
contents = File.read(@kappiebasicDefaultFileDs)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
File.open(@kappiebasicDefaultFileDs, "w") { |file| file.puts contents }
defaultConfigs.push(@kappiebasicDefaultFileDs)
end
end
end
# Collector health config should be enabled or disabled for both replicaset and daemonset
if !ENV["AZMON_PROMETHEUS_COLLECTOR_HEALTH_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_COLLECTOR_HEALTH_SCRAPING_ENABLED"].downcase == "true"
prometheusCollectorHealthInterval = @intervalHash["PROMETHEUS_COLLECTOR_HEALTH_SCRAPE_INTERVAL"]
UpdateScrapeIntervalConfig(@prometheusCollectorHealthDefaultFile, prometheusCollectorHealthInterval)
defaultConfigs.push(@prometheusCollectorHealthDefaultFile)
end
if !ENV["AZMON_PROMETHEUS_WINDOWSEXPORTER_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_WINDOWSEXPORTER_SCRAPING_ENABLED"].downcase == "true"
winexporterMetricsKeepListRegex = @regexHash["WINDOWSEXPORTER_METRICS_KEEP_LIST_REGEX"]
windowsexporterScrapeInterval = @intervalHash["WINDOWSEXPORTER_SCRAPE_INTERVAL"]
# Not adding the isConfigReaderSidecar check (in place of the replicaset check) since this is the legacy 1P chart path and is not relevant anymore.
if currentControllerType == @replicasetControllerType && advancedMode == false && ENV["OS_TYPE"].downcase == "linux"
UpdateScrapeIntervalConfig(@windowsexporterDefaultRsSimpleFile, windowsexporterScrapeInterval)
if !winexporterMetricsKeepListRegex.nil? && !winexporterMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@windowsexporterDefaultRsSimpleFile, winexporterMetricsKeepListRegex)
end
contents = File.read(@windowsexporterDefaultRsSimpleFile)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
File.open(@windowsexporterDefaultRsSimpleFile, "w") { |file| file.puts contents }
defaultConfigs.push(@windowsexporterDefaultRsSimpleFile)
elsif currentControllerType == @daemonsetControllerType && advancedMode == true && windowsDaemonset == true && ENV["OS_TYPE"].downcase == "windows"
UpdateScrapeIntervalConfig(@windowsexporterDefaultDsFile, windowsexporterScrapeInterval)
if !winexporterMetricsKeepListRegex.nil? && !winexporterMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@windowsexporterDefaultDsFile, winexporterMetricsKeepListRegex)
end
contents = File.read(@windowsexporterDefaultDsFile)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
File.open(@windowsexporterDefaultDsFile, "w") { |file| file.puts contents }
defaultConfigs.push(@windowsexporterDefaultDsFile)
end
end
if !ENV["AZMON_PROMETHEUS_WINDOWSKUBEPROXY_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_WINDOWSKUBEPROXY_SCRAPING_ENABLED"].downcase == "true"
winkubeproxyMetricsKeepListRegex = @regexHash["WINDOWSKUBEPROXY_METRICS_KEEP_LIST_REGEX"]
windowskubeproxyScrapeInterval = @intervalHash["WINDOWSKUBEPROXY_SCRAPE_INTERVAL"]
# Not adding the isConfigReaderSidecar check (in place of the replicaset check) since this is the legacy 1P chart path and is not relevant anymore.
if currentControllerType == @replicasetControllerType && advancedMode == false && ENV["OS_TYPE"].downcase == "linux"
UpdateScrapeIntervalConfig(@windowskubeproxyDefaultFileRsSimpleFile, windowskubeproxyScrapeInterval)
if !winkubeproxyMetricsKeepListRegex.nil? && !winkubeproxyMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@windowskubeproxyDefaultFileRsSimpleFile, winkubeproxyMetricsKeepListRegex)
end
contents = File.read(@windowskubeproxyDefaultFileRsSimpleFile)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
File.open(@windowskubeproxyDefaultFileRsSimpleFile, "w") { |file| file.puts contents }
defaultConfigs.push(@windowskubeproxyDefaultFileRsSimpleFile)
elsif currentControllerType == @daemonsetControllerType && advancedMode == true && windowsDaemonset == true && ENV["OS_TYPE"].downcase == "windows"
UpdateScrapeIntervalConfig(@windowskubeproxyDefaultDsFile, windowskubeproxyScrapeInterval)
if !winkubeproxyMetricsKeepListRegex.nil? && !winkubeproxyMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@windowskubeproxyDefaultDsFile, winkubeproxyMetricsKeepListRegex)
end
contents = File.read(@windowskubeproxyDefaultDsFile)
contents = contents.gsub("$$NODE_IP$$", ENV["NODE_IP"])
contents = contents.gsub("$$NODE_NAME$$", ENV["NODE_NAME"])
File.open(@windowskubeproxyDefaultDsFile, "w") { |file| file.puts contents }
defaultConfigs.push(@windowskubeproxyDefaultDsFile)
end
end
if !ENV["AZMON_PROMETHEUS_POD_ANNOTATION_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_POD_ANNOTATION_SCRAPING_ENABLED"].downcase == "true" && (isConfigReaderSidecar || currentControllerType == @replicasetControllerType)
podannotationNamespacesRegex = ENV["AZMON_PROMETHEUS_POD_ANNOTATION_NAMESPACES_REGEX"]
podannotationMetricsKeepListRegex = @regexHash["POD_ANNOTATION_METRICS_KEEP_LIST_REGEX"]
podannotationScrapeInterval = @intervalHash["POD_ANNOTATION_SCRAPE_INTERVAL"]
UpdateScrapeIntervalConfig(@podannotationsDefaultFile, podannotationScrapeInterval)
if !podannotationMetricsKeepListRegex.nil? && !podannotationMetricsKeepListRegex.empty?
AppendMetricRelabelConfig(@podannotationsDefaultFile, podannotationMetricsKeepListRegex)
end
if !podannotationNamespacesRegex.nil? && !podannotationNamespacesRegex.empty?
relabelConfig = [{ "source_labels" => ["__meta_kubernetes_namespace"], "action" => "keep", "regex" => podannotationNamespacesRegex }]
AppendRelabelConfig(@podannotationsDefaultFile, relabelConfig, podannotationNamespacesRegex)
end
defaultConfigs.push(@podannotationsDefaultFile)
end
@mergedDefaultConfigs = mergeDefaultScrapeConfigs(defaultConfigs)
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while merging default scrape targets - #{errorStr}. No default scrape targets will be included")
@mergedDefaultConfigs = ""
end
end
def mergeDefaultScrapeConfigs(defaultScrapeConfigs)
mergedDefaultConfigs = ""
begin
if defaultScrapeConfigs.length > 0
mergedDefaultConfigs = YAML.load("scrape_configs:")
# Load each of the default scrape configs and merge them
defaultScrapeConfigs.each { |defaultScrapeConfig|
# Load yaml from default config
defaultConfigYaml = YAML.load(File.read(defaultScrapeConfig))
mergedDefaultConfigs = mergedDefaultConfigs.deep_merge!(defaultConfigYaml)
}
end
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Done merging #{defaultScrapeConfigs.length} default prometheus config(s)")
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while adding default scrape config- #{errorStr}. No default scrape targets will be included")
mergedDefaultConfigs = ""
end
return mergedDefaultConfigs
end
def mergeDefaultAndCustomScrapeConfigs(customPromConfig)
mergedConfigYaml = ""
begin
if !@mergedDefaultConfigs.nil? && !@mergedDefaultConfigs.empty?
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Merging default and custom scrape configs")
customPrometheusConfig = YAML.load(customPromConfig)
mergedConfigs = @mergedDefaultConfigs.deep_merge!(customPrometheusConfig)
mergedConfigYaml = YAML::dump(mergedConfigs)
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Done merging default scrape config(s) with custom prometheus config, writing them to file")
else
ConfigParseErrorLogger.logWarning(LOGGING_PREFIX, "The merged default scrape config is nil or empty, using only custom scrape config")
mergedConfigYaml = customPromConfig
end
File.open(@promMergedConfigPath, "w") { |file| file.puts mergedConfigYaml }
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while merging default and custom scrape configs- #{errorStr}")
end
end
# This will enforce the number of labels, label name length, and label value length for every scrape job to be within Azure Monitor's supported limits,
# by injecting these limits into every custom scrape job's config. For default scrape jobs, this is already included. We do this here so the config validation can happen after we inject these limits into the custom scrape jobs.
def setLabelLimitsPerScrape(prometheusConfigString)
customConfig = prometheusConfigString
ConfigParseErrorLogger.log(LOGGING_PREFIX, "setLabelLimitsPerScrape()")
begin
if !customConfig.nil? && !customConfig.empty?
limitedCustomConfig = YAML.load(customConfig)
limitedCustomscrapes = limitedCustomConfig["scrape_configs"]
if !limitedCustomscrapes.nil? && !limitedCustomscrapes.empty?
limitedCustomscrapes.each { |scrape|
scrape["label_limit"] = 63
scrape["label_name_length_limit"] = 511
scrape["label_value_length_limit"] = 1023
ConfigParseErrorLogger.log(LOGGING_PREFIX, " Successfully set label limits in custom scrape config for job #{scrape["job_name"]}")
}
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Done setting label limits for custom scrape config ...")
return YAML::dump(limitedCustomConfig)
else
ConfigParseErrorLogger.logWarning(LOGGING_PREFIX, "No Jobs found to set label limits while processing custom scrape config")
return prometheusConfigString
end
else
ConfigParseErrorLogger.logWarning(LOGGING_PREFIX, "Nothing to set for label limits while processing custom scrape config")
return prometheusConfigString
end
rescue => errStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception when setting label limits while processing custom scrape config - #{errStr}")
return prometheusConfigString
end
end
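As a sketch, a custom scrape job (the job_name and target below are hypothetical) comes out of setLabelLimitsPerScrape with the three limit fields injected:

scrape_configs:
- job_name: my-custom-app
  static_configs:
  - targets: ["localhost:9090"]
  label_limit: 63
  label_name_length_limit: 511
  label_value_length_limit: 1023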
# Populate default scrape config(s) if AZMON_PROMETHEUS_NO_DEFAULT_SCRAPING_ENABLED is set to false
# and write them as a collector config file, in case the custom config validation fails,
# and we need to fall back to defaults
def writeDefaultScrapeTargetsFile()
ConfigParseErrorLogger.logSection(LOGGING_PREFIX, "Start Merging Default and Custom Prometheus Config")
if !ENV["AZMON_PROMETHEUS_NO_DEFAULT_SCRAPING_ENABLED"].nil? && ENV["AZMON_PROMETHEUS_NO_DEFAULT_SCRAPING_ENABLED"].downcase == "false"
begin
loadRegexHash
loadIntervalHash
populateDefaultPrometheusConfig
if !@mergedDefaultConfigs.nil? && !@mergedDefaultConfigs.empty?
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Starting to merge default prometheus config values in collector template as backup")
mergedDefaultConfigYaml = YAML::dump(@mergedDefaultConfigs)
File.open(@mergedDefaultConfigPath, "w") { |file| file.puts mergedDefaultConfigYaml }
end
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Error while populating default scrape targets and writing them to the default scrape targets file")
end
end
end
def setDefaultFileScrapeInterval(scrapeInterval)
defaultFilesArray = [
@kubeletDefaultFileRsSimple, @kubeletDefaultFileRsAdvanced, @kubeletDefaultFileDs, @kubeletDefaultFileRsAdvancedWindowsDaemonset,
@corednsDefaultFile, @cadvisorDefaultFileRsSimple, @cadvisorDefaultFileRsAdvanced, @cadvisorDefaultFileDs, @kubeproxyDefaultFile,
@apiserverDefaultFile, @kubestateDefaultFile, @nodeexporterDefaultFileRsSimple, @nodeexporterDefaultFileRsAdvanced, @nodeexporterDefaultFileDs,
@prometheusCollectorHealthDefaultFile, @windowsexporterDefaultRsSimpleFile, @windowsexporterDefaultDsFile,
@windowskubeproxyDefaultFileRsSimpleFile, @windowskubeproxyDefaultDsFile, @podannotationsDefaultFile,
]
defaultFilesArray.each { |currentFile|
contents = File.read(currentFile)
contents = contents.gsub("$$SCRAPE_INTERVAL$$", scrapeInterval)
File.open(currentFile, "w") { |file| file.puts contents }
}
end
def setGlobalScrapeConfigInDefaultFilesIfExists(configString)
customConfig = YAML.load(configString)
# set scrape interval to 30s for updating the default merged config
scrapeInterval = "30s"
if customConfig.has_key?("global") && customConfig["global"].has_key?("scrape_interval")
scrapeInterval = customConfig["global"]["scrape_interval"]
# Checking to see if the duration matches the pattern specified in the prometheus config
# Link to documentation with regex pattern -> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file
matched = /^((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)$/.match(scrapeInterval)
if !matched
# set the default global scrape interval to 1m in the custom config if it's not in the proper format (the default target files still use 30s)
customConfig["global"]["scrape_interval"] = "1m"
scrapeInterval = "30s"
end
end
setDefaultFileScrapeInterval(scrapeInterval)
return YAML::dump(customConfig)
end
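
A quick illustration of the validation above, reusing the same regex on a few sample inputs (the inputs are made up; the fallback behavior is inferred from the code path above):

DURATION_PATTERN = /^((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)$/
["30s", "1m30s", "2h", "15seconds", "0"].each do |interval|
  ok = DURATION_PATTERN.match(interval)
  puts "#{interval} => #{ok ? "valid" : "invalid: global falls back to 1m, default files keep 30s"}"
end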
prometheusConfigString = parseConfigMap
if !prometheusConfigString.nil? && !prometheusConfigString.empty?
modifiedPrometheusConfigString = setGlobalScrapeConfigInDefaultFilesIfExists(prometheusConfigString)
writeDefaultScrapeTargetsFile()
#set label limits for every custom scrape job, before merging the default & custom config
labellimitedconfigString = setLabelLimitsPerScrape(modifiedPrometheusConfigString)
mergeDefaultAndCustomScrapeConfigs(labellimitedconfigString)
else
setDefaultFileScrapeInterval("30s")
writeDefaultScrapeTargetsFile()
end
ConfigParseErrorLogger.logSection(LOGGING_PREFIX, "Done Merging Default and Custom Prometheus Config")

View file

@ -2,12 +2,14 @@
# frozen_string_literal: true
require "tomlrb"
require "yaml"
require_relative "ConfigParseErrorLogger"
LOGGING_PREFIX = "debug-mode-config"
@configMapMountPath = "/etc/config/settings/debug-mode"
@configVersion = ""
@configSchemaVersion = ""
@replicasetCollectorConfig = "/opt/microsoft/otelcollector/collector-config-replicaset.yml"
# Setting default values which will be used in case they are not set in the configmap or if configmap doesn't exist
@defaultEnabled = false
@ -54,14 +56,34 @@ end
file = File.open("/opt/microsoft/configmapparser/config_debug_mode_env_var", "w")
if !file.nil?
if !ENV["OS_TYPE"].nil? && ENV["OS_TYPE"].downcase == "linux"
file.write("export DEBUG_MODE_ENABLED=#{@defaultEnabled}\n")
else
file.write("DEBUG_MODE_ENABLED=#{@defaultEnabled}\n")
end
file.close
else
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while opening file for writing prometheus-collector config environment variables")
end
# Adding logic to set otlp in service pipeline metrics when debug mode is enabled. This is done in promconfigvalidator for daemonset.
# We need to do this here for the replicaset since we don't run the promconfigvalidator for rs config.
if @defaultEnabled == true
begin
controllerType = ENV["CONTROLLER_TYPE"]
if !controllerType.nil? && !controllerType.empty? && controllerType == "ReplicaSet"
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Setting otlp in the exporter metrics for service pipeline since debug mode is enabled ...")
config = YAML.load(File.read(@replicasetCollectorConfig))
if !config.nil?
config["service"]["pipelines"]["metrics"]["exporters"] = ["otlp", "prometheus"]
cfgYamlWithDebugModeSettings = YAML::dump(config)
File.open(@replicasetCollectorConfig, "w") { |file| file.puts cfgYamlWithDebugModeSettings }
end
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Done setting otlp in the exporter metrics for service pipeline.")
end
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while setting otlp in the exporter metrics for service pipeline when debug mode is enabled - #{errorStr}")
end
end
ConfigParseErrorLogger.logSection(LOGGING_PREFIX, "End debug-mode Settings Processing")

View file

@ -15,6 +15,8 @@ LOGGING_PREFIX = "config"
@clusterAlias = "" # user provided alias (thru config map or chart param)
@clusterLabel = "" # value of the 'cluster' label in every time series scraped
@isOperatorEnabled = ""
@isOperatorEnabledChartSetting = ""
# Use parser to parse the configmap toml file to a ruby structure
def parseConfigMap
@ -49,7 +51,7 @@ def populateSettingValuesFromConfigMap(parsedConfig)
if !parsedConfig.nil? && !parsedConfig[:cluster_alias].nil?
@clusterAlias = parsedConfig[:cluster_alias].strip
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Got configmap setting for cluster_alias:#{@clusterAlias}")
@clusterAlias = @clusterAlias.gsub(/[^0-9a-z]/i, "_") # replace all non-alphanumeric characters with "_" so that all downstream consumers (collector, telegraf config, etc.) get a sanitized value
ConfigParseErrorLogger.log(LOGGING_PREFIX, "After g-subing configmap setting for cluster_alias:#{@clusterAlias}")
end
rescue => errorStr
@ -57,6 +59,20 @@ def populateSettingValuesFromConfigMap(parsedConfig)
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while reading config map settings for cluster_alias in prometheus collector settings- #{errorStr}, using defaults, please check config map for errors")
end
# Safeguard to fall back to the non-operator model: the configmap can set operator_enabled to true or false only when the chart-level toggle is enabled
if !ENV["AZMON_OPERATOR_ENABLED"].nil? && ENV["AZMON_OPERATOR_ENABLED"].downcase == "true"
begin
@isOperatorEnabledChartSetting = "true"
if !parsedConfig.nil? && !parsedConfig[:operator_enabled].nil?
@isOperatorEnabled = parsedConfig[:operator_enabled]
ConfigParseErrorLogger.log(LOGGING_PREFIX, "Configmap setting enabling operator: #{@isOperatorEnabled}")
end
rescue => errorStr
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while reading config map settings for prometheus collector settings- #{errorStr}, using defaults, please check config map for errors")
end
else
@isOperatorEnabledChartSetting = "false"
end
end
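
A hedged sketch of how the chart toggle and the configmap interact here, using the same tomlrb parser as this script; the TOML snippet is illustrative, and the key name matches the symbol read above:

require "tomlrb"

# Illustrative configmap content; in production this comes from the mounted file.
parsed = Tomlrb.parse('operator_enabled = "true"', symbolize_keys: true)

if ENV["AZMON_OPERATOR_ENABLED"].to_s.downcase == "true"
  # Chart toggle on: honor whatever the configmap says (true or false).
  is_operator_enabled = parsed[:operator_enabled]
else
  # Chart toggle off: stay on the non-operator model regardless of the configmap.
  is_operator_enabled = ""
end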
@configSchemaVersion = ENV["AZMON_AGENT_CFG_SCHEMA_VERSION"]
@ -74,14 +90,14 @@ end
# get clustername from cluster's full ARM resourceid (to be used for mac mode as 'cluster' label)
begin
if !ENV["MAC"].nil? && !ENV["MAC"].empty? && ENV["MAC"].strip.downcase == "true"
resourceArray = ENV["CLUSTER"].strip.split("/")
@clusterLabel = resourceArray[resourceArray.length - 1]
else
@clusterLabel = ENV["CLUSTER"]
end
rescue => errorStr
@clusterLabel = ENV["CLUSTER"]
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while parsing to determine cluster label from full cluster resource id in prometheus collector settings- #{errorStr}, using default as full CLUSTER passed-in '#{@clusterLabel}'")
end
@ -99,16 +115,21 @@ ConfigParseErrorLogger.log(LOGGING_PREFIX, "AZMON_CLUSTER_LABEL:#{@clusterLabel}
file = File.open("/opt/microsoft/configmapparser/config_prometheus_collector_settings_env_var", "w")
if !file.nil?
if !ENV["OS_TYPE"].nil? && ENV["OS_TYPE"].downcase == "linux"
file.write("export AZMON_DEFAULT_METRIC_ACCOUNT_NAME=#{@defaultMetricAccountName}\n")
file.write("export AZMON_CLUSTER_LABEL=#{@clusterLabel}\n") #used for cluster label value when scraping
file.write("export AZMON_CLUSTER_LABEL=#{@clusterLabel}\n") #used for cluster label value when scraping
file.write("export AZMON_CLUSTER_ALIAS=#{@clusterAlias}\n") #used only for telemetry
file.write("export AZMON_OPERATOR_ENABLED_CHART_SETTING=#{@isOperatorEnabledChartSetting}\n")
if !@isOperatorEnabled.nil? && !@isOperatorEnabled.empty? && @isOperatorEnabled.length > 0
file.write("export AZMON_OPERATOR_ENABLED=#{@isOperatorEnabled}\n")
file.write("export AZMON_OPERATOR_ENABLED_CFG_MAP_SETTING=#{@isOperatorEnabled}\n")
end
else
file.write("AZMON_DEFAULT_METRIC_ACCOUNT_NAME=#{@defaultMetricAccountName}\n")
file.write("AZMON_CLUSTER_LABEL=#{@clusterLabel}\n") #used for cluster label value when scraping
file.write("AZMON_CLUSTER_LABEL=#{@clusterLabel}\n") #used for cluster label value when scraping
file.write("AZMON_CLUSTER_ALIAS=#{@clusterAlias}\n") #used only for telemetry
end
file.close
else
ConfigParseErrorLogger.logError(LOGGING_PREFIX, "Exception while opening file for writing prometheus-collector config environment variables")

View file

@ -0,0 +1,9 @@
.PHONY: configurationreader
configurationreader:
@echo "========================= Building configurationreader ========================="
@echo "========================= cleanup existing configurationreader ========================="
rm -rf configurationreader
@echo "========================= go get ========================="
go get
@echo "========================= go build ========================="
go build -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o configurationreader .

View file

@ -0,0 +1,73 @@
module github.com/configurationreader
go 1.20
require (
github.com/prometheus/prometheus v0.45.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.26.2
k8s.io/apimachinery v0.26.2
k8s.io/client-go v0.26.2
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.3.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v0.8.1 // indirect
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
github.com/aws/aws-sdk-go v1.44.276 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.10.1 // indirect
github.com/go-kit/log v0.2.1 // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/gnostic v0.6.9 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/prometheus/client_golang v1.15.1 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/common/sigv4 v0.1.0 // indirect
github.com/prometheus/procfs v0.9.0 // indirect
golang.org/x/crypto v0.8.0 // indirect
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 // indirect
golang.org/x/net v0.10.0 // indirect
golang.org/x/oauth2 v0.8.0 // indirect
golang.org/x/sys v0.8.0 // indirect
golang.org/x/term v0.8.0 // indirect
golang.org/x/text v0.9.0 // indirect
golang.org/x/time v0.3.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/protobuf v1.30.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/klog/v2 v2.100.1 // indirect
k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f // indirect
k8s.io/utils v0.0.0-20230308161112-d77c459e9343 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)

View file

@ -0,0 +1,714 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go v65.0.0+incompatible h1:HzKLt3kIwMm4KeJYTdx9EbjRYTySD/t8i1Ee/W5EGXw=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.3.1 h1:gVXuXcWd1i4C2Ruxe321aU+IKGaStvGB/S90PUPB/W8=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.3.1/go.mod h1:DffdKW9RFqa5VgmsjUOsS7UE7eiA5iAvYUs63bhKQ0M=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.1 h1:T8quHYlUGyb/oqtSTwqlCr1ilJHrDv+ZtpSfo+hm1BU=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.2.1/go.mod h1:gLa1CL2RNE4s7M3yopJ/p0iq5DdY6Yv5ZUt9MTRZOQM=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 h1:sXr+ck84g/ZlZUOZiNELInmMgOsuGwdjjVkEIde0OtY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest/autorest v0.11.29 h1:I4+HL/JDvErx2LjyzaVxllw2lRDB5/BT2Bm4g20iqYw=
github.com/Azure/go-autorest/autorest/adal v0.9.23 h1:Yepx8CvFxwNKpH6ja7RZ+sKX+DWYNldbLiALMC3BTz8=
github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
github.com/Azure/go-autorest/autorest/to v0.4.0 h1:oXVqrxakqqV1UZdSazDOPOLvOIz+XA683u8EctwboHk=
github.com/Azure/go-autorest/autorest/validation v0.3.1 h1:AgyqjAd94fwNAoTjl/WQXg4VvFeRFpO+UhNyRXqF1ac=
github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/AzureAD/microsoft-authentication-library-for-go v0.8.1 h1:oPdPEZFSbl7oSPEAIPMPBMUmiL+mqgzBJwM/9qYcwNg=
github.com/AzureAD/microsoft-authentication-library-for-go v0.8.1/go.mod h1:4qFor3D/HDsvBME35Xy9rwW9DecL+M2sNw1ybjPtwA0=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/Microsoft/go-winio v0.6.0 h1:slsWYD/zyx7lCXoZVlvQrj0hPTM1HI4+v1sIda2yDvg=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJA=
github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
github.com/aws/aws-sdk-go v1.44.276 h1:ywPlx9C5Yc482dUgAZ9bHpQ6onVvJvYE9FJWsNDCEy0=
github.com/aws/aws-sdk-go v1.44.276/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20230428030218-4003588d1b74 h1:zlUubfBUxApscKFsF4VSvvfhsBNTBu0eF/ddvpo96yk=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/digitalocean/godo v1.99.0 h1:gUHO7n9bDaZFWvbzOum4bXE0/09ZuYA9yA8idQHX57E=
github.com/dnaeon/go-vcr v1.1.0 h1:ReYa/UBrRyQdant9B4fNHGoCNKw6qh6P0fsdGmZpR7c=
github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
github.com/docker/docker v24.0.2+incompatible h1:eATx+oLz9WdNVkQrr0qjQ8HvRJ4bOOxfzEo8R+dA3cg=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/emicklei/go-restful/v3 v3.10.1 h1:rc42Y5YTp7Am7CS630D7JmhRjq4UlEUuEKfrDac4bSQ=
github.com/emicklei/go-restful/v3 v3.10.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.11.0 h1:jtLewhRR2vMRNnq2ZZUoCjUlgut+Y0+sDDWPOfwOi1o=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v1.0.1 h1:kt9FtLiooDc0vbwTLhdg3dyNX1K9Qwa1EK9LcD4jVUQ=
github.com/fatih/color v1.14.1 h1:qfhVLaG5s+nCROl1zJsZRxFeYrHLqWroPOQ8BWiNb4w=
github.com/flowstack/go-jsonschema v0.1.1/go.mod h1:yL7fNggx1o8rm9RlgXv7hTBWxdBM0rVwpMwimd3F3N0=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3 h1:yMBqmnQ0gyZvEb/+KzuWZOXgllrXT4SADYbvDaXHv/g=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-resty/resty/v2 v2.7.0 h1:me+K9p3uhSmXtrBZ4k9jcEAfJmuC8IivWHwaLZwPrFY=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-zookeeper/zk v1.0.3 h1:7M2kwOsc//9VeeFiPtf+uSJlVpU66x9Ba5+8XK7/TDg=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/gnostic v0.6.9 h1:ZK/5VhkoX835RikCHpSUJV9a+S3e1zLh59YnyWeBW+0=
github.com/google/gnostic v0.6.9/go.mod h1:Nm8234We1lq6iB9OmlgNv3nH91XLLVZHCDayfA3xq+E=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/gophercloud/gophercloud v1.4.0 h1:RqEu43vaX0lb0LanZr5BylK5ICVxjpFFoc0sxivyuHU=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd h1:PpuIBO5P3e9hpqBD0O/HjhShYuM6XE0i/lbE6J94kww=
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd/go.mod h1:M5qHK+eWfAv8VR/265dIuEpL3fNfeC21tXXp9itM24A=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.21.0 h1:WMR2JiyuaQWRAMFaOGiYfY4Q4HRpyYRe/oYQofjyduM=
github.com/hashicorp/cronexpr v1.1.1 h1:NJZDd87hGXjoZBdvyCF9mX4DCq5Wy7+A/w+A7q0wn6c=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-hclog v1.4.0 h1:ctuWFGrhFha8BnnzxqeRGidlEcQkDyL5u8J8t5eA11I=
github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-retryablehttp v0.7.2 h1:AcYqCvkpalPnPF2pn0KamgwamS42TqUDDYFRKq/RAd0=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.6.0 h1:uL2shRDx7RTrOrTCUZEGP/wJUFiUI8QT6E7z5o8jga4=
github.com/hashicorp/nomad/api v0.0.0-20230605233119-67e39d5d248f h1:yxjcAZRuYymIDC0W4IQHgTe9EQdu2BsjPlVmKwyVZT4=
github.com/hashicorp/serf v0.10.1 h1:Z1H2J60yRKvfDYAOZLd2MU0ND4AH/WDz7xYHDWQsIPY=
github.com/hetznercloud/hcloud-go v1.45.1 h1:nl0OOklFfQT5J6AaNIOhl5Ruh3fhmGmhvZEqHbibVuk=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
github.com/ionos-cloud/sdk-go/v6 v6.1.7 h1:uVG1Q/ZDJ7YmCI9Oevpue9xJEH5UrUMyXv8gm7NTxIw=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b h1:udzkj9S/zlT5X367kqJis0QP7YMxobob6zhzq6Yre00=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/linode/linodego v1.17.0 h1:aWS98f0jUoY2lhsEuBxRdVkqyGM0nazPd68AEDF0EvU=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/dns v1.1.54 h1:5jon9mWcb0sFJGpnI99tOMhCPyJ+RPVz5b63MQG0VWI=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/onsi/ginkgo/v2 v2.4.0 h1:+Ig9nvqgS5OBSACXNk15PLdp0U9XPYROt9CFzVdFGIs=
github.com/onsi/gomega v1.23.0 h1:/oxKu9c2HVap+F3PfKort2Hw5DEU+HGlW8n+tguWsys=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/ovh/go-ovh v1.4.1 h1:VBGa5wMyQtTP7Zb+w97zRCh9sLtM/2YKRyy+MEJmWaM=
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 h1:KoWmjvw+nsYOo29YJK9vDA65RGE3NrOnUtO7a+RF9HU=
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.15.1 h1:8tXpTmJbyH5lydzFPoxSIJ0J46jdh3tylbvM1xCv0LI=
github.com/prometheus/client_golang v1.15.1/go.mod h1:e9yaBhRPU2pPNsZwE+JdQl0KEt1N9XgF6zxWmaC0xOk=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.29.0/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4=
github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57JrvHu9k5YwTjsNtI=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY=
github.com/prometheus/prometheus v0.45.0 h1:O/uG+Nw4kNxx/jDPxmjsSDd+9Ohql6E7ZSY1x5x/0KI=
github.com/prometheus/prometheus v0.45.0/go.mod h1:jC5hyO8ItJBnDWGecbEucMyXjzxGv9cxsxsjS9u5s1w=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.17 h1:1WuWJu7/e8SqK+uQl7lfk/N/oMZTL2NE/TJsNKRNMc4=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/vultr/govultr/v2 v2.17.2 h1:gej/rwr91Puc/tgh+j33p/BLR16UrIPnSr+AIwYWZQs=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1 h1:k/i9J1pBpvlfR+9QsetwPyERsqu1GIbi967PQMq3Ivc=
golang.org/x/exp v0.0.0-20230522175609-2e198f4a06a1/go.mod h1:V1LtkGg67GoY2N1AnLN78QLrzxkLyJw7RJb1gzOOz9w=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.8.0 h1:6dkIjl3j3LtZ/O3sTgZTMsLKSftL/B8Zgq4huOIIUu8=
golang.org/x/oauth2 v0.8.0/go.mod h1:yr7u4HXZRm1R1kBWqr/xKNqewf0plRYoB7sla+BCIXE=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0 h1:EBmGv8NaZBZTWvrbjNoL6HVt+IVy3QDQpJs7VRIw3tU=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.8.0 h1:n5xxQn2i3PC0yLAbjTpNT85q/Kgzcr2gIoX9OrJUols=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.9.3 h1:Gn1I8+64MsuTb/HpH+LmQtNas23LhUVr3rYZ0eKuaMM=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20230320184635-7606e756e683 h1:khxVcsk/FhnzxMKOyD+TDGwjbEOpcPuIpmafPGFmhMA=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.55.0 h1:3Oj82/tFSCeUrRTg/5E/7d/W5A1tj6Ky1ABAuZuv5ag=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.26.2 h1:dM3cinp3PGB6asOySalOZxEG4CZ0IAdJsrYZXE/ovGQ=
k8s.io/api v0.26.2/go.mod h1:1kjMQsFE+QHPfskEcVNgL3+Hp88B80uj0QtSOlj8itU=
k8s.io/apimachinery v0.26.2 h1:da1u3D5wfR5u2RpLhE/ZtZS2P7QvDgLZTi9wrNZl/tQ=
k8s.io/apimachinery v0.26.2/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
k8s.io/client-go v0.26.2 h1:s1WkVujHX3kTp4Zn4yGNFK+dlDXy1bAAkIl+cFAiuYI=
k8s.io/client-go v0.26.2/go.mod h1:u5EjOuSyBa09yqqyY7m3abZeovO/7D/WehVVlZ2qcqU=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f h1:2kWPakN3i/k81b0gvD5C5FJ2kxm1WrQFanWchyKuqGg=
k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f/go.mod h1:byini6yhqGC14c3ebc/QwanvYwhuMWF6yz2F8uwW8eg=
k8s.io/utils v0.0.0-20230308161112-d77c459e9343 h1:m7tbIjXGcGIAtpmQr7/NAi7RsWoW3E7Zcm4jI1HicTc=
k8s.io/utils v0.0.0-20230308161112-d77c459e9343/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=


@@ -0,0 +1,95 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"os"

	yaml "gopkg.in/yaml.v2"
)

// Config is the target allocator configuration written out to targetallocator.yaml.
type Config struct {
	LabelSelector      map[string]string      `yaml:"label_selector,omitempty"`
	Config             map[string]interface{} `yaml:"config"`
	AllocationStrategy string                 `yaml:"allocation_strategy,omitempty"`
}

// OtelConfig models the fields of the merged otelcollector configuration that
// the config reader needs.
type OtelConfig struct {
	Exporters  interface{} `yaml:"exporters"`
	Processors interface{} `yaml:"processors"`
	Extensions interface{} `yaml:"extensions"`
	Receivers  struct {
		Prometheus struct {
			Config          map[string]interface{} `yaml:"config"`
			TargetAllocator interface{}            `yaml:"target_allocator"`
		} `yaml:"prometheus"`
	} `yaml:"receivers"`
	Service struct {
		Extensions interface{} `yaml:"extensions"`
		Pipelines  struct {
			Metrics struct {
				Exporters  interface{} `yaml:"exporters"`
				Processors interface{} `yaml:"processors"`
				Receivers  interface{} `yaml:"receivers"`
			} `yaml:"metrics"`
		} `yaml:"pipelines"`
		Telemetry struct {
			Logs struct {
				Level    interface{} `yaml:"level"`
				Encoding interface{} `yaml:"encoding"`
			} `yaml:"logs"`
		} `yaml:"telemetry"`
	} `yaml:"service"`
}

var (
	RESET            = "\033[0m"
	RED              = "\033[31m"
	taConfigFilePath = "/ta-configuration/targetallocator.yaml"
)

// logFatalError logs the full message in red and exits; log.Fatalf already
// calls os.Exit(1), so callers do not need a separate exit.
func logFatalError(message string) {
	log.Fatalf("%s%s%s", RED, message, RESET)
}

// updateTAConfigFile lifts the prometheus scrape config out of the merged
// otelcollector configuration and writes it in the format the TargetAllocator expects.
func updateTAConfigFile(configFilePath string) {
	defaultsMergedConfigFileContents, err := os.ReadFile(configFilePath)
	if err != nil {
		logFatalError(fmt.Sprintf("config-reader::Unable to read file contents from: %s - %v\n", configFilePath, err))
	}

	var otelConfig OtelConfig
	if err := yaml.Unmarshal(defaultsMergedConfigFileContents, &otelConfig); err != nil {
		logFatalError(fmt.Sprintf("config-reader::Unable to unmarshal merged otel configuration from: %s - %v\n", configFilePath, err))
	}

	targetAllocatorConfig := Config{
		AllocationStrategy: "consistent-hashing",
		LabelSelector: map[string]string{
			"rsName":                         "ama-metrics",
			"kubernetes.azure.com/managedby": "aks",
		},
		Config: otelConfig.Receivers.Prometheus.Config,
	}

	targetAllocatorConfigYaml, err := yaml.Marshal(targetAllocatorConfig)
	if err != nil {
		logFatalError(fmt.Sprintf("config-reader::Unable to marshal the target allocator configuration - %v\n", err))
	}
	if err := os.WriteFile(taConfigFilePath, targetAllocatorConfigYaml, 0644); err != nil {
		logFatalError(fmt.Sprintf("config-reader::Unable to write to: %s - %v\n", taConfigFilePath, err))
	}

	log.Println("Updated file - targetallocator.yaml for the TargetAllocator to pick up new config changes")
}

func main() {
	configFilePtr := flag.String("config", "", "Config file to read")
	flag.Parse()
	updateTAConfigFile(*configFilePtr)
}
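
For reference, a sketch of the targetallocator.yaml this sidecar emits (field names and order follow the Config struct above; map keys are sorted by yaml.v2, and the scrape config under config: is abbreviated since its exact contents come from the merged otelcollector configuration):

label_selector:
  kubernetes.azure.com/managedby: aks
  rsName: ama-metrics
config:
  scrape_configs:
    # ...scrape jobs copied from receivers.prometheus.config...
allocation_strategy: consistent-hashing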


@@ -0,0 +1,12 @@
apiVersion: azmonitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: <pod monitor name>
spec:
  # The following limits - labelLimit, labelNameLengthLimit, and labelValueLengthLimit - should exist in the pod monitor CR
  # They ensure that metrics are not dropped because labels, label names, or label values exceed the limits supported by the processing pipeline
labelLimit: 63
labelNameLengthLimit: 511
labelValueLengthLimit: 1023
# rest of the pod monitor


@@ -0,0 +1,12 @@
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: <service monitor name>
spec:
  # The following limits - labelLimit, labelNameLengthLimit, and labelValueLengthLimit - should exist in the service monitor CR
  # They ensure that metrics are not dropped because labels, label names, or label values exceed the limits supported by the processing pipeline
labelLimit: 63
labelNameLengthLimit: 511
labelValueLengthLimit: 1023
# rest of the service monitor


@@ -31,3 +31,20 @@ rules:
- apiGroups: ["clusterconfig.azure.com"]
resources: ["azureclusteridentityrequests", "azureclusteridentityrequests/status"]
verbs: ["get", "update", "list", "create"]
{{- if and (or (ne .Values.AzureMonitorMetrics.ArcExtension true) (eq .Values.AzureMonitorMetrics.ArcEnableOperator true)) (eq .Values.AzureMonitorMetrics.TargetAllocatorEnabled true) }}
- apiGroups:
- azmonitoring.coreos.com
resources:
- servicemonitors
- podmonitors
verbs:
- '*'
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
{{- end }}


@@ -35,20 +35,20 @@ spec:
requests:
cpu: 50m
memory: 150Mi
{{- if and (.Values.AzureMonitorMetrics.ArcExtension) (.Values.Azure.proxySettings.isProxyEnabled) }}
{{- if and (eq .Values.AzureMonitorMetrics.ArcExtension true) (.Values.Azure.proxySettings.isProxyEnabled) }}
envFrom:
- secretRef:
name: ama-metrics-proxy-config
{{- end }}
env:
- name: CLUSTER
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
{{- if eq .Values.AzureMonitorMetrics.ArcExtension true }}
value: "{{ .Values.Azure.Cluster.ResourceId }}"
{{- else }}
value: "{{ .Values.global.commonGlobals.Customer.AzureResourceID }}"
{{- end }}
- name: AKSREGION
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
{{- if eq .Values.AzureMonitorMetrics.ArcExtension true }}
value: "{{ .Values.Azure.Cluster.Region }}"
{{- else }}
value: "{{ .Values.global.commonGlobals.Region }}"
@@ -58,7 +58,7 @@ spec:
- name: AZMON_COLLECT_ENV
value: "false"
- name: customEnvironment
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
{{- if eq .Values.AzureMonitorMetrics.ArcExtension true }}
value: "{{ lower .Values.Azure.Cluster.Cloud }}"
{{- else if .Values.AzureMonitorMetrics.isArcACluster }}
value: "arcautonomous"
@@ -67,7 +67,7 @@ spec:
{{- end }}
- name: OMS_TLD
value: "opinsights.azure.com"
{{- if .Values.AzureMonitorMetrics.isArcACluster }}
{{- if eq .Values.AzureMonitorMetrics.isArcACluster true }}
- name: customRegionalEndpoint
value: {{ required "customRegionalEndpoint is required in Arc Autonomous" .Values.AzureMonitorMetrics.arcAutonomousSettings.customRegionalEndpoint | toString | trim | quote }}
- name: customGlobalEndpoint
@@ -110,7 +110,7 @@ spec:
- name: NODE_EXPORTER_NAME
value: "" # Replace this with the node exporter shipped out of box with AKS
- name: NODE_EXPORTER_TARGETPORT
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
{{- if eq .Values.AzureMonitorMetrics.ArcExtension true }}
value: "{{ index .Values "prometheus-node-exporter" "service" "targetPort" }}"
{{- else }}
value: "19100"
@@ -154,7 +154,7 @@ spec:
name: anchors-ubuntu
readOnly: true
{{- end }}
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
{{- if eq .Values.AzureMonitorMetrics.ArcExtension true }}
- mountPath: /anchors/proxy
name: ama-metrics-proxy-cert
readOnly: true
@@ -169,7 +169,7 @@ spec:
periodSeconds: 15
timeoutSeconds: 5
failureThreshold: 3
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
{{- if eq .Values.AzureMonitorMetrics.ArcExtension true }}
- name: arc-msi-adapter
imagePullPolicy: IfNotPresent
env:


@@ -5,8 +5,13 @@ metadata:
namespace: kube-system
labels:
component: ama-metrics
kubernetes.azure.com/managedby: aks
spec:
{{- if .Values.AzureMonitorMetrics.TargetAllocatorEnabled }}
replicas: {{ .Values.AzureMonitorMetrics.DeploymentReplicas }}
  {{- else }}
replicas: 1
{{- end }}
revisionHistoryLimit: 2
paused: false
selector:
@@ -60,6 +65,12 @@ spec:
value: "true"
- name: AZMON_COLLECT_ENV
value: "false"
- name: AZMON_OPERATOR_ENABLED
{{- if and (or (ne .Values.AzureMonitorMetrics.ArcExtension true) (eq .Values.AzureMonitorMetrics.ArcEnableOperator true)) (eq .Values.AzureMonitorMetrics.TargetAllocatorEnabled true) }}
value: "true"
{{- else }}
value: "false"
{{- end }}
- name: customEnvironment
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
value: "{{ lower .Values.Azure.Cluster.Cloud }}"


@@ -0,0 +1,423 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.11.1
creationTimestamp: null
name: podmonitors.azmonitoring.coreos.com
spec:
group: azmonitoring.coreos.com
names:
categories:
- prometheus-operator
kind: PodMonitor
listKind: PodMonitorList
plural: podmonitors
shortNames:
- pmon
singular: podmonitor
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
attachMetadata:
properties:
node:
type: boolean
type: object
jobLabel:
type: string
labelLimit:
format: int64
type: integer
labelNameLengthLimit:
format: int64
type: integer
labelValueLengthLimit:
format: int64
type: integer
namespaceSelector:
properties:
any:
type: boolean
matchNames:
items:
type: string
type: array
type: object
podMetricsEndpoints:
items:
properties:
authorization:
properties:
credentials:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type:
type: string
type: object
basicAuth:
properties:
password:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
username:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
bearerTokenSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
enableHttp2:
type: boolean
filterRunning:
type: boolean
followRedirects:
type: boolean
honorLabels:
type: boolean
honorTimestamps:
type: boolean
interval:
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
metricRelabelings:
items:
properties:
action:
default: replace
enum:
- replace
- Replace
- keep
- Keep
- drop
- Drop
- hashmod
- HashMod
- labelmap
- LabelMap
- labeldrop
- LabelDrop
- labelkeep
- LabelKeep
- lowercase
- Lowercase
- uppercase
- Uppercase
- keepequal
- KeepEqual
- dropequal
- DropEqual
type: string
modulus:
format: int64
type: integer
regex:
type: string
replacement:
type: string
separator:
type: string
sourceLabels:
items:
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
type: string
type: array
targetLabel:
type: string
type: object
type: array
oauth2:
properties:
clientId:
properties:
configMap:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
clientSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
endpointParams:
additionalProperties:
type: string
type: object
scopes:
items:
type: string
type: array
tokenUrl:
minLength: 1
type: string
required:
- clientId
- clientSecret
- tokenUrl
type: object
params:
additionalProperties:
items:
type: string
type: array
type: object
path:
type: string
port:
type: string
proxyUrl:
type: string
relabelings:
items:
properties:
action:
default: replace
enum:
- replace
- Replace
- keep
- Keep
- drop
- Drop
- hashmod
- HashMod
- labelmap
- LabelMap
- labeldrop
- LabelDrop
- labelkeep
- LabelKeep
- lowercase
- Lowercase
- uppercase
- Uppercase
- keepequal
- KeepEqual
- dropequal
- DropEqual
type: string
modulus:
format: int64
type: integer
regex:
type: string
replacement:
type: string
separator:
type: string
sourceLabels:
items:
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
type: string
type: array
targetLabel:
type: string
type: object
type: array
scheme:
enum:
- http
- https
type: string
scrapeTimeout:
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
targetPort:
anyOf:
- type: integer
- type: string
x-kubernetes-int-or-string: true
tlsConfig:
properties:
ca:
properties:
configMap:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
cert:
properties:
configMap:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
insecureSkipVerify:
type: boolean
keySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
serverName:
type: string
type: object
type: object
type: array
podTargetLabels:
items:
type: string
type: array
sampleLimit:
format: int64
type: integer
selector:
properties:
matchExpressions:
items:
properties:
key:
type: string
operator:
type: string
values:
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
type: object
type: object
x-kubernetes-map-type: atomic
targetLimit:
format: int64
type: integer
required:
- podMetricsEndpoints
- selector
type: object
required:
- spec
type: object
served: true
storage: true


@@ -0,0 +1,435 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.11.1
creationTimestamp: null
name: servicemonitors.azmonitoring.coreos.com
spec:
group: azmonitoring.coreos.com
names:
categories:
- prometheus-operator
kind: ServiceMonitor
listKind: ServiceMonitorList
plural: servicemonitors
shortNames:
- smon
singular: servicemonitor
scope: Namespaced
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
attachMetadata:
properties:
node:
type: boolean
type: object
endpoints:
items:
properties:
authorization:
properties:
credentials:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type:
type: string
type: object
basicAuth:
properties:
password:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
username:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
bearerTokenFile:
type: string
bearerTokenSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
enableHttp2:
type: boolean
filterRunning:
type: boolean
followRedirects:
type: boolean
honorLabels:
type: boolean
honorTimestamps:
type: boolean
interval:
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
metricRelabelings:
items:
properties:
action:
default: replace
enum:
- replace
- Replace
- keep
- Keep
- drop
- Drop
- hashmod
- HashMod
- labelmap
- LabelMap
- labeldrop
- LabelDrop
- labelkeep
- LabelKeep
- lowercase
- Lowercase
- uppercase
- Uppercase
- keepequal
- KeepEqual
- dropequal
- DropEqual
type: string
modulus:
format: int64
type: integer
regex:
type: string
replacement:
type: string
separator:
type: string
sourceLabels:
items:
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
type: string
type: array
targetLabel:
type: string
type: object
type: array
oauth2:
properties:
clientId:
properties:
configMap:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
clientSecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
endpointParams:
additionalProperties:
type: string
type: object
scopes:
items:
type: string
type: array
tokenUrl:
minLength: 1
type: string
required:
- clientId
- clientSecret
- tokenUrl
type: object
params:
additionalProperties:
items:
type: string
type: array
type: object
path:
type: string
port:
type: string
proxyUrl:
type: string
relabelings:
items:
properties:
action:
default: replace
enum:
- replace
- Replace
- keep
- Keep
- drop
- Drop
- hashmod
- HashMod
- labelmap
- LabelMap
- labeldrop
- LabelDrop
- labelkeep
- LabelKeep
- lowercase
- Lowercase
- uppercase
- Uppercase
- keepequal
- KeepEqual
- dropequal
- DropEqual
type: string
modulus:
format: int64
type: integer
regex:
type: string
replacement:
type: string
separator:
type: string
sourceLabels:
items:
pattern: ^[a-zA-Z_][a-zA-Z0-9_]*$
type: string
type: array
targetLabel:
type: string
type: object
type: array
scheme:
enum:
- http
- https
type: string
scrapeTimeout:
pattern: ^(0|(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?)$
type: string
targetPort:
anyOf:
- type: integer
- type: string
x-kubernetes-int-or-string: true
tlsConfig:
properties:
ca:
properties:
configMap:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
caFile:
type: string
cert:
properties:
configMap:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
secret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
certFile:
type: string
insecureSkipVerify:
type: boolean
keyFile:
type: string
keySecret:
properties:
key:
type: string
name:
type: string
optional:
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
serverName:
type: string
type: object
type: object
type: array
jobLabel:
type: string
labelLimit:
format: int64
type: integer
labelNameLengthLimit:
format: int64
type: integer
labelValueLengthLimit:
format: int64
type: integer
namespaceSelector:
properties:
any:
type: boolean
matchNames:
items:
type: string
type: array
type: object
podTargetLabels:
items:
type: string
type: array
sampleLimit:
format: int64
type: integer
selector:
properties:
matchExpressions:
items:
properties:
key:
type: string
operator:
type: string
values:
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
type: object
type: object
x-kubernetes-map-type: atomic
targetLabels:
items:
type: string
type: array
targetLimit:
format: int64
type: integer
required:
- endpoints
- selector
type: object
required:
- spec
type: object
served: true
storage: true


@@ -0,0 +1,26 @@
{{- if and (or (ne .Values.AzureMonitorMetrics.ArcExtension true) (eq .Values.AzureMonitorMetrics.ArcEnableOperator true)) (eq .Values.AzureMonitorMetrics.TargetAllocatorEnabled true) }}
apiVersion: v1
kind: Service
metadata:
labels:
component: ama-metrics-operator-targets
kubernetes.azure.com/managedby: aks
name: ama-metrics-operator-targets
namespace: kube-system
spec:
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: targetallocation
port: 80
protocol: TCP
targetPort: 8080
selector:
rsName: ama-metrics-operator-targets
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
{{- end }}


@@ -0,0 +1,227 @@
{{- if and (or (ne .Values.AzureMonitorMetrics.ArcExtension true) (eq .Values.AzureMonitorMetrics.ArcEnableOperator true)) (eq .Values.AzureMonitorMetrics.TargetAllocatorEnabled true) }}
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
component: ama-metrics-operator-targets
kubernetes.azure.com/managedby: aks
name: ama-metrics-operator-targets
namespace: kube-system
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
rsName: ama-metrics-operator-targets
kubernetes.azure.com/managedby: aks
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
rsName: ama-metrics-operator-targets
kubernetes.azure.com/managedby: aks
spec:
containers:
- name: targetallocator
args:
- --enable-prometheus-cr-watcher
image: "mcr.microsoft.com{{ .Values.AzureMonitorMetrics.ImageRepository }}:{{ .Values.AzureMonitorMetrics.ImageTagTargetAllocator }}"
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 5
memory: 8Gi
requests:
cpu: 10m
memory: 50Mi
env:
- name: OTELCOL_NAMESPACE
value: "kube-system"
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CLUSTER
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
value: "{{ .Values.Azure.Cluster.ResourceId }}"
{{- else }}
value: "{{ .Values.global.commonGlobals.Customer.AzureResourceID }}"
{{- end }}
- name: PROMETHEUS_OPERATOR_V1_CUSTOM_GROUP
value: "azmonitoring.coreos.com"
- name: AGENT_VERSION
value: {{ .Values.AzureMonitorMetrics.ImageTagTargetAllocator }}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /conf
name: ta-config-shared
livenessProbe:
httpGet:
path: /metrics
port: 8080
initialDelaySeconds: 60
periodSeconds: 3
- name: config-reader
image: "mcr.microsoft.com{{ .Values.AzureMonitorMetrics.ImageRepository }}:{{ .Values.AzureMonitorMetrics.ImageTagCfgReader }}"
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 1m
memory: 10Mi
env:
- name: CLUSTER
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
value: "{{ .Values.Azure.Cluster.ResourceId }}"
{{- else }}
value: "{{ .Values.global.commonGlobals.Customer.AzureResourceID }}"
{{- end }}
- name: AKSREGION
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
value: "{{ .Values.Azure.Cluster.Region }}"
{{- else }}
value: "{{ .Values.global.commonGlobals.Region }}"
{{- end }}
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
value: "kube-system"
- name: CONTAINER_TYPE
value: "ConfigReaderSidecar"
- name: MODE
value: "advanced" # only supported mode is 'advanced', any other value will be the default/non-advance mode
- name: MAC
value: "true"
- name: AZMON_COLLECT_ENV
value: "false"
- name: KUBE_STATE_NAME
value: ama-metrics-ksm
- name: NODE_EXPORTER_NAME
value: "" # Replace this with the node exporter shipped out of box with AKS
- name: NODE_EXPORTER_TARGETPORT
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
value: "{{ index .Values "prometheus-node-exporter" "service" "targetPort" }}"
{{- else }}
value: "19100"
{{- end }}
- name: customEnvironment
{{- if .Values.AzureMonitorMetrics.ArcExtension }}
value: "{{ lower .Values.Azure.Cluster.Cloud }}"
{{- else if .Values.AzureMonitorMetrics.isArcACluster }}
value: "arcautonomous"
{{- else }}
value: "{{ lower .Values.global.commonGlobals.CloudEnvironment }}"
{{- end }}
- name: WINMODE
value: "" # WINDOWS: only supported mode is 'advanced', any other value will be the default/non-advance mode
- name: MINIMAL_INGESTION_PROFILE
value: "true" # only supported value is the string "true"
- name: AGENT_VERSION
value: {{ .Values.AzureMonitorMetrics.ImageTagCfgReader }}
volumeMounts:
- mountPath: /etc/config/settings
name: settings-vol-config
readOnly: true
- mountPath: /etc/config/settings/prometheus
name: prometheus-config-vol
readOnly: true
- mountPath: /ta-configuration
name: ta-config-shared
livenessProbe:
exec:
command:
- /bin/bash
- -c
- /opt/microsoft/liveness/livenessprobe-configreader.sh
initialDelaySeconds: 60
periodSeconds: 15
timeoutSeconds: 5
failureThreshold: 3
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: ama-metrics-serviceaccount
serviceAccountName: ama-metrics-serviceaccount
terminationGracePeriodSeconds: 30
affinity:
nodeAffinity:
        # affinity to prefer scheduling onto an ephemeral OS node if one is available
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: kubernetes.azure.com/mode
operator: In
values:
- system
- weight: 50
preference:
matchExpressions:
- key: azuremonitor/metrics.replica.preferred
operator: In
values:
- "true"
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: type
operator: NotIn
values:
- virtual-kubelet
{{- if not .Values.AzureMonitorMetrics.ArcExtension }}
- key: kubernetes.azure.com/cluster
operator: Exists
{{- end }}
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- operator: "Exists"
effect: NoExecute
- operator: "Exists"
effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
volumes:
- name: settings-vol-config
configMap:
name: ama-metrics-settings-configmap
optional: true
- name: prometheus-config-vol
configMap:
name: ama-metrics-prometheus-config
optional: true
- name: ta-config-shared
emptyDir: {}
{{- end }}


@@ -38,6 +38,10 @@ AzureMonitorMetrics:
ImageRepository: ${MCR_REPOSITORY}
ImageTag: ${IMAGE_TAG}
ImageTagWin: ${IMAGE_TAG}-win
  ImageTagTargetAllocator: ${IMAGE_TAG}-targetallocator
  ImageTagCfgReader: ${IMAGE_TAG}-cfg
TargetAllocatorEnabled: false
DeploymentReplicas: 1
  # The below 2 settings are not part of the Azure Monitor Metrics adapter chart. They are substituted in a different manner.
# Please update these with the latest ones from here so that you get the image that is currently deployed by the AKS RP -
# Repository: https://msazure.visualstudio.com/CloudNativeCompute/_git/aks-rp?path=/ccp/charts/addon-charts/azure-monitor-metrics-addon/templates/ama-metrics-daemonset.yaml&version=GBrashmi/prom-addon-arm64&line=136&lineEnd=136&lineStartColumn=56&lineEndColumn=85&lineStyle=plain&_a=contents
@@ -48,6 +52,7 @@ AzureMonitorMetrics:
ImageRepositoryWin: "/aks/hcp/addon-token-adapter"
ImageTagWin: "20230120winbeta"
ArcExtension: ${ARC_EXTENSION}
ArcEnableOperator: false
# Do not change the below settings. They are reserved for Arc Autonomous
isArcACluster: false
arcAutonomousSettings:
@@ -57,9 +62,9 @@ AzureMonitorMetrics:
global:
commonGlobals:
CloudEnvironment: "azurepubliccloud"
Region: "${AKS_REGION}"
Region: "westus2"
Customer:
AzureResourceID: ${AKS_RESOURCE_ID}
AzureResourceID: "/subscriptions/0e4773a2-8221-441a-a06f-17db16ab16d4/resourcegroups/rashmi-operator-cfg/providers/Microsoft.ContainerService/managedClusters/rashmi-operator-cfg"
# For ARC backdoor testing
Azure:


@@ -0,0 +1,23 @@
apiVersion: azmonitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: prometheus-reference-app-job
spec:
labelLimit: 63
labelNameLengthLimit: 511
labelValueLengthLimit: 1023
selector:
matchLabels:
app: prometheus-reference-app
podMetricsEndpoints:
- relabelings:
- sourceLabels: [__meta_kubernetes_pod_label_app]
action: keep
regex: "prometheus-reference-app"
- sourceLabels: [__meta_kubernetes_pod_node_name]
action: replace
regex: ('$$NODE_NAME$$')
targetLabel: instance


@@ -0,0 +1,22 @@
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: prometheus-reference-app-monitor
spec:
selector:
matchLabels:
app: prometheus-reference-app
endpoints:
- port: weather-app
interval: 30s
path: /metrics
scheme: http
- port: untyped-metrics
interval: 30s
path: /metrics
scheme: http
- port: python-client
interval: 30s
path: /metrics
scheme: http


@@ -38,6 +38,32 @@
Skip_Long_Lines On
Ignore_Older 2m
# targetallocator targetallocator container logs
[INPUT]
Name tail
Tag prometheus.log.targetallocator.tacontainer
Path /var/log/containers/ama-metrics-*operator-targets*kube-system*targetallocator*.log
DB /var/opt/microsoft/state/prometheus-collector-ai.db
DB.Sync Off
Parser cri
Read_from_Head true
Mem_Buf_Limit 1m
Path_Key filepath
Skip_Long_Lines On
# targetallocator config-reader container logs
[INPUT]
Name tail
Tag prometheus.log.targetallocator.configreader
Path /var/log/containers/ama-metrics-*operator-targets*kube-system*config-reader*.log
DB /var/opt/microsoft/state/prometheus-collector-ai.db
DB.Sync Off
Parser cri
Read_from_Head true
Mem_Buf_Limit 1m
Path_Key filepath
Skip_Long_Lines On
# addon-token-adapter container logs
[INPUT]
Name tail


@@ -44,7 +44,7 @@ func FLBPluginInit(ctx unsafe.Pointer) int {
}
if strings.ToLower(os.Getenv(envControllerType)) == "daemonset" && strings.ToLower(os.Getenv("OS_TYPE")) == "linux" {
go SendKsmCpuMemoryToAppInsightsMetrics()
go SendContainersCpuMemoryToAppInsightsMetrics()
}
go PushMEProcessedAndReceivedCountToAppInsightsMetrics()


@@ -406,20 +406,21 @@ func SendCoreCountToAppInsightsMetrics() {
// Struct for getting relevant fields from JSON object obtained from cadvisor endpoint
type CadvisorJson struct {
Pods []struct {
Containers []struct {
Name string `json:"name"`
Cpu struct {
UsageNanoCores float64 `json:"usageNanoCores"`
} `json:"cpu"`
Memory struct {
RssBytes float64 `json:"rssBytes"`
} `json:"memory"`
} `json:"containers"`
Containers []Container `json:"containers"`
} `json:"pods"`
}
type Container struct {
Name string `json:"name"`
Cpu struct {
UsageNanoCores float64 `json:"usageNanoCores"`
} `json:"cpu"`
Memory struct {
RssBytes float64 `json:"rssBytes"`
} `json:"memory"`
}
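// To make the decoded shape concrete, a minimal illustrative sketch (the
// sample payload values are made up; assumes encoding/json and the
// CadvisorJson type above):
//
//	sample := []byte(`{"pods":[{"containers":[{"name":"ama-metrics-ksm","cpu":{"usageNanoCores":120000},"memory":{"rssBytes":52428800}}]}]}`)
//	var parsed CadvisorJson
//	if err := json.Unmarshal(sample, &parsed); err == nil {
//		fmt.Println(parsed.Pods[0].Containers[0].Name) // "ama-metrics-ksm"
//	}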
// Send Cpu and Memory Usage for Kube state metrics to Application Insights periodically
func SendKsmCpuMemoryToAppInsightsMetrics() {
// Send Cpu and Memory Usage for our containers to Application Insights periodically
func SendContainersCpuMemoryToAppInsightsMetrics() {
var p CadvisorJson
err := json.Unmarshal(retrieveKsmData(), &p)
@@ -431,31 +432,41 @@ func SendKsmCpuMemoryToAppInsightsMetrics() {
ksmTelemetryTicker := time.NewTicker(time.Second * time.Duration(ksmAttachedTelemetryIntervalSeconds))
for ; true; <-ksmTelemetryTicker.C {
cpuKsmUsageNanoCoresLinux := float64(0)
memoryKsmRssBytesLinux := float64(0)
for podId := 0; podId < len(p.Pods); podId++ {
for containerId := 0; containerId < len(p.Pods[podId].Containers); containerId++ {
if strings.TrimSpace(p.Pods[podId].Containers[containerId].Name) == "" {
container := p.Pods[podId].Containers[containerId]
containerName := strings.TrimSpace(container.Name)
switch containerName {
case "":
message := fmt.Sprintf("Container name is missing")
Log(message)
continue
}
if strings.TrimSpace(p.Pods[podId].Containers[containerId].Name) == "ama-metrics-ksm" {
cpuKsmUsageNanoCoresLinux += p.Pods[podId].Containers[containerId].Cpu.UsageNanoCores
memoryKsmRssBytesLinux += p.Pods[podId].Containers[containerId].Memory.RssBytes
case "ama-metrics-ksm":
GetAndSendContainerCPUandMemoryFromCadvisorJSON(container, ksmCpuMemoryTelemetryName, "MemKsmRssBytes")
case "targetallocator":
GetAndSendContainerCPUandMemoryFromCadvisorJSON(container, "taCPUUsage", "taMemRssBytes")
case "config-reader":
GetAndSendContainerCPUandMemoryFromCadvisorJSON(container, "cnfgRdrCPUUsage", "cnfgRdrMemRssBytes")
}
}
}
// Send metric to app insights for Cpu and Memory Usage for Kube state metrics
metricTelemetryItem := appinsights.NewMetricTelemetry(ksmCpuMemoryTelemetryName, cpuKsmUsageNanoCoresLinux)
// Abbreviated properties to save telemetry cost
metricTelemetryItem.Properties["MemKsmRssBytesLinux"] = fmt.Sprintf("%d", memoryKsmRssBytesLinux)
TelemetryClient.Track(metricTelemetryItem)
}
}
func GetAndSendContainerCPUandMemoryFromCadvisorJSON(container Container, cpuMetricName string, memMetricName string) {
cpuUsageNanoCoresLinux := container.Cpu.UsageNanoCores
memoryRssBytesLinux := container.Memory.RssBytes
// Send metrics to App Insights for the given container's CPU and memory usage
metricTelemetryItem := appinsights.NewMetricTelemetry(cpuMetricName, cpuUsageNanoCoresLinux)
// Abbreviated properties to save telemetry cost
metricTelemetryItem.Properties[memMetricName] = fmt.Sprintf("%d", int(memoryRssBytesLinux))
TelemetryClient.Track(metricTelemetryItem)
Log(fmt.Sprintf("Sent container CPU and Mem data for %s", cpuMetricName))
}
// Retrieve the JSON payload of Kube state metrics from Cadvisor endpoint
@@ -521,12 +532,13 @@ func PushLogErrorsToAppInsightsTraces(records []map[interface{}]interface{}, sev
for _, record := range records {
var logEntry = ""
// Logs have different parsed formats depending on if they're from otelcollector or metricsextension
// Logs have different parsed formats depending on whether they're from otelcollector or container logs
if tag == fluentbitOtelCollectorLogsTag {
logEntry = fmt.Sprintf("%s %s", ToString(record["caller"]), ToString(record["msg"]))
} else if tag == fluentbitContainerLogsTag {
} else {
logEntry = ToString(record["log"])
}
logLines = append(logLines, logEntry)
}
@ -773,9 +785,10 @@ func UpdateMEReceivedMetricsCount(records []map[interface{}]interface{}) int {
// Add to the total that PublishTimeseriesVolume() uses
if strings.ToLower(os.Getenv(envPrometheusCollectorHealth)) == "true" {
TimeseriesVolumeMutex.Lock()
TimeseriesReceivedTotal += metricsReceivedCount
TimeseriesVolumeMutex.Unlock()
}
}

View file

@ -1,4 +1,4 @@
all: otelcollector fluentbitplugin promconfigvalidator
all: otelcollector fluentbitplugin promconfigvalidator targetallocator
.PHONY: otelcollector
otelcollector:
@ -10,4 +10,10 @@ fluentbitplugin:
make -C ../fluent-bit/src
promconfigvalidator:
make -C ../prom-config-validator-builder
targetallocator:
make -C ../otel-allocator
configurationreader:
make -C ../configuration-reader-builder

View file

@ -1 +1 @@
2.43.0
2.47.0

View file

@ -0,0 +1,47 @@
exporters:
prometheus:
endpoint: "127.0.0.1:9091"
const_labels:
cluster: $AZMON_CLUSTER_LABEL
otlp:
endpoint: 127.0.0.1:55680
tls:
insecure: true
compression: "gzip"
retry_on_failure:
enabled: false
timeout: 12s
processors:
batch:
send_batch_size: 7000
timeout: 200ms
send_batch_max_size: 7000
resource:
attributes:
- key: cluster
value: "$AZMON_CLUSTER_LABEL"
action: "upsert"
- key: job
from_attribute: service.name
action: insert
- key: instance
from_attribute: service.instance.id
action: insert
receivers:
prometheus:
target_allocator:
endpoint: http://ama-metrics-operator-targets.kube-system.svc.cluster.local
interval: 30s
collector_id: "$POD_NAME"
service:
pipelines:
metrics:
receivers: [prometheus]
exporters: [otlp]
processors: [batch,resource]
telemetry:
logs:
level: warn
encoding: json
metrics:
level: detailed

View file

@ -22,6 +22,7 @@ import (
"go.opentelemetry.io/collector/extension"
"go.opentelemetry.io/collector/extension/zpagesextension"
"go.opentelemetry.io/collector/receiver"
"github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension"
)
func components() (otelcol.Factories, error) {
@ -31,6 +32,7 @@ func components() (otelcol.Factories, error) {
factories.Extensions, err = extension.MakeFactoryMap(
pprofextension.NewFactory(),
zpagesextension.NewFactory(),
healthcheckextension.NewFactory(),
)
if err != nil {
return otelcol.Factories{}, err

View file

@ -6,219 +6,252 @@ replace github.com/gracewehner/prometheusreceiver => ../prometheusreceiver
require (
github.com/gracewehner/prometheusreceiver v0.0.0-00010101000000-000000000000
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/fileexporter v0.74.0
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter v0.74.0
github.com/open-telemetry/opentelemetry-collector-contrib/extension/pprofextension v0.74.0
github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourceprocessor v0.74.0
go.opentelemetry.io/collector v0.74.0
go.opentelemetry.io/collector/component v0.74.0
go.opentelemetry.io/collector/connector/forwardconnector v0.74.0
go.opentelemetry.io/collector/exporter v0.74.0
go.opentelemetry.io/collector/exporter/loggingexporter v0.73.0
go.opentelemetry.io/collector/exporter/otlpexporter v0.73.0
go.opentelemetry.io/collector/exporter/otlphttpexporter v0.74.0
go.opentelemetry.io/collector/extension/ballastextension v0.74.0
go.opentelemetry.io/collector/extension/zpagesextension v0.74.0
go.opentelemetry.io/collector/processor/batchprocessor v0.73.0
go.opentelemetry.io/collector/processor/memorylimiterprocessor v0.74.0
go.opentelemetry.io/collector/receiver v0.74.0
go.opentelemetry.io/collector/receiver/otlpreceiver v0.74.0
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/fileexporter v0.85.0
github.com/open-telemetry/opentelemetry-collector-contrib/exporter/prometheusexporter v0.85.0
github.com/open-telemetry/opentelemetry-collector-contrib/extension/healthcheckextension v0.85.0
github.com/open-telemetry/opentelemetry-collector-contrib/extension/pprofextension v0.85.0
github.com/open-telemetry/opentelemetry-collector-contrib/processor/resourceprocessor v0.85.0
go.opentelemetry.io/collector/component v0.86.0
go.opentelemetry.io/collector/connector v0.86.0
go.opentelemetry.io/collector/connector/forwardconnector v0.85.0
go.opentelemetry.io/collector/exporter v0.86.0
go.opentelemetry.io/collector/exporter/loggingexporter v0.85.0
go.opentelemetry.io/collector/exporter/otlpexporter v0.85.0
go.opentelemetry.io/collector/extension v0.86.0
go.opentelemetry.io/collector/extension/zpagesextension v0.86.0
go.opentelemetry.io/collector/otelcol v0.86.0
go.opentelemetry.io/collector/processor v0.86.0
go.opentelemetry.io/collector/processor/batchprocessor v0.85.0
go.opentelemetry.io/collector/receiver v0.86.0
)
require (
cloud.google.com/go/compute v1.18.0 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
cloud.google.com/go/compute v1.23.0 // indirect
cloud.google.com/go/compute/metadata v0.2.4-0.20230617002413-005d2dfb6b68 // indirect
contrib.go.opencensus.io/exporter/prometheus v0.4.2 // indirect
github.com/Azure/azure-sdk-for-go v65.0.0+incompatible // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.28 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.22 // indirect
github.com/Azure/go-autorest/autorest v0.11.29 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.23 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/Microsoft/go-winio v0.5.1 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
github.com/armon/go-metrics v0.3.10 // indirect
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d // indirect
github.com/aws/aws-sdk-go v1.44.220 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/aws/aws-sdk-go v1.45.12 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.0 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/cncf/xds/go v0.0.0-20230105202645-06c439db220b // indirect
github.com/coreos/go-systemd/v22 v22.4.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 // indirect
github.com/cnf/structhash v0.0.0-20201127153200-e1b16c1ebc08 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dennwc/varint v1.0.0 // indirect
github.com/digitalocean/godo v1.95.0 // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker v23.0.1+incompatible // indirect
github.com/digitalocean/godo v1.99.0 // indirect
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/docker v24.0.6+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/edsrzf/mmap-go v1.1.0 // indirect
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
github.com/envoyproxy/go-control-plane v0.10.3 // indirect
github.com/envoyproxy/protoc-gen-validate v0.9.1 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/emicklei/go-restful/v3 v3.10.2 // indirect
github.com/envoyproxy/go-control-plane v0.11.1 // indirect
github.com/envoyproxy/protoc-gen-validate v1.0.2 // indirect
github.com/fatih/color v1.15.0 // indirect
github.com/felixge/httpsnoop v1.0.3 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/go-kit/log v0.2.1 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-openapi/analysis v0.21.4 // indirect
github.com/go-openapi/errors v0.20.3 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/errors v0.20.4 // indirect
github.com/go-openapi/jsonpointer v0.20.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/loads v0.21.2 // indirect
github.com/go-openapi/spec v0.20.7 // indirect
github.com/go-openapi/strfmt v0.21.3 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-openapi/validate v0.22.0 // indirect
github.com/go-resty/resty/v2 v2.1.1-0.20191201195748-d7b97669fe48 // indirect
github.com/go-openapi/spec v0.20.9 // indirect
github.com/go-openapi/strfmt v0.21.7 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-openapi/validate v0.22.1 // indirect
github.com/go-resty/resty/v2 v2.7.0 // indirect
github.com/go-zookeeper/zk v1.0.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect
github.com/googleapis/gax-go/v2 v2.7.0 // indirect
github.com/gophercloud/gophercloud v1.1.1 // indirect
github.com/google/s2a-go v0.1.7 // indirect
github.com/google/uuid v1.3.1 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.2.5 // indirect
github.com/googleapis/gax-go/v2 v2.12.0 // indirect
github.com/gophercloud/gophercloud v1.5.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd // indirect
github.com/hashicorp/consul/api v1.20.0 // indirect
github.com/hashicorp/cronexpr v1.1.1 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.0 // indirect
github.com/hashicorp/consul/api v1.24.0 // indirect
github.com/hashicorp/cronexpr v1.1.2 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-hclog v1.3.1 // indirect
github.com/hashicorp/go-hclog v1.5.0 // indirect
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.4 // indirect
github.com/hashicorp/go-rootcerts v1.0.2 // indirect
github.com/hashicorp/golang-lru v0.6.0 // indirect
github.com/hashicorp/nomad/api v0.0.0-20230124213148-69fd1a0e4bf7 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/hashicorp/nomad/api v0.0.0-20230718173136-3a687930bd3e // indirect
github.com/hashicorp/serf v0.10.1 // indirect
github.com/hetznercloud/hcloud-go v1.39.0 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/ionos-cloud/sdk-go/v6 v6.1.3 // indirect
github.com/hetznercloud/hcloud-go/v2 v2.0.0 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/ionos-cloud/sdk-go/v6 v6.1.8 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/julienschmidt/httprouter v1.3.0 // indirect
github.com/klauspost/compress v1.16.3 // indirect
github.com/knadh/koanf v1.5.0 // indirect
github.com/klauspost/compress v1.17.0 // indirect
github.com/knadh/koanf/maps v0.1.1 // indirect
github.com/knadh/koanf/providers/confmap v0.1.0 // indirect
github.com/knadh/koanf/v2 v2.0.1 // indirect
github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b // indirect
github.com/linode/linodego v1.12.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/linode/linodego v1.19.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/miekg/dns v1.1.50 // indirect
github.com/miekg/dns v1.1.55 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/hashstructure/v2 v2.0.2 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mostynb/go-grpc-compression v1.1.17 // indirect
github.com/mostynb/go-grpc-compression v1.2.1 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal v0.74.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/internal/sharedcomponent v0.74.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatautil v0.74.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/resourcetotelemetry v0.74.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/translator/prometheus v0.74.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal v0.86.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/internal/sharedcomponent v0.86.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/pdatautil v0.86.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/resourcetotelemetry v0.86.0 // indirect
github.com/open-telemetry/opentelemetry-collector-contrib/pkg/translator/prometheus v0.86.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.2 // indirect
github.com/ovh/go-ovh v1.3.0 // indirect
github.com/pelletier/go-toml v1.9.4 // indirect
github.com/opencontainers/image-spec v1.1.0-rc4 // indirect
github.com/ovh/go-ovh v1.4.1 // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/alertmanager v0.25.0 // indirect
github.com/prometheus/client_golang v1.14.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/client_golang v1.16.0 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/common/assets v0.2.0 // indirect
github.com/prometheus/common/sigv4 v0.1.0 // indirect
github.com/prometheus/exporter-toolkit v0.8.2 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/prometheus/prometheus v0.42.1-0.20230210113933-af1d9e01c7e4 // indirect
github.com/prometheus/exporter-toolkit v0.10.0 // indirect
github.com/prometheus/procfs v0.11.0 // indirect
github.com/prometheus/prometheus v0.47.0 // indirect
github.com/prometheus/statsd_exporter v0.22.7 // indirect
github.com/rogpeppe/go-internal v1.8.0 // indirect
github.com/rs/cors v1.8.3 // indirect
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.12 // indirect
github.com/shirou/gopsutil/v3 v3.23.2 // indirect
github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749 // indirect
github.com/spf13/cobra v1.6.1 // indirect
github.com/rs/cors v1.10.0 // indirect
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.20 // indirect
github.com/shirou/gopsutil/v3 v3.23.8 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c // indirect
github.com/spf13/cobra v1.7.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/testify v1.8.2 // indirect
github.com/tklauser/go-sysconf v0.3.11 // indirect
github.com/tklauser/numcpus v0.6.0 // indirect
github.com/stretchr/testify v1.8.4 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/vultr/govultr/v2 v2.17.2 // indirect
github.com/yusufpapurcu/wmi v1.2.2 // indirect
go.mongodb.org/mongo-driver v1.11.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.mongodb.org/mongo-driver v1.12.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/collector/confmap v0.74.0 // indirect
go.opentelemetry.io/collector/consumer v0.74.0 // indirect
go.opentelemetry.io/collector/featuregate v0.74.0 // indirect
go.opentelemetry.io/collector/pdata v1.0.0-rc8 // indirect
go.opentelemetry.io/collector/semconv v0.74.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.40.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.40.0 // indirect
go.opentelemetry.io/contrib/propagators/b3 v1.15.0 // indirect
go.opentelemetry.io/contrib/zpages v0.40.0 // indirect
go.opentelemetry.io/otel v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.37.0 // indirect
go.opentelemetry.io/otel/metric v0.37.0 // indirect
go.opentelemetry.io/otel/sdk v1.14.0 // indirect
go.opentelemetry.io/otel/sdk/metric v0.37.0 // indirect
go.opentelemetry.io/otel/trace v1.14.0 // indirect
go.uber.org/atomic v1.10.0 // indirect
go.opentelemetry.io/collector v0.86.0 // indirect
go.opentelemetry.io/collector/config/configauth v0.86.0 // indirect
go.opentelemetry.io/collector/config/configcompression v0.86.0 // indirect
go.opentelemetry.io/collector/config/configgrpc v0.86.0 // indirect
go.opentelemetry.io/collector/config/confighttp v0.86.0 // indirect
go.opentelemetry.io/collector/config/confignet v0.86.0 // indirect
go.opentelemetry.io/collector/config/configopaque v0.86.0 // indirect
go.opentelemetry.io/collector/config/configtelemetry v0.86.0 // indirect
go.opentelemetry.io/collector/config/configtls v0.86.0 // indirect
go.opentelemetry.io/collector/config/internal v0.86.0 // indirect
go.opentelemetry.io/collector/confmap v0.86.0 // indirect
go.opentelemetry.io/collector/consumer v0.86.0 // indirect
go.opentelemetry.io/collector/extension/auth v0.86.0 // indirect
go.opentelemetry.io/collector/featuregate v1.0.0-rcv0015 // indirect
go.opentelemetry.io/collector/pdata v1.0.0-rcv0015 // indirect
go.opentelemetry.io/collector/semconv v0.86.0 // indirect
go.opentelemetry.io/collector/service v0.86.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.44.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0 // indirect
go.opentelemetry.io/contrib/propagators/b3 v1.19.0 // indirect
go.opentelemetry.io/contrib/zpages v0.44.0 // indirect
go.opentelemetry.io/otel v1.18.0 // indirect
go.opentelemetry.io/otel/bridge/opencensus v0.41.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.41.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.41.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.41.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.18.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.18.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.18.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.41.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v0.41.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.18.0 // indirect
go.opentelemetry.io/otel/metric v1.18.0 // indirect
go.opentelemetry.io/otel/sdk v1.18.0 // indirect
go.opentelemetry.io/otel/sdk/metric v0.41.0 // indirect
go.opentelemetry.io/otel/trace v1.18.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/goleak v1.2.1 // indirect
go.uber.org/multierr v1.10.0 // indirect
go.uber.org/zap v1.24.0 // indirect
golang.org/x/crypto v0.7.0 // indirect
golang.org/x/exp v0.0.0-20230124195608-d38c7dcee874 // indirect
golang.org/x/mod v0.9.0 // indirect
golang.org/x/net v0.8.0 // indirect
golang.org/x/oauth2 v0.6.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.6.0 // indirect
golang.org/x/term v0.6.0 // indirect
golang.org/x/text v0.8.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/crypto v0.13.0 // indirect
golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1 // indirect
golang.org/x/mod v0.12.0 // indirect
golang.org/x/net v0.15.0 // indirect
golang.org/x/oauth2 v0.12.0 // indirect
golang.org/x/sync v0.3.0 // indirect
golang.org/x/sys v0.12.0 // indirect
golang.org/x/term v0.12.0 // indirect
golang.org/x/text v0.13.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/tools v0.7.0 // indirect
gonum.org/v1/gonum v0.12.0 // indirect
google.golang.org/api v0.112.0 // indirect
golang.org/x/tools v0.13.0 // indirect
gonum.org/v1/gonum v0.14.0 // indirect
google.golang.org/api v0.141.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230303212802-e74f57abe488 // indirect
google.golang.org/grpc v1.53.0 // indirect
google.golang.org/protobuf v1.29.1 // indirect
google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230911183012-2d3300fd4832 // indirect
google.golang.org/grpc v1.58.1 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.66.6 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/api v0.26.2 // indirect
k8s.io/apimachinery v0.26.2 // indirect
k8s.io/client-go v0.26.2 // indirect
k8s.io/klog/v2 v2.80.1 // indirect
k8s.io/kube-openapi v0.0.0-20221207184640-f3cff1453715 // indirect
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448 // indirect
k8s.io/api v0.28.2 // indirect
k8s.io/apimachinery v0.28.2 // indirect
k8s.io/client-go v0.28.2 // indirect
k8s.io/klog/v2 v2.100.1 // indirect
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
k8s.io/utils v0.0.0-20230711102312-30195339c3c7 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.3.0 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)

File diff not shown because of its size.

View file

@ -15,7 +15,7 @@ func main() {
info := component.BuildInfo{
Command: "custom-collector-distro",
Description: "Custom OpenTelemetry Collector distribution",
Version: "0.73.0",
Version: "0.85.0",
}
app := otelcol.NewCommand(otelcol.CollectorSettings{BuildInfo: info, Factories: factories})

View file

@ -0,0 +1,44 @@
# Build the otel-allocator binary
FROM mcr.microsoft.com/oss/go/microsoft/golang:1.20 as builder
WORKDIR /app
# Copy prometheus-operator repo files
COPY ./prometheus-operator/go.mod ./prometheus-operator/go.sum ./prometheus-operator/
WORKDIR /app
COPY ./prometheus-operator/pkg/apis/monitoring/go.mod ./prometheus-operator/pkg/apis/monitoring/go.sum ./prometheus-operator/pkg/apis/monitoring/
WORKDIR /app/prometheus-operator/pkg/apis/monitoring/
RUN go mod download
WORKDIR /app
COPY ./prometheus-operator/pkg/client/go.mod ./prometheus-operator/pkg/client/go.sum ./prometheus-operator/pkg/client/
WORKDIR /app/prometheus-operator/pkg/client/
RUN go mod download
WORKDIR /app/prometheus-operator/
RUN go mod download
WORKDIR /app
COPY ./prometheus-operator /app/prometheus-operator
# Copy go mod and sum files
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ARG TARGETOS TARGETARCH
# Build the Go app
RUN if [ "$TARGETARCH" = "arm64" ] ; then CC=aarch64-linux-gnu-gcc CGO_ENABLED=1 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -a -installsuffix -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o main . ; else CGO_ENABLED=1 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -a -installsuffix -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o main . ; fi
######## Start a new stage from scratch #######
FROM mcr.microsoft.com/cbl-mariner/distroless/debug:2.0
WORKDIR /root/
# Copy the pre-built binary file from the previous stage
COPY --from=builder /app/main .
ENTRYPOINT ["./main"]

View file

@ -0,0 +1,9 @@
.PHONY: targetallocator
targetallocator:
@echo "========================= Building targetallocator ========================="
@echo "========================= cleanup existing targetallocator ========================="
rm -rf targetallocator
@echo "========================= go get ========================="
go get
@echo "========================= go build ========================="
go build -buildmode=pie -ldflags '-linkmode external -extldflags=-Wl,-z,now' -o targetallocator .

View file

@ -0,0 +1,278 @@
# Target Allocator
Target Allocator is an optional component of the OpenTelemetry Collector [Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR). Its release version matches the operator's most recent release.
In a nutshell, the TA is a mechanism for decoupling the service discovery and metric collection functions of Prometheus such that they can be scaled independently. The Collector manages Prometheus metrics without needing to install Prometheus. The TA manages the configuration of the Collector's [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).
The TA serves two functions:
* Even distribution of Prometheus targets among a pool of Collectors
* Discovery of Prometheus Custom Resources
## Even Distribution of Prometheus Targets
The Target Allocator's first job is to discover targets to scrape and collectors to allocate targets to. Then it can distribute the targets it discovers among the collectors. This means that the OTel Collectors collect the metrics instead of a Prometheus [scraper](https://uzxmx.github.io/prometheus-scrape-internals.html). Metrics are ingested by the OTel Collectors by way of the [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md).
## Discovery of Prometheus Custom Resources
The Target Allocator also provides for the discovery of [Prometheus Operator CRs](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md), namely the [ServiceMonitor and PodMonitor](https://github.com/open-telemetry/opentelemetry-operator/tree/main/cmd/otel-allocator#target-allocator). The ServiceMonitor and the PodMonitor don't do any scraping themselves; their purpose is to inform the Target Allocator (or Prometheus) to add a new job to its scrape configuration. These metrics are then ingested by way of the Prometheus Receiver on the OpenTelemetry Collector.
Even though Prometheus is not required to be installed in your Kubernetes cluster to use the Target Allocator for Prometheus CR discovery, the TA does require that the ServiceMonitor and PodMonitor be installed. These CRs are bundled with Prometheus Operator; however, they can be installed standalone as well.
The easiest way to do this is by going to the [Prometheus Operator's Releases page](https://github.com/prometheus-operator/prometheus-operator/releases), grabbing a copy of the latest `bundle.yaml` file (for example, [this one](https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.66.0/bundle.yaml)), and stripping out all of the YAML except the ServiceMonitor and PodMonitor YAML definitions.
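For reference, here is a minimal sketch of the kind of ServiceMonitor the Target Allocator discovers; all names and label values below are illustrative, not taken from this repo:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app          # hypothetical name
  namespace: monitoring      # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: example-app       # matches the Service to scrape
  endpoints:
    - port: web              # named port on the Service
      interval: 30s
```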
# Usage
The `spec.targetAllocator:` block controls the Target Allocator's general properties. The full API spec can be found here: [api.md#opentelemetrycollectorspectargetallocator](../../docs/api.md#opentelemetrycollectorspectargetallocator)
A basic example that deploys a Collector with the Target Allocator enabled:
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: collector-with-ta
spec:
mode: statefulset
targetAllocator:
enabled: true
config: |
receivers:
prometheus:
config:
scrape_configs:
- job_name: 'otel-collector'
scrape_interval: 10s
static_configs:
- targets: [ '0.0.0.0:8888' ]
exporters:
logging:
service:
pipelines:
metrics:
receivers: [prometheus]
processors: []
exporters: [logging]
```
In essence, Prometheus Receiver configs are overridden with an `http_sd_config` directive that points to the Allocator; the targets are then load-balanced/sharded across the Collectors. The [Prometheus Receiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/prometheusreceiver/README.md) configs that are overridden are distributed under the same job names.
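As a rough sketch of that rewrite (the service URL below is illustrative, not taken from this configuration), a job on each collector ends up looking something like:
```yaml
scrape_configs:
  - job_name: otel-collector
    http_sd_configs:
      # Hypothetical Target Allocator service; each collector requests only its own share of targets.
      - url: http://my-targetallocator-service/jobs/otel-collector/targets?collector_id=$POD_NAME
```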
## PrometheusCR specifics
TargetAllocator discovery of PrometheusCRs can be turned on by setting `.spec.targetAllocator.prometheusCR.enabled` to `true`; the discovered CRs are then presented as scrape configs and jobs on the `/scrape_configs` and `/jobs` endpoints, respectively.
The CRs can be filtered by labels as documented here: [api.md#opentelemetrycollectorspectargetallocatorprometheuscr](../../docs/api.md#opentelemetrycollectorspectargetallocatorprometheuscr)
The Prometheus Receiver in the deployed Collector also has to know where the Allocator service exists. This is done by an OpenTelemetry Collector Operator-specific config.
```yaml
config: |
receivers:
prometheus:
config:
scrape_configs:
- job_name: 'otel-collector'
target_allocator:
endpoint: http://my-targetallocator-service
interval: 30s
collector_id: "${POD_NAME}"
```
Upstream documentation here: [PrometheusReceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver#opentelemetry-operator)
The TargetAllocator service is named based on the OpenTelemetryCollector CR name. `collector_id` should be unique per
collector instance, such as the pod name. The `POD_NAME` environment variable is convenient since this is supplied
to collector instance pods by default.
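Where `POD_NAME` is not already present, a common way to supply it is the Kubernetes downward API; a minimal sketch:
```yaml
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # resolves to the pod's own name at runtime
```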
### RBAC
The ServiceAccount that the TargetAllocator runs as has to have access to the CRs. A role like the following provides that access.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: opentelemetry-targetallocator-cr-role
rules:
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
- podmonitors
verbs:
- '*'
```
In addition, the TargetAllocator needs the same permissions as a Prometheus instance would to find the matching targets
from the CR instances.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: opentelemetry-targetallocator-role
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs: ["get", "list", "watch"]
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
```
These roles can be combined.
A ServiceAccount bound with the above permissions in the namespaces that are to be monitored can then be referenced in
the `targetAllocator:` part of the OpenTelemetryCollector CR.
```yaml
targetAllocator:
enabled: true
serviceAccount: opentelemetry-targetallocator-sa
prometheusCR:
enabled: true
```
**Note**: The Collector part of this same CR *also* has a serviceAccount key which only affects the collector and *not*
the TargetAllocator.
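For completeness, a sketch of binding the role above to that ServiceAccount; the namespace here is an assumption, not taken from this repo:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-targetallocator-rolebinding
subjects:
  - kind: ServiceAccount
    name: opentelemetry-targetallocator-sa
    namespace: monitoring   # assumed namespace
roleRef:
  kind: ClusterRole
  name: opentelemetry-targetallocator-role
  apiGroup: rbac.authorization.k8s.io
```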
### Service / Pod monitor endpoint credentials
If your service or pod monitor endpoints require credentials or another supported form of authentication (bearer token, basic auth, OAuth2, etc.), you need to ensure that the collector has access to this information. Due to some limitations in how the endpoints configuration is handled, the target allocator currently does **not** support credentials provided via secrets. It is only possible to provide credentials in a file (for more details, see issue https://github.com/open-telemetry/opentelemetry-operator/issues/1669).
In order to ensure your endpoints can be scraped, your collector instance needs to have the particular secret mounted as a file at the correct path.
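A sketch of such a mount on the OpenTelemetryCollector CR, assuming the CR's `volumes`/`volumeMounts` passthrough and illustrative names:
```yaml
spec:
  volumes:
    - name: scrape-credentials
      secret:
        secretName: scrape-credentials   # hypothetical secret
  volumeMounts:
    - name: scrape-credentials
      mountPath: /etc/scrape-credentials # path referenced from the scrape config
      readOnly: true
```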
# Design
If the Allocator is activated, all Prometheus configurations will be transferred to a separate ConfigMap, which is in turn mounted to the Allocator.
This configuration will be resolved to target configurations and then split across all OpenTelemetryCollector instances.
TargetAllocators expose the results as [HTTP_SD endpoints](https://prometheus.io/docs/prometheus/latest/http_sd/)
split by collector.
Currently, the Target Allocator handles the sharding of targets. The operator sets the `$SHARD` variable to 0 to allow collectors to keep targets generated by a Prometheus CRD. Using Prometheus sharding and Target Allocator sharding together is currently not recommended and may lead to unpredictable results.
[See this thread for more information](https://github.com/open-telemetry/opentelemetry-operator/pull/1124#discussion_r984683577)
#### Endpoints
`/scrape_configs`:
```json
{
"job1": {
"follow_redirects": true,
"honor_timestamps": true,
"job_name": "job1",
"metric_relabel_configs": [],
"metrics_path": "/metrics",
"scheme": "http",
"scrape_interval": "1m",
"scrape_timeout": "10s",
"static_configs": []
},
"job2": {
"follow_redirects": true,
"honor_timestamps": true,
"job_name": "job2",
"metric_relabel_configs": [],
"metrics_path": "/metrics",
"relabel_configs": [],
"scheme": "http",
"scrape_interval": "1m",
"scrape_timeout": "10s",
"kubernetes_sd_configs": []
}
}
```
`/jobs`:
```json
{
"job1": {
"_link": "/jobs/job1/targets"
},
"job2": {
"_link": "/jobs/job1/targets"
}
}
```
`/jobs/{jobID}/targets`:
```json
{
"collector-1": {
"_link": "/jobs/job1/targets?collector_id=collector-1",
"targets": [
{
"Targets": [
"10.100.100.100",
"10.100.100.101",
"10.100.100.102"
],
"Labels": {
"namespace": "a_namespace",
"pod": "a_pod"
}
}
]
}
}
```
`/jobs/{jobID}/targets?collector_id={collectorID}`:
```json
[
{
"targets": [
"10.100.100.100",
"10.100.100.101",
"10.100.100.102"
],
"labels": {
"namespace": "a_namespace",
"pod": "a_pod"
}
}
]
```
## Packages
### Watchers
Watchers translate external sources into Prometheus-readable scrape configurations and trigger updates to the DiscoveryManager
### DiscoveryManager
Watches Prometheus service discovery for new targets and passes them to the Allocator
### Allocator
Shards the received targets based on the discovered Collector instances
### Collector
Client that watches for deployed Collector instances, which are then provided to the Allocator.

View file

@ -0,0 +1,58 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package allocation
import (
"fmt"
"strconv"
"github.com/prometheus/common/model"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
func colIndex(index, numCols int) int {
if numCols == 0 {
return -1
}
return index % numCols
}
func MakeNNewTargets(n int, numCollectors int, startingIndex int) map[string]*target.Item {
toReturn := map[string]*target.Item{}
for i := startingIndex; i < n+startingIndex; i++ {
collector := fmt.Sprintf("collector-%d", colIndex(i, numCollectors))
label := model.LabelSet{
"collector": model.LabelValue(collector),
"i": model.LabelValue(strconv.Itoa(i)),
"total": model.LabelValue(strconv.Itoa(n + startingIndex)),
}
newTarget := target.NewItem(fmt.Sprintf("test-job-%d", i), "test-url", label, collector)
toReturn[newTarget.Hash()] = newTarget
}
return toReturn
}
func MakeNCollectors(n int, startingIndex int) map[string]*Collector {
toReturn := map[string]*Collector{}
for i := startingIndex; i < n+startingIndex; i++ {
collector := fmt.Sprintf("collector-%d", i)
toReturn[collector] = &Collector{
Name: collector,
NumTargets: 0,
}
}
return toReturn
}

View file

@ -0,0 +1,293 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package allocation
import (
"sync"
"github.com/buraksezer/consistent"
"github.com/cespare/xxhash/v2"
"github.com/go-logr/logr"
"github.com/prometheus/client_golang/prometheus"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/diff"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
var _ Allocator = &consistentHashingAllocator{}
const consistentHashingStrategyName = "consistent-hashing"
type hasher struct{}
func (h hasher) Sum64(data []byte) uint64 {
return xxhash.Sum64(data)
}
type consistentHashingAllocator struct {
// m protects consistentHasher, collectors and targetItems for concurrent use.
m sync.RWMutex
consistentHasher *consistent.Consistent
// collectors is a map from a Collector's name to a Collector instance
// collectorKey -> collector pointer
collectors map[string]*Collector
// targetItems is a map from a target item's hash to the target items allocated state
// targetItem hash -> target item pointer
targetItems map[string]*target.Item
// collectorKey -> job -> target item hash -> true
targetItemsPerJobPerCollector map[string]map[string]map[string]bool
log logr.Logger
filter Filter
}
func newConsistentHashingAllocator(log logr.Logger, opts ...AllocationOption) Allocator {
config := consistent.Config{
PartitionCount: 1061,
ReplicationFactor: 5,
Load: 1.1,
Hasher: hasher{},
}
consistentHasher := consistent.New(nil, config)
chAllocator := &consistentHashingAllocator{
consistentHasher: consistentHasher,
collectors: make(map[string]*Collector),
targetItems: make(map[string]*target.Item),
targetItemsPerJobPerCollector: make(map[string]map[string]map[string]bool),
log: log,
}
for _, opt := range opts {
opt(chAllocator)
}
return chAllocator
}
// SetFilter sets the filtering hook to use.
func (c *consistentHashingAllocator) SetFilter(filter Filter) {
c.filter = filter
}
// addCollectorTargetItemMapping keeps track of which collector has which jobs and targets
// this allows the allocator to respond without any extra allocations to http calls. The caller of this method
// has to acquire a lock.
func (c *consistentHashingAllocator) addCollectorTargetItemMapping(tg *target.Item) {
if c.targetItemsPerJobPerCollector[tg.CollectorName] == nil {
c.targetItemsPerJobPerCollector[tg.CollectorName] = make(map[string]map[string]bool)
}
if c.targetItemsPerJobPerCollector[tg.CollectorName][tg.JobName] == nil {
c.targetItemsPerJobPerCollector[tg.CollectorName][tg.JobName] = make(map[string]bool)
}
c.targetItemsPerJobPerCollector[tg.CollectorName][tg.JobName][tg.Hash()] = true
}
// addTargetToTargetItems assigns a target to the collector based on its hash and adds it to the allocator's targetItems
// This method is called from within SetTargets and SetCollectors, which acquire the needed lock.
// This is only called after the collectors are cleared or when a new target has been found in the tempTargetMap.
// INVARIANT: c.collectors must have at least 1 collector set.
// NOTE: by not creating a new target item, there is the potential for a race condition where we modify this target
// item while it's being encoded by the server JSON handler.
func (c *consistentHashingAllocator) addTargetToTargetItems(tg *target.Item) {
// Check if this is a reassignment, if so, decrement the previous collector's NumTargets
if previousColName, ok := c.collectors[tg.CollectorName]; ok {
previousColName.NumTargets--
delete(c.targetItemsPerJobPerCollector[tg.CollectorName][tg.JobName], tg.Hash())
TargetsPerCollector.WithLabelValues(previousColName.String(), consistentHashingStrategyName).Set(float64(c.collectors[previousColName.String()].NumTargets))
}
colOwner := c.consistentHasher.LocateKey([]byte(tg.Hash()))
tg.CollectorName = colOwner.String()
c.targetItems[tg.Hash()] = tg
c.addCollectorTargetItemMapping(tg)
c.collectors[colOwner.String()].NumTargets++
TargetsPerCollector.WithLabelValues(colOwner.String(), consistentHashingStrategyName).Set(float64(c.collectors[colOwner.String()].NumTargets))
}
// handleTargets receives the new and removed targets and reconciles the current state.
// Any removals are removed from the allocator's targetItems and unassigned from the corresponding collector.
// Any net-new additions are assigned to the next available collector.
func (c *consistentHashingAllocator) handleTargets(diff diff.Changes[*target.Item]) {
// Check for removals
for k, item := range c.targetItems {
// if the current item is in the removals list
if _, ok := diff.Removals()[k]; ok {
col := c.collectors[item.CollectorName]
col.NumTargets--
delete(c.targetItems, k)
delete(c.targetItemsPerJobPerCollector[item.CollectorName][item.JobName], item.Hash())
TargetsPerCollector.WithLabelValues(item.CollectorName, consistentHashingStrategyName).Set(float64(col.NumTargets))
}
}
// Check for additions
for k, item := range diff.Additions() {
// Do nothing if the item is already there
if _, ok := c.targetItems[k]; ok {
continue
} else {
// Add item to item pool and assign a collector
c.addTargetToTargetItems(item)
}
}
}
// handleCollectors receives the new and removed collectors and reconciles the current state.
// Any removals are removed from the allocator's collectors. New collectors are added to the allocator's collector map.
// Finally, update all targets' collectors to match the consistent hashing.
func (c *consistentHashingAllocator) handleCollectors(diff diff.Changes[*Collector]) {
// Clear removed collectors
for _, k := range diff.Removals() {
delete(c.collectors, k.Name)
delete(c.targetItemsPerJobPerCollector, k.Name)
c.consistentHasher.Remove(k.Name)
TargetsPerCollector.WithLabelValues(k.Name, consistentHashingStrategyName).Set(0)
}
// Insert the new collectors
for _, i := range diff.Additions() {
c.collectors[i.Name] = NewCollector(i.Name)
c.consistentHasher.Add(c.collectors[i.Name])
}
// Re-Allocate all targets
for _, item := range c.targetItems {
c.addTargetToTargetItems(item)
}
}
// SetTargets accepts a list of targets that will be used to make
// load balancing decisions. This method should be called when there are
// new targets discovered or existing targets are shutdown.
func (c *consistentHashingAllocator) SetTargets(targets map[string]*target.Item) {
timer := prometheus.NewTimer(TimeToAssign.WithLabelValues("SetTargets", consistentHashingStrategyName))
defer timer.ObserveDuration()
if c.filter != nil {
targets = c.filter.Apply(targets)
}
RecordTargetsKept(targets)
c.m.Lock()
defer c.m.Unlock()
if len(c.collectors) == 0 {
c.log.Info("No collector instances present, saving targets to allocate to collector(s)")
// If there were no targets discovered previously, assign this as the new set of target items
if len(c.targetItems) == 0 {
c.log.Info("Not discovered any targets previously, saving targets found to the targetItems set")
for k, item := range targets {
c.targetItems[k] = item
}
} else {
// If there were previously discovered targets, add or remove accordingly
targetsDiffEmptyCollectorSet := diff.Maps(c.targetItems, targets)
// Check for additions
if len(targetsDiffEmptyCollectorSet.Additions()) > 0 {
c.log.Info("New targets discovered, adding new targets to the targetItems set")
for k, item := range targetsDiffEmptyCollectorSet.Additions() {
// Do nothing if the item is already there
if _, ok := c.targetItems[k]; ok {
continue
} else {
// Add item to item pool
c.targetItems[k] = item
}
}
}
// Check for deletions
if len(targetsDiffEmptyCollectorSet.Removals()) > 0 {
c.log.Info("Targets removed, Removing targets from the targetItems set")
for k := range targetsDiffEmptyCollectorSet.Removals() {
// Delete item from target items
delete(c.targetItems, k)
}
}
}
return
}
// Check for target changes
targetsDiff := diff.Maps(c.targetItems, targets)
// If there are any additions or removals
if len(targetsDiff.Additions()) != 0 || len(targetsDiff.Removals()) != 0 {
c.handleTargets(targetsDiff)
}
}
// SetCollectors sets the set of collectors with key=collectorName, value=Collector object.
// This method is called when Collectors are added or removed.
func (c *consistentHashingAllocator) SetCollectors(collectors map[string]*Collector) {
timer := prometheus.NewTimer(TimeToAssign.WithLabelValues("SetCollectors", consistentHashingStrategyName))
defer timer.ObserveDuration()
CollectorsAllocatable.WithLabelValues(consistentHashingStrategyName).Set(float64(len(collectors)))
if len(collectors) == 0 {
c.log.Info("No collector instances present")
return
}
c.m.Lock()
defer c.m.Unlock()
// Check for collector changes
collectorsDiff := diff.Maps(c.collectors, collectors)
if len(collectorsDiff.Additions()) != 0 || len(collectorsDiff.Removals()) != 0 {
c.handleCollectors(collectorsDiff)
}
}
func (c *consistentHashingAllocator) GetTargetsForCollectorAndJob(collector string, job string) []*target.Item {
c.m.RLock()
defer c.m.RUnlock()
if _, ok := c.targetItemsPerJobPerCollector[collector]; !ok {
return []*target.Item{}
}
if _, ok := c.targetItemsPerJobPerCollector[collector][job]; !ok {
return []*target.Item{}
}
targetItemsCopy := make([]*target.Item, len(c.targetItemsPerJobPerCollector[collector][job]))
index := 0
for targetHash := range c.targetItemsPerJobPerCollector[collector][job] {
targetItemsCopy[index] = c.targetItems[targetHash]
index++
}
return targetItemsCopy
}
// TargetItems returns a shallow copy of the targetItems map.
func (c *consistentHashingAllocator) TargetItems() map[string]*target.Item {
c.m.RLock()
defer c.m.RUnlock()
targetItemsCopy := make(map[string]*target.Item)
for k, v := range c.targetItems {
targetItemsCopy[k] = v
}
return targetItemsCopy
}
// Collectors returns a shallow copy of the collectors map.
func (c *consistentHashingAllocator) Collectors() map[string]*Collector {
c.m.RLock()
defer c.m.RUnlock()
collectorsCopy := make(map[string]*Collector)
for k, v := range c.collectors {
collectorsCopy[k] = v
}
return collectorsCopy
}

View file

@ -0,0 +1,105 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package allocation
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestCanSetSingleTarget(t *testing.T) {
cols := MakeNCollectors(3, 0)
c := newConsistentHashingAllocator(logger)
c.SetCollectors(cols)
c.SetTargets(MakeNNewTargets(1, 3, 0))
actualTargetItems := c.TargetItems()
assert.Len(t, actualTargetItems, 1)
for _, item := range actualTargetItems {
assert.Equal(t, "collector-2", item.CollectorName)
}
}
func TestRelativelyEvenDistribution(t *testing.T) {
numCols := 15
numItems := 10000
cols := MakeNCollectors(numCols, 0)
var expectedPerCollector = float64(numItems / numCols)
expectedDelta := (expectedPerCollector * 1.5) - expectedPerCollector
c := newConsistentHashingAllocator(logger)
c.SetCollectors(cols)
c.SetTargets(MakeNNewTargets(numItems, 0, 0))
actualTargetItems := c.TargetItems()
assert.Len(t, actualTargetItems, numItems)
actualCollectors := c.Collectors()
assert.Len(t, actualCollectors, numCols)
for _, col := range actualCollectors {
assert.InDelta(t, col.NumTargets, expectedPerCollector, expectedDelta)
}
}
func TestFullReallocation(t *testing.T) {
cols := MakeNCollectors(10, 0)
c := newConsistentHashingAllocator(logger)
c.SetCollectors(cols)
c.SetTargets(MakeNNewTargets(10000, 10, 0))
actualTargetItems := c.TargetItems()
assert.Len(t, actualTargetItems, 10000)
actualCollectors := c.Collectors()
assert.Len(t, actualCollectors, 10)
newCols := MakeNCollectors(10, 10)
c.SetCollectors(newCols)
updatedTargetItems := c.TargetItems()
assert.Len(t, updatedTargetItems, 10000)
updatedCollectors := c.Collectors()
assert.Len(t, updatedCollectors, 10)
for _, item := range updatedTargetItems {
_, ok := updatedCollectors[item.CollectorName]
assert.True(t, ok, "Some items weren't reallocated correctly")
}
}
func TestNumRemapped(t *testing.T) {
numItems := 10_000
numInitialCols := 15
numFinalCols := 16
expectedDelta := float64((numFinalCols - numInitialCols) * (numItems / numFinalCols))
cols := MakeNCollectors(numInitialCols, 0)
c := newConsistentHashingAllocator(logger)
c.SetCollectors(cols)
c.SetTargets(MakeNNewTargets(numItems, numInitialCols, 0))
actualTargetItems := c.TargetItems()
assert.Len(t, actualTargetItems, numItems)
actualCollectors := c.Collectors()
assert.Len(t, actualCollectors, numInitialCols)
newCols := MakeNCollectors(numFinalCols, 0)
c.SetCollectors(newCols)
updatedTargetItems := c.TargetItems()
assert.Len(t, updatedTargetItems, numItems)
updatedCollectors := c.Collectors()
assert.Len(t, updatedCollectors, numFinalCols)
countRemapped := 0
countNotRemapped := 0
for _, item := range updatedTargetItems {
previousItem, ok := actualTargetItems[item.Hash()]
assert.True(t, ok)
if previousItem.CollectorName != item.CollectorName {
countRemapped++
} else {
countNotRemapped++
}
}
assert.InDelta(t, numItems/numFinalCols, countRemapped, expectedDelta)
}

View file

@ -0,0 +1,261 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package allocation
import (
"sync"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/diff"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
"github.com/go-logr/logr"
"github.com/prometheus/client_golang/prometheus"
)
var _ Allocator = &leastWeightedAllocator{}
const leastWeightedStrategyName = "least-weighted"
/*
Load balancer will serve on an HTTP server exposing /jobs/<job_id>/targets
The targets are allocated using the least connection method
Load balancer will need information about the collectors in order to set the URLs
Keep a Map of what each collector currently holds and update it based on new scrape target updates
*/
// leastWeightedAllocator makes decisions to distribute work among
// a number of OpenTelemetry collectors based on the number of targets.
// Users need to call SetTargets when they have new targets in their
// clusters and call SetCollectors when the collectors have changed.
type leastWeightedAllocator struct {
// m protects collectors and targetItems for concurrent use.
m sync.RWMutex
// collectors is a map from a Collector's name to a Collector instance
collectors map[string]*Collector
// targetItems is a map from a target item's hash to the target items allocated state
targetItems map[string]*target.Item
// collectorKey -> job -> target item hash -> true
targetItemsPerJobPerCollector map[string]map[string]map[string]bool
log logr.Logger
filter Filter
}
// SetFilter sets the filtering hook to use.
func (allocator *leastWeightedAllocator) SetFilter(filter Filter) {
allocator.filter = filter
}
func (allocator *leastWeightedAllocator) GetTargetsForCollectorAndJob(collector string, job string) []*target.Item {
allocator.m.RLock()
defer allocator.m.RUnlock()
if _, ok := allocator.targetItemsPerJobPerCollector[collector]; !ok {
return []*target.Item{}
}
if _, ok := allocator.targetItemsPerJobPerCollector[collector][job]; !ok {
return []*target.Item{}
}
targetItemsCopy := make([]*target.Item, len(allocator.targetItemsPerJobPerCollector[collector][job]))
index := 0
for targetHash := range allocator.targetItemsPerJobPerCollector[collector][job] {
targetItemsCopy[index] = allocator.targetItems[targetHash]
index++
}
return targetItemsCopy
}
// TargetItems returns a shallow copy of the targetItems map.
func (allocator *leastWeightedAllocator) TargetItems() map[string]*target.Item {
allocator.m.RLock()
defer allocator.m.RUnlock()
targetItemsCopy := make(map[string]*target.Item)
for k, v := range allocator.targetItems {
targetItemsCopy[k] = v
}
return targetItemsCopy
}
// Collectors returns a shallow copy of the collectors map.
func (allocator *leastWeightedAllocator) Collectors() map[string]*Collector {
allocator.m.RLock()
defer allocator.m.RUnlock()
collectorsCopy := make(map[string]*Collector)
for k, v := range allocator.collectors {
collectorsCopy[k] = v
}
return collectorsCopy
}
// findNextCollector finds the next collector with fewer number of targets.
// This method is called from within SetTargets and SetCollectors, whose caller
// acquires the needed lock. This method assumes there is at least 1 collector set.
// INVARIANT: allocator.collectors must have at least 1 collector set.
func (allocator *leastWeightedAllocator) findNextCollector() *Collector {
var col *Collector
for _, v := range allocator.collectors {
// If the initial collector is empty, set the initial collector to the first element of map
if col == nil {
col = v
} else if v.NumTargets < col.NumTargets {
col = v
}
}
return col
}
// addCollectorTargetItemMapping keeps track of which collector has which jobs and targets
// this allows the allocator to respond without any extra allocations to http calls. The caller of this method
// has to acquire a lock.
func (allocator *leastWeightedAllocator) addCollectorTargetItemMapping(tg *target.Item) {
if allocator.targetItemsPerJobPerCollector[tg.CollectorName] == nil {
allocator.targetItemsPerJobPerCollector[tg.CollectorName] = make(map[string]map[string]bool)
}
if allocator.targetItemsPerJobPerCollector[tg.CollectorName][tg.JobName] == nil {
allocator.targetItemsPerJobPerCollector[tg.CollectorName][tg.JobName] = make(map[string]bool)
}
allocator.targetItemsPerJobPerCollector[tg.CollectorName][tg.JobName][tg.Hash()] = true
}
// addTargetToTargetItems assigns a target to the next available collector and adds it to the allocator's targetItems
// This method is called from within SetTargets and SetCollectors, which acquire the needed lock.
// This is only called after the collectors are cleared or when a new target has been found in the tempTargetMap.
// INVARIANT: allocator.collectors must have at least 1 collector set.
// NOTE: by not creating a new target item, there is the potential for a race condition where we modify this target
// item while it's being encoded by the server JSON handler.
func (allocator *leastWeightedAllocator) addTargetToTargetItems(tg *target.Item) {
chosenCollector := allocator.findNextCollector()
tg.CollectorName = chosenCollector.Name
allocator.targetItems[tg.Hash()] = tg
allocator.addCollectorTargetItemMapping(tg)
chosenCollector.NumTargets++
TargetsPerCollector.WithLabelValues(chosenCollector.Name, leastWeightedStrategyName).Set(float64(chosenCollector.NumTargets))
}
// handleTargets receives the new and removed targets and reconciles the current state.
// Any removals are removed from the allocator's targetItems and unassigned from the corresponding collector.
// Any net-new additions are assigned to the next available collector.
func (allocator *leastWeightedAllocator) handleTargets(diff diff.Changes[*target.Item]) {
// Check for removals
for k, item := range allocator.targetItems {
// if the current item is in the removals list
if _, ok := diff.Removals()[k]; ok {
c := allocator.collectors[item.CollectorName]
c.NumTargets--
delete(allocator.targetItems, k)
delete(allocator.targetItemsPerJobPerCollector[item.CollectorName][item.JobName], item.Hash())
TargetsPerCollector.WithLabelValues(item.CollectorName, leastWeightedStrategyName).Set(float64(c.NumTargets))
}
}
// Check for additions
for k, item := range diff.Additions() {
// Do nothing if the item is already there
if _, ok := allocator.targetItems[k]; ok {
continue
} else {
// Add item to item pool and assign a collector
allocator.addTargetToTargetItems(item)
}
}
}
// handleCollectors receives the new and removed collectors and reconciles the current state.
// Any removals are removed from the allocator's collectors. New collectors are added to the allocator's collector map.
// Finally, any targets of removed collectors are reallocated to the next available collector.
func (allocator *leastWeightedAllocator) handleCollectors(diff diff.Changes[*Collector]) {
// Clear removed collectors
for _, k := range diff.Removals() {
delete(allocator.collectors, k.Name)
delete(allocator.targetItemsPerJobPerCollector, k.Name)
TargetsPerCollector.WithLabelValues(k.Name, leastWeightedStrategyName).Set(0)
}
// Insert the new collectors
for _, i := range diff.Additions() {
allocator.collectors[i.Name] = NewCollector(i.Name)
}
// Re-allocate targets of the removed collectors
for _, item := range allocator.targetItems {
if _, ok := diff.Removals()[item.CollectorName]; ok {
allocator.addTargetToTargetItems(item)
}
}
}
// SetTargets accepts a list of targets that will be used to make
// load balancing decisions. This method should be called when new targets are
// discovered or existing targets are shut down.
func (allocator *leastWeightedAllocator) SetTargets(targets map[string]*target.Item) {
timer := prometheus.NewTimer(TimeToAssign.WithLabelValues("SetTargets", leastWeightedStrategyName))
defer timer.ObserveDuration()
if allocator.filter != nil {
targets = allocator.filter.Apply(targets)
}
RecordTargetsKept(targets)
allocator.m.Lock()
defer allocator.m.Unlock()
if len(allocator.collectors) == 0 {
allocator.log.Info("No collector instances present, cannot set targets")
return
}
// Check for target changes
targetsDiff := diff.Maps(allocator.targetItems, targets)
// If there are any additions or removals
if len(targetsDiff.Additions()) != 0 || len(targetsDiff.Removals()) != 0 {
allocator.handleTargets(targetsDiff)
}
}
// SetCollectors sets the set of collectors with key=collectorName, value=Collector object.
// This method is called when Collectors are added or removed.
func (allocator *leastWeightedAllocator) SetCollectors(collectors map[string]*Collector) {
timer := prometheus.NewTimer(TimeToAssign.WithLabelValues("SetCollectors", leastWeightedStrategyName))
defer timer.ObserveDuration()
CollectorsAllocatable.WithLabelValues(leastWeightedStrategyName).Set(float64(len(collectors)))
if len(collectors) == 0 {
allocator.log.Info("No collector instances present")
return
}
allocator.m.Lock()
defer allocator.m.Unlock()
// Check for collector changes
collectorsDiff := diff.Maps(allocator.collectors, collectors)
if len(collectorsDiff.Additions()) != 0 || len(collectorsDiff.Removals()) != 0 {
allocator.handleCollectors(collectorsDiff)
}
}
func newLeastWeightedAllocator(log logr.Logger, opts ...AllocationOption) Allocator {
lwAllocator := &leastWeightedAllocator{
log: log,
collectors: make(map[string]*Collector),
targetItems: make(map[string]*target.Item),
targetItemsPerJobPerCollector: make(map[string]map[string]map[string]bool),
}
for _, opt := range opts {
opt(lwAllocator)
}
return lwAllocator
}
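For orientation, here is a minimal usage sketch (not part of this commit) of how the least-weighted allocator above is driven: build it through the New factory, set collectors first (SetTargets is a no-op without them), then hand it targets. The collector and target names are hypothetical.
package main
import (
"fmt"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
func main() {
// "least-weighted" is registered with the strategy registry in init().
allocator, err := allocation.New("least-weighted", logf.Log.WithName("example"))
if err != nil {
panic(err)
}
// Collectors must be set before targets; otherwise SetTargets logs and returns.
allocator.SetCollectors(map[string]*allocation.Collector{
"collector-0": allocation.NewCollector("collector-0"),
"collector-1": allocation.NewCollector("collector-1"),
})
// Hypothetical target; each one is assigned to the collector with the fewest targets.
item := target.NewItem("job-a", "10.0.0.1:8080", nil, "")
allocator.SetTargets(map[string]*target.Item{item.Hash(): item})
for hash, tg := range allocator.TargetItems() {
fmt.Println(hash, "->", tg.CollectorName)
}
}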


@ -0,0 +1,258 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package allocation
import (
"math"
"math/rand"
"testing"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/assert"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
var logger = logf.Log.WithName("unit-tests")
func TestSetCollectors(t *testing.T) {
s, _ := New("least-weighted", logger)
cols := MakeNCollectors(3, 0)
s.SetCollectors(cols)
expectedColLen := len(cols)
collectors := s.Collectors()
assert.Len(t, collectors, expectedColLen)
for _, i := range cols {
assert.NotNil(t, collectors[i.Name])
}
}
func TestAddingAndRemovingTargets(t *testing.T) {
// prepare allocator with initial targets and collectors
s, _ := New("least-weighted", logger)
cols := MakeNCollectors(3, 0)
s.SetCollectors(cols)
initTargets := MakeNNewTargets(6, 3, 0)
// test that targets and collectors are added properly
s.SetTargets(initTargets)
// verify
expectedTargetLen := len(initTargets)
assert.Len(t, s.TargetItems(), expectedTargetLen)
// prepare second round of targets
tar := MakeNNewTargets(4, 3, 0)
// test that fewer targets are found, i.e. some were removed
s.SetTargets(tar)
// verify
targetItems := s.TargetItems()
expectedNewTargetLen := len(tar)
assert.Len(t, targetItems, expectedNewTargetLen)
// verify results map
for _, i := range tar {
_, ok := targetItems[i.Hash()]
assert.True(t, ok)
}
}
// Tests that two targets with the same target URL and job name but different label sets are both added.
func TestAllocationCollision(t *testing.T) {
// prepare allocator with initial targets and collectors
s, _ := New("least-weighted", logger)
cols := MakeNCollectors(3, 0)
s.SetCollectors(cols)
firstLabels := model.LabelSet{
"test": "test1",
}
secondLabels := model.LabelSet{
"test": "test2",
}
firstTarget := target.NewItem("sample-name", "0.0.0.0:8000", firstLabels, "")
secondTarget := target.NewItem("sample-name", "0.0.0.0:8000", secondLabels, "")
targetList := map[string]*target.Item{
firstTarget.Hash(): firstTarget,
secondTarget.Hash(): secondTarget,
}
// test that targets and collectors are added properly
s.SetTargets(targetList)
// verify
targetItems := s.TargetItems()
expectedTargetLen := len(targetList)
assert.Len(t, targetItems, expectedTargetLen)
// verify results map
for _, i := range targetList {
_, ok := targetItems[i.Hash()]
assert.True(t, ok)
}
}
func TestNoCollectorReassignment(t *testing.T) {
s, _ := New("least-weighted", logger)
cols := MakeNCollectors(3, 0)
s.SetCollectors(cols)
expectedColLen := len(cols)
assert.Len(t, s.Collectors(), expectedColLen)
for _, i := range cols {
assert.NotNil(t, s.Collectors()[i.Name])
}
initTargets := MakeNNewTargets(6, 3, 0)
// test that targets and collectors are added properly
s.SetTargets(initTargets)
// verify
expectedTargetLen := len(initTargets)
targetItems := s.TargetItems()
assert.Len(t, targetItems, expectedTargetLen)
// assign new set of collectors with the same names
newCols := MakeNCollectors(3, 0)
s.SetCollectors(newCols)
newTargetItems := s.TargetItems()
assert.Equal(t, targetItems, newTargetItems)
}
func TestSmartCollectorReassignment(t *testing.T) {
t.Skip("This test is flaky and fails frequently, see issue 1291")
s, _ := New("least-weighted", logger)
cols := MakeNCollectors(4, 0)
s.SetCollectors(cols)
expectedColLen := len(cols)
assert.Len(t, s.Collectors(), expectedColLen)
for _, i := range cols {
assert.NotNil(t, s.Collectors()[i.Name])
}
initTargets := MakeNNewTargets(6, 0, 0)
// test that targets and collectors are added properly
s.SetTargets(initTargets)
// verify
expectedTargetLen := len(initTargets)
targetItems := s.TargetItems()
assert.Len(t, targetItems, expectedTargetLen)
// assign a new set of collectors, replacing collector-3 with collector-4
newCols := map[string]*Collector{
"collector-0": {
Name: "collector-0",
}, "collector-1": {
Name: "collector-1",
}, "collector-2": {
Name: "collector-2",
}, "collector-4": {
Name: "collector-4",
},
}
s.SetCollectors(newCols)
newTargetItems := s.TargetItems()
assert.Equal(t, len(targetItems), len(newTargetItems))
for key, targetItem := range targetItems {
item, ok := newTargetItems[key]
assert.True(t, ok, "all target items should be found in new target item list")
if targetItem.CollectorName != "collector-3" {
assert.Equal(t, targetItem.CollectorName, item.CollectorName)
} else {
assert.Equal(t, "collector-4", item.CollectorName)
}
}
}
// Tests that the delta in number of targets per collector is less than 15% of an even distribution.
func TestCollectorBalanceWhenAddingAndRemovingAtRandom(t *testing.T) {
// prepare allocator with 3 collectors and a 'random' number of targets
s, _ := New("least-weighted", logger)
cols := MakeNCollectors(3, 0)
s.SetCollectors(cols)
targets := MakeNNewTargets(27, 3, 0)
s.SetTargets(targets)
// Divisor needed to get 15%
divisor := 6.7
targetItemLen := len(s.TargetItems())
collectors := s.Collectors()
count := targetItemLen / len(collectors)
percent := float64(targetItemLen) / divisor
// test
for _, i := range collectors {
assert.InDelta(t, i.NumTargets, count, percent)
}
// remove about half of the targets at 'random'
toDelete := len(targets) / 2
counter := 0
for index := range targets {
shouldDelete := rand.Intn(toDelete) //nolint:gosec
if counter < shouldDelete {
delete(targets, index)
}
counter++
}
s.SetTargets(targets)
targetItemLen = len(s.TargetItems())
collectors = s.Collectors()
count = targetItemLen / len(collectors)
percent = float64(targetItemLen) / divisor
// test
for _, i := range collectors {
assert.InDelta(t, i.NumTargets, count, math.Round(percent))
}
// adding targets at 'random'
for _, item := range MakeNNewTargets(13, 3, 100) {
targets[item.Hash()] = item
}
s.SetTargets(targets)
targetItemLen = len(s.TargetItems())
collectors = s.Collectors()
count = targetItemLen / len(collectors)
percent = float64(targetItemLen) / divisor
// test
for _, i := range collectors {
assert.InDelta(t, i.NumTargets, count, math.Round(percent))
}
}


@ -0,0 +1,133 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package allocation
import (
"errors"
"fmt"
"github.com/buraksezer/consistent"
"github.com/go-logr/logr"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
type AllocatorProvider func(log logr.Logger, opts ...AllocationOption) Allocator
var (
registry = map[string]AllocatorProvider{}
// TargetsPerCollector records how many targets have been assigned to each collector.
// It is currently the responsibility of the strategy to track this information.
TargetsPerCollector = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "opentelemetry_allocator_targets_per_collector",
Help: "The number of targets for each collector.",
}, []string{"collector_name", "strategy"})
CollectorsAllocatable = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "opentelemetry_allocator_collectors_allocatable",
Help: "Number of collectors the allocator is able to allocate to.",
}, []string{"strategy"})
TimeToAssign = promauto.NewHistogramVec(prometheus.HistogramOpts{
Name: "opentelemetry_allocator_time_to_allocate",
Help: "The time it takes to allocate",
}, []string{"method", "strategy"})
targetsRemaining = promauto.NewCounter(prometheus.CounterOpts{
Name: "opentelemetry_allocator_targets_remaining",
Help: "Number of targets kept after filtering.",
})
)
type AllocationOption func(Allocator)
type Filter interface {
Apply(map[string]*target.Item) map[string]*target.Item
}
func WithFilter(filter Filter) AllocationOption {
return func(allocator Allocator) {
allocator.SetFilter(filter)
}
}
func RecordTargetsKept(targets map[string]*target.Item) {
targetsRemaining.Add(float64(len(targets)))
}
func New(name string, log logr.Logger, opts ...AllocationOption) (Allocator, error) {
if p, ok := registry[name]; ok {
return p(log.WithValues("allocator", name), opts...), nil
}
return nil, fmt.Errorf("unregistered strategy: %s", name)
}
func Register(name string, provider AllocatorProvider) error {
if _, ok := registry[name]; ok {
return errors.New("already registered")
}
registry[name] = provider
return nil
}
func GetRegisteredAllocatorNames() []string {
var names []string
for s := range registry {
names = append(names, s)
}
return names
}
type Allocator interface {
SetCollectors(collectors map[string]*Collector)
SetTargets(targets map[string]*target.Item)
TargetItems() map[string]*target.Item
Collectors() map[string]*Collector
GetTargetsForCollectorAndJob(collector string, job string) []*target.Item
SetFilter(filter Filter)
}
var _ consistent.Member = Collector{}
// Collector is a struct that holds collector information.
// This struct will be parsed into an endpoint with collector and job info.
// It can be extended with information such as annotations and labels in the future.
type Collector struct {
Name string
NumTargets int
}
func (c Collector) Hash() string {
return c.Name
}
func (c Collector) String() string {
return c.Name
}
func NewCollector(name string) *Collector {
return &Collector{Name: name}
}
func init() {
err := Register(leastWeightedStrategyName, newLeastWeightedAllocator)
if err != nil {
panic(err)
}
err = Register(consistentHashingStrategyName, newConsistentHashingAllocator)
if err != nil {
panic(err)
}
}
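The registry above is the extension point for strategies. As a hedged sketch (not part of this commit), a caller can register a provider under a new name and construct it through New; the alias below is hypothetical, while real strategies register themselves in init() as shown above.
package main
import (
"fmt"
"github.com/go-logr/logr"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
)
func main() {
// Register a hypothetical alias that delegates to the least-weighted provider.
err := allocation.Register("least-weighted-alias", func(log logr.Logger, opts ...allocation.AllocationOption) allocation.Allocator {
a, _ := allocation.New("least-weighted", log, opts...)
return a
})
if err != nil {
panic(err) // Register rejects duplicate names.
}
fmt.Println(allocation.GetRegisteredAllocatorNames())
// New fails with "unregistered strategy" for unknown names.
a, err := allocation.New("least-weighted-alias", logf.Log.WithName("alias"))
fmt.Println(a != nil, err)
}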


@ -0,0 +1,136 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package allocation
import (
"fmt"
"reflect"
"testing"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/diff"
)
func BenchmarkGetAllTargetsByCollectorAndJob(b *testing.B) {
var table = []struct {
numCollectors int
numJobs int
}{
{numCollectors: 100, numJobs: 100},
{numCollectors: 100, numJobs: 1000},
{numCollectors: 100, numJobs: 10000},
{numCollectors: 100, numJobs: 100000},
{numCollectors: 1000, numJobs: 100},
{numCollectors: 1000, numJobs: 1000},
{numCollectors: 1000, numJobs: 10000},
{numCollectors: 1000, numJobs: 100000},
}
for _, s := range GetRegisteredAllocatorNames() {
for _, v := range table {
a, err := New(s, logger)
if err != nil {
b.Log(err)
b.Fail()
}
cols := MakeNCollectors(v.numCollectors, 0)
jobs := MakeNNewTargets(v.numJobs, v.numCollectors, 0)
a.SetCollectors(cols)
a.SetTargets(jobs)
b.Run(fmt.Sprintf("%s_num_cols_%d_num_jobs_%d", s, v.numCollectors, v.numJobs), func(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
a.GetTargetsForCollectorAndJob(fmt.Sprintf("collector-%d", v.numCollectors/2), fmt.Sprintf("test-job-%d", v.numJobs/2))
}
})
}
}
}
func Benchmark_Setting(b *testing.B) {
var table = []struct {
numCollectors int
numTargets int
}{
{numCollectors: 100, numTargets: 100},
{numCollectors: 100, numTargets: 1000},
{numCollectors: 100, numTargets: 10000},
{numCollectors: 100, numTargets: 100000},
{numCollectors: 1000, numTargets: 100},
{numCollectors: 1000, numTargets: 1000},
{numCollectors: 1000, numTargets: 10000},
{numCollectors: 1000, numTargets: 100000},
}
for _, s := range GetRegisteredAllocatorNames() {
for _, v := range table {
a, _ := New(s, logger)
cols := MakeNCollectors(v.numCollectors, 0)
targets := MakeNNewTargets(v.numTargets, v.numCollectors, 0)
b.Run(fmt.Sprintf("%s_num_cols_%d_num_jobs_%d", s, v.numCollectors, v.numTargets), func(b *testing.B) {
b.ReportAllocs()
for i := 0; i < b.N; i++ {
a.SetCollectors(cols)
a.SetTargets(targets)
}
})
}
}
}
func TestCollectorDiff(t *testing.T) {
collector0 := NewCollector("collector-0")
collector1 := NewCollector("collector-1")
collector2 := NewCollector("collector-2")
collector3 := NewCollector("collector-3")
collector4 := NewCollector("collector-4")
type args struct {
current map[string]*Collector
new map[string]*Collector
}
tests := []struct {
name string
args args
want diff.Changes[*Collector]
}{
{
name: "diff two collector maps",
args: args{
current: map[string]*Collector{
"collector-0": collector0,
"collector-1": collector1,
"collector-2": collector2,
"collector-3": collector3,
},
new: map[string]*Collector{
"collector-0": collector0,
"collector-1": collector1,
"collector-2": collector2,
"collector-4": collector4,
},
},
want: diff.NewChanges(map[string]*Collector{
"collector-4": collector4,
}, map[string]*Collector{
"collector-3": collector3,
}),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := diff.Maps(tt.args.current, tt.args.new); !reflect.DeepEqual(got, tt.want) {
t.Errorf("DiffMaps() = %v, want %v", got, tt.want)
}
})
}
}


@ -0,0 +1,144 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector
import (
"context"
"os"
"time"
"github.com/go-logr/logr"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
)
const (
watcherTimeout = 15 * time.Minute
)
var (
ns = os.Getenv("OTELCOL_NAMESPACE")
collectorsDiscovered = promauto.NewGauge(prometheus.GaugeOpts{
Name: "opentelemetry_allocator_collectors_discovered",
Help: "Number of collectors discovered.",
})
)
type Client struct {
log logr.Logger
k8sClient kubernetes.Interface
close chan struct{}
}
func NewClient(logger logr.Logger, kubeConfig *rest.Config) (*Client, error) {
clientset, err := kubernetes.NewForConfig(kubeConfig)
if err != nil {
return &Client{}, err
}
return &Client{
log: logger.WithValues("component", "opentelemetry-targetallocator"),
k8sClient: clientset,
close: make(chan struct{}),
}, nil
}
func (k *Client) Watch(ctx context.Context, labelMap map[string]string, fn func(collectors map[string]*allocation.Collector)) error {
collectorMap := map[string]*allocation.Collector{}
opts := metav1.ListOptions{
LabelSelector: labels.SelectorFromSet(labelMap).String(),
}
pods, err := k.k8sClient.CoreV1().Pods(ns).List(ctx, opts)
if err != nil {
k.log.Error(err, "Pod failure")
os.Exit(1)
}
for i := range pods.Items {
pod := pods.Items[i]
if pod.GetObjectMeta().GetDeletionTimestamp() == nil {
collectorMap[pod.Name] = allocation.NewCollector(pod.Name)
}
}
fn(collectorMap)
for {
if !k.restartWatch(ctx, opts, collectorMap, fn) {
return nil
}
}
}
func (k *Client) restartWatch(ctx context.Context, opts metav1.ListOptions, collectorMap map[string]*allocation.Collector, fn func(collectors map[string]*allocation.Collector)) bool {
// add timeout to the context before calling Watch
ctx, cancel := context.WithTimeout(ctx, watcherTimeout)
defer cancel()
watcher, err := k.k8sClient.CoreV1().Pods(ns).Watch(ctx, opts)
if err != nil {
k.log.Error(err, "unable to create collector pod watcher")
return false
}
k.log.Info("Successfully started a collector pod watcher")
if msg := runWatch(ctx, k, watcher.ResultChan(), collectorMap, fn); msg != "" {
k.log.Info("Collector pod watch event stopped " + msg)
return false
}
return true
}
func runWatch(ctx context.Context, k *Client, c <-chan watch.Event, collectorMap map[string]*allocation.Collector, fn func(collectors map[string]*allocation.Collector)) string {
for {
collectorsDiscovered.Set(float64(len(collectorMap)))
select {
case <-k.close:
return "kubernetes client closed"
case <-ctx.Done():
return ""
case event, ok := <-c:
if !ok {
k.log.Info("No event found. Restarting watch routine")
return ""
}
pod, ok := event.Object.(*v1.Pod)
if !ok {
k.log.Info("No pod found in event Object. Restarting watch routine")
return ""
}
switch event.Type { //nolint:exhaustive
case watch.Added:
collectorMap[pod.Name] = allocation.NewCollector(pod.Name)
case watch.Deleted:
delete(collectorMap, pod.Name)
}
fn(collectorMap)
}
}
}
func (k *Client) Close() {
close(k.close)
}
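A minimal wiring sketch (not part of this commit), assuming kubeconfig or in-cluster credentials are available and the OTELCOL_NAMESPACE environment variable is set, since Watch lists pods from that namespace. The allocator's SetCollectors matches the callback signature, so collector changes flow straight into it.
package main
import (
ctrl "sigs.k8s.io/controller-runtime"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/collector"
)
func main() {
log := ctrl.Log.WithName("example")
restCfg, err := ctrl.GetConfig() // kubeconfig or in-cluster config
if err != nil {
panic(err)
}
allocator, err := allocation.New("least-weighted", log)
if err != nil {
panic(err)
}
client, err := collector.NewClient(log, restCfg)
if err != nil {
panic(err)
}
defer client.Close()
// Watch lists matching pods once, then keeps re-establishing the pod watch,
// invoking the callback with the full collector map on every add or delete.
labelMap := map[string]string{"app.kubernetes.io/managed-by": "opentelemetry-operator"}
if err := client.Watch(ctrl.SetupSignalHandler(), labelMap, allocator.SetCollectors); err != nil {
panic(err)
}
}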


@ -0,0 +1,221 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package collector
import (
"context"
"fmt"
"os"
"sync"
"testing"
"time"
"k8s.io/apimachinery/pkg/watch"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/kubernetes/fake"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
)
var logger = logf.Log.WithName("collector-unit-tests")
func getTestClient() (Client, watch.Interface) {
kubeClient := Client{
k8sClient: fake.NewSimpleClientset(),
close: make(chan struct{}),
log: logger,
}
labelMap := map[string]string{
"app.kubernetes.io/instance": "default.test",
"app.kubernetes.io/managed-by": "opentelemetry-operator",
}
opts := metav1.ListOptions{
LabelSelector: labels.SelectorFromSet(labelMap).String(),
}
watcher, err := kubeClient.k8sClient.CoreV1().Pods("test-ns").Watch(context.Background(), opts)
if err != nil {
fmt.Printf("failed to setup a Collector Pod watcher: %v", err)
os.Exit(1)
}
return kubeClient, watcher
}
func pod(name string) *v1.Pod {
labelSet := make(map[string]string)
labelSet["app.kubernetes.io/instance"] = "default.test"
labelSet["app.kubernetes.io/managed-by"] = "opentelemetry-operator"
return &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: "test-ns",
Labels: labelSet,
},
}
}
func Test_runWatch(t *testing.T) {
type args struct {
kubeFn func(t *testing.T, client Client, group *sync.WaitGroup)
collectorMap map[string]*allocation.Collector
}
tests := []struct {
name string
args args
want map[string]*allocation.Collector
}{
{
name: "pod add",
args: args{
kubeFn: func(t *testing.T, client Client, group *sync.WaitGroup) {
for _, k := range []string{"test-pod1", "test-pod2", "test-pod3"} {
p := pod(k)
group.Add(1)
_, err := client.k8sClient.CoreV1().Pods("test-ns").Create(context.Background(), p, metav1.CreateOptions{})
assert.NoError(t, err)
}
},
collectorMap: map[string]*allocation.Collector{},
},
want: map[string]*allocation.Collector{
"test-pod1": {
Name: "test-pod1",
},
"test-pod2": {
Name: "test-pod2",
},
"test-pod3": {
Name: "test-pod3",
},
},
},
{
name: "pod delete",
args: args{
kubeFn: func(t *testing.T, client Client, group *sync.WaitGroup) {
for _, k := range []string{"test-pod2", "test-pod3"} {
group.Add(1)
err := client.k8sClient.CoreV1().Pods("test-ns").Delete(context.Background(), k, metav1.DeleteOptions{})
assert.NoError(t, err)
}
},
collectorMap: map[string]*allocation.Collector{
"test-pod1": {
Name: "test-pod1",
},
"test-pod2": {
Name: "test-pod2",
},
"test-pod3": {
Name: "test-pod3",
},
},
},
want: map[string]*allocation.Collector{
"test-pod1": {
Name: "test-pod1",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
kubeClient, watcher := getTestClient()
defer func() {
close(kubeClient.close)
watcher.Stop()
}()
var wg sync.WaitGroup
actual := make(map[string]*allocation.Collector)
for _, k := range tt.args.collectorMap {
p := pod(k.Name)
_, err := kubeClient.k8sClient.CoreV1().Pods("test-ns").Create(context.Background(), p, metav1.CreateOptions{})
wg.Add(1)
assert.NoError(t, err)
}
go runWatch(context.Background(), &kubeClient, watcher.ResultChan(), map[string]*allocation.Collector{}, func(colMap map[string]*allocation.Collector) {
actual = colMap
wg.Done()
})
tt.args.kubeFn(t, kubeClient, &wg)
wg.Wait()
assert.Len(t, actual, len(tt.want))
assert.Equal(t, actual, tt.want)
})
}
}
// This tests runWatch in the cases of the watcher channel closing and the watcher timing out.
func Test_closeChannel(t *testing.T) {
tests := []struct {
description string
isCloseChannel bool
timeoutSeconds time.Duration
}{
{
// event is triggered by channel closing.
description: "close_channel",
isCloseChannel: true,
// channel should be closed before this timeout occurs
timeoutSeconds: 10 * time.Second,
},
{
// event triggered by timeout.
description: "watcher_timeout",
isCloseChannel: false,
timeoutSeconds: 0 * time.Second,
},
}
for _, tc := range tests {
t.Run(tc.description, func(t *testing.T) {
kubeClient, watcher := getTestClient()
defer func() {
close(kubeClient.close)
watcher.Stop()
}()
var wg sync.WaitGroup
wg.Add(1)
terminated := false
go func(watcher watch.Interface) {
defer wg.Done()
ctx, cancel := context.WithTimeout(context.Background(), tc.timeoutSeconds)
defer cancel()
if msg := runWatch(ctx, &kubeClient, watcher.ResultChan(), map[string]*allocation.Collector{}, func(colMap map[string]*allocation.Collector) {}); msg != "" {
terminated = true
return
}
}(watcher)
if tc.isCloseChannel {
// stop pod watcher to trigger event.
watcher.Stop()
}
wg.Wait()
assert.False(t, terminated)
})
}
}


@ -0,0 +1,155 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"errors"
"flag"
"fmt"
"io/fs"
"os"
"path/filepath"
"time"
"github.com/go-logr/logr"
"github.com/prometheus/common/model"
promconfig "github.com/prometheus/prometheus/config"
_ "github.com/prometheus/prometheus/discovery/install"
"github.com/spf13/pflag"
"gopkg.in/yaml.v2"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
)
const DefaultResyncTime = 5 * time.Minute
const DefaultConfigFilePath string = "/conf/targetallocator.yaml"
const DefaultCRScrapeInterval model.Duration = model.Duration(time.Second * 30)
type Config struct {
LabelSelector map[string]string `yaml:"label_selector,omitempty"`
Config *promconfig.Config `yaml:"config"`
AllocationStrategy *string `yaml:"allocation_strategy,omitempty"`
FilterStrategy *string `yaml:"filter_strategy,omitempty"`
PrometheusCR PrometheusCRConfig `yaml:"prometheus_cr,omitempty"`
PodMonitorSelector map[string]string `yaml:"pod_monitor_selector,omitempty"`
ServiceMonitorSelector map[string]string `yaml:"service_monitor_selector,omitempty"`
}
type PrometheusCRConfig struct {
ScrapeInterval model.Duration `yaml:"scrape_interval,omitempty"`
}
func (c Config) GetAllocationStrategy() string {
if c.AllocationStrategy != nil {
return *c.AllocationStrategy
}
return "least-weighted"
}
func (c Config) GetTargetsFilterStrategy() string {
if c.FilterStrategy != nil {
return *c.FilterStrategy
}
return ""
}
type PrometheusCRWatcherConfig struct {
Enabled *bool
}
type CLIConfig struct {
ListenAddr *string
ConfigFilePath *string
ClusterConfig *rest.Config
// KubeConfigFilePath is empty if in-cluster configuration is in use
KubeConfigFilePath string
RootLogger logr.Logger
PromCRWatcherConf PrometheusCRWatcherConfig
}
func Load(file string) (Config, error) {
cfg := createDefaultConfig()
if err := unmarshal(&cfg, file); err != nil {
return Config{}, err
}
return cfg, nil
}
func unmarshal(cfg *Config, configFile string) error {
yamlFile, err := os.ReadFile(configFile)
if err != nil {
return err
}
if err = yaml.UnmarshalStrict(yamlFile, cfg); err != nil {
return fmt.Errorf("error unmarshaling YAML: %w", err)
}
return nil
}
func createDefaultConfig() Config {
return Config{
PrometheusCR: PrometheusCRConfig{
ScrapeInterval: DefaultCRScrapeInterval,
},
}
}
func ParseCLI() (CLIConfig, error) {
opts := zap.Options{}
opts.BindFlags(flag.CommandLine)
cLIConf := CLIConfig{
ListenAddr: pflag.String("listen-addr", ":8080", "The address where this service serves."),
ConfigFilePath: pflag.String("config-file", DefaultConfigFilePath, "The path to the config file."),
PromCRWatcherConf: PrometheusCRWatcherConfig{
Enabled: pflag.Bool("enable-prometheus-cr-watcher", false, "Enable Prometheus CRs as target sources"),
},
}
kubeconfigPath := pflag.String("kubeconfig-path", filepath.Join(homedir.HomeDir(), ".kube", "config"), "absolute path to the kubeconfig file")
pflag.Parse()
cLIConf.RootLogger = zap.New(zap.UseFlagOptions(&opts))
klog.SetLogger(cLIConf.RootLogger)
ctrl.SetLogger(cLIConf.RootLogger)
clusterConfig, err := clientcmd.BuildConfigFromFlags("", *kubeconfigPath)
cLIConf.KubeConfigFilePath = *kubeconfigPath
if err != nil {
pathError := &fs.PathError{}
if ok := errors.As(err, &pathError); !ok {
return CLIConfig{}, err
}
clusterConfig, err = rest.InClusterConfig()
if err != nil {
return CLIConfig{}, err
}
cLIConf.KubeConfigFilePath = "" // reset as we use the in-cluster configuration
}
cLIConf.ClusterConfig = clusterConfig
return cLIConf, nil
}
// ValidateConfig validates the CLI and file configs together.
func ValidateConfig(config *Config, cliConfig *CLIConfig) error {
scrapeConfigsPresent := (config.Config != nil && len(config.Config.ScrapeConfigs) > 0)
if !(*cliConfig.PromCRWatcherConf.Enabled || scrapeConfigsPresent) {
return fmt.Errorf("at least one scrape config must be defined, or Prometheus CR watching must be enabled")
}
return nil
}
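Taken together, the intended startup flow is roughly the sketch below (not part of this commit): parse the CLI flags, load the YAML config file, then validate the two against each other before constructing the allocator.
package main
import (
"os"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/config"
)
func main() {
cliCfg, err := config.ParseCLI()
if err != nil {
os.Exit(1)
}
cfg, err := config.Load(*cliCfg.ConfigFilePath)
if err != nil {
cliCfg.RootLogger.Error(err, "failed to load the config file")
os.Exit(1)
}
// Either the file defines scrape configs or --enable-prometheus-cr-watcher is set.
if err := config.ValidateConfig(&cfg, &cliCfg); err != nil {
cliCfg.RootLogger.Error(err, "invalid configuration")
os.Exit(1)
}
cliCfg.RootLogger.Info("resolved strategies",
"allocation", cfg.GetAllocationStrategy(),
"filter", cfg.GetTargetsFilterStrategy())
}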


@ -0,0 +1,226 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"fmt"
"testing"
"time"
commonconfig "github.com/prometheus/common/config"
promconfig "github.com/prometheus/prometheus/config"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/discovery"
"github.com/prometheus/prometheus/discovery/file"
"github.com/stretchr/testify/assert"
)
func TestLoad(t *testing.T) {
type args struct {
file string
}
tests := []struct {
name string
args args
want Config
wantErr assert.ErrorAssertionFunc
}{
{
name: "file sd load",
args: args{
file: "./testdata/config_test.yaml",
},
want: Config{
LabelSelector: map[string]string{
"app.kubernetes.io/instance": "default.test",
"app.kubernetes.io/managed-by": "opentelemetry-operator",
},
PrometheusCR: PrometheusCRConfig{
ScrapeInterval: model.Duration(time.Second * 60),
},
Config: &promconfig.Config{
GlobalConfig: promconfig.GlobalConfig{
ScrapeInterval: model.Duration(60 * time.Second),
ScrapeTimeout: model.Duration(10 * time.Second),
EvaluationInterval: model.Duration(60 * time.Second),
},
ScrapeConfigs: []*promconfig.ScrapeConfig{
{
JobName: "prometheus",
HonorTimestamps: true,
ScrapeInterval: model.Duration(60 * time.Second),
ScrapeTimeout: model.Duration(10 * time.Second),
MetricsPath: "/metrics",
Scheme: "http",
HTTPClientConfig: commonconfig.HTTPClientConfig{
FollowRedirects: true,
EnableHTTP2: true,
},
ServiceDiscoveryConfigs: []discovery.Config{
&file.SDConfig{
Files: []string{"./file_sd_test.json"},
RefreshInterval: model.Duration(5 * time.Minute),
},
discovery.StaticConfig{
{
Targets: []model.LabelSet{
{model.AddressLabel: "prom.domain:9001"},
{model.AddressLabel: "prom.domain:9002"},
{model.AddressLabel: "prom.domain:9003"},
},
Labels: model.LabelSet{
"my": "label",
},
Source: "0",
},
},
},
},
},
},
},
wantErr: assert.NoError,
},
{
name: "no config",
args: args{
file: "./testdata/no_config.yaml",
},
want: createDefaultConfig(),
wantErr: assert.NoError,
},
{
name: "service monitor pod monitor selector",
args: args{
file: "./testdata/pod_service_selector_test.yaml",
},
want: Config{
LabelSelector: map[string]string{
"app.kubernetes.io/instance": "default.test",
"app.kubernetes.io/managed-by": "opentelemetry-operator",
},
PrometheusCR: PrometheusCRConfig{
ScrapeInterval: DefaultCRScrapeInterval,
},
Config: &promconfig.Config{
GlobalConfig: promconfig.GlobalConfig{
ScrapeInterval: model.Duration(60 * time.Second),
ScrapeTimeout: model.Duration(10 * time.Second),
EvaluationInterval: model.Duration(60 * time.Second),
},
ScrapeConfigs: []*promconfig.ScrapeConfig{
{
JobName: "prometheus",
HonorTimestamps: true,
ScrapeInterval: model.Duration(60 * time.Second),
ScrapeTimeout: model.Duration(10 * time.Second),
MetricsPath: "/metrics",
Scheme: "http",
HTTPClientConfig: commonconfig.HTTPClientConfig{
FollowRedirects: true,
EnableHTTP2: true,
},
ServiceDiscoveryConfigs: []discovery.Config{
discovery.StaticConfig{
{
Targets: []model.LabelSet{
{model.AddressLabel: "prom.domain:9001"},
{model.AddressLabel: "prom.domain:9002"},
{model.AddressLabel: "prom.domain:9003"},
},
Labels: model.LabelSet{
"my": "label",
},
Source: "0",
},
},
},
},
},
},
PodMonitorSelector: map[string]string{
"release": "test",
},
ServiceMonitorSelector: map[string]string{
"release": "test",
},
},
wantErr: assert.NoError,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := Load(tt.args.file)
if !tt.wantErr(t, err, fmt.Sprintf("Load(%v)", tt.args.file)) {
return
}
assert.Equalf(t, tt.want, got, "Load(%v)", tt.args.file)
})
}
}
func TestValidateConfig(t *testing.T) {
enabled := true
disabled := false
testCases := []struct {
name string
cliConfig CLIConfig
fileConfig Config
expectedErr error
}{
{
name: "promCR enabled, no Prometheus config",
cliConfig: CLIConfig{PromCRWatcherConf: PrometheusCRWatcherConfig{Enabled: &enabled}},
fileConfig: Config{Config: nil},
expectedErr: nil,
},
{
name: "promCR disabled, no Prometheus config",
cliConfig: CLIConfig{PromCRWatcherConf: PrometheusCRWatcherConfig{Enabled: &disabled}},
fileConfig: Config{Config: nil},
expectedErr: fmt.Errorf("at least one scrape config must be defined, or Prometheus CR watching must be enabled"),
},
{
name: "promCR disabled, Prometheus config present, no scrapeConfigs",
cliConfig: CLIConfig{PromCRWatcherConf: PrometheusCRWatcherConfig{Enabled: &disabled}},
fileConfig: Config{Config: &promconfig.Config{}},
expectedErr: fmt.Errorf("at least one scrape config must be defined, or Prometheus CR watching must be enabled"),
},
{
name: "promCR disabled, Prometheus config present, scrapeConfigs present",
cliConfig: CLIConfig{PromCRWatcherConf: PrometheusCRWatcherConfig{Enabled: &disabled}},
fileConfig: Config{
Config: &promconfig.Config{ScrapeConfigs: []*promconfig.ScrapeConfig{{}}},
},
expectedErr: nil,
},
{
name: "promCR enabled, Prometheus config present, scrapeConfigs present",
cliConfig: CLIConfig{PromCRWatcherConf: PrometheusCRWatcherConfig{Enabled: &enabled}},
fileConfig: Config{
Config: &promconfig.Config{ScrapeConfigs: []*promconfig.ScrapeConfig{{}}},
},
expectedErr: nil,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
err := ValidateConfig(&tc.fileConfig, &tc.cliConfig)
assert.Equal(t, tc.expectedErr, err)
})
}
}

otelcollector/otel-allocator/config/testdata/config_test.yaml vendored Normal file

@ -0,0 +1,17 @@
label_selector:
app.kubernetes.io/instance: default.test
app.kubernetes.io/managed-by: opentelemetry-operator
prometheus_cr:
scrape_interval: 60s
config:
scrape_configs:
- job_name: prometheus
file_sd_configs:
- files:
- ./file_sd_test.json
static_configs:
- targets: ["prom.domain:9001", "prom.domain:9002", "prom.domain:9003"]
labels:
my: label

otelcollector/otel-allocator/config/testdata/file_sd_test.json vendored Normal file

@ -0,0 +1,18 @@
[
{
"labels": {
"job": "node"
},
"targets": [
"promfile.domain:1001"
]
},
{
"labels": {
"foo1": "bar1"
},
"targets": [
"promfile.domain:3000"
]
}
]

otelcollector/otel-allocator/config/testdata/no_config.yaml vendored Normal file


@ -0,0 +1,14 @@
label_selector:
app.kubernetes.io/instance: default.test
app.kubernetes.io/managed-by: opentelemetry-operator
pod_monitor_selector:
release: test
service_monitor_selector:
release: test
config:
scrape_configs:
- job_name: prometheus
static_configs:
- targets: ["prom.domain:9001", "prom.domain:9002", "prom.domain:9003"]
labels:
my: label


@ -0,0 +1,63 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package diff
// Changes is the result of the difference between two maps: the items that were added and the items that were removed.
// It is used to reconcile state differences.
type Changes[T Hasher] struct {
additions map[string]T
removals map[string]T
}
type Hasher interface {
Hash() string
}
func NewChanges[T Hasher](additions map[string]T, removals map[string]T) Changes[T] {
return Changes[T]{additions: additions, removals: removals}
}
func (c Changes[T]) Additions() map[string]T {
return c.additions
}
func (c Changes[T]) Removals() map[string]T {
return c.removals
}
// Maps generates Changes for two maps with the same type signature by first checking for additions and changed
// values, and then checking for removals.
// TODO: This doesn't need to create maps, it can return slices only. This function doesn't need to insert the values.
func Maps[T Hasher](current, new map[string]T) Changes[T] {
additions := map[string]T{}
removals := map[string]T{}
for key, newValue := range new {
if currentValue, found := current[key]; !found {
additions[key] = newValue
} else if currentValue.Hash() != newValue.Hash() {
additions[key] = newValue
removals[key] = currentValue
}
}
for key, value := range current {
if _, found := new[key]; !found {
removals[key] = value
}
}
return Changes[T]{
additions: additions,
removals: removals,
}
}
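A small sketch (not part of this commit) of the Maps semantics: a key whose Hash changed appears in both Additions (new value) and Removals (old value), while keys present on only one side land in exactly one of the two. The hashedString type is hypothetical, mirroring the HasherString helper used by the tests.
package main
import (
"fmt"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/diff"
)
// hashedString hashes to its own value.
type hashedString string
func (s hashedString) Hash() string { return string(s) }
func main() {
current := map[string]hashedString{"a": "1", "b": "2"}
desired := map[string]hashedString{"a": "1", "b": "3", "c": "4"}
changes := diff.Maps(current, desired)
fmt.Println(changes.Additions()) // map[b:3 c:4]: "b" changed, "c" is new
fmt.Println(changes.Removals())  // map[b:2]: the old value of "b"
}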


@ -0,0 +1,108 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package diff
import (
"reflect"
"testing"
)
type HasherString string
func (s HasherString) Hash() string {
return string(s)
}
func TestDiffMaps(t *testing.T) {
type args struct {
current map[string]Hasher
new map[string]Hasher
}
tests := []struct {
name string
args args
want Changes[Hasher]
}{
{
name: "basic replacement",
args: args{
current: map[string]Hasher{
"current": HasherString("one"),
},
new: map[string]Hasher{
"new": HasherString("another"),
},
},
want: Changes[Hasher]{
additions: map[string]Hasher{
"new": HasherString("another"),
},
removals: map[string]Hasher{
"current": HasherString("one"),
},
},
},
{
name: "single addition",
args: args{
current: map[string]Hasher{
"current": HasherString("one"),
},
new: map[string]Hasher{
"current": HasherString("one"),
"new": HasherString("another"),
},
},
want: Changes[Hasher]{
additions: map[string]Hasher{
"new": HasherString("another"),
},
removals: map[string]Hasher{},
},
},
{
name: "value change",
args: args{
current: map[string]Hasher{
"k1": HasherString("v1"),
"k2": HasherString("v2"),
"change": HasherString("before"),
},
new: map[string]Hasher{
"k1": HasherString("v1"),
"k3": HasherString("v3"),
"change": HasherString("after"),
},
},
want: Changes[Hasher]{
additions: map[string]Hasher{
"k3": HasherString("v3"),
"change": HasherString("after"),
},
removals: map[string]Hasher{
"k2": HasherString("v2"),
"change": HasherString("before"),
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := Maps(tt.args.current, tt.args.new); !reflect.DeepEqual(got, tt.want) {
t.Errorf("DiffMaps() = %v, want %v", got, tt.want)
}
})
}
}


@ -0,0 +1,215 @@
module github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator
go 1.20
replace github.com/prometheus-operator/prometheus-operator => ./prometheus-operator
replace github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring => ./prometheus-operator/pkg/apis/monitoring
replace github.com/prometheus-operator/prometheus-operator/pkg/client => ./prometheus-operator/pkg/client
require (
github.com/buraksezer/consistent v0.10.0
github.com/cespare/xxhash/v2 v2.2.0
github.com/cnf/structhash v0.0.0-20201127153200-e1b16c1ebc08
github.com/fsnotify/fsnotify v1.6.0
github.com/ghodss/yaml v1.0.0
github.com/gin-gonic/gin v1.9.1
github.com/go-kit/log v0.2.1
github.com/go-logr/logr v1.2.4
github.com/json-iterator/go v1.1.12
github.com/oklog/run v1.1.0
github.com/prometheus-operator/prometheus-operator v0.67.1
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.67.1
github.com/prometheus-operator/prometheus-operator/pkg/client v0.67.1
github.com/prometheus/client_golang v1.16.0
github.com/prometheus/common v0.44.0
github.com/prometheus/prometheus v0.47.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.4
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.28.2
k8s.io/apimachinery v0.28.2
k8s.io/client-go v0.28.2
k8s.io/klog/v2 v2.100.1
sigs.k8s.io/controller-runtime v0.16.2
)
require (
cloud.google.com/go/compute v1.22.0 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
github.com/Azure/azure-sdk-for-go v65.0.0+incompatible // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.29 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.23 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/aws/aws-sdk-go v1.44.302 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bytedance/sonic v1.9.1 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dennwc/varint v1.0.0 // indirect
github.com/digitalocean/godo v1.99.0 // indirect
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/docker/docker v24.0.4+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/edsrzf/mmap-go v1.1.0 // indirect
github.com/efficientgo/core v1.0.0-rc.2 // indirect
github.com/emicklei/go-restful/v3 v3.10.2 // indirect
github.com/envoyproxy/go-control-plane v0.11.1 // indirect
github.com/envoyproxy/protoc-gen-validate v1.0.2 // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.6.0 // indirect
github.com/fatih/color v1.15.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.2 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-logfmt/logfmt v0.6.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-logr/zapr v1.2.4 // indirect
github.com/go-openapi/analysis v0.21.4 // indirect
github.com/go-openapi/errors v0.20.4 // indirect
github.com/go-openapi/jsonpointer v0.20.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/loads v0.21.2 // indirect
github.com/go-openapi/runtime v0.26.0 // indirect
github.com/go-openapi/spec v0.20.9 // indirect
github.com/go-openapi/strfmt v0.21.7 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-openapi/validate v0.22.1 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.14.0 // indirect
github.com/go-resty/resty/v2 v2.7.0 // indirect
github.com/go-zookeeper/zk v1.0.3 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/s2a-go v0.1.4 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.2.5 // indirect
github.com/googleapis/gax-go/v2 v2.12.0 // indirect
github.com/gophercloud/gophercloud v1.5.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd // indirect
github.com/hashicorp/consul/api v1.22.0 // indirect
github.com/hashicorp/cronexpr v1.1.2 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-hclog v1.5.0 // indirect
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.4 // indirect
github.com/hashicorp/go-rootcerts v1.0.2 // indirect
github.com/hashicorp/golang-lru v0.6.0 // indirect
github.com/hashicorp/nomad/api v0.0.0-20230718173136-3a687930bd3e // indirect
github.com/hashicorp/serf v0.10.1 // indirect
github.com/hetznercloud/hcloud-go/v2 v2.0.0 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/ionos-cloud/sdk-go/v6 v6.1.8 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/jpillora/backoff v1.0.0 // indirect
github.com/klauspost/compress v1.16.7 // indirect
github.com/klauspost/cpuid/v2 v2.2.4 // indirect
github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/leodido/go-urn v1.2.4 // indirect
github.com/linode/linodego v1.19.0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/metalmatze/signal v0.0.0-20210307161603-1c9aa721a97a // indirect
github.com/miekg/dns v1.1.55 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.2 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/ovh/go-ovh v1.4.1 // indirect
github.com/pelletier/go-toml/v2 v2.0.8 // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus-community/prom-label-proxy v0.7.0 // indirect
github.com/prometheus/alertmanager v0.25.1 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common/sigv4 v0.1.0 // indirect
github.com/prometheus/procfs v0.11.0 // indirect
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.20 // indirect
github.com/spf13/cobra v1.7.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.11 // indirect
github.com/vultr/govultr/v2 v2.17.2 // indirect
go.mongodb.org/mongo-driver v1.12.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/otel v1.16.0 // indirect
go.opentelemetry.io/otel/metric v1.16.0 // indirect
go.opentelemetry.io/otel/trace v1.16.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/goleak v1.2.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.25.0 // indirect
golang.org/x/arch v0.3.0 // indirect
golang.org/x/crypto v0.11.0 // indirect
golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1 // indirect
golang.org/x/mod v0.12.0 // indirect
golang.org/x/net v0.13.0 // indirect
golang.org/x/oauth2 v0.10.0 // indirect
golang.org/x/sync v0.3.0 // indirect
golang.org/x/sys v0.11.0 // indirect
golang.org/x/term v0.10.0 // indirect
golang.org/x/text v0.11.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/tools v0.11.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/api v0.132.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230717213848-3f92550aa753 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20230717213848-3f92550aa753 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230717213848-3f92550aa753 // indirect
google.golang.org/grpc v1.56.2 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.28.0 // indirect
k8s.io/component-base v0.28.1 // indirect
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
k8s.io/utils v0.0.0-20230711102312-30195339c3c7 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.3.0 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)
// An exclude directive is needed for k8s.io/client-go because Cortex (which
// is an indirect dependency through Thanos and PrometheusOperator) requires v12.0.0.
exclude k8s.io/client-go v12.0.0+incompatible

The diff for this file is not shown because of its large size.


@ -0,0 +1,13 @@
Copyright The OpenTelemetry Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,266 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"context"
"os"
"os/signal"
"strings"
"syscall"
gokitlog "github.com/go-kit/log"
"github.com/oklog/run"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prometheus/prometheus/discovery"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
ctrl "sigs.k8s.io/controller-runtime"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/allocation"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/collector"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/config"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/prehook"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/server"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
allocatorWatcher "github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/watcher"
)
var (
setupLog = ctrl.Log.WithName("setup")
eventsMetric = promauto.NewCounterVec(prometheus.CounterOpts{
Name: "opentelemetry_allocator_events",
Help: "Number of events in the channel.",
}, []string{"source"})
)
func main() {
var (
// allocatorPrehook will be nil if filterStrategy is not set or
// unrecognized. No filtering will be used in this case.
allocatorPrehook prehook.Hook
allocator allocation.Allocator
discoveryManager *discovery.Manager
collectorWatcher *collector.Client
fileWatcher allocatorWatcher.Watcher
promWatcher allocatorWatcher.Watcher
targetDiscoverer *target.Discoverer
discoveryCancel context.CancelFunc
runGroup run.Group
eventChan = make(chan allocatorWatcher.Event)
eventCloser = make(chan bool, 1)
interrupts = make(chan os.Signal, 1)
errChan = make(chan error)
)
// EULA statement is required for Arc extension
clusterResourceId := os.Getenv("CLUSTER")
if strings.EqualFold(clusterResourceId, "connectedclusters") {
setupLog.Info("MICROSOFT SOFTWARE LICENSE TERMS\n\nMICROSOFT Azure Arc-enabled Kubernetes\n\nThis software is licensed to you as part of your or your company's subscription license for Microsoft Azure Services. You may only use the software with Microsoft Azure Services and subject to the terms and conditions of the agreement under which you obtained Microsoft Azure Services. If you do not have an active subscription license for Microsoft Azure Services, you may not use the software. Microsoft Azure Legal Information: https://azure.microsoft.com/en-us/support/legal/")
}
cliConf, err := config.ParseCLI()
if err != nil {
setupLog.Error(err, "Failed to parse parameters")
os.Exit(1)
}
// Defaulting to consistent hashing
allocationStrategy := "consistent-hashing"
// The config file will not exist at startup, so don't attempt to load it (that would fail); start with defaults instead.
cfg := config.Config{
AllocationStrategy: &allocationStrategy,
LabelSelector: map[string]string{
"rsName": "ama-metrics",
"kubernetes.azure.com/managedby": "aks",
},
}
if validationErr := config.ValidateConfig(&cfg, &cliConf); validationErr != nil {
setupLog.Error(validationErr, "Invalid configuration")
}
cliConf.RootLogger.Info("Starting the Target Allocator")
ctx := context.Background()
log := ctrl.Log.WithName("allocator")
allocatorPrehook = prehook.New(cfg.GetTargetsFilterStrategy(), log)
allocator, err = allocation.New(cfg.GetAllocationStrategy(), log, allocation.WithFilter(allocatorPrehook))
if err != nil {
setupLog.Error(err, "Unable to initialize allocation strategy")
os.Exit(1)
}
srv := server.NewServer(log, allocator, cliConf.ListenAddr)
discoveryCtx, discoveryCancel := context.WithCancel(ctx)
discoveryManager = discovery.NewManager(discoveryCtx, gokitlog.NewNopLogger())
targetDiscoverer = target.NewDiscoverer(log, discoveryManager, allocatorPrehook, srv)
collectorWatcher, collectorWatcherErr := collector.NewClient(log, cliConf.ClusterConfig)
if collectorWatcherErr != nil {
setupLog.Error(collectorWatcherErr, "Unable to initialize collector watcher")
os.Exit(1)
}
fileWatcher, err = allocatorWatcher.NewFileWatcher(setupLog.WithName("file-watcher"), cliConf)
if err != nil {
setupLog.Error(err, "Can't start the file watcher")
os.Exit(1)
}
signal.Notify(interrupts, os.Interrupt, syscall.SIGINT, syscall.SIGTERM)
defer close(interrupts)
if *cliConf.PromCRWatcherConf.Enabled {
promWatcher, err = allocatorWatcher.NewPrometheusCRWatcher(setupLog.WithName("prometheus-cr-watcher"), cfg, cliConf)
if err != nil {
setupLog.Error(err, "Can't start the prometheus watcher")
os.Exit(1)
}
runGroup.Add(
func() error {
promWatcherErr := promWatcher.Watch(eventChan, errChan)
setupLog.Info("Prometheus watcher exited")
return promWatcherErr
},
func(_ error) {
setupLog.Info("Closing prometheus watcher")
promWatcherErr := promWatcher.Close()
if promWatcherErr != nil {
setupLog.Error(promWatcherErr, "prometheus watcher failed to close")
}
})
}
runGroup.Add(
func() error {
fileWatcherErr := fileWatcher.Watch(eventChan, errChan)
setupLog.Info("File watcher exited")
return fileWatcherErr
},
func(_ error) {
setupLog.Info("Closing file watcher")
fileWatcherErr := fileWatcher.Close()
if fileWatcherErr != nil {
setupLog.Error(fileWatcherErr, "file watcher failed to close")
}
})
runGroup.Add(
func() error {
discoveryManagerErr := discoveryManager.Run()
setupLog.Info("Discovery manager exited")
return discoveryManagerErr
},
func(_ error) {
setupLog.Info("Closing discovery manager")
discoveryCancel()
})
runGroup.Add(
func() error {
// Initial loading of the config file's scrape config
cliConf.RootLogger.Info("Checking to see if config file exists for initial loading")
if _, err := os.Stat(*cliConf.ConfigFilePath); err == nil {
cliConf.RootLogger.Info("File Exists. Loading and applying config...\n")
loadConfig, err := fileWatcher.LoadConfig(ctx)
if err != nil {
setupLog.Error(err, "Unable to load configuration")
}
err = targetDiscoverer.ApplyConfig(allocatorWatcher.EventSourceConfigMap, loadConfig)
if err != nil {
setupLog.Error(err, "Unable to apply initial configuration")
return err
}
} else {
cliConf.RootLogger.Info("Config file doesn't yet exist for initial loading, using empty config to begin with")
err = targetDiscoverer.ApplyConfig(allocatorWatcher.EventSourceConfigMap, cfg.Config)
if err != nil {
setupLog.Error(err, "Unable to apply initial configuration")
return err
}
}
err := targetDiscoverer.Watch(allocator.SetTargets)
setupLog.Info("Target discoverer exited")
return err
},
func(_ error) {
setupLog.Info("Closing target discoverer")
targetDiscoverer.Close()
})
runGroup.Add(
func() error {
err := collectorWatcher.Watch(ctx, cfg.LabelSelector, allocator.SetCollectors)
setupLog.Info("Collector watcher exited")
return err
},
func(_ error) {
setupLog.Info("Closing collector watcher")
collectorWatcher.Close()
})
runGroup.Add(
func() error {
err := srv.Start()
setupLog.Info("Server failed to start")
return err
},
func(_ error) {
setupLog.Info("Closing server")
if shutdownErr := srv.Shutdown(ctx); shutdownErr != nil {
setupLog.Error(shutdownErr, "Error on server shutdown")
}
})
runGroup.Add(
func() error {
for {
select {
case event := <-eventChan:
eventsMetric.WithLabelValues(event.Source.String()).Inc()
loadConfig, err := event.Watcher.LoadConfig(ctx)
if err != nil {
setupLog.Error(err, "Unable to load configuration")
continue
}
err = targetDiscoverer.ApplyConfig(event.Source, loadConfig)
if err != nil {
setupLog.Error(err, "Unable to apply configuration")
continue
}
case err := <-errChan:
setupLog.Error(err, "Watcher error")
case <-eventCloser:
return nil
}
}
},
func(_ error) {
setupLog.Info("Closing watcher loop")
close(eventCloser)
})
runGroup.Add(
func() error {
for {
select {
case <-interrupts:
setupLog.Info("Received interrupt")
return nil
case <-eventCloser:
return nil
}
}
},
func(_ error) {
setupLog.Info("Closing interrupt loop")
})
if runErr := runGroup.Run(); runErr != nil {
setupLog.Error(runErr, "run group exited")
}
setupLog.Info("Target allocator exited.")
}
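The shutdown wiring above relies on oklog/run's actor pattern: every Add call pairs a blocking execute function with an interrupt function, and the group tears everything down once any single actor returns. A minimal sketch of that contract, assuming only the oklog/run package (the signal actor below is illustrative, not the allocator's exact code):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"

	"github.com/oklog/run"
)

func main() {
	var g run.Group

	// Actor: block until SIGINT/SIGTERM arrives or the group interrupts us.
	interrupts := make(chan os.Signal, 1)
	signal.Notify(interrupts, syscall.SIGINT, syscall.SIGTERM)
	done := make(chan struct{})
	g.Add(
		func() error {
			select {
			case sig := <-interrupts:
				return fmt.Errorf("received signal %v", sig)
			case <-done:
				return nil
			}
		},
		func(error) {
			// Called once any other actor returns; unblocks execute above.
			close(done)
		},
	)

	// Run starts all execute functions; when the first one returns, every
	// interrupt function is invoked, and Run returns the first error.
	if err := g.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```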


@ -0,0 +1,64 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prehook
import (
"errors"
"github.com/go-logr/logr"
"github.com/prometheus/prometheus/model/relabel"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
const (
relabelConfigTargetFilterName = "relabel-config"
)
type Hook interface {
Apply(map[string]*target.Item) map[string]*target.Item
SetConfig(map[string][]*relabel.Config)
GetConfig() map[string][]*relabel.Config
}
type HookProvider func(log logr.Logger) Hook
var (
registry = map[string]HookProvider{}
)
func New(name string, log logr.Logger) Hook {
if p, ok := registry[name]; ok {
return p(log.WithName("Prehook").WithName(name))
}
log.Info("Unrecognized filter strategy; filtering disabled")
return nil
}
func Register(name string, provider HookProvider) error {
if _, ok := registry[name]; ok {
return errors.New("already registered")
}
registry[name] = provider
return nil
}
func init() {
err := Register(relabelConfigTargetFilterName, NewRelabelConfigTargetFilter)
if err != nil {
panic(err)
}
}
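The registry above makes filter strategies pluggable: a provider registers itself under a name from an init function, and New resolves that name at startup. A hedged sketch of what an extra strategy could look like; the "noop" name and noopFilter type are invented for illustration:

```go
package prehook

import (
	"github.com/go-logr/logr"
	"github.com/prometheus/prometheus/model/relabel"

	"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)

// noopFilter is a hypothetical strategy that keeps every target unchanged.
type noopFilter struct {
	log logr.Logger
	cfg map[string][]*relabel.Config
}

func newNoopFilter(log logr.Logger) Hook { return &noopFilter{log: log} }

// Apply returns the target set untouched.
func (f *noopFilter) Apply(targets map[string]*target.Item) map[string]*target.Item {
	return targets
}

func (f *noopFilter) SetConfig(cfg map[string][]*relabel.Config) { f.cfg = cfg }

func (f *noopFilter) GetConfig() map[string][]*relabel.Config { return f.cfg }

func init() {
	// After this, New("noop", log) would return the filter above.
	if err := Register("noop", newNoopFilter); err != nil {
		panic(err)
	}
}
```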


@ -0,0 +1,110 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prehook
import (
"github.com/go-logr/logr"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/model/relabel"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
type RelabelConfigTargetFilter struct {
log logr.Logger
relabelCfg map[string][]*relabel.Config
}
func NewRelabelConfigTargetFilter(log logr.Logger) Hook {
return &RelabelConfigTargetFilter{
log: log,
relabelCfg: make(map[string][]*relabel.Config),
}
}
// convertLabelToPromLabelSet converts a model.LabelSet to []labels.Label.
func convertLabelToPromLabelSet(lbls model.LabelSet) []labels.Label {
newLabels := make([]labels.Label, len(lbls))
index := 0
for k, v := range lbls {
newLabels[index].Name = string(k)
newLabels[index].Value = string(v)
index++
}
return newLabels
}
func (tf *RelabelConfigTargetFilter) Apply(targets map[string]*target.Item) map[string]*target.Item {
numTargets := len(targets)
// need to wait until relabelCfg is set
if len(tf.relabelCfg) == 0 {
return targets
}
// Note: jobNameKey != tItem.JobName (jobNameKey is hashed)
for jobNameKey, tItem := range targets {
keepTarget := true
lset := convertLabelToPromLabelSet(tItem.Labels)
for _, cfg := range tf.relabelCfg[tItem.JobName] {
if newLset, keep := relabel.Process(lset, cfg); !keep {
keepTarget = false
break // inner loop
} else {
lset = newLset
}
}
if !keepTarget {
delete(targets, jobNameKey)
}
}
tf.log.V(2).Info("Filtering complete", "seen", numTargets, "kept", len(targets))
return targets
}
func (tf *RelabelConfigTargetFilter) SetConfig(cfgs map[string][]*relabel.Config) {
relabelCfgCopy := make(map[string][]*relabel.Config)
for key, val := range cfgs {
relabelCfgCopy[key] = tf.replaceRelabelConfig(val)
}
tf.relabelCfg = relabelCfgCopy
}
// See this thread [https://github.com/open-telemetry/opentelemetry-operator/pull/1124/files#r983145795]
// for why SHARD == 0 is a necessary substitution. Otherwise the keep action that uses this env variable
// would not match the regex and all targets would end up dropped. Also note that $(SHARD) will always be 0;
// it does not make sense to read it from the environment because it is never set in the allocator.
func (tf *RelabelConfigTargetFilter) replaceRelabelConfig(cfg []*relabel.Config) []*relabel.Config {
for i := range cfg {
str := cfg[i].Regex.String()
if str == "$(SHARD)" {
cfg[i].Regex = relabel.MustNewRegexp("0")
}
}
return cfg
}
func (tf *RelabelConfigTargetFilter) GetConfig() map[string][]*relabel.Config {
relabelCfgCopy := make(map[string][]*relabel.Config)
for k, v := range tf.relabelCfg {
relabelCfgCopy[k] = v
}
return relabelCfgCopy
}
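Apply above delegates the matching itself to Prometheus' relabel package. A minimal sketch of what one drop rule does to a single target's label set; the label names and values are invented for illustration:

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/model/relabel"
)

func main() {
	// A drop rule of the same shape the filter consumes: targets whose
	// "env" label matches the regex are removed from the set.
	cfg := &relabel.Config{
		SourceLabels: model.LabelNames{"env"},
		Separator:    ";",
		Regex:        relabel.MustNewRegexp("staging"),
		Action:       relabel.Drop,
	}

	lset := labels.FromMap(map[string]string{
		"__address__": "10.0.0.1:9090",
		"env":         "staging",
	})

	// Process returns the rewritten label set and whether to keep the
	// target; Apply deletes the target from its map when keep is false.
	if _, keep := relabel.Process(lset, cfg); !keep {
		fmt.Println("target dropped")
	}
}
```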


@ -0,0 +1,270 @@
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prehook
import (
"crypto/rand"
"fmt"
"math/big"
"strconv"
"testing"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/relabel"
"github.com/stretchr/testify/assert"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"github.com/open-telemetry/opentelemetry-operator/cmd/otel-allocator/target"
)
var (
logger = logf.Log.WithName("unit-tests")
defaultNumTargets = 100
defaultNumCollectors = 3
defaultStartIndex = 0
relabelConfigs = []relabelConfigObj{
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"i"},
Action: "replace",
Separator: ";",
Regex: relabel.MustNewRegexp("(.*)"),
Replacement: "$1",
TargetLabel: "foo",
},
},
isDrop: false,
},
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"i"},
Regex: relabel.MustNewRegexp("(.*)"),
Separator: ";",
Action: "keep",
Replacement: "$1",
},
},
isDrop: false,
},
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"i"},
Regex: relabel.MustNewRegexp("bad.*match"),
Action: "drop",
Separator: ";",
Replacement: "$1",
},
},
isDrop: false,
},
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"label_not_present"},
Regex: relabel.MustNewRegexp("(.*)"),
Separator: ";",
Action: "keep",
Replacement: "$1",
},
},
isDrop: false,
},
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"i"},
Regex: relabel.MustNewRegexp("(.*)"),
Separator: ";",
Action: "drop",
Replacement: "$1",
},
},
isDrop: true,
},
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"collector"},
Regex: relabel.MustNewRegexp("(collector.*)"),
Separator: ";",
Action: "drop",
Replacement: "$1",
},
},
isDrop: true,
},
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"i"},
Regex: relabel.MustNewRegexp("bad.*match"),
Separator: ";",
Action: "keep",
Replacement: "$1",
},
},
isDrop: true,
},
{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"collector"},
Regex: relabel.MustNewRegexp("collectors-n"),
Separator: ";",
Action: "keep",
Replacement: "$1",
},
},
isDrop: true,
},
}
HashmodConfig = relabelConfigObj{
cfg: []*relabel.Config{
{
SourceLabels: model.LabelNames{"i"},
Regex: relabel.MustNewRegexp("(.*)"),
Separator: ";",
Modulus: 1,
TargetLabel: "tmp-0",
Action: "hashmod",
Replacement: "$1",
},
{
SourceLabels: model.LabelNames{"tmp-$(SHARD)"},
Regex: relabel.MustNewRegexp("$(SHARD)"),
Separator: ";",
Action: "keep",
Replacement: "$1",
},
},
isDrop: false,
}
DefaultDropRelabelConfig = relabel.Config{
SourceLabels: model.LabelNames{"i"},
Regex: relabel.MustNewRegexp("(.*)"),
Action: "drop",
}
)
type relabelConfigObj struct {
cfg []*relabel.Config
isDrop bool
}
func colIndex(index, numCols int) int {
if numCols == 0 {
return -1
}
return index % numCols
}
func makeNNewTargets(rCfgs []relabelConfigObj, n int, numCollectors int, startingIndex int) (map[string]*target.Item, int, map[string]*target.Item, map[string][]*relabel.Config) {
toReturn := map[string]*target.Item{}
expectedMap := make(map[string]*target.Item)
numItemsRemaining := n
relabelConfig := make(map[string][]*relabel.Config)
for i := startingIndex; i < n+startingIndex; i++ {
collector := fmt.Sprintf("collector-%d", colIndex(i, numCollectors))
label := model.LabelSet{
"collector": model.LabelValue(collector),
"i": model.LabelValue(strconv.Itoa(i)),
"total": model.LabelValue(strconv.Itoa(n + startingIndex)),
}
jobName := fmt.Sprintf("test-job-%d", i)
newTarget := target.NewItem(jobName, "test-url", label, collector)
// add a single replace, drop, or keep action as relabel_config for targets
var index int
ind, _ := rand.Int(rand.Reader, big.NewInt(int64(len(relabelConfigs))))
index = int(ind.Int64())
relabelConfig[jobName] = rCfgs[index].cfg
targetKey := newTarget.Hash()
if relabelConfigs[index].isDrop {
numItemsRemaining--
} else {
expectedMap[targetKey] = newTarget
}
toReturn[targetKey] = newTarget
}
return toReturn, numItemsRemaining, expectedMap, relabelConfig
}
func TestApply(t *testing.T) {
allocatorPrehook := New("relabel-config", logger)
assert.NotNil(t, allocatorPrehook)
targets, numRemaining, expectedTargetMap, relabelCfg := makeNNewTargets(relabelConfigs, defaultNumTargets, defaultNumCollectors, defaultStartIndex)
allocatorPrehook.SetConfig(relabelCfg)
remainingItems := allocatorPrehook.Apply(targets)
assert.Len(t, remainingItems, numRemaining)
assert.Equal(t, remainingItems, expectedTargetMap)
// clear out relabelCfg to test with empty values
for key := range relabelCfg {
relabelCfg[key] = nil
}
allocatorPrehook.SetConfig(relabelCfg)
remainingItems = allocatorPrehook.Apply(targets)
// relabelCfg is empty so targets should be unfiltered
assert.Len(t, remainingItems, len(targets))
assert.Equal(t, remainingItems, targets)
}
func TestApplyHashmodAction(t *testing.T) {
allocatorPrehook := New("relabel-config", logger)
assert.NotNil(t, allocatorPrehook)
hashRelabelConfigs := append(relabelConfigs, HashmodConfig)
targets, numRemaining, expectedTargetMap, relabelCfg := makeNNewTargets(hashRelabelConfigs, defaultNumTargets, defaultNumCollectors, defaultStartIndex)
allocatorPrehook.SetConfig(relabelCfg)
remainingItems := allocatorPrehook.Apply(targets)
assert.Len(t, remainingItems, numRemaining)
assert.Equal(t, remainingItems, expectedTargetMap)
}
func TestApplyEmptyRelabelCfg(t *testing.T) {
allocatorPrehook := New("relabel-config", logger)
assert.NotNil(t, allocatorPrehook)
targets, _, _, _ := makeNNewTargets(relabelConfigs, defaultNumTargets, defaultNumCollectors, defaultStartIndex)
relabelCfg := map[string][]*relabel.Config{}
allocatorPrehook.SetConfig(relabelCfg)
remainingItems := allocatorPrehook.Apply(targets)
// relabelCfg is empty so targets should be unfiltered
assert.Len(t, remainingItems, len(targets))
assert.Equal(t, remainingItems, targets)
}
func TestSetConfig(t *testing.T) {
allocatorPrehook := New("relabel-config", logger)
assert.NotNil(t, allocatorPrehook)
_, _, _, relabelCfg := makeNNewTargets(relabelConfigs, defaultNumTargets, defaultNumCollectors, defaultStartIndex)
allocatorPrehook.SetConfig(relabelCfg)
assert.Equal(t, relabelCfg, allocatorPrehook.GetConfig())
}


@ -0,0 +1,14 @@
root = true
[*.py]
indent_style = space
end_of_line = lf
insert_final_newline = true
max_line_length = 100
trim_trailing_whitespace = true
indent_size = 4
[*.yaml]
indent_style = space
indent_size = 2
trim_trailing_whitespace = true


@ -0,0 +1,6 @@
**/zz_generated.*.go linguist-generated=true
bundle.yaml linguist-generated=true
example/prometheus-operator-crd-full/* linguist-generated=true
example/prometheus-operator-crd/* linguist-generated=true
example/jsonnet/prometheus-operator/* linguist-generated=true
Documentation/api.md linguist-generated=true


@ -0,0 +1,4 @@
* @prometheus-operator/prometheus-operator-reviewers
/scripts/ @paulfantom
/.github/workflows/ @paulfantom


@ -0,0 +1,52 @@
---
name: Bug
about: Report a bug related to the Prometheus Operator
labels: kind/bug
---
<!--
Feel free to ask questions in #prometheus-operator on Kubernetes Slack!
Note: This repository is about prometheus-operator itself; if you have questions about:
- helm installation, go to https://github.com/prometheus-community/helm-charts repository
- kube-prometheus setup, go to https://github.com/prometheus-operator/kube-prometheus
-->
**What happened?**
**Did you expect to see something different?**
**How to reproduce it (as minimally and precisely as possible)**:
**Environment**
* Prometheus Operator version:
`Insert image tag or Git SHA here`
<!-- Try kubectl -n monitoring describe deployment prometheus-operator -->
<!-- Note: please provide operator version and not kube-prometheus/helm chart version -->
* Kubernetes version information:
`kubectl version`
<!-- Replace the command with its output above -->
* Kubernetes cluster kind:
insert how you created your cluster: kops, bootkube, etc.
* Manifests:
```
insert manifests relevant to the issue
```
* Prometheus Operator Logs:
```
insert Prometheus Operator logs relevant to the issue here
```
**Anything else we need to know?**:


@ -0,0 +1,8 @@
blank_issues_enabled: false # Show or hide the Create a blank issue choice when users select New issue.
contact_links:
- name: "Questions via prometheus-operator community support on kubernetes slack - #prometheus-operator"
url: https://kubernetes.slack.com/archives/CFFDS2Z7F
about: "Join us for questions, answers or prometheus-operator related chat. Please do create issues on Github for better collaboration. If you don't have an account, sign up at http://slack.k8s.io/"
- name: "Question via prometheus-operator discussions (similar to Stack Overflow)"
url: https://github.com/prometheus-operator/prometheus-operator/discussions
about: "Please ask and answer questions here for async response."


@ -0,0 +1,29 @@
---
name: Feature
about: If you want to propose a new feature or enhancement
labels: kind/feature
---
<!--
Feel free to ask questions in #prometheus-operator on Kubernetes Slack!
Note: This repository is about prometheus-operator itself; if you have questions about:
- helm installation, go to https://github.com/prometheus-community/helm-charts repository
- kube-prometheus setup, go to https://github.com/prometheus-operator/kube-prometheus
-->
**What is missing?**
**Why do we need it?**
**Environment**
* Prometheus Operator version:
`Insert image tag or Git SHA here`
<!-- Try kubectl -n monitoring describe deployment prometheus-operator -->
<!-- Note: please provide operator version and not kube-prometheus/helm chart version -->
**Anything else we need to know?**:


@ -0,0 +1,50 @@
---
name: Support
about: For questions about prometheus-operator. For Helm, go to https://github.com/prometheus-community/helm-charts. For kube-prometheus, go to https://github.com/prometheus-operator/kube-prometheus.
labels: kind/support
---
<!--
Feel free to ask questions in #prometheus-operator on Kubernetes Slack!
Note: This repository is about prometheus-operator itself; if you have questions about:
- helm installation, go to https://github.com/prometheus-community/helm-charts repository
- kube-prometheus setup, go to https://github.com/prometheus-operator/kube-prometheus
-->
**What did you do?**
**Did you expect to see something different?**
**Environment**
* Prometheus Operator version:
`Insert image tag or Git SHA here`
<!-- Try: kubectl -n monitoring describe deployment prometheus-operator -->
<!-- Note: please provide operator version and not kube-prometheus/helm chart version -->
* Kubernetes version information:
`kubectl version`
<!-- Replace the command with its output above -->
* Kubernetes cluster kind:
insert how you created your cluster: kops, bootkube, etc.
* Manifests:
```
insert manifests relevant to the issue
```
* Prometheus Operator Logs:
```
insert Prometheus Operator logs relevant to the issue here
```
**Anything else we need to know?**:


@ -0,0 +1,33 @@
## Description
_Describe the big picture of your changes here to communicate to the maintainers why we should accept this pull request.
If it fixes a bug or resolves a feature request, be sure to link to that issue._
## Type of change
_What type of changes does your code introduce to the Prometheus operator? Put an `x` in the boxes that apply._
- [ ] `CHANGE` (fix or feature that would cause existing functionality to not work as expected)
- [ ] `FEATURE` (non-breaking change which adds functionality)
- [ ] `BUGFIX` (non-breaking change which fixes an issue)
- [ ] `ENHANCEMENT` (non-breaking change which improves existing functionality)
- [ ] `NONE` (if none of the other choices apply. Example, tooling, build system, CI, docs, etc.)
## Changelog entry
_Please put a one-line changelog entry below. This will be copied to the changelog file during the release process._
<!--
Your release note should be written in clear and straightforward sentences. Most often, users aren't familiar with
the technical details of your PR, so consider what they need to know when you write your release note.
Some brief examples of release notes:
- Add metadataConfig field to the Prometheus CRD for configuring how remote-write sends metadata information.
- Generate correct scraping configuration for Probes with empty or unset module parameter.
-->
```release-note
```


@ -0,0 +1,11 @@
version: 2
updates:
- package-ecosystem: gomod
directory: /
schedule:
interval: daily
- package-ecosystem: github-actions
directory: /
schedule:
interval: daily

otelcollector/otel-allocator/prometheus-operator/.github/env vendored Normal file

@ -0,0 +1,3 @@
golang-version=1.20
kind-version=v0.20.0
kind-image=kindest/node:v1.27.3


@ -0,0 +1,35 @@
# Configuration for Lock Threads - https://github.com/dessant/lock-threads
# Number of days of inactivity before a closed issue or pull request is locked
daysUntilLock: 21
# Skip issues and pull requests created before a given timestamp. Timestamp must
# follow ISO 8601 (`YYYY-MM-DD`). Set to `false` to disable
skipCreatedBefore: false
# Issues and pull requests with these labels will be ignored. Set to `[]` to disable
exemptLabels: []
# Label to add before locking, such as `outdated`. Set to `false` to disable
lockLabel: false
# Comment to post before locking. Set to `false` to disable
lockComment: >
This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.
# Assign `resolved` as the reason for locking. Set to `false` to disable
setLockReason: true
# Limit to only `issues` or `pulls`
only: issues
# Optionally, specify configuration settings just for `issues` or `pulls`
# issues:
# exemptLabels:
# - help-wanted
# lockLabel: outdated
# pulls:
# daysUntilLock: 30


@ -0,0 +1,92 @@
name: checks
on:
pull_request:
push:
branches:
- 'release-*'
- 'master'
- 'main'
tags:
- 'v*'
jobs:
generate:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os:
- macos-latest
- ubuntu-latest
name: Generate and format
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- run: make --always-make format generate && git diff --exit-code
check-docs:
runs-on: ubuntu-latest
name: Check Documentation formatting and links
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- run: make check-docs
check-golang:
runs-on: ubuntu-latest
name: Golang linter
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- name: golangci-lint
uses: golangci/golangci-lint-action@v3.6.0
with:
version: v1.53.1
args: --timeout 10m0s
check-metrics:
runs-on: ubuntu-latest
name: Check prometheus metrics
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- run: make check-metrics
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os:
- macos-latest
- ubuntu-latest
name: Build operator binary
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- run: make operator
po-rule-migration:
runs-on: ubuntu-latest
name: Build Prometheus Operator rule config map to rule file CRDs CLI tool
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- run: cd cmd/po-rule-migration && go install


@ -0,0 +1,107 @@
name: e2e
on:
pull_request:
push:
branches:
- 'release-*'
- 'master'
- 'main'
tags:
- 'v*'
jobs:
e2e-tests:
name: E2E tests
runs-on: ubuntu-latest
strategy:
matrix:
suite: [alertmanager, prometheus, prometheusAllNS, thanosruler, operatorUpgrade]
include:
- suite: alertmanager
prometheus: "exclude"
prometheusAllNS: "exclude"
alertmanager: ""
thanosruler: "exclude"
operatorUpgrade: "exclude"
featureGated: "include"
- suite: prometheus
prometheus: ""
prometheusAllNS: "exclude"
alertmanager: "exclude"
thanosruler: "exclude"
operatorUpgrade: "exclude"
featureGated: "include"
- suite: prometheusAllNS
prometheus: "exclude"
prometheusAllNS: ""
alertmanager: "exclude"
thanosruler: "exclude"
operatorUpgrade: "exclude"
featureGated: "include"
- suite: thanosruler
prometheus: "exclude"
prometheusAllNS: "exclude"
alertmanager: "exclude"
thanosruler: ""
operatorUpgrade: "exclude"
featureGated: "include"
- suite: operatorUpgrade
prometheus: "exclude"
prometheusAllNS: "exclude"
alertmanager: "exclude"
thanosruler: "exclude"
operatorUpgrade: ""
featureGated: "include"
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- name: Build images
run: |
export SHELL=/bin/bash
make build image
- name: Start KinD
uses: engineerd/setup-kind@v0.5.0
with:
version: ${{ env.kind-version }}
image: ${{ env.kind-image }}
wait: 300s
config: /test/e2e/kind-conf.yaml
- name: Wait for cluster to finish bootstrapping
run: |
kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=300s
kubectl cluster-info
kubectl get pods -A
- name: Load images
run: |
kind load docker-image quay.io/prometheus-operator/prometheus-operator:$(git rev-parse --short HEAD)
kind load docker-image quay.io/prometheus-operator/prometheus-config-reloader:$(git rev-parse --short HEAD)
kind load docker-image quay.io/prometheus-operator/admission-webhook:$(git rev-parse --short HEAD)
kubectl apply -f scripts/kind-rbac.yaml
- name: Run tests
run: >
EXCLUDE_ALERTMANAGER_TESTS=${{ matrix.alertmanager }}
EXCLUDE_PROMETHEUS_TESTS=${{ matrix.prometheus }}
EXCLUDE_PROMETHEUS_ALL_NS_TESTS=${{ matrix.prometheusAllNS }}
EXCLUDE_THANOSRULER_TESTS=${{ matrix.thanosruler }}
EXCLUDE_OPERATOR_UPGRADE_TESTS=${{ matrix.operatorUpgrade }}
FEATURE_GATED_TESTS=${{ matrix.featureGated }}
make test-e2e
# Added to summarize the matrix and allow easy branch protection rules setup
e2e-tests-result:
name: End-to-End Test Results
if: always()
needs:
- e2e-tests
runs-on: ubuntu-latest
steps:
- name: Mark the job as a success
if: needs.e2e-tests.result == 'success'
run: exit 0
- name: Mark the job as a failure
if: needs.e2e-tests.result != 'success'
run: exit 1


@ -0,0 +1,57 @@
name: publish
on:
workflow_dispatch:
push:
branches:
- 'release-*'
- 'master'
- 'main'
tags:
- 'v*'
- '!pkg*'
jobs:
publish:
name: Publish container images
permissions:
id-token: write # needed to sign images with cosign.
packages: write # needed to push images to ghcr.io.
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- name: Install cosign
uses: sigstore/cosign-installer@main
- name: Check the Docker version
run: docker version
- name: Check the cosign version
run: cosign version
- name: Install crane
uses: imjasonh/setup-crane@v0.3
- name: Login to quay.io
uses: docker/login-action@v2
with:
registry: quay.io
username: ${{ secrets.quay_username }}
password: ${{ secrets.quay_password }}
- name: Login to ghcr.io
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Cosign login
run: |
echo "${{ secrets.GITHUB_TOKEN }}" | cosign login -u ${{ github.repository_owner }} --password-stdin ghcr.io
echo "${{ secrets.quay_password }}" | cosign login -u ${{ secrets.quay_username }} --password-stdin quay.io
- name: Build images and push
run: ./scripts/push-docker-image.sh


@ -0,0 +1,37 @@
name: release
on:
release:
types:
- created
jobs:
upload-assets:
runs-on: ubuntu-latest
name: Upload release assets
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- name: Upload bundle.yaml to release
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
file: bundle.yaml
asset_name: bundle.yaml
tag: ${{ github.ref }}
overwrite: true
- name: Generate stripped down version of CRDs
run: make stripped-down-crds.yaml
- name: Upload stripped-down-crds.yaml to release
uses: svenstaro/upload-release-action@v2
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
file: stripped-down-crds.yaml
asset_name: stripped-down-crds.yaml
tag: ${{ github.ref }}
overwrite: true


@ -0,0 +1,21 @@
name: 'Close stale issues and PRs'
on:
schedule:
- cron: '30 1 * * *'
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v8
with:
stale-issue-message: 'This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.'
close-issue-message: 'This issue was closed because it has not had any activity in the last 120 days. Please reopen if you feel this is still valid.'
days-before-stale: 60
days-before-issue-close: 120
days-before-pr-close: -1 # Prevent closing PRs
exempt-issue-labels: 'kind/feature,help wanted,kind/bug'
stale-issue-label: 'stale'
stale-pr-label: 'stale'
exempt-draft-pr: true
operations-per-run: 500


@ -0,0 +1,33 @@
name: unit
on:
pull_request:
push:
branches:
- 'release-*'
- 'master'
- 'main'
tags:
- 'v*'
jobs:
unit-tests:
runs-on: ubuntu-latest
name: Unit tests
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: '${{ env.golang-version }}'
- run: make test-unit
extended-tests:
runs-on: ubuntu-latest
name: Extended tests
steps:
- uses: actions/checkout@v3
- name: Import environment variables from file
run: cat ".github/env" >> $GITHUB_ENV
- uses: actions/setup-go@v4
with:
go-version: ${{ env.golang-version }}
- run: make test-long

otelcollector/otel-allocator/prometheus-operator/.gitignore vendored Normal file

@ -0,0 +1,23 @@
/operator
/admission-webhook
/po-lint
/prometheus-config-reloader
example/alertmanager-webhook/linux/
.build/
*~
*.tgz
requirements.lock
.idea
*.iml
.DS_Store
__pycache__
.env/
.history/
.vscode/
tmp
stripped-down-crds.yaml
# These are empty target files, created on every docker build. Their sole
# purpose is to track the last target execution time to evaluate whether the
# container needs to be rebuilt
.hack-*-image


@ -0,0 +1,26 @@
run:
deadline: 10m
linters:
enable:
- revive
- gci
issues:
exclude-rules:
- path: _test.go
linters:
- errcheck
# TODO: fix linter errors before enabling it for the framework
- path: test/framework
linters:
- revive
linters-settings:
errcheck:
exclude: scripts/errcheck_excludes.txt
gci:
sections:
- standard
- default
- prefix(github.com/prometheus-operator/prometheus-operator)
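The gci sections configured above translate into a three-group import layout: standard library first, then other modules, then this repository's own packages. A hypothetical file showing the expected grouping (the blank imports are only there to keep the sketch compilable):

```go
package example

import (
	// standard section
	_ "context"

	// default section: any other third-party module
	_ "github.com/go-logr/logr"

	// prefix section: this repository's own packages
	_ "github.com/prometheus-operator/prometheus-operator/pkg/operator"
)
```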


@ -0,0 +1,13 @@
// Copyright The prometheus-operator Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.


@ -0,0 +1,30 @@
version: 1
timeout: "1m"
explicitLocalValidators: true
validators:
# docs.github.com returns 403 if not in a browser. It cannot be curl-ed either.
- regex: 'docs\.github\.com'
type: "ignore"
# Cloudflare protection, so it returns 503 if not in a browser. It cannot be curl-ed either.
- regex: 'wise\.com'
type: "ignore"
# Adopters example link.
- regex: "our-link"
type: "ignore"
# 301 errors even when curl-ed.
- regex: "envoyproxy"
type: "ignore"
# Ignore release links.
- regex: 'https:\/\/github\.com\/prometheus-operator\/prometheus-operator\/releases'
type: "ignore"
# Ignore GitHub container packages link as it returns 404 in curl, but works in browser
- regex: 'https://github.com/prometheus-operator/prometheus-operator/pkgs/container/prometheus-operator'
type: "ignore"
# Ignore links to /img/ because the generated content will resolve them correctly.
- regex: '/img/.+'
type: ignore
# Ignore anchor links pointing to the API documentation which are HTML <a> tags and not supported by mdox.
- regex: 'api\.md#monitoring\.coreos\.com/v1\.(BasicAuth|PrometheusSpec|StorageSpec)$'
type: ignore


@ -0,0 +1,324 @@
---
title: Adopters
draft: false
date: "2021-03-08T23:50:39+01:00"
---
<!--
Insert your entry using this template keeping the list alphabetically sorted:
## <Company/Organization Name>
https://our-link.com/
Environments: AWS, Azure, Google Cloud, Bare Metal, etc
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes | No
Details (optional):
- HA Pair of Prometheus
- 1000 samples/s (query: `rate(prometheus_tsdb_head_samples_appended_total[5m])`)
- 10k active series (query: `prometheus_tsdb_head_series`)
-->
This document tracks people and use cases for the Prometheus Operator in production. By creating a list of production use cases we hope to build a community of advisors that we can reach out to with experience using the various Prometheus Operator applications, operation environments, and cluster sizes. The Prometheus Operator development team may reach out periodically to check in on how the Prometheus Operator is working in the field and update this list.
Go ahead and [add your organization](https://github.com/prometheus-operator/prometheus-operator/edit/main/ADOPTERS.md) to the list.
## CERN
[European Laboratory for Particle Physics](https://home.cern/)
Environments: On-premises
Prometheus is used extensively as part of the CERN Kubernetes infrastructure,
both managed and unmanaged. Metrics deployment is managed by the community owned
__kube-prometheus-stack__ helm chart. Be sure to check our [blog](https://kubernetes.web.cern.ch/).
Details:
- 400+ Kubernetes clusters, with cluster sizes ranging from a few nodes to ~100s
Significant usage also exists outside Kubernetes for generic service and infrastructure monitoring.
## Clyso
[clyso.com](https://www.clyso.com/en)
Environments: Bare Metal, Opennebula
Uses kube-prometheus: Yes
Details:
- multiple K8s clusters with prometheus deployed through prom-operator
- several of our own ceph clusters providing metrics via the ceph mgr prometheus module
- several customer ceph clusters pushing metrics via external pushgateway to our central monitoring instances
- thanos receiver connected to our own S3 storage
## Coralogix
[coralogix.com](https://coralogix.com)
Environments: AWS
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details:
- Operator installed on each Kubernetes cluster, with Thanos aggregating metrics from a central query endpoint
- Two Prometheus instances per cluster
- Loose coupling between Kubernetes cluster administrators who manage alerting sinks and service owners who define alerts for their services
- 800K samples/s
- 30M active series
## Deckhouse
[deckhouse.io](https://deckhouse.io/)
Environments: AWS, Azure, Google Cloud, Bare Metal
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Deckhouse is a Kubernetes Platform. Its clusters running on any infrastructure are provided with the monitoring system based on highly available Prometheus and Prometheus Operator. Essential metrics are preconfigured out-of-the-box to ensure monitoring at all levels, from hardware and Kubernetes internals to the functionality of the platform's modules. The monitoring-custom module simplifies adding custom metrics for user applications. Deckhouse also hosts a dedicated Prometheus instance in each cluster to store downsampled metric series for longer periods.
## Giant Swarm
[giantswarm.io](https://www.giantswarm.io/)
Environments: AWS, Azure, Bare Metal
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes (with additional tight Giant Swarm integrations)
Details:
- One prometheus operator per management cluster and one prometheus instance per workload cluster
- Customers can also install kube-prometheus for their workload using our App Platform
- 760000 samples/s
- 35M active series
## Gitpod
[gitpod.io](https://www.gitpod.io/)
Environments: Google Cloud
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes (with additional Gitpod mixins)
Details:
- One prometheus instance per cluster (8 so far)
- 20000 samples/s
- 1M active series
## Innovaccer
https://innovaccer.com/
Environments: AWS, Azure
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details (optional):
- multiple remote K8s clusters in which we have prometheus deployed through prom-operator.
- these remote prometheus instances push cluster metrics to a central Thanos receiver which is connected to S3 storage.
- on top of Thanos we have Grafana for dashboarding and visualisation.
## Kinvolk Lokomotive Kubernetes
https://kinvolk.io/lokomotive-kubernetes/
Environments: AKS, AWS, Bare Metal, Equinix Metal
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details:
- Self-hosted (control plane runs as pods inside the cluster)
- Deploys full K8s stack (as a distro) or managed Kubernetes (currently only AKS supported)
- Deployed by Kinvolk for its own hosted infrastructure (including Flatcar Container Linux update server), as well as by Kinvolk customers and community users
## Lunar
[lunar.app](https://www.lunar.app/)
Environments: AWS
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details:
- One prometheus operator in our platform cluster and one prometheus instance per workload cluster
- 17k samples/s
- 841k active series
## Mattermost
[mattermost.com](https://mattermost.com)
Environments: AWS
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details:
- All Mattermost clusters use the Prometheus Operator with Thanos sidecar for cluster monitoring and central Thanos query component to gather all data.
- 977k samples/s
- 29.4M active series
## Nozzle
[nozzle.io](https://nozzle.io)
Environment: Google Cloud
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details:
- 100k samples/s
- 1M active series
## OpenShift
[openshift.com](https://www.openshift.com/)
Environments: AWS, Azure, Google Cloud, Bare Metal
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes (with additional tight OpenShift integrations)
This is a meta user; please feel free to document specific OpenShift users!
All OpenShift clusters use the Prometheus Operator to manage the cluster monitoring stack as well as user workload monitoring. This means the Prometheus Operator's users include all OpenShift customers.
## Opstrace
[https://opstrace.com](https://opstrace.com)
Environments: AWS, Google Cloud
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): No
Opstrace installations use the Prometheus Operator internally to collect metrics and to alert. Opstrace users also often use the Prometheus Operator to scrape their own applications and remote_write those metrics to Opstrace.
## Polar Signals
[polarsignals.com](https://www.polarsignals.com/)
Environment: Google Cloud
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details:
- HA Pair of Prometheus
- 4000 samples/s
- 100k active series
## Robusta
[Robusta docs](https://docs.robusta.dev/master/)
Environments: EKS, GKE, AKS, and self-hosted Kubernetes
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
We're an open source project that builds upon the awesome Prometheus Operator. We run automated playbooks in response to Prometheus alerts and other events in your cluster. For example, you can automatically fetch logs and send them to Slack when a Prometheus alert occurs. All it takes is this YAML:
```yaml
triggers:
- on_prometheus_alert:
alert_name: KubePodCrashLooping
actions:
- logs_enricher: {}
sinks:
- slack
```
## Skyscanner
[skyscanner.net](https://skyscanner.net/)
Environment: AWS
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details (optional):
- HA Pairs of Prometheus
- 25000 samples/s
- 1.2M active series
## SUSE Rancher
[suse.com/products/suse-rancher](https://www.suse.com/products/suse-rancher/)
Environments: RKE, RKE2, K3s, Windows, AWS, Azure, Google Cloud, Bare Metal, etc.
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Rancher Monitoring supports use cases for Prometheus Operator across various
cluster types and setups that are managed via the Rancher product. All Rancher users that
install Monitoring V2 deploy this chart.
For more information, please see [how Rancher monitoring works](https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/how-monitoring-works/).
The open-source rancher-monitoring Helm chart (based on [kube-prometheus-stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)) can be found at [rancher/charts](https://github.com/rancher/charts).
## Trendyol
[trendyol.com](https://trendyol.com)
Environments: OpenStack, VMware vCloud
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details:
- All Kubernetes clusters use one Prometheus Operator instance with remote write enabled
- Prometheus instances push metrics to central H/A VirtualMetric, which gathers all data from clusters in 3 different data centers
- Grafana is used for dashboarding and visualization
- 7.50M samples/s
- 190M active series
## Veepee
[veepee.com](https://www.veepee.com)
Environments: Bare Metal
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details (optional):
- HA Pair of Prometheus
- 786000 samples/s
- 23.6M active series
## VSHN AG
[vshn.ch](https://www.vshn.ch/)
Environments: AWS, Azure, Google Cloud, cloudscale.ch, Exoscale, Swisscom
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes
Details (optional):
- A huge fleet of OpenShift and Kubernetes clusters, each using Prometheus Operator
- All managed by [Project Syn](https://syn.tools/), leveraging Commodore Components like [component-rancher-monitoring](https://github.com/projectsyn/component-rancher-monitoring) which re-uses Prometheus Operator
## Wise
[wise.com](https://wise.com)
Environments: Kubernetes, AWS (via some EC2)
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): No
Details (optional):
- About 30 HA pairs of sharded Promethei across 10 environments, wired together with Thanos
- The Operator also helps us seamlessly manage anywhere between 600 and 1500 short-lived prometheus instances for our "integration" kubernetes cluster.
- ~15mn samples/s
- ~200mn active series
## <Insert Company/Organization Name>
https://our-link.com/
Environments: AWS, Azure, Google Cloud, Bare Metal, etc
Uses [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus): Yes | No
Details (optional):
- HA Pair of Prometheus
- 1000 samples/s (query: `rate(prometheus_tsdb_head_samples_appended_total[5m])`)
- 10k active series (query: `prometheus_tsdb_head_series`)

Diff between files not shown because of its large size.


@ -0,0 +1 @@
www.prometheus-operator.dev


@ -0,0 +1,196 @@
---
weight: 120
toc: true
title: Contributing
menu:
docs:
parent: prologue
lead: ""
lastmod: "2021-03-08T08:48:57+00:00"
images: []
draft: false
description: How can I contribute to the Prometheus Operator and kube-prometheus?
date: "2021-03-08T08:48:57+00:00"
---
This project is licensed under the [Apache 2.0 license](LICENSE) and accepts
contributions via GitHub pull requests. This document outlines some of the
conventions on development workflow, commit message formatting, contact points
and other resources to make it easier to get your contribution accepted.
To maintain a safe and welcoming community, all participants must adhere to the
project's [Code of Conduct](code-of-conduct.md).
# Certificate of Origin
By contributing to this project you agree to the Developer Certificate of
Origin (DCO). This document was created by the Linux Kernel community and is a
simple statement that you, as a contributor, have the legal right to make the
contribution. See the [DCO](DCO) file for details.
# Email and Chat
The project currently uses the [Kubernetes Slack](https://slack.k8s.io/):
- [#prometheus-operator](https://kubernetes.slack.com/archives/CFFDS2Z7F)
- [#prometheus-operator-dev](https://kubernetes.slack.com/archives/C01B03QCSMN)
Please avoid emailing maintainers found in the MAINTAINERS file directly. They
are very busy and read the mailing lists.
# Office Hours Meetings
The project also holds bi-weekly public meetings where maintainers,
contributors and users of the Prometheus Operator and kube-prometheus can
discuss issues, pull requests or any topic related to the projects. The
meetings happen at 09:00 UTC on Monday, check the [online
notes](https://docs.google.com/document/d/1-fjJmzrwRpKmSPHtXN5u6VZnn39M28KqyQGBEJsqUOk/edit?usp=sharing)
to know the exact dates and the connection details.
## Getting Started
- Fork the repository on GitHub
- Read the [README](README.md) for build and test instructions
- Play with the project, submit bugs, submit patches!
## Contribution Flow
This is a rough outline of what a contributor's workflow looks like:
- Create a topic branch from where you want to base your work (usually `main`).
- Make commits of logical units.
- Make sure your commit messages are in the proper format (see below).
- Push your changes to a topic branch in your fork of the repository.
- Make sure the tests pass, and add any new tests as appropriate.
- Submit a pull request to the original repository.
Many files (documentation, manifests, ...) in this repository are auto-generated. For instance, `bundle.yaml` is generated from the *Jsonnet* files in `/jsonnet/prometheus-operator`. Before submitting a pull request, make sure that you've executed `make generate` and committed the generated changes.
Thanks for your contributions!
### Changes to the APIs
When designing Custom Resource Definitions (CRDs), please refer to the existing Kubernetes guidelines:
* [API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md).
* [API changes](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api_changes.md).
In particular, this project follows the API stability guidelines:
* For alpha API versions (e.g. `v1alpha1`, `v1alpha2`, ...), we may break forward and backward compatibility (but we'll try hard to avoid it).
* For beta API versions (e.g. `v1beta1`, `v1beta2`, ...), we may break backward compatibility but not forward compatibility.
* For stable API versions (e.g. `v1`), we don't allow breaking backward or forward compatibility.
### Format of the Commit Message
We follow a rough convention for commit messages that is designed to answer two
questions: what changed and why. The subject line should feature the what and
the body of the commit should describe the why.
```
scripts: add the test-cluster command
This uses tmux to setup a test cluster that you can easily kill and
start for debugging.
Fixes #38
```
The format can be described more formally as follows:
```
<subsystem>: <what changed>
<BLANK LINE>
<why this change was made>
<BLANK LINE>
<footer>
```
The first line is the subject and should be no longer than 70 characters, the
second line is always blank, and other lines should be wrapped at 80 characters.
This allows the message to be easier to read on GitHub as well as in various
Git tools.
# Proposal Process
The Prometheus Operator project accepts proposals for new features, enhancements and design documents.
Proposals can be submitted in the form of a pull request using the template below.
The process is adopted from the Thanos community.
## Your Proposal Title
* **Owners:**
* `<@author: single champion for the moment of writing>`
* **Related Tickets:**
* `<JIRA, GH Issues>`
* **Other docs:**
* `<Links…>`
> TL;DR: Give a summary of what this document is proposing and what components it is touching.
>
> *For example: This design doc is proposing a consistent design template for “example.com” organization.*
## Why
Provide a motivation behind the change proposed by this design document, give context.
*For example: It's important to clearly explain the reasons behind certain design decisions in order to have a
consensus between team members, as well as external stakeholders.
Such a design document can also be used as a reference and for knowledge-sharing purposes.
That's why we are proposing a consistent style of the design document that will be used for future designs.*
### Pitfalls of the current solution
What specific problems are we hitting with the current solution? Why is it not enough?
*For example: We were missing a consistent design doc template, so each team/person was creating their own.
Because of inconsistencies, those documents were harder to understand, and it was easy to miss important sections.
This was causing certain engineering time to be wasted.*
## Goals
Goals and use cases for the solution as proposed in [How](#how):
* Allow easy collaboration and decision making on design ideas.
* Have a consistent design style that is readable and understandable.
* Have a design style that is concise and covers all the essential information.
### Audience
If this is not clear already, provide the target audience for this change.
## Non-Goals
* Move old designs to the new format.
* Not doing X,Y,Z.
## How
Explain the full overview of the proposed solution. Some guidelines:
* Make it concise and **simple**; put diagrams; be concrete, avoid using “really”, “amazing” and “great” (:
* How will you test and verify?
* How will you migrate users without downtime? How do we solve incompatibilities?
* What open questions are left? (“Known unknowns”)
## Alternatives
This section should state potential alternatives.
Highlight the objections the reader should have towards your proposal as they read it.
Tell them why you still think you should take this path.
1. This is why not solution Z...
## Action Plan
The tasks to do in order to migrate to the new idea.
* [ ] Task one <gh issue>
* [ ] Task two <gh issue>
...


@ -0,0 +1,36 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

View file

@ -0,0 +1,65 @@
# Additional Scrape Configuration
AdditionalScrapeConfigs allows specifying a key of a Secret containing
additional Prometheus scrape configurations. Scrape configurations specified
are appended to the configurations generated by the Prometheus Operator.
Job configurations specified must have the form as specified in the official
[Prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config).
As scrape configs are appended, the user is responsible for making sure they are
valid. *Note* that using this feature may make it possible to break
upgrades of Prometheus.
It is advised to review Prometheus release notes to ensure that no incompatible
scrape configs are going to break Prometheus after the upgrade.
## Creating an additional configuration
First, you will need to create the additional configuration.
Below we are making a simple "prometheus" config. Name this
`prometheus-additional.yaml` or something similar.
```yaml
- job_name: "prometheus"
static_configs:
- targets: ["localhost:9090"]
```
Then you will need to make a secret out of this configuration.
```sh
kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run=client -oyaml > additional-scrape-configs.yaml
```
Next, apply the generated Kubernetes manifest:
```sh
kubectl apply -f additional-scrape-configs.yaml -n monitoring
```
Finally, reference this additional configuration in your `Prometheus` custom resource manifest (e.g. `prometheus.yaml`).
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: prometheus
labels:
prometheus: prometheus
spec:
replicas: 2
serviceAccountName: prometheus
serviceMonitorSelector:
matchLabels:
team: frontend
additionalScrapeConfigs:
name: additional-scrape-configs
key: prometheus-additional.yaml
```
NOTE: Use only one secret for ALL additional scrape configurations.
## Additional References
* [Prometheus Spec](api.md#monitoring.coreos.com/v1.PrometheusSpec)
* [Additional Scrape Configs](../example/additional-scrape-configs)

24533
otelcollector/otel-allocator/prometheus-operator/Documentation/api.md generated Normal file

File diff not shown because it is too large. Load diff

View file

@ -0,0 +1,86 @@
---
weight: 202
toc: true
title: Compatibility
menu:
docs:
parent: operator
lead: The Prometheus Operator supports a number of Kubernetes and Prometheus releases.
images: []
draft: false
description: The Prometheus Operator supports a number of Kubernetes and Prometheus releases.
---
It is recommended to use versions of the components identical or close to the versions used by the operator's end-to-end test suite (the specific version numbers are listed below).
## Kubernetes
Due to the use of apiextensions.k8s.io/v1 CustomResourceDefinitions, prometheus-operator requires Kubernetes >= v1.16.0.
The Prometheus Operator uses the official [Go client](https://github.com/kubernetes/client-go) for Kubernetes to communicate with the Kubernetes API. The compatibility matrix for client-go and Kubernetes clusters can be found [here](https://github.com/kubernetes/client-go#compatibility-matrix). All additional compatibility is only best effort, or happens to be still/already supported.
The current version of the Prometheus operator uses the following Go client version:
```$ mdox-exec="go list -m -f '{{ .Version }}' k8s.io/client-go"
v0.27.4
```
## Prometheus
Prometheus Operator supports all Prometheus versions >= v2.0.0. The operator's end-to-end tests verify that the operator can deploy the following Prometheus versions:
```$ mdox-exec="go run ./cmd/po-docgen/. compatibility"
* v2.37.0
* v2.37.1
* v2.37.2
* v2.37.3
* v2.37.4
* v2.37.5
* v2.37.6
* v2.37.7
* v2.37.8
* v2.38.0
* v2.39.0
* v2.39.1
* v2.39.2
* v2.40.0
* v2.40.1
* v2.40.2
* v2.40.3
* v2.40.4
* v2.40.5
* v2.40.6
* v2.40.7
* v2.41.0
* v2.42.0
* v2.43.0
* v2.43.1
* v2.44.0
* v2.45.0
```
The end-to-end tests mostly run against
```$ mdox-exec="go run ./cmd/po-docgen/. compatibility defaultPrometheusVersion"
* v2.45.0
```
## Alertmanager
The Prometheus Operator is compatible with Alertmanager v0.15 and above.
The end-to-end tests mostly run against
```$ mdox-exec="go run ./cmd/po-docgen/. compatibility defaultAlertmanagerVersion"
* v0.25.0
```
## Thanos
The Prometheus Operator is compatible with Thanos v0.10 and above.
The end-to-end tests mostly run against
```$ mdox-exec="go run ./cmd/po-docgen/. compatibility defaultThanosVersion"
* v0.31.0
```

View file

@ -0,0 +1,28 @@
<br>
<div class="alert alert-info" role="alert">
<i class="fa fa-exclamation-triangle"></i><b> Note:</b> Starting with v0.39.0, Prometheus Operator requires use of Kubernetes v1.16.x and up.
</div>
**Deprecation Warning:** The *custom configuration* option of the Prometheus Operator will be deprecated in favor of the [*additional scrape config*](additional-scrape-config.md) option.
# Custom Configuration
There are a few reasons why one may want to provide a custom configuration to Prometheus instances instead of having the Prometheus Operator generate the configuration based on `ServiceMonitor` objects.
> Note that custom configurations are not the primary use case the Prometheus Operator aims to solve. Changes made to the Prometheus Operator may affect custom configurations in a breaking way. Additionally, while the Prometheus Operator attempts to generate configurations in a forward- and backward-compatible way, with custom configurations this is up to the user to manage gracefully.
Use cases include:
* The necessity to use a service discovery mechanism other than the Kubernetes service discovery, such as AWS SD, Azure SD, etc.
* Cases that are not (yet) very well supported by the Prometheus Operator, such as performing blackbox probes.
Note that because the Prometheus Operator does not generate the Prometheus configuration in this case, any fields of the Prometheus resource that influence the configuration will have no effect; the equivalent configuration has to be specified explicitly. The following features will not be supported, meaning they have to be configured manually:
* `serviceMonitorSelector`: Auto-generating Prometheus configuration from `ServiceMonitor` objects. This means that creating `ServiceMonitor` objects does not configure the Prometheus instance; instead, the raw configuration has to be written.
* `alerting`: The Alertmanager discovery available in the Prometheus object is normally translated to the Prometheus configuration, so this configuration has to be done manually.
* `scrapeInterval`
* `scrapeTimeout`
* `evaluationInterval`
* `externalLabels`
To use a custom configuration, specify neither `serviceMonitorSelector` nor `podMonitorSelector`. When these fields are empty, the Prometheus Operator will not attempt to manage the `Secret` that contains the Prometheus configuration. This `Secret` is called `prometheus-<name-of-prometheus-object>` and lives in the same namespace as the Prometheus object. Within this `Secret`, the key that contains the Prometheus configuration is called `prometheus.yaml.gz`.
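As a sketch (the object name `example` and the `monitoring` namespace are illustrative placeholders, not taken from this document), the `Secret` could be created from a hand-written configuration like this:
```sh
# Sketch: gzip a hand-written Prometheus configuration and store it under the
# key prometheus.yaml.gz in the Secret the operator would otherwise manage.
# "example" and "monitoring" are illustrative placeholders.
gzip -c prometheus.yaml > prometheus.yaml.gz
kubectl create secret generic prometheus-example \
  --from-file=prometheus.yaml.gz \
  --namespace monitoring
```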

View file

@ -0,0 +1,156 @@
---
weight: 201
toc: true
title: Design
menu:
docs:
parent: operator
images: []
draft: false
description: This document describes the design and interaction between the custom resource definitions that the Prometheus Operator manages.
---
This document describes the design and interaction between the [custom resource definitions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/) that the Prometheus Operator manages.
The custom resources managed by the Prometheus Operator are:
* [Prometheus](#prometheus)
* [Alertmanager](#alertmanager)
* [ThanosRuler](#thanosruler)
* [ServiceMonitor](#servicemonitor)
* [PodMonitor](#podmonitor)
* [Probe](#probe)
* [PrometheusRule](#prometheusrule)
* [AlertmanagerConfig](#alertmanagerconfig)
* [PrometheusAgent](#prometheusagent)
## Prometheus
The `Prometheus` custom resource definition (CRD) declaratively defines a desired [Prometheus](https://prometheus.io/docs/prometheus) setup to run in a Kubernetes cluster. It provides options to configure the number of replicas, persistent storage, and the Alertmanagers to which the deployed Prometheus instances send alerts.
For each `Prometheus` resource, the Operator deploys one or several `StatefulSet` objects in the same namespace (the number of `StatefulSet` objects equals the number of shards, which is 1 by default).
The CRD defines via label and namespace selectors which `ServiceMonitor`, `PodMonitor` and `Probe` objects should be associated to the deployed Prometheus instances. The CRD also defines which `PrometheusRules` objects should be reconciled. The operator continuously reconciles the custom resources and generates one or several `Secret` objects holding the Prometheus configuration. A `config-reloader` container running as a sidecar in the Prometheus pod detects any change to the configuration and reloads Prometheus if needed.
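For illustration, a minimal sketch of a sharded setup (the name is illustrative; with these values the operator creates two `StatefulSet` objects of two replicas each):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example  # illustrative name
spec:
  replicas: 2  # pods per shard
  shards: 2    # one StatefulSet per shard
```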
## Alertmanager
The `Alertmanager` custom resource definition (CRD) declaratively defines a desired [Alertmanager](https://prometheus.io/docs/alerting) setup to run in a Kubernetes cluster. It provides options to configure the number of replicas and persistent storage.
For each `Alertmanager` resource, the Operator deploys a `StatefulSet` in the same namespace. The Alertmanager pods are configured to mount a `Secret` called `alertmanager-<alertmanager-name>` which holds the Alertmanager configuration under the key `alertmanager.yaml`.
When there are two or more configured replicas, the Operator runs the Alertmanager instances in high-availability mode.
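A minimal sketch (the name is illustrative): with this resource, the operator deploys three replicas in high-availability mode and expects the configuration in a `Secret` named `alertmanager-example` under the key `alertmanager.yaml`:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example  # illustrative name; config Secret is alertmanager-example
spec:
  replicas: 3  # two or more replicas run in high-availability mode
```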
## ThanosRuler
The `ThanosRuler` custom resource definition (CRD) declaratively defines a desired [Thanos Ruler](https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md) setup to run in a Kubernetes cluster. With Thanos Ruler, recording and alerting rules can be processed across multiple Prometheus instances.
A `ThanosRuler` instance requires at least one query endpoint which points to the location of Thanos Queriers or Prometheus instances.
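A minimal sketch (the query endpoint and label values are illustrative and assume a Thanos Querier service reachable inside the cluster):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ThanosRuler
metadata:
  name: example  # illustrative name
spec:
  queryEndpoints:
  - dnssrv+_http._tcp.thanos-query.monitoring.svc.cluster.local  # illustrative Thanos Querier address
  ruleSelector:
    matchLabels:
      role: thanos-rules  # PrometheusRule objects to load
```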
Further information can also be found in the [Thanos section]({{< ref "thanos.md" >}}).
## ServiceMonitor
The `ServiceMonitor` custom resource definition (CRD) allows declaratively defining how a dynamic set of services should be monitored. Which services are selected for monitoring with the desired configuration is defined using label selectors. This allows an organization to introduce conventions around how metrics are exposed, and then, following these conventions, new services are automatically discovered without the need to reconfigure the system.
For Prometheus to monitor any application within Kubernetes an `Endpoints` object needs to exist. `Endpoints` objects are essentially lists of IP addresses. Typically an `Endpoints` object is populated by a `Service` object. A `Service` object discovers `Pod`s by a label selector and adds those to the `Endpoints` object.
A `Service` may expose one or more service ports, which are backed by a list of multiple endpoints that point to a `Pod` in the common case. This is reflected in the respective `Endpoints` object as well.
The `ServiceMonitor` object introduced by the Prometheus Operator in turn discovers those `Endpoints` objects and configures Prometheus to monitor those `Pod`s.
The `endpoints` section of the `ServiceMonitorSpec` is used to configure which ports of these `Endpoints` are going to be scraped for metrics, and with which parameters. For advanced use cases, one may want to monitor ports of backing `Pod`s that are not directly part of the service endpoints; therefore, endpoints specified in the `endpoints` section are used strictly as configured.
> Note: `endpoints` (lowercase) is the field in the `ServiceMonitor` CRD, while `Endpoints` (capitalized) is the Kubernetes object kind.
Both `ServiceMonitors` as well as discovered targets may come from any namespace. This is important to allow cross-namespace monitoring use cases, e.g. for meta-monitoring. Using the `ServiceMonitorNamespaceSelector` of the `PrometheusSpec`, one can restrict the namespaces `ServiceMonitor`s are selected from by the respective Prometheus server. Using the `namespaceSelector` of the `ServiceMonitorSpec`, one can restrict the namespaces the `Endpoints` objects are allowed to be discovered from.
One can discover targets in all namespaces like this:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: example-app
spec:
selector:
matchLabels:
app: example-app
endpoints:
- port: web
namespaceSelector:
any: true
```
## PodMonitor
The `PodMonitor` custom resource definition (CRD) allows declaratively defining how a dynamic set of pods should be monitored.
Which pods are selected for monitoring with the desired configuration is defined using label selectors.
This allows an organization to introduce conventions around how metrics are exposed, and then, following these conventions, new pods are automatically discovered without the need to reconfigure the system.
A `Pod` is a collection of one or more containers which can expose Prometheus metrics on a number of ports.
The `PodMonitor` object introduced by the Prometheus Operator discovers these pods and generates the relevant configuration for the Prometheus server in order to monitor them.
The `podMetricsEndpoints` section of the `PodMonitorSpec` is used to configure which ports of a pod are going to be scraped for metrics, and with which parameters.
Both `PodMonitors` as well as discovered targets may come from any namespace. This is important to allow cross-namespace monitoring use cases, e.g. for meta-monitoring.
Using the `namespaceSelector` of the `PodMonitorSpec`, one can restrict the namespaces the `Pods` are allowed to be discovered from.
One can discover targets in all namespaces like this:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: example-app
spec:
selector:
matchLabels:
app: example-app
podMetricsEndpoints:
- port: web
namespaceSelector:
any: true
```
## Probe
The `Probe` custom resource definition (CRD) allows declaratively defining how groups of ingresses and static targets should be monitored. Besides the target, the `Probe` object requires a `prober`, which is the service that monitors the target and provides metrics for Prometheus to scrape. Typically, this is achieved using the [blackbox exporter](https://github.com/prometheus/blackbox_exporter). A minimal sketch follows (the prober address and target are illustrative and assume a blackbox exporter service running in the cluster):
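```yaml
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: example-probe  # illustrative name
spec:
  prober:
    url: blackbox-exporter.monitoring.svc:9115  # illustrative blackbox exporter service
  module: http_2xx  # module defined in the exporter's configuration
  targets:
    staticConfig:
      static:
      - https://example.com  # illustrative target
```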
## PrometheusRule
The `PrometheusRule` custom resource definition (CRD) declaratively defines desired Prometheus rules to be consumed by Prometheus or Thanos Ruler instances.
Alerts and recording rules are reconciled by the Operator and dynamically loaded without requiring any restart of Prometheus/Thanos Ruler.
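For illustration, a minimal sketch of a `PrometheusRule` (the names, label, and expression are illustrative):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules  # illustrative name
  labels:
    role: alert-rules  # must match the ruleSelector of the consuming Prometheus/ThanosRuler
spec:
  groups:
  - name: example.rules
    rules:
    - alert: TargetDown
      expr: up == 0
      for: 5m
```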
## AlertmanagerConfig
The `AlertmanagerConfig` custom resource definition (CRD) declaratively specifies subsections of the Alertmanager configuration, allowing routing of alerts to custom receivers and setting of inhibition rules. The `AlertmanagerConfig` can be defined on a namespace level, providing an aggregated configuration to Alertmanager. An example of how to use it is provided below. Please be aware that this CRD is not stable yet.
```yaml mdox-exec="cat example/user-guides/alerting/alertmanager-config-example.yaml"
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: config-example
labels:
alertmanagerConfig: example
spec:
route:
groupBy: ['job']
groupWait: 30s
groupInterval: 5m
repeatInterval: 12h
receiver: 'webhook'
receivers:
- name: 'webhook'
webhookConfigs:
- url: 'http://example.com/'
```
## PrometheusAgent
The `PrometheusAgent` custom resource definition (CRD) declaratively defines a desired [Prometheus Agent](https://prometheus.io/blog/2021/11/16/agent/) setup to run in a Kubernetes cluster.
Just as the Prometheus Agent binary is a close sibling of the Prometheus server binary, the `PrometheusAgent` CR is similar to the `Prometheus` CR. Mirroring the Agent binary, the Agent CR omits several configuration options compared with the regular Prometheus CR, e.g. alerting, `PrometheusRule` selectors, remote-read, storage, and Thanos sidecars.
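A minimal sketch of such a resource (the name, selector labels, and remote-write URL are illustrative):
```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: PrometheusAgent
metadata:
  name: example  # illustrative name
spec:
  serviceMonitorSelector:
    matchLabels:
      team: frontend  # illustrative label
  remoteWrite:
  - url: http://remote-write.example.com/api/v1/write  # illustrative endpoint samples are shipped to
```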
A more extensive read explaining why Agent support was done with a whole new CRD can be seen [here](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/designs/prometheus-agent.md).

View file

@ -0,0 +1,182 @@
# Prometheus Agent support
## Summary
The Prometheus 2.32.0 release introduces the Prometheus Agent, a mode optimized for remote-write dominant scenarios. This document proposes extending the Prometheus Operator to allow running a Prometheus Agent with different deployment strategies.
## Background
The Prometheus Operator in its current state does not allow a simple way of deploying the Prometheus agent. A potential workaround has been described in a [GitHub comment](https://github.com/prometheus-operator/prometheus-operator/issues/3989#issuecomment-974137486), where the agent can be deployed through the existing Prometheus CRD by explicitly setting command-line arguments specific to the agent mode.
As described in the comment, one significant problem with this approach is that the Prometheus Operator always generates `alerts` and `rules` sections in the Prometheus config file. These sections are not allowed when running the agent so users need to take additional actions to pause reconciliation of the Prometheus CR, tweak the generated secret and then unpause reconciliation in order to resolve the problem. Alternatively, users can apply a strategic merge patch to the prometheus container as described in the kube-prometheus docs: [https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/prometheus-agent.md](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/prometheus-agent.md)
While this workaround can be used as a stop-gap solution to unblock users in the short term, it has the drawback of needing additional steps which require understanding implementation details of the operator itself. In addition to this, overriding the value of the argument `--config.file` also requires knowledge of Prometheus Operator internals.
A lot of the fields supported by the current PrometheusSpec are not applicable to the agent mode. These fields are documented in the PrometheusAgent CRD section.
Finally, the Prometheus agent is significantly different from the Prometheus server in the way it fits into a monitoring stack. Therefore, running it as a StatefulSet might not be the only possible deployment strategy; users might want to run it as a DaemonSet or a Deployment instead.
## Proposal
This document proposes introducing a PrometheusAgent CRD to allow users to run Prometheus in agent mode. Having a separate CRD allows the Prometheus and PrometheusAgent CRDs to evolve independently and expose parameters specific to each Prometheus mode.
For example, the PrometheusAgent CRD could have a `strategy` field indicating the deployment strategy for the agent, but no `alerting` field since alerts are not supported in agent mode. Even though there will be an upfront cost for introducing a new CRD, having separate APIs would simplify long-term maintenance by allowing the use of CRD validation mechanisms provided by Kubernetes.
In addition, dedicated APIs with mode-specific fields are self documenting since they remove the need to explicitly document which fields and field values are allowed or required for each individual mode. Users will also be able to get an easier overview of the different parameters they could set for each mode, which leads to a better user experience when using the operator.
Finally, the advantage of using a separate CRD is the possibility of using an alpha API version, which would clearly indicate that the CRD is still under development. The Prometheus CRD, on the other hand, has already been declared as v1 and adding experimental fields to it will be challenging from both documentation and implementation aspects.
### Prometheus Agent CRD
The PrometheusAgent CRD would be similar to the Prometheus CRD, with the fields that are not applicable to the Prometheus agent mode removed.
Here is the list of fields we want to exclude:
* `retention`
* `retentionSize`
* `disableCompaction`
* `evaluationInterval`
* `rules`
* `query`
* `ruleSelector`
* `ruleNamespaceSelector`
* `alerting`
* `remoteRead`
* `additionalAlertRelabelConfigs`
* `additionalAlertManagerConfigs`
* `thanos`
* `prometheusRulesExcludedFromEnforce`
* `queryLogFile`
* `allowOverlappingBlocks`
The `enabledFeatures` field can be validated for agent-specific features only, which include: `expand-external-labels`, `extra-scrape-metrics` and `new-service-discovery-manager`.
Finally, the `remoteWrite` field should be made required only for the agent since it is a mandatory configuration section in agent mode.
### Deployment Strategies
When using Prometheus in server mode, scraped samples are stored in memory and on disk. These samples need to be preserved during disruptions, such as pod replacements or cluster maintenance operations which cause evictions. Because of this, the Prometheus Operator currently deploys Prometheus instances as Kubernetes StatefulSets.
On the other hand, when running Prometheus in agent mode, samples are sent to a remote write target immediately, and are not kept locally for a long time. The only use-case for storing samples locally is to allow retries when remote write targets are not available. This is achieved by keeping scraped samples in a WAL for 2h at most. Samples which have been successfully sent to remote write targets are immediately removed from local storage.
Since the Prometheus agent has slightly different storage requirements, this proposal suggests allowing users to choose different deployment strategies.
#### Running the agent with cluster-wide scope
Even though the Prometheus agent has very little need for storage, there are still scenarios where sample data can be lost if persistent storage is not used. If a remote write target is unavailable and an agent pod is evicted at the same time, the samples collected during the unavailability window of the remote write target will be completely lost.
For this reason, the cluster-wide strategy would be implemented by deploying a StatefulSet, similarly to how `Prometheus` CRs are currently reconciled. This also allows for reusing existing code from the operator and delivering a working solution faster and with fewer changes. Familiarity with how StatefulSets work, together with the possibility to reuse existing code, were the primary reasons for choosing StatefulSets for this strategy over Deployments.
The following table documents the problems that could occur with a Deployment and StatefulSet strategy in different situations.
<table>
<tr>
<td>
</td>
<td><strong>Pod update</strong>
</td>
<td><strong>Network outage during pod update</strong>
</td>
<td><strong>Network outage during node drain</strong>
</td>
<td><strong>Cloud k8s node rotation</strong>
</td>
<td><strong>Non-graceful pod deletion</strong>
</td>
</tr>
<tr>
<td><strong>Deployment with emptyDir volume</strong>
</td>
<td>No delay in scrapes if the new pod is created before the old one is terminated
</td>
<td>Unsent samples will be lost.
<p>
EmptyDir is tied to a pod <em>and</em> node, and data from the old pod will not be preserved.
</td>
<td>Unsent samples will be lost.
<p>
EmptyDir is tied to a pod <em>and</em> node, and data from the old pod will not be preserved.
</td>
<td>Unsent samples will be lost
</td>
<td>Unsent samples will be lost.
<p>
EmptyDir is tied to a pod <em>and</em> node, and data from the old pod will not be preserved.
</td>
</tr>
<tr>
<td><strong>Statefulset with a PVC</strong>
</td>
<td>Potential delay in a subsequent scrape due to recreation of the pod
</td>
<td>No data loss, the volume will contain all unsent data
</td>
<td>No data loss, the volume will contain all unsent data
</td>
<td>No data loss if the new pod is scheduled to a node in the same AZ; it may be stuck in a pending state otherwise
</td>
<td>No data loss, the volume will contain all unsent data
</td>
</tr>
<tr>
<td><strong>Deployment or STS with replicas</strong>
</td>
<td>No delay, mitigated by replicas
</td>
<td>Unsent data will be lost if the last replica is terminated before the network outage resolves
</td>
<td>No data loss, as other replicas are running on other nodes
</td>
<td>No data loss, as other replicas are running on other nodes
</td>
<td>No data loss, as other replicas are untouched
</td>
</tr>
</table>
#### Running the agent with node-specific scope
This strategy has a built-in auto-scaling mechanism since each agent will scrape only a subset of the targets. As the cluster grows and more nodes are added to it, new agent instances will automatically be scheduled to scrape pods on those nodes. Even though the load distribution will not be perfect (targets on certain nodes might produce far more metrics than targets on other nodes), it is a simple way of adding some sort of load management.
Another advantage is that persistent storage can now be handled by mounting a host volume, a strategy commonly used by log collectors. The need for persistent storage is described in the StatefulSet strategy section.
The Grafana Agent config exposes a `host_filter` boolean flag which, when enabled, instructs the agent to only filter targets from the same node, in addition to the scrape config already provided. With this option, the same config can be used for agents running on multiple nodes, and the agents will automatically scrape targets from their own nodes. Such a config option is not yet available in Prometheus. An issue has already been raised [[3]](https://github.com/prometheus/prometheus/issues/9637) and there is an open PR for addressing it [[4]](https://github.com/prometheus/prometheus/pull/10004).
Until the upstream work has been completed, it could be possible to implement this strategy with a few tweaks:
* the operator could use the [downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api) to inject the node name in the pods.
* the operator's config reloader already supports expansion of environment variables.
With this setup, the unexpanded Prometheus configuration would look as follows:
```yaml
relabel_configs:
- source_labels: [__meta_kubernetes_pod_node_name]
  action: keep
  regex: $NODE_NAME
```
And in the pod definition:
```yaml
spec:
  containers:
  - name: config-reloader
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
```
## Additional implementation details
There has been a suggestion in [a GitHub comment](https://github.com/prometheus-operator/prometheus-operator/issues/3989#issuecomment-821249404) to introduce a ScrapeConfig CRD in parallel to adding the PrometheusAgent CRD, and “translate” PrometheusAgent CRs to ScrapeConfig CRs. The main challenge with this approach is that it significantly increases the scope of the work needed to support deploying Prometheus agents.
A leaner alternative would be to focus on implementing the PrometheusAgent CRD by reusing code from the existing Prometheus controller. The ScrapeConfig can then be introduced separately, and the PrometheusAgent can be the first CRD which gets migrated to it.
### Implementation steps
The first step in the implementation process would include creating the PrometheusAgent CRD and deploying the agent as a StatefulSet, similar to how the Prometheus CRD is currently reconciled. This will allow for reusing a lot of the existing codebase from the Prometheus controller and the new CRD can be released in a timely manner.
Subsequent steps would include iterating on users' feedback and either implementing different deployment strategies, or refining the existing one.
## References
* [1] [https://github.com/grafana/agent/blob/5bf8cf452fa76c75387e30b6373630923679221c/production/kubernetes/agent-bare.yaml#L43](https://github.com/grafana/agent/blob/5bf8cf452fa76c75387e30b6373630923679221c/production/kubernetes/agent-bare.yaml#L43)
* [2] [https://github.com/open-telemetry/opentelemetry-operator#deployment-modes](https://github.com/open-telemetry/opentelemetry-operator#deployment-modes)
* [3] [https://github.com/prometheus/prometheus/issues/9637](https://github.com/prometheus/prometheus/issues/9637)
* [4] [https://github.com/prometheus/prometheus/pull/10004](https://github.com/prometheus/prometheus/pull/10004)

Some files were not shown because too many files have changed in this diff. Show more