Bump to v2.0.12: VMWatch Integration (#60)

## Overview
This PR combines changes from multiple pull requests into a feature branch
that adds support for running VMWatch (amd64 and arm64) as an executable
managed via goroutines and channels. In addition, a number of dev/debugging
tools are included to improve developer productivity.

> VMWatch is a standardized, lightweight, and open-sourced testing
framework designed to enhance the monitoring and management of guest VMs
on the Azure platform, including both 1P and 3P instances. VMWatch is
engineered to collect vital health signals across multiple dimensions,
which will be seamlessly integrated into Azure's quality systems. By
leveraging these signals, VMWatch will enable Azure to swiftly detect
and prevent regressions induced by platform updates or configuration
changes, identify gaps in platform telemetry, and ultimately improve the
guest experience for all Azure customers.

## Behavior
VMWatch runs asynchronously as a separate process from ApplicationHealth,
so application health probing is not affected by the state of VMWatch.
Extension settings control whether VMWatch is enabled or disabled, and can
also specify test names and parameter overrides passed to the VMWatch
binary. The status of VMWatch is displayed in the extension status file and
in the GET VM Instance View. The main process manages the VMWatch process
and communicates VMWatch status via the extension status file.

## Process Leaks & Resource Governance
The main process enforces resource utilization limits for CPU and memory,
and avoids process leaks by subscribing to shutdown/termination signals
and cleaning up the VMWatch process on exit.
This commit is contained in:
frank-pang-msft 2024-06-11 15:16:42 -07:00 committed by GitHub
Parents: 4b71f92489 b2b0c04c96
Commit: ab885dcecc
No key found matching this signature
GPG key ID: B5690EEEBB952194
837 changed files: 350241 additions and 2681 deletions

47
.devcontainer/Dockerfile Normal file

@ -0,0 +1,47 @@
FROM mcr.microsoft.com/devcontainers/go:1.22-bullseye
RUN apt-get -qqy update && \
apt-get -qqy install jq openssl ca-certificates && \
apt-get -qqy clean && \
rm -rf /var/lib/apt/lists/*
# Create the directories and files that need to be present
RUN mkdir -p /var/lib/waagent && \
mkdir -p /var/lib/waagent/Extension/config && \
mkdir -p /var/lib/waagent/Extension/status && \
mkdir -p /var/log/azure/Extension/VE.RS.ION && \
mkdir -p /var/log/azure/Extension/events
# copy default extension settings into the appropriate location
COPY extension-settings.json /var/lib/waagent/Extension/config/0.settings
# install go tools we need for build
RUN go install github.com/ahmetb/govvv@latest
# Install npm
RUN apt-get update && \
apt-get install -y npm
# Updating npm to the latest version
RUN npm cache clean -f && npm install -g n && n stable
# Install dev environment dependencies
RUN npm install bats -g
# Install Bats-Assert and Bats-Support
RUN npm install -g https://github.com/bats-core/bats-assert && \
npm install -g https://github.com/bats-core/bats-support
# Install Parallel
RUN apt-get install -y parallel
# Install Docker
RUN apt-get install runc -y && \
apt-get install containerd -y && \
apt-get install docker.io -y
# Install Docker Engine
RUN curl -fsSL https://test.docker.com -o test-docker.sh && \
sh test-docker.sh
# Creating ENV variables
ENV CUSTOM_BATS_LIB_PATH /usr/local/lib/node_modules

33
.devcontainer/README.md Normal file

@ -0,0 +1,33 @@
# Dev Container Info
This directory contains files to support running the code in a dev container. This allows you to build and debug the code locally, on a PC, Mac, or Linux machine.
This works using VSCode's dev container support.
# Requirements
1. Docker – needed for building the docker image and for devcontainer workflow. Microsoft has an enterprise agreement with docker, details [here](https://microsoft.service-now.com/sp?id=sc_cat_item&sys_id=234197ba1b418d54bba22173b24bcbf0)
- Windows Installer is [here](https://docs.docker.com/desktop/install/windows-install/)
- Mac installer is [here](https://docs.docker.com/desktop/install/mac-install/)
1. Dev Containers VSCode extension
- installation info is [here](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
# Running In DevContainer
This works either on a Windows/Mac machine or via a remote SSH session on a Linux machine (the latter is the only way the integration tests can run reliably; debugging works in all modes)
1. Go to the root of the repo in a command window
1. Open vscode using `code .`
1. Click the blue `><` button in the bottom left of the screen and select `Reopen in Container` (the first time this runs it will take some time, as it builds the docker container for the dev environment based on the dockerfile in the .devcontainer directory)
1. Once it has opened, open a bash terminal in vscode
1. run `make devcontainer`
1. you are now ready to run and debug the extension
## Debugging
1. configure the appropriate settings in the file `.devcontainer/extension-settings.json` (the default one enables the `simple` and `process` tests for vmwatch but you can change it)
1. click the debug icon on the left and select `devcontainer run - enable` target
- you can add more in `launch.json` as needed
1. set breakpoints as required
1. hit f5 to launch the extension code


@ -0,0 +1,68 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.191.0/containers/dotnet
{
"name": "Golang",
"build": {
"dockerfile": "Dockerfile",
"args": {}
},
"runArgs": [
// add this if we need privileged access
"--privileged",
"--env-file",
".devcontainer/devcontainer.env"
],
"containerEnv": {
"RUNNING_IN_DEV_CONTAINER": "1",
"ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE": "1",
"DOCKER_DEFAULT_PLATFORM": "linux/amd64"
},
"customizations": {
"vscode": {
// Set *default* container specific settings.json values on container create.
"settings": {},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"golang.go",
"ms-azuretools.vscode-docker"
],
"recommendations": [
"GitHub.copilot",
"GitHub.copilot-chat",
"GitHub.vscode-pull-request-github"
]
}
},
"remoteUser": "root",
// Use 'postCreateCommand' to run commands after the container is created.
"postCreateCommand": "go mod download && echo hello",
"mounts": [
"source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
]
// // Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [5000, 5001],
// [Optional] To reuse of your local HTTPS dev cert:
//
// 1. Export it locally using this command:
// * Windows PowerShell:
// dotnet dev-certs https --trust; dotnet dev-certs https -ep "$env:USERPROFILE/.aspnet/https/aspnetapp.pfx" -p "SecurePwdGoesHere"
// * macOS/Linux terminal:
// dotnet dev-certs https --trust; dotnet dev-certs https -ep "${HOME}/.aspnet/https/aspnetapp.pfx" -p "SecurePwdGoesHere"
//
// 2. Uncomment these 'remoteEnv' lines:
// "remoteEnv": {
// "ASPNETCORE_Kestrel__Certificates__Default__Password": "SecurePwdGoesHere",
// "ASPNETCORE_Kestrel__Certificates__Default__Path": "/home/vscode/.aspnet/https/aspnetapp.pfx",
// },
//
// 3. Do one of the following depending on your scenario:
// * When using GitHub Codespaces and/or Remote - Containers:
// 1. Start the container
// 2. Drag ~/.aspnet/https/aspnetapp.pfx into the root of the file explorer
// 3. Open a terminal in VS Code and run "mkdir -p /home/vscode/.aspnet/https && mv aspnetapp.pfx /home/vscode/.aspnet/https"
//
// * If only using Remote - Containers with a local container, uncomment this line instead:
// "mounts": [ "source=${env:HOME}${env:USERPROFILE}/.aspnet/https,target=/home/vscode/.aspnet/https,type=bind" ],
}


@ -0,0 +1,26 @@
{
"runtimeSettings": [
{
"handlerSettings": {
"protectedSettingsCertThumbprint": "$cert_tp",
"publicSettings": {
"requestPath": "/health",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 10,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals" : [ "outbound_connectivity", "disk_io", "az_storage_blob", "clockskew", "process", "dns" ],
"enabledOptionalSignals" : [ "simple" ]
},
"environmentAttributes" : {
"OutboundConnectivityEnabled" : true
}
}
}
}
}
]
}

6
.gitattibutes Normal file

@ -0,0 +1,6 @@
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto
# make sure sh files are encoded with unix line endings
*.sh text eol=lf
*.bash text eol=lf

31
.github/workflows/go.yml vendored

@ -1,19 +1,20 @@
name: Go
name: Go (Ext V2)
on:
workflow_dispatch:
push:
branches:
- master
- feature/*
- feature/**
pull_request:
branches:
- master
- feature/*
- feature/**
jobs:
build:
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, ubuntu-20.04]
@ -27,9 +28,9 @@ jobs:
run: sudo apt-get update
- name: Setup Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: '1.18.10'
go-version: '1.22.2'
- name: Setup Go Environment
run: |
@ -52,6 +53,20 @@ jobs:
sudo apt install npm
sudo npm install -g bats
- name: Setup bats libs
uses: bats-core/bats-action@1.5.6
with:
assert-install: true
support-install: true
bats-install: false
detik-install: false
file-install: false
- name: Testing Bats Installation
run: |
bats --version
sudo bats --version
- name: Install Parallel
run: |
sudo apt install parallel
@ -78,7 +93,7 @@ jobs:
working-directory: ${{ env.repo_root }}
- name: Unit Tests
continue-on-error: true
continue-on-error: false
run: go list ./... | grep -v '/vendor/' | xargs go test -v -cover
working-directory: ${{ env.repo_root }}
@ -109,10 +124,10 @@ jobs:
continue-on-error: true
run: |
mkdir -p integration-test/test/parallel/.bats/run-logs/
sudo bats integration-test/test/parallel --jobs 10 -T --trace
sudo bats integration-test/test/parallel --jobs 10 -T --trace --filter-tags !linuxhostonly
working-directory: ${{ env.repo_root }}
- name: Retry Failing Parallel Integration Tests
run: |
sudo bats integration-test/test/parallel --filter-status failed -T --trace
sudo bats integration-test/test/parallel --filter-status failed -T --trace --filter-tags !linuxhostonly
working-directory: ${{ env.repo_root }}

9
.vscode/extensions.json vendored Normal file

@ -0,0 +1,9 @@
{
"recommendations": [
"ms-vscode-remote.remote-containers",
"ms-vscode-remote.remote-ssh",
"github.copilot",
"ms-azuretools.vscode-docker",
"github.vscode-pull-request-github"
]
}

50
.vscode/launch.json vendored Normal file

@ -0,0 +1,50 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "devcontainer run - install",
"type": "go",
"request": "launch",
"mode": "exec",
"program": "/var/lib/waagent/Extension/bin/applicationhealth-extension",
"cwd": "${workspaceFolder}",
"args" : [
"install"
]
},
{
"name": "devcontainer run - enable",
"type": "go",
"request": "launch",
"mode": "exec",
"program": "/var/lib/waagent/Extension/bin/applicationhealth-extension",
"cwd": "${workspaceFolder}",
"args" : [
"enable"
],
"preLaunchTask": "make devcontainer"
},
{
"name": "Run integration tests",
"type": "node-terminal",
"request": "launch",
"command": "./integration-test/run.sh",
"cwd": "${workspaceFolder}",
"preLaunchTask": "make binary"
},
{
"name": "devcontainer run - enable NOBUILD",
"type": "go",
"request": "launch",
"mode": "exec",
"program": "/var/lib/waagent/Extension/bin/applicationhealth-extension",
"cwd": "${workspaceFolder}",
"args" : [
"enable"
]
}
]
}

35
.vscode/tasks.json vendored Normal file

@ -0,0 +1,35 @@
{
"version": "2.0.0",
"tasks": [
{
"label": "make devcontainer",
"command": "make",
"args": [
"devcontainer"
],
"type": "shell",
"group": "build",
"problemMatcher": []
},
{
"label": "make binary",
"command": "make",
"args": [
"binary"
],
"type": "shell",
"group": "build",
"problemMatcher": []
},
{
"label": "make testenv",
"command": "make",
"args": [
"testenv"
],
"type": "shell",
"group": "build",
"problemMatcher": []
}
]
}


@ -34,4 +34,27 @@ binary: clean
clean:
rm -rf "$(BINDIR)" "$(BUNDLEDIR)" "$(TESTBINDIR)"
# set up the files in the dev container for debugging locally with a default settings file
# ONLY run this if in a dev container as it can mess with local machine
testenv:
ifneq ("$(RUNNING_IN_DEV_CONTAINER)", "1")
echo "Target can only run in dev container $(RUNNING_IN_DEV_CONTAINER)"
exit 1
endif
cp -r ./integration-test/env/* /var/lib/waagent/
cp -r ./testbin/* /var/lib/waagent/
ln -sf /var/lib/waagent/fake-waagent /sbin/fake-waagent || true
ln -sf /var/lib/waagent/wait-for-enable /sbin/wait-for-enable
ln -sf /var/lib/waagent/webserver /sbin/webserver
ln -sf /var/lib/waagent/webserver_shim /sbin/webserver_shim
cp misc/HandlerManifest.json /var/lib/waagent/Extension/
cp misc/manifest.xml /var/lib/waagent/Extension/
cp misc/applicationhealth-shim /var/lib/waagent/Extension/bin/
cp bin/applicationhealth-extension /var/lib/waagent/Extension/bin
mkdir -p /var/log/azure/Extension/events
mkdir -p /var/lib/waagent/Extension/config/
cp ./.devcontainer/extension-settings.json /var/lib/waagent/Extension/config/0.settings
devcontainer: binary testenv
.PHONY: clean binary


@ -1,4 +1,4 @@
# Azure ApplicationHealth Extension for Linux (1.0.0)
# Azure ApplicationHealth Extension for Linux (V2)
[![Build Status](https://travis-ci.org/Azure/applicationhealth-extension-linux.svg?branch=master)](https://travis-ci.org/Azure/applicationhealth-extension-linux)
[![GitHub Build Status](https://github.com/Azure/applicationhealth-extension-linux/actions/workflows/go.yml/badge.svg)](https://github.com/Azure/applicationhealth-extension-linux/actions/workflows/go.yml)

29
go.mod

@ -1,21 +1,32 @@
module github.com/Azure/run-command-extension-linux
module github.com/Azure/applicationhealth-extension-linux
go 1.17
go 1.22
require (
github.com/Azure/azure-docker-extension v0.0.0-20160802215703-0dd2f199467d
github.com/go-kit/kit v0.1.1-0.20160721083846-b076b44dbec2
github.com/pkg/errors v0.7.1-0.20160627222352-a2d6902c6d2a
github.com/stretchr/testify v1.1.4-0.20160615092844-d77da356e56a
github.com/Azure/azure-extension-platform v0.0.0-20240521173920-6b2acfda81e9
github.com/containerd/cgroups/v3 v3.0.2
github.com/opencontainers/runtime-spec v1.0.2
github.com/pkg/errors v0.9.1
github.com/stretchr/testify v1.8.0
github.com/xeipuuv/gojsonschema v0.0.0-20160623135812-c539bca196be
)
require github.com/go-kit/log v0.2.0
require (
github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2 // indirect
github.com/go-logfmt/logfmt v0.2.1-0.20160601130801-d4327190ff83 // indirect
github.com/go-stack/stack v1.5.2 // indirect
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515 // indirect
github.com/cilium/ebpf v0.9.1 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/godbus/dbus/v5 v5.0.4 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/sirupsen/logrus v1.9.0 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20151027082146-e0fe6f683076 // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20150808065054-e02fc20de94c // indirect
golang.org/x/sys v0.2.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

71
go.sum

@ -1,24 +1,67 @@
github.com/Azure/azure-docker-extension v0.0.0-20160802215703-0dd2f199467d h1:IZq7wAvhHb/IObHOh8RHClV2zv4dHh5MrHdRWTTIwe0=
github.com/Azure/azure-docker-extension v0.0.0-20160802215703-0dd2f199467d/go.mod h1:tVA4DYQYxotjw+EkJhfywtM99w7nAOatBvgNAkpsBvk=
github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2 h1:5zdDAMuB3gvbHB1m2BZT9+t9w+xaBmK3ehb7skDXcwM=
github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-kit/kit v0.1.1-0.20160721083846-b076b44dbec2 h1:awXynDTA1TiAp1SA/o/xoU6oRHE3xKCokck9l4/poMc=
github.com/go-kit/kit v0.1.1-0.20160721083846-b076b44dbec2/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.2.1-0.20160601130801-d4327190ff83 h1:WEFlTYvIQSd2ofUwgM9nOR+KTzdG/pLZzdUlDp6mciM=
github.com/go-logfmt/logfmt v0.2.1-0.20160601130801-d4327190ff83/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-stack/stack v1.5.2 h1:5sTB/0oZM2O31k/N1IRwxxVXzLIt5NF2Aqx/2gWI9OY=
github.com/go-stack/stack v1.5.2/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515 h1:T+h1c/A9Gawja4Y9mFVWj2vyii2bbUNDw3kt9VxK2EY=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/pkg/errors v0.7.1-0.20160627222352-a2d6902c6d2a h1:dKpZ0nc8i7prliB4AIfJulQxsX7whlVwi6j5HqaYUl4=
github.com/pkg/errors v0.7.1-0.20160627222352-a2d6902c6d2a/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/Azure/azure-extension-platform v0.0.0-20240521173920-6b2acfda81e9 h1:/DP5C4Fx89JD/Te5ZDIG5hNNqUS0Jkaytqk+FP0owpM=
github.com/Azure/azure-extension-platform v0.0.0-20240521173920-6b2acfda81e9/go.mod h1:nEQQIC3RKmMnpdc+RakYHIdu556jdcHv67ML8PdsQeQ=
github.com/cilium/ebpf v0.9.1 h1:64sn2K3UKw8NbP/blsixRpF3nXuyhz/VjRlRzvlBRu4=
github.com/cilium/ebpf v0.9.1/go.mod h1:+OhNOIXx/Fnu1IE8bJz2dzOA+VSfyTfdNUVdlQnxUFY=
github.com/containerd/cgroups/v3 v3.0.2 h1:f5WFqIVSgo5IZmtTT3qVBo6TzI1ON6sycSBKkymb9L0=
github.com/containerd/cgroups/v3 v3.0.2/go.mod h1:JUgITrzdFqp42uI2ryGA+ge0ap/nxzYgkGmIcetmErE=
github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/frankban/quicktest v1.14.0 h1:+cqqvzZV87b4adx/5ayVOaYZ2CrvM4ejQvUdBzPPUss=
github.com/frankban/quicktest v1.14.0/go.mod h1:NeW+ay9A/U67EYXNFA1nPE8e/tnQv/09mUdL/ijj8og=
github.com/go-kit/log v0.2.0 h1:7i2K3eKTos3Vc0enKCfnVcgHh2olr/MyfboYq7cAcFw=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/godbus/dbus/v5 v5.0.4 h1:9349emZab16e7zQvpmsbtjc18ykshndd8y2PG3sgJbA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/opencontainers/runtime-spec v1.0.2 h1:UfAcuLBJB9Coz72x1hgl8O5RVzTdNiaglX6v2DM6FI0=
github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/testify v1.1.4-0.20160615092844-d77da356e56a h1:UWu0XgfW9PCuyeZYNe2eGGkDZjooQKjVQqY/+d/jYmc=
github.com/stretchr/testify v1.1.4-0.20160615092844-d77da356e56a/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBOAvL+k=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/xeipuuv/gojsonpointer v0.0.0-20151027082146-e0fe6f683076 h1:KM4T3G70MiR+JtqplcYkNVoNz7pDwYaBxWBXQK804So=
github.com/xeipuuv/gojsonpointer v0.0.0-20151027082146-e0fe6f683076/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20150808065054-e02fc20de94c h1:XZWnr3bsDQWAZg4Ne+cPoXRPILrNlPNQfxBuwLl43is=
github.com/xeipuuv/gojsonreference v0.0.0-20150808065054-e02fc20de94c/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v0.0.0-20160623135812-c539bca196be h1:sRGd3e18izj1hQgF1hSvDOA8RPPnA2t4p8YeLZ/GdBU=
github.com/xeipuuv/gojsonschema v0.0.0-20160623135812-c539bca196be/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
go.uber.org/goleak v1.1.12/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0 h1:ljd4t30dBnAvMZaQCevtY0xLLD0A+bRZXbgLMLU1F/A=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@ -40,3 +40,5 @@ root:
make binary
./integration-test/run.sh
```
If you are running on a Linux host (a real Linux host, not WSL), you can run all the tests with `./integration-test/run.sh --all`.


@ -7,7 +7,9 @@
"logFolder": "/var/log/azure/Extension/VE.RS.ION",
"configFolder": "/var/lib/waagent/Extension/config",
"statusFolder": "/var/lib/waagent/Extension/status",
"heartbeatFile": "/var/lib/waagent/Extension/heartbeat.log"
"heartbeatFile": "/var/lib/waagent/Extension/heartbeat.log",
"eventsFolder": "/var/log/azure/Extension/events",
"eventsFolder_preview": "/var/log/azure/Extension/events"
}
}
]

42421
integration-test/env/Extension/bin/VMWatch/NOTICE.txt vendored Normal file

File diff not shown because it is too large

1
integration-test/env/Extension/bin/VMWatch/version.txt vendored Normal file

@ -0,0 +1 @@
1.0.02698.147-official-0e96b031

279
integration-test/env/Extension/bin/VMWatch/vmwatch.conf vendored Executable file

@ -0,0 +1,279 @@
# Telegraf Configuration
#
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
#
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
#
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.
#
# Environment variables can be used anywhere in this config file, simply surround
# them with ${}. For strings the variable must be within quotes (ie, "${STR_VAR}"),
# for numbers and booleans they should be plain (ie, ${INT_VAR}, ${BOOL_VAR})
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = false
## Telegraf will send metrics to outputs in batches of at most
## metric_batch_size metrics.
## This controls the size of writes that Telegraf sends to output plugins.
metric_batch_size = 1000
## Maximum number of unwritten metrics per output. Increasing this value
## allows for longer periods of output downtime without dropping metrics at the
## cost of higher maximum memory usage.
metric_buffer_limit = 10000
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "20s"
## Collection offset is used to shift the collection by the given amount.
## This can be used to avoid many plugins querying constrained devices
## at the same time by manually scheduling them in time.
# collection_offset = "0s"
## Default flushing interval for all outputs. Maximum flush_interval will be
## flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## Collected metrics are rounded to the precision specified. Precision is
## specified as an interval with an integer + unit (e.g. 0s, 10ms, 2us, 4s).
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
##
## By default or when set to "0s", precision will be set to the same
## timestamp order as the collection interval, with the maximum being 1s:
## ie, when interval = "10s", precision will be "1s"
## when interval = "250ms", precision will be "1ms"
##
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
precision = "0s"
## Log at debug level.
# debug = false
## Log only error level messages.
# quiet = false
## Log target controls the destination for logs and can be one of "file",
## "stderr" or, on Windows, "eventlog". When set to "file", the output file
## is determined by the "logfile" setting.
logtarget = "file"
## Name of the file to be logged to when using the "file" logtarget. If set to
## the empty string then logs are written to stderr.
logfile = "${VERBOSE_LOG_FILE_FULL_PATH:-}"
## The logfile will be rotated after the time interval specified. When set
## to 0 no time based rotation is performed. Logs are rotated only when
## written to, if there is no log activity rotation may be delayed.
logfile_rotation_interval = "60s"
## The logfile will be rotated when it becomes larger than the specified
## size. When set to 0 no size based rotation is performed.
logfile_rotation_max_size = "4MB"
## Maximum number of rotated archives to keep, any older logs are deleted.
## If set to -1, no archives are removed.
logfile_rotation_max_archives = 5
## Pick a timezone to use when logging or type 'local' for local time.
## Example: America/Chicago
# log_with_timezone = ""
## Override default hostname, if empty use os.Hostname()
hostname = ""
## If set to true, do not set the "host" tag in the telegraf agent.
omit_hostname = true
## Method of translating SNMP objects. Can be "netsnmp" (deprecated) which
## translates by calling external programs snmptranslate and snmptable,
## or "gosmi" which translates using the built-in gosmi library.
# snmp_translator = "netsnmp"
## Name of the file to load the state of plugins from and store the state to.
## If uncommented and not empty, this file will be used to save the state of
## stateful plugins on termination of Telegraf. If the file exists on start,
## the state in the file will be restored for the plugins.
# statefile = ""
[[inputs.outbound_connectivity]]
interval = "${OUTBOUND_CONNECTIVITY_INTERVAL:-60s}"
name_suffix = "${OUTBOUND_CONNECTIVITY_NAME_SUFFIX:-}"
urls = "${OUTBOUND_CONNECTIVITY_URLS:-http://www.msftconnecttest.com/connecttest.txt}"
timeout_in_milliseconds = ${OUTBOUND_CONNECTIVITY_TIMEOUT_IN_MILLISECONDS:-30000}
[[inputs.disk_io]]
interval = "${DISK_IO_INTERVAL:-180s}"
name_suffix = "${DISK_IO_NAME_SUFFIX:-}"
mount_points = "${DISK_IO_MOUNT_POINTS:-}"
ignore_file_system_list = "${DISK_IO_IGNORE_FS_LIST:-tmpfs,devtmpfs,devfs,iso9660,overlay,aufs,squashfs,autofs}"
file_name = "${DISK_IO_FILENAME:-}"
[[inputs.simple]]
interval = "${SIMPLE_INTERVAL:-10s}"
name_suffix = "${SIMPLE_NAME_SUFFIX:-}"
ok = ${SIMPLE_OK:-false}
[[inputs.az_storage_blob]]
interval = "${AZ_STORAGE_BLOB_INTERVAL:-180s}"
name_suffix = "${AZ_STORAGE_BLOB_NAME_SUFFIX:-}"
storage_account_name = "${AZ_STORAGE_ACCOUNT_NAME:-}"
container_name = "${AZ_STORAGE_CONTAINER_NAME:-}"
blob_name = "${AZ_STORAGE_BLOB_NAME:-}"
blob_domain_name = "${AZ_STORAGE_BLOB_DOMAIN_NAME:-blob.core.windows.net}"
sas_token_base64 = "${AZ_STORAGE_SAS_TOKEN_BASE64:-}"
use_managed_identity = ${AZ_STORAGE_USE_MANAGED_IDENTITY:-false}
managed_identity_client_id = "${AZ_STORAGE_MANAGED_IDENTITY_CLIENT_ID:-}"
[[inputs.clockskew]]
interval = "${CLOCK_SKEW_INTERVAL:-180s}"
name_suffix = "${CLOCK_SKEW_NAME_SUFFIX:-}"
ntp_server = "${CLOCK_SKEW_NTP_SERVER:-time.windows.com}"
time_skew_threshold_in_seconds = ${CLOCK_SKEW_TIME_SKEW_THRESHOLD_IN_SECONDS:-5.0}
[[inputs.process]]
interval = "${PROCESS_INTERVAL:-180s}"
name_suffix = "${PROCESS_NAME_SUFFIX:-}"
timeout = "${PROCESS_TIMEOUT:-10s}"
[[inputs.process_monitor]]
interval = "${PROCESS_MONITOR_INTERVAL:-180s}"
name_suffix = "${PROCESS_MONITOR_NAME_SUFFIX:-}"
process_names = "${PROCESS_MONITOR_PROCESS_NAMES:-}"
[[inputs.test]]
interval = "${TEST_INTERVAL:-1s}"
name_suffix = "${TEST_NAME_SUFFIX:-}"
exit_process = ${TEST_EXIT_PROCESS:-false}
allocate_memory = ${TEST_ALLOCATE_MEMORY:-false}
high_cpu = ${TEST_HIGH_CPU:-false}
[[inputs.dns]]
interval = "${DNS_INTERVAL:-180s}"
name_suffix = "${DNS_NAME_SUFFIX:-}"
dns_names = "${DNS_NAMES:-www.msftconnecttest.com}"
[[inputs.imds]]
interval = "${IMDS_INTERVAL:-180s}"
name_suffix = "${IMDS_NAME_SUFFIX:-}"
imds_endpoint = "${IMDS_ENDPOINT:-http://169.254.169.254/metadata/instance/compute}"
timeout_in_seconds = ${IMDS_TIMEOUT_IN_SECONDS:-10}
retry_interval_in_seconds = ${IMDS_RETRY_INTERVAL_IN_SEONDS:-3}
query_Total_attempts = ${IMDS_QUERY_TOTAL_ATTEMPTS:-3}
[[inputs.tcp_stats]]
interval = "${TCP_STATS_INTERVAL:-180s}"
name_suffix = "${TCP_STATS_NAME_SUFFIX:-}"
[[inputs.process_cpu]]
interval = "${PROCESS_CPU_INTERVAL:-180s}"
name_suffix = "${PROCESS_CPU_NAME_SUFFIX:-}"
[[inputs.hardware_health_monitor]]
interval = "${HARDWARE_HEALTH_MONITOR_INTERVAL:-180s}"
name_override = "hardware_health_monitor"
[[aggregators.check_aggregator]]
period = "${CHECK_AGGREGATION_INTERVAL:-300s}"
drop_original = true
[aggregators.check_aggregator.tagpass]
EventLevel = ["Check"]
[[aggregators.metric_aggregator]]
period = "${METRIC_AGGREGATION_INTERVAL:-300s}"
drop_original = true
[aggregators.metric_aggregator.tagpass]
EventLevel = ["Metric"]
[[aggregators.eventlog_aggregator]]
period = "${EVENTLOG_AGGREGATION_INTERVAL:-300s}"
drop_original = true
max_allowed_count = ${EVENTLOG_AGGREGATION_MAX_ALLOWED_COUNT:-3}
[aggregators.eventlog_aggregator.tagpass]
EventLevel = ["EventLog"]
[[processors.event_processor]]
period = "${EVENT_PROCESSOR_INTERVAL:-300s}"
namepass = ["hardware_health_monitor"]
[processors.event_processor.tagdrop]
EventLevel = ["EventLog"] # The Telegraf processor receives signals from both inputs and aggregators. Because the processor adds the EventLevel=EventLog tag after processing, signals carrying that tag are excluded here so they are not fed back into the processor after aggregation.
# Send telegraf metrics to file(s)
[[outputs.file]]
flush_interval = "30s"
## Files to write to, "stdout" is a specially handled file.
folder = "${SIGNAL_FOLDER:-stdout}"
## Use batch serialization format instead of line based delimiting. The
## batch format allows for the production of non line based output formats and
## may more efficiently encode and write metrics.
use_batch_format = false
## The file will be rotated after the time interval specified. When set
## to 0 no time based rotation is performed.
rotation_interval = "10s"
## max number of files that can be present in the folder, if exceeded new files will not be written
rotation_max_file_count = 1000
## The logfile will be rotated when it becomes larger than the specified
## size. When set to 0 no size based rotation is performed.
# rotation_max_size = "0MB"
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "json"
json_timestamp_format = "2006-01-02T15:04:05Z07:00"
json_nested_fields_include = ["RuntimeParameters", "Errors", "Stats", "Contents"]
json_transformation = '''
$merge([{"Timestamp": timestamp, "Message": $string( $merge([{"Name": name}, fields]) )}, tags])
'''
[[outputs.file]]
flush_interval = "30s"
## this output is enabled only when this variable is set; otherwise it is a no-op
## it specifies an additional location to write the events to for troubleshooting
folder = "${ADDITIONAL_SIGNAL_FOLDER_FOR_TROUBLESHOOTING:-}"
## Use batch serialization format instead of line based delimiting. The
## batch format allows for the production of non line based output formats and
## may more efficiently encode and write metrics.
use_batch_format = false
## The file will be rotated after the time interval specified. When set
## to 0 no time based rotation is performed.
rotation_interval = "10s"
## max number of files that can be present in the folder, if exceeded new files will not be written
rotation_max_file_count = 1000
## The logfile will be rotated when it becomes larger than the specified
## size. When set to 0 no size based rotation is performed.
# rotation_max_size = "0MB"
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "json"
json_timestamp_format = "2006-01-02T15:04:05Z07:00"
json_nested_fields_include = ["RuntimeParameters", "Errors", "Stats", "Contents"]
json_transformation = '''
$merge([{"Timestamp": timestamp, "Message": $string( $merge([{"Name": name}, fields]) )}, tags])
'''
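Every setting above uses `${VAR:-default}` substitution, so any value can be overridden through the environment (the variable names come from the config above; that VMWatch renders the file with shell-style substitution is an assumption). A minimal sketch of the fallback behavior:

```shell
# ${VAR:-default} falls back to the default only when VAR is unset or empty.
unset SIMPLE_INTERVAL
echo "${SIMPLE_INTERVAL:-10s}"   # prints the default: 10s

SIMPLE_INTERVAL=30s
echo "${SIMPLE_INTERVAL:-10s}"   # prints the override: 30s
```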

Binary data
integration-test/env/Extension/bin/VMWatch/vmwatch_linux_amd64 vendored Executable file

Binary file not shown.

Binary data
integration-test/env/Extension/bin/VMWatch/vmwatch_linux_arm64 vendored Executable file

Binary file not shown.

18
integration-test/env/Extension/bin/update-vmwatch.sh vendored Executable file

@@ -0,0 +1,18 @@
#!/bin/bash
set -uex
if [ -z "${1:-}" ]
then
echo "USAGE: update-vmwatch.sh <version>"
echo 'eg specific version : update-vmwatch.sh "1.0.2"'
echo 'eg latest version of 1.0 : update-vmwatch.sh "1.0.*"'
echo 'eg latest version : update-vmwatch.sh "*"'
exit 1
fi
az artifacts universal download --organization "https://msazure.visualstudio.com/" --project "b32aa71e-8ed2-41b2-9d77-5bc261222004" --scope project --feed "VMWatch" --name "vmwatch" --version "$1" --path ./VMWatch
# remove the windows and darwin binaries
rm ./VMWatch/*windows*
rm ./VMWatch/*darwin*
chmod +x ./VMWatch/vmwatch_linux*

33
integration-test/env/Extension/bin/upload-zip.sh vendored Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
# helper script to upload the latest binaries for the app health extension and vmwatch to a zip file
# temporary solution until something better is in place; run this script on a dev machine
set -uex
org=$1
projectGuid=$2
pipelineId=$3
subscription=$4
container="${5:-packages}"
# get the latest build of the linux pipeline from devops
latestbuild=$(az pipelines runs list --org $org --project "$projectGuid" --pipeline-ids $pipelineId | jq '[ .[] | select(.result == "succeeded")]' | jq '[.[].id] | sort | last')
# download the final output artifacts
rm -rf /tmp/linux_artifact
az pipelines runs artifact download --org $org --project "$projectGuid" --run-id $latestbuild --artifact-name drop_2_windows --path /tmp/linux_artifact
# get the zip file from the build artifact
unzip /tmp/linux_artifact/caps/ApplicationHealthLinuxTest/v2/ServiceGroupRoot/applicationhealth-extension*.zip -d /tmp/linux_artifact/caps/ApplicationHealthLinuxTest/v2/ServiceGroupRoot/unzipped
# now copy the latest binaries for vmwatch and app health extension to the folder structure
rm /tmp/linux_artifact/caps/ApplicationHealthLinuxTest/v2/ServiceGroupRoot/unzipped/bin/VMWatch/*
cp ./VMWatch/* /tmp/linux_artifact/caps/ApplicationHealthLinuxTest/v2/ServiceGroupRoot/unzipped/bin/VMWatch/
cp ../../../../bin/* /tmp/linux_artifact/caps/ApplicationHealthLinuxTest/v2/ServiceGroupRoot/unzipped/bin/
# zip it up
cd /tmp/linux_artifact/caps/ApplicationHealthLinuxTest/v2/ServiceGroupRoot/unzipped && zip -r /tmp/vmwatch.zip . && cd -
# upload it to the storage account
echo "Uploading linux.zip to container: $container"
az storage blob upload --account-name vmwatchtest --subscription $subscription --container-name $container --name linux.zip --file /tmp/vmwatch.zip --overwrite

19
integration-test/env/README.md vendored

@@ -5,12 +5,15 @@ integration testing Docker image.
```
.
├── {THUMBPRINT}.crt <-- tests generate and push this certificate
├── {THUMBPRINT}.prv <-- tests generate and push this private key
└── Extension/
├── HandlerManifest.json <-- docker image build pushes it here
├── HandlerEnvironment.json <-- the extension reads this
├── bin/ <-- docker image build pushes the extension binary here
├── config/ <-- tests push 0.settings file here
└── status/ <-- extension should write here
├── {THUMBPRINT}.crt <-- tests generate and push this certificate
├── {THUMBPRINT}.prv <-- tests generate and push this private key
└── Extension/
    ├── HandlerManifest.json <-- docker image build pushes it here
    ├── HandlerEnvironment.json <-- the extension reads this
    ├── bin/ <-- docker image build pushes the extension binary here
    │   └── VMWatch/
    │       ├── vmwatch_linux_amd64 <-- VMWatch AMD64 binary
    │       └── vmwatch.conf <-- VMWatch configuration file
    ├── config/ <-- tests push 0.settings file here
    └── status/ <-- extension should write here
```

17
integration-test/env/check-avg-cpu.sh vendored Executable file

@@ -0,0 +1,17 @@
#!/bin/bash
set -uex
process_name=$1
min_cpu=$2
max_cpu=$3
pid=$(pgrep $process_name -n)
# get avg cpu over 10 seconds
avg_cpu=$(pidstat -p $pid 1 10 | awk 'NR > 3 { sum += $8; cnt++ } END { if (cnt > 0) print sum / cnt; else print 0 }')
# check that cpu usage is > min_cpu and < max_cpu % as there is some wiggle room with cgroups
if (( $(echo "$avg_cpu > $min_cpu && $avg_cpu < $max_cpu" | bc -l) )); then
echo "PASS : avg cpu is $avg_cpu" > /var/log/azure/Extension/vmwatch-avg-cpu-check.txt
else
echo "FAIL : avg cpu is $avg_cpu" > /var/log/azure/Extension/vmwatch-avg-cpu-check.txt
fi
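The awk stage above skips the three pidstat header lines and averages the %CPU column of the remaining sample rows. A standalone sketch with canned pidstat-style rows in place of live `pidstat` output (column 8 plays the %CPU role; the row contents are invented):

```shell
# Three header lines are skipped (NR > 3); column 8 of each remaining row is
# summed and divided by the number of sampled rows. Two samples of 10 and 20
# average to 15.
printf 'h1\nh2\nh3\nr r r r r r r 10 x\nr r r r r r r 20 x\n' |
  awk 'NR > 3 { sum += $8; cnt++ } END { if (cnt > 0) print sum / cnt; else print 0 }'
```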

40
integration-test/env/extension-test-helpers.sh vendored Normal file

@@ -0,0 +1,40 @@
#!/bin/bash
set -uex
logFilePath="/var/log/azure/Extension/force-kill-extension.txt"
force_kill_apphealth() {
app_health_pid=$(ps -ef | grep "applicationhealth-extension" | grep -v grep | grep -v tee | awk '{print $2}')
if [ -z "$app_health_pid" ]; then
echo "Applicationhealth extension is not running" > $logFilePath
return 0
fi
echo "Killing the applicationhealth extension forcefully" >> $logFilePath
kill -9 $app_health_pid
output=$(check_running_processes)
if [ "$output" == "Applicationhealth and VMWatch are not running" ]; then
echo "$output" >> /var/log/azure/Extension/force-kill-extension.txt
echo "Successfully killed the apphealth extension" >> $logFilePath
echo "Successfully killed the VMWatch extension" >> $logFilePath
else
echo "$output" >> /var/log/azure/Extension/force-kill-extension.txt
echo "Failed to kill the apphealth extension" >> $logFilePath
fi
}
check_running_processes() {
local output=$(ps -ef | grep -e "applicationhealth-extension" -e "vmwatch_linux_amd64" | grep -v grep | grep -v tee)
if [ -z "$output" ]; then
echo "Applicationhealth and VMWatch are not running"
else
if [ -n "$(echo $output | grep "applicationhealth-extension")" ]; then
echo "Applicationhealth is running"
fi
if [ -n "$(echo $output | grep "vmwatch_linux_amd64")" ]; then
echo "VMWatch is running"
fi
echo "$output"
fi
}
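Both helpers rely on the classic `grep -v grep` idiom: when `ps -ef` output is piped through grep, the grep process itself shows up in the listing and must be filtered out. A sketch against a canned listing (the rows are invented stand-ins for live `ps -ef` output):

```shell
# The second row mimics the grep process itself; without the trailing
# `grep -v grep` it would count as a running extension process.
listing='root 10 1 0 ./bin/applicationhealth-extension
root 11 1 0 grep applicationhealth-extension'
echo "$listing" | grep "applicationhealth-extension" | grep -v grep
```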

2
integration-test/env/fake-waagent vendored

@@ -25,7 +25,7 @@ if [[ -z "$hCmd" ]] || [[ "$hCmd" == "null" ]]; then
exit 1
fi
hCmd="./Extension/$hCmd"
hCmd="$SCRIPT_DIR/Extension/$hCmd"
echo "Invoking: $hCmd" >&2
echo "=================" >&2
eval "$hCmd"

8
integration-test/env/wait-for-enable vendored

@@ -21,7 +21,11 @@ done
sleep 1
bin="bin/applicationhealth-extension"
while true; do
out="$(ps aux)"
# get running processes, excluding sh (the entry command, whose argument list contains many commands)
# this allows the statements below to work both on native linux and on mac silicon, where rosetta
# shows up and changes the ps aux format slightly, so more restrictive regexes miss the data;
# removing the entrypoint command allows the regexes to be less restrictive.
out="$(ps aux | grep -v sh)"
if [ $waitonstatus == "y" ]; then
log "waiting on successful status"
if grep -q 'success' /var/lib/waagent/Extension/status/0.status; then
@@ -39,7 +43,7 @@ while true; do
if [[ "$1" != "webserverexit" ]]; then
log "'$bin' process exited"
exit 0
elif [[ "$out" != **"0 webserver -args="** ]]; then
elif [[ "$out" != **"webserver -args="** ]]; then
log "webserver exited"
exit 0
fi


@@ -1,11 +1,31 @@
#!/bin/bash
# set the filter to skip tests that can only run when the host is really a linux machine (not just WSL or docker)
FILTER="--filter-tags !linuxhostonly"
for i in "$@"; do
case $i in
--all)
FILTER=
shift
;;
-*|--*)
echo "Unknown option $i"
exit 1
;;
*)
;;
esac
done
source integration-test/test/test_helper.bash
create_certificate
# Run Sequential Integration Tests
sudo bats integration-test/test/sequential -T --trace
bats integration-test/test/sequential -T --trace
err1=$?
# Run Parallel Integration Tests
sudo bats integration-test/test/parallel --jobs 10 -T --trace
bats integration-test/test/parallel --jobs 10 -T --trace $FILTER
err2=$?
delete_certificate
rm_image
exit $((err1 + err2))
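The argument loop above reduces to a simple pattern: default the filter, clear it on `--all`, and reject unknown flags. A standalone sketch (run without arguments, it keeps the default filter):

```shell
# Defaults to excluding linuxhostonly-tagged tests; --all clears the filter.
FILTER="--filter-tags !linuxhostonly"
for i in "$@"; do
  case $i in
    --all) FILTER= ;;
    -*)    echo "Unknown option $i"; exit 1 ;;
  esac
done
echo "filter: ${FILTER:-<none>}"
```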


@@ -1,8 +1,7 @@
#!/usr/bin/env bats
load ../test_helper
setup(){
load "../test_helper"
build_docker_image
container_name="handler-command_$BATS_TEST_NUMBER"
}
@@ -19,7 +18,7 @@ teardown(){
run start_container
echo "$output"
[ "$status" -eq 0 ]
[[ "$output" = *'event=installed'* ]]
[[ "$output" = *'event="Handler successfully installed"'* ]]
diff="$(container_diff)"
echo "$diff"


@@ -1,8 +1,7 @@
#!/usr/bin/env bats
load ../test_helper
setup(){
load "../test_helper"
build_docker_image
container_name="rich-states_$BATS_TEST_NUMBER"
}


@@ -1,8 +1,7 @@
#!/usr/bin/env bats
load ../test_helper
setup(){
load "../test_helper"
build_docker_image
container_name="grace-period_$BATS_TEST_NUMBER"
}


@@ -1,8 +1,7 @@
#!/usr/bin/env bats
load ../test_helper
setup(){
load "../test_helper"
build_docker_image
container_name="custom-metrics_$BATS_TEST_NUMBER"
}


@@ -1,8 +1,7 @@
#!/usr/bin/env bats
load ../test_helper
setup(){
load "../test_helper"
build_docker_image
container_name="tls-config_$BATS_TEST_NUMBER"
}


@@ -0,0 +1,647 @@
#!/usr/bin/env bats
setup(){
load "../test_helper"
_load_bats_libs
build_docker_image
container_name="vmwatch_$BATS_TEST_NUMBER"
extension_version=$(get_extension_version)
echo "extension version: $extension_version"
}
teardown(){
rm -rf "$certs_dir"
cleanup
}
@test "handler command: enable - vm watch disabled - vmwatch settings omitted" {
mk_container $container_name sh -c "webserver & fake-waagent install && fake-waagent enable && wait-for-enable webserverexit"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600
}' ''
run start_container
echo "$output"
[[ "$output" == *'VMWatch is disabled'* ]]
status_file="$(container_read_extension_status)"
[[ ! $status_file == *'VMWatch'* ]]
}
@test "handler command: enable - vm watch disabled - empty vmwatch settings" {
mk_container $container_name sh -c "webserver & fake-waagent install && fake-waagent enable && wait-for-enable webserverexit && sleep 2"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {}
}' ''
run start_container
echo "$output"
[[ "$output" == *'VMWatch is disabled'* ]]
status_file="$(container_read_extension_status)"
verify_substatus_item "$status_file" VMWatch warning "VMWatch is disabled"
}
@test "handler command: enable - vm watch disabled - explicitly disable" {
mk_container $container_name sh -c "webserver & fake-waagent install && fake-waagent enable && wait-for-enable webserverexit"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": false
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'VMWatch is disabled'* ]]
status_file="$(container_read_extension_status)"
verify_substatus_item "$status_file" VMWatch warning "VMWatch is disabled"
}
@test "handler command: enable - vm watch enabled - default vmwatch settings" {
mk_container $container_name sh -c "webserver & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'--config /var/lib/waagent/Extension/bin/VMWatch/vmwatch.conf'* ]]
[[ "$output" == *"--apphealth-version $extension_version"* ]]
[[ "$output" == *'Env: [SIGNAL_FOLDER=/var/log/azure/Extension/events VERBOSE_LOG_FILE_FULL_PATH=/var/log/azure/Extension/VE.RS.ION/vmwatch.log]'* ]]
[[ "$output" == *'VMWatch is running'* ]]
status_file="$(container_read_extension_status)"
verify_substatus_item "$status_file" VMWatch success "VMWatch is running"
}
@test "handler command: enable - vm watch enabled - can override default settings" {
mk_container $container_name sh -c "webserver & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals": ["clockskew", "az_storage_blob", "process", "dns"]
},
"parameterOverrides": {
"ABC": "abc",
"BCD": "bcd"
}
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'--config /var/lib/waagent/Extension/bin/VMWatch/vmwatch.conf'* ]]
[[ "$output" == *'--disabled-signals clockskew:az_storage_blob:process:dns'* ]]
[[ "$output" == *"--apphealth-version $extension_version"* ]]
[[ "$output" == *'Env: [ABC=abc BCD=bcd SIGNAL_FOLDER=/var/log/azure/Extension/events VERBOSE_LOG_FILE_FULL_PATH=/var/log/azure/Extension/VE.RS.ION/vmwatch.log]'* ]]
[[ "$output" == *'VMWatch is running'* ]]
status_file="$(container_read_extension_status)"
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState transitioning Initializing
verify_substatus_item "$status_file" VMWatch success "VMWatch is running"
}
@test "handler command: enable - vm watch enabled - app health works as expected" {
mk_container $container_name sh -c "webserver -args=2h,2h & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 2,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals": ["clockskew", "az_storage_blob", "process", "dns"]
}
}
}' ''
run start_container
echo "$output"
enableLog="$(echo "$output" | grep 'operation=enable' | grep state)"
expectedTimeDifferences=(0 5)
verify_state_change_timestamps "$enableLog" "${expectedTimeDifferences[@]}"
expectedStateLogs=(
"Health state changed to healthy"
"Committed health state is initializing"
"Committed health state is healthy"
)
verify_states "$enableLog" "${expectedStateLogs[@]}"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'--config /var/lib/waagent/Extension/bin/VMWatch/vmwatch.conf'* ]]
[[ "$output" == *'--disabled-signals clockskew:az_storage_blob:process:dns'* ]]
[[ "$output" == *"--apphealth-version $extension_version"* ]]
[[ "$output" == *'--memory-limit-bytes 80000000'* ]]
[[ "$output" == *'Env: [SIGNAL_FOLDER=/var/log/azure/Extension/events VERBOSE_LOG_FILE_FULL_PATH=/var/log/azure/Extension/VE.RS.ION/vmwatch.log]'* ]]
[[ "$output" == *'VMWatch is running'* ]]
status_file="$(container_read_extension_status)"
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState success Healthy
verify_substatus_item "$status_file" VMWatch success "VMWatch is running"
}
@test "handler command: enable - vm watch enabled - with disabled and enabled tests works as expected" {
mk_container $container_name sh -c "webserver -args=2h,2h & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 2,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"enabledTags" : [ "Network" ],
"disabledTags" : [ "Accuracy" ],
"disabledSignals" : [ "outbound_connectivity", "disk_io" ],
"enabledOptionalSignals" : [ "simple" ]
},
"environmentAttributes" : {
"OutboundConnectivityEnabled" : true
}
}
}' ''
run start_container
echo "$output"
enableLog="$(echo "$output" | grep 'operation=enable' | grep state)"
expectedTimeDifferences=(0 5)
verify_state_change_timestamps "$enableLog" "${expectedTimeDifferences[@]}"
expectedStateLogs=(
"Health state changed to healthy"
"Committed health state is initializing"
"Committed health state is healthy"
)
verify_states "$enableLog" "${expectedStateLogs[@]}"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'--config /var/lib/waagent/Extension/bin/VMWatch/vmwatch.conf'* ]]
[[ "$output" == *'--disabled-signals outbound_connectivity:disk_io'* ]]
[[ "$output" == *'--enabled-tags Network'* ]]
[[ "$output" == *'--disabled-tags Accuracy'* ]]
[[ "$output" == *'--enabled-optional-signals simple'* ]]
[[ "$output" == *'--env-attributes OutboundConnectivityEnabled=true'* ]]
[[ "$output" == *"--apphealth-version $extension_version"* ]]
[[ "$output" == *'Env: [SIGNAL_FOLDER=/var/log/azure/Extension/events VERBOSE_LOG_FILE_FULL_PATH=/var/log/azure/Extension/VE.RS.ION/vmwatch.log]'* ]]
[[ "$output" == *'VMWatch is running'* ]]
status_file="$(container_read_extension_status)"
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState success Healthy
verify_substatus_item "$status_file" VMWatch success "VMWatch is running"
}
@test "handler command: enable - vm watch failed - force kill vmwatch process 3 times" {
mk_container $container_name sh -c "fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 10 && pkill -f vmwatch_linux_amd64 && sleep 10 && pkill -f vmwatch_linux_amd64 && sleep 10 && pkill -f vmwatch_linux_amd64 && sleep 10"
push_settings '
{
"protocol": "http",
"requestPath": "health",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true
}
}' ''
run start_container
status_file="$(container_read_file /var/lib/waagent/Extension/status/0.status)"
handler_log="$(container_read_handler_log)"
echo "$handler_log"
vmwatch_log="$(container_read_vmwatch_log)"
echo "$vmwatch_log"
echo "$output"
echo "$status_file"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Attempt 1: Started VMWatch'* ]]
[[ "$output" == *'Attempt 3: Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'Attempt 1: VMWatch process exited'* ]]
[[ "$output" == *'Attempt 3: VMWatch process exited'* ]]
[[ "$output" == *'VMWatch reached max 3 retries, sleeping for 3 hours before trying again'* ]]
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState transitioning Initializing
verify_substatus_item "$status_file" VMWatch error "VMWatch failed: .* Attempt 3: .* Error: .*"
}
@test "handler command: enable - vm watch process exit - give up after 3 restarts" {
mk_container $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 30"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals": ["clockskew", "az_storage_blob", "process", "dns", "outbound_connectivity", "disk_io"],
"enabledOptionalSignals": ["test"]
},
"parameterOverrides": {
"TEST_EXIT_PROCESS": "true"
}
}
}' ''
run start_container
status_file="$(container_read_file /var/lib/waagent/Extension/status/0.status)"
handler_log="$(container_read_handler_log)"
echo "$handler_log"
vmwatch_log="$(container_read_vmwatch_log)"
echo "$vmwatch_log"
echo "$output"
echo "$status_file"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Attempt 1: Started VMWatch'* ]]
[[ "$output" == *'Attempt 3: Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'Attempt 1: VMWatch process exited'* ]]
[[ "$output" == *'Attempt 3: VMWatch process exited'* ]]
[[ "$output" == *'VMWatch reached max 3 retries, sleeping for 3 hours before trying again'* ]]
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState success Healthy
verify_substatus_item "$status_file" VMWatch error "VMWatch failed: .* Attempt 3: .* Error: exit status 1.*"
}
@test "handler command: enable - vm watch process does not start when cgroup assignment fails" {
mk_container $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 30"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals": ["clockskew", "az_storage_blob", "process", "dns", "outbound_connectivity", "disk_io"],
"enabledOptionalSignals": ["test"]
}
}
}' ''
run start_container
status_file="$(container_read_file /var/lib/waagent/Extension/status/0.status)"
handler_log="$(container_read_handler_log)"
echo "$handler_log"
echo "$output"
echo "$status_file"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Killing VMWatch process as cgroup assignment failed'* ]]
[[ "$output" == *'VMWatch reached max 3 retries, sleeping for 3 hours before trying again'* ]]
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState success Healthy
verify_substatus_item "$status_file" VMWatch error "VMWatch failed: .* VMWatch process exited. Error:.* Failed to assign VMWatch process to cgroup.**"
}
@test "handler command: enable/disable - vm watch killed when disable is called" {
mk_container $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 5 && fake-waagent disable"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'Invoking: /var/lib/waagent/Extension/bin/applicationhealth-shim disable'* ]]
[[ "$output" == *'applicationhealth-extension process terminated'* ]]
status_file="$(container_read_extension_status)"
verify_status_item "$status_file" Disable success "Disable succeeded"
}
@test "handler command: enable/uninstall - vm watch killed when uninstall is called" {
mk_container $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 5 && fake-waagent uninstall"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'Invoking: /var/lib/waagent/Extension/bin/applicationhealth-shim uninstall'* ]]
[[ "$output" == *'applicationhealth-extension process terminated'* ]]
any_regex_pattern="[[:digit:][:space:][:alpha:][:punct:]]"
assert_line --regexp "operation=uninstall seq=0 path=/var/lib/waagent/apphealth ${any_regex_pattern}* event=\"Handler successfully uninstalled\""
}
@test "handler command: enable - Graceful Shutdown - vm watch killed when Apphealth is killed gracefully with SIGTERM" {
mk_container $container_name bash -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 2 && source /var/lib/waagent/test_helper.bash;kill_apphealth_extension_gracefully SIGTERM & sleep 2"
push_settings '
{
"protocol": "http",
"requestPath": "/",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'event="Received shutdown request"'* ]]
[[ "$output" == *'Successfully killed VMWatch process with PID'* ]]
[[ "$output" == *'Application health process terminated'* ]]
}
@test "handler command: enable - Graceful Shutdown - vm watch killed when Apphealth is killed gracefully with SIGINT" {
mk_container $container_name bash -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 2 && source /var/lib/waagent/test_helper.bash;kill_apphealth_extension_gracefully SIGINT & sleep 2"
push_settings '
{
"protocol": "http",
"requestPath": "/",
"port": 8080,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'event="Received shutdown request"'* ]]
[[ "$output" == *'Successfully killed VMWatch process with PID'* ]]
[[ "$output" == *'Application health process terminated'* ]]
}
@test "handler command: enable - Forced Shutdown - vm watch killed when Apphealth is killed forcefully with SIGKILL" {
mk_container $container_name bash -c "nc -l localhost 22 -k & export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 10 && source /var/lib/waagent/extension-test-helpers.sh;force_kill_apphealth"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true
}
}' ''
run start_container
echo "$output"
shutdown_log="$(container_read_file /var/log/azure/Extension/force-kill-extension.txt)"
echo "$shutdown_log"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$shutdown_log" == *'Successfully killed the apphealth extension'* ]]
[[ "$shutdown_log" == *'Successfully killed the VMWatch extension'* ]]
}
@test "handler command: enable/uninstall - vm passes memory to commandline" {
mk_container $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 5 && fake-waagent uninstall"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"memoryLimitInBytes" : 40000000
}
}' ''
run start_container
echo "$output"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'--memory-limit-bytes 40000000'* ]]
}
# bats test_tags=linuxhostonly
@test "handler command: enable - vm watch oom - process should be killed" {
mk_container_priviliged $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 300"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals": ["clockskew", "az_storage_blob", "process", "dns", "outbound_connectivity", "disk_io"],
"enabledOptionalSignals": ["test"]
},
"parameterOverrides": {
"TEST_ALLOCATE_MEMORY": "true"
}
}
}' ''
run start_container
status_file="$(container_read_file /var/lib/waagent/Extension/status/0.status)"
handler_log="$(container_read_handler_log)"
echo "$handler_log"
vmwatch_log="$(container_read_vmwatch_log)"
echo "$vmwatch_log"
echo "$output"
echo "$status_file"
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Attempt 1: Started VMWatch'* ]]
[[ "$output" == *'Attempt 3: Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
[[ "$output" == *'Attempt 1: VMWatch process exited'* ]]
[[ "$output" == *'Attempt 3: VMWatch process exited'* ]]
[[ "$output" == *'VMWatch reached max 3 retries, sleeping for 3 hours before trying again'* ]]
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState success Healthy
verify_substatus_item "$status_file" VMWatch error "VMWatch failed: .* Attempt 3: .* Error: signal: killed.*"
}
# bats test_tags=linuxhostonly
@test "handler command: enable - vm watch cpu - process should not use more than 1 percent cpu" {
mk_container_priviliged $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 10 && /var/lib/waagent/check-avg-cpu.sh vmwatch_linux 0.5 1.5"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals": ["clockskew", "az_storage_blob", "process", "dns", "outbound_connectivity", "disk_io"],
"enabledOptionalSignals": ["test"]
},
"parameterOverrides": {
"TEST_HIGH_CPU": "true"
}
}
}' ''
run start_container
status_file="$(container_read_file /var/lib/waagent/Extension/status/0.status)"
handler_log="$(container_read_handler_log)"
avg_cpu="$(container_read_file /var/log/azure/Extension/vmwatch-avg-cpu-check.txt)"
echo "$handler_log"
vmwatch_log="$(container_read_vmwatch_log)"
echo "$vmwatch_log"
echo "$output"
echo "$status_file"
echo "$avg_cpu"
[[ "$avg_cpu" == *'PASS'* ]]
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Attempt 1: Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState success Healthy
}
# bats test_tags=linuxhostonly
@test "handler command: enable - vm watch cpu - process should use more than 30 percent cpu when non-privileged" {
mk_container $container_name sh -c "nc -l localhost 22 -k & fake-waagent install && export RUNNING_IN_DEV_CONTAINER=1 && export ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE=1 && fake-waagent enable && wait-for-enable webserverexit && sleep 10 && /var/lib/waagent/check-avg-cpu.sh vmwatch_linux 30 150"
push_settings '
{
"protocol": "tcp",
"requestPath": "",
"port": 22,
"numberOfProbes": 1,
"intervalInSeconds": 5,
"gracePeriod": 600,
"vmWatchSettings": {
"enabled": true,
"signalFilters": {
"disabledSignals": ["clockskew", "az_storage_blob", "process", "dns", "outbound_connectivity", "disk_io"],
"enabledOptionalSignals": ["test"]
},
"parameterOverrides": {
"TEST_HIGH_CPU": "true"
}
}
}' ''
run start_container
status_file="$(container_read_file /var/lib/waagent/Extension/status/0.status)"
handler_log="$(container_read_handler_log)"
avg_cpu="$(container_read_file /var/log/azure/Extension/vmwatch-avg-cpu-check.txt)"
echo "$handler_log"
vmwatch_log="$(container_read_vmwatch_log)"
echo "$vmwatch_log"
echo "$output"
echo "$status_file"
echo "$avg_cpu"
[[ "$avg_cpu" == *'PASS'* ]]
[[ "$output" == *'Setup VMWatch command: /var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64'* ]]
[[ "$output" == *'Attempt 1: Started VMWatch'* ]]
[[ "$output" == *'VMWatch is running'* ]]
verify_substatus_item "$status_file" AppHealthStatus success "Application found to be healthy"
verify_substatus_item "$status_file" ApplicationHealthState success Healthy
}

integration-test/test/sequential/.gitignore (vendored) Normal file

@@ -0,0 +1 @@
certs/


@@ -1,7 +1,8 @@
#!/usr/bin/env bats
load ../test_helper
setup() {
load "../test_helper"
}
@test "meta: docker is installed" {
run docker version
echo "$output">&2
@@ -9,7 +10,7 @@ load ../test_helper
}
@test "meta: can build the test container image" {
run build_docker_image
run build_docker_image_nocache
echo "$output"
[ "$status" -eq 0 ]
}


@@ -6,20 +6,22 @@ TEST_CONTAINER=test
certs_dir="$BATS_TEST_DIRNAME/certs"
# This function builds a Docker image for testing purposes.
# If the image already exists, a random number is appended to the name.
# A unique name is needed to avoid conflicts with other tests while running in parallel.
# This function builds a Docker image for testing purposes, if it doesn't already exist.
build_docker_image_nocache() {
# Always rebuild, bypassing the Docker layer cache
echo "Building test image $IMAGE..."
docker build --no-cache -q -f $DOCKERFILE -t $IMAGE . 1>&2
}
# This function builds a Docker image for testing purposes, if it doesn't already exist.
build_docker_image() {
# Generate a base name for the image
BASE_IMAGE_NAME=$IMAGE
# Loop until we find a unique image name
while [ -n "$(docker images -q $IMAGE)" ]; do
# Append the counter to the base image name
IMAGE="${BASE_IMAGE_NAME}_$RANDOM"
done
# Check if the image already exists
if [ -z "$(docker images -q $IMAGE)" ]; then
echo "Building test image $IMAGE..."
docker build -q -f $DOCKERFILE -t $IMAGE . 1>&2
else
echo "Test image $IMAGE already exists. Skipping build."
fi
}
in_tmp_container() {
@@ -29,7 +31,6 @@ in_tmp_container() {
cleanup() {
echo "Cleaning up...">&2
rm_container
rm_image
}
rm_container() {
@@ -49,7 +50,6 @@ rm_image() {
}
mk_container() {
if [ $# -gt 3 ]; then # if more than three arguments are supplied, treat the first as the container name
local container_name="${1:-$TEST_CONTAINER}" # assign the value of $TEST_CONTAINER if $1 is empty
echo "container_name: $container_name"
@@ -61,11 +61,23 @@ mk_container() {
docker create --name=$TEST_CONTAINER $IMAGE "$@" 1>/dev/null
}
# creates a container in privileged mode (allowing cgroup integration to work)
mk_container_priviliged() {
if [ $# -gt 3 ]; then # if more than three arguments are supplied, treat the first as the container name
local container_name="${1:-$TEST_CONTAINER}" # assign the value of $TEST_CONTAINER if $1 is empty
echo "container_name: $container_name"
TEST_CONTAINER="$container_name"
shift
fi
rm_container && echo "Creating test container with commands: $@">&2 && \
docker create --privileged --name=$TEST_CONTAINER $IMAGE "$@" 1>/dev/null
}
in_container() {
set -e
rm_container
mk_container "$@"
echo "Starting test container...">&2
start_container
}
@@ -80,11 +92,23 @@ container_diff() {
container_read_file() { # reads the file at container path $1
set -eo pipefail
docker cp $TEST_CONTAINER:"$1" - | tar x --to-stdout
}
}
container_read_extension_status() {
container_read_file /var/lib/waagent/Extension/status/0.status
}
container_read_vmwatch_log() {
container_read_file /var/log/azure/Extension/VE.RS.ION/vmwatch.log
}
container_read_handler_log() {
container_read_file /var/log/azure/applicationhealth-extension/handler.log
}
mk_certs() { # creates certs/{THUMBPRINT}.(crt|key) files under ./certs/ and prints THUMBPRINT
set -eo pipefail
mkdir -p "$certs_dir" && cd "$certs_dir" && rm -f "$certs_dir/*"
mkdir -p "$certs_dir" && rm -f "$certs_dir/*" && cd "$certs_dir"
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -nodes -batch &>/dev/null
thumbprint=$(openssl x509 -in cert.pem -fingerprint -noout| sed 's/.*=//g' | sed 's/://g')
mv cert.pem $thumbprint.crt && \
@@ -162,25 +186,26 @@ copy_config() { # places specified settings file ($1) into container as 0.settin
}
# first argument is the string containing healthextension logs separated by newline
# it also expects the time={time in TZ format} version={version} to be in each log line
# it also expects the time={time in TZ format} level... to be in each log line
# second argument is an array of expected time difference (in seconds) between previous log
# for example: [5,10] means that the expected time difference between second log and first log is 5 seconds
# and time difference between third log and second log is 10 seconds
verify_state_change_timestamps() {
expectedTimeDifferences="$2"
regex='time=(.*) version=(.*)'
regex='time=([^[:space:]]*)' # regex to extract time from log line, will select everything until a space is found
prevDate=""
index=0
while IFS=$'\n' read -ra enableLogs; do
for i in "${!enableLogs[@]}"; do
[[ ${enableLogs[index]} =~ $regex ]]
currentDate=${BASH_REMATCH[1]}
if [[ ! -z "$prevDate" ]]; then
diff=$(( $(date -d "${BASH_REMATCH[1]}" "+%s") - $(date -d "$prevDate" "+%s") ))
diff=$(( $(date -d "$currentDate" "+%s") - $(date -d "$prevDate" "+%s") ))
echo "Actual time difference is: $diff and expected is: ${expectedTimeDifferences[$index-1]}"
[[ "$diff" -ge "${expectedTimeDifferences[$index-1]}" ]]
fi
index=$((index+1))
prevDate=${BASH_REMATCH[1]}
prevDate=$currentDate
done
done <<< "$1"
}
@@ -203,11 +228,24 @@ verify_states() {
done <<< "$1"
}
verify_status_item() {
# $1 status_file contents
# $2 status.operation
# $3 status.status
# $4 status.formattedMessage.message
# Note that this can contain regex
FMT='"operation": "'%s'",((.*)|\s*?).*,\s*"status": "'%s'",\s+"formattedMessage": {\s+"lang": "en",\s+"message": "'%s'"'
printf -v STATUS "$FMT" "$2" "$3" "$4"
echo "Searching status file for status item: $STATUS"
echo "$1" | egrep -z "$STATUS"
}
verify_substatus_item() {
# $1 status_file contents
# $2 substatus.name
# $3 substatus.status
# $4 substatus.formattedMessage.message
# Note that this can contain regex
FMT='"name": "'%s'",\s+"status": "'%s'",\s+"formattedMessage": {\s+"lang": "en",\s+"message": "'%s'"'
printf -v SUBSTATUS "$FMT" "$2" "$3" "$4"
echo "Searching status file for substatus item: $SUBSTATUS"
@@ -228,4 +266,35 @@ create_certificate() {
delete_certificate() {
rm -f testbin/webserverkey.pem
rm -f testbin/webservercert.pem
}
}
get_extension_version() {
# extract version from manifest.xml
version=$(awk -F'[<>]' '/<Version>/ {print $3}' misc/manifest.xml)
echo $version
}
# Accepted Kill Signals SIGINT SIGTERM
kill_apphealth_extension_gracefully() {
# kill the applicationhealth extension gracefully
# echo "Printing the process list Before killing the applicationhealth extension"
ps -ef | grep -e "applicationhealth-extension" -e "vmwatch_linux_amd64" | grep -v grep
kill_signal=$1
[[ $kill_signal == "SIGINT" || $kill_signal == "SIGTERM" ]] || { echo "Invalid signal: $kill_signal"; return 1; }
app_health_pid=$(ps -ef | grep "applicationhealth-extension" | grep -v grep | grep -v tee | awk '{print $2}')
if [ -z "$app_health_pid" ]; then
echo "Applicationhealth extension is not running"
return 0
fi
# echo "Killing applicationhealth extension with signal: $kill_signal"
# echo "PID: $app_health_pid"
kill -s $kill_signal $app_health_pid
# echo "Printing the process list after killing the applicationhealth extension"
ps -ef | grep -e "applicationhealth-extension" -e "vmwatch_linux_amd64" | grep -v grep
}
_load_bats_libs() {
export BATS_LIB_PATH=${CUSTOM_BATS_LIB_PATH:-"/usr/lib:/usr/local/lib/node_modules"}
echo "BATS_LIB_PATH: $BATS_LIB_PATH"
bats_load_library bats-support
bats_load_library bats-assert
}


@@ -0,0 +1,28 @@
package handlerenv
import (
"encoding/json"
"github.com/Azure/applicationhealth-extension-linux/internal/manifest"
"github.com/Azure/azure-extension-platform/pkg/handlerenv"
)
type HandlerEnvironment struct {
handlerenv.HandlerEnvironment
}
func (he *HandlerEnvironment) String() string {
env, _ := json.MarshalIndent(he, "", "\t")
return string(env)
}
func GetHandlerEnviroment() (he *HandlerEnvironment, _ error) {
em, err := manifest.GetExtensionManifest()
if err != nil {
return nil, err
}
env, err := handlerenv.GetHandlerEnvironment(em.Name(), em.Version)
if err != nil {
return nil, err
}
return &HandlerEnvironment{
HandlerEnvironment: *env,
}, nil
}


@@ -0,0 +1,96 @@
package manifest
import (
"encoding/xml"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/Azure/azure-extension-platform/pkg/utils"
)
// manifestFileName is the name of the manifest file.
const (
manifestFileName = "manifest.xml"
)
// GetDirFunc is a function type that returns a directory path and an error.
type GetDirFunc func() (string, error)
var (
// Set a package-level variable for the directory function
getDir GetDirFunc = utils.GetCurrentProcessWorkingDir
)
// ExtensionManifest represents the structure of an extension manifest.
type ExtensionManifest struct {
ProviderNameSpace string `xml:"ProviderNameSpace"`
Type string `xml:"Type"`
Version string `xml:"Version"`
Label string `xml:"Label"`
HostingResources string `xml:"HostingResources"`
MediaLink string `xml:"MediaLink"`
Description string `xml:"Description"`
IsInternalExtension bool `xml:"IsInternalExtension"`
IsJsonExtension bool `xml:"IsJsonExtension"`
SupportedOS string `xml:"SupportedOS"`
CompanyName string `xml:"CompanyName"`
}
// Name returns the formatted name of the extension manifest.
func (em *ExtensionManifest) Name() string {
return fmt.Sprintf("%s.%s", em.ProviderNameSpace, em.Type)
}
// GetExtensionManifest retrieves the extension manifest from the specified directory.
// If getDir is nil, it uses the current process working directory.
// It returns the extension manifest and an error, if any.
func GetExtensionManifest() (*ExtensionManifest, error) {
dir, err := getDir()
if err != nil {
return nil, err
}
fp, err := findManifestFilePath(dir)
if err != nil {
return nil, err
}
file, err := os.Open(fp)
if err != nil {
return nil, err
}
defer file.Close()
decoder := xml.NewDecoder(file)
var manifest ExtensionManifest
err = decoder.Decode(&manifest)
if err != nil {
return nil, err
}
return &manifest, nil
}
// findManifestFilePath finds the path of the manifest file in the specified directory.
// It returns the path and an error, if any.
func findManifestFilePath(dir string) (string, error) {
var (
paths = []string{
filepath.Join(dir, manifestFileName), // this level (i.e. executable is in [EXT_NAME]/.)
filepath.Join(dir, "..", manifestFileName), // one up (i.e. executable is in [EXT_NAME]/bin/.)
}
)
for _, p := range paths {
_, err := os.ReadFile(p)
if err != nil && !os.IsNotExist(err) {
return "", fmt.Errorf("cannot read file at path %s: %v", p, err)
} else if err == nil {
return p, nil
}
}
return "", fmt.Errorf("cannot find HandlerEnvironment at paths: %s", strings.Join(paths, ", "))
}


@@ -0,0 +1,80 @@
package manifest
import (
"fmt"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func Test_ExtensionManifestVersion(t *testing.T) {
// Save the original function and restore it after the test
originalGetDir := getDir
defer func() { getDir = originalGetDir }()
currVersion := "2.0.10"
expectedManifest := ExtensionManifest{
ProviderNameSpace: "Microsoft.ManagedServices",
Type: "ApplicationHealthLinux",
Version: currVersion,
Label: "Microsoft Azure Application Health Extension for Linux Virtual Machines",
HostingResources: "VmRole",
MediaLink: "",
Description: "Microsoft Azure Application Health Extension is an extension installed on a VM to periodically determine configured application health.",
IsInternalExtension: true,
IsJsonExtension: true,
SupportedOS: "Linux",
CompanyName: "Microsoft",
}
// Override the getDir function to return a mock directory
getDir = func() (string, error) {
return "../../misc", nil
}
currentManifest, err := GetExtensionManifest()
require.Nil(t, err)
require.Equal(t, expectedManifest.Type, currentManifest.Type)
require.Equal(t, expectedManifest.Label, currentManifest.Label)
require.Equal(t, expectedManifest.HostingResources, currentManifest.HostingResources)
require.Equal(t, expectedManifest.MediaLink, currentManifest.MediaLink)
require.Equal(t, expectedManifest.Description, currentManifest.Description)
require.Equal(t, expectedManifest.IsInternalExtension, currentManifest.IsInternalExtension)
require.Equal(t, expectedManifest.IsJsonExtension, currentManifest.IsJsonExtension)
require.Equal(t, expectedManifest.SupportedOS, currentManifest.SupportedOS)
require.Equal(t, expectedManifest.CompanyName, currentManifest.CompanyName)
}
func Test_FindManifestFilePath(t *testing.T) {
var (
manifestFileName = "manifest.xml"
src = "../../misc/" // directory containing the checked-in manifest.xml
dst = "/tmp/lib/waagent/Microsoft.ManagedServices-ApplicationHealthLinux/" // scratch directory simulating an install location
)
err := copyFileNewDirectory(src, dst, manifestFileName)
require.NoError(t, err, "failed to copy manifest file to a new directory")
defer os.RemoveAll(dst)
// Test case 1: Manifest file exists in the current directory
workingDir := filepath.Join(dst, "bin")
path, err := findManifestFilePath(workingDir)
require.NoError(t, err)
require.Equalf(t, filepath.Join(dst, manifestFileName), path, "failed to find manifest file from bin directory: %s", workingDir)
// Test case 2: Manifest file exists in the parent directory
workingDir = filepath.Join(workingDir, "..")
path, err = findManifestFilePath(workingDir)
require.NoError(t, err)
require.Equal(t, filepath.Join(dst, manifestFileName), path, "failed to find manifest file from parent directory: %s", workingDir)
os.Remove(filepath.Join(dst, manifestFileName))
// Test case 3: Manifest file does not exist
workingDir = filepath.Join(dst, "bin")
path, err = findManifestFilePath(workingDir)
require.Error(t, err)
require.EqualError(t, err, fmt.Sprintf("cannot find HandlerEnvironment at paths: %s", strings.Join([]string{filepath.Join(workingDir, manifestFileName), filepath.Join(workingDir, "..", manifestFileName)}, ", ")))
require.Equal(t, "", path)
}


@@ -0,0 +1,47 @@
package manifest
import (
"fmt"
"io"
"os"
"path/filepath"
)
func copyFileNewDirectory(src, dst, fileName string) error {
// copyFileNewDirectory copies fileName from the src directory into dst,
// creating the destination directory as needed. It is a test helper for staging the manifest file.
if src == "" || dst == "" {
return fmt.Errorf("invalid source or destination path")
}
src, err := filepath.Abs(src)
if err != nil {
return err
}
err = os.MkdirAll(dst, 0755) // create the destination directory itself, not just its parent
if err != nil {
return err
}
// Open the source file
srcFile, err := os.Open(filepath.Join(src, fileName))
if err != nil {
return err
}
defer srcFile.Close()
// Create the destination file
dstFile, err := os.Create(filepath.Join(dst, fileName))
if err != nil {
return err
}
defer dstFile.Close()
// Copy the contents of the source file to the destination file
_, err = io.Copy(dstFile, srcFile)
if err != nil {
return err
}
return nil
}


@@ -0,0 +1,82 @@
package telemetry
import (
"runtime"
"github.com/Azure/azure-extension-platform/pkg/extensionevents"
"github.com/go-kit/log"
)
type EventLevel string
type EventTask string
const (
EventLevelCritical EventLevel = "Critical"
EventLevelError EventLevel = "Error"
EventLevelWarning EventLevel = "Warning"
EventLevelVerbose EventLevel = "Verbose"
EventLevelInfo EventLevel = "Informational"
)
const (
MainTask EventTask = "Main"
AppHealthTask EventTask = "AppHealth"
AppHealthProbeTask EventTask = "AppHealth-HealthProbe"
ReportStatusTask EventTask = "ReportStatus"
ReportHeatBeatTask EventTask = "CheckHealthAndReportHeartBeat"
StartVMWatchTask EventTask = "StartVMWatchIfApplicable"
StopVMWatchTask EventTask = "OnExited"
SetupVMWatchTask EventTask = "SetupVMWatchProcess"
KillVMWatchTask EventTask = "KillVMWatchIfApplicable"
)
type LogFunc func(logger log.Logger, keyvals ...interface{})
type LogEventFunc func(logger log.Logger, level EventLevel, taskName EventTask, message string, keyvals ...interface{})
type TelemetryEventSender struct {
eem *extensionevents.ExtensionEventManager
}
func NewTelemetryEventSender(eem *extensionevents.ExtensionEventManager) *TelemetryEventSender {
return &TelemetryEventSender{
eem: eem,
}
}
// sendEvent sends a telemetry event with the specified level, task name, and message.
func (t *TelemetryEventSender) sendEvent(level EventLevel, taskName EventTask, message string) {
switch level {
case EventLevelCritical:
t.eem.LogCriticalEvent(string(taskName), message)
case EventLevelError:
t.eem.LogErrorEvent(string(taskName), message)
case EventLevelWarning:
t.eem.LogWarningEvent(string(taskName), message)
case EventLevelVerbose:
t.eem.LogVerboseEvent(string(taskName), message)
case EventLevelInfo:
t.eem.LogInformationalEvent(string(taskName), message)
default:
return
}
}
// LogStdOutAndEventWithSender is a higher-order function that returns a LogEventFunc.
// It logs the event to the provided logger and sends the event to the specified sender.
// If the taskName is empty, it automatically determines the caller's function name as the taskName.
// The event level, task name, and message are appended to the keyvals slice.
// Finally, it calls the sender's sendEvent method to send the event.
func LogStdOutAndEventWithSender(sender *TelemetryEventSender) LogEventFunc {
return func(logger log.Logger, level EventLevel, taskName EventTask, message string, keyvals ...interface{}) {
if taskName == "" {
pc, _, _, _ := runtime.Caller(1)
callerName := runtime.FuncForPC(pc).Name()
taskName = EventTask(callerName)
}
keyvals = append(keyvals, "level", level, "task", taskName, "event", message)
logger.Log(keyvals...)
(*sender).sendEvent(level, taskName, message)
}
}


@@ -6,13 +6,14 @@ import (
"strings"
"time"
"github.com/Azure/azure-docker-extension/pkg/vmextension"
"github.com/go-kit/kit/log"
"github.com/Azure/applicationhealth-extension-linux/internal/handlerenv"
"github.com/Azure/applicationhealth-extension-linux/internal/telemetry"
"github.com/go-kit/log"
"github.com/pkg/errors"
)
type cmdFunc func(ctx *log.Context, hEnv vmextension.HandlerEnvironment, seqNum int) (msg string, err error)
type preFunc func(ctx *log.Context, seqNum int) error
type cmdFunc func(lg log.Logger, hEnv *handlerenv.HandlerEnvironment, seqNum int) (msg string, err error)
type preFunc func(lg log.Logger, seqNum int) error
type cmd struct {
f cmdFunc // associated function
@@ -40,31 +41,31 @@ var (
}
)
func noop(ctx *log.Context, h vmextension.HandlerEnvironment, seqNum int) (string, error) {
ctx.Log("event", "noop")
func noop(lg log.Logger, h *handlerenv.HandlerEnvironment, seqNum int) (string, error) {
lg.Log("event", "noop")
return "", nil
}
func install(ctx *log.Context, h vmextension.HandlerEnvironment, seqNum int) (string, error) {
func install(lg log.Logger, h *handlerenv.HandlerEnvironment, seqNum int) (string, error) {
if err := os.MkdirAll(dataDir, 0755); err != nil {
return "", errors.Wrap(err, "failed to create data dir")
}
ctx.Log("event", "created data dir", "path", dataDir)
ctx.Log("event", "installed")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Created data dir", "path", dataDir)
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Handler successfully installed")
return "", nil
}
func uninstall(ctx *log.Context, h vmextension.HandlerEnvironment, seqNum int) (string, error) {
func uninstall(lg log.Logger, h *handlerenv.HandlerEnvironment, seqNum int) (string, error) {
{ // a new context scope with path
ctx = ctx.With("path", dataDir)
ctx.Log("event", "removing data dir", "path", dataDir)
lg = log.With(lg, "path", dataDir)
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Removing data dir")
if err := os.RemoveAll(dataDir); err != nil {
return "", errors.Wrap(err, "failed to delete data dir")
}
ctx.Log("event", "removed data dir")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Successfully removed data dir")
}
ctx.Log("event", "uninstalled")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Handler successfully uninstalled")
return "", nil
}
@@ -76,30 +77,47 @@ var (
errTerminated = errors.New("Application health process terminated")
)
func enable(ctx *log.Context, h vmextension.HandlerEnvironment, seqNum int) (string, error) {
func enable(lg log.Logger, h *handlerenv.HandlerEnvironment, seqNum int) (string, error) {
// parse the extension handler settings (not available prior to 'enable')
cfg, err := parseAndValidateSettings(ctx, h.HandlerEnvironment.ConfigFolder)
cfg, err := parseAndValidateSettings(lg, h.ConfigFolder)
if err != nil {
return "", errors.Wrap(err, "failed to get configuration")
}
probe := NewHealthProbe(ctx, &cfg)
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Successfully parsed and validated settings")
sendTelemetry(lg, telemetry.EventLevelVerbose, telemetry.AppHealthTask, fmt.Sprintf("HandlerSettings = %s", cfg))
probe := NewHealthProbe(lg, &cfg)
var (
intervalBetweenProbesInMs = time.Duration(cfg.intervalInSeconds()) * time.Millisecond * 1000
numberOfProbes = cfg.numberOfProbes()
gracePeriodInSeconds = time.Duration(cfg.gracePeriod()) * time.Second
numConsecutiveProbes = 0
prevState = Empty
committedState = Empty
honorGracePeriod = gracePeriodInSeconds > 0
gracePeriodStartTime = time.Now()
intervalBetweenProbesInMs = time.Duration(cfg.intervalInSeconds()) * time.Millisecond * 1000
numberOfProbes = cfg.numberOfProbes()
gracePeriodInSeconds = time.Duration(cfg.gracePeriod()) * time.Second
numConsecutiveProbes = 0
prevState = HealthStatus(Empty)
committedState = HealthStatus(Empty)
commitedCustomMetricsState = CustomMetricsStatus(Empty)
honorGracePeriod = gracePeriodInSeconds > 0
gracePeriodStartTime = time.Now()
vmWatchSettings = cfg.vmWatchSettings()
vmWatchResult = VMWatchResult{Status: Disabled, Error: nil}
vmWatchResultChannel = make(chan VMWatchResult)
timeOfLastVMWatchLog = time.Time{}
)
if !honorGracePeriod {
ctx.Log("event", "Grace period not set")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Grace period not set")
} else {
ctx.Log("event", fmt.Sprintf("Grace period set to %v", gracePeriodInSeconds))
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, fmt.Sprintf("Grace period set to %v", gracePeriodInSeconds))
}
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, fmt.Sprintf("VMWatch settings: %s", vmWatchSettings))
if vmWatchSettings == nil || !vmWatchSettings.Enabled {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.StartVMWatchTask, "VMWatch is disabled, not starting process.")
} else {
vmWatchResult = VMWatchResult{Status: NotRunning, Error: nil}
go executeVMWatch(lg, vmWatchSettings, h, vmWatchResultChannel)
}
// The committed health status (the state written to the status file) initially does not have a state
// In order to change the state in the status file, the following must be observed:
// 1. Healthy status observed once when committed state is unknown
@@ -112,22 +130,47 @@ func enable(ctx *log.Context, h vmextension.HandlerEnvironment, seqNum int) (str
// 2. A valid health state is observed numberOfProbes consecutive times
for {
startTime := time.Now()
probeResponse, err := probe.evaluate(ctx)
probeResponse, err := probe.evaluate(lg)
state := probeResponse.ApplicationHealthState
customMetrics := probeResponse.CustomMetrics
if err != nil {
ctx.Log("error", err)
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask,
fmt.Sprintf("Error evaluating health probe: %v", err), "error", err)
}
if shutdown {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, "Shutting down AppHealth Extension Gracefully")
return "", errTerminated
}
// If VMWatch was never supposed to run, it will be in Disabled state, so we do not need to read from the channel
// If VMWatch failed to execute, we do not need to read from the channel
// Only if VMWatch is currently running do we need to check if it failed
select {
case result, ok := <-vmWatchResultChannel:
vmWatchResult = result
if !ok {
vmWatchResult = VMWatchResult{Status: Failed, Error: errors.New("VMWatch channel has closed, unknown error")}
} else if result.Status == Running {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.ReportHeatBeatTask, "VMWatch is running")
} else if result.Status == Failed {
sendTelemetry(lg, telemetry.EventLevelError, telemetry.ReportHeatBeatTask, vmWatchResult.GetMessage())
} else if result.Status == NotRunning {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.ReportHeatBeatTask, "VMWatch is not running")
}
default:
if vmWatchResult.Status == Running && time.Since(timeOfLastVMWatchLog) >= 60*time.Second {
timeOfLastVMWatchLog = time.Now()
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.ReportHeatBeatTask, "VMWatch is running")
}
}
// Only increment if it's a repeat of the previous
if prevState == state {
numConsecutiveProbes++
// Log stage changes and also reset consecutive count to 1 as a new state was observed
} else {
ctx.Log("event", "Health state changed to "+strings.ToLower(string(state)))
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, fmt.Sprintf("Health state changed to %s", strings.ToLower(string(state))))
numConsecutiveProbes = 1
prevState = state
}
@@ -136,27 +179,27 @@ func enable(ctx *log.Context, h vmextension.HandlerEnvironment, seqNum int) (str
timeElapsed := time.Now().Sub(gracePeriodStartTime)
// If grace period expires, application didn't initialize on time
if timeElapsed >= gracePeriodInSeconds {
ctx.Log("event", fmt.Sprintf("No longer honoring grace period - expired. Time elapsed = %v", timeElapsed))
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, fmt.Sprintf("No longer honoring grace period - expired. Time elapsed = %v", timeElapsed))
honorGracePeriod = false
state = probe.healthStatusAfterGracePeriodExpires()
prevState = probe.healthStatusAfterGracePeriodExpires()
numConsecutiveProbes = 1
committedState = Empty
committedState = HealthStatus(Empty)
// If grace period has not expired, check if we have consecutive valid probes
} else if (numConsecutiveProbes == numberOfProbes) && (state != probe.healthStatusAfterGracePeriodExpires()) {
ctx.Log("event", fmt.Sprintf("No longer honoring grace period - successful probes. Time elapsed = %v", timeElapsed))
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, fmt.Sprintf("No longer honoring grace period - successful probes. Time elapsed = %v", timeElapsed))
honorGracePeriod = false
// Application will be in Initializing state since we have not received consecutive valid health states
} else {
ctx.Log("event", fmt.Sprintf("Honoring grace period. Time elapsed = %v", timeElapsed))
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, fmt.Sprintf("Honoring grace period. Time elapsed = %v", timeElapsed))
state = Initializing
}
}
if (numConsecutiveProbes == numberOfProbes) || (committedState == Empty) {
if (numConsecutiveProbes == numberOfProbes) || (committedState == HealthStatus(Empty)) {
if state != committedState {
committedState = state
ctx.Log("event", fmt.Sprintf("Committed health state is %s", strings.ToLower(string(committedState))))
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthTask, fmt.Sprintf("Committed health state is %s", strings.ToLower(string(committedState))))
}
// Only reset if we've observed consecutive probes in order to preserve previous observations when handling grace period
if numConsecutiveProbes == numberOfProbes {
@ -172,17 +215,33 @@ func enable(ctx *log.Context, h vmextension.HandlerEnvironment, seqNum int) (str
NewSubstatus(SubstatusKeyNameApplicationHealthState, committedState.GetStatusType(), string(committedState)),
}
if probeResponse.CustomMetrics != "" {
if customMetrics != Empty {
customMetricsStatusType := StatusError
if probeResponse.validateCustomMetrics() == nil {
customMetricsStatusType = StatusSuccess
}
substatuses = append(substatuses, NewSubstatus(SubstatusKeyNameCustomMetrics, customMetricsStatusType, probeResponse.CustomMetrics))
substatuses = append(substatuses, NewSubstatus(SubstatusKeyNameCustomMetrics, customMetricsStatusType, customMetrics))
if commitedCustomMetricsState != CustomMetricsStatus(customMetrics) {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.ReportStatusTask,
fmt.Sprintf("Reporting CustomMetric Substatus with status: %s , message: %s", customMetricsStatusType, customMetrics))
commitedCustomMetricsState = CustomMetricsStatus(customMetrics)
}
}
err = reportStatusWithSubstatuses(ctx, h, seqNum, StatusSuccess, "enable", statusMessage, substatuses)
// VMWatch substatus should only be displayed when settings are present
if vmWatchSettings != nil {
substatuses = append(substatuses, NewSubstatus(SubstatusKeyNameVMWatch, vmWatchResult.Status.GetStatusType(), vmWatchResult.GetMessage()))
}
err = reportStatusWithSubstatuses(lg, h, seqNum, StatusSuccess, "enable", statusMessage, substatuses)
if err != nil {
ctx.Log("error", err)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.ReportStatusTask,
fmt.Sprintf("Error while trying to report extension status with seqNum: %d, StatusType: %s, message: %s, substatuses: %#v, error: %s",
seqNum,
StatusSuccess,
statusMessage,
substatuses,
err.Error()))
}
endTime := time.Now()

View file

@ -4,7 +4,25 @@ const (
SubstatusKeyNameAppHealthStatus = "AppHealthStatus"
SubstatusKeyNameApplicationHealthState = "ApplicationHealthState"
SubstatusKeyNameCustomMetrics = "CustomMetrics"
SubstatusKeyNameVMWatch = "VMWatch"
ProbeResponseKeyNameApplicationHealthState = "ApplicationHealthState"
ProbeResponseKeyNameCustomMetrics = "CustomMetrics"
AppHealthBinaryNameAmd64 = "applicationhealth-extension"
AppHealthBinaryNameArm64 = "applicationhealth-extension-arm64"
// TODO: The github package responsible for HandlerEnvironment settings is no longer being maintained
// and it also doesn't have the latest properties like EventsFolder. Importing a separate package
// is possible, but may result in lots of code churn. We will temporarily keep this as a constant since the
// events folder is unlikely to change in the future.
VMWatchBinaryNameAmd64 = "vmwatch_linux_amd64"
VMWatchBinaryNameArm64 = "vmwatch_linux_arm64"
VMWatchConfigFileName = "vmwatch.conf"
VMWatchVerboseLogFileName = "vmwatch.log"
VMWatchDefaultTests = "disk_io:outbound_connectivity:clockskew:az_storage_blob"
VMWatchMaxProcessAttempts = 3
ExtensionManifestFileName = "manifest.xml"
)

View file

@ -2,9 +2,13 @@ package main
import (
"encoding/json"
"encoding/xml"
"os"
"path/filepath"
"github.com/Azure/applicationhealth-extension-linux/internal/telemetry"
"github.com/Azure/azure-docker-extension/pkg/vmextension"
"github.com/go-kit/kit/log"
"github.com/go-kit/log"
"github.com/pkg/errors"
)
@ -23,6 +27,11 @@ type handlerSettings struct {
protectedSettings
}
func (s handlerSettings) String() string {
settings, _ := json.MarshalIndent(s, "", "\t")
return string(settings)
}
func (s *handlerSettings) protocol() string {
return s.publicSettings.Protocol
}
@ -62,6 +71,10 @@ func (s *handlerSettings) gracePeriod() int {
}
}
func (s *handlerSettings) vmWatchSettings() *vmWatchSettings {
return s.publicSettings.VMWatchSettings
}
// validate makes logical validation on the handlerSettings which already passed
// the schema validation.
func (h handlerSettings) validate() error {
@ -81,15 +94,39 @@ func (h handlerSettings) validate() error {
return nil
}
type vmWatchSignalFilters struct {
EnabledTags []string `json:"enabledTags,array"`
DisabledTags []string `json:"disabledTags,array"`
EnabledOptionalSignals []string `json:"enabledOptionalSignals,array"`
DisabledSignals []string `json:"disabledSignals,array"`
}
type vmWatchSettings struct {
Enabled bool `json:"enabled,boolean"`
MemoryLimitInBytes int64 `json:"memoryLimitInBytes,int64"`
MaxCpuPercentage int64 `json:"maxCpuPercentage,int64"`
SignalFilters *vmWatchSignalFilters `json:"signalFilters"`
ParameterOverrides map[string]interface{} `json:"parameterOverrides,object"`
EnvironmentAttributes map[string]interface{} `json:"environmentAttributes,object"`
GlobalConfigUrl string `json:"globalConfigUrl"`
DisableConfigReader bool `json:"disableConfigReader,boolean"`
}
func (v *vmWatchSettings) String() string {
setting, _ := json.MarshalIndent(v, "", "\t")
return string(setting)
}
// publicSettings is the type deserialized from public configuration section of
// the extension handler. This should be in sync with publicSettingsSchema.
type publicSettings struct {
Protocol string `json:"protocol"`
Port int `json:"port,int"`
RequestPath string `json:"requestPath"`
IntervalInSeconds int `json:"intervalInSeconds,int"`
NumberOfProbes int `json:"numberOfProbes,int"`
GracePeriod int `json:"gracePeriod,int"`
Protocol string `json:"protocol"`
Port int `json:"port,int"`
RequestPath string `json:"requestPath"`
IntervalInSeconds int `json:"intervalInSeconds,int"`
NumberOfProbes int `json:"numberOfProbes,int"`
GracePeriod int `json:"gracePeriod,int"`
VMWatchSettings *vmWatchSettings `json:"vmWatchSettings"`
}
// protectedSettings is the type decoded and deserialized from protected
@ -99,31 +136,30 @@ type protectedSettings struct {
// parseAndValidateSettings reads configuration from configFolder, decrypts it,
// runs JSON-schema and logical validation on it and returns it back.
func parseAndValidateSettings(ctx *log.Context, configFolder string) (h handlerSettings, _ error) {
ctx.Log("event", "reading configuration")
func parseAndValidateSettings(lg log.Logger, configFolder string) (h handlerSettings, _ error) {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.MainTask, "Reading configuration")
pubJSON, protJSON, err := readSettings(configFolder)
if err != nil {
return h, err
}
ctx.Log("event", "read configuration")
ctx.Log("event", "validating json schema")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.MainTask, "validating json schema")
if err := validateSettingsSchema(pubJSON, protJSON); err != nil {
return h, errors.Wrap(err, "json validation error")
}
ctx.Log("event", "json schema valid")
ctx.Log("event", "parsing configuration json")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.MainTask, "json schema valid")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.MainTask, "parsing configuration json")
if err := vmextension.UnmarshalHandlerSettings(pubJSON, protJSON, &h.publicSettings, &h.protectedSettings); err != nil {
return h, errors.Wrap(err, "json parsing error")
}
ctx.Log("event", "parsed configuration json")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.MainTask, "parsed configuration json")
ctx.Log("event", "validating configuration logically")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.MainTask, "validating configuration logically")
if err := h.validate(); err != nil {
return h, errors.Wrap(err, "invalid configuration")
}
ctx.Log("event", "validated configuration")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.MainTask, "validated configuration")
return h, nil
}
@ -165,3 +201,57 @@ func toJSON(o map[string]interface{}) (string, error) {
b, err := json.Marshal(o)
return string(b), errors.Wrap(err, "failed to marshal into json")
}
type ExtensionManifest struct {
ProviderNameSpace string `xml:"ProviderNameSpace"`
Type string `xml:"Type"`
Version string `xml:"Version"`
Label string `xml:"Label"`
HostingResources string `xml:"HostingResources"`
MediaLink string `xml:"MediaLink"`
Description string `xml:"Description"`
IsInternalExtension bool `xml:"IsInternalExtension"`
IsJsonExtension bool `xml:"IsJsonExtension"`
SupportedOS string `xml:"SupportedOS"`
CompanyName string `xml:"CompanyName"`
}
func GetExtensionManifest(filepath string) (ExtensionManifest, error) {
file, err := os.Open(filepath)
if err != nil {
return ExtensionManifest{}, err
}
defer file.Close()
decoder := xml.NewDecoder(file)
var manifest ExtensionManifest
err = decoder.Decode(&manifest)
if err != nil {
return ExtensionManifest{}, err
}
return manifest, nil
}
// GetExtensionManifestVersion returns the extension version set at build time, or reads it from the manifest file as a fallback.
func GetExtensionManifestVersion() (string, error) {
// First attempting to read the version set during build time.
v := GetExtensionVersion()
if v != "" {
return v, nil
}
// If the version is not set during build time, then reading it from the manifest file as fallback.
processDirectory, err := GetProcessDirectory()
if err != nil {
return "", err
}
processDirectory = filepath.Dir(processDirectory)
fp := filepath.Join(processDirectory, ExtensionManifestFileName)
manifest, err := GetExtensionManifest(fp)
if err != nil {
return "", err
}
return manifest.Version, nil
}
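A minimal sketch of the manifest decoding that `GetExtensionManifest` performs above, using an inline XML fragment (the manifest contents here are illustrative, not the real manifest.xml):

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// Hypothetical manifest fragment; the real manifest.xml ships next to the binary.
const sampleManifest = `<ExtensionImage>
  <Type>ApplicationHealthLinux</Type>
  <Version>2.0.12</Version>
</ExtensionImage>`

// manifest mirrors two of the fields ExtensionManifest decodes.
type manifest struct {
	Type    string `xml:"Type"`
	Version string `xml:"Version"`
}

// parseManifestVersion decodes the XML and returns the Version element.
func parseManifestVersion(xmlStr string) (string, error) {
	var m manifest
	if err := xml.NewDecoder(strings.NewReader(xmlStr)).Decode(&m); err != nil {
		return "", err
	}
	return m.Version, nil
}

func main() {
	v, err := parseManifestVersion(sampleManifest)
	fmt.Println(v, err) // 2.0.12 <nil>
}
```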

View file

@ -1,7 +1,11 @@
package main
import "testing"
import "github.com/stretchr/testify/require"
import (
"testing"
"github.com/Azure/azure-docker-extension/pkg/vmextension"
"github.com/stretchr/testify/require"
)
func Test_handlerSettingsValidate(t *testing.T) {
// tcp includes request path
@ -55,3 +59,14 @@ func Test_toJSON(t *testing.T) {
require.Nil(t, err)
require.Equal(t, `{"a":3}`, s)
}
func Test_unMarshalPublicSetting(t *testing.T) {
publicSettings := map[string]interface{}{"requestPath": "health", "port": 8080, "numberOfProbes": 1, "intervalInSeconds": 5, "gracePeriod": 600, "vmWatchSettings": map[string]interface{}{"enabled": true, "globalConfigUrl": "https://testxyz.azurefd.net/config/disable-switch-config.json"}}
h := handlerSettings{}
err := vmextension.UnmarshalHandlerSettings(publicSettings, nil, &h.publicSettings, &h.protectedSettings)
require.Nil(t, err)
require.NotNil(t, h.publicSettings)
require.Equal(t, true, h.publicSettings.VMWatchSettings.Enabled)
require.Equal(t, "https://testxyz.azurefd.net/config/disable-switch-config.json", h.publicSettings.VMWatchSettings.GlobalConfigUrl)
}

View file

@ -12,18 +12,23 @@ import (
"net/url"
"github.com/go-kit/kit/log"
"github.com/Azure/applicationhealth-extension-linux/internal/telemetry"
"github.com/go-kit/log"
"github.com/pkg/errors"
)
type HealthStatus string
type CustomMetricsStatus string
const (
Initializing HealthStatus = "Initializing"
Healthy HealthStatus = "Healthy"
Unhealthy HealthStatus = "Unhealthy"
Unknown HealthStatus = "Unknown"
Empty HealthStatus = ""
)
const (
Empty string = ""
)
func (p HealthStatus) GetStatusType() StatusType {
@ -55,7 +60,7 @@ func (p HealthStatus) GetMessageForAppHealthStatus() string {
}
type HealthProbe interface {
evaluate(ctx *log.Context) (ProbeResponse, error)
evaluate(lg log.Logger) (ProbeResponse, error)
address() string
healthStatusAfterGracePeriodExpires() HealthStatus
}
@ -69,7 +74,7 @@ type HttpHealthProbe struct {
Address string
}
func NewHealthProbe(ctx *log.Context, cfg *handlerSettings) HealthProbe {
func NewHealthProbe(lg log.Logger, cfg *handlerSettings) HealthProbe {
var p HealthProbe
p = new(DefaultHealthProbe)
switch cfg.protocol() {
@ -77,20 +82,20 @@ func NewHealthProbe(ctx *log.Context, cfg *handlerSettings) HealthProbe {
p = &TcpHealthProbe{
Address: "localhost:" + strconv.Itoa(cfg.port()),
}
ctx.Log("event", "creating tcp probe targeting "+p.address())
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthProbeTask, fmt.Sprintf("Creating %s probe targeting %s", cfg.protocol(), p.address()))
case "http":
fallthrough
case "https":
p = NewHttpHealthProbe(cfg.protocol(), cfg.requestPath(), cfg.port())
ctx.Log("event", "creating "+cfg.protocol()+" probe targeting "+p.address())
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthProbeTask, fmt.Sprintf("Creating %s probe targeting %s", cfg.protocol(), p.address()))
default:
ctx.Log("event", "default settings without probe")
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.AppHealthProbeTask, "Configuration not provided. Using default reporting.")
}
return p
}
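The http/https branch above ultimately boils down to a timed GET against the configured path. A self-contained sketch against a throwaway test server (simplified: no redirect policy, TLS options, or response-body parsing):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// demoProbe stands up a throwaway HTTP endpoint and issues a single GET,
// mirroring the shape of the http probe created by NewHealthProbe.
func demoProbe() (int, error) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"ApplicationHealthState": "Healthy"}`)
	}))
	defer srv.Close()

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get(srv.URL + "/health")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	code, err := demoProbe()
	fmt.Println(code, err) // 200 <nil>
}
```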
func (p *TcpHealthProbe) evaluate(ctx *log.Context) (ProbeResponse, error) {
func (p *TcpHealthProbe) evaluate(lg log.Logger) (ProbeResponse, error) {
conn, err := net.DialTimeout("tcp", p.address(), 30*time.Second)
var probeResponse ProbeResponse
if err != nil {
@ -173,7 +178,7 @@ func NewHttpHealthProbe(protocol string, requestPath string, port int) *HttpHeal
return p
}
func (p *HttpHealthProbe) evaluate(ctx *log.Context) (ProbeResponse, error) {
func (p *HttpHealthProbe) evaluate(lg log.Logger) (ProbeResponse, error) {
req, err := http.NewRequest("GET", p.address(), nil)
var probeResponse ProbeResponse
if err != nil {
@ -209,7 +214,7 @@ func (p *HttpHealthProbe) evaluate(ctx *log.Context) (ProbeResponse, error) {
}
if err := probeResponse.validateCustomMetrics(); err != nil {
ctx.Log("error", err)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.AppHealthProbeTask, err.Error(), "error", err)
}
if err := probeResponse.validateApplicationHealthState(); err != nil {
@ -240,7 +245,7 @@ func noRedirect(req *http.Request, via []*http.Request) error {
type DefaultHealthProbe struct {
}
func (p DefaultHealthProbe) evaluate(ctx *log.Context) (ProbeResponse, error) {
func (p DefaultHealthProbe) evaluate(lg log.Logger) (ProbeResponse, error) {
var probeResponse ProbeResponse
probeResponse.ApplicationHealthState = Healthy
return probeResponse, nil

View file

@ -3,12 +3,16 @@ package main
import (
"fmt"
"os"
"os/exec"
"os/signal"
"strings"
"syscall"
"github.com/Azure/azure-docker-extension/pkg/vmextension"
"github.com/go-kit/kit/log"
"github.com/Azure/applicationhealth-extension-linux/internal/handlerenv"
"github.com/Azure/applicationhealth-extension-linux/internal/telemetry"
"github.com/Azure/applicationhealth-extension-linux/pkg/logging"
"github.com/Azure/azure-extension-platform/pkg/extensionevents"
"github.com/go-kit/log"
)
var (
@ -16,55 +20,74 @@ var (
dataDir = "/var/lib/waagent/apphealth"
shutdown = false
// We need a reference to the command here so that we can cleanly shutdown VMWatch process
// when a shutdown signal is received
vmWatchCommand *exec.Cmd
eem *extensionevents.ExtensionEventManager
sendTelemetry telemetry.LogEventFunc
)
func main() {
ctx := log.NewContext(log.NewSyncLogger(log.NewLogfmtLogger(
os.Stdout))).With("time", log.DefaultTimestamp).With("version", VersionString())
logger := log.NewSyncLogger(log.NewLogfmtLogger(
os.Stdout))
logger = log.With(logger, "time", log.DefaultTimestamp)
logger = log.With(logger, "version", VersionString())
logger = log.With(logger, "pid", os.Getpid())
// parse command line arguments
cmd := parseCmd(os.Args)
ctx = ctx.With("operation", strings.ToLower(cmd.name))
logger = log.With(logger, "operation", strings.ToLower(cmd.name))
// subscribe to cleanly shutdown
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-sigs
sendTelemetry(logger, telemetry.EventLevelInfo, telemetry.KillVMWatchTask, "Received shutdown request")
shutdown = true
err := killVMWatch(logger, vmWatchCommand)
if err != nil {
sendTelemetry(logger, telemetry.EventLevelError, telemetry.KillVMWatchTask, fmt.Sprintf("Error when killing vmwatch process, error: %s", err.Error()))
}
}()
// parse extension environment
hEnv, err := vmextension.GetHandlerEnv()
hEnv, err := handlerenv.GetHandlerEnviroment()
if err != nil {
ctx.Log("message", "failed to parse handlerenv", "error", err)
logger.Log("message", "failed to parse handlerenv", "error", err)
os.Exit(cmd.failExitCode)
}
seqNum, err := vmextension.FindSeqNum(hEnv.HandlerEnvironment.ConfigFolder)
seqNum, err := FindSeqNum(hEnv.ConfigFolder)
if err != nil {
ctx.Log("messsage", "failed to find sequence number", "error", err)
logger.Log("message", "failed to find sequence number", "error", err)
}
ctx = ctx.With("seq", seqNum)
logger = log.With(logger, "seq", seqNum)
eem = extensionevents.New(logging.NewNopLogger(), &hEnv.HandlerEnvironment)
sendTelemetry = telemetry.LogStdOutAndEventWithSender(telemetry.NewTelemetryEventSender(eem))
// check sub-command preconditions, if any, before executing
ctx.Log("event", "start")
sendTelemetry(logger, telemetry.EventLevelInfo, telemetry.MainTask, fmt.Sprintf("Starting AppHealth Extension %s seqNum=%d operation=%s", GetExtensionVersion(), seqNum, cmd.name))
sendTelemetry(logger, telemetry.EventLevelInfo, telemetry.MainTask, fmt.Sprintf("HandlerEnvironment = %s", hEnv))
if cmd.pre != nil {
ctx.Log("event", "pre-check")
if err := cmd.pre(ctx, seqNum); err != nil {
ctx.Log("event", "pre-check failed", "error", err)
logger.Log("event", "pre-check")
if err := cmd.pre(logger, seqNum); err != nil {
sendTelemetry(logger, telemetry.EventLevelError, telemetry.MainTask, "pre-check failed", "error", err.Error())
os.Exit(cmd.failExitCode)
}
}
// execute the subcommand
reportStatus(ctx, hEnv, seqNum, StatusTransitioning, cmd, "")
msg, err := cmd.f(ctx, hEnv, seqNum)
reportStatus(logger, hEnv, seqNum, StatusTransitioning, cmd, "")
msg, err := cmd.f(logger, hEnv, seqNum)
if err != nil {
ctx.Log("event", "failed to handle", "error", err)
reportStatus(ctx, hEnv, seqNum, StatusError, cmd, err.Error()+msg)
logger.Log("event", "failed to handle", "error", err)
reportStatus(logger, hEnv, seqNum, StatusError, cmd, err.Error()+msg)
os.Exit(cmd.failExitCode)
}
reportStatus(ctx, hEnv, seqNum, StatusSuccess, cmd, msg)
ctx.Log("event", "end")
reportStatus(logger, hEnv, seqNum, StatusSuccess, cmd, msg)
sendTelemetry(logger, telemetry.EventLevelInfo, telemetry.MainTask, fmt.Sprintf("Finished execution of AppHealth Extension %s seqNum=%d operation=%s", GetExtensionVersion(), seqNum, cmd.name))
}
// parseCmd looks at os.Args and parses the subcommand. If it is invalid,

View file

@ -1,8 +1,11 @@
package main
import (
"github.com/Azure/azure-docker-extension/pkg/vmextension"
"github.com/go-kit/kit/log"
"fmt"
"github.com/Azure/applicationhealth-extension-linux/internal/handlerenv"
"github.com/Azure/applicationhealth-extension-linux/internal/telemetry"
"github.com/go-kit/log"
"github.com/pkg/errors"
)
@ -11,26 +14,28 @@ import (
// status.
//
// If an error occurs reporting the status, it will be logged and returned.
func reportStatus(ctx *log.Context, hEnv vmextension.HandlerEnvironment, seqNum int, t StatusType, c cmd, msg string) error {
func reportStatus(lg log.Logger, hEnv *handlerenv.HandlerEnvironment, seqNum int, t StatusType, c cmd, msg string) error {
if !c.shouldReportStatus {
ctx.Log("status", "not reported for operation (by design)")
lg.Log("status", "not reported for operation (by design)")
return nil
}
s := NewStatus(t, c.name, statusMsg(c, t, msg))
if err := s.Save(hEnv.HandlerEnvironment.StatusFolder, seqNum); err != nil {
ctx.Log("event", "failed to save handler status", "error", err)
if err := s.Save(hEnv.StatusFolder, seqNum); err != nil {
sendTelemetry(lg, telemetry.EventLevelError, telemetry.ReportStatusTask, fmt.Sprintf("failed to save handler status: %s", s), "error", err.Error())
return errors.Wrap(err, "failed to save handler status")
}
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.ReportStatusTask, fmt.Sprintf("saved handler status: %s", s))
return nil
}
func reportStatusWithSubstatuses(ctx *log.Context, hEnv vmextension.HandlerEnvironment, seqNum int, t StatusType, op string, msg string, substatuses []SubstatusItem) error {
func reportStatusWithSubstatuses(lg log.Logger, hEnv *handlerenv.HandlerEnvironment, seqNum int, t StatusType, op string, msg string, substatuses []SubstatusItem) error {
s := NewStatus(t, op, msg)
for _, substatus := range substatuses {
s.AddSubstatusItem(substatus)
}
if err := s.Save(hEnv.HandlerEnvironment.StatusFolder, seqNum); err != nil {
ctx.Log("event", "failed to save handler status", "error", err)
if err := s.Save(hEnv.StatusFolder, seqNum); err != nil {
sendTelemetry(lg, telemetry.EventLevelError, telemetry.ReportStatusTask, fmt.Sprintf("failed to save handler status: %s", s), "error", err.Error())
return errors.Wrap(err, "failed to save handler status")
}
return nil

View file

@ -6,11 +6,20 @@ import (
"path/filepath"
"testing"
"github.com/Azure/azure-docker-extension/pkg/vmextension"
"github.com/go-kit/kit/log"
"github.com/go-kit/log"
"github.com/Azure/applicationhealth-extension-linux/internal/handlerenv"
"github.com/Azure/applicationhealth-extension-linux/internal/telemetry"
"github.com/Azure/applicationhealth-extension-linux/pkg/logging"
"github.com/Azure/azure-extension-platform/pkg/extensionevents"
"github.com/stretchr/testify/require"
)
func initTelemetry(he *handlerenv.HandlerEnvironment) {
eem = extensionevents.New(logging.NewNopLogger(), &he.HandlerEnvironment)
sendTelemetry = telemetry.LogStdOutAndEventWithSender(telemetry.NewTelemetryEventSender(eem))
}
func Test_statusMsg(t *testing.T) {
require.Equal(t, "Enable succeeded", statusMsg(cmdEnable, StatusSuccess, ""))
require.Equal(t, "Enable succeeded: msg", statusMsg(cmdEnable, StatusSuccess, "msg"))
@ -23,10 +32,12 @@ func Test_statusMsg(t *testing.T) {
}
func Test_reportStatus_fails(t *testing.T) {
fakeEnv := vmextension.HandlerEnvironment{}
fakeEnv.HandlerEnvironment.StatusFolder = "/non-existing/dir/"
fakeEnv := &handlerenv.HandlerEnvironment{}
fakeEnv.StatusFolder = "/non-existing/dir/"
err := reportStatus(log.NewContext(log.NewNopLogger()), fakeEnv, 1, StatusSuccess, cmdEnable, "")
initTelemetry(fakeEnv)
err := reportStatus(log.NewNopLogger(), fakeEnv, 1, StatusSuccess, cmdEnable, "")
require.NotNil(t, err)
require.Contains(t, err.Error(), "failed to save handler status")
}
@ -36,10 +47,11 @@ func Test_reportStatus_fileExists(t *testing.T) {
require.Nil(t, err)
defer os.RemoveAll(tmpDir)
fakeEnv := vmextension.HandlerEnvironment{}
fakeEnv.HandlerEnvironment.StatusFolder = tmpDir
fakeEnv := &handlerenv.HandlerEnvironment{}
fakeEnv.StatusFolder = tmpDir
initTelemetry(fakeEnv)
require.Nil(t, reportStatus(log.NewContext(log.NewNopLogger()), fakeEnv, 1, StatusError, cmdEnable, "FOO ERROR"))
require.Nil(t, reportStatus(log.NewNopLogger(), fakeEnv, 1, StatusError, cmdEnable, "FOO ERROR"))
path := filepath.Join(tmpDir, "1.status")
b, err := ioutil.ReadFile(path)
@ -53,9 +65,11 @@ func Test_reportStatus_checksIfShouldBeReported(t *testing.T) {
require.Nil(t, err)
defer os.RemoveAll(tmpDir)
fakeEnv := vmextension.HandlerEnvironment{}
fakeEnv.HandlerEnvironment.StatusFolder = tmpDir
require.Nil(t, reportStatus(log.NewContext(log.NewNopLogger()), fakeEnv, 2, StatusSuccess, c, ""))
fakeEnv := &handlerenv.HandlerEnvironment{}
fakeEnv.StatusFolder = tmpDir
initTelemetry(fakeEnv)
require.Nil(t, reportStatus(log.NewNopLogger(), fakeEnv, 2, StatusSuccess, c, ""))
fp := filepath.Join(tmpDir, "2.status")
_, err = os.Stat(fp) // check if the .status file is there

View file

@ -20,12 +20,12 @@ const (
"type": "string",
"enum": ["tcp", "http", "https"]
},
"port": {
"description": "Required when the protocol is 'tcp'. Optional when the protocol is 'http' or 'https'.",
"port": {
"description": "Required when the protocol is 'tcp'. Optional when the protocol is 'http' or 'https'.",
"type": "integer",
"minimum": 1,
"maximum": 65535
},
},
"requestPath": {
"description": "Path on which the web request should be sent. Required when the protocol is 'http' or 'https'.",
"type": "string"
@ -49,6 +49,84 @@ const (
"type": "integer",
"minimum": 5,
"maximum": 14400
},
"vmWatchSettings": {
"description": "Optional - VMWatch plugin settings",
"type": "object",
"properties": {
"enabled": {
"description": "Optional - Toggles whether VMWatch plugin will be started",
"type": "boolean",
"default": false
},
"memoryLimitInBytes": {
"description": "Optional - specifies the max memory that vmwatch can use",
"type": "integer",
"default": 80000000,
"minimum": 30000000
},
"maxCpuPercentage": {
"description": "Optional - specifies the max cpu that the vmwatch process is allowed to consume",
"type": "integer",
"default": 1,
"minimum": 1,
"maximum": 100
},
"signalFilters" : {
"description": "Optional - specify filtering for signals, if not specified, all core signals will be enabled",
"type": "object",
"properties": {
"enabledTags": {
"description": "Optional - list of tags to enable",
"type": "array",
"items": {
"type": "string"
}
},
"disabledTags": {
"description": "Optional - list of tags to disable",
"type": "array",
"items": {
"type": "string"
}
},
"enabledOptionalSignals": {
"description": "Optional - list of optional signals to enable",
"type": "array",
"items": {
"type": "string"
}
},
"disabledSignals": {
"description": "Optional - list of signals to disable (both core and optional signals are allowed in this list)",
"type": "array",
"items": {
"type": "string"
}
}
},
"default": {}
},
"parameterOverrides": {
"description": "Optional - Parameter overrides specific to VMWatch execution",
"type": "object",
"default": {}
},
"environmentAttributes": {
"description": "Optional - environment attributes (eg OutboundConnectivityEnabled : true)",
"type": "object",
"default": {}
},
"globalConfigUrl": {
"description": "Optional - specify global config url to download vmwatch configuration from",
"type": "string"
},
"disableConfigReader": {
"description": "Optional - flag to disable config reader",
"type": "boolean",
"default": false
}
}
}
},
"additionalProperties": false
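A settings document that satisfies the schema above might look like the following; the sketch unmarshals it and reads the `enabled` flag (all values are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative public settings matching the vmWatchSettings schema.
const sampleSettings = `{
  "protocol": "http",
  "port": 8080,
  "requestPath": "health",
  "vmWatchSettings": {
    "enabled": true,
    "memoryLimitInBytes": 80000000,
    "maxCpuPercentage": 1,
    "signalFilters": { "enabledTags": ["Network"] }
  }
}`

// vmWatchEnabled extracts just the enabled flag from a settings document.
func vmWatchEnabled(settingsJSON string) (bool, error) {
	var v struct {
		VMWatchSettings struct {
			Enabled bool `json:"enabled"`
		} `json:"vmWatchSettings"`
	}
	if err := json.Unmarshal([]byte(settingsJSON), &v); err != nil {
		return false, err
	}
	return v.VMWatchSettings.Enabled, nil
}

func main() {
	enabled, err := vmWatchEnabled(sampleSettings)
	fmt.Println(enabled, err) // true <nil>
}
```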

View file

@ -154,3 +154,22 @@ func TestValidatePublicSettings_gracePeriod(t *testing.T) {
})
}
}
func TestValidatePublicSettings_vmwatch(t *testing.T) {
require.Nil(t, validatePublicSettings(`{"port": 1, "vmWatchSettings" : { "enabled" : false }}`), "valid settings")
require.Nil(t, validatePublicSettings(`{"port": 1, "vmWatchSettings" : { "enabled" : true }}`), "valid settings")
require.Nil(t, validatePublicSettings(`{"port": 1, "vmWatchSettings" : { "enabled" : true, "memoryLimitInBytes" : 30000000 }}`), "valid settings")
err := validatePublicSettings(`{"port": 1, "vmWatchSettings" : { "enabled" : true, "memoryLimitInBytes" : 20000000 }}`)
require.NotNil(t, err)
require.Contains(t, err.Error(), "vmWatchSettings.memoryLimitInBytes: Must be greater than or equal to 30000000")
err = validatePublicSettings(`{"port": 1, "vmWatchSettings" : { "enabled" : true, "maxCpuPercentage" : 0 }}`)
require.NotNil(t, err)
require.Contains(t, err.Error(), "vmWatchSettings.maxCpuPercentage: Must be greater than or equal to 1")
err = validatePublicSettings(`{"port": 1, "vmWatchSettings" : { "enabled" : true, "maxCpuPercentage" : 101 }}`)
require.NotNil(t, err)
require.Contains(t, err.Error(), "vmWatchSettings.maxCpuPercentage: Must be less than or equal to 100")
}

32
main/seqnum.go Normal file
View file

@ -0,0 +1,32 @@
package main
import (
"fmt"
"path/filepath"
"sort"
"strconv"
"strings"
)
// FindSeqNum finds the file with the highest sequence number under configFolder,
// named like 0.settings, 1.settings, and so on.
func FindSeqNum(configFolder string) (int, error) {
g, err := filepath.Glob(configFolder + "/*.settings")
if err != nil {
return 0, err
}
seqs := make([]int, 0, len(g))
for _, v := range g {
f := filepath.Base(v)
i, err := strconv.Atoi(strings.Replace(f, ".settings", "", 1))
if err != nil {
return 0, fmt.Errorf("can't parse int from filename: %s", f)
}
seqs = append(seqs, i)
}
if len(seqs) == 0 {
return 0, fmt.Errorf("can't find seqnum in %s: no .settings files found", configFolder)
}
sort.Sort(sort.Reverse(sort.IntSlice(seqs)))
return seqs[0], nil
}

View file

@ -21,6 +21,7 @@ type StatusType string
const (
StatusTransitioning StatusType = "transitioning"
StatusWarning StatusType = "warning"
StatusError StatusType = "error"
StatusSuccess StatusType = "success"
)
@ -123,3 +124,8 @@ func (r StatusReport) Save(statusFolder string, seqNum int) error {
}
return nil
}
func (r StatusReport) String() string {
report, _ := json.MarshalIndent(r, "", "\t")
return string(report)
}

View file

@ -26,3 +26,7 @@ func DetailedVersionString() string {
// e.g. v2.2.0 git:03669cef-clean build:2016-07-22T16:22:26.556103000+00:00 go:go1.6.2
return fmt.Sprintf("v%s git:%s-%s build:%s %s", Version, GitCommit, GitState, BuildDate, runtime.Version())
}
func GetExtensionVersion() string {
return Version
}

543
main/vmWatch.go Normal file
View file

@ -0,0 +1,543 @@
package main
import (
"bytes"
"fmt"
"os"
"os/exec"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
"syscall"
"time"
"github.com/go-kit/log"
"github.com/Azure/applicationhealth-extension-linux/internal/handlerenv"
"github.com/Azure/applicationhealth-extension-linux/internal/telemetry"
"github.com/containerd/cgroups/v3"
"github.com/containerd/cgroups/v3/cgroup1"
"github.com/containerd/cgroups/v3/cgroup2"
"github.com/opencontainers/runtime-spec/specs-go"
)
type VMWatchStatus string
const (
DefaultMaxCpuPercentage = 1 // 1% cpu
DefaultMaxMemoryInBytes = 80000000 // 80MB
HoursBetweenRetryAttempts = 3
CGroupV2PeriodMs = 1000000 // 1 second
)
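For orientation, `CGroupV2PeriodMs` holds 1,000,000, i.e. one second expressed in microseconds. A percentage-to-quota conversion over that period would look like the sketch below; this formula is an illustrative assumption, and the actual cgroup wiring in this file goes through the containerd/cgroups library.

```go
package main

import "fmt"

const cgroupV2PeriodUs = 1000000 // matches CGroupV2PeriodMs above: 1 second in microseconds

// cpuQuota converts a max CPU percentage into a runtime quota for the
// fixed period (assumed conversion, shown for illustration only).
func cpuQuota(maxCpuPercentage int64) int64 {
	return cgroupV2PeriodUs * maxCpuPercentage / 100
}

func main() {
	fmt.Println(cpuQuota(1))  // 1% of a 1s period
	fmt.Println(cpuQuota(50)) // half of the period
}
```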
const (
NotRunning VMWatchStatus = "NotRunning"
Disabled VMWatchStatus = "Disabled"
Running VMWatchStatus = "Running"
Failed VMWatchStatus = "Failed"
)
const (
AllowVMWatchCgroupAssignmentFailureVariableName string = "ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE"
RunningInDevContainerVariableName string = "RUNNING_IN_DEV_CONTAINER"
AppHealthExecutionEnvironmentProd string = "Prod"
AppHealthExecutionEnvironmentTest string = "Test"
AppHealthPublisherNameTest string = "Microsoft.ManagedServices.Edp"
)
func (p VMWatchStatus) GetStatusType() StatusType {
switch p {
case Disabled:
return StatusWarning
case Failed:
return StatusError
default:
return StatusSuccess
}
}
type VMWatchResult struct {
Status VMWatchStatus
Error error
}
func (r *VMWatchResult) GetMessage() string {
switch r.Status {
case Disabled:
return "VMWatch is disabled"
case Failed:
return fmt.Sprintf("VMWatch failed: %v", r.Error)
case NotRunning:
return "VMWatch is not running"
default:
return "VMWatch is running"
}
}
// We will set up and execute VMWatch as a separate process. Ideally VMWatch should run indefinitely,
// but as a best effort we will attempt to run the process at most 3 times
func executeVMWatch(lg log.Logger, s *vmWatchSettings, hEnv *handlerenv.HandlerEnvironment, vmWatchResultChannel chan VMWatchResult) {
var vmWatchErr error
defer func() {
if r := recover(); r != nil {
vmWatchErr = fmt.Errorf("%w\n Additional Details: %+v", vmWatchErr, r)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.StopVMWatchTask, fmt.Sprintf("Recovered %+v", r))
}
vmWatchResultChannel <- VMWatchResult{Status: Failed, Error: vmWatchErr}
close(vmWatchResultChannel)
}()
// Best effort to start the VMWatch process: each time it fails, retry immediately,
// up to VMWatchMaxProcessAttempts times, before waiting a longer period and trying again
for !shutdown {
for i := 1; i <= VMWatchMaxProcessAttempts && !shutdown; i++ {
vmWatchResultChannel <- VMWatchResult{Status: Running}
vmWatchErr = executeVMWatchHelper(lg, i, s, hEnv)
vmWatchResultChannel <- VMWatchResult{Status: Failed, Error: vmWatchErr}
}
{
// errMsg is scoped to this block so it does not leak into the rest of the loop
errMsg := fmt.Sprintf("VMWatch reached max %d retries, sleeping for %v hours before trying again", VMWatchMaxProcessAttempts, HoursBetweenRetryAttempts)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.StartVMWatchTask, errMsg, "error", errMsg)
}
// we have exceeded the retries so now we go to sleep before starting again
time.Sleep(time.Hour * HoursBetweenRetryAttempts)
}
}
func executeVMWatchHelper(lg log.Logger, attempt int, vmWatchSettings *vmWatchSettings, hEnv *handlerenv.HandlerEnvironment) (err error) {
pid := -1
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("error: %w\n Additional Details: %+v", err, r)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.StartVMWatchTask, fmt.Sprintf("Recovered %+v", r))
}
}()
// Setup command
var resourceGovernanceRequired bool
vmWatchCommand, resourceGovernanceRequired, err = setupVMWatchCommand(vmWatchSettings, hEnv)
if err != nil {
err = fmt.Errorf("[%v][PID -1] Attempt %d: VMWatch setup failed. Error: %w", time.Now().UTC().Format(time.RFC3339), attempt, err)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.SetupVMWatchTask, err.Error())
return err
}
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.SetupVMWatchTask,
fmt.Sprintf("Attempt %d: Setup VMWatch command: %s\nArgs: %v\nDir: %s\nEnv: %v\n",
attempt, vmWatchCommand.Path, vmWatchCommand.Args, vmWatchCommand.Dir, vmWatchCommand.Env),
)
// TODO: Combined output may get excessively long, especially since VMWatch is a long running process
// We should trim the output or only get from Stderr
combinedOutput := &bytes.Buffer{}
vmWatchCommand.Stdout = combinedOutput
vmWatchCommand.Stderr = combinedOutput
vmWatchCommand.SysProcAttr = &syscall.SysProcAttr{Pdeathsig: syscall.SIGTERM}
// Start command
if err := vmWatchCommand.Start(); err != nil {
err = fmt.Errorf("[%v][PID -1] Attempt %d: VMWatch failed to start. Error: %w\nOutput: %s", time.Now().UTC().Format(time.RFC3339), attempt, err, combinedOutput.String())
sendTelemetry(lg, telemetry.EventLevelError, telemetry.StartVMWatchTask, err.Error(), "error", err)
return err
}
pid = vmWatchCommand.Process.Pid // cmd.Process should be populated on success
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.StartVMWatchTask, fmt.Sprintf("Attempt %d: Started VMWatch with PID %d", attempt, pid))
if !resourceGovernanceRequired {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.StartVMWatchTask, fmt.Sprintf("Resource governance was already applied at process launch of PID %d", pid))
} else {
err = applyResourceGovernance(lg, vmWatchSettings, vmWatchCommand)
if err != nil {
// if this has failed we have already killed the process as we failed to assign to cgroup so log the appropriate error
err = fmt.Errorf("[%v][PID %d] Attempt %d: VMWatch process exited. Error: %w\nOutput: %s", time.Now().UTC().Format(time.RFC3339), pid, attempt, err, combinedOutput.String())
sendTelemetry(lg, telemetry.EventLevelError, telemetry.StopVMWatchTask, err.Error(), "error", err)
return err
}
}
processDone := make(chan bool)
// create a waitgroup to coordinate the goroutines
var wg sync.WaitGroup
// add a task to wait for process completion
wg.Add(1)
go func() {
defer wg.Done()
err = vmWatchCommand.Wait()
processDone <- true
close(processDone)
}()
// add a task to monitor heartbeat
wg.Add(1)
go func() {
defer wg.Done()
monitorHeartBeat(lg, GetVMWatchHeartbeatFilePath(hEnv), processDone, vmWatchCommand)
}()
wg.Wait()
err = fmt.Errorf("[%v][PID %d] Attempt %d: VMWatch process exited. Error: %w\nOutput: %s", time.Now().UTC().Format(time.RFC3339), pid, attempt, err, combinedOutput.String())
sendTelemetry(lg, telemetry.EventLevelError, telemetry.StopVMWatchTask, err.Error(), "error", err)
return err
}
// applyResourceGovernance sets resource limits for the VMWatch process. On Linux this is only
// used in the case where systemd-run is not available
func applyResourceGovernance(lg log.Logger, vmWatchSettings *vmWatchSettings, vmWatchCommand *exec.Cmd) error {
// The default way to run vmwatch is via systemd-run. There are some cases where systemd-run is not available
// (in a container or in a distro without systemd). In those cases we will manage the cgroups directly
pid := vmWatchCommand.Process.Pid
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.StartVMWatchTask, fmt.Sprintf("Applying resource governance to PID %d", pid))
err := createAndAssignCgroups(lg, vmWatchSettings, pid)
if err != nil {
err = fmt.Errorf("[%v][PID %d] Failed to assign VMWatch process to cgroup. Error: %w", time.Now().UTC().Format(time.RFC3339), pid, err)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.StartVMWatchTask, err.Error(), "error", err)
// On real VMs we want this to stop vmwatch from running at all, since we want to be sure we are
// protected by resource governance. On dev machines, however, this may fail due to limitations of the
// execution environment (in a dev container or a GitHub pipeline container we don't have permission
// to assign cgroups, and on WSL it doesn't work at all because the base OS doesn't support it).
// To allow integration tests to run, we check the RUNNING_IN_DEV_CONTAINER and
// ALLOW_VMWATCH_CGROUP_ASSIGNMENT_FAILURE variables; if both are set we just log and continue.
// This allows us to test both cases
if os.Getenv(AllowVMWatchCgroupAssignmentFailureVariableName) == "" || os.Getenv(RunningInDevContainerVariableName) == "" {
lg.Log("event", "Killing VMWatch process as cgroup assignment failed")
_ = killVMWatch(lg, vmWatchCommand)
return err
}
}
return nil
}
func monitorHeartBeat(lg log.Logger, heartBeatFile string, processDone chan bool, cmd *exec.Cmd) {
maxTimeBetweenHeartBeatsInSeconds := 60
// use a ticker rather than a one-shot timer so the heartbeat is re-checked every interval
ticker := time.NewTicker(time.Second * time.Duration(maxTimeBetweenHeartBeatsInSeconds))
defer ticker.Stop()
for {
select {
case <-ticker.C:
info, err := os.Stat(heartBeatFile)
if err != nil || time.Since(info.ModTime()).Seconds() >= float64(maxTimeBetweenHeartBeatsInSeconds) {
// heartbeat file was not updated within the time limit; assume the process is hung
err = fmt.Errorf("[%v][PID %d] VMWatch process did not update heartbeat file within the time limit, killing the process", time.Now().UTC().Format(time.RFC3339), cmd.Process.Pid)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.ReportHeatBeatTask, err.Error(), "error", err)
err = killVMWatch(lg, cmd)
if err != nil {
err = fmt.Errorf("[%v][PID %d] Failed to kill vmwatch process", time.Now().UTC().Format(time.RFC3339), cmd.Process.Pid)
sendTelemetry(lg, telemetry.EventLevelError, telemetry.ReportHeatBeatTask, err.Error(), "error", err)
}
}
case <-processDone:
return
}
}
}
func killVMWatch(lg log.Logger, cmd *exec.Cmd) error {
if cmd == nil || cmd.Process == nil || cmd.ProcessState != nil {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.KillVMWatchTask, "VMWatch is not running, killing process is not necessary.")
return nil
}
if err := cmd.Process.Kill(); err != nil {
sendTelemetry(lg, telemetry.EventLevelError, telemetry.KillVMWatchTask,
fmt.Sprintf("Failed to kill VMWatch process with PID %d. Error: %v", cmd.Process.Pid, err))
return err
}
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.KillVMWatchTask, fmt.Sprintf("Successfully killed VMWatch process with PID %d", cmd.Process.Pid))
return nil
}
// setupVMWatchCommand sets up the command to run VMWatch.
// If we are on a Linux distro with systemd-run available, cmd.Path will be systemd-run (or possibly the full path if resolved);
// otherwise it will be the vmwatch binary path. The boolean return value indicates whether further resource governance is needed:
// false when running via systemd-run, true otherwise
func setupVMWatchCommand(s *vmWatchSettings, hEnv *handlerenv.HandlerEnvironment) (*exec.Cmd, bool, error) {
processDirectory, err := GetProcessDirectory()
if err != nil {
return nil, false, err
}
args := []string{"--config", GetVMWatchConfigFullPath(processDirectory)}
args = append(args, "--debug")
args = append(args, "--heartbeat-file", GetVMWatchHeartbeatFilePath(hEnv))
args = append(args, "--execution-environment", GetExecutionEnvironment(hEnv))
// 0 means "use the default"; any other value below 30MB is rejected
if s.MemoryLimitInBytes == 0 {
s.MemoryLimitInBytes = DefaultMaxMemoryInBytes
}
if s.MemoryLimitInBytes < 30000000 {
err = fmt.Errorf("[%v] Invalid MemoryLimitInBytes specified must be at least 30000000", time.Now().UTC().Format(time.RFC3339))
return nil, false, err
}
// check cpu, if 0 (default) set to the default value
if s.MaxCpuPercentage == 0 {
s.MaxCpuPercentage = DefaultMaxCpuPercentage
}
if s.MaxCpuPercentage < 0 || s.MaxCpuPercentage > 100 {
err = fmt.Errorf("[%v] Invalid maxCpuPercentage specified must be between 0 and 100", time.Now().UTC().Format(time.RFC3339))
return nil, false, err
}
args = append(args, "--memory-limit-bytes", strconv.FormatInt(s.MemoryLimitInBytes, 10))
if s.SignalFilters != nil {
// len() of a nil slice is 0, so explicit nil checks are unnecessary
if len(s.SignalFilters.DisabledSignals) > 0 {
args = append(args, "--disabled-signals", strings.Join(s.SignalFilters.DisabledSignals, ":"))
}
if len(s.SignalFilters.DisabledTags) > 0 {
args = append(args, "--disabled-tags", strings.Join(s.SignalFilters.DisabledTags, ":"))
}
if len(s.SignalFilters.EnabledTags) > 0 {
args = append(args, "--enabled-tags", strings.Join(s.SignalFilters.EnabledTags, ":"))
}
if len(s.SignalFilters.EnabledOptionalSignals) > 0 {
args = append(args, "--enabled-optional-signals", strings.Join(s.SignalFilters.EnabledOptionalSignals, ":"))
}
}
if len(strings.TrimSpace(s.GlobalConfigUrl)) > 0 {
args = append(args, "--global-config-url", s.GlobalConfigUrl)
}
args = append(args, "--disable-config-reader", strconv.FormatBool(s.DisableConfigReader))
// len() of a nil map is 0, so a single check suffices
if len(s.EnvironmentAttributes) > 0 {
args = append(args, "--env-attributes")
var envAttributes []string
for k, v := range s.EnvironmentAttributes {
envAttributes = append(envAttributes, fmt.Sprintf("%v=%v", k, v))
}
args = append(args, strings.Join(envAttributes, ":"))
}
// if we are running in a dev container don't call IMDS endpoint
if os.Getenv(RunningInDevContainerVariableName) != "" {
args = append(args, "--local")
}
extVersion, err := GetExtensionManifestVersion()
if err == nil {
args = append(args, "--apphealth-version", extVersion)
}
var cmd *exec.Cmd
// flag to tell the caller that further resource governance is required by assigning to cgroups after the process is started;
// default to true so that if systemd-run is not available, we will assign cgroups
resourceGovernanceRequired := true
// if we have systemd available, we will use that to launch the process, otherwise we will launch directly and manipulate our own cgroups
if isSystemdAvailable() {
systemdVersion := getSystemdVersion()
// since systemd-run is in different paths on different distros, we will check for systemd but not use the full path
// to systemd-run. This is how guest agent handles it also so seems appropriate.
systemdArgs := []string{"--scope", "-p", fmt.Sprintf("CPUQuota=%v%%", s.MaxCpuPercentage)}
// systemd versions prior to 246 do not support MemoryMax, instead MemoryLimit should be used
if systemdVersion < 246 {
systemdArgs = append(systemdArgs, "-p", fmt.Sprintf("MemoryLimit=%v", s.MemoryLimitInBytes))
} else {
systemdArgs = append(systemdArgs, "-p", fmt.Sprintf("MemoryMax=%v", s.MemoryLimitInBytes))
}
// now append the env variables (--setenv is supported in all versions, -E only in newer versions)
for _, v := range GetVMWatchEnvironmentVariables(s.ParameterOverrides, hEnv) {
systemdArgs = append(systemdArgs, "--setenv", v)
}
systemdArgs = append(systemdArgs, GetVMWatchBinaryFullPath(processDirectory))
systemdArgs = append(systemdArgs, args...)
cmd = exec.Command("systemd-run", systemdArgs...)
// cgroup assignment not required since we are using systemd-run
resourceGovernanceRequired = false
} else {
cmd = exec.Command(GetVMWatchBinaryFullPath(processDirectory), args...)
cmd.Env = GetVMWatchEnvironmentVariables(s.ParameterOverrides, hEnv)
}
return cmd, resourceGovernanceRequired, nil
}
func isSystemdAvailable() bool {
// check if /run/systemd/system exists, if so we have systemd
info, err := os.Stat("/run/systemd/system")
return err == nil && info.IsDir()
}
func getSystemdVersion() int {
cmd := exec.Command("systemd-run", "--version")
// Execute the command and capture the output
output, err := cmd.CombinedOutput()
if err != nil {
return 0
}
// Convert output bytes to string
outputStr := string(output)
// Find the version information in the output
return extractVersion(outputStr)
}
// extractVersion extracts the systemd version number from the command output,
// returning the version or 0 if not found
func extractVersion(output string) int {
lines := strings.Split(output, "\n")
for _, line := range lines {
if strings.HasPrefix(line, "systemd") {
parts := strings.Fields(line)
if len(parts) >= 2 {
ret, err := strconv.Atoi(parts[1])
if err == nil {
return ret
}
return 0
}
}
}
return 0
}
func createAndAssignCgroups(lg log.Logger, vmwatchSettings *vmWatchSettings, vmWatchPid int) error {
// get our process and use this to determine the appropriate mount points for the cgroups
myPid := os.Getpid()
memoryLimitInBytes := int64(vmwatchSettings.MemoryLimitInBytes)
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.StartVMWatchTask, "Assigning VMWatch process to cgroup")
// check cgroups mode
if cgroups.Mode() == cgroups.Unified {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.StartVMWatchTask, "cgroups v2 detected")
// in cgroup v2, we need to set the period and quota relative to one another.
// Quota is the number of microseconds in the period that the process may run;
// Period is the length of the period in microseconds
period := uint64(CGroupV2PeriodMs)
cpuQuota := int64(vmwatchSettings.MaxCpuPercentage * 10000)
resources := cgroup2.Resources{
CPU: &cgroup2.CPU{
Max: cgroup2.NewCPUMax(&cpuQuota, &period),
},
Memory: &cgroup2.Memory{
Max: &memoryLimitInBytes,
},
}
// in cgroup v2, it appears that a process already in a cgroup can't create a sub group that limits the same
// kind of resources so we have to do it at the root level. Reference https://manpath.be/f35/7/cgroups#L557
manager, err := cgroup2.NewManager("/sys/fs/cgroup", "/vmwatch.slice", &resources)
if err != nil {
return err
}
err = manager.AddProc(uint64(vmWatchPid))
if err != nil {
return err
}
} else {
sendTelemetry(lg, telemetry.EventLevelInfo, telemetry.StartVMWatchTask, "cgroups v1 detected")
p := cgroup1.PidPath(myPid)
cpuPath, err := p("cpu")
if err != nil {
return err
}
// in cgroup v1, the period is implied; 1000 == 1%
cpuQuota := int64(vmwatchSettings.MaxCpuPercentage * 1000)
s := specs.LinuxResources{
CPU: &specs.LinuxCPU{
Quota: &cpuQuota,
},
Memory: &specs.LinuxMemory{
Limit: &memoryLimitInBytes,
},
}
control, err := cgroup1.New(cgroup1.StaticPath(cpuPath+"/vmwatch.slice"), &s)
if err != nil {
return err
}
err = control.AddProc(uint64(vmWatchPid))
if err != nil {
return err
}
defer control.Delete()
}
return nil
}
func GetProcessDirectory() (string, error) {
p, err := filepath.Abs(os.Args[0])
if err != nil {
return "", err
}
return filepath.Dir(p), nil
}
func GetVMWatchHeartbeatFilePath(hEnv *handlerenv.HandlerEnvironment) string {
return filepath.Join(hEnv.LogFolder, "vmwatch-heartbeat.txt")
}
func GetExecutionEnvironment(hEnv *handlerenv.HandlerEnvironment) string {
if strings.Contains(hEnv.LogFolder, AppHealthPublisherNameTest) {
return AppHealthExecutionEnvironmentTest
}
return AppHealthExecutionEnvironmentProd
}
func GetVMWatchConfigFullPath(processDirectory string) string {
return filepath.Join(processDirectory, "VMWatch", VMWatchConfigFileName)
}
func GetVMWatchBinaryFullPath(processDirectory string) string {
binaryName := VMWatchBinaryNameAmd64
if strings.Contains(os.Args[0], AppHealthBinaryNameArm64) {
binaryName = VMWatchBinaryNameArm64
}
return filepath.Join(processDirectory, "VMWatch", binaryName)
}
func GetVMWatchEnvironmentVariables(parameterOverrides map[string]interface{}, hEnv *handlerenv.HandlerEnvironment) []string {
var arr []string
// make sure we get the keys out in order
keys := make([]string, 0, len(parameterOverrides))
for k := range parameterOverrides {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
// use %v so non-string override values render correctly
arr = append(arr, fmt.Sprintf("%s=%v", k, parameterOverrides[k]))
}
arr = append(arr, fmt.Sprintf("SIGNAL_FOLDER=%s", hEnv.EventsFolder))
arr = append(arr, fmt.Sprintf("VERBOSE_LOG_FILE_FULL_PATH=%s", filepath.Join(hEnv.LogFolder, VMWatchVerboseLogFileName)))
return arr
}

48
main/vmWatch_test.go Normal file

@ -0,0 +1,48 @@
package main
import (
"testing"
"github.com/pkg/errors"
"github.com/stretchr/testify/require"
)
func TestGetStatusTypeReturnsCorrectValue(t *testing.T) {
status := Failed
require.Equal(t, StatusError, status.GetStatusType())
status = Disabled
require.Equal(t, StatusWarning, status.GetStatusType())
status = NotRunning
require.Equal(t, StatusSuccess, status.GetStatusType())
status = Running
require.Equal(t, StatusSuccess, status.GetStatusType())
}
func TestGetMessageCorrectValue(t *testing.T) {
res := VMWatchResult{Status: Disabled}
require.Equal(t, "VMWatch is disabled", res.GetMessage())
res = VMWatchResult{Status: Failed}
require.Equal(t, "VMWatch failed: <nil>", res.GetMessage())
res = VMWatchResult{Status: Failed, Error: errors.New("this is an error")}
require.Equal(t, "VMWatch failed: this is an error", res.GetMessage())
res = VMWatchResult{Status: NotRunning}
require.Equal(t, "VMWatch is not running", res.GetMessage())
res = VMWatchResult{Status: Running}
require.Equal(t, "VMWatch is running", res.GetMessage())
}
func TestExtractVersion(t *testing.T) {
v := extractVersion("systemd 123")
require.Equal(t, 123, v)
v = extractVersion(`someline
systemd 123
some other line`)
require.Equal(t, 123, v)
v = extractVersion(`someline
systemd abc
some other line`)
require.Equal(t, 0, v)
v = extractVersion("junk")
require.Equal(t, 0, v)
}


@ -5,9 +5,11 @@ readonly SCRIPT_DIR=$(dirname "$0")
readonly LOG_DIR="/var/log/azure/applicationhealth-extension"
readonly LOG_FILE=handler.log
readonly ARCHITECTURE=$( [[ "$(uname -p)" == "unknown" ]] && echo "$(uname -m)" || echo "$(uname -p)" )
VMWATCH_BIN="vmwatch_linux_amd64"
HANDLER_BIN="applicationhealth-extension"
if [ $ARCHITECTURE == "arm64" ] || [ $ARCHITECTURE == "aarch64" ]; then
HANDLER_BIN="applicationhealth-extension-arm64";
HANDLER_BIN="applicationhealth-extension-arm64";
VMWATCH_BIN="vmwatch_linux_arm64"
fi
# status_file returns the .status file path we are supposed to write
@ -53,7 +55,7 @@ write_status() {
fi
}
kill_existing_processes() {
kill_existing_apphealth_processes() {
out="$(ps aux)"
if [[ "$out" == **"$HANDLER_BIN enable"** ]]; then
echo "Terminating existing $HANDLER_BIN process"
@ -77,13 +79,38 @@ kill_existing_processes() {
fi
}
kill_existing_vmwatch_processes() {
out="$(ps aux)"
if [[ "$out" == **"$VMWATCH_BIN"** ]]; then
echo "Terminating existing $VMWATCH_BIN process"
pkill -f $VMWATCH_BIN >&2
echo "Tried terminating existing $VMWATCH_BIN process"
for i in {1..33};
do
out="$(ps aux)"
if [[ "$out" == **"$VMWATCH_BIN"** ]]; then
sleep 1
else
echo "$VMWATCH_BIN process terminated"
break
fi
done
out="$(ps aux)"
if [[ "$out" == **"$VMWATCH_BIN"** ]]; then
echo "Force terminating existing $VMWATCH_BIN process"
pkill -9 -f $VMWATCH_BIN >&2
fi
fi
}
if [ "$#" -ne 1 ]; then
echo "Incorrect usage."
echo "Usage: $0 <command>"
exit 1
fi
kill_existing_processes
kill_existing_apphealth_processes
kill_existing_vmwatch_processes
# Redirect logs of the handler process
mkdir -p "$LOG_DIR"


@ -2,7 +2,7 @@
<ExtensionImage xmlns="http://schemas.microsoft.com/windowsazure">
<ProviderNameSpace>Microsoft.ManagedServices</ProviderNameSpace>
<Type>ApplicationHealthLinux</Type>
<Version>2.0.6</Version>
<Version>2.0.12</Version>
<Label>Microsoft Azure Application Health Extension for Linux Virtual Machines</Label>
<HostingResources>VmRole</HostingResources>
<MediaLink></MediaLink>

68
pkg/logging/logging.go Normal file

@ -0,0 +1,68 @@
package logging
import (
"io"
"github.com/go-kit/log"
)
// NopLogger is a logger implementation that discards all log messages.
// It implements the Logger interface from the azure-extension-platform package.
type NopLogger struct {
log.Logger
}
func NewNopLogger() *NopLogger {
return &NopLogger{
Logger: log.NewNopLogger(),
}
}
func (l NopLogger) Info(format string, v ...interface{}) {
err := l.Log()
if err != nil {
panic(err)
}
}
func (l NopLogger) Warn(format string, v ...interface{}) {
err := l.Log()
if err != nil {
panic(err)
}
}
func (l NopLogger) Error(format string, v ...interface{}) {
err := l.Log()
if err != nil {
panic(err)
}
}
func (l NopLogger) ErrorFromStream(prefix string, streamReader io.Reader) {
err := l.Log()
if err != nil {
panic(err)
}
}
func (l NopLogger) WarnFromStream(prefix string, streamReader io.Reader) {
err := l.Log()
if err != nil {
panic(err)
}
}
func (l NopLogger) InfoFromStream(prefix string, streamReader io.Reader) {
err := l.Log()
if err != nil {
panic(err)
}
}
func (l NopLogger) Close() {
err := l.Log()
if err != nil {
panic(err)
}
}


@ -2,6 +2,7 @@ FROM ubuntu:20.04
RUN apt-get -qqy update && \
apt-get -qqy install jq openssl ca-certificates && \
apt-get -y install sysstat bc netcat && \
apt-get -qqy clean && \
rm -rf /var/lib/apt/lists/*
@ -10,7 +11,8 @@ RUN mkdir -p /var/lib/waagent && \
mkdir -p /var/lib/waagent/Extension/config && \
touch /var/lib/waagent/Extension/config/0.settings && \
mkdir -p /var/lib/waagent/Extension/status && \
mkdir -p /var/log/azure/Extension/VE.RS.ION
mkdir -p /var/log/azure/Extension/VE.RS.ION && \
mkdir -p /var/log/azure/Extension/events
# Copy the test environment
WORKDIR /var/lib/waagent
@ -23,5 +25,9 @@ RUN ln -s /var/lib/waagent/fake-waagent /sbin/fake-waagent && \
# Copy the handler files
COPY misc/HandlerManifest.json ./Extension/
COPY misc/manifest.xml ./Extension/
COPY misc/applicationhealth-shim ./Extension/bin/
COPY bin/applicationhealth-extension ./Extension/bin/
COPY bin/applicationhealth-extension ./Extension/bin/
# Copy Helper functions and scripts
COPY integration-test/test/test_helper.bash /var/lib/waagent

21
vendor/github.com/Azure/azure-extension-platform/LICENSE.txt generated vendored Normal file

@ -0,0 +1,21 @@
Copyright (c) Microsoft Corporation.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

35
vendor/github.com/Azure/azure-extension-platform/pkg/extensionerrors/errorhelper.go generated vendored Normal file

@ -0,0 +1,35 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package extensionerrors
import (
"fmt"
"github.com/pkg/errors"
"runtime/debug"
)
func AddStackToError(err error) error {
if err == nil {
return nil
}
stackString := string(debug.Stack())
return fmt.Errorf("%+v\nCallStack: %s", err, stackString)
}
func NewErrorWithStack(errString string) error {
stackString := string(debug.Stack())
return fmt.Errorf("%s\nCallStack: %s", errString, stackString)
}
func CombineErrors(err1 error, err2 error) error {
if err1 == nil && err2 == nil {
return nil
}
if err1 != nil && err2 == nil {
return err1
}
if err1 == nil && err2 != nil {
return err2
}
return errors.Wrap(err1, err2.Error())
}

47
vendor/github.com/Azure/azure-extension-platform/pkg/extensionerrors/extensionerrors.go generated vendored Normal file

@ -0,0 +1,47 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package extensionerrors
import "github.com/pkg/errors"
var (
// ErrArgCannotBeNull is returned if a required parameter is null
ErrArgCannotBeNull = errors.New("The argument cannot be null")
// ErrArgCannotBeNullOrEmpty is returned if a required string parameter is null or empty
ErrArgCannotBeNullOrEmpty = errors.New("The argument cannot be null or empty")
// ErrMustRunAsAdmin is returned if an operation ran at permissions below admin
ErrMustRunAsAdmin = errors.New("The process must run as Administrator")
// ErrCertWithThumbprintNotFound is returned if we couldn't find the cert
ErrCertWithThumbprintNotFound = errors.New("The certificate for the specified thumbprint was not found")
// ErrInvalidProtectedSettingsData is returned when the protected settings data is invalid
ErrInvalidProtectedSettingsData = errors.New("The protected settings data is invalid")
// ErrInvalidSettingsFile is returned if the settings file is invalid
ErrInvalidSettingsFile = errors.New("The settings file is invalid")
// ErrInvalidSettingsRuntimeSettingsCount is returned if the runtime settings count is not one
ErrInvalidSettingsRuntimeSettingsCount = errors.New("The runtime settings count in the settings file is invalid")
// ErrNoCertificateThumbprint is returned if protected setting exist but no certificate thumbprint does
ErrNoCertificateThumbprint = errors.New("No certificate thumbprint to decode protected settings")
// ErrCannotDecodeProtectedSettings is returned if we cannot base64 decode the protected settings
ErrCannotDecodeProtectedSettings = errors.New("Failed to base64 decode the protected settings")
// ErrInvalidSettingsFileName is returned if we cannot parse the .settings file name
ErrInvalidSettingsFileName = errors.New("Invalid .settings file name")
// ErrNoSettingsFiles is returned if no .settings file are found
ErrNoSettingsFiles = errors.New("No .settings files exist")
// ErrNoMrseqFile is returned if no mrseq file are found
ErrNoMrseqFile = errors.New("No mrseq file exist")
ErrNotFound = errors.New("NotFound")
ErrInvalidOperationName = errors.New("operation name is invalid")
)

124
vendor/github.com/Azure/azure-extension-platform/pkg/extensionevents/extension_events.go generated vendored Normal file

@ -0,0 +1,124 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package extensionevents
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"strconv"
"time"
"github.com/Azure/azure-extension-platform/pkg/handlerenv"
"github.com/Azure/azure-extension-platform/pkg/logging"
)
const (
eventLevelCritical = "Critical"
eventLevelError = "Error"
eventLevelWarning = "Warning"
eventLevelVerbose = "Verbose"
eventLevelInformational = "Informational"
)
type extensionEvent struct {
Version string `json:"Version"`
Timestamp string `json:"Timestamp"`
TaskName string `json:"TaskName"`
EventLevel string `json:"EventLevel"`
Message string `json:"Message"`
EventPid string `json:"EventPid"`
EventTid string `json:"EventTid"`
OperationID string `json:"OperationId"`
}
// ExtensionEventManager allows extensions to log events that will be collected
// by the Guest Agent
type ExtensionEventManager struct {
extensionLogger logging.ILogger
eventsFolder string
operationID string
}
func (eem *ExtensionEventManager) logEvent(taskName string, eventLevel string, message string) {
if eem.eventsFolder == "" {
eem.extensionLogger.Warn("EventsFolder not set. Not writing event.")
return
}
extensionVersion := os.Getenv("AZURE_GUEST_AGENT_EXTENSION_VERSION")
timestamp := time.Now().UTC().Format(time.RFC3339Nano)
pid := fmt.Sprintf("%v", os.Getpid())
tid := getThreadID()
extensionEvent := extensionEvent{
Version: extensionVersion,
Timestamp: timestamp,
TaskName: taskName,
EventLevel: eventLevel,
Message: message,
EventPid: pid,
EventTid: tid,
OperationID: eem.operationID,
}
// File name is the unix time in microseconds (UnixNano / 1000)
fileName := strconv.FormatInt(time.Now().UTC().UnixNano()/1000, 10)
filePath := path.Join(eem.eventsFolder, fileName) + ".json"
b, err := json.Marshal(extensionEvent)
if err != nil {
eem.extensionLogger.Error("Unable to serialize extension event: <%v>", err)
return
}
err = ioutil.WriteFile(filePath, b, 0644)
if err != nil {
eem.extensionLogger.Error("Unable to write event file: <%v>", err)
}
}
// New creates a new instance of the ExtensionEventManager
func New(el logging.ILogger, he *handlerenv.HandlerEnvironment) *ExtensionEventManager {
eem := &ExtensionEventManager{
extensionLogger: el,
eventsFolder: he.EventsFolder,
operationID: "",
}
return eem
}
// SetOperationID sets the operation ID passed by the user when logging extension events.
// It is a separate function (not folded into logEvent) so that users can set the operation ID globally for their extension.
// operationID corresponds to the "Context3" column in the 'GuestAgentGenericLogs' table (Rdos cluster)
func (eem *ExtensionEventManager) SetOperationID(operationID string) {
eem.operationID = operationID
}
// LogCriticalEvent writes a message with critical status for the extension
func (eem *ExtensionEventManager) LogCriticalEvent(taskName string, message string) {
eem.logEvent(taskName, eventLevelCritical, message)
}
// LogErrorEvent writes a message with error status for the extension
func (eem *ExtensionEventManager) LogErrorEvent(taskName string, message string) {
eem.logEvent(taskName, eventLevelError, message)
}
// LogWarningEvent writes a message with warning status for the extension
func (eem *ExtensionEventManager) LogWarningEvent(taskName string, message string) {
eem.logEvent(taskName, eventLevelWarning, message)
}
// LogVerboseEvent writes a message with verbose status for the extension
func (eem *ExtensionEventManager) LogVerboseEvent(taskName string, message string) {
eem.logEvent(taskName, eventLevelVerbose, message)
}
// LogInformationalEvent writes a message with informational status for the extension
func (eem *ExtensionEventManager) LogInformationalEvent(taskName string, message string) {
eem.logEvent(taskName, eventLevelInformational, message)
}

13
vendor/github.com/Azure/azure-extension-platform/pkg/extensionevents/extension_events_linux.go generated vendored Normal file

@ -0,0 +1,13 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package extensionevents
import (
"fmt"
"golang.org/x/sys/unix"
)
func getThreadID() string {
return fmt.Sprintf("%d", unix.Gettid())
}

12
vendor/github.com/Azure/azure-extension-platform/pkg/extensionevents/extension_events_windows.go generated vendored Normal file

@ -0,0 +1,12 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package extensionevents
import (
"fmt"
"golang.org/x/sys/windows"
)
func getThreadID() string {
return fmt.Sprintf("%v", windows.GetCurrentThreadId())
}

128
vendor/github.com/Azure/azure-extension-platform/pkg/handlerenv/handlerenv.go generated vendored Normal file
@@ -0,0 +1,128 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package handlerenv
import (
"encoding/json"
"fmt"
"github.com/Azure/azure-extension-platform/pkg/extensionerrors"
"github.com/Azure/azure-extension-platform/pkg/utils"
"io/ioutil"
"os"
"path/filepath"
)
const handlerEnvFileName = "HandlerEnvironment.json"
// HandlerEnvironment describes the handler environment configuration for an extension
type HandlerEnvironment struct {
HeartbeatFile string
StatusFolder string
ConfigFolder string
LogFolder string
DataFolder string
EventsFolder string
DeploymentID string
RoleName string
Instance string
HostResolverAddress string
}
// handlerEnvironmentInternal describes the handler environment configuration presented
// to the extension handler by the Azure Guest Agent.
type handlerEnvironmentInternal struct {
Version float64 `json:"version"`
Name string `json:"name"`
HandlerEnvironment struct {
HeartbeatFile string `json:"heartbeatFile"`
StatusFolder string `json:"statusFolder"`
ConfigFolder string `json:"configFolder"`
LogFolder string `json:"logFolder"`
EventsFolder string `json:"eventsFolder"`
EventsFolderPreview string `json:"eventsFolder_preview"`
DeploymentID string `json:"deploymentid"`
RoleName string `json:"rolename"`
Instance string `json:"instance"`
HostResolverAddress string `json:"hostResolverAddress"`
}
}
// GetHandlerEnvironment locates the HandlerEnvironment.json file by assuming it lives
// next to or one level above the extension handler (read: this) executable,
// reads, parses and returns it.
func GetHandlerEnvironment(name, version string) (he *HandlerEnvironment, _ error) {
contents, _, err := findAndReadFile(handlerEnvFileName)
if err != nil {
return nil, err
}
handlerEnvInternal, err := parseHandlerEnv(contents)
if err != nil {
return nil, err
}
// TODO: before this API goes public, remove the eventsfolder_preview
// This is only used for private preview of the events
eventsFolder := handlerEnvInternal.HandlerEnvironment.EventsFolder
if eventsFolder == "" {
eventsFolder = handlerEnvInternal.HandlerEnvironment.EventsFolderPreview
}
dataFolder := utils.GetDataFolder(name, version)
return &HandlerEnvironment{
HeartbeatFile: handlerEnvInternal.HandlerEnvironment.HeartbeatFile,
StatusFolder: handlerEnvInternal.HandlerEnvironment.StatusFolder,
ConfigFolder: handlerEnvInternal.HandlerEnvironment.ConfigFolder,
LogFolder: handlerEnvInternal.HandlerEnvironment.LogFolder,
DataFolder: dataFolder,
EventsFolder: eventsFolder,
DeploymentID: handlerEnvInternal.HandlerEnvironment.DeploymentID,
RoleName: handlerEnvInternal.HandlerEnvironment.RoleName,
Instance: handlerEnvInternal.HandlerEnvironment.Instance,
HostResolverAddress: handlerEnvInternal.HandlerEnvironment.HostResolverAddress,
}, nil
}
// parseHandlerEnv parses the HandlerEnvironment.json format.
func parseHandlerEnv(b []byte) (*handlerEnvironmentInternal, error) {
var hf []handlerEnvironmentInternal
if err := json.Unmarshal(b, &hf); err != nil {
return nil, fmt.Errorf("vmextension: failed to parse handler env: %v", err)
}
if len(hf) != 1 {
return nil, fmt.Errorf("vmextension: expected 1 config in parsed HandlerEnvironment, found: %v", len(hf))
}
return &hf[0], nil
}
// findAndReadFile locates the specified file on disk relative to our currently
// executing process and attempts to read the file
func findAndReadFile(fileName string) (b []byte, fileLoc string, _ error) {
dir, err := utils.GetCurrentProcessWorkingDir()
if err != nil {
return nil, "", fmt.Errorf("vmextension: cannot find base directory of the running process: %v", err)
}
paths := []string{
filepath.Join(dir, fileName), // this level (i.e. executable is in [EXT_NAME]/.)
filepath.Join(dir, "..", fileName), // one up (i.e. executable is in [EXT_NAME]/bin/.)
}
for _, p := range paths {
o, err := ioutil.ReadFile(p)
if err != nil && !os.IsNotExist(err) {
return nil, "", fmt.Errorf("vmextension: error examining '%s' at '%s': %v", fileName, p, err)
} else if err == nil {
fileLoc = p
b = o
break
}
}
if b == nil {
return nil, "", extensionerrors.ErrNotFound
}
return b, fileLoc, nil
}

236
vendor/github.com/Azure/azure-extension-platform/pkg/logging/logging.go generated vendored Normal file
@@ -0,0 +1,236 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package logging
import (
"fmt"
"io"
"io/fs"
"io/ioutil"
"log"
"os"
"path"
"path/filepath"
"runtime/debug"
"sort"
"strconv"
"strings"
"time"
"github.com/Azure/azure-extension-platform/pkg/handlerenv"
)
const (
logLevelError = "Error "
logLevelWarning = "Warning "
logLevelInfo = "Info "
)
const (
thirtyMB = 30 * 1024 * 1024 // 31,457,280 bytes
fortyMB = 40 * 1024 * 1024 // 41,943,040 bytes
logDirThresholdLow = thirtyMB
logDirThresholdHigh = fortyMB
)
type StreamLogReader interface {
ErrorFromStream(prefix string, streamReader io.Reader)
WarnFromStream(prefix string, streamReader io.Reader)
InfoFromStream(prefix string, streamReader io.Reader)
}
// Target interface for Extension-Platform
type ILogger interface {
StreamLogReader
Error(format string, v ...interface{})
Warn(format string, v ...interface{})
Info(format string, v ...interface{})
Close()
}
// ExtensionLogger exposes logging capabilities to the extension.
// It automatically prepends a timestamp and log level to each message
// and ensures all logs are placed in the log folder passed by the agent
type ExtensionLogger struct {
errorLogger *log.Logger
infoLogger *log.Logger
warnLogger *log.Logger
file *os.File
}
// New creates a new logging instance. If the handlerEnvironment is nil, we'll use a
// standard output logger
func New(he *handlerenv.HandlerEnvironment) *ExtensionLogger {
return NewWithName(he, "")
}
// NewWithName creates a logging instance with a caller-specified log file name format.
// Supports cycling of logs to prevent filling up the disk
func NewWithName(he *handlerenv.HandlerEnvironment, logFileFormat string) *ExtensionLogger {
if he == nil {
return newStandardOutput()
}
if logFileFormat == "" {
logFileFormat = "log_%v"
}
// Rotate log folder to prevent filling up the disk
err := rotateLogFolder(he.LogFolder, logFileFormat)
if err != nil {
return newStandardOutput()
}
fileName := fmt.Sprintf(logFileFormat, strconv.FormatInt(time.Now().UTC().Unix(), 10))
filePath := path.Join(he.LogFolder, fileName)
writer, err := os.OpenFile(filePath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0666)
if err != nil {
return newStandardOutput()
}
return &ExtensionLogger{
errorLogger: log.New(writer, logLevelError, log.Ldate|log.Ltime|log.LUTC),
infoLogger: log.New(writer, logLevelInfo, log.Ldate|log.Ltime|log.LUTC),
warnLogger: log.New(writer, logLevelWarning, log.Ldate|log.Ltime|log.LUTC),
file: writer,
}
}
func GetCallStack() string {
return string(debug.Stack())
}
func newStandardOutput() *ExtensionLogger {
return &ExtensionLogger{
errorLogger: log.New(os.Stdout, logLevelError, 0),
infoLogger: log.New(os.Stdout, logLevelInfo, 0),
warnLogger: log.New(os.Stdout, logLevelWarning, 0),
file: nil,
}
}
// Close closes the file
func (logger *ExtensionLogger) Close() {
if logger.file != nil {
logger.file.Close()
}
}
// Error logs an error. Format is the same as fmt.Print
func (logger *ExtensionLogger) Error(format string, v ...interface{}) {
logger.errorLogger.Printf(format+"\n", v...)
logger.errorLogger.Printf(GetCallStack() + "\n")
}
// Warn logs a warning. Format is the same as fmt.Print
func (logger *ExtensionLogger) Warn(format string, v ...interface{}) {
logger.warnLogger.Printf(format+"\n", v...)
}
// Info logs an information statement. Format is the same as fmt.Print
func (logger *ExtensionLogger) Info(format string, v ...interface{}) {
logger.infoLogger.Printf(format+"\n", v...)
}
// Error logs an error. Get the message from a stream directly
func (logger *ExtensionLogger) ErrorFromStream(prefix string, streamReader io.Reader) {
logger.errorLogger.Print(prefix)
io.Copy(logger.errorLogger.Writer(), streamReader)
logger.errorLogger.Writer().Write([]byte(fmt.Sprintln())) // add a newline at the end of the stream contents
}
// Warn logs a warning. Get the message from a stream directly
func (logger *ExtensionLogger) WarnFromStream(prefix string, streamReader io.Reader) {
logger.warnLogger.Print(prefix)
io.Copy(logger.warnLogger.Writer(), streamReader)
logger.warnLogger.Writer().Write([]byte(fmt.Sprintln())) // add a newline at the end of the stream contents
}
// Info logs an information statement. Get the message from a stream directly
func (logger *ExtensionLogger) InfoFromStream(prefix string, streamReader io.Reader) {
logger.infoLogger.Print(prefix)
io.Copy(logger.infoLogger.Writer(), streamReader)
logger.infoLogger.Writer().Write([]byte(fmt.Sprintln())) // add a newline at the end of the stream contents
}
// getDirSize returns the total size in bytes of all files under dirPath
func getDirSize(dirPath string) (size int64, err error) {
err = filepath.Walk(dirPath, func(_ string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
size += info.Size()
}
return err
})
if err != nil {
err = fmt.Errorf("unable to compute directory size, error: %v", err)
}
return
}
// rotateLogFolder rotates log files in logFolder to avoid filling the customer's disk.
// File names are matched against the portion of logFileFormat preceding the '%' placeholder.
func rotateLogFolder(logFolder string, logFileFormat string) (err error) {
size, err := getDirSize(logFolder)
if err != nil {
return
}
// If directory size is still under high threshold value, nothing to do
if size < logDirThresholdHigh {
return
}
// Get all log files in logFolder.
// ReadDir returns entries sorted by filename; since log file names end in a
// Unix timestamp, that order roughly tracks file age, but we sort by
// modification time below to be certain.
var dirEntries []fs.FileInfo
dirEntries, err = ioutil.ReadDir(logFolder)
if err != nil {
err = fmt.Errorf("unable to read log folder, error: %v", err)
return
}
// Sort directory entries according to time (oldest to newest)
sort.Slice(dirEntries, func(idx1, idx2 int) bool {
return dirEntries[idx1].ModTime().Before(dirEntries[idx2].ModTime())
})
// Get log file name prefix
logFilePrefix := strings.Split(logFileFormat, "%")
for _, file := range dirEntries {
// Once directory size goes below lower threshold limit, stop deletion
if size < logDirThresholdLow {
break
}
// Skip directories
if file.IsDir() {
continue
}
// log file names are prefixed according to logFileFormat specified
if !strings.HasPrefix(file.Name(), logFilePrefix[0]) {
continue
}
// Delete the file
err = os.Remove(filepath.Join(logFolder, file.Name()))
if err != nil {
err = fmt.Errorf("unable to delete log files, error: %v", err)
return
}
// Subtract file size from total directory size
size = size - file.Size()
}
return
}

17
vendor/github.com/Azure/azure-extension-platform/pkg/utils/utils.go generated vendored Normal file
@@ -0,0 +1,17 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package utils
import (
"os"
"path/filepath"
)
// GetCurrentProcessWorkingDir returns the absolute path of the directory containing the running process's executable.
func GetCurrentProcessWorkingDir() (string, error) {
p, err := filepath.Abs(os.Args[0])
if err != nil {
return "", err
}
return filepath.Dir(p), nil
}

113
vendor/github.com/Azure/azure-extension-platform/pkg/utils/utils_linux.go generated vendored Normal file
@@ -0,0 +1,113 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package utils
import (
"errors"
"fmt"
"os"
"path"
"path/filepath"
"regexp"
"strconv"
)
// agentDir is where the agent is located, a subdirectory of which we use as the data directory
const agentDir = "/var/lib/waagent"
func GetDataFolder(name string, version string) string {
return path.Join(agentDir, name)
}
// TryClearRegexMatchingFilesExcept tries to clear files whose names match a regular expression, except the file named by the exceptFileName argument.
// If deleteFiles is true, matching files are deleted; otherwise they are truncated to zero length.
func TryClearRegexMatchingFilesExcept(directory string, regexFileNamePattern string,
exceptFileName string, deleteFiles bool) error {
if regexFileNamePattern == "" {
return errors.New("Empty regexFileNamePattern argument.")
}
// Check if the directory exists
directoryFDRef, err := os.Open(directory)
if err != nil {
return err
}
regex, err := regexp.Compile(regexFileNamePattern)
if err != nil {
return err
}
dirEntries, err := directoryFDRef.ReadDir(0)
if err == nil {
for _, dirEntry := range dirEntries {
fileName := dirEntry.Name()
if fileName != exceptFileName && regex.MatchString(fileName) {
fullFilePath := filepath.Join(directory, fileName)
if deleteFiles {
os.Remove(fullFilePath)
} else {
os.Truncate(fullFilePath, 0) // Truncate the existing file to zero length without deleting it
}
}
}
return nil
}
return err
}
// TryDeleteDirectoriesExcept tries to delete all directories in parentDirectory except the one named exceptDirectoryName
func TryDeleteDirectoriesExcept(parentDirectory string, exceptDirectoryName string) error {
// Check if the directory exists
directoryFDRef, err := os.Open(parentDirectory)
if err != nil {
return err
}
dirEntries, err := directoryFDRef.ReadDir(0)
if err == nil && dirEntries != nil {
for _, dirEntry := range dirEntries {
entryName := dirEntry.Name()
if dirEntry.IsDir() && entryName != exceptDirectoryName {
fullDirectoryPath := filepath.Join(parentDirectory, entryName)
os.RemoveAll(fullDirectoryPath)
}
}
return nil
}
return err
}
// TryClearExtensionScriptsDirectoriesAndSettingsFilesExceptMostRecent empties an extension's runtime settings files except the most recent, and deletes script directories except the most recent.
// runtimeSettingsRegexFormatWithAnyExtName - regex identifying all settings files, e.g. "\\d+.settings", "RunCommandName.\\d+.settings"
// runtimeSettingsLastSeqNumFormatWithAnyExtName - e.g. "%s.settings", "RunCommandName.%s.settings"
func TryClearExtensionScriptsDirectoriesAndSettingsFilesExceptMostRecent(scriptsDirectory string,
runtimeSettingsDirectory string,
extensionName string,
mostRecentSequenceNumberFinished uint64,
runtimeSettingsRegexFormatWithAnyExtName string,
runtimeSettingsLastSeqNumFormatWithAnyExtName string) error {
recentSeqNumberString := strconv.FormatUint(mostRecentSequenceNumberFinished, 10)
// Delete scripts belonging to previous sequence numbers.
err := TryDeleteDirectoriesExcept(filepath.Join(scriptsDirectory, extensionName), recentSeqNumberString)
if err != nil {
return err
}
mostRecentRuntimeSetting := fmt.Sprintf(runtimeSettingsLastSeqNumFormatWithAnyExtName, recentSeqNumberString)
// Empty Runtimesettings files belonging to previous sequence numbers.
err = TryClearRegexMatchingFilesExcept(runtimeSettingsDirectory,
runtimeSettingsRegexFormatWithAnyExtName,
mostRecentRuntimeSetting,
false)
if err != nil {
return err
}
return nil
}

13
vendor/github.com/Azure/azure-extension-platform/pkg/utils/utils_windows.go generated vendored Normal file
@@ -0,0 +1,13 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
package utils
import (
"os"
"path"
)
func GetDataFolder(name string, version string) string {
systemDriveFolder := os.Getenv("SystemDrive")
return path.Join(systemDriveFolder, "Packages\\Plugins", name, version, "Downloads")
}

17
vendor/github.com/cilium/ebpf/.clang-format generated vendored Normal file
@@ -0,0 +1,17 @@
---
Language: Cpp
BasedOnStyle: LLVM
AlignAfterOpenBracket: DontAlign
AlignConsecutiveAssignments: true
AlignEscapedNewlines: DontAlign
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortFunctionsOnASingleLine: false
BreakBeforeBraces: Attach
IndentWidth: 4
KeepEmptyLinesAtTheStartOfBlocks: false
TabWidth: 4
UseTab: ForContinuationAndIndentation
ColumnLimit: 1000
...

14
vendor/github.com/cilium/ebpf/.gitignore generated vendored Normal file
@@ -0,0 +1,14 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
*.o
!*_bpf*.o
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out

28
vendor/github.com/cilium/ebpf/.golangci.yaml generated vendored Normal file
@@ -0,0 +1,28 @@
---
issues:
exclude-rules:
# syscall param structs will have unused fields in Go code.
- path: syscall.*.go
linters:
- structcheck
linters:
disable-all: true
enable:
- deadcode
- errcheck
- goimports
- gosimple
- govet
- ineffassign
- misspell
- staticcheck
- structcheck
- typecheck
- unused
- varcheck
# Could be enabled later:
# - gocyclo
# - maligned
# - gosec

86
vendor/github.com/cilium/ebpf/ARCHITECTURE.md generated vendored Normal file
@@ -0,0 +1,86 @@
Architecture of the library
===
ELF -> Specifications -> Objects -> Links
ELF
---
BPF is usually produced by using Clang to compile a subset of C. Clang outputs
an ELF file which contains program byte code (aka BPF), but also metadata for
maps used by the program. The metadata follows the conventions set by libbpf
shipped with the kernel. Certain ELF sections have special meaning
and contain structures defined by libbpf. Newer versions of clang emit
additional metadata in BPF Type Format (aka BTF).
The library aims to be compatible with libbpf so that moving from a C toolchain
to a Go one creates little friction. To that end, the [ELF reader](elf_reader.go)
is tested against the Linux selftests and avoids introducing custom behaviour
if possible.
The output of the ELF reader is a `CollectionSpec` which encodes
all of the information contained in the ELF in a form that is easy to work with
in Go.
### BTF
The BPF Type Format describes more than just the types used by a BPF program. It
includes debug aids like which source line corresponds to which instructions and
what global variables are used.
[BTF parsing](internal/btf/) lives in a separate internal package since exposing
it would mean an additional maintenance burden, and because the API still
has sharp corners. The most important concept is the `btf.Type` interface, which
also describes things that aren't really types like `.rodata` or `.bss` sections.
`btf.Type`s can form cyclical graphs, which can easily lead to infinite loops if
one is not careful. Hopefully a safe pattern to work with `btf.Type` emerges as
we write more code that deals with it.
Specifications
---
`CollectionSpec`, `ProgramSpec` and `MapSpec` are blueprints for in-kernel
objects and contain everything necessary to execute the relevant `bpf(2)`
syscalls. Since the ELF reader outputs a `CollectionSpec` it's possible to
modify clang-compiled BPF code, for example to rewrite constants. At the same
time the [asm](asm/) package provides an assembler that can be used to generate
`ProgramSpec` on the fly.
Creating a spec should never require any privileges or be restricted in any way,
for example by only allowing programs in native endianness. This ensures that
the library stays flexible.
Objects
---
`Program` and `Map` are the result of loading specs into the kernel. Sometimes
loading a spec will fail because the kernel is too old, or a feature is not
enabled. There are multiple ways the library deals with that:
* Fallback: older kernels don't allow naming programs and maps. The library
automatically detects support for names, and omits them during load if
necessary. This works since name is primarily a debug aid.
* Sentinel error: sometimes it's possible to detect that a feature isn't available.
In that case the library will return an error wrapping `ErrNotSupported`.
This is also useful to skip tests that can't run on the current kernel.
Once program and map objects are loaded they expose the kernel's low-level API,
e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer
wrappers on top of the low-level API, like `MapIterator`. The low-level API is
useful when our higher-level API doesn't support a particular use case.
Links
---
BPF can be attached to many different points in the kernel and newer BPF hooks
tend to use bpf_link to do so. Older hooks unfortunately use a combination of
syscalls, netlink messages, etc. Adding support for a new link type should not
pull in large dependencies like netlink, so XDP programs or tracepoints are
out of scope.
Each bpf_link_type has one corresponding Go type, e.g. `link.tracing` corresponds
to BPF_LINK_TRACING. In general, these types should be unexported as long as they
don't export methods outside of the Link interface. Each Go type may have multiple
exported constructors. For example `AttachTracing` and `AttachLSM` create a
tracing link, but are distinct functions since they may require different arguments.

46
vendor/github.com/cilium/ebpf/CODE_OF_CONDUCT.md generated vendored Normal file
@@ -0,0 +1,46 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at nathanjsweet at gmail dot com or i at lmb dot io. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

40
vendor/github.com/cilium/ebpf/CONTRIBUTING.md generated vendored Normal file
@@ -0,0 +1,40 @@
# How to contribute
Development is on [GitHub](https://github.com/cilium/ebpf) and contributions in
the form of pull requests and issues reporting bugs or suggesting new features
are welcome. Please take a look at [the architecture](ARCHITECTURE.md) to get
a better understanding for the high-level goals.
New features must be accompanied by tests. Before starting work on any large
feature, please [join](https://ebpf.io/slack) the
[#ebpf-go](https://cilium.slack.com/messages/ebpf-go) channel on Slack to
discuss the design first.
When submitting pull requests, consider writing details about what problem you
are solving and why the proposed approach solves that problem in commit messages
and/or pull request description to help future library users and maintainers to
reason about the proposed changes.
## Running the tests
Many of the tests require privileges to set resource limits and load eBPF code.
The easiest way to obtain these is to run the tests with `sudo`.
To test the current package with your local kernel you can simply run:
```
go test -exec sudo ./...
```
To test the current package with a different kernel version you can use the [run-tests.sh](run-tests.sh) script.
It requires [virtme](https://github.com/amluto/virtme) and qemu to be installed.
Examples:
```bash
# Run all tests on a 5.4 kernel
./run-tests.sh 5.4
# Run a subset of tests:
./run-tests.sh 5.4 go test ./link
```

23
vendor/github.com/cilium/ebpf/LICENSE generated vendored Normal file
@@ -0,0 +1,23 @@
MIT License
Copyright (c) 2017 Nathan Sweet
Copyright (c) 2018, 2019 Cloudflare
Copyright (c) 2019 Authors of Cilium
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

8
vendor/github.com/cilium/ebpf/MAINTAINERS.md generated vendored Normal file
@@ -0,0 +1,8 @@
# Maintainers
* [Lorenz Bauer]
* [Timo Beckers] (Isovalent)
[Lorenz Bauer]: https://github.com/lmb
[Timo Beckers]: https://github.com/ti-mo

110
vendor/github.com/cilium/ebpf/Makefile generated vendored Normal file
@@ -0,0 +1,110 @@
# The development version of clang is distributed as the 'clang' binary,
# while stable/released versions have a version number attached.
# Pin the default clang to a stable version.
CLANG ?= clang-14
STRIP ?= llvm-strip-14
OBJCOPY ?= llvm-objcopy-14
CFLAGS := -O2 -g -Wall -Werror $(CFLAGS)
CI_KERNEL_URL ?= https://github.com/cilium/ci-kernels/raw/master/
# Obtain an absolute path to the directory of the Makefile.
# Assume the Makefile is in the root of the repository.
REPODIR := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
UIDGID := $(shell stat -c '%u:%g' ${REPODIR})
# Prefer podman if installed, otherwise use docker.
# Note: Setting the var at runtime will always override.
CONTAINER_ENGINE ?= $(if $(shell command -v podman), podman, docker)
CONTAINER_RUN_ARGS ?= $(if $(filter ${CONTAINER_ENGINE}, podman), --log-driver=none, --user "${UIDGID}")
IMAGE := $(shell cat ${REPODIR}/testdata/docker/IMAGE)
VERSION := $(shell cat ${REPODIR}/testdata/docker/VERSION)
# clang <8 doesn't tag relocs properly (STT_NOTYPE)
# clang 9 is the first version emitting BTF
TARGETS := \
testdata/loader-clang-7 \
testdata/loader-clang-9 \
testdata/loader-$(CLANG) \
testdata/btf_map_init \
testdata/invalid_map \
testdata/raw_tracepoint \
testdata/invalid_map_static \
testdata/invalid_btf_map_init \
testdata/strings \
testdata/freplace \
testdata/iproute2_map_compat \
testdata/map_spin_lock \
testdata/subprog_reloc \
testdata/fwd_decl \
btf/testdata/relocs \
btf/testdata/relocs_read \
btf/testdata/relocs_read_tgt
.PHONY: all clean container-all container-shell generate
.DEFAULT_TARGET = container-all
# Build all ELF binaries using a containerized LLVM toolchain.
container-all:
${CONTAINER_ENGINE} run --rm ${CONTAINER_RUN_ARGS} \
-v "${REPODIR}":/ebpf -w /ebpf --env MAKEFLAGS \
--env CFLAGS="-fdebug-prefix-map=/ebpf=." \
--env HOME="/tmp" \
"${IMAGE}:${VERSION}" \
$(MAKE) all
# (debug) Drop the user into a shell inside the container as root.
container-shell:
${CONTAINER_ENGINE} run --rm -ti \
-v "${REPODIR}":/ebpf -w /ebpf \
"${IMAGE}:${VERSION}"
clean:
-$(RM) testdata/*.elf
-$(RM) btf/testdata/*.elf
format:
find . -type f -name "*.c" | xargs clang-format -i
all: format $(addsuffix -el.elf,$(TARGETS)) $(addsuffix -eb.elf,$(TARGETS)) generate
ln -srf testdata/loader-$(CLANG)-el.elf testdata/loader-el.elf
ln -srf testdata/loader-$(CLANG)-eb.elf testdata/loader-eb.elf
# $BPF_CLANG is used in go:generate invocations.
generate: export BPF_CLANG := $(CLANG)
generate: export BPF_CFLAGS := $(CFLAGS)
generate:
go generate ./cmd/bpf2go/test
go generate ./internal/sys
cd examples/ && go generate ./...
testdata/loader-%-el.elf: testdata/loader.c
$* $(CFLAGS) -target bpfel -c $< -o $@
$(STRIP) -g $@
testdata/loader-%-eb.elf: testdata/loader.c
$* $(CFLAGS) -target bpfeb -c $< -o $@
$(STRIP) -g $@
%-el.elf: %.c
$(CLANG) $(CFLAGS) -target bpfel -c $< -o $@
$(STRIP) -g $@
%-eb.elf : %.c
$(CLANG) $(CFLAGS) -target bpfeb -c $< -o $@
$(STRIP) -g $@
.PHONY: generate-btf
generate-btf: KERNEL_VERSION?=5.18
generate-btf:
$(eval TMP := $(shell mktemp -d))
curl -fL "$(CI_KERNEL_URL)/linux-$(KERNEL_VERSION).bz" -o "$(TMP)/bzImage"
./testdata/extract-vmlinux "$(TMP)/bzImage" > "$(TMP)/vmlinux"
$(OBJCOPY) --dump-section .BTF=/dev/stdout "$(TMP)/vmlinux" /dev/null | gzip > "btf/testdata/vmlinux.btf.gz"
curl -fL "$(CI_KERNEL_URL)/linux-$(KERNEL_VERSION)-selftests-bpf.tgz" -o "$(TMP)/selftests.tgz"
tar -xf "$(TMP)/selftests.tgz" --to-stdout tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.ko | \
$(OBJCOPY) --dump-section .BTF="btf/testdata/btf_testmod.btf" - /dev/null
$(RM) -r "$(TMP)"

77
vendor/github.com/cilium/ebpf/README.md generated vendored Normal file
@@ -0,0 +1,77 @@
# eBPF
[![PkgGoDev](https://pkg.go.dev/badge/github.com/cilium/ebpf)](https://pkg.go.dev/github.com/cilium/ebpf)
![HoneyGopher](.github/images/cilium-ebpf.png)
eBPF is a pure Go library that provides utilities for loading, compiling, and
debugging eBPF programs. It has minimal external dependencies and is intended to
be used in long running processes.
The library is maintained by [Cloudflare](https://www.cloudflare.com) and
[Cilium](https://www.cilium.io).
See [ebpf.io](https://ebpf.io) for other projects from the eBPF ecosystem.
## Getting Started
A small collection of Go and eBPF programs that serve as examples for building
your own tools can be found under [examples/](examples/).
Contributions are highly encouraged, as they highlight certain use cases of
eBPF and the library, and help shape the future of the project.
## Getting Help
Please
[join](https://ebpf.io/slack) the
[#ebpf-go](https://cilium.slack.com/messages/ebpf-go) channel on Slack if you
have questions regarding the library.
## Packages
This library includes the following packages:
* [asm](https://pkg.go.dev/github.com/cilium/ebpf/asm) contains a basic
assembler, allowing you to write eBPF assembly instructions directly
within your Go code. (You don't need to use this if you prefer to write your eBPF program in C.)
* [cmd/bpf2go](https://pkg.go.dev/github.com/cilium/ebpf/cmd/bpf2go) allows
compiling and embedding eBPF programs written in C within Go code. As well as
compiling the C code, it auto-generates Go code for loading and manipulating
the eBPF program and map objects.
* [link](https://pkg.go.dev/github.com/cilium/ebpf/link) allows attaching eBPF
programs to various kernel hooks
* [perf](https://pkg.go.dev/github.com/cilium/ebpf/perf) allows reading from a
`PERF_EVENT_ARRAY`
* [ringbuf](https://pkg.go.dev/github.com/cilium/ebpf/ringbuf) allows reading from a
`BPF_MAP_TYPE_RINGBUF` map
* [features](https://pkg.go.dev/github.com/cilium/ebpf/features) implements the equivalent
of `bpftool feature probe` for discovering BPF-related kernel features using native Go.
* [rlimit](https://pkg.go.dev/github.com/cilium/ebpf/rlimit) provides a convenient API to lift
the `RLIMIT_MEMLOCK` constraint on kernels before 5.11.
## Requirements
* A version of Go that is [supported by
upstream](https://golang.org/doc/devel/release.html#policy)
* Linux >= 4.9. CI is run against kernel.org LTS releases. 4.4 should work but is
not tested.
## Regenerating Testdata
Run `make` in the root of this repository to rebuild testdata in all
subpackages. This requires Docker, as it relies on a standardized build
environment to keep the build output stable.
It is possible to regenerate data using Podman by overriding the `CONTAINER_*`
variables: `CONTAINER_ENGINE=podman CONTAINER_RUN_ARGS= make`.
The toolchain image build files are kept in [testdata/docker/](testdata/docker/).
## License
MIT
### eBPF Gopher
The eBPF honeygopher is based on the Go gopher designed by Renee French.

vendor/github.com/cilium/ebpf/asm/alu.go (generated, vendored; new file, 149 lines)
@@ -0,0 +1,149 @@
package asm
//go:generate stringer -output alu_string.go -type=Source,Endianness,ALUOp
// Source of ALU / ALU64 / Branch operations
//
// msb lsb
// +----+-+---+
// |op |S|cls|
// +----+-+---+
type Source uint8
const sourceMask OpCode = 0x08
// Source bitmask
const (
// InvalidSource is returned by getters when invoked
// on non ALU / branch OpCodes.
InvalidSource Source = 0xff
// ImmSource src is from constant
ImmSource Source = 0x00
// RegSource src is from register
RegSource Source = 0x08
)
// The Endianness of a byte swap instruction.
type Endianness uint8
const endianMask = sourceMask
// Endian flags
const (
InvalidEndian Endianness = 0xff
// Convert to little endian
LE Endianness = 0x00
// Convert to big endian
BE Endianness = 0x08
)
// ALUOp are ALU / ALU64 operations
//
// msb lsb
// +----+-+---+
// |OP |s|cls|
// +----+-+---+
type ALUOp uint8
const aluMask OpCode = 0xf0
const (
// InvalidALUOp is returned by getters when invoked
// on non ALU OpCodes
InvalidALUOp ALUOp = 0xff
// Add - addition
Add ALUOp = 0x00
// Sub - subtraction
Sub ALUOp = 0x10
// Mul - multiplication
Mul ALUOp = 0x20
// Div - division
Div ALUOp = 0x30
// Or - bitwise or
Or ALUOp = 0x40
// And - bitwise and
And ALUOp = 0x50
// LSh - bitwise shift left
LSh ALUOp = 0x60
// RSh - bitwise shift right
RSh ALUOp = 0x70
// Neg - negation
Neg ALUOp = 0x80
// Mod - modulo
Mod ALUOp = 0x90
// Xor - bitwise xor
Xor ALUOp = 0xa0
// Mov - move value from one place to another
Mov ALUOp = 0xb0
// ArSh - arithmetic shift
ArSh ALUOp = 0xc0
// Swap - endian conversions
Swap ALUOp = 0xd0
)
// HostTo converts from host to another endianness.
func HostTo(endian Endianness, dst Register, size Size) Instruction {
var imm int64
switch size {
case Half:
imm = 16
case Word:
imm = 32
case DWord:
imm = 64
default:
return Instruction{OpCode: InvalidOpCode}
}
return Instruction{
OpCode: OpCode(ALUClass).SetALUOp(Swap).SetSource(Source(endian)),
Dst: dst,
Constant: imm,
}
}
// Op returns the OpCode for an ALU operation with a given source.
func (op ALUOp) Op(source Source) OpCode {
return OpCode(ALU64Class).SetALUOp(op).SetSource(source)
}
// Reg emits `dst (op) src`.
func (op ALUOp) Reg(dst, src Register) Instruction {
return Instruction{
OpCode: op.Op(RegSource),
Dst: dst,
Src: src,
}
}
// Imm emits `dst (op) value`.
func (op ALUOp) Imm(dst Register, value int32) Instruction {
return Instruction{
OpCode: op.Op(ImmSource),
Dst: dst,
Constant: int64(value),
}
}
// Op32 returns the OpCode for a 32-bit ALU operation with a given source.
func (op ALUOp) Op32(source Source) OpCode {
return OpCode(ALUClass).SetALUOp(op).SetSource(source)
}
// Reg32 emits `dst (op) src`, zeroing the upper 32 bit of dst.
func (op ALUOp) Reg32(dst, src Register) Instruction {
return Instruction{
OpCode: op.Op32(RegSource),
Dst: dst,
Src: src,
}
}
// Imm32 emits `dst (op) value`, zeroing the upper 32 bit of dst.
func (op ALUOp) Imm32(dst Register, value int32) Instruction {
return Instruction{
OpCode: op.Op32(ImmSource),
Dst: dst,
Constant: int64(value),
}
}

vendor/github.com/cilium/ebpf/asm/alu_string.go (generated, vendored; new file, 107 lines)
@@ -0,0 +1,107 @@
// Code generated by "stringer -output alu_string.go -type=Source,Endianness,ALUOp"; DO NOT EDIT.
package asm
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[InvalidSource-255]
_ = x[ImmSource-0]
_ = x[RegSource-8]
}
const (
_Source_name_0 = "ImmSource"
_Source_name_1 = "RegSource"
_Source_name_2 = "InvalidSource"
)
func (i Source) String() string {
switch {
case i == 0:
return _Source_name_0
case i == 8:
return _Source_name_1
case i == 255:
return _Source_name_2
default:
return "Source(" + strconv.FormatInt(int64(i), 10) + ")"
}
}
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[InvalidEndian-255]
_ = x[LE-0]
_ = x[BE-8]
}
const (
_Endianness_name_0 = "LE"
_Endianness_name_1 = "BE"
_Endianness_name_2 = "InvalidEndian"
)
func (i Endianness) String() string {
switch {
case i == 0:
return _Endianness_name_0
case i == 8:
return _Endianness_name_1
case i == 255:
return _Endianness_name_2
default:
return "Endianness(" + strconv.FormatInt(int64(i), 10) + ")"
}
}
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[InvalidALUOp-255]
_ = x[Add-0]
_ = x[Sub-16]
_ = x[Mul-32]
_ = x[Div-48]
_ = x[Or-64]
_ = x[And-80]
_ = x[LSh-96]
_ = x[RSh-112]
_ = x[Neg-128]
_ = x[Mod-144]
_ = x[Xor-160]
_ = x[Mov-176]
_ = x[ArSh-192]
_ = x[Swap-208]
}
const _ALUOp_name = "AddSubMulDivOrAndLShRShNegModXorMovArShSwapInvalidALUOp"
var _ALUOp_map = map[ALUOp]string{
0: _ALUOp_name[0:3],
16: _ALUOp_name[3:6],
32: _ALUOp_name[6:9],
48: _ALUOp_name[9:12],
64: _ALUOp_name[12:14],
80: _ALUOp_name[14:17],
96: _ALUOp_name[17:20],
112: _ALUOp_name[20:23],
128: _ALUOp_name[23:26],
144: _ALUOp_name[26:29],
160: _ALUOp_name[29:32],
176: _ALUOp_name[32:35],
192: _ALUOp_name[35:39],
208: _ALUOp_name[39:43],
255: _ALUOp_name[43:55],
}
func (i ALUOp) String() string {
if str, ok := _ALUOp_map[i]; ok {
return str
}
return "ALUOp(" + strconv.FormatInt(int64(i), 10) + ")"
}

vendor/github.com/cilium/ebpf/asm/doc.go (generated, vendored; new file, 2 lines)
@@ -0,0 +1,2 @@
// Package asm is an assembler for eBPF bytecode.
package asm

vendor/github.com/cilium/ebpf/asm/func.go (generated, vendored; new file, 242 lines)
@@ -0,0 +1,242 @@
package asm
//go:generate stringer -output func_string.go -type=BuiltinFunc
// BuiltinFunc is a built-in eBPF function.
type BuiltinFunc int32
func (_ BuiltinFunc) Max() BuiltinFunc {
return maxBuiltinFunc - 1
}
// eBPF built-in functions
//
// You can regenerate this list using the following gawk script:
//
// /FN\(.+\),/ {
// match($1, /\((.+)\)/, r)
// split(r[1], p, "_")
// printf "Fn"
// for (i in p) {
// printf "%s%s", toupper(substr(p[i], 1, 1)), substr(p[i], 2)
// }
// print ""
// }
//
// The script expects include/uapi/linux/bpf.h as its input.
const (
FnUnspec BuiltinFunc = iota
FnMapLookupElem
FnMapUpdateElem
FnMapDeleteElem
FnProbeRead
FnKtimeGetNs
FnTracePrintk
FnGetPrandomU32
FnGetSmpProcessorId
FnSkbStoreBytes
FnL3CsumReplace
FnL4CsumReplace
FnTailCall
FnCloneRedirect
FnGetCurrentPidTgid
FnGetCurrentUidGid
FnGetCurrentComm
FnGetCgroupClassid
FnSkbVlanPush
FnSkbVlanPop
FnSkbGetTunnelKey
FnSkbSetTunnelKey
FnPerfEventRead
FnRedirect
FnGetRouteRealm
FnPerfEventOutput
FnSkbLoadBytes
FnGetStackid
FnCsumDiff
FnSkbGetTunnelOpt
FnSkbSetTunnelOpt
FnSkbChangeProto
FnSkbChangeType
FnSkbUnderCgroup
FnGetHashRecalc
FnGetCurrentTask
FnProbeWriteUser
FnCurrentTaskUnderCgroup
FnSkbChangeTail
FnSkbPullData
FnCsumUpdate
FnSetHashInvalid
FnGetNumaNodeId
FnSkbChangeHead
FnXdpAdjustHead
FnProbeReadStr
FnGetSocketCookie
FnGetSocketUid
FnSetHash
FnSetsockopt
FnSkbAdjustRoom
FnRedirectMap
FnSkRedirectMap
FnSockMapUpdate
FnXdpAdjustMeta
FnPerfEventReadValue
FnPerfProgReadValue
FnGetsockopt
FnOverrideReturn
FnSockOpsCbFlagsSet
FnMsgRedirectMap
FnMsgApplyBytes
FnMsgCorkBytes
FnMsgPullData
FnBind
FnXdpAdjustTail
FnSkbGetXfrmState
FnGetStack
FnSkbLoadBytesRelative
FnFibLookup
FnSockHashUpdate
FnMsgRedirectHash
FnSkRedirectHash
FnLwtPushEncap
FnLwtSeg6StoreBytes
FnLwtSeg6AdjustSrh
FnLwtSeg6Action
FnRcRepeat
FnRcKeydown
FnSkbCgroupId
FnGetCurrentCgroupId
FnGetLocalStorage
FnSkSelectReuseport
FnSkbAncestorCgroupId
FnSkLookupTcp
FnSkLookupUdp
FnSkRelease
FnMapPushElem
FnMapPopElem
FnMapPeekElem
FnMsgPushData
FnMsgPopData
FnRcPointerRel
FnSpinLock
FnSpinUnlock
FnSkFullsock
FnTcpSock
FnSkbEcnSetCe
FnGetListenerSock
FnSkcLookupTcp
FnTcpCheckSyncookie
FnSysctlGetName
FnSysctlGetCurrentValue
FnSysctlGetNewValue
FnSysctlSetNewValue
FnStrtol
FnStrtoul
FnSkStorageGet
FnSkStorageDelete
FnSendSignal
FnTcpGenSyncookie
FnSkbOutput
FnProbeReadUser
FnProbeReadKernel
FnProbeReadUserStr
FnProbeReadKernelStr
FnTcpSendAck
FnSendSignalThread
FnJiffies64
FnReadBranchRecords
FnGetNsCurrentPidTgid
FnXdpOutput
FnGetNetnsCookie
FnGetCurrentAncestorCgroupId
FnSkAssign
FnKtimeGetBootNs
FnSeqPrintf
FnSeqWrite
FnSkCgroupId
FnSkAncestorCgroupId
FnRingbufOutput
FnRingbufReserve
FnRingbufSubmit
FnRingbufDiscard
FnRingbufQuery
FnCsumLevel
FnSkcToTcp6Sock
FnSkcToTcpSock
FnSkcToTcpTimewaitSock
FnSkcToTcpRequestSock
FnSkcToUdp6Sock
FnGetTaskStack
FnLoadHdrOpt
FnStoreHdrOpt
FnReserveHdrOpt
FnInodeStorageGet
FnInodeStorageDelete
FnDPath
FnCopyFromUser
FnSnprintfBtf
FnSeqPrintfBtf
FnSkbCgroupClassid
FnRedirectNeigh
FnPerCpuPtr
FnThisCpuPtr
FnRedirectPeer
FnTaskStorageGet
FnTaskStorageDelete
FnGetCurrentTaskBtf
FnBprmOptsSet
FnKtimeGetCoarseNs
FnImaInodeHash
FnSockFromFile
FnCheckMtu
FnForEachMapElem
FnSnprintf
FnSysBpf
FnBtfFindByNameKind
FnSysClose
FnTimerInit
FnTimerSetCallback
FnTimerStart
FnTimerCancel
FnGetFuncIp
FnGetAttachCookie
FnTaskPtRegs
FnGetBranchSnapshot
FnTraceVprintk
FnSkcToUnixSock
FnKallsymsLookupName
FnFindVma
FnLoop
FnStrncmp
FnGetFuncArg
FnGetFuncRet
FnGetFuncArgCnt
FnGetRetval
FnSetRetval
FnXdpGetBuffLen
FnXdpLoadBytes
FnXdpStoreBytes
FnCopyFromUserTask
FnSkbSetTstamp
FnImaFileHash
FnKptrXchg
FnMapLookupPercpuElem
FnSkcToMptcpSock
FnDynptrFromMem
FnRingbufReserveDynptr
FnRingbufSubmitDynptr
FnRingbufDiscardDynptr
FnDynptrRead
FnDynptrWrite
FnDynptrData
maxBuiltinFunc
)
// Call emits a function call.
func (fn BuiltinFunc) Call() Instruction {
return Instruction{
OpCode: OpCode(JumpClass).SetJumpOp(Call),
Constant: int64(fn),
}
}

vendor/github.com/cilium/ebpf/asm/func_string.go (generated, vendored; new file, 227 lines)
@@ -0,0 +1,227 @@
// Code generated by "stringer -output func_string.go -type=BuiltinFunc"; DO NOT EDIT.
package asm
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[FnUnspec-0]
_ = x[FnMapLookupElem-1]
_ = x[FnMapUpdateElem-2]
_ = x[FnMapDeleteElem-3]
_ = x[FnProbeRead-4]
_ = x[FnKtimeGetNs-5]
_ = x[FnTracePrintk-6]
_ = x[FnGetPrandomU32-7]
_ = x[FnGetSmpProcessorId-8]
_ = x[FnSkbStoreBytes-9]
_ = x[FnL3CsumReplace-10]
_ = x[FnL4CsumReplace-11]
_ = x[FnTailCall-12]
_ = x[FnCloneRedirect-13]
_ = x[FnGetCurrentPidTgid-14]
_ = x[FnGetCurrentUidGid-15]
_ = x[FnGetCurrentComm-16]
_ = x[FnGetCgroupClassid-17]
_ = x[FnSkbVlanPush-18]
_ = x[FnSkbVlanPop-19]
_ = x[FnSkbGetTunnelKey-20]
_ = x[FnSkbSetTunnelKey-21]
_ = x[FnPerfEventRead-22]
_ = x[FnRedirect-23]
_ = x[FnGetRouteRealm-24]
_ = x[FnPerfEventOutput-25]
_ = x[FnSkbLoadBytes-26]
_ = x[FnGetStackid-27]
_ = x[FnCsumDiff-28]
_ = x[FnSkbGetTunnelOpt-29]
_ = x[FnSkbSetTunnelOpt-30]
_ = x[FnSkbChangeProto-31]
_ = x[FnSkbChangeType-32]
_ = x[FnSkbUnderCgroup-33]
_ = x[FnGetHashRecalc-34]
_ = x[FnGetCurrentTask-35]
_ = x[FnProbeWriteUser-36]
_ = x[FnCurrentTaskUnderCgroup-37]
_ = x[FnSkbChangeTail-38]
_ = x[FnSkbPullData-39]
_ = x[FnCsumUpdate-40]
_ = x[FnSetHashInvalid-41]
_ = x[FnGetNumaNodeId-42]
_ = x[FnSkbChangeHead-43]
_ = x[FnXdpAdjustHead-44]
_ = x[FnProbeReadStr-45]
_ = x[FnGetSocketCookie-46]
_ = x[FnGetSocketUid-47]
_ = x[FnSetHash-48]
_ = x[FnSetsockopt-49]
_ = x[FnSkbAdjustRoom-50]
_ = x[FnRedirectMap-51]
_ = x[FnSkRedirectMap-52]
_ = x[FnSockMapUpdate-53]
_ = x[FnXdpAdjustMeta-54]
_ = x[FnPerfEventReadValue-55]
_ = x[FnPerfProgReadValue-56]
_ = x[FnGetsockopt-57]
_ = x[FnOverrideReturn-58]
_ = x[FnSockOpsCbFlagsSet-59]
_ = x[FnMsgRedirectMap-60]
_ = x[FnMsgApplyBytes-61]
_ = x[FnMsgCorkBytes-62]
_ = x[FnMsgPullData-63]
_ = x[FnBind-64]
_ = x[FnXdpAdjustTail-65]
_ = x[FnSkbGetXfrmState-66]
_ = x[FnGetStack-67]
_ = x[FnSkbLoadBytesRelative-68]
_ = x[FnFibLookup-69]
_ = x[FnSockHashUpdate-70]
_ = x[FnMsgRedirectHash-71]
_ = x[FnSkRedirectHash-72]
_ = x[FnLwtPushEncap-73]
_ = x[FnLwtSeg6StoreBytes-74]
_ = x[FnLwtSeg6AdjustSrh-75]
_ = x[FnLwtSeg6Action-76]
_ = x[FnRcRepeat-77]
_ = x[FnRcKeydown-78]
_ = x[FnSkbCgroupId-79]
_ = x[FnGetCurrentCgroupId-80]
_ = x[FnGetLocalStorage-81]
_ = x[FnSkSelectReuseport-82]
_ = x[FnSkbAncestorCgroupId-83]
_ = x[FnSkLookupTcp-84]
_ = x[FnSkLookupUdp-85]
_ = x[FnSkRelease-86]
_ = x[FnMapPushElem-87]
_ = x[FnMapPopElem-88]
_ = x[FnMapPeekElem-89]
_ = x[FnMsgPushData-90]
_ = x[FnMsgPopData-91]
_ = x[FnRcPointerRel-92]
_ = x[FnSpinLock-93]
_ = x[FnSpinUnlock-94]
_ = x[FnSkFullsock-95]
_ = x[FnTcpSock-96]
_ = x[FnSkbEcnSetCe-97]
_ = x[FnGetListenerSock-98]
_ = x[FnSkcLookupTcp-99]
_ = x[FnTcpCheckSyncookie-100]
_ = x[FnSysctlGetName-101]
_ = x[FnSysctlGetCurrentValue-102]
_ = x[FnSysctlGetNewValue-103]
_ = x[FnSysctlSetNewValue-104]
_ = x[FnStrtol-105]
_ = x[FnStrtoul-106]
_ = x[FnSkStorageGet-107]
_ = x[FnSkStorageDelete-108]
_ = x[FnSendSignal-109]
_ = x[FnTcpGenSyncookie-110]
_ = x[FnSkbOutput-111]
_ = x[FnProbeReadUser-112]
_ = x[FnProbeReadKernel-113]
_ = x[FnProbeReadUserStr-114]
_ = x[FnProbeReadKernelStr-115]
_ = x[FnTcpSendAck-116]
_ = x[FnSendSignalThread-117]
_ = x[FnJiffies64-118]
_ = x[FnReadBranchRecords-119]
_ = x[FnGetNsCurrentPidTgid-120]
_ = x[FnXdpOutput-121]
_ = x[FnGetNetnsCookie-122]
_ = x[FnGetCurrentAncestorCgroupId-123]
_ = x[FnSkAssign-124]
_ = x[FnKtimeGetBootNs-125]
_ = x[FnSeqPrintf-126]
_ = x[FnSeqWrite-127]
_ = x[FnSkCgroupId-128]
_ = x[FnSkAncestorCgroupId-129]
_ = x[FnRingbufOutput-130]
_ = x[FnRingbufReserve-131]
_ = x[FnRingbufSubmit-132]
_ = x[FnRingbufDiscard-133]
_ = x[FnRingbufQuery-134]
_ = x[FnCsumLevel-135]
_ = x[FnSkcToTcp6Sock-136]
_ = x[FnSkcToTcpSock-137]
_ = x[FnSkcToTcpTimewaitSock-138]
_ = x[FnSkcToTcpRequestSock-139]
_ = x[FnSkcToUdp6Sock-140]
_ = x[FnGetTaskStack-141]
_ = x[FnLoadHdrOpt-142]
_ = x[FnStoreHdrOpt-143]
_ = x[FnReserveHdrOpt-144]
_ = x[FnInodeStorageGet-145]
_ = x[FnInodeStorageDelete-146]
_ = x[FnDPath-147]
_ = x[FnCopyFromUser-148]
_ = x[FnSnprintfBtf-149]
_ = x[FnSeqPrintfBtf-150]
_ = x[FnSkbCgroupClassid-151]
_ = x[FnRedirectNeigh-152]
_ = x[FnPerCpuPtr-153]
_ = x[FnThisCpuPtr-154]
_ = x[FnRedirectPeer-155]
_ = x[FnTaskStorageGet-156]
_ = x[FnTaskStorageDelete-157]
_ = x[FnGetCurrentTaskBtf-158]
_ = x[FnBprmOptsSet-159]
_ = x[FnKtimeGetCoarseNs-160]
_ = x[FnImaInodeHash-161]
_ = x[FnSockFromFile-162]
_ = x[FnCheckMtu-163]
_ = x[FnForEachMapElem-164]
_ = x[FnSnprintf-165]
_ = x[FnSysBpf-166]
_ = x[FnBtfFindByNameKind-167]
_ = x[FnSysClose-168]
_ = x[FnTimerInit-169]
_ = x[FnTimerSetCallback-170]
_ = x[FnTimerStart-171]
_ = x[FnTimerCancel-172]
_ = x[FnGetFuncIp-173]
_ = x[FnGetAttachCookie-174]
_ = x[FnTaskPtRegs-175]
_ = x[FnGetBranchSnapshot-176]
_ = x[FnTraceVprintk-177]
_ = x[FnSkcToUnixSock-178]
_ = x[FnKallsymsLookupName-179]
_ = x[FnFindVma-180]
_ = x[FnLoop-181]
_ = x[FnStrncmp-182]
_ = x[FnGetFuncArg-183]
_ = x[FnGetFuncRet-184]
_ = x[FnGetFuncArgCnt-185]
_ = x[FnGetRetval-186]
_ = x[FnSetRetval-187]
_ = x[FnXdpGetBuffLen-188]
_ = x[FnXdpLoadBytes-189]
_ = x[FnXdpStoreBytes-190]
_ = x[FnCopyFromUserTask-191]
_ = x[FnSkbSetTstamp-192]
_ = x[FnImaFileHash-193]
_ = x[FnKptrXchg-194]
_ = x[FnMapLookupPercpuElem-195]
_ = x[FnSkcToMptcpSock-196]
_ = x[FnDynptrFromMem-197]
_ = x[FnRingbufReserveDynptr-198]
_ = x[FnRingbufSubmitDynptr-199]
_ = x[FnRingbufDiscardDynptr-200]
_ = x[FnDynptrRead-201]
_ = x[FnDynptrWrite-202]
_ = x[FnDynptrData-203]
_ = x[maxBuiltinFunc-204]
}
const _BuiltinFunc_name = "FnUnspecFnMapLookupElemFnMapUpdateElemFnMapDeleteElemFnProbeReadFnKtimeGetNsFnTracePrintkFnGetPrandomU32FnGetSmpProcessorIdFnSkbStoreBytesFnL3CsumReplaceFnL4CsumReplaceFnTailCallFnCloneRedirectFnGetCurrentPidTgidFnGetCurrentUidGidFnGetCurrentCommFnGetCgroupClassidFnSkbVlanPushFnSkbVlanPopFnSkbGetTunnelKeyFnSkbSetTunnelKeyFnPerfEventReadFnRedirectFnGetRouteRealmFnPerfEventOutputFnSkbLoadBytesFnGetStackidFnCsumDiffFnSkbGetTunnelOptFnSkbSetTunnelOptFnSkbChangeProtoFnSkbChangeTypeFnSkbUnderCgroupFnGetHashRecalcFnGetCurrentTaskFnProbeWriteUserFnCurrentTaskUnderCgroupFnSkbChangeTailFnSkbPullDataFnCsumUpdateFnSetHashInvalidFnGetNumaNodeIdFnSkbChangeHeadFnXdpAdjustHeadFnProbeReadStrFnGetSocketCookieFnGetSocketUidFnSetHashFnSetsockoptFnSkbAdjustRoomFnRedirectMapFnSkRedirectMapFnSockMapUpdateFnXdpAdjustMetaFnPerfEventReadValueFnPerfProgReadValueFnGetsockoptFnOverrideReturnFnSockOpsCbFlagsSetFnMsgRedirectMapFnMsgApplyBytesFnMsgCorkBytesFnMsgPullDataFnBindFnXdpAdjustTailFnSkbGetXfrmStateFnGetStackFnSkbLoadBytesRelativeFnFibLookupFnSockHashUpdateFnMsgRedirectHashFnSkRedirectHashFnLwtPushEncapFnLwtSeg6StoreBytesFnLwtSeg6AdjustSrhFnLwtSeg6ActionFnRcRepeatFnRcKeydownFnSkbCgroupIdFnGetCurrentCgroupIdFnGetLocalStorageFnSkSelectReuseportFnSkbAncestorCgroupIdFnSkLookupTcpFnSkLookupUdpFnSkReleaseFnMapPushElemFnMapPopElemFnMapPeekElemFnMsgPushDataFnMsgPopDataFnRcPointerRelFnSpinLockFnSpinUnlockFnSkFullsockFnTcpSockFnSkbEcnSetCeFnGetListenerSockFnSkcLookupTcpFnTcpCheckSyncookieFnSysctlGetNameFnSysctlGetCurrentValueFnSysctlGetNewValueFnSysctlSetNewValueFnStrtolFnStrtoulFnSkStorageGetFnSkStorageDeleteFnSendSignalFnTcpGenSyncookieFnSkbOutputFnProbeReadUserFnProbeReadKernelFnProbeReadUserStrFnProbeReadKernelStrFnTcpSendAckFnSendSignalThreadFnJiffies64FnReadBranchRecordsFnGetNsCurrentPidTgidFnXdpOutputFnGetNetnsCookieFnGetCurrentAncestorCgroupIdFnSkAssignFnKtimeGetBootNsFnSeqPrintfFnSeqWriteFnSkCgroupIdFnSkAncestorCgroupIdFnRingbufOutputFnRingbufReserveFnRingbufSubmitFnRingbufDiscardFnRingbufQueryFnCsumLevelFnSkcToTcp6SockFnSkcToTcpSockFnSkcToTcpTimewaitSockFnSkcToTcpRequestSockFnSkcToUdp6SockFnGetTaskStackFnLoadHdrOptFnStoreHdrOptFnReserveHdrOptFnInodeStorageGetFnInodeStorageDeleteFnDPathFnCopyFromUserFnSnprintfBtfFnSeqPrintfBtfFnSkbCgroupClassidFnRedirectNeighFnPerCpuPtrFnThisCpuPtrFnRedirectPeerFnTaskStorageGetFnTaskStorageDeleteFnGetCurrentTaskBtfFnBprmOptsSetFnKtimeGetCoarseNsFnImaInodeHashFnSockFromFileFnCheckMtuFnForEachMapElemFnSnprintfFnSysBpfFnBtfFindByNameKindFnSysCloseFnTimerInitFnTimerSetCallbackFnTimerStartFnTimerCancelFnGetFuncIpFnGetAttachCookieFnTaskPtRegsFnGetBranchSnapshotFnTraceVprintkFnSkcToUnixSockFnKallsymsLookupNameFnFindVmaFnLoopFnStrncmpFnGetFuncArgFnGetFuncRetFnGetFuncArgCntFnGetRetvalFnSetRetvalFnXdpGetBuffLenFnXdpLoadBytesFnXdpStoreBytesFnCopyFromUserTaskFnSkbSetTstampFnImaFileHashFnKptrXchgFnMapLookupPercpuElemFnSkcToMptcpSockFnDynptrFromMemFnRingbufReserveDynptrFnRingbufSubmitDynptrFnRingbufDiscardDynptrFnDynptrReadFnDynptrWriteFnDynptrDatamaxBuiltinFunc"
var _BuiltinFunc_index = [...]uint16{0, 8, 23, 38, 53, 64, 76, 89, 104, 123, 138, 153, 168, 178, 193, 212, 230, 246, 264, 277, 289, 306, 323, 338, 348, 363, 380, 394, 406, 416, 433, 450, 466, 481, 497, 512, 528, 544, 568, 583, 596, 608, 624, 639, 654, 669, 683, 700, 714, 723, 735, 750, 763, 778, 793, 808, 828, 847, 859, 875, 894, 910, 925, 939, 952, 958, 973, 990, 1000, 1022, 1033, 1049, 1066, 1082, 1096, 1115, 1133, 1148, 1158, 1169, 1182, 1202, 1219, 1238, 1259, 1272, 1285, 1296, 1309, 1321, 1334, 1347, 1359, 1373, 1383, 1395, 1407, 1416, 1429, 1446, 1460, 1479, 1494, 1517, 1536, 1555, 1563, 1572, 1586, 1603, 1615, 1632, 1643, 1658, 1675, 1693, 1713, 1725, 1743, 1754, 1773, 1794, 1805, 1821, 1849, 1859, 1875, 1886, 1896, 1908, 1928, 1943, 1959, 1974, 1990, 2004, 2015, 2030, 2044, 2066, 2087, 2102, 2116, 2128, 2141, 2156, 2173, 2193, 2200, 2214, 2227, 2241, 2259, 2274, 2285, 2297, 2311, 2327, 2346, 2365, 2378, 2396, 2410, 2424, 2434, 2450, 2460, 2468, 2487, 2497, 2508, 2526, 2538, 2551, 2562, 2579, 2591, 2610, 2624, 2639, 2659, 2668, 2674, 2683, 2695, 2707, 2722, 2733, 2744, 2759, 2773, 2788, 2806, 2820, 2833, 2843, 2864, 2880, 2895, 2917, 2938, 2960, 2972, 2985, 2997, 3011}
func (i BuiltinFunc) String() string {
if i < 0 || i >= BuiltinFunc(len(_BuiltinFunc_index)-1) {
return "BuiltinFunc(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _BuiltinFunc_name[_BuiltinFunc_index[i]:_BuiltinFunc_index[i+1]]
}

vendor/github.com/cilium/ebpf/asm/instruction.go (generated, vendored; new file, 859 lines)
@@ -0,0 +1,859 @@
package asm
import (
"crypto/sha1"
"encoding/binary"
"encoding/hex"
"errors"
"fmt"
"io"
"math"
"sort"
"strings"
"github.com/cilium/ebpf/internal/sys"
"github.com/cilium/ebpf/internal/unix"
)
// InstructionSize is the size of a BPF instruction in bytes
const InstructionSize = 8
// RawInstructionOffset is an offset in units of raw BPF instructions.
type RawInstructionOffset uint64
var ErrUnreferencedSymbol = errors.New("unreferenced symbol")
var ErrUnsatisfiedMapReference = errors.New("unsatisfied map reference")
var ErrUnsatisfiedProgramReference = errors.New("unsatisfied program reference")
// Bytes returns the offset of an instruction in bytes.
func (rio RawInstructionOffset) Bytes() uint64 {
return uint64(rio) * InstructionSize
}
// Instruction is a single eBPF instruction.
type Instruction struct {
OpCode OpCode
Dst Register
Src Register
Offset int16
Constant int64
// Metadata contains optional metadata about this instruction.
Metadata Metadata
}
// Unmarshal decodes a BPF instruction.
func (ins *Instruction) Unmarshal(r io.Reader, bo binary.ByteOrder) (uint64, error) {
data := make([]byte, InstructionSize)
if _, err := io.ReadFull(r, data); err != nil {
return 0, err
}
ins.OpCode = OpCode(data[0])
regs := data[1]
switch bo {
case binary.LittleEndian:
ins.Dst, ins.Src = Register(regs&0xF), Register(regs>>4)
case binary.BigEndian:
ins.Dst, ins.Src = Register(regs>>4), Register(regs&0xf)
}
ins.Offset = int16(bo.Uint16(data[2:4]))
// Convert to int32 before widening to int64
// to ensure the signed bit is carried over.
ins.Constant = int64(int32(bo.Uint32(data[4:8])))
if !ins.OpCode.IsDWordLoad() {
return InstructionSize, nil
}
// Pull another instruction from the stream to retrieve the second
// half of the 64-bit immediate value.
if _, err := io.ReadFull(r, data); err != nil {
// Don't wrap the read error, to avoid clashing with io.EOF.
return 0, errors.New("64bit immediate is missing second half")
}
// Require that all fields other than the value are zero.
if bo.Uint32(data[0:4]) != 0 {
return 0, errors.New("64bit immediate has non-zero fields")
}
cons1 := uint32(ins.Constant)
cons2 := int32(bo.Uint32(data[4:8]))
ins.Constant = int64(cons2)<<32 | int64(cons1)
return 2 * InstructionSize, nil
}
// Marshal encodes a BPF instruction.
func (ins Instruction) Marshal(w io.Writer, bo binary.ByteOrder) (uint64, error) {
if ins.OpCode == InvalidOpCode {
return 0, errors.New("invalid opcode")
}
isDWordLoad := ins.OpCode.IsDWordLoad()
cons := int32(ins.Constant)
if isDWordLoad {
// Encode least significant 32bit first for 64bit operations.
cons = int32(uint32(ins.Constant))
}
regs, err := newBPFRegisters(ins.Dst, ins.Src, bo)
if err != nil {
return 0, fmt.Errorf("can't marshal registers: %s", err)
}
data := make([]byte, InstructionSize)
data[0] = byte(ins.OpCode)
data[1] = byte(regs)
bo.PutUint16(data[2:4], uint16(ins.Offset))
bo.PutUint32(data[4:8], uint32(cons))
if _, err := w.Write(data); err != nil {
return 0, err
}
if !isDWordLoad {
return InstructionSize, nil
}
// The first half of the second part of a double-wide instruction
// must be zero. The second half carries the value.
bo.PutUint32(data[0:4], 0)
bo.PutUint32(data[4:8], uint32(ins.Constant>>32))
if _, err := w.Write(data); err != nil {
return 0, err
}
return 2 * InstructionSize, nil
}
// AssociateMap associates a Map with this Instruction.
//
// Implicitly clears the Instruction's Reference field.
//
// Returns an error if the Instruction is not a map load.
func (ins *Instruction) AssociateMap(m FDer) error {
if !ins.IsLoadFromMap() {
return errors.New("not a load from a map")
}
ins.Metadata.Set(referenceMeta{}, nil)
ins.Metadata.Set(mapMeta{}, m)
return nil
}
// RewriteMapPtr changes an instruction to use a new map fd.
//
// Returns an error if the instruction doesn't load a map.
//
// Deprecated: use AssociateMap instead. If you cannot provide a Map,
// wrap an fd in a type implementing FDer.
func (ins *Instruction) RewriteMapPtr(fd int) error {
if !ins.IsLoadFromMap() {
return errors.New("not a load from a map")
}
ins.encodeMapFD(fd)
return nil
}
func (ins *Instruction) encodeMapFD(fd int) {
// Preserve the offset value for direct map loads.
offset := uint64(ins.Constant) & (math.MaxUint32 << 32)
rawFd := uint64(uint32(fd))
ins.Constant = int64(offset | rawFd)
}
// MapPtr returns the map fd for this instruction.
//
// The result is undefined if the instruction is not a load from a map,
// see IsLoadFromMap.
//
// Deprecated: use Map() instead.
func (ins *Instruction) MapPtr() int {
// If there is a map associated with the instruction, return its FD.
if fd := ins.Metadata.Get(mapMeta{}); fd != nil {
return fd.(FDer).FD()
}
// Fall back to the fd stored in the Constant field
return ins.mapFd()
}
// mapFd returns the map file descriptor stored in the 32 least significant
// bits of ins' Constant field.
func (ins *Instruction) mapFd() int {
return int(int32(ins.Constant))
}
// RewriteMapOffset changes the offset of a direct load from a map.
//
// Returns an error if the instruction is not a direct load.
func (ins *Instruction) RewriteMapOffset(offset uint32) error {
if !ins.OpCode.IsDWordLoad() {
return fmt.Errorf("%s is not a 64 bit load", ins.OpCode)
}
if ins.Src != PseudoMapValue {
return errors.New("not a direct load from a map")
}
fd := uint64(ins.Constant) & math.MaxUint32
ins.Constant = int64(uint64(offset)<<32 | fd)
return nil
}
func (ins *Instruction) mapOffset() uint32 {
return uint32(uint64(ins.Constant) >> 32)
}
// IsLoadFromMap returns true if the instruction loads from a map.
//
// This covers both loading the map pointer and direct map value loads.
func (ins *Instruction) IsLoadFromMap() bool {
return ins.OpCode == LoadImmOp(DWord) && (ins.Src == PseudoMapFD || ins.Src == PseudoMapValue)
}
// IsFunctionCall returns true if the instruction calls another BPF function.
//
// This is not the same thing as a BPF helper call.
func (ins *Instruction) IsFunctionCall() bool {
return ins.OpCode.JumpOp() == Call && ins.Src == PseudoCall
}
// IsLoadOfFunctionPointer returns true if the instruction loads a function pointer.
func (ins *Instruction) IsLoadOfFunctionPointer() bool {
return ins.OpCode.IsDWordLoad() && ins.Src == PseudoFunc
}
// IsFunctionReference returns true if the instruction references another BPF
// function, either by invoking a Call jump operation or by loading a function
// pointer.
func (ins *Instruction) IsFunctionReference() bool {
return ins.IsFunctionCall() || ins.IsLoadOfFunctionPointer()
}
// IsBuiltinCall returns true if the instruction is a built-in call, i.e. BPF helper call.
func (ins *Instruction) IsBuiltinCall() bool {
return ins.OpCode.JumpOp() == Call && ins.Src == R0 && ins.Dst == R0
}
// IsConstantLoad returns true if the instruction loads a constant of the
// given size.
func (ins *Instruction) IsConstantLoad(size Size) bool {
return ins.OpCode == LoadImmOp(size) && ins.Src == R0 && ins.Offset == 0
}
// Format implements fmt.Formatter.
func (ins Instruction) Format(f fmt.State, c rune) {
if c != 'v' {
fmt.Fprintf(f, "{UNRECOGNIZED: %c}", c)
return
}
op := ins.OpCode
if op == InvalidOpCode {
fmt.Fprint(f, "INVALID")
return
}
// Omit trailing space for Exit
if op.JumpOp() == Exit {
fmt.Fprint(f, op)
return
}
if ins.IsLoadFromMap() {
fd := ins.mapFd()
m := ins.Map()
switch ins.Src {
case PseudoMapFD:
if m != nil {
fmt.Fprintf(f, "LoadMapPtr dst: %s map: %s", ins.Dst, m)
} else {
fmt.Fprintf(f, "LoadMapPtr dst: %s fd: %d", ins.Dst, fd)
}
case PseudoMapValue:
if m != nil {
fmt.Fprintf(f, "LoadMapValue dst: %s, map: %s off: %d", ins.Dst, m, ins.mapOffset())
} else {
fmt.Fprintf(f, "LoadMapValue dst: %s, fd: %d off: %d", ins.Dst, fd, ins.mapOffset())
}
}
goto ref
}
fmt.Fprintf(f, "%v ", op)
switch cls := op.Class(); {
case cls.isLoadOrStore():
switch op.Mode() {
case ImmMode:
fmt.Fprintf(f, "dst: %s imm: %d", ins.Dst, ins.Constant)
case AbsMode:
fmt.Fprintf(f, "imm: %d", ins.Constant)
case IndMode:
fmt.Fprintf(f, "dst: %s src: %s imm: %d", ins.Dst, ins.Src, ins.Constant)
case MemMode:
fmt.Fprintf(f, "dst: %s src: %s off: %d imm: %d", ins.Dst, ins.Src, ins.Offset, ins.Constant)
case XAddMode:
fmt.Fprintf(f, "dst: %s src: %s", ins.Dst, ins.Src)
}
case cls.IsALU():
fmt.Fprintf(f, "dst: %s ", ins.Dst)
if op.ALUOp() == Swap || op.Source() == ImmSource {
fmt.Fprintf(f, "imm: %d", ins.Constant)
} else {
fmt.Fprintf(f, "src: %s", ins.Src)
}
case cls.IsJump():
switch jop := op.JumpOp(); jop {
case Call:
if ins.Src == PseudoCall {
// bpf-to-bpf call
fmt.Fprint(f, ins.Constant)
} else {
fmt.Fprint(f, BuiltinFunc(ins.Constant))
}
default:
fmt.Fprintf(f, "dst: %s off: %d ", ins.Dst, ins.Offset)
if op.Source() == ImmSource {
fmt.Fprintf(f, "imm: %d", ins.Constant)
} else {
fmt.Fprintf(f, "src: %s", ins.Src)
}
}
}
ref:
if ins.Reference() != "" {
fmt.Fprintf(f, " <%s>", ins.Reference())
}
}
func (ins Instruction) equal(other Instruction) bool {
return ins.OpCode == other.OpCode &&
ins.Dst == other.Dst &&
ins.Src == other.Src &&
ins.Offset == other.Offset &&
ins.Constant == other.Constant
}
// Size returns the number of bytes ins would occupy in binary form.
func (ins Instruction) Size() uint64 {
return uint64(InstructionSize * ins.OpCode.rawInstructions())
}
type symbolMeta struct{}
// WithSymbol marks the Instruction as a Symbol, which other Instructions
// can point to using corresponding calls to WithReference.
func (ins Instruction) WithSymbol(name string) Instruction {
ins.Metadata.Set(symbolMeta{}, name)
return ins
}
// Sym creates a symbol.
//
// Deprecated: use WithSymbol instead.
func (ins Instruction) Sym(name string) Instruction {
return ins.WithSymbol(name)
}
// Symbol returns the value ins has been marked with using WithSymbol,
// otherwise returns an empty string. A symbol is often an Instruction
// at the start of a function body.
func (ins Instruction) Symbol() string {
sym, _ := ins.Metadata.Get(symbolMeta{}).(string)
return sym
}
type referenceMeta struct{}
// WithReference makes ins reference another Symbol or map by name.
func (ins Instruction) WithReference(ref string) Instruction {
ins.Metadata.Set(referenceMeta{}, ref)
return ins
}
// Reference returns the Symbol or map name referenced by ins, if any.
func (ins Instruction) Reference() string {
ref, _ := ins.Metadata.Get(referenceMeta{}).(string)
return ref
}
type mapMeta struct{}
// Map returns the Map referenced by ins, if any.
// An Instruction will contain a Map if e.g. it references an existing,
// pinned map that was opened during ELF loading.
func (ins Instruction) Map() FDer {
fd, _ := ins.Metadata.Get(mapMeta{}).(FDer)
return fd
}
type sourceMeta struct{}
// WithSource adds source information about the Instruction.
func (ins Instruction) WithSource(src fmt.Stringer) Instruction {
ins.Metadata.Set(sourceMeta{}, src)
return ins
}
// Source returns source information about the Instruction. The field is
// present when the compiler emits BTF line info about the Instruction and
// usually contains the line of source code responsible for it.
func (ins Instruction) Source() fmt.Stringer {
str, _ := ins.Metadata.Get(sourceMeta{}).(fmt.Stringer)
return str
}
// A Comment can be passed to Instruction.WithSource to add a comment
// to an instruction.
type Comment string
func (s Comment) String() string {
return string(s)
}
// FDer represents a resource tied to an underlying file descriptor.
// Used as a stand-in for e.g. ebpf.Map since that type cannot be
// imported here and FD() is the only method we rely on.
type FDer interface {
FD() int
}
// Instructions is an eBPF program.
type Instructions []Instruction
// Unmarshal unmarshals an Instructions from a binary instruction stream.
// All instructions in insns are replaced by instructions decoded from r.
func (insns *Instructions) Unmarshal(r io.Reader, bo binary.ByteOrder) error {
if len(*insns) > 0 {
*insns = nil
}
var offset uint64
for {
var ins Instruction
n, err := ins.Unmarshal(r, bo)
if errors.Is(err, io.EOF) {
break
}
if err != nil {
return fmt.Errorf("offset %d: %w", offset, err)
}
*insns = append(*insns, ins)
offset += n
}
return nil
}
// Name returns the name of the function insns belongs to, if any.
func (insns Instructions) Name() string {
if len(insns) == 0 {
return ""
}
return insns[0].Symbol()
}
func (insns Instructions) String() string {
return fmt.Sprint(insns)
}
// Size returns the amount of bytes insns would occupy in binary form.
func (insns Instructions) Size() uint64 {
var sum uint64
for _, ins := range insns {
sum += ins.Size()
}
return sum
}
// AssociateMap updates all Instructions that Reference the given symbol
// to point to an existing Map m instead.
//
// Returns an ErrUnreferencedSymbol error if no references to symbol are found
// in insns. If symbol is anything other than the symbol name of a map (e.g.
// a bpf2bpf subprogram), an error is returned.
func (insns Instructions) AssociateMap(symbol string, m FDer) error {
if symbol == "" {
return errors.New("empty symbol")
}
var found bool
for i := range insns {
ins := &insns[i]
if ins.Reference() != symbol {
continue
}
if err := ins.AssociateMap(m); err != nil {
return err
}
found = true
}
if !found {
return fmt.Errorf("symbol %s: %w", symbol, ErrUnreferencedSymbol)
}
return nil
}
// RewriteMapPtr rewrites all loads of a specific map pointer to a new fd.
//
// Returns ErrUnreferencedSymbol if the symbol isn't used.
//
// Deprecated: use AssociateMap instead.
func (insns Instructions) RewriteMapPtr(symbol string, fd int) error {
if symbol == "" {
return errors.New("empty symbol")
}
var found bool
for i := range insns {
ins := &insns[i]
if ins.Reference() != symbol {
continue
}
if !ins.IsLoadFromMap() {
return errors.New("not a load from a map")
}
ins.encodeMapFD(fd)
found = true
}
if !found {
return fmt.Errorf("symbol %s: %w", symbol, ErrUnreferencedSymbol)
}
return nil
}
// SymbolOffsets returns the set of symbols and their offset in
// the instructions.
func (insns Instructions) SymbolOffsets() (map[string]int, error) {
offsets := make(map[string]int)
for i, ins := range insns {
if ins.Symbol() == "" {
continue
}
if _, ok := offsets[ins.Symbol()]; ok {
return nil, fmt.Errorf("duplicate symbol %s", ins.Symbol())
}
offsets[ins.Symbol()] = i
}
return offsets, nil
}
// FunctionReferences returns a set of symbol names these Instructions make
// bpf-to-bpf calls to.
func (insns Instructions) FunctionReferences() []string {
calls := make(map[string]struct{})
for _, ins := range insns {
if ins.Constant != -1 {
// BPF-to-BPF calls have -1 constants.
continue
}
if ins.Reference() == "" {
continue
}
if !ins.IsFunctionReference() {
continue
}
calls[ins.Reference()] = struct{}{}
}
result := make([]string, 0, len(calls))
for call := range calls {
result = append(result, call)
}
sort.Strings(result)
return result
}
// ReferenceOffsets returns the set of references and their offset in
// the instructions.
func (insns Instructions) ReferenceOffsets() map[string][]int {
offsets := make(map[string][]int)
for i, ins := range insns {
if ins.Reference() == "" {
continue
}
offsets[ins.Reference()] = append(offsets[ins.Reference()], i)
}
return offsets
}
// Format implements fmt.Formatter.
//
// You can control indentation of symbols by
// specifying a width. Setting a precision controls the indentation of
// instructions.
// The default character is a tab, which can be overridden by specifying
// the ' ' space flag.
func (insns Instructions) Format(f fmt.State, c rune) {
if c != 's' && c != 'v' {
fmt.Fprintf(f, "{UNKNOWN FORMAT '%c'}", c)
return
}
// Precision is better in this case, because it allows
// specifying 0 padding easily.
padding, ok := f.Precision()
if !ok {
padding = 1
}
indent := strings.Repeat("\t", padding)
if f.Flag(' ') {
indent = strings.Repeat(" ", padding)
}
symPadding, ok := f.Width()
if !ok {
symPadding = padding - 1
}
if symPadding < 0 {
symPadding = 0
}
symIndent := strings.Repeat("\t", symPadding)
if f.Flag(' ') {
symIndent = strings.Repeat(" ", symPadding)
}
// Guess how many digits we need at most, by assuming that all instructions
// are double wide.
highestOffset := len(insns) * 2
offsetWidth := int(math.Ceil(math.Log10(float64(highestOffset))))
iter := insns.Iterate()
for iter.Next() {
if iter.Ins.Symbol() != "" {
fmt.Fprintf(f, "%s%s:\n", symIndent, iter.Ins.Symbol())
}
if src := iter.Ins.Source(); src != nil {
line := strings.TrimSpace(src.String())
if line != "" {
fmt.Fprintf(f, "%s%*s; %s\n", indent, offsetWidth, " ", line)
}
}
fmt.Fprintf(f, "%s%*d: %v\n", indent, offsetWidth, iter.Offset, iter.Ins)
}
}
// Marshal encodes a BPF program into the kernel format.
//
// insns may be modified if there are unresolved jumps or bpf2bpf calls.
//
// Returns ErrUnsatisfiedProgramReference if there is a Reference Instruction
// without a matching Symbol Instruction within insns.
func (insns Instructions) Marshal(w io.Writer, bo binary.ByteOrder) error {
if err := insns.encodeFunctionReferences(); err != nil {
return err
}
if err := insns.encodeMapPointers(); err != nil {
return err
}
for i, ins := range insns {
if _, err := ins.Marshal(w, bo); err != nil {
return fmt.Errorf("instruction %d: %w", i, err)
}
}
return nil
}
// Tag calculates the kernel tag for a series of instructions.
//
// It mirrors bpf_prog_calc_tag in the kernel and so can be compared
// to ProgramInfo.Tag to figure out whether a loaded program matches
// certain instructions.
func (insns Instructions) Tag(bo binary.ByteOrder) (string, error) {
h := sha1.New()
for i, ins := range insns {
if ins.IsLoadFromMap() {
ins.Constant = 0
}
_, err := ins.Marshal(h, bo)
if err != nil {
return "", fmt.Errorf("instruction %d: %w", i, err)
}
}
return hex.EncodeToString(h.Sum(nil)[:unix.BPF_TAG_SIZE]), nil
}
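The tag computed above is just a SHA-1 over the marshaled 8-byte instruction encodings, truncated to BPF_TAG_SIZE (8) bytes, with map-load constants zeroed first. A minimal sketch of that shape, feeding raw encodings directly (the two opcodes are the kernel's `mov64 r0, 0` and `exit`):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// tag mirrors the shape of Instructions.Tag above: SHA-1 over the raw
// 8-byte instruction encodings, truncated to 8 bytes (BPF_TAG_SIZE).
func tag(raw ...[]byte) string {
	h := sha1.New()
	for _, ins := range raw {
		h.Write(ins)
	}
	return hex.EncodeToString(h.Sum(nil)[:8])
}

func main() {
	mov := []byte{0xb7, 0, 0, 0, 0, 0, 0, 0} // mov64 r0, 0
	ext := []byte{0x95, 0, 0, 0, 0, 0, 0, 0} // exit
	fmt.Println(tag(mov, ext))               // 16 hex digits
}
```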
// encodeFunctionReferences populates the Offset (or Constant, depending on
// the instruction type) field of instructions with a Reference field to point
// to the offset of the corresponding instruction with a matching Symbol field.
//
// Only Reference Instructions that are either jumps or BPF function references
// (calls or function pointer loads) are populated.
//
// Returns ErrUnsatisfiedProgramReference if there is a Reference Instruction
// without at least one corresponding Symbol Instruction within insns.
func (insns Instructions) encodeFunctionReferences() error {
// Index the offsets of instructions tagged as a symbol.
symbolOffsets := make(map[string]RawInstructionOffset)
iter := insns.Iterate()
for iter.Next() {
ins := iter.Ins
if ins.Symbol() == "" {
continue
}
if _, ok := symbolOffsets[ins.Symbol()]; ok {
return fmt.Errorf("duplicate symbol %s", ins.Symbol())
}
symbolOffsets[ins.Symbol()] = iter.Offset
}
// Find all instructions tagged as references to other symbols.
// Depending on the instruction type, populate their constant or offset
// fields to point to the symbol they refer to within the insn stream.
iter = insns.Iterate()
for iter.Next() {
i := iter.Index
offset := iter.Offset
ins := iter.Ins
if ins.Reference() == "" {
continue
}
switch {
case ins.IsFunctionReference() && ins.Constant == -1:
symOffset, ok := symbolOffsets[ins.Reference()]
if !ok {
return fmt.Errorf("%s at insn %d: symbol %q: %w", ins.OpCode, i, ins.Reference(), ErrUnsatisfiedProgramReference)
}
ins.Constant = int64(symOffset - offset - 1)
case ins.OpCode.Class().IsJump() && ins.Offset == -1:
symOffset, ok := symbolOffsets[ins.Reference()]
if !ok {
return fmt.Errorf("%s at insn %d: symbol %q: %w", ins.OpCode, i, ins.Reference(), ErrUnsatisfiedProgramReference)
}
ins.Offset = int16(symOffset - offset - 1)
}
}
return nil
}
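The `symOffset - offset - 1` arithmetic above encodes a PC-relative delta: the kernel adds the encoded value to the program counter after it has already advanced past the branch instruction, hence the extra -1. A small sketch:

```go
package main

import "fmt"

// relTarget mirrors the delta computation in encodeFunctionReferences:
// the PC has already moved one past the branch when the delta is applied.
func relTarget(symOffset, insOffset uint64) int64 {
	return int64(symOffset) - int64(insOffset) - 1
}

func main() {
	// A branch at raw offset 2 targeting a symbol at raw offset 5:
	// pc after the branch is 3, so the encoded delta must be +2.
	fmt.Println(relTarget(5, 2)) // 2
	// A backward jump from offset 5 to a symbol at offset 1.
	fmt.Println(relTarget(1, 5)) // -5
}
```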
// encodeMapPointers finds all Map Instructions and encodes their FDs
// into their Constant fields.
func (insns Instructions) encodeMapPointers() error {
iter := insns.Iterate()
for iter.Next() {
ins := iter.Ins
if !ins.IsLoadFromMap() {
continue
}
m := ins.Map()
if m == nil {
continue
}
fd := m.FD()
if fd < 0 {
return fmt.Errorf("map %s: %w", m, sys.ErrClosedFd)
}
ins.encodeMapFD(m.FD())
}
return nil
}
// Iterate allows iterating a BPF program while keeping track of
// various offsets.
//
// Modifying the instruction slice will lead to undefined behaviour.
func (insns Instructions) Iterate() *InstructionIterator {
return &InstructionIterator{insns: insns}
}
// InstructionIterator iterates over a BPF program.
type InstructionIterator struct {
insns Instructions
// The instruction in question.
Ins *Instruction
// The index of the instruction in the original instruction slice.
Index int
// The offset of the instruction in raw BPF instructions. This accounts
// for double-wide instructions.
Offset RawInstructionOffset
}
// Next returns true as long as there are any instructions remaining.
func (iter *InstructionIterator) Next() bool {
if len(iter.insns) == 0 {
return false
}
if iter.Ins != nil {
iter.Index++
iter.Offset += RawInstructionOffset(iter.Ins.OpCode.rawInstructions())
}
iter.Ins = &iter.insns[0]
iter.insns = iter.insns[1:]
return true
}
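The offset bookkeeping in Next above matters because a dword load occupies two raw instructions, so raw offsets run ahead of slice indices. A standalone sketch of that arithmetic:

```go
package main

import "fmt"

// rawOffsets accumulates per-instruction widths the way the iterator
// does: each instruction's raw offset is the sum of the widths before it.
func rawOffsets(widths []int) []int {
	offsets := make([]int, len(widths))
	off := 0
	for i, w := range widths {
		offsets[i] = off
		off += w
	}
	return offsets
}

func main() {
	// LdImmDW (double-wide), then two single-wide instructions.
	fmt.Println(rawOffsets([]int{2, 1, 1})) // [0 2 3]
}
```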
type bpfRegisters uint8
func newBPFRegisters(dst, src Register, bo binary.ByteOrder) (bpfRegisters, error) {
switch bo {
case binary.LittleEndian:
return bpfRegisters((src << 4) | (dst & 0xF)), nil
case binary.BigEndian:
return bpfRegisters((dst << 4) | (src & 0xF)), nil
default:
return 0, fmt.Errorf("unrecognized ByteOrder %T", bo)
}
}
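The nibble packing in newBPFRegisters above can be checked in isolation: on little-endian systems the source register occupies the high nibble and the destination the low nibble, and big-endian swaps them.

```go
package main

import "fmt"

// packRegs mirrors newBPFRegisters above, minus the error handling.
func packRegs(dst, src uint8, littleEndian bool) uint8 {
	if littleEndian {
		return (src << 4) | (dst & 0xF)
	}
	return (dst << 4) | (src & 0xF)
}

func main() {
	// dst = r1, src = r2
	fmt.Printf("%#02x\n", packRegs(1, 2, true))  // 0x21
	fmt.Printf("%#02x\n", packRegs(1, 2, false)) // 0x12
}
```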
// IsUnreferencedSymbol returns true if err was caused by
// an unreferenced symbol.
//
// Deprecated: use errors.Is(err, asm.ErrUnreferencedSymbol).
func IsUnreferencedSymbol(err error) bool {
return errors.Is(err, ErrUnreferencedSymbol)
}

vendor/github.com/cilium/ebpf/asm/jump.go (generated, vendored, new file)
@@ -0,0 +1,127 @@
package asm
//go:generate stringer -output jump_string.go -type=JumpOp
// JumpOp affects control flow.
//
// msb lsb
// +----+-+---+
// |OP |s|cls|
// +----+-+---+
type JumpOp uint8
const jumpMask OpCode = aluMask
const (
// InvalidJumpOp is returned by getters when invoked
// on non-branch OpCodes.
InvalidJumpOp JumpOp = 0xff
// Ja jumps by offset unconditionally
Ja JumpOp = 0x00
// JEq jumps by offset if r == imm
JEq JumpOp = 0x10
// JGT jumps by offset if r > imm
JGT JumpOp = 0x20
// JGE jumps by offset if r >= imm
JGE JumpOp = 0x30
// JSet jumps by offset if r & imm
JSet JumpOp = 0x40
// JNE jumps by offset if r != imm
JNE JumpOp = 0x50
// JSGT jumps by offset if signed r > signed imm
JSGT JumpOp = 0x60
// JSGE jumps by offset if signed r >= signed imm
JSGE JumpOp = 0x70
// Call builtin or user defined function from imm
Call JumpOp = 0x80
// Exit ends execution, with value in r0
Exit JumpOp = 0x90
// JLT jumps by offset if r < imm
JLT JumpOp = 0xa0
// JLE jumps by offset if r <= imm
JLE JumpOp = 0xb0
// JSLT jumps by offset if signed r < signed imm
JSLT JumpOp = 0xc0
// JSLE jumps by offset if signed r <= signed imm
JSLE JumpOp = 0xd0
)
// Return emits an exit instruction.
//
// Requires a return value in R0.
func Return() Instruction {
return Instruction{
OpCode: OpCode(JumpClass).SetJumpOp(Exit),
}
}
// Op returns the OpCode for a given jump source.
func (op JumpOp) Op(source Source) OpCode {
return OpCode(JumpClass).SetJumpOp(op).SetSource(source)
}
// Imm compares 64 bit dst to 64 bit value (sign extended), and adjusts PC by offset if the condition is fulfilled.
func (op JumpOp) Imm(dst Register, value int32, label string) Instruction {
return Instruction{
OpCode: op.opCode(JumpClass, ImmSource),
Dst: dst,
Offset: -1,
Constant: int64(value),
}.WithReference(label)
}
// Imm32 compares 32 bit dst to 32 bit value, and adjusts PC by offset if the condition is fulfilled.
// Requires kernel 5.1.
func (op JumpOp) Imm32(dst Register, value int32, label string) Instruction {
return Instruction{
OpCode: op.opCode(Jump32Class, ImmSource),
Dst: dst,
Offset: -1,
Constant: int64(value),
}.WithReference(label)
}
// Reg compares 64 bit dst to 64 bit src, and adjusts PC by offset if the condition is fulfilled.
func (op JumpOp) Reg(dst, src Register, label string) Instruction {
return Instruction{
OpCode: op.opCode(JumpClass, RegSource),
Dst: dst,
Src: src,
Offset: -1,
}.WithReference(label)
}
// Reg32 compares 32 bit dst to 32 bit src, and adjusts PC by offset if the condition is fulfilled.
// Requires kernel 5.1.
func (op JumpOp) Reg32(dst, src Register, label string) Instruction {
return Instruction{
OpCode: op.opCode(Jump32Class, RegSource),
Dst: dst,
Src: src,
Offset: -1,
}.WithReference(label)
}
func (op JumpOp) opCode(class Class, source Source) OpCode {
if op == Exit || op == Call || op == Ja {
return InvalidOpCode
}
return OpCode(class).SetJumpOp(op).SetSource(source)
}
// Label adjusts PC to the address of the label.
func (op JumpOp) Label(label string) Instruction {
if op == Call {
return Instruction{
OpCode: OpCode(JumpClass).SetJumpOp(Call),
Src: PseudoCall,
Constant: -1,
}.WithReference(label)
}
return Instruction{
OpCode: OpCode(JumpClass).SetJumpOp(op),
Offset: -1,
}.WithReference(label)
}
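The opcode layout documented at the top of this file (operation in the high nibble, a source bit, class in the low three bits) composes by plain OR. A sketch using the constant values above; the source-bit values assume the kernel's BPF_K = 0x00 and BPF_X = 0x08:

```go
package main

import "fmt"

// jumpOpcode packs a jump opcode byte from its three fields.
func jumpOpcode(op, source, class uint8) uint8 {
	return op | source | class
}

func main() {
	// JEq (0x10) | BPF_K (0x00) | JumpClass (0x05) = 0x15,
	// i.e. the kernel's BPF_JMP|BPF_JEQ|BPF_K.
	fmt.Printf("%#02x\n", jumpOpcode(0x10, 0x00, 0x05)) // 0x15
	// The same comparison against a register source.
	fmt.Printf("%#02x\n", jumpOpcode(0x10, 0x08, 0x05)) // 0x1d
}
```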

vendor/github.com/cilium/ebpf/asm/jump_string.go (generated, vendored, new file)
@@ -0,0 +1,53 @@
// Code generated by "stringer -output jump_string.go -type=JumpOp"; DO NOT EDIT.
package asm
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[InvalidJumpOp-255]
_ = x[Ja-0]
_ = x[JEq-16]
_ = x[JGT-32]
_ = x[JGE-48]
_ = x[JSet-64]
_ = x[JNE-80]
_ = x[JSGT-96]
_ = x[JSGE-112]
_ = x[Call-128]
_ = x[Exit-144]
_ = x[JLT-160]
_ = x[JLE-176]
_ = x[JSLT-192]
_ = x[JSLE-208]
}
const _JumpOp_name = "JaJEqJGTJGEJSetJNEJSGTJSGECallExitJLTJLEJSLTJSLEInvalidJumpOp"
var _JumpOp_map = map[JumpOp]string{
0: _JumpOp_name[0:2],
16: _JumpOp_name[2:5],
32: _JumpOp_name[5:8],
48: _JumpOp_name[8:11],
64: _JumpOp_name[11:15],
80: _JumpOp_name[15:18],
96: _JumpOp_name[18:22],
112: _JumpOp_name[22:26],
128: _JumpOp_name[26:30],
144: _JumpOp_name[30:34],
160: _JumpOp_name[34:37],
176: _JumpOp_name[37:40],
192: _JumpOp_name[40:44],
208: _JumpOp_name[44:48],
255: _JumpOp_name[48:61],
}
func (i JumpOp) String() string {
if str, ok := _JumpOp_map[i]; ok {
return str
}
return "JumpOp(" + strconv.FormatInt(int64(i), 10) + ")"
}
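The generated table above packs every name into a single string and maps each value to a substring by index, avoiding one allocation per name. A condensed illustration of the same lookup pattern (the bounds here cover only the first few JumpOps):

```go
package main

import "fmt"

// names concatenates the value names, as stringer does.
const names = "JaJEqJGTJGE"

// bounds maps each value to its [start, end) slice into names.
var bounds = map[uint8][2]int{
	0x00: {0, 2},
	0x10: {2, 5},
	0x20: {5, 8},
	0x30: {8, 11},
}

func lookup(v uint8) string {
	if b, ok := bounds[v]; ok {
		return names[b[0]:b[1]]
	}
	return fmt.Sprintf("JumpOp(%d)", v)
}

func main() {
	fmt.Println(lookup(0x10)) // JEq
	fmt.Println(lookup(0x42)) // JumpOp(66)
}
```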

vendor/github.com/cilium/ebpf/asm/load_store.go (generated, vendored, new file)
@@ -0,0 +1,204 @@
package asm
//go:generate stringer -output load_store_string.go -type=Mode,Size
// Mode for load and store operations
//
// msb lsb
// +---+--+---+
// |MDE|sz|cls|
// +---+--+---+
type Mode uint8
const modeMask OpCode = 0xe0
const (
// InvalidMode is returned by getters when invoked
// on non-load/store OpCodes.
InvalidMode Mode = 0xff
// ImmMode - immediate value
ImmMode Mode = 0x00
// AbsMode - immediate value + offset
AbsMode Mode = 0x20
// IndMode - indirect (imm+src)
IndMode Mode = 0x40
// MemMode - load from memory
MemMode Mode = 0x60
// XAddMode - add atomically across processors.
XAddMode Mode = 0xc0
)
// Size of load and store operations
//
// msb lsb
// +---+--+---+
// |mde|SZ|cls|
// +---+--+---+
type Size uint8
const sizeMask OpCode = 0x18
const (
// InvalidSize is returned by getters when invoked
// on non-load/store OpCodes.
InvalidSize Size = 0xff
// DWord - double word; 64 bits
DWord Size = 0x18
// Word - word; 32 bits
Word Size = 0x00
// Half - half-word; 16 bits
Half Size = 0x08
// Byte - byte; 8 bits
Byte Size = 0x10
)
// Sizeof returns the size in bytes.
func (s Size) Sizeof() int {
switch s {
case DWord:
return 8
case Word:
return 4
case Half:
return 2
case Byte:
return 1
default:
return -1
}
}
// LoadMemOp returns the OpCode to load a value of given size from memory.
func LoadMemOp(size Size) OpCode {
return OpCode(LdXClass).SetMode(MemMode).SetSize(size)
}
// LoadMem emits `dst = *(size *)(src + offset)`.
func LoadMem(dst, src Register, offset int16, size Size) Instruction {
return Instruction{
OpCode: LoadMemOp(size),
Dst: dst,
Src: src,
Offset: offset,
}
}
// LoadImmOp returns the OpCode to load an immediate of given size.
//
// As of kernel 4.20, only DWord size is accepted.
func LoadImmOp(size Size) OpCode {
return OpCode(LdClass).SetMode(ImmMode).SetSize(size)
}
// LoadImm emits `dst = (size)value`.
//
// As of kernel 4.20, only DWord size is accepted.
func LoadImm(dst Register, value int64, size Size) Instruction {
return Instruction{
OpCode: LoadImmOp(size),
Dst: dst,
Constant: value,
}
}
// LoadMapPtr stores a pointer to a map in dst.
func LoadMapPtr(dst Register, fd int) Instruction {
if fd < 0 {
return Instruction{OpCode: InvalidOpCode}
}
return Instruction{
OpCode: LoadImmOp(DWord),
Dst: dst,
Src: PseudoMapFD,
Constant: int64(uint32(fd)),
}
}
// LoadMapValue stores a pointer to the value at a certain offset of a map.
func LoadMapValue(dst Register, fd int, offset uint32) Instruction {
if fd < 0 {
return Instruction{OpCode: InvalidOpCode}
}
fdAndOffset := (uint64(offset) << 32) | uint64(uint32(fd))
return Instruction{
OpCode: LoadImmOp(DWord),
Dst: dst,
Src: PseudoMapValue,
Constant: int64(fdAndOffset),
}
}
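The `fdAndOffset` packing in LoadMapValue above puts the map FD in the lower 32 bits of the instruction constant and the value offset in the upper 32 bits. The split can be verified directly:

```go
package main

import "fmt"

// packFDAndOffset mirrors the constant built by LoadMapValue above.
func packFDAndOffset(fd int, offset uint32) int64 {
	return int64((uint64(offset) << 32) | uint64(uint32(fd)))
}

func main() {
	c := packFDAndOffset(7, 16)
	fmt.Println(uint32(c))       // low half: fd = 7
	fmt.Println(uint32(c >> 32)) // high half: offset = 16
}
```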
// LoadIndOp returns the OpCode for loading a value of given size from an sk_buff.
func LoadIndOp(size Size) OpCode {
return OpCode(LdClass).SetMode(IndMode).SetSize(size)
}
// LoadInd emits `dst = ntoh(*(size *)(((sk_buff *)R6)->data + src + offset))`.
func LoadInd(dst, src Register, offset int32, size Size) Instruction {
return Instruction{
OpCode: LoadIndOp(size),
Dst: dst,
Src: src,
Constant: int64(offset),
}
}
// LoadAbsOp returns the OpCode for loading a value of given size from an sk_buff.
func LoadAbsOp(size Size) OpCode {
return OpCode(LdClass).SetMode(AbsMode).SetSize(size)
}
// LoadAbs emits `r0 = ntoh(*(size *)(((sk_buff *)R6)->data + offset))`.
func LoadAbs(offset int32, size Size) Instruction {
return Instruction{
OpCode: LoadAbsOp(size),
Dst: R0,
Constant: int64(offset),
}
}
// StoreMemOp returns the OpCode for storing a register of given size in memory.
func StoreMemOp(size Size) OpCode {
return OpCode(StXClass).SetMode(MemMode).SetSize(size)
}
// StoreMem emits `*(size *)(dst + offset) = src`
func StoreMem(dst Register, offset int16, src Register, size Size) Instruction {
return Instruction{
OpCode: StoreMemOp(size),
Dst: dst,
Src: src,
Offset: offset,
}
}
// StoreImmOp returns the OpCode for storing an immediate of given size in memory.
func StoreImmOp(size Size) OpCode {
return OpCode(StClass).SetMode(MemMode).SetSize(size)
}
// StoreImm emits `*(size *)(dst + offset) = value`.
func StoreImm(dst Register, offset int16, value int64, size Size) Instruction {
return Instruction{
OpCode: StoreImmOp(size),
Dst: dst,
Offset: offset,
Constant: value,
}
}
// StoreXAddOp returns the OpCode to atomically add a register to a value in memory.
func StoreXAddOp(size Size) OpCode {
return OpCode(StXClass).SetMode(XAddMode).SetSize(size)
}
// StoreXAdd atomically adds src to *dst.
func StoreXAdd(dst, src Register, size Size) Instruction {
return Instruction{
OpCode: StoreXAddOp(size),
Dst: dst,
Src: src,
}
}

vendor/github.com/cilium/ebpf/asm/load_store_string.go (generated, vendored, new file)
@@ -0,0 +1,80 @@
// Code generated by "stringer -output load_store_string.go -type=Mode,Size"; DO NOT EDIT.
package asm
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[InvalidMode-255]
_ = x[ImmMode-0]
_ = x[AbsMode-32]
_ = x[IndMode-64]
_ = x[MemMode-96]
_ = x[XAddMode-192]
}
const (
_Mode_name_0 = "ImmMode"
_Mode_name_1 = "AbsMode"
_Mode_name_2 = "IndMode"
_Mode_name_3 = "MemMode"
_Mode_name_4 = "XAddMode"
_Mode_name_5 = "InvalidMode"
)
func (i Mode) String() string {
switch {
case i == 0:
return _Mode_name_0
case i == 32:
return _Mode_name_1
case i == 64:
return _Mode_name_2
case i == 96:
return _Mode_name_3
case i == 192:
return _Mode_name_4
case i == 255:
return _Mode_name_5
default:
return "Mode(" + strconv.FormatInt(int64(i), 10) + ")"
}
}
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[InvalidSize-255]
_ = x[DWord-24]
_ = x[Word-0]
_ = x[Half-8]
_ = x[Byte-16]
}
const (
_Size_name_0 = "Word"
_Size_name_1 = "Half"
_Size_name_2 = "Byte"
_Size_name_3 = "DWord"
_Size_name_4 = "InvalidSize"
)
func (i Size) String() string {
switch {
case i == 0:
return _Size_name_0
case i == 8:
return _Size_name_1
case i == 16:
return _Size_name_2
case i == 24:
return _Size_name_3
case i == 255:
return _Size_name_4
default:
return "Size(" + strconv.FormatInt(int64(i), 10) + ")"
}
}

vendor/github.com/cilium/ebpf/asm/metadata.go (generated, vendored, new file)
@@ -0,0 +1,80 @@
package asm
// Metadata contains metadata about an instruction.
type Metadata struct {
head *metaElement
}
type metaElement struct {
next *metaElement
key, value interface{}
}
// Find the element containing key.
//
// Returns nil if there is no such element.
func (m *Metadata) find(key interface{}) *metaElement {
for e := m.head; e != nil; e = e.next {
if e.key == key {
return e
}
}
return nil
}
// Remove an element from the linked list.
//
// Copies as many elements of the list as necessary to remove r, but doesn't
// perform a full copy.
func (m *Metadata) remove(r *metaElement) {
current := &m.head
for e := m.head; e != nil; e = e.next {
if e == r {
// We've found the element we want to remove.
*current = e.next
// No need to copy the tail.
return
}
// There is another element in front of the one we want to remove.
// We have to copy it to be able to change metaElement.next.
cpy := &metaElement{key: e.key, value: e.value}
*current = cpy
current = &cpy.next
}
}
// Set a key to a value.
//
// If value is nil, the key is removed. Avoids modifying old metadata by
// copying if necessary.
func (m *Metadata) Set(key, value interface{}) {
if e := m.find(key); e != nil {
if e.value == value {
// Key is present and the value is the same. Nothing to do.
return
}
// Key is present with a different value. Create a copy of the list
// which doesn't have the element in it.
m.remove(e)
}
// m.head is now a linked list that doesn't contain key.
if value == nil {
return
}
m.head = &metaElement{key: key, value: value, next: m.head}
}
// Get the value of a key.
//
// Returns nil if no value with the given key is present.
func (m *Metadata) Get(key interface{}) interface{} {
if e := m.find(key); e != nil {
return e.value
}
return nil
}
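Because Metadata is held by value in each Instruction and Set prepends rather than mutating shared elements, copying an Instruction never leaks metadata back into the original. A condensed sketch of that copy-on-write behaviour (this simplified Set omits the remove-existing-key path handled above):

```go
package main

import "fmt"

// elem and meta are trimmed-down copies of metaElement and Metadata.
type elem struct {
	next       *elem
	key, value interface{}
}

type meta struct{ head *elem }

// Set prepends a new element; simplified: assumes key is not yet present.
func (m *meta) Set(key, value interface{}) {
	m.head = &elem{key: key, value: value, next: m.head}
}

func (m meta) Get(key interface{}) interface{} {
	for e := m.head; e != nil; e = e.next {
		if e.key == key {
			return e.value
		}
	}
	return nil
}

func main() {
	var a meta
	a.Set("sym", "main")
	b := a // value copy: b shares a's list tail
	b.Set("ref", "helper")
	fmt.Println(a.Get("ref")) // <nil> — a is unaffected
	fmt.Println(b.Get("sym")) // main — the shared tail is still visible
}
```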

vendor/github.com/cilium/ebpf/asm/opcode.go (generated, vendored, new file)
@@ -0,0 +1,271 @@
package asm
import (
"fmt"
"strings"
)
//go:generate stringer -output opcode_string.go -type=Class
// Class of operations
//
// msb lsb
// +---+--+---+
// | ?? |CLS|
// +---+--+---+
type Class uint8
const classMask OpCode = 0x07
const (
// LdClass loads immediate values into registers.
// Also used for non-standard load operations from cBPF.
LdClass Class = 0x00
// LdXClass loads memory into registers.
LdXClass Class = 0x01
// StClass stores immediate values to memory.
StClass Class = 0x02
// StXClass stores registers to memory.
StXClass Class = 0x03
// ALUClass describes arithmetic operators.
ALUClass Class = 0x04
// JumpClass describes jump operators.
JumpClass Class = 0x05
// Jump32Class describes jump operators with 32-bit comparisons.
// Requires kernel 5.1.
Jump32Class Class = 0x06
// ALU64Class describes arithmetic operators in 64-bit mode.
ALU64Class Class = 0x07
)
// IsLoad checks if this is either LdClass or LdXClass.
func (cls Class) IsLoad() bool {
return cls == LdClass || cls == LdXClass
}
// IsStore checks if this is either StClass or StXClass.
func (cls Class) IsStore() bool {
return cls == StClass || cls == StXClass
}
func (cls Class) isLoadOrStore() bool {
return cls.IsLoad() || cls.IsStore()
}
// IsALU checks if this is either ALUClass or ALU64Class.
func (cls Class) IsALU() bool {
return cls == ALUClass || cls == ALU64Class
}
// IsJump checks if this is either JumpClass or Jump32Class.
func (cls Class) IsJump() bool {
return cls == JumpClass || cls == Jump32Class
}
func (cls Class) isJumpOrALU() bool {
return cls.IsJump() || cls.IsALU()
}
// OpCode is a packed eBPF opcode.
//
// Its encoding is defined by a Class value:
//
// msb lsb
// +----+-+---+
// | ???? |CLS|
// +----+-+---+
type OpCode uint8
// InvalidOpCode is returned by setters on OpCode
const InvalidOpCode OpCode = 0xff
// rawInstructions returns the number of BPF instructions required
// to encode this opcode.
func (op OpCode) rawInstructions() int {
if op.IsDWordLoad() {
return 2
}
return 1
}
// IsDWordLoad reports whether op is a double-word immediate load, which
// occupies two raw BPF instructions.
func (op OpCode) IsDWordLoad() bool {
return op == LoadImmOp(DWord)
}
// Class returns the class of operation.
func (op OpCode) Class() Class {
return Class(op & classMask)
}
// Mode returns the mode for load and store operations.
func (op OpCode) Mode() Mode {
if !op.Class().isLoadOrStore() {
return InvalidMode
}
return Mode(op & modeMask)
}
// Size returns the size for load and store operations.
func (op OpCode) Size() Size {
if !op.Class().isLoadOrStore() {
return InvalidSize
}
return Size(op & sizeMask)
}
// Source returns the source for branch and ALU operations.
func (op OpCode) Source() Source {
if !op.Class().isJumpOrALU() || op.ALUOp() == Swap {
return InvalidSource
}
return Source(op & sourceMask)
}
// ALUOp returns the ALUOp.
func (op OpCode) ALUOp() ALUOp {
if !op.Class().IsALU() {
return InvalidALUOp
}
return ALUOp(op & aluMask)
}
// Endianness returns the Endianness for a byte swap instruction.
func (op OpCode) Endianness() Endianness {
if op.ALUOp() != Swap {
return InvalidEndian
}
return Endianness(op & endianMask)
}
// JumpOp returns the JumpOp.
// Returns InvalidJumpOp if it doesn't encode a jump.
func (op OpCode) JumpOp() JumpOp {
if !op.Class().IsJump() {
return InvalidJumpOp
}
jumpOp := JumpOp(op & jumpMask)
// Some JumpOps are only supported by JumpClass, not Jump32Class.
if op.Class() == Jump32Class && (jumpOp == Exit || jumpOp == Call || jumpOp == Ja) {
return InvalidJumpOp
}
return jumpOp
}
// SetMode sets the mode on load and store operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetMode(mode Mode) OpCode {
if !op.Class().isLoadOrStore() || !valid(OpCode(mode), modeMask) {
return InvalidOpCode
}
return (op & ^modeMask) | OpCode(mode)
}
// SetSize sets the size on load and store operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetSize(size Size) OpCode {
if !op.Class().isLoadOrStore() || !valid(OpCode(size), sizeMask) {
return InvalidOpCode
}
return (op & ^sizeMask) | OpCode(size)
}
// SetSource sets the source on jump and ALU operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetSource(source Source) OpCode {
if !op.Class().isJumpOrALU() || !valid(OpCode(source), sourceMask) {
return InvalidOpCode
}
return (op & ^sourceMask) | OpCode(source)
}
// SetALUOp sets the ALUOp on ALU operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetALUOp(alu ALUOp) OpCode {
if !op.Class().IsALU() || !valid(OpCode(alu), aluMask) {
return InvalidOpCode
}
return (op & ^aluMask) | OpCode(alu)
}
// SetJumpOp sets the JumpOp on jump operations.
//
// Returns InvalidOpCode if op is of the wrong class.
func (op OpCode) SetJumpOp(jump JumpOp) OpCode {
if !op.Class().IsJump() || !valid(OpCode(jump), jumpMask) {
return InvalidOpCode
}
newOp := (op & ^jumpMask) | OpCode(jump)
// Check newOp is legal.
if newOp.JumpOp() == InvalidJumpOp {
return InvalidOpCode
}
return newOp
}
func (op OpCode) String() string {
var f strings.Builder
switch class := op.Class(); {
case class.isLoadOrStore():
f.WriteString(strings.TrimSuffix(class.String(), "Class"))
mode := op.Mode()
f.WriteString(strings.TrimSuffix(mode.String(), "Mode"))
switch op.Size() {
case DWord:
f.WriteString("DW")
case Word:
f.WriteString("W")
case Half:
f.WriteString("H")
case Byte:
f.WriteString("B")
}
case class.IsALU():
f.WriteString(op.ALUOp().String())
if op.ALUOp() == Swap {
// Width for Endian is controlled by Constant
f.WriteString(op.Endianness().String())
} else {
if class == ALUClass {
f.WriteString("32")
}
f.WriteString(strings.TrimSuffix(op.Source().String(), "Source"))
}
case class.IsJump():
f.WriteString(op.JumpOp().String())
if class == Jump32Class {
f.WriteString("32")
}
if jop := op.JumpOp(); jop != Exit && jop != Call {
f.WriteString(strings.TrimSuffix(op.Source().String(), "Source"))
}
default:
fmt.Fprintf(&f, "OpCode(%#x)", uint8(op))
}
return f.String()
}
// valid returns true if all bits in value are covered by mask.
func valid(value, mask OpCode) bool {
return value & ^mask == 0
}
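The masks defined across this file partition the opcode byte without overlap, which is what makes the Set* methods above simple OR-and-mask operations. Composing a load opcode from the raw constant values shown earlier:

```go
package main

import "fmt"

// Constant values copied from the declarations above.
const (
	classMask = 0x07
	modeMask  = 0xe0

	ldxClass = 0x01 // LdXClass
	memMode  = 0x60 // MemMode
	word     = 0x00 // Word
)

func main() {
	// Equivalent to OpCode(LdXClass).SetMode(MemMode).SetSize(Word).
	op := uint8(ldxClass) | memMode | word
	fmt.Printf("%#02x\n", op) // 0x61: the kernel's BPF_LDX|BPF_MEM|BPF_W
	fmt.Println(op&classMask == ldxClass)
	fmt.Println(op&modeMask == memMode)
}
```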

vendor/github.com/cilium/ebpf/asm/opcode_string.go (generated, vendored, new file)
@@ -0,0 +1,30 @@
// Code generated by "stringer -output opcode_string.go -type=Class"; DO NOT EDIT.
package asm
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[LdClass-0]
_ = x[LdXClass-1]
_ = x[StClass-2]
_ = x[StXClass-3]
_ = x[ALUClass-4]
_ = x[JumpClass-5]
_ = x[Jump32Class-6]
_ = x[ALU64Class-7]
}
const _Class_name = "LdClassLdXClassStClassStXClassALUClassJumpClassJump32ClassALU64Class"
var _Class_index = [...]uint8{0, 7, 15, 22, 30, 38, 47, 58, 68}
func (i Class) String() string {
if i >= Class(len(_Class_index)-1) {
return "Class(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _Class_name[_Class_index[i]:_Class_index[i+1]]
}

vendor/github.com/cilium/ebpf/asm/register.go (generated, vendored, new file)
@@ -0,0 +1,50 @@
package asm
import (
"fmt"
)
// Register is the source or destination of most operations.
type Register uint8
// R0 contains return values.
const R0 Register = 0
// Registers for function arguments.
const (
R1 Register = R0 + 1 + iota
R2
R3
R4
R5
)
// Callee saved registers preserved by function calls.
const (
R6 Register = R5 + 1 + iota
R7
R8
R9
)
// Read-only frame pointer to access stack.
const (
R10 Register = R9 + 1
RFP = R10
)
// Pseudo registers used by 64bit loads and jumps
const (
PseudoMapFD = R1 // BPF_PSEUDO_MAP_FD
PseudoMapValue = R2 // BPF_PSEUDO_MAP_VALUE
PseudoCall = R1 // BPF_PSEUDO_CALL
PseudoFunc = R4 // BPF_PSEUDO_FUNC
)
func (r Register) String() string {
v := uint8(r)
if v == 10 {
return "rfp"
}
return fmt.Sprintf("r%d", v)
}

vendor/github.com/cilium/ebpf/attachtype_string.go (generated, vendored, new file, 65 lines)

@@ -0,0 +1,65 @@
// Code generated by "stringer -type AttachType -trimprefix Attach"; DO NOT EDIT.
package ebpf
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[AttachNone-0]
_ = x[AttachCGroupInetIngress-0]
_ = x[AttachCGroupInetEgress-1]
_ = x[AttachCGroupInetSockCreate-2]
_ = x[AttachCGroupSockOps-3]
_ = x[AttachSkSKBStreamParser-4]
_ = x[AttachSkSKBStreamVerdict-5]
_ = x[AttachCGroupDevice-6]
_ = x[AttachSkMsgVerdict-7]
_ = x[AttachCGroupInet4Bind-8]
_ = x[AttachCGroupInet6Bind-9]
_ = x[AttachCGroupInet4Connect-10]
_ = x[AttachCGroupInet6Connect-11]
_ = x[AttachCGroupInet4PostBind-12]
_ = x[AttachCGroupInet6PostBind-13]
_ = x[AttachCGroupUDP4Sendmsg-14]
_ = x[AttachCGroupUDP6Sendmsg-15]
_ = x[AttachLircMode2-16]
_ = x[AttachFlowDissector-17]
_ = x[AttachCGroupSysctl-18]
_ = x[AttachCGroupUDP4Recvmsg-19]
_ = x[AttachCGroupUDP6Recvmsg-20]
_ = x[AttachCGroupGetsockopt-21]
_ = x[AttachCGroupSetsockopt-22]
_ = x[AttachTraceRawTp-23]
_ = x[AttachTraceFEntry-24]
_ = x[AttachTraceFExit-25]
_ = x[AttachModifyReturn-26]
_ = x[AttachLSMMac-27]
_ = x[AttachTraceIter-28]
_ = x[AttachCgroupInet4GetPeername-29]
_ = x[AttachCgroupInet6GetPeername-30]
_ = x[AttachCgroupInet4GetSockname-31]
_ = x[AttachCgroupInet6GetSockname-32]
_ = x[AttachXDPDevMap-33]
_ = x[AttachCgroupInetSockRelease-34]
_ = x[AttachXDPCPUMap-35]
_ = x[AttachSkLookup-36]
_ = x[AttachXDP-37]
_ = x[AttachSkSKBVerdict-38]
_ = x[AttachSkReuseportSelect-39]
_ = x[AttachSkReuseportSelectOrMigrate-40]
_ = x[AttachPerfEvent-41]
}
const _AttachType_name = "NoneCGroupInetEgressCGroupInetSockCreateCGroupSockOpsSkSKBStreamParserSkSKBStreamVerdictCGroupDeviceSkMsgVerdictCGroupInet4BindCGroupInet6BindCGroupInet4ConnectCGroupInet6ConnectCGroupInet4PostBindCGroupInet6PostBindCGroupUDP4SendmsgCGroupUDP6SendmsgLircMode2FlowDissectorCGroupSysctlCGroupUDP4RecvmsgCGroupUDP6RecvmsgCGroupGetsockoptCGroupSetsockoptTraceRawTpTraceFEntryTraceFExitModifyReturnLSMMacTraceIterCgroupInet4GetPeernameCgroupInet6GetPeernameCgroupInet4GetSocknameCgroupInet6GetSocknameXDPDevMapCgroupInetSockReleaseXDPCPUMapSkLookupXDPSkSKBVerdictSkReuseportSelectSkReuseportSelectOrMigratePerfEvent"
var _AttachType_index = [...]uint16{0, 4, 20, 40, 53, 70, 88, 100, 112, 127, 142, 160, 178, 197, 216, 233, 250, 259, 272, 284, 301, 318, 334, 350, 360, 371, 381, 393, 399, 408, 430, 452, 474, 496, 505, 526, 535, 543, 546, 558, 575, 601, 610}
func (i AttachType) String() string {
if i >= AttachType(len(_AttachType_index)-1) {
return "AttachType(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _AttachType_name[_AttachType_index[i]:_AttachType_index[i+1]]
}

vendor/github.com/cilium/ebpf/btf/btf.go (vendored, new file, 897 lines)

@@ -0,0 +1,897 @@
package btf
import (
"bufio"
"bytes"
"debug/elf"
"encoding/binary"
"errors"
"fmt"
"io"
"math"
"os"
"reflect"
"github.com/cilium/ebpf/internal"
"github.com/cilium/ebpf/internal/sys"
"github.com/cilium/ebpf/internal/unix"
)
const btfMagic = 0xeB9F
// Errors returned by BTF functions.
var (
ErrNotSupported = internal.ErrNotSupported
ErrNotFound = errors.New("not found")
ErrNoExtendedInfo = errors.New("no extended info")
)
// ID represents the unique ID of a BTF object.
type ID = sys.BTFID
// Spec represents decoded BTF.
type Spec struct {
// Data from .BTF.
rawTypes []rawType
strings *stringTable
// All types contained by the spec. For the base type, the position of
// a type in the slice is its ID.
types types
// Type IDs indexed by type.
typeIDs map[Type]TypeID
// Types indexed by essential name.
// Includes all struct flavors and types with the same name.
namedTypes map[essentialName][]Type
byteOrder binary.ByteOrder
}
type btfHeader struct {
Magic uint16
Version uint8
Flags uint8
HdrLen uint32
TypeOff uint32
TypeLen uint32
StringOff uint32
StringLen uint32
}
// typeStart returns the offset from the beginning of the .BTF section
// to the start of its type entries.
func (h *btfHeader) typeStart() int64 {
return int64(h.HdrLen + h.TypeOff)
}
// stringStart returns the offset from the beginning of the .BTF section
// to the start of its string table.
func (h *btfHeader) stringStart() int64 {
return int64(h.HdrLen + h.StringOff)
}
// LoadSpec opens file and calls LoadSpecFromReader on it.
func LoadSpec(file string) (*Spec, error) {
fh, err := os.Open(file)
if err != nil {
return nil, err
}
defer fh.Close()
return LoadSpecFromReader(fh)
}
// LoadSpecFromReader reads from an ELF or a raw BTF blob.
//
// Returns ErrNotFound if reading from an ELF which contains no BTF. ExtInfos
// may be nil.
func LoadSpecFromReader(rd io.ReaderAt) (*Spec, error) {
file, err := internal.NewSafeELFFile(rd)
if err != nil {
if bo := guessRawBTFByteOrder(rd); bo != nil {
// Try to parse a naked BTF blob. This will return an error if
// we encounter a Datasec, since we can't fix it up.
spec, err := loadRawSpec(io.NewSectionReader(rd, 0, math.MaxInt64), bo, nil, nil)
return spec, err
}
return nil, err
}
return loadSpecFromELF(file)
}
// LoadSpecAndExtInfosFromReader reads from an ELF.
//
// ExtInfos may be nil if the ELF doesn't contain section metadata.
// Returns ErrNotFound if the ELF contains no BTF.
func LoadSpecAndExtInfosFromReader(rd io.ReaderAt) (*Spec, *ExtInfos, error) {
file, err := internal.NewSafeELFFile(rd)
if err != nil {
return nil, nil, err
}
spec, err := loadSpecFromELF(file)
if err != nil {
return nil, nil, err
}
extInfos, err := loadExtInfosFromELF(file, spec.types, spec.strings)
if err != nil && !errors.Is(err, ErrNotFound) {
return nil, nil, err
}
return spec, extInfos, nil
}
// variableOffsets extracts all symbol offsets from an ELF and indexes them by
// section and variable name.
//
// References to variables in BTF data sections carry unsigned 32-bit offsets.
// Some ELF symbols (e.g. in vmlinux) may point to virtual memory that is well
// beyond this range. Since these symbols cannot be described by BTF info,
// ignore them here.
func variableOffsets(file *internal.SafeELFFile) (map[variable]uint32, error) {
symbols, err := file.Symbols()
if err != nil {
return nil, fmt.Errorf("can't read symbols: %v", err)
}
variableOffsets := make(map[variable]uint32)
for _, symbol := range symbols {
if idx := symbol.Section; idx >= elf.SHN_LORESERVE && idx <= elf.SHN_HIRESERVE {
// Ignore things like SHN_ABS
continue
}
if symbol.Value > math.MaxUint32 {
// VarSecinfo offset is u32, cannot reference symbols in higher regions.
continue
}
if int(symbol.Section) >= len(file.Sections) {
return nil, fmt.Errorf("symbol %s: invalid section %d", symbol.Name, symbol.Section)
}
secName := file.Sections[symbol.Section].Name
variableOffsets[variable{secName, symbol.Name}] = uint32(symbol.Value)
}
return variableOffsets, nil
}
func loadSpecFromELF(file *internal.SafeELFFile) (*Spec, error) {
var (
btfSection *elf.Section
sectionSizes = make(map[string]uint32)
)
for _, sec := range file.Sections {
switch sec.Name {
case ".BTF":
btfSection = sec
default:
if sec.Type != elf.SHT_PROGBITS && sec.Type != elf.SHT_NOBITS {
break
}
if sec.Size > math.MaxUint32 {
return nil, fmt.Errorf("section %s exceeds maximum size", sec.Name)
}
sectionSizes[sec.Name] = uint32(sec.Size)
}
}
if btfSection == nil {
return nil, fmt.Errorf("btf: %w", ErrNotFound)
}
vars, err := variableOffsets(file)
if err != nil {
return nil, err
}
if btfSection.ReaderAt == nil {
return nil, fmt.Errorf("compressed BTF is not supported")
}
rawTypes, rawStrings, err := parseBTF(btfSection.ReaderAt, file.ByteOrder, nil)
if err != nil {
return nil, err
}
err = fixupDatasec(rawTypes, rawStrings, sectionSizes, vars)
if err != nil {
return nil, err
}
return inflateSpec(rawTypes, rawStrings, file.ByteOrder, nil)
}
func loadRawSpec(btf io.ReaderAt, bo binary.ByteOrder,
baseTypes types, baseStrings *stringTable) (*Spec, error) {
rawTypes, rawStrings, err := parseBTF(btf, bo, baseStrings)
if err != nil {
return nil, err
}
return inflateSpec(rawTypes, rawStrings, bo, baseTypes)
}
func inflateSpec(rawTypes []rawType, rawStrings *stringTable, bo binary.ByteOrder,
baseTypes types) (*Spec, error) {
types, err := inflateRawTypes(rawTypes, baseTypes, rawStrings)
if err != nil {
return nil, err
}
typeIDs, typesByName := indexTypes(types, TypeID(len(baseTypes)))
return &Spec{
rawTypes: rawTypes,
namedTypes: typesByName,
typeIDs: typeIDs,
types: types,
strings: rawStrings,
byteOrder: bo,
}, nil
}
func indexTypes(types []Type, typeIDOffset TypeID) (map[Type]TypeID, map[essentialName][]Type) {
namedTypes := 0
for _, typ := range types {
if typ.TypeName() != "" {
// Do a pre-pass to figure out how big types by name has to be.
// Most types have unique names, so it's OK to ignore essentialName
// here.
namedTypes++
}
}
typeIDs := make(map[Type]TypeID, len(types))
typesByName := make(map[essentialName][]Type, namedTypes)
for i, typ := range types {
if name := newEssentialName(typ.TypeName()); name != "" {
typesByName[name] = append(typesByName[name], typ)
}
typeIDs[typ] = TypeID(i) + typeIDOffset
}
return typeIDs, typesByName
}
// LoadKernelSpec returns the current kernel's BTF information.
//
// Defaults to /sys/kernel/btf/vmlinux and falls back to scanning the file system
// for vmlinux ELFs. Returns an error wrapping ErrNotSupported if BTF is not enabled.
func LoadKernelSpec() (*Spec, error) {
fh, err := os.Open("/sys/kernel/btf/vmlinux")
if err == nil {
defer fh.Close()
return loadRawSpec(fh, internal.NativeEndian, nil, nil)
}
file, err := findVMLinux()
if err != nil {
return nil, err
}
defer file.Close()
return loadSpecFromELF(file)
}
// findVMLinux scans multiple well-known paths for vmlinux kernel images.
func findVMLinux() (*internal.SafeELFFile, error) {
release, err := internal.KernelRelease()
if err != nil {
return nil, err
}
// use same list of locations as libbpf
// https://github.com/libbpf/libbpf/blob/9a3a42608dbe3731256a5682a125ac1e23bced8f/src/btf.c#L3114-L3122
locations := []string{
"/boot/vmlinux-%s",
"/lib/modules/%s/vmlinux-%[1]s",
"/lib/modules/%s/build/vmlinux",
"/usr/lib/modules/%s/kernel/vmlinux",
"/usr/lib/debug/boot/vmlinux-%s",
"/usr/lib/debug/boot/vmlinux-%s.debug",
"/usr/lib/debug/lib/modules/%s/vmlinux",
}
for _, loc := range locations {
file, err := internal.OpenSafeELFFile(fmt.Sprintf(loc, release))
if errors.Is(err, os.ErrNotExist) {
continue
}
return file, err
}
return nil, fmt.Errorf("no BTF found for kernel version %s: %w", release, internal.ErrNotSupported)
}
// parseBTFHeader parses the header of the .BTF section.
func parseBTFHeader(r io.Reader, bo binary.ByteOrder) (*btfHeader, error) {
var header btfHeader
if err := binary.Read(r, bo, &header); err != nil {
return nil, fmt.Errorf("can't read header: %v", err)
}
if header.Magic != btfMagic {
return nil, fmt.Errorf("incorrect magic value %v", header.Magic)
}
if header.Version != 1 {
return nil, fmt.Errorf("unexpected version %v", header.Version)
}
if header.Flags != 0 {
return nil, fmt.Errorf("unsupported flags %v", header.Flags)
}
remainder := int64(header.HdrLen) - int64(binary.Size(&header))
if remainder < 0 {
return nil, errors.New("header length shorter than btfHeader size")
}
if _, err := io.CopyN(internal.DiscardZeroes{}, r, remainder); err != nil {
return nil, fmt.Errorf("header padding: %v", err)
}
return &header, nil
}
func guessRawBTFByteOrder(r io.ReaderAt) binary.ByteOrder {
buf := new(bufio.Reader)
for _, bo := range []binary.ByteOrder{
binary.LittleEndian,
binary.BigEndian,
} {
buf.Reset(io.NewSectionReader(r, 0, math.MaxInt64))
if _, err := parseBTFHeader(buf, bo); err == nil {
return bo
}
}
return nil
}
// parseBTF reads a .BTF section into memory and parses it into a list of
// raw types and a string table.
func parseBTF(btf io.ReaderAt, bo binary.ByteOrder, baseStrings *stringTable) ([]rawType, *stringTable, error) {
buf := internal.NewBufferedSectionReader(btf, 0, math.MaxInt64)
header, err := parseBTFHeader(buf, bo)
if err != nil {
return nil, nil, fmt.Errorf("parsing .BTF header: %v", err)
}
rawStrings, err := readStringTable(io.NewSectionReader(btf, header.stringStart(), int64(header.StringLen)),
baseStrings)
if err != nil {
return nil, nil, fmt.Errorf("can't read type names: %w", err)
}
buf.Reset(io.NewSectionReader(btf, header.typeStart(), int64(header.TypeLen)))
rawTypes, err := readTypes(buf, bo, header.TypeLen)
if err != nil {
return nil, nil, fmt.Errorf("can't read types: %w", err)
}
return rawTypes, rawStrings, nil
}
type variable struct {
section string
name string
}
func fixupDatasec(rawTypes []rawType, rawStrings *stringTable, sectionSizes map[string]uint32, variableOffsets map[variable]uint32) error {
for i, rawType := range rawTypes {
if rawType.Kind() != kindDatasec {
continue
}
name, err := rawStrings.Lookup(rawType.NameOff)
if err != nil {
return err
}
if name == ".kconfig" || name == ".ksyms" {
return fmt.Errorf("reference to %s: %w", name, ErrNotSupported)
}
if rawTypes[i].SizeType != 0 {
continue
}
size, ok := sectionSizes[name]
if !ok {
return fmt.Errorf("data section %s: missing size", name)
}
rawTypes[i].SizeType = size
secinfos := rawType.data.([]btfVarSecinfo)
for j, secInfo := range secinfos {
id := int(secInfo.Type - 1)
if id >= len(rawTypes) {
return fmt.Errorf("data section %s: invalid type id %d for variable %d", name, id, j)
}
varName, err := rawStrings.Lookup(rawTypes[id].NameOff)
if err != nil {
return fmt.Errorf("data section %s: can't get name for type %d: %w", name, id, err)
}
offset, ok := variableOffsets[variable{name, varName}]
if !ok {
return fmt.Errorf("data section %s: missing offset for variable %s", name, varName)
}
secinfos[j].Offset = offset
}
}
return nil
}
// Copy creates a copy of Spec.
func (s *Spec) Copy() *Spec {
types := copyTypes(s.types, nil)
typeIDOffset := TypeID(0)
if len(s.types) != 0 {
typeIDOffset = s.typeIDs[s.types[0]]
}
typeIDs, typesByName := indexTypes(types, typeIDOffset)
// NB: Other parts of spec are not copied since they are immutable.
return &Spec{
s.rawTypes,
s.strings,
types,
typeIDs,
typesByName,
s.byteOrder,
}
}
type marshalOpts struct {
ByteOrder binary.ByteOrder
StripFuncLinkage bool
}
func (s *Spec) marshal(opts marshalOpts) ([]byte, error) {
var (
buf bytes.Buffer
header = new(btfHeader)
headerLen = binary.Size(header)
)
// Reserve space for the header. We have to write it last since
// we don't know the size of the type section yet.
_, _ = buf.Write(make([]byte, headerLen))
// Write type section, just after the header.
for _, raw := range s.rawTypes {
switch {
case opts.StripFuncLinkage && raw.Kind() == kindFunc:
raw.SetLinkage(StaticFunc)
}
if err := raw.Marshal(&buf, opts.ByteOrder); err != nil {
return nil, fmt.Errorf("can't marshal BTF: %w", err)
}
}
typeLen := uint32(buf.Len() - headerLen)
// Write string section after type section.
stringsLen := s.strings.Length()
buf.Grow(stringsLen)
if err := s.strings.Marshal(&buf); err != nil {
return nil, err
}
// Fill out the header, and write it out.
header = &btfHeader{
Magic: btfMagic,
Version: 1,
Flags: 0,
HdrLen: uint32(headerLen),
TypeOff: 0,
TypeLen: typeLen,
StringOff: typeLen,
StringLen: uint32(stringsLen),
}
raw := buf.Bytes()
err := binary.Write(sliceWriter(raw[:headerLen]), opts.ByteOrder, header)
if err != nil {
return nil, fmt.Errorf("can't write header: %v", err)
}
return raw, nil
}
type sliceWriter []byte
func (sw sliceWriter) Write(p []byte) (int, error) {
if len(p) != len(sw) {
return 0, errors.New("size doesn't match")
}
return copy(sw, p), nil
}
// TypeByID returns the BTF Type with the given type ID.
//
// Returns an error wrapping ErrNotFound if a Type with the given ID
// does not exist in the Spec.
func (s *Spec) TypeByID(id TypeID) (Type, error) {
return s.types.ByID(id)
}
// TypeID returns the ID for a given Type.
//
// Returns an error wrapping ErrNotFound if the type isn't part of the Spec.
func (s *Spec) TypeID(typ Type) (TypeID, error) {
if _, ok := typ.(*Void); ok {
// Equality is weird for void, since it is a zero sized type.
return 0, nil
}
id, ok := s.typeIDs[typ]
if !ok {
return 0, fmt.Errorf("no ID for type %s: %w", typ, ErrNotFound)
}
return id, nil
}
// AnyTypesByName returns a list of BTF Types with the given name.
//
// If the BTF blob describes multiple compilation units like vmlinux, multiple
// Types with the same name and kind can exist, but might not describe the same
// data structure.
//
// Returns an error wrapping ErrNotFound if no matching Type exists in the Spec.
func (s *Spec) AnyTypesByName(name string) ([]Type, error) {
types := s.namedTypes[newEssentialName(name)]
if len(types) == 0 {
return nil, fmt.Errorf("type name %s: %w", name, ErrNotFound)
}
// Return a copy to prevent changes to namedTypes.
result := make([]Type, 0, len(types))
for _, t := range types {
// Match against the full name, not just the essential one
// in case the type being looked up is a struct flavor.
if t.TypeName() == name {
result = append(result, t)
}
}
return result, nil
}
// AnyTypeByName returns a Type with the given name.
//
// Returns an error if multiple types of that name exist.
func (s *Spec) AnyTypeByName(name string) (Type, error) {
types, err := s.AnyTypesByName(name)
if err != nil {
return nil, err
}
if len(types) > 1 {
return nil, fmt.Errorf("found multiple types: %v", types)
}
return types[0], nil
}
// TypeByName searches for a Type with a specific name. Since multiple
// Types with the same name can exist, the parameter typ is taken to
// narrow down the search in case of a clash.
//
// typ must be a non-nil pointer to an implementation of a Type.
// On success, the address of the found Type will be copied to typ.
//
// Returns an error wrapping ErrNotFound if no matching
// Type exists in the Spec. If multiple candidates are found,
// an error is returned.
func (s *Spec) TypeByName(name string, typ interface{}) error {
typValue := reflect.ValueOf(typ)
if typValue.Kind() != reflect.Ptr {
return fmt.Errorf("%T is not a pointer", typ)
}
typPtr := typValue.Elem()
if !typPtr.CanSet() {
return fmt.Errorf("%T cannot be set", typ)
}
wanted := typPtr.Type()
if !wanted.AssignableTo(reflect.TypeOf((*Type)(nil)).Elem()) {
return fmt.Errorf("%T does not satisfy Type interface", typ)
}
types, err := s.AnyTypesByName(name)
if err != nil {
return err
}
var candidate Type
for _, typ := range types {
if reflect.TypeOf(typ) != wanted {
continue
}
if candidate != nil {
return fmt.Errorf("type %s: multiple candidates for %T", name, typ)
}
candidate = typ
}
if candidate == nil {
return fmt.Errorf("type %s: %w", name, ErrNotFound)
}
typPtr.Set(reflect.ValueOf(candidate))
return nil
}
// LoadSplitSpecFromReader loads split BTF from a reader.
//
// Types from base are used to resolve references in the split BTF.
// The returned Spec only contains types from the split BTF, not from the base.
func LoadSplitSpecFromReader(r io.ReaderAt, base *Spec) (*Spec, error) {
return loadRawSpec(r, internal.NativeEndian, base.types, base.strings)
}
// TypesIterator iterates over types of a given spec.
type TypesIterator struct {
spec *Spec
index int
// The last visited type in the spec.
Type Type
}
// Iterate returns the types iterator.
func (s *Spec) Iterate() *TypesIterator {
return &TypesIterator{spec: s, index: 0}
}
// Next returns true as long as there are any remaining types.
func (iter *TypesIterator) Next() bool {
if len(iter.spec.types) <= iter.index {
return false
}
iter.Type = iter.spec.types[iter.index]
iter.index++
return true
}
// Handle is a reference to BTF loaded into the kernel.
type Handle struct {
fd *sys.FD
// Size of the raw BTF in bytes.
size uint32
}
// NewHandle loads BTF into the kernel.
//
// Returns ErrNotSupported if BTF is not supported.
func NewHandle(spec *Spec) (*Handle, error) {
if err := haveBTF(); err != nil {
return nil, err
}
if spec.byteOrder != internal.NativeEndian {
return nil, fmt.Errorf("can't load %s BTF on %s", spec.byteOrder, internal.NativeEndian)
}
btf, err := spec.marshal(marshalOpts{
ByteOrder: internal.NativeEndian,
StripFuncLinkage: haveFuncLinkage() != nil,
})
if err != nil {
return nil, fmt.Errorf("can't marshal BTF: %w", err)
}
if uint64(len(btf)) > math.MaxUint32 {
return nil, errors.New("BTF exceeds the maximum size")
}
attr := &sys.BtfLoadAttr{
Btf: sys.NewSlicePointer(btf),
BtfSize: uint32(len(btf)),
}
fd, err := sys.BtfLoad(attr)
if err != nil {
logBuf := make([]byte, 64*1024)
attr.BtfLogBuf = sys.NewSlicePointer(logBuf)
attr.BtfLogSize = uint32(len(logBuf))
attr.BtfLogLevel = 1
// NB: The syscall will never return ENOSPC as of 5.18-rc4.
_, _ = sys.BtfLoad(attr)
return nil, internal.ErrorWithLog(err, logBuf)
}
return &Handle{fd, attr.BtfSize}, nil
}
// NewHandleFromID returns the BTF handle for a given id.
//
// Prefer calling [ebpf.Program.Handle] or [ebpf.Map.Handle] if possible.
//
// Returns ErrNotExist, if there is no BTF with the given id.
//
// Requires CAP_SYS_ADMIN.
func NewHandleFromID(id ID) (*Handle, error) {
fd, err := sys.BtfGetFdById(&sys.BtfGetFdByIdAttr{
Id: uint32(id),
})
if err != nil {
return nil, fmt.Errorf("get FD for ID %d: %w", id, err)
}
info, err := newHandleInfoFromFD(fd)
if err != nil {
_ = fd.Close()
return nil, err
}
return &Handle{fd, info.size}, nil
}
// Spec parses the kernel BTF into Go types.
//
// base is used to decode split BTF and may be nil.
func (h *Handle) Spec(base *Spec) (*Spec, error) {
var btfInfo sys.BtfInfo
btfBuffer := make([]byte, h.size)
btfInfo.Btf, btfInfo.BtfSize = sys.NewSlicePointerLen(btfBuffer)
if err := sys.ObjInfo(h.fd, &btfInfo); err != nil {
return nil, err
}
var baseTypes types
var baseStrings *stringTable
if base != nil {
baseTypes = base.types
baseStrings = base.strings
}
return loadRawSpec(bytes.NewReader(btfBuffer), internal.NativeEndian, baseTypes, baseStrings)
}
// Close destroys the handle.
//
// Subsequent calls to FD will return an invalid value.
func (h *Handle) Close() error {
if h == nil {
return nil
}
return h.fd.Close()
}
// FD returns the file descriptor for the handle.
func (h *Handle) FD() int {
return h.fd.Int()
}
// Info returns metadata about the handle.
func (h *Handle) Info() (*HandleInfo, error) {
return newHandleInfoFromFD(h.fd)
}
func marshalBTF(types interface{}, strings []byte, bo binary.ByteOrder) []byte {
const minHeaderLength = 24
typesLen := uint32(binary.Size(types))
header := btfHeader{
Magic: btfMagic,
Version: 1,
HdrLen: minHeaderLength,
TypeOff: 0,
TypeLen: typesLen,
StringOff: typesLen,
StringLen: uint32(len(strings)),
}
buf := new(bytes.Buffer)
_ = binary.Write(buf, bo, &header)
_ = binary.Write(buf, bo, types)
buf.Write(strings)
return buf.Bytes()
}
var haveBTF = internal.FeatureTest("BTF", "5.1", func() error {
var (
types struct {
Integer btfType
Var btfType
btfVar struct{ Linkage uint32 }
}
strings = []byte{0, 'a', 0}
)
// We use a BTF_KIND_VAR here, to make sure that
// the kernel understands BTF at least as well as we
// do. BTF_KIND_VAR was introduced ~5.1.
types.Integer.SetKind(kindPointer)
types.Var.NameOff = 1
types.Var.SetKind(kindVar)
types.Var.SizeType = 1
btf := marshalBTF(&types, strings, internal.NativeEndian)
fd, err := sys.BtfLoad(&sys.BtfLoadAttr{
Btf: sys.NewSlicePointer(btf),
BtfSize: uint32(len(btf)),
})
if errors.Is(err, unix.EINVAL) || errors.Is(err, unix.EPERM) {
// Treat both EINVAL and EPERM as not supported: loading the program
// might still succeed without BTF.
return internal.ErrNotSupported
}
if err != nil {
return err
}
fd.Close()
return nil
})
var haveFuncLinkage = internal.FeatureTest("BTF func linkage", "5.6", func() error {
if err := haveBTF(); err != nil {
return err
}
var (
types struct {
FuncProto btfType
Func btfType
}
strings = []byte{0, 'a', 0}
)
types.FuncProto.SetKind(kindFuncProto)
types.Func.SetKind(kindFunc)
types.Func.SizeType = 1 // aka FuncProto
types.Func.NameOff = 1
types.Func.SetLinkage(GlobalFunc)
btf := marshalBTF(&types, strings, internal.NativeEndian)
fd, err := sys.BtfLoad(&sys.BtfLoadAttr{
Btf: sys.NewSlicePointer(btf),
BtfSize: uint32(len(btf)),
})
if errors.Is(err, unix.EINVAL) {
return internal.ErrNotSupported
}
if err != nil {
return err
}
fd.Close()
return nil
})

vendor/github.com/cilium/ebpf/btf/btf_types.go (vendored, new file, 343 lines)

@@ -0,0 +1,343 @@
package btf
import (
"encoding/binary"
"fmt"
"io"
)
//go:generate stringer -linecomment -output=btf_types_string.go -type=FuncLinkage,VarLinkage
// btfKind describes a Type.
type btfKind uint8
// Equivalents of the BTF_KIND_* constants.
const (
kindUnknown btfKind = iota
kindInt
kindPointer
kindArray
kindStruct
kindUnion
kindEnum
kindForward
kindTypedef
kindVolatile
kindConst
kindRestrict
// Added ~4.20
kindFunc
kindFuncProto
// Added ~5.1
kindVar
kindDatasec
// Added ~5.13
kindFloat
)
// FuncLinkage describes BTF function linkage metadata.
type FuncLinkage int
// Equivalent of enum btf_func_linkage.
const (
StaticFunc FuncLinkage = iota // static
GlobalFunc // global
ExternFunc // extern
)
// VarLinkage describes BTF variable linkage metadata.
type VarLinkage int
const (
StaticVar VarLinkage = iota // static
GlobalVar // global
ExternVar // extern
)
const (
btfTypeKindShift = 24
btfTypeKindLen = 5
btfTypeVlenShift = 0
btfTypeVlenMask = 16
btfTypeKindFlagShift = 31
btfTypeKindFlagMask = 1
)
// btfType is equivalent to struct btf_type in Documentation/bpf/btf.rst.
type btfType struct {
NameOff uint32
/* "info" bits arrangement
* bits 0-15: vlen (e.g. # of struct's members), linkage
* bits 16-23: unused
* bits 24-28: kind (e.g. int, ptr, array...etc)
* bits 29-30: unused
* bit 31: kind_flag, currently used by
* struct, union and fwd
*/
Info uint32
/* "size" is used by INT, ENUM, STRUCT and UNION.
* "size" tells the size of the type it is describing.
*
* "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT,
* FUNC and FUNC_PROTO.
* "type" is a type_id referring to another type.
*/
SizeType uint32
}
func (k btfKind) String() string {
switch k {
case kindUnknown:
return "Unknown"
case kindInt:
return "Integer"
case kindPointer:
return "Pointer"
case kindArray:
return "Array"
case kindStruct:
return "Struct"
case kindUnion:
return "Union"
case kindEnum:
return "Enumeration"
case kindForward:
return "Forward"
case kindTypedef:
return "Typedef"
case kindVolatile:
return "Volatile"
case kindConst:
return "Const"
case kindRestrict:
return "Restrict"
case kindFunc:
return "Function"
case kindFuncProto:
return "Function Proto"
case kindVar:
return "Variable"
case kindDatasec:
return "Section"
case kindFloat:
return "Float"
default:
return fmt.Sprintf("Unknown (%d)", k)
}
}
func mask(len uint32) uint32 {
return (1 << len) - 1
}
func readBits(value, len, shift uint32) uint32 {
return (value >> shift) & mask(len)
}
func writeBits(value, len, shift, new uint32) uint32 {
value &^= mask(len) << shift
value |= (new & mask(len)) << shift
return value
}
func (bt *btfType) info(len, shift uint32) uint32 {
return readBits(bt.Info, len, shift)
}
func (bt *btfType) setInfo(value, len, shift uint32) {
bt.Info = writeBits(bt.Info, len, shift, value)
}
func (bt *btfType) Kind() btfKind {
return btfKind(bt.info(btfTypeKindLen, btfTypeKindShift))
}
func (bt *btfType) SetKind(kind btfKind) {
bt.setInfo(uint32(kind), btfTypeKindLen, btfTypeKindShift)
}
func (bt *btfType) Vlen() int {
return int(bt.info(btfTypeVlenMask, btfTypeVlenShift))
}
func (bt *btfType) SetVlen(vlen int) {
bt.setInfo(uint32(vlen), btfTypeVlenMask, btfTypeVlenShift)
}
func (bt *btfType) KindFlag() bool {
return bt.info(btfTypeKindFlagMask, btfTypeKindFlagShift) == 1
}
func (bt *btfType) Linkage() FuncLinkage {
return FuncLinkage(bt.info(btfTypeVlenMask, btfTypeVlenShift))
}
func (bt *btfType) SetLinkage(linkage FuncLinkage) {
bt.setInfo(uint32(linkage), btfTypeVlenMask, btfTypeVlenShift)
}
func (bt *btfType) Type() TypeID {
// TODO: Panic here if wrong kind?
return TypeID(bt.SizeType)
}
func (bt *btfType) Size() uint32 {
// TODO: Panic here if wrong kind?
return bt.SizeType
}
func (bt *btfType) SetSize(size uint32) {
bt.SizeType = size
}
type rawType struct {
btfType
data interface{}
}
func (rt *rawType) Marshal(w io.Writer, bo binary.ByteOrder) error {
if err := binary.Write(w, bo, &rt.btfType); err != nil {
return err
}
if rt.data == nil {
return nil
}
return binary.Write(w, bo, rt.data)
}
// btfInt encodes additional data for integers.
//
// ? ? ? ? e e e e o o o o o o o o ? ? ? ? ? ? ? ? b b b b b b b b
// ? = undefined
// e = encoding
// o = offset (bitfields?)
// b = bits (bitfields)
type btfInt struct {
Raw uint32
}
const (
btfIntEncodingLen = 4
btfIntEncodingShift = 24
btfIntOffsetLen = 8
btfIntOffsetShift = 16
btfIntBitsLen = 8
btfIntBitsShift = 0
)
func (bi btfInt) Encoding() IntEncoding {
return IntEncoding(readBits(bi.Raw, btfIntEncodingLen, btfIntEncodingShift))
}
func (bi *btfInt) SetEncoding(e IntEncoding) {
bi.Raw = writeBits(uint32(bi.Raw), btfIntEncodingLen, btfIntEncodingShift, uint32(e))
}
func (bi btfInt) Offset() Bits {
return Bits(readBits(bi.Raw, btfIntOffsetLen, btfIntOffsetShift))
}
func (bi *btfInt) SetOffset(offset uint32) {
bi.Raw = writeBits(bi.Raw, btfIntOffsetLen, btfIntOffsetShift, offset)
}
func (bi btfInt) Bits() Bits {
return Bits(readBits(bi.Raw, btfIntBitsLen, btfIntBitsShift))
}
func (bi *btfInt) SetBits(bits byte) {
bi.Raw = writeBits(bi.Raw, btfIntBitsLen, btfIntBitsShift, uint32(bits))
}
type btfArray struct {
Type TypeID
IndexType TypeID
Nelems uint32
}
type btfMember struct {
NameOff uint32
Type TypeID
Offset uint32
}
type btfVarSecinfo struct {
Type TypeID
Offset uint32
Size uint32
}
type btfVariable struct {
Linkage uint32
}
type btfEnum struct {
NameOff uint32
Val int32
}
type btfParam struct {
NameOff uint32
Type TypeID
}
func readTypes(r io.Reader, bo binary.ByteOrder, typeLen uint32) ([]rawType, error) {
var header btfType
	// Because of the interleaving between types and struct members it is
	// difficult to precompute the number of raw types this will parse,
	// so this "guess" is a good first estimation.
sizeOfbtfType := uintptr(binary.Size(btfType{}))
tyMaxCount := uintptr(typeLen) / sizeOfbtfType / 2
types := make([]rawType, 0, tyMaxCount)
for id := TypeID(1); ; id++ {
if err := binary.Read(r, bo, &header); err == io.EOF {
return types, nil
} else if err != nil {
return nil, fmt.Errorf("can't read type info for id %v: %v", id, err)
}
var data interface{}
switch header.Kind() {
case kindInt:
data = new(btfInt)
case kindPointer:
case kindArray:
data = new(btfArray)
case kindStruct:
fallthrough
case kindUnion:
data = make([]btfMember, header.Vlen())
case kindEnum:
data = make([]btfEnum, header.Vlen())
case kindForward:
case kindTypedef:
case kindVolatile:
case kindConst:
case kindRestrict:
case kindFunc:
case kindFuncProto:
data = make([]btfParam, header.Vlen())
case kindVar:
data = new(btfVariable)
case kindDatasec:
data = make([]btfVarSecinfo, header.Vlen())
case kindFloat:
default:
return nil, fmt.Errorf("type id %v: unknown kind: %v", id, header.Kind())
}
if data == nil {
types = append(types, rawType{header, nil})
continue
}
if err := binary.Read(r, bo, data); err != nil {
return nil, fmt.Errorf("type id %d: kind %v: can't read %T: %v", id, header.Kind(), data, err)
}
types = append(types, rawType{header, data})
}
}

Some files were not shown because too many files changed in this diff.