Although the tests have been passing on the latest changes, there was a failure in last night's test run.
Investigating, I found the cause of the problem: when you call cmd.Execute("systemd-run"), Go will (sometimes) replace the command with its full path (in this case /usr/bin/systemd-run), so our check for systemd-run mode was not matching and execution went down the old code path of direct cgroup assignment.
The fix is to be explicit about it and return a boolean indicating whether resource governance is required after the process is launched. This brings the code back to the way it was in previous PR iterations but avoids the objections raised there due to Linux-only concepts. When we converge the Windows code here, the implementation of applyResourceGovernance will use Job objects on Windows and the code flow will be the same.
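A minimal sketch of the explicit approach, using hypothetical names (`startVMWatch` and the boolean return) rather than the extension's actual API:

```go
package main

import "os/exec"

// startVMWatch launches vmwatch either via systemd-run (which applies
// resource limits itself) or directly. The boolean return tells the caller
// whether it still needs to apply resource governance (cgroup assignment)
// after launch, instead of inferring the mode from the command path.
func startVMWatch(useSystemdRun bool, binary string, args ...string) (*exec.Cmd, bool) {
	if useSystemdRun {
		// systemd-run governs the transient scope for us: no further work.
		full := append([]string{"--scope", binary}, args...)
		return exec.Command("systemd-run", full...), false
	}
	// Direct launch: caller must move the process into our cgroup afterwards.
	return exec.Command(binary, args...), true
}
```

Returning the flag from the launch site avoids any string comparison against a path that the runtime may have rewritten.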
We found when testing on some distros that they had older versions of systemd installed.
Versions before 246 use `MemoryLimit` and later versions use `MemoryMax`, so we need to know which version is installed when constructing the command line.
Older versions also don't support the `-E` flag for environment variables and instead need the longer form `--setenv`; that flag is supported in both old and new versions.
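The version-dependent command line could be built along these lines; the property names and the 246 threshold come from the description above, while the function name and argument layout are illustrative:

```go
package main

import "fmt"

// buildSystemdRunArgs builds systemd-run arguments, accounting for version
// differences: MemoryMax replaced MemoryLimit in systemd v246, and the long
// form --setenv works on both old and new versions (unlike the newer -E).
func buildSystemdRunArgs(systemdVersion int, memBytes int64, env map[string]string) []string {
	memProp := "MemoryLimit"
	if systemdVersion >= 246 {
		memProp = "MemoryMax"
	}
	args := []string{"--scope", fmt.Sprintf("--property=%s=%d", memProp, memBytes)}
	for k, v := range env {
		// --setenv is safe on every systemd version we target.
		args = append(args, fmt.Sprintf("--setenv=%s=%s", k, v))
	}
	return args
}
```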
* Removed noisy telemetry events, and added more detail to the error log.
* - Created new CustomMetricsStatusType
- CustomMetrics will now be reported only when there is a change in the CustomMetric field.
- Added commitedCustomMetricsState variable to keep track of the last CustomMetric value.
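The change-detection idea can be sketched as follows; the type and method names are illustrative stand-ins for the committed-state tracking described above:

```go
package main

// metricsReporter remembers the last committed custom-metrics value and
// reports a new one only when it differs, suppressing repeated events.
type metricsReporter struct {
	committed string // last value that was actually reported
}

// shouldReport returns true (and commits the value) only on a change.
func (r *metricsReporter) shouldReport(current string) bool {
	if current == r.committed {
		return false // unchanged: skip the noisy event
	}
	r.committed = current
	return true
}
```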
* Adding internal/manifest package from Cross-Platform AppHealth Feature Branch
* Running go mod tidy and go mod vendor
* - Add manifest.xml to Extension folder
- Changed Github workflow go version to Go 1.18
- Small refactor in setup function for bats tests.
* Update Go version to 1.18 in Dockerfile
* Add logging package with NopLogger implementation
* Add telemetry package for logging events
* - Add telemetry event Logging to main.go
* - Add new String() methods to vmWatchSignalFilters and vmWatchSettings structs
- Add telemetry event Logging to handlersettings.go
* - Add telemetry event Logging to reportstatus.go
* Add telemetry event Logging to health.go
* Refactor install handler in main/cmds.go to use telemetry event logging
* Refactor uninstall handler in main/cmds.go to use telemetry event logging
* Refactor enable handler function in main/cmds.go to use telemetry event logging
* Refactor vmWatch.go to use telemetry event logging
* Fix requestPath in extension-settings.json and updated 2 integration tests, one in 2_handler-commands.bats and another in 7_vmwatch.bats
* ran go mod tidy && go mod vendor
* Update ExtensionManifest version to 2.0.9 on UT
* Refactor telemetry event sender to use EventLevel constants in main/telemetry.go
* Refactor telemetry event sender to use EventTasks constants that match with existing Windows Telemetry
* Update logging messages in 7_vmwatch.bats
* Moved telemetry.go to its package in internal/telemetry
* Update Go version to 1.22 in Dockerfile, go.yml, go.mod, and go.sum
* Update ExtensionManifest version to 2.0.9 on UT
* Add NopLogger documentation to pkg/logging/logging.go
* Added Documentation to Telemetry Pkg
* - Added a Wrapper to HandlerEnviroment to add Additional functionality like the String() func
- Added String() func to handlersettings struct, publicSettings struct, vmWatchSettings struct and
vmWatchSignalFilters struct
- Added Telemetry Event for HandlerSettings, and for HandlerEnviroment
* - Updated HandlerEnviroment String() to use the MarshalIndent function.
- Updated HandlerSettings struct String() func to use MarshalIndent
- Fixed failing UTs due to nil pointer in embedded struct inside HandlerEnviroment.
* - Updated vmWatchSetting String() func to use MarshalIndent
* Update ExtensionManifest version to 2.0.10 on Failing UT
* removed duplicated UT
* Removed String() func from VMWatchSignalFilters, publicSettings and protectedSettings
Background:
Our tests have been running fine for a long time but suddenly started failing on specific OS versions. This was because the process (although initially associated with the correct cgroup that we created) gets moved back to the parent cgroup. This results in the limits being removed.
I did some research and reached out to various people and found that this is something that has previously been seen. When a process is started with systemd you are not supposed to manage cgroups directly, systemd owns its own hierarchy and can manipulate things within it. Documentation says that you should not modify the cgroups within that slice hierarchy directly but instead you should use `systemd-run` to launch processes.
The GuestAgent folks saw very similar behavior and switching to systemd-run resolved all their issues.
Changes:
Changed the code to use `systemd-run` to launch the vmwatch process. Using the `--scope` parameter makes the call wait until the vmwatch process completes.
The process id returned from the call is the actual process id of vmwatch.
I have confirmed that killing vmwatch and killing app health extension still has the same behavior (the PDeathSig integration is working fine) and the aurora tests are working fine with these changes.
NOTE: Because systemd-run is not available in docker containers, the code falls back to running the process directly and continues to use the old code path in that case. This should also cover any Linux distros which don't use systemd, where direct cgroup assignment should work fine.
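The fallback decision can be sketched as a PATH check, assuming `exec.LookPath` is how availability is detected (the function names are illustrative):

```go
package main

import "os/exec"

// chooseLaunchMode picks between the systemd-run launch path and the old
// direct-launch path, given whether systemd-run exists on this host.
func chooseLaunchMode(available bool) string {
	if available {
		return "systemd-run"
	}
	return "direct"
}

// systemdRunAvailable checks PATH for systemd-run; in docker containers and
// non-systemd distros this typically fails, triggering the direct fallback.
func systemdRunAvailable() bool {
	_, err := exec.LookPath("systemd-run")
	return err == nil
}
```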
Changes by @dpoole73
- Fix bug where we were not using the global `vmWatchCommand` variable so the SIGTERM handler was not killing anything
- set the `Pdeathsig` property on the command so the SIGTERM signal is sent to the sub process on parent process termination
This fixes both issues:
Before the fix if we killed app health process, vmwatch process was always leaked
After the fix:
`kill <pid>` -> log message "Received shutdown request" and kill vmwatch.
`kill -9 <pid>` -> no log message, vmwatch is killed
Changes by @klugorosado
- Added Integration tests to kill AppHealth Gracefully with SIGTERM and SIGINT, and validated VMWatch Shutdown.
- Added Integration tests to kill AppHealth Forcibly with SIGKILL, and validated VMWatch Shutdown.
- Added the capability for dev containers to run Integration tests inside.
* Added --apphealth-version flag to VMWatch with AppHealth version from manifest.xml
* - Validated Extension Version on existing VMWatch.
- Created bash function to extract Version from manifest.xml.
- GetExtensionManifestVersion now first attempts to get Extension Version from Version passed at build time and uses manifest.xml file as fallback.
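The build-time-first, manifest-fallback behaviour described above can be sketched like this; the variable and function names are illustrative, with the manifest parsing passed in as a function:

```go
package main

// Version can be injected at build time, e.g.:
//   go build -ldflags "-X main.Version=2.0.10"
var Version string

// getExtensionVersion prefers the build-time value and falls back to one
// parsed from manifest.xml when the linker variable was left empty.
func getExtensionVersion(fromManifest func() string) string {
	if Version != "" {
		return Version // build-time value wins
	}
	return fromManifest() // fallback: read manifest.xml
}
```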
* Initial checkpoint
* tweak tests
* tweak the scripts
1. use nc for a TCP server instead of a web server for simplicity
2. add the variables to control tolerating the failure assignment to cgroup to allow tests to run
3. add new test for the case where it fails
* feedback
* feedback
* feedback
* feedback
* Bootstrapping has no integration test regressions
* Add cleanup of VMWatch process during shutdown signals and upon other commands, plus integration test template
* Added integration tests for VMWatch
* Linting
* Fix file vet issues
* attempt to fix handler command: install - creates the data dir
* nit integration tests
* Use handlerenvironment to dictate vmwatch signal folder and verbose log file paths
* Include missing changes in previous commit
* Remove unnecessary changes
* Try to fix docker installation error in go workflow
* Fix integration tests
* Update HandlerManifest with process names for guest agent to monitor cpu/memory usage
* Run linting
* Remove cpu/memory limits in HandlerManifest + update VMWatch binary directory to bin/VMWatch/ + implement VMWatch process retries + update integration tests
* Update test.Dockerfile
* Rename workflow
* Add formatting & linting
* Add logic to do retries on failed tests + don't fail fast
* Minor nits
* Update integration tests + code changes to resolve comments regarding execution of process
* Formatting + Linting + Vet
* Add recover and defer logic for executing VMWatch; properly close and read the channel; also limit to once every 60 seconds
* fix integration tests
* Bump to v2.0.7
* revert unnecessary changes to schema.go
* Small fix to killVMWatch
* Fix logic for killing VMWatch
* v2.0.8 Added Support for dynamic EventsFolder directory from extension Handler Environment (#39)
* - moved handlerenv.go and seqno.go from "github.com/Azure/azure-docker-extension/pkg/vmextension"
- Added EventsFolder with other missing parameters.
* - Removed vmextension lib dependency from VMWatch and other files.
- Updated HandlerEnviroment.json test file.
- Updated VMWatch integration tests.
* - Bump to v2.0.8
* initial devcontainer changes
changes:
1. add devcontainer config
2. add vscode build config
3. add makefile target to set up the appropriate stuff in the container
4. update some line endings and add gitattributes so scripts run
5. fix what seems to be a bug in fake-waagent script as it doesn't work without this fix for me
* update binaries and config to latest
* Resource governance, heartbeat and dev container changes
The main feature change here is the addition of resource governance for linux via cgroups.
We discover the current cgroup and add a sub-cgroup for our purposes (limiting cpu to 1% and memory to 40MB).
I also added support for detecting a stuck vmwatch via the heartbeat file, and implemented the same restart logic as the windows version (3 restarts per 3 hours).
As part of this development, I added support for devcontainer execution so we can step through the code from a dev machine into either a WSL session or a linux vm with tools installed.
I added integration tests to check process exit, OOM, and cpu throttling; these required a few changes to the makefile and scripts.
I also updated the vmwatch binaries and added a script to download the latest ones.
I updated the govendor files using the tool it told me to run; I hope I did this right.
* feedback
* feedback
* Run 'go mod edit -go=1.18' to be consistent with linux extensions repo
* Run linting/formatting
* Fix nits from merge conflicts
* Fix app health handler.log directory path
* Change to applicationhealth-extension
* Mistakenly added two VMWatch substatus items
* Adding filtering for tests which can only run on a real linux host (not WSL or docker)
continuing investigation...
* fix time from minutes to hours plus add makefile target to create zip file (for use in testing)
* feedback
* feedback
* add readme
* updated vmwatch version, config schema and commandline
* typo
* test fixes
* test fixes
* add helper script to upload binaries to storage
* change container name
* feedback
* feedback
* typo
---------
Co-authored-by: Frank Pang <frankpang@microsoft.com>
Co-authored-by: frank-pang-msft <92764154+frank-pang-msft@users.noreply.github.com>
Co-authored-by: klugorosado <142627157+klugorosado@users.noreply.github.com>
## Overview
This PR contains changes to support running VMWatch (amd64 and arm64) as an executable via goroutines and channels.
> VMWatch is a standardized, lightweight, and open-sourced testing framework designed to enhance the monitoring and management of guest VMs on the Azure platform, including both 1P and 3P instances. VMWatch is engineered to collect vital health signals across multiple dimensions, which will be seamlessly integrated into Azure's quality systems. By leveraging these signals, VMWatch will enable Azure to swiftly detect and prevent regressions induced by platform updates or configuration changes, identify gaps in platform telemetry, and ultimately improve the guest experience for all Azure customers.
## Behavior
VMWatch will run asynchronously as a separate process from ApplicationHealth, so the probing of application health will not be affected by the state of VMWatch. Depending on extension settings, VMWatch can be enabled/disabled, and test names and parameter overrides can be passed to the VMWatch binary. The status of VMWatch will be displayed in the extension x.status files and also in GET VM Instance View. The main process will attempt to start the VMWatch binary up to 3 times, after which VMWatch status will be set to failed.
## Process Leaks
To ensure that VMWatch processes do not accumulate, applicationhealth-shim will be responsible for killing existing VMWatch processes by looking for processes running with the VMWatch binary names for the architecture type. For unexpected process termination, if for some reason the main applicationhealth-extension process is terminated, we ensure the VMWatch process is also killed by subscribing to shutdown/termination signals in the main process and killing VMWatch based on its process ID.
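A sketch of the cleanup step, using `pgrep -f` to find matching processes (the real shim may instead walk /proc; names here are illustrative, with the parsing split out so it is testable):

```go
package main

import (
	"os/exec"
	"strconv"
	"strings"
)

// parsePids converts pgrep's newline-separated output into pids.
func parsePids(pgrepOutput string) []int {
	var pids []int
	for _, field := range strings.Fields(pgrepOutput) {
		if pid, err := strconv.Atoi(field); err == nil {
			pids = append(pids, pid)
		}
	}
	return pids
}

// findVMWatchPids returns pids of processes whose command line matches the
// architecture-specific vmwatch binary name.
func findVMWatchPids(binaryName string) []int {
	out, err := exec.Command("pgrep", "-f", binaryName).Output()
	if err != nil {
		return nil // pgrep exits non-zero when nothing matches
	}
	return parsePids(string(out))
}
```

The caller would then send each pid a termination signal before starting a fresh VMWatch instance.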
## Example Binary Execution
Example execution from integration testing
```
SIGNAL_FOLDER=/var/log/azure/Microsoft.ManagedServices.ApplicationHealthLinux/events VERBOSE_LOG_FILE_FULL_PATH=/var/log/azure/Microsoft.ManagedServices.ApplicationHealthLinux/VE.RS.ION/vmwatch.log ./var/lib/waagent/Extension/bin/VMWatch/vmwatch_linux_amd64 --config /var/lib/waagent/Extension/bin/VMWatch/vmwatch.conf --input-filter disk_io:outbound_connectivity
```
## Release/Packaging
In addition to the arm64 or amd64 VMWatch binaries, `vmwatch.conf` will be expected to be present in the bin/VMWatch directory for the VMWatch process to read. VMWatch will also populate and share the eventsFolder with ApplicationHealth, so events can be viewed in Kusto. The verbose logs of VMWatch will be written to `vmwatch.log`.
---------
Co-authored-by: klugorosado <142627157+klugorosado@users.noreply.github.com>
* - Added min and max TLS version support.
- Included Support for TLS 1.3
- Minimum TLS 1.1 enforced.
* - Added bash function to create and delete certificates.
- Modified run.sh script to use the create and delete certificate functions.
- Added new instructions to run integration tests in README.md
* - Added 3 Integration tests to test each version of TLS.
- Added tlsVersion flag for webserver input.
- Added TLS Config for https webserver.
- Added helper functions to get TLS Version and Health State.
- Changed port of https server from 443 to 4430.
* - refactored NewHttpHealthProbe function.
* - changed min version to TLS 1.0
* - Test all TLS versions, including SSLv3
- Parallelize integration tests.
* - modified go.yml to use run.sh
* - Changed flag to securityProtocol and updated comments.
* - Added dynamic container names to bats tests.
* - TLS Config set to Defaults but tested.
* cleanup logic for created container.
* Only basic.bats tests are run sequentially
* Only basic.bats tests are run sequentially
* Attempt to fix go.yml
* Revert: TLS Config set to Defaults but tested
* TLS Max Version set to Default.
* Added small comments and verbose logs for integration tests.
* Try fix go workflow
* Update github workflow for v2/main and v2/develop.
* Try Add sequential and parallel integration tests with retry option.
* Try Fix: "Try Add sequential and parallel integration tests with retry option."
* Try Fix: "Try Fix: "Try Add sequential and parallel integration tests with retry option.""
* Update branch names in go.yml workflow
* Refactor health probe address construction
* Add while loop to find unique docker image and added clarification comments.
* Refactor integration test job names
* capture error from run.sh script
* Fixed repeated Assertions on SSLv3 test
* Refactor health test function names and added new tests to validate request path.
* Remove unnecessary code in health.go
* nit changes to getHealthStatus
* Refactor integration test scripts for better organization and parallelization
* Fix github workflow integration test directory paths.
* Update branch restrictions for push and pull_request workflows
* - change publicSettingsSchema.properties.gracePeriod to 14400 seconds.
* - Added a new unit test for the grace period (tested multiple scenarios).
* - Attempted to fix docker installation error in go workflow.
Due to the pending update of the Compute-ART-LinuxExtensions Go compiler to 1.17.x, I am migrating the code to use Go modules.
https://go.dev/blog/migrating-to-go-modules
The vendor directory has been trimmed by running 'go mod init', 'go mod vendor', and 'go mod tidy'; no files were removed manually.
main.go has been updated to call the renamed method in github.com/Azure/azure-docker-extension/pkg/vmextension
* go mod vendor
* Ran go mod tidy and updated main.go