Linux shows a lot of performance fluctuation, and we don't currently have the time or knowledge to work on this. It's a future goal, but for now just increase the thresholds.
* Add stats logging option to perf, log debug output to file
Logging connection stats can be an easy way to debug certain issues with the performance tool. Add an option to enable this.
By default, the pwsh script only prints logs in debug mode. Instead, write all output to a file as well, so it can be reviewed later. Otherwise all the data from step 1 would just be lost.
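The dual-output idea above (print in debug mode, but always persist to a file) can be sketched roughly as follows. This is a minimal illustrative helper in Python, not the actual pwsh script; the class and file names are hypothetical:

```python
class TeeLogger:
    """Write each log line to the console and to a file, so output
    from a perf run survives for later review (hypothetical sketch)."""

    def __init__(self, path):
        self.file = open(path, "w")

    def log(self, message):
        print(message)                    # console output (debug mode)
        self.file.write(message + "\n")   # persisted copy for later review
        self.file.flush()

    def close(self):
        self.file.close()

# Hypothetical usage: connection stats are logged once per run.
logger = TeeLogger("perf_run.log")
logger.log("Connection stats: 42 streams, 0 lost packets")
logger.close()
```

The key point is that the file copy is written unconditionally, so stats are not lost when the script is run outside debug mode.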
* Fix builds
* Spacing nit
* Removing extra comma
* Fix incorrect log file
* Extra backtick
* 1 more parameter check
* Actually log local results
* Add build version to machine name
* Fix log name
* Hopefully 1 last fix
* Don't overwrite release logs with debug logs
* Toggle kernel as well
Co-authored-by: Nick Banks <nibanks@microsoft.com>
* Default to send buffering off in secnetperf
We usually want to be testing with no send buffering; however, that's not the default. That makes manual testing more difficult, as it's something we have to set explicitly. Make no send buffering the default, and require it to be enabled explicitly.
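The flipped default could look like the following sketch. This is a hypothetical CLI fragment (the flag name `--sendbuf` and its values are assumptions, not the tool's actual interface): send buffering is off unless explicitly enabled.

```python
import argparse

# Hypothetical CLI sketch: send buffering defaults to off (0)
# and must be enabled explicitly with --sendbuf 1.
parser = argparse.ArgumentParser(prog="secnetperf")
parser.add_argument(
    "--sendbuf", type=int, choices=(0, 1), default=0,
    help="1 to enable send buffering (default: 0, disabled)")

# Explicitly enabling it for a buffered run:
args = parser.parse_args(["--sendbuf", "1"])
print(args.sendbuf)
```

With no arguments, `args.sendbuf` is 0, matching the new no-send-buffering default.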
* Adjust tcp throughput thresholds
* Remove stub TLS
We were only using stub TLS to be compatible with ASAN. Now that OpenSSL and ASAN work together, we can remove stub TLS and reduce our TLS scope.
* Rename quicperf to secnetperf
Since TCP is now included, it's now a network security perf test tool, not just a QUIC perf tool.
* Fix file names
* Fix clog
* 1 more name fix
* Start work on linux perf
* Add linux rps and hps tests
* Remove test functions
* Add linux perf
* Add low latency rps
* Use new machines for loopback too
* Fix windows tests
* Cleanup perf scripts
This removes a lot of what was generic about the remotes, and instead moves the remote into a common root context.
This makes new tests much easier to add, especially once we get the full RPS matrix.
* Fix progress preference
* Fail build if throughput up test has a 5% or more performance drop
* Allow tests to finish, write failures at end
* Move thresholds to json files
* Fail loopback on regressions too, but use a larger threshold
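The regression gate described in the commits above (a 5% drop fails the build, JSON-driven thresholds, a looser limit for loopback, and failures collected and reported at the end rather than aborting mid-run) can be sketched like this. All names, threshold values, and the JSON layout here are assumptions for illustration, not the actual script's format:

```python
import json

# Hypothetical thresholds file: per-test allowed regression,
# as a fraction of the baseline. Loopback gets a larger allowance.
THRESHOLDS_JSON = '{"throughput-up": 0.05, "throughput-up-loopback": 0.15}'

def check_regressions(results, baselines, thresholds):
    """Compare results to baselines; collect failures instead of
    stopping at the first one, so they can all be printed at the end."""
    failures = []
    for test, value in results.items():
        baseline = baselines[test]
        allowed = thresholds[test]
        drop = (baseline - value) / baseline  # positive drop = regression
        if drop >= allowed:
            failures.append(
                f"{test}: {drop:.1%} drop exceeds {allowed:.0%} threshold")
    return failures

thresholds = json.loads(THRESHOLDS_JSON)
baselines = {"throughput-up": 1000.0, "throughput-up-loopback": 1200.0}
results = {"throughput-up": 930.0, "throughput-up-loopback": 1150.0}
for failure in check_regressions(results, baselines, thresholds):
    print(failure)
```

Here the 7% drop on `throughput-up` exceeds its 5% threshold and is reported, while the smaller loopback drop stays within its looser 15% limit.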
* Fix arguments
* Actually print failures at end
* Fix negative regressions not being counted
* Print more
* Slightly different output
* Fix not triggering
Co-authored-by: Nick Banks <nibanks@microsoft.com>
RPS latency numbers are recorded for every RPS run, as there is no noticeable impact on performance from doing so.
These latency numbers are not yet uploaded to the database; they will be added at a later date. They are printed, though.
Histograms of latency percentiles are also generated. Currently only the numbers from the last run of a set are uploaded to artifacts.
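The percentile numbers mentioned above could be derived from the recorded samples with something like the following sketch. It uses the nearest-rank method; the actual tool's method, function names, and percentile set may differ:

```python
import math

def latency_percentiles(samples, percentiles=(50, 90, 99, 99.9)):
    """Compute latency percentiles from recorded per-request samples
    using the nearest-rank method (illustrative sketch)."""
    ordered = sorted(samples)
    result = {}
    for p in percentiles:
        # Nearest-rank: smallest index whose cumulative share >= p%.
        rank = max(1, math.ceil(p / 100 * len(ordered)))
        result[p] = ordered[rank - 1]
    return result

# Hypothetical latency samples in milliseconds.
samples = [1.2, 0.8, 3.5, 0.9, 1.1, 2.0, 0.7, 1.4, 5.9, 1.0]
print(latency_percentiles(samples))
```

The tail percentiles (99, 99.9) are the ones that matter most for an RPS run, since they expose occasional slow requests that the mean hides.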
A 60 connection test was added to reduce utilization to a constant level.
The new performance driver is custom-built specifically for performance, rather than using quicping.
It will also be compatible with server mode, and baseline support is part of this commit.
Perf tests can now run either locally or across systems. A setup has been created to make this work automatically on AZP.
Additionally, the machine name is now published, along with other smaller code changes.