Latency Benchmarking tool

Introduction

LaBench (for LAtency BENCHmark) is a tool that measures latency percentiles of HTTP GET or POST requests under a very even and steady load.

The main feature and distinction of this tool is that, unlike many other benchmarking tools, it dictates the request rate to the server and maintains that rate very evenly even when the server experiences slowdowns and hiccups. Other tools usually back off and let the server recover, which skews the measured percentiles (see the Coordinated Omission Problem for more details).

The main difference from the wrk2 tool is the very even load that LaBench generates.

Quick-Start Guide

  1. Copy or compile the LaBench binary (both Windows and Linux executables are available). The Windows version has a more precise clock.
  2. Modify labench.yaml to meet your needs; the most basic parameters should be self-explanatory. For the full list of supported parameters, see full_config.yaml.
  3. Run the benchmark by simply running labench (you can also specify a .yaml file on the command line, but labench.yaml is used by default).
  4. BEFORE looking at the latency results, check the following things in the tool output:
    1. TimelyTicks percentage. If it is less than, say, 99.9%, you need to increase the number of Clients in the YAML config. Keeping it at 100% is very realistic.
    2. TimelySends percentage. If it is less than, say, 99.9%, you need a beefier machine to run the test. Keeping it at 100% is very realistic.
    3. The number of errors returned by the server (non-200 responses). A small percentage is OK, but errors are not accounted for in the latency results.
    4. The throughput reported in the last line. It should be close to the RequestRatePerSec value in your .yaml config.
  5. If ANY of the above checks fails, the run was not valid and there is no point in looking at the latency results it produced; fix the problem and re-run.
  6. The measurement results (latency percentiles) are placed in the out\res.hgrm file. You can open it in Excel or plot it at http://hdrhistogram.github.io/HdrHistogram/plotFiles.html.
  7. Note that the plotted results have a logarithmic X axis (i.e., the distance between 99% and 99.9% is the same as the distance between 99.9% and 99.99%).
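
As a sketch for step 2, a minimal labench.yaml might look like the following. Only Clients and RequestRatePerSec are named in this README; every other field name here is an illustrative guess, so check full_config.yaml for the authoritative parameter list.

```yaml
# Hypothetical minimal config. Field names other than Clients and
# RequestRatePerSec are illustrative guesses; see full_config.yaml
# for the real, complete list of supported parameters.
RequestRatePerSec: 100        # the rate LaBench dictates to the server
Clients: 50                   # raise this if TimelyTicks drops below ~99.9%
Duration: 60s                 # how long to run the benchmark
Request:
  Method: GET
  URL: http://localhost:8080/healthz
```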

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.