_content/doc: add PGO usage guide

Add a primary documentation page for PGO, describing both the mechanics
of using PGO as well as best practices around collecting profiles.

Fixes golang/go#55022.

Change-Id: Icca1673ce54a091a5c7329b999e9b36d6bfea538
Reviewed-on: https://go-review.googlesource.com/c/website/+/463684
Run-TryBot: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Eli Bendersky <eliben@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
This commit is contained in:
Michael Pratt 2023-01-26 16:46:21 -05:00 committed by Gopher Robot
Parent 9abce52941
Commit d3473f8251
2 changed files with 276 additions and 0 deletions


@@ -130,6 +130,11 @@ Main documentation page for Go fuzzing.
Main documentation page for coverage testing of Go applications.
</p>
<h3 id="pgo"><a href="/doc/pgo">Profile-guided optimization</a></h3>
<p>
Main documentation page for profile-guided optimization (PGO) of Go applications.
</p>
<h3 id="data-access">Accessing databases</h3>
<h4 id="data-access-tutorial"><a href="/doc/tutorial/database-access">Tutorial: Accessing a relational database</a></h4>

_content/doc/pgo.md Normal file

@@ -0,0 +1,271 @@
---
title: Profile-guided optimization
layout: article
---
Beginning in Go 1.20, the Go compiler supports profile-guided optimization (PGO) to further optimize builds.
_Note: As of Go 1.20, PGO is in public preview.
We encourage folks to try it out, but there are still rough edges (noted below) which may preclude production use.
Please report issues you experience to https://go.dev/issue/new. We expect PGO to be generally available in a future release._
Table of Contents:
[Overview](#overview)\
[Collecting profiles](#collecting-profiles)\
[Building with PGO](#building)\
[Notes](#notes)\
[Frequently Asked Questions](#faq)\
[Appendix: alternative profile sources](#alternative-sources)
# Overview {#overview}
Profile-guided optimization (PGO), also known as feedback-directed optimization (FDO), is a compiler optimization technique that feeds information (a profile) from representative runs of the application back into the compiler for the next build of the application, which uses that information to make more informed optimization decisions.
For example, the compiler may decide to more aggressively inline functions which the profile indicates are called frequently.
In Go, the compiler uses CPU pprof profiles as the input profile, such as from [runtime/pprof](https://pkg.go.dev/runtime/pprof) or [net/http/pprof](https://pkg.go.dev/net/http/pprof).
As of Go 1.20, benchmarks for a representative set of Go programs show that building with PGO improves performance by around 2-4%.
We expect performance gains to generally increase over time as additional optimizations take advantage of PGO in future versions of Go.
# Collecting profiles {#collecting-profiles}
The Go compiler expects a CPU pprof profile as the input to PGO.
Profiles generated by the Go runtime (such as from [runtime/pprof](https://pkg.go.dev/runtime/pprof) and [net/http/pprof](https://pkg.go.dev/net/http/pprof)) can be used directly as the compiler input.
It may also be possible to use/convert profiles from other profiling systems. See [the appendix](#alternative-sources) for additional information.
For best results, it is important that profiles are _representative_ of actual behavior in the application's production environment.
Using an unrepresentative profile is likely to result in a binary with little to no improvement in production.
Thus, collecting profiles directly from the production environment is recommended, and is the primary method that Go's PGO is designed for.
The typical workflow is as follows:
1. Build and release an initial binary (without PGO).
2. Collect profiles from production.
3. When it's time to release an updated binary, build from the latest source and provide the production profile.
4. GOTO 2
Go PGO is generally robust to skew between the profiled version of an application and the version building with the profile, as well as to building with profiles collected from already-optimized binaries.
This is what makes this iterative lifecycle possible.
See the [AutoFDO](#autofdo) section for additional details about this workflow.
If it is difficult or impossible to collect from the production environment (e.g., a command-line tool distributed to end users), it is also possible to collect from a representative benchmark.
Note that constructing representative benchmarks is often quite difficult (as is keeping them representative as the application evolves).
In particular, _microbenchmarks are usually bad candidates for PGO profiling_, as they only exercise a small part of the application, which yields small gains when applied to the whole program.
# Building with PGO {#building}
The `go build -pgo` flag controls PGO profile selection.
Setting this flag to anything other than `-pgo=off` enables PGO optimizations.
The standard approach is to store a pprof CPU profile with filename `default.pgo` in the main package directory of the profiled binary, and build with `go build -pgo=auto`, which will pick up `default.pgo` files automatically.
Committing profiles directly in the source repository is recommended, as profiles are an input to the build that is important for reproducible (and performant!) builds.
Storing alongside the source simplifies the build experience as there are no additional steps to get the profile beyond fetching the source.
_Note: In Go 1.20, the default is `-pgo=off`.
A future version is likely to change the default to `-pgo=auto` to automatically build any binary with `default.pgo` with PGO._
_Note: In Go 1.20, `-pgo=auto` only works with a single main package.
Attempting to build multiple main packages (`go build -pgo=auto ./cmd/foo ./cmd/bar`) will result in a build error.
This is https://go.dev/issue/58099._
For more complex scenarios (e.g., different profiles for different scenarios of one binary, unable to store profile with source, etc), you may directly pass a path to the profile to use (e.g., `go build -pgo=/tmp/foo.pprof`).
_Note: A path passed to `-pgo` applies to all main packages.
e.g., `go build -pgo=/tmp/foo.pprof ./cmd/foo ./cmd/bar` applies `foo.pprof` to both binaries `foo` and `bar`, which is often not what you want.
Usually different binaries should have different profiles, passed via separate `go build` invocations._
# Notes {#notes}
## Collecting representative profiles from production
Your production environment is the best source of representative profiles for your application, as described in [Collecting profiles](#collecting-profiles).
The simplest way to start with this is to add [net/http/pprof](https://pkg.go.dev/net/http/pprof) to your application and then fetch `/debug/pprof/profile?seconds=30` from an arbitrary instance of your service.
This is a great way to get started, but there are ways that this may be unrepresentative:
* This instance may not be doing anything at the moment it gets profiled, even though it is usually busy.
* Traffic patterns may change throughout the day, making behavior change throughout the day.
* Instances may perform long-running operations (e.g., 5 minutes doing operation A, then 5 minutes doing operation B, etc).
A 30s profile will likely only cover a single operation type.
* Instances may not receive fair distributions of requests (some instances receive more of one type of request than others).
A more robust strategy is collecting multiple profiles at different times from different instances to limit the impact of differences between individual instance profiles.
Multiple profiles may then be [merged](#merging-profiles) into a single profile for use with PGO.
Many organizations run “continuous profiling” services that perform this kind of fleet-wide sampling profiling automatically, which could then be used as a source of profiles for PGO.
## Merging profiles {#merging-profiles}
The pprof tool can merge multiple profiles like this:
```
$ go tool pprof -proto a.pprof b.pprof > merged.pprof
```
This merge is effectively a straightforward sum of samples in the input, regardless of the wall duration of each input profile.
As a result, when profiling a small time slice of an application (e.g., a server that runs indefinitely), you likely want to ensure that all profiles have the same wall duration (i.e., all profiles are collected for 30s).
Otherwise, profiles with longer wall duration will be overrepresented in the merged profile.
## AutoFDO {#autofdo}
Go PGO is designed to support an “[AutoFDO](https://research.google/pubs/pub45290/)” style workflow.
Let's take a closer look at the workflow described in [Collecting profiles](#collecting-profiles):
1. Build and release an initial binary (without PGO).
2. Collect profiles from production.
3. When it's time to release an updated binary, build from the latest source and provide the production profile.
4. GOTO 2
This sounds deceptively simple, but there are a few important properties to note here:
* Development is always ongoing, so the source code of the profiled version of the binary (step 2) is likely slightly different from the latest source code getting built (step 3).
Go PGO is designed to be robust to this, which we refer to as _source stability_.
* This is a closed loop.
That is, after the first iteration the profiled version of the binary is already PGO-optimized with a profile from a previous iteration.
Go PGO is also designed to be robust to this, which we refer to as _iterative stability_.
_Source stability_ is achieved using heuristics to match samples from the profile to the compiling source.
As a result, many changes to source code, such as adding new functions, have no impact on matching existing code.
When the compiler is not able to match changed code, some optimizations are lost, but note that this is a _graceful degradation_.
A single function failing to match may lose out on optimization opportunities, but overall PGO benefit is usually spread across many functions. See the [source stability](#source-stability) section for more details about matching and degradation.
_Iterative stability_ is the prevention of cycles of variable performance in successive PGO builds (e.g., build #1 is fast, build #2 is slow, build #3 is fast, etc).
We use CPU profiles to identify hot functions to target with optimizations.
In theory, a hot function could be sped up so much by PGO that it no longer appears hot in the next profile and does not get optimized, making it slow again.
The Go compiler takes a conservative approach to PGO optimizations, which we believe prevents significant variance.
If you do observe this kind of instability, please file an issue at https://go.dev/issue/new.
Together, source and iterative stability eliminate the requirement for two-stage builds where a first, unoptimized build is profiled as a canary, and then rebuilt with PGO for production (unless absolutely peak performance is required).
## Source stability and refactoring {#source-stability}
As described above, Go's PGO makes a best-effort attempt to continue matching samples from older profiles to the current source code.
Specifically, Go uses line offsets within functions (e.g., a call on the 5th line of function foo).
Many common changes will not break matching, including:
* Changes in a file outside of a hot function (adding/changing code above or below the function).
* Moving a function to another file in the same package (the compiler ignores source filenames altogether).
Some changes that may break matching:
* Changes within a hot function (may affect line offsets).
* Renaming the function (and/or type for methods) (changes symbol name).
* Moving the function to another package (changes symbol name).
If the profile is relatively recent, then differences likely only affect a small number of hot functions, limiting the impact of missed optimizations in functions that fail to match.
Still, degradation will slowly accumulate over time since code is rarely refactored _back_ to its old form, so it is important to collect new profiles regularly to limit source skew from production.
One situation where profile matching may significantly degrade is a large-scale refactor that renames many functions or moves them between packages.
In this case, you may take a short-term performance hit until a new profile shows the new structure.
For rote renames, an existing profile could theoretically be rewritten to change the old symbol names to the new names.
[github.com/google/pprof/profile](https://pkg.go.dev/github.com/google/pprof/profile) contains the primitives required to rewrite a pprof profile in this way, but as of writing no off-the-shelf tool exists for this.
## Performance of new code
When adding new code or enabling new code paths with a flag flip, that code will not be present in the profile on the first build, and thus won't receive PGO optimizations until a new profile reflecting the new code is collected.
Keep in mind when evaluating the rollout of new code that the initial release will not represent its steady state performance.
# Frequently Asked Questions {#faq}
## Is it possible to optimize Go standard library packages with PGO?
Yes.
PGO in Go applies to the entire program.
All packages are rebuilt to consider potential profile-guided optimizations, including standard library packages.
## Is it possible to optimize packages in dependent modules with PGO?
Yes.
PGO in Go applies to the entire program.
All packages are rebuilt to consider potential profile-guided optimizations, including packages in dependencies.
This means that the unique way your application uses a dependency impacts the optimizations applied to that dependency.
## Will PGO with an unrepresentative profile make my program slower than no PGO?
It should not.
While a profile that is not representative of production behavior will result in optimizations in cold parts of the application, it should not make hot parts of the application slower.
If you encounter a program where PGO results in worse performance than disabling PGO, please file an issue at https://go.dev/issue/new.
## Can I use the same profile for different GOOS/GOARCH builds?
Yes.
The format of the profiles is equivalent across OS and architecture configurations, so they may be used across different configurations.
For example, a profile collected from a linux/arm64 binary may be used in a windows/amd64 build.
That said, the source stability caveats discussed [above](#autofdo) apply here as well.
Any source code that differs across these configurations will not be optimized.
For most applications, the vast majority of code is platform-independent, so degradation of this form is limited.
As a specific example, the internals of file handling in package `os` differ between Linux and Windows.
If these functions are hot in the Linux profile, the Windows equivalents will not get PGO optimizations because they do not match the profiles.
You may merge profiles of different GOOS/GOARCH builds. See the next question for the tradeoffs of doing so.
## How should I handle a single binary used for different workload types?
There is no obvious choice here.
A single binary used for different types of workloads (e.g., a database used in a read-heavy way in one service, and write-heavy in another service) may have different hot components, which benefit from different optimizations.
There are three options:
1. Build different versions of the binary for each workload: use profiles from each workload to build multiple workload-specific builds of the binary.
This will provide the best performance for each workload, but may add operational complexity with regard to handling multiple binaries and profile sources.
2. Build a single binary using only profiles from the “most important” workload: select the “most important” workload (largest footprint, most performance sensitive), and build using profiles only from that workload.
This provides the best performance for the selected workload, and likely still modest performance improvements for other workloads from optimizations to common code shared across workloads.
3. Merge profiles across workloads: take profiles from each workload (weighted by total footprint of each workload) and merge them into a single “fleet-wide” profile, which is used to build a single, common build of the binary.
This likely provides modest performance improvements for all workloads.
## How does PGO affect build time?
Enabling PGO builds should cause measurable, but small, increases in package build times.
Likely more noticeable than individual package build times is that PGO profiles apply to all packages in a binary, meaning that the first use of a profile requires a rebuild of every package in the dependency graph.
These builds are cached like any other, so subsequent incremental builds using the same profile do not require complete rebuilds.
If you experience extreme increases in build time, please file an issue at https://go.dev/issue/new.
_Note: In Go 1.20, profile parsing adds significant overhead, particularly for large profiles, which can significantly increase build times.
This is tracked by https://go.dev/issue/58102 and will be addressed in a future release._
## How does PGO affect binary size?
PGO can result in slightly larger binaries due to additional function inlining.
# Appendix: alternative profile sources {#alternative-sources}
CPU profiles generated by the Go runtime (via [runtime/pprof](https://pkg.go.dev/runtime/pprof), etc) are already in the correct format for direct use as PGO inputs.
However, organizations may have alternative preferred tooling (e.g., Linux perf), or existing fleet-wide continuous profiling systems which they wish to use with Go PGO.
Profiles from alternative sources may be used with Go PGO if converted to the [pprof format](https://github.com/google/pprof/tree/main/proto), provided they follow these general requirements:
* Sample index 0 should be type “cpu” and unit “count”.
* Samples should represent samples of CPU time at the sample location.
* The profile must be symbolized ([Function.name](https://github.com/google/pprof/blob/76d1ae5aea2b3f738f2058d17533b747a1a5cd01/proto/profile.proto#L208) must be set).
* Samples must contain stack frames for inlined functions.
If inlined functions are omitted, Go will not be able to maintain iterative stability.
* [Function.start_line](https://github.com/google/pprof/blob/76d1ae5aea2b3f738f2058d17533b747a1a5cd01/proto/profile.proto#L215) must be set.
This is the line number of the start of the function.
i.e., the line containing the `func` keyword.
The Go compiler uses this field to compute line offsets of samples (`Location.Line.line - Function.start_line`).
**Note that many existing pprof converters omit this field.**
_Note: In Go 1.20, DWARF metadata omits function start lines (`DW_AT_decl_line`), which may make it difficult for tools to determine the start line.
This is tracked by https://go.dev/issue/57308, and is expected to be fixed in Go 1.21._