
sccache - Shared Compilation Cache

sccache is a ccache-like compiler caching tool. It is used as a compiler wrapper and avoids compilation when possible, storing cached results either on local disk or in one of several cloud storage backends.

sccache includes support for caching the compilation of C/C++ code, Rust code, and NVIDIA's CUDA code compiled with nvcc.

sccache also provides icecream-style distributed compilation (automatic packaging of local toolchains) for all supported compilers (including Rust). The distributed compilation system includes several security features that icecream lacks, such as authentication, transport layer encryption, and sandboxed compiler execution on build servers. See the distributed quickstart guide for more information.


Installation

There are prebuilt x86-64 binaries available for Windows, Linux (a portable binary compiled against musl), and macOS on the releases page. Several package managers also include sccache packages; alternatively, you can install the latest release from source using cargo or build directly from a source checkout.

macOS

On macOS sccache can be installed via Homebrew:

brew install sccache

Windows

On Windows, sccache can be installed via scoop:

scoop install sccache

Via cargo

If you have a Rust toolchain installed you can install sccache using cargo. Note that this will compile sccache from source which is fairly resource-intensive. For CI purposes you should use prebuilt binary packages.

cargo install sccache

Usage

Running sccache is like running ccache: prefix your compilation commands with it, like so:

sccache gcc -o foo.o -c foo.c

If you want to use sccache for caching Rust builds you can define build.rustc-wrapper in the cargo configuration file. For example, you can set it globally in $HOME/.cargo/config.toml by adding:

[build]
rustc-wrapper = "/path/to/sccache"

Note that you need to use cargo 1.40 or newer for this to work.

Alternatively you can use the environment variable RUSTC_WRAPPER:

export RUSTC_WRAPPER=/path/to/sccache
cargo build

sccache supports gcc, clang, MSVC, rustc, NVCC, and Wind River's diab compiler.

If you don't specify otherwise, sccache will use a local disk cache.

sccache works using a client-server model, where the server runs locally on the same machine as the client. The client-server model allows the server to be more efficient by keeping some state in memory. The sccache command will spawn a server process if one is not already running, or you can run sccache --start-server to start the background server process without performing any compilation.

You can run sccache --stop-server to terminate the server. It will also terminate after (by default) 10 minutes of inactivity.

Running sccache --show-stats will print a summary of cache statistics.
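
For example, a typical manual session with the local server could look like this (purely illustrative; the server is otherwise started automatically by the first compilation):

sccache --start-server   # start the background server explicitly
sccache --show-stats     # print cache statistics
sccache --stop-server    # shut the server down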

Some notes about using sccache with Jenkins are in docs/Jenkins.md.

To use sccache with cmake, provide the following command line arguments to cmake 3.4 or newer:

-DCMAKE_C_COMPILER_LAUNCHER=sccache
-DCMAKE_CXX_COMPILER_LAUNCHER=sccache

To generate PDB files for debugging with MSVC, you can use the /Z7 option. Alternatively, the /Zi option together with /Fd can work if /Fd names a different PDB file name for each object file created. Note that CMake sets /Zi by default, so if you use CMake, you can use /Z7 by adding code like this in your CMakeLists.txt:

if(CMAKE_BUILD_TYPE STREQUAL "Debug")
  string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")
  string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG}")
elseif(CMAKE_BUILD_TYPE STREQUAL "Release")
  string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE}")
  string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE}")
elseif(CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo")
  string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
  string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO}")
endif()

By default, sccache will fail your build if it cannot successfully communicate with its associated server. To have sccache instead gracefully fall back to the local compiler without stopping the build, set the environment variable SCCACHE_IGNORE_SERVER_IO_ERROR=1.


Build Requirements

sccache is a Rust program. Building it requires cargo (and thus rustc). sccache currently requires Rust 1.60.0. We recommend you install Rust via Rustup.

Build

If you are building sccache for non-development purposes make sure you use cargo build --release to get optimized binaries:

cargo build --release [--no-default-features --features=s3|redis|gcs|memcached|azure]

By default, sccache builds with support for all storage backends. Individual backends may be disabled by turning off the default features and explicitly enabling only the backends you want, as in the example below. Refer to the Cargo documentation for details on how to select features with Cargo.
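
For example, to build a binary that supports only the S3 backend (any of the other feature names listed above could be substituted), the invocation could look like this:

cargo build --release --no-default-features --features=s3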

Building portable binaries

When building with the dist-server feature, sccache will depend on OpenSSL, which can be an annoyance if you want to distribute portable binaries. It is possible to statically link against OpenSSL using the openssl/vendored feature.

Linux

Build with cargo and use ldd to check that the resulting binary does not depend on OpenSSL anymore.
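
For example, assuming the release binary was built into the default cargo target directory:

ldd target/release/sccache   # libssl/libcrypto should not appear in the output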

macOS

Build with cargo and use otool -L to check that the resulting binary does not depend on OpenSSL anymore.
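
For example, assuming the default cargo target directory:

otool -L target/release/sccache   # no OpenSSL libraries should be listed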

Windows

On Windows, the binary might also depend on a few MSVC CRT DLLs that are not available on older Windows versions.

It is possible to statically link against the CRT using a .cargo/config.toml file with the following contents.

[target.x86_64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]

Build with cargo and use dumpbin /dependents to check that the resulting binary does not depend on MSVC CRT DLLs anymore.
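
For example, from a Visual Studio developer prompt and assuming the default cargo target directory:

dumpbin /dependents target\release\sccache.exe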

When statically linking with OpenSSL, you will need Perl available in your $PATH.


Storage Options

Local

sccache defaults to using local disk storage. You can set the SCCACHE_DIR environment variable to change the disk cache location. By default it will use a sensible location for the current platform: ~/.cache/sccache on Linux, %LOCALAPPDATA%\Mozilla\sccache on Windows, and ~/Library/Caches/Mozilla.sccache on macOS.

The default cache size is 10 gigabytes. To change this, set SCCACHE_CACHE_SIZE, for example SCCACHE_CACHE_SIZE="1G".

The local storage only supports a single sccache server at a time. Multiple concurrent servers will race and cause spurious build failures.
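
A minimal local-disk configuration, using placeholder values, could look like this (the variables are only read when the server starts, so stop any running server first):

export SCCACHE_DIR=/data/sccache
export SCCACHE_CACHE_SIZE="20G"
sccache --stop-server   # the next compilation starts a server with the new settings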

S3

If you want to use S3 storage for the sccache cache, you need to set the SCCACHE_BUCKET environment variable to the name of the S3 bucket to use.

Credentials are resolved using the default AWS provider chain, including the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, the ~/.aws/credentials file, etc. For more details see https://docs.aws.amazon.com/sdk-for-rust/latest/dg/credentials.html. If multiple profiles are available, you can pick one using the AWS_PROFILE environment variable.

If you do not want to use credentials at all, you can set the SCCACHE_S3_NO_CREDENTIALS environment variable. This requires the bucket to allow public readonly access, and can be useful to implement a readonly cache for pull requests, which typically can't be given access to credentials for security reasons.

You can configure the region using the SCCACHE_REGION environment variable, or specify the region key in ~/.aws/credentials. Alternatively you can specify the endpoint URL using the SCCACHE_ENDPOINT environment variable. To connect to a minio storage for example you can set SCCACHE_ENDPOINT=<ip>:<port>.

If your endpoint requires HTTPS/TLS, set SCCACHE_S3_USE_SSL=true. If you don't need a secure network layer, HTTP (SCCACHE_S3_USE_SSL=false) might be better for performance.

You can also define a prefix that will be prepended to the keys of all cache objects created and read within the S3 bucket, effectively creating a scope. To do that use the SCCACHE_S3_KEY_PREFIX environment variable. This can be useful when sharing a bucket with another application.
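
Putting the S3 options together, a hypothetical setup against a MinIO instance could look like this (the bucket name, address, and prefix are placeholders):

export SCCACHE_BUCKET=my-sccache-bucket
export SCCACHE_ENDPOINT=10.0.0.2:9000
export SCCACHE_S3_USE_SSL=false
export SCCACHE_S3_KEY_PREFIX=myproject/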

Redis

Set SCCACHE_REDIS to a Redis URL in the format redis://[:<passwd>@]<hostname>[:port][/<db>] to store the cache in a Redis instance. Redis can be configured as an LRU (least recently used) cache with a fixed maximum cache size; set maxmemory and maxmemory-policy according to the Redis documentation. The allkeys-lru policy, which discards the least recently accessed or modified keys, fits the sccache use case well.

Redis over TLS is supported. Use the rediss:// URL scheme (note rediss vs redis). Append #insecure to the URL to disable hostname verification and accept self-signed certificates (dangerous!). Note that this also disables SNI.
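
For example, assuming a local Redis instance protected by a password (placeholder values):

export SCCACHE_REDIS=redis://:mypassword@localhost:6379/0
# optional: make Redis behave as a bounded LRU cache, as described above
redis-cli config set maxmemory 10gb
redis-cli config set maxmemory-policy allkeys-lru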

Memcached

Set SCCACHE_MEMCACHED to a Memcached URL in the format tcp://<hostname>:<port> ... to store the cache in a Memcached instance.
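
For example, for a Memcached server running on localhost on the default port:

export SCCACHE_MEMCACHED=tcp://localhost:11211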

Google Cloud Storage

To use Google Cloud Storage, you need to set the SCCACHE_GCS_BUCKET environment variable to the name of the GCS bucket.

If you're using authentication, either:

  • Set SCCACHE_GCS_KEY_PATH to the location of your JSON service account credentials
  • (Deprecated) Set SCCACHE_GCS_CREDENTIALS_URL to a URL returning an OAuth token in non-standard {"accessToken": "...", "expireTime": "..."} format.
  • Set SCCACHE_GCS_OAUTH_URL to a URL returning an OAuth token. If you are running on a Google Cloud instance, this is of the form http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/${YOUR_SERVICE_ACCOUNT}/token

By default, sccache on GCS will be read-only. To change this, set SCCACHE_GCS_RW_MODE to either READ_ONLY or READ_WRITE.

You can also define a prefix that will be prepended to the keys of all cache objects created and read within the GCS bucket, effectively creating a scope. To do that use the SCCACHE_GCS_KEY_PREFIX environment variable. This can be useful when sharing a bucket with another application.

To create such an account, in GCP go to APIs and Services => Cloud Storage => Create credentials => Service account. Once the account is created, click on it, then Keys => Add key => Create new key, and select the JSON format. The downloaded JSON file is what SCCACHE_GCS_KEY_PATH expects. The service account needs Storage Object Admin permissions on the bucket (otherwise, sccache will fail with a plain Permission denied).

To verify that it works, run:

export SCCACHE_GCS_BUCKET=<bucket name in GCP>
export SCCACHE_GCS_KEY_PATH=secret-gcp-storage.json
./sccache --show-stats
# you should see
[...]
Cache location                  GCS, bucket: Bucket(name=<bucket name in GCP>), key_prefix: (none)

Azure

To use Azure Blob Storage, you'll need your Azure connection string and an existing Blob Storage container name. Set the SCCACHE_AZURE_CONNECTION_STRING environment variable to your connection string, and SCCACHE_AZURE_BLOB_CONTAINER to the name of the container to use. Note that sccache will not create the container for you - you'll need to do that yourself.

You can also define a prefix that will be prepended to the keys of all cache objects created and read within the container, effectively creating a scope. To do that use the SCCACHE_AZURE_KEY_PREFIX environment variable. This can be useful when sharing a container with another application.
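
A hypothetical configuration could look like this (the connection string and names are placeholders; the container must already exist):

export SCCACHE_AZURE_CONNECTION_STRING="<your Azure storage connection string>"
export SCCACHE_AZURE_BLOB_CONTAINER=sccache
export SCCACHE_AZURE_KEY_PREFIX=myproject/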

Important: The environment variables are only taken into account when the server starts, i.e. only on the first run.

GitHub Actions

To use the GitHub Actions cache, you need to set the SCCACHE_GHA_CACHE_URL/ACTIONS_CACHE_URL and SCCACHE_GHA_RUNTIME_TOKEN/ACTIONS_RUNTIME_TOKEN environment variables. The SCCACHE_-prefixed variables override the variables without the prefix.

In a GitHub Actions workflow, you can set these environment variables using the following step.

- name: Configure sccache
  uses: actions/github-script@v6
  with:
    script: |
      core.exportVariable('ACTIONS_CACHE_URL', process.env.ACTIONS_CACHE_URL || '');
      core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');      

To write to the cache, set SCCACHE_GHA_CACHE_TO to a cache key, for example sccache-latest. To read from cache key prefixes, set SCCACHE_GHA_CACHE_FROM to a comma-separated list of cache key prefixes, for example sccache-.
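
Using the example keys from above, the two variables could be set like this (in a workflow you would typically put them in the step's or job's env: section rather than exporting them in a shell):

export SCCACHE_GHA_CACHE_TO=sccache-latest
export SCCACHE_GHA_CACHE_FROM=sccache-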

In contrast to the @actions/cache action, which saves a single large archive per cache key, sccache with GHA cache storage saves each cache entry separately.

GHA cache storage will create many small caches that share the cache key specified via SCCACHE_GHA_CACHE_TO and SCCACHE_GHA_CACHE_FROM. These GHA caches are differentiated by their version. The GHA cache implementation in sccache calculates the cache version from the sccache entry key, e.g. the source file path.

For example, if a cache entry has the version main.rs and has GHA cache entries for the sccache-1 and sccache-2 keys, then SCCACHE_GHA_CACHE_FROM=sccache- will match both and return the most recent entry.

This behavior is useful for scoping caches from different versions of Rust or for cross-platform builds (rust-sdk-{RUST_TOOLKIT}-{TARGET_TRIPLE}-), and for allowing newer commits to override older caches by adding the Git SHA as a suffix (-{GITHUB_SHA}).


Separating caches between invocations

In situations where several different compilation invocations should not reuse each other's cached results, you can set SCCACHE_C_CUSTOM_CACHE_BUSTER to a unique value that will be mixed into the hash. The MACOSX_DEPLOYMENT_TARGET and IPHONEOS_DEPLOYMENT_TARGET variables already exhibit such reuse-suppression behaviour. There are currently no such variables for compiling Rust.
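
For example, two CI pipelines that must not share cache entries could each export a distinct, arbitrary value (the value itself is a placeholder):

export SCCACHE_C_CUSTOM_CACHE_BUSTER=pipeline-a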


Overwriting the cache

In situations where the cache contains broken build artifacts, it can be necessary to overwrite the contents in the cache. That can be achieved by setting the SCCACHE_RECACHE environment variable.
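
For example, to force a build to recompile everything and overwrite the corresponding cache entries (the variable only needs to be set; 1 is used here as a placeholder value):

SCCACHE_RECACHE=1 cargo build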


Debugging

You can set the SCCACHE_ERROR_LOG environment variable to a path and set SCCACHE_LOG to get the server process to redirect its logging there (including the output of unhandled panics, since the server sets RUST_BACKTRACE=1 internally).

SCCACHE_ERROR_LOG=/tmp/sccache_log.txt SCCACHE_LOG=debug sccache

You can also set these environment variables for your build system, for example

SCCACHE_ERROR_LOG=/tmp/sccache_log.txt SCCACHE_LOG=debug cmake --build /path/to/cmake/build/directory

Alternatively, if you are compiling locally, you can run the server manually in foreground mode by running SCCACHE_START_SERVER=1 SCCACHE_NO_DAEMON=1 sccache, and send logging to stderr by setting the SCCACHE_LOG environment variable, for example as shown below. This method is not suitable for CI services because you need to compile in another shell at the same time.

SCCACHE_LOG=debug SCCACHE_START_SERVER=1 SCCACHE_NO_DAEMON=1 sccache

Interaction with GNU make jobserver

sccache provides support for a GNU make jobserver. When the server is started from a process that provides a jobserver, sccache will use that jobserver and provide it to any processes it spawns. (If you are running sccache from a GNU make recipe, you will need to prefix the command with + to get this behavior.) If the sccache server is started without a jobserver present it will create its own with the number of slots equal to the number of available CPU cores.

This is most useful when using sccache for Rust compilation, as rustc supports using a jobserver for parallel codegen, so this ensures that rustc will not overwhelm the system with codegen tasks. Cargo implements its own jobserver (see the information on NUM_JOBS in the cargo documentation) for rustc to use, so using sccache for Rust compilation in cargo via RUSTC_WRAPPER should do the right thing automatically.


Known Caveats

General

  • Absolute paths to files must match to get a cache hit. This means that even if you are using a shared cache, everyone will have to build at the same absolute path (i.e. not in $HOME) in order to benefit each other. In Rust this includes the source for third party crates which are stored in $HOME/.cargo/registry/cache by default.

Rust

  • Crates that invoke the system linker cannot be cached. This includes bin, dylib, cdylib, and proc-macro crates. You may be able to improve compilation time of large bin crates by converting them to a lib crate with a thin bin wrapper.
  • Incrementally compiled crates cannot be cached. By default, in the debug profile Cargo will use incremental compilation for workspace members and path dependencies. You can disable incremental compilation (see the example below).
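
A minimal way to turn incremental compilation off for a Cargo invocation is the standard CARGO_INCREMENTAL environment variable (a Cargo feature, not an sccache one):

CARGO_INCREMENTAL=0 cargo build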

More details on Rust caveats can be found in docs/Rust.md.

  • Symbolic links to sccache won't work. Use hardlinks: ln sccache /usr/local/bin/cc