The S3 and Redis caches are optional at build time.
Enable feature "s3" and/or "redis" to build sccache with
support for those backends. Only the local disk cache
is available by default. The "all" feature enables both.
This patch changes the previous default behavior, which
always built in S3 support.
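For illustration, here's roughly what the feature gating looks like on the Rust side (the module names are placeholders, not the actual layout); the "all" feature simply enables both backend features in Cargo.toml:

```rust
// Hypothetical cache module tree: each backend is compiled only when its
// Cargo feature is enabled; the local disk cache is unconditional.
#[cfg(feature = "s3")]
mod s3 {
    // S3 storage backend lives here.
}

#[cfg(feature = "redis")]
mod redis {
    // Redis storage backend lives here.
}

mod disk {
    // Local disk cache: always available.
}
```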
This cache module uses a Redis instance. To make sccache use this,
set SCCACHE_REDIS to redis://[:<passwd>@]<hostname>[:port][/<db>].
The maximum and current cache sizes are retrieved via the Redis
INFO and CONFIG GET commands.
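A rough sketch of how those lookups can be done with the `redis` crate (this is illustrative, not the module's actual code; the INFO parsing in particular is simplified):

```rust
use redis::RedisResult;

/// Fetch (current, maximum) cache sizes in bytes from the Redis server.
fn cache_sizes(url: &str) -> RedisResult<(u64, u64)> {
    let client = redis::Client::open(url)?;
    let mut con = client.get_connection()?;
    // Current usage: the `used_memory:<bytes>` line of `INFO memory`.
    let info: String = redis::cmd("INFO").arg("memory").query(&mut con)?;
    let used = info
        .lines()
        .find_map(|line| line.strip_prefix("used_memory:"))
        .and_then(|v| v.trim().parse().ok())
        .unwrap_or(0);
    // Maximum: `CONFIG GET maxmemory` replies with a (key, value) pair;
    // 0 means no limit is configured.
    let (_, max): (String, u64) = redis::cmd("CONFIG")
        .arg("GET")
        .arg("maxmemory")
        .query(&mut con)?;
    Ok((used, max))
}
```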
This commit migrates away from the `protobuf` crate to working directly with
bincode on the wire as the serialization format. This is done by leveraging a
few different crates (a sketch of how they fit together follows this list):
* The `bincode` and `serde_derive` crates are used to define serialization
for Rust structures as well as provide a bincode implementation.
* The `tokio_io::codec::length_delimited` module implements framing via length
prefixes to transform an asynchronous stream of bytes into a literal `Stream`
of `BytesMut`.
* The `tokio_serde_bincode` crate is then used to tie it all together, parsing
these `BytesMut` as the request/response types of sccache.
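Here's a sketch of the shape this takes; the `Request` type is a placeholder rather than sccache's actual protocol enum, and the manual `bincode::deserialize` call stands in for what `tokio_serde_bincode` wires up:

```rust
extern crate bincode;
extern crate futures;
#[macro_use]
extern crate serde_derive;
extern crate tokio_io;

use std::io;

use futures::Stream;
use tokio_io::AsyncRead;
use tokio_io::codec::length_delimited;

#[derive(Serialize, Deserialize, Debug)]
enum Request {
    GetStats,
    ZeroStats,
}

// Length-prefix framing turns the raw byte stream into a Stream of
// BytesMut frames; bincode then decodes each frame into a Request.
fn requests<T: AsyncRead + 'static>(io: T) -> Box<Stream<Item = Request, Error = io::Error>> {
    let frames = length_delimited::FramedRead::new(io);
    Box::new(frames.and_then(|frame| {
        bincode::deserialize(&frame)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }))
}
```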
Most of the changes here are related to moving away from the protobuf API
throughout the codebase (e.g. `has_foo` and `take_foo`) towards a more
rustic-ish API that just uses enums/structs. Overall it felt quite natural (as
one would expect) to just use the raw enum/struct values.
This may not be quite as performant as before, but that doesn't really matter
for sccache's use case, where perf is hugely dominated by actually compiling
and hashing, so I'm not too worried about it.
My personal motivation for this is twofold:
1. Using `protobuf` was a little clunky throughout the codebase and definitely
had some sharp edges that felt good to smooth out.
2. There's currently what I believe to be a mysterious segfault and/or stray
write happening in sccache, and I'm not sure where. The `protobuf` crate had
a lot of `unsafe` code, and in lieu of actually auditing it I figured it'd be
good to kill two birds with one stone. I have no idea if this fixes my
segfault problem (I never could reproduce it), but I figured it's worth a shot.
This is just an update of the Tokio stack and doesn't have much impact on
sccache itself, but I figured it'd be good to update a few deps and pick up
recent versions of things!
Plus, I'd like to start using the length_delimited module soon...
This should help ensure that we don't wait *too* long for the cache to respond
(for example on an excessively slow network) and time out the server
unnecessarily.
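A sketch of the mechanism (futures 0.1 / tokio-core era; the five-second duration is an arbitrary placeholder): race the lookup against a timer, and treat a timeout as a cache miss rather than an error.

```rust
use std::io;
use std::time::Duration;

use futures::Future;
use futures::future::Either;
use tokio_core::reactor::{Handle, Timeout};

// Wrap a cache-lookup future so a slow cache yields None (a miss)
// instead of stalling the whole request.
fn with_timeout<F, T>(f: F, handle: &Handle) -> Box<Future<Item = Option<T>, Error = io::Error>>
where
    F: Future<Item = T, Error = io::Error> + 'static,
    T: 'static,
{
    let timeout = Timeout::new(Duration::from_secs(5), handle)
        .expect("failed to create timeout");
    Box::new(f.select2(timeout).then(|res| match res {
        Ok(Either::A((value, _timeout))) => Ok(Some(value)), // cache answered in time
        Ok(Either::B(((), _lookup))) => Ok(None),            // timed out: treat as a miss
        Err(Either::A((e, _))) | Err(Either::B((e, _))) => Err(e),
    }))
}
```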
I believe this is possible to hit if a compilation takes long enough that the
server itself times out: the server may shut down due to being idle while
we're still connected, causing us to receive an `UnexpectedEof`.
HTTP requires that all timestamps be in GMT. Posts to S3 were using
localtime with an offset instead. Ceph's radosgw is picky enough
to care.
http://tracker.ceph.com/issues/3973
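For reference, a sketch of the fix with the old `time` crate:

```rust
extern crate time;

// Build an HTTP Date header value in GMT (RFC 7231 format), e.g.
// "Thu, 02 Feb 2017 08:49:37 GMT". Using now_utc() instead of now()
// avoids the localtime-with-offset form that radosgw rejects.
fn http_date() -> String {
    time::now_utc().rfc822().to_string()
}
```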
This changeset removes `CompilerInfo` entirely, moving `get_cached_or_compile`
into the `Compiler` trait. The server now deals exclusively with objects of
`Box<Compiler>`.
Also fixes a few other review comments.
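A sketch of the new shape (signatures heavily simplified): with the method on the trait itself, the server only ever needs the trait object.

```rust
trait Compiler {
    // Takes ownership of self for the duration of the compile.
    fn get_cached_or_compile(self: Box<Self>, arguments: Vec<String>) -> String;
}

fn handle_compile(compiler: Box<Compiler>, arguments: Vec<String>) -> String {
    compiler.get_cached_or_compile(arguments)
}
```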
The goal here was to make more of the state that's currently persisted
from `parse_arguments` -> `generate_hash_key` -> `compile` private so
the C compilers and the Rust compiler can store different kinds of state
(a sketch follows the list).
* Split the `Compiler` trait further into a `CompilerHasher` trait,
which now gets returned in a Box from `Compiler::parse_arguments`
as a field of `CompilerArguments`.
* Move the existing `ParsedArguments` struct into compiler/c to make it
specific to C compilers.
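Here's the sketch of the split; all types and signatures are simplified stand-ins for the real ones.

```rust
trait Compiler {
    // Parsing hands back a boxed hasher owning whatever per-language
    // state it needs to carry forward.
    fn parse_arguments(&self, arguments: &[String]) -> CompilerArguments;
}

struct CompilerArguments {
    hasher: Box<CompilerHasher>,
}

trait CompilerHasher {
    // Consumes the parsed state to produce the cache key.
    fn generate_hash_key(self: Box<Self>, preprocessor_output: Vec<u8>) -> String;
}
```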
The goal here was to make the state that persists between running the
preprocessor and running the compiler private, since the Rust compiler
does not have a preprocessor but will likely have other state it would
like to persist. A sketch of the resulting traits follows the list.
* Add a `Compiler` trait to make the interface to compilation generic.
* Add a `Compilation` trait that can be returned from a method on `Compiler`
to hold the preprocessor output that is reused for compilation while still
allowing the calling code to box `Compiler` and `Compilation` as trait
objects.
* Add a `CCompiler` struct that impls `Compiler`, but is generic over a
second `CCompilerImpl` trait for specific C compilers, since most of
the logic of running the preprocessor to generate the hash key is
shared. Move all of `hash_key_from_c_preprocessor_output` into the
`Compiler` impl on `CCompiler`.
* Add {GCC,Clang,MSVC} structs that impl `CCompilerImpl` so they can be
used with `CCompiler`.
* Rework `CompilerKind` to be a simple utility enum and make `CompilerInfo`
actually hold a `Box<Compiler>` and call methods on it directly.
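Here's a compilable sketch of that arrangement; every type and signature is a simplified stand-in rather than the real API:

```rust
trait Compiler {
    // Runs the preprocessor (for C) and produces the hash key plus the
    // boxed state needed to actually compile.
    fn generate_hash_key(&self, arguments: &[String]) -> (String, Box<Compilation>);
}

trait Compilation {
    fn compile(self: Box<Self>) -> String;
}

// Specific C compilers only supply their quirks; the shared
// run-the-preprocessor-and-hash logic lives on CCompiler itself.
trait CCompilerImpl {
    fn name(&self) -> &'static str;
}

struct GCC;
impl CCompilerImpl for GCC {
    fn name(&self) -> &'static str { "gcc" }
}

struct CCompiler<I: CCompilerImpl>(I);

impl<I: CCompilerImpl> Compiler for CCompiler<I> {
    fn generate_hash_key(&self, arguments: &[String]) -> (String, Box<Compilation>) {
        // Stand-in for running the preprocessor and hashing its output.
        let key = format!("{} {}", self.0.name(), arguments.join(" "));
        (key.clone(), Box::new(CCompilation { preprocessed: key }))
    }
}

struct CCompilation {
    preprocessed: String,
}

impl Compilation for CCompilation {
    fn compile(self: Box<Self>) -> String {
        self.preprocessed
    }
}
```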
Pull the "run the preprocessor and generate the hash key" chunk of
`get_cached_or_compile` out into a separate method, in expectation of
making things more generic in a followup commit to support the Rust
compiler, which won't be running a preprocessor.
There are a few major changes here:
* put the refactored bits from `get_cached_or_compile` into a new
`hash_key_from_c_preprocessor_output` function
* added a `generate_hash_key` method to `CompilerKind`, which just calls
the previously mentioned function
* removed `Compiler::compile`, inlined it into `get_cached_or_compile`
since that was the only call site anyway, and it made some lifetime issues
easier
* changed `get_cached_or_compile` to take ownership of `self`, and changed
a few other functions to just take the compiler path as a `&str` instead
of taking a `&Compiler`.
Apply two separate techniques:
* Ensure that core dumps come out of the server by calling `setrlimit` manually
* Install some signal handlers which print out the signal that was received
Hopefully this'll help with debugging issues by at least confirming what happens.
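A sketch of both techniques with the `libc` crate (Unix-only, and real code needs more care about async-signal-safety than shown):

```rust
extern crate libc;

// Raise the core-size limit so a crash actually leaves a core file.
unsafe fn enable_core_dumps() {
    let rlim = libc::rlimit {
        rlim_cur: libc::RLIM_INFINITY,
        rlim_max: libc::RLIM_INFINITY,
    };
    libc::setrlimit(libc::RLIMIT_CORE, &rlim);
}

// Report which signal fired, then crash for real. eprintln! in a signal
// handler isn't strictly safe, but it's good enough as a debugging aid.
extern "C" fn on_signal(sig: libc::c_int) {
    eprintln!("server received signal {}", sig);
    std::process::abort();
}

unsafe fn install_signal_handlers() {
    for &sig in &[libc::SIGSEGV, libc::SIGBUS, libc::SIGILL] {
        libc::signal(sig, on_signal as extern "C" fn(libc::c_int) as libc::sighandler_t);
    }
}
```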
Unfortunately a crucial detail I missed is that the previous incarnation of this
function set `bInheritHandles` to `FALSE` which caused the server process to not
inherit any handles from the parent process. The Rust standard library, however,
unconditionally passes `TRUE` for this variable right now.
I discovered, though, that with CMake the `cmake` process itself would hang
until all of its child pipes were closed, and one of those pipes was one that
leaked all the way into the server process. I unfortunately could not figure out
a way to stop the leak, so I've opted to just revert from the standard library's
spawning mechanism back to calling `CreateProcess` manually.
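For reference, a sketch of the manual spawn with the crucial flag (winapi 0.2-era names; error handling and most parameters elided):

```rust
extern crate kernel32;
extern crate winapi;

use std::ffi::OsStr;
use std::os::windows::ffi::OsStrExt;
use std::{mem, ptr};

fn spawn_detached(exe: &OsStr) -> bool {
    let mut cmd: Vec<u16> = exe.encode_wide().chain(Some(0)).collect();
    unsafe {
        let mut si: winapi::STARTUPINFOW = mem::zeroed();
        si.cb = mem::size_of::<winapi::STARTUPINFOW>() as winapi::DWORD;
        let mut pi: winapi::PROCESS_INFORMATION = mem::zeroed();
        let ok = kernel32::CreateProcessW(
            ptr::null(),          // application name (use the command line)
            cmd.as_mut_ptr(),     // command line
            ptr::null_mut(),      // process attributes
            ptr::null_mut(),      // thread attributes
            winapi::FALSE,        // bInheritHandles: the whole point here
            winapi::CREATE_NEW_PROCESS_GROUP,
            ptr::null_mut(),      // environment (inherit)
            ptr::null(),          // current directory (inherit)
            &mut si,
            &mut pi,
        );
        if ok != winapi::FALSE {
            kernel32::CloseHandle(pi.hThread);
            kernel32::CloseHandle(pi.hProcess);
        }
        ok != winapi::FALSE
    }
}
```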
Previous versions of futures-cpupool depended on crossbeam, which unfortunately
has known (spurious) segfaults, so the updated version no longer uses
crossbeam and instead relies on channels from the standard library.
I was sporadically receiving a segfault locally when trying to debug issues on
Windows and in tracking this down I discovered blackbeam/named_pipe#3 which
leads to segfaults locally on startup.
This switches the one use case to the relevant functionality in
`mio-named-pipes` (already pulled in as part of `tokio-process`) and then
otherwise mirrors the same logic as the Unix version, just waiting for a byte
with a timeout.
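For comparison, the Unix-side logic being mirrored looks roughly like this (the path and duration are placeholders):

```rust
use std::io::Read;
use std::os::unix::net::UnixStream;
use std::time::Duration;

// Connect to the server's socket and wait for its single readiness
// byte, bounded by a timeout so a hung server can't stall us forever.
fn wait_for_server(path: &str) -> std::io::Result<u8> {
    let mut stream = UnixStream::connect(path)?;
    stream.set_read_timeout(Some(Duration::from_secs(5)))?;
    let mut byte = [0u8; 1];
    stream.read_exact(&mut byte)?;
    Ok(byte[0])
}
```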
For the hash calculation the full argument list should be used,
not only the common args from the parsed list.
A single string allocation should also be faster than an individual
SHA update for each argument.
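A sketch with the `sha1` crate (the hasher itself is incidental): hash the full argument list with one update instead of one update per argument.

```rust
extern crate sha1;

fn hash_arguments(arguments: &[String]) -> String {
    let mut hasher = sha1::Sha1::new();
    // One allocation, one update; the NUL separator keeps ["ab", "c"]
    // distinct from ["a", "bc"].
    hasher.update(arguments.join("\0").as_bytes());
    hasher.digest().to_string()
}
```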