[package]
name = "sccache"
version = "0.2.8-alpha.0"
authors = ["Ted Mielczarek <ted@mielczarek.org>", "Alex Crichton <alex@alexcrichton.com>"]
license = "Apache-2.0"
description = "Sccache is a ccache-like tool. It is used as a compiler wrapper and avoids compilation when possible, storing a cache in a remote storage using the S3 API."
repository = "https://github.com/mozilla/sccache/"
readme = "README.md"
categories = ["command-line-utilities", "development-tools::build-utils"]
keywords = ["ccache"]

[badges]
travis-ci = { repository = "mozilla/sccache" }
appveyor = { repository = "mozilla/sccache" }

[[bin]]
name = "sccache"

[[bin]]
name = "sccache-dist"
required-features = ["dist-server"]
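# Because of `required-features`, the `sccache-dist` binary is only compiled when
# the `dist-server` feature is enabled. A minimal example invocation (assuming
# standard Cargo feature handling; not a documented project command):
#   cargo build --bin sccache-dist --features dist-server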

[dependencies]
arraydeque = { version = "0.4", optional = true }
base64 = "0.9.0"
bincode = "0.9" # TODO: update to 1.0
boxfnonce = "0.1"
byteorder = "1.0"
bytes = "0.4"
chrono = { version = "0.3", optional = true }
clap = "2.23.0"
env_logger = "0.4"
error-chain = { version = "0.11", default-features = false }
filetime = "0.1"
futures = "0.1.11"
futures-cpupool = "0.1"
hyper = { version = "0.11", optional = true }
hyper-tls = { version = "0.1", optional = true }
jobserver = "0.1"
jsonwebtoken = { version = "4.0.1", optional = true }
libc = "0.2.10"
local-encoding = "0.2.0"
log = "0.3.6"
lru-disk-cache = { path = "lru-disk-cache", version = "0.2.0" }
memcached-rs = { version = "0.1", optional = true }
native-tls = "0.1"
num_cpus = "1.0"
number_prefix = "0.2.5"
openssl = { version = "0.9", optional = true }
rand = "0.4.2"
redis = { version = "0.8.0", optional = true }
regex = "0.2"
# Exact dependency since we use the unstable async API
reqwest = { version = "=0.8.6", features = ["unstable"], optional = true }
retry = "0.4.0"
ring = "0.13.2"
# Need https://github.com/tomaka/rouille/pull/185
#rouille = "2.1"
rouille = { git = "https://github.com/tomaka/rouille.git", rev = "7b6b2eb", optional = true }
rust-crypto = { version = "0.2.36", optional = true }
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
strip-ansi-escapes = "0.1"
tar = "0.4"
tempdir = "0.3.4"
tempfile = "2.1.5"
time = "0.1.35"
tokio-core = "0.1.6"
tokio-io = "0.1"
tokio-process = "0.1"
tokio-proto = "0.1"
tokio-serde-bincode = "0.1"
tokio-service = "0.1"
tokio-tls = "0.1"
toml = "0.4"
uuid = { version = "0.6", features = ["v4"] }
url = { version = "1.0", optional = true }
which = "1.0"
zip = { version = "0.4", default-features = false, features = ["deflate"] }
lazy_static = "1.0.0"
atty = "0.2.6"
directories = "0.8.4"

crossbeam-utils = { version = "0.4", optional = true }
flate2 = { version = "1.0", optional = true, default-features = false, features = ["rust_backend"] }
libmount = { version = "0.1.10", optional = true }
nix = { version = "0.11.0", optional = true }

[dev-dependencies]
assert_cmd = "0.6.0"
cc = "1.0"
chrono = "0.3"
itertools = "0.7"
predicates = "0.5.2"

[target.'cfg(unix)'.dependencies]
daemonize = "0.2.3"
tokio-uds = "0.1"

[target.'cfg(windows)'.dependencies]
kernel32-sys = "0.2.2"
winapi = "0.2"
mio-named-pipes = "0.1"

[features]
default = ["s3"]
all = ["redis", "s3", "memcached", "gcs", "azure"]
# gcs requires openssl, which is a pain on Windows.
all-windows = ["redis", "s3", "memcached", "azure"]
azure = ["chrono", "hyper", "hyper-tls", "rust-crypto"]
s3 = ["chrono", "hyper", "hyper-tls", "rust-crypto", "simple-s3"]
simple-s3 = []
gcs = ["chrono", "hyper", "hyper-tls", "jsonwebtoken", "openssl", "url"]
memcached = ["memcached-rs"]
# Enable features that require unstable features of Nightly Rust.
unstable = []
# Enables distributed support in the sccache client
dist-client = ["reqwest"]
# Enables the sccache-dist binary
dist-server = ["arraydeque", "crossbeam-utils", "jsonwebtoken", "flate2", "libmount", "nix", "reqwest", "rouille"]
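# Feature-selection sketch (assumes standard Cargo feature handling; these
# commands are illustrative, not taken from the project docs):
#   cargo build                         # default: S3 cache backend only
#   cargo build --features all          # every cache backend, including gcs
#   cargo build --features all-windows  # backends that avoid openssl on Windows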

[workspace]
exclude = ["tests/test-crate"]

[patch.crates-io]
predicates = { git = "https://github.com/luser/predicates-rs", branch = "function-unsized" }