sccache/Cargo.toml

[package]
name = "sccache"
version = "0.2.8-alpha.0"
authors = ["Ted Mielczarek <ted@mielczarek.org>", "Alex Crichton <alex@alexcrichton.com>"]
license = "Apache-2.0"
description = "Sccache is a ccache-like tool. It is used as a compiler wrapper and avoids compilation when possible, storing a cache in remote storage using the S3 API."
repository = "https://github.com/mozilla/sccache/"
readme = "README.md"
categories = ["command-line-utilities", "development-tools::build-utils"]
keywords = ["ccache"]
[badges]
travis-ci = { repository = "mozilla/sccache" }
appveyor = { repository = "mozilla/sccache" }
[[bin]]
name = "sccache"
[[bin]]
name = "sccache-dist"
required-features = ["dist-server"]
[dependencies]
arraydeque = { version = "0.4", optional = true }
base64 = "0.9.0"
bincode = "0.9" # TODO: update to 1.0
boxfnonce = "0.1"
byteorder = "1.0"
bytes = "0.4"
chrono = { version = "0.4", optional = true }
clap = "2.23.0"
env_logger = "0.5"
error-chain = { version = "0.12", default-features = false }
filetime = "0.2"
futures = "0.1.11"
futures-cpupool = "0.1"
hyper = { version = "0.11", optional = true }
jobserver = "0.1"
jsonwebtoken = { version = "5.0", optional = true }
libc = "0.2.10"
local-encoding = "0.2.0"
log = "0.4"
lru-disk-cache = { path = "lru-disk-cache", version = "0.2.0" }
memcached-rs = { version = "0.3" , optional = true }
num_cpus = "1.0"
number_prefix = "0.2.5"
openssl = { version = "0.10", optional = true }
rand = "0.4.2"
redis = { version = "0.9.0", optional = true }
regex = "1"
# Exact dependency since we use the unstable async API
reqwest = { version = "=0.8.8", features = ["unstable"], optional = true }
retry = "0.4.0"
ring = "0.13.2"
# Need https://github.com/tomaka/rouille/pull/185
#rouille = "2.1"
rouille = { git = "https://github.com/tomaka/rouille.git", rev = "7b6b2eb", optional = true }
rust-crypto = { version = "0.2.36", optional = true }
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
strip-ansi-escapes = "0.1"
tar = "0.4"
tempdir = "0.3.4"
tempfile = "3"
time = "0.1.35"
tokio-core = "0.1.6"
tokio-io = "0.1"
tokio-process = "0.2"
tokio-proto = "0.1"
tokio-serde-bincode = "0.1"
tokio-service = "0.1"
toml = "0.4"
uuid = { version = "0.6", features = ["v4"] }
url = { version = "1.0", optional = true }
which = "2"
zip = { version = "0.4", default-features = false, features = ["deflate"] }
lazy_static = "1.0.0"
atty = "0.2.6"
directories = "1"
crossbeam-utils = { version = "0.5", optional = true }
flate2 = { version = "1.0", optional = true, default-features = false, features = ["rust_backend"] }
libmount = { version = "0.1.10", optional = true }
nix = { version = "0.11.0", optional = true }
[dev-dependencies]
assert_cmd = "0.9"
cc = "1.0"
chrono = "0.4"
itertools = "0.7"
predicates = "0.9.0"
[target.'cfg(unix)'.dependencies]
daemonize = "0.3"
tokio-uds = "0.2"
[target.'cfg(windows)'.dependencies]
kernel32-sys = "0.2.2"
winapi = "0.2"
mio-named-pipes = "0.1"
[features]
default = ["s3"]
all = ["redis", "s3", "memcached", "gcs", "azure"]
# gcs requires openssl, which is a pain on Windows.
all-windows = ["redis", "s3", "memcached", "azure"]
azure = ["chrono", "hyper", "rust-crypto", "url"]
s3 = ["chrono", "hyper", "reqwest", "rust-crypto", "simple-s3"]
simple-s3 = []
gcs = ["chrono", "hyper", "jsonwebtoken", "openssl", "url"]
memcached = ["memcached-rs"]
# Enable features that require unstable features of Nightly Rust.
unstable = []
# Enables distributed support in the sccache client
dist-client = ["reqwest"]
# Enables the sccache-dist binary
dist-server = ["arraydeque", "crossbeam-utils", "jsonwebtoken", "flate2", "libmount", "nix", "reqwest", "rouille"]
[workspace]
exclude = ["tests/test-crate"]