sccache/Cargo.toml

[package]
name = "sccache"
version = "0.1.1-pre"
authors = ["Ted Mielczarek <ted@mielczarek.org>"]
license = "Apache-2.0"
description = "Sccache is a ccache-like tool. It is used as a compiler wrapper and avoids compilation when possible, storing cached results in remote storage via the S3 API."
repository = "https://github.com/mozilla/sccache/"
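# Illustrative usage as a compiler wrapper (not part of the manifest): prefix the real
# compiler command with `sccache`, e.g. `sccache cc -c foo.c -o foo.o`.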

[dependencies]
app_dirs = "1.1.1"
bytes = "0.4"
chrono = "0.2.25"
clap = "2.3.0"
env_logger = "0.3.3"
error-chain = { version = "0.7.2", default-features = false }
fern = "0.3.5"
filetime = "0.1"
futures = "0.1.11"
futures-cpupool = "0.1"
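# hyper and hyper-tls are pinned to git branches with Tokio support; presumably the
# published crates.io releases are not yet Tokio-based (inferred from the branch names).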
hyper = { git = "https://github.com/alexcrichton/hyper", branch = "tio" }
hyper-tls = { git = "https://github.com/alexcrichton/hyper-tls", branch = "tio" }
libc = "0.2.10"
local-encoding = "0.2.0"
log = "0.3.6"
lru-disk-cache = { path = "lru-disk-cache" }
number_prefix = "0.2.5"
protobuf = "1.0.18"
regex = "0.1.65"
retry = "0.4.0"
rust-crypto = "0.2.36"
rustc-serialize = "0.3"
serde_json = "0.8.0"
sha1 = "0.2.0"
tempdir = "0.3.4"
time = "0.1.35"
tokio-core = "0.1.6"
tokio-proto = "0.1"
tokio-io = "0.1"
tokio-service = "0.1"
tokio-tls = "0.1"
tokio-process = "0.1"
uuid = { version = "0.3.1", features = ["v4"] }
which = "0.2.1"
zip = { version = "0.2", default-features = false }

[target.'cfg(unix)'.dependencies]
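# Unix-only: daemonize runs a process as a Unix daemon; tokio-uds provides Unix domain
# socket bindings for Tokio.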
daemonize = "0.2.3"
tokio-uds = "0.1"

[target.'cfg(windows)'.dependencies]
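# Windows-only: kernel32-sys and winapi provide raw Win32 API bindings; mio-named-pipes
# adds named-pipe I/O for mio/Tokio.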
kernel32-sys = "0.2.2"
winapi = "0.2"
mio-named-pipes = "0.1"

[features]
default = []
# Enable functionality that requires unstable features of nightly Rust.
unstable = []
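# Illustrative: building with this feature requires a nightly toolchain,
# e.g. `cargo build --features unstable`.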

[profile.release]
debug = true
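# `debug = true` keeps debug info in optimized release builds (larger binaries, but
# usable backtraces and profiling).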

[workspace]
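# The empty [workspace] table marks this package as the workspace root; path
# dependencies such as lru-disk-cache are built as members of the same workspace.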