[package]
name = "sccache"
version = "0.1.1-pre"
authors = ["Ted Mielczarek <ted@mielczarek.org>"]
license = "Apache-2.0"
description = "Sccache is a ccache-like tool. It is used as a compiler wrapper and avoids compilation when possible, storing a cache in remote storage using the S3 API."
repository = "https://github.com/mozilla/sccache/"

[dependencies]
app_dirs = "1.1.1"
bytes = "0.4"
chrono = "0.2.25"
clap = "2.3.0"
env_logger = "0.3.3"
error-chain = { version = "0.7.2", default-features = false }
fern = "0.3.5"
filetime = "0.1"
futures = "0.1.11"
# Rewrite the server module with Tokio
#
# This commit rewrites the `server` module of sccache to be backed by Tokio.
# The previous version was written with `mio`, which Tokio is built on but
# which is unfortunately less ergonomic. Tokio is the state of the art for
# asynchronous programming in Rust, and sccache serves as a great testing
# ground for its ergonomics!
#
# The support added here is intended to eventually extend to many other
# operations sccache performs. For example, thread spawning has been replaced
# with a shared `CpuPool` for blocking I/O operations (namely the filesystem).
# Eventually the HTTP requests made by the S3 backend can be integrated with
# the Tokio branch of Hyper to run on the event loop instead of in a worker
# thread. I'd also like to adopt `tokio-process` to move process spawning off
# helper threads, but I'm leaving that to a future commit.
#
# Overall the transition was quite smooth, with the high-level architecture
# looking like this:
#
# * The `tokio-proto` crate is used in streaming mode. The streaming part
#   covers the one RPC sccache receives that requires a second response to be
#   sent later on; that second response is the "response body" in tokio-proto
#   terms.
# * All of sccache's logic lives in an implementation of the `Service` trait.
# * The transport layer is provided by `tokio_core::io::{Framed, Codec}`, and
#   simple serialization/deserialization is performed with protobuf.
#
# Some differences in design:
#
# * The `SccacheService` is for now just a bunch of reference-counted
#   pointers, making it cheap to clone. As the futures it returns progress,
#   each retains a reference to a cloned copy of the `SccacheService`.
#   Before, all this data was stored and manipulated in a struct directly;
#   it's now managed through shared memory.
# * The storage backends share a thread pool with the main server instead of
#   spawning threads.
#
# And finally, some things I've learned along the way:
#
# * Sharing data between futures isn't trivial. It took an explicit decision
#   to use `Rc`, and I'm not sure I'm 100% happy with how the ergonomics
#   played out.
# * Shutdown is pretty tricky. I've tried to carry over all the previous
#   logic, but it required not using `TcpServer` from tokio-proto at the very
#   least, and otherwise a few custom futures to track the various states. I
#   have a hunch that tokio-proto could provide more options out of the box
#   for something like this.
futures-cpupool = "0.1"
hyper = { git = "https://github.com/alexcrichton/hyper", branch = "tio" }
hyper-tls = { git = "https://github.com/alexcrichton/hyper-tls", branch = "tio" }
libc = "0.2.10"
local-encoding = "0.2.0"
log = "0.3.6"
lru-disk-cache = { path = "lru-disk-cache" }
number_prefix = "0.2.5"
protobuf = "1.0.18"
regex = "0.1.65"
retry = "0.4.0"
rust-crypto = "0.2.36"
rustc-serialize = "0.3"
serde_json = "0.8.0"
sha1 = "0.2.0"
tempdir = "0.3.4"
time = "0.1.35"
tokio-core = "0.1.6"
tokio-proto = "0.1"
tokio-io = "0.1"
tokio-service = "0.1"
tokio-tls = "0.1"
tokio-process = "0.1"
uuid = { version = "0.3.1", features = ["v4"] }
which = "0.2.1"
zip = { version = "0.2", default-features = false }

[target.'cfg(unix)'.dependencies]
daemonize = "0.2.3"
tokio-uds = "0.1"

[target.'cfg(windows)'.dependencies]
kernel32-sys = "0.2.2"
winapi = "0.2"
mio-named-pipes = "0.1"

[features]
default = [ ]
# Enable features that require unstable features of Nightly Rust.
unstable = [ ]

[profile.release]
debug = true

[workspace]