Backed out 2 changesets (bug 1596132) for build bustages CLOSED TREE

Backed out changeset 25524fdb85d8 (bug 1596132)
Backed out changeset 133e5bc3493c (bug 1596132)

--HG--
rename : third_party/rust/ryu/examples/upstream_benchmark.rs => third_party/rust/ryu/benchmark/benchmark.rs
rename : third_party/rust/ryu/src/d2s_intrinsics.rs => third_party/rust/ryu/src/mulshift128.rs
rename : third_party/rust/android_logger/LICENSE-MIT => third_party/rust/utf8-ranges/LICENSE-MIT
This commit is contained in:
Bogdan Tara 2019-12-06 17:19:01 +02:00
Parent 50f8e95cc3
Commit 6fdf60c6f6
334 changed files: 9271 additions and 37233 deletions

324
Cargo.lock (generated)

Diff not shown due to its large size.

@@ -1 +0,0 @@
{"files":{"Cargo.toml":"18788b5d8b84916aedc7c85961a8c99f748969e9562663dbfc9704d2263df23d","LICENSE-APACHE":"4d4c32b31308f5a992434c2cf948205852bb2c7bb85cea4c1ab051f41a3eefb3","LICENSE-MIT":"bb3c0c388d2e5efc777ee1a7bc4671188447d5fbbad130aecac9fd52e0010b76","README.md":"56808f9f272c6fad922f23033591464c1403bb5d1f716ee224b6933b90d62e86","src/lib.rs":"ff810c7e6fe722309ea46f9f2a87c10a857f7c6b3563a5986d2d235cdc2109e2"},"package":"b8052e2d8aabbb8d556d6abbcce2a22b9590996c5f849b9c7ce4544a2e3b984e"}

26
third_party/rust/android_log-sys/Cargo.toml (vendored)

@@ -1,26 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g. crates.io) dependencies
#
# If you believe there's an error in this file please file an
# issue against the rust-lang/cargo repository. If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)
[package]
name = "android_log-sys"
version = "0.1.2"
authors = ["Nerijus Arlauskas <nercury@gmail.com>"]
description = "FFI bindings to Android log Library.\n"
documentation = "https://docs.rs/android_log-sys"
readme = "README.md"
keywords = ["ffi", "android", "log"]
categories = ["external-ffi-bindings"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/nercury/android_log-sys-rs"
[lib]
name = "android_log_sys"

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016 The android_log_sys Developers
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

19
third_party/rust/android_log-sys/LICENSE-MIT (vendored)

@@ -1,19 +0,0 @@
Copyright (c) 2016 The android_log_sys Developers
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

17
third_party/rust/android_log-sys/README.md (vendored)

@@ -1,17 +0,0 @@
# Bindings to Android log Library
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally
submitted for inclusion in the work by you, as defined in the Apache-2.0
license, shall be dual licensed as above, without any additional terms or
conditions.

53
third_party/rust/android_log-sys/src/lib.rs (vendored)

@@ -1,53 +0,0 @@
// Copyright 2016 The android_log_sys Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
use std::os::raw;
#[allow(non_camel_case_types)]
pub type c_va_list = raw::c_void;
#[allow(non_camel_case_types)]
pub type c_int = raw::c_int;
#[allow(non_camel_case_types)]
pub type c_char = raw::c_char;
// automatically generated by rust-bindgen
#[derive(Clone, Copy)]
#[repr(isize)]
pub enum LogPriority {
UNKNOWN = 0,
DEFAULT = 1,
VERBOSE = 2,
DEBUG = 3,
INFO = 4,
WARN = 5,
ERROR = 6,
FATAL = 7,
SILENT = 8,
}
#[link(name = "log")]
extern "C" {
pub fn __android_log_write(prio: c_int,
tag: *const c_char,
text: *const c_char)
-> c_int;
pub fn __android_log_print(prio: c_int,
tag: *const c_char,
fmt: *const c_char,
...)
-> c_int;
pub fn __android_log_vprint(prio: c_int,
tag: *const c_char,
fmt: *const c_char,
ap: *mut c_va_list)
-> c_int;
pub fn __android_log_assert(cond: *const c_char,
tag: *const c_char,
fmt: *const c_char,
...);
}
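The extern block above declares the raw liblog entry points. As a minimal sketch, a safe wrapper around `__android_log_write` might look like the following; the off-device fallback branch and the return value `1` are assumptions for illustration, since the real symbol only exists when linking against Android's `liblog`:

```rust
use std::ffi::CString;

// Mirror of two of the vendored LogPriority values shown above.
#[derive(Clone, Copy)]
#[repr(i32)]
enum LogPriority {
    Info = 4,
    Error = 6,
}

// Hypothetical safe wrapper: builds NUL-terminated strings before
// crossing the FFI boundary, since __android_log_write takes *const c_char.
fn log_write(prio: LogPriority, tag: &str, text: &str) -> i32 {
    let tag = CString::new(tag).expect("tag contains interior NUL");
    let text = CString::new(text).expect("text contains interior NUL");
    #[cfg(target_os = "android")]
    unsafe {
        return android_log_sys::__android_log_write(prio as i32, tag.as_ptr(), text.as_ptr());
    }
    #[cfg(not(target_os = "android"))]
    {
        // Off-device stand-in so the sketch compiles and runs anywhere.
        println!("[{}] {}: {}", prio as i32, tag.to_str().unwrap(), text.to_str().unwrap());
        1
    }
}

fn main() {
    assert_eq!(log_write(LogPriority::Info, "mytag", "hello"), 1);
    assert_eq!(LogPriority::Error as i32, 6);
}
```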

@@ -1 +0,0 @@
{"files":{"Cargo.toml":"010e9f9fe058d816ddd587ba2cc3347ed210b8fe3cc6aeb38887da92c13ed310","LICENSE-APACHE":"99938c5864dd33decb62ab20fd883a9b00181d768ae887a4f19b2d0015c41dc9","LICENSE-MIT":"35043211d1b7be8f7e3f9cad27d981f2189ba9a39d9527b275b3c9740298dfe2","README.md":"d778346fe4a52f482e46e7db13dcc2d028afee1712bd4a19b3878491651619e0","src/lib.rs":"5e314322f5235be63c72ed66c00914599520e7e17f23be729a2e69605eebb332"},"package":"8cbd542dd180566fad88fd2729a53a62a734843c626638006a9d63ec0688484e"}

40
third_party/rust/android_logger/Cargo.toml (vendored)

@@ -1,40 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies
#
# If you believe there's an error in this file please file an
# issue against the rust-lang/cargo repository. If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)
[package]
name = "android_logger"
version = "0.8.6"
authors = ["The android_logger Developers"]
description = "A logging implementation for `log` which hooks to android log output.\n"
readme = "README.md"
keywords = ["android", "bindings", "log", "logger"]
categories = ["api-bindings"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/Nercury/android_logger-rs"
[dependencies.android_log-sys]
version = "0.1"
[dependencies.env_logger]
version = "0.7"
default-features = false
[dependencies.lazy_static]
version = "1.0"
[dependencies.log]
version = "0.4"
[features]
default = ["regex"]
regex = ["env_logger/regex"]
[badges.travis-ci]
repository = "Nercury/android_logger-rs"

201
third_party/rust/android_logger/LICENSE-APACHE (vendored)

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016 The android_logger Developers
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

80
third_party/rust/android_logger/README.md (vendored)

@@ -1,80 +0,0 @@
## Send Rust logs to Logcat
[![Version](https://img.shields.io/crates/v/android_logger.svg)](https://crates.io/crates/android_logger)
[![Build Status](https://travis-ci.org/Nercury/android_logger-rs.svg?branch=master)](https://travis-ci.org/Nercury/android_logger-rs)
This library is a drop-in replacement for `env_logger` that outputs messages to
Android's logcat instead. It only works on Android and requires linking against `log`,
which is only available there. With Cargo, the dependency can be declared conditionally:
```toml
[target.'cfg(target_os = "android")'.dependencies]
android_logger = "0.8"
```
Example of initialization on activity creation, with log configuration:
```rust
#[macro_use] extern crate log;
extern crate android_logger;
use log::Level;
use android_logger::{Config,FilterBuilder};
fn native_activity_create() {
android_logger::init_once(
Config::default()
.with_min_level(Level::Trace) // limit log level
.with_tag("mytag") // logs will show under mytag tag
.with_filter( // configure messages for specific crate
FilterBuilder::new()
.parse("debug,hello::crate=error")
.build())
);
trace!("this is a verbose {}", "message");
error!("this is printed by default");
}
```
To allow all logs, use the default configuration with min level Trace:
```rust
#[macro_use] extern crate log;
extern crate android_logger;
use log::Level;
use android_logger::Config;
fn native_activity_create() {
android_logger::init_once(
Config::default().with_min_level(Level::Trace));
}
```
There is a caveat: this library can only be initialized once
(hence the `init_once` function name). However, an Android native activity can be
re-created every time the screen is rotated, resulting in multiple initialization calls.
This library therefore only logs a warning on subsequent `init_once` calls.
This library ensures that logged messages do not overflow Android log message limits
by efficiently splitting messages into chunks.
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally
submitted for inclusion in the work by you, as defined in the Apache-2.0
license, shall be dual licensed as above, without any additional terms or
conditions.
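The chunk-splitting behaviour mentioned above can be sketched roughly as follows. The 4000-byte limit matches the `LOGGING_MSG_MAX_LEN` constant in the crate source; splitting on UTF-8 character boundaries is an illustrative assumption here, not necessarily the crate's exact algorithm:

```rust
// Illustrative sketch: split a message into pieces of at most `max`
// bytes without cutting a UTF-8 character in half.
fn split_message(msg: &str, max: usize) -> Vec<&str> {
    let mut chunks = Vec::new();
    let mut rest = msg;
    while rest.len() > max {
        // Back up from `max` to the nearest character boundary.
        let mut cut = max;
        while !rest.is_char_boundary(cut) {
            cut -= 1;
        }
        let (head, tail) = rest.split_at(cut);
        chunks.push(head);
        rest = tail;
    }
    chunks.push(rest);
    chunks
}

fn main() {
    let msg = "a".repeat(9000);
    let chunks = split_message(&msg, 4000);
    assert_eq!(chunks.len(), 3); // 4000 + 4000 + 1000 bytes
    assert!(chunks.iter().all(|c| c.len() <= 4000));
}
```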

408
third_party/rust/android_logger/src/lib.rs (vendored)

@@ -1,408 +0,0 @@
// Copyright 2016 The android_logger Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
//! A logger which writes to android output.
//!
//! ## Example
//!
//! ```
//! #[macro_use] extern crate log;
//! extern crate android_logger;
//!
//! use log::Level;
//! use android_logger::Config;
//!
//! /// Android code may not have obvious "main", this is just an example.
//! fn main() {
//! android_logger::init_once(
//! Config::default().with_min_level(Level::Trace),
//! );
//!
//! debug!("this is a debug {}", "message");
//! error!("this is printed by default");
//! }
//! ```
//!
//! ## Example with module path filter
//!
//! It is possible to limit log messages to output from a specific crate,
//! and override the logcat tag name (by default, the crate name is used):
//!
//! ```
//! #[macro_use] extern crate log;
//! extern crate android_logger;
//!
//! use log::Level;
//! use android_logger::{Config,FilterBuilder};
//!
//! fn main() {
//! android_logger::init_once(
//! Config::default()
//! .with_min_level(Level::Trace)
//! .with_tag("mytag")
//! .with_filter(FilterBuilder::new().parse("debug,hello::crate=trace").build()),
//! );
//!
//! // ..
//! }
//! ```
#[cfg(target_os = "android")]
extern crate android_log_sys as log_ffi;
#[macro_use]
extern crate lazy_static;
#[macro_use]
extern crate log;
extern crate env_logger;
use std::sync::RwLock;
#[cfg(target_os = "android")]
use log_ffi::LogPriority;
use log::{Level, Log, Metadata, Record};
use std::ffi::{CStr, CString};
use std::mem;
use std::fmt;
use std::ptr;
pub use env_logger::filter::{Filter, Builder as FilterBuilder};
/// Output log to android system.
#[cfg(target_os = "android")]
fn android_log(prio: log_ffi::LogPriority, tag: &CStr, msg: &CStr) {
unsafe {
log_ffi::__android_log_write(
prio as log_ffi::c_int,
tag.as_ptr() as *const log_ffi::c_char,
msg.as_ptr() as *const log_ffi::c_char,
)
};
}
/// Dummy output placeholder for tests.
#[cfg(not(target_os = "android"))]
fn android_log(_priority: Level, _tag: &CStr, _msg: &CStr) {}
/// Underlying android logger backend
pub struct AndroidLogger {
config: RwLock<Config>,
}
impl AndroidLogger {
/// Create new logger instance from config
pub fn new(config: Config) -> AndroidLogger {
AndroidLogger {
config: RwLock::new(config),
}
}
}
lazy_static! {
static ref ANDROID_LOGGER: AndroidLogger = AndroidLogger::default();
}
const LOGGING_TAG_MAX_LEN: usize = 23;
const LOGGING_MSG_MAX_LEN: usize = 4000;
impl Default for AndroidLogger {
/// Create a new logger with default config
fn default() -> AndroidLogger {
AndroidLogger {
config: RwLock::new(Config::default()),
}
}
}
impl Log for AndroidLogger {
fn enabled(&self, _: &Metadata) -> bool {
true
}
fn log(&self, record: &Record) {
let config = self.config
.read()
.expect("failed to acquire android_log filter lock for read");
if !config.filter_matches(record) {
return;
}
// tag must not exceed LOGGING_TAG_MAX_LEN
let mut tag_bytes: [u8; LOGGING_TAG_MAX_LEN + 1] = unsafe { mem::uninitialized() };
let module_path = record.module_path().unwrap_or_default().to_owned();
// If no tag was specified, use module name
let custom_tag = &config.tag;
let tag = custom_tag.as_ref().map(|s| s.as_bytes()).unwrap_or(module_path.as_bytes());
// truncate the tag here to fit into LOGGING_TAG_MAX_LEN
self.fill_tag_bytes(&mut tag_bytes, tag);
// use stack array as C string
let tag: &CStr = unsafe { CStr::from_ptr(mem::transmute(tag_bytes.as_ptr())) };
// message must not exceed LOGGING_MSG_MAX_LEN
// therefore split log message into multiple log calls
let mut writer = PlatformLogWriter::new(record.level(), tag);
// If a custom tag is used, add the module path to the message.
// Use PlatformLogWriter to output chunks if they exceed max size.
let _ = if custom_tag.is_some() {
fmt::write(&mut writer, format_args!("{}: {}", module_path, *record.args()))
} else {
fmt::write(&mut writer, *record.args())
};
// output the remaining message (this would usually be the most common case)
writer.flush();
}
fn flush(&self) {}
}
impl AndroidLogger {
fn fill_tag_bytes(&self, array: &mut [u8], tag: &[u8]) {
if tag.len() > LOGGING_TAG_MAX_LEN {
for (input, output) in tag.iter()
.take(LOGGING_TAG_MAX_LEN - 2)
.chain(b"..\0".iter())
.zip(array.iter_mut())
{
*output = *input;
}
} else {
for (input, output) in tag.iter()
.chain(b"\0".iter())
.zip(array.iter_mut())
{
*output = *input;
}
}
}
}
/// Filter for android logger.
pub struct Config {
log_level: Option<Level>,
filter: Option<env_logger::filter::Filter>,
tag: Option<CString>,
}
impl Default for Config {
fn default() -> Self {
Config {
log_level: None,
filter: None,
tag: None,
}
}
}
impl Config {
/// Change the minimum log level.
///
/// All values above the set level are logged. For example, if
/// `Warn` is set, `Error` messages are logged too, but `Info` isn't.
pub fn with_min_level(mut self, level: Level) -> Self {
self.log_level = Some(level);
self
}
fn filter_matches(&self, record: &Record) -> bool {
if let Some(ref filter) = self.filter {
filter.matches(&record)
} else {
true
}
}
pub fn with_filter(mut self, filter: env_logger::filter::Filter) -> Self {
self.filter = Some(filter);
self
}
pub fn with_tag<S: Into<Vec<u8>>>(mut self, tag: S) -> Self {
self.tag = Some(CString::new(tag).expect("Can't convert tag to CString"));
self
}
}
struct PlatformLogWriter<'a> {
#[cfg(target_os = "android")] priority: LogPriority,
#[cfg(not(target_os = "android"))] priority: Level,
len: usize,
last_newline_index: usize,
tag: &'a CStr,
buffer: [u8; LOGGING_MSG_MAX_LEN + 1],
}
impl<'a> PlatformLogWriter<'a> {
#[cfg(target_os = "android")]
pub fn new(level: Level, tag: &CStr) -> PlatformLogWriter {
PlatformLogWriter {
priority: match level {
Level::Warn => LogPriority::WARN,
Level::Info => LogPriority::INFO,
Level::Debug => LogPriority::DEBUG,
Level::Error => LogPriority::ERROR,
Level::Trace => LogPriority::VERBOSE,
},
len: 0,
last_newline_index: 0,
tag,
buffer: unsafe { mem::uninitialized() },
}
}
#[cfg(not(target_os = "android"))]
pub fn new(level: Level, tag: &CStr) -> PlatformLogWriter {
PlatformLogWriter {
priority: level,
len: 0,
last_newline_index: 0,
tag,
buffer: unsafe { mem::uninitialized() },
}
}
/// Flush some bytes to android logger.
///
/// If there is a newline, flush up to it.
/// If there was no newline, flush all.
///
/// Not guaranteed to flush everything.
fn temporal_flush(&mut self) {
let total_len = self.len;
if total_len == 0 {
return;
}
if self.last_newline_index > 0 {
let copy_from_index = self.last_newline_index;
let remaining_chunk_len = total_len - copy_from_index;
self.output_specified_len(copy_from_index);
self.copy_bytes_to_start(copy_from_index, remaining_chunk_len);
self.len = remaining_chunk_len;
} else {
self.output_specified_len(total_len);
self.len = 0;
}
self.last_newline_index = 0;
}
/// Flush everything remaining to android logger.
fn flush(&mut self) {
let total_len = self.len;
if total_len == 0 {
return;
}
self.output_specified_len(total_len);
self.len = 0;
self.last_newline_index = 0;
}
/// Output buffer up until the \0 which will be placed at `len` position.
fn output_specified_len(&mut self, len: usize) {
let mut last_byte: u8 = b'\0';
mem::swap(&mut last_byte, unsafe {
self.buffer.get_unchecked_mut(len)
});
let msg: &CStr = unsafe { CStr::from_ptr(mem::transmute(self.buffer.as_ptr())) };
android_log(self.priority, self.tag, msg);
*unsafe { self.buffer.get_unchecked_mut(len) } = last_byte;
}
/// Copy `len` bytes from `index` position to starting position.
fn copy_bytes_to_start(&mut self, index: usize, len: usize) {
let src = unsafe { self.buffer.as_ptr().offset(index as isize) };
let dst = self.buffer.as_mut_ptr();
unsafe { ptr::copy(src, dst, len) };
}
}
impl<'a> fmt::Write for PlatformLogWriter<'a> {
fn write_str(&mut self, s: &str) -> fmt::Result {
let mut incoming_bytes = s.as_bytes();
while !incoming_bytes.is_empty() {
let len = self.len;
// write everything possible to buffer and mark last \n
let new_len = len + incoming_bytes.len();
let last_newline = self.buffer[len..LOGGING_MSG_MAX_LEN]
.iter_mut()
.zip(incoming_bytes)
.enumerate()
.fold(None, |acc, (i, (output, input))| {
*output = *input;
if *input == b'\n' {
Some(i)
} else {
acc
}
});
// update last \n index
if let Some(newline) = last_newline {
self.last_newline_index = len + newline;
}
// calculate how many bytes were written
let written_len = if new_len <= LOGGING_MSG_MAX_LEN {
// if the len was not exceeded
self.len = new_len;
new_len - len // written len
} else {
// if new length was exceeded
self.len = LOGGING_MSG_MAX_LEN;
self.temporal_flush();
LOGGING_MSG_MAX_LEN - len // written len
};
incoming_bytes = &incoming_bytes[written_len..];
}
Ok(())
}
}
/// Send a log record to Android logging backend.
///
/// This action does not require initialization. However, without initialization it
/// will use the default filter, which allows all logs.
pub fn log(record: &Record) {
ANDROID_LOGGER.log(record)
}
/// Initializes the global logger with an android logger.
///
/// This can be called many times, but will only initialize logging once,
/// and will not replace any other previously initialized logger.
///
/// It is fine to call this during activity creation, even though it will be
/// called repeatedly on every lifecycle restart (e.g. screen rotation).
pub fn init_once(config: Config) {
if let Err(err) = log::set_logger(&*ANDROID_LOGGER) {
debug!("android_logger: log::set_logger failed: {}", err);
} else {
if let Some(level) = config.log_level {
log::set_max_level(level.to_level_filter());
}
*ANDROID_LOGGER
.config
.write()
.expect("failed to acquire android_log filter lock for write") = config;
}
}

@@ -1 +0,0 @@
{"files":{"CHANGELOG.md":"7c044d74477515ab39287a4caff27eb96daebaed8b9f9b6a1d1c081a7b42d4a7","Cargo.lock":"132c1f881b80a79314567a6993141c6204495fec144cdcec1729f2a3e0fec18b","Cargo.toml":"b60137f1fd54001ca4d8be1d0bbec154225a44c8f4fa3576078bdad55216d357","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"6485b8ed310d3f0340bf1ad1f47645069ce4069dcc6bb46c7d5c6faf41de1fdb","README.md":"0e231c1c4ad51ff0239062297bdaa69aeb34a8692e3f814188ce1e0ade8583d5","examples/custom_default_format.rs":"799c439f61cb711078f8aa584db537a5758c25b90d44767849dae2ad3822885c","examples/custom_format.rs":"ac8323e2febf8b8ff7238bd254fbbbfb3183da5af84f7f3a261fd9ad892c9ab6","examples/custom_logger.rs":"99fb3c9761ad4c5fe73f4ec2a2bd44b4acf6d1f7b7cfaa16bf0373665d3e2a4b","examples/default.rs":"ac96427611784d310704f738c7a29ebddd7930c8a70ad3c464c4d3eae4cf74a3","examples/direct_logger.rs":"549f6a10e0903d06aca2cc7ba82415b07a23392676101c9bc7aa72b4a9b0b9e2","examples/filters_from_code.rs":"84bd82803683d19ae96f85edcf4ee38cda028c2dbde923dddecc8563453b18e2","src/filter/mod.rs":"de471579c5db400c5ed11b9d7c9fc62686068b42798c58f7165806319ab7ec09","src/filter/regex.rs":"5fff47d1d4d0aa3f2bab90636127d3e72aebf800c3b78faba99637220ffdf865","src/filter/string.rs":"52bbd047c31a1afdb3cd1c11629b956f21b3f47bf22e06421baf3d693a045e59","src/fmt/humantime/extern_impl.rs":"cd2538e7a03fd3ad6c843af3c3d4016ca96cadaefee32cf9b37329c4787e6552","src/fmt/humantime/mod.rs":"408496eb21344c654b9e06da2a2df86de56e427147bb7f7b47851e0da976c003","src/fmt/humantime/shim_impl.rs":"7c2fdf4031f5568b716df14842b0d32bc03ff398763f4849960df7f9632a5bb2","src/fmt/mod.rs":"5104dad2fd14bc18ab6ab46e7c2bc5752b509d9fc934fb99f0ebc126728f8f04","src/fmt/writer/atty.rs":"3e9fd61d291d0919f7aa7119a26dd15d920df8783b4ae57bcf2c3cb6f3ff06b5","src/fmt/writer/mod.rs":"583f6616e0cf21955a530baa332fb7a99bf4fcd418a2367bbd1e733a06a22318","src/fmt/writer/termcolor/extern_impl.rs":"15e048be128568abcdd0ce99dafffe296df26131d4aa05921585761d
31c11db5","src/fmt/writer/termcolor/mod.rs":"a3cf956aec030e0f940e4eaefe58d7703857eb900022286e328e05e5f61de183","src/fmt/writer/termcolor/shim_impl.rs":"bdd479c4e933b14ba02a3c1a9fe30eb51bcdf600e23cebd044d68683fdaad037","src/lib.rs":"2c5ab92ee141022f3e657b0f81e84e5ee4e7fad9fb648204e00ed4fb03d4166f","tests/init-twice-retains-filter.rs":"00524ce0f6779981b695bad1fdd244f87b76c126aeccd8b4ff77ef9e6325478b","tests/log-in-log.rs":"41126910998adfbac771c2a1237fecbc5437344f8e4dfc2f93235bab764a087e","tests/regexp_filter.rs":"44aa6c39de894be090e37083601e501cfffb15e3c0cd36209c48abdf3e2cb120"},"package":"aafcde04e90a5226a6443b7aabdb016ba2f8307c847d524724bd9b346dd1a2d3"}

@@ -1,3 +0,0 @@
Changes to this crate are tracked via [GitHub Releases][releases].
[releases]: https://github.com/sebasmagri/env_logger/releases

212
third_party/rust/env_logger-0.6.2/Cargo.lock (generated, vendored)
@@ -1,212 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
[[package]]
name = "aho-corasick"
version = "0.6.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"memchr 2.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "atty"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.40 (registry+https://github.com/rust-lang/crates.io-index)",
"termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "cfg-if"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "env_logger"
version = "0.6.2"
dependencies = [
"atty 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"humantime 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 1.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "humantime"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "lazy_static"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.40"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "memchr"
version = "2.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.40 (registry+https://github.com/rust-lang/crates.io-index)",
"version_check 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "quick-error"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "redox_syscall"
version = "0.1.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "redox_termios"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"aho-corasick 0.6.9 (registry+https://github.com/rust-lang/crates.io-index)",
"memchr 2.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.5 (registry+https://github.com/rust-lang/crates.io-index)",
"thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
"utf8-ranges 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "regex-syntax"
version = "0.6.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termcolor"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"wincolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "termion"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.40 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "thread_local"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ucd-util"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "utf8-ranges"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "version_check"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.3.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-util"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "wincolor"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-util 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[metadata]
"checksum aho-corasick 0.6.9 (registry+https://github.com/rust-lang/crates.io-index)" = "1e9a933f4e58658d7b12defcf96dc5c720f20832deebe3e0a19efd3b6aaeeb9e"
"checksum atty 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "af80143d6f7608d746df1520709e5d141c96f240b0e62b0aa41bdfb53374d9d4"
"checksum cfg-if 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "082bb9b28e00d3c9d39cc03e64ce4cea0f1bb9b3fde493f0cbc008472d22bdf4"
"checksum humantime 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "3ca7e5f2e110db35f93b837c81797f3714500b81d517bf20c431b16d3ca4f114"
"checksum lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c8f31047daa365f19be14b47c29df4f7c3b581832407daabe6ae77397619237d"
"checksum libc 0.2.40 (registry+https://github.com/rust-lang/crates.io-index)" = "6fd41f331ac7c5b8ac259b8bf82c75c0fb2e469bbf37d2becbba9a6a2221965b"
"checksum log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c84ec4b527950aa83a329754b01dbe3f58361d1c5efacd1f6d68c494d08a17c6"
"checksum memchr 2.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0a3eb002f0535929f1199681417029ebea04aadc0c7a4224b46be99c7f5d6a16"
"checksum quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "9274b940887ce9addde99c4eee6b5c44cc494b182b97e73dc8ffdcb3397fd3f0"
"checksum redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)" = "0d92eecebad22b767915e4d529f89f28ee96dbbf5a4810d2b844373f136417fd"
"checksum redox_termios 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7e891cfe48e9100a70a3b6eb652fef28920c117d366339687bd5576160db0f76"
"checksum regex 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "37e7cbbd370869ce2e8dff25c7018702d10b21a20ef7135316f8daecd6c25b7f"
"checksum regex-syntax 0.6.5 (registry+https://github.com/rust-lang/crates.io-index)" = "8c2f35eedad5295fdf00a63d7d4b238135723f92b434ec06774dad15c7ab0861"
"checksum termcolor 1.0.4 (registry+https://github.com/rust-lang/crates.io-index)" = "4096add70612622289f2fdcdbd5086dc81c1e2675e6ae58d6c4f62a16c6d7f2f"
"checksum termion 1.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "689a3bdfaab439fd92bc87df5c4c78417d3cbe537487274e9b0b2dce76e92096"
"checksum thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c6b53e329000edc2b34dbe8545fd20e55a333362d0a321909685a19bd28c3f1b"
"checksum ucd-util 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "fd2be2d6639d0f8fe6cdda291ad456e23629558d466e2789d2c3e9892bda285d"
"checksum utf8-ranges 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "796f7e48bef87609f7ade7e06495a87d5cd06c7866e6a5cbfceffc558a243737"
"checksum version_check 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "914b1a6776c4c929a602fafd8bc742e06365d4bcbe48c30f9cca5824f70dc9dd"
"checksum winapi 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "04e3bd221fcbe8a271359c04f21a76db7d0c6028862d1bb5512d85e1e2eb5bb3"
"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
"checksum winapi-util 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7168bab6e1daee33b4557efd0e95d5ca70a03706d39fa5f3fe7a236f584b03c9"
"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
"checksum wincolor 1.0.1 (registry+https://github.com/rust-lang/crates.io-index)" = "561ed901ae465d6185fa7864d63fbd5720d0ef718366c9a4dc83cf6170d7e9ba"

57
third_party/rust/env_logger-0.6.2/Cargo.toml (vendored)
@@ -1,57 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies
#
# If you believe there's an error in this file please file an
# issue against the rust-lang/cargo repository. If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)
[package]
name = "env_logger"
version = "0.6.2"
authors = ["The Rust Project Developers"]
description = "A logging implementation for `log` which is configured via an environment\nvariable.\n"
documentation = "https://docs.rs/env_logger"
readme = "README.md"
keywords = ["logging", "log", "logger"]
categories = ["development-tools::debugging"]
license = "MIT/Apache-2.0"
repository = "https://github.com/sebasmagri/env_logger/"
[[test]]
name = "regexp_filter"
harness = false
[[test]]
name = "log-in-log"
harness = false
[[test]]
name = "init-twice-retains-filter"
harness = false
[dependencies.atty]
version = "0.2.5"
optional = true
[dependencies.humantime]
version = "1.1"
optional = true
[dependencies.log]
version = "0.4"
features = ["std"]
[dependencies.regex]
version = "1.0.3"
optional = true
[dependencies.termcolor]
version = "1.0.2"
optional = true
[features]
default = ["termcolor", "atty", "humantime", "regex"]

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

third_party/rust/env_logger-0.6.2/LICENSE-MIT (vendored)

@ -1,25 +0,0 @@
Copyright (c) 2014 The Rust Project Developers
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

third_party/rust/env_logger-0.6.2/README.md (vendored)

@ -1,152 +0,0 @@
env_logger [![Build Status](https://travis-ci.org/sebasmagri/env_logger.svg?branch=master)](https://travis-ci.org/sebasmagri/env_logger) [![Maintenance](https://img.shields.io/badge/maintenance-actively%20maintained-brightgreen.svg)](https://github.com/sebasmagri/env_logger) [![crates.io](https://img.shields.io/crates/v/env_logger.svg)](https://crates.io/crates/env_logger) [![Documentation](https://img.shields.io/badge/docs-current-blue.svg)](https://docs.rs/env_logger)
==========
Implements a logger that can be configured via environment variables.
## Usage
### In libraries
`env_logger` makes sense when used in executables (binary projects). Libraries should use the [`log`](https://doc.rust-lang.org/log) crate instead.
### In executables
It must be added along with `log` to the project dependencies:
```toml
[dependencies]
log = "0.4.0"
env_logger = "0.6.2"
```
`env_logger` must be initialized as early as possible in the project. After it's initialized, you can use the `log` macros to do actual logging.
```rust
#[macro_use]
extern crate log;
extern crate env_logger;
fn main() {
env_logger::init();
info!("starting up");
// ...
}
```
Then when running the executable, specify a value for the `RUST_LOG`
environment variable that corresponds with the log messages you want to show.
```bash
$ RUST_LOG=info ./main
[2018-11-03T06:09:06Z INFO default] starting up
```
`env_logger` can be configured in other ways besides an environment variable. See [the examples](https://github.com/sebasmagri/env_logger/tree/master/examples) for more approaches.
### In tests
Tests can use the `env_logger` crate to see log messages generated during the test:
```toml
[dependencies]
log = "0.4.0"
[dev-dependencies]
env_logger = "0.6.2"
```
```rust
#[macro_use]
extern crate log;
fn add_one(num: i32) -> i32 {
info!("add_one called with {}", num);
num + 1
}
#[cfg(test)]
mod tests {
use super::*;
extern crate env_logger;
fn init() {
let _ = env_logger::builder().is_test(true).try_init();
}
#[test]
fn it_adds_one() {
init();
info!("can log from the test too");
assert_eq!(3, add_one(2));
}
#[test]
fn it_handles_negative_numbers() {
init();
info!("logging from another test");
assert_eq!(-7, add_one(-8));
}
}
```
Assuming the module under test is called `my_lib`, running the tests with
`RUST_LOG` filtering to info messages from this module looks like:
```bash
$ RUST_LOG=my_lib=info cargo test
Running target/debug/my_lib-...
running 2 tests
[INFO my_lib::tests] logging from another test
[INFO my_lib] add_one called with -8
test tests::it_handles_negative_numbers ... ok
[INFO my_lib::tests] can log from the test too
[INFO my_lib] add_one called with 2
test tests::it_adds_one ... ok
test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured
```
Note that `env_logger::try_init()` needs to be called in each test in which you
want to enable logging. Additionally, because tests run in parallel by default,
their logging output may be interleaved. Either run the tests on a single
thread by setting `RUST_TEST_THREADS=1`, or run a single test by passing its
name as an argument to the test binary, as directed by the `cargo test` help
docs:
```bash
$ RUST_LOG=my_lib=info cargo test it_adds_one
Running target/debug/my_lib-...
running 1 test
[INFO my_lib::tests] can log from the test too
[INFO my_lib] add_one called with 2
test tests::it_adds_one ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
```
## Configuring log target
By default, `env_logger` logs to stderr. If you want to log to stdout instead,
you can use the `Builder` to change the log target:
```rust
use std::env;
use env_logger::{Builder, Target};
let mut builder = Builder::from_default_env();
builder.target(Target::Stdout);
builder.init();
```
## Stability of the default format
The default format is not optimised for long-term stability, and explicitly makes no guarantees about the stability of its output across major, minor or patch version bumps during `0.x`.
If you want to capture or interpret the output of `env_logger` programmatically then you should use a custom format.
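The general shape of such a custom format can be sketched without depending on `env_logger`'s API at all (the `stable_format` function below is a hypothetical illustration, not part of the crate):

```rust
// A hypothetical fixed format: "LEVEL target: message". Because the format
// lives in your own code rather than in env_logger's default, its output
// stays stable across env_logger upgrades.
fn stable_format(level: &str, target: &str, args: &str) -> String {
    format!("{} {}: {}", level, target, args)
}

fn main() {
    // With env_logger, a closure producing this string would be passed to
    // `Builder::format`; here it is called directly to show the output shape.
    println!("{}", stable_format("INFO", "my_app", "starting up"));
}
```

A format you own like this can then be parsed back programmatically with confidence.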


@ -1,44 +0,0 @@
/*!
Disabling parts of the default format.
Before running this example, try setting the `MY_LOG_LEVEL` environment variable to `info`:
```no_run,shell
$ export MY_LOG_LEVEL='info'
```
Also try setting the `MY_LOG_STYLE` environment variable to `never` to disable colors
or `auto` to enable them:
```no_run,shell
$ export MY_LOG_STYLE=never
```
If you want to control the logging output completely, see the `custom_logger` example.
*/
#[macro_use]
extern crate log;
extern crate env_logger;
use env_logger::{Env, Builder};
fn init_logger() {
let env = Env::default()
.filter("MY_LOG_LEVEL")
.write_style("MY_LOG_STYLE");
let mut builder = Builder::from_env(env);
builder
.default_format_level(false)
.default_format_timestamp_nanos(true);
builder.init();
}
fn main() {
init_logger();
info!("a log from `MyLogger`");
}


@ -1,54 +0,0 @@
/*!
Changing the default logging format.
Before running this example, try setting the `MY_LOG_LEVEL` environment variable to `info`:
```no_run,shell
$ export MY_LOG_LEVEL='info'
```
Also try setting the `MY_LOG_STYLE` environment variable to `never` to disable colors
or `auto` to enable them:
```no_run,shell
$ export MY_LOG_STYLE=never
```
If you want to control the logging output completely, see the `custom_logger` example.
*/
#[macro_use]
extern crate log;
extern crate env_logger;
use std::io::Write;
use env_logger::{Env, Builder, fmt};
fn init_logger() {
let env = Env::default()
.filter("MY_LOG_LEVEL")
.write_style("MY_LOG_STYLE");
let mut builder = Builder::from_env(env);
// Use a different format for writing log records
// The colors are only available when the `termcolor` dependency is enabled (which it is by default)
#[cfg(feature = "termcolor")]
builder.format(|buf, record| {
let mut style = buf.style();
style.set_bg(fmt::Color::Yellow).set_bold(true);
let timestamp = buf.timestamp();
writeln!(buf, "My formatted log ({}): {}", timestamp, style.value(record.args()))
});
builder.init();
}
fn main() {
init_logger();
info!("a log from `MyLogger`");
}


@ -1,60 +0,0 @@
/*!
Using `env_logger` to drive a custom logger.
Before running this example, try setting the `MY_LOG_LEVEL` environment variable to `info`:
```no_run,shell
$ export MY_LOG_LEVEL='info'
```
If you only want to change the way logs are formatted, look at the `custom_format` example.
*/
#[macro_use]
extern crate log;
extern crate env_logger;
use env_logger::filter::Filter;
use log::{Log, Metadata, Record, SetLoggerError};
struct MyLogger {
inner: Filter
}
impl MyLogger {
fn new() -> MyLogger {
use env_logger::filter::Builder;
let mut builder = Builder::from_env("MY_LOG_LEVEL");
MyLogger {
inner: builder.build()
}
}
fn init() -> Result<(), SetLoggerError> {
let logger = Self::new();
log::set_max_level(logger.inner.filter());
log::set_boxed_logger(Box::new(logger))
}
}
impl Log for MyLogger {
fn enabled(&self, metadata: &Metadata) -> bool {
self.inner.enabled(metadata)
}
fn log(&self, record: &Record) {
// Check if the record is matched by the logger before logging
if self.inner.matches(record) {
println!("{} - {}", record.level(), record.args());
}
}
fn flush(&self) { }
}
fn main() {
MyLogger::init().unwrap();
info!("a log from `MyLogger`");
}


@ -1,39 +0,0 @@
/*!
Using `env_logger`.
Before running this example, try setting the `MY_LOG_LEVEL` environment variable to `info`:
```no_run,shell
$ export MY_LOG_LEVEL='info'
```
Also try setting the `MY_LOG_STYLE` environment variable to `never` to disable colors
or `auto` to enable them:
```no_run,shell
$ export MY_LOG_STYLE=never
```
*/
#[macro_use]
extern crate log;
extern crate env_logger;
use env_logger::Env;
fn main() {
// The `Env` lets us tweak which environment variables to read
// and what the default value is if they're missing
let env = Env::default()
.filter_or("MY_LOG_LEVEL", "trace")
.write_style_or("MY_LOG_STYLE", "always");
env_logger::init_from_env(env);
trace!("some trace log");
debug!("some debug log");
info!("some information log");
warn!("some warning log");
error!("some error log");
}


@ -1,40 +0,0 @@
/*!
Using `env_logger::Logger` and the `log::Log` trait directly.
This example doesn't rely on environment variables, or having a static logger installed.
*/
extern crate log;
extern crate env_logger;
fn record() -> log::Record<'static> {
let error_metadata = log::MetadataBuilder::new()
.target("myApp")
.level(log::Level::Error)
.build();
log::Record::builder()
.metadata(error_metadata)
.args(format_args!("Error!"))
.line(Some(433))
.file(Some("app.rs"))
.module_path(Some("server"))
.build()
}
fn main() {
use log::Log;
let stylish_logger = env_logger::Builder::new()
.filter(None, log::LevelFilter::Error)
.write_style(env_logger::WriteStyle::Always)
.build();
let unstylish_logger = env_logger::Builder::new()
.filter(None, log::LevelFilter::Error)
.write_style(env_logger::WriteStyle::Never)
.build();
stylish_logger.log(&record());
unstylish_logger.log(&record());
}


@ -1,19 +0,0 @@
/*!
Specify logging filters in code instead of using an environment variable.
*/
#[macro_use]
extern crate log;
extern crate env_logger;
fn main() {
env_logger::builder()
.filter_level(log::LevelFilter::Trace)
.init();
trace!("some trace log");
debug!("some debug log");
info!("some information log");
warn!("some warning log");
error!("some error log");
}


@ -1,579 +0,0 @@
//! Filtering for log records.
//!
//! This module contains the log filtering used by `env_logger` to match records.
//! You can use the `Filter` type in your own logger implementation to use the same
//! filter parsing and matching as `env_logger`. For more details about the format
//! for directive strings see [Enabling Logging].
//!
//! ## Using `env_logger` in your own logger
//!
//! You can use `env_logger`'s filtering functionality with your own logger.
//! Call [`Builder::parse`] to parse directives from a string when constructing
//! your logger. Call [`Filter::matches`] to check whether a record should be
//! logged based on the parsed filters when log records are received.
//!
//! ```
//! extern crate log;
//! extern crate env_logger;
//! use env_logger::filter::Filter;
//! use log::{Log, Metadata, Record};
//!
//! struct MyLogger {
//! filter: Filter
//! }
//!
//! impl MyLogger {
//! fn new() -> MyLogger {
//! use env_logger::filter::Builder;
//! let mut builder = Builder::new();
//!
//! // Parse a directives string from an environment variable
//! if let Ok(ref filter) = std::env::var("MY_LOG_LEVEL") {
//! builder.parse(filter);
//! }
//!
//! MyLogger {
//! filter: builder.build()
//! }
//! }
//! }
//!
//! impl Log for MyLogger {
//! fn enabled(&self, metadata: &Metadata) -> bool {
//! self.filter.enabled(metadata)
//! }
//!
//! fn log(&self, record: &Record) {
//! // Check if the record is matched by the filter
//! if self.filter.matches(record) {
//! println!("{:?}", record);
//! }
//! }
//!
//! fn flush(&self) {}
//! }
//! # fn main() {}
//! ```
//!
//! [Enabling Logging]: ../index.html#enabling-logging
//! [`Builder::parse`]: struct.Builder.html#method.parse
//! [`Filter::matches`]: struct.Filter.html#method.matches
use std::env;
use std::mem;
use std::fmt;
use log::{Level, LevelFilter, Record, Metadata};
#[cfg(feature = "regex")]
#[path = "regex.rs"]
mod inner;
#[cfg(not(feature = "regex"))]
#[path = "string.rs"]
mod inner;
/// A log filter.
///
/// This struct can be used to determine whether or not a log record
/// should be written to the output.
/// Use the [`Builder`] type to parse and construct a `Filter`.
///
/// [`Builder`]: struct.Builder.html
pub struct Filter {
directives: Vec<Directive>,
filter: Option<inner::Filter>,
}
/// A builder for a log filter.
///
/// It can be used to parse a set of directives from a string before building
/// a [`Filter`] instance.
///
/// ## Example
///
/// ```
/// #[macro_use]
/// extern crate log;
/// extern crate env_logger;
///
/// use std::env;
/// use std::io;
/// use env_logger::filter::Builder;
///
/// fn main() {
/// let mut builder = Builder::new();
///
/// // Parse a logging filter from an environment variable.
/// if let Ok(rust_log) = env::var("RUST_LOG") {
/// builder.parse(&rust_log);
/// }
///
/// let filter = builder.build();
/// }
/// ```
///
/// [`Filter`]: struct.Filter.html
pub struct Builder {
directives: Vec<Directive>,
filter: Option<inner::Filter>,
built: bool,
}
#[derive(Debug)]
struct Directive {
name: Option<String>,
level: LevelFilter,
}
impl Filter {
/// Returns the maximum `LevelFilter` that this filter instance is
/// configured to output.
///
/// # Example
///
/// ```rust
/// extern crate log;
/// extern crate env_logger;
///
/// use log::LevelFilter;
/// use env_logger::filter::Builder;
///
/// fn main() {
/// let mut builder = Builder::new();
/// builder.filter(Some("module1"), LevelFilter::Info);
/// builder.filter(Some("module2"), LevelFilter::Error);
///
/// let filter = builder.build();
/// assert_eq!(filter.filter(), LevelFilter::Info);
/// }
/// ```
pub fn filter(&self) -> LevelFilter {
self.directives.iter()
.map(|d| d.level)
.max()
.unwrap_or(LevelFilter::Off)
}
/// Checks if this record matches the configured filter.
pub fn matches(&self, record: &Record) -> bool {
if !self.enabled(record.metadata()) {
return false;
}
if let Some(filter) = self.filter.as_ref() {
if !filter.is_match(&*record.args().to_string()) {
return false;
}
}
true
}
/// Determines if a log message with the specified metadata would be logged.
pub fn enabled(&self, metadata: &Metadata) -> bool {
let level = metadata.level();
let target = metadata.target();
enabled(&self.directives, level, target)
}
}
impl Builder {
/// Initializes the filter builder with defaults.
pub fn new() -> Builder {
Builder {
directives: Vec::new(),
filter: None,
built: false,
}
}
/// Initializes the filter builder from an environment.
pub fn from_env(env: &str) -> Builder {
let mut builder = Builder::new();
if let Ok(s) = env::var(env) {
builder.parse(&s);
}
builder
}
/// Adds a directive to the filter for a specific module.
pub fn filter_module(&mut self, module: &str, level: LevelFilter) -> &mut Self {
self.filter(Some(module), level)
}
/// Adds a directive to the filter for all modules.
pub fn filter_level(&mut self, level: LevelFilter) -> &mut Self {
self.filter(None, level)
}
/// Adds a directive to the filter.
///
/// The given module (if any) will log at most at the specified level.
/// If no module is provided then the filter will apply to all log messages.
pub fn filter(&mut self,
module: Option<&str>,
level: LevelFilter) -> &mut Self {
self.directives.push(Directive {
name: module.map(|s| s.to_string()),
level,
});
self
}
/// Parses the directives string.
///
/// See the [Enabling Logging] section for more details.
///
/// [Enabling Logging]: ../index.html#enabling-logging
pub fn parse(&mut self, filters: &str) -> &mut Self {
let (directives, filter) = parse_spec(filters);
self.filter = filter;
for directive in directives {
self.directives.push(directive);
}
self
}
/// Build a log filter.
pub fn build(&mut self) -> Filter {
assert!(!self.built, "attempt to re-use consumed builder");
self.built = true;
if self.directives.is_empty() {
// Adds the default filter if none exist
self.directives.push(Directive {
name: None,
level: LevelFilter::Error,
});
} else {
// Sort the directives by length of their name, this allows a
// little more efficient lookup at runtime.
self.directives.sort_by(|a, b| {
let alen = a.name.as_ref().map(|a| a.len()).unwrap_or(0);
let blen = b.name.as_ref().map(|b| b.len()).unwrap_or(0);
alen.cmp(&blen)
});
}
Filter {
directives: mem::replace(&mut self.directives, Vec::new()),
filter: mem::replace(&mut self.filter, None),
}
}
}
impl Default for Builder {
fn default() -> Self {
Builder::new()
}
}
impl fmt::Debug for Filter {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
f.debug_struct("Filter")
.field("filter", &self.filter)
.field("directives", &self.directives)
.finish()
}
}
impl fmt::Debug for Builder {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
if self.built {
f.debug_struct("Filter")
.field("built", &true)
.finish()
} else {
f.debug_struct("Filter")
.field("filter", &self.filter)
.field("directives", &self.directives)
.finish()
}
}
}
/// Parse a logging specification string (e.g: "crate1,crate2::mod3,crate3::x=error/foo")
/// and return a vector with log directives.
fn parse_spec(spec: &str) -> (Vec<Directive>, Option<inner::Filter>) {
let mut dirs = Vec::new();
let mut parts = spec.split('/');
let mods = parts.next();
let filter = parts.next();
if parts.next().is_some() {
eprintln!("warning: invalid logging spec '{}', \
ignoring it (too many '/'s)", spec);
return (dirs, None);
}
mods.map(|m| { for s in m.split(',') {
if s.len() == 0 { continue }
let mut parts = s.split('=');
let (log_level, name) = match (parts.next(), parts.next().map(|s| s.trim()), parts.next()) {
(Some(part0), None, None) => {
// if the single argument is a log-level string or number,
// treat that as a global fallback
match part0.parse() {
Ok(num) => (num, None),
Err(_) => (LevelFilter::max(), Some(part0)),
}
}
(Some(part0), Some(""), None) => (LevelFilter::max(), Some(part0)),
(Some(part0), Some(part1), None) => {
match part1.parse() {
Ok(num) => (num, Some(part0)),
_ => {
eprintln!("warning: invalid logging spec '{}', \
ignoring it", part1);
continue
}
}
},
_ => {
eprintln!("warning: invalid logging spec '{}', \
ignoring it", s);
continue
}
};
dirs.push(Directive {
name: name.map(|s| s.to_string()),
level: log_level,
});
}});
let filter = filter.map_or(None, |filter| {
match inner::Filter::new(filter) {
Ok(re) => Some(re),
Err(e) => {
eprintln!("warning: invalid regex filter - {}", e);
None
}
}
});
return (dirs, filter);
}
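The directive grammar handled by `parse_spec` can be illustrated with a minimal standalone parser (a simplified sketch, not the vendored implementation: it keeps levels as plain strings, defaults bare names to `trace`, and discards the `/filter` suffix instead of compiling it):

```rust
// Parse "name=level" pairs separated by commas, e.g. "crate1::mod1=error,crate2".
// A bare name or an empty level defaults to the most verbose setting.
fn parse_directives(spec: &str) -> Vec<(String, String)> {
    let mods = spec.split('/').next().unwrap_or("");
    let mut dirs = Vec::new();
    for part in mods.split(',').filter(|p| !p.is_empty()) {
        let mut kv = part.splitn(2, '=');
        let name = kv.next().unwrap().trim().to_string();
        let level = kv.next().unwrap_or("trace").trim();
        let level = if level.is_empty() { "trace" } else { level };
        dirs.push((name, level.to_string()));
    }
    dirs
}

fn main() {
    let dirs = parse_directives("crate1::mod1=error,crate2/abc");
    assert_eq!(dirs, vec![
        ("crate1::mod1".to_string(), "error".to_string()),
        ("crate2".to_string(), "trace".to_string()),
    ]);
}
```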
// Check whether a level and target are enabled by the set of directives.
fn enabled(directives: &[Directive], level: Level, target: &str) -> bool {
// Search for the longest match, the vector is assumed to be pre-sorted.
for directive in directives.iter().rev() {
match directive.name {
Some(ref name) if !target.starts_with(&**name) => {},
Some(..) | None => {
return level <= directive.level
}
}
}
false
}
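The reverse scan above only finds the most specific directive because `Builder::build` sorted the directives by ascending name length; that interplay can be shown standalone (a simplified sketch with integer verbosity levels, larger meaning more verbose output is allowed):

```rust
// Directives are assumed pre-sorted by ascending name length, so scanning in
// reverse hits the longest (most specific) matching prefix first.
fn max_verbosity(directives: &[(Option<&str>, u8)], target: &str) -> Option<u8> {
    for (name, level) in directives.iter().rev() {
        match name {
            Some(prefix) if !target.starts_with(*prefix) => continue,
            _ => return Some(*level),
        }
    }
    None
}

fn main() {
    // Sorted by name length: global fallback, then "crate2", then "crate2::mod".
    let dirs = [(None, 1u8), (Some("crate2"), 2), (Some("crate2::mod"), 4)];
    assert_eq!(max_verbosity(&dirs, "crate2::mod1"), Some(4));
    assert_eq!(max_verbosity(&dirs, "crate2"), Some(2));
    assert_eq!(max_verbosity(&dirs, "other"), Some(1));
}
```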
#[cfg(test)]
mod tests {
use log::{Level, LevelFilter};
use super::{Builder, Filter, Directive, parse_spec, enabled};
fn make_logger_filter(dirs: Vec<Directive>) -> Filter {
let mut logger = Builder::new().build();
logger.directives = dirs;
logger
}
#[test]
fn filter_info() {
let logger = Builder::new().filter(None, LevelFilter::Info).build();
assert!(enabled(&logger.directives, Level::Info, "crate1"));
assert!(!enabled(&logger.directives, Level::Debug, "crate1"));
}
#[test]
fn filter_beginning_longest_match() {
let logger = Builder::new()
.filter(Some("crate2"), LevelFilter::Info)
.filter(Some("crate2::mod"), LevelFilter::Debug)
.filter(Some("crate1::mod1"), LevelFilter::Warn)
.build();
assert!(enabled(&logger.directives, Level::Debug, "crate2::mod1"));
assert!(!enabled(&logger.directives, Level::Debug, "crate2"));
}
#[test]
fn parse_default() {
let logger = Builder::new().parse("info,crate1::mod1=warn").build();
assert!(enabled(&logger.directives, Level::Warn, "crate1::mod1"));
assert!(enabled(&logger.directives, Level::Info, "crate2::mod2"));
}
#[test]
fn match_full_path() {
let logger = make_logger_filter(vec![
Directive {
name: Some("crate2".to_string()),
level: LevelFilter::Info
},
Directive {
name: Some("crate1::mod1".to_string()),
level: LevelFilter::Warn
}
]);
assert!(enabled(&logger.directives, Level::Warn, "crate1::mod1"));
assert!(!enabled(&logger.directives, Level::Info, "crate1::mod1"));
assert!(enabled(&logger.directives, Level::Info, "crate2"));
assert!(!enabled(&logger.directives, Level::Debug, "crate2"));
}
#[test]
fn no_match() {
let logger = make_logger_filter(vec![
Directive { name: Some("crate2".to_string()), level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(!enabled(&logger.directives, Level::Warn, "crate3"));
}
#[test]
fn match_beginning() {
let logger = make_logger_filter(vec![
Directive { name: Some("crate2".to_string()), level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(enabled(&logger.directives, Level::Info, "crate2::mod1"));
}
#[test]
fn match_beginning_longest_match() {
let logger = make_logger_filter(vec![
Directive { name: Some("crate2".to_string()), level: LevelFilter::Info },
Directive { name: Some("crate2::mod".to_string()), level: LevelFilter::Debug },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(enabled(&logger.directives, Level::Debug, "crate2::mod1"));
assert!(!enabled(&logger.directives, Level::Debug, "crate2"));
}
#[test]
fn match_default() {
let logger = make_logger_filter(vec![
Directive { name: None, level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(enabled(&logger.directives, Level::Warn, "crate1::mod1"));
assert!(enabled(&logger.directives, Level::Info, "crate2::mod2"));
}
#[test]
fn zero_level() {
let logger = make_logger_filter(vec![
Directive { name: None, level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Off }
]);
assert!(!enabled(&logger.directives, Level::Error, "crate1::mod1"));
assert!(enabled(&logger.directives, Level::Info, "crate2::mod2"));
}
#[test]
fn parse_spec_valid() {
let (dirs, filter) = parse_spec("crate1::mod1=error,crate1::mod2,crate2=debug");
assert_eq!(dirs.len(), 3);
assert_eq!(dirs[0].name, Some("crate1::mod1".to_string()));
assert_eq!(dirs[0].level, LevelFilter::Error);
assert_eq!(dirs[1].name, Some("crate1::mod2".to_string()));
assert_eq!(dirs[1].level, LevelFilter::max());
assert_eq!(dirs[2].name, Some("crate2".to_string()));
assert_eq!(dirs[2].level, LevelFilter::Debug);
assert!(filter.is_none());
}
#[test]
fn parse_spec_invalid_crate() {
// test parse_spec with multiple = in specification
let (dirs, filter) = parse_spec("crate1::mod1=warn=info,crate2=debug");
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, LevelFilter::Debug);
assert!(filter.is_none());
}
#[test]
fn parse_spec_invalid_level() {
// test parse_spec with 'noNumber' as log level
let (dirs, filter) = parse_spec("crate1::mod1=noNumber,crate2=debug");
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, LevelFilter::Debug);
assert!(filter.is_none());
}
#[test]
fn parse_spec_string_level() {
// test parse_spec with 'warn' as log level
let (dirs, filter) = parse_spec("crate1::mod1=wrong,crate2=warn");
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, LevelFilter::Warn);
assert!(filter.is_none());
}
#[test]
fn parse_spec_empty_level() {
// test parse_spec with '' as log level
let (dirs, filter) = parse_spec("crate1::mod1=wrong,crate2=");
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, LevelFilter::max());
assert!(filter.is_none());
}
#[test]
fn parse_spec_global() {
// test parse_spec with no crate
let (dirs, filter) = parse_spec("warn,crate2=debug");
assert_eq!(dirs.len(), 2);
assert_eq!(dirs[0].name, None);
assert_eq!(dirs[0].level, LevelFilter::Warn);
assert_eq!(dirs[1].name, Some("crate2".to_string()));
assert_eq!(dirs[1].level, LevelFilter::Debug);
assert!(filter.is_none());
}
#[test]
fn parse_spec_valid_filter() {
let (dirs, filter) = parse_spec("crate1::mod1=error,crate1::mod2,crate2=debug/abc");
assert_eq!(dirs.len(), 3);
assert_eq!(dirs[0].name, Some("crate1::mod1".to_string()));
assert_eq!(dirs[0].level, LevelFilter::Error);
assert_eq!(dirs[1].name, Some("crate1::mod2".to_string()));
assert_eq!(dirs[1].level, LevelFilter::max());
assert_eq!(dirs[2].name, Some("crate2".to_string()));
assert_eq!(dirs[2].level, LevelFilter::Debug);
assert!(filter.is_some() && filter.unwrap().to_string() == "abc");
}
#[test]
fn parse_spec_invalid_crate_filter() {
let (dirs, filter) = parse_spec("crate1::mod1=error=warn,crate2=debug/a.c");
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate2".to_string()));
assert_eq!(dirs[0].level, LevelFilter::Debug);
assert!(filter.is_some() && filter.unwrap().to_string() == "a.c");
}
#[test]
fn parse_spec_empty_with_filter() {
let (dirs, filter) = parse_spec("crate1/a*c");
assert_eq!(dirs.len(), 1);
assert_eq!(dirs[0].name, Some("crate1".to_string()));
assert_eq!(dirs[0].level, LevelFilter::max());
assert!(filter.is_some() && filter.unwrap().to_string() == "a*c");
}
}


@ -1,29 +0,0 @@
extern crate regex;
use std::fmt;
use self::regex::Regex;
#[derive(Debug)]
pub struct Filter {
inner: Regex,
}
impl Filter {
pub fn new(spec: &str) -> Result<Filter, String> {
match Regex::new(spec){
Ok(r) => Ok(Filter { inner: r }),
Err(e) => Err(e.to_string()),
}
}
pub fn is_match(&self, s: &str) -> bool {
self.inner.is_match(s)
}
}
impl fmt::Display for Filter {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.inner.fmt(f)
}
}


@ -1,22 +0,0 @@
use std::fmt;
#[derive(Debug)]
pub struct Filter {
inner: String,
}
impl Filter {
pub fn new(spec: &str) -> Result<Filter, String> {
Ok(Filter { inner: spec.to_string() })
}
pub fn is_match(&self, s: &str) -> bool {
s.contains(&self.inner)
}
}
impl fmt::Display for Filter {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.inner.fmt(f)
}
}


@ -1,84 +0,0 @@
use std::fmt;
use std::time::SystemTime;
use humantime::{format_rfc3339_nanos, format_rfc3339_seconds};
use ::fmt::Formatter;
pub(in ::fmt) mod glob {
pub use super::*;
}
impl Formatter {
/// Get a [`Timestamp`] for the current date and time in UTC.
///
/// # Examples
///
/// Include the current timestamp with the log record:
///
/// ```
/// use std::io::Write;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let ts = buf.timestamp();
///
/// writeln!(buf, "{}: {}: {}", ts, record.level(), record.args())
/// });
/// ```
///
/// [`Timestamp`]: struct.Timestamp.html
pub fn timestamp(&self) -> Timestamp {
Timestamp(SystemTime::now())
}
/// Get a [`PreciseTimestamp`] for the current date and time in UTC with nanos.
pub fn precise_timestamp(&self) -> PreciseTimestamp {
PreciseTimestamp(SystemTime::now())
}
}
/// An [RFC3339] formatted timestamp.
///
/// The timestamp implements [`Display`] and can be written to a [`Formatter`].
///
/// [RFC3339]: https://www.ietf.org/rfc/rfc3339.txt
/// [`Display`]: https://doc.rust-lang.org/stable/std/fmt/trait.Display.html
/// [`Formatter`]: struct.Formatter.html
pub struct Timestamp(SystemTime);
/// An [RFC3339] formatted timestamp with nanos.
///
/// [RFC3339]: https://www.ietf.org/rfc/rfc3339.txt
#[derive(Debug)]
pub struct PreciseTimestamp(SystemTime);
impl fmt::Debug for Timestamp {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
/// A `Debug` wrapper for `Timestamp` that uses the `Display` implementation.
struct TimestampValue<'a>(&'a Timestamp);
impl<'a> fmt::Debug for TimestampValue<'a> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fmt::Display::fmt(&self.0, f)
}
}
f.debug_tuple("Timestamp")
.field(&TimestampValue(&self))
.finish()
}
}
impl fmt::Display for Timestamp {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
format_rfc3339_seconds(self.0).fmt(f)
}
}
impl fmt::Display for PreciseTimestamp {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
format_rfc3339_nanos(self.0).fmt(f)
}
}


@ -1,11 +0,0 @@
/*
This internal module contains the timestamp implementation.
Its public API is available when the `humantime` crate is available.
*/
#[cfg_attr(feature = "humantime", path = "extern_impl.rs")]
#[cfg_attr(not(feature = "humantime"), path = "shim_impl.rs")]
mod imp;
pub(in ::fmt) use self::imp::*;


@ -1,7 +0,0 @@
/*
Timestamps aren't available when we don't have a `humantime` dependency.
*/
pub(in ::fmt) mod glob {
}


@ -1,358 +0,0 @@
//! Formatting for log records.
//!
//! This module contains a [`Formatter`] that log records can be formatted into
//! without needing temporary allocations. Usually you won't need to worry
//! about the contents of this module and can use the `Formatter` like an ordinary
//! [`Write`].
//!
//! # Formatting log records
//!
//! The format used to print log records can be customised using the [`Builder::format`]
//! method.
//! Custom formats can apply different color and weight to printed values using
//! [`Style`] builders.
//!
//! ```
//! use std::io::Write;
//!
//! let mut builder = env_logger::Builder::new();
//!
//! builder.format(|buf, record| {
//! writeln!(buf, "{}: {}",
//! record.level(),
//! record.args())
//! });
//! ```
//!
//! [`Formatter`]: struct.Formatter.html
//! [`Style`]: struct.Style.html
//! [`Builder::format`]: ../struct.Builder.html#method.format
//! [`Write`]: https://doc.rust-lang.org/stable/std/io/trait.Write.html
use std::io::prelude::*;
use std::{io, fmt, mem};
use std::rc::Rc;
use std::cell::RefCell;
use std::fmt::Display;
use log::Record;
pub(crate) mod writer;
mod humantime;
pub use self::humantime::glob::*;
pub use self::writer::glob::*;
use self::writer::{Writer, Buffer};
pub(crate) mod glob {
pub use super::{Target, WriteStyle};
}
/// A formatter to write logs into.
///
/// `Formatter` implements the standard [`Write`] trait for writing log records.
/// It also supports terminal colors, through the [`style`] method.
///
/// # Examples
///
/// Use the [`writeln`] macro to format a log record.
/// An instance of a `Formatter` is passed to an `env_logger` format as `buf`:
///
/// ```
/// use std::io::Write;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| writeln!(buf, "{}: {}", record.level(), record.args()));
/// ```
///
/// [`Write`]: https://doc.rust-lang.org/stable/std/io/trait.Write.html
/// [`writeln`]: https://doc.rust-lang.org/stable/std/macro.writeln.html
/// [`style`]: #method.style
pub struct Formatter {
buf: Rc<RefCell<Buffer>>,
write_style: WriteStyle,
}
impl Formatter {
pub(crate) fn new(writer: &Writer) -> Self {
Formatter {
buf: Rc::new(RefCell::new(writer.buffer())),
write_style: writer.write_style(),
}
}
pub(crate) fn write_style(&self) -> WriteStyle {
self.write_style
}
pub(crate) fn print(&self, writer: &Writer) -> io::Result<()> {
writer.print(&self.buf.borrow())
}
pub(crate) fn clear(&mut self) {
self.buf.borrow_mut().clear()
}
}
impl Write for Formatter {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.buf.borrow_mut().write(buf)
}
fn flush(&mut self) -> io::Result<()> {
self.buf.borrow_mut().flush()
}
}
impl fmt::Debug for Formatter {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("Formatter").finish()
}
}
pub(crate) struct Builder {
pub default_format_timestamp: bool,
pub default_format_timestamp_nanos: bool,
pub default_format_module_path: bool,
pub default_format_level: bool,
#[allow(unknown_lints, bare_trait_objects)]
pub custom_format: Option<Box<Fn(&mut Formatter, &Record) -> io::Result<()> + Sync + Send>>,
built: bool,
}
impl Default for Builder {
fn default() -> Self {
Builder {
default_format_timestamp: true,
default_format_timestamp_nanos: false,
default_format_module_path: true,
default_format_level: true,
custom_format: None,
built: false,
}
}
}
impl Builder {
/// Convert the format into a callable function.
///
/// If the `custom_format` is `Some`, then any `default_format` switches are ignored.
/// If the `custom_format` is `None`, then a default format is returned.
/// Any `default_format` switches set to `false` won't be written by the format.
#[allow(unknown_lints, bare_trait_objects)]
pub fn build(&mut self) -> Box<Fn(&mut Formatter, &Record) -> io::Result<()> + Sync + Send> {
assert!(!self.built, "attempt to re-use consumed builder");
let built = mem::replace(self, Builder {
built: true,
..Default::default()
});
if let Some(fmt) = built.custom_format {
fmt
}
else {
Box::new(move |buf, record| {
let fmt = DefaultFormat {
timestamp: built.default_format_timestamp,
timestamp_nanos: built.default_format_timestamp_nanos,
module_path: built.default_format_module_path,
level: built.default_format_level,
written_header_value: false,
buf,
};
fmt.write(record)
})
}
}
}
#[cfg(feature = "termcolor")]
type SubtleStyle = StyledValue<'static, &'static str>;
#[cfg(not(feature = "termcolor"))]
type SubtleStyle = &'static str;
/// The default format.
///
/// This format needs to work with any combination of crate features.
struct DefaultFormat<'a> {
timestamp: bool,
module_path: bool,
level: bool,
timestamp_nanos: bool,
written_header_value: bool,
buf: &'a mut Formatter,
}
impl<'a> DefaultFormat<'a> {
fn write(mut self, record: &Record) -> io::Result<()> {
self.write_timestamp()?;
self.write_level(record)?;
self.write_module_path(record)?;
self.finish_header()?;
self.write_args(record)
}
fn subtle_style(&self, text: &'static str) -> SubtleStyle {
#[cfg(feature = "termcolor")]
{
self.buf.style()
.set_color(Color::Black)
.set_intense(true)
.into_value(text)
}
#[cfg(not(feature = "termcolor"))]
{
text
}
}
fn write_header_value<T>(&mut self, value: T) -> io::Result<()>
where
T: Display,
{
if !self.written_header_value {
self.written_header_value = true;
let open_brace = self.subtle_style("[");
write!(self.buf, "{}{}", open_brace, value)
} else {
write!(self.buf, " {}", value)
}
}
fn write_level(&mut self, record: &Record) -> io::Result<()> {
if !self.level {
return Ok(())
}
let level = {
#[cfg(feature = "termcolor")]
{
self.buf.default_styled_level(record.level())
}
#[cfg(not(feature = "termcolor"))]
{
record.level()
}
};
self.write_header_value(format_args!("{:<5}", level))
}
fn write_timestamp(&mut self) -> io::Result<()> {
#[cfg(feature = "humantime")]
{
if !self.timestamp {
return Ok(())
}
if self.timestamp_nanos {
let ts_nanos = self.buf.precise_timestamp();
self.write_header_value(ts_nanos)
} else {
let ts = self.buf.timestamp();
self.write_header_value(ts)
}
}
#[cfg(not(feature = "humantime"))]
{
let _ = self.timestamp;
let _ = self.timestamp_nanos;
Ok(())
}
}
fn write_module_path(&mut self, record: &Record) -> io::Result<()> {
if !self.module_path {
return Ok(())
}
if let Some(module_path) = record.module_path() {
self.write_header_value(module_path)
} else {
Ok(())
}
}
fn finish_header(&mut self) -> io::Result<()> {
if self.written_header_value {
let close_brace = self.subtle_style("]");
write!(self.buf, "{} ", close_brace)
} else {
Ok(())
}
}
fn write_args(&mut self, record: &Record) -> io::Result<()> {
writeln!(self.buf, "{}", record.args())
}
}
#[cfg(test)]
mod tests {
use super::*;
use log::{Level, Record};
fn write(fmt: DefaultFormat) -> String {
let buf = fmt.buf.buf.clone();
let record = Record::builder()
.args(format_args!("log message"))
.level(Level::Info)
.file(Some("test.rs"))
.line(Some(144))
.module_path(Some("test::path"))
.build();
fmt.write(&record).expect("failed to write record");
let buf = buf.borrow();
String::from_utf8(buf.bytes().to_vec()).expect("failed to read record")
}
#[test]
fn default_format_with_header() {
let writer = writer::Builder::new()
.write_style(WriteStyle::Never)
.build();
let mut f = Formatter::new(&writer);
let written = write(DefaultFormat {
timestamp: false,
timestamp_nanos: false,
module_path: true,
level: true,
written_header_value: false,
buf: &mut f,
});
assert_eq!("[INFO test::path] log message\n", written);
}
#[test]
fn default_format_no_header() {
let writer = writer::Builder::new()
.write_style(WriteStyle::Never)
.build();
let mut f = Formatter::new(&writer);
let written = write(DefaultFormat {
timestamp: false,
timestamp_nanos: false,
module_path: false,
level: false,
written_header_value: false,
buf: &mut f,
});
assert_eq!("log message\n", written);
}
}

@@ -1,34 +0,0 @@
/*
This internal module contains the terminal detection implementation.
If the `atty` crate is available then we use it to detect whether we're
attached to a particular TTY. If the `atty` crate is not available we
assume we're not attached to anything. This effectively prevents styles
from being printed.
*/
#[cfg(feature = "atty")]
mod imp {
use atty;
pub(in ::fmt) fn is_stdout() -> bool {
atty::is(atty::Stream::Stdout)
}
pub(in ::fmt) fn is_stderr() -> bool {
atty::is(atty::Stream::Stderr)
}
}
#[cfg(not(feature = "atty"))]
mod imp {
pub(in ::fmt) fn is_stdout() -> bool {
false
}
pub(in ::fmt) fn is_stderr() -> bool {
false
}
}
pub(in ::fmt) use self::imp::*;

@@ -1,206 +0,0 @@
mod termcolor;
mod atty;
use std::{fmt, io};
use self::termcolor::BufferWriter;
use self::atty::{is_stdout, is_stderr};
pub(in ::fmt) mod glob {
pub use super::termcolor::glob::*;
pub use super::*;
}
pub(in ::fmt) use self::termcolor::Buffer;
/// Log target, either `stdout` or `stderr`.
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
pub enum Target {
/// Logs will be sent to standard output.
Stdout,
/// Logs will be sent to standard error.
Stderr,
}
impl Default for Target {
fn default() -> Self {
Target::Stderr
}
}
/// Whether or not to print styles to the target.
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
pub enum WriteStyle {
/// Try to print styles, but don't force the issue.
Auto,
/// Try very hard to print styles.
Always,
/// Never print styles.
Never,
}
impl Default for WriteStyle {
fn default() -> Self {
WriteStyle::Auto
}
}
/// A terminal target with color awareness.
pub(crate) struct Writer {
inner: BufferWriter,
write_style: WriteStyle,
}
impl Writer {
pub fn write_style(&self) -> WriteStyle {
self.write_style
}
pub(in ::fmt) fn buffer(&self) -> Buffer {
self.inner.buffer()
}
pub(in ::fmt) fn print(&self, buf: &Buffer) -> io::Result<()> {
self.inner.print(buf)
}
}
/// A builder for a terminal writer.
///
/// The target and style choice can be configured before building.
pub(crate) struct Builder {
target: Target,
write_style: WriteStyle,
is_test: bool,
built: bool,
}
impl Builder {
/// Initialize the writer builder with defaults.
pub(crate) fn new() -> Self {
Builder {
target: Default::default(),
write_style: Default::default(),
is_test: false,
built: false,
}
}
/// Set the target to write to.
pub(crate) fn target(&mut self, target: Target) -> &mut Self {
self.target = target;
self
}
/// Parses a style choice string.
///
/// See the [Disabling colors] section for more details.
///
/// [Disabling colors]: ../index.html#disabling-colors
pub(crate) fn parse_write_style(&mut self, write_style: &str) -> &mut Self {
self.write_style(parse_write_style(write_style))
}
/// Whether or not to print style characters when writing.
pub(crate) fn write_style(&mut self, write_style: WriteStyle) -> &mut Self {
self.write_style = write_style;
self
}
/// Whether or not to capture logs for `cargo test`.
pub(crate) fn is_test(&mut self, is_test: bool) -> &mut Self {
self.is_test = is_test;
self
}
/// Build a terminal writer.
pub(crate) fn build(&mut self) -> Writer {
assert!(!self.built, "attempt to re-use consumed builder");
self.built = true;
let color_choice = match self.write_style {
WriteStyle::Auto => {
if match self.target {
Target::Stderr => is_stderr(),
Target::Stdout => is_stdout(),
} {
WriteStyle::Auto
} else {
WriteStyle::Never
}
},
color_choice => color_choice,
};
let writer = match self.target {
Target::Stderr => BufferWriter::stderr(self.is_test, color_choice),
Target::Stdout => BufferWriter::stdout(self.is_test, color_choice),
};
Writer {
inner: writer,
write_style: self.write_style,
}
}
}
impl Default for Builder {
fn default() -> Self {
Builder::new()
}
}
impl fmt::Debug for Builder {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("Logger")
.field("target", &self.target)
.field("write_style", &self.write_style)
.finish()
}
}
impl fmt::Debug for Writer {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("Writer").finish()
}
}
fn parse_write_style(spec: &str) -> WriteStyle {
match spec {
"auto" => WriteStyle::Auto,
"always" => WriteStyle::Always,
"never" => WriteStyle::Never,
_ => Default::default(),
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn parse_write_style_valid() {
let inputs = vec![
("auto", WriteStyle::Auto),
("always", WriteStyle::Always),
("never", WriteStyle::Never),
];
for (input, expected) in inputs {
assert_eq!(expected, parse_write_style(input));
}
}
#[test]
fn parse_write_style_invalid() {
let inputs = vec![
"",
"true",
"false",
"NEVER!!"
];
for input in inputs {
assert_eq!(WriteStyle::Auto, parse_write_style(input));
}
}
}

@@ -1,490 +0,0 @@
use std::borrow::Cow;
use std::fmt;
use std::io::{self, Write};
use std::cell::RefCell;
use std::rc::Rc;
use log::Level;
use termcolor::{self, ColorChoice, ColorSpec, WriteColor};
use ::WriteStyle;
use ::fmt::{Formatter, Target};
pub(in ::fmt::writer) mod glob {
pub use super::*;
}
impl Formatter {
/// Begin a new [`Style`].
///
/// # Examples
///
/// Create a bold, red colored style and use it to print the log level:
///
/// ```
/// use std::io::Write;
/// use env_logger::fmt::Color;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut level_style = buf.style();
///
/// level_style.set_color(Color::Red).set_bold(true);
///
/// writeln!(buf, "{}: {}",
/// level_style.value(record.level()),
/// record.args())
/// });
/// ```
///
/// [`Style`]: struct.Style.html
pub fn style(&self) -> Style {
Style {
buf: self.buf.clone(),
spec: ColorSpec::new(),
}
}
/// Get the default [`Style`] for the given level.
///
/// The style can be used to print other values besides the level.
pub fn default_level_style(&self, level: Level) -> Style {
let mut level_style = self.style();
match level {
Level::Trace => level_style.set_color(Color::Black).set_intense(true),
Level::Debug => level_style.set_color(Color::White),
Level::Info => level_style.set_color(Color::Green),
Level::Warn => level_style.set_color(Color::Yellow),
Level::Error => level_style.set_color(Color::Red).set_bold(true),
};
level_style
}
/// Get a printable [`Style`] for the given level.
///
/// The style can only be used to print the level.
pub fn default_styled_level(&self, level: Level) -> StyledValue<'static, Level> {
self.default_level_style(level).into_value(level)
}
}
pub(in ::fmt::writer) struct BufferWriter {
inner: termcolor::BufferWriter,
test_target: Option<Target>,
}
pub(in ::fmt) struct Buffer {
inner: termcolor::Buffer,
test_target: Option<Target>,
}
impl BufferWriter {
pub(in ::fmt::writer) fn stderr(is_test: bool, write_style: WriteStyle) -> Self {
BufferWriter {
inner: termcolor::BufferWriter::stderr(write_style.into_color_choice()),
test_target: if is_test {
Some(Target::Stderr)
} else {
None
},
}
}
pub(in ::fmt::writer) fn stdout(is_test: bool, write_style: WriteStyle) -> Self {
BufferWriter {
inner: termcolor::BufferWriter::stdout(write_style.into_color_choice()),
test_target: if is_test {
Some(Target::Stdout)
} else {
None
},
}
}
pub(in ::fmt::writer) fn buffer(&self) -> Buffer {
Buffer {
inner: self.inner.buffer(),
test_target: self.test_target,
}
}
pub(in ::fmt::writer) fn print(&self, buf: &Buffer) -> io::Result<()> {
if let Some(target) = self.test_target {
// This impl uses the `eprint` and `print` macros
// instead of `termcolor`'s buffer.
// This is so their output can be captured by `cargo test`
let log = String::from_utf8_lossy(buf.bytes());
match target {
Target::Stderr => eprint!("{}", log),
Target::Stdout => print!("{}", log),
}
Ok(())
} else {
self.inner.print(&buf.inner)
}
}
}
impl Buffer {
pub(in ::fmt) fn clear(&mut self) {
self.inner.clear()
}
pub(in ::fmt) fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.inner.write(buf)
}
pub(in ::fmt) fn flush(&mut self) -> io::Result<()> {
self.inner.flush()
}
pub(in ::fmt) fn bytes(&self) -> &[u8] {
self.inner.as_slice()
}
fn set_color(&mut self, spec: &ColorSpec) -> io::Result<()> {
// Ignore styles for test captured logs because they can't be printed
if self.test_target.is_none() {
self.inner.set_color(spec)
} else {
Ok(())
}
}
fn reset(&mut self) -> io::Result<()> {
// Ignore styles for test captured logs because they can't be printed
if self.test_target.is_none() {
self.inner.reset()
} else {
Ok(())
}
}
}
impl WriteStyle {
fn into_color_choice(self) -> ColorChoice {
match self {
WriteStyle::Always => ColorChoice::Always,
WriteStyle::Auto => ColorChoice::Auto,
WriteStyle::Never => ColorChoice::Never,
}
}
}
/// A set of styles to apply to the terminal output.
///
/// Call [`Formatter::style`] to get a `Style` and use the builder methods to
/// set styling properties, like [color] and [weight].
/// To print a value using the style, wrap it in a call to [`value`] when the log
/// record is formatted.
///
/// # Examples
///
/// Create a bold, red colored style and use it to print the log level:
///
/// ```
/// use std::io::Write;
/// use env_logger::fmt::Color;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut level_style = buf.style();
///
/// level_style.set_color(Color::Red).set_bold(true);
///
/// writeln!(buf, "{}: {}",
/// level_style.value(record.level()),
/// record.args())
/// });
/// ```
///
/// Styles can be re-used to output multiple values:
///
/// ```
/// use std::io::Write;
/// use env_logger::fmt::Color;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut bold = buf.style();
///
/// bold.set_bold(true);
///
/// writeln!(buf, "{}: {} {}",
/// bold.value(record.level()),
/// bold.value("some bold text"),
/// record.args())
/// });
/// ```
///
/// [`Formatter::style`]: struct.Formatter.html#method.style
/// [color]: #method.set_color
/// [weight]: #method.set_bold
/// [`value`]: #method.value
#[derive(Clone)]
pub struct Style {
buf: Rc<RefCell<Buffer>>,
spec: ColorSpec,
}
/// A value that can be printed using the given styles.
///
/// It is the result of calling [`Style::value`].
///
/// [`Style::value`]: struct.Style.html#method.value
pub struct StyledValue<'a, T> {
style: Cow<'a, Style>,
value: T,
}
impl Style {
/// Set the text color.
///
/// # Examples
///
/// Create a style with red text:
///
/// ```
/// use std::io::Write;
/// use env_logger::fmt::Color;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut style = buf.style();
///
/// style.set_color(Color::Red);
///
/// writeln!(buf, "{}", style.value(record.args()))
/// });
/// ```
pub fn set_color(&mut self, color: Color) -> &mut Style {
self.spec.set_fg(color.into_termcolor());
self
}
/// Set the text weight.
///
/// If `yes` is true then text will be written in bold.
/// If `yes` is false then text will be written in the default weight.
///
/// # Examples
///
/// Create a style with bold text:
///
/// ```
/// use std::io::Write;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut style = buf.style();
///
/// style.set_bold(true);
///
/// writeln!(buf, "{}", style.value(record.args()))
/// });
/// ```
pub fn set_bold(&mut self, yes: bool) -> &mut Style {
self.spec.set_bold(yes);
self
}
/// Set the text intensity.
///
/// If `yes` is true then text will be written in a brighter color.
/// If `yes` is false then text will be written in the default color.
///
/// # Examples
///
/// Create a style with intense text:
///
/// ```
/// use std::io::Write;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut style = buf.style();
///
/// style.set_intense(true);
///
/// writeln!(buf, "{}", style.value(record.args()))
/// });
/// ```
pub fn set_intense(&mut self, yes: bool) -> &mut Style {
self.spec.set_intense(yes);
self
}
/// Set the background color.
///
/// # Examples
///
/// Create a style with a yellow background:
///
/// ```
/// use std::io::Write;
/// use env_logger::fmt::Color;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut style = buf.style();
///
/// style.set_bg(Color::Yellow);
///
/// writeln!(buf, "{}", style.value(record.args()))
/// });
/// ```
pub fn set_bg(&mut self, color: Color) -> &mut Style {
self.spec.set_bg(color.into_termcolor());
self
}
/// Wrap a value in the style.
///
/// The same `Style` can be used to print multiple different values.
///
/// # Examples
///
/// Create a bold, red colored style and use it to print the log level:
///
/// ```
/// use std::io::Write;
/// use env_logger::fmt::Color;
///
/// let mut builder = env_logger::Builder::new();
///
/// builder.format(|buf, record| {
/// let mut style = buf.style();
///
/// style.set_color(Color::Red).set_bold(true);
///
/// writeln!(buf, "{}: {}",
/// style.value(record.level()),
/// record.args())
/// });
/// ```
pub fn value<T>(&self, value: T) -> StyledValue<T> {
StyledValue {
style: Cow::Borrowed(self),
value
}
}
/// Wrap a value in the style by taking ownership of it.
pub(crate) fn into_value<T>(&mut self, value: T) -> StyledValue<'static, T> {
StyledValue {
style: Cow::Owned(self.clone()),
value
}
}
}
impl<'a, T> StyledValue<'a, T> {
fn write_fmt<F>(&self, f: F) -> fmt::Result
where
F: FnOnce() -> fmt::Result,
{
self.style.buf.borrow_mut().set_color(&self.style.spec).map_err(|_| fmt::Error)?;
// Always try to reset the terminal style, even if writing failed
let write = f();
let reset = self.style.buf.borrow_mut().reset().map_err(|_| fmt::Error);
write.and(reset)
}
}
impl fmt::Debug for Style {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("Style").field("spec", &self.spec).finish()
}
}
macro_rules! impl_styled_value_fmt {
($($fmt_trait:path),*) => {
$(
impl<'a, T: $fmt_trait> $fmt_trait for StyledValue<'a, T> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
self.write_fmt(|| T::fmt(&self.value, f))
}
}
)*
};
}
impl_styled_value_fmt!(
fmt::Debug,
fmt::Display,
fmt::Pointer,
fmt::Octal,
fmt::Binary,
fmt::UpperHex,
fmt::LowerHex,
fmt::UpperExp,
fmt::LowerExp);
// The `Color` type is copied from https://github.com/BurntSushi/ripgrep/tree/master/termcolor
/// The set of available colors for the terminal foreground/background.
///
/// The `Ansi256` and `Rgb` colors will only output the correct codes when
/// paired with the `Ansi` `WriteColor` implementation.
///
/// The `Ansi256` and `Rgb` color types are not supported when writing colors
/// on Windows using the console. If they are used on Windows, then they are
/// silently ignored and no colors will be emitted.
///
/// This set may expand over time.
///
/// This type has a `FromStr` impl that can parse colors from their human
/// readable form. The format is as follows:
///
/// 1. Any of the explicitly listed colors in English. They are matched
/// case insensitively.
/// 2. A single 8-bit integer, in either decimal or hexadecimal format.
/// 3. A triple of 8-bit integers separated by a comma, where each integer is
/// in decimal or hexadecimal format.
///
/// Hexadecimal numbers are written with a `0x` prefix.
#[allow(missing_docs)]
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum Color {
Black,
Blue,
Green,
Red,
Cyan,
Magenta,
Yellow,
White,
Ansi256(u8),
Rgb(u8, u8, u8),
#[doc(hidden)]
__Nonexhaustive,
}
impl Color {
fn into_termcolor(self) -> Option<termcolor::Color> {
match self {
Color::Black => Some(termcolor::Color::Black),
Color::Blue => Some(termcolor::Color::Blue),
Color::Green => Some(termcolor::Color::Green),
Color::Red => Some(termcolor::Color::Red),
Color::Cyan => Some(termcolor::Color::Cyan),
Color::Magenta => Some(termcolor::Color::Magenta),
Color::Yellow => Some(termcolor::Color::Yellow),
Color::White => Some(termcolor::Color::White),
Color::Ansi256(value) => Some(termcolor::Color::Ansi256(value)),
Color::Rgb(r, g, b) => Some(termcolor::Color::Rgb(r, g, b)),
_ => None,
}
}
}

@@ -1,12 +0,0 @@
/*
This internal module contains the style and terminal writing implementation.
Its public API is available when the `termcolor` crate is available.
The terminal printing is shimmed when the `termcolor` crate is not available.
*/
#[cfg_attr(feature = "termcolor", path = "extern_impl.rs")]
#[cfg_attr(not(feature = "termcolor"), path = "shim_impl.rs")]
mod imp;
pub(in ::fmt) use self::imp::*;

@@ -1,65 +0,0 @@
use std::io;
use fmt::{WriteStyle, Target};
pub(in ::fmt::writer) mod glob {
}
pub(in ::fmt::writer) struct BufferWriter {
target: Target,
}
pub(in ::fmt) struct Buffer(Vec<u8>);
impl BufferWriter {
pub(in ::fmt::writer) fn stderr(_is_test: bool, _write_style: WriteStyle) -> Self {
BufferWriter {
target: Target::Stderr,
}
}
pub(in ::fmt::writer) fn stdout(_is_test: bool, _write_style: WriteStyle) -> Self {
BufferWriter {
target: Target::Stdout,
}
}
pub(in ::fmt::writer) fn buffer(&self) -> Buffer {
Buffer(Vec::new())
}
pub(in ::fmt::writer) fn print(&self, buf: &Buffer) -> io::Result<()> {
// This impl uses the `eprint` and `print` macros
// instead of using the streams directly.
// This is so their output can be captured by `cargo test`
let log = String::from_utf8_lossy(&buf.0);
match self.target {
Target::Stderr => eprint!("{}", log),
Target::Stdout => print!("{}", log),
}
Ok(())
}
}
impl Buffer {
pub(in ::fmt) fn clear(&mut self) {
self.0.clear();
}
pub(in ::fmt) fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.0.extend(buf);
Ok(buf.len())
}
pub(in ::fmt) fn flush(&mut self) -> io::Result<()> {
Ok(())
}
#[cfg(test)]
pub(in ::fmt) fn bytes(&self) -> &[u8] {
&self.0
}
}

1173
third_party/rust/env_logger-0.6.2/src/lib.rs (vendored)

Diff not shown because of its size.

@@ -1,40 +0,0 @@
extern crate log;
extern crate env_logger;
use std::process;
use std::env;
use std::str;
fn main() {
if env::var("YOU_ARE_TESTING_NOW").is_ok() {
// Init from the env (which should set the max level to `Debug`)
env_logger::init();
assert_eq!(log::LevelFilter::Debug, log::max_level());
// Init again using a different max level
// This shouldn't clobber the level that was previously set
env_logger::Builder::new()
.parse_filters("info")
.try_init()
.unwrap_err();
assert_eq!(log::LevelFilter::Debug, log::max_level());
return
}
let exe = env::current_exe().unwrap();
let out = process::Command::new(exe)
.env("YOU_ARE_TESTING_NOW", "1")
.env("RUST_LOG", "debug")
.output()
.unwrap_or_else(|e| panic!("Unable to start child process: {}", e));
if out.status.success() {
return
}
println!("test failed: {}", out.status);
println!("--- stdout\n{}", str::from_utf8(&out.stdout).unwrap());
println!("--- stderr\n{}", str::from_utf8(&out.stderr).unwrap());
process::exit(1);
}

@@ -1,38 +0,0 @@
#[macro_use] extern crate log;
extern crate env_logger;
use std::process;
use std::fmt;
use std::env;
use std::str;
struct Foo;
impl fmt::Display for Foo {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
info!("test");
f.write_str("bar")
}
}
fn main() {
env_logger::init();
if env::var("YOU_ARE_TESTING_NOW").is_ok() {
return info!("{}", Foo);
}
let exe = env::current_exe().unwrap();
let out = process::Command::new(exe)
.env("YOU_ARE_TESTING_NOW", "1")
.env("RUST_LOG", "debug")
.output()
.unwrap_or_else(|e| panic!("Unable to start child process: {}", e));
if out.status.success() {
return
}
println!("test failed: {}", out.status);
println!("--- stdout\n{}", str::from_utf8(&out.stdout).unwrap());
println!("--- stderr\n{}", str::from_utf8(&out.stderr).unwrap());
process::exit(1);
}

@@ -1,51 +0,0 @@
#[macro_use] extern crate log;
extern crate env_logger;
use std::process;
use std::env;
use std::str;
fn main() {
if env::var("LOG_REGEXP_TEST").ok() == Some(String::from("1")) {
child_main();
} else {
parent_main()
}
}
fn child_main() {
env_logger::init();
info!("XYZ Message");
}
fn run_child(rust_log: String) -> bool {
let exe = env::current_exe().unwrap();
let out = process::Command::new(exe)
.env("LOG_REGEXP_TEST", "1")
.env("RUST_LOG", rust_log)
.output()
.unwrap_or_else(|e| panic!("Unable to start child process: {}", e));
str::from_utf8(out.stderr.as_ref()).unwrap().contains("XYZ Message")
}
fn assert_message_printed(rust_log: &str) {
if !run_child(rust_log.to_string()) {
panic!("RUST_LOG={} should allow the test log message", rust_log)
}
}
fn assert_message_not_printed(rust_log: &str) {
if run_child(rust_log.to_string()) {
panic!("RUST_LOG={} should not allow the test log message", rust_log)
}
}
fn parent_main() {
// test normal log severity levels
assert_message_printed("info");
assert_message_not_printed("warn");
// test of regular expression filters
assert_message_printed("info/XYZ");
assert_message_not_printed("info/XXX");
}

@@ -1 +1 @@
{"files":{"CHANGELOG.md":"7c044d74477515ab39287a4caff27eb96daebaed8b9f9b6a1d1c081a7b42d4a7","Cargo.lock":"b1394b6c58241027832cc714a0754902d82aa1f6923ab478c318739462e565ca","Cargo.toml":"2961879155d753ba90ecd98c17875c82007a6973c95867e86bc1ec5bd4f5db41","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"6485b8ed310d3f0340bf1ad1f47645069ce4069dcc6bb46c7d5c6faf41de1fdb","README.md":"0bf17650e07b88f1486f033643c1e82517caa69410e6faeaa352782d9231d63e","examples/custom_default_format.rs":"ae18cd0e765cf1f16568f9879925861d6f004481f955b58af5ed8fd04b0fca99","examples/custom_format.rs":"b0f41b7a3e6fe7582871281f4244c62c66b0d724bfc678907f67185a784e82b4","examples/custom_logger.rs":"6eeef506681a46925117e8f89395cdf4fea60a0d1f6a420e51768e790272dcde","examples/default.rs":"7ed1c6a8a8fe457a86676bd3a75c07d4ec7fb54147cf2825c9d299a5878a24cd","examples/direct_logger.rs":"ee20c25379c396e5e74e963290a4d8773a86f3fe10193f61fb1efd1c7271faf4","examples/filters_from_code.rs":"7f007b0dfa5a3964f839134824dc3684bf2f3c3d7b4c36c580cd029df5f9308b","src/filter/mod.rs":"5da7e51e7b77efdd4d2b5445e5d0264be2c897909d0f86eb553e16611307aed2","src/filter/regex.rs":"bdf875bac25e089e1e462f5dd01a88678067c24118ecd6268561c6a6af39747d","src/filter/string.rs":"fac54d51189fc0b5d2bff334b7a7e465177b431e3428299e345e1f90062d832e","src/fmt/humantime/extern_impl.rs":"f3087b29eedb8b4d5573621ad206e48a2eac72a77277be3b0e631d7dc9fb7a2e","src/fmt/humantime/mod.rs":"f4111c26cf2ffb85c1d639bd7674d55af7e1736e7e98c52f7be3070046a3253f","src/fmt/humantime/shim_impl.rs":"cce9a252abd5952fa109a72b1dfb85a593d237e22606b2b608a32c69184560e9","src/fmt/mod.rs":"4ab11971a73eb5fe9b40f0bca6dfc404321dd9e2ffcf87d911408e7183dc8362","src/fmt/writer/atty.rs":"69d9dd26c430000cd2d40f9c68b2e77cd492fec22921dd2c16864301252583e0","src/fmt/writer/mod.rs":"1e0feb4dee3ee86c4c24f49566673e99ec85765869105a07a2fc7436d7640cfe","src/fmt/writer/termcolor/extern_impl.rs":"89e9f2e66b914ddc960ad9a4355265a5db5d7be410b139cf2b54ca99207374a7","src/fmt/writer/termcolor/mod.rs":"a790f9391a50cd52be6823e3e55942de13a8d12e23d63765342ae9e8dd6d091c","src/fmt/writer/termcolor/shim_impl.rs":"d93786671d6a89fc2912f77f04b8cb0b82d67277d255d15ac31bfc1bc4464e30","src/lib.rs":"3cbc4f4d3fe51c43fc45a2f435c141f0de5b40b65ba0d2c7d16bb58c04d10898","tests/init-twice-retains-filter.rs":"be5cd2132342d89ede1f5c4266173bb3c4d51cc22a1847f133d299a1c5430ccb","tests/log-in-log.rs":"29fecc65c1e0d1c22d79c97e7ca843ad44a91f27934148d7a05c48899a3f39d8","tests/log_tls_dtors.rs":"7320667d774a9b05037f7bf273fb2574dec0705707692a9cd2f46f4cd5bc68dd","tests/regexp_filter.rs":"a84263c995b534b6479a1d0abadf63f4f0264958ff86d9173d6b2139b82c4dc5"},"package":"44533bbbb3bb3c1fa17d9f2e4e38bbbaf8396ba82193c4cb1b6445d711445d36"}
{"files":{"CHANGELOG.md":"7c044d74477515ab39287a4caff27eb96daebaed8b9f9b6a1d1c081a7b42d4a7","Cargo.lock":"132c1f881b80a79314567a6993141c6204495fec144cdcec1729f2a3e0fec18b","Cargo.toml":"b60137f1fd54001ca4d8be1d0bbec154225a44c8f4fa3576078bdad55216d357","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"6485b8ed310d3f0340bf1ad1f47645069ce4069dcc6bb46c7d5c6faf41de1fdb","README.md":"0e231c1c4ad51ff0239062297bdaa69aeb34a8692e3f814188ce1e0ade8583d5","examples/custom_default_format.rs":"799c439f61cb711078f8aa584db537a5758c25b90d44767849dae2ad3822885c","examples/custom_format.rs":"ac8323e2febf8b8ff7238bd254fbbbfb3183da5af84f7f3a261fd9ad892c9ab6","examples/custom_logger.rs":"99fb3c9761ad4c5fe73f4ec2a2bd44b4acf6d1f7b7cfaa16bf0373665d3e2a4b","examples/default.rs":"ac96427611784d310704f738c7a29ebddd7930c8a70ad3c464c4d3eae4cf74a3","examples/direct_logger.rs":"549f6a10e0903d06aca2cc7ba82415b07a23392676101c9bc7aa72b4a9b0b9e2","examples/filters_from_code.rs":"84bd82803683d19ae96f85edcf4ee38cda028c2dbde923dddecc8563453b18e2","src/filter/mod.rs":"de471579c5db400c5ed11b9d7c9fc62686068b42798c58f7165806319ab7ec09","src/filter/regex.rs":"5fff47d1d4d0aa3f2bab90636127d3e72aebf800c3b78faba99637220ffdf865","src/filter/string.rs":"52bbd047c31a1afdb3cd1c11629b956f21b3f47bf22e06421baf3d693a045e59","src/fmt/humantime/extern_impl.rs":"cd2538e7a03fd3ad6c843af3c3d4016ca96cadaefee32cf9b37329c4787e6552","src/fmt/humantime/mod.rs":"408496eb21344c654b9e06da2a2df86de56e427147bb7f7b47851e0da976c003","src/fmt/humantime/shim_impl.rs":"7c2fdf4031f5568b716df14842b0d32bc03ff398763f4849960df7f9632a5bb2","src/fmt/mod.rs":"5104dad2fd14bc18ab6ab46e7c2bc5752b509d9fc934fb99f0ebc126728f8f04","src/fmt/writer/atty.rs":"3e9fd61d291d0919f7aa7119a26dd15d920df8783b4ae57bcf2c3cb6f3ff06b5","src/fmt/writer/mod.rs":"583f6616e0cf21955a530baa332fb7a99bf4fcd418a2367bbd1e733a06a22318","src/fmt/writer/termcolor/extern_impl.rs":"15e048be128568abcdd0ce99dafffe296df26131d4aa0592158576 1d31c11db5","src/fmt/writer/termcolor/mod.rs":"a3cf956aec030e0f940e4eaefe58d7703857eb900022286e328e05e5f61de183","src/fmt/writer/termcolor/shim_impl.rs":"bdd479c4e933b14ba02a3c1a9fe30eb51bcdf600e23cebd044d68683fdaad037","src/lib.rs":"2c5ab92ee141022f3e657b0f81e84e5ee4e7fad9fb648204e00ed4fb03d4166f","tests/init-twice-retains-filter.rs":"00524ce0f6779981b695bad1fdd244f87b76c126aeccd8b4ff77ef9e6325478b","tests/log-in-log.rs":"41126910998adfbac771c2a1237fecbc5437344f8e4dfc2f93235bab764a087e","tests/regexp_filter.rs":"44aa6c39de894be090e37083601e501cfffb15e3c0cd36209c48abdf3e2cb120"},"package":"aafcde04e90a5226a6443b7aabdb016ba2f8307c847d524724bd9b346dd1a2d3"}

14
third_party/rust/env_logger/Cargo.lock (generated, vendored)
@@ -25,18 +25,18 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "env_logger"
version = "0.7.1"
version = "0.6.2"
dependencies = [
"atty 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
"humantime 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)",
"humantime 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
"log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)",
"regex 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"termcolor 1.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "humantime"
version = "1.3.0"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -54,7 +54,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "log"
version = "0.4.8"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)",
@@ -189,10 +189,10 @@ dependencies = [
"checksum aho-corasick 0.6.9 (registry+https://github.com/rust-lang/crates.io-index)" = "1e9a933f4e58658d7b12defcf96dc5c720f20832deebe3e0a19efd3b6aaeeb9e"
"checksum atty 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "af80143d6f7608d746df1520709e5d141c96f240b0e62b0aa41bdfb53374d9d4"
"checksum cfg-if 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "082bb9b28e00d3c9d39cc03e64ce4cea0f1bb9b3fde493f0cbc008472d22bdf4"
"checksum humantime 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "df004cfca50ef23c36850aaaa59ad52cc70d0e90243c3c7737a4dd32dc7a3c4f"
"checksum humantime 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "3ca7e5f2e110db35f93b837c81797f3714500b81d517bf20c431b16d3ca4f114"
"checksum lazy_static 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "c8f31047daa365f19be14b47c29df4f7c3b581832407daabe6ae77397619237d"
"checksum libc 0.2.40 (registry+https://github.com/rust-lang/crates.io-index)" = "6fd41f331ac7c5b8ac259b8bf82c75c0fb2e469bbf37d2becbba9a6a2221965b"
"checksum log 0.4.8 (registry+https://github.com/rust-lang/crates.io-index)" = "14b6052be84e6b71ab17edffc2eeabf5c2c3ae1fdb464aae35ac50c67a44e1f7"
"checksum log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c84ec4b527950aa83a329754b01dbe3f58361d1c5efacd1f6d68c494d08a17c6"
"checksum memchr 2.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "0a3eb002f0535929f1199681417029ebea04aadc0c7a4224b46be99c7f5d6a16"
"checksum quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "9274b940887ce9addde99c4eee6b5c44cc494b182b97e73dc8ffdcb3397fd3f0"
"checksum redox_syscall 0.1.37 (registry+https://github.com/rust-lang/crates.io-index)" = "0d92eecebad22b767915e4d529f89f28ee96dbbf5a4810d2b844373f136417fd"

11
third_party/rust/env_logger/Cargo.toml vendored
View file

@@ -11,9 +11,8 @@
# will likely look very different (and much more reasonable)
[package]
edition = "2018"
name = "env_logger"
version = "0.7.1"
version = "0.6.2"
authors = ["The Rust Project Developers"]
description = "A logging implementation for `log` which is configured via an environment\nvariable.\n"
documentation = "https://docs.rs/env_logger"
@@ -31,10 +30,6 @@ harness = false
name = "log-in-log"
harness = false
[[test]]
name = "log_tls_dtors"
harness = false
[[test]]
name = "init-twice-retains-filter"
harness = false
@@ -43,11 +38,11 @@ version = "0.2.5"
optional = true
[dependencies.humantime]
version = "1.3"
version = "1.1"
optional = true
[dependencies.log]
version = "0.4.8"
version = "0.4"
features = ["std"]
[dependencies.regex]

6
third_party/rust/env_logger/README.md vendored
View file

@@ -16,7 +16,7 @@ It must be added along with `log` to the project dependencies:
```toml
[dependencies]
log = "0.4.0"
env_logger = "0.7.1"
env_logger = "0.6.2"
```
`env_logger` must be initialized as early as possible in the project. After it's initialized, you can use the `log` macros to do actual logging.
@@ -24,6 +24,7 @@ env_logger = "0.7.1"
```rust
#[macro_use]
extern crate log;
extern crate env_logger;
fn main() {
env_logger::init();
@@ -53,7 +54,7 @@ Tests can use the `env_logger` crate to see log messages generated during that t
log = "0.4.0"
[dev-dependencies]
env_logger = "0.7.1"
env_logger = "0.6.2"
```
```rust
@@ -68,6 +69,7 @@ fn add_one(num: i32) -> i32 {
#[cfg(test)]
mod tests {
use super::*;
extern crate env_logger;
fn init() {
let _ = env_logger::builder().is_test(true).try_init();
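The README diff above reverts the usage examples to the pre-2018-edition `extern crate` style and pins `env_logger = "0.6.2"`. Conceptually, `env_logger::init()` reads a filter spec from an environment variable (`RUST_LOG` by default) and falls back to errors-only; a stdlib-only sketch of that lookup, where `DEFAULT_FILTER` and `filter_from_env` are names invented here for illustration, not the crate's API:

```rust
use std::env;

// Illustrative sketch: env_logger-style loggers read their filter spec
// from an environment variable and fall back to a default ("error").
// `DEFAULT_FILTER` and `filter_from_env` are assumptions of this sketch.
const DEFAULT_FILTER: &str = "error";

fn filter_from_env(var: &str) -> String {
    env::var(var).unwrap_or_else(|_| DEFAULT_FILTER.to_string())
}

fn main() {
    // An unset variable yields the default errors-only filter.
    assert_eq!(filter_from_env("SOME_UNSET_VAR_FOR_THIS_SKETCH"), "error");
    println!("ok");
}
```

The resulting string is what the `Builder::parse` machinery shown later in this diff turns into directives.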

View file

@@ -19,8 +19,9 @@ If you want to control the logging output completely, see the `custom_logger` ex
#[macro_use]
extern crate log;
extern crate env_logger;
use env_logger::{Builder, Env};
use env_logger::{Env, Builder};
fn init_logger() {
let env = Env::default()
@@ -29,7 +30,9 @@ fn init_logger() {
let mut builder = Builder::from_env(env);
builder.format_level(false).format_timestamp_nanos();
builder
.default_format_level(false)
.default_format_timestamp_nanos(true);
builder.init();
}

View file

@@ -17,37 +17,38 @@ $ export MY_LOG_STYLE=never
If you want to control the logging output completely, see the `custom_logger` example.
*/
#[cfg(all(feature = "termcolor", feature = "humantime"))]
fn main() {
use env_logger::{fmt, Builder, Env};
use std::io::Write;
#[macro_use]
extern crate log;
extern crate env_logger;
fn init_logger() {
let env = Env::default()
.filter("MY_LOG_LEVEL")
.write_style("MY_LOG_STYLE");
use std::io::Write;
Builder::from_env(env)
.format(|buf, record| {
let mut style = buf.style();
style.set_bg(fmt::Color::Yellow).set_bold(true);
use env_logger::{Env, Builder, fmt};
let timestamp = buf.timestamp();
fn init_logger() {
let env = Env::default()
.filter("MY_LOG_LEVEL")
.write_style("MY_LOG_STYLE");
writeln!(
buf,
"My formatted log ({}): {}",
timestamp,
style.value(record.args())
)
})
.init();
}
let mut builder = Builder::from_env(env);
init_logger();
// Use a different format for writing log records
// The colors are only available when the `termcolor` dependency is (which it is by default)
#[cfg(feature = "termcolor")]
builder.format(|buf, record| {
let mut style = buf.style();
style.set_bg(fmt::Color::Yellow).set_bold(true);
log::info!("a log from `MyLogger`");
let timestamp = buf.timestamp();
writeln!(buf, "My formatted log ({}): {}", timestamp, style.value(record.args()))
});
builder.init();
}
#[cfg(not(all(feature = "termcolor", feature = "humantime")))]
fn main() {}
fn main() {
init_logger();
info!("a log from `MyLogger`");
}

View file

@@ -12,12 +12,12 @@ If you only want to change the way logs are formatted, look at the `custom_forma
#[macro_use]
extern crate log;
extern crate env_logger;
use env_logger::filter::Filter;
use log::{Log, Metadata, Record, SetLoggerError};
struct MyLogger {
inner: Filter,
inner: Filter
}
impl MyLogger {
@@ -26,7 +26,7 @@ impl MyLogger {
let mut builder = Builder::from_env("MY_LOG_LEVEL");
MyLogger {
inner: builder.build(),
inner: builder.build()
}
}
@@ -50,7 +50,7 @@ impl Log for MyLogger {
}
}
fn flush(&self) {}
fn flush(&self) { }
}
fn main() {

View file

@@ -17,6 +17,7 @@ $ export MY_LOG_STYLE=never
#[macro_use]
extern crate log;
extern crate env_logger;
use env_logger::Env;

View file

@@ -4,6 +4,9 @@ Using `env_logger::Logger` and the `log::Log` trait directly.
This example doesn't rely on environment variables, or having a static logger installed.
*/
extern crate log;
extern crate env_logger;
fn record() -> log::Record<'static> {
let error_metadata = log::MetadataBuilder::new()
.target("myApp")
@@ -31,7 +34,7 @@ fn main() {
.filter(None, log::LevelFilter::Error)
.write_style(env_logger::WriteStyle::Never)
.build();
stylish_logger.log(&record());
unstylish_logger.log(&record());
}
}

View file

@@ -4,6 +4,7 @@ Specify logging filters in code instead of using an environment variable.
#[macro_use]
extern crate log;
extern crate env_logger;
fn main() {
env_logger::builder()

219
third_party/rust/env_logger/src/filter/mod.rs vendored
View file

@@ -1,15 +1,15 @@
//! Filtering for log records.
//!
//!
//! This module contains the log filtering used by `env_logger` to match records.
//! You can use the `Filter` type in your own logger implementation to use the same
//! filter parsing and matching as `env_logger`. For more details about the format
//! You can use the `Filter` type in your own logger implementation to use the same
//! filter parsing and matching as `env_logger`. For more details about the format
//! for directive strings see [Enabling Logging].
//!
//!
//! ## Using `env_logger` in your own logger
//!
//! You can use `env_logger`'s filtering functionality with your own logger.
//! Call [`Builder::parse`] to parse directives from a string when constructing
//! your logger. Call [`Filter::matches`] to check whether a record should be
//! Call [`Builder::parse`] to parse directives from a string when constructing
//! your logger. Call [`Filter::matches`] to check whether a record should be
//! logged based on the parsed filters when log records are received.
//!
//! ```
@@ -54,15 +54,15 @@
//! }
//! # fn main() {}
//! ```
//!
//!
//! [Enabling Logging]: ../index.html#enabling-logging
//! [`Builder::parse`]: struct.Builder.html#method.parse
//! [`Filter::matches`]: struct.Filter.html#method.matches
use log::{Level, LevelFilter, Metadata, Record};
use std::env;
use std::fmt;
use std::mem;
use std::fmt;
use log::{Level, LevelFilter, Record, Metadata};
#[cfg(feature = "regex")]
#[path = "regex.rs"]
@@ -73,11 +73,11 @@ mod inner;
mod inner;
/// A log filter.
///
///
/// This struct can be used to determine whether or not a log record
/// should be written to the output.
/// Use the [`Builder`] type to parse and construct a `Filter`.
///
///
/// [`Builder`]: struct.Builder.html
pub struct Filter {
directives: Vec<Directive>,
@@ -85,10 +85,10 @@ }
}
/// A builder for a log filter.
///
///
/// It can be used to parse a set of directives from a string before building
/// a [`Filter`] instance.
///
///
/// ## Example
///
/// ```
@@ -111,7 +111,7 @@ /// let filter = builder.build();
/// let filter = builder.build();
/// }
/// ```
///
///
/// [`Filter`]: struct.Filter.html
pub struct Builder {
directives: Vec<Directive>,
@@ -148,8 +148,7 @@ impl Filter {
/// }
/// ```
pub fn filter(&self) -> LevelFilter {
self.directives
.iter()
self.directives.iter()
.map(|d| d.level)
.max()
.unwrap_or(LevelFilter::Off)
@@ -214,7 +213,9 @@ ///
///
/// The given module (if any) will log at most the specified level provided.
/// If no module is provided then the filter will apply to all log messages.
pub fn filter(&mut self, module: Option<&str>, level: LevelFilter) -> &mut Self {
pub fn filter(&mut self,
module: Option<&str>,
level: LevelFilter) -> &mut Self {
self.directives.push(Directive {
name: module.map(|s| s.to_string()),
level,
@@ -225,7 +226,7 @@ impl Builder {
/// Parses the directives string.
///
/// See the [Enabling Logging] section for more details.
///
///
/// [Enabling Logging]: ../index.html#enabling-logging
pub fn parse(&mut self, filters: &str) -> &mut Self {
let (directives, filter) = parse_spec(filters);
@@ -273,7 +274,7 @@ }
}
impl fmt::Debug for Filter {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
f.debug_struct("Filter")
.field("filter", &self.filter)
.field("directives", &self.directives)
@@ -282,14 +283,16 @@ impl fmt::Debug for Filter {
}
impl fmt::Debug for Builder {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
if self.built {
f.debug_struct("Filter").field("built", &true).finish()
f.debug_struct("Filter")
.field("built", &true)
.finish()
} else {
f.debug_struct("Filter")
.field("filter", &self.filter)
.field("directives", &self.directives)
.finish()
.field("filter", &self.filter)
.field("directives", &self.directives)
.finish()
}
}
}
@@ -303,75 +306,68 @@ fn parse_spec(spec: &str) -> (Vec<Directive>, Option<inner::Filter>) {
let mods = parts.next();
let filter = parts.next();
if parts.next().is_some() {
eprintln!(
"warning: invalid logging spec '{}', \
ignoring it (too many '/'s)",
spec
);
eprintln!("warning: invalid logging spec '{}', \
ignoring it (too many '/'s)", spec);
return (dirs, None);
}
mods.map(|m| {
for s in m.split(',') {
if s.len() == 0 {
continue;
mods.map(|m| { for s in m.split(',') {
if s.len() == 0 { continue }
let mut parts = s.split('=');
let (log_level, name) = match (parts.next(), parts.next().map(|s| s.trim()), parts.next()) {
(Some(part0), None, None) => {
// if the single argument is a log-level string or number,
// treat that as a global fallback
match part0.parse() {
Ok(num) => (num, None),
Err(_) => (LevelFilter::max(), Some(part0)),
}
}
let mut parts = s.split('=');
let (log_level, name) =
match (parts.next(), parts.next().map(|s| s.trim()), parts.next()) {
(Some(part0), None, None) => {
// if the single argument is a log-level string or number,
// treat that as a global fallback
match part0.parse() {
Ok(num) => (num, None),
Err(_) => (LevelFilter::max(), Some(part0)),
}
}
(Some(part0), Some(""), None) => (LevelFilter::max(), Some(part0)),
(Some(part0), Some(part1), None) => match part1.parse() {
Ok(num) => (num, Some(part0)),
_ => {
eprintln!(
"warning: invalid logging spec '{}', \
ignoring it",
part1
);
continue;
}
},
(Some(part0), Some(""), None) => (LevelFilter::max(), Some(part0)),
(Some(part0), Some(part1), None) => {
match part1.parse() {
Ok(num) => (num, Some(part0)),
_ => {
eprintln!(
"warning: invalid logging spec '{}', \
ignoring it",
s
);
continue;
eprintln!("warning: invalid logging spec '{}', \
ignoring it", part1);
continue
}
};
dirs.push(Directive {
name: name.map(|s| s.to_string()),
level: log_level,
});
}
});
}
},
_ => {
eprintln!("warning: invalid logging spec '{}', \
ignoring it", s);
continue
}
};
dirs.push(Directive {
name: name.map(|s| s.to_string()),
level: log_level,
});
}});
let filter = filter.map_or(None, |filter| match inner::Filter::new(filter) {
Ok(re) => Some(re),
Err(e) => {
eprintln!("warning: invalid regex filter - {}", e);
None
let filter = filter.map_or(None, |filter| {
match inner::Filter::new(filter) {
Ok(re) => Some(re),
Err(e) => {
eprintln!("warning: invalid regex filter - {}", e);
None
}
}
});
return (dirs, filter);
}
// Check whether a level and target are enabled by the set of directives.
fn enabled(directives: &[Directive], level: Level, target: &str) -> bool {
// Search for the longest match, the vector is assumed to be pre-sorted.
for directive in directives.iter().rev() {
match directive.name {
Some(ref name) if !target.starts_with(&**name) => {}
Some(..) | None => return level <= directive.level,
Some(ref name) if !target.starts_with(&**name) => {},
Some(..) | None => {
return level <= directive.level
}
}
}
false
@@ -381,7 +377,7 @@ fn enabled(directives: &[Directive], level: Level, target: &str) -> bool {
mod tests {
use log::{Level, LevelFilter};
use super::{enabled, parse_spec, Builder, Directive, Filter};
use super::{Builder, Filter, Directive, parse_spec, enabled};
fn make_logger_filter(dirs: Vec<Directive>) -> Filter {
let mut logger = Builder::new().build();
@@ -399,10 +395,10 @@ mod tests {
#[test]
fn filter_beginning_longest_match() {
let logger = Builder::new()
.filter(Some("crate2"), LevelFilter::Info)
.filter(Some("crate2::mod"), LevelFilter::Debug)
.filter(Some("crate1::mod1"), LevelFilter::Warn)
.build();
.filter(Some("crate2"), LevelFilter::Info)
.filter(Some("crate2::mod"), LevelFilter::Debug)
.filter(Some("crate1::mod1"), LevelFilter::Warn)
.build();
assert!(enabled(&logger.directives, Level::Debug, "crate2::mod1"));
assert!(!enabled(&logger.directives, Level::Debug, "crate2"));
}
@@ -419,12 +415,12 @@ mod tests {
let logger = make_logger_filter(vec![
Directive {
name: Some("crate2".to_string()),
level: LevelFilter::Info,
level: LevelFilter::Info
},
Directive {
name: Some("crate1::mod1".to_string()),
level: LevelFilter::Warn,
},
level: LevelFilter::Warn
}
]);
assert!(enabled(&logger.directives, Level::Warn, "crate1::mod1"));
assert!(!enabled(&logger.directives, Level::Info, "crate1::mod1"));
@@ -435,14 +431,8 @@
#[test]
fn no_match() {
let logger = make_logger_filter(vec![
Directive {
name: Some("crate2".to_string()),
level: LevelFilter::Info,
},
Directive {
name: Some("crate1::mod1".to_string()),
level: LevelFilter::Warn,
},
Directive { name: Some("crate2".to_string()), level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(!enabled(&logger.directives, Level::Warn, "crate3"));
}
@@ -450,14 +440,8 @@
#[test]
fn match_beginning() {
let logger = make_logger_filter(vec![
Directive {
name: Some("crate2".to_string()),
level: LevelFilter::Info,
},
Directive {
name: Some("crate1::mod1".to_string()),
level: LevelFilter::Warn,
},
Directive { name: Some("crate2".to_string()), level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(enabled(&logger.directives, Level::Info, "crate2::mod1"));
}
@@ -465,18 +449,9 @@
#[test]
fn match_beginning_longest_match() {
let logger = make_logger_filter(vec![
Directive {
name: Some("crate2".to_string()),
level: LevelFilter::Info,
},
Directive {
name: Some("crate2::mod".to_string()),
level: LevelFilter::Debug,
},
Directive {
name: Some("crate1::mod1".to_string()),
level: LevelFilter::Warn,
},
Directive { name: Some("crate2".to_string()), level: LevelFilter::Info },
Directive { name: Some("crate2::mod".to_string()), level: LevelFilter::Debug },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(enabled(&logger.directives, Level::Debug, "crate2::mod1"));
assert!(!enabled(&logger.directives, Level::Debug, "crate2"));
@@ -485,14 +460,8 @@
#[test]
fn match_default() {
let logger = make_logger_filter(vec![
Directive {
name: None,
level: LevelFilter::Info,
},
Directive {
name: Some("crate1::mod1".to_string()),
level: LevelFilter::Warn,
},
Directive { name: None, level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Warn }
]);
assert!(enabled(&logger.directives, Level::Warn, "crate1::mod1"));
assert!(enabled(&logger.directives, Level::Info, "crate2::mod2"));
@@ -501,14 +470,8 @@
#[test]
fn zero_level() {
let logger = make_logger_filter(vec![
Directive {
name: None,
level: LevelFilter::Info,
},
Directive {
name: Some("crate1::mod1".to_string()),
level: LevelFilter::Off,
},
Directive { name: None, level: LevelFilter::Info },
Directive { name: Some("crate1::mod1".to_string()), level: LevelFilter::Off }
]);
assert!(!enabled(&logger.directives, Level::Error, "crate1::mod1"));
assert!(enabled(&logger.directives, Level::Info, "crate2::mod2"));
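The filter diff above restores env_logger 0.6's directive parsing (`parse_spec`) and matching (`enabled`). A self-contained sketch of that logic, with simplifying assumptions: levels are a plain enum here, and the `/regex` suffix and warning output of the real code are omitted:

```rust
// Simplified stand-ins for log's LevelFilter and env_logger's Directive.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
enum LevelFilter { Off, Error, Warn, Info, Debug, Trace }

struct Directive { name: Option<String>, level: LevelFilter }

fn parse_level(s: &str) -> Option<LevelFilter> {
    match s {
        "off" => Some(LevelFilter::Off),
        "error" => Some(LevelFilter::Error),
        "warn" => Some(LevelFilter::Warn),
        "info" => Some(LevelFilter::Info),
        "debug" => Some(LevelFilter::Debug),
        "trace" => Some(LevelFilter::Trace),
        _ => None,
    }
}

// Parse "module=level,..." roughly the way parse_spec does: a bare token
// is either a global fallback level or a module enabled at the max level.
fn parse_spec(spec: &str) -> Vec<Directive> {
    let mut dirs = Vec::new();
    for s in spec.split(',').filter(|s| !s.is_empty()) {
        let mut parts = s.split('=');
        let (level, name) = match (parts.next(), parts.next().map(str::trim)) {
            (Some(p0), None) => match parse_level(p0) {
                Some(l) => (l, None),                    // global fallback
                None => (LevelFilter::Trace, Some(p0)),  // module at max level
            },
            (Some(p0), Some("")) => (LevelFilter::Trace, Some(p0)),
            (Some(p0), Some(p1)) => match parse_level(p1) {
                Some(l) => (l, Some(p0)),
                None => continue, // invalid level; the real code warns
            },
            _ => continue,
        };
        dirs.push(Directive { name: name.map(str::to_string), level });
    }
    dirs
}

// Longest-prefix match: scan the pre-sorted directives from the back so
// the most specific module name wins, as `enabled` does above.
fn enabled(dirs: &[Directive], level: LevelFilter, target: &str) -> bool {
    for d in dirs.iter().rev() {
        match &d.name {
            Some(name) if !target.starts_with(name.as_str()) => {}
            _ => return level <= d.level,
        }
    }
    false
}

fn main() {
    let dirs = parse_spec("crate2=info,crate2::mod=debug");
    assert!(enabled(&dirs, LevelFilter::Debug, "crate2::mod1"));
    assert!(!enabled(&dirs, LevelFilter::Debug, "crate2"));
    println!("ok");
}
```

The `main` mirrors the `filter_beginning_longest_match` test restored in this diff: `crate2::mod` at debug beats the broader `crate2=info` directive.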

View file

@@ -11,7 +11,7 @@ pub struct Filter {
impl Filter {
pub fn new(spec: &str) -> Result<Filter, String> {
match Regex::new(spec) {
match Regex::new(spec){
Ok(r) => Ok(Filter { inner: r }),
Err(e) => Err(e.to_string()),
}

View file

@@ -7,9 +7,7 @@ pub struct Filter {
impl Filter {
pub fn new(spec: &str) -> Result<Filter, String> {
Ok(Filter {
inner: spec.to_string(),
})
Ok(Filter { inner: spec.to_string() })
}
pub fn is_match(&self, s: &str) -> bool {

View file

@@ -1,13 +1,11 @@
use std::fmt;
use std::time::SystemTime;
use humantime::{
format_rfc3339_micros, format_rfc3339_millis, format_rfc3339_nanos, format_rfc3339_seconds,
};
use humantime::{format_rfc3339_nanos, format_rfc3339_seconds};
use crate::fmt::{Formatter, TimestampPrecision};
use ::fmt::Formatter;
pub(in crate::fmt) mod glob {
pub(in ::fmt) mod glob {
pub use super::*;
}
@@ -32,46 +30,12 @@
///
/// [`Timestamp`]: struct.Timestamp.html
pub fn timestamp(&self) -> Timestamp {
Timestamp {
time: SystemTime::now(),
precision: TimestampPrecision::Seconds,
}
Timestamp(SystemTime::now())
}
/// Get a [`Timestamp`] for the current date and time in UTC with full
/// second precision.
pub fn timestamp_seconds(&self) -> Timestamp {
Timestamp {
time: SystemTime::now(),
precision: TimestampPrecision::Seconds,
}
}
/// Get a [`Timestamp`] for the current date and time in UTC with
/// millisecond precision.
pub fn timestamp_millis(&self) -> Timestamp {
Timestamp {
time: SystemTime::now(),
precision: TimestampPrecision::Millis,
}
}
/// Get a [`Timestamp`] for the current date and time in UTC with
/// microsecond precision.
pub fn timestamp_micros(&self) -> Timestamp {
Timestamp {
time: SystemTime::now(),
precision: TimestampPrecision::Micros,
}
}
/// Get a [`Timestamp`] for the current date and time in UTC with
/// nanosecond precision.
pub fn timestamp_nanos(&self) -> Timestamp {
Timestamp {
time: SystemTime::now(),
precision: TimestampPrecision::Nanos,
}
/// Get a [`PreciseTimestamp`] for the current date and time in UTC with nanos.
pub fn precise_timestamp(&self) -> PreciseTimestamp {
PreciseTimestamp(SystemTime::now())
}
}
@@ -82,10 +46,13 @@
/// [RFC3339]: https://www.ietf.org/rfc/rfc3339.txt
/// [`Display`]: https://doc.rust-lang.org/stable/std/fmt/trait.Display.html
/// [`Formatter`]: struct.Formatter.html
pub struct Timestamp {
time: SystemTime,
precision: TimestampPrecision,
}
pub struct Timestamp(SystemTime);
/// An [RFC3339] formatted timestamp with nanos.
///
/// [RFC3339]: https://www.ietf.org/rfc/rfc3339.txt
#[derive(Debug)]
pub struct PreciseTimestamp(SystemTime);
impl fmt::Debug for Timestamp {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
@@ -99,20 +66,19 @@
}
f.debug_tuple("Timestamp")
.field(&TimestampValue(&self))
.finish()
.field(&TimestampValue(&self))
.finish()
}
}
impl fmt::Display for Timestamp {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let formatter = match self.precision {
TimestampPrecision::Seconds => format_rfc3339_seconds,
TimestampPrecision::Millis => format_rfc3339_millis,
TimestampPrecision::Micros => format_rfc3339_micros,
TimestampPrecision::Nanos => format_rfc3339_nanos,
};
formatter(self.time).fmt(f)
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
format_rfc3339_seconds(self.0).fmt(f)
}
}
impl fmt::Display for PreciseTimestamp {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
format_rfc3339_nanos(self.0).fmt(f)
}
}

View file

@@ -8,4 +8,4 @@ Its public API is available when the `humantime` crate is available.
#[cfg_attr(not(feature = "humantime"), path = "shim_impl.rs")]
mod imp;
pub(in crate::fmt) use self::imp::*;
pub(in ::fmt) use self::imp::*;

View file

@@ -2,4 +2,6 @@
Timestamps aren't available when we don't have a `humantime` dependency.
*/
pub(in crate::fmt) mod glob {}
pub(in ::fmt) mod glob {
}

237
third_party/rust/env_logger/src/fmt/mod.rs vendored
View file

@@ -29,48 +29,24 @@
//! [`Builder::format`]: ../struct.Builder.html#method.format
//! [`Write`]: https://doc.rust-lang.org/stable/std/io/trait.Write.html
use std::io::prelude::*;
use std::{io, fmt, mem};
use std::rc::Rc;
use std::cell::RefCell;
use std::fmt::Display;
use std::io::prelude::*;
use std::rc::Rc;
use std::{fmt, io, mem};
use log::Record;
mod humantime;
pub(crate) mod writer;
mod humantime;
pub use self::humantime::glob::*;
pub use self::writer::glob::*;
use self::writer::{Buffer, Writer};
use self::writer::{Writer, Buffer};
pub(crate) mod glob {
pub use super::{Target, TimestampPrecision, WriteStyle};
}
/// Formatting precision of timestamps.
///
/// Seconds give precision of full seconds, milliseconds give thousands of a
/// second (3 decimal digits), microseconds are millionth of a second (6 decimal
/// digits) and nanoseconds are billionth of a second (9 decimal digits).
#[derive(Copy, Clone, Debug)]
pub enum TimestampPrecision {
/// Full second precision (0 decimal digits)
Seconds,
/// Millisecond precision (3 decimal digits)
Millis,
/// Microsecond precision (6 decimal digits)
Micros,
/// Nanosecond precision (9 decimal digits)
Nanos,
}
/// The default timestamp precision is seconds.
impl Default for TimestampPrecision {
fn default() -> Self {
TimestampPrecision::Seconds
}
pub use super::{Target, WriteStyle};
}
/// A formatter to write logs into.
@@ -131,16 +107,16 @@ impl Write for Formatter {
}
impl fmt::Debug for Formatter {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
f.debug_struct("Formatter").finish()
}
}
pub(crate) struct Builder {
pub format_timestamp: Option<TimestampPrecision>,
pub format_module_path: bool,
pub format_level: bool,
pub format_indent: Option<usize>,
pub default_format_timestamp: bool,
pub default_format_timestamp_nanos: bool,
pub default_format_module_path: bool,
pub default_format_level: bool,
#[allow(unknown_lints, bare_trait_objects)]
pub custom_format: Option<Box<Fn(&mut Formatter, &Record) -> io::Result<()> + Sync + Send>>,
built: bool,
@@ -149,10 +125,10 @@
impl Default for Builder {
fn default() -> Self {
Builder {
format_timestamp: Some(Default::default()),
format_module_path: true,
format_level: true,
format_indent: Some(4),
default_format_timestamp: true,
default_format_timestamp_nanos: false,
default_format_module_path: true,
default_format_level: true,
custom_format: None,
built: false,
}
@@ -161,7 +137,7 @@
impl Builder {
/// Convert the format into a callable function.
///
///
/// If the `custom_format` is `Some`, then any `default_format` switches are ignored.
/// If the `custom_format` is `None`, then a default format is returned.
/// Any `default_format` switches set to `false` won't be written by the format.
@@ -169,24 +145,22 @@
pub fn build(&mut self) -> Box<Fn(&mut Formatter, &Record) -> io::Result<()> + Sync + Send> {
assert!(!self.built, "attempt to re-use consumed builder");
let built = mem::replace(
self,
Builder {
built: true,
..Default::default()
},
);
let built = mem::replace(self, Builder {
built: true,
..Default::default()
});
if let Some(fmt) = built.custom_format {
fmt
} else {
}
else {
Box::new(move |buf, record| {
let fmt = DefaultFormat {
timestamp: built.format_timestamp,
module_path: built.format_module_path,
level: built.format_level,
timestamp: built.default_format_timestamp,
timestamp_nanos: built.default_format_timestamp_nanos,
module_path: built.default_format_module_path,
level: built.default_format_level,
written_header_value: false,
indent: built.format_indent,
buf,
};
@@ -202,14 +176,14 @@ type SubtleStyle = StyledValue<'static, &'static str>;
type SubtleStyle = &'static str;
/// The default format.
///
///
/// This format needs to work with any combination of crate features.
struct DefaultFormat<'a> {
timestamp: Option<TimestampPrecision>,
timestamp: bool,
module_path: bool,
level: bool,
timestamp_nanos: bool,
written_header_value: bool,
indent: Option<usize>,
buf: &'a mut Formatter,
}
@@ -226,8 +200,7 @@ impl<'a> DefaultFormat<'a> {
fn subtle_style(&self, text: &'static str) -> SubtleStyle {
#[cfg(feature = "termcolor")]
{
self.buf
.style()
self.buf.style()
.set_color(Color::Black)
.set_intense(true)
.into_value(text)
@@ -254,7 +227,7 @@
fn write_level(&mut self, record: &Record) -> io::Result<()> {
if !self.level {
return Ok(());
return Ok(())
}
let level = {
@@ -274,29 +247,29 @@
fn write_timestamp(&mut self) -> io::Result<()> {
#[cfg(feature = "humantime")]
{
use self::TimestampPrecision::*;
let ts = match self.timestamp {
None => return Ok(()),
Some(Seconds) => self.buf.timestamp_seconds(),
Some(Millis) => self.buf.timestamp_millis(),
Some(Micros) => self.buf.timestamp_micros(),
Some(Nanos) => self.buf.timestamp_nanos(),
};
if !self.timestamp {
return Ok(())
}
self.write_header_value(ts)
if self.timestamp_nanos {
let ts_nanos = self.buf.precise_timestamp();
self.write_header_value(ts_nanos)
} else {
let ts = self.buf.timestamp();
self.write_header_value(ts)
}
}
#[cfg(not(feature = "humantime"))]
{
// Trick the compiler to think we have used self.timestamp
// Workaround for "field is never used: `timestamp`" compiler nag.
let _ = self.timestamp;
let _ = self.timestamp_nanos;
Ok(())
}
}
fn write_module_path(&mut self, record: &Record) -> io::Result<()> {
if !self.module_path {
return Ok(());
return Ok(())
}
if let Some(module_path) = record.module_path() {
@@ -316,51 +289,7 @@
}
fn write_args(&mut self, record: &Record) -> io::Result<()> {
match self.indent {
// Fast path for no indentation
None => writeln!(self.buf, "{}", record.args()),
Some(indent_count) => {
// Create a wrapper around the buffer only if we have to actually indent the message
struct IndentWrapper<'a, 'b: 'a> {
fmt: &'a mut DefaultFormat<'b>,
indent_count: usize,
}
impl<'a, 'b> Write for IndentWrapper<'a, 'b> {
fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
let mut first = true;
for chunk in buf.split(|&x| x == b'\n') {
if !first {
write!(self.fmt.buf, "\n{:width$}", "", width = self.indent_count)?;
}
self.fmt.buf.write_all(chunk)?;
first = false;
}
Ok(buf.len())
}
fn flush(&mut self) -> io::Result<()> {
self.fmt.buf.flush()
}
}
// The explicit scope here is just to make older versions of Rust happy
{
let mut wrapper = IndentWrapper {
fmt: self,
indent_count,
};
write!(wrapper, "{}", record.args())?;
}
writeln!(self.buf)?;
Ok(())
}
}
writeln!(self.buf, "{}", record.args())
}
}
@@ -374,7 +303,7 @@
let buf = fmt.buf.buf.clone();
let record = Record::builder()
.args(format_args!("log\nmessage"))
.args(format_args!("log message"))
.level(Level::Info)
.file(Some("test.rs"))
.line(Some(144))
@@ -388,7 +317,7 @@
}
#[test]
fn format_with_header() {
fn default_format_with_header() {
let writer = writer::Builder::new()
.write_style(WriteStyle::Never)
.build();
@@ -396,19 +325,19 @@
let mut f = Formatter::new(&writer);
let written = write(DefaultFormat {
timestamp: None,
timestamp: false,
timestamp_nanos: false,
module_path: true,
level: true,
written_header_value: false,
indent: None,
buf: &mut f,
});
assert_eq!("[INFO test::path] log\nmessage\n", written);
assert_eq!("[INFO test::path] log message\n", written);
}
#[test]
fn format_no_header() {
fn default_format_no_header() {
let writer = writer::Builder::new()
.write_style(WriteStyle::Never)
.build();
@@ -416,74 +345,14 @@
let mut f = Formatter::new(&writer);
let written = write(DefaultFormat {
timestamp: None,
timestamp: false,
timestamp_nanos: false,
module_path: false,
level: false,
written_header_value: false,
indent: None,
buf: &mut f,
});
assert_eq!("log\nmessage\n", written);
}
#[test]
fn format_indent_spaces() {
let writer = writer::Builder::new()
.write_style(WriteStyle::Never)
.build();
let mut f = Formatter::new(&writer);
let written = write(DefaultFormat {
timestamp: None,
module_path: true,
level: true,
written_header_value: false,
indent: Some(4),
buf: &mut f,
});
assert_eq!("[INFO test::path] log\n message\n", written);
}
#[test]
fn format_indent_zero_spaces() {
let writer = writer::Builder::new()
.write_style(WriteStyle::Never)
.build();
let mut f = Formatter::new(&writer);
let written = write(DefaultFormat {
timestamp: None,
module_path: true,
level: true,
written_header_value: false,
indent: Some(0),
buf: &mut f,
});
assert_eq!("[INFO test::path] log\nmessage\n", written);
}
#[test]
fn format_indent_spaces_no_header() {
let writer = writer::Builder::new()
.write_style(WriteStyle::Never)
.build();
let mut f = Formatter::new(&writer);
let written = write(DefaultFormat {
timestamp: None,
module_path: false,
level: false,
written_header_value: false,
indent: Some(4),
buf: &mut f,
});
assert_eq!("log\n message\n", written);
assert_eq!("log message\n", written);
}
}
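The fmt diff above removes the multi-line indentation support added in 0.7 (the `IndentWrapper` inside `write_args`, along with its `format_indent` tests). A minimal sketch of that wrapper, assuming a plain `io::Write` adapter rather than the crate's `Formatter`:

```rust
use std::io::{self, Write};

// Sketch of the indentation wrapper this backout removes: every chunk
// that follows a '\n' is padded with `indent` spaces before being
// forwarded to the inner writer.
struct IndentWriter<W: Write> {
    inner: W,
    indent: usize,
}

impl<W: Write> Write for IndentWriter<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let mut first = true;
        for chunk in buf.split(|&b| b == b'\n') {
            if !first {
                // Re-emit the newline, then the indent, then the chunk.
                write!(self.inner, "\n{:width$}", "", width = self.indent)?;
            }
            self.inner.write_all(chunk)?;
            first = false;
        }
        Ok(buf.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}

fn main() -> io::Result<()> {
    let mut w = IndentWriter { inner: Vec::new(), indent: 4 };
    write!(w, "log\nmessage")?;
    assert_eq!(String::from_utf8(w.inner).unwrap(), "log\n    message");
    println!("ok");
    Ok(())
}
```

The `main` reproduces the behavior checked by the removed `format_indent_spaces` test: a continuation line gains four leading spaces.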

View file

@@ -11,24 +11,24 @@ from being printed.
mod imp {
use atty;
pub(in crate::fmt) fn is_stdout() -> bool {
pub(in ::fmt) fn is_stdout() -> bool {
atty::is(atty::Stream::Stdout)
}
pub(in crate::fmt) fn is_stderr() -> bool {
pub(in ::fmt) fn is_stderr() -> bool {
atty::is(atty::Stream::Stderr)
}
}
#[cfg(not(feature = "atty"))]
mod imp {
pub(in crate::fmt) fn is_stdout() -> bool {
pub(in ::fmt) fn is_stdout() -> bool {
false
}
pub(in crate::fmt) fn is_stderr() -> bool {
pub(in ::fmt) fn is_stderr() -> bool {
false
}
}
pub(in crate::fmt) use self::imp::*;
pub(in ::fmt) use self::imp::*;
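The module above uses a common cfg-gated shim pattern: one `imp` module per configuration (with or without the `atty` feature), both exposing the same function signatures, and a single re-export so callers never see the distinction. A minimal sketch of the pattern, keyed off `cfg(windows)` for illustration rather than a Cargo feature (both branches here are stubs that return `false`):

```rust
// Sketch of the cfg-gated shim pattern: each configuration supplies
// its own `imp` module with an identical public surface.
#[cfg(windows)]
mod imp {
    pub fn is_stdout() -> bool {
        false // conservative stub for this configuration
    }
}

#[cfg(not(windows))]
mod imp {
    pub fn is_stdout() -> bool {
        false // a real impl would probe the platform, e.g. via atty
    }
}

// Callers use one path regardless of which branch was compiled.
pub use imp::is_stdout;

fn main() {
    println!("stdout is a tty: {}", is_stdout());
}
```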
@@ -1,16 +1,16 @@
mod atty;
mod termcolor;
mod atty;
use self::atty::{is_stderr, is_stdout};
use self::termcolor::BufferWriter;
use std::{fmt, io};
use self::termcolor::BufferWriter;
use self::atty::{is_stdout, is_stderr};
pub(in crate::fmt) mod glob {
pub(in ::fmt) mod glob {
pub use super::termcolor::glob::*;
pub use super::*;
}
pub(in crate::fmt) use self::termcolor::Buffer;
pub(in ::fmt) use self::termcolor::Buffer;
/// Log target, either `stdout` or `stderr`.
#[derive(Clone, Copy, Debug, Eq, Hash, PartialEq)]
@@ -55,11 +55,11 @@ impl Writer {
self.write_style
}
pub(in crate::fmt) fn buffer(&self) -> Buffer {
pub(in ::fmt) fn buffer(&self) -> Buffer {
self.inner.buffer()
}
pub(in crate::fmt) fn print(&self, buf: &Buffer) -> io::Result<()> {
pub(in ::fmt) fn print(&self, buf: &Buffer) -> io::Result<()> {
self.inner.print(buf)
}
}
@@ -127,7 +127,7 @@ impl Builder {
} else {
WriteStyle::Never
}
}
},
color_choice => color_choice,
};
@@ -150,16 +150,16 @@ impl Default for Builder {
}
impl fmt::Debug for Builder {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
f.debug_struct("Logger")
.field("target", &self.target)
.field("write_style", &self.write_style)
.finish()
.field("target", &self.target)
.field("write_style", &self.write_style)
.finish()
}
}
impl fmt::Debug for Writer {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
f.debug_struct("Writer").finish()
}
}
@@ -192,7 +192,12 @@ mod tests {
#[test]
fn parse_write_style_invalid() {
let inputs = vec!["", "true", "false", "NEVER!!"];
let inputs = vec![
"",
"true",
"false",
"NEVER!!"
];
for input in inputs {
assert_eq!(WriteStyle::Auto, parse_write_style(input));
@@ -1,15 +1,16 @@
use std::borrow::Cow;
use std::cell::RefCell;
use std::fmt;
use std::io::{self, Write};
use std::cell::RefCell;
use std::rc::Rc;
use log::Level;
use termcolor::{self, ColorChoice, ColorSpec, WriteColor};
use crate::fmt::{Formatter, Target, WriteStyle};
use ::WriteStyle;
use ::fmt::{Formatter, Target};
pub(in crate::fmt::writer) mod glob {
pub(in ::fmt::writer) mod glob {
pub use super::*;
}
@@ -46,7 +47,7 @@ impl Formatter {
}
/// Get the default [`Style`] for the given level.
///
///
/// The style can be used to print other values besides the level.
pub fn default_level_style(&self, level: Level) -> Style {
let mut level_style = self.style();
@@ -61,46 +62,54 @@ impl Formatter {
}
/// Get a printable [`Style`] for the given level.
///
///
/// The style can only be used to print the level.
pub fn default_styled_level(&self, level: Level) -> StyledValue<'static, Level> {
self.default_level_style(level).into_value(level)
}
}
pub(in crate::fmt::writer) struct BufferWriter {
pub(in ::fmt::writer) struct BufferWriter {
inner: termcolor::BufferWriter,
test_target: Option<Target>,
}
pub(in crate::fmt) struct Buffer {
pub(in ::fmt) struct Buffer {
inner: termcolor::Buffer,
test_target: Option<Target>,
}
impl BufferWriter {
pub(in crate::fmt::writer) fn stderr(is_test: bool, write_style: WriteStyle) -> Self {
pub(in ::fmt::writer) fn stderr(is_test: bool, write_style: WriteStyle) -> Self {
BufferWriter {
inner: termcolor::BufferWriter::stderr(write_style.into_color_choice()),
test_target: if is_test { Some(Target::Stderr) } else { None },
test_target: if is_test {
Some(Target::Stderr)
} else {
None
},
}
}
pub(in crate::fmt::writer) fn stdout(is_test: bool, write_style: WriteStyle) -> Self {
pub(in ::fmt::writer) fn stdout(is_test: bool, write_style: WriteStyle) -> Self {
BufferWriter {
inner: termcolor::BufferWriter::stdout(write_style.into_color_choice()),
test_target: if is_test { Some(Target::Stdout) } else { None },
test_target: if is_test {
Some(Target::Stdout)
} else {
None
},
}
}
pub(in crate::fmt::writer) fn buffer(&self) -> Buffer {
pub(in ::fmt::writer) fn buffer(&self) -> Buffer {
Buffer {
inner: self.inner.buffer(),
test_target: self.test_target,
}
}
pub(in crate::fmt::writer) fn print(&self, buf: &Buffer) -> io::Result<()> {
pub(in ::fmt::writer) fn print(&self, buf: &Buffer) -> io::Result<()> {
if let Some(target) = self.test_target {
// This impl uses the `eprint` and `print` macros
// instead of `termcolor`'s buffer.
@@ -120,19 +129,19 @@ impl BufferWriter {
}
impl Buffer {
pub(in crate::fmt) fn clear(&mut self) {
pub(in ::fmt) fn clear(&mut self) {
self.inner.clear()
}
pub(in crate::fmt) fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
pub(in ::fmt) fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.inner.write(buf)
}
pub(in crate::fmt) fn flush(&mut self) -> io::Result<()> {
pub(in ::fmt) fn flush(&mut self) -> io::Result<()> {
self.inner.flush()
}
pub(in crate::fmt) fn bytes(&self) -> &[u8] {
pub(in ::fmt) fn bytes(&self) -> &[u8] {
self.inner.as_slice()
}
@@ -365,7 +374,7 @@ impl Style {
pub fn value<T>(&self, value: T) -> StyledValue<T> {
StyledValue {
style: Cow::Borrowed(self),
value,
value
}
}
@@ -373,7 +382,7 @@ impl Style {
pub(crate) fn into_value<T>(&mut self, value: T) -> StyledValue<'static, T> {
StyledValue {
style: Cow::Owned(self.clone()),
value,
value
}
}
}
@@ -383,11 +392,7 @@ impl<'a, T> StyledValue<'a, T> {
where
F: FnOnce() -> fmt::Result,
{
self.style
.buf
.borrow_mut()
.set_color(&self.style.spec)
.map_err(|_| fmt::Error)?;
self.style.buf.borrow_mut().set_color(&self.style.spec).map_err(|_| fmt::Error)?;
// Always try to reset the terminal style, even if writing failed
let write = f();
@@ -398,7 +403,7 @@ impl<'a, T> StyledValue<'a, T> {
}
impl fmt::Debug for Style {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
f.debug_struct("Style").field("spec", &self.spec).finish()
}
}
@@ -424,8 +429,7 @@ impl_styled_value_fmt!(
fmt::UpperHex,
fmt::LowerHex,
fmt::UpperExp,
fmt::LowerExp
);
fmt::LowerExp);
// The `Color` type is copied from https://github.com/BurntSushi/ripgrep/tree/master/termcolor
@@ -9,4 +9,4 @@ The terminal printing is shimmed when the `termcolor` crate is not available.
#[cfg_attr(not(feature = "termcolor"), path = "shim_impl.rs")]
mod imp;
pub(in crate::fmt) use self::imp::*;
pub(in ::fmt) use self::imp::*;
@@ -1,33 +1,35 @@
use std::io;
use crate::fmt::{Target, WriteStyle};
use fmt::{WriteStyle, Target};
pub(in crate::fmt::writer) mod glob {}
pub(in ::fmt::writer) mod glob {
}
pub(in crate::fmt::writer) struct BufferWriter {
pub(in ::fmt::writer) struct BufferWriter {
target: Target,
}
pub(in crate::fmt) struct Buffer(Vec<u8>);
pub(in ::fmt) struct Buffer(Vec<u8>);
impl BufferWriter {
pub(in crate::fmt::writer) fn stderr(_is_test: bool, _write_style: WriteStyle) -> Self {
pub(in ::fmt::writer) fn stderr(_is_test: bool, _write_style: WriteStyle) -> Self {
BufferWriter {
target: Target::Stderr,
}
}
pub(in crate::fmt::writer) fn stdout(_is_test: bool, _write_style: WriteStyle) -> Self {
pub(in ::fmt::writer) fn stdout(_is_test: bool, _write_style: WriteStyle) -> Self {
BufferWriter {
target: Target::Stdout,
}
}
pub(in crate::fmt::writer) fn buffer(&self) -> Buffer {
pub(in ::fmt::writer) fn buffer(&self) -> Buffer {
Buffer(Vec::new())
}
pub(in crate::fmt::writer) fn print(&self, buf: &Buffer) -> io::Result<()> {
pub(in ::fmt::writer) fn print(&self, buf: &Buffer) -> io::Result<()> {
// This impl uses the `eprint` and `print` macros
// instead of using the streams directly.
// This is so their output can be captured by `cargo test`
@@ -43,21 +45,21 @@ impl BufferWriter {
}
impl Buffer {
pub(in crate::fmt) fn clear(&mut self) {
pub(in ::fmt) fn clear(&mut self) {
self.0.clear();
}
pub(in crate::fmt) fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
pub(in ::fmt) fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
self.0.extend(buf);
Ok(buf.len())
}
pub(in crate::fmt) fn flush(&mut self) -> io::Result<()> {
pub(in ::fmt) fn flush(&mut self) -> io::Result<()> {
Ok(())
}
#[cfg(test)]
pub(in crate::fmt) fn bytes(&self) -> &[u8] {
pub(in ::fmt) fn bytes(&self) -> &[u8] {
&self.0
}
}
}
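The shim `Buffer` above stands in for a terminal-backed buffer when `termcolor` is unavailable: it is just a `Vec<u8>` that accepts writes, with a no-op flush. A self-contained sketch of the same idea, implementing `std::io::Write` so the standard `write!` macro works against it:

```rust
use std::io::{self, Write};

// Minimal in-memory stand-in for a terminal buffer: writes accumulate
// in a Vec<u8> and flushing is a no-op.
struct Buffer(Vec<u8>);

impl Buffer {
    fn new() -> Self {
        Buffer(Vec::new())
    }
    fn bytes(&self) -> &[u8] {
        &self.0
    }
}

impl Write for Buffer {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.0.extend_from_slice(buf);
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(()) // nothing to flush for an in-memory sink
    }
}

fn main() {
    let mut b = Buffer::new();
    write!(b, "[INFO] {}", "hello").unwrap();
    assert_eq!(b.bytes(), b"[INFO] hello");
    println!("buffered {} bytes", b.bytes().len());
}
```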
378 third_party/rust/env_logger/src/lib.rs (vendored)
@@ -16,6 +16,7 @@
//!
//! ```
//! #[macro_use] extern crate log;
//! extern crate env_logger;
//!
//! use log::Level;
//!
@@ -138,32 +139,33 @@
//! * `error,hello=warn/[0-9]scopes` turn on global error logging and also
//! warn for hello. In both cases the log message must include a single digit
//! number followed by 'scopes'.
//!
//!
//! ## Capturing logs in tests
//!
//!
//! Records logged during `cargo test` will not be captured by the test harness by default.
//! The [`Builder::is_test`] method can be used in unit tests to ensure logs will be captured:
//!
//!
//! ```
//! # #[macro_use] extern crate log;
//! # extern crate env_logger;
//! # fn main() {}
//! #[cfg(test)]
//! mod tests {
//! fn init() {
//! let _ = env_logger::builder().is_test(true).try_init();
//! }
//!
//!
//! #[test]
//! fn it_works() {
//! init();
//!
//!
//! info!("This record will be captured by `cargo test`");
//!
//!
//! assert_eq!(2, 1 + 1);
//! }
//! }
//! ```
//!
//!
//! Enabling test capturing comes at the expense of color and other style support
//! and may have performance implications.
//!
@@ -177,32 +179,32 @@
//! * `always` will always print style characters even if they aren't supported by the terminal.
//! This includes emitting ANSI colors on Windows if the console API is unavailable.
//! * `never` will never print style characters.
//!
//!
//! ## Tweaking the default format
//!
//!
//! Parts of the default format can be excluded from the log output using the [`Builder`].
//! The following example excludes the timestamp from the log output:
//!
//!
//! ```
//! env_logger::builder()
//! .format_timestamp(None)
//! .default_format_timestamp(false)
//! .init();
//! ```
//!
//!
//! ### Stability of the default format
//!
//! The default format won't optimise for long-term stability, and explicitly makes no
//! guarantees about the stability of its output across major, minor or patch version
//!
//! The default format won't optimise for long-term stability, and explicitly makes no
//! guarantees about the stability of its output across major, minor or patch version
//! bumps during `0.x`.
//!
//! If you want to capture or interpret the output of `env_logger` programmatically
//!
//! If you want to capture or interpret the output of `env_logger` programmatically
//! then you should use a custom format.
//!
//!
//! ### Using a custom format
//!
//!
//! Custom formats can be provided as closures to the [`Builder`].
//! These closures take a [`Formatter`] and `log::Record` as arguments:
//!
//!
//! ```
//! use std::io::Write;
//!
@@ -212,43 +214,54 @@
//! })
//! .init();
//! ```
//!
//!
//! See the [`fmt`] module for more details about custom formats.
//!
//!
//! ## Specifying defaults for environment variables
//!
//!
//! `env_logger` can read configuration from environment variables.
//! If these variables aren't present, the default value to use can be tweaked with the [`Env`] type.
//! The following example defaults to log `warn` and above if the `RUST_LOG` environment variable
//! isn't set:
//!
//!
//! ```
//! use env_logger::Env;
//!
//! env_logger::from_env(Env::default().default_filter_or("warn")).init();
//! ```
//!
//!
//! [log-crate-url]: https://docs.rs/log/
//! [`Builder`]: struct.Builder.html
//! [`Builder::is_test`]: struct.Builder.html#method.is_test
//! [`Env`]: struct.Env.html
//! [`fmt`]: fmt/index.html
#![doc(
html_logo_url = "https://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png",
html_favicon_url = "https://www.rust-lang.org/static/images/favicon.ico",
html_root_url = "https://docs.rs/env_logger/0.7.1"
)]
#![doc(html_logo_url = "https://www.rust-lang.org/logos/rust-logo-128x128-blk-v2.png",
html_favicon_url = "https://www.rust-lang.org/static/images/favicon.ico",
html_root_url = "https://docs.rs/env_logger/0.6.2")]
#![cfg_attr(test, deny(warnings))]
// When compiled for the rustc compiler itself we want to make sure that this is
// an unstable crate
#![cfg_attr(rustbuild, feature(staged_api, rustc_private))]
#![cfg_attr(rustbuild, unstable(feature = "rustc_private", issue = "27812"))]
#![deny(missing_debug_implementations, missing_docs, warnings)]
use std::{borrow::Cow, cell::RefCell, env, io};
extern crate log;
use log::{LevelFilter, Log, Metadata, Record, SetLoggerError};
#[cfg(feature = "termcolor")]
extern crate termcolor;
#[cfg(feature = "humantime")]
extern crate humantime;
#[cfg(feature = "atty")]
extern crate atty;
use std::{env, io};
use std::borrow::Cow;
use std::cell::RefCell;
use log::{Log, LevelFilter, Record, SetLoggerError, Metadata};
pub mod filter;
pub mod fmt;
@@ -256,8 +269,8 @@ pub mod fmt;
pub use self::fmt::glob::*;
use self::filter::Filter;
use self::fmt::writer::{self, Writer};
use self::fmt::Formatter;
use self::fmt::writer::{self, Writer};
/// The default name for the environment variable to read filters from.
pub const DEFAULT_FILTER_ENV: &'static str = "RUST_LOG";
@@ -321,7 +334,9 @@ pub struct Logger {
/// # Examples
///
/// ```
/// #[macro_use] extern crate log;
/// #[macro_use]
/// extern crate log;
/// extern crate env_logger;
///
/// use std::env;
/// use std::io::Write;
@@ -349,28 +364,30 @@ pub struct Builder {
impl Builder {
/// Initializes the log builder with defaults.
///
///
/// **NOTE:** This method won't read from any environment variables.
/// Use the [`filter`] and [`write_style`] methods to configure the builder
/// or use [`from_env`] or [`from_default_env`] instead.
///
///
/// # Examples
///
///
/// Create a new builder and configure filters and style:
///
///
/// ```
/// # extern crate log;
/// # extern crate env_logger;
/// # fn main() {
/// use log::LevelFilter;
/// use env_logger::{Builder, WriteStyle};
///
///
/// let mut builder = Builder::new();
///
///
/// builder.filter(None, LevelFilter::Info)
/// .write_style(WriteStyle::Always)
/// .init();
/// # }
/// ```
///
///
/// [`filter`]: #method.filter
/// [`write_style`]: #method.write_style
/// [`from_env`]: #method.from_env
@@ -385,13 +402,13 @@ impl Builder {
/// passing in.
///
/// # Examples
///
///
/// Initialise a logger reading the log filter from an environment variable
/// called `MY_LOG`:
///
///
/// ```
/// use env_logger::Builder;
///
///
/// let mut builder = Builder::from_env("MY_LOG");
/// builder.init();
/// ```
@@ -409,7 +426,7 @@ impl Builder {
/// ```
pub fn from_env<'a, E>(env: E) -> Self
where
E: Into<Env<'a>>,
E: Into<Env<'a>>
{
let mut builder = Builder::new();
let env = env.into();
@@ -426,18 +443,18 @@ impl Builder {
}
/// Initializes the log builder from the environment using default variable names.
///
///
/// This method is a convenient way to call `from_env(Env::default())` without
/// having to use the `Env` type explicitly. The builder will use the
/// [default environment variables].
///
///
/// # Examples
///
///
/// Initialise a logger using the default environment variables:
///
///
/// ```
/// use env_logger::Builder;
///
///
/// let mut builder = Builder::from_default_env();
/// builder.init();
/// ```
@@ -456,17 +473,17 @@ impl Builder {
/// `Formatter` so that implementations can use the [`std::fmt`] macros
/// to format and output without intermediate heap allocations. The default
/// `env_logger` formatter takes advantage of this.
///
///
/// # Examples
///
///
/// Use a custom format to write only the log message:
///
///
/// ```
/// use std::io::Write;
/// use env_logger::Builder;
///
///
/// let mut builder = Builder::new();
///
///
/// builder.format(|buf, record| writeln!(buf, "{}", record.args()));
/// ```
///
@@ -474,66 +491,44 @@ impl Builder {
/// [`String`]: https://doc.rust-lang.org/stable/std/string/struct.String.html
/// [`std::fmt`]: https://doc.rust-lang.org/std/fmt/index.html
pub fn format<F: 'static>(&mut self, format: F) -> &mut Self
where
F: Fn(&mut Formatter, &Record) -> io::Result<()> + Sync + Send,
where F: Fn(&mut Formatter, &Record) -> io::Result<()> + Sync + Send
{
self.format.custom_format = Some(Box::new(format));
self
}
/// Use the default format.
///
///
/// This method will clear any custom format set on the builder.
pub fn default_format(&mut self) -> &mut Self {
self.format = Default::default();
self.format.custom_format = None;
self
}
/// Whether or not to write the level in the default format.
pub fn format_level(&mut self, write: bool) -> &mut Self {
self.format.format_level = write;
pub fn default_format_level(&mut self, write: bool) -> &mut Self {
self.format.default_format_level = write;
self
}
/// Whether or not to write the module path in the default format.
pub fn format_module_path(&mut self, write: bool) -> &mut Self {
self.format.format_module_path = write;
pub fn default_format_module_path(&mut self, write: bool) -> &mut Self {
self.format.default_format_module_path = write;
self
}
/// Configures the amount of spaces to use to indent multiline log records.
/// A value of `None` disables any kind of indentation.
pub fn format_indent(&mut self, indent: Option<usize>) -> &mut Self {
self.format.format_indent = indent;
/// Whether or not to write the timestamp in the default format.
pub fn default_format_timestamp(&mut self, write: bool) -> &mut Self {
self.format.default_format_timestamp = write;
self
}
/// Configures if timestamp should be included and in what precision.
pub fn format_timestamp(&mut self, timestamp: Option<fmt::TimestampPrecision>) -> &mut Self {
self.format.format_timestamp = timestamp;
/// Whether or not to write the timestamp with nanos.
pub fn default_format_timestamp_nanos(&mut self, write: bool) -> &mut Self {
self.format.default_format_timestamp_nanos = write;
self
}
/// Configures the timestamp to use second precision.
pub fn format_timestamp_secs(&mut self) -> &mut Self {
self.format_timestamp(Some(fmt::TimestampPrecision::Seconds))
}
/// Configures the timestamp to use millisecond precision.
pub fn format_timestamp_millis(&mut self) -> &mut Self {
self.format_timestamp(Some(fmt::TimestampPrecision::Millis))
}
/// Configures the timestamp to use microsecond precision.
pub fn format_timestamp_micros(&mut self) -> &mut Self {
self.format_timestamp(Some(fmt::TimestampPrecision::Micros))
}
/// Configures the timestamp to use nanosecond precision.
pub fn format_timestamp_nanos(&mut self) -> &mut Self {
self.format_timestamp(Some(fmt::TimestampPrecision::Nanos))
}
/// Adds a directive to the filter for a specific module.
///
/// # Examples
@@ -541,6 +536,8 @@ impl Builder {
/// Only include messages for warning and above for logs in `path::to::module`:
///
/// ```
/// # extern crate log;
/// # extern crate env_logger;
/// # fn main() {
/// use log::LevelFilter;
/// use env_logger::Builder;
@@ -562,6 +559,8 @@ impl Builder {
/// Only include messages for warning and above for logs in `path::to::module`:
///
/// ```
/// # extern crate log;
/// # extern crate env_logger;
/// # fn main() {
/// use log::LevelFilter;
/// use env_logger::Builder;
@@ -580,26 +579,39 @@ impl Builder {
///
/// The given module (if any) will log at most the specified level provided.
/// If no module is provided then the filter will apply to all log messages.
///
///
/// # Examples
///
///
/// Only include messages for warning and above for logs in `path::to::module`:
///
///
/// ```
/// # extern crate log;
/// # extern crate env_logger;
/// # fn main() {
/// use log::LevelFilter;
/// use env_logger::Builder;
///
///
/// let mut builder = Builder::new();
///
///
/// builder.filter(Some("path::to::module"), LevelFilter::Info);
/// # }
/// ```
pub fn filter(&mut self, module: Option<&str>, level: LevelFilter) -> &mut Self {
pub fn filter(&mut self,
module: Option<&str>,
level: LevelFilter) -> &mut Self {
self.filter.filter(module, level);
self
}
/// Parses the directives string in the same form as the `RUST_LOG`
/// environment variable.
///
/// See the module documentation for more details.
#[deprecated(since = "0.6.1", note = "use `parse_filters` instead.")]
pub fn parse(&mut self, filters: &str) -> &mut Self {
self.parse_filters(filters)
}
/// Parses the directives string in the same form as the `RUST_LOG`
/// environment variable.
///
@@ -612,16 +624,16 @@ impl Builder {
/// Sets the target for the log output.
///
/// Env logger can log to either stdout or stderr. The default is stderr.
///
///
/// # Examples
///
///
/// Write log message to `stdout`:
///
///
/// ```
/// use env_logger::{Builder, Target};
///
///
/// let mut builder = Builder::new();
///
///
/// builder.target(Target::Stdout);
/// ```
pub fn target(&mut self, target: fmt::Target) -> &mut Self {
@@ -633,16 +645,16 @@ impl Builder {
///
/// This can be useful in environments that don't support control characters
/// for setting colors.
///
///
/// # Examples
///
///
/// Never attempt to write styles:
///
///
/// ```
/// use env_logger::{Builder, WriteStyle};
///
///
/// let mut builder = Builder::new();
///
///
/// builder.write_style(WriteStyle::Never);
/// ```
pub fn write_style(&mut self, write_style: fmt::WriteStyle) -> &mut Self {
@@ -660,7 +672,7 @@ impl Builder {
}
/// Sets whether or not the logger will be used in unit tests.
///
///
/// If `is_test` is `true` then the logger will allow the testing framework to
/// capture log records rather than printing them to the terminal directly.
pub fn is_test(&mut self, is_test: bool) -> &mut Self {
@@ -700,8 +712,7 @@ impl Builder {
/// This function will panic if it is called more than once, or if another
/// library has already initialized a global logger.
pub fn init(&mut self) {
self.try_init()
.expect("Builder::init should not be called after logger initialized");
self.try_init().expect("Builder::init should not be called after logger initialized");
}
/// Build an env logger.
@@ -748,8 +759,8 @@ impl Logger {
/// let logger = Logger::from_env(env);
/// ```
pub fn from_env<'a, E>(env: E) -> Self
where
E: Into<Env<'a>>,
where
E: Into<Env<'a>>
{
Builder::from_env(env).build()
}
@@ -807,51 +818,40 @@ impl Log for Logger {
static FORMATTER: RefCell<Option<Formatter>> = RefCell::new(None);
}
let print = |formatter: &mut Formatter, record: &Record| {
let _ =
(self.format)(formatter, record).and_then(|_| formatter.print(&self.writer));
FORMATTER.with(|tl_buf| {
// It's possible for implementations to sometimes
// log-while-logging (e.g. a `std::fmt` implementation logs
// internally) but it's super rare. If this happens make sure we
// at least don't panic and ship some output to the screen.
let mut a;
let mut b = None;
let tl_buf = match tl_buf.try_borrow_mut() {
Ok(f) => {
a = f;
&mut *a
}
Err(_) => &mut b,
};
// Check the buffer style. If it's different from the logger's
// style then drop the buffer and recreate it.
match *tl_buf {
Some(ref mut formatter) => {
if formatter.write_style() != self.writer.write_style() {
*formatter = Formatter::new(&self.writer)
}
},
ref mut tl_buf => *tl_buf = Some(Formatter::new(&self.writer))
}
// The format is guaranteed to be `Some` by this point
let mut formatter = tl_buf.as_mut().unwrap();
let _ = (self.format)(&mut formatter, record).and_then(|_| formatter.print(&self.writer));
// Always clear the buffer afterwards
formatter.clear();
};
let printed = FORMATTER
.try_with(|tl_buf| {
match tl_buf.try_borrow_mut() {
// There are no active borrows of the buffer
Ok(mut tl_buf) => match *tl_buf {
// We have a previously set formatter
Some(ref mut formatter) => {
// Check the buffer style. If it's different from the logger's
// style then drop the buffer and recreate it.
if formatter.write_style() != self.writer.write_style() {
*formatter = Formatter::new(&self.writer);
}
print(formatter, record);
}
// We don't have a previously set formatter
None => {
let mut formatter = Formatter::new(&self.writer);
print(&mut formatter, record);
*tl_buf = Some(formatter);
}
},
// There's already an active borrow of the buffer (due to re-entrancy)
Err(_) => {
print(&mut Formatter::new(&self.writer), record);
}
}
})
.is_ok();
if !printed {
// The thread-local storage was not available (because its
// destructor has already run). Create a new single-use
// Formatter on the stack for this call.
print(&mut Formatter::new(&self.writer), record);
}
});
}
}
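The comments in the hunk above explain the tricky part of `Log::log`: the formatter lives in thread-local storage so the buffer can be reused across records, but a `std::fmt` impl may itself log (re-entrancy), and the TLS slot may already be destroyed during thread teardown. Both failure modes fall back to a fresh stack-local buffer. A self-contained sketch of that guard pattern using a plain `String` in place of the real `Formatter` (the `log_line` helper is hypothetical):

```rust
use std::cell::RefCell;

thread_local! {
    // Reusable per-thread scratch buffer, mirroring FORMATTER above.
    static SCRATCH: RefCell<Option<String>> = RefCell::new(None);
}

// Reuse the thread-local buffer when possible; fall back to a fresh
// stack buffer if the slot is already borrowed (re-entrant logging)
// or the TLS destructor has already run.
fn log_line(msg: &str) -> String {
    let print = |buf: &mut String| {
        buf.push_str(msg);
        let out = buf.clone();
        buf.clear(); // always clear the buffer for the next record
        out
    };
    SCRATCH
        .try_with(|tl| match tl.try_borrow_mut() {
            // No active borrow: lazily create and reuse the buffer.
            Ok(mut slot) => print(slot.get_or_insert_with(String::new)),
            // Re-entrant call: use a throwaway buffer instead of panicking.
            Err(_) => print(&mut String::new()),
        })
        // TLS already destroyed: single-use buffer on the stack.
        .unwrap_or_else(|_| print(&mut String::new()))
}

fn main() {
    assert_eq!(log_line("hello"), "hello");
    println!("{}", log_line("world"));
}
```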
@@ -867,7 +867,7 @@ impl<'a> Env<'a> {
/// Specify an environment variable to read the filter from.
pub fn filter<E>(mut self, filter_env: E) -> Self
where
E: Into<Cow<'a, str>>,
E: Into<Cow<'a, str>>
{
self.filter = Var::new(filter_env);
@@ -888,7 +888,7 @@ impl<'a> Env<'a> {
}
/// Use the default environment variable to read the filter from.
///
///
/// If the variable is not set, the default value will be used.
pub fn default_filter_or<V>(mut self, default: V) -> Self
where
@@ -906,7 +906,7 @@ impl<'a> Env<'a> {
/// Specify an environment variable to read the style from.
pub fn write_style<E>(mut self, write_style_env: E) -> Self
where
E: Into<Cow<'a, str>>,
E: Into<Cow<'a, str>>
{
self.write_style = Var::new(write_style_env);
@@ -917,9 +917,9 @@ impl<'a> Env<'a> {
///
/// If the variable is not set, the default value will be used.
pub fn write_style_or<E, V>(mut self, write_style_env: E, default: V) -> Self
where
E: Into<Cow<'a, str>>,
V: Into<Cow<'a, str>>,
where
E: Into<Cow<'a, str>>,
V: Into<Cow<'a, str>>,
{
self.write_style = Var::new_with_default(write_style_env, default);
@@ -930,8 +930,8 @@ impl<'a> Env<'a> {
///
/// If the variable is not set, the default value will be used.
pub fn default_write_style_or<V>(mut self, default: V) -> Self
where
V: Into<Cow<'a, str>>,
where
V: Into<Cow<'a, str>>,
{
self.write_style = Var::new_with_default(DEFAULT_WRITE_STYLE_ENV, default);
@@ -945,8 +945,8 @@ impl<'a> Env<'a> {
impl<'a> Var<'a> {
fn new<E>(name: E) -> Self
where
E: Into<Cow<'a, str>>,
where
E: Into<Cow<'a, str>>,
{
Var {
name: name.into(),
@@ -968,13 +968,15 @@ impl<'a> Var<'a> {
fn get(&self) -> Option<String> {
env::var(&*self.name)
.ok()
.or_else(|| self.default.to_owned().map(|v| v.into_owned()))
.or_else(|| self.default
.to_owned()
.map(|v| v.into_owned()))
}
}
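`Var::get` above reads an environment variable and, when it is unset, falls back to the optional default. The same logic as a free function over plain `&str` values (a simplified sketch; the real `Var` stores `Cow<'a, str>`):

```rust
use std::env;

// Read an environment variable, falling back to an optional default
// when the variable is not set.
fn get_or_default(name: &str, default: Option<&str>) -> Option<String> {
    env::var(name)
        .ok()
        .or_else(|| default.map(str::to_owned))
}

fn main() {
    env::remove_var("DEMO_LOG");
    // Unset variable: the default is used.
    assert_eq!(
        get_or_default("DEMO_LOG", Some("warn")),
        Some("warn".to_owned())
    );
    env::set_var("DEMO_LOG", "debug");
    // Set variable: the environment wins over the default.
    assert_eq!(
        get_or_default("DEMO_LOG", Some("warn")),
        Some("debug".to_owned())
    );
    println!("lookup behaves as expected");
}
```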
impl<'a, T> From<T> for Env<'a>
where
T: Into<Cow<'a, str>>,
T: Into<Cow<'a, str>>
{
fn from(filter_env: T) -> Self {
Env::default().filter(filter_env.into())
@@ -991,26 +993,28 @@ impl<'a> Default for Env<'a> {
}
mod std_fmt_impls {
use super::*;
use std::fmt;
use super::*;
impl fmt::Debug for Logger {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
impl fmt::Debug for Logger{
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
f.debug_struct("Logger")
.field("filter", &self.filter)
.finish()
}
}
impl fmt::Debug for Builder {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
impl fmt::Debug for Builder{
fn fmt(&self, f: &mut fmt::Formatter)->fmt::Result {
if self.built {
f.debug_struct("Logger").field("built", &true).finish()
f.debug_struct("Logger")
.field("built", &true)
.finish()
} else {
f.debug_struct("Logger")
.field("filter", &self.filter)
.field("writer", &self.writer)
.finish()
.field("filter", &self.filter)
.field("writer", &self.writer)
.finish()
}
}
}
@@ -1073,7 +1077,7 @@ pub fn init() {
/// library has already initialized a global logger.
pub fn try_init_from_env<'a, E>(env: E) -> Result<(), SetLoggerError>
where
E: Into<Env<'a>>,
E: Into<Env<'a>>
{
let mut builder = Builder::from_env(env);
@@ -1105,25 +1109,24 @@ where
/// library has already initialized a global logger.
pub fn init_from_env<'a, E>(env: E)
where
E: Into<Env<'a>>,
E: Into<Env<'a>>
{
try_init_from_env(env)
.expect("env_logger::init_from_env should not be called after logger initialized");
try_init_from_env(env).expect("env_logger::init_from_env should not be called after logger initialized");
}
/// Create a new builder with the default environment variables.
///
///
/// The builder can be configured before being initialized.
pub fn builder() -> Builder {
Builder::from_default_env()
}
/// Create a builder from the given environment variables.
///
///
/// The builder can be configured before being initialized.
pub fn from_env<'a, E>(env: E) -> Builder
where
E: Into<Env<'a>>,
E: Into<Env<'a>>
{
Builder::from_env(env)
}
@@ -1145,10 +1148,7 @@ mod tests {
fn env_get_filter_reads_from_default_if_var_not_set() {
env::remove_var("env_get_filter_reads_from_default_if_var_not_set");
let env = Env::new().filter_or(
"env_get_filter_reads_from_default_if_var_not_set",
"from default",
);
let env = Env::new().filter_or("env_get_filter_reads_from_default_if_var_not_set", "from default");
assert_eq!(Some("from default".to_owned()), env.get_filter());
}
@@ -1157,8 +1157,7 @@ mod tests {
fn env_get_write_style_reads_from_var_if_set() {
env::set_var("env_get_write_style_reads_from_var_if_set", "from var");
let env =
Env::new().write_style_or("env_get_write_style_reads_from_var_if_set", "from default");
let env = Env::new().write_style_or("env_get_write_style_reads_from_var_if_set", "from default");
assert_eq!(Some("from var".to_owned()), env.get_write_style());
}
@@ -1167,10 +1166,7 @@ mod tests {
fn env_get_write_style_reads_from_default_if_var_not_set() {
env::remove_var("env_get_write_style_reads_from_default_if_var_not_set");
let env = Env::new().write_style_or(
"env_get_write_style_reads_from_default_if_var_not_set",
"from default",
);
let env = Env::new().write_style_or("env_get_write_style_reads_from_default_if_var_not_set", "from default");
assert_eq!(Some("from default".to_owned()), env.get_write_style());
}
@@ -1,8 +1,8 @@
extern crate env_logger;
extern crate log;
extern crate env_logger;
use std::env;
use std::process;
use std::env;
use std::str;
fn main() {
@@ -20,7 +20,7 @@ fn main() {
.unwrap_err();
assert_eq!(log::LevelFilter::Debug, log::max_level());
return;
return
}
let exe = env::current_exe().unwrap();
@@ -30,7 +30,7 @@ fn main() {
.output()
.unwrap_or_else(|e| panic!("Unable to start child process: {}", e));
if out.status.success() {
return;
return
}
println!("test failed: {}", out.status);
@@ -1,10 +1,9 @@
#[macro_use]
extern crate log;
#[macro_use] extern crate log;
extern crate env_logger;
use std::env;
use std::fmt;
use std::process;
use std::fmt;
use std::env;
use std::str;
struct Foo;
@@ -29,7 +28,7 @@ fn main() {
.output()
.unwrap_or_else(|e| panic!("Unable to start child process: {}", e));
if out.status.success() {
return;
return
}
println!("test failed: {}", out.status);
@@ -1,66 +0,0 @@
#[macro_use]
extern crate log;
extern crate env_logger;
use std::env;
use std::process;
use std::str;
use std::thread;
struct DropMe;
impl Drop for DropMe {
fn drop(&mut self) {
debug!("Dropping now");
}
}
fn run() {
// Use multiple thread local values to increase the chance that our TLS
// value will get destroyed after the FORMATTER key in the library
thread_local! {
static DROP_ME_0: DropMe = DropMe;
static DROP_ME_1: DropMe = DropMe;
static DROP_ME_2: DropMe = DropMe;
static DROP_ME_3: DropMe = DropMe;
static DROP_ME_4: DropMe = DropMe;
static DROP_ME_5: DropMe = DropMe;
static DROP_ME_6: DropMe = DropMe;
static DROP_ME_7: DropMe = DropMe;
static DROP_ME_8: DropMe = DropMe;
static DROP_ME_9: DropMe = DropMe;
}
DROP_ME_0.with(|_| {});
DROP_ME_1.with(|_| {});
DROP_ME_2.with(|_| {});
DROP_ME_3.with(|_| {});
DROP_ME_4.with(|_| {});
DROP_ME_5.with(|_| {});
DROP_ME_6.with(|_| {});
DROP_ME_7.with(|_| {});
DROP_ME_8.with(|_| {});
DROP_ME_9.with(|_| {});
}
fn main() {
env_logger::init();
if env::var("YOU_ARE_TESTING_NOW").is_ok() {
// Run on a separate thread because TLS values on the main thread
// won't have their destructors run if pthread is used.
// https://doc.rust-lang.org/std/thread/struct.LocalKey.html#platform-specific-behavior
thread::spawn(run).join().unwrap();
} else {
let exe = env::current_exe().unwrap();
let out = process::Command::new(exe)
.env("YOU_ARE_TESTING_NOW", "1")
.env("RUST_LOG", "debug")
.output()
.unwrap_or_else(|e| panic!("Unable to start child process: {}", e));
if !out.status.success() {
println!("test failed: {}", out.status);
println!("--- stdout\n{}", str::from_utf8(&out.stdout).unwrap());
println!("--- stderr\n{}", str::from_utf8(&out.stderr).unwrap());
process::exit(1);
}
}
}

@@ -1,9 +1,8 @@
#[macro_use]
extern crate log;
#[macro_use] extern crate log;
extern crate env_logger;
use std::env;
use std::process;
use std::env;
use std::str;
fn main() {
@@ -26,9 +25,7 @@ fn run_child(rust_log: String) -> bool {
.env("RUST_LOG", rust_log)
.output()
.unwrap_or_else(|e| panic!("Unable to start child process: {}", e));
str::from_utf8(out.stderr.as_ref())
.unwrap()
.contains("XYZ Message")
str::from_utf8(out.stderr.as_ref()).unwrap().contains("XYZ Message")
}
fn assert_message_printed(rust_log: &str) {
@@ -39,10 +36,7 @@ fn assert_message_printed(rust_log: &str) {
fn assert_message_not_printed(rust_log: &str) {
if run_child(rust_log.to_string()) {
panic!(
"RUST_LOG={} should not allow the test log message",
rust_log
)
panic!("RUST_LOG={} should not allow the test log message", rust_log)
}
}

@@ -1 +0,0 @@
{"files":{"Cargo.toml":"f522bcd6e15aa3817fbc327ac33ae663ba494f1d32b9d91a5a35b773f0a0edbb","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"63e747d86bdeb67638f26b4b75107f129c5f12de432ae83ccdb1ccbe28debf30","README.md":"67780fbfcaf2cd01e3b7f5c7d1ef8b9e385b7cd4435358954aec24a85755ced2","src/error.rs":"bee4653bcdfac1c903c41c6ae647fbeeb8ce45818886c69cead324156e77a9c5","src/ffistr.rs":"44460d6b0879a76274af8508b9cdbab5b8170f646b05a9415a2e5e51bd2f040b","src/handle_map.rs":"a2f25411c953d07daba18a8a39e5731b01d7b07c78414824b3b66ed13f8f3c2f","src/into_ffi.rs":"05c4a1c9f3aebb4570ac6578f946ba9d9fc90c54abb76f30704868b277df2f9d","src/lib.rs":"6c111cdd9fa2251a9013c19c89930e46bc7357d3ea2f76040cdeb6223d9583e7","src/macros.rs":"1f05d94853bbf5cfb1ece0333dd36e6b8e352ecdcaafc1c6f491934d05e4b140","src/string.rs":"966d2b41fae4e7a6083eb142a57e669e4bafd833f01c8b24fc67dff4fb4a5595"},"package":"efee06d8ac3e85a6e9759a0ed2682235a70832ebe10953849b92cdced8688660"}

third_party/rust/ffi-support/Cargo.toml (vendored)

@@ -1,53 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies
#
# If you believe there's an error in this file please file an
# issue against the rust-lang/cargo repository. If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)
[package]
edition = "2018"
name = "ffi-support"
version = "0.3.5"
authors = ["Thom Chiovoloni <tchiovoloni@mozilla.com>"]
description = "A crate to help expose Rust functions over the FFI."
readme = "README.md"
keywords = ["ffi", "bindings"]
categories = ["development-tools::ffi"]
license = "Apache-2.0 / MIT"
repository = "https://github.com/mozilla/application-services"
[dependencies.backtrace]
version = "0.3.9"
optional = true
[dependencies.failure]
version = "0.1.5"
[dependencies.failure_derive]
version = "0.1.5"
[dependencies.lazy_static]
version = "1.3.0"
[dependencies.log]
version = "0.4"
[dev-dependencies.env_logger]
version = "0.6.2"
[dev-dependencies.rand]
version = "0.7.0"
[dev-dependencies.rayon]
version = "1.1.0"
[features]
default = []
log_backtraces = ["log_panics", "backtrace"]
log_panics = []
[badges.travis-ci]
repository = "mozilla/application-services"

third_party/rust/ffi-support/LICENSE-APACHE (vendored)

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

third_party/rust/ffi-support/LICENSE-MIT (vendored)

@@ -1,25 +0,0 @@
Copyright (c) 2018-2019 Mozilla Foundation
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.

third_party/rust/ffi-support/README.md (vendored)

@@ -1,32 +0,0 @@
# FFI Support
[![Docs](https://docs.rs/ffi-support/badge.svg)](https://docs.rs/ffi-support)
This crate implements a support library to simplify implementing the patterns that the [mozilla/application-services](https://github.com/mozilla/application-services) repository uses for its "Rust Component" FFI libraries, which are used to share Rust code
In particular, it can assist with the following areas:
1. Avoiding throwing panics over the FFI (which is undefined behavior)
2. Translating rust errors (and panics) into errors that the caller on the other side of the FFI is able to handle.
3. Converting strings to/from rust str.
4. Passing non-string data (in a few ways, including exposing an opaque pointer, marshalling data to JSON strings with serde, as well as arbitrary custom handling) back and forth between Rust and whatever the caller on the other side of the FFI is.
Additionally, its documentation describes a number of the problems we've hit doing this to expose libraries to consumers on mobile platforms.
## Usage
Add the following to your Cargo.toml
```toml
ffi-support = "0.1.1"
```
For further examples, the examples in the docs is the best starting point, followed by the usage code in the [mozilla/application-services](https://github.com/mozilla/application-services) repo (for example [here](https://github.com/mozilla/application-services/blob/master/components/places/ffi/src/lib.rs) or [here](https://github.com/mozilla/application-services/blob/master/components/places/src/ffi.rs)).
## License
Dual licensed under the Apache License, Version 2.0 <LICENSE-APACHE> or
<http://www.apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT> or
<http://opensource.org/licenses/MIT>, at your option. All files in the project
carrying such notice may not be copied, modified, or distributed except
according to those terms.

third_party/rust/ffi-support/src/error.rs (vendored)

@@ -1,365 +0,0 @@
/* Copyright 2018-2019 Mozilla Foundation
*
* Licensed under the Apache License (Version 2.0), or the MIT license,
* (the "Licenses") at your option. You may not use this file except in
* compliance with one of the Licenses. You may obtain copies of the
* Licenses at:
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://opensource.org/licenses/MIT
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Licenses is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the Licenses for the specific language governing permissions and
* limitations under the Licenses. */
use crate::string::{destroy_c_string, rust_string_to_c};
use std::os::raw::c_char;
use std::{self, ptr};
/// Represents an error that occurred within rust, storing both an error code, and additional data
/// that may be used by the caller.
///
/// Misuse of this type can cause numerous issues, so please read the entire documentation before
/// usage.
///
/// ## Rationale
///
/// This library encourages a pattern of taking a `&mut ExternError` as the final parameter for
/// functions exposed over the FFI. This is an "out parameter" which we use to write error/success
/// information that occurred during the function's execution.
///
/// To be clear, this means instances of `ExternError` will be created on the other side of the FFI,
/// and passed (by mutable reference) into Rust.
///
/// While this pattern is not particularly ergonomic in Rust (although hopefully this library
/// helps!), it offers two main benefits over something more ergonomic (which might be `Result`
/// shaped).
///
/// 1. It avoids defining a large number of `Result`-shaped types in the FFI consumer, as would
///    be required with something like a `struct ExternResult<T> { ok: *mut T, err:... }`
///
/// 2. It offers additional type safety over `struct ExternResult { ok: *mut c_void, err:... }`,
/// which helps avoid memory safety errors. It also can offer better performance for returning
/// primitives and repr(C) structs (no boxing required).
///
/// It also is less tricky to use properly than giving consumers a `get_last_error()` function, or
/// similar.
///
/// ## Caveats
///
/// Note that the order of the fields is `code` (an i32) then `message` (a `*mut c_char`), getting
/// this wrong on the other side of the FFI will cause memory corruption and crashes.
///
/// The fields are public largely for documentation purposes, but you should use
/// [`ExternError::new_error`] or [`ExternError::success`] to create these.
///
/// ## Layout/fields
///
/// This struct's field are not `pub` (mostly so that we can soundly implement `Send`, but also so
/// that we can verify rust users are constructing them appropriately), the fields, their types, and
/// their order are *very much* a part of the public API of this type. Consumers on the other side
/// of the FFI will need to know its layout.
///
/// If this were a C struct, it would look like
///
/// ```c,no_run
/// struct ExternError {
/// int32_t code;
/// char *message; // note: nullable
/// };
/// ```
///
/// In rust, there are two fields, in this order: `code: ErrorCode`, and `message: *mut c_char`.
/// Note that ErrorCode is a `#[repr(transparent)]` wrapper around an `i32`, so the first property
/// is equivalent to an `i32`.
///
/// #### The `code` field.
///
/// This is the error code, 0 represents success, all other values represent failure. If the `code`
/// field is nonzero, there should always be a message, and if it's zero, the message will always be
/// null.
///
/// #### The `message` field.
///
/// This is a null-terminated C string containing some amount of additional information about the
/// error. If the `code` property is nonzero, there should always be an error message. Otherwise,
/// this will be null.
///
/// This string (when not null) is allocated on the rust heap (using this crate's
/// [`rust_string_to_c`]), and must be freed on it as well. Critically, if there are multiple rust
/// packages being used in the same application, it *must be freed on the same heap that
/// allocated it*, or you will corrupt both heaps.
///
/// Typically, this object is managed on the other side of the FFI (on the "FFI consumer"), which
/// means you must expose a function to release the resources of `message` which can be done easily
/// using the [`define_string_destructor!`] macro provided by this crate.
///
/// If, for some reason, you need to release the resources directly, you may call
/// `ExternError::release()`. Note that you probably do not need to do this, and it's
/// intentional that this is not called automatically by implementing `drop`.
///
/// ## Example
///
/// ```rust,no_run
/// use ffi_support::{ExternError, ErrorCode};
///
/// #[derive(Debug)]
/// pub enum MyError {
/// IllegalFoo(String),
/// InvalidBar(i64),
/// // ...
/// }
///
/// // Putting these in a module is obviously optional, but it allows documentation, and helps
/// // avoid accidental reuse.
/// pub mod error_codes {
/// // note: -1 and 0 are reserved by ffi_support
/// pub const ILLEGAL_FOO: i32 = 1;
/// pub const INVALID_BAR: i32 = 2;
/// // ...
/// }
///
/// fn get_code(e: &MyError) -> ErrorCode {
/// match e {
/// MyError::IllegalFoo(_) => ErrorCode::new(error_codes::ILLEGAL_FOO),
/// MyError::InvalidBar(_) => ErrorCode::new(error_codes::INVALID_BAR),
/// // ...
/// }
/// }
///
/// impl From<MyError> for ExternError {
/// fn from(e: MyError) -> ExternError {
/// ExternError::new_error(get_code(&e), format!("{:?}", e))
/// }
/// }
/// ```
#[repr(C)]
// Note: We're intentionally not implementing Clone -- it's too risky.
#[derive(Debug, PartialEq)]
pub struct ExternError {
// Don't reorder or add anything here!
code: ErrorCode,
message: *mut c_char,
}
impl std::panic::UnwindSafe for ExternError {}
impl std::panic::RefUnwindSafe for ExternError {}
/// This is sound so long as our fields are private.
unsafe impl Send for ExternError {}
impl ExternError {
/// Construct an ExternError representing failure from a code and a message.
#[inline]
pub fn new_error(code: ErrorCode, message: impl Into<String>) -> Self {
assert!(
!code.is_success(),
"Attempted to construct a success ExternError with a message"
);
Self {
code,
message: rust_string_to_c(message),
}
}
/// Returns a ExternError representing a success. Also returned by ExternError::default()
#[inline]
pub fn success() -> Self {
Self {
code: ErrorCode::SUCCESS,
message: ptr::null_mut(),
}
}
/// Helper for the case where we aren't exposing this back over the FFI and
/// we just want to warn if an error occurred and then release the allocated
/// memory.
///
/// Typically, this is done if the error will still be detected and reported
/// by other channels.
///
/// We assume we're not inside a catch_unwind, and so we wrap inside one
/// ourselves.
pub fn consume_and_log_if_error(self) {
if !self.code.is_success() {
// in practice this should never panic, but you never know...
crate::abort_on_panic::call_with_output(|| {
log::error!("Unhandled ExternError({:?}) {:?}", self.code, unsafe {
crate::FfiStr::from_raw(self.message)
});
unsafe {
self.manually_release();
}
})
}
}
/// Get the `code` property.
#[inline]
pub fn get_code(&self) -> ErrorCode {
self.code
}
/// Get the `message` property as a pointer to c_char.
#[inline]
pub fn get_raw_message(&self) -> *const c_char {
self.message as *const _
}
/// Get the `message` property as an [`FfiStr`]
#[inline]
pub fn get_message(&self) -> crate::FfiStr<'_> {
// Safe because the lifetime is the same as our lifetime.
unsafe { crate::FfiStr::from_raw(self.get_raw_message()) }
}
/// Get the `message` property as a String, or None if this is not an error result.
///
/// ## Safety
///
/// You should only call this if you are certain that the other side of the FFI doesn't have a
/// reference to this result (more specifically, to the `message` property) anywhere!
#[inline]
pub unsafe fn get_and_consume_message(self) -> Option<String> {
if self.code.is_success() {
None
} else {
let res = self.get_message().into_string();
self.manually_release();
Some(res)
}
}
/// Manually release the memory behind this string. You probably don't want to call this.
///
/// ## Safety
///
/// You should only call this if you are certain that the other side of the FFI doesn't have a
/// reference to this result (more specifically, to the `message` property) anywhere!
pub unsafe fn manually_release(self) {
if !self.message.is_null() {
destroy_c_string(self.message)
}
}
}
impl Default for ExternError {
#[inline]
fn default() -> Self {
ExternError::success()
}
}
// This is the `Err` of std::thread::Result, which is what
// `panic::catch_unwind` returns.
impl From<Box<dyn std::any::Any + Send + 'static>> for ExternError {
fn from(e: Box<dyn std::any::Any + Send + 'static>) -> Self {
// The documentation suggests that it will *usually* be a str or String.
let message = if let Some(s) = e.downcast_ref::<&'static str>() {
s.to_string()
} else if let Some(s) = e.downcast_ref::<String>() {
s.clone()
} else {
"Unknown panic!".to_string()
};
log::error!("Caught a panic calling rust code: {:?}", message);
ExternError::new_error(ErrorCode::PANIC, message)
}
}
/// A wrapper around error codes, which is represented identically to an i32 on the other side of
/// the FFI. Essentially exists to check that we don't accidentally reuse success/panic codes for
/// other things.
#[repr(transparent)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default)]
pub struct ErrorCode(i32);
impl ErrorCode {
/// The ErrorCode used for success.
pub const SUCCESS: ErrorCode = ErrorCode(0);
/// The ErrorCode used for panics. It's unlikely you need to ever use this.
// TODO: Consider moving to the reserved region...
pub const PANIC: ErrorCode = ErrorCode(-1);
/// The ErrorCode used for handle map errors.
pub const INVALID_HANDLE: ErrorCode = ErrorCode(-1000);
/// Construct an error code from an integer code.
///
/// ## Panics
///
/// Panics if you call it with 0 (reserved for success, but you can use `ErrorCode::SUCCESS` if
/// that's what you want), or -1 (reserved for panics, but you can use `ErrorCode::PANIC` if
/// that's what you want).
pub fn new(code: i32) -> Self {
assert!(code > ErrorCode::INVALID_HANDLE.0 && code != ErrorCode::PANIC.0 && code != ErrorCode::SUCCESS.0,
"Error: The ErrorCodes `{success}`, `{panic}`, and all error codes less than or equal \
to `{reserved}` are reserved (got {code}). You may use the associated constants on this \
type (`ErrorCode::PANIC`, etc) if you'd like instances of those error codes.",
panic = ErrorCode::PANIC.0,
success = ErrorCode::SUCCESS.0,
reserved = ErrorCode::INVALID_HANDLE.0,
code = code,
);
ErrorCode(code)
}
/// Get the raw numeric value of this ErrorCode.
#[inline]
pub fn code(self) -> i32 {
self.0
}
/// Returns whether or not this is a success code.
#[inline]
pub fn is_success(self) -> bool {
self.code() == 0
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
#[should_panic]
fn test_code_new_reserved_success() {
ErrorCode::new(0);
}
#[test]
#[should_panic]
fn test_code_new_reserved_panic() {
ErrorCode::new(-1);
}
#[test]
#[should_panic]
fn test_code_new_reserved_handle_error() {
ErrorCode::new(-1000);
}
#[test]
#[should_panic]
fn test_code_new_reserved_unknown() {
// Everything below -1000 should be reserved.
ErrorCode::new(-1043);
}
#[test]
fn test_code_new_allowed() {
// Should not panic
ErrorCode::new(-2);
}
#[test]
fn test_code() {
assert!(!ErrorCode::PANIC.is_success());
assert!(!ErrorCode::INVALID_HANDLE.is_success());
assert!(ErrorCode::SUCCESS.is_success());
assert_eq!(ErrorCode::default(), ErrorCode::SUCCESS);
}
}

third_party/rust/ffi-support/src/ffistr.rs (vendored)

@@ -1,248 +0,0 @@
/* Copyright 2018-2019 Mozilla Foundation
*
* Licensed under the Apache License (Version 2.0), or the MIT license,
* (the "Licenses") at your option. You may not use this file except in
* compliance with one of the Licenses. You may obtain copies of the
* Licenses at:
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://opensource.org/licenses/MIT
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Licenses is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the Licenses for the specific language governing permissions and
* limitations under the Licenses. */
use std::ffi::CStr;
use std::marker::PhantomData;
use std::os::raw::c_char;
/// `FfiStr<'a>` is a safe (`#[repr(transparent)]`) wrapper around a
/// nul-terminated `*const c_char` (e.g. a C string). Conceptually, it is
/// similar to [`std::ffi::CStr`], except that it may be used in the signatures
/// of extern "C" functions.
///
/// Functions accepting strings should use this instead of accepting a C string
/// directly. This allows us to write those functions using safe code without
/// allowing safe Rust to cause memory unsafety.
///
/// A single function for constructing these from Rust ([`FfiStr::from_raw`])
/// has been provided. Most of the time, this should not be necessary, and users
/// should accept `FfiStr` in the parameter list directly.
///
/// ## Caveats
///
/// An effort has been made to make this struct hard to misuse, however it is
/// still possible, if the `'static` lifetime is manually specified in the
/// struct. E.g.
///
/// ```rust,no_run
/// # use ffi_support::FfiStr;
/// // NEVER DO THIS
/// #[no_mangle]
/// extern "C" fn never_do_this(s: FfiStr<'static>) {
/// // save `s` somewhere, and access it after this
/// // function returns.
/// }
/// ```
///
/// Instead, one of the following patterns should be used:
///
/// ```
/// # use ffi_support::FfiStr;
/// #[no_mangle]
/// extern "C" fn valid_use_1(s: FfiStr<'_>) {
/// // Use of `s` after this function returns is impossible
/// }
/// // Alternative:
/// #[no_mangle]
/// extern "C" fn valid_use_2(s: FfiStr) {
/// // Use of `s` after this function returns is impossible
/// }
/// ```
#[repr(transparent)]
pub struct FfiStr<'a> {
cstr: *const c_char,
_boo: PhantomData<&'a ()>,
}
impl<'a> FfiStr<'a> {
/// Construct an `FfiStr` from a raw pointer.
///
/// This should not be needed most of the time, and users should instead
/// accept `FfiStr` in function parameter lists.
#[inline]
pub unsafe fn from_raw(ptr: *const c_char) -> Self {
Self {
cstr: ptr,
_boo: PhantomData,
}
}
/// Construct a FfiStr from a `std::ffi::CStr`. This is provided for
/// completeness, as a safe method of producing an `FfiStr` in Rust.
#[inline]
pub fn from_cstr(cstr: &'a CStr) -> Self {
Self {
cstr: cstr.as_ptr(),
_boo: PhantomData,
}
}
/// Get an `&str` out of the `FfiStr`. This will panic in any case that
/// [`FfiStr::as_opt_str`] would return `None` (e.g. null pointer or invalid
/// UTF-8).
///
/// If the string should be optional, you should use [`FfiStr::as_opt_str`]
/// instead. If an owned string is desired, use [`FfiStr::into_string`] or
/// [`FfiStr::into_opt_string`].
#[inline]
pub fn as_str(&self) -> &'a str {
self.as_opt_str()
.expect("Unexpected null string pointer passed to rust")
}
/// Get an `Option<&str>` out of the `FfiStr`. If this stores a null
/// pointer, then None will be returned. If a string containing invalid
/// UTF-8 was passed, then an error will be logged and `None` will be
/// returned.
///
/// If the string is a required argument, use [`FfiStr::as_str`], or
/// [`FfiStr::into_string`] instead. If `Option<String>` is desired, use
/// [`FfiStr::into_opt_string`] (which will handle invalid UTF-8 by
/// replacing with the replacement character).
pub fn as_opt_str(&self) -> Option<&'a str> {
if self.cstr.is_null() {
return None;
}
unsafe {
match std::ffi::CStr::from_ptr(self.cstr).to_str() {
Ok(s) => Some(s),
Err(e) => {
log::error!("Invalid UTF-8 was passed to rust! {:?}", e);
None
}
}
}
}
/// Get an `Option<String>` out of the `FfiStr`. Returns `None` if this
/// `FfiStr` holds a null pointer. Note that unlike [`FfiStr::as_opt_str`],
/// invalid UTF-8 is replaced with the replacement character instead of
/// causing us to return None.
///
/// If the string should be mandatory, you should use
/// [`FfiStr::into_string`] instead. If an owned string is not needed, you
/// may want to use [`FfiStr::as_str`] or [`FfiStr::as_opt_str`] instead,
/// (however, note the differences in how invalid UTF-8 is handled, should
/// this be relevant to your use).
pub fn into_opt_string(self) -> Option<String> {
if !self.cstr.is_null() {
unsafe { Some(CStr::from_ptr(self.cstr).to_string_lossy().to_string()) }
} else {
None
}
}
/// Get a `String` out of a `FfiStr`. This function is essentially a
/// convenience wrapper for `ffi_str.into_opt_string().unwrap()`, with a
/// message that indicates that a null argument was passed to rust when it
/// should be mandatory. As with [`FfiStr::into_opt_string`], invalid UTF-8
/// is replaced with the replacement character if encountered.
///
/// If the string should *not* be mandatory, you should use
/// [`FfiStr::into_opt_string`] instead. If an owned string is not needed,
/// you may want to use [`FfiStr::as_str`] or [`FfiStr::as_opt_str`]
/// instead, (however, note the differences in how invalid UTF-8 is handled,
/// should this be relevant to your use).
#[inline]
pub fn into_string(self) -> String {
self.into_opt_string()
.expect("Unexpected null string pointer passed to rust")
}
}
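The null-check and UTF-8 handling that `as_opt_str` performs can be sketched in isolation with only the standard library. The helper name `c_str_to_opt_str` below is illustrative and not part of this crate:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Illustrative stand-in for FfiStr::as_opt_str: null pointers and invalid
// UTF-8 both map to None rather than panicking or causing UB.
fn c_str_to_opt_str<'a>(ptr: *const c_char) -> Option<&'a str> {
    if ptr.is_null() {
        return None;
    }
    // Safety: the caller must pass a valid nul-terminated string (or null).
    unsafe { CStr::from_ptr(ptr).to_str().ok() }
}

fn main() {
    let s = CString::new("hello").unwrap();
    assert_eq!(c_str_to_opt_str(s.as_ptr()), Some("hello"));
    assert_eq!(c_str_to_opt_str(std::ptr::null()), None);
    println!("ok");
}
```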
impl<'a> std::fmt::Debug for FfiStr<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
if let Some(s) = self.as_opt_str() {
write!(f, "FfiStr({:?})", s)
} else {
write!(f, "FfiStr(null)")
}
}
}
// Conversions...
impl<'a> From<FfiStr<'a>> for String {
#[inline]
fn from(f: FfiStr<'a>) -> Self {
f.into_string()
}
}
impl<'a> From<FfiStr<'a>> for Option<String> {
#[inline]
fn from(f: FfiStr<'a>) -> Self {
f.into_opt_string()
}
}
impl<'a> From<FfiStr<'a>> for Option<&'a str> {
#[inline]
fn from(f: FfiStr<'a>) -> Self {
f.as_opt_str()
}
}
impl<'a> From<FfiStr<'a>> for &'a str {
#[inline]
fn from(f: FfiStr<'a>) -> Self {
f.as_str()
}
}
// TODO: `AsRef<str>`?
// Comparisons...
// Compare FfiStrs with each other
impl<'a> PartialEq for FfiStr<'a> {
#[inline]
fn eq(&self, other: &FfiStr<'a>) -> bool {
self.as_opt_str() == other.as_opt_str()
}
}
// Compare FfiStr with str
impl<'a> PartialEq<str> for FfiStr<'a> {
#[inline]
fn eq(&self, other: &str) -> bool {
self.as_opt_str() == Some(other)
}
}
// Compare FfiStr with &str
impl<'a, 'b> PartialEq<&'b str> for FfiStr<'a> {
#[inline]
fn eq(&self, other: &&'b str) -> bool {
self.as_opt_str() == Some(*other)
}
}
// rhs/lhs swap version of above
impl<'a> PartialEq<FfiStr<'a>> for str {
#[inline]
fn eq(&self, other: &FfiStr<'a>) -> bool {
Some(self) == other.as_opt_str()
}
}
// rhs/lhs swap...
impl<'a, 'b> PartialEq<FfiStr<'a>> for &'b str {
#[inline]
fn eq(&self, other: &FfiStr<'a>) -> bool {
Some(*self) == other.as_opt_str()
}
}

1307  third_party/rust/ffi-support/src/handle_map.rs (vendored)
Diff not shown because of its large size.

275  third_party/rust/ffi-support/src/into_ffi.rs (vendored)

@@ -1,275 +0,0 @@
/* Copyright 2018-2019 Mozilla Foundation
*
* Licensed under the Apache License (Version 2.0), or the MIT license,
* (the "Licenses") at your option. You may not use this file except in
* compliance with one of the Licenses. You may obtain copies of the
* Licenses at:
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://opensource.org/licenses/MIT
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Licenses is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the Licenses for the specific language governing permissions and
* limitations under the Licenses. */
use crate::string::*;
use std::os::raw::c_char;
use std::ptr;
/// This trait is used to return types over the FFI. It essentially is a mapping between a type and
/// a version of that type we can pass back to C (`IntoFfi::Value`).
///
/// The main wrinkle is that we need to be able to pass a value back to C in both the success and
/// error cases. In the error cases, we don't want there to need to be any cleanup for the foreign
/// code to do, and we want the API to be relatively easy to use.
///
/// Additionally, the mapping is not consistent for different types. For some rust types, we want to
/// convert them to JSON. For some, we want to return an opaque `*mut T` handle. For others,
/// we'd like to return by value.
///
/// This trait supports those cases by adding some type-level indirection, and allowing both cases
/// to be provided (that is, what is done in both the error and success cases).
///
/// We implement this for the following types:
///
/// - `String`, by conversion to `*mut c_char`. Note that the caller (on the other side of the FFI)
/// is expected to free this, so you will need to provide them with a destructor for strings,
/// which can be done with the [`define_string_destructor!`] macro.
///
/// - `()`: as a no-op conversion -- this just allows us to expose functions without a return type
/// over the FFI.
///
/// - `bool`: is implemented by conversion to `u8` (`0u8` is `false`, `1u8` is `true`, and
/// `ffi_default()` is `false`). This is because it doesn't seem to be safe to pass over the FFI
/// directly (or at least, doing so might hit a bug in JNA).
///
/// - All numeric primitives except `isize`, `usize`, `char`, `i128`, and `u128` are implemented
/// by passing directly through (and using `Default::default()` for `ffi_default()`).
/// - `isize`, `usize` could be added, but they'd be quite easy to accidentally misuse, so we
/// currently omit them.
/// - `char` is less easy to misuse, but it's also less clear why you'd want to be doing this.
/// If we did ever add this, we'd probably want to convert to a `u32` (similar to how we
/// convert `bool` to `u8`) for better ABI stability.
/// - `i128` and `u128` do not have a stable ABI, so they cannot be returned across the FFI.
///
/// - `Option<T>` where `T` is `IntoFfi`, by returning `IntoFfi::ffi_default()` for `None`.
///
/// None of these are directly helpful for user types though, so macros are provided for the
/// following cases:
///
/// 1. For types which are passed around by an opaque pointer, the macro
/// [`implement_into_ffi_by_pointer!`] is provided.
///
/// 2. For types which should be returned as a JSON string, the macro
/// [`implement_into_ffi_by_json!`] is provided.
///
/// See the "Examples" section below for some other cases, such as returning by value.
///
/// ## Safety
///
/// This is an unsafe trait (implementing it requires `unsafe impl`). This is because we cannot
/// guarantee that your type is safe to pass to C. The helpers we've provided as macros should be
/// safe to use, and in the cases where a common pattern can't be done both safely and generically,
/// we've opted not to provide a macro for it. That said, many of these cases are still safe if you
/// meet some relatively basic requirements, see below for examples.
///
/// ## Examples
///
/// ### Returning types by value
///
/// If you want to return a type by value, we don't provide a macro for this, primarily because
/// we cannot statically guarantee that doing so is safe. However, it *is* safe for the cases
/// where the type is either `#[repr(C)]` or `#[repr(transparent)]`. If this doesn't hold, you will
/// want to use a different option!
///
/// Regardless, if this holds, it's fairly simple to implement, for example:
///
/// ```rust
/// # use ffi_support::IntoFfi;
/// #[derive(Default)]
/// #[repr(C)]
/// pub struct Point {
/// pub x: i32,
/// pub y: i32,
/// }
///
/// unsafe impl IntoFfi for Point {
/// type Value = Self;
/// #[inline] fn ffi_default() -> Self { Default::default() }
/// #[inline] fn into_ffi_value(self) -> Self { self }
/// }
/// ```
///
/// ### Conversion to another type (which is returned over the FFI)
///
/// In the FxA FFI, we used to have a `SyncKeys` type, which was converted to a different type before
/// returning over the FFI. (The real FxA FFI is a little different, and more complex, but this is
/// relatively close, and more widely recommendable than the one the FxA FFI uses):
///
/// This is fairly easy to do by performing the conversion inside `IntoFfi`.
///
/// ```rust
/// # use ffi_support::{self, IntoFfi};
/// # use std::{ptr, os::raw::c_char};
/// pub struct SyncKeys(pub String, pub String);
///
/// #[repr(C)]
/// pub struct SyncKeysC {
/// pub sync_key: *mut c_char,
/// pub xcs: *mut c_char,
/// }
///
/// unsafe impl IntoFfi for SyncKeys {
/// type Value = SyncKeysC;
/// #[inline]
/// fn ffi_default() -> SyncKeysC {
/// SyncKeysC {
/// sync_key: ptr::null_mut(),
/// xcs: ptr::null_mut(),
/// }
/// }
///
/// #[inline]
/// fn into_ffi_value(self) -> SyncKeysC {
/// SyncKeysC {
/// sync_key: ffi_support::rust_string_to_c(self.0),
/// xcs: ffi_support::rust_string_to_c(self.1),
/// }
/// }
/// }
///
/// // Note: this type manages memory, so you still will want to expose a destructor for this,
/// // and possibly implement Drop as well.
/// ```
pub unsafe trait IntoFfi {
/// This type must be:
///
/// 1. Compatible with C, which is to say `#[repr(C)]`, a numeric primitive, another type that
///    has guarantees made about its layout, or a `#[repr(transparent)]` wrapper around one of
/// those.
///
/// One could even use `&T`, so long as `T: Sized`, although it's extremely dubious to return
/// a reference to borrowed memory over the FFI, since it's very difficult for the caller
/// to know how long it remains valid.
///
/// 2. Capable of storing an empty/ignorable/default value.
///
/// 3. Capable of storing the actual value.
///
/// Valid examples include:
///
/// - Primitive numbers (other than i128/u128)
///
/// - #[repr(C)] structs containing only things on this list.
///
/// - Raw pointers: `*const T`, and `*mut T`
///
/// - Enums with a fixed `repr`, although it's a good idea to avoid `#[repr(C)]` enums in favor of
///   `#[repr(i32)]` (any other fixed-size integer repr should also be fine), as it's potentially
/// error prone to access `#[repr(C)]` enums from Android over JNA (it's only safe if C's
/// `sizeof(int) == 4`, which is very common, but not universally true).
///
/// - `&T`/`&mut T` where `T: Sized` but only if you really know what you're doing, because this is
/// probably a mistake.
///
/// Invalid examples include things like `&str`, `&[T]`, `String`, `Vec<T>`, `Box<T>`,
/// `std::ffi::CString`, `&std::ffi::CStr`, etc. (Note that eventually, `Box<T>` may be valid
/// `where T: Sized`, but currently it is not).
type Value;
/// Return an 'empty' value. This is what's passed back to C in the case of an error,
/// so it doesn't actually need to be "empty", so much as "ignorable". Note that this
/// is also used when an empty `Option<T>` is returned.
fn ffi_default() -> Self::Value;
/// Convert ourselves into a value we can pass back to C with confidence.
fn into_ffi_value(self) -> Self::Value;
}
unsafe impl IntoFfi for String {
type Value = *mut c_char;
#[inline]
fn ffi_default() -> Self::Value {
ptr::null_mut()
}
#[inline]
fn into_ffi_value(self) -> Self::Value {
rust_string_to_c(self)
}
}
// Implement IntoFfi for Option<T> by falling back to ffi_default for None.
unsafe impl<T: IntoFfi> IntoFfi for Option<T> {
type Value = <T as IntoFfi>::Value;
#[inline]
fn ffi_default() -> Self::Value {
T::ffi_default()
}
#[inline]
fn into_ffi_value(self) -> Self::Value {
if let Some(s) = self {
s.into_ffi_value()
} else {
T::ffi_default()
}
}
}
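For a concrete type like `String`, the effect of this impl can be sketched standalone: `None` comes out as the string type's `ffi_default()`, i.e. a null pointer. The helper below is illustrative and uses `CString` directly rather than this crate's `rust_string_to_c`:

```rust
use std::ffi::CString;
use std::os::raw::c_char;
use std::ptr;

// Illustrative: Option<String> -> *mut c_char, with None (and interior nuls)
// mapping to the "ffi default" null pointer.
fn opt_string_to_c(s: Option<String>) -> *mut c_char {
    match s {
        Some(s) => CString::new(s).map(CString::into_raw).unwrap_or(ptr::null_mut()),
        None => ptr::null_mut(),
    }
}

fn main() {
    assert!(opt_string_to_c(None).is_null());
    let p = opt_string_to_c(Some("hi".into()));
    assert!(!p.is_null());
    // Reclaim the allocation so the example doesn't leak.
    unsafe { drop(CString::from_raw(p)) };
    println!("ok");
}
```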
// We've had problems in the past returning booleans over the FFI (specifically to JNA), and so
// we convert them to `u8`.
unsafe impl IntoFfi for bool {
type Value = u8;
#[inline]
fn ffi_default() -> Self::Value {
0u8
}
#[inline]
fn into_ffi_value(self) -> Self::Value {
self as u8
}
}
unsafe impl IntoFfi for crate::ByteBuffer {
type Value = crate::ByteBuffer;
#[inline]
fn ffi_default() -> Self::Value {
crate::ByteBuffer::default()
}
#[inline]
fn into_ffi_value(self) -> Self::Value {
self
}
}
// just cuts down on boilerplate. Not public.
macro_rules! impl_into_ffi_for_primitive {
($($T:ty),+) => {$(
unsafe impl IntoFfi for $T {
type Value = Self;
#[inline] fn ffi_default() -> Self { Default::default() }
#[inline] fn into_ffi_value(self) -> Self { self }
}
)+}
}
// See IntoFfi docs for why this is not exhaustive
impl_into_ffi_for_primitive![(), i8, u8, i16, u16, i32, u32, i64, u64, f32, f64];
// just cuts down on boilerplate. Not public.
macro_rules! impl_into_ffi_for_pointer {
($($T:ty),+) => {$(
unsafe impl IntoFfi for $T {
type Value = Self;
#[inline] fn ffi_default() -> Self { ptr::null_mut() }
#[inline] fn into_ffi_value(self) -> Self { self }
}
)+}
}
impl_into_ffi_for_pointer![*mut i8, *const i8, *mut u8, *const u8];

483  third_party/rust/ffi-support/src/lib.rs (vendored)

@@ -1,483 +0,0 @@
/* Copyright 2018-2019 Mozilla Foundation
*
* Licensed under the Apache License (Version 2.0), or the MIT license,
* (the "Licenses") at your option. You may not use this file except in
* compliance with one of the Licenses. You may obtain copies of the
* Licenses at:
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://opensource.org/licenses/MIT
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Licenses is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the Licenses for the specific language governing permissions and
* limitations under the Licenses. */
#![deny(missing_docs)]
#![allow(unknown_lints)]
#![warn(rust_2018_idioms)]
//! # FFI Support
//!
//! This crate implements a support library to simplify implementing the patterns that the
//! `mozilla/application-services` repository uses for its "Rust Component" FFI libraries.
//!
//! It is *strongly encouraged* that anybody writing FFI code in this repository read this
//! documentation before doing so, as it is a subtle, difficult, and error prone process.
//!
//! ## Terminology
//!
//! For each library, there are currently three parts we're concerned with. There's no clear correct
//! name for these, so this documentation will attempt to use the following terminology:
//!
//! - **Rust Component**: A Rust crate which does not expose an FFI directly, but may be
//! wrapped by one that does. These have a `crate-type` in their Cargo.toml (see
//! https://doc.rust-lang.org/reference/linkage.html) of `lib`, and not `staticlib` or `cdylib`
//! (Note that `lib` is the default if `crate-type` is not specified). Examples include the
//! `fxa-client`, and `logins` crates.
//!
//! - **FFI Component**: A wrapper crate that takes a Rust component, and exposes an FFI from it.
//! These typically have `ffi` in the name, and have `crate-type = ["lib", "staticlib", "cdylib"]`
//! in their Cargo.toml. For example, the `fxa-client/ffi` and `logins/ffi` crates (note:
//! paths are subject to change). When built, these produce a native library that is consumed by
//! the "FFI Consumer".
//!
//! - **FFI Consumer**: This is a low level library, typically implemented in Kotlin (for Android)
//! or Swift (for iOS), that exposes a memory-safe wrapper around the memory-unsafe C API produced
//! by the FFI component. It's expected that the maintainers of the FFI Component and FFI Consumer
//! be the same (or at least, the author of the consumer should be completely comfortable with the
//! API exposed by, and code in the FFI component), since the code in these is extremely tightly
//! coupled, and very easy to get wrong.
//!
//! Note that while there are three parts, there may be more than three libraries relevant here, for
//! example there may be more than one FFI consumer (one for Android, one for iOS).
//!
//! ## Usage
//!
//! This library will typically be used in both the Rust component, and the FFI component, however
//! it frequently will be an optional dependency in the Rust component that's only available when a
//! feature flag (which the FFI component will always require) is used.
//!
//! The reason it's required inside the Rust component (and not solely in the FFI component, which
//! would be nice), is so that types provided by that crate may implement the traits provided by
//! this crate (this is because Rust does not allow crate `C` to implement a trait defined in crate
//! `A` for a type defined in crate `B`).
//!
//! In general, examples should be provided for the most important types and functions
//! ([`call_with_result`], [`IntoFfi`],
//! [`ExternError`], etc), but you should also look at the code of
//! consumers of this library.
//!
//! ### Usage in the Rust Component
//!
//! Inside the Rust component, you will implement:
//!
//! 1. [`IntoFfi`] for all types defined in that crate that you want to return
//! over the FFI. For most common cases, the [`implement_into_ffi_by_json!`] and
//! [`implement_into_ffi_by_protobuf!`] macros will do the job here, however you
//! can see that trait's documentation for discussion and examples of
//! implementing it manually.
//!
//! 2. Conversion to [`ExternError`] for the error type(s) exposed by that
//! rust component, that is, `impl From<MyError> for ExternError`.
//!
//! ### Usage in the FFI Component
//!
//! Inside the FFI component, you will use this library in a few ways:
//!
//! 1. Destructors will be exposed for each type that had [`implement_into_ffi_by_pointer!`] called
//! on it (using [`define_box_destructor!`]), and a destructor for strings should be exposed as
//! well, using [`define_string_destructor`]
//!
//! 2. The body of every / nearly every FFI function will be wrapped in either a
//! [`call_with_result`] or [`call_with_output`].
//!
//! This is required because if we `panic!` (e.g. from an `assert!`, `unwrap()`, `expect()`, from
//! indexing past the end of an array, etc) across the FFI boundary, the behavior is undefined
//!    and in practice very weird things tend to happen (the panic isn't caught by the caller,
//!    since they don't have the same exception behavior as us).
//!
//! If you don't think your program (or possibly just certain calls) can handle panics, you may
//! also use the versions of these functions in the [`abort_on_panic`] module, which
//! do as their name suggest.
//!
//! Additionally, C strings that are passed in as arguments may be represented using [`FfiStr`],
//! which contains several helpful inherent methods for extracting their data.
//!
use std::{panic, thread};
mod error;
mod ffistr;
pub mod handle_map;
mod into_ffi;
#[macro_use]
mod macros;
mod string;
pub use crate::error::*;
pub use crate::ffistr::FfiStr;
pub use crate::into_ffi::*;
pub use crate::macros::*;
pub use crate::string::*;
// We export most of the types from this, but some constants
// (MAX_CAPACITY) don't make sense at the top level.
pub use crate::handle_map::{ConcurrentHandleMap, Handle, HandleError, HandleMap};
/// Call a callback that returns a `Result<T, E>` while:
///
/// - Catching panics, and reporting them to C via [`ExternError`].
/// - Converting `T` to a C-compatible type using [`IntoFfi`].
/// - Converting `E` to a C-compatible error via `Into<ExternError>`.
///
/// This (or [`call_with_output`]) should be used in the majority of the FFI functions, see the crate
/// top-level docs for more info.
///
/// If your function doesn't produce an error, you may use [`call_with_output`] instead, which
/// doesn't require you return a Result.
///
/// ## Example
///
/// A few points about the following example:
///
/// - We need to mark it as `#[no_mangle] pub extern "C"`.
///
/// - We prefix it with a unique name for the library (e.g. `mylib_`). Foreign functions are not
/// namespaced, and symbol collisions can cause a large number of problems and subtle bugs,
/// including memory safety issues in some cases.
///
/// ```rust,no_run
/// # use ffi_support::{ExternError, ErrorCode, FfiStr};
/// # use std::os::raw::c_char;
///
/// # #[derive(Debug)]
/// # struct BadEmptyString;
/// # impl From<BadEmptyString> for ExternError {
/// # fn from(e: BadEmptyString) -> Self {
/// # ExternError::new_error(ErrorCode::new(1), "Bad empty string")
/// # }
/// # }
///
/// #[no_mangle]
/// pub extern "C" fn mylib_print_string(
/// // Strings come in as an `FfiStr`, which is a wrapper around a null terminated C string.
/// thing_to_print: FfiStr<'_>,
/// // Note that taking `&mut T` and `&T` is both allowed and encouraged, so long as `T: Sized`,
/// // (e.g. it can't be a trait object, `&[T]`, a `&str`, etc). Also note that `Option<&T>` and
/// // `Option<&mut T>` are also allowed, if you expect the caller to sometimes pass in null, but
///     // that's the only case where it's currently valid to use `Option` in an argument list like this).
/// error: &mut ExternError
/// ) {
///     // You should try to do as little as possible outside the call_with_result,
/// // to avoid a case where a panic occurs.
/// ffi_support::call_with_result(error, || {
/// let s = thing_to_print.as_str();
/// if s.is_empty() {
/// // This is a silly example!
/// return Err(BadEmptyString);
/// }
/// println!("{}", s);
/// Ok(())
/// })
/// }
/// ```
pub fn call_with_result<R, E, F>(out_error: &mut ExternError, callback: F) -> R::Value
where
F: panic::UnwindSafe + FnOnce() -> Result<R, E>,
E: Into<ExternError>,
R: IntoFfi,
{
call_with_result_impl(out_error, callback)
}
/// Call a callback that returns a `T` while:
///
/// - Catching panics, and reporting them to C via [`ExternError`]
/// - Converting `T` to a C-compatible type using [`IntoFfi`]
///
/// Note that you still need to provide an [`ExternError`] to this function, to report panics.
///
/// See [`call_with_result`] if you'd like to return a `Result<T, E>` (Note: `E` must
/// be convertible to [`ExternError`]).
///
/// This (or [`call_with_result`]) should be used in the majority of the FFI functions, see
/// the crate top-level docs for more info.
pub fn call_with_output<R, F>(out_error: &mut ExternError, callback: F) -> R::Value
where
F: panic::UnwindSafe + FnOnce() -> R,
R: IntoFfi,
{
// We need something that's `Into<ExternError>`, even though we never return it, so just use
// `ExternError` itself.
call_with_result(out_error, || -> Result<_, ExternError> { Ok(callback()) })
}
fn call_with_result_impl<R, E, F>(out_error: &mut ExternError, callback: F) -> R::Value
where
F: panic::UnwindSafe + FnOnce() -> Result<R, E>,
E: Into<ExternError>,
R: IntoFfi,
{
*out_error = ExternError::success();
let res: thread::Result<(ExternError, R::Value)> = panic::catch_unwind(|| {
init_panic_handling_once();
match callback() {
Ok(v) => (ExternError::default(), v.into_ffi_value()),
Err(e) => (e.into(), R::ffi_default()),
}
});
match res {
Ok((err, o)) => {
*out_error = err;
o
}
Err(e) => {
*out_error = e.into();
R::ffi_default()
}
}
}
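The shape of this wrapper can be sketched with plain `std`, using an `i32` error code in place of `ExternError` and `i32` as the `IntoFfi` value. All names here are illustrative:

```rust
use std::panic;

// Illustrative sketch of the catch_unwind pattern above: the out-parameter
// carries the error, and the FFI default (0 here) is returned on failure.
fn call_catching<F>(out_err: &mut i32, callback: F) -> i32
where
    F: panic::UnwindSafe + FnOnce() -> Result<i32, i32>,
{
    match panic::catch_unwind(callback) {
        Ok(Ok(v)) => { *out_err = 0; v }
        Ok(Err(e)) => { *out_err = e; 0 }
        Err(_panic) => { *out_err = -1; 0 }
    }
}

fn main() {
    let mut err = 0;
    assert_eq!(call_catching(&mut err, || Ok(42)), 42);
    assert_eq!(err, 0);
    assert_eq!(call_catching(&mut err, || Err(7)), 0);
    assert_eq!(err, 7);
    // Suppress the default panic message for the panicking case.
    panic::set_hook(Box::new(|_| {}));
    assert_eq!(call_catching(&mut err, || panic!("boom")), 0);
    assert_eq!(err, -1);
    println!("ok");
}
```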
/// This module exists just to expose a variant of [`call_with_result`] and [`call_with_output`]
/// that aborts, instead of unwinding, on panic.
pub mod abort_on_panic {
use super::*;
// Struct that exists to automatically process::abort if we don't call
// `std::mem::forget()` on it. This can have substantial performance
// benefits over calling `std::panic::catch_unwind` and aborting if a panic
// was caught, in addition to not requiring AssertUnwindSafe (for example).
struct AbortOnDrop;
impl Drop for AbortOnDrop {
fn drop(&mut self) {
std::process::abort();
}
}
/// A helper function useful for cases where you'd like to abort on panic,
/// but aren't in a position where you'd like to return an FFI-compatible
/// type.
#[inline]
pub fn with_abort_on_panic<R, F>(callback: F) -> R
where
F: FnOnce() -> R,
{
let aborter = AbortOnDrop;
let res = callback();
std::mem::forget(aborter);
res
}
/// Same as the root `call_with_result`, but aborts on panic instead of unwinding. See the
/// `call_with_result` documentation for more.
pub fn call_with_result<R, E, F>(out_error: &mut ExternError, callback: F) -> R::Value
where
F: FnOnce() -> Result<R, E>,
E: Into<ExternError>,
R: IntoFfi,
{
with_abort_on_panic(|| match callback() {
Ok(v) => {
*out_error = ExternError::default();
v.into_ffi_value()
}
Err(e) => {
*out_error = e.into();
R::ffi_default()
}
})
}
/// Same as the root `call_with_output`, but aborts on panic instead of unwinding. As a result,
/// it doesn't require a [`ExternError`] out argument. See the `call_with_output` documentation
/// for more info.
pub fn call_with_output<R, F>(callback: F) -> R::Value
where
F: FnOnce() -> R,
R: IntoFfi,
{
with_abort_on_panic(callback).into_ffi_value()
}
}
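The forget-a-guard trick that `with_abort_on_panic` relies on can be demonstrated safely by substituting a flag for `process::abort()` so the sketch can actually run; everything here is illustrative:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static TRIPPED: AtomicBool = AtomicBool::new(false);

// Stand-in for AbortOnDrop: the real guard calls std::process::abort()
// in Drop; here we only record that Drop ran.
struct GuardOnDrop;
impl Drop for GuardOnDrop {
    fn drop(&mut self) {
        TRIPPED.store(true, Ordering::SeqCst);
    }
}

fn with_guard<R>(callback: impl FnOnce() -> R) -> R {
    let guard = GuardOnDrop;
    let res = callback();
    // Normal path: the guard is forgotten, so Drop (the abort) never runs.
    std::mem::forget(guard);
    res
}

fn main() {
    assert_eq!(with_guard(|| 7), 7);
    assert!(!TRIPPED.load(Ordering::SeqCst));
    println!("ok");
}
```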
#[cfg(feature = "log_panics")]
fn init_panic_handling_once() {
use std::sync::{Once, ONCE_INIT};
static INIT_BACKTRACES: Once = ONCE_INIT;
INIT_BACKTRACES.call_once(move || {
#[cfg(all(feature = "log_backtraces", not(target_os = "android")))]
{
// Turn on backtraces for failure, if it's still listening.
std::env::set_var("RUST_BACKTRACE", "1");
}
// Turn on a panic hook which logs both backtraces and the panic
// "Location" (file/line). We do both in case the binary has been
// stripped of debug symbols.
std::panic::set_hook(Box::new(move |panic_info| {
let (file, line) = if let Some(loc) = panic_info.location() {
(loc.file(), loc.line())
} else {
// Apparently this won't happen but rust has reserved the
// ability to start returning None from location in some cases
// in the future.
("<unknown>", 0)
};
log::error!("### Rust `panic!` hit at file '{}', line {}", file, line);
// We could use failure for failure::Backtrace (and we enable RUST_BACKTRACE
// to opt-in to backtraces on failure errors if possible), however:
// - `failure` only checks the RUST_BACKTRACE variable once, and we could have errors
// before this. So we just use the backtrace crate directly.
#[cfg(all(feature = "log_backtraces", not(target_os = "android")))]
{
log::error!(" Complete stack trace:\n{:?}", backtrace::Backtrace::new());
}
}));
});
}
#[cfg(not(feature = "log_panics"))]
fn init_panic_handling_once() {}
/// ByteBuffer is a struct that represents an array of bytes to be sent over the FFI boundaries.
/// There are several cases when you might want to use this, but the primary one for us
/// is for returning protobuf-encoded data to Swift and Java. The type is currently rather
/// limited (implementing almost no functionality), however in the future it may be
/// expanded.
///
/// ## Caveats
///
/// Note that the order of the fields is `len` (an i64) then `data` (a `*mut u8`), getting
/// this wrong on the other side of the FFI will cause memory corruption and crashes.
/// `i64` is used for the length instead of `u64` and `usize` because JNA has interop
/// issues with both these types.
///
/// ByteBuffer does not implement Drop. This is intentional. Memory passed into it will
/// be leaked if it is not explicitly destroyed by calling [`ByteBuffer::destroy`]. This
/// is because in the future, we may allow its use for passing data into Rust code.
/// ByteBuffer assuming ownership of the data would make this a problem.
///
/// Note that calling `destroy` manually is not typically needed or recommended,
/// and instead you should use [`define_bytebuffer_destructor!`].
///
/// ## Layout/fields
///
/// This struct's fields are not `pub` (mostly so that we can soundly implement `Send`, but also so
/// that we can verify rust users are constructing them appropriately), but the fields, their types, and
/// their order are *very much* a part of the public API of this type. Consumers on the other side
/// of the FFI will need to know its layout.
///
/// If this were a C struct, it would look like
///
/// ```c,no_run
/// struct ByteBuffer {
/// int64_t len;
/// uint8_t *data; // note: nullable
/// };
/// ```
///
/// In rust, there are two fields, in this order: `len: i64`, and `data: *mut u8`.
///
/// ### Description of fields
///
/// `data` is a pointer to an array of `len` bytes. Note that `data` can be a null pointer and therefore
/// should be checked.
///
/// The bytes array is allocated on the heap and must be freed on it as well. Critically, if there
/// are multiple rust packages being used in the same application, it *must be freed on the
/// same heap that allocated it*, or you will corrupt both heaps.
///
/// Typically, this object is managed on the other side of the FFI (on the "FFI consumer"), which
/// means you must expose a function to release the resources of `data` which can be done easily
/// using the [`define_bytebuffer_destructor!`] macro provided by this crate.
#[repr(C)]
pub struct ByteBuffer {
len: i64,
data: *mut u8,
}
impl From<Vec<u8>> for ByteBuffer {
#[inline]
fn from(bytes: Vec<u8>) -> Self {
Self::from_vec(bytes)
}
}
impl ByteBuffer {
/// Creates a `ByteBuffer` of the requested size, zero-filled.
///
/// The contents of the vector will not be dropped. Instead, `destroy` must
/// be called later to reclaim this memory or it will be leaked.
///
/// ## Caveats
///
/// This will panic if the buffer length (`usize`) cannot fit into an `i64`.
#[inline]
pub fn new_with_size(size: usize) -> Self {
let mut buf = vec![];
buf.reserve_exact(size);
buf.resize(size, 0);
ByteBuffer::from_vec(buf)
}
/// Creates a `ByteBuffer` instance from a `Vec` instance.
///
/// The contents of the vector will not be dropped. Instead, `destroy` must
/// be called later to reclaim this memory or it will be leaked.
///
/// ## Caveats
///
/// This will panic if the buffer length (`usize`) cannot fit into an `i64`.
#[inline]
pub fn from_vec(bytes: Vec<u8>) -> Self {
use std::convert::TryFrom;
let mut buf = bytes.into_boxed_slice();
let data = buf.as_mut_ptr();
let len = i64::try_from(buf.len()).expect("buffer length cannot fit into an i64.");
std::mem::forget(buf);
Self { data, len }
}
/// Convert this `ByteBuffer` into a `Vec<u8>`. This is the only way
/// to access the data from inside the buffer.
#[inline]
pub fn into_vec(self) -> Vec<u8> {
if self.data.is_null() {
vec![]
} else {
// This is correct because we convert to a Box<[u8]> first, which is
// a design constraint of RawVec.
unsafe { Vec::from_raw_parts(self.data, self.len as usize, self.len as usize) }
}
}
/// Reclaim memory stored in this ByteBuffer.
///
/// You typically should not call this manually, and instead expose a
/// function that does so via [`define_bytebuffer_destructor!`].
///
/// ## Caveats
///
/// This is safe so long as the buffer is empty, or the data was allocated
/// by Rust code, e.g. this is a ByteBuffer created by
/// `ByteBuffer::from_vec` or `Default::default`.
///
/// If the ByteBuffer were passed into Rust (which you shouldn't do, since
/// theres no way to see the data in Rust currently), then calling `destroy`
/// is fundamentally broken.
#[inline]
pub fn destroy(self) {
drop(self.into_vec())
}
}
impl Default for ByteBuffer {
#[inline]
fn default() -> Self {
Self {
len: 0 as i64,
data: std::ptr::null_mut(),
}
}
}
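
The round trip above can be exercised in a standalone sketch. This mirrors the `from_vec`/`into_vec` pair with a local stand-in struct; it is illustrative only (the real type in `ffi-support` is `#[repr(C)]` so it can be passed by value over the FFI, which is not reproduced here):

```rust
// Local stand-in mirroring the ByteBuffer above (illustrative, not the real type).
use std::convert::TryFrom;

pub struct ByteBuffer {
    len: i64,
    data: *mut u8,
}

impl ByteBuffer {
    pub fn from_vec(bytes: Vec<u8>) -> Self {
        let mut buf = bytes.into_boxed_slice();
        let data = buf.as_mut_ptr();
        let len = i64::try_from(buf.len()).expect("buffer length cannot fit into an i64.");
        std::mem::forget(buf); // ownership now lives in the raw pointer
        Self { data, len }
    }

    pub fn into_vec(self) -> Vec<u8> {
        if self.data.is_null() {
            vec![]
        } else {
            // Sound because the pointer came from a Box<[u8]>, so len == capacity.
            unsafe { Vec::from_raw_parts(self.data, self.len as usize, self.len as usize) }
        }
    }
}

fn main() {
    let buf = ByteBuffer::from_vec(vec![1, 2, 3]);
    let round_tripped = buf.into_vec(); // reclaims the memory; nothing leaks
    assert_eq!(round_tripped, vec![1, 2, 3]);
}
```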

341
third_party/rust/ffi-support/src/macros.rs vendored

@ -1,341 +0,0 @@
/* Copyright 2018-2019 Mozilla Foundation
*
* Licensed under the Apache License (Version 2.0), or the MIT license,
* (the "Licenses") at your option. You may not use this file except in
* compliance with one of the Licenses. You may obtain copies of the
* Licenses at:
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://opensource.org/licenses/MIT
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Licenses is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the Licenses for the specific language governing permissions and
* limitations under the Licenses. */
/// Implements [`IntoFfi`] for the provided types (more than one may be passed in) by allocating
/// `$T` on the heap as an opaque pointer.
///
/// This is typically going to be used from the "Rust component", and not the "FFI component" (see
/// the top level crate documentation for more information), however you will still need to
/// implement a destructor in the FFI component using [`define_box_destructor!`].
///
/// In general, this is only safe to do for `Send` types (even that is dodgy, but it's often
/// necessary to keep the locking on the other side of the FFI, so requiring `Sync` would be too
/// harsh), so we enforce this in the macro. (You're still free to implement this manually if this
/// restriction is too harsh for your use case and you're certain you know what you're doing.)
#[macro_export]
macro_rules! implement_into_ffi_by_pointer {
($($T:ty),* $(,)*) => {$(
unsafe impl $crate::IntoFfi for $T where $T: Send {
type Value = *mut $T;
#[inline]
fn ffi_default() -> *mut $T {
std::ptr::null_mut()
}
#[inline]
fn into_ffi_value(self) -> *mut $T {
Box::into_raw(Box::new(self))
}
}
)*}
}
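
Expanded by hand for one hypothetical type, the generated impl looks roughly like this; `IntoFfi` is replaced by a simplified local trait (the real one in the crate is `unsafe` and carries more requirements), and `Counter` is a made-up example type:

```rust
// Hand-expansion of implement_into_ffi_by_pointer!(Counter), with a simplified
// local stand-in for the crate's IntoFfi trait (the real trait is unsafe).
trait IntoFfi {
    type Value;
    fn ffi_default() -> Self::Value;
    fn into_ffi_value(self) -> Self::Value;
}

struct Counter {
    value: i32,
}

impl IntoFfi for Counter {
    type Value = *mut Counter;

    fn ffi_default() -> *mut Counter {
        std::ptr::null_mut() // the "error" value handed back on failure
    }

    fn into_ffi_value(self) -> *mut Counter {
        Box::into_raw(Box::new(self)) // heap-allocate, hand out an opaque pointer
    }
}

fn main() {
    let ptr = Counter { value: 7 }.into_ffi_value();
    assert!(!ptr.is_null());
    // A destructor defined with define_box_destructor! would do this reclaim:
    let counter = unsafe { Box::from_raw(ptr) };
    assert_eq!(counter.value, 7);
}
```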
/// Implements [`IntoFfi`] for the provided types (more than one may be passed
/// in) by converting the type to a JSON string.
///
/// Additionally, most of the time we recommend using this crate's protobuf
/// support instead of JSON.
///
/// This is typically going to be used from the "Rust component", and not the
/// "FFI component" (see the top level crate documentation for more
/// information).
///
/// Note: Each type passed in must implement or derive `serde::Serialize`.
///
/// Note: for this to work, the crate it's called in must depend on `serde` and
/// `serde_json`.
///
/// ## Panics
///
/// The [`IntoFfi`] implementation this macro generates may panic in the
/// following cases:
///
/// - You've passed a type that contains a Map that has non-string keys (which
/// can't be represented in JSON).
///
/// - You've passed a type which has a custom serializer, and the custom
/// serializer failed.
///
/// These cases are both rare enough that this still seems fine for the majority
/// of uses.
#[macro_export]
macro_rules! implement_into_ffi_by_json {
($($T:ty),* $(,)*) => {$(
unsafe impl $crate::IntoFfi for $T where $T: serde::Serialize {
type Value = *mut std::os::raw::c_char;
#[inline]
fn ffi_default() -> *mut std::os::raw::c_char {
std::ptr::null_mut()
}
#[inline]
fn into_ffi_value(self) -> *mut std::os::raw::c_char {
// This panic is inside our catch_panic, so it should be fine.
// We've also documented the case where the IntoFfi impl that
// calls this panics, and it's rare enough that it shouldn't
// matter that if it happens we return an ExternError
// representing a panic instead of one of some other type
// (especially given that the application isn't likely to be
// able to meaningfully handle JSON serialization failure).
let as_string = serde_json::to_string(&self).unwrap();
$crate::rust_string_to_c(as_string)
}
}
)*}
}
/// Implements [`IntoFfi`] for the provided types (more than one may be passed in) implementing
/// `prost::Message` (protobuf auto-generated type) by converting the type to a [`ByteBuffer`].
/// This [`ByteBuffer`] should later be passed by value.
///
/// Note: for this to work, the crate it's called in must depend on `prost`.
///
/// Note: Each type passed in must implement or derive `prost::Message`.
#[macro_export]
macro_rules! implement_into_ffi_by_protobuf {
($($FFIType:ty),* $(,)*) => {$(
unsafe impl $crate::IntoFfi for $FFIType where $FFIType: prost::Message {
type Value = $crate::ByteBuffer;
#[inline]
fn ffi_default() -> Self::Value {
Default::default()
}
#[inline]
fn into_ffi_value(self) -> Self::Value {
use prost::Message;
let mut bytes = Vec::with_capacity(self.encoded_len());
// Unwrap is safe, since we have reserved sufficient capacity in
// the vector.
self.encode(&mut bytes).unwrap();
bytes.into()
}
}
)*}
}
/// Implement IntoFfi for a type by converting through another type.
///
/// The `$MidTy` argument must implement `From<$SrcTy>` and
/// [`IntoFfi`].
///
/// This is provided (even though it's trivial) because it is always safe (well,
/// so long as `$MidTy`'s [`IntoFfi`] implementation is correct), but would
/// otherwise require use of `unsafe` to implement.
#[macro_export]
macro_rules! implement_into_ffi_by_delegation {
($SrcTy:ty, $MidTy:ty) => {
unsafe impl $crate::IntoFfi for $SrcTy
where
$MidTy: From<$SrcTy> + $crate::IntoFfi,
{
// The <$MidTy as SomeTrait>::method is required even when it would
// be ambiguous due to some obscure details of macro syntax.
type Value = <$MidTy as $crate::IntoFfi>::Value;
#[inline]
fn ffi_default() -> Self::Value {
<$MidTy as $crate::IntoFfi>::ffi_default()
}
#[inline]
fn into_ffi_value(self) -> Self::Value {
use $crate::IntoFfi;
<$MidTy as From<$SrcTy>>::from(self).into_ffi_value()
}
}
};
}
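
A hand-expansion of the delegation pattern, using a simplified local stand-in trait and two made-up example types (the real `IntoFfi` is `unsafe` and has an `ffi_default` as well):

```rust
// Sketch of delegation: SrcTy converts through MidTy, which already knows how
// to cross the FFI. Names here are illustrative, not from the real crate.
trait IntoFfi {
    type Value;
    fn into_ffi_value(self) -> Self::Value;
}

struct Celsius(f64);
struct Fahrenheit(f64);

impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

impl IntoFfi for Fahrenheit {
    type Value = f64;
    fn into_ffi_value(self) -> f64 {
        self.0
    }
}

// What implement_into_ffi_by_delegation!(Celsius, Fahrenheit) boils down to:
impl IntoFfi for Celsius {
    type Value = <Fahrenheit as IntoFfi>::Value;
    fn into_ffi_value(self) -> f64 {
        <Fahrenheit as From<Celsius>>::from(self).into_ffi_value()
    }
}

fn main() {
    assert_eq!(Celsius(100.0).into_ffi_value(), 212.0);
}
```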
/// For a number of reasons (name collisions are a big one, but it also wouldn't work on all
/// platforms), we cannot export `extern "C"` functions from this library. However, it's pretty
/// common to want to free strings allocated by rust, so many libraries will need this, so we
/// provide it as a macro.
///
/// It simply expands to a `#[no_mangle] pub unsafe extern "C" fn` which wraps this crate's
/// [`destroy_c_string`] function.
///
/// ## Caveats
///
/// If you're using multiple separately compiled rust libraries in your application, it's critical
/// that you are careful to only ever free strings allocated by a Rust library using the same rust
/// library. Passing them to a different Rust library's string destructor will cause you to corrupt
/// multiple heaps.
///
/// Additionally, be sure that all strings you pass to this were actually allocated by Rust. It's a
/// common issue for JNA code to transparently convert `Pointer`s into `String`s behind the
/// scenes, which is quite risky here. (To avoid this in JNA, only use `String` for passing
/// read-only strings into Rust, i.e. for arguments of type `*const c_char`. All other uses should
/// use `Pointer` and `getString()`.)
///
/// Finally, to avoid name collisions, it is strongly recommended that you provide a name for this
/// function unique to your library.
///
/// ## Example
///
/// ```rust
/// # use ffi_support::define_string_destructor;
/// define_string_destructor!(mylib_destroy_string);
/// ```
#[macro_export]
macro_rules! define_string_destructor {
($mylib_destroy_string:ident) => {
#[doc = "Public destructor for strings managed by the other side of the FFI."]
#[no_mangle]
pub unsafe extern "C" fn $mylib_destroy_string(s: *mut std::os::raw::c_char) {
// Note: This should never happen, but in the case of a bug aborting
// here is better than the badness that happens if we unwind across
// the FFI boundary.
$crate::abort_on_panic::with_abort_on_panic(|| {
if !s.is_null() {
$crate::destroy_c_string(s)
}
});
}
};
}
/// Define a (public) destructor for a type that was allocated by `Box::into_raw(Box::new(value))`
/// (e.g. a pointer which is probably opaque).
///
/// ## Caveats
///
/// This can go wrong in a ridiculous number of ways, and we can't really prevent any of them. But
/// essentially, the caller (on the other side of the FFI) needs to be extremely careful to ensure
/// that it stops using the pointer after it's freed.
///
/// Also, to avoid name collisions, it is strongly recommended that you provide a name for this
/// function unique to your library. (This is true for all functions you expose).
///
/// ## Example
///
/// ```rust
/// # use ffi_support::define_box_destructor;
/// struct CoolType(Vec<i32>);
///
/// define_box_destructor!(CoolType, mylib_destroy_cooltype);
/// ```
#[macro_export]
macro_rules! define_box_destructor {
($T:ty, $destructor_name:ident) => {
#[no_mangle]
pub unsafe extern "C" fn $destructor_name(v: *mut $T) {
// We should consider passing an error parameter in here rather than
// aborting, but at the moment the only case where we do this
// (interrupt handles) should never panic in Drop, so it's probably
// fine.
$crate::abort_on_panic::with_abort_on_panic(|| {
if !v.is_null() {
drop(Box::from_raw(v))
}
});
}
};
}
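
The shape of the function this macro generates can be sketched in isolation, minus the `#[no_mangle] pub extern "C"` attributes and the abort-on-panic guard; `CoolType` and the destructor name are illustrative:

```rust
// Minimal sketch of the pattern define_box_destructor! wraps: a value handed
// across the FFI via Box::into_raw, reclaimed exactly once by the destructor.
struct CoolType(Vec<i32>);

unsafe fn mylib_destroy_cooltype(v: *mut CoolType) {
    if !v.is_null() {
        drop(Box::from_raw(v)); // retake ownership so Drop runs exactly once
    }
}

fn main() {
    let raw = Box::into_raw(Box::new(CoolType(vec![1, 2, 3])));
    assert_eq!(unsafe { &*raw }.0, vec![1, 2, 3]); // still alive until destroyed
    unsafe { mylib_destroy_cooltype(raw) }; // frees the allocation
    unsafe { mylib_destroy_cooltype(std::ptr::null_mut()) }; // null is a no-op
}
```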
/// Define a (public) destructor for the ByteBuffer type.
///
/// ## Caveats
///
/// If you're using multiple separately compiled rust libraries in your application, it's critical
/// that you are careful to only ever free `ByteBuffer` instances allocated by a Rust library using
/// the same Rust library. Passing them to a different Rust library's `ByteBuffer` destructor will
/// cause you to corrupt multiple heaps. For this reason, one common `ByteBuffer` destructor is
/// defined per Rust library.
///
/// Also, to avoid name collisions, it is strongly recommended that you provide a name for this
/// function unique to your library. (This is true for all functions you expose).
///
/// ## Example
///
/// ```rust
/// # use ffi_support::define_bytebuffer_destructor;
/// define_bytebuffer_destructor!(mylib_destroy_bytebuffer);
/// ```
#[macro_export]
macro_rules! define_bytebuffer_destructor {
($destructor_name:ident) => {
#[no_mangle]
pub extern "C" fn $destructor_name(v: $crate::ByteBuffer) {
// Note: This should never happen, but in the case of a bug aborting
// here is better than the badness that happens if we unwind across
// the FFI boundary.
$crate::abort_on_panic::with_abort_on_panic(|| v.destroy())
}
};
}
/// Define a (public) destructor for a type that lives inside a lazy_static
/// [`ConcurrentHandleMap`].
///
/// Note that this is actually totally safe, unlike the other
/// `define_blah_destructor` macros.
///
/// A critical difference, however, is that this dtor takes an `err` out
/// parameter to indicate failure. This difference is why the name is different
/// as well (deleter vs destructor).
///
/// ## Example
///
/// ```rust
/// # use lazy_static::lazy_static;
/// # use ffi_support::{ConcurrentHandleMap, define_handle_map_deleter};
/// struct Thing(Vec<i32>);
/// // Somewhere...
/// lazy_static! {
/// static ref THING_HANDLES: ConcurrentHandleMap<Thing> = ConcurrentHandleMap::new();
/// }
/// define_handle_map_deleter!(THING_HANDLES, mylib_destroy_thing);
/// ```
#[macro_export]
macro_rules! define_handle_map_deleter {
($HANDLE_MAP_NAME:ident, $destructor_name:ident) => {
#[no_mangle]
pub extern "C" fn $destructor_name(v: u64, err: &mut $crate::ExternError) {
$crate::call_with_result(err, || {
// Force type errors here.
let map: &$crate::ConcurrentHandleMap<_> = &*$HANDLE_MAP_NAME;
map.delete_u64(v)
})
}
};
}
/// Force a compile error if the condition is not met. Requires a unique name
/// for the assertion for... reasons. This is included mainly because it's a
/// common desire for FFI code, but not for other sorts of code.
///
/// # Examples
///
/// Failing example:
///
/// ```compile_fail
/// ffi_support::static_assert!(THIS_SHOULD_FAIL, false);
/// ```
///
/// Passing example:
///
/// ```
/// ffi_support::static_assert!(THIS_SHOULD_PASS, true);
/// ```
#[macro_export]
macro_rules! static_assert {
($ASSERT_NAME:ident, $test:expr) => {
#[allow(dead_code, nonstandard_style)]
const $ASSERT_NAME: [u8; 0 - (!$test as bool as usize)] =
[0u8; 0 - (!$test as bool as usize)];
};
}
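
The underlying trick is visible without the macro: `!cond as usize` is 0 when the condition holds and 1 when it does not, so a failing assertion makes the array length underflow at compile time. A passing instance, assuming only `std` (the constant name is made up):

```rust
// Length is 0 - 0 when the condition is true; 0 - 1 (a compile-time
// underflow, hence a build error) when it is false.
const USIZE_IS_AT_LEAST_32_BIT: [u8; 0 - (!(std::mem::size_of::<usize>() >= 4) as usize)] = [];

fn main() {
    // If this compiled at all, the assertion above held.
    assert_eq!(USIZE_IS_AT_LEAST_32_BIT.len(), 0);
}
```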

162
third_party/rust/ffi-support/src/string.rs vendored

@ -1,162 +0,0 @@
/* Copyright 2018-2019 Mozilla Foundation
*
* Licensed under the Apache License (Version 2.0), or the MIT license,
* (the "Licenses") at your option. You may not use this file except in
* compliance with one of the Licenses. You may obtain copies of the
* Licenses at:
*
* http://www.apache.org/licenses/LICENSE-2.0
* http://opensource.org/licenses/MIT
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the Licenses is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the Licenses for the specific language governing permissions and
* limitations under the Licenses. */
use crate::FfiStr;
use std::ffi::CString;
use std::os::raw::c_char;
use std::ptr;
/// Convert a rust string into a NUL-terminated utf-8 string suitable for passing to C, or to things
/// ABI-compatible with C.
///
/// Important: This string must eventually be freed. You may either do that using the
/// [`destroy_c_string`] method (or, if you must, by dropping the underlying [`std::ffi::CString`]
/// after recovering it via [`std::ffi::CString::from_raw`]).
///
/// It's common to want to allow the consumer (e.g. on the "C" side of the FFI) to free this
/// memory, and the macro [`define_string_destructor!`] may be used to do so.
///
/// ## Panics
///
/// This function may panic if the argument has an interior null byte. This is fairly rare, but
/// is possible in theory.
#[inline]
pub fn rust_string_to_c(rust_string: impl Into<String>) -> *mut c_char {
CString::new(rust_string.into())
.expect("Error: Rust string contained an interior null byte.")
.into_raw()
}
/// Variant of [`rust_string_to_c`] which takes an Option, and returns null for None.
#[inline]
pub fn opt_rust_string_to_c(opt_rust_string: Option<impl Into<String>>) -> *mut c_char {
if let Some(s) = opt_rust_string {
rust_string_to_c(s)
} else {
ptr::null_mut()
}
}
/// Free the memory of a string created by [`rust_string_to_c`] on the rust heap. If `c_string` is
/// null, this is a no-op.
///
/// See the [`define_string_destructor!`] macro which may be used for exposing this function over
/// the FFI.
///
/// ## Safety
///
/// This is inherently unsafe, since we're deallocating memory. Be sure
///
/// - Nobody can use the memory after it's deallocated.
/// - The memory was actually allocated on this heap (and it's not a string from the other side of
/// the FFI which was allocated on e.g. the C heap).
/// - If multiple separate rust libraries are in use (for example, as DLLs) in a single program,
/// you must also make sure that the rust library that allocated the memory is also the one
/// that frees it.
///
/// See documentation for [`define_string_destructor!`], which gives a more complete overview of the
/// potential issues.
#[inline]
pub unsafe fn destroy_c_string(cstring: *mut c_char) {
// we're not guaranteed to be in a place where we can complain about this beyond logging,
// and there's an obvious way to handle it.
if !cstring.is_null() {
drop(CString::from_raw(cstring))
}
}
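
The allocate/free pair above can be reproduced in a self-contained sketch built directly on `std::ffi::CString` (same logic, without the crate's surrounding docs and macros):

```rust
// Self-contained version of the rust_string_to_c / destroy_c_string pair.
use std::ffi::CString;
use std::os::raw::c_char;

fn rust_string_to_c(s: impl Into<String>) -> *mut c_char {
    CString::new(s.into())
        .expect("Rust string contained an interior null byte")
        .into_raw() // caller (the C side) now owns the allocation
}

unsafe fn destroy_c_string(cstring: *mut c_char) {
    if !cstring.is_null() {
        drop(CString::from_raw(cstring)); // retakes ownership, then frees
    }
}

fn main() {
    let ptr = rust_string_to_c("hello");
    // A C caller would read the NUL-terminated bytes here.
    assert_eq!(unsafe { std::ffi::CStr::from_ptr(ptr) }.to_str().unwrap(), "hello");
    unsafe { destroy_c_string(ptr) };
    unsafe { destroy_c_string(std::ptr::null_mut()) }; // null is a no-op
}
```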
/// Convert a null-terminated C string to a rust `str`. This does not take ownership of the string,
/// and you should be careful about the lifetime of the resulting string. Note that strings
/// containing invalid UTF-8 are replaced with the empty string (for many cases, you will want to
/// use [`rust_string_from_c`] instead, which will do a lossy conversion).
///
/// If you actually need an owned rust `String`, you're encouraged to use [`rust_string_from_c`],
/// which, as mentioned, also behaves better in the face of invalid UTF-8.
///
/// ## Safety
///
/// This is unsafe because we read from a raw pointer, which may or may not be valid.
///
/// We also assume `c_string` is a null terminated string, and have no way of knowing if that's
/// actually true. If it's not, we'll read arbitrary memory from the heap until we see a '\0', which
/// can result in an enormous number of problems.
///
/// ## Panics
///
/// Panics if its argument is null; see [`opt_rust_str_from_c`] for a variant that returns None in
/// this case instead.
///
/// Note: This means it's forbidden to call this outside of a `call_with_result` (or something else
/// that uses [`std::panic::catch_unwind`]), as it is UB to panic across the FFI boundary.
#[inline]
#[deprecated(since = "0.3.0", note = "Please use FfiStr::as_str instead")]
pub unsafe fn rust_str_from_c<'a>(c_string: *const c_char) -> &'a str {
FfiStr::from_raw(c_string).as_str()
}
/// Same as [`rust_str_from_c`], but returns None if `c_string` is null instead of asserting.
///
/// ## Safety
///
/// This is unsafe because we read from a raw pointer, which may or may not be valid.
///
/// We also assume `c_string` is a null terminated string, and have no way of knowing if that's
/// actually true. If it's not, we'll read arbitrary memory from the heap until we see a '\0', which
/// can result in an enormous number of problems.
#[inline]
#[deprecated(since = "0.3.0", note = "Please use FfiStr::as_opt_str instead")]
pub unsafe fn opt_rust_str_from_c<'a>(c_string: *const c_char) -> Option<&'a str> {
FfiStr::from_raw(c_string).as_opt_str()
}
/// Convert a null-terminated C string into an owned Rust `String`, replacing invalid UTF-8 with the
/// unicode replacement character.
///
/// ## Safety
///
/// This is unsafe because we dereference a raw pointer, which may or may not be valid.
///
/// We also assume `c_string` is a null terminated string, and have no way of knowing if that's
/// actually true. If it's not, we'll read arbitrary memory from the heap until we see a '\0', which
/// can result in an enormous number of problems.
///
/// ## Panics
///
/// Panics if its argument is null. See also [`opt_rust_string_from_c`], which returns None
/// instead.
///
/// Note: This means it's forbidden to call this outside of a `call_with_result` (or something else
/// that uses `std::panic::catch_unwind`), as it is UB to panic across the FFI boundary.
#[inline]
#[deprecated(since = "0.3.0", note = "Please use FfiStr::into_string instead")]
pub unsafe fn rust_string_from_c(c_string: *const c_char) -> String {
FfiStr::from_raw(c_string).into_string()
}
/// Same as `rust_string_from_c`, but returns None if `c_string` is null instead of asserting.
///
/// ## Safety
///
/// This is unsafe because we dereference a raw pointer, which may or may not be valid.
///
/// We also assume `c_string` is a null terminated string, and have no way of knowing if that's
/// actually true. If it's not, we'll read arbitrary memory from the heap until we see a '\0', which
/// can result in an enormous number of problems.
#[inline]
#[deprecated(since = "0.3.0", note = "Please use FfiStr::into_opt_string instead")]
pub unsafe fn opt_rust_string_from_c(c_string: *const c_char) -> Option<String> {
FfiStr::from_raw(c_string).into_opt_string()
}
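
The behavior these deprecated helpers delegate to can be sketched with `std::ffi::CStr` alone; this is a stand-in for the `FfiStr`-based implementation, not the crate's actual code:

```rust
// Sketch of rust_string_from_c's behavior: borrow the C bytes via CStr and do
// a lossy UTF-8 conversion into an owned String.
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

unsafe fn rust_string_from_c(c_string: *const c_char) -> String {
    assert!(!c_string.is_null(), "null pointer passed to rust_string_from_c");
    // Invalid UTF-8 sequences become U+FFFD replacement characters.
    CStr::from_ptr(c_string).to_string_lossy().into_owned()
}

fn main() {
    let owned = CString::new("héllo").unwrap();
    let back = unsafe { rust_string_from_c(owned.as_ptr()) };
    assert_eq!(back, "héllo");
}
```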

File diff suppressed because one or more lines are too long

1034
third_party/rust/glean-core/Cargo.lock generated vendored

Diff not shown because of its large size

85
third_party/rust/glean-core/Cargo.toml vendored

@ -1,85 +0,0 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies
#
# If you believe there's an error in this file please file an
# issue against the rust-lang/cargo repository. If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)
[package]
edition = "2018"
name = "glean-core"
version = "22.0.0"
authors = ["Jan-Erik Rediger <jrediger@mozilla.com>", "The Glean Team <glean-team@mozilla.com>"]
include = ["README.md", "LICENSE", "src/**/*", "examples/**/*", "tests/**/*", "Cargo.toml"]
description = "A modern Telemetry library"
readme = "README.md"
keywords = ["telemetry"]
license = "MPL-2.0"
repository = "https://github.com/mozilla/glean"
[dependencies.bincode]
version = "1.1.3"
[dependencies.chrono]
version = "0.4.6"
features = ["serde"]
[dependencies.failure]
version = "0.1.5"
[dependencies.ffi-support]
version = "0.3.5"
[dependencies.lazy_static]
version = "1.4.0"
[dependencies.log]
version = "0.4.6"
[dependencies.once_cell]
version = "1.2.0"
[dependencies.regex]
version = "1.3.0"
features = ["std"]
default-features = false
[dependencies.rkv]
version = "0.10.2"
[dependencies.serde]
version = "1.0.102"
features = ["derive"]
[dependencies.serde_json]
version = "1.0.41"
[dependencies.uuid]
version = "0.8.1"
features = ["v4"]
[dev-dependencies.color-backtrace]
version = "0.2.3"
[dev-dependencies.ctor]
version = "0.1.9"
[dev-dependencies.env_logger]
version = "0.7.1"
features = ["termcolor", "atty", "humantime"]
default-features = false
[dev-dependencies.iso8601]
version = "0.3"
[dev-dependencies.tempfile]
version = "3.0.7"
[badges.circle-ci]
branch = "master"
repository = "mozilla/glean"
[badges.maintenance]
status = "actively-developed"

373
third_party/rust/glean-core/LICENSE vendored

@ -1,373 +0,0 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

third_party/rust/glean-core/README.md (vendored)

@@ -1,54 +0,0 @@
# Glean SDK
The `Glean SDK` is a modern approach to a telemetry library and is part of the [Glean project](https://docs.telemetry.mozilla.org/concepts/glean/glean.html).
## `glean-core`
This library provides the core functionality of the Glean SDK, including implementations of all metric types, the ping serializer and the storage layer.
It's used in all platform-specific wrappers.
It's not intended to be used by users directly.
Each supported platform has a specific Glean package with a nicer API.
A nice Rust API will be provided by the [Glean](https://crates.io/crates/glean) crate.
## Documentation
All documentation is available online:
* [The Glean SDK Book][book]
* [API documentation][apidocs]
[book]: https://mozilla.github.io/glean/
[apidocs]: https://mozilla.github.io/glean/docs/glean_core/index.html
## Usage
```rust
use glean_core::{Glean, Configuration, CommonMetricData, metrics::*};
let cfg = Configuration {
data_path: "/tmp/glean".into(),
application_id: "glean.sample.app".into(),
upload_enabled: true,
max_events: None,
};
let mut glean = Glean::new(cfg).unwrap();
let ping = PingType::new("sample", true);
glean.register_ping_type(&ping);
let call_counter: CounterMetric = CounterMetric::new(CommonMetricData {
name: "calls".into(),
category: "local".into(),
send_in_pings: vec!["sample".into()],
..Default::default()
});
call_counter.add(&glean, 1);
glean.send_ping(&ping).unwrap();
```
## License
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/


@@ -1,77 +0,0 @@
use std::env;
use glean_core::metrics::*;
use glean_core::ping::PingMaker;
use glean_core::{CommonMetricData, Glean};
use tempfile::Builder;
fn main() {
env_logger::init();
color_backtrace::install();
let mut args = env::args().skip(1);
let data_path = if let Some(path) = args.next() {
path
} else {
let root = Builder::new().prefix("simple-db").tempdir().unwrap();
root.path().display().to_string()
};
let cfg = glean_core::Configuration {
data_path,
application_id: "org.mozilla.glean_core.example".into(),
upload_enabled: true,
max_events: None,
delay_ping_lifetime_io: false,
};
let mut glean = Glean::new(cfg).unwrap();
glean.register_ping_type(&PingType::new("baseline", true, false));
glean.register_ping_type(&PingType::new("metrics", true, false));
let local_metric: StringMetric = StringMetric::new(CommonMetricData {
name: "local_metric".into(),
category: "local".into(),
send_in_pings: vec!["baseline".into()],
..Default::default()
});
let call_counter: CounterMetric = CounterMetric::new(CommonMetricData {
name: "calls".into(),
category: "local".into(),
send_in_pings: vec!["baseline".into(), "metrics".into()],
..Default::default()
});
local_metric.set(&glean, "I can set this");
call_counter.add(&glean, 1);
println!("Baseline Data:\n{}", glean.snapshot("baseline", true));
call_counter.add(&glean, 2);
println!("Metrics Data:\n{}", glean.snapshot("metrics", true));
call_counter.add(&glean, 3);
println!();
println!("Baseline Data 2:\n{}", glean.snapshot("baseline", false));
println!("Metrics Data 2:\n{}", glean.snapshot("metrics", true));
let list: StringListMetric = StringListMetric::new(CommonMetricData {
name: "list".into(),
category: "local".into(),
send_in_pings: vec!["baseline".into()],
..Default::default()
});
list.add(&glean, "once");
list.add(&glean, "upon");
let ping_maker = PingMaker::new();
let ping = ping_maker
.collect_string(&glean, glean.get_ping_by_name("baseline").unwrap())
.unwrap();
println!("Baseline Ping:\n{}", ping);
let ping = ping_maker.collect_string(&glean, glean.get_ping_by_name("metrics").unwrap());
println!("Metrics Ping: {:?}", ping);
}


@@ -1,100 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
//! A module containing glean-core code for supporting data migration
//! (i.e. sequence numbers) from glean-ac. This is a temporary module
//! planned to be removed in 2020, after the transition from glean-ac
//! is complete.
use crate::util::truncate_string_at_boundary;
use std::collections::HashMap;
use super::Glean;
use super::PingMaker;
const GLEAN_AC_SEQUENCE_SUFFIX: &str = "_seq";
/// Stores the sequence numbers from glean-ac in glean-core.
pub(super) fn migrate_sequence_numbers(glean: &Glean, seq_numbers: HashMap<String, i32>) {
let ping_maker = PingMaker::new();
for (store_name_with_suffix, next_seq) in seq_numbers.into_iter() {
// Note: glean-ac stores the sequence numbers as '<ping_name>_seq',
// glean-core requires '<ping_name>#sequence'.
if !store_name_with_suffix.ends_with(GLEAN_AC_SEQUENCE_SUFFIX) {
continue;
}
// Negative or 0 counters are definitely not worth importing.
if next_seq <= 0 {
continue;
}
let truncated_len = store_name_with_suffix
.len()
.saturating_sub(GLEAN_AC_SEQUENCE_SUFFIX.len());
let store_name = truncate_string_at_boundary(store_name_with_suffix, truncated_len);
ping_maker.set_ping_seq(glean, &store_name, next_seq);
}
}
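The store-name handling above can be isolated into a small, runnable sketch (a hypothetical standalone helper, not part of glean-core): strip the glean-ac `_seq` suffix to recover the ping name. Since the suffix is ASCII, slicing at `truncated_len` cannot split a UTF-8 code point (the real code uses `truncate_string_at_boundary` for generality).

```rust
// Hypothetical standalone sketch of the suffix handling in
// `migrate_sequence_numbers`: returns the ping name if the
// glean-ac "_seq" suffix is present, or None otherwise.
fn strip_seq_suffix(store_name_with_suffix: &str) -> Option<&str> {
    const GLEAN_AC_SEQUENCE_SUFFIX: &str = "_seq";
    if !store_name_with_suffix.ends_with(GLEAN_AC_SEQUENCE_SUFFIX) {
        return None;
    }
    let truncated_len = store_name_with_suffix
        .len()
        .saturating_sub(GLEAN_AC_SEQUENCE_SUFFIX.len());
    // Safe slice: the suffix is ASCII, so this index is a char boundary.
    Some(&store_name_with_suffix[..truncated_len])
}

fn main() {
    assert_eq!(strip_seq_suffix("baseline_seq"), Some("baseline"));
    assert_eq!(strip_seq_suffix("ignored_seq-lol"), None);
    println!("ok");
}
```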
#[cfg(test)]
mod test {
use super::*;
use crate::tests::new_glean;
#[test]
fn invalid_storage_names_must_not_be_migrated() {
let (glean, _) = new_glean();
let mut ac_seq_numbers = HashMap::new();
ac_seq_numbers.insert(String::from("control_seq"), 3);
ac_seq_numbers.insert(String::from("ignored_seq-lol"), 85);
let ping_maker = PingMaker::new();
migrate_sequence_numbers(&glean, ac_seq_numbers);
assert_eq!(3, ping_maker.get_ping_seq(&glean, "control"));
// The next one should not have been migrated, so we expect
// it to start from 0 instead of 85.
assert_eq!(0, ping_maker.get_ping_seq(&glean, "ignored"));
}
#[test]
fn invalid_sequence_numbers_must_not_be_migrated() {
let (glean, _) = new_glean();
let mut ac_seq_numbers = HashMap::new();
ac_seq_numbers.insert(String::from("control_seq"), 3);
ac_seq_numbers.insert(String::from("ignored_seq"), -85);
let ping_maker = PingMaker::new();
migrate_sequence_numbers(&glean, ac_seq_numbers);
assert_eq!(3, ping_maker.get_ping_seq(&glean, "control"));
// The next one should not have been migrated, so we expect
// it to start from 0 instead of 85.
assert_eq!(0, ping_maker.get_ping_seq(&glean, "ignored"));
}
#[test]
fn valid_sequence_numbers_must_be_migrated() {
let (glean, _) = new_glean();
let mut ac_seq_numbers = HashMap::new();
ac_seq_numbers.insert(String::from("custom_seq"), 3);
ac_seq_numbers.insert(String::from("other_seq"), 7);
ac_seq_numbers.insert(String::from("ignored_seq-lol"), 85);
let ping_maker = PingMaker::new();
migrate_sequence_numbers(&glean, ac_seq_numbers);
assert_eq!(3, ping_maker.get_ping_seq(&glean, "custom"));
assert_eq!(7, ping_maker.get_ping_seq(&glean, "other"));
// The next one should not have been migrated, so we expect
// it to start from 0 instead of 85.
assert_eq!(0, ping_maker.get_ping_seq(&glean, "ignored"));
}
}


@@ -1,126 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use std::convert::TryFrom;
use crate::error::{Error, ErrorKind};
use crate::metrics::dynamic_label;
use crate::Glean;
/// The supported metrics' lifetimes.
///
/// A metric's lifetime determines when its stored data gets reset.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum Lifetime {
/// The metric is reset with each sent ping
Ping,
/// The metric is reset on application restart
Application,
/// The metric is reset with each user profile
User,
}
impl Default for Lifetime {
fn default() -> Self {
Lifetime::Ping
}
}
impl Lifetime {
/// String representation of the lifetime.
pub fn as_str(self) -> &'static str {
match self {
Lifetime::Ping => "ping",
Lifetime::Application => "app",
Lifetime::User => "user",
}
}
}
impl TryFrom<i32> for Lifetime {
type Error = Error;
fn try_from(value: i32) -> Result<Lifetime, Self::Error> {
match value {
0 => Ok(Lifetime::Ping),
1 => Ok(Lifetime::Application),
2 => Ok(Lifetime::User),
e => Err(ErrorKind::Lifetime(e).into()),
}
}
}
/// The common set of data shared across all different metric types.
#[derive(Default, Debug, Clone)]
pub struct CommonMetricData {
/// The metric's name.
pub name: String,
/// The metric's category.
pub category: String,
/// List of ping names to include this metric in.
pub send_in_pings: Vec<String>,
/// The metric's lifetime.
pub lifetime: Lifetime,
/// Whether or not the metric is disabled.
///
/// Disabled metrics are never recorded.
pub disabled: bool,
/// Dynamic label.
/// When a LabeledMetric<T> factory creates the specific metric to be
/// recorded to, dynamic labels are stored in the specific label so that we
/// can validate them when the Glean singleton is available.
pub dynamic_label: Option<String>,
}
impl CommonMetricData {
/// Create a new metadata object.
pub fn new<A: Into<String>, B: Into<String>, C: Into<String>>(
category: A,
name: B,
ping_name: C,
) -> CommonMetricData {
CommonMetricData {
name: name.into(),
category: category.into(),
send_in_pings: vec![ping_name.into()],
..Default::default()
}
}
/// The metric's base identifier, including the category and name, but not the label.
///
/// If `category` is empty, it's omitted.
/// Otherwise, it's the combination of the metric's `category` and `name`.
pub(crate) fn base_identifier(&self) -> String {
if self.category.is_empty() {
self.name.clone()
} else {
format!("{}.{}", self.category, self.name)
}
}
/// The metric's unique identifier, including the category, name and label.
///
/// If `category` is empty, it's omitted.
/// Otherwise, it's the combination of the metric's `category`, `name` and `label`.
pub(crate) fn identifier(&self, glean: &Glean) -> String {
let base_identifier = self.base_identifier();
if let Some(label) = &self.dynamic_label {
dynamic_label(glean, self, &base_identifier, label)
} else {
base_identifier
}
}
/// Whether this metric should be recorded.
pub fn should_record(&self) -> bool {
!self.disabled
}
/// The list of storages this metric should be recorded into.
pub fn storage_names(&self) -> &[String] {
&self.send_in_pings
}
}
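The identifier scheme above can be exercised as a runnable sketch (a hypothetical free function mirroring `CommonMetricData::base_identifier`, written without the struct for self-containment): the base identifier is `<category>.<name>`, or just `<name>` when the category is empty; a dynamic label, when present, is resolved separately via `dynamic_label`.

```rust
// Hypothetical free-function sketch of `CommonMetricData::base_identifier`:
// "<category>.<name>", or just "<name>" when the category is empty.
fn base_identifier(category: &str, name: &str) -> String {
    if category.is_empty() {
        name.to_string()
    } else {
        format!("{}.{}", category, name)
    }
}

fn main() {
    assert_eq!(base_identifier("local", "calls"), "local.calls");
    assert_eq!(base_identifier("", "calls"), "calls");
    println!("ok");
}
```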


@@ -1,862 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use std::collections::btree_map::Entry;
use std::collections::BTreeMap;
use std::fs;
use std::sync::RwLock;
use rkv::{Rkv, SingleStore, StoreOptions};
use crate::metrics::Metric;
use crate::CommonMetricData;
use crate::Glean;
use crate::Lifetime;
use crate::Result;
#[derive(Debug)]
pub struct Database {
rkv: Rkv,
// Metrics with 'application' lifetime only live as long
// as the application lives: they don't need to be persisted
// to disk using rkv. Store them in a map.
app_lifetime_data: RwLock<BTreeMap<String, Metric>>,
// If the `delay_ping_lifetime_io` Glean config option is `true`,
// we will save metrics with 'ping' lifetime data in a map temporarily
// so as to persist them to disk using rkv in bulk on shutdown,
// or after a given interval, instead of every time a new metric
// is created or updated.
ping_lifetime_data: Option<RwLock<BTreeMap<String, Metric>>>,
}
impl Database {
/// Initialize the data store.
///
/// This opens the underlying rkv store and creates
/// the underlying directory structure.
pub fn new(data_path: &str, delay_ping_lifetime_io: bool) -> Result<Self> {
Ok(Self {
rkv: Self::open_rkv(data_path)?,
app_lifetime_data: RwLock::new(BTreeMap::new()),
ping_lifetime_data: if delay_ping_lifetime_io {
Some(RwLock::new(BTreeMap::new()))
} else {
None
},
})
}
/// Creates the storage directories and inits rkv.
fn open_rkv(path: &str) -> Result<Rkv> {
let path = std::path::Path::new(path).join("db");
log::debug!("Database path: {:?}", path.display());
fs::create_dir_all(&path)?;
let rkv = Rkv::new(&path)?;
log::info!("Database initialized");
Ok(rkv)
}
/// Build the key of the final location of the data in the database.
/// Such location is built using the storage name and the metric
/// key/name (if available).
///
/// ## Arguments
///
/// * `storage_name` - the name of the storage to store/fetch data from.
/// * `metric_key` - the optional metric key/name.
///
/// ## Return value
///
/// Returns a String representing the location in the database where data
/// must be written to or read from.
fn get_storage_key(storage_name: &str, metric_key: Option<&str>) -> String {
match metric_key {
Some(k) => format!("{}#{}", storage_name, k),
None => format!("{}#", storage_name),
}
}
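The key layout can be tried out as a standalone, runnable sketch (a hypothetical free function mirroring `Database::get_storage_key`): keys take the form `<storage_name>#<metric_key>`, and the bare `<storage_name>#` form serves as an iteration prefix over a whole storage.

```rust
// Hypothetical free-function sketch of `Database::get_storage_key`:
// builds the "<storage_name>#<metric_key>" database key, or the
// "<storage_name>#" prefix when no metric key is given.
fn get_storage_key(storage_name: &str, metric_key: Option<&str>) -> String {
    match metric_key {
        Some(k) => format!("{}#{}", storage_name, k),
        None => format!("{}#", storage_name),
    }
}

fn main() {
    assert_eq!(
        get_storage_key("baseline", Some("local.calls")),
        "baseline#local.calls"
    );
    assert_eq!(get_storage_key("baseline", None), "baseline#");
    println!("ok");
}
```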
/// Iterates with the provided transaction function over the requested data
/// from the given storage.
///
/// * If the storage is unavailable, the transaction function is never invoked.
/// * If the read data cannot be deserialized it will be silently skipped.
///
/// ## Arguments
///
/// * `lifetime`: The metric lifetime to iterate over.
/// * `storage_name`: The storage name to iterate over.
/// * `metric_key`: The metric key to iterate over. All metrics iterated over
/// will have this prefix. For example, if `metric_key` is of the form `{category}.`,
/// it will iterate over all metrics in the given category. If the `metric_key` is of the
/// form `{category}.{name}/`, the iterator will iterate over all specific metrics for
/// a given labeled metric. If not provided, the entire storage for the given lifetime
/// will be iterated over.
/// * `transaction_fn`: Called for each entry being iterated over. It is
/// passed two arguments: `(metric_name: &[u8], metric: &Metric)`.
///
/// ## Panics
///
/// This function will **not** panic on database errors.
pub fn iter_store_from<F>(
&self,
lifetime: Lifetime,
storage_name: &str,
metric_key: Option<&str>,
mut transaction_fn: F,
) where
F: FnMut(&[u8], &Metric),
{
let iter_start = Self::get_storage_key(storage_name, metric_key);
let len = iter_start.len();
// Lifetime::Application data is not persisted to disk
if lifetime == Lifetime::Application {
let data = self
.app_lifetime_data
.read()
.expect("Can't read app lifetime data");
for (key, value) in data.iter() {
if key.starts_with(&iter_start) {
let key = &key[len..];
transaction_fn(key.as_bytes(), value);
}
}
return;
}
// Lifetime::Ping data is not persisted to disk if
// Glean has `delay_ping_lifetime_io` set to true
if lifetime == Lifetime::Ping {
if let Some(ping_lifetime_data) = &self.ping_lifetime_data {
let data = ping_lifetime_data
.read()
.expect("Can't read ping lifetime data");
for (key, value) in data.iter() {
if key.starts_with(&iter_start) {
let key = &key[len..];
transaction_fn(key.as_bytes(), value);
}
}
return;
}
}
let store: SingleStore = unwrap_or!(
self.rkv
.open_single(lifetime.as_str(), StoreOptions::create()),
return
);
let reader = unwrap_or!(self.rkv.read(), return);
let mut iter = unwrap_or!(store.iter_from(&reader, &iter_start), return);
while let Some(Ok((metric_name, value))) = iter.next() {
if !metric_name.starts_with(iter_start.as_bytes()) {
break;
}
let metric_name = &metric_name[len..];
let metric: Metric = match value.expect("Value missing in iteration") {
rkv::Value::Blob(blob) => unwrap_or!(bincode::deserialize(blob), continue),
_ => continue,
};
transaction_fn(metric_name, &metric);
}
}
/// Determine if the storage has the given metric.
///
/// If data cannot be read it is assumed that the storage does not have the metric.
///
/// ## Arguments
///
/// * `lifetime`: The lifetime of the metric.
/// * `storage_name`: The storage name to look in.
/// * `metric_identifier`: The metric identifier.
///
/// ## Panics
///
/// This function will **not** panic on database errors.
pub fn has_metric(
&self,
lifetime: Lifetime,
storage_name: &str,
metric_identifier: &str,
) -> bool {
let key = Self::get_storage_key(storage_name, Some(metric_identifier));
// Lifetime::Application data is not persisted to disk
if lifetime == Lifetime::Application {
return self
.app_lifetime_data
.read()
.map(|data| data.contains_key(&key))
.unwrap_or(false);
}
// Lifetime::Ping data is not persisted to disk if
// Glean has `delay_ping_lifetime_io` set to true
if lifetime == Lifetime::Ping {
if let Some(ping_lifetime_data) = &self.ping_lifetime_data {
return ping_lifetime_data
.read()
.map(|data| data.contains_key(&key))
.unwrap_or(false);
}
}
let store: SingleStore = unwrap_or!(
self.rkv
.open_single(lifetime.as_str(), StoreOptions::create()),
return false
);
let reader = unwrap_or!(self.rkv.read(), return false);
store.get(&reader, &key).unwrap_or(None).is_some()
}
/// Write to the specified storage with the provided transaction function.
///
/// If the storage is unavailable, it will return an error.
///
/// ## Panics
///
/// * This function will panic for `Lifetime::Application`.
/// * This function will **not** panic on database errors.
pub fn write_with_store<F>(&self, store_name: Lifetime, mut transaction_fn: F) -> Result<()>
where
F: FnMut(rkv::Writer, SingleStore) -> Result<()>,
{
if store_name == Lifetime::Application {
panic!("Can't write with store for application-lifetime data");
}
let store: SingleStore = self
.rkv
.open_single(store_name.as_str(), StoreOptions::create())?;
let writer = self.rkv.write()?;
transaction_fn(writer, store)?;
Ok(())
}
/// Records a metric in the underlying storage system.
pub fn record(&self, glean: &Glean, data: &CommonMetricData, value: &Metric) {
let name = data.identifier(glean);
for ping_name in data.storage_names() {
if let Err(e) = self.record_per_lifetime(data.lifetime, ping_name, &name, value) {
log::error!("Failed to record metric into {}: {:?}", ping_name, e);
}
}
}
/// Records a metric in the underlying storage system, for a single lifetime.
///
/// ## Return value
///
/// If the storage is unavailable or the write fails, no data will be stored and an error will be returned.
///
/// Otherwise `Ok(())` is returned.
///
/// ## Panics
///
/// * This function will **not** panic on database errors.
fn record_per_lifetime(
&self,
lifetime: Lifetime,
storage_name: &str,
key: &str,
metric: &Metric,
) -> Result<()> {
let final_key = Self::get_storage_key(storage_name, Some(key));
if lifetime == Lifetime::Application {
let mut data = self
.app_lifetime_data
.write()
.expect("Can't access app lifetime data as writable");
data.insert(final_key, metric.clone());
return Ok(());
}
// Lifetime::Ping data is not persisted to disk if
// Glean has `delay_ping_lifetime_io` set to true
if lifetime == Lifetime::Ping {
if let Some(ping_lifetime_data) = &self.ping_lifetime_data {
let mut data = ping_lifetime_data
.write()
.expect("Can't access ping lifetime data as writable");
data.insert(final_key, metric.clone());
return Ok(());
}
}
let encoded = bincode::serialize(&metric).expect("IMPOSSIBLE: Serializing metric failed");
let value = rkv::Value::Blob(&encoded);
let store_name = lifetime.as_str();
let store = self.rkv.open_single(store_name, StoreOptions::create())?;
let mut writer = self.rkv.write()?;
store.put(&mut writer, final_key, &value)?;
writer.commit()?;
Ok(())
}
/// Records the provided value, with the given lifetime, after
/// applying a transformation function.
pub fn record_with<F>(&self, glean: &Glean, data: &CommonMetricData, mut transform: F)
where
F: FnMut(Option<Metric>) -> Metric,
{
let name = data.identifier(glean);
for ping_name in data.storage_names() {
if let Err(e) =
self.record_per_lifetime_with(data.lifetime, ping_name, &name, &mut transform)
{
log::error!("Failed to record metric into {}: {:?}", ping_name, e);
}
}
}
/// Records a metric in the underlying storage system, after applying the
/// given transformation function, for a single lifetime.
///
/// ## Return value
///
/// If the storage is unavailable or the write fails, no data will be stored and an error will be returned.
///
/// Otherwise `Ok(())` is returned.
///
/// ## Panics
///
/// * This function will **not** panic on database errors.
pub fn record_per_lifetime_with<F>(
&self,
lifetime: Lifetime,
storage_name: &str,
key: &str,
mut transform: F,
) -> Result<()>
where
F: FnMut(Option<Metric>) -> Metric,
{
let final_key = Self::get_storage_key(storage_name, Some(key));
if lifetime == Lifetime::Application {
let mut data = self
.app_lifetime_data
.write()
.expect("Can't access app lifetime data as writable");
let entry = data.entry(final_key);
match entry {
Entry::Vacant(entry) => {
entry.insert(transform(None));
}
Entry::Occupied(mut entry) => {
let old_value = entry.get().clone();
entry.insert(transform(Some(old_value)));
}
}
return Ok(());
}
// Lifetime::Ping data is not persisted to disk if
// Glean has `delay_ping_lifetime_io` set to true
if lifetime == Lifetime::Ping {
if let Some(ping_lifetime_data) = &self.ping_lifetime_data {
let mut data = ping_lifetime_data
.write()
.expect("Can't access ping lifetime data as writable");
let entry = data.entry(final_key);
match entry {
Entry::Vacant(entry) => {
entry.insert(transform(None));
}
Entry::Occupied(mut entry) => {
let old_value = entry.get().clone();
entry.insert(transform(Some(old_value)));
}
}
return Ok(());
}
}
let store_name = lifetime.as_str();
let store = self.rkv.open_single(store_name, StoreOptions::create())?;
let mut writer = self.rkv.write()?;
let new_value: Metric = {
let old_value = store.get(&writer, &final_key)?;
match old_value {
Some(rkv::Value::Blob(blob)) => {
let old_value = bincode::deserialize(blob).ok();
transform(old_value)
}
_ => transform(None),
}
};
let encoded =
bincode::serialize(&new_value).expect("IMPOSSIBLE: Serializing metric failed");
let value = rkv::Value::Blob(&encoded);
store.put(&mut writer, final_key, &value)?;
writer.commit()?;
Ok(())
}
/// Clears a storage (only Ping Lifetime).
///
/// ## Return value
///
/// * If the storage is unavailable an error is returned.
/// * If any individual delete fails, an error is returned, but other deletions might have
/// happened.
///
/// Otherwise `Ok(())` is returned.
///
/// ## Panics
///
/// * This function will **not** panic on database errors.
pub fn clear_ping_lifetime_storage(&self, storage_name: &str) -> Result<()> {
// Lifetime::Ping might have data saved to `ping_lifetime_data`
// in case `delay_ping_lifetime_io` is set to true
if let Some(ping_lifetime_data) = &self.ping_lifetime_data {
ping_lifetime_data
.write()
.expect("Can't access ping lifetime data as writable")
.clear();
return Ok(());
}
self.write_with_store(Lifetime::Ping, |mut writer, store| {
let mut metrics = Vec::new();
{
let mut iter = store.iter_from(&writer, &storage_name)?;
while let Some(Ok((metric_name, _))) = iter.next() {
if let Ok(metric_name) = std::str::from_utf8(metric_name) {
if !metric_name.starts_with(&storage_name) {
break;
}
metrics.push(metric_name.to_owned());
}
}
}
let mut res = Ok(());
for to_delete in metrics {
if let Err(e) = store.delete(&mut writer, to_delete) {
log::error!("Can't delete from store: {:?}", e);
res = Err(e);
}
}
writer.commit()?;
Ok(res?)
})
}
/// Removes a single metric from the storage.
///
/// ## Arguments
///
/// * `lifetime` - the lifetime of the storage in which to look for the metric.
/// * `storage_name` - the name of the storage to store/fetch data from.
/// * `metric_key` - the metric key/name.
///
/// ## Return value
///
/// * If the storage is unavailable an error is returned.
/// * If the metric could not be deleted, an error is returned.
///
/// Otherwise `Ok(())` is returned.
///
/// ## Panics
///
/// * This function will **not** panic on database errors.
pub fn remove_single_metric(
&self,
lifetime: Lifetime,
storage_name: &str,
metric_name: &str,
) -> Result<()> {
let final_key = Self::get_storage_key(storage_name, Some(metric_name));
if lifetime == Lifetime::Application {
let mut data = self
.app_lifetime_data
.write()
.expect("Can't access app lifetime data as writable");
data.remove(&final_key);
return Ok(());
}
// Lifetime::Ping data is not persisted to disk if
// Glean has `delay_ping_lifetime_io` set to true
if lifetime == Lifetime::Ping {
if let Some(ping_lifetime_data) = &self.ping_lifetime_data {
let mut data = ping_lifetime_data
.write()
.expect("Can't access ping lifetime data as writable");
data.remove(&final_key);
return Ok(());
}
}
self.write_with_store(lifetime, |mut writer, store| {
store.delete(&mut writer, final_key.clone())?;
writer.commit()?;
Ok(())
})
}
/// Clears all metrics in the database.
///
/// Errors are logged.
///
/// ## Panics
///
/// * This function will **not** panic on database errors.
pub fn clear_all(&self) {
for lifetime in [Lifetime::User, Lifetime::Ping].iter() {
let res = self.write_with_store(*lifetime, |mut writer, store| {
store.clear(&mut writer)?;
writer.commit()?;
Ok(())
});
if let Err(e) = res {
log::error!("Could not clear store for lifetime {:?}: {:?}", lifetime, e);
}
}
self.app_lifetime_data
.write()
.expect("Can't access app lifetime data as writable")
.clear();
if let Some(ping_lifetime_data) = &self.ping_lifetime_data {
ping_lifetime_data
.write()
.expect("Can't access ping lifetime data as writable")
.clear();
}
}
}
#[cfg(test)]
mod test {
use super::*;
use std::collections::HashMap;
use tempfile::tempdir;
#[test]
fn test_panicks_if_fails_dir_creation() {
assert!(Database::new("/!#\"'@#°ç", false).is_err());
}
#[test]
fn test_data_dir_rkv_inits() {
let dir = tempdir().unwrap();
let str_dir = dir.path().display().to_string();
Database::new(&str_dir, false).unwrap();
assert!(dir.path().exists());
}
#[test]
fn test_ping_lifetime_metric_recorded() {
// Init the database in a temporary directory.
let dir = tempdir().unwrap();
let str_dir = dir.path().display().to_string();
let db = Database::new(&str_dir, false).unwrap();
assert!(db.ping_lifetime_data.is_none());
// Attempt to record a known value.
let test_value = "test-value";
let test_storage = "test-storage";
let test_metric_id = "telemetry_test.test_name";
db.record_per_lifetime(
Lifetime::Ping,
test_storage,
test_metric_id,
&Metric::String(test_value.to_string()),
)
.unwrap();
// Verify that the data is correctly recorded.
let mut found_metrics = 0;
let mut snapshotter = |metric_name: &[u8], metric: &Metric| {
found_metrics += 1;
let metric_id = String::from_utf8_lossy(metric_name).into_owned();
assert_eq!(test_metric_id, metric_id);
match metric {
Metric::String(s) => assert_eq!(test_value, s),
_ => panic!("Unexpected data found"),
}
};
db.iter_store_from(Lifetime::Ping, test_storage, None, &mut snapshotter);
assert_eq!(1, found_metrics, "We only expect 1 Lifetime.Ping metric.");
}
#[test]
fn test_application_lifetime_metric_recorded() {
// Init the database in a temporary directory.
let dir = tempdir().unwrap();
let str_dir = dir.path().display().to_string();
let db = Database::new(&str_dir, false).unwrap();
// Attempt to record a known value.
let test_value = "test-value";
let test_storage = "test-storage1";
let test_metric_id = "telemetry_test.test_name";
db.record_per_lifetime(
Lifetime::Application,
test_storage,
test_metric_id,
&Metric::String(test_value.to_string()),
)
.unwrap();
// Verify that the data is correctly recorded.
let mut found_metrics = 0;
let mut snapshotter = |metric_name: &[u8], metric: &Metric| {
found_metrics += 1;
let metric_id = String::from_utf8_lossy(metric_name).into_owned();
assert_eq!(test_metric_id, metric_id);
match metric {
Metric::String(s) => assert_eq!(test_value, s),
_ => panic!("Unexpected data found"),
}
};
db.iter_store_from(Lifetime::Application, test_storage, None, &mut snapshotter);
assert_eq!(
1, found_metrics,
"We only expect 1 Lifetime.Application metric."
);
}
#[test]
fn test_user_lifetime_metric_recorded() {
// Init the database in a temporary directory.
let dir = tempdir().unwrap();
let str_dir = dir.path().display().to_string();
let db = Database::new(&str_dir, false).unwrap();
// Attempt to record a known value.
let test_value = "test-value";
let test_storage = "test-storage2";
let test_metric_id = "telemetry_test.test_name";
db.record_per_lifetime(
Lifetime::User,
test_storage,
test_metric_id,
&Metric::String(test_value.to_string()),
)
.unwrap();
// Verify that the data is correctly recorded.
let mut found_metrics = 0;
let mut snapshotter = |metric_name: &[u8], metric: &Metric| {
found_metrics += 1;
let metric_id = String::from_utf8_lossy(metric_name).into_owned();
assert_eq!(test_metric_id, metric_id);
match metric {
Metric::String(s) => assert_eq!(test_value, s),
_ => panic!("Unexpected data found"),
}
};
db.iter_store_from(Lifetime::User, test_storage, None, &mut snapshotter);
assert_eq!(1, found_metrics, "We only expect 1 Lifetime.User metric.");
}
#[test]
fn test_clear_ping_storage() {
// Init the database in a temporary directory.
let dir = tempdir().unwrap();
let str_dir = dir.path().display().to_string();
let db = Database::new(&str_dir, false).unwrap();
// Attempt to record a known value for every single lifetime.
let test_storage = "test-storage";
db.record_per_lifetime(
Lifetime::User,
test_storage,
"telemetry_test.test_name_user",
&Metric::String("test-value-user".to_string()),
)
.unwrap();
db.record_per_lifetime(
Lifetime::Ping,
test_storage,
"telemetry_test.test_name_ping",
&Metric::String("test-value-ping".to_string()),
)
.unwrap();
db.record_per_lifetime(
Lifetime::Application,
test_storage,
"telemetry_test.test_name_application",
&Metric::String("test-value-application".to_string()),
)
.unwrap();
// Take a snapshot for the data, all the lifetimes.
{
let mut snapshot: HashMap<String, String> = HashMap::new();
let mut snapshotter = |metric_name: &[u8], metric: &Metric| {
let metric_name = String::from_utf8_lossy(metric_name).into_owned();
match metric {
Metric::String(s) => snapshot.insert(metric_name, s.to_string()),
_ => panic!("Unexpected data found"),
};
};
db.iter_store_from(Lifetime::User, test_storage, None, &mut snapshotter);
db.iter_store_from(Lifetime::Ping, test_storage, None, &mut snapshotter);
db.iter_store_from(Lifetime::Application, test_storage, None, &mut snapshotter);
assert_eq!(3, snapshot.len(), "We expect all lifetimes to be present.");
assert!(snapshot.contains_key("telemetry_test.test_name_user"));
assert!(snapshot.contains_key("telemetry_test.test_name_ping"));
assert!(snapshot.contains_key("telemetry_test.test_name_application"));
}
// Clear the Ping lifetime.
db.clear_ping_lifetime_storage(test_storage).unwrap();
// Take a snapshot again and check that we're only clearing the Ping lifetime.
{
let mut snapshot: HashMap<String, String> = HashMap::new();
let mut snapshotter = |metric_name: &[u8], metric: &Metric| {
let metric_name = String::from_utf8_lossy(metric_name).into_owned();
match metric {
Metric::String(s) => snapshot.insert(metric_name, s.to_string()),
_ => panic!("Unexpected data found"),
};
};
db.iter_store_from(Lifetime::User, test_storage, None, &mut snapshotter);
db.iter_store_from(Lifetime::Ping, test_storage, None, &mut snapshotter);
db.iter_store_from(Lifetime::Application, test_storage, None, &mut snapshotter);
assert_eq!(2, snapshot.len(), "We only expect 2 metrics to be left.");
assert!(snapshot.contains_key("telemetry_test.test_name_user"));
assert!(snapshot.contains_key("telemetry_test.test_name_application"));
}
}
#[test]
fn test_remove_single_metric() {
// Init the database in a temporary directory.
let dir = tempdir().unwrap();
let str_dir = dir.path().display().to_string();
let db = Database::new(&str_dir, false).unwrap();
let test_storage = "test-storage-single-lifetime";
let metric_id_pattern = "telemetry_test.single_metric";
// Write sample metrics to the database.
let lifetimes = vec![Lifetime::User, Lifetime::Ping, Lifetime::Application];
for lifetime in lifetimes.iter() {
for value in &["retain", "delete"] {
db.record_per_lifetime(
*lifetime,
test_storage,
&format!("{}_{}", metric_id_pattern, value),
&Metric::String(value.to_string()),
)
.unwrap();
}
}
// Remove "telemetry_test.single_metric_delete" from each lifetime.
for lifetime in lifetimes.iter() {
db.remove_single_metric(
*lifetime,
test_storage,
&format!("{}_delete", metric_id_pattern),
)
.unwrap();
}
// Verify that "telemetry_test.single_metric_retain" is still around for all lifetimes.
for lifetime in lifetimes.iter() {
let mut found_metrics = 0;
let mut snapshotter = |metric_name: &[u8], metric: &Metric| {
found_metrics += 1;
let metric_id = String::from_utf8_lossy(metric_name).into_owned();
assert_eq!(format!("{}_retain", metric_id_pattern), metric_id);
match metric {
Metric::String(s) => assert_eq!("retain", s),
_ => panic!("Unexpected data found"),
}
};
// Check the User lifetime.
db.iter_store_from(*lifetime, test_storage, None, &mut snapshotter);
assert_eq!(
1, found_metrics,
"We only expect 1 metric for this lifetime."
);
}
}
#[test]
fn test_deferred_ping_lifetime_collection() {
// Init the database in a temporary directory.
let dir = tempdir().unwrap();
let str_dir = dir.path().display().to_string();
let db = Database::new(&str_dir, true).unwrap();
assert!(db.ping_lifetime_data.is_some());
// Attempt to record a known value.
let test_value = "test-value";
let test_storage = "test-storage1";
let test_metric_id = "telemetry_test.test_name";
db.record_per_lifetime(
Lifetime::Ping,
test_storage,
test_metric_id,
&Metric::String(test_value.to_string()),
)
.unwrap();
// Verify that the data is correctly recorded.
let mut found_metrics = 0;
let mut snapshotter = |metric_name: &[u8], metric: &Metric| {
found_metrics += 1;
let metric_id = String::from_utf8_lossy(metric_name).into_owned();
assert_eq!(test_metric_id, metric_id);
match metric {
Metric::String(s) => assert_eq!(test_value, s),
_ => panic!("Unexpected data found"),
}
};
db.iter_store_from(Lifetime::Ping, test_storage, None, &mut snapshotter);
assert_eq!(1, found_metrics, "We only expect 1 Lifetime.Ping metric.");
// Make sure data was **not** persisted with rkv.
let store: SingleStore = unwrap_or!(
db.rkv
.open_single(Lifetime::Ping.as_str(), StoreOptions::create()),
panic!()
);
let reader = unwrap_or!(db.rkv.read(), panic!());
assert!(store
.get(&reader, &test_metric_id)
.unwrap_or(None)
.is_none());
}
}

180
third_party/rust/glean-core/src/error.rs vendored

@@ -1,180 +0,0 @@
use std::ffi::OsString;
use std::fmt::{self, Display};
use std::io;
use std::result;
use failure::{self, Backtrace, Context, Fail};
use ffi_support::{handle_map::HandleError, ExternError};
use rkv::error::StoreError;
/// A specialized [`Result`] type for this crate's operations.
///
/// This is generally used to avoid writing out [`Error`] directly and
/// is otherwise a direct mapping to [`Result`].
///
/// [`Result`]: https://doc.rust-lang.org/stable/std/result/enum.Result.html
/// [`Error`]: std.struct.Error.html
pub type Result<T> = result::Result<T, Error>;
/// A list enumerating the categories of errors in this crate.
///
/// This list is intended to grow over time and it is not recommended to
/// exhaustively match against it.
///
/// It is used with the [`Error`] struct.
///
/// [`Error`]: std.struct.Error.html
#[derive(Debug, Fail)]
pub enum ErrorKind {
/// Lifetime conversion failed
#[fail(display = "Lifetime conversion from {} failed", _0)]
Lifetime(i32),
/// FFI-Support error
#[fail(display = "Invalid handle: {}", _0)]
Handle(HandleError),
/// IO error
#[fail(display = "An I/O error occurred: {}", _0)]
IoError(io::Error),
/// Rkv error
#[fail(display = "An Rkv error occurred: {}", _0)]
Rkv(StoreError),
/// JSON error
#[fail(display = "A JSON error occurred: {}", _0)]
Json(serde_json::error::Error),
/// TimeUnit conversion failed
#[fail(display = "TimeUnit conversion from {} failed", _0)]
TimeUnit(i32),
/// MemoryUnit conversion failed
#[fail(display = "MemoryUnit conversion from {} failed", _0)]
MemoryUnit(i32),
/// HistogramType conversion failed
#[fail(display = "HistogramType conversion from {} failed", _0)]
HistogramType(i32),
/// OsString conversion failed
#[fail(display = "OsString conversion from {:?} failed", _0)]
OsString(OsString),
/// Unknown error
#[fail(display = "Invalid UTF-8 byte sequence in string.")]
Utf8Error,
}
/// A specialized [`Error`] type for this crate's operations.
///
/// [`Error`]: https://doc.rust-lang.org/stable/std/error/trait.Error.html
#[derive(Debug)]
pub struct Error {
inner: Context<ErrorKind>,
}
impl Error {
/// Access the [`ErrorKind`] member.
///
/// [`ErrorKind`]: enum.ErrorKind.html
pub fn kind(&self) -> &ErrorKind {
&*self.inner.get_context()
}
/// Return a new UTF-8 error
///
/// This is exposed so that conversion errors can be reported across the FFI layer.
pub fn utf8_error() -> Error {
Error {
inner: Context::new(ErrorKind::Utf8Error),
}
}
}
impl Fail for Error {
fn cause(&self) -> Option<&dyn Fail> {
self.inner.cause()
}
fn backtrace(&self) -> Option<&Backtrace> {
self.inner.backtrace()
}
}
impl Display for Error {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
Display::fmt(&self.inner, f)
}
}
impl From<ErrorKind> for Error {
fn from(kind: ErrorKind) -> Error {
let inner = Context::new(kind);
Error { inner }
}
}
impl From<Context<ErrorKind>> for Error {
fn from(inner: Context<ErrorKind>) -> Error {
Error { inner }
}
}
impl From<HandleError> for Error {
fn from(error: HandleError) -> Error {
Error {
inner: Context::new(ErrorKind::Handle(error)),
}
}
}
impl From<io::Error> for Error {
fn from(error: io::Error) -> Error {
Error {
inner: Context::new(ErrorKind::IoError(error)),
}
}
}
impl From<StoreError> for Error {
fn from(error: StoreError) -> Error {
Error {
inner: Context::new(ErrorKind::Rkv(error)),
}
}
}
impl From<Error> for ExternError {
fn from(error: Error) -> ExternError {
ffi_support::ExternError::new_error(ffi_support::ErrorCode::new(42), format!("{}", error))
}
}
impl From<serde_json::error::Error> for Error {
fn from(error: serde_json::error::Error) -> Error {
Error {
inner: Context::new(ErrorKind::Json(error)),
}
}
}
impl From<OsString> for Error {
fn from(error: OsString) -> Error {
Error {
inner: Context::new(ErrorKind::OsString(error)),
}
}
}
/// To satisfy integer conversion done by the macros on the FFI side, we need to be able to turn
/// something infallible into an error.
/// This will never actually be reached, as an integer-to-integer conversion is infallible.
impl From<std::convert::Infallible> for Error {
fn from(_: std::convert::Infallible) -> Error {
unreachable!()
}
}
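The `From` impls above let `?` lift each underlying error type into the crate's `Error` transparently. A minimal std-only sketch of the same pattern (a hypothetical reduced `ErrorKind`/`Error` pair, not the crate's types, which wrap kinds in a `failure::Context`):

```rust
use std::fmt;

// Hypothetical reduced error type mirroring the ErrorKind/Error split above.
#[derive(Debug)]
enum ErrorKind {
    Lifetime(i32),
}

#[derive(Debug)]
struct Error {
    kind: ErrorKind,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match &self.kind {
            ErrorKind::Lifetime(v) => write!(f, "Lifetime conversion from {} failed", v),
        }
    }
}

// With this From impl, `ErrorKind::Lifetime(e).into()` and `?` both produce an Error.
impl From<ErrorKind> for Error {
    fn from(kind: ErrorKind) -> Error {
        Error { kind }
    }
}

fn lifetime_from(value: i32) -> Result<&'static str, Error> {
    match value {
        0 => Ok("Ping"),
        1 => Ok("Application"),
        2 => Ok("User"),
        e => Err(ErrorKind::Lifetime(e).into()),
    }
}

fn main() {
    assert_eq!(lifetime_from(2).unwrap(), "User");
    let err = lifetime_from(7).unwrap_err();
    assert_eq!(err.to_string(), "Lifetime conversion from 7 failed");
    println!("{}", err);
}
```

The same shape explains the `TryFrom<i32>` conversions in the crate: each failing branch builds an `ErrorKind` and relies on `From` to produce the `Error`.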


@@ -1,213 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
//! # Error Recording
//!
//! Glean keeps track of errors that occurred due to invalid labels or invalid values when recording
//! other metrics.
//!
//! Error counts are stored in labeled counters in the `glean.error` category.
//! The labeled counter metrics that store the errors are defined in the `metrics.yaml` for documentation purposes,
//! but are not actually used directly, since the `send_in_pings` value needs to match the pings of the metric that is erroring (plus the "metrics" ping),
//! not some constant value that we could define in `metrics.yaml`.
use std::convert::TryFrom;
use std::fmt::Display;
use crate::error::{Error, ErrorKind};
use crate::metrics::CounterMetric;
use crate::metrics::{combine_base_identifier_and_label, strip_label};
use crate::CommonMetricData;
use crate::Glean;
use crate::Lifetime;
/// The possible error types for metric recording.
#[derive(Debug)]
pub enum ErrorType {
/// For when the value to be recorded does not match the metric-specific restrictions
InvalidValue,
/// For when the label of a labeled metric does not match the restrictions
InvalidLabel,
/// For when the metric caught an invalid state while recording
InvalidState,
}
impl ErrorType {
/// The error type's metric name
pub fn as_str(&self) -> &'static str {
match self {
ErrorType::InvalidValue => "invalid_value",
ErrorType::InvalidLabel => "invalid_label",
ErrorType::InvalidState => "invalid_state",
}
}
}
impl TryFrom<i32> for ErrorType {
type Error = Error;
fn try_from(value: i32) -> Result<ErrorType, Self::Error> {
match value {
0 => Ok(ErrorType::InvalidValue),
1 => Ok(ErrorType::InvalidLabel),
2 => Ok(ErrorType::InvalidState),
e => Err(ErrorKind::Lifetime(e).into()),
}
}
}
/// For a given metric, get the metric in which to record errors
fn get_error_metric_for_metric(meta: &CommonMetricData, error: ErrorType) -> CounterMetric {
// Can't use meta.identifier here, since that might cause infinite recursion
// if the label on this metric needs to report an error.
let identifier = meta.base_identifier();
let name = strip_label(&identifier);
// Record errors in the pings the metric is in, as well as the metrics ping.
let mut send_in_pings = meta.send_in_pings.clone();
let ping_name = "metrics".to_string();
if !send_in_pings.contains(&ping_name) {
send_in_pings.push(ping_name);
}
CounterMetric::new(CommonMetricData {
name: combine_base_identifier_and_label(error.as_str(), name),
category: "glean.error".into(),
lifetime: Lifetime::Ping,
send_in_pings,
..Default::default()
})
}
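The naming scheme above can be sketched standalone: errors for a metric are recorded in a counter named `<error_type>/<metric name>` (assuming the `/` separator used for labeled metrics) under the `glean.error` category, sent in the metric's own pings plus `metrics`. The helper names below are hypothetical stand-ins for the crate's internals:

```rust
// Hypothetical stand-in for the crate's combine_base_identifier_and_label.
fn error_metric_name(error_type: &str, metric_name: &str) -> String {
    // Labeled metrics are assumed to be encoded as "<base>/<label>".
    format!("{}/{}", error_type, metric_name)
}

// Errors go to the metric's own pings, plus the "metrics" ping (deduplicated).
fn pings_for_error(mut send_in_pings: Vec<String>) -> Vec<String> {
    let metrics = "metrics".to_string();
    if !send_in_pings.contains(&metrics) {
        send_in_pings.push(metrics);
    }
    send_in_pings
}

fn main() {
    let name = error_metric_name("invalid_label", "telemetry.string_metric");
    assert_eq!(name, "invalid_label/telemetry.string_metric");

    let pings = pings_for_error(vec!["store1".into()]);
    assert_eq!(pings, vec!["store1".to_string(), "metrics".to_string()]);

    println!("{} -> {:?}", name, pings);
}
```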
/// Records an error into Glean.
///
/// Errors are recorded as labeled counters in the `glean.error` category.
///
/// *Note*: We do make assumptions here how labeled metrics are encoded, namely by having the name
/// `<name>/<label>`.
/// Errors do not adhere to the usual "maximum label" restriction.
///
/// ## Arguments
///
/// * glean - The Glean instance containing the database
/// * meta - The metric's meta data
/// * error - The error type to record
/// * message - The message to log. This message is not sent with the ping.
/// It does not need to include the metric name, as that is automatically prepended to the message.
/// * num_errors - The number of errors of the same type to report.
pub fn record_error<O: Into<Option<i32>>>(
glean: &Glean,
meta: &CommonMetricData,
error: ErrorType,
message: impl Display,
num_errors: O,
) {
let metric = get_error_metric_for_metric(meta, error);
log::warn!("{}: {}", meta.base_identifier(), message);
let to_report = num_errors.into().unwrap_or(1);
debug_assert!(to_report > 0);
metric.add(glean, to_report);
}
/// Get the number of recorded errors for the given metric and error type.
///
/// *Note*: This is a **test-only** API, but we need to expose it to be used in integration tests.
///
/// ## Arguments
///
/// * glean - The Glean object holding the database
/// * meta - The metadata of the metric instance
/// * error - The type of error
///
/// ## Return value
///
/// The number of errors reported
pub fn test_get_num_recorded_errors(
glean: &Glean,
meta: &CommonMetricData,
error: ErrorType,
ping_name: Option<&str>,
) -> Result<i32, String> {
let use_ping_name = ping_name.unwrap_or(&meta.send_in_pings[0]);
let metric = get_error_metric_for_metric(meta, error);
metric.test_get_value(glean, use_ping_name).ok_or_else(|| {
format!(
"No error recorded for {} in '{}' store",
meta.base_identifier(),
use_ping_name
)
})
}
#[cfg(test)]
mod test {
use super::*;
use crate::metrics::*;
const GLOBAL_APPLICATION_ID: &str = "org.mozilla.glean.test.app";
pub fn new_glean() -> (Glean, tempfile::TempDir) {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
let glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, true).unwrap();
(glean, dir)
}
#[test]
fn recording_of_all_error_types() {
let (glean, _t) = new_glean();
let string_metric = StringMetric::new(CommonMetricData {
name: "string_metric".into(),
category: "telemetry".into(),
send_in_pings: vec!["store1".into(), "store2".into()],
disabled: false,
lifetime: Lifetime::User,
..Default::default()
});
let expected_invalid_values_errors: i32 = 1;
let expected_invalid_labels_errors: i32 = 2;
record_error(
&glean,
string_metric.meta(),
ErrorType::InvalidValue,
"Invalid value",
None,
);
record_error(
&glean,
string_metric.meta(),
ErrorType::InvalidLabel,
"Invalid label",
expected_invalid_labels_errors,
);
for store in &["store1", "store2", "metrics"] {
assert_eq!(
Ok(expected_invalid_values_errors),
test_get_num_recorded_errors(
&glean,
string_metric.meta(),
ErrorType::InvalidValue,
Some(store)
)
);
assert_eq!(
Ok(expected_invalid_labels_errors),
test_get_num_recorded_errors(
&glean,
string_metric.meta(),
ErrorType::InvalidLabel,
Some(store)
)
);
}
}
}


@@ -1,367 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use std::collections::HashMap;
use std::fs;
use std::fs::{create_dir_all, File, OpenOptions};
use std::io::BufRead;
use std::io::BufReader;
use std::io::Write;
use std::iter::FromIterator;
use std::path::{Path, PathBuf};
use std::sync::RwLock;
use serde::{Deserialize, Serialize};
use serde_json;
use serde_json::{json, Value as JsonValue};
use crate::CommonMetricData;
use crate::Glean;
use crate::Result;
/// Represents the data for a single event.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct RecordedEventData {
pub timestamp: u64,
pub category: String,
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub extra: Option<HashMap<String, String>>,
}
impl RecordedEventData {
/// Serialize an event to JSON, adjusting its timestamp relative to a base timestamp
pub fn serialize_relative(&self, timestamp_offset: u64) -> JsonValue {
json!(&RecordedEventData {
timestamp: self.timestamp - timestamp_offset,
category: self.category.clone(),
name: self.name.clone(),
extra: self.extra.clone(),
})
}
}
/// This struct handles the in-memory and on-disk storage logic for events.
///
/// So that the data survives shutting down of the application, events are stored
/// in an append-only file on disk, in addition to the store in memory. Each line
/// of this file records a single event in JSON, exactly as it will be sent in the
/// ping. There is one file per store.
///
/// When restarting the application, these on-disk files are checked, and if any are
/// found, they are loaded, queued for sending and flushed immediately before any
/// further events are collected. This is because the timestamps for these events
/// may have come from a previous boot of the device, and therefore will not be
/// compatible with any newly-collected events.
#[derive(Debug)]
pub struct EventDatabase {
/// Path to directory of on-disk event files
pub path: PathBuf,
/// The in-memory list of events
event_stores: RwLock<HashMap<String, Vec<RecordedEventData>>>,
/// A lock to be held when doing operations on the filesystem
file_lock: RwLock<()>,
}
impl EventDatabase {
/// Create a new event database.
///
/// # Arguments
///
/// * `data_path` - The directory to store events in. A new directory
/// `events` will be created inside of this directory.
pub fn new(data_path: &str) -> Result<Self> {
let path = Path::new(data_path).join("events");
create_dir_all(&path)?;
Ok(Self {
path,
event_stores: RwLock::new(HashMap::new()),
file_lock: RwLock::new(()),
})
}
/// Initialize events storage after Glean is fully initialized and ready to
/// send pings. This must be called once on application startup, e.g. from
/// [Glean.initialize], but after we are ready to send pings, since this
/// could potentially collect and send pings.
///
/// If there are any events queued on disk, it loads them into memory so
/// that the memory and disk representations are in sync.
///
/// Secondly, if this is the first time the application has been run since
/// rebooting, any stored events are assembled into pings and cleared
/// immediately, since their timestamps won't be compatible with the timestamps
/// we would create during this boot of the device.
///
/// # Arguments
///
/// * `glean` - The Glean instance.
///
/// # Return value
///
/// `true` if at least one ping was generated, `false` otherwise.
pub fn flush_pending_events_on_startup(&self, glean: &Glean) -> bool {
match self.load_events_from_disk() {
Ok(_) => self.send_all_events(glean),
Err(err) => {
log::error!("Error loading events from disk: {}", err);
false
}
}
}
fn load_events_from_disk(&self) -> Result<()> {
let _lock = self.file_lock.read().unwrap(); // safe unwrap, only error case is poisoning
let mut db = self.event_stores.write().unwrap(); // safe unwrap, only error case is poisoning
for entry in fs::read_dir(&self.path)? {
let entry = entry?;
if entry.file_type()?.is_file() {
let store_name = entry.file_name().into_string()?;
let file = BufReader::new(File::open(entry.path())?);
db.insert(
store_name,
file.lines()
.filter_map(|line| line.ok())
.filter_map(|line| serde_json::from_str::<RecordedEventData>(&line).ok())
.collect(),
);
}
}
Ok(())
}
fn send_all_events(&self, glean: &Glean) -> bool {
let store_names = {
let db = self.event_stores.read().unwrap(); // safe unwrap, only error case is poisoning
db.keys().cloned().collect::<Vec<String>>()
};
let mut ping_sent = false;
for store_name in store_names {
if let Err(err) = glean.send_ping_by_name(&store_name) {
log::error!(
"Error flushing existing events to the '{}' ping: {}",
store_name,
err
);
} else {
ping_sent = true;
}
}
ping_sent
}
/// Record an event in the desired stores.
///
/// # Arguments
///
/// * `glean` - The Glean instance.
/// * `meta` - The metadata about the event metric. Used to get the category,
/// name and stores for the metric.
/// * `timestamp` - The timestamp of the event, in milliseconds. Must use a
/// monotonically increasing timer (this value is obtained on the
/// platform-specific side).
/// * `extra` - Extra data values, mapping strings to strings.
pub fn record(
&self,
glean: &Glean,
meta: &CommonMetricData,
timestamp: u64,
extra: Option<HashMap<String, String>>,
) {
// Create RecordedEventData object, and its JSON form for serialization
// on disk.
let event = RecordedEventData {
timestamp,
category: meta.category.to_string(),
name: meta.name.to_string(),
extra,
};
let event_json = serde_json::to_string(&event).unwrap(); // safe unwrap, event can always be serialized
// Store the event in memory and on disk to each of the stores.
let mut stores_to_send: Vec<&str> = Vec::new();
{
let mut db = self.event_stores.write().unwrap(); // safe unwrap, only error case is poisoning
for store_name in meta.send_in_pings.iter() {
let store = db.entry(store_name.to_string()).or_insert_with(Vec::new);
store.push(event.clone());
self.write_event_to_disk(store_name, &event_json);
if store.len() == glean.get_max_events() {
stores_to_send.push(&store_name);
}
}
}
// If any of the event stores reached maximum size, send the pings
// containing those events immediately.
for store_name in stores_to_send {
if let Err(err) = glean.send_ping_by_name(store_name) {
log::error!(
"Got more than {} events, but could not send {} ping: {}",
glean.get_max_events(),
store_name,
err
);
}
}
}
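The "flush when full" behaviour in `record` can be sketched in isolation: events accumulate per store, and any store that reaches the configured maximum is flagged for an immediate ping submission. This is a minimal std-only model (a bare `u64` timestamp stands in for the full event, and the return value stands in for `send_ping_by_name`):

```rust
use std::collections::HashMap;

// Hypothetical reduced model of EventDatabase::record's store bookkeeping.
// Returns the names of stores that just hit `max_events` and should be sent.
fn record(
    stores: &mut HashMap<String, Vec<u64>>,
    send_in_pings: &[&str],
    timestamp: u64,
    max_events: usize,
) -> Vec<String> {
    let mut stores_to_send = Vec::new();
    for store_name in send_in_pings {
        let store = stores.entry(store_name.to_string()).or_insert_with(Vec::new);
        store.push(timestamp);
        if store.len() == max_events {
            stores_to_send.push(store_name.to_string());
        }
    }
    stores_to_send
}

fn main() {
    let mut stores = HashMap::new();
    // First event: below the maximum, nothing to send yet.
    assert!(record(&mut stores, &["events"], 1, 2).is_empty());
    // Second event hits the maximum, so the "events" store is flagged.
    assert_eq!(record(&mut stores, &["events"], 2, 2), vec!["events".to_string()]);
    println!("stores: {:?}", stores);
}
```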
/// Writes an event to a single store on disk.
///
/// # Arguments
///
/// * `store_name` - The name of the store.
/// * `event_json` - The event content, as a single-line JSON-encoded string.
fn write_event_to_disk(&self, store_name: &str, event_json: &str) {
let _lock = self.file_lock.write().unwrap(); // safe unwrap, only error case is poisoning
if let Err(err) = OpenOptions::new()
.create(true)
.append(true)
.open(self.path.join(store_name))
.and_then(|mut file| writeln!(file, "{}", event_json))
{
log::error!("IO error writing event to store '{}': {}", store_name, err);
}
}
/// Get a snapshot of the stored event data as a JsonValue.
///
/// # Arguments
///
/// * `store_name` - The name of the desired store.
/// * `clear_store` - Whether to clear the store after snapshotting.
///
/// # Returns
///
/// An array of events, JSON encoded, if any.
pub fn snapshot_as_json(&self, store_name: &str, clear_store: bool) -> Option<JsonValue> {
let result = {
let mut db = self.event_stores.write().unwrap(); // safe unwrap, only error case is poisoning
db.get_mut(&store_name.to_string()).and_then(|store| {
if !store.is_empty() {
// Timestamps may have been recorded out-of-order, so sort the events
// by the timestamp.
// We can't insert events in order as-we-go, because we also append
// events to a file on disk, where this would be expensive. Best to
// handle this in every case (whether events came from disk or memory)
// in a single location.
store.sort_by(|a, b| a.timestamp.cmp(&b.timestamp));
let first_timestamp = store[0].timestamp;
Some(JsonValue::from_iter(
store.iter().map(|e| e.serialize_relative(first_timestamp)),
))
} else {
log::error!("Unexpectedly got empty event store for '{}'", store_name);
None
}
})
};
if clear_store {
self.event_stores
.write()
.unwrap() // safe unwrap, only error case is poisoning
.remove(&store_name.to_string());
let _lock = self.file_lock.write().unwrap(); // safe unwrap, only error case is poisoning
if let Err(err) = fs::remove_file(self.path.join(store_name)) {
match err.kind() {
std::io::ErrorKind::NotFound => {
// silently drop this error, the file was already non-existing
}
_ => log::error!("Error removing events queue file '{}': {}", store_name, err),
}
}
}
result
}
/// Clear all stored events, both in memory and on-disk.
pub fn clear_all(&self) -> Result<()> {
// safe unwrap, only error case is poisoning
self.event_stores.write().unwrap().clear();
// safe unwrap, only error case is poisoning
let _lock = self.file_lock.write().unwrap();
std::fs::remove_dir_all(&self.path)?;
create_dir_all(&self.path)?;
Ok(())
}
/// **Test-only API (exported for FFI purposes).**
///
/// Return whether there are any events currently stored for the given event
/// metric.
///
/// This doesn't clear the stored value.
pub fn test_has_value<'a>(&'a self, meta: &'a CommonMetricData, store_name: &str) -> bool {
self.event_stores
.read()
.unwrap() // safe unwrap, only error case is poisoning
.get(&store_name.to_string())
.into_iter()
.flatten()
.any(|event| event.name == meta.name && event.category == meta.category)
}
/// **Test-only API (exported for FFI purposes).**
///
/// Get the vector of currently stored events for the given event metric in
/// the given store.
///
/// This doesn't clear the stored value.
pub fn test_get_value<'a>(
&'a self,
meta: &'a CommonMetricData,
store_name: &str,
) -> Option<Vec<RecordedEventData>> {
let value: Vec<RecordedEventData> = self
.event_stores
.read()
.unwrap() // safe unwrap, only error case is poisoning
.get(&store_name.to_string())
.into_iter()
.flatten()
.filter(|event| event.name == meta.name && event.category == meta.category)
.cloned()
.collect();
if !value.is_empty() {
Some(value)
} else {
None
}
}
}
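The snapshot logic above sorts events by timestamp (they may have been appended out of order) and re-bases them against the first timestamp, so a ping always starts at 0. A minimal std-only sketch of that re-basing, with a hypothetical `Event` type standing in for `RecordedEventData`:

```rust
// Hypothetical stripped-down event: only the fields needed for re-basing.
#[derive(Debug, Clone, PartialEq)]
struct Event {
    timestamp: u64,
    name: &'static str,
}

// Sort by timestamp, then express each timestamp relative to the earliest one.
fn snapshot(mut events: Vec<Event>) -> Vec<Event> {
    events.sort_by_key(|e| e.timestamp);
    let base = events.first().map(|e| e.timestamp).unwrap_or(0);
    events
        .into_iter()
        .map(|e| Event { timestamp: e.timestamp - base, name: e.name })
        .collect()
}

fn main() {
    // Events recorded out of order, as they might land in the on-disk file.
    let out = snapshot(vec![
        Event { timestamp: 520, name: "click" },
        Event { timestamp: 500, name: "open" },
    ]);
    assert_eq!(out[0].timestamp, 0);  // "open" becomes the base
    assert_eq!(out[1].timestamp, 20); // "click" is 20ms after it
    println!("{:?}", out);
}
```

Sorting once at snapshot time, rather than on every insert, keeps the append-only disk write cheap, which is the trade-off the comment in `snapshot_as_json` describes.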
#[cfg(test)]
mod test {
use super::*;
#[test]
fn handle_truncated_events_on_disk() {
let t = tempfile::tempdir().unwrap();
{
let db = EventDatabase::new(&t.path().display().to_string()).unwrap();
db.write_event_to_disk("events", "{\"timestamp\": 500");
db.write_event_to_disk("events", "{\"timestamp\"");
db.write_event_to_disk(
"events",
"{\"timestamp\": 501, \"category\": \"ui\", \"name\": \"click\"}",
);
}
{
let db = EventDatabase::new(&t.path().display().to_string()).unwrap();
db.load_events_from_disk().unwrap();
let events = &db.event_stores.read().unwrap()["events"];
assert_eq!(1, events.len());
}
}
}


@@ -1,201 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use std::collections::HashMap;
use once_cell::unsync::OnceCell;
use serde::{Deserialize, Serialize};
use super::{Bucketing, Histogram};
/// Create the possible ranges in an exponential distribution from `min` to `max` with
/// `bucket_count` buckets.
///
/// This algorithm calculates the bucket sizes using a natural log approach to get `bucket_count` number of buckets,
/// exponentially spaced between `min` and `max`.
///
/// Bucket limits are the minimal bucket value.
/// That means values in a bucket `i` are `bucket[i] <= value < bucket[i+1]`.
/// It will always contain an underflow bucket (`< 1`).
fn exponential_range(min: u64, max: u64, bucket_count: usize) -> Vec<u64> {
let log_max = (max as f64).ln();
let mut ranges = Vec::with_capacity(bucket_count);
let mut current = min;
if current == 0 {
current = 1;
}
// underflow bucket
ranges.push(0);
ranges.push(current);
for i in 2..bucket_count {
let log_current = (current as f64).ln();
let log_ratio = (log_max - log_current) / (bucket_count - i) as f64;
let log_next = log_current + log_ratio;
let next_value = log_next.exp().round() as u64;
current = if next_value > current {
next_value
} else {
current + 1
};
ranges.push(current);
}
ranges
}
/// An exponential bucketing algorithm.
///
/// Buckets are pre-computed at instantiation with an exponential distribution from `min` to `max`
/// and `bucket_count` buckets.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PrecomputedExponential {
// Don't serialize the (potentially large) array of ranges, instead compute them on first
// access.
#[serde(skip)]
bucket_ranges: OnceCell<Vec<u64>>,
min: u64,
max: u64,
bucket_count: usize,
}
impl Bucketing for PrecomputedExponential {
/// Get the bucket for the sample.
///
/// This uses a binary search to locate the index `i` of the bucket such that:
/// bucket[i] <= sample < bucket[i+1]
fn sample_to_bucket_minimum(&self, sample: u64) -> u64 {
let limit = match self.ranges().binary_search(&sample) {
// The sample is exactly a bucket minimum
Ok(i) => i,
// The sample sorts after bucket i-1's minimum, so it falls into that bucket
Err(i) => i - 1,
};
self.ranges()[limit]
}
fn ranges(&self) -> &[u64] {
// Create the exponential range on first access.
self.bucket_ranges
.get_or_init(|| exponential_range(self.min, self.max, self.bucket_count))
}
}
impl Histogram<PrecomputedExponential> {
/// Create a histogram with `count` exponential buckets in the range `min` to `max`.
pub fn exponential(
min: u64,
max: u64,
bucket_count: usize,
) -> Histogram<PrecomputedExponential> {
Histogram {
values: HashMap::new(),
count: 0,
sum: 0,
bucketing: PrecomputedExponential {
bucket_ranges: OnceCell::new(),
min,
max,
bucket_count,
},
}
}
}
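The two pieces above, computing exponential bucket minima and then mapping a sample to its bucket via binary search (`bucket[i] <= sample < bucket[i+1]`), can be exercised as a self-contained sketch. The bucket computation below follows the algorithm shown earlier; the expected values come from the crate's own `short_exponential_buckets_are_correct` test:

```rust
// Compute `bucket_count` exponentially spaced bucket minima from `min` to `max`,
// always including an underflow bucket at 0 (mirrors exponential_range above).
fn exponential_range(min: u64, max: u64, bucket_count: usize) -> Vec<u64> {
    let log_max = (max as f64).ln();
    let mut ranges = Vec::with_capacity(bucket_count);
    let mut current = min.max(1); // a 0 minimum is bumped to 1
    ranges.push(0); // underflow bucket
    ranges.push(current);
    for i in 2..bucket_count {
        let log_current = (current as f64).ln();
        let log_ratio = (log_max - log_current) / (bucket_count - i) as f64;
        let next_value = (log_current + log_ratio).exp().round() as u64;
        // Guarantee strictly increasing buckets even when rounding stalls.
        current = if next_value > current { next_value } else { current + 1 };
        ranges.push(current);
    }
    ranges
}

// Binary-search the sorted minima: an exact hit is its own bucket,
// otherwise the sample belongs to the bucket just before the insertion point.
fn sample_to_bucket_minimum(ranges: &[u64], sample: u64) -> u64 {
    match ranges.binary_search(&sample) {
        Ok(i) => ranges[i],
        Err(i) => ranges[i - 1],
    }
}

fn main() {
    let ranges = exponential_range(1, 100, 10);
    assert_eq!(ranges, vec![0, 1, 2, 3, 5, 9, 16, 29, 54, 100]);
    assert_eq!(sample_to_bucket_minimum(&ranges, 10), 9); // 9 <= 10 < 16
    assert_eq!(sample_to_bucket_minimum(&ranges, 54), 54); // exact bucket minimum
    println!("{:?}", ranges);
}
```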
#[cfg(test)]
mod test {
use super::*;
const DEFAULT_BUCKET_COUNT: usize = 100;
const DEFAULT_RANGE_MIN: u64 = 0;
const DEFAULT_RANGE_MAX: u64 = 60_000;
#[test]
fn can_count() {
let mut hist = Histogram::exponential(1, 500, 10);
assert!(hist.is_empty());
for i in 1..=10 {
hist.accumulate(i);
}
assert_eq!(10, hist.count());
assert_eq!(55, hist.sum());
}
#[test]
fn overflow_values_accumulate_in_the_last_bucket() {
let mut hist =
Histogram::exponential(DEFAULT_RANGE_MIN, DEFAULT_RANGE_MAX, DEFAULT_BUCKET_COUNT);
hist.accumulate(DEFAULT_RANGE_MAX + 100);
assert_eq!(1, hist.values[&DEFAULT_RANGE_MAX]);
}
#[test]
fn short_exponential_buckets_are_correct() {
let test_buckets = vec![0, 1, 2, 3, 5, 9, 16, 29, 54, 100];
assert_eq!(test_buckets, exponential_range(1, 100, 10));
// There's always a zero bucket, so we increase the lower limit.
assert_eq!(test_buckets, exponential_range(0, 100, 10));
}
#[test]
fn default_exponential_buckets_are_correct() {
// Hand calculated values using current default range 0 - 60000 and bucket count of 100.
// NOTE: The final bucket, regardless of width, represents the overflow bucket to hold any
// values beyond the maximum (in this case the maximum is 60000)
let test_buckets = vec![
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 19, 21, 23, 25, 28, 31, 34,
38, 42, 46, 51, 56, 62, 68, 75, 83, 92, 101, 111, 122, 135, 149, 164, 181, 200, 221,
244, 269, 297, 328, 362, 399, 440, 485, 535, 590, 651, 718, 792, 874, 964, 1064, 1174,
1295, 1429, 1577, 1740, 1920, 2118, 2337, 2579, 2846, 3140, 3464, 3822, 4217, 4653,
5134, 5665, 6250, 6896, 7609, 8395, 9262, 10219, 11275, 12440, 13726, 15144, 16709,
18436, 20341, 22443, 24762, 27321, 30144, 33259, 36696, 40488, 44672, 49288, 54381,
60000,
];
assert_eq!(
test_buckets,
exponential_range(DEFAULT_RANGE_MIN, DEFAULT_RANGE_MAX, DEFAULT_BUCKET_COUNT)
);
}
#[test]
fn default_buckets_correctly_accumulate() {
let mut hist =
Histogram::exponential(DEFAULT_RANGE_MIN, DEFAULT_RANGE_MAX, DEFAULT_BUCKET_COUNT);
for i in &[1, 10, 100, 1000, 10000] {
hist.accumulate(*i);
}
assert_eq!(11111, hist.sum());
assert_eq!(5, hist.count());
assert_eq!(None, hist.values.get(&0)); // underflow is empty
assert_eq!(1, hist.values[&1]); // bucket_ranges[1] = 1
assert_eq!(1, hist.values[&10]); // bucket_ranges[10] = 10
assert_eq!(1, hist.values[&92]); // bucket_ranges[33] = 92
assert_eq!(1, hist.values[&964]); // bucket_ranges[57] = 964
assert_eq!(1, hist.values[&9262]); // bucket_ranges[80] = 9262
}
#[test]
fn accumulate_large_numbers() {
let mut hist = Histogram::exponential(1, 500, 10);
hist.accumulate(u64::max_value());
hist.accumulate(u64::max_value());
assert_eq!(2, hist.count());
// Saturate before overflowing
assert_eq!(u64::max_value(), hist.sum());
assert_eq!(2, hist.values[&500]);
}
}


@ -1,163 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use std::collections::HashMap;
use serde::{Deserialize, Serialize};
use super::{Bucketing, Histogram};
/// A functional bucketing algorithm.
///
/// Bucketing is performed by a function, rather than pre-computed buckets.
/// The bucket index of a given sample is determined with the following function:
///
/// i = ⌊n log<sub>base</sub>(𝑥)⌋
///
/// In other words, there are n buckets for each power of `base` magnitude.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Functional {
exponent: f64,
}
impl Functional {
/// Instantiate a new functional bucketing.
fn new(log_base: f64, buckets_per_magnitude: f64) -> Functional {
let exponent = log_base.powf(1.0 / buckets_per_magnitude);
Functional { exponent }
}
/// Maps a sample to a "bucket index" that it belongs in.
/// A "bucket index" is the consecutive integer index of each bucket, useful as a
/// mathematical concept, even though the internal representation is stored and
/// sent using the minimum value in each bucket.
fn sample_to_bucket_index(&self, sample: u64) -> u64 {
((sample + 1) as f64).log(self.exponent) as u64
}
/// Determines the minimum value of a bucket, given a bucket index.
fn bucket_index_to_bucket_minimum(&self, index: u64) -> u64 {
self.exponent.powf(index as f64) as u64
}
}
impl Bucketing for Functional {
fn sample_to_bucket_minimum(&self, sample: u64) -> u64 {
if sample == 0 {
return 0;
}
let index = self.sample_to_bucket_index(sample);
self.bucket_index_to_bucket_minimum(index)
}
fn ranges(&self) -> &[u64] {
unimplemented!("Bucket ranges for functional bucketing are not precomputed")
}
}
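To make the two mappings above concrete, here is a standalone sketch using illustrative parameters (`log_base = 2.0`, 8 buckets per magnitude, the same values the tests below use); the closures mirror `sample_to_bucket_index` and `bucket_index_to_bucket_minimum`, and the loop checks the round-trip property that a bucket minimum maps back to itself.

```rust
// Standalone model of functional bucketing: i = floor(log_exponent(x + 1)),
// where exponent = log_base^(1 / buckets_per_magnitude).
fn main() {
    let (log_base, buckets_per_magnitude): (f64, f64) = (2.0, 8.0);
    let exponent = log_base.powf(1.0 / buckets_per_magnitude);

    // Mirror of `sample_to_bucket_index`: n buckets per power of `base`.
    let index = |sample: u64| ((sample + 1) as f64).log(exponent) as u64;
    // Mirror of `bucket_index_to_bucket_minimum`.
    let minimum = |i: u64| exponent.powf(i as f64) as u64;

    for sample in [10u64, 100, 1_000] {
        let m = minimum(index(sample));
        // The bucket minimum never exceeds the sample...
        assert!(m <= sample);
        // ...and round-trips to itself, so it is a stable bucket label.
        assert_eq!(m, minimum(index(m)));
    }
}
```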
impl Histogram<Functional> {
/// Create a histogram with functional buckets.
pub fn functional(log_base: f64, buckets_per_magnitude: f64) -> Histogram<Functional> {
Histogram {
values: HashMap::new(),
count: 0,
sum: 0,
bucketing: Functional::new(log_base, buckets_per_magnitude),
}
}
/// Get a snapshot of all contiguous values.
///
/// **Caution** This is a more specific implementation of `snapshot_values` on functional
/// histograms. `snapshot_values` cannot be used with those, due to buckets not being
/// precomputed.
pub fn snapshot(&self) -> HashMap<u64, u64> {
if self.values.is_empty() {
return HashMap::new();
}
let mut min_key = None;
let mut max_key = None;
// `Iterator#min` and `Iterator#max` would do the same job independently,
// but we want to avoid iterating the keys twice, so we loop ourselves.
for key in self.values.keys() {
let key = *key;
// safe unwrap, we checked it's not none
if min_key.is_none() || key < min_key.unwrap() {
min_key = Some(key);
}
// safe unwrap, we checked it's not none
if max_key.is_none() || key > max_key.unwrap() {
max_key = Some(key);
}
}
// Non-empty values, therefore minimum/maximum exists.
// safe unwraps, we set it at least once.
let min_bucket = self.bucketing.sample_to_bucket_index(min_key.unwrap());
let max_bucket = self.bucketing.sample_to_bucket_index(max_key.unwrap()) + 1;
let mut values = self.values.clone();
for idx in min_bucket..=max_bucket {
// Fill in missing entries.
let min_bucket = self.bucketing.bucket_index_to_bucket_minimum(idx);
let _ = values.entry(min_bucket).or_insert(0);
}
values
}
}
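The manual loop in `snapshot` finds the minimum and maximum key in a single pass, as the comment explains. The same one-pass computation can also be expressed as a fold; a small standalone sketch:

```rust
// One-pass min/max over a set of keys, equivalent to the manual loop in
// `snapshot` above (illustrative data, not the histogram's real keys).
fn main() {
    let keys = [7u64, 2, 9, 4];
    let (min_key, max_key) = keys.iter().fold((None, None), |(lo, hi), &k| {
        (
            // `map_or` seeds the accumulator on the first key.
            Some(lo.map_or(k, |l: u64| l.min(k))),
            Some(hi.map_or(k, |h: u64| h.max(k))),
        )
    });
    assert_eq!((Some(2), Some(9)), (min_key, max_key));
}
```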
#[cfg(test)]
mod test {
use super::*;
#[test]
fn can_count() {
let mut hist = Histogram::functional(2.0, 8.0);
assert!(hist.is_empty());
for i in 1..=10 {
hist.accumulate(i);
}
assert_eq!(10, hist.count());
assert_eq!(55, hist.sum());
}
#[test]
fn sample_to_bucket_minimum_correctly_rounds_down() {
let hist = Histogram::functional(2.0, 8.0);
// Check each of the first 100 integers, where numerical accuracy of the round-tripping
// is most potentially problematic
for value in 0..100 {
let bucket_minimum = hist.bucketing.sample_to_bucket_minimum(value);
assert!(bucket_minimum <= value);
assert_eq!(
bucket_minimum,
hist.bucketing.sample_to_bucket_minimum(bucket_minimum)
);
}
// Do an exponential sampling of higher numbers
for i in 11..500 {
let value = 1.5f64.powi(i);
let value = value as u64;
let bucket_minimum = hist.bucketing.sample_to_bucket_minimum(value);
assert!(bucket_minimum <= value);
assert_eq!(
bucket_minimum,
hist.bucketing.sample_to_bucket_minimum(bucket_minimum)
);
}
}
}


@ -1,178 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use std::cmp;
use std::collections::HashMap;
use once_cell::unsync::OnceCell;
use serde::{Deserialize, Serialize};
use super::{Bucketing, Histogram};
/// Create the possible ranges in a linear distribution from `min` to `max` with
/// `bucket_count` buckets.
///
/// This algorithm calculates `bucket_count` buckets of equal size between `min` and `max`.
///
/// Bucket limits are the minimal bucket value.
/// That means values in a bucket `i` are `bucket[i] <= value < bucket[i+1]`.
/// It will always contain an underflow bucket (`< 1`).
fn linear_range(min: u64, max: u64, count: usize) -> Vec<u64> {
let mut ranges = Vec::with_capacity(count);
ranges.push(0);
let min = cmp::max(1, min);
let count = count as u64;
for i in 1..count {
let range = (min * (count - 1 - i) + max * (i - 1)) / (count - 2);
ranges.push(range);
}
ranges
}
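Working the interpolation formula above by hand for `min = 1`, `max = 10`, `count = 10`: at `i = 2` it gives `(1 * 7 + 10 * 1) / 8 = 2`, and at `i = 9` it gives `(1 * 0 + 10 * 8) / 8 = 10`, matching the `short_linear_buckets_are_correct` test below. As a quick check:

```rust
// Spot-check of the linear interpolation used by `linear_range`.
fn main() {
    let (min, max, count) = (1u64, 10u64, 10u64);
    let bucket = |i: u64| (min * (count - 1 - i) + max * (i - 1)) / (count - 2);
    assert_eq!(1, bucket(1)); // first non-underflow bucket starts at min
    assert_eq!(2, bucket(2));
    assert_eq!(10, bucket(9)); // last bucket is the overflow bucket at max
}
```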
/// A linear bucketing algorithm.
///
/// Buckets are pre-computed at instantiation with a linear distribution from `min` to `max`
/// and `bucket_count` buckets.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PrecomputedLinear {
// Don't serialize the (potentially large) array of ranges, instead compute them on first
// access.
#[serde(skip)]
bucket_ranges: OnceCell<Vec<u64>>,
min: u64,
max: u64,
bucket_count: usize,
}
impl Bucketing for PrecomputedLinear {
/// Get the bucket for the sample.
///
/// This uses a binary search to locate the index `i` of the bucket such that:
/// bucket[i] <= sample < bucket[i+1]
fn sample_to_bucket_minimum(&self, sample: u64) -> u64 {
let limit = match self.ranges().binary_search(&sample) {
// Found an exact match to fit it in
Ok(i) => i,
// Sorted it fits after the bucket's limit, therefore it fits into the previous bucket
Err(i) => i - 1,
};
self.ranges()[limit]
}
fn ranges(&self) -> &[u64] {
// Create the linear range on first access.
self.bucket_ranges
.get_or_init(|| linear_range(self.min, self.max, self.bucket_count))
}
}
impl Histogram<PrecomputedLinear> {
/// Create a histogram with `bucket_count` linear buckets in the range `min` to `max`.
pub fn linear(min: u64, max: u64, bucket_count: usize) -> Histogram<PrecomputedLinear> {
Histogram {
values: HashMap::new(),
count: 0,
sum: 0,
bucketing: PrecomputedLinear {
bucket_ranges: OnceCell::new(),
min,
max,
bucket_count,
},
}
}
}
#[cfg(test)]
mod test {
use super::*;
const DEFAULT_BUCKET_COUNT: usize = 100;
const DEFAULT_RANGE_MIN: u64 = 0;
const DEFAULT_RANGE_MAX: u64 = 100;
#[test]
fn can_count() {
let mut hist = Histogram::linear(1, 500, 10);
assert!(hist.is_empty());
for i in 1..=10 {
hist.accumulate(i);
}
assert_eq!(10, hist.count());
assert_eq!(55, hist.sum());
}
#[test]
fn overflow_values_accumulate_in_the_last_bucket() {
let mut hist =
Histogram::linear(DEFAULT_RANGE_MIN, DEFAULT_RANGE_MAX, DEFAULT_BUCKET_COUNT);
hist.accumulate(DEFAULT_RANGE_MAX + 100);
assert_eq!(1, hist.values[&DEFAULT_RANGE_MAX]);
}
#[test]
fn short_linear_buckets_are_correct() {
let test_buckets = vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 10];
assert_eq!(test_buckets, linear_range(1, 10, 10));
// There's always a zero bucket, so we increase the lower limit.
assert_eq!(test_buckets, linear_range(0, 10, 10));
}
#[test]
fn long_linear_buckets_are_correct() {
// Hand calculated values using current default range 0 - 60000 and bucket count of 100.
// NOTE: The final bucket, regardless of width, represents the overflow bucket to hold any
// values beyond the maximum (in this case the maximum is 60000)
let test_buckets = vec![
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45,
46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67,
68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
90, 91, 92, 93, 94, 95, 96, 97, 98, 100,
];
assert_eq!(
test_buckets,
linear_range(DEFAULT_RANGE_MIN, DEFAULT_RANGE_MAX, DEFAULT_BUCKET_COUNT)
);
}
#[test]
fn default_buckets_correctly_accumulate() {
let mut hist =
Histogram::linear(DEFAULT_RANGE_MIN, DEFAULT_RANGE_MAX, DEFAULT_BUCKET_COUNT);
for i in &[1, 10, 100, 1000, 10000] {
hist.accumulate(*i);
}
assert_eq!(11111, hist.sum());
assert_eq!(5, hist.count());
assert_eq!(None, hist.values.get(&0));
assert_eq!(1, hist.values[&1]);
assert_eq!(1, hist.values[&10]);
assert_eq!(3, hist.values[&100]);
}
#[test]
fn accumulate_large_numbers() {
let mut hist = Histogram::linear(1, 500, 10);
hist.accumulate(u64::max_value());
hist.accumulate(u64::max_value());
assert_eq!(2, hist.count());
// Saturate before overflowing
assert_eq!(u64::max_value(), hist.sum());
assert_eq!(2, hist.values[&500]);
}
}


@ -1,163 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
//! A simple histogram implementation with support for several bucketing algorithms.
use std::collections::HashMap;
use std::convert::TryFrom;
use serde::{Deserialize, Serialize};
use crate::error::{Error, ErrorKind};
pub use exponential::PrecomputedExponential;
pub use functional::Functional;
pub use linear::PrecomputedLinear;
mod exponential;
mod functional;
mod linear;
/// Different kinds of histograms.
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum HistogramType {
/// A histogram with linear distributed buckets.
Linear,
/// A histogram with exponential distributed buckets.
Exponential,
}
impl TryFrom<i32> for HistogramType {
type Error = Error;
fn try_from(value: i32) -> Result<HistogramType, Self::Error> {
match value {
0 => Ok(HistogramType::Linear),
1 => Ok(HistogramType::Exponential),
e => Err(ErrorKind::HistogramType(e).into()),
}
}
}
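A minimal, self-contained model of the conversion above: the FFI layer passes the histogram type as an `i32`, and anything outside the known discriminants is rejected. The error type is simplified here to a bare `i32`, since `Error`/`ErrorKind` are defined elsewhere in the crate.

```rust
use std::convert::TryFrom;

#[derive(Debug, PartialEq)]
enum HistogramType {
    Linear,
    Exponential,
}

impl TryFrom<i32> for HistogramType {
    // Simplified stand-in for the crate's real `Error` type.
    type Error = i32;

    fn try_from(value: i32) -> Result<HistogramType, i32> {
        match value {
            0 => Ok(HistogramType::Linear),
            1 => Ok(HistogramType::Exponential),
            e => Err(e),
        }
    }
}

fn main() {
    assert_eq!(Ok(HistogramType::Linear), HistogramType::try_from(0));
    assert_eq!(Ok(HistogramType::Exponential), HistogramType::try_from(1));
    // Unknown discriminants are surfaced as errors instead of panicking.
    assert_eq!(Err(7), HistogramType::try_from(7));
}
```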
/// A histogram.
///
/// Stores the counts per bucket and tracks the count of added samples and the total sum.
/// The bucketing algorithm can be changed.
///
/// ## Example
///
/// ```rust,ignore
/// let mut hist = Histogram::exponential(1, 500, 10);
///
/// for i in 1..=10 {
/// hist.accumulate(i);
/// }
///
/// assert_eq!(10, hist.count());
/// assert_eq!(55, hist.sum());
/// ```
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Histogram<B> {
/// Mapping bucket's minimum to sample count.
values: HashMap<u64, u64>,
/// The count of samples added.
count: u64,
/// The total sum of samples.
sum: u64,
/// The bucketing algorithm used.
bucketing: B,
}
/// A bucketing algorithm for histograms.
///
/// It's responsible for calculating the bucket a sample goes into.
/// It can calculate buckets on-the-fly or pre-calculate buckets and re-use that when needed.
pub trait Bucketing {
/// Get the bucket's minimum value the sample falls into.
fn sample_to_bucket_minimum(&self, sample: u64) -> u64;
/// The computed bucket ranges for this bucketing algorithm.
fn ranges(&self) -> &[u64];
}
/// Implement the bucketing algorithm on every object that has that algorithm using dynamic
/// dispatch.
impl Bucketing for Box<dyn Bucketing> {
fn sample_to_bucket_minimum(&self, sample: u64) -> u64 {
(**self).sample_to_bucket_minimum(sample)
}
fn ranges(&self) -> &[u64] {
(**self).ranges()
}
}
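The blanket impl above lets a `Histogram<Box<dyn Bucketing>>` satisfy the same `B: Bucketing` bound as the concrete types, which is what `Histogram::boxed` below relies on. A minimal model of the pattern, with a toy trait and implementation standing in for the real ones:

```rust
// Minimal model of forwarding a trait through `Box<dyn Trait>` so a
// generic container can erase its type parameter (toy trait, not glean's).
trait Bucketing {
    fn sample_to_bucket_minimum(&self, sample: u64) -> u64;
}

struct Identity;
impl Bucketing for Identity {
    fn sample_to_bucket_minimum(&self, sample: u64) -> u64 {
        sample
    }
}

// The forwarding impl: delegate to the boxed value via double deref.
impl Bucketing for Box<dyn Bucketing> {
    fn sample_to_bucket_minimum(&self, sample: u64) -> u64 {
        (**self).sample_to_bucket_minimum(sample)
    }
}

fn main() {
    let boxed: Box<dyn Bucketing> = Box::new(Identity);
    // The boxed value satisfies the same trait bound as the concrete type.
    assert_eq!(42, boxed.sample_to_bucket_minimum(42));
}
```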
impl<B: Bucketing> Histogram<B> {
/// Get the number of buckets in this histogram.
pub fn bucket_count(&self) -> usize {
self.values.len()
}
/// Add a single value to this histogram.
pub fn accumulate(&mut self, sample: u64) {
let bucket_min = self.bucketing.sample_to_bucket_minimum(sample);
let entry = self.values.entry(bucket_min).or_insert(0);
*entry += 1;
self.sum = self.sum.saturating_add(sample);
self.count += 1;
}
/// Get the total sum of values recorded in this histogram.
pub fn sum(&self) -> u64 {
self.sum
}
/// Get the total count of values recorded in this histogram.
pub fn count(&self) -> u64 {
self.count
}
/// Get the filled values.
pub fn values(&self) -> &HashMap<u64, u64> {
&self.values
}
/// Check if this histogram recorded any values.
pub fn is_empty(&self) -> bool {
self.count() == 0
}
/// Get a snapshot of all values from the first bucket until one past the last filled bucket,
/// filling in empty buckets with 0.
pub fn snapshot_values(&self) -> HashMap<u64, u64> {
let mut res = self.values.clone();
let max_bucket = self.values.keys().max().cloned().unwrap_or(0);
for &min_bucket in self.bucketing.ranges() {
// Fill in missing entries.
let _ = res.entry(min_bucket).or_insert(0);
// stop one after the last filled bucket
if min_bucket > max_bucket {
break;
}
}
res
}
}
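Note that `accumulate` grows the sum with `saturating_add`, so repeated large samples pin the sum at `u64::MAX` rather than wrapping or panicking; this is exactly what the `accumulate_large_numbers` tests exercise. In isolation:

```rust
// Saturating arithmetic as used by `accumulate`'s running sum.
fn main() {
    let mut sum: u64 = u64::max_value();
    // A further addition saturates instead of overflowing.
    sum = sum.saturating_add(100);
    assert_eq!(u64::max_value(), sum);
}
```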
impl<B: Bucketing + 'static> Histogram<B> {
/// Box the contained bucketing algorithm to allow for dynamic dispatch.
pub fn boxed(self) -> Histogram<Box<dyn Bucketing>> {
Histogram {
values: self.values,
count: self.count,
sum: self.sum,
bucketing: Box::new(self.bucketing),
}
}
}


@ -1,38 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use super::{metrics::*, CommonMetricData, Lifetime};
#[derive(Debug)]
pub struct CoreMetrics {
pub client_id: UuidMetric,
pub first_run_date: DatetimeMetric,
}
impl CoreMetrics {
pub fn new() -> CoreMetrics {
CoreMetrics {
client_id: UuidMetric::new(CommonMetricData {
name: "client_id".into(),
category: "".into(),
send_in_pings: vec!["glean_client_info".into()],
lifetime: Lifetime::User,
disabled: false,
dynamic_label: None,
}),
first_run_date: DatetimeMetric::new(
CommonMetricData {
name: "first_run_date".into(),
category: "".into(),
send_in_pings: vec!["glean_client_info".into()],
lifetime: Lifetime::User,
disabled: false,
dynamic_label: None,
},
TimeUnit::Day,
),
}
}
}


@ -1,30 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
use super::metrics::PingType;
/// Glean-provided pings, all enabled by default.
///
/// These pings are defined in `glean-core/pings.yaml` and for now manually translated into Rust code.
/// This might get auto-generated when the Rust API lands ([Bug 1579146](https://bugzilla.mozilla.org/show_bug.cgi?id=1579146)).
///
/// They are parsed and registered by the platform-specific wrappers, but might also be used directly within Glean.
#[derive(Debug)]
pub struct InternalPings {
pub baseline: PingType,
pub metrics: PingType,
pub events: PingType,
pub deletion_request: PingType,
}
impl InternalPings {
pub fn new() -> InternalPings {
InternalPings {
baseline: PingType::new("baseline", true, false),
metrics: PingType::new("metrics", true, false),
events: PingType::new("events", true, false),
deletion_request: PingType::new("deletion_request", true, true),
}
}
}

596
third_party/rust/glean-core/src/lib.rs vendored

@ -1,596 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
#![deny(missing_docs)]
//! Glean is a modern approach for recording and sending Telemetry data.
//!
//! It's in use at Mozilla.
//!
//! All documentation can be found online:
//!
//! ## [The Glean SDK Book](https://mozilla.github.io/glean)
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use chrono::{DateTime, FixedOffset};
use lazy_static::lazy_static;
use uuid::Uuid;
mod macros;
pub mod ac_migration;
mod common_metric_data;
mod database;
mod error;
mod error_recording;
mod event_database;
mod histogram;
mod internal_metrics;
mod internal_pings;
pub mod metrics;
pub mod ping;
pub mod storage;
mod util;
use crate::ac_migration::migrate_sequence_numbers;
pub use crate::common_metric_data::{CommonMetricData, Lifetime};
use crate::database::Database;
pub use crate::error::{Error, Result};
pub use crate::error_recording::{test_get_num_recorded_errors, ErrorType};
use crate::event_database::EventDatabase;
use crate::internal_metrics::CoreMetrics;
use crate::internal_pings::InternalPings;
use crate::metrics::PingType;
use crate::ping::PingMaker;
use crate::storage::StorageManager;
use crate::util::{local_now_with_offset, sanitize_application_id};
const GLEAN_SCHEMA_VERSION: u32 = 1;
const DEFAULT_MAX_EVENTS: usize = 500;
lazy_static! {
static ref KNOWN_CLIENT_ID: Uuid =
Uuid::parse_str("c0ffeec0-ffee-c0ff-eec0-ffeec0ffeec0").unwrap();
}
/// The Glean configuration.
///
/// Optional values will be filled in with default values.
#[derive(Debug, Clone)]
pub struct Configuration {
/// Whether upload should be enabled.
pub upload_enabled: bool,
/// Path to a directory to store all data in.
pub data_path: String,
/// The application ID (will be sanitized during initialization).
pub application_id: String,
/// The maximum number of events to store before sending a ping containing events.
pub max_events: Option<usize>,
/// Whether Glean should delay persistence of data from metrics with ping lifetime.
pub delay_ping_lifetime_io: bool,
}
/// The object holding meta information about a Glean instance.
///
/// ## Example
///
/// Create a new Glean instance, register a ping, record a simple counter and then send the final
/// ping.
///
/// ```rust,no_run
/// # use glean_core::{Glean, Configuration, CommonMetricData, metrics::*};
/// let cfg = Configuration {
/// data_path: "/tmp/glean".into(),
/// application_id: "glean.sample.app".into(),
/// upload_enabled: true,
/// max_events: None,
/// delay_ping_lifetime_io: false,
/// };
/// let mut glean = Glean::new(cfg).unwrap();
/// let ping = PingType::new("sample", true, false);
/// glean.register_ping_type(&ping);
///
/// let call_counter: CounterMetric = CounterMetric::new(CommonMetricData {
/// name: "calls".into(),
/// category: "local".into(),
/// send_in_pings: vec!["sample".into()],
/// ..Default::default()
/// });
///
/// call_counter.add(&glean, 1);
///
/// glean.send_ping(&ping).unwrap();
/// ```
///
/// ## Note
///
/// In specific language bindings, this is usually wrapped in a singleton and all metric recording goes to a single instance of this object.
/// In the Rust core, it is possible to create multiple instances, which is used in testing.
#[derive(Debug)]
pub struct Glean {
upload_enabled: bool,
data_store: Database,
event_data_store: EventDatabase,
core_metrics: CoreMetrics,
internal_pings: InternalPings,
data_path: PathBuf,
application_id: String,
ping_registry: HashMap<String, PingType>,
start_time: DateTime<FixedOffset>,
max_events: usize,
}
impl Glean {
/// Create and initialize a new Glean object.
///
/// This will create the necessary directories and files in `data_path`.
/// This will also initialize the core metrics.
pub fn new(cfg: Configuration) -> Result<Self> {
log::info!("Creating new Glean");
let application_id = sanitize_application_id(&cfg.application_id);
// Creating the data store creates the necessary path as well.
// If that fails we bail out and don't initialize further.
let data_store = Database::new(&cfg.data_path, cfg.delay_ping_lifetime_io)?;
let event_data_store = EventDatabase::new(&cfg.data_path)?;
let mut glean = Self {
upload_enabled: cfg.upload_enabled,
data_store,
event_data_store,
core_metrics: CoreMetrics::new(),
internal_pings: InternalPings::new(),
data_path: PathBuf::from(cfg.data_path),
application_id,
ping_registry: HashMap::new(),
start_time: local_now_with_offset(),
max_events: cfg.max_events.unwrap_or(DEFAULT_MAX_EVENTS),
};
glean.on_change_upload_enabled(cfg.upload_enabled);
Ok(glean)
}
/// Create and initialize a new Glean object.
///
/// This will attempt to delete any previously existing database and
/// then create the necessary directories and files in `data_path`.
/// This will also initialize the core metrics.
///
/// # Arguments
///
/// * `cfg` - an instance of the Glean `Configuration`.
/// * `new_sequence_nums` - a map of ("<pingName>_seq", sequence number)
/// used to initialize Glean with sequence numbers imported from glean-ac.
pub fn with_sequence_numbers(
cfg: Configuration,
new_sequence_nums: HashMap<String, i32>,
) -> Result<Self> {
log::info!("Creating new Glean (migrating data)");
// Delete the database directory, if it exists. Bail out if there's
// errors, as I'm not sure what else could be done if we can't even
// delete a directory we own.
let db_path = Path::new(&cfg.data_path).join("db");
if db_path.exists() {
std::fs::remove_dir_all(db_path)?;
}
let glean = Self::new(cfg)?;
// Set sequence numbers coming through the FFI.
migrate_sequence_numbers(&glean, new_sequence_nums);
Ok(glean)
}
/// For tests make it easy to create a Glean object using only the required configuration.
#[cfg(test)]
pub(crate) fn with_options(
data_path: &str,
application_id: &str,
upload_enabled: bool,
) -> Result<Self> {
let cfg = Configuration {
data_path: data_path.into(),
application_id: application_id.into(),
upload_enabled,
max_events: None,
delay_ping_lifetime_io: false,
};
Self::new(cfg)
}
/// Initialize the core metrics managed by Glean's Rust core.
fn initialize_core_metrics(&mut self) {
let need_new_client_id = match self
.core_metrics
.client_id
.get_value(self, "glean_client_info")
{
None => true,
Some(uuid) => uuid == *KNOWN_CLIENT_ID,
};
if need_new_client_id {
self.core_metrics.client_id.generate_and_set(self);
}
if self
.core_metrics
.first_run_date
.get_value(self, "glean_client_info")
.is_none()
{
self.core_metrics.first_run_date.set(self, None);
}
}
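The decision above, in isolation: a fresh client_id is generated when none is stored, or when the stored one equals the "dummy" `KNOWN_CLIENT_ID` written while uploading was disabled. A sketch of just that predicate, with the stored value modeled as an `Option<&str>`:

```rust
// Model of the `need_new_client_id` decision in `initialize_core_metrics`.
fn main() {
    let known = "c0ffeec0-ffee-c0ff-eec0-ffeec0ffeec0";
    let need_new = |stored: Option<&str>| match stored {
        None => true,               // nothing stored yet
        Some(uuid) => uuid == known, // only the dummy id is replaced
    };
    assert!(need_new(None));
    assert!(need_new(Some(known)));
    assert!(!need_new(Some("some-real-client-id")));
}
```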
/// Called when Glean is initialized to the point where it can correctly
/// assemble pings. Usually called from the language specific layer after all
/// of the core metrics have been set and the ping types have been
/// registered.
///
/// # Return value
///
/// `true` if at least one ping was generated, `false` otherwise.
pub fn on_ready_to_send_pings(&self) -> bool {
self.event_data_store.flush_pending_events_on_startup(&self)
}
/// Set whether upload is enabled or not.
///
/// When uploading is disabled, metrics aren't recorded at all and no
/// data is uploaded.
///
/// When disabling, all pending metrics, events and queued pings are cleared.
///
/// When enabling, the core Glean metrics are recreated.
///
/// If the value of this flag is not actually changed, this is a no-op.
///
/// # Arguments
///
/// * `flag` - When true, enable metric collection.
///
/// # Returns
///
/// * Returns true when the flag was different from the current value, and
/// actual work was done to clear or reinstate metrics.
pub fn set_upload_enabled(&mut self, flag: bool) -> bool {
log::info!("Upload enabled: {:?}", flag);
// When upload is disabled, send a deletion-request ping
if !flag {
if let Err(err) = self.internal_pings.deletion_request.send(self) {
log::error!("Failed to send deletion-request ping on opt-out: {}", err);
}
}
if self.upload_enabled != flag {
self.upload_enabled = flag;
self.on_change_upload_enabled(flag);
true
} else {
false
}
}
/// Determine whether upload is enabled.
///
/// When upload is disabled, no data will be recorded.
pub fn is_upload_enabled(&self) -> bool {
self.upload_enabled
}
/// Handles the changing of state when upload_enabled changes.
///
/// Should only be called when the state actually changes.
/// When disabling, all pending metrics, events and queued pings are cleared.
///
/// When enabling, the core Glean metrics are recreated.
///
/// # Arguments
///
/// * `flag` - When true, enable metric collection.
fn on_change_upload_enabled(&mut self, flag: bool) {
if flag {
self.initialize_core_metrics();
} else {
self.clear_metrics();
}
}
/// Clear any pending metrics when telemetry is disabled.
fn clear_metrics(&mut self) {
// There is only one metric that we want to survive after clearing all
// metrics: first_run_date. Here, we store its value so we can restore
// it after clearing the metrics.
let existing_first_run_date = self
.core_metrics
.first_run_date
.get_value(self, "glean_client_info");
// Clear any pending pings.
let ping_maker = PingMaker::new();
if let Err(err) = ping_maker.clear_pending_pings(self.get_data_path()) {
log::error!("Error clearing pending pings: {}", err);
}
// Delete all stored metrics.
// Note that this also includes the ping sequence numbers, so it has
// the effect of resetting those to their initial values.
self.data_store.clear_all();
if let Err(err) = self.event_data_store.clear_all() {
log::error!("Error clearing pending events: {}", err);
}
// This does not clear the experiments store (which isn't managed by the
// StorageEngineManager), since doing so would mean we would have to have the
// application tell us again which experiments are active if telemetry is
// re-enabled.
{
// We need to briefly set upload_enabled to true here so that `set`
// is not a no-op. This is safe, since nothing on the Rust side can
// run concurrently to this since we hold a mutable reference to the
// Glean object. Additionally, the pending pings have been cleared
// from disk, so the PingUploader can't wake up and start sending
// pings.
self.upload_enabled = true;
// Store a "dummy" KNOWN_CLIENT_ID in the client_id metric. This will
// make it easier to detect if pings were unintentionally sent after
// uploading is disabled.
self.core_metrics.client_id.set(self, *KNOWN_CLIENT_ID);
// Restore the first_run_date.
if let Some(existing_first_run_date) = existing_first_run_date {
self.core_metrics
.first_run_date
.set(self, Some(existing_first_run_date));
}
self.upload_enabled = false;
}
}
/// Get the application ID as specified on instantiation.
pub fn get_application_id(&self) -> &str {
&self.application_id
}
/// Get the data path of this instance.
pub fn get_data_path(&self) -> &Path {
&self.data_path
}
/// Get a handle to the database.
pub fn storage(&self) -> &Database {
&self.data_store
}
/// Get a handle to the event database.
pub fn event_storage(&self) -> &EventDatabase {
&self.event_data_store
}
/// Get the maximum number of events to store before sending a ping.
pub fn get_max_events(&self) -> usize {
self.max_events
}
/// Take a snapshot for the given store and optionally clear it.
///
/// ## Arguments
///
/// * `store_name` - The store to snapshot.
/// * `clear_store` - Whether to clear the store after snapshotting.
///
/// ## Return value
///
/// Returns the snapshot in a string encoded as JSON.
/// If the snapshot is empty, it returns an empty string.
pub fn snapshot(&mut self, store_name: &str, clear_store: bool) -> String {
StorageManager
.snapshot(&self.storage(), store_name, clear_store)
.unwrap_or_else(|| String::from(""))
}
fn make_path(&self, ping_name: &str, doc_id: &str) -> String {
format!(
"/submit/{}/{}/{}/{}",
self.get_application_id(),
ping_name,
GLEAN_SCHEMA_VERSION,
doc_id
)
}
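The submission path `make_path` builds has the shape `/submit/<application-id>/<ping-name>/<schema-version>/<doc-id>`. A sketch with illustrative values (the application id, ping name, and document id below are made up; `GLEAN_SCHEMA_VERSION` is 1 as defined near the top of this file):

```rust
// Shape of the ping submission path produced by `make_path`.
fn main() {
    let path = format!(
        "/submit/{}/{}/{}/{}",
        "glean-sample-app",                       // sanitized application id
        "baseline",                               // ping name
        1,                                        // GLEAN_SCHEMA_VERSION
        "0f0f0f0f-0f0f-0f0f-0f0f-0f0f0f0f0f0f",  // per-ping document id
    );
    assert_eq!(
        "/submit/glean-sample-app/baseline/1/0f0f0f0f-0f0f-0f0f-0f0f-0f0f0f0f0f0f",
        path
    );
}
```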
/// Send a ping.
///
/// The ping content is assembled as soon as possible, but upload is not
/// guaranteed to happen immediately, as that depends on the upload
/// policies.
///
/// If the ping currently contains no content, it will not be sent.
///
/// Returns true if a ping was assembled and queued, false otherwise.
/// Returns an error if collecting or writing the ping to disk failed.
pub fn send_ping(&self, ping: &PingType) -> Result<bool> {
let ping_maker = PingMaker::new();
let doc_id = Uuid::new_v4().to_string();
let url_path = self.make_path(&ping.name, &doc_id);
match ping_maker.collect(self, &ping) {
None => {
log::info!(
"No content for ping '{}', therefore no ping queued.",
ping.name
);
Ok(false)
}
Some(content) => {
if let Err(e) = ping_maker.store_ping(
&doc_id,
&ping.name,
&self.get_data_path(),
&url_path,
&content,
) {
log::warn!("IO error while writing ping to file: {}", e);
return Err(e.into());
}
log::info!(
"The ping '{}' was submitted and will be sent as soon as possible",
ping.name
);
Ok(true)
}
}
}
/// Send a list of pings by name.
///
/// See `send_ping` for detailed information.
///
/// Returns true if at least one ping was assembled and queued, false otherwise.
pub fn send_pings_by_name(&self, ping_names: &[String]) -> bool {
// TODO: 1553813: glean-ac collects and stores pings in parallel and then joins them all before queueing the worker.
// This here is writing them out sequentially.
let mut result = false;
for ping_name in ping_names {
if let Ok(true) = self.send_ping_by_name(ping_name) {
result = true;
}
}
result
}
/// Send a ping by name.
///
/// The ping content is assembled as soon as possible, but upload is not
/// guaranteed to happen immediately, as that depends on the upload
/// policies.
///
/// If the ping currently contains no content, it will not be sent.
///
/// Returns true if a ping was assembled and queued, false otherwise.
/// Returns an error if collecting or writing the ping to disk failed.
pub fn send_ping_by_name(&self, ping_name: &str) -> Result<bool> {
match self.get_ping_by_name(ping_name) {
None => {
log::error!("Attempted to send unknown ping '{}'", ping_name);
Ok(false)
}
Some(ping) => self.send_ping(ping),
}
}
/// Get a [`PingType`](metrics/struct.PingType.html) by name.
///
/// ## Return value
///
/// Returns the `PingType` if a ping of the given name was registered before.
/// Returns `None` otherwise.
pub fn get_ping_by_name(&self, ping_name: &str) -> Option<&PingType> {
self.ping_registry.get(ping_name)
}
/// Register a new [`PingType`](metrics/struct.PingType.html).
pub fn register_ping_type(&mut self, ping: &PingType) {
if self.ping_registry.contains_key(&ping.name) {
log::error!("Duplicate ping named '{}'", ping.name)
}
self.ping_registry.insert(ping.name.clone(), ping.clone());
}
/// Get create time of the Glean object.
pub(crate) fn start_time(&self) -> DateTime<FixedOffset> {
self.start_time
}
/// Indicate that an experiment is running.
/// Glean will then add an experiment annotation to the environment
/// which is sent with pings. This information is not persisted between runs.
///
/// ## Arguments
///
/// * `experiment_id` - The id of the active experiment (maximum 30 bytes).
/// * `branch` - The experiment branch (maximum 30 bytes).
/// * `extra` - Optional metadata to output with the ping.
pub fn set_experiment_active(
&self,
experiment_id: String,
branch: String,
extra: Option<HashMap<String, String>>,
) {
let metric = metrics::ExperimentMetric::new(&self, experiment_id);
metric.set_active(&self, branch, extra);
}
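The doc comment above notes that experiment ids and branch names have a maximum length and the tests further down rely on over-long values being truncated. A naive `String::truncate` panics if the cut lands inside a multi-byte UTF-8 character, so a boundary-safe helper is needed; this is a hypothetical sketch, not necessarily the helper glean-core uses:

```rust
// Hypothetical helper: cap a string at `max` bytes without splitting a
// UTF-8 code point. `String::truncate` panics on a non-boundary index,
// so walk back to the nearest char boundary first.
fn truncate_at_boundary(s: &str, max: usize) -> String {
    if s.len() <= max {
        return s.to_string();
    }
    let mut end = max;
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    s[..end].to_string()
}

fn main() {
    // ASCII input: cut lands exactly at the requested byte count.
    let long_id = "test-experiment-id".repeat(10);
    assert_eq!(100, truncate_at_boundary(&long_id, 100).len());
    // Multi-byte input: "é" is 2 bytes, so a cut at 3 steps back to 2.
    assert_eq!("é", truncate_at_boundary("éé", 3));
}
```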
/// Indicate that an experiment is no longer running.
///
/// ## Arguments
///
/// * `experiment_id` - The id of the active experiment to deactivate (maximum 30 bytes).
pub fn set_experiment_inactive(&self, experiment_id: String) {
let metric = metrics::ExperimentMetric::new(&self, experiment_id);
metric.set_inactive(&self);
}
/// **Test-only API (exported for FFI purposes).**
///
/// Check if an experiment is currently active.
///
/// ## Arguments
///
/// * `experiment_id` - The id of the experiment (maximum 30 bytes).
///
/// ## Return value
///
/// True if the experiment is active, false otherwise.
pub fn test_is_experiment_active(&self, experiment_id: String) -> bool {
self.test_get_experiment_data_as_json(experiment_id)
.is_some()
}
/// **Test-only API (exported for FFI purposes).**
///
/// Get stored data for the requested experiment.
///
/// ## Arguments
///
/// * `experiment_id` - The id of the active experiment (maximum 30 bytes).
///
/// ## Return value
///
/// If the requested experiment is active, a JSON string with the following format:
/// { 'branch': 'the-branch-name', 'extra': {'key': 'value', ...}}
/// Otherwise, None.
pub fn test_get_experiment_data_as_json(&self, experiment_id: String) -> Option<String> {
let metric = metrics::ExperimentMetric::new(&self, experiment_id);
metric.test_get_value_as_json_string(&self)
}
/// **Test-only API (exported for FFI purposes).**
///
/// Delete all stored metrics.
/// Note that this also includes the ping sequence numbers, so it has
/// the effect of resetting those to their initial values.
pub fn test_clear_all_stores(&self) {
self.data_store.clear_all();
        // We don't care if this fails; the data may simply not exist.
let _ = self.event_data_store.clear_all();
}
}
// Split unit tests to a separate file, to reduce the size of this one.
#[cfg(test)]
#[path = "lib_unit_tests.rs"]
mod tests;


@ -1,420 +0,0 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
// NOTE: This is a test-only file that contains unit tests for
// the lib.rs file.
use super::*;
use crate::metrics::RecordedExperimentData;
use crate::metrics::StringMetric;
const GLOBAL_APPLICATION_ID: &str = "org.mozilla.glean.test.app";
pub fn new_glean() -> (Glean, tempfile::TempDir) {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
let glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, true).unwrap();
(glean, dir)
}
#[test]
fn path_is_constructed_from_data() {
let (glean, _) = new_glean();
assert_eq!(
"/submit/org-mozilla-glean-test-app/baseline/1/this-is-a-docid",
glean.make_path("baseline", "this-is-a-docid")
);
}
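The expected string in `path_is_constructed_from_data` above follows the ping submission path layout `/submit/<application-id>/<ping-name>/<schema-version>/<doc-id>`, with the dots in the application id replaced by dashes. A hedged sketch of how such a path could be assembled (the real `make_path` lives elsewhere in `lib.rs`; the schema version `1` and dot-to-dash sanitization here are inferred from the expected string in the test):

```rust
// Illustrative reimplementation of the path construction checked above.
fn make_path(application_id: &str, ping_name: &str, doc_id: &str) -> String {
    // "org.mozilla.glean.test.app" -> "org-mozilla-glean-test-app"
    let sanitized = application_id.replace('.', "-");
    format!("/submit/{}/{}/1/{}", sanitized, ping_name, doc_id)
}

fn main() {
    let path = make_path("org.mozilla.glean.test.app", "baseline", "this-is-a-docid");
    assert_eq!(
        "/submit/org-mozilla-glean-test-app/baseline/1/this-is-a-docid",
        path
    );
}
```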
// Experiments API tests: the next two tests come from glean-ac's
// ExperimentsStorageEngineTest.kt.
#[test]
fn experiment_id_and_branch_get_truncated_if_too_long() {
let t = tempfile::tempdir().unwrap();
let name = t.path().display().to_string();
let glean = Glean::with_options(&name, "org.mozilla.glean.tests", true).unwrap();
// Generate long strings for the used ids.
let very_long_id = "test-experiment-id".repeat(10);
let very_long_branch_id = "test-branch-id".repeat(10);
// Mark the experiment as active.
glean.set_experiment_active(very_long_id.clone(), very_long_branch_id.clone(), None);
// Generate the expected id and branch strings.
let mut expected_id = very_long_id.clone();
expected_id.truncate(100);
let mut expected_branch_id = very_long_branch_id.clone();
expected_branch_id.truncate(100);
assert!(
glean.test_is_experiment_active(expected_id.clone()),
"An experiment with the truncated id should be available"
);
// Make sure the branch id was truncated as well.
let experiment_data = glean.test_get_experiment_data_as_json(expected_id.clone());
    assert!(
        experiment_data.is_some(),
        "Experiment data must be available"
    );
let parsed_json: RecordedExperimentData =
::serde_json::from_str(&experiment_data.unwrap()).unwrap();
assert_eq!(expected_branch_id, parsed_json.branch);
}
#[test]
fn limits_on_experiments_extras_are_applied_correctly() {
let t = tempfile::tempdir().unwrap();
let name = t.path().display().to_string();
let glean = Glean::with_options(&name, "org.mozilla.glean.tests", true).unwrap();
let experiment_id = "test-experiment_id".to_string();
let branch_id = "test-branch-id".to_string();
let mut extras = HashMap::new();
let too_long_key = "0123456789".repeat(11);
let too_long_value = "0123456789".repeat(11);
    // Build an extras HashMap that's a little too long in every way
for n in 0..21 {
extras.insert(format!("{}-{}", n, too_long_key), too_long_value.clone());
}
// Mark the experiment as active.
glean.set_experiment_active(experiment_id.clone(), branch_id.clone(), Some(extras));
// Make sure it is active
    assert!(
        glean.test_is_experiment_active(experiment_id.clone()),
        "The experiment must be marked as active"
    );
// Get the data
let experiment_data = glean.test_get_experiment_data_as_json(experiment_id.clone());
    assert!(
        experiment_data.is_some(),
        "Experiment data must be available"
    );
// Parse the JSON and validate the lengths
let parsed_json: RecordedExperimentData =
::serde_json::from_str(&experiment_data.unwrap()).unwrap();
assert_eq!(
20,
parsed_json.clone().extra.unwrap().len(),
"Experiments extra must be less than max length"
);
for (key, value) in parsed_json.extra.as_ref().unwrap().iter() {
assert!(
key.len() <= 100,
"Experiments extra key must be less than max length"
);
assert!(
value.len() <= 100,
"Experiments extra value must be less than max length"
);
}
}
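The test above checks the limits applied to experiment extras: at most 20 entries, with keys and values each capped at 100 bytes. A hedged standalone sketch of such capping (the function name `cap_extras` and the choice of which entries survive are illustrative; `HashMap` iteration order is unspecified, so which entry is dropped is arbitrary here):

```rust
use std::collections::HashMap;

// Illustrative cap: keep at most 20 entries, truncate keys and values
// to 100 bytes each (ASCII inputs, so byte-truncation is safe here).
fn cap_extras(extras: HashMap<String, String>) -> HashMap<String, String> {
    extras
        .into_iter()
        .take(20)
        .map(|(mut k, mut v)| {
            k.truncate(100);
            v.truncate(100);
            (k, v)
        })
        .collect()
}

fn main() {
    let mut extras = HashMap::new();
    // 21 entries, each key and value a little over 100 bytes.
    for n in 0..21 {
        extras.insert(
            format!("{}-{}", n, "0123456789".repeat(11)),
            "0123456789".repeat(11),
        );
    }
    let capped = cap_extras(extras);
    assert_eq!(20, capped.len());
    assert!(capped.iter().all(|(k, v)| k.len() <= 100 && v.len() <= 100));
}
```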
#[test]
fn experiments_status_is_correctly_toggled() {
let t = tempfile::tempdir().unwrap();
let name = t.path().display().to_string();
let glean = Glean::with_options(&name, "org.mozilla.glean.tests", true).unwrap();
// Define the experiment's data.
let experiment_id: String = "test-toggle-experiment".into();
let branch_id: String = "test-branch-toggle".into();
let extra: HashMap<String, String> = [("test-key".into(), "test-value".into())]
.iter()
.cloned()
.collect();
// Activate an experiment.
glean.set_experiment_active(
experiment_id.clone(),
branch_id.clone(),
Some(extra.clone()),
);
    // Check that the experiment is marked as active.
assert!(
glean.test_is_experiment_active(experiment_id.clone()),
"The experiment must be marked as active."
);
// Check that the extra data was stored.
let experiment_data = glean.test_get_experiment_data_as_json(experiment_id.clone());
assert!(
experiment_data.is_some(),
"Experiment data must be available"
);
let parsed_data: RecordedExperimentData =
::serde_json::from_str(&experiment_data.unwrap()).unwrap();
assert_eq!(parsed_data.extra.unwrap(), extra.clone());
// Disable the experiment and check that is no longer available.
glean.set_experiment_inactive(experiment_id.clone());
assert!(
!glean.test_is_experiment_active(experiment_id.clone()),
"The experiment must not be available any more."
);
}
#[test]
fn client_id_and_first_run_date_must_be_regenerated() {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
{
let glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, true).unwrap();
glean.data_store.clear_all();
assert!(glean
.core_metrics
.client_id
.test_get_value(&glean, "glean_client_info")
.is_none());
assert!(glean
.core_metrics
.first_run_date
.test_get_value_as_string(&glean, "glean_client_info")
.is_none());
}
{
let glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, true).unwrap();
assert!(glean
.core_metrics
.client_id
.test_get_value(&glean, "glean_client_info")
.is_some());
assert!(glean
.core_metrics
.first_run_date
.test_get_value_as_string(&glean, "glean_client_info")
.is_some());
}
}
#[test]
fn basic_metrics_should_be_cleared_when_uploading_is_disabled() {
let (mut glean, _t) = new_glean();
let metric = StringMetric::new(CommonMetricData::new(
"category",
"string_metric",
"baseline",
));
metric.set(&glean, "TEST VALUE");
assert!(metric.test_get_value(&glean, "baseline").is_some());
glean.set_upload_enabled(false);
assert!(metric.test_get_value(&glean, "baseline").is_none());
metric.set(&glean, "TEST VALUE");
assert!(metric.test_get_value(&glean, "baseline").is_none());
glean.set_upload_enabled(true);
assert!(metric.test_get_value(&glean, "baseline").is_none());
metric.set(&glean, "TEST VALUE");
assert!(metric.test_get_value(&glean, "baseline").is_some());
}
#[test]
fn first_run_date_is_managed_correctly_when_toggling_uploading() {
let (mut glean, _) = new_glean();
let original_first_run_date = glean
.core_metrics
.first_run_date
.get_value(&glean, "glean_client_info");
glean.set_upload_enabled(false);
assert_eq!(
original_first_run_date,
glean
.core_metrics
.first_run_date
.get_value(&glean, "glean_client_info")
);
glean.set_upload_enabled(true);
assert_eq!(
original_first_run_date,
glean
.core_metrics
.first_run_date
.get_value(&glean, "glean_client_info")
);
}
#[test]
fn client_id_is_managed_correctly_when_toggling_uploading() {
let (mut glean, _) = new_glean();
let original_client_id = glean
.core_metrics
.client_id
.get_value(&glean, "glean_client_info");
assert!(original_client_id.is_some());
assert_ne!(*KNOWN_CLIENT_ID, original_client_id.unwrap());
glean.set_upload_enabled(false);
assert_eq!(
*KNOWN_CLIENT_ID,
glean
.core_metrics
.client_id
.get_value(&glean, "glean_client_info")
.unwrap()
);
glean.set_upload_enabled(true);
let current_client_id = glean
.core_metrics
.client_id
.get_value(&glean, "glean_client_info");
assert!(current_client_id.is_some());
assert_ne!(*KNOWN_CLIENT_ID, current_client_id.unwrap());
assert_ne!(original_client_id, current_client_id);
}
#[test]
fn client_id_is_set_to_known_value_when_uploading_disabled_at_start() {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
let glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, false).unwrap();
assert_eq!(
*KNOWN_CLIENT_ID,
glean
.core_metrics
.client_id
.get_value(&glean, "glean_client_info")
.unwrap()
);
}
#[test]
fn client_id_is_set_to_random_value_when_uploading_enabled_at_start() {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
let glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, true).unwrap();
let current_client_id = glean
.core_metrics
.client_id
.get_value(&glean, "glean_client_info");
assert!(current_client_id.is_some());
assert_ne!(*KNOWN_CLIENT_ID, current_client_id.unwrap());
}
#[test]
fn enabling_when_already_enabled_is_a_noop() {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
let mut glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, true).unwrap();
assert!(!glean.set_upload_enabled(true));
}
#[test]
fn disabling_when_already_disabled_is_a_noop() {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
let mut glean = Glean::with_options(&tmpname, GLOBAL_APPLICATION_ID, false).unwrap();
assert!(!glean.set_upload_enabled(false));
}
#[test]
fn glean_inits_with_migration_when_no_db_dir_exists() {
let dir = tempfile::tempdir().unwrap();
let tmpname = dir.path().display().to_string();
let cfg = Configuration {
data_path: tmpname,
application_id: GLOBAL_APPLICATION_ID.to_string(),
upload_enabled: false,
max_events: None,
delay_ping_lifetime_io: false,
};
let mut ac_seq_numbers = HashMap::new();
ac_seq_numbers.insert(String::from("custom_seq"), 3);
let mut glean = Glean::with_sequence_numbers(cfg, ac_seq_numbers).unwrap();
assert!(!glean.set_upload_enabled(false));
}
// Test that the enum variants keep a stable discriminant when serialized.
// Discriminant values are taken from a stable ordering from v20.0.0.
// New metrics after that should be added in order.
#[test]
#[rustfmt::skip] // Let's not add newlines unnecessarily
fn correct_order() {
use histogram::Histogram;
use metrics::{Metric::*, TimeUnit};
use std::time::Duration;
use util::local_now_with_offset;
// Extract the discriminant of the serialized value,
// that is: the first 4 bytes.
fn discriminant(metric: &metrics::Metric) -> u32 {
let ser = bincode::serialize(metric).unwrap();
(ser[0] as u32)
| (ser[1] as u32) << 8
| (ser[2] as u32) << 16
| (ser[3] as u32) << 24
}
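The `discriminant` helper above reassembles the first four serialized bytes as a little-endian `u32`, which is how bincode (in its default configuration) prefixes enum variants. The same operation can be written with `u32::from_le_bytes`, which makes the intent explicit; `discriminant_le` is an illustrative name:

```rust
// Equivalent to the manual shift-and-or form used above.
fn discriminant_le(bytes: &[u8]) -> u32 {
    u32::from_le_bytes([bytes[0], bytes[1], bytes[2], bytes[3]])
}

fn main() {
    // Variant 5 serializes with the prefix [5, 0, 0, 0]; trailing bytes
    // are the variant's payload and do not affect the discriminant.
    let ser = [5u8, 0, 0, 0, 42];
    assert_eq!(5, discriminant_le(&ser));
    // The manual form from the test above agrees byte for byte.
    let manual = (ser[0] as u32)
        | (ser[1] as u32) << 8
        | (ser[2] as u32) << 16
        | (ser[3] as u32) << 24;
    assert_eq!(manual, discriminant_le(&ser));
}
```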
// One of every metric type. The values are arbitrary and don't matter.
let all_metrics = vec![
Boolean(false),
Counter(0),
CustomDistributionExponential(Histogram::exponential(1, 500, 10)),
CustomDistributionLinear(Histogram::linear(1, 500, 10)),
Datetime(local_now_with_offset(), TimeUnit::Second),
Experiment(RecordedExperimentData { branch: "branch".into(), extra: None, }),
Quantity(0),
String("glean".into()),
StringList(vec!["glean".into()]),
Uuid("082c3e52-0a18-11ea-946f-0fe0c98c361c".into()),
Timespan(Duration::new(5, 0), TimeUnit::Second),
TimingDistribution(Histogram::functional(2.0, 8.0)),
MemoryDistribution(Histogram::functional(2.0, 8.0)),
];
for metric in all_metrics {
let disc = discriminant(&metric);
// DO NOT TOUCH THE EXPECTED VALUE.
// If this test fails because of non-equal discriminants, that is a bug in the code, not
// the test.
// We're matching here, thus fail the build if new variants are added.
match metric {
Boolean(..) => assert_eq!( 0, disc),
Counter(..) => assert_eq!( 1, disc),
CustomDistributionExponential(..) => assert_eq!( 2, disc),
CustomDistributionLinear(..) => assert_eq!( 3, disc),
Datetime(..) => assert_eq!( 4, disc),
Experiment(..) => assert_eq!( 5, disc),
Quantity(..) => assert_eq!( 6, disc),
String(..) => assert_eq!( 7, disc),
StringList(..) => assert_eq!( 8, disc),
Uuid(..) => assert_eq!( 9, disc),
Timespan(..) => assert_eq!(10, disc),
TimingDistribution(..) => assert_eq!(11, disc),
MemoryDistribution(..) => assert_eq!(12, disc),
}
}
}
