Bug 1635573 - vendor coverage r=ahal

Vendor coverage, and make sure we can run from third_party as much as possible

Differential Revision: https://phabricator.services.mozilla.com/D74531
Tarek Ziadé 2020-05-19 15:04:28 +00:00
Parent 8c590158e1
Commit 7d6014b5a8
99 changed files with 26,840 additions and 19 deletions


@@ -110,24 +110,24 @@ class PerftestTests(MachCommandBase):
         # include in sys.path all deps
         _setup_path()
         try:
-            import black  # noqa
+            import coverage  # noqa
         except ImportError:
             # install the tests scripts in the virtualenv
             # this step will be removed once Bug 1635573 is done
             # so we don't have to hit pypi or our internal mirror at all
             pydeps = Path(self.topsrcdir, "third_party", "python")
-            for name in (
-                # installing pyrsistent and attrs so we don't pick newer ones
-                str(pydeps / "pyrsistent"),
-                str(pydeps / "attrs"),
-                "coverage",
-                "black",
-                "flake8",
-            ):
-                install_package(self.virtualenv_manager, name)
+            vendors = ["coverage"]
+            pypis = ["flake8"]
+            # if we're not on try we want to install black
+            if not ON_TRY:
+                pypis.append("black")
+            # these are the deps we are getting from pypi
+            for dep in pypis:
+                install_package(self.virtualenv_manager, dep)
+            # pip-installing dependencies that require compilation or special setup
+            for dep in vendors:
+                install_package(self.virtualenv_manager, str(Path(pydeps, dep)))
         here = Path(__file__).parent.resolve()
         if not ON_TRY:
@@ -135,7 +135,8 @@ class PerftestTests(MachCommandBase):
             assert self._run_python_script("black", str(here))
         # checking flake8 correctness
-        assert self._run_python_script("flake8", str(here))
+        if not (ON_TRY and sys.platform == "darwin"):
+            assert self._run_python_script("flake8", str(here))
         # running pytest with coverage
         # coverage is done in three steps:
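The hunk above splits the test dependencies into vendored packages, pip-installed from a local third_party/python checkout, and packages still fetched from PyPI. A minimal sketch of that selection logic, under stated assumptions: `install_targets` and its arguments are illustrative helpers, not part of the patch.

```python
from pathlib import Path

def install_targets(third_party, vendors, pypis):
    """Build the list of pip install targets: vendored packages
    resolve to local directories, the rest stay as PyPI names."""
    targets = [str(Path(third_party, name)) for name in vendors]
    targets.extend(pypis)
    return targets

# Vendored coverage installs from the local tree; flake8 still hits PyPI.
targets = install_targets("third_party/python", ["coverage"], ["flake8"])
```

pip accepts a directory path as an install target, so each vendored entry can be passed to `pip install` without contacting a package index.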


@@ -8,7 +8,6 @@ import mock
 import tempfile
 import shutil
 from contextlib import contextmanager
-import platform
 from mach.registrar import Registrar
@@ -72,9 +71,6 @@ def test_doc_flavor(mocked_func):
 @mock.patch("mozperftest.mach_commands.MachCommandBase._activate_virtualenv")
 @mock.patch("mozperftest.mach_commands.PerftestTests._run_python_script")
 def test_test_runner(*mocked):
-    if platform.system() == "Darwin" and ON_TRY:
-        return
     # simulating on try to run the paths parser
     old = mach_commands.ON_TRY
     mach_commands.ON_TRY = True


@@ -345,6 +345,7 @@ mozperftest:
 description: mozperftest unit tests
 platform:
     - linux1804-64/opt
+    - macosx1014-64/opt
     - windows10-64/opt
 treeherder:
     symbol: mpu

third_party/python/coverage/CONTRIBUTORS.txt (vendored, new file)

@@ -0,0 +1,136 @@
Coverage.py was originally written by Gareth Rees, and since 2004 has been
extended and maintained by Ned Batchelder.
Other contributions, including writing code, updating docs, and submitting
useful bug reports, have been made by:
Abdeali Kothari
Adi Roiban
Agbonze O. Jeremiah
Albertas Agejevas
Aleksi Torhamo
Alex Gaynor
Alex Groce
Alex Sandro
Alexander Todorov
Alexander Walters
Andrew Hoos
Anthony Sottile
Arcadiy Ivanov
Aron Griffis
Artem Dayneko
Ben Finney
Bernát Gábor
Bill Hart
Brandon Rhodes
Brett Cannon
Bruno P. Kinoshita
Buck Evan
Calen Pennington
Carl Gieringer
Catherine Proulx
Chris Adams
Chris Jerdonek
Chris Rose
Chris Warrick
Christian Heimes
Christine Lytwynec
Christoph Zwerschke
Conrad Ho
Cosimo Lupo
Dan Hemberger
Dan Riti
Dan Wandschneider
Danek Duvall
Daniel Hahler
Danny Allen
David Christian
David MacIver
David Stanek
David Szotten
Detlev Offenbach
Devin Jeanpierre
Dirk Thomas
Dmitry Shishov
Dmitry Trofimov
Eduardo Schettino
Eli Skeggs
Emil Madsen
Edward Loper
Federico Bond
Frazer McLean
Geoff Bache
George Paci
George Song
George-Cristian Bîrzan
Greg Rogers
Guido van Rossum
Guillaume Chazarain
Hugo van Kemenade
Ilia Meerovich
Imri Goldberg
Ionel Cristian Mărieș
JT Olds
Jessamyn Smith
Joe Doherty
Joe Jevnik
Jon Chappell
Jon Dufresne
Joseph Tate
Josh Williams
Julian Berman
Julien Voisin
Justas Sadzevičius
Kjell Braden
Krystian Kichewko
Kyle Altendorf
Lars Hupfeldt Nielsen
Leonardo Pistone
Lex Berezhny
Loïc Dachary
Marc Abramowitz
Marcus Cobden
Marius Gedminas
Mark van der Wal
Martin Fuzzey
Matt Bachmann
Matthew Boehm
Matthew Desmarais
Max Linke
Michał Bultrowicz
Mickie Betz
Mike Fiedler
Nathan Land
Noel O'Boyle
Olivier Grisel
Ori Avtalion
Pankaj Pandey
Pablo Carballo
Patrick Mezard
Peter Baughman
Peter Ebden
Peter Portante
Reya B
Rodrigue Cloutier
Roger Hu
Ross Lawley
Roy Williams
Salvatore Zagaria
Sandra Martocchia
Scott Belden
Sigve Tjora
Simon Willison
Stan Hu
Stefan Behnel
Stephan Richter
Stephen Finucane
Steve Leonard
Steve Peak
S. Y. Lee
Ted Wexler
Thijs Triemstra
Titus Brown
Ville Skyttä
Yury Selivanov
Zac Hatfield-Dodds
Zooko Wilcox-O'Hearn

third_party/python/coverage/LICENSE.txt (vendored, new file)

@@ -0,0 +1,177 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

third_party/python/coverage/PKG-INFO (vendored, new file)

@@ -0,0 +1,187 @@
Metadata-Version: 2.1
Name: coverage
Version: 5.1
Summary: Code coverage measurement for Python
Home-page: https://github.com/nedbat/coveragepy
Author: Ned Batchelder and 131 others
Author-email: ned@nedbatchelder.com
License: Apache 2.0
Project-URL: Documentation, https://coverage.readthedocs.io
Project-URL: Funding, https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=pypi
Project-URL: Issues, https://github.com/nedbat/coveragepy/issues
Description: .. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
===========
Coverage.py
===========
Code coverage testing for Python.
| |license| |versions| |status|
| |ci-status| |win-ci-status| |docs| |codecov|
| |kit| |format| |repos|
| |stars| |forks| |contributors|
| |tidelift| |twitter-coveragepy| |twitter-nedbat|
Coverage.py measures code coverage, typically during test execution. It uses
the code analysis tools and tracing hooks provided in the Python standard
library to determine which lines are executable, and which have been executed.
Coverage.py runs on many versions of Python:
* CPython 2.7.
* CPython 3.5 through 3.9 alpha 4.
* PyPy2 7.3.0 and PyPy3 7.3.0.
Documentation is on `Read the Docs`_. Code repository and issue tracker are on
`GitHub`_.
.. _Read the Docs: https://coverage.readthedocs.io/
.. _GitHub: https://github.com/nedbat/coveragepy
**New in 5.0:** SQLite data storage, JSON report, contexts, relative filenames,
dropped support for Python 2.6, 3.3 and 3.4.
For Enterprise
--------------
.. |tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logo_small.png
:width: 75
:alt: Tidelift
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
.. list-table::
:widths: 10 100
* - |tideliftlogo|
- `Available as part of the Tidelift Subscription. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Coverage and thousands of other packages are working with
Tidelift to deliver one enterprise subscription that covers all of the open
source you use. If you want the flexibility of open source and the confidence
of commercial-grade software, this is for you.
`Learn more. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Getting Started
---------------
See the `Quick Start section`_ of the docs.
.. _Quick Start section: https://coverage.readthedocs.io/#quick-start
Change history
--------------
The complete history of changes is on the `change history page`_.
.. _change history page: https://coverage.readthedocs.io/en/latest/changes.html
Contributing
------------
See the `Contributing section`_ of the docs.
.. _Contributing section: https://coverage.readthedocs.io/en/latest/contributing.html
Security
--------
To report a security vulnerability, please use the `Tidelift security
contact`_. Tidelift will coordinate the fix and disclosure.
.. _Tidelift security contact: https://tidelift.com/security
License
-------
Licensed under the `Apache 2.0 License`_. For details, see `NOTICE.txt`_.
.. _Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0
.. _NOTICE.txt: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
.. |ci-status| image:: https://travis-ci.com/nedbat/coveragepy.svg?branch=master
:target: https://travis-ci.com/nedbat/coveragepy
:alt: Build status
.. |win-ci-status| image:: https://ci.appveyor.com/api/projects/status/kmeqpdje7h9r6vsf/branch/master?svg=true
:target: https://ci.appveyor.com/project/nedbat/coveragepy
:alt: Windows build status
.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat
:target: https://coverage.readthedocs.io/
:alt: Documentation
.. |reqs| image:: https://requires.io/github/nedbat/coveragepy/requirements.svg?branch=master
:target: https://requires.io/github/nedbat/coveragepy/requirements/?branch=master
:alt: Requirements status
.. |kit| image:: https://badge.fury.io/py/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: PyPI status
.. |format| image:: https://img.shields.io/pypi/format/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Kit format
.. |downloads| image:: https://img.shields.io/pypi/dw/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Weekly PyPI downloads
.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg?logo=python&logoColor=FBE072
:target: https://pypi.org/project/coverage/
:alt: Python versions supported
.. |status| image:: https://img.shields.io/pypi/status/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Package stability
.. |license| image:: https://img.shields.io/pypi/l/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: License
.. |codecov| image:: https://codecov.io/github/nedbat/coveragepy/coverage.svg?branch=master&precision=2
:target: https://codecov.io/github/nedbat/coveragepy?branch=master
:alt: Coverage!
.. |repos| image:: https://repology.org/badge/tiny-repos/python:coverage.svg
:target: https://repology.org/metapackage/python:coverage/versions
:alt: Packaging status
.. |tidelift| image:: https://tidelift.com/badges/package/pypi/coverage
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
:alt: Tidelift
.. |stars| image:: https://img.shields.io/github/stars/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/stargazers
:alt: Github stars
.. |forks| image:: https://img.shields.io/github/forks/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/network/members
:alt: Github forks
.. |contributors| image:: https://img.shields.io/github/contributors/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/graphs/contributors
:alt: Contributors
.. |twitter-coveragepy| image:: https://img.shields.io/twitter/follow/coveragepy.svg?label=coveragepy&style=flat&logo=twitter&logoColor=4FADFF
:target: https://twitter.com/coveragepy
:alt: coverage.py on Twitter
.. |twitter-nedbat| image:: https://img.shields.io/twitter/follow/nedbat.svg?label=nedbat&style=flat&logo=twitter&logoColor=4FADFF
:target: https://twitter.com/nedbat
:alt: nedbat on Twitter
Keywords: code coverage testing
Platform: UNKNOWN
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing
Classifier: Development Status :: 5 - Production/Stable
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4
Description-Content-Type: text/x-rst
Provides-Extra: toml

third_party/python/coverage/README.rst (vendored, new file)

@@ -0,0 +1,152 @@
.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
===========
Coverage.py
===========
Code coverage testing for Python.
| |license| |versions| |status|
| |ci-status| |win-ci-status| |docs| |codecov|
| |kit| |format| |repos|
| |stars| |forks| |contributors|
| |tidelift| |twitter-coveragepy| |twitter-nedbat|
Coverage.py measures code coverage, typically during test execution. It uses
the code analysis tools and tracing hooks provided in the Python standard
library to determine which lines are executable, and which have been executed.
Coverage.py runs on many versions of Python:
* CPython 2.7.
* CPython 3.5 through 3.9 alpha 4.
* PyPy2 7.3.0 and PyPy3 7.3.0.
Documentation is on `Read the Docs`_. Code repository and issue tracker are on
`GitHub`_.
.. _Read the Docs: https://coverage.readthedocs.io/
.. _GitHub: https://github.com/nedbat/coveragepy
**New in 5.0:** SQLite data storage, JSON report, contexts, relative filenames,
dropped support for Python 2.6, 3.3 and 3.4.
For Enterprise
--------------
.. |tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logo_small.png
:width: 75
:alt: Tidelift
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
.. list-table::
:widths: 10 100
* - |tideliftlogo|
- `Available as part of the Tidelift Subscription. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Coverage and thousands of other packages are working with
Tidelift to deliver one enterprise subscription that covers all of the open
source you use. If you want the flexibility of open source and the confidence
of commercial-grade software, this is for you.
`Learn more. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_
Getting Started
---------------
See the `Quick Start section`_ of the docs.
.. _Quick Start section: https://coverage.readthedocs.io/#quick-start
Change history
--------------
The complete history of changes is on the `change history page`_.
.. _change history page: https://coverage.readthedocs.io/en/latest/changes.html
Contributing
------------
See the `Contributing section`_ of the docs.
.. _Contributing section: https://coverage.readthedocs.io/en/latest/contributing.html
Security
--------
To report a security vulnerability, please use the `Tidelift security
contact`_. Tidelift will coordinate the fix and disclosure.
.. _Tidelift security contact: https://tidelift.com/security
License
-------
Licensed under the `Apache 2.0 License`_. For details, see `NOTICE.txt`_.
.. _Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0
.. _NOTICE.txt: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
.. |ci-status| image:: https://travis-ci.com/nedbat/coveragepy.svg?branch=master
:target: https://travis-ci.com/nedbat/coveragepy
:alt: Build status
.. |win-ci-status| image:: https://ci.appveyor.com/api/projects/status/kmeqpdje7h9r6vsf/branch/master?svg=true
:target: https://ci.appveyor.com/project/nedbat/coveragepy
:alt: Windows build status
.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat
:target: https://coverage.readthedocs.io/
:alt: Documentation
.. |reqs| image:: https://requires.io/github/nedbat/coveragepy/requirements.svg?branch=master
:target: https://requires.io/github/nedbat/coveragepy/requirements/?branch=master
:alt: Requirements status
.. |kit| image:: https://badge.fury.io/py/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: PyPI status
.. |format| image:: https://img.shields.io/pypi/format/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Kit format
.. |downloads| image:: https://img.shields.io/pypi/dw/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Weekly PyPI downloads
.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg?logo=python&logoColor=FBE072
:target: https://pypi.org/project/coverage/
:alt: Python versions supported
.. |status| image:: https://img.shields.io/pypi/status/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: Package stability
.. |license| image:: https://img.shields.io/pypi/l/coverage.svg
:target: https://pypi.org/project/coverage/
:alt: License
.. |codecov| image:: https://codecov.io/github/nedbat/coveragepy/coverage.svg?branch=master&precision=2
:target: https://codecov.io/github/nedbat/coveragepy?branch=master
:alt: Coverage!
.. |repos| image:: https://repology.org/badge/tiny-repos/python:coverage.svg
:target: https://repology.org/metapackage/python:coverage/versions
:alt: Packaging status
.. |tidelift| image:: https://tidelift.com/badges/package/pypi/coverage
:target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme
:alt: Tidelift
.. |stars| image:: https://img.shields.io/github/stars/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/stargazers
:alt: Github stars
.. |forks| image:: https://img.shields.io/github/forks/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/network/members
:alt: Github forks
.. |contributors| image:: https://img.shields.io/github/contributors/nedbat/coveragepy.svg?logo=github
:target: https://github.com/nedbat/coveragepy/graphs/contributors
:alt: Contributors
.. |twitter-coveragepy| image:: https://img.shields.io/twitter/follow/coveragepy.svg?label=coveragepy&style=flat&logo=twitter&logoColor=4FADFF
:target: https://twitter.com/coveragepy
:alt: coverage.py on Twitter
.. |twitter-nedbat| image:: https://img.shields.io/twitter/follow/nedbat.svg?label=nedbat&style=flat&logo=twitter&logoColor=4FADFF
:target: https://twitter.com/nedbat
:alt: nedbat on Twitter

third_party/python/coverage/coverage/__init__.py (vendored, new file)

@@ -0,0 +1,36 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Code coverage measurement for Python.
Ned Batchelder
https://nedbatchelder.com/code/coverage
"""
import sys
from coverage.version import __version__, __url__, version_info
from coverage.control import Coverage, process_startup
from coverage.data import CoverageData
from coverage.misc import CoverageException
from coverage.plugin import CoveragePlugin, FileTracer, FileReporter
from coverage.pytracer import PyTracer
# Backward compatibility.
coverage = Coverage
# On Windows, we encode and decode deep enough that something goes wrong and
# the encodings.utf_8 module is loaded and then unloaded, I don't know why.
# Adding a reference here prevents it from being unloaded. Yuk.
import encodings.utf_8 # pylint: disable=wrong-import-position, wrong-import-order
# Because of the "from coverage.control import fooey" lines at the top of the
# file, there's an entry for coverage.coverage in sys.modules, mapped to None.
# This makes some inspection tools (like pydoc) unable to find the class
# coverage.coverage. So remove that entry.
try:
del sys.modules['coverage.coverage']
except KeyError:
pass
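The comment above about the stale `sys.modules` entry can be reproduced with a synthetic package; `demo_pkg` and `Thing` are made-up names for illustration. A submodule import leaves an entry in `sys.modules` that shadows a later attribute of the same name, and deleting it restores attribute lookup for tools like pydoc.

```python
import sys
import types

# Build a fake package whose submodule shares its name, the same
# collision coverage/__init__.py works around (names are hypothetical).
pkg = types.ModuleType("demo_pkg")
sub = types.ModuleType("demo_pkg.demo_pkg")
sys.modules["demo_pkg"] = pkg
sys.modules["demo_pkg.demo_pkg"] = sub  # left behind by a submodule import

class Thing:
    """Stands in for the Coverage class."""

# The backward-compat alias, like `coverage = Coverage` above: the
# package attribute now has the same dotted name as the module entry.
pkg.demo_pkg = Thing

# Drop the stale module entry so lookups resolve the attribute instead.
try:
    del sys.modules["demo_pkg.demo_pkg"]
except KeyError:
    pass
```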

third_party/python/coverage/coverage/__main__.py (vendored, new file)

@@ -0,0 +1,8 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Coverage.py's main entry point."""
import sys
from coverage.cmdline import main
sys.exit(main())

third_party/python/coverage/coverage/annotate.py (vendored, new file)

@@ -0,0 +1,108 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Source file annotation for coverage.py."""
import io
import os
import re
from coverage.files import flat_rootname
from coverage.misc import ensure_dir, isolate_module
from coverage.report import get_analysis_to_report
os = isolate_module(os)
class AnnotateReporter(object):
"""Generate annotated source files showing line coverage.
This reporter creates annotated copies of the measured source files. Each
.py file is copied as a .py,cover file, with a left-hand margin annotating
each line::
> def h(x):
- if 0: #pragma: no cover
- pass
> if x == 1:
! a = 1
> else:
> a = 2
> h(2)
Executed lines use '>', lines not executed use '!', lines excluded from
consideration use '-'.
"""
def __init__(self, coverage):
self.coverage = coverage
self.config = self.coverage.config
self.directory = None
blank_re = re.compile(r"\s*(#|$)")
else_re = re.compile(r"\s*else\s*:\s*(#|$)")
def report(self, morfs, directory=None):
"""Run the report.
See `coverage.report()` for arguments.
"""
self.directory = directory
self.coverage.get_data()
for fr, analysis in get_analysis_to_report(self.coverage, morfs):
self.annotate_file(fr, analysis)
def annotate_file(self, fr, analysis):
"""Annotate a single file.
`fr` is the FileReporter for the file to annotate.
"""
statements = sorted(analysis.statements)
missing = sorted(analysis.missing)
excluded = sorted(analysis.excluded)
if self.directory:
ensure_dir(self.directory)
dest_file = os.path.join(self.directory, flat_rootname(fr.relative_filename()))
if dest_file.endswith("_py"):
dest_file = dest_file[:-3] + ".py"
dest_file += ",cover"
else:
dest_file = fr.filename + ",cover"
with io.open(dest_file, 'w', encoding='utf8') as dest:
i = 0
j = 0
covered = True
source = fr.source()
for lineno, line in enumerate(source.splitlines(True), start=1):
while i < len(statements) and statements[i] < lineno:
i += 1
while j < len(missing) and missing[j] < lineno:
j += 1
if i < len(statements) and statements[i] == lineno:
covered = j >= len(missing) or missing[j] > lineno
if self.blank_re.match(line):
dest.write(u' ')
elif self.else_re.match(line):
# Special logic for lines containing only 'else:'.
if i >= len(statements) and j >= len(missing):
dest.write(u'! ')
elif i >= len(statements) or j >= len(missing):
dest.write(u'> ')
elif statements[i] == missing[j]:
dest.write(u'! ')
else:
dest.write(u'> ')
elif lineno in excluded:
dest.write(u'- ')
elif covered:
dest.write(u'> ')
else:
dest.write(u'! ')
dest.write(line)
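As a quick illustration of how the annotate reporter classifies source lines, the two regexes defined on the class above behave like this (a minimal standalone sketch, reusing the same patterns):

```python
import re

# The two line-classifying regexes from AnnotateReporter above.
blank_re = re.compile(r"\s*(#|$)")
else_re = re.compile(r"\s*else\s*:\s*(#|$)")

# A comment-only line counts as blank (gets the ' ' margin).
is_blank = bool(blank_re.match("    # just a comment"))
# A bare 'else:' line triggers the special else handling.
is_else = bool(else_re.match("    else:"))
# 'else:' followed by code on the same line does not.
has_code = bool(else_re.match("    else: pass"))
print(is_blank, is_else, has_code)
```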

third_party/python/coverage/coverage/backunittest.py vendored Normal file
@@ -0,0 +1,33 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Implementations of unittest features from the future."""
import unittest
def unittest_has(method):
"""Does `unittest.TestCase` have `method` defined?"""
return hasattr(unittest.TestCase, method)
class TestCase(unittest.TestCase):
"""Just like unittest.TestCase, but with assert methods added.
Designed to be compatible with 3.1 unittest. Methods are only defined if
`unittest` doesn't have them.
"""
# pylint: disable=arguments-differ, deprecated-method
if not unittest_has('assertCountEqual'):
def assertCountEqual(self, *args, **kwargs):
return self.assertItemsEqual(*args, **kwargs)
if not unittest_has('assertRaisesRegex'):
def assertRaisesRegex(self, *args, **kwargs):
return self.assertRaisesRegexp(*args, **kwargs)
if not unittest_has('assertRegex'):
def assertRegex(self, *args, **kwargs):
return self.assertRegexpMatches(*args, **kwargs)
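The compat pattern in backunittest.py hinges on the `unittest_has` probe: each shim method is only defined when the running `unittest` lacks it. A small sketch of how the probe behaves (on Python 3 the aliases above are all no-ops):

```python
import unittest

def unittest_has(method):
    """Does `unittest.TestCase` have `method` defined?"""
    return hasattr(unittest.TestCase, method)

# assertCountEqual was added to unittest in Python 3.2, so on any
# modern interpreter the conditional shim above is skipped entirely.
has_count_equal = unittest_has('assertCountEqual')
# A method name that has never existed probes as absent.
has_bogus = unittest_has('assertTotallyMadeUp')
print(has_count_equal, has_bogus)
```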

third_party/python/coverage/coverage/backward.py vendored Normal file
@@ -0,0 +1,253 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Add things to old Pythons so I can pretend they are newer."""
# This file's purpose is to provide modules to be imported from here.
# pylint: disable=unused-import
import os
import sys
from coverage import env
# Pythons 2 and 3 differ on where to get StringIO.
try:
from cStringIO import StringIO
except ImportError:
from io import StringIO
# In py3, ConfigParser was renamed to the more-standard configparser.
# But there's a py3 backport that installs "configparser" in py2, and I don't
# want it because it has annoying deprecation warnings. So try the real py2
# import first.
try:
import ConfigParser as configparser
except ImportError:
import configparser
# What's a string called?
try:
string_class = basestring
except NameError:
string_class = str
# What's a Unicode string called?
try:
unicode_class = unicode
except NameError:
unicode_class = str
# range or xrange?
try:
range = xrange # pylint: disable=redefined-builtin
except NameError:
range = range
try:
from itertools import zip_longest
except ImportError:
from itertools import izip_longest as zip_longest
# Where do we get the thread id from?
try:
from thread import get_ident as get_thread_id
except ImportError:
from threading import get_ident as get_thread_id
try:
os.PathLike
except AttributeError:
# This is Python 2 and 3
path_types = (bytes, string_class, unicode_class)
else:
# 3.6+
path_types = (bytes, str, os.PathLike)
# shlex.quote is new, but there's an undocumented implementation in "pipes",
# who knew!?
try:
from shlex import quote as shlex_quote
except ImportError:
# Useful function, available under a different (undocumented) name
# in Python versions earlier than 3.3.
from pipes import quote as shlex_quote
try:
import reprlib
except ImportError:
import repr as reprlib
# A function to iterate listlessly over a dict's items, and one to get the
# items as a list.
try:
{}.iteritems
except AttributeError:
# Python 3
def iitems(d):
"""Produce the items from dict `d`."""
return d.items()
def litems(d):
"""Return a list of items from dict `d`."""
return list(d.items())
else:
# Python 2
def iitems(d):
"""Produce the items from dict `d`."""
return d.iteritems()
def litems(d):
"""Return a list of items from dict `d`."""
return d.items()
# Getting the `next` function from an iterator is different in 2 and 3.
try:
iter([]).next
except AttributeError:
def iternext(seq):
"""Get the `next` function for iterating over `seq`."""
return iter(seq).__next__
else:
def iternext(seq):
"""Get the `next` function for iterating over `seq`."""
return iter(seq).next
# Python 3.x is picky about bytes and strings, so provide methods to
# get them right, and make them no-ops in 2.x
if env.PY3:
def to_bytes(s):
"""Convert string `s` to bytes."""
return s.encode('utf8')
def to_string(b):
"""Convert bytes `b` to string."""
return b.decode('utf8')
def binary_bytes(byte_values):
"""Produce a byte string with the ints from `byte_values`."""
return bytes(byte_values)
def byte_to_int(byte):
"""Turn a byte indexed from a bytes object into an int."""
return byte
def bytes_to_ints(bytes_value):
"""Turn a bytes object into a sequence of ints."""
# In Python 3, iterating bytes gives ints.
return bytes_value
else:
def to_bytes(s):
"""Convert string `s` to bytes (no-op in 2.x)."""
return s
def to_string(b):
"""Convert bytes `b` to string."""
return b
def binary_bytes(byte_values):
"""Produce a byte string with the ints from `byte_values`."""
return "".join(chr(b) for b in byte_values)
def byte_to_int(byte):
"""Turn a byte indexed from a bytes object into an int."""
return ord(byte)
def bytes_to_ints(bytes_value):
"""Turn a bytes object into a sequence of ints."""
for byte in bytes_value:
yield ord(byte)
try:
# In Python 2.x, the builtins were in __builtin__
BUILTINS = sys.modules['__builtin__']
except KeyError:
# In Python 3.x, they're in builtins
BUILTINS = sys.modules['builtins']
# imp was deprecated in Python 3.3
try:
import importlib
import importlib.util
imp = None
except ImportError:
importlib = None
# We only want to use importlib if it has everything we need.
try:
importlib_util_find_spec = importlib.util.find_spec
except Exception:
import imp
importlib_util_find_spec = None
# What is the .pyc magic number for this version of Python?
try:
PYC_MAGIC_NUMBER = importlib.util.MAGIC_NUMBER
except AttributeError:
PYC_MAGIC_NUMBER = imp.get_magic()
def code_object(fn):
"""Get the code object from a function."""
try:
return fn.func_code
except AttributeError:
return fn.__code__
try:
from types import SimpleNamespace
except ImportError:
# The code from https://docs.python.org/3/library/types.html#types.SimpleNamespace
class SimpleNamespace:
"""Python implementation of SimpleNamespace, for Python 2."""
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __repr__(self):
keys = sorted(self.__dict__)
items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys)
return "{}({})".format(type(self).__name__, ", ".join(items))
def __eq__(self, other):
return self.__dict__ == other.__dict__
def invalidate_import_caches():
"""Invalidate any import caches that may or may not exist."""
if importlib and hasattr(importlib, "invalidate_caches"):
importlib.invalidate_caches()
def import_local_file(modname, modfile=None):
"""Import a local file as a module.
Opens a file in the current directory named `modname`.py, imports it
as `modname`, and returns the module object. `modfile` is the file to
import if it isn't in the current directory.
"""
try:
from importlib.machinery import SourceFileLoader
except ImportError:
SourceFileLoader = None
if modfile is None:
modfile = modname + '.py'
if SourceFileLoader:
# pylint: disable=no-value-for-parameter, deprecated-method
mod = SourceFileLoader(modname, modfile).load_module()
else:
for suff in imp.get_suffixes(): # pragma: part covered
if suff[0] == '.py':
break
with open(modfile, 'r') as f:
# pylint: disable=undefined-loop-variable
mod = imp.load_module(modname, f, modfile, suff)
return mod
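To see what backward.py's byte/string helpers buy you, here is the Python 3 branch of those helpers in isolation (copied from the `env.PY3` block above, so it runs standalone):

```python
# Python 3 branch of the helpers defined in backward.py above.
def to_bytes(s):
    """Convert string `s` to bytes."""
    return s.encode('utf8')

def to_string(b):
    """Convert bytes `b` to string."""
    return b.decode('utf8')

def bytes_to_ints(bytes_value):
    """Turn a bytes object into a sequence of ints."""
    # In Python 3, iterating bytes already yields ints.
    return bytes_value

round_trip = to_string(to_bytes("coverage"))
ints = list(bytes_to_ints(b"ab"))
print(round_trip, ints)
```

On Python 2 the same names are defined as no-ops or `ord()`-based loops, which is what lets the rest of coverage.py call them unconditionally.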

third_party/python/coverage/coverage/bytecode.py vendored Normal file
@@ -0,0 +1,19 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Bytecode manipulation for coverage.py"""
import types
def code_objects(code):
"""Iterate over all the code objects in `code`."""
stack = [code]
while stack:
# We're going to return the code object on the stack, but first
# push its children for later returning.
code = stack.pop()
for c in code.co_consts:
if isinstance(c, types.CodeType):
stack.append(c)
yield code
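The `code_objects` generator above walks nested code objects via `co_consts`, which is where CPython stores the compiled bodies of inner functions. A short demonstration against a compiled snippet:

```python
import types

def code_objects(code):
    """Iterate over all the code objects in `code` (as in bytecode.py above)."""
    stack = [code]
    while stack:
        # Return the code object on the stack, but first push its
        # children (code objects found among its constants) for later.
        code = stack.pop()
        for c in code.co_consts:
            if isinstance(c, types.CodeType):
                stack.append(c)
        yield code

src = """
def outer():
    def inner():
        pass
"""
code = compile(src, "<example>", "exec")
# The walk finds the module code object plus both function bodies.
names = sorted(c.co_name for c in code_objects(code))
print(names)
```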

third_party/python/coverage/coverage/cmdline.py vendored Normal file
@@ -0,0 +1,866 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Command-line support for coverage.py."""
from __future__ import print_function
import glob
import optparse
import os.path
import shlex
import sys
import textwrap
import traceback
import coverage
from coverage import Coverage
from coverage import env
from coverage.collector import CTracer
from coverage.data import line_counts
from coverage.debug import info_formatter, info_header, short_stack
from coverage.execfile import PyRunner
from coverage.misc import BaseCoverageException, ExceptionDuringRun, NoSource, output_encoding
from coverage.results import should_fail_under
class Opts(object):
"""A namespace class for individual options we'll build parsers from."""
append = optparse.make_option(
'-a', '--append', action='store_true',
help="Append coverage data to .coverage, otherwise it starts clean each time.",
)
branch = optparse.make_option(
'', '--branch', action='store_true',
help="Measure branch coverage in addition to statement coverage.",
)
CONCURRENCY_CHOICES = [
"thread", "gevent", "greenlet", "eventlet", "multiprocessing",
]
concurrency = optparse.make_option(
'', '--concurrency', action='store', metavar="LIB",
choices=CONCURRENCY_CHOICES,
help=(
"Properly measure code using a concurrency library. "
"Valid values are: %s."
) % ", ".join(CONCURRENCY_CHOICES),
)
context = optparse.make_option(
'', '--context', action='store', metavar="LABEL",
help="The context label to record for this coverage run.",
)
debug = optparse.make_option(
'', '--debug', action='store', metavar="OPTS",
help="Debug options, separated by commas. [env: COVERAGE_DEBUG]",
)
directory = optparse.make_option(
'-d', '--directory', action='store', metavar="DIR",
help="Write the output files to DIR.",
)
fail_under = optparse.make_option(
'', '--fail-under', action='store', metavar="MIN", type="float",
help="Exit with a status of 2 if the total coverage is less than MIN.",
)
help = optparse.make_option(
'-h', '--help', action='store_true',
help="Get help on this command.",
)
ignore_errors = optparse.make_option(
'-i', '--ignore-errors', action='store_true',
help="Ignore errors while reading source files.",
)
include = optparse.make_option(
'', '--include', action='store',
metavar="PAT1,PAT2,...",
help=(
"Include only files whose paths match one of these patterns. "
"Accepts shell-style wildcards, which must be quoted."
),
)
pylib = optparse.make_option(
'-L', '--pylib', action='store_true',
help=(
"Measure coverage even inside the Python installed library, "
"which isn't done by default."
),
)
show_missing = optparse.make_option(
'-m', '--show-missing', action='store_true',
help="Show line numbers of statements in each module that weren't executed.",
)
skip_covered = optparse.make_option(
'--skip-covered', action='store_true',
help="Skip files with 100% coverage.",
)
skip_empty = optparse.make_option(
'--skip-empty', action='store_true',
help="Skip files with no code.",
)
show_contexts = optparse.make_option(
'--show-contexts', action='store_true',
help="Show contexts for covered lines.",
)
omit = optparse.make_option(
'', '--omit', action='store',
metavar="PAT1,PAT2,...",
help=(
"Omit files whose paths match one of these patterns. "
"Accepts shell-style wildcards, which must be quoted."
),
)
contexts = optparse.make_option(
'', '--contexts', action='store',
metavar="REGEX1,REGEX2,...",
help=(
"Only display data from lines covered in the given contexts. "
"Accepts Python regexes, which must be quoted."
),
)
output_xml = optparse.make_option(
'-o', '', action='store', dest="outfile",
metavar="OUTFILE",
help="Write the XML report to this file. Defaults to 'coverage.xml'",
)
output_json = optparse.make_option(
'-o', '', action='store', dest="outfile",
metavar="OUTFILE",
help="Write the JSON report to this file. Defaults to 'coverage.json'",
)
json_pretty_print = optparse.make_option(
'', '--pretty-print', action='store_true',
help="Format the JSON for human readers.",
)
parallel_mode = optparse.make_option(
'-p', '--parallel-mode', action='store_true',
help=(
"Append the machine name, process id and random number to the "
".coverage data file name to simplify collecting data from "
"many processes."
),
)
module = optparse.make_option(
'-m', '--module', action='store_true',
help=(
"<pyfile> is an importable Python module, not a script path, "
"to be run as 'python -m' would run it."
),
)
rcfile = optparse.make_option(
'', '--rcfile', action='store',
help=(
"Specify configuration file. "
"By default '.coveragerc', 'setup.cfg', 'tox.ini', and "
"'pyproject.toml' are tried. [env: COVERAGE_RCFILE]"
),
)
source = optparse.make_option(
'', '--source', action='store', metavar="SRC1,SRC2,...",
help="A list of packages or directories of code to be measured.",
)
timid = optparse.make_option(
'', '--timid', action='store_true',
help=(
"Use a simpler but slower trace method. Try this if you get "
"seemingly impossible results!"
),
)
title = optparse.make_option(
'', '--title', action='store', metavar="TITLE",
help="A text string to use as the title on the HTML.",
)
version = optparse.make_option(
'', '--version', action='store_true',
help="Display version information and exit.",
)
class CoverageOptionParser(optparse.OptionParser, object):
"""Base OptionParser for coverage.py.
Problems don't exit the program.
Defaults are initialized for all options.
"""
def __init__(self, *args, **kwargs):
super(CoverageOptionParser, self).__init__(
add_help_option=False, *args, **kwargs
)
self.set_defaults(
action=None,
append=None,
branch=None,
concurrency=None,
context=None,
debug=None,
directory=None,
fail_under=None,
help=None,
ignore_errors=None,
include=None,
module=None,
omit=None,
contexts=None,
parallel_mode=None,
pylib=None,
rcfile=True,
show_missing=None,
skip_covered=None,
skip_empty=None,
show_contexts=None,
source=None,
timid=None,
title=None,
version=None,
)
self.disable_interspersed_args()
class OptionParserError(Exception):
"""Used to stop the optparse error handler ending the process."""
pass
def parse_args_ok(self, args=None, options=None):
"""Call optparse.parse_args, but return a triple:
(ok, options, args)
"""
try:
options, args = super(CoverageOptionParser, self).parse_args(args, options)
except self.OptionParserError:
return False, None, None
return True, options, args
def error(self, msg):
"""Override optparse.error so sys.exit doesn't get called."""
show_help(msg)
raise self.OptionParserError
class GlobalOptionParser(CoverageOptionParser):
"""Command-line parser for coverage.py global option arguments."""
def __init__(self):
super(GlobalOptionParser, self).__init__()
self.add_options([
Opts.help,
Opts.version,
])
class CmdOptionParser(CoverageOptionParser):
"""Parse one of the new-style commands for coverage.py."""
def __init__(self, action, options, defaults=None, usage=None, description=None):
"""Create an OptionParser for a coverage.py command.
`action` is the slug to put into `options.action`.
`options` is a list of Option's for the command.
`defaults` is a dict of default value for options.
`usage` is the usage string to display in help.
`description` is the description of the command, for the help text.
"""
if usage:
usage = "%prog " + usage
super(CmdOptionParser, self).__init__(
usage=usage,
description=description,
)
self.set_defaults(action=action, **(defaults or {}))
self.add_options(options)
self.cmd = action
def __eq__(self, other):
# A convenience equality, so that I can put strings in unit test
# results, and they will compare equal to objects.
return (other == "<CmdOptionParser:%s>" % self.cmd)
__hash__ = None # This object doesn't need to be hashed.
def get_prog_name(self):
"""Override of an undocumented function in optparse.OptionParser."""
program_name = super(CmdOptionParser, self).get_prog_name()
# Include the sub-command for this parser as part of the command.
return "{command} {subcommand}".format(command=program_name, subcommand=self.cmd)
GLOBAL_ARGS = [
Opts.debug,
Opts.help,
Opts.rcfile,
]
CMDS = {
'annotate': CmdOptionParser(
"annotate",
[
Opts.directory,
Opts.ignore_errors,
Opts.include,
Opts.omit,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description=(
"Make annotated copies of the given files, marking statements that are executed "
"with > and statements that are missed with !."
),
),
'combine': CmdOptionParser(
"combine",
[
Opts.append,
] + GLOBAL_ARGS,
usage="[options] <path1> <path2> ... <pathN>",
description=(
"Combine data from multiple coverage files collected "
"with 'run -p'. The combined results are written to a single "
"file representing the union of the data. The positional "
"arguments are data files or directories containing data files. "
"If no paths are provided, data files in the default data file's "
"directory are combined."
),
),
'debug': CmdOptionParser(
"debug", GLOBAL_ARGS,
usage="<topic>",
description=(
"Display information on the internals of coverage.py, "
"for diagnosing problems. "
"Topics are 'data' to show a summary of the collected data, "
"or 'sys' to show installation information."
),
),
'erase': CmdOptionParser(
"erase", GLOBAL_ARGS,
description="Erase previously collected coverage data.",
),
'help': CmdOptionParser(
"help", GLOBAL_ARGS,
usage="[command]",
description="Describe how to use coverage.py",
),
'html': CmdOptionParser(
"html",
[
Opts.contexts,
Opts.directory,
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.show_contexts,
Opts.skip_covered,
Opts.skip_empty,
Opts.title,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description=(
"Create an HTML report of the coverage of the files. "
"Each file gets its own page, with the source decorated to show "
"executed, excluded, and missed lines."
),
),
'json': CmdOptionParser(
"json",
[
Opts.contexts,
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.output_json,
Opts.json_pretty_print,
Opts.show_contexts,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description="Generate a JSON report of coverage results."
),
'report': CmdOptionParser(
"report",
[
Opts.contexts,
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.show_missing,
Opts.skip_covered,
Opts.skip_empty,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description="Report coverage statistics on modules."
),
'run': CmdOptionParser(
"run",
[
Opts.append,
Opts.branch,
Opts.concurrency,
Opts.context,
Opts.include,
Opts.module,
Opts.omit,
Opts.pylib,
Opts.parallel_mode,
Opts.source,
Opts.timid,
] + GLOBAL_ARGS,
usage="[options] <pyfile> [program options]",
description="Run a Python program, measuring code execution."
),
'xml': CmdOptionParser(
"xml",
[
Opts.fail_under,
Opts.ignore_errors,
Opts.include,
Opts.omit,
Opts.output_xml,
] + GLOBAL_ARGS,
usage="[options] [modules]",
description="Generate an XML report of coverage results."
),
}
def show_help(error=None, topic=None, parser=None):
"""Display an error message, or the named topic."""
assert error or topic or parser
program_path = sys.argv[0]
if program_path.endswith(os.path.sep + '__main__.py'):
# The path is the main module of a package; get that path instead.
program_path = os.path.dirname(program_path)
program_name = os.path.basename(program_path)
if env.WINDOWS:
# entry_points={'console_scripts':...} on Windows makes files
# called coverage.exe, coverage3.exe, and coverage-3.5.exe. These
# invoke coverage-script.py, coverage3-script.py, and
# coverage-3.5-script.py. argv[0] is the .py file, but we want to
# get back to the original form.
auto_suffix = "-script.py"
if program_name.endswith(auto_suffix):
program_name = program_name[:-len(auto_suffix)]
help_params = dict(coverage.__dict__)
help_params['program_name'] = program_name
if CTracer is not None:
help_params['extension_modifier'] = 'with C extension'
else:
help_params['extension_modifier'] = 'without C extension'
if error:
print(error, file=sys.stderr)
print("Use '%s help' for help." % (program_name,), file=sys.stderr)
elif parser:
print(parser.format_help().strip())
print()
else:
help_msg = textwrap.dedent(HELP_TOPICS.get(topic, '')).strip()
if help_msg:
print(help_msg.format(**help_params))
else:
print("Don't know topic %r" % topic)
print("Full documentation is at {__url__}".format(**help_params))
OK, ERR, FAIL_UNDER = 0, 1, 2
class CoverageScript(object):
"""The command-line interface to coverage.py."""
def __init__(self):
self.global_option = False
self.coverage = None
def command_line(self, argv):
"""The bulk of the command line interface to coverage.py.
`argv` is the argument list to process.
Returns 0 if all is well, 1 if something went wrong.
"""
# Collect the command-line options.
if not argv:
show_help(topic='minimum_help')
return OK
# The command syntax we parse depends on the first argument. Global
# switch syntax always starts with an option.
self.global_option = argv[0].startswith('-')
if self.global_option:
parser = GlobalOptionParser()
else:
parser = CMDS.get(argv[0])
if not parser:
show_help("Unknown command: '%s'" % argv[0])
return ERR
argv = argv[1:]
ok, options, args = parser.parse_args_ok(argv)
if not ok:
return ERR
# Handle help and version.
if self.do_help(options, args, parser):
return OK
# Listify the list options.
source = unshell_list(options.source)
omit = unshell_list(options.omit)
include = unshell_list(options.include)
debug = unshell_list(options.debug)
contexts = unshell_list(options.contexts)
# Do something.
self.coverage = Coverage(
data_suffix=options.parallel_mode,
cover_pylib=options.pylib,
timid=options.timid,
branch=options.branch,
config_file=options.rcfile,
source=source,
omit=omit,
include=include,
debug=debug,
concurrency=options.concurrency,
check_preimported=True,
context=options.context,
)
if options.action == "debug":
return self.do_debug(args)
elif options.action == "erase":
self.coverage.erase()
return OK
elif options.action == "run":
return self.do_run(options, args)
elif options.action == "combine":
if options.append:
self.coverage.load()
data_dirs = args or None
self.coverage.combine(data_dirs, strict=True)
self.coverage.save()
return OK
# Remaining actions are reporting, with some common options.
report_args = dict(
morfs=unglob_args(args),
ignore_errors=options.ignore_errors,
omit=omit,
include=include,
contexts=contexts,
)
# We need to be able to import from the current directory, because
# plugins may try to, for example, to read Django settings.
sys.path.insert(0, '')
self.coverage.load()
total = None
if options.action == "report":
total = self.coverage.report(
show_missing=options.show_missing,
skip_covered=options.skip_covered,
skip_empty=options.skip_empty,
**report_args
)
elif options.action == "annotate":
self.coverage.annotate(directory=options.directory, **report_args)
elif options.action == "html":
total = self.coverage.html_report(
directory=options.directory,
title=options.title,
skip_covered=options.skip_covered,
skip_empty=options.skip_empty,
show_contexts=options.show_contexts,
**report_args
)
elif options.action == "xml":
outfile = options.outfile
total = self.coverage.xml_report(outfile=outfile, **report_args)
elif options.action == "json":
outfile = options.outfile
total = self.coverage.json_report(
outfile=outfile,
pretty_print=options.pretty_print,
show_contexts=options.show_contexts,
**report_args
)
if total is not None:
# Apply the command line fail-under options, and then use the config
# value, so we can get fail_under from the config file.
if options.fail_under is not None:
self.coverage.set_option("report:fail_under", options.fail_under)
fail_under = self.coverage.get_option("report:fail_under")
precision = self.coverage.get_option("report:precision")
if should_fail_under(total, fail_under, precision):
return FAIL_UNDER
return OK
def do_help(self, options, args, parser):
"""Deal with help requests.
Return True if it handled the request, False if not.
"""
# Handle help.
if options.help:
if self.global_option:
show_help(topic='help')
else:
show_help(parser=parser)
return True
if options.action == "help":
if args:
for a in args:
parser = CMDS.get(a)
if parser:
show_help(parser=parser)
else:
show_help(topic=a)
else:
show_help(topic='help')
return True
# Handle version.
if options.version:
show_help(topic='version')
return True
return False
def do_run(self, options, args):
"""Implementation of 'coverage run'."""
if not args:
if options.module:
# Specified -m with nothing else.
show_help("No module specified for -m")
return ERR
command_line = self.coverage.get_option("run:command_line")
if command_line is not None:
args = shlex.split(command_line)
if args and args[0] == "-m":
options.module = True
args = args[1:]
if not args:
show_help("Nothing to do.")
return ERR
if options.append and self.coverage.get_option("run:parallel"):
show_help("Can't append to data files in parallel mode.")
return ERR
if options.concurrency == "multiprocessing":
# Can't set other run-affecting command line options with
# multiprocessing.
for opt_name in ['branch', 'include', 'omit', 'pylib', 'source', 'timid']:
# As it happens, all of these options have no default, meaning
# they will be None if they have not been specified.
if getattr(options, opt_name) is not None:
show_help(
"Options affecting multiprocessing must only be specified "
"in a configuration file.\n"
"Remove --{} from the command line.".format(opt_name)
)
return ERR
runner = PyRunner(args, as_module=bool(options.module))
runner.prepare()
if options.append:
self.coverage.load()
# Run the script.
self.coverage.start()
code_ran = True
try:
runner.run()
except NoSource:
code_ran = False
raise
finally:
self.coverage.stop()
if code_ran:
self.coverage.save()
return OK
def do_debug(self, args):
"""Implementation of 'coverage debug'."""
if not args:
show_help("What information would you like: config, data, sys, premain?")
return ERR
for info in args:
if info == 'sys':
sys_info = self.coverage.sys_info()
print(info_header("sys"))
for line in info_formatter(sys_info):
print(" %s" % line)
elif info == 'data':
self.coverage.load()
data = self.coverage.get_data()
print(info_header("data"))
print("path: %s" % self.coverage.get_data().data_filename())
if data:
print("has_arcs: %r" % data.has_arcs())
summary = line_counts(data, fullpath=True)
filenames = sorted(summary.keys())
print("\n%d files:" % len(filenames))
for f in filenames:
line = "%s: %d lines" % (f, summary[f])
plugin = data.file_tracer(f)
if plugin:
line += " [%s]" % plugin
print(line)
else:
print("No data collected")
elif info == 'config':
print(info_header("config"))
config_info = self.coverage.config.__dict__.items()
for line in info_formatter(config_info):
print(" %s" % line)
elif info == "premain":
print(info_header("premain"))
print(short_stack())
else:
show_help("Don't know what you mean by %r" % info)
return ERR
return OK
def unshell_list(s):
"""Turn a command-line argument into a list."""
if not s:
return None
if env.WINDOWS:
# When running coverage.py as coverage.exe, some of the behavior
# of the shell is emulated: wildcards are expanded into a list of
# file names. So you have to single-quote patterns on the command
# line, but (not) helpfully, the single quotes are included in the
# argument, so we have to strip them off here.
s = s.strip("'")
return s.split(',')
def unglob_args(args):
"""Interpret shell wildcards for platforms that need it."""
if env.WINDOWS:
globbed = []
for arg in args:
if '?' in arg or '*' in arg:
globbed.extend(glob.glob(arg))
else:
globbed.append(arg)
args = globbed
return args
HELP_TOPICS = {
'help': """\
Coverage.py, version {__version__} {extension_modifier}
Measure, collect, and report on code coverage in Python programs.
usage: {program_name} <command> [options] [args]
Commands:
annotate Annotate source files with execution information.
combine Combine a number of data files.
erase Erase previously collected coverage data.
help Get help on using coverage.py.
html Create an HTML report.
json Create a JSON report of coverage results.
report Report coverage stats on modules.
run Run a Python program and measure code execution.
xml Create an XML report of coverage results.
Use "{program_name} help <command>" for detailed help on any command.
""",
'minimum_help': """\
Code coverage for Python. Use '{program_name} help' for help.
""",
'version': """\
Coverage.py, version {__version__} {extension_modifier}
""",
}
def main(argv=None):
"""The main entry point to coverage.py.
This is installed as the script entry point.
"""
if argv is None:
argv = sys.argv[1:]
try:
status = CoverageScript().command_line(argv)
except ExceptionDuringRun as err:
# An exception was caught while running the product code. The
# sys.exc_info() return tuple is packed into an ExceptionDuringRun
# exception.
traceback.print_exception(*err.args) # pylint: disable=no-value-for-parameter
status = ERR
except BaseCoverageException as err:
# A controlled error inside coverage.py: print the message to the user.
msg = err.args[0]
if env.PY2:
msg = msg.encode(output_encoding())
print(msg)
status = ERR
except SystemExit as err:
# The user called `sys.exit()`. Exit with their argument, if any.
if err.args:
status = err.args[0]
else:
status = None
return status
# Profiling using ox_profile. Install it from GitHub:
# pip install git+https://github.com/emin63/ox_profile.git
#
# $set_env.py: COVERAGE_PROFILE - Set to use ox_profile.
_profile = os.environ.get("COVERAGE_PROFILE", "")
if _profile: # pragma: debugging
from ox_profile.core.launchers import SimpleLauncher # pylint: disable=import-error
original_main = main
def main(argv=None): # pylint: disable=function-redefined
"""A wrapper around main that profiles."""
try:
profiler = SimpleLauncher.launch()
return original_main(argv)
finally:
data, _ = profiler.query(re_filter='coverage', max_records=100)
print(profiler.show(query=data, limit=100, sep='', col=''))
profiler.cancel()
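One small helper worth illustrating from cmdline.py is `unshell_list`, which turns comma-separated option values into lists and strips the single quotes that survive argument parsing under coverage.exe. A standalone sketch; the real function consults `env.WINDOWS` rather than taking a `windows` parameter (a substitution made here so the example runs anywhere):

```python
def unshell_list(s, windows=False):
    """Turn a command-line argument into a list (sketch of cmdline.py's version)."""
    if not s:
        return None
    if windows:
        # coverage.exe on Windows leaves the single quotes in the argument,
        # so they have to be stripped off before splitting.
        s = s.strip("'")
    return s.split(',')

plain = unshell_list("a.py,b.py")
quoted = unshell_list("'*.py'", windows=True)
empty = unshell_list("")
print(plain, quoted, empty)
```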

third_party/python/coverage/coverage/collector.py vendored Normal file
@@ -0,0 +1,429 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Raw data collector for coverage.py."""
import os
import sys
from coverage import env
from coverage.backward import litems, range # pylint: disable=redefined-builtin
from coverage.debug import short_stack
from coverage.disposition import FileDisposition
from coverage.misc import CoverageException, isolate_module
from coverage.pytracer import PyTracer
os = isolate_module(os)
try:
# Use the C extension code when we can, for speed.
from coverage.tracer import CTracer, CFileDisposition
except ImportError:
# Couldn't import the C extension, maybe it isn't built.
if os.getenv('COVERAGE_TEST_TRACER') == 'c':
# During testing, we use the COVERAGE_TEST_TRACER environment variable
# to indicate that we've fiddled with the environment to test this
# fallback code. If we thought we had a C tracer, but couldn't import
# it, then exit quickly and clearly instead of dribbling confusing
# errors. I'm using sys.exit here instead of an exception because an
# exception here causes all sorts of other noise in unittest.
sys.stderr.write("*** COVERAGE_TEST_TRACER is 'c' but can't import CTracer!\n")
sys.exit(1)
CTracer = None
class Collector(object):
"""Collects trace data.
Creates a Tracer object for each thread, since they track stack
information. Each Tracer points to the same shared data, contributing
traced data points.
When the Collector is started, it creates a Tracer for the current thread,
and installs a function to create Tracers for each new thread started.
When the Collector is stopped, all active Tracers are stopped.
Threads started while the Collector is stopped will never have Tracers
associated with them.
"""
# The stack of active Collectors. Collectors are added here when started,
# and popped when stopped. Collectors on the stack are paused when not
# the top, and resumed when they become the top again.
_collectors = []
# The concurrency settings we support here.
SUPPORTED_CONCURRENCIES = set(["greenlet", "eventlet", "gevent", "thread"])
def __init__(
self, should_trace, check_include, should_start_context, file_mapper,
timid, branch, warn, concurrency,
):
"""Create a collector.
`should_trace` is a function, taking a file name and a frame, and
returning a `coverage.FileDisposition object`.
`check_include` is a function taking a file name and a frame. It returns
a boolean: True if the file should be traced, False if not.
`should_start_context` is a function taking a frame, and returning a
string. If the frame should be the start of a new context, the string
is the new context. If the frame should not be the start of a new
context, return None.
`file_mapper` is a function taking a filename, and returning a Unicode
filename. The result is the name that will be recorded in the data
file.
If `timid` is true, then a slower simpler trace function will be
used. This is important for some environments where manipulation of
tracing functions make the faster more sophisticated trace function not
operate properly.
If `branch` is true, then branches will be measured. This involves
collecting data on which statements followed each other (arcs). Use
`get_arc_data` to get the arc data.
`warn` is a warning function, taking a single string message argument
and an optional slug argument which will be a string or None, to be
used if a warning needs to be issued.
`concurrency` is a list of strings indicating the concurrency libraries
in use. Valid values are "greenlet", "eventlet", "gevent", or "thread"
(the default). Of these four values, only one can be supplied. Other
values are ignored.
"""
self.should_trace = should_trace
self.check_include = check_include
self.should_start_context = should_start_context
self.file_mapper = file_mapper
self.warn = warn
self.branch = branch
self.threading = None
self.covdata = None
self.static_context = None
self.origin = short_stack()
self.concur_id_func = None
self.mapped_file_cache = {}
# We can handle a few concurrency options here, but only one at a time.
these_concurrencies = self.SUPPORTED_CONCURRENCIES.intersection(concurrency)
if len(these_concurrencies) > 1:
raise CoverageException("Conflicting concurrency settings: %s" % concurrency)
self.concurrency = these_concurrencies.pop() if these_concurrencies else ''
try:
if self.concurrency == "greenlet":
import greenlet
self.concur_id_func = greenlet.getcurrent
elif self.concurrency == "eventlet":
import eventlet.greenthread # pylint: disable=import-error,useless-suppression
self.concur_id_func = eventlet.greenthread.getcurrent
elif self.concurrency == "gevent":
import gevent # pylint: disable=import-error,useless-suppression
self.concur_id_func = gevent.getcurrent
elif self.concurrency == "thread" or not self.concurrency:
# It's important to import threading only if we need it. If
# it's imported early, and the program being measured uses
# gevent, then gevent's monkey-patching won't work properly.
import threading
self.threading = threading
else:
raise CoverageException("Don't understand concurrency=%s" % concurrency)
except ImportError:
raise CoverageException(
"Couldn't trace with concurrency=%s, the module isn't installed." % (
self.concurrency,
)
)
self.reset()
if timid:
# Being timid: use the simple Python trace function.
self._trace_class = PyTracer
else:
# Being fast: use the C Tracer if it is available, else the Python
# trace function.
self._trace_class = CTracer or PyTracer
if self._trace_class is CTracer:
self.file_disposition_class = CFileDisposition
self.supports_plugins = True
else:
self.file_disposition_class = FileDisposition
self.supports_plugins = False
def __repr__(self):
return "<Collector at 0x%x: %s>" % (id(self), self.tracer_name())
def use_data(self, covdata, context):
"""Use `covdata` for recording data."""
self.covdata = covdata
self.static_context = context
self.covdata.set_context(self.static_context)
def tracer_name(self):
"""Return the class name of the tracer we're using."""
return self._trace_class.__name__
def _clear_data(self):
"""Clear out existing data, but stay ready for more collection."""
        # We used to use self.data.clear(), but that would remove filename
# keys and data values that were still in use higher up the stack
# when we are called as part of switch_context.
for d in self.data.values():
d.clear()
for tracer in self.tracers:
tracer.reset_activity()
def reset(self):
"""Clear collected data, and prepare to collect more."""
# A dictionary mapping file names to dicts with line number keys (if not
# branch coverage), or mapping file names to dicts with line number
# pairs as keys (if branch coverage).
self.data = {}
# A dictionary mapping file names to file tracer plugin names that will
# handle them.
self.file_tracers = {}
# The .should_trace_cache attribute is a cache from file names to
# coverage.FileDisposition objects, or None. When a file is first
# considered for tracing, a FileDisposition is obtained from
# Coverage.should_trace. Its .trace attribute indicates whether the
# file should be traced or not. If it should be, a plugin with dynamic
# file names can decide not to trace it based on the dynamic file name
# being excluded by the inclusion rules, in which case the
# FileDisposition will be replaced by None in the cache.
if env.PYPY:
import __pypy__ # pylint: disable=import-error
# Alex Gaynor said:
# should_trace_cache is a strictly growing key: once a key is in
# it, it never changes. Further, the keys used to access it are
# generally constant, given sufficient context. That is to say, at
# any given point _trace() is called, pypy is able to know the key.
# This is because the key is determined by the physical source code
# line, and that's invariant with the call site.
#
# This property of a dict with immutable keys, combined with
# call-site-constant keys is a match for PyPy's module dict,
# which is optimized for such workloads.
#
# This gives a 20% benefit on the workload described at
# https://bitbucket.org/pypy/pypy/issue/1871/10x-slower-than-cpython-under-coverage
self.should_trace_cache = __pypy__.newdict("module")
else:
self.should_trace_cache = {}
# Our active Tracers.
self.tracers = []
self._clear_data()
def _start_tracer(self):
"""Start a new Tracer object, and store it in self.tracers."""
tracer = self._trace_class()
tracer.data = self.data
tracer.trace_arcs = self.branch
tracer.should_trace = self.should_trace
tracer.should_trace_cache = self.should_trace_cache
tracer.warn = self.warn
if hasattr(tracer, 'concur_id_func'):
tracer.concur_id_func = self.concur_id_func
elif self.concur_id_func:
raise CoverageException(
"Can't support concurrency=%s with %s, only threads are supported" % (
self.concurrency, self.tracer_name(),
)
)
if hasattr(tracer, 'file_tracers'):
tracer.file_tracers = self.file_tracers
if hasattr(tracer, 'threading'):
tracer.threading = self.threading
if hasattr(tracer, 'check_include'):
tracer.check_include = self.check_include
if hasattr(tracer, 'should_start_context'):
tracer.should_start_context = self.should_start_context
tracer.switch_context = self.switch_context
fn = tracer.start()
self.tracers.append(tracer)
return fn
# The trace function has to be set individually on each thread before
# execution begins. Ironically, the only support the threading module has
# for running code before the thread main is the tracing function. So we
# install this as a trace function, and the first time it's called, it does
# the real trace installation.
def _installation_trace(self, frame, event, arg):
"""Called on new threads, installs the real tracer."""
# Remove ourselves as the trace function.
sys.settrace(None)
# Install the real tracer.
fn = self._start_tracer()
# Invoke the real trace function with the current event, to be sure
# not to lose an event.
if fn:
fn = fn(frame, event, arg)
# Return the new trace function to continue tracing in this scope.
return fn
def start(self):
"""Start collecting trace information."""
if self._collectors:
self._collectors[-1].pause()
self.tracers = []
# Check to see whether we had a fullcoverage tracer installed. If so,
# get the stack frames it stashed away for us.
traces0 = []
fn0 = sys.gettrace()
if fn0:
tracer0 = getattr(fn0, '__self__', None)
if tracer0:
traces0 = getattr(tracer0, 'traces', [])
try:
# Install the tracer on this thread.
fn = self._start_tracer()
except:
if self._collectors:
self._collectors[-1].resume()
raise
# If _start_tracer succeeded, then we add ourselves to the global
# stack of collectors.
self._collectors.append(self)
# Replay all the events from fullcoverage into the new trace function.
for args in traces0:
(frame, event, arg), lineno = args
try:
fn(frame, event, arg, lineno=lineno)
except TypeError:
raise Exception("fullcoverage must be run with the C trace function.")
# Install our installation tracer in threading, to jump-start other
# threads.
if self.threading:
self.threading.settrace(self._installation_trace)
def stop(self):
"""Stop collecting trace information."""
assert self._collectors
if self._collectors[-1] is not self:
print("self._collectors:")
for c in self._collectors:
print(" {!r}\n{}".format(c, c.origin))
assert self._collectors[-1] is self, (
"Expected current collector to be %r, but it's %r" % (self, self._collectors[-1])
)
self.pause()
# Remove this Collector from the stack, and resume the one underneath
# (if any).
self._collectors.pop()
if self._collectors:
self._collectors[-1].resume()
def pause(self):
"""Pause tracing, but be prepared to `resume`."""
for tracer in self.tracers:
tracer.stop()
stats = tracer.get_stats()
if stats:
print("\nCoverage.py tracer stats:")
for k in sorted(stats.keys()):
print("%20s: %s" % (k, stats[k]))
if self.threading:
self.threading.settrace(None)
def resume(self):
"""Resume tracing after a `pause`."""
for tracer in self.tracers:
tracer.start()
if self.threading:
self.threading.settrace(self._installation_trace)
else:
self._start_tracer()
def _activity(self):
"""Has any activity been traced?
Returns a boolean, True if any trace function was invoked.
"""
return any(tracer.activity() for tracer in self.tracers)
def switch_context(self, new_context):
"""Switch to a new dynamic context."""
self.flush_data()
if self.static_context:
context = self.static_context
if new_context:
context += "|" + new_context
else:
context = new_context
self.covdata.set_context(context)
def cached_mapped_file(self, filename):
"""A locally cached version of file names mapped through file_mapper."""
key = (type(filename), filename)
try:
return self.mapped_file_cache[key]
except KeyError:
return self.mapped_file_cache.setdefault(key, self.file_mapper(filename))
def mapped_file_dict(self, d):
"""Return a dict like d, but with keys modified by file_mapper."""
# The call to litems() ensures that the GIL protects the dictionary
# iterator against concurrent modifications by tracers running
# in other threads. We try three times in case of concurrent
# access, hoping to get a clean copy.
runtime_err = None
for _ in range(3):
try:
items = litems(d)
except RuntimeError as ex:
runtime_err = ex
else:
break
else:
raise runtime_err
return dict((self.cached_mapped_file(k), v) for k, v in items if v)
def flush_data(self):
"""Save the collected data to our associated `CoverageData`.
Data may have also been saved along the way. This forces the
last of the data to be saved.
Returns True if there was data to save, False if not.
"""
if not self._activity():
return False
if self.branch:
self.covdata.add_arcs(self.mapped_file_dict(self.data))
else:
self.covdata.add_lines(self.mapped_file_dict(self.data))
self.covdata.add_file_tracers(self.mapped_file_dict(self.file_tracers))
self._clear_data()
return True
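The Collector above ultimately rides on CPython's `sys.settrace` hook. As a stdlib-only sketch (not the vendored implementation; `tracer` and `measured` are hypothetical names), this is the mechanism PyTracer builds on in miniature:

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record each executed line, like PyTracer does in miniature.
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer  # returning the tracer keeps line events flowing

def measured():
    x = 1
    y = x + 1
    return y

sys.settrace(tracer)
try:
    result = measured()
finally:
    sys.settrace(None)  # always uninstall, as Collector.stop() does

assert result == 2
assert len(executed) >= 3  # the three lines of measured() were seen
```

Returning the trace function from the `call` event is what keeps per-line events flowing for that frame, which is why `_installation_trace` above takes care to return the result of the real trace function.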

third_party/python/coverage/coverage/config.py (vendored, new file, 555 lines)
@@ -0,0 +1,555 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Config file for coverage.py"""
import collections
import copy
import os
import os.path
import re
from coverage import env
from coverage.backward import configparser, iitems, string_class
from coverage.misc import contract, CoverageException, isolate_module
from coverage.misc import substitute_variables
from coverage.tomlconfig import TomlConfigParser, TomlDecodeError
os = isolate_module(os)
class HandyConfigParser(configparser.RawConfigParser):
"""Our specialization of ConfigParser."""
def __init__(self, our_file):
"""Create the HandyConfigParser.
`our_file` is True if this config file is specifically for coverage,
False if we are examining another config file (tox.ini, setup.cfg)
for possible settings.
"""
configparser.RawConfigParser.__init__(self)
self.section_prefixes = ["coverage:"]
if our_file:
self.section_prefixes.append("")
def read(self, filenames, encoding=None):
"""Read a file name as UTF-8 configuration data."""
kwargs = {}
if env.PYVERSION >= (3, 2):
kwargs['encoding'] = encoding or "utf-8"
return configparser.RawConfigParser.read(self, filenames, **kwargs)
def has_option(self, section, option):
for section_prefix in self.section_prefixes:
real_section = section_prefix + section
has = configparser.RawConfigParser.has_option(self, real_section, option)
if has:
return has
return False
def has_section(self, section):
for section_prefix in self.section_prefixes:
real_section = section_prefix + section
has = configparser.RawConfigParser.has_section(self, real_section)
if has:
return real_section
return False
def options(self, section):
for section_prefix in self.section_prefixes:
real_section = section_prefix + section
if configparser.RawConfigParser.has_section(self, real_section):
return configparser.RawConfigParser.options(self, real_section)
raise configparser.NoSectionError
def get_section(self, section):
"""Get the contents of a section, as a dictionary."""
d = {}
for opt in self.options(section):
d[opt] = self.get(section, opt)
return d
def get(self, section, option, *args, **kwargs): # pylint: disable=arguments-differ
"""Get a value, replacing environment variables also.
The arguments are the same as `RawConfigParser.get`, but in the found
value, ``$WORD`` or ``${WORD}`` are replaced by the value of the
environment variable ``WORD``.
Returns the finished value.
"""
for section_prefix in self.section_prefixes:
real_section = section_prefix + section
if configparser.RawConfigParser.has_option(self, real_section, option):
break
else:
raise configparser.NoOptionError
v = configparser.RawConfigParser.get(self, real_section, option, *args, **kwargs)
v = substitute_variables(v, os.environ)
return v
def getlist(self, section, option):
"""Read a list of strings.
The value of `section` and `option` is treated as a comma- and newline-
separated list of strings. Each value is stripped of whitespace.
Returns the list of strings.
"""
value_list = self.get(section, option)
values = []
for value_line in value_list.split('\n'):
for value in value_line.split(','):
value = value.strip()
if value:
values.append(value)
return values
def getregexlist(self, section, option):
"""Read a list of full-line regexes.
The value of `section` and `option` is treated as a newline-separated
list of regexes. Each value is stripped of whitespace.
Returns the list of strings.
"""
line_list = self.get(section, option)
value_list = []
for value in line_list.splitlines():
value = value.strip()
try:
re.compile(value)
except re.error as e:
raise CoverageException(
"Invalid [%s].%s value %r: %s" % (section, option, value, e)
)
if value:
value_list.append(value)
return value_list
# The default line exclusion regexes.
DEFAULT_EXCLUDE = [
r'#\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(cover|COVER)',
]
# The default partial branch regexes, to be modified by the user.
DEFAULT_PARTIAL = [
r'#\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(branch|BRANCH)',
]
# The default partial branch regexes, based on Python semantics.
# These are any Python branching constructs that can't actually execute all
# their branches.
DEFAULT_PARTIAL_ALWAYS = [
'while (True|1|False|0):',
'if (True|1|False|0):',
]
class CoverageConfig(object):
"""Coverage.py configuration.
The attributes of this class are the various settings that control the
operation of coverage.py.
"""
# pylint: disable=too-many-instance-attributes
def __init__(self):
"""Initialize the configuration attributes to their defaults."""
# Metadata about the config.
# We tried to read these config files.
self.attempted_config_files = []
# We did read these config files, but maybe didn't find any content for us.
self.config_files_read = []
# The file that gave us our configuration.
self.config_file = None
self._config_contents = None
# Defaults for [run] and [report]
self._include = None
self._omit = None
# Defaults for [run]
self.branch = False
self.command_line = None
self.concurrency = None
self.context = None
self.cover_pylib = False
self.data_file = ".coverage"
self.debug = []
self.disable_warnings = []
self.dynamic_context = None
self.note = None
self.parallel = False
self.plugins = []
self.relative_files = False
self.run_include = None
self.run_omit = None
self.source = None
self.timid = False
self._crash = None
# Defaults for [report]
self.exclude_list = DEFAULT_EXCLUDE[:]
self.fail_under = 0.0
self.ignore_errors = False
self.report_include = None
self.report_omit = None
self.partial_always_list = DEFAULT_PARTIAL_ALWAYS[:]
self.partial_list = DEFAULT_PARTIAL[:]
self.precision = 0
self.report_contexts = None
self.show_missing = False
self.skip_covered = False
self.skip_empty = False
# Defaults for [html]
self.extra_css = None
self.html_dir = "htmlcov"
self.html_title = "Coverage report"
self.show_contexts = False
# Defaults for [xml]
self.xml_output = "coverage.xml"
self.xml_package_depth = 99
# Defaults for [json]
self.json_output = "coverage.json"
self.json_pretty_print = False
self.json_show_contexts = False
# Defaults for [paths]
self.paths = collections.OrderedDict()
# Options for plugins
self.plugin_options = {}
MUST_BE_LIST = [
"debug", "concurrency", "plugins",
"report_omit", "report_include",
"run_omit", "run_include",
]
def from_args(self, **kwargs):
"""Read config values from `kwargs`."""
for k, v in iitems(kwargs):
if v is not None:
if k in self.MUST_BE_LIST and isinstance(v, string_class):
v = [v]
setattr(self, k, v)
@contract(filename=str)
def from_file(self, filename, our_file):
"""Read configuration from a .rc file.
`filename` is a file name to read.
`our_file` is True if this config file is specifically for coverage,
False if we are examining another config file (tox.ini, setup.cfg)
for possible settings.
Returns True or False, whether the file could be read, and it had some
coverage.py settings in it.
"""
_, ext = os.path.splitext(filename)
if ext == '.toml':
cp = TomlConfigParser(our_file)
else:
cp = HandyConfigParser(our_file)
self.attempted_config_files.append(filename)
try:
files_read = cp.read(filename)
except (configparser.Error, TomlDecodeError) as err:
raise CoverageException("Couldn't read config file %s: %s" % (filename, err))
if not files_read:
return False
self.config_files_read.extend(map(os.path.abspath, files_read))
any_set = False
try:
for option_spec in self.CONFIG_FILE_OPTIONS:
was_set = self._set_attr_from_config_option(cp, *option_spec)
if was_set:
any_set = True
except ValueError as err:
raise CoverageException("Couldn't read config file %s: %s" % (filename, err))
# Check that there are no unrecognized options.
all_options = collections.defaultdict(set)
for option_spec in self.CONFIG_FILE_OPTIONS:
section, option = option_spec[1].split(":")
all_options[section].add(option)
for section, options in iitems(all_options):
real_section = cp.has_section(section)
if real_section:
for unknown in set(cp.options(section)) - options:
raise CoverageException(
"Unrecognized option '[%s] %s=' in config file %s" % (
real_section, unknown, filename
)
)
# [paths] is special
if cp.has_section('paths'):
for option in cp.options('paths'):
self.paths[option] = cp.getlist('paths', option)
any_set = True
# plugins can have options
for plugin in self.plugins:
if cp.has_section(plugin):
self.plugin_options[plugin] = cp.get_section(plugin)
any_set = True
# Was this file used as a config file? If it's specifically our file,
# then it was used. If we're piggybacking on someone else's file,
# then it was only used if we found some settings in it.
if our_file:
used = True
else:
used = any_set
if used:
self.config_file = os.path.abspath(filename)
with open(filename) as f:
self._config_contents = f.read()
return used
def copy(self):
"""Return a copy of the configuration."""
return copy.deepcopy(self)
CONFIG_FILE_OPTIONS = [
# These are *args for _set_attr_from_config_option:
# (attr, where, type_="")
#
# attr is the attribute to set on the CoverageConfig object.
# where is the section:name to read from the configuration file.
# type_ is the optional type to apply, by using .getTYPE to read the
# configuration value from the file.
# [run]
('branch', 'run:branch', 'boolean'),
('command_line', 'run:command_line'),
('concurrency', 'run:concurrency', 'list'),
('context', 'run:context'),
('cover_pylib', 'run:cover_pylib', 'boolean'),
('data_file', 'run:data_file'),
('debug', 'run:debug', 'list'),
('disable_warnings', 'run:disable_warnings', 'list'),
('dynamic_context', 'run:dynamic_context'),
('note', 'run:note'),
('parallel', 'run:parallel', 'boolean'),
('plugins', 'run:plugins', 'list'),
('relative_files', 'run:relative_files', 'boolean'),
('run_include', 'run:include', 'list'),
('run_omit', 'run:omit', 'list'),
('source', 'run:source', 'list'),
('timid', 'run:timid', 'boolean'),
('_crash', 'run:_crash'),
# [report]
('exclude_list', 'report:exclude_lines', 'regexlist'),
('fail_under', 'report:fail_under', 'float'),
('ignore_errors', 'report:ignore_errors', 'boolean'),
('partial_always_list', 'report:partial_branches_always', 'regexlist'),
('partial_list', 'report:partial_branches', 'regexlist'),
('precision', 'report:precision', 'int'),
('report_contexts', 'report:contexts', 'list'),
('report_include', 'report:include', 'list'),
('report_omit', 'report:omit', 'list'),
('show_missing', 'report:show_missing', 'boolean'),
('skip_covered', 'report:skip_covered', 'boolean'),
('skip_empty', 'report:skip_empty', 'boolean'),
('sort', 'report:sort'),
# [html]
('extra_css', 'html:extra_css'),
('html_dir', 'html:directory'),
('html_title', 'html:title'),
('show_contexts', 'html:show_contexts', 'boolean'),
# [xml]
('xml_output', 'xml:output'),
('xml_package_depth', 'xml:package_depth', 'int'),
# [json]
('json_output', 'json:output'),
('json_pretty_print', 'json:pretty_print', 'boolean'),
('json_show_contexts', 'json:show_contexts', 'boolean'),
]
def _set_attr_from_config_option(self, cp, attr, where, type_=''):
"""Set an attribute on self if it exists in the ConfigParser.
Returns True if the attribute was set.
"""
section, option = where.split(":")
if cp.has_option(section, option):
method = getattr(cp, 'get' + type_)
setattr(self, attr, method(section, option))
return True
return False
def get_plugin_options(self, plugin):
"""Get a dictionary of options for the plugin named `plugin`."""
return self.plugin_options.get(plugin, {})
def set_option(self, option_name, value):
"""Set an option in the configuration.
`option_name` is a colon-separated string indicating the section and
option name. For example, the ``branch`` option in the ``[run]``
section of the config file would be indicated with `"run:branch"`.
`value` is the new value for the option.
"""
# Special-cased options.
if option_name == "paths":
self.paths = value
return
# Check all the hard-coded options.
for option_spec in self.CONFIG_FILE_OPTIONS:
attr, where = option_spec[:2]
if where == option_name:
setattr(self, attr, value)
return
# See if it's a plugin option.
plugin_name, _, key = option_name.partition(":")
if key and plugin_name in self.plugins:
self.plugin_options.setdefault(plugin_name, {})[key] = value
return
# If we get here, we didn't find the option.
raise CoverageException("No such option: %r" % option_name)
def get_option(self, option_name):
"""Get an option from the configuration.
`option_name` is a colon-separated string indicating the section and
option name. For example, the ``branch`` option in the ``[run]``
section of the config file would be indicated with `"run:branch"`.
Returns the value of the option.
"""
# Special-cased options.
if option_name == "paths":
return self.paths
# Check all the hard-coded options.
for option_spec in self.CONFIG_FILE_OPTIONS:
attr, where = option_spec[:2]
if where == option_name:
return getattr(self, attr)
# See if it's a plugin option.
plugin_name, _, key = option_name.partition(":")
if key and plugin_name in self.plugins:
return self.plugin_options.get(plugin_name, {}).get(key)
# If we get here, we didn't find the option.
raise CoverageException("No such option: %r" % option_name)
def config_files_to_try(config_file):
"""What config files should we try to read?
Returns a list of tuples:
(filename, is_our_file, was_file_specified)
"""
# Some API users were specifying ".coveragerc" to mean the same as
# True, so make it so.
if config_file == ".coveragerc":
config_file = True
specified_file = (config_file is not True)
if not specified_file:
# No file was specified. Check COVERAGE_RCFILE.
config_file = os.environ.get('COVERAGE_RCFILE')
if config_file:
specified_file = True
if not specified_file:
# Still no file specified. Default to .coveragerc
config_file = ".coveragerc"
files_to_try = [
(config_file, True, specified_file),
("setup.cfg", False, False),
("tox.ini", False, False),
("pyproject.toml", False, False),
]
return files_to_try
def read_coverage_config(config_file, **kwargs):
"""Read the coverage.py configuration.
Arguments:
config_file: a boolean or string, see the `Coverage` class for the
tricky details.
all others: keyword arguments from the `Coverage` class, used for
setting values in the configuration.
Returns:
config:
config is a CoverageConfig object read from the appropriate
configuration file.
"""
# Build the configuration from a number of sources:
# 1) defaults:
config = CoverageConfig()
# 2) from a file:
if config_file:
files_to_try = config_files_to_try(config_file)
for fname, our_file, specified_file in files_to_try:
config_read = config.from_file(fname, our_file=our_file)
if config_read:
break
if specified_file:
raise CoverageException("Couldn't read '%s' as a config file" % fname)
# $set_env.py: COVERAGE_DEBUG - Options for --debug.
# 3) from environment variables:
env_data_file = os.environ.get('COVERAGE_FILE')
if env_data_file:
config.data_file = env_data_file
debugs = os.environ.get('COVERAGE_DEBUG')
if debugs:
config.debug.extend(d.strip() for d in debugs.split(","))
# 4) from constructor arguments:
config.from_args(**kwargs)
# Once all the config has been collected, there's a little post-processing
# to do.
config.data_file = os.path.expanduser(config.data_file)
config.html_dir = os.path.expanduser(config.html_dir)
config.xml_output = os.path.expanduser(config.xml_output)
config.paths = collections.OrderedDict(
(k, [os.path.expanduser(f) for f in v])
for k, v in config.paths.items()
)
return config
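The section-prefix handling above is what lets a shared file such as setup.cfg carry coverage settings under `[coverage:run]` while .coveragerc uses plain `[run]`. A stdlib-only sketch of that fallback (a hypothetical helper, not the vendored class):

```python
import configparser

SHARED_CFG = """
[coverage:run]
branch = true
"""

cp = configparser.RawConfigParser()
cp.read_string(SHARED_CFG)

def has_option(cp, section, option, prefixes=("coverage:", "")):
    # Mirror HandyConfigParser.has_option: try each prefixed section name.
    for prefix in prefixes:
        real_section = prefix + section
        if cp.has_section(real_section) and cp.has_option(real_section, option):
            return True
    return False

assert has_option(cp, "run", "branch")
assert not has_option(cp, "run", "timid")
```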

third_party/python/coverage/coverage/context.py (vendored, new file, 91 lines)
@@ -0,0 +1,91 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Determine contexts for coverage.py"""
def combine_context_switchers(context_switchers):
"""Create a single context switcher from multiple switchers.
`context_switchers` is a list of functions that take a frame as an
argument and return a string to use as the new context label.
Returns a function that composites `context_switchers` functions, or None
if `context_switchers` is an empty list.
When invoked, the combined switcher calls `context_switchers` one-by-one
until a string is returned. The combined switcher returns None if all
`context_switchers` return None.
"""
if not context_switchers:
return None
if len(context_switchers) == 1:
return context_switchers[0]
def should_start_context(frame):
"""The combiner for multiple context switchers."""
for switcher in context_switchers:
new_context = switcher(frame)
if new_context is not None:
return new_context
return None
return should_start_context
def should_start_context_test_function(frame):
"""Is this frame calling a test_* function?"""
co_name = frame.f_code.co_name
if co_name.startswith("test") or co_name == "runTest":
return qualname_from_frame(frame)
return None
def qualname_from_frame(frame):
"""Get a qualified name for the code running in `frame`."""
co = frame.f_code
fname = co.co_name
method = None
if co.co_argcount and co.co_varnames[0] == "self":
self = frame.f_locals["self"]
method = getattr(self, fname, None)
if method is None:
func = frame.f_globals.get(fname)
if func is None:
return None
return func.__module__ + '.' + fname
func = getattr(method, '__func__', None)
if func is None:
cls = self.__class__
return cls.__module__ + '.' + cls.__name__ + "." + fname
if hasattr(func, '__qualname__'):
qname = func.__module__ + '.' + func.__qualname__
else:
for cls in getattr(self.__class__, '__mro__', ()):
f = cls.__dict__.get(fname, None)
if f is None:
continue
if f is func:
qname = cls.__module__ + '.' + cls.__name__ + "." + fname
break
else:
# Support for old-style classes.
def mro(bases):
for base in bases:
f = base.__dict__.get(fname, None)
if f is func:
return base.__module__ + '.' + base.__name__ + "." + fname
for base in bases:
qname = mro(base.__bases__)
if qname is not None:
return qname
return None
qname = mro([self.__class__])
if qname is None:
qname = func.__module__ + '.' + fname
return qname
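`qualname_from_frame` walks the MRO because older Pythons gave no direct way to name the method a frame is running. A rough stdlib-only approximation (hypothetical helper) using `co_qualname` where available:

```python
import sys

def qualname_of_caller():
    # Approximate qualname_from_frame: 3.11+ code objects carry
    # co_qualname directly; fall back to the bare co_name otherwise.
    frame = sys._getframe(1)
    code = frame.f_code
    qual = getattr(code, "co_qualname", code.co_name)
    return frame.f_globals.get("__name__", "?") + "." + qual

class SampleTest:
    def test_something(self):
        return qualname_of_caller()

name = SampleTest().test_something()
assert name.endswith("test_something")
```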

third_party/python/coverage/coverage/control.py (vendored, new file, 1110 lines)
Diff not shown because of its large size.

third_party/python/coverage/coverage/ctracer/datastack.c (vendored, new file, 50 lines)
@@ -0,0 +1,50 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#include "util.h"
#include "datastack.h"
#define STACK_DELTA 20
int
DataStack_init(Stats *pstats, DataStack *pdata_stack)
{
pdata_stack->depth = -1;
pdata_stack->stack = NULL;
pdata_stack->alloc = 0;
return RET_OK;
}
void
DataStack_dealloc(Stats *pstats, DataStack *pdata_stack)
{
int i;
for (i = 0; i < pdata_stack->alloc; i++) {
Py_XDECREF(pdata_stack->stack[i].file_data);
}
PyMem_Free(pdata_stack->stack);
}
int
DataStack_grow(Stats *pstats, DataStack *pdata_stack)
{
pdata_stack->depth++;
if (pdata_stack->depth >= pdata_stack->alloc) {
/* We've outgrown our data_stack array: make it bigger. */
int bigger = pdata_stack->alloc + STACK_DELTA;
DataStackEntry * bigger_data_stack = PyMem_Realloc(pdata_stack->stack, bigger * sizeof(DataStackEntry));
STATS( pstats->stack_reallocs++; )
if (bigger_data_stack == NULL) {
PyErr_NoMemory();
pdata_stack->depth--;
return RET_ERROR;
}
/* Zero the new entries. */
memset(bigger_data_stack + pdata_stack->alloc, 0, STACK_DELTA * sizeof(DataStackEntry));
pdata_stack->stack = bigger_data_stack;
pdata_stack->alloc = bigger;
}
return RET_OK;
}
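`DataStack_grow` widens the stack in fixed `STACK_DELTA`-sized steps rather than per entry, trading a little memory for fewer reallocations. The same policy in a small Python sketch (a hypothetical class, with a list standing in for the realloc'd C array):

```python
STACK_DELTA = 20

class DataStack:
    def __init__(self):
        self.depth = -1         # index of the last-used entry
        self.alloc = 0          # entries currently allocated
        self.stack = []         # a list stands in for the realloc'd C array

    def grow(self):
        # Mirror DataStack_grow: bump depth, widen in STACK_DELTA steps
        # whenever depth outruns the current allocation.
        self.depth += 1
        if self.depth >= self.alloc:
            self.stack.extend([None] * STACK_DELTA)
            self.alloc += STACK_DELTA

stack = DataStack()
for _ in range(25):
    stack.grow()
assert stack.depth == 24
assert stack.alloc == 40   # grew twice: to 20, then to 40
```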

third_party/python/coverage/coverage/ctracer/datastack.h (vendored, new file, 45 lines)
@@ -0,0 +1,45 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_DATASTACK_H
#define _COVERAGE_DATASTACK_H
#include "util.h"
#include "stats.h"
/* An entry on the data stack. For each call frame, we need to record all
* the information needed for CTracer_handle_line to operate as quickly as
* possible.
*/
typedef struct DataStackEntry {
/* The current file_data dictionary. Owned. */
PyObject * file_data;
/* The disposition object for this frame. A borrowed instance of CFileDisposition. */
PyObject * disposition;
/* The FileTracer handling this frame, or None if it's Python. Borrowed. */
PyObject * file_tracer;
/* The line number of the last line recorded, for tracing arcs.
-1 means there was no previous line, as when entering a code object.
*/
int last_line;
BOOL started_context;
} DataStackEntry;
/* A data stack is a dynamically allocated vector of DataStackEntry's. */
typedef struct DataStack {
int depth; /* The index of the last-used entry in stack. */
int alloc; /* number of entries allocated at stack. */
/* The file data at each level, or NULL if not recording. */
DataStackEntry * stack;
} DataStack;
int DataStack_init(Stats * pstats, DataStack *pdata_stack);
void DataStack_dealloc(Stats * pstats, DataStack *pdata_stack);
int DataStack_grow(Stats * pstats, DataStack *pdata_stack);
#endif /* _COVERAGE_DATASTACK_H */

third_party/python/coverage/coverage/ctracer/filedisp.c (vendored, new file, +85 lines)

@@ -0,0 +1,85 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#include "util.h"
#include "filedisp.h"
void
CFileDisposition_dealloc(CFileDisposition *self)
{
Py_XDECREF(self->original_filename);
Py_XDECREF(self->canonical_filename);
Py_XDECREF(self->source_filename);
Py_XDECREF(self->trace);
Py_XDECREF(self->reason);
Py_XDECREF(self->file_tracer);
Py_XDECREF(self->has_dynamic_filename);
}
static PyMemberDef
CFileDisposition_members[] = {
{ "original_filename", T_OBJECT, offsetof(CFileDisposition, original_filename), 0,
PyDoc_STR("") },
{ "canonical_filename", T_OBJECT, offsetof(CFileDisposition, canonical_filename), 0,
PyDoc_STR("") },
{ "source_filename", T_OBJECT, offsetof(CFileDisposition, source_filename), 0,
PyDoc_STR("") },
{ "trace", T_OBJECT, offsetof(CFileDisposition, trace), 0,
PyDoc_STR("") },
{ "reason", T_OBJECT, offsetof(CFileDisposition, reason), 0,
PyDoc_STR("") },
{ "file_tracer", T_OBJECT, offsetof(CFileDisposition, file_tracer), 0,
PyDoc_STR("") },
{ "has_dynamic_filename", T_OBJECT, offsetof(CFileDisposition, has_dynamic_filename), 0,
PyDoc_STR("") },
{ NULL }
};
PyTypeObject
CFileDispositionType = {
MyType_HEAD_INIT
"coverage.CFileDispositionType", /*tp_name*/
sizeof(CFileDisposition), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)CFileDisposition_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
"CFileDisposition objects", /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
0, /* tp_methods */
CFileDisposition_members, /* tp_members */
0, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
0, /* tp_init */
0, /* tp_alloc */
0, /* tp_new */
};

third_party/python/coverage/coverage/ctracer/filedisp.h (vendored, new file, +26 lines)

@@ -0,0 +1,26 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_FILEDISP_H
#define _COVERAGE_FILEDISP_H
#include "util.h"
#include "structmember.h"
typedef struct CFileDisposition {
PyObject_HEAD
PyObject * original_filename;
PyObject * canonical_filename;
PyObject * source_filename;
PyObject * trace;
PyObject * reason;
PyObject * file_tracer;
PyObject * has_dynamic_filename;
} CFileDisposition;
void CFileDisposition_dealloc(CFileDisposition *self);
extern PyTypeObject CFileDispositionType;
#endif /* _COVERAGE_FILEDISP_H */

third_party/python/coverage/coverage/ctracer/module.c (vendored, new file, +108 lines)

@@ -0,0 +1,108 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#include "util.h"
#include "tracer.h"
#include "filedisp.h"
/* Module definition */
#define MODULE_DOC PyDoc_STR("Fast coverage tracer.")
#if PY_MAJOR_VERSION >= 3
static PyModuleDef
moduledef = {
PyModuleDef_HEAD_INIT,
"coverage.tracer",
MODULE_DOC,
-1,
NULL, /* methods */
NULL,
NULL, /* traverse */
NULL, /* clear */
NULL
};
PyObject *
PyInit_tracer(void)
{
PyObject * mod = PyModule_Create(&moduledef);
if (mod == NULL) {
return NULL;
}
if (CTracer_intern_strings() < 0) {
return NULL;
}
/* Initialize CTracer */
CTracerType.tp_new = PyType_GenericNew;
if (PyType_Ready(&CTracerType) < 0) {
Py_DECREF(mod);
return NULL;
}
Py_INCREF(&CTracerType);
if (PyModule_AddObject(mod, "CTracer", (PyObject *)&CTracerType) < 0) {
Py_DECREF(mod);
Py_DECREF(&CTracerType);
return NULL;
}
/* Initialize CFileDisposition */
CFileDispositionType.tp_new = PyType_GenericNew;
if (PyType_Ready(&CFileDispositionType) < 0) {
Py_DECREF(mod);
Py_DECREF(&CTracerType);
return NULL;
}
Py_INCREF(&CFileDispositionType);
if (PyModule_AddObject(mod, "CFileDisposition", (PyObject *)&CFileDispositionType) < 0) {
Py_DECREF(mod);
Py_DECREF(&CTracerType);
Py_DECREF(&CFileDispositionType);
return NULL;
}
return mod;
}
#else
void
inittracer(void)
{
PyObject * mod;
mod = Py_InitModule3("coverage.tracer", NULL, MODULE_DOC);
if (mod == NULL) {
return;
}
if (CTracer_intern_strings() < 0) {
return;
}
/* Initialize CTracer */
CTracerType.tp_new = PyType_GenericNew;
if (PyType_Ready(&CTracerType) < 0) {
return;
}
Py_INCREF(&CTracerType);
PyModule_AddObject(mod, "CTracer", (PyObject *)&CTracerType);
/* Initialize CFileDisposition */
CFileDispositionType.tp_new = PyType_GenericNew;
if (PyType_Ready(&CFileDispositionType) < 0) {
return;
}
Py_INCREF(&CFileDispositionType);
PyModule_AddObject(mod, "CFileDisposition", (PyObject *)&CFileDispositionType);
}
#endif /* Py3k */

third_party/python/coverage/coverage/ctracer/stats.h (vendored, new file, +31 lines)

@@ -0,0 +1,31 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_STATS_H
#define _COVERAGE_STATS_H
#include "util.h"
#if COLLECT_STATS
#define STATS(x) x
#else
#define STATS(x)
#endif
typedef struct Stats {
unsigned int calls; /* Need at least one member, but the rest only if needed. */
#if COLLECT_STATS
unsigned int lines;
unsigned int returns;
unsigned int exceptions;
unsigned int others;
unsigned int files;
unsigned int missed_returns;
unsigned int stack_reallocs;
unsigned int errors;
unsigned int pycalls;
unsigned int start_context_calls;
#endif
} Stats;
#endif /* _COVERAGE_STATS_H */

third_party/python/coverage/coverage/ctracer/tracer.c (vendored, new file, +1186 lines)

Diff not shown because of its size.

third_party/python/coverage/coverage/ctracer/tracer.h (vendored, new file, +74 lines)

@@ -0,0 +1,74 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_TRACER_H
#define _COVERAGE_TRACER_H
#include "util.h"
#include "structmember.h"
#include "frameobject.h"
#include "opcode.h"
#include "datastack.h"
/* The CTracer type. */
typedef struct CTracer {
PyObject_HEAD
/* Python objects manipulated directly by the Collector class. */
PyObject * should_trace;
PyObject * check_include;
PyObject * warn;
PyObject * concur_id_func;
PyObject * data;
PyObject * file_tracers;
PyObject * should_trace_cache;
PyObject * trace_arcs;
PyObject * should_start_context;
PyObject * switch_context;
/* Has the tracer been started? */
BOOL started;
/* Are we tracing arcs, or just lines? */
BOOL tracing_arcs;
/* Have we had any activity? */
BOOL activity;
/* The current dynamic context. */
PyObject * context;
/*
The data stack is a stack of dictionaries. Each dictionary collects
data for a single source file. The data stack parallels the call stack:
each call pushes the new frame's file data onto the data stack, and each
return pops file data off.
The file data is a dictionary whose form depends on the tracing options.
If tracing arcs, the keys are line number pairs. If not tracing arcs,
the keys are line numbers. In both cases, the value is irrelevant
(None).
*/
DataStack data_stack; /* Used if we aren't doing concurrency. */
PyObject * data_stack_index; /* Used if we are doing concurrency. */
DataStack * data_stacks;
int data_stacks_alloc;
int data_stacks_used;
DataStack * pdata_stack;
/* The current file's data stack entry. */
DataStackEntry * pcur_entry;
/* The parent frame for the last exception event, to fix missing returns. */
PyFrameObject * last_exc_back;
int last_exc_firstlineno;
Stats stats;
} CTracer;
int CTracer_intern_strings(void);
extern PyTypeObject CTracerType;
#endif /* _COVERAGE_TRACER_H */
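The comment above describes the two shapes of `file_data`: when tracing arcs the keys are line-number pairs, otherwise they are plain line numbers, and the values are irrelevant in both cases. A small illustration (hypothetical values; -1 marks entry/exit of a code object, as in coverage.py's arc convention):

```python
# file_data when tracing arcs: keys are (previous_line, current_line) pairs.
arc_data = {(-1, 1): None, (1, 2): None, (2, -1): None}

# file_data when tracing lines only: keys are line numbers.
line_data = {1: None, 2: None}

# Either way, only the keys carry information; the executed lines can be
# recovered from arc data by flattening the pairs and dropping sentinels.
executed_lines = sorted({line for pair in arc_data for line in pair if line > 0})
```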

third_party/python/coverage/coverage/ctracer/util.h (vendored, new file, +67 lines)

@@ -0,0 +1,67 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
#ifndef _COVERAGE_UTIL_H
#define _COVERAGE_UTIL_H
#include <Python.h>
/* Compile-time debugging helpers */
#undef WHAT_LOG /* Define to log the WHAT params in the trace function. */
#undef TRACE_LOG /* Define to log our bookkeeping. */
#undef COLLECT_STATS /* Collect counters: stats are printed when tracer is stopped. */
#undef DO_NOTHING /* Define this to make the tracer do nothing. */
/* Py 2.x and 3.x compatibility */
#if PY_MAJOR_VERSION >= 3
#define MyText_Type PyUnicode_Type
#define MyText_AS_BYTES(o) PyUnicode_AsASCIIString(o)
#define MyBytes_GET_SIZE(o) PyBytes_GET_SIZE(o)
#define MyBytes_AS_STRING(o) PyBytes_AS_STRING(o)
#define MyText_AsString(o) PyUnicode_AsUTF8(o)
#define MyText_FromFormat PyUnicode_FromFormat
#define MyInt_FromInt(i) PyLong_FromLong((long)i)
#define MyInt_AsInt(o) (int)PyLong_AsLong(o)
#define MyText_InternFromString(s) PyUnicode_InternFromString(s)
#define MyType_HEAD_INIT PyVarObject_HEAD_INIT(NULL, 0)
#else
#define MyText_Type PyString_Type
#define MyText_AS_BYTES(o) (Py_INCREF(o), o)
#define MyBytes_GET_SIZE(o) PyString_GET_SIZE(o)
#define MyBytes_AS_STRING(o) PyString_AS_STRING(o)
#define MyText_AsString(o) PyString_AsString(o)
#define MyText_FromFormat PyUnicode_FromFormat
#define MyInt_FromInt(i) PyInt_FromLong((long)i)
#define MyInt_AsInt(o) (int)PyInt_AsLong(o)
#define MyText_InternFromString(s) PyString_InternFromString(s)
#define MyType_HEAD_INIT PyObject_HEAD_INIT(NULL) 0,
#endif /* Py3k */
// Undocumented, and not in all 2.7.x, so our own copy of it.
#define My_XSETREF(op, op2) \
do { \
PyObject *_py_tmp = (PyObject *)(op); \
(op) = (op2); \
Py_XDECREF(_py_tmp); \
} while (0)
/* The values returned to indicate ok or error. */
#define RET_OK 0
#define RET_ERROR -1
/* Nicer booleans */
typedef int BOOL;
#define FALSE 0
#define TRUE 1
/* Only for extreme machete-mode debugging! */
#define CRASH { printf("*** CRASH! ***\n"); *((int*)1) = 1; }
#endif /* _COVERAGE_UTIL_H */

third_party/python/coverage/coverage/data.py (vendored, new file, +124 lines)

@@ -0,0 +1,124 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Coverage data for coverage.py.
This file had the 4.x JSON data support, which is now gone. This file still
has storage-agnostic helpers, and is kept to avoid changing too many imports.
CoverageData is now defined in sqldata.py, and imported here to keep the
imports working.
"""
import glob
import os.path
from coverage.misc import CoverageException, file_be_gone
from coverage.sqldata import CoverageData
def line_counts(data, fullpath=False):
"""Return a dict summarizing the line coverage data.
Keys are based on the file names, and values are the number of executed
lines. If `fullpath` is true, then the keys are the full pathnames of
the files, otherwise they are the basenames of the files.
Returns a dict mapping file names to counts of lines.
"""
summ = {}
if fullpath:
filename_fn = lambda f: f
else:
filename_fn = os.path.basename
for filename in data.measured_files():
summ[filename_fn(filename)] = len(data.lines(filename))
return summ
def add_data_to_hash(data, filename, hasher):
"""Contribute `filename`'s data to the `hasher`.
`hasher` is a `coverage.misc.Hasher` instance to be updated with
the file's data. It should only get the results data, not the run
data.
"""
if data.has_arcs():
hasher.update(sorted(data.arcs(filename) or []))
else:
hasher.update(sorted(data.lines(filename) or []))
hasher.update(data.file_tracer(filename))
def combine_parallel_data(data, aliases=None, data_paths=None, strict=False):
"""Combine a number of data files together.
Treat `data.filename` as a file prefix, and combine the data from all
of the data files starting with that prefix plus a dot.
If `aliases` is provided, it's a `PathAliases` object that is used to
re-map paths to match the local machine's.
If `data_paths` is provided, it is a list of directories or files to
combine. Directories are searched for files that start with
`data.filename` plus dot as a prefix, and those files are combined.
If `data_paths` is not provided, then the directory portion of
`data.filename` is used as the directory to search for data files.
Every data file found and combined is then deleted from disk. If a file
cannot be read, a warning will be issued, and the file will not be
deleted.
If `strict` is true, and no files are found to combine, an error is
raised.
"""
# Because of the os.path.abspath in the constructor, data_dir will
# never be an empty string.
data_dir, local = os.path.split(data.base_filename())
localdot = local + '.*'
data_paths = data_paths or [data_dir]
files_to_combine = []
for p in data_paths:
if os.path.isfile(p):
files_to_combine.append(os.path.abspath(p))
elif os.path.isdir(p):
pattern = os.path.join(os.path.abspath(p), localdot)
files_to_combine.extend(glob.glob(pattern))
else:
raise CoverageException("Couldn't combine from non-existent path '%s'" % (p,))
if strict and not files_to_combine:
raise CoverageException("No data to combine")
files_combined = 0
for f in files_to_combine:
if f == data.data_filename():
# Sometimes we are combining into a file which is one of the
# parallel files. Skip that file.
if data._debug.should('dataio'):
data._debug.write("Skipping combining ourself: %r" % (f,))
continue
if data._debug.should('dataio'):
data._debug.write("Combining data file %r" % (f,))
try:
new_data = CoverageData(f, debug=data._debug)
new_data.read()
except CoverageException as exc:
if data._warn:
# The CoverageException has the file name in it, so just
# use the message as the warning.
data._warn(str(exc))
else:
data.update(new_data, aliases=aliases)
files_combined += 1
if data._debug.should('dataio'):
data._debug.write("Deleting combined data file %r" % (f,))
file_be_gone(f)
if strict and not files_combined:
raise CoverageException("No usable data files")
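`line_counts` above only needs an object exposing `measured_files()` and `lines()`, so its behavior can be shown with a duck-typed stand-in (`FakeData` is a hypothetical stub for this sketch; the helper body is copied from the vendored file):

```python
import os.path


def line_counts(data, fullpath=False):
    # Same logic as the vendored helper: map file names to executed-line counts.
    summ = {}
    filename_fn = (lambda f: f) if fullpath else os.path.basename
    for filename in data.measured_files():
        summ[filename_fn(filename)] = len(data.lines(filename))
    return summ


class FakeData:
    """Hypothetical stand-in for CoverageData: just the two methods used."""

    def __init__(self, lines_by_file):
        self._lines = lines_by_file

    def measured_files(self):
        return list(self._lines)

    def lines(self, filename):
        return self._lines[filename]


data = FakeData({"/src/app/main.py": [1, 2, 5], "/src/app/util.py": [3]})
```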

third_party/python/coverage/coverage/debug.py (vendored, new file, +406 lines)

@@ -0,0 +1,406 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Control of and utilities for debugging."""
import contextlib
import functools
import inspect
import itertools
import os
import pprint
import sys
try:
import _thread
except ImportError:
import thread as _thread
from coverage.backward import reprlib, StringIO
from coverage.misc import isolate_module
os = isolate_module(os)
# When debugging, it can be helpful to force some options, especially when
# debugging the configuration mechanisms you usually use to control debugging!
# This is a list of forced debugging options.
FORCED_DEBUG = []
FORCED_DEBUG_FILE = None
class DebugControl(object):
"""Control and output for debugging."""
show_repr_attr = False # For SimpleReprMixin
def __init__(self, options, output):
"""Configure the options and output file for debugging."""
self.options = list(options) + FORCED_DEBUG
self.suppress_callers = False
filters = []
if self.should('pid'):
filters.append(add_pid_and_tid)
self.output = DebugOutputFile.get_one(
output,
show_process=self.should('process'),
filters=filters,
)
self.raw_output = self.output.outfile
def __repr__(self):
return "<DebugControl options=%r raw_output=%r>" % (self.options, self.raw_output)
def should(self, option):
"""Decide whether to output debug information in category `option`."""
if option == "callers" and self.suppress_callers:
return False
return (option in self.options)
@contextlib.contextmanager
def without_callers(self):
"""A context manager to prevent call stacks from being logged."""
old = self.suppress_callers
self.suppress_callers = True
try:
yield
finally:
self.suppress_callers = old
def write(self, msg):
"""Write a line of debug output.
`msg` is the line to write. A newline will be appended.
"""
self.output.write(msg+"\n")
if self.should('self'):
caller_self = inspect.stack()[1][0].f_locals.get('self')
if caller_self is not None:
self.output.write("self: {!r}\n".format(caller_self))
if self.should('callers'):
dump_stack_frames(out=self.output, skip=1)
self.output.flush()
class DebugControlString(DebugControl):
"""A `DebugControl` that writes to a StringIO, for testing."""
def __init__(self, options):
super(DebugControlString, self).__init__(options, StringIO())
def get_output(self):
"""Get the output text from the `DebugControl`."""
return self.raw_output.getvalue()
class NoDebugging(object):
"""A replacement for DebugControl that will never try to do anything."""
def should(self, option): # pylint: disable=unused-argument
"""Should we write debug messages? Never."""
return False
def info_header(label):
"""Make a nice header string."""
return "--{:-<60s}".format(" "+label+" ")
def info_formatter(info):
"""Produce a sequence of formatted lines from info.
`info` is a sequence of pairs (label, data). The produced lines are
nicely formatted, ready to print.
"""
info = list(info)
if not info:
return
label_len = 30
assert all(len(l) < label_len for l, _ in info)
for label, data in info:
if data == []:
data = "-none-"
if isinstance(data, (list, set, tuple)):
prefix = "%*s:" % (label_len, label)
for e in data:
yield "%*s %s" % (label_len+1, prefix, e)
prefix = ""
else:
yield "%*s: %s" % (label_len, label, data)
def write_formatted_info(writer, header, info):
"""Write a sequence of (label,data) pairs nicely."""
writer.write(info_header(header))
for line in info_formatter(info):
writer.write(" %s" % line)
def short_stack(limit=None, skip=0):
"""Return a string summarizing the call stack.
The string is multi-line, with one line per stack frame. Each line shows
the function name, the file name, and the line number:
...
start_import_stop : /Users/ned/coverage/trunk/tests/coveragetest.py @95
import_local_file : /Users/ned/coverage/trunk/tests/coveragetest.py @81
import_local_file : /Users/ned/coverage/trunk/coverage/backward.py @159
...
`limit` is the number of frames to include, defaulting to all of them.
`skip` is the number of frames to skip, so that debugging functions can
call this and not be included in the result.
"""
stack = inspect.stack()[limit:skip:-1]
return "\n".join("%30s : %s:%d" % (t[3], t[1], t[2]) for t in stack)
def dump_stack_frames(limit=None, out=None, skip=0):
"""Print a summary of the stack to stdout, or someplace else."""
out = out or sys.stdout
out.write(short_stack(limit=limit, skip=skip+1))
out.write("\n")
def clipped_repr(text, numchars=50):
"""`repr(text)`, but limited to `numchars`."""
r = reprlib.Repr()
r.maxstring = numchars
return r.repr(text)
def short_id(id64):
"""Given a 64-bit id, make a shorter 16-bit one."""
id16 = 0
for offset in range(0, 64, 16):
id16 ^= id64 >> offset
return id16 & 0xFFFF
def add_pid_and_tid(text):
"""A filter to add pid and tid to debug messages."""
# Thread ids are useful, but too long. Make a shorter one.
tid = "{:04x}".format(short_id(_thread.get_ident()))
text = "{:5d}.{}: {}".format(os.getpid(), tid, text)
return text
class SimpleReprMixin(object):
"""A mixin implementing a simple __repr__."""
simple_repr_ignore = ['simple_repr_ignore', '$coverage.object_id']
def __repr__(self):
show_attrs = (
(k, v) for k, v in self.__dict__.items()
if getattr(v, "show_repr_attr", True)
and not callable(v)
and k not in self.simple_repr_ignore
)
return "<{klass} @0x{id:x} {attrs}>".format(
klass=self.__class__.__name__,
id=id(self),
attrs=" ".join("{}={!r}".format(k, v) for k, v in show_attrs),
)
def simplify(v): # pragma: debugging
"""Turn things which are nearly dict/list/etc into dict/list/etc."""
if isinstance(v, dict):
return {k:simplify(vv) for k, vv in v.items()}
elif isinstance(v, (list, tuple)):
return type(v)(simplify(vv) for vv in v)
elif hasattr(v, "__dict__"):
return simplify({'.'+k: v for k, v in v.__dict__.items()})
else:
return v
def pp(v): # pragma: debugging
"""Debug helper to pretty-print data, including SimpleNamespace objects."""
# Might not be needed in 3.9+
pprint.pprint(simplify(v))
def filter_text(text, filters):
"""Run `text` through a series of filters.
`filters` is a list of functions. Each takes a string and returns a
string. Each is run in turn.
Returns: the final string that results after all of the filters have
run.
"""
clean_text = text.rstrip()
ending = text[len(clean_text):]
text = clean_text
for fn in filters:
lines = []
for line in text.splitlines():
lines.extend(fn(line).splitlines())
text = "\n".join(lines)
return text + ending
class CwdTracker(object): # pragma: debugging
"""A class to add cwd info to debug messages."""
def __init__(self):
self.cwd = None
def filter(self, text):
"""Add a cwd message for each new cwd."""
cwd = os.getcwd()
if cwd != self.cwd:
text = "cwd is now {!r}\n".format(cwd) + text
self.cwd = cwd
return text
class DebugOutputFile(object): # pragma: debugging
"""A file-like object that includes pid and cwd information."""
def __init__(self, outfile, show_process, filters):
self.outfile = outfile
self.show_process = show_process
self.filters = list(filters)
if self.show_process:
self.filters.insert(0, CwdTracker().filter)
self.write("New process: executable: %r\n" % (sys.executable,))
self.write("New process: cmd: %r\n" % (getattr(sys, 'argv', None),))
if hasattr(os, 'getppid'):
self.write("New process: pid: %r, parent pid: %r\n" % (os.getpid(), os.getppid()))
SYS_MOD_NAME = '$coverage.debug.DebugOutputFile.the_one'
@classmethod
def get_one(cls, fileobj=None, show_process=True, filters=(), interim=False):
"""Get a DebugOutputFile.
If `fileobj` is provided, then a new DebugOutputFile is made with it.
If `fileobj` isn't provided, then a file is chosen
(COVERAGE_DEBUG_FILE, or stderr), and a process-wide singleton
DebugOutputFile is made.
`show_process` controls whether the debug file adds process-level
information, and filters is a list of other message filters to apply.
`filters` are the text filters to apply to the stream to annotate with
pids, etc.
If `interim` is true, then a future `get_one` can replace this one.
"""
if fileobj is not None:
# Make DebugOutputFile around the fileobj passed.
return cls(fileobj, show_process, filters)
# Because of the way igor.py deletes and re-imports modules,
# this class can be defined more than once. But we really want
# a process-wide singleton. So stash it in sys.modules instead of
# on a class attribute. Yes, this is aggressively gross.
the_one, is_interim = sys.modules.get(cls.SYS_MOD_NAME, (None, True))
if the_one is None or is_interim:
if fileobj is None:
debug_file_name = os.environ.get("COVERAGE_DEBUG_FILE", FORCED_DEBUG_FILE)
if debug_file_name:
fileobj = open(debug_file_name, "a")
else:
fileobj = sys.stderr
the_one = cls(fileobj, show_process, filters)
sys.modules[cls.SYS_MOD_NAME] = (the_one, interim)
return the_one
def write(self, text):
"""Just like file.write, but filter through all our filters."""
self.outfile.write(filter_text(text, self.filters))
self.outfile.flush()
def flush(self):
"""Flush our file."""
self.outfile.flush()
def log(msg, stack=False): # pragma: debugging
"""Write a log message as forcefully as possible."""
out = DebugOutputFile.get_one(interim=True)
out.write(msg+"\n")
if stack:
dump_stack_frames(out=out, skip=1)
def decorate_methods(decorator, butnot=(), private=False): # pragma: debugging
"""A class decorator to apply a decorator to methods."""
def _decorator(cls):
for name, meth in inspect.getmembers(cls, inspect.isroutine):
if name not in cls.__dict__:
continue
if name != "__init__":
if not private and name.startswith("_"):
continue
if name in butnot:
continue
setattr(cls, name, decorator(meth))
return cls
return _decorator
def break_in_pudb(func): # pragma: debugging
"""A function decorator to stop in the debugger for each call."""
@functools.wraps(func)
def _wrapper(*args, **kwargs):
import pudb
sys.stdout = sys.__stdout__
pudb.set_trace()
return func(*args, **kwargs)
return _wrapper
OBJ_IDS = itertools.count()
CALLS = itertools.count()
OBJ_ID_ATTR = "$coverage.object_id"
def show_calls(show_args=True, show_stack=False, show_return=False): # pragma: debugging
"""A method decorator to debug-log each call to the function."""
def _decorator(func):
@functools.wraps(func)
def _wrapper(self, *args, **kwargs):
oid = getattr(self, OBJ_ID_ATTR, None)
if oid is None:
oid = "{:08d} {:04d}".format(os.getpid(), next(OBJ_IDS))
setattr(self, OBJ_ID_ATTR, oid)
extra = ""
if show_args:
eargs = ", ".join(map(repr, args))
ekwargs = ", ".join("{}={!r}".format(*item) for item in kwargs.items())
extra += "("
extra += eargs
if eargs and ekwargs:
extra += ", "
extra += ekwargs
extra += ")"
if show_stack:
extra += " @ "
extra += "; ".join(_clean_stack_line(l) for l in short_stack().splitlines())
callid = next(CALLS)
msg = "{} {:04d} {}{}\n".format(oid, callid, func.__name__, extra)
DebugOutputFile.get_one(interim=True).write(msg)
ret = func(self, *args, **kwargs)
if show_return:
msg = "{} {:04d} {} return {!r}\n".format(oid, callid, func.__name__, ret)
DebugOutputFile.get_one(interim=True).write(msg)
return ret
return _wrapper
return _decorator
def _clean_stack_line(s): # pragma: debugging
"""Simplify some paths in a stack trace, for compactness."""
s = s.strip()
s = s.replace(os.path.dirname(__file__) + '/', '')
s = s.replace(os.path.dirname(os.__file__) + '/', '')
s = s.replace(sys.prefix + '/', '')
return s
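`short_id` above shortens a 64-bit thread id by XOR-folding it: since the final mask keeps only the low 16 bits, the result is effectively the XOR of the id's four 16-bit words. Copied here for illustration:

```python
def short_id(id64):
    """Fold a 64-bit id into 16 bits (same logic as debug.py above)."""
    id16 = 0
    for offset in range(0, 64, 16):
        id16 ^= id64 >> offset
    return id16 & 0xFFFF
```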

third_party/python/coverage/coverage/disposition.py (vendored, new file, +37 lines)

@@ -0,0 +1,37 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Simple value objects for tracking what to do with files."""
class FileDisposition(object):
"""A simple value type for recording what to do with a file."""
pass
# FileDisposition "methods": FileDisposition is a pure value object, so it can
# be implemented in either C or Python. Acting on them is done with these
# functions.
def disposition_init(cls, original_filename):
"""Construct and initialize a new FileDisposition object."""
disp = cls()
disp.original_filename = original_filename
disp.canonical_filename = original_filename
disp.source_filename = None
disp.trace = False
disp.reason = ""
disp.file_tracer = None
disp.has_dynamic_filename = False
return disp
def disposition_debug_msg(disp):
"""Make a nice debug message of what the FileDisposition is doing."""
if disp.trace:
msg = "Tracing %r" % (disp.original_filename,)
if disp.file_tracer:
msg += ": will be traced by %r" % disp.file_tracer
else:
msg = "Not tracing %r: %s" % (disp.original_filename, disp.reason)
return msg
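Because `FileDisposition` is a pure value object, the module's helpers can be exercised directly; this sketch copies them from the file above (the `setup.py` filename and reason string are made-up example values):

```python
class FileDisposition(object):
    """A simple value type for recording what to do with a file."""


def disposition_init(cls, original_filename):
    # Construct and initialize a new FileDisposition, as in the vendored code.
    disp = cls()
    disp.original_filename = original_filename
    disp.canonical_filename = original_filename
    disp.source_filename = None
    disp.trace = False
    disp.reason = ""
    disp.file_tracer = None
    disp.has_dynamic_filename = False
    return disp


def disposition_debug_msg(disp):
    # Render the disposition as a debug message, as in the vendored code.
    if disp.trace:
        msg = "Tracing %r" % (disp.original_filename,)
        if disp.file_tracer:
            msg += ": will be traced by %r" % disp.file_tracer
    else:
        msg = "Not tracing %r: %s" % (disp.original_filename, disp.reason)
    return msg


disp = disposition_init(FileDisposition, "setup.py")
disp.reason = "example reason"
```

Keeping the type free of methods is what lets the C extension supply `CFileDisposition` as a drop-in replacement.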

third_party/python/coverage/coverage/env.py (vendored, new file, +99 lines)

@@ -0,0 +1,99 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Determine facts about the environment."""
import os
import platform
import sys
# Operating systems.
WINDOWS = sys.platform == "win32"
LINUX = sys.platform.startswith("linux")
# Python versions. We amend version_info with one more value, a zero if an
# official version, or 1 if built from source beyond an official version.
PYVERSION = sys.version_info + (int(platform.python_version()[-1] == "+"),)
PY2 = PYVERSION < (3, 0)
PY3 = PYVERSION >= (3, 0)
# Python implementations.
PYPY = (platform.python_implementation() == 'PyPy')
if PYPY:
PYPYVERSION = sys.pypy_version_info
PYPY2 = PYPY and PY2
PYPY3 = PYPY and PY3
JYTHON = (platform.python_implementation() == 'Jython')
IRONPYTHON = (platform.python_implementation() == 'IronPython')
# Python behavior
class PYBEHAVIOR(object):
"""Flags indicating this Python's behavior."""
# Is "if __debug__" optimized away?
optimize_if_debug = (not PYPY)
# Is "if not __debug__" optimized away?
optimize_if_not_debug = (not PYPY) and (PYVERSION >= (3, 7, 0, 'alpha', 4))
# Is "if not __debug__" optimized away even better?
optimize_if_not_debug2 = (not PYPY) and (PYVERSION >= (3, 8, 0, 'beta', 1))
# Do we have yield-from?
yield_from = (PYVERSION >= (3, 3))
# Do we have PEP 420 namespace packages?
namespaces_pep420 = (PYVERSION >= (3, 3))
# Do .pyc files have the source file size recorded in them?
size_in_pyc = (PYVERSION >= (3, 3))
# Do we have async and await syntax?
async_syntax = (PYVERSION >= (3, 5))
# PEP 448 defined additional unpacking generalizations
unpackings_pep448 = (PYVERSION >= (3, 5))
# Can co_lnotab have negative deltas?
negative_lnotab = (PYVERSION >= (3, 6)) and not (PYPY and PYPYVERSION < (7, 2))
# Do .pyc files conform to PEP 552? Hash-based pyc's.
hashed_pyc_pep552 = (PYVERSION >= (3, 7, 0, 'alpha', 4))
# Python 3.7.0b3 changed the behavior of the sys.path[0] entry for -m. It
# used to be an empty string (meaning the current directory). It changed
# to be the actual path to the current directory, so that os.chdir wouldn't
# affect the outcome.
actual_syspath0_dash_m = (PYVERSION >= (3, 7, 0, 'beta', 3))
# When a break/continue/return statement in a try block jumps to a finally
# block, does the finally block do the break/continue/return (pre-3.8), or
# does the finally jump back to the break/continue/return (3.8) to do the
# work?
finally_jumps_back = (PYVERSION >= (3, 8))
# When a function is decorated, does the trace function get called for the
# @-line and also the def-line (new behavior in 3.8)? Or just the @-line
# (old behavior)?
trace_decorated_def = (PYVERSION >= (3, 8))
# Are while-true loops optimized into absolute jumps with no loop setup?
nix_while_true = (PYVERSION >= (3, 8))
# Python 3.9a1 made sys.argv[0] and other reported files absolute paths.
report_absolute_files = (PYVERSION >= (3, 9))
# Coverage.py specifics.
# Are we using the C-implemented trace function?
C_TRACER = os.getenv('COVERAGE_TEST_TRACER', 'c') == 'c'
# Are we coverage-measuring ourselves?
METACOV = os.getenv('COVERAGE_COVERAGE', '') != ''
# Are we running our test suite?
# Even when running tests, you can use COVERAGE_TESTING=0 to disable the
# test-specific behavior like contracts.
TESTING = os.getenv('COVERAGE_TESTING', '') == 'True'
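The version gates above rely on Python's element-wise tuple comparison: `PYVERSION` extends `sys.version_info` with a source-build flag, and thresholds like `(3, 7, 0, 'alpha', 4)` compare correctly because the release levels `'alpha' < 'beta' < 'candidate' < 'final'` happen to sort lexicographically. A sketch under those assumptions (the sample tuples are hypothetical):

```python
def at_least(pyversion, threshold):
    # Version gates in env.py are plain tuple comparisons.
    return pyversion >= threshold


# A released 3.8.0 ('final' sorts after 'alpha'/'beta' lexicographically):
py380 = (3, 8, 0, 'final', 0, 0)
# A pre-release older than the 3.7.0a4 threshold:
py37a3 = (3, 7, 0, 'alpha', 3, 0)
```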

third_party/python/coverage/coverage/execfile.py (vendored, new file, +362 lines)

@@ -0,0 +1,362 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Execute files of Python code."""
import inspect
import marshal
import os
import struct
import sys
import types
from coverage import env
from coverage.backward import BUILTINS
from coverage.backward import PYC_MAGIC_NUMBER, imp, importlib_util_find_spec
from coverage.files import canonical_filename, python_reported_file
from coverage.misc import CoverageException, ExceptionDuringRun, NoCode, NoSource, isolate_module
from coverage.phystokens import compile_unicode
from coverage.python import get_python_source
os = isolate_module(os)
class DummyLoader(object):
"""A shim for the pep302 __loader__, emulating pkgutil.ImpLoader.
Currently only implements the .fullname attribute
"""
def __init__(self, fullname, *_args):
self.fullname = fullname
if importlib_util_find_spec:
def find_module(modulename):
"""Find the module named `modulename`.
Returns the file path of the module, the name of the enclosing
package, and the spec.
"""
try:
spec = importlib_util_find_spec(modulename)
except ImportError as err:
raise NoSource(str(err))
if not spec:
raise NoSource("No module named %r" % (modulename,))
pathname = spec.origin
packagename = spec.name
if spec.submodule_search_locations:
mod_main = modulename + ".__main__"
spec = importlib_util_find_spec(mod_main)
if not spec:
raise NoSource(
"No module named %s; "
"%r is a package and cannot be directly executed"
% (mod_main, modulename)
)
pathname = spec.origin
packagename = spec.name
packagename = packagename.rpartition(".")[0]
return pathname, packagename, spec
else:
def find_module(modulename):
"""Find the module named `modulename`.
Returns the file path of the module, the name of the enclosing
package, and None (where a spec would have been).
"""
openfile = None
glo, loc = globals(), locals()
try:
# Search for the module - inside its parent package, if any - using
# standard import mechanics.
if '.' in modulename:
packagename, name = modulename.rsplit('.', 1)
package = __import__(packagename, glo, loc, ['__path__'])
searchpath = package.__path__
else:
packagename, name = None, modulename
searchpath = None # "top-level search" in imp.find_module()
openfile, pathname, _ = imp.find_module(name, searchpath)
# Complain if this is a magic non-file module.
if openfile is None and pathname is None:
raise NoSource(
"module does not live in a file: %r" % modulename
)
# If `modulename` is actually a package, not a mere module, then we
# pretend to be Python 2.7 and try running its __main__.py script.
if openfile is None:
packagename = modulename
name = '__main__'
package = __import__(packagename, glo, loc, ['__path__'])
searchpath = package.__path__
openfile, pathname, _ = imp.find_module(name, searchpath)
except ImportError as err:
raise NoSource(str(err))
finally:
if openfile:
openfile.close()
return pathname, packagename, None
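The importlib branch of `find_module` above can be exercised on its own. A standalone sketch (stdlib only, not part of the vendored file) showing what `find_spec` reports for a stdlib package:

```python
import importlib.util

# Sketch of the lookup performed by find_module above. spec.origin is the
# file path of the module; a non-empty submodule_search_locations is what
# marks it as a package (so coverage would go looking for its __main__).
spec = importlib.util.find_spec("json")
pathname = spec.origin
is_package = bool(spec.submodule_search_locations)
```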
class PyRunner(object):
"""Multi-stage execution of Python code.
This is meant to emulate real Python execution as closely as possible.
"""
def __init__(self, args, as_module=False):
self.args = args
self.as_module = as_module
self.arg0 = args[0]
self.package = self.modulename = self.pathname = self.loader = self.spec = None
def prepare(self):
"""Set sys.path properly.
This needs to happen before any importing, and without importing anything.
"""
if self.as_module:
if env.PYBEHAVIOR.actual_syspath0_dash_m:
path0 = os.getcwd()
else:
path0 = ""
elif os.path.isdir(self.arg0):
# Running a directory means running the __main__.py file in that
# directory.
path0 = self.arg0
else:
path0 = os.path.abspath(os.path.dirname(self.arg0))
if os.path.isdir(sys.path[0]):
# sys.path fakery. If we are being run as a command, then sys.path[0]
# is the directory of the "coverage" script. If this is so, replace
# sys.path[0] with the directory of the file we're running, or the
# current directory when running modules. If it isn't so, then we
# don't know what's going on, and just leave it alone.
top_file = inspect.stack()[-1][0].f_code.co_filename
sys_path_0_abs = os.path.abspath(sys.path[0])
top_file_dir_abs = os.path.abspath(os.path.dirname(top_file))
sys_path_0_abs = canonical_filename(sys_path_0_abs)
top_file_dir_abs = canonical_filename(top_file_dir_abs)
if sys_path_0_abs != top_file_dir_abs:
path0 = None
else:
# sys.path[0] is a file. Is the next entry the directory containing
# that file?
if sys.path[1] == os.path.dirname(sys.path[0]):
# Can it be right to always remove that?
del sys.path[1]
if path0 is not None:
sys.path[0] = python_reported_file(path0)
def _prepare2(self):
"""Do more preparation to run Python code.
Includes finding the module to run and adjusting sys.argv[0].
This method is allowed to import code.
"""
if self.as_module:
self.modulename = self.arg0
pathname, self.package, self.spec = find_module(self.modulename)
if self.spec is not None:
self.modulename = self.spec.name
self.loader = DummyLoader(self.modulename)
self.pathname = os.path.abspath(pathname)
self.args[0] = self.arg0 = self.pathname
elif os.path.isdir(self.arg0):
# Running a directory means running the __main__.py file in that
# directory.
for ext in [".py", ".pyc", ".pyo"]:
try_filename = os.path.join(self.arg0, "__main__" + ext)
if os.path.exists(try_filename):
self.arg0 = try_filename
break
else:
raise NoSource("Can't find '__main__' module in '%s'" % self.arg0)
if env.PY2:
self.arg0 = os.path.abspath(self.arg0)
# Make a spec. I don't know if this is the right way to do it.
try:
import importlib.machinery
except ImportError:
pass
else:
try_filename = python_reported_file(try_filename)
self.spec = importlib.machinery.ModuleSpec("__main__", None, origin=try_filename)
self.spec.has_location = True
self.package = ""
self.loader = DummyLoader("__main__")
else:
if env.PY3:
self.loader = DummyLoader("__main__")
self.arg0 = python_reported_file(self.arg0)
def run(self):
"""Run the Python code!"""
self._prepare2()
# Create a module to serve as __main__
main_mod = types.ModuleType('__main__')
from_pyc = self.arg0.endswith((".pyc", ".pyo"))
main_mod.__file__ = self.arg0
if from_pyc:
main_mod.__file__ = main_mod.__file__[:-1]
if self.package is not None:
main_mod.__package__ = self.package
main_mod.__loader__ = self.loader
if self.spec is not None:
main_mod.__spec__ = self.spec
main_mod.__builtins__ = BUILTINS
sys.modules['__main__'] = main_mod
# Set sys.argv properly.
sys.argv = self.args
try:
# Make a code object somehow.
if from_pyc:
code = make_code_from_pyc(self.arg0)
else:
code = make_code_from_py(self.arg0)
except CoverageException:
raise
except Exception as exc:
msg = "Couldn't run '{filename}' as Python code: {exc.__class__.__name__}: {exc}"
raise CoverageException(msg.format(filename=self.arg0, exc=exc))
# Execute the code object.
# Return to the original directory in case the test code exits in
# a non-existent directory.
cwd = os.getcwd()
try:
exec(code, main_mod.__dict__)
except SystemExit: # pylint: disable=try-except-raise
# The user called sys.exit(). Just pass it along to the upper
# layers, where it will be handled.
raise
except Exception:
# Something went wrong while executing the user code.
# Get the exc_info, and pack them into an exception that we can
# throw up to the outer loop. We peel one layer off the traceback
# so that the coverage.py code doesn't appear in the final printed
# traceback.
typ, err, tb = sys.exc_info()
# PyPy3 weirdness. If I don't access __context__, then somehow it
# is non-None when the exception is reported at the upper layer,
# and a nested exception is shown to the user. This getattr fixes
# it somehow? https://bitbucket.org/pypy/pypy/issue/1903
getattr(err, '__context__', None)
# Call the excepthook.
try:
if hasattr(err, "__traceback__"):
err.__traceback__ = err.__traceback__.tb_next
sys.excepthook(typ, err, tb.tb_next)
except SystemExit: # pylint: disable=try-except-raise
raise
except Exception:
# Getting the output right in the case of excepthook
# shenanigans is kind of involved.
sys.stderr.write("Error in sys.excepthook:\n")
typ2, err2, tb2 = sys.exc_info()
err2.__suppress_context__ = True
if hasattr(err2, "__traceback__"):
err2.__traceback__ = err2.__traceback__.tb_next
sys.__excepthook__(typ2, err2, tb2.tb_next)
sys.stderr.write("\nOriginal exception was:\n")
raise ExceptionDuringRun(typ, err, tb.tb_next)
else:
sys.exit(1)
finally:
os.chdir(cwd)
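The heart of `run()` above is executing a code object inside a fresh module posing as `__main__`. A minimal standalone sketch of just that step (not the full PyRunner machinery):

```python
import types

# Build a module, pretend it is __main__, and exec a code object in its
# namespace, as PyRunner.run() does with the user's program.
main_mod = types.ModuleType('__main__')
code = compile("result = 2 + 3", "<demo>", "exec")
exec(code, main_mod.__dict__)
```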
def run_python_module(args):
"""Run a Python module, as though with ``python -m name args...``.
`args` is the argument array to present as sys.argv, including the first
element naming the module being executed.
This is a helper for tests, to encapsulate how to use PyRunner.
"""
runner = PyRunner(args, as_module=True)
runner.prepare()
runner.run()
def run_python_file(args):
"""Run a Python file as if it were the main program on the command line.
`args` is the argument array to present as sys.argv, including the first
element naming the file being executed. `package` is the name of the
enclosing package, if any.
This is a helper for tests, to encapsulate how to use PyRunner.
"""
runner = PyRunner(args, as_module=False)
runner.prepare()
runner.run()
def make_code_from_py(filename):
"""Get source from `filename` and make a code object of it."""
# Open the source file.
try:
source = get_python_source(filename)
except (IOError, NoSource):
raise NoSource("No file to run: '%s'" % filename)
code = compile_unicode(source, filename, "exec")
return code
def make_code_from_pyc(filename):
"""Get a code object from a .pyc file."""
try:
fpyc = open(filename, "rb")
except IOError:
raise NoCode("No file to run: '%s'" % filename)
with fpyc:
# First four bytes are a version-specific magic number. It has to
# match or we won't run the file.
magic = fpyc.read(4)
if magic != PYC_MAGIC_NUMBER:
raise NoCode("Bad magic number in .pyc file: {} != {}".format(magic, PYC_MAGIC_NUMBER))
date_based = True
if env.PYBEHAVIOR.hashed_pyc_pep552:
flags = struct.unpack('<L', fpyc.read(4))[0]
hash_based = flags & 0x01
if hash_based:
fpyc.read(8) # Skip the hash.
date_based = False
if date_based:
# Skip the junk in the header that we don't need.
fpyc.read(4) # Skip the moddate.
if env.PYBEHAVIOR.size_in_pyc:
# 3.3 added another long to the header (size), skip it.
fpyc.read(4)
# The rest of the file is the code object we want.
code = marshal.load(fpyc)
return code
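The header layout that `make_code_from_pyc` walks can be checked against a freshly compiled file. A sketch, assuming a writable temp directory; on 3.7+ the second word is the PEP 552 flags field:

```python
import importlib.util
import os
import py_compile
import struct
import sys
import tempfile

# Compile a tiny module, then read its .pyc header the same way
# make_code_from_pyc above does: 4-byte magic, then (on 3.7+) flags.
src = os.path.join(tempfile.mkdtemp(), "mod.py")
with open(src, "w") as f:
    f.write("x = 1\n")
pyc = py_compile.compile(src)

with open(pyc, "rb") as fpyc:
    magic = fpyc.read(4)
    if sys.version_info >= (3, 7):
        flags = struct.unpack('<L', fpyc.read(4))[0]
        hash_based = bool(flags & 0x01)
```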

third_party/python/coverage/coverage/files.py (vendored, new file, 432 lines)

@@ -0,0 +1,432 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""File wrangling."""
import hashlib
import fnmatch
import ntpath
import os
import os.path
import posixpath
import re
import sys
from coverage import env
from coverage.backward import unicode_class
from coverage.misc import contract, CoverageException, join_regex, isolate_module
os = isolate_module(os)
def set_relative_directory():
"""Set the directory that `relative_filename` will be relative to."""
global RELATIVE_DIR, CANONICAL_FILENAME_CACHE
# The absolute path to our current directory.
RELATIVE_DIR = os.path.normcase(abs_file(os.curdir) + os.sep)
# Cache of results of calling the canonical_filename() method, to
# avoid duplicating work.
CANONICAL_FILENAME_CACHE = {}
def relative_directory():
"""Return the directory that `relative_filename` is relative to."""
return RELATIVE_DIR
@contract(returns='unicode')
def relative_filename(filename):
"""Return the relative form of `filename`.
The file name will be relative to the current directory at the time
`set_relative_directory` was called.
"""
fnorm = os.path.normcase(filename)
if fnorm.startswith(RELATIVE_DIR):
filename = filename[len(RELATIVE_DIR):]
return unicode_filename(filename)
@contract(returns='unicode')
def canonical_filename(filename):
"""Return a canonical file name for `filename`.
An absolute path with no redundant components and normalized case.
"""
if filename not in CANONICAL_FILENAME_CACHE:
cf = filename
if not os.path.isabs(filename):
for path in [os.curdir] + sys.path:
if path is None:
continue
f = os.path.join(path, filename)
try:
exists = os.path.exists(f)
except UnicodeError:
exists = False
if exists:
cf = f
break
cf = abs_file(cf)
CANONICAL_FILENAME_CACHE[filename] = cf
return CANONICAL_FILENAME_CACHE[filename]
MAX_FLAT = 200
@contract(filename='unicode', returns='unicode')
def flat_rootname(filename):
"""A base for a flat file name to correspond to this file.
Useful for writing files about the code where you want all the files in
the same directory, but need to differentiate same-named files from
different directories.
For example, the file a/b/c.py will return 'a_b_c_py'
"""
name = ntpath.splitdrive(filename)[1]
name = re.sub(r"[\\/.:]", "_", name)
if len(name) > MAX_FLAT:
h = hashlib.sha1(name.encode('UTF-8')).hexdigest()
name = name[-(MAX_FLAT-len(h)-1):] + '_' + h
return name
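The transformation in `flat_rootname` is easy to replay standalone. A sketch of the same rewrite, minus the long-name hashing (this helper name is illustrative, not coverage's API):

```python
import ntpath
import re

def flat_name(filename):
    # Mirror of flat_rootname above: drop the drive, then turn path
    # separators, dots, and colons into underscores.
    name = ntpath.splitdrive(filename)[1]
    return re.sub(r"[\\/.:]", "_", name)
```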
if env.WINDOWS:
_ACTUAL_PATH_CACHE = {}
_ACTUAL_PATH_LIST_CACHE = {}
def actual_path(path):
"""Get the actual path of `path`, including the correct case."""
if env.PY2 and isinstance(path, unicode_class):
path = path.encode(sys.getfilesystemencoding())
if path in _ACTUAL_PATH_CACHE:
return _ACTUAL_PATH_CACHE[path]
head, tail = os.path.split(path)
if not tail:
# This means head is the drive spec: normalize it.
actpath = head.upper()
elif not head:
actpath = tail
else:
head = actual_path(head)
if head in _ACTUAL_PATH_LIST_CACHE:
files = _ACTUAL_PATH_LIST_CACHE[head]
else:
try:
files = os.listdir(head)
except Exception:
# This will raise OSError, or this bizarre TypeError:
# https://bugs.python.org/issue1776160
files = []
_ACTUAL_PATH_LIST_CACHE[head] = files
normtail = os.path.normcase(tail)
for f in files:
if os.path.normcase(f) == normtail:
tail = f
break
actpath = os.path.join(head, tail)
_ACTUAL_PATH_CACHE[path] = actpath
return actpath
else:
def actual_path(filename):
"""The actual path for non-Windows platforms."""
return filename
if env.PY2:
@contract(returns='unicode')
def unicode_filename(filename):
"""Return a Unicode version of `filename`."""
if isinstance(filename, str):
encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
filename = filename.decode(encoding, "replace")
return filename
else:
@contract(filename='unicode', returns='unicode')
def unicode_filename(filename):
"""Return a Unicode version of `filename`."""
return filename
@contract(returns='unicode')
def abs_file(path):
"""Return the absolute normalized form of `path`."""
try:
path = os.path.realpath(path)
except UnicodeError:
pass
path = os.path.abspath(path)
path = actual_path(path)
path = unicode_filename(path)
return path
def python_reported_file(filename):
"""Return the string as Python would describe this file name."""
if env.PYBEHAVIOR.report_absolute_files:
filename = os.path.abspath(filename)
return filename
RELATIVE_DIR = None
CANONICAL_FILENAME_CACHE = None
set_relative_directory()
def isabs_anywhere(filename):
"""Is `filename` an absolute path on any OS?"""
return ntpath.isabs(filename) or posixpath.isabs(filename)
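`isabs_anywhere` works by asking both path flavors, so it behaves the same on any host OS. A self-contained copy of the check:

```python
import ntpath
import posixpath

def isabs_anywhere(filename):
    # Same test as above: absolute under either Windows or POSIX rules.
    return ntpath.isabs(filename) or posixpath.isabs(filename)
```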
def prep_patterns(patterns):
"""Prepare the file patterns for use in a `FnmatchMatcher`.
If a pattern starts with a wildcard, it is used as a pattern
as-is. If it does not start with a wildcard, then it is made
absolute with the current directory.
If `patterns` is None, an empty list is returned.
"""
prepped = []
for p in patterns or []:
if p.startswith(("*", "?")):
prepped.append(p)
else:
prepped.append(abs_file(p))
return prepped
class TreeMatcher(object):
"""A matcher for files in a tree.
Construct with a list of paths, either files or directories. Paths match
with the `match` method if they are one of the files, or if they are
somewhere in a subtree rooted at one of the directories.
"""
def __init__(self, paths):
self.paths = list(paths)
def __repr__(self):
return "<TreeMatcher %r>" % self.paths
def info(self):
"""A list of strings for displaying when dumping state."""
return self.paths
def match(self, fpath):
"""Does `fpath` indicate a file in one of our trees?"""
for p in self.paths:
if fpath.startswith(p):
if fpath == p:
# This is the same file!
return True
if fpath[len(p)] == os.sep:
# This is a file in the directory
return True
return False
class ModuleMatcher(object):
"""A matcher for modules in a tree."""
def __init__(self, module_names):
self.modules = list(module_names)
def __repr__(self):
return "<ModuleMatcher %r>" % (self.modules)
def info(self):
"""A list of strings for displaying when dumping state."""
return self.modules
def match(self, module_name):
"""Does `module_name` indicate a module in one of our packages?"""
if not module_name:
return False
for m in self.modules:
if module_name.startswith(m):
if module_name == m:
return True
if module_name[len(m)] == '.':
# This is a module in the package
return True
return False
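The dot-boundary check in `ModuleMatcher.match` is what keeps `pkg` from matching `pkgextra`. A standalone sketch of that test (function name is illustrative):

```python
def module_match(module_name, packages):
    # The prefix test from ModuleMatcher.match above: a name matches a
    # package exactly, or as a dotted submodule of it.
    for m in packages:
        if module_name.startswith(m):
            if module_name == m or module_name[len(m)] == '.':
                return True
    return False
```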
class FnmatchMatcher(object):
"""A matcher for files by file name pattern."""
def __init__(self, pats):
self.pats = list(pats)
self.re = fnmatches_to_regex(self.pats, case_insensitive=env.WINDOWS)
def __repr__(self):
return "<FnmatchMatcher %r>" % self.pats
def info(self):
"""A list of strings for displaying when dumping state."""
return self.pats
def match(self, fpath):
"""Does `fpath` match one of our file name patterns?"""
return self.re.match(fpath) is not None
def sep(s):
"""Find the path separator used in this string, or os.sep if none."""
sep_match = re.search(r"[\\/]", s)
if sep_match:
the_sep = sep_match.group(0)
else:
the_sep = os.sep
return the_sep
def fnmatches_to_regex(patterns, case_insensitive=False, partial=False):
"""Convert fnmatch patterns to a compiled regex that matches any of them.
Slashes are always converted to match either slash or backslash, for
Windows support, even when running elsewhere.
If `partial` is true, then the pattern will match if the target string
starts with the pattern. Otherwise, it must match the entire string.
Returns: a compiled regex object. Use the .match method to compare target
strings.
"""
regexes = (fnmatch.translate(pattern) for pattern in patterns)
# Python 3.7 fnmatch translates "/" as "/". Before that, it translated it
# as "\/", so we have to deal with maybe a backslash.
regexes = (re.sub(r"\\?/", r"[\\\\/]", regex) for regex in regexes)
if partial:
# fnmatch always adds a \Z to match the whole string, which we don't
# want, so we remove the \Z. While removing it, we only replace \Z if
# followed by paren (introducing flags), or at end, to keep from
# destroying a literal \Z in the pattern.
regexes = (re.sub(r'\\Z(\(\?|$)', r'\1', regex) for regex in regexes)
flags = 0
if case_insensitive:
flags |= re.IGNORECASE
compiled = re.compile(join_regex(regexes), flags=flags)
return compiled
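A condensed, standalone sketch of the `fnmatches_to_regex` conversion above (whole-string matching only, no `partial` handling), showing that the slash rewrite lets one pattern cover both path styles:

```python
import fnmatch
import re

def patterns_to_regex(patterns):
    # Translate each fnmatch pattern, make "/" also match "\" for Windows
    # paths, and join the results with alternation.
    regexes = (fnmatch.translate(p) for p in patterns)
    regexes = (re.sub(r"\\?/", r"[\\\\/]", rx) for rx in regexes)
    return re.compile("|".join("(?:%s)" % rx for rx in regexes))

rx = patterns_to_regex(["*/tests/*.py"])
```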
class PathAliases(object):
"""A collection of aliases for paths.
When combining data files from remote machines, often the paths to source
code are different, for example, due to OS differences, or because of
serialized checkouts on continuous integration machines.
A `PathAliases` object tracks a list of pattern/result pairs, and can
map a path through those aliases to produce a unified path.
"""
def __init__(self):
self.aliases = []
def pprint(self): # pragma: debugging
"""Dump the important parts of the PathAliases, for debugging."""
for regex, result in self.aliases:
print("{!r} --> {!r}".format(regex.pattern, result))
def add(self, pattern, result):
"""Add the `pattern`/`result` pair to the list of aliases.
`pattern` is an `fnmatch`-style pattern. `result` is a simple
string. When mapping paths, if a path starts with a match against
`pattern`, then that match is replaced with `result`. This models
isomorphic source trees being rooted at different places on two
different machines.
`pattern` can't end with a wildcard component, since that would
match an entire tree, and not just its root.
"""
if len(pattern) > 1:
pattern = pattern.rstrip(r"\/")
# The pattern can't end with a wildcard component.
if pattern.endswith("*"):
raise CoverageException("Pattern must not end with wildcards.")
pattern_sep = sep(pattern)
# The pattern is meant to match a filepath. Let's make it absolute
# unless it already is, or is meant to match any prefix.
if not pattern.startswith('*') and not isabs_anywhere(pattern):
pattern = abs_file(pattern)
if not pattern.endswith(pattern_sep):
pattern += pattern_sep
# Make a regex from the pattern.
regex = fnmatches_to_regex([pattern], case_insensitive=True, partial=True)
# Normalize the result: it must end with a path separator.
result_sep = sep(result)
result = result.rstrip(r"\/") + result_sep
self.aliases.append((regex, result))
def map(self, path):
"""Map `path` through the aliases.
`path` is checked against all of the patterns. The first pattern to
match is used to replace the root of the path with the result root.
Only one pattern is ever used. If no patterns match, `path` is
returned unchanged.
The separator style in the result is made to match that of the result
in the alias.
Returns the mapped path. If a mapping has happened, this is a
canonical path. If no mapping has happened, it is the original value
of `path` unchanged.
"""
for regex, result in self.aliases:
m = regex.match(path)
if m:
new = path.replace(m.group(0), result)
new = new.replace(sep(path), sep(result))
new = canonical_filename(new)
return new
return path
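The replace-the-root behavior of `PathAliases.map` can be reduced to one pattern/result pair. A sketch with a hypothetical CI layout (the `/ci/build/` and `src/` paths are made up for illustration):

```python
import re

# One-pair sketch of PathAliases above: if the path starts with the CI
# root, splice the local result root in; otherwise leave it alone.
ci_root = re.compile(r"^/ci/build/", re.IGNORECASE)

def map_path(path):
    m = ci_root.match(path)
    if m:
        return path.replace(m.group(0), "src/", 1)
    return path
```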
def find_python_files(dirname):
"""Yield all of the importable Python files in `dirname`, recursively.
To be importable, the files have to be in a directory with a __init__.py,
except for `dirname` itself, which isn't required to have one. The
assumption is that `dirname` was specified directly, so the user knows
best, but sub-directories are checked for a __init__.py to be sure we only
find the importable files.
"""
for i, (dirpath, dirnames, filenames) in enumerate(os.walk(dirname)):
if i > 0 and '__init__.py' not in filenames:
# If a directory doesn't have __init__.py, then it isn't
# importable and neither are its files
del dirnames[:]
continue
for filename in filenames:
# We're only interested in files that look like reasonable Python
# files: Must end with .py or .pyw, and must not have certain funny
# characters that probably mean they are editor junk.
if re.match(r"^[^.#~!$@%^&*()+=,]+\.pyw?$", filename):
yield os.path.join(dirpath, filename)
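The pruning rule in `find_python_files` (the root is exempt from the `__init__.py` requirement, subdirectories are not) can be verified against a throwaway tree. A standalone sketch of the same walk:

```python
import os
import re
import tempfile

def find_py(dirname):
    # Same walk as find_python_files above: subdirectories must carry an
    # __init__.py to be descended into; the root itself is exempt.
    for i, (dirpath, dirnames, filenames) in enumerate(os.walk(dirname)):
        if i > 0 and '__init__.py' not in filenames:
            del dirnames[:]
            continue
        for filename in filenames:
            if re.match(r"^[^.#~!$@%^&*()+=,]+\.pyw?$", filename):
                yield os.path.join(dirpath, filename)

# Build: top.py, pkg/ (a package), junk/ (no __init__.py, so skipped).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "pkg"))
os.makedirs(os.path.join(root, "junk"))
for rel in ["top.py", os.path.join("pkg", "__init__.py"),
            os.path.join("pkg", "mod.py"), os.path.join("junk", "hidden.py")]:
    open(os.path.join(root, rel), "w").close()
found = sorted(os.path.relpath(p, root) for p in find_py(root))
```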

third_party/python/coverage/coverage/fullcoverage/encodings.py (vendored, new file, 60 lines)

@@ -0,0 +1,60 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Imposter encodings module that installs a coverage-style tracer.
This is NOT the encodings module; it is an imposter that sets up tracing
instrumentation and then replaces itself with the real encodings module.
If the directory that holds this file is placed first in the PYTHONPATH when
using "coverage" to run Python's tests, then this file will become the very
first module imported by the internals of Python 3. It installs a
coverage.py-compatible trace function that can watch Standard Library modules
execute from the very earliest stages of Python's own boot process. This fixes
a problem with coverage.py - that it starts too late to trace the coverage of
many of the most fundamental modules in the Standard Library.
"""
import sys
class FullCoverageTracer(object):
def __init__(self):
# `traces` is a list of trace events. Frames are tricky: the same
# frame object is used for a whole scope, with new line numbers
# written into it. So in one scope, all the frame objects are the
# same object, and will eventually all will point to the last line
# executed. So we keep the line numbers alongside the frames.
# The list looks like:
#
# traces = [
# ((frame, event, arg), lineno), ...
# ]
#
self.traces = []
def fullcoverage_trace(self, *args):
frame, event, arg = args
self.traces.append((args, frame.f_lineno))
return self.fullcoverage_trace
sys.settrace(FullCoverageTracer().fullcoverage_trace)
# In coverage/files.py is actual_filename(), which uses glob.glob. I don't
# understand why, but that use of glob borks everything if fullcoverage is in
# effect. So here we make an ugly hail-mary pass to switch off glob.glob over
# there. This means when using fullcoverage, Windows path names will not be
# their actual case.
#sys.fullcoverage = True
# Finally, remove our own directory from sys.path; remove ourselves from
# sys.modules; and re-import "encodings", which will be the real package
# this time. Note that the delete from sys.modules dictionary has to
# happen last, since all of the symbols in this module will become None
# at that exact moment, including "sys".
parentdir = max(filter(__file__.startswith, sys.path), key=len)
sys.path.remove(parentdir)
del sys.modules['encodings']
import encodings

third_party/python/coverage/coverage/html.py (vendored, new file, 511 lines)

@@ -0,0 +1,511 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""HTML reporting for coverage.py."""
import datetime
import json
import os
import re
import shutil
import coverage
from coverage import env
from coverage.backward import iitems, SimpleNamespace
from coverage.data import add_data_to_hash
from coverage.files import flat_rootname
from coverage.misc import CoverageException, ensure_dir, file_be_gone, Hasher, isolate_module
from coverage.report import get_analysis_to_report
from coverage.results import Numbers
from coverage.templite import Templite
os = isolate_module(os)
# Static files are looked for in a list of places.
STATIC_PATH = [
# The place Debian puts system Javascript libraries.
"/usr/share/javascript",
# Our htmlfiles directory.
os.path.join(os.path.dirname(__file__), "htmlfiles"),
]
def data_filename(fname, pkgdir=""):
"""Return the path to a data file of ours.
The file is searched for on `STATIC_PATH`, and the first place it's found,
is returned.
Each directory in `STATIC_PATH` is searched as-is, and also, if `pkgdir`
is provided, at that sub-directory.
"""
tried = []
for static_dir in STATIC_PATH:
static_filename = os.path.join(static_dir, fname)
if os.path.exists(static_filename):
return static_filename
else:
tried.append(static_filename)
if pkgdir:
static_filename = os.path.join(static_dir, pkgdir, fname)
if os.path.exists(static_filename):
return static_filename
else:
tried.append(static_filename)
raise CoverageException(
"Couldn't find static file %r from %r, tried: %r" % (fname, os.getcwd(), tried)
)
def read_data(fname):
"""Return the contents of a data file of ours."""
with open(data_filename(fname)) as data_file:
return data_file.read()
def write_html(fname, html):
"""Write `html` to `fname`, properly encoded."""
html = re.sub(r"(\A\s+)|(\s+$)", "", html, flags=re.MULTILINE) + "\n"
with open(fname, "wb") as fout:
fout.write(html.encode('ascii', 'xmlcharrefreplace'))
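The encode step used by `write_html` above is worth seeing in isolation: `'xmlcharrefreplace'` turns every non-ASCII character into a numeric character reference, so the written file is pure ASCII regardless of what the source tokens contain.

```python
# Same encoding strategy as write_html above, on a small sample.
html = u"r\xe9sum\xe9 \u2192 done"
encoded = html.encode('ascii', 'xmlcharrefreplace')
```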
class HtmlDataGeneration(object):
"""Generate structured data to be turned into HTML reports."""
EMPTY = "(empty)"
def __init__(self, cov):
self.coverage = cov
self.config = self.coverage.config
data = self.coverage.get_data()
self.has_arcs = data.has_arcs()
if self.config.show_contexts:
if data.measured_contexts() == set([""]):
self.coverage._warn("No contexts were measured")
data.set_query_contexts(self.config.report_contexts)
def data_for_file(self, fr, analysis):
"""Produce the data needed for one file's report."""
if self.has_arcs:
missing_branch_arcs = analysis.missing_branch_arcs()
arcs_executed = analysis.arcs_executed()
if self.config.show_contexts:
contexts_by_lineno = analysis.data.contexts_by_lineno(analysis.filename)
lines = []
for lineno, tokens in enumerate(fr.source_token_lines(), start=1):
# Figure out how to mark this line.
category = None
short_annotations = []
long_annotations = []
if lineno in analysis.excluded:
category = 'exc'
elif lineno in analysis.missing:
category = 'mis'
elif self.has_arcs and lineno in missing_branch_arcs:
category = 'par'
for b in missing_branch_arcs[lineno]:
if b < 0:
short_annotations.append("exit")
else:
short_annotations.append(b)
long_annotations.append(fr.missing_arc_description(lineno, b, arcs_executed))
elif lineno in analysis.statements:
category = 'run'
contexts = contexts_label = None
context_list = None
if category and self.config.show_contexts:
contexts = sorted(c or self.EMPTY for c in contexts_by_lineno[lineno])
if contexts == [self.EMPTY]:
contexts_label = self.EMPTY
else:
contexts_label = "{} ctx".format(len(contexts))
context_list = contexts
lines.append(SimpleNamespace(
tokens=tokens,
number=lineno,
category=category,
statement=(lineno in analysis.statements),
contexts=contexts,
contexts_label=contexts_label,
context_list=context_list,
short_annotations=short_annotations,
long_annotations=long_annotations,
))
file_data = SimpleNamespace(
relative_filename=fr.relative_filename(),
nums=analysis.numbers,
lines=lines,
)
return file_data
class HtmlReporter(object):
"""HTML reporting."""
# These files will be copied from the htmlfiles directory to the output
# directory.
STATIC_FILES = [
("style.css", ""),
("jquery.min.js", "jquery"),
("jquery.ba-throttle-debounce.min.js", "jquery-throttle-debounce"),
("jquery.hotkeys.js", "jquery-hotkeys"),
("jquery.isonscreen.js", "jquery-isonscreen"),
("jquery.tablesorter.min.js", "jquery-tablesorter"),
("coverage_html.js", ""),
("keybd_closed.png", ""),
("keybd_open.png", ""),
]
def __init__(self, cov):
self.coverage = cov
self.config = self.coverage.config
self.directory = self.config.html_dir
title = self.config.html_title
if env.PY2:
title = title.decode("utf8")
if self.config.extra_css:
self.extra_css = os.path.basename(self.config.extra_css)
else:
self.extra_css = None
self.data = self.coverage.get_data()
self.has_arcs = self.data.has_arcs()
self.file_summaries = []
self.all_files_nums = []
self.incr = IncrementalChecker(self.directory)
self.datagen = HtmlDataGeneration(self.coverage)
self.totals = Numbers()
self.template_globals = {
# Functions available in the templates.
'escape': escape,
'pair': pair,
'len': len,
# Constants for this report.
'__url__': coverage.__url__,
'__version__': coverage.__version__,
'title': title,
'time_stamp': datetime.datetime.now().strftime('%Y-%m-%d %H:%M'),
'extra_css': self.extra_css,
'has_arcs': self.has_arcs,
'show_contexts': self.config.show_contexts,
# Constants for all reports.
# These css classes determine which lines are highlighted by default.
'category': {
'exc': 'exc show_exc',
'mis': 'mis show_mis',
'par': 'par run show_par',
'run': 'run',
}
}
self.pyfile_html_source = read_data("pyfile.html")
self.source_tmpl = Templite(self.pyfile_html_source, self.template_globals)
def report(self, morfs):
"""Generate an HTML report for `morfs`.
`morfs` is a list of modules or file names.
"""
# Read the status data and check that this run used the same
# global data as the last run.
self.incr.read()
self.incr.check_global_data(self.config, self.pyfile_html_source)
# Process all the files.
for fr, analysis in get_analysis_to_report(self.coverage, morfs):
self.html_file(fr, analysis)
if not self.all_files_nums:
raise CoverageException("No data to report.")
self.totals = sum(self.all_files_nums)
# Write the index file.
self.index_file()
self.make_local_static_report_files()
return self.totals.n_statements and self.totals.pc_covered
def make_local_static_report_files(self):
"""Make local instances of static files for HTML report."""
# The files we provide must always be copied.
for static, pkgdir in self.STATIC_FILES:
shutil.copyfile(
data_filename(static, pkgdir),
os.path.join(self.directory, static)
)
# The user may have extra CSS they want copied.
if self.extra_css:
shutil.copyfile(
self.config.extra_css,
os.path.join(self.directory, self.extra_css)
)
def html_file(self, fr, analysis):
"""Generate an HTML file for one source file."""
rootname = flat_rootname(fr.relative_filename())
html_filename = rootname + ".html"
ensure_dir(self.directory)
html_path = os.path.join(self.directory, html_filename)
# Get the numbers for this file.
nums = analysis.numbers
self.all_files_nums.append(nums)
if self.config.skip_covered:
# Don't report on 100% files.
no_missing_lines = (nums.n_missing == 0)
no_missing_branches = (nums.n_partial_branches == 0)
if no_missing_lines and no_missing_branches:
# If there's an existing file, remove it.
file_be_gone(html_path)
return
if self.config.skip_empty:
# Don't report on empty files.
if nums.n_statements == 0:
file_be_gone(html_path)
return
# Find out if the file on disk is already correct.
if self.incr.can_skip_file(self.data, fr, rootname):
self.file_summaries.append(self.incr.index_info(rootname))
return
# Write the HTML page for this file.
file_data = self.datagen.data_for_file(fr, analysis)
for ldata in file_data.lines:
# Build the HTML for the line.
html = []
for tok_type, tok_text in ldata.tokens:
if tok_type == "ws":
html.append(escape(tok_text))
else:
tok_html = escape(tok_text) or '&nbsp;'
html.append(
u'<span class="{}">{}</span>'.format(tok_type, tok_html)
)
ldata.html = ''.join(html)
if ldata.short_annotations:
# 202F is NARROW NO-BREAK SPACE.
# 219B is RIGHTWARDS ARROW WITH STROKE.
ldata.annotate = u",&nbsp;&nbsp; ".join(
u"{}&#x202F;&#x219B;&#x202F;{}".format(ldata.number, d)
for d in ldata.short_annotations
)
else:
ldata.annotate = None
if ldata.long_annotations:
longs = ldata.long_annotations
if len(longs) == 1:
ldata.annotate_long = longs[0]
else:
ldata.annotate_long = u"{:d} missed branches: {}".format(
len(longs),
u", ".join(
u"{:d}) {}".format(num, ann_long)
for num, ann_long in enumerate(longs, start=1)
),
)
else:
ldata.annotate_long = None
css_classes = []
if ldata.category:
css_classes.append(self.template_globals['category'][ldata.category])
ldata.css_class = ' '.join(css_classes) or "pln"
html = self.source_tmpl.render(file_data.__dict__)
write_html(html_path, html)
# Save this file's information for the index file.
index_info = {
'nums': nums,
'html_filename': html_filename,
'relative_filename': fr.relative_filename(),
}
self.file_summaries.append(index_info)
self.incr.set_index_info(rootname, index_info)
def index_file(self):
"""Write the index.html file for this report."""
index_tmpl = Templite(read_data("index.html"), self.template_globals)
html = index_tmpl.render({
'files': self.file_summaries,
'totals': self.totals,
})
write_html(os.path.join(self.directory, "index.html"), html)
# Write the latest hashes for next time.
self.incr.write()
class IncrementalChecker(object):
"""Logic and data to support incremental reporting."""
STATUS_FILE = "status.json"
STATUS_FORMAT = 2
# pylint: disable=wrong-spelling-in-comment,useless-suppression
# The data looks like:
#
# {
# "format": 2,
# "globals": "540ee119c15d52a68a53fe6f0897346d",
# "version": "4.0a1",
# "files": {
# "cogapp___init__": {
# "hash": "e45581a5b48f879f301c0f30bf77a50c",
# "index": {
# "html_filename": "cogapp___init__.html",
# "relative_filename": "cogapp/__init__",
# "nums": [ 1, 14, 0, 0, 0, 0, 0 ]
# }
# },
# ...
# "cogapp_whiteutils": {
# "hash": "8504bb427fc488c4176809ded0277d51",
# "index": {
# "html_filename": "cogapp_whiteutils.html",
# "relative_filename": "cogapp/whiteutils",
# "nums": [ 1, 59, 0, 1, 28, 2, 2 ]
# }
# }
# }
# }
def __init__(self, directory):
self.directory = directory
self.reset()
def reset(self):
"""Initialize to empty. Causes all files to be reported."""
self.globals = ''
self.files = {}
def read(self):
"""Read the information we stored last time."""
usable = False
try:
status_file = os.path.join(self.directory, self.STATUS_FILE)
with open(status_file) as fstatus:
status = json.load(fstatus)
except (IOError, ValueError):
usable = False
else:
usable = True
if status['format'] != self.STATUS_FORMAT:
usable = False
elif status['version'] != coverage.__version__:
usable = False
if usable:
self.files = {}
for filename, fileinfo in iitems(status['files']):
fileinfo['index']['nums'] = Numbers(*fileinfo['index']['nums'])
self.files[filename] = fileinfo
self.globals = status['globals']
else:
self.reset()
def write(self):
"""Write the current status."""
status_file = os.path.join(self.directory, self.STATUS_FILE)
files = {}
for filename, fileinfo in iitems(self.files):
fileinfo['index']['nums'] = fileinfo['index']['nums'].init_args()
files[filename] = fileinfo
status = {
'format': self.STATUS_FORMAT,
'version': coverage.__version__,
'globals': self.globals,
'files': files,
}
with open(status_file, "w") as fout:
json.dump(status, fout, separators=(',', ':'))
def check_global_data(self, *data):
"""Check the global data that can affect incremental reporting."""
m = Hasher()
for d in data:
m.update(d)
these_globals = m.hexdigest()
if self.globals != these_globals:
self.reset()
self.globals = these_globals
def can_skip_file(self, data, fr, rootname):
"""Can we skip reporting this file?
`data` is a CoverageData object, `fr` is a `FileReporter`, and
`rootname` is the name being used for the file.
"""
m = Hasher()
m.update(fr.source().encode('utf-8'))
add_data_to_hash(data, fr.filename, m)
this_hash = m.hexdigest()
that_hash = self.file_hash(rootname)
if this_hash == that_hash:
# Nothing has changed to require the file to be reported again.
return True
else:
self.set_file_hash(rootname, this_hash)
return False
def file_hash(self, fname):
"""Get the hash of `fname`'s contents."""
return self.files.get(fname, {}).get('hash', '')
def set_file_hash(self, fname, val):
"""Set the hash of `fname`'s contents."""
self.files.setdefault(fname, {})['hash'] = val
def index_info(self, fname):
"""Get the information for index.html for `fname`."""
return self.files.get(fname, {}).get('index', {})
def set_index_info(self, fname, info):
"""Set the information for index.html for `fname`."""
self.files.setdefault(fname, {})['index'] = info
# Helpers for templates and generating HTML
def escape(t):
"""HTML-escape the text in `t`.
This is only suitable for HTML text, not attributes.
"""
# Convert HTML special chars into HTML entities.
return t.replace("&", "&amp;").replace("<", "&lt;")
def pair(ratio):
"""Format a pair of numbers so JavaScript can read them in an attribute."""
return "%s %s" % ratio
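The status.json layout documented in the IncrementalChecker comment above can be exercised with the standard library alone. The following is a minimal sketch of the write/read round-trip, not coverage.py's API: `write_status`, `read_status`, the `"5.1"` version string, and the plain-list `nums` are illustrative stand-ins, and the MD5 hex digest merely mimics what coverage's `Hasher` produces.

```python
import hashlib
import json
import os
import tempfile

# Stand-in for coverage's incremental status file (STATUS_FORMAT = 2).
# The layout mirrors the comment in IncrementalChecker.
def write_status(directory, global_hash, files):
    status = {"format": 2, "version": "5.1", "globals": global_hash, "files": files}
    with open(os.path.join(directory, "status.json"), "w") as fout:
        json.dump(status, fout, separators=(",", ":"))

def read_status(directory):
    try:
        with open(os.path.join(directory, "status.json")) as fstatus:
            status = json.load(fstatus)
    except (IOError, ValueError):
        return None
    if status.get("format") != 2:
        return None  # unusable: every file gets reported again
    return status

tmp = tempfile.mkdtemp()
src_hash = hashlib.md5(b"print('hi')").hexdigest()
write_status(tmp, "abc123", {
    "mod": {"hash": src_hash, "index": {"nums": [1, 14, 0, 0, 0, 0, 0]}},
})
status = read_status(tmp)
```

As in `can_skip_file`, comparing the stored hash against a fresh hash of the source decides whether the HTML page needs regenerating.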

third_party/python/coverage/coverage/htmlfiles/coverage_html.js vendored Normal file

@ -0,0 +1,589 @@
// Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
// For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
// Coverage.py HTML report browser code.
/*jslint browser: true, sloppy: true, vars: true, plusplus: true, maxerr: 50, indent: 4 */
/*global coverage: true, document, window, $ */
coverage = {};
// Find all the elements with shortkey_* class, and use them to assign a shortcut key.
coverage.assign_shortkeys = function () {
$("*[class*='shortkey_']").each(function (i, e) {
$.each($(e).attr("class").split(" "), function (i, c) {
if (/^shortkey_/.test(c)) {
$(document).bind('keydown', c.substr(9), function () {
$(e).click();
});
}
});
});
};
// Create the events for the help panel.
coverage.wire_up_help_panel = function () {
$("#keyboard_icon").click(function () {
// Show the help panel, and position it so the keyboard icon in the
// panel is in the same place as the keyboard icon in the header.
$(".help_panel").show();
var koff = $("#keyboard_icon").offset();
var poff = $("#panel_icon").position();
$(".help_panel").offset({
top: koff.top-poff.top,
left: koff.left-poff.left
});
});
$("#panel_icon").click(function () {
$(".help_panel").hide();
});
};
// Create the events for the filter box.
coverage.wire_up_filter = function () {
// Cache elements.
var table = $("table.index");
var table_rows = table.find("tbody tr");
var table_row_names = table_rows.find("td.name a");
var no_rows = $("#no_rows");
// Create a duplicate table footer that we can modify with dynamic summed values.
var table_footer = $("table.index tfoot tr");
var table_dynamic_footer = table_footer.clone();
table_dynamic_footer.attr('class', 'total_dynamic hidden');
table_footer.after(table_dynamic_footer);
// Observe filter keyevents.
$("#filter").on("keyup change", $.debounce(150, function (event) {
var filter_value = $(this).val();
if (filter_value === "") {
// Filter box is empty, remove all filtering.
table_rows.removeClass("hidden");
// Show standard footer, hide dynamic footer.
table_footer.removeClass("hidden");
table_dynamic_footer.addClass("hidden");
// Hide placeholder, show table.
if (no_rows.length > 0) {
no_rows.hide();
}
table.show();
}
else {
// Filter table items by value.
var hidden = 0;
var shown = 0;
// Hide / show elements.
$.each(table_row_names, function () {
var element = $(this).parents("tr");
if ($(this).text().indexOf(filter_value) === -1) {
// hide
element.addClass("hidden");
hidden++;
}
else {
// show
element.removeClass("hidden");
shown++;
}
});
// Show placeholder if no rows will be displayed.
if (no_rows.length > 0) {
if (shown === 0) {
// Show placeholder, hide table.
no_rows.show();
table.hide();
}
else {
// Hide placeholder, show table.
no_rows.hide();
table.show();
}
}
// Manage dynamic header:
if (hidden > 0) {
// Calculate new dynamic sum values based on visible rows.
for (var column = 2; column < 20; column++) {
// Calculate summed value.
var cells = table_rows.find('td:nth-child(' + column + ')');
if (!cells.length) {
// No more columns...!
break;
}
var sum = 0, numer = 0, denom = 0;
$.each(cells.filter(':visible'), function () {
var ratio = $(this).data("ratio");
if (ratio) {
var splitted = ratio.split(" ");
numer += parseInt(splitted[0], 10);
denom += parseInt(splitted[1], 10);
}
else {
sum += parseInt(this.innerHTML, 10);
}
});
// Get footer cell element.
var footer_cell = table_dynamic_footer.find('td:nth-child(' + column + ')');
// Set value into dynamic footer cell element.
if (cells[0].innerHTML.indexOf('%') > -1) {
// Percentage columns use the numerator and denominator,
// and adapt to the number of decimal places.
var match = /\.([0-9]+)/.exec(cells[0].innerHTML);
var places = 0;
if (match) {
places = match[1].length;
}
var pct = numer * 100 / denom;
footer_cell.text(pct.toFixed(places) + '%');
}
else {
footer_cell.text(sum);
}
}
// Hide standard footer, show dynamic footer.
table_footer.addClass("hidden");
table_dynamic_footer.removeClass("hidden");
}
else {
// Show standard footer, hide dynamic footer.
table_footer.removeClass("hidden");
table_dynamic_footer.addClass("hidden");
}
}
}));
// Trigger change event on setup, to force filter on page refresh
// (filter value may still be present).
$("#filter").trigger("change");
};
// Loaded on index.html
coverage.index_ready = function ($) {
// Look for a localStorage item containing previous sort settings:
var sort_list = [];
var storage_name = "COVERAGE_INDEX_SORT";
var stored_list = undefined;
try {
stored_list = localStorage.getItem(storage_name);
} catch(err) {}
if (stored_list) {
sort_list = JSON.parse('[[' + stored_list + ']]');
}
// Create a new widget which exists only to save and restore
// the sort order:
$.tablesorter.addWidget({
id: "persistentSort",
// Format is called by the widget before displaying:
format: function (table) {
if (table.config.sortList.length === 0 && sort_list.length > 0) {
// This table hasn't been sorted before - we'll use
// our stored settings:
$(table).trigger('sorton', [sort_list]);
}
else {
// This is not the first load - something has
// already defined sorting so we'll just update
// our stored value to match:
sort_list = table.config.sortList;
}
}
});
// Configure our tablesorter to handle the variable number of
// columns produced depending on report options:
var headers = [];
var col_count = $("table.index > thead > tr > th").length;
headers[0] = { sorter: 'text' };
for (var i = 1; i < col_count-1; i++) {
headers[i] = { sorter: 'digit' };
}
headers[col_count-1] = { sorter: 'percent' };
// Enable the table sorter:
$("table.index").tablesorter({
widgets: ['persistentSort'],
headers: headers
});
coverage.assign_shortkeys();
coverage.wire_up_help_panel();
coverage.wire_up_filter();
// Watch for page unload events so we can save the final sort settings:
$(window).unload(function () {
try {
localStorage.setItem(storage_name, sort_list.toString())
} catch(err) {}
});
};
// -- pyfile stuff --
coverage.pyfile_ready = function ($) {
// If we're directed to a particular line number, highlight the line.
var frag = location.hash;
if (frag.length > 2 && frag[1] === 't') {
$(frag).addClass('highlight');
coverage.set_sel(parseInt(frag.substr(2), 10));
}
else {
coverage.set_sel(0);
}
$(document)
.bind('keydown', 'j', coverage.to_next_chunk_nicely)
.bind('keydown', 'k', coverage.to_prev_chunk_nicely)
.bind('keydown', '0', coverage.to_top)
.bind('keydown', '1', coverage.to_first_chunk)
;
$(".button_toggle_run").click(function (evt) {coverage.toggle_lines(evt.target, "run");});
$(".button_toggle_exc").click(function (evt) {coverage.toggle_lines(evt.target, "exc");});
$(".button_toggle_mis").click(function (evt) {coverage.toggle_lines(evt.target, "mis");});
$(".button_toggle_par").click(function (evt) {coverage.toggle_lines(evt.target, "par");});
coverage.assign_shortkeys();
coverage.wire_up_help_panel();
coverage.init_scroll_markers();
// Rebuild scroll markers when the window height changes.
$(window).resize(coverage.build_scroll_markers);
};
coverage.toggle_lines = function (btn, cls) {
btn = $(btn);
var show = "show_"+cls;
if (btn.hasClass(show)) {
$("#source ." + cls).removeClass(show);
btn.removeClass(show);
}
else {
$("#source ." + cls).addClass(show);
btn.addClass(show);
}
coverage.build_scroll_markers();
};
// Return the nth line div.
coverage.line_elt = function (n) {
return $("#t" + n);
};
// Return the nth line number div.
coverage.num_elt = function (n) {
return $("#n" + n);
};
// Set the selection. b and e are line numbers.
coverage.set_sel = function (b, e) {
// The first line selected.
coverage.sel_begin = b;
// The next line not selected.
coverage.sel_end = (e === undefined) ? b+1 : e;
};
coverage.to_top = function () {
coverage.set_sel(0, 1);
coverage.scroll_window(0);
};
coverage.to_first_chunk = function () {
coverage.set_sel(0, 1);
coverage.to_next_chunk();
};
// Return a string indicating what kind of chunk this line belongs to,
// or null if not a chunk.
coverage.chunk_indicator = function (line_elt) {
var klass = line_elt.attr('class');
if (klass) {
var m = klass.match(/\bshow_\w+\b/);
if (m) {
return m[0];
}
}
return null;
};
coverage.to_next_chunk = function () {
var c = coverage;
// Find the start of the next colored chunk.
var probe = c.sel_end;
var chunk_indicator, probe_line;
while (true) {
probe_line = c.line_elt(probe);
if (probe_line.length === 0) {
return;
}
chunk_indicator = c.chunk_indicator(probe_line);
if (chunk_indicator) {
break;
}
probe++;
}
// There's a next chunk, `probe` points to it.
var begin = probe;
// Find the end of this chunk.
var next_indicator = chunk_indicator;
while (next_indicator === chunk_indicator) {
probe++;
probe_line = c.line_elt(probe);
next_indicator = c.chunk_indicator(probe_line);
}
c.set_sel(begin, probe);
c.show_selection();
};
coverage.to_prev_chunk = function () {
var c = coverage;
// Find the end of the prev colored chunk.
var probe = c.sel_begin-1;
var probe_line = c.line_elt(probe);
if (probe_line.length === 0) {
return;
}
var chunk_indicator = c.chunk_indicator(probe_line);
while (probe > 0 && !chunk_indicator) {
probe--;
probe_line = c.line_elt(probe);
if (probe_line.length === 0) {
return;
}
chunk_indicator = c.chunk_indicator(probe_line);
}
// There's a prev chunk, `probe` points to its last line.
var end = probe+1;
// Find the beginning of this chunk.
var prev_indicator = chunk_indicator;
while (prev_indicator === chunk_indicator) {
probe--;
probe_line = c.line_elt(probe);
prev_indicator = c.chunk_indicator(probe_line);
}
c.set_sel(probe+1, end);
c.show_selection();
};
// Return the line number of the line nearest pixel position pos
coverage.line_at_pos = function (pos) {
var l1 = coverage.line_elt(1),
l2 = coverage.line_elt(2),
result;
if (l1.length && l2.length) {
var l1_top = l1.offset().top,
line_height = l2.offset().top - l1_top,
nlines = (pos - l1_top) / line_height;
if (nlines < 1) {
result = 1;
}
else {
result = Math.ceil(nlines);
}
}
else {
result = 1;
}
return result;
};
// Returns 0, 1, or 2: how many of the two ends of the selection are on
// the screen right now?
coverage.selection_ends_on_screen = function () {
if (coverage.sel_begin === 0) {
return 0;
}
var top = coverage.line_elt(coverage.sel_begin);
var next = coverage.line_elt(coverage.sel_end-1);
return (
(top.isOnScreen() ? 1 : 0) +
(next.isOnScreen() ? 1 : 0)
);
};
coverage.to_next_chunk_nicely = function () {
coverage.finish_scrolling();
if (coverage.selection_ends_on_screen() === 0) {
// The selection is entirely off the screen: select the top line on
// the screen.
var win = $(window);
coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop()));
}
coverage.to_next_chunk();
};
coverage.to_prev_chunk_nicely = function () {
coverage.finish_scrolling();
if (coverage.selection_ends_on_screen() === 0) {
var win = $(window);
coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop() + win.height()));
}
coverage.to_prev_chunk();
};
// Select line number lineno, or if it is in a colored chunk, select the
// entire chunk
coverage.select_line_or_chunk = function (lineno) {
var c = coverage;
var probe_line = c.line_elt(lineno);
if (probe_line.length === 0) {
return;
}
var the_indicator = c.chunk_indicator(probe_line);
if (the_indicator) {
// The line is in a highlighted chunk.
// Search backward for the first line.
var probe = lineno;
var indicator = the_indicator;
while (probe > 0 && indicator === the_indicator) {
probe--;
probe_line = c.line_elt(probe);
if (probe_line.length === 0) {
break;
}
indicator = c.chunk_indicator(probe_line);
}
var begin = probe + 1;
// Search forward for the last line.
probe = lineno;
indicator = the_indicator;
while (indicator === the_indicator) {
probe++;
probe_line = c.line_elt(probe);
indicator = c.chunk_indicator(probe_line);
}
coverage.set_sel(begin, probe);
}
else {
coverage.set_sel(lineno);
}
};
coverage.show_selection = function () {
var c = coverage;
// Highlight the lines in the chunk
$(".linenos .highlight").removeClass("highlight");
for (var probe = c.sel_begin; probe > 0 && probe < c.sel_end; probe++) {
c.num_elt(probe).addClass("highlight");
}
c.scroll_to_selection();
};
coverage.scroll_to_selection = function () {
// Scroll the page if the chunk isn't fully visible.
if (coverage.selection_ends_on_screen() < 2) {
// Need to move the page. The html,body trick makes it scroll in all
// browsers, got it from http://stackoverflow.com/questions/3042651
var top = coverage.line_elt(coverage.sel_begin);
var top_pos = parseInt(top.offset().top, 10);
coverage.scroll_window(top_pos - 30);
}
};
coverage.scroll_window = function (to_pos) {
$("html,body").animate({scrollTop: to_pos}, 200);
};
coverage.finish_scrolling = function () {
$("html,body").stop(true, true);
};
coverage.init_scroll_markers = function () {
var c = coverage;
// Init some variables
c.lines_len = $('#source p').length;
c.body_h = $('body').height();
c.header_h = $('div#header').height();
// Build html
c.build_scroll_markers();
};
coverage.build_scroll_markers = function () {
var c = coverage,
min_line_height = 3,
max_line_height = 10,
visible_window_h = $(window).height();
c.lines_to_mark = $('#source').find('p.show_run, p.show_mis, p.show_exc, p.show_par');
$('#scroll_marker').remove();
// Don't build markers if the window has no scroll bar.
if (c.body_h <= visible_window_h) {
return;
}
$("body").append("<div id='scroll_marker'>&nbsp;</div>");
var scroll_marker = $('#scroll_marker'),
marker_scale = scroll_marker.height() / c.body_h,
line_height = scroll_marker.height() / c.lines_len;
// Line height must be between the extremes.
if (line_height > min_line_height) {
if (line_height > max_line_height) {
line_height = max_line_height;
}
}
else {
line_height = min_line_height;
}
var previous_line = -99,
last_mark,
last_top,
offsets = {};
// Calculate line offsets outside loop to prevent relayouts
c.lines_to_mark.each(function() {
offsets[this.id] = $(this).offset().top;
});
c.lines_to_mark.each(function () {
var id_name = $(this).attr('id'),
line_top = Math.round(offsets[id_name] * marker_scale),
line_number = parseInt(id_name.substring(1, id_name.length));
if (line_number === previous_line + 1) {
// If this solid missed block just make previous mark higher.
last_mark.css({
'height': line_top + line_height - last_top
});
}
else {
// Add colored line in scroll_marker block.
scroll_marker.append('<div id="m' + line_number + '" class="marker"></div>');
last_mark = $('#m' + line_number);
last_mark.css({
'height': line_height,
'top': line_top
});
last_top = line_top;
}
previous_line = line_number;
});
};
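The dynamic footer above recomputes each percentage column from the `data-ratio="numer denom"` attributes that the `pair()` helper in html.py emits. A Python sketch of that aggregation follows (Python to match the rest of the commit); `footer_percent` is an illustrative name, not part of coverage.py:

```python
# Each percentage cell carries data-ratio="covered total" (see pair() in
# html.py); visible rows are summed and re-expressed as a percentage,
# keeping the same number of decimal places as the original cells.
def footer_percent(ratios, places=0):
    numer = sum(int(r.split(" ")[0]) for r in ratios)
    denom = sum(int(r.split(" ")[1]) for r in ratios)
    return "{:.{p}f}%".format(numer * 100 / denom, p=places)

print(footer_percent(["10 20", "5 5"]))  # → 60%
```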

third_party/python/coverage/coverage/htmlfiles/index.html vendored Normal file

@ -0,0 +1,118 @@
{# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 #}
{# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt #}
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>{{ title|escape }}</title>
<link rel="stylesheet" href="style.css" type="text/css">
{% if extra_css %}
<link rel="stylesheet" href="{{ extra_css }}" type="text/css">
{% endif %}
<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="jquery.ba-throttle-debounce.min.js"></script>
<script type="text/javascript" src="jquery.tablesorter.min.js"></script>
<script type="text/javascript" src="jquery.hotkeys.js"></script>
<script type="text/javascript" src="coverage_html.js"></script>
<script type="text/javascript">
jQuery(document).ready(coverage.index_ready);
</script>
</head>
<body class="indexfile">
<div id="header">
<div class="content">
<h1>{{ title|escape }}:
<span class="pc_cov">{{totals.pc_covered_str}}%</span>
</h1>
<img id="keyboard_icon" src="keybd_closed.png" alt="Show keyboard shortcuts" />
<form id="filter_container">
<input id="filter" type="text" value="" placeholder="filter..." />
</form>
</div>
</div>
<div class="help_panel">
<img id="panel_icon" src="keybd_open.png" alt="Hide keyboard shortcuts" />
<p class="legend">Hot-keys on this page</p>
<div>
<p class="keyhelp">
<span class="key">n</span>
<span class="key">s</span>
<span class="key">m</span>
<span class="key">x</span>
{% if has_arcs %}
<span class="key">b</span>
<span class="key">p</span>
{% endif %}
<span class="key">c</span> &nbsp; change column sorting
</p>
</div>
</div>
<div id="index">
<table class="index">
<thead>
{# The title="" attr doesn't work in Safari. #}
<tr class="tablehead" title="Click to sort">
<th class="name left headerSortDown shortkey_n">Module</th>
<th class="shortkey_s">statements</th>
<th class="shortkey_m">missing</th>
<th class="shortkey_x">excluded</th>
{% if has_arcs %}
<th class="shortkey_b">branches</th>
<th class="shortkey_p">partial</th>
{% endif %}
<th class="right shortkey_c">coverage</th>
</tr>
</thead>
{# HTML syntax requires thead, tfoot, tbody #}
<tfoot>
<tr class="total">
<td class="name left">Total</td>
<td>{{totals.n_statements}}</td>
<td>{{totals.n_missing}}</td>
<td>{{totals.n_excluded}}</td>
{% if has_arcs %}
<td>{{totals.n_branches}}</td>
<td>{{totals.n_partial_branches}}</td>
{% endif %}
<td class="right" data-ratio="{{totals.ratio_covered|pair}}">{{totals.pc_covered_str}}%</td>
</tr>
</tfoot>
<tbody>
{% for file in files %}
<tr class="file">
<td class="name left"><a href="{{file.html_filename}}">{{file.relative_filename}}</a></td>
<td>{{file.nums.n_statements}}</td>
<td>{{file.nums.n_missing}}</td>
<td>{{file.nums.n_excluded}}</td>
{% if has_arcs %}
<td>{{file.nums.n_branches}}</td>
<td>{{file.nums.n_partial_branches}}</td>
{% endif %}
<td class="right" data-ratio="{{file.nums.ratio_covered|pair}}">{{file.nums.pc_covered_str}}%</td>
</tr>
{% endfor %}
</tbody>
</table>
<p id="no_rows">
No items found using the specified filter.
</p>
</div>
<div id="footer">
<div class="content">
<p>
<a class="nav" href="{{__url__}}">coverage.py v{{__version__}}</a>,
created at {{ time_stamp }}
</p>
</div>
</div>
</body>
</html>

third_party/python/coverage/coverage/htmlfiles/jquery.ba-throttle-debounce.min.js vendored Normal file

@ -0,0 +1,9 @@
/*
* jQuery throttle / debounce - v1.1 - 3/7/2010
* http://benalman.com/projects/jquery-throttle-debounce-plugin/
*
* Copyright (c) 2010 "Cowboy" Ben Alman
* Dual licensed under the MIT and GPL licenses.
* http://benalman.com/about/license/
*/
(function(b,c){var $=b.jQuery||b.Cowboy||(b.Cowboy={}),a;$.throttle=a=function(e,f,j,i){var h,d=0;if(typeof f!=="boolean"){i=j;j=f;f=c}function g(){var o=this,m=+new Date()-d,n=arguments;function l(){d=+new Date();j.apply(o,n)}function k(){h=c}if(i&&!h){l()}h&&clearTimeout(h);if(i===c&&m>e){l()}else{if(f!==true){h=setTimeout(i?k:l,i===c?e-m:e)}}}if($.guid){g.guid=j.guid=j.guid||$.guid++}return g};$.debounce=function(d,e,f){return f===c?a(d,e,false):a(d,f,e!==false)}})(this);

third_party/python/coverage/coverage/htmlfiles/jquery.hotkeys.js vendored Normal file

@ -0,0 +1,99 @@
/*
* jQuery Hotkeys Plugin
* Copyright 2010, John Resig
* Dual licensed under the MIT or GPL Version 2 licenses.
*
* Based upon the plugin by Tzury Bar Yochay:
* http://github.com/tzuryby/hotkeys
*
* Original idea by:
* Binny V A, http://www.openjs.com/scripts/events/keyboard_shortcuts/
*/
(function(jQuery){
jQuery.hotkeys = {
version: "0.8",
specialKeys: {
8: "backspace", 9: "tab", 13: "return", 16: "shift", 17: "ctrl", 18: "alt", 19: "pause",
20: "capslock", 27: "esc", 32: "space", 33: "pageup", 34: "pagedown", 35: "end", 36: "home",
37: "left", 38: "up", 39: "right", 40: "down", 45: "insert", 46: "del",
96: "0", 97: "1", 98: "2", 99: "3", 100: "4", 101: "5", 102: "6", 103: "7",
104: "8", 105: "9", 106: "*", 107: "+", 109: "-", 110: ".", 111 : "/",
112: "f1", 113: "f2", 114: "f3", 115: "f4", 116: "f5", 117: "f6", 118: "f7", 119: "f8",
120: "f9", 121: "f10", 122: "f11", 123: "f12", 144: "numlock", 145: "scroll", 191: "/", 224: "meta"
},
shiftNums: {
"`": "~", "1": "!", "2": "@", "3": "#", "4": "$", "5": "%", "6": "^", "7": "&",
"8": "*", "9": "(", "0": ")", "-": "_", "=": "+", ";": ": ", "'": "\"", ",": "<",
".": ">", "/": "?", "\\": "|"
}
};
function keyHandler( handleObj ) {
// Only care when a possible input has been specified
if ( typeof handleObj.data !== "string" ) {
return;
}
var origHandler = handleObj.handler,
keys = handleObj.data.toLowerCase().split(" ");
handleObj.handler = function( event ) {
// Don't fire in text-accepting inputs that we didn't directly bind to
if ( this !== event.target && (/textarea|select/i.test( event.target.nodeName ) ||
event.target.type === "text") ) {
return;
}
// Keypress represents characters, not special keys
var special = event.type !== "keypress" && jQuery.hotkeys.specialKeys[ event.which ],
character = String.fromCharCode( event.which ).toLowerCase(),
key, modif = "", possible = {};
// check combinations (alt|ctrl|shift+anything)
if ( event.altKey && special !== "alt" ) {
modif += "alt+";
}
if ( event.ctrlKey && special !== "ctrl" ) {
modif += "ctrl+";
}
// TODO: Need to make sure this works consistently across platforms
if ( event.metaKey && !event.ctrlKey && special !== "meta" ) {
modif += "meta+";
}
if ( event.shiftKey && special !== "shift" ) {
modif += "shift+";
}
if ( special ) {
possible[ modif + special ] = true;
} else {
possible[ modif + character ] = true;
possible[ modif + jQuery.hotkeys.shiftNums[ character ] ] = true;
// "$" can be triggered as "Shift+4" or "Shift+$" or just "$"
if ( modif === "shift+" ) {
possible[ jQuery.hotkeys.shiftNums[ character ] ] = true;
}
}
for ( var i = 0, l = keys.length; i < l; i++ ) {
if ( possible[ keys[i] ] ) {
return origHandler.apply( this, arguments );
}
}
};
}
jQuery.each([ "keydown", "keyup", "keypress" ], function() {
jQuery.event.special[ this ] = { add: keyHandler };
});
})( jQuery );

third_party/python/coverage/coverage/htmlfiles/jquery.isonscreen.js vendored Normal file

@ -0,0 +1,53 @@
/* Copyright (c) 2010
* @author Laurence Wheway
* Dual licensed under the MIT (http://www.opensource.org/licenses/mit-license.php)
* and GPL (http://www.opensource.org/licenses/gpl-license.php) licenses.
*
* @version 1.2.0
*/
(function($) {
jQuery.extend({
isOnScreen: function(box, container) {
// ensure numbers come in as integers (not strings) and remove 'px' if it's there
for(var i in box){box[i] = parseFloat(box[i])};
for(var i in container){container[i] = parseFloat(container[i])};
if(!container){
container = {
left: $(window).scrollLeft(),
top: $(window).scrollTop(),
width: $(window).width(),
height: $(window).height()
}
}
if( box.left+box.width-container.left > 0 &&
box.left < container.width+container.left &&
box.top+box.height-container.top > 0 &&
box.top < container.height+container.top
) return true;
return false;
}
})
jQuery.fn.isOnScreen = function (container) {
for(var i in container){container[i] = parseFloat(container[i])};
if(!container){
container = {
left: $(window).scrollLeft(),
top: $(window).scrollTop(),
width: $(window).width(),
height: $(window).height()
}
}
if( $(this).offset().left+$(this).width()-container.left > 0 &&
$(this).offset().left < container.width+container.left &&
$(this).offset().top+$(this).height()-container.top > 0 &&
$(this).offset().top < container.height+container.top
) return true;
return false;
}
})(jQuery);

third_party/python/coverage/coverage/htmlfiles/jquery.min.js vendored Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary data
third_party/python/coverage/coverage/htmlfiles/keybd_closed.png vendored Normal file

Binary file not shown. (New file, 112 B)

Binary data
third_party/python/coverage/coverage/htmlfiles/keybd_open.png vendored Normal file

Binary file not shown. (New file, 112 B)

third_party/python/coverage/coverage/htmlfiles/pyfile.html vendored Normal file

@ -0,0 +1,112 @@
{# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 #}
{# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt #}
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
{# IE8 rounds line-height incorrectly, and adding this emulateIE7 line makes it right! #}
{# http://social.msdn.microsoft.com/Forums/en-US/iewebdevelopment/thread/7684445e-f080-4d8f-8529-132763348e21 #}
<meta http-equiv="X-UA-Compatible" content="IE=emulateIE7" />
<title>Coverage for {{relative_filename|escape}}: {{nums.pc_covered_str}}%</title>
<link rel="stylesheet" href="style.css" type="text/css">
{% if extra_css %}
<link rel="stylesheet" href="{{ extra_css }}" type="text/css">
{% endif %}
<script type="text/javascript" src="jquery.min.js"></script>
<script type="text/javascript" src="jquery.hotkeys.js"></script>
<script type="text/javascript" src="jquery.isonscreen.js"></script>
<script type="text/javascript" src="coverage_html.js"></script>
<script type="text/javascript">
jQuery(document).ready(coverage.pyfile_ready);
</script>
</head>
<body class="pyfile">
<div id="header">
<div class="content">
<h1>Coverage for <b>{{relative_filename|escape}}</b> :
<span class="pc_cov">{{nums.pc_covered_str}}%</span>
</h1>
<img id="keyboard_icon" src="keybd_closed.png" alt="Show keyboard shortcuts" />
<h2 class="stats">
{{nums.n_statements}} statements &nbsp;
<span class="{{category.run}} shortkey_r button_toggle_run">{{nums.n_executed}} run</span>
<span class="{{category.mis}} shortkey_m button_toggle_mis">{{nums.n_missing}} missing</span>
<span class="{{category.exc}} shortkey_x button_toggle_exc">{{nums.n_excluded}} excluded</span>
{% if has_arcs %}
<span class="{{category.par}} shortkey_p button_toggle_par">{{nums.n_partial_branches}} partial</span>
{% endif %}
</h2>
</div>
</div>
<div class="help_panel">
<img id="panel_icon" src="keybd_open.png" alt="Hide keyboard shortcuts" />
<p class="legend">Hot-keys on this page</p>
<div>
<p class="keyhelp">
<span class="key">r</span>
<span class="key">m</span>
<span class="key">x</span>
<span class="key">p</span> &nbsp; toggle line displays
</p>
<p class="keyhelp">
<span class="key">j</span>
<span class="key">k</span> &nbsp; next/prev highlighted chunk
</p>
<p class="keyhelp">
<span class="key">0</span> &nbsp; (zero) top of page
</p>
<p class="keyhelp">
<span class="key">1</span> &nbsp; (one) first highlighted chunk
</p>
</div>
</div>
<div id="source">
{% for line in lines -%}
{% joined %}
<p id="t{{line.number}}" class="{{line.css_class}}">
<span class="n"><a href="#t{{line.number}}">{{line.number}}</a></span>
<span class="t">{{line.html}}&nbsp;</span>
{% if line.context_list %}
<input type="checkbox" id="ctxs{{line.number}}" />
{% endif %}
{# Things that should float right in the line. #}
<span class="r">
{% if line.annotate %}
<span class="annotate short">{{line.annotate}}</span>
<span class="annotate long">{{line.annotate_long}}</span>
{% endif %}
{% if line.contexts %}
<label for="ctxs{{line.number}}" class="ctx">{{ line.contexts_label }}</label>
{% endif %}
</span>
{# Things that should appear below the line. #}
{% if line.context_list %}
<span class="ctxs">
{% for context in line.context_list %}
<span>{{context}}</span>
{% endfor %}
</span>
{% endif %}
</p>
{% endjoined %}
{% endfor %}
</div>
<div id="footer">
<div class="content">
<p>
<a class="nav" href="index.html">&#xab; index</a> &nbsp; &nbsp; <a class="nav" href="{{__url__}}">coverage.py v{{__version__}}</a>,
created at {{ time_stamp }}
</p>
</div>
</div>
</body>
</html>
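The template above interpolates per-file statistics such as `nums.n_statements`, `nums.n_executed`, and `nums.pc_covered_str`. The following is a rough sketch of how such numbers relate to each other; it mirrors the idea only, and is not coverage.py's actual `Numbers` class.

```python
# Rough sketch of the per-file stats the template above renders
# (nums.n_statements, nums.n_executed, nums.pc_covered_str, ...).
# This is an illustration, not coverage.py's actual Numbers implementation.

class FileNums:
    def __init__(self, n_statements, n_missing, n_excluded):
        self.n_statements = n_statements
        self.n_missing = n_missing
        self.n_excluded = n_excluded

    @property
    def n_executed(self):
        # statements that actually ran = total minus missing
        return self.n_statements - self.n_missing

    @property
    def pc_covered(self):
        if self.n_statements == 0:
            return 100.0
        return 100.0 * self.n_executed / self.n_statements

    @property
    def pc_covered_str(self):
        # the template interpolates this into the page title and header
        return "%d" % round(self.pc_covered)


nums = FileNums(n_statements=200, n_missing=30, n_excluded=5)
print(nums.n_executed)      # 170
print(nums.pc_covered_str)  # 85
```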

third_party/python/coverage/coverage/htmlfiles/style.css vendored Normal file

@@ -0,0 +1,124 @@
@charset "UTF-8";
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
/* Don't edit this .css file. Edit the .scss file instead! */
html, body, h1, h2, h3, p, table, td, th { margin: 0; padding: 0; border: 0; font-weight: inherit; font-style: inherit; font-size: 100%; font-family: inherit; vertical-align: baseline; }
body { font-family: georgia, serif; font-size: 1em; }
html > body { font-size: 16px; }
p { font-size: .75em; line-height: 1.33333333em; }
table { border-collapse: collapse; }
td { vertical-align: top; }
table tr.hidden { display: none !important; }
p#no_rows { display: none; font-size: 1.2em; }
a.nav { text-decoration: none; color: inherit; }
a.nav:hover { text-decoration: underline; color: inherit; }
#header { background: #f8f8f8; width: 100%; border-bottom: 1px solid #eee; }
.indexfile #footer { margin: 1em 3em; }
.pyfile #footer { margin: 1em 1em; }
#footer .content { padding: 0; font-size: 85%; font-family: verdana, sans-serif; color: #666666; font-style: italic; }
#index { margin: 1em 0 0 3em; }
#header .content { padding: 1em 3rem; }
h1 { font-size: 1.25em; display: inline-block; }
#filter_container { display: inline-block; float: right; margin: 0 2em 0 0; }
#filter_container input { width: 10em; }
h2.stats { margin-top: .5em; font-size: 1em; }
.stats span { border: 1px solid; border-radius: .1em; padding: .1em .5em; margin: 0 .1em; cursor: pointer; border-color: #ccc #999 #999 #ccc; }
.stats span.run { background: #eeffee; }
.stats span.run.show_run { border-color: #999 #ccc #ccc #999; background: #ddffdd; }
.stats span.mis { background: #ffeeee; }
.stats span.mis.show_mis { border-color: #999 #ccc #ccc #999; background: #ffdddd; }
.stats span.exc { background: #f7f7f7; }
.stats span.exc.show_exc { border-color: #999 #ccc #ccc #999; background: #eeeeee; }
.stats span.par { background: #ffffd5; }
.stats span.par.show_par { border-color: #999 #ccc #ccc #999; background: #ffffaa; }
#source p .annotate.long, .help_panel { display: none; position: absolute; z-index: 999; background: #ffffcc; border: 1px solid #888; border-radius: .2em; box-shadow: #cccccc .2em .2em .2em; color: #333; padding: .25em .5em; }
#source p .annotate.long { white-space: normal; float: right; top: 1.75em; right: 1em; height: auto; }
#keyboard_icon { float: right; margin: 5px; cursor: pointer; }
.help_panel { padding: .5em; border: 1px solid #883; }
.help_panel .legend { font-style: italic; margin-bottom: 1em; }
.indexfile .help_panel { width: 20em; height: 4em; }
.pyfile .help_panel { width: 16em; height: 8em; }
#panel_icon { float: right; cursor: pointer; }
.keyhelp { margin: .75em; }
.keyhelp .key { border: 1px solid black; border-color: #888 #333 #333 #888; padding: .1em .35em; font-family: monospace; font-weight: bold; background: #eee; }
#source { padding: 1em 0 1em 3rem; font-family: Consolas, "Liberation Mono", Menlo, Courier, monospace; }
#source p { position: relative; white-space: pre; }
#source p * { box-sizing: border-box; }
#source p .n { float: left; text-align: right; width: 3rem; box-sizing: border-box; margin-left: -3rem; padding-right: 1em; color: #999999; font-family: verdana, sans-serif; }
#source p .n a { text-decoration: none; color: #999999; font-size: .8333em; line-height: 1em; }
#source p .n a:hover { text-decoration: underline; color: #999999; }
#source p.highlight .n { background: #ffdd00; }
#source p .t { display: inline-block; width: 100%; box-sizing: border-box; margin-left: -.5em; padding-left: 0.3em; border-left: 0.2em solid white; }
#source p .t:hover { background: #f2f2f2; }
#source p .t:hover ~ .r .annotate.long { display: block; }
#source p .t .com { color: green; font-style: italic; line-height: 1px; }
#source p .t .key { font-weight: bold; line-height: 1px; }
#source p .t .str { color: #000080; }
#source p.mis .t { border-left: 0.2em solid #ff0000; }
#source p.mis.show_mis .t { background: #ffdddd; }
#source p.mis.show_mis .t:hover { background: #f2d2d2; }
#source p.run .t { border-left: 0.2em solid #00ff00; }
#source p.run.show_run .t { background: #ddffdd; }
#source p.run.show_run .t:hover { background: #d2f2d2; }
#source p.exc .t { border-left: 0.2em solid #808080; }
#source p.exc.show_exc .t { background: #eeeeee; }
#source p.exc.show_exc .t:hover { background: #e2e2e2; }
#source p.par .t { border-left: 0.2em solid #eeee99; }
#source p.par.show_par .t { background: #ffffaa; }
#source p.par.show_par .t:hover { background: #f2f2a2; }
#source p .r { position: absolute; top: 0; right: 2.5em; font-family: verdana, sans-serif; }
#source p .annotate { font-family: georgia; color: #666; padding-right: .5em; }
#source p .annotate.short:hover ~ .long { display: block; }
#source p .annotate.long { width: 30em; right: 2.5em; }
#source p input { display: none; }
#source p input ~ .r label.ctx { cursor: pointer; border-radius: .25em; }
#source p input ~ .r label.ctx::before { content: "▶ "; }
#source p input ~ .r label.ctx:hover { background: #d5f7ff; color: #666; }
#source p input:checked ~ .r label.ctx { background: #aaeeff; color: #666; border-radius: .75em .75em 0 0; padding: 0 .5em; margin: -.25em 0; }
#source p input:checked ~ .r label.ctx::before { content: "▼ "; }
#source p input:checked ~ .ctxs { padding: .25em .5em; overflow-y: scroll; max-height: 10.5em; }
#source p label.ctx { color: #999; display: inline-block; padding: 0 .5em; font-size: .8333em; }
#source p .ctxs { display: block; max-height: 0; overflow-y: hidden; transition: all .2s; padding: 0 .5em; font-family: verdana, sans-serif; white-space: nowrap; background: #aaeeff; border-radius: .25em; margin-right: 1.75em; }
#source p .ctxs span { display: block; text-align: right; }
#index td, #index th { text-align: right; width: 5em; padding: .25em .5em; border-bottom: 1px solid #eee; }
#index td.left, #index th.left { padding-left: 0; }
#index td.right, #index th.right { padding-right: 0; }
#index td.name, #index th.name { text-align: left; width: auto; }
#index th { font-style: italic; color: #333; border-bottom: 1px solid #ccc; cursor: pointer; }
#index th:hover { background: #eee; border-bottom: 1px solid #999; }
#index th.headerSortDown, #index th.headerSortUp { border-bottom: 1px solid #000; white-space: nowrap; background: #eee; }
#index th.headerSortDown:after { content: " ↓"; }
#index th.headerSortUp:after { content: " ↑"; }
#index td.name a { text-decoration: none; color: #000; }
#index tr.total td, #index tr.total_dynamic td { font-weight: bold; border-top: 1px solid #ccc; border-bottom: none; }
#index tr.file:hover { background: #eeeeee; }
#index tr.file:hover td.name { text-decoration: underline; color: #000; }
#scroll_marker { position: fixed; right: 0; top: 0; width: 16px; height: 100%; background: white; border-left: 1px solid #eee; will-change: transform; }
#scroll_marker .marker { background: #ddd; position: absolute; min-height: 3px; width: 100%; }

third_party/python/coverage/coverage/htmlfiles/style.scss vendored Normal file

@@ -0,0 +1,537 @@
/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
// CSS styles for coverage.py HTML reports.
// When you edit this file, you need to run "make css" to get the CSS file
// generated, and then check in both the .scss and the .css files.
// When working on the file, this command is useful:
// sass --watch --style=compact --sourcemap=none --no-cache coverage/htmlfiles/style.scss:htmlcov/style.css
// Ignore this comment, it's for the CSS output file:
/* Don't edit this .css file. Edit the .scss file instead! */
// Dimensions
$left-gutter: 3rem;
// Page-wide styles
html, body, h1, h2, h3, p, table, td, th {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
// Set baseline grid to 16 pt.
body {
font-family: georgia, serif;
font-size: 1em;
}
html>body {
font-size: 16px;
}
// Set base font size to 12/16
p {
font-size: .75em; // 12/16
line-height: 1.33333333em; // 16/12
}
table {
border-collapse: collapse;
}
td {
vertical-align: top;
}
table tr.hidden {
display: none !important;
}
p#no_rows {
display: none;
font-size: 1.2em;
}
a.nav {
text-decoration: none;
color: inherit;
&:hover {
text-decoration: underline;
color: inherit;
}
}
// Page structure
#header {
background: #f8f8f8;
width: 100%;
border-bottom: 1px solid #eee;
}
.indexfile #footer {
margin: 1em 3em;
}
.pyfile #footer {
margin: 1em 1em;
}
#footer .content {
padding: 0;
font-size: 85%;
font-family: verdana, sans-serif;
color: #666666;
font-style: italic;
}
#index {
margin: 1em 0 0 3em;
}
// Header styles
#header .content {
padding: 1em $left-gutter;
}
h1 {
font-size: 1.25em;
display: inline-block;
}
#filter_container {
display: inline-block;
float: right;
margin: 0 2em 0 0;
input {
width: 10em;
}
}
$pln-color: #ffffff;
$mis-color: #ffdddd;
$run-color: #ddffdd;
$exc-color: #eeeeee;
$par-color: #ffffaa;
$off-button-lighten: 50%;
h2.stats {
margin-top: .5em;
font-size: 1em;
}
.stats span {
border: 1px solid;
border-radius: .1em;
padding: .1em .5em;
margin: 0 .1em;
cursor: pointer;
border-color: #ccc #999 #999 #ccc;
&.run {
background: mix($run-color, #fff, $off-button-lighten);
&.show_run {
border-color: #999 #ccc #ccc #999;
background: $run-color;
}
}
&.mis {
background: mix($mis-color, #fff, $off-button-lighten);
&.show_mis {
border-color: #999 #ccc #ccc #999;
background: $mis-color;
}
}
&.exc {
background: mix($exc-color, #fff, $off-button-lighten);
&.show_exc {
border-color: #999 #ccc #ccc #999;
background: $exc-color;
}
}
&.par {
background: mix($par-color, #fff, $off-button-lighten);
&.show_par {
border-color: #999 #ccc #ccc #999;
background: $par-color;
}
}
}
// Yellow post-it things.
%popup {
display: none;
position: absolute;
z-index: 999;
background: #ffffcc;
border: 1px solid #888;
border-radius: .2em;
box-shadow: #cccccc .2em .2em .2em;
color: #333;
padding: .25em .5em;
}
// Yellow post-it's in the text listings.
%in-text-popup {
@extend %popup;
white-space: normal;
float: right;
top: 1.75em;
right: 1em;
height: auto;
}
// Help panel
#keyboard_icon {
float: right;
margin: 5px;
cursor: pointer;
}
.help_panel {
@extend %popup;
padding: .5em;
border: 1px solid #883;
.legend {
font-style: italic;
margin-bottom: 1em;
}
.indexfile & {
width: 20em;
height: 4em;
}
.pyfile & {
width: 16em;
height: 8em;
}
}
#panel_icon {
float: right;
cursor: pointer;
}
.keyhelp {
margin: .75em;
.key {
border: 1px solid black;
border-color: #888 #333 #333 #888;
padding: .1em .35em;
font-family: monospace;
font-weight: bold;
background: #eee;
}
}
// Source file styles
$hover-dark-amt: 95%;
$pln-hover-color: mix($pln-color, #000, $hover-dark-amt);
$mis-hover-color: mix($mis-color, #000, $hover-dark-amt);
$run-hover-color: mix($run-color, #000, $hover-dark-amt);
$exc-hover-color: mix($exc-color, #000, $hover-dark-amt);
$par-hover-color: mix($par-color, #000, $hover-dark-amt);
// The slim bar at the left edge of the source lines, colored by coverage.
$border-indicator-width: .2em;
$context-panel-color: #aaeeff;
#source {
padding: 1em 0 1em $left-gutter;
font-family: Consolas, "Liberation Mono", Menlo, Courier, monospace;
p {
// position relative makes position:absolute pop-ups appear in the right place.
position: relative;
white-space: pre;
* {
box-sizing: border-box;
}
.n {
float: left;
text-align: right;
width: $left-gutter;
box-sizing: border-box;
margin-left: -$left-gutter;
padding-right: 1em;
color: #999999;
font-family: verdana, sans-serif;
a {
text-decoration: none;
color: #999999;
font-size: .8333em; // 10/12
line-height: 1em;
&:hover {
text-decoration: underline;
color: #999999;
}
}
}
&.highlight .n {
background: #ffdd00;
}
.t {
display: inline-block;
width: 100%;
box-sizing: border-box;
margin-left: -.5em;
padding-left: .5em - $border-indicator-width;
border-left: $border-indicator-width solid white;
&:hover {
background: $pln-hover-color;
& ~ .r .annotate.long {
display: block;
}
}
// Syntax coloring
.com {
color: green;
font-style: italic;
line-height: 1px;
}
.key {
font-weight: bold;
line-height: 1px;
}
.str {
color: #000080;
}
}
&.mis {
.t {
border-left: $border-indicator-width solid #ff0000;
}
&.show_mis .t {
background: $mis-color;
&:hover {
background: $mis-hover-color;
}
}
}
&.run {
.t {
border-left: $border-indicator-width solid #00ff00;
}
&.show_run .t {
background: $run-color;
&:hover {
background: $run-hover-color;
}
}
}
&.exc {
.t {
border-left: $border-indicator-width solid #808080;
}
&.show_exc .t {
background: $exc-color;
&:hover {
background: $exc-hover-color;
}
}
}
&.par {
.t {
border-left: $border-indicator-width solid #eeee99;
}
&.show_par .t {
background: $par-color;
&:hover {
background: $par-hover-color;
}
}
}
.r {
position: absolute;
top: 0;
right: 2.5em;
font-family: verdana, sans-serif;
}
.annotate {
font-family: georgia;
color: #666;
padding-right: .5em;
&.short:hover ~ .long {
display: block;
}
&.long {
@extend %in-text-popup;
width: 30em;
right: 2.5em;
}
}
input {
display: none;
& ~ .r label.ctx {
cursor: pointer;
border-radius: .25em;
&::before {
content: "▶ ";
}
&:hover {
background: mix($context-panel-color, #fff, 50%);
color: #666;
}
}
&:checked ~ .r label.ctx {
background: $context-panel-color;
color: #666;
border-radius: .75em .75em 0 0;
padding: 0 .5em;
margin: -.25em 0;
&::before {
content: "▼ ";
}
}
&:checked ~ .ctxs {
padding: .25em .5em;
overflow-y: scroll;
max-height: 10.5em;
}
}
label.ctx {
color: #999;
display: inline-block;
padding: 0 .5em;
font-size: .8333em; // 10/12
}
.ctxs {
display: block;
max-height: 0;
overflow-y: hidden;
transition: all .2s;
padding: 0 .5em;
font-family: verdana, sans-serif;
white-space: nowrap;
background: $context-panel-color;
border-radius: .25em;
margin-right: 1.75em;
span {
display: block;
text-align: right;
}
}
}
}
// index styles
#index {
td, th {
text-align: right;
width: 5em;
padding: .25em .5em;
border-bottom: 1px solid #eee;
&.left {
padding-left: 0;
}
&.right {
padding-right: 0;
}
&.name {
text-align: left;
width: auto;
}
}
th {
font-style: italic;
color: #333;
border-bottom: 1px solid #ccc;
cursor: pointer;
&:hover {
background: #eee;
border-bottom: 1px solid #999;
}
&.headerSortDown, &.headerSortUp {
border-bottom: 1px solid #000;
white-space: nowrap;
background: #eee;
}
&.headerSortDown:after {
content: " ↓";
}
&.headerSortUp:after {
content: " ↑";
}
}
td.name a {
text-decoration: none;
color: #000;
}
tr.total td,
tr.total_dynamic td {
font-weight: bold;
border-top: 1px solid #ccc;
border-bottom: none;
}
tr.file:hover {
background: #eeeeee;
td.name {
text-decoration: underline;
color: #000;
}
}
}
// scroll marker styles
#scroll_marker {
position: fixed;
right: 0;
top: 0;
width: 16px;
height: 100%;
background: white;
border-left: 1px solid #eee;
will-change: transform; // for faster scrolling of fixed element in Chrome
.marker {
background: #ddd;
position: absolute;
min-height: 3px;
width: 100%;
}
}
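The SCSS above derives the un-toggled button backgrounds with `mix($color, #fff, $off-button-lighten)`, where Sass's `mix()` blends two colors channel by channel. A quick Python sketch of the same math, checked against the compiled values in style.css:

```python
# Sass mix($c1, $c2, $weight) averages RGB channels, with $weight being
# the share of the first color. A 50% mix with white halves the distance
# to #ffffff, which is how the "off" button colors above are produced.

def mix(c1, c2, weight=0.5):
    """Blend two '#rrggbb' colors; weight is the share of c1 (like Sass mix())."""
    a = [int(c1[i:i + 2], 16) for i in (1, 3, 5)]
    b = [int(c2[i:i + 2], 16) for i in (1, 3, 5)]
    return "#" + "".join("%02x" % round(x * weight + y * (1 - weight))
                         for x, y in zip(a, b))

# $run-color (#ddffdd) mixed 50% with white gives the un-toggled background
print(mix("#ddffdd", "#ffffff"))  # #eeffee  (matches .stats span.run in style.css)
```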

third_party/python/coverage/coverage/inorout.py vendored Normal file

@@ -0,0 +1,469 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Determining whether files are being measured/reported or not."""
# For finding the stdlib
import atexit
import inspect
import itertools
import os
import platform
import re
import sys
import traceback
from coverage import env
from coverage.backward import code_object
from coverage.disposition import FileDisposition, disposition_init
from coverage.files import TreeMatcher, FnmatchMatcher, ModuleMatcher
from coverage.files import prep_patterns, find_python_files, canonical_filename
from coverage.misc import CoverageException
from coverage.python import source_for_file, source_for_morf
# Pypy has some unusual stuff in the "stdlib". Consider those locations
# when deciding where the stdlib is. These modules are not used for anything,
# they are modules importable from the pypy lib directories, so that we can
# find those directories.
_structseq = _pypy_irc_topic = None
if env.PYPY:
try:
import _structseq
except ImportError:
pass
try:
import _pypy_irc_topic
except ImportError:
pass
def canonical_path(morf, directory=False):
"""Return the canonical path of the module or file `morf`.
If the module is a package, then return its directory. If it is a
module, then return its file, unless `directory` is True, in which
case return its enclosing directory.
"""
morf_path = canonical_filename(source_for_morf(morf))
if morf_path.endswith("__init__.py") or directory:
morf_path = os.path.split(morf_path)[0]
return morf_path
def name_for_module(filename, frame):
"""Get the name of the module for a filename and frame.
For configurability's sake, we allow __main__ modules to be matched by
their importable name.
If loaded via runpy (aka -m), we can usually recover the "original"
full dotted module name, otherwise, we resort to interpreting the
file name to get the module's name. In the case that the module name
can't be determined, None is returned.
"""
module_globals = frame.f_globals if frame is not None else {}
if module_globals is None: # pragma: only ironpython
# IronPython doesn't provide globals: https://github.com/IronLanguages/main/issues/1296
module_globals = {}
dunder_name = module_globals.get('__name__', None)
if isinstance(dunder_name, str) and dunder_name != '__main__':
# This is the usual case: an imported module.
return dunder_name
loader = module_globals.get('__loader__', None)
for attrname in ('fullname', 'name'): # attribute renamed in py3.2
if hasattr(loader, attrname):
fullname = getattr(loader, attrname)
else:
continue
if isinstance(fullname, str) and fullname != '__main__':
# Module loaded via: runpy -m
return fullname
# Script as first argument to Python command line.
inspectedname = inspect.getmodulename(filename)
if inspectedname is not None:
return inspectedname
else:
return dunder_name
def module_is_namespace(mod):
"""Is the module object `mod` a PEP420 namespace module?"""
return hasattr(mod, '__path__') and getattr(mod, '__file__', None) is None
def module_has_file(mod):
"""Does the module object `mod` have an existing __file__ ?"""
mod__file__ = getattr(mod, '__file__', None)
if mod__file__ is None:
return False
return os.path.exists(mod__file__)
class InOrOut(object):
"""Machinery for determining what files to measure."""
def __init__(self, warn):
self.warn = warn
# The matchers for should_trace.
self.source_match = None
self.source_pkgs_match = None
self.pylib_paths = self.cover_paths = None
self.pylib_match = self.cover_match = None
self.include_match = self.omit_match = None
self.plugins = []
self.disp_class = FileDisposition
# The source argument can be directories or package names.
self.source = []
self.source_pkgs = []
self.source_pkgs_unmatched = []
self.omit = self.include = None
def configure(self, config):
"""Apply the configuration to get ready for decision-time."""
for src in config.source or []:
if os.path.isdir(src):
self.source.append(canonical_filename(src))
else:
self.source_pkgs.append(src)
self.source_pkgs_unmatched = self.source_pkgs[:]
self.omit = prep_patterns(config.run_omit)
self.include = prep_patterns(config.run_include)
# The directories for files considered "installed with the interpreter".
self.pylib_paths = set()
if not config.cover_pylib:
# Look at where some standard modules are located. That's the
# indication for "installed with the interpreter". In some
# environments (virtualenv, for example), these modules may be
# spread across a few locations. Look at all the candidate modules
# we've imported, and take all the different ones.
for m in (atexit, inspect, os, platform, _pypy_irc_topic, re, _structseq, traceback):
if m is not None and hasattr(m, "__file__"):
self.pylib_paths.add(canonical_path(m, directory=True))
if _structseq and not hasattr(_structseq, '__file__'):
# PyPy 2.4 has no __file__ in the builtin modules, but the code
# objects still have the file names. So dig into one to find
# the path to exclude. The "filename" might be synthetic,
# don't be fooled by those.
structseq_file = code_object(_structseq.structseq_new).co_filename
if not structseq_file.startswith("<"):
self.pylib_paths.add(canonical_path(structseq_file))
# To avoid tracing the coverage.py code itself, we skip anything
# located where we are.
self.cover_paths = [canonical_path(__file__, directory=True)]
if env.TESTING:
# Don't include our own test code.
self.cover_paths.append(os.path.join(self.cover_paths[0], "tests"))
# When testing, we use PyContracts, which should be considered
# part of coverage.py, and it uses six. Exclude those directories
# just as we exclude ourselves.
import contracts
import six
for mod in [contracts, six]:
self.cover_paths.append(canonical_path(mod))
# Create the matchers we need for should_trace
if self.source or self.source_pkgs:
self.source_match = TreeMatcher(self.source)
self.source_pkgs_match = ModuleMatcher(self.source_pkgs)
else:
if self.cover_paths:
self.cover_match = TreeMatcher(self.cover_paths)
if self.pylib_paths:
self.pylib_match = TreeMatcher(self.pylib_paths)
if self.include:
self.include_match = FnmatchMatcher(self.include)
if self.omit:
self.omit_match = FnmatchMatcher(self.omit)
def should_trace(self, filename, frame=None):
"""Decide whether to trace execution in `filename`, with a reason.
This function is called from the trace function. As each new file name
is encountered, this function determines whether it is traced or not.
Returns a FileDisposition object.
"""
original_filename = filename
disp = disposition_init(self.disp_class, filename)
def nope(disp, reason):
"""Simple helper to make it easy to return NO."""
disp.trace = False
disp.reason = reason
return disp
if frame is not None:
# Compiled Python files have two file names: frame.f_code.co_filename is
# the file name at the time the .pyc was compiled. The second name is
# __file__, which is where the .pyc was actually loaded from. Since
# .pyc files can be moved after compilation (for example, by being
# installed), we look for __file__ in the frame and prefer it to the
# co_filename value.
dunder_file = frame.f_globals and frame.f_globals.get('__file__')
if dunder_file:
filename = source_for_file(dunder_file)
if original_filename and not original_filename.startswith('<'):
orig = os.path.basename(original_filename)
if orig != os.path.basename(filename):
# Files shouldn't be renamed when moved. This happens when
# exec'ing code. If it seems like something is wrong with
# the frame's file name, then just use the original.
filename = original_filename
if not filename:
# Empty string is pretty useless.
return nope(disp, "empty string isn't a file name")
if filename.startswith('memory:'):
return nope(disp, "memory isn't traceable")
if filename.startswith('<'):
# Lots of non-file execution is represented with artificial
# file names like "<string>", "<doctest readme.txt[0]>", or
# "<exec_function>". Don't ever trace these executions, since we
# can't do anything with the data later anyway.
return nope(disp, "not a real file name")
# pyexpat does a dumb thing, calling the trace function explicitly from
# C code with a C file name.
if re.search(r"[/\\]Modules[/\\]pyexpat.c", filename):
return nope(disp, "pyexpat lies about itself")
# Jython reports the .class file to the tracer, use the source file.
if filename.endswith("$py.class"):
filename = filename[:-9] + ".py"
canonical = canonical_filename(filename)
disp.canonical_filename = canonical
# Try the plugins, see if they have an opinion about the file.
plugin = None
for plugin in self.plugins.file_tracers:
if not plugin._coverage_enabled:
continue
try:
file_tracer = plugin.file_tracer(canonical)
if file_tracer is not None:
file_tracer._coverage_plugin = plugin
disp.trace = True
disp.file_tracer = file_tracer
if file_tracer.has_dynamic_source_filename():
disp.has_dynamic_filename = True
else:
disp.source_filename = canonical_filename(
file_tracer.source_filename()
)
break
except Exception:
self.warn(
"Disabling plug-in %r due to an exception:" % (plugin._coverage_plugin_name)
)
traceback.print_exc()
plugin._coverage_enabled = False
continue
else:
# No plugin wanted it: it's Python.
disp.trace = True
disp.source_filename = canonical
if not disp.has_dynamic_filename:
if not disp.source_filename:
raise CoverageException(
"Plugin %r didn't set source_filename for %r" %
(plugin, disp.original_filename)
)
reason = self.check_include_omit_etc(disp.source_filename, frame)
if reason:
nope(disp, reason)
return disp
def check_include_omit_etc(self, filename, frame):
"""Check a file name against the include, omit, etc, rules.
Returns a string or None. String means, don't trace, and is the reason
why. None means no reason found to not trace.
"""
modulename = name_for_module(filename, frame)
# If the user specified source or include, then that's authoritative
# about the outer bound of what to measure and we don't have to apply
# any canned exclusions. If they didn't, then we have to exclude the
# stdlib and coverage.py directories.
if self.source_match:
if self.source_pkgs_match.match(modulename):
if modulename in self.source_pkgs_unmatched:
self.source_pkgs_unmatched.remove(modulename)
elif not self.source_match.match(filename):
return "falls outside the --source trees"
elif self.include_match:
if not self.include_match.match(filename):
return "falls outside the --include trees"
else:
# If we aren't supposed to trace installed code, then check if this
# is near the Python standard library and skip it if so.
if self.pylib_match and self.pylib_match.match(filename):
return "is in the stdlib"
# We exclude the coverage.py code itself, since a little of it
# will be measured otherwise.
if self.cover_match and self.cover_match.match(filename):
return "is part of coverage.py"
# Check the file against the omit pattern.
if self.omit_match and self.omit_match.match(filename):
return "is inside an --omit pattern"
# No point tracing a file we can't later write to SQLite.
try:
filename.encode("utf8")
except UnicodeEncodeError:
return "non-encodable filename"
# No reason found to skip this file.
return None
def warn_conflicting_settings(self):
"""Warn if there are settings that conflict."""
if self.include:
if self.source or self.source_pkgs:
self.warn("--include is ignored because --source is set", slug="include-ignored")
def warn_already_imported_files(self):
"""Warn if files have already been imported that we will be measuring."""
if self.include or self.source or self.source_pkgs:
warned = set()
for mod in list(sys.modules.values()):
filename = getattr(mod, "__file__", None)
if filename is None:
continue
if filename in warned:
continue
disp = self.should_trace(filename)
if disp.trace:
msg = "Already imported a file that will be measured: {}".format(filename)
self.warn(msg, slug="already-imported")
warned.add(filename)
def warn_unimported_source(self):
"""Warn about source packages that were of interest, but never traced."""
for pkg in self.source_pkgs_unmatched:
self._warn_about_unmeasured_code(pkg)
def _warn_about_unmeasured_code(self, pkg):
"""Warn about a package or module that we never traced.
`pkg` is a string, the name of the package or module.
"""
mod = sys.modules.get(pkg)
if mod is None:
self.warn("Module %s was never imported." % pkg, slug="module-not-imported")
return
if module_is_namespace(mod):
# A namespace package. It's OK for this not to have been traced,
# since there is no code directly in it.
return
if not module_has_file(mod):
self.warn("Module %s has no Python source." % pkg, slug="module-not-python")
return
# The module was in sys.modules, and seems like a module with code, but
# we never measured it. I guess that means it was imported before
# coverage even started.
self.warn(
"Module %s was previously imported, but not measured" % pkg,
slug="module-not-measured",
)
def find_possibly_unexecuted_files(self):
"""Find files in the areas of interest that might be untraced.
Yields pairs: file path, and responsible plug-in name.
"""
for pkg in self.source_pkgs:
if (not pkg in sys.modules or
not module_has_file(sys.modules[pkg])):
continue
pkg_file = source_for_file(sys.modules[pkg].__file__)
for ret in self._find_executable_files(canonical_path(pkg_file)):
yield ret
for src in self.source:
for ret in self._find_executable_files(src):
yield ret
def _find_plugin_files(self, src_dir):
"""Get executable files from the plugins."""
for plugin in self.plugins.file_tracers:
for x_file in plugin.find_executable_files(src_dir):
yield x_file, plugin._coverage_plugin_name
def _find_executable_files(self, src_dir):
"""Find executable files in `src_dir`.
Search for files in `src_dir` that can be executed because they
are probably importable. Don't include ones that have been omitted
by the configuration.
Yield the file path, and the plugin name that handles the file.
"""
py_files = ((py_file, None) for py_file in find_python_files(src_dir))
plugin_files = self._find_plugin_files(src_dir)
for file_path, plugin_name in itertools.chain(py_files, plugin_files):
file_path = canonical_filename(file_path)
if self.omit_match and self.omit_match.match(file_path):
# Turns out this file was omitted, so don't pull it back
# in as unexecuted.
continue
yield file_path, plugin_name
def sys_info(self):
"""Our information for Coverage.sys_info.
Returns a list of (key, value) pairs.
"""
info = [
('cover_paths', self.cover_paths),
('pylib_paths', self.pylib_paths),
]
matcher_names = [
'source_match', 'source_pkgs_match',
'include_match', 'omit_match',
'cover_match', 'pylib_match',
]
for matcher_name in matcher_names:
matcher = getattr(self, matcher_name)
if matcher:
matcher_info = matcher.info()
else:
matcher_info = '-none-'
info.append((matcher_name, matcher_info))
return info
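The ordering in `check_include_omit_etc` above (source first, then include, then the stdlib/coverage.py exclusions, then omit) can be sketched with the stdlib's `fnmatch` standing in for coverage's matcher classes. The function name, paths, and patterns below are made up for illustration:

```python
import fnmatch

def why_skipped(filename, source_dirs=(), include=(), omit=()):
    """Mimic the decision order of InOrOut.check_include_omit_etc:
    return a reason string for skipping the file, or None to trace it."""
    if source_dirs:
        # source is authoritative about the outer bound of measurement
        if not any(filename.startswith(d) for d in source_dirs):
            return "falls outside the --source trees"
    elif include:
        if not any(fnmatch.fnmatch(filename, pat) for pat in include):
            return "falls outside the --include trees"
    # (the real code also excludes the stdlib and coverage.py itself here)
    if any(fnmatch.fnmatch(filename, pat) for pat in omit):
        return "is inside an --omit pattern"
    return None

print(why_skipped("proj/app.py", source_dirs=("proj/",)))   # None (traced)
print(why_skipped("other/x.py", source_dirs=("proj/",)))    # falls outside the --source trees
print(why_skipped("proj/vendor/x.py", source_dirs=("proj/",),
                  omit=("proj/vendor/*",)))                 # is inside an --omit pattern
```

Note that, as in the real code, an omit pattern can still exclude a file that matched the source trees, while include is consulted only when no source is configured.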

third_party/python/coverage/coverage/jsonreport.py vendored Normal file

@@ -0,0 +1,103 @@
# coding: utf-8
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Json reporting for coverage.py"""
import datetime
import json
import sys
from coverage import __version__
from coverage.report import get_analysis_to_report
from coverage.results import Numbers
class JsonReporter(object):
"""A reporter for writing JSON coverage results."""
def __init__(self, coverage):
self.coverage = coverage
self.config = self.coverage.config
self.total = Numbers()
self.report_data = {}
def report(self, morfs, outfile=None):
"""Generate a json report for `morfs`.
`morfs` is a list of modules or file names.
`outfile` is a file object to write the json to
"""
outfile = outfile or sys.stdout
coverage_data = self.coverage.get_data()
coverage_data.set_query_contexts(self.config.report_contexts)
self.report_data["meta"] = {
"version": __version__,
"timestamp": datetime.datetime.now().isoformat(),
"branch_coverage": coverage_data.has_arcs(),
"show_contexts": self.config.json_show_contexts,
}
measured_files = {}
for file_reporter, analysis in get_analysis_to_report(self.coverage, morfs):
measured_files[file_reporter.relative_filename()] = self.report_one_file(
coverage_data,
analysis
)
self.report_data["files"] = measured_files
self.report_data["totals"] = {
'covered_lines': self.total.n_executed,
'num_statements': self.total.n_statements,
'percent_covered': self.total.pc_covered,
'missing_lines': self.total.n_missing,
'excluded_lines': self.total.n_excluded,
}
if coverage_data.has_arcs():
self.report_data["totals"].update({
'num_branches': self.total.n_branches,
'num_partial_branches': self.total.n_partial_branches,
'covered_branches': self.total.n_executed_branches,
'missing_branches': self.total.n_missing_branches,
})
json.dump(
self.report_data,
outfile,
indent=4 if self.config.json_pretty_print else None
)
return self.total.n_statements and self.total.pc_covered
def report_one_file(self, coverage_data, analysis):
"""Extract the relevant report data for a single file"""
nums = analysis.numbers
self.total += nums
summary = {
'covered_lines': nums.n_executed,
'num_statements': nums.n_statements,
'percent_covered': nums.pc_covered,
'missing_lines': nums.n_missing,
'excluded_lines': nums.n_excluded,
}
reported_file = {
'executed_lines': sorted(analysis.executed),
'summary': summary,
'missing_lines': sorted(analysis.missing),
'excluded_lines': sorted(analysis.excluded)
}
if self.config.json_show_contexts:
reported_file['contexts'] = analysis.data.contexts_by_lineno(
analysis.filename,
)
if coverage_data.has_arcs():
reported_file['summary'].update({
'num_branches': nums.n_branches,
'num_partial_branches': nums.n_partial_branches,
'covered_branches': nums.n_executed_branches,
'missing_branches': nums.n_missing_branches,
})
return reported_file
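For reference, this is roughly the JSON shape `JsonReporter` emits. The values below are made up for illustration; only the key names are taken from the reporter code above:

```python
import json

# Hypothetical report contents; key layout mirrors JsonReporter.report()
# and report_one_file() above (line-coverage only, no branch arcs).
report = {
    "meta": {"version": "5.1", "branch_coverage": False, "show_contexts": False},
    "files": {
        "mod.py": {
            "executed_lines": [1, 2, 3],
            "summary": {"covered_lines": 8, "num_statements": 10,
                        "percent_covered": 80.0, "missing_lines": 2,
                        "excluded_lines": 0},
            "missing_lines": [9, 10],
            "excluded_lines": [],
        }
    },
    "totals": {"covered_lines": 8, "num_statements": 10,
               "percent_covered": 80.0, "missing_lines": 2,
               "excluded_lines": 0},
}
# json_pretty_print=True corresponds to indent=4 in the reporter.
text = json.dumps(report, indent=4)
```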

third_party/python/coverage/coverage/misc.py (vendored, new file, 361 lines)
@@ -0,0 +1,361 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Miscellaneous stuff for coverage.py."""
import errno
import hashlib
import inspect
import locale
import os
import os.path
import random
import re
import socket
import sys
import types
from coverage import env
from coverage.backward import to_bytes, unicode_class
ISOLATED_MODULES = {}
def isolate_module(mod):
"""Copy a module so that we are isolated from aggressive mocking.
If a test suite mocks os.path.exists (for example), and then we need to use
it during the test, everything will get tangled up if we use their mock.
Making a copy of the module when we import it will isolate coverage.py from
those complications.
"""
if mod not in ISOLATED_MODULES:
new_mod = types.ModuleType(mod.__name__)
ISOLATED_MODULES[mod] = new_mod
for name in dir(mod):
value = getattr(mod, name)
if isinstance(value, types.ModuleType):
value = isolate_module(value)
setattr(new_mod, name, value)
return ISOLATED_MODULES[mod]
os = isolate_module(os)
def dummy_decorator_with_args(*args_unused, **kwargs_unused):
"""Dummy no-op implementation of a decorator with arguments."""
def _decorator(func):
return func
return _decorator
# Environment COVERAGE_NO_CONTRACTS=1 can turn off contracts while debugging
# tests to remove noise from stack traces.
# $set_env.py: COVERAGE_NO_CONTRACTS - Disable PyContracts to simplify stack traces.
USE_CONTRACTS = env.TESTING and not bool(int(os.environ.get("COVERAGE_NO_CONTRACTS", 0)))
# Use PyContracts for assertion testing on parameters and returns, but only if
# we are running our own test suite.
if USE_CONTRACTS:
from contracts import contract # pylint: disable=unused-import
from contracts import new_contract as raw_new_contract
def new_contract(*args, **kwargs):
"""A proxy for contracts.new_contract that doesn't mind happening twice."""
try:
return raw_new_contract(*args, **kwargs)
except ValueError:
# During meta-coverage, this module is imported twice, and
# PyContracts doesn't like redefining contracts. It's OK.
pass
# Define contract words that PyContract doesn't have.
new_contract('bytes', lambda v: isinstance(v, bytes))
if env.PY3:
new_contract('unicode', lambda v: isinstance(v, unicode_class))
def one_of(argnames):
"""Ensure that only one of the argnames is non-None."""
def _decorator(func):
argnameset = set(name.strip() for name in argnames.split(","))
def _wrapper(*args, **kwargs):
vals = [kwargs.get(name) for name in argnameset]
assert sum(val is not None for val in vals) == 1
return func(*args, **kwargs)
return _wrapper
return _decorator
else: # pragma: not testing
# We aren't using real PyContracts, so just define our decorators as
# stunt-double no-ops.
contract = dummy_decorator_with_args
one_of = dummy_decorator_with_args
def new_contract(*args_unused, **kwargs_unused):
"""Dummy no-op implementation of `new_contract`."""
pass
def nice_pair(pair):
"""Make a nice string representation of a pair of numbers.
If the numbers are equal, just return the number, otherwise return the pair
with a dash between them, indicating the range.
"""
start, end = pair
if start == end:
return "%d" % start
else:
return "%d-%d" % (start, end)
def expensive(fn):
"""A decorator to indicate that a method shouldn't be called more than once.
Normally, this does nothing. During testing, this raises an exception if
called more than once.
"""
if env.TESTING:
attr = "_once_" + fn.__name__
def _wrapper(self):
if hasattr(self, attr):
raise AssertionError("Shouldn't have called %s more than once" % fn.__name__)
setattr(self, attr, True)
return fn(self)
return _wrapper
else:
return fn # pragma: not testing
def bool_or_none(b):
"""Return bool(b), but preserve None."""
if b is None:
return None
else:
return bool(b)
def join_regex(regexes):
"""Combine a list of regexes into one that matches any of them."""
return "|".join("(?:%s)" % r for r in regexes)
def file_be_gone(path):
"""Remove a file, and don't get annoyed if it doesn't exist."""
try:
os.remove(path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
def ensure_dir(directory):
"""Make sure the directory exists.
If `directory` is None or empty, do nothing.
"""
if directory and not os.path.isdir(directory):
os.makedirs(directory)
def ensure_dir_for_file(path):
"""Make sure the directory for the path exists."""
ensure_dir(os.path.dirname(path))
def output_encoding(outfile=None):
"""Determine the encoding to use for output written to `outfile` or stdout."""
if outfile is None:
outfile = sys.stdout
encoding = (
getattr(outfile, "encoding", None) or
getattr(sys.__stdout__, "encoding", None) or
locale.getpreferredencoding()
)
return encoding
def filename_suffix(suffix):
"""Compute a filename suffix for a data file.
If `suffix` is a string or None, simply return it. If `suffix` is True,
then build a suffix incorporating the hostname, process id, and a random
number.
Returns a string or None.
"""
if suffix is True:
# If data_suffix was a simple true value, then make a suffix with
# plenty of distinguishing information. We do this here in
# `save()` at the last minute so that the pid will be correct even
# if the process forks.
dice = random.Random(os.urandom(8)).randint(0, 999999)
suffix = "%s.%s.%06d" % (socket.gethostname(), os.getpid(), dice)
return suffix
class Hasher(object):
"""Hashes Python data into md5."""
def __init__(self):
self.md5 = hashlib.md5()
def update(self, v):
"""Add `v` to the hash, recursively if needed."""
self.md5.update(to_bytes(str(type(v))))
if isinstance(v, unicode_class):
self.md5.update(v.encode('utf8'))
elif isinstance(v, bytes):
self.md5.update(v)
elif v is None:
pass
elif isinstance(v, (int, float)):
self.md5.update(to_bytes(str(v)))
elif isinstance(v, (tuple, list)):
for e in v:
self.update(e)
elif isinstance(v, dict):
keys = v.keys()
for k in sorted(keys):
self.update(k)
self.update(v[k])
else:
for k in dir(v):
if k.startswith('__'):
continue
a = getattr(v, k)
if inspect.isroutine(a):
continue
self.update(k)
self.update(a)
self.md5.update(b'.')
def hexdigest(self):
"""Retrieve the hex digest of the hash."""
return self.md5.hexdigest()
def _needs_to_implement(that, func_name):
"""Helper to raise NotImplementedError in interface stubs."""
if hasattr(that, "_coverage_plugin_name"):
thing = "Plugin"
name = that._coverage_plugin_name
else:
thing = "Class"
klass = that.__class__
name = "{klass.__module__}.{klass.__name__}".format(klass=klass)
raise NotImplementedError(
"{thing} {name!r} needs to implement {func_name}()".format(
thing=thing, name=name, func_name=func_name
)
)
class DefaultValue(object):
"""A sentinel object to use for unusual default-value needs.
Construct with a string that will be used as the repr, for display in help
and Sphinx output.
"""
def __init__(self, display_as):
self.display_as = display_as
def __repr__(self):
return self.display_as
def substitute_variables(text, variables):
"""Substitute ``${VAR}`` variables in `text` with their values.
Variables in the text can take a number of shell-inspired forms::
$VAR
${VAR}
${VAR?} strict: an error if VAR isn't defined.
${VAR-missing} defaulted: "missing" if VAR isn't defined.
$$ just a dollar sign.
`variables` is a dictionary of variable values.
Returns the resulting text with values substituted.
"""
dollar_pattern = r"""(?x) # Use extended regex syntax
\$ # A dollar sign,
(?: # then
(?P<dollar>\$) | # a dollar sign, or
(?P<word1>\w+) | # a plain word, or
{ # a {-wrapped
(?P<word2>\w+) # word,
(?:
(?P<strict>\?) | # with a strict marker
-(?P<defval>[^}]*) # or a default value
)? # maybe.
}
)
"""
def dollar_replace(match):
"""Called for each $replacement."""
# Only one of the groups will have matched, just get its text.
word = next(g for g in match.group('dollar', 'word1', 'word2') if g)
if word == "$":
return "$"
elif word in variables:
return variables[word]
elif match.group('strict'):
msg = "Variable {} is undefined: {!r}".format(word, text)
raise CoverageException(msg)
else:
return match.group('defval')
text = re.sub(dollar_pattern, dollar_replace, text)
return text
class BaseCoverageException(Exception):
"""The base of all Coverage exceptions."""
pass
class CoverageException(BaseCoverageException):
"""An exception raised by a coverage.py function."""
pass
class NoSource(CoverageException):
"""We couldn't find the source for a module."""
pass
class NoCode(NoSource):
"""We couldn't find any code at all."""
pass
class NotPython(CoverageException):
"""A source file turned out not to be parsable Python."""
pass
class ExceptionDuringRun(CoverageException):
"""An exception happened while running customer code.
Construct it with three arguments, the values from `sys.exc_info`.
"""
pass
class StopEverything(BaseCoverageException):
"""An exception that means everything should stop.
The CoverageTest class converts these to SkipTest, so that when running
tests, raising this exception will automatically skip the test.
"""
pass
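The `${VAR}` grammar handled by `substitute_variables` above is easiest to see in action. A minimal self-contained mirror of it, using the same regex, with a plain `KeyError` standing in for `CoverageException` in the strict case:

```python
import re

# Same extended-syntax pattern as substitute_variables() above.
DOLLAR_PATTERN = r"""(?x)   # Use extended regex syntax
    \$                      # A dollar sign,
    (?:                     # then
        (?P<dollar>\$) |        # a dollar sign, or
        (?P<word1>\w+) |        # a plain word, or
        {                       # a {-wrapped
            (?P<word2>\w+)          # word,
            (?:
                (?P<strict>\?) |        # with a strict marker
                -(?P<defval>[^}]*)      # or a default value
            )?                      # maybe.
        }
    )
    """

def substitute(text, variables):
    """Minimal mirror of substitute_variables()."""
    def dollar_replace(match):
        # Only one of the groups will have matched; take its text.
        word = next(g for g in match.group('dollar', 'word1', 'word2') if g)
        if word == "$":
            return "$"
        elif word in variables:
            return variables[word]
        elif match.group('strict'):
            raise KeyError("Variable %s is undefined" % word)
        else:
            return match.group('defval')
    return re.sub(DOLLAR_PATTERN, dollar_replace, text)
```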

third_party/python/coverage/coverage/multiproc.py (vendored, new file, 111 lines)
@@ -0,0 +1,111 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Monkey-patching to add multiprocessing support for coverage.py"""
import multiprocessing
import multiprocessing.process
import os
import os.path
import sys
import traceback
from coverage import env
from coverage.misc import contract
# An attribute that will be set on the module to indicate that it has been
# monkey-patched.
PATCHED_MARKER = "_coverage$patched"
if env.PYVERSION >= (3, 4):
OriginalProcess = multiprocessing.process.BaseProcess
else:
OriginalProcess = multiprocessing.Process
original_bootstrap = OriginalProcess._bootstrap
class ProcessWithCoverage(OriginalProcess): # pylint: disable=abstract-method
"""A replacement for multiprocess.Process that starts coverage."""
def _bootstrap(self, *args, **kwargs): # pylint: disable=arguments-differ
"""Wrapper around _bootstrap to start coverage."""
try:
from coverage import Coverage # avoid circular import
cov = Coverage(data_suffix=True)
cov._warn_preimported_source = False
cov.start()
debug = cov._debug
if debug.should("multiproc"):
debug.write("Calling multiprocessing bootstrap")
except Exception:
print("Exception during multiprocessing bootstrap init:")
traceback.print_exc(file=sys.stdout)
sys.stdout.flush()
raise
try:
return original_bootstrap(self, *args, **kwargs)
finally:
if debug.should("multiproc"):
debug.write("Finished multiprocessing bootstrap")
cov.stop()
cov.save()
if debug.should("multiproc"):
debug.write("Saved multiprocessing data")
class Stowaway(object):
"""An object to pickle, so when it is unpickled, it can apply the monkey-patch."""
def __init__(self, rcfile):
self.rcfile = rcfile
def __getstate__(self):
return {'rcfile': self.rcfile}
def __setstate__(self, state):
patch_multiprocessing(state['rcfile'])
@contract(rcfile=str)
def patch_multiprocessing(rcfile):
"""Monkey-patch the multiprocessing module.
This enables coverage measurement of processes started by multiprocessing.
This involves aggressive monkey-patching.
`rcfile` is the path to the rcfile being used.
"""
if hasattr(multiprocessing, PATCHED_MARKER):
return
if env.PYVERSION >= (3, 4):
OriginalProcess._bootstrap = ProcessWithCoverage._bootstrap
else:
multiprocessing.Process = ProcessWithCoverage
# Set the value in ProcessWithCoverage that will be pickled into the child
# process.
os.environ["COVERAGE_RCFILE"] = os.path.abspath(rcfile)
# When spawning processes rather than forking them, we have no state in the
# new process. We sneak in there with a Stowaway: we stuff one of our own
# objects into the data that gets pickled and sent to the sub-process. When
# the Stowaway is unpickled, its __setstate__ method is called, which
# re-applies the monkey-patch.
# Windows only spawns, so this is needed to keep Windows working.
try:
from multiprocessing import spawn
original_get_preparation_data = spawn.get_preparation_data
except (ImportError, AttributeError):
pass
else:
def get_preparation_data_with_stowaway(name):
"""Get the original preparation data, and also insert our stowaway."""
d = original_get_preparation_data(name)
d['stowaway'] = Stowaway(rcfile)
return d
spawn.get_preparation_data = get_preparation_data_with_stowaway
setattr(multiprocessing, PATCHED_MARKER, True)
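The Stowaway trick is independent of coverage itself: any object whose `__setstate__` re-applies setup will do it as a side effect of being unpickled in the child. A self-contained sketch of the pattern, with a hypothetical `apply_patch` standing in for `patch_multiprocessing`:

```python
import pickle

# Records whether the "patch" ran; a stand-in for the real monkey-patch state.
PATCH_STATE = {"applied": False, "rcfile": None}

def apply_patch(rcfile):
    """Hypothetical stand-in for patch_multiprocessing()."""
    PATCH_STATE["applied"] = True
    PATCH_STATE["rcfile"] = rcfile

class Stowaway(object):
    """When unpickled (e.g. in a spawned child), re-applies the patch."""
    def __init__(self, rcfile):
        self.rcfile = rcfile

    def __getstate__(self):
        return {"rcfile": self.rcfile}

    def __setstate__(self, state):
        apply_patch(state["rcfile"])

# Simulate the spawn round-trip: pickle in the parent...
blob = pickle.dumps(Stowaway(".coveragerc"))
# ...and unpickling on the other side triggers __setstate__.
pickle.loads(blob)
```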

third_party/python/coverage/coverage/numbits.py (vendored, new file, 163 lines)
@@ -0,0 +1,163 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""
Functions to manipulate packed binary representations of number sets.
To save space, coverage stores sets of line numbers in SQLite using a packed
binary representation called a numbits. A numbits is a set of positive
integers.
A numbits is stored as a blob in the database. The exact meaning of the bytes
in the blobs should be considered an implementation detail that might change in
the future. Use these functions to work with those binary blobs of data.
"""
import json
from coverage import env
from coverage.backward import byte_to_int, bytes_to_ints, binary_bytes, zip_longest
from coverage.misc import contract, new_contract
if env.PY3:
def _to_blob(b):
"""Convert a bytestring into a type SQLite will accept for a blob."""
return b
new_contract('blob', lambda v: isinstance(v, bytes))
else:
def _to_blob(b):
"""Convert a bytestring into a type SQLite will accept for a blob."""
return buffer(b) # pylint: disable=undefined-variable
new_contract('blob', lambda v: isinstance(v, buffer)) # pylint: disable=undefined-variable
@contract(nums='Iterable', returns='blob')
def nums_to_numbits(nums):
"""Convert `nums` into a numbits.
Arguments:
nums: a reusable iterable of integers, the line numbers to store.
Returns:
A binary blob.
"""
try:
nbytes = max(nums) // 8 + 1
except ValueError:
# nums was empty.
return _to_blob(b'')
b = bytearray(nbytes)
for num in nums:
b[num//8] |= 1 << num % 8
return _to_blob(bytes(b))
@contract(numbits='blob', returns='list[int]')
def numbits_to_nums(numbits):
"""Convert a numbits into a list of numbers.
Arguments:
numbits: a binary blob, the packed number set.
Returns:
A list of ints.
When registered as a SQLite function by :func:`register_sqlite_functions`,
this returns a string, a JSON-encoded list of ints.
"""
nums = []
for byte_i, byte in enumerate(bytes_to_ints(numbits)):
for bit_i in range(8):
if (byte & (1 << bit_i)):
nums.append(byte_i * 8 + bit_i)
return nums
@contract(numbits1='blob', numbits2='blob', returns='blob')
def numbits_union(numbits1, numbits2):
"""Compute the union of two numbits.
Returns:
A new numbits, the union of `numbits1` and `numbits2`.
"""
byte_pairs = zip_longest(bytes_to_ints(numbits1), bytes_to_ints(numbits2), fillvalue=0)
return _to_blob(binary_bytes(b1 | b2 for b1, b2 in byte_pairs))
@contract(numbits1='blob', numbits2='blob', returns='blob')
def numbits_intersection(numbits1, numbits2):
"""Compute the intersection of two numbits.
Returns:
A new numbits, the intersection of `numbits1` and `numbits2`.
"""
byte_pairs = zip_longest(bytes_to_ints(numbits1), bytes_to_ints(numbits2), fillvalue=0)
intersection_bytes = binary_bytes(b1 & b2 for b1, b2 in byte_pairs)
return _to_blob(intersection_bytes.rstrip(b'\0'))
@contract(numbits1='blob', numbits2='blob', returns='bool')
def numbits_any_intersection(numbits1, numbits2):
"""Is there any number that appears in both numbits?
Determine whether two number sets have a non-empty intersection. This is
faster than computing the intersection.
Returns:
A bool, True if there is any number in both `numbits1` and `numbits2`.
"""
byte_pairs = zip_longest(bytes_to_ints(numbits1), bytes_to_ints(numbits2), fillvalue=0)
return any(b1 & b2 for b1, b2 in byte_pairs)
@contract(num='int', numbits='blob', returns='bool')
def num_in_numbits(num, numbits):
"""Does the integer `num` appear in `numbits`?
Returns:
A bool, True if `num` is a member of `numbits`.
"""
nbyte, nbit = divmod(num, 8)
if nbyte >= len(numbits):
return False
return bool(byte_to_int(numbits[nbyte]) & (1 << nbit))
def register_sqlite_functions(connection):
"""
Define numbits functions in a SQLite connection.
This defines these functions for use in SQLite statements:
* :func:`numbits_union`
* :func:`numbits_intersection`
* :func:`numbits_any_intersection`
* :func:`num_in_numbits`
* :func:`numbits_to_nums`
`connection` is a :class:`sqlite3.Connection <python:sqlite3.Connection>`
object. After creating the connection, pass it to this function to
register the numbits functions. Then you can use numbits functions in your
queries::
import sqlite3
from coverage.numbits import register_sqlite_functions
conn = sqlite3.connect('example.db')
register_sqlite_functions(conn)
c = conn.cursor()
# Kind of a nonsense query: find all the files and contexts that
# executed line 47 in any file:
c.execute(
"select file_id, context_id from line_bits where num_in_numbits(?, numbits)",
(47,)
)
"""
connection.create_function("numbits_union", 2, numbits_union)
connection.create_function("numbits_intersection", 2, numbits_intersection)
connection.create_function("numbits_any_intersection", 2, numbits_any_intersection)
connection.create_function("num_in_numbits", 2, num_in_numbits)
connection.create_function("numbits_to_nums", 1, lambda b: json.dumps(numbits_to_nums(b)))
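The packed representation is simple enough to mirror in a few lines. This is a Python 3 sketch of the same little-endian bit layout used above, without the Py2 `buffer` shim or the contract decorators:

```python
def nums_to_numbits(nums):
    """Pack a collection of non-negative ints into a bitmap blob."""
    if not nums:
        return b''
    b = bytearray(max(nums) // 8 + 1)
    for num in nums:
        # Bit (num % 8) of byte (num // 8) marks membership of `num`.
        b[num // 8] |= 1 << (num % 8)
    return bytes(b)

def numbits_to_nums(numbits):
    """Unpack a bitmap blob back into a sorted list of ints."""
    return [byte_i * 8 + bit_i
            for byte_i, byte in enumerate(numbits)
            for bit_i in range(8)
            if byte & (1 << bit_i)]
```

Unions and intersections then reduce to byte-wise `|` and `&` over the two blobs, which is what the functions above do.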

third_party/python/coverage/coverage/optional.py (vendored, new file, 68 lines)
@@ -0,0 +1,68 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""
Imports that we need at runtime, but might not be present.
When importing one of these modules, always do it in the function where you
need the module. Some tests will need to remove the module. If you import
it at the top level of your module, then the test won't be able to simulate
the module being unimportable.
The import will always succeed, but the value will be None if the module is
unavailable.
Bad::
# MyModule.py
from coverage.optional import unsure
def use_unsure():
unsure.something()
Good::
# MyModule.py
def use_unsure():
from coverage.optional import unsure
if unsure is None:
raise Exception("Module unsure isn't available!")
unsure.something()
"""
import contextlib
# This file's purpose is to provide modules to be imported from here.
# pylint: disable=unused-import
# TOML support is an install-time extra option.
try:
import toml
except ImportError: # pragma: not covered
toml = None
@contextlib.contextmanager
def without(modname):
"""Hide a module for testing.
Use this in a test function to make an optional module unavailable during
the test::
with coverage.optional.without('toml'):
use_toml_somehow()
Arguments:
modname (str): the name of a module importable from
`coverage.optional`.
"""
real_module = globals()[modname]
try:
globals()[modname] = None
yield
finally:
globals()[modname] = real_module
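The hide-a-module trick above works on any module-level global, not just imports. A self-contained sketch, with a plain object standing in for the optional `toml` dependency:

```python
import contextlib

toml = object()  # stand-in for an optional dependency imported at module level

@contextlib.contextmanager
def without(modname):
    """Temporarily replace a module-level global with None, as above."""
    real = globals()[modname]
    try:
        globals()[modname] = None
        yield
    finally:
        globals()[modname] = real

with without('toml'):
    unavailable = toml is None  # code under test sees the module as missing
```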

third_party/python/coverage/coverage/parser.py (vendored, new file, 1251 lines)
(diff not shown because of its size)

third_party/python/coverage/coverage/phystokens.py (vendored, new file, 297 lines)
@@ -0,0 +1,297 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Better tokenizing for coverage.py."""
import codecs
import keyword
import re
import sys
import token
import tokenize
from coverage import env
from coverage.backward import iternext, unicode_class
from coverage.misc import contract
def phys_tokens(toks):
"""Return all physical tokens, even line continuations.
tokenize.generate_tokens() doesn't return a token for the backslash that
continues lines. This wrapper provides those tokens so that we can
re-create a faithful representation of the original source.
Returns the same values as generate_tokens()
"""
last_line = None
last_lineno = -1
last_ttext = None
for ttype, ttext, (slineno, scol), (elineno, ecol), ltext in toks:
if last_lineno != elineno:
if last_line and last_line.endswith("\\\n"):
# We are at the beginning of a new line, and the last line
# ended with a backslash. We probably have to inject a
# backslash token into the stream. Unfortunately, there's more
# to figure out. This code::
#
# usage = """\
# HEY THERE
# """
#
# triggers this condition, but the token text is::
#
# '"""\\\nHEY THERE\n"""'
#
# so we need to figure out if the backslash is already in the
# string token or not.
inject_backslash = True
if last_ttext.endswith("\\"):
inject_backslash = False
elif ttype == token.STRING:
if "\n" in ttext and ttext.split('\n', 1)[0][-1] == '\\':
# It's a multi-line string and the first line ends with
# a backslash, so we don't need to inject another.
inject_backslash = False
if inject_backslash:
# Figure out what column the backslash is in.
ccol = len(last_line.split("\n")[-2]) - 1
# Yield the token, with a fake token type.
yield (
99999, "\\\n",
(slineno, ccol), (slineno, ccol+2),
last_line
)
last_line = ltext
if ttype not in (tokenize.NEWLINE, tokenize.NL):
last_ttext = ttext
yield ttype, ttext, (slineno, scol), (elineno, ecol), ltext
last_lineno = elineno
@contract(source='unicode')
def source_token_lines(source):
"""Generate a series of lines, one for each line in `source`.
Each line is a list of pairs, each pair is a token::
[('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]
Each pair has a token class, and the token text.
If you concatenate all the token texts, and then join them with newlines,
you should have your original `source` back, with two differences:
trailing whitespace is not preserved, and a final line with no newline
is indistinguishable from a final line with a newline.
"""
ws_tokens = set([token.INDENT, token.DEDENT, token.NEWLINE, tokenize.NL])
line = []
col = 0
source = source.expandtabs(8).replace('\r\n', '\n')
tokgen = generate_tokens(source)
for ttype, ttext, (_, scol), (_, ecol), _ in phys_tokens(tokgen):
mark_start = True
for part in re.split('(\n)', ttext):
if part == '\n':
yield line
line = []
col = 0
mark_end = False
elif part == '':
mark_end = False
elif ttype in ws_tokens:
mark_end = False
else:
if mark_start and scol > col:
line.append(("ws", u" " * (scol - col)))
mark_start = False
tok_class = tokenize.tok_name.get(ttype, 'xx').lower()[:3]
if ttype == token.NAME and keyword.iskeyword(ttext):
tok_class = "key"
line.append((tok_class, part))
mark_end = True
scol = 0
if mark_end:
col = ecol
if line:
yield line
class CachedTokenizer(object):
"""A one-element cache around tokenize.generate_tokens.
When reporting, coverage.py tokenizes files twice, once to find the
structure of the file, and once to syntax-color it. Tokenizing is
expensive, and easily cached.
This is a one-element cache so that our twice-in-a-row tokenizing doesn't
actually tokenize twice.
"""
def __init__(self):
self.last_text = None
self.last_tokens = None
@contract(text='unicode')
def generate_tokens(self, text):
"""A stand-in for `tokenize.generate_tokens`."""
if text != self.last_text:
self.last_text = text
readline = iternext(text.splitlines(True))
self.last_tokens = list(tokenize.generate_tokens(readline))
return self.last_tokens
# Create our generate_tokens cache as a callable replacement function.
generate_tokens = CachedTokenizer().generate_tokens
COOKIE_RE = re.compile(r"^[ \t]*#.*coding[:=][ \t]*([-\w.]+)", flags=re.MULTILINE)
@contract(source='bytes')
def _source_encoding_py2(source):
"""Determine the encoding for `source`, according to PEP 263.
`source` is a byte string, the text of the program.
Returns a string, the name of the encoding.
"""
assert isinstance(source, bytes)
# Do this so the detect_encode code we copied will work.
readline = iternext(source.splitlines(True))
# This is mostly code adapted from Py3.2's tokenize module.
def _get_normal_name(orig_enc):
"""Imitates get_normal_name in tokenizer.c."""
# Only care about the first 12 characters.
enc = orig_enc[:12].lower().replace("_", "-")
if re.match(r"^utf-8($|-)", enc):
return "utf-8"
if re.match(r"^(latin-1|iso-8859-1|iso-latin-1)($|-)", enc):
return "iso-8859-1"
return orig_enc
# From detect_encode():
# It detects the encoding from the presence of a UTF-8 BOM or an encoding
# cookie as specified in PEP-0263. If both a BOM and a cookie are present,
# but disagree, a SyntaxError will be raised. If the encoding cookie is an
# invalid charset, raise a SyntaxError. Note that if a UTF-8 BOM is found,
# 'utf-8-sig' is returned.
# If no encoding is specified, then the default will be returned.
default = 'ascii'
bom_found = False
encoding = None
def read_or_stop():
"""Get the next source line, or ''."""
try:
return readline()
except StopIteration:
return ''
def find_cookie(line):
"""Find an encoding cookie in `line`."""
try:
line_string = line.decode('ascii')
except UnicodeDecodeError:
return None
matches = COOKIE_RE.findall(line_string)
if not matches:
return None
encoding = _get_normal_name(matches[0])
try:
codec = codecs.lookup(encoding)
except LookupError:
# This behavior mimics the Python interpreter
raise SyntaxError("unknown encoding: " + encoding)
if bom_found:
# codecs in 2.3 were raw tuples of functions, assume the best.
codec_name = getattr(codec, 'name', encoding)
if codec_name != 'utf-8':
# This behavior mimics the Python interpreter
raise SyntaxError('encoding problem: utf-8')
encoding += '-sig'
return encoding
first = read_or_stop()
if first.startswith(codecs.BOM_UTF8):
bom_found = True
first = first[3:]
default = 'utf-8-sig'
if not first:
return default
encoding = find_cookie(first)
if encoding:
return encoding
second = read_or_stop()
if not second:
return default
encoding = find_cookie(second)
if encoding:
return encoding
return default
@contract(source='bytes')
def _source_encoding_py3(source):
"""Determine the encoding for `source`, according to PEP 263.
`source` is a byte string: the text of the program.
Returns a string, the name of the encoding.
"""
readline = iternext(source.splitlines(True))
return tokenize.detect_encoding(readline)[0]
if env.PY3:
source_encoding = _source_encoding_py3
else:
source_encoding = _source_encoding_py2
@contract(source='unicode')
def compile_unicode(source, filename, mode):
"""Just like the `compile` builtin, but works on any Unicode string.
Python 2's compile() builtin has a stupid restriction: if the source string
is Unicode, then it may not have an encoding declaration in it. Why not?
Who knows! It also decodes to utf8, and then tries to interpret those utf8
bytes according to the encoding declaration. Why? Who knows!
This function neuters the coding declaration, and compiles it.
"""
source = neuter_encoding_declaration(source)
if env.PY2 and isinstance(filename, unicode_class):
filename = filename.encode(sys.getfilesystemencoding(), "replace")
code = compile(source, filename, mode)
return code
@contract(source='unicode', returns='unicode')
def neuter_encoding_declaration(source):
"""Return `source`, with any encoding declaration neutered."""
if COOKIE_RE.search(source):
source_lines = source.splitlines(True)
for lineno in range(min(2, len(source_lines))):
source_lines[lineno] = COOKIE_RE.sub("# (deleted declaration)", source_lines[lineno])
source = "".join(source_lines)
return source
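`neuter_encoding_declaration` above is self-contained apart from the contract decorator; the same regex and loop run on their own:

```python
import re

# Same PEP 263 cookie pattern as above.
COOKIE_RE = re.compile(r"^[ \t]*#.*coding[:=][ \t]*([-\w.]+)", flags=re.MULTILINE)

def neuter_encoding_declaration(source):
    """Blank out a coding cookie in the first two lines, as above."""
    if COOKIE_RE.search(source):
        source_lines = source.splitlines(True)
        # PEP 263 only recognizes cookies on the first or second line.
        for lineno in range(min(2, len(source_lines))):
            source_lines[lineno] = COOKIE_RE.sub(
                "# (deleted declaration)", source_lines[lineno])
        source = "".join(source_lines)
    return source
```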

third_party/python/coverage/coverage/plugin.py (vendored, new file, 533 lines)
@@ -0,0 +1,533 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""
.. versionadded:: 4.0
Plug-in interfaces for coverage.py.
Coverage.py supports a few different kinds of plug-ins that change its
behavior:
* File tracers implement tracing of non-Python file types.
* Configurers add custom configuration, using Python code to change the
configuration.
* Dynamic context switchers decide when the dynamic context has changed, for
example, to record what test function produced the coverage.
To write a coverage.py plug-in, create a module with a subclass of
:class:`~coverage.CoveragePlugin`. You will override methods in your class to
participate in various aspects of coverage.py's processing.
Different types of plug-ins have to override different methods.
Any plug-in can optionally implement :meth:`~coverage.CoveragePlugin.sys_info`
to provide debugging information about their operation.
Your module must also contain a ``coverage_init`` function that registers an
instance of your plug-in class::
import coverage
class MyPlugin(coverage.CoveragePlugin):
...
def coverage_init(reg, options):
reg.add_file_tracer(MyPlugin())
You use the `reg` parameter passed to your ``coverage_init`` function to
register your plug-in object. The registration method you call depends on
what kind of plug-in it is.
If your plug-in takes options, the `options` parameter is a dictionary of your
plug-in's options from the coverage.py configuration file. Use them however
you want to configure your object before registering it.
Coverage.py will store its own information on your plug-in object, using
attributes whose names start with ``_coverage_``. Don't be startled.
.. warning::
Plug-ins are imported by coverage.py before it begins measuring code.
If you write a plugin in your own project, it might import your product
code before coverage.py can start measuring. This can result in your
own code being reported as missing.
One solution is to put your plugins in your project tree, but not in
your importable Python package.
.. _file_tracer_plugins:
File Tracers
============
File tracers implement measurement support for non-Python files. File tracers
implement the :meth:`~coverage.CoveragePlugin.file_tracer` method to claim
files and the :meth:`~coverage.CoveragePlugin.file_reporter` method to report
on those files.
In your ``coverage_init`` function, use the ``add_file_tracer`` method to
register your file tracer.
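As a sketch only (the ``RendererPlugin`` name and its ``templates`` claiming rule are invented here, and a real plug-in subclasses :class:`coverage.CoveragePlugin` and :class:`coverage.FileTracer`), a minimal static-file-name tracer could look like this::

```python
import os


class RenderedFileTracer:
    """Sketch of a FileTracer with a static source file name."""

    def __init__(self, filename):
        self.filename = filename

    def source_filename(self):
        # Credit executions of this Python file to itself.
        return self.filename


class RendererPlugin:
    """Hypothetical file tracer plug-in; the claiming rule is invented."""

    def file_tracer(self, filename):
        # Claim only files under a hypothetical "templates" directory.
        if os.sep + "templates" + os.sep in filename:
            return RenderedFileTracer(filename)
        return None  # decline everything else

    def file_reporter(self, filename):
        return "python"  # have coverage.py report the file as Python


def coverage_init(reg, options):
    # `reg` is supplied by coverage.py at start-up.
    reg.add_file_tracer(RendererPlugin())
```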
.. _configurer_plugins:
Configurers
===========
.. versionadded:: 4.5
Configurers modify the configuration of coverage.py during start-up.
Configurers implement the :meth:`~coverage.CoveragePlugin.configure` method to
change the configuration.
In your ``coverage_init`` function, use the ``add_configurer`` method to
register your configurer.
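A configurer can only call ``get_option`` and ``set_option`` on the object it is given. As a sketch (the class name and the omit pattern are invented; a real plug-in subclasses :class:`coverage.CoveragePlugin`)::

```python
class ExcludeVendoredConfigurer:
    """Hypothetical configurer that omits vendored code from measurement."""

    def configure(self, config):
        # Only get_option()/set_option() may be called on `config`.
        omit = list(config.get_option("run:omit") or [])
        omit.append("*/third_party/*")
        config.set_option("run:omit", omit)


def coverage_init(reg, options):
    reg.add_configurer(ExcludeVendoredConfigurer())
```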
.. _dynamic_context_plugins:
Dynamic Context Switchers
=========================
.. versionadded:: 5.0
Dynamic context switcher plugins implement the
:meth:`~coverage.CoveragePlugin.dynamic_context` method to dynamically compute
the context label for each measured frame.
Computed context labels are useful when you want to group measured data without
modifying the source code.
For example, you could write a plugin that checks `frame.f_code` to inspect
the currently executed method, and set the context label to a fully qualified
method name if it's an instance method of `unittest.TestCase` and the method
name starts with 'test'. Such a plugin would provide basic coverage grouping
by test and could be used with test runners that have no built-in coverage.py
support.
In your ``coverage_init`` function, use the ``add_dynamic_context`` method to
register your dynamic context switcher.
"""
from coverage import files
from coverage.misc import contract, _needs_to_implement
class CoveragePlugin(object):
"""Base class for coverage.py plug-ins."""
def file_tracer(self, filename): # pylint: disable=unused-argument
"""Get a :class:`FileTracer` object for a file.
Plug-in type: file tracer.
Every Python source file is offered to your plug-in to give it a chance
to take responsibility for tracing the file. If your plug-in can
handle the file, it should return a :class:`FileTracer` object.
Otherwise return None.
There is no way to register your plug-in for particular files.
Instead, this method is invoked for all files as they are executed,
and the plug-in decides whether it can trace the file or not.
Be prepared for `filename` to refer to all kinds of files that have
nothing to do with your plug-in.
The file name will be that of a Python file being executed. There are two
broad categories of behavior for a plug-in, depending on the kind of
files your plug-in supports:
* Static file names: each of your original source files has been
converted into a distinct Python file. Your plug-in is invoked with
the Python file name, and it maps it back to its original source
file.
* Dynamic file names: all of your source files are executed by the same
Python file. In this case, your plug-in implements
:meth:`FileTracer.dynamic_source_filename` to provide the actual
source file for each execution frame.
`filename` is a string, the path to the file being considered. This is
the absolute real path to the file. If you are comparing to other
paths, be sure to take this into account.
Returns a :class:`FileTracer` object to use to trace `filename`, or
None if this plug-in cannot trace this file.
"""
return None
def file_reporter(self, filename): # pylint: disable=unused-argument
"""Get the :class:`FileReporter` class to use for a file.
Plug-in type: file tracer.
This will only be invoked for files that :meth:`file_tracer` claimed by
returning non-None. It's an error to return None from this method.
Returns a :class:`FileReporter` object to use to report on `filename`,
or the string `"python"` to have coverage.py treat the file as Python.
"""
_needs_to_implement(self, "file_reporter")
def dynamic_context(self, frame): # pylint: disable=unused-argument
"""Get the dynamically computed context label for `frame`.
Plug-in type: dynamic context.
This method is invoked for each frame when outside of a dynamic
context, to see if a new dynamic context should be started. If it
returns a string, a new context label is set for this and deeper
frames. The dynamic context ends when this frame returns.
Returns a string to start a new dynamic context, or None if no new
context should be started.
"""
return None
def find_executable_files(self, src_dir): # pylint: disable=unused-argument
"""Yield all of the executable files in `src_dir`, recursively.
Plug-in type: file tracer.
Executability is a plug-in-specific property, but generally means files
which would have been considered for coverage analysis, had they been
included automatically.
Returns or yields a sequence of strings, the paths to files that could
have been executed, including files that had been executed.
"""
return []
def configure(self, config):
"""Modify the configuration of coverage.py.
Plug-in type: configurer.
This method is called during coverage.py start-up, to give your plug-in
a chance to change the configuration. The `config` parameter is an
object with :meth:`~coverage.Coverage.get_option` and
:meth:`~coverage.Coverage.set_option` methods. Do not call any other
methods on the `config` object.
"""
pass
def sys_info(self):
"""Get a list of information useful for debugging.
Plug-in type: any.
This method will be invoked for ``--debug=sys``. Your
plug-in can return any information it wants to be displayed.
Returns a list of pairs: `[(name, value), ...]`.
"""
return []
class FileTracer(object):
"""Support needed for files during the execution phase.
File tracer plug-ins implement subclasses of FileTracer to return from
their :meth:`~CoveragePlugin.file_tracer` method.
You may construct this object from :meth:`CoveragePlugin.file_tracer` any
way you like. A natural choice would be to pass the file name given to
`file_tracer`.
`FileTracer` objects should only be created in the
:meth:`CoveragePlugin.file_tracer` method.
See :ref:`howitworks` for details of the different coverage.py phases.
"""
def source_filename(self):
"""The source file name for this file.
This may be any file name you like. A key responsibility of a plug-in
is to own the mapping from Python execution back to whatever source
file name was originally the source of the code.
See :meth:`CoveragePlugin.file_tracer` for details about static and
dynamic file names.
Returns the file name to credit with this execution.
"""
_needs_to_implement(self, "source_filename")
def has_dynamic_source_filename(self):
"""Does this FileTracer have dynamic source file names?
FileTracers can provide dynamically determined file names by
implementing :meth:`dynamic_source_filename`. Invoking that function
is expensive. To determine whether to invoke it, coverage.py uses the
result of this function to know if it needs to bother invoking
:meth:`dynamic_source_filename`.
See :meth:`CoveragePlugin.file_tracer` for details about static and
dynamic file names.
Returns True if :meth:`dynamic_source_filename` should be called to get
dynamic source file names.
"""
return False
def dynamic_source_filename(self, filename, frame): # pylint: disable=unused-argument
"""Get a dynamically computed source file name.
Some plug-ins need to compute the source file name dynamically for each
frame.
This function will not be invoked if
:meth:`has_dynamic_source_filename` returns False.
Returns the source file name for this frame, or None if this frame
shouldn't be measured.
"""
return None
def line_number_range(self, frame):
"""Get the range of source line numbers for a given call frame.
The call frame is examined, and the source line number in the original
file is returned. The return value is a pair of numbers, the starting
line number and the ending line number, both inclusive. For example,
returning (5, 7) means that lines 5, 6, and 7 should be considered
executed.
This function might decide that the frame doesn't indicate any lines
from the source file were executed. Return (-1, -1) in this case to
tell coverage.py that no lines should be recorded for this frame.
"""
lineno = frame.f_lineno
return lineno, lineno
class FileReporter(object):
"""Support needed for files during the analysis and reporting phases.
File tracer plug-ins implement a subclass of `FileReporter`, and return
instances from their :meth:`CoveragePlugin.file_reporter` method.
There are many methods here, but only :meth:`lines` is required, to provide
the set of executable lines in the file.
See :ref:`howitworks` for details of the different coverage.py phases.
"""
def __init__(self, filename):
"""Simple initialization of a `FileReporter`.
The `filename` argument is the path to the file being reported. This
will be available as the `.filename` attribute on the object. Other
method implementations on this base class rely on this attribute.
"""
self.filename = filename
def __repr__(self):
return "<{0.__class__.__name__} filename={0.filename!r}>".format(self)
def relative_filename(self):
"""Get the relative file name for this file.
This file path will be displayed in reports. The default
implementation will supply the actual project-relative file path. You
only need to supply this method if you have an unusual syntax for file
paths.
"""
return files.relative_filename(self.filename)
@contract(returns='unicode')
def source(self):
"""Get the source for the file.
Returns a Unicode string.
The base implementation simply reads the `self.filename` file and
decodes it as UTF8. Override this method if your file isn't readable
as a text file, or if you need other encoding support.
"""
with open(self.filename, "rb") as f:
return f.read().decode("utf8")
def lines(self):
"""Get the executable lines in this file.
Your plug-in must determine which lines in the file were possibly
executable. This method returns a set of those line numbers.
Returns a set of line numbers.
"""
_needs_to_implement(self, "lines")
def excluded_lines(self):
"""Get the excluded executable lines in this file.
Your plug-in can use any method it likes to allow the user to exclude
executable lines from consideration.
Returns a set of line numbers.
The base implementation returns the empty set.
"""
return set()
def translate_lines(self, lines):
"""Translate recorded lines into reported lines.
Some file formats will want to report lines slightly differently than
they are recorded. For example, Python records the last line of a
multi-line statement, but reports are nicer if they mention the first
line.
Your plug-in can optionally define this method to perform these kinds
of adjustment.
`lines` is a sequence of integers, the recorded line numbers.
Returns a set of integers, the adjusted line numbers.
The base implementation returns the numbers unchanged.
"""
return set(lines)
def arcs(self):
"""Get the executable arcs in this file.
To support branch coverage, your plug-in needs to be able to indicate
possible execution paths, as a set of line number pairs. Each pair is
a `(prev, next)` pair indicating that execution can transition from the
`prev` line number to the `next` line number.
Returns a set of pairs of line numbers. The default implementation
returns an empty set.
"""
return set()
def no_branch_lines(self):
"""Get the lines excused from branch coverage in this file.
Your plug-in can use any method it likes to allow the user to exclude
lines from consideration of branch coverage.
Returns a set of line numbers.
The base implementation returns the empty set.
"""
return set()
def translate_arcs(self, arcs):
"""Translate recorded arcs into reported arcs.
Similar to :meth:`translate_lines`, but for arcs. `arcs` is a set of
line number pairs.
Returns a set of line number pairs.
The default implementation returns `arcs` unchanged.
"""
return arcs
def exit_counts(self):
"""Get a count of exits from each line.
To determine which lines are branches, coverage.py looks for lines that
have more than one exit. This function creates a dict mapping each
executable line number to a count of how many exits it has.
To be honest, this feels wrong, and should be refactored. Let me know
if you attempt to implement this method in your plug-in...
"""
return {}
def missing_arc_description(self, start, end, executed_arcs=None): # pylint: disable=unused-argument
"""Provide an English sentence describing a missing arc.
The `start` and `end` arguments are the line numbers of the missing
arc. Negative numbers indicate entering or exiting code objects.
The `executed_arcs` argument is a set of line number pairs, the arcs
that were executed in this file.
By default, this simply returns the string "Line {start} didn't jump
to line {end}".
"""
return "Line {start} didn't jump to line {end}".format(start=start, end=end)
def source_token_lines(self):
"""Generate a series of tokenized lines, one for each line in `source`.
These tokens are used for syntax-colored reports.
Each line is a list of pairs, each pair is a token::
[('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]
Each pair has a token class, and the token text. The token classes
are:
* ``'com'``: a comment
* ``'key'``: a keyword
* ``'nam'``: a name, or identifier
* ``'num'``: a number
* ``'op'``: an operator
* ``'str'``: a string literal
* ``'ws'``: some white space
* ``'txt'``: some other kind of text
If you concatenate all the token texts, and then join them with
newlines, you should have your original source back.
The default implementation simply returns each line tagged as
``'txt'``.
"""
for line in self.source().splitlines():
yield [('txt', line)]
# Annoying comparison operators. Py3k wants __lt__ etc, and Py2k needs all
# of them defined.
def __eq__(self, other):
return isinstance(other, FileReporter) and self.filename == other.filename
def __ne__(self, other):
return not (self == other)
def __lt__(self, other):
return self.filename < other.filename
def __le__(self, other):
return self.filename <= other.filename
def __gt__(self, other):
return self.filename > other.filename
def __ge__(self, other):
return self.filename >= other.filename
__hash__ = None # This object doesn't need to be hashed.

third_party/python/coverage/coverage/plugin_support.py (vendored, new file, 281 lines)
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Support for plugins."""
import os
import os.path
import sys
from coverage.misc import CoverageException, isolate_module
from coverage.plugin import CoveragePlugin, FileTracer, FileReporter
os = isolate_module(os)
class Plugins(object):
"""The currently loaded collection of coverage.py plugins."""
def __init__(self):
self.order = []
self.names = {}
self.file_tracers = []
self.configurers = []
self.context_switchers = []
self.current_module = None
self.debug = None
@classmethod
def load_plugins(cls, modules, config, debug=None):
"""Load plugins from `modules`.
Returns a Plugins object with the loaded and configured plugins.
"""
plugins = cls()
plugins.debug = debug
for module in modules:
plugins.current_module = module
__import__(module)
mod = sys.modules[module]
coverage_init = getattr(mod, "coverage_init", None)
if not coverage_init:
raise CoverageException(
"Plugin module %r didn't define a coverage_init function" % module
)
options = config.get_plugin_options(module)
coverage_init(plugins, options)
plugins.current_module = None
return plugins
def add_file_tracer(self, plugin):
"""Add a file tracer plugin.
`plugin` is an instance of a third-party plugin class. It must
implement the :meth:`CoveragePlugin.file_tracer` method.
"""
self._add_plugin(plugin, self.file_tracers)
def add_configurer(self, plugin):
"""Add a configuring plugin.
`plugin` is an instance of a third-party plugin class. It must
implement the :meth:`CoveragePlugin.configure` method.
"""
self._add_plugin(plugin, self.configurers)
def add_dynamic_context(self, plugin):
"""Add a dynamic context plugin.
`plugin` is an instance of a third-party plugin class. It must
implement the :meth:`CoveragePlugin.dynamic_context` method.
"""
self._add_plugin(plugin, self.context_switchers)
def add_noop(self, plugin):
"""Add a plugin that does nothing.
This is only useful for testing the plugin support.
"""
self._add_plugin(plugin, None)
def _add_plugin(self, plugin, specialized):
"""Add a plugin object.
`plugin` is a :class:`CoveragePlugin` instance to add. `specialized`
is a list to append the plugin to.
"""
plugin_name = "%s.%s" % (self.current_module, plugin.__class__.__name__)
if self.debug and self.debug.should('plugin'):
self.debug.write("Loaded plugin %r: %r" % (self.current_module, plugin))
labelled = LabelledDebug("plugin %r" % (self.current_module,), self.debug)
plugin = DebugPluginWrapper(plugin, labelled)
# pylint: disable=attribute-defined-outside-init
plugin._coverage_plugin_name = plugin_name
plugin._coverage_enabled = True
self.order.append(plugin)
self.names[plugin_name] = plugin
if specialized is not None:
specialized.append(plugin)
def __nonzero__(self):
return bool(self.order)
__bool__ = __nonzero__
def __iter__(self):
return iter(self.order)
def get(self, plugin_name):
"""Return a plugin by name."""
return self.names[plugin_name]
class LabelledDebug(object):
"""A Debug writer, but with labels for prepending to the messages."""
def __init__(self, label, debug, prev_labels=()):
self.labels = list(prev_labels) + [label]
self.debug = debug
def add_label(self, label):
"""Add a label to the writer, and return a new `LabelledDebug`."""
return LabelledDebug(label, self.debug, self.labels)
def message_prefix(self):
"""The prefix to use on messages, combining the labels."""
prefixes = self.labels + ['']
return ":\n".join(" "*i+label for i, label in enumerate(prefixes))
def write(self, message):
"""Write `message`, but with the labels prepended."""
self.debug.write("%s%s" % (self.message_prefix(), message))
class DebugPluginWrapper(CoveragePlugin):
"""Wrap a plugin, and use debug to report on what it's doing."""
def __init__(self, plugin, debug):
super(DebugPluginWrapper, self).__init__()
self.plugin = plugin
self.debug = debug
def file_tracer(self, filename):
tracer = self.plugin.file_tracer(filename)
self.debug.write("file_tracer(%r) --> %r" % (filename, tracer))
if tracer:
debug = self.debug.add_label("file %r" % (filename,))
tracer = DebugFileTracerWrapper(tracer, debug)
return tracer
def file_reporter(self, filename):
reporter = self.plugin.file_reporter(filename)
self.debug.write("file_reporter(%r) --> %r" % (filename, reporter))
if reporter:
debug = self.debug.add_label("file %r" % (filename,))
reporter = DebugFileReporterWrapper(filename, reporter, debug)
return reporter
def dynamic_context(self, frame):
context = self.plugin.dynamic_context(frame)
self.debug.write("dynamic_context(%r) --> %r" % (frame, context))
return context
def find_executable_files(self, src_dir):
executable_files = self.plugin.find_executable_files(src_dir)
self.debug.write("find_executable_files(%r) --> %r" % (src_dir, executable_files))
return executable_files
def configure(self, config):
self.debug.write("configure(%r)" % (config,))
self.plugin.configure(config)
def sys_info(self):
return self.plugin.sys_info()
class DebugFileTracerWrapper(FileTracer):
"""A debugging `FileTracer`."""
def __init__(self, tracer, debug):
self.tracer = tracer
self.debug = debug
def _show_frame(self, frame):
"""A short string identifying a frame, for debug messages."""
return "%s@%d" % (
os.path.basename(frame.f_code.co_filename),
frame.f_lineno,
)
def source_filename(self):
sfilename = self.tracer.source_filename()
self.debug.write("source_filename() --> %r" % (sfilename,))
return sfilename
def has_dynamic_source_filename(self):
has = self.tracer.has_dynamic_source_filename()
self.debug.write("has_dynamic_source_filename() --> %r" % (has,))
return has
def dynamic_source_filename(self, filename, frame):
dyn = self.tracer.dynamic_source_filename(filename, frame)
self.debug.write("dynamic_source_filename(%r, %s) --> %r" % (
filename, self._show_frame(frame), dyn,
))
return dyn
def line_number_range(self, frame):
pair = self.tracer.line_number_range(frame)
self.debug.write("line_number_range(%s) --> %r" % (self._show_frame(frame), pair))
return pair
class DebugFileReporterWrapper(FileReporter):
"""A debugging `FileReporter`."""
def __init__(self, filename, reporter, debug):
super(DebugFileReporterWrapper, self).__init__(filename)
self.reporter = reporter
self.debug = debug
def relative_filename(self):
ret = self.reporter.relative_filename()
self.debug.write("relative_filename() --> %r" % (ret,))
return ret
def lines(self):
ret = self.reporter.lines()
self.debug.write("lines() --> %r" % (ret,))
return ret
def excluded_lines(self):
ret = self.reporter.excluded_lines()
self.debug.write("excluded_lines() --> %r" % (ret,))
return ret
def translate_lines(self, lines):
ret = self.reporter.translate_lines(lines)
self.debug.write("translate_lines(%r) --> %r" % (lines, ret))
return ret
def translate_arcs(self, arcs):
ret = self.reporter.translate_arcs(arcs)
self.debug.write("translate_arcs(%r) --> %r" % (arcs, ret))
return ret
def no_branch_lines(self):
ret = self.reporter.no_branch_lines()
self.debug.write("no_branch_lines() --> %r" % (ret,))
return ret
def exit_counts(self):
ret = self.reporter.exit_counts()
self.debug.write("exit_counts() --> %r" % (ret,))
return ret
def arcs(self):
ret = self.reporter.arcs()
self.debug.write("arcs() --> %r" % (ret,))
return ret
def source(self):
ret = self.reporter.source()
self.debug.write("source() --> %d chars" % (len(ret),))
return ret
def source_token_lines(self):
ret = list(self.reporter.source_token_lines())
self.debug.write("source_token_lines() --> %d tokens" % (len(ret),))
return ret

third_party/python/coverage/coverage/python.py (vendored, new file, 249 lines)
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Python source expertise for coverage.py"""
import os.path
import types
import zipimport
from coverage import env, files
from coverage.misc import contract, expensive, isolate_module, join_regex
from coverage.misc import CoverageException, NoSource
from coverage.parser import PythonParser
from coverage.phystokens import source_token_lines, source_encoding
from coverage.plugin import FileReporter
os = isolate_module(os)
@contract(returns='bytes')
def read_python_source(filename):
"""Read the Python source text from `filename`.
Returns bytes.
"""
with open(filename, "rb") as f:
source = f.read()
if env.IRONPYTHON:
# IronPython reads Unicode strings even for "rb" files.
source = bytes(source)
return source.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
@contract(returns='unicode')
def get_python_source(filename):
"""Return the source code, as unicode."""
base, ext = os.path.splitext(filename)
if ext == ".py" and env.WINDOWS:
exts = [".py", ".pyw"]
else:
exts = [ext]
for ext in exts:
try_filename = base + ext
if os.path.exists(try_filename):
# A regular text file: open it.
source = read_python_source(try_filename)
break
# Maybe it's in a zip file?
source = get_zip_bytes(try_filename)
if source is not None:
break
else:
# Couldn't find source.
exc_msg = "No source for code: '%s'.\n" % (filename,)
exc_msg += "Aborting report output, consider using -i."
raise NoSource(exc_msg)
# Replace \f because of http://bugs.python.org/issue19035
source = source.replace(b'\f', b' ')
source = source.decode(source_encoding(source), "replace")
# Python code should always end with a line with a newline.
if source and source[-1] != '\n':
source += '\n'
return source
@contract(returns='bytes|None')
def get_zip_bytes(filename):
"""Get data from `filename` if it is a zip file path.
Returns the bytestring data read from the zip file, or None if no zip file
could be found or `filename` isn't in it. The data returned will be
an empty string if the file is empty.
"""
markers = ['.zip'+os.sep, '.egg'+os.sep, '.pex'+os.sep]
for marker in markers:
if marker in filename:
parts = filename.split(marker)
try:
zi = zipimport.zipimporter(parts[0]+marker[:-1])
except zipimport.ZipImportError:
continue
try:
data = zi.get_data(parts[1])
except IOError:
continue
return data
return None
def source_for_file(filename):
"""Return the source filename for `filename`.
Given a file name being traced, return the best guess as to the source
file to attribute it to.
"""
if filename.endswith(".py"):
# .py files are themselves source files.
return filename
elif filename.endswith((".pyc", ".pyo")):
# Bytecode files probably have source files near them.
py_filename = filename[:-1]
if os.path.exists(py_filename):
# Found a .py file, use that.
return py_filename
if env.WINDOWS:
# On Windows, it could be a .pyw file.
pyw_filename = py_filename + "w"
if os.path.exists(pyw_filename):
return pyw_filename
# Didn't find source, but it's probably the .py file we want.
return py_filename
elif filename.endswith("$py.class"):
# Jython is easy to guess.
return filename[:-9] + ".py"
# No idea, just use the file name as-is.
return filename
def source_for_morf(morf):
"""Get the source filename for the module-or-file `morf`."""
if hasattr(morf, '__file__') and morf.__file__:
filename = morf.__file__
elif isinstance(morf, types.ModuleType):
# A module should have had .__file__, otherwise we can't use it.
# This could be a PEP-420 namespace package.
raise CoverageException("Module {} has no file".format(morf))
else:
filename = morf
filename = source_for_file(files.unicode_filename(filename))
return filename
class PythonFileReporter(FileReporter):
"""Report support for a Python file."""
def __init__(self, morf, coverage=None):
self.coverage = coverage
filename = source_for_morf(morf)
super(PythonFileReporter, self).__init__(files.canonical_filename(filename))
if hasattr(morf, '__name__'):
name = morf.__name__.replace(".", os.sep)
if os.path.basename(filename).startswith('__init__.'):
name += os.sep + "__init__"
name += ".py"
name = files.unicode_filename(name)
else:
name = files.relative_filename(filename)
self.relname = name
self._source = None
self._parser = None
self._excluded = None
def __repr__(self):
return "<PythonFileReporter {!r}>".format(self.filename)
@contract(returns='unicode')
def relative_filename(self):
return self.relname
@property
def parser(self):
"""Lazily create a :class:`PythonParser`."""
if self._parser is None:
self._parser = PythonParser(
filename=self.filename,
exclude=self.coverage._exclude_regex('exclude'),
)
self._parser.parse_source()
return self._parser
def lines(self):
"""Return the line numbers of statements in the file."""
return self.parser.statements
def excluded_lines(self):
"""Return the line numbers of statements in the file."""
return self.parser.excluded
def translate_lines(self, lines):
return self.parser.translate_lines(lines)
def translate_arcs(self, arcs):
return self.parser.translate_arcs(arcs)
@expensive
def no_branch_lines(self):
no_branch = self.parser.lines_matching(
join_regex(self.coverage.config.partial_list),
join_regex(self.coverage.config.partial_always_list)
)
return no_branch
@expensive
def arcs(self):
return self.parser.arcs()
@expensive
def exit_counts(self):
return self.parser.exit_counts()
def missing_arc_description(self, start, end, executed_arcs=None):
return self.parser.missing_arc_description(start, end, executed_arcs)
@contract(returns='unicode')
def source(self):
if self._source is None:
self._source = get_python_source(self.filename)
return self._source
def should_be_python(self):
"""Does it seem like this file should contain Python?
This is used to decide if a file reported as part of the execution of
a program was really likely to have contained Python in the first
place.
"""
# Get the file extension.
_, ext = os.path.splitext(self.filename)
# Anything named *.py* should be Python.
if ext.startswith('.py'):
return True
# A file with no extension should be Python.
if not ext:
return True
# Everything else is probably not Python.
return False
def source_token_lines(self):
return source_token_lines(self.source())

third_party/python/coverage/coverage/pytracer.py (vendored, new file, 245 lines)
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Raw data collector for coverage.py."""
import atexit
import dis
import sys
from coverage import env
# We need the YIELD_VALUE opcode below, in a comparison-friendly form.
YIELD_VALUE = dis.opmap['YIELD_VALUE']
if env.PY2:
YIELD_VALUE = chr(YIELD_VALUE)
class PyTracer(object):
"""Python implementation of the raw data tracer."""
# Because of poor implementations of trace-function-manipulating tools,
# the Python trace function must be kept very simple. In particular, there
# must be only one function ever set as the trace function, both through
# sys.settrace, and as the return value from the trace function. Put
# another way, the trace function must always return itself. It cannot
# swap in other functions, or return None to avoid tracing a particular
# frame.
#
# The trace manipulator that introduced this restriction is DecoratorTools,
# which sets a trace function, and then later restores the pre-existing one
# by calling sys.settrace with a function it found in the current frame.
#
# Systems that use DecoratorTools (or similar trace manipulations) must use
# PyTracer to get accurate results. The command-line --timid argument is
# used to force the use of this tracer.
def __init__(self):
# Attributes set from the collector:
self.data = None
self.trace_arcs = False
self.should_trace = None
self.should_trace_cache = None
self.should_start_context = None
self.warn = None
# The threading module to use, if any.
self.threading = None
self.cur_file_dict = None
self.last_line = 0 # int, but uninitialized.
self.cur_file_name = None
self.context = None
self.started_context = False
self.data_stack = []
self.last_exc_back = None
self.last_exc_firstlineno = 0
self.thread = None
self.stopped = False
self._activity = False
self.in_atexit = False
# On exit, self.in_atexit = True
atexit.register(setattr, self, 'in_atexit', True)
def __repr__(self):
return "<PyTracer at {}: {} lines in {} files>".format(
id(self),
sum(len(v) for v in self.data.values()),
len(self.data),
)
def log(self, marker, *args):
"""For hard-core logging of what this tracer is doing."""
with open("/tmp/debug_trace.txt", "a") as f:
f.write("{} {:x}.{:x}[{}] {:x} {}\n".format(
marker,
id(self),
self.thread.ident,
len(self.data_stack),
self.threading.currentThread().ident,
" ".join(map(str, args))
))
def _trace(self, frame, event, arg_unused):
"""The trace function passed to sys.settrace."""
#self.log(":", frame.f_code.co_filename, frame.f_lineno, event)
if (self.stopped and sys.gettrace() == self._trace): # pylint: disable=comparison-with-callable
# The PyTracer.stop() method has been called, possibly by another
# thread, let's deactivate ourselves now.
#self.log("X", frame.f_code.co_filename, frame.f_lineno)
sys.settrace(None)
return None
if self.last_exc_back:
if frame == self.last_exc_back:
# Someone forgot a return event.
if self.trace_arcs and self.cur_file_dict:
pair = (self.last_line, -self.last_exc_firstlineno)
self.cur_file_dict[pair] = None
self.cur_file_dict, self.cur_file_name, self.last_line, self.started_context = (
self.data_stack.pop()
)
self.last_exc_back = None
if event == 'call':
# Should we start a new context?
if self.should_start_context and self.context is None:
context_maybe = self.should_start_context(frame)
if context_maybe is not None:
self.context = context_maybe
self.started_context = True
self.switch_context(self.context)
else:
self.started_context = False
else:
self.started_context = False
# Entering a new frame. Decide if we should trace
# in this file.
self._activity = True
self.data_stack.append(
(
self.cur_file_dict,
self.cur_file_name,
self.last_line,
self.started_context,
)
)
filename = frame.f_code.co_filename
self.cur_file_name = filename
disp = self.should_trace_cache.get(filename)
if disp is None:
disp = self.should_trace(filename, frame)
self.should_trace_cache[filename] = disp
self.cur_file_dict = None
if disp.trace:
tracename = disp.source_filename
if tracename not in self.data:
self.data[tracename] = {}
self.cur_file_dict = self.data[tracename]
# The call event is really a "start frame" event, and happens for
# function calls and re-entering generators. The f_lasti field is
# -1 for calls, and a real offset for generators. Use <0 as the
# line number for calls, and the real line number for generators.
if getattr(frame, 'f_lasti', -1) < 0:
self.last_line = -frame.f_code.co_firstlineno
else:
self.last_line = frame.f_lineno
elif event == 'line':
# Record an executed line.
if self.cur_file_dict is not None:
lineno = frame.f_lineno
#if frame.f_code.co_filename != self.cur_file_name:
# self.log("*", frame.f_code.co_filename, self.cur_file_name, lineno)
if self.trace_arcs:
self.cur_file_dict[(self.last_line, lineno)] = None
else:
self.cur_file_dict[lineno] = None
self.last_line = lineno
elif event == 'return':
if self.trace_arcs and self.cur_file_dict:
# Record an arc leaving the function, but beware that a
# "return" event might just mean yielding from a generator.
# Jython seems to have an empty co_code, so just assume return.
code = frame.f_code.co_code
if (not code) or code[frame.f_lasti] != YIELD_VALUE:
first = frame.f_code.co_firstlineno
self.cur_file_dict[(self.last_line, -first)] = None
# Leaving this function, pop the filename stack.
self.cur_file_dict, self.cur_file_name, self.last_line, self.started_context = (
self.data_stack.pop()
)
# Leaving a context?
if self.started_context:
self.context = None
self.switch_context(None)
elif event == 'exception':
self.last_exc_back = frame.f_back
self.last_exc_firstlineno = frame.f_code.co_firstlineno
return self._trace
def start(self):
"""Start this Tracer.
Return a Python function suitable for use with sys.settrace().
"""
self.stopped = False
if self.threading:
if self.thread is None:
self.thread = self.threading.currentThread()
else:
if self.thread.ident != self.threading.currentThread().ident:
# Re-starting from a different thread!? Don't set the trace
# function, but we are marked as running again, so maybe it
# will be ok?
#self.log("~", "starting on different threads")
return self._trace
sys.settrace(self._trace)
return self._trace
def stop(self):
"""Stop this Tracer."""
# Get the active tracer callback before setting the stop flag to be
# able to detect if the tracer was changed prior to stopping it.
tf = sys.gettrace()
# Set the stop flag. The actual call to sys.settrace(None) will happen
# in the self._trace callback itself to make sure to call it from the
# right thread.
self.stopped = True
if self.threading and self.thread.ident != self.threading.currentThread().ident:
# Called on a different thread than started us: we can't unhook
# ourselves, but we've set the flag that we should stop, so we
# won't do any more tracing.
#self.log("~", "stopping on different threads")
return
if self.warn:
# PyPy clears the trace function before running atexit functions,
# so don't warn if we are in atexit on PyPy and the trace function
# has changed to None.
dont_warn = (env.PYPY and env.PYPYVERSION >= (5, 4) and self.in_atexit and tf is None)
if (not dont_warn) and tf != self._trace: # pylint: disable=comparison-with-callable
self.warn(
"Trace function changed, measurement is likely wrong: %r" % (tf,),
slug="trace-changed",
)
def activity(self):
"""Has there been any activity?"""
return self._activity
def reset_activity(self):
"""Reset the activity() flag."""
self._activity = False
def get_stats(self):
"""Return a dictionary of statistics, or None."""
return None
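`PyTracer._trace` above implements the standard `sys.settrace` protocol: a global trace function is invoked on each new frame's `call` event, and the function it returns receives the `line`/`return`/`exception` events for that frame. A minimal standalone illustration of that protocol (independent of coverage.py):

```python
import sys

events = []

def tracer(frame, event, arg):
    # Record (event, function name, line) for every traced event; returning
    # the tracer itself keeps local tracing enabled for the frame, just as
    # PyTracer._trace returns self._trace.
    events.append((event, frame.f_code.co_name, frame.f_lineno))
    return tracer

def demo():
    x = 1
    x += 1
    return x

sys.settrace(tracer)
demo()
sys.settrace(None)
```

After running, `events` holds one `call` event for `demo`, a `line` event per executed line, and a final `return` event, which is exactly the stream the tracer above turns into line and arc data.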

third_party/python/coverage/coverage/report.py (vendored, new file, 86 lines)

@@ -0,0 +1,86 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Reporter foundation for coverage.py."""
import sys
from coverage import env
from coverage.files import prep_patterns, FnmatchMatcher
from coverage.misc import CoverageException, NoSource, NotPython, ensure_dir_for_file, file_be_gone
def render_report(output_path, reporter, morfs):
"""Run the provided reporter, ensuring any required setup and cleanup is done.
At a high level, this function makes sure the output file is ready to be
written, writes the report to it, then closes the file and deletes any garbage
created if necessary.
"""
file_to_close = None
delete_file = False
if output_path:
if output_path == '-':
outfile = sys.stdout
else:
# Ensure that the output directory is created; done here
# because this report pre-opens the output file.
# HTMLReport does this using the Report plumbing because
# its task is more complex, being multiple files.
ensure_dir_for_file(output_path)
open_kwargs = {}
if env.PY3:
open_kwargs['encoding'] = 'utf8'
outfile = open(output_path, "w", **open_kwargs)
file_to_close = outfile
try:
return reporter.report(morfs, outfile=outfile)
except CoverageException:
delete_file = True
raise
finally:
if file_to_close:
file_to_close.close()
if delete_file:
file_be_gone(output_path)
def get_analysis_to_report(coverage, morfs):
"""Get the files to report on.
For each morf in `morfs`, if it should be reported on (based on the omit
and include configuration options), yield a pair, the `FileReporter` and
`Analysis` for the morf.
"""
file_reporters = coverage._get_file_reporters(morfs)
config = coverage.config
if config.report_include:
matcher = FnmatchMatcher(prep_patterns(config.report_include))
file_reporters = [fr for fr in file_reporters if matcher.match(fr.filename)]
if config.report_omit:
matcher = FnmatchMatcher(prep_patterns(config.report_omit))
file_reporters = [fr for fr in file_reporters if not matcher.match(fr.filename)]
if not file_reporters:
raise CoverageException("No data to report.")
for fr in sorted(file_reporters):
try:
analysis = coverage._analyze(fr)
except NoSource:
if not config.ignore_errors:
raise
except NotPython:
# Only report errors for .py files, and only if we didn't
# explicitly suppress those errors.
# NotPython is only raised by PythonFileReporter, which has a
# should_be_python() method.
if fr.should_be_python():
if config.ignore_errors:
msg = "Couldn't parse Python file '{}'".format(fr.filename)
coverage._warn(msg, slug="couldnt-parse")
else:
raise
else:
yield (fr, analysis)
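The open/write/cleanup flow of `render_report` can be sketched standalone; `render_to_path` and `fake_report` below are hypothetical stand-ins for the real reporter plumbing, keeping only the `"-"`-means-stdout convention and the delete-on-error behavior:

```python
import os
import sys
import tempfile

def render_to_path(output_path, write_report):
    # Sketch of render_report's flow: "-" writes to stdout, anything else is
    # opened as a file that is closed afterwards and deleted if the report
    # raises partway through.
    file_to_close = None
    delete_file = False
    if output_path == "-":
        outfile = sys.stdout
    else:
        outfile = open(output_path, "w")
        file_to_close = outfile
    try:
        return write_report(outfile)
    except Exception:
        delete_file = True
        raise
    finally:
        if file_to_close:
            file_to_close.close()
            if delete_file:
                os.remove(output_path)

def fake_report(outfile):
    # Stand-in for reporter.report(morfs, outfile=...), which returns a total.
    outfile.write("TOTAL 100%\n")
    return 87.5

path = os.path.join(tempfile.mkdtemp(), "report.txt")
total = render_to_path(path, fake_report)
```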

third_party/python/coverage/coverage/results.py (vendored, new file, 346 lines)

@@ -0,0 +1,346 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Results of coverage measurement."""
import collections
from coverage.backward import iitems
from coverage.debug import SimpleReprMixin
from coverage.misc import contract, CoverageException, nice_pair
class Analysis(object):
"""The results of analyzing a FileReporter."""
def __init__(self, data, file_reporter, file_mapper):
self.data = data
self.file_reporter = file_reporter
self.filename = file_mapper(self.file_reporter.filename)
self.statements = self.file_reporter.lines()
self.excluded = self.file_reporter.excluded_lines()
# Identify missing statements.
executed = self.data.lines(self.filename) or []
executed = self.file_reporter.translate_lines(executed)
self.executed = executed
self.missing = self.statements - self.executed
if self.data.has_arcs():
self._arc_possibilities = sorted(self.file_reporter.arcs())
self.exit_counts = self.file_reporter.exit_counts()
self.no_branch = self.file_reporter.no_branch_lines()
n_branches = self._total_branches()
mba = self.missing_branch_arcs()
n_partial_branches = sum(len(v) for k,v in iitems(mba) if k not in self.missing)
n_missing_branches = sum(len(v) for k,v in iitems(mba))
else:
self._arc_possibilities = []
self.exit_counts = {}
self.no_branch = set()
n_branches = n_partial_branches = n_missing_branches = 0
self.numbers = Numbers(
n_files=1,
n_statements=len(self.statements),
n_excluded=len(self.excluded),
n_missing=len(self.missing),
n_branches=n_branches,
n_partial_branches=n_partial_branches,
n_missing_branches=n_missing_branches,
)
def missing_formatted(self, branches=False):
"""The missing line numbers, formatted nicely.
Returns a string like "1-2, 5-11, 13-14".
If `branches` is true, includes the missing branch arcs also.
"""
if branches and self.has_arcs():
arcs = iitems(self.missing_branch_arcs())
else:
arcs = None
return format_lines(self.statements, self.missing, arcs=arcs)
def has_arcs(self):
"""Were arcs measured in this result?"""
return self.data.has_arcs()
@contract(returns='list(tuple(int, int))')
def arc_possibilities(self):
"""Returns a sorted list of the arcs in the code."""
return self._arc_possibilities
@contract(returns='list(tuple(int, int))')
def arcs_executed(self):
"""Returns a sorted list of the arcs actually executed in the code."""
executed = self.data.arcs(self.filename) or []
executed = self.file_reporter.translate_arcs(executed)
return sorted(executed)
@contract(returns='list(tuple(int, int))')
def arcs_missing(self):
"""Returns a sorted list of the arcs in the code not executed."""
possible = self.arc_possibilities()
executed = self.arcs_executed()
missing = (
p for p in possible
if p not in executed
and p[0] not in self.no_branch
)
return sorted(missing)
@contract(returns='list(tuple(int, int))')
def arcs_unpredicted(self):
"""Returns a sorted list of the executed arcs missing from the code."""
possible = self.arc_possibilities()
executed = self.arcs_executed()
# Exclude arcs here which connect a line to itself. They can occur
# in executed data in some cases. This is where they can cause
# trouble, and here is where it's the least burden to remove them.
# Also, generators can somehow cause arcs from "enter" to "exit", so
# make sure we have at least one positive value.
unpredicted = (
e for e in executed
if e not in possible
and e[0] != e[1]
and (e[0] > 0 or e[1] > 0)
)
return sorted(unpredicted)
def _branch_lines(self):
"""Returns a list of line numbers that have more than one exit."""
return [l1 for l1,count in iitems(self.exit_counts) if count > 1]
def _total_branches(self):
"""How many total branches are there?"""
return sum(count for count in self.exit_counts.values() if count > 1)
@contract(returns='dict(int: list(int))')
def missing_branch_arcs(self):
"""Return arcs that weren't executed from branch lines.
Returns {l1:[l2a,l2b,...], ...}
"""
missing = self.arcs_missing()
branch_lines = set(self._branch_lines())
mba = collections.defaultdict(list)
for l1, l2 in missing:
if l1 in branch_lines:
mba[l1].append(l2)
return mba
@contract(returns='dict(int: tuple(int, int))')
def branch_stats(self):
"""Get stats about branches.
Returns a dict mapping line numbers to a tuple:
(total_exits, taken_exits).
"""
missing_arcs = self.missing_branch_arcs()
stats = {}
for lnum in self._branch_lines():
exits = self.exit_counts[lnum]
try:
missing = len(missing_arcs[lnum])
except KeyError:
missing = 0
stats[lnum] = (exits, exits - missing)
return stats
class Numbers(SimpleReprMixin):
"""The numerical results of measuring coverage.
This holds the basic statistics from `Analysis`, and is used to roll
up statistics across files.
"""
# A global to determine the precision on coverage percentages, the number
# of decimal places.
_precision = 0
_near0 = 1.0 # These will change when _precision is changed.
_near100 = 99.0
def __init__(self, n_files=0, n_statements=0, n_excluded=0, n_missing=0,
n_branches=0, n_partial_branches=0, n_missing_branches=0
):
self.n_files = n_files
self.n_statements = n_statements
self.n_excluded = n_excluded
self.n_missing = n_missing
self.n_branches = n_branches
self.n_partial_branches = n_partial_branches
self.n_missing_branches = n_missing_branches
def init_args(self):
"""Return a list for __init__(*args) to recreate this object."""
return [
self.n_files, self.n_statements, self.n_excluded, self.n_missing,
self.n_branches, self.n_partial_branches, self.n_missing_branches,
]
@classmethod
def set_precision(cls, precision):
"""Set the number of decimal places used to report percentages."""
assert 0 <= precision < 10
cls._precision = precision
cls._near0 = 1.0 / 10**precision
cls._near100 = 100.0 - cls._near0
@property
def n_executed(self):
"""Returns the number of executed statements."""
return self.n_statements - self.n_missing
@property
def n_executed_branches(self):
"""Returns the number of executed branches."""
return self.n_branches - self.n_missing_branches
@property
def pc_covered(self):
"""Returns a single percentage value for coverage."""
if self.n_statements > 0:
numerator, denominator = self.ratio_covered
pc_cov = (100.0 * numerator) / denominator
else:
pc_cov = 100.0
return pc_cov
@property
def pc_covered_str(self):
"""Returns the percent covered, as a string, without a percent sign.
Note that "0" is only returned when the value is truly zero, and "100"
is only returned when the value is truly 100. Rounding can never
result in either "0" or "100".
"""
pc = self.pc_covered
if 0 < pc < self._near0:
pc = self._near0
elif self._near100 < pc < 100:
pc = self._near100
else:
pc = round(pc, self._precision)
return "%.*f" % (self._precision, pc)
@classmethod
def pc_str_width(cls):
"""How many characters wide can pc_covered_str be?"""
width = 3 # "100"
if cls._precision > 0:
width += 1 + cls._precision
return width
@property
def ratio_covered(self):
"""Return a numerator and denominator for the coverage ratio."""
numerator = self.n_executed + self.n_executed_branches
denominator = self.n_statements + self.n_branches
return numerator, denominator
def __add__(self, other):
nums = Numbers()
nums.n_files = self.n_files + other.n_files
nums.n_statements = self.n_statements + other.n_statements
nums.n_excluded = self.n_excluded + other.n_excluded
nums.n_missing = self.n_missing + other.n_missing
nums.n_branches = self.n_branches + other.n_branches
nums.n_partial_branches = (
self.n_partial_branches + other.n_partial_branches
)
nums.n_missing_branches = (
self.n_missing_branches + other.n_missing_branches
)
return nums
def __radd__(self, other):
# Implementing 0+Numbers allows us to sum() a list of Numbers.
if other == 0:
return self
return NotImplemented
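`Numbers.pc_covered_str` guarantees that "0" and "100" appear only for truly zero or truly complete coverage; anything else is pinned just inside the range before rounding. A standalone sketch of that clamping (`pc_str` is a hypothetical mirror of the property, not part of the vendored API):

```python
def pc_str(pc, precision=0):
    # Mirror of pc_covered_str's clamping: values that would round to 0 or
    # 100 are pinned to _near0 / _near100 instead.
    near0 = 1.0 / 10 ** precision
    near100 = 100.0 - near0
    if 0 < pc < near0:
        pc = near0
    elif near100 < pc < 100:
        pc = near100
    else:
        pc = round(pc, precision)
    return "%.*f" % (precision, pc)

print(pc_str(99.95))   # "99": never rounds up to "100"
print(pc_str(0.04))    # "1": never rounds down to "0"
print(pc_str(100.0))   # "100"
```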
def _line_ranges(statements, lines):
"""Produce a list of ranges for `format_lines`."""
statements = sorted(statements)
lines = sorted(lines)
pairs = []
start = None
lidx = 0
for stmt in statements:
if lidx >= len(lines):
break
if stmt == lines[lidx]:
lidx += 1
if not start:
start = stmt
end = stmt
elif start:
pairs.append((start, end))
start = None
if start:
pairs.append((start, end))
return pairs
def format_lines(statements, lines, arcs=None):
"""Nicely format a list of line numbers.
Format a list of line numbers for printing by coalescing groups of lines as
long as the lines represent consecutive statements. This will coalesce
even if there are gaps between statements.
For example, if `statements` is [1,2,3,4,5,10,11,12,13,14] and
`lines` is [1,2,5,10,11,13,14] then the result will be "1-2, 5-11, 13-14".
Both `lines` and `statements` can be any iterable. All of the elements of
`lines` must be in `statements`, and all of the values must be positive
integers.
If `arcs` is provided, they are (start,[end,end,end]) pairs that will be
included in the output as long as start isn't in `lines`.
"""
line_items = [(pair[0], nice_pair(pair)) for pair in _line_ranges(statements, lines)]
if arcs:
line_exits = sorted(arcs)
for line, exits in line_exits:
for ex in sorted(exits):
if line not in lines:
dest = (ex if ex > 0 else "exit")
line_items.append((line, "%d->%s" % (line, dest)))
ret = ', '.join(t[-1] for t in sorted(line_items))
return ret
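The docstring's example can be checked with a standalone copy of the coalescing walk; `fmt` below is a simplified stand-in for `format_lines` plus `nice_pair`, without the arcs handling:

```python
def line_ranges(statements, lines):
    # Same pairing walk as _line_ranges above: runs of consecutive
    # *statements* (not consecutive integers) are coalesced into
    # (start, end) ranges.
    statements, lines = sorted(statements), sorted(lines)
    pairs, start, lidx = [], None, 0
    for stmt in statements:
        if lidx >= len(lines):
            break
        if stmt == lines[lidx]:
            lidx += 1
            if not start:
                start = stmt
            end = stmt
        elif start:
            pairs.append((start, end))
            start = None
    if start:
        pairs.append((start, end))
    return pairs

def fmt(statements, lines):
    return ", ".join(
        "%d" % s if s == e else "%d-%d" % (s, e)
        for s, e in line_ranges(statements, lines)
    )

print(fmt([1, 2, 3, 4, 5, 10, 11, 12, 13, 14], [1, 2, 5, 10, 11, 13, 14]))
# -> 1-2, 5-11, 13-14
```

Note how 5 and 10 merge into "5-11": the gap 6-9 contains no statements, so the run still counts as consecutive.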
@contract(total='number', fail_under='number', precision=int, returns=bool)
def should_fail_under(total, fail_under, precision):
"""Determine if a total should fail due to fail-under.
`total` is a float, the coverage measurement total. `fail_under` is the
fail_under setting to compare with. `precision` is the number of digits
to consider after the decimal point.
Returns True if the total should fail.
"""
# We can never achieve higher than 100% coverage, or less than zero.
if not (0 <= fail_under <= 100.0):
msg = "fail_under={} is invalid. Must be between 0 and 100.".format(fail_under)
raise CoverageException(msg)
# Special case for fail_under=100, it must really be 100.
if fail_under == 100.0 and total != 100.0:
return True
return round(total, precision) < fail_under
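The two rules of `should_fail_under` — 100 must be exact, everything else is compared after rounding — can be exercised with a standalone sketch (`fails_under` is a hypothetical mirror, without the contract decorator):

```python
def fails_under(total, fail_under, precision=0):
    # fail_under=100 demands exactly 100; any other threshold is compared
    # against the total rounded to `precision` digits.
    if not (0 <= fail_under <= 100.0):
        raise ValueError("fail_under must be between 0 and 100")
    if fail_under == 100.0 and total != 100.0:
        return True
    return round(total, precision) < fail_under

print(fails_under(69.4, 70))    # True: rounds to 69, below the bar
print(fails_under(69.5, 70))    # False: rounds to 70, meets the bar
print(fails_under(99.99, 100))  # True: 100 must be exact
```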

third_party/python/coverage/coverage/sqldata.py (vendored, new file, 1106 lines)

Diff not shown because of its size.

third_party/python/coverage/coverage/summary.py (vendored, new file, 155 lines)

@@ -0,0 +1,155 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Summary reporting"""
import sys
from coverage import env
from coverage.report import get_analysis_to_report
from coverage.results import Numbers
from coverage.misc import NotPython, CoverageException, output_encoding
class SummaryReporter(object):
"""A reporter for writing the summary report."""
def __init__(self, coverage):
self.coverage = coverage
self.config = self.coverage.config
self.branches = coverage.get_data().has_arcs()
self.outfile = None
self.fr_analysis = []
self.skipped_count = 0
self.empty_count = 0
self.total = Numbers()
self.fmt_err = u"%s %s: %s"
def writeout(self, line):
"""Write a line to the output, adding a newline."""
if env.PY2:
line = line.encode(output_encoding())
self.outfile.write(line.rstrip())
self.outfile.write("\n")
def report(self, morfs, outfile=None):
"""Writes a report summarizing coverage statistics per module.
`outfile` is a file object to write the summary to. It must be opened
for native strings (bytes on Python 2, Unicode on Python 3).
"""
self.outfile = outfile or sys.stdout
self.coverage.get_data().set_query_contexts(self.config.report_contexts)
for fr, analysis in get_analysis_to_report(self.coverage, morfs):
self.report_one_file(fr, analysis)
# Prepare the formatting strings, header, and column sorting.
max_name = max([len(fr.relative_filename()) for (fr, analysis) in self.fr_analysis] + [5])
fmt_name = u"%%- %ds " % max_name
fmt_skip_covered = u"\n%s file%s skipped due to complete coverage."
fmt_skip_empty = u"\n%s empty file%s skipped."
header = (fmt_name % "Name") + u" Stmts Miss"
fmt_coverage = fmt_name + u"%6d %6d"
if self.branches:
header += u" Branch BrPart"
fmt_coverage += u" %6d %6d"
width100 = Numbers.pc_str_width()
header += u"%*s" % (width100+4, "Cover")
fmt_coverage += u"%%%ds%%%%" % (width100+3,)
if self.config.show_missing:
header += u" Missing"
fmt_coverage += u" %s"
rule = u"-" * len(header)
column_order = dict(name=0, stmts=1, miss=2, cover=-1)
if self.branches:
column_order.update(dict(branch=3, brpart=4))
# Write the header
self.writeout(header)
self.writeout(rule)
# `lines` is a list of pairs, (line text, line values). The line text
# is a string that will be printed, and line values is a tuple of
# sortable values.
lines = []
for (fr, analysis) in self.fr_analysis:
try:
nums = analysis.numbers
args = (fr.relative_filename(), nums.n_statements, nums.n_missing)
if self.branches:
args += (nums.n_branches, nums.n_partial_branches)
args += (nums.pc_covered_str,)
if self.config.show_missing:
args += (analysis.missing_formatted(branches=True),)
text = fmt_coverage % args
# Add numeric percent coverage so that sorting makes sense.
args += (nums.pc_covered,)
lines.append((text, args))
except Exception:
report_it = not self.config.ignore_errors
if report_it:
typ, msg = sys.exc_info()[:2]
# NotPython is only raised by PythonFileReporter, which has a
# should_be_python() method.
if typ is NotPython and not fr.should_be_python():
report_it = False
if report_it:
self.writeout(self.fmt_err % (fr.relative_filename(), typ.__name__, msg))
# Sort the lines and write them out.
if getattr(self.config, 'sort', None):
position = column_order.get(self.config.sort.lower())
if position is None:
raise CoverageException("Invalid sorting option: {!r}".format(self.config.sort))
lines.sort(key=lambda l: (l[1][position], l[0]))
for line in lines:
self.writeout(line[0])
# Write a TOTAL line if we had more than one file.
if self.total.n_files > 1:
self.writeout(rule)
args = ("TOTAL", self.total.n_statements, self.total.n_missing)
if self.branches:
args += (self.total.n_branches, self.total.n_partial_branches)
args += (self.total.pc_covered_str,)
if self.config.show_missing:
args += ("",)
self.writeout(fmt_coverage % args)
# Write other final lines.
if not self.total.n_files and not self.skipped_count:
raise CoverageException("No data to report.")
if self.config.skip_covered and self.skipped_count:
self.writeout(
fmt_skip_covered % (self.skipped_count, 's' if self.skipped_count > 1 else '')
)
if self.config.skip_empty and self.empty_count:
self.writeout(
fmt_skip_empty % (self.empty_count, 's' if self.empty_count > 1 else '')
)
return self.total.n_statements and self.total.pc_covered
def report_one_file(self, fr, analysis):
"""Report on just one file, the callback from report()."""
nums = analysis.numbers
self.total += nums
no_missing_lines = (nums.n_missing == 0)
no_missing_branches = (nums.n_partial_branches == 0)
if self.config.skip_covered and no_missing_lines and no_missing_branches:
# Don't report on 100% files.
self.skipped_count += 1
elif self.config.skip_empty and nums.n_statements == 0:
# Don't report on empty files.
self.empty_count += 1
else:
self.fr_analysis.append((fr, analysis))

third_party/python/coverage/coverage/templite.py (vendored, new file, 302 lines)

@@ -0,0 +1,302 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""A simple Python template renderer, for a nano-subset of Django syntax.
For a detailed discussion of this code, see this chapter from 500 Lines:
http://aosabook.org/en/500L/a-template-engine.html
"""
# Coincidentally named the same as http://code.activestate.com/recipes/496702/
import re
from coverage import env
class TempliteSyntaxError(ValueError):
"""Raised when a template has a syntax error."""
pass
class TempliteValueError(ValueError):
"""Raised when an expression won't evaluate in a template."""
pass
class CodeBuilder(object):
"""Build source code conveniently."""
def __init__(self, indent=0):
self.code = []
self.indent_level = indent
def __str__(self):
return "".join(str(c) for c in self.code)
def add_line(self, line):
"""Add a line of source to the code.
Indentation and newline will be added for you, don't provide them.
"""
self.code.extend([" " * self.indent_level, line, "\n"])
def add_section(self):
"""Add a section, a sub-CodeBuilder."""
section = CodeBuilder(self.indent_level)
self.code.append(section)
return section
INDENT_STEP = 4 # PEP8 says so!
def indent(self):
"""Increase the current indent for following lines."""
self.indent_level += self.INDENT_STEP
def dedent(self):
"""Decrease the current indent for following lines."""
self.indent_level -= self.INDENT_STEP
def get_globals(self):
"""Execute the code, and return a dict of globals it defines."""
# A check that the caller really finished all the blocks they started.
assert self.indent_level == 0
# Get the Python source as a single string.
python_source = str(self)
# Execute the source, defining globals, and return them.
global_namespace = {}
exec(python_source, global_namespace)
return global_namespace
class Templite(object):
"""A simple template renderer, for a nano-subset of Django syntax.
Supported constructs are extended variable access::
{{var.modifier.modifier|filter|filter}}
loops::
{% for var in list %}...{% endfor %}
and ifs::
{% if var %}...{% endif %}
Comments are within curly-hash markers::
{# This will be ignored #}
Lines between `{% joined %}` and `{% endjoined %}` will have lines stripped
and joined. Be careful, this could join words together!
Any of these constructs can have a hyphen at the end (`-}}`, `-%}`, `-#}`),
which will collapse the whitespace following the tag.
Construct a Templite with the template text, then use `render` against a
dictionary context to create a finished string::
templite = Templite('''
<h1>Hello {{name|upper}}!</h1>
{% for topic in topics %}
<p>You are interested in {{topic}}.</p>
{% endfor %}
''',
{'upper': str.upper},
)
text = templite.render({
'name': "Ned",
'topics': ['Python', 'Geometry', 'Juggling'],
})
"""
def __init__(self, text, *contexts):
"""Construct a Templite with the given `text`.
`contexts` are dictionaries of values to use for future renderings.
These are good for filters and global values.
"""
self.context = {}
for context in contexts:
self.context.update(context)
self.all_vars = set()
self.loop_vars = set()
# We construct a function in source form, then compile it and hold onto
# it, and execute it to render the template.
code = CodeBuilder()
code.add_line("def render_function(context, do_dots):")
code.indent()
vars_code = code.add_section()
code.add_line("result = []")
code.add_line("append_result = result.append")
code.add_line("extend_result = result.extend")
if env.PY2:
code.add_line("to_str = unicode")
else:
code.add_line("to_str = str")
buffered = []
def flush_output():
"""Force `buffered` to the code builder."""
if len(buffered) == 1:
code.add_line("append_result(%s)" % buffered[0])
elif len(buffered) > 1:
code.add_line("extend_result([%s])" % ", ".join(buffered))
del buffered[:]
ops_stack = []
# Split the text to form a list of tokens.
tokens = re.split(r"(?s)({{.*?}}|{%.*?%}|{#.*?#})", text)
squash = in_joined = False
for token in tokens:
if token.startswith('{'):
start, end = 2, -2
squash = (token[-3] == '-')
if squash:
end = -3
if token.startswith('{#'):
# Comment: ignore it and move on.
continue
elif token.startswith('{{'):
# An expression to evaluate.
expr = self._expr_code(token[start:end].strip())
buffered.append("to_str(%s)" % expr)
else:
# token.startswith('{%')
# Action tag: split into words and parse further.
flush_output()
words = token[start:end].strip().split()
if words[0] == 'if':
# An if statement: evaluate the expression to determine if.
if len(words) != 2:
self._syntax_error("Don't understand if", token)
ops_stack.append('if')
code.add_line("if %s:" % self._expr_code(words[1]))
code.indent()
elif words[0] == 'for':
# A loop: iterate over expression result.
if len(words) != 4 or words[2] != 'in':
self._syntax_error("Don't understand for", token)
ops_stack.append('for')
self._variable(words[1], self.loop_vars)
code.add_line(
"for c_%s in %s:" % (
words[1],
self._expr_code(words[3])
)
)
code.indent()
elif words[0] == 'joined':
ops_stack.append('joined')
in_joined = True
elif words[0].startswith('end'):
# Endsomething. Pop the ops stack.
if len(words) != 1:
self._syntax_error("Don't understand end", token)
end_what = words[0][3:]
if not ops_stack:
self._syntax_error("Too many ends", token)
start_what = ops_stack.pop()
if start_what != end_what:
self._syntax_error("Mismatched end tag", end_what)
if end_what == 'joined':
in_joined = False
else:
code.dedent()
else:
self._syntax_error("Don't understand tag", words[0])
else:
# Literal content. If it isn't empty, output it.
if in_joined:
token = re.sub(r"\s*\n\s*", "", token.strip())
elif squash:
token = token.lstrip()
if token:
buffered.append(repr(token))
if ops_stack:
self._syntax_error("Unmatched action tag", ops_stack[-1])
flush_output()
for var_name in self.all_vars - self.loop_vars:
vars_code.add_line("c_%s = context[%r]" % (var_name, var_name))
code.add_line('return "".join(result)')
code.dedent()
self._render_function = code.get_globals()['render_function']
def _expr_code(self, expr):
"""Generate a Python expression for `expr`."""
if "|" in expr:
pipes = expr.split("|")
code = self._expr_code(pipes[0])
for func in pipes[1:]:
self._variable(func, self.all_vars)
code = "c_%s(%s)" % (func, code)
elif "." in expr:
dots = expr.split(".")
code = self._expr_code(dots[0])
args = ", ".join(repr(d) for d in dots[1:])
code = "do_dots(%s, %s)" % (code, args)
else:
self._variable(expr, self.all_vars)
code = "c_%s" % expr
return code
def _syntax_error(self, msg, thing):
"""Raise a syntax error using `msg`, and showing `thing`."""
raise TempliteSyntaxError("%s: %r" % (msg, thing))
def _variable(self, name, vars_set):
"""Track that `name` is used as a variable.
Adds the name to `vars_set`, a set of variable names.
Raises a syntax error if `name` is not a valid name.
"""
if not re.match(r"[_a-zA-Z][_a-zA-Z0-9]*$", name):
self._syntax_error("Not a valid name", name)
vars_set.add(name)
def render(self, context=None):
"""Render this template by applying it to `context`.
`context` is a dictionary of values to use in this rendering.
"""
# Make the complete context we'll use.
render_context = dict(self.context)
if context:
render_context.update(context)
return self._render_function(render_context, self._do_dots)
def _do_dots(self, value, *dots):
"""Evaluate dotted expressions at run-time."""
for dot in dots:
try:
value = getattr(value, dot)
except AttributeError:
try:
value = value[dot]
except (TypeError, KeyError):
raise TempliteValueError(
"Couldn't evaluate %r.%s" % (value, dot)
)
if callable(value):
value = value()
return value
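The lexing step in `Templite.__init__` relies on a detail of `re.split`: because the pattern is wrapped in a capturing group, the tag delimiters themselves are kept in the result, interleaved with the literal text. A quick standalone check of that tokenizer regex:

```python
import re

# Same pattern Templite uses: expressions, action tags, and comments,
# with (?s) so tags may span lines.
TOKEN_RE = r"(?s)({{.*?}}|{%.*?%}|{#.*?#})"

tokens = re.split(TOKEN_RE, "Hello {{name}}!{# hi #}{% if x %}yes{% endif %}")
print(tokens)
```

Every element starting with `{` is a tag to be parsed; everything else is literal content, which is why the main loop can dispatch on `token.startswith('{')`.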

third_party/python/coverage/coverage/tomlconfig.py (vendored, new file, 164 lines)

@@ -0,0 +1,164 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""TOML configuration support for coverage.py"""
import io
import os
import re
from coverage import env
from coverage.backward import configparser, path_types
from coverage.misc import CoverageException, substitute_variables
class TomlDecodeError(Exception):
"""An exception class that exists even when toml isn't installed."""
pass
class TomlConfigParser:
"""TOML file reading with the interface of HandyConfigParser."""
# This class has the same interface as config.HandyConfigParser, no
# need for docstrings.
# pylint: disable=missing-function-docstring
def __init__(self, our_file):
self.our_file = our_file
self.data = None
def read(self, filenames):
from coverage.optional import toml
# RawConfigParser takes a filename or list of filenames, but we only
# ever call this with a single filename.
assert isinstance(filenames, path_types)
filename = filenames
if env.PYVERSION >= (3, 6):
filename = os.fspath(filename)
try:
with io.open(filename, encoding='utf-8') as fp:
toml_text = fp.read()
except IOError:
return []
if toml:
toml_text = substitute_variables(toml_text, os.environ)
try:
self.data = toml.loads(toml_text)
except toml.TomlDecodeError as err:
raise TomlDecodeError(*err.args)
return [filename]
else:
has_toml = re.search(r"^\[tool\.coverage\.", toml_text, flags=re.MULTILINE)
if self.our_file or has_toml:
# Looks like they meant to read TOML, but we can't read it.
msg = "Can't read {!r} without TOML support. Install with [toml] extra"
raise CoverageException(msg.format(filename))
return []
def _get_section(self, section):
"""Get a section from the data.
Arguments:
section (str): A section name, which can be dotted.
Returns:
name (str): the actual name of the section that was found, if any,
or None.
data (str): the dict of data in the section, or None if not found.
"""
prefixes = ["tool.coverage."]
if self.our_file:
prefixes.append("")
for prefix in prefixes:
real_section = prefix + section
parts = real_section.split(".")
try:
data = self.data[parts[0]]
for part in parts[1:]:
data = data[part]
except KeyError:
continue
break
else:
return None, None
return real_section, data
def _get(self, section, option):
"""Like .get, but returns the real section name and the value."""
name, data = self._get_section(section)
if data is None:
raise configparser.NoSectionError(section)
try:
return name, data[option]
except KeyError:
raise configparser.NoOptionError(option, name)
def has_option(self, section, option):
_, data = self._get_section(section)
if data is None:
return False
return option in data
def has_section(self, section):
name, _ = self._get_section(section)
return name
def options(self, section):
_, data = self._get_section(section)
if data is None:
raise configparser.NoSectionError(section)
return list(data.keys())
def get_section(self, section):
_, data = self._get_section(section)
return data
def get(self, section, option):
_, value = self._get(section, option)
return value
def _check_type(self, section, option, value, type_, type_desc):
if not isinstance(value, type_):
raise ValueError(
'Option {!r} in section {!r} is not {}: {!r}'
.format(option, section, type_desc, value)
)
def getboolean(self, section, option):
name, value = self._get(section, option)
self._check_type(name, option, value, bool, "a boolean")
return value
def getlist(self, section, option):
name, values = self._get(section, option)
self._check_type(name, option, values, list, "a list")
return values
def getregexlist(self, section, option):
name, values = self._get(section, option)
self._check_type(name, option, values, list, "a list")
for value in values:
value = value.strip()
try:
re.compile(value)
except re.error as e:
raise CoverageException(
"Invalid [%s].%s value %r: %s" % (name, option, value, e)
)
return values
def getint(self, section, option):
name, value = self._get(section, option)
self._check_type(name, option, value, int, "an integer")
return value
def getfloat(self, section, option):
name, value = self._get(section, option)
if isinstance(value, int):
value = float(value)
self._check_type(name, option, value, float, "a float")
return value
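The dotted-section lookup in `_get_section` above (try each prefix, then walk the dotted path into the parsed TOML data, falling back to the next prefix on a miss) can be sketched in isolation. `walk_section` below is a hypothetical standalone helper written for illustration, not part of coverage.py:

```python
def walk_section(data, section, prefixes=("tool.coverage.", "")):
    """Resolve a dotted section name against nested dicts, trying each prefix.

    Returns (real_section_name, section_dict), or (None, None) if no
    prefix+section path exists in `data` -- mirroring _get_section().
    """
    for prefix in prefixes:
        real_section = prefix + section
        node = data
        try:
            for part in real_section.split("."):
                node = node[part]
        except KeyError:
            # This prefix didn't pan out; try the next one.
            continue
        return real_section, node
    return None, None


# Parsed pyproject.toml-style data: [tool.coverage.run] branch = true
data = {"tool": {"coverage": {"run": {"branch": True}}}}
print(walk_section(data, "run"))      # ('tool.coverage.run', {'branch': True})
print(walk_section(data, "missing"))  # (None, None)
```

The empty-string prefix is only tried when reading coverage's own config file (`self.our_file` in the real class), which is why `prefixes` is a parameter here.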

third_party/python/coverage/coverage/version.py (vendored, new file, 33 lines)
@@ -0,0 +1,33 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""The version and URL for coverage.py"""
# This file is exec'ed in setup.py, don't import anything!
# Same semantics as sys.version_info.
version_info = (5, 1, 0, 'final', 0)
def _make_version(major, minor, micro, releaselevel, serial):
"""Create a readable version string from version_info tuple components."""
assert releaselevel in ['alpha', 'beta', 'candidate', 'final']
version = "%d.%d" % (major, minor)
if micro:
version += ".%d" % (micro,)
if releaselevel != 'final':
short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[releaselevel]
version += "%s%d" % (short, serial)
return version
def _make_url(major, minor, micro, releaselevel, serial):
"""Make the URL people should start at for this version of coverage.py."""
url = "https://coverage.readthedocs.io"
if releaselevel != 'final':
# For pre-releases, use a version-specific URL.
url += "/en/coverage-" + _make_version(major, minor, micro, releaselevel, serial)
return url
__version__ = _make_version(*version_info)
__url__ = _make_url(*version_info)
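To make the version-string semantics concrete, here is `_make_version` reproduced verbatim as a standalone function with a couple of worked examples (the tuple layout matches `sys.version_info`):

```python
def make_version(major, minor, micro, releaselevel, serial):
    """Create a readable version string, as coverage/version.py does."""
    assert releaselevel in ['alpha', 'beta', 'candidate', 'final']
    version = "%d.%d" % (major, minor)
    if micro:
        # The micro part is omitted entirely when it is 0 ("5.1", not "5.1.0").
        version += ".%d" % (micro,)
    if releaselevel != 'final':
        short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[releaselevel]
        version += "%s%d" % (short, serial)
    return version


print(make_version(5, 1, 0, 'final', 0))  # 5.1   (the vendored release)
print(make_version(5, 1, 2, 'beta', 3))   # 5.1.2b3
```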

third_party/python/coverage/coverage/xmlreport.py (vendored, new file, 230 lines)
@@ -0,0 +1,230 @@
# coding: utf-8
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""XML reporting for coverage.py"""
import os
import os.path
import sys
import time
import xml.dom.minidom
from coverage import env
from coverage import __url__, __version__, files
from coverage.backward import iitems
from coverage.misc import isolate_module
from coverage.report import get_analysis_to_report
os = isolate_module(os)
DTD_URL = 'https://raw.githubusercontent.com/cobertura/web/master/htdocs/xml/coverage-04.dtd'
def rate(hit, num):
"""Return the fraction of `hit`/`num`, as a string."""
if num == 0:
return "1"
else:
return "%.4g" % (float(hit) / num)
class XmlReporter(object):
"""A reporter for writing Cobertura-style XML coverage results."""
def __init__(self, coverage):
self.coverage = coverage
self.config = self.coverage.config
self.source_paths = set()
if self.config.source:
for src in self.config.source:
if os.path.exists(src):
if not self.config.relative_files:
src = files.canonical_filename(src)
self.source_paths.add(src)
self.packages = {}
self.xml_out = None
def report(self, morfs, outfile=None):
"""Generate a Cobertura-compatible XML report for `morfs`.
`morfs` is a list of modules or file names.
`outfile` is a file object to write the XML to.
"""
# Initial setup.
outfile = outfile or sys.stdout
has_arcs = self.coverage.get_data().has_arcs()
# Create the DOM that will store the data.
impl = xml.dom.minidom.getDOMImplementation()
self.xml_out = impl.createDocument(None, "coverage", None)
# Write header stuff.
xcoverage = self.xml_out.documentElement
xcoverage.setAttribute("version", __version__)
xcoverage.setAttribute("timestamp", str(int(time.time()*1000)))
xcoverage.appendChild(self.xml_out.createComment(
" Generated by coverage.py: %s " % __url__
))
xcoverage.appendChild(self.xml_out.createComment(" Based on %s " % DTD_URL))
# Call xml_file for each file in the data.
for fr, analysis in get_analysis_to_report(self.coverage, morfs):
self.xml_file(fr, analysis, has_arcs)
xsources = self.xml_out.createElement("sources")
xcoverage.appendChild(xsources)
# Populate the XML DOM with the source info.
for path in sorted(self.source_paths):
xsource = self.xml_out.createElement("source")
xsources.appendChild(xsource)
txt = self.xml_out.createTextNode(path)
xsource.appendChild(txt)
lnum_tot, lhits_tot = 0, 0
bnum_tot, bhits_tot = 0, 0
xpackages = self.xml_out.createElement("packages")
xcoverage.appendChild(xpackages)
# Populate the XML DOM with the package info.
for pkg_name, pkg_data in sorted(iitems(self.packages)):
class_elts, lhits, lnum, bhits, bnum = pkg_data
xpackage = self.xml_out.createElement("package")
xpackages.appendChild(xpackage)
xclasses = self.xml_out.createElement("classes")
xpackage.appendChild(xclasses)
for _, class_elt in sorted(iitems(class_elts)):
xclasses.appendChild(class_elt)
xpackage.setAttribute("name", pkg_name.replace(os.sep, '.'))
xpackage.setAttribute("line-rate", rate(lhits, lnum))
if has_arcs:
branch_rate = rate(bhits, bnum)
else:
branch_rate = "0"
xpackage.setAttribute("branch-rate", branch_rate)
xpackage.setAttribute("complexity", "0")
lnum_tot += lnum
lhits_tot += lhits
bnum_tot += bnum
bhits_tot += bhits
xcoverage.setAttribute("lines-valid", str(lnum_tot))
xcoverage.setAttribute("lines-covered", str(lhits_tot))
xcoverage.setAttribute("line-rate", rate(lhits_tot, lnum_tot))
if has_arcs:
xcoverage.setAttribute("branches-valid", str(bnum_tot))
xcoverage.setAttribute("branches-covered", str(bhits_tot))
xcoverage.setAttribute("branch-rate", rate(bhits_tot, bnum_tot))
else:
xcoverage.setAttribute("branches-covered", "0")
xcoverage.setAttribute("branches-valid", "0")
xcoverage.setAttribute("branch-rate", "0")
xcoverage.setAttribute("complexity", "0")
# Write the output file.
outfile.write(serialize_xml(self.xml_out))
# Return the total percentage.
denom = lnum_tot + bnum_tot
if denom == 0:
pct = 0.0
else:
pct = 100.0 * (lhits_tot + bhits_tot) / denom
return pct
def xml_file(self, fr, analysis, has_arcs):
"""Add to the XML report for a single file."""
# Create the 'lines' and 'package' XML elements, which
# are populated later. Note that a package == a directory.
filename = fr.filename.replace("\\", "/")
for source_path in self.source_paths:
source_path = files.canonical_filename(source_path)
if filename.startswith(source_path.replace("\\", "/") + "/"):
rel_name = filename[len(source_path)+1:]
break
else:
rel_name = fr.relative_filename()
self.source_paths.add(fr.filename[:-len(rel_name)].rstrip(r"\/"))
dirname = os.path.dirname(rel_name) or u"."
dirname = "/".join(dirname.split("/")[:self.config.xml_package_depth])
package_name = dirname.replace("/", ".")
package = self.packages.setdefault(package_name, [{}, 0, 0, 0, 0])
xclass = self.xml_out.createElement("class")
xclass.appendChild(self.xml_out.createElement("methods"))
xlines = self.xml_out.createElement("lines")
xclass.appendChild(xlines)
xclass.setAttribute("name", os.path.relpath(rel_name, dirname))
xclass.setAttribute("filename", rel_name.replace("\\", "/"))
xclass.setAttribute("complexity", "0")
branch_stats = analysis.branch_stats()
missing_branch_arcs = analysis.missing_branch_arcs()
# For each statement, create an XML 'line' element.
for line in sorted(analysis.statements):
xline = self.xml_out.createElement("line")
xline.setAttribute("number", str(line))
# Q: can we get info about the number of times a statement is
# executed? If so, that should be recorded here.
xline.setAttribute("hits", str(int(line not in analysis.missing)))
if has_arcs:
if line in branch_stats:
total, taken = branch_stats[line]
xline.setAttribute("branch", "true")
xline.setAttribute(
"condition-coverage",
"%d%% (%d/%d)" % (100*taken//total, taken, total)
)
if line in missing_branch_arcs:
annlines = ["exit" if b < 0 else str(b) for b in missing_branch_arcs[line]]
xline.setAttribute("missing-branches", ",".join(annlines))
xlines.appendChild(xline)
class_lines = len(analysis.statements)
class_hits = class_lines - len(analysis.missing)
if has_arcs:
class_branches = sum(t for t, k in branch_stats.values())
missing_branches = sum(t - k for t, k in branch_stats.values())
class_br_hits = class_branches - missing_branches
else:
class_branches = 0.0
class_br_hits = 0.0
# Finalize the statistics that are collected in the XML DOM.
xclass.setAttribute("line-rate", rate(class_hits, class_lines))
if has_arcs:
branch_rate = rate(class_br_hits, class_branches)
else:
branch_rate = "0"
xclass.setAttribute("branch-rate", branch_rate)
package[0][rel_name] = xclass
package[1] += class_hits
package[2] += class_lines
package[3] += class_br_hits
package[4] += class_branches
def serialize_xml(dom):
"""Serialize a minidom node to XML."""
out = dom.toprettyxml()
if env.PY2:
out = out.encode("utf8")
return out
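Two small formatting rules carry most of the numeric content of this report: the `rate` helper (a fraction rendered with `%.4g`, with the 0/0 case defined as "1") and the `condition-coverage` attribute built in `xml_file`. Both are reproduced below as a standalone sketch:

```python
def rate(hit, num):
    """Fraction of hit/num as a string; full coverage by definition when num == 0."""
    if num == 0:
        return "1"
    return "%.4g" % (float(hit) / num)


def condition_coverage(taken, total):
    # Same format string as the "condition-coverage" line attribute above:
    # integer percentage (floor division), then the raw taken/total counts.
    return "%d%% (%d/%d)" % (100 * taken // total, taken, total)


print(rate(3, 4))                # 0.75
print(rate(0, 0))                # 1
print(condition_coverage(1, 2))  # 50% (1/2)
```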

third_party/python/coverage/setup.cfg (vendored, new file, 19 lines)
@@ -0,0 +1,19 @@
[tool:pytest]
addopts = -q -n3 --strict --no-flaky-report -rfe --failed-first
markers =
expensive: too slow to run during "make smoke"
filterwarnings =
ignore:dns.hash module will be removed:DeprecationWarning
ignore:Using or importing the ABCs:DeprecationWarning
[pep8]
ignore = E265,E266,E123,E133,E226,E241,E242,E301,E401
max-line-length = 100
[metadata]
license_file = LICENSE.txt
[egg_info]
tag_build =
tag_date = 0

third_party/python/coverage/setup.py (vendored, new file, 217 lines)
@@ -0,0 +1,217 @@
# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt
"""Code coverage measurement for Python"""
# Distutils setup for coverage.py
# This file is used unchanged under all versions of Python, 2.x and 3.x.
import os
import sys
# Setuptools has to be imported before distutils, or things break.
from setuptools import setup
from distutils.core import Extension # pylint: disable=wrong-import-order
from distutils.command.build_ext import build_ext # pylint: disable=wrong-import-order
from distutils import errors # pylint: disable=wrong-import-order
# Get or massage our metadata. We exec coverage/version.py so we can avoid
# importing the product code into setup.py.
classifiers = """\
Environment :: Console
Intended Audience :: Developers
License :: OSI Approved :: Apache Software License
Operating System :: OS Independent
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.5
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: Implementation :: PyPy
Topic :: Software Development :: Quality Assurance
Topic :: Software Development :: Testing
"""
cov_ver_py = os.path.join(os.path.split(__file__)[0], "coverage/version.py")
with open(cov_ver_py) as version_file:
# __doc__ will be overwritten by version.py.
doc = __doc__
# Keep pylint happy.
__version__ = __url__ = version_info = ""
# Execute the code in version.py.
exec(compile(version_file.read(), cov_ver_py, 'exec'))
with open("README.rst") as readme:
long_description = readme.read().replace("https://coverage.readthedocs.io", __url__)
with open("CONTRIBUTORS.txt", "rb") as contributors:
paras = contributors.read().split(b"\n\n")
num_others = len(paras[-1].splitlines())
num_others += 1 # Count Gareth Rees, who is mentioned in the top paragraph.
classifier_list = classifiers.splitlines()
if version_info[3] == 'alpha':
devstat = "3 - Alpha"
elif version_info[3] in ['beta', 'candidate']:
devstat = "4 - Beta"
else:
assert version_info[3] == 'final'
devstat = "5 - Production/Stable"
classifier_list.append("Development Status :: " + devstat)
# Create the keyword arguments for setup()
setup_args = dict(
name='coverage',
version=__version__,
packages=[
'coverage',
],
package_data={
'coverage': [
'htmlfiles/*.*',
'fullcoverage/*.*',
]
},
entry_points={
# Install a script as "coverage", and as "coverage[23]", and as
# "coverage-2.7" (or whatever).
'console_scripts': [
'coverage = coverage.cmdline:main',
'coverage%d = coverage.cmdline:main' % sys.version_info[:1],
'coverage-%d.%d = coverage.cmdline:main' % sys.version_info[:2],
],
},
extras_require={
# Enable pyproject.toml support.
'toml': ['toml'],
},
# We need to get HTML assets from our htmlfiles directory.
zip_safe=False,
author='Ned Batchelder and {} others'.format(num_others),
author_email='ned@nedbatchelder.com',
description=doc,
long_description=long_description,
long_description_content_type='text/x-rst',
keywords='code coverage testing',
license='Apache 2.0',
classifiers=classifier_list,
url="https://github.com/nedbat/coveragepy",
project_urls={
'Documentation': __url__,
'Funding': (
'https://tidelift.com/subscription/pkg/pypi-coverage'
'?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=pypi'
),
'Issues': 'https://github.com/nedbat/coveragepy/issues',
},
python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4",
)
# A replacement for the build_ext command which raises a single exception
# if the build fails, so we can fallback nicely.
ext_errors = (
errors.CCompilerError,
errors.DistutilsExecError,
errors.DistutilsPlatformError,
)
if sys.platform == 'win32':
# distutils.msvc9compiler can raise an IOError when failing to
# find the compiler
ext_errors += (IOError,)
class BuildFailed(Exception):
"""Raise this to indicate the C extension wouldn't build."""
def __init__(self):
Exception.__init__(self)
self.cause = sys.exc_info()[1] # work around py 2/3 different syntax
class ve_build_ext(build_ext):
"""Build C extensions, but fail with a straightforward exception."""
def run(self):
"""Wrap `run` with `BuildFailed`."""
try:
build_ext.run(self)
except errors.DistutilsPlatformError:
raise BuildFailed()
def build_extension(self, ext):
"""Wrap `build_extension` with `BuildFailed`."""
try:
# Uncomment to test compile failure handling:
# raise errors.CCompilerError("OOPS")
build_ext.build_extension(self, ext)
except ext_errors:
raise BuildFailed()
except ValueError as err:
# this can happen on Windows 64 bit, see Python issue 7511
if "'path'" in str(err): # works with both py 2/3
raise BuildFailed()
raise
# There are a few reasons we might not be able to compile the C extension.
# Figure out if we should attempt the C extension or not.
compile_extension = True
if sys.platform.startswith('java'):
# Jython can't compile C extensions
compile_extension = False
if '__pypy__' in sys.builtin_module_names:
# Pypy can't compile C extensions
compile_extension = False
if compile_extension:
setup_args.update(dict(
ext_modules=[
Extension(
"coverage.tracer",
sources=[
"coverage/ctracer/datastack.c",
"coverage/ctracer/filedisp.c",
"coverage/ctracer/module.c",
"coverage/ctracer/tracer.c",
],
),
],
cmdclass={
'build_ext': ve_build_ext,
},
))
def main():
"""Actually invoke setup() with the arguments we built above."""
# For a variety of reasons, it might not be possible to install the C
# extension. Try it with, and if it fails, try it without.
try:
setup(**setup_args)
except BuildFailed as exc:
msg = "Couldn't install with extension module, trying without it..."
exc_msg = "%s: %s" % (exc.__class__.__name__, exc.cause)
print("**\n** %s\n** %s\n**" % (msg, exc_msg))
del setup_args['ext_modules']
setup(**setup_args)
if __name__ == '__main__':
main()
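The try-with-extension-then-retry-without pattern in `main()` is worth isolating, since it is what lets coverage fall back to a pure-Python install when no C compiler is available. The sketch below reproduces the control flow with a stand-in `fake_setup` (hypothetical, for illustration only) in place of setuptools' `setup()`:

```python
class BuildFailed(Exception):
    """Stands in for the BuildFailed raised by ve_build_ext on compile errors."""


def fake_setup(**kwargs):
    # Pretend compilation fails whenever a C extension is requested.
    if "ext_modules" in kwargs:
        raise BuildFailed("no compiler")
    return "pure-python install"


setup_args = {"name": "coverage", "ext_modules": ["coverage.tracer"]}
try:
    result = fake_setup(**setup_args)
except BuildFailed:
    # Same recovery as main(): drop the extension and install without it.
    del setup_args["ext_modules"]
    result = fake_setup(**setup_args)
print(result)  # pure-python install
```

The real code additionally prints the captured compiler error (`exc.cause`) so the fallback is visible in the install log rather than silent.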

third_party/python/pyflakes/AUTHORS (vendored, new file, 22 lines)
@@ -0,0 +1,22 @@
Contributors
------------
* Phil Frost - Former Divmod Team
* Moe Aboulkheir - Former Divmod Team
* Jean-Paul Calderone - Former Divmod Team
* Glyph Lefkowitz - Former Divmod Team
* Tristan Seligmann
* Jonathan Lange
* Georg Brandl
* Ronny Pfannschmidt
* Virgil Dupras
* Kevin Watters
* Ian Cordasco
* Florent Xicluna
* Domen Kožar
* Marcin Cieślak
* Steven Myint
* Ignas Mikalajūnas
See also the contributors list on GitHub:
https://github.com/PyCQA/pyflakes/graphs/contributors

third_party/python/pyflakes/LICENSE (vendored, new file, 21 lines)
@@ -0,0 +1,21 @@
Copyright 2005-2011 Divmod, Inc.
Copyright 2013-2014 Florent Xicluna
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

third_party/python/pyflakes/MANIFEST.in (vendored, new file, 3 lines)
@@ -0,0 +1,3 @@
include README.rst NEWS.rst
include AUTHORS LICENSE
include bin/pyflakes

third_party/python/pyflakes/NEWS.rst (vendored, new file, 266 lines)
@@ -0,0 +1,266 @@
2.2.0 (2020-04-08)
- Include column information in error messages
- Fix ``@overload`` detection with other decorators and in non-global scopes
- Fix return-type annotation being a class member
- Fix assignment to ``_`` in doctests with existing ``_`` name
- Namespace attributes which are attached to ast nodes with ``_pyflakes_`` to
avoid conflicts with other libraries (notably bandit)
- Add check for f-strings without placeholders
- Add check for unused/extra/invalid ``'string literal'.format(...)``
- Add check for unused/extra/invalid ``'string literal % ...``
- Improve python shebang detection
- Allow type ignore to be followed by a code ``# type: ignore[attr-defined]``
- Add support for assignment expressions (PEP 572)
- Support ``@overload`` detection from ``typing_extensions`` as well
- Fix ``@overload`` detection for async functions
- Allow ``continue`` inside ``finally`` in python 3.8+
- Fix handling of annotations in positional-only arguments
- Make pyflakes more resistant to future syntax additions
- Fix false positives in partially quoted type annotations
- Warn about ``is`` comparison to tuples
- Fix ``Checker`` usage with async function subtrees
- Add check for ``if`` of non-empty tuple
- Switch from ``optparse`` to ``argparse``
- Fix false positives in partially quoted type annotations in unusual contexts
- Be more cautious when identifying ``Literal`` type expressions
2.1.1 (2019-02-28)
- Fix reported line number for type comment errors
- Fix typing.overload check to only check imported names
2.1.0 (2019-01-23)
- Allow intentional assignment to variables named ``_``
- Recognize ``__module__`` as a valid name in class scope
- ``pyflakes.checker.Checker`` supports checking of partial ``ast`` trees
- Detect assign-before-use for local variables which shadow builtin names
- Detect invalid ``print`` syntax using ``>>`` operator
- Treat ``async for`` the same as a ``for`` loop for introducing variables
- Add detection for list concatenation in ``__all__``
- Exempt ``@typing.overload`` from duplicate function declaration
- Importing a submodule of an ``as``-aliased ``import``-import is marked as
used
- Report undefined names from ``__all__`` as possibly coming from a ``*``
import
- Add support for changes in Python 3.8-dev
- Add support for PEP 563 (``from __future__ import annotations``)
- Include Python version and platform information in ``pyflakes --version``
- Recognize ``__annotations__`` as a valid magic global in Python 3.6+
- Mark names used in PEP 484 ``# type: ...`` comments as used
- Add check for use of ``is`` operator with ``str``, ``bytes``, and ``int``
literals
2.0.0 (2018-05-20)
- Drop support for EOL Python <2.7 and 3.2-3.3
- Check for unused exception binding in ``except:`` block
- Handle string literal type annotations
- Ignore redefinitions of ``_``, unless originally defined by import
- Support ``__class__`` without ``self`` in Python 3
- Issue an error for ``raise NotImplemented(...)``
1.6.0 (2017-08-03)
- Process function scope variable annotations for used names
- Find Python files without extensions by their shebang
1.5.0 (2017-01-09)
- Enable support for PEP 526 annotated assignments
1.4.0 (2016-12-30):
- Change formatting of ImportStarMessage to be consistent with other errors
- Support PEP 498 "f-strings"
1.3.0 (2016-09-01):
- Fix PyPy2 Windows IntegrationTests
- Check for duplicate dictionary keys
- Fix TestMain tests on Windows
- Fix "continue" and "break" checks ignoring py3.5's "async for" loop
1.2.3 (2016-05-12):
- Fix TypeError when processing relative imports
1.2.2 (2016-05-06):
- Avoid traceback when exception is del-ed in except
1.2.1 (2015-05-05):
- Fix false RedefinedWhileUnused for submodule imports
1.2.0 (2016-05-03):
- Warn against reusing exception names after the except: block on Python 3
- Improve the error messages for imports
1.1.0 (2016-03-01):
- Allow main() to accept arguments.
- Support @ matrix-multiplication operator
- Validate ``__future__`` imports
- Fix doctest scope testing
- Warn for tuple assertions which are always true
- Warn for "import \*" not at module level on Python 3
- Catch many more kinds of SyntaxErrors
- Check PEP 498 f-strings
- (and a few more sundry bugfixes)
1.0.0 (2015-09-20):
- Python 3.5 support. async/await statements in particular.
- Fix test_api.py on Windows.
- Eliminate a false UnusedImport warning when the name has been
declared "global"
0.9.2 (2015-06-17):
- Fix a traceback when a global is defined in one scope, and used in another.
0.9.1 (2015-06-09):
- Update NEWS.txt to include 0.9.0, which had been forgotten.
0.9.0 (2015-05-31):
- Exit gracefully, not with a traceback, on SIGINT and SIGPIPE.
- Fix incorrect report of undefined name when using lambda expressions in
generator expressions.
- Don't crash on DOS line endings on Windows and Python 2.6.
- Don't report an undefined name if the 'del' which caused a name to become
undefined is only conditionally executed.
- Properly handle differences in list comprehension scope in Python 3.
- Improve handling of edge cases around 'global' defined variables.
- Report an error for 'return' outside a function.
0.8.1 (2014-03-30):
- Detect the declared encoding in Python 3.
- Do not report redefinition of import in a local scope, if the
global name is used elsewhere in the module.
- Catch undefined variable in loop generator when it is also used as
loop variable.
- Report undefined name for ``(a, b) = (1, 2)`` but not for the general
unpacking ``(a, b) = func()``.
- Correctly detect when an imported module is used in default arguments
of a method, when the method and the module use the same name.
- Distribute a universal wheel file.
0.8.0 (2014-03-22):
- Adapt for the AST in Python 3.4.
- Fix caret position on SyntaxError.
- Fix crash on Python 2.x with some doctest SyntaxError.
- Add tox.ini.
- The ``PYFLAKES_NODOCTEST`` environment variable has been replaced with the
``PYFLAKES_DOCTEST`` environment variable (with the opposite meaning).
Doctest checking is now disabled by default; set the environment variable
to enable it.
- Correctly parse incremental ``__all__ += [...]``.
- Catch return with arguments inside a generator (Python <= 3.2).
- Do not complain about ``_`` in doctests.
- Drop deprecated methods ``pushFunctionScope`` and ``pushClassScope``.
0.7.3 (2013-07-02):
- Do not report undefined name for generator expression and dict or
set comprehension at class level.
- Deprecate ``Checker.pushFunctionScope`` and ``Checker.pushClassScope``:
use ``Checker.pushScope`` instead.
- Remove dependency on Unittest2 for the tests.
0.7.2 (2013-04-24):
- Fix computation of ``DoctestSyntaxError.lineno`` and ``col``.
- Add boolean attribute ``Checker.withDoctest`` to ignore doctests.
- If environment variable ``PYFLAKES_NODOCTEST`` is set, skip doctests.
- Environment variable ``PYFLAKES_BUILTINS`` accepts a comma-separated
list of additional built-in names.
0.7.1 (2013-04-23):
- File ``bin/pyflakes`` was missing in tarball generated with distribute.
- Fix reporting errors in non-ASCII filenames (Python 2.x).
0.7.0 (2013-04-17):
- Add --version and --help options.
- Support ``python -m pyflakes`` (Python 2.7 and Python 3.x).
- Add attribute ``Message.col`` to report column offset.
- Do not report redefinition of variable for a variable used in a list
comprehension in a conditional.
- Do not report redefinition of variable for generator expressions and
set or dict comprehensions.
- Do not report undefined name when the code is protected with a
``NameError`` exception handler.
- Do not report redefinition of variable when unassigning a module imported
for its side-effect.
- Support special locals like ``__tracebackhide__`` for py.test.
- Support checking doctests.
- Fix issue with Turkish locale where ``'i'.upper() == 'i'`` in Python 2.
0.6.1 (2013-01-29):
- Fix detection of variables in augmented assignments.
0.6.0 (2013-01-29):
- Support Python 3 up to 3.3, based on the pyflakes3k project.
- Preserve compatibility with Python 2.5 and all recent versions of Python.
- Support custom reporters in addition to the default Reporter.
- Allow function redefinition for modern property construction via
property.setter/deleter.
- Fix spurious redefinition warnings in conditionals.
- Do not report undefined name in ``__all__`` if import * is used.
- Add WindowsError as a known built-in name on all platforms.
- Support specifying additional built-ins in the ``Checker`` constructor.
- Don't issue Unused Variable warning when using locals() in current scope.
- Handle problems with the encoding of source files.
- Remove dependency on Twisted for the tests.
- Support ``python setup.py test`` and ``python setup.py develop``.
- Create script using setuptools ``entry_points`` to support all platforms,
including Windows.
0.5.0 (2011-09-02):
- Convert pyflakes to use newer _ast infrastructure rather than compiler.
- Support for new syntax in 2.7 (including set literals, set comprehensions,
and dictionary comprehensions).
- Make sure class names don't get bound until after class definition.
0.4.0 (2009-11-25):
- Fix reporting for certain SyntaxErrors which lack line number
information.
- Check for syntax errors more rigorously.
- Support checking names used with the class decorator syntax in versions
of Python which have it.
- Detect local variables which are bound but never used.
- Handle permission errors when trying to read source files.
- Handle problems with the encoding of source files.
- Support importing dotted names so as not to incorrectly report them as
redefined unused names.
- Support all forms of the with statement.
- Consider static ``__all__`` definitions and avoid reporting unused names
if the names are listed there.
- Fix incorrect checking of class names with respect to the names of their
bases in the class statement.
- Support the ``__path__`` global in ``__init__.py``.
0.3.0 (2009-01-30):
- Display more informative SyntaxError messages.
- Don't hang flymake with unmatched triple quotes (only report a single
line of source for a multiline syntax error).
- Recognize ``__builtins__`` as a defined name.
- Improve pyflakes support for python versions 2.3-2.5
- Support for if-else expressions and with statements.
- Warn instead of error on non-existent file paths.
- Check for ``__future__`` imports after other statements.
- Add reporting for some types of import shadowing.
- Improve reporting of unbound locals

third_party/python/pyflakes/PKG-INFO (vendored, new file, 116 lines)
@@ -0,0 +1,116 @@
Metadata-Version: 1.2
Name: pyflakes
Version: 2.2.0
Summary: passive checker of Python programs
Home-page: https://github.com/PyCQA/pyflakes
Author: A lot of people
Author-email: code-quality@python.org
License: MIT
Description: ========
Pyflakes
========
A simple program which checks Python source files for errors.
Pyflakes analyzes programs and detects various errors. It works by
parsing the source file, not importing it, so it is safe to use on
modules with side effects. It's also much faster.
It is `available on PyPI <https://pypi.org/project/pyflakes/>`_
and it supports all active versions of Python: 2.7 and 3.4 to 3.7.
Installation
------------
It can be installed with::
$ pip install --upgrade pyflakes
Useful tips:
* Be sure to install it for a version of Python which is compatible
with your codebase: for Python 2, ``pip2 install pyflakes`` and for
Python3, ``pip3 install pyflakes``.
* You can also invoke Pyflakes with ``python3 -m pyflakes .`` or
``python2 -m pyflakes .`` if you have it installed for both versions.
* If you require more options and more flexibility, you could give a
look to Flake8_ too.
Design Principles
-----------------
Pyflakes makes a simple promise: it will never complain about style,
and it will try very, very hard to never emit false positives.
Pyflakes is also faster than Pylint_
or Pychecker_. This is
largely because Pyflakes only examines the syntax tree of each file
individually. As a consequence, Pyflakes is more limited in the
types of things it can check.
If you like Pyflakes but also want stylistic checks, you want
flake8_, which combines
Pyflakes with style checks against
`PEP 8`_ and adds
per-project configuration ability.
Mailing-list
------------
Share your feedback and ideas: `subscribe to the mailing-list
<https://mail.python.org/mailman/listinfo/code-quality>`_
Contributing
------------
Issues are tracked on `GitHub <https://github.com/PyCQA/pyflakes/issues>`_.
Patches may be submitted via a `GitHub pull request`_ or via the mailing list
if you prefer. If you are comfortable doing so, please `rebase your changes`_
so they may be applied to master with a fast-forward merge, and each commit is
a coherent unit of work with a well-written log message. If you are not
comfortable with this rebase workflow, the project maintainers will be happy to
rebase your commits for you.
All changes should include tests and pass flake8_.
.. image:: https://api.travis-ci.org/PyCQA/pyflakes.svg?branch=master
:target: https://travis-ci.org/PyCQA/pyflakes
:alt: Build status
.. _Pylint: http://www.pylint.org/
.. _flake8: https://pypi.org/project/flake8/
.. _`PEP 8`: http://legacy.python.org/dev/peps/pep-0008/
.. _Pychecker: http://pychecker.sourceforge.net/
.. _`rebase your changes`: https://git-scm.com/book/en/v2/Git-Branching-Rebasing
.. _`GitHub pull request`: https://github.com/PyCQA/pyflakes/pulls
Changelog
---------
Please see `NEWS.rst <https://github.com/PyCQA/pyflakes/blob/master/NEWS.rst>`_.
Platform: UNKNOWN
Classifier: Development Status :: 6 - Mature
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Software Development
Classifier: Topic :: Utilities
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*

89
third_party/python/pyflakes/README.rst vendored Normal file
@@ -0,0 +1,89 @@
========
Pyflakes
========
A simple program which checks Python source files for errors.
Pyflakes analyzes programs and detects various errors. It works by
parsing the source file, not importing it, so it is safe to use on
modules with side effects. It's also much faster.
It is `available on PyPI <https://pypi.org/project/pyflakes/>`_
and it supports all active versions of Python: 2.7 and 3.4 to 3.7.
Installation
------------
It can be installed with::
$ pip install --upgrade pyflakes
Useful tips:
* Be sure to install it for a version of Python which is compatible
with your codebase: for Python 2, ``pip2 install pyflakes`` and for
Python 3, ``pip3 install pyflakes``.
* You can also invoke Pyflakes with ``python3 -m pyflakes .`` or
``python2 -m pyflakes .`` if you have it installed for both versions.
* If you require more options and more flexibility, you could take a
look at Flake8_ too.
Design Principles
-----------------
Pyflakes makes a simple promise: it will never complain about style,
and it will try very, very hard to never emit false positives.
Pyflakes is also faster than Pylint_
or Pychecker_. This is
largely because Pyflakes only examines the syntax tree of each file
individually. As a consequence, Pyflakes is more limited in the
types of things it can check.
If you like Pyflakes but also want stylistic checks, you want
flake8_, which combines
Pyflakes with style checks against
`PEP 8`_ and adds
per-project configuration ability.
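The "examines the syntax tree only" point above is what makes Pyflakes safe on modules with side effects: the source is parsed, never imported. A rough stdlib-only sketch of that idea follows; it is a naive unused-import check for illustration, not Pyflakes' actual algorithm (which tracks scopes, bindings, doctests, and much more).

```python
import ast


def naive_unused_imports(source):
    """Flag top-level imported names that are never mentioned again.

    Illustration of syntax-tree-only analysis: the code is parsed
    with ast.parse and walked, but never executed or imported.
    """
    tree = ast.parse(source)
    imported = set()
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add(alias.asname or alias.name.split('.')[0])
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)


print(naive_unused_imports("import os\nimport sys\nprint(sys.path)\n"))
```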
Mailing-list
------------
Share your feedback and ideas: `subscribe to the mailing-list
<https://mail.python.org/mailman/listinfo/code-quality>`_
Contributing
------------
Issues are tracked on `GitHub <https://github.com/PyCQA/pyflakes/issues>`_.
Patches may be submitted via a `GitHub pull request`_ or via the mailing list
if you prefer. If you are comfortable doing so, please `rebase your changes`_
so they may be applied to master with a fast-forward merge, and each commit is
a coherent unit of work with a well-written log message. If you are not
comfortable with this rebase workflow, the project maintainers will be happy to
rebase your commits for you.
All changes should include tests and pass flake8_.
.. image:: https://api.travis-ci.org/PyCQA/pyflakes.svg?branch=master
:target: https://travis-ci.org/PyCQA/pyflakes
:alt: Build status
.. _Pylint: http://www.pylint.org/
.. _flake8: https://pypi.org/project/flake8/
.. _`PEP 8`: http://legacy.python.org/dev/peps/pep-0008/
.. _Pychecker: http://pychecker.sourceforge.net/
.. _`rebase your changes`: https://git-scm.com/book/en/v2/Git-Branching-Rebasing
.. _`GitHub pull request`: https://github.com/PyCQA/pyflakes/pulls
Changelog
---------
Please see `NEWS.rst <https://github.com/PyCQA/pyflakes/blob/master/NEWS.rst>`_.

1
third_party/python/pyflakes/pyflakes/__init__.py vendored Normal file
@@ -0,0 +1 @@
__version__ = '2.2.0'

5
third_party/python/pyflakes/pyflakes/__main__.py vendored Normal file
@@ -0,0 +1,5 @@
from pyflakes.api import main
# python -m pyflakes
if __name__ == '__main__':
main(prog='pyflakes')

213
third_party/python/pyflakes/pyflakes/api.py vendored Normal file
@@ -0,0 +1,213 @@
"""
API for the command-line I{pyflakes} tool.
"""
from __future__ import with_statement
import ast
import os
import platform
import re
import sys
from pyflakes import checker, __version__
from pyflakes import reporter as modReporter
__all__ = ['check', 'checkPath', 'checkRecursive', 'iterSourceCode', 'main']
PYTHON_SHEBANG_REGEX = re.compile(br'^#!.*\bpython([23](\.\d+)?|w)?[dmu]?\s')
def check(codeString, filename, reporter=None):
"""
Check the Python source given by C{codeString} for flakes.
@param codeString: The Python source to check.
@type codeString: C{str}
@param filename: The name of the file the source came from, used to report
errors.
@type filename: C{str}
@param reporter: A L{Reporter} instance, where errors and warnings will be
reported.
@return: The number of warnings emitted.
@rtype: C{int}
"""
if reporter is None:
reporter = modReporter._makeDefaultReporter()
# First, compile into an AST and handle syntax errors.
try:
tree = ast.parse(codeString, filename=filename)
except SyntaxError:
value = sys.exc_info()[1]
msg = value.args[0]
(lineno, offset, text) = value.lineno, value.offset, value.text
if checker.PYPY:
if text is None:
lines = codeString.splitlines()
if len(lines) >= lineno:
text = lines[lineno - 1]
if sys.version_info >= (3, ) and isinstance(text, bytes):
try:
text = text.decode('ascii')
except UnicodeDecodeError:
text = None
offset -= 1
# If there's an encoding problem with the file, the text is None.
if text is None:
# Avoid using msg, since for the only known case, it contains a
# bogus message that claims the encoding the file declared was
# unknown.
reporter.unexpectedError(filename, 'problem decoding source')
else:
reporter.syntaxError(filename, msg, lineno, offset, text)
return 1
except Exception:
reporter.unexpectedError(filename, 'problem decoding source')
return 1
# Okay, it's syntactically valid. Now check it.
file_tokens = checker.make_tokens(codeString)
w = checker.Checker(tree, file_tokens=file_tokens, filename=filename)
w.messages.sort(key=lambda m: m.lineno)
for warning in w.messages:
reporter.flake(warning)
return len(w.messages)
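The ``except SyntaxError`` branch of ``check()`` reads everything it needs off the exception object itself; that behaviour is plain stdlib and can be reproduced in isolation (exact ``msg`` wording and ``offset`` vary across CPython versions, so only the shape is asserted here):

```python
import ast

# Mirror of check()'s error path: ast.parse raises SyntaxError, and
# the exception carries the message, the position and the offending
# source text.
try:
    ast.parse("def broken(:\n", filename="example.py")
except SyntaxError as value:
    msg = value.args[0]
    (lineno, offset, text) = value.lineno, value.offset, value.text

print(lineno, offset, repr(msg))
```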
def checkPath(filename, reporter=None):
"""
Check the given path, printing out any warnings detected.
@param reporter: A L{Reporter} instance, where errors and warnings will be
reported.
@return: the number of warnings printed
"""
if reporter is None:
reporter = modReporter._makeDefaultReporter()
try:
with open(filename, 'rb') as f:
codestr = f.read()
except IOError:
msg = sys.exc_info()[1]
reporter.unexpectedError(filename, msg.args[1])
return 1
return check(codestr, filename, reporter)
def isPythonFile(filename):
"""Return True if filename points to a Python file."""
if filename.endswith('.py'):
return True
# Avoid obvious Emacs backup files
if filename.endswith("~"):
return False
max_bytes = 128
try:
with open(filename, 'rb') as f:
text = f.read(max_bytes)
if not text:
return False
except IOError:
return False
return PYTHON_SHEBANG_REGEX.match(text)
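``isPythonFile`` pairs the ``.py`` extension check with ``PYTHON_SHEBANG_REGEX``; the pattern (copied from the module above, note it is a bytes pattern) can be exercised directly against a few first lines, matching the cases the test suite covers:

```python
import re

# Same pattern as PYTHON_SHEBANG_REGEX above.
shebang = re.compile(br'^#!.*\bpython([23](\.\d+)?|w)?[dmu]?\s')

assert shebang.match(b'#!/usr/bin/env python\n')
assert shebang.match(b'#!/usr/bin/env python3\n')
assert shebang.match(b'#!/usr/bin/python2.7 \n')
assert shebang.match(b'#! /usr/bin/env python3.8m\n')
assert shebang.match(b'#!/usr/bin/env pythonw\n')
assert shebang.match(b'#!/bin/sh\n') is None  # not a Python shebang
```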
def iterSourceCode(paths):
"""
Iterate over all Python source files in C{paths}.
@param paths: A list of paths. Directories will be recursed into and
any .py files found will be yielded. Any non-directories will be
yielded as-is.
"""
for path in paths:
if os.path.isdir(path):
for dirpath, dirnames, filenames in os.walk(path):
for filename in filenames:
full_path = os.path.join(dirpath, filename)
if isPythonFile(full_path):
yield full_path
else:
yield path
def checkRecursive(paths, reporter):
"""
Recursively check all source files in C{paths}.
@param paths: A list of paths to Python source files and directories
containing Python source files.
@param reporter: A L{Reporter} where all of the warnings and errors
will be reported to.
@return: The number of warnings found.
"""
warnings = 0
for sourcePath in iterSourceCode(paths):
warnings += checkPath(sourcePath, reporter)
return warnings
def _exitOnSignal(sigName, message):
"""Handles a signal with sys.exit.
Some of these signals (SIGPIPE, for example) don't exist or are invalid on
Windows. So, ignore errors that might arise.
"""
import signal
try:
sigNumber = getattr(signal, sigName)
except AttributeError:
# the signal constants defined in the signal module are defined by
# whether the C library supports them or not. So, SIGPIPE might not
# even be defined.
return
def handler(sig, f):
sys.exit(message)
try:
signal.signal(sigNumber, handler)
except ValueError:
# It's also possible the signal is defined, but then it's invalid. In
# this case, signal.signal raises ValueError.
pass
def _get_version():
"""
Retrieve and format package version along with python version & OS used
"""
return ('%s Python %s on %s' %
(__version__, platform.python_version(), platform.system()))
def main(prog=None, args=None):
"""Entry point for the script "pyflakes"."""
import argparse
# Handle "Keyboard Interrupt" and "Broken pipe" gracefully
_exitOnSignal('SIGINT', '... stopped')
_exitOnSignal('SIGPIPE', 1)
parser = argparse.ArgumentParser(prog=prog,
description='Check Python source files for errors')
parser.add_argument('-V', '--version', action='version', version=_get_version())
parser.add_argument('path', nargs='*',
help='Path(s) of Python file(s) to check. STDIN if not given.')
args = parser.parse_args(args=args).path
reporter = modReporter._makeDefaultReporter()
if args:
warnings = checkRecursive(args, reporter)
else:
warnings = check(sys.stdin.read(), '<stdin>', reporter)
raise SystemExit(warnings > 0)
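Every entry point above takes an optional reporter, and nothing requires it to be a ``Reporter`` instance: as the test suite's ``LoggingReporter`` illustrates, any object with ``flake``, ``unexpectedError`` and ``syntaxError`` methods will do. A minimal list-collecting stand-in, assuming only that three-method contract:

```python
class ListReporter(object):
    """Collect pyflakes events in a list instead of printing them."""

    def __init__(self):
        self.events = []

    def flake(self, message):
        self.events.append(('flake', str(message)))

    def unexpectedError(self, filename, msg):
        self.events.append(('unexpectedError', filename, msg))

    def syntaxError(self, filename, msg, lineno, offset, text):
        self.events.append(('syntaxError', filename, msg, lineno, offset, text))


# The API calls these methods itself; here we drive one by hand.
r = ListReporter()
r.unexpectedError('missing.py', 'No such file or directory')
print(r.events)
```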

2249
third_party/python/pyflakes/pyflakes/checker.py vendored Normal file
The diff for this file was not shown because it is too large. Load diff

371
third_party/python/pyflakes/pyflakes/messages.py vendored Normal file
@@ -0,0 +1,371 @@
"""
Provide the class Message and its subclasses.
"""
class Message(object):
message = ''
message_args = ()
def __init__(self, filename, loc):
self.filename = filename
self.lineno = loc.lineno
self.col = getattr(loc, 'col_offset', 0)
def __str__(self):
return '%s:%s:%s %s' % (self.filename, self.lineno, self.col+1,
self.message % self.message_args)
class UnusedImport(Message):
message = '%r imported but unused'
def __init__(self, filename, loc, name):
Message.__init__(self, filename, loc)
self.message_args = (name,)
class RedefinedWhileUnused(Message):
message = 'redefinition of unused %r from line %r'
def __init__(self, filename, loc, name, orig_loc):
Message.__init__(self, filename, loc)
self.message_args = (name, orig_loc.lineno)
class RedefinedInListComp(Message):
message = 'list comprehension redefines %r from line %r'
def __init__(self, filename, loc, name, orig_loc):
Message.__init__(self, filename, loc)
self.message_args = (name, orig_loc.lineno)
class ImportShadowedByLoopVar(Message):
message = 'import %r from line %r shadowed by loop variable'
def __init__(self, filename, loc, name, orig_loc):
Message.__init__(self, filename, loc)
self.message_args = (name, orig_loc.lineno)
class ImportStarNotPermitted(Message):
message = "'from %s import *' only allowed at module level"
def __init__(self, filename, loc, modname):
Message.__init__(self, filename, loc)
self.message_args = (modname,)
class ImportStarUsed(Message):
message = "'from %s import *' used; unable to detect undefined names"
def __init__(self, filename, loc, modname):
Message.__init__(self, filename, loc)
self.message_args = (modname,)
class ImportStarUsage(Message):
message = "%r may be undefined, or defined from star imports: %s"
def __init__(self, filename, loc, name, from_list):
Message.__init__(self, filename, loc)
self.message_args = (name, from_list)
class UndefinedName(Message):
message = 'undefined name %r'
def __init__(self, filename, loc, name):
Message.__init__(self, filename, loc)
self.message_args = (name,)
class DoctestSyntaxError(Message):
message = 'syntax error in doctest'
def __init__(self, filename, loc, position=None):
Message.__init__(self, filename, loc)
if position:
(self.lineno, self.col) = position
self.message_args = ()
class UndefinedExport(Message):
message = 'undefined name %r in __all__'
def __init__(self, filename, loc, name):
Message.__init__(self, filename, loc)
self.message_args = (name,)
class UndefinedLocal(Message):
message = 'local variable %r {0} referenced before assignment'
default = 'defined in enclosing scope on line %r'
builtin = 'defined as a builtin'
def __init__(self, filename, loc, name, orig_loc):
Message.__init__(self, filename, loc)
if orig_loc is None:
self.message = self.message.format(self.builtin)
self.message_args = name
else:
self.message = self.message.format(self.default)
self.message_args = (name, orig_loc.lineno)
class DuplicateArgument(Message):
message = 'duplicate argument %r in function definition'
def __init__(self, filename, loc, name):
Message.__init__(self, filename, loc)
self.message_args = (name,)
class MultiValueRepeatedKeyLiteral(Message):
message = 'dictionary key %r repeated with different values'
def __init__(self, filename, loc, key):
Message.__init__(self, filename, loc)
self.message_args = (key,)
class MultiValueRepeatedKeyVariable(Message):
message = 'dictionary key variable %s repeated with different values'
def __init__(self, filename, loc, key):
Message.__init__(self, filename, loc)
self.message_args = (key,)
class LateFutureImport(Message):
message = 'from __future__ imports must occur at the beginning of the file'
def __init__(self, filename, loc, names):
Message.__init__(self, filename, loc)
self.message_args = ()
class FutureFeatureNotDefined(Message):
"""An undefined __future__ feature name was imported."""
message = 'future feature %s is not defined'
def __init__(self, filename, loc, name):
Message.__init__(self, filename, loc)
self.message_args = (name,)
class UnusedVariable(Message):
"""
Indicates that a variable has been explicitly assigned to but not actually
used.
"""
message = 'local variable %r is assigned to but never used'
def __init__(self, filename, loc, names):
Message.__init__(self, filename, loc)
self.message_args = (names,)
class ReturnWithArgsInsideGenerator(Message):
"""
Indicates a return statement with arguments inside a generator.
"""
message = '\'return\' with argument inside generator'
class ReturnOutsideFunction(Message):
"""
Indicates a return statement outside of a function/method.
"""
message = '\'return\' outside function'
class YieldOutsideFunction(Message):
"""
Indicates a yield or yield from statement outside of a function/method.
"""
message = '\'yield\' outside function'
# For whatever reason, Python gives different error messages for these two. We
# match the Python error message exactly.
class ContinueOutsideLoop(Message):
"""
Indicates a continue statement outside of a while or for loop.
"""
message = '\'continue\' not properly in loop'
class BreakOutsideLoop(Message):
"""
Indicates a break statement outside of a while or for loop.
"""
message = '\'break\' outside loop'
class ContinueInFinally(Message):
"""
Indicates a continue statement in a finally block in a while or for loop.
"""
message = '\'continue\' not supported inside \'finally\' clause'
class DefaultExceptNotLast(Message):
"""
Indicates an except: block as not the last exception handler.
"""
message = 'default \'except:\' must be last'
class TwoStarredExpressions(Message):
"""
Two or more starred expressions in an assignment (a, *b, *c = d).
"""
message = 'two starred expressions in assignment'
class TooManyExpressionsInStarredAssignment(Message):
"""
Too many expressions in an assignment with star-unpacking
"""
message = 'too many expressions in star-unpacking assignment'
class IfTuple(Message):
"""
Conditional test is a non-empty tuple literal, which are always True.
"""
message = '\'if tuple literal\' is always true, perhaps remove accidental comma?'
class AssertTuple(Message):
"""
Assertion test is a non-empty tuple literal, which are always True.
"""
message = 'assertion is always true, perhaps remove parentheses?'
class ForwardAnnotationSyntaxError(Message):
message = 'syntax error in forward annotation %r'
def __init__(self, filename, loc, annotation):
Message.__init__(self, filename, loc)
self.message_args = (annotation,)
class CommentAnnotationSyntaxError(Message):
message = 'syntax error in type comment %r'
def __init__(self, filename, loc, annotation):
Message.__init__(self, filename, loc)
self.message_args = (annotation,)
class RaiseNotImplemented(Message):
message = "'raise NotImplemented' should be 'raise NotImplementedError'"
class InvalidPrintSyntax(Message):
message = 'use of >> is invalid with print function'
class IsLiteral(Message):
message = 'use ==/!= to compare constant literals (str, bytes, int, float, tuple)'
class FStringMissingPlaceholders(Message):
message = 'f-string is missing placeholders'
class StringDotFormatExtraPositionalArguments(Message):
message = "'...'.format(...) has unused arguments at position(s): %s"
def __init__(self, filename, loc, extra_positions):
Message.__init__(self, filename, loc)
self.message_args = (extra_positions,)
class StringDotFormatExtraNamedArguments(Message):
message = "'...'.format(...) has unused named argument(s): %s"
def __init__(self, filename, loc, extra_keywords):
Message.__init__(self, filename, loc)
self.message_args = (extra_keywords,)
class StringDotFormatMissingArgument(Message):
message = "'...'.format(...) is missing argument(s) for placeholder(s): %s"
def __init__(self, filename, loc, missing_arguments):
Message.__init__(self, filename, loc)
self.message_args = (missing_arguments,)
class StringDotFormatMixingAutomatic(Message):
message = "'...'.format(...) mixes automatic and manual numbering"
class StringDotFormatInvalidFormat(Message):
message = "'...'.format(...) has invalid format string: %s"
def __init__(self, filename, loc, error):
Message.__init__(self, filename, loc)
self.message_args = (error,)
class PercentFormatInvalidFormat(Message):
message = "'...' %% ... has invalid format string: %s"
def __init__(self, filename, loc, error):
Message.__init__(self, filename, loc)
self.message_args = (error,)
class PercentFormatMixedPositionalAndNamed(Message):
message = "'...' %% ... has mixed positional and named placeholders"
class PercentFormatUnsupportedFormatCharacter(Message):
message = "'...' %% ... has unsupported format character %r"
def __init__(self, filename, loc, c):
Message.__init__(self, filename, loc)
self.message_args = (c,)
class PercentFormatPositionalCountMismatch(Message):
message = "'...' %% ... has %d placeholder(s) but %d substitution(s)"
def __init__(self, filename, loc, n_placeholders, n_substitutions):
Message.__init__(self, filename, loc)
self.message_args = (n_placeholders, n_substitutions)
class PercentFormatExtraNamedArguments(Message):
message = "'...' %% ... has unused named argument(s): %s"
def __init__(self, filename, loc, extra_keywords):
Message.__init__(self, filename, loc)
self.message_args = (extra_keywords,)
class PercentFormatMissingArgument(Message):
message = "'...' %% ... is missing argument(s) for placeholder(s): %s"
def __init__(self, filename, loc, missing_arguments):
Message.__init__(self, filename, loc)
self.message_args = (missing_arguments,)
class PercentFormatExpectedMapping(Message):
message = "'...' %% ... expected mapping but got sequence"
class PercentFormatExpectedSequence(Message):
message = "'...' %% ... expected sequence but got mapping"
class PercentFormatStarRequiresSequence(Message):
message = "'...' %% ... `*` specifier requires sequence"
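Each subclass above only supplies ``message`` and ``message_args``; all rendering happens in ``Message.__str__`` via ``%`` formatting, with the column reported 1-based. A condensed mirror of that mechanism (the ``Loc`` class is a stand-in for an AST node, which normally provides ``lineno`` and ``col_offset``):

```python
class Loc(object):
    """Stand-in for an AST node: just a source position."""

    def __init__(self, lineno, col_offset=0):
        self.lineno = lineno
        self.col_offset = col_offset


class Message(object):
    message = ''
    message_args = ()

    def __init__(self, filename, loc):
        self.filename = filename
        self.lineno = loc.lineno
        self.col = getattr(loc, 'col_offset', 0)

    def __str__(self):
        # col is 0-based internally, reported 1-based.
        return '%s:%s:%s %s' % (self.filename, self.lineno, self.col + 1,
                                self.message % self.message_args)


class UnusedImport(Message):
    message = '%r imported but unused'

    def __init__(self, filename, loc, name):
        Message.__init__(self, filename, loc)
        self.message_args = (name,)


print(str(UnusedImport('foo.py', Loc(42), 'bar')))
```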

82
third_party/python/pyflakes/pyflakes/reporter.py vendored Normal file
@@ -0,0 +1,82 @@
"""
Provide the Reporter class.
"""
import re
import sys
class Reporter(object):
"""
Formats the results of pyflakes checks to users.
"""
def __init__(self, warningStream, errorStream):
"""
Construct a L{Reporter}.
@param warningStream: A file-like object where warnings will be
written to. The stream's C{write} method must accept unicode.
C{sys.stdout} is a good value.
@param errorStream: A file-like object where error output will be
written to. The stream's C{write} method must accept unicode.
C{sys.stderr} is a good value.
"""
self._stdout = warningStream
self._stderr = errorStream
def unexpectedError(self, filename, msg):
"""
An unexpected error occurred trying to process C{filename}.
@param filename: The path to a file that we could not process.
@ptype filename: C{unicode}
@param msg: A message explaining the problem.
@ptype msg: C{unicode}
"""
self._stderr.write("%s: %s\n" % (filename, msg))
def syntaxError(self, filename, msg, lineno, offset, text):
"""
There was a syntax error in C{filename}.
@param filename: The path to the file with the syntax error.
@ptype filename: C{unicode}
@param msg: An explanation of the syntax error.
@ptype msg: C{unicode}
@param lineno: The line number where the syntax error occurred.
@ptype lineno: C{int}
@param offset: The column on which the syntax error occurred, or None.
@ptype offset: C{int}
@param text: The source code containing the syntax error.
@ptype text: C{unicode}
"""
line = text.splitlines()[-1]
if offset is not None:
if sys.version_info < (3, 8):
offset = offset - (len(text) - len(line)) + 1
self._stderr.write('%s:%d:%d: %s\n' %
(filename, lineno, offset, msg))
else:
self._stderr.write('%s:%d: %s\n' % (filename, lineno, msg))
self._stderr.write(line)
self._stderr.write('\n')
if offset is not None:
self._stderr.write(re.sub(r'\S', ' ', line[:offset - 1]) +
"^\n")
def flake(self, message):
"""
pyflakes found something wrong with the code.
@param: A L{pyflakes.messages.Message}.
"""
self._stdout.write(str(message))
self._stdout.write('\n')
def _makeDefaultReporter():
"""
Make a reporter that can be used when no reporter is specified.
"""
return Reporter(sys.stdout, sys.stderr)
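The caret line in ``syntaxError`` is built by blanking every non-whitespace character before the error column, so the ``^`` lands under the right place while existing whitespace (tabs included) keeps its width. The expression in isolation:

```python
import re

line = 'bad line of source'
offset = 8  # 1-based column of the error

# Same expression as in Reporter.syntaxError above: non-space
# characters before the error column become spaces, whitespace is
# preserved, so the caret aligns with the printed source line.
caret = re.sub(r'\S', ' ', line[:offset - 1]) + '^'
print(line)
print(caret)
```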

0
third_party/python/pyflakes/pyflakes/test/__init__.py vendored Normal file

72
third_party/python/pyflakes/pyflakes/test/harness.py vendored Normal file
@@ -0,0 +1,72 @@
import ast
import textwrap
import unittest
from pyflakes import checker
__all__ = ['TestCase', 'skip', 'skipIf']
skip = unittest.skip
skipIf = unittest.skipIf
class TestCase(unittest.TestCase):
withDoctest = False
def flakes(self, input, *expectedOutputs, **kw):
tree = ast.parse(textwrap.dedent(input))
file_tokens = checker.make_tokens(textwrap.dedent(input))
if kw.get('is_segment'):
tree = tree.body[0]
kw.pop('is_segment')
w = checker.Checker(
tree, file_tokens=file_tokens, withDoctest=self.withDoctest, **kw
)
outputs = [type(o) for o in w.messages]
expectedOutputs = list(expectedOutputs)
outputs.sort(key=lambda t: t.__name__)
expectedOutputs.sort(key=lambda t: t.__name__)
self.assertEqual(outputs, expectedOutputs, '''\
for input:
%s
expected outputs:
%r
but got:
%s''' % (input, expectedOutputs, '\n'.join([str(o) for o in w.messages])))
return w
if not hasattr(unittest.TestCase, 'assertIs'):
def assertIs(self, expr1, expr2, msg=None):
if expr1 is not expr2:
self.fail(msg or '%r is not %r' % (expr1, expr2))
if not hasattr(unittest.TestCase, 'assertIsInstance'):
def assertIsInstance(self, obj, cls, msg=None):
"""Same as self.assertTrue(isinstance(obj, cls))."""
if not isinstance(obj, cls):
self.fail(msg or '%r is not an instance of %r' % (obj, cls))
if not hasattr(unittest.TestCase, 'assertNotIsInstance'):
def assertNotIsInstance(self, obj, cls, msg=None):
"""Same as self.assertFalse(isinstance(obj, cls))."""
if isinstance(obj, cls):
self.fail(msg or '%r is an instance of %r' % (obj, cls))
if not hasattr(unittest.TestCase, 'assertIn'):
def assertIn(self, member, container, msg=None):
"""Just like self.assertTrue(a in b)."""
if member not in container:
self.fail(msg or '%r not found in %r' % (member, container))
if not hasattr(unittest.TestCase, 'assertNotIn'):
def assertNotIn(self, member, container, msg=None):
"""Just like self.assertTrue(a not in b)."""
if member in container:
self.fail(msg or
'%r unexpectedly found in %r' % (member, container))

835
third_party/python/pyflakes/pyflakes/test/test_api.py vendored Normal file
@@ -0,0 +1,835 @@
"""
Tests for L{pyflakes.scripts.pyflakes}.
"""
import contextlib
import os
import sys
import shutil
import subprocess
import tempfile
from pyflakes.messages import UnusedImport
from pyflakes.reporter import Reporter
from pyflakes.api import (
main,
checkPath,
checkRecursive,
iterSourceCode,
)
from pyflakes.test.harness import TestCase, skipIf
if sys.version_info < (3,):
from cStringIO import StringIO
else:
from io import StringIO
unichr = chr
try:
sys.pypy_version_info
PYPY = True
except AttributeError:
PYPY = False
try:
WindowsError
WIN = True
except NameError:
WIN = False
ERROR_HAS_COL_NUM = ERROR_HAS_LAST_LINE = sys.version_info >= (3, 2) or PYPY
def withStderrTo(stderr, f, *args, **kwargs):
"""
Call C{f} with C{sys.stderr} redirected to C{stderr}.
"""
(outer, sys.stderr) = (sys.stderr, stderr)
try:
return f(*args, **kwargs)
finally:
sys.stderr = outer
class Node(object):
"""
Mock an AST node.
"""
def __init__(self, lineno, col_offset=0):
self.lineno = lineno
self.col_offset = col_offset
class SysStreamCapturing(object):
"""
Context manager capturing sys.stdin, sys.stdout and sys.stderr.
The file handles are replaced with a StringIO object.
On environments that support it, the StringIO object uses newlines
set to os.linesep. Otherwise newlines are converted from \\n to
os.linesep during __exit__.
"""
def _create_StringIO(self, buffer=None):
# Python 3 has a newline argument
try:
return StringIO(buffer, newline=os.linesep)
except TypeError:
self._newline = True
# Python 2 creates an input only stream when buffer is not None
if buffer is None:
return StringIO()
else:
return StringIO(buffer)
def __init__(self, stdin):
self._newline = False
self._stdin = self._create_StringIO(stdin or '')
def __enter__(self):
self._orig_stdin = sys.stdin
self._orig_stdout = sys.stdout
self._orig_stderr = sys.stderr
sys.stdin = self._stdin
sys.stdout = self._stdout_stringio = self._create_StringIO()
sys.stderr = self._stderr_stringio = self._create_StringIO()
return self
def __exit__(self, *args):
self.output = self._stdout_stringio.getvalue()
self.error = self._stderr_stringio.getvalue()
if self._newline and os.linesep != '\n':
self.output = self.output.replace('\n', os.linesep)
self.error = self.error.replace('\n', os.linesep)
sys.stdin = self._orig_stdin
sys.stdout = self._orig_stdout
sys.stderr = self._orig_stderr
class LoggingReporter(object):
"""
Implementation of Reporter that just appends any error to a list.
"""
def __init__(self, log):
"""
Construct a C{LoggingReporter}.
@param log: A list to append log messages to.
"""
self.log = log
def flake(self, message):
self.log.append(('flake', str(message)))
def unexpectedError(self, filename, message):
self.log.append(('unexpectedError', filename, message))
def syntaxError(self, filename, msg, lineno, offset, line):
self.log.append(('syntaxError', filename, msg, lineno, offset, line))
class TestIterSourceCode(TestCase):
"""
Tests for L{iterSourceCode}.
"""
def setUp(self):
self.tempdir = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.tempdir)
def makeEmptyFile(self, *parts):
assert parts
fpath = os.path.join(self.tempdir, *parts)
open(fpath, 'a').close()
return fpath
def test_emptyDirectory(self):
"""
There are no Python files in an empty directory.
"""
self.assertEqual(list(iterSourceCode([self.tempdir])), [])
def test_singleFile(self):
"""
If the directory contains one Python file, C{iterSourceCode} will find
it.
"""
childpath = self.makeEmptyFile('foo.py')
self.assertEqual(list(iterSourceCode([self.tempdir])), [childpath])
def test_onlyPythonSource(self):
"""
Files that are not Python source files are not included.
"""
self.makeEmptyFile('foo.pyc')
self.assertEqual(list(iterSourceCode([self.tempdir])), [])
def test_recurses(self):
"""
If the Python files are hidden deep down in child directories, we will
find them.
"""
os.mkdir(os.path.join(self.tempdir, 'foo'))
apath = self.makeEmptyFile('foo', 'a.py')
self.makeEmptyFile('foo', 'a.py~')
os.mkdir(os.path.join(self.tempdir, 'bar'))
bpath = self.makeEmptyFile('bar', 'b.py')
cpath = self.makeEmptyFile('c.py')
self.assertEqual(
sorted(iterSourceCode([self.tempdir])),
sorted([apath, bpath, cpath]))
def test_shebang(self):
"""
Find Python files that don't end with `.py`, but contain a Python
shebang.
"""
python = os.path.join(self.tempdir, 'a')
with open(python, 'w') as fd:
fd.write('#!/usr/bin/env python\n')
self.makeEmptyFile('b')
with open(os.path.join(self.tempdir, 'c'), 'w') as fd:
fd.write('hello\nworld\n')
python2 = os.path.join(self.tempdir, 'd')
with open(python2, 'w') as fd:
fd.write('#!/usr/bin/env python2\n')
python3 = os.path.join(self.tempdir, 'e')
with open(python3, 'w') as fd:
fd.write('#!/usr/bin/env python3\n')
pythonw = os.path.join(self.tempdir, 'f')
with open(pythonw, 'w') as fd:
fd.write('#!/usr/bin/env pythonw\n')
python3args = os.path.join(self.tempdir, 'g')
with open(python3args, 'w') as fd:
fd.write('#!/usr/bin/python3 -u\n')
python2u = os.path.join(self.tempdir, 'h')
with open(python2u, 'w') as fd:
fd.write('#!/usr/bin/python2u\n')
python3d = os.path.join(self.tempdir, 'i')
with open(python3d, 'w') as fd:
fd.write('#!/usr/local/bin/python3d\n')
python38m = os.path.join(self.tempdir, 'j')
with open(python38m, 'w') as fd:
fd.write('#! /usr/bin/env python3.8m\n')
python27 = os.path.join(self.tempdir, 'k')
with open(python27, 'w') as fd:
fd.write('#!/usr/bin/python2.7 \n')
# Should NOT be treated as Python source
notfirst = os.path.join(self.tempdir, 'l')
with open(notfirst, 'w') as fd:
fd.write('#!/bin/sh\n#!/usr/bin/python\n')
self.assertEqual(
sorted(iterSourceCode([self.tempdir])),
sorted([python, python2, python3, pythonw, python3args, python2u,
python3d, python38m, python27]))
def test_multipleDirectories(self):
"""
L{iterSourceCode} can be given multiple directories. It will recurse
into each of them.
"""
foopath = os.path.join(self.tempdir, 'foo')
barpath = os.path.join(self.tempdir, 'bar')
os.mkdir(foopath)
apath = self.makeEmptyFile('foo', 'a.py')
os.mkdir(barpath)
bpath = self.makeEmptyFile('bar', 'b.py')
self.assertEqual(
sorted(iterSourceCode([foopath, barpath])),
sorted([apath, bpath]))
def test_explicitFiles(self):
"""
If one of the paths given to L{iterSourceCode} is not a directory but
a file, it will include that in its output.
"""
epath = self.makeEmptyFile('e.py')
self.assertEqual(list(iterSourceCode([epath])),
[epath])
class TestReporter(TestCase):
"""
Tests for L{Reporter}.
"""
def test_syntaxError(self):
"""
C{syntaxError} reports that there was a syntax error in the source
file. It reports to the error stream and includes the filename, line
number, error message, actual line of source and a caret pointing to
where the error is.
"""
err = StringIO()
reporter = Reporter(None, err)
reporter.syntaxError('foo.py', 'a problem', 3,
8 if sys.version_info >= (3, 8) else 7,
'bad line of source')
self.assertEqual(
("foo.py:3:8: a problem\n"
"bad line of source\n"
" ^\n"),
err.getvalue())
def test_syntaxErrorNoOffset(self):
"""
C{syntaxError} doesn't include a caret pointing to the error if
C{offset} is passed as C{None}.
"""
err = StringIO()
reporter = Reporter(None, err)
reporter.syntaxError('foo.py', 'a problem', 3, None,
'bad line of source')
self.assertEqual(
("foo.py:3: a problem\n"
"bad line of source\n"),
err.getvalue())
def test_multiLineSyntaxError(self):
"""
If there's a multi-line syntax error, then we only report the last
line. The offset is adjusted so that it is relative to the start of
the last line.
"""
err = StringIO()
lines = [
'bad line of source',
'more bad lines of source',
]
reporter = Reporter(None, err)
reporter.syntaxError('foo.py', 'a problem', 3, len(lines[0]) + 7,
'\n'.join(lines))
column = 25 if sys.version_info >= (3, 8) else 7
self.assertEqual(
("foo.py:3:%d: a problem\n" % column +
lines[-1] + "\n" +
" " * (column - 1) + "^\n"),
err.getvalue())
def test_unexpectedError(self):
"""
C{unexpectedError} reports an error processing a source file.
"""
err = StringIO()
reporter = Reporter(None, err)
reporter.unexpectedError('source.py', 'error message')
self.assertEqual('source.py: error message\n', err.getvalue())
def test_flake(self):
"""
C{flake} reports a code warning from Pyflakes. It is exactly the
str() of a L{pyflakes.messages.Message}.
"""
out = StringIO()
reporter = Reporter(out, None)
message = UnusedImport('foo.py', Node(42), 'bar')
reporter.flake(message)
self.assertEqual(out.getvalue(), "%s\n" % (message,))
class CheckTests(TestCase):
"""
Tests for L{check} and L{checkPath} which check a file for flakes.
"""
@contextlib.contextmanager
def makeTempFile(self, content):
"""
Make a temporary file containing C{content} and return a path to it.
"""
fd, name = tempfile.mkstemp()
try:
with os.fdopen(fd, 'wb') as f:
if not hasattr(content, 'decode'):
content = content.encode('ascii')
f.write(content)
yield name
finally:
os.remove(name)
def assertHasErrors(self, path, errorList):
"""
Assert that C{path} causes errors.
@param path: A path to a file to check.
@param errorList: A list of errors expected to be printed to stderr.
"""
err = StringIO()
count = withStderrTo(err, checkPath, path)
self.assertEqual(
(count, err.getvalue()), (len(errorList), ''.join(errorList)))
def getErrors(self, path):
"""
Get any warnings or errors reported by pyflakes for the file at C{path}.
@param path: The path to a Python file on disk that pyflakes will check.
@return: C{(count, log)}, where C{count} is the number of warnings or
errors generated, and log is a list of those warnings, presented
as structured data. See L{LoggingReporter} for more details.
"""
log = []
reporter = LoggingReporter(log)
count = checkPath(path, reporter)
return count, log
def test_legacyScript(self):
from pyflakes.scripts import pyflakes as script_pyflakes
self.assertIs(script_pyflakes.checkPath, checkPath)
def test_missingTrailingNewline(self):
"""
Source which doesn't end with a newline shouldn't cause any
exception to be raised nor an error indicator to be returned by
L{check}.
"""
with self.makeTempFile("def foo():\n\tpass\n\t") as fName:
self.assertHasErrors(fName, [])
def test_checkPathNonExisting(self):
"""
L{checkPath} handles non-existing files.
"""
count, errors = self.getErrors('extremo')
self.assertEqual(count, 1)
self.assertEqual(
errors,
[('unexpectedError', 'extremo', 'No such file or directory')])
def test_multilineSyntaxError(self):
"""
Source which includes a syntax error which results in the raised
L{SyntaxError.text} containing multiple lines of source is reported
with only the last line of that source.
"""
source = """\
def foo():
'''
def bar():
pass
def baz():
'''quux'''
"""
# Sanity check - SyntaxError.text should be multiple lines, if it
# isn't, something this test was unprepared for has happened.
def evaluate(source):
exec(source)
try:
evaluate(source)
except SyntaxError:
e = sys.exc_info()[1]
if not PYPY:
self.assertTrue(e.text.count('\n') > 1)
else:
self.fail()
with self.makeTempFile(source) as sourcePath:
if PYPY:
message = 'end of file (EOF) while scanning triple-quoted string literal'
else:
message = 'invalid syntax'
column = 8 if sys.version_info >= (3, 8) else 11
self.assertHasErrors(
sourcePath,
["""\
%s:8:%d: %s
'''quux'''
%s^
""" % (sourcePath, column, message, ' ' * (column - 1))])
def test_eofSyntaxError(self):
"""
The error reported for source files which end prematurely causing a
syntax error reflects the cause for the syntax error.
"""
with self.makeTempFile("def foo(") as sourcePath:
if PYPY:
result = """\
%s:1:7: parenthesis is never closed
def foo(
^
""" % (sourcePath,)
else:
result = """\
%s:1:9: unexpected EOF while parsing
def foo(
^
""" % (sourcePath,)
self.assertHasErrors(
sourcePath,
[result])
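The expected strings above track the attributes CPython attaches to SyntaxError, which vary across interpreter versions. A quick stdlib probe of those attributes (an illustrative sketch, not part of the test suite):

```python
# Compile a prematurely-ended fragment and inspect the SyntaxError attributes
# that Reporter.syntaxError consumes (lineno, offset, text).
try:
    compile("def foo(", "<example>", "exec")
    err = None
except SyntaxError as exc:
    err = exc

# The message varies by version ("unexpected EOF while parsing" before
# CPython 3.10, "'(' was never closed" after), but the location attributes
# are stable enough for the reporter to build its caret line from.
print(err.lineno, err.offset, repr(err.text))
```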
def test_eofSyntaxErrorWithTab(self):
"""
The error reported for source files which end prematurely causing a
syntax error reflects the cause for the syntax error.
"""
with self.makeTempFile("if True:\n\tfoo =") as sourcePath:
column = 6 if PYPY else 7
last_line = '\t    ^' if PYPY else '\t     ^'
self.assertHasErrors(
sourcePath,
["""\
%s:2:%s: invalid syntax
\tfoo =
%s
""" % (sourcePath, column, last_line)])
def test_nonDefaultFollowsDefaultSyntaxError(self):
"""
Source which has a non-default argument following a default argument
should include the line number of the syntax error. However these
exceptions do not include an offset.
"""
source = """\
def foo(bar=baz, bax):
pass
"""
with self.makeTempFile(source) as sourcePath:
if ERROR_HAS_LAST_LINE:
if PYPY and sys.version_info >= (3,):
column = 7
elif sys.version_info >= (3, 8):
column = 9
else:
column = 8
last_line = ' ' * (column - 1) + '^\n'
columnstr = '%d:' % column
else:
last_line = columnstr = ''
self.assertHasErrors(
sourcePath,
["""\
%s:1:%s non-default argument follows default argument
def foo(bar=baz, bax):
%s""" % (sourcePath, columnstr, last_line)])
def test_nonKeywordAfterKeywordSyntaxError(self):
"""
Source which has a non-keyword argument after a keyword argument should
include the line number of the syntax error. However these exceptions
do not include an offset.
"""
source = """\
foo(bar=baz, bax)
"""
with self.makeTempFile(source) as sourcePath:
if ERROR_HAS_LAST_LINE:
if PYPY and sys.version_info >= (3,):
column = 12
elif sys.version_info >= (3, 8):
column = 14
else:
column = 13
last_line = ' ' * (column - 1) + '^\n'
columnstr = '%d:' % column
else:
last_line = columnstr = ''
if sys.version_info >= (3, 5):
message = 'positional argument follows keyword argument'
else:
message = 'non-keyword arg after keyword arg'
self.assertHasErrors(
sourcePath,
["""\
%s:1:%s %s
foo(bar=baz, bax)
%s""" % (sourcePath, columnstr, message, last_line)])
def test_invalidEscape(self):
"""
The invalid escape syntax raises ValueError in Python 2
"""
ver = sys.version_info
# ValueError: invalid \x escape
with self.makeTempFile(r"foo = '\xyz'") as sourcePath:
if ver < (3,):
decoding_error = "%s: problem decoding source\n" % (sourcePath,)
else:
position_end = 1
if PYPY:
column = 6
else:
column = 7
# Column has been "fixed" since 3.2.4 and 3.3.1
if ver < (3, 2, 4) or ver[:3] == (3, 3, 0):
position_end = 2
if ERROR_HAS_LAST_LINE:
last_line = '%s^\n' % (' ' * (column - 1))
else:
last_line = ''
decoding_error = """\
%s:1:%d: (unicode error) 'unicodeescape' codec can't decode bytes \
in position 0-%d: truncated \\xXX escape
foo = '\\xyz'
%s""" % (sourcePath, column, position_end, last_line)
self.assertHasErrors(
sourcePath, [decoding_error])
@skipIf(sys.platform == 'win32', 'unsupported on Windows')
def test_permissionDenied(self):
"""
If the source file is not readable, this is reported on standard
error.
"""
if os.getuid() == 0:
self.skipTest('root user can access all files regardless of '
'permissions')
with self.makeTempFile('') as sourcePath:
os.chmod(sourcePath, 0)
count, errors = self.getErrors(sourcePath)
self.assertEqual(count, 1)
self.assertEqual(
errors,
[('unexpectedError', sourcePath, "Permission denied")])
def test_pyflakesWarning(self):
"""
If the source file has a pyflakes warning, this is reported as a
'flake'.
"""
with self.makeTempFile("import foo") as sourcePath:
count, errors = self.getErrors(sourcePath)
self.assertEqual(count, 1)
self.assertEqual(
errors, [('flake', str(UnusedImport(sourcePath, Node(1), 'foo')))])
def test_encodedFileUTF8(self):
"""
If source file declares the correct encoding, no error is reported.
"""
SNOWMAN = unichr(0x2603)
source = ("""\
# coding: utf-8
x = "%s"
""" % SNOWMAN).encode('utf-8')
with self.makeTempFile(source) as sourcePath:
self.assertHasErrors(sourcePath, [])
def test_CRLFLineEndings(self):
"""
Source files with Windows CR LF line endings are parsed successfully.
"""
with self.makeTempFile("x = 42\r\n") as sourcePath:
self.assertHasErrors(sourcePath, [])
def test_misencodedFileUTF8(self):
"""
If a source file contains bytes which cannot be decoded, this is
reported on stderr.
"""
SNOWMAN = unichr(0x2603)
source = ("""\
# coding: ascii
x = "%s"
""" % SNOWMAN).encode('utf-8')
with self.makeTempFile(source) as sourcePath:
if PYPY and sys.version_info < (3, ):
message = ('\'ascii\' codec can\'t decode byte 0xe2 '
'in position 21: ordinal not in range(128)')
result = """\
%s:0:0: %s
x = "\xe2\x98\x83"
^\n""" % (sourcePath, message)
else:
message = 'problem decoding source'
result = "%s: problem decoding source\n" % (sourcePath,)
self.assertHasErrors(
sourcePath, [result])
def test_misencodedFileUTF16(self):
"""
If a source file contains bytes which cannot be decoded, this is
reported on stderr.
"""
SNOWMAN = unichr(0x2603)
source = ("""\
# coding: ascii
x = "%s"
""" % SNOWMAN).encode('utf-16')
with self.makeTempFile(source) as sourcePath:
self.assertHasErrors(
sourcePath, ["%s: problem decoding source\n" % (sourcePath,)])
def test_checkRecursive(self):
"""
L{checkRecursive} descends into each directory, finding Python files
and reporting problems.
"""
tempdir = tempfile.mkdtemp()
try:
os.mkdir(os.path.join(tempdir, 'foo'))
file1 = os.path.join(tempdir, 'foo', 'bar.py')
with open(file1, 'wb') as fd:
fd.write("import baz\n".encode('ascii'))
file2 = os.path.join(tempdir, 'baz.py')
with open(file2, 'wb') as fd:
fd.write("import contraband".encode('ascii'))
log = []
reporter = LoggingReporter(log)
warnings = checkRecursive([tempdir], reporter)
self.assertEqual(warnings, 2)
self.assertEqual(
sorted(log),
sorted([('flake', str(UnusedImport(file1, Node(1), 'baz'))),
('flake',
str(UnusedImport(file2, Node(1), 'contraband')))]))
finally:
shutil.rmtree(tempdir)
class IntegrationTests(TestCase):
"""
Tests of the pyflakes script that actually spawn the script.
"""
# https://bitbucket.org/pypy/pypy/issues/3069/pypy36-on-windows-incorrect-line-separator
if PYPY and sys.version_info >= (3,) and WIN:
LINESEP = '\n'
else:
LINESEP = os.linesep
def setUp(self):
self.tempdir = tempfile.mkdtemp()
self.tempfilepath = os.path.join(self.tempdir, 'temp')
def tearDown(self):
shutil.rmtree(self.tempdir)
def getPyflakesBinary(self):
"""
Return the path to the pyflakes binary.
"""
import pyflakes
package_dir = os.path.dirname(pyflakes.__file__)
return os.path.join(package_dir, '..', 'bin', 'pyflakes')
def runPyflakes(self, paths, stdin=None):
"""
Launch a subprocess running C{pyflakes}.
@param paths: Command-line arguments to pass to pyflakes.
@param stdin: Text to use as stdin.
@return: C{(returncode, stdout, stderr)} of the completed pyflakes
process.
"""
env = dict(os.environ)
env['PYTHONPATH'] = os.pathsep.join(sys.path)
command = [sys.executable, self.getPyflakesBinary()]
command.extend(paths)
if stdin:
p = subprocess.Popen(command, env=env, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = p.communicate(stdin.encode('ascii'))
else:
p = subprocess.Popen(command, env=env,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdout, stderr) = p.communicate()
rv = p.wait()
if sys.version_info >= (3,):
stdout = stdout.decode('utf-8')
stderr = stderr.decode('utf-8')
return (stdout, stderr, rv)
def test_goodFile(self):
"""
When a Python source file is all good, the return code is zero and no
messages are printed to either stdout or stderr.
"""
open(self.tempfilepath, 'a').close()
d = self.runPyflakes([self.tempfilepath])
self.assertEqual(d, ('', '', 0))
def test_fileWithFlakes(self):
"""
When a Python source file has warnings, the return code is non-zero
and the warnings are printed to stdout.
"""
with open(self.tempfilepath, 'wb') as fd:
fd.write("import contraband\n".encode('ascii'))
d = self.runPyflakes([self.tempfilepath])
expected = UnusedImport(self.tempfilepath, Node(1), 'contraband')
self.assertEqual(d, ("%s%s" % (expected, self.LINESEP), '', 1))
def test_errors_io(self):
"""
When pyflakes finds errors with the files it's given (if they don't
exist, say), then the return code is non-zero and the errors are
printed to stderr.
"""
d = self.runPyflakes([self.tempfilepath])
error_msg = '%s: No such file or directory%s' % (self.tempfilepath,
self.LINESEP)
self.assertEqual(d, ('', error_msg, 1))
def test_errors_syntax(self):
"""
When pyflakes encounters a syntax error in a file it's given, the
return code is non-zero and the error is printed to stderr.
"""
with open(self.tempfilepath, 'wb') as fd:
fd.write("import".encode('ascii'))
d = self.runPyflakes([self.tempfilepath])
error_msg = '{0}:1:{2}: invalid syntax{1}import{1} {3}^{1}'.format(
self.tempfilepath, self.LINESEP, 6 if PYPY else 7, '' if PYPY else ' ')
self.assertEqual(d, ('', error_msg, 1))
def test_readFromStdin(self):
"""
If no arguments are passed to C{pyflakes} then it reads from stdin.
"""
d = self.runPyflakes([], stdin='import contraband')
expected = UnusedImport('<stdin>', Node(1), 'contraband')
self.assertEqual(d, ("%s%s" % (expected, self.LINESEP), '', 1))
class TestMain(IntegrationTests):
"""
Tests of the pyflakes main function.
"""
LINESEP = os.linesep
def runPyflakes(self, paths, stdin=None):
try:
with SysStreamCapturing(stdin) as capture:
main(args=paths)
except SystemExit as e:
self.assertIsInstance(e.code, bool)
rv = int(e.code)
return (capture.output, capture.error, rv)
else:
raise RuntimeError('SystemExit not raised')
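L{TestMain} reruns the integration cases without spawning a subprocess by capturing the streams in-process and converting C{main}'s boolean SystemExit code. A minimal stdlib sketch of that capture pattern (the helper name C{run_captured} is illustrative, not part of pyflakes):

```python
import io
from contextlib import redirect_stdout, redirect_stderr

def run_captured(fn):
    """Run fn(), capturing stdout, stderr and any SystemExit code."""
    out, err = io.StringIO(), io.StringIO()
    code = 0
    with redirect_stdout(out), redirect_stderr(err):
        try:
            fn()
        except SystemExit as e:
            # pyflakes' main() exits with a bool, as the test above asserts
            code = int(bool(e.code))
    return out.getvalue(), err.getvalue(), code

print(run_captured(lambda: print("clean")))  # -> ('clean\n', '', 0)
```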

third_party/python/pyflakes/pyflakes/test/test_builtin.py (vendored, new file, 41 lines)
@@ -0,0 +1,41 @@
"""
Tests for detecting redefinition of builtins.
"""
from sys import version_info
from pyflakes import messages as m
from pyflakes.test.harness import TestCase, skipIf
class TestBuiltins(TestCase):
def test_builtin_unbound_local(self):
self.flakes('''
def foo():
a = range(1, 10)
range = a
return range
foo()
print(range)
''', m.UndefinedLocal)
def test_global_shadowing_builtin(self):
self.flakes('''
def f():
global range
range = None
print(range)
f()
''')
@skipIf(version_info >= (3,), 'not an UnboundLocalError in Python 3')
def test_builtin_in_comprehension(self):
self.flakes('''
def f():
[range for range in range(1, 10)]
f()
''', m.UndefinedLocal)
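The first case above flags a genuine runtime hazard: because `range` is assigned somewhere in the function body, Python treats the name as local throughout the function, so the earlier read fails. A plain-interpreter demonstration of what the warning predicts (stdlib only, not part of the vendored tests):

```python
def foo():
    a = range(1, 10)   # reads the *local* name "range" before it is bound
    range = a          # this assignment makes "range" local to all of foo()
    return range

try:
    foo()
    outcome = "no error"
except UnboundLocalError:
    outcome = "UnboundLocalError"
print(outcome)   # UnboundLocalError
```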

third_party/python/pyflakes/pyflakes/test/test_checker.py (vendored, new file, 186 lines)
@@ -0,0 +1,186 @@
import ast
import sys
from pyflakes import checker
from pyflakes.test.harness import TestCase, skipIf
class TypeableVisitorTests(TestCase):
"""
Tests of L{_TypeableVisitor}
"""
@staticmethod
def _run_visitor(s):
"""
Run L{_TypeableVisitor} on the parsed source and return the visitor.
"""
tree = ast.parse(s)
visitor = checker._TypeableVisitor()
visitor.visit(tree)
return visitor
def test_node_types(self):
"""
Test that the typeable node types are collected
"""
visitor = self._run_visitor(
"""\
x = 1 # assignment
for x in range(1): pass # for loop
def f(): pass # function definition
with a as b: pass # with statement
"""
)
self.assertEqual(visitor.typeable_lines, [1, 2, 3, 4])
self.assertIsInstance(visitor.typeable_nodes[1], ast.Assign)
self.assertIsInstance(visitor.typeable_nodes[2], ast.For)
self.assertIsInstance(visitor.typeable_nodes[3], ast.FunctionDef)
self.assertIsInstance(visitor.typeable_nodes[4], ast.With)
def test_visitor_recurses(self):
"""
Test the common pitfall of missing `generic_visit` in visitors by
ensuring that nested nodes are reported
"""
visitor = self._run_visitor(
"""\
def f():
x = 1
"""
)
self.assertEqual(visitor.typeable_lines, [1, 2])
self.assertIsInstance(visitor.typeable_nodes[1], ast.FunctionDef)
self.assertIsInstance(visitor.typeable_nodes[2], ast.Assign)
@skipIf(sys.version_info < (3, 5), 'async syntax introduced in py35')
def test_py35_node_types(self):
"""
Test that the PEP 492 node types are collected
"""
visitor = self._run_visitor(
"""\
async def f(): # async def
async for x in y: pass # async for
async with a as b: pass # async with
"""
)
self.assertEqual(visitor.typeable_lines, [1, 2, 3])
self.assertIsInstance(visitor.typeable_nodes[1], ast.AsyncFunctionDef)
self.assertIsInstance(visitor.typeable_nodes[2], ast.AsyncFor)
self.assertIsInstance(visitor.typeable_nodes[3], ast.AsyncWith)
def test_last_node_wins(self):
"""
Test that when two typeable nodes are present on a line, the last
typeable one wins.
"""
visitor = self._run_visitor('x = 1; y = 1')
# detected both assignable nodes
self.assertEqual(visitor.typeable_lines, [1, 1])
# but the assignment to `y` wins
self.assertEqual(visitor.typeable_nodes[1].targets[0].id, 'y')
class CollectTypeCommentsTests(TestCase):
"""
Tests of L{_collect_type_comments}
"""
@staticmethod
def _collect(s):
"""
Run L{_collect_type_comments} on the parsed source and return the
mapping from nodes to comments. The return value is converted to
a set: {(node_type, tuple of comments), ...}
"""
tree = ast.parse(s)
tokens = checker.make_tokens(s)
ret = checker._collect_type_comments(tree, tokens)
return {(type(k), tuple(s for _, s in v)) for k, v in ret.items()}
def test_bytes(self):
"""
Test that the function works for binary source
"""
ret = self._collect(b'x = 1 # type: int')
self.assertSetEqual(ret, {(ast.Assign, ('# type: int',))})
def test_text(self):
"""
Test that the function works for text source
"""
ret = self._collect(u'x = 1 # type: int')
self.assertEqual(ret, {(ast.Assign, ('# type: int',))})
def test_non_type_comment_ignored(self):
"""
Test that a non-type comment is ignored
"""
ret = self._collect('x = 1 # noqa')
self.assertSetEqual(ret, set())
def test_type_comment_before_typeable(self):
"""
Test that a type comment before something typeable is ignored.
"""
ret = self._collect('# type: int\nx = 1')
self.assertSetEqual(ret, set())
def test_type_ignore_comment_ignored(self):
"""
Test that `# type: ignore` comments are not collected.
"""
ret = self._collect('x = 1 # type: ignore')
self.assertSetEqual(ret, set())
def test_type_ignore_with_other_things_ignored(self):
"""
Test that `# type: ignore` comments with more content are also not
collected.
"""
ret = self._collect('x = 1 # type: ignore # noqa')
self.assertSetEqual(ret, set())
ret = self._collect('x = 1 #type:ignore#noqa')
self.assertSetEqual(ret, set())
def test_type_comment_with_extra_still_collected(self):
ret = self._collect('x = 1 # type: int # noqa')
self.assertSetEqual(ret, {(ast.Assign, ('# type: int # noqa',))})
def test_type_comment_without_whitespace(self):
ret = self._collect('x = 1 #type:int')
self.assertSetEqual(ret, {(ast.Assign, ('#type:int',))})
def test_type_comment_starts_with_word_ignore(self):
ret = self._collect('x = 1 # type: ignore[T]')
self.assertSetEqual(ret, set())
def test_last_node_wins(self):
"""
Test that when two typeable nodes are present on a line, the last
typeable one wins.
"""
ret = self._collect('def f(): x = 1 # type: int')
self.assertSetEqual(ret, {(ast.Assign, ('# type: int',))})
def test_function_def_assigned_comments(self):
"""
Test that type comments for function arguments are all attributed to
the function definition.
"""
ret = self._collect(
"""\
def f(
a, # type: int
b, # type: str
):
# type: (...) -> None
pass
"""
)
expected = {(
ast.FunctionDef,
('# type: int', '# type: str', '# type: (...) -> None'),
)}
self.assertSetEqual(ret, expected)
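pyflakes pairs its own token stream with the AST to build this mapping; the raw comment scan underneath can be sketched with nothing but the stdlib `tokenize` module (an illustration of the idea, not pyflakes' actual `_collect_type_comments` implementation):

```python
import io
import tokenize

def find_type_comments(source):
    """Return the '# type: ...' comments in source, skipping '# type: ignore'."""
    comments = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            squeezed = tok.string.replace(" ", "")
            # keep '# type:' comments, drop '# type: ignore' (and variants)
            if squeezed.startswith("#type:") and \
                    not squeezed.startswith("#type:ignore"):
                comments.append(tok.string)
    return comments

print(find_type_comments(
    "x = 1  # type: int\ny = 2  # noqa\nz = 3  # type: ignore\n"
))  # -> ['# type: int']
```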

third_party/python/pyflakes/pyflakes/test/test_code_segment.py (vendored, new file, 132 lines)
@@ -0,0 +1,132 @@
from sys import version_info
from pyflakes import messages as m
from pyflakes.checker import (FunctionScope, ClassScope, ModuleScope,
Argument, FunctionDefinition, Assignment)
from pyflakes.test.harness import TestCase, skipIf
class TestCodeSegments(TestCase):
"""
Tests for segments of a module
"""
def test_function_segment(self):
self.flakes('''
def foo():
def bar():
pass
''', is_segment=True)
self.flakes('''
def foo():
def bar():
x = 0
''', m.UnusedVariable, is_segment=True)
def test_class_segment(self):
self.flakes('''
class Foo:
class Bar:
pass
''', is_segment=True)
self.flakes('''
class Foo:
def bar():
x = 0
''', m.UnusedVariable, is_segment=True)
def test_scope_class(self):
checker = self.flakes('''
class Foo:
x = 0
def bar(a, b=1, *d, **e):
pass
''', is_segment=True)
scopes = checker.deadScopes
module_scopes = [
scope for scope in scopes if scope.__class__ is ModuleScope]
class_scopes = [
scope for scope in scopes if scope.__class__ is ClassScope]
function_scopes = [
scope for scope in scopes if scope.__class__ is FunctionScope]
# Ensure module scope is not present because we are analysing
# the inner contents of Foo
self.assertEqual(len(module_scopes), 0)
self.assertEqual(len(class_scopes), 1)
self.assertEqual(len(function_scopes), 1)
class_scope = class_scopes[0]
function_scope = function_scopes[0]
self.assertIsInstance(class_scope, ClassScope)
self.assertIsInstance(function_scope, FunctionScope)
self.assertIn('x', class_scope)
self.assertIn('bar', class_scope)
self.assertIn('a', function_scope)
self.assertIn('b', function_scope)
self.assertIn('d', function_scope)
self.assertIn('e', function_scope)
self.assertIsInstance(class_scope['bar'], FunctionDefinition)
self.assertIsInstance(class_scope['x'], Assignment)
self.assertIsInstance(function_scope['a'], Argument)
self.assertIsInstance(function_scope['b'], Argument)
self.assertIsInstance(function_scope['d'], Argument)
self.assertIsInstance(function_scope['e'], Argument)
def test_scope_function(self):
checker = self.flakes('''
def foo(a, b=1, *d, **e):
def bar(f, g=1, *h, **i):
pass
''', is_segment=True)
scopes = checker.deadScopes
module_scopes = [
scope for scope in scopes if scope.__class__ is ModuleScope]
function_scopes = [
scope for scope in scopes if scope.__class__ is FunctionScope]
# Ensure module scope is not present because we are analysing
# the inner contents of foo
self.assertEqual(len(module_scopes), 0)
self.assertEqual(len(function_scopes), 2)
function_scope_foo = function_scopes[1]
function_scope_bar = function_scopes[0]
self.assertIsInstance(function_scope_foo, FunctionScope)
self.assertIsInstance(function_scope_bar, FunctionScope)
self.assertIn('a', function_scope_foo)
self.assertIn('b', function_scope_foo)
self.assertIn('d', function_scope_foo)
self.assertIn('e', function_scope_foo)
self.assertIn('bar', function_scope_foo)
self.assertIn('f', function_scope_bar)
self.assertIn('g', function_scope_bar)
self.assertIn('h', function_scope_bar)
self.assertIn('i', function_scope_bar)
self.assertIsInstance(function_scope_foo['bar'], FunctionDefinition)
self.assertIsInstance(function_scope_foo['a'], Argument)
self.assertIsInstance(function_scope_foo['b'], Argument)
self.assertIsInstance(function_scope_foo['d'], Argument)
self.assertIsInstance(function_scope_foo['e'], Argument)
self.assertIsInstance(function_scope_bar['f'], Argument)
self.assertIsInstance(function_scope_bar['g'], Argument)
self.assertIsInstance(function_scope_bar['h'], Argument)
self.assertIsInstance(function_scope_bar['i'], Argument)
@skipIf(version_info < (3, 5), 'new in Python 3.5')
def test_scope_async_function(self):
self.flakes('async def foo(): pass', is_segment=True)
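The scope assertions above use pyflakes' own C{Scope} classes; the stdlib `symtable` module exposes the same nesting structure and is handy for eyeballing what a scope should contain (a comparison sketch, not part of the test suite):

```python
import symtable

# Same shape as test_scope_function: a function with positional, default,
# *args and **kwargs parameters, plus a nested function definition.
src = "def foo(a, b=1, *d, **e):\n    def bar(f):\n        pass\n"
top = symtable.symtable(src, "<example>", "exec")

foo_scope = top.get_children()[0]   # symbol table for foo's function scope
print(foo_scope.get_name())                 # foo
print(sorted(foo_scope.get_identifiers()))  # parameters plus the name 'bar'
```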

third_party/python/pyflakes/pyflakes/test/test_dict.py (vendored, new file, 213 lines)
@@ -0,0 +1,213 @@
"""
Tests for dict duplicate keys Pyflakes behavior.
"""
from sys import version_info
from pyflakes import messages as m
from pyflakes.test.harness import TestCase, skipIf
class Test(TestCase):
def test_duplicate_keys(self):
self.flakes(
"{'yes': 1, 'yes': 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
@skipIf(version_info < (3,),
"bytes and strings with same 'value' are not equal in python3")
def test_duplicate_keys_bytes_vs_unicode_py3(self):
self.flakes("{b'a': 1, u'a': 2}")
@skipIf(version_info < (3,),
"bytes and strings with same 'value' are not equal in python3")
def test_duplicate_values_bytes_vs_unicode_py3(self):
self.flakes(
"{1: b'a', 1: u'a'}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
@skipIf(version_info >= (3,),
"bytes and strings with same 'value' are equal in python2")
def test_duplicate_keys_bytes_vs_unicode_py2(self):
self.flakes(
"{b'a': 1, u'a': 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
@skipIf(version_info >= (3,),
"bytes and strings with same 'value' are equal in python2")
def test_duplicate_values_bytes_vs_unicode_py2(self):
self.flakes("{1: b'a', 1: u'a'}")
def test_multiple_duplicate_keys(self):
self.flakes(
"{'yes': 1, 'yes': 2, 'no': 2, 'no': 3}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_in_function(self):
self.flakes(
'''
def f(thing):
pass
f({'yes': 1, 'yes': 2})
''',
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_in_lambda(self):
self.flakes(
"lambda x: {(0,1): 1, (0,1): 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_tuples(self):
self.flakes(
"{(0,1): 1, (0,1): 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_tuples_int_and_float(self):
self.flakes(
"{(0,1): 1, (0,1.0): 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_ints(self):
self.flakes(
"{1: 1, 1: 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_bools(self):
self.flakes(
"{True: 1, True: 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_bools_false(self):
# Needed to ensure 2.x correctly coerces these from variables
self.flakes(
"{False: 1, False: 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_keys_none(self):
self.flakes(
"{None: 1, None: 2}",
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_variable_keys(self):
self.flakes(
'''
a = 1
{a: 1, a: 2}
''',
m.MultiValueRepeatedKeyVariable,
m.MultiValueRepeatedKeyVariable,
)
def test_duplicate_variable_values(self):
self.flakes(
'''
a = 1
b = 2
{1: a, 1: b}
''',
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_variable_values_same_value(self):
# Current behaviour is not to look up variable values. This is to
# confirm that.
self.flakes(
'''
a = 1
b = 1
{1: a, 1: b}
''',
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_duplicate_key_float_and_int(self):
"""
These do look like different values, but when it comes to their use as
keys, they compare as equal and so are actually duplicates.
The literal dict {1: 1, 1.0: 1} actually becomes {1.0: 1}.
"""
self.flakes(
'''
{1: 1, 1.0: 2}
''',
m.MultiValueRepeatedKeyLiteral,
m.MultiValueRepeatedKeyLiteral,
)
def test_no_duplicate_key_error_same_value(self):
self.flakes('''
{'yes': 1, 'yes': 1}
''')
def test_no_duplicate_key_errors(self):
self.flakes('''
{'yes': 1, 'no': 2}
''')
def test_no_duplicate_keys_tuples_same_first_element(self):
self.flakes("{(0,1): 1, (0,2): 1}")
def test_no_duplicate_key_errors_func_call(self):
self.flakes('''
def test(thing):
pass
test({True: 1, None: 2, False: 1})
''')
def test_no_duplicate_key_errors_bool_or_none(self):
self.flakes("{True: 1, None: 2, False: 1}")
def test_no_duplicate_key_errors_ints(self):
self.flakes('''
{1: 1, 2: 1}
''')
def test_no_duplicate_key_errors_vars(self):
self.flakes('''
test = 'yes'
rest = 'yes'
{test: 1, rest: 2}
''')
def test_no_duplicate_key_errors_tuples(self):
self.flakes('''
{(0,1): 1, (0,2): 1}
''')
def test_no_duplicate_key_errors_instance_attributes(self):
self.flakes('''
class Test():
pass
f = Test()
f.a = 1
{f.a: 1, f.a: 1}
''')
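The float/int and bool cases above come straight from Python's own key semantics: keys that compare equal and hash equal collapse into one slot, with the last write winning but the first key object kept. A plain interpreter demonstration (stdlib only):

```python
# 1, 1.0 and True all hash and compare equal, so they share one dict slot.
d = {1: "int", 1.0: "float", True: "bool"}
print(d)       # {1: 'bool'}
print(len(d))  # 1

# Tuples hash element-wise, so (0, 1) and (0, 1.0) also collide.
print({(0, 1): "a", (0, 1.0): "b"})   # {(0, 1): 'b'}
```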

third_party/python/pyflakes/pyflakes/test/test_doctests.py (vendored, new file, 465 lines)
@@ -0,0 +1,465 @@
import sys
import textwrap
from pyflakes import messages as m
from pyflakes.checker import (
DoctestScope,
FunctionScope,
ModuleScope,
)
from pyflakes.test.test_other import Test as TestOther
from pyflakes.test.test_imports import Test as TestImports
from pyflakes.test.test_undefined_names import Test as TestUndefinedNames
from pyflakes.test.harness import TestCase, skip
try:
sys.pypy_version_info
PYPY = True
except AttributeError:
PYPY = False
class _DoctestMixin(object):
withDoctest = True
def doctestify(self, input):
lines = []
for line in textwrap.dedent(input).splitlines():
if line.strip() == '':
pass
elif (line.startswith(' ') or
line.startswith('except:') or
line.startswith('except ') or
line.startswith('finally:') or
line.startswith('else:') or
line.startswith('elif ') or
(lines and lines[-1].startswith(('>>> @', '... @')))):
line = "... %s" % line
else:
line = ">>> %s" % line
lines.append(line)
doctestificator = textwrap.dedent('''\
def doctest_something():
"""
%s
"""
''')
return doctestificator % "\n ".join(lines)
def flakes(self, input, *args, **kw):
return super(_DoctestMixin, self).flakes(self.doctestify(input), *args, **kw)
class Test(TestCase):
withDoctest = True
def test_scope_class(self):
"""Check that a doctest is given a DoctestScope."""
checker = self.flakes("""
m = None
def doctest_stuff():
'''
>>> d = doctest_stuff()
'''
f = m
return f
""")
scopes = checker.deadScopes
module_scopes = [
scope for scope in scopes if scope.__class__ is ModuleScope]
doctest_scopes = [
scope for scope in scopes if scope.__class__ is DoctestScope]
function_scopes = [
scope for scope in scopes if scope.__class__ is FunctionScope]
self.assertEqual(len(module_scopes), 1)
self.assertEqual(len(doctest_scopes), 1)
module_scope = module_scopes[0]
doctest_scope = doctest_scopes[0]
self.assertIsInstance(doctest_scope, DoctestScope)
self.assertIsInstance(doctest_scope, ModuleScope)
self.assertNotIsInstance(doctest_scope, FunctionScope)
self.assertNotIsInstance(module_scope, DoctestScope)
self.assertIn('m', module_scope)
self.assertIn('doctest_stuff', module_scope)
self.assertIn('d', doctest_scope)
self.assertEqual(len(function_scopes), 1)
self.assertIn('f', function_scopes[0])
def test_nested_doctest_ignored(self):
"""Check that nested doctests are ignored."""
checker = self.flakes("""
m = None
def doctest_stuff():
'''
>>> def function_in_doctest():
... \"\"\"
... >>> ignored_undefined_name
... \"\"\"
... df = m
... return df
...
>>> function_in_doctest()
'''
f = m
return f
""")
scopes = checker.deadScopes
module_scopes = [
scope for scope in scopes if scope.__class__ is ModuleScope]
doctest_scopes = [
scope for scope in scopes if scope.__class__ is DoctestScope]
function_scopes = [
scope for scope in scopes if scope.__class__ is FunctionScope]
self.assertEqual(len(module_scopes), 1)
self.assertEqual(len(doctest_scopes), 1)
module_scope = module_scopes[0]
doctest_scope = doctest_scopes[0]
self.assertIn('m', module_scope)
self.assertIn('doctest_stuff', module_scope)
self.assertIn('function_in_doctest', doctest_scope)
self.assertEqual(len(function_scopes), 2)
self.assertIn('f', function_scopes[0])
self.assertIn('df', function_scopes[1])
def test_global_module_scope_pollution(self):
"""Check that global in doctest does not pollute module scope."""
checker = self.flakes("""
def doctest_stuff():
'''
>>> def function_in_doctest():
... global m
... m = 50
... df = 10
... m = df
...
>>> function_in_doctest()
'''
f = 10
return f
""")
scopes = checker.deadScopes
module_scopes = [
scope for scope in scopes if scope.__class__ is ModuleScope]
doctest_scopes = [
scope for scope in scopes if scope.__class__ is DoctestScope]
function_scopes = [
scope for scope in scopes if scope.__class__ is FunctionScope]
self.assertEqual(len(module_scopes), 1)
self.assertEqual(len(doctest_scopes), 1)
module_scope = module_scopes[0]
doctest_scope = doctest_scopes[0]
self.assertIn('doctest_stuff', module_scope)
self.assertIn('function_in_doctest', doctest_scope)
self.assertEqual(len(function_scopes), 2)
self.assertIn('f', function_scopes[0])
self.assertIn('df', function_scopes[1])
self.assertIn('m', function_scopes[1])
self.assertNotIn('m', module_scope)
def test_global_undefined(self):
self.flakes("""
global m
def doctest_stuff():
'''
>>> m
'''
""", m.UndefinedName)
def test_nested_class(self):
"""Doctest within nested class are processed."""
self.flakes("""
class C:
class D:
'''
>>> m
'''
def doctest_stuff(self):
'''
>>> m
'''
return 1
""", m.UndefinedName, m.UndefinedName)
def test_ignore_nested_function(self):
"""Doctest module does not process doctest in nested functions."""
# 'syntax error' would cause a SyntaxError if the doctest was processed.
# However doctest does not find doctest in nested functions
# (https://bugs.python.org/issue1650090). If nested functions were
# processed, this use of m should cause UndefinedName, and the
# name inner_function should probably exist in the doctest scope.
self.flakes("""
def doctest_stuff():
def inner_function():
'''
>>> syntax error
>>> inner_function()
1
>>> m
'''
return 1
m = inner_function()
return m
""")
def test_inaccessible_scope_class(self):
"""Doctest may not access class scope."""
self.flakes("""
class C:
def doctest_stuff(self):
'''
>>> m
'''
return 1
m = 1
""", m.UndefinedName)
def test_importBeforeDoctest(self):
self.flakes("""
import foo
def doctest_stuff():
'''
>>> foo
'''
""")
@skip("todo")
def test_importBeforeAndInDoctest(self):
self.flakes('''
import foo
def doctest_stuff():
"""
>>> import foo
>>> foo
"""
foo
''', m.RedefinedWhileUnused)
def test_importInDoctestAndAfter(self):
self.flakes('''
def doctest_stuff():
"""
>>> import foo
>>> foo
"""
import foo
foo()
''')
def test_offsetInDoctests(self):
exc = self.flakes('''
def doctest_stuff():
"""
>>> x # line 5
"""
''', m.UndefinedName).messages[0]
self.assertEqual(exc.lineno, 5)
self.assertEqual(exc.col, 12)
def test_offsetInLambdasInDoctests(self):
exc = self.flakes('''
def doctest_stuff():
"""
>>> lambda: x # line 5
"""
''', m.UndefinedName).messages[0]
self.assertEqual(exc.lineno, 5)
self.assertEqual(exc.col, 20)
def test_offsetAfterDoctests(self):
exc = self.flakes('''
def doctest_stuff():
"""
>>> x = 5
"""
x
''', m.UndefinedName).messages[0]
self.assertEqual(exc.lineno, 8)
self.assertEqual(exc.col, 0)
def test_syntaxErrorInDoctest(self):
exceptions = self.flakes(
'''
def doctest_stuff():
"""
>>> from # line 4
>>> fortytwo = 42
>>> except Exception:
"""
''',
m.DoctestSyntaxError,
m.DoctestSyntaxError,
m.DoctestSyntaxError).messages
exc = exceptions[0]
self.assertEqual(exc.lineno, 4)
if PYPY:
self.assertEqual(exc.col, 27)
elif sys.version_info >= (3, 8):
self.assertEqual(exc.col, 18)
else:
self.assertEqual(exc.col, 26)
# PyPy error column offset is 0,
# for the second and third line of the doctest
# i.e. at the beginning of the line
exc = exceptions[1]
self.assertEqual(exc.lineno, 5)
if PYPY:
self.assertEqual(exc.col, 14)
else:
self.assertEqual(exc.col, 16)
exc = exceptions[2]
self.assertEqual(exc.lineno, 6)
if PYPY:
self.assertEqual(exc.col, 14)
elif sys.version_info >= (3, 8):
self.assertEqual(exc.col, 13)
else:
self.assertEqual(exc.col, 18)
def test_indentationErrorInDoctest(self):
exc = self.flakes('''
def doctest_stuff():
"""
>>> if True:
... pass
"""
''', m.DoctestSyntaxError).messages[0]
self.assertEqual(exc.lineno, 5)
if PYPY:
self.assertEqual(exc.col, 14)
elif sys.version_info >= (3, 8):
self.assertEqual(exc.col, 13)
else:
self.assertEqual(exc.col, 16)
def test_offsetWithMultiLineArgs(self):
(exc1, exc2) = self.flakes(
'''
def doctest_stuff(arg1,
arg2,
arg3):
"""
>>> assert
>>> this
"""
''',
m.DoctestSyntaxError,
m.UndefinedName).messages
self.assertEqual(exc1.lineno, 6)
if PYPY:
self.assertEqual(exc1.col, 20)
else:
self.assertEqual(exc1.col, 19)
self.assertEqual(exc2.lineno, 7)
self.assertEqual(exc2.col, 12)
def test_doctestCanReferToFunction(self):
self.flakes("""
def foo():
'''
>>> foo
'''
""")
def test_doctestCanReferToClass(self):
self.flakes("""
class Foo():
'''
>>> Foo
'''
def bar(self):
'''
>>> Foo
'''
""")
def test_noOffsetSyntaxErrorInDoctest(self):
exceptions = self.flakes(
'''
def buildurl(base, *args, **kwargs):
"""
>>> buildurl('/blah.php', ('a', '&'), ('b', '=')
'/blah.php?a=%26&b=%3D'
>>> buildurl('/blah.php', a='&', 'b'='=')
'/blah.php?b=%3D&a=%26'
"""
pass
''',
m.DoctestSyntaxError,
m.DoctestSyntaxError).messages
exc = exceptions[0]
self.assertEqual(exc.lineno, 4)
exc = exceptions[1]
self.assertEqual(exc.lineno, 6)
def test_singleUnderscoreInDoctest(self):
self.flakes('''
def func():
"""A docstring
>>> func()
1
>>> _
1
"""
return 1
''')
def test_globalUnderscoreInDoctest(self):
self.flakes("""
from gettext import ugettext as _
def doctest_stuff():
'''
>>> pass
'''
""", m.UnusedImport)
class TestOther(_DoctestMixin, TestOther):
"""Run TestOther with each test wrapped in a doctest."""
class TestImports(_DoctestMixin, TestImports):
"""Run TestImports with each test wrapped in a doctest."""
class TestUndefinedNames(_DoctestMixin, TestUndefinedNames):
"""Run TestUndefinedNames with each test wrapped in a doctest."""

third_party/python/pyflakes/pyflakes/test/test_imports.py (vendored, new file, 1221 lines): diff not shown due to its size.

third_party/python/pyflakes/pyflakes/test/test_is_literal.py (vendored, new file, 222 lines)
@@ -0,0 +1,222 @@
from pyflakes.messages import IsLiteral
from pyflakes.test.harness import TestCase
class Test(TestCase):
def test_is_str(self):
self.flakes("""
x = 'foo'
if x is 'foo':
pass
""", IsLiteral)
def test_is_bytes(self):
self.flakes("""
x = b'foo'
if x is b'foo':
pass
""", IsLiteral)
def test_is_unicode(self):
self.flakes("""
x = u'foo'
if x is u'foo':
pass
""", IsLiteral)
def test_is_int(self):
self.flakes("""
x = 10
if x is 10:
pass
""", IsLiteral)
def test_is_true(self):
self.flakes("""
x = True
if x is True:
pass
""")
def test_is_false(self):
self.flakes("""
x = False
if x is False:
pass
""")
def test_is_not_str(self):
self.flakes("""
x = 'foo'
if x is not 'foo':
pass
""", IsLiteral)
def test_is_not_bytes(self):
self.flakes("""
x = b'foo'
if x is not b'foo':
pass
""", IsLiteral)
def test_is_not_unicode(self):
self.flakes("""
x = u'foo'
if x is not u'foo':
pass
""", IsLiteral)
def test_is_not_int(self):
self.flakes("""
x = 10
if x is not 10:
pass
""", IsLiteral)
def test_is_not_true(self):
self.flakes("""
x = True
if x is not True:
pass
""")
def test_is_not_false(self):
self.flakes("""
x = False
if x is not False:
pass
""")
def test_left_is_str(self):
self.flakes("""
x = 'foo'
if 'foo' is x:
pass
""", IsLiteral)
def test_left_is_bytes(self):
self.flakes("""
x = b'foo'
if b'foo' is x:
pass
""", IsLiteral)
def test_left_is_unicode(self):
self.flakes("""
x = u'foo'
if u'foo' is x:
pass
""", IsLiteral)
def test_left_is_int(self):
self.flakes("""
x = 10
if 10 is x:
pass
""", IsLiteral)
def test_left_is_true(self):
self.flakes("""
x = True
if True is x:
pass
""")
def test_left_is_false(self):
self.flakes("""
x = False
if False is x:
pass
""")
def test_left_is_not_str(self):
self.flakes("""
x = 'foo'
if 'foo' is not x:
pass
""", IsLiteral)
def test_left_is_not_bytes(self):
self.flakes("""
x = b'foo'
if b'foo' is not x:
pass
""", IsLiteral)
def test_left_is_not_unicode(self):
self.flakes("""
x = u'foo'
if u'foo' is not x:
pass
""", IsLiteral)
def test_left_is_not_int(self):
self.flakes("""
x = 10
if 10 is not x:
pass
""", IsLiteral)
def test_left_is_not_true(self):
self.flakes("""
x = True
if True is not x:
pass
""")
def test_left_is_not_false(self):
self.flakes("""
x = False
if False is not x:
pass
""")
def test_chained_operators_is_true(self):
self.flakes("""
x = 5
if x is True < 4:
pass
""")
def test_chained_operators_is_str(self):
self.flakes("""
x = 5
if x is 'foo' < 4:
pass
""", IsLiteral)
def test_chained_operators_is_true_end(self):
self.flakes("""
x = 5
if 4 < x is True:
pass
""")
def test_chained_operators_is_str_end(self):
self.flakes("""
x = 5
if 4 < x is 'foo':
pass
""", IsLiteral)
def test_is_tuple_constant(self):
self.flakes('''\
x = 5
if x is ():
pass
''', IsLiteral)
def test_is_tuple_constant_containing_constants(self):
self.flakes('''\
x = 5
if x is (1, '2', True, (1.5, ())):
pass
''', IsLiteral)
def test_is_tuple_containing_variables_ok(self):
# a bit nonsensical, but does not trigger a SyntaxWarning
self.flakes('''\
x = 5
if x is (x,):
pass
''')
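The `IsLiteral` warnings above exist because `is` compares object identity rather than equality, and identity between equal literals is a CPython implementation detail. A small stdlib-only illustration (the names here are invented for the sketch):

```python
def compare(x, y):
    # == tests equality (well defined); `is` tests identity, which the
    # interpreter may or may not share between equal immutable objects
    return x == y, x is y

eq, ident = compare(1000, int("1000"))
assert eq                     # equality is guaranteed
print("identical:", ident)    # implementation-defined, so only reported
```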


third_party/python/pyflakes/pyflakes/test/test_other.py (vendored, new file, 2142 lines): diff not shown due to its size.

(vendored new file)
@@ -0,0 +1,34 @@
from sys import version_info
from pyflakes import messages as m
from pyflakes.test.harness import TestCase, skipIf
class Test(TestCase):
@skipIf(version_info >= (3, 3), 'new in Python 3.3')
def test_return(self):
self.flakes('''
class a:
def b():
for x in a.c:
if x:
yield x
return a
''', m.ReturnWithArgsInsideGenerator)
@skipIf(version_info >= (3, 3), 'new in Python 3.3')
def test_returnNone(self):
self.flakes('''
def a():
yield 12
return None
''', m.ReturnWithArgsInsideGenerator)
@skipIf(version_info >= (3, 3), 'new in Python 3.3')
def test_returnYieldExpression(self):
self.flakes('''
def a():
b = yield a
return b
''', m.ReturnWithArgsInsideGenerator)
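These tests are skipped on Python 3.3+ because PEP 380 made `return value` legal inside a generator; the returned value surfaces as `StopIteration.value`. A quick stdlib check of that behaviour:

```python
def gen():
    yield 1
    return 42   # legal since Python 3.3 (PEP 380)

g = gen()
assert next(g) == 1
try:
    next(g)
except StopIteration as stop:
    result = stop.value   # the generator's return value
assert result == 42
```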

(vendored new file)
@@ -0,0 +1,554 @@
"""
Tests for behaviour related to type annotations.
"""
from sys import version_info
from pyflakes import messages as m
from pyflakes.test.harness import TestCase, skipIf
class TestTypeAnnotations(TestCase):
def test_typingOverload(self):
"""Allow intentional redefinitions via @typing.overload"""
self.flakes("""
import typing
from typing import overload
@overload
def f(s): # type: (None) -> None
pass
@overload
def f(s): # type: (int) -> int
pass
def f(s):
return s
@typing.overload
def g(s): # type: (None) -> None
pass
@typing.overload
def g(s): # type: (int) -> int
pass
def g(s):
return s
""")
def test_typingExtensionsOverload(self):
"""Allow intentional redefinitions via @typing_extensions.overload"""
self.flakes("""
import typing_extensions
from typing_extensions import overload
@overload
def f(s): # type: (None) -> None
pass
@overload
def f(s): # type: (int) -> int
pass
def f(s):
return s
@typing_extensions.overload
def g(s): # type: (None) -> None
pass
@typing_extensions.overload
def g(s): # type: (int) -> int
pass
def g(s):
return s
""")
@skipIf(version_info < (3, 5), 'new in Python 3.5')
def test_typingOverloadAsync(self):
"""Allow intentional redefinitions via @typing.overload (async)"""
self.flakes("""
from typing import overload
@overload
async def f(s): # type: (None) -> None
pass
@overload
async def f(s): # type: (int) -> int
pass
async def f(s):
return s
""")
def test_overload_with_multiple_decorators(self):
self.flakes("""
from typing import overload
dec = lambda f: f
@dec
@overload
def f(x): # type: (int) -> int
pass
@dec
@overload
def f(x): # type: (str) -> str
pass
@dec
def f(x): return x
""")
def test_overload_in_class(self):
self.flakes("""
from typing import overload
class C:
@overload
def f(self, x): # type: (int) -> int
pass
@overload
def f(self, x): # type: (str) -> str
pass
def f(self, x): return x
""")
def test_not_a_typing_overload(self):
"""regression test for @typing.overload detection bug in 2.1.0"""
self.flakes("""
def foo(x):
return x
@foo
def bar():
pass
def bar():
pass
""", m.RedefinedWhileUnused)
@skipIf(version_info < (3, 6), 'new in Python 3.6')
def test_variable_annotations(self):
self.flakes('''
name: str
age: int
''')
self.flakes('''
name: str = 'Bob'
age: int = 18
''')
self.flakes('''
class C:
name: str
age: int
''')
self.flakes('''
class C:
name: str = 'Bob'
age: int = 18
''')
self.flakes('''
def f():
name: str
age: int
''')
self.flakes('''
def f():
name: str = 'Bob'
age: int = 18
foo: not_a_real_type = None
''', m.UnusedVariable, m.UnusedVariable, m.UnusedVariable, m.UndefinedName)
self.flakes('''
def f():
name: str
print(name)
''', m.UndefinedName)
self.flakes('''
from typing import Any
def f():
a: Any
''')
self.flakes('''
foo: not_a_real_type
''', m.UndefinedName)
self.flakes('''
foo: not_a_real_type = None
''', m.UndefinedName)
self.flakes('''
class C:
foo: not_a_real_type
''', m.UndefinedName)
self.flakes('''
class C:
foo: not_a_real_type = None
''', m.UndefinedName)
self.flakes('''
def f():
class C:
foo: not_a_real_type
''', m.UndefinedName)
self.flakes('''
def f():
class C:
foo: not_a_real_type = None
''', m.UndefinedName)
self.flakes('''
from foo import Bar
bar: Bar
''')
self.flakes('''
from foo import Bar
bar: 'Bar'
''')
self.flakes('''
import foo
bar: foo.Bar
''')
self.flakes('''
import foo
bar: 'foo.Bar'
''')
self.flakes('''
from foo import Bar
def f(bar: Bar): pass
''')
self.flakes('''
from foo import Bar
def f(bar: 'Bar'): pass
''')
self.flakes('''
from foo import Bar
def f(bar) -> Bar: return bar
''')
self.flakes('''
from foo import Bar
def f(bar) -> 'Bar': return bar
''')
self.flakes('''
bar: 'Bar'
''', m.UndefinedName)
self.flakes('''
bar: 'foo.Bar'
''', m.UndefinedName)
self.flakes('''
from foo import Bar
bar: str
''', m.UnusedImport)
self.flakes('''
from foo import Bar
def f(bar: str): pass
''', m.UnusedImport)
self.flakes('''
def f(a: A) -> A: pass
class A: pass
''', m.UndefinedName, m.UndefinedName)
self.flakes('''
def f(a: 'A') -> 'A': return a
class A: pass
''')
self.flakes('''
a: A
class A: pass
''', m.UndefinedName)
self.flakes('''
a: 'A'
class A: pass
''')
self.flakes('''
a: 'A B'
''', m.ForwardAnnotationSyntaxError)
self.flakes('''
a: 'A; B'
''', m.ForwardAnnotationSyntaxError)
self.flakes('''
a: '1 + 2'
''')
self.flakes('''
a: 'a: "A"'
''', m.ForwardAnnotationSyntaxError)
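For reference, the runtime behaviour these checks build on: an annotation records the name in `__annotations__`, and a bare annotation creates no binding. A stdlib-only sketch (class `C` is illustrative):

```python
class C:
    name: str = "Bob"
    age: int            # annotated but never assigned

assert C.__annotations__ == {"name": str, "age": int}
assert C.name == "Bob"
assert not hasattr(C, "age")   # the annotation alone creates no attribute
```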
@skipIf(version_info < (3, 5), 'new in Python 3.5')
def test_annotated_async_def(self):
self.flakes('''
class c: pass
async def func(c: c) -> None: pass
''')
@skipIf(version_info < (3, 7), 'new in Python 3.7')
def test_postponed_annotations(self):
self.flakes('''
from __future__ import annotations
def f(a: A) -> A: pass
class A:
b: B
class B: pass
''')
self.flakes('''
from __future__ import annotations
def f(a: A) -> A: pass
class A:
b: Undefined
class B: pass
''', m.UndefinedName)
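Under `from __future__ import annotations` (PEP 563), annotations are stored as strings and never evaluated at definition time, which is why the forward references above are accepted. A runnable sketch:

```python
from __future__ import annotations   # PEP 563: postponed evaluation

def f(a: A) -> A:    # A is not defined yet; fine, nothing is evaluated
    return a

class A:
    pass

# the annotations stay as plain strings until explicitly resolved
assert f.__annotations__ == {"a": "A", "return": "A"}
```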
def test_typeCommentsMarkImportsAsUsed(self):
self.flakes("""
from mod import A, B, C, D, E, F, G
def f(
a, # type: A
):
# type: (...) -> B
for b in a: # type: C
with b as c: # type: D
d = c.x # type: E
return d
def g(x): # type: (F) -> G
return x.y
""")
def test_typeCommentsFullSignature(self):
self.flakes("""
from mod import A, B, C, D
def f(a, b):
# type: (A, B[C]) -> D
return a + b
""")
def test_typeCommentsStarArgs(self):
self.flakes("""
from mod import A, B, C, D
def f(a, *b, **c):
# type: (A, *B, **C) -> D
return a + b
""")
def test_typeCommentsFullSignatureWithDocstring(self):
self.flakes('''
from mod import A, B, C, D
def f(a, b):
# type: (A, B[C]) -> D
"""do the thing!"""
return a + b
''')
def test_typeCommentsAdditionalComment(self):
self.flakes("""
from mod import F
x = 1 # type: F # noqa
""")
def test_typeCommentsNoWhitespaceAnnotation(self):
self.flakes("""
from mod import F
x = 1 #type:F
""")
def test_typeCommentsInvalidDoesNotMarkAsUsed(self):
self.flakes("""
from mod import F
# type: F
""", m.UnusedImport)
def test_typeCommentsSyntaxError(self):
self.flakes("""
def f(x): # type: (F[) -> None
pass
""", m.CommentAnnotationSyntaxError)
def test_typeCommentsSyntaxErrorCorrectLine(self):
checker = self.flakes("""\
x = 1
# type: definitely not a PEP 484 comment
""", m.CommentAnnotationSyntaxError)
self.assertEqual(checker.messages[0].lineno, 2)
def test_typeCommentsAssignedToPreviousNode(self):
# This test demonstrates an issue in the implementation which
# associates the type comment with a node above it, however the type
# comment isn't valid according to mypy. If an improved approach
# which can detect these "invalid" type comments is implemented, this
# test should be removed / improved to assert that new check.
self.flakes("""
from mod import F
x = 1
# type: F
""")
def test_typeIgnore(self):
self.flakes("""
a = 0 # type: ignore
b = 0 # type: ignore[excuse]
c = 0 # type: ignore=excuse
d = 0 # type: ignore [excuse]
e = 0 # type: ignore whatever
""")
def test_typeIgnoreBogus(self):
self.flakes("""
x = 1 # type: ignored
""", m.UndefinedName)
def test_typeIgnoreBogusUnicode(self):
error = (m.CommentAnnotationSyntaxError if version_info < (3,)
else m.UndefinedName)
self.flakes("""
x = 2 # type: ignore\xc3
""", error)
@skipIf(version_info < (3,), 'new in Python 3')
def test_return_annotation_is_class_scope_variable(self):
self.flakes("""
from typing import TypeVar
class Test:
Y = TypeVar('Y')
def t(self, x: Y) -> Y:
return x
""")
@skipIf(version_info < (3,), 'new in Python 3')
def test_return_annotation_is_function_body_variable(self):
self.flakes("""
class Test:
def t(self) -> Y:
Y = 2
return Y
""", m.UndefinedName)
@skipIf(version_info < (3, 8), 'new in Python 3.8')
def test_positional_only_argument_annotations(self):
self.flakes("""
from x import C
def f(c: C, /): ...
""")
@skipIf(version_info < (3,), 'new in Python 3')
def test_partially_quoted_type_annotation(self):
self.flakes("""
from queue import Queue
from typing import Optional
def f() -> Optional['Queue[str]']:
return None
""")
def test_partially_quoted_type_assignment(self):
self.flakes("""
from queue import Queue
from typing import Optional
MaybeQueue = Optional['Queue[str]']
""")
def test_nested_partially_quoted_type_assignment(self):
self.flakes("""
from queue import Queue
from typing import Callable
Func = Callable[['Queue[str]'], None]
""")
def test_quoted_type_cast(self):
self.flakes("""
from typing import cast, Optional
maybe_int = cast('Optional[int]', 42)
""")
def test_type_cast_literal_str_to_str(self):
# Checks that our handling of quoted type annotations in the first
# argument to `cast` doesn't cause issues when (only) the _second_
# argument is a literal str which looks a bit like a type annotation.
self.flakes("""
from typing import cast
a_string = cast(str, 'Optional[int]')
""")
def test_quoted_type_cast_renamed_import(self):
self.flakes("""
from typing import cast as tsac, Optional as Maybe
maybe_int = tsac('Maybe[int]', 42)
""")
@skipIf(version_info < (3,), 'new in Python 3')
def test_literal_type_typing(self):
self.flakes("""
from typing import Literal
def f(x: Literal['some string']) -> None:
return None
""")
@skipIf(version_info < (3,), 'new in Python 3')
def test_literal_type_typing_extensions(self):
self.flakes("""
from typing_extensions import Literal
def f(x: Literal['some string']) -> None:
return None
""")
@skipIf(version_info < (3,), 'new in Python 3')
def test_literal_type_some_other_module(self):
"""err on the side of false-negatives for types named Literal"""
self.flakes("""
from my_module import compat
from my_module.compat import Literal
def f(x: compat.Literal['some string']) -> None:
return None
def g(x: Literal['some string']) -> None:
return None
""")
@skipIf(version_info < (3,), 'new in Python 3')
def test_literal_union_type_typing(self):
self.flakes("""
from typing import Literal
def f(x: Literal['some string', 'foo bar']) -> None:
return None
""")
@skipIf(version_info < (3,), 'new in Python 3')
def test_deferred_twice_annotation(self):
self.flakes("""
from queue import Queue
from typing import Optional
def f() -> "Optional['Queue[str]']":
return None
""")
@skipIf(version_info < (3, 7), 'new in Python 3.7')
def test_partial_string_annotations_with_future_annotations(self):
self.flakes("""
from __future__ import annotations
from queue import Queue
from typing import Optional
def f() -> Optional['Queue[str]']:
return None
""")


(vendored new file)
@@ -0,0 +1,854 @@
import ast
from sys import version_info
from pyflakes import messages as m, checker
from pyflakes.test.harness import TestCase, skipIf, skip
class Test(TestCase):
def test_undefined(self):
self.flakes('bar', m.UndefinedName)
def test_definedInListComp(self):
self.flakes('[a for a in range(10) if a]')
@skipIf(version_info < (3,),
'in Python 2 list comprehensions execute in the same scope')
def test_undefinedInListComp(self):
self.flakes('''
[a for a in range(10)]
a
''',
m.UndefinedName)
@skipIf(version_info < (3,),
'in Python 2 exception names stay bound after the except: block')
def test_undefinedExceptionName(self):
"""Exception names can't be used after the except: block.
The exc variable is unused inside the exception handler."""
self.flakes('''
try:
raise ValueError('ve')
except ValueError as exc:
pass
exc
''', m.UndefinedName, m.UnusedVariable)
def test_namesDeclaredInExceptBlocks(self):
"""Locals declared in except: blocks can be used after the block.
This shows the example in test_undefinedExceptionName is
different."""
self.flakes('''
try:
raise ValueError('ve')
except ValueError as exc:
e = exc
e
''')
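The distinction the two tests above draw comes from Python 3 deleting the `except ... as` name when the handler exits; anything copied out of the handler survives. A runnable sketch:

```python
def demo():
    try:
        raise ValueError("ve")
    except ValueError as exc:
        saved = str(exc)    # copy out before the handler ends
    try:
        exc                 # deleted when the except block exited
    except NameError:       # UnboundLocalError is a NameError subclass
        return saved

assert demo() == "ve"
```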
@skip('error reporting disabled due to false positives below')
def test_undefinedExceptionNameObscuringLocalVariable(self):
"""Exception names obscure locals, can't be used after.
Last line will raise UnboundLocalError on Python 3 after exiting
the except: block. Note next two examples for false positives to
watch out for."""
self.flakes('''
exc = 'Original value'
try:
raise ValueError('ve')
except ValueError as exc:
pass
exc
''',
m.UndefinedName)
@skipIf(version_info < (3,),
'in Python 2 exception names stay bound after the except: block')
def test_undefinedExceptionNameObscuringLocalVariable2(self):
"""Exception names are unbound after the `except:` block.
Last line will raise UnboundLocalError on Python 3 but would print out
've' on Python 2. The exc variable is unused inside the exception
handler."""
self.flakes('''
try:
raise ValueError('ve')
except ValueError as exc:
pass
print(exc)
exc = 'Original value'
''', m.UndefinedName, m.UnusedVariable)
def test_undefinedExceptionNameObscuringLocalVariableFalsePositive1(self):
"""Exception names obscure locals, can't be used after. Unless.
Last line will never raise UnboundLocalError because it's only
entered if no exception was raised."""
# The exc variable is unused inside the exception handler.
expected = [] if version_info < (3,) else [m.UnusedVariable]
self.flakes('''
exc = 'Original value'
try:
raise ValueError('ve')
except ValueError as exc:
print('exception logged')
raise
exc
''', *expected)
def test_delExceptionInExcept(self):
"""The exception name can be deleted in the except: block."""
self.flakes('''
try:
pass
except Exception as exc:
del exc
''')
def test_undefinedExceptionNameObscuringLocalVariableFalsePositive2(self):
"""Exception names obscure locals, can't be used after. Unless.
Last line will never raise UnboundLocalError because `error` is
only falsy if the `except:` block has not been entered."""
# The exc variable is unused inside the exception handler.
expected = [] if version_info < (3,) else [m.UnusedVariable]
self.flakes('''
exc = 'Original value'
error = None
try:
raise ValueError('ve')
except ValueError as exc:
error = 'exception logged'
if error:
print(error)
else:
exc
''', *expected)
@skip('error reporting disabled due to false positives below')
def test_undefinedExceptionNameObscuringGlobalVariable(self):
"""Exception names obscure globals, can't be used after.
Last line will raise UnboundLocalError on both Python 2 and
Python 3 because the existence of that exception name creates
a local scope placeholder for it, obscuring any globals, etc."""
self.flakes('''
exc = 'Original value'
def func():
try:
pass # nothing is raised
except ValueError as exc:
pass # block never entered, exc stays unbound
exc
''',
m.UndefinedLocal)
@skip('error reporting disabled due to false positives below')
def test_undefinedExceptionNameObscuringGlobalVariable2(self):
"""Exception names obscure globals, can't be used after.
Last line will raise NameError on Python 3 because the name is
locally unbound after the `except:` block, even if it's
nonlocal. We should issue an error in this case because code that
only works correctly when no exception is raised is invalid.
Unless it's explicitly silenced, see false positives below."""
self.flakes('''
exc = 'Original value'
def func():
global exc
try:
raise ValueError('ve')
except ValueError as exc:
pass # block never entered, exc stays unbound
exc
''',
m.UndefinedLocal)
def test_undefinedExceptionNameObscuringGlobalVariableFalsePositive1(self):
"""Exception names obscure globals, can't be used after. Unless.
Last line will never raise NameError because it's only entered
if no exception was raised."""
# The exc variable is unused inside the exception handler.
expected = [] if version_info < (3,) else [m.UnusedVariable]
self.flakes('''
exc = 'Original value'
def func():
global exc
try:
raise ValueError('ve')
except ValueError as exc:
print('exception logged')
raise
exc
''', *expected)
def test_undefinedExceptionNameObscuringGlobalVariableFalsePositive2(self):
"""Exception names obscure globals, can't be used after. Unless.
Last line will never raise NameError because `error` is only
falsy if the `except:` block has not been entered."""
# The exc variable is unused inside the exception handler.
expected = [] if version_info < (3,) else [m.UnusedVariable]
self.flakes('''
exc = 'Original value'
def func():
global exc
error = None
try:
raise ValueError('ve')
except ValueError as exc:
error = 'exception logged'
if error:
print(error)
else:
exc
''', *expected)
def test_functionsNeedGlobalScope(self):
self.flakes('''
class a:
def b():
fu
fu = 1
''')
def test_builtins(self):
self.flakes('range(10)')
def test_builtinWindowsError(self):
"""
C{WindowsError} is sometimes a builtin name, so no warning is emitted
for using it.
"""
self.flakes('WindowsError')
@skipIf(version_info < (3, 6), 'new feature in 3.6')
def test_moduleAnnotations(self):
"""
Use of the C{__annotations__} in module scope should not emit
an undefined name warning when version is greater than or equal to 3.6.
"""
self.flakes('__annotations__')
def test_magicGlobalsFile(self):
"""
Use of the C{__file__} magic global should not emit an undefined name
warning.
"""
self.flakes('__file__')
def test_magicGlobalsBuiltins(self):
"""
Use of the C{__builtins__} magic global should not emit an undefined
name warning.
"""
self.flakes('__builtins__')
def test_magicGlobalsName(self):
"""
Use of the C{__name__} magic global should not emit an undefined name
warning.
"""
self.flakes('__name__')
def test_magicGlobalsPath(self):
"""
Use of the C{__path__} magic global should not emit an undefined name
warning, if you refer to it from a file called __init__.py.
"""
self.flakes('__path__', m.UndefinedName)
self.flakes('__path__', filename='package/__init__.py')
def test_magicModuleInClassScope(self):
"""
Use of the C{__module__} magic builtin should not emit an undefined
name warning if used in class scope.
"""
self.flakes('__module__', m.UndefinedName)
self.flakes('''
class Foo:
__module__
''')
self.flakes('''
class Foo:
def bar(self):
__module__
''', m.UndefinedName)
def test_globalImportStar(self):
"""Can't find undefined names with import *."""
self.flakes('from fu import *; bar',
m.ImportStarUsed, m.ImportStarUsage)
@skipIf(version_info >= (3,), 'obsolete syntax')
def test_localImportStar(self):
"""
A local import * still allows undefined names to be found
in upper scopes.
"""
self.flakes('''
def a():
from fu import *
bar
''', m.ImportStarUsed, m.UndefinedName, m.UnusedImport)
@skipIf(version_info >= (3,), 'obsolete syntax')
def test_unpackedParameter(self):
"""Unpacked function parameters create bindings."""
self.flakes('''
def a((bar, baz)):
bar; baz
''')
def test_definedByGlobal(self):
"""
"global" can make an otherwise undefined name in another function
defined.
"""
self.flakes('''
def a(): global fu; fu = 1
def b(): fu
''')
self.flakes('''
def c(): bar
def b(): global bar; bar = 1
''')
def test_definedByGlobalMultipleNames(self):
"""
"global" can accept multiple names.
"""
self.flakes('''
def a(): global fu, bar; fu = 1; bar = 2
def b(): fu; bar
''')
def test_globalInGlobalScope(self):
"""
A global statement in the global scope is ignored.
"""
self.flakes('''
global x
def foo():
print(x)
''', m.UndefinedName)
def test_global_reset_name_only(self):
"""A global statement does not prevent other names being undefined."""
# Only different undefined names are reported.
# See following test that fails where the same name is used.
self.flakes('''
def f1():
s
def f2():
global m
''', m.UndefinedName)
@skip("todo")
def test_unused_global(self):
"""An unused global statement does not define the name."""
self.flakes('''
def f1():
m
def f2():
global m
''', m.UndefinedName)
def test_del(self):
"""Del deletes bindings."""
self.flakes('a = 1; del a; a', m.UndefinedName)
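Runtime counterpart of the check above: `del` removes the binding, so a later read fails:

```python
def demo():
    a = 1
    del a                   # removes the local binding
    try:
        return a
    except UnboundLocalError:
        return "deleted"

assert demo() == "deleted"
```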
def test_delGlobal(self):
"""Del a global binding from a function."""
self.flakes('''
a = 1
def f():
global a
del a
a
''')
def test_delUndefined(self):
"""Del an undefined name."""
self.flakes('del a', m.UndefinedName)
def test_delConditional(self):
"""
Ignores conditional bindings deletion.
"""
self.flakes('''
context = None
test = True
if False:
del(test)
assert(test)
''')
def test_delConditionalNested(self):
"""
Ignored conditional bindings deletion even if they are nested in other
blocks.
"""
self.flakes('''
context = None
test = True
if False:
with context():
del(test)
assert(test)
''')
def test_delWhile(self):
"""
Ignore bindings deletion if called inside the body of a while
statement.
"""
self.flakes('''
def test():
foo = 'bar'
while False:
del foo
assert(foo)
''')
def test_delWhileTestUsage(self):
"""
Ignore bindings deletion if called inside the body of a while
statement and name is used inside while's test part.
"""
self.flakes('''
def _worker():
o = True
while o is not True:
del o
o = False
''')
def test_delWhileNested(self):
"""
Ignore bindings deletions if node is part of while's test, even when
del is in a nested block.
"""
self.flakes('''
context = None
def _worker():
o = True
while o is not True:
while True:
with context():
del o
o = False
''')
def test_globalFromNestedScope(self):
"""Global names are available from nested scopes."""
self.flakes('''
a = 1
def b():
def c():
a
''')
def test_laterRedefinedGlobalFromNestedScope(self):
"""
Test that referencing a local name that shadows a global, before it is
defined, generates a warning.
"""
self.flakes('''
a = 1
def fun():
a
a = 2
return a
''', m.UndefinedLocal)
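The `UndefinedLocal` warning corresponds to a real runtime failure: any assignment anywhere in the function body makes the name local for the whole body, so the early read fails even though a global of the same name exists. Sketch:

```python
a = 1

def fun():
    try:
        return a            # local (see assignment below), not yet bound
    except UnboundLocalError:
        return "shadowed before assignment"
    a = 2                   # unreachable, but still makes `a` local to fun

assert fun() == "shadowed before assignment"
```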
def test_laterRedefinedGlobalFromNestedScope2(self):
"""
Test that referencing a local name in a nested scope that shadows a
global declared in an enclosing scope, before it is defined, generates
a warning.
"""
self.flakes('''
a = 1
def fun():
global a
def fun2():
a
a = 2
return a
''', m.UndefinedLocal)
def test_intermediateClassScopeIgnored(self):
"""
If a name defined in an enclosing scope is shadowed by a local variable
and the name is used locally before it is bound, an unbound local
warning is emitted, even if there is a class scope between the enclosing
scope and the local scope.
"""
self.flakes('''
def f():
x = 1
class g:
def h(self):
a = x
x = None
print(x, a)
print(x)
''', m.UndefinedLocal)
def test_doubleNestingReportsClosestName(self):
"""
Test that referencing a local name in a nested scope that shadows a
variable declared in two different outer scopes before it is defined
in the innermost scope generates an UnboundLocal warning which
refers to the nearest shadowed name.
"""
exc = self.flakes('''
def a():
x = 1
def b():
x = 2 # line 5
def c():
x
x = 3
return x
return x
return x
''', m.UndefinedLocal).messages[0]
# _DoctestMixin.flakes adds two lines preceding the code above.
expected_line_num = 7 if self.withDoctest else 5
self.assertEqual(exc.message_args, ('x', expected_line_num))
def test_laterRedefinedGlobalFromNestedScope3(self):
"""
Test that referencing a local name in a nested scope that shadows a
global, before it is defined, generates a warning.
"""
self.flakes('''
def fun():
a = 1
def fun2():
a
a = 1
return a
return a
''', m.UndefinedLocal)
def test_undefinedAugmentedAssignment(self):
self.flakes(
'''
def f(seq):
a = 0
seq[a] += 1
seq[b] /= 2
c[0] *= 2
a -= 3
d += 4
e[any] = 5
''',
m.UndefinedName, # b
m.UndefinedName, # c
m.UndefinedName, m.UnusedVariable, # d
m.UndefinedName, # e
)
def test_nestedClass(self):
"""Nested classes can access enclosing scope."""
self.flakes('''
def f(foo):
class C:
bar = foo
def f(self):
return foo
return C()
f(123).f()
''')
def test_badNestedClass(self):
"""Free variables in nested classes must bind at class creation."""
self.flakes('''
def f():
class C:
bar = foo
foo = 456
return foo
f()
''', m.UndefinedName)
def test_definedAsStarArgs(self):
"""Star and double-star arg names are defined."""
self.flakes('''
def f(a, *b, **c):
print(a, b, c)
''')
@skipIf(version_info < (3,), 'new in Python 3')
def test_definedAsStarUnpack(self):
"""Star names in unpack are defined."""
self.flakes('''
a, *b = range(10)
print(a, b)
''')
self.flakes('''
*a, b = range(10)
print(a, b)
''')
self.flakes('''
a, *b, c = range(10)
print(a, b, c)
''')
@skipIf(version_info < (3,), 'new in Python 3')
def test_usedAsStarUnpack(self):
"""
Star names in unpack are used if RHS is not a tuple/list literal.
"""
self.flakes('''
def f():
a, *b = range(10)
''')
self.flakes('''
def f():
(*a, b) = range(10)
''')
self.flakes('''
def f():
[a, *b, c] = range(10)
''')
@skipIf(version_info < (3,), 'new in Python 3')
def test_unusedAsStarUnpack(self):
"""
Star names in unpack are unused if RHS is a tuple/list literal.
"""
self.flakes('''
def f():
a, *b = any, all, 4, 2, 'un'
''', m.UnusedVariable, m.UnusedVariable)
self.flakes('''
def f():
(*a, b) = [bool, int, float, complex]
''', m.UnusedVariable, m.UnusedVariable)
self.flakes('''
def f():
[a, *b, c] = 9, 8, 7, 6, 5, 4
''', m.UnusedVariable, m.UnusedVariable, m.UnusedVariable)
@skipIf(version_info < (3,), 'new in Python 3')
def test_keywordOnlyArgs(self):
"""Keyword-only arg names are defined."""
self.flakes('''
def f(*, a, b=None):
print(a, b)
''')
self.flakes('''
import default_b
def f(*, a, b=default_b):
print(a, b)
''')
@skipIf(version_info < (3,), 'new in Python 3')
def test_keywordOnlyArgsUndefined(self):
"""Typo in kwonly name."""
self.flakes('''
def f(*, a, b=default_c):
print(a, b)
''', m.UndefinedName)
@skipIf(version_info < (3,), 'new in Python 3')
def test_annotationUndefined(self):
"""Undefined annotations."""
self.flakes('''
from abc import note1, note2, note3, note4, note5
def func(a: note1, *args: note2,
b: note3=12, **kw: note4) -> note5: pass
''')
self.flakes('''
def func():
d = e = 42
def func(a: {1, d}) -> (lambda c: e): pass
''')
@skipIf(version_info < (3,), 'new in Python 3')
def test_metaClassUndefined(self):
self.flakes('''
from abc import ABCMeta
class A(metaclass=ABCMeta): pass
''')
def test_definedInGenExp(self):
"""
Using the loop variable of a generator expression results in no
warnings.
"""
self.flakes('(a for a in [1, 2, 3] if a)')
self.flakes('(b for b in (a for a in [1, 2, 3] if a) if b)')
def test_undefinedInGenExpNested(self):
"""
The loop variables of generator expressions nested together are
not defined in the other generator.
"""
self.flakes('(b for b in (a for a in [1, 2, 3] if b) if b)',
m.UndefinedName)
self.flakes('(b for b in (a for a in [1, 2, 3] if a) if a)',
m.UndefinedName)
def test_undefinedWithErrorHandler(self):
"""
Some compatibility code checks explicitly for NameError.
It should not trigger warnings.
"""
self.flakes('''
try:
socket_map
except NameError:
socket_map = {}
''')
self.flakes('''
try:
_memoryview.contiguous
except (NameError, AttributeError):
raise RuntimeError("Python >= 3.3 is required")
''')
# If NameError is not explicitly handled, generate a warning
self.flakes('''
try:
socket_map
except:
socket_map = {}
''', m.UndefinedName)
self.flakes('''
try:
socket_map
except Exception:
socket_map = {}
''', m.UndefinedName)
def test_definedInClass(self):
"""
Defined name for generator expressions and dict/set comprehension.
"""
self.flakes('''
class A:
T = range(10)
Z = (x for x in T)
L = [x for x in T]
B = dict((i, str(i)) for i in T)
''')
self.flakes('''
class A:
T = range(10)
X = {x for x in T}
Y = {x:x for x in T}
''')
def test_definedInClassNested(self):
"""Defined name for nested generator expressions in a class."""
self.flakes('''
class A:
T = range(10)
Z = (x for x in (a for a in T))
''')
def test_undefinedInLoop(self):
"""
The loop variable is defined after the expression is computed.
"""
self.flakes('''
for i in range(i):
print(i)
''', m.UndefinedName)
self.flakes('''
[42 for i in range(i)]
''', m.UndefinedName)
self.flakes('''
(42 for i in range(i))
''', m.UndefinedName)
def test_definedFromLambdaInDictionaryComprehension(self):
"""
Defined name referenced from a lambda function within a dict/set
comprehension.
"""
self.flakes('''
{lambda: id(x) for x in range(10)}
''')
def test_definedFromLambdaInGenerator(self):
"""
Defined name referenced from a lambda function within a generator
expression.
"""
self.flakes('''
any(lambda: id(x) for x in range(10))
''')
def test_undefinedFromLambdaInDictionaryComprehension(self):
"""
Undefined name referenced from a lambda function within a dict/set
comprehension.
"""
self.flakes('''
{lambda: id(y) for x in range(10)}
''', m.UndefinedName)
def test_undefinedFromLambdaInComprehension(self):
"""
Undefined name referenced from a lambda function within a generator
expression.
"""
self.flakes('''
any(lambda: id(y) for x in range(10))
''', m.UndefinedName)
def test_dunderClass(self):
"""
`__class__` is defined in class scope under Python 3, but is not
in Python 2.
"""
code = '''
class Test(object):
def __init__(self):
print(__class__.__name__)
self.x = 1
t = Test()
'''
if version_info < (3,):
self.flakes(code, m.UndefinedName)
else:
self.flakes(code)
class NameTests(TestCase):
"""
Tests for some extra cases of name handling.
"""
def test_impossibleContext(self):
"""
A Name node with an unrecognized context results in a RuntimeError being
raised.
"""
tree = ast.parse("x = 10")
file_tokens = checker.make_tokens("x = 10")
# Make it into something unrecognizable.
tree.body[0].targets[0].ctx = object()
self.assertRaises(RuntimeError, checker.Checker, tree, file_tokens=file_tokens)
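The `test_impossibleContext` case above patches a `Name` node's `ctx` with an unrecognized object; in a well-formed tree every `ast.Name` carries an `ast.Load`, `ast.Store`, or `ast.Del` context, which is what the checker dispatches on. A minimal stdlib-only sketch (no pyflakes required) of inspecting those contexts:

```python
import ast

tree = ast.parse("x = 10\nprint(x)")
# Collect (identifier, context class) pairs for every Name node in the tree.
names = sorted(
    (node.id, type(node.ctx).__name__)
    for node in ast.walk(tree)
    if isinstance(node, ast.Name)
)
# The assignment target binds x with a Store context; the call site
# reads both print and x with Load contexts.
print(names)
```

Replacing a node's `ctx` with anything outside this small set leaves the checker no valid dispatch branch, which is why it raises `RuntimeError` rather than guessing.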

10
third_party/python/pyflakes/setup.cfg vendored Normal file

@@ -0,0 +1,10 @@
[bdist_wheel]
universal = 1
[metadata]
license_file = LICENSE
[egg_info]
tag_build =
tag_date = 0

66
third_party/python/pyflakes/setup.py vendored Executable file

@@ -0,0 +1,66 @@
#!/usr/bin/env python
# Copyright 2005-2011 Divmod, Inc.
# Copyright 2013 Florent Xicluna. See LICENSE file for details
from __future__ import with_statement
import os.path
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
extra = {'scripts': ["bin/pyflakes"]}
else:
extra = {
'test_suite': 'pyflakes.test',
'entry_points': {
'console_scripts': ['pyflakes = pyflakes.api:main'],
},
}
def get_version(fname=os.path.join('pyflakes', '__init__.py')):
with open(fname) as f:
for line in f:
if line.startswith('__version__'):
return eval(line.split('=')[-1])
def get_long_description():
descr = []
for fname in ('README.rst',):
with open(fname) as f:
descr.append(f.read())
return '\n\n'.join(descr)
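The `get_version()` helper above scans `pyflakes/__init__.py` for the `__version__` line and evaluates its right-hand side. A self-contained sketch of the same pattern, using `ast.literal_eval` in place of `eval` (a hypothetical variant for illustration, not the vendored code):

```python
import ast
import io


def parse_version(text):
    # Mirrors get_version(): find the "__version__ = '...'" line and
    # evaluate only its right-hand side as a Python literal.
    for line in io.StringIO(text):
        if line.startswith("__version__"):
            return ast.literal_eval(line.split("=", 1)[1].strip())


print(parse_version("__version__ = '2.2.0'\n"))
```

`ast.literal_eval` only accepts literals, so a malformed or malicious right-hand side raises instead of executing arbitrary code.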
setup(
name="pyflakes",
license="MIT",
version=get_version(),
description="passive checker of Python programs",
long_description=get_long_description(),
author="A lot of people",
author_email="code-quality@python.org",
url="https://github.com/PyCQA/pyflakes",
packages=["pyflakes", "pyflakes.scripts", "pyflakes.test"],
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
classifiers=[
"Development Status :: 6 - Mature",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Software Development",
"Topic :: Utilities",
],
**extra)

2
third_party/python/requirements.in vendored

@@ -21,6 +21,7 @@ biplist==1.0.3
blessings==1.7
compare-locales==8.0.0
cookies==2.2.1
coverage==5.1
distro==1.4.0
ecdsa==0.15
esprima==4.0.1
@@ -34,6 +35,7 @@ pipenv==2018.5.18
ply==3.10
psutil==5.7.0
pyasn1==0.4.8
pyflakes==2.2.0
pytest==3.6.2
python-hglib==2.4
pytoml==0.1.10

37
third_party/python/requirements.txt vendored

@@ -30,6 +30,39 @@ cookies==2.2.1 \
--hash=sha256:15bee753002dff684987b8df8c235288eb8d45f8191ae056254812dfd42c81d3 \
--hash=sha256:d6b698788cae4cfa4e62ef8643a9ca332b79bd96cb314294b864ae8d7eb3ee8e \
# via -r requirements-mach-vendor-python.in
coverage==5.1 \
--hash=sha256:00f1d23f4336efc3b311ed0d807feb45098fc86dee1ca13b3d6768cdab187c8a \
--hash=sha256:01333e1bd22c59713ba8a79f088b3955946e293114479bbfc2e37d522be03355 \
--hash=sha256:0cb4be7e784dcdc050fc58ef05b71aa8e89b7e6636b99967fadbdba694cf2b65 \
--hash=sha256:0e61d9803d5851849c24f78227939c701ced6704f337cad0a91e0972c51c1ee7 \
--hash=sha256:1601e480b9b99697a570cea7ef749e88123c04b92d84cedaa01e117436b4a0a9 \
--hash=sha256:2742c7515b9eb368718cd091bad1a1b44135cc72468c731302b3d641895b83d1 \
--hash=sha256:2d27a3f742c98e5c6b461ee6ef7287400a1956c11421eb574d843d9ec1f772f0 \
--hash=sha256:402e1744733df483b93abbf209283898e9f0d67470707e3c7516d84f48524f55 \
--hash=sha256:5c542d1e62eece33c306d66fe0a5c4f7f7b3c08fecc46ead86d7916684b36d6c \
--hash=sha256:5f2294dbf7875b991c381e3d5af2bcc3494d836affa52b809c91697449d0eda6 \
--hash=sha256:6402bd2fdedabbdb63a316308142597534ea8e1895f4e7d8bf7476c5e8751fef \
--hash=sha256:66460ab1599d3cf894bb6baee8c684788819b71a5dc1e8fa2ecc152e5d752019 \
--hash=sha256:782caea581a6e9ff75eccda79287daefd1d2631cc09d642b6ee2d6da21fc0a4e \
--hash=sha256:79a3cfd6346ce6c13145731d39db47b7a7b859c0272f02cdb89a3bdcbae233a0 \
--hash=sha256:7a5bdad4edec57b5fb8dae7d3ee58622d626fd3a0be0dfceda162a7035885ecf \
--hash=sha256:8fa0cbc7ecad630e5b0f4f35b0f6ad419246b02bc750de7ac66db92667996d24 \
--hash=sha256:a027ef0492ede1e03a8054e3c37b8def89a1e3c471482e9f046906ba4f2aafd2 \
--hash=sha256:a3f3654d5734a3ece152636aad89f58afc9213c6520062db3978239db122f03c \
--hash=sha256:a82b92b04a23d3c8a581fc049228bafde988abacba397d57ce95fe95e0338ab4 \
--hash=sha256:acf3763ed01af8410fc36afea23707d4ea58ba7e86a8ee915dfb9ceff9ef69d0 \
--hash=sha256:adeb4c5b608574a3d647011af36f7586811a2c1197c861aedb548dd2453b41cd \
--hash=sha256:b83835506dfc185a319031cf853fa4bb1b3974b1f913f5bb1a0f3d98bdcded04 \
--hash=sha256:bb28a7245de68bf29f6fb199545d072d1036a1917dca17a1e75bbb919e14ee8e \
--hash=sha256:bf9cb9a9fd8891e7efd2d44deb24b86d647394b9705b744ff6f8261e6f29a730 \
--hash=sha256:c317eaf5ff46a34305b202e73404f55f7389ef834b8dbf4da09b9b9b37f76dd2 \
--hash=sha256:dbe8c6ae7534b5b024296464f387d57c13caa942f6d8e6e0346f27e509f0f768 \
--hash=sha256:de807ae933cfb7f0c7d9d981a053772452217df2bf38e7e6267c9cbf9545a796 \
--hash=sha256:dead2ddede4c7ba6cb3a721870f5141c97dc7d85a079edb4bd8d88c3ad5b20c7 \
--hash=sha256:dec5202bfe6f672d4511086e125db035a52b00f1648d6407cc8e526912c0353a \
--hash=sha256:e1ea316102ea1e1770724db01998d1603ed921c54a86a2efcb03428d5417e489 \
--hash=sha256:f90bfc4ad18450c80b024036eaf91e4a246ae287701aaa88eaebebf150868052 \
# via -r requirements-mach-vendor-python.in
distro==1.4.0 \
--hash=sha256:362dde65d846d23baee4b5c058c8586f219b5a54be1cf5fc6ff55c4578392f57 \
--hash=sha256:eedf82a470ebe7d010f1872c17237c79ab04097948800029994fa458e52fb4b4 \
@@ -104,6 +137,10 @@ pyasn1==0.4.8 \
--hash=sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d \
--hash=sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba \
# via -r requirements-mach-vendor-python.in
pyflakes==2.2.0 \
--hash=sha256:0d94e0e05a19e57a99444b6ddcf9a6eb2e5c68d3ca1e98e90707af8152c90a92 \
--hash=sha256:35b2d75ee967ea93b55750aa9edbbf72813e06a66ba54438df2cfac9e3c27fc8 \
# via -r requirements-mach-vendor-python.in
pytest==3.6.2 \
--hash=sha256:8ea01fc4fcc8e1b1e305252b4bc80a1528019ab99fd3b88666c9dc38d754406c \
--hash=sha256:90898786b3d0b880b47645bae7b51aa9bbf1e9d1e4510c2cfd15dd65c70ea0cd \
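Each `--hash=sha256:` pin above is the hex SHA-256 digest of a released sdist or wheel for that package; when hashes are present, pip runs in hash-checking mode and recomputes the digest of every downloaded artifact before installing it. A minimal sketch of how such a pin is derived (the input bytes here are a placeholder, not a real artifact):

```python
import hashlib


def sha256_pin(artifact_bytes: bytes) -> str:
    # Format matches the "--hash=sha256:<hexdigest>" entries in
    # requirements.txt: algorithm name, colon, then 64 hex characters.
    return "sha256:" + hashlib.sha256(artifact_bytes).hexdigest()


print(sha256_pin(b"placeholder artifact bytes"))
```

Pinning hashes this way means the vendoring commit fails fast if a mirror or index ever serves a tampered or re-uploaded distribution.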