Bug 1382362 - Update pytest to v3.1.3 and py to v1.4.34, r=davehunt

This patch was generated by something similar to:

$ cd third_party/python
$ hg rm pytest/* py/*
$ pip wheel pytest
$ unzip pytest.whl
$ unzip py.whl
$ hg add pytest/* py/*
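
A quick post-vendoring sanity check might look like the following (a hypothetical
sketch, not part of the original patch; the sys.path entries are an assumption
about the in-tree layout). It only confirms the vendored packages import and
report the versions this commit introduces:

  # Hypothetical check: verify the unpacked packages report the expected versions.
  import sys
  sys.path.insert(0, "third_party/python/py")      # assumed in-tree location of py
  sys.path.insert(0, "third_party/python/pytest")  # assumed in-tree location of pytest

  import py
  import pytest

  assert py.__version__ == "1.4.34", py.__version__
  assert pytest.__version__ == "3.1.3", pytest.__version__
  print("vendored py %s, pytest %s" % (py.__version__, pytest.__version__))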

MozReview-Commit-ID: 3LKVrbKfMgK

--HG--
extra : rebase_source : 4204340a78501a8e44e83dbf9cae63a7e91541ef
Andrew Halberstadt 2017-07-19 16:50:57 -04:00
Parent c73e53ea66
Commit 5c71ab391c
102 changed files: 12149 additions and 11391 deletions

24
third_party/python/py/AUTHORS (vendored)

@@ -1,24 +0,0 @@
Holger Krekel, holger at merlinux eu
Benjamin Peterson, benjamin at python org
Ronny Pfannschmidt, Ronny.Pfannschmidt at gmx de
Guido Wesdorp, johnny at johnnydebris net
Samuele Pedroni, pedronis at openend se
Carl Friedrich Bolz, cfbolz at gmx de
Armin Rigo, arigo at tunes org
Maciek Fijalkowski, fijal at genesilico pl
Brian Dorsey, briandorsey at gmail com
Floris Bruynooghe, flub at devork be
merlinux GmbH, Germany, office at merlinux eu
Contributors include::
Ross Lawley
Ralf Schmitt
Chris Lamb
Harald Armin Massa
Martijn Faassen
Ian Bicking
Jan Balster
Grig Gheorghiu
Bob Ippolito
Christian Tismer

19
third_party/python/py/LICENSE (vendored)

@@ -1,19 +0,0 @@
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

9
third_party/python/py/MANIFEST.in (vendored)

@@ -1,9 +0,0 @@
include CHANGELOG
include AUTHORS
include README.txt
include setup.py
include LICENSE
include conftest.py
include tox.ini
graft doc
graft testing

46
third_party/python/py/PKG-INFO (vendored)

@@ -1,46 +0,0 @@
Metadata-Version: 1.1
Name: py
Version: 1.4.31
Summary: library with cross-python path, ini-parsing, io, code, log facilities
Home-page: http://pylib.readthedocs.org/
Author: holger krekel, Ronny Pfannschmidt, Benjamin Peterson and others
Author-email: pytest-dev@python.org
License: MIT license
Description: .. image:: https://drone.io/bitbucket.org/pytest-dev/py/status.png
:target: https://drone.io/bitbucket.org/pytest-dev/py/latest
.. image:: https://pypip.in/v/py/badge.png
:target: https://pypi.python.org/pypi/py
The py lib is a Python development support library featuring
the following tools and modules:
* py.path: uniform local and svn path objects
* py.apipkg: explicit API control and lazy-importing
* py.iniconfig: easy parsing of .ini files
* py.code: dynamic code generation and introspection
NOTE: prior to the 1.4 release this distribution used to
contain py.test which is now its own package, see http://pytest.org
For questions and more information please visit http://pylib.readthedocs.org
Bugs and issues: http://bitbucket.org/pytest-dev/py/issues/
Authors: Holger Krekel and others, 2004-2015
Platform: unix
Platform: linux
Platform: osx
Platform: cygwin
Platform: win32
Classifier: Development Status :: 6 - Mature
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3

21
third_party/python/py/README.txt (vendored)

@@ -1,21 +0,0 @@
.. image:: https://drone.io/bitbucket.org/pytest-dev/py/status.png
:target: https://drone.io/bitbucket.org/pytest-dev/py/latest
.. image:: https://pypip.in/v/py/badge.png
:target: https://pypi.python.org/pypi/py
The py lib is a Python development support library featuring
the following tools and modules:
* py.path: uniform local and svn path objects
* py.apipkg: explicit API control and lazy-importing
* py.iniconfig: easy parsing of .ini files
* py.code: dynamic code generation and introspection
NOTE: prior to the 1.4 release this distribution used to
contain py.test which is now its own package, see http://pytest.org
For questions and more information please visit http://pylib.readthedocs.org
Bugs and issues: http://bitbucket.org/pytest-dev/py/issues/
Authors: Holger Krekel and others, 2004-2015

302
third_party/python/py/py/__init__.py (vendored)

@@ -1,150 +1,152 @@
"""
py.test and pylib: rapid testing and development utils
this module uses apipkg.py for lazy-loading sub modules
and classes. The initpkg-dictionary below specifies
name->value mappings where value can be another namespace
dictionary or an import path.
(c) Holger Krekel and others, 2004-2014
"""
__version__ = '1.4.31'
from py import _apipkg
# so that py.error.* instances are picklable
import sys
sys.modules['py.error'] = _apipkg.AliasModule("py.error", "py._error", 'error')
_apipkg.initpkg(__name__, attr={'_apipkg': _apipkg}, exportdefs={
# access to all standard lib modules
'std': '._std:std',
# access to all posix errno's as classes
'error': '._error:error',
'_pydir' : '.__metainfo:pydir',
'version': 'py:__version__', # backward compatibility
# pytest-2.0 has a flat namespace, we use alias modules
# to keep old references compatible
'test' : 'pytest',
'test.collect' : 'pytest',
'test.cmdline' : 'pytest',
# hook into the top-level standard library
'process' : {
'__doc__' : '._process:__doc__',
'cmdexec' : '._process.cmdexec:cmdexec',
'kill' : '._process.killproc:kill',
'ForkedFunc' : '._process.forkedfunc:ForkedFunc',
},
'apipkg' : {
'initpkg' : '._apipkg:initpkg',
'ApiModule' : '._apipkg:ApiModule',
},
'iniconfig' : {
'IniConfig' : '._iniconfig:IniConfig',
'ParseError' : '._iniconfig:ParseError',
},
'path' : {
'__doc__' : '._path:__doc__',
'svnwc' : '._path.svnwc:SvnWCCommandPath',
'svnurl' : '._path.svnurl:SvnCommandPath',
'local' : '._path.local:LocalPath',
'SvnAuth' : '._path.svnwc:SvnAuth',
},
# python inspection/code-generation API
'code' : {
'__doc__' : '._code:__doc__',
'compile' : '._code.source:compile_',
'Source' : '._code.source:Source',
'Code' : '._code.code:Code',
'Frame' : '._code.code:Frame',
'ExceptionInfo' : '._code.code:ExceptionInfo',
'Traceback' : '._code.code:Traceback',
'getfslineno' : '._code.source:getfslineno',
'getrawcode' : '._code.code:getrawcode',
'patch_builtins' : '._code.code:patch_builtins',
'unpatch_builtins' : '._code.code:unpatch_builtins',
'_AssertionError' : '._code.assertion:AssertionError',
'_reinterpret_old' : '._code.assertion:reinterpret_old',
'_reinterpret' : '._code.assertion:reinterpret',
'_reprcompare' : '._code.assertion:_reprcompare',
'_format_explanation' : '._code.assertion:_format_explanation',
},
# backports and additions of builtins
'builtin' : {
'__doc__' : '._builtin:__doc__',
'enumerate' : '._builtin:enumerate',
'reversed' : '._builtin:reversed',
'sorted' : '._builtin:sorted',
'any' : '._builtin:any',
'all' : '._builtin:all',
'set' : '._builtin:set',
'frozenset' : '._builtin:frozenset',
'BaseException' : '._builtin:BaseException',
'GeneratorExit' : '._builtin:GeneratorExit',
'_sysex' : '._builtin:_sysex',
'print_' : '._builtin:print_',
'_reraise' : '._builtin:_reraise',
'_tryimport' : '._builtin:_tryimport',
'exec_' : '._builtin:exec_',
'_basestring' : '._builtin:_basestring',
'_totext' : '._builtin:_totext',
'_isbytes' : '._builtin:_isbytes',
'_istext' : '._builtin:_istext',
'_getimself' : '._builtin:_getimself',
'_getfuncdict' : '._builtin:_getfuncdict',
'_getcode' : '._builtin:_getcode',
'builtins' : '._builtin:builtins',
'execfile' : '._builtin:execfile',
'callable' : '._builtin:callable',
'bytes' : '._builtin:bytes',
'text' : '._builtin:text',
},
# input-output helping
'io' : {
'__doc__' : '._io:__doc__',
'dupfile' : '._io.capture:dupfile',
'TextIO' : '._io.capture:TextIO',
'BytesIO' : '._io.capture:BytesIO',
'FDCapture' : '._io.capture:FDCapture',
'StdCapture' : '._io.capture:StdCapture',
'StdCaptureFD' : '._io.capture:StdCaptureFD',
'TerminalWriter' : '._io.terminalwriter:TerminalWriter',
'ansi_print' : '._io.terminalwriter:ansi_print',
'get_terminal_width' : '._io.terminalwriter:get_terminal_width',
'saferepr' : '._io.saferepr:saferepr',
},
# small and mean xml/html generation
'xml' : {
'__doc__' : '._xmlgen:__doc__',
'html' : '._xmlgen:html',
'Tag' : '._xmlgen:Tag',
'raw' : '._xmlgen:raw',
'Namespace' : '._xmlgen:Namespace',
'escape' : '._xmlgen:escape',
},
'log' : {
# logging API ('producers' and 'consumers' connected via keywords)
'__doc__' : '._log:__doc__',
'_apiwarn' : '._log.warning:_apiwarn',
'Producer' : '._log.log:Producer',
'setconsumer' : '._log.log:setconsumer',
'_setstate' : '._log.log:setstate',
'_getstate' : '._log.log:getstate',
'Path' : '._log.log:Path',
'STDOUT' : '._log.log:STDOUT',
'STDERR' : '._log.log:STDERR',
'Syslog' : '._log.log:Syslog',
},
})
"""
pylib: rapid testing and development utils
this module uses apipkg.py for lazy-loading sub modules
and classes. The initpkg-dictionary below specifies
name->value mappings where value can be another namespace
dictionary or an import path.
(c) Holger Krekel and others, 2004-2014
"""
__version__ = '1.4.34'
from py import _apipkg
# so that py.error.* instances are picklable
import sys
sys.modules['py.error'] = _apipkg.AliasModule("py.error", "py._error", 'error')
import py.error # "Dereference" it now just to be safe (issue110)
_apipkg.initpkg(__name__, attr={'_apipkg': _apipkg}, exportdefs={
# access to all standard lib modules
'std': '._std:std',
# access to all posix errno's as classes
'error': '._error:error',
'_pydir' : '.__metainfo:pydir',
'version': 'py:__version__', # backward compatibility
# pytest-2.0 has a flat namespace, we use alias modules
# to keep old references compatible
'test' : 'pytest',
'test.collect' : 'pytest',
'test.cmdline' : 'pytest',
# hook into the top-level standard library
'process' : {
'__doc__' : '._process:__doc__',
'cmdexec' : '._process.cmdexec:cmdexec',
'kill' : '._process.killproc:kill',
'ForkedFunc' : '._process.forkedfunc:ForkedFunc',
},
'apipkg' : {
'initpkg' : '._apipkg:initpkg',
'ApiModule' : '._apipkg:ApiModule',
},
'iniconfig' : {
'IniConfig' : '._iniconfig:IniConfig',
'ParseError' : '._iniconfig:ParseError',
},
'path' : {
'__doc__' : '._path:__doc__',
'svnwc' : '._path.svnwc:SvnWCCommandPath',
'svnurl' : '._path.svnurl:SvnCommandPath',
'local' : '._path.local:LocalPath',
'SvnAuth' : '._path.svnwc:SvnAuth',
},
# python inspection/code-generation API
'code' : {
'__doc__' : '._code:__doc__',
'compile' : '._code.source:compile_',
'Source' : '._code.source:Source',
'Code' : '._code.code:Code',
'Frame' : '._code.code:Frame',
'ExceptionInfo' : '._code.code:ExceptionInfo',
'Traceback' : '._code.code:Traceback',
'getfslineno' : '._code.source:getfslineno',
'getrawcode' : '._code.code:getrawcode',
'patch_builtins' : '._code.code:patch_builtins',
'unpatch_builtins' : '._code.code:unpatch_builtins',
'_AssertionError' : '._code.assertion:AssertionError',
'_reinterpret_old' : '._code.assertion:reinterpret_old',
'_reinterpret' : '._code.assertion:reinterpret',
'_reprcompare' : '._code.assertion:_reprcompare',
'_format_explanation' : '._code.assertion:_format_explanation',
},
# backports and additions of builtins
'builtin' : {
'__doc__' : '._builtin:__doc__',
'enumerate' : '._builtin:enumerate',
'reversed' : '._builtin:reversed',
'sorted' : '._builtin:sorted',
'any' : '._builtin:any',
'all' : '._builtin:all',
'set' : '._builtin:set',
'frozenset' : '._builtin:frozenset',
'BaseException' : '._builtin:BaseException',
'GeneratorExit' : '._builtin:GeneratorExit',
'_sysex' : '._builtin:_sysex',
'print_' : '._builtin:print_',
'_reraise' : '._builtin:_reraise',
'_tryimport' : '._builtin:_tryimport',
'exec_' : '._builtin:exec_',
'_basestring' : '._builtin:_basestring',
'_totext' : '._builtin:_totext',
'_isbytes' : '._builtin:_isbytes',
'_istext' : '._builtin:_istext',
'_getimself' : '._builtin:_getimself',
'_getfuncdict' : '._builtin:_getfuncdict',
'_getcode' : '._builtin:_getcode',
'builtins' : '._builtin:builtins',
'execfile' : '._builtin:execfile',
'callable' : '._builtin:callable',
'bytes' : '._builtin:bytes',
'text' : '._builtin:text',
},
# input-output helping
'io' : {
'__doc__' : '._io:__doc__',
'dupfile' : '._io.capture:dupfile',
'TextIO' : '._io.capture:TextIO',
'BytesIO' : '._io.capture:BytesIO',
'FDCapture' : '._io.capture:FDCapture',
'StdCapture' : '._io.capture:StdCapture',
'StdCaptureFD' : '._io.capture:StdCaptureFD',
'TerminalWriter' : '._io.terminalwriter:TerminalWriter',
'ansi_print' : '._io.terminalwriter:ansi_print',
'get_terminal_width' : '._io.terminalwriter:get_terminal_width',
'saferepr' : '._io.saferepr:saferepr',
},
# small and mean xml/html generation
'xml' : {
'__doc__' : '._xmlgen:__doc__',
'html' : '._xmlgen:html',
'Tag' : '._xmlgen:Tag',
'raw' : '._xmlgen:raw',
'Namespace' : '._xmlgen:Namespace',
'escape' : '._xmlgen:escape',
},
'log' : {
# logging API ('producers' and 'consumers' connected via keywords)
'__doc__' : '._log:__doc__',
'_apiwarn' : '._log.warning:_apiwarn',
'Producer' : '._log.log:Producer',
'setconsumer' : '._log.log:setconsumer',
'_setstate' : '._log.log:setstate',
'_getstate' : '._log.log:getstate',
'Path' : '._log.log:Path',
'STDOUT' : '._log.log:STDOUT',
'STDERR' : '._log.log:STDERR',
'Syslog' : '._log.log:Syslog',
},
})

4
third_party/python/py/py/__metainfo.py (vendored)

@@ -1,2 +1,2 @@
import py
pydir = py.path.local(py.__file__).dirpath()
import py
pydir = py.path.local(py.__file__).dirpath()

362
third_party/python/py/py/_apipkg.py (vendored)

@@ -1,181 +1,181 @@
"""
apipkg: control the exported namespace of a python package.
see http://pypi.python.org/pypi/apipkg
(c) holger krekel, 2009 - MIT license
"""
import os
import sys
from types import ModuleType
__version__ = '1.3.dev'
def _py_abspath(path):
"""
special version of abspath
that will leave paths from jython jars alone
"""
if path.startswith('__pyclasspath__'):
return path
else:
return os.path.abspath(path)
def initpkg(pkgname, exportdefs, attr=dict()):
""" initialize given package from the export definitions. """
oldmod = sys.modules.get(pkgname)
d = {}
f = getattr(oldmod, '__file__', None)
if f:
f = _py_abspath(f)
d['__file__'] = f
if hasattr(oldmod, '__version__'):
d['__version__'] = oldmod.__version__
if hasattr(oldmod, '__loader__'):
d['__loader__'] = oldmod.__loader__
if hasattr(oldmod, '__path__'):
d['__path__'] = [_py_abspath(p) for p in oldmod.__path__]
if '__doc__' not in exportdefs and getattr(oldmod, '__doc__', None):
d['__doc__'] = oldmod.__doc__
d.update(attr)
if hasattr(oldmod, "__dict__"):
oldmod.__dict__.update(d)
mod = ApiModule(pkgname, exportdefs, implprefix=pkgname, attr=d)
sys.modules[pkgname] = mod
def importobj(modpath, attrname):
module = __import__(modpath, None, None, ['__doc__'])
if not attrname:
return module
retval = module
names = attrname.split(".")
for x in names:
retval = getattr(retval, x)
return retval
class ApiModule(ModuleType):
def __docget(self):
try:
return self.__doc
except AttributeError:
if '__doc__' in self.__map__:
return self.__makeattr('__doc__')
def __docset(self, value):
self.__doc = value
__doc__ = property(__docget, __docset)
def __init__(self, name, importspec, implprefix=None, attr=None):
self.__name__ = name
self.__all__ = [x for x in importspec if x != '__onfirstaccess__']
self.__map__ = {}
self.__implprefix__ = implprefix or name
if attr:
for name, val in attr.items():
# print "setting", self.__name__, name, val
setattr(self, name, val)
for name, importspec in importspec.items():
if isinstance(importspec, dict):
subname = '%s.%s' % (self.__name__, name)
apimod = ApiModule(subname, importspec, implprefix)
sys.modules[subname] = apimod
setattr(self, name, apimod)
else:
parts = importspec.split(':')
modpath = parts.pop(0)
attrname = parts and parts[0] or ""
if modpath[0] == '.':
modpath = implprefix + modpath
if not attrname:
subname = '%s.%s' % (self.__name__, name)
apimod = AliasModule(subname, modpath)
sys.modules[subname] = apimod
if '.' not in name:
setattr(self, name, apimod)
else:
self.__map__[name] = (modpath, attrname)
def __repr__(self):
l = []
if hasattr(self, '__version__'):
l.append("version=" + repr(self.__version__))
if hasattr(self, '__file__'):
l.append('from ' + repr(self.__file__))
if l:
return '<ApiModule %r %s>' % (self.__name__, " ".join(l))
return '<ApiModule %r>' % (self.__name__,)
def __makeattr(self, name):
"""lazily compute value for name or raise AttributeError if unknown."""
# print "makeattr", self.__name__, name
target = None
if '__onfirstaccess__' in self.__map__:
target = self.__map__.pop('__onfirstaccess__')
importobj(*target)()
try:
modpath, attrname = self.__map__[name]
except KeyError:
if target is not None and name != '__onfirstaccess__':
# retry, onfirstaccess might have set attrs
return getattr(self, name)
raise AttributeError(name)
else:
result = importobj(modpath, attrname)
setattr(self, name, result)
try:
del self.__map__[name]
except KeyError:
pass # in a recursive-import situation a double-del can happen
return result
__getattr__ = __makeattr
def __dict__(self):
# force all the content of the module to be loaded when __dict__ is read
dictdescr = ModuleType.__dict__['__dict__']
dict = dictdescr.__get__(self)
if dict is not None:
hasattr(self, 'some')
for name in self.__all__:
try:
self.__makeattr(name)
except AttributeError:
pass
return dict
__dict__ = property(__dict__)
def AliasModule(modname, modpath, attrname=None):
mod = []
def getmod():
if not mod:
x = importobj(modpath, None)
if attrname is not None:
x = getattr(x, attrname)
mod.append(x)
return mod[0]
class AliasModule(ModuleType):
def __repr__(self):
x = modpath
if attrname:
x += "." + attrname
return '<AliasModule %r for %r>' % (modname, x)
def __getattribute__(self, name):
try:
return getattr(getmod(), name)
except ImportError:
return None
def __setattr__(self, name, value):
setattr(getmod(), name, value)
def __delattr__(self, name):
delattr(getmod(), name)
return AliasModule(str(modname))
"""
apipkg: control the exported namespace of a python package.
see http://pypi.python.org/pypi/apipkg
(c) holger krekel, 2009 - MIT license
"""
import os
import sys
from types import ModuleType
__version__ = '1.3.dev'
def _py_abspath(path):
"""
special version of abspath
that will leave paths from jython jars alone
"""
if path.startswith('__pyclasspath__'):
return path
else:
return os.path.abspath(path)
def initpkg(pkgname, exportdefs, attr=dict()):
""" initialize given package from the export definitions. """
oldmod = sys.modules.get(pkgname)
d = {}
f = getattr(oldmod, '__file__', None)
if f:
f = _py_abspath(f)
d['__file__'] = f
if hasattr(oldmod, '__version__'):
d['__version__'] = oldmod.__version__
if hasattr(oldmod, '__loader__'):
d['__loader__'] = oldmod.__loader__
if hasattr(oldmod, '__path__'):
d['__path__'] = [_py_abspath(p) for p in oldmod.__path__]
if '__doc__' not in exportdefs and getattr(oldmod, '__doc__', None):
d['__doc__'] = oldmod.__doc__
d.update(attr)
if hasattr(oldmod, "__dict__"):
oldmod.__dict__.update(d)
mod = ApiModule(pkgname, exportdefs, implprefix=pkgname, attr=d)
sys.modules[pkgname] = mod
def importobj(modpath, attrname):
module = __import__(modpath, None, None, ['__doc__'])
if not attrname:
return module
retval = module
names = attrname.split(".")
for x in names:
retval = getattr(retval, x)
return retval
class ApiModule(ModuleType):
def __docget(self):
try:
return self.__doc
except AttributeError:
if '__doc__' in self.__map__:
return self.__makeattr('__doc__')
def __docset(self, value):
self.__doc = value
__doc__ = property(__docget, __docset)
def __init__(self, name, importspec, implprefix=None, attr=None):
self.__name__ = name
self.__all__ = [x for x in importspec if x != '__onfirstaccess__']
self.__map__ = {}
self.__implprefix__ = implprefix or name
if attr:
for name, val in attr.items():
# print "setting", self.__name__, name, val
setattr(self, name, val)
for name, importspec in importspec.items():
if isinstance(importspec, dict):
subname = '%s.%s' % (self.__name__, name)
apimod = ApiModule(subname, importspec, implprefix)
sys.modules[subname] = apimod
setattr(self, name, apimod)
else:
parts = importspec.split(':')
modpath = parts.pop(0)
attrname = parts and parts[0] or ""
if modpath[0] == '.':
modpath = implprefix + modpath
if not attrname:
subname = '%s.%s' % (self.__name__, name)
apimod = AliasModule(subname, modpath)
sys.modules[subname] = apimod
if '.' not in name:
setattr(self, name, apimod)
else:
self.__map__[name] = (modpath, attrname)
def __repr__(self):
l = []
if hasattr(self, '__version__'):
l.append("version=" + repr(self.__version__))
if hasattr(self, '__file__'):
l.append('from ' + repr(self.__file__))
if l:
return '<ApiModule %r %s>' % (self.__name__, " ".join(l))
return '<ApiModule %r>' % (self.__name__,)
def __makeattr(self, name):
"""lazily compute value for name or raise AttributeError if unknown."""
# print "makeattr", self.__name__, name
target = None
if '__onfirstaccess__' in self.__map__:
target = self.__map__.pop('__onfirstaccess__')
importobj(*target)()
try:
modpath, attrname = self.__map__[name]
except KeyError:
if target is not None and name != '__onfirstaccess__':
# retry, onfirstaccess might have set attrs
return getattr(self, name)
raise AttributeError(name)
else:
result = importobj(modpath, attrname)
setattr(self, name, result)
try:
del self.__map__[name]
except KeyError:
pass # in a recursive-import situation a double-del can happen
return result
__getattr__ = __makeattr
def __dict__(self):
# force all the content of the module to be loaded when __dict__ is read
dictdescr = ModuleType.__dict__['__dict__']
dict = dictdescr.__get__(self)
if dict is not None:
hasattr(self, 'some')
for name in self.__all__:
try:
self.__makeattr(name)
except AttributeError:
pass
return dict
__dict__ = property(__dict__)
def AliasModule(modname, modpath, attrname=None):
mod = []
def getmod():
if not mod:
x = importobj(modpath, None)
if attrname is not None:
x = getattr(x, attrname)
mod.append(x)
return mod[0]
class AliasModule(ModuleType):
def __repr__(self):
x = modpath
if attrname:
x += "." + attrname
return '<AliasModule %r for %r>' % (modname, x)
def __getattribute__(self, name):
try:
return getattr(getmod(), name)
except ImportError:
return None
def __setattr__(self, name, value):
setattr(getmod(), name, value)
def __delattr__(self, name):
delattr(getmod(), name)
return AliasModule(str(modname))

496
third_party/python/py/py/_builtin.py (vendored)

@@ -1,248 +1,248 @@
import sys
try:
reversed = reversed
except NameError:
def reversed(sequence):
"""reversed(sequence) -> reverse iterator over values of the sequence
Return a reverse iterator
"""
if hasattr(sequence, '__reversed__'):
return sequence.__reversed__()
if not hasattr(sequence, '__getitem__'):
raise TypeError("argument to reversed() must be a sequence")
return reversed_iterator(sequence)
class reversed_iterator(object):
def __init__(self, seq):
self.seq = seq
self.remaining = len(seq)
def __iter__(self):
return self
def next(self):
i = self.remaining
if i > 0:
i -= 1
item = self.seq[i]
self.remaining = i
return item
raise StopIteration
def __length_hint__(self):
return self.remaining
try:
any = any
except NameError:
def any(iterable):
for x in iterable:
if x:
return True
return False
try:
all = all
except NameError:
def all(iterable):
for x in iterable:
if not x:
return False
return True
try:
sorted = sorted
except NameError:
builtin_cmp = cmp # need to use cmp as keyword arg
def sorted(iterable, cmp=None, key=None, reverse=0):
use_cmp = None
if key is not None:
if cmp is None:
def use_cmp(x, y):
return builtin_cmp(x[0], y[0])
else:
def use_cmp(x, y):
return cmp(x[0], y[0])
l = [(key(element), element) for element in iterable]
else:
if cmp is not None:
use_cmp = cmp
l = list(iterable)
if use_cmp is not None:
l.sort(use_cmp)
else:
l.sort()
if reverse:
l.reverse()
if key is not None:
return [element for (_, element) in l]
return l
try:
set, frozenset = set, frozenset
except NameError:
from sets import set, frozenset
# pass through
enumerate = enumerate
try:
BaseException = BaseException
except NameError:
BaseException = Exception
try:
GeneratorExit = GeneratorExit
except NameError:
class GeneratorExit(Exception):
""" This exception is never raised, it is there to make it possible to
write code compatible with CPython 2.5 even in lower CPython
versions."""
pass
GeneratorExit.__module__ = 'exceptions'
_sysex = (KeyboardInterrupt, SystemExit, MemoryError, GeneratorExit)
try:
callable = callable
except NameError:
def callable(obj):
return hasattr(obj, "__call__")
if sys.version_info >= (3, 0):
exec ("print_ = print ; exec_=exec")
import builtins
# some backward compatibility helpers
_basestring = str
def _totext(obj, encoding=None, errors=None):
if isinstance(obj, bytes):
if errors is None:
obj = obj.decode(encoding)
else:
obj = obj.decode(encoding, errors)
elif not isinstance(obj, str):
obj = str(obj)
return obj
def _isbytes(x):
return isinstance(x, bytes)
def _istext(x):
return isinstance(x, str)
text = str
bytes = bytes
def _getimself(function):
return getattr(function, '__self__', None)
def _getfuncdict(function):
return getattr(function, "__dict__", None)
def _getcode(function):
return getattr(function, "__code__", None)
def execfile(fn, globs=None, locs=None):
if globs is None:
back = sys._getframe(1)
globs = back.f_globals
locs = back.f_locals
del back
elif locs is None:
locs = globs
fp = open(fn, "r")
try:
source = fp.read()
finally:
fp.close()
co = compile(source, fn, "exec", dont_inherit=True)
exec_(co, globs, locs)
else:
import __builtin__ as builtins
_totext = unicode
_basestring = basestring
text = unicode
bytes = str
execfile = execfile
callable = callable
def _isbytes(x):
return isinstance(x, str)
def _istext(x):
return isinstance(x, unicode)
def _getimself(function):
return getattr(function, 'im_self', None)
def _getfuncdict(function):
return getattr(function, "__dict__", None)
def _getcode(function):
try:
return getattr(function, "__code__")
except AttributeError:
return getattr(function, "func_code", None)
def print_(*args, **kwargs):
""" minimal backport of py3k print statement. """
sep = ' '
if 'sep' in kwargs:
sep = kwargs.pop('sep')
end = '\n'
if 'end' in kwargs:
end = kwargs.pop('end')
file = 'file' in kwargs and kwargs.pop('file') or sys.stdout
if kwargs:
args = ", ".join([str(x) for x in kwargs])
raise TypeError("invalid keyword arguments: %s" % args)
at_start = True
for x in args:
if not at_start:
file.write(sep)
file.write(str(x))
at_start = False
file.write(end)
def exec_(obj, globals=None, locals=None):
""" minimal backport of py3k exec statement. """
__tracebackhide__ = True
if globals is None:
frame = sys._getframe(1)
globals = frame.f_globals
if locals is None:
locals = frame.f_locals
elif locals is None:
locals = globals
exec2(obj, globals, locals)
if sys.version_info >= (3, 0):
def _reraise(cls, val, tb):
__tracebackhide__ = True
assert hasattr(val, '__traceback__')
raise cls.with_traceback(val, tb)
else:
exec ("""
def _reraise(cls, val, tb):
__tracebackhide__ = True
raise cls, val, tb
def exec2(obj, globals, locals):
__tracebackhide__ = True
exec obj in globals, locals
""")
def _tryimport(*names):
""" return the first successfully imported module. """
assert names
for name in names:
try:
__import__(name)
except ImportError:
excinfo = sys.exc_info()
else:
return sys.modules[name]
_reraise(*excinfo)
import sys
try:
reversed = reversed
except NameError:
def reversed(sequence):
"""reversed(sequence) -> reverse iterator over values of the sequence
Return a reverse iterator
"""
if hasattr(sequence, '__reversed__'):
return sequence.__reversed__()
if not hasattr(sequence, '__getitem__'):
raise TypeError("argument to reversed() must be a sequence")
return reversed_iterator(sequence)
class reversed_iterator(object):
def __init__(self, seq):
self.seq = seq
self.remaining = len(seq)
def __iter__(self):
return self
def next(self):
i = self.remaining
if i > 0:
i -= 1
item = self.seq[i]
self.remaining = i
return item
raise StopIteration
def __length_hint__(self):
return self.remaining
try:
any = any
except NameError:
def any(iterable):
for x in iterable:
if x:
return True
return False
try:
all = all
except NameError:
def all(iterable):
for x in iterable:
if not x:
return False
return True
try:
sorted = sorted
except NameError:
builtin_cmp = cmp # need to use cmp as keyword arg
def sorted(iterable, cmp=None, key=None, reverse=0):
use_cmp = None
if key is not None:
if cmp is None:
def use_cmp(x, y):
return builtin_cmp(x[0], y[0])
else:
def use_cmp(x, y):
return cmp(x[0], y[0])
l = [(key(element), element) for element in iterable]
else:
if cmp is not None:
use_cmp = cmp
l = list(iterable)
if use_cmp is not None:
l.sort(use_cmp)
else:
l.sort()
if reverse:
l.reverse()
if key is not None:
return [element for (_, element) in l]
return l
try:
set, frozenset = set, frozenset
except NameError:
from sets import set, frozenset
# pass through
enumerate = enumerate
try:
BaseException = BaseException
except NameError:
BaseException = Exception
try:
GeneratorExit = GeneratorExit
except NameError:
class GeneratorExit(Exception):
""" This exception is never raised, it is there to make it possible to
write code compatible with CPython 2.5 even in lower CPython
versions."""
pass
GeneratorExit.__module__ = 'exceptions'
_sysex = (KeyboardInterrupt, SystemExit, MemoryError, GeneratorExit)
try:
callable = callable
except NameError:
def callable(obj):
return hasattr(obj, "__call__")
if sys.version_info >= (3, 0):
exec ("print_ = print ; exec_=exec")
import builtins
# some backward compatibility helpers
_basestring = str
def _totext(obj, encoding=None, errors=None):
if isinstance(obj, bytes):
if errors is None:
obj = obj.decode(encoding)
else:
obj = obj.decode(encoding, errors)
elif not isinstance(obj, str):
obj = str(obj)
return obj
def _isbytes(x):
return isinstance(x, bytes)
def _istext(x):
return isinstance(x, str)
text = str
bytes = bytes
def _getimself(function):
return getattr(function, '__self__', None)
def _getfuncdict(function):
return getattr(function, "__dict__", None)
def _getcode(function):
return getattr(function, "__code__", None)
def execfile(fn, globs=None, locs=None):
if globs is None:
back = sys._getframe(1)
globs = back.f_globals
locs = back.f_locals
del back
elif locs is None:
locs = globs
fp = open(fn, "r")
try:
source = fp.read()
finally:
fp.close()
co = compile(source, fn, "exec", dont_inherit=True)
exec_(co, globs, locs)
else:
import __builtin__ as builtins
_totext = unicode
_basestring = basestring
text = unicode
bytes = str
execfile = execfile
callable = callable
def _isbytes(x):
return isinstance(x, str)
def _istext(x):
return isinstance(x, unicode)
def _getimself(function):
return getattr(function, 'im_self', None)
def _getfuncdict(function):
return getattr(function, "__dict__", None)
def _getcode(function):
try:
return getattr(function, "__code__")
except AttributeError:
return getattr(function, "func_code", None)
def print_(*args, **kwargs):
""" minimal backport of py3k print statement. """
sep = ' '
if 'sep' in kwargs:
sep = kwargs.pop('sep')
end = '\n'
if 'end' in kwargs:
end = kwargs.pop('end')
file = 'file' in kwargs and kwargs.pop('file') or sys.stdout
if kwargs:
args = ", ".join([str(x) for x in kwargs])
raise TypeError("invalid keyword arguments: %s" % args)
at_start = True
for x in args:
if not at_start:
file.write(sep)
file.write(str(x))
at_start = False
file.write(end)
def exec_(obj, globals=None, locals=None):
""" minimal backport of py3k exec statement. """
__tracebackhide__ = True
if globals is None:
frame = sys._getframe(1)
globals = frame.f_globals
if locals is None:
locals = frame.f_locals
elif locals is None:
locals = globals
exec2(obj, globals, locals)
if sys.version_info >= (3, 0):
def _reraise(cls, val, tb):
__tracebackhide__ = True
assert hasattr(val, '__traceback__')
raise cls.with_traceback(val, tb)
else:
exec ("""
def _reraise(cls, val, tb):
__tracebackhide__ = True
raise cls, val, tb
def exec2(obj, globals, locals):
__tracebackhide__ = True
exec obj in globals, locals
""")
def _tryimport(*names):
""" return the first successfully imported module. """
assert names
for name in names:
try:
__import__(name)
except ImportError:
excinfo = sys.exc_info()
else:
return sys.modules[name]
_reraise(*excinfo)

2
third_party/python/py/py/_code/__init__.py (vendored)

@@ -1 +1 @@
""" python inspection/code generation API """
""" python inspection/code generation API """

678
third_party/python/py/py/_code/_assertionnew.py (vendored)

@@ -1,339 +1,339 @@
"""
Find intermediate evalutation results in assert statements through builtin AST.
This should replace _assertionold.py eventually.
"""
import sys
import ast
import py
from py._code.assertion import _format_explanation, BuiltinAssertionError
if sys.platform.startswith("java") and sys.version_info < (2, 5, 2):
# See http://bugs.jython.org/issue1497
_exprs = ("BoolOp", "BinOp", "UnaryOp", "Lambda", "IfExp", "Dict",
"ListComp", "GeneratorExp", "Yield", "Compare", "Call",
"Repr", "Num", "Str", "Attribute", "Subscript", "Name",
"List", "Tuple")
_stmts = ("FunctionDef", "ClassDef", "Return", "Delete", "Assign",
"AugAssign", "Print", "For", "While", "If", "With", "Raise",
"TryExcept", "TryFinally", "Assert", "Import", "ImportFrom",
"Exec", "Global", "Expr", "Pass", "Break", "Continue")
_expr_nodes = set(getattr(ast, name) for name in _exprs)
_stmt_nodes = set(getattr(ast, name) for name in _stmts)
def _is_ast_expr(node):
return node.__class__ in _expr_nodes
def _is_ast_stmt(node):
return node.__class__ in _stmt_nodes
else:
def _is_ast_expr(node):
return isinstance(node, ast.expr)
def _is_ast_stmt(node):
return isinstance(node, ast.stmt)
class Failure(Exception):
"""Error found while interpreting AST."""
def __init__(self, explanation=""):
self.cause = sys.exc_info()
self.explanation = explanation
def interpret(source, frame, should_fail=False):
mod = ast.parse(source)
visitor = DebugInterpreter(frame)
try:
visitor.visit(mod)
except Failure:
failure = sys.exc_info()[1]
return getfailure(failure)
if should_fail:
return ("(assertion failed, but when it was re-run for "
"printing intermediate values, it did not fail. Suggestions: "
"compute assert expression before the assert or use --no-assert)")
def run(offending_line, frame=None):
if frame is None:
frame = py.code.Frame(sys._getframe(1))
return interpret(offending_line, frame)
def getfailure(failure):
explanation = _format_explanation(failure.explanation)
value = failure.cause[1]
if str(value):
lines = explanation.splitlines()
if not lines:
lines.append("")
lines[0] += " << %s" % (value,)
explanation = "\n".join(lines)
text = "%s: %s" % (failure.cause[0].__name__, explanation)
if text.startswith("AssertionError: assert "):
text = text[16:]
return text
operator_map = {
ast.BitOr : "|",
ast.BitXor : "^",
ast.BitAnd : "&",
ast.LShift : "<<",
ast.RShift : ">>",
ast.Add : "+",
ast.Sub : "-",
ast.Mult : "*",
ast.Div : "/",
ast.FloorDiv : "//",
ast.Mod : "%",
ast.Eq : "==",
ast.NotEq : "!=",
ast.Lt : "<",
ast.LtE : "<=",
ast.Gt : ">",
ast.GtE : ">=",
ast.Pow : "**",
ast.Is : "is",
ast.IsNot : "is not",
ast.In : "in",
ast.NotIn : "not in"
}
unary_map = {
ast.Not : "not %s",
ast.Invert : "~%s",
ast.USub : "-%s",
ast.UAdd : "+%s"
}
class DebugInterpreter(ast.NodeVisitor):
"""Interpret AST nodes to gleam useful debugging information. """
def __init__(self, frame):
self.frame = frame
def generic_visit(self, node):
# Fallback when we don't have a special implementation.
if _is_ast_expr(node):
mod = ast.Expression(node)
co = self._compile(mod)
try:
result = self.frame.eval(co)
except Exception:
raise Failure()
explanation = self.frame.repr(result)
return explanation, result
elif _is_ast_stmt(node):
mod = ast.Module([node])
co = self._compile(mod, "exec")
try:
self.frame.exec_(co)
except Exception:
raise Failure()
return None, None
else:
raise AssertionError("can't handle %s" %(node,))
def _compile(self, source, mode="eval"):
return compile(source, "<assertion interpretation>", mode)
def visit_Expr(self, expr):
return self.visit(expr.value)
def visit_Module(self, mod):
for stmt in mod.body:
self.visit(stmt)
def visit_Name(self, name):
explanation, result = self.generic_visit(name)
# See if the name is local.
source = "%r in locals() is not globals()" % (name.id,)
co = self._compile(source)
try:
local = self.frame.eval(co)
except Exception:
# have to assume it isn't
local = False
if not local:
return name.id, result
return explanation, result
def visit_Compare(self, comp):
left = comp.left
left_explanation, left_result = self.visit(left)
for op, next_op in zip(comp.ops, comp.comparators):
next_explanation, next_result = self.visit(next_op)
op_symbol = operator_map[op.__class__]
explanation = "%s %s %s" % (left_explanation, op_symbol,
next_explanation)
source = "__exprinfo_left %s __exprinfo_right" % (op_symbol,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_left=left_result,
__exprinfo_right=next_result)
except Exception:
raise Failure(explanation)
try:
if not result:
break
except KeyboardInterrupt:
raise
except:
break
left_explanation, left_result = next_explanation, next_result
rcomp = py.code._reprcompare
if rcomp:
res = rcomp(op_symbol, left_result, next_result)
if res:
explanation = res
return explanation, result
def visit_BoolOp(self, boolop):
is_or = isinstance(boolop.op, ast.Or)
explanations = []
for operand in boolop.values:
explanation, result = self.visit(operand)
explanations.append(explanation)
if result == is_or:
break
name = is_or and " or " or " and "
explanation = "(" + name.join(explanations) + ")"
return explanation, result
def visit_UnaryOp(self, unary):
pattern = unary_map[unary.op.__class__]
operand_explanation, operand_result = self.visit(unary.operand)
explanation = pattern % (operand_explanation,)
co = self._compile(pattern % ("__exprinfo_expr",))
try:
result = self.frame.eval(co, __exprinfo_expr=operand_result)
except Exception:
raise Failure(explanation)
return explanation, result
def visit_BinOp(self, binop):
left_explanation, left_result = self.visit(binop.left)
right_explanation, right_result = self.visit(binop.right)
symbol = operator_map[binop.op.__class__]
explanation = "(%s %s %s)" % (left_explanation, symbol,
right_explanation)
source = "__exprinfo_left %s __exprinfo_right" % (symbol,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_left=left_result,
__exprinfo_right=right_result)
except Exception:
raise Failure(explanation)
return explanation, result
def visit_Call(self, call):
func_explanation, func = self.visit(call.func)
arg_explanations = []
ns = {"__exprinfo_func" : func}
arguments = []
for arg in call.args:
arg_explanation, arg_result = self.visit(arg)
arg_name = "__exprinfo_%s" % (len(ns),)
ns[arg_name] = arg_result
arguments.append(arg_name)
arg_explanations.append(arg_explanation)
for keyword in call.keywords:
arg_explanation, arg_result = self.visit(keyword.value)
arg_name = "__exprinfo_%s" % (len(ns),)
ns[arg_name] = arg_result
keyword_source = "%s=%%s" % (keyword.arg)
arguments.append(keyword_source % (arg_name,))
arg_explanations.append(keyword_source % (arg_explanation,))
if call.starargs:
arg_explanation, arg_result = self.visit(call.starargs)
arg_name = "__exprinfo_star"
ns[arg_name] = arg_result
arguments.append("*%s" % (arg_name,))
arg_explanations.append("*%s" % (arg_explanation,))
if call.kwargs:
arg_explanation, arg_result = self.visit(call.kwargs)
arg_name = "__exprinfo_kwds"
ns[arg_name] = arg_result
arguments.append("**%s" % (arg_name,))
arg_explanations.append("**%s" % (arg_explanation,))
args_explained = ", ".join(arg_explanations)
explanation = "%s(%s)" % (func_explanation, args_explained)
args = ", ".join(arguments)
source = "__exprinfo_func(%s)" % (args,)
co = self._compile(source)
try:
result = self.frame.eval(co, **ns)
except Exception:
raise Failure(explanation)
pattern = "%s\n{%s = %s\n}"
rep = self.frame.repr(result)
explanation = pattern % (rep, rep, explanation)
return explanation, result
def _is_builtin_name(self, name):
pattern = "%r not in globals() and %r not in locals()"
source = pattern % (name.id, name.id)
co = self._compile(source)
try:
return self.frame.eval(co)
except Exception:
return False
def visit_Attribute(self, attr):
if not isinstance(attr.ctx, ast.Load):
return self.generic_visit(attr)
source_explanation, source_result = self.visit(attr.value)
explanation = "%s.%s" % (source_explanation, attr.attr)
source = "__exprinfo_expr.%s" % (attr.attr,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_expr=source_result)
except Exception:
raise Failure(explanation)
explanation = "%s\n{%s = %s.%s\n}" % (self.frame.repr(result),
self.frame.repr(result),
source_explanation, attr.attr)
# Check if the attr is from an instance.
source = "%r in getattr(__exprinfo_expr, '__dict__', {})"
source = source % (attr.attr,)
co = self._compile(source)
try:
from_instance = self.frame.eval(co, __exprinfo_expr=source_result)
except Exception:
from_instance = True
if from_instance:
rep = self.frame.repr(result)
pattern = "%s\n{%s = %s\n}"
explanation = pattern % (rep, rep, explanation)
return explanation, result
def visit_Assert(self, assrt):
test_explanation, test_result = self.visit(assrt.test)
if test_explanation.startswith("False\n{False =") and \
test_explanation.endswith("\n"):
test_explanation = test_explanation[15:-2]
explanation = "assert %s" % (test_explanation,)
if not test_result:
try:
raise BuiltinAssertionError
except Exception:
raise Failure(explanation)
return explanation, test_result
def visit_Assign(self, assign):
value_explanation, value_result = self.visit(assign.value)
explanation = "... = %s" % (value_explanation,)
name = ast.Name("__exprinfo_expr", ast.Load(),
lineno=assign.value.lineno,
col_offset=assign.value.col_offset)
new_assign = ast.Assign(assign.targets, name, lineno=assign.lineno,
col_offset=assign.col_offset)
mod = ast.Module([new_assign])
co = self._compile(mod, "exec")
try:
self.frame.exec_(co, __exprinfo_expr=value_result)
except Exception:
raise Failure(explanation)
return explanation, value_result
"""
Find intermediate evalutation results in assert statements through builtin AST.
This should replace _assertionold.py eventually.
"""
import sys
import ast
import py
from py._code.assertion import _format_explanation, BuiltinAssertionError
if sys.platform.startswith("java") and sys.version_info < (2, 5, 2):
# See http://bugs.jython.org/issue1497
_exprs = ("BoolOp", "BinOp", "UnaryOp", "Lambda", "IfExp", "Dict",
"ListComp", "GeneratorExp", "Yield", "Compare", "Call",
"Repr", "Num", "Str", "Attribute", "Subscript", "Name",
"List", "Tuple")
_stmts = ("FunctionDef", "ClassDef", "Return", "Delete", "Assign",
"AugAssign", "Print", "For", "While", "If", "With", "Raise",
"TryExcept", "TryFinally", "Assert", "Import", "ImportFrom",
"Exec", "Global", "Expr", "Pass", "Break", "Continue")
_expr_nodes = set(getattr(ast, name) for name in _exprs)
_stmt_nodes = set(getattr(ast, name) for name in _stmts)
def _is_ast_expr(node):
return node.__class__ in _expr_nodes
def _is_ast_stmt(node):
return node.__class__ in _stmt_nodes
else:
def _is_ast_expr(node):
return isinstance(node, ast.expr)
def _is_ast_stmt(node):
return isinstance(node, ast.stmt)
class Failure(Exception):
"""Error found while interpreting AST."""
def __init__(self, explanation=""):
self.cause = sys.exc_info()
self.explanation = explanation
def interpret(source, frame, should_fail=False):
mod = ast.parse(source)
visitor = DebugInterpreter(frame)
try:
visitor.visit(mod)
except Failure:
failure = sys.exc_info()[1]
return getfailure(failure)
if should_fail:
return ("(assertion failed, but when it was re-run for "
"printing intermediate values, it did not fail. Suggestions: "
"compute assert expression before the assert or use --no-assert)")
def run(offending_line, frame=None):
if frame is None:
frame = py.code.Frame(sys._getframe(1))
return interpret(offending_line, frame)
def getfailure(failure):
explanation = _format_explanation(failure.explanation)
value = failure.cause[1]
if str(value):
lines = explanation.splitlines()
if not lines:
lines.append("")
lines[0] += " << %s" % (value,)
explanation = "\n".join(lines)
text = "%s: %s" % (failure.cause[0].__name__, explanation)
if text.startswith("AssertionError: assert "):
text = text[16:]
return text
operator_map = {
ast.BitOr : "|",
ast.BitXor : "^",
ast.BitAnd : "&",
ast.LShift : "<<",
ast.RShift : ">>",
ast.Add : "+",
ast.Sub : "-",
ast.Mult : "*",
ast.Div : "/",
ast.FloorDiv : "//",
ast.Mod : "%",
ast.Eq : "==",
ast.NotEq : "!=",
ast.Lt : "<",
ast.LtE : "<=",
ast.Gt : ">",
ast.GtE : ">=",
ast.Pow : "**",
ast.Is : "is",
ast.IsNot : "is not",
ast.In : "in",
ast.NotIn : "not in"
}
unary_map = {
ast.Not : "not %s",
ast.Invert : "~%s",
ast.USub : "-%s",
ast.UAdd : "+%s"
}
class DebugInterpreter(ast.NodeVisitor):
"""Interpret AST nodes to gleam useful debugging information. """
def __init__(self, frame):
self.frame = frame
def generic_visit(self, node):
# Fallback when we don't have a special implementation.
if _is_ast_expr(node):
mod = ast.Expression(node)
co = self._compile(mod)
try:
result = self.frame.eval(co)
except Exception:
raise Failure()
explanation = self.frame.repr(result)
return explanation, result
elif _is_ast_stmt(node):
mod = ast.Module([node])
co = self._compile(mod, "exec")
try:
self.frame.exec_(co)
except Exception:
raise Failure()
return None, None
else:
raise AssertionError("can't handle %s" %(node,))
def _compile(self, source, mode="eval"):
return compile(source, "<assertion interpretation>", mode)
def visit_Expr(self, expr):
return self.visit(expr.value)
def visit_Module(self, mod):
for stmt in mod.body:
self.visit(stmt)
def visit_Name(self, name):
explanation, result = self.generic_visit(name)
# See if the name is local.
source = "%r in locals() is not globals()" % (name.id,)
co = self._compile(source)
try:
local = self.frame.eval(co)
except Exception:
# have to assume it isn't
local = False
if not local:
return name.id, result
return explanation, result
def visit_Compare(self, comp):
left = comp.left
left_explanation, left_result = self.visit(left)
for op, next_op in zip(comp.ops, comp.comparators):
next_explanation, next_result = self.visit(next_op)
op_symbol = operator_map[op.__class__]
explanation = "%s %s %s" % (left_explanation, op_symbol,
next_explanation)
source = "__exprinfo_left %s __exprinfo_right" % (op_symbol,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_left=left_result,
__exprinfo_right=next_result)
except Exception:
raise Failure(explanation)
try:
if not result:
break
except KeyboardInterrupt:
raise
except:
break
left_explanation, left_result = next_explanation, next_result
rcomp = py.code._reprcompare
if rcomp:
res = rcomp(op_symbol, left_result, next_result)
if res:
explanation = res
return explanation, result
def visit_BoolOp(self, boolop):
is_or = isinstance(boolop.op, ast.Or)
explanations = []
for operand in boolop.values:
explanation, result = self.visit(operand)
explanations.append(explanation)
if result == is_or:
break
name = is_or and " or " or " and "
explanation = "(" + name.join(explanations) + ")"
return explanation, result
def visit_UnaryOp(self, unary):
pattern = unary_map[unary.op.__class__]
operand_explanation, operand_result = self.visit(unary.operand)
explanation = pattern % (operand_explanation,)
co = self._compile(pattern % ("__exprinfo_expr",))
try:
result = self.frame.eval(co, __exprinfo_expr=operand_result)
except Exception:
raise Failure(explanation)
return explanation, result
def visit_BinOp(self, binop):
left_explanation, left_result = self.visit(binop.left)
right_explanation, right_result = self.visit(binop.right)
symbol = operator_map[binop.op.__class__]
explanation = "(%s %s %s)" % (left_explanation, symbol,
right_explanation)
source = "__exprinfo_left %s __exprinfo_right" % (symbol,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_left=left_result,
__exprinfo_right=right_result)
except Exception:
raise Failure(explanation)
return explanation, result
def visit_Call(self, call):
func_explanation, func = self.visit(call.func)
arg_explanations = []
ns = {"__exprinfo_func" : func}
arguments = []
for arg in call.args:
arg_explanation, arg_result = self.visit(arg)
arg_name = "__exprinfo_%s" % (len(ns),)
ns[arg_name] = arg_result
arguments.append(arg_name)
arg_explanations.append(arg_explanation)
for keyword in call.keywords:
arg_explanation, arg_result = self.visit(keyword.value)
arg_name = "__exprinfo_%s" % (len(ns),)
ns[arg_name] = arg_result
keyword_source = "%s=%%s" % (keyword.arg)
arguments.append(keyword_source % (arg_name,))
arg_explanations.append(keyword_source % (arg_explanation,))
if call.starargs:
arg_explanation, arg_result = self.visit(call.starargs)
arg_name = "__exprinfo_star"
ns[arg_name] = arg_result
arguments.append("*%s" % (arg_name,))
arg_explanations.append("*%s" % (arg_explanation,))
if call.kwargs:
arg_explanation, arg_result = self.visit(call.kwargs)
arg_name = "__exprinfo_kwds"
ns[arg_name] = arg_result
arguments.append("**%s" % (arg_name,))
arg_explanations.append("**%s" % (arg_explanation,))
args_explained = ", ".join(arg_explanations)
explanation = "%s(%s)" % (func_explanation, args_explained)
args = ", ".join(arguments)
source = "__exprinfo_func(%s)" % (args,)
co = self._compile(source)
try:
result = self.frame.eval(co, **ns)
except Exception:
raise Failure(explanation)
pattern = "%s\n{%s = %s\n}"
rep = self.frame.repr(result)
explanation = pattern % (rep, rep, explanation)
return explanation, result
def _is_builtin_name(self, name):
pattern = "%r not in globals() and %r not in locals()"
source = pattern % (name.id, name.id)
co = self._compile(source)
try:
return self.frame.eval(co)
except Exception:
return False
def visit_Attribute(self, attr):
if not isinstance(attr.ctx, ast.Load):
return self.generic_visit(attr)
source_explanation, source_result = self.visit(attr.value)
explanation = "%s.%s" % (source_explanation, attr.attr)
source = "__exprinfo_expr.%s" % (attr.attr,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_expr=source_result)
except Exception:
raise Failure(explanation)
explanation = "%s\n{%s = %s.%s\n}" % (self.frame.repr(result),
self.frame.repr(result),
source_explanation, attr.attr)
# Check if the attr is from an instance.
source = "%r in getattr(__exprinfo_expr, '__dict__', {})"
source = source % (attr.attr,)
co = self._compile(source)
try:
from_instance = self.frame.eval(co, __exprinfo_expr=source_result)
except Exception:
from_instance = True
if from_instance:
rep = self.frame.repr(result)
pattern = "%s\n{%s = %s\n}"
explanation = pattern % (rep, rep, explanation)
return explanation, result
def visit_Assert(self, assrt):
test_explanation, test_result = self.visit(assrt.test)
if test_explanation.startswith("False\n{False =") and \
test_explanation.endswith("\n"):
test_explanation = test_explanation[15:-2]
explanation = "assert %s" % (test_explanation,)
if not test_result:
try:
raise BuiltinAssertionError
except Exception:
raise Failure(explanation)
return explanation, test_result
def visit_Assign(self, assign):
value_explanation, value_result = self.visit(assign.value)
explanation = "... = %s" % (value_explanation,)
name = ast.Name("__exprinfo_expr", ast.Load(),
lineno=assign.value.lineno,
col_offset=assign.value.col_offset)
new_assign = ast.Assign(assign.targets, name, lineno=assign.lineno,
col_offset=assign.col_offset)
mod = ast.Module([new_assign])
co = self._compile(mod, "exec")
try:
self.frame.exec_(co, __exprinfo_expr=value_result)
except Exception:
raise Failure(explanation)
return explanation, value_result

1110
third_party/python/py/py/_code/_assertionold.py (vendored)

File diff suppressed because it is too large

158
third_party/python/py/py/_code/_py2traceback.py (vendored)

@@ -1,79 +1,79 @@
# copied from python-2.7.3's traceback.py
# CHANGES:
# - some_str is replaced, trying to create unicode strings
#
import types
def format_exception_only(etype, value):
"""Format the exception part of a traceback.
The arguments are the exception type and value such as given by
sys.last_type and sys.last_value. The return value is a list of
strings, each ending in a newline.
Normally, the list contains a single string; however, for
SyntaxError exceptions, it contains several lines that (when
printed) display detailed information about where the syntax
error occurred.
The message indicating which exception occurred is always the last
string in the list.
"""
# An instance should not have a meaningful value parameter, but
# sometimes does, particularly for string exceptions, such as
# >>> raise string1, string2 # deprecated
#
# Clear these out first because issubtype(string1, SyntaxError)
# would throw another exception and mask the original problem.
if (isinstance(etype, BaseException) or
isinstance(etype, types.InstanceType) or
etype is None or type(etype) is str):
return [_format_final_exc_line(etype, value)]
stype = etype.__name__
if not issubclass(etype, SyntaxError):
return [_format_final_exc_line(stype, value)]
# It was a syntax error; show exactly where the problem was found.
lines = []
try:
msg, (filename, lineno, offset, badline) = value.args
except Exception:
pass
else:
filename = filename or "<string>"
lines.append(' File "%s", line %d\n' % (filename, lineno))
if badline is not None:
lines.append(' %s\n' % badline.strip())
if offset is not None:
caretspace = badline.rstrip('\n')[:offset].lstrip()
# non-space whitespace (likes tabs) must be kept for alignment
caretspace = ((c.isspace() and c or ' ') for c in caretspace)
# only three spaces to account for offset1 == pos 0
lines.append(' %s^\n' % ''.join(caretspace))
value = msg
lines.append(_format_final_exc_line(stype, value))
return lines
def _format_final_exc_line(etype, value):
"""Return a list of a single line -- normal case for format_exception_only"""
valuestr = _some_str(value)
if value is None or not valuestr:
line = "%s\n" % etype
else:
line = "%s: %s\n" % (etype, valuestr)
return line
def _some_str(value):
try:
return unicode(value)
except Exception:
try:
return str(value)
except Exception:
pass
return '<unprintable %s object>' % type(value).__name__
# copied from python-2.7.3's traceback.py
# CHANGES:
# - some_str is replaced, trying to create unicode strings
#
import types
def format_exception_only(etype, value):
"""Format the exception part of a traceback.
The arguments are the exception type and value such as given by
sys.last_type and sys.last_value. The return value is a list of
strings, each ending in a newline.
Normally, the list contains a single string; however, for
SyntaxError exceptions, it contains several lines that (when
printed) display detailed information about where the syntax
error occurred.
The message indicating which exception occurred is always the last
string in the list.
"""
# An instance should not have a meaningful value parameter, but
# sometimes does, particularly for string exceptions, such as
# >>> raise string1, string2 # deprecated
#
# Clear these out first because issubtype(string1, SyntaxError)
# would throw another exception and mask the original problem.
if (isinstance(etype, BaseException) or
isinstance(etype, types.InstanceType) or
etype is None or type(etype) is str):
return [_format_final_exc_line(etype, value)]
stype = etype.__name__
if not issubclass(etype, SyntaxError):
return [_format_final_exc_line(stype, value)]
# It was a syntax error; show exactly where the problem was found.
lines = []
try:
msg, (filename, lineno, offset, badline) = value.args
except Exception:
pass
else:
filename = filename or "<string>"
lines.append(' File "%s", line %d\n' % (filename, lineno))
if badline is not None:
lines.append(' %s\n' % badline.strip())
if offset is not None:
caretspace = badline.rstrip('\n')[:offset].lstrip()
# non-space whitespace (likes tabs) must be kept for alignment
caretspace = ((c.isspace() and c or ' ') for c in caretspace)
# only three spaces to account for offset1 == pos 0
lines.append(' %s^\n' % ''.join(caretspace))
value = msg
lines.append(_format_final_exc_line(stype, value))
return lines
def _format_final_exc_line(etype, value):
"""Return a list of a single line -- normal case for format_exception_only"""
valuestr = _some_str(value)
if value is None or not valuestr:
line = "%s\n" % etype
else:
line = "%s: %s\n" % (etype, valuestr)
return line
def _some_str(value):
try:
return unicode(value)
except Exception:
try:
return str(value)
except Exception:
pass
return '<unprintable %s object>' % type(value).__name__
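For orientation, a minimal sketch of how the helper above is meant to behave; it mirrors the stdlib's traceback.format_exception_only(), which the snippet uses so it stays self-contained (the "<demo>" source is made up):
import sys
import traceback
try:
    compile("def broken(:", "<demo>", "exec")
except SyntaxError:
    etype, value = sys.exc_info()[:2]
    # For a SyntaxError this yields the File/caret lines followed by the
    # final "SyntaxError: ..." line, matching the vendored helper above.
    for line in traceback.format_exception_only(etype, value):
        sys.stderr.write(line)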

188
third_party/python/py/py/_code/assertion.py (vendored)

@@ -1,94 +1,94 @@
import sys
import py
BuiltinAssertionError = py.builtin.builtins.AssertionError
_reprcompare = None # if set, will be called by assert reinterp for comparison ops
def _format_explanation(explanation):
"""This formats an explanation
Normally all embedded newlines are escaped, however there are
three exceptions: \n{, \n} and \n~. The first two are intended to
cover nested explanations, see function and attribute explanations
for examples (.visit_Call(), visit_Attribute()). The last one is
for when one explanation needs to span multiple lines, e.g. when
displaying diffs.
"""
raw_lines = (explanation or '').split('\n')
# escape newlines not followed by {, } and ~
lines = [raw_lines[0]]
for l in raw_lines[1:]:
if l.startswith('{') or l.startswith('}') or l.startswith('~'):
lines.append(l)
else:
lines[-1] += '\\n' + l
result = lines[:1]
stack = [0]
stackcnt = [0]
for line in lines[1:]:
if line.startswith('{'):
if stackcnt[-1]:
s = 'and '
else:
s = 'where '
stack.append(len(result))
stackcnt[-1] += 1
stackcnt.append(0)
result.append(' +' + ' '*(len(stack)-1) + s + line[1:])
elif line.startswith('}'):
assert line.startswith('}')
stack.pop()
stackcnt.pop()
result[stack[-1]] += line[1:]
else:
assert line.startswith('~')
result.append(' '*len(stack) + line[1:])
assert len(stack) == 1
return '\n'.join(result)
class AssertionError(BuiltinAssertionError):
def __init__(self, *args):
BuiltinAssertionError.__init__(self, *args)
if args:
try:
self.msg = str(args[0])
except py.builtin._sysex:
raise
except:
self.msg = "<[broken __repr__] %s at %0xd>" %(
args[0].__class__, id(args[0]))
else:
f = py.code.Frame(sys._getframe(1))
try:
source = f.code.fullsource
if source is not None:
try:
source = source.getstatement(f.lineno, assertion=True)
except IndexError:
source = None
else:
source = str(source.deindent()).strip()
except py.error.ENOENT:
source = None
# this can also occur during reinterpretation, when the
# co_filename is set to "<run>".
if source:
self.msg = reinterpret(source, f, should_fail=True)
else:
self.msg = "<could not determine information>"
if not self.args:
self.args = (self.msg,)
if sys.version_info > (3, 0):
AssertionError.__module__ = "builtins"
reinterpret_old = "old reinterpretation not available for py3"
else:
from py._code._assertionold import interpret as reinterpret_old
if sys.version_info >= (2, 6) or (sys.platform.startswith("java")):
from py._code._assertionnew import interpret as reinterpret
else:
reinterpret = reinterpret_old
import sys
import py
BuiltinAssertionError = py.builtin.builtins.AssertionError
_reprcompare = None # if set, will be called by assert reinterp for comparison ops
def _format_explanation(explanation):
"""This formats an explanation
Normally all embedded newlines are escaped, however there are
three exceptions: \n{, \n} and \n~. The first two are intended to
cover nested explanations, see function and attribute explanations
for examples (.visit_Call(), visit_Attribute()). The last one is
for when one explanation needs to span multiple lines, e.g. when
displaying diffs.
"""
raw_lines = (explanation or '').split('\n')
# escape newlines not followed by {, } and ~
lines = [raw_lines[0]]
for l in raw_lines[1:]:
if l.startswith('{') or l.startswith('}') or l.startswith('~'):
lines.append(l)
else:
lines[-1] += '\\n' + l
result = lines[:1]
stack = [0]
stackcnt = [0]
for line in lines[1:]:
if line.startswith('{'):
if stackcnt[-1]:
s = 'and '
else:
s = 'where '
stack.append(len(result))
stackcnt[-1] += 1
stackcnt.append(0)
result.append(' +' + ' '*(len(stack)-1) + s + line[1:])
elif line.startswith('}'):
assert line.startswith('}')
stack.pop()
stackcnt.pop()
result[stack[-1]] += line[1:]
else:
assert line.startswith('~')
result.append(' '*len(stack) + line[1:])
assert len(stack) == 1
return '\n'.join(result)
class AssertionError(BuiltinAssertionError):
def __init__(self, *args):
BuiltinAssertionError.__init__(self, *args)
if args:
try:
self.msg = str(args[0])
except py.builtin._sysex:
raise
except:
self.msg = "<[broken __repr__] %s at %0xd>" %(
args[0].__class__, id(args[0]))
else:
f = py.code.Frame(sys._getframe(1))
try:
source = f.code.fullsource
if source is not None:
try:
source = source.getstatement(f.lineno, assertion=True)
except IndexError:
source = None
else:
source = str(source.deindent()).strip()
except py.error.ENOENT:
source = None
# this can also occur during reinterpretation, when the
# co_filename is set to "<run>".
if source:
self.msg = reinterpret(source, f, should_fail=True)
else:
self.msg = "<could not determine information>"
if not self.args:
self.args = (self.msg,)
if sys.version_info > (3, 0):
AssertionError.__module__ = "builtins"
reinterpret_old = "old reinterpretation not available for py3"
else:
from py._code._assertionold import interpret as reinterpret_old
if sys.version_info >= (2, 6) or (sys.platform.startswith("java")):
from py._code._assertionnew import interpret as reinterpret
else:
reinterpret = reinterpret_old
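A small illustrative sketch of the \n{ / \n} convention that _format_explanation() renders into "where"/"and" sub-clauses; the explanation string is made up and the helper is imported from its private module:
from py._code.assertion import _format_explanation

explanation = "assert x == y\n{x = 1\n}\n{y = 2\n}"
print(_format_explanation(explanation))
# assert x == y
#  + where x = 1
#  + and y = 2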

1574
third_party/python/py/py/_code/code.py (vendored)

Diff not shown because of its size.

830
third_party/python/py/py/_code/source.py (vendored)

@@ -1,419 +1,411 @@
from __future__ import generators
from bisect import bisect_right
import sys
import inspect, tokenize
import py
from types import ModuleType
cpy_compile = compile
try:
import _ast
from _ast import PyCF_ONLY_AST as _AST_FLAG
except ImportError:
_AST_FLAG = 0
_ast = None
class Source(object):
""" a immutable object holding a source code fragment,
possibly deindenting it.
"""
_compilecounter = 0
def __init__(self, *parts, **kwargs):
self.lines = lines = []
de = kwargs.get('deindent', True)
rstrip = kwargs.get('rstrip', True)
for part in parts:
if not part:
partlines = []
if isinstance(part, Source):
partlines = part.lines
elif isinstance(part, (tuple, list)):
partlines = [x.rstrip("\n") for x in part]
elif isinstance(part, py.builtin._basestring):
partlines = part.split('\n')
if rstrip:
while partlines:
if partlines[-1].strip():
break
partlines.pop()
else:
partlines = getsource(part, deindent=de).lines
if de:
partlines = deindent(partlines)
lines.extend(partlines)
def __eq__(self, other):
try:
return self.lines == other.lines
except AttributeError:
if isinstance(other, str):
return str(self) == other
return False
def __getitem__(self, key):
if isinstance(key, int):
return self.lines[key]
else:
if key.step not in (None, 1):
raise IndexError("cannot slice a Source with a step")
return self.__getslice__(key.start, key.stop)
def __len__(self):
return len(self.lines)
def __getslice__(self, start, end):
newsource = Source()
newsource.lines = self.lines[start:end]
return newsource
def strip(self):
""" return new source object with trailing
and leading blank lines removed.
"""
start, end = 0, len(self)
while start < end and not self.lines[start].strip():
start += 1
while end > start and not self.lines[end-1].strip():
end -= 1
source = Source()
source.lines[:] = self.lines[start:end]
return source
def putaround(self, before='', after='', indent=' ' * 4):
""" return a copy of the source object with
'before' and 'after' wrapped around it.
"""
before = Source(before)
after = Source(after)
newsource = Source()
lines = [ (indent + line) for line in self.lines]
newsource.lines = before.lines + lines + after.lines
return newsource
def indent(self, indent=' ' * 4):
""" return a copy of the source object with
all lines indented by the given indent-string.
"""
newsource = Source()
newsource.lines = [(indent+line) for line in self.lines]
return newsource
def getstatement(self, lineno, assertion=False):
""" return Source statement which contains the
given linenumber (counted from 0).
"""
start, end = self.getstatementrange(lineno, assertion)
return self[start:end]
def getstatementrange(self, lineno, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region which contains the given lineno.
"""
if not (0 <= lineno < len(self)):
raise IndexError("lineno out of range")
ast, start, end = getstatementrange_ast(lineno, self)
return start, end
def deindent(self, offset=None):
""" return a new source object deindented by offset.
If offset is None then guess an indentation offset from
the first non-blank line. Subsequent lines which have a
lower indentation offset will be copied verbatim as
they are assumed to be part of multilines.
"""
# XXX maybe use the tokenizer to properly handle multiline
# strings etc.pp?
newsource = Source()
newsource.lines[:] = deindent(self.lines, offset)
return newsource
def isparseable(self, deindent=True):
""" return True if source is parseable, heuristically
deindenting it by default.
"""
try:
import parser
except ImportError:
syntax_checker = lambda x: compile(x, 'asd', 'exec')
else:
syntax_checker = parser.suite
if deindent:
source = str(self.deindent())
else:
source = str(self)
try:
#compile(source+'\n', "x", "exec")
syntax_checker(source+'\n')
except KeyboardInterrupt:
raise
except Exception:
return False
else:
return True
def __str__(self):
return "\n".join(self.lines)
def compile(self, filename=None, mode='exec',
flag=generators.compiler_flag,
dont_inherit=0, _genframe=None):
""" return compiled code object. if filename is None
invent an artificial filename which displays
the source/line position of the caller frame.
"""
if not filename or py.path.local(filename).check(file=0):
if _genframe is None:
_genframe = sys._getframe(1) # the caller
fn,lineno = _genframe.f_code.co_filename, _genframe.f_lineno
base = "<%d-codegen " % self._compilecounter
self.__class__._compilecounter += 1
if not filename:
filename = base + '%s:%d>' % (fn, lineno)
else:
filename = base + '%r %s:%d>' % (filename, fn, lineno)
source = "\n".join(self.lines) + '\n'
try:
co = cpy_compile(source, filename, mode, flag)
except SyntaxError:
ex = sys.exc_info()[1]
# re-represent syntax errors from parsing python strings
msglines = self.lines[:ex.lineno]
if ex.offset:
msglines.append(" "*ex.offset + '^')
msglines.append("(code was compiled probably from here: %s)" % filename)
newex = SyntaxError('\n'.join(msglines))
newex.offset = ex.offset
newex.lineno = ex.lineno
newex.text = ex.text
raise newex
else:
if flag & _AST_FLAG:
return co
lines = [(x + "\n") for x in self.lines]
if sys.version_info[0] >= 3:
# XXX py3's inspect.getsourcefile() checks for a module
# and a pep302 __loader__ ... we don't have a module
# at code compile-time so we need to fake it here
m = ModuleType("_pycodecompile_pseudo_module")
py.std.inspect.modulesbyfile[filename] = None
py.std.sys.modules[None] = m
m.__loader__ = 1
py.std.linecache.cache[filename] = (1, None, lines, filename)
return co
#
# public API shortcut functions
#
def compile_(source, filename=None, mode='exec', flags=
generators.compiler_flag, dont_inherit=0):
""" compile the given source to a raw code object,
and maintain an internal cache which allows later
retrieval of the source code for the code object
and any recursively created code objects.
"""
if _ast is not None and isinstance(source, _ast.AST):
# XXX should Source support having AST?
return cpy_compile(source, filename, mode, flags, dont_inherit)
_genframe = sys._getframe(1) # the caller
s = Source(source)
co = s.compile(filename, mode, flags, _genframe=_genframe)
return co
def getfslineno(obj):
""" Return source location (path, lineno) for the given object.
If the source cannot be determined return ("", -1)
"""
try:
code = py.code.Code(obj)
except TypeError:
try:
fn = (py.std.inspect.getsourcefile(obj) or
py.std.inspect.getfile(obj))
except TypeError:
return "", -1
fspath = fn and py.path.local(fn) or None
lineno = -1
if fspath:
try:
_, lineno = findsource(obj)
except IOError:
pass
else:
fspath = code.path
lineno = code.firstlineno
assert isinstance(lineno, int)
return fspath, lineno
#
# helper functions
#
def findsource(obj):
try:
sourcelines, lineno = py.std.inspect.findsource(obj)
except py.builtin._sysex:
raise
except:
return None, -1
source = Source()
source.lines = [line.rstrip() for line in sourcelines]
return source, lineno
def getsource(obj, **kwargs):
obj = py.code.getrawcode(obj)
try:
strsrc = inspect.getsource(obj)
except IndentationError:
strsrc = "\"Buggy python version consider upgrading, cannot get source\""
assert isinstance(strsrc, str)
return Source(strsrc, **kwargs)
def deindent(lines, offset=None):
if offset is None:
for line in lines:
line = line.expandtabs()
s = line.lstrip()
if s:
offset = len(line)-len(s)
break
else:
offset = 0
if offset == 0:
return list(lines)
newlines = []
def readline_generator(lines):
for line in lines:
yield line + '\n'
while True:
yield ''
it = readline_generator(lines)
try:
for _, _, (sline, _), (eline, _), _ in tokenize.generate_tokens(lambda: next(it)):
if sline > len(lines):
break # End of input reached
if sline > len(newlines):
line = lines[sline - 1].expandtabs()
if line.lstrip() and line[:offset].isspace():
line = line[offset:] # Deindent
newlines.append(line)
for i in range(sline, eline):
# Don't deindent continuing lines of
# multiline tokens (i.e. multiline strings)
newlines.append(lines[i])
except (IndentationError, tokenize.TokenError):
pass
# Add any lines we didn't see. E.g. if an exception was raised.
newlines.extend(lines[len(newlines):])
return newlines
def get_statement_startend2(lineno, node):
import ast
# flatten all statements and except handlers into one lineno-list
# AST's line numbers start indexing at 1
l = []
for x in ast.walk(node):
if isinstance(x, _ast.stmt) or isinstance(x, _ast.ExceptHandler):
l.append(x.lineno - 1)
for name in "finalbody", "orelse":
val = getattr(x, name, None)
if val:
# treat the finally/orelse part as its own statement
l.append(val[0].lineno - 1 - 1)
l.sort()
insert_index = bisect_right(l, lineno)
start = l[insert_index - 1]
if insert_index >= len(l):
end = None
else:
end = l[insert_index]
return start, end
def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
if astnode is None:
content = str(source)
if sys.version_info < (2,7):
content += "\n"
try:
astnode = compile(content, "source", "exec", 1024) # 1024 for AST
except ValueError:
start, end = getstatementrange_old(lineno, source, assertion)
return None, start, end
start, end = get_statement_startend2(lineno, astnode)
# we need to correct the end:
# - ast-parsing strips comments
# - there might be empty lines
# - we might have lesser indented code blocks at the end
if end is None:
end = len(source.lines)
if end > start + 1:
# make sure we don't span differently indented code blocks
# by using the BlockFinder helper which inspect.getsource() itself uses
block_finder = inspect.BlockFinder()
# if we start with an indented line, put blockfinder to "started" mode
block_finder.started = source.lines[start][0].isspace()
it = ((x + "\n") for x in source.lines[start:end])
try:
for tok in tokenize.generate_tokens(lambda: next(it)):
block_finder.tokeneater(*tok)
except (inspect.EndOfBlock, IndentationError):
end = block_finder.last + start
except Exception:
pass
# the end might still point to a comment or empty line, correct it
while end:
line = source.lines[end - 1].lstrip()
if line.startswith("#") or not line:
end -= 1
else:
break
return astnode, start, end
def getstatementrange_old(lineno, source, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region which contains the given lineno.
raise an IndexError if no such statementrange can be found.
"""
# XXX this logic is only used on python2.4 and below
# 1. find the start of the statement
from codeop import compile_command
for start in range(lineno, -1, -1):
if assertion:
line = source.lines[start]
# the following lines are not fully tested, change with care
if 'super' in line and 'self' in line and '__init__' in line:
raise IndexError("likely a subclass")
if "assert" not in line and "raise" not in line:
continue
trylines = source.lines[start:lineno+1]
# quick hack to prepare parsing an indented line with
# compile_command() (which errors on "return" outside defs)
trylines.insert(0, 'def xxx():')
trysource = '\n '.join(trylines)
# ^ space here
try:
compile_command(trysource)
except (SyntaxError, OverflowError, ValueError):
continue
# 2. find the end of the statement
for end in range(lineno+1, len(source)+1):
trysource = source[start:end]
if trysource.isparseable():
return start, end
raise SyntaxError("no valid source range around line %d " % (lineno,))
from __future__ import generators
from bisect import bisect_right
import sys
import inspect, tokenize
import py
from types import ModuleType
cpy_compile = compile
try:
import _ast
from _ast import PyCF_ONLY_AST as _AST_FLAG
except ImportError:
_AST_FLAG = 0
_ast = None
class Source(object):
""" a immutable object holding a source code fragment,
possibly deindenting it.
"""
_compilecounter = 0
def __init__(self, *parts, **kwargs):
self.lines = lines = []
de = kwargs.get('deindent', True)
rstrip = kwargs.get('rstrip', True)
for part in parts:
if not part:
partlines = []
if isinstance(part, Source):
partlines = part.lines
elif isinstance(part, (tuple, list)):
partlines = [x.rstrip("\n") for x in part]
elif isinstance(part, py.builtin._basestring):
partlines = part.split('\n')
if rstrip:
while partlines:
if partlines[-1].strip():
break
partlines.pop()
else:
partlines = getsource(part, deindent=de).lines
if de:
partlines = deindent(partlines)
lines.extend(partlines)
def __eq__(self, other):
try:
return self.lines == other.lines
except AttributeError:
if isinstance(other, str):
return str(self) == other
return False
def __getitem__(self, key):
if isinstance(key, int):
return self.lines[key]
else:
if key.step not in (None, 1):
raise IndexError("cannot slice a Source with a step")
return self.__getslice__(key.start, key.stop)
def __len__(self):
return len(self.lines)
def __getslice__(self, start, end):
newsource = Source()
newsource.lines = self.lines[start:end]
return newsource
def strip(self):
""" return new source object with trailing
and leading blank lines removed.
"""
start, end = 0, len(self)
while start < end and not self.lines[start].strip():
start += 1
while end > start and not self.lines[end-1].strip():
end -= 1
source = Source()
source.lines[:] = self.lines[start:end]
return source
def putaround(self, before='', after='', indent=' ' * 4):
""" return a copy of the source object with
'before' and 'after' wrapped around it.
"""
before = Source(before)
after = Source(after)
newsource = Source()
lines = [ (indent + line) for line in self.lines]
newsource.lines = before.lines + lines + after.lines
return newsource
def indent(self, indent=' ' * 4):
""" return a copy of the source object with
all lines indented by the given indent-string.
"""
newsource = Source()
newsource.lines = [(indent+line) for line in self.lines]
return newsource
def getstatement(self, lineno, assertion=False):
""" return Source statement which contains the
given linenumber (counted from 0).
"""
start, end = self.getstatementrange(lineno, assertion)
return self[start:end]
def getstatementrange(self, lineno, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region which contains the given lineno.
"""
if not (0 <= lineno < len(self)):
raise IndexError("lineno out of range")
ast, start, end = getstatementrange_ast(lineno, self)
return start, end
def deindent(self, offset=None):
""" return a new source object deindented by offset.
If offset is None then guess an indentation offset from
the first non-blank line. Subsequent lines which have a
lower indentation offset will be copied verbatim as
they are assumed to be part of multilines.
"""
# XXX maybe use the tokenizer to properly handle multiline
# strings etc.pp?
newsource = Source()
newsource.lines[:] = deindent(self.lines, offset)
return newsource
def isparseable(self, deindent=True):
""" return True if source is parseable, heuristically
deindenting it by default.
"""
try:
import parser
except ImportError:
syntax_checker = lambda x: compile(x, 'asd', 'exec')
else:
syntax_checker = parser.suite
if deindent:
source = str(self.deindent())
else:
source = str(self)
try:
#compile(source+'\n', "x", "exec")
syntax_checker(source+'\n')
except KeyboardInterrupt:
raise
except Exception:
return False
else:
return True
def __str__(self):
return "\n".join(self.lines)
def compile(self, filename=None, mode='exec',
flag=generators.compiler_flag,
dont_inherit=0, _genframe=None):
""" return compiled code object. if filename is None
invent an artificial filename which displays
the source/line position of the caller frame.
"""
if not filename or py.path.local(filename).check(file=0):
if _genframe is None:
_genframe = sys._getframe(1) # the caller
fn,lineno = _genframe.f_code.co_filename, _genframe.f_lineno
base = "<%d-codegen " % self._compilecounter
self.__class__._compilecounter += 1
if not filename:
filename = base + '%s:%d>' % (fn, lineno)
else:
filename = base + '%r %s:%d>' % (filename, fn, lineno)
source = "\n".join(self.lines) + '\n'
try:
co = cpy_compile(source, filename, mode, flag)
except SyntaxError:
ex = sys.exc_info()[1]
# re-represent syntax errors from parsing python strings
msglines = self.lines[:ex.lineno]
if ex.offset:
msglines.append(" "*ex.offset + '^')
msglines.append("(code was compiled probably from here: %s)" % filename)
newex = SyntaxError('\n'.join(msglines))
newex.offset = ex.offset
newex.lineno = ex.lineno
newex.text = ex.text
raise newex
else:
if flag & _AST_FLAG:
return co
lines = [(x + "\n") for x in self.lines]
py.std.linecache.cache[filename] = (1, None, lines, filename)
return co
#
# public API shortcut functions
#
def compile_(source, filename=None, mode='exec', flags=
generators.compiler_flag, dont_inherit=0):
""" compile the given source to a raw code object,
and maintain an internal cache which allows later
retrieval of the source code for the code object
and any recursively created code objects.
"""
if _ast is not None and isinstance(source, _ast.AST):
# XXX should Source support having AST?
return cpy_compile(source, filename, mode, flags, dont_inherit)
_genframe = sys._getframe(1) # the caller
s = Source(source)
co = s.compile(filename, mode, flags, _genframe=_genframe)
return co
def getfslineno(obj):
""" Return source location (path, lineno) for the given object.
If the source cannot be determined return ("", -1)
"""
try:
code = py.code.Code(obj)
except TypeError:
try:
fn = (py.std.inspect.getsourcefile(obj) or
py.std.inspect.getfile(obj))
except TypeError:
return "", -1
fspath = fn and py.path.local(fn) or None
lineno = -1
if fspath:
try:
_, lineno = findsource(obj)
except IOError:
pass
else:
fspath = code.path
lineno = code.firstlineno
assert isinstance(lineno, int)
return fspath, lineno
#
# helper functions
#
def findsource(obj):
try:
sourcelines, lineno = py.std.inspect.findsource(obj)
except py.builtin._sysex:
raise
except:
return None, -1
source = Source()
source.lines = [line.rstrip() for line in sourcelines]
return source, lineno
def getsource(obj, **kwargs):
obj = py.code.getrawcode(obj)
try:
strsrc = inspect.getsource(obj)
except IndentationError:
strsrc = "\"Buggy python version consider upgrading, cannot get source\""
assert isinstance(strsrc, str)
return Source(strsrc, **kwargs)
def deindent(lines, offset=None):
if offset is None:
for line in lines:
line = line.expandtabs()
s = line.lstrip()
if s:
offset = len(line)-len(s)
break
else:
offset = 0
if offset == 0:
return list(lines)
newlines = []
def readline_generator(lines):
for line in lines:
yield line + '\n'
while True:
yield ''
it = readline_generator(lines)
try:
for _, _, (sline, _), (eline, _), _ in tokenize.generate_tokens(lambda: next(it)):
if sline > len(lines):
break # End of input reached
if sline > len(newlines):
line = lines[sline - 1].expandtabs()
if line.lstrip() and line[:offset].isspace():
line = line[offset:] # Deindent
newlines.append(line)
for i in range(sline, eline):
# Don't deindent continuing lines of
# multiline tokens (i.e. multiline strings)
newlines.append(lines[i])
except (IndentationError, tokenize.TokenError):
pass
# Add any lines we didn't see. E.g. if an exception was raised.
newlines.extend(lines[len(newlines):])
return newlines
def get_statement_startend2(lineno, node):
import ast
# flatten all statements and except handlers into one lineno-list
# AST's line numbers start indexing at 1
l = []
for x in ast.walk(node):
if isinstance(x, _ast.stmt) or isinstance(x, _ast.ExceptHandler):
l.append(x.lineno - 1)
for name in "finalbody", "orelse":
val = getattr(x, name, None)
if val:
# treat the finally/orelse part as its own statement
l.append(val[0].lineno - 1 - 1)
l.sort()
insert_index = bisect_right(l, lineno)
start = l[insert_index - 1]
if insert_index >= len(l):
end = None
else:
end = l[insert_index]
return start, end
def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
if astnode is None:
content = str(source)
if sys.version_info < (2,7):
content += "\n"
try:
astnode = compile(content, "source", "exec", 1024) # 1024 for AST
except ValueError:
start, end = getstatementrange_old(lineno, source, assertion)
return None, start, end
start, end = get_statement_startend2(lineno, astnode)
# we need to correct the end:
# - ast-parsing strips comments
# - there might be empty lines
# - we might have lesser indented code blocks at the end
if end is None:
end = len(source.lines)
if end > start + 1:
# make sure we don't span differently indented code blocks
# by using the BlockFinder helper which inspect.getsource() itself uses
block_finder = inspect.BlockFinder()
# if we start with an indented line, put blockfinder to "started" mode
block_finder.started = source.lines[start][0].isspace()
it = ((x + "\n") for x in source.lines[start:end])
try:
for tok in tokenize.generate_tokens(lambda: next(it)):
block_finder.tokeneater(*tok)
except (inspect.EndOfBlock, IndentationError):
end = block_finder.last + start
except Exception:
pass
# the end might still point to a comment or empty line, correct it
while end:
line = source.lines[end - 1].lstrip()
if line.startswith("#") or not line:
end -= 1
else:
break
return astnode, start, end
def getstatementrange_old(lineno, source, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region which contains the given lineno.
raise an IndexError if no such statementrange can be found.
"""
# XXX this logic is only used on python2.4 and below
# 1. find the start of the statement
from codeop import compile_command
for start in range(lineno, -1, -1):
if assertion:
line = source.lines[start]
# the following lines are not fully tested, change with care
if 'super' in line and 'self' in line and '__init__' in line:
raise IndexError("likely a subclass")
if "assert" not in line and "raise" not in line:
continue
trylines = source.lines[start:lineno+1]
# quick hack to prepare parsing an indented line with
# compile_command() (which errors on "return" outside defs)
trylines.insert(0, 'def xxx():')
trysource = '\n '.join(trylines)
# ^ space here
try:
compile_command(trysource)
except (SyntaxError, OverflowError, ValueError):
continue
# 2. find the end of the statement
for end in range(lineno+1, len(source)+1):
trysource = source[start:end]
if trysource.isparseable():
return start, end
raise SyntaxError("no valid source range around line %d " % (lineno,))
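For orientation, a minimal usage sketch of the Source API above through the public py.code namespace (the snippet and its source text are illustrative only):
import py

src = py.code.Source("""\
    if 1:
        x = (1 +
             2)
""")
print(src.lines)                 # deindented list of source lines
print(str(src.getstatement(1)))  # full multi-line statement containing line 1 (counted from 0)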

177
third_party/python/py/py/_error.py (vendored)

@@ -1,88 +1,89 @@
"""
create errno-specific classes for IO or os calls.
"""
import sys, os, errno
class Error(EnvironmentError):
def __repr__(self):
return "%s.%s %r: %s " %(self.__class__.__module__,
self.__class__.__name__,
self.__class__.__doc__,
" ".join(map(str, self.args)),
#repr(self.args)
)
def __str__(self):
s = "[%s]: %s" %(self.__class__.__doc__,
" ".join(map(str, self.args)),
)
return s
_winerrnomap = {
2: errno.ENOENT,
3: errno.ENOENT,
17: errno.EEXIST,
13: errno.EBUSY, # empty cd drive, but ENOMEDIUM seems unavailable
22: errno.ENOTDIR,
20: errno.ENOTDIR,
267: errno.ENOTDIR,
5: errno.EACCES, # anything better?
}
class ErrorMaker(object):
""" lazily provides Exception classes for each possible POSIX errno
(as defined per the 'errno' module). All such instances
subclass EnvironmentError.
"""
Error = Error
_errno2class = {}
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError(name)
eno = getattr(errno, name)
cls = self._geterrnoclass(eno)
setattr(self, name, cls)
return cls
def _geterrnoclass(self, eno):
try:
return self._errno2class[eno]
except KeyError:
clsname = errno.errorcode.get(eno, "UnknownErrno%d" %(eno,))
errorcls = type(Error)(clsname, (Error,),
{'__module__':'py.error',
'__doc__': os.strerror(eno)})
self._errno2class[eno] = errorcls
return errorcls
def checked_call(self, func, *args, **kwargs):
""" call a function and raise an errno-exception if applicable. """
__tracebackhide__ = True
try:
return func(*args, **kwargs)
except self.Error:
raise
except (OSError, EnvironmentError):
cls, value, tb = sys.exc_info()
if not hasattr(value, 'errno'):
raise
__tracebackhide__ = False
errno = value.errno
try:
if not isinstance(value, WindowsError):
raise NameError
except NameError:
# we are not on Windows, or we got a proper OSError
cls = self._geterrnoclass(errno)
else:
try:
cls = self._geterrnoclass(_winerrnomap[errno])
except KeyError:
raise value
raise cls("%s%r" % (func.__name__, args))
__tracebackhide__ = True
error = ErrorMaker()
"""
create errno-specific classes for IO or os calls.
"""
import sys, os, errno
class Error(EnvironmentError):
def __repr__(self):
return "%s.%s %r: %s " %(self.__class__.__module__,
self.__class__.__name__,
self.__class__.__doc__,
" ".join(map(str, self.args)),
#repr(self.args)
)
def __str__(self):
s = "[%s]: %s" %(self.__class__.__doc__,
" ".join(map(str, self.args)),
)
return s
_winerrnomap = {
2: errno.ENOENT,
3: errno.ENOENT,
17: errno.EEXIST,
18: errno.EXDEV,
13: errno.EBUSY, # empty cd drive, but ENOMEDIUM seems unavailable
22: errno.ENOTDIR,
20: errno.ENOTDIR,
267: errno.ENOTDIR,
5: errno.EACCES, # anything better?
}
class ErrorMaker(object):
""" lazily provides Exception classes for each possible POSIX errno
(as defined per the 'errno' module). All such instances
subclass EnvironmentError.
"""
Error = Error
_errno2class = {}
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError(name)
eno = getattr(errno, name)
cls = self._geterrnoclass(eno)
setattr(self, name, cls)
return cls
def _geterrnoclass(self, eno):
try:
return self._errno2class[eno]
except KeyError:
clsname = errno.errorcode.get(eno, "UnknownErrno%d" %(eno,))
errorcls = type(Error)(clsname, (Error,),
{'__module__':'py.error',
'__doc__': os.strerror(eno)})
self._errno2class[eno] = errorcls
return errorcls
def checked_call(self, func, *args, **kwargs):
""" call a function and raise an errno-exception if applicable. """
__tracebackhide__ = True
try:
return func(*args, **kwargs)
except self.Error:
raise
except (OSError, EnvironmentError):
cls, value, tb = sys.exc_info()
if not hasattr(value, 'errno'):
raise
__tracebackhide__ = False
errno = value.errno
try:
if not isinstance(value, WindowsError):
raise NameError
except NameError:
# we are not on Windows, or we got a proper OSError
cls = self._geterrnoclass(errno)
else:
try:
cls = self._geterrnoclass(_winerrnomap[errno])
except KeyError:
raise value
raise cls("%s%r" % (func.__name__, args))
__tracebackhide__ = True
error = ErrorMaker()
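A minimal usage sketch of the ErrorMaker machinery above via the public py.error accessor (the path is made up; it just needs to not exist):
import py

try:
    py.error.checked_call(open, "/no/such/file")
except py.error.ENOENT:
    # open() failed with errno ENOENT, which checked_call() re-raised
    # as the lazily created errno-specific class py.error.ENOENT.
    print("missing file reported via py.error.ENOENT")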

324
third_party/python/py/py/_iniconfig.py (vendored)

@@ -1,162 +1,162 @@
""" brain-dead simple parser for ini-style files.
(C) Ronny Pfannschmidt, Holger Krekel -- MIT licensed
"""
__version__ = "0.2.dev2"
__all__ = ['IniConfig', 'ParseError']
COMMENTCHARS = "#;"
class ParseError(Exception):
def __init__(self, path, lineno, msg):
Exception.__init__(self, path, lineno, msg)
self.path = path
self.lineno = lineno
self.msg = msg
def __str__(self):
return "%s:%s: %s" %(self.path, self.lineno+1, self.msg)
class SectionWrapper(object):
def __init__(self, config, name):
self.config = config
self.name = name
def lineof(self, name):
return self.config.lineof(self.name, name)
def get(self, key, default=None, convert=str):
return self.config.get(self.name, key, convert=convert, default=default)
def __getitem__(self, key):
return self.config.sections[self.name][key]
def __iter__(self):
section = self.config.sections.get(self.name, [])
def lineof(key):
return self.config.lineof(self.name, key)
for name in sorted(section, key=lineof):
yield name
def items(self):
for name in self:
yield name, self[name]
class IniConfig(object):
def __init__(self, path, data=None):
self.path = str(path) # convenience
if data is None:
f = open(self.path)
try:
tokens = self._parse(iter(f))
finally:
f.close()
else:
tokens = self._parse(data.splitlines(True))
self._sources = {}
self.sections = {}
for lineno, section, name, value in tokens:
if section is None:
self._raise(lineno, 'no section header defined')
self._sources[section, name] = lineno
if name is None:
if section in self.sections:
self._raise(lineno, 'duplicate section %r'%(section, ))
self.sections[section] = {}
else:
if name in self.sections[section]:
self._raise(lineno, 'duplicate name %r'%(name, ))
self.sections[section][name] = value
def _raise(self, lineno, msg):
raise ParseError(self.path, lineno, msg)
def _parse(self, line_iter):
result = []
section = None
for lineno, line in enumerate(line_iter):
name, data = self._parseline(line, lineno)
# new value
if name is not None and data is not None:
result.append((lineno, section, name, data))
# new section
elif name is not None and data is None:
if not name:
self._raise(lineno, 'empty section name')
section = name
result.append((lineno, section, None, None))
# continuation
elif name is None and data is not None:
if not result:
self._raise(lineno, 'unexpected value continuation')
last = result.pop()
last_name, last_data = last[-2:]
if last_name is None:
self._raise(lineno, 'unexpected value continuation')
if last_data:
data = '%s\n%s' % (last_data, data)
result.append(last[:-1] + (data,))
return result
def _parseline(self, line, lineno):
# blank lines
if iscommentline(line):
line = ""
else:
line = line.rstrip()
if not line:
return None, None
# section
if line[0] == '[':
realline = line
for c in COMMENTCHARS:
line = line.split(c)[0].rstrip()
if line[-1] == "]":
return line[1:-1], None
return None, realline.strip()
# value
elif not line[0].isspace():
try:
name, value = line.split('=', 1)
if ":" in name:
raise ValueError()
except ValueError:
try:
name, value = line.split(":", 1)
except ValueError:
self._raise(lineno, 'unexpected line: %r' % line)
return name.strip(), value.strip()
# continuation
else:
return None, line.strip()
def lineof(self, section, name=None):
lineno = self._sources.get((section, name))
if lineno is not None:
return lineno + 1
def get(self, section, name, default=None, convert=str):
try:
return convert(self.sections[section][name])
except KeyError:
return default
def __getitem__(self, name):
if name not in self.sections:
raise KeyError(name)
return SectionWrapper(self, name)
def __iter__(self):
for name in sorted(self.sections, key=self.lineof):
yield SectionWrapper(self, name)
def __contains__(self, arg):
return arg in self.sections
def iscommentline(line):
c = line.lstrip()[:1]
return c in COMMENTCHARS
""" brain-dead simple parser for ini-style files.
(C) Ronny Pfannschmidt, Holger Krekel -- MIT licensed
"""
__version__ = "0.2.dev2"
__all__ = ['IniConfig', 'ParseError']
COMMENTCHARS = "#;"
class ParseError(Exception):
def __init__(self, path, lineno, msg):
Exception.__init__(self, path, lineno, msg)
self.path = path
self.lineno = lineno
self.msg = msg
def __str__(self):
return "%s:%s: %s" %(self.path, self.lineno+1, self.msg)
class SectionWrapper(object):
def __init__(self, config, name):
self.config = config
self.name = name
def lineof(self, name):
return self.config.lineof(self.name, name)
def get(self, key, default=None, convert=str):
return self.config.get(self.name, key, convert=convert, default=default)
def __getitem__(self, key):
return self.config.sections[self.name][key]
def __iter__(self):
section = self.config.sections.get(self.name, [])
def lineof(key):
return self.config.lineof(self.name, key)
for name in sorted(section, key=lineof):
yield name
def items(self):
for name in self:
yield name, self[name]
class IniConfig(object):
def __init__(self, path, data=None):
self.path = str(path) # convenience
if data is None:
f = open(self.path)
try:
tokens = self._parse(iter(f))
finally:
f.close()
else:
tokens = self._parse(data.splitlines(True))
self._sources = {}
self.sections = {}
for lineno, section, name, value in tokens:
if section is None:
self._raise(lineno, 'no section header defined')
self._sources[section, name] = lineno
if name is None:
if section in self.sections:
self._raise(lineno, 'duplicate section %r'%(section, ))
self.sections[section] = {}
else:
if name in self.sections[section]:
self._raise(lineno, 'duplicate name %r'%(name, ))
self.sections[section][name] = value
def _raise(self, lineno, msg):
raise ParseError(self.path, lineno, msg)
def _parse(self, line_iter):
result = []
section = None
for lineno, line in enumerate(line_iter):
name, data = self._parseline(line, lineno)
# new value
if name is not None and data is not None:
result.append((lineno, section, name, data))
# new section
elif name is not None and data is None:
if not name:
self._raise(lineno, 'empty section name')
section = name
result.append((lineno, section, None, None))
# continuation
elif name is None and data is not None:
if not result:
self._raise(lineno, 'unexpected value continuation')
last = result.pop()
last_name, last_data = last[-2:]
if last_name is None:
self._raise(lineno, 'unexpected value continuation')
if last_data:
data = '%s\n%s' % (last_data, data)
result.append(last[:-1] + (data,))
return result
def _parseline(self, line, lineno):
# blank lines
if iscommentline(line):
line = ""
else:
line = line.rstrip()
if not line:
return None, None
# section
if line[0] == '[':
realline = line
for c in COMMENTCHARS:
line = line.split(c)[0].rstrip()
if line[-1] == "]":
return line[1:-1], None
return None, realline.strip()
# value
elif not line[0].isspace():
try:
name, value = line.split('=', 1)
if ":" in name:
raise ValueError()
except ValueError:
try:
name, value = line.split(":", 1)
except ValueError:
self._raise(lineno, 'unexpected line: %r' % line)
return name.strip(), value.strip()
# continuation
else:
return None, line.strip()
def lineof(self, section, name=None):
lineno = self._sources.get((section, name))
if lineno is not None:
return lineno + 1
def get(self, section, name, default=None, convert=str):
try:
return convert(self.sections[section][name])
except KeyError:
return default
def __getitem__(self, name):
if name not in self.sections:
raise KeyError(name)
return SectionWrapper(self, name)
def __iter__(self):
for name in sorted(self.sections, key=self.lineof):
yield SectionWrapper(self, name)
def __contains__(self, arg):
return arg in self.sections
def iscommentline(line):
c = line.lstrip()[:1]
return c in COMMENTCHARS
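A minimal usage sketch of the parser above through the public py.iniconfig entry point (the file name and ini content are made up; no file is opened because data= is passed):
import py

cfg = py.iniconfig.IniConfig("demo.ini", data="[pytest]\naddopts = -v\n    --tb=short\n")
section = cfg["pytest"]
print(section.get("addopts"))           # continuation line joined: '-v\n--tb=short'
print(cfg.lineof("pytest", "addopts"))  # 1-based line number: 2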

2
third_party/python/py/py/_io/__init__.py (vendored)

@@ -1 +1 @@
""" input/output helping """
""" input/output helping """

742
third_party/python/py/py/_io/capture.py (vendored)

@@ -1,371 +1,371 @@
import os
import sys
import py
import tempfile
try:
from io import StringIO
except ImportError:
from StringIO import StringIO
if sys.version_info < (3,0):
class TextIO(StringIO):
def write(self, data):
if not isinstance(data, unicode):
data = unicode(data, getattr(self, '_encoding', 'UTF-8'), 'replace')
StringIO.write(self, data)
else:
TextIO = StringIO
try:
from io import BytesIO
except ImportError:
class BytesIO(StringIO):
def write(self, data):
if isinstance(data, unicode):
raise TypeError("not a byte value: %r" %(data,))
StringIO.write(self, data)
patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'}
class FDCapture:
""" Capture IO to/from a given os-level filedescriptor. """
def __init__(self, targetfd, tmpfile=None, now=True, patchsys=False):
""" save targetfd descriptor, and open a new
temporary file there. If no tmpfile is
specified a tempfile.Tempfile() will be opened
in text mode.
"""
self.targetfd = targetfd
if tmpfile is None and targetfd != 0:
f = tempfile.TemporaryFile('wb+')
tmpfile = dupfile(f, encoding="UTF-8")
f.close()
self.tmpfile = tmpfile
self._savefd = os.dup(self.targetfd)
if patchsys:
self._oldsys = getattr(sys, patchsysdict[targetfd])
if now:
self.start()
def start(self):
try:
os.fstat(self._savefd)
except OSError:
raise ValueError("saved filedescriptor not valid, "
"did you call start() twice?")
if self.targetfd == 0 and not self.tmpfile:
fd = os.open(devnullpath, os.O_RDONLY)
os.dup2(fd, 0)
os.close(fd)
if hasattr(self, '_oldsys'):
setattr(sys, patchsysdict[self.targetfd], DontReadFromInput())
else:
os.dup2(self.tmpfile.fileno(), self.targetfd)
if hasattr(self, '_oldsys'):
setattr(sys, patchsysdict[self.targetfd], self.tmpfile)
def done(self):
""" unpatch and clean up, returns the self.tmpfile (file object)
"""
os.dup2(self._savefd, self.targetfd)
os.close(self._savefd)
if self.targetfd != 0:
self.tmpfile.seek(0)
if hasattr(self, '_oldsys'):
setattr(sys, patchsysdict[self.targetfd], self._oldsys)
return self.tmpfile
def writeorg(self, data):
""" write a string to the original file descriptor
"""
tempfp = tempfile.TemporaryFile()
try:
os.dup2(self._savefd, tempfp.fileno())
tempfp.write(data)
finally:
tempfp.close()
def dupfile(f, mode=None, buffering=0, raising=False, encoding=None):
""" return a new open file object that's a duplicate of f
mode is duplicated if not given, 'buffering' controls
buffer size (defaulting to no buffering) and 'raising'
defines whether an exception is raised when an incompatible
file object is passed in (if raising is False, the file
object itself will be returned)
"""
try:
fd = f.fileno()
mode = mode or f.mode
except AttributeError:
if raising:
raise
return f
newfd = os.dup(fd)
if sys.version_info >= (3,0):
if encoding is not None:
mode = mode.replace("b", "")
buffering = True
return os.fdopen(newfd, mode, buffering, encoding, closefd=True)
else:
f = os.fdopen(newfd, mode, buffering)
if encoding is not None:
return EncodedFile(f, encoding)
return f
class EncodedFile(object):
def __init__(self, _stream, encoding):
self._stream = _stream
self.encoding = encoding
def write(self, obj):
if isinstance(obj, unicode):
obj = obj.encode(self.encoding)
elif isinstance(obj, str):
pass
else:
obj = str(obj)
self._stream.write(obj)
def writelines(self, linelist):
data = ''.join(linelist)
self.write(data)
def __getattr__(self, name):
return getattr(self._stream, name)
class Capture(object):
def call(cls, func, *args, **kwargs):
""" return a (res, out, err) tuple where
out and err represent the output/error output
during function execution.
call the given function with args/kwargs
and capture output/error during its execution.
"""
so = cls()
try:
res = func(*args, **kwargs)
finally:
out, err = so.reset()
return res, out, err
call = classmethod(call)
def reset(self):
""" reset sys.stdout/stderr and return captured output as strings. """
if hasattr(self, '_reset'):
raise ValueError("was already reset")
self._reset = True
outfile, errfile = self.done(save=False)
out, err = "", ""
if outfile and not outfile.closed:
out = outfile.read()
outfile.close()
if errfile and errfile != outfile and not errfile.closed:
err = errfile.read()
errfile.close()
return out, err
def suspend(self):
""" return current snapshot captures, memorize tempfiles. """
outerr = self.readouterr()
outfile, errfile = self.done()
return outerr
class StdCaptureFD(Capture):
""" This class allows to capture writes to FD1 and FD2
and may connect a NULL file to FD0 (and prevent
reads from sys.stdin). If any of the 0,1,2 file descriptors
is invalid it will not be captured.
"""
def __init__(self, out=True, err=True, mixed=False,
in_=True, patchsys=True, now=True):
self._options = {
"out": out,
"err": err,
"mixed": mixed,
"in_": in_,
"patchsys": patchsys,
"now": now,
}
self._save()
if now:
self.startall()
def _save(self):
in_ = self._options['in_']
out = self._options['out']
err = self._options['err']
mixed = self._options['mixed']
patchsys = self._options['patchsys']
if in_:
try:
self.in_ = FDCapture(0, tmpfile=None, now=False,
patchsys=patchsys)
except OSError:
pass
if out:
tmpfile = None
if hasattr(out, 'write'):
tmpfile = out
try:
self.out = FDCapture(1, tmpfile=tmpfile,
now=False, patchsys=patchsys)
self._options['out'] = self.out.tmpfile
except OSError:
pass
if err:
if out and mixed:
tmpfile = self.out.tmpfile
elif hasattr(err, 'write'):
tmpfile = err
else:
tmpfile = None
try:
self.err = FDCapture(2, tmpfile=tmpfile,
now=False, patchsys=patchsys)
self._options['err'] = self.err.tmpfile
except OSError:
pass
def startall(self):
if hasattr(self, 'in_'):
self.in_.start()
if hasattr(self, 'out'):
self.out.start()
if hasattr(self, 'err'):
self.err.start()
def resume(self):
""" resume capturing with original temp files. """
self.startall()
def done(self, save=True):
""" return (outfile, errfile) and stop capturing. """
outfile = errfile = None
if hasattr(self, 'out') and not self.out.tmpfile.closed:
outfile = self.out.done()
if hasattr(self, 'err') and not self.err.tmpfile.closed:
errfile = self.err.done()
if hasattr(self, 'in_'):
tmpfile = self.in_.done()
if save:
self._save()
return outfile, errfile
def readouterr(self):
""" return snapshot value of stdout/stderr capturings. """
if hasattr(self, "out"):
out = self._readsnapshot(self.out.tmpfile)
else:
out = ""
if hasattr(self, "err"):
err = self._readsnapshot(self.err.tmpfile)
else:
err = ""
return [out, err]
def _readsnapshot(self, f):
f.seek(0)
res = f.read()
enc = getattr(f, "encoding", None)
if enc:
res = py.builtin._totext(res, enc, "replace")
f.truncate(0)
f.seek(0)
return res
class StdCapture(Capture):
""" This class allows to capture writes to sys.stdout|stderr "in-memory"
and will raise errors on tries to read from sys.stdin. It only
modifies sys.stdout|stderr|stdin attributes and does not
touch underlying File Descriptors (use StdCaptureFD for that).
"""
def __init__(self, out=True, err=True, in_=True, mixed=False, now=True):
self._oldout = sys.stdout
self._olderr = sys.stderr
self._oldin = sys.stdin
if out and not hasattr(out, 'file'):
out = TextIO()
self.out = out
if err:
if mixed:
err = out
elif not hasattr(err, 'write'):
err = TextIO()
self.err = err
self.in_ = in_
if now:
self.startall()
def startall(self):
if self.out:
sys.stdout = self.out
if self.err:
sys.stderr = self.err
if self.in_:
sys.stdin = self.in_ = DontReadFromInput()
def done(self, save=True):
""" return (outfile, errfile) and stop capturing. """
outfile = errfile = None
if self.out and not self.out.closed:
sys.stdout = self._oldout
outfile = self.out
outfile.seek(0)
if self.err and not self.err.closed:
sys.stderr = self._olderr
errfile = self.err
errfile.seek(0)
if self.in_:
sys.stdin = self._oldin
return outfile, errfile
def resume(self):
""" resume capturing with original temp files. """
self.startall()
def readouterr(self):
""" return snapshot value of stdout/stderr capturings. """
out = err = ""
if self.out:
out = self.out.getvalue()
self.out.truncate(0)
self.out.seek(0)
if self.err:
err = self.err.getvalue()
self.err.truncate(0)
self.err.seek(0)
return out, err
class DontReadFromInput:
"""Temporary stub class. Ideally when stdin is accessed, the
capturing should be turned off, with possibly all data captured
so far sent to the screen. This should be configurable, though,
because in automated test runs it is better to crash than
hang indefinitely.
"""
def read(self, *args):
raise IOError("reading from stdin while output is captured")
readline = read
readlines = read
__iter__ = read
def fileno(self):
raise ValueError("redirected Stdin is pseudofile, has no fileno()")
def isatty(self):
return False
def close(self):
pass
try:
devnullpath = os.devnull
except AttributeError:
if os.name == 'nt':
devnullpath = 'NUL'
else:
devnullpath = '/dev/null'
import os
import sys
import py
import tempfile
try:
from io import StringIO
except ImportError:
from StringIO import StringIO
if sys.version_info < (3,0):
class TextIO(StringIO):
def write(self, data):
if not isinstance(data, unicode):
data = unicode(data, getattr(self, '_encoding', 'UTF-8'), 'replace')
StringIO.write(self, data)
else:
TextIO = StringIO
try:
from io import BytesIO
except ImportError:
class BytesIO(StringIO):
def write(self, data):
if isinstance(data, unicode):
raise TypeError("not a byte value: %r" %(data,))
StringIO.write(self, data)
patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'}
class FDCapture:
""" Capture IO to/from a given os-level filedescriptor. """
def __init__(self, targetfd, tmpfile=None, now=True, patchsys=False):
""" save targetfd descriptor, and open a new
temporary file there. If no tmpfile is
specified a tempfile.Tempfile() will be opened
in text mode.
"""
self.targetfd = targetfd
if tmpfile is None and targetfd != 0:
f = tempfile.TemporaryFile('wb+')
tmpfile = dupfile(f, encoding="UTF-8")
f.close()
self.tmpfile = tmpfile
self._savefd = os.dup(self.targetfd)
if patchsys:
self._oldsys = getattr(sys, patchsysdict[targetfd])
if now:
self.start()
def start(self):
try:
os.fstat(self._savefd)
except OSError:
raise ValueError("saved filedescriptor not valid, "
"did you call start() twice?")
if self.targetfd == 0 and not self.tmpfile:
fd = os.open(devnullpath, os.O_RDONLY)
os.dup2(fd, 0)
os.close(fd)
if hasattr(self, '_oldsys'):
setattr(sys, patchsysdict[self.targetfd], DontReadFromInput())
else:
os.dup2(self.tmpfile.fileno(), self.targetfd)
if hasattr(self, '_oldsys'):
setattr(sys, patchsysdict[self.targetfd], self.tmpfile)
def done(self):
""" unpatch and clean up, returns the self.tmpfile (file object)
"""
os.dup2(self._savefd, self.targetfd)
os.close(self._savefd)
if self.targetfd != 0:
self.tmpfile.seek(0)
if hasattr(self, '_oldsys'):
setattr(sys, patchsysdict[self.targetfd], self._oldsys)
return self.tmpfile
def writeorg(self, data):
""" write a string to the original file descriptor
"""
tempfp = tempfile.TemporaryFile()
try:
os.dup2(self._savefd, tempfp.fileno())
tempfp.write(data)
finally:
tempfp.close()
def dupfile(f, mode=None, buffering=0, raising=False, encoding=None):
""" return a new open file object that's a duplicate of f
mode is duplicated if not given, 'buffering' controls
buffer size (defaulting to no buffering) and 'raising'
defines whether an exception is raised when an incompatible
file object is passed in (if raising is False, the file
object itself will be returned)
"""
try:
fd = f.fileno()
mode = mode or f.mode
except AttributeError:
if raising:
raise
return f
newfd = os.dup(fd)
if sys.version_info >= (3,0):
if encoding is not None:
mode = mode.replace("b", "")
buffering = True
return os.fdopen(newfd, mode, buffering, encoding, closefd=True)
else:
f = os.fdopen(newfd, mode, buffering)
if encoding is not None:
return EncodedFile(f, encoding)
return f
class EncodedFile(object):
def __init__(self, _stream, encoding):
self._stream = _stream
self.encoding = encoding
def write(self, obj):
if isinstance(obj, unicode):
obj = obj.encode(self.encoding)
elif isinstance(obj, str):
pass
else:
obj = str(obj)
self._stream.write(obj)
def writelines(self, linelist):
data = ''.join(linelist)
self.write(data)
def __getattr__(self, name):
return getattr(self._stream, name)
class Capture(object):
def call(cls, func, *args, **kwargs):
""" return a (res, out, err) tuple where
out and err represent the output/error output
during function execution.
call the given function with args/kwargs
and capture output/error during its execution.
"""
so = cls()
try:
res = func(*args, **kwargs)
finally:
out, err = so.reset()
return res, out, err
call = classmethod(call)
def reset(self):
""" reset sys.stdout/stderr and return captured output as strings. """
if hasattr(self, '_reset'):
raise ValueError("was already reset")
self._reset = True
outfile, errfile = self.done(save=False)
out, err = "", ""
if outfile and not outfile.closed:
out = outfile.read()
outfile.close()
if errfile and errfile != outfile and not errfile.closed:
err = errfile.read()
errfile.close()
return out, err
def suspend(self):
""" return current snapshot captures, memorize tempfiles. """
outerr = self.readouterr()
outfile, errfile = self.done()
return outerr
class StdCaptureFD(Capture):
""" This class allows to capture writes to FD1 and FD2
and may connect a NULL file to FD0 (and prevent
reads from sys.stdin). If any of the 0,1,2 file descriptors
is invalid it will not be captured.
"""
def __init__(self, out=True, err=True, mixed=False,
in_=True, patchsys=True, now=True):
self._options = {
"out": out,
"err": err,
"mixed": mixed,
"in_": in_,
"patchsys": patchsys,
"now": now,
}
self._save()
if now:
self.startall()
def _save(self):
in_ = self._options['in_']
out = self._options['out']
err = self._options['err']
mixed = self._options['mixed']
patchsys = self._options['patchsys']
if in_:
try:
self.in_ = FDCapture(0, tmpfile=None, now=False,
patchsys=patchsys)
except OSError:
pass
if out:
tmpfile = None
if hasattr(out, 'write'):
tmpfile = out
try:
self.out = FDCapture(1, tmpfile=tmpfile,
now=False, patchsys=patchsys)
self._options['out'] = self.out.tmpfile
except OSError:
pass
if err:
if out and mixed:
tmpfile = self.out.tmpfile
elif hasattr(err, 'write'):
tmpfile = err
else:
tmpfile = None
try:
self.err = FDCapture(2, tmpfile=tmpfile,
now=False, patchsys=patchsys)
self._options['err'] = self.err.tmpfile
except OSError:
pass
def startall(self):
if hasattr(self, 'in_'):
self.in_.start()
if hasattr(self, 'out'):
self.out.start()
if hasattr(self, 'err'):
self.err.start()
def resume(self):
""" resume capturing with original temp files. """
self.startall()
def done(self, save=True):
""" return (outfile, errfile) and stop capturing. """
outfile = errfile = None
if hasattr(self, 'out') and not self.out.tmpfile.closed:
outfile = self.out.done()
if hasattr(self, 'err') and not self.err.tmpfile.closed:
errfile = self.err.done()
if hasattr(self, 'in_'):
tmpfile = self.in_.done()
if save:
self._save()
return outfile, errfile
def readouterr(self):
""" return snapshot value of stdout/stderr capturings. """
if hasattr(self, "out"):
out = self._readsnapshot(self.out.tmpfile)
else:
out = ""
if hasattr(self, "err"):
err = self._readsnapshot(self.err.tmpfile)
else:
err = ""
return [out, err]
def _readsnapshot(self, f):
f.seek(0)
res = f.read()
enc = getattr(f, "encoding", None)
if enc:
res = py.builtin._totext(res, enc, "replace")
f.truncate(0)
f.seek(0)
return res
class StdCapture(Capture):
""" This class allows to capture writes to sys.stdout|stderr "in-memory"
and will raise errors on tries to read from sys.stdin. It only
modifies sys.stdout|stderr|stdin attributes and does not
touch underlying File Descriptors (use StdCaptureFD for that).
"""
def __init__(self, out=True, err=True, in_=True, mixed=False, now=True):
self._oldout = sys.stdout
self._olderr = sys.stderr
self._oldin = sys.stdin
if out and not hasattr(out, 'file'):
out = TextIO()
self.out = out
if err:
if mixed:
err = out
elif not hasattr(err, 'write'):
err = TextIO()
self.err = err
self.in_ = in_
if now:
self.startall()
def startall(self):
if self.out:
sys.stdout = self.out
if self.err:
sys.stderr = self.err
if self.in_:
sys.stdin = self.in_ = DontReadFromInput()
def done(self, save=True):
""" return (outfile, errfile) and stop capturing. """
outfile = errfile = None
if self.out and not self.out.closed:
sys.stdout = self._oldout
outfile = self.out
outfile.seek(0)
if self.err and not self.err.closed:
sys.stderr = self._olderr
errfile = self.err
errfile.seek(0)
if self.in_:
sys.stdin = self._oldin
return outfile, errfile
def resume(self):
""" resume capturing with original temp files. """
self.startall()
def readouterr(self):
""" return snapshot value of stdout/stderr capturings. """
out = err = ""
if self.out:
out = self.out.getvalue()
self.out.truncate(0)
self.out.seek(0)
if self.err:
err = self.err.getvalue()
self.err.truncate(0)
self.err.seek(0)
return out, err
class DontReadFromInput:
"""Temporary stub class. Ideally when stdin is accessed, the
capturing should be turned off, with possibly all data captured
so far sent to the screen. This should be configurable, though,
because in automated test runs it is better to crash than
hang indefinitely.
"""
def read(self, *args):
raise IOError("reading from stdin while output is captured")
readline = read
readlines = read
__iter__ = read
def fileno(self):
raise ValueError("redirected Stdin is pseudofile, has no fileno()")
def isatty(self):
return False
def close(self):
pass
try:
devnullpath = os.devnull
except AttributeError:
if os.name == 'nt':
devnullpath = 'NUL'
else:
devnullpath = '/dev/null'
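A minimal usage sketch for the in-memory capture machinery above (added for illustration, not part of the vendored diff; it assumes the class is reachable as py.io.StdCapture, as in upstream pylib):

import sys
import py

cap = py.io.StdCapture()                  # now=True: starts capturing immediately
print("goes into the capture buffer")
sys.stderr.write("so does this\n")
out, err = cap.readouterr()               # snapshot and reset the buffers, keep capturing
outfile, errfile = cap.done()             # restore sys.stdout/stderr/stdin, return buffers
sys.stdout.write("snapshot out=%r err=%r\n" % (out, err))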

142
third_party/python/py/py/_io/saferepr.py vendored

@ -1,71 +1,71 @@
import py
import sys
builtin_repr = repr
reprlib = py.builtin._tryimport('repr', 'reprlib')
class SafeRepr(reprlib.Repr):
""" subclass of repr.Repr that limits the resulting size of repr()
and includes information on exceptions raised during the call.
"""
def repr(self, x):
return self._callhelper(reprlib.Repr.repr, self, x)
def repr_unicode(self, x, level):
# Strictly speaking wrong on narrow builds
def repr(u):
if "'" not in u:
return py.builtin._totext("'%s'") % u
elif '"' not in u:
return py.builtin._totext('"%s"') % u
else:
return py.builtin._totext("'%s'") % u.replace("'", r"\'")
s = repr(x[:self.maxstring])
if len(s) > self.maxstring:
i = max(0, (self.maxstring-3)//2)
j = max(0, self.maxstring-3-i)
s = repr(x[:i] + x[len(x)-j:])
s = s[:i] + '...' + s[len(s)-j:]
return s
def repr_instance(self, x, level):
return self._callhelper(builtin_repr, x)
def _callhelper(self, call, x, *args):
try:
# Try the vanilla repr and make sure that the result is a string
s = call(x, *args)
except py.builtin._sysex:
raise
except:
cls, e, tb = sys.exc_info()
exc_name = getattr(cls, '__name__', 'unknown')
try:
exc_info = str(e)
except py.builtin._sysex:
raise
except:
exc_info = 'unknown'
return '<[%s("%s") raised in repr()] %s object at 0x%x>' % (
exc_name, exc_info, x.__class__.__name__, id(x))
else:
if len(s) > self.maxsize:
i = max(0, (self.maxsize-3)//2)
j = max(0, self.maxsize-3-i)
s = s[:i] + '...' + s[len(s)-j:]
return s
def saferepr(obj, maxsize=240):
""" return a size-limited safe repr-string for the given object.
Failing __repr__ functions of user instances will be represented
with a short exception info and 'saferepr' generally takes
care to never raise exceptions itself. This function is a wrapper
around the Repr/reprlib functionality of the standard 2.6 lib.
"""
# review exception handling
srepr = SafeRepr()
srepr.maxstring = maxsize
srepr.maxsize = maxsize
srepr.maxother = 160
return srepr.repr(obj)
import py
import sys
builtin_repr = repr
reprlib = py.builtin._tryimport('repr', 'reprlib')
class SafeRepr(reprlib.Repr):
""" subclass of repr.Repr that limits the resulting size of repr()
and includes information on exceptions raised during the call.
"""
def repr(self, x):
return self._callhelper(reprlib.Repr.repr, self, x)
def repr_unicode(self, x, level):
# Strictly speaking wrong on narrow builds
def repr(u):
if "'" not in u:
return py.builtin._totext("'%s'") % u
elif '"' not in u:
return py.builtin._totext('"%s"') % u
else:
return py.builtin._totext("'%s'") % u.replace("'", r"\'")
s = repr(x[:self.maxstring])
if len(s) > self.maxstring:
i = max(0, (self.maxstring-3)//2)
j = max(0, self.maxstring-3-i)
s = repr(x[:i] + x[len(x)-j:])
s = s[:i] + '...' + s[len(s)-j:]
return s
def repr_instance(self, x, level):
return self._callhelper(builtin_repr, x)
def _callhelper(self, call, x, *args):
try:
# Try the vanilla repr and make sure that the result is a string
s = call(x, *args)
except py.builtin._sysex:
raise
except:
cls, e, tb = sys.exc_info()
exc_name = getattr(cls, '__name__', 'unknown')
try:
exc_info = str(e)
except py.builtin._sysex:
raise
except:
exc_info = 'unknown'
return '<[%s("%s") raised in repr()] %s object at 0x%x>' % (
exc_name, exc_info, x.__class__.__name__, id(x))
else:
if len(s) > self.maxsize:
i = max(0, (self.maxsize-3)//2)
j = max(0, self.maxsize-3-i)
s = s[:i] + '...' + s[len(s)-j:]
return s
def saferepr(obj, maxsize=240):
""" return a size-limited safe repr-string for the given object.
Failing __repr__ functions of user instances will be represented
with a short exception info and 'saferepr' generally takes
care to never raise exceptions itself. This function is a wrapper
around the Repr/reprlib functionality of the standard 2.6 lib.
"""
# review exception handling
srepr = SafeRepr()
srepr.maxstring = maxsize
srepr.maxsize = maxsize
srepr.maxother = 160
return srepr.repr(obj)
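An illustrative sketch of the behaviour described in the saferepr() docstring above (not part of the diff; py.io.saferepr is the exported entry point in upstream pylib):

import py

class Broken(object):
    def __repr__(self):
        raise RuntimeError("boom")

# a failing __repr__ is reported inline instead of propagating the exception
print(py.io.saferepr(Broken()))
# long reprs are truncated around maxsize with '...'
print(py.io.saferepr("x" * 1000, maxsize=40))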

705
third_party/python/py/py/_io/terminalwriter.py vendored

@ -1,348 +1,357 @@
"""
Helper functions for writing to terminals and files.
"""
import sys, os
import py
py3k = sys.version_info[0] >= 3
from py.builtin import text, bytes
win32_and_ctypes = False
colorama = None
if sys.platform == "win32":
try:
import colorama
except ImportError:
try:
import ctypes
win32_and_ctypes = True
except ImportError:
pass
def _getdimensions():
import termios,fcntl,struct
call = fcntl.ioctl(1,termios.TIOCGWINSZ,"\000"*8)
height,width = struct.unpack( "hhhh", call ) [:2]
return height, width
def get_terminal_width():
height = width = 0
try:
height, width = _getdimensions()
except py.builtin._sysex:
raise
except:
# pass to fallback below
pass
if width == 0:
# FALLBACK:
# * some exception happened
# * or this is emacs terminal which reports (0,0)
width = int(os.environ.get('COLUMNS', 80))
# XXX the windows getdimensions may be bogus, let's sanify a bit
if width < 40:
width = 80
return width
terminal_width = get_terminal_width()
# XXX unify with _escaped func below
def ansi_print(text, esc, file=None, newline=True, flush=False):
if file is None:
file = sys.stderr
text = text.rstrip()
if esc and not isinstance(esc, tuple):
esc = (esc,)
if esc and sys.platform != "win32" and file.isatty():
text = (''.join(['\x1b[%sm' % cod for cod in esc]) +
text +
'\x1b[0m') # ANSI color code "reset"
if newline:
text += '\n'
if esc and win32_and_ctypes and file.isatty():
if 1 in esc:
bold = True
esc = tuple([x for x in esc if x != 1])
else:
bold = False
esctable = {() : FOREGROUND_WHITE, # normal
(31,): FOREGROUND_RED, # red
(32,): FOREGROUND_GREEN, # green
(33,): FOREGROUND_GREEN|FOREGROUND_RED, # yellow
(34,): FOREGROUND_BLUE, # blue
(35,): FOREGROUND_BLUE|FOREGROUND_RED, # purple
(36,): FOREGROUND_BLUE|FOREGROUND_GREEN, # cyan
(37,): FOREGROUND_WHITE, # white
(39,): FOREGROUND_WHITE, # reset
}
attr = esctable.get(esc, FOREGROUND_WHITE)
if bold:
attr |= FOREGROUND_INTENSITY
STD_OUTPUT_HANDLE = -11
STD_ERROR_HANDLE = -12
if file is sys.stderr:
handle = GetStdHandle(STD_ERROR_HANDLE)
else:
handle = GetStdHandle(STD_OUTPUT_HANDLE)
oldcolors = GetConsoleInfo(handle).wAttributes
attr |= (oldcolors & 0x0f0)
SetConsoleTextAttribute(handle, attr)
while len(text) > 32768:
file.write(text[:32768])
text = text[32768:]
if text:
file.write(text)
SetConsoleTextAttribute(handle, oldcolors)
else:
file.write(text)
if flush:
file.flush()
def should_do_markup(file):
if os.environ.get('PY_COLORS') == '1':
return True
if os.environ.get('PY_COLORS') == '0':
return False
return hasattr(file, 'isatty') and file.isatty() \
and os.environ.get('TERM') != 'dumb' \
and not (sys.platform.startswith('java') and os._name == 'nt')
class TerminalWriter(object):
_esctable = dict(black=30, red=31, green=32, yellow=33,
blue=34, purple=35, cyan=36, white=37,
Black=40, Red=41, Green=42, Yellow=43,
Blue=44, Purple=45, Cyan=46, White=47,
bold=1, light=2, blink=5, invert=7)
# XXX deprecate stringio argument
def __init__(self, file=None, stringio=False, encoding=None):
if file is None:
if stringio:
self.stringio = file = py.io.TextIO()
else:
file = py.std.sys.stdout
elif py.builtin.callable(file) and not (
hasattr(file, "write") and hasattr(file, "flush")):
file = WriteFile(file, encoding=encoding)
if hasattr(file, "isatty") and file.isatty() and colorama:
file = colorama.AnsiToWin32(file).stream
self.encoding = encoding or getattr(file, 'encoding', "utf-8")
self._file = file
self.fullwidth = get_terminal_width()
self.hasmarkup = should_do_markup(file)
self._lastlen = 0
def _escaped(self, text, esc):
if esc and self.hasmarkup:
text = (''.join(['\x1b[%sm' % cod for cod in esc]) +
text +'\x1b[0m')
return text
def markup(self, text, **kw):
esc = []
for name in kw:
if name not in self._esctable:
raise ValueError("unknown markup: %r" %(name,))
if kw[name]:
esc.append(self._esctable[name])
return self._escaped(text, tuple(esc))
def sep(self, sepchar, title=None, fullwidth=None, **kw):
if fullwidth is None:
fullwidth = self.fullwidth
# the goal is to have the line be as long as possible
# under the condition that len(line) <= fullwidth
if sys.platform == "win32":
# if we print in the last column on windows we are on a
# new line but there is no way to verify/neutralize this
# (we may not know the exact line width)
# so let's be defensive to avoid empty lines in the output
fullwidth -= 1
if title is not None:
# we want 2 + 2*len(fill) + len(title) <= fullwidth
# i.e. 2 + 2*len(sepchar)*N + len(title) <= fullwidth
# 2*len(sepchar)*N <= fullwidth - len(title) - 2
# N <= (fullwidth - len(title) - 2) // (2*len(sepchar))
N = (fullwidth - len(title) - 2) // (2*len(sepchar))
fill = sepchar * N
line = "%s %s %s" % (fill, title, fill)
else:
# we want len(sepchar)*N <= fullwidth
# i.e. N <= fullwidth // len(sepchar)
line = sepchar * (fullwidth // len(sepchar))
# in some situations there is room for an extra sepchar at the right,
# in particular if we consider that with a sepchar like "_ " the
# trailing space is not important at the end of the line
if len(line) + len(sepchar.rstrip()) <= fullwidth:
line += sepchar.rstrip()
self.line(line, **kw)
def write(self, msg, **kw):
if msg:
if not isinstance(msg, (bytes, text)):
msg = text(msg)
if self.hasmarkup and kw:
markupmsg = self.markup(msg, **kw)
else:
markupmsg = msg
write_out(self._file, markupmsg)
def line(self, s='', **kw):
self.write(s, **kw)
self._checkfill(s)
self.write('\n')
def reline(self, line, **kw):
if not self.hasmarkup:
raise ValueError("cannot use rewrite-line without terminal")
self.write(line, **kw)
self._checkfill(line)
self.write('\r')
self._lastlen = len(line)
def _checkfill(self, line):
diff2last = self._lastlen - len(line)
if diff2last > 0:
self.write(" " * diff2last)
class Win32ConsoleWriter(TerminalWriter):
def write(self, msg, **kw):
if msg:
if not isinstance(msg, (bytes, text)):
msg = text(msg)
oldcolors = None
if self.hasmarkup and kw:
handle = GetStdHandle(STD_OUTPUT_HANDLE)
oldcolors = GetConsoleInfo(handle).wAttributes
default_bg = oldcolors & 0x00F0
attr = default_bg
if kw.pop('bold', False):
attr |= FOREGROUND_INTENSITY
if kw.pop('red', False):
attr |= FOREGROUND_RED
elif kw.pop('blue', False):
attr |= FOREGROUND_BLUE
elif kw.pop('green', False):
attr |= FOREGROUND_GREEN
elif kw.pop('yellow', False):
attr |= FOREGROUND_GREEN|FOREGROUND_RED
else:
attr |= oldcolors & 0x0007
SetConsoleTextAttribute(handle, attr)
write_out(self._file, msg)
if oldcolors:
SetConsoleTextAttribute(handle, oldcolors)
class WriteFile(object):
def __init__(self, writemethod, encoding=None):
self.encoding = encoding
self._writemethod = writemethod
def write(self, data):
if self.encoding:
data = data.encode(self.encoding, "replace")
self._writemethod(data)
def flush(self):
return
if win32_and_ctypes:
TerminalWriter = Win32ConsoleWriter
import ctypes
from ctypes import wintypes
# ctypes access to the Windows console
STD_OUTPUT_HANDLE = -11
STD_ERROR_HANDLE = -12
FOREGROUND_BLACK = 0x0000 # black text
FOREGROUND_BLUE = 0x0001 # text color contains blue.
FOREGROUND_GREEN = 0x0002 # text color contains green.
FOREGROUND_RED = 0x0004 # text color contains red.
FOREGROUND_WHITE = 0x0007
FOREGROUND_INTENSITY = 0x0008 # text color is intensified.
BACKGROUND_BLACK = 0x0000 # background color black
BACKGROUND_BLUE = 0x0010 # background color contains blue.
BACKGROUND_GREEN = 0x0020 # background color contains green.
BACKGROUND_RED = 0x0040 # background color contains red.
BACKGROUND_WHITE = 0x0070
BACKGROUND_INTENSITY = 0x0080 # background color is intensified.
SHORT = ctypes.c_short
class COORD(ctypes.Structure):
_fields_ = [('X', SHORT),
('Y', SHORT)]
class SMALL_RECT(ctypes.Structure):
_fields_ = [('Left', SHORT),
('Top', SHORT),
('Right', SHORT),
('Bottom', SHORT)]
class CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
_fields_ = [('dwSize', COORD),
('dwCursorPosition', COORD),
('wAttributes', wintypes.WORD),
('srWindow', SMALL_RECT),
('dwMaximumWindowSize', COORD)]
_GetStdHandle = ctypes.windll.kernel32.GetStdHandle
_GetStdHandle.argtypes = [wintypes.DWORD]
_GetStdHandle.restype = wintypes.HANDLE
def GetStdHandle(kind):
return _GetStdHandle(kind)
SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute
SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD]
SetConsoleTextAttribute.restype = wintypes.BOOL
_GetConsoleScreenBufferInfo = \
ctypes.windll.kernel32.GetConsoleScreenBufferInfo
_GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE,
ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)]
_GetConsoleScreenBufferInfo.restype = wintypes.BOOL
def GetConsoleInfo(handle):
info = CONSOLE_SCREEN_BUFFER_INFO()
_GetConsoleScreenBufferInfo(handle, ctypes.byref(info))
return info
def _getdimensions():
handle = GetStdHandle(STD_OUTPUT_HANDLE)
info = GetConsoleInfo(handle)
# Subtract one from the width, otherwise the cursor wraps
# and the ending \n causes an empty line to display.
return info.dwSize.Y, info.dwSize.X - 1
def write_out(fil, msg):
# XXX sometimes "msg" is of type bytes, sometimes text which
# complicates the situation. Should we try to enforce unicode?
try:
# on py27 and above writing out to sys.stdout with an encoding
# should usually work for unicode messages (if the encoding is
# capable of it)
fil.write(msg)
except UnicodeEncodeError:
# on py26 it might not work because stdout expects bytes
if fil.encoding:
try:
fil.write(msg.encode(fil.encoding))
except UnicodeEncodeError:
# it might still fail if the encoding is not capable
pass
else:
fil.flush()
return
# fallback: escape all unicode characters
msg = msg.encode("unicode-escape").decode("ascii")
fil.write(msg)
fil.flush()
"""
Helper functions for writing to terminals and files.
"""
import sys, os
import py
py3k = sys.version_info[0] >= 3
from py.builtin import text, bytes
win32_and_ctypes = False
colorama = None
if sys.platform == "win32":
try:
import colorama
except ImportError:
try:
import ctypes
win32_and_ctypes = True
except ImportError:
pass
def _getdimensions():
import termios,fcntl,struct
call = fcntl.ioctl(1,termios.TIOCGWINSZ,"\000"*8)
height,width = struct.unpack( "hhhh", call ) [:2]
return height, width
def get_terminal_width():
height = width = 0
try:
height, width = _getdimensions()
except py.builtin._sysex:
raise
except:
# pass to fallback below
pass
if width == 0:
# FALLBACK:
# * some exception happened
# * or this is emacs terminal which reports (0,0)
width = int(os.environ.get('COLUMNS', 80))
# XXX the windows getdimensions may be bogus, let's sanify a bit
if width < 40:
width = 80
return width
terminal_width = get_terminal_width()
# XXX unify with _escaped func below
def ansi_print(text, esc, file=None, newline=True, flush=False):
if file is None:
file = sys.stderr
text = text.rstrip()
if esc and not isinstance(esc, tuple):
esc = (esc,)
if esc and sys.platform != "win32" and file.isatty():
text = (''.join(['\x1b[%sm' % cod for cod in esc]) +
text +
'\x1b[0m') # ANSI color code "reset"
if newline:
text += '\n'
if esc and win32_and_ctypes and file.isatty():
if 1 in esc:
bold = True
esc = tuple([x for x in esc if x != 1])
else:
bold = False
esctable = {() : FOREGROUND_WHITE, # normal
(31,): FOREGROUND_RED, # red
(32,): FOREGROUND_GREEN, # green
(33,): FOREGROUND_GREEN|FOREGROUND_RED, # yellow
(34,): FOREGROUND_BLUE, # blue
(35,): FOREGROUND_BLUE|FOREGROUND_RED, # purple
(36,): FOREGROUND_BLUE|FOREGROUND_GREEN, # cyan
(37,): FOREGROUND_WHITE, # white
(39,): FOREGROUND_WHITE, # reset
}
attr = esctable.get(esc, FOREGROUND_WHITE)
if bold:
attr |= FOREGROUND_INTENSITY
STD_OUTPUT_HANDLE = -11
STD_ERROR_HANDLE = -12
if file is sys.stderr:
handle = GetStdHandle(STD_ERROR_HANDLE)
else:
handle = GetStdHandle(STD_OUTPUT_HANDLE)
oldcolors = GetConsoleInfo(handle).wAttributes
attr |= (oldcolors & 0x0f0)
SetConsoleTextAttribute(handle, attr)
while len(text) > 32768:
file.write(text[:32768])
text = text[32768:]
if text:
file.write(text)
SetConsoleTextAttribute(handle, oldcolors)
else:
file.write(text)
if flush:
file.flush()
def should_do_markup(file):
if os.environ.get('PY_COLORS') == '1':
return True
if os.environ.get('PY_COLORS') == '0':
return False
return hasattr(file, 'isatty') and file.isatty() \
and os.environ.get('TERM') != 'dumb' \
and not (sys.platform.startswith('java') and os._name == 'nt')
class TerminalWriter(object):
_esctable = dict(black=30, red=31, green=32, yellow=33,
blue=34, purple=35, cyan=36, white=37,
Black=40, Red=41, Green=42, Yellow=43,
Blue=44, Purple=45, Cyan=46, White=47,
bold=1, light=2, blink=5, invert=7)
# XXX deprecate stringio argument
def __init__(self, file=None, stringio=False, encoding=None):
if file is None:
if stringio:
self.stringio = file = py.io.TextIO()
else:
file = py.std.sys.stdout
elif py.builtin.callable(file) and not (
hasattr(file, "write") and hasattr(file, "flush")):
file = WriteFile(file, encoding=encoding)
if hasattr(file, "isatty") and file.isatty() and colorama:
file = colorama.AnsiToWin32(file).stream
self.encoding = encoding or getattr(file, 'encoding', "utf-8")
self._file = file
self.hasmarkup = should_do_markup(file)
self._lastlen = 0
@property
def fullwidth(self):
if hasattr(self, '_terminal_width'):
return self._terminal_width
return get_terminal_width()
@fullwidth.setter
def fullwidth(self, value):
self._terminal_width = value
def _escaped(self, text, esc):
if esc and self.hasmarkup:
text = (''.join(['\x1b[%sm' % cod for cod in esc]) +
text +'\x1b[0m')
return text
def markup(self, text, **kw):
esc = []
for name in kw:
if name not in self._esctable:
raise ValueError("unknown markup: %r" %(name,))
if kw[name]:
esc.append(self._esctable[name])
return self._escaped(text, tuple(esc))
def sep(self, sepchar, title=None, fullwidth=None, **kw):
if fullwidth is None:
fullwidth = self.fullwidth
# the goal is to have the line be as long as possible
# under the condition that len(line) <= fullwidth
if sys.platform == "win32":
# if we print in the last column on windows we are on a
# new line but there is no way to verify/neutralize this
# (we may not know the exact line width)
# so let's be defensive to avoid empty lines in the output
fullwidth -= 1
if title is not None:
# we want 2 + 2*len(fill) + len(title) <= fullwidth
# i.e. 2 + 2*len(sepchar)*N + len(title) <= fullwidth
# 2*len(sepchar)*N <= fullwidth - len(title) - 2
# N <= (fullwidth - len(title) - 2) // (2*len(sepchar))
N = (fullwidth - len(title) - 2) // (2*len(sepchar))
fill = sepchar * N
line = "%s %s %s" % (fill, title, fill)
else:
# we want len(sepchar)*N <= fullwidth
# i.e. N <= fullwidth // len(sepchar)
line = sepchar * (fullwidth // len(sepchar))
# in some situations there is room for an extra sepchar at the right,
# in particular if we consider that with a sepchar like "_ " the
# trailing space is not important at the end of the line
if len(line) + len(sepchar.rstrip()) <= fullwidth:
line += sepchar.rstrip()
self.line(line, **kw)
def write(self, msg, **kw):
if msg:
if not isinstance(msg, (bytes, text)):
msg = text(msg)
if self.hasmarkup and kw:
markupmsg = self.markup(msg, **kw)
else:
markupmsg = msg
write_out(self._file, markupmsg)
def line(self, s='', **kw):
self.write(s, **kw)
self._checkfill(s)
self.write('\n')
def reline(self, line, **kw):
if not self.hasmarkup:
raise ValueError("cannot use rewrite-line without terminal")
self.write(line, **kw)
self._checkfill(line)
self.write('\r')
self._lastlen = len(line)
def _checkfill(self, line):
diff2last = self._lastlen - len(line)
if diff2last > 0:
self.write(" " * diff2last)
class Win32ConsoleWriter(TerminalWriter):
def write(self, msg, **kw):
if msg:
if not isinstance(msg, (bytes, text)):
msg = text(msg)
oldcolors = None
if self.hasmarkup and kw:
handle = GetStdHandle(STD_OUTPUT_HANDLE)
oldcolors = GetConsoleInfo(handle).wAttributes
default_bg = oldcolors & 0x00F0
attr = default_bg
if kw.pop('bold', False):
attr |= FOREGROUND_INTENSITY
if kw.pop('red', False):
attr |= FOREGROUND_RED
elif kw.pop('blue', False):
attr |= FOREGROUND_BLUE
elif kw.pop('green', False):
attr |= FOREGROUND_GREEN
elif kw.pop('yellow', False):
attr |= FOREGROUND_GREEN|FOREGROUND_RED
else:
attr |= oldcolors & 0x0007
SetConsoleTextAttribute(handle, attr)
write_out(self._file, msg)
if oldcolors:
SetConsoleTextAttribute(handle, oldcolors)
class WriteFile(object):
def __init__(self, writemethod, encoding=None):
self.encoding = encoding
self._writemethod = writemethod
def write(self, data):
if self.encoding:
data = data.encode(self.encoding, "replace")
self._writemethod(data)
def flush(self):
return
if win32_and_ctypes:
TerminalWriter = Win32ConsoleWriter
import ctypes
from ctypes import wintypes
# ctypes access to the Windows console
STD_OUTPUT_HANDLE = -11
STD_ERROR_HANDLE = -12
FOREGROUND_BLACK = 0x0000 # black text
FOREGROUND_BLUE = 0x0001 # text color contains blue.
FOREGROUND_GREEN = 0x0002 # text color contains green.
FOREGROUND_RED = 0x0004 # text color contains red.
FOREGROUND_WHITE = 0x0007
FOREGROUND_INTENSITY = 0x0008 # text color is intensified.
BACKGROUND_BLACK = 0x0000 # background color black
BACKGROUND_BLUE = 0x0010 # background color contains blue.
BACKGROUND_GREEN = 0x0020 # background color contains green.
BACKGROUND_RED = 0x0040 # background color contains red.
BACKGROUND_WHITE = 0x0070
BACKGROUND_INTENSITY = 0x0080 # background color is intensified.
SHORT = ctypes.c_short
class COORD(ctypes.Structure):
_fields_ = [('X', SHORT),
('Y', SHORT)]
class SMALL_RECT(ctypes.Structure):
_fields_ = [('Left', SHORT),
('Top', SHORT),
('Right', SHORT),
('Bottom', SHORT)]
class CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
_fields_ = [('dwSize', COORD),
('dwCursorPosition', COORD),
('wAttributes', wintypes.WORD),
('srWindow', SMALL_RECT),
('dwMaximumWindowSize', COORD)]
_GetStdHandle = ctypes.windll.kernel32.GetStdHandle
_GetStdHandle.argtypes = [wintypes.DWORD]
_GetStdHandle.restype = wintypes.HANDLE
def GetStdHandle(kind):
return _GetStdHandle(kind)
SetConsoleTextAttribute = ctypes.windll.kernel32.SetConsoleTextAttribute
SetConsoleTextAttribute.argtypes = [wintypes.HANDLE, wintypes.WORD]
SetConsoleTextAttribute.restype = wintypes.BOOL
_GetConsoleScreenBufferInfo = \
ctypes.windll.kernel32.GetConsoleScreenBufferInfo
_GetConsoleScreenBufferInfo.argtypes = [wintypes.HANDLE,
ctypes.POINTER(CONSOLE_SCREEN_BUFFER_INFO)]
_GetConsoleScreenBufferInfo.restype = wintypes.BOOL
def GetConsoleInfo(handle):
info = CONSOLE_SCREEN_BUFFER_INFO()
_GetConsoleScreenBufferInfo(handle, ctypes.byref(info))
return info
def _getdimensions():
handle = GetStdHandle(STD_OUTPUT_HANDLE)
info = GetConsoleInfo(handle)
# Subtract one from the width, otherwise the cursor wraps
# and the ending \n causes an empty line to display.
return info.dwSize.Y, info.dwSize.X - 1
def write_out(fil, msg):
# XXX sometimes "msg" is of type bytes, sometimes text which
# complicates the situation. Should we try to enforce unicode?
try:
# on py27 and above writing out to sys.stdout with an encoding
# should usually work for unicode messages (if the encoding is
# capable of it)
fil.write(msg)
except UnicodeEncodeError:
# on py26 it might not work because stdout expects bytes
if fil.encoding:
try:
fil.write(msg.encode(fil.encoding))
except UnicodeEncodeError:
# it might still fail if the encoding is not capable
pass
else:
fil.flush()
return
# fallback: escape all unicode characters
msg = msg.encode("unicode-escape").decode("ascii")
fil.write(msg)
fil.flush()
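A short usage sketch for the writer above (illustrative only; py.io.TerminalWriter is the exported name). Note the change in this revision: fullwidth is now a property that re-queries the terminal size unless an explicit value has been assigned.

import py

tw = py.io.TerminalWriter()                  # defaults to sys.stdout
tw.sep("=", "test session starts")           # full-width separator with a title
tw.line("plain line")
tw.line("colored only on a tty", green=True, bold=True)
tw.fullwidth = 60                            # pin the width instead of auto-detecting it
tw.sep("-")
# PY_COLORS=1 or PY_COLORS=0 in the environment forces markup on or off (see should_do_markup)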

4
third_party/python/py/py/_log/__init__.py vendored

@ -1,2 +1,2 @@
""" logging API ('producers' and 'consumers' connected via keywords) """
""" logging API ('producers' and 'consumers' connected via keywords) """

372
third_party/python/py/py/_log/log.py vendored

@ -1,186 +1,186 @@
"""
basic logging functionality based on a producer/consumer scheme.
XXX implement this API: (maybe put it into slogger.py?)
log = Logger(
info=py.log.STDOUT,
debug=py.log.STDOUT,
command=None)
log.info("hello", "world")
log.command("hello", "world")
log = Logger(info=Logger(something=...),
debug=py.log.STDOUT,
command=None)
"""
import py, sys
class Message(object):
def __init__(self, keywords, args):
self.keywords = keywords
self.args = args
def content(self):
return " ".join(map(str, self.args))
def prefix(self):
return "[%s] " % (":".join(self.keywords))
def __str__(self):
return self.prefix() + self.content()
class Producer(object):
""" (deprecated) Log producer API which sends messages to be logged
to a 'consumer' object, which then prints them to stdout,
stderr, files, etc. Used extensively by PyPy-1.1.
"""
Message = Message # to allow later customization
keywords2consumer = {}
def __init__(self, keywords, keywordmapper=None, **kw):
if hasattr(keywords, 'split'):
keywords = tuple(keywords.split())
self._keywords = keywords
if keywordmapper is None:
keywordmapper = default_keywordmapper
self._keywordmapper = keywordmapper
def __repr__(self):
return "<py.log.Producer %s>" % ":".join(self._keywords)
def __getattr__(self, name):
if '_' in name:
raise AttributeError(name)
producer = self.__class__(self._keywords + (name,))
setattr(self, name, producer)
return producer
def __call__(self, *args):
""" write a message to the appropriate consumer(s) """
func = self._keywordmapper.getconsumer(self._keywords)
if func is not None:
func(self.Message(self._keywords, args))
class KeywordMapper:
def __init__(self):
self.keywords2consumer = {}
def getstate(self):
return self.keywords2consumer.copy()
def setstate(self, state):
self.keywords2consumer.clear()
self.keywords2consumer.update(state)
def getconsumer(self, keywords):
""" return a consumer matching the given keywords.
tries to find the most suitable consumer by walking, starting from
the back, the list of keywords, the first consumer matching a
keyword is returned (falling back to py.log.default)
"""
for i in range(len(keywords), 0, -1):
try:
return self.keywords2consumer[keywords[:i]]
except KeyError:
continue
return self.keywords2consumer.get('default', default_consumer)
def setconsumer(self, keywords, consumer):
""" set a consumer for a set of keywords. """
# normalize to tuples
if isinstance(keywords, str):
keywords = tuple(filter(None, keywords.split()))
elif hasattr(keywords, '_keywords'):
keywords = keywords._keywords
elif not isinstance(keywords, tuple):
raise TypeError("key %r is not a string or tuple" % (keywords,))
if consumer is not None and not py.builtin.callable(consumer):
if not hasattr(consumer, 'write'):
raise TypeError(
"%r should be None, callable or file-like" % (consumer,))
consumer = File(consumer)
self.keywords2consumer[keywords] = consumer
def default_consumer(msg):
""" the default consumer, prints the message to stdout (using 'print') """
sys.stderr.write(str(msg)+"\n")
default_keywordmapper = KeywordMapper()
def setconsumer(keywords, consumer):
default_keywordmapper.setconsumer(keywords, consumer)
def setstate(state):
default_keywordmapper.setstate(state)
def getstate():
return default_keywordmapper.getstate()
#
# Consumers
#
class File(object):
""" log consumer wrapping a file(-like) object """
def __init__(self, f):
assert hasattr(f, 'write')
#assert isinstance(f, file) or not hasattr(f, 'open')
self._file = f
def __call__(self, msg):
""" write a message to the log """
self._file.write(str(msg) + "\n")
if hasattr(self._file, 'flush'):
self._file.flush()
class Path(object):
""" log consumer that opens and writes to a Path """
def __init__(self, filename, append=False,
delayed_create=False, buffering=False):
self._append = append
self._filename = str(filename)
self._buffering = buffering
if not delayed_create:
self._openfile()
def _openfile(self):
mode = self._append and 'a' or 'w'
f = open(self._filename, mode)
self._file = f
def __call__(self, msg):
""" write a message to the log """
if not hasattr(self, "_file"):
self._openfile()
self._file.write(str(msg) + "\n")
if not self._buffering:
self._file.flush()
def STDOUT(msg):
""" consumer that writes to sys.stdout """
sys.stdout.write(str(msg)+"\n")
def STDERR(msg):
""" consumer that writes to sys.stderr """
sys.stderr.write(str(msg)+"\n")
class Syslog:
""" consumer that writes to the syslog daemon """
def __init__(self, priority = None):
if priority is None:
priority = self.LOG_INFO
self.priority = priority
def __call__(self, msg):
""" write a message to the log """
py.std.syslog.syslog(self.priority, str(msg))
for _prio in "EMERG ALERT CRIT ERR WARNING NOTICE INFO DEBUG".split():
_prio = "LOG_" + _prio
try:
setattr(Syslog, _prio, getattr(py.std.syslog, _prio))
except AttributeError:
pass
"""
basic logging functionality based on a producer/consumer scheme.
XXX implement this API: (maybe put it into slogger.py?)
log = Logger(
info=py.log.STDOUT,
debug=py.log.STDOUT,
command=None)
log.info("hello", "world")
log.command("hello", "world")
log = Logger(info=Logger(something=...),
debug=py.log.STDOUT,
command=None)
"""
import py, sys
class Message(object):
def __init__(self, keywords, args):
self.keywords = keywords
self.args = args
def content(self):
return " ".join(map(str, self.args))
def prefix(self):
return "[%s] " % (":".join(self.keywords))
def __str__(self):
return self.prefix() + self.content()
class Producer(object):
""" (deprecated) Log producer API which sends messages to be logged
to a 'consumer' object, which then prints them to stdout,
stderr, files, etc. Used extensively by PyPy-1.1.
"""
Message = Message # to allow later customization
keywords2consumer = {}
def __init__(self, keywords, keywordmapper=None, **kw):
if hasattr(keywords, 'split'):
keywords = tuple(keywords.split())
self._keywords = keywords
if keywordmapper is None:
keywordmapper = default_keywordmapper
self._keywordmapper = keywordmapper
def __repr__(self):
return "<py.log.Producer %s>" % ":".join(self._keywords)
def __getattr__(self, name):
if '_' in name:
raise AttributeError(name)
producer = self.__class__(self._keywords + (name,))
setattr(self, name, producer)
return producer
def __call__(self, *args):
""" write a message to the appropriate consumer(s) """
func = self._keywordmapper.getconsumer(self._keywords)
if func is not None:
func(self.Message(self._keywords, args))
class KeywordMapper:
def __init__(self):
self.keywords2consumer = {}
def getstate(self):
return self.keywords2consumer.copy()
def setstate(self, state):
self.keywords2consumer.clear()
self.keywords2consumer.update(state)
def getconsumer(self, keywords):
""" return a consumer matching the given keywords.
tries to find the most suitable consumer by walking, starting from
the back, the list of keywords, the first consumer matching a
keyword is returned (falling back to py.log.default)
"""
for i in range(len(keywords), 0, -1):
try:
return self.keywords2consumer[keywords[:i]]
except KeyError:
continue
return self.keywords2consumer.get('default', default_consumer)
def setconsumer(self, keywords, consumer):
""" set a consumer for a set of keywords. """
# normalize to tuples
if isinstance(keywords, str):
keywords = tuple(filter(None, keywords.split()))
elif hasattr(keywords, '_keywords'):
keywords = keywords._keywords
elif not isinstance(keywords, tuple):
raise TypeError("key %r is not a string or tuple" % (keywords,))
if consumer is not None and not py.builtin.callable(consumer):
if not hasattr(consumer, 'write'):
raise TypeError(
"%r should be None, callable or file-like" % (consumer,))
consumer = File(consumer)
self.keywords2consumer[keywords] = consumer
def default_consumer(msg):
""" the default consumer, prints the message to stdout (using 'print') """
sys.stderr.write(str(msg)+"\n")
default_keywordmapper = KeywordMapper()
def setconsumer(keywords, consumer):
default_keywordmapper.setconsumer(keywords, consumer)
def setstate(state):
default_keywordmapper.setstate(state)
def getstate():
return default_keywordmapper.getstate()
#
# Consumers
#
class File(object):
""" log consumer wrapping a file(-like) object """
def __init__(self, f):
assert hasattr(f, 'write')
#assert isinstance(f, file) or not hasattr(f, 'open')
self._file = f
def __call__(self, msg):
""" write a message to the log """
self._file.write(str(msg) + "\n")
if hasattr(self._file, 'flush'):
self._file.flush()
class Path(object):
""" log consumer that opens and writes to a Path """
def __init__(self, filename, append=False,
delayed_create=False, buffering=False):
self._append = append
self._filename = str(filename)
self._buffering = buffering
if not delayed_create:
self._openfile()
def _openfile(self):
mode = self._append and 'a' or 'w'
f = open(self._filename, mode)
self._file = f
def __call__(self, msg):
""" write a message to the log """
if not hasattr(self, "_file"):
self._openfile()
self._file.write(str(msg) + "\n")
if not self._buffering:
self._file.flush()
def STDOUT(msg):
""" consumer that writes to sys.stdout """
sys.stdout.write(str(msg)+"\n")
def STDERR(msg):
""" consumer that writes to sys.stderr """
sys.stderr.write(str(msg)+"\n")
class Syslog:
""" consumer that writes to the syslog daemon """
def __init__(self, priority = None):
if priority is None:
priority = self.LOG_INFO
self.priority = priority
def __call__(self, msg):
""" write a message to the log """
py.std.syslog.syslog(self.priority, str(msg))
for _prio in "EMERG ALERT CRIT ERR WARNING NOTICE INFO DEBUG".split():
_prio = "LOG_" + _prio
try:
setattr(Syslog, _prio, getattr(py.std.syslog, _prio))
except AttributeError:
pass
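A small sketch of the producer/consumer wiring described in the module docstring above (illustrative only; Producer, setconsumer and STDOUT are exported on the py.log namespace):

import py

log = py.log.Producer("myapp")               # producer for the 'myapp' keyword chain
py.log.setconsumer("myapp", py.log.STDOUT)   # route 'myapp ...' messages to stdout
py.log.setconsumer("myapp debug", None)      # silence the 'myapp debug' sub-channel
log("starting", 42)                          # -> [myapp] starting 42
log.debug("dropped, consumer is None")
log.io("falls back to the 'myapp' consumer") # -> [myapp:io] falls back to the 'myapp' consumer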

152
third_party/python/py/py/_log/warning.py vendored

@ -1,76 +1,76 @@
import py, sys
class DeprecationWarning(DeprecationWarning):
def __init__(self, msg, path, lineno):
self.msg = msg
self.path = path
self.lineno = lineno
def __repr__(self):
return "%s:%d: %s" %(self.path, self.lineno+1, self.msg)
def __str__(self):
return self.msg
def _apiwarn(startversion, msg, stacklevel=2, function=None):
# below is mostly COPIED from python2.4/warnings.py's def warn()
# Get context information
if isinstance(stacklevel, str):
frame = sys._getframe(1)
level = 1
found = frame.f_code.co_filename.find(stacklevel) != -1
while frame:
co = frame.f_code
if co.co_filename.find(stacklevel) == -1:
if found:
stacklevel = level
break
else:
found = True
level += 1
frame = frame.f_back
else:
stacklevel = 1
msg = "%s (since version %s)" %(msg, startversion)
warn(msg, stacklevel=stacklevel+1, function=function)
def warn(msg, stacklevel=1, function=None):
if function is not None:
filename = py.std.inspect.getfile(function)
lineno = py.code.getrawcode(function).co_firstlineno
else:
try:
caller = sys._getframe(stacklevel)
except ValueError:
globals = sys.__dict__
lineno = 1
else:
globals = caller.f_globals
lineno = caller.f_lineno
if '__name__' in globals:
module = globals['__name__']
else:
module = "<string>"
filename = globals.get('__file__')
if filename:
fnl = filename.lower()
if fnl.endswith(".pyc") or fnl.endswith(".pyo"):
filename = filename[:-1]
elif fnl.endswith("$py.class"):
filename = filename.replace('$py.class', '.py')
else:
if module == "__main__":
try:
filename = sys.argv[0]
except AttributeError:
# embedded interpreters don't have sys.argv, see bug #839151
filename = '__main__'
if not filename:
filename = module
path = py.path.local(filename)
warning = DeprecationWarning(msg, path, lineno)
py.std.warnings.warn_explicit(warning, category=Warning,
filename=str(warning.path),
lineno=warning.lineno,
registry=py.std.warnings.__dict__.setdefault(
"__warningsregistry__", {})
)
import py, sys
class DeprecationWarning(DeprecationWarning):
def __init__(self, msg, path, lineno):
self.msg = msg
self.path = path
self.lineno = lineno
def __repr__(self):
return "%s:%d: %s" %(self.path, self.lineno+1, self.msg)
def __str__(self):
return self.msg
def _apiwarn(startversion, msg, stacklevel=2, function=None):
# below is mostly COPIED from python2.4/warnings.py's def warn()
# Get context information
if isinstance(stacklevel, str):
frame = sys._getframe(1)
level = 1
found = frame.f_code.co_filename.find(stacklevel) != -1
while frame:
co = frame.f_code
if co.co_filename.find(stacklevel) == -1:
if found:
stacklevel = level
break
else:
found = True
level += 1
frame = frame.f_back
else:
stacklevel = 1
msg = "%s (since version %s)" %(msg, startversion)
warn(msg, stacklevel=stacklevel+1, function=function)
def warn(msg, stacklevel=1, function=None):
if function is not None:
filename = py.std.inspect.getfile(function)
lineno = py.code.getrawcode(function).co_firstlineno
else:
try:
caller = sys._getframe(stacklevel)
except ValueError:
globals = sys.__dict__
lineno = 1
else:
globals = caller.f_globals
lineno = caller.f_lineno
if '__name__' in globals:
module = globals['__name__']
else:
module = "<string>"
filename = globals.get('__file__')
if filename:
fnl = filename.lower()
if fnl.endswith(".pyc") or fnl.endswith(".pyo"):
filename = filename[:-1]
elif fnl.endswith("$py.class"):
filename = filename.replace('$py.class', '.py')
else:
if module == "__main__":
try:
filename = sys.argv[0]
except AttributeError:
# embedded interpreters don't have sys.argv, see bug #839151
filename = '__main__'
if not filename:
filename = module
path = py.path.local(filename)
warning = DeprecationWarning(msg, path, lineno)
py.std.warnings.warn_explicit(warning, category=Warning,
filename=str(warning.path),
lineno=warning.lineno,
registry=py.std.warnings.__dict__.setdefault(
"__warningsregistry__", {})
)
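An illustrative sketch of the deprecation helper above (not part of the diff). It assumes _apiwarn is reachable as py.log._apiwarn, as upstream pylib exposes it; old_helper/new_helper are made-up names:

import py

def old_helper():
    # stacklevel=2 points the warning at the caller of old_helper(), not at this line
    py.log._apiwarn("1.4", "old_helper() is deprecated, use new_helper() instead",
                    stacklevel=2)
    return 42

old_helper()   # warns "... (since version 1.4)" attributed to the calling file and line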

2
third_party/python/py/py/_path/__init__.py vendored

@ -1 +1 @@
""" unified file system api """
""" unified file system api """

228
third_party/python/py/py/_path/cacheutil.py vendored

@ -1,114 +1,114 @@
"""
This module contains multithread-safe cache implementations.
All Caches have
getorbuild(key, builder)
delentry(key)
methods and allow configuration when instantiating the cache class.
"""
from time import time as gettime
class BasicCache(object):
def __init__(self, maxentries=128):
self.maxentries = maxentries
self.prunenum = int(maxentries - maxentries/8)
self._dict = {}
def clear(self):
self._dict.clear()
def _getentry(self, key):
return self._dict[key]
def _putentry(self, key, entry):
self._prunelowestweight()
self._dict[key] = entry
def delentry(self, key, raising=False):
try:
del self._dict[key]
except KeyError:
if raising:
raise
def getorbuild(self, key, builder):
try:
entry = self._getentry(key)
except KeyError:
entry = self._build(key, builder)
self._putentry(key, entry)
return entry.value
def _prunelowestweight(self):
""" prune out entries with lowest weight. """
numentries = len(self._dict)
if numentries >= self.maxentries:
# evict according to entry's weight
items = [(entry.weight, key)
for key, entry in self._dict.items()]
items.sort()
index = numentries - self.prunenum
if index > 0:
for weight, key in items[:index]:
# in MT situations the element might be gone
self.delentry(key, raising=False)
class BuildcostAccessCache(BasicCache):
""" A BuildTime/Access-counting cache implementation.
the weight of a value is computed as the product of
num-accesses-of-a-value * time-to-build-the-value
The values with the least such weights are evicted
if the cache maxentries threshold is exceeded.
For implementation flexibility more than one object
might be evicted at a time.
"""
# time function to use for measuring build-times
def _build(self, key, builder):
start = gettime()
val = builder()
end = gettime()
return WeightedCountingEntry(val, end-start)
class WeightedCountingEntry(object):
def __init__(self, value, oneweight):
self._value = value
self.weight = self._oneweight = oneweight
def value(self):
self.weight += self._oneweight
return self._value
value = property(value)
class AgingCache(BasicCache):
""" This cache prunes out cache entries that are too old.
"""
def __init__(self, maxentries=128, maxseconds=10.0):
super(AgingCache, self).__init__(maxentries)
self.maxseconds = maxseconds
def _getentry(self, key):
entry = self._dict[key]
if entry.isexpired():
self.delentry(key)
raise KeyError(key)
return entry
def _build(self, key, builder):
val = builder()
entry = AgingEntry(val, gettime() + self.maxseconds)
return entry
class AgingEntry(object):
def __init__(self, value, expirationtime):
self.value = value
self.weight = expirationtime
def isexpired(self):
t = gettime()
return t >= self.weight
"""
This module contains multithread-safe cache implementations.
All Caches have
getorbuild(key, builder)
delentry(key)
methods and allow configuration when instantiating the cache class.
"""
from time import time as gettime
class BasicCache(object):
def __init__(self, maxentries=128):
self.maxentries = maxentries
self.prunenum = int(maxentries - maxentries/8)
self._dict = {}
def clear(self):
self._dict.clear()
def _getentry(self, key):
return self._dict[key]
def _putentry(self, key, entry):
self._prunelowestweight()
self._dict[key] = entry
def delentry(self, key, raising=False):
try:
del self._dict[key]
except KeyError:
if raising:
raise
def getorbuild(self, key, builder):
try:
entry = self._getentry(key)
except KeyError:
entry = self._build(key, builder)
self._putentry(key, entry)
return entry.value
def _prunelowestweight(self):
""" prune out entries with lowest weight. """
numentries = len(self._dict)
if numentries >= self.maxentries:
# evict according to entry's weight
items = [(entry.weight, key)
for key, entry in self._dict.items()]
items.sort()
index = numentries - self.prunenum
if index > 0:
for weight, key in items[:index]:
# in MT situations the element might be gone
self.delentry(key, raising=False)
class BuildcostAccessCache(BasicCache):
""" A BuildTime/Access-counting cache implementation.
the weight of a value is computed as the product of
num-accesses-of-a-value * time-to-build-the-value
The values with the least such weights are evicted
if the cache maxentries threshold is exceeded.
For implementation flexibility more than one object
might be evicted at a time.
"""
# time function to use for measuring build-times
def _build(self, key, builder):
start = gettime()
val = builder()
end = gettime()
return WeightedCountingEntry(val, end-start)
class WeightedCountingEntry(object):
def __init__(self, value, oneweight):
self._value = value
self.weight = self._oneweight = oneweight
def value(self):
self.weight += self._oneweight
return self._value
value = property(value)
class AgingCache(BasicCache):
""" This cache prunes out cache entries that are too old.
"""
def __init__(self, maxentries=128, maxseconds=10.0):
super(AgingCache, self).__init__(maxentries)
self.maxseconds = maxseconds
def _getentry(self, key):
entry = self._dict[key]
if entry.isexpired():
self.delentry(key)
raise KeyError(key)
return entry
def _build(self, key, builder):
val = builder()
entry = AgingEntry(val, gettime() + self.maxseconds)
return entry
class AgingEntry(object):
def __init__(self, value, expirationtime):
self.value = value
self.weight = expirationtime
def isexpired(self):
t = gettime()
return t >= self.weight
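A usage sketch for the caches above (illustrative; the module is internal, so the import goes through py._path.cacheutil rather than a public py.* name):

from py._path.cacheutil import BuildcostAccessCache, AgingCache

cache = BuildcostAccessCache(maxentries=128)

def build():
    # pretend this is expensive; entries are weighted by build time * access count
    return sum(range(100000))

v1 = cache.getorbuild("answer", build)    # built and stored
v2 = cache.getorbuild("answer", build)    # served from the cache
assert v1 == v2

aging = AgingCache(maxentries=16, maxseconds=0.5)
aging.getorbuild("token", lambda: "fresh")   # expires ~0.5 seconds after being built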

848
third_party/python/py/py/_path/common.py vendored

@ -1,403 +1,445 @@
"""
"""
import os, sys, posixpath
import py
# Moved from local.py.
iswin32 = sys.platform == "win32" or (getattr(os, '_name', False) == 'nt')
class Checkers:
_depend_on_existence = 'exists', 'link', 'dir', 'file'
def __init__(self, path):
self.path = path
def dir(self):
raise NotImplementedError
def file(self):
raise NotImplementedError
def dotfile(self):
return self.path.basename.startswith('.')
def ext(self, arg):
if not arg.startswith('.'):
arg = '.' + arg
return self.path.ext == arg
def exists(self):
raise NotImplementedError
def basename(self, arg):
return self.path.basename == arg
def basestarts(self, arg):
return self.path.basename.startswith(arg)
def relto(self, arg):
return self.path.relto(arg)
def fnmatch(self, arg):
return self.path.fnmatch(arg)
def endswith(self, arg):
return str(self.path).endswith(arg)
def _evaluate(self, kw):
for name, value in kw.items():
invert = False
meth = None
try:
meth = getattr(self, name)
except AttributeError:
if name[:3] == 'not':
invert = True
try:
meth = getattr(self, name[3:])
except AttributeError:
pass
if meth is None:
raise TypeError(
"no %r checker available for %r" % (name, self.path))
try:
if py.code.getrawcode(meth).co_argcount > 1:
if (not meth(value)) ^ invert:
return False
else:
if bool(value) ^ bool(meth()) ^ invert:
return False
except (py.error.ENOENT, py.error.ENOTDIR, py.error.EBUSY):
# EBUSY feels not entirely correct,
# but it's kind of necessary since ENOMEDIUM
# is not accessible in python
for name in self._depend_on_existence:
if name in kw:
if kw.get(name):
return False
name = 'not' + name
if name in kw:
if not kw.get(name):
return False
return True
class NeverRaised(Exception):
pass
class PathBase(object):
""" shared implementation for filesystem path objects."""
Checkers = Checkers
def __div__(self, other):
return self.join(str(other))
__truediv__ = __div__ # py3k
def basename(self):
""" basename part of path. """
return self._getbyspec('basename')[0]
basename = property(basename, None, None, basename.__doc__)
def dirname(self):
""" dirname part of path. """
return self._getbyspec('dirname')[0]
dirname = property(dirname, None, None, dirname.__doc__)
def purebasename(self):
""" pure base name of the path."""
return self._getbyspec('purebasename')[0]
purebasename = property(purebasename, None, None, purebasename.__doc__)
def ext(self):
""" extension of the path (including the '.')."""
return self._getbyspec('ext')[0]
ext = property(ext, None, None, ext.__doc__)
def dirpath(self, *args, **kwargs):
""" return the directory path joined with any given path arguments. """
return self.new(basename='').join(*args, **kwargs)
def read_binary(self):
""" read and return a bytestring from reading the path. """
with self.open('rb') as f:
return f.read()
def read_text(self, encoding):
""" read and return a Unicode string from reading the path. """
with self.open("r", encoding=encoding) as f:
return f.read()
def read(self, mode='r'):
""" read and return a bytestring from reading the path. """
with self.open(mode) as f:
return f.read()
def readlines(self, cr=1):
""" read and return a list of lines from the path. if cr is False, the
newline will be removed from the end of each line. """
if not cr:
content = self.read('rU')
return content.split('\n')
else:
f = self.open('rU')
try:
return f.readlines()
finally:
f.close()
def load(self):
""" (deprecated) return object unpickled from self.read() """
f = self.open('rb')
try:
return py.error.checked_call(py.std.pickle.load, f)
finally:
f.close()
def move(self, target):
""" move this path to target. """
if target.relto(self):
raise py.error.EINVAL(target,
"cannot move path into a subdirectory of itself")
try:
self.rename(target)
except py.error.EXDEV: # invalid cross-device link
self.copy(target)
self.remove()
def __repr__(self):
""" return a string representation of this path. """
return repr(str(self))
def check(self, **kw):
""" check a path for existence and properties.
Without arguments, return True if the path exists, otherwise False.
valid checkers::
file=1 # is a file
file=0 # is not a file (may not even exist)
dir=1 # is a dir
link=1 # is a link
exists=1 # exists
You can specify multiple checker definitions, for example::
path.check(file=1, link=1) # a link pointing to a file
"""
if not kw:
kw = {'exists' : 1}
return self.Checkers(self)._evaluate(kw)
def fnmatch(self, pattern):
"""return true if the basename/fullname matches the glob-'pattern'.
valid pattern characters::
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any char not in seq
If the pattern contains a path-separator then the full path
is used for pattern matching and a '*' is prepended to the
pattern.
if the pattern doesn't contain a path-separator the pattern
is only matched against the basename.
"""
return FNMatcher(pattern)(self)
def relto(self, relpath):
""" return a string which is the relative part of the path
to the given 'relpath'.
"""
if not isinstance(relpath, (str, PathBase)):
raise TypeError("%r: not a string or path object" %(relpath,))
strrelpath = str(relpath)
if strrelpath and strrelpath[-1] != self.sep:
strrelpath += self.sep
#assert strrelpath[-1] == self.sep
#assert strrelpath[-2] != self.sep
strself = self.strpath
if sys.platform == "win32" or getattr(os, '_name', None) == 'nt':
if os.path.normcase(strself).startswith(
os.path.normcase(strrelpath)):
return strself[len(strrelpath):]
elif strself.startswith(strrelpath):
return strself[len(strrelpath):]
return ""
def ensure_dir(self, *args):
""" ensure the path joined with args is a directory. """
return self.ensure(*args, **{"dir": True})
def bestrelpath(self, dest):
""" return a string which is a relative path from self
(assumed to be a directory) to dest such that
self.join(bestrelpath) == dest and if not such
path can be determined return dest.
"""
try:
if self == dest:
return os.curdir
base = self.common(dest)
if not base: # can be the case on windows
return str(dest)
self2base = self.relto(base)
reldest = dest.relto(base)
if self2base:
n = self2base.count(self.sep) + 1
else:
n = 0
l = [os.pardir] * n
if reldest:
l.append(reldest)
target = dest.sep.join(l)
return target
except AttributeError:
return str(dest)
def exists(self):
return self.check()
def isdir(self):
return self.check(dir=1)
def isfile(self):
return self.check(file=1)
def parts(self, reverse=False):
""" return a root-first list of all ancestor directories
plus the path itself.
"""
current = self
l = [self]
while 1:
last = current
current = current.dirpath()
if last == current:
break
l.append(current)
if not reverse:
l.reverse()
return l
def common(self, other):
""" return the common part shared with the other path
or None if there is no common part.
"""
last = None
for x, y in zip(self.parts(), other.parts()):
if x != y:
return last
last = x
return last
def __add__(self, other):
""" return new path object with 'other' added to the basename"""
return self.new(basename=self.basename+str(other))
def __cmp__(self, other):
""" return sort value (-1, 0, +1). """
try:
return cmp(self.strpath, other.strpath)
except AttributeError:
return cmp(str(self), str(other)) # self.path, other.path)
def __lt__(self, other):
try:
return self.strpath < other.strpath
except AttributeError:
return str(self) < str(other)
def visit(self, fil=None, rec=None, ignore=NeverRaised, bf=False, sort=False):
""" yields all paths below the current one
fil is a filter (glob pattern or callable), if not matching the
path will not be yielded, defaulting to None (everything is
returned)
rec is a filter (glob pattern or callable) that controls whether
a node is descended, defaulting to None
ignore is an Exception class that is ignored when calling listdir()
on any of the paths (by default, all exceptions are reported)
bf if True will cause a breadthfirst search instead of the
default depthfirst. Default: False
sort if True will sort entries within each directory level.
"""
for x in Visitor(fil, rec, ignore, bf, sort).gen(self):
yield x
def _sortlist(self, res, sort):
if sort:
if hasattr(sort, '__call__'):
res.sort(sort)
else:
res.sort()
def samefile(self, other):
""" return True if other refers to the same stat object as self. """
return self.strpath == str(other)
class Visitor:
def __init__(self, fil, rec, ignore, bf, sort):
if isinstance(fil, str):
fil = FNMatcher(fil)
if isinstance(rec, str):
self.rec = FNMatcher(rec)
elif not hasattr(rec, '__call__') and rec:
self.rec = lambda path: True
else:
self.rec = rec
self.fil = fil
self.ignore = ignore
self.breadthfirst = bf
self.optsort = sort and sorted or (lambda x: x)
def gen(self, path):
try:
entries = path.listdir()
except self.ignore:
return
rec = self.rec
dirs = self.optsort([p for p in entries
if p.check(dir=1) and (rec is None or rec(p))])
if not self.breadthfirst:
for subdir in dirs:
for p in self.gen(subdir):
yield p
for p in self.optsort(entries):
if self.fil is None or self.fil(p):
yield p
if self.breadthfirst:
for subdir in dirs:
for p in self.gen(subdir):
yield p
class FNMatcher:
def __init__(self, pattern):
self.pattern = pattern
def __call__(self, path):
pattern = self.pattern
if (pattern.find(path.sep) == -1 and
iswin32 and
pattern.find(posixpath.sep) != -1):
# Running on Windows, the pattern has no Windows path separators,
# and the pattern has one or more Posix path separators. Replace
# the Posix path separators with the Windows path separator.
pattern = pattern.replace(posixpath.sep, path.sep)
if pattern.find(path.sep) == -1:
name = path.basename
else:
name = str(path) # path.strpath # XXX svn?
if not os.path.isabs(pattern):
pattern = '*' + path.sep + pattern
return py.std.fnmatch.fnmatch(name, pattern)
"""
"""
import os, sys, posixpath
import fnmatch
import py
# Moved from local.py.
iswin32 = sys.platform == "win32" or (getattr(os, '_name', False) == 'nt')
try:
from os import fspath
except ImportError:
def fspath(path):
"""
Return the string representation of the path.
If str or bytes is passed in, it is returned unchanged.
This code comes from PEP 519, modified to support earlier versions of
python.
This is required for python < 3.6.
"""
if isinstance(path, (py.builtin.text, py.builtin.bytes)):
return path
# Work from the object's type to match method resolution of other magic
# methods.
path_type = type(path)
try:
return path_type.__fspath__(path)
except AttributeError:
if hasattr(path_type, '__fspath__'):
raise
try:
import pathlib
except ImportError:
pass
else:
if isinstance(path, pathlib.PurePath):
return py.builtin.text(path)
raise TypeError("expected str, bytes or os.PathLike object, not "
+ path_type.__name__)
class Checkers:
_depend_on_existence = 'exists', 'link', 'dir', 'file'
def __init__(self, path):
self.path = path
def dir(self):
raise NotImplementedError
def file(self):
raise NotImplementedError
def dotfile(self):
return self.path.basename.startswith('.')
def ext(self, arg):
if not arg.startswith('.'):
arg = '.' + arg
return self.path.ext == arg
def exists(self):
raise NotImplementedError
def basename(self, arg):
return self.path.basename == arg
def basestarts(self, arg):
return self.path.basename.startswith(arg)
def relto(self, arg):
return self.path.relto(arg)
def fnmatch(self, arg):
return self.path.fnmatch(arg)
def endswith(self, arg):
return str(self.path).endswith(arg)
def _evaluate(self, kw):
for name, value in kw.items():
invert = False
meth = None
try:
meth = getattr(self, name)
except AttributeError:
if name[:3] == 'not':
invert = True
try:
meth = getattr(self, name[3:])
except AttributeError:
pass
if meth is None:
raise TypeError(
"no %r checker available for %r" % (name, self.path))
try:
if py.code.getrawcode(meth).co_argcount > 1:
if (not meth(value)) ^ invert:
return False
else:
if bool(value) ^ bool(meth()) ^ invert:
return False
except (py.error.ENOENT, py.error.ENOTDIR, py.error.EBUSY):
# EBUSY feels not entirely correct,
# but it's kind of necessary since ENOMEDIUM
# is not accessible in python
for name in self._depend_on_existence:
if name in kw:
if kw.get(name):
return False
name = 'not' + name
if name in kw:
if not kw.get(name):
return False
return True
class NeverRaised(Exception):
pass
class PathBase(object):
""" shared implementation for filesystem path objects."""
Checkers = Checkers
def __div__(self, other):
return self.join(fspath(other))
__truediv__ = __div__ # py3k
def basename(self):
""" basename part of path. """
return self._getbyspec('basename')[0]
basename = property(basename, None, None, basename.__doc__)
def dirname(self):
""" dirname part of path. """
return self._getbyspec('dirname')[0]
dirname = property(dirname, None, None, dirname.__doc__)
def purebasename(self):
""" pure base name of the path."""
return self._getbyspec('purebasename')[0]
purebasename = property(purebasename, None, None, purebasename.__doc__)
def ext(self):
""" extension of the path (including the '.')."""
return self._getbyspec('ext')[0]
ext = property(ext, None, None, ext.__doc__)
def dirpath(self, *args, **kwargs):
""" return the directory path joined with any given path arguments. """
return self.new(basename='').join(*args, **kwargs)
def read_binary(self):
""" read and return a bytestring from reading the path. """
with self.open('rb') as f:
return f.read()
def read_text(self, encoding):
""" read and return a Unicode string from reading the path. """
with self.open("r", encoding=encoding) as f:
return f.read()
def read(self, mode='r'):
""" read and return a bytestring from reading the path. """
with self.open(mode) as f:
return f.read()
def readlines(self, cr=1):
""" read and return a list of lines from the path. if cr is False, the
newline will be removed from the end of each line. """
if sys.version_info < (3, ):
mode = 'rU'
else: # python 3 deprecates mode "U" in favor of "newline" option
mode = 'r'
if not cr:
content = self.read(mode)
return content.split('\n')
else:
f = self.open(mode)
try:
return f.readlines()
finally:
f.close()
def load(self):
""" (deprecated) return object unpickled from self.read() """
f = self.open('rb')
try:
return py.error.checked_call(py.std.pickle.load, f)
finally:
f.close()
def move(self, target):
""" move this path to target. """
if target.relto(self):
raise py.error.EINVAL(target,
"cannot move path into a subdirectory of itself")
try:
self.rename(target)
except py.error.EXDEV: # invalid cross-device link
self.copy(target)
self.remove()
def __repr__(self):
""" return a string representation of this path. """
return repr(str(self))
def check(self, **kw):
""" check a path for existence and properties.
Without arguments, return True if the path exists, otherwise False.
valid checkers::
file=1 # is a file
file=0 # is not a file (may not even exist)
dir=1 # is a dir
link=1 # is a link
exists=1 # exists
You can specify multiple checker definitions, for example::
path.check(file=1, link=1) # a link pointing to a file
"""
if not kw:
kw = {'exists' : 1}
return self.Checkers(self)._evaluate(kw)
def fnmatch(self, pattern):
"""return true if the basename/fullname matches the glob-'pattern'.
valid pattern characters::
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any char not in seq
If the pattern contains a path-separator then the full path
is used for pattern matching and a '*' is prepended to the
pattern.
if the pattern doesn't contain a path-separator the pattern
is only matched against the basename.
"""
return FNMatcher(pattern)(self)
def relto(self, relpath):
""" return a string which is the relative part of the path
to the given 'relpath'.
"""
if not isinstance(relpath, (str, PathBase)):
raise TypeError("%r: not a string or path object" %(relpath,))
strrelpath = str(relpath)
if strrelpath and strrelpath[-1] != self.sep:
strrelpath += self.sep
#assert strrelpath[-1] == self.sep
#assert strrelpath[-2] != self.sep
strself = self.strpath
if sys.platform == "win32" or getattr(os, '_name', None) == 'nt':
if os.path.normcase(strself).startswith(
os.path.normcase(strrelpath)):
return strself[len(strrelpath):]
elif strself.startswith(strrelpath):
return strself[len(strrelpath):]
return ""
def ensure_dir(self, *args):
""" ensure the path joined with args is a directory. """
return self.ensure(*args, **{"dir": True})
def bestrelpath(self, dest):
""" return a string which is a relative path from self
(assumed to be a directory) to dest such that
self.join(bestrelpath) == dest; if no such
path can be determined, return dest.
"""
try:
if self == dest:
return os.curdir
base = self.common(dest)
if not base: # can be the case on windows
return str(dest)
self2base = self.relto(base)
reldest = dest.relto(base)
if self2base:
n = self2base.count(self.sep) + 1
else:
n = 0
l = [os.pardir] * n
if reldest:
l.append(reldest)
target = dest.sep.join(l)
return target
except AttributeError:
return str(dest)
def exists(self):
return self.check()
def isdir(self):
return self.check(dir=1)
def isfile(self):
return self.check(file=1)
def parts(self, reverse=False):
""" return a root-first list of all ancestor directories
plus the path itself.
"""
current = self
l = [self]
while 1:
last = current
current = current.dirpath()
if last == current:
break
l.append(current)
if not reverse:
l.reverse()
return l
def common(self, other):
""" return the common part shared with the other path
or None if there is no common part.
"""
last = None
for x, y in zip(self.parts(), other.parts()):
if x != y:
return last
last = x
return last
def __add__(self, other):
""" return new path object with 'other' added to the basename"""
return self.new(basename=self.basename+str(other))
def __cmp__(self, other):
""" return sort value (-1, 0, +1). """
try:
return cmp(self.strpath, other.strpath)
except AttributeError:
return cmp(str(self), str(other)) # self.path, other.path)
def __lt__(self, other):
try:
return self.strpath < other.strpath
except AttributeError:
return str(self) < str(other)
def visit(self, fil=None, rec=None, ignore=NeverRaised, bf=False, sort=False):
""" yields all paths below the current one
fil is a filter (glob pattern or callable), if not matching the
path will not be yielded, defaulting to None (everything is
returned)
rec is a filter (glob pattern or callable) that controls whether
a node is descended, defaulting to None
ignore is an Exception class that is ignored when calling listdir()
on any of the paths (by default, all exceptions are reported)
bf if True will cause a breadth-first search instead of the
default depth-first. Default: False
sort if True will sort entries within each directory level.
"""
for x in Visitor(fil, rec, ignore, bf, sort).gen(self):
yield x
def _sortlist(self, res, sort):
if sort:
if hasattr(sort, '__call__'):
res.sort(sort)
else:
res.sort()
def samefile(self, other):
""" return True if other refers to the same stat object as self. """
return self.strpath == str(other)
def __fspath__(self):
return self.strpath
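(Illustrative sketch, not part of the library: py.path.local in py/_path/local.py derives from PathBase, so the generic API above can be exercised like this on a scratch directory.)

import py

tmp = py.path.local.mkdtemp()           # a concrete PathBase subclass
sub = tmp.ensure("pkg", "mod.py")       # create tmp/pkg/mod.py as a file
assert sub.check(file=1, ext=".py")     # keyword checkers from the Checkers class above
assert sub.check(notdir=1)              # a 'not' prefix inverts a checker
assert sub.relto(tmp) == sub.sep.join(["pkg", "mod.py"])
assert tmp.common(sub) == tmp           # deepest shared ancestor
assert tmp.bestrelpath(sub) == sub.relto(tmp)
tmp.remove()                            # clean up the scratch tree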
class Visitor:
def __init__(self, fil, rec, ignore, bf, sort):
if isinstance(fil, py.builtin._basestring):
fil = FNMatcher(fil)
if isinstance(rec, py.builtin._basestring):
self.rec = FNMatcher(rec)
elif not hasattr(rec, '__call__') and rec:
self.rec = lambda path: True
else:
self.rec = rec
self.fil = fil
self.ignore = ignore
self.breadthfirst = bf
self.optsort = sort and sorted or (lambda x: x)
def gen(self, path):
try:
entries = path.listdir()
except self.ignore:
return
rec = self.rec
dirs = self.optsort([p for p in entries
if p.check(dir=1) and (rec is None or rec(p))])
if not self.breadthfirst:
for subdir in dirs:
for p in self.gen(subdir):
yield p
for p in self.optsort(entries):
if self.fil is None or self.fil(p):
yield p
if self.breadthfirst:
for subdir in dirs:
for p in self.gen(subdir):
yield p
class FNMatcher:
def __init__(self, pattern):
self.pattern = pattern
def __call__(self, path):
pattern = self.pattern
if (pattern.find(path.sep) == -1 and
iswin32 and
pattern.find(posixpath.sep) != -1):
# Running on Windows, the pattern has no Windows path separators,
# and the pattern has one or more Posix path separators. Replace
# the Posix path separators with the Windows path separator.
pattern = pattern.replace(posixpath.sep, path.sep)
if pattern.find(path.sep) == -1:
name = path.basename
else:
name = str(path) # path.strpath # XXX svn?
if not os.path.isabs(pattern):
pattern = '*' + path.sep + pattern
return fnmatch.fnmatch(name, pattern)
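(Illustrative sketch, not part of the library: how FNMatcher-based matching and Visitor-based traversal surface through the public path API, using a scratch directory created on the fly.)

import py

root = py.path.local.mkdtemp()
root.ensure("a", "test_one.py")
root.ensure("a", "data.txt")
root.ensure("b", "test_two.py")
# fnmatch(): a pattern without a path separator is matched against the basename only
assert root.join("a", "test_one.py").fnmatch("test_*.py")
# visit(): 'fil' filters what is yielded, 'rec' controls which directories are entered
names = sorted(p.basename for p in root.visit(fil="test_*.py"))
assert names == ["test_one.py", "test_two.py"]
only_a = [p.basename for p in root.visit(fil="*.py", rec="a")]
assert only_a == ["test_one.py"]
root.remove()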

1841
third_party/python/py/py/_path/local.py (vendored)

Diff not shown because of its large size.

760
third_party/python/py/py/_path/svnurl.py (vendored)

@ -1,380 +1,380 @@
"""
module defining a subversion path object based on the external
command 'svn'. This module aims to work with svn 1.3 and higher
but might also interact well with earlier versions.
"""
import os, sys, time, re
import py
from py import path, process
from py._path import common
from py._path import svnwc as svncommon
from py._path.cacheutil import BuildcostAccessCache, AgingCache
DEBUG=False
class SvnCommandPath(svncommon.SvnPathBase):
""" path implementation that offers access to (possibly remote) subversion
repositories. """
_lsrevcache = BuildcostAccessCache(maxentries=128)
_lsnorevcache = AgingCache(maxentries=1000, maxseconds=60.0)
def __new__(cls, path, rev=None, auth=None):
self = object.__new__(cls)
if isinstance(path, cls):
rev = path.rev
auth = path.auth
path = path.strpath
svncommon.checkbadchars(path)
path = path.rstrip('/')
self.strpath = path
self.rev = rev
self.auth = auth
return self
def __repr__(self):
if self.rev == -1:
return 'svnurl(%r)' % self.strpath
else:
return 'svnurl(%r, %r)' % (self.strpath, self.rev)
def _svnwithrev(self, cmd, *args):
""" execute an svn command, append our own url and revision """
if self.rev is None:
return self._svnwrite(cmd, *args)
else:
args = ['-r', self.rev] + list(args)
return self._svnwrite(cmd, *args)
def _svnwrite(self, cmd, *args):
""" execute an svn command, append our own url """
l = ['svn %s' % cmd]
args = ['"%s"' % self._escape(item) for item in args]
l.extend(args)
l.append('"%s"' % self._encodedurl())
# fixing the locale because we can't otherwise parse
string = " ".join(l)
if DEBUG:
print("execing %s" % string)
out = self._svncmdexecauth(string)
return out
def _svncmdexecauth(self, cmd):
""" execute an svn command 'as is' """
cmd = svncommon.fixlocale() + cmd
if self.auth is not None:
cmd += ' ' + self.auth.makecmdoptions()
return self._cmdexec(cmd)
def _cmdexec(self, cmd):
try:
out = process.cmdexec(cmd)
except py.process.cmdexec.Error:
e = sys.exc_info()[1]
if (e.err.find('File Exists') != -1 or
e.err.find('File already exists') != -1):
raise py.error.EEXIST(self)
raise
return out
def _svnpopenauth(self, cmd):
""" execute an svn command, return a pipe for reading stdin """
cmd = svncommon.fixlocale() + cmd
if self.auth is not None:
cmd += ' ' + self.auth.makecmdoptions()
return self._popen(cmd)
def _popen(self, cmd):
return os.popen(cmd)
def _encodedurl(self):
return self._escape(self.strpath)
def _norev_delentry(self, path):
auth = self.auth and self.auth.makecmdoptions() or None
self._lsnorevcache.delentry((str(path), auth))
def open(self, mode='r'):
""" return an opened file with the given mode. """
if mode not in ("r", "rU",):
raise ValueError("mode %r not supported" % (mode,))
assert self.check(file=1) # svn cat returns an empty file otherwise
if self.rev is None:
return self._svnpopenauth('svn cat "%s"' % (
self._escape(self.strpath), ))
else:
return self._svnpopenauth('svn cat -r %s "%s"' % (
self.rev, self._escape(self.strpath)))
def dirpath(self, *args, **kwargs):
""" return the directory path of the current path joined
with any given path arguments.
"""
l = self.strpath.split(self.sep)
if len(l) < 4:
raise py.error.EINVAL(self, "base is not valid")
elif len(l) == 4:
return self.join(*args, **kwargs)
else:
return self.new(basename='').join(*args, **kwargs)
# modifying methods (cache must be invalidated)
def mkdir(self, *args, **kwargs):
""" create & return the directory joined with args.
pass a 'msg' keyword argument to set the commit message.
"""
commit_msg = kwargs.get('msg', "mkdir by py lib invocation")
createpath = self.join(*args)
createpath._svnwrite('mkdir', '-m', commit_msg)
self._norev_delentry(createpath.dirpath())
return createpath
def copy(self, target, msg='copied by py lib invocation'):
""" copy path to target with checkin message msg."""
if getattr(target, 'rev', None) is not None:
raise py.error.EINVAL(target, "revisions are immutable")
self._svncmdexecauth('svn copy -m "%s" "%s" "%s"' %(msg,
self._escape(self), self._escape(target)))
self._norev_delentry(target.dirpath())
def rename(self, target, msg="renamed by py lib invocation"):
""" rename this path to target with checkin message msg. """
if getattr(self, 'rev', None) is not None:
raise py.error.EINVAL(self, "revisions are immutable")
self._svncmdexecauth('svn move -m "%s" --force "%s" "%s"' %(
msg, self._escape(self), self._escape(target)))
self._norev_delentry(self.dirpath())
self._norev_delentry(self)
def remove(self, rec=1, msg='removed by py lib invocation'):
""" remove a file or directory (or a directory tree if rec=1) with
checkin message msg."""
if self.rev is not None:
raise py.error.EINVAL(self, "revisions are immutable")
self._svncmdexecauth('svn rm -m "%s" "%s"' %(msg, self._escape(self)))
self._norev_delentry(self.dirpath())
def export(self, topath):
""" export to a local path
topath should not exist prior to calling this, returns a
py.path.local instance
"""
topath = py.path.local(topath)
args = ['"%s"' % (self._escape(self),),
'"%s"' % (self._escape(topath),)]
if self.rev is not None:
args = ['-r', str(self.rev)] + args
self._svncmdexecauth('svn export %s' % (' '.join(args),))
return topath
def ensure(self, *args, **kwargs):
""" ensure that an args-joined path exists (by default as
a file). If you specify a keyword argument 'dir=True'
then the path is forced to be a directory path.
"""
if getattr(self, 'rev', None) is not None:
raise py.error.EINVAL(self, "revisions are immutable")
target = self.join(*args)
dir = kwargs.get('dir', 0)
for x in target.parts(reverse=True):
if x.check():
break
else:
raise py.error.ENOENT(target, "has not any valid base!")
if x == target:
if not x.check(dir=dir):
raise dir and py.error.ENOTDIR(x) or py.error.EISDIR(x)
return x
tocreate = target.relto(x)
basename = tocreate.split(self.sep, 1)[0]
tempdir = py.path.local.mkdtemp()
try:
tempdir.ensure(tocreate, dir=dir)
cmd = 'svn import -m "%s" "%s" "%s"' % (
"ensure %s" % self._escape(tocreate),
self._escape(tempdir.join(basename)),
x.join(basename)._encodedurl())
self._svncmdexecauth(cmd)
self._norev_delentry(x)
finally:
tempdir.remove()
return target
# end of modifying methods
def _propget(self, name):
res = self._svnwithrev('propget', name)
return res[:-1] # strip trailing newline
def _proplist(self):
res = self._svnwithrev('proplist')
lines = res.split('\n')
lines = [x.strip() for x in lines[1:]]
return svncommon.PropListDict(self, lines)
def info(self):
""" return an Info structure with svn-provided information. """
parent = self.dirpath()
nameinfo_seq = parent._listdir_nameinfo()
bn = self.basename
for name, info in nameinfo_seq:
if name == bn:
return info
raise py.error.ENOENT(self)
def _listdir_nameinfo(self):
""" return sequence of name-info directory entries of self """
def builder():
try:
res = self._svnwithrev('ls', '-v')
except process.cmdexec.Error:
e = sys.exc_info()[1]
if e.err.find('non-existent in that revision') != -1:
raise py.error.ENOENT(self, e.err)
elif e.err.find("E200009:") != -1:
raise py.error.ENOENT(self, e.err)
elif e.err.find('File not found') != -1:
raise py.error.ENOENT(self, e.err)
elif e.err.find('not part of a repository')!=-1:
raise py.error.ENOENT(self, e.err)
elif e.err.find('Unable to open')!=-1:
raise py.error.ENOENT(self, e.err)
elif e.err.lower().find('method not allowed')!=-1:
raise py.error.EACCES(self, e.err)
raise py.error.Error(e.err)
lines = res.split('\n')
nameinfo_seq = []
for lsline in lines:
if lsline:
info = InfoSvnCommand(lsline)
if info._name != '.': # svn 1.5 produces '.' dirs,
nameinfo_seq.append((info._name, info))
nameinfo_seq.sort()
return nameinfo_seq
auth = self.auth and self.auth.makecmdoptions() or None
if self.rev is not None:
return self._lsrevcache.getorbuild((self.strpath, self.rev, auth),
builder)
else:
return self._lsnorevcache.getorbuild((self.strpath, auth),
builder)
def listdir(self, fil=None, sort=None):
""" list directory contents, possibly filter by the given fil func
and possibly sorted.
"""
if isinstance(fil, str):
fil = common.FNMatcher(fil)
nameinfo_seq = self._listdir_nameinfo()
if len(nameinfo_seq) == 1:
name, info = nameinfo_seq[0]
if name == self.basename and info.kind == 'file':
#if not self.check(dir=1):
raise py.error.ENOTDIR(self)
paths = [self.join(name) for (name, info) in nameinfo_seq]
if fil:
paths = [x for x in paths if fil(x)]
self._sortlist(paths, sort)
return paths
def log(self, rev_start=None, rev_end=1, verbose=False):
""" return a list of LogEntry instances for this path.
rev_start is the starting revision (defaulting to the first one).
rev_end is the last revision (defaulting to HEAD).
if verbose is True, then the LogEntry instances also know which files changed.
"""
assert self.check() #make it simpler for the pipe
rev_start = rev_start is None and "HEAD" or rev_start
rev_end = rev_end is None and "HEAD" or rev_end
if rev_start == "HEAD" and rev_end == 1:
rev_opt = ""
else:
rev_opt = "-r %s:%s" % (rev_start, rev_end)
verbose_opt = verbose and "-v" or ""
xmlpipe = self._svnpopenauth('svn log --xml %s %s "%s"' %
(rev_opt, verbose_opt, self.strpath))
from xml.dom import minidom
tree = minidom.parse(xmlpipe)
result = []
for logentry in filter(None, tree.firstChild.childNodes):
if logentry.nodeType == logentry.ELEMENT_NODE:
result.append(svncommon.LogEntry(logentry))
return result
#01234567890123456789012345678901234567890123467
# 2256 hpk 165 Nov 24 17:55 __init__.py
# XXX spotted by Guido, SVN 1.3.0 has different aligning, breaks the code!!!
# 1312 johnny 1627 May 05 14:32 test_decorators.py
#
class InfoSvnCommand:
# the '0?' part in the middle is an indication of whether the resource is
# locked, see 'svn help ls'
lspattern = re.compile(
r'^ *(?P<rev>\d+) +(?P<author>.+?) +(0? *(?P<size>\d+))? '
'*(?P<date>\w+ +\d{2} +[\d:]+) +(?P<file>.*)$')
def __init__(self, line):
# this is a typical line from 'svn ls http://...'
#_ 1127 jum 0 Jul 13 15:28 branch/
match = self.lspattern.match(line)
data = match.groupdict()
self._name = data['file']
if self._name[-1] == '/':
self._name = self._name[:-1]
self.kind = 'dir'
else:
self.kind = 'file'
#self.has_props = l.pop(0) == 'P'
self.created_rev = int(data['rev'])
self.last_author = data['author']
self.size = data['size'] and int(data['size']) or 0
self.mtime = parse_time_with_missing_year(data['date'])
self.time = self.mtime * 1000000
def __eq__(self, other):
return self.__dict__ == other.__dict__
#____________________________________________________
#
# helper functions
#____________________________________________________
def parse_time_with_missing_year(timestr):
""" analyze the time part from a single line of "svn ls -v"
the svn output doesn't show the year, which makes the 'timestr'
ambiguous.
"""
import calendar
t_now = time.gmtime()
tparts = timestr.split()
month = time.strptime(tparts.pop(0), '%b')[1]
day = time.strptime(tparts.pop(0), '%d')[2]
last = tparts.pop(0) # year or hour:minute
try:
if ":" in last:
raise ValueError()
year = time.strptime(last, '%Y')[0]
hour = minute = 0
except ValueError:
hour, minute = time.strptime(last, '%H:%M')[3:5]
year = t_now[0]
t_result = (year, month, day, hour, minute, 0,0,0,0)
if t_result > t_now:
year -= 1
t_result = (year, month, day, hour, minute, 0,0,0,0)
return calendar.timegm(t_result)
class PathEntry:
def __init__(self, ppart):
self.strpath = ppart.firstChild.nodeValue.encode('UTF-8')
self.action = ppart.getAttribute('action').encode('UTF-8')
if self.action == 'A':
self.copyfrom_path = ppart.getAttribute('copyfrom-path').encode('UTF-8')
if self.copyfrom_path:
self.copyfrom_rev = int(ppart.getAttribute('copyfrom-rev'))
"""
module defining a subversion path object based on the external
command 'svn'. This module aims to work with svn 1.3 and higher
but might also interact well with earlier versions.
"""
import os, sys, time, re
import py
from py import path, process
from py._path import common
from py._path import svnwc as svncommon
from py._path.cacheutil import BuildcostAccessCache, AgingCache
DEBUG=False
class SvnCommandPath(svncommon.SvnPathBase):
""" path implementation that offers access to (possibly remote) subversion
repositories. """
_lsrevcache = BuildcostAccessCache(maxentries=128)
_lsnorevcache = AgingCache(maxentries=1000, maxseconds=60.0)
def __new__(cls, path, rev=None, auth=None):
self = object.__new__(cls)
if isinstance(path, cls):
rev = path.rev
auth = path.auth
path = path.strpath
svncommon.checkbadchars(path)
path = path.rstrip('/')
self.strpath = path
self.rev = rev
self.auth = auth
return self
def __repr__(self):
if self.rev == -1:
return 'svnurl(%r)' % self.strpath
else:
return 'svnurl(%r, %r)' % (self.strpath, self.rev)
def _svnwithrev(self, cmd, *args):
""" execute an svn command, append our own url and revision """
if self.rev is None:
return self._svnwrite(cmd, *args)
else:
args = ['-r', self.rev] + list(args)
return self._svnwrite(cmd, *args)
def _svnwrite(self, cmd, *args):
""" execute an svn command, append our own url """
l = ['svn %s' % cmd]
args = ['"%s"' % self._escape(item) for item in args]
l.extend(args)
l.append('"%s"' % self._encodedurl())
# fixing the locale because we can't otherwise parse
string = " ".join(l)
if DEBUG:
print("execing %s" % string)
out = self._svncmdexecauth(string)
return out
def _svncmdexecauth(self, cmd):
""" execute an svn command 'as is' """
cmd = svncommon.fixlocale() + cmd
if self.auth is not None:
cmd += ' ' + self.auth.makecmdoptions()
return self._cmdexec(cmd)
def _cmdexec(self, cmd):
try:
out = process.cmdexec(cmd)
except py.process.cmdexec.Error:
e = sys.exc_info()[1]
if (e.err.find('File Exists') != -1 or
e.err.find('File already exists') != -1):
raise py.error.EEXIST(self)
raise
return out
def _svnpopenauth(self, cmd):
""" execute an svn command, return a pipe for reading stdin """
cmd = svncommon.fixlocale() + cmd
if self.auth is not None:
cmd += ' ' + self.auth.makecmdoptions()
return self._popen(cmd)
def _popen(self, cmd):
return os.popen(cmd)
def _encodedurl(self):
return self._escape(self.strpath)
def _norev_delentry(self, path):
auth = self.auth and self.auth.makecmdoptions() or None
self._lsnorevcache.delentry((str(path), auth))
def open(self, mode='r'):
""" return an opened file with the given mode. """
if mode not in ("r", "rU",):
raise ValueError("mode %r not supported" % (mode,))
assert self.check(file=1) # svn cat returns an empty file otherwise
if self.rev is None:
return self._svnpopenauth('svn cat "%s"' % (
self._escape(self.strpath), ))
else:
return self._svnpopenauth('svn cat -r %s "%s"' % (
self.rev, self._escape(self.strpath)))
def dirpath(self, *args, **kwargs):
""" return the directory path of the current path joined
with any given path arguments.
"""
l = self.strpath.split(self.sep)
if len(l) < 4:
raise py.error.EINVAL(self, "base is not valid")
elif len(l) == 4:
return self.join(*args, **kwargs)
else:
return self.new(basename='').join(*args, **kwargs)
# modifying methods (cache must be invalidated)
def mkdir(self, *args, **kwargs):
""" create & return the directory joined with args.
pass a 'msg' keyword argument to set the commit message.
"""
commit_msg = kwargs.get('msg', "mkdir by py lib invocation")
createpath = self.join(*args)
createpath._svnwrite('mkdir', '-m', commit_msg)
self._norev_delentry(createpath.dirpath())
return createpath
def copy(self, target, msg='copied by py lib invocation'):
""" copy path to target with checkin message msg."""
if getattr(target, 'rev', None) is not None:
raise py.error.EINVAL(target, "revisions are immutable")
self._svncmdexecauth('svn copy -m "%s" "%s" "%s"' %(msg,
self._escape(self), self._escape(target)))
self._norev_delentry(target.dirpath())
def rename(self, target, msg="renamed by py lib invocation"):
""" rename this path to target with checkin message msg. """
if getattr(self, 'rev', None) is not None:
raise py.error.EINVAL(self, "revisions are immutable")
self._svncmdexecauth('svn move -m "%s" --force "%s" "%s"' %(
msg, self._escape(self), self._escape(target)))
self._norev_delentry(self.dirpath())
self._norev_delentry(self)
def remove(self, rec=1, msg='removed by py lib invocation'):
""" remove a file or directory (or a directory tree if rec=1) with
checkin message msg."""
if self.rev is not None:
raise py.error.EINVAL(self, "revisions are immutable")
self._svncmdexecauth('svn rm -m "%s" "%s"' %(msg, self._escape(self)))
self._norev_delentry(self.dirpath())
def export(self, topath):
""" export to a local path
topath should not exist prior to calling this, returns a
py.path.local instance
"""
topath = py.path.local(topath)
args = ['"%s"' % (self._escape(self),),
'"%s"' % (self._escape(topath),)]
if self.rev is not None:
args = ['-r', str(self.rev)] + args
self._svncmdexecauth('svn export %s' % (' '.join(args),))
return topath
def ensure(self, *args, **kwargs):
""" ensure that an args-joined path exists (by default as
a file). If you specify a keyword argument 'dir=True'
then the path is forced to be a directory path.
"""
if getattr(self, 'rev', None) is not None:
raise py.error.EINVAL(self, "revisions are immutable")
target = self.join(*args)
dir = kwargs.get('dir', 0)
for x in target.parts(reverse=True):
if x.check():
break
else:
raise py.error.ENOENT(target, "has not any valid base!")
if x == target:
if not x.check(dir=dir):
raise dir and py.error.ENOTDIR(x) or py.error.EISDIR(x)
return x
tocreate = target.relto(x)
basename = tocreate.split(self.sep, 1)[0]
tempdir = py.path.local.mkdtemp()
try:
tempdir.ensure(tocreate, dir=dir)
cmd = 'svn import -m "%s" "%s" "%s"' % (
"ensure %s" % self._escape(tocreate),
self._escape(tempdir.join(basename)),
x.join(basename)._encodedurl())
self._svncmdexecauth(cmd)
self._norev_delentry(x)
finally:
tempdir.remove()
return target
# end of modifying methods
def _propget(self, name):
res = self._svnwithrev('propget', name)
return res[:-1] # strip trailing newline
def _proplist(self):
res = self._svnwithrev('proplist')
lines = res.split('\n')
lines = [x.strip() for x in lines[1:]]
return svncommon.PropListDict(self, lines)
def info(self):
""" return an Info structure with svn-provided information. """
parent = self.dirpath()
nameinfo_seq = parent._listdir_nameinfo()
bn = self.basename
for name, info in nameinfo_seq:
if name == bn:
return info
raise py.error.ENOENT(self)
def _listdir_nameinfo(self):
""" return sequence of name-info directory entries of self """
def builder():
try:
res = self._svnwithrev('ls', '-v')
except process.cmdexec.Error:
e = sys.exc_info()[1]
if e.err.find('non-existent in that revision') != -1:
raise py.error.ENOENT(self, e.err)
elif e.err.find("E200009:") != -1:
raise py.error.ENOENT(self, e.err)
elif e.err.find('File not found') != -1:
raise py.error.ENOENT(self, e.err)
elif e.err.find('not part of a repository')!=-1:
raise py.error.ENOENT(self, e.err)
elif e.err.find('Unable to open')!=-1:
raise py.error.ENOENT(self, e.err)
elif e.err.lower().find('method not allowed')!=-1:
raise py.error.EACCES(self, e.err)
raise py.error.Error(e.err)
lines = res.split('\n')
nameinfo_seq = []
for lsline in lines:
if lsline:
info = InfoSvnCommand(lsline)
if info._name != '.': # svn 1.5 produces '.' dirs,
nameinfo_seq.append((info._name, info))
nameinfo_seq.sort()
return nameinfo_seq
auth = self.auth and self.auth.makecmdoptions() or None
if self.rev is not None:
return self._lsrevcache.getorbuild((self.strpath, self.rev, auth),
builder)
else:
return self._lsnorevcache.getorbuild((self.strpath, auth),
builder)
def listdir(self, fil=None, sort=None):
""" list directory contents, possibly filter by the given fil func
and possibly sorted.
"""
if isinstance(fil, str):
fil = common.FNMatcher(fil)
nameinfo_seq = self._listdir_nameinfo()
if len(nameinfo_seq) == 1:
name, info = nameinfo_seq[0]
if name == self.basename and info.kind == 'file':
#if not self.check(dir=1):
raise py.error.ENOTDIR(self)
paths = [self.join(name) for (name, info) in nameinfo_seq]
if fil:
paths = [x for x in paths if fil(x)]
self._sortlist(paths, sort)
return paths
def log(self, rev_start=None, rev_end=1, verbose=False):
""" return a list of LogEntry instances for this path.
rev_start is the starting revision (defaulting to the first one).
rev_end is the last revision (defaulting to HEAD).
if verbose is True, then the LogEntry instances also know which files changed.
"""
assert self.check() #make it simpler for the pipe
rev_start = rev_start is None and "HEAD" or rev_start
rev_end = rev_end is None and "HEAD" or rev_end
if rev_start == "HEAD" and rev_end == 1:
rev_opt = ""
else:
rev_opt = "-r %s:%s" % (rev_start, rev_end)
verbose_opt = verbose and "-v" or ""
xmlpipe = self._svnpopenauth('svn log --xml %s %s "%s"' %
(rev_opt, verbose_opt, self.strpath))
from xml.dom import minidom
tree = minidom.parse(xmlpipe)
result = []
for logentry in filter(None, tree.firstChild.childNodes):
if logentry.nodeType == logentry.ELEMENT_NODE:
result.append(svncommon.LogEntry(logentry))
return result
#01234567890123456789012345678901234567890123467
# 2256 hpk 165 Nov 24 17:55 __init__.py
# XXX spotted by Guido, SVN 1.3.0 has different aligning, breaks the code!!!
# 1312 johnny 1627 May 05 14:32 test_decorators.py
#
class InfoSvnCommand:
# the '0?' part in the middle is an indication of whether the resource is
# locked, see 'svn help ls'
lspattern = re.compile(
r'^ *(?P<rev>\d+) +(?P<author>.+?) +(0? *(?P<size>\d+))? '
r'*(?P<date>\w+ +\d{2} +[\d:]+) +(?P<file>.*)$')
def __init__(self, line):
# this is a typical line from 'svn ls http://...'
#_ 1127 jum 0 Jul 13 15:28 branch/
match = self.lspattern.match(line)
data = match.groupdict()
self._name = data['file']
if self._name[-1] == '/':
self._name = self._name[:-1]
self.kind = 'dir'
else:
self.kind = 'file'
#self.has_props = l.pop(0) == 'P'
self.created_rev = int(data['rev'])
self.last_author = data['author']
self.size = data['size'] and int(data['size']) or 0
self.mtime = parse_time_with_missing_year(data['date'])
self.time = self.mtime * 1000000
def __eq__(self, other):
return self.__dict__ == other.__dict__
#____________________________________________________
#
# helper functions
#____________________________________________________
def parse_time_with_missing_year(timestr):
""" analyze the time part from a single line of "svn ls -v"
the svn output doesn't show the year, which makes the 'timestr'
ambiguous.
"""
import calendar
t_now = time.gmtime()
tparts = timestr.split()
month = time.strptime(tparts.pop(0), '%b')[1]
day = time.strptime(tparts.pop(0), '%d')[2]
last = tparts.pop(0) # year or hour:minute
try:
if ":" in last:
raise ValueError()
year = time.strptime(last, '%Y')[0]
hour = minute = 0
except ValueError:
hour, minute = time.strptime(last, '%H:%M')[3:5]
year = t_now[0]
t_result = (year, month, day, hour, minute, 0,0,0,0)
if t_result > t_now:
year -= 1
t_result = (year, month, day, hour, minute, 0,0,0,0)
return calendar.timegm(t_result)
class PathEntry:
def __init__(self, ppart):
self.strpath = ppart.firstChild.nodeValue.encode('UTF-8')
self.action = ppart.getAttribute('action').encode('UTF-8')
if self.action == 'A':
self.copyfrom_path = ppart.getAttribute('copyfrom-path').encode('UTF-8')
if self.copyfrom_path:
self.copyfrom_rev = int(ppart.getAttribute('copyfrom-rev'))
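(Illustrative sketch, not part of the library: the svnurl API above shells out to the `svn` binary, so it only works with Subversion installed and a reachable repository; the URL below is a made-up placeholder.)

import py

url = py.path.svnurl("https://svn.example.org/repo/trunk", rev=42)
if url.check(dir=1):                    # runs 'svn ls' behind the scenes
    for entry in url.listdir("*.py"):   # string filters go through FNMatcher
        print(entry.basename, entry.info().size)
    for logentry in url.log(verbose=True):
        print(logentry.rev, logentry.author)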

2480
third_party/python/py/py/_path/svnwc.py (vendored)

Diff not shown because of its large size.


@ -1 +1 @@
""" high-level sub-process handling """
""" high-level sub-process handling """

98
third_party/python/py/py/_process/cmdexec.py (vendored)

@ -1,49 +1,49 @@
import sys
import subprocess
import py
from subprocess import Popen, PIPE
def cmdexec(cmd):
""" return unicode output of executing 'cmd' in a separate process.
raise a cmdexec.Error exception if the command failed.
the exception will provide an 'err' attribute containing
the error-output from the command.
if the subprocess module does not provide proper encoding/unicode strings,
sys.getdefaultencoding() will be used; if that does not exist, 'UTF-8'.
"""
process = subprocess.Popen(cmd, shell=True,
universal_newlines=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
if sys.version_info[0] < 3: # on py3 we get unicode strings, on py2 not
try:
default_encoding = sys.getdefaultencoding() # jython may not have it
except AttributeError:
default_encoding = sys.stdout.encoding or 'UTF-8'
out = unicode(out, process.stdout.encoding or default_encoding)
err = unicode(err, process.stderr.encoding or default_encoding)
status = process.poll()
if status:
raise ExecutionFailed(status, status, cmd, out, err)
return out
class ExecutionFailed(py.error.Error):
def __init__(self, status, systemstatus, cmd, out, err):
Exception.__init__(self)
self.status = status
self.systemstatus = systemstatus
self.cmd = cmd
self.err = err
self.out = out
def __str__(self):
return "ExecutionFailed: %d %s\n%s" %(self.status, self.cmd, self.err)
# export the exception under the name 'py.process.cmdexec.Error'
cmdexec.Error = ExecutionFailed
try:
ExecutionFailed.__module__ = 'py.process.cmdexec'
ExecutionFailed.__name__ = 'Error'
except (AttributeError, TypeError):
pass
import sys
import subprocess
import py
from subprocess import Popen, PIPE
def cmdexec(cmd):
""" return unicode output of executing 'cmd' in a separate process.
raise a cmdexec.Error exception if the command failed.
the exception will provide an 'err' attribute containing
the error-output from the command.
if the subprocess module does not provide proper encoding/unicode strings,
sys.getdefaultencoding() will be used; if that does not exist, 'UTF-8'.
"""
process = subprocess.Popen(cmd, shell=True,
universal_newlines=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
if sys.version_info[0] < 3: # on py3 we get unicode strings, on py2 not
try:
default_encoding = sys.getdefaultencoding() # jython may not have it
except AttributeError:
default_encoding = sys.stdout.encoding or 'UTF-8'
out = unicode(out, process.stdout.encoding or default_encoding)
err = unicode(err, process.stderr.encoding or default_encoding)
status = process.poll()
if status:
raise ExecutionFailed(status, status, cmd, out, err)
return out
class ExecutionFailed(py.error.Error):
def __init__(self, status, systemstatus, cmd, out, err):
Exception.__init__(self)
self.status = status
self.systemstatus = systemstatus
self.cmd = cmd
self.err = err
self.out = out
def __str__(self):
return "ExecutionFailed: %d %s\n%s" %(self.status, self.cmd, self.err)
# export the exception under the name 'py.process.cmdexec.Error'
cmdexec.Error = ExecutionFailed
try:
ExecutionFailed.__module__ = 'py.process.cmdexec'
ExecutionFailed.__name__ = 'Error'
except (AttributeError, TypeError):
pass
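(Illustrative sketch, not part of the library: the success and failure paths of cmdexec(), assuming a POSIX shell where `echo` and `false` are available.)

import py

out = py.process.cmdexec("echo hello")      # returns the decoded stdout
assert out.strip() == "hello"
try:
    py.process.cmdexec("false")             # non-zero exit status
except py.process.cmdexec.Error as exc:     # alias for ExecutionFailed
    print(exc.status, exc.err)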


@ -1,120 +1,120 @@
"""
ForkedFunc provides a way to run a function in a forked process
and get at its return value, stdout and stderr output as well
as signals and exit statuses.
"""
import py
import os
import sys
import marshal
def get_unbuffered_io(fd, filename):
f = open(str(filename), "w")
if fd != f.fileno():
os.dup2(f.fileno(), fd)
class AutoFlush:
def write(self, data):
f.write(data)
f.flush()
def __getattr__(self, name):
return getattr(f, name)
return AutoFlush()
class ForkedFunc:
EXITSTATUS_EXCEPTION = 3
def __init__(self, fun, args=None, kwargs=None, nice_level=0,
child_on_start=None, child_on_exit=None):
if args is None:
args = []
if kwargs is None:
kwargs = {}
self.fun = fun
self.args = args
self.kwargs = kwargs
self.tempdir = tempdir = py.path.local.mkdtemp()
self.RETVAL = tempdir.ensure('retval')
self.STDOUT = tempdir.ensure('stdout')
self.STDERR = tempdir.ensure('stderr')
pid = os.fork()
if pid: # in parent process
self.pid = pid
else: # in child process
self.pid = None
self._child(nice_level, child_on_start, child_on_exit)
def _child(self, nice_level, child_on_start, child_on_exit):
# right now we need to call a function, but first we need to
# map all IO that might happen
sys.stdout = stdout = get_unbuffered_io(1, self.STDOUT)
sys.stderr = stderr = get_unbuffered_io(2, self.STDERR)
retvalf = self.RETVAL.open("wb")
EXITSTATUS = 0
try:
if nice_level:
os.nice(nice_level)
try:
if child_on_start is not None:
child_on_start()
retval = self.fun(*self.args, **self.kwargs)
retvalf.write(marshal.dumps(retval))
if child_on_exit is not None:
child_on_exit()
except:
excinfo = py.code.ExceptionInfo()
stderr.write(str(excinfo._getreprcrash()))
EXITSTATUS = self.EXITSTATUS_EXCEPTION
finally:
stdout.close()
stderr.close()
retvalf.close()
os.close(1)
os.close(2)
os._exit(EXITSTATUS)
def waitfinish(self, waiter=os.waitpid):
pid, systemstatus = waiter(self.pid, 0)
if systemstatus:
if os.WIFSIGNALED(systemstatus):
exitstatus = os.WTERMSIG(systemstatus) + 128
else:
exitstatus = os.WEXITSTATUS(systemstatus)
else:
exitstatus = 0
signal = systemstatus & 0x7f
if not exitstatus and not signal:
retval = self.RETVAL.open('rb')
try:
retval_data = retval.read()
finally:
retval.close()
retval = marshal.loads(retval_data)
else:
retval = None
stdout = self.STDOUT.read()
stderr = self.STDERR.read()
self._removetemp()
return Result(exitstatus, signal, retval, stdout, stderr)
def _removetemp(self):
if self.tempdir.check():
self.tempdir.remove()
def __del__(self):
if self.pid is not None: # only clean up in main process
self._removetemp()
class Result(object):
def __init__(self, exitstatus, signal, retval, stdout, stderr):
self.exitstatus = exitstatus
self.signal = signal
self.retval = retval
self.out = stdout
self.err = stderr
"""
ForkedFunc provides a way to run a function in a forked process
and get at its return value, stdout and stderr output as well
as signals and exit statuses.
"""
import py
import os
import sys
import marshal
def get_unbuffered_io(fd, filename):
f = open(str(filename), "w")
if fd != f.fileno():
os.dup2(f.fileno(), fd)
class AutoFlush:
def write(self, data):
f.write(data)
f.flush()
def __getattr__(self, name):
return getattr(f, name)
return AutoFlush()
class ForkedFunc:
EXITSTATUS_EXCEPTION = 3
def __init__(self, fun, args=None, kwargs=None, nice_level=0,
child_on_start=None, child_on_exit=None):
if args is None:
args = []
if kwargs is None:
kwargs = {}
self.fun = fun
self.args = args
self.kwargs = kwargs
self.tempdir = tempdir = py.path.local.mkdtemp()
self.RETVAL = tempdir.ensure('retval')
self.STDOUT = tempdir.ensure('stdout')
self.STDERR = tempdir.ensure('stderr')
pid = os.fork()
if pid: # in parent process
self.pid = pid
else: # in child process
self.pid = None
self._child(nice_level, child_on_start, child_on_exit)
def _child(self, nice_level, child_on_start, child_on_exit):
# right now we need to call a function, but first we need to
# map all IO that might happen
sys.stdout = stdout = get_unbuffered_io(1, self.STDOUT)
sys.stderr = stderr = get_unbuffered_io(2, self.STDERR)
retvalf = self.RETVAL.open("wb")
EXITSTATUS = 0
try:
if nice_level:
os.nice(nice_level)
try:
if child_on_start is not None:
child_on_start()
retval = self.fun(*self.args, **self.kwargs)
retvalf.write(marshal.dumps(retval))
if child_on_exit is not None:
child_on_exit()
except:
excinfo = py.code.ExceptionInfo()
stderr.write(str(excinfo._getreprcrash()))
EXITSTATUS = self.EXITSTATUS_EXCEPTION
finally:
stdout.close()
stderr.close()
retvalf.close()
os.close(1)
os.close(2)
os._exit(EXITSTATUS)
def waitfinish(self, waiter=os.waitpid):
pid, systemstatus = waiter(self.pid, 0)
if systemstatus:
if os.WIFSIGNALED(systemstatus):
exitstatus = os.WTERMSIG(systemstatus) + 128
else:
exitstatus = os.WEXITSTATUS(systemstatus)
else:
exitstatus = 0
signal = systemstatus & 0x7f
if not exitstatus and not signal:
retval = self.RETVAL.open('rb')
try:
retval_data = retval.read()
finally:
retval.close()
retval = marshal.loads(retval_data)
else:
retval = None
stdout = self.STDOUT.read()
stderr = self.STDERR.read()
self._removetemp()
return Result(exitstatus, signal, retval, stdout, stderr)
def _removetemp(self):
if self.tempdir.check():
self.tempdir.remove()
def __del__(self):
if self.pid is not None: # only clean up in main process
self._removetemp()
class Result(object):
def __init__(self, exitstatus, signal, retval, stdout, stderr):
self.exitstatus = exitstatus
self.signal = signal
self.retval = retval
self.out = stdout
self.err = stderr
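(Illustrative sketch, not part of the library: driving ForkedFunc from the parent process; it relies on os.fork(), so this only works on POSIX systems.)

import py

def compute():
    print("working in the child")
    return {"answer": 42}

ff = py.process.ForkedFunc(compute)
result = ff.waitfinish()
assert result.exitstatus == 0 and result.signal == 0
assert result.retval == {"answer": 42}          # marshalled back from the child
assert "working in the child" in result.out     # captured child stdout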

46
third_party/python/py/py/_process/killproc.py (vendored)

@ -1,23 +1,23 @@
import py
import os, sys
if sys.platform == "win32" or getattr(os, '_name', '') == 'nt':
try:
import ctypes
except ImportError:
def dokill(pid):
py.process.cmdexec("taskkill /F /PID %d" %(pid,))
else:
def dokill(pid):
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(
PROCESS_TERMINATE, False, pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
else:
def dokill(pid):
os.kill(pid, 15)
def kill(pid):
""" kill process by id. """
dokill(pid)
import py
import os, sys
if sys.platform == "win32" or getattr(os, '_name', '') == 'nt':
try:
import ctypes
except ImportError:
def dokill(pid):
py.process.cmdexec("taskkill /F /PID %d" %(pid,))
else:
def dokill(pid):
PROCESS_TERMINATE = 1
handle = ctypes.windll.kernel32.OpenProcess(
PROCESS_TERMINATE, False, pid)
ctypes.windll.kernel32.TerminateProcess(handle, -1)
ctypes.windll.kernel32.CloseHandle(handle)
else:
def dokill(pid):
os.kill(pid, 15)
def kill(pid):
""" kill process by id. """
dokill(pid)
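(Illustrative sketch, not part of the library: py.process.kill() terminates a process by pid; a throwaway child process is spawned here just to have something to kill.)

import subprocess
import sys
import py

proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
py.process.kill(proc.pid)   # SIGTERM on POSIX, TerminateProcess/taskkill on Windows
proc.wait()                 # reap the terminated child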

36
third_party/python/py/py/_std.py (vendored)

@ -1,18 +1,18 @@
import sys
class Std(object):
""" makes top-level python modules available as an attribute,
importing them on first access.
"""
def __init__(self):
self.__dict__ = sys.modules
def __getattr__(self, name):
try:
m = __import__(name)
except ImportError:
raise AttributeError("py.std: could not import %s" % name)
return m
std = Std()
import sys
class Std(object):
""" makes top-level python modules available as an attribute,
importing them on first access.
"""
def __init__(self):
self.__dict__ = sys.modules
def __getattr__(self, name):
try:
m = __import__(name)
except ImportError:
raise AttributeError("py.std: could not import %s" % name)
return m
std = Std()
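(Illustrative sketch, not part of the library: py.std exposes any importable top-level module as an attribute; note that py.std is deprecated in favour of plain imports.)

import os
import py

assert py.std.os.getcwd() == os.getcwd()
try:
    py.std.no_such_module
except AttributeError:
    pass    # unimportable names surface as AttributeError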

508
third_party/python/py/py/_xmlgen.py (vendored)

@ -1,253 +1,255 @@
"""
module for generating and serializing xml and html structures
by using simple python objects.
(c) holger krekel, holger at merlinux eu. 2009
"""
import sys, re
if sys.version_info >= (3,0):
def u(s):
return s
def unicode(x, errors=None):
if hasattr(x, '__unicode__'):
return x.__unicode__()
return str(x)
else:
def u(s):
return unicode(s)
unicode = unicode
class NamespaceMetaclass(type):
def __getattr__(self, name):
if name[:1] == '_':
raise AttributeError(name)
if self == Namespace:
raise ValueError("Namespace class is abstract")
tagspec = self.__tagspec__
if tagspec is not None and name not in tagspec:
raise AttributeError(name)
classattr = {}
if self.__stickyname__:
classattr['xmlname'] = name
cls = type(name, (self.__tagclass__,), classattr)
setattr(self, name, cls)
return cls
class Tag(list):
class Attr(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __init__(self, *args, **kwargs):
super(Tag, self).__init__(args)
self.attr = self.Attr(**kwargs)
def __unicode__(self):
return self.unicode(indent=0)
__str__ = __unicode__
def unicode(self, indent=2):
l = []
SimpleUnicodeVisitor(l.append, indent).visit(self)
return u("").join(l)
def __repr__(self):
name = self.__class__.__name__
return "<%r tag object %d>" % (name, id(self))
Namespace = NamespaceMetaclass('Namespace', (object, ), {
'__tagspec__': None,
'__tagclass__': Tag,
'__stickyname__': False,
})
class HtmlTag(Tag):
def unicode(self, indent=2):
l = []
HtmlVisitor(l.append, indent, shortempty=False).visit(self)
return u("").join(l)
# exported plain html namespace
class html(Namespace):
__tagclass__ = HtmlTag
__stickyname__ = True
__tagspec__ = dict([(x,1) for x in (
'a,abbr,acronym,address,applet,area,b,bdo,big,blink,'
'blockquote,body,br,button,caption,center,cite,code,col,'
'colgroup,comment,dd,del,dfn,dir,div,dl,dt,em,embed,'
'fieldset,font,form,frameset,h1,h2,h3,h4,h5,h6,head,html,'
'i,iframe,img,input,ins,kbd,label,legend,li,link,listing,'
'map,marquee,menu,meta,multicol,nobr,noembed,noframes,'
'noscript,object,ol,optgroup,option,p,pre,q,s,script,'
'select,small,span,strike,strong,style,sub,sup,table,'
'tbody,td,textarea,tfoot,th,thead,title,tr,tt,u,ul,xmp,'
'base,basefont,frame,hr,isindex,param,samp,var'
).split(',') if x])
class Style(object):
def __init__(self, **kw):
for x, y in kw.items():
x = x.replace('_', '-')
setattr(self, x, y)
class raw(object):
"""just a box that can contain a unicode string that will be
included directly in the output"""
def __init__(self, uniobj):
self.uniobj = uniobj
class SimpleUnicodeVisitor(object):
""" recursive visitor to write unicode. """
def __init__(self, write, indent=0, curindent=0, shortempty=True):
self.write = write
self.cache = {}
self.visited = {} # for detection of recursion
self.indent = indent
self.curindent = curindent
self.parents = []
self.shortempty = shortempty # short empty tags or not
def visit(self, node):
""" dispatcher on node's class/bases name. """
cls = node.__class__
try:
visitmethod = self.cache[cls]
except KeyError:
for subclass in cls.__mro__:
visitmethod = getattr(self, subclass.__name__, None)
if visitmethod is not None:
break
else:
visitmethod = self.__object
self.cache[cls] = visitmethod
visitmethod(node)
# the default fallback handler is marked private
# to avoid clashes with the tag name object
def __object(self, obj):
#self.write(obj)
self.write(escape(unicode(obj)))
def raw(self, obj):
self.write(obj.uniobj)
def list(self, obj):
assert id(obj) not in self.visited
self.visited[id(obj)] = 1
for elem in obj:
self.visit(elem)
def Tag(self, tag):
assert id(tag) not in self.visited
try:
tag.parent = self.parents[-1]
except IndexError:
tag.parent = None
self.visited[id(tag)] = 1
tagname = getattr(tag, 'xmlname', tag.__class__.__name__)
if self.curindent and not self._isinline(tagname):
self.write("\n" + u(' ') * self.curindent)
if tag:
self.curindent += self.indent
self.write(u('<%s%s>') % (tagname, self.attributes(tag)))
self.parents.append(tag)
for x in tag:
self.visit(x)
self.parents.pop()
self.write(u('</%s>') % tagname)
self.curindent -= self.indent
else:
nameattr = tagname+self.attributes(tag)
if self._issingleton(tagname):
self.write(u('<%s/>') % (nameattr,))
else:
self.write(u('<%s></%s>') % (nameattr, tagname))
def attributes(self, tag):
# serialize attributes
attrlist = dir(tag.attr)
attrlist.sort()
l = []
for name in attrlist:
res = self.repr_attribute(tag.attr, name)
if res is not None:
l.append(res)
l.extend(self.getstyle(tag))
return u("").join(l)
def repr_attribute(self, attrs, name):
if name[:2] != '__':
value = getattr(attrs, name)
if name.endswith('_'):
name = name[:-1]
if isinstance(value, raw):
insert = value.uniobj
else:
insert = escape(unicode(value))
return ' %s="%s"' % (name, insert)
def getstyle(self, tag):
""" return attribute list suitable for styling. """
try:
styledict = tag.style.__dict__
except AttributeError:
return []
else:
stylelist = [x+': ' + y for x,y in styledict.items()]
return [u(' style="%s"') % u('; ').join(stylelist)]
def _issingleton(self, tagname):
"""can (and will) be overridden in subclasses"""
return self.shortempty
def _isinline(self, tagname):
"""can (and will) be overridden in subclasses"""
return False
class HtmlVisitor(SimpleUnicodeVisitor):
single = dict([(x, 1) for x in
('br,img,area,param,col,hr,meta,link,base,'
'input,frame').split(',')])
inline = dict([(x, 1) for x in
('a abbr acronym b basefont bdo big br cite code dfn em font '
'i img input kbd label q s samp select small span strike '
'strong sub sup textarea tt u var'.split(' '))])
def repr_attribute(self, attrs, name):
if name == 'class_':
value = getattr(attrs, name)
if value is None:
return
return super(HtmlVisitor, self).repr_attribute(attrs, name)
def _issingleton(self, tagname):
return tagname in self.single
def _isinline(self, tagname):
return tagname in self.inline
class _escape:
def __init__(self):
self.escape = {
u('"') : u('&quot;'), u('<') : u('&lt;'), u('>') : u('&gt;'),
u('&') : u('&amp;'), u("'") : u('&apos;'),
}
self.charef_rex = re.compile(u("|").join(self.escape.keys()))
def _replacer(self, match):
return self.escape[match.group(0)]
def __call__(self, ustring):
""" xml-escape the given unicode string. """
try:
ustring = unicode(ustring)
except UnicodeDecodeError:
ustring = unicode(ustring, 'utf-8', errors='replace')
return self.charef_rex.sub(self._replacer, ustring)
escape = _escape()
"""
module for generating and serializing xml and html structures
by using simple python objects.
(c) holger krekel, holger at merlinux eu. 2009
"""
import sys, re
if sys.version_info >= (3,0):
def u(s):
return s
def unicode(x, errors=None):
if hasattr(x, '__unicode__'):
return x.__unicode__()
return str(x)
else:
def u(s):
return unicode(s)
unicode = unicode
class NamespaceMetaclass(type):
def __getattr__(self, name):
if name[:1] == '_':
raise AttributeError(name)
if self == Namespace:
raise ValueError("Namespace class is abstract")
tagspec = self.__tagspec__
if tagspec is not None and name not in tagspec:
raise AttributeError(name)
classattr = {}
if self.__stickyname__:
classattr['xmlname'] = name
cls = type(name, (self.__tagclass__,), classattr)
setattr(self, name, cls)
return cls
class Tag(list):
class Attr(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __init__(self, *args, **kwargs):
super(Tag, self).__init__(args)
self.attr = self.Attr(**kwargs)
def __unicode__(self):
return self.unicode(indent=0)
__str__ = __unicode__
def unicode(self, indent=2):
l = []
SimpleUnicodeVisitor(l.append, indent).visit(self)
return u("").join(l)
def __repr__(self):
name = self.__class__.__name__
return "<%r tag object %d>" % (name, id(self))
Namespace = NamespaceMetaclass('Namespace', (object, ), {
'__tagspec__': None,
'__tagclass__': Tag,
'__stickyname__': False,
})
class HtmlTag(Tag):
def unicode(self, indent=2):
l = []
HtmlVisitor(l.append, indent, shortempty=False).visit(self)
return u("").join(l)
# exported plain html namespace
class html(Namespace):
__tagclass__ = HtmlTag
__stickyname__ = True
__tagspec__ = dict([(x,1) for x in (
'a,abbr,acronym,address,applet,area,article,aside,audio,b,'
'base,basefont,bdi,bdo,big,blink,blockquote,body,br,button,'
'canvas,caption,center,cite,code,col,colgroup,command,comment,'
'datalist,dd,del,details,dfn,dir,div,dl,dt,em,embed,'
'fieldset,figcaption,figure,footer,font,form,frame,frameset,h1,'
'h2,h3,h4,h5,h6,head,header,hgroup,hr,html,i,iframe,img,input,'
'ins,isindex,kbd,keygen,label,legend,li,link,listing,map,mark,'
'marquee,menu,meta,meter,multicol,nav,nobr,noembed,noframes,'
'noscript,object,ol,optgroup,option,output,p,param,pre,progress,'
'q,rp,rt,ruby,s,samp,script,section,select,small,source,span,'
'strike,strong,style,sub,summary,sup,table,tbody,td,textarea,'
'tfoot,th,thead,time,title,tr,track,tt,u,ul,xmp,var,video,wbr'
).split(',') if x])
class Style(object):
def __init__(self, **kw):
for x, y in kw.items():
x = x.replace('_', '-')
setattr(self, x, y)
class raw(object):
"""just a box that can contain a unicode string that will be
included directly in the output"""
def __init__(self, uniobj):
self.uniobj = uniobj
class SimpleUnicodeVisitor(object):
""" recursive visitor to write unicode. """
def __init__(self, write, indent=0, curindent=0, shortempty=True):
self.write = write
self.cache = {}
self.visited = {} # for detection of recursion
self.indent = indent
self.curindent = curindent
self.parents = []
self.shortempty = shortempty # short empty tags or not
def visit(self, node):
""" dispatcher on node's class/bases name. """
cls = node.__class__
try:
visitmethod = self.cache[cls]
except KeyError:
for subclass in cls.__mro__:
visitmethod = getattr(self, subclass.__name__, None)
if visitmethod is not None:
break
else:
visitmethod = self.__object
self.cache[cls] = visitmethod
visitmethod(node)
# the default fallback handler is marked private
# to avoid clashes with the tag name object
def __object(self, obj):
#self.write(obj)
self.write(escape(unicode(obj)))
def raw(self, obj):
self.write(obj.uniobj)
def list(self, obj):
assert id(obj) not in self.visited
self.visited[id(obj)] = 1
for elem in obj:
self.visit(elem)
def Tag(self, tag):
assert id(tag) not in self.visited
try:
tag.parent = self.parents[-1]
except IndexError:
tag.parent = None
self.visited[id(tag)] = 1
tagname = getattr(tag, 'xmlname', tag.__class__.__name__)
if self.curindent and not self._isinline(tagname):
self.write("\n" + u(' ') * self.curindent)
if tag:
self.curindent += self.indent
self.write(u('<%s%s>') % (tagname, self.attributes(tag)))
self.parents.append(tag)
for x in tag:
self.visit(x)
self.parents.pop()
self.write(u('</%s>') % tagname)
self.curindent -= self.indent
else:
nameattr = tagname+self.attributes(tag)
if self._issingleton(tagname):
self.write(u('<%s/>') % (nameattr,))
else:
self.write(u('<%s></%s>') % (nameattr, tagname))
def attributes(self, tag):
# serialize attributes
attrlist = dir(tag.attr)
attrlist.sort()
l = []
for name in attrlist:
res = self.repr_attribute(tag.attr, name)
if res is not None:
l.append(res)
l.extend(self.getstyle(tag))
return u("").join(l)
def repr_attribute(self, attrs, name):
if name[:2] != '__':
value = getattr(attrs, name)
if name.endswith('_'):
name = name[:-1]
if isinstance(value, raw):
insert = value.uniobj
else:
insert = escape(unicode(value))
return ' %s="%s"' % (name, insert)
def getstyle(self, tag):
""" return attribute list suitable for styling. """
try:
styledict = tag.style.__dict__
except AttributeError:
return []
else:
stylelist = [x+': ' + y for x,y in styledict.items()]
return [u(' style="%s"') % u('; ').join(stylelist)]
def _issingleton(self, tagname):
"""can (and will) be overridden in subclasses"""
return self.shortempty
def _isinline(self, tagname):
"""can (and will) be overridden in subclasses"""
return False
class HtmlVisitor(SimpleUnicodeVisitor):
single = dict([(x, 1) for x in
('br,img,area,param,col,hr,meta,link,base,'
'input,frame').split(',')])
inline = dict([(x, 1) for x in
('a abbr acronym b basefont bdo big br cite code dfn em font '
'i img input kbd label q s samp select small span strike '
'strong sub sup textarea tt u var'.split(' '))])
def repr_attribute(self, attrs, name):
if name == 'class_':
value = getattr(attrs, name)
if value is None:
return
return super(HtmlVisitor, self).repr_attribute(attrs, name)
def _issingleton(self, tagname):
return tagname in self.single
def _isinline(self, tagname):
return tagname in self.inline
class _escape:
def __init__(self):
self.escape = {
u('"') : u('&quot;'), u('<') : u('&lt;'), u('>') : u('&gt;'),
u('&') : u('&amp;'), u("'") : u('&apos;'),
}
self.charef_rex = re.compile(u("|").join(self.escape.keys()))
def _replacer(self, match):
return self.escape[match.group(0)]
def __call__(self, ustring):
""" xml-escape the given unicode string. """
try:
ustring = unicode(ustring)
except UnicodeDecodeError:
ustring = unicode(ustring, 'utf-8', errors='replace')
return self.charef_rex.sub(self._replacer, ustring)
escape = _escape()
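(Illustrative sketch, not part of the file above, but using its exported py.xml namespace built on the Tag/Namespace machinery.)

import py

page = py.xml.html.div(
    py.xml.html.h1("Hello"),
    py.xml.html.p("generated ", py.xml.html.em("on the fly")),
    class_="box",                # trailing underscore avoids the Python keyword clash
)
print(page.unicode(indent=0))
# <div class="box"><h1>Hello</h1><p>generated <em>on the fly</em></p></div>
assert py.xml.escape("<&>") == "&lt;&amp;&gt;"   # text content is escaped the same way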

20
third_party/python/py/py/test.py (vendored)

@ -1,10 +1,10 @@
import sys
if __name__ == '__main__':
import pytest
sys.exit(pytest.main())
else:
import sys, pytest
sys.modules['py.test'] = pytest
# for more API entry points see the 'tests' definition
# in __init__.py
import sys
if __name__ == '__main__':
import pytest
sys.exit(pytest.main())
else:
import sys, pytest
sys.modules['py.test'] = pytest
# for more API entry points see the 'tests' definition
# in __init__.py
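(Note, not part of the file above: executed as a script, e.g. `python -m py.test`, the shim simply hands control to pytest, and on import it registers the installed pytest package under the name 'py.test'. The following is effectively what the script branch does.)

import sys
import pytest

sys.exit(pytest.main())     # same exit code as running the `pytest` command directly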

11
third_party/python/py/setup.cfg (vendored)

@ -1,11 +0,0 @@
[wheel]
universal = 1
[devpi:upload]
formats = sdist.tgz,bdist_wheel
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0

38
third_party/python/py/setup.py (vendored)

@ -1,38 +0,0 @@
import os, sys
from setuptools import setup
def main():
setup(
name='py',
description='library with cross-python path, ini-parsing, io, code, log facilities',
long_description = open('README.txt').read(),
version='1.4.31',
url='http://pylib.readthedocs.org/',
license='MIT license',
platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],
author='holger krekel, Ronny Pfannschmidt, Benjamin Peterson and others',
author_email='pytest-dev@python.org',
classifiers=['Development Status :: 6 - Mature',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: POSIX',
'Operating System :: Microsoft :: Windows',
'Operating System :: MacOS :: MacOS X',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Libraries',
'Topic :: Utilities',
'Programming Language :: Python',
'Programming Language :: Python :: 3'],
packages=['py',
'py._code',
'py._io',
'py._log',
'py._path',
'py._process',
],
zip_safe=False,
)
if __name__ == '__main__':
main()

7
third_party/python/pytest/.coveragerc (vendored)

@ -1,7 +0,0 @@
[run]
omit =
# standalonetemplate is read dynamically and tested by test_genscript
*standalonetemplate.py
# oldinterpret could be removed, as it is no longer used in py26+
*oldinterpret.py
vendored_packages

91
third_party/python/pytest/AUTHORS (vendored)

@ -1,91 +0,0 @@
Holger Krekel, holger at merlinux eu
merlinux GmbH, Germany, office at merlinux eu
Contributors include::
Abhijeet Kasurde
Anatoly Bubenkoff
Andreas Zeidler
Andy Freeland
Anthon van der Neut
Armin Rigo
Aron Curzon
Aviv Palivoda
Benjamin Peterson
Bob Ippolito
Brian Dorsey
Brian Okken
Brianna Laugher
Bruno Oliveira
Carl Friedrich Bolz
Charles Cloud
Chris Lamb
Christian Theunert
Christian Tismer
Christopher Gilling
Daniel Grana
Daniel Hahler
Daniel Nuri
Dave Hunt
David Mohr
David Vierra
Edison Gustavo Muenz
Eduardo Schettino
Endre Galaczi
Elizaveta Shashkova
Eric Hunsberger
Eric Siegerman
Erik M. Bray
Florian Bruhin
Floris Bruynooghe
Gabriel Reis
Georgy Dyuldin
Graham Horler
Grig Gheorghiu
Guido Wesdorp
Harald Armin Massa
Ian Bicking
Jaap Broekhuizen
Jan Balster
Janne Vanhala
Jason R. Coombs
John Towler
Joshua Bronson
Jurko Gospodnetić
Katarzyna Jachim
Kevin Cox
Lee Kamentsky
Lukas Bednar
Maciek Fijalkowski
Maho
Marc Schlaich
Mark Abramowitz
Markus Unterwaditzer
Martijn Faassen
Martin Prusse
Matt Bachmann
Michael Aquilina
Michael Birtwell
Michael Droettboom
Nicolas Delaby
Pieter Mulder
Piotr Banaszkiewicz
Punyashloka Biswal
Quentin Pradet
Ralf Schmitt
Raphael Pierzina
Ronny Pfannschmidt
Ross Lawley
Ryan Wooden
Samuele Pedroni
Tom Viner
Trevor Bekolay
Wouter van Ackooy
David Díaz-Barquero
Eric Hunsberger
Simon Gomizelj
Russel Winder
Ben Webb
Alexei Kozlenok
Cal Leeming
Feng Ma

21
third_party/python/pytest/LICENSE vendored
View file

@ -1,21 +0,0 @@
The MIT License (MIT)
Copyright (c) 2004-2016 Holger Krekel and others
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

34
third_party/python/pytest/MANIFEST.in vendored
View file

@ -1,34 +0,0 @@
include CHANGELOG.rst
include LICENSE
include AUTHORS
include README.rst
include CONTRIBUTING.rst
include tox.ini
include setup.py
include .coveragerc
include plugin-test.sh
include requirements-docs.txt
include runtox.py
recursive-include bench *.py
recursive-include extra *.py
graft testing
graft doc
exclude _pytest/impl
graft _pytest/vendored_packages
recursive-exclude * *.pyc *.pyo
exclude appveyor/install.ps1
exclude appveyor.yml
exclude appveyor
exclude ISSUES.txt
exclude HOWTORELEASE.rst

133
third_party/python/pytest/PKG-INFO vendored
View file

@ -1,133 +0,0 @@
Metadata-Version: 1.1
Name: pytest
Version: 2.9.2
Summary: pytest: simple powerful testing with Python
Home-page: http://pytest.org
Author: Holger Krekel, Bruno Oliveira, Ronny Pfannschmidt, Floris Bruynooghe, Brianna Laugher, Florian Bruhin and others
Author-email: holger at merlinux.eu
License: MIT license
Description: .. image:: http://pytest.org/latest/_static/pytest1.png
:target: http://pytest.org
:align: center
:alt: pytest
------
.. image:: https://img.shields.io/pypi/v/pytest.svg
:target: https://pypi.python.org/pypi/pytest
.. image:: https://img.shields.io/pypi/pyversions/pytest.svg
:target: https://pypi.python.org/pypi/pytest
.. image:: https://img.shields.io/coveralls/pytest-dev/pytest/master.svg
:target: https://coveralls.io/r/pytest-dev/pytest
.. image:: https://travis-ci.org/pytest-dev/pytest.svg?branch=master
:target: https://travis-ci.org/pytest-dev/pytest
.. image:: https://ci.appveyor.com/api/projects/status/mrgbjaua7t33pg6b?svg=true
:target: https://ci.appveyor.com/project/pytestbot/pytest
The ``pytest`` framework makes it easy to write small tests, yet
scales to support complex functional testing for applications and libraries.
An example of a simple test:
.. code-block:: python
# content of test_sample.py
def func(x):
return x + 1
def test_answer():
assert func(3) == 5
To execute it::
$ py.test
======= test session starts ========
platform linux -- Python 3.4.3, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
collected 1 items
test_sample.py F
======= FAILURES ========
_______ test_answer ________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========
Due to ``py.test``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <http://pytest.org/latest/getting-started.html#our-first-test-run>`_ for more examples.
Features
--------
- Detailed info on failing `assert statements <http://pytest.org/latest/assert.html>`_ (no need to remember ``self.assert*`` names);
- `Auto-discovery
<http://pytest.org/latest/goodpractices.html#python-test-discovery>`_
of test modules and functions;
- `Modular fixtures <http://pytest.org/latest/fixture.html>`_ for
managing small or parametrized long-lived test resources;
- Can run `unittest <http://pytest.org/latest/unittest.html>`_ (or trial),
`nose <http://pytest.org/latest/nose.html>`_ test suites out of the box;
- Python2.6+, Python3.2+, PyPy-2.3, Jython-2.5 (untested);
- Rich plugin architecture, with over 150+ `external plugins <http://pytest.org/latest/plugins.html#installing-external-plugins-searching>`_ and thriving community;
Documentation
-------------
For full documentation, including installation, tutorials and PDF documents, please see http://pytest.org.
Bugs/Requests
-------------
Please use the `GitHub issue tracker <https://github.com/pytest-dev/pytest/issues>`_ to submit bugs or request features.
Changelog
---------
Consult the `Changelog <http://pytest.org/latest/changelog.html>`_ page for fixes and enhancements of each version.
License
-------
Copyright Holger Krekel and others, 2004-2016.
Distributed under the terms of the `MIT`_ license, pytest is free and open source software.
.. _`MIT`: https://github.com/pytest-dev/pytest/blob/master/LICENSE
Platform: unix
Platform: linux
Platform: osx
Platform: cygwin
Platform: win32
Classifier: Development Status :: 6 - Mature
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5

102
third_party/python/pytest/README.rst vendored
View file

@ -1,102 +0,0 @@
.. image:: http://pytest.org/latest/_static/pytest1.png
:target: http://pytest.org
:align: center
:alt: pytest
------
.. image:: https://img.shields.io/pypi/v/pytest.svg
:target: https://pypi.python.org/pypi/pytest
.. image:: https://img.shields.io/pypi/pyversions/pytest.svg
:target: https://pypi.python.org/pypi/pytest
.. image:: https://img.shields.io/coveralls/pytest-dev/pytest/master.svg
:target: https://coveralls.io/r/pytest-dev/pytest
.. image:: https://travis-ci.org/pytest-dev/pytest.svg?branch=master
:target: https://travis-ci.org/pytest-dev/pytest
.. image:: https://ci.appveyor.com/api/projects/status/mrgbjaua7t33pg6b?svg=true
:target: https://ci.appveyor.com/project/pytestbot/pytest
The ``pytest`` framework makes it easy to write small tests, yet
scales to support complex functional testing for applications and libraries.
An example of a simple test:
.. code-block:: python
# content of test_sample.py
def func(x):
return x + 1
def test_answer():
assert func(3) == 5
To execute it::
$ py.test
======= test session starts ========
platform linux -- Python 3.4.3, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
collected 1 items
test_sample.py F
======= FAILURES ========
_______ test_answer ________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========
Due to ``py.test``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <http://pytest.org/latest/getting-started.html#our-first-test-run>`_ for more examples.
Features
--------
- Detailed info on failing `assert statements <http://pytest.org/latest/assert.html>`_ (no need to remember ``self.assert*`` names);
- `Auto-discovery
<http://pytest.org/latest/goodpractices.html#python-test-discovery>`_
of test modules and functions;
- `Modular fixtures <http://pytest.org/latest/fixture.html>`_ for
managing small or parametrized long-lived test resources;
- Can run `unittest <http://pytest.org/latest/unittest.html>`_ (or trial),
`nose <http://pytest.org/latest/nose.html>`_ test suites out of the box;
- Python2.6+, Python3.2+, PyPy-2.3, Jython-2.5 (untested);
- Rich plugin architecture, with over 150+ `external plugins <http://pytest.org/latest/plugins.html#installing-external-plugins-searching>`_ and thriving community;
Documentation
-------------
For full documentation, including installation, tutorials and PDF documents, please see http://pytest.org.
Bugs/Requests
-------------
Please use the `GitHub issue tracker <https://github.com/pytest-dev/pytest/issues>`_ to submit bugs or request features.
Changelog
---------
Consult the `Changelog <http://pytest.org/latest/changelog.html>`_ page for fixes and enhancements of each version.
License
-------
Copyright Holger Krekel and others, 2004-2016.
Distributed under the terms of the `MIT`_ license, pytest is free and open source software.
.. _`MIT`: https://github.com/pytest-dev/pytest/blob/master/LICENSE

10
third_party/python/pytest/_pytest/__init__.py vendored
View file

@ -1,2 +1,8 @@
#
__version__ = '2.9.2'
__all__ = ['__version__']
try:
from ._version import version as __version__
except ImportError:
# broken installation, we don't even try
# unknown only works because we do poor mans version compare
__version__ = 'unknown'
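A quick check of which version was picked up (a sketch, not part of the patch):

import pytest
import _pytest

# Both expose the setuptools_scm-generated version, or 'unknown' on a broken install.
print(pytest.__version__, _pytest.__version__)   # e.g. 3.1.3 3.1.3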

View file

@ -57,7 +57,7 @@ If things do not work right away:
which should throw a KeyError: 'COMPLINE' (which is properly set by the
global argcomplete script).
"""
from __future__ import absolute_import, division, print_function
import sys
import os
from glob import glob
@ -87,6 +87,7 @@ class FastFilesCompleter:
completion.append(x[prefix_dir:])
return completion
if os.environ.get('_ARGCOMPLETE'):
try:
import argcomplete.completers

View file

@ -1,12 +1,10 @@
""" python inspection/code generation API """
from __future__ import absolute_import, division, print_function
from .code import Code # noqa
from .code import ExceptionInfo # noqa
from .code import Frame # noqa
from .code import Traceback # noqa
from .code import getrawcode # noqa
from .code import patch_builtins # noqa
from .code import unpatch_builtins # noqa
from .source import Source # noqa
from .source import compile_ as compile # noqa
from .source import getfslineno # noqa

View file

@ -2,6 +2,7 @@
# CHANGES:
# - some_str is replaced, trying to create unicode strings
#
from __future__ import absolute_import, division, print_function
import types
def format_exception_only(etype, value):

View file

@ -1,17 +1,21 @@
from __future__ import absolute_import, division, print_function
import sys
from inspect import CO_VARARGS, CO_VARKEYWORDS
import re
from weakref import ref
from _pytest.compat import _PY2, _PY3, PY35, safe_str
import py
builtin_repr = repr
reprlib = py.builtin._tryimport('repr', 'reprlib')
if sys.version_info[0] >= 3:
if _PY3:
from traceback import format_exception_only
else:
from ._py2traceback import format_exception_only
class Code(object):
""" wrapper around Python code objects """
def __init__(self, rawcode):
@ -28,6 +32,8 @@ class Code(object):
def __eq__(self, other):
return self.raw == other.raw
__hash__ = None
def __ne__(self, other):
return not self == other
@ -35,12 +41,16 @@ class Code(object):
def path(self):
""" return a path object pointing to source code (note that it
might not point to an actually existing file). """
p = py.path.local(self.raw.co_filename)
# maybe don't try this checking
if not p.check():
try:
p = py.path.local(self.raw.co_filename)
# maybe don't try this checking
if not p.check():
raise OSError("py.path check failed.")
except OSError:
# XXX maybe try harder like the weird logic
# in the standard lib [linecache.updatecache] does?
p = self.raw.co_filename
return p
@property
@ -139,7 +149,8 @@ class TracebackEntry(object):
_repr_style = None
exprinfo = None
def __init__(self, rawentry):
def __init__(self, rawentry, excinfo=None):
self._excinfo = excinfo
self._rawentry = rawentry
self.lineno = rawentry.tb_lineno - 1
@ -174,18 +185,6 @@ class TracebackEntry(object):
return self.frame.f_locals
locals = property(getlocals, None, None, "locals of underlying frame")
def reinterpret(self):
"""Reinterpret the failing statement and returns a detailed information
about what operations are performed."""
from _pytest.assertion.reinterpret import reinterpret
if self.exprinfo is None:
source = py.builtin._totext(self.statement).strip()
x = reinterpret(source, self.frame, should_fail=True)
if not py.builtin._istext(x):
raise TypeError("interpret returned non-string %r" % (x,))
self.exprinfo = x
return self.exprinfo
def getfirstlinesource(self):
# on Jython this firstlineno can be -1 apparently
return max(self.frame.code.firstlineno, 0)
@ -220,16 +219,24 @@ class TracebackEntry(object):
""" return True if the current frame has a var __tracebackhide__
resolving to True
If __tracebackhide__ is a callable, it gets called with the
ExceptionInfo instance and can decide whether to hide the traceback.
mostly for internal use
"""
try:
return self.frame.f_locals['__tracebackhide__']
tbh = self.frame.f_locals['__tracebackhide__']
except KeyError:
try:
return self.frame.f_globals['__tracebackhide__']
tbh = self.frame.f_globals['__tracebackhide__']
except KeyError:
return False
if py.builtin.callable(tbh):
return tbh(None if self._excinfo is None else self._excinfo())
else:
return tbh
def __str__(self):
try:
fn = str(self.path)
@ -253,12 +260,13 @@ class Traceback(list):
access to Traceback entries.
"""
Entry = TracebackEntry
def __init__(self, tb):
""" initialize from given python traceback object. """
def __init__(self, tb, excinfo=None):
""" initialize from given python traceback object and ExceptionInfo """
self._excinfo = excinfo
if hasattr(tb, 'tb_next'):
def f(cur):
while cur is not None:
yield self.Entry(cur)
yield self.Entry(cur, excinfo=excinfo)
cur = cur.tb_next
list.__init__(self, f(tb))
else:
@ -282,7 +290,7 @@ class Traceback(list):
not codepath.relto(excludepath)) and
(lineno is None or x.lineno == lineno) and
(firstlineno is None or x.frame.code.firstlineno == firstlineno)):
return Traceback(x._rawentry)
return Traceback(x._rawentry, self._excinfo)
return self
def __getitem__(self, key):
@ -301,7 +309,7 @@ class Traceback(list):
by default this removes all the TracebackEntries which are hidden
(see ishidden() above)
"""
return Traceback(filter(fn, self))
return Traceback(filter(fn, self), self._excinfo)
def getcrashentry(self):
""" return last non-hidden traceback entry that lead
@ -337,6 +345,7 @@ class Traceback(list):
l.append(entry.frame.f_locals)
return None
co_equal = compile('__recursioncache_locals_1 == __recursioncache_locals_2',
'?', 'eval')
@ -345,6 +354,8 @@ class ExceptionInfo(object):
help for navigating the traceback.
"""
_striptext = ''
_assert_start_repr = "AssertionError(u\'assert " if _PY2 else "AssertionError(\'assert "
def __init__(self, tup=None, exprinfo=None):
import _pytest._code
if tup is None:
@ -352,8 +363,8 @@ class ExceptionInfo(object):
if exprinfo is None and isinstance(tup[1], AssertionError):
exprinfo = getattr(tup[1], 'msg', None)
if exprinfo is None:
exprinfo = str(tup[1])
if exprinfo and exprinfo.startswith('assert '):
exprinfo = py.io.saferepr(tup[1])
if exprinfo and exprinfo.startswith(self._assert_start_repr):
self._striptext = 'AssertionError: '
self._excinfo = tup
#: the exception class
@ -365,7 +376,7 @@ class ExceptionInfo(object):
#: the exception type name
self.typename = self.type.__name__
#: the exception traceback (_pytest._code.Traceback instance)
self.traceback = _pytest._code.Traceback(self.tb)
self.traceback = _pytest._code.Traceback(self.tb, excinfo=ref(self))
def __repr__(self):
return "<ExceptionInfo %s tblen=%d>" % (self.typename, len(self.traceback))
@ -427,6 +438,19 @@ class ExceptionInfo(object):
loc = ReprFileLocation(entry.path, entry.lineno + 1, self.exconly())
return unicode(loc)
def match(self, regexp):
"""
Match the regular expression 'regexp' on the string representation of
the exception. If it matches then True is returned (so that it is
possible to write 'assert excinfo.match()'). If it doesn't match an
AssertionError is raised.
"""
__tracebackhide__ = True
if not re.search(regexp, str(self.value)):
assert 0, "Pattern '{0!s}' not found in '{1!s}'".format(
regexp, self.value)
return True
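A hedged usage sketch for the new match() helper in a test:

import pytest

def test_match_message():
    with pytest.raises(ValueError) as excinfo:
        int('not a number')
    # re.search against str(excinfo.value); raises AssertionError when it does not match
    excinfo.match(r"invalid literal")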
class FormattedExcinfo(object):
""" presenting information about failing Functions and Generators. """
@ -578,30 +602,91 @@ class FormattedExcinfo(object):
traceback = excinfo.traceback
if self.tbfilter:
traceback = traceback.filter()
recursionindex = None
if is_recursion_error(excinfo):
recursionindex = traceback.recursionindex()
traceback, extraline = self._truncate_recursive_traceback(traceback)
else:
extraline = None
last = traceback[-1]
entries = []
extraline = None
for index, entry in enumerate(traceback):
einfo = (last == entry) and excinfo or None
reprentry = self.repr_traceback_entry(entry, einfo)
entries.append(reprentry)
if index == recursionindex:
extraline = "!!! Recursion detected (same locals & position)"
break
return ReprTraceback(entries, extraline, style=self.style)
def repr_excinfo(self, excinfo):
reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash()
return ReprExceptionInfo(reprtraceback, reprcrash)
def _truncate_recursive_traceback(self, traceback):
"""
Truncate the given recursive traceback trying to find the starting point
of the recursion.
class TerminalRepr:
The detection is done by going through each traceback entry and finding the
point at which the locals of the frame are equal to the locals of a previous frame (see ``recursionindex()``).
Handle the situation where the recursion process might raise an exception (for example
comparing numpy arrays using equality raises a TypeError), in which case we do our best to
warn the user of the error and show a limited traceback.
"""
try:
recursionindex = traceback.recursionindex()
except Exception as e:
max_frames = 10
extraline = (
'!!! Recursion error detected, but an error occurred locating the origin of recursion.\n'
' The following exception happened when comparing locals in the stack frame:\n'
' {exc_type}: {exc_msg}\n'
' Displaying first and last {max_frames} stack frames out of {total}.'
).format(exc_type=type(e).__name__, exc_msg=safe_str(e), max_frames=max_frames, total=len(traceback))
traceback = traceback[:max_frames] + traceback[-max_frames:]
else:
if recursionindex is not None:
extraline = "!!! Recursion detected (same locals & position)"
traceback = traceback[:recursionindex + 1]
else:
extraline = None
return traceback, extraline
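The failure mode this truncation targets, as a minimal sketch:

def test_infinite_recursion():
    def f():
        return f()
    f()
    # the report ends with "!!! Recursion detected (same locals & position)"
    # instead of printing hundreds of identical frames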
def repr_excinfo(self, excinfo):
if _PY2:
reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash()
return ReprExceptionInfo(reprtraceback, reprcrash)
else:
repr_chain = []
e = excinfo.value
descr = None
while e is not None:
if excinfo:
reprtraceback = self.repr_traceback(excinfo)
reprcrash = excinfo._getreprcrash()
else:
# fallback to native repr if the exception doesn't have a traceback:
# ExceptionInfo objects require a full traceback to work
reprtraceback = ReprTracebackNative(py.std.traceback.format_exception(type(e), e, None))
reprcrash = None
repr_chain += [(reprtraceback, reprcrash, descr)]
if e.__cause__ is not None:
e = e.__cause__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'The above exception was the direct cause of the following exception:'
elif e.__context__ is not None:
e = e.__context__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'During handling of the above exception, another exception occurred:'
else:
e = None
repr_chain.reverse()
return ExceptionChainRepr(repr_chain)
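On Python 3 the report now follows __cause__/__context__; a sketch of a test that exercises the chain:

def test_chained_failure():
    try:
        {}['missing']
    except KeyError as e:
        raise RuntimeError('lookup failed') from e
    # both tracebacks are printed, separated by
    # "The above exception was the direct cause of the following exception:"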
class TerminalRepr(object):
def __str__(self):
s = self.__unicode__()
if sys.version_info[0] < 3:
if _PY2:
s = s.encode('utf-8')
return s
@ -617,21 +702,47 @@ class TerminalRepr:
return "<%s instance at %0x>" %(self.__class__, id(self))
class ReprExceptionInfo(TerminalRepr):
def __init__(self, reprtraceback, reprcrash):
self.reprtraceback = reprtraceback
self.reprcrash = reprcrash
class ExceptionRepr(TerminalRepr):
def __init__(self):
self.sections = []
def addsection(self, name, content, sep="-"):
self.sections.append((name, content, sep))
def toterminal(self, tw):
self.reprtraceback.toterminal(tw)
for name, content, sep in self.sections:
tw.sep(sep, name)
tw.line(content)
class ExceptionChainRepr(ExceptionRepr):
def __init__(self, chain):
super(ExceptionChainRepr, self).__init__()
self.chain = chain
# reprcrash and reprtraceback of the outermost (the newest) exception
# in the chain
self.reprtraceback = chain[-1][0]
self.reprcrash = chain[-1][1]
def toterminal(self, tw):
for element in self.chain:
element[0].toterminal(tw)
if element[2] is not None:
tw.line("")
tw.line(element[2], yellow=True)
super(ExceptionChainRepr, self).toterminal(tw)
class ReprExceptionInfo(ExceptionRepr):
def __init__(self, reprtraceback, reprcrash):
super(ReprExceptionInfo, self).__init__()
self.reprtraceback = reprtraceback
self.reprcrash = reprcrash
def toterminal(self, tw):
self.reprtraceback.toterminal(tw)
super(ReprExceptionInfo, self).toterminal(tw)
class ReprTraceback(TerminalRepr):
entrysep = "_ "
@ -720,7 +831,8 @@ class ReprFileLocation(TerminalRepr):
i = msg.find("\n")
if i != -1:
msg = msg[:i]
tw.line("%s:%s: %s" %(self.path, self.lineno, msg))
tw.write(self.path, bold=True, red=True)
tw.line(":%s: %s" % (self.lineno, msg))
class ReprLocals(TerminalRepr):
def __init__(self, lines):
@ -753,29 +865,6 @@ class ReprFuncArgs(TerminalRepr):
tw.line("")
oldbuiltins = {}
def patch_builtins(assertion=True, compile=True):
""" put compile and AssertionError builtins to Python's builtins. """
if assertion:
from _pytest.assertion import reinterpret
l = oldbuiltins.setdefault('AssertionError', [])
l.append(py.builtin.builtins.AssertionError)
py.builtin.builtins.AssertionError = reinterpret.AssertionError
if compile:
import _pytest._code
l = oldbuiltins.setdefault('compile', [])
l.append(py.builtin.builtins.compile)
py.builtin.builtins.compile = _pytest._code.compile
def unpatch_builtins(assertion=True, compile=True):
""" remove compile and AssertionError builtins from Python builtins. """
if assertion:
py.builtin.builtins.AssertionError = oldbuiltins['AssertionError'].pop()
if compile:
py.builtin.builtins.compile = oldbuiltins['compile'].pop()
def getrawcode(obj, trycall=True):
""" return code object for given function. """
try:
@ -792,7 +881,8 @@ def getrawcode(obj, trycall=True):
return x
return obj
if sys.version_info[:2] >= (3, 5): # RecursionError introduced in 3.5
if PY35: # RecursionError introduced in 3.5
def is_recursion_error(excinfo):
return excinfo.errisinstance(RecursionError) # noqa
else:

View file

@ -1,10 +1,9 @@
from __future__ import generators
from __future__ import absolute_import, division, generators, print_function
from bisect import bisect_right
import sys
import inspect, tokenize
import py
from types import ModuleType
cpy_compile = compile
try:
@ -52,22 +51,21 @@ class Source(object):
return str(self) == other
return False
__hash__ = None
def __getitem__(self, key):
if isinstance(key, int):
return self.lines[key]
else:
if key.step not in (None, 1):
raise IndexError("cannot slice a Source with a step")
return self.__getslice__(key.start, key.stop)
newsource = Source()
newsource.lines = self.lines[key.start:key.stop]
return newsource
def __len__(self):
return len(self.lines)
def __getslice__(self, start, end):
newsource = Source()
newsource.lines = self.lines[start:end]
return newsource
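The slicing behaviour preserved by this rewrite, sketched (assuming Source is importable from _pytest._code):

from _pytest._code import Source

s = Source("def f():\n    return 1\n")
print(s[0])           # 'def f():'
print(s[0:2].lines)   # ['def f():', '    return 1'] -- slicing returns a new Source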
def strip(self):
""" return new source object with trailing
and leading blank lines removed.
@ -193,14 +191,6 @@ class Source(object):
if flag & _AST_FLAG:
return co
lines = [(x + "\n") for x in self.lines]
if sys.version_info[0] >= 3:
# XXX py3's inspect.getsourcefile() checks for a module
# and a pep302 __loader__ ... we don't have a module
# at code compile-time so we need to fake it here
m = ModuleType("_pycodecompile_pseudo_module")
py.std.inspect.modulesbyfile[filename] = None
py.std.sys.modules[None] = m
m.__loader__ = 1
py.std.linecache.cache[filename] = (1, None, lines, filename)
return co
@ -266,6 +256,7 @@ def findsource(obj):
source.lines = [line.rstrip() for line in sourcelines]
return source, lineno
def getsource(obj, **kwargs):
import _pytest._code
obj = _pytest._code.getrawcode(obj)
@ -276,6 +267,7 @@ def getsource(obj, **kwargs):
assert isinstance(strsrc, str)
return Source(strsrc, **kwargs)
def deindent(lines, offset=None):
if offset is None:
for line in lines:
@ -289,6 +281,7 @@ def deindent(lines, offset=None):
if offset == 0:
return list(lines)
newlines = []
def readline_generator(lines):
for line in lines:
yield line + '\n'

View file

@ -2,7 +2,7 @@
imports symbols from vendored "pluggy" if available, otherwise
falls back to importing "pluggy" from the default namespace.
"""
from __future__ import absolute_import, division, print_function
try:
from _pytest.vendored_packages.pluggy import * # noqa
from _pytest.vendored_packages.pluggy import __version__ # noqa

4
third_party/python/pytest/_pytest/_version.py vendored Normal file
View file

@ -0,0 +1,4 @@
# coding: utf-8
# file generated by setuptools_scm
# don't change, don't track in version control
version = '3.1.3'

View file

@ -1,11 +1,13 @@
"""
support for presenting detailed information in failing assertions.
"""
from __future__ import absolute_import, division, print_function
import py
import os
import sys
from _pytest.monkeypatch import monkeypatch
from _pytest.assertion import util
from _pytest.assertion import rewrite
from _pytest.assertion import truncate
def pytest_addoption(parser):
@ -13,25 +15,46 @@ def pytest_addoption(parser):
group.addoption('--assert',
action="store",
dest="assertmode",
choices=("rewrite", "reinterp", "plain",),
choices=("rewrite", "plain",),
default="rewrite",
metavar="MODE",
help="""control assertion debugging tools. 'plain'
performs no assertion debugging. 'reinterp'
reinterprets assert statements after they failed
to provide assertion expression information.
'rewrite' (the default) rewrites assert
statements in test modules on import to
provide assert expression information. """)
group.addoption('--no-assert',
action="store_true",
default=False,
dest="noassert",
help="DEPRECATED equivalent to --assert=plain")
group.addoption('--nomagic', '--no-magic',
action="store_true",
default=False,
help="DEPRECATED equivalent to --assert=plain")
help="""Control assertion debugging tools. 'plain'
performs no assertion debugging. 'rewrite'
(the default) rewrites assert statements in
test modules on import to provide assert
expression information.""")
def register_assert_rewrite(*names):
"""Register one or more module names to be rewritten on import.
This function will make sure that this module or all modules inside
the package will get their assert statements rewritten.
Thus you should make sure to call this before the module is
actually imported, usually in your __init__.py if you are a plugin
using a package.
:raise TypeError: if the given module names are not strings.
"""
for name in names:
if not isinstance(name, str):
msg = 'expected module names as *args, got {0} instead'
raise TypeError(msg.format(repr(names)))
for hook in sys.meta_path:
if isinstance(hook, rewrite.AssertionRewritingHook):
importhook = hook
break
else:
importhook = DummyRewriteHook()
importhook.mark_rewrite(*names)
class DummyRewriteHook(object):
"""A no-op import hook for when rewriting is disabled."""
def mark_rewrite(self, *names):
pass
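A hedged sketch of how a plugin shipped as a package would use the new hook, typically at the top of its __init__.py (myplugin.helpers is a made-up module name):

import pytest

# Ask pytest to rewrite asserts in the helper module before anything imports it.
pytest.register_assert_rewrite('myplugin.helpers')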
class AssertionState:
@ -40,57 +63,39 @@ class AssertionState:
def __init__(self, config, mode):
self.mode = mode
self.trace = config.trace.root.get("assertion")
self.hook = None
def pytest_configure(config):
mode = config.getvalue("assertmode")
if config.getvalue("noassert") or config.getvalue("nomagic"):
mode = "plain"
if mode == "rewrite":
try:
import ast # noqa
except ImportError:
mode = "reinterp"
else:
# Both Jython and CPython 2.6.0 have AST bugs that make the
# assertion rewriting hook malfunction.
if (sys.platform.startswith('java') or
sys.version_info[:3] == (2, 6, 0)):
mode = "reinterp"
if mode != "plain":
_load_modules(mode)
m = monkeypatch()
config._cleanup.append(m.undo)
m.setattr(py.builtin.builtins, 'AssertionError',
reinterpret.AssertionError) # noqa
hook = None
if mode == "rewrite":
hook = rewrite.AssertionRewritingHook() # noqa
sys.meta_path.insert(0, hook)
warn_about_missing_assertion(mode)
config._assertstate = AssertionState(config, mode)
config._assertstate.hook = hook
config._assertstate.trace("configured with mode set to %r" % (mode,))
def install_importhook(config):
"""Try to install the rewrite hook, raise SystemError if it fails."""
# Both Jython and CPython 2.6.0 have AST bugs that make the
# assertion rewriting hook malfunction.
if (sys.platform.startswith('java') or
sys.version_info[:3] == (2, 6, 0)):
raise SystemError('rewrite not supported')
config._assertstate = AssertionState(config, 'rewrite')
config._assertstate.hook = hook = rewrite.AssertionRewritingHook(config)
sys.meta_path.insert(0, hook)
config._assertstate.trace('installed rewrite import hook')
def undo():
hook = config._assertstate.hook
if hook is not None and hook in sys.meta_path:
sys.meta_path.remove(hook)
config.add_cleanup(undo)
return hook
def pytest_collection(session):
# this hook is only called when test modules are collected
# so for example not in the master process of pytest-xdist
# (which does not collect test modules)
hook = session.config._assertstate.hook
if hook is not None:
hook.set_session(session)
def _running_on_ci():
"""Check if we're currently running on a CI system."""
env_vars = ['CI', 'BUILD_NUMBER']
return any(var in os.environ for var in env_vars)
assertstate = getattr(session.config, '_assertstate', None)
if assertstate:
if assertstate.hook is not None:
assertstate.hook.set_session(session)
def pytest_runtest_setup(item):
@ -106,8 +111,8 @@ def pytest_runtest_setup(item):
This uses the first result from the hook and then ensures the
following:
* Overly verbose explanations are dropped unless -vv was used or
running on a CI.
* Overly verbose explanations are truncated unless configured otherwise
(eg. if running in verbose mode).
* Embedded newlines are escaped to help util.format_explanation()
later.
* If the rewrite mode is used embedded %-characters are replaced
@ -120,14 +125,7 @@ def pytest_runtest_setup(item):
config=item.config, op=op, left=left, right=right)
for new_expl in hook_result:
if new_expl:
if (sum(len(p) for p in new_expl[1:]) > 80*8 and
item.config.option.verbose < 2 and
not _running_on_ci()):
show_max = 10
truncated_lines = len(new_expl) - show_max
new_expl[show_max:] = [py.builtin._totext(
'Detailed information truncated (%d more lines)'
', use "-vv" to show' % truncated_lines)]
new_expl = truncate.truncate_if_required(new_expl, item)
new_expl = [line.replace("\n", "\\n") for line in new_expl]
res = py.builtin._totext("\n~").join(new_expl)
if item.config.getvalue("assertmode") == "rewrite":
@ -141,35 +139,10 @@ def pytest_runtest_teardown(item):
def pytest_sessionfinish(session):
hook = session.config._assertstate.hook
if hook is not None:
hook.session = None
def _load_modules(mode):
"""Lazily import assertion related code."""
global rewrite, reinterpret
from _pytest.assertion import reinterpret # noqa
if mode == "rewrite":
from _pytest.assertion import rewrite # noqa
def warn_about_missing_assertion(mode):
try:
assert False
except AssertionError:
pass
else:
if mode == "rewrite":
specifically = ("assertions which are not in test modules "
"will be ignored")
else:
specifically = "failing tests may report as passing"
sys.stderr.write("WARNING: " + specifically +
" because assert statements are not executed "
"by the underlying Python interpreter "
"(are you using python -O?)\n")
assertstate = getattr(session.config, '_assertstate', None)
if assertstate:
if assertstate.hook is not None:
assertstate.hook.set_session(None)
# Expose this plugin's implementation for the pytest_assertrepr_compare hook

View file

@ -1,407 +0,0 @@
"""
Find intermediate evaluation results in assert statements through builtin AST.
"""
import ast
import sys
import _pytest._code
import py
from _pytest.assertion import util
u = py.builtin._totext
class AssertionError(util.BuiltinAssertionError):
def __init__(self, *args):
util.BuiltinAssertionError.__init__(self, *args)
if args:
# on Python2.6 we get len(args)==2 for: assert 0, (x,y)
# on Python2.7 and above we always get len(args) == 1
# with args[0] being the (x,y) tuple.
if len(args) > 1:
toprint = args
else:
toprint = args[0]
try:
self.msg = u(toprint)
except Exception:
self.msg = u(
"<[broken __repr__] %s at %0xd>"
% (toprint.__class__, id(toprint)))
else:
f = _pytest._code.Frame(sys._getframe(1))
try:
source = f.code.fullsource
if source is not None:
try:
source = source.getstatement(f.lineno, assertion=True)
except IndexError:
source = None
else:
source = str(source.deindent()).strip()
except py.error.ENOENT:
source = None
# this can also occur during reinterpretation, when the
# co_filename is set to "<run>".
if source:
self.msg = reinterpret(source, f, should_fail=True)
else:
self.msg = "<could not determine information>"
if not self.args:
self.args = (self.msg,)
if sys.version_info > (3, 0):
AssertionError.__module__ = "builtins"
if sys.platform.startswith("java"):
# See http://bugs.jython.org/issue1497
_exprs = ("BoolOp", "BinOp", "UnaryOp", "Lambda", "IfExp", "Dict",
"ListComp", "GeneratorExp", "Yield", "Compare", "Call",
"Repr", "Num", "Str", "Attribute", "Subscript", "Name",
"List", "Tuple")
_stmts = ("FunctionDef", "ClassDef", "Return", "Delete", "Assign",
"AugAssign", "Print", "For", "While", "If", "With", "Raise",
"TryExcept", "TryFinally", "Assert", "Import", "ImportFrom",
"Exec", "Global", "Expr", "Pass", "Break", "Continue")
_expr_nodes = set(getattr(ast, name) for name in _exprs)
_stmt_nodes = set(getattr(ast, name) for name in _stmts)
def _is_ast_expr(node):
return node.__class__ in _expr_nodes
def _is_ast_stmt(node):
return node.__class__ in _stmt_nodes
else:
def _is_ast_expr(node):
return isinstance(node, ast.expr)
def _is_ast_stmt(node):
return isinstance(node, ast.stmt)
try:
_Starred = ast.Starred
except AttributeError:
# Python 2. Define a dummy class so isinstance() will always be False.
class _Starred(object): pass
class Failure(Exception):
"""Error found while interpreting AST."""
def __init__(self, explanation=""):
self.cause = sys.exc_info()
self.explanation = explanation
def reinterpret(source, frame, should_fail=False):
mod = ast.parse(source)
visitor = DebugInterpreter(frame)
try:
visitor.visit(mod)
except Failure:
failure = sys.exc_info()[1]
return getfailure(failure)
if should_fail:
return ("(assertion failed, but when it was re-run for "
"printing intermediate values, it did not fail. Suggestions: "
"compute assert expression before the assert or use --assert=plain)")
def run(offending_line, frame=None):
if frame is None:
frame = _pytest._code.Frame(sys._getframe(1))
return reinterpret(offending_line, frame)
def getfailure(e):
explanation = util.format_explanation(e.explanation)
value = e.cause[1]
if str(value):
lines = explanation.split('\n')
lines[0] += " << %s" % (value,)
explanation = '\n'.join(lines)
text = "%s: %s" % (e.cause[0].__name__, explanation)
if text.startswith('AssertionError: assert '):
text = text[16:]
return text
operator_map = {
ast.BitOr : "|",
ast.BitXor : "^",
ast.BitAnd : "&",
ast.LShift : "<<",
ast.RShift : ">>",
ast.Add : "+",
ast.Sub : "-",
ast.Mult : "*",
ast.Div : "/",
ast.FloorDiv : "//",
ast.Mod : "%",
ast.Eq : "==",
ast.NotEq : "!=",
ast.Lt : "<",
ast.LtE : "<=",
ast.Gt : ">",
ast.GtE : ">=",
ast.Pow : "**",
ast.Is : "is",
ast.IsNot : "is not",
ast.In : "in",
ast.NotIn : "not in"
}
unary_map = {
ast.Not : "not %s",
ast.Invert : "~%s",
ast.USub : "-%s",
ast.UAdd : "+%s"
}
class DebugInterpreter(ast.NodeVisitor):
"""Interpret AST nodes to gleam useful debugging information. """
def __init__(self, frame):
self.frame = frame
def generic_visit(self, node):
# Fallback when we don't have a special implementation.
if _is_ast_expr(node):
mod = ast.Expression(node)
co = self._compile(mod)
try:
result = self.frame.eval(co)
except Exception:
raise Failure()
explanation = self.frame.repr(result)
return explanation, result
elif _is_ast_stmt(node):
mod = ast.Module([node])
co = self._compile(mod, "exec")
try:
self.frame.exec_(co)
except Exception:
raise Failure()
return None, None
else:
raise AssertionError("can't handle %s" %(node,))
def _compile(self, source, mode="eval"):
return compile(source, "<assertion interpretation>", mode)
def visit_Expr(self, expr):
return self.visit(expr.value)
def visit_Module(self, mod):
for stmt in mod.body:
self.visit(stmt)
def visit_Name(self, name):
explanation, result = self.generic_visit(name)
# See if the name is local.
source = "%r in locals() is not globals()" % (name.id,)
co = self._compile(source)
try:
local = self.frame.eval(co)
except Exception:
# have to assume it isn't
local = None
if local is None or not self.frame.is_true(local):
return name.id, result
return explanation, result
def visit_Compare(self, comp):
left = comp.left
left_explanation, left_result = self.visit(left)
for op, next_op in zip(comp.ops, comp.comparators):
next_explanation, next_result = self.visit(next_op)
op_symbol = operator_map[op.__class__]
explanation = "%s %s %s" % (left_explanation, op_symbol,
next_explanation)
source = "__exprinfo_left %s __exprinfo_right" % (op_symbol,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_left=left_result,
__exprinfo_right=next_result)
except Exception:
raise Failure(explanation)
try:
if not self.frame.is_true(result):
break
except KeyboardInterrupt:
raise
except:
break
left_explanation, left_result = next_explanation, next_result
if util._reprcompare is not None:
res = util._reprcompare(op_symbol, left_result, next_result)
if res:
explanation = res
return explanation, result
def visit_BoolOp(self, boolop):
is_or = isinstance(boolop.op, ast.Or)
explanations = []
for operand in boolop.values:
explanation, result = self.visit(operand)
explanations.append(explanation)
if result == is_or:
break
name = is_or and " or " or " and "
explanation = "(" + name.join(explanations) + ")"
return explanation, result
def visit_UnaryOp(self, unary):
pattern = unary_map[unary.op.__class__]
operand_explanation, operand_result = self.visit(unary.operand)
explanation = pattern % (operand_explanation,)
co = self._compile(pattern % ("__exprinfo_expr",))
try:
result = self.frame.eval(co, __exprinfo_expr=operand_result)
except Exception:
raise Failure(explanation)
return explanation, result
def visit_BinOp(self, binop):
left_explanation, left_result = self.visit(binop.left)
right_explanation, right_result = self.visit(binop.right)
symbol = operator_map[binop.op.__class__]
explanation = "(%s %s %s)" % (left_explanation, symbol,
right_explanation)
source = "__exprinfo_left %s __exprinfo_right" % (symbol,)
co = self._compile(source)
try:
result = self.frame.eval(co, __exprinfo_left=left_result,
__exprinfo_right=right_result)
except Exception:
raise Failure(explanation)
return explanation, result
def visit_Call(self, call):
func_explanation, func = self.visit(call.func)
arg_explanations = []
ns = {"__exprinfo_func" : func}
arguments = []
for arg in call.args:
arg_explanation, arg_result = self.visit(arg)
if isinstance(arg, _Starred):
arg_name = "__exprinfo_star"
ns[arg_name] = arg_result
arguments.append("*%s" % (arg_name,))
arg_explanations.append("*%s" % (arg_explanation,))
else:
arg_name = "__exprinfo_%s" % (len(ns),)
ns[arg_name] = arg_result
arguments.append(arg_name)
arg_explanations.append(arg_explanation)
for keyword in call.keywords:
arg_explanation, arg_result = self.visit(keyword.value)
if keyword.arg:
arg_name = "__exprinfo_%s" % (len(ns),)
keyword_source = "%s=%%s" % (keyword.arg)
arguments.append(keyword_source % (arg_name,))
arg_explanations.append(keyword_source % (arg_explanation,))
else:
arg_name = "__exprinfo_kwds"
arguments.append("**%s" % (arg_name,))
arg_explanations.append("**%s" % (arg_explanation,))
ns[arg_name] = arg_result
if getattr(call, 'starargs', None):
arg_explanation, arg_result = self.visit(call.starargs)
arg_name = "__exprinfo_star"
ns[arg_name] = arg_result
arguments.append("*%s" % (arg_name,))
arg_explanations.append("*%s" % (arg_explanation,))
if getattr(call, 'kwargs', None):
arg_explanation, arg_result = self.visit(call.kwargs)
arg_name = "__exprinfo_kwds"
ns[arg_name] = arg_result
arguments.append("**%s" % (arg_name,))
arg_explanations.append("**%s" % (arg_explanation,))
args_explained = ", ".join(arg_explanations)
explanation = "%s(%s)" % (func_explanation, args_explained)
args = ", ".join(arguments)
source = "__exprinfo_func(%s)" % (args,)
co = self._compile(source)
try:
result = self.frame.eval(co, **ns)
except Exception:
raise Failure(explanation)
pattern = "%s\n{%s = %s\n}"
rep = self.frame.repr(result)
explanation = pattern % (rep, rep, explanation)
return explanation, result
def _is_builtin_name(self, name):
pattern = "%r not in globals() and %r not in locals()"
source = pattern % (name.id, name.id)
co = self._compile(source)
try:
return self.frame.eval(co)
except Exception:
return False
def visit_Attribute(self, attr):
if not isinstance(attr.ctx, ast.Load):
return self.generic_visit(attr)
source_explanation, source_result = self.visit(attr.value)
explanation = "%s.%s" % (source_explanation, attr.attr)
source = "__exprinfo_expr.%s" % (attr.attr,)
co = self._compile(source)
try:
try:
result = self.frame.eval(co, __exprinfo_expr=source_result)
except AttributeError:
# Maybe the attribute name needs to be mangled?
if not attr.attr.startswith("__") or attr.attr.endswith("__"):
raise
source = "getattr(__exprinfo_expr.__class__, '__name__', '')"
co = self._compile(source)
class_name = self.frame.eval(co, __exprinfo_expr=source_result)
mangled_attr = "_" + class_name + attr.attr
source = "__exprinfo_expr.%s" % (mangled_attr,)
co = self._compile(source)
result = self.frame.eval(co, __exprinfo_expr=source_result)
except Exception:
raise Failure(explanation)
explanation = "%s\n{%s = %s.%s\n}" % (self.frame.repr(result),
self.frame.repr(result),
source_explanation, attr.attr)
# Check if the attr is from an instance.
source = "%r in getattr(__exprinfo_expr, '__dict__', {})"
source = source % (attr.attr,)
co = self._compile(source)
try:
from_instance = self.frame.eval(co, __exprinfo_expr=source_result)
except Exception:
from_instance = None
if from_instance is None or self.frame.is_true(from_instance):
rep = self.frame.repr(result)
pattern = "%s\n{%s = %s\n}"
explanation = pattern % (rep, rep, explanation)
return explanation, result
def visit_Assert(self, assrt):
test_explanation, test_result = self.visit(assrt.test)
explanation = "assert %s" % (test_explanation,)
if not self.frame.is_true(test_result):
try:
raise util.BuiltinAssertionError
except Exception:
raise Failure(explanation)
return explanation, test_result
def visit_Assign(self, assign):
value_explanation, value_result = self.visit(assign.value)
explanation = "... = %s" % (value_explanation,)
name = ast.Name("__exprinfo_expr", ast.Load(),
lineno=assign.value.lineno,
col_offset=assign.value.col_offset)
new_assign = ast.Assign(assign.targets, name, lineno=assign.lineno,
col_offset=assign.col_offset)
mod = ast.Module([new_assign])
co = self._compile(mod, "exec")
try:
self.frame.exec_(co, __exprinfo_expr=value_result)
except Exception:
raise Failure(explanation)
return explanation, value_result

View file

@ -1,6 +1,7 @@
"""Rewrite assertion AST to produce nice error messages"""
from __future__ import absolute_import, division, print_function
import ast
import _ast
import errno
import itertools
import imp
@ -44,20 +45,20 @@ else:
class AssertionRewritingHook(object):
"""PEP302 Import hook which rewrites asserts."""
def __init__(self):
def __init__(self, config):
self.config = config
self.fnpats = config.getini("python_files")
self.session = None
self.modules = {}
self._rewritten_names = set()
self._register_with_pkg_resources()
self._must_rewrite = set()
def set_session(self, session):
self.fnpats = session.config.getini("python_files")
self.session = session
def find_module(self, name, path=None):
if self.session is None:
return None
sess = self.session
state = sess.config._assertstate
state = self.config._assertstate
state.trace("find_module called for: %s" % name)
names = name.rsplit(".", 1)
lastname = names[-1]
@ -78,7 +79,12 @@ class AssertionRewritingHook(object):
tp = desc[2]
if tp == imp.PY_COMPILED:
if hasattr(imp, "source_from_cache"):
fn = imp.source_from_cache(fn)
try:
fn = imp.source_from_cache(fn)
except ValueError:
# Python 3 doesn't like orphaned but still-importable
# .pyc files.
fn = fn[:-1]
else:
fn = fn[:-1]
elif tp != imp.PY_SOURCE:
@ -86,24 +92,13 @@ class AssertionRewritingHook(object):
return None
else:
fn = os.path.join(pth, name.rpartition(".")[2] + ".py")
fn_pypath = py.path.local(fn)
# Is this a test file?
if not sess.isinitpath(fn):
# We have to be very careful here because imports in this code can
# trigger a cycle.
self.session = None
try:
for pat in self.fnpats:
if fn_pypath.fnmatch(pat):
state.trace("matched test file %r" % (fn,))
break
else:
return None
finally:
self.session = sess
else:
state.trace("matched test file (was specified on cmdline): %r" %
(fn,))
if not self._should_rewrite(name, fn_pypath, state):
return None
self._rewritten_names.add(name)
# The requested module looks like a test file, so rewrite it. This is
# the most magical part of the process: load the source, rewrite the
# asserts, and load the rewritten source. We also cache the rewritten
@ -140,7 +135,7 @@ class AssertionRewritingHook(object):
co = _read_pyc(fn_pypath, pyc, state.trace)
if co is None:
state.trace("rewriting %r" % (fn,))
source_stat, co = _rewrite_test(state, fn_pypath)
source_stat, co = _rewrite_test(self.config, fn_pypath)
if co is None:
# Probably a SyntaxError in the test.
return None
@ -151,6 +146,51 @@ class AssertionRewritingHook(object):
self.modules[name] = co, pyc
return self
def _should_rewrite(self, name, fn_pypath, state):
# always rewrite conftest files
fn = str(fn_pypath)
if fn_pypath.basename == 'conftest.py':
state.trace("rewriting conftest file: %r" % (fn,))
return True
if self.session is not None:
if self.session.isinitpath(fn):
state.trace("matched test file (was specified on cmdline): %r" %
(fn,))
return True
# modules not passed explicitly on the command line are only
# rewritten if they match the naming convention for test files
for pat in self.fnpats:
if fn_pypath.fnmatch(pat):
state.trace("matched test file %r" % (fn,))
return True
for marked in self._must_rewrite:
if name.startswith(marked):
state.trace("matched marked file %r (from %r)" % (name, marked))
return True
return False
def mark_rewrite(self, *names):
"""Mark import names as needing to be re-written.
The named module or package as well as any nested modules will
be re-written on import.
"""
already_imported = set(names).intersection(set(sys.modules))
if already_imported:
for name in already_imported:
if name not in self._rewritten_names:
self._warn_already_imported(name)
self._must_rewrite.update(names)
def _warn_already_imported(self, name):
self.config.warn(
'P1',
'Module already imported so can not be re-written: %s' % name)
def load_module(self, name):
# If there is an existing module object named 'fullname' in
# sys.modules, the loader must use that existing module. (Otherwise,
@ -170,7 +210,8 @@ class AssertionRewritingHook(object):
mod.__loader__ = self
py.builtin.exec_(co, mod.__dict__)
except:
del sys.modules[name]
if name in sys.modules:
del sys.modules[name]
raise
return sys.modules[name]
@ -235,14 +276,16 @@ def _write_pyc(state, co, source_stat, pyc):
fp.close()
return True
RN = "\r\n".encode("utf-8")
N = "\n".encode("utf-8")
cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+")
BOM_UTF8 = '\xef\xbb\xbf'
def _rewrite_test(state, fn):
def _rewrite_test(config, fn):
"""Try to read and rewrite *fn* and return the code object."""
state = config._assertstate
try:
stat = fn.stat()
source = fn.read("rb")
@ -287,9 +330,9 @@ def _rewrite_test(state, fn):
# Let this pop up again in the real import.
state.trace("failed to parse: %r" % (fn,))
return None, None
rewrite_asserts(tree)
rewrite_asserts(tree, fn, config)
try:
co = compile(tree, fn.strpath, "exec")
co = compile(tree, fn.strpath, "exec", dont_inherit=True)
except SyntaxError:
# It's possible that this error is from some bug in the
# assertion rewriting, but I don't know of a fast way to tell.
@ -343,9 +386,9 @@ def _read_pyc(source, pyc, trace=lambda x: None):
return co
def rewrite_asserts(mod):
def rewrite_asserts(mod, module_path=None, config=None):
"""Rewrite the assert statements in mod."""
AssertionRewriter().run(mod)
AssertionRewriter(module_path, config).run(mod)
def _saferepr(obj):
@ -532,6 +575,11 @@ class AssertionRewriter(ast.NodeVisitor):
"""
def __init__(self, module_path, config):
super(AssertionRewriter, self).__init__()
self.module_path = module_path
self.config = config
def run(self, mod):
"""Find all assert statements in *mod* and rewrite them."""
if not mod.body:
@ -672,6 +720,10 @@ class AssertionRewriter(ast.NodeVisitor):
the expression is false.
"""
if isinstance(assert_.test, ast.Tuple) and self.config is not None:
fslocation = (self.module_path, assert_.lineno)
self.config.warn('R1', 'assertion is always true, perhaps '
'remove parentheses?', fslocation=fslocation)
self.statements = []
self.variables = []
self.variable_counter = itertools.count()
@ -855,6 +907,8 @@ class AssertionRewriter(ast.NodeVisitor):
def visit_Compare(self, comp):
self.push_format_context()
left_res, left_expl = self.visit(comp.left)
if isinstance(comp.left, (_ast.Compare, _ast.BoolOp)):
left_expl = "({0})".format(left_expl)
res_variables = [self.variable() for i in range(len(comp.ops))]
load_names = [ast.Name(v, ast.Load()) for v in res_variables]
store_names = [ast.Name(v, ast.Store()) for v in res_variables]
@ -864,6 +918,8 @@ class AssertionRewriter(ast.NodeVisitor):
results = [left_res]
for i, op, next_operand in it:
next_res, next_expl = self.visit(next_operand)
if isinstance(next_operand, (_ast.Compare, _ast.BoolOp)):
next_expl = "({0})".format(next_expl)
results.append(next_res)
sym = binop_map[op.__class__]
syms.append(ast.Str(sym))
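A sketch of the kind of assertion whose explanation this parenthesisation affects:

def test_nested_comparison():
    a, b, c = 1, 1, 2
    assert (a == b) == c
    # the failure explanation renders the nested comparison with parentheses,
    # e.g. "assert (1 == 1) == 2", so the grouping is unambiguous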

102
third_party/python/pytest/_pytest/assertion/truncate.py vendored Normal file
View file

@ -0,0 +1,102 @@
"""
Utilities for truncating assertion output.
Current default behaviour is to truncate assertion explanations at
~8 terminal lines, unless running in "-vv" mode or running on CI.
"""
from __future__ import absolute_import, division, print_function
import os
import py
DEFAULT_MAX_LINES = 8
DEFAULT_MAX_CHARS = 8 * 80
USAGE_MSG = "use '-vv' to show"
def truncate_if_required(explanation, item, max_length=None):
"""
Truncate this assertion explanation if the given test item is eligible.
"""
if _should_truncate_item(item):
return _truncate_explanation(explanation)
return explanation
def _should_truncate_item(item):
"""
Whether or not this test item is eligible for truncation.
"""
verbose = item.config.option.verbose
return verbose < 2 and not _running_on_ci()
def _running_on_ci():
"""Check if we're currently running on a CI system."""
env_vars = ['CI', 'BUILD_NUMBER']
return any(var in os.environ for var in env_vars)
def _truncate_explanation(input_lines, max_lines=None, max_chars=None):
"""
Truncate given list of strings that makes up the assertion explanation.
Truncates to either 8 lines, or 640 characters - whichever the input reaches
first. The remaining lines will be replaced by a usage message.
"""
if max_lines is None:
max_lines = DEFAULT_MAX_LINES
if max_chars is None:
max_chars = DEFAULT_MAX_CHARS
# Check if truncation required
input_char_count = len("".join(input_lines))
if len(input_lines) <= max_lines and input_char_count <= max_chars:
return input_lines
# Truncate first to max_lines, and then truncate to max_chars if max_chars
# is exceeded.
truncated_explanation = input_lines[:max_lines]
truncated_explanation = _truncate_by_char_count(truncated_explanation, max_chars)
# Add ellipsis to final line
truncated_explanation[-1] = truncated_explanation[-1] + "..."
# Append useful message to explanation
truncated_line_count = len(input_lines) - len(truncated_explanation)
truncated_line_count += 1 # Account for the part-truncated final line
msg = '...Full output truncated'
if truncated_line_count == 1:
msg += ' ({0} line hidden)'.format(truncated_line_count)
else:
msg += ' ({0} lines hidden)'.format(truncated_line_count)
msg += ", {0}" .format(USAGE_MSG)
truncated_explanation.extend([
py.builtin._totext(""),
py.builtin._totext(msg),
])
return truncated_explanation
def _truncate_by_char_count(input_lines, max_chars):
# Check if truncation required
if len("".join(input_lines)) <= max_chars:
return input_lines
# Find point at which input length exceeds total allowed length
iterated_char_count = 0
for iterated_index, input_line in enumerate(input_lines):
if iterated_char_count + len(input_line) > max_chars:
break
iterated_char_count += len(input_line)
# Create truncated explanation with modified final line
truncated_result = input_lines[:iterated_index]
final_line = input_lines[iterated_index]
if final_line:
final_line_truncate_point = max_chars - iterated_char_count
final_line = final_line[:final_line_truncate_point]
truncated_result.append(final_line)
return truncated_result
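A hedged sketch of the truncation helper in isolation (note _truncate_explanation is private API):

from _pytest.assertion import truncate

lines = ['line %d' % i for i in range(20)]
short = truncate._truncate_explanation(lines)
# keeps the first 8 lines and appends:
# "...Full output truncated (13 lines hidden), use '-vv' to show"
print(len(short), short[-1])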

View file

@ -1,4 +1,5 @@
"""Utilities for assertion debugging"""
from __future__ import absolute_import, division, print_function
import pprint
import _pytest._code
@ -8,7 +9,7 @@ try:
except ImportError:
Sequence = list
BuiltinAssertionError = py.builtin.builtins.AssertionError
u = py.builtin._totext
# The _reprcompare attribute on the util module is used by the new assertion
@ -38,44 +39,11 @@ def format_explanation(explanation):
displaying diffs.
"""
explanation = ecu(explanation)
explanation = _collapse_false(explanation)
lines = _split_explanation(explanation)
result = _format_lines(lines)
return u('\n').join(result)
def _collapse_false(explanation):
"""Collapse expansions of False
So this strips out any "assert False\n{where False = ...\n}"
blocks.
"""
where = 0
while True:
start = where = explanation.find("False\n{False = ", where)
if where == -1:
break
level = 0
prev_c = explanation[start]
for i, c in enumerate(explanation[start:]):
if prev_c + c == "\n{":
level += 1
elif prev_c + c == "\n}":
level -= 1
if not level:
break
prev_c = c
else:
raise AssertionError("unbalanced braces: %r" % (explanation,))
end = start + i
where = end
if explanation[end - 1] == '\n':
explanation = (explanation[:start] + explanation[start+15:end-1] +
explanation[end+1:])
where -= 17
return explanation
def _split_explanation(explanation):
"""Return a list of individual lines in the explanation
@ -138,7 +106,7 @@ except NameError:
def assertrepr_compare(config, op, left, right):
"""Return specialised explanations for some operators/operands"""
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
left_repr = py.io.saferepr(left, maxsize=int(width/2))
left_repr = py.io.saferepr(left, maxsize=int(width//2))
right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
summary = u('%s %s %s') % (ecu(left_repr), op, ecu(right_repr))
@ -225,9 +193,10 @@ def _diff_text(left, right, verbose=False):
'characters in diff, use -v to show') % i]
left = left[:-i]
right = right[:-i]
keepends = True
explanation += [line.strip('\n')
for line in ndiff(left.splitlines(),
right.splitlines())]
for line in ndiff(left.splitlines(keepends),
right.splitlines(keepends))]
return explanation
@ -288,8 +257,8 @@ def _compare_eq_dict(left, right, verbose=False):
explanation = []
common = set(left).intersection(set(right))
same = dict((k, left[k]) for k in common if left[k] == right[k])
if same and not verbose:
explanation += [u('Omitting %s identical items, use -v to show') %
if same and verbose < 2:
explanation += [u('Omitting %s identical items, use -vv to show') %
len(same)]
elif same:
explanation += [u('Common items:')]

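A hedged illustration of the verbosity change in the hunk above: with default verbosity the identical dict items are summarised and the hint now points at -vv. The test below is a placeholder, not part of the patch:

    def test_dict_compare():
        assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
    # default run reports roughly: "Omitting 1 identical items, use -vv to show"
    # running with -vv lists the common items instead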
third_party/python/pytest/_pytest/cacheprovider.py (executable file → normal file)

@ -1,10 +1,10 @@
"""
merged implementation of the cache provider
the name cache was not choosen to ensure pluggy automatically
the name cache was not chosen to ensure pluggy automatically
ignores the external pytest-cache
"""
from __future__ import absolute_import, division, print_function
import py
import pytest
import json
@ -139,11 +139,11 @@ class LFPlugin:
# running a subset of all tests with recorded failures outside
# of the set of tests currently executing
pass
elif self.config.getvalue("failedfirst"):
items[:] = previously_failed + previously_passed
else:
elif self.config.getvalue("lf"):
items[:] = previously_failed
config.hook.pytest_deselected(items=previously_passed)
else:
items[:] = previously_failed + previously_passed
def pytest_sessionfinish(self, session):
config = self.config
@ -219,7 +219,7 @@ def cacheshow(config, session):
basedir = config.cache._cachedir
vdir = basedir.join("v")
tw.sep("-", "cache values")
for valpath in vdir.visit(lambda x: x.isfile()):
for valpath in sorted(vdir.visit(lambda x: x.isfile())):
key = valpath.relto(vdir).replace(valpath.sep, "/")
val = config.cache.get(key, dummy)
if val is dummy:
@ -235,7 +235,7 @@ def cacheshow(config, session):
ddir = basedir.join("d")
if ddir.isdir() and ddir.listdir():
tw.sep("-", "cache directories")
for p in basedir.join("d").visit():
for p in sorted(basedir.join("d").visit()):
#if p.check(dir=1):
# print("%s/" % p.relto(basedir))
if p.isfile():

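For context, a sketch of how the reordered branches above map to the existing command-line flags; the invocations are illustrative, not taken from the patch:

    #   pytest --lf    # "lf" branch: run only the previously failed tests,
    #                  # the previously passing ones are deselected
    #   pytest --ff    # "failedfirst" branch: run previous failures first,
    #                  # then the remaining tests
    import pytest
    pytest.main(['--lf'])   # programmatic equivalent of the first form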
third_party/python/pytest/_pytest/capture.py

@ -2,16 +2,19 @@
per-test stdout/stderr capturing mechanism.
"""
from __future__ import with_statement
from __future__ import absolute_import, division, print_function
import contextlib
import sys
import os
import io
from io import UnsupportedOperation
from tempfile import TemporaryFile
import py
import pytest
from _pytest.compat import CaptureIO
from py.io import TextIO
unicode = py.builtin.text
patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'}
@ -31,8 +34,10 @@ def pytest_addoption(parser):
@pytest.hookimpl(hookwrapper=True)
def pytest_load_initial_conftests(early_config, parser, args):
_readline_workaround()
ns = early_config.known_args_namespace
if ns.capture == "fd":
_py36_windowsconsoleio_workaround()
_readline_workaround()
pluginmanager = early_config.pluginmanager
capman = CaptureManager(ns.capture)
pluginmanager.register(capman, "capturemanager")
@ -146,46 +151,48 @@ class CaptureManager:
def pytest_internalerror(self, excinfo):
self.reset_capturings()
def suspendcapture_item(self, item, when):
out, err = self.suspendcapture()
def suspendcapture_item(self, item, when, in_=False):
out, err = self.suspendcapture(in_=in_)
item.add_report_section(when, "stdout", out)
item.add_report_section(when, "stderr", err)
error_capsysfderror = "cannot use capsys and capfd at the same time"
@pytest.fixture
def capsys(request):
"""enables capturing of writes to sys.stdout/sys.stderr and makes
"""Enable capturing of writes to sys.stdout/sys.stderr and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple.
"""
if "capfd" in request._funcargs:
if "capfd" in request.fixturenames:
raise request.raiseerror(error_capsysfderror)
request.node._capfuncarg = c = CaptureFixture(SysCapture)
request.node._capfuncarg = c = CaptureFixture(SysCapture, request)
return c
@pytest.fixture
def capfd(request):
"""enables capturing of writes to file descriptors 1 and 2 and makes
"""Enable capturing of writes to file descriptors 1 and 2 and make
captured output available via ``capfd.readouterr()`` method calls
which return a ``(out, err)`` tuple.
"""
if "capsys" in request._funcargs:
if "capsys" in request.fixturenames:
request.raiseerror(error_capsysfderror)
if not hasattr(os, 'dup'):
pytest.skip("capfd funcarg needs os.dup")
request.node._capfuncarg = c = CaptureFixture(FDCapture)
request.node._capfuncarg = c = CaptureFixture(FDCapture, request)
return c
class CaptureFixture:
def __init__(self, captureclass):
def __init__(self, captureclass, request):
self.captureclass = captureclass
self.request = request
def _start(self):
self._capture = MultiCapture(out=True, err=True, in_=False,
Capture=self.captureclass)
Capture=self.captureclass)
self._capture.start_capturing()
def close(self):
@ -200,6 +207,15 @@ class CaptureFixture:
except AttributeError:
return self._outerr
@contextlib.contextmanager
def disabled(self):
capmanager = self.request.config.pluginmanager.getplugin('capturemanager')
capmanager.suspendcapture_item(self.request.node, "call", in_=True)
try:
yield
finally:
capmanager.resumecapture()
def safe_text_dupfile(f, mode, default_encoding="UTF8"):
""" return a open text file object that's a duplicate of f on the
@ -390,7 +406,7 @@ class SysCapture:
if name == "stdin":
tmpfile = DontReadFromInput()
else:
tmpfile = TextIO()
tmpfile = CaptureIO()
self.tmpfile = tmpfile
def start(self):
@ -436,7 +452,8 @@ class DontReadFromInput:
__iter__ = read
def fileno(self):
raise ValueError("redirected Stdin is pseudofile, has no fileno()")
raise UnsupportedOperation("redirected stdin is pseudofile, "
"has no fileno()")
def isatty(self):
return False
@ -444,6 +461,13 @@ class DontReadFromInput:
def close(self):
pass
@property
def buffer(self):
if sys.version_info >= (3,0):
return self
else:
raise AttributeError('redirected stdin has no attribute buffer')
def _readline_workaround():
"""
@ -452,7 +476,7 @@ def _readline_workaround():
Pdb uses readline support where available--when not running from the Python
prompt, the readline module is not imported until running the pdb REPL. If
running py.test with the --pdb option this means the readline module is not
running pytest with the --pdb option this means the readline module is not
imported until after I/O capture has been started.
This is a problem for pyreadline, which is often used to implement readline
@ -470,3 +494,49 @@ def _readline_workaround():
import readline # noqa
except ImportError:
pass
def _py36_windowsconsoleio_workaround():
"""
Python 3.6 implemented unicode console handling for Windows. This works
by reading/writing to the raw console handle using
``{Read,Write}ConsoleW``.
The problem is that we are going to ``dup2`` over the stdio file
descriptors when doing ``FDCapture`` and this will ``CloseHandle`` the
handles used by Python to write to the console. Though there is still some
weirdness and the console handle seems to only be closed randomly and not
on the first call to ``CloseHandle``, or maybe it gets reopened with the
same handle value when we suspend capturing.
The workaround in this case will reopen stdio with a different fd which
also means a different handle by replicating the logic in
"Py_lifecycle.c:initstdio/create_stdio".
See https://github.com/pytest-dev/py/issues/103
"""
if not sys.platform.startswith('win32') or sys.version_info[:2] < (3, 6):
return
buffered = hasattr(sys.stdout.buffer, 'raw')
raw_stdout = sys.stdout.buffer.raw if buffered else sys.stdout.buffer
if not isinstance(raw_stdout, io._WindowsConsoleIO):
return
def _reopen_stdio(f, mode):
if not buffered and mode[0] == 'w':
buffering = 0
else:
buffering = -1
return io.TextIOWrapper(
open(os.dup(f.fileno()), mode, buffering),
f.encoding,
f.errors,
f.newlines,
f.line_buffering)
sys.__stdin__ = sys.stdin = _reopen_stdio(sys.stdin, 'rb')
sys.__stdout__ = sys.stdout = _reopen_stdio(sys.stdout, 'wb')
sys.__stderr__ = sys.stderr = _reopen_stdio(sys.stderr, 'wb')

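A short usage sketch for the CaptureFixture.disabled() context manager added above; the test body is illustrative:

    def test_disabling_capturing(capsys):
        print('this output is captured')
        with capsys.disabled():
            print('output not captured, going directly to sys.stdout')
        print('this output is also captured')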
third_party/python/pytest/_pytest/compat.py (new file)

@ -0,0 +1,307 @@
"""
python version compatibility code
"""
from __future__ import absolute_import, division, print_function
import sys
import inspect
import types
import re
import functools
import py
import _pytest
try:
import enum
except ImportError: # pragma: no cover
# Only available in Python 3.4+ or as a backport
enum = None
_PY3 = sys.version_info > (3, 0)
_PY2 = not _PY3
NoneType = type(None)
NOTSET = object()
PY35 = sys.version_info[:2] >= (3, 5)
PY36 = sys.version_info[:2] >= (3, 6)
MODULE_NOT_FOUND_ERROR = 'ModuleNotFoundError' if PY36 else 'ImportError'
if hasattr(inspect, 'signature'):
def _format_args(func):
return str(inspect.signature(func))
else:
def _format_args(func):
return inspect.formatargspec(*inspect.getargspec(func))
isfunction = inspect.isfunction
isclass = inspect.isclass
# used to work around a python2 exception info leak
exc_clear = getattr(sys, 'exc_clear', lambda: None)
# The type of re.compile objects is not exposed in Python.
REGEX_TYPE = type(re.compile(''))
def is_generator(func):
genfunc = inspect.isgeneratorfunction(func)
return genfunc and not iscoroutinefunction(func)
def iscoroutinefunction(func):
"""Return True if func is a decorated coroutine function.
Note: copied and modified from Python 3.5's builtin couroutines.py to avoid import asyncio directly,
which in turns also initializes the "logging" module as side-effect (see issue #8).
"""
return (getattr(func, '_is_coroutine', False) or
(hasattr(inspect, 'iscoroutinefunction') and inspect.iscoroutinefunction(func)))
def getlocation(function, curdir):
import inspect
fn = py.path.local(inspect.getfile(function))
lineno = py.builtin._getcode(function).co_firstlineno
if fn.relto(curdir):
fn = fn.relto(curdir)
return "%s:%d" %(fn, lineno+1)
def num_mock_patch_args(function):
""" return number of arguments used up by mock arguments (if any) """
patchings = getattr(function, "patchings", None)
if not patchings:
return 0
mock = sys.modules.get("mock", sys.modules.get("unittest.mock", None))
if mock is not None:
return len([p for p in patchings
if not p.attribute_name and p.new is mock.DEFAULT])
return len(patchings)
def getfuncargnames(function, startindex=None):
# XXX merge with main.py's varnames
#assert not isclass(function)
realfunction = function
while hasattr(realfunction, "__wrapped__"):
realfunction = realfunction.__wrapped__
if startindex is None:
startindex = inspect.ismethod(function) and 1 or 0
if realfunction != function:
startindex += num_mock_patch_args(function)
function = realfunction
if isinstance(function, functools.partial):
argnames = inspect.getargs(_pytest._code.getrawcode(function.func))[0]
partial = function
argnames = argnames[len(partial.args):]
if partial.keywords:
for kw in partial.keywords:
argnames.remove(kw)
else:
argnames = inspect.getargs(_pytest._code.getrawcode(function))[0]
defaults = getattr(function, 'func_defaults',
getattr(function, '__defaults__', None)) or ()
numdefaults = len(defaults)
if numdefaults:
return tuple(argnames[startindex:-numdefaults])
return tuple(argnames[startindex:])
if sys.version_info[:2] == (2, 6):
def isclass(object):
""" Return true if the object is a class. Overrides inspect.isclass for
python 2.6 because it will return True for objects which always return
something on __getattr__ calls (see #1035).
Backport of https://hg.python.org/cpython/rev/35bf8f7a8edc
"""
return isinstance(object, (type, types.ClassType))
if _PY3:
import codecs
imap = map
STRING_TYPES = bytes, str
UNICODE_TYPES = str,
def _escape_strings(val):
"""If val is pure ascii, returns it as a str(). Otherwise, escapes
bytes objects into a sequence of escaped bytes:
b'\xc3\xb4\xc5\xd6' -> u'\\xc3\\xb4\\xc5\\xd6'
and escapes unicode objects into a sequence of escaped unicode
ids, e.g.:
'4\\nV\\U00043efa\\x0eMXWB\\x1e\\u3028\\u15fd\\xcd\\U0007d944'
note:
the obvious "v.decode('unicode-escape')" will return
valid utf-8 unicode if it finds them in bytes, but we
want to return escaped bytes for any byte, even if they match
a utf-8 string.
"""
if isinstance(val, bytes):
if val:
# source: http://goo.gl/bGsnwC
encoded_bytes, _ = codecs.escape_encode(val)
return encoded_bytes.decode('ascii')
else:
# empty bytes crashes codecs.escape_encode (#1087)
return ''
else:
return val.encode('unicode_escape').decode('ascii')
else:
STRING_TYPES = bytes, str, unicode
UNICODE_TYPES = unicode,
from itertools import imap # NOQA
def _escape_strings(val):
"""In py2 bytes and str are the same type, so return if it's a bytes
object, return it unchanged if it is a full ascii string,
otherwise escape it into its binary form.
If it's a unicode string, change the unicode characters into
unicode escapes.
"""
if isinstance(val, bytes):
try:
return val.encode('ascii')
except UnicodeDecodeError:
return val.encode('string-escape')
else:
return val.encode('unicode-escape')
def get_real_func(obj):
""" gets the real function object of the (possibly) wrapped object by
functools.wraps or functools.partial.
"""
start_obj = obj
for i in range(100):
new_obj = getattr(obj, '__wrapped__', None)
if new_obj is None:
break
obj = new_obj
else:
raise ValueError(
("could not find real function of {start}"
"\nstopped at {current}").format(
start=py.io.saferepr(start_obj),
current=py.io.saferepr(obj)))
if isinstance(obj, functools.partial):
obj = obj.func
return obj
def getfslineno(obj):
# xxx let decorators etc specify a sane ordering
obj = get_real_func(obj)
if hasattr(obj, 'place_as'):
obj = obj.place_as
fslineno = _pytest._code.getfslineno(obj)
assert isinstance(fslineno[1], int), obj
return fslineno
def getimfunc(func):
try:
return func.__func__
except AttributeError:
try:
return func.im_func
except AttributeError:
return func
def safe_getattr(object, name, default):
""" Like getattr but return default upon any Exception.
Attribute access can potentially fail for 'evil' Python objects.
See issue #214.
"""
try:
return getattr(object, name, default)
except Exception:
return default
def _is_unittest_unexpected_success_a_failure():
"""Return if the test suite should fail if a @expectedFailure unittest test PASSES.
From https://docs.python.org/3/library/unittest.html?highlight=unittest#unittest.TestResult.wasSuccessful:
Changed in version 3.4: Returns False if there were any
unexpectedSuccesses from tests marked with the expectedFailure() decorator.
"""
return sys.version_info >= (3, 4)
if _PY3:
def safe_str(v):
"""returns v as string"""
return str(v)
else:
def safe_str(v):
"""returns v as string, converting to ascii if necessary"""
try:
return str(v)
except UnicodeError:
if not isinstance(v, unicode):
v = unicode(v)
errors = 'replace'
return v.encode('utf-8', errors)
COLLECT_FAKEMODULE_ATTRIBUTES = (
'Collector',
'Module',
'Generator',
'Function',
'Instance',
'Session',
'Item',
'Class',
'File',
'_fillfuncargs',
)
def _setup_collect_fakemodule():
from types import ModuleType
import pytest
pytest.collect = ModuleType('pytest.collect')
pytest.collect.__all__ = [] # used for setns
for attr in COLLECT_FAKEMODULE_ATTRIBUTES:
setattr(pytest.collect, attr, getattr(pytest, attr))
if _PY2:
from py.io import TextIO as CaptureIO
else:
import io
class CaptureIO(io.TextIOWrapper):
def __init__(self):
super(CaptureIO, self).__init__(
io.BytesIO(),
encoding='UTF-8', newline='', write_through=True,
)
def getvalue(self):
return self.buffer.getvalue().decode('UTF-8')
class FuncargnamesCompatAttr(object):
""" helper class so that Metafunc, Function and FixtureRequest
don't need to each define the "funcargnames" compatibility attribute.
"""
@property
def funcargnames(self):
""" alias attribute for ``fixturenames`` for pre-2.3 compatibility"""
return self.fixturenames

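A minimal sketch of what get_real_func (defined in this module) unwraps; the decorator and target function are assumptions for illustration, and the __wrapped__ bookkeeping shown in the comment applies on Python 3:

    import functools
    from _pytest.compat import get_real_func

    def deco(func):
        @functools.wraps(func)      # on Python 3 this records func as wrapper.__wrapped__
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper

    @deco
    def target():
        pass

    # get_real_func follows the __wrapped__ chain back to the undecorated function
    assert get_real_func(target) is target.__wrapped__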
third_party/python/pytest/_pytest/config.py

@ -1,4 +1,5 @@
""" command line options, ini-file and conftest.py processing. """
from __future__ import absolute_import, division, print_function
import argparse
import shlex
import traceback
@ -7,10 +8,13 @@ import warnings
import py
# DON't import pytest here because it causes import cycle troubles
import sys, os
import sys
import os
import _pytest._code
import _pytest.hookspec # the extension point definitions
import _pytest.assertion
from _pytest._pluggy import PluginManager, HookimplMarker, HookspecMarker
from _pytest.compat import safe_str
hookimpl = HookimplMarker("pytest")
hookspec = HookspecMarker("pytest")
@ -25,6 +29,12 @@ class ConftestImportFailure(Exception):
self.path = path
self.excinfo = excinfo
def __str__(self):
etype, evalue, etb = self.excinfo
formatted = traceback.format_tb(etb)
# The level of the tracebacks we want to print is hand crafted :(
return repr(evalue) + '\n' + ''.join(formatted[2:])
def main(args=None, plugins=None):
""" return exit code, after performing an in-process test run.
@ -45,7 +55,6 @@ def main(args=None, plugins=None):
return 4
else:
try:
config.pluginmanager.check_pending()
return config.hook.pytest_cmdline_main(config=config)
finally:
config._ensure_unconfigure()
@ -57,15 +66,47 @@ def main(args=None, plugins=None):
class cmdline: # compatibility namespace
main = staticmethod(main)
class UsageError(Exception):
""" error in pytest usage or invocation"""
class PrintHelp(Exception):
"""Raised when pytest should print it's help to skip the rest of the
argument parsing and validation."""
pass
def filename_arg(path, optname):
""" Argparse type validator for filename arguments.
:path: path of filename
:optname: name of the option
"""
if os.path.isdir(path):
raise UsageError("{0} must be a filename, given: {1}".format(optname, path))
return path
def directory_arg(path, optname):
"""Argparse type validator for directory arguments.
:path: path of directory
:optname: name of the option
"""
if not os.path.isdir(path):
raise UsageError("{0} must be a directory, given: {1}".format(optname, path))
return path
_preinit = []
default_plugins = (
"mark main terminal runner python pdb unittest capture skipping "
"tmpdir monkeypatch recwarn pastebin helpconfig nose assertion genscript "
"junitxml resultlog doctest cacheprovider").split()
"mark main terminal runner python fixtures debugging unittest capture skipping "
"tmpdir monkeypatch recwarn pastebin helpconfig nose assertion "
"junitxml resultlog doctest cacheprovider freeze_support "
"setuponly setupplan warnings").split()
builtin_plugins = set(default_plugins)
builtin_plugins.add("pytester")
@ -97,6 +138,7 @@ def get_plugin_manager():
return get_config().pluginmanager
def _prepareconfig(args=None, plugins=None):
warning = None
if args is None:
args = sys.argv[1:]
elif isinstance(args, py.path.local):
@ -105,6 +147,8 @@ def _prepareconfig(args=None, plugins=None):
if not isinstance(args, str):
raise ValueError("not a string or argument list: %r" % (args,))
args = shlex.split(args, posix=sys.platform != "win32")
from _pytest import deprecated
warning = deprecated.MAIN_STR_ARGS
config = get_config()
pluginmanager = config.pluginmanager
try:
@ -114,6 +158,8 @@ def _prepareconfig(args=None, plugins=None):
pluginmanager.consider_pluginarg(plugin)
else:
pluginmanager.register(plugin)
if warning:
config.warn('C1', warning)
return pluginmanager.hook.pytest_cmdline_parse(
pluginmanager=pluginmanager, args=args)
except BaseException:
@ -123,7 +169,7 @@ def _prepareconfig(args=None, plugins=None):
class PytestPluginManager(PluginManager):
"""
Overwrites :py:class:`pluggy.PluginManager` to add pytest-specific
Overwrites :py:class:`pluggy.PluginManager <_pytest.vendored_packages.pluggy.PluginManager>` to add pytest-specific
functionality:
* loading plugins from the command line, ``PYTEST_PLUGIN`` env variable and
@ -139,6 +185,7 @@ class PytestPluginManager(PluginManager):
self._conftestpath2mod = {}
self._confcutdir = None
self._noconftest = False
self._duplicatepaths = set()
self.add_hookspecs(_pytest.hookspec)
self.register(self)
@ -152,11 +199,14 @@ class PytestPluginManager(PluginManager):
self.trace.root.setwriter(err.write)
self.enable_tracing()
# Config._consider_importhook will set a real object if required.
self.rewrite_hook = _pytest.assertion.DummyRewriteHook()
def addhooks(self, module_or_class):
"""
.. deprecated:: 2.8
Use :py:meth:`pluggy.PluginManager.add_hookspecs` instead.
Use :py:meth:`pluggy.PluginManager.add_hookspecs <_pytest.vendored_packages.pluggy.PluginManager.add_hookspecs>` instead.
"""
warning = dict(code="I2",
fslocation=_pytest._code.getfslineno(sys._getframe(1)),
@ -209,6 +259,9 @@ class PytestPluginManager(PluginManager):
if ret:
self.hook.pytest_plugin_registered.call_historic(
kwargs=dict(plugin=plugin, manager=self))
if isinstance(plugin, types.ModuleType):
self.consider_module(plugin)
return ret
def getplugin(self, name):
@ -353,38 +406,37 @@ class PytestPluginManager(PluginManager):
self.import_plugin(arg)
def consider_conftest(self, conftestmodule):
if self.register(conftestmodule, name=conftestmodule.__file__):
self.consider_module(conftestmodule)
self.register(conftestmodule, name=conftestmodule.__file__)
def consider_env(self):
self._import_plugin_specs(os.environ.get("PYTEST_PLUGINS"))
def consider_module(self, mod):
self._import_plugin_specs(getattr(mod, "pytest_plugins", None))
self._import_plugin_specs(getattr(mod, 'pytest_plugins', []))
def _import_plugin_specs(self, spec):
if spec:
if isinstance(spec, str):
spec = spec.split(",")
for import_spec in spec:
self.import_plugin(import_spec)
plugins = _get_plugin_specs_as_list(spec)
for import_spec in plugins:
self.import_plugin(import_spec)
def import_plugin(self, modname):
# most often modname refers to builtin modules, e.g. "pytester",
# "terminal" or "capture". Those plugins are registered under their
# basename for historic purposes but must be imported with the
# _pytest prefix.
assert isinstance(modname, str)
assert isinstance(modname, (py.builtin.text, str)), "module name as text required, got %r" % modname
modname = str(modname)
if self.get_plugin(modname) is not None:
return
if modname in builtin_plugins:
importspec = "_pytest." + modname
else:
importspec = modname
self.rewrite_hook.mark_rewrite(importspec)
try:
__import__(importspec)
except ImportError as e:
new_exc = ImportError('Error importing plugin "%s": %s' % (modname, e))
new_exc = ImportError('Error importing plugin "%s": %s' % (modname, safe_str(e.args[0])))
# copy over name and path attributes
for attr in ('name', 'path'):
if hasattr(e, attr):
@ -398,7 +450,24 @@ class PytestPluginManager(PluginManager):
else:
mod = sys.modules[importspec]
self.register(mod, modname)
self.consider_module(mod)
def _get_plugin_specs_as_list(specs):
"""
Parses a list of "plugin specs" and returns a list of plugin names.
Plugin specs can be given as a list of strings separated by "," or already as a list/tuple in
which case it is returned as a list. Specs can also be `None` in which case an
empty list is returned.
"""
if specs is not None:
if isinstance(specs, str):
specs = specs.split(',') if specs else []
if not isinstance(specs, (list, tuple)):
raise UsageError("Plugin specs must be a ','-separated string or a "
"list/tuple of strings for plugin names. Given: %r" % specs)
return list(specs)
return []
class Parser:
@ -537,13 +606,18 @@ class ArgumentError(Exception):
class Argument:
"""class that mimics the necessary behaviour of optparse.Option """
"""class that mimics the necessary behaviour of optparse.Option
its currently a least effort implementation
and ignoring choices and integer prefixes
https://docs.python.org/3/library/optparse.html#optparse-standard-option-types
"""
_typ_map = {
'int': int,
'string': str,
}
# enable after some grace period for plugin writers
TYPE_WARN = False
'float': float,
'complex': complex,
}
def __init__(self, *names, **attrs):
"""store parms in private vars for use in add_argument"""
@ -551,17 +625,12 @@ class Argument:
self._short_opts = []
self._long_opts = []
self.dest = attrs.get('dest')
if self.TYPE_WARN:
try:
help = attrs['help']
if '%default' in help:
warnings.warn(
'pytest now uses argparse. "%default" should be'
' changed to "%(default)s" ',
FutureWarning,
stacklevel=3)
except KeyError:
pass
if '%default' in (attrs.get('help') or ''):
warnings.warn(
'pytest now uses argparse. "%default" should be'
' changed to "%(default)s" ',
DeprecationWarning,
stacklevel=3)
try:
typ = attrs['type']
except KeyError:
@ -570,25 +639,23 @@ class Argument:
# this might raise a keyerror as well, don't want to catch that
if isinstance(typ, py.builtin._basestring):
if typ == 'choice':
if self.TYPE_WARN:
warnings.warn(
'type argument to addoption() is a string %r.'
' For parsearg this is optional and when supplied '
' should be a type.'
' (options: %s)' % (typ, names),
FutureWarning,
stacklevel=3)
warnings.warn(
'type argument to addoption() is a string %r.'
' For parsearg this is optional and when supplied'
' should be a type.'
' (options: %s)' % (typ, names),
DeprecationWarning,
stacklevel=3)
# argparse expects a type here take it from
# the type of the first element
attrs['type'] = type(attrs['choices'][0])
else:
if self.TYPE_WARN:
warnings.warn(
'type argument to addoption() is a string %r.'
' For parsearg this should be a type.'
' (options: %s)' % (typ, names),
FutureWarning,
stacklevel=3)
warnings.warn(
'type argument to addoption() is a string %r.'
' For parsearg this should be a type.'
' (options: %s)' % (typ, names),
DeprecationWarning,
stacklevel=3)
attrs['type'] = Argument._typ_map[typ]
# used in test_parseopt -> test_parse_defaultgetter
self.type = attrs['type']
@ -655,20 +722,17 @@ class Argument:
self._long_opts.append(opt)
def __repr__(self):
retval = 'Argument('
args = []
if self._short_opts:
retval += '_short_opts: ' + repr(self._short_opts) + ', '
args += ['_short_opts: ' + repr(self._short_opts)]
if self._long_opts:
retval += '_long_opts: ' + repr(self._long_opts) + ', '
retval += 'dest: ' + repr(self.dest) + ', '
args += ['_long_opts: ' + repr(self._long_opts)]
args += ['dest: ' + repr(self.dest)]
if hasattr(self, 'type'):
retval += 'type: ' + repr(self.type) + ', '
args += ['type: ' + repr(self.type)]
if hasattr(self, 'default'):
retval += 'default: ' + repr(self.default) + ', '
if retval[-2:] == ', ': # always long enough to test ("Argument(" )
retval = retval[:-2]
retval += ')'
return retval
args += ['default: ' + repr(self.default)]
return 'Argument({0})'.format(', '.join(args))
class OptionGroup:
@ -686,6 +750,10 @@ class OptionGroup:
results in help showing '--two-words' only, but --twowords gets
accepted **and** the automatic destination is in args.twowords
"""
conflict = set(optnames).intersection(
name for opt in self.options for name in opt.names())
if conflict:
raise ValueError("option names %s already added" % conflict)
option = Argument(*optnames, **attrs)
self._addoption_instance(option, shortupper=False)
@ -772,7 +840,7 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
if len(option) == 2 or option[2] == ' ':
return_list.append(option)
if option[2:] == short_long.get(option.replace('-', '')):
return_list.append(option.replace(' ', '='))
return_list.append(option.replace(' ', '=', 1))
action._formatted_action_invocation = ', '.join(return_list)
return action._formatted_action_invocation
@ -797,9 +865,11 @@ class Notset:
def __repr__(self):
return "<NOTSET>"
notset = Notset()
FILE_OR_DIR = 'file_or_dir'
class Config(object):
""" access to configuration values, pluginmanager and plugin hooks. """
@ -817,14 +887,17 @@ class Config(object):
self.trace = self.pluginmanager.trace.root.get("config")
self.hook = self.pluginmanager.hook
self._inicache = {}
self._override_ini = ()
self._opt2dest = {}
self._cleanup = []
self._warn = self.pluginmanager._warn
self.pluginmanager.register(self, "pytestconfig")
self._configured = False
def do_setns(dic):
import pytest
setns(pytest, dic)
self.hook.pytest_namespace.call_historic(do_setns, {})
self.hook.pytest_addoption.call_historic(kwargs=dict(parser=self._parser))
@ -847,11 +920,11 @@ class Config(object):
fin = self._cleanup.pop()
fin()
def warn(self, code, message, fslocation=None):
def warn(self, code, message, fslocation=None, nodeid=None):
""" generate a warning for this test session. """
self.hook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
fslocation=fslocation, nodeid=None))
fslocation=fslocation, nodeid=nodeid))
def get_terminal_writer(self):
return self.pluginmanager.get_plugin("terminalreporter")._tw
@ -908,13 +981,81 @@ class Config(object):
def _initini(self, args):
ns, unknown_args = self._parser.parse_known_and_unknown_args(args, namespace=self.option.copy())
r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args)
r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args, warnfunc=self.warn)
self.rootdir, self.inifile, self.inicfg = r
self._parser.extra_info['rootdir'] = self.rootdir
self._parser.extra_info['inifile'] = self.inifile
self.invocation_dir = py.path.local()
self._parser.addini('addopts', 'extra command line options', 'args')
self._parser.addini('minversion', 'minimally required pytest version')
self._override_ini = ns.override_ini or ()
def _consider_importhook(self, args):
"""Install the PEP 302 import hook if using assertion re-writing.
Needs to parse the --assert=<mode> option from the commandline
and find all the installed plugins to mark them for re-writing
by the importhook.
"""
ns, unknown_args = self._parser.parse_known_and_unknown_args(args)
mode = ns.assertmode
if mode == 'rewrite':
try:
hook = _pytest.assertion.install_importhook(self)
except SystemError:
mode = 'plain'
else:
self._mark_plugins_for_rewrite(hook)
self._warn_about_missing_assertion(mode)
def _mark_plugins_for_rewrite(self, hook):
"""
Given an importhook, mark for rewrite any top-level
modules or packages in the distribution package for
all pytest plugins.
"""
import pkg_resources
self.pluginmanager.rewrite_hook = hook
# 'RECORD' available for plugins installed normally (pip install)
# 'SOURCES.txt' available for plugins installed in dev mode (pip install -e)
# for installed plugins 'SOURCES.txt' returns an empty list, and vice-versa
# so it shouldn't be an issue
metadata_files = 'RECORD', 'SOURCES.txt'
package_files = (
entry.split(',')[0]
for entrypoint in pkg_resources.iter_entry_points('pytest11')
for metadata in metadata_files
for entry in entrypoint.dist._get_metadata(metadata)
)
for fn in package_files:
is_simple_module = os.sep not in fn and fn.endswith('.py')
is_package = fn.count(os.sep) == 1 and fn.endswith('__init__.py')
if is_simple_module:
module_name, ext = os.path.splitext(fn)
hook.mark_rewrite(module_name)
elif is_package:
package_name = os.path.dirname(fn)
hook.mark_rewrite(package_name)
def _warn_about_missing_assertion(self, mode):
try:
assert False
except AssertionError:
pass
else:
if mode == 'plain':
sys.stderr.write("WARNING: ASSERTIONS ARE NOT EXECUTED"
" and FAILING TESTS WILL PASS. Are you"
" using python -O?")
else:
sys.stderr.write("WARNING: assertions not in test modules or"
" plugins will be ignored"
" because assert statements are not executed "
"by the underlying Python interpreter "
"(are you using python -O?)\n")
def _preparse(self, args, addopts=True):
self._initini(args)
@ -922,13 +1063,12 @@ class Config(object):
args[:] = shlex.split(os.environ.get('PYTEST_ADDOPTS', '')) + args
args[:] = self.getini("addopts") + args
self._checkversion()
self._consider_importhook(args)
self.pluginmanager.consider_preparse(args)
try:
self.pluginmanager.load_setuptools_entrypoints("pytest11")
except ImportError as e:
self.warn("I2", "could not load setuptools entry import: %s" % (e,))
self.pluginmanager.load_setuptools_entrypoints('pytest11')
self.pluginmanager.consider_env()
self.known_args_namespace = ns = self._parser.parse_known_args(args, namespace=self.option.copy())
confcutdir = self.known_args_namespace.confcutdir
if self.known_args_namespace.confcutdir is None and self.inifile:
confcutdir = py.path.local(self.inifile).dirname
self.known_args_namespace.confcutdir = confcutdir
@ -966,14 +1106,18 @@ class Config(object):
self._preparse(args, addopts=addopts)
# XXX deprecated hook:
self.hook.pytest_cmdline_preparse(config=self, args=args)
args = self._parser.parse_setoption(args, self.option, namespace=self.option)
if not args:
cwd = os.getcwd()
if cwd == self.rootdir:
args = self.getini('testpaths')
self._parser.after_preparse = True
try:
args = self._parser.parse_setoption(args, self.option, namespace=self.option)
if not args:
args = [cwd]
self.args = args
cwd = os.getcwd()
if cwd == self.rootdir:
args = self.getini('testpaths')
if not args:
args = [cwd]
self.args = args
except PrintHelp:
pass
def addinivalue_line(self, name, line):
""" add a line to an ini-file option. The option must have been
@ -986,7 +1130,7 @@ class Config(object):
def getini(self, name):
""" return configuration value from an :ref:`ini file <inifiles>`. If the
specified name hasn't been registered through a prior
:py:func:`parser.addini <pytest.config.Parser.addini>`
:py:func:`parser.addini <_pytest.config.Parser.addini>`
call (usually from a plugin), a ValueError is raised. """
try:
return self._inicache[name]
@ -999,14 +1143,16 @@ class Config(object):
description, type, default = self._parser._inidict[name]
except KeyError:
raise ValueError("unknown configuration value: %r" %(name,))
try:
value = self.inicfg[name]
except KeyError:
if default is not None:
return default
if type is None:
return ''
return []
value = self._get_override_ini_value(name)
if value is None:
try:
value = self.inicfg[name]
except KeyError:
if default is not None:
return default
if type is None:
return ''
return []
if type == "pathlist":
dp = py.path.local(self.inicfg.config.path).dirpath()
l = []
@ -1037,6 +1183,22 @@ class Config(object):
l.append(relroot)
return l
def _get_override_ini_value(self, name):
value = None
# override_ini is a list of list, to support both -o foo1=bar1 foo2=bar2 and
# and -o foo1=bar1 -o foo2=bar2 options
# always use the last item if multiple value set for same ini-name,
# e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2
for ini_config_list in self._override_ini:
for ini_config in ini_config_list:
try:
(key, user_ini_value) = ini_config.split("=", 1)
except ValueError:
raise UsageError("-o/--override-ini expects option=value style.")
if key == name:
value = user_ini_value
return value
def getoption(self, name, default=notset, skip=False):
""" return command line option value.
@ -1074,7 +1236,18 @@ def exists(path, ignore=EnvironmentError):
except ignore:
return False
def getcfg(args, inibasenames):
def getcfg(args, warnfunc=None):
"""
Search the list of arguments for a valid ini-file for pytest,
and return a tuple of (rootdir, inifile, cfg-dict).
note: warnfunc is an optional function used to warn
about ini-files that use deprecated features.
This parameter should be removed when pytest
adopts standard deprecation warnings (#1804).
"""
from _pytest.deprecated import SETUP_CFG_PYTEST
inibasenames = ["pytest.ini", "tox.ini", "setup.cfg"]
args = [x for x in args if not str(x).startswith("-")]
if not args:
args = [py.path.local()]
@ -1086,57 +1259,89 @@ def getcfg(args, inibasenames):
if exists(p):
iniconfig = py.iniconfig.IniConfig(p)
if 'pytest' in iniconfig.sections:
if inibasename == 'setup.cfg' and warnfunc:
warnfunc('C1', SETUP_CFG_PYTEST)
return base, p, iniconfig['pytest']
if inibasename == 'setup.cfg' and 'tool:pytest' in iniconfig.sections:
return base, p, iniconfig['tool:pytest']
elif inibasename == "pytest.ini":
# allowed to be empty
return base, p, {}
return None, None, None
def get_common_ancestor(args):
# args are what we get after early command line parsing (usually
# strings, but can be py.path.local objects as well)
def get_common_ancestor(paths):
common_ancestor = None
for arg in args:
if str(arg)[0] == "-":
for path in paths:
if not path.exists():
continue
p = py.path.local(arg)
if common_ancestor is None:
common_ancestor = p
common_ancestor = path
else:
if p.relto(common_ancestor) or p == common_ancestor:
if path.relto(common_ancestor) or path == common_ancestor:
continue
elif common_ancestor.relto(p):
common_ancestor = p
elif common_ancestor.relto(path):
common_ancestor = path
else:
shared = p.common(common_ancestor)
shared = path.common(common_ancestor)
if shared is not None:
common_ancestor = shared
if common_ancestor is None:
common_ancestor = py.path.local()
elif not common_ancestor.isdir():
elif common_ancestor.isfile():
common_ancestor = common_ancestor.dirpath()
return common_ancestor
def determine_setup(inifile, args):
def get_dirs_from_args(args):
def is_option(x):
return str(x).startswith('-')
def get_file_part_from_node_id(x):
return str(x).split('::')[0]
def get_dir_from_path(path):
if path.isdir():
return path
return py.path.local(path.dirname)
# These look like paths but may not exist
possible_paths = (
py.path.local(get_file_part_from_node_id(arg))
for arg in args
if not is_option(arg)
)
return [
get_dir_from_path(path)
for path in possible_paths
if path.exists()
]
def determine_setup(inifile, args, warnfunc=None):
dirs = get_dirs_from_args(args)
if inifile:
iniconfig = py.iniconfig.IniConfig(inifile)
try:
inicfg = iniconfig["pytest"]
except KeyError:
inicfg = None
rootdir = get_common_ancestor(args)
rootdir = get_common_ancestor(dirs)
else:
ancestor = get_common_ancestor(args)
rootdir, inifile, inicfg = getcfg(
[ancestor], ["pytest.ini", "tox.ini", "setup.cfg"])
ancestor = get_common_ancestor(dirs)
rootdir, inifile, inicfg = getcfg([ancestor], warnfunc=warnfunc)
if rootdir is None:
for rootdir in ancestor.parts(reverse=True):
if rootdir.join("setup.py").exists():
break
else:
rootdir = ancestor
rootdir, inifile, inicfg = getcfg(dirs, warnfunc=warnfunc)
if rootdir is None:
rootdir = get_common_ancestor([py.path.local(), ancestor])
is_fs_root = os.path.splitdrive(str(rootdir))[1] == os.sep
if is_fs_root:
rootdir = ancestor
return rootdir, inifile, inicfg or {}

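A small usage sketch for the new -o/--override-ini handling above; the ini name mirrors the example used in the option's own help text:

    import pytest

    # programmatic equivalent of `pytest -o xfail_strict=True`
    pytest.main(['-o', 'xfail_strict=True'])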

@ -1,51 +1,65 @@
""" interactive debugging with PDB, the Python Debugger. """
from __future__ import absolute_import
from __future__ import absolute_import, division, print_function
import pdb
import sys
import pytest
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption('--pdb',
action="store_true", dest="usepdb", default=False,
help="start the interactive Python debugger on errors.")
group._addoption(
'--pdb', dest="usepdb", action="store_true",
help="start the interactive Python debugger on errors.")
group._addoption(
'--pdbcls', dest="usepdb_cls", metavar="modulename:classname",
help="start a custom interactive Python debugger on errors. "
"For example: --pdbcls=IPython.terminal.debugger:TerminalPdb")
def pytest_namespace():
return {'set_trace': pytestPDB().set_trace}
def pytest_configure(config):
if config.getvalue("usepdb_cls"):
modname, classname = config.getvalue("usepdb_cls").split(":")
__import__(modname)
pdb_cls = getattr(sys.modules[modname], classname)
else:
pdb_cls = pdb.Pdb
if config.getvalue("usepdb"):
config.pluginmanager.register(PdbInvoke(), 'pdbinvoke')
old = (pdb.set_trace, pytestPDB._pluginmanager)
def fin():
pdb.set_trace, pytestPDB._pluginmanager = old
pytestPDB._config = None
pdb.set_trace = pytest.set_trace
pytestPDB._pdb_cls = pdb.Pdb
pdb.set_trace = pytestPDB.set_trace
pytestPDB._pluginmanager = config.pluginmanager
pytestPDB._config = config
pytestPDB._pdb_cls = pdb_cls
config._cleanup.append(fin)
class pytestPDB:
""" Pseudo PDB that defers to the real pdb. """
_pluginmanager = None
_config = None
_pdb_cls = pdb.Pdb
def set_trace(self):
@classmethod
def set_trace(cls):
""" invoke PDB set_trace debugging, dropping any IO capturing. """
import _pytest.config
frame = sys._getframe().f_back
if self._pluginmanager is not None:
capman = self._pluginmanager.getplugin("capturemanager")
if cls._pluginmanager is not None:
capman = cls._pluginmanager.getplugin("capturemanager")
if capman:
capman.suspendcapture(in_=True)
tw = _pytest.config.create_terminal_writer(self._config)
tw = _pytest.config.create_terminal_writer(cls._config)
tw.line()
tw.sep(">", "PDB set_trace (IO-capturing turned off)")
self._pluginmanager.hook.pytest_enter_pdb(config=self._config)
pdb.Pdb().set_trace(frame)
cls._pluginmanager.hook.pytest_enter_pdb(config=cls._config)
cls._pdb_cls().set_trace(frame)
class PdbInvoke:
@ -59,7 +73,7 @@ class PdbInvoke:
def pytest_internalerror(self, excrepr, excinfo):
for line in str(excrepr).split("\n"):
sys.stderr.write("INTERNALERROR> %s\n" %line)
sys.stderr.write("INTERNALERROR> %s\n" % line)
sys.stderr.flush()
tb = _postmortem_traceback(excinfo)
post_mortem(tb)
@ -98,7 +112,7 @@ def _find_last_non_hidden_frame(stack):
def post_mortem(t):
class Pdb(pdb.Pdb):
class Pdb(pytestPDB._pdb_cls):
def get_stack(self, f, t):
stack, i = pdb.Pdb.get_stack(self, f, t)
if f is None:

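A brief sketch of the debugger entry points touched above; the test is a placeholder, and the --pdbcls value is the example from the option help (it assumes IPython is installed):

    def test_debug_me():
        import pytest
        pytest.set_trace()   # suspends IO capturing and enters the configured _pdb_cls

    # a custom debugger class can be selected on the command line, e.g.
    #   pytest --pdb --pdbcls=IPython.terminal.debugger:TerminalPdb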
third_party/python/pytest/_pytest/deprecated.py (new file)

@ -0,0 +1,24 @@
"""
This module contains deprecation messages and bits of code used elsewhere in the codebase
that is planned to be removed in the next pytest release.
Keeping it in a central location makes it easy to track what is deprecated and should
be removed when the time comes.
"""
from __future__ import absolute_import, division, print_function
MAIN_STR_ARGS = 'passing a string to pytest.main() is deprecated, ' \
'pass a list of arguments instead.'
YIELD_TESTS = 'yield tests are deprecated, and scheduled to be removed in pytest 4.0'
FUNCARG_PREFIX = (
'{name}: declaring fixtures using "pytest_funcarg__" prefix is deprecated '
'and scheduled to be removed in pytest 4.0. '
'Please remove the prefix and use the @pytest.fixture decorator instead.')
SETUP_CFG_PYTEST = '[pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.'
GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue"
RESULT_LOG = '--result-log is deprecated and scheduled for removal in pytest 4.0'

third_party/python/pytest/_pytest/doctest.py

@ -1,22 +1,41 @@
""" discover and run doctests in modules and test files."""
from __future__ import absolute_import
from __future__ import absolute_import, division, print_function
import traceback
import pytest
from _pytest._code.code import TerminalRepr, ReprFileLocation, ExceptionInfo
from _pytest.python import FixtureRequest
from _pytest._code.code import ExceptionInfo, ReprFileLocation, TerminalRepr
from _pytest.fixtures import FixtureRequest
DOCTEST_REPORT_CHOICE_NONE = 'none'
DOCTEST_REPORT_CHOICE_CDIFF = 'cdiff'
DOCTEST_REPORT_CHOICE_NDIFF = 'ndiff'
DOCTEST_REPORT_CHOICE_UDIFF = 'udiff'
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE = 'only_first_failure'
DOCTEST_REPORT_CHOICES = (
DOCTEST_REPORT_CHOICE_NONE,
DOCTEST_REPORT_CHOICE_CDIFF,
DOCTEST_REPORT_CHOICE_NDIFF,
DOCTEST_REPORT_CHOICE_UDIFF,
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
)
def pytest_addoption(parser):
parser.addini('doctest_optionflags', 'option flags for doctests',
type="args", default=["ELLIPSIS"])
parser.addini("doctest_encoding", 'encoding used for doctest files', default="utf-8")
group = parser.getgroup("collect")
group.addoption("--doctest-modules",
action="store_true", default=False,
help="run doctests in all .py modules",
dest="doctestmodules")
group.addoption("--doctest-report",
type=str.lower, default="udiff",
help="choose another output format for diffs on doctest failure",
choices=DOCTEST_REPORT_CHOICES,
dest="doctestreport")
group.addoption("--doctest-glob",
action="append", default=[], metavar="pat",
help="doctests file matching pattern, default: test*.txt",
@ -59,7 +78,6 @@ class ReprFailDoctest(TerminalRepr):
class DoctestItem(pytest.Item):
def __init__(self, name, parent, runner=None, dtest=None):
super(DoctestItem, self).__init__(name, parent)
self.runner = runner
@ -70,7 +88,9 @@ class DoctestItem(pytest.Item):
def setup(self):
if self.dtest is not None:
self.fixture_request = _setup_fixtures(self)
globs = dict(getfixture=self.fixture_request.getfuncargvalue)
globs = dict(getfixture=self.fixture_request.getfixturevalue)
for name, value in self.fixture_request.getfixturevalue('doctest_namespace').items():
globs[name] = value
self.dtest.globs.update(globs)
def runtest(self):
@ -92,7 +112,7 @@ class DoctestItem(pytest.Item):
message = excinfo.type.__name__
reprlocation = ReprFileLocation(filename, lineno, message)
checker = _get_checker()
REPORT_UDIFF = doctest.REPORT_UDIFF
report_choice = _get_report_choice(self.config.getoption("doctestreport"))
if lineno is not None:
lines = doctestfailure.test.docstring.splitlines(False)
# add line numbers to the left of the error message
@ -108,7 +128,7 @@ class DoctestItem(pytest.Item):
indent = '...'
if excinfo.errisinstance(doctest.DocTestFailure):
lines += checker.output_difference(example,
doctestfailure.got, REPORT_UDIFF).split("\n")
doctestfailure.got, report_choice).split("\n")
else:
inner_excinfo = ExceptionInfo(excinfo.value.exc_info)
lines += ["UNEXPECTED EXCEPTION: %s" %
@ -143,30 +163,29 @@ def get_optionflags(parent):
flag_acc |= flag_lookup_table[flag]
return flag_acc
class DoctestTextfile(pytest.Module):
obj = None
class DoctestTextfile(DoctestItem, pytest.Module):
def runtest(self):
def collect(self):
import doctest
fixture_request = _setup_fixtures(self)
# inspired by doctest.testfile; ideally we would use it directly,
# but it doesn't support passing a custom checker
text = self.fspath.read()
encoding = self.config.getini("doctest_encoding")
text = self.fspath.read_text(encoding)
filename = str(self.fspath)
name = self.fspath.basename
globs = dict(getfixture=fixture_request.getfuncargvalue)
if '__name__' not in globs:
globs['__name__'] = '__main__'
globs = {'__name__': '__main__'}
optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
checker=_get_checker())
_fix_spoof_python2(runner, encoding)
parser = doctest.DocTestParser()
test = parser.get_doctest(text, globs, name, filename, 0)
_check_all_skipped(test)
runner.run(test)
if test.examples:
yield DoctestItem(test.name, self, runner, test)
def _check_all_skipped(test):
@ -197,6 +216,7 @@ class DoctestModule(pytest.Module):
optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
checker=_get_checker())
for test in finder.find(module, module.__name__):
if test.examples: # skip empty doctests
yield DoctestItem(test.name, self, runner, test)
@ -288,3 +308,53 @@ def _get_allow_bytes_flag():
"""
import doctest
return doctest.register_optionflag('ALLOW_BYTES')
def _get_report_choice(key):
"""
This function returns the actual `doctest` module flag value, we want to do it as late as possible to avoid
importing `doctest` and all its dependencies when parsing options, as it adds overhead and breaks tests.
"""
import doctest
return {
DOCTEST_REPORT_CHOICE_UDIFF: doctest.REPORT_UDIFF,
DOCTEST_REPORT_CHOICE_CDIFF: doctest.REPORT_CDIFF,
DOCTEST_REPORT_CHOICE_NDIFF: doctest.REPORT_NDIFF,
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE: doctest.REPORT_ONLY_FIRST_FAILURE,
DOCTEST_REPORT_CHOICE_NONE: 0,
}[key]
def _fix_spoof_python2(runner, encoding):
"""
Installs a "SpoofOut" into the given DebugRunner so it properly deals with unicode output. This
should patch only doctests for text files because they don't have a way to declare their
encoding. Doctests in docstrings from Python modules don't have the same problem given that
Python already decoded the strings.
This fixes the problem related in issue #2434.
"""
from _pytest.compat import _PY2
if not _PY2:
return
from doctest import _SpoofOut
class UnicodeSpoof(_SpoofOut):
def getvalue(self):
result = _SpoofOut.getvalue(self)
if encoding:
result = result.decode(encoding)
return result
runner._fakeout = UnicodeSpoof()
@pytest.fixture(scope='session')
def doctest_namespace():
"""
Inject names into the doctest namespace.
"""
return dict()

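A conftest.py sketch using the doctest_namespace fixture added above; the injected module is an arbitrary choice:

    import sys
    import pytest

    @pytest.fixture(autouse=True)
    def add_sys(doctest_namespace):
        # every collected doctest can now refer to ``sys`` without importing it
        doctest_namespace['sys'] = sys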
third_party/python/pytest/_pytest/fixtures.py (new file)
Diff not shown here because of its size.

third_party/python/pytest/_pytest/freeze_support.py (new file)

@ -0,0 +1,44 @@
"""
Provides a function to report all internal modules for using freezing tools
pytest
"""
from __future__ import absolute_import, division, print_function
def freeze_includes():
"""
Returns a list of module names used by py.test that should be
included by cx_freeze.
"""
import py
import _pytest
result = list(_iter_all_modules(py))
result += list(_iter_all_modules(_pytest))
return result
def _iter_all_modules(package, prefix=''):
"""
Iterates over the names of all modules that can be found in the given
package, recursively.
Example:
_iter_all_modules(_pytest) ->
['_pytest.assertion.newinterpret',
'_pytest.capture',
'_pytest.core',
...
]
"""
import os
import pkgutil
if type(package) is not str:
path, prefix = package.__path__[0], package.__name__ + '.'
else:
path = package
for _, name, is_package in pkgutil.iter_modules([path]):
if is_package:
for m in _iter_all_modules(os.path.join(path, name), prefix=name + '.'):
yield prefix + m
else:
yield prefix + name

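A setup.py sketch showing the intended consumer of freeze_includes() above (cx_Freeze); the project and script names are placeholders:

    from cx_Freeze import setup, Executable
    from pytest import freeze_includes

    setup(
        name='app_main',
        executables=[Executable('app_main.py')],
        options={'build_exe': {'includes': freeze_includes()}},
    )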
third_party/python/pytest/_pytest/genscript.py (deleted)

@ -1,132 +0,0 @@
""" (deprecated) generate a single-file self-contained version of pytest """
import os
import sys
import pkgutil
import py
import _pytest
def find_toplevel(name):
for syspath in sys.path:
base = py.path.local(syspath)
lib = base/name
if lib.check(dir=1):
return lib
mod = base.join("%s.py" % name)
if mod.check(file=1):
return mod
raise LookupError(name)
def pkgname(toplevel, rootpath, path):
parts = path.parts()[len(rootpath.parts()):]
return '.'.join([toplevel] + [x.purebasename for x in parts])
def pkg_to_mapping(name):
toplevel = find_toplevel(name)
name2src = {}
if toplevel.check(file=1): # module
name2src[toplevel.purebasename] = toplevel.read()
else: # package
for pyfile in toplevel.visit('*.py'):
pkg = pkgname(name, toplevel, pyfile)
name2src[pkg] = pyfile.read()
# with wheels py source code might be not be installed
# and the resulting genscript is useless, just bail out.
assert name2src, "no source code found for %r at %r" %(name, toplevel)
return name2src
def compress_mapping(mapping):
import base64, pickle, zlib
data = pickle.dumps(mapping, 2)
data = zlib.compress(data, 9)
data = base64.encodestring(data)
data = data.decode('ascii')
return data
def compress_packages(names):
mapping = {}
for name in names:
mapping.update(pkg_to_mapping(name))
return compress_mapping(mapping)
def generate_script(entry, packages):
data = compress_packages(packages)
tmpl = py.path.local(__file__).dirpath().join('standalonetemplate.py')
exe = tmpl.read()
exe = exe.replace('@SOURCES@', data)
exe = exe.replace('@ENTRY@', entry)
return exe
def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption("--genscript", action="store", default=None,
dest="genscript", metavar="path",
help="create standalone pytest script at given target path.")
def pytest_cmdline_main(config):
import _pytest.config
genscript = config.getvalue("genscript")
if genscript:
tw = _pytest.config.create_terminal_writer(config)
tw.line("WARNING: usage of genscript is deprecated.",
red=True)
deps = ['py', '_pytest', 'pytest'] # pluggy is vendored
if sys.version_info < (2,7):
deps.append("argparse")
tw.line("generated script will run on python2.6-python3.3++")
else:
tw.line("WARNING: generated script will not run on python2.6 "
"due to 'argparse' dependency. Use python2.6 "
"to generate a python2.6 compatible script", red=True)
script = generate_script(
'import pytest; raise SystemExit(pytest.cmdline.main())',
deps,
)
genscript = py.path.local(genscript)
genscript.write(script)
tw.line("generated pytest standalone script: %s" % genscript,
bold=True)
return 0
def pytest_namespace():
return {'freeze_includes': freeze_includes}
def freeze_includes():
"""
Returns a list of module names used by py.test that should be
included by cx_freeze.
"""
result = list(_iter_all_modules(py))
result += list(_iter_all_modules(_pytest))
return result
def _iter_all_modules(package, prefix=''):
"""
Iterates over the names of all modules that can be found in the given
package, recursively.
Example:
_iter_all_modules(_pytest) ->
['_pytest.assertion.newinterpret',
'_pytest.capture',
'_pytest.core',
...
]
"""
if type(package) is not str:
path, prefix = package.__path__[0], package.__name__ + '.'
else:
path = package
for _, name, is_package in pkgutil.iter_modules([path]):
if is_package:
for m in _iter_all_modules(os.path.join(path, name), prefix=name + '.'):
yield prefix + m
else:
yield prefix + name


@ -1,13 +1,48 @@
""" version info, help messages, tracing configuration. """
from __future__ import absolute_import, division, print_function
import py
import pytest
from _pytest.config import PrintHelp
import os, sys
from argparse import Action
class HelpAction(Action):
"""This is an argparse Action that will raise an exception in
order to skip the rest of the argument parsing when --help is passed.
This prevents argparse from quitting due to missing required arguments
when any are defined, for example by ``pytest_addoption``.
This is similar to the way that the builtin argparse --help option is
implemented by raising SystemExit.
"""
def __init__(self,
option_strings,
dest=None,
default=False,
help=None):
super(HelpAction, self).__init__(
option_strings=option_strings,
dest=dest,
const=True,
default=default,
nargs=0,
help=help)
def __call__(self, parser, namespace, values, option_string=None):
setattr(namespace, self.dest, self.const)
# We should only skip the rest of the parsing after preparse is done
if getattr(parser._parser, 'after_preparse', False):
raise PrintHelp
def pytest_addoption(parser):
group = parser.getgroup('debugconfig')
group.addoption('--version', action="store_true",
help="display pytest lib version and import information.")
group._addoption("-h", "--help", action="store_true", dest="help",
group._addoption("-h", "--help", action=HelpAction, dest="help",
help="show help message and configuration info")
group._addoption('-p', action="append", dest="plugins", default = [],
metavar="name",
@ -20,6 +55,10 @@ def pytest_addoption(parser):
group.addoption('--debug',
action="store_true", dest="debug", default=False,
help="store internal tracing debug information in 'pytestdebug.log'.")
group._addoption(
'-o', '--override-ini', nargs='*', dest="override_ini",
action="append",
help="override config option with option=value style, e.g. `-o xfail_strict=True`.")
@pytest.hookimpl(hookwrapper=True)
@ -37,12 +76,14 @@ def pytest_cmdline_parse():
config.trace.root.setwriter(debugfile.write)
undo_tracing = config.pluginmanager.enable_tracing()
sys.stderr.write("writing pytestdebug information to %s\n" % path)
def unset_tracing():
debugfile.close()
sys.stderr.write("wrote pytestdebug information to %s\n" %
debugfile.name)
config.trace.root.setwriter(None)
undo_tracing()
config.add_cleanup(unset_tracing)
def pytest_cmdline_main(config):
@ -67,9 +108,8 @@ def showhelp(config):
tw.write(config._parser.optparser.format_help())
tw.line()
tw.line()
#tw.sep( "=", "config file settings")
tw.line("[pytest] ini-options in the next "
"pytest.ini|tox.ini|setup.cfg file:")
tw.line("[pytest] ini-options in the first "
"pytest.ini|tox.ini|setup.cfg file found:")
tw.line()
for name in config._parser._ininames:
@ -92,8 +132,8 @@ def showhelp(config):
tw.line()
tw.line()
tw.line("to see available markers type: py.test --markers")
tw.line("to see available fixtures type: py.test --fixtures")
tw.line("to see available markers type: pytest --markers")
tw.line("to see available fixtures type: pytest --fixtures")
tw.line("(shown according to specified file_or_dir or current dir "
"if not specified)")

third_party/python/pytest/_pytest/hookspec.py

@ -16,7 +16,9 @@ def pytest_addhooks(pluginmanager):
@hookspec(historic=True)
def pytest_namespace():
"""return dict of name->object to be made globally available in
"""
DEPRECATED: this hook causes direct monkeypatching on pytest, its use is strongly discouraged
return dict of name->object to be made globally available in
the pytest namespace. This hook is called at plugin registration
time.
"""
@ -34,7 +36,7 @@ def pytest_addoption(parser):
.. note::
This function should be implemented only in plugins or ``conftest.py``
files situated at the tests root directory due to how py.test
files situated at the tests root directory due to how pytest
:ref:`discovers plugins during startup <pluginorder>`.
:arg parser: To add command line options, call
@ -71,7 +73,9 @@ def pytest_configure(config):
@hookspec(firstresult=True)
def pytest_cmdline_parse(pluginmanager, args):
"""return initialized config object, parsing the specified args. """
"""return initialized config object, parsing the specified args.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_cmdline_preparse(config, args):
"""(deprecated) modify command line arguments before option parsing. """
@ -79,7 +83,9 @@ def pytest_cmdline_preparse(config, args):
@hookspec(firstresult=True)
def pytest_cmdline_main(config):
""" called for performing the main command line action. The default
implementation will invoke the configure hooks and runtest_mainloop. """
implementation will invoke the configure hooks and runtest_mainloop.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_load_initial_conftests(early_config, parser, args):
""" implements the loading of initial conftest files ahead
@ -92,7 +98,9 @@ def pytest_load_initial_conftests(early_config, parser, args):
@hookspec(firstresult=True)
def pytest_collection(session):
""" perform the collection protocol for the given session. """
""" perform the collection protocol for the given session.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_collection_modifyitems(session, config, items):
""" called after collection has been performed, may filter or re-order
@ -106,11 +114,15 @@ def pytest_ignore_collect(path, config):
""" return True to prevent considering this path for collection.
This hook is consulted for all files and directories prior to calling
more specific hooks.
Stops at first non-None result, see :ref:`firstresult`
"""
@hookspec(firstresult=True)
def pytest_collect_directory(path, parent):
""" called before traversing a directory for collection files. """
""" called before traversing a directory for collection files.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_collect_file(path, parent):
""" return collection Node or None for the given path. Any new node
@ -131,7 +143,9 @@ def pytest_deselected(items):
@hookspec(firstresult=True)
def pytest_make_collect_report(collector):
""" perform ``collector.collect()`` and return a CollectReport. """
""" perform ``collector.collect()`` and return a CollectReport.
Stops at first non-None result, see :ref:`firstresult` """
# -------------------------------------------------------------------------
# Python test function related hooks
@ -143,19 +157,32 @@ def pytest_pycollect_makemodule(path, parent):
This hook will be called for each matching test module path.
The pytest_collect_file hook needs to be used if you want to
create test modules for files that do not match as a test module.
"""
Stops at first non-None result, see :ref:`firstresult` """
@hookspec(firstresult=True)
def pytest_pycollect_makeitem(collector, name, obj):
""" return custom item/collector for a python object in a module, or None. """
""" return custom item/collector for a python object in a module, or None.
Stops at first non-None result, see :ref:`firstresult` """
@hookspec(firstresult=True)
def pytest_pyfunc_call(pyfuncitem):
""" call underlying test function. """
""" call underlying test function.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_generate_tests(metafunc):
""" generate (multiple) parametrized calls to a test function."""
@hookspec(firstresult=True)
def pytest_make_parametrize_id(config, val, argname):
"""Return a user-friendly string representation of the given ``val`` that will be used
by @pytest.mark.parametrize calls. Return None if the hook doesn't know about ``val``.
The parameter name is available as ``argname``, if required.
Stops at first non-None result, see :ref:`firstresult` """
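The new pytest_make_parametrize_id hook can be implemented in a conftest.py; a hedged sketch, where the complex-number handling is purely illustrative:

# conftest.py
def pytest_make_parametrize_id(config, val, argname):
    # Only claim ids for values we understand; returning None lets other
    # implementations (or pytest's default) generate the id instead.
    if isinstance(val, complex):
        return "{0}={1!r}".format(argname, val)
    return None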
# -------------------------------------------------------------------------
# generic runtest related hooks
# -------------------------------------------------------------------------
@ -163,7 +190,9 @@ def pytest_generate_tests(metafunc):
@hookspec(firstresult=True)
def pytest_runtestloop(session):
""" called for performing the main runtest loop
(after collection finished). """
(after collection finished).
Stops at first non-None result, see :ref:`firstresult` """
def pytest_itemstart(item, node):
""" (deprecated, use pytest_runtest_logstart). """
@ -181,7 +210,9 @@ def pytest_runtest_protocol(item, nextitem):
:py:func:`pytest_runtest_teardown`.
:return boolean: True if no further hook implementations should be invoked.
"""
Stops at first non-None result, see :ref:`firstresult` """
def pytest_runtest_logstart(nodeid, location):
""" signal the start of running a single test item. """
@ -204,14 +235,30 @@ def pytest_runtest_teardown(item, nextitem):
@hookspec(firstresult=True)
def pytest_runtest_makereport(item, call):
""" return a :py:class:`_pytest.runner.TestReport` object
for the given :py:class:`pytest.Item` and
for the given :py:class:`pytest.Item <_pytest.main.Item>` and
:py:class:`_pytest.runner.CallInfo`.
"""
Stops at first non-None result, see :ref:`firstresult` """
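A common way to implement the pytest_runtest_makereport hook described above is as a hookwrapper in a conftest.py, stashing each phase's report on the item; a hedged sketch:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield              # let the other implementations build the report
    report = outcome.get_result()
    # keep the setup/call/teardown report around for later fixtures to inspect
    setattr(item, "rep_" + report.when, report)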
def pytest_runtest_logreport(report):
""" process a test setup/call/teardown report relating to
the respective phase of executing a test. """
# -------------------------------------------------------------------------
# Fixture related hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_fixture_setup(fixturedef, request):
""" performs fixture setup execution.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_fixture_post_finalizer(fixturedef):
""" called after fixture teardown, but before the cache is cleared so
the fixture result cache ``fixturedef.cached_result`` can
still be accessed."""
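A hedged sketch of observing the new fixture hooks from a plugin; the logger name is an assumption, and returning None from pytest_fixture_setup leaves the built-in setup implementation in charge:

import logging

log = logging.getLogger("fixture-audit")  # hypothetical logger name

def pytest_fixture_setup(fixturedef, request):
    log.debug("setting up fixture %r for %s", fixturedef.argname, request.node.nodeid)

def pytest_fixture_post_finalizer(fixturedef):
    # per the docstring above, cached_result is still available at this point
    log.debug("finished fixture %r, cached_result=%r",
              fixturedef.argname, getattr(fixturedef, "cached_result", None))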
# -------------------------------------------------------------------------
# test session related hooks
# -------------------------------------------------------------------------
@ -227,7 +274,7 @@ def pytest_unconfigure(config):
# -------------------------------------------------------------------------
# hooks for customising the assert methods
# hooks for customizing the assert methods
# -------------------------------------------------------------------------
def pytest_assertrepr_compare(config, op, left, right):
@ -236,7 +283,7 @@ def pytest_assertrepr_compare(config, op, left, right):
Return None for no custom explanation, otherwise return a list
of strings. The strings will be joined by newlines but any newlines
*in* a string will be escaped. Note that all but the first line will
be indented sligthly, the intention is for the first line to be a summary.
be indented slightly, the intention is for the first line to be a summary.
"""
# -------------------------------------------------------------------------
@ -244,13 +291,22 @@ def pytest_assertrepr_compare(config, op, left, right):
# -------------------------------------------------------------------------
def pytest_report_header(config, startdir):
""" return a string to be displayed as header info for terminal reporting."""
""" return a string to be displayed as header info for terminal reporting.
.. note::
This function should be implemented only in plugins or ``conftest.py``
files situated at the tests root directory due to how pytest
:ref:`discovers plugins during startup <pluginorder>`.
"""
@hookspec(firstresult=True)
def pytest_report_teststatus(report):
""" return result-category, shortletter and verbose word for reporting."""
""" return result-category, shortletter and verbose word for reporting.
def pytest_terminal_summary(terminalreporter):
Stops at first non-None result, see :ref:`firstresult` """
def pytest_terminal_summary(terminalreporter, exitstatus):
""" add additional section in terminal summary reporting. """
@ -266,7 +322,9 @@ def pytest_logwarning(message, code, nodeid, fslocation):
@hookspec(firstresult=True)
def pytest_doctest_prepare_content(content):
""" return processed content for a given doctest"""
""" return processed content for a given doctest
Stops at first non-None result, see :ref:`firstresult` """
# -------------------------------------------------------------------------
# error handling and internal debugging hooks

third_party/python/pytest/_pytest/junitxml.py

@ -4,16 +4,20 @@
Based on initial code from Ross Lawley.
"""
# Output conforms to https://github.com/jenkinsci/xunit-plugin/blob/master/
# src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd
Output conforms to https://github.com/jenkinsci/xunit-plugin/blob/master/
src/main/resources/org/jenkinsci/plugins/xunit/types/model/xsd/junit-10.xsd
"""
from __future__ import absolute_import, division, print_function
import functools
import py
import os
import re
import sys
import time
import pytest
from _pytest.config import filename_arg
# Python 2.X and 3.X compatibility
if sys.version_info[0] < 3:
@ -27,6 +31,7 @@ else:
class Junit(py.xml.Namespace):
pass
# We need to get the subset of the invalid unicode ranges according to
# XML 1.0 which are valid in this python build. Hence we calculate
# this dynamically instead of hardcoding it. The spec range of valid
@ -102,6 +107,8 @@ class _NodeReporter(object):
}
if testreport.location[1] is not None:
attrs["line"] = testreport.location[1]
if hasattr(testreport, "url"):
attrs["url"] = testreport.url
self.attrs = attrs
def to_xml(self):
@ -116,19 +123,15 @@ class _NodeReporter(object):
node = kind(data, message=message)
self.append(node)
def _write_captured_output(self, report):
def write_captured_output(self, report):
for capname in ('out', 'err'):
allcontent = ""
for name, content in report.get_sections("Captured std%s" %
capname):
allcontent += content
if allcontent:
content = getattr(report, 'capstd' + capname)
if content:
tag = getattr(Junit, 'system-' + capname)
self.append(tag(bin_xml_escape(allcontent)))
self.append(tag(bin_xml_escape(content)))
def append_pass(self, report):
self.add_stats('passed')
self._write_captured_output(report)
def append_failure(self, report):
# msg = str(report.longrepr.reprtraceback.extraline)
@ -147,7 +150,6 @@ class _NodeReporter(object):
fail = Junit.failure(message=message)
fail.append(bin_xml_escape(report.longrepr))
self.append(fail)
self._write_captured_output(report)
def append_collect_error(self, report):
# msg = str(report.longrepr.reprtraceback.extraline)
@ -159,9 +161,12 @@ class _NodeReporter(object):
Junit.skipped, "collection skipped", report.longrepr)
def append_error(self, report):
if getattr(report, 'when', None) == 'teardown':
msg = "test teardown failure"
else:
msg = "test setup failure"
self._add_simple(
Junit.error, "test setup failure", report.longrepr)
self._write_captured_output(report)
Junit.error, msg, report.longrepr)
def append_skipped(self, report):
if hasattr(report, "wasxfail"):
@ -176,7 +181,7 @@ class _NodeReporter(object):
Junit.skipped("%s:%s: %s" % (filename, lineno, skipreason),
type="pytest.skip",
message=skipreason))
self._write_captured_output(report)
self.write_captured_output(report)
def finalize(self):
data = self.to_xml().unicode(indent=0)
@ -186,8 +191,8 @@ class _NodeReporter(object):
@pytest.fixture
def record_xml_property(request):
"""Fixture that adds extra xml properties to the tag for the calling test.
The fixture is callable with (name, value), with value being automatically
"""Add extra xml properties to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
"""
request.node.warn(
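A hedged usage sketch for the record_xml_property fixture documented above; the test name and property values are made up.

def test_widget_throughput(record_xml_property):
    record_xml_property("example_key", 1)   # ends up as a <property> entry in the junit xml
    assert True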
@ -212,6 +217,7 @@ def pytest_addoption(parser):
action="store",
dest="xmlpath",
metavar="path",
type=functools.partial(filename_arg, optname="--junitxml"),
default=None,
help="create junit-xml style report file at given path.")
group.addoption(
@ -220,13 +226,14 @@ def pytest_addoption(parser):
metavar="str",
default=None,
help="prepend prefix to classnames in junit-xml output")
parser.addini("junit_suite_name", "Test suite name for JUnit report", default="pytest")
def pytest_configure(config):
xmlpath = config.option.xmlpath
# prevent opening xmllog on slave nodes (xdist)
if xmlpath and not hasattr(config, 'slaveinput'):
config._xml = LogXML(xmlpath, config.option.junitprefix)
config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini("junit_suite_name"))
config.pluginmanager.register(config._xml)
@ -253,10 +260,11 @@ def mangle_test_address(address):
class LogXML(object):
def __init__(self, logfile, prefix):
def __init__(self, logfile, prefix, suite_name="pytest"):
logfile = os.path.expanduser(os.path.expandvars(logfile))
self.logfile = os.path.normpath(os.path.abspath(logfile))
self.prefix = prefix
self.suite_name = suite_name
self.stats = dict.fromkeys([
'error',
'passed',
@ -265,6 +273,10 @@ class LogXML(object):
], 0)
self.node_reporters = {} # nodeid -> _NodeReporter
self.node_reporters_ordered = []
self.global_properties = []
# List of reports that failed on call but teardown is pending.
self.open_reports = []
self.cnt_double_fail_tests = 0
def finalize(self, report):
nodeid = getattr(report, 'nodeid', report)
@ -284,9 +296,12 @@ class LogXML(object):
if key in self.node_reporters:
# TODO: breasks for --dist=each
return self.node_reporters[key]
reporter = _NodeReporter(nodeid, self)
self.node_reporters[key] = reporter
self.node_reporters_ordered.append(reporter)
return reporter
def add_stats(self, key):
@ -321,14 +336,33 @@ class LogXML(object):
-> teardown node2
-> teardown node1
"""
close_report = None
if report.passed:
if report.when == "call": # ignore setup/teardown
reporter = self._opentestcase(report)
reporter.append_pass(report)
elif report.failed:
if report.when == "teardown":
# The following vars are needed when xdist plugin is used
report_wid = getattr(report, "worker_id", None)
report_ii = getattr(report, "item_index", None)
close_report = next(
(rep for rep in self.open_reports
if (rep.nodeid == report.nodeid and
getattr(rep, "item_index", None) == report_ii and
getattr(rep, "worker_id", None) == report_wid
)
), None)
if close_report:
# We need to open new testcase in case we have failure in
# call and error in teardown in order to follow junit
# schema
self.finalize(close_report)
self.cnt_double_fail_tests += 1
reporter = self._opentestcase(report)
if report.when == "call":
reporter.append_failure(report)
self.open_reports.append(report)
else:
reporter.append_error(report)
elif report.skipped:
@ -336,7 +370,20 @@ class LogXML(object):
reporter.append_skipped(report)
self.update_testcase_duration(report)
if report.when == "teardown":
reporter = self._opentestcase(report)
reporter.write_captured_output(report)
self.finalize(report)
report_wid = getattr(report, "worker_id", None)
report_ii = getattr(report, "item_index", None)
close_report = next(
(rep for rep in self.open_reports
if (rep.nodeid == report.nodeid and
getattr(rep, "item_index", None) == report_ii and
getattr(rep, "worker_id", None) == report_wid
)
), None)
if close_report:
self.open_reports.remove(close_report)
def update_testcase_duration(self, report):
"""accumulates total duration for nodeid from given report and updates
@ -369,12 +416,15 @@ class LogXML(object):
suite_stop_time = time.time()
suite_time_delta = suite_stop_time - self.suite_start_time
numtests = self.stats['passed'] + self.stats['failure'] + self.stats['skipped']
numtests = (self.stats['passed'] + self.stats['failure'] +
self.stats['skipped'] + self.stats['error'] -
self.cnt_double_fail_tests)
logfile.write('<?xml version="1.0" encoding="utf-8"?>')
logfile.write(Junit.testsuite(
self._get_global_properties_node(),
[x.to_xml() for x in self.node_reporters_ordered],
name="pytest",
name=self.suite_name,
errors=self.stats['error'],
failures=self.stats['failure'],
skips=self.stats['skipped'],
@ -385,3 +435,18 @@ class LogXML(object):
def pytest_terminal_summary(self, terminalreporter):
terminalreporter.write_sep("-",
"generated xml file: %s" % (self.logfile))
def add_global_property(self, name, value):
self.global_properties.append((str(name), bin_xml_escape(value)))
def _get_global_properties_node(self):
"""Return a Junit node containing custom properties, if any.
"""
if self.global_properties:
return Junit.properties(
[
Junit.property(name=name, value=value)
for name, value in self.global_properties
]
)
return ''
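The new add_global_property/_get_global_properties_node pair can be driven from a session fixture; a hedged sketch in which the guard on config._xml and the property values are assumptions:

import pytest

@pytest.fixture(scope="session", autouse=True)
def record_build_metadata(request):
    xml = getattr(request.config, "_xml", None)   # only present when --junitxml is in use
    if xml is not None:
        xml.add_global_property("build_id", "12345")  # hypothetical value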

third_party/python/pytest/_pytest/main.py

@ -1,19 +1,20 @@
""" core implementation of testing process: init, session, runtest loop. """
import imp
from __future__ import absolute_import, division, print_function
import functools
import os
import re
import sys
import _pytest
import _pytest._code
import py
import pytest
try:
from collections import MutableMapping as MappingMixin
except ImportError:
from UserDict import DictMixin as MappingMixin
from _pytest.runner import collect_one_node
from _pytest.config import directory_arg, UsageError, hookimpl
from _pytest.runner import collect_one_node, exit
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
@ -25,11 +26,10 @@ EXIT_INTERNALERROR = 3
EXIT_USAGEERROR = 4
EXIT_NOTESTSCOLLECTED = 5
name_re = re.compile("^[a-zA-Z_]\w*$")
def pytest_addoption(parser):
parser.addini("norecursedirs", "directory patterns to avoid for recursion",
type="args", default=['.*', 'CVS', '_darcs', '{arch}', '*.egg'])
type="args", default=['.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'])
parser.addini("testpaths", "directories to search for tests when no files or directories are given in the command line.",
type="args", default=[])
#parser.addini("dirpatterns",
@ -38,8 +38,8 @@ def pytest_addoption(parser):
# "**/test_*.py", "**/*_test.py"]
#)
group = parser.getgroup("general", "running and selection options")
group._addoption('-x', '--exitfirst', action="store_true", default=False,
dest="exitfirst",
group._addoption('-x', '--exitfirst', action="store_const",
dest="maxfail", const=1,
help="exit instantly on first error or failed test."),
group._addoption('--maxfail', metavar="num",
action="store", type=int, dest="maxfail", default=0,
@ -48,6 +48,9 @@ def pytest_addoption(parser):
help="run pytest in strict mode, warnings become errors.")
group._addoption("-c", metavar="file", type=str, dest="inifilename",
help="load configuration from `file` instead of trying to locate one of the implicit configuration files.")
group._addoption("--continue-on-collection-errors", action="store_true",
default=False, dest="continue_on_collection_errors",
help="Force test execution even if collection errors occur.")
group = parser.getgroup("collect", "collection")
group.addoption('--collectonly', '--collect-only', action="store_true",
@ -59,11 +62,14 @@ def pytest_addoption(parser):
# when changing this to --conf-cut-dir, config.py Conftest.setinitial
# needs upgrading as well
group.addoption('--confcutdir', dest="confcutdir", default=None,
metavar="dir",
metavar="dir", type=functools.partial(directory_arg, optname="--confcutdir"),
help="only load conftest.py's relative to specified dir.")
group.addoption('--noconftest', action="store_true",
dest="noconftest", default=False,
help="Don't load any conftest.py files.")
group.addoption('--keepduplicates', '--keep-duplicates', action="store_true",
dest="keepduplicates", default=False,
help="Keep duplicate tests.")
group = parser.getgroup("debugconfig",
"test session debugging and configuration")
@ -71,14 +77,19 @@ def pytest_addoption(parser):
help="base temporary directory for this test run.")
def pytest_namespace():
collect = dict(Item=Item, Collector=Collector, File=File, Session=Session)
return dict(collect=collect)
"""keeping this one works around a deeper startup issue in pytest
i tried to find it for a while but the amount of time turned unsustainable,
so i put a hack in to revisit later
"""
return {}
def pytest_configure(config):
pytest.config = config # compatibiltiy
if config.option.exitfirst:
config.option.maxfail = 1
__import__('pytest').config = config # compatibility
def wrap_session(config, doit):
"""Skeleton command line program"""
@ -92,10 +103,13 @@ def wrap_session(config, doit):
config.hook.pytest_sessionstart(session=session)
initstate = 2
session.exitstatus = doit(config, session) or 0
except pytest.UsageError:
except UsageError:
raise
except KeyboardInterrupt:
excinfo = _pytest._code.ExceptionInfo()
if initstate < 2 and isinstance(excinfo.value, exit.Exception):
sys.stderr.write('{0}: {1}\n'.format(
excinfo.typename, excinfo.value.msg))
config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
session.exitstatus = EXIT_INTERRUPTED
except:
@ -115,9 +129,11 @@ def wrap_session(config, doit):
config._ensure_unconfigure()
return session.exitstatus
def pytest_cmdline_main(config):
return wrap_session(config, _main)
def _main(config, session):
""" default command line protocol for initialization, session,
running tests and reporting. """
@ -129,37 +145,49 @@ def _main(config, session):
elif session.testscollected == 0:
return EXIT_NOTESTSCOLLECTED
def pytest_collection(session):
return session.perform_collect()
def pytest_runtestloop(session):
if (session.testsfailed and
not session.config.option.continue_on_collection_errors):
raise session.Interrupted(
"%d errors during collection" % session.testsfailed)
if session.config.option.collectonly:
return True
def getnextitem(i):
# this is a function to avoid python2
# keeping sys.exc_info set when calling into a test
# python2 keeps sys.exc_info till the frame is left
try:
return session.items[i+1]
except IndexError:
return None
for i, item in enumerate(session.items):
nextitem = getnextitem(i)
nextitem = session.items[i+1] if i+1 < len(session.items) else None
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
if session.shouldstop:
raise session.Interrupted(session.shouldstop)
return True
def pytest_ignore_collect(path, config):
p = path.dirpath()
ignore_paths = config._getconftest_pathlist("collect_ignore", path=p)
ignore_paths = config._getconftest_pathlist("collect_ignore", path=path.dirpath())
ignore_paths = ignore_paths or []
excludeopt = config.getoption("ignore")
if excludeopt:
ignore_paths.extend([py.path.local(x) for x in excludeopt])
return path in ignore_paths
if py.path.local(path) in ignore_paths:
return True
# Skip duplicate paths.
keepduplicates = config.getoption("keepduplicates")
duplicate_paths = config.pluginmanager._duplicatepaths
if not keepduplicates:
if path in duplicate_paths:
return True
else:
duplicate_paths.add(path)
return False
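pytest_ignore_collect above consults the collect_ignore conftest variable; a hedged conftest.py sketch, with hypothetical paths:

# conftest.py
import sys

collect_ignore = ["setup.py"]
if sys.version_info[0] > 2:
    # skip a module that only parses on Python 2
    collect_ignore.append("pkg/module_py2.py")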
class FSHookProxy:
def __init__(self, fspath, pm, remove_mods):
@ -172,12 +200,22 @@ class FSHookProxy:
self.__dict__[name] = x
return x
def compatproperty(name):
def fget(self):
# deprecated - use pytest.name
return getattr(pytest, name)
class _CompatProperty(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(__import__('pytest'), self.name)
return property(fget)
class NodeKeywords(MappingMixin):
def __init__(self, node):
@ -249,19 +287,23 @@ class Node(object):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = compatproperty("Module")
Class = compatproperty("Class")
Instance = compatproperty("Instance")
Function = compatproperty("Function")
File = compatproperty("File")
Item = compatproperty("Item")
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
cls = getattr(self, name)
if cls != getattr(pytest, name):
py.log._apiwarn("2.0", "use of node.%s is deprecated, "
"use pytest_pycollect_makeitem(...) to create custom "
"collection nodes" % name)
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(__import__('pytest'), name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
@ -275,9 +317,6 @@ class Node(object):
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
else:
fslocation = "%s:%s" % fslocation[:2]
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
@ -338,9 +377,9 @@ class Node(object):
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator
from _pytest.mark import MarkDecorator, MARK_GEN
if isinstance(marker, py.builtin._basestring):
marker = MarkDecorator(marker)
marker = getattr(MARK_GEN, marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
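A hedged sketch of the add_marker API above, called from pytest_collection_modifyitems in a conftest.py; the marker name and selection rule are illustrative.

import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        if "slow" in item.nodeid:            # arbitrary selection rule
            item.add_marker(pytest.mark.slow)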
@ -392,7 +431,10 @@ class Node(object):
if self.config.option.fulltrace:
style="long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
@ -403,7 +445,13 @@ class Node(object):
else:
style = "long"
return excinfo.getrepr(funcargs=True,
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
@ -430,10 +478,6 @@ class Collector(Node):
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _memocollect(self):
""" internal helper method to cache results of calling collect(). """
return self._memoizedcall('_collected', lambda: list(self.collect()))
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
@ -510,7 +554,6 @@ class Session(FSCollector):
def __init__(self, config):
FSCollector.__init__(self, config.rootdir, parent=None,
config=config, session=self)
self._fs2hookproxy = {}
self.testsfailed = 0
self.testscollected = 0
self.shouldstop = False
@ -522,12 +565,12 @@ class Session(FSCollector):
def _makeid(self):
return ""
@pytest.hookimpl(tryfirst=True)
@hookimpl(tryfirst=True)
def pytest_collectstart(self):
if self.shouldstop:
raise self.Interrupted(self.shouldstop)
@pytest.hookimpl(tryfirst=True)
@hookimpl(tryfirst=True)
def pytest_runtest_logreport(self, report):
if report.failed and not hasattr(report, 'wasxfail'):
self.testsfailed += 1
@ -541,28 +584,24 @@ class Session(FSCollector):
return path in self._initialpaths
def gethookproxy(self, fspath):
try:
return self._fs2hookproxy[fspath]
except KeyError:
# check if we have the common case of running
# hooks with all conftest.py filesall conftest.py
pm = self.config.pluginmanager
my_conftestmodules = pm._getconftestmodules(fspath)
remove_mods = pm._conftest_plugins.difference(my_conftestmodules)
if remove_mods:
# one or more conftests are not in use at this fspath
proxy = FSHookProxy(fspath, pm, remove_mods)
else:
# all plugis are active for this fspath
proxy = self.config.hook
self._fs2hookproxy[fspath] = proxy
return proxy
# check if we have the common case of running
# hooks with all conftest.py filesall conftest.py
pm = self.config.pluginmanager
my_conftestmodules = pm._getconftestmodules(fspath)
remove_mods = pm._conftest_plugins.difference(my_conftestmodules)
if remove_mods:
# one or more conftests are not in use at this fspath
proxy = FSHookProxy(fspath, pm, remove_mods)
else:
# all plugis are active for this fspath
proxy = self.config.hook
return proxy
def perform_collect(self, args=None, genitems=True):
hook = self.config.hook
try:
items = self._perform_collect(args, genitems)
self.config.pluginmanager.check_pending()
hook.pytest_collection_modifyitems(session=self,
config=self.config, items=items)
finally:
@ -591,8 +630,8 @@ class Session(FSCollector):
for arg, exc in self._notfound:
line = "(no name %r in any of %r)" % (arg, exc.args[0])
errors.append("not found: %s\n%s" % (arg, line))
#XXX: test this
raise pytest.UsageError(*errors)
# XXX: test this
raise UsageError(*errors)
if not genitems:
return rep.result
else:
@ -620,7 +659,7 @@ class Session(FSCollector):
names = self._parsearg(arg)
path = names.pop(0)
if path.check(dir=1):
assert not names, "invalid arg %r" %(arg,)
assert not names, "invalid arg %r" % (arg,)
for path in path.visit(fil=lambda x: x.check(file=1),
rec=self._recurse, bf=True, sort=True):
for x in self._collectfile(path):
@ -649,44 +688,41 @@ class Session(FSCollector):
return True
def _tryconvertpyarg(self, x):
mod = None
path = [os.path.abspath('.')] + sys.path
for name in x.split('.'):
# ignore anything that's not a proper name here
# else something like --pyargs will mess up '.'
# since imp.find_module will actually sometimes work for it
# but it's supposed to be considered a filesystem path
# not a package
if name_re.match(name) is None:
return x
try:
fd, mod, type_ = imp.find_module(name, path)
except ImportError:
return x
else:
if fd is not None:
fd.close()
"""Convert a dotted module name to path.
if type_[2] != imp.PKG_DIRECTORY:
path = [os.path.dirname(mod)]
else:
path = [mod]
return mod
"""
import pkgutil
try:
loader = pkgutil.find_loader(x)
except ImportError:
return x
if loader is None:
return x
# This method is sometimes invoked when AssertionRewritingHook, which
# does not define a get_filename method, is already in place:
try:
path = loader.get_filename(x)
except AttributeError:
# Retrieve path from AssertionRewritingHook:
path = loader.modules[x][0].co_filename
if loader.is_package(x):
path = os.path.dirname(path)
return path
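_tryconvertpyarg backs the --pyargs option; a hedged usage sketch, with a hypothetical package name:

import pytest

# Collect tests by importable dotted name instead of filesystem path.
pytest.main(["--pyargs", "mypkg.tests"])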
def _parsearg(self, arg):
""" return (fspath, names) tuple after checking the file exists. """
arg = str(arg)
if self.config.option.pyargs:
arg = self._tryconvertpyarg(arg)
parts = str(arg).split("::")
if self.config.option.pyargs:
parts[0] = self._tryconvertpyarg(parts[0])
relpath = parts[0].replace("/", os.sep)
path = self.config.invocation_dir.join(relpath, abs=True)
if not path.check():
if self.config.option.pyargs:
msg = "file or package not found: "
raise UsageError(
"file or package not found: " + arg +
" (missing __init__.py?)")
else:
msg = "file not found: "
raise pytest.UsageError(msg + arg)
raise UsageError("file not found: " + arg)
parts[0] = path
return parts
@ -709,11 +745,11 @@ class Session(FSCollector):
nextnames = names[1:]
resultnodes = []
for node in matching:
if isinstance(node, pytest.Item):
if isinstance(node, Item):
if not names:
resultnodes.append(node)
continue
assert isinstance(node, pytest.Collector)
assert isinstance(node, Collector)
rep = collect_one_node(node)
if rep.passed:
has_matched = False
@ -726,16 +762,20 @@ class Session(FSCollector):
if not has_matched and len(rep.result) == 1 and x.name == "()":
nextnames.insert(0, name)
resultnodes.extend(self.matchnodes([x], nextnames))
node.ihook.pytest_collectreport(report=rep)
else:
# report collection failures here to avoid failing to run some test
# specified in the command line because the module could not be
# imported (#134)
node.ihook.pytest_collectreport(report=rep)
return resultnodes
def genitems(self, node):
self.trace("genitems", node)
if isinstance(node, pytest.Item):
if isinstance(node, Item):
node.ihook.pytest_itemcollected(item=node)
yield node
else:
assert isinstance(node, pytest.Collector)
assert isinstance(node, Collector)
rep = collect_one_node(node)
if rep.passed:
for subnode in rep.result:

third_party/python/pytest/_pytest/mark.py

@ -1,5 +1,64 @@
""" generic mechanism for marking and selecting python functions. """
from __future__ import absolute_import, division, print_function
import inspect
from collections import namedtuple
from operator import attrgetter
from .compat import imap
def alias(name):
return property(attrgetter(name), doc='alias for ' + name)
class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
@classmethod
def param(cls, *values, **kw):
marks = kw.pop('marks', ())
if isinstance(marks, MarkDecorator):
marks = marks,
else:
assert isinstance(marks, (tuple, list, set))
def param_extract_id(id=None):
return id
id = param_extract_id(**kw)
return cls(values, marks, id)
@classmethod
def extract_from(cls, parameterset, legacy_force_tuple=False):
"""
:param parameterset:
a legacy style parameterset that may or may not be a tuple,
and may or may not be wrapped into a mess of mark objects
:param legacy_force_tuple:
enforce tuple wrapping so single argument tuple values
don't get decomposed and break tests
"""
if isinstance(parameterset, cls):
return parameterset
if not isinstance(parameterset, MarkDecorator) and legacy_force_tuple:
return cls.param(parameterset)
newmarks = []
argval = parameterset
while isinstance(argval, MarkDecorator):
newmarks.append(MarkDecorator(Mark(
argval.markname, argval.args[:-1], argval.kwargs)))
argval = argval.args[-1]
assert not isinstance(argval, ParameterSet)
if legacy_force_tuple:
argval = argval,
return cls(argval, marks=newmarks, id=None)
@property
def deprecated_arg_dict(self):
return dict((mark.name, mark) for mark in self.marks)
class MarkerError(Exception):
@ -7,8 +66,8 @@ class MarkerError(Exception):
"""Error in use of a pytest marker/attribute."""
def pytest_namespace():
return {'mark': MarkGenerator()}
def param(*values, **kw):
return ParameterSet.param(*values, **kw)
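ParameterSet.param is what backs the pytest.param helper; a hedged sketch of using it inside parametrize, where the values and the xfail mark are illustrative:

import pytest

@pytest.mark.parametrize("n, expected", [
    (1, 2),
    pytest.param(1, 3, marks=pytest.mark.xfail, id="known-bug"),
])
def test_increment(n, expected):
    assert n + 1 == expected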
def pytest_addoption(parser):
@ -19,7 +78,7 @@ def pytest_addoption(parser):
help="only run tests which match the given substring expression. "
"An expression is a python evaluatable expression "
"where all names are substring-matched against test names "
"and their parent classes. Example: -k 'test_method or test "
"and their parent classes. Example: -k 'test_method or test_"
"other' matches all test functions and classes whose name "
"contains 'test_method' or 'test_other'. "
"Additionally keywords are matched to classes and functions "
@ -54,6 +113,8 @@ def pytest_cmdline_main(config):
tw.line()
config._ensure_unconfigure()
return 0
pytest_cmdline_main.tryfirst = True
@ -64,7 +125,7 @@ def pytest_collection_modifyitems(items, config):
return
# pytest used to allow "-" for negating
# but today we just allow "-" at the beginning, use "not" instead
# we probably remove "-" alltogether soon
# we probably remove "-" altogether soon
if keywordexpr.startswith("-"):
keywordexpr = "not " + keywordexpr[1:]
selectuntil = False
@ -160,9 +221,13 @@ def matchkeyword(colitem, keywordexpr):
def pytest_configure(config):
import pytest
config._old_mark_config = MARK_GEN._config
if config.option.strict:
pytest.mark._config = config
MARK_GEN._config = config
def pytest_unconfigure(config):
MARK_GEN._config = getattr(config, '_old_mark_config', None)
class MarkGenerator:
@ -176,13 +241,15 @@ class MarkGenerator:
will set a 'slowtest' :class:`MarkInfo` object
on the ``test_function`` object. """
_config = None
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError("Marker name must NOT start with underscore")
if hasattr(self, '_config'):
if self._config is not None:
self._check(name)
return MarkDecorator(name)
return MarkDecorator(Mark(name, (), {}))
def _check(self, name):
try:
@ -198,6 +265,7 @@ class MarkGenerator:
if name not in self._markers:
raise AttributeError("%r not a registered marker" % (name,))
def istestfunc(func):
return hasattr(func, "__call__") and \
getattr(func, "__name__", "<lambda>") != "<lambda>"
@ -235,19 +303,23 @@ class MarkDecorator:
additional keyword or positional arguments.
"""
def __init__(self, name, args=None, kwargs=None):
self.name = name
self.args = args or ()
self.kwargs = kwargs or {}
def __init__(self, mark):
assert isinstance(mark, Mark), repr(mark)
self.mark = mark
name = alias('mark.name')
args = alias('mark.args')
kwargs = alias('mark.kwargs')
@property
def markname(self):
return self.name # for backward-compat (2.4.1 had this attr)
def __eq__(self, other):
return self.mark == other.mark
def __repr__(self):
d = self.__dict__.copy()
name = d.pop('name')
return "<MarkDecorator %r %r>" % (name, d)
return "<MarkDecorator %r>" % (self.mark,)
def __call__(self, *args, **kwargs):
""" if passed a single callable argument: decorate it with mark info.
@ -270,42 +342,50 @@ class MarkDecorator:
else:
holder = getattr(func, self.name, None)
if holder is None:
holder = MarkInfo(
self.name, self.args, self.kwargs
)
holder = MarkInfo(self.mark)
setattr(func, self.name, holder)
else:
holder.add(self.args, self.kwargs)
holder.add_mark(self.mark)
return func
kw = self.kwargs.copy()
kw.update(kwargs)
args = self.args + args
return self.__class__(self.name, args=args, kwargs=kw)
mark = Mark(self.name, args, kwargs)
return self.__class__(self.mark.combined_with(mark))
class MarkInfo:
class Mark(namedtuple('Mark', 'name, args, kwargs')):
def combined_with(self, other):
assert self.name == other.name
return Mark(
self.name, self.args + other.args,
dict(self.kwargs, **other.kwargs))
class MarkInfo(object):
""" Marking object created by :class:`MarkDecorator` instances. """
def __init__(self, name, args, kwargs):
#: name of attribute
self.name = name
#: positional argument list, empty if none specified
self.args = args
#: keyword argument dictionary, empty if nothing specified
self.kwargs = kwargs.copy()
self._arglist = [(args, kwargs.copy())]
def __init__(self, mark):
assert isinstance(mark, Mark), repr(mark)
self.combined = mark
self._marks = [mark]
name = alias('combined.name')
args = alias('combined.args')
kwargs = alias('combined.kwargs')
def __repr__(self):
return "<MarkInfo %r args=%r kwargs=%r>" % (
self.name, self.args, self.kwargs
)
return "<MarkInfo {0!r}>".format(self.combined)
def add(self, args, kwargs):
def add_mark(self, mark):
""" add a MarkInfo with the given args and kwargs. """
self._arglist.append((args, kwargs))
self.args += args
self.kwargs.update(kwargs)
self._marks.append(mark)
self.combined = self.combined.combined_with(mark)
def __iter__(self):
""" yield MarkInfo objects each relating to a marking-call. """
for args, kwargs in self._arglist:
yield MarkInfo(self.name, args, kwargs)
return imap(MarkInfo, self._marks)
MARK_GEN = MarkGenerator()

third_party/python/pytest/_pytest/monkeypatch.py

@ -1,15 +1,19 @@
""" monkeypatching and mocking functionality. """
from __future__ import absolute_import, division, print_function
import os, sys
import os
import sys
import re
from py.builtin import _basestring
from _pytest.fixtures import fixture
RE_IMPORT_ERROR_NAME = re.compile("^No module named (.*)$")
def pytest_funcarg__monkeypatch(request):
"""The returned ``monkeypatch`` funcarg provides these
@fixture
def monkeypatch():
"""The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::
monkeypatch.setattr(obj, name, value, raising=True)
@ -22,13 +26,13 @@ def pytest_funcarg__monkeypatch(request):
monkeypatch.chdir(path)
All modifications will be undone after the requesting
test function has finished. The ``raising``
test function or fixture has finished. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
"""
mpatch = monkeypatch()
request.addfinalizer(mpatch.undo)
return mpatch
mpatch = MonkeyPatch()
yield mpatch
mpatch.undo()
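A hedged usage sketch for the rewritten monkeypatch fixture; the environment variable and assertion are illustrative.

import os

def test_uses_fake_home(monkeypatch):
    monkeypatch.setenv("HOME", "/tmp/fake-home")   # undone automatically at teardown
    assert os.environ["HOME"] == "/tmp/fake-home"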
def resolve(name):
@ -93,8 +97,9 @@ class Notset:
notset = Notset()
class monkeypatch:
""" Object keeping a record of setattr/item/env/syspath changes. """
class MonkeyPatch:
""" Object returned by the ``monkeypatch`` fixture keeping a record of setattr/item/env/syspath changes.
"""
def __init__(self):
self._setattr = []
@ -220,10 +225,10 @@ class monkeypatch:
""" Undo previous changes. This call consumes the
undo stack. Calling it a second time has no effect unless
you do more monkeypatching after the undo call.
There is generally no need to call `undo()`, since it is
called automatically during tear-down.
Note that the same `monkeypatch` fixture is used across a
single test function invocation. If `monkeypatch` is used both by
the test function itself and one of the test fixtures,

third_party/python/pytest/_pytest/nose.py

@ -1,10 +1,11 @@
""" run test suites written for nose. """
from __future__ import absolute_import, division, print_function
import sys
import py
import pytest
from _pytest import unittest
from _pytest import unittest, runner, python
from _pytest.config import hookimpl
def get_skip_exceptions():
@ -19,19 +20,19 @@ def get_skip_exceptions():
def pytest_runtest_makereport(item, call):
if call.excinfo and call.excinfo.errisinstance(get_skip_exceptions()):
# let's substitute the excinfo with a pytest.skip one
call2 = call.__class__(lambda:
pytest.skip(str(call.excinfo.value)), call.when)
call2 = call.__class__(
lambda: runner.skip(str(call.excinfo.value)), call.when)
call.excinfo = call2.excinfo
@pytest.hookimpl(trylast=True)
@hookimpl(trylast=True)
def pytest_runtest_setup(item):
if is_potential_nosetest(item):
if isinstance(item.parent, pytest.Generator):
if isinstance(item.parent, python.Generator):
gen = item.parent
if not hasattr(gen, '_nosegensetup'):
call_optional(gen.obj, 'setup')
if isinstance(gen.parent, pytest.Instance):
if isinstance(gen.parent, python.Instance):
call_optional(gen.parent.obj, 'setup')
gen._nosegensetup = True
if not call_optional(item.obj, 'setup'):
@ -50,14 +51,14 @@ def teardown_nose(item):
def pytest_make_collect_report(collector):
if isinstance(collector, pytest.Generator):
if isinstance(collector, python.Generator):
call_optional(collector.obj, 'setup')
def is_potential_nosetest(item):
# extra check needed since we do not do nose style setup/teardown
# on direct unittest style classes
return isinstance(item, pytest.Function) and \
return isinstance(item, python.Function) and \
not isinstance(item, unittest.TestCaseFunction)

third_party/python/pytest/_pytest/pastebin.py

@ -1,4 +1,6 @@
""" submit failure or test session information to a pastebin service. """
from __future__ import absolute_import, division, print_function
import pytest
import sys
import tempfile
@ -11,6 +13,7 @@ def pytest_addoption(parser):
choices=['failed', 'all'],
help="send failed|all info to bpaste.net pastebin service.")
@pytest.hookimpl(trylast=True)
def pytest_configure(config):
import py
@ -23,13 +26,16 @@ def pytest_configure(config):
# pastebin file will be utf-8 encoded binary file
config._pastebinfile = tempfile.TemporaryFile('w+b')
oldwrite = tr._tw.write
def tee_write(s, **kwargs):
oldwrite(s, **kwargs)
if py.builtin._istext(s):
s = s.encode('utf-8')
config._pastebinfile.write(s)
tr._tw.write = tee_write
def pytest_unconfigure(config):
if hasattr(config, '_pastebinfile'):
# get terminal contents and delete file
@ -45,6 +51,7 @@ def pytest_unconfigure(config):
pastebinurl = create_new_paste(sessionlog)
tr.write_line("pastebin session-log: %s\n" % pastebinurl)
def create_new_paste(contents):
"""
Creates a new paste using bpaste.net service.
@ -72,6 +79,7 @@ def create_new_paste(contents):
else:
return 'bad response: ' + response
def pytest_terminal_summary(terminalreporter):
import _pytest.config
if terminalreporter.config.option.pastebin != "failed":

third_party/python/pytest/_pytest/pytester.py

@ -1,4 +1,6 @@
""" (disabled by default) support for testing pytest and pytest plugins. """
from __future__ import absolute_import, division, print_function
import codecs
import gc
import os
@ -10,12 +12,14 @@ import time
import traceback
from fnmatch import fnmatch
from py.builtin import print_
from weakref import WeakKeyDictionary
from _pytest.capture import MultiCapture, SysCapture
from _pytest._code import Source
import py
import pytest
from _pytest.main import Session, EXIT_OK
from _pytest.assertion.rewrite import AssertionRewritingHook
def pytest_addoption(parser):
@ -84,7 +88,7 @@ class LsofFdLeakChecker(object):
return True
@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_item(self, item):
def pytest_runtest_protocol(self, item):
lines1 = self.get_open_files()
yield
if hasattr(sys, "pypy_version_info"):
@ -103,7 +107,8 @@ class LsofFdLeakChecker(object):
error.extend([str(f) for f in lines2])
error.append(error[0])
error.append("*** function %s:%s: %s " % item.location)
pytest.fail("\n".join(error), pytrace=False)
error.append("See issue #2366")
item.warn('', "\n".join(error))
# XXX copied from execnet's conftest.py - needs to be merged
@ -123,15 +128,18 @@ def getexecutable(name, cache={}):
except KeyError:
executable = py.path.local.sysfind(name)
if executable:
import subprocess
popen = subprocess.Popen([str(executable), "--version"],
universal_newlines=True, stderr=subprocess.PIPE)
out, err = popen.communicate()
if name == "jython":
import subprocess
popen = subprocess.Popen([str(executable), "--version"],
universal_newlines=True, stderr=subprocess.PIPE)
out, err = popen.communicate()
if not err or "2.5" not in err:
executable = None
if "2.5.2" in err:
executable = None # http://bugs.jython.org/issue1790
elif popen.returncode != 0:
# Handle pyenv's 127.
executable = None
cache[name] = executable
return executable
@ -222,15 +230,15 @@ class HookRecorder:
name, check = entries.pop(0)
for ind, call in enumerate(self.calls[i:]):
if call._name == name:
print_("NAMEMATCH", name, call)
print("NAMEMATCH", name, call)
if eval(check, backlocals, call.__dict__):
print_("CHECKERMATCH", repr(check), "->", call)
print("CHECKERMATCH", repr(check), "->", call)
else:
print_("NOCHECKERMATCH", repr(check), "-", call)
print("NOCHECKERMATCH", repr(check), "-", call)
continue
i += ind + 1
break
print_("NONAMEMATCH", name, "with", call)
print("NONAMEMATCH", name, "with", call)
else:
pytest.fail("could not find %r check %r" % (name, check))
@ -318,7 +326,8 @@ def linecomp(request):
return LineComp()
def pytest_funcarg__LineMatcher(request):
@pytest.fixture(name='LineMatcher')
def LineMatcher_fixture(request):
return LineMatcher
@ -327,7 +336,7 @@ def testdir(request, tmpdir_factory):
return Testdir(request, tmpdir_factory)
rex_outcome = re.compile("(\d+) ([\w-]+)")
rex_outcome = re.compile(r"(\d+) ([\w-]+)")
class RunResult:
"""The result of running a command.
@ -362,6 +371,7 @@ class RunResult:
for num, cat in outcomes:
d[cat] = int(num)
return d
raise ValueError("Pytest terminal report not found")
def assert_outcomes(self, passed=0, skipped=0, failed=0):
""" assert that the specified outcomes appear with the respective
@ -374,10 +384,10 @@ class RunResult:
class Testdir:
"""Temporary test directory with tools to test/run py.test itself.
"""Temporary test directory with tools to test/run pytest itself.
This is based on the ``tmpdir`` fixture but provides a number of
methods which aid with testing py.test itself. Unless
methods which aid with testing pytest itself. Unless
:py:meth:`chdir` is used all methods will use :py:attr:`tmpdir` as
current working directory.
@ -396,6 +406,7 @@ class Testdir:
def __init__(self, request, tmpdir_factory):
self.request = request
self._mod_collections = WeakKeyDictionary()
# XXX remove duplication with tmpdir plugin
basetmp = tmpdir_factory.ensuretemp("testdir")
name = request.function.__name__
@ -441,9 +452,10 @@ class Testdir:
the module is re-imported.
"""
for name in set(sys.modules).difference(self._savemodulekeys):
# it seems zope.interfaces is keeping some state
# (used by twisted related tests)
if name != "zope.interface":
# some zope modules used by twisted-related tests keeps internal
# state and can't be deleted; we had some trouble in the past
# with zope.interface for example
if not name.startswith("zope"):
del sys.modules[name]
def make_hook_recorder(self, pluginmanager):
@ -463,7 +475,7 @@ class Testdir:
if not hasattr(self, '_olddir'):
self._olddir = old
def _makefile(self, ext, args, kwargs):
def _makefile(self, ext, args, kwargs, encoding="utf-8"):
items = list(kwargs.items())
if args:
source = py.builtin._totext("\n").join(
@ -473,14 +485,17 @@ class Testdir:
ret = None
for name, value in items:
p = self.tmpdir.join(name).new(ext=ext)
p.dirpath().ensure_dir()
source = Source(value)
def my_totext(s, encoding="utf-8"):
if py.builtin._isbytes(s):
s = py.builtin._totext(s, encoding=encoding)
return s
source_unicode = "\n".join([my_totext(line) for line in source.lines])
source = py.builtin._totext(source_unicode)
content = source.strip().encode("utf-8") # + "\n"
content = source.strip().encode(encoding) # + "\n"
#content = content.rstrip() + "\n"
p.write(content, "wb")
if ret is None:
@ -557,7 +572,7 @@ class Testdir:
def mkpydir(self, name):
"""Create a new python package.
This creates a (sub)direcotry with an empty ``__init__.py``
This creates a (sub)directory with an empty ``__init__.py``
file so that is recognised as a python package.
"""
@ -588,7 +603,7 @@ class Testdir:
"""Return the collection node of a file.
This is like :py:meth:`getnode` but uses
:py:meth:`parseconfigure` to create the (configured) py.test
:py:meth:`parseconfigure` to create the (configured) pytest
Config instance.
:param path: A :py:class:`py.path.local` instance of the file.
@ -652,11 +667,11 @@ class Testdir:
def inline_genitems(self, *args):
"""Run ``pytest.main(['--collectonly'])`` in-process.
Retuns a tuple of the collected items and a
Returns a tuple of the collected items and a
:py:class:`HookRecorder` instance.
This runs the :py:func:`pytest.main` function to run all of
py.test inside the test process itself like
pytest inside the test process itself like
:py:meth:`inline_run`. However the return value is a tuple of
the collection items and a :py:class:`HookRecorder` instance.
@ -669,7 +684,7 @@ class Testdir:
"""Run ``pytest.main()`` in-process, returning a HookRecorder.
This runs the :py:func:`pytest.main` function to run all of
py.test inside the test process itself. This means it can
pytest inside the test process itself. This means it can
return a :py:class:`HookRecorder` instance which gives more
detailed results from then run then can be done by matching
stdout/stderr from :py:meth:`runpytest`.
@ -681,9 +696,21 @@ class Testdir:
``pytest.main()`` instance should use.
:return: A :py:class:`HookRecorder` instance.
"""
# When running py.test inline any plugins active in the main
# test process are already imported. So this disables the
# warning which will trigger to say they can no longer be
# re-written, which is fine as they are already re-written.
orig_warn = AssertionRewritingHook._warn_already_imported
def revert():
AssertionRewritingHook._warn_already_imported = orig_warn
self.request.addfinalizer(revert)
AssertionRewritingHook._warn_already_imported = lambda *a: None
rec = []
class Collect:
def pytest_configure(x, config):
rec.append(self.make_hook_recorder(config.pluginmanager))
@ -713,19 +740,24 @@ class Testdir:
if kwargs.get("syspathinsert"):
self.syspathinsert()
now = time.time()
capture = py.io.StdCapture()
capture = MultiCapture(Capture=SysCapture)
capture.start_capturing()
try:
try:
reprec = self.inline_run(*args, **kwargs)
except SystemExit as e:
class reprec:
ret = e.args[0]
except Exception:
traceback.print_exc()
class reprec:
ret = 3
finally:
out, err = capture.reset()
out, err = capture.readouterr()
capture.stop_capturing()
sys.stdout.write(out)
sys.stderr.write(err)
@ -755,9 +787,9 @@ class Testdir:
return args
def parseconfig(self, *args):
"""Return a new py.test Config instance from given commandline args.
"""Return a new pytest Config instance from given commandline args.
This invokes the py.test bootstrapping code in _pytest.config
This invokes the pytest bootstrapping code in _pytest.config
to create a new :py:class:`_pytest.core.PluginManager` and
call the pytest_cmdline_parse hook to create new
:py:class:`_pytest.config.Config` instance.
@ -777,7 +809,7 @@ class Testdir:
return config
def parseconfigure(self, *args):
"""Return a new py.test configured Config instance.
"""Return a new pytest configured Config instance.
This returns a new :py:class:`_pytest.config.Config` instance
like :py:meth:`parseconfig`, but also calls the
@ -792,7 +824,7 @@ class Testdir:
def getitem(self, source, funcname="test_func"):
"""Return the test item for a test function.
This writes the source to a python file and runs py.test's
This writes the source to a python file and runs pytest's
collection on the resulting module, returning the test item
for the requested function name.
@ -812,7 +844,7 @@ class Testdir:
def getitems(self, source):
"""Return all test items collected from the module.
This writes the source to a python file and runs py.test's
This writes the source to a python file and runs pytest's
collection on the resulting module, returning all test items
contained within.
@ -824,7 +856,7 @@ class Testdir:
"""Return the module collection node for ``source``.
This writes ``source`` to a file using :py:meth:`makepyfile`
and then runs the py.test collection on it, returning the
and then runs the pytest collection on it, returning the
collection node for the test module.
:param source: The source code of the module to collect.
@ -833,7 +865,7 @@ class Testdir:
:py:meth:`parseconfigure`.
:param withinit: Whether to also write a ``__init__.py`` file
to the temporarly directory to ensure it is a package.
to the temporary directory to ensure it is a package.
"""
kw = {self.request.function.__name__: Source(source).strip()}
@ -842,6 +874,7 @@ class Testdir:
self.makepyfile(__init__ = "#")
self.config = config = self.parseconfigure(path, *configargs)
node = self.getnode(config, path)
return node
def collect_by_name(self, modcol, name):
@ -856,7 +889,9 @@ class Testdir:
:param name: The name of the node to return.
"""
for colitem in modcol._memocollect():
if modcol not in self._mod_collections:
self._mod_collections[modcol] = list(modcol.collect())
for colitem in self._mod_collections[modcol]:
if colitem.name == name:
return colitem
@ -891,8 +926,8 @@ class Testdir:
cmdargs = [str(x) for x in cmdargs]
p1 = self.tmpdir.join("stdout")
p2 = self.tmpdir.join("stderr")
print_("running:", ' '.join(cmdargs))
print_(" in:", str(py.path.local()))
print("running:", ' '.join(cmdargs))
print(" in:", str(py.path.local()))
f1 = codecs.open(str(p1), "w", encoding="utf8")
f2 = codecs.open(str(p2), "w", encoding="utf8")
try:
@ -918,13 +953,13 @@ class Testdir:
def _dump_lines(self, lines, fp):
try:
for line in lines:
py.builtin.print_(line, file=fp)
print(line, file=fp)
except UnicodeEncodeError:
print("couldn't print to %s because of encoding" % (fp,))
def _getpytestargs(self):
# we cannot use "(sys.executable,script)"
# because on windows the script is e.g. a py.test.exe
# because on windows the script is e.g. a pytest.exe
return (sys.executable, _pytest_fullpath,) # noqa
def runpython(self, script):
@ -939,7 +974,7 @@ class Testdir:
return self.run(sys.executable, "-c", command)
def runpytest_subprocess(self, *args, **kwargs):
"""Run py.test as a subprocess with given arguments.
"""Run pytest as a subprocess with given arguments.
Any plugins added to the :py:attr:`plugins` list will added
using the ``-p`` command line option. Addtionally
@ -967,15 +1002,15 @@ class Testdir:
return self.run(*args)
def spawn_pytest(self, string, expect_timeout=10.0):
"""Run py.test using pexpect.
"""Run pytest using pexpect.
This makes sure to use the right py.test and sets up the
This makes sure to use the right pytest and sets up the
temporary directory locations.
The pexpect child is returned.
"""
basetemp = self.tmpdir.mkdir("pexpect")
basetemp = self.tmpdir.mkdir("temp-pexpect")
invoke = " ".join(map(str, self._getpytestargs()))
cmd = "%s --basetemp=%s %s" % (invoke, basetemp, string)
return self.spawn(cmd, expect_timeout=expect_timeout)
@ -988,8 +1023,6 @@ class Testdir:
pexpect = pytest.importorskip("pexpect", "3.0")
if hasattr(sys, 'pypy_version_info') and '64' in platform.machine():
pytest.skip("pypy-64 bit not supported")
if sys.platform == "darwin":
pytest.xfail("pexpect does not work reliably on darwin?!")
if sys.platform.startswith("freebsd"):
pytest.xfail("pexpect does not work reliably on freebsd")
logfile = self.tmpdir.join("spawn.out").open("wb")
@ -1035,6 +1068,7 @@ class LineMatcher:
def __init__(self, lines):
self.lines = lines
self._log_output = []
def str(self):
"""Return the entire original text."""
@ -1058,10 +1092,11 @@ class LineMatcher:
for line in lines2:
for x in self.lines:
if line == x or fnmatch(x, line):
print_("matched: ", repr(line))
self._log("matched: ", repr(line))
break
else:
raise ValueError("line %r not found in output" % line)
self._log("line %r not found in output" % line)
raise ValueError(self._log_text)
def get_lines_after(self, fnline):
"""Return all lines following the given line in the text.
@ -1073,6 +1108,13 @@ class LineMatcher:
return self.lines[i+1:]
raise ValueError("line %r not found in output" % fnline)
def _log(self, *args):
self._log_output.append(' '.join((str(x) for x in args)))
@property
def _log_text(self):
return '\n'.join(self._log_output)
def fnmatch_lines(self, lines2):
"""Search the text for matching lines.
@ -1082,8 +1124,6 @@ class LineMatcher:
stdout.
"""
def show(arg1, arg2):
py.builtin.print_(arg1, arg2, file=sys.stderr)
lines2 = self._getlines(lines2)
lines1 = self.lines[:]
nextline = None
@ -1094,17 +1134,18 @@ class LineMatcher:
while lines1:
nextline = lines1.pop(0)
if line == nextline:
show("exact match:", repr(line))
self._log("exact match:", repr(line))
break
elif fnmatch(nextline, line):
show("fnmatch:", repr(line))
show(" with:", repr(nextline))
self._log("fnmatch:", repr(line))
self._log(" with:", repr(nextline))
break
else:
if not nomatchprinted:
show("nomatch:", repr(line))
self._log("nomatch:", repr(line))
nomatchprinted = True
show(" and:", repr(nextline))
self._log(" and:", repr(nextline))
extralines.append(nextline)
else:
pytest.fail("remains unmatched: %r, see stderr" % (line,))
self._log("remains unmatched: %r" % (line,))
pytest.fail(self._log_text)
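
The LineMatcher changes above replace ad-hoc printing to stderr with an internal match log that is attached to the failure message. A minimal sketch of how this surfaces in a plugin-style test, assuming the built-in "pytester" plugin is enabled via a module-level ``pytest_plugins`` declaration (file name and contents are illustrative):
```
import pytest

pytest_plugins = "pytester"


def test_run_is_reported(testdir):
    testdir.makepyfile(
        """
        def test_ok():
            assert 1 + 1 == 2
        """
    )
    result = testdir.runpytest()
    # fnmatch_lines() records each match attempt via _log(); on a miss it
    # now calls pytest.fail() with the accumulated log instead of writing
    # to stderr.
    result.stdout.fnmatch_lines(["*1 passed*"])
```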

2031
third_party/python/pytest/_pytest/python.py vendored

Diff not shown because of its large size.

145
third_party/python/pytest/_pytest/recwarn.py vendored

@ -1,4 +1,5 @@
""" recording warnings during test function execution. """
from __future__ import absolute_import, division, print_function
import inspect
@ -6,11 +7,11 @@ import _pytest._code
import py
import sys
import warnings
import pytest
from _pytest.fixtures import yield_fixture
@pytest.yield_fixture
def recwarn(request):
@yield_fixture
def recwarn():
"""Return a WarningsRecorder instance that provides these methods:
* ``pop(category=None)``: return last warning matching the category.
@ -25,54 +26,59 @@ def recwarn(request):
yield wrec
def pytest_namespace():
return {'deprecated_call': deprecated_call,
'warns': warns}
def deprecated_call(func=None, *args, **kwargs):
""" assert that calling ``func(*args, **kwargs)`` triggers a
``DeprecationWarning`` or ``PendingDeprecationWarning``.
"""context manager that can be used to ensure a block of code triggers a
``DeprecationWarning`` or ``PendingDeprecationWarning``::
This function can be used as a context manager::
>>> import warnings
>>> def api_call_v2():
... warnings.warn('use v3 of this api', DeprecationWarning)
... return 200
>>> with deprecated_call():
... myobject.deprecated_method()
... assert api_call_v2() == 200
Note: we cannot use WarningsRecorder here because it is still subject
to the mechanism that prevents warnings of the same type from being
triggered twice for the same module. See #1190.
``deprecated_call`` can also be used by passing a function and ``*args`` and ``**kwargs``,
in which case it will ensure calling ``func(*args, **kwargs)`` produces one of the warning
types above.
"""
if not func:
return WarningsChecker(expected_warning=DeprecationWarning)
categories = []
def warn_explicit(message, category, *args, **kwargs):
categories.append(category)
old_warn_explicit(message, category, *args, **kwargs)
def warn(message, category=None, *args, **kwargs):
if isinstance(message, Warning):
categories.append(message.__class__)
else:
categories.append(category)
old_warn(message, category, *args, **kwargs)
old_warn = warnings.warn
old_warn_explicit = warnings.warn_explicit
warnings.warn_explicit = warn_explicit
warnings.warn = warn
try:
ret = func(*args, **kwargs)
finally:
warnings.warn_explicit = old_warn_explicit
warnings.warn = old_warn
deprecation_categories = (DeprecationWarning, PendingDeprecationWarning)
if not any(issubclass(c, deprecation_categories) for c in categories):
return _DeprecatedCallContext()
else:
__tracebackhide__ = True
raise AssertionError("%r did not produce DeprecationWarning" % (func,))
return ret
with _DeprecatedCallContext():
return func(*args, **kwargs)
class _DeprecatedCallContext(object):
"""Implements the logic to capture deprecation warnings as a context manager."""
def __enter__(self):
self._captured_categories = []
self._old_warn = warnings.warn
self._old_warn_explicit = warnings.warn_explicit
warnings.warn_explicit = self._warn_explicit
warnings.warn = self._warn
def _warn_explicit(self, message, category, *args, **kwargs):
self._captured_categories.append(category)
def _warn(self, message, category=None, *args, **kwargs):
if isinstance(message, Warning):
self._captured_categories.append(message.__class__)
else:
self._captured_categories.append(category)
def __exit__(self, exc_type, exc_val, exc_tb):
warnings.warn_explicit = self._old_warn_explicit
warnings.warn = self._old_warn
if exc_type is None:
deprecation_categories = (DeprecationWarning, PendingDeprecationWarning)
if not any(issubclass(c, deprecation_categories) for c in self._captured_categories):
__tracebackhide__ = True
msg = "Did not produce DeprecationWarning or PendingDeprecationWarning"
raise AssertionError(msg)
def warns(expected_warning, *args, **kwargs):
@ -110,24 +116,14 @@ def warns(expected_warning, *args, **kwargs):
return func(*args[1:], **kwargs)
class RecordedWarning(object):
def __init__(self, message, category, filename, lineno, file, line):
self.message = message
self.category = category
self.filename = filename
self.lineno = lineno
self.file = file
self.line = line
class WarningsRecorder(object):
class WarningsRecorder(warnings.catch_warnings):
"""A context manager to record raised warnings.
Adapted from `warnings.catch_warnings`.
"""
def __init__(self, module=None):
self._module = sys.modules['warnings'] if module is None else module
def __init__(self):
super(WarningsRecorder, self).__init__(record=True)
self._entered = False
self._list = []
@ -164,38 +160,20 @@ class WarningsRecorder(object):
if self._entered:
__tracebackhide__ = True
raise RuntimeError("Cannot enter %r twice" % self)
self._entered = True
self._filters = self._module.filters
self._module.filters = self._filters[:]
self._showwarning = self._module.showwarning
def showwarning(message, category, filename, lineno,
file=None, line=None):
self._list.append(RecordedWarning(
message, category, filename, lineno, file, line))
# still perform old showwarning functionality
self._showwarning(
message, category, filename, lineno, file=file, line=line)
self._module.showwarning = showwarning
# allow the same warning to be raised more than once
self._module.simplefilter('always')
self._list = super(WarningsRecorder, self).__enter__()
warnings.simplefilter('always')
return self
def __exit__(self, *exc_info):
if not self._entered:
__tracebackhide__ = True
raise RuntimeError("Cannot exit %r without entering first" % self)
self._module.filters = self._filters
self._module.showwarning = self._showwarning
super(WarningsRecorder, self).__exit__(*exc_info)
class WarningsChecker(WarningsRecorder):
def __init__(self, expected_warning=None, module=None):
super(WarningsChecker, self).__init__(module=module)
def __init__(self, expected_warning=None):
super(WarningsChecker, self).__init__()
msg = ("exceptions must be old-style classes or "
"derived from Warning, not %s")
@ -216,6 +194,11 @@ class WarningsChecker(WarningsRecorder):
# only check if we're not currently handling an exception
if all(a is None for a in exc_info):
if self.expected_warning is not None:
if not any(r.category in self.expected_warning for r in self):
if not any(issubclass(r.category, self.expected_warning)
for r in self):
__tracebackhide__ = True
pytest.fail("DID NOT WARN")
from _pytest.runner import fail
fail("DID NOT WARN. No warnings of type {0} was emitted. "
"The list of emitted warnings is: {1}.".format(
self.expected_warning,
[each.message for each in self]))
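
A short sketch of the reworked ``deprecated_call`` (context-manager and callable forms) and of ``warns``, whose checker above now matches categories with ``issubclass`` rather than membership; the helper function and warning classes are illustrative:
```
import warnings

import pytest


def deprecated_square(x):
    warnings.warn("use square_v2 instead", DeprecationWarning)
    return x * x


def test_deprecated_call_as_context_manager():
    with pytest.deprecated_call():
        assert deprecated_square(3) == 9


def test_deprecated_call_with_callable():
    # the callable form wraps the call in the same _DeprecatedCallContext
    assert pytest.deprecated_call(deprecated_square, 4) == 16


def test_warns_accepts_subclasses():
    class MyUserWarning(UserWarning):
        pass

    # issubclass() matching means a subclass of the expected category passes
    with pytest.warns(UserWarning):
        warnings.warn("beware", MyUserWarning)
```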


@ -1,6 +1,7 @@
""" log machine-parseable test session result information in a plain
text file.
"""
from __future__ import absolute_import, division, print_function
import py
import os
@ -9,7 +10,7 @@ def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "resultlog plugin options")
group.addoption('--resultlog', '--result-log', action="store",
metavar="path", default=None,
help="path for machine-readable result log.")
help="DEPRECATED path for machine-readable result log.")
def pytest_configure(config):
resultlog = config.option.resultlog
@ -22,6 +23,9 @@ def pytest_configure(config):
config._resultlog = ResultLog(config, logfile)
config.pluginmanager.register(config._resultlog)
from _pytest.deprecated import RESULT_LOG
config.warn('C1', RESULT_LOG)
def pytest_unconfigure(config):
resultlog = getattr(config, '_resultlog', None)
if resultlog:
@ -58,9 +62,9 @@ class ResultLog(object):
self.logfile = logfile # preferably line buffered
def write_log_entry(self, testpath, lettercode, longrepr):
py.builtin.print_("%s %s" % (lettercode, testpath), file=self.logfile)
print("%s %s" % (lettercode, testpath), file=self.logfile)
for line in longrepr.splitlines():
py.builtin.print_(" %s" % line, file=self.logfile)
print(" %s" % line, file=self.logfile)
def log_outcome(self, report, lettercode, longrepr):
testpath = getattr(report, 'nodeid', None)

109
third_party/python/pytest/_pytest/runner.py vendored

@ -1,20 +1,14 @@
""" basic collect and runtest protocol implementations """
from __future__ import absolute_import, division, print_function
import bdb
import sys
from time import time
import py
import pytest
from _pytest._code.code import TerminalRepr, ExceptionInfo
def pytest_namespace():
return {
'fail' : fail,
'skip' : skip,
'importorskip' : importorskip,
'exit' : exit,
}
#
# pytest plugin hooks
@ -73,7 +67,10 @@ def runtestprotocol(item, log=True, nextitem=None):
rep = call_and_report(item, "setup", log)
reports = [rep]
if rep.passed:
reports.append(call_and_report(item, "call", log))
if item.config.option.setupshow:
show_test_item(item)
if not item.config.option.setuponly:
reports.append(call_and_report(item, "call", log))
reports.append(call_and_report(item, "teardown", log,
nextitem=nextitem))
# after all teardown hooks have been called
@ -83,6 +80,16 @@ def runtestprotocol(item, log=True, nextitem=None):
item.funcargs = None
return reports
def show_test_item(item):
"""Show test function, parameters and the fixtures of the test item."""
tw = item.config.get_terminal_writer()
tw.line()
tw.write(' ' * 8)
tw.write(item._nodeid)
used_fixtures = sorted(item._fixtureinfo.name2fixturedefs.keys())
if used_fixtures:
tw.write(' (fixtures used: {0})'.format(', '.join(used_fixtures)))
def pytest_runtest_setup(item):
item.session._setupstate.prepare(item)
@ -198,6 +205,36 @@ class BaseReport(object):
if name.startswith(prefix):
yield prefix, content
@property
def longreprtext(self):
"""
Read-only property that returns the full string representation
of ``longrepr``.
.. versionadded:: 3.0
"""
tw = py.io.TerminalWriter(stringio=True)
tw.hasmarkup = False
self.toterminal(tw)
exc = tw.stringio.getvalue()
return exc.strip()
@property
def capstdout(self):
"""Return captured text from stdout, if capturing is enabled
.. versionadded:: 3.0
"""
return ''.join(content for (prefix, content) in self.get_sections('Captured stdout'))
@property
def capstderr(self):
"""Return captured text from stderr, if capturing is enabled
.. versionadded:: 3.0
"""
return ''.join(content for (prefix, content) in self.get_sections('Captured stderr'))
passed = property(lambda x: x.outcome == "passed")
failed = property(lambda x: x.outcome == "failed")
skipped = property(lambda x: x.outcome == "skipped")
@ -219,7 +256,7 @@ def pytest_runtest_makereport(item, call):
if not isinstance(excinfo, ExceptionInfo):
outcome = "failed"
longrepr = excinfo
elif excinfo.errisinstance(pytest.skip.Exception):
elif excinfo.errisinstance(skip.Exception):
outcome = "skipped"
r = excinfo._getreprcrash()
longrepr = (str(r.path), r.lineno, r.message)
@ -263,8 +300,10 @@ class TestReport(BaseReport):
#: one of 'setup', 'call', 'teardown' to indicate runtest phase.
self.when = when
#: list of (secname, data) extra information which needs to
#: marshallable
#: list of pairs ``(str, str)`` of extra information which needs to
#: marshallable. Used by pytest to add captured text
#: from ``stdout`` and ``stderr``, but may be used by other plugins
#: to add arbitrary information to reports.
self.sections = list(sections)
#: time it took to run just the test
@ -285,7 +324,9 @@ class TeardownErrorReport(BaseReport):
self.__dict__.update(extra)
def pytest_make_collect_report(collector):
call = CallInfo(collector._memocollect, "memocollect")
call = CallInfo(
lambda: list(collector.collect()),
'collect')
longrepr = None
if not call.excinfo:
outcome = "passed"
@ -447,10 +488,16 @@ class Skipped(OutcomeException):
# in order to have Skipped exception printing shorter/nicer
__module__ = 'builtins'
def __init__(self, msg=None, pytrace=True, allow_module_level=False):
OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
self.allow_module_level = allow_module_level
class Failed(OutcomeException):
""" raised from an explicit call to pytest.fail() """
__module__ = 'builtins'
class Exit(KeyboardInterrupt):
""" raised for immediate program exits (no tracebacks/summaries)"""
def __init__(self, msg="unknown reason"):
@ -464,8 +511,10 @@ def exit(msg):
__tracebackhide__ = True
raise Exit(msg)
exit.Exception = Exit
def skip(msg=""):
""" skip an executing test with the given message. Note: it's usually
better to use the pytest.mark.skipif marker to declare a test to be
@ -474,8 +523,11 @@ def skip(msg=""):
"""
__tracebackhide__ = True
raise Skipped(msg=msg)
skip.Exception = Skipped
def fail(msg="", pytrace=True):
""" explicitly fail an currently-executing test with the given Message.
@ -484,6 +536,8 @@ def fail(msg="", pytrace=True):
"""
__tracebackhide__ = True
raise Failed(msg=msg, pytrace=pytrace)
fail.Exception = Failed
@ -492,12 +546,23 @@ def importorskip(modname, minversion=None):
__version__ attribute. If no minversion is specified then a skip
is only triggered if the module cannot be imported.
"""
import warnings
__tracebackhide__ = True
compile(modname, '', 'eval') # to catch syntaxerrors
try:
__import__(modname)
except ImportError:
skip("could not import %r" %(modname,))
should_skip = False
with warnings.catch_warnings():
# make sure to ignore ImportWarnings that might happen because
# of existing directories with the same name we're trying to
# import but without a __init__.py file
warnings.simplefilter('ignore')
try:
__import__(modname)
except ImportError:
# Do not raise chained exception here(#1485)
should_skip = True
if should_skip:
raise Skipped("could not import %r" %(modname,), allow_module_level=True)
mod = sys.modules[modname]
if minversion is None:
return mod
@ -506,10 +571,10 @@ def importorskip(modname, minversion=None):
try:
from pkg_resources import parse_version as pv
except ImportError:
skip("we have a required version for %r but can not import "
"no pkg_resources to parse version strings." %(modname,))
raise Skipped("we have a required version for %r but can not import "
"pkg_resources to parse version strings." % (modname,),
allow_module_level=True)
if verattr is None or pv(verattr) < pv(minversion):
skip("module %r has __version__ %r, required is: %r" %(
modname, verattr, minversion))
raise Skipped("module %r has __version__ %r, required is: %r" %(
modname, verattr, minversion), allow_module_level=True)
return mod
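
``importorskip`` now raises ``Skipped`` with ``allow_module_level=True`` and silences ImportWarnings during the import probe, so it can be used at import time of a test module. A minimal sketch (the module name and minimum version are placeholders):
```
import pytest

# skips the whole module if the dependency is missing or too old;
# allow_module_level=True is what makes the module-level skip legal
docutils = pytest.importorskip("docutils", minversion="0.12")


def test_dependency_is_importable():
    assert docutils.__version__
```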

74
third_party/python/pytest/_pytest/setuponly.py vendored Normal file

@ -0,0 +1,74 @@
from __future__ import absolute_import, division, print_function
import pytest
import sys
def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption('--setuponly', '--setup-only', action="store_true",
help="only setup fixtures, do not execute tests.")
group.addoption('--setupshow', '--setup-show', action="store_true",
help="show setup of fixtures while executing tests.")
@pytest.hookimpl(hookwrapper=True)
def pytest_fixture_setup(fixturedef, request):
yield
config = request.config
if config.option.setupshow:
if hasattr(request, 'param'):
# Save the fixture parameter so ._show_fixture_action() can
# display it now and during the teardown (in .finish()).
if fixturedef.ids:
if callable(fixturedef.ids):
fixturedef.cached_param = fixturedef.ids(request.param)
else:
fixturedef.cached_param = fixturedef.ids[
request.param_index]
else:
fixturedef.cached_param = request.param
_show_fixture_action(fixturedef, 'SETUP')
def pytest_fixture_post_finalizer(fixturedef):
if hasattr(fixturedef, "cached_result"):
config = fixturedef._fixturemanager.config
if config.option.setupshow:
_show_fixture_action(fixturedef, 'TEARDOWN')
if hasattr(fixturedef, "cached_param"):
del fixturedef.cached_param
def _show_fixture_action(fixturedef, msg):
config = fixturedef._fixturemanager.config
capman = config.pluginmanager.getplugin('capturemanager')
if capman:
out, err = capman.suspendcapture()
tw = config.get_terminal_writer()
tw.line()
tw.write(' ' * 2 * fixturedef.scopenum)
tw.write('{step} {scope} {fixture}'.format(
step=msg.ljust(8), # align the output to TEARDOWN
scope=fixturedef.scope[0].upper(),
fixture=fixturedef.argname))
if msg == 'SETUP':
deps = sorted(arg for arg in fixturedef.argnames if arg != 'request')
if deps:
tw.write(' (fixtures used: {0})'.format(', '.join(deps)))
if hasattr(fixturedef, 'cached_param'):
tw.write('[{0}]'.format(fixturedef.cached_param))
if capman:
capman.resumecapture()
sys.stdout.write(out)
sys.stderr.write(err)
@pytest.hookimpl(tryfirst=True)
def pytest_cmdline_main(config):
if config.option.setuponly:
config.option.setupshow = True
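
A sketch of a test module whose fixture SETUP/TEARDOWN lines the new --setup-show and --setup-only options would print, including the cached parameter id saved by pytest_fixture_setup above; run it e.g. with ``pytest --setup-show`` (fixture and test names are illustrative):
```
import pytest


@pytest.fixture(params=["sqlite", "postgres"], ids=["lite", "pg"])
def db(request):
    # with --setup-show the SETUP/TEARDOWN lines carry the parameter id
    # ("lite"/"pg") stored in fixturedef.cached_param
    yield {"backend": request.param}


def test_db_backend(db):
    assert db["backend"] in ("sqlite", "postgres")
```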

25
third_party/python/pytest/_pytest/setupplan.py vendored Normal file

@ -0,0 +1,25 @@
from __future__ import absolute_import, division, print_function
import pytest
def pytest_addoption(parser):
group = parser.getgroup("debugconfig")
group.addoption('--setupplan', '--setup-plan', action="store_true",
help="show what fixtures and tests would be executed but "
"don't execute anything.")
@pytest.hookimpl(tryfirst=True)
def pytest_fixture_setup(fixturedef, request):
# Will return a dummy fixture if the setuponly option is provided.
if request.config.option.setupplan:
fixturedef.cached_result = (None, None, None)
return fixturedef.cached_result
@pytest.hookimpl(tryfirst=True)
def pytest_cmdline_main(config):
if config.option.setupplan:
config.option.setuponly = True
config.option.setupshow = True

109
third_party/python/pytest/_pytest/skipping.py vendored

@ -1,12 +1,14 @@
""" support for skip/xfail functions and markers. """
from __future__ import absolute_import, division, print_function
import os
import sys
import traceback
import py
import pytest
from _pytest.config import hookimpl
from _pytest.mark import MarkInfo, MarkDecorator
from _pytest.runner import fail, skip
def pytest_addoption(parser):
group = parser.getgroup("general")
@ -23,10 +25,14 @@ def pytest_addoption(parser):
def pytest_configure(config):
if config.option.runxfail:
# yay a hack
import pytest
old = pytest.xfail
config._cleanup.append(lambda: setattr(pytest, "xfail", old))
def nop(*args, **kwargs):
pass
nop.Exception = XFailed
setattr(pytest, "xfail", nop)
@ -44,7 +50,7 @@ def pytest_configure(config):
)
config.addinivalue_line("markers",
"xfail(condition, reason=None, run=True, raises=None, strict=False): "
"mark the the test function as an expected failure if eval(condition) "
"mark the test function as an expected failure if eval(condition) "
"has a True value. Optionally specify a reason for better reporting "
"and run=False if you don't even want to execute the test function. "
"If only specific exception(s) are expected, you can list them in "
@ -53,11 +59,7 @@ def pytest_configure(config):
)
def pytest_namespace():
return dict(xfail=xfail)
class XFailed(pytest.fail.Exception):
class XFailed(fail.Exception):
""" raised from an explicit call to pytest.xfail() """
@ -65,6 +67,8 @@ def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
__tracebackhide__ = True
raise XFailed(reason)
xfail.Exception = XFailed
@ -96,52 +100,47 @@ class MarkEvaluator:
except Exception:
self.exc = sys.exc_info()
if isinstance(self.exc[1], SyntaxError):
msg = [" " * (self.exc[1].offset + 4) + "^",]
msg = [" " * (self.exc[1].offset + 4) + "^", ]
msg.append("SyntaxError: invalid syntax")
else:
msg = traceback.format_exception_only(*self.exc[:2])
pytest.fail("Error evaluating %r expression\n"
" %s\n"
"%s"
%(self.name, self.expr, "\n".join(msg)),
pytrace=False)
fail("Error evaluating %r expression\n"
" %s\n"
"%s"
% (self.name, self.expr, "\n".join(msg)),
pytrace=False)
def _getglobals(self):
d = {'os': os, 'sys': sys, 'config': self.item.config}
func = self.item.obj
try:
d.update(func.__globals__)
except AttributeError:
d.update(func.func_globals)
if hasattr(self.item, 'obj'):
d.update(self.item.obj.__globals__)
return d
def _istrue(self):
if hasattr(self, 'result'):
return self.result
if self.holder:
d = self._getglobals()
if self.holder.args or 'condition' in self.holder.kwargs:
self.result = False
# "holder" might be a MarkInfo or a MarkDecorator; only
# MarkInfo keeps track of all parameters it received in an
# _arglist attribute
if hasattr(self.holder, '_arglist'):
arglist = self.holder._arglist
else:
arglist = [(self.holder.args, self.holder.kwargs)]
for args, kwargs in arglist:
marks = getattr(self.holder, '_marks', None) \
or [self.holder.mark]
for _, args, kwargs in marks:
if 'condition' in kwargs:
args = (kwargs['condition'],)
for expr in args:
self.expr = expr
if isinstance(expr, py.builtin._basestring):
d = self._getglobals()
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in kwargs:
# XXX better be checked at collection time
msg = "you need to specify reason=STRING " \
"when using booleans as conditions."
pytest.fail(msg)
fail(msg)
result = bool(expr)
if result:
self.result = True
@ -165,7 +164,7 @@ class MarkEvaluator:
return expl
@pytest.hookimpl(tryfirst=True)
@hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
# Check if skip or skipif are specified as pytest marks
@ -174,23 +173,23 @@ def pytest_runtest_setup(item):
eval_skipif = MarkEvaluator(item, 'skipif')
if eval_skipif.istrue():
item._evalskip = eval_skipif
pytest.skip(eval_skipif.getexplanation())
skip(eval_skipif.getexplanation())
skip_info = item.keywords.get('skip')
if isinstance(skip_info, (MarkInfo, MarkDecorator)):
item._evalskip = True
if 'reason' in skip_info.kwargs:
pytest.skip(skip_info.kwargs['reason'])
skip(skip_info.kwargs['reason'])
elif skip_info.args:
pytest.skip(skip_info.args[0])
skip(skip_info.args[0])
else:
pytest.skip("unconditional skip")
skip("unconditional skip")
item._evalxfail = MarkEvaluator(item, 'xfail')
check_xfail_no_run(item)
@pytest.mark.hookwrapper
@hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem):
check_xfail_no_run(pyfuncitem)
outcome = yield
@ -205,7 +204,7 @@ def check_xfail_no_run(item):
evalxfail = item._evalxfail
if evalxfail.istrue():
if not evalxfail.get('run', True):
pytest.xfail("[NOTRUN] " + evalxfail.getexplanation())
xfail("[NOTRUN] " + evalxfail.getexplanation())
def check_strict_xfail(pyfuncitem):
@ -217,10 +216,10 @@ def check_strict_xfail(pyfuncitem):
if is_strict_xfail:
del pyfuncitem._evalxfail
explanation = evalxfail.getexplanation()
pytest.fail('[XPASS(strict)] ' + explanation, pytrace=False)
fail('[XPASS(strict)] ' + explanation, pytrace=False)
@pytest.hookimpl(hookwrapper=True)
@hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
rep = outcome.get_result()
@ -228,12 +227,19 @@ def pytest_runtest_makereport(item, call):
evalskip = getattr(item, '_evalskip', None)
# unitttest special case, see setting of _unexpectedsuccess
if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
# we need to translate into how pytest encodes xpass
rep.wasxfail = "reason: " + repr(item._unexpectedsuccess)
rep.outcome = "failed"
from _pytest.compat import _is_unittest_unexpected_success_a_failure
if item._unexpectedsuccess:
rep.longrepr = "Unexpected success: {0}".format(item._unexpectedsuccess)
else:
rep.longrepr = "Unexpected success"
if _is_unittest_unexpected_success_a_failure():
rep.outcome = "failed"
else:
rep.outcome = "passed"
rep.wasxfail = rep.longrepr
elif item.config.option.runxfail:
pass # don't interfere
elif call.excinfo and call.excinfo.errisinstance(pytest.xfail.Exception):
elif call.excinfo and call.excinfo.errisinstance(xfail.Exception):
rep.wasxfail = "reason: " + call.excinfo.value.msg
rep.outcome = "skipped"
elif evalxfail and not rep.skipped and evalxfail.wasvalid() and \
@ -245,8 +251,15 @@ def pytest_runtest_makereport(item, call):
rep.outcome = "skipped"
rep.wasxfail = evalxfail.getexplanation()
elif call.when == "call":
rep.outcome = "failed" # xpass outcome
rep.wasxfail = evalxfail.getexplanation()
strict_default = item.config.getini('xfail_strict')
is_strict_xfail = evalxfail.get('strict', strict_default)
explanation = evalxfail.getexplanation()
if is_strict_xfail:
rep.outcome = "failed"
rep.longrepr = "[XPASS(strict)] {0}".format(explanation)
else:
rep.outcome = "passed"
rep.wasxfail = explanation
elif evalskip is not None and rep.skipped and type(rep.longrepr) is tuple:
# skipped by mark.skipif; change the location of the failure
# to point to the item definition, otherwise it will display
@ -260,7 +273,7 @@ def pytest_report_teststatus(report):
if hasattr(report, "wasxfail"):
if report.skipped:
return "xfailed", "x", "xfail"
elif report.failed:
elif report.passed:
return "xpassed", "X", ("XPASS", {'yellow': True})
# called by the terminalreporter instance/plugin
@ -294,12 +307,14 @@ def pytest_terminal_summary(terminalreporter):
for line in lines:
tr._tw.line(line)
def show_simple(terminalreporter, lines, stat, format):
failed = terminalreporter.stats.get(stat)
if failed:
for rep in failed:
pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
lines.append(format %(pos,))
lines.append(format % (pos,))
def show_xfailed(terminalreporter, lines):
xfailed = terminalreporter.stats.get("xfailed")
@ -311,13 +326,15 @@ def show_xfailed(terminalreporter, lines):
if reason:
lines.append(" " + str(reason))
def show_xpassed(terminalreporter, lines):
xpassed = terminalreporter.stats.get("xpassed")
if xpassed:
for rep in xpassed:
pos = terminalreporter.config.cwd_relative_nodeid(rep.nodeid)
reason = rep.wasxfail
lines.append("XPASS %s %s" %(pos, reason))
lines.append("XPASS %s %s" % (pos, reason))
def cached_eval(config, expr, d):
if not hasattr(config, '_evalcache'):
@ -342,6 +359,7 @@ def folded_skips(skipped):
l.append((len(events),) + key)
return l
def show_skipped(terminalreporter, lines):
tr = terminalreporter
skipped = tr.stats.get('skipped', [])
@ -357,5 +375,6 @@ def show_skipped(terminalreporter, lines):
for num, fspath, lineno, reason in fskips:
if reason.startswith("Skipped: "):
reason = reason[9:]
lines.append("SKIP [%d] %s:%d: %s" %
lines.append(
"SKIP [%d] %s:%d: %s" %
(num, fspath, lineno, reason))
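
The xpass handling above now distinguishes strict from non-strict xfail at report time; a brief sketch of both markers (reasons and assertions are placeholders):
```
import pytest


@pytest.mark.xfail(reason="known precision bug", strict=True)
def test_strict_xfail():
    # if this unexpectedly passed, the report outcome would now be "failed"
    # with a "[XPASS(strict)]" longrepr
    assert 1 / 3 == 0.333


@pytest.mark.xfail(reason="flaky backend", strict=False)
def test_non_strict_xfail():
    # an unexpected pass is now reported as "passed" with wasxfail set,
    # shown as "X"/"XPASS" by pytest_report_teststatus
    assert True
```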


@ -1,89 +0,0 @@
#! /usr/bin/env python
# Hi There!
# You may be wondering what this giant blob of binary data here is, you might
# even be worried that we're up to something nefarious (good for you for being
# paranoid!). This is a base64 encoding of a zip file, this zip file contains
# a fully functional basic pytest script.
#
# Pytest is a thing that tests packages, pytest itself is a package that some-
# one might want to install, especially if they're looking to run tests inside
# some package they want to install. Pytest has a lot of code to collect and
# execute tests, and other such sort of "tribal knowledge" that has been en-
# coded in its code base. Because of this we basically include a basic copy
# of pytest inside this blob. We do this because it let's you as a maintainer
# or application developer who wants people who don't deal with python much to
# easily run tests without installing the complete pytest package.
#
# If you're wondering how this is created: you can create it yourself if you
# have a complete pytest installation by using this command on the command-
# line: ``py.test --genscript=runtests.py``.
sources = """
@SOURCES@"""
import sys
import base64
import zlib
class DictImporter(object):
def __init__(self, sources):
self.sources = sources
def find_module(self, fullname, path=None):
if fullname == "argparse" and sys.version_info >= (2,7):
# we were generated with <python2.7 (which pulls in argparse)
# but we are running now on a stdlib which has it, so use that.
return None
if fullname in self.sources:
return self
if fullname + '.__init__' in self.sources:
return self
return None
def load_module(self, fullname):
# print "load_module:", fullname
from types import ModuleType
try:
s = self.sources[fullname]
is_pkg = False
except KeyError:
s = self.sources[fullname + '.__init__']
is_pkg = True
co = compile(s, fullname, 'exec')
module = sys.modules.setdefault(fullname, ModuleType(fullname))
module.__file__ = "%s/%s" % (__file__, fullname)
module.__loader__ = self
if is_pkg:
module.__path__ = [fullname]
do_exec(co, module.__dict__) # noqa
return sys.modules[fullname]
def get_source(self, name):
res = self.sources.get(name)
if res is None:
res = self.sources.get(name + '.__init__')
return res
if __name__ == "__main__":
try:
import pkg_resources # noqa
except ImportError:
sys.stderr.write("ERROR: setuptools not installed\n")
sys.exit(2)
if sys.version_info >= (3, 0):
exec("def do_exec(co, loc): exec(co, loc)\n")
import pickle
sources = sources.encode("ascii") # ensure bytes
sources = pickle.loads(zlib.decompress(base64.decodebytes(sources)))
else:
import cPickle as pickle
exec("def do_exec(co, loc): exec co in loc\n")
sources = pickle.loads(zlib.decompress(base64.decodestring(sources)))
importer = DictImporter(sources)
sys.meta_path.insert(0, importer)
entry = "@ENTRY@"
do_exec(entry, locals()) # noqa

120
third_party/python/pytest/_pytest/terminal.py vendored

@ -2,6 +2,9 @@
This is a good source for looking at the various reporting hooks.
"""
from __future__ import absolute_import, division, print_function
import itertools
from _pytest.main import EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, \
EXIT_USAGEERROR, EXIT_NOTESTSCOLLECTED
import pytest
@ -20,16 +23,18 @@ def pytest_addoption(parser):
group._addoption('-q', '--quiet', action="count",
dest="quiet", default=0, help="decrease verbosity."),
group._addoption('-r',
action="store", dest="reportchars", default=None, metavar="chars",
action="store", dest="reportchars", default='', metavar="chars",
help="show extra test summary info as specified by chars (f)ailed, "
"(E)error, (s)skipped, (x)failed, (X)passed (w)pytest-warnings "
"(p)passed, (P)passed with output, (a)all except pP.")
"(E)error, (s)skipped, (x)failed, (X)passed, "
"(p)passed, (P)passed with output, (a)all except pP. "
"Warnings are displayed at all times except when "
"--disable-warnings is set")
group._addoption('--disable-warnings', '--disable-pytest-warnings', default=False,
dest='disable_warnings', action='store_true',
help='disable warnings summary')
group._addoption('-l', '--showlocals',
action="store_true", dest="showlocals", default=False,
help="show locals in tracebacks (disabled by default).")
group._addoption('--report',
action="store", dest="report", default=None, metavar="opts",
help="(deprecated, use -r)")
group._addoption('--tb', metavar="style",
action="store", dest="tbstyle", default='auto',
choices=['auto', 'long', 'short', 'no', 'line', 'native'],
@ -54,18 +59,11 @@ def pytest_configure(config):
def getreportopt(config):
reportopts = ""
optvalue = config.option.report
if optvalue:
py.builtin.print_("DEPRECATED: use -r instead of --report option.",
file=sys.stderr)
if optvalue:
for setting in optvalue.split(","):
setting = setting.strip()
if setting == "skipped":
reportopts += "s"
elif setting == "xfailed":
reportopts += "x"
reportchars = config.option.reportchars
if not config.option.disable_warnings and 'w' not in reportchars:
reportchars += 'w'
elif config.option.disable_warnings and 'w' in reportchars:
reportchars = reportchars.replace('w', '')
if reportchars:
for char in reportchars:
if char not in reportopts and char != 'a':
@ -85,13 +83,40 @@ def pytest_report_teststatus(report):
letter = "f"
return report.outcome, letter, report.outcome.upper()
class WarningReport:
"""
Simple structure to hold warnings information captured by ``pytest_logwarning``.
"""
def __init__(self, code, message, nodeid=None, fslocation=None):
"""
:param code: unused
:param str message: user friendly message about the warning
:param str|None nodeid: node id that generated the warning (see ``get_location``).
:param tuple|py.path.local fslocation:
file system location of the source of the warning (see ``get_location``).
"""
self.code = code
self.message = message
self.nodeid = nodeid
self.fslocation = fslocation
def get_location(self, config):
"""
Returns the more user-friendly information about the location
of a warning, or None.
"""
if self.nodeid:
return self.nodeid
if self.fslocation:
if isinstance(self.fslocation, tuple) and len(self.fslocation) >= 2:
filename, linenum = self.fslocation[:2]
relpath = py.path.local(filename).relto(config.invocation_dir)
return '%s:%s' % (relpath, linenum)
else:
return str(self.fslocation)
return None
class TerminalReporter:
def __init__(self, config, file=None):
@ -171,8 +196,6 @@ class TerminalReporter:
def pytest_logwarning(self, code, fslocation, message, nodeid):
warnings = self.stats.setdefault("warnings", [])
if isinstance(fslocation, tuple):
fslocation = "%s:%d" % fslocation
warning = WarningReport(code=code, fslocation=fslocation,
message=message, nodeid=nodeid)
warnings.append(warning)
@ -259,7 +282,7 @@ class TerminalReporter:
line = "collected "
else:
line = "collecting "
line += str(self._numcollected) + " items"
line += str(self._numcollected) + " item" + ('' if self._numcollected == 1 else 's')
if errors:
line += " / %d errors" % errors
if skipped:
@ -300,8 +323,8 @@ class TerminalReporter:
def pytest_report_header(self, config):
inifile = ""
if config.inifile:
inifile = config.rootdir.bestrelpath(config.inifile)
lines = ["rootdir: %s, inifile: %s" %(config.rootdir, inifile)]
inifile = " " + config.rootdir.bestrelpath(config.inifile)
lines = ["rootdir: %s, inifile:%s" % (config.rootdir, inifile)]
plugininfo = config.pluginmanager.list_plugin_distinfo()
if plugininfo:
@ -366,7 +389,8 @@ class TerminalReporter:
EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, EXIT_USAGEERROR,
EXIT_NOTESTSCOLLECTED)
if exitstatus in summary_exit_codes:
self.config.hook.pytest_terminal_summary(terminalreporter=self)
self.config.hook.pytest_terminal_summary(terminalreporter=self,
exitstatus=exitstatus)
self.summary_errors()
self.summary_failures()
self.summary_warnings()
@ -442,13 +466,21 @@ class TerminalReporter:
def summary_warnings(self):
if self.hasopt("w"):
warnings = self.stats.get("warnings")
if not warnings:
all_warnings = self.stats.get("warnings")
if not all_warnings:
return
self.write_sep("=", "pytest-warning summary")
for w in warnings:
self._tw.line("W%s %s %s" % (w.code,
w.fslocation, w.message))
grouped = itertools.groupby(all_warnings, key=lambda wr: wr.get_location(self.config))
self.write_sep("=", "warnings summary", yellow=True, bold=False)
for location, warnings in grouped:
self._tw.line(str(location) or '<undetermined location>')
for w in warnings:
lines = w.message.splitlines()
indented = '\n'.join(' ' + x for x in lines)
self._tw.line(indented)
self._tw.line()
self._tw.line('-- Docs: http://doc.pytest.org/en/latest/warnings.html')
def summary_passes(self):
if self.config.option.tbstyle != "no":
@ -462,6 +494,15 @@ class TerminalReporter:
self.write_sep("_", msg)
self._outrep_summary(rep)
def print_teardown_sections(self, rep):
for secname, content in rep.sections:
if 'teardown' in secname:
self._tw.sep('-', secname)
if content[-1:] == "\n":
content = content[:-1]
self._tw.line(content)
def summary_failures(self):
if self.config.option.tbstyle != "no":
reports = self.getreports('failed')
@ -477,6 +518,9 @@ class TerminalReporter:
markup = {'red': True, 'bold': True}
self.write_sep("_", msg, **markup)
self._outrep_summary(rep)
for report in self.getreports(''):
if report.nodeid == rep.nodeid and report.when == 'teardown':
self.print_teardown_sections(report)
def summary_errors(self):
if self.config.option.tbstyle != "no":
@ -517,16 +561,8 @@ class TerminalReporter:
def summary_deselected(self):
if 'deselected' in self.stats:
l = []
k = self.config.option.keyword
if k:
l.append("-k%s" % k)
m = self.config.option.markexpr
if m:
l.append("-m %r" % m)
if l:
self.write_sep("=", "%d tests deselected by %r" % (
len(self.stats['deselected']), " ".join(l)), bold=True)
self.write_sep("=", "%d tests deselected" % (
len(self.stats['deselected'])), bold=True)
def repr_pythonversion(v=None):
if v is None:
@ -546,8 +582,7 @@ def flatten(l):
def build_summary_stats_line(stats):
keys = ("failed passed skipped deselected "
"xfailed xpassed warnings error").split()
key_translation = {'warnings': 'pytest-warnings'}
"xfailed xpassed warnings error").split()
unknown_key_seen = False
for key in stats.keys():
if key not in keys:
@ -558,8 +593,7 @@ def build_summary_stats_line(stats):
for key in keys:
val = stats.get(key, None)
if val:
key_name = key_translation.get(key, key)
parts.append("%d %s" % (len(val), key_name))
parts.append("%d %s" % (len(val), key))
if parts:
line = ", ".join(parts)

11
third_party/python/pytest/_pytest/tmpdir.py vendored

@ -1,9 +1,11 @@
""" support for providing temporary directories to test functions. """
from __future__ import absolute_import, division, print_function
import re
import pytest
import py
from _pytest.monkeypatch import monkeypatch
from _pytest.monkeypatch import MonkeyPatch
class TempdirFactory:
@ -81,6 +83,7 @@ def get_user():
except (ImportError, KeyError):
return None
# backward compatibility
TempdirHandler = TempdirFactory
@ -92,7 +95,7 @@ def pytest_configure(config):
available at pytest_configure time, but ideally should be moved entirely
to the tmpdir_factory session fixture.
"""
mp = monkeypatch()
mp = MonkeyPatch()
t = TempdirFactory(config)
config._cleanup.extend([mp.undo, t.finish])
mp.setattr(config, '_tmpdirhandler', t, raising=False)
@ -108,14 +111,14 @@ def tmpdir_factory(request):
@pytest.fixture
def tmpdir(request, tmpdir_factory):
"""return a temporary directory path object
"""Return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
"""
name = request.node.name
name = re.sub("[\W]", "_", name)
name = re.sub(r"[\W]", "_", name)
MAXVAL = 30
if len(name) > MAXVAL:
name = name[:MAXVAL]
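
A small usage sketch of the tmpdir fixture whose docstring and name sanitisation (the raw-string regex) are touched above:
```
def test_writes_into_unique_dir(tmpdir):
    target = tmpdir.join("output.txt")
    target.write("hello")
    assert target.read() == "hello"
    # the directory name is derived from the sanitised test name,
    # truncated to 30 characters
    assert "test_writes_into_unique_dir" in str(tmpdir)
```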

84
third_party/python/pytest/_pytest/unittest.py vendored

@ -1,14 +1,15 @@
""" discovery and running of std-library "unittest" style tests. """
from __future__ import absolute_import
from __future__ import absolute_import, division, print_function
import sys
import traceback
import pytest
# for transfering markers
# for transferring markers
import _pytest._code
from _pytest.python import transfer_markers
from _pytest.skipping import MarkEvaluator
from _pytest.config import hookimpl
from _pytest.runner import fail, skip
from _pytest.python import transfer_markers, Class, Module, Function
from _pytest.skipping import MarkEvaluator, xfail
def pytest_pycollect_makeitem(collector, name, obj):
@ -22,11 +23,11 @@ def pytest_pycollect_makeitem(collector, name, obj):
return UnitTestCase(name, parent=collector)
class UnitTestCase(pytest.Class):
class UnitTestCase(Class):
# marker for fixturemanger.getfixtureinfo()
# to declare that our children do not support funcargs
nofuncargs = True
def setup(self):
cls = self.obj
if getattr(cls, '__unittest_skip__', False):
@ -46,10 +47,12 @@ class UnitTestCase(pytest.Class):
return
self.session._fixturemanager.parsefactories(self, unittest=True)
loader = TestLoader()
module = self.getparent(pytest.Module).obj
module = self.getparent(Module).obj
foundsomething = False
for name in loader.getTestCaseNames(self.obj):
x = getattr(self.obj, name)
if not getattr(x, '__test__', True):
continue
funcobj = getattr(x, 'im_func', x)
transfer_markers(funcobj, cls, module)
yield TestCaseFunction(name, parent=self)
@ -63,8 +66,7 @@ class UnitTestCase(pytest.Class):
yield TestCaseFunction('runTest', parent=self)
class TestCaseFunction(pytest.Function):
class TestCaseFunction(Function):
_excinfo = None
def setup(self):
@ -92,6 +94,9 @@ class TestCaseFunction(pytest.Function):
def teardown(self):
if hasattr(self._testcase, 'teardown_method'):
self._testcase.teardown_method(self._obj)
# Allow garbage collection on TestCase instance attributes.
self._testcase = None
self._obj = None
def startTest(self, testcase):
pass
@ -106,36 +111,37 @@ class TestCaseFunction(pytest.Function):
try:
l = traceback.format_exception(*rawexcinfo)
l.insert(0, "NOTE: Incompatible Exception Representation, "
"displaying natively:\n\n")
pytest.fail("".join(l), pytrace=False)
except (pytest.fail.Exception, KeyboardInterrupt):
"displaying natively:\n\n")
fail("".join(l), pytrace=False)
except (fail.Exception, KeyboardInterrupt):
raise
except:
pytest.fail("ERROR: Unknown Incompatible Exception "
"representation:\n%r" %(rawexcinfo,), pytrace=False)
fail("ERROR: Unknown Incompatible Exception "
"representation:\n%r" % (rawexcinfo,), pytrace=False)
except KeyboardInterrupt:
raise
except pytest.fail.Exception:
except fail.Exception:
excinfo = _pytest._code.ExceptionInfo()
self.__dict__.setdefault('_excinfo', []).append(excinfo)
def addError(self, testcase, rawexcinfo):
self._addexcinfo(rawexcinfo)
def addFailure(self, testcase, rawexcinfo):
self._addexcinfo(rawexcinfo)
def addSkip(self, testcase, reason):
try:
pytest.skip(reason)
except pytest.skip.Exception:
skip(reason)
except skip.Exception:
self._evalskip = MarkEvaluator(self, 'SkipTest')
self._evalskip.result = True
self._addexcinfo(sys.exc_info())
def addExpectedFailure(self, testcase, rawexcinfo, reason=""):
try:
pytest.xfail(str(reason))
except pytest.xfail.Exception:
xfail(str(reason))
except xfail.Exception:
self._addexcinfo(sys.exc_info())
def addUnexpectedSuccess(self, testcase, reason=""):
@ -147,17 +153,42 @@ class TestCaseFunction(pytest.Function):
def stopTest(self, testcase):
pass
def _handle_skip(self):
# implements the skipping machinery (see #2137)
# analog to pythons Lib/unittest/case.py:run
testMethod = getattr(self._testcase, self._testcase._testMethodName)
if (getattr(self._testcase.__class__, "__unittest_skip__", False) or
getattr(testMethod, "__unittest_skip__", False)):
# If the class or method was skipped.
skip_why = (getattr(self._testcase.__class__, '__unittest_skip_why__', '') or
getattr(testMethod, '__unittest_skip_why__', ''))
try: # PY3, unittest2 on PY2
self._testcase._addSkip(self, self._testcase, skip_why)
except TypeError: # PY2
if sys.version_info[0] != 2:
raise
self._testcase._addSkip(self, skip_why)
return True
return False
def runtest(self):
self._testcase(result=self)
if self.config.pluginmanager.get_plugin("pdbinvoke") is None:
self._testcase(result=self)
else:
# disables tearDown and cleanups for post mortem debugging (see #1890)
if self._handle_skip():
return
self._testcase.debug()
def _prunetraceback(self, excinfo):
pytest.Function._prunetraceback(self, excinfo)
Function._prunetraceback(self, excinfo)
traceback = excinfo.traceback.filter(
lambda x:not x.frame.f_globals.get('__unittest'))
lambda x: not x.frame.f_globals.get('__unittest'))
if traceback:
excinfo.traceback = traceback
@pytest.hookimpl(tryfirst=True)
@hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
if isinstance(item, TestCaseFunction):
if item._excinfo:
@ -169,13 +200,15 @@ def pytest_runtest_makereport(item, call):
# twisted trial support
@pytest.hookimpl(hookwrapper=True)
@hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item):
if isinstance(item, TestCaseFunction) and \
'twisted.trial.unittest' in sys.modules:
ut = sys.modules['twisted.python.failure']
Failure__init__ = ut.Failure.__init__
check_testcase_implements_trial_reporter()
def excstore(self, exc_value=None, exc_type=None, exc_tb=None,
captureVars=None):
if exc_value is None:
@ -189,6 +222,7 @@ def pytest_runtest_protocol(item):
captureVars=captureVars)
except TypeError:
Failure__init__(self, exc_value, exc_type, exc_tb)
ut.Failure.__init__ = excstore
yield
ut.Failure.__init__ = Failure__init__
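
A sketch of a std-library TestCase that exercises the paths reworked above: skips routed through addSkip()/_handle_skip() and expected failures translated into xfail-style reports (class and test names are illustrative):
```
import unittest


class TestLegacySuite(unittest.TestCase):
    @unittest.skip("not ported yet")
    def test_skipped(self):
        self.fail("never runs")

    @unittest.expectedFailure
    def test_expected_failure(self):
        self.assertEqual(1, 2)

    def test_passes(self):
        self.assertTrue(True)
```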


@ -1,13 +0,0 @@
This directory vendors the `pluggy` module.
For a more detailed discussion for the reasons to vendoring this
package, please see [this issue](https://github.com/pytest-dev/pytest/issues/944).
To update the current version, execute:
```
$ pip install -U pluggy==<version> --no-compile --target=_pytest/vendored_packages
```
And commit the modified files. The `pluggy-<version>.dist-info` directory
created by `pip` should be ignored.


@ -1,10 +0,0 @@
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.


@ -1,39 +0,0 @@
Metadata-Version: 2.0
Name: pluggy
Version: 0.3.1
Summary: plugin and hook calling mechanisms for python
Home-page: UNKNOWN
Author: Holger Krekel
Author-email: holger at merlinux.eu
License: MIT license
Platform: unix
Platform: linux
Platform: osx
Platform: win32
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.


@ -1,8 +0,0 @@
pluggy.py,sha256=v_RfWzyW6DPU1cJu_EFoL_OHq3t13qloVdR6UaMCXQA,29862
pluggy-0.3.1.dist-info/top_level.txt,sha256=xKSCRhai-v9MckvMuWqNz16c1tbsmOggoMSwTgcpYHE,7
pluggy-0.3.1.dist-info/pbr.json,sha256=xX3s6__wOcAyF-AZJX1sdZyW6PUXT-FkfBlM69EEUCg,47
pluggy-0.3.1.dist-info/RECORD,,
pluggy-0.3.1.dist-info/metadata.json,sha256=nLKltOT78dMV-00uXD6Aeemp4xNsz2q59j6ORSDeLjw,1027
pluggy-0.3.1.dist-info/METADATA,sha256=1b85Ho2u4iK30M099k7axMzcDDhLcIMb-A82JUJZnSo,1334
pluggy-0.3.1.dist-info/WHEEL,sha256=AvR0WeTpDaxT645bl5FQxUK6NPsTls2ttpcGJg3j1Xg,110
pluggy-0.3.1.dist-info/DESCRIPTION.rst,sha256=P5Akh1EdIBR6CeqtV2P8ZwpGSpZiTKPw0NyS7jEiD-g,306


@ -1,6 +0,0 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.24.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any


@ -1 +0,0 @@
{"license": "MIT license", "name": "pluggy", "metadata_version": "2.0", "generator": "bdist_wheel (0.24.0)", "summary": "plugin and hook calling mechanisms for python", "platform": "unix", "version": "0.3.1", "extensions": {"python.details": {"document_names": {"description": "DESCRIPTION.rst"}, "contacts": [{"role": "author", "email": "holger at merlinux.eu", "name": "Holger Krekel"}]}}, "classifiers": ["Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows", "Operating System :: MacOS :: MacOS X", "Topic :: Software Development :: Testing", "Topic :: Software Development :: Libraries", "Topic :: Utilities", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5"]}


@ -1 +0,0 @@
{"is_release": false, "git_version": "7d4c9cd"}


@ -1 +0,0 @@
pluggy


@ -67,8 +67,9 @@ Pluggy currently consists of functionality for:
import sys
import inspect
__version__ = '0.3.1'
__all__ = ["PluginManager", "PluginValidationError",
__version__ = '0.4.0'
__all__ = ["PluginManager", "PluginValidationError", "HookCallError",
"HookspecMarker", "HookimplMarker"]
_py3 = sys.version_info > (3, 0)
@ -308,7 +309,7 @@ class PluginManager(object):
""" Core Pluginmanager class which manages registration
of plugin objects and 1:N hook calling.
You can register new hooks by calling ``addhooks(module_or_class)``.
You can register new hooks by calling ``add_hookspec(module_or_class)``.
You can register plugin objects (which contain hooks) by calling
``register(plugin)``. The Pluginmanager is initialized with a
prefix that is searched for in the names of the dict of registered
@ -374,7 +375,10 @@ class PluginManager(object):
def parse_hookimpl_opts(self, plugin, name):
method = getattr(plugin, name)
res = getattr(method, self.project_name + "_impl", None)
try:
res = getattr(method, self.project_name + "_impl", None)
except Exception:
res = {}
if res is not None and not isinstance(res, dict):
# false positive
res = None
@ -455,6 +459,10 @@ class PluginManager(object):
""" Return a plugin or None for the given name. """
return self._name2plugin.get(name)
def has_plugin(self, name):
""" Return True if a plugin with the given name is registered. """
return self.get_plugin(name) is not None
def get_name(self, plugin):
""" Return name for registered plugin or None if not registered. """
for name, val in self._name2plugin.items():
@ -492,7 +500,8 @@ class PluginManager(object):
def load_setuptools_entrypoints(self, entrypoint_name):
""" Load modules from querying the specified setuptools entrypoint name.
Return the number of loaded plugins. """
from pkg_resources import iter_entry_points, DistributionNotFound
from pkg_resources import (iter_entry_points, DistributionNotFound,
VersionConflict)
for ep in iter_entry_points(entrypoint_name):
# is the plugin registered or blocked?
if self.get_plugin(ep.name) or self.is_blocked(ep.name):
@ -501,6 +510,9 @@ class PluginManager(object):
plugin = ep.load()
except DistributionNotFound:
continue
except VersionConflict as e:
raise PluginValidationError(
"Plugin %r could not be loaded: %s!" % (ep.name, e))
self.register(plugin, name=ep.name)
self._plugin_distinfo.append((plugin, ep.dist))
return len(self._plugin_distinfo)
@ -528,7 +540,7 @@ class PluginManager(object):
of HookImpl instances and the keyword arguments for the hook call.
``after(outcome, hook_name, hook_impls, kwargs)`` receives the
same arguments as ``before`` but also a :py:class:`_CallOutcome`` object
same arguments as ``before`` but also a :py:class:`_CallOutcome <_pytest.vendored_packages.pluggy._CallOutcome>` object
which represents the result of the overall hook call.
"""
return _TracedHookExecution(self, before, after).undo
@ -573,7 +585,7 @@ class _MultiCall:
# XXX note that the __multicall__ argument is supported only
# for pytest compatibility reasons. It was never officially
# supported there and is explicitly deprecated since 2.8
# supported there and is explicitely deprecated since 2.8
# so we can remove it soon, allowing to avoid the below recursion
# in execute() and simplify/speed up the execute loop.
@ -590,7 +602,13 @@ class _MultiCall:
while self.hook_impls:
hook_impl = self.hook_impls.pop()
args = [all_kwargs[argname] for argname in hook_impl.argnames]
try:
args = [all_kwargs[argname] for argname in hook_impl.argnames]
except KeyError:
for argname in hook_impl.argnames:
if argname not in all_kwargs:
raise HookCallError(
"hook call must provide argument %r" % (argname,))
if hook_impl.hookwrapper:
return _wrapped_call(hook_impl.function(*args), self.execute)
res = hook_impl.function(*args)
@ -629,7 +647,10 @@ def varnames(func, startindex=None):
startindex = 1
else:
if not inspect.isfunction(func) and not inspect.ismethod(func):
func = getattr(func, '__call__', func)
try:
func = getattr(func, '__call__', func)
except Exception:
return ()
if startindex is None:
startindex = int(inspect.ismethod(func))
@ -763,6 +784,10 @@ class PluginValidationError(Exception):
""" plugin failed validation. """
class HookCallError(Exception):
""" Hook was called wrongly. """
if hasattr(inspect, 'signature'):
def _formatdef(func):
return "%s%s" % (

88
third_party/python/pytest/_pytest/warnings.py vendored Normal file

@ -0,0 +1,88 @@
from __future__ import absolute_import, division, print_function
import warnings
from contextlib import contextmanager
import pytest
from _pytest import compat
def _setoption(wmod, arg):
"""
Copy of the warning._setoption function but does not escape arguments.
"""
parts = arg.split(':')
if len(parts) > 5:
raise wmod._OptionError("too many fields (max 5): %r" % (arg,))
while len(parts) < 5:
parts.append('')
action, message, category, module, lineno = [s.strip()
for s in parts]
action = wmod._getaction(action)
category = wmod._getcategory(category)
if lineno:
try:
lineno = int(lineno)
if lineno < 0:
raise ValueError
except (ValueError, OverflowError):
raise wmod._OptionError("invalid lineno %r" % (lineno,))
else:
lineno = 0
wmod.filterwarnings(action, message, category, module, lineno)
def pytest_addoption(parser):
group = parser.getgroup("pytest-warnings")
group.addoption(
'-W', '--pythonwarnings', action='append',
help="set which warnings to report, see -W option of python itself.")
parser.addini("filterwarnings", type="linelist",
help="Each line specifies warning filter pattern which would be passed"
"to warnings.filterwarnings. Process after -W and --pythonwarnings.")
@contextmanager
def catch_warnings_for_item(item):
"""
catches the warnings generated during setup/call/teardown execution
of the given item and after it is done posts them as warnings to this
item.
"""
args = item.config.getoption('pythonwarnings') or []
inifilters = item.config.getini("filterwarnings")
with warnings.catch_warnings(record=True) as log:
for arg in args:
warnings._setoption(arg)
for arg in inifilters:
_setoption(warnings, arg)
yield
for warning in log:
warn_msg = warning.message
unicode_warning = False
if compat._PY2 and any(isinstance(m, compat.UNICODE_TYPES) for m in warn_msg.args):
new_args = [compat.safe_str(m) for m in warn_msg.args]
unicode_warning = warn_msg.args != new_args
warn_msg.args = new_args
msg = warnings.formatwarning(
warn_msg, warning.category,
warning.filename, warning.lineno, warning.line)
item.warn("unused", msg)
if unicode_warning:
warnings.warn(
"Warning is using unicode non convertible to ascii, "
"converting to a safe representation:\n %s" % msg,
UnicodeWarning)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item):
with catch_warnings_for_item(item):
yield
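
The new warnings plugin records warnings raised during setup/call/teardown of each item and feeds them into the terminal warnings summary; a sketch of a test that would show up there unless filtered via -W or the ``filterwarnings`` ini option (names are illustrative):
```
import warnings


def legacy_helper():
    warnings.warn("legacy_helper() is on its way out", UserWarning)
    return 42


def test_uses_legacy_helper():
    # catch_warnings_for_item() records this warning and attaches it to the
    # item, so it appears in the "warnings summary" section of the report
    assert legacy_helper() == 42
```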

76
third_party/python/pytest/pytest.py vendored

@ -2,19 +2,7 @@
"""
pytest: unit and functional testing with Python.
"""
__all__ = [
'main',
'UsageError',
'cmdline',
'hookspec',
'hookimpl',
'__version__',
]
if __name__ == '__main__': # if run as a script or by 'python -m pytest'
# we trigger the below "else" condition by the following import
import pytest
raise SystemExit(pytest.main())
# else we are imported
@ -22,7 +10,69 @@ from _pytest.config import (
main, UsageError, _preloadplugins, cmdline,
hookspec, hookimpl
)
from _pytest.fixtures import fixture, yield_fixture
from _pytest.assertion import register_assert_rewrite
from _pytest.freeze_support import freeze_includes
from _pytest import __version__
from _pytest.debugging import pytestPDB as __pytestPDB
from _pytest.recwarn import warns, deprecated_call
from _pytest.runner import fail, skip, importorskip, exit
from _pytest.mark import MARK_GEN as mark, param
from _pytest.skipping import xfail
from _pytest.main import Item, Collector, File, Session
from _pytest.fixtures import fillfixtures as _fillfuncargs
from _pytest.python import (
raises, approx,
Module, Class, Instance, Function, Generator,
)
_preloadplugins() # to populate pytest.* namespace so help(pytest) works
set_trace = __pytestPDB.set_trace
__all__ = [
'main',
'UsageError',
'cmdline',
'hookspec',
'hookimpl',
'__version__',
'register_assert_rewrite',
'freeze_includes',
'set_trace',
'warns',
'deprecated_call',
'fixture',
'yield_fixture',
'fail',
'skip',
'xfail',
'importorskip',
'exit',
'mark',
'param',
'approx',
'_fillfuncargs',
'Item',
'File',
'Collector',
'Session',
'Module',
'Class',
'Instance',
'Function',
'Generator',
'raises',
]
if __name__ == '__main__':
# if run as a script or by 'python -m pytest'
# we trigger the below "else" condition by the following import
import pytest
raise SystemExit(pytest.main())
else:
from _pytest.compat import _setup_collect_fakemodule
_preloadplugins() # to populate pytest.* namespace so help(pytest) works
_setup_collect_fakemodule()
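
Names that used to be injected through pytest_namespace() are now exported explicitly above; a short sketch using a few of them:
```
import pytest


def test_approx_and_raises():
    assert 0.1 + 0.2 == pytest.approx(0.3)
    with pytest.raises(ZeroDivisionError):
        1 / 0


@pytest.mark.parametrize("value", [1, pytest.param(2, id="two")])
def test_param(value):
    assert value in (1, 2)
```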

Some files were not shown because too many files changed in this diff.