Mirror of https://github.com/mozilla/gecko-dev.git
Bug 1068653 - Part 1 Add python dependencies for taskcluster mach commands r=gps
--HG-- extra : rebase_source : 7a91182ca85dde748a14b03fa93ae85769691042 extra : source : b91e85b02d796db5de9a0e726a7c3360ea67b400
This commit is contained in:
Parent
75eaed7a23
Commit
51503a1a67
@ -12,3 +12,10 @@ What should not go here:

Historical information can be found at
https://bugzilla.mozilla.org/show_bug.cgi?id=775243

## pyyaml | pystache

Used in taskcluster-related mach commands. To update, download the sources
from GitHub and remove the .git directories and the tests.

Then run the tests in taskcluster/tests/.
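
As a rough sketch of how the two libraries fit together here (the file name
and context values below are hypothetical, not taken from the actual
taskcluster code), pystache fills in task parameters and PyYAML parses the
rendered result:

    import pystache
    import yaml

    # Hypothetical task template; real templates live in the taskcluster tree.
    with open('task.yml') as f:
        template = f.read()
    rendered = pystache.render(template, {'owner': 'someone@example.com'})
    task = yaml.safe_load(rendered)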
|
@ -0,0 +1,17 @@
|
|||
*.pyc
|
||||
.DS_Store
|
||||
# Tox support. See: http://pypi.python.org/pypi/tox
|
||||
.tox
|
||||
# Our tox runs convert the doctests in *.rst files to Python 3 prior to
|
||||
# running tests. Ignore these temporary files.
|
||||
*.temp2to3.rst
|
||||
# The setup.py "prep" command converts *.md to *.temp.rst (via *.temp.md).
|
||||
*.temp.md
|
||||
*.temp.rst
|
||||
# TextMate project file
|
||||
*.tmproj
|
||||
# Distribution-related folders and files.
|
||||
build
|
||||
dist
|
||||
MANIFEST
|
||||
pystache.egg-info
|
|
@ -0,0 +1,3 @@
|
|||
[submodule "ext/spec"]
|
||||
path = ext/spec
|
||||
url = http://github.com/mustache/spec.git
|
|
@ -0,0 +1,14 @@
|
|||
language: python
|
||||
|
||||
# Travis CI has no plans to support Jython and no longer supports Python 2.5.
|
||||
python:
|
||||
- 2.6
|
||||
- 2.7
|
||||
- 3.2
|
||||
- pypy
|
||||
|
||||
script:
|
||||
- python setup.py install
|
||||
# Include the spec tests directory for Mustache spec tests and the
|
||||
# project directory for doctests.
|
||||
- pystache-test . ext/spec/specs
|
|
@ -0,0 +1,169 @@
|
|||
History
|
||||
=======
|
||||
|
||||
**Note:** Official support for Python 2.4 will end with Pystache version 0.6.0.
|
||||
|
||||
0.5.4 (2014-07-11)
|
||||
------------------
|
||||
|
||||
- Bugfix: made test with filenames OS agnostic (issue \#162).
|
||||
|
||||
0.5.3 (2012-11-03)
|
||||
------------------
|
||||
|
||||
- Added ability to customize string coercion (e.g. to have None render as
|
||||
`''`) (issue \#130).
|
||||
- Added Renderer.render_name() to render a template by name (issue \#122).
|
||||
- Added TemplateSpec.template_path to specify an absolute path to a
|
||||
template (issue \#41).
|
||||
- Added option of raising errors on missing tags/partials:
|
||||
`Renderer(missing_tags='strict')` (issue \#110).
|
||||
- Added support for finding and loading templates by file name in
|
||||
addition to by template name (issue \#127). [xgecko]
|
||||
- Added a `parse()` function that yields a printable, pre-compiled
|
||||
parse tree.
|
||||
- Added support for rendering pre-compiled templates.
|
||||
- Added Python 3.3 to the list of supported versions.
|
||||
- Added support for [PyPy](http://pypy.org/) (issue \#125).
|
||||
- Added support for [Travis CI](http://travis-ci.org) (issue \#124).
|
||||
[msabramo]
|
||||
- Bugfix: `defaults.DELIMITERS` can now be changed at runtime (issue \#135).
|
||||
[bennoleslie]
|
||||
- Bugfix: exceptions raised from a property are no longer swallowed
|
||||
when getting a key from a context stack (issue \#110).
|
||||
- Bugfix: lambda section values can now return non-ascii, non-unicode
|
||||
strings (issue \#118).
|
||||
- Bugfix: allow `test_pystache.py` and `tox` to pass when run from a
|
||||
downloaded sdist (i.e. without the spec test directory).
|
||||
- Convert HISTORY and README files from reST to Markdown.
|
||||
- More robust handling of byte strings in Python 3.
|
||||
- Added Creative Commons license for David Phillips's logo.
|
||||
|
||||
0.5.2 (2012-05-03)
|
||||
------------------
|
||||
|
||||
- Added support for dot notation and version 1.1.2 of the spec (issue
|
||||
\#99). [rbp]
|
||||
- Missing partials now render as empty string per latest version of
|
||||
spec (issue \#115).
|
||||
- Bugfix: falsey values now coerced to strings using str().
|
||||
- Bugfix: lambda return values for sections no longer pushed onto
|
||||
context stack (issue \#113).
|
||||
- Bugfix: lists of lambdas for sections were not rendered (issue
|
||||
\#114).
|
||||
|
||||
0.5.1 (2012-04-24)
|
||||
------------------
|
||||
|
||||
- Added support for Python 3.1 and 3.2.
|
||||
- Added tox support to test multiple Python versions.
|
||||
- Added test script entry point: pystache-test.
|
||||
- Added \_\_version\_\_ package attribute.
|
||||
- Test harness now supports both YAML and JSON forms of Mustache spec.
|
||||
- Test harness no longer requires nose.
|
||||
|
||||
0.5.0 (2012-04-03)
|
||||
------------------
|
||||
|
||||
This version represents a major rewrite and refactoring of the code base
|
||||
that also adds features and fixes many bugs. All functionality and
|
||||
nearly all unit tests have been preserved. However, some backwards
|
||||
incompatible changes to the API have been made.
|
||||
|
||||
Below is a selection of some of the changes (not exhaustive).
|
||||
|
||||
Highlights:
|
||||
|
||||
- Pystache now passes all tests in version 1.0.3 of the [Mustache
|
||||
spec](https://github.com/mustache/spec). [pvande]
|
||||
- Removed View class: it is no longer necessary to subclass from View
|
||||
or from any other class to create a view.
|
||||
- Replaced Template with Renderer class: template rendering behavior
|
||||
can be modified via the Renderer constructor or by setting
|
||||
attributes on a Renderer instance.
|
||||
- Added TemplateSpec class: template rendering can be specified on a
|
||||
per-view basis by subclassing from TemplateSpec.
|
||||
- Introduced separation of concerns and removed circular dependencies
|
||||
(e.g. between Template and View classes, cf. [issue
|
||||
\#13](https://github.com/defunkt/pystache/issues/13)).
|
||||
- Unicode now used consistently throughout the rendering process.
|
||||
- Expanded test coverage: nosetests now runs doctests and \~105 test
|
||||
cases from the Mustache spec (increasing the number of tests from 56
|
||||
to \~315).
|
||||
- Added a rudimentary benchmarking script to gauge performance while
|
||||
refactoring.
|
||||
- Extensive documentation added (e.g. docstrings).
|
||||
|
||||
Other changes:
|
||||
|
||||
- Added a command-line interface. [vrde]
|
||||
- The main rendering class now accepts a custom partial loader (e.g. a
|
||||
dictionary) and a custom escape function.
|
||||
- Non-ascii characters in str strings are now supported while
|
||||
rendering.
|
||||
- Added string encoding, file encoding, and errors options for
|
||||
decoding to unicode.
|
||||
- Removed the output encoding option.
|
||||
- Removed the use of markupsafe.
|
||||
|
||||
Bug fixes:
|
||||
|
||||
- Context values no longer processed as template strings.
|
||||
[jakearchibald]
|
||||
- Whitespace surrounding sections is no longer altered, per the spec.
|
||||
[heliodor]
|
||||
- Zeroes now render correctly when using PyPy. [alex]
|
||||
- Multiline comments now permitted. [fczuardi]
|
||||
- Extensionless template files are now supported.
|
||||
- Passing `**kwargs` to `Template()` no longer modifies the context.
|
||||
- Passing `**kwargs` to `Template()` with no context no longer raises
|
||||
an exception.
|
||||
|
||||
0.4.1 (2012-03-25)
|
||||
------------------
|
||||
|
||||
- Added support for Python 2.4. [wangtz, jvantuyl]
|
||||
|
||||
0.4.0 (2011-01-12)
|
||||
------------------
|
||||
|
||||
- Add support for nested contexts (within template and view)
|
||||
- Add support for inverted lists
|
||||
- Decoupled template loading
|
||||
|
||||
0.3.1 (2010-05-07)
|
||||
------------------
|
||||
|
||||
- Fix package
|
||||
|
||||
0.3.0 (2010-05-03)
|
||||
------------------
|
||||
|
||||
- View.template\_path can now hold a list of paths
|
||||
- Add {{& blah}} as an alias for {{{ blah }}}
|
||||
- Higher Order Sections
|
||||
- Inverted sections
|
||||
|
||||
0.2.0 (2010-02-15)
|
||||
------------------
|
||||
|
||||
- Bugfix: Methods returning False or None are not rendered
|
||||
- Bugfix: Don't render an empty string when a tag's value is 0.
|
||||
[enaeseth]
|
||||
- Add support for using non-callables as View attributes.
|
||||
[joshthecoder]
|
||||
- Allow using View instances as attributes. [joshthecoder]
|
||||
- Support for Unicode and non-ASCII-encoded bytestring output.
|
||||
[enaeseth]
|
||||
- Template file encoding awareness. [enaeseth]
|
||||
|
||||
0.1.1 (2009-11-13)
|
||||
------------------
|
||||
|
||||
- Ensure we're dealing with strings, always
|
||||
- Tests can be run by executing the test file directly
|
||||
|
||||
0.1.0 (2009-11-12)
|
||||
------------------
|
||||
|
||||
- First release
|
|
@ -0,0 +1,22 @@
|
|||
Copyright (C) 2012 Chris Jerdonek. All rights reserved.
|
||||
|
||||
Copyright (c) 2009 Chris Wanstrath
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining
|
||||
a copy of this software and associated documentation files (the
|
||||
"Software"), to deal in the Software without restriction, including
|
||||
without limitation the rights to use, copy, modify, merge, publish,
|
||||
distribute, sublicense, and/or sell copies of the Software, and to
|
||||
permit persons to whom the Software is furnished to do so, subject to
|
||||
the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be
|
||||
included in all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
|
@ -0,0 +1,13 @@
|
|||
include README.md
|
||||
include HISTORY.md
|
||||
include LICENSE
|
||||
include TODO.md
|
||||
include setup_description.rst
|
||||
include tox.ini
|
||||
include test_pystache.py
|
||||
# You cannot use package_data, for example, to include data files in a
|
||||
# source distribution when using Distribute.
|
||||
recursive-include pystache/tests *.mustache *.txt
|
||||
# We deliberately exclude the gh/ directory because it contains copies
|
||||
# of resources needed only for the web page hosted on GitHub (via the
|
||||
# gh-pages branch).
|
|
@ -0,0 +1,276 @@
|
|||
Pystache
|
||||
========
|
||||
|
||||
<!-- Since PyPI rejects reST long descriptions that contain HTML, -->
|
||||
<!-- HTML comments must be removed when converting this file to reST. -->
|
||||
<!-- For more information on PyPI's behavior in this regard, see: -->
|
||||
<!-- http://docs.python.org/distutils/uploading.html#pypi-package-display -->
|
||||
<!-- The Pystache setup script strips 1-line HTML comments prior -->
|
||||
<!-- to converting to reST, so all HTML comments should be one line. -->
|
||||
<!-- -->
|
||||
<!-- We leave the leading brackets empty here. Otherwise, unwanted -->
|
||||
<!-- caption text shows up in the reST version converted by pandoc. -->
|
||||
![](http://defunkt.github.com/pystache/images/logo_phillips.png "mustachioed, monocled snake by David Phillips")
|
||||
|
||||
![](https://secure.travis-ci.org/defunkt/pystache.png "Travis CI current build status")
|
||||
|
||||
[Pystache](http://defunkt.github.com/pystache) is a Python
|
||||
implementation of [Mustache](http://mustache.github.com/). Mustache is a
|
||||
framework-agnostic, logic-free templating system inspired by
|
||||
[ctemplate](http://code.google.com/p/google-ctemplate/) and
|
||||
[et](http://www.ivan.fomichev.name/2008/05/erlang-template-engine-prototype.html).
|
||||
Like ctemplate, Mustache "emphasizes separating logic from presentation:
|
||||
it is impossible to embed application logic in this template language."
|
||||
|
||||
The [mustache(5)](http://mustache.github.com/mustache.5.html) man page
|
||||
provides a good introduction to Mustache's syntax. For a more complete
|
||||
(and more current) description of Mustache's behavior, see the official
|
||||
[Mustache spec](https://github.com/mustache/spec).
|
||||
|
||||
Pystache is [semantically versioned](http://semver.org) and can be found
|
||||
on [PyPI](http://pypi.python.org/pypi/pystache). This version of
|
||||
Pystache passes all tests in [version
|
||||
1.1.2](https://github.com/mustache/spec/tree/v1.1.2) of the spec.
|
||||
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
Pystache is tested with--
|
||||
|
||||
- Python 2.4 (requires simplejson [version
|
||||
2.0.9](http://pypi.python.org/pypi/simplejson/2.0.9) or earlier)
|
||||
- Python 2.5 (requires
|
||||
[simplejson](http://pypi.python.org/pypi/simplejson/))
|
||||
- Python 2.6
|
||||
- Python 2.7
|
||||
- Python 3.1
|
||||
- Python 3.2
|
||||
- Python 3.3
|
||||
- [PyPy](http://pypy.org/)
|
||||
|
||||
[Distribute](http://packages.python.org/distribute/) (the setuptools fork)
|
||||
is recommended over [setuptools](http://pypi.python.org/pypi/setuptools),
|
||||
and is required in some cases (e.g. for Python 3 support).
|
||||
If you use [pip](http://www.pip-installer.org/), you probably already satisfy
|
||||
this requirement.
|
||||
|
||||
JSON support is needed only for the command-line interface and to run
|
||||
the spec tests. We require simplejson for earlier versions of Python
|
||||
since Python's [json](http://docs.python.org/library/json.html) module
|
||||
was added in Python 2.6.
|
||||
|
||||
For Python 2.4 we require an earlier version of simplejson since
|
||||
simplejson stopped officially supporting Python 2.4 in simplejson
|
||||
version 2.1.0. Earlier versions of simplejson can be installed manually,
|
||||
as follows:
|
||||
|
||||
pip install 'simplejson<2.1.0'
|
||||
|
||||
Official support for Python 2.4 will end with Pystache version 0.6.0.
|
||||
|
||||
Install It
|
||||
----------
|
||||
|
||||
pip install pystache
|
||||
|
||||
And test it--
|
||||
|
||||
pystache-test
|
||||
|
||||
To install and test from source (e.g. from GitHub), see the Develop
|
||||
section.
|
||||
|
||||
Use It
|
||||
------
|
||||
|
||||
>>> import pystache
|
||||
>>> print pystache.render('Hi {{person}}!', {'person': 'Mom'})
|
||||
Hi Mom!
|
||||
|
||||
You can also create dedicated view classes to hold your view logic.
|
||||
|
||||
Here's your view class (in .../examples/readme.py):
|
||||
|
||||
class SayHello(object):
|
||||
def to(self):
|
||||
return "Pizza"
|
||||
|
||||
Instantiating like so:
|
||||
|
||||
>>> from pystache.tests.examples.readme import SayHello
|
||||
>>> hello = SayHello()
|
||||
|
||||
Then your template, say\_hello.mustache (by default in the same
|
||||
directory as your class definition):
|
||||
|
||||
Hello, {{to}}!
|
||||
|
||||
Pull it together:
|
||||
|
||||
>>> renderer = pystache.Renderer()
|
||||
>>> print renderer.render(hello)
|
||||
Hello, Pizza!
|
||||
|
||||
For greater control over rendering (e.g. to specify a custom template
|
||||
directory), use the `Renderer` class like above. One can pass attributes
|
||||
to the Renderer class constructor or set them on a Renderer instance. To
|
||||
customize template loading on a per-view basis, subclass `TemplateSpec`.
|
||||
See the docstrings of the
|
||||
[Renderer](https://github.com/defunkt/pystache/blob/master/pystache/renderer.py)
|
||||
class and
|
||||
[TemplateSpec](https://github.com/defunkt/pystache/blob/master/pystache/template_spec.py)
|
||||
class for more information.
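
For example, a minimal sketch (the template directory is a placeholder, and
the call assumes a say\_hello.mustache template like the one above lives
there; `search_dirs` and `missing_tags` are constructor arguments described
in the Renderer docstrings):

    renderer = pystache.Renderer(search_dirs=['path/to/templates'],
                                 missing_tags='strict')
    print renderer.render_name('say_hello', {'to': 'Pizza'})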
|
||||
|
||||
You can also pre-parse a template:
|
||||
|
||||
>>> parsed = pystache.parse(u"Hey {{#who}}{{.}}!{{/who}}")
|
||||
>>> print parsed
|
||||
[u'Hey ', _SectionNode(key=u'who', index_begin=12, index_end=18, parsed=[_EscapeNode(key=u'.'), u'!'])]
|
||||
|
||||
And then:
|
||||
|
||||
>>> print renderer.render(parsed, {'who': 'Pops'})
|
||||
Hey Pops!
|
||||
>>> print renderer.render(parsed, {'who': 'you'})
|
||||
Hey you!
|
||||
|
||||
Python 3
|
||||
--------
|
||||
|
||||
Pystache has supported Python 3 since version 0.5.1. Pystache behaves
|
||||
slightly differently between Python 2 and 3, as follows:
|
||||
|
||||
- In Python 2, the default html-escape function `cgi.escape()` does
|
||||
not escape single quotes. In Python 3, the default escape function
|
||||
`html.escape()` does escape single quotes.
|
||||
- In both Python 2 and 3, the string and file encodings default to
|
||||
`sys.getdefaultencoding()`. However, this function can return
|
||||
different values under Python 2 and 3, even when run from the same
|
||||
system. Check the behavior on your own system, or do
|
||||
not rely on the defaults by passing in the encodings explicitly
|
||||
(e.g. to the `Renderer` class).
|
||||
|
||||
Unicode
|
||||
-------
|
||||
|
||||
This section describes how Pystache handles unicode, strings, and
|
||||
encodings.
|
||||
|
||||
Internally, Pystache uses [only unicode
|
||||
strings](http://docs.python.org/howto/unicode.html#tips-for-writing-unicode-aware-programs)
|
||||
(`str` in Python 3 and `unicode` in Python 2). For input, Pystache
|
||||
accepts both unicode strings and byte strings (`bytes` in Python 3 and
|
||||
`str` in Python 2). For output, Pystache's template rendering methods
|
||||
return only unicode.
|
||||
|
||||
Pystache's `Renderer` class supports a number of attributes to control
|
||||
how Pystache converts byte strings to unicode on input. These include
|
||||
the `file_encoding`, `string_encoding`, and `decode_errors` attributes.
|
||||
|
||||
The `file_encoding` attribute is the encoding the renderer uses to
|
||||
convert to unicode any files read from the file system. Similarly,
|
||||
`string_encoding` is the encoding the renderer uses to convert any other
|
||||
byte strings encountered during the rendering process into unicode (e.g.
|
||||
context values that are encoded byte strings).
|
||||
|
||||
The `decode_errors` attribute is what the renderer passes as the
|
||||
`errors` argument to Python's built-in unicode-decoding function
|
||||
(`str()` in Python 3 and `unicode()` in Python 2). The valid values for
|
||||
this argument are `strict`, `ignore`, and `replace`.
|
||||
|
||||
Each of these attributes can be set via the `Renderer` class's
|
||||
constructor using a keyword argument of the same name. See the Renderer
|
||||
class's docstrings for further details. In addition, the `file_encoding`
|
||||
attribute can be controlled on a per-view basis by subclassing the
|
||||
`TemplateSpec` class. When not specified explicitly, these attributes
|
||||
default to values set in Pystache's `defaults` module.
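
For instance, to avoid depending on `sys.getdefaultencoding()`, the
encodings and the error handling can be passed explicitly when constructing
a renderer (a minimal sketch using the constructor arguments named above):

    renderer = pystache.Renderer(string_encoding='utf-8',
                                 file_encoding='utf-8',
                                 decode_errors='replace')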
|
||||
|
||||
Develop
|
||||
-------
|
||||
|
||||
To test from a source distribution (without installing)--
|
||||
|
||||
python test_pystache.py
|
||||
|
||||
To test Pystache with multiple versions of Python (with a single
|
||||
command!), you can use [tox](http://pypi.python.org/pypi/tox):
|
||||
|
||||
pip install 'virtualenv<1.8' # Version 1.8 dropped support for Python 2.4.
|
||||
pip install 'tox<1.4' # Version 1.4 dropped support for Python 2.4.
|
||||
tox
|
||||
|
||||
If you do not have all Python versions listed in `tox.ini`--
|
||||
|
||||
tox -e py26,py32 # for example
|
||||
|
||||
The source distribution tests also include doctests and tests from the
|
||||
Mustache spec. To include tests from the Mustache spec in your test
|
||||
runs:
|
||||
|
||||
git submodule init
|
||||
git submodule update
|
||||
|
||||
The test harness parses the spec's (more human-readable) yaml files if
|
||||
[PyYAML](http://pypi.python.org/pypi/PyYAML) is present. Otherwise, it
|
||||
parses the json files. To install PyYAML--
|
||||
|
||||
pip install pyyaml
|
||||
|
||||
To run a subset of the tests, you can use
|
||||
[nose](http://somethingaboutorange.com/mrl/projects/nose/0.11.1/testing.html):
|
||||
|
||||
pip install nose
|
||||
nosetests --tests pystache/tests/test_context.py:GetValueTests.test_dictionary__key_present
|
||||
|
||||
### Using Python 3 with Pystache from source
|
||||
|
||||
Pystache is written in Python 2 and must be converted to Python 3 prior to
|
||||
using it with Python 3. The installation process (and tox) do this
|
||||
automatically.
|
||||
|
||||
To convert the code to Python 3 manually (while using Python 3)--
|
||||
|
||||
python setup.py build
|
||||
|
||||
This writes the converted code to a subdirectory called `build`.
|
||||
By design, Python 3 builds
|
||||
[cannot](https://bitbucket.org/tarek/distribute/issue/292/allow-use_2to3-with-python-2)
|
||||
be created from Python 2.
|
||||
|
||||
To convert the code without using setup.py, you can use
|
||||
[2to3](http://docs.python.org/library/2to3.html) as follows (two steps)--
|
||||
|
||||
2to3 --write --nobackups --no-diffs --doctests_only pystache
|
||||
2to3 --write --nobackups --no-diffs pystache
|
||||
|
||||
This converts the code (and doctests) in place.
|
||||
|
||||
To `import pystache` from a source distribution while using Python 3, be
|
||||
sure that you are importing from a directory containing a converted
|
||||
version of the code (e.g. from the `build` directory after converting),
|
||||
and not from the original (unconverted) source directory. Otherwise, you will
|
||||
get a syntax error. You can help prevent this by not running the Python
|
||||
IDE from the project directory when importing Pystache while using Python 3.
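
For example, to import the converted code from a script run at the project
root (assuming the build step above placed the converted package under
`build/lib`, which is the usual location for a pure-Python build):

    import sys
    # Put the converted package ahead of the unconverted source directory.
    sys.path.insert(0, 'build/lib')
    import pystache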
|
||||
|
||||
|
||||
Mailing List
|
||||
------------
|
||||
|
||||
There is a [mailing list](http://librelist.com/browser/pystache/). Note
|
||||
that there is a bit of a delay between posting a message and seeing it
|
||||
appear in the mailing list archive.
|
||||
|
||||
Credits
|
||||
-------
|
||||
|
||||
>>> context = { 'author': 'Chris Wanstrath', 'maintainer': 'Chris Jerdonek' }
|
||||
>>> print pystache.render("Author: {{author}}\nMaintainer: {{maintainer}}", context)
|
||||
Author: Chris Wanstrath
|
||||
Maintainer: Chris Jerdonek
|
||||
|
||||
Pystache logo by [David Phillips](http://davidphillips.us/) is licensed
|
||||
under a [Creative Commons Attribution-ShareAlike 3.0 Unported
|
||||
License](http://creativecommons.org/licenses/by-sa/3.0/deed.en_US).
|
||||
![](http://i.creativecommons.org/l/by-sa/3.0/88x31.png "Creative
|
||||
Commons Attribution-ShareAlike 3.0 Unported License")
|
|
@ -0,0 +1,16 @@
|
|||
TODO
|
||||
====
|
||||
|
||||
In development branch:
|
||||
|
||||
* Figure out a way to suppress center alignment of images in reST output.
|
||||
* Add a unit test for the change made in 7ea8e7180c41. This is with regard
|
||||
to not requiring spec tests when running tests from a downloaded sdist.
|
||||
* End support for Python 2.4.
|
||||
* Add Python 3.3 to tox file (after deprecating 2.4).
|
||||
* Turn the benchmarking script at pystache/tests/benchmark.py into a command
|
||||
in pystache/commands, or make it a subcommand of one of the existing
|
||||
commands (i.e. using a command argument).
|
||||
* Provide support for logging in at least one of the commands.
|
||||
* Make sure command parsing to pystache-test doesn't break with Python 2.4 and earlier.
|
||||
* Combine pystache-test with the main command.
|
Binary file not shown.
After Width: | Height: | Size: 170 KiB |
|
@ -0,0 +1,13 @@
|
|||
|
||||
"""
|
||||
TODO: add a docstring.
|
||||
|
||||
"""
|
||||
|
||||
# We keep all initialization code in a separate module.
|
||||
|
||||
from pystache.init import parse, render, Renderer, TemplateSpec
|
||||
|
||||
__all__ = ['parse', 'render', 'Renderer', 'TemplateSpec']
|
||||
|
||||
__version__ = '0.5.4' # Also change in setup.py.
|
|
@ -0,0 +1,4 @@
|
|||
"""
|
||||
TODO: add a docstring.
|
||||
|
||||
"""
|
|
@ -0,0 +1,95 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
This module provides command-line access to pystache.
|
||||
|
||||
Run this script using the -h option for command-line help.
|
||||
|
||||
"""
|
||||
|
||||
|
||||
try:
|
||||
import json
|
||||
except:
|
||||
# The json module is new in Python 2.6, whereas simplejson is
|
||||
# compatible with earlier versions.
|
||||
try:
|
||||
import simplejson as json
|
||||
except ImportError:
|
||||
# Raise an error with a type different from ImportError as a hack around
|
||||
# this issue:
|
||||
# http://bugs.python.org/issue7559
|
||||
from sys import exc_info
|
||||
ex_type, ex_value, tb = exc_info()
|
||||
new_ex = Exception("%s: %s" % (ex_type.__name__, ex_value))
|
||||
raise new_ex.__class__, new_ex, tb
|
||||
|
||||
# The optparse module is deprecated in Python 2.7 in favor of argparse.
|
||||
# However, argparse is not available in Python 2.6 and earlier.
|
||||
from optparse import OptionParser
|
||||
import sys
|
||||
|
||||
# We use absolute imports here to allow use of this script from its
|
||||
# location in source control (e.g. for development purposes).
|
||||
# Otherwise, the following error occurs:
|
||||
#
|
||||
# ValueError: Attempted relative import in non-package
|
||||
#
|
||||
from pystache.common import TemplateNotFoundError
|
||||
from pystache.renderer import Renderer
|
||||
|
||||
|
||||
USAGE = """\
|
||||
%prog [-h] template context
|
||||
|
||||
Render a mustache template with the given context.
|
||||
|
||||
positional arguments:
|
||||
template A filename or template string.
|
||||
context A filename or JSON string."""
|
||||
|
||||
|
||||
def parse_args(sys_argv, usage):
|
||||
"""
|
||||
Return an OptionParser for the script.
|
||||
|
||||
"""
|
||||
args = sys_argv[1:]
|
||||
|
||||
parser = OptionParser(usage=usage)
|
||||
options, args = parser.parse_args(args)
|
||||
|
||||
template, context = args
|
||||
|
||||
return template, context
|
||||
|
||||
|
||||
# TODO: verify whether the setup() method's entry_points argument
|
||||
# supports passing arguments to main:
|
||||
#
|
||||
# http://packages.python.org/distribute/setuptools.html#automatic-script-creation
|
||||
#
|
||||
def main(sys_argv=sys.argv):
|
||||
template, context = parse_args(sys_argv, USAGE)
|
||||
|
||||
if template.endswith('.mustache'):
|
||||
template = template[:-9]
|
||||
|
||||
renderer = Renderer()
|
||||
|
||||
try:
|
||||
template = renderer.load_template(template)
|
||||
except TemplateNotFoundError:
|
||||
pass
|
||||
|
||||
try:
|
||||
context = json.load(open(context))
|
||||
except IOError:
|
||||
context = json.loads(context)
|
||||
|
||||
rendered = renderer.render(template, context)
|
||||
print rendered
|
||||
|
||||
|
||||
if __name__=='__main__':
|
||||
main()
|
|
@ -0,0 +1,18 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
This module provides a command to test pystache (unit tests, doctests, etc).
|
||||
|
||||
"""
|
||||
|
||||
import sys
|
||||
|
||||
from pystache.tests.main import main as run_tests
|
||||
|
||||
|
||||
def main(sys_argv=sys.argv):
|
||||
run_tests(sys_argv=sys_argv)
|
||||
|
||||
|
||||
if __name__=='__main__':
|
||||
main()
|
|
@ -0,0 +1,71 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
Exposes functionality needed throughout the project.
|
||||
|
||||
"""
|
||||
|
||||
from sys import version_info
|
||||
|
||||
def _get_string_types():
|
||||
# TODO: come up with a better solution for this. One of the issues here
|
||||
# is that in Python 3 there is no common base class for unicode strings
|
||||
# and byte strings, and 2to3 seems to convert all of "str", "unicode",
|
||||
# and "basestring" to Python 3's "str".
|
||||
if version_info < (3, ):
|
||||
return basestring
|
||||
# The latter evaluates to "bytes" in Python 3 -- even after conversion by 2to3.
|
||||
return (unicode, type(u"a".encode('utf-8')))
|
||||
|
||||
|
||||
_STRING_TYPES = _get_string_types()
|
||||
|
||||
|
||||
def is_string(obj):
|
||||
"""
|
||||
Return whether the given object is a byte string or unicode string.
|
||||
|
||||
This function is provided for compatibility with both Python 2 and 3
|
||||
when using 2to3.
|
||||
|
||||
"""
|
||||
return isinstance(obj, _STRING_TYPES)
|
||||
|
||||
|
||||
# This function was designed to be portable across Python versions -- both
|
||||
# with older versions and with Python 3 after applying 2to3.
|
||||
def read(path):
|
||||
"""
|
||||
Return the contents of a text file as a byte string.
|
||||
|
||||
"""
|
||||
# Opening in binary mode is necessary for compatibility across Python
|
||||
# 2 and 3. In both Python 2 and 3, open() defaults to opening files in
|
||||
# text mode. However, in Python 2, open() returns file objects whose
|
||||
# read() method returns byte strings (strings of type `str` in Python 2),
|
||||
# whereas in Python 3, the file object returns unicode strings (strings
|
||||
# of type `str` in Python 3).
|
||||
f = open(path, 'rb')
|
||||
# We avoid use of the with keyword for Python 2.4 support.
|
||||
try:
|
||||
return f.read()
|
||||
finally:
|
||||
f.close()
|
||||
|
||||
|
||||
class MissingTags(object):
|
||||
|
||||
"""Contains the valid values for Renderer.missing_tags."""
|
||||
|
||||
ignore = 'ignore'
|
||||
strict = 'strict'
|
||||
|
||||
|
||||
class PystacheError(Exception):
|
||||
"""Base class for Pystache exceptions."""
|
||||
pass
|
||||
|
||||
|
||||
class TemplateNotFoundError(PystacheError):
|
||||
"""An exception raised when a template is not found."""
|
||||
pass
|
|
@ -0,0 +1,342 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
Exposes a ContextStack class.
|
||||
|
||||
The Mustache spec makes a special distinction between two types of context
|
||||
stack elements: hashes and objects. For the purposes of interpreting the
|
||||
spec, we define these categories mutually exclusively as follows:
|
||||
|
||||
(1) Hash: an item whose type is a subclass of dict.
|
||||
|
||||
(2) Object: an item that is neither a hash nor an instance of a
|
||||
built-in type.
|
||||
|
||||
"""
|
||||
|
||||
from pystache.common import PystacheError
|
||||
|
||||
|
||||
# This equals '__builtin__' in Python 2 and 'builtins' in Python 3.
|
||||
_BUILTIN_MODULE = type(0).__module__
|
||||
|
||||
|
||||
# We use this private global variable as a return value to represent a key
|
||||
# not being found on lookup. This lets us distinguish between the case
|
||||
# of a key's value being None with the case of a key not being found --
|
||||
# without having to rely on exceptions (e.g. KeyError) for flow control.
|
||||
#
|
||||
# TODO: eliminate the need for a private global variable, e.g. by using the
|
||||
# preferred Python approach of "easier to ask for forgiveness than permission":
|
||||
# http://docs.python.org/glossary.html#term-eafp
|
||||
class NotFound(object):
|
||||
pass
|
||||
_NOT_FOUND = NotFound()
|
||||
|
||||
|
||||
def _get_value(context, key):
|
||||
"""
|
||||
Retrieve a key's value from a context item.
|
||||
|
||||
Returns _NOT_FOUND if the key does not exist.
|
||||
|
||||
The ContextStack.get() docstring documents this function's intended behavior.
|
||||
|
||||
"""
|
||||
if isinstance(context, dict):
|
||||
# Then we consider the argument a "hash" for the purposes of the spec.
|
||||
#
|
||||
# We do a membership test to avoid using exceptions for flow control
|
||||
# (e.g. catching KeyError).
|
||||
if key in context:
|
||||
return context[key]
|
||||
elif type(context).__module__ != _BUILTIN_MODULE:
|
||||
# Then we consider the argument an "object" for the purposes of
|
||||
# the spec.
|
||||
#
|
||||
# The elif test above lets us avoid treating instances of built-in
|
||||
# types like integers and strings as objects (cf. issue #81).
|
||||
# Instances of user-defined classes on the other hand, for example,
|
||||
# are considered objects by the test above.
|
||||
try:
|
||||
attr = getattr(context, key)
|
||||
except AttributeError:
|
||||
# TODO: distinguish the case of the attribute not existing from
|
||||
# an AttributeError being raised by the call to the attribute.
|
||||
# See the following issue for implementation ideas:
|
||||
# http://bugs.python.org/issue7559
|
||||
pass
|
||||
else:
|
||||
# TODO: consider using EAFP here instead.
|
||||
# http://docs.python.org/glossary.html#term-eafp
|
||||
if callable(attr):
|
||||
return attr()
|
||||
return attr
|
||||
|
||||
return _NOT_FOUND
|
||||
|
||||
|
||||
class KeyNotFoundError(PystacheError):
|
||||
|
||||
"""
|
||||
An exception raised when a key is not found in a context stack.
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, key, details):
|
||||
self.key = key
|
||||
self.details = details
|
||||
|
||||
def __str__(self):
|
||||
return "Key %s not found: %s" % (repr(self.key), self.details)
|
||||
|
||||
|
||||
class ContextStack(object):
|
||||
|
||||
"""
|
||||
Provides dictionary-like access to a stack of zero or more items.
|
||||
|
||||
Instances of this class are meant to act as the rendering context
|
||||
when rendering Mustache templates in accordance with mustache(5)
|
||||
and the Mustache spec.
|
||||
|
||||
Instances encapsulate a private stack of hashes, objects, and built-in
|
||||
type instances. Querying the stack for the value of a key queries
|
||||
the items in the stack in order from last-added objects to first
|
||||
(last in, first out).
|
||||
|
||||
Caution: this class does not currently support recursive nesting in
|
||||
that items in the stack cannot themselves be ContextStack instances.
|
||||
|
||||
See the docstrings of the methods of this class for more details.
|
||||
|
||||
"""
|
||||
|
||||
# We reserve keyword arguments for future options (e.g. a "strict=True"
|
||||
# option for enabling a strict mode).
|
||||
def __init__(self, *items):
|
||||
"""
|
||||
Construct an instance, and initialize the private stack.
|
||||
|
||||
The *items arguments are the items with which to populate the
|
||||
initial stack. Items in the argument list are added to the
|
||||
stack in order so that, in particular, items at the end of
|
||||
the argument list are queried first when querying the stack.
|
||||
|
||||
Caution: items should not themselves be ContextStack instances, as
|
||||
recursive nesting does not behave as one might expect.
|
||||
|
||||
"""
|
||||
self._stack = list(items)
|
||||
|
||||
def __repr__(self):
|
||||
"""
|
||||
Return a string representation of the instance.
|
||||
|
||||
For example--
|
||||
|
||||
>>> context = ContextStack({'alpha': 'abc'}, {'numeric': 123})
|
||||
>>> repr(context)
|
||||
"ContextStack({'alpha': 'abc'}, {'numeric': 123})"
|
||||
|
||||
"""
|
||||
return "%s%s" % (self.__class__.__name__, tuple(self._stack))
|
||||
|
||||
@staticmethod
|
||||
def create(*context, **kwargs):
|
||||
"""
|
||||
Build a ContextStack instance from a sequence of context-like items.
|
||||
|
||||
This factory-style method is more general than the ContextStack class's
|
||||
constructor in that, unlike the constructor, the argument list
|
||||
can itself contain ContextStack instances.
|
||||
|
||||
Here is an example illustrating various aspects of this method:
|
||||
|
||||
>>> obj1 = {'animal': 'cat', 'vegetable': 'carrot', 'mineral': 'copper'}
|
||||
>>> obj2 = ContextStack({'vegetable': 'spinach', 'mineral': 'silver'})
|
||||
>>>
|
||||
>>> context = ContextStack.create(obj1, None, obj2, mineral='gold')
|
||||
>>>
|
||||
>>> context.get('animal')
|
||||
'cat'
|
||||
>>> context.get('vegetable')
|
||||
'spinach'
|
||||
>>> context.get('mineral')
|
||||
'gold'
|
||||
|
||||
Arguments:
|
||||
|
||||
*context: zero or more dictionaries, ContextStack instances, or objects
|
||||
with which to populate the initial context stack. None
|
||||
arguments will be skipped. Items in the *context list are
|
||||
added to the stack in order so that later items in the argument
|
||||
list take precedence over earlier items. This behavior is the
|
||||
same as the constructor's.
|
||||
|
||||
**kwargs: additional key-value data to add to the context stack.
|
||||
As these arguments appear after all items in the *context list,
|
||||
in the case of key conflicts these values take precedence over
|
||||
all items in the *context list. This behavior is the same as
|
||||
the constructor's.
|
||||
|
||||
"""
|
||||
items = context
|
||||
|
||||
context = ContextStack()
|
||||
|
||||
for item in items:
|
||||
if item is None:
|
||||
continue
|
||||
if isinstance(item, ContextStack):
|
||||
context._stack.extend(item._stack)
|
||||
else:
|
||||
context.push(item)
|
||||
|
||||
if kwargs:
|
||||
context.push(kwargs)
|
||||
|
||||
return context
|
||||
|
||||
# TODO: add more unit tests for this.
|
||||
# TODO: update the docstring for dotted names.
|
||||
def get(self, name):
|
||||
"""
|
||||
Resolve a dotted name against the current context stack.
|
||||
|
||||
This function follows the rules outlined in the section of the
|
||||
spec regarding tag interpolation. This function returns the value
|
||||
as is and does not coerce the return value to a string.
|
||||
|
||||
Arguments:
|
||||
|
||||
name: a dotted or non-dotted name.
|
||||
|
||||
default: the value to return if name resolution fails at any point.
|
||||
Defaults to the empty string per the Mustache spec.
|
||||
|
||||
This method queries items in the stack in order from last-added
|
||||
objects to first (last in, first out). The value returned is
|
||||
the value of the key in the first item that contains the key.
|
||||
If the key is not found in any item in the stack, then the default
|
||||
value is returned. The default value defaults to None.
|
||||
|
||||
In accordance with the spec, this method queries items in the
|
||||
stack for a key differently depending on whether the item is a
|
||||
hash, object, or neither (as defined in the module docstring):
|
||||
|
||||
(1) Hash: if the item is a hash, then the key's value is the
|
||||
dictionary value of the key. If the dictionary doesn't contain
|
||||
the key, then the key is considered not found.
|
||||
|
||||
(2) Object: if the item is an object, then the method looks for
|
||||
an attribute with the same name as the key. If an attribute
|
||||
with that name exists, the value of the attribute is returned.
|
||||
If the attribute is callable, however (i.e. if the attribute
|
||||
is a method), then the attribute is called with no arguments
|
||||
and that value is returned. If there is no attribute with
|
||||
the same name as the key, then the key is considered not found.
|
||||
|
||||
(3) Neither: if the item is neither a hash nor an object, then
|
||||
the key is considered not found.
|
||||
|
||||
*Caution*:
|
||||
|
||||
Callables are handled differently depending on whether they are
|
||||
dictionary values, as in (1) above, or attributes, as in (2).
|
||||
The former are returned as-is, while the latter are first
|
||||
called and that value returned.
|
||||
|
||||
Here is an example to illustrate:
|
||||
|
||||
>>> def greet():
|
||||
... return "Hi Bob!"
|
||||
>>>
|
||||
>>> class Greeter(object):
|
||||
... greet = None
|
||||
>>>
|
||||
>>> dct = {'greet': greet}
|
||||
>>> obj = Greeter()
|
||||
>>> obj.greet = greet
|
||||
>>>
|
||||
>>> dct['greet'] is obj.greet
|
||||
True
|
||||
>>> ContextStack(dct).get('greet') #doctest: +ELLIPSIS
|
||||
<function greet at 0x...>
|
||||
>>> ContextStack(obj).get('greet')
|
||||
'Hi Bob!'
|
||||
|
||||
TODO: explain the rationale for this difference in treatment.
|
||||
|
||||
"""
|
||||
if name == '.':
|
||||
try:
|
||||
return self.top()
|
||||
except IndexError:
|
||||
raise KeyNotFoundError(".", "empty context stack")
|
||||
|
||||
parts = name.split('.')
|
||||
|
||||
try:
|
||||
result = self._get_simple(parts[0])
|
||||
except KeyNotFoundError:
|
||||
raise KeyNotFoundError(name, "first part")
|
||||
|
||||
for part in parts[1:]:
|
||||
# The full context stack is not used to resolve the remaining parts.
|
||||
# From the spec--
|
||||
#
|
||||
# 5) If any name parts were retained in step 1, each should be
|
||||
# resolved against a context stack containing only the result
|
||||
# from the former resolution. If any part fails resolution, the
|
||||
# result should be considered falsey, and should interpolate as
|
||||
# the empty string.
|
||||
#
|
||||
# TODO: make sure we have a test case for the above point.
|
||||
result = _get_value(result, part)
|
||||
# TODO: consider using EAFP here instead.
|
||||
# http://docs.python.org/glossary.html#term-eafp
|
||||
if result is _NOT_FOUND:
|
||||
raise KeyNotFoundError(name, "missing %s" % repr(part))
|
||||
|
||||
return result
|
||||
|
||||
def _get_simple(self, name):
|
||||
"""
|
||||
Query the stack for a non-dotted name.
|
||||
|
||||
"""
|
||||
for item in reversed(self._stack):
|
||||
result = _get_value(item, name)
|
||||
if result is not _NOT_FOUND:
|
||||
return result
|
||||
|
||||
raise KeyNotFoundError(name, "part missing")
|
||||
|
||||
def push(self, item):
|
||||
"""
|
||||
Push an item onto the stack.
|
||||
|
||||
"""
|
||||
self._stack.append(item)
|
||||
|
||||
def pop(self):
|
||||
"""
|
||||
Pop an item off of the stack, and return it.
|
||||
|
||||
"""
|
||||
return self._stack.pop()
|
||||
|
||||
def top(self):
|
||||
"""
|
||||
Return the item last added to the stack.
|
||||
|
||||
"""
|
||||
return self._stack[-1]
|
||||
|
||||
def copy(self):
|
||||
"""
|
||||
Return a copy of this instance.
|
||||
|
||||
"""
|
||||
return ContextStack(*self._stack)
|
|
@ -0,0 +1,65 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
This module provides a central location for defining default behavior.
|
||||
|
||||
Throughout the package, these defaults take effect only when the user
|
||||
does not otherwise specify a value.
|
||||
|
||||
"""
|
||||
|
||||
try:
|
||||
# Python 3.2 adds html.escape() and deprecates cgi.escape().
|
||||
from html import escape
|
||||
except ImportError:
|
||||
from cgi import escape
|
||||
|
||||
import os
|
||||
import sys
|
||||
|
||||
from pystache.common import MissingTags
|
||||
|
||||
|
||||
# How to handle encoding errors when decoding strings from str to unicode.
|
||||
#
|
||||
# This value is passed as the "errors" argument to Python's built-in
|
||||
# unicode() function:
|
||||
#
|
||||
# http://docs.python.org/library/functions.html#unicode
|
||||
#
|
||||
DECODE_ERRORS = 'strict'
|
||||
|
||||
# The name of the encoding to use when converting to unicode any strings of
|
||||
# type str encountered during the rendering process.
|
||||
STRING_ENCODING = sys.getdefaultencoding()
|
||||
|
||||
# The name of the encoding to use when converting file contents to unicode.
|
||||
# This default takes precedence over the STRING_ENCODING default for
|
||||
# strings that arise from files.
|
||||
FILE_ENCODING = sys.getdefaultencoding()
|
||||
|
||||
# The delimiters to start with when parsing.
|
||||
DELIMITERS = (u'{{', u'}}')
|
||||
|
||||
# How to handle missing tags when rendering a template.
|
||||
MISSING_TAGS = MissingTags.ignore
|
||||
|
||||
# The starting list of directories in which to search for templates when
|
||||
# loading a template by file name.
|
||||
SEARCH_DIRS = [os.curdir] # i.e. ['.']
|
||||
|
||||
# The escape function to apply to strings that require escaping when
|
||||
# rendering templates (e.g. for tags enclosed in double braces).
|
||||
# Only unicode strings will be passed to this function.
|
||||
#
|
||||
# The quote=True argument causes double but not single quotes to be escaped
|
||||
# in Python 3.1 and earlier, and both double and single quotes to be
|
||||
# escaped in Python 3.2 and later:
|
||||
#
|
||||
# http://docs.python.org/library/cgi.html#cgi.escape
|
||||
# http://docs.python.org/dev/library/html.html#html.escape
|
||||
#
|
||||
TAG_ESCAPE = lambda u: escape(u, quote=True)
|
||||
|
||||
# The default template extension, without the leading dot.
|
||||
TEMPLATE_EXTENSION = 'mustache'
|
|
@ -0,0 +1,19 @@
|
|||
# encoding: utf-8
|
||||
|
||||
"""
|
||||
This module contains the initialization logic called by __init__.py.
|
||||
|
||||
"""
|
||||
|
||||
from pystache.parser import parse
|
||||
from pystache.renderer import Renderer
|
||||
from pystache.template_spec import TemplateSpec
|
||||
|
||||
|
||||
def render(template, context=None, **kwargs):
|
||||
"""
|
||||
Return the given template string rendered using the given context.
|
||||
|
||||
"""
|
||||
renderer = Renderer()
|
||||
return renderer.render(template, context, **kwargs)
|
|
@ -0,0 +1,170 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
This module provides a Loader class for locating and reading templates.
|
||||
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
|
||||
from pystache import common
|
||||
from pystache import defaults
|
||||
from pystache.locator import Locator
|
||||
|
||||
|
||||
# We make a function so that the current defaults take effect.
|
||||
# TODO: revisit whether this is necessary.
|
||||
|
||||
def _make_to_unicode():
|
||||
def to_unicode(s, encoding=None):
|
||||
"""
|
||||
Raises a TypeError exception if the given string is already unicode.
|
||||
|
||||
"""
|
||||
if encoding is None:
|
||||
encoding = defaults.STRING_ENCODING
|
||||
return unicode(s, encoding, defaults.DECODE_ERRORS)
|
||||
return to_unicode
|
||||
|
||||
|
||||
class Loader(object):
|
||||
|
||||
"""
|
||||
Loads the template associated to a name or user-defined object.
|
||||
|
||||
All load_*() methods return the template as a unicode string.
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, file_encoding=None, extension=None, to_unicode=None,
|
||||
search_dirs=None):
|
||||
"""
|
||||
Construct a template loader instance.
|
||||
|
||||
Arguments:
|
||||
|
||||
extension: the template file extension, without the leading dot.
|
||||
Pass False for no extension (e.g. to use extensionless template
|
||||
files). Defaults to the package default.
|
||||
|
||||
file_encoding: the name of the encoding to use when converting file
|
||||
contents to unicode. Defaults to the package default.
|
||||
|
||||
search_dirs: the list of directories in which to search when loading
|
||||
a template by name or file name. Defaults to the package default.
|
||||
|
||||
to_unicode: the function to use when converting strings of type
|
||||
str to unicode. The function should have the signature:
|
||||
|
||||
to_unicode(s, encoding=None)
|
||||
|
||||
It should accept a string of type str and an optional encoding
|
||||
name and return a string of type unicode. Defaults to calling
|
||||
Python's built-in function unicode() using the package string
|
||||
encoding and decode errors defaults.
|
||||
|
||||
"""
|
||||
if extension is None:
|
||||
extension = defaults.TEMPLATE_EXTENSION
|
||||
|
||||
if file_encoding is None:
|
||||
file_encoding = defaults.FILE_ENCODING
|
||||
|
||||
if search_dirs is None:
|
||||
search_dirs = defaults.SEARCH_DIRS
|
||||
|
||||
if to_unicode is None:
|
||||
to_unicode = _make_to_unicode()
|
||||
|
||||
self.extension = extension
|
||||
self.file_encoding = file_encoding
|
||||
# TODO: unit test setting this attribute.
|
||||
self.search_dirs = search_dirs
|
||||
self.to_unicode = to_unicode
|
||||
|
||||
def _make_locator(self):
|
||||
return Locator(extension=self.extension)
|
||||
|
||||
def unicode(self, s, encoding=None):
|
||||
"""
|
||||
Convert a string to unicode using the given encoding, and return it.
|
||||
|
||||
This function uses the underlying to_unicode attribute.
|
||||
|
||||
Arguments:
|
||||
|
||||
s: a basestring instance to convert to unicode. Unlike Python's
|
||||
built-in unicode() function, it is okay to pass unicode strings
|
||||
to this function. (Passing a unicode string to Python's unicode()
|
||||
with the encoding argument throws the error, "TypeError: decoding
|
||||
Unicode is not supported.")
|
||||
|
||||
encoding: the encoding to pass to the to_unicode attribute.
|
||||
Defaults to None.
|
||||
|
||||
"""
|
||||
if isinstance(s, unicode):
|
||||
return unicode(s)
|
||||
|
||||
return self.to_unicode(s, encoding)
|
||||
|
||||
def read(self, path, encoding=None):
|
||||
"""
|
||||
Read the template at the given path, and return it as a unicode string.
|
||||
|
||||
"""
|
||||
b = common.read(path)
|
||||
|
||||
if encoding is None:
|
||||
encoding = self.file_encoding
|
||||
|
||||
return self.unicode(b, encoding)
|
||||
|
||||
def load_file(self, file_name):
|
||||
"""
|
||||
Find and return the template with the given file name.
|
||||
|
||||
Arguments:
|
||||
|
||||
file_name: the file name of the template.
|
||||
|
||||
"""
|
||||
locator = self._make_locator()
|
||||
|
||||
path = locator.find_file(file_name, self.search_dirs)
|
||||
|
||||
return self.read(path)
|
||||
|
||||
def load_name(self, name):
|
||||
"""
|
||||
Find and return the template with the given template name.
|
||||
|
||||
Arguments:
|
||||
|
||||
name: the name of the template.
|
||||
|
||||
"""
|
||||
locator = self._make_locator()
|
||||
|
||||
path = locator.find_name(name, self.search_dirs)
|
||||
|
||||
return self.read(path)
|
||||
|
||||
# TODO: unit-test this method.
|
||||
def load_object(self, obj):
|
||||
"""
|
||||
Find and return the template associated to the given object.
|
||||
|
||||
Arguments:
|
||||
|
||||
obj: an instance of a user-defined class.
|
||||
|
||||
search_dirs: the list of directories in which to search.
|
||||
|
||||
"""
|
||||
locator = self._make_locator()
|
||||
|
||||
path = locator.find_object(obj, self.search_dirs)
|
||||
|
||||
return self.read(path)
|
|
@ -0,0 +1,171 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
This module provides a Locator class for finding template files.
|
||||
|
||||
"""
|
||||
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
|
||||
from pystache.common import TemplateNotFoundError
|
||||
from pystache import defaults
|
||||
|
||||
|
||||
class Locator(object):
|
||||
|
||||
def __init__(self, extension=None):
|
||||
"""
|
||||
Construct a template locator.
|
||||
|
||||
Arguments:
|
||||
|
||||
extension: the template file extension, without the leading dot.
|
||||
Pass False for no extension (e.g. to use extensionless template
|
||||
files). Defaults to the package default.
|
||||
|
||||
"""
|
||||
if extension is None:
|
||||
extension = defaults.TEMPLATE_EXTENSION
|
||||
|
||||
self.template_extension = extension
|
||||
|
||||
def get_object_directory(self, obj):
|
||||
"""
|
||||
Return the directory containing an object's defining class.
|
||||
|
||||
Returns None if there is no such directory, for example if the
|
||||
class was defined in an interactive Python session, or in a
|
||||
doctest that appears in a text file (rather than a Python file).
|
||||
|
||||
"""
|
||||
if not hasattr(obj, '__module__'):
|
||||
return None
|
||||
|
||||
module = sys.modules[obj.__module__]
|
||||
|
||||
if not hasattr(module, '__file__'):
|
||||
# TODO: add a unit test for this case.
|
||||
return None
|
||||
|
||||
path = module.__file__
|
||||
|
||||
return os.path.dirname(path)
|
||||
|
||||
def make_template_name(self, obj):
|
||||
"""
|
||||
Return the canonical template name for an object instance.
|
||||
|
||||
This method converts Python-style class names (PEP 8's recommended
|
||||
CamelCase, aka CapWords) to lower_case_with_underscores. Here
|
||||
is an example with code:
|
||||
|
||||
>>> class HelloWorld(object):
|
||||
... pass
|
||||
>>> hi = HelloWorld()
|
||||
>>>
|
||||
>>> locator = Locator()
|
||||
>>> locator.make_template_name(hi)
|
||||
'hello_world'
|
||||
|
||||
"""
|
||||
template_name = obj.__class__.__name__
|
||||
|
||||
def repl(match):
|
||||
return '_' + match.group(0).lower()
|
||||
|
||||
return re.sub('[A-Z]', repl, template_name)[1:]
|
||||
|
||||
def make_file_name(self, template_name, template_extension=None):
|
||||
"""
|
||||
Generate and return the file name for the given template name.
|
||||
|
||||
Arguments:
|
||||
|
||||
template_extension: defaults to the instance's extension.
|
||||
|
||||
"""
|
||||
file_name = template_name
|
||||
|
||||
if template_extension is None:
|
||||
template_extension = self.template_extension
|
||||
|
||||
if template_extension is not False:
|
||||
file_name += os.path.extsep + template_extension
|
||||
|
||||
return file_name
|
||||
|
||||
def _find_path(self, search_dirs, file_name):
|
||||
"""
|
||||
Search for the given file, and return the path.
|
||||
|
||||
Returns None if the file is not found.
|
||||
|
||||
"""
|
||||
for dir_path in search_dirs:
|
||||
file_path = os.path.join(dir_path, file_name)
|
||||
if os.path.exists(file_path):
|
||||
return file_path
|
||||
|
||||
return None
|
||||
|
||||
def _find_path_required(self, search_dirs, file_name):
|
||||
"""
|
||||
Return the path to a template with the given file name.
|
||||
|
||||
"""
|
||||
path = self._find_path(search_dirs, file_name)
|
||||
|
||||
if path is None:
|
||||
raise TemplateNotFoundError('File %s not found in dirs: %s' %
|
||||
(repr(file_name), repr(search_dirs)))
|
||||
|
||||
return path
|
||||
|
||||
def find_file(self, file_name, search_dirs):
|
||||
"""
|
||||
Return the path to a template with the given file name.
|
||||
|
||||
Arguments:
|
||||
|
||||
file_name: the file name of the template.
|
||||
|
||||
search_dirs: the list of directories in which to search.
|
||||
|
||||
"""
|
||||
return self._find_path_required(search_dirs, file_name)
|
||||
|
||||
def find_name(self, template_name, search_dirs):
|
||||
"""
|
||||
Return the path to a template with the given name.
|
||||
|
||||
Arguments:
|
||||
|
||||
template_name: the name of the template.
|
||||
|
||||
search_dirs: the list of directories in which to search.
|
||||
|
||||
"""
|
||||
file_name = self.make_file_name(template_name)
|
||||
|
||||
return self._find_path_required(search_dirs, file_name)
|
||||
|
||||
def find_object(self, obj, search_dirs, file_name=None):
|
||||
"""
|
||||
Return the path to a template associated with the given object.
|
||||
|
||||
"""
|
||||
if file_name is None:
|
||||
# TODO: should we define a make_file_name() method?
|
||||
template_name = self.make_template_name(obj)
|
||||
file_name = self.make_file_name(template_name)
|
||||
|
||||
dir_path = self.get_object_directory(obj)
|
||||
|
||||
if dir_path is not None:
|
||||
search_dirs = [dir_path] + search_dirs
|
||||
|
||||
path = self._find_path_required(search_dirs, file_name)
|
||||
|
||||
return path
|
|
@ -0,0 +1,50 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
Exposes a class that represents a parsed (or compiled) template.
|
||||
|
||||
"""
|
||||
|
||||
|
||||
class ParsedTemplate(object):
|
||||
|
||||
"""
|
||||
Represents a parsed or compiled template.
|
||||
|
||||
An instance wraps a list of unicode strings and node objects. A node
|
||||
object must have a `render(engine, stack)` method that accepts a
|
||||
RenderEngine instance and a ContextStack instance and returns a unicode
|
||||
string.
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
self._parse_tree = []
|
||||
|
||||
def __repr__(self):
|
||||
return repr(self._parse_tree)
|
||||
|
||||
def add(self, node):
|
||||
"""
|
||||
Arguments:
|
||||
|
||||
node: a unicode string or node object instance. See the class
|
||||
docstring for information.
|
||||
|
||||
"""
|
||||
self._parse_tree.append(node)
|
||||
|
||||
def render(self, engine, context):
|
||||
"""
|
||||
Returns: a string of type unicode.
|
||||
|
||||
"""
|
||||
# We avoid use of the ternary operator for Python 2.4 support.
|
||||
def get_unicode(node):
|
||||
if type(node) is unicode:
|
||||
return node
|
||||
return node.render(engine, context)
|
||||
parts = map(get_unicode, self._parse_tree)
|
||||
s = ''.join(parts)
|
||||
|
||||
return unicode(s)
|
|
@ -0,0 +1,378 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
Exposes a parse() function to parse template strings.
|
||||
|
||||
"""
|
||||
|
||||
import re
|
||||
|
||||
from pystache import defaults
|
||||
from pystache.parsed import ParsedTemplate
|
||||
|
||||
|
||||
END_OF_LINE_CHARACTERS = [u'\r', u'\n']
|
||||
NON_BLANK_RE = re.compile(ur'^(.)', re.M)
|
||||
|
||||
|
||||
# TODO: add some unit tests for this.
|
||||
# TODO: add a test case that checks for spurious spaces.
|
||||
# TODO: add test cases for delimiters.
|
||||
def parse(template, delimiters=None):
|
||||
"""
|
||||
Parse a unicode template string and return a ParsedTemplate instance.
|
||||
|
||||
Arguments:
|
||||
|
||||
template: a unicode template string.
|
||||
|
||||
delimiters: a 2-tuple of delimiters. Defaults to the package default.
|
||||
|
||||
Examples:
|
||||
|
||||
>>> parsed = parse(u"Hey {{#who}}{{name}}!{{/who}}")
|
||||
>>> print str(parsed).replace('u', '') # This is a hack to get the test to pass both in Python 2 and 3.
|
||||
['Hey ', _SectionNode(key='who', index_begin=12, index_end=21, parsed=[_EscapeNode(key='name'), '!'])]
|
||||
|
||||
"""
|
||||
if type(template) is not unicode:
|
||||
raise Exception("Template is not unicode: %s" % type(template))
|
||||
parser = _Parser(delimiters)
|
||||
return parser.parse(template)
|
||||
|
||||
|
||||
def _compile_template_re(delimiters):
|
||||
"""
|
||||
Return a regular expression object (re.RegexObject) instance.
|
||||
|
||||
"""
|
||||
# The possible tag type characters following the opening tag,
|
||||
# excluding "=" and "{".
|
||||
tag_types = "!>&/#^"
|
||||
|
||||
# TODO: are we following this in the spec?
|
||||
#
|
||||
# The tag's content MUST be a non-whitespace character sequence
|
||||
# NOT containing the current closing delimiter.
|
||||
#
|
||||
tag = r"""
|
||||
(?P<whitespace>[\ \t]*)
|
||||
%(otag)s \s*
|
||||
(?:
|
||||
(?P<change>=) \s* (?P<delims>.+?) \s* = |
|
||||
(?P<raw>{) \s* (?P<raw_name>.+?) \s* } |
|
||||
(?P<tag>[%(tag_types)s]?) \s* (?P<tag_key>[\s\S]+?)
|
||||
)
|
||||
\s* %(ctag)s
|
||||
""" % {'tag_types': tag_types, 'otag': re.escape(delimiters[0]), 'ctag': re.escape(delimiters[1])}
|
||||
|
||||
return re.compile(tag, re.VERBOSE)
|
||||
|
||||
|
||||
class ParsingError(Exception):
|
||||
|
||||
pass
|
||||
|
||||
|
||||
## Node types
|
||||
|
||||
def _format(obj, exclude=None):
|
||||
if exclude is None:
|
||||
exclude = []
|
||||
exclude.append('key')
|
||||
attrs = obj.__dict__
|
||||
names = list(set(attrs.keys()) - set(exclude))
|
||||
names.sort()
|
||||
names.insert(0, 'key')
|
||||
args = ["%s=%s" % (name, repr(attrs[name])) for name in names]
|
||||
return "%s(%s)" % (obj.__class__.__name__, ", ".join(args))
|
||||
|
||||
|
||||
class _CommentNode(object):
|
||||
|
||||
def __repr__(self):
|
||||
return _format(self)
|
||||
|
||||
def render(self, engine, context):
|
||||
return u''
|
||||
|
||||
|
||||
class _ChangeNode(object):
|
||||
|
||||
def __init__(self, delimiters):
|
||||
self.delimiters = delimiters
|
||||
|
||||
def __repr__(self):
|
||||
return _format(self)
|
||||
|
||||
def render(self, engine, context):
|
||||
return u''
|
||||
|
||||
|
||||
class _EscapeNode(object):
|
||||
|
||||
def __init__(self, key):
|
||||
self.key = key
|
||||
|
||||
def __repr__(self):
|
||||
return _format(self)
|
||||
|
||||
def render(self, engine, context):
|
||||
s = engine.fetch_string(context, self.key)
|
||||
return engine.escape(s)
|
||||
|
||||
|
||||
class _LiteralNode(object):
|
||||
|
||||
def __init__(self, key):
|
||||
self.key = key
|
||||
|
||||
def __repr__(self):
|
||||
return _format(self)
|
||||
|
||||
def render(self, engine, context):
|
||||
s = engine.fetch_string(context, self.key)
|
||||
return engine.literal(s)
|
||||
|
||||
|
||||
class _PartialNode(object):
|
||||
|
||||
def __init__(self, key, indent):
|
||||
self.key = key
|
||||
self.indent = indent
|
||||
|
||||
def __repr__(self):
|
||||
return _format(self)
|
||||
|
||||
def render(self, engine, context):
|
||||
template = engine.resolve_partial(self.key)
|
||||
# Indent before rendering.
|
||||
template = re.sub(NON_BLANK_RE, self.indent + ur'\1', template)
|
||||
|
||||
return engine.render(template, context)
|
||||
|
||||
|
||||
class _InvertedNode(object):
|
||||
|
||||
def __init__(self, key, parsed_section):
|
||||
self.key = key
|
||||
self.parsed_section = parsed_section
|
||||
|
||||
def __repr__(self):
|
||||
return _format(self)
|
||||
|
||||
def render(self, engine, context):
|
||||
# TODO: is there a bug because we are not using the same
|
||||
# logic as in fetch_string()?
|
||||
data = engine.resolve_context(context, self.key)
|
||||
# Note that lambdas are considered truthy for inverted sections
|
||||
# per the spec.
|
||||
if data:
|
||||
return u''
|
||||
return self.parsed_section.render(engine, context)
|
||||
|
||||
|
||||
class _SectionNode(object):
|
||||
|
||||
# TODO: the template_ and parsed_template_ arguments don't both seem
|
||||
# to be necessary. Can we remove one of them? For example, if
|
||||
# callable(data) is True, then the initial parsed_template isn't used.
|
||||
def __init__(self, key, parsed, delimiters, template, index_begin, index_end):
|
||||
self.delimiters = delimiters
|
||||
self.key = key
|
||||
self.parsed = parsed
|
||||
self.template = template
|
||||
self.index_begin = index_begin
|
||||
self.index_end = index_end
|
||||
|
||||
def __repr__(self):
|
||||
return _format(self, exclude=['delimiters', 'template'])
|
||||
|
||||
def render(self, engine, context):
|
||||
values = engine.fetch_section_data(context, self.key)
|
||||
|
||||
parts = []
|
||||
for val in values:
|
||||
if callable(val):
|
||||
# Lambdas special case section rendering and bypass pushing
|
||||
# the data value onto the context stack. From the spec--
|
||||
#
|
||||
# When used as the data value for a Section tag, the
|
||||
# lambda MUST be treatable as an arity 1 function, and
|
||||
# invoked as such (passing a String containing the
|
||||
# unprocessed section contents). The returned value
|
||||
# MUST be rendered against the current delimiters, then
|
||||
# interpolated in place of the section.
|
||||
#
|
||||
# Also see--
|
||||
#
|
||||
# https://github.com/defunkt/pystache/issues/113
|
||||
#
|
||||
# TODO: should we check the arity?
|
||||
val = val(self.template[self.index_begin:self.index_end])
|
||||
val = engine._render_value(val, context, delimiters=self.delimiters)
|
||||
parts.append(val)
|
||||
continue
|
||||
|
||||
context.push(val)
|
||||
parts.append(self.parsed.render(engine, context))
|
||||
context.pop()
|
||||
|
||||
return unicode(''.join(parts))
|
||||
|
||||
|
||||
class _Parser(object):
|
||||
|
||||
_delimiters = None
|
||||
_template_re = None
|
||||
|
||||
def __init__(self, delimiters=None):
|
||||
if delimiters is None:
|
||||
delimiters = defaults.DELIMITERS
|
||||
|
||||
self._delimiters = delimiters
|
||||
|
||||
def _compile_delimiters(self):
|
||||
self._template_re = _compile_template_re(self._delimiters)
|
||||
|
||||
def _change_delimiters(self, delimiters):
|
||||
self._delimiters = delimiters
|
||||
self._compile_delimiters()
|
||||
|
||||
def parse(self, template):
|
||||
"""
|
||||
Parse a template string starting at some index.
|
||||
|
||||
This method uses the current tag delimiter.
|
||||
|
||||
Arguments:
|
||||
|
||||
template: a unicode string that is the template to parse.
|
||||
|
||||
index: the index at which to start parsing.
|
||||
|
||||
Returns:
|
||||
|
||||
a ParsedTemplate instance.
|
||||
|
||||
"""
|
||||
self._compile_delimiters()
|
||||
|
||||
start_index = 0
|
||||
content_end_index, parsed_section, section_key = None, None, None
|
||||
parsed_template = ParsedTemplate()
|
||||
|
||||
states = []
|
||||
|
||||
while True:
|
||||
match = self._template_re.search(template, start_index)
|
||||
|
||||
if match is None:
|
||||
break
|
||||
|
||||
match_index = match.start()
|
||||
end_index = match.end()
|
||||
|
||||
matches = match.groupdict()
|
||||
|
||||
# Normalize the matches dictionary.
|
||||
if matches['change'] is not None:
|
||||
matches.update(tag='=', tag_key=matches['delims'])
|
||||
elif matches['raw'] is not None:
|
||||
matches.update(tag='&', tag_key=matches['raw_name'])
|
||||
|
||||
tag_type = matches['tag']
|
||||
tag_key = matches['tag_key']
|
||||
leading_whitespace = matches['whitespace']
|
||||
|
||||
# Standalone (non-interpolation) tags consume the entire line,
|
||||
# both leading whitespace and trailing newline.
|
||||
did_tag_begin_line = match_index == 0 or template[match_index - 1] in END_OF_LINE_CHARACTERS
|
||||
did_tag_end_line = end_index == len(template) or template[end_index] in END_OF_LINE_CHARACTERS
|
||||
is_tag_interpolating = tag_type in ['', '&']
|
||||
|
||||
if did_tag_begin_line and did_tag_end_line and not is_tag_interpolating:
|
||||
if end_index < len(template):
|
||||
end_index += template[end_index] == '\r' and 1 or 0
|
||||
if end_index < len(template):
|
||||
end_index += template[end_index] == '\n' and 1 or 0
|
||||
elif leading_whitespace:
|
||||
match_index += len(leading_whitespace)
|
||||
leading_whitespace = ''
|
||||
|
||||
# Avoid adding spurious empty strings to the parse tree.
|
||||
if start_index != match_index:
|
||||
parsed_template.add(template[start_index:match_index])
|
||||
|
||||
start_index = end_index
|
||||
|
||||
if tag_type in ('#', '^'):
|
||||
# Cache current state.
|
||||
state = (tag_type, end_index, section_key, parsed_template)
|
||||
states.append(state)
|
||||
|
||||
# Initialize new state
|
||||
section_key, parsed_template = tag_key, ParsedTemplate()
|
||||
continue
|
||||
|
||||
if tag_type == '/':
|
||||
if tag_key != section_key:
|
||||
raise ParsingError("Section end tag mismatch: %s != %s" % (tag_key, section_key))
|
||||
|
||||
# Restore previous state with newly found section data.
|
||||
parsed_section = parsed_template
|
||||
|
||||
(tag_type, section_start_index, section_key, parsed_template) = states.pop()
|
||||
node = self._make_section_node(template, tag_type, tag_key, parsed_section,
|
||||
section_start_index, match_index)
|
||||
|
||||
else:
|
||||
node = self._make_interpolation_node(tag_type, tag_key, leading_whitespace)
|
||||
|
||||
parsed_template.add(node)
|
||||
|
||||
# Avoid adding spurious empty strings to the parse tree.
|
||||
if start_index != len(template):
|
||||
parsed_template.add(template[start_index:])
|
||||
|
||||
return parsed_template
|
||||
|
||||
def _make_interpolation_node(self, tag_type, tag_key, leading_whitespace):
|
||||
"""
|
||||
Create and return a non-section node for the parse tree.
|
||||
|
||||
"""
|
||||
# TODO: switch to using a dictionary instead of a bunch of ifs and elifs.
|
||||
if tag_type == '!':
|
||||
return _CommentNode()
|
||||
|
||||
if tag_type == '=':
|
||||
delimiters = tag_key.split()
|
||||
self._change_delimiters(delimiters)
|
||||
return _ChangeNode(delimiters)
|
||||
|
||||
if tag_type == '':
|
||||
return _EscapeNode(tag_key)
|
||||
|
||||
if tag_type == '&':
|
||||
return _LiteralNode(tag_key)
|
||||
|
||||
if tag_type == '>':
|
||||
return _PartialNode(tag_key, leading_whitespace)
|
||||
|
||||
raise Exception("Invalid symbol for interpolation tag: %s" % repr(tag_type))
|
||||
|
||||
def _make_section_node(self, template, tag_type, tag_key, parsed_section,
|
||||
section_start_index, section_end_index):
|
||||
"""
|
||||
Create and return a section node for the parse tree.
|
||||
|
||||
"""
|
||||
if tag_type == '#':
|
||||
return _SectionNode(tag_key, parsed_section, self._delimiters,
|
||||
template, section_start_index, section_end_index)
|
||||
|
||||
if tag_type == '^':
|
||||
return _InvertedNode(tag_key, parsed_section)
|
||||
|
||||
raise Exception("Invalid symbol for section tag: %s" % repr(tag_type))
|
|
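
# --- Illustrative sketch; not part of the vendored pystache sources. ---
# The delimiters argument documented in parse() above lets a template be
# parsed with custom tag delimiters; the resulting ParsedTemplate can then
# be rendered like any other pre-parsed template.
import pystache
from pystache.parser import parse

parsed = parse(u'Hello, ((name))!', delimiters=(u'((', u'))'))
print pystache.Renderer().render(parsed, {'name': 'World'})  # Hello, World!
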
@ -0,0 +1,181 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
Defines a class responsible for rendering logic.
|
||||
|
||||
"""
|
||||
|
||||
import re
|
||||
|
||||
from pystache.common import is_string
|
||||
from pystache.parser import parse
|
||||
|
||||
|
||||
def context_get(stack, name):
|
||||
"""
|
||||
Find and return a name from a ContextStack instance.
|
||||
|
||||
"""
|
||||
return stack.get(name)
|
||||
|
||||
|
||||
class RenderEngine(object):
|
||||
|
||||
"""
|
||||
Provides a render() method.
|
||||
|
||||
This class is meant only for internal use.
|
||||
|
||||
As a rule, the code in this class operates on unicode strings where
|
||||
possible rather than, say, strings of type str or markupsafe.Markup.
|
||||
This means that strings obtained from "external" sources like partials
|
||||
and variable tag values are immediately converted to unicode (or
|
||||
escaped and converted to unicode) before being operated on further.
|
||||
This makes maintaining, reasoning about, and testing the correctness
|
||||
of the code much simpler. In particular, it keeps the implementation
|
||||
of this class independent of the API details of one (or possibly more)
|
||||
unicode subclasses (e.g. markupsafe.Markup).
|
||||
|
||||
"""
|
||||
|
||||
# TODO: it would probably be better for the constructor to accept
|
||||
# and set as an attribute a single RenderResolver instance
|
||||
# that encapsulates the customizable aspects of converting
|
||||
# strings and resolving partials and names from context.
|
||||
def __init__(self, literal=None, escape=None, resolve_context=None,
|
||||
resolve_partial=None, to_str=None):
|
||||
"""
|
||||
Arguments:
|
||||
|
||||
literal: the function used to convert unescaped variable tag
|
||||
values to unicode, e.g. the value corresponding to a tag
|
||||
"{{{name}}}". The function should accept a string of type
|
||||
str or unicode (or a subclass) and return a string of type
|
||||
unicode (but not a proper subclass of unicode).
|
||||
This class will only pass basestring instances to this
|
||||
function. For example, it will call str() on integer variable
|
||||
values prior to passing them to this function.
|
||||
|
||||
escape: the function used to escape and convert variable tag
|
||||
values to unicode, e.g. the value corresponding to a tag
|
||||
"{{name}}". The function should obey the same properties
|
||||
described above for the "literal" function argument.
|
||||
This function should take care to convert any str
|
||||
arguments to unicode just as the literal function should, as
|
||||
this class will not pass tag values to literal prior to passing
|
||||
them to this function. This allows for more flexibility,
|
||||
for example using a custom escape function that handles
|
||||
incoming strings of type markupsafe.Markup differently
|
||||
from plain unicode strings.
|
||||
|
||||
resolve_context: the function to call to resolve a name against
|
||||
a context stack. The function should accept two positional
|
||||
arguments: a ContextStack instance and a name to resolve.
|
||||
|
||||
resolve_partial: the function to call when loading a partial.
|
||||
The function should accept a template name string and return a
|
||||
template string of type unicode (not a subclass).
|
||||
|
||||
to_str: a function that accepts an object and returns a string (e.g.
|
||||
the built-in function str). This function is used for string
|
||||
coercion whenever a string is required (e.g. for converting None
|
||||
or 0 to a string).
|
||||
|
||||
"""
|
||||
self.escape = escape
|
||||
self.literal = literal
|
||||
self.resolve_context = resolve_context
|
||||
self.resolve_partial = resolve_partial
|
||||
self.to_str = to_str
|
||||
|
||||
# TODO: Rename context to stack throughout this module.
|
||||
|
||||
# From the spec:
|
||||
#
|
||||
# When used as the data value for an Interpolation tag, the lambda
|
||||
# MUST be treatable as an arity 0 function, and invoked as such.
|
||||
# The returned value MUST be rendered against the default delimiters,
|
||||
# then interpolated in place of the lambda.
|
||||
#
|
||||
def fetch_string(self, context, name):
|
||||
"""
|
||||
Get a value from the given context as a basestring instance.
|
||||
|
||||
"""
|
||||
val = self.resolve_context(context, name)
|
||||
|
||||
if callable(val):
|
||||
# Return because _render_value() is already a string.
|
||||
return self._render_value(val(), context)
|
||||
|
||||
if not is_string(val):
|
||||
return self.to_str(val)
|
||||
|
||||
return val
|
||||
|
||||
def fetch_section_data(self, context, name):
|
||||
"""
|
||||
Fetch the value of a section as a list.
|
||||
|
||||
"""
|
||||
data = self.resolve_context(context, name)
|
||||
|
||||
# From the spec:
|
||||
#
|
||||
# If the data is not of a list type, it is coerced into a list
|
||||
# as follows: if the data is truthy (e.g. `!!data == true`),
|
||||
# use a single-element list containing the data, otherwise use
|
||||
# an empty list.
|
||||
#
|
||||
if not data:
|
||||
data = []
|
||||
else:
|
||||
# The least brittle way to determine whether something
|
||||
# supports iteration is by trying to call iter() on it:
|
||||
#
|
||||
# http://docs.python.org/library/functions.html#iter
|
||||
#
|
||||
# It is not sufficient, for example, to check whether the item
|
||||
# implements __iter__ () (the iteration protocol). There is
|
||||
# also __getitem__() (the sequence protocol). In Python 2,
|
||||
# strings do not implement __iter__(), but in Python 3 they do.
|
||||
try:
|
||||
iter(data)
|
||||
except TypeError:
|
||||
# Then the value does not support iteration.
|
||||
data = [data]
|
||||
else:
|
||||
if is_string(data) or isinstance(data, dict):
|
||||
# Do not treat strings and dicts (which are iterable) as lists.
|
||||
data = [data]
|
||||
# Otherwise, treat the value as a list.
|
||||
|
||||
return data
|
||||
|
||||
def _render_value(self, val, context, delimiters=None):
|
||||
"""
|
||||
Render an arbitrary value.
|
||||
|
||||
"""
|
||||
if not is_string(val):
|
||||
# In case the template is an integer, for example.
|
||||
val = self.to_str(val)
|
||||
if type(val) is not unicode:
|
||||
val = self.literal(val)
|
||||
return self.render(val, context, delimiters)
|
||||
|
||||
def render(self, template, context_stack, delimiters=None):
|
||||
"""
|
||||
Render a unicode template string, and return as unicode.
|
||||
|
||||
Arguments:
|
||||
|
||||
template: a template string of type unicode (but not a proper
|
||||
subclass of unicode).
|
||||
|
||||
context_stack: a ContextStack instance.
|
||||
|
||||
"""
|
||||
parsed_template = parse(template, delimiters)
|
||||
|
||||
return parsed_template.render(self, context_stack)
|
|
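
# --- Illustrative sketch; not part of the vendored pystache sources. ---
# Wiring the constructor arguments documented above together by hand.
# The helper functions chosen here (cgi.escape for escaping, a partial
# resolver that returns the empty string) are assumptions made for the
# example, not pystache's own defaults.
import cgi

from pystache.context import ContextStack
from pystache.renderengine import RenderEngine, context_get

engine = RenderEngine(literal=unicode,
                      escape=lambda s: cgi.escape(unicode(s)),
                      resolve_context=context_get,
                      resolve_partial=lambda name: u'',
                      to_str=str)

stack = ContextStack({'name': '<World>'})
print engine.render(u'Hello, {{name}}!', stack)  # Hello, &lt;World&gt;!
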
@ -0,0 +1,460 @@
|
|||
# coding: utf-8
|
||||
|
||||
"""
|
||||
This module provides a Renderer class to render templates.
|
||||
|
||||
"""
|
||||
|
||||
import sys
|
||||
|
||||
from pystache import defaults
|
||||
from pystache.common import TemplateNotFoundError, MissingTags, is_string
|
||||
from pystache.context import ContextStack, KeyNotFoundError
|
||||
from pystache.loader import Loader
|
||||
from pystache.parsed import ParsedTemplate
|
||||
from pystache.renderengine import context_get, RenderEngine
|
||||
from pystache.specloader import SpecLoader
|
||||
from pystache.template_spec import TemplateSpec
|
||||
|
||||
|
||||
class Renderer(object):
|
||||
|
||||
"""
|
||||
A class for rendering mustache templates.
|
||||
|
||||
This class supports several rendering options which are described in
|
||||
the constructor's docstring. Other behavior can be customized by
|
||||
subclassing this class.
|
||||
|
||||
For example, one can pass a string-string dictionary to the constructor
|
||||
to bypass loading partials from the file system:
|
||||
|
||||
>>> partials = {'partial': 'Hello, {{thing}}!'}
|
||||
>>> renderer = Renderer(partials=partials)
|
||||
>>> # We apply print to make the test work in Python 3 after 2to3.
|
||||
>>> print renderer.render('{{>partial}}', {'thing': 'world'})
|
||||
Hello, world!
|
||||
|
||||
To customize string coercion (e.g. to render False values as ''), one can
|
||||
subclass this class. For example:
|
||||
|
||||
class MyRenderer(Renderer):
|
||||
def str_coerce(self, val):
|
||||
if not val:
|
||||
return ''
|
||||
else:
|
||||
return str(val)
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, file_encoding=None, string_encoding=None,
|
||||
decode_errors=None, search_dirs=None, file_extension=None,
|
||||
escape=None, partials=None, missing_tags=None):
|
||||
"""
|
||||
Construct an instance.
|
||||
|
||||
Arguments:
|
||||
|
||||
file_encoding: the name of the encoding to use by default when
|
||||
reading template files. All templates are converted to unicode
|
||||
prior to parsing. Defaults to the package default.
|
||||
|
||||
string_encoding: the name of the encoding to use when converting
|
||||
to unicode any byte strings (type str in Python 2) encountered
|
||||
during the rendering process. This name will be passed as the
|
||||
encoding argument to the built-in function unicode().
|
||||
Defaults to the package default.
|
||||
|
||||
decode_errors: the string to pass as the errors argument to the
|
||||
built-in function unicode() when converting byte strings to
|
||||
unicode. Defaults to the package default.
|
||||
|
||||
search_dirs: the list of directories in which to search when
|
||||
loading a template by name or file name. If given a string,
|
||||
the method interprets the string as a single directory.
|
||||
Defaults to the package default.
|
||||
|
||||
file_extension: the template file extension. Pass False for no
|
||||
extension (i.e. to use extensionless template files).
|
||||
Defaults to the package default.
|
||||
|
||||
partials: an object (e.g. a dictionary) for custom partial loading
|
||||
during the rendering process.
|
||||
The object should have a get() method that accepts a string
|
||||
and returns the corresponding template as a string, preferably
|
||||
as a unicode string. If there is no template with that name,
|
||||
the get() method should either return None (as dict.get() does)
|
||||
or raise an exception.
|
||||
If this argument is None, the rendering process will use
|
||||
the normal procedure of locating and reading templates from
|
||||
the file system -- using relevant instance attributes like
|
||||
search_dirs, file_encoding, etc.
|
||||
|
||||
escape: the function used to escape variable tag values when
|
||||
rendering a template. The function should accept a unicode
|
||||
string (or subclass of unicode) and return an escaped string
|
||||
that is again unicode (or a subclass of unicode).
|
||||
This function need not handle strings of type `str` because
|
||||
this class will only pass it unicode strings. The constructor
|
||||
assigns this function to the constructed instance's escape()
|
||||
method.
|
||||
To disable escaping entirely, one can pass `lambda u: u`
|
||||
as the escape function, for example. One may also wish to
|
||||
consider using markupsafe's escape function: markupsafe.escape().
|
||||
This argument defaults to the package default.
|
||||
|
||||
missing_tags: a string specifying how to handle missing tags.
|
||||
If 'strict', an error is raised on a missing tag. If 'ignore',
|
||||
the value of the tag is the empty string. Defaults to the
|
||||
package default.
|
||||
|
||||
"""
|
||||
if decode_errors is None:
|
||||
decode_errors = defaults.DECODE_ERRORS
|
||||
|
||||
if escape is None:
|
||||
escape = defaults.TAG_ESCAPE
|
||||
|
||||
if file_encoding is None:
|
||||
file_encoding = defaults.FILE_ENCODING
|
||||
|
||||
if file_extension is None:
|
||||
file_extension = defaults.TEMPLATE_EXTENSION
|
||||
|
||||
if missing_tags is None:
|
||||
missing_tags = defaults.MISSING_TAGS
|
||||
|
||||
if search_dirs is None:
|
||||
search_dirs = defaults.SEARCH_DIRS
|
||||
|
||||
if string_encoding is None:
|
||||
string_encoding = defaults.STRING_ENCODING
|
||||
|
||||
if isinstance(search_dirs, basestring):
|
||||
search_dirs = [search_dirs]
|
||||
|
||||
self._context = None
|
||||
self.decode_errors = decode_errors
|
||||
self.escape = escape
|
||||
self.file_encoding = file_encoding
|
||||
self.file_extension = file_extension
|
||||
self.missing_tags = missing_tags
|
||||
self.partials = partials
|
||||
self.search_dirs = search_dirs
|
||||
self.string_encoding = string_encoding
|
||||
|
||||
# This is an experimental way of giving views access to the current context.
|
||||
# TODO: consider another approach of not giving access via a property,
|
||||
# but instead letting the caller pass the initial context to the
|
||||
# main render() method by reference. This approach would probably
|
||||
# be less likely to be misused.
|
||||
@property
|
||||
def context(self):
|
||||
"""
|
||||
Return the current rendering context [experimental].
|
||||
|
||||
"""
|
||||
return self._context
|
||||
|
||||
# We could not choose str() as the name because 2to3 renames the unicode()
|
||||
# method of this class to str().
|
||||
def str_coerce(self, val):
|
||||
"""
|
||||
Coerce a non-string value to a string.
|
||||
|
||||
This method is called whenever a non-string is encountered during the
|
||||
rendering process when a string is needed (e.g. if a context value
|
||||
for string interpolation is not a string). To customize string
|
||||
coercion, you can override this method.
|
||||
|
||||
"""
|
||||
return str(val)
|
||||
|
||||
def _to_unicode_soft(self, s):
|
||||
"""
|
||||
Convert a basestring to unicode, preserving any unicode subclass.
|
||||
|
||||
"""
|
||||
# We type-check to avoid "TypeError: decoding Unicode is not supported".
|
||||
# We avoid the Python ternary operator for Python 2.4 support.
|
||||
if isinstance(s, unicode):
|
||||
return s
|
||||
return self.unicode(s)
|
||||
|
||||
def _to_unicode_hard(self, s):
|
||||
"""
|
||||
Convert a basestring to a string with type unicode (not subclass).
|
||||
|
||||
"""
|
||||
return unicode(self._to_unicode_soft(s))
|
||||
|
||||
def _escape_to_unicode(self, s):
|
||||
"""
|
||||
Convert a basestring to unicode (preserving any unicode subclass), and escape it.
|
||||
|
||||
Returns a unicode string (not subclass).
|
||||
|
||||
"""
|
||||
return unicode(self.escape(self._to_unicode_soft(s)))
|
||||
|
||||
def unicode(self, b, encoding=None):
|
||||
"""
|
||||
Convert a byte string to unicode, using string_encoding and decode_errors.
|
||||
|
||||
Arguments:
|
||||
|
||||
b: a byte string.
|
||||
|
||||
encoding: the name of an encoding. Defaults to the string_encoding
|
||||
attribute for this instance.
|
||||
|
||||
Raises:
|
||||
|
||||
TypeError: Because this method calls Python's built-in unicode()
|
||||
function, this method raises the following exception if the
|
||||
given string is already unicode:
|
||||
|
||||
TypeError: decoding Unicode is not supported
|
||||
|
||||
"""
|
||||
if encoding is None:
|
||||
encoding = self.string_encoding
|
||||
|
||||
# TODO: Wrap UnicodeDecodeErrors with a message about setting
|
||||
# the string_encoding and decode_errors attributes.
|
||||
return unicode(b, encoding, self.decode_errors)
|
||||
|
||||
def _make_loader(self):
|
||||
"""
|
||||
Create a Loader instance using current attributes.
|
||||
|
||||
"""
|
||||
return Loader(file_encoding=self.file_encoding, extension=self.file_extension,
|
||||
to_unicode=self.unicode, search_dirs=self.search_dirs)
|
||||
|
||||
def _make_load_template(self):
|
||||
"""
|
||||
Return a function that loads a template by name.
|
||||
|
||||
"""
|
||||
loader = self._make_loader()
|
||||
|
||||
def load_template(template_name):
|
||||
return loader.load_name(template_name)
|
||||
|
||||
return load_template
|
||||
|
||||
def _make_load_partial(self):
|
||||
"""
|
||||
Return a function that loads a partial by name.
|
||||
|
||||
"""
|
||||
if self.partials is None:
|
||||
return self._make_load_template()
|
||||
|
||||
# Otherwise, create a function from the custom partial loader.
|
||||
partials = self.partials
|
||||
|
||||
def load_partial(name):
|
||||
# TODO: consider using EAFP here instead.
|
||||
# http://docs.python.org/glossary.html#term-eafp
|
||||
# This would mean requiring that the custom partial loader
|
||||
# raise a KeyError on name not found.
|
||||
template = partials.get(name)
|
||||
if template is None:
|
||||
raise TemplateNotFoundError("Name %s not found in partials: %s" %
|
||||
(repr(name), type(partials)))
|
||||
|
||||
# RenderEngine requires that the return value be unicode.
|
||||
return self._to_unicode_hard(template)
|
||||
|
||||
return load_partial
|
||||
|
||||
def _is_missing_tags_strict(self):
|
||||
"""
|
||||
Return whether missing_tags is set to strict.
|
||||
|
||||
"""
|
||||
val = self.missing_tags
|
||||
|
||||
if val == MissingTags.strict:
|
||||
return True
|
||||
elif val == MissingTags.ignore:
|
||||
return False
|
||||
|
||||
raise Exception("Unsupported 'missing_tags' value: %s" % repr(val))
|
||||
|
||||
def _make_resolve_partial(self):
|
||||
"""
|
||||
Return the resolve_partial function to pass to RenderEngine.__init__().
|
||||
|
||||
"""
|
||||
load_partial = self._make_load_partial()
|
||||
|
||||
if self._is_missing_tags_strict():
|
||||
return load_partial
|
||||
# Otherwise, ignore missing tags.
|
||||
|
||||
def resolve_partial(name):
|
||||
try:
|
||||
return load_partial(name)
|
||||
except TemplateNotFoundError:
|
||||
return u''
|
||||
|
||||
return resolve_partial
|
||||
|
||||
def _make_resolve_context(self):
|
||||
"""
|
||||
Return the resolve_context function to pass to RenderEngine.__init__().
|
||||
|
||||
"""
|
||||
if self._is_missing_tags_strict():
|
||||
return context_get
|
||||
# Otherwise, ignore missing tags.
|
||||
|
||||
def resolve_context(stack, name):
|
||||
try:
|
||||
return context_get(stack, name)
|
||||
except KeyNotFoundError:
|
||||
return u''
|
||||
|
||||
return resolve_context
|
||||
|
||||
def _make_render_engine(self):
|
||||
"""
|
||||
Return a RenderEngine instance for rendering.
|
||||
|
||||
"""
|
||||
resolve_context = self._make_resolve_context()
|
||||
resolve_partial = self._make_resolve_partial()
|
||||
|
||||
engine = RenderEngine(literal=self._to_unicode_hard,
|
||||
escape=self._escape_to_unicode,
|
||||
resolve_context=resolve_context,
|
||||
resolve_partial=resolve_partial,
|
||||
to_str=self.str_coerce)
|
||||
return engine
|
||||
|
||||
# TODO: add unit tests for this method.
|
||||
def load_template(self, template_name):
|
||||
"""
|
||||
Load a template by name from the file system.
|
||||
|
||||
"""
|
||||
load_template = self._make_load_template()
|
||||
return load_template(template_name)
|
||||
|
||||
def _render_object(self, obj, *context, **kwargs):
|
||||
"""
|
||||
Render the template associated with the given object.
|
||||
|
||||
"""
|
||||
loader = self._make_loader()
|
||||
|
||||
# TODO: consider an approach that does not require using an if
|
||||
# block here. For example, perhaps this class's loader can be
|
||||
# a SpecLoader in all cases, and the SpecLoader instance can
|
||||
# check the object's type. Or perhaps Loader and SpecLoader
|
||||
# can be refactored to implement the same interface.
|
||||
if isinstance(obj, TemplateSpec):
|
||||
loader = SpecLoader(loader)
|
||||
template = loader.load(obj)
|
||||
else:
|
||||
template = loader.load_object(obj)
|
||||
|
||||
context = [obj] + list(context)
|
||||
|
||||
return self._render_string(template, *context, **kwargs)
|
||||
|
||||
def render_name(self, template_name, *context, **kwargs):
|
||||
"""
|
||||
Render the template with the given name using the given context.
|
||||
|
||||
See the render() docstring for more information.
|
||||
|
||||
"""
|
||||
loader = self._make_loader()
|
||||
template = loader.load_name(template_name)
|
||||
return self._render_string(template, *context, **kwargs)
|
||||
|
||||
def render_path(self, template_path, *context, **kwargs):
|
||||
"""
|
||||
Render the template at the given path using the given context.
|
||||
|
||||
Read the render() docstring for more information.
|
||||
|
||||
"""
|
||||
loader = self._make_loader()
|
||||
template = loader.read(template_path)
|
||||
|
||||
return self._render_string(template, *context, **kwargs)
|
||||
|
||||
def _render_string(self, template, *context, **kwargs):
|
||||
"""
|
||||
Render the given template string using the given context.
|
||||
|
||||
"""
|
||||
# RenderEngine.render() requires that the template string be unicode.
|
||||
template = self._to_unicode_hard(template)
|
||||
|
||||
render_func = lambda engine, stack: engine.render(template, stack)
|
||||
|
||||
return self._render_final(render_func, *context, **kwargs)
|
||||
|
||||
# All calls to render() should end here because it prepares the
|
||||
# context stack correctly.
|
||||
def _render_final(self, render_func, *context, **kwargs):
|
||||
"""
|
||||
Arguments:
|
||||
|
||||
render_func: a function that accepts a RenderEngine and ContextStack
|
||||
instance and returns a template rendering as a unicode string.
|
||||
|
||||
"""
|
||||
stack = ContextStack.create(*context, **kwargs)
|
||||
self._context = stack
|
||||
|
||||
engine = self._make_render_engine()
|
||||
|
||||
return render_func(engine, stack)
|
||||
|
||||
def render(self, template, *context, **kwargs):
|
||||
"""
|
||||
Render the given template string, view template, or parsed template.
|
||||
|
||||
Returns a unicode string.
|
||||
|
||||
Prior to rendering, this method will convert a template that is a
|
||||
byte string (type str in Python 2) to unicode using the string_encoding
|
||||
and decode_errors attributes. See the constructor docstring for
|
||||
more information.
|
||||
|
||||
Arguments:
|
||||
|
||||
template: a template string that is unicode or a byte string,
|
||||
a ParsedTemplate instance, or another object instance. In the
|
||||
final case, the function first looks for the template associated
|
||||
to the object by calling this class's get_associated_template()
|
||||
method. The rendering process also uses the passed object as
|
||||
the first element of the context stack when rendering.
|
||||
|
||||
*context: zero or more dictionaries, ContextStack instances, or objects
|
||||
with which to populate the initial context stack. None
|
||||
arguments are skipped. Items in the *context list are added to
|
||||
the context stack in order so that later items in the argument
|
||||
list take precedence over earlier items.
|
||||
|
||||
**kwargs: additional key-value data to add to the context stack.
|
||||
As these arguments appear after all items in the *context list,
|
||||
in the case of key conflicts these values take precedence over
|
||||
all items in the *context list.
|
||||
|
||||
"""
|
||||
if is_string(template):
|
||||
return self._render_string(template, *context, **kwargs)
|
||||
if isinstance(template, ParsedTemplate):
|
||||
render_func = lambda engine, stack: template.render(engine, stack)
|
||||
return self._render_final(render_func, *context, **kwargs)
|
||||
# Otherwise, we assume the template is an object.
|
||||
|
||||
return self._render_object(template, *context, **kwargs)
|
|
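
# --- Illustrative sketch; not part of the vendored pystache sources. ---
# A dictionary of partials and strict handling of missing tags, as described
# in the Renderer constructor docstring above.
from pystache.renderer import Renderer
from pystache.context import KeyNotFoundError

renderer = Renderer(partials={'greeting': u'Hello, {{name}}!'},
                    missing_tags='strict')

print renderer.render(u'{{>greeting}}', {'name': 'World'})  # Hello, World!

try:
    renderer.render(u'{{missing}}', {})
except KeyNotFoundError:
    print 'strict mode raised on the missing tag'
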
@ -0,0 +1,90 @@
# coding: utf-8

"""
This module supports customized (aka special or specified) template loading.

"""

import os.path

from pystache.loader import Loader


# TODO: add test cases for this class.
class SpecLoader(object):

    """
    Supports loading custom-specified templates (from TemplateSpec instances).

    """

    def __init__(self, loader=None):
        if loader is None:
            loader = Loader()

        self.loader = loader

    def _find_relative(self, spec):
        """
        Return the path to the template as a relative (dir, file_name) pair.

        The directory returned is relative to the directory containing the
        class definition of the given object. The method returns None for
        this directory if the directory is unknown without first searching
        the search directories.

        """
        if spec.template_rel_path is not None:
            return os.path.split(spec.template_rel_path)
        # Otherwise, determine the file name separately.

        locator = self.loader._make_locator()

        # We do not use the ternary operator for Python 2.4 support.
        if spec.template_name is not None:
            template_name = spec.template_name
        else:
            template_name = locator.make_template_name(spec)

        file_name = locator.make_file_name(template_name, spec.template_extension)

        return (spec.template_rel_directory, file_name)

    def _find(self, spec):
        """
        Find and return the path to the template associated to the instance.

        """
        if spec.template_path is not None:
            return spec.template_path

        dir_path, file_name = self._find_relative(spec)

        locator = self.loader._make_locator()

        if dir_path is None:
            # Then we need to search for the path.
            path = locator.find_object(spec, self.loader.search_dirs, file_name=file_name)
        else:
            obj_dir = locator.get_object_directory(spec)
            path = os.path.join(obj_dir, dir_path, file_name)

        return path

    def load(self, spec):
        """
        Find and return the template associated to a TemplateSpec instance.

        Returns the template as a unicode string.

        Arguments:

          spec: a TemplateSpec instance.

        """
        if spec.template is not None:
            return self.loader.unicode(spec.template, spec.template_encoding)

        path = self._find(spec)

        return self.loader.read(path, spec.template_encoding)

@ -0,0 +1,53 @@
# coding: utf-8

"""
Provides a class to customize template information on a per-view basis.

To customize template properties for a particular view, create that view
from a class that subclasses TemplateSpec. The "spec" in TemplateSpec
stands for "special" or "specified" template information.

"""


class TemplateSpec(object):

    """
    A mixin or interface for specifying custom template information.

    The "spec" in TemplateSpec can be taken to mean that the template
    information is either "specified" or "special."

    A view should subclass this class only if customized template loading
    is needed. The following attributes allow one to customize/override
    template information on a per view basis. A None value means to use
    default behavior for that value and perform no customization. All
    attributes are initialized to None.

    Attributes:

      template: the template as a string.

      template_encoding: the encoding used by the template.

      template_extension: the template file extension. Defaults to "mustache".
        Pass False for no extension (i.e. extensionless template files).

      template_name: the name of the template.

      template_path: absolute path to the template.

      template_rel_directory: the directory containing the template file,
        relative to the directory containing the module defining the class.

      template_rel_path: the path to the template file, relative to the
        directory containing the module defining the class.

    """

    template = None
    template_encoding = None
    template_extension = None
    template_name = None
    template_path = None
    template_rel_directory = None
    template_rel_path = None

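
# --- Illustrative sketch; not part of the vendored pystache sources. ---
# A view class that supplies its template inline through the `template`
# attribute documented above, so no file-system lookup is needed.
import pystache
from pystache import TemplateSpec

class Greeting(TemplateSpec):
    template = u'Hello, {{to}}!'

    def to(self):
        return 'World'

print pystache.Renderer().render(Greeting())  # Hello, World!
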
@ -0,0 +1,413 @@
|
|||
#!/usr/bin/env python
|
||||
# coding: utf-8
|
||||
|
||||
"""
|
||||
This script supports publishing Pystache to PyPI.
|
||||
|
||||
This docstring contains instructions to Pystache maintainers on how
|
||||
to release a new version of Pystache.
|
||||
|
||||
(1) Prepare the release.
|
||||
|
||||
Make sure the code is finalized and merged to master. Bump the version
|
||||
number in setup.py, update the release date in the HISTORY file, etc.
|
||||
|
||||
Generate the reStructuredText long_description using--
|
||||
|
||||
$ python setup.py prep
|
||||
|
||||
and be sure this new version is checked in. You must have pandoc installed
|
||||
to do this step:
|
||||
|
||||
http://johnmacfarlane.net/pandoc/
|
||||
|
||||
It helps to review this auto-generated file on GitHub prior to uploading
|
||||
because the long description will be sent to PyPI and appear there after
|
||||
publishing. PyPI attempts to convert this string to HTML before displaying
|
||||
it on the PyPI project page. If PyPI finds any issues, it will render it
|
||||
instead as plain-text, which we do not want.
|
||||
|
||||
To check in advance that PyPI will accept and parse the reST file as HTML,
|
||||
you can use the rst2html program installed by the docutils package
|
||||
(http://docutils.sourceforge.net/). To install docutils:
|
||||
|
||||
$ pip install docutils
|
||||
|
||||
To check the file, run the following command and confirm that it reports
|
||||
no warnings:
|
||||
|
||||
$ python setup.py --long-description | rst2html.py -v --no-raw > out.html
|
||||
|
||||
See here for more information:
|
||||
|
||||
http://docs.python.org/distutils/uploading.html#pypi-package-display
|
||||
|
||||
(2) Push to PyPI. To release a new version of Pystache to PyPI--
|
||||
|
||||
http://pypi.python.org/pypi/pystache
|
||||
|
||||
create a PyPI user account if you do not already have one. The user account
|
||||
will need permissions to push to PyPI. A current "Package Index Owner" of
|
||||
Pystache can grant you those permissions.
|
||||
|
||||
When you have permissions, run the following:
|
||||
|
||||
python setup.py publish
|
||||
|
||||
If you get an error like the following--
|
||||
|
||||
Upload failed (401): You must be identified to edit package information
|
||||
|
||||
then add a file called .pypirc to your home directory with the following
|
||||
contents:
|
||||
|
||||
[server-login]
|
||||
username: <PyPI username>
|
||||
password: <PyPI password>
|
||||
|
||||
as described here, for example:
|
||||
|
||||
http://docs.python.org/release/2.5.2/dist/pypirc.html
|
||||
|
||||
(3) Tag the release on GitHub. Here are some commands for tagging.
|
||||
|
||||
List current tags:
|
||||
|
||||
git tag -l -n3
|
||||
|
||||
Create an annotated tag:
|
||||
|
||||
git tag -a -m "Version 0.5.1" "v0.5.1"
|
||||
|
||||
Push a tag to GitHub:
|
||||
|
||||
git push --tags defunkt v0.5.1
|
||||
|
||||
"""
|
||||
|
||||
import os
|
||||
import shutil
|
||||
import sys
|
||||
|
||||
|
||||
py_version = sys.version_info
|
||||
|
||||
# distutils does not seem to support the following setup() arguments.
|
||||
# It displays a UserWarning when setup() is passed those options:
|
||||
#
|
||||
# * entry_points
|
||||
# * install_requires
|
||||
#
|
||||
# distribute works with Python 2.3.5 and above:
|
||||
#
|
||||
# http://packages.python.org/distribute/setuptools.html#building-and-distributing-packages-with-distribute
|
||||
#
|
||||
if py_version < (2, 3, 5):
|
||||
# TODO: this might not work yet.
|
||||
import distutils as dist
|
||||
from distutils import core
|
||||
setup = core.setup
|
||||
else:
|
||||
import setuptools as dist
|
||||
setup = dist.setup
|
||||
|
||||
|
||||
VERSION = '0.5.4' # Also change in pystache/__init__.py.
|
||||
|
||||
FILE_ENCODING = 'utf-8'
|
||||
|
||||
README_PATH = 'README.md'
|
||||
HISTORY_PATH = 'HISTORY.md'
|
||||
LICENSE_PATH = 'LICENSE'
|
||||
|
||||
RST_DESCRIPTION_PATH = 'setup_description.rst'
|
||||
|
||||
TEMP_EXTENSION = '.temp'
|
||||
|
||||
PREP_COMMAND = 'prep'
|
||||
|
||||
CLASSIFIERS = (
|
||||
'Development Status :: 4 - Beta',
|
||||
'License :: OSI Approved :: MIT License',
|
||||
'Programming Language :: Python',
|
||||
'Programming Language :: Python :: 2',
|
||||
'Programming Language :: Python :: 2.4',
|
||||
'Programming Language :: Python :: 2.5',
|
||||
'Programming Language :: Python :: 2.6',
|
||||
'Programming Language :: Python :: 2.7',
|
||||
'Programming Language :: Python :: 3',
|
||||
'Programming Language :: Python :: 3.1',
|
||||
'Programming Language :: Python :: 3.2',
|
||||
'Programming Language :: Python :: 3.3',
|
||||
'Programming Language :: Python :: Implementation :: PyPy',
|
||||
)
|
||||
|
||||
# Comments in reST begin with two dots.
|
||||
RST_LONG_DESCRIPTION_INTRO = """\
|
||||
.. Do not edit this file. This file is auto-generated for PyPI by setup.py
|
||||
.. using pandoc, so edits should go in the source files rather than here.
|
||||
"""
|
||||
|
||||
|
||||
def read(path):
|
||||
"""
|
||||
Read and return the contents of a text file as a unicode string.
|
||||
|
||||
"""
|
||||
# This function implementation was chosen to be compatible across Python 2/3.
|
||||
f = open(path, 'rb')
|
||||
# We avoid use of the with keyword for Python 2.4 support.
|
||||
try:
|
||||
b = f.read()
|
||||
finally:
|
||||
f.close()
|
||||
|
||||
return b.decode(FILE_ENCODING)
|
||||
|
||||
|
||||
def write(u, path):
|
||||
"""
|
||||
Write a unicode string to a file (as utf-8).
|
||||
|
||||
"""
|
||||
print("writing to: %s" % path)
|
||||
# This function implementation was chosen to be compatible across Python 2/3.
|
||||
f = open(path, "wb")
|
||||
try:
|
||||
b = u.encode(FILE_ENCODING)
|
||||
f.write(b)
|
||||
finally:
|
||||
f.close()
|
||||
|
||||
|
||||
def make_temp_path(path, new_ext=None):
|
||||
"""
|
||||
Arguments:
|
||||
|
||||
new_ext: the new file extension, including the leading dot.
|
||||
Defaults to preserving the existing file extension.
|
||||
|
||||
"""
|
||||
root, ext = os.path.splitext(path)
|
||||
if new_ext is None:
|
||||
new_ext = ext
|
||||
temp_path = root + TEMP_EXTENSION + new_ext
|
||||
return temp_path
|
||||
|
||||
|
||||
def strip_html_comments(text):
|
||||
"""Strip HTML comments from a unicode string."""
|
||||
lines = text.splitlines(True) # preserve line endings.
|
||||
|
||||
# Remove HTML comments (which we only allow to take a special form).
|
||||
new_lines = filter(lambda line: not line.startswith("<!--"), lines)
|
||||
|
||||
return "".join(new_lines)
|
||||
|
||||
|
||||
# We write the converted file to a temp file to simplify debugging and
|
||||
# to avoid removing a valid pre-existing file on failure.
|
||||
def convert_md_to_rst(md_path, rst_temp_path):
|
||||
"""
|
||||
Convert the contents of a file from Markdown to reStructuredText.
|
||||
|
||||
Returns the converted text as a Unicode string.
|
||||
|
||||
Arguments:
|
||||
|
||||
md_path: a path to a UTF-8 encoded Markdown file to convert.
|
||||
|
||||
rst_temp_path: a temporary path to which to write the converted contents.
|
||||
|
||||
"""
|
||||
# Pandoc uses the UTF-8 character encoding for both input and output.
|
||||
command = "pandoc --write=rst --output=%s %s" % (rst_temp_path, md_path)
|
||||
print("converting with pandoc: %s to %s\n-->%s" % (md_path, rst_temp_path,
|
||||
command))
|
||||
|
||||
if os.path.exists(rst_temp_path):
|
||||
os.remove(rst_temp_path)
|
||||
|
||||
os.system(command)
|
||||
|
||||
if not os.path.exists(rst_temp_path):
|
||||
s = ("Error running: %s\n"
|
||||
" Did you install pandoc per the %s docstring?" % (command,
|
||||
__file__))
|
||||
sys.exit(s)
|
||||
|
||||
return read(rst_temp_path)
|
||||
|
||||
|
||||
# The long_description needs to be formatted as reStructuredText.
|
||||
# See the following for more information:
|
||||
#
|
||||
# http://docs.python.org/distutils/setupscript.html#additional-meta-data
|
||||
# http://docs.python.org/distutils/uploading.html#pypi-package-display
|
||||
#
|
||||
def make_long_description():
|
||||
"""
|
||||
Generate the reST long_description for setup() from source files.
|
||||
|
||||
Returns the generated long_description as a unicode string.
|
||||
|
||||
"""
|
||||
readme_path = README_PATH
|
||||
|
||||
# Remove our HTML comments because PyPI does not allow it.
|
||||
# See the setup.py docstring for more info on this.
|
||||
readme_md = strip_html_comments(read(readme_path))
|
||||
history_md = strip_html_comments(read(HISTORY_PATH))
|
||||
license_md = """\
|
||||
License
|
||||
=======
|
||||
|
||||
""" + read(LICENSE_PATH)
|
||||
|
||||
sections = [readme_md, history_md, license_md]
|
||||
md_description = '\n\n'.join(sections)
|
||||
|
||||
# Write the combined Markdown file to a temp path.
|
||||
md_ext = os.path.splitext(readme_path)[1]
|
||||
md_description_path = make_temp_path(RST_DESCRIPTION_PATH, new_ext=md_ext)
|
||||
write(md_description, md_description_path)
|
||||
|
||||
rst_temp_path = make_temp_path(RST_DESCRIPTION_PATH)
|
||||
long_description = convert_md_to_rst(md_path=md_description_path,
|
||||
rst_temp_path=rst_temp_path)
|
||||
|
||||
return "\n".join([RST_LONG_DESCRIPTION_INTRO, long_description])
|
||||
|
||||
|
||||
def prep():
|
||||
"""Update the reST long_description file."""
|
||||
long_description = make_long_description()
|
||||
write(long_description, RST_DESCRIPTION_PATH)
|
||||
|
||||
|
||||
def publish():
|
||||
"""Publish this package to PyPI (aka "the Cheeseshop")."""
|
||||
long_description = make_long_description()
|
||||
|
||||
if long_description != read(RST_DESCRIPTION_PATH):
|
||||
print("""\
|
||||
Description file not up-to-date: %s
|
||||
Run the following command and commit the changes--
|
||||
|
||||
python setup.py %s
|
||||
""" % (RST_DESCRIPTION_PATH, PREP_COMMAND))
|
||||
sys.exit()
|
||||
|
||||
print("Description up-to-date: %s" % RST_DESCRIPTION_PATH)
|
||||
|
||||
answer = raw_input("Are you sure you want to publish to PyPI (yes/no)?")
|
||||
|
||||
if answer != "yes":
|
||||
exit("Aborted: nothing published")
|
||||
|
||||
os.system('python setup.py sdist upload')
|
||||
|
||||
|
||||
# We use the package simplejson for older Python versions since Python
|
||||
# does not contain the module json before 2.6:
|
||||
#
|
||||
# http://docs.python.org/library/json.html
|
||||
#
|
||||
# Moreover, simplejson stopped officially supporting Python 2.4 in version 2.1.0:
|
||||
#
|
||||
# https://github.com/simplejson/simplejson/blob/master/CHANGES.txt
|
||||
#
|
||||
requires = []
|
||||
if py_version < (2, 5):
|
||||
requires.append('simplejson<2.1')
|
||||
elif py_version < (2, 6):
|
||||
requires.append('simplejson')
|
||||
|
||||
INSTALL_REQUIRES = requires
|
||||
|
||||
# TODO: decide whether to use find_packages() instead. I'm not sure that
|
||||
# find_packages() is available with distutils, for example.
|
||||
PACKAGES = [
|
||||
'pystache',
|
||||
'pystache.commands',
|
||||
# The following packages are only for testing.
|
||||
'pystache.tests',
|
||||
'pystache.tests.data',
|
||||
'pystache.tests.data.locator',
|
||||
'pystache.tests.examples',
|
||||
]
|
||||
|
||||
|
||||
# The purpose of this function is to follow the guidance suggested here:
|
||||
#
|
||||
# http://packages.python.org/distribute/python3.html#note-on-compatibility-with-setuptools
|
||||
#
|
||||
# The guidance is for better compatibility when using setuptools (e.g. with
|
||||
# earlier versions of Python 2) instead of Distribute, because of new
|
||||
# keyword arguments to setup() that setuptools may not recognize.
|
||||
def get_extra_args():
|
||||
"""
|
||||
Return a dictionary of extra args to pass to setup().
|
||||
|
||||
"""
|
||||
extra = {}
|
||||
# TODO: it might be more correct to check whether we are using
|
||||
# Distribute instead of setuptools, since use_2to3 doesn't take
|
||||
# effect when using Python 2, even when using Distribute.
|
||||
if py_version >= (3, ):
|
||||
# Causes 2to3 to be run during the build step.
|
||||
extra['use_2to3'] = True
|
||||
|
||||
return extra
|
||||
|
||||
|
||||
def main(sys_argv):
|
||||
|
||||
# TODO: use the logging module instead of printing.
|
||||
# TODO: include the following in a verbose mode.
|
||||
sys.stderr.write("pystache: using: version %s of %s\n" % (repr(dist.__version__), repr(dist)))
|
||||
|
||||
command = sys_argv[-1]
|
||||
|
||||
if command == 'publish':
|
||||
publish()
|
||||
sys.exit()
|
||||
elif command == PREP_COMMAND:
|
||||
prep()
|
||||
sys.exit()
|
||||
|
||||
long_description = read(RST_DESCRIPTION_PATH)
|
||||
template_files = ['*.mustache', '*.txt']
|
||||
extra_args = get_extra_args()
|
||||
|
||||
setup(name='pystache',
|
||||
version=VERSION,
|
||||
license='MIT',
|
||||
description='Mustache for Python',
|
||||
long_description=long_description,
|
||||
author='Chris Wanstrath',
|
||||
author_email='chris@ozmm.org',
|
||||
maintainer='Chris Jerdonek',
|
||||
maintainer_email='chris.jerdonek@gmail.com',
|
||||
url='http://github.com/defunkt/pystache',
|
||||
install_requires=INSTALL_REQUIRES,
|
||||
packages=PACKAGES,
|
||||
package_data = {
|
||||
# Include template files so tests can be run.
|
||||
'pystache.tests.data': template_files,
|
||||
'pystache.tests.data.locator': template_files,
|
||||
'pystache.tests.examples': template_files,
|
||||
},
|
||||
entry_points = {
|
||||
'console_scripts': [
|
||||
'pystache=pystache.commands.render:main',
|
||||
'pystache-test=pystache.commands.test:main',
|
||||
],
|
||||
},
|
||||
classifiers = CLASSIFIERS,
|
||||
**extra_args
|
||||
)
|
||||
|
||||
|
||||
if __name__=='__main__':
|
||||
main(sys.argv)
|
|
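
# --- Illustrative note; not part of the vendored setup script. ---
# The temp-path helper above produces the intermediate file names that the
# "prep" command writes:
#
#     >>> make_temp_path('README.md')
#     'README.temp.md'
#     >>> make_temp_path('README.md', new_ext='.rst')
#     'README.temp.rst'
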
@ -0,0 +1,513 @@
|
|||
.. Do not edit this file. This file is auto-generated for PyPI by setup.py
|
||||
.. using pandoc, so edits should go in the source files rather than here.
|
||||
|
||||
Pystache
|
||||
========
|
||||
|
||||
.. figure:: http://defunkt.github.com/pystache/images/logo_phillips.png
|
||||
:alt: mustachioed, monocled snake by David Phillips
|
||||
|
||||
.. figure:: https://secure.travis-ci.org/defunkt/pystache.png
|
||||
:alt: Travis CI current build status
|
||||
|
||||
`Pystache <http://defunkt.github.com/pystache>`__ is a Python
|
||||
implementation of `Mustache <http://mustache.github.com/>`__. Mustache
|
||||
is a framework-agnostic, logic-free templating system inspired by
|
||||
`ctemplate <http://code.google.com/p/google-ctemplate/>`__ and
|
||||
`et <http://www.ivan.fomichev.name/2008/05/erlang-template-engine-prototype.html>`__.
|
||||
Like ctemplate, Mustache "emphasizes separating logic from presentation:
|
||||
it is impossible to embed application logic in this template language."
|
||||
|
||||
The `mustache(5) <http://mustache.github.com/mustache.5.html>`__ man
|
||||
page provides a good introduction to Mustache's syntax. For a more
|
||||
complete (and more current) description of Mustache's behavior, see the
|
||||
official `Mustache spec <https://github.com/mustache/spec>`__.
|
||||
|
||||
Pystache is `semantically versioned <http://semver.org>`__ and can be
|
||||
found on `PyPI <http://pypi.python.org/pypi/pystache>`__. This version
|
||||
of Pystache passes all tests in `version
|
||||
1.1.2 <https://github.com/mustache/spec/tree/v1.1.2>`__ of the spec.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
Pystache is tested with--
|
||||
|
||||
- Python 2.4 (requires simplejson `version
|
||||
2.0.9 <http://pypi.python.org/pypi/simplejson/2.0.9>`__ or earlier)
|
||||
- Python 2.5 (requires
|
||||
`simplejson <http://pypi.python.org/pypi/simplejson/>`__)
|
||||
- Python 2.6
|
||||
- Python 2.7
|
||||
- Python 3.1
|
||||
- Python 3.2
|
||||
- Python 3.3
|
||||
- `PyPy <http://pypy.org/>`__
|
||||
|
||||
`Distribute <http://packages.python.org/distribute/>`__ (the setuptools
|
||||
fork) is recommended over
|
||||
`setuptools <http://pypi.python.org/pypi/setuptools>`__, and is required
|
||||
in some cases (e.g. for Python 3 support). If you use
|
||||
`pip <http://www.pip-installer.org/>`__, you probably already satisfy
|
||||
this requirement.
|
||||
|
||||
JSON support is needed only for the command-line interface and to run
|
||||
the spec tests. We require simplejson for earlier versions of Python
|
||||
since Python's `json <http://docs.python.org/library/json.html>`__
|
||||
module was added in Python 2.6.
|
||||
|
||||
For Python 2.4 we require an earlier version of simplejson since
|
||||
simplejson stopped officially supporting Python 2.4 in simplejson
|
||||
version 2.1.0. Earlier versions of simplejson can be installed manually,
|
||||
as follows:
|
||||
|
||||
::
|
||||
|
||||
pip install 'simplejson<2.1.0'
|
||||
|
||||
Official support for Python 2.4 will end with Pystache version 0.6.0.
|
||||
|
||||
Install It
|
||||
----------
|
||||
|
||||
::
|
||||
|
||||
pip install pystache
|
||||
|
||||
And test it--
|
||||
|
||||
::
|
||||
|
||||
pystache-test
|
||||
|
||||
To install and test from source (e.g. from GitHub), see the Develop
|
||||
section.
|
||||
|
||||
Use It
------

::

    >>> import pystache
    >>> print pystache.render('Hi {{person}}!', {'person': 'Mom'})
    Hi Mom!

You can also create dedicated view classes to hold your view logic.

Here's your view class (in .../examples/readme.py):

::

    class SayHello(object):
        def to(self):
            return "Pizza"

Instantiate it like so:

::

    >>> from pystache.tests.examples.readme import SayHello
    >>> hello = SayHello()

Then your template, say_hello.mustache (by default in the same
directory as your class definition):

::

    Hello, {{to}}!

Pull it together:

::

    >>> renderer = pystache.Renderer()
    >>> print renderer.render(hello)
    Hello, Pizza!

For greater control over rendering (e.g. to specify a custom template
directory), use the ``Renderer`` class as shown above. You can pass
attributes to the Renderer constructor or set them on a Renderer
instance. To customize template loading on a per-view basis, subclass
``TemplateSpec``. See the docstrings of the
`Renderer <https://github.com/defunkt/pystache/blob/master/pystache/renderer.py>`__
class and
`TemplateSpec <https://github.com/defunkt/pystache/blob/master/pystache/template_spec.py>`__
class for more information.

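As an illustration of setting these attributes, here is a minimal
configuration sketch. It is not verbatim from this README: the
``search_dirs`` and ``partials`` keyword arguments are assumptions based
on the Renderer docstrings, and ``missing_tags='strict'`` comes from the
history notes below.

::

    import pystache

    # A sketch: look up .mustache files under ./templates, resolve partials
    # from a plain dict, and raise on missing tags instead of rendering ''.
    renderer = pystache.Renderer(
        search_dirs=['templates'],           # assumed keyword argument
        partials={'header': 'Hi {{name}}'},  # partials from a dictionary
        missing_tags='strict',
    )

    print(renderer.render('{{>header}}!', {'name': 'Mom'}))  # Hi Mom!
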
You can also pre-parse a template:

::

    >>> parsed = pystache.parse(u"Hey {{#who}}{{.}}!{{/who}}")
    >>> print parsed
    [u'Hey ', _SectionNode(key=u'who', index_begin=12, index_end=18, parsed=[_EscapeNode(key=u'.'), u'!'])]

And then:

::

    >>> print renderer.render(parsed, {'who': 'Pops'})
    Hey Pops!
    >>> print renderer.render(parsed, {'who': 'you'})
    Hey you!

Python 3
--------

Pystache has supported Python 3 since version 0.5.1. Pystache behaves
slightly differently between Python 2 and 3, as follows:

- In Python 2, the default html-escape function ``cgi.escape()`` does
  not escape single quotes. In Python 3, the default escape function
  ``html.escape()`` does escape single quotes (one way to normalize this
  is sketched just after this list).
- In both Python 2 and 3, the string and file encodings default to
  ``sys.getdefaultencoding()``. However, this function can return
  different values under Python 2 and 3, even when run on the same
  system. Check the behavior on your own system, or do not rely on the
  defaults by passing in the encodings explicitly (e.g. to the
  ``Renderer`` class).

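Because the default escape functions differ, one way to get identical
escaping on Python 2 and 3 is to supply your own escape function. The
sketch below assumes the ``escape`` keyword argument (the custom escape
function mentioned in the 0.5.0 history notes); it is not verbatim from
this README.

::

    import pystache

    try:
        from html import escape as html_escape   # Python 3
    except ImportError:
        from cgi import escape as _cgi_escape    # Python 2

        def html_escape(s):
            # Mimic html.escape(): also escape single quotes.
            return _cgi_escape(s, quote=True).replace("'", '&#x27;')

    # Assumed keyword argument: escape() is applied to interpolated values.
    renderer = pystache.Renderer(escape=html_escape)
    print(renderer.render('{{text}}', {'text': "it's <b>bold</b>"}))
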
Unicode
-------

This section describes how Pystache handles unicode, strings, and
encodings.

Internally, Pystache uses `only unicode
strings <http://docs.python.org/howto/unicode.html#tips-for-writing-unicode-aware-programs>`__
(``str`` in Python 3 and ``unicode`` in Python 2). For input, Pystache
accepts both unicode strings and byte strings (``bytes`` in Python 3 and
``str`` in Python 2). For output, Pystache's template rendering methods
return only unicode.

Pystache's ``Renderer`` class supports a number of attributes to control
how Pystache converts byte strings to unicode on input. These include
the ``file_encoding``, ``string_encoding``, and ``decode_errors``
attributes.

The ``file_encoding`` attribute is the encoding the renderer uses to
convert to unicode any files read from the file system. Similarly,
``string_encoding`` is the encoding the renderer uses to convert any
other byte strings encountered during the rendering process into unicode
(e.g. context values that are encoded byte strings).

The ``decode_errors`` attribute is what the renderer passes as the
``errors`` argument to Python's built-in unicode-decoding function
(``str()`` in Python 3 and ``unicode()`` in Python 2). The valid values
for this argument are ``strict``, ``ignore``, and ``replace``.

Each of these attributes can be set via the ``Renderer`` class's
constructor using a keyword argument of the same name. See the Renderer
class's docstrings for further details. In addition, the
``file_encoding`` attribute can be controlled on a per-view basis by
subclassing the ``TemplateSpec`` class. When not specified explicitly,
these attributes default to values set in Pystache's ``defaults``
module.

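For example, to avoid depending on ``sys.getdefaultencoding()``, you can
pass the three attributes described above as constructor keyword
arguments of the same name. A minimal sketch; the particular encodings
are only illustrative:

::

    import pystache

    # Decode template files and byte-string context values as UTF-8, and
    # substitute the replacement character for bad bytes instead of raising.
    renderer = pystache.Renderer(
        file_encoding='utf-8',
        string_encoding='utf-8',
        decode_errors='replace',
    )

    print(renderer.render(u'Hello, {{name}}!', {'name': b'Ren\xc3\xa9e'}))
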
Develop
-------

To test from a source distribution (without installing)--

::

    python test_pystache.py

To test Pystache with multiple versions of Python (with a single
command!), you can use `tox <http://pypi.python.org/pypi/tox>`__:

::

    pip install 'virtualenv<1.8'  # Version 1.8 dropped support for Python 2.4.
    pip install 'tox<1.4'  # Version 1.4 dropped support for Python 2.4.
    tox

If you do not have all Python versions listed in ``tox.ini``--

::

    tox -e py26,py32  # for example

The source distribution tests also include doctests and tests from the
Mustache spec. To include tests from the Mustache spec in your test
runs:

::

    git submodule init
    git submodule update

The test harness parses the spec's (more human-readable) yaml files if
`PyYAML <http://pypi.python.org/pypi/PyYAML>`__ is present. Otherwise,
it parses the json files. To install PyYAML--

::

    pip install pyyaml

To run a subset of the tests, you can use
`nose <http://somethingaboutorange.com/mrl/projects/nose/0.11.1/testing.html>`__:

::

    pip install nose
    nosetests --tests pystache/tests/test_context.py:GetValueTests.test_dictionary__key_present

Using Python 3 with Pystache from source
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Pystache is written in Python 2 and must be converted to Python 3 prior
to using it with Python 3. The installation process (and tox) does this
automatically.

To convert the code to Python 3 manually (while using Python 3)--

::

    python setup.py build

This writes the converted code to a subdirectory called ``build``. By
design, Python 3 builds
`cannot <https://bitbucket.org/tarek/distribute/issue/292/allow-use_2to3-with-python-2>`__
be created from Python 2.

To convert the code without using setup.py, you can use
`2to3 <http://docs.python.org/library/2to3.html>`__ as follows (two
steps)--

::

    2to3 --write --nobackups --no-diffs --doctests_only pystache
    2to3 --write --nobackups --no-diffs pystache

This converts the code (and doctests) in place.

To ``import pystache`` from a source distribution while using Python 3,
be sure that you are importing from a directory containing a converted
version of the code (e.g. from the ``build`` directory after
converting), and not from the original (unconverted) source directory.
Otherwise, you will get a syntax error. You can help prevent this by not
running the Python IDE from the project directory when importing
Pystache while using Python 3.

Mailing List
------------

There is a `mailing list <http://librelist.com/browser/pystache/>`__.
Note that there is a bit of a delay between posting a message and seeing
it appear in the mailing list archive.

Credits
-------

::

    >>> context = { 'author': 'Chris Wanstrath', 'maintainer': 'Chris Jerdonek' }
    >>> print pystache.render("Author: {{author}}\nMaintainer: {{maintainer}}", context)
    Author: Chris Wanstrath
    Maintainer: Chris Jerdonek

Pystache logo by `David Phillips <http://davidphillips.us/>`__ is
licensed under a `Creative Commons Attribution-ShareAlike 3.0 Unported
License <http://creativecommons.org/licenses/by-sa/3.0/deed.en_US>`__.
|image0|

History
=======

**Note:** Official support for Python 2.4 will end with Pystache version
0.6.0.

0.5.4 (2014-07-11)
------------------

- Bugfix: made test with filenames OS agnostic (issue #162).

0.5.3 (2012-11-03)
------------------

- Added ability to customize string coercion (e.g. to have None render
  as ``''``) (issue #130).
- Added Renderer.render_name() to render a template by name (issue
  #122).
- Added TemplateSpec.template_path to specify an absolute path to a
  template (issue #41).
- Added option of raising errors on missing tags/partials:
  ``Renderer(missing_tags='strict')`` (issue #110).
- Added support for finding and loading templates by file name in
  addition to by template name (issue #127). [xgecko]
- Added a ``parse()`` function that yields a printable, pre-compiled
  parse tree.
- Added support for rendering pre-compiled templates.
- Added Python 3.3 to the list of supported versions.
- Added support for `PyPy <http://pypy.org/>`__ (issue #125).
- Added support for `Travis CI <http://travis-ci.org>`__ (issue #124).
  [msabramo]
- Bugfix: ``defaults.DELIMITERS`` can now be changed at runtime (issue
  #135). [bennoleslie]
- Bugfix: exceptions raised from a property are no longer swallowed
  when getting a key from a context stack (issue #110).
- Bugfix: lambda section values can now return non-ascii, non-unicode
  strings (issue #118).
- Bugfix: allow ``test_pystache.py`` and ``tox`` to pass when run from
  a downloaded sdist (i.e. without the spec test directory).
- Convert HISTORY and README files from reST to Markdown.
- More robust handling of byte strings in Python 3.
- Added Creative Commons license for David Phillips's logo.

0.5.2 (2012-05-03)
------------------

- Added support for dot notation and version 1.1.2 of the spec (issue
  #99). [rbp]
- Missing partials now render as empty string per latest version of
  spec (issue #115).
- Bugfix: falsey values now coerced to strings using str().
- Bugfix: lambda return values for sections no longer pushed onto
  context stack (issue #113).
- Bugfix: lists of lambdas for sections were not rendered (issue #114).

0.5.1 (2012-04-24)
------------------

- Added support for Python 3.1 and 3.2.
- Added tox support to test multiple Python versions.
- Added test script entry point: pystache-test.
- Added ``__version__`` package attribute.
- Test harness now supports both YAML and JSON forms of Mustache spec.
- Test harness no longer requires nose.

0.5.0 (2012-04-03)
------------------

This version represents a major rewrite and refactoring of the code base
that also adds features and fixes many bugs. All functionality and
nearly all unit tests have been preserved. However, some backwards
incompatible changes to the API have been made.

Below is a selection of some of the changes (not exhaustive).

Highlights:

- Pystache now passes all tests in version 1.0.3 of the `Mustache
  spec <https://github.com/mustache/spec>`__. [pvande]
- Removed View class: it is no longer necessary to subclass from View
  or from any other class to create a view.
- Replaced Template with Renderer class: template rendering behavior
  can be modified via the Renderer constructor or by setting attributes
  on a Renderer instance.
- Added TemplateSpec class: template rendering can be specified on a
  per-view basis by subclassing from TemplateSpec.
- Introduced separation of concerns and removed circular dependencies
  (e.g. between Template and View classes, cf. `issue
  #13 <https://github.com/defunkt/pystache/issues/13>`__).
- Unicode now used consistently throughout the rendering process.
- Expanded test coverage: nosetests now runs doctests and ~105 test
  cases from the Mustache spec (increasing the number of tests from 56
  to ~315).
- Added a rudimentary benchmarking script to gauge performance while
  refactoring.
- Extensive documentation added (e.g. docstrings).

Other changes:

- Added a command-line interface. [vrde]
- The main rendering class now accepts a custom partial loader (e.g. a
  dictionary) and a custom escape function.
- Non-ascii characters in str strings are now supported while
  rendering.
- Added string encoding, file encoding, and errors options for decoding
  to unicode.
- Removed the output encoding option.
- Removed the use of markupsafe.

Bug fixes:

- Context values no longer processed as template strings.
  [jakearchibald]
- Whitespace surrounding sections is no longer altered, per the spec.
  [heliodor]
- Zeroes now render correctly when using PyPy. [alex]
- Multiline comments now permitted. [fczuardi]
- Extensionless template files are now supported.
- Passing ``**kwargs`` to ``Template()`` no longer modifies the
  context.
- Passing ``**kwargs`` to ``Template()`` with no context no longer
  raises an exception.

0.4.1 (2012-03-25)
------------------

- Added support for Python 2.4. [wangtz, jvantuyl]

0.4.0 (2011-01-12)
------------------

- Add support for nested contexts (within template and view)
- Add support for inverted lists
- Decoupled template loading

0.3.1 (2010-05-07)
------------------

- Fix package

0.3.0 (2010-05-03)
------------------

- View.template_path can now hold a list of paths
- Add {{& blah}} as an alias for {{{ blah }}}
- Higher Order Sections
- Inverted sections

0.2.0 (2010-02-15)
------------------

- Bugfix: Methods returning False or None are not rendered
- Bugfix: Don't render an empty string when a tag's value is 0.
  [enaeseth]
- Add support for using non-callables as View attributes.
  [joshthecoder]
- Allow using View instances as attributes. [joshthecoder]
- Support for Unicode and non-ASCII-encoded bytestring output.
  [enaeseth]
- Template file encoding awareness. [enaeseth]

0.1.1 (2009-11-13)
------------------

- Ensure we're dealing with strings, always
- Tests can be run by executing the test file directly

0.1.0 (2009-11-12)
------------------

- First release

License
=======

Copyright (C) 2012 Chris Jerdonek. All rights reserved.

Copyright (c) 2009 Chris Wanstrath

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

.. |image0| image:: http://i.creativecommons.org/l/by-sa/3.0/88x31.png
|
|
@ -0,0 +1,30 @@
|
|||
#!/usr/bin/env python
# coding: utf-8

"""
Runs project tests.

This script is a substitute for running--

    python -m pystache.commands.test

It is useful in Python 2.4 because the -m flag does not accept subpackages
in Python 2.4:

  http://docs.python.org/using/cmdline.html#cmdoption-m

"""

import sys

from pystache.commands import test
from pystache.tests.main import FROM_SOURCE_OPTION


def main(sys_argv=sys.argv):
    sys.argv.insert(1, FROM_SOURCE_OPTION)
    test.main()


if __name__ == '__main__':
    main()
|
|
@ -0,0 +1,36 @@
|
|||
# A tox configuration file to test across multiple Python versions.
#
# http://pypi.python.org/pypi/tox
#
[tox]
# Tox 1.4 drops py24 and adds py33. In the current version, we want to
# support 2.4, so we can't simultaneously support 3.3.
envlist = py24,py25,py26,py27,py27-yaml,py27-noargs,py31,py32,pypy

[testenv]
# Change the working directory so that we don't import the pystache located
# in the original location.
changedir =
    {envbindir}
commands =
    pystache-test {toxinidir}

# Check that the spec tests work with PyYAML.
[testenv:py27-yaml]
basepython =
    python2.7
deps =
    PyYAML
changedir =
    {envbindir}
commands =
    pystache-test {toxinidir}

# Check that pystache-test works from an install with no arguments.
[testenv:py27-noargs]
basepython =
    python2.7
changedir =
    {envbindir}
commands =
    pystache-test
|
|
@ -0,0 +1,147 @@
|
|||
|
||||
For a complete Mercurial changelog, see
|
||||
'https://bitbucket.org/xi/pyyaml/commits'.
|
||||
|
||||
3.11 (2014-03-26)
|
||||
-----------------
|
||||
|
||||
* Source and binary distributions are rebuilt against the latest
|
||||
versions of Cython and LibYAML.
|
||||
|
||||
3.10 (2011-05-30)
|
||||
-----------------
|
||||
|
||||
* Do not try to build LibYAML bindings on platforms other than CPython
|
||||
(Thank to olt(at)bogosoft(dot)com).
|
||||
* Clear cyclic references in the parser and the emitter
|
||||
(Thank to kristjan(at)ccpgames(dot)com).
|
||||
* Dropped support for Python 2.3 and 2.4.
|
||||
|
||||
3.09 (2009-08-31)
|
||||
-----------------
|
||||
|
||||
* Fixed an obscure scanner error not reported when there is
|
||||
no line break at the end of the stream (Thank to Ingy).
|
||||
* Fixed use of uninitialized memory when emitting anchors with
|
||||
LibYAML bindings (Thank to cegner(at)yahoo-inc(dot)com).
|
||||
* Fixed emitting incorrect BOM characters for UTF-16 (Thank to
|
||||
Valentin Nechayev)
|
||||
* Fixed the emitter for folded scalars not respecting the preferred
|
||||
line width (Thank to Ingy).
|
||||
* Fixed a subtle ordering issue with emitting '%TAG' directives
|
||||
(Thank to Andrey Somov).
|
||||
* Fixed performance regression with LibYAML bindings.
|
||||
|
||||
|
||||
3.08 (2008-12-31)
|
||||
-----------------
|
||||
|
||||
* Python 3 support (Thank to Erick Tryzelaar).
|
||||
* Use Cython instead of Pyrex to build LibYAML bindings.
|
||||
* Refactored support for unicode and byte input/output streams.
|
||||
|
||||
|
||||
3.07 (2008-12-29)
|
||||
-----------------
|
||||
|
||||
* The emitter learned to use an optional indentation indicator
|
||||
for block scalar; thus scalars with leading whitespaces
|
||||
could now be represented in a literal or folded style.
|
||||
* The test suite is now included in the source distribution.
|
||||
To run the tests, type 'python setup.py test'.
|
||||
* Refactored the test suite: dropped unittest in favor of
|
||||
a custom test appliance.
|
||||
* Fixed the path resolver in CDumper.
|
||||
* Forced an explicit document end indicator when there is
|
||||
a possibility of parsing ambiguity.
|
||||
* More setup.py improvements: the package should be usable
|
||||
when any combination of setuptools, Pyrex and LibYAML
|
||||
is installed.
|
||||
* Windows binary packages are built against LibYAML-0.1.2.
|
||||
* Minor typos and corrections (Thank to Ingy dot Net
|
||||
and Andrey Somov).
|
||||
|
||||
|
||||
3.06 (2008-10-03)
|
||||
-----------------
|
||||
|
||||
* setup.py checks whether LibYAML is installed and if so, builds
|
||||
and installs LibYAML bindings. To force or disable installation
|
||||
of LibYAML bindings, use '--with-libyaml' or '--without-libyaml'
|
||||
respectively.
|
||||
* The source distribution includes compiled Pyrex sources so
|
||||
building LibYAML bindings no longer requires Pyrex installed.
|
||||
* 'yaml.load()' raises an exception if the input stream contains
|
||||
more than one YAML document.
|
||||
* Fixed exceptions produced by LibYAML bindings.
|
||||
* Fixed a dot '.' character being recognized as !!float.
|
||||
* Fixed Python 2.3 compatibility issue in constructing !!timestamp values.
|
||||
* Windows binary packages are built against the LibYAML stable branch.
|
||||
* Added attributes 'yaml.__version__' and 'yaml.__with_libyaml__'.
|
||||
|
||||
|
||||
3.05 (2007-05-13)
|
||||
-----------------
|
||||
|
||||
* Windows binary packages were built with LibYAML trunk.
|
||||
* Fixed a bug that prevent processing a live stream of YAML documents in
|
||||
timely manner (Thanks edward(at)sweetbytes(dot)net).
|
||||
* Fixed a bug when the path in add_path_resolver contains boolean values
|
||||
(Thanks jstroud(at)mbi(dot)ucla(dot)edu).
|
||||
* Fixed loss of microsecond precision in timestamps
|
||||
(Thanks edemaine(at)mit(dot)edu).
|
||||
* Fixed loading an empty YAML stream.
|
||||
* Allowed immutable subclasses of YAMLObject.
|
||||
* Made the encoding of the unicode->str conversion explicit so that
|
||||
the conversion does not depend on the default Python encoding.
|
||||
* Forced emitting float values in a YAML compatible form.
|
||||
|
||||
|
||||
3.04 (2006-08-20)
|
||||
-----------------
|
||||
|
||||
* Include experimental LibYAML bindings.
|
||||
* Fully support recursive structures.
|
||||
* Sort dictionary keys. Mapping node values are now represented
|
||||
as lists of pairs instead of dictionaries. No longer check
|
||||
for duplicate mapping keys as it didn't work correctly anyway.
|
||||
* Fix invalid output of single-quoted scalars in cases when a single
|
||||
quote is not escaped when preceeded by whitespaces or line breaks.
|
||||
* To make porting easier, rewrite Parser not using generators.
|
||||
* Fix handling of unexpected block mapping values.
|
||||
* Fix a bug in Representer.represent_object: copy_reg.dispatch_table
|
||||
was not correctly handled.
|
||||
* Fix a bug when a block scalar is incorrectly emitted in the simple
|
||||
key context.
|
||||
* Hold references to the objects being represented.
|
||||
* Make Representer not try to guess !!pairs when a list is represented.
|
||||
* Fix timestamp constructing and representing.
|
||||
* Fix the 'N' plain scalar being incorrectly recognized as !!bool.
|
||||
|
||||
|
||||
3.03 (2006-06-19)
|
||||
-----------------
|
||||
|
||||
* Fix Python 2.5 compatibility issues.
|
||||
* Fix numerous bugs in the float handling.
|
||||
* Fix scanning some ill-formed documents.
|
||||
* Other minor fixes.
|
||||
|
||||
|
||||
3.02 (2006-05-15)
|
||||
-----------------
|
||||
|
||||
* Fix win32 installer. Apparently bdist_wininst does not work well
|
||||
under Linux.
|
||||
* Fix a bug in add_path_resolver.
|
||||
* Add the yaml-highlight example. Try to run on a color terminal:
|
||||
`python yaml_hl.py <any_document.yaml`.
|
||||
|
||||
|
||||
3.01 (2006-05-07)
|
||||
-----------------
|
||||
|
||||
* Initial release. The version number reflects the codename
|
||||
of the project (PyYAML 3000) and differenciates it from
|
||||
the abandoned PyYaml module.
|
||||
|
|
@ -0,0 +1,19 @@
|
|||
Copyright (c) 2006 Kirill Simonov
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of
|
||||
this software and associated documentation files (the "Software"), to deal in
|
||||
the Software without restriction, including without limitation the rights to
|
||||
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
|
||||
of the Software, and to permit persons to whom the Software is furnished to do
|
||||
so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
|
@ -0,0 +1,36 @@
|
|||
Metadata-Version: 1.1
|
||||
Name: PyYAML
|
||||
Version: 3.11
|
||||
Summary: YAML parser and emitter for Python
|
||||
Home-page: http://pyyaml.org/wiki/PyYAML
|
||||
Author: Kirill Simonov
|
||||
Author-email: xi@resolvent.net
|
||||
License: MIT
|
||||
Download-URL: http://pyyaml.org/download/pyyaml/PyYAML-3.11.tar.gz
|
||||
Description: YAML is a data serialization format designed for human readability
|
||||
and interaction with scripting languages. PyYAML is a YAML parser
|
||||
and emitter for Python.
|
||||
|
||||
PyYAML features a complete YAML 1.1 parser, Unicode support, pickle
|
||||
support, capable extension API, and sensible error messages. PyYAML
|
||||
supports standard YAML tags and provides Python-specific tags that
|
||||
allow to represent an arbitrary Python object.
|
||||
|
||||
PyYAML is applicable for a broad range of tasks from complex
|
||||
configuration files to object serialization and persistance.
|
||||
Platform: Any
|
||||
Classifier: Development Status :: 5 - Production/Stable
|
||||
Classifier: Intended Audience :: Developers
|
||||
Classifier: License :: OSI Approved :: MIT License
|
||||
Classifier: Operating System :: OS Independent
|
||||
Classifier: Programming Language :: Python
|
||||
Classifier: Programming Language :: Python :: 2
|
||||
Classifier: Programming Language :: Python :: 2.5
|
||||
Classifier: Programming Language :: Python :: 2.6
|
||||
Classifier: Programming Language :: Python :: 2.7
|
||||
Classifier: Programming Language :: Python :: 3
|
||||
Classifier: Programming Language :: Python :: 3.0
|
||||
Classifier: Programming Language :: Python :: 3.1
|
||||
Classifier: Programming Language :: Python :: 3.2
|
||||
Classifier: Topic :: Software Development :: Libraries :: Python Modules
|
||||
Classifier: Topic :: Text Processing :: Markup
|
|
@ -0,0 +1,35 @@
|
|||
PyYAML - The next generation YAML parser and emitter for Python.

To install, type 'python setup.py install'.

By default, the setup.py script checks whether LibYAML is installed
and if so, builds and installs LibYAML bindings. To skip the check
and force installation of LibYAML bindings, use the option '--with-libyaml':
'python setup.py --with-libyaml install'. To disable the check and
skip building and installing LibYAML bindings, use '--without-libyaml':
'python setup.py --without-libyaml install'.

When LibYAML bindings are installed, you may use the fast LibYAML-based
parser and emitter as follows:

    >>> yaml.load(stream, Loader=yaml.CLoader)
    >>> yaml.dump(data, Dumper=yaml.CDumper)

PyYAML includes a comprehensive test suite. To run the tests,
type 'python setup.py test'.

For more information, check the PyYAML homepage:
'http://pyyaml.org/wiki/PyYAML'.

For PyYAML tutorial and reference, see:
'http://pyyaml.org/wiki/PyYAMLDocumentation'.

Post your questions and opinions to the YAML-Core mailing list:
'http://lists.sourceforge.net/lists/listinfo/yaml-core'.

Submit bug reports and feature requests to the PyYAML bug tracker:
'http://pyyaml.org/newticket?component=pyyaml'.

PyYAML is written by Kirill Simonov <xi@resolvent.net>. It is released
under the MIT license. See the file LICENSE for more details.
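Because the LibYAML bindings are optional, code that uses them usually
falls back to the pure-Python classes when the C extension is not built.
A sketch of that common idiom (the idiom and the 'config.yaml' file name
are illustrative, not part of this README):

    import yaml

    # Prefer the LibYAML-backed classes when available, otherwise fall
    # back to the pure-Python Loader/Dumper.
    try:
        from yaml import CLoader as Loader, CDumper as Dumper
    except ImportError:
        from yaml import Loader, Dumper

    with open('config.yaml') as stream:
        data = yaml.load(stream, Loader=Loader)

    print(yaml.dump(data, Dumper=Dumper))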
|
|
@ -0,0 +1,302 @@
|
|||
|
||||
#
|
||||
# Examples from the Preview section of the YAML specification
|
||||
# (http://yaml.org/spec/1.2/#Preview)
|
||||
#
|
||||
|
||||
# Sequence of scalars
|
||||
---
|
||||
- Mark McGwire
|
||||
- Sammy Sosa
|
||||
- Ken Griffey
|
||||
|
||||
# Mapping scalars to scalars
|
||||
---
|
||||
hr: 65 # Home runs
|
||||
avg: 0.278 # Batting average
|
||||
rbi: 147 # Runs Batted In
|
||||
|
||||
# Mapping scalars to sequences
|
||||
---
|
||||
american:
|
||||
- Boston Red Sox
|
||||
- Detroit Tigers
|
||||
- New York Yankees
|
||||
national:
|
||||
- New York Mets
|
||||
- Chicago Cubs
|
||||
- Atlanta Braves
|
||||
|
||||
# Sequence of mappings
|
||||
---
|
||||
-
|
||||
name: Mark McGwire
|
||||
hr: 65
|
||||
avg: 0.278
|
||||
-
|
||||
name: Sammy Sosa
|
||||
hr: 63
|
||||
avg: 0.288
|
||||
|
||||
# Sequence of sequences
|
||||
---
|
||||
- [name , hr, avg ]
|
||||
- [Mark McGwire, 65, 0.278]
|
||||
- [Sammy Sosa , 63, 0.288]
|
||||
|
||||
# Mapping of mappings
|
||||
---
|
||||
Mark McGwire: {hr: 65, avg: 0.278}
|
||||
Sammy Sosa: {
|
||||
hr: 63,
|
||||
avg: 0.288
|
||||
}
|
||||
|
||||
# Two documents in a stream
|
||||
--- # Ranking of 1998 home runs
|
||||
- Mark McGwire
|
||||
- Sammy Sosa
|
||||
- Ken Griffey
|
||||
--- # Team ranking
|
||||
- Chicago Cubs
|
||||
- St Louis Cardinals
|
||||
|
||||
# Documents with the end indicator
|
||||
---
|
||||
time: 20:03:20
|
||||
player: Sammy Sosa
|
||||
action: strike (miss)
|
||||
...
|
||||
---
|
||||
time: 20:03:47
|
||||
player: Sammy Sosa
|
||||
action: grand slam
|
||||
...
|
||||
|
||||
# Comments
|
||||
---
|
||||
hr: # 1998 hr ranking
|
||||
- Mark McGwire
|
||||
- Sammy Sosa
|
||||
rbi:
|
||||
# 1998 rbi ranking
|
||||
- Sammy Sosa
|
||||
- Ken Griffey
|
||||
|
||||
# Anchors and aliases
|
||||
---
|
||||
hr:
|
||||
- Mark McGwire
|
||||
# Following node labeled SS
|
||||
- &SS Sammy Sosa
|
||||
rbi:
|
||||
- *SS # Subsequent occurrence
|
||||
- Ken Griffey
|
||||
|
||||
# Mapping between sequences
|
||||
---
|
||||
? - Detroit Tigers
|
||||
- Chicago cubs
|
||||
:
|
||||
- 2001-07-23
|
||||
? [ New York Yankees,
|
||||
Atlanta Braves ]
|
||||
: [ 2001-07-02, 2001-08-12,
|
||||
2001-08-14 ]
|
||||
|
||||
# Inline nested mapping
|
||||
---
|
||||
# products purchased
|
||||
- item : Super Hoop
|
||||
quantity: 1
|
||||
- item : Basketball
|
||||
quantity: 4
|
||||
- item : Big Shoes
|
||||
quantity: 1
|
||||
|
||||
# Literal scalars
|
||||
--- | # ASCII art
|
||||
\//||\/||
|
||||
// || ||__
|
||||
|
||||
# Folded scalars
|
||||
--- >
|
||||
Mark McGwire's
|
||||
year was crippled
|
||||
by a knee injury.
|
||||
|
||||
# Preserved indented block in a folded scalar
|
||||
---
|
||||
>
|
||||
Sammy Sosa completed another
|
||||
fine season with great stats.
|
||||
|
||||
63 Home Runs
|
||||
0.288 Batting Average
|
||||
|
||||
What a year!
|
||||
|
||||
# Indentation determines scope
|
||||
---
|
||||
name: Mark McGwire
|
||||
accomplishment: >
|
||||
Mark set a major league
|
||||
home run record in 1998.
|
||||
stats: |
|
||||
65 Home Runs
|
||||
0.278 Batting Average
|
||||
|
||||
# Quoted scalars
|
||||
---
|
||||
unicode: "Sosa did fine.\u263A"
|
||||
control: "\b1998\t1999\t2000\n"
|
||||
hex esc: "\x0d\x0a is \r\n"
|
||||
single: '"Howdy!" he cried.'
|
||||
quoted: ' # not a ''comment''.'
|
||||
tie-fighter: '|\-*-/|'
|
||||
|
||||
# Multi-line flow scalars
|
||||
---
|
||||
plain:
|
||||
This unquoted scalar
|
||||
spans many lines.
|
||||
quoted: "So does this
|
||||
quoted scalar.\n"
|
||||
|
||||
# Integers
|
||||
---
|
||||
canonical: 12345
|
||||
decimal: +12_345
|
||||
sexagesimal: 3:25:45
|
||||
octal: 014
|
||||
hexadecimal: 0xC
|
||||
|
||||
# Floating point
|
||||
---
|
||||
canonical: 1.23015e+3
|
||||
exponential: 12.3015e+02
|
||||
sexagesimal: 20:30.15
|
||||
fixed: 1_230.15
|
||||
negative infinity: -.inf
|
||||
not a number: .NaN
|
||||
|
||||
# Miscellaneous
|
||||
---
|
||||
null: ~
|
||||
true: boolean
|
||||
false: boolean
|
||||
string: '12345'
|
||||
|
||||
# Timestamps
|
||||
---
|
||||
canonical: 2001-12-15T02:59:43.1Z
|
||||
iso8601: 2001-12-14t21:59:43.10-05:00
|
||||
spaced: 2001-12-14 21:59:43.10 -5
|
||||
date: 2002-12-14
|
||||
|
||||
# Various explicit tags
|
||||
---
|
||||
not-date: !!str 2002-04-28
|
||||
picture: !!binary |
|
||||
R0lGODlhDAAMAIQAAP//9/X
|
||||
17unp5WZmZgAAAOfn515eXv
|
||||
Pz7Y6OjuDg4J+fn5OTk6enp
|
||||
56enmleECcgggoBADs=
|
||||
application specific tag: !something |
|
||||
The semantics of the tag
|
||||
above may be different for
|
||||
different documents.
|
||||
|
||||
# Global tags
|
||||
%TAG ! tag:clarkevans.com,2002:
|
||||
--- !shape
|
||||
# Use the ! handle for presenting
|
||||
# tag:clarkevans.com,2002:circle
|
||||
- !circle
|
||||
center: &ORIGIN {x: 73, y: 129}
|
||||
radius: 7
|
||||
- !line
|
||||
start: *ORIGIN
|
||||
finish: { x: 89, y: 102 }
|
||||
- !label
|
||||
start: *ORIGIN
|
||||
color: 0xFFEEBB
|
||||
text: Pretty vector drawing.
|
||||
|
||||
# Unordered sets
|
||||
--- !!set
|
||||
# sets are represented as a
|
||||
# mapping where each key is
|
||||
# associated with the empty string
|
||||
? Mark McGwire
|
||||
? Sammy Sosa
|
||||
? Ken Griff
|
||||
|
||||
# Ordered mappings
|
||||
--- !!omap
|
||||
# ordered maps are represented as
|
||||
# a sequence of mappings, with
|
||||
# each mapping having one key
|
||||
- Mark McGwire: 65
|
||||
- Sammy Sosa: 63
|
||||
- Ken Griffy: 58
|
||||
|
||||
# Full length example
|
||||
--- !<tag:clarkevans.com,2002:invoice>
|
||||
invoice: 34843
|
||||
date : 2001-01-23
|
||||
bill-to: &id001
|
||||
given : Chris
|
||||
family : Dumars
|
||||
address:
|
||||
lines: |
|
||||
458 Walkman Dr.
|
||||
Suite #292
|
||||
city : Royal Oak
|
||||
state : MI
|
||||
postal : 48046
|
||||
ship-to: *id001
|
||||
product:
|
||||
- sku : BL394D
|
||||
quantity : 4
|
||||
description : Basketball
|
||||
price : 450.00
|
||||
- sku : BL4438H
|
||||
quantity : 1
|
||||
description : Super Hoop
|
||||
price : 2392.00
|
||||
tax : 251.42
|
||||
total: 4443.52
|
||||
comments:
|
||||
Late afternoon is best.
|
||||
Backup contact is Nancy
|
||||
Billsmer @ 338-4338.
|
||||
|
||||
# Another full-length example
|
||||
---
|
||||
Time: 2001-11-23 15:01:42 -5
|
||||
User: ed
|
||||
Warning:
|
||||
This is an error message
|
||||
for the log file
|
||||
---
|
||||
Time: 2001-11-23 15:02:31 -5
|
||||
User: ed
|
||||
Warning:
|
||||
A slightly different error
|
||||
message.
|
||||
---
|
||||
Date: 2001-11-23 15:03:17 -5
|
||||
User: ed
|
||||
Fatal:
|
||||
Unknown variable "bar"
|
||||
Stack:
|
||||
- file: TopClass.py
|
||||
line: 23
|
||||
code: |
|
||||
x = MoreObject("345\n")
|
||||
- file: MoreClass.py
|
||||
line: 58
|
||||
code: |-
|
||||
foo = bar
|
||||
|
|
@ -0,0 +1,431 @@
|
|||
|
||||
"""
|
||||
yaml.py
|
||||
|
||||
Lexer for YAML, a human-friendly data serialization language
|
||||
(http://yaml.org/).
|
||||
|
||||
Written by Kirill Simonov <xi@resolvent.net>.
|
||||
|
||||
License: Whatever suitable for inclusion into the Pygments package.
|
||||
"""
|
||||
|
||||
from pygments.lexer import \
|
||||
ExtendedRegexLexer, LexerContext, include, bygroups
|
||||
from pygments.token import \
|
||||
Text, Comment, Punctuation, Name, Literal
|
||||
|
||||
__all__ = ['YAMLLexer']
|
||||
|
||||
|
||||
class YAMLLexerContext(LexerContext):
|
||||
"""Indentation context for the YAML lexer."""
|
||||
|
||||
def __init__(self, *args, **kwds):
|
||||
super(YAMLLexerContext, self).__init__(*args, **kwds)
|
||||
self.indent_stack = []
|
||||
self.indent = -1
|
||||
self.next_indent = 0
|
||||
self.block_scalar_indent = None
|
||||
|
||||
|
||||
def something(TokenClass):
|
||||
"""Do not produce empty tokens."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
if not text:
|
||||
return
|
||||
yield match.start(), TokenClass, text
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
def reset_indent(TokenClass):
|
||||
"""Reset the indentation levels."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
context.indent_stack = []
|
||||
context.indent = -1
|
||||
context.next_indent = 0
|
||||
context.block_scalar_indent = None
|
||||
yield match.start(), TokenClass, text
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
def save_indent(TokenClass, start=False):
|
||||
"""Save a possible indentation level."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
extra = ''
|
||||
if start:
|
||||
context.next_indent = len(text)
|
||||
if context.next_indent < context.indent:
|
||||
while context.next_indent < context.indent:
|
||||
context.indent = context.indent_stack.pop()
|
||||
if context.next_indent > context.indent:
|
||||
extra = text[context.indent:]
|
||||
text = text[:context.indent]
|
||||
else:
|
||||
context.next_indent += len(text)
|
||||
if text:
|
||||
yield match.start(), TokenClass, text
|
||||
if extra:
|
||||
yield match.start()+len(text), TokenClass.Error, extra
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
def set_indent(TokenClass, implicit=False):
|
||||
"""Set the previously saved indentation level."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
if context.indent < context.next_indent:
|
||||
context.indent_stack.append(context.indent)
|
||||
context.indent = context.next_indent
|
||||
if not implicit:
|
||||
context.next_indent += len(text)
|
||||
yield match.start(), TokenClass, text
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
def set_block_scalar_indent(TokenClass):
|
||||
"""Set an explicit indentation level for a block scalar."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
context.block_scalar_indent = None
|
||||
if not text:
|
||||
return
|
||||
increment = match.group(1)
|
||||
if increment:
|
||||
current_indent = max(context.indent, 0)
|
||||
increment = int(increment)
|
||||
context.block_scalar_indent = current_indent + increment
|
||||
if text:
|
||||
yield match.start(), TokenClass, text
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
def parse_block_scalar_empty_line(IndentTokenClass, ContentTokenClass):
|
||||
"""Process an empty line in a block scalar."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
if (context.block_scalar_indent is None or
|
||||
len(text) <= context.block_scalar_indent):
|
||||
if text:
|
||||
yield match.start(), IndentTokenClass, text
|
||||
else:
|
||||
indentation = text[:context.block_scalar_indent]
|
||||
content = text[context.block_scalar_indent:]
|
||||
yield match.start(), IndentTokenClass, indentation
|
||||
yield (match.start()+context.block_scalar_indent,
|
||||
ContentTokenClass, content)
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
def parse_block_scalar_indent(TokenClass):
|
||||
"""Process indentation spaces in a block scalar."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
if context.block_scalar_indent is None:
|
||||
if len(text) <= max(context.indent, 0):
|
||||
context.stack.pop()
|
||||
context.stack.pop()
|
||||
return
|
||||
context.block_scalar_indent = len(text)
|
||||
else:
|
||||
if len(text) < context.block_scalar_indent:
|
||||
context.stack.pop()
|
||||
context.stack.pop()
|
||||
return
|
||||
if text:
|
||||
yield match.start(), TokenClass, text
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
def parse_plain_scalar_indent(TokenClass):
|
||||
"""Process indentation spaces in a plain scalar."""
|
||||
def callback(lexer, match, context):
|
||||
text = match.group()
|
||||
if len(text) <= context.indent:
|
||||
context.stack.pop()
|
||||
context.stack.pop()
|
||||
return
|
||||
if text:
|
||||
yield match.start(), TokenClass, text
|
||||
context.pos = match.end()
|
||||
return callback
|
||||
|
||||
|
||||
class YAMLLexer(ExtendedRegexLexer):
|
||||
"""Lexer for the YAML language."""
|
||||
|
||||
name = 'YAML'
|
||||
aliases = ['yaml']
|
||||
filenames = ['*.yaml', '*.yml']
|
||||
mimetypes = ['text/x-yaml']
|
||||
|
||||
tokens = {
|
||||
|
||||
# the root rules
|
||||
'root': [
|
||||
# ignored whitespaces
|
||||
(r'[ ]+(?=#|$)', Text.Blank),
|
||||
# line breaks
|
||||
(r'\n+', Text.Break),
|
||||
# a comment
|
||||
(r'#[^\n]*', Comment.Single),
|
||||
# the '%YAML' directive
|
||||
(r'^%YAML(?=[ ]|$)', reset_indent(Name.Directive),
|
||||
'yaml-directive'),
|
||||
# the %TAG directive
|
||||
(r'^%TAG(?=[ ]|$)', reset_indent(Name.Directive),
|
||||
'tag-directive'),
|
||||
# document start and document end indicators
|
||||
(r'^(?:---|\.\.\.)(?=[ ]|$)',
|
||||
reset_indent(Punctuation.Document), 'block-line'),
|
||||
# indentation spaces
|
||||
(r'[ ]*(?![ \t\n\r\f\v]|$)',
|
||||
save_indent(Text.Indent, start=True),
|
||||
('block-line', 'indentation')),
|
||||
],
|
||||
|
||||
# trailing whitespaces after directives or a block scalar indicator
|
||||
'ignored-line': [
|
||||
# ignored whitespaces
|
||||
(r'[ ]+(?=#|$)', Text.Blank),
|
||||
# a comment
|
||||
(r'#[^\n]*', Comment.Single),
|
||||
# line break
|
||||
(r'\n', Text.Break, '#pop:2'),
|
||||
],
|
||||
|
||||
# the %YAML directive
|
||||
'yaml-directive': [
|
||||
# the version number
|
||||
(r'([ ]+)([0-9]+\.[0-9]+)',
|
||||
bygroups(Text.Blank, Literal.Version), 'ignored-line'),
|
||||
],
|
||||
|
||||
        # the %TAG directive
|
||||
'tag-directive': [
|
||||
# a tag handle and the corresponding prefix
|
||||
(r'([ ]+)(!|![0-9A-Za-z_-]*!)'
|
||||
r'([ ]+)(!|!?[0-9A-Za-z;/?:@&=+$,_.!~*\'()\[\]%-]+)',
|
||||
bygroups(Text.Blank, Name.Type, Text.Blank, Name.Type),
|
||||
'ignored-line'),
|
||||
],
|
||||
|
||||
# block scalar indicators and indentation spaces
|
||||
'indentation': [
|
||||
# trailing whitespaces are ignored
|
||||
(r'[ ]*$', something(Text.Blank), '#pop:2'),
|
||||
# whitespaces preceeding block collection indicators
|
||||
(r'[ ]+(?=[?:-](?:[ ]|$))', save_indent(Text.Indent)),
|
||||
# block collection indicators
|
||||
(r'[?:-](?=[ ]|$)', set_indent(Punctuation.Indicator)),
|
||||
# the beginning a block line
|
||||
(r'[ ]*', save_indent(Text.Indent), '#pop'),
|
||||
],
|
||||
|
||||
# an indented line in the block context
|
||||
'block-line': [
|
||||
# the line end
|
||||
(r'[ ]*(?=#|$)', something(Text.Blank), '#pop'),
|
||||
# whitespaces separating tokens
|
||||
(r'[ ]+', Text.Blank),
|
||||
# tags, anchors and aliases,
|
||||
include('descriptors'),
|
||||
# block collections and scalars
|
||||
include('block-nodes'),
|
||||
# flow collections and quoted scalars
|
||||
include('flow-nodes'),
|
||||
# a plain scalar
|
||||
(r'(?=[^ \t\n\r\f\v?:,\[\]{}#&*!|>\'"%@`-]|[?:-][^ \t\n\r\f\v])',
|
||||
something(Literal.Scalar.Plain),
|
||||
'plain-scalar-in-block-context'),
|
||||
],
|
||||
|
||||
# tags, anchors, aliases
|
||||
'descriptors' : [
|
||||
# a full-form tag
|
||||
(r'!<[0-9A-Za-z;/?:@&=+$,_.!~*\'()\[\]%-]+>', Name.Type),
|
||||
# a tag in the form '!', '!suffix' or '!handle!suffix'
|
||||
(r'!(?:[0-9A-Za-z_-]+)?'
|
||||
r'(?:![0-9A-Za-z;/?:@&=+$,_.!~*\'()\[\]%-]+)?', Name.Type),
|
||||
# an anchor
|
||||
(r'&[0-9A-Za-z_-]+', Name.Anchor),
|
||||
# an alias
|
||||
(r'\*[0-9A-Za-z_-]+', Name.Alias),
|
||||
],
|
||||
|
||||
# block collections and scalars
|
||||
'block-nodes': [
|
||||
# implicit key
|
||||
(r':(?=[ ]|$)', set_indent(Punctuation.Indicator, implicit=True)),
|
||||
# literal and folded scalars
|
||||
(r'[|>]', Punctuation.Indicator,
|
||||
('block-scalar-content', 'block-scalar-header')),
|
||||
],
|
||||
|
||||
# flow collections and quoted scalars
|
||||
'flow-nodes': [
|
||||
# a flow sequence
|
||||
(r'\[', Punctuation.Indicator, 'flow-sequence'),
|
||||
# a flow mapping
|
||||
(r'\{', Punctuation.Indicator, 'flow-mapping'),
|
||||
# a single-quoted scalar
|
||||
(r'\'', Literal.Scalar.Flow.Quote, 'single-quoted-scalar'),
|
||||
# a double-quoted scalar
|
||||
(r'\"', Literal.Scalar.Flow.Quote, 'double-quoted-scalar'),
|
||||
],
|
||||
|
||||
# the content of a flow collection
|
||||
'flow-collection': [
|
||||
# whitespaces
|
||||
(r'[ ]+', Text.Blank),
|
||||
# line breaks
|
||||
(r'\n+', Text.Break),
|
||||
# a comment
|
||||
(r'#[^\n]*', Comment.Single),
|
||||
# simple indicators
|
||||
(r'[?:,]', Punctuation.Indicator),
|
||||
# tags, anchors and aliases
|
||||
include('descriptors'),
|
||||
# nested collections and quoted scalars
|
||||
include('flow-nodes'),
|
||||
# a plain scalar
|
||||
(r'(?=[^ \t\n\r\f\v?:,\[\]{}#&*!|>\'"%@`])',
|
||||
something(Literal.Scalar.Plain),
|
||||
'plain-scalar-in-flow-context'),
|
||||
],
|
||||
|
||||
# a flow sequence indicated by '[' and ']'
|
||||
'flow-sequence': [
|
||||
# include flow collection rules
|
||||
include('flow-collection'),
|
||||
# the closing indicator
|
||||
(r'\]', Punctuation.Indicator, '#pop'),
|
||||
],
|
||||
|
||||
# a flow mapping indicated by '{' and '}'
|
||||
'flow-mapping': [
|
||||
# include flow collection rules
|
||||
include('flow-collection'),
|
||||
# the closing indicator
|
||||
(r'\}', Punctuation.Indicator, '#pop'),
|
||||
],
|
||||
|
||||
# block scalar lines
|
||||
'block-scalar-content': [
|
||||
# line break
|
||||
(r'\n', Text.Break),
|
||||
# empty line
|
||||
(r'^[ ]+$',
|
||||
parse_block_scalar_empty_line(Text.Indent,
|
||||
Literal.Scalar.Block)),
|
||||
# indentation spaces (we may leave the state here)
|
||||
(r'^[ ]*', parse_block_scalar_indent(Text.Indent)),
|
||||
# line content
|
||||
(r'[^\n\r\f\v]+', Literal.Scalar.Block),
|
||||
],
|
||||
|
||||
# the content of a literal or folded scalar
|
||||
'block-scalar-header': [
|
||||
# indentation indicator followed by chomping flag
|
||||
(r'([1-9])?[+-]?(?=[ ]|$)',
|
||||
set_block_scalar_indent(Punctuation.Indicator),
|
||||
'ignored-line'),
|
||||
# chomping flag followed by indentation indicator
|
||||
(r'[+-]?([1-9])?(?=[ ]|$)',
|
||||
set_block_scalar_indent(Punctuation.Indicator),
|
||||
'ignored-line'),
|
||||
],
|
||||
|
||||
# ignored and regular whitespaces in quoted scalars
|
||||
'quoted-scalar-whitespaces': [
|
||||
# leading and trailing whitespaces are ignored
|
||||
(r'^[ ]+|[ ]+$', Text.Blank),
|
||||
# line breaks are ignored
|
||||
(r'\n+', Text.Break),
|
||||
# other whitespaces are a part of the value
|
||||
(r'[ ]+', Literal.Scalar.Flow),
|
||||
],
|
||||
|
||||
# single-quoted scalars
|
||||
'single-quoted-scalar': [
|
||||
# include whitespace and line break rules
|
||||
include('quoted-scalar-whitespaces'),
|
||||
# escaping of the quote character
|
||||
(r'\'\'', Literal.Scalar.Flow.Escape),
|
||||
# regular non-whitespace characters
|
||||
(r'[^ \t\n\r\f\v\']+', Literal.Scalar.Flow),
|
||||
# the closing quote
|
||||
(r'\'', Literal.Scalar.Flow.Quote, '#pop'),
|
||||
],
|
||||
|
||||
# double-quoted scalars
|
||||
'double-quoted-scalar': [
|
||||
# include whitespace and line break rules
|
||||
include('quoted-scalar-whitespaces'),
|
||||
# escaping of special characters
|
||||
(r'\\[0abt\tn\nvfre "\\N_LP]', Literal.Scalar.Flow.Escape),
|
||||
# escape codes
|
||||
(r'\\(?:x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4}|U[0-9A-Fa-f]{8})',
|
||||
Literal.Scalar.Flow.Escape),
|
||||
# regular non-whitespace characters
|
||||
(r'[^ \t\n\r\f\v\"\\]+', Literal.Scalar.Flow),
|
||||
# the closing quote
|
||||
(r'"', Literal.Scalar.Flow.Quote, '#pop'),
|
||||
],
|
||||
|
||||
# the beginning of a new line while scanning a plain scalar
|
||||
'plain-scalar-in-block-context-new-line': [
|
||||
# empty lines
|
||||
(r'^[ ]+$', Text.Blank),
|
||||
# line breaks
|
||||
(r'\n+', Text.Break),
|
||||
# document start and document end indicators
|
||||
(r'^(?=---|\.\.\.)', something(Punctuation.Document), '#pop:3'),
|
||||
# indentation spaces (we may leave the block line state here)
|
||||
(r'^[ ]*', parse_plain_scalar_indent(Text.Indent), '#pop'),
|
||||
],
|
||||
|
||||
# a plain scalar in the block context
|
||||
'plain-scalar-in-block-context': [
|
||||
# the scalar ends with the ':' indicator
|
||||
(r'[ ]*(?=:[ ]|:$)', something(Text.Blank), '#pop'),
|
||||
# the scalar ends with whitespaces followed by a comment
|
||||
(r'[ ]+(?=#)', Text.Blank, '#pop'),
|
||||
# trailing whitespaces are ignored
|
||||
(r'[ ]+$', Text.Blank),
|
||||
# line breaks are ignored
|
||||
(r'\n+', Text.Break, 'plain-scalar-in-block-context-new-line'),
|
||||
# other whitespaces are a part of the value
|
||||
(r'[ ]+', Literal.Scalar.Plain),
|
||||
# regular non-whitespace characters
|
||||
(r'(?::(?![ \t\n\r\f\v])|[^ \t\n\r\f\v:])+',
|
||||
Literal.Scalar.Plain),
|
||||
],
|
||||
|
||||
# a plain scalar is the flow context
|
||||
'plain-scalar-in-flow-context': [
|
||||
# the scalar ends with an indicator character
|
||||
(r'[ ]*(?=[,:?\[\]{}])', something(Text.Blank), '#pop'),
|
||||
# the scalar ends with a comment
|
||||
(r'[ ]+(?=#)', Text.Blank, '#pop'),
|
||||
# leading and trailing whitespaces are ignored
|
||||
(r'^[ ]+|[ ]+$', Text.Blank),
|
||||
# line breaks are ignored
|
||||
(r'\n+', Text.Break),
|
||||
# other whitespaces are a part of the value
|
||||
(r'[ ]+', Literal.Scalar.Plain),
|
||||
# regular non-whitespace characters
|
||||
(r'[^ \t\n\r\f\v,:?\[\]{}]+', Literal.Scalar.Plain),
|
||||
],
|
||||
|
||||
}
|
||||
|
||||
def get_tokens_unprocessed(self, text=None, context=None):
|
||||
if context is None:
|
||||
context = YAMLLexerContext(text, 0)
|
||||
return super(YAMLLexer, self).get_tokens_unprocessed(text, context)
|
||||
|
||||
|
|
@ -0,0 +1,115 @@
|
|||
%YAML 1.1
|
||||
---
|
||||
|
||||
ascii:
|
||||
|
||||
header: "\e[0;1;30;40m"
|
||||
|
||||
footer: "\e[0m"
|
||||
|
||||
tokens:
|
||||
stream-start:
|
||||
stream-end:
|
||||
directive: { start: "\e[35m", end: "\e[0;1;30;40m" }
|
||||
document-start: { start: "\e[35m", end: "\e[0;1;30;40m" }
|
||||
document-end: { start: "\e[35m", end: "\e[0;1;30;40m" }
|
||||
block-sequence-start:
|
||||
block-mapping-start:
|
||||
block-end:
|
||||
flow-sequence-start: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
flow-mapping-start: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
flow-sequence-end: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
flow-mapping-end: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
key: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
value: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
block-entry: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
flow-entry: { start: "\e[33m", end: "\e[0;1;30;40m" }
|
||||
alias: { start: "\e[32m", end: "\e[0;1;30;40m" }
|
||||
anchor: { start: "\e[32m", end: "\e[0;1;30;40m" }
|
||||
tag: { start: "\e[32m", end: "\e[0;1;30;40m" }
|
||||
scalar: { start: "\e[36m", end: "\e[0;1;30;40m" }
|
||||
|
||||
replaces:
|
||||
- "\r\n": "\n"
|
||||
- "\r": "\n"
|
||||
- "\n": "\n"
|
||||
- "\x85": "\n"
|
||||
- "\u2028": "\n"
|
||||
- "\u2029": "\n"
|
||||
|
||||
html: &html
|
||||
|
||||
tokens:
|
||||
stream-start:
|
||||
stream-end:
|
||||
directive: { start: <code class="directive_token">, end: </code> }
|
||||
document-start: { start: <code class="document_start_token">, end: </code> }
|
||||
document-end: { start: <code class="document_end_token">, end: </code> }
|
||||
block-sequence-start:
|
||||
block-mapping-start:
|
||||
block-end:
|
||||
flow-sequence-start: { start: <code class="delimiter_token">, end: </code> }
|
||||
flow-mapping-start: { start: <code class="delimiter_token">, end: </code> }
|
||||
flow-sequence-end: { start: <code class="delimiter_token">, end: </code> }
|
||||
flow-mapping-end: { start: <code class="delimiter_token">, end: </code> }
|
||||
key: { start: <code class="delimiter_token">, end: </code> }
|
||||
value: { start: <code class="delimiter_token">, end: </code> }
|
||||
block-entry: { start: <code class="delimiter_token">, end: </code> }
|
||||
flow-entry: { start: <code class="delimiter_token">, end: </code> }
|
||||
alias: { start: <code class="anchor_token">, end: </code> }
|
||||
anchor: { start: <code class="anchor_token">, end: </code> }
|
||||
tag: { start: <code class="tag_token">, end: </code> }
|
||||
scalar: { start: <code class="scalar_token">, end: </code> }
|
||||
|
||||
events:
|
||||
stream-start: { start: <pre class="yaml_stream"> }
|
||||
stream-end: { end: </pre> }
|
||||
document-start: { start: <span class="document"> }
|
||||
document-end: { end: </span> }
|
||||
sequence-start: { start: <span class="sequence"> }
|
||||
sequence-end: { end: </span> }
|
||||
mapping-start: { start: <span class="mapping"> }
|
||||
mapping-end: { end: </span> }
|
||||
scalar: { start: <span class="scalar">, end: </span> }
|
||||
|
||||
replaces:
|
||||
- "\r\n": "\n"
|
||||
- "\r": "\n"
|
||||
- "\n": "\n"
|
||||
- "\x85": "\n"
|
||||
- "\u2028": "\n"
|
||||
- "\u2029": "\n"
|
||||
- "&": "&"
|
||||
- "<": "<"
|
||||
- ">": ">"
|
||||
|
||||
html-page:
|
||||
|
||||
header: |
|
||||
<html>
|
||||
<head>
|
||||
<title>A YAML stream</title>
|
||||
<style type="text/css">
|
||||
.document { background: #FFF }
|
||||
.sequence { background: #EEF }
|
||||
.mapping { background: #EFE }
|
||||
.scalar { background: #FEE }
|
||||
.directive_token { color: #C0C }
|
||||
.document_start_token { color: #C0C; font-weight: bold }
|
||||
.document_end_token { color: #C0C; font-weight: bold }
|
||||
.delimiter_token { color: #600; font-weight: bold }
|
||||
.anchor_token { color: #090 }
|
||||
.tag_token { color: #090 }
|
||||
.scalar_token { color: #000 }
|
||||
.yaml_stream { color: #999 }
|
||||
</style>
|
||||
<body>
|
||||
|
||||
footer: |
|
||||
</body>
|
||||
</html>
|
||||
|
||||
<<: *html
|
||||
|
||||
|
||||
# vim: ft=yaml
|
|
@ -0,0 +1,114 @@
|
|||
#!/usr/bin/python
|
||||
|
||||
import yaml, codecs, sys, os.path, optparse
|
||||
|
||||
class Style:
|
||||
|
||||
def __init__(self, header=None, footer=None,
|
||||
tokens=None, events=None, replaces=None):
|
||||
self.header = header
|
||||
self.footer = footer
|
||||
self.replaces = replaces
|
||||
self.substitutions = {}
|
||||
for domain, Class in [(tokens, 'Token'), (events, 'Event')]:
|
||||
if not domain:
|
||||
continue
|
||||
for key in domain:
|
||||
name = ''.join([part.capitalize() for part in key.split('-')])
|
||||
cls = getattr(yaml, '%s%s' % (name, Class))
|
||||
value = domain[key]
|
||||
if not value:
|
||||
continue
|
||||
start = value.get('start')
|
||||
end = value.get('end')
|
||||
if start:
|
||||
self.substitutions[cls, -1] = start
|
||||
if end:
|
||||
self.substitutions[cls, +1] = end
|
||||
|
||||
def __setstate__(self, state):
|
||||
self.__init__(**state)
|
||||
|
||||
yaml.add_path_resolver(u'tag:yaml.org,2002:python/object:__main__.Style',
|
||||
[None], dict)
|
||||
yaml.add_path_resolver(u'tag:yaml.org,2002:pairs',
|
||||
[None, u'replaces'], list)
|
||||
|
||||
class YAMLHighlight:
|
||||
|
||||
def __init__(self, options):
|
||||
config = yaml.load(file(options.config, 'rb').read())
|
||||
self.style = config[options.style]
|
||||
if options.input:
|
||||
self.input = file(options.input, 'rb')
|
||||
else:
|
||||
self.input = sys.stdin
|
||||
if options.output:
|
||||
self.output = file(options.output, 'wb')
|
||||
else:
|
||||
self.output = sys.stdout
|
||||
|
||||
def highlight(self):
|
||||
input = self.input.read()
|
||||
if input.startswith(codecs.BOM_UTF16_LE):
|
||||
input = unicode(input, 'utf-16-le')
|
||||
elif input.startswith(codecs.BOM_UTF16_BE):
|
||||
input = unicode(input, 'utf-16-be')
|
||||
else:
|
||||
input = unicode(input, 'utf-8')
|
||||
substitutions = self.style.substitutions
|
||||
tokens = yaml.scan(input)
|
||||
events = yaml.parse(input)
|
||||
markers = []
|
||||
number = 0
|
||||
for token in tokens:
|
||||
number += 1
|
||||
if token.start_mark.index != token.end_mark.index:
|
||||
cls = token.__class__
|
||||
if (cls, -1) in substitutions:
|
||||
markers.append([token.start_mark.index, +2, number, substitutions[cls, -1]])
|
||||
if (cls, +1) in substitutions:
|
||||
markers.append([token.end_mark.index, -2, number, substitutions[cls, +1]])
|
||||
number = 0
|
||||
for event in events:
|
||||
number += 1
|
||||
cls = event.__class__
|
||||
if (cls, -1) in substitutions:
|
||||
markers.append([event.start_mark.index, +1, number, substitutions[cls, -1]])
|
||||
if (cls, +1) in substitutions:
|
||||
markers.append([event.end_mark.index, -1, number, substitutions[cls, +1]])
|
||||
markers.sort()
|
||||
markers.reverse()
|
||||
chunks = []
|
||||
position = len(input)
|
||||
for index, weight1, weight2, substitution in markers:
|
||||
if index < position:
|
||||
chunk = input[index:position]
|
||||
for substring, replacement in self.style.replaces:
|
||||
chunk = chunk.replace(substring, replacement)
|
||||
chunks.append(chunk)
|
||||
position = index
|
||||
chunks.append(substitution)
|
||||
chunks.reverse()
|
||||
result = u''.join(chunks)
|
||||
if self.style.header:
|
||||
self.output.write(self.style.header)
|
||||
self.output.write(result.encode('utf-8'))
|
||||
if self.style.footer:
|
||||
self.output.write(self.style.footer)
|
||||
|
||||
if __name__ == '__main__':
|
||||
parser = optparse.OptionParser()
|
||||
parser.add_option('-s', '--style', dest='style', default='ascii',
|
||||
help="specify the highlighting style", metavar='STYLE')
|
||||
parser.add_option('-c', '--config', dest='config',
|
||||
default=os.path.join(os.path.dirname(sys.argv[0]), 'yaml_hl.cfg'),
|
||||
help="set an alternative configuration file", metavar='CONFIG')
|
||||
parser.add_option('-i', '--input', dest='input', default=None,
|
||||
help="set the input file (default: stdin)", metavar='FILE')
|
||||
parser.add_option('-o', '--output', dest='output', default=None,
|
||||
help="set the output file (default: stdout)", metavar='FILE')
|
||||
(options, args) = parser.parse_args()
|
||||
hl = YAMLHighlight(options)
|
||||
hl.highlight()
|
||||
|
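As a rough illustration of the marker-insertion technique the highlighter above relies on (a sketch, not part of the vendored sources; the helper name is invented), the same idea can be reduced to a few lines using only the public PyYAML API:

import yaml

def bracket_scalars(text):
    # Collect (index, insertion) pairs around every scalar token,
    # mirroring how the highlighter records start/end substitutions.
    markers = []
    for token in yaml.scan(text):
        if isinstance(token, yaml.ScalarToken):
            markers.append((token.start_mark.index, '['))
            markers.append((token.end_mark.index, ']'))
    # Apply the insertions from the end so earlier indices stay valid.
    for index, insertion in sorted(markers, reverse=True):
        text = text[:index] + insertion + text[index:]
    return text

print(bracket_scalars("- one\n- two\n"))   # "- [one]\n- [two]\n"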
Diff not shown because of the file's large size.
|
@ -0,0 +1,23 @@
|
|||
|
||||
#include <yaml.h>
|
||||
|
||||
#if PY_MAJOR_VERSION < 3
|
||||
|
||||
#define PyUnicode_FromString(s) PyUnicode_DecodeUTF8((s), strlen(s), "strict")
|
||||
|
||||
#else
|
||||
|
||||
#define PyString_CheckExact PyBytes_CheckExact
|
||||
#define PyString_AS_STRING PyBytes_AS_STRING
|
||||
#define PyString_GET_SIZE PyBytes_GET_SIZE
|
||||
#define PyString_FromStringAndSize PyBytes_FromStringAndSize
|
||||
|
||||
#endif
|
||||
|
||||
#ifdef _MSC_VER /* MS Visual C++ 6.0 */
|
||||
#if _MSC_VER == 1200
|
||||
|
||||
#define PyLong_FromUnsignedLongLong(z) PyInt_FromLong(z)
|
||||
|
||||
#endif
|
||||
#endif
|
|
@ -0,0 +1,251 @@
|
|||
|
||||
cdef extern from "_yaml.h":
|
||||
|
||||
void malloc(int l)
|
||||
void memcpy(char *d, char *s, int l)
|
||||
int strlen(char *s)
|
||||
int PyString_CheckExact(object o)
|
||||
int PyUnicode_CheckExact(object o)
|
||||
char *PyString_AS_STRING(object o)
|
||||
int PyString_GET_SIZE(object o)
|
||||
object PyString_FromStringAndSize(char *v, int l)
|
||||
object PyUnicode_FromString(char *u)
|
||||
object PyUnicode_DecodeUTF8(char *u, int s, char *e)
|
||||
object PyUnicode_AsUTF8String(object o)
|
||||
int PY_MAJOR_VERSION
|
||||
|
||||
ctypedef enum:
|
||||
SIZEOF_VOID_P
|
||||
ctypedef enum yaml_encoding_t:
|
||||
YAML_ANY_ENCODING
|
||||
YAML_UTF8_ENCODING
|
||||
YAML_UTF16LE_ENCODING
|
||||
YAML_UTF16BE_ENCODING
|
||||
ctypedef enum yaml_break_t:
|
||||
YAML_ANY_BREAK
|
||||
YAML_CR_BREAK
|
||||
YAML_LN_BREAK
|
||||
YAML_CRLN_BREAK
|
||||
ctypedef enum yaml_error_type_t:
|
||||
YAML_NO_ERROR
|
||||
YAML_MEMORY_ERROR
|
||||
YAML_READER_ERROR
|
||||
YAML_SCANNER_ERROR
|
||||
YAML_PARSER_ERROR
|
||||
YAML_WRITER_ERROR
|
||||
YAML_EMITTER_ERROR
|
||||
ctypedef enum yaml_scalar_style_t:
|
||||
YAML_ANY_SCALAR_STYLE
|
||||
YAML_PLAIN_SCALAR_STYLE
|
||||
YAML_SINGLE_QUOTED_SCALAR_STYLE
|
||||
YAML_DOUBLE_QUOTED_SCALAR_STYLE
|
||||
YAML_LITERAL_SCALAR_STYLE
|
||||
YAML_FOLDED_SCALAR_STYLE
|
||||
ctypedef enum yaml_sequence_style_t:
|
||||
YAML_ANY_SEQUENCE_STYLE
|
||||
YAML_BLOCK_SEQUENCE_STYLE
|
||||
YAML_FLOW_SEQUENCE_STYLE
|
||||
ctypedef enum yaml_mapping_style_t:
|
||||
YAML_ANY_MAPPING_STYLE
|
||||
YAML_BLOCK_MAPPING_STYLE
|
||||
YAML_FLOW_MAPPING_STYLE
|
||||
ctypedef enum yaml_token_type_t:
|
||||
YAML_NO_TOKEN
|
||||
YAML_STREAM_START_TOKEN
|
||||
YAML_STREAM_END_TOKEN
|
||||
YAML_VERSION_DIRECTIVE_TOKEN
|
||||
YAML_TAG_DIRECTIVE_TOKEN
|
||||
YAML_DOCUMENT_START_TOKEN
|
||||
YAML_DOCUMENT_END_TOKEN
|
||||
YAML_BLOCK_SEQUENCE_START_TOKEN
|
||||
YAML_BLOCK_MAPPING_START_TOKEN
|
||||
YAML_BLOCK_END_TOKEN
|
||||
YAML_FLOW_SEQUENCE_START_TOKEN
|
||||
YAML_FLOW_SEQUENCE_END_TOKEN
|
||||
YAML_FLOW_MAPPING_START_TOKEN
|
||||
YAML_FLOW_MAPPING_END_TOKEN
|
||||
YAML_BLOCK_ENTRY_TOKEN
|
||||
YAML_FLOW_ENTRY_TOKEN
|
||||
YAML_KEY_TOKEN
|
||||
YAML_VALUE_TOKEN
|
||||
YAML_ALIAS_TOKEN
|
||||
YAML_ANCHOR_TOKEN
|
||||
YAML_TAG_TOKEN
|
||||
YAML_SCALAR_TOKEN
|
||||
ctypedef enum yaml_event_type_t:
|
||||
YAML_NO_EVENT
|
||||
YAML_STREAM_START_EVENT
|
||||
YAML_STREAM_END_EVENT
|
||||
YAML_DOCUMENT_START_EVENT
|
||||
YAML_DOCUMENT_END_EVENT
|
||||
YAML_ALIAS_EVENT
|
||||
YAML_SCALAR_EVENT
|
||||
YAML_SEQUENCE_START_EVENT
|
||||
YAML_SEQUENCE_END_EVENT
|
||||
YAML_MAPPING_START_EVENT
|
||||
YAML_MAPPING_END_EVENT
|
||||
|
||||
ctypedef int yaml_read_handler_t(void *data, char *buffer,
|
||||
int size, int *size_read) except 0
|
||||
|
||||
ctypedef int yaml_write_handler_t(void *data, char *buffer,
|
||||
int size) except 0
|
||||
|
||||
ctypedef struct yaml_mark_t:
|
||||
int index
|
||||
int line
|
||||
int column
|
||||
ctypedef struct yaml_version_directive_t:
|
||||
int major
|
||||
int minor
|
||||
ctypedef struct yaml_tag_directive_t:
|
||||
char *handle
|
||||
char *prefix
|
||||
|
||||
ctypedef struct _yaml_token_stream_start_data_t:
|
||||
yaml_encoding_t encoding
|
||||
ctypedef struct _yaml_token_alias_data_t:
|
||||
char *value
|
||||
ctypedef struct _yaml_token_anchor_data_t:
|
||||
char *value
|
||||
ctypedef struct _yaml_token_tag_data_t:
|
||||
char *handle
|
||||
char *suffix
|
||||
ctypedef struct _yaml_token_scalar_data_t:
|
||||
char *value
|
||||
int length
|
||||
yaml_scalar_style_t style
|
||||
ctypedef struct _yaml_token_version_directive_data_t:
|
||||
int major
|
||||
int minor
|
||||
ctypedef struct _yaml_token_tag_directive_data_t:
|
||||
char *handle
|
||||
char *prefix
|
||||
ctypedef union _yaml_token_data_t:
|
||||
_yaml_token_stream_start_data_t stream_start
|
||||
_yaml_token_alias_data_t alias
|
||||
_yaml_token_anchor_data_t anchor
|
||||
_yaml_token_tag_data_t tag
|
||||
_yaml_token_scalar_data_t scalar
|
||||
_yaml_token_version_directive_data_t version_directive
|
||||
_yaml_token_tag_directive_data_t tag_directive
|
||||
ctypedef struct yaml_token_t:
|
||||
yaml_token_type_t type
|
||||
_yaml_token_data_t data
|
||||
yaml_mark_t start_mark
|
||||
yaml_mark_t end_mark
|
||||
|
||||
ctypedef struct _yaml_event_stream_start_data_t:
|
||||
yaml_encoding_t encoding
|
||||
ctypedef struct _yaml_event_document_start_data_tag_directives_t:
|
||||
yaml_tag_directive_t *start
|
||||
yaml_tag_directive_t *end
|
||||
ctypedef struct _yaml_event_document_start_data_t:
|
||||
yaml_version_directive_t *version_directive
|
||||
_yaml_event_document_start_data_tag_directives_t tag_directives
|
||||
int implicit
|
||||
ctypedef struct _yaml_event_document_end_data_t:
|
||||
int implicit
|
||||
ctypedef struct _yaml_event_alias_data_t:
|
||||
char *anchor
|
||||
ctypedef struct _yaml_event_scalar_data_t:
|
||||
char *anchor
|
||||
char *tag
|
||||
char *value
|
||||
int length
|
||||
int plain_implicit
|
||||
int quoted_implicit
|
||||
yaml_scalar_style_t style
|
||||
ctypedef struct _yaml_event_sequence_start_data_t:
|
||||
char *anchor
|
||||
char *tag
|
||||
int implicit
|
||||
yaml_sequence_style_t style
|
||||
ctypedef struct _yaml_event_mapping_start_data_t:
|
||||
char *anchor
|
||||
char *tag
|
||||
int implicit
|
||||
yaml_mapping_style_t style
|
||||
ctypedef union _yaml_event_data_t:
|
||||
_yaml_event_stream_start_data_t stream_start
|
||||
_yaml_event_document_start_data_t document_start
|
||||
_yaml_event_document_end_data_t document_end
|
||||
_yaml_event_alias_data_t alias
|
||||
_yaml_event_scalar_data_t scalar
|
||||
_yaml_event_sequence_start_data_t sequence_start
|
||||
_yaml_event_mapping_start_data_t mapping_start
|
||||
ctypedef struct yaml_event_t:
|
||||
yaml_event_type_t type
|
||||
_yaml_event_data_t data
|
||||
yaml_mark_t start_mark
|
||||
yaml_mark_t end_mark
|
||||
|
||||
ctypedef struct yaml_parser_t:
|
||||
yaml_error_type_t error
|
||||
char *problem
|
||||
int problem_offset
|
||||
int problem_value
|
||||
yaml_mark_t problem_mark
|
||||
char *context
|
||||
yaml_mark_t context_mark
|
||||
|
||||
ctypedef struct yaml_emitter_t:
|
||||
yaml_error_type_t error
|
||||
char *problem
|
||||
|
||||
char *yaml_get_version_string()
|
||||
void yaml_get_version(int *major, int *minor, int *patch)
|
||||
|
||||
void yaml_token_delete(yaml_token_t *token)
|
||||
|
||||
int yaml_stream_start_event_initialize(yaml_event_t *event,
|
||||
yaml_encoding_t encoding)
|
||||
int yaml_stream_end_event_initialize(yaml_event_t *event)
|
||||
int yaml_document_start_event_initialize(yaml_event_t *event,
|
||||
yaml_version_directive_t *version_directive,
|
||||
yaml_tag_directive_t *tag_directives_start,
|
||||
yaml_tag_directive_t *tag_directives_end,
|
||||
int implicit)
|
||||
int yaml_document_end_event_initialize(yaml_event_t *event,
|
||||
int implicit)
|
||||
int yaml_alias_event_initialize(yaml_event_t *event, char *anchor)
|
||||
int yaml_scalar_event_initialize(yaml_event_t *event,
|
||||
char *anchor, char *tag, char *value, int length,
|
||||
int plain_implicit, int quoted_implicit,
|
||||
yaml_scalar_style_t style)
|
||||
int yaml_sequence_start_event_initialize(yaml_event_t *event,
|
||||
char *anchor, char *tag, int implicit, yaml_sequence_style_t style)
|
||||
int yaml_sequence_end_event_initialize(yaml_event_t *event)
|
||||
int yaml_mapping_start_event_initialize(yaml_event_t *event,
|
||||
char *anchor, char *tag, int implicit, yaml_mapping_style_t style)
|
||||
int yaml_mapping_end_event_initialize(yaml_event_t *event)
|
||||
void yaml_event_delete(yaml_event_t *event)
|
||||
|
||||
int yaml_parser_initialize(yaml_parser_t *parser)
|
||||
void yaml_parser_delete(yaml_parser_t *parser)
|
||||
void yaml_parser_set_input_string(yaml_parser_t *parser,
|
||||
char *input, int size)
|
||||
void yaml_parser_set_input(yaml_parser_t *parser,
|
||||
yaml_read_handler_t *handler, void *data)
|
||||
void yaml_parser_set_encoding(yaml_parser_t *parser,
|
||||
yaml_encoding_t encoding)
|
||||
int yaml_parser_scan(yaml_parser_t *parser, yaml_token_t *token) except *
|
||||
int yaml_parser_parse(yaml_parser_t *parser, yaml_event_t *event) except *
|
||||
|
||||
int yaml_emitter_initialize(yaml_emitter_t *emitter)
|
||||
void yaml_emitter_delete(yaml_emitter_t *emitter)
|
||||
void yaml_emitter_set_output_string(yaml_emitter_t *emitter,
|
||||
char *output, int size, int *size_written)
|
||||
void yaml_emitter_set_output(yaml_emitter_t *emitter,
|
||||
yaml_write_handler_t *handler, void *data)
|
||||
void yaml_emitter_set_encoding(yaml_emitter_t *emitter,
|
||||
yaml_encoding_t encoding)
|
||||
void yaml_emitter_set_canonical(yaml_emitter_t *emitter, int canonical)
|
||||
void yaml_emitter_set_indent(yaml_emitter_t *emitter, int indent)
|
||||
void yaml_emitter_set_width(yaml_emitter_t *emitter, int width)
|
||||
void yaml_emitter_set_unicode(yaml_emitter_t *emitter, int unicode)
|
||||
void yaml_emitter_set_break(yaml_emitter_t *emitter,
|
||||
yaml_break_t line_break)
|
||||
int yaml_emitter_emit(yaml_emitter_t *emitter, yaml_event_t *event) except *
|
||||
int yaml_emitter_flush(yaml_emitter_t *emitter)
|
||||
|
Diff not shown because of the file's large size.
|
@ -0,0 +1,315 @@
|
|||
|
||||
from error import *
|
||||
|
||||
from tokens import *
|
||||
from events import *
|
||||
from nodes import *
|
||||
|
||||
from loader import *
|
||||
from dumper import *
|
||||
|
||||
__version__ = '3.11'
|
||||
|
||||
try:
|
||||
from cyaml import *
|
||||
__with_libyaml__ = True
|
||||
except ImportError:
|
||||
__with_libyaml__ = False
|
||||
|
||||
def scan(stream, Loader=Loader):
|
||||
"""
|
||||
Scan a YAML stream and produce scanning tokens.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_token():
|
||||
yield loader.get_token()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def parse(stream, Loader=Loader):
|
||||
"""
|
||||
Parse a YAML stream and produce parsing events.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_event():
|
||||
yield loader.get_event()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def compose(stream, Loader=Loader):
|
||||
"""
|
||||
Parse the first YAML document in a stream
|
||||
and produce the corresponding representation tree.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
return loader.get_single_node()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def compose_all(stream, Loader=Loader):
|
||||
"""
|
||||
Parse all YAML documents in a stream
|
||||
and produce corresponding representation trees.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_node():
|
||||
yield loader.get_node()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def load(stream, Loader=Loader):
|
||||
"""
|
||||
Parse the first YAML document in a stream
|
||||
and produce the corresponding Python object.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
return loader.get_single_data()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def load_all(stream, Loader=Loader):
|
||||
"""
|
||||
Parse all YAML documents in a stream
|
||||
and produce corresponding Python objects.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_data():
|
||||
yield loader.get_data()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def safe_load(stream):
|
||||
"""
|
||||
Parse the first YAML document in a stream
|
||||
and produce the corresponding Python object.
|
||||
Resolve only basic YAML tags.
|
||||
"""
|
||||
return load(stream, SafeLoader)
|
||||
|
||||
def safe_load_all(stream):
|
||||
"""
|
||||
Parse all YAML documents in a stream
|
||||
and produce corresponding Python objects.
|
||||
Resolve only basic YAML tags.
|
||||
"""
|
||||
return load_all(stream, SafeLoader)
|
||||
|
||||
def emit(events, stream=None, Dumper=Dumper,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None):
|
||||
"""
|
||||
Emit YAML parsing events into a stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
getvalue = None
|
||||
if stream is None:
|
||||
from StringIO import StringIO
|
||||
stream = StringIO()
|
||||
getvalue = stream.getvalue
|
||||
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break)
|
||||
try:
|
||||
for event in events:
|
||||
dumper.emit(event)
|
||||
finally:
|
||||
dumper.dispose()
|
||||
if getvalue:
|
||||
return getvalue()
|
||||
|
||||
def serialize_all(nodes, stream=None, Dumper=Dumper,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding='utf-8', explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
"""
|
||||
Serialize a sequence of representation trees into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
getvalue = None
|
||||
if stream is None:
|
||||
if encoding is None:
|
||||
from StringIO import StringIO
|
||||
else:
|
||||
from cStringIO import StringIO
|
||||
stream = StringIO()
|
||||
getvalue = stream.getvalue
|
||||
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
encoding=encoding, version=version, tags=tags,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end)
|
||||
try:
|
||||
dumper.open()
|
||||
for node in nodes:
|
||||
dumper.serialize(node)
|
||||
dumper.close()
|
||||
finally:
|
||||
dumper.dispose()
|
||||
if getvalue:
|
||||
return getvalue()
|
||||
|
||||
def serialize(node, stream=None, Dumper=Dumper, **kwds):
|
||||
"""
|
||||
Serialize a representation tree into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return serialize_all([node], stream, Dumper=Dumper, **kwds)
|
||||
|
||||
def dump_all(documents, stream=None, Dumper=Dumper,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding='utf-8', explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
"""
|
||||
Serialize a sequence of Python objects into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
getvalue = None
|
||||
if stream is None:
|
||||
if encoding is None:
|
||||
from StringIO import StringIO
|
||||
else:
|
||||
from cStringIO import StringIO
|
||||
stream = StringIO()
|
||||
getvalue = stream.getvalue
|
||||
dumper = Dumper(stream, default_style=default_style,
|
||||
default_flow_style=default_flow_style,
|
||||
canonical=canonical, indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
encoding=encoding, version=version, tags=tags,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end)
|
||||
try:
|
||||
dumper.open()
|
||||
for data in documents:
|
||||
dumper.represent(data)
|
||||
dumper.close()
|
||||
finally:
|
||||
dumper.dispose()
|
||||
if getvalue:
|
||||
return getvalue()
|
||||
|
||||
def dump(data, stream=None, Dumper=Dumper, **kwds):
|
||||
"""
|
||||
Serialize a Python object into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return dump_all([data], stream, Dumper=Dumper, **kwds)
|
||||
|
||||
def safe_dump_all(documents, stream=None, **kwds):
|
||||
"""
|
||||
Serialize a sequence of Python objects into a YAML stream.
|
||||
Produce only basic YAML tags.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
|
||||
|
||||
def safe_dump(data, stream=None, **kwds):
|
||||
"""
|
||||
Serialize a Python object into a YAML stream.
|
||||
Produce only basic YAML tags.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
|
||||
|
||||
def add_implicit_resolver(tag, regexp, first=None,
|
||||
Loader=Loader, Dumper=Dumper):
|
||||
"""
|
||||
Add an implicit scalar detector.
|
||||
If an implicit scalar value matches the given regexp,
|
||||
the corresponding tag is assigned to the scalar.
|
||||
first is a sequence of possible initial characters or None.
|
||||
"""
|
||||
Loader.add_implicit_resolver(tag, regexp, first)
|
||||
Dumper.add_implicit_resolver(tag, regexp, first)
|
||||
|
||||
def add_path_resolver(tag, path, kind=None, Loader=Loader, Dumper=Dumper):
|
||||
"""
|
||||
Add a path based resolver for the given tag.
|
||||
A path is a list of keys that forms a path
|
||||
to a node in the representation tree.
|
||||
Keys can be string values, integers, or None.
|
||||
"""
|
||||
Loader.add_path_resolver(tag, path, kind)
|
||||
Dumper.add_path_resolver(tag, path, kind)
|
||||
|
||||
def add_constructor(tag, constructor, Loader=Loader):
|
||||
"""
|
||||
Add a constructor for the given tag.
|
||||
Constructor is a function that accepts a Loader instance
|
||||
and a node object and produces the corresponding Python object.
|
||||
"""
|
||||
Loader.add_constructor(tag, constructor)
|
||||
|
||||
def add_multi_constructor(tag_prefix, multi_constructor, Loader=Loader):
|
||||
"""
|
||||
Add a multi-constructor for the given tag prefix.
|
||||
Multi-constructor is called for a node if its tag starts with tag_prefix.
|
||||
Multi-constructor accepts a Loader instance, a tag suffix,
|
||||
and a node object and produces the corresponding Python object.
|
||||
"""
|
||||
Loader.add_multi_constructor(tag_prefix, multi_constructor)
|
||||
|
||||
def add_representer(data_type, representer, Dumper=Dumper):
|
||||
"""
|
||||
Add a representer for the given type.
|
||||
Representer is a function accepting a Dumper instance
|
||||
and an instance of the given data type
|
||||
and producing the corresponding representation node.
|
||||
"""
|
||||
Dumper.add_representer(data_type, representer)
|
||||
|
||||
def add_multi_representer(data_type, multi_representer, Dumper=Dumper):
|
||||
"""
|
||||
Add a representer for the given type.
|
||||
Multi-representer is a function accepting a Dumper instance
|
||||
and an instance of the given data type or subtype
|
||||
and producing the corresponding representation node.
|
||||
"""
|
||||
Dumper.add_multi_representer(data_type, multi_representer)
|
||||
|
||||
class YAMLObjectMetaclass(type):
|
||||
"""
|
||||
The metaclass for YAMLObject.
|
||||
"""
|
||||
def __init__(cls, name, bases, kwds):
|
||||
super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
|
||||
if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
|
||||
cls.yaml_loader.add_constructor(cls.yaml_tag, cls.from_yaml)
|
||||
cls.yaml_dumper.add_representer(cls, cls.to_yaml)
|
||||
|
||||
class YAMLObject(object):
|
||||
"""
|
||||
An object that can dump itself to a YAML stream
|
||||
and load itself from a YAML stream.
|
||||
"""
|
||||
|
||||
__metaclass__ = YAMLObjectMetaclass
|
||||
__slots__ = () # no direct instantiation, so allow immutable subclasses
|
||||
|
||||
yaml_loader = Loader
|
||||
yaml_dumper = Dumper
|
||||
|
||||
yaml_tag = None
|
||||
yaml_flow_style = None
|
||||
|
||||
def from_yaml(cls, loader, node):
|
||||
"""
|
||||
Convert a representation node to a Python object.
|
||||
"""
|
||||
return loader.construct_yaml_object(node, cls)
|
||||
from_yaml = classmethod(from_yaml)
|
||||
|
||||
def to_yaml(cls, dumper, data):
|
||||
"""
|
||||
Convert a Python object to a representation node.
|
||||
"""
|
||||
return dumper.represent_yaml_object(cls.yaml_tag, data, cls,
|
||||
flow_style=cls.yaml_flow_style)
|
||||
to_yaml = classmethod(to_yaml)
|
||||
|
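A brief usage sketch for the module-level helpers defined above (illustrative only, not part of the vendored file; the !color tag and its constructor are invented for the example):

import yaml

data = yaml.safe_load("name: taskcluster\nretries: 3\n")
assert data == {'name': 'taskcluster', 'retries': 3}
text = yaml.safe_dump(data, default_flow_style=False)

def construct_color(loader, node):
    # Turn a "!color 10,20,30" scalar into a tuple of ints.
    return tuple(int(part) for part in loader.construct_scalar(node).split(','))

yaml.add_constructor(u'!color', construct_color)
print(yaml.load("background: !color 10,20,30\n"))   # {'background': (10, 20, 30)}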
|
@ -0,0 +1,139 @@
|
|||
|
||||
__all__ = ['Composer', 'ComposerError']
|
||||
|
||||
from error import MarkedYAMLError
|
||||
from events import *
|
||||
from nodes import *
|
||||
|
||||
class ComposerError(MarkedYAMLError):
|
||||
pass
|
||||
|
||||
class Composer(object):
|
||||
|
||||
def __init__(self):
|
||||
self.anchors = {}
|
||||
|
||||
def check_node(self):
|
||||
# Drop the STREAM-START event.
|
||||
if self.check_event(StreamStartEvent):
|
||||
self.get_event()
|
||||
|
||||
# Are there more documents available?
|
||||
return not self.check_event(StreamEndEvent)
|
||||
|
||||
def get_node(self):
|
||||
# Get the root node of the next document.
|
||||
if not self.check_event(StreamEndEvent):
|
||||
return self.compose_document()
|
||||
|
||||
def get_single_node(self):
|
||||
# Drop the STREAM-START event.
|
||||
self.get_event()
|
||||
|
||||
# Compose a document if the stream is not empty.
|
||||
document = None
|
||||
if not self.check_event(StreamEndEvent):
|
||||
document = self.compose_document()
|
||||
|
||||
# Ensure that the stream contains no more documents.
|
||||
if not self.check_event(StreamEndEvent):
|
||||
event = self.get_event()
|
||||
raise ComposerError("expected a single document in the stream",
|
||||
document.start_mark, "but found another document",
|
||||
event.start_mark)
|
||||
|
||||
# Drop the STREAM-END event.
|
||||
self.get_event()
|
||||
|
||||
return document
|
||||
|
||||
def compose_document(self):
|
||||
# Drop the DOCUMENT-START event.
|
||||
self.get_event()
|
||||
|
||||
# Compose the root node.
|
||||
node = self.compose_node(None, None)
|
||||
|
||||
# Drop the DOCUMENT-END event.
|
||||
self.get_event()
|
||||
|
||||
self.anchors = {}
|
||||
return node
|
||||
|
||||
def compose_node(self, parent, index):
|
||||
if self.check_event(AliasEvent):
|
||||
event = self.get_event()
|
||||
anchor = event.anchor
|
||||
if anchor not in self.anchors:
|
||||
raise ComposerError(None, None, "found undefined alias %r"
|
||||
% anchor.encode('utf-8'), event.start_mark)
|
||||
return self.anchors[anchor]
|
||||
event = self.peek_event()
|
||||
anchor = event.anchor
|
||||
if anchor is not None:
|
||||
if anchor in self.anchors:
|
||||
raise ComposerError("found duplicate anchor %r; first occurence"
|
||||
% anchor.encode('utf-8'), self.anchors[anchor].start_mark,
|
||||
"second occurence", event.start_mark)
|
||||
self.descend_resolver(parent, index)
|
||||
if self.check_event(ScalarEvent):
|
||||
node = self.compose_scalar_node(anchor)
|
||||
elif self.check_event(SequenceStartEvent):
|
||||
node = self.compose_sequence_node(anchor)
|
||||
elif self.check_event(MappingStartEvent):
|
||||
node = self.compose_mapping_node(anchor)
|
||||
self.ascend_resolver()
|
||||
return node
|
||||
|
||||
def compose_scalar_node(self, anchor):
|
||||
event = self.get_event()
|
||||
tag = event.tag
|
||||
if tag is None or tag == u'!':
|
||||
tag = self.resolve(ScalarNode, event.value, event.implicit)
|
||||
node = ScalarNode(tag, event.value,
|
||||
event.start_mark, event.end_mark, style=event.style)
|
||||
if anchor is not None:
|
||||
self.anchors[anchor] = node
|
||||
return node
|
||||
|
||||
def compose_sequence_node(self, anchor):
|
||||
start_event = self.get_event()
|
||||
tag = start_event.tag
|
||||
if tag is None or tag == u'!':
|
||||
tag = self.resolve(SequenceNode, None, start_event.implicit)
|
||||
node = SequenceNode(tag, [],
|
||||
start_event.start_mark, None,
|
||||
flow_style=start_event.flow_style)
|
||||
if anchor is not None:
|
||||
self.anchors[anchor] = node
|
||||
index = 0
|
||||
while not self.check_event(SequenceEndEvent):
|
||||
node.value.append(self.compose_node(node, index))
|
||||
index += 1
|
||||
end_event = self.get_event()
|
||||
node.end_mark = end_event.end_mark
|
||||
return node
|
||||
|
||||
def compose_mapping_node(self, anchor):
|
||||
start_event = self.get_event()
|
||||
tag = start_event.tag
|
||||
if tag is None or tag == u'!':
|
||||
tag = self.resolve(MappingNode, None, start_event.implicit)
|
||||
node = MappingNode(tag, [],
|
||||
start_event.start_mark, None,
|
||||
flow_style=start_event.flow_style)
|
||||
if anchor is not None:
|
||||
self.anchors[anchor] = node
|
||||
while not self.check_event(MappingEndEvent):
|
||||
#key_event = self.peek_event()
|
||||
item_key = self.compose_node(node, None)
|
||||
#if item_key in node.value:
|
||||
# raise ComposerError("while composing a mapping", start_event.start_mark,
|
||||
# "found duplicate key", key_event.start_mark)
|
||||
item_value = self.compose_node(node, item_key)
|
||||
#node.value[item_key] = item_value
|
||||
node.value.append((item_key, item_value))
|
||||
end_event = self.get_event()
|
||||
node.end_mark = end_event.end_mark
|
||||
return node
|
||||
|
|
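A small sketch (not part of the vendored file) of what the composer above does with anchors and aliases: the alias *base resolves to the very node object produced for the &base anchor, and construction then shares the resulting Python object:

import yaml

document = """
defaults: &base {retries: 3}
job: *base
"""
root = yaml.compose(document)
(_, defaults_node), (_, job_node) = root.value
assert defaults_node is job_node              # one shared MappingNode

data = yaml.safe_load(document)
assert data['job'] is data['defaults']        # one shared dict after construction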
@ -0,0 +1,675 @@
|
|||
|
||||
__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
|
||||
'ConstructorError']
|
||||
|
||||
from error import *
|
||||
from nodes import *
|
||||
|
||||
import datetime
|
||||
|
||||
import binascii, re, sys, types
|
||||
|
||||
class ConstructorError(MarkedYAMLError):
|
||||
pass
|
||||
|
||||
class BaseConstructor(object):
|
||||
|
||||
yaml_constructors = {}
|
||||
yaml_multi_constructors = {}
|
||||
|
||||
def __init__(self):
|
||||
self.constructed_objects = {}
|
||||
self.recursive_objects = {}
|
||||
self.state_generators = []
|
||||
self.deep_construct = False
|
||||
|
||||
def check_data(self):
|
||||
# Are there more documents available?
|
||||
return self.check_node()
|
||||
|
||||
def get_data(self):
|
||||
# Construct and return the next document.
|
||||
if self.check_node():
|
||||
return self.construct_document(self.get_node())
|
||||
|
||||
def get_single_data(self):
|
||||
# Ensure that the stream contains a single document and construct it.
|
||||
node = self.get_single_node()
|
||||
if node is not None:
|
||||
return self.construct_document(node)
|
||||
return None
|
||||
|
||||
def construct_document(self, node):
|
||||
data = self.construct_object(node)
|
||||
while self.state_generators:
|
||||
state_generators = self.state_generators
|
||||
self.state_generators = []
|
||||
for generator in state_generators:
|
||||
for dummy in generator:
|
||||
pass
|
||||
self.constructed_objects = {}
|
||||
self.recursive_objects = {}
|
||||
self.deep_construct = False
|
||||
return data
|
||||
|
||||
def construct_object(self, node, deep=False):
|
||||
if node in self.constructed_objects:
|
||||
return self.constructed_objects[node]
|
||||
if deep:
|
||||
old_deep = self.deep_construct
|
||||
self.deep_construct = True
|
||||
if node in self.recursive_objects:
|
||||
raise ConstructorError(None, None,
|
||||
"found unconstructable recursive node", node.start_mark)
|
||||
self.recursive_objects[node] = None
|
||||
constructor = None
|
||||
tag_suffix = None
|
||||
if node.tag in self.yaml_constructors:
|
||||
constructor = self.yaml_constructors[node.tag]
|
||||
else:
|
||||
for tag_prefix in self.yaml_multi_constructors:
|
||||
if node.tag.startswith(tag_prefix):
|
||||
tag_suffix = node.tag[len(tag_prefix):]
|
||||
constructor = self.yaml_multi_constructors[tag_prefix]
|
||||
break
|
||||
else:
|
||||
if None in self.yaml_multi_constructors:
|
||||
tag_suffix = node.tag
|
||||
constructor = self.yaml_multi_constructors[None]
|
||||
elif None in self.yaml_constructors:
|
||||
constructor = self.yaml_constructors[None]
|
||||
elif isinstance(node, ScalarNode):
|
||||
constructor = self.__class__.construct_scalar
|
||||
elif isinstance(node, SequenceNode):
|
||||
constructor = self.__class__.construct_sequence
|
||||
elif isinstance(node, MappingNode):
|
||||
constructor = self.__class__.construct_mapping
|
||||
if tag_suffix is None:
|
||||
data = constructor(self, node)
|
||||
else:
|
||||
data = constructor(self, tag_suffix, node)
|
||||
if isinstance(data, types.GeneratorType):
|
||||
generator = data
|
||||
data = generator.next()
|
||||
if self.deep_construct:
|
||||
for dummy in generator:
|
||||
pass
|
||||
else:
|
||||
self.state_generators.append(generator)
|
||||
self.constructed_objects[node] = data
|
||||
del self.recursive_objects[node]
|
||||
if deep:
|
||||
self.deep_construct = old_deep
|
||||
return data
|
||||
|
||||
def construct_scalar(self, node):
|
||||
if not isinstance(node, ScalarNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a scalar node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
return node.value
|
||||
|
||||
def construct_sequence(self, node, deep=False):
|
||||
if not isinstance(node, SequenceNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a sequence node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
return [self.construct_object(child, deep=deep)
|
||||
for child in node.value]
|
||||
|
||||
def construct_mapping(self, node, deep=False):
|
||||
if not isinstance(node, MappingNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a mapping node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
mapping = {}
|
||||
for key_node, value_node in node.value:
|
||||
key = self.construct_object(key_node, deep=deep)
|
||||
try:
|
||||
hash(key)
|
||||
except TypeError, exc:
|
||||
raise ConstructorError("while constructing a mapping", node.start_mark,
|
||||
"found unacceptable key (%s)" % exc, key_node.start_mark)
|
||||
value = self.construct_object(value_node, deep=deep)
|
||||
mapping[key] = value
|
||||
return mapping
|
||||
|
||||
def construct_pairs(self, node, deep=False):
|
||||
if not isinstance(node, MappingNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a mapping node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
pairs = []
|
||||
for key_node, value_node in node.value:
|
||||
key = self.construct_object(key_node, deep=deep)
|
||||
value = self.construct_object(value_node, deep=deep)
|
||||
pairs.append((key, value))
|
||||
return pairs
|
||||
|
||||
def add_constructor(cls, tag, constructor):
|
||||
if not 'yaml_constructors' in cls.__dict__:
|
||||
cls.yaml_constructors = cls.yaml_constructors.copy()
|
||||
cls.yaml_constructors[tag] = constructor
|
||||
add_constructor = classmethod(add_constructor)
|
||||
|
||||
def add_multi_constructor(cls, tag_prefix, multi_constructor):
|
||||
if not 'yaml_multi_constructors' in cls.__dict__:
|
||||
cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
|
||||
cls.yaml_multi_constructors[tag_prefix] = multi_constructor
|
||||
add_multi_constructor = classmethod(add_multi_constructor)
|
||||
|
||||
class SafeConstructor(BaseConstructor):
|
||||
|
||||
def construct_scalar(self, node):
|
||||
if isinstance(node, MappingNode):
|
||||
for key_node, value_node in node.value:
|
||||
if key_node.tag == u'tag:yaml.org,2002:value':
|
||||
return self.construct_scalar(value_node)
|
||||
return BaseConstructor.construct_scalar(self, node)
|
||||
|
||||
def flatten_mapping(self, node):
|
||||
merge = []
|
||||
index = 0
|
||||
while index < len(node.value):
|
||||
key_node, value_node = node.value[index]
|
||||
if key_node.tag == u'tag:yaml.org,2002:merge':
|
||||
del node.value[index]
|
||||
if isinstance(value_node, MappingNode):
|
||||
self.flatten_mapping(value_node)
|
||||
merge.extend(value_node.value)
|
||||
elif isinstance(value_node, SequenceNode):
|
||||
submerge = []
|
||||
for subnode in value_node.value:
|
||||
if not isinstance(subnode, MappingNode):
|
||||
raise ConstructorError("while constructing a mapping",
|
||||
node.start_mark,
|
||||
"expected a mapping for merging, but found %s"
|
||||
% subnode.id, subnode.start_mark)
|
||||
self.flatten_mapping(subnode)
|
||||
submerge.append(subnode.value)
|
||||
submerge.reverse()
|
||||
for value in submerge:
|
||||
merge.extend(value)
|
||||
else:
|
||||
raise ConstructorError("while constructing a mapping", node.start_mark,
|
||||
"expected a mapping or list of mappings for merging, but found %s"
|
||||
% value_node.id, value_node.start_mark)
|
||||
elif key_node.tag == u'tag:yaml.org,2002:value':
|
||||
key_node.tag = u'tag:yaml.org,2002:str'
|
||||
index += 1
|
||||
else:
|
||||
index += 1
|
||||
if merge:
|
||||
node.value = merge + node.value
|
||||
|
||||
def construct_mapping(self, node, deep=False):
|
||||
if isinstance(node, MappingNode):
|
||||
self.flatten_mapping(node)
|
||||
return BaseConstructor.construct_mapping(self, node, deep=deep)
|
||||
|
||||
def construct_yaml_null(self, node):
|
||||
self.construct_scalar(node)
|
||||
return None
|
||||
|
||||
bool_values = {
|
||||
u'yes': True,
|
||||
u'no': False,
|
||||
u'true': True,
|
||||
u'false': False,
|
||||
u'on': True,
|
||||
u'off': False,
|
||||
}
|
||||
|
||||
def construct_yaml_bool(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
return self.bool_values[value.lower()]
|
||||
|
||||
def construct_yaml_int(self, node):
|
||||
value = str(self.construct_scalar(node))
|
||||
value = value.replace('_', '')
|
||||
sign = +1
|
||||
if value[0] == '-':
|
||||
sign = -1
|
||||
if value[0] in '+-':
|
||||
value = value[1:]
|
||||
if value == '0':
|
||||
return 0
|
||||
elif value.startswith('0b'):
|
||||
return sign*int(value[2:], 2)
|
||||
elif value.startswith('0x'):
|
||||
return sign*int(value[2:], 16)
|
||||
elif value[0] == '0':
|
||||
return sign*int(value, 8)
|
||||
elif ':' in value:
|
||||
digits = [int(part) for part in value.split(':')]
|
||||
digits.reverse()
|
||||
base = 1
|
||||
value = 0
|
||||
for digit in digits:
|
||||
value += digit*base
|
||||
base *= 60
|
||||
return sign*value
|
||||
else:
|
||||
return sign*int(value)
|
||||
|
||||
inf_value = 1e300
|
||||
while inf_value != inf_value*inf_value:
|
||||
inf_value *= inf_value
|
||||
nan_value = -inf_value/inf_value # Trying to make a quiet NaN (like C99).
|
||||
|
||||
def construct_yaml_float(self, node):
|
||||
value = str(self.construct_scalar(node))
|
||||
value = value.replace('_', '').lower()
|
||||
sign = +1
|
||||
if value[0] == '-':
|
||||
sign = -1
|
||||
if value[0] in '+-':
|
||||
value = value[1:]
|
||||
if value == '.inf':
|
||||
return sign*self.inf_value
|
||||
elif value == '.nan':
|
||||
return self.nan_value
|
||||
elif ':' in value:
|
||||
digits = [float(part) for part in value.split(':')]
|
||||
digits.reverse()
|
||||
base = 1
|
||||
value = 0.0
|
||||
for digit in digits:
|
||||
value += digit*base
|
||||
base *= 60
|
||||
return sign*value
|
||||
else:
|
||||
return sign*float(value)
|
||||
|
||||
def construct_yaml_binary(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
try:
|
||||
return str(value).decode('base64')
|
||||
except (binascii.Error, UnicodeEncodeError), exc:
|
||||
raise ConstructorError(None, None,
|
||||
"failed to decode base64 data: %s" % exc, node.start_mark)
|
||||
|
||||
timestamp_regexp = re.compile(
|
||||
ur'''^(?P<year>[0-9][0-9][0-9][0-9])
|
||||
-(?P<month>[0-9][0-9]?)
|
||||
-(?P<day>[0-9][0-9]?)
|
||||
(?:(?:[Tt]|[ \t]+)
|
||||
(?P<hour>[0-9][0-9]?)
|
||||
:(?P<minute>[0-9][0-9])
|
||||
:(?P<second>[0-9][0-9])
|
||||
(?:\.(?P<fraction>[0-9]*))?
|
||||
(?:[ \t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
|
||||
(?::(?P<tz_minute>[0-9][0-9]))?))?)?$''', re.X)
|
||||
|
||||
def construct_yaml_timestamp(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
match = self.timestamp_regexp.match(node.value)
|
||||
values = match.groupdict()
|
||||
year = int(values['year'])
|
||||
month = int(values['month'])
|
||||
day = int(values['day'])
|
||||
if not values['hour']:
|
||||
return datetime.date(year, month, day)
|
||||
hour = int(values['hour'])
|
||||
minute = int(values['minute'])
|
||||
second = int(values['second'])
|
||||
fraction = 0
|
||||
if values['fraction']:
|
||||
fraction = values['fraction'][:6]
|
||||
while len(fraction) < 6:
|
||||
fraction += '0'
|
||||
fraction = int(fraction)
|
||||
delta = None
|
||||
if values['tz_sign']:
|
||||
tz_hour = int(values['tz_hour'])
|
||||
tz_minute = int(values['tz_minute'] or 0)
|
||||
delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
|
||||
if values['tz_sign'] == '-':
|
||||
delta = -delta
|
||||
data = datetime.datetime(year, month, day, hour, minute, second, fraction)
|
||||
if delta:
|
||||
data -= delta
|
||||
return data
|
||||
|
||||
def construct_yaml_omap(self, node):
|
||||
# Note: we do not check for duplicate keys, because it's too
|
||||
# CPU-expensive.
|
||||
omap = []
|
||||
yield omap
|
||||
if not isinstance(node, SequenceNode):
|
||||
raise ConstructorError("while constructing an ordered map", node.start_mark,
|
||||
"expected a sequence, but found %s" % node.id, node.start_mark)
|
||||
for subnode in node.value:
|
||||
if not isinstance(subnode, MappingNode):
|
||||
raise ConstructorError("while constructing an ordered map", node.start_mark,
|
||||
"expected a mapping of length 1, but found %s" % subnode.id,
|
||||
subnode.start_mark)
|
||||
if len(subnode.value) != 1:
|
||||
raise ConstructorError("while constructing an ordered map", node.start_mark,
|
||||
"expected a single mapping item, but found %d items" % len(subnode.value),
|
||||
subnode.start_mark)
|
||||
key_node, value_node = subnode.value[0]
|
||||
key = self.construct_object(key_node)
|
||||
value = self.construct_object(value_node)
|
||||
omap.append((key, value))
|
||||
|
||||
def construct_yaml_pairs(self, node):
|
||||
# Note: the same code as `construct_yaml_omap`.
|
||||
pairs = []
|
||||
yield pairs
|
||||
if not isinstance(node, SequenceNode):
|
||||
raise ConstructorError("while constructing pairs", node.start_mark,
|
||||
"expected a sequence, but found %s" % node.id, node.start_mark)
|
||||
for subnode in node.value:
|
||||
if not isinstance(subnode, MappingNode):
|
||||
raise ConstructorError("while constructing pairs", node.start_mark,
|
||||
"expected a mapping of length 1, but found %s" % subnode.id,
|
||||
subnode.start_mark)
|
||||
if len(subnode.value) != 1:
|
||||
raise ConstructorError("while constructing pairs", node.start_mark,
|
||||
"expected a single mapping item, but found %d items" % len(subnode.value),
|
||||
subnode.start_mark)
|
||||
key_node, value_node = subnode.value[0]
|
||||
key = self.construct_object(key_node)
|
||||
value = self.construct_object(value_node)
|
||||
pairs.append((key, value))
|
||||
|
||||
def construct_yaml_set(self, node):
|
||||
data = set()
|
||||
yield data
|
||||
value = self.construct_mapping(node)
|
||||
data.update(value)
|
||||
|
||||
def construct_yaml_str(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
try:
|
||||
return value.encode('ascii')
|
||||
except UnicodeEncodeError:
|
||||
return value
|
||||
|
||||
def construct_yaml_seq(self, node):
|
||||
data = []
|
||||
yield data
|
||||
data.extend(self.construct_sequence(node))
|
||||
|
||||
def construct_yaml_map(self, node):
|
||||
data = {}
|
||||
yield data
|
||||
value = self.construct_mapping(node)
|
||||
data.update(value)
|
||||
|
||||
def construct_yaml_object(self, node, cls):
|
||||
data = cls.__new__(cls)
|
||||
yield data
|
||||
if hasattr(data, '__setstate__'):
|
||||
state = self.construct_mapping(node, deep=True)
|
||||
data.__setstate__(state)
|
||||
else:
|
||||
state = self.construct_mapping(node)
|
||||
data.__dict__.update(state)
|
||||
|
||||
def construct_undefined(self, node):
|
||||
raise ConstructorError(None, None,
|
||||
"could not determine a constructor for the tag %r" % node.tag.encode('utf-8'),
|
||||
node.start_mark)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:null',
|
||||
SafeConstructor.construct_yaml_null)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:bool',
|
||||
SafeConstructor.construct_yaml_bool)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:int',
|
||||
SafeConstructor.construct_yaml_int)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:float',
|
||||
SafeConstructor.construct_yaml_float)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:binary',
|
||||
SafeConstructor.construct_yaml_binary)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:timestamp',
|
||||
SafeConstructor.construct_yaml_timestamp)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:omap',
|
||||
SafeConstructor.construct_yaml_omap)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:pairs',
|
||||
SafeConstructor.construct_yaml_pairs)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:set',
|
||||
SafeConstructor.construct_yaml_set)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:str',
|
||||
SafeConstructor.construct_yaml_str)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:seq',
|
||||
SafeConstructor.construct_yaml_seq)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
u'tag:yaml.org,2002:map',
|
||||
SafeConstructor.construct_yaml_map)
|
||||
|
||||
SafeConstructor.add_constructor(None,
|
||||
SafeConstructor.construct_undefined)
|
||||
|
||||
class Constructor(SafeConstructor):
|
||||
|
||||
def construct_python_str(self, node):
|
||||
return self.construct_scalar(node).encode('utf-8')
|
||||
|
||||
def construct_python_unicode(self, node):
|
||||
return self.construct_scalar(node)
|
||||
|
||||
def construct_python_long(self, node):
|
||||
return long(self.construct_yaml_int(node))
|
||||
|
||||
def construct_python_complex(self, node):
|
||||
return complex(self.construct_scalar(node))
|
||||
|
||||
def construct_python_tuple(self, node):
|
||||
return tuple(self.construct_sequence(node))
|
||||
|
||||
def find_python_module(self, name, mark):
|
||||
if not name:
|
||||
raise ConstructorError("while constructing a Python module", mark,
|
||||
"expected non-empty name appended to the tag", mark)
|
||||
try:
|
||||
__import__(name)
|
||||
except ImportError, exc:
|
||||
raise ConstructorError("while constructing a Python module", mark,
|
||||
"cannot find module %r (%s)" % (name.encode('utf-8'), exc), mark)
|
||||
return sys.modules[name]
|
||||
|
||||
def find_python_name(self, name, mark):
|
||||
if not name:
|
||||
raise ConstructorError("while constructing a Python object", mark,
|
||||
"expected non-empty name appended to the tag", mark)
|
||||
if u'.' in name:
|
||||
module_name, object_name = name.rsplit('.', 1)
|
||||
else:
|
||||
module_name = '__builtin__'
|
||||
object_name = name
|
||||
try:
|
||||
__import__(module_name)
|
||||
except ImportError, exc:
|
||||
raise ConstructorError("while constructing a Python object", mark,
|
||||
"cannot find module %r (%s)" % (module_name.encode('utf-8'), exc), mark)
|
||||
module = sys.modules[module_name]
|
||||
if not hasattr(module, object_name):
|
||||
raise ConstructorError("while constructing a Python object", mark,
|
||||
"cannot find %r in the module %r" % (object_name.encode('utf-8'),
|
||||
module.__name__), mark)
|
||||
return getattr(module, object_name)
|
||||
|
||||
def construct_python_name(self, suffix, node):
|
||||
value = self.construct_scalar(node)
|
||||
if value:
|
||||
raise ConstructorError("while constructing a Python name", node.start_mark,
|
||||
"expected the empty value, but found %r" % value.encode('utf-8'),
|
||||
node.start_mark)
|
||||
return self.find_python_name(suffix, node.start_mark)
|
||||
|
||||
def construct_python_module(self, suffix, node):
|
||||
value = self.construct_scalar(node)
|
||||
if value:
|
||||
raise ConstructorError("while constructing a Python module", node.start_mark,
|
||||
"expected the empty value, but found %r" % value.encode('utf-8'),
|
||||
node.start_mark)
|
||||
return self.find_python_module(suffix, node.start_mark)
|
||||
|
||||
class classobj: pass
|
||||
|
||||
def make_python_instance(self, suffix, node,
|
||||
args=None, kwds=None, newobj=False):
|
||||
if not args:
|
||||
args = []
|
||||
if not kwds:
|
||||
kwds = {}
|
||||
cls = self.find_python_name(suffix, node.start_mark)
|
||||
if newobj and isinstance(cls, type(self.classobj)) \
|
||||
and not args and not kwds:
|
||||
instance = self.classobj()
|
||||
instance.__class__ = cls
|
||||
return instance
|
||||
elif newobj and isinstance(cls, type):
|
||||
return cls.__new__(cls, *args, **kwds)
|
||||
else:
|
||||
return cls(*args, **kwds)
|
||||
|
||||
def set_python_instance_state(self, instance, state):
|
||||
if hasattr(instance, '__setstate__'):
|
||||
instance.__setstate__(state)
|
||||
else:
|
||||
slotstate = {}
|
||||
if isinstance(state, tuple) and len(state) == 2:
|
||||
state, slotstate = state
|
||||
if hasattr(instance, '__dict__'):
|
||||
instance.__dict__.update(state)
|
||||
elif state:
|
||||
slotstate.update(state)
|
||||
for key, value in slotstate.items():
|
||||
setattr(instance, key, value)
|
||||
|
||||
def construct_python_object(self, suffix, node):
|
||||
# Format:
|
||||
# !!python/object:module.name { ... state ... }
|
||||
instance = self.make_python_instance(suffix, node, newobj=True)
|
||||
yield instance
|
||||
deep = hasattr(instance, '__setstate__')
|
||||
state = self.construct_mapping(node, deep=deep)
|
||||
self.set_python_instance_state(instance, state)
|
||||
|
||||
def construct_python_object_apply(self, suffix, node, newobj=False):
|
||||
# Format:
|
||||
# !!python/object/apply # (or !!python/object/new)
|
||||
# args: [ ... arguments ... ]
|
||||
# kwds: { ... keywords ... }
|
||||
# state: ... state ...
|
||||
# listitems: [ ... listitems ... ]
|
||||
# dictitems: { ... dictitems ... }
|
||||
# or short format:
|
||||
# !!python/object/apply [ ... arguments ... ]
|
||||
# The difference between !!python/object/apply and !!python/object/new
|
||||
# is how an object is created, check make_python_instance for details.
|
||||
if isinstance(node, SequenceNode):
|
||||
args = self.construct_sequence(node, deep=True)
|
||||
kwds = {}
|
||||
state = {}
|
||||
listitems = []
|
||||
dictitems = {}
|
||||
else:
|
||||
value = self.construct_mapping(node, deep=True)
|
||||
args = value.get('args', [])
|
||||
kwds = value.get('kwds', {})
|
||||
state = value.get('state', {})
|
||||
listitems = value.get('listitems', [])
|
||||
dictitems = value.get('dictitems', {})
|
||||
instance = self.make_python_instance(suffix, node, args, kwds, newobj)
|
||||
if state:
|
||||
self.set_python_instance_state(instance, state)
|
||||
if listitems:
|
||||
instance.extend(listitems)
|
||||
if dictitems:
|
||||
for key in dictitems:
|
||||
instance[key] = dictitems[key]
|
||||
return instance
|
||||
|
||||
def construct_python_object_new(self, suffix, node):
|
||||
return self.construct_python_object_apply(suffix, node, newobj=True)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/none',
|
||||
Constructor.construct_yaml_null)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/bool',
|
||||
Constructor.construct_yaml_bool)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/str',
|
||||
Constructor.construct_python_str)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/unicode',
|
||||
Constructor.construct_python_unicode)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/int',
|
||||
Constructor.construct_yaml_int)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/long',
|
||||
Constructor.construct_python_long)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/float',
|
||||
Constructor.construct_yaml_float)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/complex',
|
||||
Constructor.construct_python_complex)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/list',
|
||||
Constructor.construct_yaml_seq)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/tuple',
|
||||
Constructor.construct_python_tuple)
|
||||
|
||||
Constructor.add_constructor(
|
||||
u'tag:yaml.org,2002:python/dict',
|
||||
Constructor.construct_yaml_map)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
u'tag:yaml.org,2002:python/name:',
|
||||
Constructor.construct_python_name)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
u'tag:yaml.org,2002:python/module:',
|
||||
Constructor.construct_python_module)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
u'tag:yaml.org,2002:python/object:',
|
||||
Constructor.construct_python_object)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
u'tag:yaml.org,2002:python/object/apply:',
|
||||
Constructor.construct_python_object_apply)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
u'tag:yaml.org,2002:python/object/new:',
|
||||
Constructor.construct_python_object_new)
|
||||
|
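To illustrate the split between SafeConstructor and Constructor above (a sketch, not part of the vendored file): python/* tags are only honoured by the full loader, while safe_load routes unknown tags to construct_undefined and rejects them:

import yaml

doc = "!!python/object/apply:collections.OrderedDict [[[a, 1], [b, 2]]]"

loaded = yaml.load(doc)             # full Constructor: builds an OrderedDict
print(type(loaded).__name__)        # OrderedDict

try:
    yaml.safe_load(doc)
except yaml.constructor.ConstructorError as exc:
    print("rejected: " + exc.problem)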
|
@ -0,0 +1,85 @@
|
|||
|
||||
__all__ = ['CBaseLoader', 'CSafeLoader', 'CLoader',
|
||||
'CBaseDumper', 'CSafeDumper', 'CDumper']
|
||||
|
||||
from _yaml import CParser, CEmitter
|
||||
|
||||
from constructor import *
|
||||
|
||||
from serializer import *
|
||||
from representer import *
|
||||
|
||||
from resolver import *
|
||||
|
||||
class CBaseLoader(CParser, BaseConstructor, BaseResolver):
|
||||
|
||||
def __init__(self, stream):
|
||||
CParser.__init__(self, stream)
|
||||
BaseConstructor.__init__(self)
|
||||
BaseResolver.__init__(self)
|
||||
|
||||
class CSafeLoader(CParser, SafeConstructor, Resolver):
|
||||
|
||||
def __init__(self, stream):
|
||||
CParser.__init__(self, stream)
|
||||
SafeConstructor.__init__(self)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CLoader(CParser, Constructor, Resolver):
|
||||
|
||||
def __init__(self, stream):
|
||||
CParser.__init__(self, stream)
|
||||
Constructor.__init__(self)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
CEmitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width, encoding=encoding,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
Representer.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CSafeDumper(CEmitter, SafeRepresenter, Resolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
CEmitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width, encoding=encoding,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
SafeRepresenter.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CDumper(CEmitter, Serializer, Representer, Resolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
CEmitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width, encoding=encoding,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
Representer.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
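Callers typically guard their use of the libyaml-backed classes above, falling back to the pure-Python loader when the _yaml extension module was not built (a sketch, not part of the vendored file):

import yaml

try:
    from yaml import CSafeLoader as FastLoader, CSafeDumper as FastDumper
except ImportError:
    from yaml import SafeLoader as FastLoader, SafeDumper as FastDumper

data = yaml.load("platform: linux64\n", Loader=FastLoader)
print(yaml.dump(data, Dumper=FastDumper))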
|
@ -0,0 +1,62 @@
|
|||
|
||||
__all__ = ['BaseDumper', 'SafeDumper', 'Dumper']
|
||||
|
||||
from emitter import *
|
||||
from serializer import *
|
||||
from representer import *
|
||||
from resolver import *
|
||||
|
||||
class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
Emitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break)
|
||||
Serializer.__init__(self, encoding=encoding,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
Representer.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
Emitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break)
|
||||
Serializer.__init__(self, encoding=encoding,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
SafeRepresenter.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class Dumper(Emitter, Serializer, Representer, Resolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
Emitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break)
|
||||
Serializer.__init__(self, encoding=encoding,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
Representer.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
Diff not shown because of the file's large size.
|
@@ -0,0 +1,75 @@
__all__ = ['Mark', 'YAMLError', 'MarkedYAMLError']

class Mark(object):

    def __init__(self, name, index, line, column, buffer, pointer):
        self.name = name
        self.index = index
        self.line = line
        self.column = column
        self.buffer = buffer
        self.pointer = pointer

    def get_snippet(self, indent=4, max_length=75):
        if self.buffer is None:
            return None
        head = ''
        start = self.pointer
        while start > 0 and self.buffer[start-1] not in u'\0\r\n\x85\u2028\u2029':
            start -= 1
            if self.pointer-start > max_length/2-1:
                head = ' ... '
                start += 5
                break
        tail = ''
        end = self.pointer
        while end < len(self.buffer) and self.buffer[end] not in u'\0\r\n\x85\u2028\u2029':
            end += 1
            if end-self.pointer > max_length/2-1:
                tail = ' ... '
                end -= 5
                break
        snippet = self.buffer[start:end].encode('utf-8')
        return ' '*indent + head + snippet + tail + '\n' \
                + ' '*(indent+self.pointer-start+len(head)) + '^'

    def __str__(self):
        snippet = self.get_snippet()
        where = " in \"%s\", line %d, column %d" \
                % (self.name, self.line+1, self.column+1)
        if snippet is not None:
            where += ":\n"+snippet
        return where

class YAMLError(Exception):
    pass

class MarkedYAMLError(YAMLError):

    def __init__(self, context=None, context_mark=None,
            problem=None, problem_mark=None, note=None):
        self.context = context
        self.context_mark = context_mark
        self.problem = problem
        self.problem_mark = problem_mark
        self.note = note

    def __str__(self):
        lines = []
        if self.context is not None:
            lines.append(self.context)
        if self.context_mark is not None \
            and (self.problem is None or self.problem_mark is None
                    or self.context_mark.name != self.problem_mark.name
                    or self.context_mark.line != self.problem_mark.line
                    or self.context_mark.column != self.problem_mark.column):
            lines.append(str(self.context_mark))
        if self.problem is not None:
            lines.append(self.problem)
        if self.problem_mark is not None:
            lines.append(str(self.problem_mark))
        if self.note is not None:
            lines.append(self.note)
        return '\n'.join(lines)

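As an illustration (not part of the patch), these Mark objects are what make the library's error messages point at the offending line and column; a small sketch:

    import yaml

    try:
        yaml.safe_load("key: [1, 2\n")
    except yaml.MarkedYAMLError as exc:
        mark = exc.problem_mark
        if mark is not None:
            # str(exc) also embeds the get_snippet() output shown above.
            print("error at line %d, column %d" % (mark.line + 1, mark.column + 1))
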
@@ -0,0 +1,86 @@
# Abstract classes.

class Event(object):
    def __init__(self, start_mark=None, end_mark=None):
        self.start_mark = start_mark
        self.end_mark = end_mark
    def __repr__(self):
        attributes = [key for key in ['anchor', 'tag', 'implicit', 'value']
                if hasattr(self, key)]
        arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
                for key in attributes])
        return '%s(%s)' % (self.__class__.__name__, arguments)

class NodeEvent(Event):
    def __init__(self, anchor, start_mark=None, end_mark=None):
        self.anchor = anchor
        self.start_mark = start_mark
        self.end_mark = end_mark

class CollectionStartEvent(NodeEvent):
    def __init__(self, anchor, tag, implicit, start_mark=None, end_mark=None,
            flow_style=None):
        self.anchor = anchor
        self.tag = tag
        self.implicit = implicit
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.flow_style = flow_style

class CollectionEndEvent(Event):
    pass

# Implementations.

class StreamStartEvent(Event):
    def __init__(self, start_mark=None, end_mark=None, encoding=None):
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.encoding = encoding

class StreamEndEvent(Event):
    pass

class DocumentStartEvent(Event):
    def __init__(self, start_mark=None, end_mark=None,
            explicit=None, version=None, tags=None):
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.explicit = explicit
        self.version = version
        self.tags = tags

class DocumentEndEvent(Event):
    def __init__(self, start_mark=None, end_mark=None,
            explicit=None):
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.explicit = explicit

class AliasEvent(NodeEvent):
    pass

class ScalarEvent(NodeEvent):
    def __init__(self, anchor, tag, implicit, value,
            start_mark=None, end_mark=None, style=None):
        self.anchor = anchor
        self.tag = tag
        self.implicit = implicit
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.style = style

class SequenceStartEvent(CollectionStartEvent):
    pass

class SequenceEndEvent(CollectionEndEvent):
    pass

class MappingStartEvent(CollectionStartEvent):
    pass

class MappingEndEvent(CollectionEndEvent):
    pass

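For orientation (an aside, not part of the file): the public yaml.parse() helper yields exactly these Event objects, which is a convenient way to see what the parser further below produces:

    import yaml

    for event in yaml.parse("- a\n- b\n"):
        print(event)
    # StreamStartEvent(), DocumentStartEvent(), SequenceStartEvent(...),
    # ScalarEvent(...), ScalarEvent(...), SequenceEndEvent(), ...
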
@@ -0,0 +1,40 @@
__all__ = ['BaseLoader', 'SafeLoader', 'Loader']

from reader import *
from scanner import *
from parser import *
from composer import *
from constructor import *
from resolver import *

class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, BaseResolver):

    def __init__(self, stream):
        Reader.__init__(self, stream)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        BaseConstructor.__init__(self)
        BaseResolver.__init__(self)

class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):

    def __init__(self, stream):
        Reader.__init__(self, stream)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        SafeConstructor.__init__(self)
        Resolver.__init__(self)

class Loader(Reader, Scanner, Parser, Composer, Constructor, Resolver):

    def __init__(self, stream):
        Reader.__init__(self, stream)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        Constructor.__init__(self)
        Resolver.__init__(self)

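A usage note (not part of the file): these Loader classes back yaml.load(); SafeLoader is the conservative choice for untrusted input, since Loader will also construct arbitrary python/object tags. A minimal sketch with made-up example data:

    import yaml

    data = yaml.load("image: {type: task-image}", Loader=yaml.SafeLoader)
    print(data['image']['type'])   # task-image
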
@@ -0,0 +1,49 @@
class Node(object):
    def __init__(self, tag, value, start_mark, end_mark):
        self.tag = tag
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark
    def __repr__(self):
        value = self.value
        #if isinstance(value, list):
        #    if len(value) == 0:
        #        value = '<empty>'
        #    elif len(value) == 1:
        #        value = '<1 item>'
        #    else:
        #        value = '<%d items>' % len(value)
        #else:
        #    if len(value) > 75:
        #        value = repr(value[:70]+u' ... ')
        #    else:
        #        value = repr(value)
        value = repr(value)
        return '%s(tag=%r, value=%s)' % (self.__class__.__name__, self.tag, value)

class ScalarNode(Node):
    id = 'scalar'
    def __init__(self, tag, value,
            start_mark=None, end_mark=None, style=None):
        self.tag = tag
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.style = style

class CollectionNode(Node):
    def __init__(self, tag, value,
            start_mark=None, end_mark=None, flow_style=None):
        self.tag = tag
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.flow_style = flow_style

class SequenceNode(CollectionNode):
    id = 'sequence'

class MappingNode(CollectionNode):
    id = 'mapping'

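As a quick illustration (not part of the file), yaml.compose() returns this node graph before it is constructed into Python objects:

    import yaml

    root = yaml.compose("a: [1, 2]")
    print(root.tag)    # tag:yaml.org,2002:map
    print(root.id)     # mapping
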
@@ -0,0 +1,589 @@
# The following YAML grammar is LL(1) and is parsed by a recursive descent
|
||||
# parser.
|
||||
#
|
||||
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
|
||||
# implicit_document ::= block_node DOCUMENT-END*
|
||||
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
|
||||
# block_node_or_indentless_sequence ::=
|
||||
# ALIAS
|
||||
# | properties (block_content | indentless_block_sequence)?
|
||||
# | block_content
|
||||
# | indentless_block_sequence
|
||||
# block_node ::= ALIAS
|
||||
# | properties block_content?
|
||||
# | block_content
|
||||
# flow_node ::= ALIAS
|
||||
# | properties flow_content?
|
||||
# | flow_content
|
||||
# properties ::= TAG ANCHOR? | ANCHOR TAG?
|
||||
# block_content ::= block_collection | flow_collection | SCALAR
|
||||
# flow_content ::= flow_collection | SCALAR
|
||||
# block_collection ::= block_sequence | block_mapping
|
||||
# flow_collection ::= flow_sequence | flow_mapping
|
||||
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
|
||||
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
|
||||
# block_mapping ::= BLOCK-MAPPING_START
|
||||
# ((KEY block_node_or_indentless_sequence?)?
|
||||
# (VALUE block_node_or_indentless_sequence?)?)*
|
||||
# BLOCK-END
|
||||
# flow_sequence ::= FLOW-SEQUENCE-START
|
||||
# (flow_sequence_entry FLOW-ENTRY)*
|
||||
# flow_sequence_entry?
|
||||
# FLOW-SEQUENCE-END
|
||||
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
# flow_mapping ::= FLOW-MAPPING-START
|
||||
# (flow_mapping_entry FLOW-ENTRY)*
|
||||
# flow_mapping_entry?
|
||||
# FLOW-MAPPING-END
|
||||
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
#
|
||||
# FIRST sets:
|
||||
#
|
||||
# stream: { STREAM-START }
|
||||
# explicit_document: { DIRECTIVE DOCUMENT-START }
|
||||
# implicit_document: FIRST(block_node)
|
||||
# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
|
||||
# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
|
||||
# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
|
||||
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# block_sequence: { BLOCK-SEQUENCE-START }
|
||||
# block_mapping: { BLOCK-MAPPING-START }
|
||||
# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START BLOCK-ENTRY }
|
||||
# indentless_sequence: { ENTRY }
|
||||
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# flow_sequence: { FLOW-SEQUENCE-START }
|
||||
# flow_mapping: { FLOW-MAPPING-START }
|
||||
# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
|
||||
# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
|
||||
|
||||
__all__ = ['Parser', 'ParserError']
|
||||
|
||||
from error import MarkedYAMLError
|
||||
from tokens import *
|
||||
from events import *
|
||||
from scanner import *
|
||||
|
||||
class ParserError(MarkedYAMLError):
|
||||
pass
|
||||
|
||||
class Parser(object):
|
||||
# Since writing a recursive-descent parser is a straightforward task, we
# do not give many comments here.
|
||||
|
||||
DEFAULT_TAGS = {
|
||||
u'!': u'!',
|
||||
u'!!': u'tag:yaml.org,2002:',
|
||||
}
|
||||
|
||||
def __init__(self):
|
||||
self.current_event = None
|
||||
self.yaml_version = None
|
||||
self.tag_handles = {}
|
||||
self.states = []
|
||||
self.marks = []
|
||||
self.state = self.parse_stream_start
|
||||
|
||||
def dispose(self):
|
||||
# Reset the state attributes (to clear self-references)
|
||||
self.states = []
|
||||
self.state = None
|
||||
|
||||
def check_event(self, *choices):
|
||||
# Check the type of the next event.
|
||||
if self.current_event is None:
|
||||
if self.state:
|
||||
self.current_event = self.state()
|
||||
if self.current_event is not None:
|
||||
if not choices:
|
||||
return True
|
||||
for choice in choices:
|
||||
if isinstance(self.current_event, choice):
|
||||
return True
|
||||
return False
|
||||
|
||||
def peek_event(self):
|
||||
# Get the next event.
|
||||
if self.current_event is None:
|
||||
if self.state:
|
||||
self.current_event = self.state()
|
||||
return self.current_event
|
||||
|
||||
def get_event(self):
|
||||
# Get the next event and proceed further.
|
||||
if self.current_event is None:
|
||||
if self.state:
|
||||
self.current_event = self.state()
|
||||
value = self.current_event
|
||||
self.current_event = None
|
||||
return value
|
||||
|
||||
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
|
||||
# implicit_document ::= block_node DOCUMENT-END*
|
||||
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
|
||||
|
||||
def parse_stream_start(self):
|
||||
|
||||
# Parse the stream start.
|
||||
token = self.get_token()
|
||||
event = StreamStartEvent(token.start_mark, token.end_mark,
|
||||
encoding=token.encoding)
|
||||
|
||||
# Prepare the next state.
|
||||
self.state = self.parse_implicit_document_start
|
||||
|
||||
return event
|
||||
|
||||
def parse_implicit_document_start(self):
|
||||
|
||||
# Parse an implicit document.
|
||||
if not self.check_token(DirectiveToken, DocumentStartToken,
|
||||
StreamEndToken):
|
||||
self.tag_handles = self.DEFAULT_TAGS
|
||||
token = self.peek_token()
|
||||
start_mark = end_mark = token.start_mark
|
||||
event = DocumentStartEvent(start_mark, end_mark,
|
||||
explicit=False)
|
||||
|
||||
# Prepare the next state.
|
||||
self.states.append(self.parse_document_end)
|
||||
self.state = self.parse_block_node
|
||||
|
||||
return event
|
||||
|
||||
else:
|
||||
return self.parse_document_start()
|
||||
|
||||
def parse_document_start(self):
|
||||
|
||||
# Parse any extra document end indicators.
|
||||
while self.check_token(DocumentEndToken):
|
||||
self.get_token()
|
||||
|
||||
# Parse an explicit document.
|
||||
if not self.check_token(StreamEndToken):
|
||||
token = self.peek_token()
|
||||
start_mark = token.start_mark
|
||||
version, tags = self.process_directives()
|
||||
if not self.check_token(DocumentStartToken):
|
||||
raise ParserError(None, None,
|
||||
"expected '<document start>', but found %r"
|
||||
% self.peek_token().id,
|
||||
self.peek_token().start_mark)
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
event = DocumentStartEvent(start_mark, end_mark,
|
||||
explicit=True, version=version, tags=tags)
|
||||
self.states.append(self.parse_document_end)
|
||||
self.state = self.parse_document_content
|
||||
else:
|
||||
# Parse the end of the stream.
|
||||
token = self.get_token()
|
||||
event = StreamEndEvent(token.start_mark, token.end_mark)
|
||||
assert not self.states
|
||||
assert not self.marks
|
||||
self.state = None
|
||||
return event
|
||||
|
||||
def parse_document_end(self):
|
||||
|
||||
# Parse the document end.
|
||||
token = self.peek_token()
|
||||
start_mark = end_mark = token.start_mark
|
||||
explicit = False
|
||||
if self.check_token(DocumentEndToken):
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
explicit = True
|
||||
event = DocumentEndEvent(start_mark, end_mark,
|
||||
explicit=explicit)
|
||||
|
||||
# Prepare the next state.
|
||||
self.state = self.parse_document_start
|
||||
|
||||
return event
|
||||
|
||||
def parse_document_content(self):
|
||||
if self.check_token(DirectiveToken,
|
||||
DocumentStartToken, DocumentEndToken, StreamEndToken):
|
||||
event = self.process_empty_scalar(self.peek_token().start_mark)
|
||||
self.state = self.states.pop()
|
||||
return event
|
||||
else:
|
||||
return self.parse_block_node()
|
||||
|
||||
def process_directives(self):
|
||||
self.yaml_version = None
|
||||
self.tag_handles = {}
|
||||
while self.check_token(DirectiveToken):
|
||||
token = self.get_token()
|
||||
if token.name == u'YAML':
|
||||
if self.yaml_version is not None:
|
||||
raise ParserError(None, None,
|
||||
"found duplicate YAML directive", token.start_mark)
|
||||
major, minor = token.value
|
||||
if major != 1:
|
||||
raise ParserError(None, None,
|
||||
"found incompatible YAML document (version 1.* is required)",
|
||||
token.start_mark)
|
||||
self.yaml_version = token.value
|
||||
elif token.name == u'TAG':
|
||||
handle, prefix = token.value
|
||||
if handle in self.tag_handles:
|
||||
raise ParserError(None, None,
|
||||
"duplicate tag handle %r" % handle.encode('utf-8'),
|
||||
token.start_mark)
|
||||
self.tag_handles[handle] = prefix
|
||||
if self.tag_handles:
|
||||
value = self.yaml_version, self.tag_handles.copy()
|
||||
else:
|
||||
value = self.yaml_version, None
|
||||
for key in self.DEFAULT_TAGS:
|
||||
if key not in self.tag_handles:
|
||||
self.tag_handles[key] = self.DEFAULT_TAGS[key]
|
||||
return value
|
||||
|
||||
# block_node_or_indentless_sequence ::= ALIAS
|
||||
# | properties (block_content | indentless_block_sequence)?
|
||||
# | block_content
|
||||
# | indentless_block_sequence
|
||||
# block_node ::= ALIAS
|
||||
# | properties block_content?
|
||||
# | block_content
|
||||
# flow_node ::= ALIAS
|
||||
# | properties flow_content?
|
||||
# | flow_content
|
||||
# properties ::= TAG ANCHOR? | ANCHOR TAG?
|
||||
# block_content ::= block_collection | flow_collection | SCALAR
|
||||
# flow_content ::= flow_collection | SCALAR
|
||||
# block_collection ::= block_sequence | block_mapping
|
||||
# flow_collection ::= flow_sequence | flow_mapping
|
||||
|
||||
def parse_block_node(self):
|
||||
return self.parse_node(block=True)
|
||||
|
||||
def parse_flow_node(self):
|
||||
return self.parse_node()
|
||||
|
||||
def parse_block_node_or_indentless_sequence(self):
|
||||
return self.parse_node(block=True, indentless_sequence=True)
|
||||
|
||||
def parse_node(self, block=False, indentless_sequence=False):
|
||||
if self.check_token(AliasToken):
|
||||
token = self.get_token()
|
||||
event = AliasEvent(token.value, token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
else:
|
||||
anchor = None
|
||||
tag = None
|
||||
start_mark = end_mark = tag_mark = None
|
||||
if self.check_token(AnchorToken):
|
||||
token = self.get_token()
|
||||
start_mark = token.start_mark
|
||||
end_mark = token.end_mark
|
||||
anchor = token.value
|
||||
if self.check_token(TagToken):
|
||||
token = self.get_token()
|
||||
tag_mark = token.start_mark
|
||||
end_mark = token.end_mark
|
||||
tag = token.value
|
||||
elif self.check_token(TagToken):
|
||||
token = self.get_token()
|
||||
start_mark = tag_mark = token.start_mark
|
||||
end_mark = token.end_mark
|
||||
tag = token.value
|
||||
if self.check_token(AnchorToken):
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
anchor = token.value
|
||||
if tag is not None:
|
||||
handle, suffix = tag
|
||||
if handle is not None:
|
||||
if handle not in self.tag_handles:
|
||||
raise ParserError("while parsing a node", start_mark,
|
||||
"found undefined tag handle %r" % handle.encode('utf-8'),
|
||||
tag_mark)
|
||||
tag = self.tag_handles[handle]+suffix
|
||||
else:
|
||||
tag = suffix
|
||||
#if tag == u'!':
|
||||
# raise ParserError("while parsing a node", start_mark,
|
||||
# "found non-specific tag '!'", tag_mark,
|
||||
# "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag' and share your opinion.")
|
||||
if start_mark is None:
|
||||
start_mark = end_mark = self.peek_token().start_mark
|
||||
event = None
|
||||
implicit = (tag is None or tag == u'!')
|
||||
if indentless_sequence and self.check_token(BlockEntryToken):
|
||||
end_mark = self.peek_token().end_mark
|
||||
event = SequenceStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark)
|
||||
self.state = self.parse_indentless_sequence_entry
|
||||
else:
|
||||
if self.check_token(ScalarToken):
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
if (token.plain and tag is None) or tag == u'!':
|
||||
implicit = (True, False)
|
||||
elif tag is None:
|
||||
implicit = (False, True)
|
||||
else:
|
||||
implicit = (False, False)
|
||||
event = ScalarEvent(anchor, tag, implicit, token.value,
|
||||
start_mark, end_mark, style=token.style)
|
||||
self.state = self.states.pop()
|
||||
elif self.check_token(FlowSequenceStartToken):
|
||||
end_mark = self.peek_token().end_mark
|
||||
event = SequenceStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=True)
|
||||
self.state = self.parse_flow_sequence_first_entry
|
||||
elif self.check_token(FlowMappingStartToken):
|
||||
end_mark = self.peek_token().end_mark
|
||||
event = MappingStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=True)
|
||||
self.state = self.parse_flow_mapping_first_key
|
||||
elif block and self.check_token(BlockSequenceStartToken):
|
||||
end_mark = self.peek_token().start_mark
|
||||
event = SequenceStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=False)
|
||||
self.state = self.parse_block_sequence_first_entry
|
||||
elif block and self.check_token(BlockMappingStartToken):
|
||||
end_mark = self.peek_token().start_mark
|
||||
event = MappingStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=False)
|
||||
self.state = self.parse_block_mapping_first_key
|
||||
elif anchor is not None or tag is not None:
|
||||
# Empty scalars are allowed even if a tag or an anchor is
|
||||
# specified.
|
||||
event = ScalarEvent(anchor, tag, (implicit, False), u'',
|
||||
start_mark, end_mark)
|
||||
self.state = self.states.pop()
|
||||
else:
|
||||
if block:
|
||||
node = 'block'
|
||||
else:
|
||||
node = 'flow'
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a %s node" % node, start_mark,
|
||||
"expected the node content, but found %r" % token.id,
|
||||
token.start_mark)
|
||||
return event
|
||||
|
||||
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
|
||||
|
||||
def parse_block_sequence_first_entry(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_block_sequence_entry()
|
||||
|
||||
def parse_block_sequence_entry(self):
|
||||
if self.check_token(BlockEntryToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(BlockEntryToken, BlockEndToken):
|
||||
self.states.append(self.parse_block_sequence_entry)
|
||||
return self.parse_block_node()
|
||||
else:
|
||||
self.state = self.parse_block_sequence_entry
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
if not self.check_token(BlockEndToken):
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a block collection", self.marks[-1],
|
||||
"expected <block end>, but found %r" % token.id, token.start_mark)
|
||||
token = self.get_token()
|
||||
event = SequenceEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
|
||||
|
||||
def parse_indentless_sequence_entry(self):
|
||||
if self.check_token(BlockEntryToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(BlockEntryToken,
|
||||
KeyToken, ValueToken, BlockEndToken):
|
||||
self.states.append(self.parse_indentless_sequence_entry)
|
||||
return self.parse_block_node()
|
||||
else:
|
||||
self.state = self.parse_indentless_sequence_entry
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
token = self.peek_token()
|
||||
event = SequenceEndEvent(token.start_mark, token.start_mark)
|
||||
self.state = self.states.pop()
|
||||
return event
|
||||
|
||||
# block_mapping ::= BLOCK-MAPPING_START
|
||||
# ((KEY block_node_or_indentless_sequence?)?
|
||||
# (VALUE block_node_or_indentless_sequence?)?)*
|
||||
# BLOCK-END
|
||||
|
||||
def parse_block_mapping_first_key(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_block_mapping_key()
|
||||
|
||||
def parse_block_mapping_key(self):
|
||||
if self.check_token(KeyToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
|
||||
self.states.append(self.parse_block_mapping_value)
|
||||
return self.parse_block_node_or_indentless_sequence()
|
||||
else:
|
||||
self.state = self.parse_block_mapping_value
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
if not self.check_token(BlockEndToken):
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a block mapping", self.marks[-1],
|
||||
"expected <block end>, but found %r" % token.id, token.start_mark)
|
||||
token = self.get_token()
|
||||
event = MappingEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
def parse_block_mapping_value(self):
|
||||
if self.check_token(ValueToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
|
||||
self.states.append(self.parse_block_mapping_key)
|
||||
return self.parse_block_node_or_indentless_sequence()
|
||||
else:
|
||||
self.state = self.parse_block_mapping_key
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
else:
|
||||
self.state = self.parse_block_mapping_key
|
||||
token = self.peek_token()
|
||||
return self.process_empty_scalar(token.start_mark)
|
||||
|
||||
# flow_sequence ::= FLOW-SEQUENCE-START
|
||||
# (flow_sequence_entry FLOW-ENTRY)*
|
||||
# flow_sequence_entry?
|
||||
# FLOW-SEQUENCE-END
|
||||
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
#
|
||||
# Note that while production rules for both flow_sequence_entry and
|
||||
# flow_mapping_entry are equal, their interpretations are different.
|
||||
# For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
|
||||
# generate an inline mapping (set syntax).
|
||||
|
||||
def parse_flow_sequence_first_entry(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_flow_sequence_entry(first=True)
|
||||
|
||||
def parse_flow_sequence_entry(self, first=False):
|
||||
if not self.check_token(FlowSequenceEndToken):
|
||||
if not first:
|
||||
if self.check_token(FlowEntryToken):
|
||||
self.get_token()
|
||||
else:
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a flow sequence", self.marks[-1],
|
||||
"expected ',' or ']', but got %r" % token.id, token.start_mark)
|
||||
|
||||
if self.check_token(KeyToken):
|
||||
token = self.peek_token()
|
||||
event = MappingStartEvent(None, None, True,
|
||||
token.start_mark, token.end_mark,
|
||||
flow_style=True)
|
||||
self.state = self.parse_flow_sequence_entry_mapping_key
|
||||
return event
|
||||
elif not self.check_token(FlowSequenceEndToken):
|
||||
self.states.append(self.parse_flow_sequence_entry)
|
||||
return self.parse_flow_node()
|
||||
token = self.get_token()
|
||||
event = SequenceEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
def parse_flow_sequence_entry_mapping_key(self):
|
||||
token = self.get_token()
|
||||
if not self.check_token(ValueToken,
|
||||
FlowEntryToken, FlowSequenceEndToken):
|
||||
self.states.append(self.parse_flow_sequence_entry_mapping_value)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_sequence_entry_mapping_value
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
|
||||
def parse_flow_sequence_entry_mapping_value(self):
|
||||
if self.check_token(ValueToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(FlowEntryToken, FlowSequenceEndToken):
|
||||
self.states.append(self.parse_flow_sequence_entry_mapping_end)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_sequence_entry_mapping_end
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
else:
|
||||
self.state = self.parse_flow_sequence_entry_mapping_end
|
||||
token = self.peek_token()
|
||||
return self.process_empty_scalar(token.start_mark)
|
||||
|
||||
def parse_flow_sequence_entry_mapping_end(self):
|
||||
self.state = self.parse_flow_sequence_entry
|
||||
token = self.peek_token()
|
||||
return MappingEndEvent(token.start_mark, token.start_mark)
|
||||
|
||||
# flow_mapping ::= FLOW-MAPPING-START
|
||||
# (flow_mapping_entry FLOW-ENTRY)*
|
||||
# flow_mapping_entry?
|
||||
# FLOW-MAPPING-END
|
||||
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
|
||||
def parse_flow_mapping_first_key(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_flow_mapping_key(first=True)
|
||||
|
||||
def parse_flow_mapping_key(self, first=False):
|
||||
if not self.check_token(FlowMappingEndToken):
|
||||
if not first:
|
||||
if self.check_token(FlowEntryToken):
|
||||
self.get_token()
|
||||
else:
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a flow mapping", self.marks[-1],
|
||||
"expected ',' or '}', but got %r" % token.id, token.start_mark)
|
||||
if self.check_token(KeyToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(ValueToken,
|
||||
FlowEntryToken, FlowMappingEndToken):
|
||||
self.states.append(self.parse_flow_mapping_value)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_mapping_value
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
elif not self.check_token(FlowMappingEndToken):
|
||||
self.states.append(self.parse_flow_mapping_empty_value)
|
||||
return self.parse_flow_node()
|
||||
token = self.get_token()
|
||||
event = MappingEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
def parse_flow_mapping_value(self):
|
||||
if self.check_token(ValueToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(FlowEntryToken, FlowMappingEndToken):
|
||||
self.states.append(self.parse_flow_mapping_key)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_mapping_key
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
else:
|
||||
self.state = self.parse_flow_mapping_key
|
||||
token = self.peek_token()
|
||||
return self.process_empty_scalar(token.start_mark)
|
||||
|
||||
def parse_flow_mapping_empty_value(self):
|
||||
self.state = self.parse_flow_mapping_key
|
||||
return self.process_empty_scalar(self.peek_token().start_mark)
|
||||
|
||||
def process_empty_scalar(self, mark):
|
||||
return ScalarEvent(None, None, (True, False), u'', mark, mark)
|
||||
|
|
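An aside (not part of the file): the Parser is normally driven through a Loader, but its event stream can be fed straight back into the Emitter, which makes for a quick round-trip smoke test; a sketch:

    import yaml

    # parse() yields events; emit() consumes them and returns the YAML text.
    print(yaml.emit(yaml.parse("- a\n- {b: c}\n")))
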
@@ -0,0 +1,190 @@
# This module contains abstractions for the input stream. You don't have to
# look further, there is no pretty code here.
|
||||
#
|
||||
# We define two classes here.
|
||||
#
|
||||
# Mark(source, line, column)
|
||||
# It's just a record and its only use is producing nice error messages.
|
||||
# Parser does not use it for any other purposes.
|
||||
#
|
||||
# Reader(source, data)
|
||||
# Reader determines the encoding of `data` and converts it to unicode.
|
||||
# Reader provides the following methods and attributes:
|
||||
# reader.peek(length=1) - return the next `length` characters
|
||||
# reader.forward(length=1) - move the current position to `length` characters.
|
||||
# reader.index - the number of the current character.
|
||||
# reader.line, stream.column - the line and the column of the current character.
|
||||
|
||||
__all__ = ['Reader', 'ReaderError']
|
||||
|
||||
from error import YAMLError, Mark
|
||||
|
||||
import codecs, re
|
||||
|
||||
class ReaderError(YAMLError):
|
||||
|
||||
def __init__(self, name, position, character, encoding, reason):
|
||||
self.name = name
|
||||
self.character = character
|
||||
self.position = position
|
||||
self.encoding = encoding
|
||||
self.reason = reason
|
||||
|
||||
def __str__(self):
|
||||
if isinstance(self.character, str):
|
||||
return "'%s' codec can't decode byte #x%02x: %s\n" \
|
||||
" in \"%s\", position %d" \
|
||||
% (self.encoding, ord(self.character), self.reason,
|
||||
self.name, self.position)
|
||||
else:
|
||||
return "unacceptable character #x%04x: %s\n" \
|
||||
" in \"%s\", position %d" \
|
||||
% (self.character, self.reason,
|
||||
self.name, self.position)
|
||||
|
||||
class Reader(object):
|
||||
# Reader:
|
||||
# - determines the data encoding and converts it to unicode,
|
||||
# - checks if characters are in allowed range,
|
||||
# - adds '\0' to the end.
|
||||
|
||||
# Reader accepts
|
||||
# - a `str` object,
|
||||
# - a `unicode` object,
|
||||
# - a file-like object with its `read` method returning `str`,
|
||||
# - a file-like object with its `read` method returning `unicode`.
|
||||
|
||||
# Yeah, it's ugly and slow.
|
||||
|
||||
def __init__(self, stream):
|
||||
self.name = None
|
||||
self.stream = None
|
||||
self.stream_pointer = 0
|
||||
self.eof = True
|
||||
self.buffer = u''
|
||||
self.pointer = 0
|
||||
self.raw_buffer = None
|
||||
self.raw_decode = None
|
||||
self.encoding = None
|
||||
self.index = 0
|
||||
self.line = 0
|
||||
self.column = 0
|
||||
if isinstance(stream, unicode):
|
||||
self.name = "<unicode string>"
|
||||
self.check_printable(stream)
|
||||
self.buffer = stream+u'\0'
|
||||
elif isinstance(stream, str):
|
||||
self.name = "<string>"
|
||||
self.raw_buffer = stream
|
||||
self.determine_encoding()
|
||||
else:
|
||||
self.stream = stream
|
||||
self.name = getattr(stream, 'name', "<file>")
|
||||
self.eof = False
|
||||
self.raw_buffer = ''
|
||||
self.determine_encoding()
|
||||
|
||||
def peek(self, index=0):
|
||||
try:
|
||||
return self.buffer[self.pointer+index]
|
||||
except IndexError:
|
||||
self.update(index+1)
|
||||
return self.buffer[self.pointer+index]
|
||||
|
||||
def prefix(self, length=1):
|
||||
if self.pointer+length >= len(self.buffer):
|
||||
self.update(length)
|
||||
return self.buffer[self.pointer:self.pointer+length]
|
||||
|
||||
def forward(self, length=1):
|
||||
if self.pointer+length+1 >= len(self.buffer):
|
||||
self.update(length+1)
|
||||
while length:
|
||||
ch = self.buffer[self.pointer]
|
||||
self.pointer += 1
|
||||
self.index += 1
|
||||
if ch in u'\n\x85\u2028\u2029' \
|
||||
or (ch == u'\r' and self.buffer[self.pointer] != u'\n'):
|
||||
self.line += 1
|
||||
self.column = 0
|
||||
elif ch != u'\uFEFF':
|
||||
self.column += 1
|
||||
length -= 1
|
||||
|
||||
def get_mark(self):
|
||||
if self.stream is None:
|
||||
return Mark(self.name, self.index, self.line, self.column,
|
||||
self.buffer, self.pointer)
|
||||
else:
|
||||
return Mark(self.name, self.index, self.line, self.column,
|
||||
None, None)
|
||||
|
||||
def determine_encoding(self):
|
||||
while not self.eof and len(self.raw_buffer) < 2:
|
||||
self.update_raw()
|
||||
if not isinstance(self.raw_buffer, unicode):
|
||||
if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
|
||||
self.raw_decode = codecs.utf_16_le_decode
|
||||
self.encoding = 'utf-16-le'
|
||||
elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
|
||||
self.raw_decode = codecs.utf_16_be_decode
|
||||
self.encoding = 'utf-16-be'
|
||||
else:
|
||||
self.raw_decode = codecs.utf_8_decode
|
||||
self.encoding = 'utf-8'
|
||||
self.update(1)
|
||||
|
||||
NON_PRINTABLE = re.compile(u'[^\x09\x0A\x0D\x20-\x7E\x85\xA0-\uD7FF\uE000-\uFFFD]')
|
||||
def check_printable(self, data):
|
||||
match = self.NON_PRINTABLE.search(data)
|
||||
if match:
|
||||
character = match.group()
|
||||
position = self.index+(len(self.buffer)-self.pointer)+match.start()
|
||||
raise ReaderError(self.name, position, ord(character),
|
||||
'unicode', "special characters are not allowed")
|
||||
|
||||
def update(self, length):
|
||||
if self.raw_buffer is None:
|
||||
return
|
||||
self.buffer = self.buffer[self.pointer:]
|
||||
self.pointer = 0
|
||||
while len(self.buffer) < length:
|
||||
if not self.eof:
|
||||
self.update_raw()
|
||||
if self.raw_decode is not None:
|
||||
try:
|
||||
data, converted = self.raw_decode(self.raw_buffer,
|
||||
'strict', self.eof)
|
||||
except UnicodeDecodeError, exc:
|
||||
character = exc.object[exc.start]
|
||||
if self.stream is not None:
|
||||
position = self.stream_pointer-len(self.raw_buffer)+exc.start
|
||||
else:
|
||||
position = exc.start
|
||||
raise ReaderError(self.name, position, character,
|
||||
exc.encoding, exc.reason)
|
||||
else:
|
||||
data = self.raw_buffer
|
||||
converted = len(data)
|
||||
self.check_printable(data)
|
||||
self.buffer += data
|
||||
self.raw_buffer = self.raw_buffer[converted:]
|
||||
if self.eof:
|
||||
self.buffer += u'\0'
|
||||
self.raw_buffer = None
|
||||
break
|
||||
|
||||
def update_raw(self, size=1024):
|
||||
data = self.stream.read(size)
|
||||
if data:
|
||||
self.raw_buffer += data
|
||||
self.stream_pointer += len(data)
|
||||
else:
|
||||
self.eof = True
|
||||
|
||||
#try:
|
||||
# import psyco
|
||||
# psyco.bind(Reader)
|
||||
#except ImportError:
|
||||
# pass
|
||||
|
|
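For reference (not part of the file): because the Reader accepts byte strings, unicode strings and file-like objects alike, callers do not need to decode YAML files themselves. A sketch, assuming a hypothetical UTF-8 encoded file named tasks.yml:

    import yaml

    with open('tasks.yml', 'rb') as f:
        data = yaml.safe_load(f)   # encoding detected from the BOM, else UTF-8
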
@@ -0,0 +1,484 @@
__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
|
||||
'RepresenterError']
|
||||
|
||||
from error import *
|
||||
from nodes import *
|
||||
|
||||
import datetime
|
||||
|
||||
import sys, copy_reg, types
|
||||
|
||||
class RepresenterError(YAMLError):
|
||||
pass
|
||||
|
||||
class BaseRepresenter(object):
|
||||
|
||||
yaml_representers = {}
|
||||
yaml_multi_representers = {}
|
||||
|
||||
def __init__(self, default_style=None, default_flow_style=None):
|
||||
self.default_style = default_style
|
||||
self.default_flow_style = default_flow_style
|
||||
self.represented_objects = {}
|
||||
self.object_keeper = []
|
||||
self.alias_key = None
|
||||
|
||||
def represent(self, data):
|
||||
node = self.represent_data(data)
|
||||
self.serialize(node)
|
||||
self.represented_objects = {}
|
||||
self.object_keeper = []
|
||||
self.alias_key = None
|
||||
|
||||
def get_classobj_bases(self, cls):
|
||||
bases = [cls]
|
||||
for base in cls.__bases__:
|
||||
bases.extend(self.get_classobj_bases(base))
|
||||
return bases
|
||||
|
||||
def represent_data(self, data):
|
||||
if self.ignore_aliases(data):
|
||||
self.alias_key = None
|
||||
else:
|
||||
self.alias_key = id(data)
|
||||
if self.alias_key is not None:
|
||||
if self.alias_key in self.represented_objects:
|
||||
node = self.represented_objects[self.alias_key]
|
||||
#if node is None:
|
||||
# raise RepresenterError("recursive objects are not allowed: %r" % data)
|
||||
return node
|
||||
#self.represented_objects[alias_key] = None
|
||||
self.object_keeper.append(data)
|
||||
data_types = type(data).__mro__
|
||||
if type(data) is types.InstanceType:
|
||||
data_types = self.get_classobj_bases(data.__class__)+list(data_types)
|
||||
if data_types[0] in self.yaml_representers:
|
||||
node = self.yaml_representers[data_types[0]](self, data)
|
||||
else:
|
||||
for data_type in data_types:
|
||||
if data_type in self.yaml_multi_representers:
|
||||
node = self.yaml_multi_representers[data_type](self, data)
|
||||
break
|
||||
else:
|
||||
if None in self.yaml_multi_representers:
|
||||
node = self.yaml_multi_representers[None](self, data)
|
||||
elif None in self.yaml_representers:
|
||||
node = self.yaml_representers[None](self, data)
|
||||
else:
|
||||
node = ScalarNode(None, unicode(data))
|
||||
#if alias_key is not None:
|
||||
# self.represented_objects[alias_key] = node
|
||||
return node
|
||||
|
||||
def add_representer(cls, data_type, representer):
|
||||
if not 'yaml_representers' in cls.__dict__:
|
||||
cls.yaml_representers = cls.yaml_representers.copy()
|
||||
cls.yaml_representers[data_type] = representer
|
||||
add_representer = classmethod(add_representer)
|
||||
|
||||
def add_multi_representer(cls, data_type, representer):
|
||||
if not 'yaml_multi_representers' in cls.__dict__:
|
||||
cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
|
||||
cls.yaml_multi_representers[data_type] = representer
|
||||
add_multi_representer = classmethod(add_multi_representer)
|
||||
|
||||
def represent_scalar(self, tag, value, style=None):
|
||||
if style is None:
|
||||
style = self.default_style
|
||||
node = ScalarNode(tag, value, style=style)
|
||||
if self.alias_key is not None:
|
||||
self.represented_objects[self.alias_key] = node
|
||||
return node
|
||||
|
||||
def represent_sequence(self, tag, sequence, flow_style=None):
|
||||
value = []
|
||||
node = SequenceNode(tag, value, flow_style=flow_style)
|
||||
if self.alias_key is not None:
|
||||
self.represented_objects[self.alias_key] = node
|
||||
best_style = True
|
||||
for item in sequence:
|
||||
node_item = self.represent_data(item)
|
||||
if not (isinstance(node_item, ScalarNode) and not node_item.style):
|
||||
best_style = False
|
||||
value.append(node_item)
|
||||
if flow_style is None:
|
||||
if self.default_flow_style is not None:
|
||||
node.flow_style = self.default_flow_style
|
||||
else:
|
||||
node.flow_style = best_style
|
||||
return node
|
||||
|
||||
def represent_mapping(self, tag, mapping, flow_style=None):
|
||||
value = []
|
||||
node = MappingNode(tag, value, flow_style=flow_style)
|
||||
if self.alias_key is not None:
|
||||
self.represented_objects[self.alias_key] = node
|
||||
best_style = True
|
||||
if hasattr(mapping, 'items'):
|
||||
mapping = mapping.items()
|
||||
mapping.sort()
|
||||
for item_key, item_value in mapping:
|
||||
node_key = self.represent_data(item_key)
|
||||
node_value = self.represent_data(item_value)
|
||||
if not (isinstance(node_key, ScalarNode) and not node_key.style):
|
||||
best_style = False
|
||||
if not (isinstance(node_value, ScalarNode) and not node_value.style):
|
||||
best_style = False
|
||||
value.append((node_key, node_value))
|
||||
if flow_style is None:
|
||||
if self.default_flow_style is not None:
|
||||
node.flow_style = self.default_flow_style
|
||||
else:
|
||||
node.flow_style = best_style
|
||||
return node
|
||||
|
||||
def ignore_aliases(self, data):
|
||||
return False
|
||||
|
||||
class SafeRepresenter(BaseRepresenter):
|
||||
|
||||
def ignore_aliases(self, data):
|
||||
if data in [None, ()]:
|
||||
return True
|
||||
if isinstance(data, (str, unicode, bool, int, float)):
|
||||
return True
|
||||
|
||||
def represent_none(self, data):
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:null',
|
||||
u'null')
|
||||
|
||||
def represent_str(self, data):
|
||||
tag = None
|
||||
style = None
|
||||
try:
|
||||
data = unicode(data, 'ascii')
|
||||
tag = u'tag:yaml.org,2002:str'
|
||||
except UnicodeDecodeError:
|
||||
try:
|
||||
data = unicode(data, 'utf-8')
|
||||
tag = u'tag:yaml.org,2002:str'
|
||||
except UnicodeDecodeError:
|
||||
data = data.encode('base64')
|
||||
tag = u'tag:yaml.org,2002:binary'
|
||||
style = '|'
|
||||
return self.represent_scalar(tag, data, style=style)
|
||||
|
||||
def represent_unicode(self, data):
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:str', data)
|
||||
|
||||
def represent_bool(self, data):
|
||||
if data:
|
||||
value = u'true'
|
||||
else:
|
||||
value = u'false'
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:bool', value)
|
||||
|
||||
def represent_int(self, data):
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:int', unicode(data))
|
||||
|
||||
def represent_long(self, data):
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:int', unicode(data))
|
||||
|
||||
inf_value = 1e300
|
||||
while repr(inf_value) != repr(inf_value*inf_value):
|
||||
inf_value *= inf_value
|
||||
|
||||
def represent_float(self, data):
|
||||
if data != data or (data == 0.0 and data == 1.0):
|
||||
value = u'.nan'
|
||||
elif data == self.inf_value:
|
||||
value = u'.inf'
|
||||
elif data == -self.inf_value:
|
||||
value = u'-.inf'
|
||||
else:
|
||||
value = unicode(repr(data)).lower()
|
||||
# Note that in some cases `repr(data)` represents a float number
|
||||
# without the decimal parts. For instance:
|
||||
# >>> repr(1e17)
|
||||
# '1e17'
|
||||
# Unfortunately, this is not a valid float representation according
|
||||
# to the definition of the `!!float` tag. We fix this by adding
|
||||
# '.0' before the 'e' symbol.
|
||||
if u'.' not in value and u'e' in value:
|
||||
value = value.replace(u'e', u'.0e', 1)
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:float', value)
|
||||
|
||||
def represent_list(self, data):
|
||||
#pairs = (len(data) > 0 and isinstance(data, list))
|
||||
#if pairs:
|
||||
# for item in data:
|
||||
# if not isinstance(item, tuple) or len(item) != 2:
|
||||
# pairs = False
|
||||
# break
|
||||
#if not pairs:
|
||||
return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
|
||||
#value = []
|
||||
#for item_key, item_value in data:
|
||||
# value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
|
||||
# [(item_key, item_value)]))
|
||||
#return SequenceNode(u'tag:yaml.org,2002:pairs', value)
|
||||
|
||||
def represent_dict(self, data):
|
||||
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
|
||||
|
||||
def represent_set(self, data):
|
||||
value = {}
|
||||
for key in data:
|
||||
value[key] = None
|
||||
return self.represent_mapping(u'tag:yaml.org,2002:set', value)
|
||||
|
||||
def represent_date(self, data):
|
||||
value = unicode(data.isoformat())
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
|
||||
|
||||
def represent_datetime(self, data):
|
||||
value = unicode(data.isoformat(' '))
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
|
||||
|
||||
def represent_yaml_object(self, tag, data, cls, flow_style=None):
|
||||
if hasattr(data, '__getstate__'):
|
||||
state = data.__getstate__()
|
||||
else:
|
||||
state = data.__dict__.copy()
|
||||
return self.represent_mapping(tag, state, flow_style=flow_style)
|
||||
|
||||
def represent_undefined(self, data):
|
||||
raise RepresenterError("cannot represent an object: %s" % data)
|
||||
|
||||
SafeRepresenter.add_representer(type(None),
|
||||
SafeRepresenter.represent_none)
|
||||
|
||||
SafeRepresenter.add_representer(str,
|
||||
SafeRepresenter.represent_str)
|
||||
|
||||
SafeRepresenter.add_representer(unicode,
|
||||
SafeRepresenter.represent_unicode)
|
||||
|
||||
SafeRepresenter.add_representer(bool,
|
||||
SafeRepresenter.represent_bool)
|
||||
|
||||
SafeRepresenter.add_representer(int,
|
||||
SafeRepresenter.represent_int)
|
||||
|
||||
SafeRepresenter.add_representer(long,
|
||||
SafeRepresenter.represent_long)
|
||||
|
||||
SafeRepresenter.add_representer(float,
|
||||
SafeRepresenter.represent_float)
|
||||
|
||||
SafeRepresenter.add_representer(list,
|
||||
SafeRepresenter.represent_list)
|
||||
|
||||
SafeRepresenter.add_representer(tuple,
|
||||
SafeRepresenter.represent_list)
|
||||
|
||||
SafeRepresenter.add_representer(dict,
|
||||
SafeRepresenter.represent_dict)
|
||||
|
||||
SafeRepresenter.add_representer(set,
|
||||
SafeRepresenter.represent_set)
|
||||
|
||||
SafeRepresenter.add_representer(datetime.date,
|
||||
SafeRepresenter.represent_date)
|
||||
|
||||
SafeRepresenter.add_representer(datetime.datetime,
|
||||
SafeRepresenter.represent_datetime)
|
||||
|
||||
SafeRepresenter.add_representer(None,
|
||||
SafeRepresenter.represent_undefined)
|
||||
|
||||
class Representer(SafeRepresenter):
|
||||
|
||||
def represent_str(self, data):
|
||||
tag = None
|
||||
style = None
|
||||
try:
|
||||
data = unicode(data, 'ascii')
|
||||
tag = u'tag:yaml.org,2002:str'
|
||||
except UnicodeDecodeError:
|
||||
try:
|
||||
data = unicode(data, 'utf-8')
|
||||
tag = u'tag:yaml.org,2002:python/str'
|
||||
except UnicodeDecodeError:
|
||||
data = data.encode('base64')
|
||||
tag = u'tag:yaml.org,2002:binary'
|
||||
style = '|'
|
||||
return self.represent_scalar(tag, data, style=style)
|
||||
|
||||
def represent_unicode(self, data):
|
||||
tag = None
|
||||
try:
|
||||
data.encode('ascii')
|
||||
tag = u'tag:yaml.org,2002:python/unicode'
|
||||
except UnicodeEncodeError:
|
||||
tag = u'tag:yaml.org,2002:str'
|
||||
return self.represent_scalar(tag, data)
|
||||
|
||||
def represent_long(self, data):
|
||||
tag = u'tag:yaml.org,2002:int'
|
||||
if int(data) is not data:
|
||||
tag = u'tag:yaml.org,2002:python/long'
|
||||
return self.represent_scalar(tag, unicode(data))
|
||||
|
||||
def represent_complex(self, data):
|
||||
if data.imag == 0.0:
|
||||
data = u'%r' % data.real
|
||||
elif data.real == 0.0:
|
||||
data = u'%rj' % data.imag
|
||||
elif data.imag > 0:
|
||||
data = u'%r+%rj' % (data.real, data.imag)
|
||||
else:
|
||||
data = u'%r%rj' % (data.real, data.imag)
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:python/complex', data)
|
||||
|
||||
def represent_tuple(self, data):
|
||||
return self.represent_sequence(u'tag:yaml.org,2002:python/tuple', data)
|
||||
|
||||
def represent_name(self, data):
|
||||
name = u'%s.%s' % (data.__module__, data.__name__)
|
||||
return self.represent_scalar(u'tag:yaml.org,2002:python/name:'+name, u'')
|
||||
|
||||
def represent_module(self, data):
|
||||
return self.represent_scalar(
|
||||
u'tag:yaml.org,2002:python/module:'+data.__name__, u'')
|
||||
|
||||
def represent_instance(self, data):
|
||||
# For instances of classic classes, we use __getinitargs__ and
|
||||
# __getstate__ to serialize the data.
|
||||
|
||||
# If data.__getinitargs__ exists, the object must be reconstructed by
|
||||
# calling cls(**args), where args is a tuple returned by
|
||||
# __getinitargs__. Otherwise, the cls.__init__ method should never be
|
||||
# called and the class instance is created by instantiating a trivial
|
||||
# class and assigning to the instance's __class__ variable.
|
||||
|
||||
# If data.__getstate__ exists, it returns the state of the object.
|
||||
# Otherwise, the state of the object is data.__dict__.
|
||||
|
||||
# We produce either a !!python/object or !!python/object/new node.
|
||||
# If data.__getinitargs__ does not exist and state is a dictionary, we
|
||||
# produce a !!python/object node . Otherwise we produce a
|
||||
# !!python/object/new node.
|
||||
|
||||
cls = data.__class__
|
||||
class_name = u'%s.%s' % (cls.__module__, cls.__name__)
|
||||
args = None
|
||||
state = None
|
||||
if hasattr(data, '__getinitargs__'):
|
||||
args = list(data.__getinitargs__())
|
||||
if hasattr(data, '__getstate__'):
|
||||
state = data.__getstate__()
|
||||
else:
|
||||
state = data.__dict__
|
||||
if args is None and isinstance(state, dict):
|
||||
return self.represent_mapping(
|
||||
u'tag:yaml.org,2002:python/object:'+class_name, state)
|
||||
if isinstance(state, dict) and not state:
|
||||
return self.represent_sequence(
|
||||
u'tag:yaml.org,2002:python/object/new:'+class_name, args)
|
||||
value = {}
|
||||
if args:
|
||||
value['args'] = args
|
||||
value['state'] = state
|
||||
return self.represent_mapping(
|
||||
u'tag:yaml.org,2002:python/object/new:'+class_name, value)
|
||||
|
||||
def represent_object(self, data):
|
||||
# We use __reduce__ API to save the data. data.__reduce__ returns
|
||||
# a tuple of length 2-5:
|
||||
# (function, args, state, listitems, dictitems)
|
||||
|
||||
# For reconstructing, we calls function(*args), then set its state,
|
||||
# listitems, and dictitems if they are not None.
|
||||
|
||||
# A special case is when function.__name__ == '__newobj__'. In this
|
||||
# case we create the object with args[0].__new__(*args).
|
||||
|
||||
# Another special case is when __reduce__ returns a string - we don't
|
||||
# support it.
|
||||
|
||||
# We produce a !!python/object, !!python/object/new or
|
||||
# !!python/object/apply node.
|
||||
|
||||
cls = type(data)
|
||||
if cls in copy_reg.dispatch_table:
|
||||
reduce = copy_reg.dispatch_table[cls](data)
|
||||
elif hasattr(data, '__reduce_ex__'):
|
||||
reduce = data.__reduce_ex__(2)
|
||||
elif hasattr(data, '__reduce__'):
|
||||
reduce = data.__reduce__()
|
||||
else:
|
||||
raise RepresenterError("cannot represent object: %r" % data)
|
||||
reduce = (list(reduce)+[None]*5)[:5]
|
||||
function, args, state, listitems, dictitems = reduce
|
||||
args = list(args)
|
||||
if state is None:
|
||||
state = {}
|
||||
if listitems is not None:
|
||||
listitems = list(listitems)
|
||||
if dictitems is not None:
|
||||
dictitems = dict(dictitems)
|
||||
if function.__name__ == '__newobj__':
|
||||
function = args[0]
|
||||
args = args[1:]
|
||||
tag = u'tag:yaml.org,2002:python/object/new:'
|
||||
newobj = True
|
||||
else:
|
||||
tag = u'tag:yaml.org,2002:python/object/apply:'
|
||||
newobj = False
|
||||
function_name = u'%s.%s' % (function.__module__, function.__name__)
|
||||
if not args and not listitems and not dictitems \
|
||||
and isinstance(state, dict) and newobj:
|
||||
return self.represent_mapping(
|
||||
u'tag:yaml.org,2002:python/object:'+function_name, state)
|
||||
if not listitems and not dictitems \
|
||||
and isinstance(state, dict) and not state:
|
||||
return self.represent_sequence(tag+function_name, args)
|
||||
value = {}
|
||||
if args:
|
||||
value['args'] = args
|
||||
if state or not isinstance(state, dict):
|
||||
value['state'] = state
|
||||
if listitems:
|
||||
value['listitems'] = listitems
|
||||
if dictitems:
|
||||
value['dictitems'] = dictitems
|
||||
return self.represent_mapping(tag+function_name, value)
|
||||
|
||||
Representer.add_representer(str,
|
||||
Representer.represent_str)
|
||||
|
||||
Representer.add_representer(unicode,
|
||||
Representer.represent_unicode)
|
||||
|
||||
Representer.add_representer(long,
|
||||
Representer.represent_long)
|
||||
|
||||
Representer.add_representer(complex,
|
||||
Representer.represent_complex)
|
||||
|
||||
Representer.add_representer(tuple,
|
||||
Representer.represent_tuple)
|
||||
|
||||
Representer.add_representer(type,
|
||||
Representer.represent_name)
|
||||
|
||||
Representer.add_representer(types.ClassType,
|
||||
Representer.represent_name)
|
||||
|
||||
Representer.add_representer(types.FunctionType,
|
||||
Representer.represent_name)
|
||||
|
||||
Representer.add_representer(types.BuiltinFunctionType,
|
||||
Representer.represent_name)
|
||||
|
||||
Representer.add_representer(types.ModuleType,
|
||||
Representer.represent_module)
|
||||
|
||||
Representer.add_multi_representer(types.InstanceType,
|
||||
Representer.represent_instance)
|
||||
|
||||
Representer.add_multi_representer(object,
|
||||
Representer.represent_object)
|
||||
|
|
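A short illustration (not part of the file) of the add_representer() hook that the code above uses for the built-in types, here with a hypothetical Job class and represent_job function:

    import yaml

    class Job(object):
        def __init__(self, name):
            self.name = name

    def represent_job(dumper, data):
        # Emit the job as a scalar with a custom application tag.
        return dumper.represent_scalar(u'!job', u'%s' % data.name)

    yaml.add_representer(Job, represent_job)
    print(yaml.dump(Job('signing')))   # emits something like: !job signing
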
@@ -0,0 +1,224 @@
__all__ = ['BaseResolver', 'Resolver']
|
||||
|
||||
from error import *
|
||||
from nodes import *
|
||||
|
||||
import re
|
||||
|
||||
class ResolverError(YAMLError):
|
||||
pass
|
||||
|
||||
class BaseResolver(object):
|
||||
|
||||
DEFAULT_SCALAR_TAG = u'tag:yaml.org,2002:str'
|
||||
DEFAULT_SEQUENCE_TAG = u'tag:yaml.org,2002:seq'
|
||||
DEFAULT_MAPPING_TAG = u'tag:yaml.org,2002:map'
|
||||
|
||||
yaml_implicit_resolvers = {}
|
||||
yaml_path_resolvers = {}
|
||||
|
||||
def __init__(self):
|
||||
self.resolver_exact_paths = []
|
||||
self.resolver_prefix_paths = []
|
||||
|
||||
def add_implicit_resolver(cls, tag, regexp, first):
|
||||
if not 'yaml_implicit_resolvers' in cls.__dict__:
|
||||
cls.yaml_implicit_resolvers = cls.yaml_implicit_resolvers.copy()
|
||||
if first is None:
|
||||
first = [None]
|
||||
for ch in first:
|
||||
cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
|
||||
add_implicit_resolver = classmethod(add_implicit_resolver)
|
||||
|
||||
def add_path_resolver(cls, tag, path, kind=None):
|
||||
# Note: `add_path_resolver` is experimental. The API could be changed.
|
||||
# `new_path` is a pattern that is matched against the path from the
|
||||
# root to the node that is being considered. `node_path` elements are
|
||||
# tuples `(node_check, index_check)`. `node_check` is a node class:
|
||||
# `ScalarNode`, `SequenceNode`, `MappingNode` or `None`. `None`
|
||||
# matches any kind of a node. `index_check` could be `None`, a boolean
|
||||
# value, a string value, or a number. `None` and `False` match against
|
||||
# any _value_ of sequence and mapping nodes. `True` matches against
|
||||
# any _key_ of a mapping node. A string `index_check` matches against
|
||||
# a mapping value that corresponds to a scalar key which content is
|
||||
# equal to the `index_check` value. An integer `index_check` matches
|
||||
# against a sequence value with the index equal to `index_check`.
|
||||
if not 'yaml_path_resolvers' in cls.__dict__:
|
||||
cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
|
||||
new_path = []
|
||||
for element in path:
|
||||
if isinstance(element, (list, tuple)):
|
||||
if len(element) == 2:
|
||||
node_check, index_check = element
|
||||
elif len(element) == 1:
|
||||
node_check = element[0]
|
||||
index_check = True
|
||||
else:
|
||||
raise ResolverError("Invalid path element: %s" % element)
|
||||
else:
|
||||
node_check = None
|
||||
index_check = element
|
||||
if node_check is str:
|
||||
node_check = ScalarNode
|
||||
elif node_check is list:
|
||||
node_check = SequenceNode
|
||||
elif node_check is dict:
|
||||
node_check = MappingNode
|
||||
elif node_check not in [ScalarNode, SequenceNode, MappingNode] \
|
||||
and not isinstance(node_check, basestring) \
|
||||
and node_check is not None:
|
||||
raise ResolverError("Invalid node checker: %s" % node_check)
|
||||
if not isinstance(index_check, (basestring, int)) \
|
||||
and index_check is not None:
|
||||
raise ResolverError("Invalid index checker: %s" % index_check)
|
||||
new_path.append((node_check, index_check))
|
||||
if kind is str:
|
||||
kind = ScalarNode
|
||||
elif kind is list:
|
||||
kind = SequenceNode
|
||||
elif kind is dict:
|
||||
kind = MappingNode
|
||||
elif kind not in [ScalarNode, SequenceNode, MappingNode] \
|
||||
and kind is not None:
|
||||
raise ResolverError("Invalid node kind: %s" % kind)
|
||||
cls.yaml_path_resolvers[tuple(new_path), kind] = tag
|
||||
add_path_resolver = classmethod(add_path_resolver)
|
||||
|
||||
def descend_resolver(self, current_node, current_index):
|
||||
if not self.yaml_path_resolvers:
|
||||
return
|
||||
exact_paths = {}
|
||||
prefix_paths = []
|
||||
if current_node:
|
||||
depth = len(self.resolver_prefix_paths)
|
||||
for path, kind in self.resolver_prefix_paths[-1]:
|
||||
if self.check_resolver_prefix(depth, path, kind,
|
||||
current_node, current_index):
|
||||
if len(path) > depth:
|
||||
prefix_paths.append((path, kind))
|
||||
else:
|
||||
exact_paths[kind] = self.yaml_path_resolvers[path, kind]
|
||||
else:
|
||||
for path, kind in self.yaml_path_resolvers:
|
||||
if not path:
|
||||
exact_paths[kind] = self.yaml_path_resolvers[path, kind]
|
||||
else:
|
||||
prefix_paths.append((path, kind))
|
||||
self.resolver_exact_paths.append(exact_paths)
|
||||
self.resolver_prefix_paths.append(prefix_paths)
|
||||
|
||||
def ascend_resolver(self):
|
||||
if not self.yaml_path_resolvers:
|
||||
return
|
||||
self.resolver_exact_paths.pop()
|
||||
self.resolver_prefix_paths.pop()
|
||||
|
||||
def check_resolver_prefix(self, depth, path, kind,
|
||||
current_node, current_index):
|
||||
node_check, index_check = path[depth-1]
|
||||
if isinstance(node_check, basestring):
|
||||
if current_node.tag != node_check:
|
||||
return
|
||||
elif node_check is not None:
|
||||
if not isinstance(current_node, node_check):
|
||||
return
|
||||
if index_check is True and current_index is not None:
|
||||
return
|
||||
if (index_check is False or index_check is None) \
|
||||
and current_index is None:
|
||||
return
|
||||
if isinstance(index_check, basestring):
|
||||
if not (isinstance(current_index, ScalarNode)
|
||||
and index_check == current_index.value):
|
||||
return
|
||||
elif isinstance(index_check, int) and not isinstance(index_check, bool):
|
||||
if index_check != current_index:
|
||||
return
|
||||
return True
|
||||
|
||||
def resolve(self, kind, value, implicit):
|
||||
if kind is ScalarNode and implicit[0]:
|
||||
if value == u'':
|
||||
resolvers = self.yaml_implicit_resolvers.get(u'', [])
|
||||
else:
|
||||
resolvers = self.yaml_implicit_resolvers.get(value[0], [])
|
||||
resolvers += self.yaml_implicit_resolvers.get(None, [])
|
||||
for tag, regexp in resolvers:
|
||||
if regexp.match(value):
|
||||
return tag
|
||||
implicit = implicit[1]
|
||||
if self.yaml_path_resolvers:
|
||||
exact_paths = self.resolver_exact_paths[-1]
|
||||
if kind in exact_paths:
|
||||
return exact_paths[kind]
|
||||
if None in exact_paths:
|
||||
return exact_paths[None]
|
||||
if kind is ScalarNode:
|
||||
return self.DEFAULT_SCALAR_TAG
|
||||
elif kind is SequenceNode:
|
||||
return self.DEFAULT_SEQUENCE_TAG
|
||||
elif kind is MappingNode:
|
||||
return self.DEFAULT_MAPPING_TAG
|
||||
|
||||
class Resolver(BaseResolver):
|
||||
pass
|
||||
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:bool',
|
||||
re.compile(ur'''^(?:yes|Yes|YES|no|No|NO
|
||||
|true|True|TRUE|false|False|FALSE
|
||||
|on|On|ON|off|Off|OFF)$''', re.X),
|
||||
list(u'yYnNtTfFoO'))
|
||||
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:float',
|
||||
re.compile(ur'''^(?:[-+]?(?:[0-9][0-9_]*)\.[0-9_]*(?:[eE][-+][0-9]+)?
|
||||
|\.[0-9_]+(?:[eE][-+][0-9]+)?
|
||||
|[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\.[0-9_]*
|
||||
|[-+]?\.(?:inf|Inf|INF)
|
||||
|\.(?:nan|NaN|NAN))$''', re.X),
|
||||
list(u'-+0123456789.'))
|
||||
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:int',
|
||||
re.compile(ur'''^(?:[-+]?0b[0-1_]+
|
||||
|[-+]?0[0-7_]+
|
||||
|[-+]?(?:0|[1-9][0-9_]*)
|
||||
|[-+]?0x[0-9a-fA-F_]+
|
||||
|[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
|
||||
list(u'-+0123456789'))
|
||||
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:merge',
|
||||
re.compile(ur'^(?:<<)$'),
|
||||
[u'<'])
|
||||
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:null',
|
||||
re.compile(ur'''^(?: ~
|
||||
|null|Null|NULL
|
||||
| )$''', re.X),
|
||||
[u'~', u'n', u'N', u''])
|
||||
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:timestamp',
|
||||
re.compile(ur'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
|
||||
|[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
|
||||
(?:[Tt]|[ \t]+)[0-9][0-9]?
|
||||
:[0-9][0-9] :[0-9][0-9] (?:\.[0-9]*)?
|
||||
(?:[ \t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
|
||||
list(u'0123456789'))
|
||||
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:value',
|
||||
re.compile(ur'^(?:=)$'),
|
||||
[u'='])
|
||||
|
||||
# The following resolver is only for documentation purposes. It cannot work
|
||||
# because plain scalars cannot start with '!', '&', or '*'.
|
||||
Resolver.add_implicit_resolver(
|
||||
u'tag:yaml.org,2002:yaml',
|
||||
re.compile(ur'^(?:!|&|\*)$'),
|
||||
list(u'!&*'))
|
||||
|
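A hedged sketch of the classmethods defined above: adding an implicit resolver so plain scalars shaped like ${NAME} are tagged with a hypothetical !env tag (a matching constructor would still have to be registered before loading such documents).

import re
import yaml

env_re = re.compile(r'^\$\{[A-Za-z_][A-Za-z0-9_]*\}$')
# first=['$'] narrows the per-character lookup table to scalars starting with '$'.
yaml.add_implicit_resolver(u'!env', env_re, first=['$'])

node = yaml.compose('password: ${SECRET}')
assert node.value[0][1].tag == u'!env'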
(The diff for this file is not shown because of its size.)
|
@ -0,0 +1,111 @@
|
|||
|
||||
__all__ = ['Serializer', 'SerializerError']
|
||||
|
||||
from error import YAMLError
|
||||
from events import *
|
||||
from nodes import *
|
||||
|
||||
class SerializerError(YAMLError):
|
||||
pass
|
||||
|
||||
class Serializer(object):
|
||||
|
||||
ANCHOR_TEMPLATE = u'id%03d'
|
||||
|
||||
def __init__(self, encoding=None,
|
||||
explicit_start=None, explicit_end=None, version=None, tags=None):
|
||||
self.use_encoding = encoding
|
||||
self.use_explicit_start = explicit_start
|
||||
self.use_explicit_end = explicit_end
|
||||
self.use_version = version
|
||||
self.use_tags = tags
|
||||
self.serialized_nodes = {}
|
||||
self.anchors = {}
|
||||
self.last_anchor_id = 0
|
||||
self.closed = None
|
||||
|
||||
def open(self):
|
||||
if self.closed is None:
|
||||
self.emit(StreamStartEvent(encoding=self.use_encoding))
|
||||
self.closed = False
|
||||
elif self.closed:
|
||||
raise SerializerError("serializer is closed")
|
||||
else:
|
||||
raise SerializerError("serializer is already opened")
|
||||
|
||||
def close(self):
|
||||
if self.closed is None:
|
||||
raise SerializerError("serializer is not opened")
|
||||
elif not self.closed:
|
||||
self.emit(StreamEndEvent())
|
||||
self.closed = True
|
||||
|
||||
#def __del__(self):
|
||||
# self.close()
|
||||
|
||||
def serialize(self, node):
|
||||
if self.closed is None:
|
||||
raise SerializerError("serializer is not opened")
|
||||
elif self.closed:
|
||||
raise SerializerError("serializer is closed")
|
||||
self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
|
||||
version=self.use_version, tags=self.use_tags))
|
||||
self.anchor_node(node)
|
||||
self.serialize_node(node, None, None)
|
||||
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
|
||||
self.serialized_nodes = {}
|
||||
self.anchors = {}
|
||||
self.last_anchor_id = 0
|
||||
|
||||
def anchor_node(self, node):
|
||||
if node in self.anchors:
|
||||
if self.anchors[node] is None:
|
||||
self.anchors[node] = self.generate_anchor(node)
|
||||
else:
|
||||
self.anchors[node] = None
|
||||
if isinstance(node, SequenceNode):
|
||||
for item in node.value:
|
||||
self.anchor_node(item)
|
||||
elif isinstance(node, MappingNode):
|
||||
for key, value in node.value:
|
||||
self.anchor_node(key)
|
||||
self.anchor_node(value)
|
||||
|
||||
def generate_anchor(self, node):
|
||||
self.last_anchor_id += 1
|
||||
return self.ANCHOR_TEMPLATE % self.last_anchor_id
|
||||
|
||||
def serialize_node(self, node, parent, index):
|
||||
alias = self.anchors[node]
|
||||
if node in self.serialized_nodes:
|
||||
self.emit(AliasEvent(alias))
|
||||
else:
|
||||
self.serialized_nodes[node] = True
|
||||
self.descend_resolver(parent, index)
|
||||
if isinstance(node, ScalarNode):
|
||||
detected_tag = self.resolve(ScalarNode, node.value, (True, False))
|
||||
default_tag = self.resolve(ScalarNode, node.value, (False, True))
|
||||
implicit = (node.tag == detected_tag), (node.tag == default_tag)
|
||||
self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
|
||||
style=node.style))
|
||||
elif isinstance(node, SequenceNode):
|
||||
implicit = (node.tag
|
||||
== self.resolve(SequenceNode, node.value, True))
|
||||
self.emit(SequenceStartEvent(alias, node.tag, implicit,
|
||||
flow_style=node.flow_style))
|
||||
index = 0
|
||||
for item in node.value:
|
||||
self.serialize_node(item, node, index)
|
||||
index += 1
|
||||
self.emit(SequenceEndEvent())
|
||||
elif isinstance(node, MappingNode):
|
||||
implicit = (node.tag
|
||||
== self.resolve(MappingNode, node.value, True))
|
||||
self.emit(MappingStartEvent(alias, node.tag, implicit,
|
||||
flow_style=node.flow_style))
|
||||
for key, value in node.value:
|
||||
self.serialize_node(key, node, None)
|
||||
self.serialize_node(value, node, key)
|
||||
self.emit(MappingEndEvent())
|
||||
self.ascend_resolver()
|
||||
|
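A small sketch of the life cycle enforced above: the top-level serialize() helper opens the stream, emits one document via serialize_node(), and closes it, so reusing a closed Serializer raises SerializerError.

import yaml

node = yaml.compose('a: 1\nb: [2, 3]\n')   # build a representation tree
text = yaml.serialize(node)                # open -> serialize_node -> close
print(text)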
|
@ -0,0 +1,104 @@
|
|||
|
||||
class Token(object):
|
||||
def __init__(self, start_mark, end_mark):
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
def __repr__(self):
|
||||
attributes = [key for key in self.__dict__
|
||||
if not key.endswith('_mark')]
|
||||
attributes.sort()
|
||||
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
|
||||
for key in attributes])
|
||||
return '%s(%s)' % (self.__class__.__name__, arguments)
|
||||
|
||||
#class BOMToken(Token):
|
||||
# id = '<byte order mark>'
|
||||
|
||||
class DirectiveToken(Token):
|
||||
id = '<directive>'
|
||||
def __init__(self, name, value, start_mark, end_mark):
|
||||
self.name = name
|
||||
self.value = value
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
|
||||
class DocumentStartToken(Token):
|
||||
id = '<document start>'
|
||||
|
||||
class DocumentEndToken(Token):
|
||||
id = '<document end>'
|
||||
|
||||
class StreamStartToken(Token):
|
||||
id = '<stream start>'
|
||||
def __init__(self, start_mark=None, end_mark=None,
|
||||
encoding=None):
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
self.encoding = encoding
|
||||
|
||||
class StreamEndToken(Token):
|
||||
id = '<stream end>'
|
||||
|
||||
class BlockSequenceStartToken(Token):
|
||||
id = '<block sequence start>'
|
||||
|
||||
class BlockMappingStartToken(Token):
|
||||
id = '<block mapping start>'
|
||||
|
||||
class BlockEndToken(Token):
|
||||
id = '<block end>'
|
||||
|
||||
class FlowSequenceStartToken(Token):
|
||||
id = '['
|
||||
|
||||
class FlowMappingStartToken(Token):
|
||||
id = '{'
|
||||
|
||||
class FlowSequenceEndToken(Token):
|
||||
id = ']'
|
||||
|
||||
class FlowMappingEndToken(Token):
|
||||
id = '}'
|
||||
|
||||
class KeyToken(Token):
|
||||
id = '?'
|
||||
|
||||
class ValueToken(Token):
|
||||
id = ':'
|
||||
|
||||
class BlockEntryToken(Token):
|
||||
id = '-'
|
||||
|
||||
class FlowEntryToken(Token):
|
||||
id = ','
|
||||
|
||||
class AliasToken(Token):
|
||||
id = '<alias>'
|
||||
def __init__(self, value, start_mark, end_mark):
|
||||
self.value = value
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
|
||||
class AnchorToken(Token):
|
||||
id = '<anchor>'
|
||||
def __init__(self, value, start_mark, end_mark):
|
||||
self.value = value
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
|
||||
class TagToken(Token):
|
||||
id = '<tag>'
|
||||
def __init__(self, value, start_mark, end_mark):
|
||||
self.value = value
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
|
||||
class ScalarToken(Token):
|
||||
id = '<scalar>'
|
||||
def __init__(self, value, plain, start_mark, end_mark, style=None):
|
||||
self.value = value
|
||||
self.plain = plain
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
self.style = style
|
||||
|
|
@ -0,0 +1,312 @@
|
|||
|
||||
from .error import *
|
||||
|
||||
from .tokens import *
|
||||
from .events import *
|
||||
from .nodes import *
|
||||
|
||||
from .loader import *
|
||||
from .dumper import *
|
||||
|
||||
__version__ = '3.11'
|
||||
try:
|
||||
from .cyaml import *
|
||||
__with_libyaml__ = True
|
||||
except ImportError:
|
||||
__with_libyaml__ = False
|
||||
|
||||
import io
|
||||
|
||||
def scan(stream, Loader=Loader):
|
||||
"""
|
||||
Scan a YAML stream and produce scanning tokens.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_token():
|
||||
yield loader.get_token()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def parse(stream, Loader=Loader):
|
||||
"""
|
||||
Parse a YAML stream and produce parsing events.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_event():
|
||||
yield loader.get_event()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def compose(stream, Loader=Loader):
|
||||
"""
|
||||
Parse the first YAML document in a stream
|
||||
and produce the corresponding representation tree.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
return loader.get_single_node()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def compose_all(stream, Loader=Loader):
|
||||
"""
|
||||
Parse all YAML documents in a stream
|
||||
and produce corresponding representation trees.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_node():
|
||||
yield loader.get_node()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def load(stream, Loader=Loader):
|
||||
"""
|
||||
Parse the first YAML document in a stream
|
||||
and produce the corresponding Python object.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
return loader.get_single_data()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def load_all(stream, Loader=Loader):
|
||||
"""
|
||||
Parse all YAML documents in a stream
|
||||
and produce corresponding Python objects.
|
||||
"""
|
||||
loader = Loader(stream)
|
||||
try:
|
||||
while loader.check_data():
|
||||
yield loader.get_data()
|
||||
finally:
|
||||
loader.dispose()
|
||||
|
||||
def safe_load(stream):
|
||||
"""
|
||||
Parse the first YAML document in a stream
|
||||
and produce the corresponding Python object.
|
||||
Resolve only basic YAML tags.
|
||||
"""
|
||||
return load(stream, SafeLoader)
|
||||
|
||||
def safe_load_all(stream):
|
||||
"""
|
||||
Parse all YAML documents in a stream
|
||||
and produce corresponding Python objects.
|
||||
Resolve only basic YAML tags.
|
||||
"""
|
||||
return load_all(stream, SafeLoader)
|
||||
|
||||
def emit(events, stream=None, Dumper=Dumper,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None):
|
||||
"""
|
||||
Emit YAML parsing events into a stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
getvalue = None
|
||||
if stream is None:
|
||||
stream = io.StringIO()
|
||||
getvalue = stream.getvalue
|
||||
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break)
|
||||
try:
|
||||
for event in events:
|
||||
dumper.emit(event)
|
||||
finally:
|
||||
dumper.dispose()
|
||||
if getvalue:
|
||||
return getvalue()
|
||||
|
||||
def serialize_all(nodes, stream=None, Dumper=Dumper,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
"""
|
||||
Serialize a sequence of representation trees into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
getvalue = None
|
||||
if stream is None:
|
||||
if encoding is None:
|
||||
stream = io.StringIO()
|
||||
else:
|
||||
stream = io.BytesIO()
|
||||
getvalue = stream.getvalue
|
||||
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
encoding=encoding, version=version, tags=tags,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end)
|
||||
try:
|
||||
dumper.open()
|
||||
for node in nodes:
|
||||
dumper.serialize(node)
|
||||
dumper.close()
|
||||
finally:
|
||||
dumper.dispose()
|
||||
if getvalue:
|
||||
return getvalue()
|
||||
|
||||
def serialize(node, stream=None, Dumper=Dumper, **kwds):
|
||||
"""
|
||||
Serialize a representation tree into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return serialize_all([node], stream, Dumper=Dumper, **kwds)
|
||||
|
||||
def dump_all(documents, stream=None, Dumper=Dumper,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
"""
|
||||
Serialize a sequence of Python objects into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
getvalue = None
|
||||
if stream is None:
|
||||
if encoding is None:
|
||||
stream = io.StringIO()
|
||||
else:
|
||||
stream = io.BytesIO()
|
||||
getvalue = stream.getvalue
|
||||
dumper = Dumper(stream, default_style=default_style,
|
||||
default_flow_style=default_flow_style,
|
||||
canonical=canonical, indent=indent, width=width,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
encoding=encoding, version=version, tags=tags,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end)
|
||||
try:
|
||||
dumper.open()
|
||||
for data in documents:
|
||||
dumper.represent(data)
|
||||
dumper.close()
|
||||
finally:
|
||||
dumper.dispose()
|
||||
if getvalue:
|
||||
return getvalue()
|
||||
|
||||
def dump(data, stream=None, Dumper=Dumper, **kwds):
|
||||
"""
|
||||
Serialize a Python object into a YAML stream.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return dump_all([data], stream, Dumper=Dumper, **kwds)
|
||||
|
||||
def safe_dump_all(documents, stream=None, **kwds):
|
||||
"""
|
||||
Serialize a sequence of Python objects into a YAML stream.
|
||||
Produce only basic YAML tags.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
|
||||
|
||||
def safe_dump(data, stream=None, **kwds):
|
||||
"""
|
||||
Serialize a Python object into a YAML stream.
|
||||
Produce only basic YAML tags.
|
||||
If stream is None, return the produced string instead.
|
||||
"""
|
||||
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
|
||||
|
||||
def add_implicit_resolver(tag, regexp, first=None,
|
||||
Loader=Loader, Dumper=Dumper):
|
||||
"""
|
||||
Add an implicit scalar detector.
|
||||
If an implicit scalar value matches the given regexp,
|
||||
the corresponding tag is assigned to the scalar.
|
||||
first is a sequence of possible initial characters or None.
|
||||
"""
|
||||
Loader.add_implicit_resolver(tag, regexp, first)
|
||||
Dumper.add_implicit_resolver(tag, regexp, first)
|
||||
|
||||
def add_path_resolver(tag, path, kind=None, Loader=Loader, Dumper=Dumper):
|
||||
"""
|
||||
Add a path based resolver for the given tag.
|
||||
A path is a list of keys that forms a path
|
||||
to a node in the representation tree.
|
||||
Keys can be string values, integers, or None.
|
||||
"""
|
||||
Loader.add_path_resolver(tag, path, kind)
|
||||
Dumper.add_path_resolver(tag, path, kind)
|
||||
|
||||
def add_constructor(tag, constructor, Loader=Loader):
|
||||
"""
|
||||
Add a constructor for the given tag.
|
||||
Constructor is a function that accepts a Loader instance
|
||||
and a node object and produces the corresponding Python object.
|
||||
"""
|
||||
Loader.add_constructor(tag, constructor)
|
||||
|
||||
def add_multi_constructor(tag_prefix, multi_constructor, Loader=Loader):
|
||||
"""
|
||||
Add a multi-constructor for the given tag prefix.
|
||||
Multi-constructor is called for a node if its tag starts with tag_prefix.
|
||||
Multi-constructor accepts a Loader instance, a tag suffix,
|
||||
and a node object and produces the corresponding Python object.
|
||||
"""
|
||||
Loader.add_multi_constructor(tag_prefix, multi_constructor)
|
||||
|
||||
def add_representer(data_type, representer, Dumper=Dumper):
|
||||
"""
|
||||
Add a representer for the given type.
|
||||
Representer is a function accepting a Dumper instance
|
||||
and an instance of the given data type
|
||||
and producing the corresponding representation node.
|
||||
"""
|
||||
Dumper.add_representer(data_type, representer)
|
||||
|
||||
def add_multi_representer(data_type, multi_representer, Dumper=Dumper):
|
||||
"""
|
||||
Add a representer for the given type.
|
||||
Multi-representer is a function accepting a Dumper instance
|
||||
and an instance of the given data type or subtype
|
||||
and producing the corresponding representation node.
|
||||
"""
|
||||
Dumper.add_multi_representer(data_type, multi_representer)
|
||||
|
||||
class YAMLObjectMetaclass(type):
|
||||
"""
|
||||
The metaclass for YAMLObject.
|
||||
"""
|
||||
def __init__(cls, name, bases, kwds):
|
||||
super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
|
||||
if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
|
||||
cls.yaml_loader.add_constructor(cls.yaml_tag, cls.from_yaml)
|
||||
cls.yaml_dumper.add_representer(cls, cls.to_yaml)
|
||||
|
||||
class YAMLObject(metaclass=YAMLObjectMetaclass):
|
||||
"""
|
||||
An object that can dump itself to a YAML stream
|
||||
and load itself from a YAML stream.
|
||||
"""
|
||||
|
||||
__slots__ = () # no direct instantiation, so allow immutable subclasses
|
||||
|
||||
yaml_loader = Loader
|
||||
yaml_dumper = Dumper
|
||||
|
||||
yaml_tag = None
|
||||
yaml_flow_style = None
|
||||
|
||||
@classmethod
|
||||
def from_yaml(cls, loader, node):
|
||||
"""
|
||||
Convert a representation node to a Python object.
|
||||
"""
|
||||
return loader.construct_yaml_object(node, cls)
|
||||
|
||||
@classmethod
|
||||
def to_yaml(cls, dumper, data):
|
||||
"""
|
||||
Convert a Python object to a representation node.
|
||||
"""
|
||||
return dumper.represent_yaml_object(cls.yaml_tag, data, cls,
|
||||
flow_style=cls.yaml_flow_style)
|
||||
|
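A hedged round-trip sketch against the public helpers defined in this module; the Monster class and the !monster tag are illustrative assumptions, not part of the patch.

import yaml

data = yaml.safe_load('retries: 3\nqueues: [build, test]\n')
assert data == {'retries': 3, 'queues': ['build', 'test']}
print(yaml.safe_dump(data, default_flow_style=False))

class Monster(yaml.YAMLObject):
    # YAMLObjectMetaclass registers the constructor/representer for this tag.
    yaml_tag = u'!monster'
    def __init__(self, name, hp):
        self.name = name
        self.hp = hp

m = yaml.load('!monster {name: Cave Troll, hp: 8}')
print(m.name, m.hp)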
|
@ -0,0 +1,139 @@
|
|||
|
||||
__all__ = ['Composer', 'ComposerError']
|
||||
|
||||
from .error import MarkedYAMLError
|
||||
from .events import *
|
||||
from .nodes import *
|
||||
|
||||
class ComposerError(MarkedYAMLError):
|
||||
pass
|
||||
|
||||
class Composer:
|
||||
|
||||
def __init__(self):
|
||||
self.anchors = {}
|
||||
|
||||
def check_node(self):
|
||||
# Drop the STREAM-START event.
|
||||
if self.check_event(StreamStartEvent):
|
||||
self.get_event()
|
||||
|
||||
# Check whether more documents are available.
|
||||
return not self.check_event(StreamEndEvent)
|
||||
|
||||
def get_node(self):
|
||||
# Get the root node of the next document.
|
||||
if not self.check_event(StreamEndEvent):
|
||||
return self.compose_document()
|
||||
|
||||
def get_single_node(self):
|
||||
# Drop the STREAM-START event.
|
||||
self.get_event()
|
||||
|
||||
# Compose a document if the stream is not empty.
|
||||
document = None
|
||||
if not self.check_event(StreamEndEvent):
|
||||
document = self.compose_document()
|
||||
|
||||
# Ensure that the stream contains no more documents.
|
||||
if not self.check_event(StreamEndEvent):
|
||||
event = self.get_event()
|
||||
raise ComposerError("expected a single document in the stream",
|
||||
document.start_mark, "but found another document",
|
||||
event.start_mark)
|
||||
|
||||
# Drop the STREAM-END event.
|
||||
self.get_event()
|
||||
|
||||
return document
|
||||
|
||||
def compose_document(self):
|
||||
# Drop the DOCUMENT-START event.
|
||||
self.get_event()
|
||||
|
||||
# Compose the root node.
|
||||
node = self.compose_node(None, None)
|
||||
|
||||
# Drop the DOCUMENT-END event.
|
||||
self.get_event()
|
||||
|
||||
self.anchors = {}
|
||||
return node
|
||||
|
||||
def compose_node(self, parent, index):
|
||||
if self.check_event(AliasEvent):
|
||||
event = self.get_event()
|
||||
anchor = event.anchor
|
||||
if anchor not in self.anchors:
|
||||
raise ComposerError(None, None, "found undefined alias %r"
|
||||
% anchor, event.start_mark)
|
||||
return self.anchors[anchor]
|
||||
event = self.peek_event()
|
||||
anchor = event.anchor
|
||||
if anchor is not None:
|
||||
if anchor in self.anchors:
|
||||
raise ComposerError("found duplicate anchor %r; first occurence"
|
||||
% anchor, self.anchors[anchor].start_mark,
|
||||
"second occurence", event.start_mark)
|
||||
self.descend_resolver(parent, index)
|
||||
if self.check_event(ScalarEvent):
|
||||
node = self.compose_scalar_node(anchor)
|
||||
elif self.check_event(SequenceStartEvent):
|
||||
node = self.compose_sequence_node(anchor)
|
||||
elif self.check_event(MappingStartEvent):
|
||||
node = self.compose_mapping_node(anchor)
|
||||
self.ascend_resolver()
|
||||
return node
|
||||
|
||||
def compose_scalar_node(self, anchor):
|
||||
event = self.get_event()
|
||||
tag = event.tag
|
||||
if tag is None or tag == '!':
|
||||
tag = self.resolve(ScalarNode, event.value, event.implicit)
|
||||
node = ScalarNode(tag, event.value,
|
||||
event.start_mark, event.end_mark, style=event.style)
|
||||
if anchor is not None:
|
||||
self.anchors[anchor] = node
|
||||
return node
|
||||
|
||||
def compose_sequence_node(self, anchor):
|
||||
start_event = self.get_event()
|
||||
tag = start_event.tag
|
||||
if tag is None or tag == '!':
|
||||
tag = self.resolve(SequenceNode, None, start_event.implicit)
|
||||
node = SequenceNode(tag, [],
|
||||
start_event.start_mark, None,
|
||||
flow_style=start_event.flow_style)
|
||||
if anchor is not None:
|
||||
self.anchors[anchor] = node
|
||||
index = 0
|
||||
while not self.check_event(SequenceEndEvent):
|
||||
node.value.append(self.compose_node(node, index))
|
||||
index += 1
|
||||
end_event = self.get_event()
|
||||
node.end_mark = end_event.end_mark
|
||||
return node
|
||||
|
||||
def compose_mapping_node(self, anchor):
|
||||
start_event = self.get_event()
|
||||
tag = start_event.tag
|
||||
if tag is None or tag == '!':
|
||||
tag = self.resolve(MappingNode, None, start_event.implicit)
|
||||
node = MappingNode(tag, [],
|
||||
start_event.start_mark, None,
|
||||
flow_style=start_event.flow_style)
|
||||
if anchor is not None:
|
||||
self.anchors[anchor] = node
|
||||
while not self.check_event(MappingEndEvent):
|
||||
#key_event = self.peek_event()
|
||||
item_key = self.compose_node(node, None)
|
||||
#if item_key in node.value:
|
||||
# raise ComposerError("while composing a mapping", start_event.start_mark,
|
||||
# "found duplicate key", key_event.start_mark)
|
||||
item_value = self.compose_node(node, item_key)
|
||||
#node.value[item_key] = item_value
|
||||
node.value.append((item_key, item_value))
|
||||
end_event = self.get_event()
|
||||
node.end_mark = end_event.end_mark
|
||||
return node
|
||||
|
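A sketch of what the anchor bookkeeping above buys: an alias composes to the very node object produced by its anchor, so shared structure survives into the representation graph.

import yaml

doc = 'base: &b {retry: true}\njob: *b\n'
root = yaml.compose(doc)
values = dict((key.value, value) for key, value in root.value)
assert values['job'] is values['base']   # the same MappingNode instance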
|
@ -0,0 +1,686 @@
|
|||
|
||||
__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
|
||||
'ConstructorError']
|
||||
|
||||
from .error import *
|
||||
from .nodes import *
|
||||
|
||||
import collections, datetime, base64, binascii, re, sys, types
|
||||
|
||||
class ConstructorError(MarkedYAMLError):
|
||||
pass
|
||||
|
||||
class BaseConstructor:
|
||||
|
||||
yaml_constructors = {}
|
||||
yaml_multi_constructors = {}
|
||||
|
||||
def __init__(self):
|
||||
self.constructed_objects = {}
|
||||
self.recursive_objects = {}
|
||||
self.state_generators = []
|
||||
self.deep_construct = False
|
||||
|
||||
def check_data(self):
|
||||
# Check whether more documents are available.
|
||||
return self.check_node()
|
||||
|
||||
def get_data(self):
|
||||
# Construct and return the next document.
|
||||
if self.check_node():
|
||||
return self.construct_document(self.get_node())
|
||||
|
||||
def get_single_data(self):
|
||||
# Ensure that the stream contains a single document and construct it.
|
||||
node = self.get_single_node()
|
||||
if node is not None:
|
||||
return self.construct_document(node)
|
||||
return None
|
||||
|
||||
def construct_document(self, node):
|
||||
data = self.construct_object(node)
|
||||
while self.state_generators:
|
||||
state_generators = self.state_generators
|
||||
self.state_generators = []
|
||||
for generator in state_generators:
|
||||
for dummy in generator:
|
||||
pass
|
||||
self.constructed_objects = {}
|
||||
self.recursive_objects = {}
|
||||
self.deep_construct = False
|
||||
return data
|
||||
|
||||
def construct_object(self, node, deep=False):
|
||||
if node in self.constructed_objects:
|
||||
return self.constructed_objects[node]
|
||||
if deep:
|
||||
old_deep = self.deep_construct
|
||||
self.deep_construct = True
|
||||
if node in self.recursive_objects:
|
||||
raise ConstructorError(None, None,
|
||||
"found unconstructable recursive node", node.start_mark)
|
||||
self.recursive_objects[node] = None
|
||||
constructor = None
|
||||
tag_suffix = None
|
||||
if node.tag in self.yaml_constructors:
|
||||
constructor = self.yaml_constructors[node.tag]
|
||||
else:
|
||||
for tag_prefix in self.yaml_multi_constructors:
|
||||
if node.tag.startswith(tag_prefix):
|
||||
tag_suffix = node.tag[len(tag_prefix):]
|
||||
constructor = self.yaml_multi_constructors[tag_prefix]
|
||||
break
|
||||
else:
|
||||
if None in self.yaml_multi_constructors:
|
||||
tag_suffix = node.tag
|
||||
constructor = self.yaml_multi_constructors[None]
|
||||
elif None in self.yaml_constructors:
|
||||
constructor = self.yaml_constructors[None]
|
||||
elif isinstance(node, ScalarNode):
|
||||
constructor = self.__class__.construct_scalar
|
||||
elif isinstance(node, SequenceNode):
|
||||
constructor = self.__class__.construct_sequence
|
||||
elif isinstance(node, MappingNode):
|
||||
constructor = self.__class__.construct_mapping
|
||||
if tag_suffix is None:
|
||||
data = constructor(self, node)
|
||||
else:
|
||||
data = constructor(self, tag_suffix, node)
|
||||
if isinstance(data, types.GeneratorType):
|
||||
generator = data
|
||||
data = next(generator)
|
||||
if self.deep_construct:
|
||||
for dummy in generator:
|
||||
pass
|
||||
else:
|
||||
self.state_generators.append(generator)
|
||||
self.constructed_objects[node] = data
|
||||
del self.recursive_objects[node]
|
||||
if deep:
|
||||
self.deep_construct = old_deep
|
||||
return data
|
||||
|
||||
def construct_scalar(self, node):
|
||||
if not isinstance(node, ScalarNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a scalar node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
return node.value
|
||||
|
||||
def construct_sequence(self, node, deep=False):
|
||||
if not isinstance(node, SequenceNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a sequence node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
return [self.construct_object(child, deep=deep)
|
||||
for child in node.value]
|
||||
|
||||
def construct_mapping(self, node, deep=False):
|
||||
if not isinstance(node, MappingNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a mapping node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
mapping = {}
|
||||
for key_node, value_node in node.value:
|
||||
key = self.construct_object(key_node, deep=deep)
|
||||
if not isinstance(key, collections.Hashable):
|
||||
raise ConstructorError("while constructing a mapping", node.start_mark,
|
||||
"found unhashable key", key_node.start_mark)
|
||||
value = self.construct_object(value_node, deep=deep)
|
||||
mapping[key] = value
|
||||
return mapping
|
||||
|
||||
def construct_pairs(self, node, deep=False):
|
||||
if not isinstance(node, MappingNode):
|
||||
raise ConstructorError(None, None,
|
||||
"expected a mapping node, but found %s" % node.id,
|
||||
node.start_mark)
|
||||
pairs = []
|
||||
for key_node, value_node in node.value:
|
||||
key = self.construct_object(key_node, deep=deep)
|
||||
value = self.construct_object(value_node, deep=deep)
|
||||
pairs.append((key, value))
|
||||
return pairs
|
||||
|
||||
@classmethod
|
||||
def add_constructor(cls, tag, constructor):
|
||||
if not 'yaml_constructors' in cls.__dict__:
|
||||
cls.yaml_constructors = cls.yaml_constructors.copy()
|
||||
cls.yaml_constructors[tag] = constructor
|
||||
|
||||
@classmethod
|
||||
def add_multi_constructor(cls, tag_prefix, multi_constructor):
|
||||
if not 'yaml_multi_constructors' in cls.__dict__:
|
||||
cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
|
||||
cls.yaml_multi_constructors[tag_prefix] = multi_constructor
|
||||
|
||||
class SafeConstructor(BaseConstructor):
|
||||
|
||||
def construct_scalar(self, node):
|
||||
if isinstance(node, MappingNode):
|
||||
for key_node, value_node in node.value:
|
||||
if key_node.tag == 'tag:yaml.org,2002:value':
|
||||
return self.construct_scalar(value_node)
|
||||
return super().construct_scalar(node)
|
||||
|
||||
def flatten_mapping(self, node):
|
||||
merge = []
|
||||
index = 0
|
||||
while index < len(node.value):
|
||||
key_node, value_node = node.value[index]
|
||||
if key_node.tag == 'tag:yaml.org,2002:merge':
|
||||
del node.value[index]
|
||||
if isinstance(value_node, MappingNode):
|
||||
self.flatten_mapping(value_node)
|
||||
merge.extend(value_node.value)
|
||||
elif isinstance(value_node, SequenceNode):
|
||||
submerge = []
|
||||
for subnode in value_node.value:
|
||||
if not isinstance(subnode, MappingNode):
|
||||
raise ConstructorError("while constructing a mapping",
|
||||
node.start_mark,
|
||||
"expected a mapping for merging, but found %s"
|
||||
% subnode.id, subnode.start_mark)
|
||||
self.flatten_mapping(subnode)
|
||||
submerge.append(subnode.value)
|
||||
submerge.reverse()
|
||||
for value in submerge:
|
||||
merge.extend(value)
|
||||
else:
|
||||
raise ConstructorError("while constructing a mapping", node.start_mark,
|
||||
"expected a mapping or list of mappings for merging, but found %s"
|
||||
% value_node.id, value_node.start_mark)
|
||||
elif key_node.tag == 'tag:yaml.org,2002:value':
|
||||
key_node.tag = 'tag:yaml.org,2002:str'
|
||||
index += 1
|
||||
else:
|
||||
index += 1
|
||||
if merge:
|
||||
node.value = merge + node.value
|
||||
|
||||
def construct_mapping(self, node, deep=False):
|
||||
if isinstance(node, MappingNode):
|
||||
self.flatten_mapping(node)
|
||||
return super().construct_mapping(node, deep=deep)
|
||||
|
||||
def construct_yaml_null(self, node):
|
||||
self.construct_scalar(node)
|
||||
return None
|
||||
|
||||
bool_values = {
|
||||
'yes': True,
|
||||
'no': False,
|
||||
'true': True,
|
||||
'false': False,
|
||||
'on': True,
|
||||
'off': False,
|
||||
}
|
||||
|
||||
def construct_yaml_bool(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
return self.bool_values[value.lower()]
|
||||
|
||||
def construct_yaml_int(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
value = value.replace('_', '')
|
||||
sign = +1
|
||||
if value[0] == '-':
|
||||
sign = -1
|
||||
if value[0] in '+-':
|
||||
value = value[1:]
|
||||
if value == '0':
|
||||
return 0
|
||||
elif value.startswith('0b'):
|
||||
return sign*int(value[2:], 2)
|
||||
elif value.startswith('0x'):
|
||||
return sign*int(value[2:], 16)
|
||||
elif value[0] == '0':
|
||||
return sign*int(value, 8)
|
||||
elif ':' in value:
|
||||
digits = [int(part) for part in value.split(':')]
|
||||
digits.reverse()
|
||||
base = 1
|
||||
value = 0
|
||||
for digit in digits:
|
||||
value += digit*base
|
||||
base *= 60
|
||||
return sign*value
|
||||
else:
|
||||
return sign*int(value)
|
||||
|
||||
inf_value = 1e300
|
||||
while inf_value != inf_value*inf_value:
|
||||
inf_value *= inf_value
|
||||
nan_value = -inf_value/inf_value # Trying to make a quiet NaN (like C99).
|
||||
|
||||
def construct_yaml_float(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
value = value.replace('_', '').lower()
|
||||
sign = +1
|
||||
if value[0] == '-':
|
||||
sign = -1
|
||||
if value[0] in '+-':
|
||||
value = value[1:]
|
||||
if value == '.inf':
|
||||
return sign*self.inf_value
|
||||
elif value == '.nan':
|
||||
return self.nan_value
|
||||
elif ':' in value:
|
||||
digits = [float(part) for part in value.split(':')]
|
||||
digits.reverse()
|
||||
base = 1
|
||||
value = 0.0
|
||||
for digit in digits:
|
||||
value += digit*base
|
||||
base *= 60
|
||||
return sign*value
|
||||
else:
|
||||
return sign*float(value)
|
||||
|
||||
def construct_yaml_binary(self, node):
|
||||
try:
|
||||
value = self.construct_scalar(node).encode('ascii')
|
||||
except UnicodeEncodeError as exc:
|
||||
raise ConstructorError(None, None,
|
||||
"failed to convert base64 data into ascii: %s" % exc,
|
||||
node.start_mark)
|
||||
try:
|
||||
if hasattr(base64, 'decodebytes'):
|
||||
return base64.decodebytes(value)
|
||||
else:
|
||||
return base64.decodestring(value)
|
||||
except binascii.Error as exc:
|
||||
raise ConstructorError(None, None,
|
||||
"failed to decode base64 data: %s" % exc, node.start_mark)
|
||||
|
||||
timestamp_regexp = re.compile(
|
||||
r'''^(?P<year>[0-9][0-9][0-9][0-9])
|
||||
-(?P<month>[0-9][0-9]?)
|
||||
-(?P<day>[0-9][0-9]?)
|
||||
(?:(?:[Tt]|[ \t]+)
|
||||
(?P<hour>[0-9][0-9]?)
|
||||
:(?P<minute>[0-9][0-9])
|
||||
:(?P<second>[0-9][0-9])
|
||||
(?:\.(?P<fraction>[0-9]*))?
|
||||
(?:[ \t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
|
||||
(?::(?P<tz_minute>[0-9][0-9]))?))?)?$''', re.X)
|
||||
|
||||
def construct_yaml_timestamp(self, node):
|
||||
value = self.construct_scalar(node)
|
||||
match = self.timestamp_regexp.match(node.value)
|
||||
values = match.groupdict()
|
||||
year = int(values['year'])
|
||||
month = int(values['month'])
|
||||
day = int(values['day'])
|
||||
if not values['hour']:
|
||||
return datetime.date(year, month, day)
|
||||
hour = int(values['hour'])
|
||||
minute = int(values['minute'])
|
||||
second = int(values['second'])
|
||||
fraction = 0
|
||||
if values['fraction']:
|
||||
fraction = values['fraction'][:6]
|
||||
while len(fraction) < 6:
|
||||
fraction += '0'
|
||||
fraction = int(fraction)
|
||||
delta = None
|
||||
if values['tz_sign']:
|
||||
tz_hour = int(values['tz_hour'])
|
||||
tz_minute = int(values['tz_minute'] or 0)
|
||||
delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
|
||||
if values['tz_sign'] == '-':
|
||||
delta = -delta
|
||||
data = datetime.datetime(year, month, day, hour, minute, second, fraction)
|
||||
if delta:
|
||||
data -= delta
|
||||
return data
|
||||
|
||||
def construct_yaml_omap(self, node):
|
||||
# Note: we do not check for duplicate keys, because it's too
|
||||
# CPU-expensive.
|
||||
omap = []
|
||||
yield omap
|
||||
if not isinstance(node, SequenceNode):
|
||||
raise ConstructorError("while constructing an ordered map", node.start_mark,
|
||||
"expected a sequence, but found %s" % node.id, node.start_mark)
|
||||
for subnode in node.value:
|
||||
if not isinstance(subnode, MappingNode):
|
||||
raise ConstructorError("while constructing an ordered map", node.start_mark,
|
||||
"expected a mapping of length 1, but found %s" % subnode.id,
|
||||
subnode.start_mark)
|
||||
if len(subnode.value) != 1:
|
||||
raise ConstructorError("while constructing an ordered map", node.start_mark,
|
||||
"expected a single mapping item, but found %d items" % len(subnode.value),
|
||||
subnode.start_mark)
|
||||
key_node, value_node = subnode.value[0]
|
||||
key = self.construct_object(key_node)
|
||||
value = self.construct_object(value_node)
|
||||
omap.append((key, value))
|
||||
|
||||
def construct_yaml_pairs(self, node):
|
||||
# Note: the same code as `construct_yaml_omap`.
|
||||
pairs = []
|
||||
yield pairs
|
||||
if not isinstance(node, SequenceNode):
|
||||
raise ConstructorError("while constructing pairs", node.start_mark,
|
||||
"expected a sequence, but found %s" % node.id, node.start_mark)
|
||||
for subnode in node.value:
|
||||
if not isinstance(subnode, MappingNode):
|
||||
raise ConstructorError("while constructing pairs", node.start_mark,
|
||||
"expected a mapping of length 1, but found %s" % subnode.id,
|
||||
subnode.start_mark)
|
||||
if len(subnode.value) != 1:
|
||||
raise ConstructorError("while constructing pairs", node.start_mark,
|
||||
"expected a single mapping item, but found %d items" % len(subnode.value),
|
||||
subnode.start_mark)
|
||||
key_node, value_node = subnode.value[0]
|
||||
key = self.construct_object(key_node)
|
||||
value = self.construct_object(value_node)
|
||||
pairs.append((key, value))
|
||||
|
||||
def construct_yaml_set(self, node):
|
||||
data = set()
|
||||
yield data
|
||||
value = self.construct_mapping(node)
|
||||
data.update(value)
|
||||
|
||||
def construct_yaml_str(self, node):
|
||||
return self.construct_scalar(node)
|
||||
|
||||
def construct_yaml_seq(self, node):
|
||||
data = []
|
||||
yield data
|
||||
data.extend(self.construct_sequence(node))
|
||||
|
||||
def construct_yaml_map(self, node):
|
||||
data = {}
|
||||
yield data
|
||||
value = self.construct_mapping(node)
|
||||
data.update(value)
|
||||
|
||||
def construct_yaml_object(self, node, cls):
|
||||
data = cls.__new__(cls)
|
||||
yield data
|
||||
if hasattr(data, '__setstate__'):
|
||||
state = self.construct_mapping(node, deep=True)
|
||||
data.__setstate__(state)
|
||||
else:
|
||||
state = self.construct_mapping(node)
|
||||
data.__dict__.update(state)
|
||||
|
||||
def construct_undefined(self, node):
|
||||
raise ConstructorError(None, None,
|
||||
"could not determine a constructor for the tag %r" % node.tag,
|
||||
node.start_mark)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:null',
|
||||
SafeConstructor.construct_yaml_null)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:bool',
|
||||
SafeConstructor.construct_yaml_bool)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:int',
|
||||
SafeConstructor.construct_yaml_int)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:float',
|
||||
SafeConstructor.construct_yaml_float)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:binary',
|
||||
SafeConstructor.construct_yaml_binary)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:timestamp',
|
||||
SafeConstructor.construct_yaml_timestamp)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:omap',
|
||||
SafeConstructor.construct_yaml_omap)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:pairs',
|
||||
SafeConstructor.construct_yaml_pairs)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:set',
|
||||
SafeConstructor.construct_yaml_set)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:str',
|
||||
SafeConstructor.construct_yaml_str)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:seq',
|
||||
SafeConstructor.construct_yaml_seq)
|
||||
|
||||
SafeConstructor.add_constructor(
|
||||
'tag:yaml.org,2002:map',
|
||||
SafeConstructor.construct_yaml_map)
|
||||
|
||||
SafeConstructor.add_constructor(None,
|
||||
SafeConstructor.construct_undefined)
|
||||
|
||||
class Constructor(SafeConstructor):
|
||||
|
||||
def construct_python_str(self, node):
|
||||
return self.construct_scalar(node)
|
||||
|
||||
def construct_python_unicode(self, node):
|
||||
return self.construct_scalar(node)
|
||||
|
||||
def construct_python_bytes(self, node):
|
||||
try:
|
||||
value = self.construct_scalar(node).encode('ascii')
|
||||
except UnicodeEncodeError as exc:
|
||||
raise ConstructorError(None, None,
|
||||
"failed to convert base64 data into ascii: %s" % exc,
|
||||
node.start_mark)
|
||||
try:
|
||||
if hasattr(base64, 'decodebytes'):
|
||||
return base64.decodebytes(value)
|
||||
else:
|
||||
return base64.decodestring(value)
|
||||
except binascii.Error as exc:
|
||||
raise ConstructorError(None, None,
|
||||
"failed to decode base64 data: %s" % exc, node.start_mark)
|
||||
|
||||
def construct_python_long(self, node):
|
||||
return self.construct_yaml_int(node)
|
||||
|
||||
def construct_python_complex(self, node):
|
||||
return complex(self.construct_scalar(node))
|
||||
|
||||
def construct_python_tuple(self, node):
|
||||
return tuple(self.construct_sequence(node))
|
||||
|
||||
def find_python_module(self, name, mark):
|
||||
if not name:
|
||||
raise ConstructorError("while constructing a Python module", mark,
|
||||
"expected non-empty name appended to the tag", mark)
|
||||
try:
|
||||
__import__(name)
|
||||
except ImportError as exc:
|
||||
raise ConstructorError("while constructing a Python module", mark,
|
||||
"cannot find module %r (%s)" % (name, exc), mark)
|
||||
return sys.modules[name]
|
||||
|
||||
def find_python_name(self, name, mark):
|
||||
if not name:
|
||||
raise ConstructorError("while constructing a Python object", mark,
|
||||
"expected non-empty name appended to the tag", mark)
|
||||
if '.' in name:
|
||||
module_name, object_name = name.rsplit('.', 1)
|
||||
else:
|
||||
module_name = 'builtins'
|
||||
object_name = name
|
||||
try:
|
||||
__import__(module_name)
|
||||
except ImportError as exc:
|
||||
raise ConstructorError("while constructing a Python object", mark,
|
||||
"cannot find module %r (%s)" % (module_name, exc), mark)
|
||||
module = sys.modules[module_name]
|
||||
if not hasattr(module, object_name):
|
||||
raise ConstructorError("while constructing a Python object", mark,
|
||||
"cannot find %r in the module %r"
|
||||
% (object_name, module.__name__), mark)
|
||||
return getattr(module, object_name)
|
||||
|
||||
def construct_python_name(self, suffix, node):
|
||||
value = self.construct_scalar(node)
|
||||
if value:
|
||||
raise ConstructorError("while constructing a Python name", node.start_mark,
|
||||
"expected the empty value, but found %r" % value, node.start_mark)
|
||||
return self.find_python_name(suffix, node.start_mark)
|
||||
|
||||
def construct_python_module(self, suffix, node):
|
||||
value = self.construct_scalar(node)
|
||||
if value:
|
||||
raise ConstructorError("while constructing a Python module", node.start_mark,
|
||||
"expected the empty value, but found %r" % value, node.start_mark)
|
||||
return self.find_python_module(suffix, node.start_mark)
|
||||
|
||||
def make_python_instance(self, suffix, node,
|
||||
args=None, kwds=None, newobj=False):
|
||||
if not args:
|
||||
args = []
|
||||
if not kwds:
|
||||
kwds = {}
|
||||
cls = self.find_python_name(suffix, node.start_mark)
|
||||
if newobj and isinstance(cls, type):
|
||||
return cls.__new__(cls, *args, **kwds)
|
||||
else:
|
||||
return cls(*args, **kwds)
|
||||
|
||||
def set_python_instance_state(self, instance, state):
|
||||
if hasattr(instance, '__setstate__'):
|
||||
instance.__setstate__(state)
|
||||
else:
|
||||
slotstate = {}
|
||||
if isinstance(state, tuple) and len(state) == 2:
|
||||
state, slotstate = state
|
||||
if hasattr(instance, '__dict__'):
|
||||
instance.__dict__.update(state)
|
||||
elif state:
|
||||
slotstate.update(state)
|
||||
for key, value in slotstate.items():
|
||||
setattr(instance, key, value)
|
||||
|
||||
def construct_python_object(self, suffix, node):
|
||||
# Format:
|
||||
# !!python/object:module.name { ... state ... }
|
||||
instance = self.make_python_instance(suffix, node, newobj=True)
|
||||
yield instance
|
||||
deep = hasattr(instance, '__setstate__')
|
||||
state = self.construct_mapping(node, deep=deep)
|
||||
self.set_python_instance_state(instance, state)
|
||||
|
||||
def construct_python_object_apply(self, suffix, node, newobj=False):
|
||||
# Format:
|
||||
# !!python/object/apply # (or !!python/object/new)
|
||||
# args: [ ... arguments ... ]
|
||||
# kwds: { ... keywords ... }
|
||||
# state: ... state ...
|
||||
# listitems: [ ... listitems ... ]
|
||||
# dictitems: { ... dictitems ... }
|
||||
# or short format:
|
||||
# !!python/object/apply [ ... arguments ... ]
|
||||
# The difference between !!python/object/apply and !!python/object/new
|
||||
# is how an object is created, check make_python_instance for details.
|
||||
if isinstance(node, SequenceNode):
|
||||
args = self.construct_sequence(node, deep=True)
|
||||
kwds = {}
|
||||
state = {}
|
||||
listitems = []
|
||||
dictitems = {}
|
||||
else:
|
||||
value = self.construct_mapping(node, deep=True)
|
||||
args = value.get('args', [])
|
||||
kwds = value.get('kwds', {})
|
||||
state = value.get('state', {})
|
||||
listitems = value.get('listitems', [])
|
||||
dictitems = value.get('dictitems', {})
|
||||
instance = self.make_python_instance(suffix, node, args, kwds, newobj)
|
||||
if state:
|
||||
self.set_python_instance_state(instance, state)
|
||||
if listitems:
|
||||
instance.extend(listitems)
|
||||
if dictitems:
|
||||
for key in dictitems:
|
||||
instance[key] = dictitems[key]
|
||||
return instance
|
||||
|
||||
def construct_python_object_new(self, suffix, node):
|
||||
return self.construct_python_object_apply(suffix, node, newobj=True)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/none',
|
||||
Constructor.construct_yaml_null)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/bool',
|
||||
Constructor.construct_yaml_bool)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/str',
|
||||
Constructor.construct_python_str)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/unicode',
|
||||
Constructor.construct_python_unicode)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/bytes',
|
||||
Constructor.construct_python_bytes)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/int',
|
||||
Constructor.construct_yaml_int)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/long',
|
||||
Constructor.construct_python_long)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/float',
|
||||
Constructor.construct_yaml_float)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/complex',
|
||||
Constructor.construct_python_complex)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/list',
|
||||
Constructor.construct_yaml_seq)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/tuple',
|
||||
Constructor.construct_python_tuple)
|
||||
|
||||
Constructor.add_constructor(
|
||||
'tag:yaml.org,2002:python/dict',
|
||||
Constructor.construct_yaml_map)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
'tag:yaml.org,2002:python/name:',
|
||||
Constructor.construct_python_name)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
'tag:yaml.org,2002:python/module:',
|
||||
Constructor.construct_python_module)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
'tag:yaml.org,2002:python/object:',
|
||||
Constructor.construct_python_object)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
'tag:yaml.org,2002:python/object/apply:',
|
||||
Constructor.construct_python_object_apply)
|
||||
|
||||
Constructor.add_multi_constructor(
|
||||
'tag:yaml.org,2002:python/object/new:',
|
||||
Constructor.construct_python_object_new)
|
||||
|
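A few SafeConstructor behaviours defined above, shown through safe_load (a hedged sketch, assuming the vendored package imports as yaml): YAML 1.1 integer forms, boolean words, and merge keys.

import yaml

assert yaml.safe_load('1_000') == 1000      # underscores stripped
assert yaml.safe_load('0x1F') == 31         # hexadecimal
assert yaml.safe_load('1:30') == 90         # base-60 (sexagesimal)
assert yaml.safe_load('on') is True         # bool_values table
merged = yaml.safe_load('defaults: &d {a: 1}\njob: {<<: *d, b: 2}\n')
assert merged['job'] == {'a': 1, 'b': 2}    # flatten_mapping applied the merge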
|
@ -0,0 +1,85 @@
|
|||
|
||||
__all__ = ['CBaseLoader', 'CSafeLoader', 'CLoader',
|
||||
'CBaseDumper', 'CSafeDumper', 'CDumper']
|
||||
|
||||
from _yaml import CParser, CEmitter
|
||||
|
||||
from .constructor import *
|
||||
|
||||
from .serializer import *
|
||||
from .representer import *
|
||||
|
||||
from .resolver import *
|
||||
|
||||
class CBaseLoader(CParser, BaseConstructor, BaseResolver):
|
||||
|
||||
def __init__(self, stream):
|
||||
CParser.__init__(self, stream)
|
||||
BaseConstructor.__init__(self)
|
||||
BaseResolver.__init__(self)
|
||||
|
||||
class CSafeLoader(CParser, SafeConstructor, Resolver):
|
||||
|
||||
def __init__(self, stream):
|
||||
CParser.__init__(self, stream)
|
||||
SafeConstructor.__init__(self)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CLoader(CParser, Constructor, Resolver):
|
||||
|
||||
def __init__(self, stream):
|
||||
CParser.__init__(self, stream)
|
||||
Constructor.__init__(self)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
CEmitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width, encoding=encoding,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
Representer.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CSafeDumper(CEmitter, SafeRepresenter, Resolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
CEmitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width, encoding=encoding,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
SafeRepresenter.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
||||
class CDumper(CEmitter, Serializer, Representer, Resolver):
|
||||
|
||||
def __init__(self, stream,
|
||||
default_style=None, default_flow_style=None,
|
||||
canonical=None, indent=None, width=None,
|
||||
allow_unicode=None, line_break=None,
|
||||
encoding=None, explicit_start=None, explicit_end=None,
|
||||
version=None, tags=None):
|
||||
CEmitter.__init__(self, stream, canonical=canonical,
|
||||
indent=indent, width=width, encoding=encoding,
|
||||
allow_unicode=allow_unicode, line_break=line_break,
|
||||
explicit_start=explicit_start, explicit_end=explicit_end,
|
||||
version=version, tags=tags)
|
||||
Representer.__init__(self, default_style=default_style,
|
||||
default_flow_style=default_flow_style)
|
||||
Resolver.__init__(self)
|
||||
|
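A small sketch of how callers typically pick these classes: prefer the libyaml-backed loader when the C extension is importable (mirroring the __with_libyaml__ guard in __init__.py) and fall back to the pure-Python one otherwise.

import yaml

try:
    from yaml import CSafeLoader as PreferredLoader
except ImportError:
    from yaml import SafeLoader as PreferredLoader

data = yaml.load('answer: 42', Loader=PreferredLoader)
assert data == {'answer': 42}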
|
@@ -0,0 +1,62 @@

__all__ = ['BaseDumper', 'SafeDumper', 'Dumper']

from .emitter import *
from .serializer import *
from .representer import *
from .resolver import *

class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):

    def __init__(self, stream,
            default_style=None, default_flow_style=None,
            canonical=None, indent=None, width=None,
            allow_unicode=None, line_break=None,
            encoding=None, explicit_start=None, explicit_end=None,
            version=None, tags=None):
        Emitter.__init__(self, stream, canonical=canonical,
                indent=indent, width=width,
                allow_unicode=allow_unicode, line_break=line_break)
        Serializer.__init__(self, encoding=encoding,
                explicit_start=explicit_start, explicit_end=explicit_end,
                version=version, tags=tags)
        Representer.__init__(self, default_style=default_style,
                default_flow_style=default_flow_style)
        Resolver.__init__(self)

class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):

    def __init__(self, stream,
            default_style=None, default_flow_style=None,
            canonical=None, indent=None, width=None,
            allow_unicode=None, line_break=None,
            encoding=None, explicit_start=None, explicit_end=None,
            version=None, tags=None):
        Emitter.__init__(self, stream, canonical=canonical,
                indent=indent, width=width,
                allow_unicode=allow_unicode, line_break=line_break)
        Serializer.__init__(self, encoding=encoding,
                explicit_start=explicit_start, explicit_end=explicit_end,
                version=version, tags=tags)
        SafeRepresenter.__init__(self, default_style=default_style,
                default_flow_style=default_flow_style)
        Resolver.__init__(self)

class Dumper(Emitter, Serializer, Representer, Resolver):

    def __init__(self, stream,
            default_style=None, default_flow_style=None,
            canonical=None, indent=None, width=None,
            allow_unicode=None, line_break=None,
            encoding=None, explicit_start=None, explicit_end=None,
            version=None, tags=None):
        Emitter.__init__(self, stream, canonical=canonical,
                indent=indent, width=width,
                allow_unicode=allow_unicode, line_break=line_break)
        Serializer.__init__(self, encoding=encoding,
                explicit_start=explicit_start, explicit_end=explicit_end,
                version=version, tags=tags)
        Representer.__init__(self, default_style=default_style,
                default_flow_style=default_flow_style)
        Resolver.__init__(self)
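For orientation, a hedged sketch of how these pure-Python dumper stacks are normally selected, again assuming the top-level yaml.dump() helper defined elsewhere in the package:

import yaml

record = {'platform': 'linux64', 'retries': [1, 2, 3]}

# SafeDumper represents only plain Python types.
print(yaml.dump(record, Dumper=yaml.SafeDumper, default_flow_style=False))

# Dumper adds the !!python/* tags via Representer, e.g. for tuples.
print(yaml.dump(('a', 'b'), Dumper=yaml.Dumper))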
The diff for this file is not shown because of its large size.
|
@@ -0,0 +1,75 @@
|
|||
|
||||
__all__ = ['Mark', 'YAMLError', 'MarkedYAMLError']
|
||||
|
||||
class Mark:
|
||||
|
||||
def __init__(self, name, index, line, column, buffer, pointer):
|
||||
self.name = name
|
||||
self.index = index
|
||||
self.line = line
|
||||
self.column = column
|
||||
self.buffer = buffer
|
||||
self.pointer = pointer
|
||||
|
||||
def get_snippet(self, indent=4, max_length=75):
|
||||
if self.buffer is None:
|
||||
return None
|
||||
head = ''
|
||||
start = self.pointer
|
||||
while start > 0 and self.buffer[start-1] not in '\0\r\n\x85\u2028\u2029':
|
||||
start -= 1
|
||||
if self.pointer-start > max_length/2-1:
|
||||
head = ' ... '
|
||||
start += 5
|
||||
break
|
||||
tail = ''
|
||||
end = self.pointer
|
||||
while end < len(self.buffer) and self.buffer[end] not in '\0\r\n\x85\u2028\u2029':
|
||||
end += 1
|
||||
if end-self.pointer > max_length/2-1:
|
||||
tail = ' ... '
|
||||
end -= 5
|
||||
break
|
||||
snippet = self.buffer[start:end]
|
||||
return ' '*indent + head + snippet + tail + '\n' \
|
||||
+ ' '*(indent+self.pointer-start+len(head)) + '^'
|
||||
|
||||
def __str__(self):
|
||||
snippet = self.get_snippet()
|
||||
where = " in \"%s\", line %d, column %d" \
|
||||
% (self.name, self.line+1, self.column+1)
|
||||
if snippet is not None:
|
||||
where += ":\n"+snippet
|
||||
return where
|
||||
|
||||
class YAMLError(Exception):
|
||||
pass
|
||||
|
||||
class MarkedYAMLError(YAMLError):
|
||||
|
||||
def __init__(self, context=None, context_mark=None,
|
||||
problem=None, problem_mark=None, note=None):
|
||||
self.context = context
|
||||
self.context_mark = context_mark
|
||||
self.problem = problem
|
||||
self.problem_mark = problem_mark
|
||||
self.note = note
|
||||
|
||||
def __str__(self):
|
||||
lines = []
|
||||
if self.context is not None:
|
||||
lines.append(self.context)
|
||||
if self.context_mark is not None \
|
||||
and (self.problem is None or self.problem_mark is None
|
||||
or self.context_mark.name != self.problem_mark.name
|
||||
or self.context_mark.line != self.problem_mark.line
|
||||
or self.context_mark.column != self.problem_mark.column):
|
||||
lines.append(str(self.context_mark))
|
||||
if self.problem is not None:
|
||||
lines.append(self.problem)
|
||||
if self.problem_mark is not None:
|
||||
lines.append(str(self.problem_mark))
|
||||
if self.note is not None:
|
||||
lines.append(self.note)
|
||||
return '\n'.join(lines)
|
||||
|
|
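A small illustration of the Mark and MarkedYAMLError classes defined above; the buffer and positions are made up for the example:

from yaml.error import Mark, MarkedYAMLError

buffer = "key: [1, 2\nnext: 3\0"
mark = Mark("<example>", 10, 0, 10, buffer, 10)

err = MarkedYAMLError(context="while parsing a flow sequence",
        problem="expected ',' or ']'", problem_mark=mark)
print(err)   # context and problem text, then a caret snippet from get_snippet()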
@@ -0,0 +1,86 @@
|
|||
|
||||
# Abstract classes.
|
||||
|
||||
class Event(object):
|
||||
def __init__(self, start_mark=None, end_mark=None):
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
def __repr__(self):
|
||||
attributes = [key for key in ['anchor', 'tag', 'implicit', 'value']
|
||||
if hasattr(self, key)]
|
||||
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
|
||||
for key in attributes])
|
||||
return '%s(%s)' % (self.__class__.__name__, arguments)
|
||||
|
||||
class NodeEvent(Event):
|
||||
def __init__(self, anchor, start_mark=None, end_mark=None):
|
||||
self.anchor = anchor
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
|
||||
class CollectionStartEvent(NodeEvent):
|
||||
def __init__(self, anchor, tag, implicit, start_mark=None, end_mark=None,
|
||||
flow_style=None):
|
||||
self.anchor = anchor
|
||||
self.tag = tag
|
||||
self.implicit = implicit
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
self.flow_style = flow_style
|
||||
|
||||
class CollectionEndEvent(Event):
|
||||
pass
|
||||
|
||||
# Implementations.
|
||||
|
||||
class StreamStartEvent(Event):
|
||||
def __init__(self, start_mark=None, end_mark=None, encoding=None):
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
self.encoding = encoding
|
||||
|
||||
class StreamEndEvent(Event):
|
||||
pass
|
||||
|
||||
class DocumentStartEvent(Event):
|
||||
def __init__(self, start_mark=None, end_mark=None,
|
||||
explicit=None, version=None, tags=None):
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
self.explicit = explicit
|
||||
self.version = version
|
||||
self.tags = tags
|
||||
|
||||
class DocumentEndEvent(Event):
|
||||
def __init__(self, start_mark=None, end_mark=None,
|
||||
explicit=None):
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
self.explicit = explicit
|
||||
|
||||
class AliasEvent(NodeEvent):
|
||||
pass
|
||||
|
||||
class ScalarEvent(NodeEvent):
|
||||
def __init__(self, anchor, tag, implicit, value,
|
||||
start_mark=None, end_mark=None, style=None):
|
||||
self.anchor = anchor
|
||||
self.tag = tag
|
||||
self.implicit = implicit
|
||||
self.value = value
|
||||
self.start_mark = start_mark
|
||||
self.end_mark = end_mark
|
||||
self.style = style
|
||||
|
||||
class SequenceStartEvent(CollectionStartEvent):
|
||||
pass
|
||||
|
||||
class SequenceEndEvent(CollectionEndEvent):
|
||||
pass
|
||||
|
||||
class MappingStartEvent(CollectionStartEvent):
|
||||
pass
|
||||
|
||||
class MappingEndEvent(CollectionEndEvent):
|
||||
pass
|
||||
|
|
@@ -0,0 +1,40 @@

__all__ = ['BaseLoader', 'SafeLoader', 'Loader']

from .reader import *
from .scanner import *
from .parser import *
from .composer import *
from .constructor import *
from .resolver import *

class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, BaseResolver):

    def __init__(self, stream):
        Reader.__init__(self, stream)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        BaseConstructor.__init__(self)
        BaseResolver.__init__(self)

class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):

    def __init__(self, stream):
        Reader.__init__(self, stream)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        SafeConstructor.__init__(self)
        Resolver.__init__(self)

class Loader(Reader, Scanner, Parser, Composer, Constructor, Resolver):

    def __init__(self, stream):
        Reader.__init__(self, stream)
        Scanner.__init__(self)
        Parser.__init__(self)
        Composer.__init__(self)
        Constructor.__init__(self)
        Resolver.__init__(self)
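A hedged usage sketch for the loader stacks above, assuming the package's top-level yaml.load() helper, which is not part of this hunk:

import yaml

document = "job: taskcluster-build\nretries: 3\n"

# SafeLoader builds only plain Python objects and is the safer default.
data = yaml.load(document, Loader=yaml.SafeLoader)
assert data == {'job': 'taskcluster-build', 'retries': 3}

# Loader also understands !!python/* tags; only use it on trusted input.
assert yaml.load("!!python/tuple [1, 2]", Loader=yaml.Loader) == (1, 2)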
@@ -0,0 +1,49 @@

class Node(object):
    def __init__(self, tag, value, start_mark, end_mark):
        self.tag = tag
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark
    def __repr__(self):
        value = self.value
        #if isinstance(value, list):
        #    if len(value) == 0:
        #        value = '<empty>'
        #    elif len(value) == 1:
        #        value = '<1 item>'
        #    else:
        #        value = '<%d items>' % len(value)
        #else:
        #    if len(value) > 75:
        #        value = repr(value[:70]+u' ... ')
        #    else:
        #        value = repr(value)
        value = repr(value)
        return '%s(tag=%r, value=%s)' % (self.__class__.__name__, self.tag, value)

class ScalarNode(Node):
    id = 'scalar'
    def __init__(self, tag, value,
            start_mark=None, end_mark=None, style=None):
        self.tag = tag
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.style = style

class CollectionNode(Node):
    def __init__(self, tag, value,
            start_mark=None, end_mark=None, flow_style=None):
        self.tag = tag
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.flow_style = flow_style

class SequenceNode(CollectionNode):
    id = 'sequence'

class MappingNode(CollectionNode):
    id = 'mapping'
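The node classes above are what the composer produces; a quick look at them, assuming the top-level yaml.compose() helper:

import yaml
from yaml.nodes import MappingNode, ScalarNode, SequenceNode

root = yaml.compose("steps: [fetch, build]")
assert isinstance(root, MappingNode)
key, value = root.value[0]
assert isinstance(key, ScalarNode) and key.value == 'steps'
assert isinstance(value, SequenceNode)
print(value.start_mark)   # every node carries position information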
@@ -0,0 +1,589 @@
|
|||
|
||||
# The following YAML grammar is LL(1) and is parsed by a recursive descent
|
||||
# parser.
|
||||
#
|
||||
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
|
||||
# implicit_document ::= block_node DOCUMENT-END*
|
||||
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
|
||||
# block_node_or_indentless_sequence ::=
|
||||
# ALIAS
|
||||
# | properties (block_content | indentless_block_sequence)?
|
||||
# | block_content
|
||||
# | indentless_block_sequence
|
||||
# block_node ::= ALIAS
|
||||
# | properties block_content?
|
||||
# | block_content
|
||||
# flow_node ::= ALIAS
|
||||
# | properties flow_content?
|
||||
# | flow_content
|
||||
# properties ::= TAG ANCHOR? | ANCHOR TAG?
|
||||
# block_content ::= block_collection | flow_collection | SCALAR
|
||||
# flow_content ::= flow_collection | SCALAR
|
||||
# block_collection ::= block_sequence | block_mapping
|
||||
# flow_collection ::= flow_sequence | flow_mapping
|
||||
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
|
||||
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
|
||||
# block_mapping ::= BLOCK-MAPPING_START
|
||||
# ((KEY block_node_or_indentless_sequence?)?
|
||||
# (VALUE block_node_or_indentless_sequence?)?)*
|
||||
# BLOCK-END
|
||||
# flow_sequence ::= FLOW-SEQUENCE-START
|
||||
# (flow_sequence_entry FLOW-ENTRY)*
|
||||
# flow_sequence_entry?
|
||||
# FLOW-SEQUENCE-END
|
||||
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
# flow_mapping ::= FLOW-MAPPING-START
|
||||
# (flow_mapping_entry FLOW-ENTRY)*
|
||||
# flow_mapping_entry?
|
||||
# FLOW-MAPPING-END
|
||||
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
#
|
||||
# FIRST sets:
|
||||
#
|
||||
# stream: { STREAM-START }
|
||||
# explicit_document: { DIRECTIVE DOCUMENT-START }
|
||||
# implicit_document: FIRST(block_node)
|
||||
# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
|
||||
# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
|
||||
# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
|
||||
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# block_sequence: { BLOCK-SEQUENCE-START }
|
||||
# block_mapping: { BLOCK-MAPPING-START }
|
||||
# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START BLOCK-ENTRY }
|
||||
# indentless_sequence: { ENTRY }
|
||||
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
|
||||
# flow_sequence: { FLOW-SEQUENCE-START }
|
||||
# flow_mapping: { FLOW-MAPPING-START }
|
||||
# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
|
||||
# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
|
||||
|
||||
__all__ = ['Parser', 'ParserError']
|
||||
|
||||
from .error import MarkedYAMLError
|
||||
from .tokens import *
|
||||
from .events import *
|
||||
from .scanner import *
|
||||
|
||||
class ParserError(MarkedYAMLError):
|
||||
pass
|
||||
|
||||
class Parser:
|
||||
# Since writing a recursive descent parser is a straightforward task, we
# do not give many comments here.
|
||||
|
||||
DEFAULT_TAGS = {
|
||||
'!': '!',
|
||||
'!!': 'tag:yaml.org,2002:',
|
||||
}
|
||||
|
||||
def __init__(self):
|
||||
self.current_event = None
|
||||
self.yaml_version = None
|
||||
self.tag_handles = {}
|
||||
self.states = []
|
||||
self.marks = []
|
||||
self.state = self.parse_stream_start
|
||||
|
||||
def dispose(self):
|
||||
# Reset the state attributes (to clear self-references)
|
||||
self.states = []
|
||||
self.state = None
|
||||
|
||||
def check_event(self, *choices):
|
||||
# Check the type of the next event.
|
||||
if self.current_event is None:
|
||||
if self.state:
|
||||
self.current_event = self.state()
|
||||
if self.current_event is not None:
|
||||
if not choices:
|
||||
return True
|
||||
for choice in choices:
|
||||
if isinstance(self.current_event, choice):
|
||||
return True
|
||||
return False
|
||||
|
||||
def peek_event(self):
|
||||
# Get the next event.
|
||||
if self.current_event is None:
|
||||
if self.state:
|
||||
self.current_event = self.state()
|
||||
return self.current_event
|
||||
|
||||
def get_event(self):
|
||||
# Get the next event and proceed further.
|
||||
if self.current_event is None:
|
||||
if self.state:
|
||||
self.current_event = self.state()
|
||||
value = self.current_event
|
||||
self.current_event = None
|
||||
return value
|
||||
|
||||
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
|
||||
# implicit_document ::= block_node DOCUMENT-END*
|
||||
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
|
||||
|
||||
def parse_stream_start(self):
|
||||
|
||||
# Parse the stream start.
|
||||
token = self.get_token()
|
||||
event = StreamStartEvent(token.start_mark, token.end_mark,
|
||||
encoding=token.encoding)
|
||||
|
||||
# Prepare the next state.
|
||||
self.state = self.parse_implicit_document_start
|
||||
|
||||
return event
|
||||
|
||||
def parse_implicit_document_start(self):
|
||||
|
||||
# Parse an implicit document.
|
||||
if not self.check_token(DirectiveToken, DocumentStartToken,
|
||||
StreamEndToken):
|
||||
self.tag_handles = self.DEFAULT_TAGS
|
||||
token = self.peek_token()
|
||||
start_mark = end_mark = token.start_mark
|
||||
event = DocumentStartEvent(start_mark, end_mark,
|
||||
explicit=False)
|
||||
|
||||
# Prepare the next state.
|
||||
self.states.append(self.parse_document_end)
|
||||
self.state = self.parse_block_node
|
||||
|
||||
return event
|
||||
|
||||
else:
|
||||
return self.parse_document_start()
|
||||
|
||||
def parse_document_start(self):
|
||||
|
||||
# Parse any extra document end indicators.
|
||||
while self.check_token(DocumentEndToken):
|
||||
self.get_token()
|
||||
|
||||
# Parse an explicit document.
|
||||
if not self.check_token(StreamEndToken):
|
||||
token = self.peek_token()
|
||||
start_mark = token.start_mark
|
||||
version, tags = self.process_directives()
|
||||
if not self.check_token(DocumentStartToken):
|
||||
raise ParserError(None, None,
|
||||
"expected '<document start>', but found %r"
|
||||
% self.peek_token().id,
|
||||
self.peek_token().start_mark)
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
event = DocumentStartEvent(start_mark, end_mark,
|
||||
explicit=True, version=version, tags=tags)
|
||||
self.states.append(self.parse_document_end)
|
||||
self.state = self.parse_document_content
|
||||
else:
|
||||
# Parse the end of the stream.
|
||||
token = self.get_token()
|
||||
event = StreamEndEvent(token.start_mark, token.end_mark)
|
||||
assert not self.states
|
||||
assert not self.marks
|
||||
self.state = None
|
||||
return event
|
||||
|
||||
def parse_document_end(self):
|
||||
|
||||
# Parse the document end.
|
||||
token = self.peek_token()
|
||||
start_mark = end_mark = token.start_mark
|
||||
explicit = False
|
||||
if self.check_token(DocumentEndToken):
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
explicit = True
|
||||
event = DocumentEndEvent(start_mark, end_mark,
|
||||
explicit=explicit)
|
||||
|
||||
# Prepare the next state.
|
||||
self.state = self.parse_document_start
|
||||
|
||||
return event
|
||||
|
||||
def parse_document_content(self):
|
||||
if self.check_token(DirectiveToken,
|
||||
DocumentStartToken, DocumentEndToken, StreamEndToken):
|
||||
event = self.process_empty_scalar(self.peek_token().start_mark)
|
||||
self.state = self.states.pop()
|
||||
return event
|
||||
else:
|
||||
return self.parse_block_node()
|
||||
|
||||
def process_directives(self):
|
||||
self.yaml_version = None
|
||||
self.tag_handles = {}
|
||||
while self.check_token(DirectiveToken):
|
||||
token = self.get_token()
|
||||
if token.name == 'YAML':
|
||||
if self.yaml_version is not None:
|
||||
raise ParserError(None, None,
|
||||
"found duplicate YAML directive", token.start_mark)
|
||||
major, minor = token.value
|
||||
if major != 1:
|
||||
raise ParserError(None, None,
|
||||
"found incompatible YAML document (version 1.* is required)",
|
||||
token.start_mark)
|
||||
self.yaml_version = token.value
|
||||
elif token.name == 'TAG':
|
||||
handle, prefix = token.value
|
||||
if handle in self.tag_handles:
|
||||
raise ParserError(None, None,
|
||||
"duplicate tag handle %r" % handle,
|
||||
token.start_mark)
|
||||
self.tag_handles[handle] = prefix
|
||||
if self.tag_handles:
|
||||
value = self.yaml_version, self.tag_handles.copy()
|
||||
else:
|
||||
value = self.yaml_version, None
|
||||
for key in self.DEFAULT_TAGS:
|
||||
if key not in self.tag_handles:
|
||||
self.tag_handles[key] = self.DEFAULT_TAGS[key]
|
||||
return value
|
||||
|
||||
# block_node_or_indentless_sequence ::= ALIAS
|
||||
# | properties (block_content | indentless_block_sequence)?
|
||||
# | block_content
|
||||
# | indentless_block_sequence
|
||||
# block_node ::= ALIAS
|
||||
# | properties block_content?
|
||||
# | block_content
|
||||
# flow_node ::= ALIAS
|
||||
# | properties flow_content?
|
||||
# | flow_content
|
||||
# properties ::= TAG ANCHOR? | ANCHOR TAG?
|
||||
# block_content ::= block_collection | flow_collection | SCALAR
|
||||
# flow_content ::= flow_collection | SCALAR
|
||||
# block_collection ::= block_sequence | block_mapping
|
||||
# flow_collection ::= flow_sequence | flow_mapping
|
||||
|
||||
def parse_block_node(self):
|
||||
return self.parse_node(block=True)
|
||||
|
||||
def parse_flow_node(self):
|
||||
return self.parse_node()
|
||||
|
||||
def parse_block_node_or_indentless_sequence(self):
|
||||
return self.parse_node(block=True, indentless_sequence=True)
|
||||
|
||||
def parse_node(self, block=False, indentless_sequence=False):
|
||||
if self.check_token(AliasToken):
|
||||
token = self.get_token()
|
||||
event = AliasEvent(token.value, token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
else:
|
||||
anchor = None
|
||||
tag = None
|
||||
start_mark = end_mark = tag_mark = None
|
||||
if self.check_token(AnchorToken):
|
||||
token = self.get_token()
|
||||
start_mark = token.start_mark
|
||||
end_mark = token.end_mark
|
||||
anchor = token.value
|
||||
if self.check_token(TagToken):
|
||||
token = self.get_token()
|
||||
tag_mark = token.start_mark
|
||||
end_mark = token.end_mark
|
||||
tag = token.value
|
||||
elif self.check_token(TagToken):
|
||||
token = self.get_token()
|
||||
start_mark = tag_mark = token.start_mark
|
||||
end_mark = token.end_mark
|
||||
tag = token.value
|
||||
if self.check_token(AnchorToken):
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
anchor = token.value
|
||||
if tag is not None:
|
||||
handle, suffix = tag
|
||||
if handle is not None:
|
||||
if handle not in self.tag_handles:
|
||||
raise ParserError("while parsing a node", start_mark,
|
||||
"found undefined tag handle %r" % handle,
|
||||
tag_mark)
|
||||
tag = self.tag_handles[handle]+suffix
|
||||
else:
|
||||
tag = suffix
|
||||
#if tag == '!':
|
||||
# raise ParserError("while parsing a node", start_mark,
|
||||
# "found non-specific tag '!'", tag_mark,
|
||||
# "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag' and share your opinion.")
|
||||
if start_mark is None:
|
||||
start_mark = end_mark = self.peek_token().start_mark
|
||||
event = None
|
||||
implicit = (tag is None or tag == '!')
|
||||
if indentless_sequence and self.check_token(BlockEntryToken):
|
||||
end_mark = self.peek_token().end_mark
|
||||
event = SequenceStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark)
|
||||
self.state = self.parse_indentless_sequence_entry
|
||||
else:
|
||||
if self.check_token(ScalarToken):
|
||||
token = self.get_token()
|
||||
end_mark = token.end_mark
|
||||
if (token.plain and tag is None) or tag == '!':
|
||||
implicit = (True, False)
|
||||
elif tag is None:
|
||||
implicit = (False, True)
|
||||
else:
|
||||
implicit = (False, False)
|
||||
event = ScalarEvent(anchor, tag, implicit, token.value,
|
||||
start_mark, end_mark, style=token.style)
|
||||
self.state = self.states.pop()
|
||||
elif self.check_token(FlowSequenceStartToken):
|
||||
end_mark = self.peek_token().end_mark
|
||||
event = SequenceStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=True)
|
||||
self.state = self.parse_flow_sequence_first_entry
|
||||
elif self.check_token(FlowMappingStartToken):
|
||||
end_mark = self.peek_token().end_mark
|
||||
event = MappingStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=True)
|
||||
self.state = self.parse_flow_mapping_first_key
|
||||
elif block and self.check_token(BlockSequenceStartToken):
|
||||
end_mark = self.peek_token().start_mark
|
||||
event = SequenceStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=False)
|
||||
self.state = self.parse_block_sequence_first_entry
|
||||
elif block and self.check_token(BlockMappingStartToken):
|
||||
end_mark = self.peek_token().start_mark
|
||||
event = MappingStartEvent(anchor, tag, implicit,
|
||||
start_mark, end_mark, flow_style=False)
|
||||
self.state = self.parse_block_mapping_first_key
|
||||
elif anchor is not None or tag is not None:
|
||||
# Empty scalars are allowed even if a tag or an anchor is
|
||||
# specified.
|
||||
event = ScalarEvent(anchor, tag, (implicit, False), '',
|
||||
start_mark, end_mark)
|
||||
self.state = self.states.pop()
|
||||
else:
|
||||
if block:
|
||||
node = 'block'
|
||||
else:
|
||||
node = 'flow'
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a %s node" % node, start_mark,
|
||||
"expected the node content, but found %r" % token.id,
|
||||
token.start_mark)
|
||||
return event
|
||||
|
||||
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
|
||||
|
||||
def parse_block_sequence_first_entry(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_block_sequence_entry()
|
||||
|
||||
def parse_block_sequence_entry(self):
|
||||
if self.check_token(BlockEntryToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(BlockEntryToken, BlockEndToken):
|
||||
self.states.append(self.parse_block_sequence_entry)
|
||||
return self.parse_block_node()
|
||||
else:
|
||||
self.state = self.parse_block_sequence_entry
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
if not self.check_token(BlockEndToken):
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a block collection", self.marks[-1],
|
||||
"expected <block end>, but found %r" % token.id, token.start_mark)
|
||||
token = self.get_token()
|
||||
event = SequenceEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
|
||||
|
||||
def parse_indentless_sequence_entry(self):
|
||||
if self.check_token(BlockEntryToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(BlockEntryToken,
|
||||
KeyToken, ValueToken, BlockEndToken):
|
||||
self.states.append(self.parse_indentless_sequence_entry)
|
||||
return self.parse_block_node()
|
||||
else:
|
||||
self.state = self.parse_indentless_sequence_entry
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
token = self.peek_token()
|
||||
event = SequenceEndEvent(token.start_mark, token.start_mark)
|
||||
self.state = self.states.pop()
|
||||
return event
|
||||
|
||||
# block_mapping ::= BLOCK-MAPPING_START
|
||||
# ((KEY block_node_or_indentless_sequence?)?
|
||||
# (VALUE block_node_or_indentless_sequence?)?)*
|
||||
# BLOCK-END
|
||||
|
||||
def parse_block_mapping_first_key(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_block_mapping_key()
|
||||
|
||||
def parse_block_mapping_key(self):
|
||||
if self.check_token(KeyToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
|
||||
self.states.append(self.parse_block_mapping_value)
|
||||
return self.parse_block_node_or_indentless_sequence()
|
||||
else:
|
||||
self.state = self.parse_block_mapping_value
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
if not self.check_token(BlockEndToken):
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a block mapping", self.marks[-1],
|
||||
"expected <block end>, but found %r" % token.id, token.start_mark)
|
||||
token = self.get_token()
|
||||
event = MappingEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
def parse_block_mapping_value(self):
|
||||
if self.check_token(ValueToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
|
||||
self.states.append(self.parse_block_mapping_key)
|
||||
return self.parse_block_node_or_indentless_sequence()
|
||||
else:
|
||||
self.state = self.parse_block_mapping_key
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
else:
|
||||
self.state = self.parse_block_mapping_key
|
||||
token = self.peek_token()
|
||||
return self.process_empty_scalar(token.start_mark)
|
||||
|
||||
# flow_sequence ::= FLOW-SEQUENCE-START
|
||||
# (flow_sequence_entry FLOW-ENTRY)*
|
||||
# flow_sequence_entry?
|
||||
# FLOW-SEQUENCE-END
|
||||
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
#
|
||||
# Note that while production rules for both flow_sequence_entry and
|
||||
# flow_mapping_entry are equal, their interpretations are different.
|
||||
# For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
# generates an inline mapping (set syntax).
|
||||
|
||||
def parse_flow_sequence_first_entry(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_flow_sequence_entry(first=True)
|
||||
|
||||
def parse_flow_sequence_entry(self, first=False):
|
||||
if not self.check_token(FlowSequenceEndToken):
|
||||
if not first:
|
||||
if self.check_token(FlowEntryToken):
|
||||
self.get_token()
|
||||
else:
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a flow sequence", self.marks[-1],
|
||||
"expected ',' or ']', but got %r" % token.id, token.start_mark)
|
||||
|
||||
if self.check_token(KeyToken):
|
||||
token = self.peek_token()
|
||||
event = MappingStartEvent(None, None, True,
|
||||
token.start_mark, token.end_mark,
|
||||
flow_style=True)
|
||||
self.state = self.parse_flow_sequence_entry_mapping_key
|
||||
return event
|
||||
elif not self.check_token(FlowSequenceEndToken):
|
||||
self.states.append(self.parse_flow_sequence_entry)
|
||||
return self.parse_flow_node()
|
||||
token = self.get_token()
|
||||
event = SequenceEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
def parse_flow_sequence_entry_mapping_key(self):
|
||||
token = self.get_token()
|
||||
if not self.check_token(ValueToken,
|
||||
FlowEntryToken, FlowSequenceEndToken):
|
||||
self.states.append(self.parse_flow_sequence_entry_mapping_value)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_sequence_entry_mapping_value
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
|
||||
def parse_flow_sequence_entry_mapping_value(self):
|
||||
if self.check_token(ValueToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(FlowEntryToken, FlowSequenceEndToken):
|
||||
self.states.append(self.parse_flow_sequence_entry_mapping_end)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_sequence_entry_mapping_end
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
else:
|
||||
self.state = self.parse_flow_sequence_entry_mapping_end
|
||||
token = self.peek_token()
|
||||
return self.process_empty_scalar(token.start_mark)
|
||||
|
||||
def parse_flow_sequence_entry_mapping_end(self):
|
||||
self.state = self.parse_flow_sequence_entry
|
||||
token = self.peek_token()
|
||||
return MappingEndEvent(token.start_mark, token.start_mark)
|
||||
|
||||
# flow_mapping ::= FLOW-MAPPING-START
|
||||
# (flow_mapping_entry FLOW-ENTRY)*
|
||||
# flow_mapping_entry?
|
||||
# FLOW-MAPPING-END
|
||||
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
|
||||
|
||||
def parse_flow_mapping_first_key(self):
|
||||
token = self.get_token()
|
||||
self.marks.append(token.start_mark)
|
||||
return self.parse_flow_mapping_key(first=True)
|
||||
|
||||
def parse_flow_mapping_key(self, first=False):
|
||||
if not self.check_token(FlowMappingEndToken):
|
||||
if not first:
|
||||
if self.check_token(FlowEntryToken):
|
||||
self.get_token()
|
||||
else:
|
||||
token = self.peek_token()
|
||||
raise ParserError("while parsing a flow mapping", self.marks[-1],
|
||||
"expected ',' or '}', but got %r" % token.id, token.start_mark)
|
||||
if self.check_token(KeyToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(ValueToken,
|
||||
FlowEntryToken, FlowMappingEndToken):
|
||||
self.states.append(self.parse_flow_mapping_value)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_mapping_value
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
elif not self.check_token(FlowMappingEndToken):
|
||||
self.states.append(self.parse_flow_mapping_empty_value)
|
||||
return self.parse_flow_node()
|
||||
token = self.get_token()
|
||||
event = MappingEndEvent(token.start_mark, token.end_mark)
|
||||
self.state = self.states.pop()
|
||||
self.marks.pop()
|
||||
return event
|
||||
|
||||
def parse_flow_mapping_value(self):
|
||||
if self.check_token(ValueToken):
|
||||
token = self.get_token()
|
||||
if not self.check_token(FlowEntryToken, FlowMappingEndToken):
|
||||
self.states.append(self.parse_flow_mapping_key)
|
||||
return self.parse_flow_node()
|
||||
else:
|
||||
self.state = self.parse_flow_mapping_key
|
||||
return self.process_empty_scalar(token.end_mark)
|
||||
else:
|
||||
self.state = self.parse_flow_mapping_key
|
||||
token = self.peek_token()
|
||||
return self.process_empty_scalar(token.start_mark)
|
||||
|
||||
def parse_flow_mapping_empty_value(self):
|
||||
self.state = self.parse_flow_mapping_key
|
||||
return self.process_empty_scalar(self.peek_token().start_mark)
|
||||
|
||||
def process_empty_scalar(self, mark):
|
||||
return ScalarEvent(None, None, (True, False), '', mark, mark)
|
||||
|
|
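The parser above turns tokens into the event stream consumed by the composer and emitter. A quick way to inspect that stream, assuming the top-level yaml.parse() helper:

import yaml

for event in yaml.parse("- a\n- {b: 1}\n"):
    print(type(event).__name__, getattr(event, 'value', ''))

# Prints StreamStart, DocumentStart, SequenceStart, Scalar 'a',
# MappingStart, Scalar 'b', Scalar '1', MappingEnd, SequenceEnd,
# DocumentEnd and StreamEnd events, in that order.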
@@ -0,0 +1,192 @@
|
|||
# This module contains abstractions for the input stream. You don't have to
# look any further; there is no pretty code here.
|
||||
#
|
||||
# We define two classes here.
|
||||
#
|
||||
# Mark(source, line, column)
|
||||
# It's just a record and its only use is producing nice error messages.
|
||||
# Parser does not use it for any other purposes.
|
||||
#
|
||||
# Reader(source, data)
|
||||
# Reader determines the encoding of `data` and converts it to unicode.
|
||||
# Reader provides the following methods and attributes:
|
||||
# reader.peek(length=1) - return the next `length` characters
|
||||
# reader.forward(length=1) - move the current position forward by `length` characters.
|
||||
# reader.index - the number of the current character.
|
||||
# reader.line, reader.column - the line and the column of the current character.
|
||||
|
||||
__all__ = ['Reader', 'ReaderError']
|
||||
|
||||
from .error import YAMLError, Mark
|
||||
|
||||
import codecs, re
|
||||
|
||||
class ReaderError(YAMLError):
|
||||
|
||||
def __init__(self, name, position, character, encoding, reason):
|
||||
self.name = name
|
||||
self.character = character
|
||||
self.position = position
|
||||
self.encoding = encoding
|
||||
self.reason = reason
|
||||
|
||||
def __str__(self):
|
||||
if isinstance(self.character, bytes):
|
||||
return "'%s' codec can't decode byte #x%02x: %s\n" \
|
||||
" in \"%s\", position %d" \
|
||||
% (self.encoding, ord(self.character), self.reason,
|
||||
self.name, self.position)
|
||||
else:
|
||||
return "unacceptable character #x%04x: %s\n" \
|
||||
" in \"%s\", position %d" \
|
||||
% (self.character, self.reason,
|
||||
self.name, self.position)
|
||||
|
||||
class Reader(object):
|
||||
# Reader:
|
||||
# - determines the data encoding and converts it to a unicode string,
|
||||
# - checks if characters are in allowed range,
|
||||
# - adds '\0' to the end.
|
||||
|
||||
# Reader accepts
|
||||
# - a `bytes` object,
|
||||
# - a `str` object,
|
||||
# - a file-like object with its `read` method returning `str`,
|
||||
# - a file-like object with its `read` method returning `unicode`.
|
||||
|
||||
# Yeah, it's ugly and slow.
|
||||
|
||||
def __init__(self, stream):
|
||||
self.name = None
|
||||
self.stream = None
|
||||
self.stream_pointer = 0
|
||||
self.eof = True
|
||||
self.buffer = ''
|
||||
self.pointer = 0
|
||||
self.raw_buffer = None
|
||||
self.raw_decode = None
|
||||
self.encoding = None
|
||||
self.index = 0
|
||||
self.line = 0
|
||||
self.column = 0
|
||||
if isinstance(stream, str):
|
||||
self.name = "<unicode string>"
|
||||
self.check_printable(stream)
|
||||
self.buffer = stream+'\0'
|
||||
elif isinstance(stream, bytes):
|
||||
self.name = "<byte string>"
|
||||
self.raw_buffer = stream
|
||||
self.determine_encoding()
|
||||
else:
|
||||
self.stream = stream
|
||||
self.name = getattr(stream, 'name', "<file>")
|
||||
self.eof = False
|
||||
self.raw_buffer = None
|
||||
self.determine_encoding()
|
||||
|
||||
def peek(self, index=0):
|
||||
try:
|
||||
return self.buffer[self.pointer+index]
|
||||
except IndexError:
|
||||
self.update(index+1)
|
||||
return self.buffer[self.pointer+index]
|
||||
|
||||
def prefix(self, length=1):
|
||||
if self.pointer+length >= len(self.buffer):
|
||||
self.update(length)
|
||||
return self.buffer[self.pointer:self.pointer+length]
|
||||
|
||||
def forward(self, length=1):
|
||||
if self.pointer+length+1 >= len(self.buffer):
|
||||
self.update(length+1)
|
||||
while length:
|
||||
ch = self.buffer[self.pointer]
|
||||
self.pointer += 1
|
||||
self.index += 1
|
||||
if ch in '\n\x85\u2028\u2029' \
|
||||
or (ch == '\r' and self.buffer[self.pointer] != '\n'):
|
||||
self.line += 1
|
||||
self.column = 0
|
||||
elif ch != '\uFEFF':
|
||||
self.column += 1
|
||||
length -= 1
|
||||
|
||||
def get_mark(self):
|
||||
if self.stream is None:
|
||||
return Mark(self.name, self.index, self.line, self.column,
|
||||
self.buffer, self.pointer)
|
||||
else:
|
||||
return Mark(self.name, self.index, self.line, self.column,
|
||||
None, None)
|
||||
|
||||
def determine_encoding(self):
|
||||
while not self.eof and (self.raw_buffer is None or len(self.raw_buffer) < 2):
|
||||
self.update_raw()
|
||||
if isinstance(self.raw_buffer, bytes):
|
||||
if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
|
||||
self.raw_decode = codecs.utf_16_le_decode
|
||||
self.encoding = 'utf-16-le'
|
||||
elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
|
||||
self.raw_decode = codecs.utf_16_be_decode
|
||||
self.encoding = 'utf-16-be'
|
||||
else:
|
||||
self.raw_decode = codecs.utf_8_decode
|
||||
self.encoding = 'utf-8'
|
||||
self.update(1)
|
||||
|
||||
NON_PRINTABLE = re.compile('[^\x09\x0A\x0D\x20-\x7E\x85\xA0-\uD7FF\uE000-\uFFFD]')
|
||||
def check_printable(self, data):
|
||||
match = self.NON_PRINTABLE.search(data)
|
||||
if match:
|
||||
character = match.group()
|
||||
position = self.index+(len(self.buffer)-self.pointer)+match.start()
|
||||
raise ReaderError(self.name, position, ord(character),
|
||||
'unicode', "special characters are not allowed")
|
||||
|
||||
def update(self, length):
|
||||
if self.raw_buffer is None:
|
||||
return
|
||||
self.buffer = self.buffer[self.pointer:]
|
||||
self.pointer = 0
|
||||
while len(self.buffer) < length:
|
||||
if not self.eof:
|
||||
self.update_raw()
|
||||
if self.raw_decode is not None:
|
||||
try:
|
||||
data, converted = self.raw_decode(self.raw_buffer,
|
||||
'strict', self.eof)
|
||||
except UnicodeDecodeError as exc:
|
||||
character = self.raw_buffer[exc.start]
|
||||
if self.stream is not None:
|
||||
position = self.stream_pointer-len(self.raw_buffer)+exc.start
|
||||
else:
|
||||
position = exc.start
|
||||
raise ReaderError(self.name, position, character,
|
||||
exc.encoding, exc.reason)
|
||||
else:
|
||||
data = self.raw_buffer
|
||||
converted = len(data)
|
||||
self.check_printable(data)
|
||||
self.buffer += data
|
||||
self.raw_buffer = self.raw_buffer[converted:]
|
||||
if self.eof:
|
||||
self.buffer += '\0'
|
||||
self.raw_buffer = None
|
||||
break
|
||||
|
||||
def update_raw(self, size=4096):
|
||||
data = self.stream.read(size)
|
||||
if self.raw_buffer is None:
|
||||
self.raw_buffer = data
|
||||
else:
|
||||
self.raw_buffer += data
|
||||
self.stream_pointer += len(data)
|
||||
if not data:
|
||||
self.eof = True
|
||||
|
||||
#try:
|
||||
# import psyco
|
||||
# psyco.bind(Reader)
|
||||
#except ImportError:
|
||||
# pass
|
||||
|
|
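Reader is an internal class, but it can be exercised directly to see the peek/forward/get_mark contract described in the header comment:

from yaml.reader import Reader

reader = Reader("taskcluster:\n  retry: true\n")
assert reader.peek() == 't'
assert reader.prefix(11) == 'taskcluster'
reader.forward(12)               # consume "taskcluster:"
mark = reader.get_mark()
print(mark.line, mark.column)    # 0 12 -- still on the first line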
@@ -0,0 +1,374 @@
|
|||
|
||||
__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
|
||||
'RepresenterError']
|
||||
|
||||
from .error import *
|
||||
from .nodes import *
|
||||
|
||||
import datetime, sys, copyreg, types, base64
|
||||
|
||||
class RepresenterError(YAMLError):
|
||||
pass
|
||||
|
||||
class BaseRepresenter:
|
||||
|
||||
yaml_representers = {}
|
||||
yaml_multi_representers = {}
|
||||
|
||||
def __init__(self, default_style=None, default_flow_style=None):
|
||||
self.default_style = default_style
|
||||
self.default_flow_style = default_flow_style
|
||||
self.represented_objects = {}
|
||||
self.object_keeper = []
|
||||
self.alias_key = None
|
||||
|
||||
def represent(self, data):
|
||||
node = self.represent_data(data)
|
||||
self.serialize(node)
|
||||
self.represented_objects = {}
|
||||
self.object_keeper = []
|
||||
self.alias_key = None
|
||||
|
||||
def represent_data(self, data):
|
||||
if self.ignore_aliases(data):
|
||||
self.alias_key = None
|
||||
else:
|
||||
self.alias_key = id(data)
|
||||
if self.alias_key is not None:
|
||||
if self.alias_key in self.represented_objects:
|
||||
node = self.represented_objects[self.alias_key]
|
||||
#if node is None:
|
||||
# raise RepresenterError("recursive objects are not allowed: %r" % data)
|
||||
return node
|
||||
#self.represented_objects[alias_key] = None
|
||||
self.object_keeper.append(data)
|
||||
data_types = type(data).__mro__
|
||||
if data_types[0] in self.yaml_representers:
|
||||
node = self.yaml_representers[data_types[0]](self, data)
|
||||
else:
|
||||
for data_type in data_types:
|
||||
if data_type in self.yaml_multi_representers:
|
||||
node = self.yaml_multi_representers[data_type](self, data)
|
||||
break
|
||||
else:
|
||||
if None in self.yaml_multi_representers:
|
||||
node = self.yaml_multi_representers[None](self, data)
|
||||
elif None in self.yaml_representers:
|
||||
node = self.yaml_representers[None](self, data)
|
||||
else:
|
||||
node = ScalarNode(None, str(data))
|
||||
#if alias_key is not None:
|
||||
# self.represented_objects[alias_key] = node
|
||||
return node
|
||||
|
||||
@classmethod
|
||||
def add_representer(cls, data_type, representer):
|
||||
if not 'yaml_representers' in cls.__dict__:
|
||||
cls.yaml_representers = cls.yaml_representers.copy()
|
||||
cls.yaml_representers[data_type] = representer
|
||||
|
||||
@classmethod
|
||||
def add_multi_representer(cls, data_type, representer):
|
||||
if not 'yaml_multi_representers' in cls.__dict__:
|
||||
cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
|
||||
cls.yaml_multi_representers[data_type] = representer
|
||||
|
||||
def represent_scalar(self, tag, value, style=None):
|
||||
if style is None:
|
||||
style = self.default_style
|
||||
node = ScalarNode(tag, value, style=style)
|
||||
if self.alias_key is not None:
|
||||
self.represented_objects[self.alias_key] = node
|
||||
return node
|
||||
|
||||
def represent_sequence(self, tag, sequence, flow_style=None):
|
||||
value = []
|
||||
node = SequenceNode(tag, value, flow_style=flow_style)
|
||||
if self.alias_key is not None:
|
||||
self.represented_objects[self.alias_key] = node
|
||||
best_style = True
|
||||
for item in sequence:
|
||||
node_item = self.represent_data(item)
|
||||
if not (isinstance(node_item, ScalarNode) and not node_item.style):
|
||||
best_style = False
|
||||
value.append(node_item)
|
||||
if flow_style is None:
|
||||
if self.default_flow_style is not None:
|
||||
node.flow_style = self.default_flow_style
|
||||
else:
|
||||
node.flow_style = best_style
|
||||
return node
|
||||
|
||||
def represent_mapping(self, tag, mapping, flow_style=None):
|
||||
value = []
|
||||
node = MappingNode(tag, value, flow_style=flow_style)
|
||||
if self.alias_key is not None:
|
||||
self.represented_objects[self.alias_key] = node
|
||||
best_style = True
|
||||
if hasattr(mapping, 'items'):
|
||||
mapping = list(mapping.items())
|
||||
try:
|
||||
mapping = sorted(mapping)
|
||||
except TypeError:
|
||||
pass
|
||||
for item_key, item_value in mapping:
|
||||
node_key = self.represent_data(item_key)
|
||||
node_value = self.represent_data(item_value)
|
||||
if not (isinstance(node_key, ScalarNode) and not node_key.style):
|
||||
best_style = False
|
||||
if not (isinstance(node_value, ScalarNode) and not node_value.style):
|
||||
best_style = False
|
||||
value.append((node_key, node_value))
|
||||
if flow_style is None:
|
||||
if self.default_flow_style is not None:
|
||||
node.flow_style = self.default_flow_style
|
||||
else:
|
||||
node.flow_style = best_style
|
||||
return node
|
||||
|
||||
def ignore_aliases(self, data):
|
||||
return False
|
||||
|
||||
class SafeRepresenter(BaseRepresenter):
|
||||
|
||||
def ignore_aliases(self, data):
|
||||
if data in [None, ()]:
|
||||
return True
|
||||
if isinstance(data, (str, bytes, bool, int, float)):
|
||||
return True
|
||||
|
||||
def represent_none(self, data):
|
||||
return self.represent_scalar('tag:yaml.org,2002:null', 'null')
|
||||
|
||||
def represent_str(self, data):
|
||||
return self.represent_scalar('tag:yaml.org,2002:str', data)
|
||||
|
||||
def represent_binary(self, data):
|
||||
if hasattr(base64, 'encodebytes'):
|
||||
data = base64.encodebytes(data).decode('ascii')
|
||||
else:
|
||||
data = base64.encodestring(data).decode('ascii')
|
||||
return self.represent_scalar('tag:yaml.org,2002:binary', data, style='|')
|
||||
|
||||
def represent_bool(self, data):
|
||||
if data:
|
||||
value = 'true'
|
||||
else:
|
||||
value = 'false'
|
||||
return self.represent_scalar('tag:yaml.org,2002:bool', value)
|
||||
|
||||
def represent_int(self, data):
|
||||
return self.represent_scalar('tag:yaml.org,2002:int', str(data))
|
||||
|
||||
inf_value = 1e300
|
||||
while repr(inf_value) != repr(inf_value*inf_value):
|
||||
inf_value *= inf_value
|
||||
|
||||
def represent_float(self, data):
|
||||
if data != data or (data == 0.0 and data == 1.0):
|
||||
value = '.nan'
|
||||
elif data == self.inf_value:
|
||||
value = '.inf'
|
||||
elif data == -self.inf_value:
|
||||
value = '-.inf'
|
||||
else:
|
||||
value = repr(data).lower()
|
||||
# Note that in some cases `repr(data)` represents a float number
|
||||
# without the decimal parts. For instance:
|
||||
# >>> repr(1e17)
|
||||
# '1e17'
|
||||
# Unfortunately, this is not a valid float representation according
|
||||
# to the definition of the `!!float` tag. We fix this by adding
|
||||
# '.0' before the 'e' symbol.
|
||||
if '.' not in value and 'e' in value:
|
||||
value = value.replace('e', '.0e', 1)
|
||||
return self.represent_scalar('tag:yaml.org,2002:float', value)
|
||||
|
||||
def represent_list(self, data):
|
||||
#pairs = (len(data) > 0 and isinstance(data, list))
|
||||
#if pairs:
|
||||
# for item in data:
|
||||
# if not isinstance(item, tuple) or len(item) != 2:
|
||||
# pairs = False
|
||||
# break
|
||||
#if not pairs:
|
||||
return self.represent_sequence('tag:yaml.org,2002:seq', data)
|
||||
#value = []
|
||||
#for item_key, item_value in data:
|
||||
# value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
|
||||
# [(item_key, item_value)]))
|
||||
#return SequenceNode(u'tag:yaml.org,2002:pairs', value)
|
||||
|
||||
def represent_dict(self, data):
|
||||
return self.represent_mapping('tag:yaml.org,2002:map', data)
|
||||
|
||||
def represent_set(self, data):
|
||||
value = {}
|
||||
for key in data:
|
||||
value[key] = None
|
||||
return self.represent_mapping('tag:yaml.org,2002:set', value)
|
||||
|
||||
def represent_date(self, data):
|
||||
value = data.isoformat()
|
||||
return self.represent_scalar('tag:yaml.org,2002:timestamp', value)
|
||||
|
||||
def represent_datetime(self, data):
|
||||
value = data.isoformat(' ')
|
||||
return self.represent_scalar('tag:yaml.org,2002:timestamp', value)
|
||||
|
||||
def represent_yaml_object(self, tag, data, cls, flow_style=None):
|
||||
if hasattr(data, '__getstate__'):
|
||||
state = data.__getstate__()
|
||||
else:
|
||||
state = data.__dict__.copy()
|
||||
return self.represent_mapping(tag, state, flow_style=flow_style)
|
||||
|
||||
def represent_undefined(self, data):
|
||||
raise RepresenterError("cannot represent an object: %s" % data)
|
||||
|
||||
SafeRepresenter.add_representer(type(None),
|
||||
SafeRepresenter.represent_none)
|
||||
|
||||
SafeRepresenter.add_representer(str,
|
||||
SafeRepresenter.represent_str)
|
||||
|
||||
SafeRepresenter.add_representer(bytes,
|
||||
SafeRepresenter.represent_binary)
|
||||
|
||||
SafeRepresenter.add_representer(bool,
|
||||
SafeRepresenter.represent_bool)
|
||||
|
||||
SafeRepresenter.add_representer(int,
|
||||
SafeRepresenter.represent_int)
|
||||
|
||||
SafeRepresenter.add_representer(float,
|
||||
SafeRepresenter.represent_float)
|
||||
|
||||
SafeRepresenter.add_representer(list,
|
||||
SafeRepresenter.represent_list)
|
||||
|
||||
SafeRepresenter.add_representer(tuple,
|
||||
SafeRepresenter.represent_list)
|
||||
|
||||
SafeRepresenter.add_representer(dict,
|
||||
SafeRepresenter.represent_dict)
|
||||
|
||||
SafeRepresenter.add_representer(set,
|
||||
SafeRepresenter.represent_set)
|
||||
|
||||
SafeRepresenter.add_representer(datetime.date,
|
||||
SafeRepresenter.represent_date)
|
||||
|
||||
SafeRepresenter.add_representer(datetime.datetime,
|
||||
SafeRepresenter.represent_datetime)
|
||||
|
||||
SafeRepresenter.add_representer(None,
|
||||
SafeRepresenter.represent_undefined)
|
||||
|
||||
class Representer(SafeRepresenter):
|
||||
|
||||
def represent_complex(self, data):
|
||||
if data.imag == 0.0:
|
||||
data = '%r' % data.real
|
||||
elif data.real == 0.0:
|
||||
data = '%rj' % data.imag
|
||||
elif data.imag > 0:
|
||||
data = '%r+%rj' % (data.real, data.imag)
|
||||
else:
|
||||
data = '%r%rj' % (data.real, data.imag)
|
||||
return self.represent_scalar('tag:yaml.org,2002:python/complex', data)
|
||||
|
||||
def represent_tuple(self, data):
|
||||
return self.represent_sequence('tag:yaml.org,2002:python/tuple', data)
|
||||
|
||||
def represent_name(self, data):
|
||||
name = '%s.%s' % (data.__module__, data.__name__)
|
||||
return self.represent_scalar('tag:yaml.org,2002:python/name:'+name, '')
|
||||
|
||||
def represent_module(self, data):
|
||||
return self.represent_scalar(
|
||||
'tag:yaml.org,2002:python/module:'+data.__name__, '')
|
||||
|
||||
def represent_object(self, data):
|
||||
# We use __reduce__ API to save the data. data.__reduce__ returns
|
||||
# a tuple of length 2-5:
|
||||
# (function, args, state, listitems, dictitems)
|
||||
|
||||
# For reconstructing, we call function(*args), then set its state,
|
||||
# listitems, and dictitems if they are not None.
|
||||
|
||||
# A special case is when function.__name__ == '__newobj__'. In this
|
||||
# case we create the object with args[0].__new__(*args).
|
||||
|
||||
# Another special case is when __reduce__ returns a string - we don't
|
||||
# support it.
|
||||
|
||||
# We produce a !!python/object, !!python/object/new or
|
||||
# !!python/object/apply node.
|
||||
|
||||
cls = type(data)
|
||||
if cls in copyreg.dispatch_table:
|
||||
reduce = copyreg.dispatch_table[cls](data)
|
||||
elif hasattr(data, '__reduce_ex__'):
|
||||
reduce = data.__reduce_ex__(2)
|
||||
elif hasattr(data, '__reduce__'):
|
||||
reduce = data.__reduce__()
|
||||
else:
|
||||
raise RepresenterError("cannot represent object: %r" % data)
|
||||
reduce = (list(reduce)+[None]*5)[:5]
|
||||
function, args, state, listitems, dictitems = reduce
|
||||
args = list(args)
|
||||
if state is None:
|
||||
state = {}
|
||||
if listitems is not None:
|
||||
listitems = list(listitems)
|
||||
if dictitems is not None:
|
||||
dictitems = dict(dictitems)
|
||||
if function.__name__ == '__newobj__':
|
||||
function = args[0]
|
||||
args = args[1:]
|
||||
tag = 'tag:yaml.org,2002:python/object/new:'
|
||||
newobj = True
|
||||
else:
|
||||
tag = 'tag:yaml.org,2002:python/object/apply:'
|
||||
newobj = False
|
||||
function_name = '%s.%s' % (function.__module__, function.__name__)
|
||||
if not args and not listitems and not dictitems \
|
||||
and isinstance(state, dict) and newobj:
|
||||
return self.represent_mapping(
|
||||
'tag:yaml.org,2002:python/object:'+function_name, state)
|
||||
if not listitems and not dictitems \
|
||||
and isinstance(state, dict) and not state:
|
||||
return self.represent_sequence(tag+function_name, args)
|
||||
value = {}
|
||||
if args:
|
||||
value['args'] = args
|
||||
if state or not isinstance(state, dict):
|
||||
value['state'] = state
|
||||
if listitems:
|
||||
value['listitems'] = listitems
|
||||
if dictitems:
|
||||
value['dictitems'] = dictitems
|
||||
return self.represent_mapping(tag+function_name, value)
|
||||
|
||||
Representer.add_representer(complex,
|
||||
Representer.represent_complex)
|
||||
|
||||
Representer.add_representer(tuple,
|
||||
Representer.represent_tuple)
|
||||
|
||||
Representer.add_representer(type,
|
||||
Representer.represent_name)
|
||||
|
||||
Representer.add_representer(types.FunctionType,
|
||||
Representer.represent_name)
|
||||
|
||||
Representer.add_representer(types.BuiltinFunctionType,
|
||||
Representer.represent_name)
|
||||
|
||||
Representer.add_representer(types.ModuleType,
|
||||
Representer.represent_module)
|
||||
|
||||
Representer.add_multi_representer(object,
|
||||
Representer.represent_object)
|
||||
|
|
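A sketch of the add_representer() hook defined above; the Job class and its mapping layout are invented for the example:

import yaml

class Job:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

def represent_job(dumper, job):
    # Emit a plain mapping instead of a !!python/object node.
    return dumper.represent_mapping('tag:yaml.org,2002:map',
            {'name': job.name, 'priority': job.priority})

yaml.SafeDumper.add_representer(Job, represent_job)
print(yaml.dump(Job('build', 5), Dumper=yaml.SafeDumper))
# name: build
# priority: 5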
@@ -0,0 +1,224 @@
|
|||
|
||||
__all__ = ['BaseResolver', 'Resolver']
|
||||
|
||||
from .error import *
|
||||
from .nodes import *
|
||||
|
||||
import re
|
||||
|
||||
class ResolverError(YAMLError):
|
||||
pass
|
||||
|
||||
class BaseResolver:
|
||||
|
||||
DEFAULT_SCALAR_TAG = 'tag:yaml.org,2002:str'
|
||||
DEFAULT_SEQUENCE_TAG = 'tag:yaml.org,2002:seq'
|
||||
DEFAULT_MAPPING_TAG = 'tag:yaml.org,2002:map'
|
||||
|
||||
yaml_implicit_resolvers = {}
|
||||
yaml_path_resolvers = {}
|
||||
|
||||
def __init__(self):
|
||||
self.resolver_exact_paths = []
|
||||
self.resolver_prefix_paths = []
|
||||
|
||||
@classmethod
|
||||
def add_implicit_resolver(cls, tag, regexp, first):
|
||||
if not 'yaml_implicit_resolvers' in cls.__dict__:
|
||||
cls.yaml_implicit_resolvers = cls.yaml_implicit_resolvers.copy()
|
||||
if first is None:
|
||||
first = [None]
|
||||
for ch in first:
|
||||
cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
|
||||
|
||||
@classmethod
|
||||
def add_path_resolver(cls, tag, path, kind=None):
|
||||
# Note: `add_path_resolver` is experimental. The API could be changed.
|
||||
# `new_path` is a pattern that is matched against the path from the
|
||||
# root to the node that is being considered. `node_path` elements are
|
||||
# tuples `(node_check, index_check)`. `node_check` is a node class:
|
||||
# `ScalarNode`, `SequenceNode`, `MappingNode` or `None`. `None`
|
||||
# matches any kind of a node. `index_check` could be `None`, a boolean
|
||||
# value, a string value, or a number. `None` and `False` match against
|
||||
# any _value_ of sequence and mapping nodes. `True` matches against
|
||||
# any _key_ of a mapping node. A string `index_check` matches against
|
||||
# a mapping value that corresponds to a scalar key which content is
|
||||
# equal to the `index_check` value. An integer `index_check` matches
|
||||
# against a sequence value with the index equal to `index_check`.
|
||||
        if not 'yaml_path_resolvers' in cls.__dict__:
            cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
        new_path = []
        for element in path:
            if isinstance(element, (list, tuple)):
                if len(element) == 2:
                    node_check, index_check = element
                elif len(element) == 1:
                    node_check = element[0]
                    index_check = True
                else:
                    raise ResolverError("Invalid path element: %s" % element)
            else:
                node_check = None
                index_check = element
            if node_check is str:
                node_check = ScalarNode
            elif node_check is list:
                node_check = SequenceNode
            elif node_check is dict:
                node_check = MappingNode
            elif node_check not in [ScalarNode, SequenceNode, MappingNode] \
                    and not isinstance(node_check, str) \
                    and node_check is not None:
                raise ResolverError("Invalid node checker: %s" % node_check)
            if not isinstance(index_check, (str, int)) \
                    and index_check is not None:
                raise ResolverError("Invalid index checker: %s" % index_check)
            new_path.append((node_check, index_check))
        if kind is str:
            kind = ScalarNode
        elif kind is list:
            kind = SequenceNode
        elif kind is dict:
            kind = MappingNode
        elif kind not in [ScalarNode, SequenceNode, MappingNode] \
                and kind is not None:
            raise ResolverError("Invalid node kind: %s" % kind)
        cls.yaml_path_resolvers[tuple(new_path), kind] = tag

    def descend_resolver(self, current_node, current_index):
        if not self.yaml_path_resolvers:
            return
        exact_paths = {}
        prefix_paths = []
        if current_node:
            depth = len(self.resolver_prefix_paths)
            for path, kind in self.resolver_prefix_paths[-1]:
                if self.check_resolver_prefix(depth, path, kind,
                        current_node, current_index):
                    if len(path) > depth:
                        prefix_paths.append((path, kind))
                    else:
                        exact_paths[kind] = self.yaml_path_resolvers[path, kind]
        else:
            for path, kind in self.yaml_path_resolvers:
                if not path:
                    exact_paths[kind] = self.yaml_path_resolvers[path, kind]
                else:
                    prefix_paths.append((path, kind))
        self.resolver_exact_paths.append(exact_paths)
        self.resolver_prefix_paths.append(prefix_paths)

    def ascend_resolver(self):
        if not self.yaml_path_resolvers:
            return
        self.resolver_exact_paths.pop()
        self.resolver_prefix_paths.pop()

    def check_resolver_prefix(self, depth, path, kind,
            current_node, current_index):
        node_check, index_check = path[depth-1]
        if isinstance(node_check, str):
            if current_node.tag != node_check:
                return
        elif node_check is not None:
            if not isinstance(current_node, node_check):
                return
        if index_check is True and current_index is not None:
            return
        if (index_check is False or index_check is None) \
                and current_index is None:
            return
        if isinstance(index_check, str):
            if not (isinstance(current_index, ScalarNode)
                    and index_check == current_index.value):
                return
        elif isinstance(index_check, int) and not isinstance(index_check, bool):
            if index_check != current_index:
                return
        return True

    def resolve(self, kind, value, implicit):
        if kind is ScalarNode and implicit[0]:
            if value == '':
                resolvers = self.yaml_implicit_resolvers.get('', [])
            else:
                resolvers = self.yaml_implicit_resolvers.get(value[0], [])
            resolvers += self.yaml_implicit_resolvers.get(None, [])
            for tag, regexp in resolvers:
                if regexp.match(value):
                    return tag
            implicit = implicit[1]
        if self.yaml_path_resolvers:
            exact_paths = self.resolver_exact_paths[-1]
            if kind in exact_paths:
                return exact_paths[kind]
            if None in exact_paths:
                return exact_paths[None]
        if kind is ScalarNode:
            return self.DEFAULT_SCALAR_TAG
        elif kind is SequenceNode:
            return self.DEFAULT_SEQUENCE_TAG
        elif kind is MappingNode:
            return self.DEFAULT_MAPPING_TAG

class Resolver(BaseResolver):
    pass

Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:bool',
        re.compile(r'''^(?:yes|Yes|YES|no|No|NO
                    |true|True|TRUE|false|False|FALSE
                    |on|On|ON|off|Off|OFF)$''', re.X),
        list('yYnNtTfFoO'))

Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:float',
        re.compile(r'''^(?:[-+]?(?:[0-9][0-9_]*)\.[0-9_]*(?:[eE][-+][0-9]+)?
                    |\.[0-9_]+(?:[eE][-+][0-9]+)?
                    |[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\.[0-9_]*
                    |[-+]?\.(?:inf|Inf|INF)
                    |\.(?:nan|NaN|NAN))$''', re.X),
        list('-+0123456789.'))

Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:int',
        re.compile(r'''^(?:[-+]?0b[0-1_]+
                    |[-+]?0[0-7_]+
                    |[-+]?(?:0|[1-9][0-9_]*)
                    |[-+]?0x[0-9a-fA-F_]+
                    |[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
        list('-+0123456789'))

Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:merge',
        re.compile(r'^(?:<<)$'),
        ['<'])

Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:null',
        re.compile(r'''^(?: ~
                    |null|Null|NULL
                    | )$''', re.X),
        ['~', 'n', 'N', ''])

Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:timestamp',
        re.compile(r'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
                    |[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
                     (?:[Tt]|[ \t]+)[0-9][0-9]?
                     :[0-9][0-9] :[0-9][0-9] (?:\.[0-9]*)?
                     (?:[ \t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
        list('0123456789'))

Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:value',
        re.compile(r'^(?:=)$'),
        ['='])

# The following resolver is only for documentation purposes. It cannot work
# because plain scalars cannot start with '!', '&', or '*'.
Resolver.add_implicit_resolver(
        'tag:yaml.org,2002:yaml',
        re.compile(r'^(?:!|&|\*)$'),
        list('!&*'))
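The resolver machinery above is what gives untagged YAML scalars their types, and `add_path_resolver` attaches a tag to a node based on where it sits in the document. A minimal sketch (it assumes PyYAML is importable as `yaml`; the `!settings` tag and the sample documents are made up for illustration):

    import yaml

    # Implicit resolvers: the tag of a plain scalar is chosen by matching its
    # value against the regular expressions registered above.
    data = yaml.safe_load("count: 12\nenabled: yes\nname: example\n")
    print(type(data["count"]))    # <class 'int'>  -> tag:yaml.org,2002:int
    print(type(data["enabled"]))  # <class 'bool'> -> tag:yaml.org,2002:bool
    print(type(data["name"]))     # <class 'str'>  -> default scalar tag

    # Path resolver: any mapping appearing as the value of the top-level key
    # 'settings' gets the tag '!settings' when the document is composed.
    yaml.add_path_resolver('!settings', ['settings'], dict)
    node = yaml.compose("settings: {debug: true}\nother: {debug: false}\n")
    print(node.value[0][1].tag)   # !settings
    print(node.value[1][1].tag)   # tag:yaml.org,2002:map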
(The diff for one file is not shown here because it is too large.)
@ -0,0 +1,111 @@
__all__ = ['Serializer', 'SerializerError']

from .error import YAMLError
from .events import *
from .nodes import *

class SerializerError(YAMLError):
    pass

class Serializer:

    ANCHOR_TEMPLATE = 'id%03d'

    def __init__(self, encoding=None,
            explicit_start=None, explicit_end=None, version=None, tags=None):
        self.use_encoding = encoding
        self.use_explicit_start = explicit_start
        self.use_explicit_end = explicit_end
        self.use_version = version
        self.use_tags = tags
        self.serialized_nodes = {}
        self.anchors = {}
        self.last_anchor_id = 0
        self.closed = None

    def open(self):
        if self.closed is None:
            self.emit(StreamStartEvent(encoding=self.use_encoding))
            self.closed = False
        elif self.closed:
            raise SerializerError("serializer is closed")
        else:
            raise SerializerError("serializer is already opened")

    def close(self):
        if self.closed is None:
            raise SerializerError("serializer is not opened")
        elif not self.closed:
            self.emit(StreamEndEvent())
            self.closed = True

    #def __del__(self):
    #    self.close()

    def serialize(self, node):
        if self.closed is None:
            raise SerializerError("serializer is not opened")
        elif self.closed:
            raise SerializerError("serializer is closed")
        self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
            version=self.use_version, tags=self.use_tags))
        self.anchor_node(node)
        self.serialize_node(node, None, None)
        self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
        self.serialized_nodes = {}
        self.anchors = {}
        self.last_anchor_id = 0

    def anchor_node(self, node):
        if node in self.anchors:
            if self.anchors[node] is None:
                self.anchors[node] = self.generate_anchor(node)
        else:
            self.anchors[node] = None
            if isinstance(node, SequenceNode):
                for item in node.value:
                    self.anchor_node(item)
            elif isinstance(node, MappingNode):
                for key, value in node.value:
                    self.anchor_node(key)
                    self.anchor_node(value)

    def generate_anchor(self, node):
        self.last_anchor_id += 1
        return self.ANCHOR_TEMPLATE % self.last_anchor_id

    def serialize_node(self, node, parent, index):
        alias = self.anchors[node]
        if node in self.serialized_nodes:
            self.emit(AliasEvent(alias))
        else:
            self.serialized_nodes[node] = True
            self.descend_resolver(parent, index)
            if isinstance(node, ScalarNode):
                detected_tag = self.resolve(ScalarNode, node.value, (True, False))
                default_tag = self.resolve(ScalarNode, node.value, (False, True))
                implicit = (node.tag == detected_tag), (node.tag == default_tag)
                self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
                    style=node.style))
            elif isinstance(node, SequenceNode):
                implicit = (node.tag
                            == self.resolve(SequenceNode, node.value, True))
                self.emit(SequenceStartEvent(alias, node.tag, implicit,
                    flow_style=node.flow_style))
                index = 0
                for item in node.value:
                    self.serialize_node(item, node, index)
                    index += 1
                self.emit(SequenceEndEvent())
            elif isinstance(node, MappingNode):
                implicit = (node.tag
                            == self.resolve(MappingNode, node.value, True))
                self.emit(MappingStartEvent(alias, node.tag, implicit,
                    flow_style=node.flow_style))
                for key, value in node.value:
                    self.serialize_node(key, node, None)
                    self.serialize_node(value, node, key)
                self.emit(MappingEndEvent())
            self.ascend_resolver()
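`Serializer` is normally mixed into a `Dumper`; the high-level entry point is `yaml.serialize()`, which opens the stream, runs `anchor_node()` so shared nodes get `&id001`-style anchors, emits the node graph via `serialize_node()`, and closes the stream. A small sketch (assumes PyYAML is installed; the node values are arbitrary):

    import yaml

    shared = yaml.ScalarNode('tag:yaml.org,2002:str', 'shared')
    root = yaml.SequenceNode('tag:yaml.org,2002:seq', [shared, shared])
    # The repeated node is serialized once and aliased afterwards, so this
    # should print roughly:
    #   - &id001 shared
    #   - *id001
    print(yaml.serialize(root))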
@ -0,0 +1,104 @@
class Token(object):
    def __init__(self, start_mark, end_mark):
        self.start_mark = start_mark
        self.end_mark = end_mark
    def __repr__(self):
        attributes = [key for key in self.__dict__
                if not key.endswith('_mark')]
        attributes.sort()
        arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
                for key in attributes])
        return '%s(%s)' % (self.__class__.__name__, arguments)

#class BOMToken(Token):
#    id = '<byte order mark>'

class DirectiveToken(Token):
    id = '<directive>'
    def __init__(self, name, value, start_mark, end_mark):
        self.name = name
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark

class DocumentStartToken(Token):
    id = '<document start>'

class DocumentEndToken(Token):
    id = '<document end>'

class StreamStartToken(Token):
    id = '<stream start>'
    def __init__(self, start_mark=None, end_mark=None,
            encoding=None):
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.encoding = encoding

class StreamEndToken(Token):
    id = '<stream end>'

class BlockSequenceStartToken(Token):
    id = '<block sequence start>'

class BlockMappingStartToken(Token):
    id = '<block mapping start>'

class BlockEndToken(Token):
    id = '<block end>'

class FlowSequenceStartToken(Token):
    id = '['

class FlowMappingStartToken(Token):
    id = '{'

class FlowSequenceEndToken(Token):
    id = ']'

class FlowMappingEndToken(Token):
    id = '}'

class KeyToken(Token):
    id = '?'

class ValueToken(Token):
    id = ':'

class BlockEntryToken(Token):
    id = '-'

class FlowEntryToken(Token):
    id = ','

class AliasToken(Token):
    id = '<alias>'
    def __init__(self, value, start_mark, end_mark):
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark

class AnchorToken(Token):
    id = '<anchor>'
    def __init__(self, value, start_mark, end_mark):
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark

class TagToken(Token):
    id = '<tag>'
    def __init__(self, value, start_mark, end_mark):
        self.value = value
        self.start_mark = start_mark
        self.end_mark = end_mark

class ScalarToken(Token):
    id = '<scalar>'
    def __init__(self, value, plain, start_mark, end_mark, style=None):
        self.value = value
        self.plain = plain
        self.start_mark = start_mark
        self.end_mark = end_mark
        self.style = style
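These token classes are what the scanner produces; the `id` strings are how each token type shows up in parser error messages. A quick way to see them in action (assumes PyYAML is installed):

    import yaml

    # Prints one line per token, e.g. '<stream start>', '<block sequence start>',
    # '-', '<scalar> a', '-', '<scalar> b', '<block end>', '<stream end>'.
    for token in yaml.scan("- a\n- b\n"):
        print(token.id, getattr(token, 'value', ''))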
@ -0,0 +1,29 @@
# The INCLUDE and LIB directories to build the '_yaml' extension.
# You may also set them using the options '-I' and '-L'.
[build_ext]

# List of directories to search for 'yaml.h' (separated by ':').
#include_dirs=/usr/local/include:../../include

# List of directories to search for 'libyaml.a' (separated by ':').
#library_dirs=/usr/local/lib:../../lib

# An alternative compiler to build the extension.
#compiler=mingw32

# Additional preprocessor definitions might be required.
#define=YAML_DECLARE_STATIC

# The following options are used to build PyYAML Windows installer
# for Python 2.5 on my PC:
#include_dirs=../../../libyaml/tags/0.1.4/include
#library_dirs=../../../libyaml/tags/0.1.4/win32/vs2003/output/release/lib
#define=YAML_DECLARE_STATIC

# The following options are used to build PyYAML Windows installer
# for Python 2.6, 2.7, 3.0, 3.1 and 3.2 on my PC:
#include_dirs=../../../libyaml/tags/0.1.4/include
#library_dirs=../../../libyaml/tags/0.1.4/win32/vs2008/output/release/lib
#define=YAML_DECLARE_STATIC
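setup.cfg only tells the build where to find libyaml for the optional '_yaml' extension; whether the extension actually got built can be checked at run time. A common hedged idiom, assuming PyYAML is installed:

    # CSafeLoader exists only when the '_yaml' extension described above was
    # compiled against libyaml; otherwise fall back to the pure-Python loader.
    try:
        from yaml import CSafeLoader as Loader
    except ImportError:
        from yaml import SafeLoader as Loader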
@ -0,0 +1,345 @@
NAME = 'PyYAML'
VERSION = '3.11'
DESCRIPTION = "YAML parser and emitter for Python"
LONG_DESCRIPTION = """\
YAML is a data serialization format designed for human readability
and interaction with scripting languages. PyYAML is a YAML parser
and emitter for Python.

PyYAML features a complete YAML 1.1 parser, Unicode support, pickle
support, capable extension API, and sensible error messages. PyYAML
supports standard YAML tags and provides Python-specific tags that
allow representing an arbitrary Python object.

PyYAML is applicable for a broad range of tasks from complex
configuration files to object serialization and persistence."""
AUTHOR = "Kirill Simonov"
AUTHOR_EMAIL = 'xi@resolvent.net'
LICENSE = "MIT"
PLATFORMS = "Any"
URL = "http://pyyaml.org/wiki/PyYAML"
DOWNLOAD_URL = "http://pyyaml.org/download/pyyaml/%s-%s.tar.gz" % (NAME, VERSION)
CLASSIFIERS = [
    "Development Status :: 5 - Production/Stable",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
    "Programming Language :: Python",
    "Programming Language :: Python :: 2",
    "Programming Language :: Python :: 2.5",
    "Programming Language :: Python :: 2.6",
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.0",
    "Programming Language :: Python :: 3.1",
    "Programming Language :: Python :: 3.2",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Topic :: Text Processing :: Markup",
]


LIBYAML_CHECK = """
#include <yaml.h>

int main(void) {
    yaml_parser_t parser;
    yaml_emitter_t emitter;

    yaml_parser_initialize(&parser);
    yaml_parser_delete(&parser);

    yaml_emitter_initialize(&emitter);
    yaml_emitter_delete(&emitter);

    return 0;
}
"""


import sys, os.path

from distutils import log
from distutils.core import setup, Command
from distutils.core import Distribution as _Distribution
from distutils.core import Extension as _Extension
from distutils.dir_util import mkpath
from distutils.command.build_ext import build_ext as _build_ext
from distutils.command.bdist_rpm import bdist_rpm as _bdist_rpm
from distutils.errors import CompileError, LinkError, DistutilsPlatformError

if 'setuptools.extension' in sys.modules:
    _Extension = sys.modules['setuptools.extension']._Extension
    sys.modules['distutils.core'].Extension = _Extension
    sys.modules['distutils.extension'].Extension = _Extension
    sys.modules['distutils.command.build_ext'].Extension = _Extension

with_pyrex = None
if sys.version_info[0] < 3:
    try:
        from Cython.Distutils.extension import Extension as _Extension
        from Cython.Distutils import build_ext as _build_ext
        with_pyrex = 'cython'
    except ImportError:
        try:
            # Pyrex cannot build _yaml.c at the moment,
            # but it may get fixed eventually.
            from Pyrex.Distutils import Extension as _Extension
            from Pyrex.Distutils import build_ext as _build_ext
            with_pyrex = 'pyrex'
        except ImportError:
            pass


class Distribution(_Distribution):

    def __init__(self, attrs=None):
        _Distribution.__init__(self, attrs)
        if not self.ext_modules:
            return
        for idx in range(len(self.ext_modules)-1, -1, -1):
            ext = self.ext_modules[idx]
            if not isinstance(ext, Extension):
                continue
            setattr(self, ext.attr_name, None)
            self.global_options = [
                (ext.option_name, None,
                    "include %s (default if %s is available)"
                    % (ext.feature_description, ext.feature_name)),
                (ext.neg_option_name, None,
                    "exclude %s" % ext.feature_description),
            ] + self.global_options
            self.negative_opt = self.negative_opt.copy()
            self.negative_opt[ext.neg_option_name] = ext.option_name

    def has_ext_modules(self):
        if not self.ext_modules:
            return False
        for ext in self.ext_modules:
            with_ext = self.ext_status(ext)
            if with_ext is None or with_ext:
                return True
        return False

    def ext_status(self, ext):
        if 'Java' in sys.version or 'IronPython' in sys.version or 'PyPy' in sys.version:
            return False
        if isinstance(ext, Extension):
            with_ext = getattr(self, ext.attr_name)
            return with_ext
        else:
            return True


class Extension(_Extension):

    def __init__(self, name, sources, feature_name, feature_description,
            feature_check, **kwds):
        if not with_pyrex:
            for filename in sources[:]:
                base, ext = os.path.splitext(filename)
                if ext == '.pyx':
                    sources.remove(filename)
                    sources.append('%s.c' % base)
        _Extension.__init__(self, name, sources, **kwds)
        self.feature_name = feature_name
        self.feature_description = feature_description
        self.feature_check = feature_check
        self.attr_name = 'with_' + feature_name.replace('-', '_')
        self.option_name = 'with-' + feature_name
        self.neg_option_name = 'without-' + feature_name


class build_ext(_build_ext):

    def run(self):
        optional = True
        disabled = True
        for ext in self.extensions:
            with_ext = self.distribution.ext_status(ext)
            if with_ext is None:
                disabled = False
            elif with_ext:
                optional = False
                disabled = False
                break
        if disabled:
            return
        try:
            _build_ext.run(self)
        except DistutilsPlatformError:
            exc = sys.exc_info()[1]
            if optional:
                log.warn(str(exc))
                log.warn("skipping build_ext")
            else:
                raise

    def get_source_files(self):
        self.check_extensions_list(self.extensions)
        filenames = []
        for ext in self.extensions:
            if with_pyrex == 'pyrex':
                self.pyrex_sources(ext.sources, ext)
            elif with_pyrex == 'cython':
                self.cython_sources(ext.sources, ext)
            for filename in ext.sources:
                filenames.append(filename)
                base = os.path.splitext(filename)[0]
                for ext in ['c', 'h', 'pyx', 'pxd']:
                    filename = '%s.%s' % (base, ext)
                    if filename not in filenames and os.path.isfile(filename):
                        filenames.append(filename)
        return filenames

    def get_outputs(self):
        self.check_extensions_list(self.extensions)
        outputs = []
        for ext in self.extensions:
            fullname = self.get_ext_fullname(ext.name)
            filename = os.path.join(self.build_lib,
                    self.get_ext_filename(fullname))
            if os.path.isfile(filename):
                outputs.append(filename)
        return outputs

    def build_extensions(self):
        self.check_extensions_list(self.extensions)
        for ext in self.extensions:
            with_ext = self.distribution.ext_status(ext)
            if with_ext is None:
                with_ext = self.check_extension_availability(ext)
            if not with_ext:
                continue
            if with_pyrex == 'pyrex':
                ext.sources = self.pyrex_sources(ext.sources, ext)
            elif with_pyrex == 'cython':
                ext.sources = self.cython_sources(ext.sources, ext)
            self.build_extension(ext)

    def check_extension_availability(self, ext):
        cache = os.path.join(self.build_temp, 'check_%s.out' % ext.feature_name)
        if not self.force and os.path.isfile(cache):
            data = open(cache).read().strip()
            if data == '1':
                return True
            elif data == '0':
                return False
        mkpath(self.build_temp)
        src = os.path.join(self.build_temp, 'check_%s.c' % ext.feature_name)
        open(src, 'w').write(ext.feature_check)
        log.info("checking if %s is compilable" % ext.feature_name)
        try:
            [obj] = self.compiler.compile([src],
                    macros=ext.define_macros+[(undef,) for undef in ext.undef_macros],
                    include_dirs=ext.include_dirs,
                    extra_postargs=(ext.extra_compile_args or []),
                    depends=ext.depends)
        except CompileError:
            log.warn("")
            log.warn("%s is not found or a compiler error: forcing --%s"
                    % (ext.feature_name, ext.neg_option_name))
            log.warn("(if %s is installed correctly, you may need to"
                    % ext.feature_name)
            log.warn(" specify the option --include-dirs or uncomment and")
            log.warn(" modify the parameter include_dirs in setup.cfg)")
            open(cache, 'w').write('0\n')
            return False
        prog = 'check_%s' % ext.feature_name
        log.info("checking if %s is linkable" % ext.feature_name)
        try:
            self.compiler.link_executable([obj], prog,
                    output_dir=self.build_temp,
                    libraries=ext.libraries,
                    library_dirs=ext.library_dirs,
                    runtime_library_dirs=ext.runtime_library_dirs,
                    extra_postargs=(ext.extra_link_args or []))
        except LinkError:
            log.warn("")
            log.warn("%s is not found or a linker error: forcing --%s"
                    % (ext.feature_name, ext.neg_option_name))
            log.warn("(if %s is installed correctly, you may need to"
                    % ext.feature_name)
            log.warn(" specify the option --library-dirs or uncomment and")
            log.warn(" modify the parameter library_dirs in setup.cfg)")
            open(cache, 'w').write('0\n')
            return False
        open(cache, 'w').write('1\n')
        return True


class bdist_rpm(_bdist_rpm):

    def _make_spec_file(self):
        argv0 = sys.argv[0]
        features = []
        for ext in self.distribution.ext_modules:
            if not isinstance(ext, Extension):
                continue
            with_ext = getattr(self.distribution, ext.attr_name)
            if with_ext is None:
                continue
            if with_ext:
                features.append('--'+ext.option_name)
            else:
                features.append('--'+ext.neg_option_name)
        sys.argv[0] = ' '.join([argv0]+features)
        spec_file = _bdist_rpm._make_spec_file(self)
        sys.argv[0] = argv0
        return spec_file


class test(Command):

    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        build_cmd = self.get_finalized_command('build')
        build_cmd.run()
        sys.path.insert(0, build_cmd.build_lib)
        if sys.version_info[0] < 3:
            sys.path.insert(0, 'tests/lib')
        else:
            sys.path.insert(0, 'tests/lib3')
        import test_all
        test_all.main([])


if __name__ == '__main__':

    setup(
        name=NAME,
        version=VERSION,
        description=DESCRIPTION,
        long_description=LONG_DESCRIPTION,
        author=AUTHOR,
        author_email=AUTHOR_EMAIL,
        license=LICENSE,
        platforms=PLATFORMS,
        url=URL,
        download_url=DOWNLOAD_URL,
        classifiers=CLASSIFIERS,

        package_dir={'': {2: 'lib', 3: 'lib3'}[sys.version_info[0]]},
        packages=['yaml'],
        ext_modules=[
            Extension('_yaml', ['ext/_yaml.pyx'],
                'libyaml', "LibYAML bindings", LIBYAML_CHECK,
                libraries=['yaml']),
        ],

        distclass=Distribution,

        cmdclass={
            'build_ext': build_ext,
            'bdist_rpm': bdist_rpm,
            'test': test,
        },
    )
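check_extension_availability() above decides whether to build the libyaml binding by compiling and linking the small LIBYAML_CHECK program. A standalone sketch of the same probe using only distutils (the temp directory, probe source, and function name are illustrative, not part of PyYAML's setup.py):

    import os, tempfile
    from distutils.ccompiler import new_compiler
    from distutils.errors import CompileError, LinkError

    PROBE = "#include <yaml.h>\nint main(void) { return 0; }\n"

    def have_libyaml():
        compiler = new_compiler()
        tmpdir = tempfile.mkdtemp()
        src = os.path.join(tmpdir, 'probe.c')
        with open(src, 'w') as probe_file:
            probe_file.write(PROBE)
        try:
            # Compile the probe and link it against -lyaml; failure at either
            # step means the headers or the library are missing.
            objs = compiler.compile([src], output_dir=tmpdir)
            compiler.link_executable(objs, 'probe', output_dir=tmpdir,
                                     libraries=['yaml'])
        except (CompileError, LinkError):
            return False
        return True

    print(have_libyaml())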