Mirror of https://github.com/mozilla/gecko-dev.git
Bug 1437593 - Vendor pipenv 11.9.0; r=ted
This was vendored by downloading the tar.gz for the release from
https://github.com/pypa/pipenv/releases and extracting into
third_party/python. The release number was then stripped from the
resulting directory.

MozReview-Commit-ID: 563xZxZ88lm

--HG--
extra : rebase_source : aa7de216aeb68dfc057f3bfc2caccfd7fe33180f
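
The vendoring procedure described above can be sketched in Python roughly as
follows. This is a minimal illustrative sketch only; the archive URL shape and
the extracted directory name (pipenv-11.9.0) are assumptions based on GitHub's
usual release-archive layout, not details recorded in the commit::

    # Hypothetical sketch of the vendoring steps from the commit message:
    # download the release tarball, extract it into third_party/python,
    # then strip the release number from the resulting directory name.
    import shutil
    import tarfile
    import urllib.request

    VERSION = '11.9.0'
    # Assumed URL shape; the commit only says "the tar.gz for the release".
    URL = 'https://github.com/pypa/pipenv/archive/v%s.tar.gz' % VERSION
    DEST = 'third_party/python'

    archive, _ = urllib.request.urlretrieve(URL)
    with tarfile.open(archive, 'r:gz') as tar:
        tar.extractall(DEST)  # extracts to e.g. third_party/python/pipenv-11.9.0

    # Strip the release number from the extracted directory.
    shutil.move('%s/pipenv-%s' % (DEST, VERSION), '%s/pipenv' % DEST)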
Parent: 9bff0f3d88
Commit: 95f4c23d40

HISTORY.txt
@@ -0,0 +1,821 @@
11.9.0:
- Vastly improve markers capabilities.
- Support for environment variables in Pipfiles.
- Cache the Pipfile internally (for large Pipfiles).
- Remove pipenv --update.
- Export PYTHONDONTWRITEBYTECODE, to attempt to increase compatibility.
11.8.2:
- Cleanup TOML.
- Improve documentation.
- Pass clear flag to resolver.
- Improved private git URL handling.
11.8.1:
- Removed (unused) Safety DB (licensing concerns).
11.8.0:
- Fix a major bug in locking resolution.
11.7.4:
- Don't use JSON results — problematic.
11.7.3:
- Increase compatibility with strange Python installations (concurrency.futures).
11.7.2:
- Bugfixes.
11.7.1:
- Windows bugfix.
11.7.0:
- Improvements to lockfile generation with private indexes.
11.6.9:
- Bugfixes.
11.6.8:
- Fix a Windows bug.
11.6.7:
- Fix a Windows bug.
11.6.5:
- Fix graph resolution.
- May be some incompatibilities with private indexes and hashing. If so, it's worth it to fix graph resolution for now. Priorities. One thing at a time.
11.6.4:
- Fix a bug.
11.6.3:
- Depend on certifi.
11.6.2:
- Properly vendor certifi.
- Better support for --extra-index-url for private PyPI servers.
- Bug fixes.
11.6.1:
- Remove concurrent.futures, as it's not being used any longer, and is problematic.
11.6.0:
- Vendor all of pip9, in preparation for the release of pip10.
11.5.3:
- Attempt to grab markers from -e-provided setup.py files.
- Revert "rely on the underlying pipenv sync architecture to pick up dependencies".
11.5.2:
- Fix bug with markers (e.g. responses package).
11.5.1:
- Restore bare 'pipenv update' functionality.
11.5.0:
- Properly resolve hashes for private indexes.
- Some subtle changes to the way resolution works — shouldn't affect you, but warranted a version bump.
11.4.0:
- Stability.
- Don't install dependencies straight-away with pipenv install — rely on the underlying pipenv sync architecture to pick up dependencies.
- Warn (abort) if requested update package is not in Pipfile.
- Don't configure the Pipfile for keep_outdated when update is used.
11.3.3:
- Sorry for all the bugs.
11.3.2:
- Bugfix, of the craziest, hardest-to-reproduce nature.
11.3.1:
- Fix shell --fancy.
11.3.0:
- Default to using the Python Pipenv was installed with for new virtualenvs.
- Report Python version of specified interpreter when creating virtualenv.
- Disable JSON API usage, for now. It appears to cause some minor bugs related to markers (working on it).
11.2.2:
- Potential bugfix related to subprocess invocations and environment variables.
11.2.1:
- Actually use the Warehouse JSON API.
11.2.0:
- Reduce the number of "bad packages", internally (e.g. don't exclude `six` anymore).
11.1.11:
- Help improvements.
11.1.10:
- Help improvements.
11.1.9:
- $ python -m pipenv.help
11.1.8:
- Resolver improvements.
11.1.7:
- Packaging fix.
11.1.6:
- Support for 'py' interpreter (on Windows).
- Bugfixes.
11.1.5:
- Vendor pew.
- Be specific about which version of psutil we want.
- Patch pip and pip-tools (further) like crazy, for hard-to-believe reasons, and the benefit of all.
11.1.4:
- Resolve multiple extras when provided.
- Improve completion time.
- Remove vendored version of psutil (Windows).
- Bugfixes.
11.1.3:
- Bugfix.
11.1.2:
- No longer include hashes in `lock -r`.
- Enable pew execution via python -m.
11.1.1:
- Undo previous change.
11.1.0:
- Default to the version of Python that Pipenv was installed with.
11.0.9:
- PPA release.
11.0.8:
- PPA release.
11.0.7:
- PPA release.
11.0.6:
- PPA release.
11.0.5:
- PPA release.
11.0.4:
- PPA release.
11.0.3:
- PPA release.
11.0.2:
- Hash order is deterministic now.
- Bugfix.
11.0.1:
- Bugfix.
11.0.0:
- Massive resolver improvements!
- Resolver now runs within virtual environments.
- Resolver now uses PyPI JSON metadata to provide additional dependency information.
- Environment information removed from `Pipfile.lock`.
- Clean up temporary files used during dependency resolution.
10.1.2:
- Bugfix.
10.1.1:
- Assume `PIPENV_VENV_IN_PROJECT` if `./.venv/` already exists.
- Use and generate hashes for PyPI mirrors and custom indexes.
10.1.0:
- Default dependencies now take precedence over develop dependencies when creating a Pipfile.lock.
- Introducing `pipenv lock --keep-outdated`, which can also be passed to `install` and `uninstall`.
- Introducing `pipenv install --selective-upgrade <package>`, which only updates the given package in your Pipfile.lock.
- New Pipfile configuration option for [pipenv] section: `keep_outdated`.
10.0.1:
- Add extra indexes from pip config files in Pipfile generation.
- Fix bug with `pipenv clean`.
- Install from Pipfile.lock after each successful `pipenv install`.
- Temporary file cleanup.
10.0.0:
- Introduce `pipenv sync` command.
- Introduce `pipenv clean` command.
- Deprecate `pipenv update` command.
- Fully remove `check --style` functionality.
- Better `lock -r` functionality.
- Up-to-date security checks for `pipenv check`.
9.1.0:
- Add --system flag to $ pipenv check.
- Removal of package name suggestions.
- Support for [scripts] in Pipfile.
- Comment out invalid (to pip's hash-checking mode) packages from `$ pipenv lock -r`.
- Updated patched version of dotenv.
- Do not allow `$ pipenv install --system packagename` to be used.
- Deprecate the usage of `$ pipenv check --style`.
- Show pip install logs with --verbose.
- Allow -v as shorthand for --verbose for all commands.
- Prevent duplicate virtualenv creation on Windows due to drive casing.
- Discard comments in output of `pip freeze` when running `pipenv update`.
- Ignore existing `requirements.txt` files when pipenv is called with the `--requirements` flag.
- Support `allow_global` during dependency resolution.
- Add virtualenv activation support for `sh` (see #1388).
- Improve startup times via lazy loading of imports.
- Improve parsing of extras, markers, and path requirements.
- Fix regression with VCS URL parsing being treated as a normal path.
- Resolve an issue causing local paths with the same name as a PyPI package to prevent proper dependency resolution.
9.0.3:
- v9.0.1.
9.0.2:
- A mistake.
9.0.1:
- Fixed issue with specifiers being treated as paths on Windows.
- Fixed regression causing development packages to always be installed.
9.0.0:
- Fixed bug where packages beginning with VCS names (e.g. git) weren't installed correctly.
- Fixed URL parsing for <vcs>+<vcs>:// style URLs.
- Pipenv can now install relative file paths.
- Better messaging around failed installs.
- More resilient network IO when retrieving data from PyPI.
- Fixed bug with bad dependency pinning via pip-tools.
- Prompt user to destroy and recreate virtual environment if they are in a currently activated environment.
- Added enhancement for pip-tools to resolve dependencies with specific versions of Python.
- Fixed bug where newlines were not escaped in .env files when loaded.
- Sequentially install all local and VCS dependencies to avoid write race conditions.
- Fixed accidental exclusion of files from some VCS installs.
8.3.2:
- Moved automated update check to once every 24 hours.
- Better default for PYENV_ROOT.
- Correctly support all pip --index specifiers.
- Fix bug where pre-releases of Python were chosen over finals.
8.3.1:
- Fixed issues with calling block too many times on a single subprocess.
- Updated vendored delegator.py.
- Changed --dev flag for the uninstall command to --all-dev to better represent what it does.
8.3.0:
- Add support for installation from remote requirements file.
- Add --reverse to pipenv graph, displaying inverted dependency graph.
- VCS dependencies now install sequentially to avoid write lock conflicts.
- Allow PIPENV_IGNORE_VIRTUALENVS to work with pipenv shell on Windows.
- Enforce newline termination of Pipfile.
- More robust requirements.txt conversion experience.
- Respect allow_prereleases in all locking scenarios.
- Separated default and development dependency output when using lock -r and lock -r -d respectively.
- Print whole help message with pipenv --help.
8.2.7:
- Add update --sequential.
- Fix unicode decode error on Windows.
- Fix bug with non-editable installs.
- Update vendored setuptools.
- Improvements to check --unused.
- Fix install for local sdist packages.
- Updating the patched pip-tools with the wheel dependency bugfix.
- Fix git remote address being modified, changing underscore to a hyphen.
- Fix py2toml with dashes (dev-packages).
- Fix for --dry-run, reporting backwards.
- Fix installing with all release specifiers.
- Removed unused vendor libraries.
8.2.6:
- Fix for some git remotes.
- Increased the default number of max rounds for pip-tools, made it user-configurable.
- Fix self-updating.
8.2.5:
- Fixed bad attribute call on date checks.
8.2.4:
- Enhanced sha messaging — lockfile short shas are now displayed.
- Improve Windows unicode output.
- General UX and other improvements.
8.2.3:
- Don't show activation instructions when --deploy is used.
8.2.2:
- Improve system pip detection.
8.2.1:
- Enhanced pip resolver — hopefully that won't blow up in our faces.
- Fixed file links.
8.2.0:
- Made things nicer.
8.1.9:
- Fix logging bug.
8.1.8:
- Fix dependencies with markers attached. That wasn't easy.
- Vendor (patch) pip-tools.
- Honor PIP_SRC if it is provided.
8.1.7:
- Update Python 2.x default to 2.7.14.
- Deploy mode aborts if Python version doesn't match.
8.1.6:
- Abort when Python installation appears to fail.
8.1.5:
- Update pexpect to fix shellquote issues in subprocesses.
8.1.4:
- Tell users in compatibility mode how to exit the shell.
- Updated patched pip's vendored pkg-resources.
8.1.3:
- Further improve patched pip, for crazy setup.py files.
8.1.2:
- chdir option for project, for really stubborn people.
8.1.1:
- Better exception handling when a corrupt virtualenv is being used.
8.1.0:
- Better path handling.
8.0.9:
- Fix bug when -r is passed in a subdirectory.
8.0.8:
- Add verbose mode to pip.
8.0.7:
- Fix --skip-lock when verify_ssl = false.
- Always quote pip path.
- Fix --update.
8.0.6:
- Fix indexes.
8.0.5:
- $ pipenv open :module
8.0.4:
- $ pipenv install --deploy.
8.0.3:
- Improvements to dependency resolution against various versions of Python.
- Fix issue with nested directories all containing Pipfiles.
- Fix issue with --py when run outside of a project.
- Refactoring of virtualenv detection.
- Improvements to crayons library.
- PIPENV_DOTENV_LOCATION.
8.0.1:
- Fix weird edge case with ramuel.ordereddict.
8.0.0:
- New [pipenv] settings; allows for allows_prereleases=True, automatically set when using install --pre.
7.9.10:
- Use urllib3 directly, for exceptions handling.
7.9.9:
- Fix argument parsing.
7.9.8:
- Fix argument parsing.
7.9.7:
- Fix help printout screen (and update it).
- Use urllib3's warning suppression directly.
7.9.6:
- Did you mean?
7.9.5:
- More usage examples in help output.
7.9.4:
- Support for editable extras.
7.9.3:
- Use foreground color instead of white.
7.9.2:
- UX cleanup.
7.9.1:
- Bug fix with indexes.
7.9.0:
- Bug fix with indexes.
7.8.9:
- Fix for Heroku.
7.8.8:
- Make --fancy default for Windows users.
7.8.7:
- Make resolver use client Python for setup.py egg_info (very fancy).
- Fix a nasty Windows bug.
- Add --completion.
- Add --man.
7.8.6:
- Don't import code automatically, only use -c.
7.8.5:
- Edge case.
7.8.4:
- Flake8 checking with check --style!
7.8.3:
- $ pipenv check --unused.
7.8.2:
- Fallback to toml parser for absurdly large files.
7.8.1:
- Catch all exceptions in pipreqs.
7.8.0:
- Packaging fix.
7.7.9:
- Ignore bad packages with -c.
7.7.8:
- Minor bug fix.
7.7.7:
- $ pipenv install -c .
7.7.6:
- Fix a very, very minor UX bug.
7.7.5:
- No longer eat editables, as pip-tools does it for us now.
7.7.4:
- Install VCS deps into the virtualenv's src directory, not into the current directory.
7.7.3:
- Fix --three on Windows.
7.7.2:
- Bug fixes.
7.7.1:
- Bug fixes.
- Improvements to --index support for requirements imports.
7.7.0:
- Improved update caching mechanism.
- Only prompt for spelling correction in interactive sessions.
- Cleanup -e.
7.6.9:
- Change --two and --three to use --python 2 and --python 3 under the hood.
- This restores --two / --three usage on Windows.
7.6.8:
- `pipenv install -r requirements.txt --dev` now works.
7.6.7:
- New less-fancy progress bars (for Linux users, specifically).
- Support --python 3.
7.6.6:
- Packaging problem.
7.6.5:
- Patched vendored 'safety' package to remove yaml dependency — should work on all Pythons now.
7.6.4:
- Extensive integration test suite.
- Don't suggest autocorrections as often.
- Cleanups.
- Don't depend on setuptools anymore.
7.6.3:
- Cleanups.
7.6.2:
- Support for install/lock --pre.
7.6.1:
- Fix a nasty bug.
7.6.0:
- PEP 508 marker support for packages!
- Better verbose mode for install.
- Fix a nasty bug.
7.5.1:
- Skip the resolver for pinned versions (this comes up a lot).
- Maximum subprocesses (configurable) is now 8.
7.5.0:
- Deprecate shell -c mode.
- Make a new shell --fancy mode (old default mode).
- Introduce PIPENV_SHELL_FANCY.
- Introduce `pipenv --envs`.
7.4.9:
- Improvements to PIPENV_DEFAULT_PYTHON_VERSION.
- Improvements to auto-suggestions.
- Fix nasty bug with failing dependencies.
7.4.8:
- PIPENV_DEFAULT_PYTHON_VERSION
7.4.7:
- install --sequential, for boring people.
- PIPENV_DONT_LOAD_ENV.
- Fix for prettytoml.
- Don't add -e reqs to lockfile, as they're already present.
7.4.6:
- Specify a specific index for a specific dependency.
7.4.5:
- Support for custom indexes!
- Random bugfixes.
7.4.4:
- PIPENV_PIPFILE environment variable support.
- --site-packages flag, for the crazy at heart.
- Installation concurrency on Windows.
- Make `graph --json` consistent with `graph`.
- Much better support for suggesting package names.
- Updated to Pipfile spec 4, support for path= for relative package names.
- Import sources from requirements files.
- Cleanup stderr/stdout.
- 'pipenv check' only reports safety now for Python 3.
7.4.3:
- Download/install things concurrently.
7.4.2:
- Fix a nasty pyenv bug.
7.4.1:
- `graph --json`.
7.4.0:
- `pipenv --where` fix.
- Other general improvements.
7.3.9:
- Packaging fix.
7.3.8:
- Packaging fix.
7.3.7:
- Automatic support for .env files!
- Fuzzy finding of popular package names, for typos. Auto-suggested corrections for popular packages.
- Bug fixes.
7.3.6:
- Fix VCS dependency resolution.
7.3.5:
- Fix packaging.
7.3.4:
- An error occurred.
7.3.3:
- Pipenv check now includes security vulnerability disclosures!
7.3.2:
- Vastly improved support for VCS dependencies.
7.3.1:
- Advanced pyenv minor version support.
- Added support for "full_python_version".
- Added support for specifying minor versions of Python with `--python`.
- Removed "considering this to be project home" messaging from `pipenv install`.
7.3.0:
- Added support for grabbing dependencies from -e requirements into dependency graph.
7.2.9:
- Bug fixes.
7.2.8:
- Vast improvements to Python-finding abilities (multiple Pythons with the same name are now detected).
7.2.7:
- Automatically convert outline TOML tables to inline tables (losing comments in the process).
- Bug fixes.
7.2.6:
- Fix pip execution from within existing virtualenvs.
7.2.5:
- Always tell patched pip what version of Python we're using.
7.2.4:
- Improve compatibility with --system.
- Improve automatic --system use within shell spawning (disallowing it).
7.2.3:
- Courtesy notice when running in a virtualenv.
7.2.2:
- Improvements to pyenv detection.
- Refactorings and general improvements.
7.2.1:
- Bug fix.
7.2.0:
- Automatically install Pythons, if they aren't available and pyenv is set up!
- Fixes for when a requirements.txt file contains an !.
- Support for relative package paths (that wasn't easy either).
- Bug fixes.
7.1.1:
- Fixes for Windows (full compatibility restored — sorry!).
- Catch if graph is being run outside of a project directory.
- Catch if self-updater doesn't get a clean response from PyPI.
- Support Miniconda's `python --version` format.
7.1.0:
- Inline TOML tables for things like requests[security]!
- Attempt to preserve comments in Pipfiles.
7.0.6:
- NO_SPIN is now automatic when CI is set.
- Additionally, vendor pip (a patched version) for doing advanced dependency resolution.
7.0.5:
- Depend on latest version of pip.
7.0.4:
- Bug fix.
7.0.3:
- Windows fixes.
7.0.2:
- Tell pip we're using the required Python version, with trickery, for dependency resolution.
- Dev dependencies are now read from a lockfile before default dependencies, so any mismatches will prefer default to develop.
- Add support for extras_require in Pipfile for VCS URLs.
- Warn if 'which' is not found on the system.
- Warn if Pew or Virtualenv isn't in the PATH.
- More consistent stderr output.
7.0.1:
- [requires] python_version is now set for new projects, automatically, if a version of Python was specified.
- That wasn't easy.
7.0.0:
- New path handling for --python; versions like '3.6' are now supported.
- [requires] python_version is automatically honored.
6.2.9:
- Bug fix.
6.2.8:
- Bug fix.
6.2.7:
- pip run --system is now default.
6.2.6:
- Snakes, all the way down (and easter eggs for holidays!)
- Much improved CLI output.
- Introduction of PIPENV_HIDE_EMOJIS environment variable.
- Guide users to set LANG and LC_ALL.
6.2.5:
- Bug fix for 2.7.
6.2.4:
- UX improvements.
- Install un-installable dependencies, anyway.
6.2.3:
- Bug fixes and improvements.
- Add refs to lockfile for VCS dependencies.
- Don't re-capitalize URLs.
- Specify a requirements file to import from, with install --requirements / -r.
- Install dependencies for VCS installs.
6.2.2:
- Bug fix.
- Support for passwords in git URLs.
6.2.1:
- Quick fix.
6.2.0:
- Support for arbitrary files (e.g. pipenv install URL)!
- $ pipenv graph!
- $ pipenv run --system ipython.
- Skip virtualenv creation when --system is passed to install.
- Removal of lock --legacy.
- Improvements to locking mechanism integrity.
- Introduction of $ pipenv --jumbotron.
- Internal refactoring/code reduction.
6.1.6:
- Fix for Windows.
6.1.5:
- Grab hashes for un-grabbable hashes.
6.1.4:
- New update via $ pipenv --update, instead.
6.1.3:
- Skip validation of Pipfiles, massive speedup for far-away users.
- Other speed-ups.
6.1.1:
- Bug fix.
6.1.0:
- Self-updating! Very fancy. $ pipenv update.
- Verbose mode for update, install.
6.0.3:
- Major bug fix.
- Fix for Daniel Ryan's weird corner case.
6.0.2:
- Fix Python 2 regression.
6.0.1:
- Minor (major) bug fix.
6.0.0:
- New locking functionality — support for multiple hashes per release!
- Hashes are now default, everywhere, once again! We figured it out :)
- Pipenv talks to the PyPI (Warehouse) API now for grabbing hashes.
- --hashes flag removed.
- Upgraded to Pipfile spec 2.
- New --legacy mode for lock.
5.4.3:
- Fix for Windows.
5.4.2:
- Compatibility improvement with `run`.
5.4.1:
- Fix for packaging.
- $PIPENV_SKIP_VALIDATION.
5.4.0:
- Automatically load PATH from virtualenv, before running `pipenv shell`.
- Addition of `pipenv lock --verbose`.
- Vendor 'background' library.
5.3.5:
- Addition of update --dry-run.
- Removal of install --lock option.
5.3.4:
- Fix pip index passing.
5.3.3:
- Automatic notification of version updates.
5.3.2:
- Automatic locking after install/uninstall (because it's fast now!)
5.3.1:
- Improvements for Windows.
5.3.0:
- Mega-fast pipenv lock!
- Drop of Python 2.6.
5.2.0:
- Introduce install --skip-lock.
- Bugfixes.
5.1.3:
- Updated delegator.py to 0.0.13.
5.1.2:
- Add missing cacerts.pem file to MANIFEST.in.
- Improve error message when running `pipenv shell` multiple times.
- Fixed translation for editable installs from requirements.txt to Pipfile.
5.1.1:
- Bug fix.
5.1.0:
- Add PIPENV_TIMEOUT environment variable for custom timeouts.
- Remove PIPENV_DEFAULT_THREE.
5.0.0:
- Automatically utilize virtualenvs when they are activated.
- PIPENV_DEFAULT_THREE.
4.1.4:
- Fix regression in `pipenv lock -r` functionality.
4.1.3:
- Fix support for `pipenv install -e .`
4.1.2:
- Lazy load requirements for speed improvements.
- Better messaging on failed installs.
- More accurate logging for installation progress.
4.1.1:
- Remove old references.
4.1.0:
- Properly handle extras on requirements with versions.
- Accept the -e (editable) flag in pipenv install.
- Progress bars!
- Minor optimizations to the install process.
4.0.1:
- Pin Sphinx requirement at a Python 2.6 compatible version.
4.0.0:
- Make --no-hashes default, introduce --hashes.
- Fix for key error when uninstalling [dev-]packages.
3.6.2:
- Fix bug introduced into `pipenv install` in 3.6.1.
3.6.1:
- pipenv install now works if only a requirements.txt is present.
- `pipenv uninstall` now uninstalls from dev-packages as intended.
3.6.0:
- Make --two/--three handling more consistent.
- Update vendored delegator.py.
- Fix erroneous error messages in certain command combinations.
- Better version number handling for post releases.
- Bug fixes for some Windows environments (specifically Appveyor).
3.5.6:
- Fix broken help prompt.
3.5.5:
- Automatically cleanup virtualenv on keyboard interrupt.
- General improvements.
3.5.4:
- Bug fixes.
- Message formatting cleanup.
3.5.3:
- Add six to vendored libraries.
- Support for --ignore-hashes added to install command.
- Support for --no-hashes for lock command.
3.5.2:
- Vendor all the things!
- get-pipenv.py.
3.5.1:
- Basic Windows support!
3.5.0:
- Fully support multiple sources in Pipfile.
- Support multiple project directories with same name.
- Better support for non-standard project directory names.
- Support for VCS dependencies.
3.4.2:
- Attempt installing from all sources in Pipfile.
- Fix bug with accidental deletion of Pipfile contents.
- Update dependencies to work correctly with pipsi.
3.4.1:
- --no-interactive mode now activates automatically when needed.
3.4.0:
- --no-interactive mode added.
- Properly handle non-standard versioning schemes, including Epochs.
- Handle percent-encoded filenames.
- Fixed bug with Pipfile initializations.
- Streamlined file locations for projects.
- Improved package name resolution.
- Testing!
3.3.6:
- $ pipenv --venv option.
- $ pipenv --rm option.
3.3.5:
- Disable spinner by setting PIPENV_NOSPIN=1 environment variable.
3.3.4:
- Fix PIPENV_VENV_IN_PROJECT mode.
- Fix PIPENV_SHELL_COMPAT mode.
3.3.3:
- Spinners!
- Shell compatibility mode ($ pipenv shell -c).
- Classic virtualenv location (within project) mode.
- Removal of $ pipenv install --requirements.
- Addition of $ pipenv lock -r.
3.3.2:
- User-configurable max-depth for Pipfile searching.
- Bugfix.
3.3.1:
- Bugfix for install.
3.3.0:
- Use pew to manage virtual environments.
- Improved dashed version parsing.
3.2.14:
- Give --python precedence over --three/--two.
- Improvements for lockfile output for specific problematic packages.
- Bug fixes.
3.2.13:
- Improved stderr output for --requirements.
- Bug fixes.
3.2.12:
- Disable colors by setting PIPENV_COLORBLIND=1 environment variable.
3.2.11:
- Properly use pinned versions from Pipfile in Pipfile.lock.
3.2.10:
- Fix bugs.
3.2.9:
- Remove temporary requirements.txt after installation.
- Add support for --python option, for specifying any version of Python.
- Read source Pipfile.lock.
3.2.8:
- Lock before installing all dependencies, if lockfile isn't present.
3.2.7:
- Cache proper names for great speed increases.
3.2.6:
- Bug fixes.
3.2.5:
- Significant speed improvements for pipenv run and pipenv shell.
- Shell completion via click-completion.
- Perform package name normalization as best-effort attempt.
3.2.4:
- $ pipenv uninstall --all
- Don't uninstall setuptools, wheel, pip, or six.
- Improvements to Pipfile re-ordering when writing.
- Fix proper casing mechanism.
- Prevent invalid shebangs with Homebrew Python.
- Fix parsing issues with https://pypi.org/simple.
- Depend on 'pipfile' package.
3.2.3:
- $ pip uninstall --dev
- Minor refactoring.
- Improved error messaging for missing SHELL environment variables.
3.2.2:
- Better support for fish terminal.
3.2.1:
- Ensure proper casing of all Pipfile-specified packages.
3.2.0:
- Improved proper casing handling for mis-named packages.
- Support for $ pipenv install django-debug-toolbar.
- Minor cleanups.
- Fix for Python 3.
3.1.9:
- Bug fix.
3.1.8:
- Bug fix.
3.1.7:
- Actual Python 3 fix.
3.1.6:
- Python 3 fix.
3.1.5:
- Proper name resolver!
3.1.4:
- $ pip install --requirements.
3.1.3:
- Python 3 fix.
3.1.2:
- Python 3 fix.
3.1.1:
- Improved pip output (integrate with tool better).
- Pass exit code of $ pipenv run commands.
3.1.0:
- Check hashes upon installation!
3.0.1:
- Oops, version jump.
- Fix for $ pip uninstall --lock.
3.0.0:
- Speed of locking improved.
- Lock now uses downloads instead of installation functionality.
- Lock fix.
- Removed $ pipenv install -r functionality.
- Removal of $ pipenv lock --dev.
- Addition of $ pipenv install/uninstall --lock.
- Preliminary (non-enforced) hash functionality.
0.2.9:
- Enhanced-enhanced PEP 508 checking capabilities!
0.2.8:
- Enhanced PEP 508 checking capabilities!
0.2.7:
- Better workflow options for --three / --two.
0.2.6:
- Fix for bash shell invocation.
- Better support for comments in requirements.txt files.
- Support for Pipfile's [[source]].
- Pretty colors for help.
- Refactors.
0.2.5:
- Enhanced terminal resizing.
- Cleanups from PRs: typos.
- Better --where output when no Pipfile is present.
- Fix for Python 3.
- Rely directly on pexpect.
0.2.4:
- Fix for bash shell.
0.2.3:
- Support for Fish and Csh shells.
0.2.1:
- Trove classifiers.
0.2.0:
- Added support for $ pipenv --three / --two, for initializing virtualenvs with a specific Python version.
- Added support for VCS-backed installs, including editables.
- TODO: Still need to support non-git-backed VCS installations in Pipfiles.

LICENSE
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright 2017 Kenneth Reitz

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

MANIFEST.in
@@ -0,0 +1,6 @@
include README.rst LICENSE NOTICES HISTORY.txt pipenv/patched/safety.zip
include pipenv/patched/notpip/_vendor/requests/cacert.pem
include pipenv/vendor/pip9/_vendor/requests/cacert.pem
include pipenv/vendor/pipreqs/stdlib
include pipenv/vendor/pipreqs/mapping
include pipenv/pipenv.1

NOTICES
@@ -0,0 +1,3 @@
The contents of the vendor and patched directories are subject to different
licenses than the rest of this project. Their respective licenses can be
looked up at pypi.python.org.

PKG-INFO
@@ -0,0 +1,279 @@
Metadata-Version: 1.1
Name: pipenv
Version: 11.9.0
Summary: Python Development Workflow for Humans.
Home-page: https://github.com/pypa/pipenv
Author: Kenneth Reitz
Author-email: me@kennethreitz.org
License: MIT
Description-Content-Type: UNKNOWN
Description:
Pipenv: Python Development Workflow for Humans
==============================================

.. image:: https://img.shields.io/pypi/v/pipenv.svg
    :target: https://pypi.python.org/pypi/pipenv

.. image:: https://img.shields.io/pypi/l/pipenv.svg
    :target: https://pypi.python.org/pypi/pipenv

.. image:: https://img.shields.io/pypi/pyversions/pipenv.svg
    :target: https://pypi.python.org/pypi/pipenv

.. image:: https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg
    :target: https://saythanks.io/to/kennethreitz

---------------

**Pipenv** — the officially recommended Python packaging tool from `Python.org <https://packaging.python.org/tutorials/managing-dependencies/#managing-dependencies>`_, free (as in freedom).

Pipenv is a tool that aims to bring the best of all packaging worlds (bundler, composer, npm, cargo, yarn, etc.) to the Python world. *Windows is a first-class citizen, in our world.*

It automatically creates and manages a virtualenv for your projects, as well as adds/removes packages from your ``Pipfile`` as you install/uninstall packages. It also generates the ever-important ``Pipfile.lock``, which is used to produce deterministic builds.

.. image:: http://media.kennethreitz.com.s3.amazonaws.com/pipenv.gif

The problems that Pipenv seeks to solve are multi-faceted:

- You no longer need to use ``pip`` and ``virtualenv`` separately. They work together.
- Managing a ``requirements.txt`` file `can be problematic <https://www.kennethreitz.org/essays/a-better-pip-workflow>`_, so Pipenv uses the upcoming ``Pipfile`` and ``Pipfile.lock`` instead, which is superior for basic use cases.
- Hashes are used everywhere, always. Security. Automatically expose security vulnerabilities.
- Give you insight into your dependency graph (e.g. ``$ pipenv graph``).
- Streamline development workflow by loading ``.env`` files.

Installation
------------

If you're on MacOS, you can install Pipenv easily with Homebrew::

    $ brew install pipenv

Or, if you're using Ubuntu 17.10::

    $ sudo apt install software-properties-common python-software-properties
    $ sudo add-apt-repository ppa:pypa/ppa
    $ sudo apt update
    $ sudo apt install pipenv

Otherwise, just use pip::

    $ pip install pipenv

✨🍰✨

☤ User Testimonials
-------------------

**Jannis Leidel**, former pip maintainer—
    *Pipenv is the porcelain I always wanted to build for pip. It fits my brain and mostly replaces virtualenvwrapper and manual pip calls for me. Use it.*

**David Gang**—
    *This package manager is really awesome. For the first time I know exactly what my dependencies are which I installed and what the transitive dependencies are. Combined with the fact that installs are deterministic, makes this package manager first class, like cargo.*

**Justin Myles Holmes**—
    *Pipenv is finally an abstraction meant to engage the mind instead of merely the filesystem.*

☤ Features
----------

- Enables truly *deterministic builds*, while easily specifying *only what you want*.
- Generates and checks file hashes for locked dependencies.
- Automatically install required Pythons, if ``pyenv`` is available.
- Automatically finds your project home, recursively, by looking for a ``Pipfile``.
- Automatically generates a ``Pipfile``, if one doesn't exist.
- Automatically creates a virtualenv in a standard location.
- Automatically adds/removes packages to a ``Pipfile`` when they are un/installed.
- Automatically loads ``.env`` files, if they exist.

The main commands are ``install``, ``uninstall``, and ``lock``, which generates a ``Pipfile.lock``. These are intended to replace ``$ pip install`` usage, as well as manual virtualenv management (to activate a virtualenv, run ``$ pipenv shell``).

Basic Concepts
//////////////

- A virtualenv will automatically be created, when one doesn't exist.
- When no parameters are passed to ``install``, all packages ``[packages]`` specified will be installed.
- To initialize a Python 3 virtual environment, run ``$ pipenv --three``.
- To initialize a Python 2 virtual environment, run ``$ pipenv --two``.
- Otherwise, whatever virtualenv defaults to will be the default.

Other Commands
//////////////

- ``shell`` will spawn a shell with the virtualenv activated.
- ``run`` will run a given command from the virtualenv, with any arguments forwarded (e.g. ``$ pipenv run python``).
- ``check`` asserts that PEP 508 requirements are being met by the current environment.
- ``graph`` will print a pretty graph of all your installed dependencies.

Shell Completion
////////////////

For example, with fish, put this in your ``~/.config/fish/completions/pipenv.fish``::

    eval (pipenv --completion)

Alternatively, with bash, put this in your ``.bashrc`` or ``.bash_profile``::

    eval "$(pipenv --completion)"

Magic shell completions are now enabled! There is also a `fish plugin <https://github.com/fisherman/pipenv>`_, which will automatically activate your subshells for you!

Fish is the best shell. You should use it.

☤ Usage
-------

::

    $ pipenv
    Usage: pipenv [OPTIONS] COMMAND [ARGS]...

    Options:
      --where          Output project home information.
      --venv           Output virtualenv information.
      --py             Output Python interpreter information.
      --envs           Output Environment Variable options.
      --rm             Remove the virtualenv.
      --bare           Minimal output.
      --completion     Output completion (to be eval'd).
      --man            Display manpage.
      --three / --two  Use Python 3/2 when creating virtualenv.
      --python TEXT    Specify which version of Python virtualenv should use.
      --site-packages  Enable site-packages for the virtualenv.
      --version        Show the version and exit.
      -h, --help       Show this message and exit.


    Usage Examples:
       Create a new project using Python 3.6, specifically:
       $ pipenv --python 3.6

       Install all dependencies for a project (including dev):
       $ pipenv install --dev

       Create a lockfile containing pre-releases:
       $ pipenv lock --pre

       Show a graph of your installed dependencies:
       $ pipenv graph

       Check your installed dependencies for security vulnerabilities:
       $ pipenv check

       Install a local setup.py into your virtual environment/Pipfile:
       $ pipenv install -e .

       Use a lower-level pip command:
       $ pipenv run pip freeze

    Commands:
      check      Checks for security vulnerabilities and against PEP 508 markers
                 provided in Pipfile.
      clean      Uninstalls all packages not specified in Pipfile.lock.
      graph      Displays currently-installed dependency graph information.
      install    Installs provided packages and adds them to Pipfile, or (if none
                 is given), installs all packages.
      lock       Generates Pipfile.lock.
      open       View a given module in your editor.
      run        Spawns a command installed into the virtualenv.
      shell      Spawns a shell within the virtualenv.
      sync       Installs all packages specified in Pipfile.lock.
      uninstall  Un-installs a provided package and removes it from Pipfile.

Locate the project::

    $ pipenv --where
    /Users/kennethreitz/Library/Mobile Documents/com~apple~CloudDocs/repos/kr/pipenv/test

Locate the virtualenv::

    $ pipenv --venv
    /Users/kennethreitz/.local/share/virtualenvs/test-Skyy4vre

Locate the Python interpreter::

    $ pipenv --py
    /Users/kennethreitz/.local/share/virtualenvs/test-Skyy4vre/bin/python

Install packages::

    $ pipenv install
    Creating a virtualenv for this project...
    ...
    No package provided, installing all dependencies.
    Virtualenv location: /Users/kennethreitz/.local/share/virtualenvs/test-EJkjoYts
    Installing dependencies from Pipfile.lock...
    ...

    To activate this project's virtualenv, run the following:
    $ pipenv shell

Install a dev dependency::

    $ pipenv install pytest --dev
    Installing pytest...
    ...
    Adding pytest to Pipfile's [dev-packages]...

Show a dependency graph::

    $ pipenv graph
    requests==2.18.4
      - certifi [required: >=2017.4.17, installed: 2017.7.27.1]
      - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
      - idna [required: >=2.5,<2.7, installed: 2.6]
      - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

Generate a lockfile::

    $ pipenv lock
    Assuring all dependencies from Pipfile are installed...
    Locking [dev-packages] dependencies...
    Locking [packages] dependencies...
    Note: your project now has only default [packages] installed.
    To install [dev-packages], run: $ pipenv install --dev

Install all dev dependencies::

    $ pipenv install --dev
    Pipfile found at /Users/kennethreitz/repos/kr/pip2/test/Pipfile. Considering this to be the project home.
    Pipfile.lock out of date, updating...
    Assuring all dependencies from Pipfile are installed...
    Locking [dev-packages] dependencies...
    Locking [packages] dependencies...

Uninstall everything::

    $ pipenv uninstall --all
    No package provided, un-installing all dependencies.
    Found 25 installed package(s), purging...
    ...
    Environment now purged and fresh!

Use the shell::

    $ pipenv shell
    Loading .env environment variables…
    Launching subshell in virtual environment. Type 'exit' or 'Ctrl+D' to return.
    $ ▯

☤ Documentation
---------------

Documentation resides over at `pipenv.org <http://pipenv.org/>`_.

Platform: UNKNOWN
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy

README.rst
@@ -0,0 +1,257 @@
Pipenv: Python Development Workflow for Humans
==============================================

.. image:: https://img.shields.io/pypi/v/pipenv.svg
    :target: https://pypi.python.org/pypi/pipenv

.. image:: https://img.shields.io/pypi/l/pipenv.svg
    :target: https://pypi.python.org/pypi/pipenv

.. image:: https://img.shields.io/pypi/pyversions/pipenv.svg
    :target: https://pypi.python.org/pypi/pipenv

.. image:: https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg
    :target: https://saythanks.io/to/kennethreitz

---------------

**Pipenv** — the officially recommended Python packaging tool from `Python.org <https://packaging.python.org/tutorials/managing-dependencies/#managing-dependencies>`_, free (as in freedom).

Pipenv is a tool that aims to bring the best of all packaging worlds (bundler, composer, npm, cargo, yarn, etc.) to the Python world. *Windows is a first-class citizen, in our world.*

It automatically creates and manages a virtualenv for your projects, as well as adds/removes packages from your ``Pipfile`` as you install/uninstall packages. It also generates the ever-important ``Pipfile.lock``, which is used to produce deterministic builds.

.. image:: http://media.kennethreitz.com.s3.amazonaws.com/pipenv.gif

The problems that Pipenv seeks to solve are multi-faceted:

- You no longer need to use ``pip`` and ``virtualenv`` separately. They work together.
- Managing a ``requirements.txt`` file `can be problematic <https://www.kennethreitz.org/essays/a-better-pip-workflow>`_, so Pipenv uses the upcoming ``Pipfile`` and ``Pipfile.lock`` instead, which is superior for basic use cases.
- Hashes are used everywhere, always. Security. Automatically expose security vulnerabilities.
- Give you insight into your dependency graph (e.g. ``$ pipenv graph``).
- Streamline development workflow by loading ``.env`` files.

Installation
------------

If you're on MacOS, you can install Pipenv easily with Homebrew::

    $ brew install pipenv

Or, if you're using Ubuntu 17.10::

    $ sudo apt install software-properties-common python-software-properties
    $ sudo add-apt-repository ppa:pypa/ppa
    $ sudo apt update
    $ sudo apt install pipenv

Otherwise, just use pip::

    $ pip install pipenv

✨🍰✨

☤ User Testimonials
-------------------

**Jannis Leidel**, former pip maintainer—
    *Pipenv is the porcelain I always wanted to build for pip. It fits my brain and mostly replaces virtualenvwrapper and manual pip calls for me. Use it.*

**David Gang**—
    *This package manager is really awesome. For the first time I know exactly what my dependencies are which I installed and what the transitive dependencies are. Combined with the fact that installs are deterministic, makes this package manager first class, like cargo.*

**Justin Myles Holmes**—
    *Pipenv is finally an abstraction meant to engage the mind instead of merely the filesystem.*

☤ Features
----------

- Enables truly *deterministic builds*, while easily specifying *only what you want*.
- Generates and checks file hashes for locked dependencies.
- Automatically install required Pythons, if ``pyenv`` is available.
- Automatically finds your project home, recursively, by looking for a ``Pipfile``.
- Automatically generates a ``Pipfile``, if one doesn't exist.
- Automatically creates a virtualenv in a standard location.
- Automatically adds/removes packages to a ``Pipfile`` when they are un/installed.
- Automatically loads ``.env`` files, if they exist.

The main commands are ``install``, ``uninstall``, and ``lock``, which generates a ``Pipfile.lock``. These are intended to replace ``$ pip install`` usage, as well as manual virtualenv management (to activate a virtualenv, run ``$ pipenv shell``).

Basic Concepts
//////////////

- A virtualenv will automatically be created, when one doesn't exist.
- When no parameters are passed to ``install``, all packages ``[packages]`` specified will be installed.
- To initialize a Python 3 virtual environment, run ``$ pipenv --three``.
- To initialize a Python 2 virtual environment, run ``$ pipenv --two``.
- Otherwise, whatever virtualenv defaults to will be the default.

Other Commands
//////////////

- ``shell`` will spawn a shell with the virtualenv activated.
- ``run`` will run a given command from the virtualenv, with any arguments forwarded (e.g. ``$ pipenv run python``).
- ``check`` asserts that PEP 508 requirements are being met by the current environment.
- ``graph`` will print a pretty graph of all your installed dependencies.

Shell Completion
////////////////

For example, with fish, put this in your ``~/.config/fish/completions/pipenv.fish``::

    eval (pipenv --completion)

Alternatively, with bash, put this in your ``.bashrc`` or ``.bash_profile``::

    eval "$(pipenv --completion)"

Magic shell completions are now enabled! There is also a `fish plugin <https://github.com/fisherman/pipenv>`_, which will automatically activate your subshells for you!

Fish is the best shell. You should use it.

☤ Usage
-------

::

    $ pipenv
    Usage: pipenv [OPTIONS] COMMAND [ARGS]...

    Options:
      --where          Output project home information.
      --venv           Output virtualenv information.
      --py             Output Python interpreter information.
      --envs           Output Environment Variable options.
      --rm             Remove the virtualenv.
      --bare           Minimal output.
      --completion     Output completion (to be eval'd).
      --man            Display manpage.
      --three / --two  Use Python 3/2 when creating virtualenv.
      --python TEXT    Specify which version of Python virtualenv should use.
      --site-packages  Enable site-packages for the virtualenv.
      --version        Show the version and exit.
      -h, --help       Show this message and exit.


    Usage Examples:
       Create a new project using Python 3.6, specifically:
       $ pipenv --python 3.6

       Install all dependencies for a project (including dev):
       $ pipenv install --dev

       Create a lockfile containing pre-releases:
       $ pipenv lock --pre

       Show a graph of your installed dependencies:
       $ pipenv graph

       Check your installed dependencies for security vulnerabilities:
       $ pipenv check

       Install a local setup.py into your virtual environment/Pipfile:
       $ pipenv install -e .

       Use a lower-level pip command:
       $ pipenv run pip freeze

    Commands:
      check      Checks for security vulnerabilities and against PEP 508 markers
                 provided in Pipfile.
      clean      Uninstalls all packages not specified in Pipfile.lock.
      graph      Displays currently-installed dependency graph information.
      install    Installs provided packages and adds them to Pipfile, or (if none
                 is given), installs all packages.
      lock       Generates Pipfile.lock.
      open       View a given module in your editor.
      run        Spawns a command installed into the virtualenv.
      shell      Spawns a shell within the virtualenv.
      sync       Installs all packages specified in Pipfile.lock.
      uninstall  Un-installs a provided package and removes it from Pipfile.

Locate the project::

    $ pipenv --where
    /Users/kennethreitz/Library/Mobile Documents/com~apple~CloudDocs/repos/kr/pipenv/test

Locate the virtualenv::

    $ pipenv --venv
    /Users/kennethreitz/.local/share/virtualenvs/test-Skyy4vre

Locate the Python interpreter::

    $ pipenv --py
    /Users/kennethreitz/.local/share/virtualenvs/test-Skyy4vre/bin/python

Install packages::

    $ pipenv install
    Creating a virtualenv for this project...
    ...
    No package provided, installing all dependencies.
    Virtualenv location: /Users/kennethreitz/.local/share/virtualenvs/test-EJkjoYts
    Installing dependencies from Pipfile.lock...
    ...

    To activate this project's virtualenv, run the following:
    $ pipenv shell

Install a dev dependency::

    $ pipenv install pytest --dev
    Installing pytest...
    ...
    Adding pytest to Pipfile's [dev-packages]...

Show a dependency graph::

    $ pipenv graph
    requests==2.18.4
      - certifi [required: >=2017.4.17, installed: 2017.7.27.1]
      - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
      - idna [required: >=2.5,<2.7, installed: 2.6]
      - urllib3 [required: <1.23,>=1.21.1, installed: 1.22]

Generate a lockfile::

    $ pipenv lock
    Assuring all dependencies from Pipfile are installed...
    Locking [dev-packages] dependencies...
    Locking [packages] dependencies...
    Note: your project now has only default [packages] installed.
    To install [dev-packages], run: $ pipenv install --dev

Install all dev dependencies::

    $ pipenv install --dev
    Pipfile found at /Users/kennethreitz/repos/kr/pip2/test/Pipfile. Considering this to be the project home.
    Pipfile.lock out of date, updating...
    Assuring all dependencies from Pipfile are installed...
    Locking [dev-packages] dependencies...
    Locking [packages] dependencies...

Uninstall everything::

    $ pipenv uninstall --all
    No package provided, un-installing all dependencies.
    Found 25 installed package(s), purging...
    ...
    Environment now purged and fresh!

Use the shell::

    $ pipenv shell
    Loading .env environment variables…
    Launching subshell in virtual environment. Type 'exit' or 'Ctrl+D' to return.
    $ ▯

☤ Documentation
---------------

Documentation resides over at `pipenv.org <http://pipenv.org/>`_.

pipenv/__init__.py
@@ -0,0 +1,25 @@
# |~~\' |~~
# |__/||~~\|--|/~\\ /
# | ||__/|__| |\/
# |
import os
import sys

PIPENV_ROOT = os.path.dirname(os.path.realpath(__file__))
PIPENV_VENDOR = os.sep.join([PIPENV_ROOT, 'vendor'])
PIPENV_PATCHED = os.sep.join([PIPENV_ROOT, 'patched'])
# Inject vendored directory into system path.
sys.path.insert(0, PIPENV_VENDOR)
# Inject patched directory into system path.
sys.path.insert(0, PIPENV_PATCHED)
# Hack to make things work better.
try:
    if 'concurrency' in sys.modules:
        del sys.modules['concurrency']
except Exception:
    pass
from .cli import cli
from . import resolver

if __name__ == '__main__':
    cli()
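
The path-injection pattern in pipenv/__init__.py above is worth a standalone
illustration. A minimal sketch, assuming a hypothetical package with a bundled
vendor/ directory containing a copy of requests (these names are illustrative
assumptions, not part of the vendored tree)::

    # Entries at the front of sys.path are searched first, so a bundled
    # ./vendor directory shadows any system-wide install of the same package.
    import os
    import sys

    VENDOR = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'vendor')
    sys.path.insert(0, VENDOR)

    import requests  # now resolved from ./vendor/requests, if present there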

pipenv/__main__.py
@@ -0,0 +1,4 @@
from .cli import cli

if __name__ == '__main__':
    cli()

pipenv/__version__.py
@@ -0,0 +1,5 @@
# ___ ( ) ___ ___ __
# // ) ) / / // ) ) //___) ) // ) ) || / /
# //___/ / / //___/ / // // / / || / /
# // / / // ((____ // / / ||/ /
__version__ = '11.9.0'
@@ -0,0 +1,984 @@
# -*- coding: utf-8 -*-
import logging
import os
import sys

import click
import click_completion
import crayons
import delegator
from click_didyoumean import DYMCommandCollection

from .__version__ import __version__

from . import environments
from .environments import *

# Enable shell completion.
click_completion.init()
CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])


class PipenvGroup(click.Group):
    """Custom Group class provides formatted main help"""

    def get_help_option(self, ctx):
        from . import core

        """Override for showing formatted main help via --help and -h options"""
        help_options = self.get_help_option_names(ctx)
        if not help_options or not self.add_help_option:
            return

        def show_help(ctx, param, value):
            if value and not ctx.resilient_parsing:
                if not ctx.invoked_subcommand:
                    # legit main help
                    click.echo(core.format_help(ctx.get_help()))
                else:
                    # legit sub-command help
                    click.echo(ctx.get_help(), color=ctx.color)
                ctx.exit()

        return click.Option(
            help_options,
            is_flag=True,
            is_eager=True,
            expose_value=False,
            callback=show_help,
            help='Show this message and exit.',
        )


def setup_verbose(ctx, param, value):
    if value:
        logging.getLogger('pip').setLevel(logging.INFO)
    return value


@click.group(
    cls=PipenvGroup,
    invoke_without_command=True,
    context_settings=CONTEXT_SETTINGS,
)
@click.option(
    '--where',
    is_flag=True,
    default=False,
    help="Output project home information.",
)
@click.option(
    '--venv',
    is_flag=True,
    default=False,
    help="Output virtualenv information.",
)
@click.option(
    '--py',
    is_flag=True,
    default=False,
    help="Output Python interpreter information.",
)
@click.option(
    '--envs',
    is_flag=True,
    default=False,
    help="Output Environment Variable options.",
)
@click.option(
    '--rm', is_flag=True, default=False, help="Remove the virtualenv."
)
@click.option('--bare', is_flag=True, default=False, help="Minimal output.")
@click.option(
    '--completion',
    is_flag=True,
    default=False,
    help="Output completion (to be eval'd).",
)
@click.option('--man', is_flag=True, default=False, help="Display manpage.")
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--site-packages',
    is_flag=True,
    default=False,
    help="Enable site-packages for the virtualenv.",
)
@click.version_option(
    prog_name=crayons.normal('pipenv', bold=True), version=__version__
)
@click.pass_context
def cli(
    ctx,
    where=False,
    venv=False,
    rm=False,
    bare=False,
    three=False,
    python=False,
    help=False,
    py=False,
    site_packages=False,
    envs=False,
    man=False,
    completion=False,
):
    if completion:  # Handle this ASAP to make shell startup fast.
        if PIPENV_SHELL:
            click.echo(
                click_completion.get_code(
                    shell=PIPENV_SHELL.split(os.sep)[-1], prog_name='pipenv'
                )
            )
        else:
            click.echo(
                'Please ensure that the {0} environment variable '
                'is set.'.format(crayons.normal('SHELL', bold=True)),
                err=True,
            )
            sys.exit(1)
        sys.exit(0)
    from . import core
    if man:
        if core.system_which('man'):
            path = os.sep.join([os.path.dirname(__file__), 'pipenv.1'])
            os.execle(core.system_which('man'), 'man', path, os.environ)
        else:
            click.echo(
                'man does not appear to be available on your system.', err=True
            )
    if envs:
        click.echo(
            'The following environment variables can be set, to do various things:\n'
        )
        for key in environments.__dict__:
            if key.startswith('PIPENV'):
                click.echo(' - {0}'.format(crayons.normal(key, bold=True)))
        click.echo(
            '\nYou can learn more at:\n {0}'.format(
                crayons.green(
                    'http://docs.pipenv.org/advanced/#configuration-with-environment-variables'
                )
            )
        )
        sys.exit(0)
    core.warn_in_virtualenv()
    if ctx.invoked_subcommand is None:
        # --where was passed...
        if where:
            core.do_where(bare=True)
            sys.exit(0)
        elif py:
            core.do_py()
            sys.exit()
        # --venv was passed...
        elif venv:
            # There is no virtualenv yet.
            if not core.project.virtualenv_exists:
                click.echo(
                    crayons.red(
                        'No virtualenv has been created for this project yet!'
                    ),
                    err=True,
                )
                sys.exit(1)
            else:
                click.echo(core.project.virtualenv_location)
                sys.exit(0)
        # --rm was passed...
        elif rm:
            # Abort if --system (or running in a virtualenv).
            if PIPENV_USE_SYSTEM:
                click.echo(
                    crayons.red(
                        'You are attempting to remove a virtualenv that '
                        'Pipenv did not create. Aborting.'
                    )
                )
                sys.exit(1)
            if core.project.virtualenv_exists:
                loc = core.project.virtualenv_location
                click.echo(
                    crayons.normal(
                        u'{0} ({1})…'.format(
                            crayons.normal('Removing virtualenv', bold=True),
                            crayons.green(loc),
                        )
                    )
                )
                with core.spinner():
                    # Remove the virtualenv.
                    core.cleanup_virtualenv(bare=True)
                sys.exit(0)
            else:
                click.echo(
                    crayons.red(
                        'No virtualenv has been created for this project yet!',
                        bold=True,
                    ),
                    err=True,
                )
                sys.exit(1)
    # --two / --three was passed...
    if (python or three is not None) or site_packages:
        core.ensure_project(
            three=three, python=python, warn=True, site_packages=site_packages
        )
    # Check this again before exiting for empty ``pipenv`` command.
    elif ctx.invoked_subcommand is None:
        # Display help to user, if no commands were passed.
        click.echo(core.format_help(ctx.get_help()))


@click.command(
    short_help="Installs provided packages and adds them to Pipfile, or (if none is given), installs all packages.",
    context_settings=dict(ignore_unknown_options=True, allow_extra_args=True),
)
@click.argument('package_name', default=False)
@click.argument('more_packages', nargs=-1)
@click.option(
    '--dev',
    '-d',
    is_flag=True,
    default=False,
    help="Install package(s) in [dev-packages].",
)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--system', is_flag=True, default=False, help="System pip management."
)
@click.option(
    '--requirements',
    '-r',
    nargs=1,
    default=False,
    help="Import a requirements.txt file.",
)
@click.option(
    '--code', '-c', nargs=1, default=False, help="Import from codebase."
)
@click.option(
    '--verbose',
    '-v',
    is_flag=True,
    default=False,
    help="Verbose mode.",
    callback=setup_verbose,
)
@click.option(
    '--ignore-pipfile',
    is_flag=True,
    default=False,
    help="Ignore Pipfile when installing, using the Pipfile.lock.",
)
@click.option(
    '--sequential',
    is_flag=True,
    default=False,
    help="Install dependencies one-at-a-time, instead of concurrently.",
)
@click.option(
    '--skip-lock',
    is_flag=True,
    default=False,
    help=u"Ignore locking mechanisms when installing—use the Pipfile, instead.",
)
@click.option(
    '--deploy',
    is_flag=True,
    default=False,
    help=u"Abort if the Pipfile.lock is out–of–date, or Python version is wrong.",
)
@click.option(
    '--pre', is_flag=True, default=False, help=u"Allow pre–releases."
)
@click.option(
    '--keep-outdated',
    is_flag=True,
    default=False,
    help=u"Keep out–dated dependencies from being updated in Pipfile.lock.",
)
@click.option(
    '--selective-upgrade',
    is_flag=True,
    default=False,
    help="Update specified packages.",
)
def install(
    package_name=False,
    more_packages=False,
    dev=False,
    three=False,
    python=False,
    system=False,
    lock=True,
    ignore_pipfile=False,
    skip_lock=False,
    verbose=False,
    requirements=False,
    sequential=False,
    pre=False,
    code=False,
    deploy=False,
    keep_outdated=False,
    selective_upgrade=False,
):
    from . import core

    core.do_install(
        package_name=package_name,
        more_packages=more_packages,
        dev=dev,
        three=three,
        python=python,
        system=system,
        lock=lock,
        ignore_pipfile=ignore_pipfile,
        skip_lock=skip_lock,
        verbose=verbose,
        requirements=requirements,
        sequential=sequential,
        pre=pre,
        code=code,
        deploy=deploy,
        keep_outdated=keep_outdated,
        selective_upgrade=selective_upgrade,
    )


@click.command(
    short_help="Un-installs a provided package and removes it from Pipfile."
)
@click.argument('package_name', default=False)
@click.argument('more_packages', nargs=-1)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--system', is_flag=True, default=False, help="System pip management."
)
@click.option(
    '--verbose',
    '-v',
    is_flag=True,
    default=False,
    help="Verbose mode.",
    callback=setup_verbose,
)
@click.option('--lock', is_flag=True, default=True, help="Lock afterwards.")
@click.option(
    '--all-dev',
    is_flag=True,
    default=False,
    help="Un-install all packages from [dev-packages].",
)
@click.option(
    '--all',
    is_flag=True,
    default=False,
    help="Purge all package(s) from virtualenv. Does not edit Pipfile.",
)
@click.option(
    '--keep-outdated',
    is_flag=True,
    default=False,
    help=u"Keep out–dated dependencies from being updated in Pipfile.lock.",
)
def uninstall(
    package_name=False,
    more_packages=False,
    three=None,
    python=False,
    system=False,
    lock=False,
    all_dev=False,
    all=False,
    verbose=False,
    keep_outdated=False,
):
    from . import core

    core.do_uninstall(
        package_name=package_name,
        more_packages=more_packages,
        three=three,
        python=python,
        system=system,
        lock=lock,
        all_dev=all_dev,
        all=all,
        verbose=verbose,
        keep_outdated=keep_outdated,
    )


@click.command(short_help="Generates Pipfile.lock.")
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--verbose',
    '-v',
    is_flag=True,
    default=False,
    help="Verbose mode.",
    callback=setup_verbose,
)
@click.option(
    '--requirements',
    '-r',
    is_flag=True,
    default=False,
    help="Generate output compatible with requirements.txt.",
)
@click.option(
    '--dev',
    '-d',
    is_flag=True,
    default=False,
    help="Generate output compatible with requirements.txt for the development dependencies.",
)
@click.option(
    '--clear', is_flag=True, default=False, help="Clear the dependency cache."
)
@click.option(
    '--pre', is_flag=True, default=False, help=u"Allow pre–releases."
)
@click.option(
    '--keep-outdated',
    is_flag=True,
    default=False,
    help=u"Keep out–dated dependencies from being updated in Pipfile.lock.",
)
def lock(
    three=None,
    python=False,
    verbose=False,
    requirements=False,
    dev=False,
    clear=False,
    pre=False,
    keep_outdated=False,
):
    from . import core

    # Ensure that virtualenv is available.
    core.ensure_project(three=three, python=python)
    # Load the --pre settings from the Pipfile.
    if not pre:
        pre = core.project.settings.get('pre')
    if requirements:
        core.do_init(dev=dev, requirements=requirements)
    core.do_lock(
        verbose=verbose, clear=clear, pre=pre, keep_outdated=keep_outdated
    )


@click.command(
    short_help="Spawns a shell within the virtualenv.",
    context_settings=dict(ignore_unknown_options=True, allow_extra_args=True),
)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--fancy',
    is_flag=True,
    default=False,
    help="Run in shell in fancy mode (for elegantly configured shells).",
)
@click.option(
    '--anyway',
    is_flag=True,
    default=False,
    help="Always spawn a subshell, even if one is already spawned.",
)
@click.argument('shell_args', nargs=-1)
def shell(
    three=None, python=False, fancy=False, shell_args=None, anyway=False
):
    from . import core

    # Prevent user from activating nested environments.
    if 'PIPENV_ACTIVE' in os.environ:
        # If PIPENV_ACTIVE is set, VIRTUAL_ENV should always be set too.
        venv_name = os.environ.get(
            'VIRTUAL_ENV', 'UNKNOWN_VIRTUAL_ENVIRONMENT'
        )
        if not anyway:
            click.echo(
                '{0} {1} {2}\nNo action taken to avoid nested environments.'.format(
                    crayons.normal('Shell for'),
                    crayons.green(venv_name, bold=True),
                    crayons.normal('already activated.', bold=True),
                ),
                err=True,
            )
            sys.exit(1)
    # Load .env file.
    core.load_dot_env()
    # Use fancy mode for Windows.
    if os.name == 'nt':
        fancy = True
    core.do_shell(
        three=three, python=python, fancy=fancy, shell_args=shell_args
    )


@click.command(
    add_help_option=False,
    short_help="Spawns a command installed into the virtualenv.",
    context_settings=dict(
        ignore_unknown_options=True,
        allow_interspersed_args=False,
        allow_extra_args=True,
    ),
)
@click.argument('command')
@click.argument('args', nargs=-1)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
def run(command, args, three=None, python=False):
    from . import core

    core.do_run(command=command, args=args, three=three, python=python)


@click.command(
    short_help="Checks for security vulnerabilities and against PEP 508 markers provided in Pipfile.",
    context_settings=dict(ignore_unknown_options=True, allow_extra_args=True),
)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--system', is_flag=True, default=False, help="Use system Python."
)
@click.option(
    '--unused',
    nargs=1,
    default=False,
    help="Given a code path, show potentially unused dependencies.",
)
@click.argument('args', nargs=-1)
def check(
    three=None,
    python=False,
    system=False,
    unused=False,
    style=False,
    args=None,
):
    from . import core

    core.do_check(
        three=three, python=python, system=system, unused=unused, args=args
    )


@click.command(short_help="Runs lock, then sync.")
@click.argument('more_packages', nargs=-1)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--verbose',
    '-v',
    is_flag=True,
    default=False,
    help="Verbose mode.",
    callback=setup_verbose,
)
@click.option(
    '--dev',
    '-d',
    is_flag=True,
    default=False,
    help="Install package(s) in [dev-packages].",
)
@click.option(
    '--clear', is_flag=True, default=False, help="Clear the dependency cache."
)
@click.option('--bare', is_flag=True, default=False, help="Minimal output.")
@click.option(
    '--pre', is_flag=True, default=False, help=u"Allow pre–releases."
)
@click.option(
    '--keep-outdated',
    is_flag=True,
    default=False,
    help=u"Keep out–dated dependencies from being updated in Pipfile.lock.",
)
@click.option(
    '--sequential',
    is_flag=True,
    default=False,
    help="Install dependencies one-at-a-time, instead of concurrently.",
)
@click.option(
    '--outdated',
    is_flag=True,
    default=False,
    help=u"List out–of–date dependencies.",
)
@click.option(
    '--dry-run',
    is_flag=True,
    default=None,
    help=u"List out–of–date dependencies.",
)
@click.argument('package', default=False)
@click.pass_context
def update(
    ctx,
    three=None,
    python=False,
    system=False,
    verbose=False,
    clear=False,
    keep_outdated=False,
    pre=False,
    dev=False,
    bare=False,
    sequential=False,
    package=None,
    dry_run=None,
    outdated=False,
    more_packages=None,
):
    from . import core

    core.ensure_project(three=three, python=python, warn=True)
    if not outdated:
        outdated = bool(dry_run)
    if outdated:
        core.do_outdated()
    if not package:
        click.echo(
            '{0} {1} {2} {3}{4}'.format(
                crayons.white('Running', bold=True),
                crayons.red('$ pipenv lock', bold=True),
                crayons.white('then', bold=True),
                crayons.red('$ pipenv sync', bold=True),
                crayons.white('.', bold=True),
            )
        )
        # Load the --pre settings from the Pipfile.
        if not pre:
            pre = core.project.settings.get('pre')
        core.do_lock(
            verbose=verbose, clear=clear, pre=pre, keep_outdated=keep_outdated
        )
        core.do_sync(
            ctx=ctx,
            install=install,
            dev=dev,
            three=three,
            python=python,
            bare=bare,
            dont_upgrade=False,
            user=False,
            verbose=verbose,
            clear=clear,
            unused=False,
            sequential=sequential,
        )
    else:
        for package in ([package] + list(more_packages) or []):
            if package not in core.project.all_packages:
                click.echo(
                    '{0}: {1} was not found in your Pipfile! Aborting.'
                    ''.format(
                        crayons.red('Warning', bold=True),
                        crayons.green(package, bold=True),
                    ),
                    err=True,
                )
                sys.exit(1)
        core.ensure_lockfile(keep_outdated=core.project.lockfile_exists)
        # Install the dependencies.
        core.do_install(
            package_name=package,
            more_packages=more_packages,
            dev=dev,
            three=three,
            python=python,
            system=system,
            lock=True,
            ignore_pipfile=False,
            skip_lock=False,
            verbose=verbose,
            requirements=False,
            sequential=sequential,
            pre=pre,
            code=False,
            deploy=False,
            keep_outdated=True,
            selective_upgrade=True,
        )


@click.command(
    short_help=u"Displays currently–installed dependency graph information."
)
@click.option('--bare', is_flag=True, default=False, help="Minimal output.")
@click.option('--json', is_flag=True, default=False, help="Output JSON.")
@click.option(
    '--reverse', is_flag=True, default=False, help="Reversed dependency graph."
)
def graph(bare=False, json=False, reverse=False):
    from . import core

    core.do_graph(bare=bare, json=json, reverse=reverse)


@click.command(short_help="View a given module in your editor.", name="open")
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.argument('module', nargs=1)
def run_open(module, three=None, python=None):
    from . import core

    # Ensure that virtualenv is available.
    core.ensure_project(three=three, python=python, validate=False)
    c = delegator.run(
        '{0} -c "import {1}; print({1}.__file__);"'.format(
            core.which('python'), module
        )
    )
    try:
        assert c.return_code == 0
    except AssertionError:
        click.echo(crayons.red('Module not found!'))
        sys.exit(1)
    if '__init__.py' in c.out:
        p = os.path.dirname(c.out.strip().rstrip('cdo'))
    else:
        p = c.out.strip().rstrip('cdo')
    click.echo(
        crayons.normal('Opening {0!r} in your EDITOR.'.format(p), bold=True)
    )
    click.edit(filename=p)
    sys.exit(0)


@click.command(short_help="Installs all packages specified in Pipfile.lock.")
@click.option(
    '--verbose',
    '-v',
    is_flag=True,
    default=False,
    help="Verbose mode.",
    callback=setup_verbose,
)
@click.option(
    '--dev',
    '-d',
    is_flag=True,
    default=False,
    help="Additionally install package(s) in [dev-packages].",
)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option('--bare', is_flag=True, default=False, help="Minimal output.")
@click.option(
    '--clear', is_flag=True, default=False, help="Clear the dependency cache."
)
@click.option(
    '--sequential',
    is_flag=True,
    default=False,
    help="Install dependencies one-at-a-time, instead of concurrently.",
)
@click.pass_context
def sync(
    ctx,
    dev=False,
    three=None,
    python=None,
    bare=False,
    dont_upgrade=False,
    user=False,
    verbose=False,
    clear=False,
    unused=False,
    package_name=None,
    sequential=False,
):
    from . import core

    core.do_sync(
        ctx=ctx,
        install=install,
        dev=dev,
        three=three,
        python=python,
        bare=bare,
        dont_upgrade=dont_upgrade,
        user=user,
        verbose=verbose,
        clear=clear,
        unused=unused,
        sequential=sequential,
    )


@click.command(
    short_help="Uninstalls all packages not specified in Pipfile.lock."
)
@click.option(
    '--verbose',
    '-v',
    is_flag=True,
    default=False,
    help="Verbose mode.",
    callback=setup_verbose,
)
@click.option(
    '--three/--two',
    is_flag=True,
    default=None,
    help="Use Python 3/2 when creating virtualenv.",
)
@click.option(
    '--python',
    default=False,
    nargs=1,
    help="Specify which version of Python virtualenv should use.",
)
@click.option(
    '--dry-run',
    is_flag=True,
    default=False,
    help="Just output unneeded packages.",
)
@click.pass_context
def clean(
    ctx,
    three=None,
    python=None,
    dry_run=False,
    bare=False,
    user=False,
    verbose=False,
):
    from . import core

    core.do_clean(
        ctx=ctx, three=three, python=python, dry_run=dry_run, verbose=verbose
    )


# Install click commands.
cli.add_command(graph)
cli.add_command(install)
cli.add_command(uninstall)
cli.add_command(sync)
cli.add_command(lock)
cli.add_command(check)
cli.add_command(clean)
cli.add_command(shell)
cli.add_command(run)
cli.add_command(update)
cli.add_command(run_open)
# Only invoke the "did you mean" when an argument wasn't passed (it breaks those).
if '-' not in ''.join(sys.argv) and len(sys.argv) > 1:
    cli = DYMCommandCollection(sources=[cli])
if __name__ == '__main__':
    cli()
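# A quick way to exercise the command group above without installing the
# console script is click's built-in test runner. A sketch (assumes this
# module is importable as pipenv.cli):
#
#     from click.testing import CliRunner
#
#     from pipenv.cli import cli
#
#     runner = CliRunner()
#     result = runner.invoke(cli, ['--envs'])  # prints the PIPENV_* knobs
#     print(result.exit_code, result.output)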
(Diff for this file not shown because of its large size.)
@@ -0,0 +1,80 @@
import os
import sys
from appdirs import user_cache_dir

os.environ['PYTHONDONTWRITEBYTECODE'] = '1'

# Prevent invalid shebangs with Homebrew-installed Python:
# https://bugs.python.org/issue22490
os.environ.pop('__PYVENV_LAUNCHER__', None)
# Shell compatibility mode, for mis-configured shells.
PIPENV_SHELL_FANCY = bool(os.environ.get('PIPENV_SHELL_FANCY'))
# Support for both Python 2 and Python 3 at the same time.
PIPENV_PYTHON = os.environ.get('PIPENV_PYTHON')
# Create the virtualenv in the project, instead of with pew.
PIPENV_VENV_IN_PROJECT = bool(
    os.environ.get('PIPENV_VENV_IN_PROJECT')
) or os.path.isdir(
    '.venv'
)
# Overwrite all index functionality.
PIPENV_TEST_INDEX = os.environ.get('PIPENV_TEST_INDEX')
# No color mode, for unfun people.
PIPENV_COLORBLIND = bool(os.environ.get('PIPENV_COLORBLIND'))
# Disable spinner for better test and deploy logs (for the unworthy).
PIPENV_NOSPIN = bool(os.environ.get('PIPENV_NOSPIN'))
# Tells Pipenv how many rounds of resolving to do for Pip-Tools.
PIPENV_MAX_ROUNDS = int(os.environ.get('PIPENV_MAX_ROUNDS', '16'))
# Specify a custom Pipfile location.
PIPENV_PIPFILE = os.environ.get('PIPENV_PIPFILE')
# Tells Pipenv which Python to default to, when none is provided.
PIPENV_DEFAULT_PYTHON_VERSION = os.environ.get('PIPENV_DEFAULT_PYTHON_VERSION')
# Tells Pipenv to not load .env files.
PIPENV_DONT_LOAD_ENV = bool(os.environ.get('PIPENV_DONT_LOAD_ENV'))
# Tell Pipenv to default to yes at all prompts.
PIPENV_YES = bool(os.environ.get('PIPENV_YES'))
# Tells Pipenv how many subprocesses to use when installing.
PIPENV_MAX_SUBPROCESS = int(os.environ.get('PIPENV_MAX_SUBPROCESS', '16'))
# User-configurable max-depth for Pipfile searching.
# Note: +1 because of a temporary bug in Pipenv.
PIPENV_MAX_DEPTH = int(os.environ.get('PIPENV_MAX_DEPTH', '3')) + 1
# Tell Pipenv not to inherit parent directories (for development, mostly).
PIPENV_NO_INHERIT = 'PIPENV_NO_INHERIT' in os.environ
if PIPENV_NO_INHERIT:
    PIPENV_MAX_DEPTH = 2
# Tells Pipenv to use the virtualenv-provided pip instead.
PIPENV_VIRTUALENV = None
PIPENV_USE_SYSTEM = False
if 'PIPENV_ACTIVE' not in os.environ:
    if 'PIPENV_IGNORE_VIRTUALENVS' not in os.environ:
        PIPENV_VIRTUALENV = os.environ.get('VIRTUAL_ENV')
        PIPENV_USE_SYSTEM = bool(os.environ.get('VIRTUAL_ENV'))
# Tells Pipenv to use hashing mode.
PIPENV_USE_HASHES = True
# Tells Pipenv to skip case-checking (slow internet connections).
PIPENV_SKIP_VALIDATION = True
# Tells Pipenv where to load .env from.
PIPENV_DOTENV_LOCATION = os.environ.get('PIPENV_DOTENV_LOCATION')
# Use shell compatibility mode when using venv in project mode.
if PIPENV_VENV_IN_PROJECT:
    PIPENV_SHELL_COMPAT = True
# Disable spinner on Windows.
if os.name == 'nt':
    PIPENV_NOSPIN = True
# Disable the spinner on Travis-CI (and friends).
if 'CI' in os.environ:
    PIPENV_NOSPIN = True
PIPENV_HIDE_EMOJIS = bool(os.environ.get('PIPENV_HIDE_EMOJIS'))
if os.name == 'nt':
    PIPENV_HIDE_EMOJIS = True
# Tells Pipenv how long to wait for virtualenvs to be created in seconds.
PIPENV_TIMEOUT = int(os.environ.get('PIPENV_TIMEOUT', 120))
PIPENV_INSTALL_TIMEOUT = 60 * 15
PIPENV_DONT_USE_PYENV = os.environ.get('PIPENV_DONT_USE_PYENV')
PYENV_ROOT = os.environ.get('PYENV_ROOT', os.path.expanduser('~/.pyenv'))
PYENV_INSTALLED = (
    bool(os.environ.get('PYENV_SHELL')) or bool(os.environ.get('PYENV_ROOT'))
)
SESSION_IS_INTERACTIVE = bool(os.isatty(sys.stdout.fileno()))
PIPENV_SHELL = os.environ.get('SHELL') or os.environ.get('PYENV_SHELL')
PIPENV_CACHE_DIR = os.environ.get('PIPENV_CACHE_DIR', user_cache_dir('pipenv'))
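# These flags are computed once, at import time, so the environment has to be
# set before the module is imported. A sketch of that ordering constraint
# (assumes pipenv is importable; not part of the vendored file):
#
#     import os
#
#     os.environ['PIPENV_NOSPIN'] = '1'    # must happen before the import below
#     os.environ['PIPENV_MAX_DEPTH'] = '5'
#
#     from pipenv import environments  # values are read here, at import time
#
#     assert environments.PIPENV_NOSPIN is True
#     assert environments.PIPENV_MAX_DEPTH == 5 + 1  # the documented +1 workaround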
@@ -0,0 +1,90 @@
# coding: utf-8
import os
import sys
import crayons
import pipenv

from pprint import pprint
from .__version__ import __version__
from .core import project, system_which, find_python_in_path, python_version
from .pep508checker import lookup


def main():
    print('<details><summary>$ python -m pipenv.help output</summary>')
    print('')
    print('Pipenv version: `{0!r}`'.format(__version__))
    print('')
    print('Pipenv location: `{0!r}`'.format(os.path.dirname(pipenv.__file__)))
    print('')
    print('Python location: `{0!r}`'.format(sys.executable))
    print('')
    print('Other Python installations in `PATH`:')
    print('')
    for python_v in ('2.5', '2.6', '2.7', '3.4', '3.5', '3.6', '3.7'):
        found = find_python_in_path(python_v)
        if found:
            print(' - `{0}`: `{1}`'.format(python_v, found))
        found = system_which('python{0}'.format(python_v), mult=True)
        if found:
            for f in found:
                print(' - `{0}`: `{1}`'.format(python_v, f))
    print('')
    for p in ('python', 'python2', 'python3', 'py'):
        found = system_which(p, mult=True)
        for f in found:
            print(' - `{0}`: `{1}`'.format(python_version(f), f))
    print('')
    print('PEP 508 Information:')
    print('')
    print('```')
    pprint(lookup)
    print('```')
    print('')
    print('System environment variables:')
    print('')
    for key in os.environ:
        print(' - `{0}`'.format(key))
    print('')
    print(u'Pipenv–specific environment variables:')
    print('')
    for key in os.environ:
        if key.startswith('PIPENV'):
            print(' - `{0}`: `{1}`'.format(key, os.environ[key]))
    print('')
    print(u'Debug–specific environment variables:')
    print('')
    for key in ('PATH', 'SHELL', 'EDITOR', 'LANG', 'PWD', 'VIRTUAL_ENV'):
        if key in os.environ:
            print(' - `{0}`: `{1}`'.format(key, os.environ[key]))
    print('')
    print('')
    print('---------------------------')
    print('')
    if project.pipfile_exists:
        print(
            u'Contents of `Pipfile` ({0!r}):'.format(project.pipfile_location)
        )
        print('')
        print('```toml')
        with open(project.pipfile_location, 'r') as f:
            print(f.read())
        print('```')
        print('')
    if project.lockfile_exists:
        print('')
        print(
            u'Contents of `Pipfile.lock` ({0!r}):'.format(
                project.lockfile_location
            )
        )
        print('')
        print('```json')
        with open(project.lockfile_location, 'r') as f:
            print(f.read())
        print('```')
    print('</details>')


if __name__ == '__main__':
    main()
@@ -0,0 +1,48 @@
from ._version import VERSION

__version__ = VERSION


def loads(text):
    """
    Parses TOML text into a dict-like object and returns it.
    """
    from prettytoml.parser import parse_tokens
    from prettytoml.lexer import tokenize as lexer
    from .file import TOMLFile

    tokens = tuple(lexer(text, is_top_level=True))
    elements = parse_tokens(tokens)
    return TOMLFile(elements)


def load(file_path):
    """
    Parses a TOML file into a dict-like object and returns it.
    """
    return loads(open(file_path).read())


def dumps(value):
    """
    Dumps a data structure to TOML source code.

    The given value must be either a dict of dict values, a dict, or a TOML file constructed by this module.
    """
    from contoml.file.file import TOMLFile

    if not isinstance(value, TOMLFile):
        raise RuntimeError("Can only dump a TOMLFile instance loaded by load() or loads()")

    return value.dumps()


def dump(obj, file_path, prettify=False):
    """
    Dumps a data structure to the filesystem as TOML.

    The given value must be either a dict of dict values, a dict, or a TOML file constructed by this module.
    """
    with open(file_path, 'w') as fp:
        fp.write(dumps(obj))
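# A usage sketch for the four functions above (sample TOML inline; assumes
# the contoml package and its prettytoml dependency are importable):
#
#     import contoml
#
#     f = contoml.loads('[owner]\nname = "Tom"\n')
#     print(f['owner']['name'])     # -> Tom
#
#     f['owner']['name'] = 'Jerry'  # edits preserve the surrounding formatting
#     print(contoml.dumps(f))       # serialize back to TOML text
#     contoml.dump(f, 'out.toml')   # same, written to disk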
@@ -0,0 +1 @@
VERSION = 'master'
@@ -0,0 +1,3 @@

from .file import TOMLFile
@@ -0,0 +1,40 @@
from prettytoml.elements.table import TableElement
from prettytoml.errors import InvalidValueError
from contoml.file.freshtable import FreshTable
from prettytoml import util


class ArrayOfTables(list):

    def __init__(self, toml_file, name, iterable=None):
        if iterable:
            list.__init__(self, iterable)
        self._name = name
        self._toml_file = toml_file

    def append(self, value):
        if isinstance(value, dict):
            table = FreshTable(parent=self, name=self._name, is_array=True)
            table._append_to_parent()
            index = len(self._toml_file[self._name]) - 1
            for key_seq, value in util.flatten_nested(value).items():
                # self._toml_file._setitem_with_key_seq((self._name, index) + key_seq, value)
                self._toml_file._array_setitem_with_key_seq(self._name, index, key_seq, value)
            # for k, v in value.items():
            #     table[k] = v
        else:
            raise InvalidValueError('Can only append a dict to an array of tables')

    def __getitem__(self, item):
        try:
            return list.__getitem__(self, item)
        except IndexError:
            if item == len(self):
                return FreshTable(parent=self, name=self._name, is_array=True)
            else:
                raise

    def append_fresh_table(self, fresh_table):
        list.append(self, fresh_table)
        if self._toml_file:
            self._toml_file.append_fresh_table(fresh_table)
third_party/python/pipenv/pipenv/patched/contoml/file/cascadedict.py (56 lines, vendored, executable file)
@@ -0,0 +1,56 @@
import operator
from functools import reduce
from contoml.file import raw


class CascadeDict:
    """
    A dict-like object made up of one or more other dict-like objects where querying for an item cascade-gets
    it from all the internal dicts in order of their listing, and setting an item sets it on the first dict listed.
    """

    def __init__(self, *internal_dicts):
        assert internal_dicts, 'internal_dicts cannot be empty'
        self._internal_dicts = tuple(internal_dicts)

    def cascaded_with(self, one_more_dict):
        """
        Returns another instance with one more dict cascaded at the end.
        """
        # Unpack so the new instance cascades the same dicts plus one more.
        return CascadeDict(*(self._internal_dicts + (one_more_dict,)))

    def __getitem__(self, item):
        for d in self._internal_dicts:
            try:
                return d[item]
            except KeyError:
                pass
        raise KeyError

    def __setitem__(self, key, value):
        self._internal_dicts[0][key] = value

    def keys(self):
        return set(reduce(operator.or_, (set(d.keys()) for d in self._internal_dicts)))

    def items(self):
        all_items = reduce(operator.add, (list(d.items()) for d in reversed(self._internal_dicts)))
        unique_items = {k: v for k, v in all_items}.items()
        return tuple(unique_items)

    def __contains__(self, item):
        for d in self._internal_dicts:
            if item in d:
                return True
        return False

    @property
    def neutralized(self):
        return {k: raw.to_raw(v) for k, v in self.items()}

    @property
    def primitive_value(self):
        return self.neutralized

    def __repr__(self):
        return repr(self.primitive_value)
@@ -0,0 +1,293 @@
from prettytoml.errors import NoArrayFoundError, DuplicateKeysError, DuplicateTablesError
from contoml.file import structurer, toplevels, raw
from contoml.file.array import ArrayOfTables
from contoml.file.freshtable import FreshTable
import prettytoml.elements.factory as element_factory
import prettytoml.util as util


class TOMLFile:
    """
    A TOMLFile object that tries its best to preserve formatting and order of mappings of the input source.

    Raises InvalidTOMLFileError on invalid input elements.

    Raises DuplicateKeysError, DuplicateTablesError when appropriate.
    """

    def __init__(self, _elements):
        self._elements = []
        self._navigable = {}
        self.append_elements(_elements)

    def __getitem__(self, item):
        try:
            value = self._navigable[item]
            if isinstance(value, (list, tuple)):
                return ArrayOfTables(toml_file=self, name=item, iterable=value)
            else:
                return value
        except KeyError:
            return FreshTable(parent=self, name=item, is_array=False)

    def get(self, item, default=None):
        """This was not here for who knows why."""

        if item not in self:
            return default
        else:
            return self.__getitem__(item)

    def __contains__(self, item):
        return item in self.keys()

    def _setitem_with_key_seq(self, key_seq, value):
        """
        Sets the value in the TOML file located by the given key sequence.

        Example:
            self._setitem(('key1', 'key2', 'key3'), 'text_value')
        is equivalent to doing
            self['key1']['key2']['key3'] = 'text_value'
        """
        table = self
        key_so_far = tuple()
        for key in key_seq[:-1]:
            key_so_far += (key,)
            self._make_sure_table_exists(key_so_far)
            table = table[key]
        table[key_seq[-1]] = value

    def _array_setitem_with_key_seq(self, array_name, index, key_seq, value):
        """
        Sets the array value in the TOML file located by the given key sequence.

        Example:
            self._array_setitem(array_name, index, ('key1', 'key2', 'key3'), 'text_value')
        is equivalent to doing
            self.array(array_name)[index]['key1']['key2']['key3'] = 'text_value'
        """
        table = self.array(array_name)[index]
        key_so_far = tuple()
        for key in key_seq[:-1]:
            key_so_far += (key,)
            new_table = self._array_make_sure_table_exists(array_name, index, key_so_far)
            if new_table is not None:
                table = new_table
            else:
                table = table[key]
        table[key_seq[-1]] = value

    def _make_sure_table_exists(self, name_seq):
        """
        Makes sure the table with the full name comprising of name_seq exists.
        """
        t = self
        for key in name_seq[:-1]:
            t = t[key]
        name = name_seq[-1]
        if name not in t:
            self.append_elements([element_factory.create_table_header_element(name_seq),
                                  element_factory.create_table({})])

    def _array_make_sure_table_exists(self, array_name, index, name_seq):
        """
        Makes sure the table with the full name comprising of name_seq exists.
        """
        t = self[array_name][index]
        for key in name_seq[:-1]:
            t = t[key]
        name = name_seq[-1]
        if name not in t:
            new_table = element_factory.create_table({})
            self.append_elements([element_factory.create_table_header_element((array_name,) + name_seq), new_table])
            return new_table

    def __delitem__(self, key):
        table_element_index = self._elements.index(self._navigable[key])
        self._elements[table_element_index] = element_factory.create_table({})
        self._on_element_change()

    def __setitem__(self, key, value):

        # Setting an array-of-tables
        if key and isinstance(value, (tuple, list)) and value and all(isinstance(v, dict) for v in value):
            for table in value:
                self.array(key).append(table)

        # Or setting a whole single table
        elif isinstance(value, dict):

            if key and key in self:
                del self[key]

            for key_seq, child_value in util.flatten_nested({key: value}).items():
                self._setitem_with_key_seq(key_seq, child_value)

            # if key in self._navigable:
            #     del self[key]
            #     index = self._elements.index(self._navigable[key])
            #     self._elements = self._elements[:index] + [element_factory.create_table(value)] + self._elements[index+1:]
            # else:
            #     if key:
            #         self._elements.append(element_factory.create_table_header_element(key))
            #     self._elements.append(element_factory.create_table(value))

        # Or updating the anonymous section table
        else:
            # It's mea
            self[''][key] = value

        self._on_element_change()

    def _detect_toplevels(self):
        """
        Returns a sequence of TopLevel instances for the current state of this table.
        """
        return tuple(e for e in toplevels.identify(self.elements) if isinstance(e, toplevels.Table))

    def _update_table_fallbacks(self, table_toplevels):
        """
        Updates the fallbacks on all the table elements to make relative table access possible.

        Raises DuplicateKeysError if appropriate.
        """

        if len(self.elements) <= 1:
            return

        def parent_of(toplevel):
            # Returns a TopLevel parent of the given entry, or None.
            for parent_toplevel in table_toplevels:
                if toplevel.name.sub_names[:-1] == parent_toplevel.name.sub_names:
                    return parent_toplevel

        for entry in table_toplevels:
            if entry.name.is_qualified:
                parent = parent_of(entry)
                if parent:
                    child_name = entry.name.without_prefix(parent.name)
                    parent.table_element.set_fallback({child_name.sub_names[0]: entry.table_element})

    def _recreate_navigable(self):
        if self._elements:
            self._navigable = structurer.structure(toplevels.identify(self._elements))

    def array(self, name):
        """
        Returns the array of tables with the given name.
        """
        if name in self._navigable:
            if isinstance(self._navigable[name], (list, tuple)):
                return self[name]
            else:
                raise NoArrayFoundError
        else:
            return ArrayOfTables(toml_file=self, name=name)

    def _on_element_change(self):
        self._recreate_navigable()

        table_toplevels = self._detect_toplevels()
        self._update_table_fallbacks(table_toplevels)

    def append_elements(self, elements):
        """
        Appends more elements to the contained internal elements.
        """
        self._elements = self._elements + list(elements)
        self._on_element_change()

    def prepend_elements(self, elements):
        """
        Prepends more elements to the contained internal elements.
        """
        self._elements = list(elements) + self._elements
        self._on_element_change()

    def dumps(self):
        """
        Returns the TOML file serialized back to str.
        """
        return ''.join(element.serialized() for element in self._elements)

    def dump(self, file_path):
        with open(file_path, mode='w') as fp:
            fp.write(self.dumps())

    def keys(self):
        return set(self._navigable.keys()) | {''}

    def values(self):
        return self._navigable.values()

    def items(self):
        items = self._navigable.items()

        def has_anonymous_entry():
            return any(key == '' for (key, _) in items)

        if has_anonymous_entry():
            return items
        else:
            return items + [('', self[''])]

    @property
    def primitive(self):
        """
        Returns a primitive object representation for this container (which is a dict).

        WARNING: The returned container does not contain any markup or formatting metadata.
        """
        raw_container = raw.to_raw(self._navigable)

        # Collapsing the anonymous table onto the top-level container, if present
        if '' in raw_container:
            raw_container.update(raw_container[''])
            del raw_container['']

        return raw_container

    def append_fresh_table(self, fresh_table):
        """
        Gets called by FreshTable instances when they get written to.
        """
        if fresh_table.name:
            elements = []
            if fresh_table.is_array:
                elements += [element_factory.create_array_of_tables_header_element(fresh_table.name)]
            else:
                elements += [element_factory.create_table_header_element(fresh_table.name)]

            elements += [fresh_table, element_factory.create_newline_element()]
            self.append_elements(elements)

        else:
            # It's an anonymous table
            self.prepend_elements([fresh_table, element_factory.create_newline_element()])

    @property
    def elements(self):
        return self._elements

    def __str__(self):

        is_empty = (not self['']) and (not tuple(k for k in self.keys() if k))

        def key_name(key):
            return '[ANONYMOUS]' if not key else key

        def pair(key, value):
            return '%s = %s' % (key_name(key), str(value))

        content_text = '' if is_empty else \
            '\n\t' + ',\n\t'.join(pair(k, v) for (k, v) in self.items() if v) + '\n'

        return "TOMLFile{%s}" % content_text

    def __repr__(self):
        return str(self)
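# A sketch of the three assignment branches handled by TOMLFile.__setitem__
# above, starting from an empty file (assumes contoml is importable):
#
#     import contoml
#
#     f = contoml.loads('')
#     f['title'] = 'TOML Example'                           # anonymous top-level entry
#     f['owner'] = {'name': 'Tom'}                          # a whole single table
#     f['fruit'] = [{'name': 'apple'}, {'name': 'banana'}]  # an array of tables
#     print(f.dumps())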
@@ -0,0 +1,45 @@
from prettytoml.elements.table import TableElement


class FreshTable(TableElement):
    """
    A fresh TableElement that appends itself to its parent when it first gets written to, at most once.

    parent is an object providing an append_fresh_table(TableElement) method.
    """

    def __init__(self, parent, name, is_array=False):
        TableElement.__init__(self, sub_elements=[])

        self._parent = parent
        self._name = name
        self._is_array = is_array

        # As long as this flag is false, setitem() operations will append the table header and this table
        # to the toml_file's elements
        self.__appended = False

    @property
    def name(self):
        return self._name

    @property
    def is_array(self):
        return self._is_array

    def _append_to_parent(self):
        """
        Causes this ephemeral table to be persisted on the TOMLFile.
        """

        if self.__appended:
            return

        if self._parent is not None:
            self._parent.append_fresh_table(self)

        self.__appended = True

    def __setitem__(self, key, value):
        TableElement.__setitem__(self, key, value)
        self._append_to_parent()
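# The contract described above, sketched: indexing a missing table on a
# TOMLFile hands back a FreshTable, and the first write is what persists it
# (assumes contoml is importable):
#
#     import contoml
#
#     f = contoml.loads('')
#     t = f['tool']       # a FreshTable; the file is still unchanged
#     t['name'] = 'demo'  # first write appends a [tool] header plus this table
#     assert 'tool' in f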
@@ -0,0 +1,30 @@
import itertools


class PeekableIterator:

    # Returned by peek() when the iterator is exhausted. Truthiness is False.
    Nothing = tuple()

    def __init__(self, iter):
        self._iter = iter

    def __next__(self):
        return next(self._iter)

    def next(self):
        return self.__next__()

    def __iter__(self):
        return self

    def peek(self):
        """
        Returns PeekableIterator.Nothing when the iterator is exhausted.
        """
        try:
            v = next(self._iter)
            self._iter = itertools.chain((v,), self._iter)
            return v
        except StopIteration:
            return PeekableIterator.Nothing
@@ -0,0 +1,16 @@
from prettytoml.elements.abstracttable import AbstractTable


def to_raw(x):
    from contoml.file.cascadedict import CascadeDict

    if isinstance(x, AbstractTable):
        return x.primitive_value
    elif isinstance(x, CascadeDict):
        return x.neutralized
    elif isinstance(x, (list, tuple)):
        return [to_raw(y) for y in x]
    elif isinstance(x, dict):
        return {k: to_raw(v) for (k, v) in x.items()}
    else:
        return x
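# to_raw recurses structurally: markup-aware containers collapse to their
# primitive values, tuples become lists, and plain values pass through.
# For instance (plain builtins only, so no markup objects are involved):
#
#     assert to_raw({'a': (1, 2), 'b': {'c': 'x'}}) == {'a': [1, 2], 'b': {'c': 'x'}}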
third_party/python/pipenv/pipenv/patched/contoml/file/structurer.py (116 lines, vendored, executable file)
@@ -0,0 +1,116 @@
from contoml.file import toplevels
from contoml.file.cascadedict import CascadeDict


class NamedDict(dict):
    """
    A dict that can use Name instances as keys.
    """

    def __init__(self, other_dict=None):
        dict.__init__(self)
        if other_dict:
            for k, v in other_dict.items():
                self[k] = v

    def __setitem__(self, key, value):
        """
        key can be a Name instance.

        When key is a path in the form of a Name instance, all the parents and grandparents of the value are
        created along the way as instances of NamedDict. If the parent of the value exists, it is replaced with a
        CascadeDict() that cascades the old parent value with a new NamedDict that contains the given child name
        and value.
        """
        if isinstance(key, toplevels.Name):

            if len(key.sub_names) == 1:
                name = key.sub_names[0]
                if name in self:
                    self[name] = CascadeDict(self[name], value)
                else:
                    self[name] = value

            elif len(key.sub_names) > 1:
                name = key.sub_names[0]
                rest_of_key = key.drop(1)
                if name in self:
                    named_dict = NamedDict()
                    named_dict[rest_of_key] = value
                    self[name] = CascadeDict(self[name], named_dict)
                else:
                    self[name] = NamedDict()
                    self[name][rest_of_key] = value
        else:
            return dict.__setitem__(self, key, value)

    def __contains__(self, item):
        try:
            _ = self[item]
            return True
        except KeyError:
            return False

    def append(self, key, value):
        """
        Makes sure the value pointed to by key exists and is a list and appends the given value to it.
        """
        if key in self:
            self[key].append(value)
        else:
            self[key] = [value]

    def __getitem__(self, item):

        if isinstance(item, toplevels.Name):
            d = self
            for name in item.sub_names:
                d = d[name]
            return d
        else:
            return dict.__getitem__(self, item)


def structure(table_toplevels):
    """
    Accepts an ordered sequence of TopLevel instances and returns a navigable object structure representation of the
    TOML file.
    """

    table_toplevels = tuple(table_toplevels)
    obj = NamedDict()

    last_array_of_tables = None  # The Name of the last array-of-tables header

    for toplevel in table_toplevels:

        if isinstance(toplevel, toplevels.AnonymousTable):
            obj[''] = toplevel.table_element

        elif isinstance(toplevel, toplevels.Table):
            if last_array_of_tables and toplevel.name.is_prefixed_with(last_array_of_tables):
                seq = obj[last_array_of_tables]
                unprefixed_name = toplevel.name.without_prefix(last_array_of_tables)

                seq[-1] = CascadeDict(seq[-1], NamedDict({unprefixed_name: toplevel.table_element}))
            else:
                obj[toplevel.name] = toplevel.table_element
        else:  # It's an ArrayOfTables

            if last_array_of_tables and toplevel.name != last_array_of_tables and \
                    toplevel.name.is_prefixed_with(last_array_of_tables):

                seq = obj[last_array_of_tables]
                unprefixed_name = toplevel.name.without_prefix(last_array_of_tables)

                if unprefixed_name in seq[-1]:
                    seq[-1][unprefixed_name].append(toplevel.table_element)
                else:
                    cascaded_with = NamedDict({unprefixed_name: [toplevel.table_element]})
                    seq[-1] = CascadeDict(seq[-1], cascaded_with)

            else:
                obj.append(toplevel.name, toplevel.table_element)
            last_array_of_tables = toplevel.name

    return obj
third_party/python/pipenv/pipenv/patched/contoml/file/test_cascadedict.py (25 lines, vendored, executable file)
@@ -0,0 +1,25 @@
from contoml.file.cascadedict import CascadeDict


def test_cascadedict():

    d1 = {'a': 1, 'b': 2, 'c': 3}
    d2 = {'b': 12, 'e': 4, 'f': 5}

    cascade = CascadeDict(d1, d2)

    # Test querying
    assert cascade['a'] == 1
    assert cascade['b'] == 2
    assert cascade['c'] == 3
    assert cascade['e'] == 4
    assert cascade.keys() == {'a', 'b', 'c', 'e', 'f'}
    assert set(cascade.items()) == {('a', 1), ('b', 2), ('c', 3), ('e', 4), ('f', 5)}

    # Test mutating
    cascade['a'] = 11
    cascade['f'] = 'fff'
    cascade['super'] = 'man'
    assert d1['a'] == 11
    assert d1['super'] == 'man'
    assert d1['f'] == 'fff'
20
third_party/python/pipenv/pipenv/patched/contoml/file/test_entries.py
vendored
Executable file
|
@ -0,0 +1,20 @@
|
|||
from prettytoml import parser, lexer
|
||||
from contoml.file import toplevels
|
||||
|
||||
|
||||
def test_entry_extraction():
|
||||
text = open('sample.toml').read()
|
||||
elements = parser.parse_tokens(lexer.tokenize(text))
|
||||
|
||||
e = tuple(toplevels.identify(elements))
|
||||
|
||||
assert len(e) == 13
|
||||
assert isinstance(e[0], toplevels.AnonymousTable)
|
||||
|
||||
|
||||
def test_entry_names():
|
||||
name_a = toplevels.Name(('super', 'sub1'))
|
||||
name_b = toplevels.Name(('super', 'sub1', 'sub2', 'sub3'))
|
||||
|
||||
assert name_b.is_prefixed_with(name_a)
|
||||
assert name_b.without_prefix(name_a).sub_names == ('sub2', 'sub3')
|
12
third_party/python/pipenv/pipenv/patched/contoml/file/test_peekableit.py
vendored
Executable file
|
@ -0,0 +1,12 @@
|
|||
from contoml.file.peekableit import PeekableIterator
|
||||
|
||||
|
||||
def test_peekable_iterator():
|
||||
|
||||
peekable = PeekableIterator(i for i in (1, 2, 3, 4))
|
||||
|
||||
assert peekable.peek() == 1
|
||||
assert peekable.peek() == 1
|
||||
assert peekable.peek() == 1
|
||||
|
||||
assert [next(peekable), next(peekable), next(peekable), next(peekable)] == [1, 2, 3, 4]
|
41
third_party/python/pipenv/pipenv/patched/contoml/file/test_structurer.py
vendored
Executable file
|
@ -0,0 +1,41 @@
|
|||
from prettytoml import lexer, parser
|
||||
from contoml.file import toplevels
|
||||
from prettytoml.parser import elementsanitizer
|
||||
from contoml.file.structurer import NamedDict, structure
|
||||
from prettytoml.parser.tokenstream import TokenStream
|
||||
|
||||
|
||||
def test_NamedDict():
|
||||
|
||||
d = NamedDict()
|
||||
|
||||
d[toplevels.Name(('super', 'sub1', 'sub2'))] = {'sub3': 12}
|
||||
d[toplevels.Name(('super', 'sub1', 'sub2'))]['sub4'] = 42
|
||||
|
||||
assert d[toplevels.Name(('super', 'sub1', 'sub2', 'sub3'))] == 12
|
||||
assert d[toplevels.Name(('super', 'sub1', 'sub2', 'sub4'))] == 42
|
||||
|
||||
|
||||
def test_structure():
|
||||
tokens = lexer.tokenize(open('sample.toml').read())
|
||||
elements = elementsanitizer.sanitize(parser.parse_tokens(tokens))
|
||||
entries_ = tuple(toplevels.identify(elements))
|
||||
|
||||
s = structure(entries_)
|
||||
|
||||
assert s['']['title'] == 'TOML Example'
|
||||
assert s['owner']['name'] == 'Tom Preston-Werner'
|
||||
assert s['database']['ports'][1] == 8001
|
||||
assert s['servers']['alpha']['dc'] == 'eqdc10'
|
||||
assert s['clients']['data'][1][0] == 1
|
||||
assert s['clients']['key3'] == 'The quick brown fox jumps over the lazy dog.'
|
||||
|
||||
assert s['fruit'][0]['name'] == 'apple'
|
||||
assert s['fruit'][0]['physical']['color'] == 'red'
|
||||
assert s['fruit'][0]['physical']['shape'] == 'round'
|
||||
assert s['fruit'][0]['variety'][0]['name'] == 'red delicious'
|
||||
assert s['fruit'][0]['variety'][1]['name'] == 'granny smith'
|
||||
|
||||
assert s['fruit'][1]['name'] == 'banana'
|
||||
assert s['fruit'][1]['variety'][0]['name'] == 'plantain'
|
||||
assert s['fruit'][1]['variety'][0]['points'][2]['y'] == 4
|
|
@ -0,0 +1,142 @@
|
|||
"""
|
||||
Top-level entries in a TOML file.
|
||||
"""
|
||||
|
||||
from prettytoml import elements
|
||||
from prettytoml.elements import TableElement, TableHeaderElement
|
||||
from .peekableit import PeekableIterator
|
||||
|
||||
|
||||
class TopLevel:
|
||||
"""
|
||||
An abstract top-level entry.
|
||||
"""
|
||||
|
||||
def __init__(self, names, table_element):
|
||||
self._table_element = table_element
|
||||
self._names = Name(names)
|
||||
|
||||
@property
|
||||
def table_element(self):
|
||||
return self._table_element
|
||||
|
||||
@property
|
||||
def name(self):
|
||||
"""
|
||||
The distinct name of a table entry, as a Name instance.
|
||||
"""
|
||||
return self._names
|
||||
|
||||
|
||||
class Name:
|
||||
|
||||
def __init__(self, names):
|
||||
self._names = names
|
||||
|
||||
@property
|
||||
def sub_names(self):
|
||||
return self._names
|
||||
|
||||
def drop(self, n=0):
|
||||
"""
|
||||
Returns the name after dropping the first n entries of it.
|
||||
"""
|
||||
return Name(names=self._names[n:])
|
||||
|
||||
def is_prefixed_with(self, names):
|
||||
if isinstance(names, Name):
|
||||
return self.is_prefixed_with(names.sub_names)
|
||||
|
||||
for i, name in enumerate(names):
|
||||
if self._names[i] != name:
|
||||
return False
|
||||
return True
|
||||
|
||||
def without_prefix(self, names):
|
||||
if isinstance(names, Name):
|
||||
return self.without_prefix(names.sub_names)
|
||||
|
||||
for i, name in enumerate(names):
|
||||
if name != self._names[i]:
|
||||
return Name(self._names[i:])
|
||||
return Name(names=self.sub_names[len(names):])
|
||||
|
||||
@property
|
||||
def is_qualified(self):
|
||||
return len(self._names) > 1
|
||||
|
||||
def __str__(self):
|
||||
return '.'.join(self.sub_names)
|
||||
|
||||
def __hash__(self):
|
||||
return hash(str(self))
|
||||
|
||||
def __eq__(self, other):
|
||||
return str(self) == str(other)
|
||||
|
||||
def __ne__(self, other):
|
||||
return not self.__eq__(other)
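# A minimal usage sketch of Name, assuming the vendored contoml package is
# importable:
#
#   name = Name(('servers', 'alpha', 'dc'))
#   name.drop(1).sub_names                      # ('alpha', 'dc')
#   name.is_prefixed_with(Name(('servers',)))   # True
#   str(name.without_prefix(('servers',)))      # 'alpha.dc'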
|
||||
|
||||
|
||||
class AnonymousTable(TopLevel):
|
||||
|
||||
def __init__(self, table_element):
|
||||
TopLevel.__init__(self, ('',), table_element)
|
||||
|
||||
|
||||
class Table(TopLevel):
|
||||
|
||||
def __init__(self, names, table_element):
|
||||
TopLevel.__init__(self, names=names, table_element=table_element)
|
||||
|
||||
|
||||
class ArrayOfTables(TopLevel):
|
||||
|
||||
def __init__(self, names, table_element):
|
||||
TopLevel.__init__(self, names=names, table_element=table_element)
|
||||
|
||||
|
||||
def _validate_file_elements(file_elements):
|
||||
pass
|
||||
|
||||
|
||||
def identify(file_elements):
|
||||
"""
|
||||
Outputs an ordered sequence of instances of TopLevel types.
|
||||
|
||||
Elements start with an optional TableElement, followed by zero or more pairs of (TableHeaderElement, TableElement).
|
||||
"""
|
||||
|
||||
if not file_elements:
|
||||
return
|
||||
|
||||
_validate_file_elements(file_elements)
|
||||
|
||||
# An iterator over the enumerated non-metadata elements
|
||||
iterator = PeekableIterator((element_i, element) for (element_i, element) in enumerate(file_elements)
|
||||
if element.type != elements.TYPE_METADATA)
|
||||
|
||||
try:
|
||||
_, first_element = iterator.peek()
|
||||
if isinstance(first_element, TableElement):
|
||||
iterator.next()
|
||||
yield AnonymousTable(first_element)
|
||||
except KeyError:
|
||||
pass
|
||||
except StopIteration:
|
||||
return
|
||||
|
||||
for element_i, element in iterator:
|
||||
|
||||
if not isinstance(element, TableHeaderElement):
|
||||
continue
|
||||
|
||||
# If TableHeader of a regular table, return Table following it
|
||||
if not element.is_array_of_tables:
|
||||
table_element_i, table_element = next(iterator)
|
||||
yield Table(names=element.names, table_element=table_element)
|
||||
|
||||
# If TableHeader of an array of tables, do your thing
|
||||
else:
|
||||
table_element_i, table_element = next(iterator)
|
||||
yield ArrayOfTables(names=element.names, table_element=table_element)
|
|
@ -0,0 +1,164 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
|
||||
"""
|
||||
clint.colored
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
This module provides a simple and elegant wrapper for colorama.
|
||||
|
||||
"""
|
||||
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
|
||||
import colorama
|
||||
|
||||
PY3 = sys.version_info[0] >= 3
|
||||
|
||||
__all__ = (
|
||||
'red', 'green', 'yellow', 'blue',
|
||||
'black', 'magenta', 'cyan', 'white', 'normal',
|
||||
'clean', 'disable',
|
||||
)
|
||||
|
||||
COLORS = __all__[:-2]
|
||||
|
||||
if 'get_ipython' in dir():
|
||||
"""
|
||||
When IPython is running, many extra variables such as _oh are injected into
the namespace. There are several ways to detect that the current interpreter
is IPython; checking for get_ipython is the simplest and the most readable.
|
||||
"""
|
||||
DISABLE_COLOR = True
|
||||
else:
|
||||
DISABLE_COLOR = False
|
||||
|
||||
|
||||
class ColoredString(object):
|
||||
"""Enhanced string for __len__ operations on Colored output."""
|
||||
def __init__(self, color, s, always_color=False, bold=False):
|
||||
super(ColoredString, self).__init__()
|
||||
if not PY3 and isinstance(s, unicode):
|
||||
self.s = s.encode('utf-8')
|
||||
else:
|
||||
self.s = s
|
||||
self.color = color
|
||||
self.always_color = always_color
|
||||
self.bold = bold
|
||||
if os.environ.get('PIPENV_FORCE_COLOR'):
|
||||
self.always_color = True
|
||||
|
||||
def __getattr__(self, att):
|
||||
def func_help(*args, **kwargs):
|
||||
result = getattr(self.s, att)(*args, **kwargs)
|
||||
try:
|
||||
is_result_string = isinstance(result, basestring)
|
||||
except NameError:
|
||||
is_result_string = isinstance(result, str)
|
||||
if is_result_string:
|
||||
return self._new(result)
|
||||
elif isinstance(result, list):
|
||||
return [self._new(x) for x in result]
|
||||
else:
|
||||
return result
|
||||
return func_help
|
||||
|
||||
@property
|
||||
def color_str(self):
|
||||
style = 'BRIGHT' if self.bold else 'NORMAL'
|
||||
c = '%s%s%s%s%s' % (getattr(colorama.Fore, self.color), getattr(colorama.Style, style), self.s, colorama.Fore.RESET, getattr(colorama.Style, 'NORMAL'))
|
||||
|
||||
if self.always_color:
|
||||
return c
|
||||
elif sys.stdout.isatty() and not DISABLE_COLOR:
|
||||
return c
|
||||
else:
|
||||
return self.s
|
||||
|
||||
def __len__(self):
|
||||
return len(self.s)
|
||||
|
||||
def __repr__(self):
|
||||
return "<%s-string: '%s'>" % (self.color, self.s)
|
||||
|
||||
def __unicode__(self):
|
||||
value = self.color_str
|
||||
if isinstance(value, bytes):
|
||||
return value.decode('utf8')
|
||||
return value
|
||||
|
||||
if PY3:
|
||||
__str__ = __unicode__
|
||||
else:
|
||||
def __str__(self):
|
||||
return self.color_str
|
||||
|
||||
def __iter__(self):
|
||||
return iter(self.color_str)
|
||||
|
||||
def __add__(self, other):
|
||||
return str(self.color_str) + str(other)
|
||||
|
||||
def __radd__(self, other):
|
||||
return str(other) + str(self.color_str)
|
||||
|
||||
def __mul__(self, other):
|
||||
return (self.color_str * other)
|
||||
|
||||
def _new(self, s):
|
||||
return ColoredString(self.color, s)
|
||||
|
||||
|
||||
def clean(s):
|
||||
strip = re.compile("([^-_a-zA-Z0-9!@#%&=,/'\";:~`\$\^\*\(\)\+\[\]\.\{\}\|\?\<\>\\]+|[^\s]+)")
|
||||
txt = strip.sub('', str(s))
|
||||
|
||||
strip = re.compile(r'\[\d+m')
|
||||
txt = strip.sub('', txt)
|
||||
|
||||
return txt
|
||||
|
||||
|
||||
def normal(string, always=False, bold=False):
|
||||
return ColoredString('RESET', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def black(string, always=False, bold=False):
|
||||
return ColoredString('BLACK', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def red(string, always=False, bold=False):
|
||||
return ColoredString('RED', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def green(string, always=False, bold=False):
|
||||
return ColoredString('GREEN', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def yellow(string, always=False, bold=False):
|
||||
return ColoredString('YELLOW', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def blue(string, always=False, bold=False):
|
||||
return ColoredString('BLUE', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def magenta(string, always=False, bold=False):
|
||||
return ColoredString('MAGENTA', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def cyan(string, always=False, bold=False):
|
||||
return ColoredString('CYAN', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def white(string, always=False, bold=False):
|
||||
# This upsets people...
|
||||
return ColoredString('WHITE', string, always_color=always, bold=bold)
|
||||
|
||||
|
||||
def disable():
|
||||
"""Disables colors."""
|
||||
global DISABLE_COLOR
|
||||
|
||||
DISABLE_COLOR = True
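# A minimal usage sketch; the import path is an assumption (upstream clint
# exposes this module as clint.textui.colored):
#
#   from clint.textui.colored import red, disable
#   print(red('error:', bold=True))   # ANSI-wrapped on a TTY, plain otherwise
#   disable()                         # force plain text from here on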
|
|
@ -0,0 +1,4 @@
|
|||
from .cli import get_cli_string
|
||||
from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv
|
||||
|
||||
__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv']
|
|
@ -0,0 +1,98 @@
|
|||
import os
|
||||
|
||||
import click
|
||||
|
||||
from .main import get_key, dotenv_values, set_key, unset_key
|
||||
|
||||
|
||||
@click.group()
|
||||
@click.option('-f', '--file', default=os.path.join(os.getcwd(), '.env'),
|
||||
type=click.Path(exists=True),
|
||||
help="Location of the .env file, defaults to .env file in current working directory.")
|
||||
@click.option('-q', '--quote', default='always',
|
||||
type=click.Choice(['always', 'never', 'auto']),
|
||||
help="Whether to quote or not the variable values. Default mode is always. This does not affect parsing.")
|
||||
@click.pass_context
|
||||
def cli(ctx, file, quote):
|
||||
'''This script is used to set, get or unset values from a .env file.'''
|
||||
ctx.obj = {}
|
||||
ctx.obj['FILE'] = file
|
||||
ctx.obj['QUOTE'] = quote
|
||||
|
||||
|
||||
@cli.command()
|
||||
@click.pass_context
|
||||
def list(ctx):
|
||||
'''Display all the stored key/value pairs.'''
|
||||
file = ctx.obj['FILE']
|
||||
dotenv_as_dict = dotenv_values(file)
|
||||
for k, v in dotenv_as_dict.items():
|
||||
click.echo('%s="%s"' % (k, v))
|
||||
|
||||
|
||||
@cli.command()
|
||||
@click.pass_context
|
||||
@click.argument('key', required=True)
|
||||
@click.argument('value', required=True)
|
||||
def set(ctx, key, value):
|
||||
'''Store the given key/value.'''
|
||||
file = ctx.obj['FILE']
|
||||
quote = ctx.obj['QUOTE']
|
||||
success, key, value = set_key(file, key, value, quote)
|
||||
if success:
|
||||
click.echo('%s="%s"' % (key, value))
|
||||
else:
|
||||
exit(1)
|
||||
|
||||
|
||||
@cli.command()
|
||||
@click.pass_context
|
||||
@click.argument('key', required=True)
|
||||
def get(ctx, key):
|
||||
'''Retrieve the value for the given key.'''
|
||||
file = ctx.obj['FILE']
|
||||
stored_value = get_key(file, key)
|
||||
if stored_value:
|
||||
click.echo('%s="%s"' % (key, stored_value))
|
||||
else:
|
||||
exit(1)
|
||||
|
||||
|
||||
@cli.command()
|
||||
@click.pass_context
|
||||
@click.argument('key', required=True)
|
||||
def unset(ctx, key):
|
||||
'''Removes the given key.'''
|
||||
file = ctx.obj['FILE']
|
||||
quote = ctx.obj['QUOTE']
|
||||
success, key = unset_key(file, key, quote)
|
||||
if success:
|
||||
click.echo("Successfully removed %s" % key)
|
||||
else:
|
||||
exit(1)
|
||||
|
||||
|
||||
def get_cli_string(path=None, action=None, key=None, value=None):
|
||||
"""Returns a string suitable for running as a shell script.
|
||||
|
||||
Useful for converting arguments passed to a fabric task
|
||||
to be passed to a `local` or `run` command.
|
||||
"""
|
||||
command = ['dotenv']
|
||||
if path:
|
||||
command.append('-f %s' % path)
|
||||
if action:
|
||||
command.append(action)
|
||||
if key:
|
||||
command.append(key)
|
||||
if value:
|
||||
if ' ' in value:
|
||||
command.append('"%s"' % value)
|
||||
else:
|
||||
command.append(value)
|
||||
|
||||
return ' '.join(command).strip()
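# Illustrative sanity checks (not part of the vendored file); values that
# contain spaces are quoted:
#
#   get_cli_string('/tmp/.env', 'set', 'DEBUG', 'true')
#       -> 'dotenv -f /tmp/.env set DEBUG true'
#   get_cli_string(action='set', key='MSG', value='hello world')
#       -> 'dotenv set MSG "hello world"'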
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
cli()
|
|
@ -0,0 +1,186 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
from __future__ import absolute_import
|
||||
|
||||
import codecs
|
||||
import os
|
||||
import sys
|
||||
import warnings
|
||||
import re
|
||||
from collections import OrderedDict
|
||||
|
||||
__escape_decoder = codecs.getdecoder('unicode_escape')
|
||||
__posix_variable = re.compile('\$\{[^\}]*\}')
|
||||
__variable_declaration = re.compile('^\s*(\w*)\s*=\s*("[^"]*"|\'[^\']*\'|[^\s]*)\s*$',
|
||||
flags=re.MULTILINE)
|
||||
|
||||
|
||||
def decode_escaped(escaped):
|
||||
return __escape_decoder(escaped)[0]
|
||||
|
||||
|
||||
def load_dotenv(dotenv_path, verbose=False, override=False):
|
||||
"""
|
||||
Read a .env file and load into os.environ.
|
||||
"""
|
||||
if not os.path.exists(dotenv_path):
|
||||
if verbose:
|
||||
warnings.warn("Not loading %s - it doesn't exist." % dotenv_path)
|
||||
return None
|
||||
for k, v in dotenv_values(dotenv_path).items():
|
||||
if override:
|
||||
os.environ[k] = v
|
||||
else:
|
||||
os.environ.setdefault(k, v)
|
||||
return True
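# Typical usage (illustrative), via find_dotenv() defined later in this
# module and re-exported from the package __init__:
#
#   from dotenv import load_dotenv, find_dotenv
#   load_dotenv(find_dotenv())                    # fills os.environ
#   load_dotenv('/etc/app/.env', override=True)   # existing vars are replaced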
|
||||
|
||||
|
||||
def get_key(dotenv_path, key_to_get):
|
||||
"""
|
||||
Gets the value of a given key from the given .env
|
||||
|
||||
If the .env path given doesn't exist, fails
|
||||
"""
|
||||
key_to_get = str(key_to_get)
|
||||
if not os.path.exists(dotenv_path):
|
||||
warnings.warn("can't read %s - it doesn't exist." % dotenv_path)
|
||||
return None
|
||||
dotenv_as_dict = dotenv_values(dotenv_path)
|
||||
if key_to_get in dotenv_as_dict:
|
||||
return dotenv_as_dict[key_to_get]
|
||||
else:
|
||||
warnings.warn("key %s not found in %s." % (key_to_get, dotenv_path))
|
||||
return None
|
||||
|
||||
|
||||
def set_key(dotenv_path, key_to_set, value_to_set, quote_mode="always"):
|
||||
"""
|
||||
Adds or updates a key/value pair in the given .env
|
||||
|
||||
If the .env path given doesn't exist, fails instead of risking creating
|
||||
an orphan .env somewhere in the filesystem
|
||||
"""
|
||||
key_to_set = str(key_to_set)
|
||||
value_to_set = str(value_to_set).strip("'").strip('"')
|
||||
if not os.path.exists(dotenv_path):
|
||||
warnings.warn("can't write to %s - it doesn't exist." % dotenv_path)
|
||||
return None, key_to_set, value_to_set
|
||||
dotenv_as_dict = OrderedDict(parse_dotenv(dotenv_path))
|
||||
dotenv_as_dict[key_to_set] = value_to_set
|
||||
success = flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode)
|
||||
return success, key_to_set, value_to_set
|
||||
|
||||
|
||||
def unset_key(dotenv_path, key_to_unset, quote_mode="always"):
|
||||
"""
|
||||
Removes a given key from the given .env
|
||||
|
||||
If the .env path given doesn't exist, fails
|
||||
If the given key doesn't exist in the .env, fails
|
||||
"""
|
||||
key_to_unset = str(key_to_unset)
|
||||
if not os.path.exists(dotenv_path):
|
||||
warnings.warn("can't delete from %s - it doesn't exist." % dotenv_path)
|
||||
return None, key_to_unset
|
||||
dotenv_as_dict = dotenv_values(dotenv_path)
|
||||
if key_to_unset in dotenv_as_dict:
|
||||
dotenv_as_dict.pop(key_to_unset, None)
|
||||
else:
|
||||
warnings.warn("key %s not removed from %s - key doesn't exist." % (key_to_unset, dotenv_path))
|
||||
return None, key_to_unset
|
||||
success = flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode)
|
||||
return success, key_to_unset
|
||||
|
||||
|
||||
def dotenv_values(dotenv_path):
|
||||
values = OrderedDict(parse_dotenv(dotenv_path))
|
||||
values = resolve_nested_variables(values)
|
||||
return values
|
||||
|
||||
|
||||
def parse_dotenv(dotenv_path):
|
||||
with open(dotenv_path) as f:
|
||||
for k, v in __variable_declaration.findall(f.read()):
|
||||
if len(v) > 0:
|
||||
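# chained comparison: True only when v starts and ends with the same
# quote character ('"' or "'")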
quoted = v[0] == v[len(v) - 1] in ['"', "'"]
|
||||
|
||||
if quoted:
|
||||
v = decode_escaped(v[1:-1])
|
||||
|
||||
yield k, v
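# Illustrative: for a file containing  KEY1=plain  and  KEY2="two words",
# list(parse_dotenv(path)) -> [('KEY1', 'plain'), ('KEY2', 'two words')]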
|
||||
|
||||
|
||||
def resolve_nested_variables(values):
|
||||
def _replacement(name):
|
||||
"""
|
||||
Gets the appropriate value for a variable name: searches os.environ
first and, if not found, falls back to the dotenv values.
|
||||
"""
|
||||
ret = os.getenv(name, values.get(name, ""))
|
||||
return ret
|
||||
|
||||
def _re_sub_callback(match_object):
|
||||
"""
|
||||
Extracts the variable name from a match object and returns
the correct replacement.
|
||||
"""
|
||||
return _replacement(match_object.group()[2:-1])
|
||||
|
||||
for k, v in values.items():
|
||||
values[k] = __posix_variable.sub(_re_sub_callback, v)
|
||||
|
||||
return values
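# Illustrative expansion: with values = OrderedDict([('BIN', '${HOME}/bin')])
# and HOME set in the environment, resolve_nested_variables(values)['BIN']
# becomes e.g. '/home/me/bin'; os.environ wins over the dotenv values.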
|
||||
|
||||
|
||||
def flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode="always"):
|
||||
with open(dotenv_path, "w") as f:
|
||||
for k, v in dotenv_as_dict.items():
|
||||
_mode = quote_mode
|
||||
if _mode == "auto" and " " in v:
|
||||
_mode = "always"
|
||||
str_format = '%s="%s"\n' if _mode == "always" else '%s=%s\n'
|
||||
f.write(str_format % (k, v))
|
||||
return True
|
||||
|
||||
|
||||
def _walk_to_root(path):
|
||||
"""
|
||||
Yield directories starting from the given directory up to the root
|
||||
"""
|
||||
if not os.path.exists(path):
|
||||
raise IOError('Starting path not found')
|
||||
|
||||
if os.path.isfile(path):
|
||||
path = os.path.dirname(path)
|
||||
|
||||
last_dir = None
|
||||
current_dir = os.path.abspath(path)
|
||||
while last_dir != current_dir:
|
||||
yield current_dir
|
||||
parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))
|
||||
last_dir, current_dir = current_dir, parent_dir
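# Illustrative: list(_walk_to_root('/home/me/project')) yields
# ['/home/me/project', '/home/me', '/home', '/'] on a POSIX system.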
|
||||
|
||||
|
||||
def find_dotenv(filename='.env', raise_error_if_not_found=False, usecwd=False):
|
||||
"""
|
||||
Search in increasingly higher folders for the given file
|
||||
|
||||
Returns path to the file if found, or an empty string otherwise
|
||||
"""
|
||||
if usecwd or '__file__' not in globals():
|
||||
# should work without __file__, e.g. in REPL or IPython notebook
|
||||
path = os.getcwd()
|
||||
else:
|
||||
# will work for .py files
|
||||
frame_filename = sys._getframe().f_back.f_code.co_filename
|
||||
path = os.path.dirname(os.path.abspath(frame_filename))
|
||||
|
||||
for dirname in _walk_to_root(path):
|
||||
check_path = os.path.join(dirname, filename)
|
||||
if os.path.exists(check_path):
|
||||
return check_path
|
||||
|
||||
if raise_error_if_not_found:
|
||||
raise IOError('File not found')
|
||||
|
||||
return ''
|
|
@ -0,0 +1,85 @@
|
|||
import os
|
||||
from textwrap import dedent
|
||||
import unittest
|
||||
|
||||
from main import parse_dotenv
|
||||
|
||||
|
||||
class TestParseDotenv(unittest.TestCase):
|
||||
filename = 'testfile.conf'
|
||||
|
||||
def tearDown(self):
|
||||
if os.path.exists(self.filename):
|
||||
os.remove(self.filename)
|
||||
|
||||
def write_file(self, contents):
|
||||
with open(self.filename, 'w') as f:
|
||||
f.write(contents)
|
||||
|
||||
def assert_parsed(self, *expected):
|
||||
parsed = parse_dotenv(self.filename)
|
||||
for expected_key, expected_val in expected:
|
||||
actual_key, actual_val = next(parsed)
|
||||
self.assertEqual(actual_key, expected_key)
|
||||
self.assertEqual(actual_val, expected_val)
|
||||
|
||||
with self.assertRaises(StopIteration):
|
||||
next(parsed)
|
||||
|
||||
def test_value_unquoted(self):
|
||||
self.write_file(dedent("""
|
||||
var1 = value1
|
||||
var2=value2
|
||||
"""))
|
||||
self.assert_parsed(('var1', 'value1'), ('var2', 'value2'))
|
||||
|
||||
def test_value_double_quoted(self):
|
||||
self.write_file(dedent("""
|
||||
var1 = "value1"
|
||||
var2="value2"
|
||||
"""))
|
||||
self.assert_parsed(('var1', 'value1'), ('var2', 'value2'))
|
||||
|
||||
def test_value_single_quoted(self):
|
||||
self.write_file(dedent("""
|
||||
var1 = 'value1'
|
||||
var2='value2'
|
||||
"""))
|
||||
self.assert_parsed(('var1', 'value1'), ('var2', 'value2'))
|
||||
|
||||
def test_value_with_space_double_quoted(self):
|
||||
self.write_file(dedent("""
|
||||
var1 = "value1 with spaces"
|
||||
var2 = "othervalue"
|
||||
"""))
|
||||
self.assert_parsed(('var1', 'value1 with spaces'),
|
||||
('var2', 'othervalue'))
|
||||
|
||||
def test_value_with_space_single_quoted(self):
|
||||
self.write_file(dedent("""
|
||||
var1 = 'value with spaces'
|
||||
var2 = 'othervalue'
|
||||
"""))
|
||||
self.assert_parsed(('var1', 'value with spaces'),
|
||||
('var2', 'othervalue'))
|
||||
|
||||
def test_values_with_mixed_quotes_and_spaces(self):
|
||||
self.write_file(dedent("""
|
||||
var1 = 'value with spaces'
|
||||
var2= othervalue
|
||||
var3="double-quoted value with spaces"
|
||||
var4 = "double-quoted value
|
||||
with
|
||||
newlines"
|
||||
var5='single quote
|
||||
and newline'
|
||||
"""))
|
||||
self.assert_parsed(('var1', 'value with spaces'),
|
||||
('var2', 'othervalue'),
|
||||
('var3', 'double-quoted value with spaces'),
|
||||
('var4', 'double-quoted value\nwith\nnewlines'),
|
||||
('var5', 'single quote\nand newline'))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
unittest.main()
|
|
@ -0,0 +1,331 @@
|
|||
#!/usr/bin/env python
|
||||
from __future__ import absolute_import
|
||||
|
||||
import locale
|
||||
import logging
|
||||
import os
|
||||
import optparse
|
||||
import warnings
|
||||
|
||||
import sys
|
||||
import re
|
||||
|
||||
# 2016-06-17 barry@debian.org: urllib3 1.14 added optional support for socks,
|
||||
# but if invoked (i.e. imported), it will issue a warning to stderr if socks
|
||||
# isn't available. requests unconditionally imports urllib3's socks contrib
|
||||
# module, triggering this warning. The warning breaks DEP-8 tests (because of
|
||||
# the stderr output) and is just plain annoying in normal usage. I don't want
|
||||
# to add socks as yet another dependency for pip, nor do I want to allow-stderr
|
||||
# in the DEP-8 tests, so just suppress the warning. pdb tells me this has to
|
||||
# be done before the import of pip9.vcs.
|
||||
from pip9._vendor.requests.packages.urllib3.exceptions import DependencyWarning
|
||||
warnings.filterwarnings("ignore", category=DependencyWarning) # noqa
|
||||
|
||||
|
||||
from pip9.exceptions import InstallationError, CommandError, PipError
|
||||
from pip9.utils import get_installed_distributions, get_prog
|
||||
from pip9.utils import deprecation, dist_is_editable
|
||||
from pip9.vcs import git, mercurial, subversion, bazaar # noqa
|
||||
from pip9.baseparser import ConfigOptionParser, UpdatingDefaultsHelpFormatter
|
||||
from pip9.commands import get_summaries, get_similar_commands
|
||||
from pip9.commands import commands_dict
|
||||
from pip9._vendor.requests.packages.urllib3.exceptions import (
|
||||
InsecureRequestWarning,
|
||||
)
|
||||
|
||||
|
||||
# assignment for flake8 to be happy
|
||||
|
||||
# This fixes a peculiarity when importing via __import__ - as we are
|
||||
# initialising the pip module, "from pip9 import cmdoptions" is recursive
|
||||
# and appears not to work properly in that situation.
|
||||
import pip9.cmdoptions
|
||||
cmdoptions = pip9.cmdoptions
|
||||
|
||||
# The version as used in the setup.py and the docs conf.py
|
||||
__version__ = "9.0.1"
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Hide the InsecureRequestWarning from urllib3
|
||||
warnings.filterwarnings("ignore", category=InsecureRequestWarning)
|
||||
|
||||
|
||||
def autocomplete():
|
||||
"""Command and option completion for the main option parser (and options)
|
||||
and its subcommands (and options).
|
||||
|
||||
Enable by sourcing one of the completion shell scripts (bash, zsh or fish).
|
||||
"""
|
||||
# Don't complete if user hasn't sourced bash_completion file.
|
||||
if 'PIP_AUTO_COMPLETE' not in os.environ:
|
||||
return
|
||||
cwords = os.environ['COMP_WORDS'].split()[1:]
|
||||
cword = int(os.environ['COMP_CWORD'])
|
||||
try:
|
||||
current = cwords[cword - 1]
|
||||
except IndexError:
|
||||
current = ''
|
||||
|
||||
subcommands = [cmd for cmd, summary in get_summaries()]
|
||||
options = []
|
||||
# subcommand
|
||||
try:
|
||||
subcommand_name = [w for w in cwords if w in subcommands][0]
|
||||
except IndexError:
|
||||
subcommand_name = None
|
||||
|
||||
parser = create_main_parser()
|
||||
# subcommand options
|
||||
if subcommand_name:
|
||||
# special case: 'help' subcommand has no options
|
||||
if subcommand_name == 'help':
|
||||
sys.exit(1)
|
||||
# special case: list locally installed dists for uninstall command
|
||||
if subcommand_name == 'uninstall' and not current.startswith('-'):
|
||||
installed = []
|
||||
lc = current.lower()
|
||||
for dist in get_installed_distributions(local_only=True):
|
||||
if dist.key.startswith(lc) and dist.key not in cwords[1:]:
|
||||
installed.append(dist.key)
|
||||
# if there are no dists installed, fall back to option completion
|
||||
if installed:
|
||||
for dist in installed:
|
||||
print(dist)
|
||||
sys.exit(1)
|
||||
|
||||
subcommand = commands_dict[subcommand_name]()
|
||||
options += [(opt.get_opt_string(), opt.nargs)
|
||||
for opt in subcommand.parser.option_list_all
|
||||
if opt.help != optparse.SUPPRESS_HELP]
|
||||
|
||||
# filter out previously specified options from available options
|
||||
prev_opts = [x.split('=')[0] for x in cwords[1:cword - 1]]
|
||||
options = [(x, v) for (x, v) in options if x not in prev_opts]
|
||||
# filter options by current input
|
||||
options = [(k, v) for k, v in options if k.startswith(current)]
|
||||
for option in options:
|
||||
opt_label = option[0]
|
||||
# append '=' to options which require args
|
||||
if option[1]:
|
||||
opt_label += '='
|
||||
print(opt_label)
|
||||
else:
|
||||
# show main parser options only when necessary
|
||||
if current.startswith('-') or current.startswith('--'):
|
||||
opts = [i.option_list for i in parser.option_groups]
|
||||
opts.append(parser.option_list)
|
||||
opts = (o for it in opts for o in it)
|
||||
|
||||
subcommands += [i.get_opt_string() for i in opts
|
||||
if i.help != optparse.SUPPRESS_HELP]
|
||||
|
||||
print(' '.join([x for x in subcommands if x.startswith(current)]))
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def create_main_parser():
|
||||
parser_kw = {
|
||||
'usage': '\n%prog <command> [options]',
|
||||
'add_help_option': False,
|
||||
'formatter': UpdatingDefaultsHelpFormatter(),
|
||||
'name': 'global',
|
||||
'prog': get_prog(),
|
||||
}
|
||||
|
||||
parser = ConfigOptionParser(**parser_kw)
|
||||
parser.disable_interspersed_args()
|
||||
|
||||
pip_pkg_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
|
||||
parser.version = 'pip %s from %s (python %s)' % (
|
||||
__version__, pip_pkg_dir, sys.version[:3])
|
||||
|
||||
# add the general options
|
||||
gen_opts = cmdoptions.make_option_group(cmdoptions.general_group, parser)
|
||||
parser.add_option_group(gen_opts)
|
||||
|
||||
parser.main = True # so the help formatter knows
|
||||
|
||||
# create command listing for description
|
||||
command_summaries = get_summaries()
|
||||
description = [''] + ['%-27s %s' % (i, j) for i, j in command_summaries]
|
||||
parser.description = '\n'.join(description)
|
||||
|
||||
return parser
|
||||
|
||||
|
||||
def parseopts(args):
|
||||
parser = create_main_parser()
|
||||
|
||||
# Note: parser calls disable_interspersed_args(), so the result of this
|
||||
# call is to split the initial args into the general options before the
|
||||
# subcommand and everything else.
|
||||
# For example:
|
||||
# args: ['--timeout=5', 'install', '--user', 'INITools']
|
||||
# general_options: ['--timeout==5']
|
||||
# args_else: ['install', '--user', 'INITools']
|
||||
general_options, args_else = parser.parse_args(args)
|
||||
|
||||
# --version
|
||||
if general_options.version:
|
||||
sys.stdout.write(parser.version)
|
||||
sys.stdout.write(os.linesep)
|
||||
sys.exit()
|
||||
|
||||
# pip || pip help -> print_help()
|
||||
if not args_else or (args_else[0] == 'help' and len(args_else) == 1):
|
||||
parser.print_help()
|
||||
sys.exit()
|
||||
|
||||
# the subcommand name
|
||||
cmd_name = args_else[0]
|
||||
|
||||
if cmd_name not in commands_dict:
|
||||
guess = get_similar_commands(cmd_name)
|
||||
|
||||
msg = ['unknown command "%s"' % cmd_name]
|
||||
if guess:
|
||||
msg.append('maybe you meant "%s"' % guess)
|
||||
|
||||
raise CommandError(' - '.join(msg))
|
||||
|
||||
# all the args without the subcommand
|
||||
cmd_args = args[:]
|
||||
cmd_args.remove(cmd_name)
|
||||
|
||||
return cmd_name, cmd_args
|
||||
|
||||
|
||||
def check_isolated(args):
|
||||
isolated = False
|
||||
|
||||
if "--isolated" in args:
|
||||
isolated = True
|
||||
|
||||
return isolated
|
||||
|
||||
|
||||
def main(args=None):
|
||||
if args is None:
|
||||
args = sys.argv[1:]
|
||||
|
||||
# Configure our deprecation warnings to be sent through loggers
|
||||
deprecation.install_warning_logger()
|
||||
|
||||
autocomplete()
|
||||
|
||||
try:
|
||||
cmd_name, cmd_args = parseopts(args)
|
||||
except PipError as exc:
|
||||
sys.stderr.write("ERROR: %s" % exc)
|
||||
sys.stderr.write(os.linesep)
|
||||
sys.exit(1)
|
||||
|
||||
# Needed for locale.getpreferredencoding(False) to work
|
||||
# in pip9.utils.encoding.auto_decode
|
||||
try:
|
||||
locale.setlocale(locale.LC_ALL, '')
|
||||
except locale.Error as e:
|
||||
# setlocale can apparently crash if locales are uninitialized
|
||||
logger.debug("Ignoring error %s when setting locale", e)
|
||||
command = commands_dict[cmd_name](isolated=check_isolated(cmd_args))
|
||||
return command.main(cmd_args)
|
||||
|
||||
|
||||
# ###########################################################
|
||||
# # Writing freeze files
|
||||
|
||||
class FrozenRequirement(object):
|
||||
|
||||
def __init__(self, name, req, editable, comments=()):
|
||||
self.name = name
|
||||
self.req = req
|
||||
self.editable = editable
|
||||
self.comments = comments
|
||||
|
||||
_rev_re = re.compile(r'-r(\d+)$')
|
||||
_date_re = re.compile(r'-(20\d\d\d\d\d\d)$')
|
||||
|
||||
@classmethod
|
||||
def from_dist(cls, dist, dependency_links):
|
||||
location = os.path.normcase(os.path.abspath(dist.location))
|
||||
comments = []
|
||||
from pip9.vcs import vcs, get_src_requirement
|
||||
if dist_is_editable(dist) and vcs.get_backend_name(location):
|
||||
editable = True
|
||||
try:
|
||||
req = get_src_requirement(dist, location)
|
||||
except InstallationError as exc:
|
||||
logger.warning(
|
||||
"Error when trying to get requirement for VCS system %s, "
|
||||
"falling back to uneditable format", exc
|
||||
)
|
||||
req = None
|
||||
if req is None:
|
||||
logger.warning(
|
||||
'Could not determine repository location of %s', location
|
||||
)
|
||||
comments.append(
|
||||
'## !! Could not determine repository location'
|
||||
)
|
||||
req = dist.as_requirement()
|
||||
editable = False
|
||||
else:
|
||||
editable = False
|
||||
req = dist.as_requirement()
|
||||
specs = req.specs
|
||||
assert len(specs) == 1 and specs[0][0] in ["==", "==="], \
|
||||
'Expected 1 spec with == or ===; specs = %r; dist = %r' % \
|
||||
(specs, dist)
|
||||
version = specs[0][1]
|
||||
ver_match = cls._rev_re.search(version)
|
||||
date_match = cls._date_re.search(version)
|
||||
if ver_match or date_match:
|
||||
svn_backend = vcs.get_backend('svn')
|
||||
if svn_backend:
|
||||
svn_location = svn_backend().get_location(
|
||||
dist,
|
||||
dependency_links,
|
||||
)
|
||||
if not svn_location:
|
||||
logger.warning(
|
||||
'Warning: cannot find svn location for %s', req)
|
||||
comments.append(
|
||||
'## FIXME: could not find svn URL in dependency_links '
|
||||
'for this package:'
|
||||
)
|
||||
else:
|
||||
comments.append(
|
||||
'# Installing as editable to satisfy requirement %s:' %
|
||||
req
|
||||
)
|
||||
if ver_match:
|
||||
rev = ver_match.group(1)
|
||||
else:
|
||||
rev = '{%s}' % date_match.group(1)
|
||||
editable = True
|
||||
req = '%s@%s#egg=%s' % (
|
||||
svn_location,
|
||||
rev,
|
||||
cls.egg_name(dist)
|
||||
)
|
||||
return cls(dist.project_name, req, editable, comments)
|
||||
|
||||
@staticmethod
|
||||
def egg_name(dist):
|
||||
name = dist.egg_name()
|
||||
match = re.search(r'-py\d\.\d$', name)
|
||||
if match:
|
||||
name = name[:match.start()]
|
||||
return name
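# Illustrative: strips the "-pyX.Y" suffix pkg_resources appends, e.g.
# 'initools-0.2-py2.7' -> 'initools-0.2'.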
|
||||
|
||||
def __str__(self):
|
||||
req = self.req
|
||||
if self.editable:
|
||||
req = '-e %s' % req
|
||||
return '\n'.join(list(self.comments) + [str(req)]) + '\n'
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
sys.exit(main())
|
|
@ -0,0 +1,19 @@
|
|||
from __future__ import absolute_import
|
||||
|
||||
import os
|
||||
import sys
|
||||
|
||||
# If we are running from a wheel, add the wheel to sys.path
|
||||
# This allows usage like: python pip-*.whl/pip install pip-*.whl
|
||||
if __package__ == '':
|
||||
# __file__ is pip-*.whl/pip/__main__.py
|
||||
# first dirname call strips off '/__main__.py', second strips off '/pip'
|
||||
# Resulting path is the name of the wheel itself
|
||||
# Add that to sys.path so we can import pip9
|
||||
path = os.path.dirname(os.path.dirname(__file__))
|
||||
sys.path.insert(0, path)
|
||||
|
||||
import pip9 # noqa
|
||||
|
||||
if __name__ == '__main__':
|
||||
sys.exit(pip9.main())
|
|
@ -0,0 +1,107 @@
|
|||
"""
|
||||
pip9._vendor is for vendoring dependencies of pip to prevent needing pip to
|
||||
depend on something external.
|
||||
|
||||
Files inside of pip9._vendor should be considered immutable and should only be
|
||||
updated to versions from upstream.
|
||||
"""
|
||||
from __future__ import absolute_import
|
||||
|
||||
import glob
|
||||
import os.path
|
||||
import sys
|
||||
|
||||
# Downstream redistributors which have debundled our dependencies should also
|
||||
# patch this value to be true. This will trigger the additional patching
|
||||
# to cause things like "six" to be available as pip9.
|
||||
DEBUNDLED = False
|
||||
|
||||
# By default, look in this directory for a bunch of .whl files which we will
|
||||
# add to the beginning of sys.path before attempting to import anything. This
|
||||
# is done to support downstream re-distributors like Debian and Fedora who
|
||||
# wish to create their own Wheels for our dependencies to aid in debundling.
|
||||
WHEEL_DIR = os.path.abspath(os.path.dirname(__file__))
|
||||
|
||||
|
||||
# Define a small helper function to alias our vendored modules to the real ones
|
||||
# if the vendored ones do not exist. The idea for this was taken from
|
||||
# https://github.com/kennethreitz/requests/pull/2567.
|
||||
def vendored(modulename):
|
||||
vendored_name = "{0}.{1}".format(__name__, modulename)
|
||||
|
||||
try:
|
||||
__import__(vendored_name, globals(), locals(), level=0)
|
||||
except ImportError:
|
||||
try:
|
||||
__import__(modulename, globals(), locals(), level=0)
|
||||
except ImportError:
|
||||
# We can just silently allow import failures to pass here. If we
|
||||
# got to this point it means that ``import pip9._vendor.whatever``
|
||||
# failed and so did ``import whatever``. Since we're importing this
|
||||
# upfront in an attempt to alias imports, not erroring here will
|
||||
# just mean we get a regular import error whenever pip *actually*
|
||||
# tries to import one of these modules to use it, which actually
|
||||
# gives us a better error message than we would have otherwise
|
||||
# gotten.
|
||||
pass
|
||||
else:
|
||||
sys.modules[vendored_name] = sys.modules[modulename]
|
||||
base, head = vendored_name.rsplit(".", 1)
|
||||
setattr(sys.modules[base], head, sys.modules[modulename])
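# Net effect (illustrative): after vendored("six") falls back to the real
# module, pip9._vendor.six and the top-level six are the same object.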
|
||||
|
||||
|
||||
# If we're operating in a debundled setup, then we want to go ahead and trigger
|
||||
# the aliasing of our vendored libraries as well as looking for wheels to add
|
||||
# to our sys.path. This will cause all of this code to be a no-op typically
|
||||
# however downstream redistributors can enable it in a consistent way across
|
||||
# all platforms.
|
||||
if DEBUNDLED:
|
||||
# Actually look inside of WHEEL_DIR to find .whl files and add them to the
|
||||
# front of our sys.path.
|
||||
sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path
|
||||
|
||||
# Actually alias all of our vendored dependencies.
|
||||
vendored("cachecontrol")
|
||||
vendored("colorama")
|
||||
vendored("distlib")
|
||||
vendored("distro")
|
||||
vendored("html5lib")
|
||||
vendored("lockfile")
|
||||
vendored("six")
|
||||
vendored("six.moves")
|
||||
vendored("six.moves.urllib")
|
||||
vendored("packaging")
|
||||
vendored("packaging.version")
|
||||
vendored("packaging.specifiers")
|
||||
vendored("pkg_resources")
|
||||
vendored("progress")
|
||||
vendored("retrying")
|
||||
vendored("requests")
|
||||
vendored("requests.packages")
|
||||
vendored("requests.packages.urllib3")
|
||||
vendored("requests.packages.urllib3._collections")
|
||||
vendored("requests.packages.urllib3.connection")
|
||||
vendored("requests.packages.urllib3.connectionpool")
|
||||
vendored("requests.packages.urllib3.contrib")
|
||||
vendored("requests.packages.urllib3.contrib.ntlmpool")
|
||||
vendored("requests.packages.urllib3.contrib.pyopenssl")
|
||||
vendored("requests.packages.urllib3.exceptions")
|
||||
vendored("requests.packages.urllib3.fields")
|
||||
vendored("requests.packages.urllib3.filepost")
|
||||
vendored("requests.packages.urllib3.packages")
|
||||
vendored("requests.packages.urllib3.packages.ordered_dict")
|
||||
vendored("requests.packages.urllib3.packages.six")
|
||||
vendored("requests.packages.urllib3.packages.ssl_match_hostname")
|
||||
vendored("requests.packages.urllib3.packages.ssl_match_hostname."
|
||||
"_implementation")
|
||||
vendored("requests.packages.urllib3.poolmanager")
|
||||
vendored("requests.packages.urllib3.request")
|
||||
vendored("requests.packages.urllib3.response")
|
||||
vendored("requests.packages.urllib3.util")
|
||||
vendored("requests.packages.urllib3.util.connection")
|
||||
vendored("requests.packages.urllib3.util.request")
|
||||
vendored("requests.packages.urllib3.util.response")
|
||||
vendored("requests.packages.urllib3.util.retry")
|
||||
vendored("requests.packages.urllib3.util.ssl_")
|
||||
vendored("requests.packages.urllib3.util.timeout")
|
||||
vendored("requests.packages.urllib3.util.url")
|
|
@ -0,0 +1,552 @@
|
|||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright (c) 2005-2010 ActiveState Software Inc.
|
||||
# Copyright (c) 2013 Eddy Petrișor
|
||||
|
||||
"""Utilities for determining application-specific dirs.
|
||||
|
||||
See <http://github.com/ActiveState/appdirs> for details and usage.
|
||||
"""
|
||||
# Dev Notes:
|
||||
# - MSDN on where to store app data files:
|
||||
# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120
|
||||
# - macOS: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html
|
||||
# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
|
||||
|
||||
__version_info__ = (1, 4, 0)
|
||||
__version__ = '.'.join(map(str, __version_info__))
|
||||
|
||||
|
||||
import sys
|
||||
import os
|
||||
|
||||
PY3 = sys.version_info[0] == 3
|
||||
|
||||
if PY3:
|
||||
unicode = str
|
||||
|
||||
if sys.platform.startswith('java'):
|
||||
import platform
|
||||
os_name = platform.java_ver()[3][0]
|
||||
if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc.
|
||||
system = 'win32'
|
||||
elif os_name.startswith('Mac'): # "macOS", etc.
|
||||
system = 'darwin'
|
||||
else: # "Linux", "SunOS", "FreeBSD", etc.
|
||||
# Setting this to "linux2" is not ideal, but only Windows or Mac
|
||||
# are actually checked for and the rest of the module expects
|
||||
# *sys.platform* style strings.
|
||||
system = 'linux2'
|
||||
else:
|
||||
system = sys.platform
|
||||
|
||||
|
||||
|
||||
def user_data_dir(appname=None, appauthor=None, version=None, roaming=False):
|
||||
r"""Return full path to the user-specific data dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"roaming" (boolean, default False) can be set True to use the Windows
|
||||
roaming appdata directory. That means that for users on a Windows
|
||||
network setup for roaming profiles, this user data will be
|
||||
sync'd on login. See
|
||||
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
|
||||
for a discussion of issues.
|
||||
|
||||
Typical user data directories are:
|
||||
macOS: ~/Library/Application Support/<AppName>
|
||||
Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined
|
||||
Win XP (roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName>
|
||||
Win XP (not roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>
|
||||
Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>
|
||||
Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName>
|
||||
|
||||
For Unix, we follow the XDG spec and support $XDG_DATA_HOME.
|
||||
That means, by default "~/.local/share/<AppName>".
|
||||
"""
|
||||
if system == "win32":
|
||||
if appauthor is None:
|
||||
appauthor = appname
|
||||
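# and/or idiom: "CSIDL_APPDATA" when roaming, else "CSIDL_LOCAL_APPDATA"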
const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA"
|
||||
path = os.path.normpath(_get_win_folder(const))
|
||||
if appname:
|
||||
if appauthor is not False:
|
||||
path = os.path.join(path, appauthor, appname)
|
||||
else:
|
||||
path = os.path.join(path, appname)
|
||||
elif system == 'darwin':
|
||||
path = os.path.expanduser('~/Library/Application Support/')
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
else:
|
||||
path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share"))
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
def site_data_dir(appname=None, appauthor=None, version=None, multipath=False):
|
||||
"""Return full path to the user-shared data dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"multipath" is an optional parameter only applicable to *nix
|
||||
which indicates that the entire list of data dirs should be
|
||||
returned. By default, the first item from XDG_DATA_DIRS is
|
||||
returned, or '/usr/local/share/<AppName>',
|
||||
if XDG_DATA_DIRS is not set
|
||||
|
||||
Typical site data directories are:
|
||||
macOS: /Library/Application Support/<AppName>
|
||||
Unix: /usr/local/share/<AppName> or /usr/share/<AppName>
|
||||
Win XP: C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName>
|
||||
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
|
||||
Win 7: C:\ProgramData\<AppAuthor>\<AppName> # Hidden, but writeable on Win 7.
|
||||
|
||||
For Unix, this is using the $XDG_DATA_DIRS[0] default.
|
||||
|
||||
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
|
||||
"""
|
||||
if system == "win32":
|
||||
if appauthor is None:
|
||||
appauthor = appname
|
||||
path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA"))
|
||||
if appname:
|
||||
if appauthor is not False:
|
||||
path = os.path.join(path, appauthor, appname)
|
||||
else:
|
||||
path = os.path.join(path, appname)
|
||||
elif system == 'darwin':
|
||||
path = os.path.expanduser('/Library/Application Support')
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
else:
|
||||
# XDG default for $XDG_DATA_DIRS
|
||||
# only first, if multipath is False
|
||||
path = os.getenv('XDG_DATA_DIRS',
|
||||
os.pathsep.join(['/usr/local/share', '/usr/share']))
|
||||
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
|
||||
if appname:
|
||||
if version:
|
||||
appname = os.path.join(appname, version)
|
||||
pathlist = [os.sep.join([x, appname]) for x in pathlist]
|
||||
|
||||
if multipath:
|
||||
path = os.pathsep.join(pathlist)
|
||||
else:
|
||||
path = pathlist[0]
|
||||
return path
|
||||
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
def user_config_dir(appname=None, appauthor=None, version=None, roaming=False):
|
||||
r"""Return full path to the user-specific config dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"roaming" (boolean, default False) can be set True to use the Windows
|
||||
roaming appdata directory. That means that for users on a Windows
|
||||
network setup for roaming profiles, this user data will be
|
||||
sync'd on login. See
|
||||
<http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>
|
||||
for a discussion of issues.
|
||||
|
||||
Typical user config directories are:
|
||||
macOS: same as user_data_dir
|
||||
Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined
|
||||
Win *: same as user_data_dir
|
||||
|
||||
For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.
|
||||
That means, by default "~/.config/<AppName>".
|
||||
"""
|
||||
if system in ["win32", "darwin"]:
|
||||
path = user_data_dir(appname, appauthor, None, roaming)
|
||||
else:
|
||||
path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config"))
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
def site_config_dir(appname=None, appauthor=None, version=None, multipath=False):
|
||||
"""Return full path to the user-shared data dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"multipath" is an optional parameter only applicable to *nix
|
||||
which indicates that the entire list of config dirs should be
|
||||
returned. By default, the first item from XDG_CONFIG_DIRS is
|
||||
returned, or '/etc/xdg/<AppName>', if XDG_CONFIG_DIRS is not set
|
||||
|
||||
Typical site config directories are:
|
||||
macOS: same as site_data_dir
|
||||
Unix: /etc/xdg/<AppName> or $XDG_CONFIG_DIRS[i]/<AppName> for each value in
|
||||
$XDG_CONFIG_DIRS
|
||||
Win *: same as site_data_dir
|
||||
Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.)
|
||||
|
||||
For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False
|
||||
|
||||
WARNING: Do not use this on Windows. See the Vista-Fail note above for why.
|
||||
"""
|
||||
if system in ["win32", "darwin"]:
|
||||
path = site_data_dir(appname, appauthor)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
else:
|
||||
# XDG default for $XDG_CONFIG_DIRS
|
||||
# only first, if multipath is False
|
||||
path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg')
|
||||
pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)]
|
||||
if appname:
|
||||
if version:
|
||||
appname = os.path.join(appname, version)
|
||||
pathlist = [os.sep.join([x, appname]) for x in pathlist]
|
||||
|
||||
if multipath:
|
||||
path = os.pathsep.join(pathlist)
|
||||
else:
|
||||
path = pathlist[0]
|
||||
return path
|
||||
|
||||
|
||||
def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True):
|
||||
r"""Return full path to the user-specific cache dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"opinion" (boolean) can be False to disable the appending of
|
||||
"Cache" to the base app data dir for Windows. See
|
||||
discussion below.
|
||||
|
||||
Typical user cache directories are:
|
||||
macOS: ~/Library/Caches/<AppName>
|
||||
Unix: ~/.cache/<AppName> (XDG default)
|
||||
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache
|
||||
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache
|
||||
|
||||
On Windows the only suggestion in the MSDN docs is that local settings go in
|
||||
the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming
|
||||
app data dir (the default returned by `user_data_dir` above). Apps typically
|
||||
put cache data somewhere *under* the given dir here. Some examples:
|
||||
...\Mozilla\Firefox\Profiles\<ProfileName>\Cache
|
||||
...\Acme\SuperApp\Cache\1.0
|
||||
OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value.
|
||||
This can be disabled with the `opinion=False` option.
|
||||
"""
|
||||
if system == "win32":
|
||||
if appauthor is None:
|
||||
appauthor = appname
|
||||
path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA"))
|
||||
if appname:
|
||||
if appauthor is not False:
|
||||
path = os.path.join(path, appauthor, appname)
|
||||
else:
|
||||
path = os.path.join(path, appname)
|
||||
if opinion:
|
||||
path = os.path.join(path, "Cache")
|
||||
elif system == 'darwin':
|
||||
path = os.path.expanduser('~/Library/Caches')
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
else:
|
||||
path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
|
||||
if appname:
|
||||
path = os.path.join(path, appname)
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
def user_log_dir(appname=None, appauthor=None, version=None, opinion=True):
|
||||
r"""Return full path to the user-specific log dir for this application.
|
||||
|
||||
"appname" is the name of application.
|
||||
If None, just the system directory is returned.
|
||||
"appauthor" (only used on Windows) is the name of the
|
||||
appauthor or distributing body for this application. Typically
|
||||
it is the owning company name. This falls back to appname. You may
|
||||
pass False to disable it.
|
||||
"version" is an optional version path element to append to the
|
||||
path. You might want to use this if you want multiple versions
|
||||
of your app to be able to run independently. If used, this
|
||||
would typically be "<major>.<minor>".
|
||||
Only applied when appname is present.
|
||||
"opinion" (boolean) can be False to disable the appending of
|
||||
"Logs" to the base app data dir for Windows, and "log" to the
|
||||
base cache dir for Unix. See discussion below.
|
||||
|
||||
Typical user log directories are:
|
||||
macOS: ~/Library/Logs/<AppName>
|
||||
Unix: ~/.cache/<AppName>/log # or under $XDG_CACHE_HOME if defined
|
||||
Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs
|
||||
Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs
|
||||
|
||||
On Windows the only suggestion in the MSDN docs is that local settings
|
||||
go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in
|
||||
examples of what some windows apps use for a logs dir.)
|
||||
|
||||
OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA`
|
||||
value for Windows and appends "log" to the user cache dir for Unix.
|
||||
This can be disabled with the `opinion=False` option.
|
||||
"""
|
||||
if system == "darwin":
|
||||
path = os.path.join(
|
||||
os.path.expanduser('~/Library/Logs'),
|
||||
appname)
|
||||
elif system == "win32":
|
||||
path = user_data_dir(appname, appauthor, version)
|
||||
version = False
|
||||
if opinion:
|
||||
path = os.path.join(path, "Logs")
|
||||
else:
|
||||
path = user_cache_dir(appname, appauthor, version)
|
||||
version = False
|
||||
if opinion:
|
||||
path = os.path.join(path, "log")
|
||||
if appname and version:
|
||||
path = os.path.join(path, version)
|
||||
return path
|
||||
|
||||
|
||||
class AppDirs(object):
|
||||
"""Convenience wrapper for getting application dirs."""
|
||||
def __init__(self, appname, appauthor=None, version=None, roaming=False,
|
||||
multipath=False):
|
||||
self.appname = appname
|
||||
self.appauthor = appauthor
|
||||
self.version = version
|
||||
self.roaming = roaming
|
||||
self.multipath = multipath
|
||||
|
||||
@property
|
||||
def user_data_dir(self):
|
||||
return user_data_dir(self.appname, self.appauthor,
|
||||
version=self.version, roaming=self.roaming)
|
||||
|
||||
@property
|
||||
def site_data_dir(self):
|
||||
return site_data_dir(self.appname, self.appauthor,
|
||||
version=self.version, multipath=self.multipath)
|
||||
|
||||
@property
|
||||
def user_config_dir(self):
|
||||
return user_config_dir(self.appname, self.appauthor,
|
||||
version=self.version, roaming=self.roaming)
|
||||
|
||||
@property
|
||||
def site_config_dir(self):
|
||||
return site_config_dir(self.appname, self.appauthor,
|
||||
version=self.version, multipath=self.multipath)
|
||||
|
||||
@property
|
||||
def user_cache_dir(self):
|
||||
return user_cache_dir(self.appname, self.appauthor,
|
||||
version=self.version)
|
||||
|
||||
@property
|
||||
def user_log_dir(self):
|
||||
return user_log_dir(self.appname, self.appauthor,
|
||||
version=self.version)
|
||||
|
||||
|
||||
#---- internal support stuff
|
||||
|
||||
def _get_win_folder_from_registry(csidl_name):
|
||||
"""This is a fallback technique at best. I'm not sure if using the
|
||||
registry for this guarantees us the correct answer for all CSIDL_*
|
||||
names.
|
||||
"""
|
||||
import _winreg
|
||||
|
||||
shell_folder_name = {
|
||||
"CSIDL_APPDATA": "AppData",
|
||||
"CSIDL_COMMON_APPDATA": "Common AppData",
|
||||
"CSIDL_LOCAL_APPDATA": "Local AppData",
|
||||
}[csidl_name]
|
||||
|
||||
key = _winreg.OpenKey(
|
||||
_winreg.HKEY_CURRENT_USER,
|
||||
r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders"
|
||||
)
|
||||
dir, type = _winreg.QueryValueEx(key, shell_folder_name)
|
||||
return dir
|
||||
|
||||
|
||||
def _get_win_folder_with_pywin32(csidl_name):
|
||||
from win32com.shell import shellcon, shell
|
||||
dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0)
|
||||
# Try to make this a unicode path because SHGetFolderPath does
|
||||
# not return unicode strings when there is unicode data in the
|
||||
# path.
|
||||
try:
|
||||
dir = unicode(dir)
|
||||
|
||||
# Downgrade to short path name if have highbit chars. See
|
||||
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||
has_high_char = False
|
||||
for c in dir:
|
||||
if ord(c) > 255:
|
||||
has_high_char = True
|
||||
break
|
||||
if has_high_char:
|
||||
try:
|
||||
import win32api
|
||||
dir = win32api.GetShortPathName(dir)
|
||||
except ImportError:
|
||||
pass
|
||||
except UnicodeError:
|
||||
pass
|
||||
return dir
|
||||
|
||||
|
||||
def _get_win_folder_with_ctypes(csidl_name):
|
||||
import ctypes
|
||||
|
||||
csidl_const = {
|
||||
"CSIDL_APPDATA": 26,
|
||||
"CSIDL_COMMON_APPDATA": 35,
|
||||
"CSIDL_LOCAL_APPDATA": 28,
|
||||
}[csidl_name]
|
||||
|
||||
buf = ctypes.create_unicode_buffer(1024)
|
||||
ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf)
|
||||
|
||||
# Downgrade to short path name if have highbit chars. See
|
||||
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||
has_high_char = False
|
||||
for c in buf:
|
||||
if ord(c) > 255:
|
||||
has_high_char = True
|
||||
break
|
||||
if has_high_char:
|
||||
buf2 = ctypes.create_unicode_buffer(1024)
|
||||
if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024):
|
||||
buf = buf2
|
||||
|
||||
return buf.value
|
||||
|
||||
def _get_win_folder_with_jna(csidl_name):
|
||||
import array
|
||||
from com.sun import jna
|
||||
from com.sun.jna.platform import win32
|
||||
|
||||
buf_size = win32.WinDef.MAX_PATH * 2
|
||||
buf = array.zeros('c', buf_size)
|
||||
shell = win32.Shell32.INSTANCE
|
||||
shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf)
|
||||
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
|
||||
|
||||
# Downgrade to short path name if have highbit chars. See
|
||||
# <http://bugs.activestate.com/show_bug.cgi?id=85099>.
|
||||
has_high_char = False
|
||||
for c in dir:
|
||||
if ord(c) > 255:
|
||||
has_high_char = True
|
||||
break
|
||||
if has_high_char:
|
||||
buf = array.zeros('c', buf_size)
|
||||
kernel = win32.Kernel32.INSTANCE
|
||||
if kernal.GetShortPathName(dir, buf, buf_size):
|
||||
dir = jna.Native.toString(buf.tostring()).rstrip("\0")
|
||||
|
||||
return dir
|
||||
|
||||
if system == "win32":
|
||||
try:
|
||||
import win32com.shell
|
||||
_get_win_folder = _get_win_folder_with_pywin32
|
||||
except ImportError:
|
||||
try:
|
||||
from ctypes import windll
|
||||
_get_win_folder = _get_win_folder_with_ctypes
|
||||
except ImportError:
|
||||
try:
|
||||
import com.sun.jna
|
||||
_get_win_folder = _get_win_folder_with_jna
|
||||
except ImportError:
|
||||
_get_win_folder = _get_win_folder_from_registry
|
||||
|
||||
|
||||
#---- self test code
|
||||
|
||||
if __name__ == "__main__":
|
||||
appname = "MyApp"
|
||||
appauthor = "MyCompany"
|
||||
|
||||
props = ("user_data_dir", "site_data_dir",
|
||||
"user_config_dir", "site_config_dir",
|
||||
"user_cache_dir", "user_log_dir")
|
||||
|
||||
print("-- app dirs (with optional 'version')")
|
||||
dirs = AppDirs(appname, appauthor, version="1.0")
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||
|
||||
print("\n-- app dirs (without optional 'version')")
|
||||
dirs = AppDirs(appname, appauthor)
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||
|
||||
print("\n-- app dirs (without optional 'appauthor')")
|
||||
dirs = AppDirs(appname)
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))
|
||||
|
||||
print("\n-- app dirs (with disabled 'appauthor')")
|
||||
dirs = AppDirs(appname, appauthor=False)
|
||||
for prop in props:
|
||||
print("%s: %s" % (prop, getattr(dirs, prop)))

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/__init__.py (vendored, new file)
@@ -0,0 +1,11 @@
"""CacheControl import Interface.
|
||||
|
||||
Make it easy to import from cachecontrol without long namespaces.
|
||||
"""
|
||||
__author__ = 'Eric Larson'
|
||||
__email__ = 'eric@ionrock.org'
|
||||
__version__ = '0.11.7'
|
||||
|
||||
from .wrapper import CacheControl
|
||||
from .adapter import CacheControlAdapter
|
||||
from .controller import CacheController
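For context, a minimal sketch of this import interface in use, written against the upstream cachecontrol/requests names rather than the pip9._vendor prefix used in this vendored copy:

import requests
from cachecontrol import CacheControl

sess = CacheControl(requests.Session())

# The first GET goes to the network; if the response was cacheable, a
# repeat GET of the same URL can be served from the in-memory DictCache.
resp = sess.get("https://example.com/")
resp = sess.get("https://example.com/")
print(resp.from_cache)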

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/_cmd.py (vendored, new file)
@@ -0,0 +1,60 @@
import logging

from pip9._vendor import requests

from pip9._vendor.cachecontrol.adapter import CacheControlAdapter
from pip9._vendor.cachecontrol.cache import DictCache
from pip9._vendor.cachecontrol.controller import logger

from argparse import ArgumentParser


def setup_logging():
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler()
    logger.addHandler(handler)


def get_session():
    adapter = CacheControlAdapter(
        DictCache(),
        cache_etags=True,
        serializer=None,
        heuristic=None,
    )
    sess = requests.Session()
    sess.mount('http://', adapter)
    sess.mount('https://', adapter)

    sess.cache_controller = adapter.controller
    return sess


def get_args():
    parser = ArgumentParser()
    parser.add_argument('url', help='The URL to try and cache')
    return parser.parse_args()


def main(args=None):
    args = get_args()
    sess = get_session()

    # Make a request to get a response
    resp = sess.get(args.url)

    # Turn on logging
    setup_logging()

    # try setting the cache
    sess.cache_controller.cache_response(resp.request, resp.raw)

    # Now try to get it
    if sess.cache_controller.cached_request(resp.request):
        print('Cached!')
    else:
        print('Not cached :(')


if __name__ == '__main__':
    main()
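A hedged usage note: main() fetches the given URL, tries to store the response, then reports whether a cache lookup for the same request succeeds, which makes it a quick probe of whether a server's headers allow caching. One way to drive it programmatically (the argv dance mirrors what get_args() expects; the import path would carry the vendored prefix in this tree):

import sys
from cachecontrol import _cmd

sys.argv = ["cachecontrol-cmd", "https://example.com/"]
_cmd.main()   # prints 'Cached!' or 'Not cached :('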

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/adapter.py (vendored, new file)
@@ -0,0 +1,125 @@
import types
import functools

from pip9._vendor.requests.adapters import HTTPAdapter

from .controller import CacheController
from .cache import DictCache
from .filewrapper import CallbackFileWrapper


class CacheControlAdapter(HTTPAdapter):
    invalidating_methods = set(['PUT', 'DELETE'])

    def __init__(self, cache=None,
                 cache_etags=True,
                 controller_class=None,
                 serializer=None,
                 heuristic=None,
                 *args, **kw):
        super(CacheControlAdapter, self).__init__(*args, **kw)
        self.cache = cache or DictCache()
        self.heuristic = heuristic

        controller_factory = controller_class or CacheController
        self.controller = controller_factory(
            self.cache,
            cache_etags=cache_etags,
            serializer=serializer,
        )

    def send(self, request, **kw):
        """
        Send a request. Use the request information to see if it
        exists in the cache and cache the response if we need to and can.
        """
        if request.method == 'GET':
            cached_response = self.controller.cached_request(request)
            if cached_response:
                return self.build_response(request, cached_response,
                                           from_cache=True)

            # check for etags and add headers if appropriate
            request.headers.update(
                self.controller.conditional_headers(request)
            )

        resp = super(CacheControlAdapter, self).send(request, **kw)

        return resp

    def build_response(self, request, response, from_cache=False):
        """
        Build a response by making a request or using the cache.

        This will end up calling send and returning a potentially
        cached response
        """
        if not from_cache and request.method == 'GET':
            # Check for any heuristics that might update headers
            # before trying to cache.
            if self.heuristic:
                response = self.heuristic.apply(response)

            # apply any expiration heuristics
            if response.status == 304:
                # We must have sent an ETag request. This could mean
                # that we've been expired already or that we simply
                # have an etag. In either case, we want to try and
                # update the cache if that is the case.
                cached_response = self.controller.update_cached_response(
                    request, response
                )

                if cached_response is not response:
                    from_cache = True

                # We are done with the server response, read a
                # possible response body (compliant servers will
                # not return one, but we cannot be 100% sure) and
                # release the connection back to the pool.
                response.read(decode_content=False)
                response.release_conn()

                response = cached_response

            # We always cache the 301 responses
            elif response.status == 301:
                self.controller.cache_response(request, response)
            else:
                # Wrap the response file with a wrapper that will cache the
                # response when the stream has been consumed.
                response._fp = CallbackFileWrapper(
                    response._fp,
                    functools.partial(
                        self.controller.cache_response,
                        request,
                        response,
                    )
                )
                if response.chunked:
                    super_update_chunk_length = response._update_chunk_length

                    def _update_chunk_length(self):
                        super_update_chunk_length()
                        if self.chunk_left == 0:
                            self._fp._close()
                    response._update_chunk_length = types.MethodType(_update_chunk_length, response)

        resp = super(CacheControlAdapter, self).build_response(
            request, response
        )

        # See if we should invalidate the cache.
        if request.method in self.invalidating_methods and resp.ok:
            cache_url = self.controller.cache_url(request.url)
            self.cache.delete(cache_url)

        # Give the request a from_cache attr to let people use it
        resp.from_cache = from_cache

        return resp

    def close(self):
        self.cache.close()
        super(CacheControlAdapter, self).close()
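The wrapper module later in this patch does this mounting for you; the sketch below is only to make the adapter's role concrete, again using upstream import names rather than the vendored prefix:

import requests
from cachecontrol.adapter import CacheControlAdapter
from cachecontrol.cache import DictCache

sess = requests.Session()
sess.mount("https://", CacheControlAdapter(DictCache(), cache_etags=True))

resp = sess.get("https://example.com/")
# build_response() tags every response, so callers can always check this:
print(resp.from_cache)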

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/cache.py (vendored, new file)
@@ -0,0 +1,39 @@
"""
|
||||
The cache object API for implementing caches. The default is a thread
|
||||
safe in-memory dictionary.
|
||||
"""
|
||||
from threading import Lock
|
||||
|
||||
|
||||
class BaseCache(object):
|
||||
|
||||
def get(self, key):
|
||||
raise NotImplemented()
|
||||
|
||||
def set(self, key, value):
|
||||
raise NotImplemented()
|
||||
|
||||
def delete(self, key):
|
||||
raise NotImplemented()
|
||||
|
||||
def close(self):
|
||||
pass
|
||||
|
||||
|
||||
class DictCache(BaseCache):
|
||||
|
||||
def __init__(self, init_dict=None):
|
||||
self.lock = Lock()
|
||||
self.data = init_dict or {}
|
||||
|
||||
def get(self, key):
|
||||
return self.data.get(key, None)
|
||||
|
||||
def set(self, key, value):
|
||||
with self.lock:
|
||||
self.data.update({key: value})
|
||||
|
||||
def delete(self, key):
|
||||
with self.lock:
|
||||
if key in self.data:
|
||||
self.data.pop(key)
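get/set/delete is the whole BaseCache contract, so alternative backends stay small. A toy sketch (the LoggingCache name is illustrative, not part of the library):

from cachecontrol.cache import BaseCache, DictCache

class LoggingCache(BaseCache):
    """Wrap another cache and print each operation as it happens."""
    def __init__(self, inner):
        self.inner = inner
    def get(self, key):
        print("get", key)
        return self.inner.get(key)
    def set(self, key, value):
        print("set", key)
        self.inner.set(key, value)
    def delete(self, key):
        print("delete", key)
        self.inner.delete(key)

cache = LoggingCache(DictCache())
cache.set("k", b"v")
print(cache.get("k"))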

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/caches/__init__.py (vendored, new file)
@@ -0,0 +1,18 @@
from textwrap import dedent

try:
    from .file_cache import FileCache
except ImportError:
    notice = dedent('''
    NOTE: In order to use the FileCache you must have
    lockfile installed. You can install it via pip:
      pip install lockfile
    ''')
    print(notice)


try:
    import redis
    from .redis_cache import RedisCache
except ImportError:
    pass

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/caches/file_cache.py (vendored, new file)
@@ -0,0 +1,116 @@
import hashlib
import os

from pip9._vendor.lockfile import LockFile
from pip9._vendor.lockfile.mkdirlockfile import MkdirLockFile

from ..cache import BaseCache
from ..controller import CacheController


def _secure_open_write(filename, fmode):
    # We only want to write to this file, so open it in write only mode
    flags = os.O_WRONLY

    # os.O_CREAT | os.O_EXCL will fail if the file already exists, so we will
    # only open *new* files.
    # We specify this because we want to ensure that the mode we pass is the
    # mode of the file.
    flags |= os.O_CREAT | os.O_EXCL

    # Do not follow symlinks to prevent someone from making a symlink that
    # we follow and insecurely open a cache file.
    if hasattr(os, "O_NOFOLLOW"):
        flags |= os.O_NOFOLLOW

    # On Windows we'll mark this file as binary
    if hasattr(os, "O_BINARY"):
        flags |= os.O_BINARY

    # Before we open our file, we want to delete any existing file that is
    # there
    try:
        os.remove(filename)
    except (IOError, OSError):
        # The file must not exist already, so we can just skip ahead to opening
        pass

    # Open our file, the use of os.O_CREAT | os.O_EXCL will ensure that if a
    # race condition happens between the os.remove and this line, that an
    # error will be raised. Because we utilize a lockfile this should only
    # happen if someone is attempting to attack us.
    fd = os.open(filename, flags, fmode)
    try:
        return os.fdopen(fd, "wb")
    except:
        # An error occurred wrapping our FD in a file object
        os.close(fd)
        raise


class FileCache(BaseCache):
    def __init__(self, directory, forever=False, filemode=0o0600,
                 dirmode=0o0700, use_dir_lock=None, lock_class=None):

        if use_dir_lock is not None and lock_class is not None:
            raise ValueError("Cannot use use_dir_lock and lock_class together")

        if use_dir_lock:
            lock_class = MkdirLockFile

        if lock_class is None:
            lock_class = LockFile

        self.directory = directory
        self.forever = forever
        self.filemode = filemode
        self.dirmode = dirmode
        self.lock_class = lock_class

    @staticmethod
    def encode(x):
        return hashlib.sha224(x.encode()).hexdigest()

    def _fn(self, name):
        # NOTE: This method should not change as some may depend on it.
        #       See: https://github.com/ionrock/cachecontrol/issues/63
        hashed = self.encode(name)
        parts = list(hashed[:5]) + [hashed]
        return os.path.join(self.directory, *parts)

    def get(self, key):
        name = self._fn(key)
        if not os.path.exists(name):
            return None

        with open(name, 'rb') as fh:
            return fh.read()

    def set(self, key, value):
        name = self._fn(key)

        # Make sure the directory exists
        try:
            os.makedirs(os.path.dirname(name), self.dirmode)
        except (IOError, OSError):
            pass

        with self.lock_class(name) as lock:
            # Write our actual file
            with _secure_open_write(lock.path, self.filemode) as fh:
                fh.write(value)

    def delete(self, key):
        name = self._fn(key)
        if not self.forever:
            os.remove(name)


def url_to_file_path(url, filecache):
    """Return the file cache path based on the URL.

    This does not ensure the file exists!
    """
    key = CacheController.cache_url(url)
    return filecache._fn(key)
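The _fn() sharding above is worth seeing concretely: the key is hashed with sha224, and the first five hex digits become nested directories so that no single directory grows too large. A standalone sketch of the same computation:

import hashlib
import os

def cache_path(directory, key):
    hashed = hashlib.sha224(key.encode()).hexdigest()
    parts = list(hashed[:5]) + [hashed]
    return os.path.join(directory, *parts)

# Five one-character directories, then the full digest as the filename.
print(cache_path("/tmp/http-cache", "https://example.com/"))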

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/caches/redis_cache.py (vendored, new file)
@@ -0,0 +1,41 @@
from __future__ import division

from datetime import datetime


def total_seconds(td):
    """Python 2.6 compatibility"""
    if hasattr(td, 'total_seconds'):
        return td.total_seconds()

    ms = td.microseconds
    secs = (td.seconds + td.days * 24 * 3600)
    return (ms + secs * 10**6) / 10**6


class RedisCache(object):

    def __init__(self, conn):
        self.conn = conn

    def get(self, key):
        return self.conn.get(key)

    def set(self, key, value, expires=None):
        if not expires:
            self.conn.set(key, value)
        else:
            expires = expires - datetime.now()
            self.conn.setex(key, total_seconds(expires), value)

    def delete(self, key):
        self.conn.delete(key)

    def clear(self):
        """Helper for clearing all the keys in a database. Use with
        caution!"""
        for key in self.conn.keys():
            self.conn.delete(key)

    def close(self):
        self.conn.disconnect()
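A minimal sketch of wiring this backend up, assuming the third-party redis client (not vendored here) is installed; redis.from_url is the stock redis-py constructor:

import redis
from cachecontrol.caches.redis_cache import RedisCache

conn = redis.from_url("redis://localhost:6379/0")
cache = RedisCache(conn)
cache.set("key", b"value", expires=None)   # plain SET when no expiry is given
print(cache.get("key"))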

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/compat.py (vendored, new file)
@@ -0,0 +1,20 @@
try:
    from urllib.parse import urljoin
except ImportError:
    from urlparse import urljoin


try:
    import cPickle as pickle
except ImportError:
    import pickle


from pip9._vendor.requests.packages.urllib3.response import HTTPResponse
from pip9._vendor.requests.packages.urllib3.util import is_fp_closed

# Replicate some six behaviour
try:
    text_type = (unicode,)
except NameError:
    text_type = (str,)

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/controller.py (vendored, new file)
@@ -0,0 +1,353 @@
"""
|
||||
The httplib2 algorithms ported for use with requests.
|
||||
"""
|
||||
import logging
|
||||
import re
|
||||
import calendar
|
||||
import time
|
||||
from email.utils import parsedate_tz
|
||||
|
||||
from pip9._vendor.requests.structures import CaseInsensitiveDict
|
||||
|
||||
from .cache import DictCache
|
||||
from .serialize import Serializer
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")
|
||||
|
||||
|
||||
def parse_uri(uri):
|
||||
"""Parses a URI using the regex given in Appendix B of RFC 3986.
|
||||
|
||||
(scheme, authority, path, query, fragment) = parse_uri(uri)
|
||||
"""
|
||||
groups = URI.match(uri).groups()
|
||||
return (groups[1], groups[3], groups[4], groups[6], groups[8])
|
||||
|
||||
|
||||
class CacheController(object):
|
||||
"""An interface to see if request should cached or not.
|
||||
"""
|
||||
def __init__(self, cache=None, cache_etags=True, serializer=None):
|
||||
self.cache = cache or DictCache()
|
||||
self.cache_etags = cache_etags
|
||||
self.serializer = serializer or Serializer()
|
||||
|
||||
@classmethod
|
||||
def _urlnorm(cls, uri):
|
||||
"""Normalize the URL to create a safe key for the cache"""
|
||||
(scheme, authority, path, query, fragment) = parse_uri(uri)
|
||||
if not scheme or not authority:
|
||||
raise Exception("Only absolute URIs are allowed. uri = %s" % uri)
|
||||
|
||||
scheme = scheme.lower()
|
||||
authority = authority.lower()
|
||||
|
||||
if not path:
|
||||
path = "/"
|
||||
|
||||
# Could do syntax based normalization of the URI before
|
||||
# computing the digest. See Section 6.2.2 of Std 66.
|
||||
request_uri = query and "?".join([path, query]) or path
|
||||
defrag_uri = scheme + "://" + authority + request_uri
|
||||
|
||||
return defrag_uri
|
||||
|
||||
@classmethod
|
||||
def cache_url(cls, uri):
|
||||
return cls._urlnorm(uri)
|
||||
|
||||
def parse_cache_control(self, headers):
|
||||
"""
|
||||
Parse the cache control headers returning a dictionary with values
|
||||
for the different directives.
|
||||
"""
|
||||
retval = {}
|
||||
|
||||
cc_header = 'cache-control'
|
||||
if 'Cache-Control' in headers:
|
||||
cc_header = 'Cache-Control'
|
||||
|
||||
if cc_header in headers:
|
||||
parts = headers[cc_header].split(',')
|
||||
parts_with_args = [
|
||||
tuple([x.strip().lower() for x in part.split("=", 1)])
|
||||
for part in parts if -1 != part.find("=")
|
||||
]
|
||||
parts_wo_args = [
|
||||
(name.strip().lower(), 1)
|
||||
for name in parts if -1 == name.find("=")
|
||||
]
|
||||
retval = dict(parts_with_args + parts_wo_args)
|
||||
return retval
|
||||
|
||||
def cached_request(self, request):
|
||||
"""
|
||||
Return a cached response if it exists in the cache, otherwise
|
||||
return False.
|
||||
"""
|
||||
cache_url = self.cache_url(request.url)
|
||||
logger.debug('Looking up "%s" in the cache', cache_url)
|
||||
cc = self.parse_cache_control(request.headers)
|
||||
|
||||
# Bail out if the request insists on fresh data
|
||||
if 'no-cache' in cc:
|
||||
logger.debug('Request header has "no-cache", cache bypassed')
|
||||
return False
|
||||
|
||||
if 'max-age' in cc and cc['max-age'] == 0:
|
||||
logger.debug('Request header has "max_age" as 0, cache bypassed')
|
||||
return False
|
||||
|
||||
# Request allows serving from the cache, let's see if we find something
|
||||
cache_data = self.cache.get(cache_url)
|
||||
if cache_data is None:
|
||||
logger.debug('No cache entry available')
|
||||
return False
|
||||
|
||||
# Check whether it can be deserialized
|
||||
resp = self.serializer.loads(request, cache_data)
|
||||
if not resp:
|
||||
logger.warning('Cache entry deserialization failed, entry ignored')
|
||||
return False
|
||||
|
||||
# If we have a cached 301, return it immediately. We don't
|
||||
# need to test our response for other headers b/c it is
|
||||
# intrinsically "cacheable" as it is Permanent.
|
||||
# See:
|
||||
# https://tools.ietf.org/html/rfc7231#section-6.4.2
|
||||
#
|
||||
# Client can try to refresh the value by repeating the request
|
||||
# with cache busting headers as usual (ie no-cache).
|
||||
if resp.status == 301:
|
||||
msg = ('Returning cached "301 Moved Permanently" response '
|
||||
'(ignoring date and etag information)')
|
||||
logger.debug(msg)
|
||||
return resp
|
||||
|
||||
headers = CaseInsensitiveDict(resp.headers)
|
||||
if not headers or 'date' not in headers:
|
||||
if 'etag' not in headers:
|
||||
# Without date or etag, the cached response can never be used
|
||||
# and should be deleted.
|
||||
logger.debug('Purging cached response: no date or etag')
|
||||
self.cache.delete(cache_url)
|
||||
logger.debug('Ignoring cached response: no date')
|
||||
return False
|
||||
|
||||
now = time.time()
|
||||
date = calendar.timegm(
|
||||
parsedate_tz(headers['date'])
|
||||
)
|
||||
current_age = max(0, now - date)
|
||||
logger.debug('Current age based on date: %i', current_age)
|
||||
|
||||
# TODO: There is an assumption that the result will be a
|
||||
# urllib3 response object. This may not be best since we
|
||||
# could probably avoid instantiating or constructing the
|
||||
# response until we know we need it.
|
||||
resp_cc = self.parse_cache_control(headers)
|
||||
|
||||
# determine freshness
|
||||
freshness_lifetime = 0
|
||||
|
||||
# Check the max-age pragma in the cache control header
|
||||
if 'max-age' in resp_cc and resp_cc['max-age'].isdigit():
|
||||
freshness_lifetime = int(resp_cc['max-age'])
|
||||
logger.debug('Freshness lifetime from max-age: %i',
|
||||
freshness_lifetime)
|
||||
|
||||
# If there isn't a max-age, check for an expires header
|
||||
elif 'expires' in headers:
|
||||
expires = parsedate_tz(headers['expires'])
|
||||
if expires is not None:
|
||||
expire_time = calendar.timegm(expires) - date
|
||||
freshness_lifetime = max(0, expire_time)
|
||||
logger.debug("Freshness lifetime from expires: %i",
|
||||
freshness_lifetime)
|
||||
|
||||
# Determine if we are setting freshness limit in the
|
||||
# request. Note, this overrides what was in the response.
|
||||
if 'max-age' in cc:
|
||||
try:
|
||||
freshness_lifetime = int(cc['max-age'])
|
||||
logger.debug('Freshness lifetime from request max-age: %i',
|
||||
freshness_lifetime)
|
||||
except ValueError:
|
||||
freshness_lifetime = 0
|
||||
|
||||
if 'min-fresh' in cc:
|
||||
try:
|
||||
min_fresh = int(cc['min-fresh'])
|
||||
except ValueError:
|
||||
min_fresh = 0
|
||||
# adjust our current age by our min fresh
|
||||
current_age += min_fresh
|
||||
logger.debug('Adjusted current age from min-fresh: %i',
|
||||
current_age)
|
||||
|
||||
# Return entry if it is fresh enough
|
||||
if freshness_lifetime > current_age:
|
||||
logger.debug('The response is "fresh", returning cached response')
|
||||
logger.debug('%i > %i', freshness_lifetime, current_age)
|
||||
return resp
|
||||
|
||||
# we're not fresh. If we don't have an Etag, clear it out
|
||||
if 'etag' not in headers:
|
||||
logger.debug(
|
||||
'The cached response is "stale" with no etag, purging'
|
||||
)
|
||||
self.cache.delete(cache_url)
|
||||
|
||||
# return the original handler
|
||||
return False
|
||||
|
||||
def conditional_headers(self, request):
|
||||
cache_url = self.cache_url(request.url)
|
||||
resp = self.serializer.loads(request, self.cache.get(cache_url))
|
||||
new_headers = {}
|
||||
|
||||
if resp:
|
||||
headers = CaseInsensitiveDict(resp.headers)
|
||||
|
||||
if 'etag' in headers:
|
||||
new_headers['If-None-Match'] = headers['ETag']
|
||||
|
||||
if 'last-modified' in headers:
|
||||
new_headers['If-Modified-Since'] = headers['Last-Modified']
|
||||
|
||||
return new_headers
|
||||
|
||||
def cache_response(self, request, response, body=None):
|
||||
"""
|
||||
Algorithm for caching requests.
|
||||
|
||||
This assumes a requests Response object.
|
||||
"""
|
||||
# From httplib2: Don't cache 206's since we aren't going to
|
||||
# handle byte range requests
|
||||
cacheable_status_codes = [200, 203, 300, 301]
|
||||
if response.status not in cacheable_status_codes:
|
||||
logger.debug(
|
||||
'Status code %s not in %s',
|
||||
response.status,
|
||||
cacheable_status_codes
|
||||
)
|
||||
return
|
||||
|
||||
response_headers = CaseInsensitiveDict(response.headers)
|
||||
|
||||
# If we've been given a body, our response has a Content-Length, that
|
||||
# Content-Length is valid then we can check to see if the body we've
|
||||
# been given matches the expected size, and if it doesn't we'll just
|
||||
# skip trying to cache it.
|
||||
if (body is not None and
|
||||
"content-length" in response_headers and
|
||||
response_headers["content-length"].isdigit() and
|
||||
int(response_headers["content-length"]) != len(body)):
|
||||
return
|
||||
|
||||
cc_req = self.parse_cache_control(request.headers)
|
||||
cc = self.parse_cache_control(response_headers)
|
||||
|
||||
cache_url = self.cache_url(request.url)
|
||||
logger.debug('Updating cache with response from "%s"', cache_url)
|
||||
|
||||
# Delete it from the cache if we happen to have it stored there
|
||||
no_store = False
|
||||
if cc.get('no-store'):
|
||||
no_store = True
|
||||
logger.debug('Response header has "no-store"')
|
||||
if cc_req.get('no-store'):
|
||||
no_store = True
|
||||
logger.debug('Request header has "no-store"')
|
||||
if no_store and self.cache.get(cache_url):
|
||||
logger.debug('Purging existing cache entry to honor "no-store"')
|
||||
self.cache.delete(cache_url)
|
||||
|
||||
# If we've been given an etag, then keep the response
|
||||
if self.cache_etags and 'etag' in response_headers:
|
||||
logger.debug('Caching due to etag')
|
||||
self.cache.set(
|
||||
cache_url,
|
||||
self.serializer.dumps(request, response, body=body),
|
||||
)
|
||||
|
||||
# Add to the cache any 301s. We do this before looking that
|
||||
# the Date headers.
|
||||
elif response.status == 301:
|
||||
logger.debug('Caching permanant redirect')
|
||||
self.cache.set(
|
||||
cache_url,
|
||||
self.serializer.dumps(request, response)
|
||||
)
|
||||
|
||||
# Add to the cache if the response headers demand it. If there
|
||||
# is no date header then we can't do anything about expiring
|
||||
# the cache.
|
||||
elif 'date' in response_headers:
|
||||
# cache when there is a max-age > 0
|
||||
if cc and cc.get('max-age'):
|
||||
if cc['max-age'].isdigit() and int(cc['max-age']) > 0:
|
||||
logger.debug('Caching b/c date exists and max-age > 0')
|
||||
self.cache.set(
|
||||
cache_url,
|
||||
self.serializer.dumps(request, response, body=body),
|
||||
)
|
||||
|
||||
# If the request can expire, it means we should cache it
|
||||
# in the meantime.
|
||||
elif 'expires' in response_headers:
|
||||
if response_headers['expires']:
|
||||
logger.debug('Caching b/c of expires header')
|
||||
self.cache.set(
|
||||
cache_url,
|
||||
self.serializer.dumps(request, response, body=body),
|
||||
)
|
||||
|
||||
def update_cached_response(self, request, response):
|
||||
"""On a 304 we will get a new set of headers that we want to
|
||||
update our cached value with, assuming we have one.
|
||||
|
||||
This should only ever be called when we've sent an ETag and
|
||||
gotten a 304 as the response.
|
||||
"""
|
||||
cache_url = self.cache_url(request.url)
|
||||
|
||||
cached_response = self.serializer.loads(
|
||||
request,
|
||||
self.cache.get(cache_url)
|
||||
)
|
||||
|
||||
if not cached_response:
|
||||
# we didn't have a cached response
|
||||
return response
|
||||
|
||||
# Lets update our headers with the headers from the new request:
|
||||
# http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1
|
||||
#
|
||||
# The server isn't supposed to send headers that would make
|
||||
# the cached body invalid. But... just in case, we'll be sure
|
||||
# to strip out ones we know that might be problmatic due to
|
||||
# typical assumptions.
|
||||
excluded_headers = [
|
||||
"content-length",
|
||||
]
|
||||
|
||||
cached_response.headers.update(
|
||||
dict((k, v) for k, v in response.headers.items()
|
||||
if k.lower() not in excluded_headers)
|
||||
)
|
||||
|
||||
# we want a 200 b/c we have content via the cache
|
||||
cached_response.status = 200
|
||||
|
||||
# update our cache
|
||||
self.cache.set(
|
||||
cache_url,
|
||||
self.serializer.dumps(request, cached_response),
|
||||
)
|
||||
|
||||
return cached_response
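One detail of parse_cache_control() that is easy to miss: directives with arguments keep their raw string value, while bare directives are recorded with the placeholder value 1. A quick illustration:

from cachecontrol.controller import CacheController

cc = CacheController().parse_cache_control(
    {"cache-control": "max-age=3600, no-store"}
)
print(cc)   # {'max-age': '3600', 'no-store': 1}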

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/filewrapper.py (vendored, new file)
@@ -0,0 +1,78 @@
from io import BytesIO


class CallbackFileWrapper(object):
    """
    Small wrapper around a fp object which will tee everything read into a
    buffer, and when that file is closed it will execute a callback with the
    contents of that buffer.

    All attributes are proxied to the underlying file object.

    This class uses members with a double underscore (__) leading prefix so as
    not to accidentally shadow an attribute.
    """

    def __init__(self, fp, callback):
        self.__buf = BytesIO()
        self.__fp = fp
        self.__callback = callback

    def __getattr__(self, name):
        # The vagaries of garbage collection mean that self.__fp is
        # not always set. Using __getattribute__ and the private
        # name[0] allows looking up the attribute value and raising an
        # AttributeError when it doesn't exist. This stops things from
        # infinitely recursing calls to getattr in the case where
        # self.__fp hasn't been set.
        #
        # [0] https://docs.python.org/2/reference/expressions.html#atom-identifiers
        fp = self.__getattribute__('_CallbackFileWrapper__fp')
        return getattr(fp, name)

    def __is_fp_closed(self):
        try:
            return self.__fp.fp is None
        except AttributeError:
            pass

        try:
            return self.__fp.closed
        except AttributeError:
            pass

        # We just don't cache it then.
        # TODO: Add some logging here...
        return False

    def _close(self):
        if self.__callback:
            self.__callback(self.__buf.getvalue())

        # We assign this to None here, because otherwise we can get into
        # really tricky problems where the CPython interpreter dead locks
        # because the callback is holding a reference to something which
        # has a __del__ method. Setting this to None breaks the cycle
        # and allows the garbage collector to do its thing normally.
        self.__callback = None

    def read(self, amt=None):
        data = self.__fp.read(amt)
        self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data

    def _safe_read(self, amt):
        data = self.__fp._safe_read(amt)
        if amt == 2 and data == b'\r\n':
            # urllib executes this read to toss the CRLF at the end
            # of the chunk.
            return data

        self.__buf.write(data)
        if self.__is_fp_closed():
            self._close()

        return data
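To make the teeing behaviour concrete, a self-contained sketch with a fake raw stream. FakeRaw is a stand-in for the httplib-style object whose .fp attribute __is_fp_closed() probes; it is not part of the library:

from io import BytesIO
from cachecontrol.filewrapper import CallbackFileWrapper

class FakeRaw(object):
    def __init__(self, data):
        self.fp = BytesIO(data)
    def read(self, amt=None):
        data = self.fp.read(amt)
        if not data:
            self.fp = None   # httplib drops fp at EOF
        return data

captured = []
wrapped = CallbackFileWrapper(FakeRaw(b"hello"), captured.append)
wrapped.read()       # b'hello' is returned and teed into the buffer
wrapped.read()       # EOF: fp becomes None, so the callback fires
print(captured)      # [b'hello']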

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/heuristics.py (vendored, new file)
@@ -0,0 +1,138 @@
import calendar
import time

from email.utils import formatdate, parsedate, parsedate_tz

from datetime import datetime, timedelta

TIME_FMT = "%a, %d %b %Y %H:%M:%S GMT"


def expire_after(delta, date=None):
    date = date or datetime.now()
    return date + delta


def datetime_to_header(dt):
    return formatdate(calendar.timegm(dt.timetuple()))


class BaseHeuristic(object):

    def warning(self, response):
        """
        Return a valid 1xx warning header value describing the cache
        adjustments.

        The response is provided to allow warnings like 113
        http://tools.ietf.org/html/rfc7234#section-5.5.4 where we need
        to explicitly say response is over 24 hours old.
        """
        return '110 - "Response is Stale"'

    def update_headers(self, response):
        """Update the response headers with any new headers.

        NOTE: This SHOULD always include some Warning header to
              signify that the response was cached by the client, not
              by way of the provided headers.
        """
        return {}

    def apply(self, response):
        updated_headers = self.update_headers(response)

        if updated_headers:
            response.headers.update(updated_headers)
            warning_header_value = self.warning(response)
            if warning_header_value is not None:
                response.headers.update({'Warning': warning_header_value})

        return response


class OneDayCache(BaseHeuristic):
    """
    Cache the response by providing an expires 1 day in the
    future.
    """
    def update_headers(self, response):
        headers = {}

        if 'expires' not in response.headers:
            date = parsedate(response.headers['date'])
            expires = expire_after(timedelta(days=1),
                                   date=datetime(*date[:6]))
            headers['expires'] = datetime_to_header(expires)
            headers['cache-control'] = 'public'
        return headers


class ExpiresAfter(BaseHeuristic):
    """
    Cache **all** requests for a defined time period.
    """

    def __init__(self, **kw):
        self.delta = timedelta(**kw)

    def update_headers(self, response):
        expires = expire_after(self.delta)
        return {
            'expires': datetime_to_header(expires),
            'cache-control': 'public',
        }

    def warning(self, response):
        tmpl = '110 - Automatically cached for %s. Response might be stale'
        return tmpl % self.delta


class LastModified(BaseHeuristic):
    """
    If there is no Expires header already, fall back on Last-Modified
    using the heuristic from
    http://tools.ietf.org/html/rfc7234#section-4.2.2
    to calculate a reasonable value.

    Firefox also does something like this per
    https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching_FAQ
    http://lxr.mozilla.org/mozilla-release/source/netwerk/protocol/http/nsHttpResponseHead.cpp#397
    Unlike mozilla we limit this to 24-hr.
    """
    cacheable_by_default_statuses = set([
        200, 203, 204, 206, 300, 301, 404, 405, 410, 414, 501
    ])

    def update_headers(self, resp):
        headers = resp.headers

        if 'expires' in headers:
            return {}

        if 'cache-control' in headers and headers['cache-control'] != 'public':
            return {}

        if resp.status not in self.cacheable_by_default_statuses:
            return {}

        if 'date' not in headers or 'last-modified' not in headers:
            return {}

        date = calendar.timegm(parsedate_tz(headers['date']))
        last_modified = parsedate(headers['last-modified'])
        if date is None or last_modified is None:
            return {}

        now = time.time()
        current_age = max(0, now - date)
        delta = date - calendar.timegm(last_modified)
        freshness_lifetime = max(0, min(delta / 10, 24 * 3600))
        if freshness_lifetime <= current_age:
            return {}

        expires = date + freshness_lifetime
        return {'expires': time.strftime(TIME_FMT, time.gmtime(expires))}

    def warning(self, resp):
        return None
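A minimal sketch of plugging a heuristic in via the wrapper, written with upstream import names; the Warning header text comes from ExpiresAfter.warning() above:

import requests
from cachecontrol import CacheControl
from cachecontrol.heuristics import ExpiresAfter

# Treat every response as fresh for one hour, even without cache headers.
sess = CacheControl(requests.Session(), heuristic=ExpiresAfter(hours=1))
resp = sess.get("https://example.com/")
print(resp.headers.get("warning"))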

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/serialize.py (vendored, new file)
@@ -0,0 +1,196 @@
import base64
import io
import json
import zlib

from pip9._vendor.requests.structures import CaseInsensitiveDict

from .compat import HTTPResponse, pickle, text_type


def _b64_encode_bytes(b):
    return base64.b64encode(b).decode("ascii")


def _b64_encode_str(s):
    return _b64_encode_bytes(s.encode("utf8"))


def _b64_encode(s):
    if isinstance(s, text_type):
        return _b64_encode_str(s)
    return _b64_encode_bytes(s)


def _b64_decode_bytes(b):
    return base64.b64decode(b.encode("ascii"))


def _b64_decode_str(s):
    return _b64_decode_bytes(s).decode("utf8")


class Serializer(object):

    def dumps(self, request, response, body=None):
        response_headers = CaseInsensitiveDict(response.headers)

        if body is None:
            body = response.read(decode_content=False)

            # NOTE: 99% sure this is dead code. I'm only leaving it
            #       here b/c I don't have a test yet to prove
            #       it. Basically, before using
            #       `cachecontrol.filewrapper.CallbackFileWrapper`,
            #       this made an effort to reset the file handle. The
            #       `CallbackFileWrapper` short circuits this code by
            #       setting the body as the content is consumed, the
            #       result being a `body` argument is *always* passed
            #       into cache_response, and in turn,
            #       `Serializer.dump`.
            response._fp = io.BytesIO(body)

        data = {
            "response": {
                "body": _b64_encode_bytes(body),
                "headers": dict(
                    (_b64_encode(k), _b64_encode(v))
                    for k, v in response.headers.items()
                ),
                "status": response.status,
                "version": response.version,
                "reason": _b64_encode_str(response.reason),
                "strict": response.strict,
                "decode_content": response.decode_content,
            },
        }

        # Construct our vary headers
        data["vary"] = {}
        if "vary" in response_headers:
            varied_headers = response_headers['vary'].split(',')
            for header in varied_headers:
                header = header.strip()
                data["vary"][header] = request.headers.get(header, None)

        # Encode our Vary headers to ensure they can be serialized as JSON
        data["vary"] = dict(
            (_b64_encode(k), _b64_encode(v) if v is not None else v)
            for k, v in data["vary"].items()
        )

        return b",".join([
            b"cc=2",
            zlib.compress(
                json.dumps(
                    data, separators=(",", ":"), sort_keys=True,
                ).encode("utf8"),
            ),
        ])

    def loads(self, request, data):
        # Short circuit if we've been given an empty set of data
        if not data:
            return

        # Determine what version of the serializer the data was serialized
        # with
        try:
            ver, data = data.split(b",", 1)
        except ValueError:
            ver = b"cc=0"

        # Make sure that our "ver" is actually a version and isn't a false
        # positive from a , being in the data stream.
        if ver[:3] != b"cc=":
            data = ver + data
            ver = b"cc=0"

        # Get the version number out of the cc=N
        ver = ver.split(b"=", 1)[-1].decode("ascii")

        # Dispatch to the actual load method for the given version
        try:
            return getattr(self, "_loads_v{0}".format(ver))(request, data)
        except AttributeError:
            # This is a version we don't have a loads function for, so we'll
            # just treat it as a miss and return None
            return

    def prepare_response(self, request, cached):
        """Verify our vary headers match and construct a real urllib3
        HTTPResponse object.
        """
        # Special case the '*' Vary value as it means we cannot actually
        # determine if the cached response is suitable for this request.
        if "*" in cached.get("vary", {}):
            return

        # Ensure that the Vary headers for the cached response match our
        # request
        for header, value in cached.get("vary", {}).items():
            if request.headers.get(header, None) != value:
                return

        body_raw = cached["response"].pop("body")

        headers = CaseInsensitiveDict(data=cached['response']['headers'])
        if headers.get('transfer-encoding', '') == 'chunked':
            headers.pop('transfer-encoding')

        cached['response']['headers'] = headers

        try:
            body = io.BytesIO(body_raw)
        except TypeError:
            # This can happen if cachecontrol serialized to v1 format (pickle)
            # using Python 2. A Python 2 str(byte string) will be unpickled as
            # a Python 3 str (unicode string), which will cause the above to
            # fail with:
            #
            #     TypeError: 'str' does not support the buffer interface
            body = io.BytesIO(body_raw.encode('utf8'))

        return HTTPResponse(
            body=body,
            preload_content=False,
            **cached["response"]
        )

    def _loads_v0(self, request, data):
        # The original legacy cache data. This doesn't contain enough
        # information to construct everything we need, so we'll treat this as
        # a miss.
        return

    def _loads_v1(self, request, data):
        try:
            cached = pickle.loads(data)
        except ValueError:
            return

        return self.prepare_response(request, cached)

    def _loads_v2(self, request, data):
        try:
            cached = json.loads(zlib.decompress(data).decode("utf8"))
        except ValueError:
            return

        # We need to decode the items that we've base64 encoded
        cached["response"]["body"] = _b64_decode_bytes(
            cached["response"]["body"]
        )
        cached["response"]["headers"] = dict(
            (_b64_decode_str(k), _b64_decode_str(v))
            for k, v in cached["response"]["headers"].items()
        )
        cached["response"]["reason"] = _b64_decode_str(
            cached["response"]["reason"],
        )
        cached["vary"] = dict(
            (_b64_decode_str(k), _b64_decode_str(v) if v is not None else v)
            for k, v in cached["vary"].items()
        )

        return self.prepare_response(request, cached)
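The framing that loads() undoes is just a version tag, a comma, and a payload; everything after the first comma belongs to the payload even if it contains commas itself. A sketch of the dispatch logic in isolation (the payload bytes here are a placeholder, not a valid zlib stream):

blob = b"cc=2," + b"<zlib-compressed, comma-containing json>"
ver, payload = blob.split(b",", 1)
assert ver == b"cc=2"
version = ver.split(b"=", 1)[-1].decode("ascii")
print("dispatches to _loads_v" + version)   # dispatches to _loads_v2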

third_party/python/pipenv/pipenv/patched/notpip/_vendor/cachecontrol/wrapper.py (vendored, new file)
@@ -0,0 +1,21 @@
from .adapter import CacheControlAdapter
from .cache import DictCache


def CacheControl(sess,
                 cache=None,
                 cache_etags=True,
                 serializer=None,
                 heuristic=None):

    cache = cache or DictCache()
    adapter = CacheControlAdapter(
        cache,
        cache_etags=cache_etags,
        serializer=serializer,
        heuristic=heuristic,
    )
    sess.mount('http://', adapter)
    sess.mount('https://', adapter)

    return sess

third_party/python/pipenv/pipenv/patched/notpip/_vendor/colorama/__init__.py (vendored, new file)
@@ -0,0 +1,7 @@
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
from .initialise import init, deinit, reinit, colorama_text
from .ansi import Fore, Back, Style, Cursor
from .ansitowin32 import AnsiToWin32

__version__ = '0.3.7'

third_party/python/pipenv/pipenv/patched/notpip/_vendor/colorama/ansi.py (vendored, new file)
@@ -0,0 +1,102 @@
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
'''
This module generates ANSI character codes for printing colors to terminals.
See: http://en.wikipedia.org/wiki/ANSI_escape_code
'''

CSI = '\033['
OSC = '\033]'
BEL = '\007'


def code_to_chars(code):
    return CSI + str(code) + 'm'

def set_title(title):
    return OSC + '2;' + title + BEL

def clear_screen(mode=2):
    return CSI + str(mode) + 'J'

def clear_line(mode=2):
    return CSI + str(mode) + 'K'


class AnsiCodes(object):
    def __init__(self):
        # the subclasses declare class attributes which are numbers.
        # Upon instantiation we define instance attributes, which are the same
        # as the class attributes but wrapped with the ANSI escape sequence
        for name in dir(self):
            if not name.startswith('_'):
                value = getattr(self, name)
                setattr(self, name, code_to_chars(value))


class AnsiCursor(object):
    def UP(self, n=1):
        return CSI + str(n) + 'A'
    def DOWN(self, n=1):
        return CSI + str(n) + 'B'
    def FORWARD(self, n=1):
        return CSI + str(n) + 'C'
    def BACK(self, n=1):
        return CSI + str(n) + 'D'
    def POS(self, x=1, y=1):
        return CSI + str(y) + ';' + str(x) + 'H'


class AnsiFore(AnsiCodes):
    BLACK           = 30
    RED             = 31
    GREEN           = 32
    YELLOW          = 33
    BLUE            = 34
    MAGENTA         = 35
    CYAN            = 36
    WHITE           = 37
    RESET           = 39

    # These are fairly well supported, but not part of the standard.
    LIGHTBLACK_EX   = 90
    LIGHTRED_EX     = 91
    LIGHTGREEN_EX   = 92
    LIGHTYELLOW_EX  = 93
    LIGHTBLUE_EX    = 94
    LIGHTMAGENTA_EX = 95
    LIGHTCYAN_EX    = 96
    LIGHTWHITE_EX   = 97


class AnsiBack(AnsiCodes):
    BLACK           = 40
    RED             = 41
    GREEN           = 42
    YELLOW          = 43
    BLUE            = 44
    MAGENTA         = 45
    CYAN            = 46
    WHITE           = 47
    RESET           = 49

    # These are fairly well supported, but not part of the standard.
    LIGHTBLACK_EX   = 100
    LIGHTRED_EX     = 101
    LIGHTGREEN_EX   = 102
    LIGHTYELLOW_EX  = 103
    LIGHTBLUE_EX    = 104
    LIGHTMAGENTA_EX = 105
    LIGHTCYAN_EX    = 106
    LIGHTWHITE_EX   = 107


class AnsiStyle(AnsiCodes):
    BRIGHT    = 1
    DIM       = 2
    NORMAL    = 22
    RESET_ALL = 0

Fore   = AnsiFore()
Back   = AnsiBack()
Style  = AnsiStyle()
Cursor = AnsiCursor()
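For orientation, the canonical way these objects are consumed. init() comes from the initialise module imported by colorama's __init__ above; on Windows it swaps stdout for the AnsiToWin32 proxy defined in the next file:

from colorama import Fore, Back, Style, init

init()   # on Windows, wraps stdout so the codes below are translated
print(Fore.RED + "error:" + Style.RESET_ALL + " something failed")
print(Back.GREEN + Fore.BLACK + " OK " + Style.RESET_ALL)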

third_party/python/pipenv/pipenv/patched/notpip/_vendor/colorama/ansitowin32.py (vendored, new file)
@@ -0,0 +1,236 @@
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
import re
import sys
import os

from .ansi import AnsiFore, AnsiBack, AnsiStyle, Style
from .winterm import WinTerm, WinColor, WinStyle
from .win32 import windll, winapi_test


winterm = None
if windll is not None:
    winterm = WinTerm()


def is_stream_closed(stream):
    return not hasattr(stream, 'closed') or stream.closed


def is_a_tty(stream):
    return hasattr(stream, 'isatty') and stream.isatty()


class StreamWrapper(object):
    '''
    Wraps a stream (such as stdout), acting as a transparent proxy for all
    attribute access apart from method 'write()', which is delegated to our
    Converter instance.
    '''
    def __init__(self, wrapped, converter):
        # double-underscore everything to prevent clashes with names of
        # attributes on the wrapped stream object.
        self.__wrapped = wrapped
        self.__convertor = converter

    def __getattr__(self, name):
        return getattr(self.__wrapped, name)

    def write(self, text):
        self.__convertor.write(text)


class AnsiToWin32(object):
    '''
    Implements a 'write()' method which, on Windows, will strip ANSI character
    sequences from the text, and if outputting to a tty, will convert them into
    win32 function calls.
    '''
    ANSI_CSI_RE = re.compile('\001?\033\[((?:\d|;)*)([a-zA-Z])\002?')   # Control Sequence Introducer
    ANSI_OSC_RE = re.compile('\001?\033\]((?:.|;)*?)(\x07)\002?')       # Operating System Command

    def __init__(self, wrapped, convert=None, strip=None, autoreset=False):
        # The wrapped stream (normally sys.stdout or sys.stderr)
        self.wrapped = wrapped

        # should we reset colors to defaults after every .write()
        self.autoreset = autoreset

        # create the proxy wrapping our output stream
        self.stream = StreamWrapper(wrapped, self)

        on_windows = os.name == 'nt'
        # We test if the WinAPI works, because even if we are on Windows
        # we may be using a terminal that doesn't support the WinAPI
        # (e.g. Cygwin Terminal). In this case it's up to the terminal
        # to support the ANSI codes.
        conversion_supported = on_windows and winapi_test()

        # should we strip ANSI sequences from our output?
        if strip is None:
            strip = conversion_supported or (not is_stream_closed(wrapped) and not is_a_tty(wrapped))
        self.strip = strip

        # should we convert ANSI sequences into win32 calls?
        if convert is None:
            convert = conversion_supported and not is_stream_closed(wrapped) and is_a_tty(wrapped)
        self.convert = convert

        # dict of ansi codes to win32 functions and parameters
        self.win32_calls = self.get_win32_calls()

        # are we wrapping stderr?
        self.on_stderr = self.wrapped is sys.stderr

    def should_wrap(self):
        '''
        True if this class is actually needed. If false, then the output
        stream will not be affected, nor will win32 calls be issued, so
        wrapping stdout is not actually required. This will generally be
        False on non-Windows platforms, unless optional functionality like
        autoreset has been requested using kwargs to init()
        '''
        return self.convert or self.strip or self.autoreset

    def get_win32_calls(self):
        if self.convert and winterm:
            return {
                AnsiStyle.RESET_ALL: (winterm.reset_all, ),
                AnsiStyle.BRIGHT: (winterm.style, WinStyle.BRIGHT),
                AnsiStyle.DIM: (winterm.style, WinStyle.NORMAL),
                AnsiStyle.NORMAL: (winterm.style, WinStyle.NORMAL),
                AnsiFore.BLACK: (winterm.fore, WinColor.BLACK),
                AnsiFore.RED: (winterm.fore, WinColor.RED),
                AnsiFore.GREEN: (winterm.fore, WinColor.GREEN),
                AnsiFore.YELLOW: (winterm.fore, WinColor.YELLOW),
                AnsiFore.BLUE: (winterm.fore, WinColor.BLUE),
                AnsiFore.MAGENTA: (winterm.fore, WinColor.MAGENTA),
                AnsiFore.CYAN: (winterm.fore, WinColor.CYAN),
                AnsiFore.WHITE: (winterm.fore, WinColor.GREY),
                AnsiFore.RESET: (winterm.fore, ),
                AnsiFore.LIGHTBLACK_EX: (winterm.fore, WinColor.BLACK, True),
                AnsiFore.LIGHTRED_EX: (winterm.fore, WinColor.RED, True),
                AnsiFore.LIGHTGREEN_EX: (winterm.fore, WinColor.GREEN, True),
                AnsiFore.LIGHTYELLOW_EX: (winterm.fore, WinColor.YELLOW, True),
                AnsiFore.LIGHTBLUE_EX: (winterm.fore, WinColor.BLUE, True),
                AnsiFore.LIGHTMAGENTA_EX: (winterm.fore, WinColor.MAGENTA, True),
                AnsiFore.LIGHTCYAN_EX: (winterm.fore, WinColor.CYAN, True),
                AnsiFore.LIGHTWHITE_EX: (winterm.fore, WinColor.GREY, True),
                AnsiBack.BLACK: (winterm.back, WinColor.BLACK),
                AnsiBack.RED: (winterm.back, WinColor.RED),
                AnsiBack.GREEN: (winterm.back, WinColor.GREEN),
                AnsiBack.YELLOW: (winterm.back, WinColor.YELLOW),
                AnsiBack.BLUE: (winterm.back, WinColor.BLUE),
                AnsiBack.MAGENTA: (winterm.back, WinColor.MAGENTA),
                AnsiBack.CYAN: (winterm.back, WinColor.CYAN),
                AnsiBack.WHITE: (winterm.back, WinColor.GREY),
                AnsiBack.RESET: (winterm.back, ),
                AnsiBack.LIGHTBLACK_EX: (winterm.back, WinColor.BLACK, True),
                AnsiBack.LIGHTRED_EX: (winterm.back, WinColor.RED, True),
                AnsiBack.LIGHTGREEN_EX: (winterm.back, WinColor.GREEN, True),
                AnsiBack.LIGHTYELLOW_EX: (winterm.back, WinColor.YELLOW, True),
                AnsiBack.LIGHTBLUE_EX: (winterm.back, WinColor.BLUE, True),
                AnsiBack.LIGHTMAGENTA_EX: (winterm.back, WinColor.MAGENTA, True),
                AnsiBack.LIGHTCYAN_EX: (winterm.back, WinColor.CYAN, True),
                AnsiBack.LIGHTWHITE_EX: (winterm.back, WinColor.GREY, True),
            }
        return dict()

    def write(self, text):
        if self.strip or self.convert:
            self.write_and_convert(text)
        else:
            self.wrapped.write(text)
            self.wrapped.flush()
        if self.autoreset:
            self.reset_all()


    def reset_all(self):
        if self.convert:
            self.call_win32('m', (0,))
        elif not self.strip and not is_stream_closed(self.wrapped):
            self.wrapped.write(Style.RESET_ALL)


    def write_and_convert(self, text):
        '''
        Write the given text to our wrapped stream, stripping any ANSI
        sequences from the text, and optionally converting them into win32
        calls.
        '''
        cursor = 0
        text = self.convert_osc(text)
        for match in self.ANSI_CSI_RE.finditer(text):
            start, end = match.span()
            self.write_plain_text(text, cursor, start)
            self.convert_ansi(*match.groups())
            cursor = end
        self.write_plain_text(text, cursor, len(text))


    def write_plain_text(self, text, start, end):
        if start < end:
            self.wrapped.write(text[start:end])
            self.wrapped.flush()


    def convert_ansi(self, paramstring, command):
        if self.convert:
            params = self.extract_params(command, paramstring)
            self.call_win32(command, params)


    def extract_params(self, command, paramstring):
        if command in 'Hf':
            params = tuple(int(p) if len(p) != 0 else 1 for p in paramstring.split(';'))
|
||||
while len(params) < 2:
|
||||
# defaults:
|
||||
params = params + (1,)
|
||||
else:
|
||||
params = tuple(int(p) for p in paramstring.split(';') if len(p) != 0)
|
||||
if len(params) == 0:
|
||||
# defaults:
|
||||
if command in 'JKm':
|
||||
params = (0,)
|
||||
elif command in 'ABCD':
|
||||
params = (1,)
|
||||
|
||||
return params
|
||||
|
||||
|
||||
def call_win32(self, command, params):
|
||||
if command == 'm':
|
||||
for param in params:
|
||||
if param in self.win32_calls:
|
||||
func_args = self.win32_calls[param]
|
||||
func = func_args[0]
|
||||
args = func_args[1:]
|
||||
kwargs = dict(on_stderr=self.on_stderr)
|
||||
func(*args, **kwargs)
|
||||
elif command in 'J':
|
||||
winterm.erase_screen(params[0], on_stderr=self.on_stderr)
|
||||
elif command in 'K':
|
||||
winterm.erase_line(params[0], on_stderr=self.on_stderr)
|
||||
elif command in 'Hf': # cursor position - absolute
|
||||
winterm.set_cursor_position(params, on_stderr=self.on_stderr)
|
||||
elif command in 'ABCD': # cursor position - relative
|
||||
n = params[0]
|
||||
# A - up, B - down, C - forward, D - back
|
||||
x, y = {'A': (0, -n), 'B': (0, n), 'C': (n, 0), 'D': (-n, 0)}[command]
|
||||
winterm.cursor_adjust(x, y, on_stderr=self.on_stderr)
|
||||
|
||||
|
||||
def convert_osc(self, text):
|
||||
for match in self.ANSI_OSC_RE.finditer(text):
|
||||
start, end = match.span()
|
||||
text = text[:start] + text[end:]
|
||||
paramstring, command = match.groups()
|
||||
if command in '\x07': # \x07 = BEL
|
||||
params = paramstring.split(";")
|
||||
# 0 - change title and icon (we will only change title)
|
||||
# 1 - change icon (we don't support this)
|
||||
# 2 - change title
|
||||
if params[0] in '02':
|
||||
winterm.set_title(params[1])
|
||||
return text
|
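Editor's sketch (not part of the vendored file): AnsiToWin32 either passes text through, strips ANSI escapes, or converts them to win32 calls, depending on platform and stream. Assuming the equivalent public colorama module is importable, forcing strip=True shows the stripping path in isolation:

import io
from colorama.ansitowin32 import AnsiToWin32   # assumed import path

buf = io.StringIO()
wrapper = AnsiToWin32(buf, strip=True)         # force stripping for the demo
wrapper.write('\033[31mred text\033[0m plain')
print(buf.getvalue())                          # escapes removed: 'red text plain'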
third_party/python/pipenv/pipenv/patched/notpip/_vendor/colorama/initialise.py (vendored, new file, 82 lines)
@@ -0,0 +1,82 @@
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
import atexit
import contextlib
import sys

from .ansitowin32 import AnsiToWin32


orig_stdout = None
orig_stderr = None

wrapped_stdout = None
wrapped_stderr = None

atexit_done = False


def reset_all():
    if AnsiToWin32 is not None:    # Issue #74: objects might become None at exit
        AnsiToWin32(orig_stdout).reset_all()


def init(autoreset=False, convert=None, strip=None, wrap=True):

    if not wrap and any([autoreset, convert, strip]):
        raise ValueError('wrap=False conflicts with any other arg=True')

    global wrapped_stdout, wrapped_stderr
    global orig_stdout, orig_stderr

    orig_stdout = sys.stdout
    orig_stderr = sys.stderr

    if sys.stdout is None:
        wrapped_stdout = None
    else:
        sys.stdout = wrapped_stdout = \
            wrap_stream(orig_stdout, convert, strip, autoreset, wrap)
    if sys.stderr is None:
        wrapped_stderr = None
    else:
        sys.stderr = wrapped_stderr = \
            wrap_stream(orig_stderr, convert, strip, autoreset, wrap)

    global atexit_done
    if not atexit_done:
        atexit.register(reset_all)
        atexit_done = True


def deinit():
    if orig_stdout is not None:
        sys.stdout = orig_stdout
    if orig_stderr is not None:
        sys.stderr = orig_stderr


@contextlib.contextmanager
def colorama_text(*args, **kwargs):
    init(*args, **kwargs)
    try:
        yield
    finally:
        deinit()


def reinit():
    if wrapped_stdout is not None:
        sys.stdout = wrapped_stdout
    if wrapped_stderr is not None:
        sys.stderr = wrapped_stderr


def wrap_stream(stream, convert, strip, autoreset, wrap):
    if wrap:
        wrapper = AnsiToWin32(stream,
            convert=convert, strip=strip, autoreset=autoreset)
        if wrapper.should_wrap():
            stream = wrapper.stream
    return stream
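Editor's sketch (not from the diff; assumes the public colorama API, which this vendored copy mirrors): init() swaps sys.stdout/sys.stderr for wrapped proxies, deinit() restores them, and colorama_text() scopes the swap to a block:

from colorama import colorama_text, Fore, Style

with colorama_text():
    # inside the block, ANSI codes are converted or stripped as appropriate;
    # on exit the original streams are restored by deinit()
    print(Fore.GREEN + 'ok' + Style.RESET_ALL)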
third_party/python/pipenv/pipenv/patched/notpip/_vendor/colorama/win32.py (vendored, new file, 154 lines)
@@ -0,0 +1,154 @@
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.

# from winbase.h
STDOUT = -11
STDERR = -12

try:
    import ctypes
    from ctypes import LibraryLoader
    windll = LibraryLoader(ctypes.WinDLL)
    from ctypes import wintypes
except (AttributeError, ImportError):
    windll = None
    SetConsoleTextAttribute = lambda *_: None
    winapi_test = lambda *_: None
else:
    from ctypes import byref, Structure, c_char, POINTER

    COORD = wintypes._COORD

    class CONSOLE_SCREEN_BUFFER_INFO(Structure):
        """struct in wincon.h."""
        _fields_ = [
            ("dwSize", COORD),
            ("dwCursorPosition", COORD),
            ("wAttributes", wintypes.WORD),
            ("srWindow", wintypes.SMALL_RECT),
            ("dwMaximumWindowSize", COORD),
        ]
        def __str__(self):
            return '(%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d)' % (
                self.dwSize.Y, self.dwSize.X
                , self.dwCursorPosition.Y, self.dwCursorPosition.X
                , self.wAttributes
                , self.srWindow.Top, self.srWindow.Left, self.srWindow.Bottom, self.srWindow.Right
                , self.dwMaximumWindowSize.Y, self.dwMaximumWindowSize.X
            )

    _GetStdHandle = windll.kernel32.GetStdHandle
    _GetStdHandle.argtypes = [
        wintypes.DWORD,
    ]
    _GetStdHandle.restype = wintypes.HANDLE

    _GetConsoleScreenBufferInfo = windll.kernel32.GetConsoleScreenBufferInfo
    _GetConsoleScreenBufferInfo.argtypes = [
        wintypes.HANDLE,
        POINTER(CONSOLE_SCREEN_BUFFER_INFO),
    ]
    _GetConsoleScreenBufferInfo.restype = wintypes.BOOL

    _SetConsoleTextAttribute = windll.kernel32.SetConsoleTextAttribute
    _SetConsoleTextAttribute.argtypes = [
        wintypes.HANDLE,
        wintypes.WORD,
    ]
    _SetConsoleTextAttribute.restype = wintypes.BOOL

    _SetConsoleCursorPosition = windll.kernel32.SetConsoleCursorPosition
    _SetConsoleCursorPosition.argtypes = [
        wintypes.HANDLE,
        COORD,
    ]
    _SetConsoleCursorPosition.restype = wintypes.BOOL

    _FillConsoleOutputCharacterA = windll.kernel32.FillConsoleOutputCharacterA
    _FillConsoleOutputCharacterA.argtypes = [
        wintypes.HANDLE,
        c_char,
        wintypes.DWORD,
        COORD,
        POINTER(wintypes.DWORD),
    ]
    _FillConsoleOutputCharacterA.restype = wintypes.BOOL

    _FillConsoleOutputAttribute = windll.kernel32.FillConsoleOutputAttribute
    _FillConsoleOutputAttribute.argtypes = [
        wintypes.HANDLE,
        wintypes.WORD,
        wintypes.DWORD,
        COORD,
        POINTER(wintypes.DWORD),
    ]
    _FillConsoleOutputAttribute.restype = wintypes.BOOL

    _SetConsoleTitleW = windll.kernel32.SetConsoleTitleA
    _SetConsoleTitleW.argtypes = [
        wintypes.LPCSTR
    ]
    _SetConsoleTitleW.restype = wintypes.BOOL

    handles = {
        STDOUT: _GetStdHandle(STDOUT),
        STDERR: _GetStdHandle(STDERR),
    }

    def winapi_test():
        handle = handles[STDOUT]
        csbi = CONSOLE_SCREEN_BUFFER_INFO()
        success = _GetConsoleScreenBufferInfo(
            handle, byref(csbi))
        return bool(success)

    def GetConsoleScreenBufferInfo(stream_id=STDOUT):
        handle = handles[stream_id]
        csbi = CONSOLE_SCREEN_BUFFER_INFO()
        success = _GetConsoleScreenBufferInfo(
            handle, byref(csbi))
        return csbi

    def SetConsoleTextAttribute(stream_id, attrs):
        handle = handles[stream_id]
        return _SetConsoleTextAttribute(handle, attrs)

    def SetConsoleCursorPosition(stream_id, position, adjust=True):
        position = COORD(*position)
        # If the position is out of range, do nothing.
        if position.Y <= 0 or position.X <= 0:
            return
        # Adjust for Windows' SetConsoleCursorPosition:
        #    1. being 0-based, while ANSI is 1-based.
        #    2. expecting (x,y), while ANSI uses (y,x).
        adjusted_position = COORD(position.Y - 1, position.X - 1)
        if adjust:
            # Adjust for viewport's scroll position
            sr = GetConsoleScreenBufferInfo(STDOUT).srWindow
            adjusted_position.Y += sr.Top
            adjusted_position.X += sr.Left
        # Resume normal processing
        handle = handles[stream_id]
        return _SetConsoleCursorPosition(handle, adjusted_position)

    def FillConsoleOutputCharacter(stream_id, char, length, start):
        handle = handles[stream_id]
        char = c_char(char.encode())
        length = wintypes.DWORD(length)
        num_written = wintypes.DWORD(0)
        # Note that this is hard-coded for ANSI (vs wide) bytes.
        success = _FillConsoleOutputCharacterA(
            handle, char, length, start, byref(num_written))
        return num_written.value

    def FillConsoleOutputAttribute(stream_id, attr, length, start):
        ''' FillConsoleOutputAttribute( hConsole, csbi.wAttributes, dwConSize, coordScreen, &cCharsWritten )'''
        handle = handles[stream_id]
        attribute = wintypes.WORD(attr)
        length = wintypes.DWORD(length)
        num_written = wintypes.DWORD(0)
        # Note that this is hard-coded for ANSI (vs wide) bytes.
        return _FillConsoleOutputAttribute(
            handle, attribute, length, start, byref(num_written))

    def SetConsoleTitle(title):
        return _SetConsoleTitleW(title)
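Editor's sketch (mine, import path assumed to match the public colorama package): because the module degrades to no-op stubs when ctypes.WinDLL is unavailable, callers can probe Windows console support without platform checks of their own:

from colorama import win32   # assumed import path

if win32.windll is not None and win32.winapi_test():
    info = win32.GetConsoleScreenBufferInfo(win32.STDOUT)
    print('console buffer size:', info.dwSize.X, 'x', info.dwSize.Y)
else:
    print('no usable Win32 console API; ANSI handling is left to the terminal')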
third_party/python/pipenv/pipenv/patched/notpip/_vendor/colorama/winterm.py (vendored, new file, 162 lines)
@@ -0,0 +1,162 @@
# Copyright Jonathan Hartley 2013. BSD 3-Clause license, see LICENSE file.
from . import win32


# from wincon.h
class WinColor(object):
    BLACK   = 0
    BLUE    = 1
    GREEN   = 2
    CYAN    = 3
    RED     = 4
    MAGENTA = 5
    YELLOW  = 6
    GREY    = 7

# from wincon.h
class WinStyle(object):
    NORMAL              = 0x00  # dim text, dim background
    BRIGHT              = 0x08  # bright text, dim background
    BRIGHT_BACKGROUND   = 0x80  # dim text, bright background

class WinTerm(object):

    def __init__(self):
        self._default = win32.GetConsoleScreenBufferInfo(win32.STDOUT).wAttributes
        self.set_attrs(self._default)
        self._default_fore = self._fore
        self._default_back = self._back
        self._default_style = self._style
        # In order to emulate LIGHT_EX in windows, we borrow the BRIGHT style.
        # So that LIGHT_EX colors and BRIGHT style do not clobber each other,
        # we track them separately, since LIGHT_EX is overwritten by Fore/Back
        # and BRIGHT is overwritten by Style codes.
        self._light = 0

    def get_attrs(self):
        return self._fore + self._back * 16 + (self._style | self._light)

    def set_attrs(self, value):
        self._fore = value & 7
        self._back = (value >> 4) & 7
        self._style = value & (WinStyle.BRIGHT | WinStyle.BRIGHT_BACKGROUND)

    def reset_all(self, on_stderr=None):
        self.set_attrs(self._default)
        self.set_console(attrs=self._default)

    def fore(self, fore=None, light=False, on_stderr=False):
        if fore is None:
            fore = self._default_fore
        self._fore = fore
        # Emulate LIGHT_EX with BRIGHT Style
        if light:
            self._light |= WinStyle.BRIGHT
        else:
            self._light &= ~WinStyle.BRIGHT
        self.set_console(on_stderr=on_stderr)

    def back(self, back=None, light=False, on_stderr=False):
        if back is None:
            back = self._default_back
        self._back = back
        # Emulate LIGHT_EX with BRIGHT_BACKGROUND Style
        if light:
            self._light |= WinStyle.BRIGHT_BACKGROUND
        else:
            self._light &= ~WinStyle.BRIGHT_BACKGROUND
        self.set_console(on_stderr=on_stderr)

    def style(self, style=None, on_stderr=False):
        if style is None:
            style = self._default_style
        self._style = style
        self.set_console(on_stderr=on_stderr)

    def set_console(self, attrs=None, on_stderr=False):
        if attrs is None:
            attrs = self.get_attrs()
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        win32.SetConsoleTextAttribute(handle, attrs)

    def get_position(self, handle):
        position = win32.GetConsoleScreenBufferInfo(handle).dwCursorPosition
        # Because Windows coordinates are 0-based,
        # and win32.SetConsoleCursorPosition expects 1-based.
        position.X += 1
        position.Y += 1
        return position

    def set_cursor_position(self, position=None, on_stderr=False):
        if position is None:
            # I'm not currently tracking the position, so there is no default.
            # position = self.get_position()
            return
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        win32.SetConsoleCursorPosition(handle, position)

    def cursor_adjust(self, x, y, on_stderr=False):
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        position = self.get_position(handle)
        adjusted_position = (position.Y + y, position.X + x)
        win32.SetConsoleCursorPosition(handle, adjusted_position, adjust=False)

    def erase_screen(self, mode=0, on_stderr=False):
        # 0 should clear from the cursor to the end of the screen.
        # 1 should clear from the cursor to the beginning of the screen.
        # 2 should clear the entire screen, and move cursor to (1,1)
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        csbi = win32.GetConsoleScreenBufferInfo(handle)
        # get the number of character cells in the current buffer
        cells_in_screen = csbi.dwSize.X * csbi.dwSize.Y
        # get number of character cells before current cursor position
        cells_before_cursor = csbi.dwSize.X * csbi.dwCursorPosition.Y + csbi.dwCursorPosition.X
        if mode == 0:
            from_coord = csbi.dwCursorPosition
            cells_to_erase = cells_in_screen - cells_before_cursor
        if mode == 1:
            from_coord = win32.COORD(0, 0)
            cells_to_erase = cells_before_cursor
        elif mode == 2:
            from_coord = win32.COORD(0, 0)
            cells_to_erase = cells_in_screen
        # fill the entire screen with blanks
        win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord)
        # now set the buffer's attributes accordingly
        win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord)
        if mode == 2:
            # put the cursor where needed
            win32.SetConsoleCursorPosition(handle, (1, 1))

    def erase_line(self, mode=0, on_stderr=False):
        # 0 should clear from the cursor to the end of the line.
        # 1 should clear from the cursor to the beginning of the line.
        # 2 should clear the entire line.
        handle = win32.STDOUT
        if on_stderr:
            handle = win32.STDERR
        csbi = win32.GetConsoleScreenBufferInfo(handle)
        if mode == 0:
            from_coord = csbi.dwCursorPosition
            cells_to_erase = csbi.dwSize.X - csbi.dwCursorPosition.X
        if mode == 1:
            from_coord = win32.COORD(0, csbi.dwCursorPosition.Y)
            cells_to_erase = csbi.dwCursorPosition.X
        elif mode == 2:
            from_coord = win32.COORD(0, csbi.dwCursorPosition.Y)
            cells_to_erase = csbi.dwSize.X
        # fill the entire screen with blanks
        win32.FillConsoleOutputCharacter(handle, ' ', cells_to_erase, from_coord)
        # now set the buffer's attributes accordingly
        win32.FillConsoleOutputAttribute(handle, self.get_attrs(), cells_to_erase, from_coord)

    def set_title(self, title):
        win32.SetConsoleTitle(title)
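Editor's worked example (mine): console attributes pack the foreground color into bits 0-2, the background into bits 4-6, and the brightness flags into bits 3 and 7, which is exactly what get_attrs()/set_attrs() compute:

fore, back = 4, 1              # WinColor.RED on WinColor.BLUE
style = 0x08                   # WinStyle.BRIGHT
attrs = fore + back * 16 + style
assert attrs == 0x1C           # 0b0001_1100
# set_attrs() inverts the packing:
assert attrs & 7 == fore
assert (attrs >> 4) & 7 == back
assert attrs & (0x08 | 0x80) == style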
third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/__init__.py (vendored, new file, 23 lines)
@@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2016 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
import logging

__version__ = '0.2.4'

class DistlibException(Exception):
    pass

try:
    from logging import NullHandler
except ImportError: # pragma: no cover
    class NullHandler(logging.Handler):
        def handle(self, record): pass
        def emit(self, record): pass
        def createLock(self): self.lock = None

logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
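Editor's sketch (not from the diff) of the library-logging convention used above: attaching a NullHandler means "silent unless the application opts in", and avoids the "no handlers could be found" warning on old Pythons:

import logging

lib_logger = logging.getLogger('distlib')
lib_logger.debug('silent by default')       # swallowed by the NullHandler

logging.basicConfig(level=logging.DEBUG)    # the application opts in
lib_logger.debug('now visible via the root handler')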
third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/_backport/__init__.py (vendored, new file, 6 lines)
@@ -0,0 +1,6 @@
"""Modules copied from Python 3 standard libraries, for internal use only.
|
||||
|
||||
Individual classes and functions are found in d2._backport.misc. Intended
|
||||
usage is to always import things missing from 3.1 from that module: the
|
||||
built-in/stdlib objects will be used if found.
|
||||
"""
|
third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/_backport/misc.py (vendored, new file, 41 lines)
@@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""Backports for individual classes and functions."""

import os
import sys

__all__ = ['cache_from_source', 'callable', 'fsencode']


try:
    from imp import cache_from_source
except ImportError:
    def cache_from_source(py_file, debug=__debug__):
        ext = debug and 'c' or 'o'
        return py_file + ext


try:
    callable = callable
except NameError:
    from collections import Callable

    def callable(obj):
        return isinstance(obj, Callable)


try:
    fsencode = os.fsencode
except AttributeError:
    def fsencode(filename):
        if isinstance(filename, bytes):
            return filename
        elif isinstance(filename, str):
            return filename.encode(sys.getfilesystemencoding())
        else:
            raise TypeError("expect bytes or str, not %s" %
                            type(filename).__name__)
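Editor's illustration (mine) of the fsencode fallback contract shown above: bytes pass through untouched, text is encoded with the filesystem encoding, anything else raises TypeError. The stdlib version behaves the same way on modern Pythons:

import os
import sys

assert os.fsencode(b'data.txt') == b'data.txt'
assert os.fsencode('data.txt') == 'data.txt'.encode(sys.getfilesystemencoding())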
third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/_backport/shutil.py (vendored, new file, 761 lines)
@@ -0,0 +1,761 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""Utility functions for copying and archiving files and directory trees.

XXX The functions here don't copy the resource fork or other metadata on Mac.

"""

import os
import sys
import stat
from os.path import abspath
import fnmatch
import collections
import errno
from . import tarfile

try:
    import bz2
    _BZ2_SUPPORTED = True
except ImportError:
    _BZ2_SUPPORTED = False

try:
    from pwd import getpwnam
except ImportError:
    getpwnam = None

try:
    from grp import getgrnam
except ImportError:
    getgrnam = None

__all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2",
           "copytree", "move", "rmtree", "Error", "SpecialFileError",
           "ExecError", "make_archive", "get_archive_formats",
           "register_archive_format", "unregister_archive_format",
           "get_unpack_formats", "register_unpack_format",
           "unregister_unpack_format", "unpack_archive", "ignore_patterns"]

class Error(EnvironmentError):
    pass

class SpecialFileError(EnvironmentError):
    """Raised when trying to do a kind of operation (e.g. copying) which is
    not supported on a special file (e.g. a named pipe)"""

class ExecError(EnvironmentError):
    """Raised when a command could not be executed"""

class ReadError(EnvironmentError):
    """Raised when an archive cannot be read"""

class RegistryError(Exception):
    """Raised when a registry operation with the archiving
    and unpacking registries fails"""


try:
    WindowsError
except NameError:
    WindowsError = None

def copyfileobj(fsrc, fdst, length=16*1024):
    """copy data from file-like object fsrc to file-like object fdst"""
    while 1:
        buf = fsrc.read(length)
        if not buf:
            break
        fdst.write(buf)

def _samefile(src, dst):
    # Macintosh, Unix.
    if hasattr(os.path, 'samefile'):
        try:
            return os.path.samefile(src, dst)
        except OSError:
            return False

    # All other platforms: check for same pathname.
    return (os.path.normcase(os.path.abspath(src)) ==
            os.path.normcase(os.path.abspath(dst)))

def copyfile(src, dst):
    """Copy data from src to dst"""
    if _samefile(src, dst):
        raise Error("`%s` and `%s` are the same file" % (src, dst))

    for fn in [src, dst]:
        try:
            st = os.stat(fn)
        except OSError:
            # File most likely does not exist
            pass
        else:
            # XXX What about other special files? (sockets, devices...)
            if stat.S_ISFIFO(st.st_mode):
                raise SpecialFileError("`%s` is a named pipe" % fn)

    with open(src, 'rb') as fsrc:
        with open(dst, 'wb') as fdst:
            copyfileobj(fsrc, fdst)

def copymode(src, dst):
    """Copy mode bits from src to dst"""
    if hasattr(os, 'chmod'):
        st = os.stat(src)
        mode = stat.S_IMODE(st.st_mode)
        os.chmod(dst, mode)

def copystat(src, dst):
    """Copy all stat info (mode bits, atime, mtime, flags) from src to dst"""
    st = os.stat(src)
    mode = stat.S_IMODE(st.st_mode)
    if hasattr(os, 'utime'):
        os.utime(dst, (st.st_atime, st.st_mtime))
    if hasattr(os, 'chmod'):
        os.chmod(dst, mode)
    if hasattr(os, 'chflags') and hasattr(st, 'st_flags'):
        try:
            os.chflags(dst, st.st_flags)
        except OSError as why:
            if (not hasattr(errno, 'EOPNOTSUPP') or
                why.errno != errno.EOPNOTSUPP):
                raise

def copy(src, dst):
    """Copy data and mode bits ("cp src dst").

    The destination may be a directory.

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst)
    copymode(src, dst)

def copy2(src, dst):
    """Copy data and all stat info ("cp -p src dst").

    The destination may be a directory.

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst)
    copystat(src, dst)

def ignore_patterns(*patterns):
    """Function that can be used as copytree() ignore parameter.

    Patterns is a sequence of glob-style patterns
    that are used to exclude files"""
    def _ignore_patterns(path, names):
        ignored_names = []
        for pattern in patterns:
            ignored_names.extend(fnmatch.filter(names, pattern))
        return set(ignored_names)
    return _ignore_patterns

def copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2,
             ignore_dangling_symlinks=False):
    """Recursively copy a directory tree.

    The destination directory must not already exist.
    If exception(s) occur, an Error is raised with a list of reasons.

    If the optional symlinks flag is true, symbolic links in the
    source tree result in symbolic links in the destination tree; if
    it is false, the contents of the files pointed to by symbolic
    links are copied. If the file pointed by the symlink doesn't
    exist, an exception will be added in the list of errors raised in
    an Error exception at the end of the copy process.

    You can set the optional ignore_dangling_symlinks flag to true if you
    want to silence this exception. Notice that this has no effect on
    platforms that don't support os.symlink.

    The optional ignore argument is a callable. If given, it
    is called with the `src` parameter, which is the directory
    being visited by copytree(), and `names` which is the list of
    `src` contents, as returned by os.listdir():

        callable(src, names) -> ignored_names

    Since copytree() is called recursively, the callable will be
    called once for each directory that is copied. It returns a
    list of names relative to the `src` directory that should
    not be copied.

    The optional copy_function argument is a callable that will be used
    to copy each file. It will be called with the source path and the
    destination path as arguments. By default, copy2() is used, but any
    function that supports the same signature (like copy()) can be used.

    """
    names = os.listdir(src)
    if ignore is not None:
        ignored_names = ignore(src, names)
    else:
        ignored_names = set()

    os.makedirs(dst)
    errors = []
    for name in names:
        if name in ignored_names:
            continue
        srcname = os.path.join(src, name)
        dstname = os.path.join(dst, name)
        try:
            if os.path.islink(srcname):
                linkto = os.readlink(srcname)
                if symlinks:
                    os.symlink(linkto, dstname)
                else:
                    # ignore dangling symlink if the flag is on
                    if not os.path.exists(linkto) and ignore_dangling_symlinks:
                        continue
                    # otherwise let the copy occur. copy2 will raise an error
                    copy_function(srcname, dstname)
            elif os.path.isdir(srcname):
                copytree(srcname, dstname, symlinks, ignore, copy_function)
            else:
                # Will raise a SpecialFileError for unsupported file types
                copy_function(srcname, dstname)
        # catch the Error from the recursive copytree so that we can
        # continue with other files
        except Error as err:
            errors.extend(err.args[0])
        except EnvironmentError as why:
            errors.append((srcname, dstname, str(why)))
    try:
        copystat(src, dst)
    except OSError as why:
        if WindowsError is not None and isinstance(why, WindowsError):
            # Copying file access times may fail on Windows
            pass
        else:
            errors.extend((src, dst, str(why)))
    if errors:
        raise Error(errors)
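Editor's usage sketch (mine; the paths are hypothetical): copying a tree while excluding compiled artifacts, using the ignore_patterns() factory defined above:

ignore = ignore_patterns('*.pyc', '__pycache__')
copytree('project/src', 'project/src_backup', ignore=ignore)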
def rmtree(path, ignore_errors=False, onerror=None):
    """Recursively delete a directory tree.

    If ignore_errors is set, errors are ignored; otherwise, if onerror
    is set, it is called to handle the error with arguments (func,
    path, exc_info) where func is os.listdir, os.remove, or os.rmdir;
    path is the argument to that function that caused it to fail; and
    exc_info is a tuple returned by sys.exc_info().  If ignore_errors
    is false and onerror is None, an exception is raised.

    """
    if ignore_errors:
        def onerror(*args):
            pass
    elif onerror is None:
        def onerror(*args):
            raise
    try:
        if os.path.islink(path):
            # symlinks to directories are forbidden, see bug #1669
            raise OSError("Cannot call rmtree on a symbolic link")
    except OSError:
        onerror(os.path.islink, path, sys.exc_info())
        # can't continue even if onerror hook returns
        return
    names = []
    try:
        names = os.listdir(path)
    except os.error:
        onerror(os.listdir, path, sys.exc_info())
    for name in names:
        fullname = os.path.join(path, name)
        try:
            mode = os.lstat(fullname).st_mode
        except os.error:
            mode = 0
        if stat.S_ISDIR(mode):
            rmtree(fullname, ignore_errors, onerror)
        else:
            try:
                os.remove(fullname)
            except os.error:
                onerror(os.remove, fullname, sys.exc_info())
    try:
        os.rmdir(path)
    except os.error:
        onerror(os.rmdir, path, sys.exc_info())


def _basename(path):
    # A basename() variant which first strips the trailing slash, if present.
    # Thus we always get the last component of the path, even for directories.
    return os.path.basename(path.rstrip(os.path.sep))

def move(src, dst):
    """Recursively move a file or directory to another location. This is
    similar to the Unix "mv" command.

    If the destination is a directory or a symlink to a directory, the source
    is moved inside the directory. The destination path must not already
    exist.

    If the destination already exists but is not a directory, it may be
    overwritten depending on os.rename() semantics.

    If the destination is on our current filesystem, then rename() is used.
    Otherwise, src is copied to the destination and then removed.
    A lot more could be done here...  A look at a mv.c shows a lot of
    the issues this implementation glosses over.

    """
    real_dst = dst
    if os.path.isdir(dst):
        if _samefile(src, dst):
            # We might be on a case insensitive filesystem,
            # perform the rename anyway.
            os.rename(src, dst)
            return

        real_dst = os.path.join(dst, _basename(src))
        if os.path.exists(real_dst):
            raise Error("Destination path '%s' already exists" % real_dst)
    try:
        os.rename(src, real_dst)
    except OSError:
        if os.path.isdir(src):
            if _destinsrc(src, dst):
                raise Error("Cannot move a directory '%s' into itself '%s'." % (src, dst))
            copytree(src, real_dst, symlinks=True)
            rmtree(src)
        else:
            copy2(src, real_dst)
            os.unlink(src)

def _destinsrc(src, dst):
    src = abspath(src)
    dst = abspath(dst)
    if not src.endswith(os.path.sep):
        src += os.path.sep
    if not dst.endswith(os.path.sep):
        dst += os.path.sep
    return dst.startswith(src)

def _get_gid(name):
    """Returns a gid, given a group name."""
    if getgrnam is None or name is None:
        return None
    try:
        result = getgrnam(name)
    except KeyError:
        result = None
    if result is not None:
        return result[2]
    return None

def _get_uid(name):
    """Returns an uid, given a user name."""
    if getpwnam is None or name is None:
        return None
    try:
        result = getpwnam(name)
    except KeyError:
        result = None
    if result is not None:
        return result[2]
    return None

def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0,
                  owner=None, group=None, logger=None):
    """Create a (possibly compressed) tar file from all the files under
    'base_dir'.

    'compress' must be "gzip" (the default), "bzip2", or None.

    'owner' and 'group' can be used to define an owner and a group for the
    archive that is being built. If not provided, the current owner and group
    will be used.

    The output tar file will be named 'base_name' + ".tar", possibly plus
    the appropriate compression extension (".gz", or ".bz2").

    Returns the output filename.
    """
    tar_compression = {'gzip': 'gz', None: ''}
    compress_ext = {'gzip': '.gz'}

    if _BZ2_SUPPORTED:
        tar_compression['bzip2'] = 'bz2'
        compress_ext['bzip2'] = '.bz2'

    # flags for compression program, each element of list will be an argument
    if compress is not None and compress not in compress_ext:
        raise ValueError("bad value for 'compress', or compression format not "
                         "supported : {0}".format(compress))

    archive_name = base_name + '.tar' + compress_ext.get(compress, '')
    archive_dir = os.path.dirname(archive_name)

    if not os.path.exists(archive_dir):
        if logger is not None:
            logger.info("creating %s", archive_dir)
        if not dry_run:
            os.makedirs(archive_dir)

    # creating the tarball
    if logger is not None:
        logger.info('Creating tar archive')

    uid = _get_uid(owner)
    gid = _get_gid(group)

    def _set_uid_gid(tarinfo):
        if gid is not None:
            tarinfo.gid = gid
            tarinfo.gname = group
        if uid is not None:
            tarinfo.uid = uid
            tarinfo.uname = owner
        return tarinfo

    if not dry_run:
        tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress])
        try:
            tar.add(base_dir, filter=_set_uid_gid)
        finally:
            tar.close()

    return archive_name

def _call_external_zip(base_dir, zip_filename, verbose=False, dry_run=False):
    # XXX see if we want to keep an external call here
    if verbose:
        zipoptions = "-r"
    else:
        zipoptions = "-rq"
    from distutils.errors import DistutilsExecError
    from distutils.spawn import spawn
    try:
        spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run)
    except DistutilsExecError:
        # XXX really should distinguish between "couldn't find
        # external 'zip' command" and "zip failed".
        raise ExecError("unable to create zip file '%s': "
                        "could neither import the 'zipfile' module nor "
                        "find a standalone zip utility" % zip_filename)

def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None):
    """Create a zip file from all the files under 'base_dir'.

    The output zip file will be named 'base_name' + ".zip".  Uses either the
    "zipfile" Python module (if available) or the InfoZIP "zip" utility
    (if installed and found on the default search path).  If neither tool is
    available, raises ExecError.  Returns the name of the output zip
    file.
    """
    zip_filename = base_name + ".zip"
    archive_dir = os.path.dirname(base_name)

    if not os.path.exists(archive_dir):
        if logger is not None:
            logger.info("creating %s", archive_dir)
        if not dry_run:
            os.makedirs(archive_dir)

    # If zipfile module is not available, try spawning an external 'zip'
    # command.
    try:
        import zipfile
    except ImportError:
        zipfile = None

    if zipfile is None:
        _call_external_zip(base_dir, zip_filename, verbose, dry_run)
    else:
        if logger is not None:
            logger.info("creating '%s' and adding '%s' to it",
                        zip_filename, base_dir)

        if not dry_run:
            zip = zipfile.ZipFile(zip_filename, "w",
                                  compression=zipfile.ZIP_DEFLATED)

            for dirpath, dirnames, filenames in os.walk(base_dir):
                for name in filenames:
                    path = os.path.normpath(os.path.join(dirpath, name))
                    if os.path.isfile(path):
                        zip.write(path, path)
                        if logger is not None:
                            logger.info("adding '%s'", path)
            zip.close()

    return zip_filename

_ARCHIVE_FORMATS = {
    'gztar': (_make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"),
    'bztar': (_make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"),
    'tar':   (_make_tarball, [('compress', None)], "uncompressed tar file"),
    'zip':   (_make_zipfile, [], "ZIP file"),
    }

if _BZ2_SUPPORTED:
    _ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')],
                                 "bzip2'ed tar-file")

def get_archive_formats():
    """Returns a list of supported formats for archiving and unarchiving.

    Each element of the returned sequence is a tuple (name, description)
    """
    formats = [(name, registry[2]) for name, registry in
               _ARCHIVE_FORMATS.items()]
    formats.sort()
    return formats

def register_archive_format(name, function, extra_args=None, description=''):
    """Registers an archive format.

    name is the name of the format. function is the callable that will be
    used to create archives. If provided, extra_args is a sequence of
    (name, value) tuples that will be passed as arguments to the callable.
    description can be provided to describe the format, and will be returned
    by the get_archive_formats() function.
    """
    if extra_args is None:
        extra_args = []
    if not isinstance(function, collections.Callable):
        raise TypeError('The %s object is not callable' % function)
    if not isinstance(extra_args, (tuple, list)):
        raise TypeError('extra_args needs to be a sequence')
    for element in extra_args:
        if not isinstance(element, (tuple, list)) or len(element) != 2:
            raise TypeError('extra_args elements are : (arg_name, value)')

    _ARCHIVE_FORMATS[name] = (function, extra_args, description)

def unregister_archive_format(name):
    del _ARCHIVE_FORMATS[name]

def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0,
                 dry_run=0, owner=None, group=None, logger=None):
    """Create an archive file (eg. zip or tar).

    'base_name' is the name of the file to create, minus any format-specific
    extension; 'format' is the archive format: one of "zip", "tar", "bztar"
    or "gztar".

    'root_dir' is a directory that will be the root directory of the
    archive; ie. we typically chdir into 'root_dir' before creating the
    archive.  'base_dir' is the directory where we start archiving from;
    ie. 'base_dir' will be the common prefix of all files and
    directories in the archive.  'root_dir' and 'base_dir' both default
    to the current directory.  Returns the name of the archive file.

    'owner' and 'group' are used when creating a tar archive. By default,
    uses the current owner and group.
    """
    save_cwd = os.getcwd()
    if root_dir is not None:
        if logger is not None:
            logger.debug("changing into '%s'", root_dir)
        base_name = os.path.abspath(base_name)
        if not dry_run:
            os.chdir(root_dir)

    if base_dir is None:
        base_dir = os.curdir

    kwargs = {'dry_run': dry_run, 'logger': logger}

    try:
        format_info = _ARCHIVE_FORMATS[format]
    except KeyError:
        raise ValueError("unknown archive format '%s'" % format)

    func = format_info[0]
    for arg, val in format_info[1]:
        kwargs[arg] = val

    if format != 'zip':
        kwargs['owner'] = owner
        kwargs['group'] = group

    try:
        filename = func(base_name, base_dir, **kwargs)
    finally:
        if root_dir is not None:
            if logger is not None:
                logger.debug("changing back to '%s'", save_cwd)
            os.chdir(save_cwd)

    return filename


def get_unpack_formats():
    """Returns a list of supported formats for unpacking.

    Each element of the returned sequence is a tuple
    (name, extensions, description)
    """
    formats = [(name, info[0], info[3]) for name, info in
               _UNPACK_FORMATS.items()]
    formats.sort()
    return formats

def _check_unpack_options(extensions, function, extra_args):
    """Checks what gets registered as an unpacker."""
    # first make sure no other unpacker is registered for this extension
    existing_extensions = {}
    for name, info in _UNPACK_FORMATS.items():
        for ext in info[0]:
            existing_extensions[ext] = name

    for extension in extensions:
        if extension in existing_extensions:
            msg = '%s is already registered for "%s"'
            raise RegistryError(msg % (extension,
                                       existing_extensions[extension]))

    if not isinstance(function, collections.Callable):
        raise TypeError('The registered function must be a callable')


def register_unpack_format(name, extensions, function, extra_args=None,
                           description=''):
    """Registers an unpack format.

    `name` is the name of the format. `extensions` is a list of extensions
    corresponding to the format.

    `function` is the callable that will be
    used to unpack archives. The callable will receive archives to unpack.
    If it's unable to handle an archive, it needs to raise a ReadError
    exception.

    If provided, `extra_args` is a sequence of
    (name, value) tuples that will be passed as arguments to the callable.
    description can be provided to describe the format, and will be returned
    by the get_unpack_formats() function.
    """
    if extra_args is None:
        extra_args = []
    _check_unpack_options(extensions, function, extra_args)
    _UNPACK_FORMATS[name] = extensions, function, extra_args, description

def unregister_unpack_format(name):
    """Removes the unpack format from the registry."""
    del _UNPACK_FORMATS[name]

def _ensure_directory(path):
    """Ensure that the parent directory of `path` exists"""
    dirname = os.path.dirname(path)
    if not os.path.isdir(dirname):
        os.makedirs(dirname)

def _unpack_zipfile(filename, extract_dir):
    """Unpack zip `filename` to `extract_dir`
    """
    try:
        import zipfile
    except ImportError:
        raise ReadError('zlib not supported, cannot unpack this archive.')

    if not zipfile.is_zipfile(filename):
        raise ReadError("%s is not a zip file" % filename)

    zip = zipfile.ZipFile(filename)
    try:
        for info in zip.infolist():
            name = info.filename

            # don't extract absolute paths or ones with .. in them
            if name.startswith('/') or '..' in name:
                continue

            target = os.path.join(extract_dir, *name.split('/'))
            if not target:
                continue

            _ensure_directory(target)
            if not name.endswith('/'):
                # file
                data = zip.read(info.filename)
                f = open(target, 'wb')
                try:
                    f.write(data)
                finally:
                    f.close()
                    del data
    finally:
        zip.close()

def _unpack_tarfile(filename, extract_dir):
    """Unpack tar/tar.gz/tar.bz2 `filename` to `extract_dir`
    """
    try:
        tarobj = tarfile.open(filename)
    except tarfile.TarError:
        raise ReadError(
            "%s is not a compressed or uncompressed tar file" % filename)
    try:
        tarobj.extractall(extract_dir)
    finally:
        tarobj.close()

_UNPACK_FORMATS = {
    'gztar': (['.tar.gz', '.tgz'], _unpack_tarfile, [], "gzip'ed tar-file"),
    'tar':   (['.tar'], _unpack_tarfile, [], "uncompressed tar file"),
    'zip':   (['.zip'], _unpack_zipfile, [], "ZIP file")
    }

if _BZ2_SUPPORTED:
    _UNPACK_FORMATS['bztar'] = (['.bz2'], _unpack_tarfile, [],
                                "bzip2'ed tar-file")

def _find_unpack_format(filename):
    for name, info in _UNPACK_FORMATS.items():
        for extension in info[0]:
            if filename.endswith(extension):
                return name
    return None

def unpack_archive(filename, extract_dir=None, format=None):
    """Unpack an archive.

    `filename` is the name of the archive.

    `extract_dir` is the name of the target directory, where the archive
    is unpacked. If not provided, the current working directory is used.

    `format` is the archive format: one of "zip", "tar", or "gztar". Or any
    other registered format. If not provided, unpack_archive will use the
    filename extension and see if an unpacker was registered for that
    extension.

    In case none is found, a ValueError is raised.
    """
    if extract_dir is None:
        extract_dir = os.getcwd()

    if format is not None:
        try:
            format_info = _UNPACK_FORMATS[format]
        except KeyError:
            raise ValueError("Unknown unpack format '{0}'".format(format))

        func = format_info[1]
        func(filename, extract_dir, **dict(format_info[2]))
    else:
        # we need to look at the registered unpackers supported extensions
        format = _find_unpack_format(filename)
        if format is None:
            raise ReadError("Unknown archive format '{0}'".format(filename))

        func = _UNPACK_FORMATS[format][1]
        kwargs = dict(_UNPACK_FORMATS[format][2])
        func(filename, extract_dir, **kwargs)
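Editor's round-trip sketch (mine; the paths are hypothetical) using the archive helpers above: pack a directory into a gzipped tarball, then unpack it, letting _find_unpack_format() pick the unpacker from the '.tar.gz' extension:

archive = make_archive('/tmp/backup', 'gztar', root_dir='project')
unpack_archive(archive, extract_dir='/tmp/restored')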
third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/_backport/sysconfig.py (vendored, new file, 788 lines; diff truncated below)
@@ -0,0 +1,788 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""Access to Python's configuration information."""

import codecs
import os
import re
import sys
from os.path import pardir, realpath
try:
    import configparser
except ImportError:
    import ConfigParser as configparser


__all__ = [
    'get_config_h_filename',
    'get_config_var',
    'get_config_vars',
    'get_makefile_filename',
    'get_path',
    'get_path_names',
    'get_paths',
    'get_platform',
    'get_python_version',
    'get_scheme_names',
    'parse_config_h',
]


def _safe_realpath(path):
    try:
        return realpath(path)
    except OSError:
        return path


if sys.executable:
    _PROJECT_BASE = os.path.dirname(_safe_realpath(sys.executable))
else:
    # sys.executable can be empty if argv[0] has been changed and Python is
    # unable to retrieve the real program name
    _PROJECT_BASE = _safe_realpath(os.getcwd())

if os.name == "nt" and "pcbuild" in _PROJECT_BASE[-8:].lower():
    _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir))
# PC/VS7.1
if os.name == "nt" and "\\pc\\v" in _PROJECT_BASE[-10:].lower():
    _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir))
# PC/AMD64
if os.name == "nt" and "\\pcbuild\\amd64" in _PROJECT_BASE[-14:].lower():
    _PROJECT_BASE = _safe_realpath(os.path.join(_PROJECT_BASE, pardir, pardir))


def is_python_build():
    for fn in ("Setup.dist", "Setup.local"):
        if os.path.isfile(os.path.join(_PROJECT_BASE, "Modules", fn)):
            return True
    return False

_PYTHON_BUILD = is_python_build()

_cfg_read = False

def _ensure_cfg_read():
    global _cfg_read
    if not _cfg_read:
        from ..resources import finder
        backport_package = __name__.rsplit('.', 1)[0]
        _finder = finder(backport_package)
        _cfgfile = _finder.find('sysconfig.cfg')
        assert _cfgfile, 'sysconfig.cfg exists'
        with _cfgfile.as_stream() as s:
            _SCHEMES.readfp(s)
        if _PYTHON_BUILD:
            for scheme in ('posix_prefix', 'posix_home'):
                _SCHEMES.set(scheme, 'include', '{srcdir}/Include')
                _SCHEMES.set(scheme, 'platinclude', '{projectbase}/.')

        _cfg_read = True


_SCHEMES = configparser.RawConfigParser()
_VAR_REPL = re.compile(r'\{([^{]*?)\}')

def _expand_globals(config):
    _ensure_cfg_read()
    if config.has_section('globals'):
        globals = config.items('globals')
    else:
        globals = tuple()

    sections = config.sections()
    for section in sections:
        if section == 'globals':
            continue
        for option, value in globals:
            if config.has_option(section, option):
                continue
            config.set(section, option, value)
    config.remove_section('globals')

    # now expanding local variables defined in the cfg file
    #
    for section in config.sections():
        variables = dict(config.items(section))

        def _replacer(matchobj):
            name = matchobj.group(1)
            if name in variables:
                return variables[name]
            return matchobj.group(0)

        for option, value in config.items(section):
            config.set(section, option, _VAR_REPL.sub(_replacer, value))

#_expand_globals(_SCHEMES)

# FIXME don't rely on sys.version here, its format is an implementation detail
# of CPython, use sys.version_info or sys.hexversion
_PY_VERSION = sys.version.split()[0]
_PY_VERSION_SHORT = sys.version[:3]
_PY_VERSION_SHORT_NO_DOT = _PY_VERSION[0] + _PY_VERSION[2]
_PREFIX = os.path.normpath(sys.prefix)
_EXEC_PREFIX = os.path.normpath(sys.exec_prefix)
_CONFIG_VARS = None
_USER_BASE = None


def _subst_vars(path, local_vars):
    """In the string `path`, replace tokens like {some.thing} with the
    corresponding value from the map `local_vars`.

    If there is no corresponding value, leave the token unchanged.
    """
    def _replacer(matchobj):
        name = matchobj.group(1)
        if name in local_vars:
            return local_vars[name]
        elif name in os.environ:
            return os.environ[name]
        return matchobj.group(0)
    return _VAR_REPL.sub(_replacer, path)


def _extend_dict(target_dict, other_dict):
    target_keys = target_dict.keys()
    for key, value in other_dict.items():
        if key in target_keys:
            continue
        target_dict[key] = value


def _expand_vars(scheme, vars):
    res = {}
    if vars is None:
        vars = {}
    _extend_dict(vars, get_config_vars())

    for key, value in _SCHEMES.items(scheme):
        if os.name in ('posix', 'nt'):
            value = os.path.expanduser(value)
        res[key] = os.path.normpath(_subst_vars(value, vars))
    return res


def format_value(value, vars):
    def _replacer(matchobj):
        name = matchobj.group(1)
        if name in vars:
            return vars[name]
        return matchobj.group(0)
    return _VAR_REPL.sub(_replacer, value)


def _get_default_scheme():
    if os.name == 'posix':
        # the default scheme for posix is posix_prefix
        return 'posix_prefix'
    return os.name


def _getuserbase():
    env_base = os.environ.get("PYTHONUSERBASE", None)

    def joinuser(*args):
        return os.path.expanduser(os.path.join(*args))

    # what about 'os2emx', 'riscos' ?
    if os.name == "nt":
        base = os.environ.get("APPDATA") or "~"
        if env_base:
            return env_base
        else:
            return joinuser(base, "Python")

    if sys.platform == "darwin":
        framework = get_config_var("PYTHONFRAMEWORK")
        if framework:
            if env_base:
                return env_base
            else:
                return joinuser("~", "Library", framework, "%d.%d" %
                                sys.version_info[:2])

    if env_base:
        return env_base
    else:
        return joinuser("~", ".local")
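Editor's worked example (mine; _subst_vars is a module-private helper, so this is illustration only): known {token} names come from the local map first, then from the environment, and unknown tokens are left intact:

template = '{base}/lib/python{py_version_short}/{unknown}'
print(_subst_vars(template, {'base': '/usr', 'py_version_short': '2.7'}))
# -> '/usr/lib/python2.7/{unknown}'   (assuming no 'unknown' environment variable)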
||||
def _parse_makefile(filename, vars=None):
|
||||
"""Parse a Makefile-style file.
|
||||
|
||||
A dictionary containing name/value pairs is returned. If an
|
||||
optional dictionary is passed in as the second argument, it is
|
||||
used instead of a new dictionary.
|
||||
"""
|
||||
# Regexes needed for parsing Makefile (and similar syntaxes,
|
||||
# like old-style Setup files).
|
||||
_variable_rx = re.compile("([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)")
|
||||
_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)")
|
||||
_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}")
|
||||
|
||||
if vars is None:
|
||||
vars = {}
|
||||
done = {}
|
||||
notdone = {}
|
||||
|
||||
with codecs.open(filename, encoding='utf-8', errors="surrogateescape") as f:
|
||||
lines = f.readlines()
|
||||
|
||||
for line in lines:
|
||||
if line.startswith('#') or line.strip() == '':
|
||||
continue
|
||||
m = _variable_rx.match(line)
|
||||
if m:
|
||||
n, v = m.group(1, 2)
|
||||
v = v.strip()
|
||||
# `$$' is a literal `$' in make
|
||||
tmpv = v.replace('$$', '')
|
||||
|
||||
if "$" in tmpv:
|
||||
notdone[n] = v
|
||||
else:
|
||||
try:
|
||||
v = int(v)
|
||||
except ValueError:
|
||||
# insert literal `$'
|
||||
done[n] = v.replace('$$', '$')
|
||||
else:
|
||||
done[n] = v
|
||||
|
||||
# do variable interpolation here
|
||||
variables = list(notdone.keys())
|
||||
|
||||
# Variables with a 'PY_' prefix in the makefile. These need to
|
||||
# be made available without that prefix through sysconfig.
|
||||
# Special care is needed to ensure that variable expansion works, even
|
||||
# if the expansion uses the name without a prefix.
|
||||
renamed_variables = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS')
|
||||
|
||||
while len(variables) > 0:
|
||||
for name in tuple(variables):
|
||||
value = notdone[name]
|
||||
m = _findvar1_rx.search(value) or _findvar2_rx.search(value)
|
||||
if m is not None:
|
||||
n = m.group(1)
|
||||
found = True
|
||||
if n in done:
|
||||
item = str(done[n])
|
||||
elif n in notdone:
|
||||
# get it on a subsequent round
|
||||
found = False
|
||||
elif n in os.environ:
|
||||
# do it like make: fall back to environment
|
||||
item = os.environ[n]
|
||||
|
||||
elif n in renamed_variables:
|
||||
if (name.startswith('PY_') and
|
||||
name[3:] in renamed_variables):
|
||||
item = ""
|
||||
|
||||
elif 'PY_' + n in notdone:
|
||||
found = False
|
||||
|
||||
else:
|
||||
item = str(done['PY_' + n])
|
||||
|
||||
else:
|
||||
done[n] = item = ""
|
||||
|
||||
if found:
|
||||
after = value[m.end():]
|
||||
value = value[:m.start()] + item + after
|
||||
if "$" in after:
|
||||
notdone[name] = value
|
||||
else:
|
||||
try:
|
||||
value = int(value)
|
||||
except ValueError:
|
||||
done[name] = value.strip()
|
||||
else:
|
||||
done[name] = value
|
||||
variables.remove(name)
|
||||
|
||||
if (name.startswith('PY_') and
|
||||
name[3:] in renamed_variables):
|
||||
|
||||
name = name[3:]
|
||||
if name not in done:
|
||||
done[name] = value
|
||||
|
||||
else:
|
||||
# bogus variable reference (e.g. "prefix=$/opt/python");
|
||||
# just drop it since we can't deal
|
||||
done[name] = value
|
||||
variables.remove(name)
|
||||
|
||||
# strip spurious spaces
|
||||
for k, v in done.items():
|
||||
if isinstance(v, str):
|
||||
done[k] = v.strip()
|
||||
|
||||
# save the results in the global dictionary
|
||||
vars.update(done)
|
||||
return vars
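# Example with a hypothetical Makefile fragment:
#   prefix=/usr
#   BINDIR=$(prefix)/bin
# _parse_makefile returns {'prefix': '/usr', 'BINDIR': '/usr/bin'}; purely
# numeric values are converted to int, and '$$' collapses to a literal '$'.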


def get_makefile_filename():
    """Return the path of the Makefile."""
    if _PYTHON_BUILD:
        return os.path.join(_PROJECT_BASE, "Makefile")
    if hasattr(sys, 'abiflags'):
        config_dir_name = 'config-%s%s' % (_PY_VERSION_SHORT, sys.abiflags)
    else:
        config_dir_name = 'config'
    return os.path.join(get_path('stdlib'), config_dir_name, 'Makefile')


def _init_posix(vars):
    """Initialize the module as appropriate for POSIX systems."""
    # load the installed Makefile:
    makefile = get_makefile_filename()
    try:
        _parse_makefile(makefile, vars)
    except IOError as e:
        msg = "invalid Python installation: unable to open %s" % makefile
        if hasattr(e, "strerror"):
            msg = msg + " (%s)" % e.strerror
        raise IOError(msg)
    # load the installed pyconfig.h:
    config_h = get_config_h_filename()
    try:
        with open(config_h) as f:
            parse_config_h(f, vars)
    except IOError as e:
        msg = "invalid Python installation: unable to open %s" % config_h
        if hasattr(e, "strerror"):
            msg = msg + " (%s)" % e.strerror
        raise IOError(msg)
    # On AIX, there are wrong paths to the linker scripts in the Makefile
    # -- these paths are relative to the Python source, but when installed
    # the scripts are in another directory.
    if _PYTHON_BUILD:
        vars['LDSHARED'] = vars['BLDSHARED']


def _init_non_posix(vars):
    """Initialize the module as appropriate for NT"""
    # set basic install directories
    vars['LIBDEST'] = get_path('stdlib')
    vars['BINLIBDEST'] = get_path('platstdlib')
    vars['INCLUDEPY'] = get_path('include')
    vars['SO'] = '.pyd'
    vars['EXE'] = '.exe'
    vars['VERSION'] = _PY_VERSION_SHORT_NO_DOT
    vars['BINDIR'] = os.path.dirname(_safe_realpath(sys.executable))

#
# public APIs
#


def parse_config_h(fp, vars=None):
    """Parse a config.h-style file.

    A dictionary containing name/value pairs is returned. If an
    optional dictionary is passed in as the second argument, it is
    used instead of a new dictionary.
    """
    if vars is None:
        vars = {}
    define_rx = re.compile("#define ([A-Z][A-Za-z0-9_]+) (.*)\n")
    undef_rx = re.compile("/[*] #undef ([A-Z][A-Za-z0-9_]+) [*]/\n")

    while True:
        line = fp.readline()
        if not line:
            break
        m = define_rx.match(line)
        if m:
            n, v = m.group(1, 2)
            try:
                v = int(v)
            except ValueError:
                pass
            vars[n] = v
        else:
            m = undef_rx.match(line)
            if m:
                vars[m.group(1)] = 0
    return vars
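# Example with hypothetical pyconfig.h lines:
#   #define HAVE_FORK 1
#   /* #undef HAVE_SPAWN */
# parse_config_h maps these to {'HAVE_FORK': 1, 'HAVE_SPAWN': 0}.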


def get_config_h_filename():
    """Return the path of pyconfig.h."""
    if _PYTHON_BUILD:
        if os.name == "nt":
            inc_dir = os.path.join(_PROJECT_BASE, "PC")
        else:
            inc_dir = _PROJECT_BASE
    else:
        inc_dir = get_path('platinclude')
    return os.path.join(inc_dir, 'pyconfig.h')


def get_scheme_names():
    """Return a tuple containing the schemes names."""
    return tuple(sorted(_SCHEMES.sections()))


def get_path_names():
    """Return a tuple containing the paths names."""
    # xxx see if we want a static list
    return _SCHEMES.options('posix_prefix')


def get_paths(scheme=_get_default_scheme(), vars=None, expand=True):
    """Return a mapping containing an install scheme.

    ``scheme`` is the install scheme name. If not provided, it will
    return the default scheme for the current platform.
    """
    _ensure_cfg_read()
    if expand:
        return _expand_vars(scheme, vars)
    else:
        return dict(_SCHEMES.items(scheme))


def get_path(name, scheme=_get_default_scheme(), vars=None, expand=True):
    """Return a path corresponding to the scheme.

    ``scheme`` is the install scheme name.
    """
    return get_paths(scheme, vars, expand)[name]
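# Typical usage (actual paths vary by interpreter and platform):
#   get_paths()['stdlib']  -> e.g. '/usr/lib/python2.7'
#   get_path('purelib')    -> e.g. '/usr/lib/python2.7/site-packages'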


def get_config_vars(*args):
    """With no arguments, return a dictionary of all configuration
    variables relevant for the current platform.

    On Unix, this means every variable defined in Python's installed Makefile;
    On Windows and Mac OS it's a much smaller set.

    With arguments, return a list of values that result from looking up
    each argument in the configuration variable dictionary.
    """
    global _CONFIG_VARS
    if _CONFIG_VARS is None:
        _CONFIG_VARS = {}
        # Normalized versions of prefix and exec_prefix are handy to have;
        # in fact, these are the standard versions used most places in the
        # distutils2 module.
        _CONFIG_VARS['prefix'] = _PREFIX
        _CONFIG_VARS['exec_prefix'] = _EXEC_PREFIX
        _CONFIG_VARS['py_version'] = _PY_VERSION
        _CONFIG_VARS['py_version_short'] = _PY_VERSION_SHORT
        _CONFIG_VARS['py_version_nodot'] = _PY_VERSION[0] + _PY_VERSION[2]
        _CONFIG_VARS['base'] = _PREFIX
        _CONFIG_VARS['platbase'] = _EXEC_PREFIX
        _CONFIG_VARS['projectbase'] = _PROJECT_BASE
        try:
            _CONFIG_VARS['abiflags'] = sys.abiflags
        except AttributeError:
            # sys.abiflags may not be defined on all platforms.
            _CONFIG_VARS['abiflags'] = ''

        if os.name in ('nt', 'os2'):
            _init_non_posix(_CONFIG_VARS)
        if os.name == 'posix':
            _init_posix(_CONFIG_VARS)
        # Setting 'userbase' is done below the call to the
        # init function to enable using 'get_config_var' in
        # the init-function.
        if sys.version >= '2.6':
            _CONFIG_VARS['userbase'] = _getuserbase()

        if 'srcdir' not in _CONFIG_VARS:
            _CONFIG_VARS['srcdir'] = _PROJECT_BASE
        else:
            _CONFIG_VARS['srcdir'] = _safe_realpath(_CONFIG_VARS['srcdir'])

        # Convert srcdir into an absolute path if it appears necessary.
        # Normally it is relative to the build directory. However, during
        # testing, for example, we might be running a non-installed python
        # from a different directory.
        if _PYTHON_BUILD and os.name == "posix":
            base = _PROJECT_BASE
            try:
                cwd = os.getcwd()
            except OSError:
                cwd = None
            if (not os.path.isabs(_CONFIG_VARS['srcdir']) and
                    base != cwd):
                # srcdir is relative and we are not in the same directory
                # as the executable. Assume executable is in the build
                # directory and make srcdir absolute.
                srcdir = os.path.join(base, _CONFIG_VARS['srcdir'])
                _CONFIG_VARS['srcdir'] = os.path.normpath(srcdir)

        if sys.platform == 'darwin':
            kernel_version = os.uname()[2]  # Kernel version (8.4.3)
            major_version = int(kernel_version.split('.')[0])

            if major_version < 8:
                # On macOS before 10.4, check if -arch and -isysroot
                # are in CFLAGS or LDFLAGS and remove them if they are.
                # This is needed when building extensions on a 10.3 system
                # using a universal build of python.
                for key in ('LDFLAGS', 'BASECFLAGS',
                            # a number of derived variables. These need to be
                            # patched up as well.
                            'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'):
                    flags = _CONFIG_VARS[key]
                    flags = re.sub(r'-arch\s+\w+\s', ' ', flags)
                    flags = re.sub('-isysroot [^ \t]*', ' ', flags)
                    _CONFIG_VARS[key] = flags
            else:
                # Allow the user to override the architecture flags using
                # an environment variable.
                # NOTE: This name was introduced by Apple in OSX 10.5 and
                # is used by several scripting languages distributed with
                # that OS release.
                if 'ARCHFLAGS' in os.environ:
                    arch = os.environ['ARCHFLAGS']
                    for key in ('LDFLAGS', 'BASECFLAGS',
                                # a number of derived variables. These need to be
                                # patched up as well.
                                'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'):

                        flags = _CONFIG_VARS[key]
                        flags = re.sub(r'-arch\s+\w+\s', ' ', flags)
                        flags = flags + ' ' + arch
                        _CONFIG_VARS[key] = flags

                # If we're on OSX 10.5 or later and the user tries to
                # compile an extension using an SDK that is not present
                # on the current machine it is better to not use an SDK
                # than to fail.
                #
                # The major usecase for this is users using a Python.org
                # binary installer on OSX 10.6: that installer uses
                # the 10.4u SDK, but that SDK is not installed by default
                # when you install Xcode.
                #
                CFLAGS = _CONFIG_VARS.get('CFLAGS', '')
                m = re.search(r'-isysroot\s+(\S+)', CFLAGS)
                if m is not None:
                    sdk = m.group(1)
                    if not os.path.exists(sdk):
                        for key in ('LDFLAGS', 'BASECFLAGS',
                                    # a number of derived variables. These need to be
                                    # patched up as well.
                                    'CFLAGS', 'PY_CFLAGS', 'BLDSHARED'):

                            flags = _CONFIG_VARS[key]
                            flags = re.sub(r'-isysroot\s+\S+(\s|$)', ' ', flags)
                            _CONFIG_VARS[key] = flags

    if args:
        vals = []
        for name in args:
            vals.append(_CONFIG_VARS.get(name))
        return vals
    else:
        return _CONFIG_VARS


def get_config_var(name):
    """Return the value of a single variable using the dictionary returned by
    'get_config_vars()'.

    Equivalent to get_config_vars().get(name)
    """
    return get_config_vars().get(name)
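# Example (values vary by build):
#   get_config_var('prefix')                  -> e.g. '/usr'
#   get_config_vars('prefix', 'exec_prefix')  -> e.g. ['/usr', '/usr']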


def get_platform():
    """Return a string that identifies the current platform.

    This is used mainly to distinguish platform-specific build directories and
    platform-specific built distributions. Typically includes the OS name
    and version and the architecture (as supplied by 'os.uname()'),
    although the exact information included depends on the OS; eg. for IRIX
    the architecture isn't particularly important (IRIX only runs on SGI
    hardware), but for Linux the kernel version isn't particularly
    important.

    Examples of returned values:
       linux-i586
       linux-alpha (?)
       solaris-2.6-sun4u
       irix-5.3
       irix64-6.2

    Windows will return one of:
       win-amd64 (64bit Windows on AMD64 (aka x86_64, Intel64, EM64T, etc.))
       win-ia64 (64bit Windows on Itanium)
       win32 (all others - specifically, sys.platform is returned)

    For other non-POSIX platforms, currently just returns 'sys.platform'.
    """
    if os.name == 'nt':
        # sniff sys.version for architecture.
        prefix = " bit ("
        i = sys.version.find(prefix)
        if i == -1:
            return sys.platform
        j = sys.version.find(")", i)
        look = sys.version[i+len(prefix):j].lower()
        if look == 'amd64':
            return 'win-amd64'
        if look == 'itanium':
            return 'win-ia64'
        return sys.platform

    if os.name != "posix" or not hasattr(os, 'uname'):
        # XXX what about the architecture? NT is Intel or Alpha,
        # Mac OS is M68k or PPC, etc.
        return sys.platform

    # Try to distinguish various flavours of Unix
    osname, host, release, version, machine = os.uname()

    # Convert the OS name to lowercase, remove '/' characters
    # (to accommodate BSD/OS), and translate spaces (for "Power Macintosh")
    osname = osname.lower().replace('/', '')
    machine = machine.replace(' ', '_')
    machine = machine.replace('/', '-')

    if osname[:5] == "linux":
        # At least on Linux/Intel, 'machine' is the processor --
        # i386, etc.
        # XXX what about Alpha, SPARC, etc?
        return "%s-%s" % (osname, machine)
    elif osname[:5] == "sunos":
        if release[0] >= "5":  # SunOS 5 == Solaris 2
            osname = "solaris"
            release = "%d.%s" % (int(release[0]) - 3, release[2:])
        # fall through to standard osname-release-machine representation
    elif osname[:4] == "irix":  # could be "irix64"!
        return "%s-%s" % (osname, release)
    elif osname[:3] == "aix":
        return "%s-%s.%s" % (osname, version, release)
    elif osname[:6] == "cygwin":
        osname = "cygwin"
        rel_re = re.compile(r'[\d.]+')
        m = rel_re.match(release)
        if m:
            release = m.group()
    elif osname[:6] == "darwin":
        #
        # For our purposes, we'll assume that the system version from
        # distutils' perspective is what MACOSX_DEPLOYMENT_TARGET is set
        # to. This makes the compatibility story a bit more sane because the
        # machine is going to compile and link as if it were
        # MACOSX_DEPLOYMENT_TARGET.
        cfgvars = get_config_vars()
        macver = cfgvars.get('MACOSX_DEPLOYMENT_TARGET')

        if True:
            # Always calculate the release of the running machine,
            # needed to determine if we can build fat binaries or not.

            macrelease = macver
            # Get the system version. Reading this plist is a documented
            # way to get the system version (see the documentation for
            # the Gestalt Manager)
            try:
                f = open('/System/Library/CoreServices/SystemVersion.plist')
            except IOError:
                # We're on a plain darwin box, fall back to the default
                # behaviour.
                pass
            else:
                try:
                    m = re.search(r'<key>ProductUserVisibleVersion</key>\s*'
                                  r'<string>(.*?)</string>', f.read())
                finally:
                    f.close()
                if m is not None:
                    macrelease = '.'.join(m.group(1).split('.')[:2])
                # else: fall back to the default behaviour

        if not macver:
            macver = macrelease

        if macver:
            release = macver
            osname = "macosx"

            if ((macrelease + '.') >= '10.4.' and
                    '-arch' in get_config_vars().get('CFLAGS', '').strip()):
                # The universal build will build fat binaries, but not on
                # systems before 10.4
                #
                # Try to detect 4-way universal builds, those have machine-type
                # 'universal' instead of 'fat'.

                machine = 'fat'
                cflags = get_config_vars().get('CFLAGS')

                archs = re.findall(r'-arch\s+(\S+)', cflags)
                archs = tuple(sorted(set(archs)))

                if len(archs) == 1:
                    machine = archs[0]
                elif archs == ('i386', 'ppc'):
                    machine = 'fat'
                elif archs == ('i386', 'x86_64'):
                    machine = 'intel'
                elif archs == ('i386', 'ppc', 'x86_64'):
                    machine = 'fat3'
                elif archs == ('ppc64', 'x86_64'):
                    machine = 'fat64'
                elif archs == ('i386', 'ppc', 'ppc64', 'x86_64'):
                    machine = 'universal'
                else:
                    raise ValueError(
                        "Don't know machine value for archs=%r" % (archs,))

            elif machine == 'i386':
                # On OSX the machine type returned by uname is always the
                # 32-bit variant, even if the executable architecture is
                # the 64-bit variant
                if sys.maxsize >= 2**32:
                    machine = 'x86_64'

            elif machine in ('PowerPC', 'Power_Macintosh'):
                # Pick a sane name for the PPC architecture.
                # See 'i386' case
                if sys.maxsize >= 2**32:
                    machine = 'ppc64'
                else:
                    machine = 'ppc'

    return "%s-%s-%s" % (osname, release, machine)


def get_python_version():
    return _PY_VERSION_SHORT


def _print_dict(title, data):
    for index, (key, value) in enumerate(sorted(data.items())):
        if index == 0:
            print('%s: ' % (title))
        print('\t%s = "%s"' % (key, value))


def _main():
    """Display all information sysconfig contains."""
    print('Platform: "%s"' % get_platform())
    print('Python version: "%s"' % get_python_version())
    print('Current installation scheme: "%s"' % _get_default_scheme())
    print()
    _print_dict('Paths', get_paths())
    print()
    _print_dict('Variables', get_config_vars())


if __name__ == '__main__':
    _main()
2607  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/_backport/tarfile.py  vendored  new file
Diff not shown because of its large size.
1111  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/compat.py  vendored  new file
Diff not shown because of its large size.
1312  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/database.py  vendored  new file
Diff not shown because of its large size.
515  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/index.py  vendored  new file
@@ -0,0 +1,515 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
import hashlib
import logging
import os
import shutil
import subprocess
import tempfile
try:
    from threading import Thread
except ImportError:
    from dummy_threading import Thread

from . import DistlibException
from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr,
                     urlparse, build_opener, string_types)
from .util import cached_property, zip_dir, ServerProxy

logger = logging.getLogger(__name__)

DEFAULT_INDEX = 'https://pypi.python.org/pypi'
DEFAULT_REALM = 'pypi'


class PackageIndex(object):
    """
    This class represents a package index compatible with PyPI, the Python
    Package Index.
    """

    boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$'

    def __init__(self, url=None):
        """
        Initialise an instance.

        :param url: The URL of the index. If not specified, the URL for PyPI is
                    used.
        """
        self.url = url or DEFAULT_INDEX
        self.read_configuration()
        scheme, netloc, path, params, query, frag = urlparse(self.url)
        if params or query or frag or scheme not in ('http', 'https'):
            raise DistlibException('invalid repository: %s' % self.url)
        self.password_handler = None
        self.ssl_verifier = None
        self.gpg = None
        self.gpg_home = None
        self.rpc_proxy = None
        with open(os.devnull, 'w') as sink:
            # Use gpg by default rather than gpg2, as gpg2 insists on
            # prompting for passwords
            for s in ('gpg', 'gpg2'):
                try:
                    rc = subprocess.check_call([s, '--version'], stdout=sink,
                                               stderr=sink)
                    if rc == 0:
                        self.gpg = s
                        break
                except OSError:
                    pass

    def _get_pypirc_command(self):
        """
        Get the distutils command for interacting with PyPI configurations.
        :return: the command.
        """
        from distutils.core import Distribution
        from distutils.config import PyPIRCCommand
        d = Distribution()
        return PyPIRCCommand(d)

    def read_configuration(self):
        """
        Read the PyPI access configuration as supported by distutils, getting
        PyPI to do the actual work. This populates ``username``, ``password``,
        ``realm`` and ``url`` attributes from the configuration.
        """
        # get distutils to do the work
        c = self._get_pypirc_command()
        c.repository = self.url
        cfg = c._read_pypirc()
        self.username = cfg.get('username')
        self.password = cfg.get('password')
        self.realm = cfg.get('realm', 'pypi')
        self.url = cfg.get('repository', self.url)

    def save_configuration(self):
        """
        Save the PyPI access configuration. You must have set ``username`` and
        ``password`` attributes before calling this method.

        Again, distutils is used to do the actual work.
        """
        self.check_credentials()
        # get distutils to do the work
        c = self._get_pypirc_command()
        c._store_pypirc(self.username, self.password)

    def check_credentials(self):
        """
        Check that ``username`` and ``password`` have been set, and raise an
        exception if not.
        """
        if self.username is None or self.password is None:
            raise DistlibException('username and password must be set')
        pm = HTTPPasswordMgr()
        _, netloc, _, _, _, _ = urlparse(self.url)
        pm.add_password(self.realm, netloc, self.username, self.password)
        self.password_handler = HTTPBasicAuthHandler(pm)

    def register(self, metadata):
        """
        Register a distribution on PyPI, using the provided metadata.

        :param metadata: A :class:`Metadata` instance defining at least a name
                         and version number for the distribution to be
                         registered.
        :return: The HTTP response received from PyPI upon submission of the
                 request.
        """
        self.check_credentials()
        metadata.validate()
        d = metadata.todict()
        d[':action'] = 'verify'
        request = self.encode_request(d.items(), [])
        response = self.send_request(request)
        d[':action'] = 'submit'
        request = self.encode_request(d.items(), [])
        return self.send_request(request)

    def _reader(self, name, stream, outbuf):
        """
        Thread runner for reading lines from a subprocess into a buffer.

        :param name: The logical name of the stream (used for logging only).
        :param stream: The stream to read from. This will typically be a pipe
                       connected to the output stream of a subprocess.
        :param outbuf: The list to append the read lines to.
        """
        while True:
            s = stream.readline()
            if not s:
                break
            s = s.decode('utf-8').rstrip()
            outbuf.append(s)
            logger.debug('%s: %s' % (name, s))
        stream.close()

    def get_sign_command(self, filename, signer, sign_password,
                         keystore=None):
        """
        Return a suitable command for signing a file.

        :param filename: The pathname to the file to be signed.
        :param signer: The identifier of the signer of the file.
        :param sign_password: The passphrase for the signer's
                              private key used for signing.
        :param keystore: The path to a directory which contains the keys
                         used in verification. If not specified, the
                         instance's ``gpg_home`` attribute is used instead.
        :return: The signing command as a list suitable to be
                 passed to :class:`subprocess.Popen`.
        """
        cmd = [self.gpg, '--status-fd', '2', '--no-tty']
        if keystore is None:
            keystore = self.gpg_home
        if keystore:
            cmd.extend(['--homedir', keystore])
        if sign_password is not None:
            cmd.extend(['--batch', '--passphrase-fd', '0'])
        td = tempfile.mkdtemp()
        sf = os.path.join(td, os.path.basename(filename) + '.asc')
        cmd.extend(['--detach-sign', '--armor', '--local-user',
                    signer, '--output', sf, filename])
        logger.debug('invoking: %s', ' '.join(cmd))
        return cmd, sf

    def run_command(self, cmd, input_data=None):
        """
        Run a command in a child process, passing it any input data specified.

        :param cmd: The command to run.
        :param input_data: If specified, this must be a byte string containing
                           data to be sent to the child process.
        :return: A tuple consisting of the subprocess' exit code, a list of
                 lines read from the subprocess' ``stdout``, and a list of
                 lines read from the subprocess' ``stderr``.
        """
        kwargs = {
            'stdout': subprocess.PIPE,
            'stderr': subprocess.PIPE,
        }
        if input_data is not None:
            kwargs['stdin'] = subprocess.PIPE
        stdout = []
        stderr = []
        p = subprocess.Popen(cmd, **kwargs)
        # We don't use communicate() here because we may need to
        # get clever with interacting with the command
        t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout))
        t1.start()
        t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr))
        t2.start()
        if input_data is not None:
            p.stdin.write(input_data)
            p.stdin.close()

        p.wait()
        t1.join()
        t2.join()
        return p.returncode, stdout, stderr

    def sign_file(self, filename, signer, sign_password, keystore=None):
        """
        Sign a file.

        :param filename: The pathname to the file to be signed.
        :param signer: The identifier of the signer of the file.
        :param sign_password: The passphrase for the signer's
                              private key used for signing.
        :param keystore: The path to a directory which contains the keys
                         used in signing. If not specified, the instance's
                         ``gpg_home`` attribute is used instead.
        :return: The absolute pathname of the file where the signature is
                 stored.
        """
        cmd, sig_file = self.get_sign_command(filename, signer, sign_password,
                                              keystore)
        rc, stdout, stderr = self.run_command(cmd,
                                              sign_password.encode('utf-8'))
        if rc != 0:
            raise DistlibException('sign command failed with error '
                                   'code %s' % rc)
        return sig_file

    def upload_file(self, metadata, filename, signer=None, sign_password=None,
                    filetype='sdist', pyversion='source', keystore=None):
        """
        Upload a release file to the index.

        :param metadata: A :class:`Metadata` instance defining at least a name
                         and version number for the file to be uploaded.
        :param filename: The pathname of the file to be uploaded.
        :param signer: The identifier of the signer of the file.
        :param sign_password: The passphrase for the signer's
                              private key used for signing.
        :param filetype: The type of the file being uploaded. This is the
                         distutils command which produced that file, e.g.
                         ``sdist`` or ``bdist_wheel``.
        :param pyversion: The version of Python which the release relates
                          to. For code compatible with any Python, this would
                          be ``source``, otherwise it would be e.g. ``3.2``.
        :param keystore: The path to a directory which contains the keys
                         used in signing. If not specified, the instance's
                         ``gpg_home`` attribute is used instead.
        :return: The HTTP response received from PyPI upon submission of the
                 request.
        """
        self.check_credentials()
        if not os.path.exists(filename):
            raise DistlibException('not found: %s' % filename)
        metadata.validate()
        d = metadata.todict()
        sig_file = None
        if signer:
            if not self.gpg:
                logger.warning('no signing program available - not signed')
            else:
                sig_file = self.sign_file(filename, signer, sign_password,
                                          keystore)
        with open(filename, 'rb') as f:
            file_data = f.read()
        md5_digest = hashlib.md5(file_data).hexdigest()
        sha256_digest = hashlib.sha256(file_data).hexdigest()
        d.update({
            ':action': 'file_upload',
            'protocol_version': '1',
            'filetype': filetype,
            'pyversion': pyversion,
            'md5_digest': md5_digest,
            'sha256_digest': sha256_digest,
        })
        files = [('content', os.path.basename(filename), file_data)]
        if sig_file:
            with open(sig_file, 'rb') as f:
                sig_data = f.read()
            files.append(('gpg_signature', os.path.basename(sig_file),
                         sig_data))
            shutil.rmtree(os.path.dirname(sig_file))
        request = self.encode_request(d.items(), files)
        return self.send_request(request)

    def upload_documentation(self, metadata, doc_dir):
        """
        Upload documentation to the index.

        :param metadata: A :class:`Metadata` instance defining at least a name
                         and version number for the documentation to be
                         uploaded.
        :param doc_dir: The pathname of the directory which contains the
                        documentation. This should be the directory that
                        contains the ``index.html`` for the documentation.
        :return: The HTTP response received from PyPI upon submission of the
                 request.
        """
        self.check_credentials()
        if not os.path.isdir(doc_dir):
            raise DistlibException('not a directory: %r' % doc_dir)
        fn = os.path.join(doc_dir, 'index.html')
        if not os.path.exists(fn):
            raise DistlibException('not found: %r' % fn)
        metadata.validate()
        name, version = metadata.name, metadata.version
        zip_data = zip_dir(doc_dir).getvalue()
        fields = [(':action', 'doc_upload'),
                  ('name', name), ('version', version)]
        files = [('content', name, zip_data)]
        request = self.encode_request(fields, files)
        return self.send_request(request)

    def get_verify_command(self, signature_filename, data_filename,
                           keystore=None):
        """
        Return a suitable command for verifying a file.

        :param signature_filename: The pathname to the file containing the
                                   signature.
        :param data_filename: The pathname to the file containing the
                              signed data.
        :param keystore: The path to a directory which contains the keys
                         used in verification. If not specified, the
                         instance's ``gpg_home`` attribute is used instead.
        :return: The verifying command as a list suitable to be
                 passed to :class:`subprocess.Popen`.
        """
        cmd = [self.gpg, '--status-fd', '2', '--no-tty']
        if keystore is None:
            keystore = self.gpg_home
        if keystore:
            cmd.extend(['--homedir', keystore])
        cmd.extend(['--verify', signature_filename, data_filename])
        logger.debug('invoking: %s', ' '.join(cmd))
        return cmd

    def verify_signature(self, signature_filename, data_filename,
                         keystore=None):
        """
        Verify a signature for a file.

        :param signature_filename: The pathname to the file containing the
                                   signature.
        :param data_filename: The pathname to the file containing the
                              signed data.
        :param keystore: The path to a directory which contains the keys
                         used in verification. If not specified, the
                         instance's ``gpg_home`` attribute is used instead.
        :return: True if the signature was verified, else False.
        """
        if not self.gpg:
            raise DistlibException('verification unavailable because gpg '
                                   'unavailable')
        cmd = self.get_verify_command(signature_filename, data_filename,
                                      keystore)
        rc, stdout, stderr = self.run_command(cmd)
        if rc not in (0, 1):
            raise DistlibException('verify command failed with error '
                                   'code %s' % rc)
        return rc == 0

    def download_file(self, url, destfile, digest=None, reporthook=None):
        """
        This is a convenience method for downloading a file from a URL.
        Normally, this will be a file from the index, though currently
        no check is made for this (i.e. a file can be downloaded from
        anywhere).

        The method is just like the :func:`urlretrieve` function in the
        standard library, except that it allows digest computation to be
        done during download and checking that the downloaded data
        matches any expected value.

        :param url: The URL of the file to be downloaded (assumed to be
                    available via an HTTP GET request).
        :param destfile: The pathname where the downloaded file is to be
                         saved.
        :param digest: If specified, this must be a (hasher, value)
                       tuple, where hasher is the algorithm used (e.g.
                       ``'md5'``) and ``value`` is the expected value.
        :param reporthook: The same as for :func:`urlretrieve` in the
                           standard library.
        """
        if digest is None:
            digester = None
            logger.debug('No digest specified')
        else:
            if isinstance(digest, (list, tuple)):
                hasher, digest = digest
            else:
                hasher = 'md5'
            digester = getattr(hashlib, hasher)()
            logger.debug('Digest specified: %s' % digest)
        # The following code is equivalent to urlretrieve.
        # We need to do it this way so that we can compute the
        # digest of the file as we go.
        with open(destfile, 'wb') as dfp:
            # addinfourl is not a context manager on 2.x
            # so we have to use try/finally
            sfp = self.send_request(Request(url))
            try:
                headers = sfp.info()
                blocksize = 8192
                size = -1
                read = 0
                blocknum = 0
                if "content-length" in headers:
                    size = int(headers["Content-Length"])
                if reporthook:
                    reporthook(blocknum, blocksize, size)
                while True:
                    block = sfp.read(blocksize)
                    if not block:
                        break
                    read += len(block)
                    dfp.write(block)
                    if digester:
                        digester.update(block)
                    blocknum += 1
                    if reporthook:
                        reporthook(blocknum, blocksize, size)
            finally:
                sfp.close()

        # check that we got the whole file, if we can
        if size >= 0 and read < size:
            raise DistlibException(
                'retrieval incomplete: got only %d out of %d bytes'
                % (read, size))
        # if we have a digest, it must match.
        if digester:
            actual = digester.hexdigest()
            if digest != actual:
                raise DistlibException('%s digest mismatch for %s: expected '
                                       '%s, got %s' % (hasher, destfile,
                                                       digest, actual))
            logger.debug('Digest verified: %s', digest)
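    # Illustrative call with a hypothetical URL and digest value:
    #   index.download_file('https://example.com/pkg-1.0.tar.gz',
    #                       'pkg-1.0.tar.gz',
    #                       digest=('sha256', expected_hex))
    # A bare string digest is treated as an MD5 hex value.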

    def send_request(self, req):
        """
        Send a standard library :class:`Request` to PyPI and return its
        response.

        :param req: The request to send.
        :return: The HTTP response from PyPI (a standard library HTTPResponse).
        """
        handlers = []
        if self.password_handler:
            handlers.append(self.password_handler)
        if self.ssl_verifier:
            handlers.append(self.ssl_verifier)
        opener = build_opener(*handlers)
        return opener.open(req)

    def encode_request(self, fields, files):
        """
        Encode fields and files for posting to an HTTP server.

        :param fields: The fields to send as a list of (fieldname, value)
                       tuples.
        :param files: The files to send as a list of (fieldname, filename,
                      file_bytes) tuples.
        """
        # Adapted from packaging, which in turn was adapted from
        # http://code.activestate.com/recipes/146306

        parts = []
        boundary = self.boundary
        for k, values in fields:
            if not isinstance(values, (list, tuple)):
                values = [values]

            for v in values:
                parts.extend((
                    b'--' + boundary,
                    ('Content-Disposition: form-data; name="%s"' %
                     k).encode('utf-8'),
                    b'',
                    v.encode('utf-8')))
        for key, filename, value in files:
            parts.extend((
                b'--' + boundary,
                ('Content-Disposition: form-data; name="%s"; filename="%s"' %
                 (key, filename)).encode('utf-8'),
                b'',
                value))

        parts.extend((b'--' + boundary + b'--', b''))

        body = b'\r\n'.join(parts)
        ct = b'multipart/form-data; boundary=' + boundary
        headers = {
            'Content-type': ct,
            'Content-length': str(len(body))
        }
        return Request(self.url, body, headers)

    def search(self, terms, operator=None):
        if isinstance(terms, string_types):
            terms = {'name': terms}
        if self.rpc_proxy is None:
            self.rpc_proxy = ServerProxy(self.url, timeout=3.0)
        return self.rpc_proxy.search(terms, operator or 'and')
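# Illustrative usage with hypothetical credentials and metadata:
#   index = PackageIndex()  # defaults to https://pypi.python.org/pypi
#   index.username, index.password = 'user', 'secret'
#   index.register(metadata)
#   index.upload_file(metadata, 'dist/pkg-1.0.tar.gz')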
1283  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/locators.py  vendored  new file
Diff not shown because of its large size.
393  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/manifest.py  vendored  new file
@@ -0,0 +1,393 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2013 Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""
Class representing the list of files in a distribution.

Equivalent to distutils.filelist, but fixes some problems.
"""
import fnmatch
import logging
import os
import re
import sys

from . import DistlibException
from .compat import fsdecode
from .util import convert_path


__all__ = ['Manifest']

logger = logging.getLogger(__name__)

# a \ followed by optional word characters + EOL
_COLLAPSE_PATTERN = re.compile(r'\\\w*\n', re.M)
_COMMENTED_LINE = re.compile('#.*?(?=\n)|\n(?=$)', re.M | re.S)

#
# Due to the different results returned by fnmatch.translate, we need
# to do slightly different processing for Python 2.7 and 3.2 ... this needed
# to be brought in for Python 3.6 onwards.
#
_PYTHON_VERSION = sys.version_info[:2]


class Manifest(object):
    """A list of files built by exploring the filesystem and filtered by
    applying various patterns to what we find there.
    """

    def __init__(self, base=None):
        """
        Initialise an instance.

        :param base: The base directory to explore under.
        """
        self.base = os.path.abspath(os.path.normpath(base or os.getcwd()))
        self.prefix = self.base + os.sep
        self.allfiles = None
        self.files = set()

    #
    # Public API
    #

    def findall(self):
        """Find all files under the base and set ``allfiles`` to the absolute
        pathnames of files found.
        """
        from stat import S_ISREG, S_ISDIR, S_ISLNK

        self.allfiles = allfiles = []
        root = self.base
        stack = [root]
        pop = stack.pop
        push = stack.append

        while stack:
            root = pop()
            names = os.listdir(root)

            for name in names:
                fullname = os.path.join(root, name)

                # Avoid excess stat calls -- just one will do, thank you!
                stat = os.stat(fullname)
                mode = stat.st_mode
                if S_ISREG(mode):
                    allfiles.append(fsdecode(fullname))
                elif S_ISDIR(mode) and not S_ISLNK(mode):
                    push(fullname)

    def add(self, item):
        """
        Add a file to the manifest.

        :param item: The pathname to add. This can be relative to the base.
        """
        if not item.startswith(self.prefix):
            item = os.path.join(self.base, item)
        self.files.add(os.path.normpath(item))

    def add_many(self, items):
        """
        Add a list of files to the manifest.

        :param items: The pathnames to add. These can be relative to the base.
        """
        for item in items:
            self.add(item)

    def sorted(self, wantdirs=False):
        """
        Return sorted files in directory order
        """
        def add_dir(dirs, d):
            dirs.add(d)
            logger.debug('add_dir added %s', d)
            if d != self.base:
                parent, _ = os.path.split(d)
                assert parent not in ('', '/')
                add_dir(dirs, parent)

        result = set(self.files)    # make a copy!
        if wantdirs:
            dirs = set()
            for f in result:
                add_dir(dirs, os.path.dirname(f))
            result |= dirs
        return [os.path.join(*path_tuple) for path_tuple in
                sorted(os.path.split(path) for path in result)]

    def clear(self):
        """Clear all collected files."""
        self.files = set()
        self.allfiles = []

    def process_directive(self, directive):
        """
        Process a directive which either adds some files from ``allfiles`` to
        ``files``, or removes some files from ``files``.

        :param directive: The directive to process. This should be in a format
                          compatible with distutils ``MANIFEST.in`` files:

                          http://docs.python.org/distutils/sourcedist.html#commands
        """
        # Parse the line: split it up, make sure the right number of words
        # is there, and return the relevant words. 'action' is always
        # defined: it's the first word of the line. Which of the other
        # three are defined depends on the action; it'll be either
        # patterns, (dir and patterns), or (dirpattern).
        action, patterns, thedir, dirpattern = self._parse_directive(directive)

        # OK, now we know that the action is valid and we have the
        # right number of words on the line for that action -- so we
        # can proceed with minimal error-checking.
        if action == 'include':
            for pattern in patterns:
                if not self._include_pattern(pattern, anchor=True):
                    logger.warning('no files found matching %r', pattern)

        elif action == 'exclude':
            for pattern in patterns:
                found = self._exclude_pattern(pattern, anchor=True)
                #if not found:
                #    logger.warning('no previously-included files '
                #                   'found matching %r', pattern)

        elif action == 'global-include':
            for pattern in patterns:
                if not self._include_pattern(pattern, anchor=False):
                    logger.warning('no files found matching %r '
                                   'anywhere in distribution', pattern)

        elif action == 'global-exclude':
            for pattern in patterns:
                found = self._exclude_pattern(pattern, anchor=False)
                #if not found:
                #    logger.warning('no previously-included files '
                #                   'matching %r found anywhere in '
                #                   'distribution', pattern)

        elif action == 'recursive-include':
            for pattern in patterns:
                if not self._include_pattern(pattern, prefix=thedir):
                    logger.warning('no files found matching %r '
                                   'under directory %r', pattern, thedir)

        elif action == 'recursive-exclude':
            for pattern in patterns:
                found = self._exclude_pattern(pattern, prefix=thedir)
                #if not found:
                #    logger.warning('no previously-included files '
                #                   'matching %r found under directory %r',
                #                   pattern, thedir)

        elif action == 'graft':
            if not self._include_pattern(None, prefix=dirpattern):
                logger.warning('no directories found matching %r',
                               dirpattern)

        elif action == 'prune':
            if not self._exclude_pattern(None, prefix=dirpattern):
                logger.warning('no previously-included directories found '
                               'matching %r', dirpattern)
        else:   # pragma: no cover
            # This should never happen, as it should be caught in
            # _parse_template_line
            raise DistlibException(
                'invalid action %r' % action)

    #
    # Private API
    #

    def _parse_directive(self, directive):
        """
        Validate a directive.
        :param directive: The directive to validate.
        :return: A tuple of action, patterns, thedir, dir_patterns
        """
        words = directive.split()
        if len(words) == 1 and words[0] not in ('include', 'exclude',
                                                'global-include',
                                                'global-exclude',
                                                'recursive-include',
                                                'recursive-exclude',
                                                'graft', 'prune'):
            # no action given, let's use the default 'include'
            words.insert(0, 'include')

        action = words[0]
        patterns = thedir = dir_pattern = None

        if action in ('include', 'exclude',
                      'global-include', 'global-exclude'):
            if len(words) < 2:
                raise DistlibException(
                    '%r expects <pattern1> <pattern2> ...' % action)

            patterns = [convert_path(word) for word in words[1:]]

        elif action in ('recursive-include', 'recursive-exclude'):
            if len(words) < 3:
                raise DistlibException(
                    '%r expects <dir> <pattern1> <pattern2> ...' % action)

            thedir = convert_path(words[1])
            patterns = [convert_path(word) for word in words[2:]]

        elif action in ('graft', 'prune'):
            if len(words) != 2:
                raise DistlibException(
                    '%r expects a single <dir_pattern>' % action)

            dir_pattern = convert_path(words[1])

        else:
            raise DistlibException('unknown action %r' % action)

        return action, patterns, thedir, dir_pattern

    def _include_pattern(self, pattern, anchor=True, prefix=None,
                         is_regex=False):
        """Select strings (presumably filenames) from 'self.files' that
        match 'pattern', a Unix-style wildcard (glob) pattern.

        Patterns are not quite the same as implemented by the 'fnmatch'
        module: '*' and '?' match non-special characters, where "special"
        is platform-dependent: slash on Unix; colon, slash, and backslash on
        DOS/Windows; and colon on Mac OS.

        If 'anchor' is true (the default), then the pattern match is more
        stringent: "*.py" will match "foo.py" but not "foo/bar.py". If
        'anchor' is false, both of these will match.

        If 'prefix' is supplied, then only filenames starting with 'prefix'
        (itself a pattern) and ending with 'pattern', with anything in between
        them, will match. 'anchor' is ignored in this case.

        If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and
        'pattern' is assumed to be either a string containing a regex or a
        regex object -- no translation is done, the regex is just compiled
        and used as-is.

        Selected strings will be added to self.files.

        Return True if files are found.
        """
        # XXX docstring lying about what the special chars are?
        found = False
        pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex)

        # delayed loading of allfiles list
        if self.allfiles is None:
            self.findall()

        for name in self.allfiles:
            if pattern_re.search(name):
                self.files.add(name)
                found = True
        return found

    def _exclude_pattern(self, pattern, anchor=True, prefix=None,
                         is_regex=False):
        """Remove strings (presumably filenames) from 'files' that match
        'pattern'.

        Other parameters are the same as for 'include_pattern()', above.
        The list 'self.files' is modified in place. Return True if files are
        found.

        This API is public to allow e.g. exclusion of SCM subdirs, e.g. when
        packaging source distributions
        """
        found = False
        pattern_re = self._translate_pattern(pattern, anchor, prefix, is_regex)
        for f in list(self.files):
            if pattern_re.search(f):
                self.files.remove(f)
                found = True
        return found

    def _translate_pattern(self, pattern, anchor=True, prefix=None,
                           is_regex=False):
        """Translate a shell-like wildcard pattern to a compiled regular
        expression.

        Return the compiled regex. If 'is_regex' true,
        then 'pattern' is directly compiled to a regex (if it's a string)
        or just returned as-is (assumes it's a regex object).
        """
        if is_regex:
            if isinstance(pattern, str):
                return re.compile(pattern)
            else:
                return pattern

        if _PYTHON_VERSION > (3, 2):
            # ditch start and end characters
            start, _, end = self._glob_to_re('_').partition('_')

        if pattern:
            pattern_re = self._glob_to_re(pattern)
            if _PYTHON_VERSION > (3, 2):
                assert pattern_re.startswith(start) and pattern_re.endswith(end)
        else:
            pattern_re = ''

        base = re.escape(os.path.join(self.base, ''))
        if prefix is not None:
            # ditch end of pattern character
            if _PYTHON_VERSION <= (3, 2):
                empty_pattern = self._glob_to_re('')
                prefix_re = self._glob_to_re(prefix)[:-len(empty_pattern)]
            else:
                prefix_re = self._glob_to_re(prefix)
                assert prefix_re.startswith(start) and prefix_re.endswith(end)
                prefix_re = prefix_re[len(start): len(prefix_re) - len(end)]
            sep = os.sep
            if os.sep == '\\':
                sep = r'\\'
            if _PYTHON_VERSION <= (3, 2):
                pattern_re = '^' + base + sep.join((prefix_re,
                                                    '.*' + pattern_re))
            else:
                pattern_re = pattern_re[len(start): len(pattern_re) - len(end)]
                pattern_re = r'%s%s%s%s.*%s%s' % (start, base, prefix_re, sep,
                                                  pattern_re, end)
        else:  # no prefix -- respect anchor flag
            if anchor:
                if _PYTHON_VERSION <= (3, 2):
                    pattern_re = '^' + base + pattern_re
                else:
                    pattern_re = r'%s%s%s' % (start, base, pattern_re[len(start):])

        return re.compile(pattern_re)

    def _glob_to_re(self, pattern):
        """Translate a shell-like glob pattern to a regular expression.

        Return a string containing the regex. Differs from
        'fnmatch.translate()' in that '*' does not match "special characters"
        (which are platform-specific).
        """
        pattern_re = fnmatch.translate(pattern)

        # '?' and '*' in the glob pattern become '.' and '.*' in the RE, which
        # IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix,
        # and by extension they shouldn't match such "special characters" under
        # any OS. So change all non-escaped dots in the RE to match any
        # character except the special characters (currently: just os.sep).
        sep = os.sep
        if os.sep == '\\':
            # we're using a regex to manipulate a regex, so we need
            # to escape the backslash twice
            sep = r'\\\\'
        escaped = r'\1[^%s]' % sep
        pattern_re = re.sub(r'((?<!\\)(\\\\)*)\.', escaped, pattern_re)
        return pattern_re
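# Illustrative usage with hypothetical paths and patterns:
#   m = Manifest('/path/to/project')
#   m.process_directive('include *.txt')
#   m.process_directive('recursive-include docs *.rst')
#   files = m.sorted()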
190  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/markers.py  vendored  new file
@@ -0,0 +1,190 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2013 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""Parser for the environment markers micro-language defined in PEP 345."""

import ast
import os
import sys
import platform

from .compat import python_implementation, string_types
from .util import in_venv

__all__ = ['interpret']


class Evaluator(object):
    """
    A limited evaluator for Python expressions.
    """

    operators = {
        'eq': lambda x, y: x == y,
        'gt': lambda x, y: x > y,
        'gte': lambda x, y: x >= y,
        'in': lambda x, y: x in y,
        'lt': lambda x, y: x < y,
        'lte': lambda x, y: x <= y,
        'not': lambda x: not x,
        'noteq': lambda x, y: x != y,
        'notin': lambda x, y: x not in y,
    }

    allowed_values = {
        'sys_platform': sys.platform,
        'python_version': '%s.%s' % sys.version_info[:2],
        # parsing sys.version is not reliable, but there is no other
        # way to get e.g. 2.7.2+, and the PEP is defined with sys.version
        'python_full_version': sys.version.split(' ', 1)[0],
        'os_name': os.name,
        'platform_in_venv': str(in_venv()),
        'platform_release': platform.release(),
        'platform_version': platform.version(),
        'platform_machine': platform.machine(),
        'platform_python_implementation': python_implementation(),
    }

    def __init__(self, context=None):
        """
        Initialise an instance.

        :param context: If specified, names are looked up in this mapping.
        """
        self.context = context or {}
        self.source = None

    def get_fragment(self, offset):
        """
        Get the part of the source which is causing a problem.
        """
        fragment_len = 10
        s = '%r' % (self.source[offset:offset + fragment_len])
        if offset + fragment_len < len(self.source):
            s += '...'
        return s

    def get_handler(self, node_type):
        """
        Get a handler for the specified AST node type.
        """
        return getattr(self, 'do_%s' % node_type, None)

    def evaluate(self, node, filename=None):
        """
        Evaluate a source string or node, using ``filename`` when
        displaying errors.
        """
        if isinstance(node, string_types):
            self.source = node
            kwargs = {'mode': 'eval'}
            if filename:
                kwargs['filename'] = filename
            try:
                node = ast.parse(node, **kwargs)
            except SyntaxError as e:
                s = self.get_fragment(e.offset)
                raise SyntaxError('syntax error %s' % s)
        node_type = node.__class__.__name__.lower()
        handler = self.get_handler(node_type)
        if handler is None:
            if self.source is None:
                s = '(source not available)'
            else:
                s = self.get_fragment(node.col_offset)
            raise SyntaxError("don't know how to evaluate %r %s" % (
                node_type, s))
        return handler(node)

    def get_attr_key(self, node):
        assert isinstance(node, ast.Attribute), 'attribute node expected'
        return '%s.%s' % (node.value.id, node.attr)

    def do_attribute(self, node):
        if not isinstance(node.value, ast.Name):
            valid = False
        else:
            key = self.get_attr_key(node)
            valid = key in self.context or key in self.allowed_values
        if not valid:
            raise SyntaxError('invalid expression: %s' % key)
        if key in self.context:
            result = self.context[key]
        else:
            result = self.allowed_values[key]
        return result

    def do_boolop(self, node):
        result = self.evaluate(node.values[0])
        is_or = node.op.__class__ is ast.Or
        is_and = node.op.__class__ is ast.And
        assert is_or or is_and
        if (is_and and result) or (is_or and not result):
            for n in node.values[1:]:
                result = self.evaluate(n)
                if (is_or and result) or (is_and and not result):
                    break
        return result

    def do_compare(self, node):
        def sanity_check(lhsnode, rhsnode):
            valid = True
            if isinstance(lhsnode, ast.Str) and isinstance(rhsnode, ast.Str):
                valid = False
            #elif (isinstance(lhsnode, ast.Attribute)
            #      and isinstance(rhsnode, ast.Attribute)):
            #    klhs = self.get_attr_key(lhsnode)
            #    krhs = self.get_attr_key(rhsnode)
            #    valid = klhs != krhs
            if not valid:
                s = self.get_fragment(node.col_offset)
                raise SyntaxError('Invalid comparison: %s' % s)

        lhsnode = node.left
        lhs = self.evaluate(lhsnode)
        result = True
        for op, rhsnode in zip(node.ops, node.comparators):
            sanity_check(lhsnode, rhsnode)
            op = op.__class__.__name__.lower()
            if op not in self.operators:
                raise SyntaxError('unsupported operation: %r' % op)
            rhs = self.evaluate(rhsnode)
            result = self.operators[op](lhs, rhs)
            if not result:
                break
            lhs = rhs
            lhsnode = rhsnode
        return result

    def do_expression(self, node):
        return self.evaluate(node.body)

    def do_name(self, node):
        valid = False
        if node.id in self.context:
            valid = True
            result = self.context[node.id]
        elif node.id in self.allowed_values:
            valid = True
            result = self.allowed_values[node.id]
        if not valid:
            raise SyntaxError('invalid expression: %s' % node.id)
        return result

    def do_str(self, node):
        return node.s


def interpret(marker, execution_context=None):
    """
    Interpret a marker and return a result depending on environment.

    :param marker: The marker to interpret.
    :type marker: str
    :param execution_context: The context used for name lookup.
    :type execution_context: mapping
    """
    return Evaluator(execution_context).evaluate(marker.strip())
|
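For orientation, here is a minimal sketch of how this evaluator is driven. It is not part of the vendored file, and the import path assumes this vendored location (with a standalone distlib install it would be distlib.markers):

from pipenv.patched.notpip._vendor.distlib.markers import interpret

# With no context, names resolve against Evaluator.allowed_values,
# i.e. the running interpreter's environment.
print(interpret("python_version >= '2.7'"))
print(interpret("os_name == 'posix' and python_version >= '3.0'"))

# An execution context takes precedence over allowed_values.
print(interpret("python_version == '9.9'", {'python_version': '9.9'}))  # True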
1068  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/metadata.py  (vendored, new file)
Diff not shown because of its large size.
355  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/resources.py  (vendored, new file)
@@ -0,0 +1,355 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2016 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from __future__ import unicode_literals

import bisect
import io
import logging
import os
import pkgutil
import shutil
import sys
import types
import zipimport

from . import DistlibException
from .util import cached_property, get_cache_base, path_to_cache_dir, Cache

logger = logging.getLogger(__name__)


cache = None    # created when needed


class ResourceCache(Cache):
    def __init__(self, base=None):
        if base is None:
            # Use native string to avoid issues on 2.x: see Python #20140.
            base = os.path.join(get_cache_base(), str('resource-cache'))
        super(ResourceCache, self).__init__(base)

    def is_stale(self, resource, path):
        """
        Is the cache stale for the given resource?

        :param resource: The :class:`Resource` being cached.
        :param path: The path of the resource in the cache.
        :return: True if the cache is stale.
        """
        # Cache invalidation is a hard problem :-)
        return True

    def get(self, resource):
        """
        Get a resource into the cache.

        :param resource: A :class:`Resource` instance.
        :return: The pathname of the resource in the cache.
        """
        prefix, path = resource.finder.get_cache_info(resource)
        if prefix is None:
            result = path
        else:
            result = os.path.join(self.base, self.prefix_to_dir(prefix), path)
            dirname = os.path.dirname(result)
            if not os.path.isdir(dirname):
                os.makedirs(dirname)
            if not os.path.exists(result):
                stale = True
            else:
                stale = self.is_stale(resource, path)
            if stale:
                # write the bytes of the resource to the cache location
                with open(result, 'wb') as f:
                    f.write(resource.bytes)
        return result


class ResourceBase(object):
    def __init__(self, finder, name):
        self.finder = finder
        self.name = name


class Resource(ResourceBase):
    """
    A class representing an in-package resource, such as a data file. This is
    not normally instantiated by user code, but rather by a
    :class:`ResourceFinder` which manages the resource.
    """
    is_container = False        # Backwards compatibility

    def as_stream(self):
        """
        Get the resource as a stream.

        This is not a property to make it obvious that it returns a new stream
        each time.
        """
        return self.finder.get_stream(self)

    @cached_property
    def file_path(self):
        global cache
        if cache is None:
            cache = ResourceCache()
        return cache.get(self)

    @cached_property
    def bytes(self):
        return self.finder.get_bytes(self)

    @cached_property
    def size(self):
        return self.finder.get_size(self)


class ResourceContainer(ResourceBase):
    is_container = True     # Backwards compatibility

    @cached_property
    def resources(self):
        return self.finder.get_resources(self)


class ResourceFinder(object):
    """
    Resource finder for file system resources.
    """

    if sys.platform.startswith('java'):
        skipped_extensions = ('.pyc', '.pyo', '.class')
    else:
        skipped_extensions = ('.pyc', '.pyo')

    def __init__(self, module):
        self.module = module
        self.loader = getattr(module, '__loader__', None)
        self.base = os.path.dirname(getattr(module, '__file__', ''))

    def _adjust_path(self, path):
        return os.path.realpath(path)

    def _make_path(self, resource_name):
        # Issue #50: need to preserve type of path on Python 2.x
        # like os.path._get_sep
        if isinstance(resource_name, bytes):    # should only happen on 2.x
            sep = b'/'
        else:
            sep = '/'
        parts = resource_name.split(sep)
        parts.insert(0, self.base)
        result = os.path.join(*parts)
        return self._adjust_path(result)

    def _find(self, path):
        return os.path.exists(path)

    def get_cache_info(self, resource):
        return None, resource.path

    def find(self, resource_name):
        path = self._make_path(resource_name)
        if not self._find(path):
            result = None
        else:
            if self._is_directory(path):
                result = ResourceContainer(self, resource_name)
            else:
                result = Resource(self, resource_name)
            result.path = path
        return result

    def get_stream(self, resource):
        return open(resource.path, 'rb')

    def get_bytes(self, resource):
        with open(resource.path, 'rb') as f:
            return f.read()

    def get_size(self, resource):
        return os.path.getsize(resource.path)

    def get_resources(self, resource):
        def allowed(f):
            return (f != '__pycache__' and not
                    f.endswith(self.skipped_extensions))
        return set([f for f in os.listdir(resource.path) if allowed(f)])

    def is_container(self, resource):
        return self._is_directory(resource.path)

    _is_directory = staticmethod(os.path.isdir)

    def iterator(self, resource_name):
        resource = self.find(resource_name)
        if resource is not None:
            todo = [resource]
            while todo:
                resource = todo.pop(0)
                yield resource
                if resource.is_container:
                    rname = resource.name
                    for name in resource.resources:
                        if not rname:
                            new_name = name
                        else:
                            new_name = '/'.join([rname, name])
                        child = self.find(new_name)
                        if child.is_container:
                            todo.append(child)
                        else:
                            yield child


class ZipResourceFinder(ResourceFinder):
    """
    Resource finder for resources in .zip files.
    """
    def __init__(self, module):
        super(ZipResourceFinder, self).__init__(module)
        archive = self.loader.archive
        self.prefix_len = 1 + len(archive)
        # PyPy doesn't have a _files attr on zipimporter, and you can't set one
        if hasattr(self.loader, '_files'):
            self._files = self.loader._files
        else:
            self._files = zipimport._zip_directory_cache[archive]
        self.index = sorted(self._files)

    def _adjust_path(self, path):
        return path

    def _find(self, path):
        path = path[self.prefix_len:]
        if path in self._files:
            result = True
        else:
            if path and path[-1] != os.sep:
                path = path + os.sep
            i = bisect.bisect(self.index, path)
            try:
                result = self.index[i].startswith(path)
            except IndexError:
                result = False
        if not result:
            logger.debug('_find failed: %r %r', path, self.loader.prefix)
        else:
            logger.debug('_find worked: %r %r', path, self.loader.prefix)
        return result

    def get_cache_info(self, resource):
        prefix = self.loader.archive
        path = resource.path[1 + len(prefix):]
        return prefix, path

    def get_bytes(self, resource):
        return self.loader.get_data(resource.path)

    def get_stream(self, resource):
        return io.BytesIO(self.get_bytes(resource))

    def get_size(self, resource):
        path = resource.path[self.prefix_len:]
        return self._files[path][3]

    def get_resources(self, resource):
        path = resource.path[self.prefix_len:]
        if path and path[-1] != os.sep:
            path += os.sep
        plen = len(path)
        result = set()
        i = bisect.bisect(self.index, path)
        while i < len(self.index):
            if not self.index[i].startswith(path):
                break
            s = self.index[i][plen:]
            result.add(s.split(os.sep, 1)[0])   # only immediate children
            i += 1
        return result

    def _is_directory(self, path):
        path = path[self.prefix_len:]
        if path and path[-1] != os.sep:
            path += os.sep
        i = bisect.bisect(self.index, path)
        try:
            result = self.index[i].startswith(path)
        except IndexError:
            result = False
        return result

_finder_registry = {
    type(None): ResourceFinder,
    zipimport.zipimporter: ZipResourceFinder
}

try:
    # In Python 3.6, _frozen_importlib -> _frozen_importlib_external
    try:
        import _frozen_importlib_external as _fi
    except ImportError:
        import _frozen_importlib as _fi
    _finder_registry[_fi.SourceFileLoader] = ResourceFinder
    _finder_registry[_fi.FileFinder] = ResourceFinder
    del _fi
except (ImportError, AttributeError):
    pass


def register_finder(loader, finder_maker):
    _finder_registry[type(loader)] = finder_maker

_finder_cache = {}


def finder(package):
    """
    Return a resource finder for a package.
    :param package: The name of the package.
    :return: A :class:`ResourceFinder` instance for the package.
    """
    if package in _finder_cache:
        result = _finder_cache[package]
    else:
        if package not in sys.modules:
            __import__(package)
        module = sys.modules[package]
        path = getattr(module, '__path__', None)
        if path is None:
            raise DistlibException('You cannot get a finder for a module, '
                                   'only for a package')
        loader = getattr(module, '__loader__', None)
        finder_maker = _finder_registry.get(type(loader))
        if finder_maker is None:
            raise DistlibException('Unable to locate finder for %r' % package)
        result = finder_maker(module)
        _finder_cache[package] = result
    return result


_dummy_module = types.ModuleType(str('__dummy__'))


def finder_for_path(path):
    """
    Return a resource finder for a path, which should represent a container.

    :param path: The path.
    :return: A :class:`ResourceFinder` instance for the path.
    """
    result = None
    # calls any path hooks, gets importer into cache
    pkgutil.get_importer(path)
    loader = sys.path_importer_cache.get(path)
    finder = _finder_registry.get(type(loader))
    if finder:
        module = _dummy_module
        module.__file__ = os.path.join(path, '')
        module.__loader__ = loader
        result = finder(module)
    return result
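A short usage sketch of the finder API above (not part of the vendored file; 'mypackage' and 'data/config.json' are hypothetical names, and the import path assumes this vendored location):

from pipenv.patched.notpip._vendor.distlib.resources import finder

f = finder('mypackage')                 # the package must be importable
r = f.find('data/config.json')          # Resource, ResourceContainer or None
if r is not None and not r.is_container:
    raw = r.bytes                       # whole payload as bytes
    with r.as_stream() as stream:       # or a fresh binary stream each call
        head = stream.read(16)
    path = r.file_path                  # a real filesystem path; extracted
                                        # into the resource cache for zips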
384  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/scripts.py  (vendored, new file)
@@ -0,0 +1,384 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2015 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from io import BytesIO
import logging
import os
import re
import struct
import sys

from .compat import sysconfig, detect_encoding, ZipFile
from .resources import finder
from .util import (FileOperator, get_export_entry, convert_path,
                   get_executable, in_venv)

logger = logging.getLogger(__name__)

_DEFAULT_MANIFEST = '''
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
 <assemblyIdentity version="1.0.0.0"
 processorArchitecture="X86"
 name="%s"
 type="win32"/>

 <!-- Identify the application security requirements. -->
 <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
 <security>
 <requestedPrivileges>
 <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
 </requestedPrivileges>
 </security>
 </trustInfo>
</assembly>'''.strip()

# check if Python is called on the first line with this expression
FIRST_LINE_RE = re.compile(b'^#!.*pythonw?[0-9.]*([ \t].*)?$')
SCRIPT_TEMPLATE = '''# -*- coding: utf-8 -*-
if __name__ == '__main__':
    import sys, re

    def _resolve(module, func):
        __import__(module)
        mod = sys.modules[module]
        parts = func.split('.')
        result = getattr(mod, parts.pop(0))
        for p in parts:
            result = getattr(result, p)
        return result

    try:
        sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])

        func = _resolve('%(module)s', '%(func)s')
        rc = func()  # None interpreted as 0
    except Exception as e:  # only supporting Python >= 2.6
        sys.stderr.write('%%s\\n' %% e)
        rc = 1
    sys.exit(rc)
'''


def _enquote_executable(executable):
    if ' ' in executable:
        # make sure we quote only the executable in case of env
        # for example /usr/bin/env "/dir with spaces/bin/jython"
        # instead of "/usr/bin/env /dir with spaces/bin/jython"
        # otherwise whole
        if executable.startswith('/usr/bin/env '):
            env, _executable = executable.split(' ', 1)
            if ' ' in _executable and not _executable.startswith('"'):
                executable = '%s "%s"' % (env, _executable)
        else:
            if not executable.startswith('"'):
                executable = '"%s"' % executable
    return executable


class ScriptMaker(object):
    """
    A class to copy or create scripts from source scripts or callable
    specifications.
    """
    script_template = SCRIPT_TEMPLATE

    executable = None  # for shebangs

    def __init__(self, source_dir, target_dir, add_launchers=True,
                 dry_run=False, fileop=None):
        self.source_dir = source_dir
        self.target_dir = target_dir
        self.add_launchers = add_launchers
        self.force = False
        self.clobber = False
        # It only makes sense to set mode bits on POSIX.
        self.set_mode = (os.name == 'posix') or (os.name == 'java' and
                                                 os._name == 'posix')
        self.variants = set(('', 'X.Y'))
        self._fileop = fileop or FileOperator(dry_run)

        self._is_nt = os.name == 'nt' or (
            os.name == 'java' and os._name == 'nt')

    def _get_alternate_executable(self, executable, options):
        if options.get('gui', False) and self._is_nt:  # pragma: no cover
            dn, fn = os.path.split(executable)
            fn = fn.replace('python', 'pythonw')
            executable = os.path.join(dn, fn)
        return executable

    if sys.platform.startswith('java'):  # pragma: no cover
        def _is_shell(self, executable):
            """
            Determine if the specified executable is a script
            (contains a #! line)
            """
            try:
                with open(executable) as fp:
                    return fp.read(2) == '#!'
            except (OSError, IOError):
                logger.warning('Failed to open %s', executable)
                return False

        def _fix_jython_executable(self, executable):
            if self._is_shell(executable):
                # Workaround for Jython is not needed on Linux systems.
                import java

                if java.lang.System.getProperty('os.name') == 'Linux':
                    return executable
            elif executable.lower().endswith('jython.exe'):
                # Use wrapper exe for Jython on Windows
                return executable
            return '/usr/bin/env %s' % executable

    def _get_shebang(self, encoding, post_interp=b'', options=None):
        enquote = True
        if self.executable:
            executable = self.executable
            enquote = False     # assume this will be taken care of
        elif not sysconfig.is_python_build():
            executable = get_executable()
        elif in_venv():  # pragma: no cover
            executable = os.path.join(sysconfig.get_path('scripts'),
                                      'python%s' % sysconfig.get_config_var('EXE'))
        else:  # pragma: no cover
            executable = os.path.join(
                sysconfig.get_config_var('BINDIR'),
                'python%s%s' % (sysconfig.get_config_var('VERSION'),
                                sysconfig.get_config_var('EXE')))
        if options:
            executable = self._get_alternate_executable(executable, options)

        if sys.platform.startswith('java'):  # pragma: no cover
            executable = self._fix_jython_executable(executable)
        # Normalise case for Windows
        executable = os.path.normcase(executable)
        # If the user didn't specify an executable, it may be necessary to
        # cater for executable paths with spaces (not uncommon on Windows)
        if enquote:
            executable = _enquote_executable(executable)
        # Issue #51: don't use fsencode, since we later try to
        # check that the shebang is decodable using utf-8.
        executable = executable.encode('utf-8')
        # in case of IronPython, play safe and enable frames support
        if (sys.platform == 'cli' and '-X:Frames' not in post_interp
                and '-X:FullFrames' not in post_interp):  # pragma: no cover
            post_interp += b' -X:Frames'
        shebang = b'#!' + executable + post_interp + b'\n'
        # Python parser starts to read a script using UTF-8 until
        # it gets a #coding:xxx cookie. The shebang has to be the
        # first line of a file, the #coding:xxx cookie cannot be
        # written before. So the shebang has to be decodable from
        # UTF-8.
        try:
            shebang.decode('utf-8')
        except UnicodeDecodeError:  # pragma: no cover
            raise ValueError(
                'The shebang (%r) is not decodable from utf-8' % shebang)
        # If the script is encoded to a custom encoding (use a
        # #coding:xxx cookie), the shebang has to be decodable from
        # the script encoding too.
        if encoding != 'utf-8':
            try:
                shebang.decode(encoding)
            except UnicodeDecodeError:  # pragma: no cover
                raise ValueError(
                    'The shebang (%r) is not decodable '
                    'from the script encoding (%r)' % (shebang, encoding))
        return shebang

    def _get_script_text(self, entry):
        return self.script_template % dict(module=entry.prefix,
                                           func=entry.suffix)

    manifest = _DEFAULT_MANIFEST

    def get_manifest(self, exename):
        base = os.path.basename(exename)
        return self.manifest % base

    def _write_script(self, names, shebang, script_bytes, filenames, ext):
        use_launcher = self.add_launchers and self._is_nt
        linesep = os.linesep.encode('utf-8')
        if not use_launcher:
            script_bytes = shebang + linesep + script_bytes
        else:  # pragma: no cover
            if ext == 'py':
                launcher = self._get_launcher('t')
            else:
                launcher = self._get_launcher('w')
            stream = BytesIO()
            with ZipFile(stream, 'w') as zf:
                zf.writestr('__main__.py', script_bytes)
            zip_data = stream.getvalue()
            script_bytes = launcher + shebang + linesep + zip_data
        for name in names:
            outname = os.path.join(self.target_dir, name)
            if use_launcher:  # pragma: no cover
                n, e = os.path.splitext(outname)
                if e.startswith('.py'):
                    outname = n
                outname = '%s.exe' % outname
                try:
                    self._fileop.write_binary_file(outname, script_bytes)
                except Exception:
                    # Failed writing an executable - it might be in use.
                    logger.warning('Failed to write executable - trying to '
                                   'use .deleteme logic')
                    dfname = '%s.deleteme' % outname
                    if os.path.exists(dfname):
                        os.remove(dfname)       # Not allowed to fail here
                    os.rename(outname, dfname)  # nor here
                    self._fileop.write_binary_file(outname, script_bytes)
                    logger.debug('Able to replace executable using '
                                 '.deleteme logic')
                    try:
                        os.remove(dfname)
                    except Exception:
                        pass    # still in use - ignore error
            else:
                if self._is_nt and not outname.endswith('.' + ext):  # pragma: no cover
                    outname = '%s.%s' % (outname, ext)
                if os.path.exists(outname) and not self.clobber:
                    logger.warning('Skipping existing file %s', outname)
                    continue
                self._fileop.write_binary_file(outname, script_bytes)
                if self.set_mode:
                    self._fileop.set_executable_mode([outname])
                filenames.append(outname)

    def _make_script(self, entry, filenames, options=None):
        post_interp = b''
        if options:
            args = options.get('interpreter_args', [])
            if args:
                args = ' %s' % ' '.join(args)
                post_interp = args.encode('utf-8')
        shebang = self._get_shebang('utf-8', post_interp, options=options)
        script = self._get_script_text(entry).encode('utf-8')
        name = entry.name
        scriptnames = set()
        if '' in self.variants:
            scriptnames.add(name)
        if 'X' in self.variants:
            scriptnames.add('%s%s' % (name, sys.version[0]))
        if 'X.Y' in self.variants:
            scriptnames.add('%s-%s' % (name, sys.version[:3]))
        if options and options.get('gui', False):
            ext = 'pyw'
        else:
            ext = 'py'
        self._write_script(scriptnames, shebang, script, filenames, ext)

    def _copy_script(self, script, filenames):
        adjust = False
        script = os.path.join(self.source_dir, convert_path(script))
        outname = os.path.join(self.target_dir, os.path.basename(script))
        if not self.force and not self._fileop.newer(script, outname):
            logger.debug('not copying %s (up-to-date)', script)
            return

        # Always open the file, but ignore failures in dry-run mode --
        # that way, we'll get accurate feedback if we can read the
        # script.
        try:
            f = open(script, 'rb')
        except IOError:  # pragma: no cover
            if not self.dry_run:
                raise
            f = None
        else:
            first_line = f.readline()
            if not first_line:  # pragma: no cover
                logger.warning('%s: %s is an empty file (skipping)',
                               self.get_command_name(), script)
                return

            match = FIRST_LINE_RE.match(first_line.replace(b'\r\n', b'\n'))
            if match:
                adjust = True
                post_interp = match.group(1) or b''

        if not adjust:
            if f:
                f.close()
            self._fileop.copy_file(script, outname)
            if self.set_mode:
                self._fileop.set_executable_mode([outname])
            filenames.append(outname)
        else:
            logger.info('copying and adjusting %s -> %s', script,
                        self.target_dir)
            if not self._fileop.dry_run:
                encoding, lines = detect_encoding(f.readline)
                f.seek(0)
                shebang = self._get_shebang(encoding, post_interp)
                if b'pythonw' in first_line:  # pragma: no cover
                    ext = 'pyw'
                else:
                    ext = 'py'
                n = os.path.basename(outname)
                self._write_script([n], shebang, f.read(), filenames, ext)
            if f:
                f.close()

    @property
    def dry_run(self):
        return self._fileop.dry_run

    @dry_run.setter
    def dry_run(self, value):
        self._fileop.dry_run = value

    if os.name == 'nt' or (os.name == 'java' and os._name == 'nt'):  # pragma: no cover
        # Executable launcher support.
        # Launchers are from https://bitbucket.org/vinay.sajip/simple_launcher/

        def _get_launcher(self, kind):
            if struct.calcsize('P') == 8:   # 64-bit
                bits = '64'
            else:
                bits = '32'
            name = '%s%s.exe' % (kind, bits)
            # Issue 31: don't hardcode an absolute package name, but
            # determine it relative to the current package
            distlib_package = __name__.rsplit('.', 1)[0]
            result = finder(distlib_package).find(name).bytes
            return result

    # Public API follows

    def make(self, specification, options=None):
        """
        Make a script.

        :param specification: The specification, which is either a valid export
                              entry specification (to make a script from a
                              callable) or a filename (to make a script by
                              copying from a source location).
        :param options: A dictionary of options controlling script generation.
        :return: A list of all absolute pathnames written to.
        """
        filenames = []
        entry = get_export_entry(specification)
        if entry is None:
            self._copy_script(specification, filenames)
        else:
            self._make_script(entry, filenames, options=options)
        return filenames

    def make_multiple(self, specifications, options=None):
        """
        Take a list of specifications and make scripts from them.
        :param specifications: A list of specifications.
        :return: A list of all absolute pathnames written to.
        """
        filenames = []
        for specification in specifications:
            filenames.extend(self.make(specification, options))
        return filenames
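A hedged sketch of the public API above (not part of the vendored file; the export entry 'foo = foo.cli:main' and the target directory are hypothetical). make() turns an export-entry specification into a launcher script, writing a shebang for the current interpreter and, on POSIX, setting the executable bit:

from pipenv.patched.notpip._vendor.distlib.scripts import ScriptMaker

maker = ScriptMaker(source_dir=None, target_dir='/tmp/bin')
maker.clobber = True             # overwrite existing files
maker.variants = set(('',))      # write only 'foo', not 'foo-X.Y' as well
written = maker.make('foo = foo.cli:main')
print(written)                   # e.g. ['/tmp/bin/foo']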
1611  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/util.py  (vendored, new file)
Diff not shown because of its large size.
742  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/version.py  (vendored, new file)
@@ -0,0 +1,742 @@
# -*- coding: utf-8 -*-
#
# Copyright (C) 2012-2016 The Python Software Foundation.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
"""
Implementation of a flexible versioning scheme providing support for PEP-440,
setuptools-compatible and semantic versioning.
"""

import logging
import re

from .compat import string_types

__all__ = ['NormalizedVersion', 'NormalizedMatcher',
           'LegacyVersion', 'LegacyMatcher',
           'SemanticVersion', 'SemanticMatcher',
           'UnsupportedVersionError', 'get_scheme']

logger = logging.getLogger(__name__)


class UnsupportedVersionError(ValueError):
    """This is an unsupported version."""
    pass


class Version(object):
    def __init__(self, s):
        self._string = s = s.strip()
        self._parts = parts = self.parse(s)
        assert isinstance(parts, tuple)
        assert len(parts) > 0

    def parse(self, s):
        raise NotImplementedError('please implement in a subclass')

    def _check_compatible(self, other):
        if type(self) != type(other):
            raise TypeError('cannot compare %r and %r' % (self, other))

    def __eq__(self, other):
        self._check_compatible(other)
        return self._parts == other._parts

    def __ne__(self, other):
        return not self.__eq__(other)

    def __lt__(self, other):
        self._check_compatible(other)
        return self._parts < other._parts

    def __gt__(self, other):
        return not (self.__lt__(other) or self.__eq__(other))

    def __le__(self, other):
        return self.__lt__(other) or self.__eq__(other)

    def __ge__(self, other):
        return self.__gt__(other) or self.__eq__(other)

    # See http://docs.python.org/reference/datamodel#object.__hash__
    def __hash__(self):
        return hash(self._parts)

    def __repr__(self):
        return "%s('%s')" % (self.__class__.__name__, self._string)

    def __str__(self):
        return self._string

    @property
    def is_prerelease(self):
        raise NotImplementedError('Please implement in subclasses.')


class Matcher(object):
    version_class = None

    dist_re = re.compile(r"^(\w[\s\w'.-]*)(\((.*)\))?")
    comp_re = re.compile(r'^(<=|>=|<|>|!=|={2,3}|~=)?\s*([^\s,]+)$')
    num_re = re.compile(r'^\d+(\.\d+)*$')

    # value is either a callable or the name of a method
    _operators = {
        '<': lambda v, c, p: v < c,
        '>': lambda v, c, p: v > c,
        '<=': lambda v, c, p: v == c or v < c,
        '>=': lambda v, c, p: v == c or v > c,
        '==': lambda v, c, p: v == c,
        '===': lambda v, c, p: v == c,
        # by default, compatible => >=.
        '~=': lambda v, c, p: v == c or v > c,
        '!=': lambda v, c, p: v != c,
    }

    def __init__(self, s):
        if self.version_class is None:
            raise ValueError('Please specify a version class')
        self._string = s = s.strip()
        m = self.dist_re.match(s)
        if not m:
            raise ValueError('Not valid: %r' % s)
        groups = m.groups('')
        self.name = groups[0].strip()
        self.key = self.name.lower()    # for case-insensitive comparisons
        clist = []
        if groups[2]:
            constraints = [c.strip() for c in groups[2].split(',')]
            for c in constraints:
                m = self.comp_re.match(c)
                if not m:
                    raise ValueError('Invalid %r in %r' % (c, s))
                groups = m.groups()
                op = groups[0] or '~='
                s = groups[1]
                if s.endswith('.*'):
                    if op not in ('==', '!='):
                        raise ValueError('\'.*\' not allowed for '
                                         '%r constraints' % op)
                    # Could be a partial version (e.g. for '2.*') which
                    # won't parse as a version, so keep it as a string
                    vn, prefix = s[:-2], True
                    if not self.num_re.match(vn):
                        # Just to check that vn is a valid version
                        self.version_class(vn)
                else:
                    # Should parse as a version, so we can create an
                    # instance for the comparison
                    vn, prefix = self.version_class(s), False
                clist.append((op, vn, prefix))
        self._parts = tuple(clist)

    def match(self, version):
        """
        Check if the provided version matches the constraints.

        :param version: The version to match against this instance.
        :type version: String or :class:`Version` instance.
        """
        if isinstance(version, string_types):
            version = self.version_class(version)
        for operator, constraint, prefix in self._parts:
            f = self._operators.get(operator)
            if isinstance(f, string_types):
                f = getattr(self, f)
            if not f:
                msg = ('%r not implemented '
                       'for %s' % (operator, self.__class__.__name__))
                raise NotImplementedError(msg)
            if not f(version, constraint, prefix):
                return False
        return True

    @property
    def exact_version(self):
        result = None
        if len(self._parts) == 1 and self._parts[0][0] in ('==', '==='):
            result = self._parts[0][1]
        return result

    def _check_compatible(self, other):
        if type(self) != type(other) or self.name != other.name:
            raise TypeError('cannot compare %s and %s' % (self, other))

    def __eq__(self, other):
        self._check_compatible(other)
        return self.key == other.key and self._parts == other._parts

    def __ne__(self, other):
        return not self.__eq__(other)

    # See http://docs.python.org/reference/datamodel#object.__hash__
    def __hash__(self):
        return hash(self.key) + hash(self._parts)

    def __repr__(self):
        return "%s(%r)" % (self.__class__.__name__, self._string)

    def __str__(self):
        return self._string


PEP440_VERSION_RE = re.compile(r'^v?(\d+!)?(\d+(\.\d+)*)((a|b|c|rc)(\d+))?'
                               r'(\.(post)(\d+))?(\.(dev)(\d+))?'
                               r'(\+([a-zA-Z\d]+(\.[a-zA-Z\d]+)?))?$')


def _pep_440_key(s):
    s = s.strip()
    m = PEP440_VERSION_RE.match(s)
    if not m:
        raise UnsupportedVersionError('Not a valid version: %s' % s)
    groups = m.groups()
    nums = tuple(int(v) for v in groups[1].split('.'))
    while len(nums) > 1 and nums[-1] == 0:
        nums = nums[:-1]

    if not groups[0]:
        epoch = 0
    else:
        epoch = int(groups[0])
    pre = groups[4:6]
    post = groups[7:9]
    dev = groups[10:12]
    local = groups[13]
    if pre == (None, None):
        pre = ()
    else:
        pre = pre[0], int(pre[1])
    if post == (None, None):
        post = ()
    else:
        post = post[0], int(post[1])
    if dev == (None, None):
        dev = ()
    else:
        dev = dev[0], int(dev[1])
    if local is None:
        local = ()
    else:
        parts = []
        for part in local.split('.'):
            # to ensure that numeric compares as > lexicographic, avoid
            # comparing them directly, but encode a tuple which ensures
            # correct sorting
            if part.isdigit():
                part = (1, int(part))
            else:
                part = (0, part)
            parts.append(part)
        local = tuple(parts)
    if not pre:
        # either before pre-release, or final release and after
        if not post and dev:
            # before pre-release
            pre = ('a', -1)     # to sort before a0
        else:
            pre = ('z',)        # to sort after all pre-releases
    # now look at the state of post and dev.
    if not post:
        post = ('_',)   # sort before 'a'
    if not dev:
        dev = ('final',)

    #print('%s -> %s' % (s, m.groups()))
    return epoch, nums, pre, post, dev, local


_normalized_key = _pep_440_key


class NormalizedVersion(Version):
    """A rational version.

    Good:
        1.2         # equivalent to "1.2.0"
        1.2.0
        1.2a1
        1.2.3a2
        1.2.3b1
        1.2.3c1
        1.2.3.4
        TODO: fill this out

    Bad:
        1           # minimum two numbers
        1.2a        # release level must have a release serial
        1.2.3b
    """
    def parse(self, s):
        result = _normalized_key(s)
        # _normalized_key loses trailing zeroes in the release
        # clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0
        # However, PEP 440 prefix matching needs it: for example,
        # (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0).
        m = PEP440_VERSION_RE.match(s)      # must succeed
        groups = m.groups()
        self._release_clause = tuple(int(v) for v in groups[1].split('.'))
        return result

    PREREL_TAGS = set(['a', 'b', 'c', 'rc', 'dev'])

    @property
    def is_prerelease(self):
        return any(t[0] in self.PREREL_TAGS for t in self._parts if t)


def _match_prefix(x, y):
    x = str(x)
    y = str(y)
    if x == y:
        return True
    if not x.startswith(y):
        return False
    n = len(y)
    return x[n] == '.'


class NormalizedMatcher(Matcher):
    version_class = NormalizedVersion

    # value is either a callable or the name of a method
    _operators = {
        '~=': '_match_compatible',
        '<': '_match_lt',
        '>': '_match_gt',
        '<=': '_match_le',
        '>=': '_match_ge',
        '==': '_match_eq',
        '===': '_match_arbitrary',
        '!=': '_match_ne',
    }

    def _adjust_local(self, version, constraint, prefix):
        if prefix:
            strip_local = '+' not in constraint and version._parts[-1]
        else:
            # both constraint and version are
            # NormalizedVersion instances.
            # If constraint does not have a local component,
            # ensure the version doesn't, either.
            strip_local = not constraint._parts[-1] and version._parts[-1]
        if strip_local:
            s = version._string.split('+', 1)[0]
            version = self.version_class(s)
        return version, constraint

    def _match_lt(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if version >= constraint:
            return False
        release_clause = constraint._release_clause
        pfx = '.'.join([str(i) for i in release_clause])
        return not _match_prefix(version, pfx)

    def _match_gt(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if version <= constraint:
            return False
        release_clause = constraint._release_clause
        pfx = '.'.join([str(i) for i in release_clause])
        return not _match_prefix(version, pfx)

    def _match_le(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        return version <= constraint

    def _match_ge(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        return version >= constraint

    def _match_eq(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if not prefix:
            result = (version == constraint)
        else:
            result = _match_prefix(version, constraint)
        return result

    def _match_arbitrary(self, version, constraint, prefix):
        return str(version) == str(constraint)

    def _match_ne(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if not prefix:
            result = (version != constraint)
        else:
            result = not _match_prefix(version, constraint)
        return result

    def _match_compatible(self, version, constraint, prefix):
        version, constraint = self._adjust_local(version, constraint, prefix)
        if version == constraint:
            return True
        if version < constraint:
            return False
#        if not prefix:
#            return True
        release_clause = constraint._release_clause
        if len(release_clause) > 1:
            release_clause = release_clause[:-1]
        pfx = '.'.join([str(i) for i in release_clause])
        return _match_prefix(version, pfx)

_REPLACEMENTS = (
    (re.compile('[.+-]$'), ''),                     # remove trailing puncts
    (re.compile(r'^[.](\d)'), r'0.\1'),             # .N -> 0.N at start
    (re.compile('^[.-]'), ''),                      # remove leading puncts
    (re.compile(r'^\((.*)\)$'), r'\1'),             # remove parentheses
    (re.compile(r'^v(ersion)?\s*(\d+)'), r'\2'),    # remove leading v(ersion)
    (re.compile(r'^r(ev)?\s*(\d+)'), r'\2'),        # remove leading r(ev)
    (re.compile('[.]{2,}'), '.'),                   # multiple runs of '.'
    (re.compile(r'\b(alfa|apha)\b'), 'alpha'),      # misspelt alpha
    (re.compile(r'\b(pre-alpha|prealpha)\b'),
        'pre.alpha'),                               # standardise
    (re.compile(r'\(beta\)$'), 'beta'),             # remove parentheses
)

_SUFFIX_REPLACEMENTS = (
    (re.compile('^[:~._+-]+'), ''),                 # remove leading puncts
    (re.compile('[,*")([\]]'), ''),                 # remove unwanted chars
    (re.compile('[~:+_ -]'), '.'),                  # replace illegal chars
    (re.compile('[.]{2,}'), '.'),                   # multiple runs of '.'
    (re.compile(r'\.$'), ''),                       # trailing '.'
)

_NUMERIC_PREFIX = re.compile(r'(\d+(\.\d+)*)')


def _suggest_semantic_version(s):
    """
    Try to suggest a semantic form for a version for which
    _suggest_normalized_version couldn't come up with anything.
    """
    result = s.strip().lower()
    for pat, repl in _REPLACEMENTS:
        result = pat.sub(repl, result)
    if not result:
        result = '0.0.0'

    # Now look for numeric prefix, and separate it out from
    # the rest.
    #import pdb; pdb.set_trace()
    m = _NUMERIC_PREFIX.match(result)
    if not m:
        prefix = '0.0.0'
        suffix = result
    else:
        prefix = m.groups()[0].split('.')
        prefix = [int(i) for i in prefix]
        while len(prefix) < 3:
            prefix.append(0)
        if len(prefix) == 3:
            suffix = result[m.end():]
        else:
            suffix = '.'.join([str(i) for i in prefix[3:]]) + result[m.end():]
            prefix = prefix[:3]
        prefix = '.'.join([str(i) for i in prefix])
        suffix = suffix.strip()
    if suffix:
        #import pdb; pdb.set_trace()
        # massage the suffix.
        for pat, repl in _SUFFIX_REPLACEMENTS:
            suffix = pat.sub(repl, suffix)

    if not suffix:
        result = prefix
    else:
        sep = '-' if 'dev' in suffix else '+'
        result = prefix + sep + suffix
    if not is_semver(result):
        result = None
    return result


def _suggest_normalized_version(s):
    """Suggest a normalized version close to the given version string.

    If you have a version string that isn't rational (i.e. NormalizedVersion
    doesn't like it) then you might be able to get an equivalent (or close)
    rational version from this function.

    This does a number of simple normalizations to the given string, based
    on observation of versions currently in use on PyPI. Given a dump of
    those versions during PyCon 2009, 4287 of them:
    - 2312 (53.93%) match NormalizedVersion without change
      with the automatic suggestion
    - 3474 (81.04%) match when using this suggestion method

    @param s {str} An irrational version string.
    @returns A rational version string, or None, if couldn't determine one.
    """
    try:
        _normalized_key(s)
        return s   # already rational
    except UnsupportedVersionError:
        pass

    rs = s.lower()

    # part of this could use maketrans
    for orig, repl in (('-alpha', 'a'), ('-beta', 'b'), ('alpha', 'a'),
                       ('beta', 'b'), ('rc', 'c'), ('-final', ''),
                       ('-pre', 'c'),
                       ('-release', ''), ('.release', ''), ('-stable', ''),
                       ('+', '.'), ('_', '.'), (' ', ''), ('.final', ''),
                       ('final', '')):
        rs = rs.replace(orig, repl)

    # if something ends with dev or pre, we add a 0
    rs = re.sub(r"pre$", r"pre0", rs)
    rs = re.sub(r"dev$", r"dev0", rs)

    # if we have something like "b-2" or "a.2" at the end of the
    # version, that is probably beta, alpha, etc
    # let's remove the dash or dot
    rs = re.sub(r"([abc]|rc)[\-\.](\d+)$", r"\1\2", rs)

    # 1.0-dev-r371 -> 1.0.dev371
    # 0.1-dev-r79 -> 0.1.dev79
    rs = re.sub(r"[\-\.](dev)[\-\.]?r?(\d+)$", r".\1\2", rs)

    # Clean: 2.0.a.3, 2.0.b1, 0.9.0~c1
    rs = re.sub(r"[.~]?([abc])\.?", r"\1", rs)

    # Clean: v0.3, v1.0
    if rs.startswith('v'):
        rs = rs[1:]

    # Clean leading '0's on numbers.
    #TODO: unintended side-effect on, e.g., "2003.05.09"
    # PyPI stats: 77 (~2%) better
    rs = re.sub(r"\b0+(\d+)(?!\d)", r"\1", rs)

    # Clean a/b/c with no version. E.g. "1.0a" -> "1.0a0". Setuptools infers
    # zero.
    # PyPI stats: 245 (7.56%) better
    rs = re.sub(r"(\d+[abc])$", r"\g<1>0", rs)

    # the 'dev-rNNN' tag is a dev tag
    rs = re.sub(r"\.?(dev-r|dev\.r)\.?(\d+)$", r".dev\2", rs)

    # clean the - when used as a pre delimiter
    rs = re.sub(r"-(a|b|c)(\d+)$", r"\1\2", rs)

    # a terminal "dev" or "devel" can be changed into ".dev0"
    rs = re.sub(r"[\.\-](dev|devel)$", r".dev0", rs)

    # a terminal "dev" can be changed into ".dev0"
    rs = re.sub(r"(?![\.\-])dev$", r".dev0", rs)

    # a terminal "final" or "stable" can be removed
    rs = re.sub(r"(final|stable)$", "", rs)

    # The 'r' and the '-' tags are post release tags
    #   0.4a1.r10       ->  0.4a1.post10
    #   0.9.33-17222    ->  0.9.33.post17222
    #   0.9.33-r17222   ->  0.9.33.post17222
    rs = re.sub(r"\.?(r|-|-r)\.?(\d+)$", r".post\2", rs)

    # Clean 'r' instead of 'dev' usage:
    #   0.9.33+r17222   ->  0.9.33.dev17222
    #   1.0dev123       ->  1.0.dev123
    #   1.0.git123      ->  1.0.dev123
    #   1.0.bzr123      ->  1.0.dev123
    #   0.1a0dev.123    ->  0.1a0.dev123
    # PyPI stats: ~150 (~4%) better
    rs = re.sub(r"\.?(dev|git|bzr)\.?(\d+)$", r".dev\2", rs)

    # Clean '.pre' (normalized from '-pre' above) instead of 'c' usage:
    #   0.2.pre1        ->  0.2c1
    #   0.2-c1          ->  0.2c1
    #   1.0preview123   ->  1.0c123
    # PyPI stats: ~21 (0.62%) better
    rs = re.sub(r"\.?(pre|preview|-c)(\d+)$", r"c\g<2>", rs)

    # Tcl/Tk uses "px" for their post release markers
    rs = re.sub(r"p(\d+)$", r".post\1", rs)

    try:
        _normalized_key(rs)
    except UnsupportedVersionError:
        rs = None
    return rs

#
#   Legacy version processing (distribute-compatible)
#

_VERSION_PART = re.compile(r'([a-z]+|\d+|[\.-])', re.I)
_VERSION_REPLACE = {
    'pre': 'c',
    'preview': 'c',
    '-': 'final-',
    'rc': 'c',
    'dev': '@',
    '': None,
    '.': None,
}


def _legacy_key(s):
    def get_parts(s):
        result = []
        for p in _VERSION_PART.split(s.lower()):
            p = _VERSION_REPLACE.get(p, p)
            if p:
                if '0' <= p[:1] <= '9':
                    p = p.zfill(8)
                else:
                    p = '*' + p
                result.append(p)
        result.append('*final')
        return result

    result = []
    for p in get_parts(s):
        if p.startswith('*'):
            if p < '*final':
                while result and result[-1] == '*final-':
                    result.pop()
            while result and result[-1] == '00000000':
                result.pop()
        result.append(p)
    return tuple(result)


class LegacyVersion(Version):
    def parse(self, s):
        return _legacy_key(s)

    @property
    def is_prerelease(self):
        result = False
        for x in self._parts:
            if (isinstance(x, string_types) and x.startswith('*') and
                x < '*final'):
                result = True
                break
        return result


class LegacyMatcher(Matcher):
    version_class = LegacyVersion

    _operators = dict(Matcher._operators)
    _operators['~='] = '_match_compatible'

    numeric_re = re.compile('^(\d+(\.\d+)*)')

    def _match_compatible(self, version, constraint, prefix):
        if version < constraint:
            return False
        m = self.numeric_re.match(str(constraint))
        if not m:
            logger.warning('Cannot compute compatible match for version %s '
                           ' and constraint %s', version, constraint)
            return True
        s = m.groups()[0]
        if '.' in s:
            s = s.rsplit('.', 1)[0]
        return _match_prefix(version, s)

#
#   Semantic versioning
#

_SEMVER_RE = re.compile(r'^(\d+)\.(\d+)\.(\d+)'
                        r'(-[a-z0-9]+(\.[a-z0-9-]+)*)?'
                        r'(\+[a-z0-9]+(\.[a-z0-9-]+)*)?$', re.I)


def is_semver(s):
    return _SEMVER_RE.match(s)


def _semantic_key(s):
    def make_tuple(s, absent):
        if s is None:
            result = (absent,)
        else:
            parts = s[1:].split('.')
            # We can't compare ints and strings on Python 3, so fudge it
            # by zero-filling numeric values to simulate a numeric comparison
            result = tuple([p.zfill(8) if p.isdigit() else p for p in parts])
        return result

    m = is_semver(s)
    if not m:
        raise UnsupportedVersionError(s)
    groups = m.groups()
    major, minor, patch = [int(i) for i in groups[:3]]
    # choose the '|' and '*' so that versions sort correctly
    pre, build = make_tuple(groups[3], '|'), make_tuple(groups[5], '*')
    return (major, minor, patch), pre, build


class SemanticVersion(Version):
    def parse(self, s):
        return _semantic_key(s)

    @property
    def is_prerelease(self):
        return self._parts[1][0] != '|'


class SemanticMatcher(Matcher):
    version_class = SemanticVersion


class VersionScheme(object):
    def __init__(self, key, matcher, suggester=None):
        self.key = key
        self.matcher = matcher
        self.suggester = suggester

    def is_valid_version(self, s):
        try:
            self.matcher.version_class(s)
            result = True
        except UnsupportedVersionError:
            result = False
        return result

    def is_valid_matcher(self, s):
        try:
            self.matcher(s)
            result = True
        except UnsupportedVersionError:
            result = False
        return result

    def is_valid_constraint_list(self, s):
        """
        Used for processing some metadata fields
        """
        return self.is_valid_matcher('dummy_name (%s)' % s)

    def suggest(self, s):
        if self.suggester is None:
            result = None
        else:
            result = self.suggester(s)
        return result

_SCHEMES = {
    'normalized': VersionScheme(_normalized_key, NormalizedMatcher,
                                _suggest_normalized_version),
    'legacy': VersionScheme(_legacy_key, LegacyMatcher, lambda self, s: s),
    'semantic': VersionScheme(_semantic_key, SemanticMatcher,
                              _suggest_semantic_version),
}

_SCHEMES['default'] = _SCHEMES['normalized']


def get_scheme(name):
    if name not in _SCHEMES:
        raise ValueError('unknown scheme name: %r' % name)
    return _SCHEMES[name]
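To make the scheme machinery above concrete, a small sketch (not part of the vendored file; the import path assumes this vendored location, and 'requests' is just a sample distribution name):

from pipenv.patched.notpip._vendor.distlib.version import get_scheme

scheme = get_scheme('normalized')                  # PEP 440 semantics
matcher = scheme.matcher('requests (>=2.0, <3.0)')
print(matcher.match('2.18.4'))                     # True
print(matcher.match('3.0.0'))                      # False
print(scheme.is_valid_version('1.0-dev-r371'))     # False: not PEP 440
print(scheme.suggest('1.0-dev-r371'))              # '1.0.dev371'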
978  third_party/python/pipenv/pipenv/patched/notpip/_vendor/distlib/wheel.py  (vendored, new file)
@ -0,0 +1,978 @@
|
|||
# -*- coding: utf-8 -*-
#
# Copyright (C) 2013-2016 Vinay Sajip.
# Licensed to the Python Software Foundation under a contributor agreement.
# See LICENSE.txt and CONTRIBUTORS.txt.
#
from __future__ import unicode_literals

import base64
import codecs
import datetime
import distutils.util
from email import message_from_file
import hashlib
import imp
import json
import logging
import os
import posixpath
import re
import shutil
import sys
import tempfile
import zipfile

from . import __version__, DistlibException
from .compat import sysconfig, ZipFile, fsdecode, text_type, filter
from .database import InstalledDistribution
from .metadata import Metadata, METADATA_FILENAME
from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache,
                   cached_property, get_cache_base, read_exports, tempdir)
from .version import NormalizedVersion, UnsupportedVersionError

logger = logging.getLogger(__name__)

cache = None    # created when needed

if hasattr(sys, 'pypy_version_info'):
    IMP_PREFIX = 'pp'
elif sys.platform.startswith('java'):
    IMP_PREFIX = 'jy'
elif sys.platform == 'cli':
    IMP_PREFIX = 'ip'
else:
    IMP_PREFIX = 'cp'

VER_SUFFIX = sysconfig.get_config_var('py_version_nodot')
if not VER_SUFFIX:   # pragma: no cover
    VER_SUFFIX = '%s%s' % sys.version_info[:2]
PYVER = 'py' + VER_SUFFIX
IMPVER = IMP_PREFIX + VER_SUFFIX

ARCH = distutils.util.get_platform().replace('-', '_').replace('.', '_')

ABI = sysconfig.get_config_var('SOABI')
if ABI and ABI.startswith('cpython-'):
    ABI = ABI.replace('cpython-', 'cp')
else:
    def _derive_abi():
        parts = ['cp', VER_SUFFIX]
        if sysconfig.get_config_var('Py_DEBUG'):
            parts.append('d')
        if sysconfig.get_config_var('WITH_PYMALLOC'):
            parts.append('m')
        if sysconfig.get_config_var('Py_UNICODE_SIZE') == 4:
            parts.append('u')
        return ''.join(parts)
    ABI = _derive_abi()
    del _derive_abi

FILENAME_RE = re.compile(r'''
(?P<nm>[^-]+)
-(?P<vn>\d+[^-]*)
(-(?P<bn>\d+[^-]*))?
-(?P<py>\w+\d+(\.\w+\d+)*)
-(?P<bi>\w+)
-(?P<ar>\w+(\.\w+)*)
\.whl$
''', re.IGNORECASE | re.VERBOSE)

NAME_VERSION_RE = re.compile(r'''
(?P<nm>[^-]+)
-(?P<vn>\d+[^-]*)
(-(?P<bn>\d+[^-]*))?$
''', re.IGNORECASE | re.VERBOSE)

SHEBANG_RE = re.compile(br'\s*#![^\r\n]*')
SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$')
SHEBANG_PYTHON = b'#!python'
SHEBANG_PYTHONW = b'#!pythonw'

if os.sep == '/':
    to_posix = lambda o: o
else:
    to_posix = lambda o: o.replace(os.sep, '/')


class Mounter(object):
    def __init__(self):
        self.impure_wheels = {}
        self.libs = {}

    def add(self, pathname, extensions):
        self.impure_wheels[pathname] = extensions
        self.libs.update(extensions)

    def remove(self, pathname):
        extensions = self.impure_wheels.pop(pathname)
        for k, v in extensions:
            if k in self.libs:
                del self.libs[k]

    def find_module(self, fullname, path=None):
        if fullname in self.libs:
            result = self
        else:
            result = None
        return result

    def load_module(self, fullname):
        if fullname in sys.modules:
            result = sys.modules[fullname]
        else:
            if fullname not in self.libs:
                raise ImportError('unable to find extension for %s' % fullname)
            result = imp.load_dynamic(fullname, self.libs[fullname])
            result.__loader__ = self
            parts = fullname.rsplit('.', 1)
            if len(parts) > 1:
                result.__package__ = parts[0]
        return result

_hook = Mounter()


class Wheel(object):
    """
    Class to build and install from Wheel files (PEP 427).
    """

    wheel_version = (1, 1)
    hash_kind = 'sha256'

    def __init__(self, filename=None, sign=False, verify=False):
        """
        Initialise an instance using a (valid) filename.
        """
        self.sign = sign
        self.should_verify = verify
        self.buildver = ''
        self.pyver = [PYVER]
        self.abi = ['none']
        self.arch = ['any']
        self.dirname = os.getcwd()
        if filename is None:
            self.name = 'dummy'
            self.version = '0.1'
            self._filename = self.filename
        else:
            m = NAME_VERSION_RE.match(filename)
            if m:
                info = m.groupdict('')
                self.name = info['nm']
                # Reinstate the local version separator
                self.version = info['vn'].replace('_', '-')
                self.buildver = info['bn']
                self._filename = self.filename
            else:
                dirname, filename = os.path.split(filename)
                m = FILENAME_RE.match(filename)
                if not m:
                    raise DistlibException('Invalid name or '
                                           'filename: %r' % filename)
                if dirname:
                    self.dirname = os.path.abspath(dirname)
                self._filename = filename
                info = m.groupdict('')
                self.name = info['nm']
                self.version = info['vn']
                self.buildver = info['bn']
                self.pyver = info['py'].split('.')
                self.abi = info['bi'].split('.')
                self.arch = info['ar'].split('.')

    @property
    def filename(self):
        """
        Build and return a filename from the various components.
        """
        if self.buildver:
            buildver = '-' + self.buildver
        else:
            buildver = ''
        pyver = '.'.join(self.pyver)
        abi = '.'.join(self.abi)
        arch = '.'.join(self.arch)
        # replace - with _ as a local version separator
        version = self.version.replace('-', '_')
        return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver,
                                         pyver, abi, arch)

    @property
    def exists(self):
        path = os.path.join(self.dirname, self.filename)
        return os.path.isfile(path)

    @property
    def tags(self):
        for pyver in self.pyver:
            for abi in self.abi:
                for arch in self.arch:
                    yield pyver, abi, arch

    @cached_property
    def metadata(self):
        pathname = os.path.join(self.dirname, self.filename)
        name_ver = '%s-%s' % (self.name, self.version)
        info_dir = '%s.dist-info' % name_ver
        wrapper = codecs.getreader('utf-8')
        with ZipFile(pathname, 'r') as zf:
            wheel_metadata = self.get_wheel_metadata(zf)
            wv = wheel_metadata['Wheel-Version'].split('.', 1)
            file_version = tuple([int(i) for i in wv])
            if file_version < (1, 1):
                fn = 'METADATA'
            else:
                fn = METADATA_FILENAME
            try:
                metadata_filename = posixpath.join(info_dir, fn)
                with zf.open(metadata_filename) as bf:
                    wf = wrapper(bf)
                    result = Metadata(fileobj=wf)
            except KeyError:
                raise ValueError('Invalid wheel, because %s is '
                                 'missing' % fn)
        return result

    def get_wheel_metadata(self, zf):
        name_ver = '%s-%s' % (self.name, self.version)
        info_dir = '%s.dist-info' % name_ver
        metadata_filename = posixpath.join(info_dir, 'WHEEL')
        with zf.open(metadata_filename) as bf:
            wf = codecs.getreader('utf-8')(bf)
            message = message_from_file(wf)
        return dict(message)

    @cached_property
    def info(self):
        pathname = os.path.join(self.dirname, self.filename)
        with ZipFile(pathname, 'r') as zf:
            result = self.get_wheel_metadata(zf)
        return result

    def process_shebang(self, data):
        m = SHEBANG_RE.match(data)
        if m:
            end = m.end()
            shebang, data_after_shebang = data[:end], data[end:]
            # Preserve any arguments after the interpreter
            if b'pythonw' in shebang.lower():
                shebang_python = SHEBANG_PYTHONW
            else:
                shebang_python = SHEBANG_PYTHON
            m = SHEBANG_DETAIL_RE.match(shebang)
            if m:
                args = b' ' + m.groups()[-1]
            else:
                args = b''
            shebang = shebang_python + args
            data = shebang + data_after_shebang
        else:
            cr = data.find(b'\r')
            lf = data.find(b'\n')
            if cr < 0 or cr > lf:
                term = b'\n'
            else:
                if data[cr:cr + 2] == b'\r\n':
                    term = b'\r\n'
                else:
                    term = b'\r'
            data = SHEBANG_PYTHON + term + data
        return data

    def get_hash(self, data, hash_kind=None):
        if hash_kind is None:
            hash_kind = self.hash_kind
        try:
            hasher = getattr(hashlib, hash_kind)
        except AttributeError:
            raise DistlibException('Unsupported hash algorithm: %r' % hash_kind)
        result = hasher(data).digest()
        result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii')
        return hash_kind, result

    def write_record(self, records, record_path, base):
        records = list(records)  # make a copy for sorting
        p = to_posix(os.path.relpath(record_path, base))
        records.append((p, '', ''))
        records.sort()
        with CSVWriter(record_path) as writer:
            for row in records:
                writer.writerow(row)

    def write_records(self, info, libdir, archive_paths):
        records = []
        distinfo, info_dir = info
        hasher = getattr(hashlib, self.hash_kind)
        for ap, p in archive_paths:
            with open(p, 'rb') as f:
                data = f.read()
            digest = '%s=%s' % self.get_hash(data)
            size = os.path.getsize(p)
            records.append((ap, digest, size))

        p = os.path.join(distinfo, 'RECORD')
        self.write_record(records, p, libdir)
        ap = to_posix(os.path.join(info_dir, 'RECORD'))
        archive_paths.append((ap, p))

    def build_zip(self, pathname, archive_paths):
        with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf:
            for ap, p in archive_paths:
                logger.debug('Wrote %s to %s in wheel', p, ap)
                zf.write(p, ap)

    def build(self, paths, tags=None, wheel_version=None):
        """
        Build a wheel from files in specified paths, and use any specified tags
        when determining the name of the wheel.
        """
        if tags is None:
            tags = {}

        libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0]
        if libkey == 'platlib':
            is_pure = 'false'
            default_pyver = [IMPVER]
            default_abi = [ABI]
            default_arch = [ARCH]
        else:
            is_pure = 'true'
            default_pyver = [PYVER]
            default_abi = ['none']
            default_arch = ['any']

        self.pyver = tags.get('pyver', default_pyver)
        self.abi = tags.get('abi', default_abi)
        self.arch = tags.get('arch', default_arch)

        libdir = paths[libkey]

        name_ver = '%s-%s' % (self.name, self.version)
        data_dir = '%s.data' % name_ver
        info_dir = '%s.dist-info' % name_ver

        archive_paths = []

        # First, stuff which is not in site-packages
        for key in ('data', 'headers', 'scripts'):
            if key not in paths:
                continue
            path = paths[key]
            if os.path.isdir(path):
                for root, dirs, files in os.walk(path):
                    for fn in files:
                        p = fsdecode(os.path.join(root, fn))
                        rp = os.path.relpath(p, path)
                        ap = to_posix(os.path.join(data_dir, key, rp))
                        archive_paths.append((ap, p))
                        if key == 'scripts' and not p.endswith('.exe'):
                            with open(p, 'rb') as f:
                                data = f.read()
                            data = self.process_shebang(data)
                            with open(p, 'wb') as f:
                                f.write(data)

        # Now, stuff which is in site-packages, other than the
        # distinfo stuff.
        path = libdir
        distinfo = None
        for root, dirs, files in os.walk(path):
            if root == path:
                # At the top level only, save distinfo for later
                # and skip it for now
                for i, dn in enumerate(dirs):
                    dn = fsdecode(dn)
                    if dn.endswith('.dist-info'):
                        distinfo = os.path.join(root, dn)
                        del dirs[i]
                        break
                assert distinfo, '.dist-info directory expected, not found'

            for fn in files:
                # comment out next suite to leave .pyc files in
                if fsdecode(fn).endswith(('.pyc', '.pyo')):
                    continue
                p = os.path.join(root, fn)
                rp = to_posix(os.path.relpath(p, path))
                archive_paths.append((rp, p))

        # Now distinfo. Assumed to be flat, i.e. os.listdir is enough.
        files = os.listdir(distinfo)
        for fn in files:
            if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'):
                p = fsdecode(os.path.join(distinfo, fn))
                ap = to_posix(os.path.join(info_dir, fn))
                archive_paths.append((ap, p))

        wheel_metadata = [
            'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version),
            'Generator: distlib %s' % __version__,
            'Root-Is-Purelib: %s' % is_pure,
        ]
        for pyver, abi, arch in self.tags:
            wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch))
        p = os.path.join(distinfo, 'WHEEL')
        with open(p, 'w') as f:
            f.write('\n'.join(wheel_metadata))
        ap = to_posix(os.path.join(info_dir, 'WHEEL'))
        archive_paths.append((ap, p))

        # Now, at last, RECORD.
        # Paths in here are archive paths - nothing else makes sense.
        self.write_records((distinfo, info_dir), libdir, archive_paths)
        # Now, ready to build the zip file
        pathname = os.path.join(self.dirname, self.filename)
        self.build_zip(pathname, archive_paths)
        return pathname

    def install(self, paths, maker, **kwargs):
        """
        Install a wheel to the specified paths. If kwarg ``warner`` is
        specified, it should be a callable, which will be called with two
        tuples indicating the wheel version of this software and the wheel
        version in the file, if there is a discrepancy in the versions.
        This can be used to issue any warnings or raise any exceptions.
        If kwarg ``lib_only`` is True, only the purelib/platlib files are
        installed, and the headers, scripts, data and dist-info metadata are
        not written.

        The return value is a :class:`InstalledDistribution` instance unless
        ``lib_only`` is True, in which case the return value is ``None``.
        """

        dry_run = maker.dry_run
        warner = kwargs.get('warner')
        lib_only = kwargs.get('lib_only', False)

        pathname = os.path.join(self.dirname, self.filename)
        name_ver = '%s-%s' % (self.name, self.version)
        data_dir = '%s.data' % name_ver
        info_dir = '%s.dist-info' % name_ver

        metadata_name = posixpath.join(info_dir, METADATA_FILENAME)
        wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
        record_name = posixpath.join(info_dir, 'RECORD')

        wrapper = codecs.getreader('utf-8')

        with ZipFile(pathname, 'r') as zf:
            with zf.open(wheel_metadata_name) as bwf:
                wf = wrapper(bwf)
                message = message_from_file(wf)
            wv = message['Wheel-Version'].split('.', 1)
            file_version = tuple([int(i) for i in wv])
            if (file_version != self.wheel_version) and warner:
                warner(self.wheel_version, file_version)

            if message['Root-Is-Purelib'] == 'true':
                libdir = paths['purelib']
            else:
                libdir = paths['platlib']

            records = {}
            with zf.open(record_name) as bf:
                with CSVReader(stream=bf) as reader:
                    for row in reader:
                        p = row[0]
                        records[p] = row

            data_pfx = posixpath.join(data_dir, '')
            info_pfx = posixpath.join(info_dir, '')
            script_pfx = posixpath.join(data_dir, 'scripts', '')

            # make a new instance rather than a copy of maker's,
            # as we mutate it
            fileop = FileOperator(dry_run=dry_run)
            fileop.record = True    # so we can rollback if needed

            bc = not sys.dont_write_bytecode    # Double negatives. Lovely!

            outfiles = []   # for RECORD writing

            # for script copying/shebang processing
            workdir = tempfile.mkdtemp()
            # set target dir later
            # we default add_launchers to False, as the
            # Python Launcher should be used instead
            maker.source_dir = workdir
            maker.target_dir = None
            try:
                for zinfo in zf.infolist():
                    arcname = zinfo.filename
                    if isinstance(arcname, text_type):
                        u_arcname = arcname
                    else:
                        u_arcname = arcname.decode('utf-8')
                    # The signature file won't be in RECORD,
                    # and we don't currently do anything with it
                    if u_arcname.endswith('/RECORD.jws'):
                        continue
                    row = records[u_arcname]
                    if row[2] and str(zinfo.file_size) != row[2]:
                        raise DistlibException('size mismatch for '
                                               '%s' % u_arcname)
                    if row[1]:
                        kind, value = row[1].split('=', 1)
                        with zf.open(arcname) as bf:
                            data = bf.read()
                        _, digest = self.get_hash(data, kind)
                        if digest != value:
                            raise DistlibException('digest mismatch for '
                                                   '%s' % arcname)

                    if lib_only and u_arcname.startswith((info_pfx, data_pfx)):
                        logger.debug('lib_only: skipping %s', u_arcname)
                        continue
                    is_script = (u_arcname.startswith(script_pfx)
                                 and not u_arcname.endswith('.exe'))

                    if u_arcname.startswith(data_pfx):
                        _, where, rp = u_arcname.split('/', 2)
                        outfile = os.path.join(paths[where], convert_path(rp))
                    else:
                        # meant for site-packages.
                        if u_arcname in (wheel_metadata_name, record_name):
                            continue
                        outfile = os.path.join(libdir, convert_path(u_arcname))
                    if not is_script:
                        with zf.open(arcname) as bf:
                            fileop.copy_stream(bf, outfile)
                        outfiles.append(outfile)
                        # Double check the digest of the written file
                        if not dry_run and row[1]:
                            with open(outfile, 'rb') as bf:
                                data = bf.read()
                                _, newdigest = self.get_hash(data, kind)
                                if newdigest != digest:
                                    raise DistlibException('digest mismatch '
                                                           'on write for '
                                                           '%s' % outfile)
                        if bc and outfile.endswith('.py'):
                            try:
                                pyc = fileop.byte_compile(outfile)
                                outfiles.append(pyc)
                            except Exception:
                                # Don't give up if byte-compilation fails,
                                # but log it and perhaps warn the user
                                logger.warning('Byte-compilation failed',
                                               exc_info=True)
                    else:
                        fn = os.path.basename(convert_path(arcname))
                        workname = os.path.join(workdir, fn)
                        with zf.open(arcname) as bf:
                            fileop.copy_stream(bf, workname)

                        dn, fn = os.path.split(outfile)
                        maker.target_dir = dn
                        filenames = maker.make(fn)
                        fileop.set_executable_mode(filenames)
                        outfiles.extend(filenames)

                if lib_only:
                    logger.debug('lib_only: returning None')
                    dist = None
                else:
                    # Generate scripts

                    # Try to get pydist.json so we can see if there are
                    # any commands to generate. If this fails (e.g. because
                    # of a legacy wheel), log a warning but don't give up.
                    commands = None
                    file_version = self.info['Wheel-Version']
                    if file_version == '1.0':
                        # Use legacy info
                        ep = posixpath.join(info_dir, 'entry_points.txt')
                        try:
                            with zf.open(ep) as bwf:
                                epdata = read_exports(bwf)
                            commands = {}
                            for key in ('console', 'gui'):
                                k = '%s_scripts' % key
                                if k in epdata:
                                    commands['wrap_%s' % key] = d = {}
                                    for v in epdata[k].values():
                                        s = '%s:%s' % (v.prefix, v.suffix)
                                        if v.flags:
                                            s += ' %s' % v.flags
                                        d[v.name] = s
                        except Exception:
                            logger.warning('Unable to read legacy script '
                                           'metadata, so cannot generate '
                                           'scripts')
                    else:
                        try:
                            with zf.open(metadata_name) as bwf:
                                wf = wrapper(bwf)
                                commands = json.load(wf).get('extensions')
                                if commands:
                                    commands = commands.get('python.commands')
                        except Exception:
                            logger.warning('Unable to read JSON metadata, so '
                                           'cannot generate scripts')
                    if commands:
                        console_scripts = commands.get('wrap_console', {})
                        gui_scripts = commands.get('wrap_gui', {})
                        if console_scripts or gui_scripts:
                            script_dir = paths.get('scripts', '')
                            if not os.path.isdir(script_dir):
                                raise ValueError('Valid script path not '
                                                 'specified')
                            maker.target_dir = script_dir
                            for k, v in console_scripts.items():
                                script = '%s = %s' % (k, v)
                                filenames = maker.make(script)
                                fileop.set_executable_mode(filenames)

                            if gui_scripts:
                                options = {'gui': True}
                                for k, v in gui_scripts.items():
                                    script = '%s = %s' % (k, v)
                                    filenames = maker.make(script, options)
                                    fileop.set_executable_mode(filenames)

                    p = os.path.join(libdir, info_dir)
                    dist = InstalledDistribution(p)

                    # Write SHARED
                    paths = dict(paths)     # don't change passed in dict
                    del paths['purelib']
                    del paths['platlib']
                    paths['lib'] = libdir
                    p = dist.write_shared_locations(paths, dry_run)
                    if p:
                        outfiles.append(p)

                    # Write RECORD
                    dist.write_installed_files(outfiles, paths['prefix'],
                                               dry_run)
                return dist
            except Exception:  # pragma: no cover
                logger.exception('installation failed.')
                fileop.rollback()
                raise
            finally:
                shutil.rmtree(workdir)

    def _get_dylib_cache(self):
        global cache
        if cache is None:
            # Use native string to avoid issues on 2.x: see Python #20140.
            base = os.path.join(get_cache_base(), str('dylib-cache'),
                                sys.version[:3])
            cache = Cache(base)
        return cache

    def _get_extensions(self):
        pathname = os.path.join(self.dirname, self.filename)
        name_ver = '%s-%s' % (self.name, self.version)
        info_dir = '%s.dist-info' % name_ver
        arcname = posixpath.join(info_dir, 'EXTENSIONS')
        wrapper = codecs.getreader('utf-8')
        result = []
        with ZipFile(pathname, 'r') as zf:
            try:
                with zf.open(arcname) as bf:
                    wf = wrapper(bf)
                    extensions = json.load(wf)
                    cache = self._get_dylib_cache()
                    prefix = cache.prefix_to_dir(pathname)
                    cache_base = os.path.join(cache.base, prefix)
                    if not os.path.isdir(cache_base):
                        os.makedirs(cache_base)
                    for name, relpath in extensions.items():
                        dest = os.path.join(cache_base, convert_path(relpath))
                        if not os.path.exists(dest):
                            extract = True
                        else:
                            file_time = os.stat(dest).st_mtime
                            file_time = datetime.datetime.fromtimestamp(file_time)
                            info = zf.getinfo(relpath)
                            wheel_time = datetime.datetime(*info.date_time)
                            extract = wheel_time > file_time
                        if extract:
                            zf.extract(relpath, cache_base)
                        result.append((name, dest))
            except KeyError:
                pass
        return result

    def is_compatible(self):
        """
        Determine if a wheel is compatible with the running system.
        """
        return is_compatible(self)

    def is_mountable(self):
        """
        Determine if a wheel is asserted as mountable by its metadata.
        """
        return True  # for now - metadata details TBD

    def mount(self, append=False):
        pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
        if not self.is_compatible():
            msg = 'Wheel %s not compatible with this Python.' % pathname
            raise DistlibException(msg)
        if not self.is_mountable():
            msg = 'Wheel %s is marked as not mountable.' % pathname
            raise DistlibException(msg)
        if pathname in sys.path:
            logger.debug('%s already in path', pathname)
        else:
            if append:
                sys.path.append(pathname)
            else:
                sys.path.insert(0, pathname)
            extensions = self._get_extensions()
            if extensions:
                if _hook not in sys.meta_path:
                    sys.meta_path.append(_hook)
                _hook.add(pathname, extensions)

    def unmount(self):
        pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
        if pathname not in sys.path:
            logger.debug('%s not in path', pathname)
        else:
            sys.path.remove(pathname)
            if pathname in _hook.impure_wheels:
                _hook.remove(pathname)
            if not _hook.impure_wheels:
                if _hook in sys.meta_path:
                    sys.meta_path.remove(_hook)

    def verify(self):
        pathname = os.path.join(self.dirname, self.filename)
        name_ver = '%s-%s' % (self.name, self.version)
        data_dir = '%s.data' % name_ver
        info_dir = '%s.dist-info' % name_ver

        metadata_name = posixpath.join(info_dir, METADATA_FILENAME)
        wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
        record_name = posixpath.join(info_dir, 'RECORD')

        wrapper = codecs.getreader('utf-8')

        with ZipFile(pathname, 'r') as zf:
            with zf.open(wheel_metadata_name) as bwf:
                wf = wrapper(bwf)
                message = message_from_file(wf)
            wv = message['Wheel-Version'].split('.', 1)
            file_version = tuple([int(i) for i in wv])
            # TODO version verification

            records = {}
            with zf.open(record_name) as bf:
                with CSVReader(stream=bf) as reader:
                    for row in reader:
                        p = row[0]
                        records[p] = row

            for zinfo in zf.infolist():
                arcname = zinfo.filename
                if isinstance(arcname, text_type):
                    u_arcname = arcname
                else:
                    u_arcname = arcname.decode('utf-8')
                if '..' in u_arcname:
                    raise DistlibException('invalid entry in '
                                           'wheel: %r' % u_arcname)

                # The signature file won't be in RECORD,
                # and we don't currently do anything with it
                if u_arcname.endswith('/RECORD.jws'):
                    continue
                row = records[u_arcname]
                if row[2] and str(zinfo.file_size) != row[2]:
                    raise DistlibException('size mismatch for '
                                           '%s' % u_arcname)
                if row[1]:
                    kind, value = row[1].split('=', 1)
                    with zf.open(arcname) as bf:
                        data = bf.read()
                    _, digest = self.get_hash(data, kind)
                    if digest != value:
                        raise DistlibException('digest mismatch for '
                                               '%s' % arcname)

    def update(self, modifier, dest_dir=None, **kwargs):
        """
        Update the contents of a wheel in a generic way. The modifier should
        be a callable which expects a dictionary argument: its keys are
        archive-entry paths, and its values are absolute filesystem paths
        where the contents of the corresponding archive entries can be found.
        The modifier is free to change the contents of the files pointed to,
        add new entries and remove entries, before returning. This method will
        extract the entire contents of the wheel to a temporary location, call
        the modifier, and then use the passed (and possibly updated)
        dictionary to write a new wheel. If ``dest_dir`` is specified, the new
        wheel is written there -- otherwise, the original wheel is overwritten.

        The modifier should return True if it updated the wheel, else False.
        This method returns the same value the modifier returns.
        """

        def get_version(path_map, info_dir):
            version = path = None
            key = '%s/%s' % (info_dir, METADATA_FILENAME)
            if key not in path_map:
                key = '%s/PKG-INFO' % info_dir
            if key in path_map:
                path = path_map[key]
                version = Metadata(path=path).version
            return version, path

        def update_version(version, path):
            updated = None
            try:
                v = NormalizedVersion(version)
                i = version.find('-')
                if i < 0:
                    updated = '%s+1' % version
                else:
                    parts = [int(s) for s in version[i + 1:].split('.')]
                    parts[-1] += 1
                    updated = '%s+%s' % (version[:i],
                                         '.'.join(str(i) for i in parts))
            except UnsupportedVersionError:
                logger.debug('Cannot update non-compliant (PEP-440) '
                             'version %r', version)
            if updated:
                md = Metadata(path=path)
                md.version = updated
                legacy = not path.endswith(METADATA_FILENAME)
                md.write(path=path, legacy=legacy)
                logger.debug('Version updated from %r to %r', version,
                             updated)

        pathname = os.path.join(self.dirname, self.filename)
        name_ver = '%s-%s' % (self.name, self.version)
        info_dir = '%s.dist-info' % name_ver
        record_name = posixpath.join(info_dir, 'RECORD')
        with tempdir() as workdir:
            with ZipFile(pathname, 'r') as zf:
                path_map = {}
                for zinfo in zf.infolist():
                    arcname = zinfo.filename
                    if isinstance(arcname, text_type):
                        u_arcname = arcname
                    else:
                        u_arcname = arcname.decode('utf-8')
                    if u_arcname == record_name:
                        continue
                    if '..' in u_arcname:
                        raise DistlibException('invalid entry in '
                                               'wheel: %r' % u_arcname)
                    zf.extract(zinfo, workdir)
                    path = os.path.join(workdir, convert_path(u_arcname))
                    path_map[u_arcname] = path

            # Remember the version.
            original_version, _ = get_version(path_map, info_dir)
            # Files extracted. Call the modifier.
            modified = modifier(path_map, **kwargs)
            if modified:
                # Something changed - need to build a new wheel.
                current_version, path = get_version(path_map, info_dir)
                if current_version and (current_version == original_version):
                    # Add or update local version to signify changes.
                    update_version(current_version, path)
                # Decide where the new wheel goes.
                if dest_dir is None:
                    fd, newpath = tempfile.mkstemp(suffix='.whl',
                                                   prefix='wheel-update-',
                                                   dir=workdir)
                    os.close(fd)
                else:
                    if not os.path.isdir(dest_dir):
                        raise DistlibException('Not a directory: %r' % dest_dir)
                    newpath = os.path.join(dest_dir, self.filename)
                archive_paths = list(path_map.items())
                distinfo = os.path.join(workdir, info_dir)
                info = distinfo, info_dir
                self.write_records(info, workdir, archive_paths)
                self.build_zip(newpath, archive_paths)
                if dest_dir is None:
                    shutil.copyfile(newpath, pathname)
        return modified
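    # Illustrative sketch (not part of the vendored file) of the modifier
    # protocol documented in update() above; 'add_build_note' and the entry
    # name it touches are hypothetical:
    #
    #   def add_build_note(path_map):
    #       # path_map: archive entry name -> extracted filesystem path
    #       with open(path_map['example/__init__.py'], 'a') as f:
    #           f.write('\n# rebuilt\n')
    #       return True   # tells update() a new wheel must be written
    #
    #   Wheel('example-1.0-py2.py3-none-any.whl').update(add_build_note)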


def compatible_tags():
    """
    Return (pyver, abi, arch) tuples compatible with this Python.
    """
    versions = [VER_SUFFIX]
    major = VER_SUFFIX[0]
    for minor in range(sys.version_info[1] - 1, - 1, -1):
        versions.append(''.join([major, str(minor)]))

    abis = []
    for suffix, _, _ in imp.get_suffixes():
        if suffix.startswith('.abi'):
            abis.append(suffix.split('.', 2)[1])
    abis.sort()
    if ABI != 'none':
        abis.insert(0, ABI)
    abis.append('none')
    result = []

    arches = [ARCH]
    if sys.platform == 'darwin':
        m = re.match('(\w+)_(\d+)_(\d+)_(\w+)$', ARCH)
        if m:
            name, major, minor, arch = m.groups()
            minor = int(minor)
            matches = [arch]
            if arch in ('i386', 'ppc'):
                matches.append('fat')
            if arch in ('i386', 'ppc', 'x86_64'):
                matches.append('fat3')
            if arch in ('ppc64', 'x86_64'):
                matches.append('fat64')
            if arch in ('i386', 'x86_64'):
                matches.append('intel')
            if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'):
                matches.append('universal')
            while minor >= 0:
                for match in matches:
                    s = '%s_%s_%s_%s' % (name, major, minor, match)
                    if s != ARCH:   # already there
                        arches.append(s)
                minor -= 1

    # Most specific - our Python version, ABI and arch
    for abi in abis:
        for arch in arches:
            result.append((''.join((IMP_PREFIX, versions[0])), abi, arch))

    # where no ABI / arch dependency, but IMP_PREFIX dependency
    for i, version in enumerate(versions):
        result.append((''.join((IMP_PREFIX, version)), 'none', 'any'))
        if i == 0:
            result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any'))

    # no IMP_PREFIX, ABI or arch dependency
    for i, version in enumerate(versions):
        result.append((''.join(('py', version)), 'none', 'any'))
        if i == 0:
            result.append((''.join(('py', version[0])), 'none', 'any'))
    return set(result)


COMPATIBLE_TAGS = compatible_tags()

del compatible_tags


def is_compatible(wheel, tags=None):
    if not isinstance(wheel, Wheel):
        wheel = Wheel(wheel)    # assume it's a filename
    result = False
    if tags is None:
        tags = COMPATIBLE_TAGS
    for ver, abi, arch in tags:
        if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch:
            result = True
            break
    return result
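A short sketch of the parsing/compatibility API defined above (illustrative;
the wheel file name is made up, and since only the filename is parsed, the
file need not exist for these calls):

    from distlib.wheel import Wheel, is_compatible

    w = Wheel('example_pkg-1.0-py2.py3-none-any.whl')
    print(w.name, w.version)   # 'example_pkg' '1.0'
    print(list(w.tags))        # [('py2', 'none', 'any'), ('py3', 'none', 'any')]
    print(is_compatible(w))    # checks the tags against COMPATIBLE_TAGS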
Diff not shown for one file because of its size.

25 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/__init__.py (vendored, new file)
@@ -0,0 +1,25 @@
"""
|
||||
HTML parsing library based on the WHATWG "HTML5"
|
||||
specification. The parser is designed to be compatible with existing
|
||||
HTML found in the wild and implements well-defined error recovery that
|
||||
is largely compatible with modern desktop web browsers.
|
||||
|
||||
Example usage:
|
||||
|
||||
import html5lib
|
||||
f = open("my_document.html")
|
||||
tree = html5lib.parse(f)
|
||||
"""
|
||||
|
||||
from __future__ import absolute_import, division, unicode_literals
|
||||
|
||||
from .html5parser import HTMLParser, parse, parseFragment
|
||||
from .treebuilders import getTreeBuilder
|
||||
from .treewalkers import getTreeWalker
|
||||
from .serializer import serialize
|
||||
|
||||
__all__ = ["HTMLParser", "parse", "parseFragment", "getTreeBuilder",
|
||||
"getTreeWalker", "serialize"]
|
||||
|
||||
# this has to be at the top level, see how setup.py parses this
|
||||
__version__ = "1.0b10"
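A minimal parse/serialize round trip with the API re-exported above (the
input string is illustrative; "etree" is the default tree builder):

    import html5lib

    tree = html5lib.parse("<p>Hello<br>world", treebuilder="etree")
    print(html5lib.serialize(tree, tree="etree"))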
288 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_ihatexml.py (vendored, new file)
@@ -0,0 +1,288 @@
from __future__ import absolute_import, division, unicode_literals

import re
import warnings

from .constants import DataLossWarning

baseChar = """
[#x0041-#x005A] | [#x0061-#x007A] | [#x00C0-#x00D6] | [#x00D8-#x00F6] |
[#x00F8-#x00FF] | [#x0100-#x0131] | [#x0134-#x013E] | [#x0141-#x0148] |
[#x014A-#x017E] | [#x0180-#x01C3] | [#x01CD-#x01F0] | [#x01F4-#x01F5] |
[#x01FA-#x0217] | [#x0250-#x02A8] | [#x02BB-#x02C1] | #x0386 |
[#x0388-#x038A] | #x038C | [#x038E-#x03A1] | [#x03A3-#x03CE] |
[#x03D0-#x03D6] | #x03DA | #x03DC | #x03DE | #x03E0 | [#x03E2-#x03F3] |
[#x0401-#x040C] | [#x040E-#x044F] | [#x0451-#x045C] | [#x045E-#x0481] |
[#x0490-#x04C4] | [#x04C7-#x04C8] | [#x04CB-#x04CC] | [#x04D0-#x04EB] |
[#x04EE-#x04F5] | [#x04F8-#x04F9] | [#x0531-#x0556] | #x0559 |
[#x0561-#x0586] | [#x05D0-#x05EA] | [#x05F0-#x05F2] | [#x0621-#x063A] |
[#x0641-#x064A] | [#x0671-#x06B7] | [#x06BA-#x06BE] | [#x06C0-#x06CE] |
[#x06D0-#x06D3] | #x06D5 | [#x06E5-#x06E6] | [#x0905-#x0939] | #x093D |
[#x0958-#x0961] | [#x0985-#x098C] | [#x098F-#x0990] | [#x0993-#x09A8] |
[#x09AA-#x09B0] | #x09B2 | [#x09B6-#x09B9] | [#x09DC-#x09DD] |
[#x09DF-#x09E1] | [#x09F0-#x09F1] | [#x0A05-#x0A0A] | [#x0A0F-#x0A10] |
[#x0A13-#x0A28] | [#x0A2A-#x0A30] | [#x0A32-#x0A33] | [#x0A35-#x0A36] |
[#x0A38-#x0A39] | [#x0A59-#x0A5C] | #x0A5E | [#x0A72-#x0A74] |
[#x0A85-#x0A8B] | #x0A8D | [#x0A8F-#x0A91] | [#x0A93-#x0AA8] |
[#x0AAA-#x0AB0] | [#x0AB2-#x0AB3] | [#x0AB5-#x0AB9] | #x0ABD | #x0AE0 |
[#x0B05-#x0B0C] | [#x0B0F-#x0B10] | [#x0B13-#x0B28] | [#x0B2A-#x0B30] |
[#x0B32-#x0B33] | [#x0B36-#x0B39] | #x0B3D | [#x0B5C-#x0B5D] |
[#x0B5F-#x0B61] | [#x0B85-#x0B8A] | [#x0B8E-#x0B90] | [#x0B92-#x0B95] |
[#x0B99-#x0B9A] | #x0B9C | [#x0B9E-#x0B9F] | [#x0BA3-#x0BA4] |
[#x0BA8-#x0BAA] | [#x0BAE-#x0BB5] | [#x0BB7-#x0BB9] | [#x0C05-#x0C0C] |
[#x0C0E-#x0C10] | [#x0C12-#x0C28] | [#x0C2A-#x0C33] | [#x0C35-#x0C39] |
[#x0C60-#x0C61] | [#x0C85-#x0C8C] | [#x0C8E-#x0C90] | [#x0C92-#x0CA8] |
[#x0CAA-#x0CB3] | [#x0CB5-#x0CB9] | #x0CDE | [#x0CE0-#x0CE1] |
[#x0D05-#x0D0C] | [#x0D0E-#x0D10] | [#x0D12-#x0D28] | [#x0D2A-#x0D39] |
[#x0D60-#x0D61] | [#x0E01-#x0E2E] | #x0E30 | [#x0E32-#x0E33] |
[#x0E40-#x0E45] | [#x0E81-#x0E82] | #x0E84 | [#x0E87-#x0E88] | #x0E8A |
#x0E8D | [#x0E94-#x0E97] | [#x0E99-#x0E9F] | [#x0EA1-#x0EA3] | #x0EA5 |
#x0EA7 | [#x0EAA-#x0EAB] | [#x0EAD-#x0EAE] | #x0EB0 | [#x0EB2-#x0EB3] |
#x0EBD | [#x0EC0-#x0EC4] | [#x0F40-#x0F47] | [#x0F49-#x0F69] |
[#x10A0-#x10C5] | [#x10D0-#x10F6] | #x1100 | [#x1102-#x1103] |
[#x1105-#x1107] | #x1109 | [#x110B-#x110C] | [#x110E-#x1112] | #x113C |
#x113E | #x1140 | #x114C | #x114E | #x1150 | [#x1154-#x1155] | #x1159 |
[#x115F-#x1161] | #x1163 | #x1165 | #x1167 | #x1169 | [#x116D-#x116E] |
[#x1172-#x1173] | #x1175 | #x119E | #x11A8 | #x11AB | [#x11AE-#x11AF] |
[#x11B7-#x11B8] | #x11BA | [#x11BC-#x11C2] | #x11EB | #x11F0 | #x11F9 |
[#x1E00-#x1E9B] | [#x1EA0-#x1EF9] | [#x1F00-#x1F15] | [#x1F18-#x1F1D] |
[#x1F20-#x1F45] | [#x1F48-#x1F4D] | [#x1F50-#x1F57] | #x1F59 | #x1F5B |
#x1F5D | [#x1F5F-#x1F7D] | [#x1F80-#x1FB4] | [#x1FB6-#x1FBC] | #x1FBE |
[#x1FC2-#x1FC4] | [#x1FC6-#x1FCC] | [#x1FD0-#x1FD3] | [#x1FD6-#x1FDB] |
[#x1FE0-#x1FEC] | [#x1FF2-#x1FF4] | [#x1FF6-#x1FFC] | #x2126 |
[#x212A-#x212B] | #x212E | [#x2180-#x2182] | [#x3041-#x3094] |
[#x30A1-#x30FA] | [#x3105-#x312C] | [#xAC00-#xD7A3]"""

ideographic = """[#x4E00-#x9FA5] | #x3007 | [#x3021-#x3029]"""

combiningCharacter = """
[#x0300-#x0345] | [#x0360-#x0361] | [#x0483-#x0486] | [#x0591-#x05A1] |
[#x05A3-#x05B9] | [#x05BB-#x05BD] | #x05BF | [#x05C1-#x05C2] | #x05C4 |
[#x064B-#x0652] | #x0670 | [#x06D6-#x06DC] | [#x06DD-#x06DF] |
[#x06E0-#x06E4] | [#x06E7-#x06E8] | [#x06EA-#x06ED] | [#x0901-#x0903] |
#x093C | [#x093E-#x094C] | #x094D | [#x0951-#x0954] | [#x0962-#x0963] |
[#x0981-#x0983] | #x09BC | #x09BE | #x09BF | [#x09C0-#x09C4] |
[#x09C7-#x09C8] | [#x09CB-#x09CD] | #x09D7 | [#x09E2-#x09E3] | #x0A02 |
#x0A3C | #x0A3E | #x0A3F | [#x0A40-#x0A42] | [#x0A47-#x0A48] |
[#x0A4B-#x0A4D] | [#x0A70-#x0A71] | [#x0A81-#x0A83] | #x0ABC |
[#x0ABE-#x0AC5] | [#x0AC7-#x0AC9] | [#x0ACB-#x0ACD] | [#x0B01-#x0B03] |
#x0B3C | [#x0B3E-#x0B43] | [#x0B47-#x0B48] | [#x0B4B-#x0B4D] |
[#x0B56-#x0B57] | [#x0B82-#x0B83] | [#x0BBE-#x0BC2] | [#x0BC6-#x0BC8] |
[#x0BCA-#x0BCD] | #x0BD7 | [#x0C01-#x0C03] | [#x0C3E-#x0C44] |
[#x0C46-#x0C48] | [#x0C4A-#x0C4D] | [#x0C55-#x0C56] | [#x0C82-#x0C83] |
[#x0CBE-#x0CC4] | [#x0CC6-#x0CC8] | [#x0CCA-#x0CCD] | [#x0CD5-#x0CD6] |
[#x0D02-#x0D03] | [#x0D3E-#x0D43] | [#x0D46-#x0D48] | [#x0D4A-#x0D4D] |
#x0D57 | #x0E31 | [#x0E34-#x0E3A] | [#x0E47-#x0E4E] | #x0EB1 |
[#x0EB4-#x0EB9] | [#x0EBB-#x0EBC] | [#x0EC8-#x0ECD] | [#x0F18-#x0F19] |
#x0F35 | #x0F37 | #x0F39 | #x0F3E | #x0F3F | [#x0F71-#x0F84] |
[#x0F86-#x0F8B] | [#x0F90-#x0F95] | #x0F97 | [#x0F99-#x0FAD] |
[#x0FB1-#x0FB7] | #x0FB9 | [#x20D0-#x20DC] | #x20E1 | [#x302A-#x302F] |
#x3099 | #x309A"""

digit = """
[#x0030-#x0039] | [#x0660-#x0669] | [#x06F0-#x06F9] | [#x0966-#x096F] |
[#x09E6-#x09EF] | [#x0A66-#x0A6F] | [#x0AE6-#x0AEF] | [#x0B66-#x0B6F] |
[#x0BE7-#x0BEF] | [#x0C66-#x0C6F] | [#x0CE6-#x0CEF] | [#x0D66-#x0D6F] |
[#x0E50-#x0E59] | [#x0ED0-#x0ED9] | [#x0F20-#x0F29]"""

extender = """
#x00B7 | #x02D0 | #x02D1 | #x0387 | #x0640 | #x0E46 | #x0EC6 | #x3005 |
#[#x3031-#x3035] | [#x309D-#x309E] | [#x30FC-#x30FE]"""

letter = " | ".join([baseChar, ideographic])

# Without the
name = " | ".join([letter, digit, ".", "-", "_", combiningCharacter,
                   extender])
nameFirst = " | ".join([letter, "_"])

reChar = re.compile(r"#x([\d|A-F]{4,4})")
reCharRange = re.compile(r"\[#x([\d|A-F]{4,4})-#x([\d|A-F]{4,4})\]")


def charStringToList(chars):
    charRanges = [item.strip() for item in chars.split(" | ")]
    rv = []
    for item in charRanges:
        foundMatch = False
        for regexp in (reChar, reCharRange):
            match = regexp.match(item)
            if match is not None:
                rv.append([hexToInt(item) for item in match.groups()])
                if len(rv[-1]) == 1:
                    rv[-1] = rv[-1] * 2
                foundMatch = True
                break
        if not foundMatch:
            assert len(item) == 1

            rv.append([ord(item)] * 2)
    rv = normaliseCharList(rv)
    return rv


def normaliseCharList(charList):
    charList = sorted(charList)
    for item in charList:
        assert item[1] >= item[0]
    rv = []
    i = 0
    while i < len(charList):
        j = 1
        rv.append(charList[i])
        while i + j < len(charList) and charList[i + j][0] <= rv[-1][1] + 1:
            rv[-1][1] = charList[i + j][1]
            j += 1
        i += j
    return rv

# We don't really support characters above the BMP :(
max_unicode = int("FFFF", 16)


def missingRanges(charList):
    rv = []
    if charList[0] != 0:
        rv.append([0, charList[0][0] - 1])
    for i, item in enumerate(charList[:-1]):
        rv.append([item[1] + 1, charList[i + 1][0] - 1])
    if charList[-1][1] != max_unicode:
        rv.append([charList[-1][1] + 1, max_unicode])
    return rv


def listToRegexpStr(charList):
    rv = []
    for item in charList:
        if item[0] == item[1]:
            rv.append(escapeRegexp(chr(item[0])))
        else:
            rv.append(escapeRegexp(chr(item[0])) + "-" +
                      escapeRegexp(chr(item[1])))
    return "[%s]" % "".join(rv)


def hexToInt(hex_str):
    return int(hex_str, 16)


def escapeRegexp(string):
    specialCharacters = (".", "^", "$", "*", "+", "?", "{", "}",
                         "[", "]", "|", "(", ")", "-")
    for char in specialCharacters:
        string = string.replace(char, "\\" + char)

    return string

# output from the above
nonXmlNameBMPRegexp = re.compile('[\x00-,/:-@\\[-\\^`\\{-\xb6\xb8-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u02cf\u02d2-\u02ff\u0346-\u035f\u0362-\u0385\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482\u0487-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u0590\u05a2\u05ba\u05be\u05c0\u05c3\u05c5-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u063f\u0653-\u065f\u066a-\u066f\u06b8-\u06b9\u06bf\u06cf\u06d4\u06e9\u06ee-\u06ef\u06fa-\u0900\u0904\u093a-\u093b\u094e-\u0950\u0955-\u0957\u0964-\u0965\u0970-\u0980\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09bb\u09bd\u09c5-\u09c6\u09c9-\u09ca\u09ce-\u09d6\u09d8-\u09db\u09de\u09e4-\u09e5\u09f2-\u0a01\u0a03-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a3b\u0a3d\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a58\u0a5d\u0a5f-\u0a65\u0a75-\u0a80\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abb\u0ac6\u0aca\u0ace-\u0adf\u0ae1-\u0ae5\u0af0-\u0b00\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3b\u0b44-\u0b46\u0b49-\u0b4a\u0b4e-\u0b55\u0b58-\u0b5b\u0b5e\u0b62-\u0b65\u0b70-\u0b81\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0bbd\u0bc3-\u0bc5\u0bc9\u0bce-\u0bd6\u0bd8-\u0be6\u0bf0-\u0c00\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c3d\u0c45\u0c49\u0c4e-\u0c54\u0c57-\u0c5f\u0c62-\u0c65\u0c70-\u0c81\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cbd\u0cc5\u0cc9\u0cce-\u0cd4\u0cd7-\u0cdd\u0cdf\u0ce2-\u0ce5\u0cf0-\u0d01\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d3d\u0d44-\u0d45\u0d49\u0d4e-\u0d56\u0d58-\u0d5f\u0d62-\u0d65\u0d70-\u0e00\u0e2f\u0e3b-\u0e3f\u0e4f\u0e5a-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eba\u0ebe-\u0ebf\u0ec5\u0ec7\u0ece-\u0ecf\u0eda-\u0f17\u0f1a-\u0f1f\u0f2a-\u0f34\u0f36\u0f38\u0f3a-\u0f3d\u0f48\u0f6a-\u0f70\u0f85\u0f8c-\u0f8f\u0f96\u0f98\u0fae-\u0fb0\u0fb8\u0fba-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u20cf\u20dd-\u20e0\u20e2-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3004\u3006\u3008-\u3020\u3030\u3036-\u3040\u3095-\u3098\u309b-\u309c\u309f-\u30a0\u30fb\u30ff-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]') # noqa
nonXmlNameFirstBMPRegexp = re.compile('[\x00-@\\[-\\^`\\{-\xbf\xd7\xf7\u0132-\u0133\u013f-\u0140\u0149\u017f\u01c4-\u01cc\u01f1-\u01f3\u01f6-\u01f9\u0218-\u024f\u02a9-\u02ba\u02c2-\u0385\u0387\u038b\u038d\u03a2\u03cf\u03d7-\u03d9\u03db\u03dd\u03df\u03e1\u03f4-\u0400\u040d\u0450\u045d\u0482-\u048f\u04c5-\u04c6\u04c9-\u04ca\u04cd-\u04cf\u04ec-\u04ed\u04f6-\u04f7\u04fa-\u0530\u0557-\u0558\u055a-\u0560\u0587-\u05cf\u05eb-\u05ef\u05f3-\u0620\u063b-\u0640\u064b-\u0670\u06b8-\u06b9\u06bf\u06cf\u06d4\u06d6-\u06e4\u06e7-\u0904\u093a-\u093c\u093e-\u0957\u0962-\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09db\u09de\u09e2-\u09ef\u09f2-\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a58\u0a5d\u0a5f-\u0a71\u0a75-\u0a84\u0a8c\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abc\u0abe-\u0adf\u0ae1-\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34-\u0b35\u0b3a-\u0b3c\u0b3e-\u0b5b\u0b5e\u0b62-\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bb6\u0bba-\u0c04\u0c0d\u0c11\u0c29\u0c34\u0c3a-\u0c5f\u0c62-\u0c84\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cdd\u0cdf\u0ce2-\u0d04\u0d0d\u0d11\u0d29\u0d3a-\u0d5f\u0d62-\u0e00\u0e2f\u0e31\u0e34-\u0e3f\u0e46-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eaf\u0eb1\u0eb4-\u0ebc\u0ebe-\u0ebf\u0ec5-\u0f3f\u0f48\u0f6a-\u109f\u10c6-\u10cf\u10f7-\u10ff\u1101\u1104\u1108\u110a\u110d\u1113-\u113b\u113d\u113f\u1141-\u114b\u114d\u114f\u1151-\u1153\u1156-\u1158\u115a-\u115e\u1162\u1164\u1166\u1168\u116a-\u116c\u116f-\u1171\u1174\u1176-\u119d\u119f-\u11a7\u11a9-\u11aa\u11ac-\u11ad\u11b0-\u11b6\u11b9\u11bb\u11c3-\u11ea\u11ec-\u11ef\u11f1-\u11f8\u11fa-\u1dff\u1e9c-\u1e9f\u1efa-\u1eff\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fbd\u1fbf-\u1fc1\u1fc5\u1fcd-\u1fcf\u1fd4-\u1fd5\u1fdc-\u1fdf\u1fed-\u1ff1\u1ff5\u1ffd-\u2125\u2127-\u2129\u212c-\u212d\u212f-\u217f\u2183-\u3006\u3008-\u3020\u302a-\u3040\u3095-\u30a0\u30fb-\u3104\u312d-\u4dff\u9fa6-\uabff\ud7a4-\uffff]') # noqa

# Simpler things
nonPubidCharRegexp = re.compile("[^\x20\x0D\x0Aa-zA-Z0-9\-\'()+,./:=?;!*#@$_%]")


class InfosetFilter(object):
    replacementRegexp = re.compile(r"U[\dA-F]{5,5}")

    def __init__(self,
                 dropXmlnsLocalName=False,
                 dropXmlnsAttrNs=False,
                 preventDoubleDashComments=False,
                 preventDashAtCommentEnd=False,
                 replaceFormFeedCharacters=True,
                 preventSingleQuotePubid=False):

        self.dropXmlnsLocalName = dropXmlnsLocalName
        self.dropXmlnsAttrNs = dropXmlnsAttrNs

        self.preventDoubleDashComments = preventDoubleDashComments
        self.preventDashAtCommentEnd = preventDashAtCommentEnd

        self.replaceFormFeedCharacters = replaceFormFeedCharacters

        self.preventSingleQuotePubid = preventSingleQuotePubid

        self.replaceCache = {}

    def coerceAttribute(self, name, namespace=None):
        if self.dropXmlnsLocalName and name.startswith("xmlns:"):
            warnings.warn("Attributes cannot begin with xmlns", DataLossWarning)
            return None
        elif (self.dropXmlnsAttrNs and
              namespace == "http://www.w3.org/2000/xmlns/"):
            warnings.warn("Attributes cannot be in the xml namespace", DataLossWarning)
            return None
        else:
            return self.toXmlName(name)

    def coerceElement(self, name):
        return self.toXmlName(name)

    def coerceComment(self, data):
        if self.preventDoubleDashComments:
            while "--" in data:
                warnings.warn("Comments cannot contain adjacent dashes", DataLossWarning)
                data = data.replace("--", "- -")
            if data.endswith("-"):
                warnings.warn("Comments cannot end in a dash", DataLossWarning)
                data += " "
        return data

    def coerceCharacters(self, data):
        if self.replaceFormFeedCharacters:
            for _ in range(data.count("\x0C")):
                warnings.warn("Text cannot contain U+000C", DataLossWarning)
            data = data.replace("\x0C", " ")
        # Other non-xml characters
        return data

    def coercePubid(self, data):
        dataOutput = data
        for char in nonPubidCharRegexp.findall(data):
            warnings.warn("Coercing non-XML pubid", DataLossWarning)
            replacement = self.getReplacementCharacter(char)
            dataOutput = dataOutput.replace(char, replacement)
        if self.preventSingleQuotePubid and dataOutput.find("'") >= 0:
            warnings.warn("Pubid cannot contain single quote", DataLossWarning)
            dataOutput = dataOutput.replace("'", self.getReplacementCharacter("'"))
        return dataOutput

    def toXmlName(self, name):
        nameFirst = name[0]
        nameRest = name[1:]
        m = nonXmlNameFirstBMPRegexp.match(nameFirst)
        if m:
            warnings.warn("Coercing non-XML name", DataLossWarning)
            nameFirstOutput = self.getReplacementCharacter(nameFirst)
        else:
            nameFirstOutput = nameFirst

        nameRestOutput = nameRest
        replaceChars = set(nonXmlNameBMPRegexp.findall(nameRest))
        for char in replaceChars:
            warnings.warn("Coercing non-XML name", DataLossWarning)
            replacement = self.getReplacementCharacter(char)
            nameRestOutput = nameRestOutput.replace(char, replacement)
        return nameFirstOutput + nameRestOutput

    def getReplacementCharacter(self, char):
        if char in self.replaceCache:
            replacement = self.replaceCache[char]
        else:
            replacement = self.escapeChar(char)
        return replacement

    def fromXmlName(self, name):
        for item in set(self.replacementRegexp.findall(name)):
            name = name.replace(item, self.unescapeChar(item))
        return name

    def escapeChar(self, char):
        replacement = "U%05X" % ord(char)
        self.replaceCache[char] = replacement
        return replacement

    def unescapeChar(self, charcode):
        return chr(int(charcode[1:], 16))
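Illustrative use of InfosetFilter from above (Python 3 shown; the escape
format is the U%05X scheme implemented by escapeChar):

    f = InfosetFilter()
    coerced = f.coerceElement("foo<bar")   # '<' is not legal in an XML name
    print(coerced)                         # fooU0003Cbar
    print(f.fromXmlName(coerced))          # foo<bar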
923 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_inputstream.py (vendored, new file)
@@ -0,0 +1,923 @@
from __future__ import absolute_import, division, unicode_literals

from pip9._vendor.six import text_type, binary_type
from pip9._vendor.six.moves import http_client, urllib

import codecs
import re

from pip9._vendor import webencodings

from .constants import EOF, spaceCharacters, asciiLetters, asciiUppercase
from .constants import ReparseException
from . import _utils

from io import StringIO

try:
    from io import BytesIO
except ImportError:
    BytesIO = StringIO

# Non-unicode versions of constants for use in the pre-parser
spaceCharactersBytes = frozenset([item.encode("ascii") for item in spaceCharacters])
asciiLettersBytes = frozenset([item.encode("ascii") for item in asciiLetters])
asciiUppercaseBytes = frozenset([item.encode("ascii") for item in asciiUppercase])
spacesAngleBrackets = spaceCharactersBytes | frozenset([b">", b"<"])

invalid_unicode_no_surrogate = "[\u0001-\u0008\u000B\u000E-\u001F\u007F-\u009F\uFDD0-\uFDEF\uFFFE\uFFFF\U0001FFFE\U0001FFFF\U0002FFFE\U0002FFFF\U0003FFFE\U0003FFFF\U0004FFFE\U0004FFFF\U0005FFFE\U0005FFFF\U0006FFFE\U0006FFFF\U0007FFFE\U0007FFFF\U0008FFFE\U0008FFFF\U0009FFFE\U0009FFFF\U000AFFFE\U000AFFFF\U000BFFFE\U000BFFFF\U000CFFFE\U000CFFFF\U000DFFFE\U000DFFFF\U000EFFFE\U000EFFFF\U000FFFFE\U000FFFFF\U0010FFFE\U0010FFFF]" # noqa
if _utils.supports_lone_surrogates:
    # Use one extra step of indirection and create surrogates with
    # eval. Not using this indirection would introduce an illegal
    # unicode literal on platforms not supporting such lone
    # surrogates.
    assert invalid_unicode_no_surrogate[-1] == "]" and invalid_unicode_no_surrogate.count("]") == 1
    invalid_unicode_re = re.compile(invalid_unicode_no_surrogate[:-1] +
                                    eval('"\\uD800-\\uDFFF"') +  # pylint:disable=eval-used
                                    "]")
else:
    invalid_unicode_re = re.compile(invalid_unicode_no_surrogate)

non_bmp_invalid_codepoints = set([0x1FFFE, 0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE,
                                  0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE, 0x5FFFF,
                                  0x6FFFE, 0x6FFFF, 0x7FFFE, 0x7FFFF, 0x8FFFE,
                                  0x8FFFF, 0x9FFFE, 0x9FFFF, 0xAFFFE, 0xAFFFF,
                                  0xBFFFE, 0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE,
                                  0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE, 0xFFFFF,
                                  0x10FFFE, 0x10FFFF])

ascii_punctuation_re = re.compile("[\u0009-\u000D\u0020-\u002F\u003A-\u0040\u005B-\u0060\u007B-\u007E]")

# Cache for charsUntil()
charsUntilRegEx = {}


class BufferedStream(object):
    """Buffering for streams that do not have buffering of their own

    The buffer is implemented as a list of chunks on the assumption that
    joining many strings will be slow since it is O(n**2)
    """

    def __init__(self, stream):
        self.stream = stream
        self.buffer = []
        self.position = [-1, 0]  # chunk number, offset

    def tell(self):
        pos = 0
        for chunk in self.buffer[:self.position[0]]:
            pos += len(chunk)
        pos += self.position[1]
        return pos

    def seek(self, pos):
        assert pos <= self._bufferedBytes()
        offset = pos
        i = 0
        while len(self.buffer[i]) < offset:
            offset -= len(self.buffer[i])
            i += 1
        self.position = [i, offset]

    def read(self, bytes):
        if not self.buffer:
            return self._readStream(bytes)
        elif (self.position[0] == len(self.buffer) and
              self.position[1] == len(self.buffer[-1])):
            return self._readStream(bytes)
        else:
            return self._readFromBuffer(bytes)

    def _bufferedBytes(self):
        return sum([len(item) for item in self.buffer])

    def _readStream(self, bytes):
        data = self.stream.read(bytes)
        self.buffer.append(data)
        self.position[0] += 1
        self.position[1] = len(data)
        return data

    def _readFromBuffer(self, bytes):
        remainingBytes = bytes
        rv = []
        bufferIndex = self.position[0]
        bufferOffset = self.position[1]
        while bufferIndex < len(self.buffer) and remainingBytes != 0:
            assert remainingBytes > 0
            bufferedData = self.buffer[bufferIndex]

            if remainingBytes <= len(bufferedData) - bufferOffset:
                bytesToRead = remainingBytes
                self.position = [bufferIndex, bufferOffset + bytesToRead]
            else:
                bytesToRead = len(bufferedData) - bufferOffset
                self.position = [bufferIndex, len(bufferedData)]
                bufferIndex += 1
            rv.append(bufferedData[bufferOffset:bufferOffset + bytesToRead])
            remainingBytes -= bytesToRead

            bufferOffset = 0

        if remainingBytes:
            rv.append(self._readStream(remainingBytes))

        return b"".join(rv)
|
||||
|
||||
|
||||
def HTMLInputStream(source, **kwargs):
|
||||
# Work around Python bug #20007: read(0) closes the connection.
|
||||
# http://bugs.python.org/issue20007
|
||||
if (isinstance(source, http_client.HTTPResponse) or
|
||||
# Also check for addinfourl wrapping HTTPResponse
|
||||
(isinstance(source, urllib.response.addbase) and
|
||||
isinstance(source.fp, http_client.HTTPResponse))):
|
||||
isUnicode = False
|
||||
elif hasattr(source, "read"):
|
||||
isUnicode = isinstance(source.read(0), text_type)
|
||||
else:
|
||||
isUnicode = isinstance(source, text_type)
|
||||
|
||||
if isUnicode:
|
||||
encodings = [x for x in kwargs if x.endswith("_encoding")]
|
||||
if encodings:
|
||||
raise TypeError("Cannot set an encoding with a unicode input, set %r" % encodings)
|
||||
|
||||
return HTMLUnicodeInputStream(source, **kwargs)
|
||||
else:
|
||||
return HTMLBinaryInputStream(source, **kwargs)
|
||||
|
||||
|
||||
class HTMLUnicodeInputStream(object):
    """Provides a unicode stream of characters to the HTMLTokenizer.

    This class takes care of character encoding and removing or replacing
    incorrect byte-sequences and also provides column and line tracking.

    """

    _defaultChunkSize = 10240

    def __init__(self, source):
        """Initialises the HTMLInputStream.

        HTMLInputStream(source, [encoding]) -> Normalized stream from source
        for use by html5lib.

        source can be either a file-object, local filename or a string.

        The optional encoding parameter must be a string that indicates
        the encoding. If specified, that encoding will be used,
        regardless of any BOM or later declaration (such as in a meta
        element)

        """

        if not _utils.supports_lone_surrogates:
            # Such platforms will have already checked for such
            # surrogate errors, so no need to do this checking.
            self.reportCharacterErrors = None
        elif len("\U0010FFFF") == 1:
            self.reportCharacterErrors = self.characterErrorsUCS4
        else:
            self.reportCharacterErrors = self.characterErrorsUCS2

        # List of where new lines occur
        self.newLines = [0]

        self.charEncoding = (lookupEncoding("utf-8"), "certain")
        self.dataStream = self.openStream(source)

        self.reset()

    def reset(self):
        self.chunk = ""
        self.chunkSize = 0
        self.chunkOffset = 0
        self.errors = []

        # number of (complete) lines in previous chunks
        self.prevNumLines = 0
        # number of columns in the last line of the previous chunk
        self.prevNumCols = 0

        # Deal with CR LF and surrogates split over chunk boundaries
        self._bufferedCharacter = None

    def openStream(self, source):
        """Produces a file object from source.

        source can be either a file object, local filename or a string.

        """
        # Already a file object
        if hasattr(source, 'read'):
            stream = source
        else:
            stream = StringIO(source)

        return stream

    def _position(self, offset):
        chunk = self.chunk
        nLines = chunk.count('\n', 0, offset)
        positionLine = self.prevNumLines + nLines
        lastLinePos = chunk.rfind('\n', 0, offset)
        if lastLinePos == -1:
            positionColumn = self.prevNumCols + offset
        else:
            positionColumn = offset - (lastLinePos + 1)
        return (positionLine, positionColumn)

    def position(self):
        """Returns (line, col) of the current position in the stream."""
        line, col = self._position(self.chunkOffset)
        return (line + 1, col)

    def char(self):
        """ Read one character from the stream or queue if available. Return
            EOF when EOF is reached.
        """
        # Read a new chunk from the input stream if necessary
        if self.chunkOffset >= self.chunkSize:
            if not self.readChunk():
                return EOF

        chunkOffset = self.chunkOffset
        char = self.chunk[chunkOffset]
        self.chunkOffset = chunkOffset + 1

        return char

    def readChunk(self, chunkSize=None):
        if chunkSize is None:
            chunkSize = self._defaultChunkSize

        self.prevNumLines, self.prevNumCols = self._position(self.chunkSize)

        self.chunk = ""
        self.chunkSize = 0
        self.chunkOffset = 0

        data = self.dataStream.read(chunkSize)

        # Deal with CR LF and surrogates broken across chunks
        if self._bufferedCharacter:
            data = self._bufferedCharacter + data
            self._bufferedCharacter = None
        elif not data:
            # We have no more data, bye-bye stream
            return False

        if len(data) > 1:
            lastv = ord(data[-1])
            if lastv == 0x0D or 0xD800 <= lastv <= 0xDBFF:
                self._bufferedCharacter = data[-1]
                data = data[:-1]

        if self.reportCharacterErrors:
            self.reportCharacterErrors(data)

        # Replace invalid characters
        data = data.replace("\r\n", "\n")
        data = data.replace("\r", "\n")

        self.chunk = data
        self.chunkSize = len(data)

        return True

    def characterErrorsUCS4(self, data):
        for _ in range(len(invalid_unicode_re.findall(data))):
            self.errors.append("invalid-codepoint")

    def characterErrorsUCS2(self, data):
        # Someone picked the wrong compile option
        # You lose
        skip = False
        for match in invalid_unicode_re.finditer(data):
            if skip:
                continue
            codepoint = ord(match.group())
            pos = match.start()
            # Pretty sure there should be endianness issues here
            if _utils.isSurrogatePair(data[pos:pos + 2]):
                # We have a surrogate pair!
                char_val = _utils.surrogatePairToCodepoint(data[pos:pos + 2])
                if char_val in non_bmp_invalid_codepoints:
                    self.errors.append("invalid-codepoint")
                skip = True
            elif (codepoint >= 0xD800 and codepoint <= 0xDFFF and
                  pos == len(data) - 1):
                self.errors.append("invalid-codepoint")
            else:
                skip = False
                self.errors.append("invalid-codepoint")

    def charsUntil(self, characters, opposite=False):
        """ Returns a string of characters from the stream up to but not
        including any character in 'characters' or EOF. 'characters' must be
        a container that supports the 'in' method and iteration over its
        characters.
        """

        # Use a cache of regexps to find the required characters
        try:
            chars = charsUntilRegEx[(characters, opposite)]
        except KeyError:
            if __debug__:
                for c in characters:
                    assert(ord(c) < 128)
            regex = "".join(["\\x%02x" % ord(c) for c in characters])
            if not opposite:
                regex = "^%s" % regex
            chars = charsUntilRegEx[(characters, opposite)] = re.compile("[%s]+" % regex)

        rv = []

        while True:
            # Find the longest matching prefix
            m = chars.match(self.chunk, self.chunkOffset)
            if m is None:
                # If nothing matched, and it wasn't because we ran out of chunk,
                # then stop
                if self.chunkOffset != self.chunkSize:
                    break
            else:
                end = m.end()
                # If not the whole chunk matched, return everything
                # up to the part that didn't match
                if end != self.chunkSize:
                    rv.append(self.chunk[self.chunkOffset:end])
                    self.chunkOffset = end
                    break
            # If the whole remainder of the chunk matched,
            # use it all and read the next chunk
            rv.append(self.chunk[self.chunkOffset:])
            if not self.readChunk():
                # Reached EOF
                break

        r = "".join(rv)
        return r

    def unget(self, char):
        # Only one character is allowed to be ungotten at once - it must
        # be consumed again before any further call to unget
        if char is not None:
            if self.chunkOffset == 0:
                # unget is called quite rarely, so it's a good idea to do
                # more work here if it saves a bit of work in the frequently
                # called char and charsUntil.
                # So, just prepend the ungotten character onto the current
                # chunk:
                self.chunk = char + self.chunk
                self.chunkSize += 1
            else:
                self.chunkOffset -= 1
                assert self.chunk[self.chunkOffset] == char
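
# Usage sketch (illustrative, not part of the vendored file): char() yields
# one character at a time, charsUntil() slices runs, and position() reports a
# 1-based line and 0-based column; import-safe via a throwaway function.
def _example_unicode_stream():
    stream = HTMLUnicodeInputStream("<p>hello\nworld</p>")
    assert stream.char() == "<"
    assert stream.charsUntil(">") == "p"    # stop before any ">"
    assert stream.position() == (1, 2)      # still on line 1, column 2
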
class HTMLBinaryInputStream(HTMLUnicodeInputStream):
    """Provides a unicode stream of characters to the HTMLTokenizer.

    This class takes care of character encoding and removing or replacing
    incorrect byte-sequences and also provides column and line tracking.

    """

    def __init__(self, source, override_encoding=None, transport_encoding=None,
                 same_origin_parent_encoding=None, likely_encoding=None,
                 default_encoding="windows-1252", useChardet=True):
        """Initialises the HTMLInputStream.

        HTMLInputStream(source, [encoding]) -> Normalized stream from source
        for use by html5lib.

        source can be either a file-object, local filename or a string.

        The optional encoding parameter must be a string that indicates
        the encoding. If specified, that encoding will be used,
        regardless of any BOM or later declaration (such as in a meta
        element)

        """
        # Raw Stream - for unicode objects this will encode to utf-8 and set
        # self.charEncoding as appropriate
        self.rawStream = self.openStream(source)

        HTMLUnicodeInputStream.__init__(self, self.rawStream)

        # Encoding Information
        # Number of bytes to use when looking for a meta element with
        # encoding information
        self.numBytesMeta = 1024
        # Number of bytes to use when attempting to detect the encoding
        # with chardet
        self.numBytesChardet = 100
        # Things from args
        self.override_encoding = override_encoding
        self.transport_encoding = transport_encoding
        self.same_origin_parent_encoding = same_origin_parent_encoding
        self.likely_encoding = likely_encoding
        self.default_encoding = default_encoding

        # Determine encoding
        self.charEncoding = self.determineEncoding(useChardet)
        assert self.charEncoding[0] is not None

        # Call superclass
        self.reset()

    def reset(self):
        self.dataStream = self.charEncoding[0].codec_info.streamreader(self.rawStream, 'replace')
        HTMLUnicodeInputStream.reset(self)

    def openStream(self, source):
        """Produces a file object from source.

        source can be either a file object, local filename or a string.

        """
        # Already a file object
        if hasattr(source, 'read'):
            stream = source
        else:
            stream = BytesIO(source)

        try:
            stream.seek(stream.tell())
        except:  # pylint:disable=bare-except
            stream = BufferedStream(stream)

        return stream

    def determineEncoding(self, chardet=True):
        # BOMs take precedence over everything
        # This will also read past the BOM if present
        charEncoding = self.detectBOM(), "certain"
        if charEncoding[0] is not None:
            return charEncoding

        # If we've been overridden, we've been overridden
        charEncoding = lookupEncoding(self.override_encoding), "certain"
        if charEncoding[0] is not None:
            return charEncoding

        # Now check the transport layer
        charEncoding = lookupEncoding(self.transport_encoding), "certain"
        if charEncoding[0] is not None:
            return charEncoding

        # Look for meta elements with encoding information
        charEncoding = self.detectEncodingMeta(), "tentative"
        if charEncoding[0] is not None:
            return charEncoding

        # Parent document encoding
        charEncoding = lookupEncoding(self.same_origin_parent_encoding), "tentative"
        if charEncoding[0] is not None and not charEncoding[0].name.startswith("utf-16"):
            return charEncoding

        # "likely" encoding
        charEncoding = lookupEncoding(self.likely_encoding), "tentative"
        if charEncoding[0] is not None:
            return charEncoding

        # Guess with chardet, if available
        if chardet:
            try:
                from chardet.universaldetector import UniversalDetector
            except ImportError:
                pass
            else:
                buffers = []
                detector = UniversalDetector()
                while not detector.done:
                    buffer = self.rawStream.read(self.numBytesChardet)
                    assert isinstance(buffer, bytes)
                    if not buffer:
                        break
                    buffers.append(buffer)
                    detector.feed(buffer)
                detector.close()
                encoding = lookupEncoding(detector.result['encoding'])
                self.rawStream.seek(0)
                if encoding is not None:
                    return encoding, "tentative"

        # Try the default encoding
        charEncoding = lookupEncoding(self.default_encoding), "tentative"
        if charEncoding[0] is not None:
            return charEncoding

        # Fallback to html5lib's default if even that hasn't worked
        return lookupEncoding("windows-1252"), "tentative"

    def changeEncoding(self, newEncoding):
        assert self.charEncoding[1] != "certain"
        newEncoding = lookupEncoding(newEncoding)
        if newEncoding is None:
            return
        if newEncoding.name in ("utf-16be", "utf-16le"):
            newEncoding = lookupEncoding("utf-8")
            assert newEncoding is not None
        elif newEncoding == self.charEncoding[0]:
            self.charEncoding = (self.charEncoding[0], "certain")
        else:
            self.rawStream.seek(0)
            self.charEncoding = (newEncoding, "certain")
            self.reset()
            raise ReparseException("Encoding changed from %s to %s" % (self.charEncoding[0], newEncoding))

    def detectBOM(self):
        """Attempts to detect a BOM at the start of the stream. If
        an encoding can be determined from the BOM return the name of the
        encoding otherwise return None"""
        bomDict = {
            codecs.BOM_UTF8: 'utf-8',
            codecs.BOM_UTF16_LE: 'utf-16le', codecs.BOM_UTF16_BE: 'utf-16be',
            codecs.BOM_UTF32_LE: 'utf-32le', codecs.BOM_UTF32_BE: 'utf-32be'
        }

        # Go to beginning of file and read in 4 bytes
        string = self.rawStream.read(4)
        assert isinstance(string, bytes)

        # Try detecting the BOM using bytes from the string
        encoding = bomDict.get(string[:3])  # UTF-8
        seek = 3
        if not encoding:
            # Need to detect UTF-32 before UTF-16
            encoding = bomDict.get(string)  # UTF-32
            seek = 4
            if not encoding:
                encoding = bomDict.get(string[:2])  # UTF-16
                seek = 2

        # Set the read position past the BOM if one was found, otherwise
        # set it to the start of the stream
        if encoding:
            self.rawStream.seek(seek)
            return lookupEncoding(encoding)
        else:
            self.rawStream.seek(0)
            return None

    def detectEncodingMeta(self):
        """Report the encoding declared by the meta element
        """
        buffer = self.rawStream.read(self.numBytesMeta)
        assert isinstance(buffer, bytes)
        parser = EncodingParser(buffer)
        self.rawStream.seek(0)
        encoding = parser.getEncoding()

        if encoding is not None and encoding.name in ("utf-16be", "utf-16le"):
            encoding = lookupEncoding("utf-8")

        return encoding
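
# Usage sketch (illustrative, not part of the vendored file): detection
# precedence is BOM > override > transport > <meta> prescan > same-origin
# parent > likely > chardet > default; a BOM always wins and is "certain".
def _example_encoding_precedence():
    bom_utf8 = b"\xef\xbb\xbf<html></html>"
    stream = HTMLBinaryInputStream(bom_utf8, transport_encoding="latin-1")
    encoding, confidence = stream.charEncoding
    assert encoding.name == "utf-8" and confidence == "certain"
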
class EncodingBytes(bytes):
    """String-like object with an associated position and various extra methods
    If the position is ever greater than the string length then an exception is
    raised"""
    def __new__(self, value):
        assert isinstance(value, bytes)
        return bytes.__new__(self, value.lower())

    def __init__(self, value):
        # pylint:disable=unused-argument
        self._position = -1

    def __iter__(self):
        return self

    def __next__(self):
        p = self._position = self._position + 1
        if p >= len(self):
            raise StopIteration
        elif p < 0:
            raise TypeError
        return self[p:p + 1]

    def next(self):
        # Py2 compat
        return self.__next__()

    def previous(self):
        p = self._position
        if p >= len(self):
            raise StopIteration
        elif p < 0:
            raise TypeError
        self._position = p = p - 1
        return self[p:p + 1]

    def setPosition(self, position):
        if self._position >= len(self):
            raise StopIteration
        self._position = position

    def getPosition(self):
        if self._position >= len(self):
            raise StopIteration
        if self._position >= 0:
            return self._position
        else:
            return None

    position = property(getPosition, setPosition)

    def getCurrentByte(self):
        return self[self.position:self.position + 1]

    currentByte = property(getCurrentByte)

    def skip(self, chars=spaceCharactersBytes):
        """Skip past a list of characters"""
        p = self.position  # use property for the error-checking
        while p < len(self):
            c = self[p:p + 1]
            if c not in chars:
                self._position = p
                return c
            p += 1
        self._position = p
        return None

    def skipUntil(self, chars):
        p = self.position
        while p < len(self):
            c = self[p:p + 1]
            if c in chars:
                self._position = p
                return c
            p += 1
        self._position = p
        return None

    def matchBytes(self, bytes):
        """Look for a sequence of bytes at the start of a string. If the bytes
        are found return True and advance the position to the byte after the
        match. Otherwise return False and leave the position alone"""
        p = self.position
        data = self[p:p + len(bytes)]
        rv = data.startswith(bytes)
        if rv:
            self.position += len(bytes)
        return rv

    def jumpTo(self, bytes):
        """Look for the next sequence of bytes matching a given sequence. If
        a match is found advance the position to the last byte of the match"""
        newPosition = self[self.position:].find(bytes)
        if newPosition > -1:
            # XXX: This is ugly, but I can't see a nicer way to fix this.
            if self._position == -1:
                self._position = 0
            self._position += (newPosition + len(bytes) - 1)
            return True
        else:
            raise StopIteration
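
# Usage sketch (illustrative, not part of the vendored file): EncodingBytes is
# a lowercased bytes subclass with a cursor, used by the meta prescan below.
def _example_encoding_bytes():
    data = EncodingBytes(b"<META Charset=utf-8>")
    assert next(data) == b"<"          # iteration advances the cursor
    assert data.matchBytes(b"<meta")   # input was lowercased on creation
    assert data.skip() == b"c"         # skip spaces; lands on "charset"
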
class EncodingParser(object):
    """Mini parser for detecting character encoding from meta elements"""

    def __init__(self, data):
        """string - the data to work on for encoding detection"""
        self.data = EncodingBytes(data)
        self.encoding = None

    def getEncoding(self):
        methodDispatch = (
            (b"<!--", self.handleComment),
            (b"<meta", self.handleMeta),
            (b"</", self.handlePossibleEndTag),
            (b"<!", self.handleOther),
            (b"<?", self.handleOther),
            (b"<", self.handlePossibleStartTag))
        for _ in self.data:
            keepParsing = True
            for key, method in methodDispatch:
                if self.data.matchBytes(key):
                    try:
                        keepParsing = method()
                        break
                    except StopIteration:
                        keepParsing = False
                        break
            if not keepParsing:
                break

        return self.encoding

    def handleComment(self):
        """Skip over comments"""
        return self.data.jumpTo(b"-->")

    def handleMeta(self):
        if self.data.currentByte not in spaceCharactersBytes:
            # <meta was not followed by a space, so just keep going
            return True
        # We have a valid meta element we want to search for attributes
        hasPragma = False
        pendingEncoding = None
        while True:
            # Try to find the next attribute after the current position
            attr = self.getAttribute()
            if attr is None:
                return True
            else:
                if attr[0] == b"http-equiv":
                    hasPragma = attr[1] == b"content-type"
                    if hasPragma and pendingEncoding is not None:
                        self.encoding = pendingEncoding
                        return False
                elif attr[0] == b"charset":
                    tentativeEncoding = attr[1]
                    codec = lookupEncoding(tentativeEncoding)
                    if codec is not None:
                        self.encoding = codec
                        return False
                elif attr[0] == b"content":
                    contentParser = ContentAttrParser(EncodingBytes(attr[1]))
                    tentativeEncoding = contentParser.parse()
                    if tentativeEncoding is not None:
                        codec = lookupEncoding(tentativeEncoding)
                        if codec is not None:
                            if hasPragma:
                                self.encoding = codec
                                return False
                            else:
                                pendingEncoding = codec

    def handlePossibleStartTag(self):
        return self.handlePossibleTag(False)

    def handlePossibleEndTag(self):
        next(self.data)
        return self.handlePossibleTag(True)

    def handlePossibleTag(self, endTag):
        data = self.data
        if data.currentByte not in asciiLettersBytes:
            # If the next byte is not an ascii letter either ignore this
            # fragment (possible start tag case) or treat it according to
            # handleOther
            if endTag:
                data.previous()
                self.handleOther()
            return True

        c = data.skipUntil(spacesAngleBrackets)
        if c == b"<":
            # return to the first step in the overall "two step" algorithm
            # reprocessing the < byte
            data.previous()
        else:
            # Read all attributes
            attr = self.getAttribute()
            while attr is not None:
                attr = self.getAttribute()
        return True

    def handleOther(self):
        return self.data.jumpTo(b">")

    def getAttribute(self):
        """Return a name,value pair for the next attribute in the stream,
        if one is found, or None"""
        data = self.data
        # Step 1 (skip chars)
        c = data.skip(spaceCharactersBytes | frozenset([b"/"]))
        assert c is None or len(c) == 1
        # Step 2
        if c in (b">", None):
            return None
        # Step 3
        attrName = []
        attrValue = []
        # Step 4 attribute name
        while True:
            if c == b"=" and attrName:
                break
            elif c in spaceCharactersBytes:
                # Step 6!
                c = data.skip()
                break
            elif c in (b"/", b">"):
                return b"".join(attrName), b""
            elif c in asciiUppercaseBytes:
                attrName.append(c.lower())
            elif c is None:
                return None
            else:
                attrName.append(c)
            # Step 5
            c = next(data)
        # Step 7
        if c != b"=":
            data.previous()
            return b"".join(attrName), b""
        # Step 8
        next(data)
        # Step 9
        c = data.skip()
        # Step 10
        if c in (b"'", b'"'):
            # 10.1
            quoteChar = c
            while True:
                # 10.2
                c = next(data)
                # 10.3
                if c == quoteChar:
                    next(data)
                    return b"".join(attrName), b"".join(attrValue)
                # 10.4
                elif c in asciiUppercaseBytes:
                    attrValue.append(c.lower())
                # 10.5
                else:
                    attrValue.append(c)
        elif c == b">":
            return b"".join(attrName), b""
        elif c in asciiUppercaseBytes:
            attrValue.append(c.lower())
        elif c is None:
            return None
        else:
            attrValue.append(c)
        # Step 11
        while True:
            c = next(data)
            if c in spacesAngleBrackets:
                return b"".join(attrName), b"".join(attrValue)
            elif c in asciiUppercaseBytes:
                attrValue.append(c.lower())
            elif c is None:
                return None
            else:
                attrValue.append(c)
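
# Usage sketch (illustrative, not part of the vendored file): the prescan
# parser pulls a codec out of the first kilobyte of markup.
def _example_encoding_parser():
    parser = EncodingParser(b'<html><meta charset="UTF-8"></html>')
    codec = parser.getEncoding()
    assert codec is not None and codec.name == "utf-8"
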
class ContentAttrParser(object):
    def __init__(self, data):
        assert isinstance(data, bytes)
        self.data = data

    def parse(self):
        try:
            # Check if the attr name is charset
            # otherwise return
            self.data.jumpTo(b"charset")
            self.data.position += 1
            self.data.skip()
            if not self.data.currentByte == b"=":
                # If there is no = sign keep looking for attrs
                return None
            self.data.position += 1
            self.data.skip()
            # Look for an encoding between matching quote marks
            if self.data.currentByte in (b'"', b"'"):
                quoteMark = self.data.currentByte
                self.data.position += 1
                oldPosition = self.data.position
                if self.data.jumpTo(quoteMark):
                    return self.data[oldPosition:self.data.position]
                else:
                    return None
            else:
                # Unquoted value
                oldPosition = self.data.position
                try:
                    self.data.skipUntil(spaceCharactersBytes)
                    return self.data[oldPosition:self.data.position]
                except StopIteration:
                    # Return the whole remaining value
                    return self.data[oldPosition:]
        except StopIteration:
            return None
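
# Usage sketch (illustrative, not part of the vendored file): pulls the
# charset token out of a content="..." pragma value.
def _example_content_attr_parser():
    value = ContentAttrParser(EncodingBytes(b"text/html; charset=utf-8"))
    assert value.parse() == b"utf-8"
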

def lookupEncoding(encoding):
    """Return the python codec name corresponding to an encoding or None if the
    string doesn't correspond to a valid encoding."""
    if isinstance(encoding, binary_type):
        try:
            encoding = encoding.decode("ascii")
        except UnicodeDecodeError:
            return None

    if encoding is not None:
        try:
            return webencodings.lookup(encoding)
        except AttributeError:
            return None
    else:
        return None
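
# Usage sketch (illustrative, not part of the vendored file): labels resolve
# through the WHATWG registry in webencodings, so aliases map to canonical
# names and unknown labels come back as None.
def _example_lookup_encoding():
    assert lookupEncoding(b"LATIN1").name == "windows-1252"
    assert lookupEncoding("no-such-encoding") is None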
1721  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_tokenizer.py  (vendored, new file; diff not shown because of its size)

14  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_trie/__init__.py  (vendored, new file)
@@ -0,0 +1,14 @@
from __future__ import absolute_import, division, unicode_literals

from .py import Trie as PyTrie

Trie = PyTrie

# pylint:disable=wrong-import-position
try:
    from .datrie import Trie as DATrie
except ImportError:
    pass
else:
    Trie = DATrie
# pylint:enable=wrong-import-position
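
# Usage sketch (illustrative, not part of the vendored file): callers import
# Trie from this package and transparently get the C-accelerated datrie
# implementation when the optional datrie package is installed.
def _example_trie_selection():
    t = Trie({"abc": 1, "abd": 2})
    assert t.has_keys_with_prefix("ab")
    assert t.longest_prefix("abcde") == "abc"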
38  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_trie/_base.py  (vendored, new file)
@@ -0,0 +1,38 @@
from __future__ import absolute_import, division, unicode_literals

from collections import Mapping


class Trie(Mapping):
    """Abstract base class for tries"""

    def keys(self, prefix=None):
        # pylint:disable=arguments-differ
        keys = super(Trie, self).keys()

        if prefix is None:
            return set(keys)

        # Python 2.6: no set comprehensions
        return set([x for x in keys if x.startswith(prefix)])

    def has_keys_with_prefix(self, prefix):
        for key in self.keys():
            if key.startswith(prefix):
                return True

        return False

    def longest_prefix(self, prefix):
        if prefix in self:
            return prefix

        for i in range(1, len(prefix) + 1):
            if prefix[:-i] in self:
                return prefix[:-i]

        raise KeyError(prefix)

    def longest_prefix_item(self, prefix):
        lprefix = self.longest_prefix(prefix)
        return (lprefix, self[lprefix])
44  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_trie/datrie.py  (vendored, new file)
@@ -0,0 +1,44 @@
from __future__ import absolute_import, division, unicode_literals

from datrie import Trie as DATrie
from pip9._vendor.six import text_type

from ._base import Trie as ABCTrie


class Trie(ABCTrie):
    def __init__(self, data):
        chars = set()
        for key in data.keys():
            if not isinstance(key, text_type):
                raise TypeError("All keys must be strings")
            for char in key:
                chars.add(char)

        self._data = DATrie("".join(chars))
        for key, value in data.items():
            self._data[key] = value

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        raise NotImplementedError()

    def __getitem__(self, key):
        return self._data[key]

    def keys(self, prefix=None):
        return self._data.keys(prefix)

    def has_keys_with_prefix(self, prefix):
        return self._data.has_keys_with_prefix(prefix)

    def longest_prefix(self, prefix):
        return self._data.longest_prefix(prefix)

    def longest_prefix_item(self, prefix):
        return self._data.longest_prefix_item(prefix)
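
# Usage sketch (illustrative, not part of the vendored file): this module only
# imports when the optional third-party datrie package is available, so guard
# any direct use accordingly.
def _example_datrie_guarded():
    try:
        import datrie  # optional C extension; assumed installed
    except ImportError:
        return  # fall back to the pure-Python trie in .py
    t = Trie({"foo": 1})
    assert t.longest_prefix("foobar") == "foo"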
67  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_trie/py.py  (vendored, new file)
@@ -0,0 +1,67 @@
from __future__ import absolute_import, division, unicode_literals
from pip9._vendor.six import text_type

from bisect import bisect_left

from ._base import Trie as ABCTrie


class Trie(ABCTrie):
    def __init__(self, data):
        if not all(isinstance(x, text_type) for x in data.keys()):
            raise TypeError("All keys must be strings")

        self._data = data
        self._keys = sorted(data.keys())
        self._cachestr = ""
        self._cachepoints = (0, len(data))

    def __contains__(self, key):
        return key in self._data

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        return iter(self._data)

    def __getitem__(self, key):
        return self._data[key]

    def keys(self, prefix=None):
        if prefix is None or prefix == "" or not self._keys:
            return set(self._keys)

        if prefix.startswith(self._cachestr):
            lo, hi = self._cachepoints
            start = i = bisect_left(self._keys, prefix, lo, hi)
        else:
            start = i = bisect_left(self._keys, prefix)

        keys = set()
        if start == len(self._keys):
            return keys

        # Bound the scan; the original indexed past the end of the key list
        # when every remaining key matched the prefix.
        while i < len(self._keys) and self._keys[i].startswith(prefix):
            keys.add(self._keys[i])
            i += 1

        self._cachestr = prefix
        self._cachepoints = (start, i)

        return keys

    def has_keys_with_prefix(self, prefix):
        if prefix in self._data:
            return True

        if prefix.startswith(self._cachestr):
            lo, hi = self._cachepoints
            i = bisect_left(self._keys, prefix, lo, hi)
        else:
            i = bisect_left(self._keys, prefix)

        if i == len(self._keys):
            return False

        return self._keys[i].startswith(prefix)
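
# Usage sketch (illustrative, not part of the vendored file): prefix queries
# bisect a sorted key list, and the last (prefix -> slice) result is cached to
# speed up the tokenizer's incremental longer-prefix probes.
def _example_py_trie():
    t = Trie({"amp": "&", "amp;": "&"})
    assert t.keys("amp") == {"amp", "amp;"}
    assert t.has_keys_with_prefix("am")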
127  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/_utils.py  (vendored, new file)
@@ -0,0 +1,127 @@
from __future__ import absolute_import, division, unicode_literals

import sys
from types import ModuleType

from pip9._vendor.six import text_type

try:
    import xml.etree.cElementTree as default_etree
except ImportError:
    import xml.etree.ElementTree as default_etree


__all__ = ["default_etree", "MethodDispatcher", "isSurrogatePair",
           "surrogatePairToCodepoint", "moduleFactoryFactory",
           "supports_lone_surrogates", "PY27"]


PY27 = sys.version_info[0] == 2 and sys.version_info[1] >= 7

# Platforms not supporting lone surrogates (\uD800-\uDFFF) should be
# caught by the below test. In general this would be any platform
# using UTF-16 as its encoding of unicode strings, such as
# Jython. This is because UTF-16 itself is based on the use of such
# surrogates, and there is no mechanism to further escape such
# escapes.
try:
    _x = eval('"\\uD800"')  # pylint:disable=eval-used
    if not isinstance(_x, text_type):
        # We need this with u"" because of http://bugs.jython.org/issue2039
        _x = eval('u"\\uD800"')  # pylint:disable=eval-used
        assert isinstance(_x, text_type)
except:  # pylint:disable=bare-except
    supports_lone_surrogates = False
else:
    supports_lone_surrogates = True


class MethodDispatcher(dict):
    """Dict with 2 special properties:

    On initiation, keys that are lists, sets or tuples are converted to
    multiple keys so accessing any one of the items in the original
    list-like object returns the matching value

    md = MethodDispatcher({("foo", "bar"):"baz"})
    md["foo"] == "baz"

    A default value which can be set through the default attribute.
    """

    def __init__(self, items=()):
        # Using _dictEntries instead of directly assigning to self is about
        # twice as fast. Please do careful performance testing before changing
        # anything here.
        _dictEntries = []
        for name, value in items:
            if isinstance(name, (list, tuple, frozenset, set)):
                for item in name:
                    _dictEntries.append((item, value))
            else:
                _dictEntries.append((name, value))
        dict.__init__(self, _dictEntries)
        assert len(self) == len(_dictEntries)
        self.default = None

    def __getitem__(self, key):
        return dict.get(self, key, self.default)


# Some utility functions to deal with weirdness around UCS2 vs UCS4
# python builds

def isSurrogatePair(data):
    return (len(data) == 2 and
            ord(data[0]) >= 0xD800 and ord(data[0]) <= 0xDBFF and
            ord(data[1]) >= 0xDC00 and ord(data[1]) <= 0xDFFF)


def surrogatePairToCodepoint(data):
    char_val = (0x10000 + (ord(data[0]) - 0xD800) * 0x400 +
                (ord(data[1]) - 0xDC00))
    return char_val

# Module Factory Factory (no, this isn't Java, I know)
# Here to stop this being duplicated all over the place.


def moduleFactoryFactory(factory):
    moduleCache = {}

    def moduleFactory(baseModule, *args, **kwargs):
        if isinstance(ModuleType.__name__, type("")):
            name = "_%s_factory" % baseModule.__name__
        else:
            name = b"_%s_factory" % baseModule.__name__

        kwargs_tuple = tuple(kwargs.items())

        try:
            return moduleCache[name][args][kwargs_tuple]
        except KeyError:
            mod = ModuleType(name)
            objs = factory(baseModule, *args, **kwargs)
            mod.__dict__.update(objs)
            # Cache on the actual key values; the original tested the literal
            # strings "name"/"args"/"kwargs" here, which reset the cache on
            # every miss.
            if name not in moduleCache:
                moduleCache[name] = {}
            if args not in moduleCache[name]:
                moduleCache[name][args] = {}
            moduleCache[name][args][kwargs_tuple] = mod
            return mod

    return moduleFactory


def memoize(func):
    cache = {}

    def wrapped(*args, **kwargs):
        key = (tuple(args), tuple(kwargs.items()))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapped
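
# Usage sketch (illustrative, not part of the vendored file): MethodDispatcher
# fans tuple keys out to individual entries and falls back to .default.
def _example_method_dispatcher():
    md = MethodDispatcher([(("foo", "bar"), "baz")])
    md.default = "missing"
    assert md["foo"] == "baz" and md["quux"] == "missing"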
2945  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/constants.py  (vendored, new file; diff not shown because of its size)

0  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/__init__.py  (vendored, new empty file)

20  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/alphabeticalattributes.py  (vendored, new file)
@@ -0,0 +1,20 @@
from __future__ import absolute_import, division, unicode_literals

from . import base

try:
    from collections import OrderedDict
except ImportError:
    from ordereddict import OrderedDict


class Filter(base.Filter):
    def __iter__(self):
        for token in base.Filter.__iter__(self):
            if token["type"] in ("StartTag", "EmptyTag"):
                attrs = OrderedDict()
                for name, value in sorted(token["data"].items(),
                                          key=lambda x: x[0]):
                    attrs[name] = value
                token["data"] = attrs
            yield token
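
# Usage sketch (illustrative, not part of the vendored file): attributes keyed
# by (namespace, name) are re-emitted in sorted order for stable serialization.
def _example_alphabetical_attributes():
    tokens = [{"type": "StartTag", "name": "a",
               "data": {(None, "href"): "x", (None, "class"): "y"}}]
    out = next(iter(Filter(tokens)))
    assert list(out["data"]) == [(None, "class"), (None, "href")]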
12  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/base.py  (vendored, new file)
@@ -0,0 +1,12 @@
from __future__ import absolute_import, division, unicode_literals


class Filter(object):
    def __init__(self, source):
        self.source = source

    def __iter__(self):
        return iter(self.source)

    def __getattr__(self, name):
        return getattr(self.source, name)
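
# Usage sketch (illustrative, not part of the vendored file): filters wrap any
# token iterable and can be stacked; unknown attributes proxy to the source.
def _example_custom_filter():
    class Uppercase(Filter):
        def __iter__(self):
            for token in Filter.__iter__(self):
                if token["type"] == "Characters":
                    token = dict(token, data=token["data"].upper())
                yield token

    tokens = [{"type": "Characters", "data": "hi"}]
    assert list(Uppercase(tokens))[0]["data"] == "HI"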
65  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/inject_meta_charset.py  (vendored, new file)
@@ -0,0 +1,65 @@
from __future__ import absolute_import, division, unicode_literals

from . import base


class Filter(base.Filter):
    def __init__(self, source, encoding):
        base.Filter.__init__(self, source)
        self.encoding = encoding

    def __iter__(self):
        state = "pre_head"
        meta_found = (self.encoding is None)
        pending = []

        for token in base.Filter.__iter__(self):
            type = token["type"]
            if type == "StartTag":
                if token["name"].lower() == "head":
                    state = "in_head"

            elif type == "EmptyTag":
                if token["name"].lower() == "meta":
                    # replace charset with actual encoding
                    has_http_equiv_content_type = False
                    for (namespace, name), value in token["data"].items():
                        if namespace is not None:
                            continue
                        elif name.lower() == 'charset':
                            token["data"][(namespace, name)] = self.encoding
                            meta_found = True
                            break
                        elif name == 'http-equiv' and value.lower() == 'content-type':
                            has_http_equiv_content_type = True
                    else:
                        if has_http_equiv_content_type and (None, "content") in token["data"]:
                            token["data"][(None, "content")] = 'text/html; charset=%s' % self.encoding
                            meta_found = True

                elif token["name"].lower() == "head" and not meta_found:
                    # insert meta into empty head
                    yield {"type": "StartTag", "name": "head",
                           "data": token["data"]}
                    yield {"type": "EmptyTag", "name": "meta",
                           "data": {(None, "charset"): self.encoding}}
                    yield {"type": "EndTag", "name": "head"}
                    meta_found = True
                    continue

            elif type == "EndTag":
                if token["name"].lower() == "head" and pending:
                    # insert meta into head (if necessary) and flush pending queue
                    yield pending.pop(0)
                    if not meta_found:
                        yield {"type": "EmptyTag", "name": "meta",
                               "data": {(None, "charset"): self.encoding}}
                    while pending:
                        yield pending.pop(0)
                    meta_found = True
                    state = "post_head"

            if state == "in_head":
                pending.append(token)
            else:
                yield token
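
# Usage sketch (illustrative, not part of the vendored file): an existing
# <meta charset> EmptyTag has its charset attribute rewritten in place.
def _example_inject_meta_charset():
    tokens = [{"type": "EmptyTag", "name": "meta",
               "data": {(None, "charset"): "ascii"}}]
    out = list(Filter(tokens, "utf-8"))
    assert out[0]["data"][(None, "charset")] == "utf-8"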
81  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/lint.py  (vendored, new file)
@@ -0,0 +1,81 @@
from __future__ import absolute_import, division, unicode_literals

from pip9._vendor.six import text_type

from . import base
from ..constants import namespaces, voidElements

from ..constants import spaceCharacters
spaceCharacters = "".join(spaceCharacters)


class Filter(base.Filter):
    def __init__(self, source, require_matching_tags=True):
        super(Filter, self).__init__(source)
        self.require_matching_tags = require_matching_tags

    def __iter__(self):
        open_elements = []
        for token in base.Filter.__iter__(self):
            type = token["type"]
            if type in ("StartTag", "EmptyTag"):
                namespace = token["namespace"]
                name = token["name"]
                assert namespace is None or isinstance(namespace, text_type)
                assert namespace != ""
                assert isinstance(name, text_type)
                assert name != ""
                assert isinstance(token["data"], dict)
                if (not namespace or namespace == namespaces["html"]) and name in voidElements:
                    assert type == "EmptyTag"
                else:
                    assert type == "StartTag"
                if type == "StartTag" and self.require_matching_tags:
                    open_elements.append((namespace, name))
                for (namespace, name), value in token["data"].items():
                    assert namespace is None or isinstance(namespace, text_type)
                    assert namespace != ""
                    assert isinstance(name, text_type)
                    assert name != ""
                    assert isinstance(value, text_type)

            elif type == "EndTag":
                namespace = token["namespace"]
                name = token["name"]
                assert namespace is None or isinstance(namespace, text_type)
                assert namespace != ""
                assert isinstance(name, text_type)
                assert name != ""
                if (not namespace or namespace == namespaces["html"]) and name in voidElements:
                    assert False, "Void element reported as EndTag token: %(tag)s" % {"tag": name}
                elif self.require_matching_tags:
                    start = open_elements.pop()
                    assert start == (namespace, name)

            elif type == "Comment":
                data = token["data"]
                assert isinstance(data, text_type)

            elif type in ("Characters", "SpaceCharacters"):
                data = token["data"]
                assert isinstance(data, text_type)
                assert data != ""
                if type == "SpaceCharacters":
                    assert data.strip(spaceCharacters) == ""

            elif type == "Doctype":
                name = token["name"]
                assert name is None or isinstance(name, text_type)
                # check publicId/systemId themselves; the original re-checked
                # `name` in both asserts
                assert token["publicId"] is None or isinstance(token["publicId"], text_type)
                assert token["systemId"] is None or isinstance(token["systemId"], text_type)

            elif type == "Entity":
                assert isinstance(token["name"], text_type)

            elif type == "SerializerError":
                assert isinstance(token["data"], text_type)

            else:
                assert False, "Unknown token type: %(type)s" % {"type": type}

            yield token
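
# Usage sketch (illustrative, not part of the vendored file): the lint filter
# passes tokens through unchanged while asserting stream invariants, so a
# malformed stream trips an AssertionError.
def _example_lint_filter():
    good = [{"type": "StartTag", "namespace": None, "name": "div", "data": {}},
            {"type": "EndTag", "namespace": None, "name": "div"}]
    assert len(list(Filter(good))) == 2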
206  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/optionaltags.py  (vendored, new file)
@@ -0,0 +1,206 @@
from __future__ import absolute_import, division, unicode_literals

from . import base


class Filter(base.Filter):
    def slider(self):
        previous1 = previous2 = None
        for token in self.source:
            if previous1 is not None:
                yield previous2, previous1, token
            previous2 = previous1
            previous1 = token
        if previous1 is not None:
            yield previous2, previous1, None

    def __iter__(self):
        for previous, token, next in self.slider():
            type = token["type"]
            if type == "StartTag":
                if (token["data"] or
                        not self.is_optional_start(token["name"], previous, next)):
                    yield token
            elif type == "EndTag":
                if not self.is_optional_end(token["name"], next):
                    yield token
            else:
                yield token

    def is_optional_start(self, tagname, previous, next):
        type = next and next["type"] or None
        if tagname in 'html':
            # An html element's start tag may be omitted if the first thing
            # inside the html element is not a space character or a comment.
            return type not in ("Comment", "SpaceCharacters")
        elif tagname == 'head':
            # A head element's start tag may be omitted if the first thing
            # inside the head element is an element.
            # XXX: we also omit the start tag if the head element is empty
            if type in ("StartTag", "EmptyTag"):
                return True
            elif type == "EndTag":
                return next["name"] == "head"
        elif tagname == 'body':
            # A body element's start tag may be omitted if the first thing
            # inside the body element is not a space character or a comment,
            # except if the first thing inside the body element is a script
            # or style element and the node immediately preceding the body
            # element is a head element whose end tag has been omitted.
            if type in ("Comment", "SpaceCharacters"):
                return False
            elif type == "StartTag":
                # XXX: we do not look at the preceding event, so we never omit
                # the body element's start tag if it's followed by a script or
                # a style element.
                return next["name"] not in ('script', 'style')
            else:
                return True
        elif tagname == 'colgroup':
            # A colgroup element's start tag may be omitted if the first thing
            # inside the colgroup element is a col element, and if the element
            # is not immediately preceded by another colgroup element whose
            # end tag has been omitted.
            if type in ("StartTag", "EmptyTag"):
                # XXX: we do not look at the preceding event, so instead we never
                # omit the colgroup element's end tag when it is immediately
                # followed by another colgroup element. See is_optional_end.
                return next["name"] == "col"
            else:
                return False
        elif tagname == 'tbody':
            # A tbody element's start tag may be omitted if the first thing
            # inside the tbody element is a tr element, and if the element is
            # not immediately preceded by a tbody, thead, or tfoot element
            # whose end tag has been omitted.
            if type == "StartTag":
                # omit the thead and tfoot elements' end tag when they are
                # immediately followed by a tbody element. See is_optional_end.
                if previous and previous['type'] == 'EndTag' and \
                        previous['name'] in ('tbody', 'thead', 'tfoot'):
                    return False
                return next["name"] == 'tr'
            else:
                return False
        return False

    def is_optional_end(self, tagname, next):
        type = next and next["type"] or None
        if tagname in ('html', 'head', 'body'):
            # An html element's end tag may be omitted if the html element
            # is not immediately followed by a space character or a comment.
            return type not in ("Comment", "SpaceCharacters")
        elif tagname in ('li', 'optgroup', 'tr'):
            # A li element's end tag may be omitted if the li element is
            # immediately followed by another li element or if there is
            # no more content in the parent element.
            # An optgroup element's end tag may be omitted if the optgroup
            # element is immediately followed by another optgroup element,
            # or if there is no more content in the parent element.
            # A tr element's end tag may be omitted if the tr element is
            # immediately followed by another tr element, or if there is
            # no more content in the parent element.
            if type == "StartTag":
                return next["name"] == tagname
            else:
                return type == "EndTag" or type is None
        elif tagname in ('dt', 'dd'):
            # A dt element's end tag may be omitted if the dt element is
            # immediately followed by another dt element or a dd element.
            # A dd element's end tag may be omitted if the dd element is
            # immediately followed by another dd element or a dt element,
            # or if there is no more content in the parent element.
            if type == "StartTag":
                return next["name"] in ('dt', 'dd')
            elif tagname == 'dd':
                return type == "EndTag" or type is None
            else:
                return False
        elif tagname == 'p':
            # A p element's end tag may be omitted if the p element is
            # immediately followed by an address, article, aside,
            # blockquote, datagrid, dialog, dir, div, dl, fieldset,
            # footer, form, h1, h2, h3, h4, h5, h6, header, hr, menu,
            # nav, ol, p, pre, section, table, or ul, element, or if
            # there is no more content in the parent element.
            if type in ("StartTag", "EmptyTag"):
                return next["name"] in ('address', 'article', 'aside',
                                        'blockquote', 'datagrid', 'dialog',
                                        'dir', 'div', 'dl', 'fieldset', 'footer',
                                        'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
                                        'header', 'hr', 'menu', 'nav', 'ol',
                                        'p', 'pre', 'section', 'table', 'ul')
            else:
                return type == "EndTag" or type is None
        elif tagname == 'option':
            # An option element's end tag may be omitted if the option
            # element is immediately followed by another option element,
            # or if it is immediately followed by an <code>optgroup</code>
            # element, or if there is no more content in the parent
            # element.
            if type == "StartTag":
                return next["name"] in ('option', 'optgroup')
            else:
                return type == "EndTag" or type is None
        elif tagname in ('rt', 'rp'):
            # An rt element's end tag may be omitted if the rt element is
            # immediately followed by an rt or rp element, or if there is
            # no more content in the parent element.
            # An rp element's end tag may be omitted if the rp element is
            # immediately followed by an rt or rp element, or if there is
            # no more content in the parent element.
            if type == "StartTag":
                return next["name"] in ('rt', 'rp')
            else:
                return type == "EndTag" or type is None
        elif tagname == 'colgroup':
            # A colgroup element's end tag may be omitted if the colgroup
            # element is not immediately followed by a space character or
            # a comment.
            if type in ("Comment", "SpaceCharacters"):
                return False
            elif type == "StartTag":
                # XXX: we also look for an immediately following colgroup
                # element. See is_optional_start.
                return next["name"] != 'colgroup'
            else:
                return True
        elif tagname in ('thead', 'tbody'):
            # A thead element's end tag may be omitted if the thead element
            # is immediately followed by a tbody or tfoot element.
            # A tbody element's end tag may be omitted if the tbody element
            # is immediately followed by a tbody or tfoot element, or if
            # there is no more content in the parent element.
            # A tfoot element's end tag may be omitted if the tfoot element
            # is immediately followed by a tbody element, or if there is no
            # more content in the parent element.
            # XXX: we never omit the end tag when the following element is
            # a tbody. See is_optional_start.
            if type == "StartTag":
                return next["name"] in ['tbody', 'tfoot']
            elif tagname == 'tbody':
                return type == "EndTag" or type is None
            else:
                return False
        elif tagname == 'tfoot':
            # A tfoot element's end tag may be omitted if the tfoot element
            # is immediately followed by a tbody element, or if there is no
            # more content in the parent element.
            # XXX: we never omit the end tag when the following element is
            # a tbody. See is_optional_start.
            if type == "StartTag":
                return next["name"] == 'tbody'
            else:
                return type == "EndTag" or type is None
        elif tagname in ('td', 'th'):
            # A td element's end tag may be omitted if the td element is
            # immediately followed by a td or th element, or if there is
            # no more content in the parent element.
            # A th element's end tag may be omitted if the th element is
            # immediately followed by a td or th element, or if there is
            # no more content in the parent element.
            if type == "StartTag":
                return next["name"] in ('td', 'th')
            else:
                return type == "EndTag" or type is None
        return False
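
# Usage sketch (illustrative, not part of the vendored file): slider() pairs
# each token with its neighbours so the omission rules can look one token
# ahead and one behind.
def _example_slider():
    f = Filter([1, 2, 3])
    assert list(f.slider()) == [(None, 1, 2), (1, 2, 3), (2, 3, None)]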
865  third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/sanitizer.py  (vendored, new file)
@@ -0,0 +1,865 @@
from __future__ import absolute_import, division, unicode_literals
|
||||
|
||||
import re
|
||||
from xml.sax.saxutils import escape, unescape
|
||||
|
||||
from pip9._vendor.six.moves import urllib_parse as urlparse
|
||||
|
||||
from . import base
|
||||
from ..constants import namespaces, prefixes
|
||||
|
||||
__all__ = ["Filter"]
|
||||
|
||||
|
||||
allowed_elements = frozenset((
|
||||
(namespaces['html'], 'a'),
|
||||
(namespaces['html'], 'abbr'),
|
||||
(namespaces['html'], 'acronym'),
|
||||
(namespaces['html'], 'address'),
|
||||
(namespaces['html'], 'area'),
|
||||
(namespaces['html'], 'article'),
|
||||
(namespaces['html'], 'aside'),
|
||||
(namespaces['html'], 'audio'),
|
||||
(namespaces['html'], 'b'),
|
||||
(namespaces['html'], 'big'),
|
||||
(namespaces['html'], 'blockquote'),
|
||||
(namespaces['html'], 'br'),
|
||||
(namespaces['html'], 'button'),
|
||||
(namespaces['html'], 'canvas'),
|
||||
(namespaces['html'], 'caption'),
|
||||
(namespaces['html'], 'center'),
|
||||
(namespaces['html'], 'cite'),
|
||||
(namespaces['html'], 'code'),
|
||||
(namespaces['html'], 'col'),
|
||||
(namespaces['html'], 'colgroup'),
|
||||
(namespaces['html'], 'command'),
|
||||
(namespaces['html'], 'datagrid'),
|
||||
(namespaces['html'], 'datalist'),
|
||||
(namespaces['html'], 'dd'),
|
||||
(namespaces['html'], 'del'),
|
||||
(namespaces['html'], 'details'),
|
||||
(namespaces['html'], 'dfn'),
|
||||
(namespaces['html'], 'dialog'),
|
||||
(namespaces['html'], 'dir'),
|
||||
(namespaces['html'], 'div'),
|
||||
    (namespaces['html'], 'dl'),
    (namespaces['html'], 'dt'),
    (namespaces['html'], 'em'),
    (namespaces['html'], 'event-source'),
    (namespaces['html'], 'fieldset'),
    (namespaces['html'], 'figcaption'),
    (namespaces['html'], 'figure'),
    (namespaces['html'], 'footer'),
    (namespaces['html'], 'font'),
    (namespaces['html'], 'form'),
    (namespaces['html'], 'header'),
    (namespaces['html'], 'h1'),
    (namespaces['html'], 'h2'),
    (namespaces['html'], 'h3'),
    (namespaces['html'], 'h4'),
    (namespaces['html'], 'h5'),
    (namespaces['html'], 'h6'),
    (namespaces['html'], 'hr'),
    (namespaces['html'], 'i'),
    (namespaces['html'], 'img'),
    (namespaces['html'], 'input'),
    (namespaces['html'], 'ins'),
    (namespaces['html'], 'keygen'),
    (namespaces['html'], 'kbd'),
    (namespaces['html'], 'label'),
    (namespaces['html'], 'legend'),
    (namespaces['html'], 'li'),
    (namespaces['html'], 'm'),
    (namespaces['html'], 'map'),
    (namespaces['html'], 'menu'),
    (namespaces['html'], 'meter'),
    (namespaces['html'], 'multicol'),
    (namespaces['html'], 'nav'),
    (namespaces['html'], 'nextid'),
    (namespaces['html'], 'ol'),
    (namespaces['html'], 'output'),
    (namespaces['html'], 'optgroup'),
    (namespaces['html'], 'option'),
    (namespaces['html'], 'p'),
    (namespaces['html'], 'pre'),
    (namespaces['html'], 'progress'),
    (namespaces['html'], 'q'),
    (namespaces['html'], 's'),
    (namespaces['html'], 'samp'),
    (namespaces['html'], 'section'),
    (namespaces['html'], 'select'),
    (namespaces['html'], 'small'),
    (namespaces['html'], 'sound'),
    (namespaces['html'], 'source'),
    (namespaces['html'], 'spacer'),
    (namespaces['html'], 'span'),
    (namespaces['html'], 'strike'),
    (namespaces['html'], 'strong'),
    (namespaces['html'], 'sub'),
    (namespaces['html'], 'sup'),
    (namespaces['html'], 'table'),
    (namespaces['html'], 'tbody'),
    (namespaces['html'], 'td'),
    (namespaces['html'], 'textarea'),
    (namespaces['html'], 'time'),
    (namespaces['html'], 'tfoot'),
    (namespaces['html'], 'th'),
    (namespaces['html'], 'thead'),
    (namespaces['html'], 'tr'),
    (namespaces['html'], 'tt'),
    (namespaces['html'], 'u'),
    (namespaces['html'], 'ul'),
    (namespaces['html'], 'var'),
    (namespaces['html'], 'video'),
    (namespaces['mathml'], 'maction'),
    (namespaces['mathml'], 'math'),
    (namespaces['mathml'], 'merror'),
    (namespaces['mathml'], 'mfrac'),
    (namespaces['mathml'], 'mi'),
    (namespaces['mathml'], 'mmultiscripts'),
    (namespaces['mathml'], 'mn'),
    (namespaces['mathml'], 'mo'),
    (namespaces['mathml'], 'mover'),
    (namespaces['mathml'], 'mpadded'),
    (namespaces['mathml'], 'mphantom'),
    (namespaces['mathml'], 'mprescripts'),
    (namespaces['mathml'], 'mroot'),
    (namespaces['mathml'], 'mrow'),
    (namespaces['mathml'], 'mspace'),
    (namespaces['mathml'], 'msqrt'),
    (namespaces['mathml'], 'mstyle'),
    (namespaces['mathml'], 'msub'),
    (namespaces['mathml'], 'msubsup'),
    (namespaces['mathml'], 'msup'),
    (namespaces['mathml'], 'mtable'),
    (namespaces['mathml'], 'mtd'),
    (namespaces['mathml'], 'mtext'),
    (namespaces['mathml'], 'mtr'),
    (namespaces['mathml'], 'munder'),
    (namespaces['mathml'], 'munderover'),
    (namespaces['mathml'], 'none'),
    (namespaces['svg'], 'a'),
    (namespaces['svg'], 'animate'),
    (namespaces['svg'], 'animateColor'),
    (namespaces['svg'], 'animateMotion'),
    (namespaces['svg'], 'animateTransform'),
    (namespaces['svg'], 'clipPath'),
    (namespaces['svg'], 'circle'),
    (namespaces['svg'], 'defs'),
    (namespaces['svg'], 'desc'),
    (namespaces['svg'], 'ellipse'),
    (namespaces['svg'], 'font-face'),
    (namespaces['svg'], 'font-face-name'),
    (namespaces['svg'], 'font-face-src'),
    (namespaces['svg'], 'g'),
    (namespaces['svg'], 'glyph'),
    (namespaces['svg'], 'hkern'),
    (namespaces['svg'], 'linearGradient'),
    (namespaces['svg'], 'line'),
    (namespaces['svg'], 'marker'),
    (namespaces['svg'], 'metadata'),
    (namespaces['svg'], 'missing-glyph'),
    (namespaces['svg'], 'mpath'),
    (namespaces['svg'], 'path'),
    (namespaces['svg'], 'polygon'),
    (namespaces['svg'], 'polyline'),
    (namespaces['svg'], 'radialGradient'),
    (namespaces['svg'], 'rect'),
    (namespaces['svg'], 'set'),
    (namespaces['svg'], 'stop'),
    (namespaces['svg'], 'svg'),
    (namespaces['svg'], 'switch'),
    (namespaces['svg'], 'text'),
    (namespaces['svg'], 'title'),
    (namespaces['svg'], 'tspan'),
    (namespaces['svg'], 'use'),
))

allowed_attributes = frozenset((
    # HTML attributes
    (None, 'abbr'),
    (None, 'accept'),
    (None, 'accept-charset'),
    (None, 'accesskey'),
    (None, 'action'),
    (None, 'align'),
    (None, 'alt'),
    (None, 'autocomplete'),
    (None, 'autofocus'),
    (None, 'axis'),
    (None, 'background'),
    (None, 'balance'),
    (None, 'bgcolor'),
    (None, 'bgproperties'),
    (None, 'border'),
    (None, 'bordercolor'),
    (None, 'bordercolordark'),
    (None, 'bordercolorlight'),
    (None, 'bottompadding'),
    (None, 'cellpadding'),
    (None, 'cellspacing'),
    (None, 'ch'),
    (None, 'challenge'),
    (None, 'char'),
    (None, 'charoff'),
    (None, 'choff'),
    (None, 'charset'),
    (None, 'checked'),
    (None, 'cite'),
    (None, 'class'),
    (None, 'clear'),
    (None, 'color'),
    (None, 'cols'),
    (None, 'colspan'),
    (None, 'compact'),
    (None, 'contenteditable'),
    (None, 'controls'),
    (None, 'coords'),
    (None, 'data'),
    (None, 'datafld'),
    (None, 'datapagesize'),
    (None, 'datasrc'),
    (None, 'datetime'),
    (None, 'default'),
    (None, 'delay'),
    (None, 'dir'),
    (None, 'disabled'),
    (None, 'draggable'),
    (None, 'dynsrc'),
    (None, 'enctype'),
    (None, 'end'),
    (None, 'face'),
    (None, 'for'),
    (None, 'form'),
    (None, 'frame'),
    (None, 'galleryimg'),
    (None, 'gutter'),
    (None, 'headers'),
    (None, 'height'),
    (None, 'hidefocus'),
    (None, 'hidden'),
    (None, 'high'),
    (None, 'href'),
    (None, 'hreflang'),
    (None, 'hspace'),
    (None, 'icon'),
    (None, 'id'),
    (None, 'inputmode'),
    (None, 'ismap'),
    (None, 'keytype'),
    (None, 'label'),
    (None, 'leftspacing'),
    (None, 'lang'),
    (None, 'list'),
    (None, 'longdesc'),
    (None, 'loop'),
    (None, 'loopcount'),
    (None, 'loopend'),
    (None, 'loopstart'),
    (None, 'low'),
    (None, 'lowsrc'),
    (None, 'max'),
    (None, 'maxlength'),
    (None, 'media'),
    (None, 'method'),
    (None, 'min'),
    (None, 'multiple'),
    (None, 'name'),
    (None, 'nohref'),
    (None, 'noshade'),
    (None, 'nowrap'),
    (None, 'open'),
    (None, 'optimum'),
    (None, 'pattern'),
    (None, 'ping'),
    (None, 'point-size'),
    (None, 'poster'),
    (None, 'pqg'),
    (None, 'preload'),
    (None, 'prompt'),
    (None, 'radiogroup'),
    (None, 'readonly'),
    (None, 'rel'),
    (None, 'repeat-max'),
    (None, 'repeat-min'),
    (None, 'replace'),
    (None, 'required'),
    (None, 'rev'),
    (None, 'rightspacing'),
    (None, 'rows'),
    (None, 'rowspan'),
    (None, 'rules'),
    (None, 'scope'),
    (None, 'selected'),
    (None, 'shape'),
    (None, 'size'),
    (None, 'span'),
    (None, 'src'),
    (None, 'start'),
    (None, 'step'),
    (None, 'style'),
    (None, 'summary'),
    (None, 'suppress'),
    (None, 'tabindex'),
    (None, 'target'),
    (None, 'template'),
    (None, 'title'),
    (None, 'toppadding'),
    (None, 'type'),
    (None, 'unselectable'),
    (None, 'usemap'),
    (None, 'urn'),
    (None, 'valign'),
    (None, 'value'),
    (None, 'variable'),
    (None, 'volume'),
    (None, 'vspace'),
    (None, 'vrml'),
    (None, 'width'),
    (None, 'wrap'),
    (namespaces['xml'], 'lang'),
    # MathML attributes
    (None, 'actiontype'),
    (None, 'align'),
    (None, 'columnalign'),
    (None, 'columnalign'),
    (None, 'columnalign'),
    (None, 'columnlines'),
    (None, 'columnspacing'),
    (None, 'columnspan'),
    (None, 'depth'),
    (None, 'display'),
    (None, 'displaystyle'),
    (None, 'equalcolumns'),
    (None, 'equalrows'),
    (None, 'fence'),
    (None, 'fontstyle'),
    (None, 'fontweight'),
    (None, 'frame'),
    (None, 'height'),
    (None, 'linethickness'),
    (None, 'lspace'),
    (None, 'mathbackground'),
    (None, 'mathcolor'),
    (None, 'mathvariant'),
    (None, 'mathvariant'),
    (None, 'maxsize'),
    (None, 'minsize'),
    (None, 'other'),
    (None, 'rowalign'),
    (None, 'rowalign'),
    (None, 'rowalign'),
    (None, 'rowlines'),
    (None, 'rowspacing'),
    (None, 'rowspan'),
    (None, 'rspace'),
    (None, 'scriptlevel'),
    (None, 'selection'),
    (None, 'separator'),
    (None, 'stretchy'),
    (None, 'width'),
    (None, 'width'),
    (namespaces['xlink'], 'href'),
    (namespaces['xlink'], 'show'),
    (namespaces['xlink'], 'type'),
    # SVG attributes
    (None, 'accent-height'),
    (None, 'accumulate'),
    (None, 'additive'),
    (None, 'alphabetic'),
    (None, 'arabic-form'),
    (None, 'ascent'),
    (None, 'attributeName'),
    (None, 'attributeType'),
    (None, 'baseProfile'),
    (None, 'bbox'),
    (None, 'begin'),
    (None, 'by'),
    (None, 'calcMode'),
    (None, 'cap-height'),
    (None, 'class'),
    (None, 'clip-path'),
    (None, 'color'),
    (None, 'color-rendering'),
    (None, 'content'),
    (None, 'cx'),
    (None, 'cy'),
    (None, 'd'),
    (None, 'dx'),
    (None, 'dy'),
    (None, 'descent'),
    (None, 'display'),
    (None, 'dur'),
    (None, 'end'),
    (None, 'fill'),
    (None, 'fill-opacity'),
    (None, 'fill-rule'),
    (None, 'font-family'),
    (None, 'font-size'),
    (None, 'font-stretch'),
    (None, 'font-style'),
    (None, 'font-variant'),
    (None, 'font-weight'),
    (None, 'from'),
    (None, 'fx'),
    (None, 'fy'),
    (None, 'g1'),
    (None, 'g2'),
    (None, 'glyph-name'),
    (None, 'gradientUnits'),
    (None, 'hanging'),
    (None, 'height'),
    (None, 'horiz-adv-x'),
    (None, 'horiz-origin-x'),
    (None, 'id'),
    (None, 'ideographic'),
    (None, 'k'),
    (None, 'keyPoints'),
    (None, 'keySplines'),
    (None, 'keyTimes'),
    (None, 'lang'),
    (None, 'marker-end'),
    (None, 'marker-mid'),
    (None, 'marker-start'),
    (None, 'markerHeight'),
    (None, 'markerUnits'),
    (None, 'markerWidth'),
    (None, 'mathematical'),
    (None, 'max'),
    (None, 'min'),
    (None, 'name'),
    (None, 'offset'),
    (None, 'opacity'),
    (None, 'orient'),
    (None, 'origin'),
    (None, 'overline-position'),
    (None, 'overline-thickness'),
    (None, 'panose-1'),
    (None, 'path'),
    (None, 'pathLength'),
    (None, 'points'),
    (None, 'preserveAspectRatio'),
    (None, 'r'),
    (None, 'refX'),
    (None, 'refY'),
    (None, 'repeatCount'),
    (None, 'repeatDur'),
    (None, 'requiredExtensions'),
    (None, 'requiredFeatures'),
    (None, 'restart'),
    (None, 'rotate'),
    (None, 'rx'),
    (None, 'ry'),
    (None, 'slope'),
    (None, 'stemh'),
    (None, 'stemv'),
    (None, 'stop-color'),
    (None, 'stop-opacity'),
    (None, 'strikethrough-position'),
    (None, 'strikethrough-thickness'),
    (None, 'stroke'),
    (None, 'stroke-dasharray'),
    (None, 'stroke-dashoffset'),
    (None, 'stroke-linecap'),
    (None, 'stroke-linejoin'),
    (None, 'stroke-miterlimit'),
    (None, 'stroke-opacity'),
    (None, 'stroke-width'),
    (None, 'systemLanguage'),
    (None, 'target'),
    (None, 'text-anchor'),
    (None, 'to'),
    (None, 'transform'),
    (None, 'type'),
    (None, 'u1'),
    (None, 'u2'),
    (None, 'underline-position'),
    (None, 'underline-thickness'),
    (None, 'unicode'),
    (None, 'unicode-range'),
    (None, 'units-per-em'),
    (None, 'values'),
    (None, 'version'),
    (None, 'viewBox'),
    (None, 'visibility'),
    (None, 'width'),
    (None, 'widths'),
    (None, 'x'),
    (None, 'x-height'),
    (None, 'x1'),
    (None, 'x2'),
    (namespaces['xlink'], 'actuate'),
    (namespaces['xlink'], 'arcrole'),
    (namespaces['xlink'], 'href'),
    (namespaces['xlink'], 'role'),
    (namespaces['xlink'], 'show'),
    (namespaces['xlink'], 'title'),
    (namespaces['xlink'], 'type'),
    (namespaces['xml'], 'base'),
    (namespaces['xml'], 'lang'),
    (namespaces['xml'], 'space'),
    (None, 'y'),
    (None, 'y1'),
    (None, 'y2'),
    (None, 'zoomAndPan'),
))

attr_val_is_uri = frozenset((
    (None, 'href'),
    (None, 'src'),
    (None, 'cite'),
    (None, 'action'),
    (None, 'longdesc'),
    (None, 'poster'),
    (None, 'background'),
    (None, 'datasrc'),
    (None, 'dynsrc'),
    (None, 'lowsrc'),
    (None, 'ping'),
    (namespaces['xlink'], 'href'),
    (namespaces['xml'], 'base'),
))

svg_attr_val_allows_ref = frozenset((
    (None, 'clip-path'),
    (None, 'color-profile'),
    (None, 'cursor'),
    (None, 'fill'),
    (None, 'filter'),
    (None, 'marker'),
    (None, 'marker-start'),
    (None, 'marker-mid'),
    (None, 'marker-end'),
    (None, 'mask'),
    (None, 'stroke'),
))

svg_allow_local_href = frozenset((
    (None, 'altGlyph'),
    (None, 'animate'),
    (None, 'animateColor'),
    (None, 'animateMotion'),
    (None, 'animateTransform'),
    (None, 'cursor'),
    (None, 'feImage'),
    (None, 'filter'),
    (None, 'linearGradient'),
    (None, 'pattern'),
    (None, 'radialGradient'),
    (None, 'textpath'),
    (None, 'tref'),
    (None, 'set'),
    (None, 'use')
))

allowed_css_properties = frozenset((
    'azimuth',
    'background-color',
    'border-bottom-color',
    'border-collapse',
    'border-color',
    'border-left-color',
    'border-right-color',
    'border-top-color',
    'clear',
    'color',
    'cursor',
    'direction',
    'display',
    'elevation',
    'float',
    'font',
    'font-family',
    'font-size',
    'font-style',
    'font-variant',
    'font-weight',
    'height',
    'letter-spacing',
    'line-height',
    'overflow',
    'pause',
    'pause-after',
    'pause-before',
    'pitch',
    'pitch-range',
    'richness',
    'speak',
    'speak-header',
    'speak-numeral',
    'speak-punctuation',
    'speech-rate',
    'stress',
    'text-align',
    'text-decoration',
    'text-indent',
    'unicode-bidi',
    'vertical-align',
    'voice-family',
    'volume',
    'white-space',
    'width',
))

allowed_css_keywords = frozenset((
    'auto',
    'aqua',
    'black',
    'block',
    'blue',
    'bold',
    'both',
    'bottom',
    'brown',
    'center',
    'collapse',
    'dashed',
    'dotted',
    'fuchsia',
    'gray',
    'green',
    '!important',
    'italic',
    'left',
    'lime',
    'maroon',
    'medium',
    'none',
    'navy',
    'normal',
    'nowrap',
    'olive',
    'pointer',
    'purple',
    'red',
    'right',
    'solid',
    'silver',
    'teal',
    'top',
    'transparent',
    'underline',
    'white',
    'yellow',
))

allowed_svg_properties = frozenset((
    'fill',
    'fill-opacity',
    'fill-rule',
    'stroke',
    'stroke-width',
    'stroke-linecap',
    'stroke-linejoin',
    'stroke-opacity',
))

allowed_protocols = frozenset((
    'ed2k',
    'ftp',
    'http',
    'https',
    'irc',
    'mailto',
    'news',
    'gopher',
    'nntp',
    'telnet',
    'webcal',
    'xmpp',
    'callto',
    'feed',
    'urn',
    'aim',
    'rsync',
    'tag',
    'ssh',
    'sftp',
    'rtsp',
    'afs',
    'data',
))

allowed_content_types = frozenset((
    'image/png',
    'image/jpeg',
    'image/gif',
    'image/webp',
    'image/bmp',
    'text/plain',
))


data_content_type = re.compile(r'''
                               ^
                               # Match a content type <application>/<type>
                               (?P<content_type>[-a-zA-Z0-9.]+/[-a-zA-Z0-9.]+)
                               # Match any character set and encoding
                               (?:(?:;charset=(?:[-a-zA-Z0-9]+)(?:;(?:base64))?)
                                 |(?:;(?:base64))?(?:;charset=(?:[-a-zA-Z0-9]+))?)
                               # Assume the rest is data
                               ,.*
                               $
                               ''',
                               re.VERBOSE)


class Filter(base.Filter):
    """Sanitization of XHTML+MathML+SVG and of inline style attributes."""
    def __init__(self,
                 source,
                 allowed_elements=allowed_elements,
                 allowed_attributes=allowed_attributes,
                 allowed_css_properties=allowed_css_properties,
                 allowed_css_keywords=allowed_css_keywords,
                 allowed_svg_properties=allowed_svg_properties,
                 allowed_protocols=allowed_protocols,
                 allowed_content_types=allowed_content_types,
                 attr_val_is_uri=attr_val_is_uri,
                 svg_attr_val_allows_ref=svg_attr_val_allows_ref,
                 svg_allow_local_href=svg_allow_local_href):
        super(Filter, self).__init__(source)
        self.allowed_elements = allowed_elements
        self.allowed_attributes = allowed_attributes
        self.allowed_css_properties = allowed_css_properties
        self.allowed_css_keywords = allowed_css_keywords
        self.allowed_svg_properties = allowed_svg_properties
        self.allowed_protocols = allowed_protocols
        self.allowed_content_types = allowed_content_types
        self.attr_val_is_uri = attr_val_is_uri
        self.svg_attr_val_allows_ref = svg_attr_val_allows_ref
        self.svg_allow_local_href = svg_allow_local_href

    def __iter__(self):
        for token in base.Filter.__iter__(self):
            token = self.sanitize_token(token)
            if token:
                yield token

    # Sanitize the +html+, escaping all elements not in ALLOWED_ELEMENTS, and
    # stripping out all attributes not in ALLOWED_ATTRIBUTES. Style attributes
    # are parsed, and a restricted set, specified by ALLOWED_CSS_PROPERTIES and
    # ALLOWED_CSS_KEYWORDS, are allowed through. Attributes in ATTR_VAL_IS_URI
    # are scanned, and only URI schemes specified in ALLOWED_PROTOCOLS are
    # allowed.
    #
    #   sanitize_html('<script> do_nasty_stuff() </script>')
    #    => &lt;script> do_nasty_stuff() &lt;/script>
    #   sanitize_html('<a href="javascript: sucker();">Click here for $100</a>')
    #    => <a>Click here for $100</a>
    def sanitize_token(self, token):

        # accommodate filters which use token_type differently
        token_type = token["type"]
        if token_type in ("StartTag", "EndTag", "EmptyTag"):
            name = token["name"]
            namespace = token["namespace"]
            if ((namespace, name) in self.allowed_elements or
                    (namespace is None and
                     (namespaces["html"], name) in self.allowed_elements)):
                return self.allowed_token(token)
            else:
                return self.disallowed_token(token)
        elif token_type == "Comment":
            pass
        else:
            return token

    def allowed_token(self, token):
        if "data" in token:
            attrs = token["data"]
            attr_names = set(attrs.keys())

            # Remove forbidden attributes
            for to_remove in (attr_names - self.allowed_attributes):
                del token["data"][to_remove]
                attr_names.remove(to_remove)

            # Remove attributes with disallowed URL values
            for attr in (attr_names & self.attr_val_is_uri):
                assert attr in attrs
                # I don't have a clue where this regexp comes from or why it matches those
                # characters, nor why we call unescape. I just know it's always been here.
                # Should you be worried by this comment in a sanitizer? Yes. On the other hand, all
                # this will do is remove *more* than it otherwise would.
                val_unescaped = re.sub("[`\x00-\x20\x7f-\xa0\\s]+", '',
                                       unescape(attrs[attr])).lower()
                # remove replacement characters from unescaped characters
                val_unescaped = val_unescaped.replace("\ufffd", "")
                try:
                    uri = urlparse.urlparse(val_unescaped)
                except ValueError:
                    uri = None
                    del attrs[attr]
                if uri and uri.scheme:
                    if uri.scheme not in self.allowed_protocols:
                        del attrs[attr]
                    if uri.scheme == 'data':
                        m = data_content_type.match(uri.path)
                        if not m:
                            del attrs[attr]
                        elif m.group('content_type') not in self.allowed_content_types:
                            del attrs[attr]

            for attr in self.svg_attr_val_allows_ref:
                if attr in attrs:
                    attrs[attr] = re.sub(r'url\s*\(\s*[^#\s][^)]+?\)',
                                         ' ',
                                         unescape(attrs[attr]))
            if (token["name"] in self.svg_allow_local_href and
                    (namespaces['xlink'], 'href') in attrs and
                    re.search(r'^\s*[^#\s].*', attrs[(namespaces['xlink'], 'href')])):
                del attrs[(namespaces['xlink'], 'href')]
            if (None, 'style') in attrs:
                attrs[(None, 'style')] = self.sanitize_css(attrs[(None, 'style')])
            token["data"] = attrs
        return token

    def disallowed_token(self, token):
        token_type = token["type"]
        if token_type == "EndTag":
            token["data"] = "</%s>" % token["name"]
        elif token["data"]:
            assert token_type in ("StartTag", "EmptyTag")
            attrs = []
            for (ns, name), v in token["data"].items():
                attrs.append(' %s="%s"' % (name if ns is None else "%s:%s" % (prefixes[ns], name), escape(v)))
            token["data"] = "<%s%s>" % (token["name"], ''.join(attrs))
        else:
            token["data"] = "<%s>" % token["name"]
        if token.get("selfClosing"):
            token["data"] = token["data"][:-1] + "/>"

        token["type"] = "Characters"

        del token["name"]
        return token

    def sanitize_css(self, style):
        # disallow urls
        style = re.compile(r'url\s*\(\s*[^\s)]+?\s*\)\s*').sub(' ', style)

        # gauntlet
        if not re.match(r"""^([:,;#%.\sa-zA-Z0-9!]|\w-\w|'[\s\w]+'|"[\s\w]+"|\([\d,\s]+\))*$""", style):
            return ''
        if not re.match(r"^\s*([-\w]+\s*:[^:;]*(;\s*|$))*$", style):
            return ''

        clean = []
        for prop, value in re.findall(r"([-\w]+)\s*:\s*([^:;]*)", style):
            if not value:
                continue
            if prop.lower() in self.allowed_css_properties:
                clean.append(prop + ': ' + value + ';')
            elif prop.split('-')[0].lower() in ['background', 'border', 'margin',
                                                'padding']:
                for keyword in value.split():
                    if keyword not in self.allowed_css_keywords and \
                            not re.match(r"^(#[0-9a-f]+|rgb\(\d+%?,\d*%?,?\d*%?\)?|\d{0,2}\.?\d{0,2}(cm|em|ex|in|mm|pc|pt|px|%|,|\))?)$", keyword):  # noqa
                        break
                else:
                    clean.append(prop + ': ' + value + ';')
            elif prop.lower() in self.allowed_svg_properties:
                clean.append(prop + ': ' + value + ';')

        return ' '.join(clean)
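
The filter above is normally attached to a treewalker token stream rather than invoked by hand. A minimal sketch using the upstream html5lib names (this vendored copy lives under pipenv.patched.notpip._vendor.html5lib; the sample markup is illustrative):

    import html5lib
    from html5lib.filters.sanitizer import Filter
    from html5lib.serializer import HTMLSerializer

    # Parse untrusted markup, then sanitize it as a token stream.
    dom = html5lib.parse('<p onmouseover="evil()">hi<script>bad()</script></p>')
    walker = html5lib.getTreeWalker("etree")
    clean_tokens = Filter(walker(dom))

    # The event-handler attribute is dropped; <script> is escaped to text.
    print(HTMLSerializer().render(clean_tokens))

Passing sanitize=True to the serializer (see serializer.py below) wires up this same filter internally.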
38 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/filters/whitespace.py vendored Normal file
@@ -0,0 +1,38 @@
from __future__ import absolute_import, division, unicode_literals

import re

from . import base
from ..constants import rcdataElements, spaceCharacters
spaceCharacters = "".join(spaceCharacters)

SPACES_REGEX = re.compile("[%s]+" % spaceCharacters)


class Filter(base.Filter):

    spacePreserveElements = frozenset(["pre", "textarea"] + list(rcdataElements))

    def __iter__(self):
        preserve = 0
        for token in base.Filter.__iter__(self):
            type = token["type"]
            if type == "StartTag" \
                    and (preserve or token["name"] in self.spacePreserveElements):
                preserve += 1

            elif type == "EndTag" and preserve:
                preserve -= 1

            elif not preserve and type == "SpaceCharacters" and token["data"]:
                # Test on token["data"] above to not introduce spaces where there were not
                token["data"] = " "

            elif not preserve and type == "Characters":
                token["data"] = collapse_spaces(token["data"])

            yield token


def collapse_spaces(text):
    return SPACES_REGEX.sub(' ', text)
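
This filter is what the serializer's strip_whitespace option applies; it collapses runs of space characters except inside pre, textarea, and the other rcdata elements. A short sketch with upstream html5lib names (the sample markup is illustrative):

    import html5lib
    from html5lib.serializer import serialize

    dom = html5lib.parse("<p>a    b</p><pre> keep   this </pre>")
    # Whitespace in <p> collapses to a single space; <pre> content is preserved.
    print(serialize(dom, strip_whitespace=True))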
2733 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/html5parser.py vendored Normal file
File diff suppressed because it is too large
334 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/serializer.py vendored Normal file
@@ -0,0 +1,334 @@
from __future__ import absolute_import, division, unicode_literals
from pip9._vendor.six import text_type

import re

from codecs import register_error, xmlcharrefreplace_errors

from .constants import voidElements, booleanAttributes, spaceCharacters
from .constants import rcdataElements, entities, xmlEntities
from . import treewalkers, _utils
from xml.sax.saxutils import escape

_quoteAttributeSpecChars = "".join(spaceCharacters) + "\"'=<>`"
_quoteAttributeSpec = re.compile("[" + _quoteAttributeSpecChars + "]")
_quoteAttributeLegacy = re.compile("[" + _quoteAttributeSpecChars +
                                   "\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n"
                                   "\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15"
                                   "\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
                                   "\x20\x2f\x60\xa0\u1680\u180e\u180f\u2000"
                                   "\u2001\u2002\u2003\u2004\u2005\u2006\u2007"
                                   "\u2008\u2009\u200a\u2028\u2029\u202f\u205f"
                                   "\u3000]")


_encode_entity_map = {}
_is_ucs4 = len("\U0010FFFF") == 1
for k, v in list(entities.items()):
    # skip multi-character entities
    if ((_is_ucs4 and len(v) > 1) or
            (not _is_ucs4 and len(v) > 2)):
        continue
    if v != "&":
        if len(v) == 2:
            v = _utils.surrogatePairToCodepoint(v)
        else:
            v = ord(v)
        if v not in _encode_entity_map or k.islower():
            # prefer &lt; over &LT; and similarly for &amp;, &gt;, etc.
            _encode_entity_map[v] = k


def htmlentityreplace_errors(exc):
    if isinstance(exc, (UnicodeEncodeError, UnicodeTranslateError)):
        res = []
        codepoints = []
        skip = False
        for i, c in enumerate(exc.object[exc.start:exc.end]):
            if skip:
                skip = False
                continue
            index = i + exc.start
            if _utils.isSurrogatePair(exc.object[index:min([exc.end, index + 2])]):
                codepoint = _utils.surrogatePairToCodepoint(exc.object[index:index + 2])
                skip = True
            else:
                codepoint = ord(c)
            codepoints.append(codepoint)
        for cp in codepoints:
            e = _encode_entity_map.get(cp)
            if e:
                res.append("&")
                res.append(e)
                if not e.endswith(";"):
                    res.append(";")
            else:
                res.append("&#x%s;" % (hex(cp)[2:]))
        return ("".join(res), exc.end)
    else:
        return xmlcharrefreplace_errors(exc)

register_error("htmlentityreplace", htmlentityreplace_errors)


def serialize(input, tree="etree", encoding=None, **serializer_opts):
    # XXX: Should we cache this?
    walker = treewalkers.getTreeWalker(tree)
    s = HTMLSerializer(**serializer_opts)
    return s.render(walker(input), encoding)


class HTMLSerializer(object):

    # attribute quoting options
    quote_attr_values = "legacy"  # be secure by default
    quote_char = '"'
    use_best_quote_char = True

    # tag syntax options
    omit_optional_tags = True
    minimize_boolean_attributes = True
    use_trailing_solidus = False
    space_before_trailing_solidus = True

    # escaping options
    escape_lt_in_attrs = False
    escape_rcdata = False
    resolve_entities = True

    # miscellaneous options
    alphabetical_attributes = False
    inject_meta_charset = True
    strip_whitespace = False
    sanitize = False

    options = ("quote_attr_values", "quote_char", "use_best_quote_char",
               "omit_optional_tags", "minimize_boolean_attributes",
               "use_trailing_solidus", "space_before_trailing_solidus",
               "escape_lt_in_attrs", "escape_rcdata", "resolve_entities",
               "alphabetical_attributes", "inject_meta_charset",
               "strip_whitespace", "sanitize")

    def __init__(self, **kwargs):
        """Initialize HTMLSerializer.

        Keyword options (default given first unless specified) include:

        inject_meta_charset=True|False
          Whether or not to insert a meta element to define the character set
          of the document.
        quote_attr_values="legacy"|"spec"|"always"
          Whether to quote attribute values that don't require quoting
          per legacy browser behaviour, when required by the standard, or always.
        quote_char=u'"'|u"'"
          Use given quote character for attribute quoting. Default is to
          use double quote unless attribute value contains a double quote,
          in which case single quotes are used instead.
        escape_lt_in_attrs=False|True
          Whether to escape < in attribute values.
        escape_rcdata=False|True
          Whether to escape characters that need to be escaped within normal
          elements within rcdata elements such as style.
        resolve_entities=True|False
          Whether to resolve named character entities that appear in the
          source tree. The XML predefined entities &lt; &gt; &amp; &quot; &apos;
          are unaffected by this setting.
        strip_whitespace=False|True
          Whether to remove semantically meaningless whitespace. (This
          compresses all whitespace to a single space except within pre.)
        minimize_boolean_attributes=True|False
          Shortens boolean attributes to give just the attribute value,
          for example <input disabled="disabled"> becomes <input disabled>.
        use_trailing_solidus=False|True
          Includes a close-tag slash at the end of the start tag of void
          elements (empty elements whose end tag is forbidden). E.g. <hr/>.
        space_before_trailing_solidus=True|False
          Places a space immediately before the closing slash in a tag
          using a trailing solidus. E.g. <hr />. Requires use_trailing_solidus.
        sanitize=False|True
          Strip all unsafe or unknown constructs from output.
          See `html5lib user documentation`_
        omit_optional_tags=True|False
          Omit start/end tags that are optional.
        alphabetical_attributes=False|True
          Reorder attributes to be in alphabetical order.

        .. _html5lib user documentation: http://code.google.com/p/html5lib/wiki/UserDocumentation
        """
        unexpected_args = frozenset(kwargs) - frozenset(self.options)
        if len(unexpected_args) > 0:
            raise TypeError("__init__() got an unexpected keyword argument '%s'" % next(iter(unexpected_args)))
        if 'quote_char' in kwargs:
            self.use_best_quote_char = False
        for attr in self.options:
            setattr(self, attr, kwargs.get(attr, getattr(self, attr)))
        self.errors = []
        self.strict = False

    def encode(self, string):
        assert(isinstance(string, text_type))
        if self.encoding:
            return string.encode(self.encoding, "htmlentityreplace")
        else:
            return string

    def encodeStrict(self, string):
        assert(isinstance(string, text_type))
        if self.encoding:
            return string.encode(self.encoding, "strict")
        else:
            return string

    def serialize(self, treewalker, encoding=None):
        # pylint:disable=too-many-nested-blocks
        self.encoding = encoding
        in_cdata = False
        self.errors = []

        if encoding and self.inject_meta_charset:
            from .filters.inject_meta_charset import Filter
            treewalker = Filter(treewalker, encoding)
        # Alphabetical attributes is here under the assumption that none of
        # the later filters add or change order of attributes; it needs to be
        # before the sanitizer so escaped elements come out correctly
        if self.alphabetical_attributes:
            from .filters.alphabeticalattributes import Filter
            treewalker = Filter(treewalker)
        # WhitespaceFilter should be used before OptionalTagFilter
        # for maximum efficiency of the latter filter
        if self.strip_whitespace:
            from .filters.whitespace import Filter
            treewalker = Filter(treewalker)
        if self.sanitize:
            from .filters.sanitizer import Filter
            treewalker = Filter(treewalker)
        if self.omit_optional_tags:
            from .filters.optionaltags import Filter
            treewalker = Filter(treewalker)

        for token in treewalker:
            type = token["type"]
            if type == "Doctype":
                doctype = "<!DOCTYPE %s" % token["name"]

                if token["publicId"]:
                    doctype += ' PUBLIC "%s"' % token["publicId"]
                elif token["systemId"]:
                    doctype += " SYSTEM"
                if token["systemId"]:
                    if token["systemId"].find('"') >= 0:
                        if token["systemId"].find("'") >= 0:
                            self.serializeError("System identifier contains both single and double quote characters")
                        quote_char = "'"
                    else:
                        quote_char = '"'
                    doctype += " %s%s%s" % (quote_char, token["systemId"], quote_char)

                doctype += ">"
                yield self.encodeStrict(doctype)

            elif type in ("Characters", "SpaceCharacters"):
                if type == "SpaceCharacters" or in_cdata:
                    if in_cdata and token["data"].find("</") >= 0:
                        self.serializeError("Unexpected </ in CDATA")
                    yield self.encode(token["data"])
                else:
                    yield self.encode(escape(token["data"]))

            elif type in ("StartTag", "EmptyTag"):
                name = token["name"]
                yield self.encodeStrict("<%s" % name)
                if name in rcdataElements and not self.escape_rcdata:
                    in_cdata = True
                elif in_cdata:
                    self.serializeError("Unexpected child element of a CDATA element")
                for (_, attr_name), attr_value in token["data"].items():
                    # TODO: Add namespace support here
                    k = attr_name
                    v = attr_value
                    yield self.encodeStrict(' ')

                    yield self.encodeStrict(k)
                    if not self.minimize_boolean_attributes or \
                        (k not in booleanAttributes.get(name, tuple()) and
                         k not in booleanAttributes.get("", tuple())):
                        yield self.encodeStrict("=")
                        if self.quote_attr_values == "always" or len(v) == 0:
                            quote_attr = True
                        elif self.quote_attr_values == "spec":
                            quote_attr = _quoteAttributeSpec.search(v) is not None
                        elif self.quote_attr_values == "legacy":
                            quote_attr = _quoteAttributeLegacy.search(v) is not None
                        else:
                            raise ValueError("quote_attr_values must be one of: "
                                             "'always', 'spec', or 'legacy'")
                        v = v.replace("&", "&amp;")
                        if self.escape_lt_in_attrs:
                            v = v.replace("<", "&lt;")
                        if quote_attr:
                            quote_char = self.quote_char
                            if self.use_best_quote_char:
                                if "'" in v and '"' not in v:
                                    quote_char = '"'
                                elif '"' in v and "'" not in v:
                                    quote_char = "'"
                            if quote_char == "'":
                                v = v.replace("'", "&#39;")
                            else:
                                v = v.replace('"', "&quot;")
                            yield self.encodeStrict(quote_char)
                            yield self.encode(v)
                            yield self.encodeStrict(quote_char)
                        else:
                            yield self.encode(v)
                if name in voidElements and self.use_trailing_solidus:
                    if self.space_before_trailing_solidus:
                        yield self.encodeStrict(" /")
                    else:
                        yield self.encodeStrict("/")
                yield self.encode(">")

            elif type == "EndTag":
                name = token["name"]
                if name in rcdataElements:
                    in_cdata = False
                elif in_cdata:
                    self.serializeError("Unexpected child element of a CDATA element")
                yield self.encodeStrict("</%s>" % name)

            elif type == "Comment":
                data = token["data"]
                if data.find("--") >= 0:
                    self.serializeError("Comment contains --")
                yield self.encodeStrict("<!--%s-->" % token["data"])

            elif type == "Entity":
                name = token["name"]
                key = name + ";"
                if key not in entities:
                    self.serializeError("Entity %s not recognized" % name)
                if self.resolve_entities and key not in xmlEntities:
                    data = entities[key]
                else:
                    data = "&%s;" % name
                yield self.encodeStrict(data)

            else:
                self.serializeError(token["data"])

    def render(self, treewalker, encoding=None):
        if encoding:
            return b"".join(list(self.serialize(treewalker, encoding)))
        else:
            return "".join(list(self.serialize(treewalker)))

    def serializeError(self, data="XXX ERROR MESSAGE NEEDED"):
        # XXX The idea is to make data mandatory.
        self.errors.append(data)
        if self.strict:
            raise SerializeError


class SerializeError(Exception):
    """Error in serialized tree"""
    pass
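
For orientation, a brief usage sketch of the two entry points above, with upstream html5lib names (the markup and option choices here are illustrative, not the only sensible ones):

    import html5lib
    from html5lib.serializer import HTMLSerializer, serialize

    dom = html5lib.parse("<html><body><p class=x>Hi")

    # Convenience function: builds the treewalker internally.
    print(serialize(dom, omit_optional_tags=False, quote_attr_values="always"))

    # Equivalent explicit form.
    walker = html5lib.getTreeWalker("etree")
    s = HTMLSerializer(omit_optional_tags=False, quote_attr_values="always")
    print(s.render(walker(dom)))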
12 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/treeadapters/__init__.py vendored Normal file
@@ -0,0 +1,12 @@
from __future__ import absolute_import, division, unicode_literals

from . import sax

__all__ = ["sax"]

try:
    from . import genshi  # noqa
except ImportError:
    pass
else:
    __all__.append("genshi")
47 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/treeadapters/genshi.py vendored Normal file
@@ -0,0 +1,47 @@
from __future__ import absolute_import, division, unicode_literals

from genshi.core import QName, Attrs
from genshi.core import START, END, TEXT, COMMENT, DOCTYPE


def to_genshi(walker):
    text = []
    for token in walker:
        type = token["type"]
        if type in ("Characters", "SpaceCharacters"):
            text.append(token["data"])
        elif text:
            yield TEXT, "".join(text), (None, -1, -1)
            text = []

        if type in ("StartTag", "EmptyTag"):
            if token["namespace"]:
                name = "{%s}%s" % (token["namespace"], token["name"])
            else:
                name = token["name"]
            attrs = Attrs([(QName("{%s}%s" % attr if attr[0] is not None else attr[1]), value)
                           for attr, value in token["data"].items()])
            yield (START, (QName(name), attrs), (None, -1, -1))
            if type == "EmptyTag":
                type = "EndTag"

        if type == "EndTag":
            if token["namespace"]:
                name = "{%s}%s" % (token["namespace"], token["name"])
            else:
                name = token["name"]

            yield END, QName(name), (None, -1, -1)

        elif type == "Comment":
            yield COMMENT, token["data"], (None, -1, -1)

        elif type == "Doctype":
            yield DOCTYPE, (token["name"], token["publicId"],
                            token["systemId"]), (None, -1, -1)

        else:
            pass  # FIXME: What to do?

    if text:
        yield TEXT, "".join(text), (None, -1, -1)
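
A usage sketch, assuming the third-party genshi package is installed (XMLSerializer is assumed to come from its genshi.output module; the markup is illustrative):

    import html5lib
    from genshi.output import XMLSerializer
    from html5lib.treeadapters.genshi import to_genshi

    dom = html5lib.parse("<p>hello <b>world</b></p>")
    walker = html5lib.getTreeWalker("etree")
    # to_genshi yields (kind, data, pos) event tuples that genshi can serialize.
    print("".join(XMLSerializer()(to_genshi(walker(dom)))))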
44 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/treeadapters/sax.py vendored Normal file
@@ -0,0 +1,44 @@
from __future__ import absolute_import, division, unicode_literals

from xml.sax.xmlreader import AttributesNSImpl

from ..constants import adjustForeignAttributes, unadjustForeignAttributes

prefix_mapping = {}
for prefix, localName, namespace in adjustForeignAttributes.values():
    if prefix is not None:
        prefix_mapping[prefix] = namespace


def to_sax(walker, handler):
    """Call SAX-like content handler based on treewalker walker"""
    handler.startDocument()
    for prefix, namespace in prefix_mapping.items():
        handler.startPrefixMapping(prefix, namespace)

    for token in walker:
        type = token["type"]
        if type == "Doctype":
            continue
        elif type in ("StartTag", "EmptyTag"):
            attrs = AttributesNSImpl(token["data"],
                                     unadjustForeignAttributes)
            handler.startElementNS((token["namespace"], token["name"]),
                                   token["name"],
                                   attrs)
            if type == "EmptyTag":
                handler.endElementNS((token["namespace"], token["name"]),
                                     token["name"])
        elif type == "EndTag":
            handler.endElementNS((token["namespace"], token["name"]),
                                 token["name"])
        elif type in ("Characters", "SpaceCharacters"):
            handler.characters(token["data"])
        elif type == "Comment":
            pass
        else:
            assert False, "Unknown token type"

    for prefix, namespace in prefix_mapping.items():
        handler.endPrefixMapping(prefix)
    handler.endDocument()
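
A minimal sketch of driving a standard-library SAX handler from an html5lib treewalker (the TextCollector class and markup are illustrative):

    import html5lib
    from xml.sax.handler import ContentHandler
    from html5lib.treeadapters.sax import to_sax

    class TextCollector(ContentHandler):
        """Collect character data; every other SAX callback stays a no-op."""
        def __init__(self):
            ContentHandler.__init__(self)
            self.chunks = []

        def characters(self, content):
            self.chunks.append(content)

    dom = html5lib.parse("<p>hello <b>world</b></p>")
    walker = html5lib.getTreeWalker("etree")
    handler = TextCollector()
    to_sax(walker(dom), handler)
    print("".join(handler.chunks))  # -> "hello world"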
76 third_party/python/pipenv/pipenv/patched/notpip/_vendor/html5lib/treebuilders/__init__.py vendored Normal file
@@ -0,0 +1,76 @@
"""A collection of modules for building different kinds of tree from
|
||||
HTML documents.
|
||||
|
||||
To create a treebuilder for a new type of tree, you need to do
|
||||
implement several things:
|
||||
|
||||
1) A set of classes for various types of elements: Document, Doctype,
|
||||
Comment, Element. These must implement the interface of
|
||||
_base.treebuilders.Node (although comment nodes have a different
|
||||
signature for their constructor, see treebuilders.etree.Comment)
|
||||
Textual content may also be implemented as another node type, or not, as
|
||||
your tree implementation requires.
|
||||
|
||||
2) A treebuilder object (called TreeBuilder by convention) that
|
||||
inherits from treebuilders._base.TreeBuilder. This has 4 required attributes:
|
||||
documentClass - the class to use for the bottommost node of a document
|
||||
elementClass - the class to use for HTML Elements
|
||||
commentClass - the class to use for comments
|
||||
doctypeClass - the class to use for doctypes
|
||||
It also has one required method:
|
||||
getDocument - Returns the root node of the complete document tree
|
||||
|
||||
3) If you wish to run the unit tests, you must also create a
|
||||
testSerializer method on your treebuilder which accepts a node and
|
||||
returns a string containing Node and its children serialized according
|
||||
to the format used in the unittests
|
||||
"""
|
||||
|
||||
from __future__ import absolute_import, division, unicode_literals
|
||||
|
||||
from .._utils import default_etree
|
||||
|
||||
treeBuilderCache = {}
|
||||
|
||||
|
||||
def getTreeBuilder(treeType, implementation=None, **kwargs):
|
||||
"""Get a TreeBuilder class for various types of tree with built-in support
|
||||
|
||||
treeType - the name of the tree type required (case-insensitive). Supported
|
||||
values are:
|
||||
|
||||
"dom" - A generic builder for DOM implementations, defaulting to
|
||||
a xml.dom.minidom based implementation.
|
||||
"etree" - A generic builder for tree implementations exposing an
|
||||
ElementTree-like interface, defaulting to
|
||||
xml.etree.cElementTree if available and
|
||||
xml.etree.ElementTree if not.
|
||||
"lxml" - A etree-based builder for lxml.etree, handling
|
||||
limitations of lxml's implementation.
|
||||
|
||||
implementation - (Currently applies to the "etree" and "dom" tree types). A
|
||||
module implementing the tree type e.g.
|
||||
xml.etree.ElementTree or xml.etree.cElementTree."""
|
||||
|
||||
treeType = treeType.lower()
|
||||
if treeType not in treeBuilderCache:
|
||||
if treeType == "dom":
|
||||
from . import dom
|
||||
# Come up with a sane default (pref. from the stdlib)
|
||||
if implementation is None:
|
||||
from xml.dom import minidom
|
||||
implementation = minidom
|
||||
# NEVER cache here, caching is done in the dom submodule
|
||||
return dom.getDomModule(implementation, **kwargs).TreeBuilder
|
||||
elif treeType == "lxml":
|
||||
from . import etree_lxml
|
||||
treeBuilderCache[treeType] = etree_lxml.TreeBuilder
|
||||
elif treeType == "etree":
|
||||
from . import etree
|
||||
if implementation is None:
|
||||
implementation = default_etree
|
||||
# NEVER cache here, caching is done in the etree submodule
|
||||
return etree.getETreeModule(implementation, **kwargs).TreeBuilder
|
||||
else:
|
||||
raise ValueError("""Unrecognised treebuilder "%s" """ % treeType)
|
||||
return treeBuilderCache.get(treeType)
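
getTreeBuilder is usually consumed through html5lib.HTMLParser rather than called directly; a short sketch with upstream names (the markup is illustrative):

    import html5lib
    from html5lib.treebuilders import getTreeBuilder

    # "etree" resolves to xml.etree.cElementTree when that module is available.
    TreeBuilder = getTreeBuilder("etree")
    parser = html5lib.HTMLParser(tree=TreeBuilder)
    document = parser.parse("<p>Hello</p>")  # an ElementTree element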
Some files were not shown because too many files have changed in this diff