Migration of runtime and development scripts.

This is a single large PR adding all the current build/install/run scripting
support.  There are a few noteworthy caveats:

1) The primary build script (build_release.py) is an interim Python script.
   The intent is to replace it with a Gradle build.

2) The main installation script (install_spinnaker.py) pulls the artifacts
   from a directory or storage bucket (e.g. S3 or GCS). The expectation is
   to eventually run apt-get install spinnaker instead, where the spinnaker
   Debian package does similar things with more standard packaging.

3) There is a pair of scripts to install a development environment on a
   machine (install_development.py) and to set one up for a user
   (bootstrap_dev.sh). These too are interim and should become an
   apt-get install spinnaker-dev Debian package.

4) There is a script to build a Google Compute Engine image (see the
   sketch after this list). It uses Packer, so it should be easy to add
   AMIs and other platform image support later. Ultimately it runs the
   install script, which is platform-independent anyway.

5) There are runtime scripts for managing an instance (start/stop).
   The validate script is minimal at the moment; I'll add rules back
   in future PRs. For now it is representative. The reconfigure script
   is intended to inject customizations into settings.js, which has to
   be static.

6) The pylib/yaml directory is a copy of the standard Python YAML library
   (PyYAML), which is not installed by default. It is vendored here to
   avoid the complexity of managing Python libraries (pip is not installed
   by default either) when there is no other need for Python.
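
As a sketch of the image-building flow from item 4, assuming a release was
already written to a GCS or S3 bucket at $RELEASE_PATH (--image_project is
optional; the default gcloud project is used when it is omitted):
  ../spinnaker/dev/build_google_image.sh \
      --release_path=$RELEASE_PATH \
      --image_project=$IMAGE_PROJECT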

The dev/ directory is only intended for developers (the stuff that will be
available in the spinnaker-dev packaging). The other directories are
intended for operational runtime environments.

The scripts support running a standard installation (in the runtime directory)
or as a developer (straight out of gradle). The developer scripts are
generally separate so they can inject a different configuration specifying
where to locate components.

Minimal usage as a developer[*] would be:
  mkdir build
  cd build
  ../spinnaker/dev/refresh_source.sh --github_user=<user> --pull_origin
  ../spinnaker/dev/run_dev.sh  (will build and run as a developer)

[*] Assuming you already have a machine set up.
If you don't, you can run
  spinnaker/dev/install_development.py  (to set up a machine)
  spinnaker/dev/bootstrap_dev.sh  (to set up user environments)

bootstrap_dev.sh will create the build directory and refresh the sources,
leaving you in ./build.
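
If you would rather the PATH changes take effect in your current shell,
bootstrap_dev.sh can be sourced instead of executed:
  source spinnaker/dev/bootstrap_dev.sh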

To use this in a production environment, you'd need to build a release.
  RELEASE_PATH=<path or storage bucket>
  ../spinnaker/dev/refresh_source.sh --pull_origin
  ../spinnaker/dev/build_release.sh --release_path=$RELEASE_PATH

That will create and write the release to RELEASE_PATH, along with an
install_spinnaker.py.zip file. You can then copy that .py.zip onto the
machine you want to install Spinnaker on, then run
   python install_spinnaker.py.zip \
       --package_manager \
       --release_path=$RELEASE_PATH

To complete the installation, create a /root/.spinnaker/spinnaker-local.yml
file (from /opt/spinnaker/config/default-spinnaker-local.yml) and fine-tune
it with your credentials and other settings. Be sure to enable a provider
and set its credentials.
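
For example (a minimal sketch; the edit step is where you enable a
provider and set its credentials):
   sudo mkdir -p /root/.spinnaker
   sudo cp /opt/spinnaker/config/default-spinnaker-local.yml \
       /root/.spinnaker/spinnaker-local.yml
   sudo vi /root/.spinnaker/spinnaker-local.yml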

Then start Spinnaker:
   sudo /opt/spinnaker/scripts/start_spinnaker.sh
This commit is contained in:
Eric Wiseblatt 2015-10-28 17:42:04 +00:00
Parent 0d6898782a
Commit a24001f693
65 changed files: 11657 additions and 0 deletions


@@ -0,0 +1,2 @@
CREATE KEYSPACE IF NOT EXISTS echo
WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };


@@ -0,0 +1,2 @@
CREATE KEYSPACE IF NOT EXISTS front50
WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };


@@ -0,0 +1,2 @@
CREATE KEYSPACE IF NOT EXISTS rush
WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

0
dev/__init__.py Normal file

@@ -0,0 +1,209 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Creates an image in the default project named with the release name.
# python build_google_image.py --release_path $RELEASE_PATH
import argparse
import os
import sys
import tempfile
from spinnaker.run import check_run_and_monitor
from spinnaker.run import check_run_quick
from spinnaker.run import run_quick
class AbstractPackerBuilder(object):
PACKER_TEMPLATE = "Undefined" # specializations should override
@property
def packer_output(self):
"""The output from the packer process once it has completed."""
return self.__output
@property
def options(self):
"""The options parsed from the commandline."""
return self.__options
def __init__(self, options):
self.__options = options
self.__installer_path = None
self.__packer_vars = []
self.__raw_args = sys.argv[1:]
self.__var_map = {}
self.__output = None
# This is used to clean up error reporting
self.__in_subprocess = False
def remove_raw_arg(self, name):
"""Remove the raw argument data for the given name."""
result = []
flag = '--' + name
remaining = self.__raw_args
while remaining:
arg = remaining.pop(0)
if not arg.startswith(flag):
result.append(arg)
continue
if remaining and not remaining[0].startswith('--'):
# pop value for this argument as well.
remaining.pop(0)
self.__raw_args = result
def add_packer_variable(self, name, value):
"""Adds variable to pass to packer.
Args:
name [string]: The name of the variable.
value [string]: The value to pass.
"""
self.__var_map[name] = value
def _do_prepare(self):
"""Hook for specialized builders to prepare arguments if needed."""
pass
def _do_cleanup(self):
"""Hook for specialized builders to cleanup if needed."""
pass
def _do_get_next_steps(self):
"""Hook for specialized builders to return follow-up instructions."""
return ''
def create_image(self):
"""Runs the process for creating an image.
Prepare, Build, Cleanup.
"""
self.__prepare()
try:
self.__in_subprocess = True
result = check_run_and_monitor(
'packer build {vars} {packer}'
.format(vars=' '.join(self.__packer_vars),
packer=self.PACKER_TEMPLATE),
echo=True)
self.__in_subprocess = False
self.__output = result.stdout
finally:
self.__cleanup()
def __prepare(self):
"""Internal helper function implementing the Prepare step.
Calls _do_prepare to allow specialized classes to hook in.
"""
fd,self.__installer_path = tempfile.mkstemp()
os.close(fd)
self.__var_map['installer_path'] = self.__installer_path
if self.options.release_path.startswith('gs://'):
program = 'gsutil'
elif self.options.release_path.startswith('s3://'):
program = 'aws s3'
else:
raise ValueError('--release_path must be either GCS or S3, not "{path}".'
.format(path=self.options.release_path))
self.__in_subprocess = True
check_run_quick(
'{program} cp {release}/install/install_spinnaker.py.zip {path}'
.format(program=program,
release=self.options.release_path,
path=self.__installer_path))
self.__in_subprocess = False
self._do_prepare()
self.__add_args_to_map()
self.__packer_vars = ['-var "{name}={value}"'
.format(name=name, value=value)
for name,value in self.__var_map.items()]
def __cleanup(self):
"""Internal helper function implementing the Cleanup step.
Calls _do_cleanup to allow specialized classes to hook in.
"""
if self.__installer_path:
try:
os.remove(self.__installer_path)
except OSError:
pass
self._do_cleanup()
def __add_args_to_map(self):
"""Add remaining raw args to the packer variable map.
This is a helper method for internal prepare()
"""
remaining = list(self.__raw_args)
while remaining:
arg = remaining.pop(0)
if not arg.startswith('--'):
raise ValueError('Unexpected argument "{arg}"'.format(arg=arg))
arg = arg[2:]
eq = arg.find('=')
if eq > 0:
self.__var_map[arg[0:eq]] = arg[eq + 1:]
else:
self.__var_map[arg] = ''
if remaining and not remaining[0].startswith('--'):
self.__var_map[arg] = remaining.pop(0)
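# For example, the raw args ["--foo=bar", "--flag", "--key", "value"]
# yield the variable map {'foo': 'bar', 'flag': '', 'key': 'value'}.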
@classmethod
def init_argument_parser(cls, parser):
"""Initialize the command-line parameters."""
parser.add_argument(
'--release_path', required=True,
help='URI to the release to install on a storage service.')
@classmethod
def main(cls):
class NonAbbreviatingParser(argparse.ArgumentParser):
# Don't allow implied abbreviations. Otherwise unknown options that
# are substrings of known options would be interpreted as the known
# option.
def _get_option_tuples(self, option_string):
return []
parser = NonAbbreviatingParser()
parser.description = (
'Additional command line variables are passed through to {packer}'
'\nSee the "variables" section in the template for more options.'
.format(packer=cls.PACKER_TEMPLATE))
cls.init_argument_parser(parser)
options, unknown = parser.parse_known_args()
builder = cls(options)
try:
builder.create_image()
print builder._do_get_next_steps()
except BaseException as ex:
if builder.__in_subprocess:
# If we failed in packer, just exit
sys.exit(-1)
# This was our programming error, so get a stack trace.
raise

243
dev/bootstrap_dev.sh Executable file

@@ -0,0 +1,243 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# About this script:
# -----------------
# This script prepares a user environment to build spinnaker from source.
# It is intended to only be run one time. Typically on a newly provisioned
# GCE instance, but could be run on any linux machine.
#
# What it does:
# ------------
# This script will do the following:
#
# * Create a $HOME/.git-credentials file if one does not already exist
# You will be prompted for your github username and two-factor access
# token.
#
# * Creates a build/ directory as a subdirectory of $PWD.
#
# * Clone each of the spinnaker subsystem github repositories into build/
#
# When a repository is cloned, an upstream remote will be added to
# reference the authoritative repository for the given repository.
# (e.g. the spinnaker github repository corresponding to your origin).
#
# If the environment variable GITHUB_REPOSITORY_OWNER is set then
# the repositories will be cloned from that github user. Otherwise it
# will be cloned from the user in your .git-credentials. If the owner
# is "upstream" then it will clone the authoritative repository for each
# repository cloned (e.g. all the 'spinnaker' repositories).
#
# * Print out next step instructions.
#
#
# Running the script:
# -------------------
# Rather than executing the script, you can source it to leave some side
# effects in the current shell, such as path changes for newly installed
# components.
function prompt_YN() {
def=$1
msg=$2
if [[ "$def" == "Y" ]]; then
local choice="Y/n"
else
local choice="y/N"
fi
while true; do
got=""
read -p "$msg [$choice]: " got
if [[ "$got" == "" ]]; then
got=$def
fi
if [[ "$got" == "y" || "$got" == "Y" ]]; then
return 0
fi
if [[ "$got" == "n" || "$got" == "N" ]]; then
return 1
fi
done
}
function git_clone() {
local git_user="$1"
local git_project="$2"
local upstream_user="$3"
if [[ "$git_user" == "default" || "$git_user" == "upstream" ]]; then
git_user="$upstream_user"
fi
git clone https://github.com/$git_user/$git_project.git
if [[ "$github_user" == "$upstream_user" ]]; then
git -C $git_project remote set-url --push origin disabled
else
git -C $git_project remote add upstream \
https://github.com/$upstream_user/${git_project}.git
fi
}
function prepare_git() {
# If you do not have a .git-credentials file, you might want to create one.
# You are better off doing this on your original machine, because then
# it will be copied here (and to future VMs created by this script).
if [[ -f ~/.git-credentials ]]; then
GITHUB_USER=$(sed 's/https:\/\/\([^:]\+\):.*@github.com/\1/' ~/.git-credentials)
else
read -p 'Please enter your GitHub User ID: ' GITHUB_USER
read -p 'Please enter your GitHub Access Token: ' ACCESS_TOKEN
cat <<EOF > ~/.git-credentials
https://$GITHUB_USER:$ACCESS_TOKEN@github.com
EOF
chmod 600 ~/.git-credentials
if prompt_YN "Y" "Cache git credentials?"; then
git config --global credential.helper store
fi
fi
# If specified then use this as the user owning github repositories when
# cloning them. If the owner is "default" then use the default owner for the
# given repository. If this is not defined, then use GITHUB_USER which is
# intended to be the github user account for the user running this script.
GITHUB_REPOSITORY_OWNER=${GITHUB_REPOSITORY_OWNER:-"$GITHUB_USER"}
# Select repository
# Inform that "upstream" is a choice
cat <<EOF
When selecting a repository owner, you can use "upstream" to use
each of the authoritative repositories rather than your own forks.
However, you will not be able to push any changes "upstream".
This selection is only used if this script will be cloning repositories.
EOF
read -p "Github repository owner [$GITHUB_REPOSITORY_OWNER] " \
CONFIRMED_GITHUB_REPOSITORY_OWNER
if [[ "$CONFIRMED_GITHUB_REPOSITORY_OWNER" == "" ]]; then
CONFIRMED_GITHUB_REPOSITORY_OWNER=$GITHUB_REPOSITORY_OWNER
fi
}
have_packer=$(which packer)
if [[ ! $have_packer ]] \
&& prompt_YN "N" "Install packer (to build images)?"; then
echo "Getting packer"
url=https://dl.bintray.com/mitchellh/packer/packer_0.8.6_linux_amd64.zip
pushd $HOME
if ! curl -s --location -O "$url"; then
popd
echo "Failed downloading $url"
exit -1
fi
unzip $(basename $url) -d packer > /dev/null
rm -f $(basename $url)
popd
export PATH=$PATH:$HOME/packer
if prompt_YN "Y" "Update .bash_profile to add $HOME/packer to your PATH?"; then
echo "PATH=\$PATH:\$HOME/packer" >> $HOME/.bash_profile
fi
fi
prepare_git
if prompt_YN "Y" "Install (or update) Google Cloud Platform SDK?"; then
# Download gcloud to ensure it is a recent version.
# Note that this is in this script because the gcloud install method isn't
# system-wide. The awscli is installed in the install_development.py script.
pushd $HOME
echo "*** BEGIN installing gcloud..."
curl https://sdk.cloud.google.com | bash
if gcloud auth list 2>&1 | grep -q "No credential"; then
echo "Running gcloud authentication..."
gcloud auth login
else
echo "*** Using existing gcloud authentication:"
gcloud auth list
fi
echo "*** FINISHED installing gcloud..."
popd
fi
# This is a bootstrap pull of the development scripts.
if [[ ! -e "spinnaker" ]]; then
git_clone $CONFIRMED_GITHUB_REPOSITORY_OWNER "spinnaker" "spinnaker"
else
echo "spinnaker/ already exists. Don't clone it."
fi
# Pull the spinnaker source into a fresh build directory.
mkdir -p build
cd build
../spinnaker/dev/refresh_source.sh --pull_origin \
--github_user $CONFIRMED_GITHUB_REPOSITORY_OWNER
# Some dependencies of Deck rely on Bower to manage their dependencies. Bower
# annoyingly prompts the user to collect some stats, so this disables that.
echo "{\"interactive\":false}" > ~/.bowerrc
# If this script was run in a different shell then we
# don't have the environment variables we set, and aren't in the build directory.
function print_invoke_instructions() {
cat <<EOF
To initiate a build and run spinnaker:
cd build
../spinnaker/dev/run_dev.sh
EOF
}
# If we sourced this script, we already have a bunch of stuff setup.
function print_source_instructions() {
cat <<EOF
To initiate a build and run spinnaker:
../spinnaker/dev/run_dev.sh
EOF
}
function print_run_book_reference() {
cat <<EOF
For more help, see the Spinnaker Build & Run Book:
https://docs.google.com/document/d/1Q_ah8eG3Imyw-RWS1DSp_ItM2pyn56QEepCeaOAPaKA
EOF
}
# The /bogus prefix here is because when this script is sourced, $0 is
# "-bash", which basename would treat as a flag. Since basename ignores
# the leading path anyway, we just prepend a bogus one.
if [[ "$(basename "/bogus/$0")" == "bootstrap_dev.sh" ]]; then
print_invoke_instructions
else
print_source_instructions
fi
print_run_book_reference
# Let path changes take effect in calling shell (if we source'd this)
exec bash -l


@@ -0,0 +1,33 @@
{
"variables": {
"release_path": null,
"project_id": null,
"installer_path": null,
"json_credentials": "",
"zone": "us-central1-f",
"source_image": "ubuntu-1404-trusty-v20150909a",
"target_image": "{{env `USER`}}-spinnaker-{{timestamp}}",
"install_args": ""
},
"builders": [{
"type": "googlecompute",
"account_file": "{{user `json_credentials`}}",
"project_id": "{{user `project_id`}}",
"source_image": "{{user `source_image`}}",
"zone": "{{user `zone`}}",
"image_name": "{{user `target_image`}}"
}],
"provisioners": [
{
"type": "file",
"source": "{{user `installer_path`}}",
"destination": "/tmp/install_spinnaker.py.zip"
},
{
"type": "shell",
"inline": ["python /tmp/install_spinnaker.py.zip --package_manager --release_path={{user `release_path`}} {{user `install_args`}}"]
}
]
}

97
dev/build_google_image.py Normal file

@@ -0,0 +1,97 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Creates an image in the default project named with the release name.
# python build_google_image.py --release_path $RELEASE_PATH
import argparse
import os
import re
import sys
from spinnaker.run import check_run_quick
from spinnaker.run import run_quick
from abstract_packer_builder import AbstractPackerBuilder
def get_default_project():
"""Determine the default project name.
The default project name is the gcloud configured default project.
"""
result = check_run_quick('gcloud config list', echo=False)
return re.search('project = (.*)\n', result.stdout).group(1)
class GooglePackerBuilder(AbstractPackerBuilder):
PACKER_TEMPLATE = os.path.dirname(sys.argv[0]) + '/build_google_image.packer'
def _do_prepare(self):
if not self.options.image_project:
self.options.image_project = get_default_project()
# Set the default target_image name to the release name.
# If --target_image was on the commandline it will override this
# later when the commandline vars are added.
self.add_packer_variable(
'target_image',
os.path.basename(self.options.release_path).replace('_', '-'))
# image_project isn't passed through to packer.
self.remove_raw_arg('image_project')
# The default project_id may be overridden by a commandline argument later.
self.add_packer_variable('project_id', self.options.image_project)
def _do_get_next_steps(self):
match = re.search('googlecompute: A disk image was created: (.+)',
self.packer_output)
image_name = match.group(1) if match else '$IMAGE_NAME'
return """
To deploy this image, use a command like:
gcloud compute instances create {image} \\
--project $GOOGLE_SPINNAKER_PROJECT \\
--image {image} \\
--image-project {image_project} \\
--machine-type n1-standard-8 \\
--zone $GOOGLE_ZONE \\
--scopes=compute-rw \\
--metadata=startup-script=/opt/spinnaker/install/first_google_boot.sh \\
--metadata-from-file=\\
spinnaker_local=$SPINNAKER_YML_PATH,\\
managed_project_credentials=$GOOGLE_PRIMARY_JSON_CREDENTIAL_PATH
You can leave off the managed_project_credentials metadata if
$SPINNAKER_PROJECT is the same as the GOOGLE_PRIMARY_MANAGED_PROJECT_ID
in the spinnaker-local.yml.
""".format(
image=image_name,
image_project=self.options.image_project)
@classmethod
def init_argument_parser(cls, parser):
"""Initialize the command-line parameters."""
super(GooglePackerBuilder, cls).init_argument_parser(parser)
parser.add_argument(
'--image_project', default='',
help='Google Cloud Platform project to add the image to.')
if __name__ == '__main__':
GooglePackerBuilder.main()

19
dev/build_google_image.sh Executable file

@@ -0,0 +1,19 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PYTHONPATH=$(dirname $0)/../pylib \
python \
$(dirname $0)/build_google_image.py "$@"

225
dev/build_google_tarball.py Normal file

@@ -0,0 +1,225 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Given an existing GCE image, create a tarball for it.
# PYTHONPATH=. python dev/build_google_tarball.py --image=$IMAGE_NAME --tarball_uri=gs://$RELEASE_URI/$(basename $RELEASE_URI).tar.gz
import argparse
import os
import re
import sys
import time
from pylib.spinnaker.run import run_quick
from pylib.spinnaker.run import check_run_quick
def get_default_project():
"""Determine the default project name.
The default project name is the gcloud configured default project.
"""
result = check_run_quick('gcloud config list', echo=False)
return re.search('project = (.*)\n', result.stdout).group(1)
class Builder(object):
def __init__(self, options):
self.options = options
self.check_for_image_tarball()
self.__zone = options.zone
self.__project = options.project or get_default_project()
self.__instance = options.instance
def deploy_instance(self):
"""Deploy an instance (from an image) so we can get at its disks.
This isn't necessarily efficient, but is simple since we already have
means to create images.
"""
if self.__instance:
print 'Using existing instance {name}'.format(name=self.__instance)
return
if not self.options.image:
raise ValueError('Neither --instance nor --image was specified.')
instance = 'build-spinnaker-tarball-{unique}'.format(
unique=time.strftime('%Y%m%d%H%M%S'))
print 'Deploying temporary instance {name}'.format(name=instance)
check_run_quick('gcloud compute instances create {name}'
' --zone={zone} --project={project}'
' --image={image} --image-project={image_project}'
' --scopes compute-rw,storage-rw'
.format(name=instance,
zone=self.__zone,
project=self.__project,
image=self.options.image,
image_project=self.options.image_project),
echo=False)
self.__instance = instance
def cleanup_instance(self):
"""If we deployed an instance, tear it down."""
if self.options.instance:
print 'Leaving pre-existing instance {name}'.format(
name=self.options.instance)
return
print 'Deleting instance {name}'.format(name=self.__instance)
run_quick('gcloud compute instances delete {name}'
' --zone={zone} --project={project}'
.format(name=self.__instance,
zone=self.__zone,
project=self.__project),
echo=False)
def check_for_image_tarball(self):
"""See if the tarball aleady exists."""
uri = self.options.tarball_uri
if (not uri.startswith('gs://')):
error = ('--tarball_uri must be a Google Cloud Storage URI'
', not "{uri}"'
.format(uri=uri))
raise ValueError(error)
result = run_quick('gsutil ls {uri}'.format(uri=uri), echo=False)
if not result.returncode:
error = 'tarball "{uri}" already exists.'.format(uri=uri)
raise ValueError(error)
def __extract_image_tarball_helper(self):
"""Helper function for make_image_tarball that does the work.
Note that the work happens on the instance itself. So this function
builds a remote command that it then executes on the prototype instance.
"""
print 'Creating image tarball.'
set_excludes_bash_command = (
'EXCLUDES=`python -c'
' "import glob; print \',\'.join(glob.glob(\'/home/*\'))"`')
tar_path = self.options.tarball_uri
tar_name = os.path.basename(tar_path)
remote_script = [
'sudo mkdir /mnt/tmp',
'sudo /usr/share/google/safe_format_and_mount -m'
' "mkfs.ext4 -F" /dev/sdb /mnt/tmp',
set_excludes_bash_command,
'sudo gcimagebundle -d /dev/sda -o /mnt/tmp'
' --log_file=/tmp/export.log --output_file_name={tar_name}'
' --excludes=/tmp,\\$EXCLUDES'.format(tar_name=tar_name),
'gsutil -q cp /mnt/tmp/{tar_name} {output_path}'.format(
tar_name=tar_name, output_path=tar_path)]
command = '; '.join(remote_script)
check_run_quick('gcloud compute ssh --command="{command}"'
' --project {project} --zone {zone} {instance}'
.format(command=command.replace('"', r'\"'),
project=self.__project,
zone=self.__zone,
instance=self.__instance))
def create_tarball(self):
"""Create a tar.gz file from the instance specified by the options.
The file will be written to options.tarball_uri.
It can be later turned into a GCE image by passing it as the --source-uri
to gcloud images create.
"""
project = self.__project
basename = os.path.basename(self.options.tarball_uri).replace('_', '-')
first_dot = basename.find('.')
if first_dot > 0:
basename = basename[0:first_dot]
disk_name = '{name}-export'.format(name=basename)
print 'Attaching external disk "{disk}" to extract image tarball.'.format(
disk=disk_name)
# TODO(ewiseblatt): 20151002
# Add an option to reuse an existing disk to reduce the cycle time.
# Then guard the create/format/destroy around this option.
# Still may want/need to attach/detach it here to reduce race conditions
# on its use since it can only be bound to one instance at a time.
check_run_quick('gcloud compute disks create '
' {disk_name} --project {project} --zone {zone} --size=10'
.format(disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
check_run_quick('gcloud compute instances attach-disk {instance}'
' --disk={disk_name} --device-name=export-disk'
' --project={project} --zone={zone}'
.format(instance=self.__instance,
disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
try:
self.__extract_image_tarball_helper()
finally:
print 'Detaching and deleting external disk.'
run_quick('gcloud compute instances detach-disk -q {instance}'
' --disk={disk_name} --project={project} --zone={zone}'
.format(instance=self.__instance,
disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
run_quick('gcloud compute disks delete -q {disk_name}'
' --project={project} --zone={zone}'
.format(disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
def init_argument_parser(parser):
parser.add_argument(
'--tarball_uri', required=True,
help='A path to a Google Cloud Storage bucket or path within one.')
parser.add_argument(
'--instance', default='',
help='If specified, use this instance; otherwise deploy a new one.')
parser.add_argument(
'--image', default='', help='The image to tar if no --instance.')
parser.add_argument(
'--image_project', default='', help='The project for --image.')
parser.add_argument('--zone', default='us-central1-f')
parser.add_argument(
'--project', default='',
help='GCE project to write image to.'
' If not specified then use the default gcloud project.')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
init_argument_parser(parser)
options = parser.parse_args()
builder = Builder(options)
builder.deploy_instance()
try:
builder.create_tarball()
finally:
builder.cleanup_instance()
print 'DONE'

590
dev/build_release.py Normal file

@@ -0,0 +1,590 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Coordinates a global build of a Spinnaker release.
Spinnaker components use Gradle. This particular script might be more
appropriate to be a Gradle script. However this script came from a
context in which writing it in python was more convenient. It could be
replaced with a gradle script in the future without impacting other scripts
or the overall development process if having this be a Gradle script
is more maintainable.
This script builds all the components, and squirrels them away into a filesystem
somewhere (local, Amazon Simple Storage Service or Google Cloud Storage).
The individual components are built using their respective repository's Gradle
build. This script coordinates those builds and adds additional runtime
administrative scripts into the release.
TODO(ewiseblatt): 20151007
This should [also] generate a Debian package that can be installed.
The default should be to generate a .deb package rather than write a filesystem
tree. However, for historical development reasons, that is not yet done.
"""
import argparse
import collections
import fnmatch
import os
import multiprocessing
import multiprocessing.pool
import re
import shutil
import subprocess
import sys
import tempfile
import time
import zipfile
import refresh_source
from spinnaker.run import check_run_quick
from spinnaker.run import run_quick
SUBSYSTEM_LIST = ['clouddriver', 'orca', 'front50',
'rush', 'echo', 'rosco', 'gate', 'igor', 'deck']
def ensure_gcs_bucket(name, project=''):
"""Ensure that the desired GCS bucket exists, creating it if needed.
Args:
name [string]: The bucket name.
project [string]: Optional Google Project id that will own the bucket.
If none is provided, then the bucket will be associated with the default
bucket configured to gcloud.
Raises:
RuntimeError if the bucket could not be created.
"""
bucket = 'gs://'+ name
if not project:
config_result = run_quick('gcloud config list', echo=False)
error = None
if config_result.returncode:
error = 'Could not run gcloud: {error}'.format(
error=config_result.stdout)
else:
match = re.search('(?m)^project = (.*)', config_result.stdout)
if not match:
error = ('gcloud is not configured with a default project.\n'
'run gcloud config or provide a --google_project.\n')
if error:
raise SystemError(error)
project = match.group(1)
list_result = run_quick('gsutil list -p ' + project, echo=False)
if list_result.returncode:
error = ('Could not create Google Cloud Storage bucket'
'"{name}" in project "{project}":\n{error}'
.format(name=name, project=project, error=list_result.stdout))
raise RuntimeError(error)
if re.search('(?m)^{bucket}/\n'.format(bucket=bucket), list_result.stdout):
sys.stderr.write(
'WARNING: "{bucket}" already exists. Overwriting.\n'.format(
bucket=bucket))
else:
print 'Creating GCS bucket "{bucket}" in project "{project}".'.format(
bucket=bucket, project=project)
check_run_quick('gsutil mb -p {project} {bucket}'
.format(project=project, bucket=bucket),
echo=True)
def ensure_s3_bucket(name, region=""):
"""Ensure that the desired S3 bucket exists, creating it if needed.
Args:
name [string]: The bucket name.
region [string]: The S3 region for the bucket. If empty use aws default.
Raises:
RuntimeError if the bucket could not be created.
"""
bucket = 's3://' + name
list_result = run_quick('aws s3 ls ' + bucket, echo=False)
if not list_result.returncode:
sys.stderr.write(
'WARNING: "{bucket}" already exists. Overwriting.\n'.format(
bucket=bucket))
else:
print 'Creating S3 bucket "{bucket}"'.format(bucket=bucket)
command = 'aws s3 mb ' + bucket
if region:
command += ' --region ' + region
check_run_quick(command, echo=False)
class BackgroundProcess(
collections.namedtuple('BackgroundProcess', ['name', 'subprocess'])):
"""Denotes a running background process.
Attributes:
name [string]: The visible name of the process for reporting.
subprocess [subprocess]: The subprocess instance.
"""
@staticmethod
def spawn(name, args):
sp = subprocess.Popen(args, shell=True, close_fds=True,
stdout=sys.stdout, stderr=subprocess.STDOUT)
return BackgroundProcess(name, sp)
def wait(self):
if not self.subprocess:
return None
return self.subprocess.wait()
def check_wait(self):
if self.wait():
error = '{name} failed.'.format(name=self.name)
raise SystemError(error)
NO_PROCESS = BackgroundProcess('nop', None)
class Builder(object):
"""Knows how to coordinate a Spinnaker release."""
def __init__(self, options):
self.__package_list = []
self.__config_list = []
self.__background_processes = []
os.environ['NODE_ENV'] = 'dev'
self.__options = options
self.refresher = refresh_source.Refresher(options)
# NOTE(ewiseblatt):
# This is the GCE directory.
# Ultimately we'll want to go to the root directory and install
# standard stuff and gce stuff.
self.__project_dir = os.path.abspath(
os.path.dirname(sys.argv[0]) + '/..')
self.__release_dir = options.release_path
if self.__release_dir.startswith('gs://'):
ensure_gcs_bucket(name=self.__release_dir[5:].split('/')[0],
project=options.google_project)
elif self.__release_dir.startswith('s3://'):
ensure_s3_bucket(name=self.__release_dir[5:].split('/')[0],
region=options.aws_region)
def start_build_target(self, name, target):
"""Start a subprocess to build the designated target.
Args:
name [string]: The name of the subsystem repository.
target [string]: The gradle build target.
Returns:
BackgroundProcess
"""
print 'Building {name}...'.format(name=name)
return BackgroundProcess.spawn(
'Building {name}'.format(name=name),
'cd {name}; ./gradlew {target}'.format(name=name, target=target))
def start_copy_dir(self, source, target, filter='*'):
if target.startswith('s3://'):
return BackgroundProcess.spawn(
'Copying {source}'.format(source=source),
'aws s3 cp --recursive "{source}" "{target}"'
' --exclude "*" --include "{filter}"'
.format(source=source, target=target, filter=filter))
copies = []
for root, dirs, files in os.walk(source):
postfix = root[len(source):]
rel_target = (target
if not postfix
else os.path.join(target, root[len(source) + 1:]))
for file in fnmatch.filter(files, filter):
copies.append(self.start_copy_file(os.path.join(root, file),
os.path.join(rel_target, file)))
print ' Waiting to finish copying directory {source}'.format(source=source)
for p in copies:
p.check_wait()
return NO_PROCESS
def start_copy_file(self, source, target, dir=False):
"""Start a subprocess to copy the source file.
Args:
source [string]: The path to the source to copy must be local.
target [string]: The target path can also be a storage service URI.
Returns:
BackgroundProcess
"""
if target.startswith('s3://'):
return BackgroundProcess.spawn(
'Copying {source}'.format(source=source),
'aws s3 cp "{source}" "{target}"'
.format(source=source, target=target))
elif target.startswith('gs://'):
return BackgroundProcess.spawn(
'Copying {source}'.format(source=source),
'gsutil -q -m cp "{source}" "{target}"'
.format(source=source, target=target))
else:
try:
os.makedirs(os.path.dirname(target))
except OSError:
pass
shutil.copy(source, target)
return NO_PROCESS
def start_copy_debian_target(self, name):
"""Copies the debian package for the specified subsystem.
Args:
name [string]: The name of the subsystem repository.
"""
if os.path.exists(os.path.join(name, '{name}-web'.format(name=name))):
submodule = '{name}-web'.format(name=name)
elif os.path.exists(os.path.join(name, '{name}-core'.format(name=name))):
submodule = '{name}-core'.format(name=name)
else:
submodule = '.'
with open(os.path.join(name, submodule, 'build/debian/control')) as f:
content = f.read()
match = re.search('(?m)^Version: (.*)', content)
version = match.group(1)
build_dir = '{submodule}/build/distributions'.format(submodule=submodule)
package = '{name}_{version}_all.deb'.format(name=name, version=version)
if not os.path.exists(os.path.join(name, build_dir, package)):
if os.path.exists(os.path.join(name, build_dir,
'{submodule}_{version}_all.deb'
.format(submodule=submodule, version=version))):
# This is for front50 only
package = '{submodule}_{version}_all.deb'.format(
submodule=submodule, version=version)
else:
error = ('Cannot find .deb for name={name} version={version}\n'
.format(name=name, version=version))
raise AssertionError(error)
from_path = os.path.join(name, build_dir, package)
to_path = os.path.join(self.__release_dir, package)
print 'Adding {path}'.format(path=from_path)
self.__package_list.append(package)
return self.start_copy_file(from_path, to_path)
def __do_build(self, subsys):
self.start_build_target(subsys, 'buildDeb').check_wait()
def build_packages(self):
"""Build all the Spinnaker packages."""
if self.__options.build:
# Build in parallel, scaling the worker thread count by
# --cpu_ratio to keep load in check.
pool = multiprocessing.pool.ThreadPool(
processes=max(1, int(self.__options.cpu_ratio
* multiprocessing.cpu_count())))
pool.map(self.__do_build, SUBSYSTEM_LIST)
source_config_dir = self.__options.config_source
processes = []
# Copy global spinnaker config (and sample local).
for yml in [ 'default-spinnaker-local.yml', 'spinnaker.yml']:
source_config = os.path.join(source_config_dir, yml)
target_config = os.path.join(self.__release_dir, 'config', yml)
self.__config_list.append(yml)
processes.append(self.start_copy_file(source_config, target_config))
# Copy subsystem configuration files.
for subsys in SUBSYSTEM_LIST:
processes.append(self.start_copy_debian_target(subsys))
if subsys == 'deck':
source_config = os.path.join(source_config_dir, 'settings.js')
target_config = os.path.join(
self.__release_dir, 'config/settings.js')
processes.append(self.start_copy_file(source_config, target_config))
else:
source_config = os.path.join(source_config_dir, subsys + '.yml')
yml = os.path.basename(source_config)
target_config = os.path.join(self.__release_dir, 'config', yml)
self.__config_list.append(yml)
processes.append(
self.start_copy_file(source_config, target_config))
print 'Waiting for package copying to finish....'
for p in processes:
p.check_wait()
def copy_dependency_files(self):
"""Copy additional files used by external dependencies into release."""
source_dir = os.path.join(self.__project_dir, 'cassandra')
target_dir = os.path.join(self.__release_dir, 'cassandra')
processes = []
processes.append(self.start_copy_dir(
source_dir, target_dir, filter='*.cql'))
print 'Waiting for dependency scripts.'
for p in processes:
p.check_wait()
def copy_install_scripts(self):
"""Copy installation scripts into release."""
source_dir = os.path.join(self.__project_dir, 'install')
target_dir = os.path.join(self.__release_dir, 'install')
processes = []
processes.append(self.start_copy_dir(
source_dir, target_dir, filter='*.py'))
processes.append(self.start_copy_dir(
source_dir, target_dir, filter='*.sh'))
print 'Waiting for install scripts to finish.'
for p in processes:
p.check_wait()
def copy_admin_scripts(self):
"""Copy administrative/operational support scripts into release."""
processes = []
processes.append(self.start_copy_dir(
os.path.join(self.__project_dir, 'pylib'),
os.path.join(self.__release_dir, 'pylib'),
filter='*.py'))
processes.append(self.start_copy_dir(
os.path.join(self.__project_dir, 'runtime'),
os.path.join(self.__release_dir, 'runtime'),
filter='*.sh'))
print 'Waiting for admin scripts to finish.'
for p in processes:
p.check_wait()
def copy_release_config(self):
"""Copy configuration files into release."""
source_dir = self.__options.config_source
target_dir = os.path.join(self.__release_dir, 'config')
# This is the contents of the release_config.cfg file, which acts as a
# manifest informing the installer which packages to install.
fd, temp_file = tempfile.mkstemp()
os.write(fd, '# This file is not intended to be user-modified.\n'
'CONFIG_LIST="{configs}"\n'
'PACKAGE_LIST="{packages}"\n'
.format(configs=' '.join(self.__config_list),
packages=' '.join(self.__package_list)))
os.close(fd)
try:
self.start_copy_file(
temp_file, os.path.join(target_dir, 'release_config.cfg')).check_wait()
finally:
os.remove(temp_file)
def build_web_installer_zip(self):
"""Build a self-contained python zip file for install_spinnaker.py.
This is useful as an installer that can be pointed at a release somewhere,
and will just pull and install it onto any machine. Unfortunately you
cannot run a zip directly through stdin, so you need to download the zip
first and then run it. The zip is packaged as part of the release for
distribution convenience.
"""
fd, zip_path = tempfile.mkstemp()
os.close(fd)
zip = zipfile.ZipFile(zip_path, 'w')
try:
zip.writestr('__main__.py', """
from install_spinnaker import main
import os
import sys
if __name__ == '__main__':
if len(sys.argv) == 1 and os.environ.get('RELEASE_PATH', ''):
sys.argv.extend(['--release_path', os.environ['RELEASE_PATH']])
retcode = main()
sys.exit(retcode)
""")
dep_root = os.path.dirname(sys.argv[0]) + '/..'
deps = ['install/install_spinnaker.py',
'install/install_runtime_dependencies.py',
'pylib/spinnaker/run.py',
'pylib/spinnaker/fetch.py']
for file in deps:
with open(os.path.join(dep_root, file), 'r') as f:
zip.writestr(os.path.basename(file),
f.read().replace('from spinnaker.', 'from '))
zip.close()
zip = None
shutil.move(zip_path, './install_spinnaker.py.zip')
p = self.start_copy_file('./install_spinnaker.py.zip',
os.path.join(self.__release_dir,
'install/install_spinnaker.py.zip'))
p.check_wait()
finally:
if zip is not None:
zip.close()
@staticmethod
def __zip_dir(zip_file, source_path, arcname=''):
"""Zip the contents of a directory.
Args:
zip_file: [ZipFile] The zip file to write into.
source_path: [string] The directory to add.
arcname: [string] Optional name for the source to appear as in the zip.
"""
if arcname:
# Effectively replace os.path.basename(parent_path) with arcname.
arcbase = arcname + '/'
parent_path = source_path
else:
# Will start relative paths from os.path.basename(source_path).
arcbase = ''
parent_path = os.path.dirname(source_path)
# Copy the tree at source_path adding relative paths into the zip.
rel_offset = len(parent_path) + 1
entries = os.walk(source_path)
for root, dirs, files in entries:
for dirname in dirs:
abs_path = os.path.join(root, dirname)
zip_file.write(abs_path, arcbase + abs_path[rel_offset:])
for filename in files:
abs_path = os.path.join(root, filename)
zip_file.write(abs_path, arcbase + abs_path[rel_offset:])
def add_python_test_zip(self, test_name):
"""Build encapsulated python zip file for the given test test_name.
This allows integration tests to be packaged with the release, at least
for the time being. This is useful for testing them, or validating the
initial installation and configuration.
"""
fd, zip_path = tempfile.mkstemp()
os.close(fd)
zip = zipfile.ZipFile(zip_path, 'w')
try:
zip.writestr('__main__.py', """
from {test_name} import main
import sys
if __name__ == '__main__':
retcode = main()
sys.exit(retcode)
""".format(test_name=test_name))
# Add citest sources as baseline
# TODO(ewiseblatt): 20150810
# Eventually this needs to be the transitive closure,
# but there are currently no other dependencies.
zip.writestr('__init__.py', '')
self.__zip_dir(zip, 'citest/citest', 'citest')
self.__zip_dir(zip,
'citest/spinnaker/spinnaker_testing', 'spinnaker_testing')
self.__zip_dir(zip, 'pylib/yaml', 'yaml')
test_py = '{test_name}.py'.format(test_name=test_name)
zip.write('citest/spinnaker/spinnaker_system/' + test_py, test_py)
zip.close()
p = self.start_copy_file(
zip_path, os.path.join(self.__release_dir, 'tests', test_py + '.zip'))
p.check_wait()
finally:
os.remove(zip_path)
def add_test_zip_files(self):
if not os.path.exists('citest'):
print 'Adding citest repository'
self.refresher.git_clone('citest', owner='google')
print 'Adding tests...'
self.add_python_test_zip('aws_kato_test')
self.add_python_test_zip('kato_test')
self.add_python_test_zip('smoke_test')
self.add_python_test_zip('server_group_tests')
@classmethod
def init_argument_parser(cls, parser):
refresh_source.Refresher.init_argument_parser(parser)
parser.add_argument('--build', default=True, action='store_true',
help='Build the sources.')
parser.add_argument(
'--cpu_ratio', type=float, default=1.25, # 125%
help='Number of concurrent threads as ratio of available cores.')
parser.add_argument('--nobuild', dest='build', action='store_false')
config_path= os.path.abspath(os.path.join(os.path.dirname(sys.argv[0]),
'../config'))
parser.add_argument(
'--config_source', default=config_path,
help='Path to directory for release config file templates.')
parser.add_argument('--release_path', required=True,
help='Specifies the path to the release to build.'
' The release name is assumed to be the basename.'
' The path can be a directory, GCS URI or S3 URI.')
parser.add_argument(
'--google_project', default='',
help='If the release repository is a GCS bucket then this is the project'
' owning the bucket. The default is the project configured as the'
' default for gcloud.')
parser.add_argument(
'--aws_region', default='',
help='If the release repository is an S3 bucket then this is the AWS'
' region to add the bucket to if the bucket did not already exist.')
@classmethod
def main(cls):
parser = argparse.ArgumentParser()
cls.init_argument_parser(parser)
options = parser.parse_args()
builder = cls(options)
if options.pull_origin:
builder.refresher.pull_all_from_origin()
builder.build_packages()
builder.build_web_installer_zip()
builder.copy_dependency_files()
builder.copy_install_scripts()
builder.copy_admin_scripts()
builder.copy_release_config()
builder.add_test_zip_files()
print '\nFINISHED writing release to {dir}'.format(
dir=builder.__release_dir)
if __name__ == '__main__':
Builder.main()

17
dev/build_release.sh Executable file

@@ -0,0 +1,17 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PYTHONPATH=$(dirname $0)/../pylib python $(dirname $0)/build_release.py "$@"

20
dev/create_dev_vm.sh Executable file

@@ -0,0 +1,20 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# There could be a flag here for the provider to create the instance on
# but for now this is specialized for Google.
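# Example (a sketch; all flags are forwarded to the python script):
#   ./dev/create_dev_vm.sh --project=$PROJECT \
#       --master_yml=$HOME/.spinnaker/spinnaker-local.yml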
SCRIPT_DIR=$(dirname $0)
PYTHONPATH=$SCRIPT_DIR/../pylib python $SCRIPT_DIR/create_google_dev_vm.py "$@"

336
dev/create_google_dev_vm.py Executable file

@@ -0,0 +1,336 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import re
import sys
import tempfile
import time
from spinnaker.run import run_quick
from spinnaker.run import check_run_quick
from spinnaker.yaml_util import YamlBindings
__NEXT_STEP_INSTRUCTIONS = """
To finish the installation, do the following:
(1) Log into your new instance (with or without the tunneling ssh-flags):
gcloud compute ssh --project {project} --zone {zone} {instance}\
--ssh-flag="-L 9000:localhost:9000"\
--ssh-flag="-L 8084:localhost:8084"
(2) Set up the build environment:
source /opt/spinnaker/install/bootstrap_dev.sh
(3a) Build and run directly from the sources:
../spinnaker/dev/run_dev.sh
For more help, see the Spinnaker Build & Run Book:
https://docs.google.com/document/d/1Q_ah8eG3Imyw-RWS1DSp_ItM2pyn56QEepCeaOAPaKA
"""
def get_project(options):
"""Determine the default project name.
The default project name is the gcloud configured default project.
"""
if not options.project:
result = check_run_quick('gcloud config list', echo=False)
options.project = re.search('project = (.*)\n', result.stdout).group(1)
return options.project
def init_argument_parser(parser):
parser.add_argument(
'--instance',
default='{user}-spinnaker-dev'.format(user=os.environ['USER']),
help='The name of the GCE instance to create.')
parser.add_argument(
'--project', default=None,
help='The Google Project ID to create the new instance in.'
' If left empty, use the default project gcloud was configured with.')
parser.add_argument(
'--zone', default='us-central1-f',
help='The Google Cloud Platform zone to create the new instance in.')
parser.add_argument(
'--disk_type', default='pd-standard',
help='The Google Cloud Platform disk type to use for the new instance.'
' The default is pd-standard. For a list of other available options,'
' see "gcloud compute disk-types list".')
parser.add_argument('--disk_size', default='200GB',
help='Warnings appear if disk size < 200GB')
parser.add_argument('--machine_type', default='n1-highmem-8')
parser.add_argument(
'--nopersonal', default=False, action='store_true',
help='Do not copy personal files (.gitconfig, etc.)')
parser.add_argument(
'--copy_private_files', default=False, action='store_true',
help='Also copy private files (.ssh/id_rsa*, .git-credentials, etc)')
parser.add_argument(
'--aws_credentials', default=None,
help='If specified, the path to the aws credentials file.')
parser.add_argument(
'--master_yml', default=None,
help='If specified, the path to the master spinnaker-local.yml file.')
parser.add_argument(
'--address', default=None,
help='The IP address to assign to the new instance. The address may'
' be an IP address or the name or URI of an address resource.')
parser.add_argument(
'--scopes', default='compute-rw,storage-rw',
help='Create the instance with these scopes.'
' The default is the minimal set of scopes needed to run the'
' development scripts, currently "compute-rw,storage-rw".')
def copy_file(options, source, target):
if os.path.exists(source):
# TODO(ewiseblatt): we can use scp here instead, and pass the
# credentials we want to copy with rather than the additional command
# below. But we need to figure out the IP address to copy to.
# For now, do it the long way.
print 'Copying {source}'.format(source=source)
command = ' '.join([
'gcloud compute copy-files',
'--project', get_project(options),
'--zone', options.zone,
source,
'{instance}:{target}'.format(instance=options.instance,
target=target)])
while True:
result = run_quick(command, echo=False)
if not result.returncode:
break
print 'New instance does not seem ready yet...retry in 5s.'
time.sleep(5)
command = ' '.join([
'gcloud compute ssh',
'--command="chmod 600 /home/{gcp_user}/{target}"'.format(
gcp_user=os.environ['LOGNAME'], target=target),
options.instance,
'--project', get_project(options),
'--zone', options.zone])
check_run_quick(command, echo=False)
def copy_home_files(options, type, file_list, source_dir=None):
print 'Copying {type} files...'.format(type=type)
home=os.environ['HOME']
for file in file_list:
source = '{0}/{1}'.format(home, file)
copy_file(options, source, file)
print 'Finished copying {type} files.'.format(type=type)
def copy_private_files(options):
copy_home_files(options, 'private',
['.ssh/id_rsa', '.ssh/google_compute_engine',
'.git-credentials'])
def copy_personal_files(options):
copy_home_files(options, 'personal',
['.gitconfig', '.emacs', '.bashrc', '.screenrc'])
def create_instance(options):
"""Creates new GCE VM instance for development."""
project = get_project(options)
print 'Creating instance {project}/{zone}/{instance}'.format(
project=project, zone=options.zone, instance=options.instance)
print 'with machine type {type} and boot disk size {disk_size}...'.format(
type=options.machine_type, disk_size=options.disk_size)
dev_dir = os.path.dirname(sys.argv[0])
install_dir = '{dir}/../install'.format(dir=dev_dir)
pylib_spinnaker_dir = '{dir}/../pylib/spinnaker'.format(dir=dev_dir)
with open('{dir}/install_development.py'.format(dir=dev_dir), 'r') as f:
# Remove the leading 'install.' package reference from module imports
# because we're going to place this in the same package as
# the things it imports (no need for PYTHONPATH).
content = f.read()
content = content.replace('install.install', 'install')
content = content.replace('from spinnaker.', 'from ')
fd, temp_install_development = tempfile.mkstemp()
os.write(fd, content)
os.close(fd)
with open('{dir}/install_runtime_dependencies.py'.format(dir=install_dir),
'r') as f:
content = f.read()
content = content.replace('install.install', 'install')
content = content.replace('from spinnaker.', 'from ')
fd, temp_install_runtime = tempfile.mkstemp()
os.write(fd, content)
os.close(fd)
startup_command = ['install_development.py',
'--package_manager']
metadata_files = [
'startup-script={dev_dir}/google_install_loader.py'
',py_fetch={pylib_spinnaker_dir}/fetch.py'
',py_run={pylib_spinnaker_dir}/run.py'
',py_install_development={temp_install_development}'
',sh_bootstrap_dev={dev_dir}/bootstrap_dev.sh'
',py_install_runtime_dependencies={temp_install_runtime}'
.format(dev_dir=dev_dir, pylib_spinnaker_dir=pylib_spinnaker_dir,
temp_install_runtime=temp_install_runtime,
temp_install_development=temp_install_development)]
metadata = ','.join([
'startup_py_command={startup_command}'.format(
startup_command='+'.join(startup_command)),
'startup_loader_files='
'py_fetch'
'+py_run'
'+py_install_development'
'+py_install_runtime_dependencies'
'+sh_bootstrap_dev'])
command = ['gcloud', 'compute', 'instances', 'create',
options.instance,
'--project', get_project(options),
'--zone', options.zone,
'--machine-type', options.machine_type,
'--image', 'ubuntu-14-04',
'--scopes', options.scopes,
'--boot-disk-size={size}'.format(size=options.disk_size),
'--boot-disk-type={type}'.format(type=options.disk_type),
'--metadata', metadata,
'--metadata-from-file={files}'.format(
files=','.join(metadata_files))]
if options.address:
command.extend(['--address', options.address])
try:
check_run_quick(' '.join(command), echo=True)
finally:
os.remove(temp_install_development)
os.remove(temp_install_runtime)
def copy_master_yml(options):
"""Copy the specified master spinnaker-local.yml, and credentials.
This will look for paths to credentials within the spinnaker-local.yml, and
copy those as well. The paths to the credentials (and the reference
in the config file) will be changed to reflect the filesystem on the
new instance, which may be different than on this instance.
Args:
options [Namespace]: The parser namespace options contain information
about the instance we're going to copy to, as well as the source
of the master spinnaker-local.yml file.
"""
print 'Creating .spinnaker directory...'
check_run_quick('gcloud compute ssh --command "mkdir -p .spinnaker"'
' --project={project} --zone={zone} {instance}'
.format(project=get_project(options),
zone=options.zone,
instance=options.instance),
echo=False)
bindings = YamlBindings()
bindings.import_path(options.master_yml)
try:
json_credential_path = bindings.get(
'providers.google.primaryCredentials.jsonPath')
except KeyError:
json_credential_path = None
gcp_home = os.path.join('/home', os.environ['LOGNAME'], '.spinnaker')
# If there are credentials, write them to this path
gcp_credential_path = os.path.join(gcp_home, 'google-credentials.json')
with open(options.master_yml, 'r') as f:
content = f.read()
# Replace all the occurrences of the original credentials path with the
# path that we are going to place the file in on the new instance.
temp_path = None
actual_path = options.master_yml
if json_credential_path:
new_content = content.replace(json_credential_path, gcp_credential_path)
fd, temp_path = tempfile.mkstemp()
os.write(fd, new_content)
os.close(fd)
actual_path = temp_path
# Copy the config file here. The credentials will be copied after.
copy_file(options, actual_path, '.spinnaker/spinnaker-local.yml')
if json_credential_path:
copy_file(options, json_credential_path,
'.spinnaker/google-credentials.json')
if temp_path:
os.remove(temp_path)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
init_argument_parser(parser)
options = parser.parse_args()
if options.master_yml and not os.path.exists(options.master_yml):
sys.stderr.write('ERROR: {path} does not exist.\n'.format(
path=options.master_yml))
sys.exit(-1)
create_instance(options)
if not options.nopersonal:
copy_personal_files(options)
if options.copy_private_files:
copy_private_files(options)
if options.master_yml:
copy_master_yml(options)
if options.aws_credentials:
print 'Creating .aws directory...'
check_run_quick('gcloud compute ssh --command "mkdir -p .aws"'
' --project={project} --zone={zone} {instance}'
.format(project=get_project(options),
zone=options.zone,
instance=options.instance),
echo=False)
copy_file(options, options.aws_credentials, '.aws/credentials')
print __NEXT_STEP_INSTRUCTIONS.format(
project=get_project(options),
zone=options.zone,
instance=options.instance)

dev/dev_runner.py Executable file
@@ -0,0 +1,196 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
import shutil
import signal
import stat
import subprocess
import sys
import time
from spinnaker.fetch import fetch
from spinnaker.configurator import InstallationParameters
from spinnaker import spinnaker_runner
class DevInstallationParameters(InstallationParameters):
"""Specialization of the normal production InstallationParameters.
This is a developer deployment where the paths are setup to run directly
out of this repository rather than a standard system installation.
Also, custom configuration parameters come from the $HOME/.spinnaker
rather than the normal installation location.
"""
DEV_SCRIPT_DIR = os.path.abspath(os.path.dirname(sys.argv[0]))
SUBSYSTEM_ROOT_DIR = os.getcwd()
USER_CONFIG_DIR = os.path.join(os.environ['HOME'], '.spinnaker')
LOG_DIR = os.path.join(SUBSYSTEM_ROOT_DIR, 'logs')
SPINNAKER_INSTALL_DIR = os.path.abspath(
os.path.join(DEV_SCRIPT_DIR, '..'))
INSTALLED_CONFIG_DIR = os.path.abspath(
os.path.join(DEV_SCRIPT_DIR, '../config'))
UTILITY_SCRIPT_DIR = os.path.abspath(
os.path.join(DEV_SCRIPT_DIR, '../runtime'))
EXTERNAL_DEPENDENCY_SCRIPT_DIR = os.path.abspath(
os.path.join(DEV_SCRIPT_DIR, '../runtime'))
DECK_INSTALL_DIR = os.path.join(SUBSYSTEM_ROOT_DIR, 'deck')
HACK_DECK_SETTINGS_FILENAME = 'settings.js'
DECK_PORT = 9000
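# As a concrete (hypothetical) example, when run from a 'build' directory
# containing the spinnaker repository checkout:
#   DEV_SCRIPT_DIR     = build/spinnaker/dev
#   SUBSYSTEM_ROOT_DIR = build   (holding clouddriver/, deck/, logs/, ...)
#   USER_CONFIG_DIR    = ~/.spinnaker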
class DevRunner(spinnaker_runner.Runner):
"""Specialization of the normal spinnaker runner for development use.
This class has different behaviors than the normal runner.
It follows similar heuristics for launching and stopping jobs,
however, the details differ in fundamental ways.
* The subsystems are run from their source (using gradle)
and will attempt to rebuild before running.
* Spinnaker will be reconfigured on each invocation.
The runner tails the subsystem error logs to the console for as long
as this script is running. When the script terminates, the console will
no longer show the error logs, but the processes will remain running
and continue logging to the logs directory.
"""
def __init__(self, installation_parameters=None):
installation = installation_parameters or DevInstallationParameters
super(DevRunner, self).__init__(installation)
def start_subsystem(self, subsystem, environ=None):
"""Starts the specified subsystem.
Args:
subsystem [string]: The repository name of the subsystem to run.
"""
print 'Starting {subsystem}'.format(subsystem=subsystem)
command = os.path.join(
self.installation.SUBSYSTEM_ROOT_DIR,
subsystem,
'start_dev.sh')
return self.run_daemon(command, [command], environ=environ)
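# e.g. start_subsystem('clouddriver') launches ./clouddriver/start_dev.sh,
# the wrapper that refresh_source.py generates in each repository.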
def tail_error_logs(self):
"""Start a background tail job of all the component error logs."""
log_dir = self.installation.LOG_DIR
try:
os.makedirs(log_dir)
except OSError:
pass
tail_jobs = []
for subsystem in self.get_all_subsystem_names():
path = os.path.join(log_dir, subsystem + '.err')
open(path, 'w').close()
tail_jobs.append(self.start_tail(path))
return tail_jobs
def get_deck_pid(self):
"""Return the process id for deck, or None."""
program='node ./node_modules/webpack-dev-server/bin/webpack-dev-server.js'
stdout, stderr = subprocess.Popen(
'ps -fwwwC node', stdout=subprocess.PIPE, stderr=subprocess.PIPE,
shell=True, close_fds=True).communicate()
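# A matching ps line looks like (hypothetical user/pid):
#   spinnaker 12345 1 0 10:00 ? 00:00:05 node ./node_modules/webpack-dev-server/bin/webpack-dev-server.js
# and group(1) below captures the pid column.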
match = re.search('(?m)^[^ ]+ +([0-9]+) .* {program}'.format(
program=program), stdout)
return int(match.group(1)) if match else None
def start_deck(self):
"""Start subprocess for deck."""
pid = self.get_deck_pid()
if pid:
print 'Deck is already running as pid={pid}'.format(pid=pid)
return pid
path = os.path.join(self.installation.SUBSYSTEM_ROOT_DIR,
'deck/start_dev.sh')
return self.run_daemon(path, [path])
def stop_deck(self):
"""Stop subprocess for deck."""
pid = self.get_deck_pid()
if pid:
print 'Terminating deck in pid={pid}'.format(pid=pid)
os.kill(pid, signal.SIGTERM)
def start_all(self, options):
"""Starts all the components then logs stderr to the console forever.
The subsystems are in forked processes disassociated from this one, so they
will continue running even after this process exits. Only the stderr logging
to the console will stop once this process is terminated. However, the
logging will still continue into the LOG_DIR.
"""
self.configurator.update_deck_settings()
ignore_tail_jobs = self.tail_error_logs()
super(DevRunner, self).start_all(options)
deck_port = self.installation.DECK_PORT
print 'Waiting for deck to start on port {port}'.format(port=deck_port)
# Tail the log file while we wait and run.
# But the log file might not yet exist if deck hasn't started yet.
# So wait for the log file to exist before starting to tail it.
# Deck can't be ready yet if it hasn't started anyway.
deck_log_path = os.path.join(self.installation.LOG_DIR, 'deck.log')
while not os.path.exists(deck_log_path):
time.sleep(0.1)
ignore_tail_jobs.append(self.start_tail(deck_log_path))
# Don't just wait for the port to be ready; wait for deck to respond,
# because it takes a long time to start up once the port is ready.
while True:
code, ignore = fetch('http://localhost:{port}/'.format(port=deck_port))
if code == 200:
break
else:
time.sleep(0.1)
print """Spinnaker is now ready on port {port}.
You can ^C (ctrl-c) to finish the script, which will stop emitting errors.
Spinnaker will continue until you run scripts/release/stop_spinnaker.sh
""".format(port=deck_port)
while True:
time.sleep(3600)
def program_to_subsystem(self, program):
return program
def subsystem_to_program(self, subsystem):
return subsystem
if __name__ == '__main__':
if not os.path.exists('deck'):
sys.stderr.write('This script needs to be run from the root of'
' your build directory.\n')
sys.exit(-1)
DevRunner.main()

dev/google_install_loader.py Executable file
@@ -0,0 +1,232 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Acts as a "bootloader" for setting up GCE instances from scratch.
If this is run as a startup script, it will extract files from the
instance metadata, then run another startup script. This makes it convienent
to write startup scripts that use existing modules that may span multiple
files so that setting up a Google Compute Engine instance (such as an image)
can use the same basic scripts and procedures as non-GCE instances. Thisjbootloader is specific to GCE in that it "bootloads" off GCE metadata. However, once
it does that (thus preparing the filesystem with the files that are needed
for the "real" startup script), it forks the specified standard script.
If additional GCE specific initialization is required, the standard script
can still conditionally perform that.
To use this as a bootloader:
add a metadata entry for each file to attach. The metadata key is the encoded
filename to extract to. Because '.' is not a valid metadata key char,
the encoding is in the form ext_basename, where the first '_' acts as
the separator. So ext_basename will be extracted as basename.ext.
Additional underscores are left as is. A leading '_' (or no '_' at all)
indicates no extension.
set the "startup_loader_files" metadata value to the keys of the attached
files that should be extracted into /opt/spinnaker/install.
set the "startup_py_command" metadata value to the command to execute after
the bootloader extracts the files. This can include commandline
arguments. The command will be run with an implied "python". The
filename to run is the literal name, not the encoded name.
set the "startup-script" metadata key to install_loader.py
The attached files will be extracted to /opt/spinnaker/install.
The attached file metadata and startup command will be cleared,
and the startup-script will be rewritten to a generated
/opt/spinnaker/install/startup_script.py that calls the specified command.
"""
import os
import shutil
import socket
import subprocess
import sys
import urllib2
GOOGLE_METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1'
GOOGLE_INSTANCE_METADATA_URL = '{url}/instance'.format(url=GOOGLE_METADATA_URL)
_MY_ZONE = None
def fetch(url, google=False):
request = urllib2.Request(url)
if google:
request.add_header('Metadata-Flavor', 'Google')
try:
response = urllib2.urlopen(request)
return response.getcode(), response.read()
except urllib2.HTTPError as e:
return e.code, str(e.reason)
except urllib2.URLError as e:
return -1, str(e.reason)
def get_zone():
global _MY_ZONE
if _MY_ZONE is not None:
return _MY_ZONE
code, output = fetch('{url}/zone'.format(url=GOOGLE_INSTANCE_METADATA_URL),
google=True)
if code == 200:
_MY_ZONE = os.path.basename(output)
else:
_MY_ZONE = ''
return _MY_ZONE
def running_on_gce():
return get_zone() != ''
def get_instance_metadata_attribute(name):
code, output = fetch(
'{url}/attributes/{name}'.format(url=GOOGLE_INSTANCE_METADATA_URL,
name=name),
google=True)
if code == 200:
return output
else:
return None
def clear_metadata_to_file(name, path):
value = get_instance_metadata_attribute(name)
if value is not None:
with open(path, 'w') as f:
f.write(value)
clear_instance_metadata(name)
def clear_instance_metadata(name):
p = subprocess.Popen('gcloud compute instances remove-metadata'
' {hostname} --zone={zone} --keys={name}'
.format(hostname=socket.gethostname(),
zone=get_zone(),
name=name),
shell=True, close_fds=True)
if p.wait():
raise SystemExit('Unexpected failure clearing metadata.')
def write_instance_metadata(name, value):
p = subprocess.Popen('gcloud compute instances add-metadata'
' {hostname} --zone={zone} --metadata={name}={value}'
.format(hostname=socket.gethostname(),
zone=get_zone(),
name=name, value=value),
shell=True, close_fds=True)
if p.wait():
raise SystemExit('Unexpected failure writing metadata.')
def unpack_files(key_list):
"""Args unpack and clear the specified keys into their corresponding files.
Key names correspond to file names using the following encoding:
'.' is not permitted in a metadata name, so we'll use a leading
underscore separator in the file name to indicate the extension.
a value in the form "ext_base_name" means the file "base_name.ext"
a value in the form "_ext_base_name" means the file "ext_base_name"
a value in the form "basename" means the file "basename"
Args: key_list a list of strings denoting the metadata keys contianing the
file content.
"""
for key in key_list:
underscore = key.find('_')
if underscore <= 0:
filename = key if underscore < 0 else key[1:]
else:
filename = '{basename}.{ext}'.format(
basename = key[underscore + 1:],
ext=key[:underscore])
clear_metadata_to_file(key, filename)
def __unpack_and_run():
"""Unpack the files from metadata, and run the main script.
This is intended to be used where a startup [python] script needs a bunch
of different files for a startup script that is passed through metadata
in a GCE instance.
The actual startup acts like a bootloader that unpacks all the
files from the metadata, then passes control the specific startup script
for the installation.
The bootloader unpacks the files mentioned in the 'startup_loader_files'
metadata, which is a space-delimited list of other metadata keys that
contain the files. Because the keys cannot contain '.', we encode the
filenames as <ext>_<basename> using the leading '_' to separate the
extension, whichi s added as a prefix.
The true startup script is denoted by the 'startup_py_command' attribute,
which specifies the name of a python script to run (presumably packed
into the startup_loader_files). The script uses the unencoded name once
unpacked. The python command itself is ommited and will be added here.
"""
script_keys = get_instance_metadata_attribute('startup_loader_files')
key_list = script_keys.split('+') if script_keys else []
unpack_files(key_list)
if script_keys:
clear_instance_metadata('startup_loader_files')
startup_py_command = get_instance_metadata_attribute('startup_py_command')
if not startup_py_command:
sys.stderr.write('No "startup_py_command" metadata key.\n')
raise SystemExit('No "startup_py_command" metadata key.')
# Change the startup script to the final command that we run
# so that future boots will just run that command. And take down
# the rest of the bootstrap metadata since we don't need it anymore.
command = 'python ' + startup_py_command.replace('+', ' ')
with open('__startup_script__.sh', 'w') as f:
f.write('#!/bin/bash\ncd /opt/spinnaker/install\n{command}\n'
.format(command=command))
os.chmod('__startup_script__.sh', 0555)
write_instance_metadata('startup-script',
'/opt/spinnaker/install/__startup_script__.sh')
clear_instance_metadata('startup_py_command')
# Now run the command (which is also the future startup script).
p = subprocess.Popen(command, shell=True, close_fds=True)
p.communicate()
return p.returncode
if __name__ == '__main__':
if not running_on_gce():
sys.stderr.write('You do not appear to be on Google Compute Engine.\n')
sys.exit(-1)
try:
os.makedirs('/opt/spinnaker/install')
os.chdir('/opt/spinnaker/install')
except OSError:
pass
# Copy this script to /opt/spinnaker/install as install_loader.py
# since other scripts will reference it that way.
shutil.copyfile('/var/run/google.startup.script',
'/opt/spinnaker/install/google_install_loader.py')
print 'RUNNING with argv={0}'.format(sys.argv)
sys.exit(__unpack_and_run())

dev/install_development.py Normal file
@@ -0,0 +1,189 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import sys
import tempfile
from spinnaker.fetch import check_fetch
from spinnaker.run import run_and_monitor
from spinnaker.run import run_quick
from spinnaker.run import check_run_quick
from spinnaker.run import check_run_and_monitor
import install.install_runtime_dependencies
NODE_VERSION = '0.12'
NVM_VERSION = 'v0.26.0'
__LOCAL_INVOCATION_NEXT_STEPS = """
To finish the personal developer workspace installation, do the following:
source {dev}/bootstrap_dev.sh
This will leave you in a 'build' subdirectory. To run Spinnaker:
../spinnaker/dev/run_dev.sh
""".format(dev=os.path.dirname(sys.argv[0]))
__STARTUP_SCRIPT_INVOCATION_NEXT_STEPS = """
To finish the personal developer workspace installation, do the following:
Log into this VM as your development user.
source /opt/spinnaker/install/bootstrap_dev.sh
This will leave you in a 'build' subdirectory. To run Spinnaker:
../spinnaker/dev/run_dev.sh
"""
__NVM_SCRIPT = """#!/bin/bash
export NVM_DIR=/usr/local/nvm
source /usr/local/nvm/nvm.sh
export NPM_CONFIG_PREFIX=/usr/local/node
export PATH="/usr/local/node/bin:$PATH"
"""
def init_argument_parser(parser, default_values={}):
tmp = {}
tmp.update(default_values)
default_values = tmp
if 'apache' not in default_values:
default_values['apache'] = False
install.install_runtime_dependencies.init_argument_parser(
parser, default_values)
parser.add_argument('--gcloud',
default=default_values.get('gcloud', False),
action='store_true',
help='Install gcloud')
parser.add_argument('--nogcloud', dest='gcloud', action='store_false')
parser.add_argument('--awscli',
default=default_values.get('awscli', True),
action='store_true',
help='Install AWS CLI')
parser.add_argument('--noawscli', dest='awscli', action='store_false')
def install_awscli(options):
if not options.awscli:
return
print 'Installing AWS CLI'
check_run_and_monitor('sudo apt-get install -y awscli', echo=True)
def install_gcloud(options):
if not options.gcloud:
return
result = run_quick('gcloud --version', echo=False)
if not result.returncode:
print 'GCloud is already installed:\n {version_info}'.format(
version_info=result.stdout.replace('\n', '\n '))
return
print 'Installing GCloud.'
check_run_and_monitor('curl https://sdk.cloud.google.com | bash', echo=True)
def install_nvm(options):
print '---------- Installing NVM ---------'
check_run_quick('sudo chmod 775 /usr/local')
check_run_quick('sudo mkdir -m 777 -p /usr/local/node /usr/local/nvm')
result = check_fetch(
'https://raw.githubusercontent.com/creationix/nvm/{nvm_version}/install.sh'
.format(nvm_version=NVM_VERSION))
fd, temp = tempfile.mkstemp()
os.write(fd, result.content)
os.close(fd)
try:
run_and_monitor(
'bash -c "NVM_DIR=/usr/local/nvm source {temp}"'.format(temp=temp))
finally:
os.remove(temp)
# curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.26.0/install.sh | NVM_DIR=/usr/local/nvm bash
check_run_and_monitor('sudo bash -c "cat > /etc/profile.d/nvm.sh"',
input=__NVM_SCRIPT)
print '---------- Installing Node {version} ---------'.format(
version=NODE_VERSION)
run_and_monitor('bash -c "source /etc/profile.d/nvm.sh'
'; nvm install {version}'
'; nvm alias default {version}"'
.format(version=NODE_VERSION))
def add_gcevm_to_etc_hosts(options):
"""Add gcevm as an alias for localhost to ease working with SOCKS proxy."""
with open('/etc/hosts', 'r') as f:
content = f.read()
modified = content.replace('127.0.0.1 localhost',
'127.0.0.1 localhost gcevm')
fd, tmp = tempfile.mkstemp()
os.write(fd, modified)
os.close(fd)
try:
check_run_quick('sudo bash -c "'
'chown --reference=/etc/hosts {tmp}'
'; chmod --reference=/etc/hosts {tmp}'
'; mv {tmp} /etc/hosts'
'"'.format(tmp=tmp),
echo=False)
except BaseException:
os.remove(tmp)
raise
def install_build_tools(options):
check_run_and_monitor('sudo apt-get update')
check_run_and_monitor('sudo apt-get install -y git')
check_run_and_monitor('sudo apt-get install -y zip')
check_run_and_monitor('sudo apt-get install -y build-essential')
install_nvm(options)
def main():
parser = argparse.ArgumentParser()
init_argument_parser(parser)
options = parser.parse_args()
install_build_tools(options)
install_awscli(options)
install_gcloud(options)
add_gcevm_to_etc_hosts(options)
install.install_runtime_dependencies.install_java(options, which='jdk')
# Force java off since we just installed it.
options.java = False
install.install_runtime_dependencies.install_runtime_dependencies(options)
if os.path.dirname(sys.argv[0]) == 'dev':
print __LOCAL_INVOCATION_NEXT_STEPS
else:
print __STARTUP_SCRIPT_INVOCATION_NEXT_STEPS
if __name__ == '__main__':
main()

dev/refresh_source.py Normal file
@@ -0,0 +1,437 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import collections
import os
import re
import sys
from spinnaker.run import run_and_monitor
from spinnaker.run import run_quick
from spinnaker.run import check_run_quick
class SourceRepository(
collections.namedtuple('SourceRepository', ['name', 'owner'])):
"""Denotes a github repository.
Attributes:
name: The [short] name of the repository.
owner: The github user name owning the repository
"""
pass
class Refresher(object):
__OPTIONAL_REPOSITORIES = [SourceRepository('citest', 'google')]
__REQUIRED_REPOSITORIES = [
SourceRepository('clouddriver', 'spinnaker'),
SourceRepository('orca', 'spinnaker'),
SourceRepository('front50', 'spinnaker'),
SourceRepository('rush', 'spinnaker'),
SourceRepository('echo', 'spinnaker'),
SourceRepository('rosco', 'spinnaker'),
SourceRepository('gate', 'spinnaker'),
SourceRepository('igor', 'spinnaker'),
SourceRepository('deck', 'spinnaker')]
def __init__(self, options):
self.__options = options
self.__extra_repositories = self.__OPTIONAL_REPOSITORIES
if options.extra_repos:
for extra in options.extra_repos.split(','):
pair = extra.split('=')
if len(pair) != 2:
raise ValueError(
'Invalid --extra_repos value "{extra}"'.format(extra=extra))
self.__extra_repositories.append(SourceRepository(pair[0], pair[1]))
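# e.g. a (hypothetical) --extra_repos=mytool=myuser appends
# SourceRepository('mytool', 'myuser') to the optional repositories.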
def get_branch_name(self, name):
"""Determine which git branch a local repository is in.
Args:
name [string]: The repository name.
Returns:
The name of the branch.
"""
result = run_quick('git -C {dir} rev-parse --abbrev-ref HEAD'
.format(dir=name),
echo=True)
if result.returncode:
error = 'Could not determine branch: ' + result.stdout
raise RuntimeError(error)
return result.stdout.strip()
def get_github_repository_url(self, repository, owner=None):
"""Determine the URL for a given github repository.
Args:
repository [SourceRepository]: The upstream repository.
owner [string]: The explicit owner for the repository we want.
If not provided then use the github_user in the bound options.
"""
user = owner or self.__options.github_user
if not user:
raise ValueError('No --github_user specified.')
if user == 'default' or user == 'upstream':
user = repository.owner
url_pattern = ('https://github.com/{user}/{name}.git'
if self.__options.use_https
else 'git@github.com:{user}/{name}.git')
return url_pattern.format(user=user, name=repository.name)
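# For a (hypothetical) --github_user=alice and the clouddriver repository,
# this yields https://github.com/alice/clouddriver.git, or
# git@github.com:alice/clouddriver.git when --use_ssh is given.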
def git_clone(self, repository, required=True, owner=None):
"""Clone the specified repository
Args:
repository [SourceRepository]: The github repository to clone (name and owner).
required [bool]: Whether the clone must succeed or not.
owner [string]: An explicit repository owner.
If not provided use the configured options.
"""
name = repository.name
upstream_user = repository.owner
origin_url = self.get_github_repository_url(repository, owner=owner)
upstream_url = 'https://github.com/{upstream_user}/{name}.git'.format(
upstream_user=upstream_user, name=name)
# Don't echo because we're going to hide some failures.
print 'Cloning {name} from {origin_url}.'.format(
name=name, origin_url=origin_url)
shell_result = run_and_monitor('git clone ' + origin_url, echo=False)
if not shell_result.returncode:
if shell_result.stdout:
print shell_result.stdout
else:
if repository in self.__extra_repositories:
sys.stderr.write('WARNING: Missing optional repository {name}.\n'
.format(name=name))
sys.stderr.write(' Continuing without it.\n')
return
sys.stderr.write(shell_result.stderr or shell_result.stdout)
sys.stderr.write(
'FATAL: Cannot continue without required'
' repository {name}.\n'
' Consider using github to fork one from {upstream}.\n'.
format(name=name, upstream=upstream_url))
raise SystemExit('Repository {url} not found.'.format(url=origin_url))
if self.__options.add_upstream and origin_url != upstream_url:
print ' Adding upstream repository {upstream}.'.format(
upstream=upstream_url)
check_run_quick('git -C {dir} remote add upstream {url}'
.format(dir=name, url=upstream_url),
echo=False)
if self.__options.disable_upstream_push:
which = 'upstream' if origin_url != upstream_url else 'origin'
print ' Disabling git pushes to {which} {upstream}'.format(
which=which, upstream=upstream_url)
check_run_quick(
'git -C {dir} remote set-url --push {which} disabled'
.format(dir=name, which=which),
echo=False)
def pull_from_origin(self, repository):
"""Pulls the current branch from the git origin.
Args:
repository [SourceRepository]: The local repository to update.
"""
name = repository.name
owner = repository.owner
if not os.path.exists(name):
self.git_clone(repository)
return
print 'Updating {name} from origin'.format(name=name)
branch = self.get_branch_name(name)
if branch != 'master':
sys.stderr.write(
'WARNING: Updating {name} branch={branch}, *NOT* "master"\n'
.format(name=name, branch=branch))
check_run_quick('git -C {dir} pull origin {branch}'
.format(dir=name, branch=branch),
echo=True)
def pull_from_upstream_if_master(self, repository):
"""Pulls the master branch fromthe upstream repository.
This will only have effect if the local repository exists
and is currently in the master branch.
Args:
repository [SourceRepository]: The local repository to update.
"""
name = repository.name
if not os.path.exists(name):
self.pull_from_origin(repository)
branch = self.get_branch_name(name)
if branch != 'master':
sys.stderr.write('Skipping {name} because it is in branch={branch}.\n'
.format(name=name, branch=branch))
return
print 'Pulling master {name} from upstream'.format(name=name)
check_run_quick('git -C {name} pull upstream master'
.format(name=name),
echo=True)
def push_to_origin_if_master(self, repository):
"""Pushes the current master branch of the local repository to the origin.
This will only have effect if the local repository exists
and is currently in the master branch.
Args:
repository [SourceRepository]: The local repository to push from.
"""
name = repository.name
if not os.path.exists(name):
sys.stderr.write('Skipping {name} because it does not yet exist.\n'
.format(name=name))
return
branch = self.get_branch_name(name)
if branch != 'master':
sys.stderr.write('Skipping {name} because it is in branch={branch}.\n'
.format(name=name, branch=branch))
return
print 'Pushing {name} to origin.'.format(name=name)
check_run_quick('git -C {dir} push origin master'.format(dir=name),
echo=True)
def push_all_to_origin_if_master(self):
"""Push all the local repositories current master branch to origin.
This will skip any local repositories that are not currently in the master
branch.
"""
all_repos = self.__REQUIRED_REPOSITORIES + self.__extra_repositories
for repository in all_repos:
self.push_to_origin_if_master(repository)
def pull_all_from_upstream_if_master(self):
"""Pull all the upstream master branches into their local repository.
This will skip any local repositories that are not currently in the master
branch.
"""
all_repos = self.__REQUIRED_REPOSITORIES + self.__extra_repositories
for repository in all_repos:
self.pull_from_upstream_if_master(repository)
def pull_all_from_origin(self):
"""Pull all the origin master branches into their local repository.
This will skip any local repositories that are not currently in the master
branch.
"""
all_repos = self.__REQUIRED_REPOSITORIES + self.__extra_repositories
for repository in all_repos:
self.pull_from_origin(repository)
def __determine_spring_config_location(self):
root = '{dir}/config'.format(
dir=os.path.abspath(os.path.join(os.path.dirname(sys.argv[0]), '..')))
home = os.path.join(os.environ['HOME'], '.spinnaker')
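# e.g. (hypothetical paths) root=/home/alice/build/spinnaker/config and
# home=/home/alice/.spinnaker give
# '/home/alice/build/spinnaker/config/,/home/alice/.spinnaker/'.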
return '{root}/,{home}/'.format(home=home, root=root)
def write_gradle_run_script(self, repository):
"""Generate a dev_run.sh script for the local repository.
Args:
repository [SourceRepository]: The local repository to generate in.
"""
name = repository.name
path = '{name}/start_dev.sh'.format(name=name)
with open(path, 'w') as f:
f.write("""#!/bin/bash
cd $(dirname $0)
LOG_DIR=${{LOG_DIR:-../logs}}
DEF_SYS_PROPERTIES="-Dspring.config.location='{spring_location}'"
bash -c "(./gradlew $DEF_SYS_PROPERTIES $@ > $LOG_DIR/{name}.log) 2>&1\
| tee -a $LOG_DIR/{name}.log >& $LOG_DIR/{name}.err &"
""".format(name=name, spring_location=self.__determine_spring_config_location()))
os.chmod(path, 0777)
def write_deck_run_script(self, repository):
"""Generate a dev_run.sh script for running deck locally.
Args:
repository [SourceRepository]: The local repository to generate in.
"""
name = repository.name
path = '{name}/start_dev.sh'.format(name=name)
with open(path, 'w') as f:
f.write("""#!/bin/bash
cd $(dirname $0)
LOG_DIR=${{LOG_DIR:-../logs}}
if [[ node_modules -ot .git ]]; then
# Update npm, otherwise assume nothing changed and we're good.
npm install >& $LOG_DIR/deck.log
else
echo "deck npm node_modules looks up to date already."
fi
# Append to the log file we just started.
bash -c "(npm start >> $LOG_DIR/{name}.log) 2>&1\
| tee -a $LOG_DIR/{name}.log >& $LOG_DIR/{name}.err &"
""".format(name=name))
os.chmod(path, 0777)
def update_spinnaker_run_scripts(self):
"""Regenerate the local dev_run.sh script for each local repository."""
for repository in self.__REQUIRED_REPOSITORIES:
name = repository.name
if not os.path.exists(name):
continue
if name == 'deck':
self.write_deck_run_script(repository)
else:
self.write_gradle_run_script(repository)
@classmethod
def init_extra_argument_parser(cls, parser):
"""Initialize additional arguments for managing remote repositories.
This is to sync the origin and upstream repositories. The intent
is to ultimately sync the origin from the upstream repository, but
this might be in two steps so the upstream can be verified [again]
before pushing the changes to the origin.
"""
# Note that we only pull the master branch from upstream.
# Pulling other branches doesn't normally make sense.
parser.add_argument('--pull_upstream', default=False,
action='store_true',
help='If the local branch is master, then refresh it'
' from the upstream repository.'
' Otherwise leave as is.')
parser.add_argument('--nopull_upstream',
dest='pull_upstream',
action='store_false')
parser.add_argument('--refresh_master_from_upstream',
dest='pull_upstream',
action='store_true',
help='DEPRECATED '
'If the local branch is master, then refresh it'
' from the upstream repository.'
' Otherwise leave as is.')
parser.add_argument('--norefresh_master_from_upstream',
help='DEPRECATED',
dest='pull_upstream',
action='store_false')
# Note we only push master branches to origin.
# To push another branch, you must explicitly push it with git.
# Perhaps it could make sense to coordinate branches with a common name
# across multiple repositories to push a conceptual change touching
# multiple repositories, but for now we are being conservative with
# what we push.
parser.add_argument('--push_master', default=False,
action='store_true',
help='If the local branch is master then push it to'
' the origin repository. Otherwise do not.')
parser.add_argument('--nopush_master',
dest='push_master',
action='store_false')
parser.add_argument('--push_master_to_origin', default=False,
dest='push_master',
action='store_true',
help='DEPRECATED '
'If the local branch is master then push it to'
' the origin repository. Otherwise do not.')
parser.add_argument('--nopush_master_to_origin',
help='DEPRECATED',
dest='push_master',
action='store_false')
@classmethod
def init_argument_parser(cls, parser):
parser.add_argument('--use_https', default=True, action='store_true',
help='Use https when cloning github repositories.')
parser.add_argument('--use_ssh', dest='use_https', action='store_false',
help='Use SSH when cloning github repositories.')
parser.add_argument('--add_upstream', default=True,
action='store_true',
help='Add upstream repository when cloning.')
parser.add_argument('--noadd_upstream', dest='add_upstream',
action='store_false')
parser.add_argument('--disable_upstream_push', default=True,
action='store_true',
help='Disable future pushes to the upstream'
' repository when cloning a repository.')
parser.add_argument('--nodisable_upstream_push',
dest='disable_upstream_push',
action='store_false')
parser.add_argument('--pull_origin', default=False,
action='store_true',
help='Refresh the local branch from the origin.')
parser.add_argument('--nopull_origin', dest='pull_origin',
action='store_false')
parser.add_argument(
'--extra_repos', default=None,
help='A comma-delimited list of name=owner optional repositories.'
' name is the repository name,'
' owner is the authoritative github user name owning it.'
' The --github_user will still be used to determine the origin.')
parser.add_argument('--github_user', default=None,
help='Pull from this github user\'s repositories.'
' If the user is "default" then use the'
' authoritative (upstream) repository.')
@classmethod
def main(cls):
parser = argparse.ArgumentParser()
cls.init_argument_parser(parser)
cls.init_extra_argument_parser(parser)
options = parser.parse_args()
builder = cls(options)
nothing = True
if options.pull_upstream:
nothing = False
builder.pull_all_from_upstream_if_master()
if options.push_master:
nothing = False
builder.push_all_to_origin_if_master()
if options.pull_origin:
nothing = False
builder.pull_all_from_origin()
builder.update_spinnaker_run_scripts()
if nothing:
sys.stderr.write('No pull/push options were specified.\n')
else:
print 'DONE'
if __name__ == '__main__':
Refresher.main()

dev/refresh_source.sh Executable file
@@ -0,0 +1,17 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PYTHONPATH=$(dirname $0)/../pylib python $(dirname $0)/refresh_source.py "$@"

dev/run_dev.sh Executable file
@@ -0,0 +1,23 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [[ $# -eq 0 ]]; then
args="ALL"
else
args="$@"
fi
PYTHONPATH=$(dirname $0)/../pylib python $(dirname $0)/dev_runner.py START $args

dev/stop_dev.sh Executable file
@@ -0,0 +1,23 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [[ $# -eq 0 ]]; then
args="ALL"
else
args="$@"
fi
PYTHONPATH=$(dirname $0)/../pylib python $(dirname $0)/dev_runner.py STOP $args

install/__init__.py Normal file

@@ -0,0 +1,211 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script is specific to preparing a Google-hosted virtual machine
# for running Spinnaker when the instance was created with metadata
# holding configuration information.
set -e
set -u
# We're running as root, but HOME might not be defined.
HOME=${HOME:-"/root"}
SPINNAKER_INSTALL_DIR=/opt/spinnaker
CONFIG_DIR=$HOME/.spinnaker
# This status prefix provides a hook to inject output signals with status
# messages for consumers like the Google Deployment Manager Coordinator.
# Normally this isn't needed. Callers will populate it as they need
# using --status_prefix.
STATUS_PREFIX="*"
METADATA_URL="http://metadata.google.internal/computeMetadata/v1"
INSTANCE_METADATA_URL="$METADATA_URL/instance"
if full_zone=$(curl -s -H "Metadata-Flavor: Google" "$INSTANCE_METADATA_URL/zone"); then
MY_ZONE=$(basename $full_zone)
else
echo "Not running on Google Cloud Platform."
MY_ZONE=""
fi
function get_instance_metadata_attribute() {
local name="$1"
local value
# Test curl's exit status directly; 'local value=$(...)' would test
# the status of 'local' itself, which is always 0.
if value=$(curl -s -f -H "Metadata-Flavor: Google" \
$INSTANCE_METADATA_URL/attributes/$name); then
echo "$value"
else
echo ""
fi
}
function write_instance_metadata() {
gcloud compute instances add-metadata `hostname` \
--zone $MY_ZONE \
--metadata "$@"
return $?
}
function clear_metadata_to_file() {
local key="$1"
local path="$2"
local value=$(get_instance_metadata_attribute "$key")
if [[ "$value" != "" ]]; then
echo "$value" > $path
clear_instance_metadata "$key"
if [[ $? -ne 0 ]]; then
die "Could not clear metadata from $key"
fi
return 0
fi
return 1
}
function clear_instance_metadata() {
gcloud compute instances remove-metadata `hostname` \
--zone $MY_ZONE \
--keys "$1"
return $?
}
function replace_startup_script() {
# Keep the original around for reference.
# From now on, all we need to do is start_spinnaker
local original=$(get_instance_metadata_attribute "startup-script")
echo "$original" > "$SPINNAKER_INSTALL_DIR/scripts/original_startup_script.sh"
write_instance_metadata \
"startup-script=$SPINNAKER_INSTALL_DIR/scripts/start_spinnaker.sh"
}
function extract_spinnaker_local_yaml() {
local value=$(get_instance_metadata_attribute "spinnaker_local")
if [[ "$value" == "" ]]; then
return 1
fi
local config="$CONFIG_DIR/spinnaker-local.yml"
mkdir -p $(dirname $config)
echo "$value" > $config
chmod 600 $config
clear_instance_metadata "spinnaker_local"
return 0
}
function extract_spinnaker_credentials() {
extract_spinnaker_google_credentials
extract_spinnaker_aws_credentials
}
function extract_spinnaker_google_credentials() {
local json_path="$CONFIG_DIR/ManagedProjectCredentials.json"
mkdir -p $(dirname $json_path)
if clear_metadata_to_file "managed_project_credentials" $json_path; then
# This is a workaround for difficulties using the Google Deployment Manager
# to express no value. We'll use the value "None". But we don't want
# to officially support this, so we'll just strip it out of this first
# time boot if we happen to see it, and assume the Google Deployment Manager
# got in the way.
sed -i s/^None$//g $json_path
if [[ -s $json_path ]]; then
chmod 400 $json_path
echo "Extracted google credentials to $json_path"
else
rm $json_path
fi
else
clear_instance_metadata "managed_project_credentials"
json_path=""
fi
# This can't be configured when we create the instance because
# the path is local within this instance (file transmitted in metadata)
# Remove the old line, if one existed, and replace it with a new one.
# This way it does not matter whether the user supplied it or not
# (and might have had it point to something client side).
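# For example (hypothetical original value, assuming HOME=/root), a line
#   jsonPath: /home/alice/keys/my-project.json
# is rewritten, when credentials were supplied in metadata, to
#   jsonPath: /root/.spinnaker/ManagedProjectCredentials.json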
if [[ -f "$CONFIG_DIR/spinnaker-local.yml" ]]; then
sed -i "s/\( \+jsonPath:\).\+/\1 ${json_path//\//\\\/}/g" \
$CONFIG_DIR/spinnaker-local.yml
fi
}
function extract_spinnaker_aws_credentials() {
local credentials_path="$HOME/.aws/credentials"
mkdir -p $(dirname $credentials_path)
if clear_metadata_to_file "aws_credentials" $credentials_path; then
# This is a workaround for difficulties using the Google Deployment Manager
# to express no value. We'll use the value "None". But we don't want
# to officially support this, so we'll just strip it out of this first
# time boot if we happen to see it, and assume the Google Deployment Manager
# got in the way.
sed -i s/^None$//g $credentials_path
if [[ -s $credentials_path ]]; then
chmod 400 $credentials_path
echo "Extracted aws credentials to $credentials_path"
else
rm $credentials_path
fi
else
clear_instance_metadata "aws_credentials"
fi
}
function process_args() {
while [[ $# -gt 0 ]]
do
local key="$1"
case $key in
--status_prefix)
STATUS_PREFIX="$2"
shift
;;
*)
echo "ERROR: unknown option '$key'."
exit -1
;;
esac
shift
done
}
# apply outstanding updates since time of image creation
apt-get -y update
apt-get -y dist-upgrade
process_args "$@"
mkdir -p /root/.spinnaker
echo "$STATUS_PREFIX Extracting Configuration Info"
extract_spinnaker_local_yaml
echo "$STATUS_PREFIX Extracting Credentials"
extract_spinnaker_credentials
echo "$STATUS_PREFIX Configuring Spinnaker"
$SPINNAKER_INSTALL_DIR/scripts/reconfigure_spinnaker.sh
# Replace this first time boot with the normal startup script
# that just starts spinnaker (and its dependencies) without configuring it again.
echo "$STATUS_PREFIX Cleaning Up"
replace_startup_script
echo "$STATUS_PREFIX Starting Spinnaker"
$SPINNAKER_INSTALL_DIR/scripts/start_spinnaker.sh
echo "$STATUS_PREFIX Spinnaker is now ready"

@@ -0,0 +1,44 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The Spinnaker debian packages have dependencies on openjdk-8-jre.
# If you have a different JDK 1.8 installed then we need to subvert
# the requirement checks.
#
# This script creates a fake debian package that claims to offer openjdk-8-jre.
# Installing it will satisfy other dependency checks, though obviously this does
# not actually provide openjdk-8-jre.
cd /tmp
cat > fake-openjdk-8-jre.txt <<EOF
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: fake-openjdk-8-jre
Version: 1.0
Provides: openjdk-8-jre
Description: Fake openjdk-8-jre dependency
EOF
if ! which equivs-build; then
echo 'installing equivs...'
sudo apt-get install -y equivs
fi
equivs-build fake-openjdk-8-jre.txt
echo "For the record, "Java -version" says\n$(java -version)"
echo "Installing 'fake-openjdk-8-jre' package to suppress openjdk-8-jre checks".
sudo dpkg -i fake-openjdk-8-jre_1.0_all.deb

@@ -0,0 +1,303 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import re
import sys
import tempfile
from spinnaker.run import check_run_and_monitor
from spinnaker.run import run_quick
from spinnaker.fetch import check_fetch
# These explicit versions are only applicable when not using the
# package manager. When using the package manager, the version will
# be determined by the package manager itself (i.e. latest version).
EXPLICIT_CASSANDRA_VERSION='2.1.9'
EXPLICIT_OPENJDK_8_VERSION='8u45-b14-1~14.04'
DECK_PORT=9000
def check_options(options):
if options.package_manager is None:
error = 'Must specify either --package_manager or --nopackage_manager'
raise SystemExit(error)
def init_argument_parser(parser, default_values={}):
"""Initialize ArgumentParser with commandline arguments for this module."""
parser.add_argument('--apache',
default=default_values.get('apache', True),
action='store_true',
help='Install apache2 server.')
parser.add_argument('--noapache', dest='apache', action='store_false')
parser.add_argument('--cassandra',
default=default_values.get('cassandra', True),
action='store_true',
help='Install cassandra service.')
parser.add_argument('--nocassandra', dest='cassandra',
action='store_false')
parser.add_argument('--jdk',
default=default_values.get('jdk', True),
action='store_true',
help='Install openjdk.')
parser.add_argument('--nojdk', dest='jdk', action='store_false')
parser.add_argument(
'--package_manager',
default=default_values.get('package_manager', None),
action='store_true',
help='Allow modifications to package manager repository list.'
' If this is not permitted, then packages not part of the standard'
' release will be installed directly rather than adding their'
' sources to the package manager repository list.')
parser.add_argument('--nopackage_manager',
dest='package_manager', action='store_false')
parser.add_argument('--redis',
default=default_values.get('redis', True),
action='store_true',
help='Install redis-server service.')
parser.add_argument('--noredis', dest='redis', action='store_false')
parser.add_argument('--update_os',
default=default_values.get('update_os', False),
action='store_true',
help='Install OS updates since the base image.')
parser.add_argument(
'--noupdate_os', dest='update_os', action='store_false')
def check_install_package(name, version=None, options=[]):
"""Install the specified package, with specific version if provide.
Args:
name: The unversioned package name.
version: If provided, the specific version to install.
options: Additional command-line options to apt-get install.
"""
package_name = name
if version:
package_name += '={0}'.format(version)
command = ['sudo apt-get -q -y']
command.extend(options)
command.extend(['install', package_name])
check_run_and_monitor(' '.join(command), echo=True)
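# e.g. check_install_package('openjdk-8-jre', version=EXPLICIT_OPENJDK_8_VERSION)
# runs: sudo apt-get -q -y install openjdk-8-jre=8u45-b14-1~14.04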
def install_os_updates(options):
if not options.update_os:
print 'Skipping os upgrades.'
return
print 'Upgrading packages...'
check_run_and_monitor('sudo apt-get -y update', echo=True)
check_run_and_monitor('sudo apt-get -y dist-upgrade', echo=True)
def install_runtime_dependencies(options):
"""Install all the spinnaker runtime dependencies.
Args:
options: ArgumentParserNamespace can turn off individual dependencies.
"""
check_options(options)
install_java(options, which='jre')
install_cassandra(options)
install_redis(options)
install_apache(options)
install_os_updates(options)
def check_java_version():
try:
result = run_quick('java -version', echo=False)
except OSError as error:
return str(error)
info = result.stdout
if result.returncode != 0:
return 'Java does not appear to be installed.'
m = re.search(r'(?m)^openjdk version "(.*)"', info)
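# Matches output such as (hypothetical version): openjdk version "1.8.0_45",
# capturing '1.8.0_45' as group(1).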
if not m:
m = re.search(r'(?m)^java version "(.*)"', info)
if not m:
return 'Unrecognized java version:\n{0}'.format(info)
if m.group(1)[0:3] != '1.8':
return ('Java {version} is currently installed.'
' However, version 1.8 is required.'.format(version=m.group(1)))
print 'Found java {version}'.format(version=m.group(1))
return None
def install_java(options, which='jre'):
"""Install java.
TODO(ewiseblatt):
This requires a package manager, but only because I'm not sure how
to install it without one. If you are not using a package manager,
then version 1.8 must already be installed.
Args:
options: ArgumentParserNamespace options.
which: Install either 'jre' or 'jdk'.
"""
if not options.jdk:
print '--nojdk skipping Java install.'
return
if which != 'jre' and which != 'jdk':
raise ValueError('Expected which=(jdk|jre)')
check_options(options)
if not options.package_manager:
msg = check_java_version()
if msg:
sys.stderr.write(
('{msg}\nSorry, Java must already be installed using the'
' package manager.\n'.format(msg=msg)))
raise SystemExit('Java must already be installed.')
else:
print 'Using existing java.'
return
print 'Installing OpenJdk...'
check_run_and_monitor('sudo add-apt-repository -y ppa:openjdk-r/ppa',
echo=True)
check_run_and_monitor('sudo apt-get -y update', echo=True)
check_install_package('openjdk-8-{which}'.format(which=which),
version=EXPLICIT_OPENJDK_8_VERSION)
cmd = ['sudo', 'update-java-alternatives']
if which == 'jre':
cmd.append('--jre')
cmd.extend(['-s', '/usr/lib/jvm/java-1.8.0-openjdk-amd64'])
check_run_and_monitor(' '.join(cmd), echo=True)
def install_cassandra(options):
"""Install Cassandra.
Args:
options: ArgumentParserNamespace options.
"""
if not options.cassandra:
print '--nocassandra skipping Cassandra install.'
return
print 'Installing Cassandra...'
check_options(options)
preferred_version = None
if not options.package_manager:
root = 'https://archive.apache.org/dist/cassandra/debian/pool/main/c'
try:
os.mkdir('downloads')
except OSError:
pass
preferred_version = EXPLICIT_CASSANDRA_VERSION
cassandra = 'cassandra_{ver}_all.deb'.format(ver=preferred_version)
tools = 'cassandra-tools_{ver}_all.deb'.format(ver=preferred_version)
fetch_result = check_fetch(
'{root}/cassandra/{cassandra}'.format(root=root, cassandra=cassandra))
with open('downloads/{cassandra}'
.format(cassandra=cassandra), 'w') as f:
f.write(fetch_result.content)
fetch_result = check_fetch(
'{root}/cassandra/{tools}'
.format(root=root, tools=tools))
with open('downloads/{tools}'
.format(tools=tools), 'w') as f:
f.write(fetch_result.content)
check_run_and_monitor('sudo dpkg -i downloads/' + cassandra, echo=True)
check_run_and_monitor('sudo dpkg -i downloads/' + tools, echo=True)
else:
check_run_and_monitor(
'sudo add-apt-repository -s'
' "deb http://www.apache.org/dist/cassandra/debian 21x main"',
echo=True)
check_run_and_monitor('sudo apt-get -q -y update', echo=True)
check_install_package('cassandra', version=preferred_version,
options=['--force-yes'])
def install_redis(options):
"""Install Redis-Server.
Args:
options: ArgumentParserNamespace options.
"""
if not options.redis:
print '--noredis skips Redis install.'
return
print 'Installing Redis...'
check_install_package('redis-server', version=None)
def install_apache(options):
"""Install Apache2
This will update /etc/apache2/ports.conf so Apache listens on DECK_PORT
instead of its default port 80.
Args:
options: ArgumentParserNamespace options.
"""
if not options.apache:
print '--noapache skips Apache install.'
return
print 'Installing apache2...'
check_install_package('apache2', version=None)
# Change apache to run on port DECK_PORT by default.
# The file is owned by root, so stage the edit and move it into place with sudo.
with open('/etc/apache2/ports.conf', 'r') as f:
content = f.read()
print 'Changing default port to {0}'.format(DECK_PORT)
content = content.replace('Listen 80\n', 'Listen {0}\n'.format(DECK_PORT))
# write changes to a temp file so we can atomically replace the old one
fd, temp_path = tempfile.mkstemp()
os.write(fd, content)
os.close(fd)
# Replace the file while preserving the original owner and protection bits.
check_run_and_monitor('sudo bash -c "'
'chmod --reference={etc} {temp}'
'; chown --reference={etc} {temp}'
'; mv {temp} {etc}"'
.format(etc='/etc/apache2/ports.conf', temp=temp_path),
echo=False)
check_run_and_monitor('sudo apt-get install -f -y', echo=True)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
init_argument_parser(parser)
options = parser.parse_args()
install_runtime_dependencies(options)

install/install_spinnaker.py Executable file
@@ -0,0 +1,487 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Installs spinnaker onto the local machine.
--release_path must be specified using either a path or storage service URI
(either Google Compute Storage or Amazon S3).
Spinnaker depends on openjdk-8-jre. If this isn't installed but some other
equivalent JDK 1.8 is installed, then you can run install_fake_openjdk8.sh
to fake out the package manager. That script is included with this script.
"""
import argparse
import os
import re
import subprocess
import sys
import tempfile
import install_runtime_dependencies
from spinnaker.run import run_and_monitor
from spinnaker.run import check_run_and_monitor
from spinnaker.run import check_run_quick
from spinnaker.run import run_quick
def get_user_config_dir(options):
"""Returns the directory used to hold deployment configuration info."""
return '/root/.spinnaker'
def get_config_install_dir(options):
"""Returns the directory used to hold the installation master config.
These are not intended to be overridden, but -local variants can be added.
"""
return (os.path.join(get_spinnaker_dir(options), 'config'))
def get_spinnaker_dir(options):
"""Returns the spinnaker installation directory."""
path = options.spinnaker_dir or '/opt/spinnaker'
if not os.path.exists(path):
print 'Creating spinnaker_dir=' + path
safe_mkdir(path)
return path
def init_argument_parser(parser):
install_runtime_dependencies.init_argument_parser(parser)
parser.add_argument(
'--dependencies', default=True, action='store_true',
help='Install the runtime system dependencies.')
parser.add_argument(
'--nodependencies', dest='dependencies', action='store_false')
parser.add_argument(
'--spinnaker', default=True, action='store_true',
help='Install spinnaker subsystems.')
parser.add_argument(
'--nospinnaker', dest='spinnaker', action='store_false')
parser.add_argument(
'--spinnaker_dir', default=None,
help='Nonstandard path to install spinnaker files into.')
parser.add_argument(
'--release_path', default=None,
help='The path to the release being installed.')
def safe_mkdir(dir):
"""Create a local directory if it does not already exist.
Args:
dir [string]: The path to the directory to create.
"""
result = run_quick('sudo mkdir -p "{dir}"'.format(dir=dir), echo=False)
if result.returncode:
raise RuntimeError('Could not create directory "{dir}": {error}'.format(
dir=dir, error=result.stdout))
def start_copy_file(options, source, target, dir=False):
"""Copy a file.
Args:
source [string]: The path to copy from is either local or the URI for
a storage service (Amazon S3 or Google Cloud Storage).
target [string]: A local path to copy to.
dir [bool]: If True, copy source recursively as a directory tree.
Returns:
A subprocess instance performing the copy.
"""
if source.startswith('gs://'):
if dir:
safe_mkdir(target)
command = ('sudo bash -c'
' "PATH=$PATH gsutil -m -q cp {R} \"{source}\"{X} \"{target}\""'
.format(source=source, target=target,
R='-R' if dir else '',
X='/*' if dir else ''))
elif source.startswith('s3://'):
command = ('sudo bash -c'
' "PATH=$PATH aws s3 cp {R} --region {region}'
' \"{source}\" \"{target}\""'
.format(source=source, target=target, region=options.region,
R='--recursive' if dir else ''))
else:
# Use a shell to copy here to handle wildcard expansion.
command = 'sudo cp "{source}" "{target}"'.format(
source=source, target=target)
process = subprocess.Popen(command, stderr=subprocess.PIPE, shell=True)
return process
def start_copy_dir(options, source, target):
return start_copy_file(options, source, target, dir=True)
def check_wait_for_copy_complete(jobs):
"""Waits for each of the subprocesses to finish.
Args:
jobs [list of subprocess]: Jobs we are waiting on.
Raises:
RuntimeError if any of the copies failed.
"""
for j in jobs:
stdout, stderr = j.communicate()
if j.returncode != 0:
output = stdout or stderr or ''
error = 'COPY FAILED with {0}: {1}'.format(j.returncode, output.strip())
raise RuntimeError(error)
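# Illustration only: copies are launched in parallel and then awaited as a
# batch. A hedged sketch using the helpers above; the gs:// release path
# and the options object are hypothetical.
jobs = [
    start_copy_file(options, 'gs://my-release/config/clouddriver.yml',
                    '/opt/spinnaker/config'),
    start_copy_dir(options, 'gs://my-release/cassandra',
                   '/opt/spinnaker/cassandra'),
]
check_wait_for_copy_complete(jobs)  # Raises RuntimeError on any failure.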
def get_release_metadata(options, bucket):
"""Gets metadata files from the release.
This sets the global PACKAGE_LIST and CONFIG_LIST variables
telling us specifically what we'll need to install.
Args:
options [namespace]: The argparse namespace with options.
bucket [string]: The path or storage service URI to pull from.
"""
spinnaker_dir = get_spinnaker_dir(options)
safe_mkdir(spinnaker_dir)
job = start_copy_file(options,
os.path.join(bucket, 'config/release_config.cfg'),
spinnaker_dir)
check_wait_for_copy_complete([job])
with open(os.path.join(spinnaker_dir, 'release_config.cfg'), 'r') as f:
content = f.read()
global PACKAGE_LIST
global CONFIG_LIST
PACKAGE_LIST = (re.search('\nPACKAGE_LIST="(.*?)"', content)
.group(1).split())
CONFIG_LIST = (re.search('\nCONFIG_LIST="(.*?)"', content)
.group(1).split())
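# Illustration only: a hypothetical config/release_config.cfg body and the
# values the regular expressions above extract from it.
sample = ('\nPACKAGE_LIST="clouddriver_1.0-3_all.deb orca_1.0-3_all.deb"'
          '\nCONFIG_LIST="spinnaker.yml settings.js"\n')
print re.search('\nPACKAGE_LIST="(.*?)"', sample).group(1).split()
# ['clouddriver_1.0-3_all.deb', 'orca_1.0-3_all.deb']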
def check_google_path(path):
check_result = run_quick('gsutil --version', echo=False)
if check_result.returncode:
error = """
ERROR: gsutil is required to retrieve the spinnaker release from GCS.
If you already have gsutil, fix your path.
Otherwise follow the instructions at
https://cloud.google.com/storage/docs/gsutil_install?hl=en#install
and be sure you run gsutil config.
Then run again.
"""
raise RuntimeError(error)
result = run_quick('gsutil ls ' + path, echo=False)
if result.returncode:
error = ('The path "{dir}" does not seem to exist within GCS.'
' gsutil ls returned "{stdout}"\n'.format(
dir=path, stdout=result.stdout.strip()))
raise RuntimeError(error)
def check_s3_path(path):
check_result = run_quick('aws --version', echo=False)
if check_result.returncode:
error = """
ERROR: aws is required to retrieve the spinnaker release from S3.
If you already have aws, fix your path.
Otherwise install awscli with "sudo apt-get install awscli".
Then run again.
"""
raise RuntimeError(error)
result = run_quick('aws s3 ls ' + path, echo=False)
if result.returncode:
error = ('The path "{dir}" does not seem to exist within S3.'
' aws s3 ls returned "{stdout}"\n'.format(
dir=path, stdout=result.stdout.strip()))
raise RuntimeError(error)
def check_release_dir(options):
"""Verify the options specify a release_path we can read.
Args:
options [namespace]: The argparse namespace
"""
if not options.release_path:
error = ('--release_path cannot be empty.'
' Either specify a --release or a --release_path.')
raise ValueError(error)
if os.path.exists(options.release_path):
return
if options.release_path.startswith('gs://'):
check_google_path(options.release_path)
elif options.release_path.startswith('s3://'):
check_s3_path(options.release_path)
else:
error = 'Unknown path --release_path={dir}\n'.format(
dir=options.release_path)
raise ValueError(error)
def check_options(options):
"""Verify the options make sense.
Args:
options [namespace]: The options from argparser.
"""
install_runtime_dependencies.check_options(options)
check_release_dir(options)
if (options.release_path.startswith('s3://')
and not options.region):
raise ValueError('--region is required with an S3 release-uri.')
def inject_spring_config_location(options, subsystem):
"""Add spinnaker.yml to the spring config location path.
This might be temporary. Once this is standardized, perhaps the packages
will already be shipped with this setting.
"""
if subsystem == "deck":
return
path = os.path.join('/opt', subsystem, 'bin', subsystem)
with open(path, 'r') as f:
content = f.read()
match = re.search('\nDEFAULT_JVM_OPTS=(.+)\n', content)
if not match:
raise ValueError('Expected DEFAULT_JVM_OPTS in ' + path)
value = match.group(1)
if value.find('-Dspring.config.location=') >= 0:
sys.stderr.write(
'WARNING: spring.config.location was already explicitly defined.'
'\nLeaving ' + match.group(0) + '\n') # Show whole thing.
return
new_content = [content[0:match.start(1)]]
offset = 1 if value[0] == '\'' or value[0] == '"' else 0
quote = '"' if value[0] == '\'' else '\''
root = '/opt/spinnaker/config'
home = '/root/.spinnaker'
new_content.append(value[0:offset])
new_content.append('{quote}-Dspring.config.location={root}/,{home}/{quote}'
.format(quote=quote, home=home, root=root))
new_content.append(' ')
new_content.append(content[match.start(1) + 1:])
fd,temp = tempfile.mkstemp()
os.write(fd, ''.join(new_content))
os.close(fd)
check_run_quick(
'chmod --reference={path} {temp}'.format(path=path, temp=temp))
check_run_quick('sudo mv {temp} {path}'.format(temp=temp, path=path))
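# Illustration only: the edit this performs on a hypothetical
# /opt/clouddriver/bin/clouddriver launch script (the JVM option shown is
# made up).
#   Before: DEFAULT_JVM_OPTS='"-Xmx2g"'
#   After:  DEFAULT_JVM_OPTS='"-Dspring.config.location=/opt/spinnaker/config/,/root/.spinnaker/" "-Xmx2g"'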
def install_spinnaker_packages(options, bucket):
"""Install the spinnaker packages from the specified path.
Args:
bucket [string]: The path to install from, or a storage service URI.
"""
if not options.spinnaker:
return
print 'Installing Spinnaker components from {0}.'.format(bucket)
install_config_dir = get_config_install_dir(options)
spinnaker_dir = get_spinnaker_dir(options)
jobs = []
###########################
# Copy Configuration files
###########################
print 'Copying configuration files.'
safe_mkdir(install_config_dir)
# For now we are copying both the old configuration files
# and the new ones.
# The new ones are not yet fully working so we are keeping
# the old ones around. It's particularly messy because the two
# cohabitate the same directory in the bucket. We separate them
# out in the installation so that the old -local files aren't
# intercepted (with precedence) when using the new files.
# The new files are not enabled by default.
for cfg in CONFIG_LIST:
jobs.append(start_copy_file(options,
os.path.join(bucket, 'config', cfg),
install_config_dir))
jobs.append(
    start_copy_file(
        options,
        os.path.join(bucket, 'config/settings.js'),
        os.path.join(install_config_dir, 'settings.js')))
check_wait_for_copy_complete(jobs)
jobs = []
#############
# Copy Tests
#############
print 'Copying tests.'
tests_dir = os.path.join(spinnaker_dir, 'tests')
jobs.append(
start_copy_dir(options,
os.path.join(bucket, 'tests'),
tests_dir))
###########################
# Copy Subsystem Packages
###########################
print 'Downloading spinnaker release packages...'
package_dir = os.path.join(spinnaker_dir, 'install')
safe_mkdir(package_dir)
for pkg in PACKAGE_LIST:
jobs.append(start_copy_file(options,
os.path.join(bucket, pkg), package_dir))
check_wait_for_copy_complete(jobs)
for pkg in PACKAGE_LIST:
print 'Installing {0}.'.format(pkg)
# Let this fail because it may have dependencies
# that we'll pick up below.
run_and_monitor('sudo dpkg -i ' + os.path.join(package_dir, pkg))
check_run_and_monitor('sudo apt-get install -f -y')
# Convert package name to install directory name.
inject_spring_config_location(options, pkg[0:pkg.find('_')])
# Install package dependencies
check_run_and_monitor('sudo apt-get install -f -y')
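# Illustration only: pkg[0:pkg.find('_')] maps a debian package file name to
# the /opt directory that inject_spring_config_location() edits. With a
# hypothetical entry from PACKAGE_LIST:
pkg = 'clouddriver_1.0-3_all.deb'
print pkg[0:pkg.find('_')]  # 'clouddriver' -> /opt/clouddriver/bin/clouddriver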
def install_spinnaker(options):
"""Install the spinnaker packages.
Args:
options [namespace]: The argparse options.
"""
if not (options.spinnaker or options.dependencies):
return
# The bucket might just be a plain old path,
# or a gs:// or s3:// URI to a path within a storage bucket.
bucket = options.release_path
get_release_metadata(options, bucket)
install_spinnaker_packages(options, bucket)
spinnaker_dir = get_spinnaker_dir(options)
#####################################
# Copy Scripts and Cassandra Schemas
#####################################
install_dir = os.path.join(spinnaker_dir, 'install')
script_dir = os.path.join(spinnaker_dir, 'scripts')
pylib_dir = os.path.join(spinnaker_dir, 'pylib')
cassandra_dir = os.path.join(spinnaker_dir, 'cassandra')
jobs = []
print 'Installing spinnaker scripts.'
# Note this also copies some install files that may already be there
# depending on when this script is being run. The files in the release
# may be different, but they are the release we are installing.
# If this is an issue, we can look into copy without overwriting.
jobs.append(start_copy_dir(options,
os.path.join(bucket, 'install'),
install_dir))
jobs.append(start_copy_dir(options,
os.path.join(bucket, 'runtime'), script_dir))
jobs.append(start_copy_dir(options,
os.path.join(bucket, 'pylib'), pylib_dir))
print 'Installing cassandra schemas.'
jobs.append(start_copy_dir(options,
os.path.join(bucket, 'cassandra'), cassandra_dir))
check_wait_for_copy_complete(jobs)
# Use chmod since +x is convenient.
# Fork a shell to do the wildcard expansion.
check_run_quick('sudo chmod +x {files}'
.format(files=os.path.join(spinnaker_dir, 'scripts/*.sh')))
check_run_quick('sudo chmod +x {files}'
.format(files=os.path.join(spinnaker_dir, 'install/*.sh')))
user_config_dir = get_user_config_dir(options)
install_config_dir = get_config_install_dir(options)
local_yml_path = os.path.join(user_config_dir, 'spinnaker-local.yml')
if not os.path.exists(local_yml_path):
print 'Copying a default spinnaker-local.yml'
prototype_path = os.path.join(install_config_dir,
'default-spinnaker-local.yml')
local_yml_content = make_default_spinnaker_yml_from_path(prototype_path)
fd,temp = tempfile.mkstemp()
os.write(fd, local_yml_content)
os.close(fd)
commands = ['mkdir -p {config_dir}'
.format(config_dir=user_config_dir),
'cp {temp} {config_dir}/spinnaker-local.yml'
.format(temp=temp, config_dir=user_config_dir),
'chmod 600 {config_dir}/spinnaker-local.yml'
.format(config_dir=user_config_dir),
'rm -f {temp}'.format(temp=temp)]
check_run_quick('sudo bash -c "{commands}"'
.format(commands=' && '.join(commands)), echo=True)
def make_default_spinnaker_yml_from_path(prototype_path):
with open(prototype_path, 'r') as f:
content = f.read()
return content
def main():
parser = argparse.ArgumentParser()
init_argument_parser(parser)
options = parser.parse_args()
check_options(options)
if options.dependencies:
install_runtime_dependencies.install_runtime_dependencies(options)
else:
if install_runtime_dependencies.check_java_version() is not None:
install_runtime_dependencies.install_java(options)
if options.update_os:
install_runtime_dependencies.install_os_updates(options)
if options.spinnaker:
install_runtime_dependencies.install_apache(options)
install_spinnaker(options)
if __name__ == '__main__':
main()

pylib/spinnaker/configurator.py Normal file (190 lines)

@@ -0,0 +1,190 @@
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
import sys
import yaml_util
class InstallationParameters(object):
"""Describes a standard release installation layout.
Contains constants for where different parts of the release are installed.
Attributes:
USER_CONFIG_DIR: Path to directory containing installation configuration
files for the individual subsystems.
LOG_DIR: Path to directory where individual log files are written.
SUBSYSTEM_ROOT_DIR: Path to directory containing spinnaker subsystem
installation directories.
SPINNAKER_INSTALL_DIR: Path to the root spinnaker installation directory.
UTILITY_SCRIPT_DIR: Path to directory containing spinnaker maintenance
and other utility scripts.
EXTERNAL_DEPENDENCY_SCRIPT_DIR: Path to directory containing maintenance
and utility scripts for managing dependencies outside spinnaker itself.
INSTALLED_CONFIG_DIR: Path to directory containing the master configuration
files for the release. These are intended to be read-only.
DECK_INSTALL_DIR: Path to directory where deck is installed, which is
typically different from the other spinnaker subsystems.
HACK_DECK_SETTINGS_FILENAME: The name of the settings file for deck
is non-standard and recorded here for the time being.
"""
USER_CONFIG_DIR = '/root/.spinnaker'
LOG_DIR = '/opt/spinnaker/logs'
SUBSYSTEM_ROOT_DIR = '/opt'
SPINNAKER_INSTALL_DIR = '/opt/spinnaker'
UTILITY_SCRIPT_DIR = '/opt/spinnaker/scripts'
EXTERNAL_DEPENDENCY_SCRIPT_DIR = '/opt/spinnaker/scripts'
INSTALLED_CONFIG_DIR = SPINNAKER_INSTALL_DIR + '/config'
DECK_INSTALL_DIR = '/var/www'
HACK_DECK_SETTINGS_FILENAME = 'settings.js'
class Configurator(object):
"""Defines methods for manipulating spinnaker configuration data."""
@property
def bindings(self):
"""Returns the system level yaml bindings.
This is spinnaker.yml with spinnaker-local imposed on top of it.
"""
if self.__bindings is None:
self.__bindings = yaml_util.load_bindings(
self.installation_config_dir, self.user_config_dir)
return self.__bindings
@property
def installation(self):
"""Returns the installation configuration (directory locations)."""
return self.__installation
@property
def installation_config_dir(self):
"""Returns the location of the system installed config directory."""
return self.__installation.INSTALLED_CONFIG_DIR
@property
def deck_install_dir(self):
"""Returns the location of the deck directory for the active settings.js"""
if not self.__installation.DECK_INSTALL_DIR:
pwd = os.environ.get('PWD', '.')
deck_path = os.path.join(pwd, 'deck')
if not os.path.exists(deck_path):
error = ('To operate on deck, this program must be run from your'
' build directory containing the deck project subdirectory'
', not "{pwd}".'.format(pwd=pwd))
raise RuntimeError(error)
self.__installation.DECK_INSTALL_DIR = deck_path
return self.__installation.DECK_INSTALL_DIR
@property
def user_config_dir(self):
"""Returns the user (or system's) .spinnaker directory for overrides."""
return self.__installation.USER_CONFIG_DIR
def __init__(self, installation_parameters=None):
"""Constructor
Args:
installation_parameters: An InstallationParameters instance.
"""
if not installation_parameters:
installation_parameters = InstallationParameters()
if os.geteuid():
  # If we are not running as root and there is a system installation on
  # this machine as well as a personal ~/.spinnaker directory, then it is
  # ambiguous which one we are validating. For safety we'll force this
  # to be the normal system installation, and warn that we are doing so.
  user_config = os.path.join(os.environ['HOME'], '.spinnaker')
  deck_dir = installation_parameters.DECK_INSTALL_DIR
  if os.path.exists('/root/.spinnaker'):
    if os.path.exists(user_config):
      sys.stderr.write(
          'WARNING: You have both personal and system Spinnaker'
          ' configurations on this machine. Assuming the system'
          ' configuration.\n')
    user_config = '/root/.spinnaker'
  else:
    # Discover it from the build directory if needed.
    deck_dir = None
    # If we are not root, allow for a non-standard installation location.
    installation_parameters.INSTALLED_CONFIG_DIR = os.path.abspath(
        os.path.join(os.path.dirname(sys.argv[0]), '../../config'))
  installation_parameters.USER_CONFIG_DIR = user_config
  installation_parameters.DECK_INSTALL_DIR = deck_dir
self.__installation = installation_parameters
self.__bindings = None # Load on demand
def update_deck_settings(self):
"""Update the settings.js file from configuration info."""
source_path = os.path.join(self.installation_config_dir, 'settings.js')
with open(source_path, 'r') as f:
source = f.read()
settings = self.process_deck_settings(source)
target_path = os.path.join(self.deck_install_dir, 'settings.js')
print 'Rewriting deck settings in "{path}".'.format(path=target_path)
with open(target_path, 'w') as f:
f.write(''.join(settings))
def process_deck_settings(self, source):
offset = source.find('// BEGIN reconfigure_spinnaker')
if offset < 0:
raise ValueError(
    'deck settings file does not contain a'
    ' "// BEGIN reconfigure_spinnaker" marker.')
end = source.find('// END reconfigure_spinnaker')
if end < 0:
raise ValueError(
'deck settings file does not contain a'
' "// END reconfigure_spinnaker" marker.')
original_block = source[offset:end]
# Remove all the explicit declarations in this block
# Leaving us with just comments
block = re.sub('\n\s*let\s+\w+\s*=(.+)\n', '\n', original_block)
settings = [source[:offset]]
# Now iterate over the comments looking for let specifications
offset = 0
for match in re.finditer('//\s*let\s+(\w+)\s*=\s*(.+?);?\n', block):
settings.append(block[offset:match.end()])
offset = match.end()
name = match.group(1)
value = self.bindings.replace(match.group(2))
settings.append('let {name} = {value!r};\n'.format(
name=name, value=value))
settings.append(block[offset:])
settings.append(source[end:])
return ''.join(settings)
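# Illustration only: the settings.js block format process_deck_settings()
# expects; the marker strings come from the code above, while the variable
# name and binding are hypothetical.
#   // BEGIN reconfigure_spinnaker
#   // let gateUrl = ${services.gate.baseUrl};
#   let gateUrl = 'http://localhost:8084';
#   // END reconfigure_spinnaker
# The explicit let lines are stripped, then regenerated from the commented
# specifications with each ${...} reference resolved through the bindings.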

pylib/spinnaker/fetch.py Normal file (140 lines)

@@ -0,0 +1,140 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import os
import socket
import sys
import urllib2
from run import check_run_quick
GOOGLE_METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1'
GOOGLE_INSTANCE_METADATA_URL = GOOGLE_METADATA_URL + '/instance'
GOOGLE_OAUTH_URL = 'https://www.googleapis.com/auth'
AWS_METADATA_URL = 'http://169.254.169.254/latest/meta-data/'
class UnsupportedError(Exception):
  """Raised when the current platform does not support the operation.
  check_write_instance_metadata() and check_get_zone() below raise this
  when running somewhere other than GCE or AWS.
  """
  pass
class FetchResult(collections.namedtuple(
'FetchResult', ['httpcode', 'content'])):
"""Captures the result of fetching a url.
Attributes:
httpcode [int]: The HTTP code returned or -1 if the request raised
an exception.
content [string or error]: The HTTP payload result, or the error raised.
"""
def ok(self):
return self.httpcode >= 200 and self.httpcode < 300
def fetch(url, google=False):
request = urllib2.Request(url)
if google:
request.add_header('Metadata-Flavor', 'Google')
try:
response = urllib2.urlopen(request)
return FetchResult(response.getcode(), response.read())
except urllib2.HTTPError as e:
return FetchResult(-1, e)
except urllib2.URLError as e:
return FetchResult(-1, e)
def check_fetch(url, google=False):
response = fetch(url, google)
if not response.ok():
sys.stderr.write('{code}: {url}\n{result}\n'.format(
code=response.httpcode, url=url, result=response.content))
raise SystemExit('FAILED')
return response
__IS_ON_GOOGLE = None
__IS_ON_AWS = None
__ZONE = None
def is_google_instance():
"""Determine if we are running on a Google Cloud Platform instance."""
global __IS_ON_GOOGLE
if __IS_ON_GOOGLE is None:
__IS_ON_GOOGLE = fetch(GOOGLE_METADATA_URL, google=True).ok()
return __IS_ON_GOOGLE
def is_aws_instance():
"""Determine if we are running on an Amazon Web Services instance."""
global __IS_ON_AWS
if __IS_ON_AWS is None:
__IS_ON_AWS = fetch(AWS_METADATA_URL).ok()
return __IS_ON_AWS
def check_write_instance_metadata(name, value):
"""Add a name/value pair to our instance metadata.
Args:
name [string]: The key name.
value [string]: The key value.
Raises
UnsupportedError if not on a platform with metadata.
"""
if is_google_instance():
check_run_quick(
'gcloud compute instances add-metadata'
' {hostname} --zone={zone} --metadata={name}={value}'
.format(hostname=socket.gethostname(),
zone=check_get_zone(), name=name, value=value))
elif is_aws_instance():
result = check_fetch(os.path.join(AWS_METADATA_URL, 'instance-id'))
id = result.content.strip()
result = check_fetch(os.path.join(AWS_METADATA_URL,
'placement/availability-zone'))
region = result.content.strip()[:-1]
command = ['aws ec2 create-tags --resources', id,
'--region', region,
'--tags Key={key},Value={value}'.format(key=name, value=value)]
check_run_quick(' '.join(command), echo=False)
else:
raise UnsupportedError('This platform does not support metadata.')
def get_google_project():
"""Return the Google project this is running in, or None."""
result = fetch(GOOGLE_METADATA_URL + '/project/project-id', google=True)
return result.content if result.ok() else None
def check_get_zone():
global __ZONE
if __ZONE is None:
if is_google_instance():
result = check_fetch(GOOGLE_INSTANCE_METADATA_URL + '/zone', google=True)
__ZONE = os.path.basename(result.content)
elif is_aws_instance():
result = check_fetch(AWS_METADATA_URL + '/placement/availability-zone')
__ZONE = result.content
else:
raise UnsupportedError('This platform does not support zones.')
return __ZONE
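# Illustration only: a hedged sketch of how these probes compose. Off-cloud,
# both metadata fetches fail, so both predicates return False.
if is_google_instance() or is_aws_instance():
  print 'zone:', check_get_zone()
else:
  print 'not running on GCE or AWS'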

@@ -0,0 +1,24 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
from configurator import Configurator
if __name__ == '__main__':
configurator = Configurator()
configurator.update_deck_settings()

pylib/spinnaker/run.py Normal file (185 lines)

@@ -0,0 +1,185 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provides support functions for running shell commands."""
import collections
import fcntl
import os
import subprocess
import sys
class RunResult(collections.namedtuple('RunResult',
['returncode', 'stdout', 'stderr'])):
"""Captures the result of running a subprocess.
If output was not captured then stdout and stderr will be None.
"""
pass
def __collect_from_stream(stream, buffer, echo_stream):
"""Read all the input from a stream.
Args:
stream [File]: The file to read() from.
buffer [list of string]: The buffer to append the collected data to.
This will write a single chunk if any data was read.
echo_stream [stream]: If not None, the File to write() for logging
the stream.
Returns:
Number of additional bytes added to the buffer.
"""
collected = []
try:
while True:
got = os.read(stream.fileno(), 1)
if not got:
break
collected.append(got)
if echo_stream:
echo_stream.write(got)
echo_stream.flush()
except OSError:
pass
# Chunk together all the data we just received.
if collected:
buffer.append(''.join(collected))
return len(collected)
def run_and_monitor(command, echo=True, input=None):
"""Run the provided command in a subprocess shell.
Args:
command [string]: The shell command to execute.
echo [bool]: If True then echo the command and output to stdout.
input [string]: If non-empty then feed this to stdin.
Returns:
RunResult with result code and output from running the command.
"""
if echo:
print command
sys.stdout.flush()
stdin = subprocess.PIPE if input else None
process = subprocess.Popen(
command,
stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=stdin,
shell=True, close_fds=True)
# Setup for nonblocking reads
fl = fcntl.fcntl(process.stdout, fcntl.F_GETFL)
fcntl.fcntl(process.stdout, fcntl.F_SETFL, fl | os.O_NONBLOCK)
fl = fcntl.fcntl(process.stderr, fcntl.F_GETFL)
fcntl.fcntl(process.stderr, fcntl.F_SETFL, fl | os.O_NONBLOCK)
if stdin:
process.stdin.write(input)
process.stdin.close()
out = []
err = []
echo_out = sys.stdout if echo else None
echo_err = sys.stderr if echo else None
while (__collect_from_stream(process.stdout, out, echo_out)
or __collect_from_stream(process.stderr, err, echo_err)
or process.poll() is None):
pass
# Get any trailing data from termination race condition
__collect_from_stream(process.stdout, out, echo_out)
__collect_from_stream(process.stderr, err, echo_err)
return RunResult(process.returncode, ''.join(out), ''.join(err))
def run_quick(command, echo=True):
"""A more efficient form of run_and_monitor that doesnt monitor output.
Args:
command [string]: The shell command to run.
echo [bool]: If True then echo the command and output to stdout.
Returns:
RunResult with result code and output from running the command.
The content of stderr will be joined into stdout.
stderr itself will be None.
"""
p = subprocess.Popen(command, shell=True, close_fds=True,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, stderr = p.communicate()
if echo:
print command
print stdout
return RunResult(p.returncode, stdout, None)
def check_run_and_monitor(command, echo=True, input=None):
"""Runs the command in a subshell and throws an exception if it fails.
Args:
command [string]: The shell command to run.
echo [bool]: If True then echo the command and output to stdout.
input [string]: If non-empty then feed this to stdin.
Returns:
RunResult with result code and output from running the command.
Raises:
RuntimeError if command failed.
"""
result = run_and_monitor(command, echo=echo, input=input)
if result.returncode != 0:
error = 'FAILED {command} with exit code {code}\n{err}'.format(
command=command, code=result.returncode, err=result.stderr.strip())
sys.stderr.write(error + '\n')
raise RuntimeError(error)
return result
def check_run_quick(command, echo=True):
"""A more efficient form of check_run_and_monitor that doesnt monitor output.
Args:
command [string]: The shell command to run.
echo [bool]: If True then echo the command and output to stdout.
Returns:
RunResult with result code and output from running the command.
The content of stderr will be joined into stdout.
stderr itself will be None.
Raises:
RuntimeError if command failed.
"""
result = run_quick(command, echo)
if result.returncode:
error = ('FAILED with exit code {code}'
'\n\nCommand was {command}'
'\n\nError was {stdout}'.format(
code=result.returncode,
command=command,
stdout=result.stdout))
raise RuntimeError(error)
return result
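# Illustration only: the difference between the two flavors defined above.
result = run_quick('ls /opt', echo=False)  # Captures output, never raises.
print result.returncode, result.stdout
check_run_quick('true')  # Raises RuntimeError if the command fails.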

@@ -0,0 +1,666 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import re
import resource
import shutil
import signal
import socket
import subprocess
import sys
import time
import yaml
import yaml_util
from configurator import Configurator
from fetch import check_fetch
from fetch import get_google_project
from fetch import is_google_instance
from fetch import GOOGLE_METADATA_URL
from run import check_run_quick
from run import run_quick
__ifconfig_lines = None
def is_local(ip):
"""Determine if the given ip address refers to this machine or not.
Args:
ip [string]: A hostname or ip address suitable for binding to a socket.
"""
if (ip == 'localhost' or ip == '127.0.0.1' # variations of localhost binding
or ip == socket.gethostname() # external NIC binding
or ip == '0.0.0.0'): # all local interfaces
return True
global __ifconfig_lines
if not __ifconfig_lines:
result = run_quick('/sbin/ifconfig | egrep -i " inet?"', echo=False)
__ifconfig_lines = result.stdout
# This should be quick and dirty without worrying about IP4 vs IP6 or
# particulars about how the system ifconfig formats its output.
# It assumes the ip is valid.
return __ifconfig_lines.find(ip) >= 0
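# Illustration only: loopback and wildcard addresses are always local;
# anything else is matched against the cached ifconfig output.
print is_local('127.0.0.1')  # True
print is_local('0.0.0.0')    # True
print is_local('10.1.2.3')   # True only if a local interface has that address.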
class Runner(object):
"""Provides routines for starting / stopping Spinnaker subsystems."""
# Denotes we are operating on all the subsystems
__SPINNAKER_COMPONENT = 'all'
# These are all the standard spinnaker subsystems that can be started
# independent of one another.
INDEPENDENT_SUBSYSTEM_LIST=['clouddriver', 'front50', 'orca', 'rosco',
'echo']
# Denotes a process running on an external host
EXTERNAL_PID = -123
@property
def _first_time_use_instructions(self):
"""Instructions for configuring Spinnaker for the first time.
Google Cloud Platform is treated as a special case because some
configuration parameters have defaults implied by the runtime environment.
"""
optional_defaults_if_on_google = ''
if is_google_instance():
google_project = get_google_project()
optional_defaults_if_on_google = """ # NOTE: Since you deployed on GCE:
# * You do not need JSON credentials to manage project id "{google_project}".
""".format(google_project=google_project)
return """
{sudo}mkdir -p {config_dir}
{sudo}cp {install_dir}/default-spinnaker-local.yml \\
{config_dir}/spinnaker-local.yml
{sudo}chmod 600 {config_dir}/spinnaker-local.yml
# edit {config_dir}/spinnaker-local.yml to your liking:
# If you want to deploy to Amazon Web Services:
# * Set providers.aws.enabled = true.
# * Add your keys to providers.aws.primaryCredentials
# or write them to $HOME/.aws/credentials
#
# If you want to deploy to Google Cloud Platform:
# * Set providers.google.enabled = true.
# * Add your project_id to providers.google.primaryCredentials.project
# * Add the path to your JSON service account credentials to
#   providers.google.primaryCredentials.jsonPath.
{optional_defaults_if_on_google}
{sudo}{script_dir}/stop_spinnaker.sh
{sudo}{script_dir}/reconfigure_spinnaker.sh
{sudo}{script_dir}/start_spinnaker.sh
""".format(
sudo='' if os.geteuid() else 'sudo ',
install_dir=self.__configurator.installation_config_dir,
config_dir=self.__configurator.user_config_dir,
script_dir=self.__installation.UTILITY_SCRIPT_DIR,
optional_defaults_if_on_google=optional_defaults_if_on_google)
@property
def bindings(self):
return self.__bindings
@property
def installation(self):
return self.__installation
@property
def configurator(self):
return self.__configurator
def __init__(self, installation_parameters=None):
self.__configurator = Configurator(installation_parameters)
self.__bindings = self.__configurator.bindings
self.__installation = self.__configurator.installation
local_yml_path = os.path.join(self.__installation.USER_CONFIG_DIR,
'spinnaker-local.yml')
if not os.path.exists(local_yml_path):
# Just warn and proceed anyway.
sys.stderr.write('WARNING: No {local_yml_path}\n'.format(
local_yml_path=local_yml_path))
# These are all the spinnaker subsystems in total.
@classmethod
def get_all_subsystem_names(cls):
# These are always started. Order doesn't matter.
result = list(cls.INDEPENDENT_SUBSYSTEM_LIST)
# These are additional, optional subsystems.
result.extend(['rush', 'igor'])
# Gate is started after everything else is up and available.
result.append('gate')
# deck is not included here because it is run within apache,
# which is managed separately.
return result
@staticmethod
def run_daemon(path, args, detach=True, environ=None):
"""Run a program as a long-running background process.
Args:
path [string]: Path to the program to run.
args [list of string]: Arguments to pass to program
detach [bool]: True if we're running it in a separate process group.
A separate process group will continue after we exit.
"""
pid = os.fork()
if pid == 0:
if detach:
os.setsid()
else:
return pid
# Iterate through and close all file descriptors
# (other than stdin/out/err).
maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
if (maxfd == resource.RLIM_INFINITY):
maxfd = 1024
for fd in range(3, maxfd):
try:
os.close(fd)
except OSError:
pass
os.execve(path, args, environ or os.environ)
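# Illustration only: launching a long-lived child through run_daemon().
# The parent gets the child's pid back; the child never returns because it
# execs. The sleep command is just a stand-in.
#   pid = Runner.run_daemon('/bin/sleep', ['/bin/sleep', '60'])
#   print 'started daemon with pid', pid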
def stop_subsystem(self, subsystem, pid):
"""Stop the specified subsystem.
Args:
subsystem [string]: The name of the subsystem.
pid [int]: The process id of the running subsystem.
"""
os.kill(pid, signal.SIGTERM)
def start_subsystem_if_local(self, subsystem, environ=None):
"""Start the specified subsystem.
Args:
subsystem [string]: The name of the subsystem.
environ [dict]: If set, use these environment variables instead.
Returns:
The pid of the subsystem once running or EXTERNAL_PID if run elsewhere.
"""
host = self.__bindings.get(
'services.{system}.host'.format(system=subsystem))
if not is_local(host):
print 'Expecting {subsystem} to be on external host {host}'.format(
subsystem=subsystem, host=host)
return self.EXTERNAL_PID
return self.start_subsystem(subsystem, environ)
def start_subsystem(self, subsystem, environ=None):
"""Start the specified subsystem.
Args:
subsystem [string]: The name of the subsystem.
environ [dict]: If set, use these environment variables instead.
Returns:
The pid of the subsystem once running.
"""
print 'Starting {subsystem}'.format(subsystem=subsystem)
command = os.path.join(self.__installation.SUBSYSTEM_ROOT_DIR,
subsystem, 'bin', subsystem)
base_log_path = os.path.join(self.__installation.LOG_DIR, subsystem)
return self.run_daemon('/bin/bash',
['/bin/bash',
'-c',
'({command} > {log}.log) 2>&1 '
'| tee -a {log}.log >& {log}.err'
.format(command=command, log=base_log_path)],
environ=environ)
def start_dependencies(self):
"""Start all the external dependencies running on this host."""
run_dir = self.__installation.EXTERNAL_DEPENDENCY_SCRIPT_DIR
cassandra_host = self.__bindings.get('services.cassandra.host')
redis_host = self.__bindings.get('services.redis.host')
print 'Starting external dependencies...'
check_run_quick(
'REDIS_HOST={host}'
' LOG_DIR={log_dir}'
' {run_dir}/start_redis.sh'
.format(host=redis_host,
log_dir=self.__installation.LOG_DIR,
run_dir=run_dir),
echo=True)
check_run_quick(
'CASSANDRA_HOST={host}'
' CASSANDRA_DIR={install_dir}/cassandra'
' {run_dir}/start_cassandra.sh'
.format(host=cassandra_host,
install_dir=self.__installation.SPINNAKER_INSTALL_DIR,
run_dir=run_dir),
echo=True)
def get_subsystem_environ(self, subsystem):
if self.__bindings and subsystem != 'clouddriver':
return os.environ
if not self.__bindings.get('providers.aws.enabled'):
return os.environ
environ = dict(os.environ)
# Set AWS environment variables for credentials if not already there.
key_id = self.__bindings.get(
'providers.aws.primaryCredentials.access_key_id')
secret_key = self.__bindings.get(
'providers.aws.primaryCredentials.secret_key')
if key_id:
environ['AWS_ACCESS_KEY_ID'] = environ.get('AWS_ACCESS_KEY_ID', key_id)
if secret_key:
environ['AWS_SECRET_KEY'] = environ.get('AWS_SECRET_KEY', secret_key)
return environ
def maybe_start_job(self, jobs, subsystem):
if subsystem in jobs:
print '{subsystem} already running as pid {pid}'.format(
subsystem=subsystem, pid=jobs[subsystem])
return jobs[subsystem]
else:
return self.start_subsystem_if_local(
subsystem, environ=self.get_subsystem_environ(subsystem))
def start_spinnaker_subsystems(self, jobs):
started_list = []
for subsys in self.INDEPENDENT_SUBSYSTEM_LIST:
pid = self.maybe_start_job(jobs, subsys)
if pid:
started_list.append((subsys, pid))
docker_address = self.__bindings.get('services.docker.baseUrl')
jenkins_address = self.__bindings.get(
'services.jenkins.defaultMaster.baseUrl')
igor_enabled = self.__bindings.get('services.igor.enabled')
# Conditionally run rush only if docker is configured.
# A '$' indicates an unbound variable, so it wasn't configured.
if docker_address and docker_address[0] != '$':
pid = self.maybe_start_job(jobs, 'rush')
if pid:
started_list.append(('rush', pid))
else:
print 'Not using rush because docker is not configured.'
# Conditionally run igor only if jenkins is configured.
# A '$' indicates an unbound variable, so it wasn't configured.
if jenkins_address and jenkins_address[0] != '$':
if not igor_enabled:
sys.stderr.write(
'WARNING: Not starting igor because IGOR_ENABLED=false'
' even though JENKINS_ADDRESS="{address}"\n'.format(
address=jenkins_address))
else:
pid = self.maybe_start_job(jobs, 'igor')
if pid:
started_list.append(('igor', pid))
else:
print 'Not using igor because jenkins is not configured.'
for subsystem in started_list:
self.wait_for_service(subsystem[0], pid=subsystem[1])
pid = self.maybe_start_job(jobs, 'gate')
self.wait_for_service('gate', pid=pid)
def get_all_java_subsystem_jobs(self):
"""Look up all the running java jobs.
Returns:
dictionary keyed by package name (spinnaker subsystem) with pid values.
"""
re_pid_and_subsystem = None
# Try jps, but this is not currently available on openjdk-8-jre
# so depending on the JRE environment, this might not work.
p = subprocess.Popen(
'jps -l', stdout=subprocess.PIPE, stderr=subprocess.PIPE,
shell=True, close_fds=True)
stdout, stderr = p.communicate()
if p.returncode == 0:
re_pid_and_subsystem = re.compile(
'([0-9]+) com\.netflix\.spinnaker\.([^\.]+)\.')
else:
# If jps did not work, then try using ps instead.
# ps can be flaky because it truncates the commandline to 4K, which
# is typically too short for the spinnaker classpath alone, never mind
# additional arguments. The reliable part of the command is in the
# truncated region, so we'll look for something in a potentially
# brittle part of the commandline.
stdout, stderr = subprocess.Popen(
'ps -fwwC java', stdout=subprocess.PIPE, stderr=subprocess.PIPE,
shell=True, close_fds=True).communicate()
re_pid_and_subsystem = re.compile(
'([0-9]+) .* -classpath {install_root}/([^/]+)/'
.format(install_root=self.__installation.SUBSYSTEM_ROOT_DIR))
job_map = {}
for match in re_pid_and_subsystem.finditer(stdout):
name = match.groups()[1]
pid = int(match.groups()[0])
job_map[name] = pid
return job_map
def find_new_port_and_address(self, subsystem):
"""This is assuming a specific configuration practice.
Overrides for default ports only occur in ~/<subsystem>-local.yml
or in ~/spinnaker-local.yml or in <install>/config/spinnnaker.yml
The actual runtime uses spring, which can be overriden for additional
search locations.
"""
path = os.path.join(self.__installation.USER_CONFIG_DIR,
subsystem + '-local.yml')
if os.path.exists(path):
bindings = yaml_util.YamlBindings()
bindings.import_dict(self.__bindings.map)
bindings.import_path(path)
else:
bindings = self.__bindings
subsystem = subsystem.replace('-', '_')
return (bindings.get('services.{subsys}.port'.format(subsys=subsystem)),
bindings.get('services.{subsys}.host'.format(subsys=subsystem)))
def find_port_and_address(self, subsystem):
if self.__bindings:
return self.find_new_port_and_address(subsystem)
path = os.path.join(self.__installation.USER_CONFIG_DIR,
subsystem + '-local.yml')
if not os.path.exists(path):
raise SystemExit('ERROR: Expected configuration file {path}.\n'
' Run {sudo}{dir}/reconfigure_spinnaker.sh'
.format(path=path,
sudo='' if os.geteuid() else 'sudo ',
dir=self.__installation.UTILITY_SCRIPT_DIR))
with open(path, 'r') as f:
data = yaml.load(f, Loader=yaml.Loader)
return data['server']['port'], data['server'].get('address', None)
@staticmethod
def start_tail(path):
return subprocess.Popen(['/usr/bin/tail', '-f', path], stdout=sys.stdout,
shell=False)
def wait_for_service(self, subsystem, pid, show_log_while_waiting=True):
try:
port, address = self.find_port_and_address(subsystem)
except KeyError:
error = ('A port for {subsystem} is not explicit in the configuration.'
' Assuming it is up since it isn't clear how to test for it.'
.format(subsystem=subsystem))
sys.stderr.write(error)
raise SystemExit(error)
log_path = os.path.join(self.__installation.LOG_DIR, subsystem + '.log')
print ('Waiting for {subsys} to start accepting requests on port {port}...'
.format(subsys=subsystem, port=port))
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tail_process = None
wait_msg_retries = 5 # Give half a second before showing log/warning.
while True:
if address:
host_colon = address.find(':')
host = address if host_colon < 0 else address[:host_colon]
else:
host = 'localhost'
try:
sock.connect((host, port))
break
except IOError:
if pid == self.EXTERNAL_PID:
tail_process = True # Pretend, but really nothing to tail.
else:
try:
os.kill(pid, 0)
except OSError:
raise SystemExit('{subsys} failed to start'.format(
subsys=subsystem))
if show_log_while_waiting and not tail_process:
if wait_msg_retries > 0:
# Give an initial delay before checking,
# as well as a delay between checks.
# The initial delay is because of a race condition having an old
# log file and starting to write a new one.
wait_msg_retries -= 1
elif not os.path.exists(log_path):
print '{path} does not yet exist..'.format(path=log_path)
wait_msg_retries = 5  # don't display again for half a second.
else:
tail_process = self.start_tail(log_path)
time.sleep(0.1)
if tail_process and pid != self.EXTERNAL_PID:
tail_process.kill()
sock.close()
print 'Spinnaker subsystem={subsys} is up.'.format(subsys=subsystem)
def warn_if_configuration_looks_old(self):
local_yml_path = os.path.join(self.__installation.USER_CONFIG_DIR,
'spinnaker-local.yml')
try:
global_stat = os.stat(local_yml_path)
except OSError:
return
settings_path = os.path.join(
self.__installation.DECK_INSTALL_DIR,
self.__installation.HACK_DECK_SETTINGS_FILENAME)
old = False
if os.path.exists(settings_path):
setting_stat = os.stat(settings_path)
if setting_stat.st_mtime < global_stat.st_mtime:
sys.stderr.write('WARNING: {settings} is older than {baseline}\n'
.format(settings=settings_path,
baseline=local_yml_path))
old = True
if old:
sys.stderr.write("""
To fix this run the following:
sudo {script_dir}/stop_spinnaker.sh
sudo {script_dir}/reconfigure_spinnaker_instance.sh
sudo {script_dir}/start_spinnaker.sh
Proceeding anyway.
""".format(script_dir=self.__installation.UTILITY_SCRIPT_DIR))
def stop_deck(self):
print 'Stopping apache server while starting Spinnaker.'
run_quick('service apache2 stop', echo=True)
def start_deck(self):
print 'Starting apache server.'
run_quick('service apache2 start', echo=True)
def start_all(self, options):
self.check_configuration(options)
try:
os.makedirs(self.__installation.LOG_DIR)
except OSError:
pass
self.start_dependencies()
google_enabled = self.__bindings.get('providers.google.enabled')
jobs = self.get_all_java_subsystem_jobs()
self.start_spinnaker_subsystems(jobs)
self.start_deck()
print 'Started all Spinnaker components.'
def maybe_stop_subsystem(self, name, jobs):
pid = jobs.get(name, 0)
if not pid:
print '{name} was not running'.format(name=name)
return 0
print 'Terminating {name} in pid={pid}'.format(name=name, pid=pid)
self.stop_subsystem(name, pid)
return pid
def stop(self, options):
stopped_list = []
component = options.component.lower()
if not component:
component = self.__SPINNAKER_COMPONENT
jobs = self.get_all_java_subsystem_jobs()
if component != self.__SPINNAKER_COMPONENT:
if component == 'deck':
self.stop_deck()
else:
pid = self.maybe_stop_subsystem(component, jobs)
if pid:
stopped_list.append((component, pid))
else:
self.stop_deck()
for name in self.get_all_subsystem_names():
pid = self.maybe_stop_subsystem(name, jobs)
if pid:
stopped_list.append((name, pid))
for name,pid in stopped_list:
count = 0
while True:
try:
os.kill(pid, 0)
count += 1
if count % 10 == 0:
if count == 10:
sys.stdout.write('Waiting on {name}, pid={pid}..'.format(
name=name, pid=pid))
else:
sys.stdout.write('.')
sys.stdout.flush()
time.sleep(0.1)
except OSError:
if count >= 10:  # We didn't start logging until count reached 10.
sys.stdout.write('{pid} stopped.\n'.format(pid=pid))
sys.stdout.flush()
break
def run(self, options):
action = options.action.upper()
component = options.component.lower()
if action == 'RESTART':
self.stop(options)
action = 'START'
if action == 'START':
if component == self.__SPINNAKER_COMPONENT:
self.start_all(options)
else:
self.maybe_start_job(self.get_all_java_subsystem_jobs(), component)
if action == 'STOP':
self.stop(options)
def init_argument_parser(self, parser):
parser.add_argument('action', help='START or STOP or RESTART')
parser.add_argument('component',
help='Name of component to start or stop, or ALL')
def check_configuration(self, options):
local_path = os.path.join(self.__installation.USER_CONFIG_DIR,
'spinnaker-local.yml')
if not os.path.exists(local_path):
sys.stderr.write(
'WARNING: {path} does not exist.\n'
'To custom-configure spinnaker do the following: {first_time_use}\n'
.format(path=local_path,
first_time_use=self._first_time_use_instructions))
self.warn_if_configuration_looks_old()
@classmethod
def main(cls):
  java_error = cls.check_java_version()
  if java_error:
    raise SystemExit(java_error)
runner = cls()
parser = argparse.ArgumentParser()
runner.init_argument_parser(parser)
options = parser.parse_args()
runner.run(options)
@staticmethod
def check_java_version():
"""Ensure that we will be running the right version of Java.
The point here is to fail quickly with a concise message if not. Otherwise,
the runtime will perform a check and give an obscure lengthy exception
trace about a version mismatch which is not at all apparent as to what the
actual problem is.
"""
try:
p = subprocess.Popen('java -version', shell=True,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, stderr = p.communicate()
code = p.returncode
except OSError as error:
return str(error)
info = stdout
if code != 0:
return 'Java does not appear to be installed.'
m = re.search(r'(?m)^openjdk version "(.*)"', info)
if not m:
m = re.search(r'(?m)^java version "(.*)"', info)
if not m:
raise SystemExit('Unrecognized java version:\n{0}'.format(info))
if m.group(1)[0:3] != '1.8':
raise SystemExit('You are running Java version {version}.'
' However, Java version 1.8 is required for Spinnaker.'
' Your PATH may be wrong, or you may need to install Java 1.8.'
.format(version=m.group(1)))
if __name__ == '__main__':
if os.geteuid():
sys.stderr.write('ERROR: This script must be run with sudo.\n')
sys.exit(-1)
Runner.main()

@@ -0,0 +1,221 @@
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
import sys
from configurator import Configurator
from fetch import fetch
from fetch import is_google_instance
from fetch import GOOGLE_INSTANCE_METADATA_URL
from fetch import GOOGLE_METADATA_URL
from fetch import GOOGLE_OAUTH_URL
class ValidateConfig(object):
@property
def errors(self):
return self.__errors
@property
def warnings(self):
return self.__warnings
def __init__(self, configurator=None):
if not configurator:
configurator = Configurator()
self.__bindings = configurator.bindings
self.__user_config_dir = configurator.user_config_dir
self.__warnings = []
self.__errors = []
def validate(self):
"""Validate the configuration.
Returns:
True or False after printing the results to stdout.
"""
# TODO: Add more verification here
# This is representative for the time being.
self.verify_google_scopes()
self.verify_external_dependencies()
self.verify_security()
yml_path = os.path.join(os.environ.get('HOME', '/root'),
'.spinnaker/spinnaker-local.yml')
if not os.path.exists(yml_path):
self.__warnings.append(
'There is no custom configuration file "{path}"'.format(path=yml_path))
if self.__warnings:
print ('{path} has non-fatal configuration warnings:\n * {warnings}'
.format(path=yml_path, warnings='\n * '.join(self.__warnings)))
if not self.__errors:
print '{path} seems ok.'.format(path=yml_path)
return True
else:
print ('{path} has configuration errors:\n * {errors}'
.format(path=yml_path, errors='\n * '.join(self.__errors)))
return False
def check_validate(self):
"""Validate the configuration.
Raise a ValueError if the configuration is invalid.
"""
ok = self.validate()
if not ok:
msg = 'Configuration seems invalid.\n * {errors}'.format(
errors='\n * '.join(self.__errors))
raise ValueError(msg)
def is_reference(self, value):
"""Determine if a YAML value is an unresolved variable reference or not.
Args:
value [string]: value to check.
"""
return isinstance(value, basestring) and value.startswith('${')
def verify_true_false(self, name):
"""Verify name has a True or False value.
Args:
name [string]: variable name.
"""
value = self.__bindings.get(name)
if self.is_reference(value):
self.__errors.append('Missing "{name}".'.format(name=name))
return False
if isinstance(value, bool):
return True
self.__errors.append('{name}="{value}" is not valid.'
' Must be boolean true or false.'
.format(name=name, value=value))
return False
def verify_host(self, name, required):
"""Verify name is a valid hostname.
Args:
name [string]: variable name.
required [bool]: If True value cannot be empty.
"""
value = self.__bindings.get(name)
if self.is_reference(value):
self.__errors.append('Missing "{name}".'.format(name=name))
return False
host_regex = '^[-_\.a-z0-9]+$'
if not value:
if not required:
return True
else:
self.__errors.append(
'No host provided for "{name}".'.format(name=name))
return False
if re.match(host_regex, value):
return True
self.__errors.append(
    '{name}="{value}" does not look like {regex}'.format(
        name=name, value=value, regex=host_regex))
def verify_google_scopes(self):
"""Verify that if we are running on Google that our scopes are valid."""
if not is_google_instance():
return
if not self.verify_true_false('providers.google.enabled'):
return
if not self.__bindings.get('providers.google.enabled'):
return
result = fetch(
GOOGLE_INSTANCE_METADATA_URL + '/service-accounts/', google=True)
service_accounts = result.content if result.ok() else ''
required_scopes = [GOOGLE_OAUTH_URL + '/compute']
found_scopes = []
for account in filter(bool, service_accounts.split('\n')):
if account[-1] == '/':
# Strip off trailing '/' so we can take the basename.
account = account[0:-1]
result = fetch(
os.path.join(GOOGLE_INSTANCE_METADATA_URL, 'service-accounts',
os.path.basename(account), 'scopes'),
google=True)
# cloud-platform scope implies all the other scopes.
have = str(result.content)
if have.find('https://www.googleapis.com/auth/cloud-platform') >= 0:
found_scopes.extend(required_scopes)
for scope in required_scopes:
if have.find(scope) >= 0:
found_scopes.append(scope)
for scope in required_scopes:
if not scope in found_scopes:
self.__errors.append(
'Missing required scope "{scope}".'.format(scope=scope))
def verify_external_dependencies(self):
"""Verify that the external dependency references make sense."""
ok = self.verify_host('services.cassandra.host', required=False)
ok = self.verify_host('services.redis.host', required=False) and ok
return ok
def verify_user_access_only(self, path):
"""Verify only the user has permissions to operate on the supplied path.
Args:
path [string]: Path to local file.
"""
if not path or not os.path.exists(path):
return True
stat = os.stat(path)
if stat.st_mode & 077:
self.__errors.append('"{path}" should not have non-owner access.'
' Mode is {mode}.'
.format(path=path,
mode='%03o' % (stat.st_mode & 0xfff)))
return False
return True
def verify_security(self):
"""Verify the permissions on the sensitive configuration files."""
ok = self.verify_user_access_only(
self.__bindings.get('providers.google.primaryCredentials.jsonPath'))
ok = self.verify_user_access_only(
os.path.join(self.__user_config_dir, 'spinnaker-local.yml')) and ok
ok = self.verify_user_access_only(
os.path.join(os.environ.get('HOME', '/root'), '.aws/credentials')) and ok
return ok
if __name__ == '__main__':
sys.exit(0 if ValidateConfig().validate() else -1)
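# Illustration only: the stat.st_mode & 077 test in verify_user_access_only()
# flags any group/other permission bits (Python 2 octal literals).
print '%03o' % (0640 & 077)  # 040 -> group-readable, flagged as an error.
print '%03o' % (0600 & 077)  # 000 -> owner-only access, passes.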

pylib/spinnaker/yaml_util.py Normal file (127 lines)

@@ -0,0 +1,127 @@
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
import yaml
class YamlBindings(object):
"""Implements a map from yaml using variable references similar to spring."""
@property
def map(self):
return self.__map
def __init__(self):
self.__map = {}
def get(self, field):
return self.__get_value(field, [], original=field)
def import_dict(self, d):
for name,value in d.items():
self.__update_field(name, value, self.__map)
def import_string(self, s):
self.import_dict(yaml.load(s, Loader=yaml.Loader))
def import_path(self, path):
with open(path, 'r') as f:
self.import_dict(yaml.load(f, Loader=yaml.Loader))
def __update_field(self, name, value, container):
if not isinstance(value, dict) or not name in container:
container[name] = value
return
container_value = container[name]
if not isinstance(container_value, dict):
container[name] = value
return
for child_name, child_value in value.items():
self.__update_field(child_name, child_value, container_value)
def __get_node(self, field):
path = field.split('.')
node = self.__map
for part in path:
if not isinstance(node, dict) or not part in node:
raise KeyError(field)
if isinstance(node, list):
node = node[0][part]
else:
node = node[part]
return node
def __get_value(self, field, saw, original):
value = self.__get_node(field)
if not isinstance(value, basestring) or not value.startswith('$'):
return value
if field in saw:
raise ValueError('Cycle looking up variable ' + original)
saw = saw + [field]
result = []
offset = 0
# Look for fragments of ${key} or ${key:default} then resolve them.
text = value
for match in re.finditer('\${([\._a-zA-Z0-9]+)(:.+?)?}', text):
result.append(text[offset:match.start()])
try:
got = self.__get_value(match.group(1), saw, original)
result.append(str(got))
except KeyError:
if match.group(2):
result.append(str(match.group(2)[1:]))
else:
result.append(match.group(0))
offset = match.end() # skip trailing '}'
result.append(text[offset:])
return ''.join(result)
def replace(self, text):
result = []
offset = 0
# Look for fragments of ${key} or ${key:default} then resolve them.
for match in re.finditer('\${([\._a-zA-Z0-9]+)(:.+?)?}', text):
result.append(text[offset:match.start()])
try:
result.append(self.get(match.group(1)))
except KeyError:
if match.group(2):
result.append(str(match.group(2)[1:]))
else:
raise
offset = match.end() # skip trailing '}'
result.append(text[offset:])
return ''.join(result)
def load_bindings(installed_config_dir, user_config_dir, only_if_local=False):
local_yml_path = os.path.join(user_config_dir, 'spinnaker-local.yml')
have_local = os.path.exists(local_yml_path)
if only_if_local and not have_local:
return None
bindings = YamlBindings()
bindings.import_path(os.path.join(installed_config_dir, 'spinnaker.yml'))
if have_local:
bindings.import_path(local_yml_path)
return bindings
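To illustrate the spring-style references YamlBindings resolves, a minimal usage
sketch (the keys and values here are made up):

  bindings = YamlBindings()
  bindings.import_string('host: localhost\nurl: ${host}:${port:8080}\n')
  bindings.get('url')    # 'localhost:8080' -- the ':8080' default is used
  bindings.import_dict({'port': 9000})
  bindings.get('url')    # 'localhost:9000'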

Binary data
pylib/yaml/._LICENSE Normal file

Binary file not shown.

Binary data
pylib/yaml/._README.original Normal file

Binary file not shown.

19
pylib/yaml/LICENSE Normal file

@@ -0,0 +1,19 @@
Copyright (c) 2006 Kirill Simonov
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

35
pylib/yaml/README.original Normal file

@@ -0,0 +1,35 @@
PyYAML - The next generation YAML parser and emitter for Python.
To install, type 'python setup.py install'.
By default, the setup.py script checks whether LibYAML is installed
and if so, builds and installs LibYAML bindings. To skip the check
and force installation of LibYAML bindings, use the option '--with-libyaml':
'python setup.py --with-libyaml install'. To disable the check and
skip building and installing LibYAML bindings, use '--without-libyaml':
'python setup.py --without-libyaml install'.
When LibYAML bindings are installed, you may use fast LibYAML-based
parser and emitter as follows:
>>> yaml.load(stream, Loader=yaml.CLoader)
>>> yaml.dump(data, Dumper=yaml.CDumper)
PyYAML includes a comprehensive test suite. To run the tests,
type 'python setup.py test'.
For more information, check the PyYAML homepage:
'http://pyyaml.org/wiki/PyYAML'.
For PyYAML tutorial and reference, see:
'http://pyyaml.org/wiki/PyYAMLDocumentation'.
Post your questions and opinions to the YAML-Core mailing list:
'http://lists.sourceforge.net/lists/listinfo/yaml-core'.
Submit bug reports and feature requests to the PyYAML bug tracker:
'http://pyyaml.org/newticket?component=pyyaml'.
PyYAML is written by Kirill Simonov <xi@resolvent.net>. It is released
under the MIT license. See the file LICENSE for more details.
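A common companion idiom, falling back to the pure-Python classes when the
LibYAML bindings were not built (a sketch, not part of the distribution):

  import yaml
  try:
      from yaml import CLoader as FastLoader   # LibYAML bindings present
  except ImportError:
      from yaml import Loader as FastLoader    # pure-Python fallback
  data = yaml.load('a: 1', Loader=FastLoader)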

10
pylib/yaml/README.txt Normal file

@@ -0,0 +1,10 @@
# The contents of this directory came from:
# curl -O http://pyyaml.org/download/pyyaml/PyYAML-3.11.tar.gz
# tar xzf !$
# cd PyYAML-3.11
# python setup.py --without-libyaml build
# cd build/<platform-subdir>
# And capturing the yaml subdirectory.
# The README.original and LICENSE came from the root of the tar.gz distribution.
# The LICENSE applies only to the contents of this directory (yaml package).

315
pylib/yaml/__init__.py Normal file

@@ -0,0 +1,315 @@
from error import *
from tokens import *
from events import *
from nodes import *
from loader import *
from dumper import *
__version__ = '3.11'
try:
from cyaml import *
__with_libyaml__ = True
except ImportError:
__with_libyaml__ = False
def scan(stream, Loader=Loader):
"""
Scan a YAML stream and produce scanning tokens.
"""
loader = Loader(stream)
try:
while loader.check_token():
yield loader.get_token()
finally:
loader.dispose()
def parse(stream, Loader=Loader):
"""
Parse a YAML stream and produce parsing events.
"""
loader = Loader(stream)
try:
while loader.check_event():
yield loader.get_event()
finally:
loader.dispose()
def compose(stream, Loader=Loader):
"""
Parse the first YAML document in a stream
and produce the corresponding representation tree.
"""
loader = Loader(stream)
try:
return loader.get_single_node()
finally:
loader.dispose()
def compose_all(stream, Loader=Loader):
"""
Parse all YAML documents in a stream
and produce corresponding representation trees.
"""
loader = Loader(stream)
try:
while loader.check_node():
yield loader.get_node()
finally:
loader.dispose()
def load(stream, Loader=Loader):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
"""
loader = Loader(stream)
try:
return loader.get_single_data()
finally:
loader.dispose()
def load_all(stream, Loader=Loader):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
"""
loader = Loader(stream)
try:
while loader.check_data():
yield loader.get_data()
finally:
loader.dispose()
def safe_load(stream):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
Resolve only basic YAML tags.
"""
return load(stream, SafeLoader)
def safe_load_all(stream):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
Resolve only basic YAML tags.
"""
return load_all(stream, SafeLoader)
def emit(events, stream=None, Dumper=Dumper,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None):
"""
Emit YAML parsing events into a stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
from StringIO import StringIO
stream = StringIO()
getvalue = stream.getvalue
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
try:
for event in events:
dumper.emit(event)
finally:
dumper.dispose()
if getvalue:
return getvalue()
def serialize_all(nodes, stream=None, Dumper=Dumper,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding='utf-8', explicit_start=None, explicit_end=None,
version=None, tags=None):
"""
Serialize a sequence of representation trees into a YAML stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
if encoding is None:
from StringIO import StringIO
else:
from cStringIO import StringIO
stream = StringIO()
getvalue = stream.getvalue
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
encoding=encoding, version=version, tags=tags,
explicit_start=explicit_start, explicit_end=explicit_end)
try:
dumper.open()
for node in nodes:
dumper.serialize(node)
dumper.close()
finally:
dumper.dispose()
if getvalue:
return getvalue()
def serialize(node, stream=None, Dumper=Dumper, **kwds):
"""
Serialize a representation tree into a YAML stream.
If stream is None, return the produced string instead.
"""
return serialize_all([node], stream, Dumper=Dumper, **kwds)
def dump_all(documents, stream=None, Dumper=Dumper,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding='utf-8', explicit_start=None, explicit_end=None,
version=None, tags=None):
"""
Serialize a sequence of Python objects into a YAML stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
if encoding is None:
from StringIO import StringIO
else:
from cStringIO import StringIO
stream = StringIO()
getvalue = stream.getvalue
dumper = Dumper(stream, default_style=default_style,
default_flow_style=default_flow_style,
canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
encoding=encoding, version=version, tags=tags,
explicit_start=explicit_start, explicit_end=explicit_end)
try:
dumper.open()
for data in documents:
dumper.represent(data)
dumper.close()
finally:
dumper.dispose()
if getvalue:
return getvalue()
def dump(data, stream=None, Dumper=Dumper, **kwds):
"""
Serialize a Python object into a YAML stream.
If stream is None, return the produced string instead.
"""
return dump_all([data], stream, Dumper=Dumper, **kwds)
def safe_dump_all(documents, stream=None, **kwds):
"""
Serialize a sequence of Python objects into a YAML stream.
Produce only basic YAML tags.
If stream is None, return the produced string instead.
"""
return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
def safe_dump(data, stream=None, **kwds):
"""
Serialize a Python object into a YAML stream.
Produce only basic YAML tags.
If stream is None, return the produced string instead.
"""
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
def add_implicit_resolver(tag, regexp, first=None,
Loader=Loader, Dumper=Dumper):
"""
Add an implicit scalar detector.
If an implicit scalar value matches the given regexp,
the corresponding tag is assigned to the scalar.
first is a sequence of possible initial characters or None.
"""
Loader.add_implicit_resolver(tag, regexp, first)
Dumper.add_implicit_resolver(tag, regexp, first)
def add_path_resolver(tag, path, kind=None, Loader=Loader, Dumper=Dumper):
"""
Add a path based resolver for the given tag.
A path is a list of keys that forms a path
to a node in the representation tree.
Keys can be string values, integers, or None.
"""
Loader.add_path_resolver(tag, path, kind)
Dumper.add_path_resolver(tag, path, kind)
def add_constructor(tag, constructor, Loader=Loader):
"""
Add a constructor for the given tag.
Constructor is a function that accepts a Loader instance
and a node object and produces the corresponding Python object.
"""
Loader.add_constructor(tag, constructor)
def add_multi_constructor(tag_prefix, multi_constructor, Loader=Loader):
"""
Add a multi-constructor for the given tag prefix.
Multi-constructor is called for a node if its tag starts with tag_prefix.
Multi-constructor accepts a Loader instance, a tag suffix,
and a node object and produces the corresponding Python object.
"""
Loader.add_multi_constructor(tag_prefix, multi_constructor)
def add_representer(data_type, representer, Dumper=Dumper):
"""
Add a representer for the given type.
Representer is a function accepting a Dumper instance
and an instance of the given data type
and producing the corresponding representation node.
"""
Dumper.add_representer(data_type, representer)
def add_multi_representer(data_type, multi_representer, Dumper=Dumper):
"""
Add a representer for the given type.
Multi-representer is a function accepting a Dumper instance
and an instance of the given data type or subtype
and producing the corresponding representation node.
"""
Dumper.add_multi_representer(data_type, multi_representer)
class YAMLObjectMetaclass(type):
"""
The metaclass for YAMLObject.
"""
def __init__(cls, name, bases, kwds):
super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
cls.yaml_loader.add_constructor(cls.yaml_tag, cls.from_yaml)
cls.yaml_dumper.add_representer(cls, cls.to_yaml)
class YAMLObject(object):
"""
An object that can dump itself to a YAML stream
and load itself from a YAML stream.
"""
__metaclass__ = YAMLObjectMetaclass
__slots__ = () # no direct instantiation, so allow immutable subclasses
yaml_loader = Loader
yaml_dumper = Dumper
yaml_tag = None
yaml_flow_style = None
def from_yaml(cls, loader, node):
"""
Convert a representation node to a Python object.
"""
return loader.construct_yaml_object(node, cls)
from_yaml = classmethod(from_yaml)
def to_yaml(cls, dumper, data):
"""
Convert a Python object to a representation node.
"""
return dumper.represent_yaml_object(cls.yaml_tag, data, cls,
flow_style=cls.yaml_flow_style)
to_yaml = classmethod(to_yaml)
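As a brief illustration of the registration hooks above, a custom tag can be
wired in through add_constructor; the !point tag here is hypothetical:

  import yaml

  def construct_point(loader, node):
      # Build an (x, y) tuple from a '!point [x, y]' sequence node.
      return tuple(loader.construct_sequence(node))

  yaml.add_constructor(u'!point', construct_point)
  yaml.load('corner: !point [1, 2]')   # {'corner': (1, 2)}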

139
pylib/yaml/composer.py Normal file

@@ -0,0 +1,139 @@
__all__ = ['Composer', 'ComposerError']
from error import MarkedYAMLError
from events import *
from nodes import *
class ComposerError(MarkedYAMLError):
pass
class Composer(object):
def __init__(self):
self.anchors = {}
def check_node(self):
# Drop the STREAM-START event.
if self.check_event(StreamStartEvent):
self.get_event()
# Are there more documents available?
return not self.check_event(StreamEndEvent)
def get_node(self):
# Get the root node of the next document.
if not self.check_event(StreamEndEvent):
return self.compose_document()
def get_single_node(self):
# Drop the STREAM-START event.
self.get_event()
# Compose a document if the stream is not empty.
document = None
if not self.check_event(StreamEndEvent):
document = self.compose_document()
# Ensure that the stream contains no more documents.
if not self.check_event(StreamEndEvent):
event = self.get_event()
raise ComposerError("expected a single document in the stream",
document.start_mark, "but found another document",
event.start_mark)
# Drop the STREAM-END event.
self.get_event()
return document
def compose_document(self):
# Drop the DOCUMENT-START event.
self.get_event()
# Compose the root node.
node = self.compose_node(None, None)
# Drop the DOCUMENT-END event.
self.get_event()
self.anchors = {}
return node
def compose_node(self, parent, index):
if self.check_event(AliasEvent):
event = self.get_event()
anchor = event.anchor
if anchor not in self.anchors:
raise ComposerError(None, None, "found undefined alias %r"
% anchor.encode('utf-8'), event.start_mark)
return self.anchors[anchor]
event = self.peek_event()
anchor = event.anchor
if anchor is not None:
if anchor in self.anchors:
raise ComposerError("found duplicate anchor %r; first occurence"
% anchor.encode('utf-8'), self.anchors[anchor].start_mark,
"second occurence", event.start_mark)
self.descend_resolver(parent, index)
if self.check_event(ScalarEvent):
node = self.compose_scalar_node(anchor)
elif self.check_event(SequenceStartEvent):
node = self.compose_sequence_node(anchor)
elif self.check_event(MappingStartEvent):
node = self.compose_mapping_node(anchor)
self.ascend_resolver()
return node
def compose_scalar_node(self, anchor):
event = self.get_event()
tag = event.tag
if tag is None or tag == u'!':
tag = self.resolve(ScalarNode, event.value, event.implicit)
node = ScalarNode(tag, event.value,
event.start_mark, event.end_mark, style=event.style)
if anchor is not None:
self.anchors[anchor] = node
return node
def compose_sequence_node(self, anchor):
start_event = self.get_event()
tag = start_event.tag
if tag is None or tag == u'!':
tag = self.resolve(SequenceNode, None, start_event.implicit)
node = SequenceNode(tag, [],
start_event.start_mark, None,
flow_style=start_event.flow_style)
if anchor is not None:
self.anchors[anchor] = node
index = 0
while not self.check_event(SequenceEndEvent):
node.value.append(self.compose_node(node, index))
index += 1
end_event = self.get_event()
node.end_mark = end_event.end_mark
return node
def compose_mapping_node(self, anchor):
start_event = self.get_event()
tag = start_event.tag
if tag is None or tag == u'!':
tag = self.resolve(MappingNode, None, start_event.implicit)
node = MappingNode(tag, [],
start_event.start_mark, None,
flow_style=start_event.flow_style)
if anchor is not None:
self.anchors[anchor] = node
while not self.check_event(MappingEndEvent):
#key_event = self.peek_event()
item_key = self.compose_node(node, None)
#if item_key in node.value:
# raise ComposerError("while composing a mapping", start_event.start_mark,
# "found duplicate key", key_event.start_mark)
item_value = self.compose_node(node, item_key)
#node.value[item_key] = item_value
node.value.append((item_key, item_value))
end_event = self.get_event()
node.end_mark = end_event.end_mark
return node
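The anchor bookkeeping above is what lets an alias reuse the node composed for
its anchor, so both references construct to the same object:

  import yaml
  doc = 'defaults: &base {retries: 3}\nprod: *base\n'
  data = yaml.safe_load(doc)
  assert data['prod'] is data['defaults']   # alias reuses the composed node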

675
pylib/yaml/constructor.py Normal file

@@ -0,0 +1,675 @@
__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
'ConstructorError']
from error import *
from nodes import *
import datetime
import binascii, re, sys, types
class ConstructorError(MarkedYAMLError):
pass
class BaseConstructor(object):
yaml_constructors = {}
yaml_multi_constructors = {}
def __init__(self):
self.constructed_objects = {}
self.recursive_objects = {}
self.state_generators = []
self.deep_construct = False
def check_data(self):
# Are there more documents available?
return self.check_node()
def get_data(self):
# Construct and return the next document.
if self.check_node():
return self.construct_document(self.get_node())
def get_single_data(self):
# Ensure that the stream contains a single document and construct it.
node = self.get_single_node()
if node is not None:
return self.construct_document(node)
return None
def construct_document(self, node):
data = self.construct_object(node)
while self.state_generators:
state_generators = self.state_generators
self.state_generators = []
for generator in state_generators:
for dummy in generator:
pass
self.constructed_objects = {}
self.recursive_objects = {}
self.deep_construct = False
return data
def construct_object(self, node, deep=False):
if node in self.constructed_objects:
return self.constructed_objects[node]
if deep:
old_deep = self.deep_construct
self.deep_construct = True
if node in self.recursive_objects:
raise ConstructorError(None, None,
"found unconstructable recursive node", node.start_mark)
self.recursive_objects[node] = None
constructor = None
tag_suffix = None
if node.tag in self.yaml_constructors:
constructor = self.yaml_constructors[node.tag]
else:
for tag_prefix in self.yaml_multi_constructors:
if node.tag.startswith(tag_prefix):
tag_suffix = node.tag[len(tag_prefix):]
constructor = self.yaml_multi_constructors[tag_prefix]
break
else:
if None in self.yaml_multi_constructors:
tag_suffix = node.tag
constructor = self.yaml_multi_constructors[None]
elif None in self.yaml_constructors:
constructor = self.yaml_constructors[None]
elif isinstance(node, ScalarNode):
constructor = self.__class__.construct_scalar
elif isinstance(node, SequenceNode):
constructor = self.__class__.construct_sequence
elif isinstance(node, MappingNode):
constructor = self.__class__.construct_mapping
if tag_suffix is None:
data = constructor(self, node)
else:
data = constructor(self, tag_suffix, node)
if isinstance(data, types.GeneratorType):
generator = data
data = generator.next()
if self.deep_construct:
for dummy in generator:
pass
else:
self.state_generators.append(generator)
self.constructed_objects[node] = data
del self.recursive_objects[node]
if deep:
self.deep_construct = old_deep
return data
def construct_scalar(self, node):
if not isinstance(node, ScalarNode):
raise ConstructorError(None, None,
"expected a scalar node, but found %s" % node.id,
node.start_mark)
return node.value
def construct_sequence(self, node, deep=False):
if not isinstance(node, SequenceNode):
raise ConstructorError(None, None,
"expected a sequence node, but found %s" % node.id,
node.start_mark)
return [self.construct_object(child, deep=deep)
for child in node.value]
def construct_mapping(self, node, deep=False):
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
mapping = {}
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
try:
hash(key)
except TypeError, exc:
raise ConstructorError("while constructing a mapping", node.start_mark,
"found unacceptable key (%s)" % exc, key_node.start_mark)
value = self.construct_object(value_node, deep=deep)
mapping[key] = value
return mapping
def construct_pairs(self, node, deep=False):
if not isinstance(node, MappingNode):
raise ConstructorError(None, None,
"expected a mapping node, but found %s" % node.id,
node.start_mark)
pairs = []
for key_node, value_node in node.value:
key = self.construct_object(key_node, deep=deep)
value = self.construct_object(value_node, deep=deep)
pairs.append((key, value))
return pairs
def add_constructor(cls, tag, constructor):
if not 'yaml_constructors' in cls.__dict__:
cls.yaml_constructors = cls.yaml_constructors.copy()
cls.yaml_constructors[tag] = constructor
add_constructor = classmethod(add_constructor)
def add_multi_constructor(cls, tag_prefix, multi_constructor):
if not 'yaml_multi_constructors' in cls.__dict__:
cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy()
cls.yaml_multi_constructors[tag_prefix] = multi_constructor
add_multi_constructor = classmethod(add_multi_constructor)
class SafeConstructor(BaseConstructor):
def construct_scalar(self, node):
if isinstance(node, MappingNode):
for key_node, value_node in node.value:
if key_node.tag == u'tag:yaml.org,2002:value':
return self.construct_scalar(value_node)
return BaseConstructor.construct_scalar(self, node)
def flatten_mapping(self, node):
merge = []
index = 0
while index < len(node.value):
key_node, value_node = node.value[index]
if key_node.tag == u'tag:yaml.org,2002:merge':
del node.value[index]
if isinstance(value_node, MappingNode):
self.flatten_mapping(value_node)
merge.extend(value_node.value)
elif isinstance(value_node, SequenceNode):
submerge = []
for subnode in value_node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing a mapping",
node.start_mark,
"expected a mapping for merging, but found %s"
% subnode.id, subnode.start_mark)
self.flatten_mapping(subnode)
submerge.append(subnode.value)
submerge.reverse()
for value in submerge:
merge.extend(value)
else:
raise ConstructorError("while constructing a mapping", node.start_mark,
"expected a mapping or list of mappings for merging, but found %s"
% value_node.id, value_node.start_mark)
elif key_node.tag == u'tag:yaml.org,2002:value':
key_node.tag = u'tag:yaml.org,2002:str'
index += 1
else:
index += 1
if merge:
node.value = merge + node.value
def construct_mapping(self, node, deep=False):
if isinstance(node, MappingNode):
self.flatten_mapping(node)
return BaseConstructor.construct_mapping(self, node, deep=deep)
def construct_yaml_null(self, node):
self.construct_scalar(node)
return None
bool_values = {
u'yes': True,
u'no': False,
u'true': True,
u'false': False,
u'on': True,
u'off': False,
}
def construct_yaml_bool(self, node):
value = self.construct_scalar(node)
return self.bool_values[value.lower()]
def construct_yaml_int(self, node):
value = str(self.construct_scalar(node))
value = value.replace('_', '')
sign = +1
if value[0] == '-':
sign = -1
if value[0] in '+-':
value = value[1:]
if value == '0':
return 0
elif value.startswith('0b'):
return sign*int(value[2:], 2)
elif value.startswith('0x'):
return sign*int(value[2:], 16)
elif value[0] == '0':
return sign*int(value, 8)
elif ':' in value:
digits = [int(part) for part in value.split(':')]
digits.reverse()
base = 1
value = 0
for digit in digits:
value += digit*base
base *= 60
return sign*value
else:
return sign*int(value)
inf_value = 1e300
while inf_value != inf_value*inf_value:
inf_value *= inf_value
nan_value = -inf_value/inf_value # Trying to make a quiet NaN (like C99).
def construct_yaml_float(self, node):
value = str(self.construct_scalar(node))
value = value.replace('_', '').lower()
sign = +1
if value[0] == '-':
sign = -1
if value[0] in '+-':
value = value[1:]
if value == '.inf':
return sign*self.inf_value
elif value == '.nan':
return self.nan_value
elif ':' in value:
digits = [float(part) for part in value.split(':')]
digits.reverse()
base = 1
value = 0.0
for digit in digits:
value += digit*base
base *= 60
return sign*value
else:
return sign*float(value)
def construct_yaml_binary(self, node):
value = self.construct_scalar(node)
try:
return str(value).decode('base64')
except (binascii.Error, UnicodeEncodeError), exc:
raise ConstructorError(None, None,
"failed to decode base64 data: %s" % exc, node.start_mark)
timestamp_regexp = re.compile(
ur'''^(?P<year>[0-9][0-9][0-9][0-9])
-(?P<month>[0-9][0-9]?)
-(?P<day>[0-9][0-9]?)
(?:(?:[Tt]|[ \t]+)
(?P<hour>[0-9][0-9]?)
:(?P<minute>[0-9][0-9])
:(?P<second>[0-9][0-9])
(?:\.(?P<fraction>[0-9]*))?
(?:[ \t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
(?::(?P<tz_minute>[0-9][0-9]))?))?)?$''', re.X)
def construct_yaml_timestamp(self, node):
value = self.construct_scalar(node)
match = self.timestamp_regexp.match(node.value)
values = match.groupdict()
year = int(values['year'])
month = int(values['month'])
day = int(values['day'])
if not values['hour']:
return datetime.date(year, month, day)
hour = int(values['hour'])
minute = int(values['minute'])
second = int(values['second'])
fraction = 0
if values['fraction']:
fraction = values['fraction'][:6]
while len(fraction) < 6:
fraction += '0'
fraction = int(fraction)
delta = None
if values['tz_sign']:
tz_hour = int(values['tz_hour'])
tz_minute = int(values['tz_minute'] or 0)
delta = datetime.timedelta(hours=tz_hour, minutes=tz_minute)
if values['tz_sign'] == '-':
delta = -delta
data = datetime.datetime(year, month, day, hour, minute, second, fraction)
if delta:
data -= delta
return data
def construct_yaml_omap(self, node):
# Note: we do not check for duplicate keys, because it's too
# CPU-expensive.
omap = []
yield omap
if not isinstance(node, SequenceNode):
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a sequence, but found %s" % node.id, node.start_mark)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a mapping of length 1, but found %s" % subnode.id,
subnode.start_mark)
if len(subnode.value) != 1:
raise ConstructorError("while constructing an ordered map", node.start_mark,
"expected a single mapping item, but found %d items" % len(subnode.value),
subnode.start_mark)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
value = self.construct_object(value_node)
omap.append((key, value))
def construct_yaml_pairs(self, node):
# Note: the same code as `construct_yaml_omap`.
pairs = []
yield pairs
if not isinstance(node, SequenceNode):
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a sequence, but found %s" % node.id, node.start_mark)
for subnode in node.value:
if not isinstance(subnode, MappingNode):
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a mapping of length 1, but found %s" % subnode.id,
subnode.start_mark)
if len(subnode.value) != 1:
raise ConstructorError("while constructing pairs", node.start_mark,
"expected a single mapping item, but found %d items" % len(subnode.value),
subnode.start_mark)
key_node, value_node = subnode.value[0]
key = self.construct_object(key_node)
value = self.construct_object(value_node)
pairs.append((key, value))
def construct_yaml_set(self, node):
data = set()
yield data
value = self.construct_mapping(node)
data.update(value)
def construct_yaml_str(self, node):
value = self.construct_scalar(node)
try:
return value.encode('ascii')
except UnicodeEncodeError:
return value
def construct_yaml_seq(self, node):
data = []
yield data
data.extend(self.construct_sequence(node))
def construct_yaml_map(self, node):
data = {}
yield data
value = self.construct_mapping(node)
data.update(value)
def construct_yaml_object(self, node, cls):
data = cls.__new__(cls)
yield data
if hasattr(data, '__setstate__'):
state = self.construct_mapping(node, deep=True)
data.__setstate__(state)
else:
state = self.construct_mapping(node)
data.__dict__.update(state)
def construct_undefined(self, node):
raise ConstructorError(None, None,
"could not determine a constructor for the tag %r" % node.tag.encode('utf-8'),
node.start_mark)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:null',
SafeConstructor.construct_yaml_null)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:bool',
SafeConstructor.construct_yaml_bool)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:int',
SafeConstructor.construct_yaml_int)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:float',
SafeConstructor.construct_yaml_float)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:binary',
SafeConstructor.construct_yaml_binary)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:timestamp',
SafeConstructor.construct_yaml_timestamp)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:omap',
SafeConstructor.construct_yaml_omap)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:pairs',
SafeConstructor.construct_yaml_pairs)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:set',
SafeConstructor.construct_yaml_set)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:str',
SafeConstructor.construct_yaml_str)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:seq',
SafeConstructor.construct_yaml_seq)
SafeConstructor.add_constructor(
u'tag:yaml.org,2002:map',
SafeConstructor.construct_yaml_map)
SafeConstructor.add_constructor(None,
SafeConstructor.construct_undefined)
class Constructor(SafeConstructor):
def construct_python_str(self, node):
return self.construct_scalar(node).encode('utf-8')
def construct_python_unicode(self, node):
return self.construct_scalar(node)
def construct_python_long(self, node):
return long(self.construct_yaml_int(node))
def construct_python_complex(self, node):
return complex(self.construct_scalar(node))
def construct_python_tuple(self, node):
return tuple(self.construct_sequence(node))
def find_python_module(self, name, mark):
if not name:
raise ConstructorError("while constructing a Python module", mark,
"expected non-empty name appended to the tag", mark)
try:
__import__(name)
except ImportError, exc:
raise ConstructorError("while constructing a Python module", mark,
"cannot find module %r (%s)" % (name.encode('utf-8'), exc), mark)
return sys.modules[name]
def find_python_name(self, name, mark):
if not name:
raise ConstructorError("while constructing a Python object", mark,
"expected non-empty name appended to the tag", mark)
if u'.' in name:
module_name, object_name = name.rsplit('.', 1)
else:
module_name = '__builtin__'
object_name = name
try:
__import__(module_name)
except ImportError, exc:
raise ConstructorError("while constructing a Python object", mark,
"cannot find module %r (%s)" % (module_name.encode('utf-8'), exc), mark)
module = sys.modules[module_name]
if not hasattr(module, object_name):
raise ConstructorError("while constructing a Python object", mark,
"cannot find %r in the module %r" % (object_name.encode('utf-8'),
module.__name__), mark)
return getattr(module, object_name)
def construct_python_name(self, suffix, node):
value = self.construct_scalar(node)
if value:
raise ConstructorError("while constructing a Python name", node.start_mark,
"expected the empty value, but found %r" % value.encode('utf-8'),
node.start_mark)
return self.find_python_name(suffix, node.start_mark)
def construct_python_module(self, suffix, node):
value = self.construct_scalar(node)
if value:
raise ConstructorError("while constructing a Python module", node.start_mark,
"expected the empty value, but found %r" % value.encode('utf-8'),
node.start_mark)
return self.find_python_module(suffix, node.start_mark)
class classobj: pass
def make_python_instance(self, suffix, node,
args=None, kwds=None, newobj=False):
if not args:
args = []
if not kwds:
kwds = {}
cls = self.find_python_name(suffix, node.start_mark)
if newobj and isinstance(cls, type(self.classobj)) \
and not args and not kwds:
instance = self.classobj()
instance.__class__ = cls
return instance
elif newobj and isinstance(cls, type):
return cls.__new__(cls, *args, **kwds)
else:
return cls(*args, **kwds)
def set_python_instance_state(self, instance, state):
if hasattr(instance, '__setstate__'):
instance.__setstate__(state)
else:
slotstate = {}
if isinstance(state, tuple) and len(state) == 2:
state, slotstate = state
if hasattr(instance, '__dict__'):
instance.__dict__.update(state)
elif state:
slotstate.update(state)
for key, value in slotstate.items():
setattr(instance, key, value)
def construct_python_object(self, suffix, node):
# Format:
# !!python/object:module.name { ... state ... }
instance = self.make_python_instance(suffix, node, newobj=True)
yield instance
deep = hasattr(instance, '__setstate__')
state = self.construct_mapping(node, deep=deep)
self.set_python_instance_state(instance, state)
def construct_python_object_apply(self, suffix, node, newobj=False):
# Format:
# !!python/object/apply # (or !!python/object/new)
# args: [ ... arguments ... ]
# kwds: { ... keywords ... }
# state: ... state ...
# listitems: [ ... listitems ... ]
# dictitems: { ... dictitems ... }
# or short format:
# !!python/object/apply [ ... arguments ... ]
# The difference between !!python/object/apply and !!python/object/new
# is how an object is created, check make_python_instance for details.
if isinstance(node, SequenceNode):
args = self.construct_sequence(node, deep=True)
kwds = {}
state = {}
listitems = []
dictitems = {}
else:
value = self.construct_mapping(node, deep=True)
args = value.get('args', [])
kwds = value.get('kwds', {})
state = value.get('state', {})
listitems = value.get('listitems', [])
dictitems = value.get('dictitems', {})
instance = self.make_python_instance(suffix, node, args, kwds, newobj)
if state:
self.set_python_instance_state(instance, state)
if listitems:
instance.extend(listitems)
if dictitems:
for key in dictitems:
instance[key] = dictitems[key]
return instance
def construct_python_object_new(self, suffix, node):
return self.construct_python_object_apply(suffix, node, newobj=True)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/none',
Constructor.construct_yaml_null)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/bool',
Constructor.construct_yaml_bool)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/str',
Constructor.construct_python_str)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/unicode',
Constructor.construct_python_unicode)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/int',
Constructor.construct_yaml_int)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/long',
Constructor.construct_python_long)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/float',
Constructor.construct_yaml_float)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/complex',
Constructor.construct_python_complex)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/list',
Constructor.construct_yaml_seq)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/tuple',
Constructor.construct_python_tuple)
Constructor.add_constructor(
u'tag:yaml.org,2002:python/dict',
Constructor.construct_yaml_map)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/name:',
Constructor.construct_python_name)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/module:',
Constructor.construct_python_module)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/object:',
Constructor.construct_python_object)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/object/apply:',
Constructor.construct_python_object_apply)
Constructor.add_multi_constructor(
u'tag:yaml.org,2002:python/object/new:',
Constructor.construct_python_object_new)
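The scalar constructors above are what give YAML 1.1 its richer implicit types,
for example:

  import yaml
  yaml.safe_load('no')                    # False (YAML 1.1 boolean)
  yaml.safe_load('1:30:00')               # 5400 (base-60 integer)
  yaml.safe_load('2015-10-28 17:42:04')   # datetime.datetime(2015, 10, 28, 17, 42, 4)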

85
pylib/yaml/cyaml.py Normal file

@@ -0,0 +1,85 @@
__all__ = ['CBaseLoader', 'CSafeLoader', 'CLoader',
'CBaseDumper', 'CSafeDumper', 'CDumper']
from _yaml import CParser, CEmitter
from constructor import *
from serializer import *
from representer import *
from resolver import *
class CBaseLoader(CParser, BaseConstructor, BaseResolver):
def __init__(self, stream):
CParser.__init__(self, stream)
BaseConstructor.__init__(self)
BaseResolver.__init__(self)
class CSafeLoader(CParser, SafeConstructor, Resolver):
def __init__(self, stream):
CParser.__init__(self, stream)
SafeConstructor.__init__(self)
Resolver.__init__(self)
class CLoader(CParser, Constructor, Resolver):
def __init__(self, stream):
CParser.__init__(self, stream)
Constructor.__init__(self)
Resolver.__init__(self)
class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
CEmitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width, encoding=encoding,
allow_unicode=allow_unicode, line_break=line_break,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)
class CSafeDumper(CEmitter, SafeRepresenter, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
CEmitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width, encoding=encoding,
allow_unicode=allow_unicode, line_break=line_break,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
SafeRepresenter.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)
class CDumper(CEmitter, Serializer, Representer, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
CEmitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width, encoding=encoding,
allow_unicode=allow_unicode, line_break=line_break,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)

62
pylib/yaml/dumper.py Normal file

@@ -0,0 +1,62 @@
__all__ = ['BaseDumper', 'SafeDumper', 'Dumper']
from emitter import *
from serializer import *
from representer import *
from resolver import *
class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)
class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
SafeRepresenter.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)
class Dumper(Emitter, Serializer, Representer, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start, explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)

1140
pylib/yaml/emitter.py Normal file

Diff not shown because of its large size.

75
pylib/yaml/error.py Normal file

@@ -0,0 +1,75 @@
__all__ = ['Mark', 'YAMLError', 'MarkedYAMLError']
class Mark(object):
def __init__(self, name, index, line, column, buffer, pointer):
self.name = name
self.index = index
self.line = line
self.column = column
self.buffer = buffer
self.pointer = pointer
def get_snippet(self, indent=4, max_length=75):
if self.buffer is None:
return None
head = ''
start = self.pointer
while start > 0 and self.buffer[start-1] not in u'\0\r\n\x85\u2028\u2029':
start -= 1
if self.pointer-start > max_length/2-1:
head = ' ... '
start += 5
break
tail = ''
end = self.pointer
while end < len(self.buffer) and self.buffer[end] not in u'\0\r\n\x85\u2028\u2029':
end += 1
if end-self.pointer > max_length/2-1:
tail = ' ... '
end -= 5
break
snippet = self.buffer[start:end].encode('utf-8')
return ' '*indent + head + snippet + tail + '\n' \
+ ' '*(indent+self.pointer-start+len(head)) + '^'
def __str__(self):
snippet = self.get_snippet()
where = " in \"%s\", line %d, column %d" \
% (self.name, self.line+1, self.column+1)
if snippet is not None:
where += ":\n"+snippet
return where
class YAMLError(Exception):
pass
class MarkedYAMLError(YAMLError):
def __init__(self, context=None, context_mark=None,
problem=None, problem_mark=None, note=None):
self.context = context
self.context_mark = context_mark
self.problem = problem
self.problem_mark = problem_mark
self.note = note
def __str__(self):
lines = []
if self.context is not None:
lines.append(self.context)
if self.context_mark is not None \
and (self.problem is None or self.problem_mark is None
or self.context_mark.name != self.problem_mark.name
or self.context_mark.line != self.problem_mark.line
or self.context_mark.column != self.problem_mark.column):
lines.append(str(self.context_mark))
if self.problem is not None:
lines.append(self.problem)
if self.problem_mark is not None:
lines.append(str(self.problem_mark))
if self.note is not None:
lines.append(self.note)
return '\n'.join(lines)
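In practice these marks surface in the message of any MarkedYAMLError, e.g.:

  import yaml
  try:
      yaml.safe_load('a: [1, 2')
  except yaml.YAMLError, exc:
      print exc   # includes the offending line/column and a caret snippet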

86
pylib/yaml/events.py Normal file

@@ -0,0 +1,86 @@
# Abstract classes.
class Event(object):
def __init__(self, start_mark=None, end_mark=None):
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
attributes = [key for key in ['anchor', 'tag', 'implicit', 'value']
if hasattr(self, key)]
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
for key in attributes])
return '%s(%s)' % (self.__class__.__name__, arguments)
class NodeEvent(Event):
def __init__(self, anchor, start_mark=None, end_mark=None):
self.anchor = anchor
self.start_mark = start_mark
self.end_mark = end_mark
class CollectionStartEvent(NodeEvent):
def __init__(self, anchor, tag, implicit, start_mark=None, end_mark=None,
flow_style=None):
self.anchor = anchor
self.tag = tag
self.implicit = implicit
self.start_mark = start_mark
self.end_mark = end_mark
self.flow_style = flow_style
class CollectionEndEvent(Event):
pass
# Implementations.
class StreamStartEvent(Event):
def __init__(self, start_mark=None, end_mark=None, encoding=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.encoding = encoding
class StreamEndEvent(Event):
pass
class DocumentStartEvent(Event):
def __init__(self, start_mark=None, end_mark=None,
explicit=None, version=None, tags=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.explicit = explicit
self.version = version
self.tags = tags
class DocumentEndEvent(Event):
def __init__(self, start_mark=None, end_mark=None,
explicit=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.explicit = explicit
class AliasEvent(NodeEvent):
pass
class ScalarEvent(NodeEvent):
def __init__(self, anchor, tag, implicit, value,
start_mark=None, end_mark=None, style=None):
self.anchor = anchor
self.tag = tag
self.implicit = implicit
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.style = style
class SequenceStartEvent(CollectionStartEvent):
pass
class SequenceEndEvent(CollectionEndEvent):
pass
class MappingStartEvent(CollectionStartEvent):
pass
class MappingEndEvent(CollectionEndEvent):
pass
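These are the events yielded by yaml.parse() (defined in __init__.py above); a
quick way to inspect the stream:

  import yaml
  for event in yaml.parse('- a\n- b\n'):
      print event
  # StreamStartEvent(), DocumentStartEvent(), SequenceStartEvent(...),
  # ScalarEvent(... value=u'a'), ScalarEvent(... value=u'b'),
  # SequenceEndEvent(), DocumentEndEvent(), StreamEndEvent()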

40
pylib/yaml/loader.py Normal file

@@ -0,0 +1,40 @@
__all__ = ['BaseLoader', 'SafeLoader', 'Loader']
from reader import *
from scanner import *
from parser import *
from composer import *
from constructor import *
from resolver import *
class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, BaseResolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
BaseConstructor.__init__(self)
BaseResolver.__init__(self)
class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
SafeConstructor.__init__(self)
Resolver.__init__(self)
class Loader(Reader, Scanner, Parser, Composer, Constructor, Resolver):
def __init__(self, stream):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
Constructor.__init__(self)
Resolver.__init__(self)
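The practical difference between these mixin stacks: SafeLoader rejects
application-specific tags that the full Loader will construct, e.g.:

  import yaml
  yaml.load(u'!!python/tuple [1, 2]')        # (1, 2) via the full Loader
  yaml.safe_load(u'!!python/tuple [1, 2]')   # raises ConstructorError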

49
pylib/yaml/nodes.py Normal file

@@ -0,0 +1,49 @@
class Node(object):
def __init__(self, tag, value, start_mark, end_mark):
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
value = self.value
#if isinstance(value, list):
# if len(value) == 0:
# value = '<empty>'
# elif len(value) == 1:
# value = '<1 item>'
# else:
# value = '<%d items>' % len(value)
#else:
# if len(value) > 75:
# value = repr(value[:70]+u' ... ')
# else:
# value = repr(value)
value = repr(value)
return '%s(tag=%r, value=%s)' % (self.__class__.__name__, self.tag, value)
class ScalarNode(Node):
id = 'scalar'
def __init__(self, tag, value,
start_mark=None, end_mark=None, style=None):
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.style = style
class CollectionNode(Node):
def __init__(self, tag, value,
start_mark=None, end_mark=None, flow_style=None):
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.flow_style = flow_style
class SequenceNode(CollectionNode):
id = 'sequence'
class MappingNode(CollectionNode):
id = 'mapping'

589
pylib/yaml/parser.py Normal file

@@ -0,0 +1,589 @@
# The following YAML grammar is LL(1) and is parsed by a recursive descent
# parser.
#
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
# block_node_or_indentless_sequence ::=
# ALIAS
# | properties (block_content | indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# FIRST sets:
#
# stream: { STREAM-START }
# explicit_document: { DIRECTIVE DOCUMENT-START }
# implicit_document: FIRST(block_node)
# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_sequence: { BLOCK-SEQUENCE-START }
# block_mapping: { BLOCK-MAPPING-START }
# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START BLOCK-ENTRY }
# indentless_sequence: { ENTRY }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_sequence: { FLOW-SEQUENCE-START }
# flow_mapping: { FLOW-MAPPING-START }
# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START KEY }
__all__ = ['Parser', 'ParserError']
from error import MarkedYAMLError
from tokens import *
from events import *
from scanner import *
class ParserError(MarkedYAMLError):
pass
class Parser(object):
# Since writing a recursive descent parser is a straightforward task, we
# do not give many comments here.
DEFAULT_TAGS = {
u'!': u'!',
u'!!': u'tag:yaml.org,2002:',
}
def __init__(self):
self.current_event = None
self.yaml_version = None
self.tag_handles = {}
self.states = []
self.marks = []
self.state = self.parse_stream_start
def dispose(self):
# Reset the state attributes (to clear self-references)
self.states = []
self.state = None
def check_event(self, *choices):
# Check the type of the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
if self.current_event is not None:
if not choices:
return True
for choice in choices:
if isinstance(self.current_event, choice):
return True
return False
def peek_event(self):
# Get the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
return self.current_event
def get_event(self):
# Get the next event and proceed further.
if self.current_event is None:
if self.state:
self.current_event = self.state()
value = self.current_event
self.current_event = None
return value
# stream ::= STREAM-START implicit_document? explicit_document* STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
def parse_stream_start(self):
# Parse the stream start.
token = self.get_token()
event = StreamStartEvent(token.start_mark, token.end_mark,
encoding=token.encoding)
# Prepare the next state.
self.state = self.parse_implicit_document_start
return event
def parse_implicit_document_start(self):
# Parse an implicit document.
if not self.check_token(DirectiveToken, DocumentStartToken,
StreamEndToken):
self.tag_handles = self.DEFAULT_TAGS
token = self.peek_token()
start_mark = end_mark = token.start_mark
event = DocumentStartEvent(start_mark, end_mark,
explicit=False)
# Prepare the next state.
self.states.append(self.parse_document_end)
self.state = self.parse_block_node
return event
else:
return self.parse_document_start()
def parse_document_start(self):
# Parse any extra document end indicators.
while self.check_token(DocumentEndToken):
self.get_token()
# Parse an explicit document.
if not self.check_token(StreamEndToken):
token = self.peek_token()
start_mark = token.start_mark
version, tags = self.process_directives()
if not self.check_token(DocumentStartToken):
raise ParserError(None, None,
"expected '<document start>', but found %r"
% self.peek_token().id,
self.peek_token().start_mark)
token = self.get_token()
end_mark = token.end_mark
event = DocumentStartEvent(start_mark, end_mark,
explicit=True, version=version, tags=tags)
self.states.append(self.parse_document_end)
self.state = self.parse_document_content
else:
# Parse the end of the stream.
token = self.get_token()
event = StreamEndEvent(token.start_mark, token.end_mark)
assert not self.states
assert not self.marks
self.state = None
return event
def parse_document_end(self):
# Parse the document end.
token = self.peek_token()
start_mark = end_mark = token.start_mark
explicit = False
if self.check_token(DocumentEndToken):
token = self.get_token()
end_mark = token.end_mark
explicit = True
event = DocumentEndEvent(start_mark, end_mark,
explicit=explicit)
# Prepare the next state.
self.state = self.parse_document_start
return event
def parse_document_content(self):
if self.check_token(DirectiveToken,
DocumentStartToken, DocumentEndToken, StreamEndToken):
event = self.process_empty_scalar(self.peek_token().start_mark)
self.state = self.states.pop()
return event
else:
return self.parse_block_node()
def process_directives(self):
self.yaml_version = None
self.tag_handles = {}
while self.check_token(DirectiveToken):
token = self.get_token()
if token.name == u'YAML':
if self.yaml_version is not None:
raise ParserError(None, None,
"found duplicate YAML directive", token.start_mark)
major, minor = token.value
if major != 1:
raise ParserError(None, None,
"found incompatible YAML document (version 1.* is required)",
token.start_mark)
self.yaml_version = token.value
elif token.name == u'TAG':
handle, prefix = token.value
if handle in self.tag_handles:
raise ParserError(None, None,
"duplicate tag handle %r" % handle.encode('utf-8'),
token.start_mark)
self.tag_handles[handle] = prefix
if self.tag_handles:
value = self.yaml_version, self.tag_handles.copy()
else:
value = self.yaml_version, None
for key in self.DEFAULT_TAGS:
if key not in self.tag_handles:
self.tag_handles[key] = self.DEFAULT_TAGS[key]
return value
# block_node_or_indentless_sequence ::= ALIAS
# | properties (block_content | indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
def parse_block_node(self):
return self.parse_node(block=True)
def parse_flow_node(self):
return self.parse_node()
def parse_block_node_or_indentless_sequence(self):
return self.parse_node(block=True, indentless_sequence=True)
def parse_node(self, block=False, indentless_sequence=False):
if self.check_token(AliasToken):
token = self.get_token()
event = AliasEvent(token.value, token.start_mark, token.end_mark)
self.state = self.states.pop()
else:
anchor = None
tag = None
start_mark = end_mark = tag_mark = None
if self.check_token(AnchorToken):
token = self.get_token()
start_mark = token.start_mark
end_mark = token.end_mark
anchor = token.value
if self.check_token(TagToken):
token = self.get_token()
tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
elif self.check_token(TagToken):
token = self.get_token()
start_mark = tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
if self.check_token(AnchorToken):
token = self.get_token()
end_mark = token.end_mark
anchor = token.value
if tag is not None:
handle, suffix = tag
if handle is not None:
if handle not in self.tag_handles:
raise ParserError("while parsing a node", start_mark,
"found undefined tag handle %r" % handle.encode('utf-8'),
tag_mark)
tag = self.tag_handles[handle]+suffix
else:
tag = suffix
#if tag == u'!':
# raise ParserError("while parsing a node", start_mark,
# "found non-specific tag '!'", tag_mark,
# "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag' and share your opinion.")
if start_mark is None:
start_mark = end_mark = self.peek_token().start_mark
event = None
implicit = (tag is None or tag == u'!')
if indentless_sequence and self.check_token(BlockEntryToken):
end_mark = self.peek_token().end_mark
event = SequenceStartEvent(anchor, tag, implicit,
start_mark, end_mark)
self.state = self.parse_indentless_sequence_entry
else:
if self.check_token(ScalarToken):
token = self.get_token()
end_mark = token.end_mark
if (token.plain and tag is None) or tag == u'!':
implicit = (True, False)
elif tag is None:
implicit = (False, True)
else:
implicit = (False, False)
event = ScalarEvent(anchor, tag, implicit, token.value,
start_mark, end_mark, style=token.style)
self.state = self.states.pop()
elif self.check_token(FlowSequenceStartToken):
end_mark = self.peek_token().end_mark
event = SequenceStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=True)
self.state = self.parse_flow_sequence_first_entry
elif self.check_token(FlowMappingStartToken):
end_mark = self.peek_token().end_mark
event = MappingStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=True)
self.state = self.parse_flow_mapping_first_key
elif block and self.check_token(BlockSequenceStartToken):
end_mark = self.peek_token().start_mark
event = SequenceStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=False)
self.state = self.parse_block_sequence_first_entry
elif block and self.check_token(BlockMappingStartToken):
end_mark = self.peek_token().start_mark
event = MappingStartEvent(anchor, tag, implicit,
start_mark, end_mark, flow_style=False)
self.state = self.parse_block_mapping_first_key
elif anchor is not None or tag is not None:
# Empty scalars are allowed even if a tag or an anchor is
# specified.
event = ScalarEvent(anchor, tag, (implicit, False), u'',
start_mark, end_mark)
self.state = self.states.pop()
else:
if block:
node = 'block'
else:
node = 'flow'
token = self.peek_token()
raise ParserError("while parsing a %s node" % node, start_mark,
"expected the node content, but found %r" % token.id,
token.start_mark)
return event
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END
def parse_block_sequence_first_entry(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_block_sequence_entry()
def parse_block_sequence_entry(self):
if self.check_token(BlockEntryToken):
token = self.get_token()
if not self.check_token(BlockEntryToken, BlockEndToken):
self.states.append(self.parse_block_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_block_sequence_entry
return self.process_empty_scalar(token.end_mark)
if not self.check_token(BlockEndToken):
token = self.peek_token()
raise ParserError("while parsing a block collection", self.marks[-1],
"expected <block end>, but found %r" % token.id, token.start_mark)
token = self.get_token()
event = SequenceEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
def parse_indentless_sequence_entry(self):
if self.check_token(BlockEntryToken):
token = self.get_token()
if not self.check_token(BlockEntryToken,
KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_indentless_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_indentless_sequence_entry
return self.process_empty_scalar(token.end_mark)
token = self.peek_token()
event = SequenceEndEvent(token.start_mark, token.start_mark)
self.state = self.states.pop()
return event
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
def parse_block_mapping_first_key(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_block_mapping_key()
def parse_block_mapping_key(self):
if self.check_token(KeyToken):
token = self.get_token()
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_value)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_value
return self.process_empty_scalar(token.end_mark)
if not self.check_token(BlockEndToken):
token = self.peek_token()
raise ParserError("while parsing a block mapping", self.marks[-1],
"expected <block end>, but found %r" % token.id, token.start_mark)
token = self.get_token()
event = MappingEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_block_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_key)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_key
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_block_mapping_key
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# Note that while production rules for both flow_sequence_entry and
# flow_mapping_entry are equal, their interpretations are different.
# For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
# generates an inline mapping (set syntax).
def parse_flow_sequence_first_entry(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_sequence_entry(first=True)
def parse_flow_sequence_entry(self, first=False):
if not self.check_token(FlowSequenceEndToken):
if not first:
if self.check_token(FlowEntryToken):
self.get_token()
else:
token = self.peek_token()
raise ParserError("while parsing a flow sequence", self.marks[-1],
"expected ',' or ']', but got %r" % token.id, token.start_mark)
if self.check_token(KeyToken):
token = self.peek_token()
event = MappingStartEvent(None, None, True,
token.start_mark, token.end_mark,
flow_style=True)
self.state = self.parse_flow_sequence_entry_mapping_key
return event
elif not self.check_token(FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry)
return self.parse_flow_node()
token = self.get_token()
event = SequenceEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_sequence_entry_mapping_key(self):
token = self.get_token()
if not self.check_token(ValueToken,
FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_value
return self.process_empty_scalar(token.end_mark)
def parse_flow_sequence_entry_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_end)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_end
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_sequence_entry_mapping_end
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_sequence_entry_mapping_end(self):
self.state = self.parse_flow_sequence_entry
token = self.peek_token()
return MappingEndEvent(token.start_mark, token.start_mark)
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
def parse_flow_mapping_first_key(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_mapping_key(first=True)
def parse_flow_mapping_key(self, first=False):
if not self.check_token(FlowMappingEndToken):
if not first:
if self.check_token(FlowEntryToken):
self.get_token()
else:
token = self.peek_token()
raise ParserError("while parsing a flow mapping", self.marks[-1],
"expected ',' or '}', but got %r" % token.id, token.start_mark)
if self.check_token(KeyToken):
token = self.get_token()
if not self.check_token(ValueToken,
FlowEntryToken, FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_value
return self.process_empty_scalar(token.end_mark)
elif not self.check_token(FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_empty_value)
return self.parse_flow_node()
token = self.get_token()
event = MappingEndEvent(token.start_mark, token.end_mark)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(FlowEntryToken, FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_key)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_mapping_key
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_mapping_empty_value(self):
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(self.peek_token().start_mark)
def process_empty_scalar(self, mark):
return ScalarEvent(None, None, (True, False), u'', mark, mark)

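The grammar productions in the comments above drive the parser's state
machine; each state consumes tokens and returns an event. A minimal sketch of
the resulting event stream, assuming this vendored copy is importable as
`yaml` (e.g. with PYTHONPATH pointing at pylib):

  import yaml

  # parse() runs the scanner and parser and yields the events the parser
  # states above return, one at a time.
  for event in yaml.parse('a: [1, 2]'):
      print(event)
  # StreamStartEvent, DocumentStartEvent, MappingStartEvent,
  # ScalarEvent(u'a'), SequenceStartEvent, ScalarEvent(u'1'),
  # ScalarEvent(u'2'), SequenceEndEvent, MappingEndEvent,
  # DocumentEndEvent, StreamEndEvent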
pylib/yaml/reader.py Normal file

@@ -0,0 +1,190 @@
# This module contains abstractions for the input stream. You don't have to
# look further; there is no pretty code.
#
# We define two classes here.
#
# Mark(source, line, column)
# It's just a record and its only use is producing nice error messages.
# Parser does not use it for any other purposes.
#
# Reader(source, data)
# Reader determines the encoding of `data` and converts it to unicode.
# Reader provides the following methods and attributes:
# reader.peek(index=0) - return the character `index` positions ahead.
# reader.prefix(length=1) - return the next `length` characters.
# reader.forward(length=1) - move the current position forward by `length` characters.
# reader.index - the number of the current character.
# reader.line, reader.column - the line and the column of the current character.
__all__ = ['Reader', 'ReaderError']
from error import YAMLError, Mark
import codecs, re
class ReaderError(YAMLError):
def __init__(self, name, position, character, encoding, reason):
self.name = name
self.character = character
self.position = position
self.encoding = encoding
self.reason = reason
def __str__(self):
if isinstance(self.character, str):
return "'%s' codec can't decode byte #x%02x: %s\n" \
" in \"%s\", position %d" \
% (self.encoding, ord(self.character), self.reason,
self.name, self.position)
else:
return "unacceptable character #x%04x: %s\n" \
" in \"%s\", position %d" \
% (self.character, self.reason,
self.name, self.position)
class Reader(object):
# Reader:
# - determines the data encoding and converts it to unicode,
# - checks if characters are in allowed range,
# - adds '\0' to the end.
# Reader accepts
# - a `str` object,
# - a `unicode` object,
# - a file-like object with its `read` method returning `str`,
# - a file-like object with its `read` method returning `unicode`.
# Yeah, it's ugly and slow.
def __init__(self, stream):
self.name = None
self.stream = None
self.stream_pointer = 0
self.eof = True
self.buffer = u''
self.pointer = 0
self.raw_buffer = None
self.raw_decode = None
self.encoding = None
self.index = 0
self.line = 0
self.column = 0
if isinstance(stream, unicode):
self.name = "<unicode string>"
self.check_printable(stream)
self.buffer = stream+u'\0'
elif isinstance(stream, str):
self.name = "<string>"
self.raw_buffer = stream
self.determine_encoding()
else:
self.stream = stream
self.name = getattr(stream, 'name', "<file>")
self.eof = False
self.raw_buffer = ''
self.determine_encoding()
def peek(self, index=0):
try:
return self.buffer[self.pointer+index]
except IndexError:
self.update(index+1)
return self.buffer[self.pointer+index]
def prefix(self, length=1):
if self.pointer+length >= len(self.buffer):
self.update(length)
return self.buffer[self.pointer:self.pointer+length]
def forward(self, length=1):
if self.pointer+length+1 >= len(self.buffer):
self.update(length+1)
while length:
ch = self.buffer[self.pointer]
self.pointer += 1
self.index += 1
if ch in u'\n\x85\u2028\u2029' \
or (ch == u'\r' and self.buffer[self.pointer] != u'\n'):
self.line += 1
self.column = 0
elif ch != u'\uFEFF':
self.column += 1
length -= 1
def get_mark(self):
if self.stream is None:
return Mark(self.name, self.index, self.line, self.column,
self.buffer, self.pointer)
else:
return Mark(self.name, self.index, self.line, self.column,
None, None)
def determine_encoding(self):
while not self.eof and len(self.raw_buffer) < 2:
self.update_raw()
if not isinstance(self.raw_buffer, unicode):
if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
self.raw_decode = codecs.utf_16_le_decode
self.encoding = 'utf-16-le'
elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
self.raw_decode = codecs.utf_16_be_decode
self.encoding = 'utf-16-be'
else:
self.raw_decode = codecs.utf_8_decode
self.encoding = 'utf-8'
self.update(1)
NON_PRINTABLE = re.compile(u'[^\x09\x0A\x0D\x20-\x7E\x85\xA0-\uD7FF\uE000-\uFFFD]')
def check_printable(self, data):
match = self.NON_PRINTABLE.search(data)
if match:
character = match.group()
position = self.index+(len(self.buffer)-self.pointer)+match.start()
raise ReaderError(self.name, position, ord(character),
'unicode', "special characters are not allowed")
def update(self, length):
if self.raw_buffer is None:
return
self.buffer = self.buffer[self.pointer:]
self.pointer = 0
while len(self.buffer) < length:
if not self.eof:
self.update_raw()
if self.raw_decode is not None:
try:
data, converted = self.raw_decode(self.raw_buffer,
'strict', self.eof)
except UnicodeDecodeError, exc:
character = exc.object[exc.start]
if self.stream is not None:
position = self.stream_pointer-len(self.raw_buffer)+exc.start
else:
position = exc.start
raise ReaderError(self.name, position, character,
exc.encoding, exc.reason)
else:
data = self.raw_buffer
converted = len(data)
self.check_printable(data)
self.buffer += data
self.raw_buffer = self.raw_buffer[converted:]
if self.eof:
self.buffer += u'\0'
self.raw_buffer = None
break
def update_raw(self, size=1024):
data = self.stream.read(size)
if data:
self.raw_buffer += data
self.stream_pointer += len(data)
else:
self.eof = True
#try:
# import psyco
# psyco.bind(Reader)
#except ImportError:
# pass

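A usage sketch of the Reader API documented above (peek/prefix/forward plus
the index/line/column bookkeeping used to build Marks); the input string is
just an example:

  from yaml.reader import Reader

  reader = Reader(u'key: value\n')
  print(reader.peek())                 # u'k' - current character, not consumed
  print(reader.prefix(3))              # u'key' - lookahead without consuming
  reader.forward(5)                    # consume 'key: '
  print((reader.line, reader.column))  # (0, 5) - position used for error Marks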
pylib/yaml/representer.py Normal file

@@ -0,0 +1,484 @@
__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
'RepresenterError']
from error import *
from nodes import *
import datetime
import sys, copy_reg, types
class RepresenterError(YAMLError):
pass
class BaseRepresenter(object):
yaml_representers = {}
yaml_multi_representers = {}
def __init__(self, default_style=None, default_flow_style=None):
self.default_style = default_style
self.default_flow_style = default_flow_style
self.represented_objects = {}
self.object_keeper = []
self.alias_key = None
def represent(self, data):
node = self.represent_data(data)
self.serialize(node)
self.represented_objects = {}
self.object_keeper = []
self.alias_key = None
def get_classobj_bases(self, cls):
bases = [cls]
for base in cls.__bases__:
bases.extend(self.get_classobj_bases(base))
return bases
def represent_data(self, data):
if self.ignore_aliases(data):
self.alias_key = None
else:
self.alias_key = id(data)
if self.alias_key is not None:
if self.alias_key in self.represented_objects:
node = self.represented_objects[self.alias_key]
#if node is None:
# raise RepresenterError("recursive objects are not allowed: %r" % data)
return node
#self.represented_objects[alias_key] = None
self.object_keeper.append(data)
data_types = type(data).__mro__
if type(data) is types.InstanceType:
data_types = self.get_classobj_bases(data.__class__)+list(data_types)
if data_types[0] in self.yaml_representers:
node = self.yaml_representers[data_types[0]](self, data)
else:
for data_type in data_types:
if data_type in self.yaml_multi_representers:
node = self.yaml_multi_representers[data_type](self, data)
break
else:
if None in self.yaml_multi_representers:
node = self.yaml_multi_representers[None](self, data)
elif None in self.yaml_representers:
node = self.yaml_representers[None](self, data)
else:
node = ScalarNode(None, unicode(data))
#if alias_key is not None:
# self.represented_objects[alias_key] = node
return node
def add_representer(cls, data_type, representer):
if not 'yaml_representers' in cls.__dict__:
cls.yaml_representers = cls.yaml_representers.copy()
cls.yaml_representers[data_type] = representer
add_representer = classmethod(add_representer)
def add_multi_representer(cls, data_type, representer):
if not 'yaml_multi_representers' in cls.__dict__:
cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
cls.yaml_multi_representers[data_type] = representer
add_multi_representer = classmethod(add_multi_representer)
def represent_scalar(self, tag, value, style=None):
if style is None:
style = self.default_style
node = ScalarNode(tag, value, style=style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
def represent_sequence(self, tag, sequence, flow_style=None):
value = []
node = SequenceNode(tag, value, flow_style=flow_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
for item in sequence:
node_item = self.represent_data(item)
if not (isinstance(node_item, ScalarNode) and not node_item.style):
best_style = False
value.append(node_item)
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def represent_mapping(self, tag, mapping, flow_style=None):
value = []
node = MappingNode(tag, value, flow_style=flow_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
if hasattr(mapping, 'items'):
mapping = mapping.items()
mapping.sort()
for item_key, item_value in mapping:
node_key = self.represent_data(item_key)
node_value = self.represent_data(item_value)
if not (isinstance(node_key, ScalarNode) and not node_key.style):
best_style = False
if not (isinstance(node_value, ScalarNode) and not node_value.style):
best_style = False
value.append((node_key, node_value))
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def ignore_aliases(self, data):
return False
class SafeRepresenter(BaseRepresenter):
def ignore_aliases(self, data):
if data in [None, ()]:
return True
if isinstance(data, (str, unicode, bool, int, float)):
return True
def represent_none(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:null',
u'null')
def represent_str(self, data):
tag = None
style = None
try:
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
except UnicodeDecodeError:
try:
data = unicode(data, 'utf-8')
tag = u'tag:yaml.org,2002:str'
except UnicodeDecodeError:
data = data.encode('base64')
tag = u'tag:yaml.org,2002:binary'
style = '|'
return self.represent_scalar(tag, data, style=style)
def represent_unicode(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:str', data)
def represent_bool(self, data):
if data:
value = u'true'
else:
value = u'false'
return self.represent_scalar(u'tag:yaml.org,2002:bool', value)
def represent_int(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:int', unicode(data))
def represent_long(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:int', unicode(data))
inf_value = 1e300
while repr(inf_value) != repr(inf_value*inf_value):
inf_value *= inf_value
def represent_float(self, data):
if data != data or (data == 0.0 and data == 1.0):
value = u'.nan'
elif data == self.inf_value:
value = u'.inf'
elif data == -self.inf_value:
value = u'-.inf'
else:
value = unicode(repr(data)).lower()
# Note that in some cases `repr(data)` represents a float number
# without the decimal parts. For instance:
# >>> repr(1e17)
# '1e17'
# Unfortunately, this is not a valid float representation according
# to the definition of the `!!float` tag. We fix this by adding
# '.0' before the 'e' symbol.
if u'.' not in value and u'e' in value:
value = value.replace(u'e', u'.0e', 1)
return self.represent_scalar(u'tag:yaml.org,2002:float', value)
def represent_list(self, data):
#pairs = (len(data) > 0 and isinstance(data, list))
#if pairs:
# for item in data:
# if not isinstance(item, tuple) or len(item) != 2:
# pairs = False
# break
#if not pairs:
return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
#value = []
#for item_key, item_value in data:
# value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
# [(item_key, item_value)]))
#return SequenceNode(u'tag:yaml.org,2002:pairs', value)
def represent_dict(self, data):
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
def represent_set(self, data):
value = {}
for key in data:
value[key] = None
return self.represent_mapping(u'tag:yaml.org,2002:set', value)
def represent_date(self, data):
value = unicode(data.isoformat())
return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
def represent_datetime(self, data):
value = unicode(data.isoformat(' '))
return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
def represent_yaml_object(self, tag, data, cls, flow_style=None):
if hasattr(data, '__getstate__'):
state = data.__getstate__()
else:
state = data.__dict__.copy()
return self.represent_mapping(tag, state, flow_style=flow_style)
def represent_undefined(self, data):
raise RepresenterError("cannot represent an object: %s" % data)
SafeRepresenter.add_representer(type(None),
SafeRepresenter.represent_none)
SafeRepresenter.add_representer(str,
SafeRepresenter.represent_str)
SafeRepresenter.add_representer(unicode,
SafeRepresenter.represent_unicode)
SafeRepresenter.add_representer(bool,
SafeRepresenter.represent_bool)
SafeRepresenter.add_representer(int,
SafeRepresenter.represent_int)
SafeRepresenter.add_representer(long,
SafeRepresenter.represent_long)
SafeRepresenter.add_representer(float,
SafeRepresenter.represent_float)
SafeRepresenter.add_representer(list,
SafeRepresenter.represent_list)
SafeRepresenter.add_representer(tuple,
SafeRepresenter.represent_list)
SafeRepresenter.add_representer(dict,
SafeRepresenter.represent_dict)
SafeRepresenter.add_representer(set,
SafeRepresenter.represent_set)
SafeRepresenter.add_representer(datetime.date,
SafeRepresenter.represent_date)
SafeRepresenter.add_representer(datetime.datetime,
SafeRepresenter.represent_datetime)
SafeRepresenter.add_representer(None,
SafeRepresenter.represent_undefined)
class Representer(SafeRepresenter):
def represent_str(self, data):
tag = None
style = None
try:
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
except UnicodeDecodeError:
try:
data = unicode(data, 'utf-8')
tag = u'tag:yaml.org,2002:python/str'
except UnicodeDecodeError:
data = data.encode('base64')
tag = u'tag:yaml.org,2002:binary'
style = '|'
return self.represent_scalar(tag, data, style=style)
def represent_unicode(self, data):
tag = None
try:
data.encode('ascii')
tag = u'tag:yaml.org,2002:python/unicode'
except UnicodeEncodeError:
tag = u'tag:yaml.org,2002:str'
return self.represent_scalar(tag, data)
def represent_long(self, data):
tag = u'tag:yaml.org,2002:int'
if int(data) is not data:
tag = u'tag:yaml.org,2002:python/long'
return self.represent_scalar(tag, unicode(data))
def represent_complex(self, data):
if data.imag == 0.0:
data = u'%r' % data.real
elif data.real == 0.0:
data = u'%rj' % data.imag
elif data.imag > 0:
data = u'%r+%rj' % (data.real, data.imag)
else:
data = u'%r%rj' % (data.real, data.imag)
return self.represent_scalar(u'tag:yaml.org,2002:python/complex', data)
def represent_tuple(self, data):
return self.represent_sequence(u'tag:yaml.org,2002:python/tuple', data)
def represent_name(self, data):
name = u'%s.%s' % (data.__module__, data.__name__)
return self.represent_scalar(u'tag:yaml.org,2002:python/name:'+name, u'')
def represent_module(self, data):
return self.represent_scalar(
u'tag:yaml.org,2002:python/module:'+data.__name__, u'')
def represent_instance(self, data):
# For instances of classic classes, we use __getinitargs__ and
# __getstate__ to serialize the data.
# If data.__getinitargs__ exists, the object must be reconstructed by
# calling cls(**args), where args is a tuple returned by
# __getinitargs__. Otherwise, the cls.__init__ method should never be
# called and the class instance is created by instantiating a trivial
# class and assigning to the instance's __class__ variable.
# If data.__getstate__ exists, it returns the state of the object.
# Otherwise, the state of the object is data.__dict__.
# We produce either a !!python/object or !!python/object/new node.
# If data.__getinitargs__ does not exist and state is a dictionary, we
# produce a !!python/object node. Otherwise we produce a
# !!python/object/new node.
cls = data.__class__
class_name = u'%s.%s' % (cls.__module__, cls.__name__)
args = None
state = None
if hasattr(data, '__getinitargs__'):
args = list(data.__getinitargs__())
if hasattr(data, '__getstate__'):
state = data.__getstate__()
else:
state = data.__dict__
if args is None and isinstance(state, dict):
return self.represent_mapping(
u'tag:yaml.org,2002:python/object:'+class_name, state)
if isinstance(state, dict) and not state:
return self.represent_sequence(
u'tag:yaml.org,2002:python/object/new:'+class_name, args)
value = {}
if args:
value['args'] = args
value['state'] = state
return self.represent_mapping(
u'tag:yaml.org,2002:python/object/new:'+class_name, value)
def represent_object(self, data):
# We use __reduce__ API to save the data. data.__reduce__ returns
# a tuple of length 2-5:
# (function, args, state, listitems, dictitems)
# For reconstructing, we call function(*args), then set its state,
# listitems, and dictitems if they are not None.
# A special case is when function.__name__ == '__newobj__'. In this
# case we create the object with args[0].__new__(*args).
# Another special case is when __reduce__ returns a string - we don't
# support it.
# We produce a !!python/object, !!python/object/new or
# !!python/object/apply node.
cls = type(data)
if cls in copy_reg.dispatch_table:
reduce = copy_reg.dispatch_table[cls](data)
elif hasattr(data, '__reduce_ex__'):
reduce = data.__reduce_ex__(2)
elif hasattr(data, '__reduce__'):
reduce = data.__reduce__()
else:
raise RepresenterError("cannot represent object: %r" % data)
reduce = (list(reduce)+[None]*5)[:5]
function, args, state, listitems, dictitems = reduce
args = list(args)
if state is None:
state = {}
if listitems is not None:
listitems = list(listitems)
if dictitems is not None:
dictitems = dict(dictitems)
if function.__name__ == '__newobj__':
function = args[0]
args = args[1:]
tag = u'tag:yaml.org,2002:python/object/new:'
newobj = True
else:
tag = u'tag:yaml.org,2002:python/object/apply:'
newobj = False
function_name = u'%s.%s' % (function.__module__, function.__name__)
if not args and not listitems and not dictitems \
and isinstance(state, dict) and newobj:
return self.represent_mapping(
u'tag:yaml.org,2002:python/object:'+function_name, state)
if not listitems and not dictitems \
and isinstance(state, dict) and not state:
return self.represent_sequence(tag+function_name, args)
value = {}
if args:
value['args'] = args
if state or not isinstance(state, dict):
value['state'] = state
if listitems:
value['listitems'] = listitems
if dictitems:
value['dictitems'] = dictitems
return self.represent_mapping(tag+function_name, value)
Representer.add_representer(str,
Representer.represent_str)
Representer.add_representer(unicode,
Representer.represent_unicode)
Representer.add_representer(long,
Representer.represent_long)
Representer.add_representer(complex,
Representer.represent_complex)
Representer.add_representer(tuple,
Representer.represent_tuple)
Representer.add_representer(type,
Representer.represent_name)
Representer.add_representer(types.ClassType,
Representer.represent_name)
Representer.add_representer(types.FunctionType,
Representer.represent_name)
Representer.add_representer(types.BuiltinFunctionType,
Representer.represent_name)
Representer.add_representer(types.ModuleType,
Representer.represent_module)
Representer.add_multi_representer(types.InstanceType,
Representer.represent_instance)
Representer.add_multi_representer(object,
Representer.represent_object)

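The add_representer() hook above is how application types get mapped onto
YAML nodes. A sketch with a hypothetical Point class and !point tag:

  import yaml

  class Point(object):
      def __init__(self, x, y):
          self.x = x
          self.y = y

  def represent_point(dumper, point):
      # Delegating to represent_mapping() keeps aliasing and style handling.
      return dumper.represent_mapping(u'!point', {'x': point.x, 'y': point.y})

  yaml.add_representer(Point, represent_point)
  print(yaml.dump(Point(1, 2)))  # !point {x: 1, y: 2}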
pylib/yaml/resolver.py Normal file

@@ -0,0 +1,224 @@
__all__ = ['BaseResolver', 'Resolver']
from error import *
from nodes import *
import re
class ResolverError(YAMLError):
pass
class BaseResolver(object):
DEFAULT_SCALAR_TAG = u'tag:yaml.org,2002:str'
DEFAULT_SEQUENCE_TAG = u'tag:yaml.org,2002:seq'
DEFAULT_MAPPING_TAG = u'tag:yaml.org,2002:map'
yaml_implicit_resolvers = {}
yaml_path_resolvers = {}
def __init__(self):
self.resolver_exact_paths = []
self.resolver_prefix_paths = []
def add_implicit_resolver(cls, tag, regexp, first):
if not 'yaml_implicit_resolvers' in cls.__dict__:
cls.yaml_implicit_resolvers = cls.yaml_implicit_resolvers.copy()
if first is None:
first = [None]
for ch in first:
cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
add_implicit_resolver = classmethod(add_implicit_resolver)
def add_path_resolver(cls, tag, path, kind=None):
# Note: `add_path_resolver` is experimental. The API could be changed.
# `new_path` is a pattern that is matched against the path from the
# root to the node that is being considered. `node_path` elements are
# tuples `(node_check, index_check)`. `node_check` is a node class:
# `ScalarNode`, `SequenceNode`, `MappingNode` or `None`. `None`
# matches any kind of a node. `index_check` could be `None`, a boolean
# value, a string value, or a number. `None` and `False` match against
# any _value_ of sequence and mapping nodes. `True` matches against
# any _key_ of a mapping node. A string `index_check` matches against
# a mapping value that corresponds to a scalar key whose content is
# equal to the `index_check` value. An integer `index_check` matches
# against a sequence value with the index equal to `index_check`.
if not 'yaml_path_resolvers' in cls.__dict__:
cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
new_path = []
for element in path:
if isinstance(element, (list, tuple)):
if len(element) == 2:
node_check, index_check = element
elif len(element) == 1:
node_check = element[0]
index_check = True
else:
raise ResolverError("Invalid path element: %s" % element)
else:
node_check = None
index_check = element
if node_check is str:
node_check = ScalarNode
elif node_check is list:
node_check = SequenceNode
elif node_check is dict:
node_check = MappingNode
elif node_check not in [ScalarNode, SequenceNode, MappingNode] \
and not isinstance(node_check, basestring) \
and node_check is not None:
raise ResolverError("Invalid node checker: %s" % node_check)
if not isinstance(index_check, (basestring, int)) \
and index_check is not None:
raise ResolverError("Invalid index checker: %s" % index_check)
new_path.append((node_check, index_check))
if kind is str:
kind = ScalarNode
elif kind is list:
kind = SequenceNode
elif kind is dict:
kind = MappingNode
elif kind not in [ScalarNode, SequenceNode, MappingNode] \
and kind is not None:
raise ResolverError("Invalid node kind: %s" % kind)
cls.yaml_path_resolvers[tuple(new_path), kind] = tag
add_path_resolver = classmethod(add_path_resolver)
def descend_resolver(self, current_node, current_index):
if not self.yaml_path_resolvers:
return
exact_paths = {}
prefix_paths = []
if current_node:
depth = len(self.resolver_prefix_paths)
for path, kind in self.resolver_prefix_paths[-1]:
if self.check_resolver_prefix(depth, path, kind,
current_node, current_index):
if len(path) > depth:
prefix_paths.append((path, kind))
else:
exact_paths[kind] = self.yaml_path_resolvers[path, kind]
else:
for path, kind in self.yaml_path_resolvers:
if not path:
exact_paths[kind] = self.yaml_path_resolvers[path, kind]
else:
prefix_paths.append((path, kind))
self.resolver_exact_paths.append(exact_paths)
self.resolver_prefix_paths.append(prefix_paths)
def ascend_resolver(self):
if not self.yaml_path_resolvers:
return
self.resolver_exact_paths.pop()
self.resolver_prefix_paths.pop()
def check_resolver_prefix(self, depth, path, kind,
current_node, current_index):
node_check, index_check = path[depth-1]
if isinstance(node_check, basestring):
if current_node.tag != node_check:
return
elif node_check is not None:
if not isinstance(current_node, node_check):
return
if index_check is True and current_index is not None:
return
if (index_check is False or index_check is None) \
and current_index is None:
return
if isinstance(index_check, basestring):
if not (isinstance(current_index, ScalarNode)
and index_check == current_index.value):
return
elif isinstance(index_check, int) and not isinstance(index_check, bool):
if index_check != current_index:
return
return True
def resolve(self, kind, value, implicit):
if kind is ScalarNode and implicit[0]:
if value == u'':
resolvers = self.yaml_implicit_resolvers.get(u'', [])
else:
resolvers = self.yaml_implicit_resolvers.get(value[0], [])
resolvers += self.yaml_implicit_resolvers.get(None, [])
for tag, regexp in resolvers:
if regexp.match(value):
return tag
implicit = implicit[1]
if self.yaml_path_resolvers:
exact_paths = self.resolver_exact_paths[-1]
if kind in exact_paths:
return exact_paths[kind]
if None in exact_paths:
return exact_paths[None]
if kind is ScalarNode:
return self.DEFAULT_SCALAR_TAG
elif kind is SequenceNode:
return self.DEFAULT_SEQUENCE_TAG
elif kind is MappingNode:
return self.DEFAULT_MAPPING_TAG
class Resolver(BaseResolver):
pass
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:bool',
re.compile(ur'''^(?:yes|Yes|YES|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF)$''', re.X),
list(u'yYnNtTfFoO'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:float',
re.compile(ur'''^(?:[-+]?(?:[0-9][0-9_]*)\.[0-9_]*(?:[eE][-+][0-9]+)?
|\.[0-9_]+(?:[eE][-+][0-9]+)?
|[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\.[0-9_]*
|[-+]?\.(?:inf|Inf|INF)
|\.(?:nan|NaN|NAN))$''', re.X),
list(u'-+0123456789.'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:int',
re.compile(ur'''^(?:[-+]?0b[0-1_]+
|[-+]?0[0-7_]+
|[-+]?(?:0|[1-9][0-9_]*)
|[-+]?0x[0-9a-fA-F_]+
|[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
list(u'-+0123456789'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:merge',
re.compile(ur'^(?:<<)$'),
[u'<'])
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:null',
re.compile(ur'''^(?: ~
|null|Null|NULL
| )$''', re.X),
[u'~', u'n', u'N', u''])
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:timestamp',
re.compile(ur'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
|[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
(?:[Tt]|[ \t]+)[0-9][0-9]?
:[0-9][0-9] :[0-9][0-9] (?:\.[0-9]*)?
(?:[ \t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
list(u'0123456789'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:value',
re.compile(ur'^(?:=)$'),
[u'='])
# The following resolver is only for documentation purposes. It cannot work
# because plain scalars cannot start with '!', '&', or '*'.
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:yaml',
re.compile(ur'^(?:!|&|\*)$'),
list(u'!&*'))

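The implicit resolvers registered above decide which tag an untagged plain
scalar receives. A small sketch of resolve(), using the classes from this
file:

  from yaml.nodes import ScalarNode
  from yaml.resolver import Resolver

  resolver = Resolver()
  # implicit=(True, False) marks the value as a plain (untagged) scalar.
  print(resolver.resolve(ScalarNode, u'123', (True, False)))   # tag:yaml.org,2002:int
  print(resolver.resolve(ScalarNode, u'yes', (True, False)))   # tag:yaml.org,2002:bool
  print(resolver.resolve(ScalarNode, u'text', (True, False)))  # tag:yaml.org,2002:str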
pylib/yaml/scanner.py Normal file

Diff not shown because of its large size.

pylib/yaml/serializer.py Normal file

@@ -0,0 +1,111 @@
__all__ = ['Serializer', 'SerializerError']
from error import YAMLError
from events import *
from nodes import *
class SerializerError(YAMLError):
pass
class Serializer(object):
ANCHOR_TEMPLATE = u'id%03d'
def __init__(self, encoding=None,
explicit_start=None, explicit_end=None, version=None, tags=None):
self.use_encoding = encoding
self.use_explicit_start = explicit_start
self.use_explicit_end = explicit_end
self.use_version = version
self.use_tags = tags
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
self.closed = None
def open(self):
if self.closed is None:
self.emit(StreamStartEvent(encoding=self.use_encoding))
self.closed = False
elif self.closed:
raise SerializerError("serializer is closed")
else:
raise SerializerError("serializer is already opened")
def close(self):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif not self.closed:
self.emit(StreamEndEvent())
self.closed = True
#def __del__(self):
# self.close()
def serialize(self, node):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif self.closed:
raise SerializerError("serializer is closed")
self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
version=self.use_version, tags=self.use_tags))
self.anchor_node(node)
self.serialize_node(node, None, None)
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
def anchor_node(self, node):
if node in self.anchors:
if self.anchors[node] is None:
self.anchors[node] = self.generate_anchor(node)
else:
self.anchors[node] = None
if isinstance(node, SequenceNode):
for item in node.value:
self.anchor_node(item)
elif isinstance(node, MappingNode):
for key, value in node.value:
self.anchor_node(key)
self.anchor_node(value)
def generate_anchor(self, node):
self.last_anchor_id += 1
return self.ANCHOR_TEMPLATE % self.last_anchor_id
def serialize_node(self, node, parent, index):
alias = self.anchors[node]
if node in self.serialized_nodes:
self.emit(AliasEvent(alias))
else:
self.serialized_nodes[node] = True
self.descend_resolver(parent, index)
if isinstance(node, ScalarNode):
detected_tag = self.resolve(ScalarNode, node.value, (True, False))
default_tag = self.resolve(ScalarNode, node.value, (False, True))
implicit = (node.tag == detected_tag), (node.tag == default_tag)
self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
style=node.style))
elif isinstance(node, SequenceNode):
implicit = (node.tag
== self.resolve(SequenceNode, node.value, True))
self.emit(SequenceStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
index = 0
for item in node.value:
self.serialize_node(item, node, index)
index += 1
self.emit(SequenceEndEvent())
elif isinstance(node, MappingNode):
implicit = (node.tag
== self.resolve(MappingNode, node.value, True))
self.emit(MappingStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style))
for key, value in node.value:
self.serialize_node(key, node, None)
self.serialize_node(value, node, key)
self.emit(MappingEndEvent())
self.ascend_resolver()

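The Serializer anchors shared nodes and replays the node graph as events for
the emitter. A round-trip sketch through the package-level helpers:

  import yaml

  node = yaml.compose('a: [1, 2]')  # parse text into a node graph
  print(yaml.serialize(node))       # replay the graph back into YAML text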
pylib/yaml/tokens.py Normal file

@@ -0,0 +1,104 @@
class Token(object):
def __init__(self, start_mark, end_mark):
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
attributes = [key for key in self.__dict__
if not key.endswith('_mark')]
attributes.sort()
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
for key in attributes])
return '%s(%s)' % (self.__class__.__name__, arguments)
#class BOMToken(Token):
# id = '<byte order mark>'
class DirectiveToken(Token):
id = '<directive>'
def __init__(self, name, value, start_mark, end_mark):
self.name = name
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class DocumentStartToken(Token):
id = '<document start>'
class DocumentEndToken(Token):
id = '<document end>'
class StreamStartToken(Token):
id = '<stream start>'
def __init__(self, start_mark=None, end_mark=None,
encoding=None):
self.start_mark = start_mark
self.end_mark = end_mark
self.encoding = encoding
class StreamEndToken(Token):
id = '<stream end>'
class BlockSequenceStartToken(Token):
id = '<block sequence start>'
class BlockMappingStartToken(Token):
id = '<block mapping start>'
class BlockEndToken(Token):
id = '<block end>'
class FlowSequenceStartToken(Token):
id = '['
class FlowMappingStartToken(Token):
id = '{'
class FlowSequenceEndToken(Token):
id = ']'
class FlowMappingEndToken(Token):
id = '}'
class KeyToken(Token):
id = '?'
class ValueToken(Token):
id = ':'
class BlockEntryToken(Token):
id = '-'
class FlowEntryToken(Token):
id = ','
class AliasToken(Token):
id = '<alias>'
def __init__(self, value, start_mark, end_mark):
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class AnchorToken(Token):
id = '<anchor>'
def __init__(self, value, start_mark, end_mark):
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class TagToken(Token):
id = '<tag>'
def __init__(self, value, start_mark, end_mark):
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
class ScalarToken(Token):
id = '<scalar>'
def __init__(self, value, plain, start_mark, end_mark, style=None):
self.value = value
self.plain = plain
self.start_mark = start_mark
self.end_mark = end_mark
self.style = style

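Each token class above carries an `id` string that the scanner and parser use
in error messages. The raw token stream is visible through scan(), e.g.:

  import yaml

  for token in yaml.scan('- a\n- b\n'):
      print(token.__class__.__name__)
  # StreamStartToken, BlockSequenceStartToken, BlockEntryToken, ScalarToken,
  # BlockEntryToken, ScalarToken, BlockEndToken, StreamEndToken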

@@ -0,0 +1,19 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PYTHONPATH=$(dirname $0)/../pylib \
python $(dirname $0)/../pylib/spinnaker/reconfigure_spinnaker.py $0

runtime/start_cassandra.sh Executable file

@@ -0,0 +1,67 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
CASSANDRA_DIR=${CASSANDRA_DIR:-"$(dirname $0)/../cassandra"}
CASSANDRA_PORT=${CASSANDRA_PORT:-9042}
CASSANDRA_HOST=${CASSANDRA_HOST:-127.0.0.1}
export CQLSH_HOST=$CASSANDRA_HOST
function is_local() {
local ip="$1"
if [[ "$ip" == "localhost" || "$ip" == "$(hostname)" || "$ip" == "0.0.0.0" ]]; then
return 0
elif ifconfig | grep " inet addr:${ip} " > /dev/null; then
return 0
else
return 1
fi
}
function maybe_start_cassandra() {
if is_local "$CASSANDRA_HOST"; then
echo "Starting Cassandra on $CASSANDRA_HOST"
sudo service cassandra start
else
echo "Using remote Cassandra from $CASSANDRA_HOST"
fi
}
echo "Checking for Cassandra on $CASSANDRA_HOST:$CASSANDRA_PORT"
if nc -z $CASSANDRA_HOST $CASSANDRA_PORT; then
echo "Cassandra is already up on $CASSANDRA_HOST:$CASSANDRA_PORT."
else
maybe_start_cassandra
echo "Waiting for Cassandra to start accepting requests on" \
"$CASSANDRA_HOST:$CASSANDRA_PORT..."
while ! nc -z $CASSANDRA_HOST $CASSANDRA_PORT; do sleep 0.1; done;
echo "Cassandra is up."
fi
# Create Cassandra keyspaces.
echo "Creating Cassandra keyspaces..."
DELAY=1
while ! cqlsh -f $CASSANDRA_DIR/create_echo_keyspace.cql && [ "$DELAY" -lt 32 ]
do
sleep $DELAY
let DELAY*=2
done
cqlsh -f $CASSANDRA_DIR/create_front50_keyspace.cql
cqlsh -f $CASSANDRA_DIR/create_rush_keyspace.cql
echo "Cassandra is ready."

runtime/start_redis.sh Executable file

@@ -0,0 +1,50 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
REDIS_PORT=${REDIS_PORT:-6379}
REDIS_HOST=${REDIS_HOST:-127.0.0.1}
function is_local() {
local ip="$1"
if [[ "$ip" == "localhost" || "$ip" == "$(hostname)" || "$ip" == "0.0.0.0" ]]; then
return 0
elif ifconfig | grep " inet addr:${ip} " > /dev/null; then
return 0
else
return 1
fi
}
function maybe_start_redis() {
if is_local "$REDIS_HOST"; then
echo "Starting Redis on $REDIS_HOST"
sudo service redis-server start
else
echo "Using remote Redis from $REDIS_HOST:$REDIS_PORT"
fi
}
echo "Checking for Redis on $REDIS_HOST:$REDIS_PORT."
if nc -z $REDIS_HOST $REDIS_PORT; then
echo "Redis is already up on $REDIS_HOST:$REDIS_PORT."
else
maybe_start_redis
echo "Waiting for Redis to start accepting requests on" \
"$REDIS_HOST:$REDIS_PORT..."
while ! nc -z $REDIS_HOST $REDIS_PORT; do sleep 0.1; done
echo "Redis is up."
fi

runtime/start_spinnaker.sh Executable file

@@ -0,0 +1,25 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [[ $# -eq 0 ]]; then
args="ALL"
else
args="$@"
fi
PYTHONPATH=$(dirname $0)/../pylib \
python $(dirname $0)/../pylib/spinnaker/spinnaker_runner.py START $args

runtime/stop_cassandra.sh Executable file

@@ -0,0 +1,17 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sudo service cassandra stop

runtime/stop_redis.sh Executable file

@@ -0,0 +1,17 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sudo service redis-server stop

runtime/stop_spinnaker.sh Executable file

@@ -0,0 +1,24 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [[ $# -eq 0 ]]; then
args="ALL"
else
args="$@"
fi
PYTHONPATH=$(dirname $0)/../pylib \
python $(dirname $0)/../pylib/spinnaker/spinnaker_runner.py STOP $args


@@ -0,0 +1,20 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
SPINNAKER_SCRIPT_DIR=/opt/spinnaker/scripts
$SPINNAKER_SCRIPT_DIR/stop_redis.sh
$SPINNAKER_SCRIPT_DIR/stop_cassandra.sh


@@ -0,0 +1,19 @@
#!/bin/bash
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
PYTHONPATH=$(dirname $0)/../pylib \
python $(dirname $0)/../pylib/spinnaker/validate_configuration.py $0