spinnaker/dev/build_google_tarball.py

Migration of runtime and development scripts. This is a single large PR adding all the current build/install/run scripting support. There are a few noteworthy caveats:

1) The primary build script (build_release.py) is an interim Python script. The intent is to replace it with a gradle build.

2) The main installation script (install_spinnaker.py) pulls the artifacts from a directory or storage bucket (e.g. S3 or GCS). The expectation is to run apt-get install spinnaker instead, where the spinnaker debian package does similar things but with more standard packaging.

3) There is a pair of scripts to install a development environment on a machine (install_development.py) and set one up for a user (bootstrap_dev.sh). This too is interim and should become an apt-get install spinnaker-dev debian package.

4) There is a script to build a Google image. It uses packer, so it should be easy to add AMIs and other platform image support. In the end, it uses the install script, which is independent of platform anyway.

5) There are runtime scripts for managing an instance (start/stop). The validate script is minimal at the moment; I'll add rules back in future PRs. For now it is representative. The reconfigure script is intended to add customizations into the settings.js that have to be static.

6) The pylib/yaml directory is a copy of the standard Python YAML library, which is not included by default. It is here to avoid the complexity of managing Python libraries (since pip is not installed by default either) when there is no other need for Python here.

The dev/ directory is only intended for developers (stuff available in the spinnaker-dev packaging). The others are intended for operational runtime environments. The scripts support running a standard installation (in the runtime directory) or as a developer (straight out of gradle). The developer scripts are generally separate so they can inject a different configuration for where to locate components.

Minimal usage as a developer[*] would be:

  mkdir build
  cd build
  ../spinnaker/dev/refresh_source.sh --github_user=<user> --pull_origin
  ../spinnaker/dev/run_dev.sh    (will build and run as a developer)

[*] Assuming you already have a machine set up. If you don't, you can run:

  spinnaker/dev/install_development.py    (to set up a machine)
  spinnaker/dev/bootstrap_dev.sh          (to set up a user environment)

bootstrap_dev.sh will create the build directory and refresh the sources, leaving you in ./build.

To use this in a production environment, you'd need to build a release:

  RELEASE_PATH=<path or storage bucket>
  ../spinnaker/dev/refresh_source.sh --pull_origin
  ../spinnaker/dev/build_release --release_path=$RELEASE_PATH

That will create and write to the RELEASE_PATH and also an install_spinnaker.py.zip file. You can then copy that .py.zip onto the machine you want to install Spinnaker on, then:

  python install_spinnaker.py.zip \
    --package_manager \
    --release_path=$RELEASE_PATH

To complete the installation, create a /root/.spinnaker/spinnaker-local.yml file (from /opt/spinnaker/config/default-spinnaker-local.yml) and fine-tune it with your credentials and so forth. Be sure to enable a provider and set credentials. Then start Spinnaker:

  sudo /opt/spinnaker/scripts/start_spinnaker.sh
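The developer quick-start above can be sketched as a dry-run shell session. The `run` helper and the example GitHub user are placeholders added for illustration; the `refresh_source.sh` and `run_dev.sh` paths come from the description above, and the session assumes a machine already prepared by install_development.py:

```shell
#!/bin/sh
# Dry-run sketch of the developer quick-start described above.
# `run` only prints each command; replace its body with "$@" to
# actually execute the steps.
GITHUB_USER=example-user   # placeholder: substitute your GitHub account

run() { echo "+ $*"; }

run mkdir build
run cd build
run ../spinnaker/dev/refresh_source.sh --github_user="$GITHUB_USER" --pull_origin
run ../spinnaker/dev/run_dev.sh
```

Printing the commands first makes it easy to review the exact flags (e.g. `--pull_origin`) before running them for real.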
2015-10-28 20:42:04 +03:00
#!/usr/bin/python
#
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Given an existing GCE image, create a tarball for it.
# PYTHONPATH=. python dev/build_google_tarball.py --image=$IMAGE_NAME --tarball_uri=gs://$RELEASE_URI/$(basename $RELEASE_URI).tar.gz
import argparse
import os
import re
import sys
import time
from pylib.spinnaker.run import run_quick
from pylib.spinnaker.run import check_run_quick
def get_default_project():
"""Determine the default project name.
The default project name is the gcloud configured default project.
"""
result = check_run_quick('gcloud config list', echo=False)
return re.search('project = (.*)\n', result.stdout).group(1)
class Builder(object):
def __init__(self, options):
self.options = options
self.check_for_image_tarball()
self.__zone = options.zone
self.__project = options.project or get_default_project()
self.__instance = options.instance
def deploy_instance(self):
"""Deploy an instance (from an image) so we can get at its disks.
    This isn't necessarily efficient, but is simple since we already have
means to create images.
"""
if self.__instance:
print 'Using existing instance {name}'.format(name=self.__instance)
return
if not self.options.image:
raise ValueError('Neither --instance nor --image was specified.')
instance = 'build-spinnaker-tarball-{unique}'.format(
unique=time.strftime('%Y%m%d%H%M%S'))
print 'Deploying temporary instance {name}'.format(name=instance)
check_run_quick('gcloud compute instances create {name}'
' --zone={zone} --project={project}'
' --image={image} --image-project={image_project}'
' --scopes compute-rw,storage-rw'
.format(name=instance,
zone=self.__zone,
project=self.__project,
image=self.options.image,
image_project=self.options.image_project),
echo=False)
self.__instance = instance
def cleanup_instance(self):
"""If we deployed an instance, tear it down."""
    if self.options.instance:
      print 'Leaving pre-existing instance {name}'.format(
          name=self.options.instance)
return
print 'Deleting instance {name}'.format(name=self.__instance)
run_quick('gcloud compute instances delete {name}'
' --zone={zone} --project={project}'
.format(name=self.__instance,
zone=self.__zone,
project=self.__project),
echo=False)
def check_for_image_tarball(self):
"""See if the tarball aleady exists."""
uri = self.options.tarball_uri
    if not uri.startswith('gs://'):
error = ('--tarball_uri must be a Google Cloud Storage URI'
', not "{uri}"'
.format(uri=uri))
raise ValueError(error)
result = run_quick('gsutil ls {uri}'.format(uri=uri), echo=False)
if not result.returncode:
error = 'tarball "{uri}" already exists.'.format(uri=uri)
raise ValueError(error)
def __extract_image_tarball_helper(self):
"""Helper function for make_image_tarball that does the work.
Note that the work happens on the instance itself. So this function
builds a remote command that it then executes on the prototype instance.
"""
print 'Creating image tarball.'
set_excludes_bash_command = (
'EXCLUDES=`python -c'
' "import glob; print \',\'.join(glob.glob(\'/home/*\'))"`')
tar_path = self.options.tarball_uri
tar_name = os.path.basename(tar_path)
remote_script = [
'sudo mkdir /mnt/tmp',
'sudo /usr/share/google/safe_format_and_mount -m'
' "mkfs.ext4 -F" /dev/sdb /mnt/tmp',
set_excludes_bash_command,
'sudo gcimagebundle -d /dev/sda -o /mnt/tmp'
' --log_file=/tmp/export.log --output_file_name={tar_name}'
' --excludes=/tmp,\\$EXCLUDES'.format(tar_name=tar_name),
'gsutil -q cp /mnt/tmp/{tar_name} {output_path}'.format(
tar_name=tar_name, output_path=tar_path)]
command = '; '.join(remote_script)
check_run_quick('gcloud compute ssh --command="{command}"'
' --project {project} --zone {zone} {instance}'
.format(command=command.replace('"', r'\"'),
project=self.__project,
zone=self.__zone,
instance=self.__instance))
def create_tarball(self):
"""Create a tar.gz file from the instance specified by the options.
The file will be written to options.tarball_uri.
    It can be later turned into a GCE image by passing it as the --source-uri
    to gcloud compute images create.
"""
project = self.__project
basename = os.path.basename(self.options.tarball_uri).replace('_', '-')
first_dot = basename.find('.')
    if first_dot >= 0:
      basename = basename[0:first_dot]
disk_name = '{name}-export'.format(name=basename)
print 'Attaching external disk "{disk}" to extract image tarball.'.format(
disk=disk_name)
# TODO(ewiseblatt): 20151002
# Add an option to reuse an existing disk to reduce the cycle time.
# Then guard the create/format/destroy around this option.
# Still may want/need to attach/detach it here to reduce race conditions
# on its use since it can only be bound to once instance at a time.
check_run_quick('gcloud compute disks create '
' {disk_name} --project {project} --zone {zone} --size=10'
.format(disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
check_run_quick('gcloud compute instances attach-disk {instance}'
' --disk={disk_name} --device-name=export-disk'
' --project={project} --zone={zone}'
.format(instance=self.__instance,
disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
try:
self.__extract_image_tarball_helper()
finally:
print 'Detaching and deleting external disk.'
run_quick('gcloud compute instances detach-disk -q {instance}'
' --disk={disk_name} --project={project} --zone={zone}'
.format(instance=self.__instance,
disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
run_quick('gcloud compute disks delete -q {disk_name}'
' --project={project} --zone={zone}'
.format(disk_name=disk_name,
project=self.__project,
zone=self.__zone),
echo=False)
def init_argument_parser(parser):
parser.add_argument(
'--tarball_uri', required=True,
help='A path to a Google Cloud Storage bucket or path within one.')
parser.add_argument(
'--instance', default='',
      help='If specified, use this instance; otherwise deploy a new one.')
parser.add_argument(
'--image', default='', help='The image to tar if no --instance.')
parser.add_argument(
'--image_project', default='', help='The project for --image.')
parser.add_argument('--zone', default='us-central1-f')
parser.add_argument(
'--project', default='',
help='GCE project to write image to.'
' If not specified then use the default gcloud project.')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
init_argument_parser(parser)
options = parser.parse_args()
builder = Builder(options)
builder.deploy_instance()
try:
builder.create_tarball()
finally:
builder.cleanup_instance()
print 'DONE'