Why app startup, and not part of staging? Because every deployment-related
task is appended to the 'startup' script, so they run on the DEA
(before app startup) instead of on the cloud_controller. Not sure if this
should change.
Change-Id: I1b1f09fb7965d318f522a28c165d17e355ea195f
* use the virtualenv's pip, not the global one (which requires sudo anyway)
* redirect pip's output to pip.log; otherwise it is impossible to tell what went wrong when pip fails
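A minimal sketch of the kind of fragment this appends to the generated 'startup'
script; the virtualenv directory name (env) and the log path are assumptions,
not the plugin's actual layout:
  # hypothetical startup-script fragment; 'env' as the virtualenv dir and the
  # pip.log location are illustrative, not the real paths
  env/bin/pip install -r requirements.txt > pip.log 2>&1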
Change-Id: Idba6bc42e699d09a77b446c0c4ed7c3200857df9
A globally installed gunicorn will not recognize modules installed
(from requirements.txt) into the app's virtualenv.
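A rough sketch of the fix, assuming a virtualenv named env and a hypothetical
wsgi:application entry point (neither name comes from this change):
  # global gunicorn cannot see packages installed into the app's virtualenv:
  #   gunicorn wsgi:application
  # the virtualenv's gunicorn can, since it runs under the virtualenv's interpreter:
  env/bin/gunicorn wsgi:application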
Change-Id: Ib8c480297ae9537983b7d3be2848a14095e7b043
- Fixed varz stats keeping frameworks/runtimes around that are no longer in the DB
- Added started app and instance counts
- Aggregated queries to reduce the number of requests
Change-Id: I03ee8368e1e2f06819f40cb1cc74ced949f8a79f
To apply this fix, you can use MySQL (with the em_mysql2 adapter) as the cloud controller database by configuring cloud_controller.yml as follows:
development:
  database: cloudcontroller
  host: localhost
  port: 3306
  username: root
  password: password
  adapter: em_mysql2
  encoding: utf8
  timeout: 2000
Change-Id: Ie881f617f77d1d4aacc0772b1594547b9beb1af5
All deployments (multi host and single host) are driven through a templated
config file. Look at dev_setup/deployments/sample* for an example of what this
config file looks like. There is sufficient documentation inside the configs.
At a high level, the user specifies "jobs" to "install" and jobs that are
already "installed". The "installed" jobs have properties associated with them
which are used by the jobs in the "install" list, e.g. nats could be an
installed job, and its properties like host/port will be used by the jobs in
the install list.
The deployment code now goes through this config file and does sanity checking
to verify valid job names, valid properties, etc. It leverages "rake" to manage
dependencies between jobs; see dev_setup/lib/job_*. We enforce that all
dependent jobs for a given job are either in the "installed" list or in the
"install" list. Once all job specs are verified, we generate the chef runlist
on the fly, and the deployment proceeds to install the required components
using chef.
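As an illustration only, a deployment config in this spirit might look roughly
like the sketch below; the key names and values are assumptions, and the real
schema is documented in dev_setup/deployments/sample*:
  # hypothetical deployment config sketch (key names are assumptions)
  deployment:
    name: devbox
  jobs:
    install:
      - cloud_controller
      - dea
    installed:
      nats_server:
        host: 192.168.1.10   # properties of an already-installed job,
        port: 4222           # consumed by the jobs in the install list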
NOTE:
1. For now, multi host service deployment is limited to selecting a single
service, say "redis", and the scripts will install the service node, the
gateway, and the redis software itself on the same host. We can add more
flexibility to this later.
2. For now, we use roles to enforce chef recipe dependencies. It seems to be
working well for what we have right now and I like the fact that there is one
location that maintains the chef dependencies for a given component.
3. For now, not all multi host configurations are tested. In fact, I have only
verified NATS. The config file template changes for all the other components
will be added in later changes, e.g. with this change you cannot run ccdb on a
separate host and expect the cloud controller to work. But the changes to make
it work are mostly about adding the right templated fields and testing those
changes.
Testing Done: I have verified that I can install "nats" on one box and the rest
of the components on a different box and things work fine.
Change-Id: I165b01fd65e4283748cf2cf9b2438369ae6332ce
Added template files for all CF components, including services.
Added comments to the deployment YAML config file. Cleaned up some of the scripts.
Change-Id: I9209749ab9ca50a2bd894189c571e1b4c33bc77b