- Configure sasldb and libevent when doing dev setup for memcached
- Addressed CR comments
- Updated vcap to remove couchdb from optional services; this change will be made later
- Cleaned up memcached dev setup recipe (in-place edit of saslauthd config file)
- Configured neo4j and memcached to be excluded by default, use environment variable to override
- Config files corrections from CR comments
- Deleted unused file, added default 16M memory for memcached instance
Change-Id: I3f28d4b42989af6dc56ca1757daff93b43b5f6d7
don't update old vcap_setup
Amended to address Jesse's review comments.
Amended to remove submodule update for services.
Change-Id: Id22dbb005275b99017d132ed6e50f4035e394206
Change-Id: Ib81996a84b0283fd6afc7bb0109d12c620bbcce8
- Add neo4j support
- Save the last deployment in a deployment target file under $HOME
- Change service names, e.g. mysql => mysql_node etc.
- Point out the profile file at the end of a deployment; it has the paths needed to run vmc
- Audit admin list
- Remove some packages from essentials list
- Increase timeout in bin/vcap that checks if a component is started
patch 3 - Addressed Oleg's review comments.
Testing Done:
Ran Neo4j feature bvt test
Ran single host deployment
Ran multi host mongodb deployment
Change-Id: Id96757bfe109af2f89b9a25e35a669a41c2d259c
In addition, this change includes the following changes.
o Remove all backup service related scripts/templates.
o Simple fix to change the deployment name for multihost mongodb deployment.
Testing Done: verified single host deployment, checked presence of log files in
the new location.
Change-Id: I2b96246da62fe44a3e7de5ca86ca0bf49a20b9c3
bin/vcap - fix a bug where the specified config file was not being used to
lookup properties.
bin/vcap_dev_setup - check for git clone failures
job_manager.rb - remove postgresql as a service
patch 2 - fix $configdir => $config_dir
Change-Id: Ie0214ce340f62f585f6233a7c7de1e78cc6e1ad8
This review has changes related to the following 3 things:
1. Move all the default config (that was embedded in
vcap_dev/vcap_dev_setup/job_manager) into chef cookbooks.
2. Add postgres CCDB.
3. Multi host setup for services and dea. This meant separating the service
gateway from service nodes; to this end we added a service gateway role.
Overall a lot of files have changed; most of those changes are related to
moving the default config values from the various wrapper scripts into the
cookbooks, especially the new "deployment" cookbook, which is the holding
place for all the deployment-related config options.
* dev_setup/bin/vcap:
Added vcap to dev_setup/bin instead of just changing bin/vcap. The reason is
that the opensource CF code will not be in sync with the private repo, and
internal testing of the CF dev_setup scripts would not work if it relied on
the version of vcap in the opensource repo, i.e. bin/vcap. So now we just
package vcap with dev_setup.
* vcap_dev
* Since all defaults are now maintained in chef scripts, the chef "deployment"
role/recipe creates a deployment info file that is consumed by this vcap_dev
script.
* The vcap_dev_setup file now saves the list of components that were installed
for a deployment. This script only starts the components included in that
list.
* Uses the vcap binary from dev_setup/bin
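The handoff described above can be sketched roughly as follows. The file name, JSON format, and key names here are illustrative assumptions, not the actual format written by the chef recipe:

```ruby
require "json"
require "tmpdir"

# Hypothetical sketch: vcap_dev reads the deployment info file written by
# the chef "deployment" role/recipe and starts only the components that
# were installed for this deployment. Path and keys are assumptions.
def components_to_start(deployment_info_path)
  info = JSON.parse(File.read(deployment_info_path))
  info.fetch("installed_components", [])
end

# Example: a deployment that installed nats, ccdb and cloud_controller
path = File.join(Dir.tmpdir, "deployment_info.json")
File.write(path,
           JSON.dump("installed_components" => %w[nats ccdb cloud_controller]))
puts components_to_start(path).inspect
```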
* vcap_dev_setup
Moved a bunch of directory-creation code to the deployment cookbook
* Added a CCDB role/recipe.
* Creates and configures the CC postgres database.
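A ccdb-style recipe might look roughly like the Chef sketch below; the resource names, user name, database name, and commands are assumptions for illustration, not the actual cookbook code:

```ruby
# Hypothetical sketch of a ccdb-style Chef recipe (all names are assumptions).
# Creates a postgres role and the cloud controller database, idempotently.
bash "create_ccdb_user" do
  user "postgres"
  code "createuser --no-superuser --createdb --no-createrole ccadmin"
  not_if "psql -tAc \"SELECT 1 FROM pg_roles WHERE rolname='ccadmin'\" | grep -q 1",
         :user => "postgres"
end

bash "create_ccdb" do
  user "postgres"
  code "createdb --owner=ccadmin appcloud"
  not_if "psql -lqt | cut -d'|' -f1 | grep -qw appcloud", :user => "postgres"
end
```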
* Added deployment cookbook
* Moves all the common directory creation code here. Note, we should go back
and see if we really need all these various directories, and probably come
up with a well-designed directory layout.
Testing:
* Ran BVTs. Lift BVT failure needs investigating (I will fix/get to the bottom
of it before I submit this change)
* Tested the following multi host setup.
1. Ran mysql database with mysql node on 2 VMs. On a 3rd VM ran everything
else. Verified that BVTs passed.
2. Ran dea on 2 different VMs. Ran everything else on a 3rd VM. Verified
that BVTs passed.
Note: I have only tested the multihost setup with mysql. I will test mongodb
and redis before submitting this change; I expect those to run without any
glitches.
Change-Id: I6a084be09a81bf920eebc62be8d7aa6625cc17e9
All deployments (multi host and single host) are driven through a templated
config file. Look at dev_setup/deployments/sample* for an example of what this
config file looks like. There is sufficient documentation inside the configs.
But at a high level the user specifies "jobs" to "install" and jobs that are
"installed". The "installed" jobs have properties associated with them which are
used by jobs that are in the "install" list. e.g. nats could be an installed job
and its properties like host/port will be used by the jobs in the install list.
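As a rough illustration only, such a config might be shaped like the YAML below; the key names here are assumptions, and dev_setup/deployments/sample* is the authoritative reference:

```yaml
# Illustrative sketch only -- key names are assumptions, not the real schema.
deployment:
  name: "devbox"
jobs:
  install:
    - cloud_controller
    - ccdb
  installed:
    nats:
      host: "192.168.1.10"
      port: 4222
```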
The deployment code now goes through this config file, does sanity checking to
verify valid job names, valid properties etc. It leverages "rake" to manage
dependencies between jobs, look at dev_setup/lib/job_*. So we enforce that all
dependent jobs for a given job are either in the "installed" list or in the
"install" list. Once all job specs are verified, we generate the chef runlist
on the fly and the deployment proceeds to install the required components using
chef.
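The sanity/dependency check described above can be sketched as follows; the job names and the dependency table are illustrative assumptions, not the real dev_setup/lib/job_* code:

```ruby
# Hypothetical sketch of the dependency check described above: every
# dependency of a job in the "install" list must be satisfied by either
# the "install" or the "installed" list. The job names and dependency
# table below are illustrative assumptions, not the real dev_setup code.
JOB_DEPENDENCIES = {
  "nats"             => [],
  "ccdb"             => [],
  "cloud_controller" => %w[nats ccdb],
  "mysql_node"       => %w[nats],
}.freeze

def verify_job_specs(install, installed)
  (install + installed).each do |job|
    raise "unknown job: #{job}" unless JOB_DEPENDENCIES.key?(job)
  end
  install.each do |job|
    JOB_DEPENDENCIES[job].each do |dep|
      next if install.include?(dep) || installed.include?(dep)
      raise "#{job} depends on #{dep}, which is neither being installed " \
            "nor marked as installed"
    end
  end
  true
end

# e.g. nats already runs on another host ("installed"); install CC + CCDB here
puts verify_job_specs(%w[cloud_controller ccdb], %w[nats])  # prints "true"
```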
NOTE:
1. For now, multi host service deployment is limited to only selecting a service
say "redis" and the scripts will install the service node, gateway and the redis
software itself on the same host. We can add more flexibility to this later.
2. For now, we use roles to enforce chef recipe dependencies. It seems to be
working well for what we have right now and I like the fact that there is one
location that maintains the chef dependencies for a given component.
3. For now, not all configurations of multi host are tested. In fact, I have
only verified NATS. The config file template changes for all the other components
will be added in later changes. e.g. with this change you cannot run ccdb on a
separate host and expect cloud controller to work. But the changes to make it
work are more about adding the right templated fields and testing those changes.
Testing Done: I have verified that I can install "nats" on one box and the rest
of the components on a different box and things work fine.
Change-Id: I165b01fd65e4283748cf2cf9b2438369ae6332ce