labs-vcap/dev_setup/lib/job_manager.rb

#!/usr/bin/env ruby
require 'rake'
require 'yaml'
require 'set'
require 'pp'
$LOAD_PATH.unshift(File.dirname(__FILE__))

class JobManager
  ALL = "all"
  NATS = "nats_server"
  ROUTER = "router"
  CC = "cloud_controller"
  CCDB = "ccdb"
  CF = "cloudfoundry"
  HM = "health_manager"
  DEA = "dea"
  UAA = "uaa"
  UAADB = "uaadb"
  ACM = "acm"
  ACMDB = "acmdb"

  SERVICES = ["redis", "mysql", "mongodb", "neo4j", "rabbitmq", "memcached"]
  SERVICES_NODE = SERVICES.map do |service|
    "#{service}_node"
  end
  SERVICES_GATEWAY = SERVICES.map do |service|
    "#{service}_gateway"
  end
  SERVICES_NODE.each do |node|
    # Service name constant, e.g. REDIS_NODE -> "redis_node"
    const_set(node.upcase, node)
  end
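
  # For "redis", for example, the loops above yield "redis_node" in
  # SERVICES_NODE (with the constant REDIS_NODE == "redis_node") and
  # "redis_gateway" in SERVICES_GATEWAY.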

  # All supported jobs
  JOBS = [ALL, NATS, ROUTER, CF, CC, HM, DEA, CCDB, UAA, UAADB] + SERVICES_NODE + SERVICES_GATEWAY

  SYSTEM_JOB = [CF]

  # List of the required properties for jobs
  INSTALLED_JOB_PROPERTIES = {NATS => ["host"], CC => ["service_api_uri", "builtin_services"],
                              CCDB => ["host"]}
  INSTALL_JOB_PROPERTIES = {CC => ["builtin_services"], MYSQL_NODE => ["index"],
                            MONGODB_NODE => ["index"], REDIS_NODE => ["index"],
                            NEO4J_NODE => ["index"], RABBITMQ_NODE => ["index"],
                            MEMCACHED_NODE => ["index"]}
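
  # For illustration (a hypothetical config fragment; the index value "0" is
  # made up): a deployment that installs mysql_node must supply its required
  # "index" property, e.g.
  #
  #   jobs:
  #     install:
  #       mysql_node:
  #         index: "0"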

  # Mapping between JOBS and the components that are consumed by "vcap_dev"
  # when cf is started or stopped
  SERVICE_NODE_RUN_COMPONENTS = Hash.new
  SERVICES_NODE.each do |node|
    SERVICE_NODE_RUN_COMPONENTS[node] = node
  end

  SERVICE_GATEWAY_RUN_COMPONENTS = Hash.new
  SERVICES_GATEWAY.each do |gateway|
    SERVICE_GATEWAY_RUN_COMPONENTS[gateway] = gateway
  end

  RUN_COMPONENTS = {ROUTER => ROUTER, CC => CC, HM => HM, DEA => DEA, UAA => UAA}.
                   update(SERVICE_NODE_RUN_COMPONENTS).
                   update(SERVICE_GATEWAY_RUN_COMPONENTS)
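
  # Every entry in the merged map pairs a job name with an identically named
  # component, e.g. RUN_COMPONENTS["router"] #=> "router" and
  # RUN_COMPONENTS["redis_node"] #=> "redis_node".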

  class << self
    include Rake::DSL if defined?(Rake::DSL)

    # Take user input and create a Hash that has the job name as the key and
    # its properties as the value.
    #
    # Allows the user to specify jobs as either
    #   jobs:
    #     install:
    #       - redis
    #       - mysql
    # OR
    #   jobs:
    #     install:
    #       redis:
    #       mysql:
    #
    def sanitize_jobs(type)
      return nil if @config["jobs"][type].nil?

      jobs = {}
      config_jobs = @config["jobs"][type]
      config_jobs.each do |element|
        case element
        when String
          jobs[element] = nil
        when Hash
          if element.length > 1
            puts "Bad input: #{element.pretty_inspect} should have only one key, please fix your yaml file."
            exit 1
          end
          element.each do |job, properties|
            jobs[job] = properties.nil? ? nil : properties.dup
          end
        else
          puts "Unsupported type for installed/install job #{element}"
          exit 1
        end
      end

      # Validate job names
      given_jobs = Set.new(jobs.keys)
      if (intersect = @valid_jobs.intersection(given_jobs)) != given_jobs
        puts "Input Error: Please provide valid #{type} jobs, the following jobs are not recognized\n#{(given_jobs - intersect).pretty_inspect}"
        exit 1
      end
      jobs
    end
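
    # A sketch of the normalization sanitize_jobs performs (hypothetical
    # input; job names are assumed to be present in @valid_jobs):
    #   @config["jobs"]["install"] == ["redis_node", {"mysql_node" => {"index" => "0"}}]
    # would yield
    #   {"redis_node" => nil, "mysql_node" => {"index" => "0"}}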

    def detect_duplicate_jobs
      if !@config["jobs"]["install"].nil? && !@config["jobs"]["installed"].nil?
        install_jobs = Set.new(@config["jobs"]["install"].keys)
        installed_jobs = Set.new(@config["jobs"]["installed"].keys)
        common = install_jobs.intersection(installed_jobs)
        unless common.empty?
          puts "Input error: the following jobs are specified in both the install and installed lists.\n#{common.pretty_inspect}"
          exit 1
        end
      end
    end
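
    # e.g. a hypothetical config listing "redis_node" under both jobs:install
    # and jobs:installed would make detect_duplicate_jobs abort with the input
    # error above.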

    def validate_properties(jobs, required_properties)
      return if jobs.nil?

      missing_keys = {}
      jobs.each do |job, properties|
        # Check if this job needs properties
        next if required_properties[job].nil?
        expected = Set.new(required_properties[job])
given = properties.nil? ? Set.new : Set.new(properties.keys)
# Check if all the required properties are given
if !expected.subset?(given)
missing_keys[job] ||= []
missing_keys[job].concat((expected - given).to_a)
end
end
if !missing_keys.empty?
puts "Input Error: The following mandatory job properties are missing #{missing_keys.pretty_inspect}"
exit 1
end
end
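# As an illustration of the check above: given a hypothetical required
# property table of { "nats" => ["host", "port"] } (the real tables are
# defined as constants elsewhere in this file), a config entry of
# "nats" => { "host" => "192.168.1.10" } would report "port" as a missing
# mandatory property and abort with the error above.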
# Called by the rake task for each job.
# There are three possibilities for each job:
# 1. It is already installed.
# 2. It is in the install list.
# 3. It is in neither the install list nor the installed list.
#
# Case 1: propagate the properties of the installed job to the spec
# so dependent jobs can use these properties.
# Case 2: add the required chef role to the chef run list and add the
# default/given properties to the spec.
# Case 3: this is a dependency failure.
def install(job)
unless @all_install
if !@config["jobs"]["installed"].nil? && !@config["jobs"]["installed"][job].nil?
@spec[job] = @config["jobs"]["installed"][job].dup
return
end
unless @config["jobs"]["install"].has_key?(job) || SYSTEM_JOB.include?(job)
puts "Dependecy check error: job #{job} is needed by one of the jobs in the install list, please add job #{job} to the install or installed list"
exit 1
end
if !@config["jobs"]["install"][job].nil?
@spec[job] = @config["jobs"]["install"][job].dup
end
end
# Prepare the run list for this job
if RUN_COMPONENTS.has_key?(job)
case RUN_COMPONENTS[job]
when String
@run_list << RUN_COMPONENTS[job]
when Array
RUN_COMPONENTS[job].each do |component|
@run_list << component
end
end
end
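# RUN_COMPONENTS (defined earlier in this file) maps a job either to a
# single component name (String) or to several (Array); e.g. a plain job
# might map to one component while a service job could map to a
# node/gateway pair -- illustrative shapes, not actual table entries.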
@roles << job
end
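# For illustration, a YAML deployment config shaped like the hypothetical
# fragment below (job names, hosts and ports are made up) would take
# Case 1 for "nats" -- its host/port properties get copied into the spec
# for dependent jobs -- and Case 2 for "cloud_controller":
#
#   jobs:
#     installed:
#       nats:
#         host: 192.168.1.10
#         port: 4222
#     install:
#       cloud_controller: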
def process_jobs
# Default to all jobs when the config has no "jobs" section
if @config["jobs"].nil?
# Install all jobs
@all_install = true
Rake.application[ALL].invoke
return
end
# Make sure that the "install" and "installed" jobs specified are valid
@config["jobs"]["install"] = sanitize_jobs("install")
@config["jobs"]["installed"] = sanitize_jobs("installed")
if @config["jobs"]["install"].nil?
puts "You have not selected any jobs to install."
exit 0
end
# Make sure that the "install" and "installed" jobs do not intersect
detect_duplicate_jobs
if @config["jobs"]["install"].include?("all")
# Install all jobs
if !@config["jobs"]["installed"].nil?
puts "Please correct your config file. You are trying to install all jobs, but you have also specified an 'installed' section"
exit 1
end
if @config["jobs"]["install"].length != 1
puts "Please correct your config file. You are trying to install all jobs, remove all other jobs from the 'install' list"
exit 1
end
@all_install = true
Rake.application[ALL].invoke
return
end
# Sanity check the given properties
validate_properties(@config["jobs"]["installed"], INSTALLED_JOB_PROPERTIES)
validate_properties(@config["jobs"]["install"], INSTALL_JOB_PROPERTIES)
# Let the install rake task do the dependency management
@config["jobs"]["install"].keys.each do |job|
Rake.application[job].invoke
end
end
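# A minimal sketch of the kind of tasks "job_dependency" (which lives next
# to this file and is loaded in go below) is expected to define -- task
# names and prerequisites here are hypothetical:
#
#   task "nats" do
#     JobManager.install("nats")
#   end
#
#   task "cloud_controller" => ["nats"] do
#     JobManager.install("cloud_controller")
#   end
#
# Invoking the "cloud_controller" task makes rake run "nats" first, which
# is how install gets a chance to verify each dependency.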
def go(config)
@spec = {}
@roles = []
@run_list = Set.new
@valid_jobs = Set.new(JOBS)
@config = config.dup
@all_install = false
# Load the job dependencies
Rake.application.rake_require("job_dependency")
process_jobs
# All dependencies are resolved; return the job property spec, the chef
# roles, and the vcap run list
return @spec, @roles, @run_list.to_a
end
end
end
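# Example usage (illustrative; assumes go is exposed as a class-level entry
# point and that the caller, e.g. a setup wrapper, supplies a YAML deployment
# config -- the file name below is hypothetical):
#
#   require 'yaml'
#   require 'job_manager'
#
#   config = YAML.load_file("deployment.yml")
#   spec, roles, run_list = JobManager.go(config)
#   # spec     => each job mapped to its resolved properties
#   # roles    => chef roles for the generated runlist
#   # run_list => the vcap components to start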