Updated README, added testDryRun script, updated configs and tests

nidhi0622 2024-03-27 16:07:43 -05:00
Parent 3b61643c47
Commit d02b136445
11 changed files with 242 additions and 193 deletions


@@ -138,6 +138,82 @@ burst into Azure.
Please contact Azure support for help with this configuration.
# Guide for using Static Templates with HostFactory
Follow the steps below if you are not using Chef in your project:
1. Copy cyclecloud-symphony-pkg-{version}.zip to the /tmp directory on the master node.
2. Unzip the file in the /tmp directory.
3. Ensure python3 is installed.
4. Set the following environment variables as appropriate for your environment; otherwise the install.sh script under the hostfactory/host_provider directory falls back to default values:
EGO_TOP
HF_TOP
HF_VERSION
HF_CONFDIR
HF_WORKDIR
HF_LOGDIR
You can run install.sh from within that folder and it will install a Python virtual environment at a plugin path like the one below:
$HF_TOP/$HF_VERSION/providerplugins/azurecc/scripts/venv
If you also need to generate the Symphony configuration, run install.sh with the argument generate_config. This sets all the configurations assuming azurecc is your only provider, and enables it.
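As a sketch, the environment for install.sh might be prepared like this; every path below is a hypothetical example, so substitute the values for your own Symphony installation:

```shell
# Hypothetical paths -- substitute the values for your Symphony installation.
export EGO_TOP=/opt/ibm/spectrumcomputing
export HF_TOP=$EGO_TOP/hostfactory
export HF_VERSION=1.2
export HF_CONFDIR=$HF_TOP/conf
export HF_WORKDIR=$HF_TOP/work
export HF_LOGDIR=$HF_TOP/log

# install.sh will create the plugin virtual environment under:
echo "$HF_TOP/$HF_VERSION/providerplugins/azurecc/scripts/venv"
```

With these set, running install.sh from the unzipped package directory picks them up instead of its built-in defaults.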
## Guide to using scripts
These scripts can be found under $HF_TOP/$HF_VERSION/providerplugins/azurecc/scripts
1. generateWeightedTemplates.sh
This script generates a weighted template. You need to run it as root.
./generateWeightedTemplates.sh
This creates a template based on the current CycleCloud template selections and prints it.
If there are errors, check /tmp/template_generate.out.
You must then store the template in the $HF_TOP/$HF_VERSION/hostfactory/providers/conf/azurecc_templates.json file.
2. testDryRunWeightedTemplates.sh
This test script has two options:
a. validate_templates - checks the template at the default path where azurecc_templates.json is stored
./testDryRunWeightedTemplates.sh validate_templates
b. create_machines - for this you need to pass an input JSON, for example:
input.json:
{"template": {"machineCount": 205,
"templateId": "execute"},
"dry-run": true}
./testDryRunWeightedTemplates.sh create_machines input.json
This will not create machines, but it shows which machines would have been created for the given input. Make sure "dry-run" is true.
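A dry run could be driven end to end like this; the machine count is illustrative, and the final invocation (commented out) assumes you are in the scripts directory:

```shell
# Write a sample dry-run request; the counts here are illustrative.
cat > /tmp/input.json <<'EOF'
{"template": {"machineCount": 205, "templateId": "execute"},
 "dry-run": true}
EOF

# Sanity-check the JSON before handing it to the script.
python3 -c 'import json; d = json.load(open("/tmp/input.json")); assert d["dry-run"] is True'

# Then: ./testDryRunWeightedTemplates.sh create_machines /tmp/input.json
```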
## Testing capacity issues
You can trigger a capacity issue using the following Python script.
Run LastFailureTime.py under hostfactory/host_provider/test/ with the following arguments:
1. Number of seconds before the current time
2. Account name
3. Region
4. Machine name
python LastFailureTime.py 1 AccountName westus2 Standard_D8_v5
The output looks like the following:
AdType = "Cloud.Capacity"
ExpirationTime = `2024-03-01 16:34:54.045927-06:00`
AccountName = "AccountName"
StatusUpdatedTime = `2024-03-01 15:34:53.045927-06:00`
Region = "westus2"
HasCapacity = False
Provider = "azure"
Name = "region/AccountName/westus2/Standard_D8_v5"
MachineType = "Standard_D8_v5"
Copy the above output into the /tmp/test.dat file, then run the command below to update the Cloud.Capacity record:
curl -k -u ${CC_USER} -X POST -H "Content-Type:application/text" --data-binary "@/tmp/test.dat" localhost:8080/db/Cloud.Capacity?format=text
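The time fields in this record follow a simple rule, sketched here after the logic in LastFailureTime.py: StatusUpdatedTime lies the given number of seconds in the past, ExpirationTime is one hour ahead, and HasCapacity is False whenever the failure time is in the past.

```python
import datetime

def capacity_record_times(seconds: int) -> dict:
    """Compute the Cloud.Capacity time fields the way LastFailureTime.py does."""
    now = datetime.datetime.now()
    return {
        "StatusUpdatedTime": now - datetime.timedelta(seconds=seconds),
        "ExpirationTime": now + datetime.timedelta(hours=1),
        # A failure `seconds` ago means the SKU currently has no capacity.
        "HasCapacity": seconds <= 0,
    }
```

Passing 1 as the first argument therefore records a failure one second ago, which marks the SKU as out of capacity until the record expires.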
## Additional configs for symphony templates
The following configurations have been added:
1. enable_weighted_templates -> default value is true. Generates templates in which templateId corresponds to a nodearray and vmTypes is a dictionary mapping VM type to weight. You can change this under the symphony configuration for the master node; in that case you need to use the default templateId format of nodearray + SKU name.
2. ncpus_use_vcpus -> default value is true. It assumes you want the slot ncpu attribute to be based on the vCPU count. You can change this in the symphony configuration for the master node.
3. capacity-failure-backoff -> default value is 300 (in seconds). This is how long scalelib waits before making the next allocation attempt after a capacity failure occurs.
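To illustrate how a weighted template turns machineCount (interpreted as compute units) into concrete VMs, here is a simplified greedy sketch. It is not the provider's actual implementation (the real one also honors per-type priorities), and the weights and availability numbers are made up for illustration:

```python
def allocate(machine_count, vm_types, available):
    """Greedy weighted allocation: highest-weight VM types first.

    machine_count -- requested compute units
    vm_types      -- {vm_size: weight}, as in azurecc_templates.json
    available     -- {vm_size: currently available count}
    """
    result, remaining = [], machine_count
    for vm, weight in sorted(vm_types.items(), key=lambda kv: -kv[1]):
        # Take as many of this type as availability and remaining units allow.
        take = min(available.get(vm, 0), remaining // weight)
        if take > 0:
            result.append((vm, take))
            remaining -= take * weight
        if remaining <= 0:
            break
    return result

# 10 units, D2_v2 carries weight 2 and only 5 are available:
print(allocate(10, {"Standard_D2_v2": 2, "Standard_D1_v2": 1},
               {"Standard_D2_v2": 5, "Standard_D1_v2": 5}))
# -> [('Standard_D2_v2', 5)]
```

Because a weight-2 type consumes two units per machine, five D2_v2 machines satisfy the whole 10-unit request before any D1_v2 machines are considered.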
# Contributing


@@ -18,9 +18,6 @@ scriptDir=`dirname $0`
export PYTHONPATH=$PYTHONPATH:$scriptDir/src
. $venv_path/activate
$venv_path/python3 -m cyclecloud_provider generate_templates -f /tmp/input.json 2>> /tmp/template_generate.out
if [ $? != 0 ]; then
echo "Template generation failed; check logs in /tmp/template_generate.out"
fi
rm -rf $HF_LOGDIR
rm -rf $HF_WORKDIR
exit 0


@@ -1047,7 +1047,7 @@ def main(argv=sys.argv, json_writer=simple_json_writer): # pragma: no cover
# best effort cleanup.
provider.periodic_cleanup(skip_templates=(cmd == "templates"))
provider.periodic_cleanup()
except ImportError as e:
logger.exception(str(e))


@@ -0,0 +1,24 @@
import sys
import datetime
now = datetime.datetime.now()
seconds = int(sys.argv[1])
status_updated_time = now - datetime.timedelta(seconds=seconds)
expiration_time = now + datetime.timedelta(hours=1)
def format_time(timestamp):
return datetime.datetime.strftime(timestamp, "%Y-%m-%d %H:%M:%S.%f-06:00")
account_name = sys.argv[2]
region_name = sys.argv[3]
machine_name = sys.argv[4]
print(f'''
AdType = "Cloud.Capacity"
ExpirationTime = `{format_time(expiration_time)}`
AccountName = "{account_name}"
StatusUpdatedTime = `{format_time(status_updated_time)}`
Region = "{region_name}"
HasCapacity = {seconds <= 0}
Provider = "azure"
Name = "region/{account_name}/{region_name}/{machine_name}"
MachineType = "{machine_name}" ''')


@@ -6,6 +6,7 @@ import subprocess
import sys
import tempfile
import unittest
from unittest.mock import MagicMock, patch
import cyclecloud_provider
from symphony import RequestStates, MachineStates, MachineResults
@@ -37,10 +38,12 @@ class MockHostnamer:
class MockCluster:
def __init__(self, nodearrays):
self.new_node_manager = MagicMock(return_value=None)
self.cluster_name = "mock_cluster"
self._nodearrays = nodearrays
self._nodearrays["nodearrays"].append({"name": "execute",
"nodearray": {"Configuration": {"run_list": ["recipe[symphony::execute]"]}}})
self.buckets = []
# self._nodearrays["nodearrays"].append({"name": "execute",
# "nodearray": {"Configuration": {"run_list": ["recipe[symphony::execute]"]}}})
# template -> requestI
self._nodes = {}
self.raise_during_termination = False
@@ -145,7 +148,12 @@ class MockCluster:
node["Status"] = "TerminationPreparation"
node["TargetState"] = "Terminated"
def set_buckets(self, buckets):
self.buckets = buckets
def get_buckets(self):
return self.buckets
class RequestsStoreInMem:
def __init__(self, requests=None):
@@ -164,10 +172,15 @@ class RequestsStoreInMem:
pass
class NodeBucket:
def __init__(self, nodearray, bucket_id, priority=None):
def __init__(self, nodearray, available, vm_size, id, resources, vcpu_count, max_count, software_configuration):
self.nodearray = nodearray
self.bucket_id = bucket_id
self.priority = priority
self.available = available
self.vm_size = vm_size
self.id = id
self.resources = resources
self.vcpu_count = vcpu_count
self.max_count = max_count
self.software_configuration = software_configuration
def json_writer(data, debug_output=False):
return data
@@ -302,14 +315,27 @@ class TestHostFactory(unittest.TestCase):
"buckets": [a4bucket, a8bucket]}]})
epoch_clock = MockClock((1970, 1, 1, 0, 0, 0))
hostnamer = MockHostnamer()
provider = cyclecloud_provider.CycleCloudProvider(provider_config, cluster, hostnamer, json_writer,
terminate_requests=RequestsStoreInMem(),
creation_requests=RequestsStoreInMem(),
templates=RequestsStoreInMem(),
clock=epoch_clock)
provider.capacity_tracker.reset()
provider = cyclecloud_provider.CycleCloudProvider(provider_config, cluster, hostnamer, json_writer, RequestsStoreInMem(), RequestsStoreInMem(), epoch_clock)
provider.request_tracker.reset()
return provider
def _new_provider_scalelib(self, provider_config=None, UserData=""):
provider_config = provider_config or util.ProviderConfig({}, {})
a4bucket = {"maxCount": 2, "activeCount": 0, "definition": {"machineType": "A4"}, "virtualMachine": MACHINE_TYPES["A4"]}
a8bucket = {"maxCoreCount": 24, "activeCount": 0, "definition": {"machineType": "A8"}, "virtualMachine": MACHINE_TYPES["A8"]}
cluster = MockCluster({"nodearrays": [{"name": "execute",
"UserData": UserData,
"nodearray": {"machineType": ["a4", "a8"], "Interruptible": False, "Configuration": {"autoscaling": {"enabled": True}, "symphony": {"autoscale": True}}},
"buckets": [a4bucket, a8bucket]},
{"name": "lp_execute",
"UserData": UserData,
"nodearray": {"machineType": ["a4", "a8"], "Interruptible": True, "Configuration": {"autoscaling": {"enabled": True}, "symphony": {"autoscale": True}}},
"buckets": [a4bucket, a8bucket]}]})
epoch_clock = MockClock((1970, 1, 1, 0, 0, 0))
hostnamer = MockHostnamer()
provider = cyclecloud_provider.CycleCloudProvider(provider_config, cluster, hostnamer, json_writer, RequestsStoreInMem(), RequestsStoreInMem(), epoch_clock)
provider.request_tracker.reset()
return provider
def _make_request(self, template_id, machine_count, rc_account="default", user_data={}):
return {"user_data": user_data,
"rc_account": rc_account,
@@ -752,7 +778,44 @@ class TestHostFactory(unittest.TestCase):
nodearrays = [{"name": "n1", "nodearray":{}}, {"name": "n2", "nodearray": {"Priority": 20}}]
self.assertEqual(20000, cyclecloud_provider.bucket_priority(nodearrays, nodearrays[0], b_index=0))
self.assertEqual(20000, cyclecloud_provider.bucket_priority(nodearrays, nodearrays[1], b_index=0))
def test_validate_templates(self):
provider = self._new_provider()
with patch('builtins.open', create=True) as mock_open:
mock_open.return_value.__enter__.return_value.read.return_value = json.dumps({"templates":[]})
self.assertFalse(provider.validate_template())
mock_open.return_value.__enter__.return_value.read.return_value = json.dumps({})
self.assertFalse(provider.validate_template())
mock_open.return_value.__enter__.return_value.read.return_value = json.dumps({"templates":[{"templateId":"execute","attributes":{"ncores":["Numeric","1"],"ncpus":["Numeric","1"],"mem":["Numeric","1024"],"type":["String","X86_64"]},"maxNumber":100, "vmTypes":{"A4":2}}]})
self.assertFalse(provider.validate_template())
mock_open.return_value.__enter__.return_value.read.return_value = json.dumps({"templates":[{"templateId":"execute","attributes":{"ncores":["Numeric","1"],"ncpus":["Numeric","1"],"mem":["Numeric","1024"],"type":["String","X86_64"]},"maxNumber":100, "vmTypes":{"A4":2}}]})
self.assertFalse(provider.validate_template())
mock_open.return_value.__enter__.return_value.read.return_value = json.dumps({"templates":[{"templateId":"execute","attributes":{"ncores":["Numeric","1"],"ncpus":["Numeric","1"],"mem":["Numeric","1024"],"type":["String","X86_64"]},"maxNumber":100}]})
self.assertFalse(provider.validate_template())
mock_open.return_value.__enter__.return_value.read.return_value = json.dumps({"templates":[{"templateId":"execute","attributes":{"ncores":["Numeric","1"],"ncpus":["Numeric","1"],"mem":["Numeric","1024"],"type":["String","X86_64"]},"maxNumber":100, "vmTypes":{"A4":1, "A8":1}},
{"templateId":"lp_execute","attributes":{"ncores":["Numeric","1"],"ncpus":["Numeric","1"],"mem":["Numeric","1024"],"type":["String","X86_64"]},"maxNumber":100, "vmTypes":{"A4":1, "A8":1}}]})
self.assertTrue(provider.validate_template())
mock_open.return_value.__enter__.return_value.read.return_value = json.dumps({"templates":[{"templateId":"execute","attributes":{"ncores":["Numeric","1"],"ncpus":["Numeric","1"],"mem":["Numeric","1024"],"type":["String","X86_64"]},"maxNumber":100, "vmTypes":{"A4":1, "A8":1}},
{"templateId":"lp_execute","attributes":{"ncores":["Numeric","1"],"ncpus":["Numeric","1"],"mem":["Numeric","1024"],"type":["String","X86_64"]},"maxNumber":100, "vmTypes":{"A4":1}}]})
self.assertFalse(provider.validate_template())
def test_generate_sample_template(self):
saved_stdout = sys.stdout
from io import StringIO
capture_output = StringIO()
sys.stdout = capture_output
return_value = [NodeBucket("execute", 50, "A2", "cdcd4c31-3bbf-48af-b266-1c3de4b8a3d4", resources={"ncores":1}, vcpu_count=1, max_count=100, software_configuration={ "autoscaling": {"enabled": True}}),
NodeBucket("execute", 50, "A4", "cdcd4c31-3bbf-48af-b266-1c3de4b8a3d4", resources={"ncores":2}, vcpu_count=2, max_count=50, software_configuration={ "autoscaling": {"enabled": True}})]
provider = self._new_provider()
provider.cluster.set_buckets(return_value)
provider.generate_sample_template()
json_data = json.loads(capture_output.getvalue())
self.assertEqual(1, len(json_data["templates"]))
assert json.loads(capture_output.getvalue())
sys.stdout = saved_stdout
if __name__ == "__main__":
unittest.main()


@@ -1,7 +1,7 @@
import unittest
from symphony import RequestStates, MachineStates, MachineResults
from capacity_tracking_db import CapacityTrackingDb
from request_tracking_db import RequestTrackingDb
class MockClock:
@@ -20,7 +20,7 @@ class TestHostFactory(unittest.TestCase):
cluster_name = "test_cluster"
clock = MockClock((1970, 1, 1, 0, 0, 0))
db = CapacityTrackingDb(config, cluster_name, clock)
db = RequestTrackingDb(config, cluster_name, clock)
db.reset()
self.assertFalse(db.get_requests())
@@ -40,65 +40,5 @@ class TestHostFactory(unittest.TestCase):
db.remove_request(request_id)
self.assertFalse(db.get_requests())
def test_capacity_limit_and_expiry(self):
def build_request_set(request_id, machine_type, machine_count, user_data={}):
MACHINE_TYPES = {
"A4": {"Name": "A4", "vcpuCount": 4, "memory": 1., "Location": "ukwest", "Quota": 10},
"A8": {"Name": "A8", "vcpuCount": 8, "memory": 2., "Location": "ukwest", "Quota": 20}
}
a4bucket = {"maxCount": 100, "definition": {"machineType": "A4"}, "virtualMachine": MACHINE_TYPES["A4"]}
a8bucket = {"maxCoreCount": 800, "definition": {"machineType": "A8"}, "virtualMachine": MACHINE_TYPES["A8"]}
nodearray = {"name": "execute",
"UserData": {},
"nodearray": {"machineType": ["a4", "a8"], "Configuration": {"autoscaling": {"enabled": True}, "symphony": {"autoscale": True}}},
"buckets": [a4bucket, a8bucket]}
request_set = {'count': machine_count,
'requestId': request_id,
'definition': {'machineType': machine_type},
'nodeAttributes': {'Tags': {"foo": "bar"},
'Configuration': user_data},
'nodearray': 'execute'}
return request_set
config = {}
cluster_name = "test_cluster"
clock = MockClock((1970, 1, 1, 0, 0, 0))
db = CapacityTrackingDb(config, cluster_name, clock)
db.reset()
self.assertFalse(db.get_requests())
# request 100 <= bucket['MaxCount']
request_id = "test_request_id"
request_set = build_request_set(request_id, "A4", 100)
db.add_request(request_set)
self.assertIsNotNone(db.get_request(request_id))
# request completed with only 1 machine
#create_response = {"requestId": request_id,
# "status": RequestStates.complete,
# "machines": [{"name": "host-123", "machineId": "id-123"}]}
#db.request_completed(create_response)
db.pause_capacity(request_set.get("nodearray"), request_set['definition']['machineType'])
key = db._capacity_key("execute", "A4")
capacity_db = db.capacity_db.read()
self.assertIn(key, capacity_db)
db._release_expired_limits()
capacity_db = db.capacity_db.read()
self.assertIn(key, capacity_db)
# Now verify that capacity is limited
self.assertTrue(db.is_paused("execute", "A4"))
# Finally advance clock just over 5 min to expire the limit - default expiry is 300 sec
db.clock.now = (1970, 1, 1, 0, 5, 10)
db._release_expired_limits()
capacity_db = db.capacity_db.read()
self.assertNotIn(key, capacity_db)
if __name__ == "__main__":
unittest.main()


@@ -1,128 +1,43 @@
import unittest
from unittest.mock import MagicMock, patch
import cluster
import weighted_template_parse
import cyclecloud_provider
def azurecc_template_generate(vmTypes, maxNumber=100):
azurecc_template = [{
"templateId": "execute",
"attributes" : {
"type": ["String", "X86_64"],
"nram": ["Numeric", "4096"],
"ncpus": ["Numeric", 1],
"nodearray": ["String", "execute"]
},
"vmTypes": {" Standard_D2_v2 ":2, " Standard_D1_v2 ":1},
"priceInfo": ["String", "price:0.1,billingTimeUnitType:prorated_hour,billingTimeUnitNumber:1,billingRoundoffType:unit"],
"rank": ["Numeric", "0"],
"maxNumber": 100
}]
azurecc_template[0]["vmTypes"] = vmTypes
azurecc_template[0]["maxNumber"] = maxNumber
return azurecc_template
class TestWeightedTemplate(unittest.TestCase):
def setUp(self):
cluster.new_node_manager = MagicMock(return_value=None)
self.weighted_template = weighted_template_parse.WeightedTemplates("symphony", {"cyclecloud.cluster.name": "symphony","cyclecloud.config.web_server": "http://localhost","cyclecloud.config.username":"cc_admin", "cyclecloud.config.password":"password" }, None)
@patch('cluster.Cluster.status')
def testInRangeMachineCount(self, mock_status):
vmTypes = {" Standard_D2_v2 ":2, " Standard_D1_v2 ":1}
vmTypePriority = {" Standard_D2_v2 ":1000, " Standard_D1_v2 ":100}
maxNumber = 100
azurecc_template = weighted_template_parse.azurecc_template_generate(vmTypes, vmTypePriority, maxNumber)
self.weighted_template = weighted_template_parse.WeightedTemplates( None)
def test_parse_weighted_template(self):
vmTypes = {"Standard_D2_v2":2, "Standard_D1_v2":1}
templates = azurecc_template_generate(vmTypes)
print(templates)
input_json = {
"template": {
"templateId": "execute",
"machineCount": 10 #Interpreted as request for 10 compute units
}
}
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": " Standard_D2_v2 "}, "availableCount": 5}, {"definition": {"machineType": " Standard_D1_v2 "}, "availableCount": 5}]}]}
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D2_v2 ", 5)])
input_json["template"]["machineCount"] = 15
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D2_v2 ", 5), (" Standard_D1_v2 ", 5)])
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": " Standard_D2_v2 "}, "availableCount": 10}, {"definition": {"machineType": " Standard_D1_v2 "}, "availableCount": 10}]}]}
input_json["template"]["machineCount"] = 25
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D2_v2 ", 10), (" Standard_D1_v2 ", 5)])
@patch('cluster.Cluster.status')
def testOutOfRangeMachineCount(self, mock_status):
vmTypes = {" Standard_D2_v2 ":2, " Standard_D1_v2 ":1}
vmTypePriority = {" Standard_D2_v2 ":1000, " Standard_D1_v2 ":100}
maxNumber = 2
azurecc_template = weighted_template_parse.azurecc_template_generate(vmTypes, vmTypePriority, maxNumber)
input_json = {
"template": {
"templateId": "execute",
"machineCount": 10 #Interpreted as request for 10 compute units
}
}
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": " Standard_D2_v2 "}, "availableCount": 1}, {"definition": {"machineType": " Standard_D1_v2 "}, "availableCount": 2}]}]}
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D2_v2 ", 1)])
@patch('cluster.Cluster.status')
def testNoVMPriority(self, mock_status):
vmTypes = { " Standard_D1_v2 ":1, " Standard_D2_v2 ":2}
vmTypePriority = {}
maxNumber = 100
azurecc_template = weighted_template_parse.azurecc_template_generate(vmTypes, vmTypePriority, maxNumber)
input_json = {
"template": {
"templateId": "execute",
"machineCount": 10
}
}
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": " Standard_D2_v2 "}, "availableCount": 5}, {"definition": {"machineType": " Standard_D1_v2 "}, "availableCount": 5}]}]}
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D1_v2 ", 5), (" Standard_D2_v2 ", 3)])
input_json["template"]["machineCount"] = 15
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D1_v2 ", 5), (" Standard_D2_v2 ", 5)])
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": " Standard_D2_v2 "}, "availableCount": 10}, {"definition": {"machineType": " Standard_D1_v2 "}, "availableCount": 10}]}]}
input_json["template"]["machineCount"] = 25
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D1_v2 ", 10), (" Standard_D2_v2 ", 8)])
@patch('cluster.Cluster.status')
def testMaxNumberLessThanAvailCount(self, mock_status):
vmTypes = { " Standard_D1_v2 ":1, " Standard_D2_v2 ":2}
vmTypePriority = {}
maxNumber = 5
azurecc_template = weighted_template_parse.azurecc_template_generate(vmTypes, vmTypePriority, maxNumber)
input_json = {
"template": {
"templateId": "execute",
"machineCount": 10
}
}
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": " Standard_D2_v2 "}, "availableCount": 5}, {"definition": {"machineType": " Standard_D1_v2 "}, "availableCount": 5}]}]}
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D1_v2 ", 5)])
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": " Standard_D2_v2 "}, "availableCount": 10}, {"definition": {"machineType": " Standard_D1_v2 "}, "availableCount": 10}]}]}
input_json["template"]["machineCount"] = 25
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual( result, [(" Standard_D1_v2 ", 5)])
@patch('cluster.Cluster.status')
def test6SKUsAllESeries(self, mock_status):
vmTypes = {"Standard_E2a_v4":2, "Standard_E4a_v4":4, "Standard_E8a_v4":8, "Standard_E16a_v4":16, "Standard_E32a_v4":32, "Standard_E64a_v4":64}
vmTypePriority = {"Standard_E2a_v4":90, "Standard_E4a_v4":94, "Standard_E8a_v4":98, "Standard_E16a_v4":95, "Standard_E32a_v4":99, "Standard_E64a_v4":100}
maxNumber = 1000
azurecc_template = weighted_template_parse.azurecc_template_generate(vmTypes, vmTypePriority, maxNumber)
input_json = {
"template": {
"templateId": "execute",
"machineCount": 100
}
}
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": "Standard_E2a_v4"}, "availableCount": 1000}, {"definition": {"machineType": "Standard_E4a_v4"}, "availableCount": 100}, {"definition": {"machineType": "Standard_E8a_v4"}, "availableCount": 10}, {"definition": {"machineType": "Standard_E16a_v4"}, "availableCount": 10}, {"definition": {"machineType": "Standard_E32a_v4"}, "availableCount": 10}, {"definition": {"machineType": "Standard_E64a_v4"}, "availableCount": 10}]}]}
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual(result, [("Standard_E64a_v4", 2)])
mock_status.return_value = {"nodearrays": [{"name": "execute", "buckets": [{"definition": {"machineType": "Standard_E2a_v4"}, "availableCount": 1000}, {"definition": {"machineType": "Standard_E4a_v4"}, "availableCount": 100}, {"definition": {"machineType": "Standard_E8a_v4"}, "availableCount": 10}, {"definition": {"machineType": "Standard_E16a_v4"}, "availableCount": 10}, {"definition": {"machineType": "Standard_E32a_v4"}, "availableCount": 0}, {"definition": {"machineType": "Standard_E64a_v4"}, "availableCount": 5}]}]}
input_json = {
"template": {
"templateId": "execute",
"machineCount": 900
}
}
result = self.weighted_template.create_machines(input_json, azurecc_template)
self.assertEqual(result, [("Standard_E64a_v4", 5), ("Standard_E8a_v4", 10), ("Standard_E16a_v4", 10), ("Standard_E4a_v4", 85)])
self.assertEqual(self.weighted_template.parse_weighted_template(input_json, templates), vmTypes)
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,31 @@
#!/bin/bash -e
export HF_LOGDIR=/tmp/log
export HF_CONFDIR=$HF_TOP/conf
export HF_WORKDIR=/tmp/work
mkdir -p $HF_LOGDIR
mkdir -p $HF_WORKDIR
cat <<EOF >/tmp/input.json
{}
EOF
export PRO_LOG_DIR=${HF_LOGDIR}
export PRO_CONF_DIR=${HF_CONFDIR}/providers/azurecc
export PRO_DATA_DIR=${HF_WORKDIR}
env > /tmp/invoke2.env
venv_path=$HF_TOP/$HF_VERSION/providerplugins/azurecc/venv/bin
scriptDir=`dirname $0`
export PYTHONPATH=$PYTHONPATH:$scriptDir/src
. $venv_path/activate
if [ "$1" == "validate_templates" ]; then
$venv_path/python3 -m cyclecloud_provider validate_templates -f /tmp/input.json 2>> /tmp/dry_run.out
elif [ "$1" == "create_machines" ]; then
cp $2 /tmp/reqMachine.dry_run.json
$venv_path/python3 -m cyclecloud_provider create_machines -f /tmp/reqMachine.dry_run.json 2>> /tmp/dry_run.out
fi
if [ $? != 0 ]; then
echo "$1 failed; check /tmp/dry_run.out"
fi
rm -rf $HF_LOGDIR
rm -rf $HF_WORKDIR
exit 0


@@ -196,6 +196,7 @@ if [ $# -eq 1 ]; then
Generate-Provider-Config
Generate-Provider-Plugins-Config
Update-Requestors-Config
Install-Python-Packages
else
echo "Argument $1 is invalid"
fi


@@ -139,7 +139,7 @@ bash 'Installing HostFactory Package...' do
code <<-EOH
cd /tmp/hostfactory
chmod +x install.sh
./install.sh
./install.sh generate_config
EOH
user "root"
group "root"


@@ -40,6 +40,8 @@ Autoscale = $Autoscale
[[[configuration symphony]]]
is_management = true
is_master = true
enable_weighted_templates = true
[[[input-endpoint ganglia]]]
PrivatePort = 8652
@@ -172,7 +174,7 @@ Order = 10
Label = Execute VM Type
Description = The machine type for the default Symphony Execute nodes
ParameterType = Cloud.MachineType
DefaultValue = Standard_F2s_v2
DefaultValue = Standard_F2s_v2,Standard_D2_v5
Config.Multiselect = true