aztk/examples/sdk/sdk_example.py

import sys, os, time
import aztk.spark
from aztk.error import AztkError
# set your secrets
secrets_config = aztk.spark.models.SecretsConfiguration(
    service_principal=aztk.spark.models.ServicePrincipalConfiguration(
        tenant_id="<org>.onmicrosoft.com",
        client_id="",
        credential="",
        batch_account_resource_id="",
        storage_account_resource_id="",
    ),
    ssh_pub_key=""
)
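# NOTE: the example above authenticates with Azure Active Directory via
# ServicePrincipalConfiguration; the empty strings are placeholders for your
# own values. If you use account keys instead, SecretsConfiguration also
# accepts a shared-key setup. A minimal sketch, assuming the
# SharedKeyConfiguration field names from this SDK version (verify against
# your installed aztk before use):
#
# secrets_config = aztk.spark.models.SecretsConfiguration(
#     shared_key=aztk.spark.models.SharedKeyConfiguration(
#         batch_account_name="",
#         batch_account_key="",
#         batch_service_url="",
#         storage_account_name="",
#         storage_account_key="",
#         storage_account_suffix="core.windows.net"
#     ),
#     ssh_pub_key=""
# )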
# set path to root of repository to reference files
ROOT_PATH = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..'))
# create a client
client = aztk.spark.Client(secrets_config)
# list available clusters
clusters = client.list_clusters()
# define a custom script
custom_script = aztk.spark.models.CustomScript(
name="simple.sh",
script=os.path.join(ROOT_PATH, 'custom-scripts', 'simple.sh'),
run_on="all-nodes")
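# run_on="all-nodes" executes simple.sh on every node; the CustomScript model
# can also target a single role (values such as "master" or "worker" --
# assumed from the docs for this SDK version, check before relying on them).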
# define spark configuration
spark_conf = aztk.spark.models.SparkConfiguration(
    spark_defaults_conf=os.path.join(ROOT_PATH, 'config', 'spark-defaults.conf'),
    spark_env_sh=os.path.join(ROOT_PATH, 'config', 'spark-env.sh'),
    core_site_xml=os.path.join(ROOT_PATH, 'config', 'core-site.xml'),
    jars=[os.path.join(ROOT_PATH, 'config', 'jars', jar) for jar in os.listdir(os.path.join(ROOT_PATH, 'config', 'jars'))]
)
# configure my cluster
cluster_config = aztk.spark.models.ClusterConfiguration(
    cluster_id="sdk-test",
    vm_low_pri_count=2,
    vm_size="standard_f2",
    custom_scripts=[custom_script],
    spark_configuration=spark_conf
)
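# The cluster above uses only low-priority VMs (vm_low_pri_count=2). For
# dedicated nodes, ClusterConfiguration exposes a separate count; a hedged
# sketch assuming the vm_count field (verify against your aztk version):
#
# cluster_config = aztk.spark.models.ClusterConfiguration(
#     cluster_id="sdk-test",
#     vm_count=2,  # dedicated nodes instead of low-priority
#     vm_size="standard_f2",
#     custom_scripts=[custom_script],
#     spark_configuration=spark_conf
# )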
# create a cluster, and wait until it is ready
try:
    cluster = client.create_cluster(cluster_config)
    cluster = client.wait_until_cluster_is_ready(cluster.id)
except AztkError as e:
    print(e.message)
    sys.exit()
# get details of specific cluster
cluster = client.get_cluster(cluster_config.cluster_id)
# create a user for the cluster
client.create_user(cluster.id, "sdk_example_user", "example_password")
# create some apps to run
app1 = aztk.spark.models.ApplicationConfiguration(
name="pipy1",
application=os.path.join(ROOT_PATH, 'examples', 'src', 'main', 'python', 'pi.py'),
application_args="10"
)
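# ApplicationConfiguration mirrors spark-submit: besides name, application and
# application_args it also accepts tuning options (e.g. driver_memory,
# executor_memory, and main_class for jar-based applications -- parameter
# names assumed from this SDK version, verify before use).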
app2 = aztk.spark.models.ApplicationConfiguration(
name="pipy2",
application=os.path.join(ROOT_PATH, 'examples', 'src', 'main', 'python', 'pi.py'),
application_args="20"
)
app3 = aztk.spark.models.ApplicationConfiguration(
name="pipy3",
application=os.path.join(ROOT_PATH, 'examples', 'src', 'main', 'python', 'pi.py'),
application_args="30"
)
# submit an app and wait until it is finished running
client.submit(cluster.id, app1)
client.wait_until_application_done(cluster.id, app1.name)
# get logs for app, print to console
app1_logs = client.get_application_log(cluster_id=cluster_config.cluster_id, application_name=app1.name)
print(app1_logs.log)
# submit some other apps to the cluster in parallel
client.submit_all_applications(cluster.id, [app2, app3])
# get status of app
status = client.get_application_status(cluster_config.cluster_id, app2.name)
# stream logs of app, print to console as it runs
current_bytes = 0
while True:
    app2_logs = client.get_application_log(
        cluster_id=cluster_config.cluster_id,
        application_name=app2.name,
        tail=True,
        current_bytes=current_bytes)
    print(app2_logs.log, end="")
    if app2_logs.application_state == 'completed':
        break
    current_bytes = app2_logs.total_bytes
    time.sleep(1)
# wait until all jobs finish, then delete the cluster
client.wait_until_applications_done(cluster.id)
client.delete_cluster(cluster.id)
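# In a long-running script you may want the cluster torn down even if a
# submit or wait call raises. A minimal sketch of that pattern, using the
# same calls as above (not part of the original example):
#
# try:
#     client.submit(cluster.id, app1)
#     client.wait_until_applications_done(cluster.id)
# finally:
#     client.delete_cluster(cluster.id)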