Azure Distributed Data Engineering Toolkit (AZTK)

Azure Distributed Data Engineering Toolkit (AZTK) is a Python CLI application for provisioning on-demand Spark on Docker clusters in Azure. It's a cheap and easy way to get up and running with a Spark cluster, and a great tool for Spark users who want to experiment and start testing at scale.

This toolkit is built on top of Azure Batch but does not require any Azure Batch knowledge to use.

Setup

  1. Install aztk with pip:
    pip install aztk
  2. Initialize the project in a directory. This will automatically create a .aztk folder with config files in your working directory:
    aztk spark init
  3. Log in or register for an Azure account, navigate to Azure Cloud Shell, and run:
wget -q https://raw.githubusercontent.com/Azure/aztk/v0.7.0/account_setup.sh -O account_setup.sh &&
chmod 755 account_setup.sh &&
/bin/bash account_setup.sh
  4. Follow the on-screen prompts to create the necessary Azure resources and copy the output into your .aztk/secrets.yaml file (a sketch of that file follows below). For more information, see Getting Started Scripts.
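
As a rough illustration of step 4, here is a minimal sketch of populating .aztk/secrets.yaml by hand, assuming service-principal authentication and using obviously placeholder values. The field names follow the template that aztk spark init generates; verify them against your own file, as they may differ between AZTK versions:

# Illustrative only: writes placeholder service-principal credentials.
# Replace every <...> value with the output of account_setup.sh.
cat > .aztk/secrets.yaml <<'EOF'
service_principal:
    tenant_id: <tenant-id>
    client_id: <application-id>
    credential: <application-credential>
    batch_account_resource_id: </batch/account/resource/id>
    storage_account_resource_id: </storage/account/resource/id>
EOF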

Quickstart Guide

The core experience of this package is centered around a few commands.

# create your cluster
aztk spark cluster create
aztk spark cluster add-user
# monitor and manage your clusters
aztk spark cluster get
aztk spark cluster list
aztk spark cluster delete
# login and submit applications to your cluster
aztk spark cluster ssh
aztk spark cluster submit

1. Create and set up your cluster

First, create your cluster:

aztk spark cluster create --id my_cluster --size 5 --vm-size standard_d2_v2
  • See our available VM sizes here.
  • The --vm-size argument must be the official SKU name, which usually comes in the form: "standard_d2_v2"
  • You can create low-priority VMs at an 80% discount by using --size-low-pri instead of --size (see the example after this list)
  • By default, AZTK runs Spark 2.2.0 on an Ubuntu 16.04 Docker image. More info here
  • By default, AZTK will create a user (with the username spark) for your cluster
  • The cluster id (--id) may only contain alphanumeric characters, hyphens, and underscores, and cannot be longer than 64 characters.
  • By default, you cannot create clusters of more than 20 cores in total. Visit this page to request a core quota increase.
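
For example, the same cluster built from low-priority VMs, a sketch using the --size-low-pri flag noted above:

# low-priority variant of the cluster above (name and size are illustrative)
aztk spark cluster create --id my_cluster --size-low-pri 5 --vm-size standard_d2_v2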

More information about using a cluster can be found in the cluster documentation

2. Check on your cluster status

To check your cluster status, use the get command:

aztk spark cluster get --id my_cluster

3. Submit a Spark job

When your cluster is ready, you can submit jobs from your local machine to run against the cluster. The output of spark-submit will be streamed to your local console. Run these commands from the cloned AZTK repo:

# submit a java application
aztk spark cluster submit \
    --id my_cluster \
    --name my_java_job \
    --class org.apache.spark.examples.SparkPi \
    --executor-memory 20G \
    path/to/examples.jar 1000

# submit a python application
aztk spark cluster submit \
    --id my_cluster \
    --name my_python_job \
    --executor-memory 20G \
    path/to/pi.py 1000
  • The aztk spark cluster submit command takes the same parameters as the standard spark-submit command, except instead of specifying --master, AZTK requires that you specify your cluster --id and a unique job --name
  • The job name (--name) argument must be at least 3 characters long
    • It may only contain alphanumeric characters and hyphens; underscores are not allowed
    • It cannot contain uppercase letters
  • Each job you submit must have a unique name
  • Use the --no-wait option for your command to return immediately (see the sketch after this list)
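
A sketch of a fire-and-forget submission, assuming the app-logs subcommand of the AZTK CLI for retrieving output afterwards:

# submit without streaming output; the command returns immediately
aztk spark cluster submit --no-wait \
    --id my_cluster \
    --name my_python_job \
    path/to/pi.py 1000

# later, pull the application's logs from the cluster
aztk spark cluster app-logs --id my_cluster --name my_python_job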

Learn more about the spark-submit command here

4. Log in and Interact with your Spark Cluster

Most users will want to work interactively with their Spark clusters. With the aztk spark cluster ssh command, you can SSH into the cluster's master node. This command also helps you port-forward your Spark Web UI and Spark Jobs UI to your local machine:

aztk spark cluster ssh --id my_cluster --user spark

By default, we port forward the Spark Web UI to localhost:8080, Spark Jobs UI to localhost:4040, and the Spark History Server to localhost:18080.

You can configure these settings in the .aztk/ssh.yaml file.
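
For instance, a minimal sketch of overriding the forwarded local ports. The field names follow the template generated by aztk spark init and may differ between AZTK versions, so verify them against your own file before editing:

# illustrative only: overwrite .aztk/ssh.yaml with custom local ports
cat > .aztk/ssh.yaml <<'EOF'
username: spark
web_ui_port: 8080
job_ui_port: 4040
EOF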

NOTE: When working interactively, you may want to use tools like Jupyter or RStudio-Server, depending on whether you are a Python or R user. To do so, you need to set up your cluster with the appropriate Docker image and custom scripts:
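
For example, a sketch of provisioning with one of AZTK's Python-oriented images, assuming the --docker-repo flag on cluster create; the image tag is illustrative, so check the docker-image docs for current tags:

# hypothetical: use a Python-flavored AZTK image instead of the default
aztk spark cluster create --id my_cluster --size 5 --vm-size standard_d2_v2 \
    --docker-repo aztk/python:latest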

5. Manage and Monitor your Spark Cluster

You can also see your clusters from the CLI:

aztk spark cluster list

And get the state of any specified cluster:

aztk spark cluster get --id <my_cluster_id>

Finally, you can delete any specified cluster:

aztk spark cluster delete --id <my_cluster_id>

Next Steps

You can find more documentation here