# Batch Shipyard
Batch Shipyard is a tool to help provision, execute, and monitor container-based batch processing and HPC workloads on Azure Batch. Batch Shipyard supports both Docker and Singularity containers. No experience with the Azure Batch SDK is needed; run your containers with easy-to-understand configuration files. All Azure regions are supported, including non-public Azure regions.
Additionally, Batch Shipyard provides the ability to provision and manage entire standalone remote file systems (storage clusters) in Azure, independent of any integrated Azure Batch functionality.
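Batch Shipyard is driven by a small set of YAML configuration files (credentials, global config, pool, and jobs). As a minimal, illustrative sketch of the jobs configuration (the job id, image, and command below are placeholders; the full schema is in the configuration documentation):

```yaml
# jobs.yaml: minimal single-task job sketch (illustrative values only)
job_specifications:
- id: myjob                       # placeholder job id
  tasks:
  - docker_image: alpine:latest   # any image accessible to the pool
    command: /bin/sh -c "echo hello from Batch Shipyard"
```

A job defined this way is submitted with `shipyard jobs add` once a pool is up.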
## Major Features
### Container Runtime and Image Management
- Support for multiple container runtimes including Docker, Singularity, and Kata Containers tuned for Azure Batch compute nodes
- Automated deployment of container images required for tasks to compute nodes (see the image configuration sketch after this list)
- Support for container registries including Azure Container Registry for both Docker and Singularity images (ORAS), other Internet-accessible public and private registries, and the Sylabs Singularity Library and Singularity Hub
- Transparent support for GPU-accelerated container applications on both Docker and Singularity on Azure N-Series VM instances
- Transparent assistance for running Docker and Singularity containers utilizing InfiniBand/RDMA on HPC Azure VM instances including A-Series, H-Series, HB/HC-Series, and N-Series
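The container images pre-loaded onto compute nodes are declared in the global configuration file, as referenced above. A rough sketch, assuming a storage link name of `mystorageaccount` and placeholder images (verify field names against the global configuration documentation):

```yaml
# config.yaml: global resources sketch (illustrative values only)
batch_shipyard:
  storage_account_settings: mystorageaccount  # storage link name from credentials
global_resources:
  docker_images:
  - alpine:latest                 # Docker images deployed to all pool nodes
  singularity_images:
    unsigned:
    - library://sylabsed/examples/lolcow  # example public Singularity image
```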
### Data Management and Shared File Systems
- Comprehensive data movement support: move data easily between locally accessible storage systems, remote filesystems, Azure Blob or File Storage, and compute nodes (see the ingress sketch after this list)
- Standalone Remote Filesystem Provisioning, with integration to automatically link these file systems to compute nodes, supporting both NFS and GlusterFS distributed network file systems
- Automatic shared data volume support for linking to Remote Filesystems, Azure File via SMB, Azure Blob via blobfuse, GlusterFS provisioned directly on compute nodes, and custom Linux mount support (fstab)
- Support for automated on-demand, per-job distributed scratch space provisioning via BeeGFS BeeOND
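Data ingress is likewise configuration-driven. A rough sketch of a `files` ingress entry in the global configuration, moving a local directory to a shared data volume (the path, volume name, and transfer method are illustrative; see the data movement documentation for the authoritative schema):

```yaml
# config.yaml fragment: ingress local data to a shared data volume (illustrative)
global_resources:
  files:
  - source:
      path: /path/to/local/data       # locally accessible source directory
    destination:
      shared_data_volume: glustervol  # volume defined elsewhere in the configuration
      data_transfer:
        method: multinode_scp         # assumed method name; check the docs
```

Ingress is then triggered with the `shipyard data ingress` command.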
### Monitoring
- Automated, integrated resource monitoring with Prometheus and Grafana for Batch pools and RemoteFS storage clusters
- Support for Batch Insights
### Open Source Scheduler Integration
- Support for elastic cloud bursting from Slurm to Azure Batch pools, with automated RemoteFS shared file system linking
### Azure Ecosystem Integration
- Support for serverless execution binding with Azure Functions
- Support for credential management through Azure KeyVault (see the credentials sketch after this list)
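Credentials are supplied through their own configuration file, which can alternatively be sourced from Azure KeyVault as noted above. A rough sketch showing only the Batch and storage sections (all values are placeholders; the KeyVault and AAD options are covered in the credentials documentation):

```yaml
# credentials.yaml sketch (placeholder values throughout)
credentials:
  batch:
    account_service_url: https://myaccount.myregion.batch.azure.com/
    account_key: <batch account key, or use AAD options instead>
  storage:
    mystorageaccount:               # link name referenced from other configs
      account: mystorage
      account_key: <storage account key>
      endpoint: core.windows.net
```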
### Azure Batch Integration and Enhancements
- Federation support: enables unified, constraint-based scheduling to collections of heterogeneous pools, including across multiple Batch accounts and Azure regions
- Support for simple, scenario-based pool autoscale and autopool to dynamically scale and control computing resources on-demand
- Support for Task Factories with the ability to generate tasks based on parametric (parameter) sweeps, randomized input, file enumeration, replication, and custom Python code-based generators (see the sweep sketch after this list)
- Support for multi-instance tasks to accommodate MPI and multi-node cluster applications packaged as Docker or Singularity containers on compute pools with automatic job completion and task termination
- Seamless, direct high-level configuration support for popular MPI runtimes including OpenMPI, MPICH, MVAPICH, and Intel MPI with automatic configuration for InfiniBand, including SR-IOV RDMA VM sizes
- Seamless integration with Azure Batch job, task and file concepts along with full pass-through of the Azure Batch API to containers executed on compute nodes
- Support for Azure Batch task dependencies allowing complex processing pipelines and DAGs
- Support for merge or final task specification that automatically depends on all other tasks within the job
- Support for job schedules and recurrences for automatic execution of tasks at set intervals
- Support for live job and job schedule migration between pools
- Support for Low Priority Compute Nodes
- Support for deploying Batch compute nodes into a specified Virtual Network and pre-defined public IP addresses
- Automatic setup of SSH or RDP users to all nodes in the compute pool and optional creation of SSH tunneling scripts to Docker Hosts on compute nodes
- Support for custom host images including Shared Image Gallery
- Support for Windows Containers on compliant Windows compute node pools with the ability to activate Azure Hybrid Use Benefit if applicable
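As an example of the task factory feature referenced above, a parametric sweep expands a single task specification into one task per generated value. A rough sketch (the image, range, and command are illustrative; the exact schema is in the task factory documentation):

```yaml
# jobs.yaml fragment: parametric sweep task factory (illustrative values)
job_specifications:
- id: sweepjob
  tasks:
  - docker_image: alpine:latest
    task_factory:
      parametric_sweep:
        product:                    # cartesian product of the ranges below
        - start: 0
          stop: 10
          step: 1
    command: /bin/sh -c "echo task {0}"  # {0} is substituted per sweep value
```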
## Installation
### Local Installation
Please see the installation guide for more information regarding the various local installation options and requirements.
### Azure Cloud Shell
Batch Shipyard is integrated directly into Azure Cloud Shell and you can execute any Batch Shipyard workload using your web browser or the Microsoft Azure Android and iOS app.
Simply request a Cloud Shell session and type `shipyard` to invoke the CLI; no installation is required. Try Batch Shipyard now in your browser.
## Documentation and Recipes
Please refer to the Batch Shipyard Documentation on Read the Docs.
Visit the Batch Shipyard Recipes section for various sample container workloads using Azure Batch and Batch Shipyard.
## Batch Shipyard Compute Node Host OS Support
Batch Shipyard is currently compatible with popular Azure Batch supported Marketplace Linux VMs, compliant Linux custom images, and native Azure Batch Windows Server with Containers VMs. Please see the platform image support documentation for more information specific to Batch Shipyard support of compute node host operating systems.
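For reference, the host OS for a pool is selected in the pool configuration. A rough sketch, assuming a Marketplace Ubuntu image (the publisher/offer/sku values are placeholders; currently supported combinations are listed in the platform image support documentation):

```yaml
# pool.yaml sketch: Marketplace platform image selection (illustrative values)
pool_specification:
  id: mypool
  vm_configuration:
    platform_image:
      publisher: Canonical         # placeholder Marketplace image reference
      offer: UbuntuServer
      sku: 18.04-LTS
  vm_count:
    dedicated: 2
    low_priority: 0
  vm_size: STANDARD_D2_V3
```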
## Change Log
Please see the Change Log for project history.
Please see this project's Code of Conduct and Contributing guidelines.