Resource scheduling and cluster management for AI

Open Platform for AI (OpenPAI)

OpenPAI is an open-source platform that provides complete AI model training and resource management capabilities. It is easy to extend and supports on-premises, cloud, and hybrid environments at various scales.

Table of Contents

  1. When to consider OpenPAI
  2. Why choose OpenPAI
  3. How to deploy
  4. How to use
  5. Resources
  6. Get Involved
  7. How to contribute

When to consider OpenPAI

  1. When your organization needs to share powerful AI computing resources (GPU/FPGA farm, etc.) among teams.
  2. When your organization needs to share and reuse common AI assets like Model, Data, Environment, etc.
  3. When your organization needs an easy IT ops platform for AI.
  4. When you want to run a complete training pipeline in one place.

Why choose OpenPAI

The platform incorporates a mature design with a proven track record in Microsoft's large-scale production environment.

Support on-premises and easy to deploy

OpenPAI is a full-stack solution. It supports not only on-premises, hybrid, and public cloud deployment, but also single-box deployment for trial users.

Pre-built Docker images are provided for popular AI frameworks. It is easy to include heterogeneous hardware, and distributed training (such as distributed TensorFlow) is supported.

Most complete solution and easy to extend

OpenPAI is one of the most complete solutions for deep learning: it supports virtual clusters, is compatible with the Hadoop / Kubernetes ecosystem, and runs a complete training pipeline on a single cluster. OpenPAI is architected in a modular way: different modules can be plugged in as appropriate.

How to deploy

1 Prerequisites

Before you start, make sure the following requirements are met (a quick check sketch follows the list):

  • Ubuntu 16.04
  • Assign each server a static IP address, and make sure the network is reachable between servers.
  • Each server can access the external network, in particular a Docker registry service (e.g., Docker Hub), to pull the Docker images of the services to be deployed.
  • SSH is enabled on all machines, which share the same username / password and have sudo privileges.
  • The NTP service is enabled.
  • It is recommended that either no Docker is installed, or the installed Docker has API version >= 1.26.
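As a rough illustration, here is a minimal pre-flight check sketch. It is not part of OpenPAI; it simply assumes the standard docker and timedatectl command-line tools are available on each Ubuntu 16.04 node.

```python
# Hypothetical pre-flight check, not part of OpenPAI. It assumes the standard
# `docker` and `timedatectl` command-line tools are available on the node.
import subprocess


def docker_api_version():
    """Return the Docker server API version string, or None if Docker is absent."""
    try:
        result = subprocess.run(
            ["docker", "version", "--format", "{{.Server.APIVersion}}"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
            universal_newlines=True, check=True)
        return result.stdout.strip()
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None


def ntp_synchronized():
    """Best-effort check that the clock is NTP-synchronized (systemd hosts;
    the exact wording of the timedatectl output differs between releases)."""
    try:
        result = subprocess.run(["timedatectl"], stdout=subprocess.PIPE,
                                universal_newlines=True)
    except FileNotFoundError:
        return False
    return "synchronized: yes" in result.stdout


if __name__ == "__main__":
    api = docker_api_version()
    if api is None:
        print("Docker is not installed (acceptable: deployment can install it).")
    elif tuple(int(x) for x in api.split(".")) >= (1, 26):
        print("Docker API version {}: OK.".format(api))
    else:
        print("Docker API version {} is below 1.26: please upgrade.".format(api))
    print("NTP synchronized: {}".format("yes" if ntp_synchronized() else "no"))
```

Run it on every node before deployment; anything it flags should be fixed first (or, in Docker's case, left for the deployment scripts to install).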

2 Deploy OpenPAI

2.1 Customized deploy
2.2 Single Box deploy

How to use

How to train jobs
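Training jobs are described by a JSON job configuration and submitted through the web portal or the REST server; see the job tutorial for the full schema. The sketch below is illustrative only: the field names follow the v1 job config format, while the REST server address, port, and endpoint paths are assumptions that should be checked against the rest-server documentation of your deployment.

```python
# Illustrative only: a minimal v1-style job config submitted over HTTP.
# The rest-server address, port, endpoint paths, and token flow below are
# assumptions; check the rest-server documentation of your deployment.
import json

import requests  # third-party: pip install requests

REST_SERVER = "http://<pai-master-ip>:9186"  # assumed rest-server address/port

job_config = {
    "jobName": "tensorflow-cifar10-demo",
    "image": "openpai/pai.example.tensorflow",  # a pre-built framework image
    "taskRoles": [
        {
            "name": "train",
            "taskNumber": 1,
            "cpuNumber": 4,
            "memoryMB": 8192,
            "gpuNumber": 1,
            "command": "python train.py",
        }
    ],
}

# 1. Obtain an access token (endpoint assumed).
token = requests.post(
    REST_SERVER + "/api/v1/token",
    data={"username": "<username>", "password": "<password>"},
).json()["token"]

# 2. Submit the job (endpoint assumed).
response = requests.post(
    REST_SERVER + "/api/v1/jobs",
    headers={"Authorization": "Bearer " + token,
             "Content-Type": "application/json"},
    data=json.dumps(job_config),
)
print(response.status_code, response.text)
```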

Cluster administration

Resources

  • The OpenPAI user documentation provides in-depth instructions for using OpenPAI.
  • Visit the release notes to read about the new features, or download the release today.

Get Involved

  • StackOverflow: if you have questions about OpenPAI, please submit them on Stack Overflow under the tag openpai.
  • Report an issue: if you have found a bug or want to request a new feature, please submit an issue on GitHub.

How to contribute

Contributor License Agreement

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Who should consider contributing to OpenPAI?

  • Folks who want to add support for other ML and DL frameworks
  • Folks who want to make OpenPAI a richer AI platform (e.g. support for more ML pipelines, hyperparameter tuning)
  • Folks who want to write tutorials/blog posts showing how to use OpenPAI to solve AI problems

Contributors

One key purpose of PAI is to support the highly diversified requirements from academia and industry. PAI is completely open: it is under the MIT license. This makes PAI particularly attractive for evaluating various research ideas, including but not limited to its individual components.

PAI operates in an open model. It was initially designed and developed by Microsoft Research (MSR) and the Microsoft Search Technology Center (STC) platform team. We are glad to have Peking University, Xi'an Jiaotong University, Zhejiang University, and the University of Science and Technology of China join us to develop the platform jointly. Contributions from academia and industry are all highly welcome.