diff --git a/.github/workflows/continuous-integration.yml b/.github/workflows/continuous-integration.yml index 4b089f9b3..50ba0235c 100644 --- a/.github/workflows/continuous-integration.yml +++ b/.github/workflows/continuous-integration.yml @@ -178,11 +178,6 @@ jobs: uses: actions/setup-node@v1 with: node-version: ${{ matrix.node }} - - name: Test contrib/submit-simple-job - run: | - cd contrib/submit-simple-job - npm install - npm test - name: Test contrib/submit-job-v2 run: | cd contrib/submit-job-v2 diff --git a/contrib/notebook-extension/README.md b/contrib/notebook-extension/README.md deleted file mode 100644 index cc93bcc74..000000000 --- a/contrib/notebook-extension/README.md +++ /dev/null @@ -1,155 +0,0 @@ - # OpenPAI Submitter - -***Note: OpenPAI Submitter is deprecated. New plugin support for Jupyter Notebook is under development.*** - -***OpenPAI Submitter*** is a Jupyter Notebook extension, created for easy-to-use job submission and management on OpenPAI clusters. Users can submit Jupyter jobs in one click and manage recent jobs in a flexible dialog. - -![](docs_img/submitter-1.gif) - -## How to Install - -This extension requires **Python 3+** and Jupyter Notebook to work. Make sure you are using Jupyter Notebook with a Python 3 kernel. - -Please use the following commands to install this extension (make sure you are in the correct `python` environment). - -```bash -pip install --upgrade pip -git clone https://github.com/Microsoft/pai -cd pai/contrib/notebook-extension -python setup.py # add --user to avoid permission issues if necessary -``` - -This extension leverages the [`Python` SDK](https://github.com/microsoft/pai/tree/master/contrib/python-sdk) as the low-level implementation. The SDK will also be installed by the above commands (use the `-i` option of `setup.py` to avoid installing the SDK). - -Before starting, users need to provide the basic information of their clusters. If you log in to your cluster by user/password, you can use the following command to add your cluster. The `<cluster-alias>` is a cluster name chosen by you. -```bash -# for user/password authentication -opai cluster add --cluster-alias <cluster-alias> --pai-uri <pai-uri> --user <user> --password <password> -``` -If you log in to your cluster by Azure AD authentication, the following command is for you to add the cluster: -```bash -# for Azure AD authentication -opai cluster add --cluster-alias <cluster-alias> --pai-uri <pai-uri> --user <user> --token <token> -``` - -Now you can use the command `opai cluster list` to list all clusters. - -The following command is used to delete one of your clusters: -```bash -# Delete a cluster by calling its alias. -opai cluster delete <cluster-alias> -``` - -If you want to update some settings of a cluster (e.g. cluster alias, username or password), it is recommended to delete the old cluster by `opai cluster delete <cluster-alias>`, then use `opai cluster add` to re-add it with the new settings. A more complex way is to edit the [YAML file](../python-sdk/#define-your-clusters) directly. - -There are other ways to manage the clusters; see the [documentation of the SDK](../python-sdk). - -## Quick Start - -Once installed, the extension will add two buttons to the notebook toolbar: a submit button and a recent-jobs button. - -The submit button is designed for job submission. You can click it and the detailed cluster information will be loaded. Then click ***Quick Submit***. The extension will do the following work for you (a minimal SDK sketch follows the list): - -- Pack all files in the current folder as a .zip file, and upload it to the cluster by WebHDFS. -- Generate job settings automatically, then submit the job. -- Wait until the notebook is ready.
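For orientation, the steps above map onto a handful of calls in the (now deprecated) `openpaisdk` package, the same `Job` API that this extension's `main.py` uses further down in this diff. The sketch below is illustrative only: the alias `my-cluster`, the virtual cluster, the image, and the resource numbers are assumptions, not defaults.

```python
# Sketch of the "Quick Submit" flow via the deprecated openpaisdk package.
# Assumes a cluster was registered with `opai cluster add --cluster-alias my-cluster ...`;
# every concrete value below is a placeholder, not an extension default.
from openpaisdk import Job

job = Job('jupyter_example').from_notebook(
    nb_file='hello-openpai.ipynb',        # notebook to upload and run
    cluster={
        'cluster_alias': 'my-cluster',    # alias chosen in `opai cluster add`
        'virtual_cluster': 'default',     # target virtual cluster
        'workspace': None,                # storage workspace (None: SDK default)
    },
    mode='interactive',                   # 'interactive' | 'script' | 'silent'
    image='openpai/pytorch-py36-cu90',    # docker image for the job
    resources={'cpu': 4, 'gpu': 1, 'memoryMB': 8192},
    sources=['hello-openpai.ipynb'],      # files to pack and upload
)

ret = job.submit()                        # upload sources and submit the job
print(ret['job_name'], ret['job_link'])

job.wait()                                # block until the notebook is ready
print(job.connect_jupyter()['notebook'])  # URL of the remote Jupyter notebook
```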
- -The picture below shows the submission process: - -![](docs_img/submitter-1.gif) - -You can safely close the page when the extension is waiting. Once the notebook is ready, the submitter will show up and give you the notebook URL: - -![](docs_img/submitter-2.gif) - -**Note: The waiting process will take 5 to 10 minutes.** If you would rather not wait, you can click the bottom link on the submitter to start a new session. The submitted job will not be lost; you can click the recent-jobs button to find it. - -### Submit as Interactive Notebook vs. Python Script vs. Silent Notebook - -You can submit jobs in three ways: -- as an ***interactive notebook*** -- as a ***Python script (.py file)*** -- as a ***silent notebook*** - -The interactive mode is a quick way for you to submit the notebook you work on locally to the cluster. The notebook will stay the same but have access to GPU resources on the cluster. This mode is mainly designed for experimenting and debugging. - -On the other hand, submitting the job as a .py file will first convert the notebook to a Python script, then execute the script directly. This mode is a good way for deployment and batch submission. - -If you submit a notebook as a silent notebook, you won't have an interactive notebook as in the interactive mode. Your notebook will be executed in the background. Once it is finished, you can get the result as a file. The difference between this mode and the Python script mode is that you cannot see the output while the silent notebook is running, but you keep the `matplotlib` plots and other graphical output of your notebook. - -### Advanced job configuration - -#### Set up frequently used `docker-images` and `resources` - -As shown in the example figure above, users can specify resources and a docker image by selection in the panel. Further, you can add your frequently used docker images or resource combinations by: - -```bash -opai set -g image-list+=<image1> image-list+=<image2> ... -opai set -g resource-list+="<#gpu>,<#cpu>,<#mem>" resource-list+="<#gpu>,<#cpu>,<#mem>" ... -``` -Here `<#mem>` can be a number in units of MB, or a string like `32GB` (or `32g`). - -For example, you can add `your.docker.image` and the resource spec `1 GPU, 4 vCores CPU, 3GB` by: - -```bash -opai set -g image-list+=your.docker.image -opai set -g resource-list+="1,4,3gb" -``` - -After running the commands, restart the notebook kernel to make them take effect. - -These settings are permanent since they are saved on disk. If you want to `update`, `delete`, or `change the order of` them, you can edit the file `~/.openpai/defaults.yaml` (for Windows, the path is `C:\Users\<username>\.openpai\defaults.yaml`) directly. Also remember to restart the notebook kernel after editing. - -#### Advanced configuration by `NotebookConfiguration` - -In the submission panel, users can change the basic configuration of the job. Users who want to change advanced configuration can set it through `NotebookConfiguration` in the notebook. - -For example, after executing the code below in a notebook cell, the extension will override the job's memory resource (here, setting `mem` to `512GB`). -```python -from openpaisdk.notebook import NotebookConfiguration - -NotebookConfiguration.set("mem", "512GB") -``` - -Execute the code below to have a quick look at all supported items in `NotebookConfiguration`. -```python -# print supported configuration items -NotebookConfiguration.print_supported_items() -``` - -### Quick Submit vs. Download Config
- -Only the pre-defined resource and docker image settings are available when you use the *Quick Submit* button to submit jobs. If you need different settings, you can click the *Download Config* button to get the job configuration file, then import it on the web portal for further configuration. - -## Job Management -![](docs_img/recent-jobs.gif) - -Clicking the recent-jobs button will open the *Recent Jobs* panel. **This panel records all jobs submitted by this extension on this machine** (if a job is submitted in a different way, it won't show up). The panel will show some basic information about your jobs. Also, it will show the notebook URL **when the job is submitted as an interactive notebook and the notebook is ready.** The panel will not show completed jobs by default, but you can use the upper-right toggle to find all jobs. - -## How to Update or Uninstall - -To update this extension, please use the following commands: -```bash -git clone https://github.com/Microsoft/pai -cd pai/contrib/notebook-extension -jupyter nbextension install openpai_submitter -jupyter nbextension enable openpai_submitter/main -``` - -To disable this extension, please use the following command: -```bash -jupyter nbextension disable openpai_submitter/main -``` - -## Known Issues -- This extension is not compatible with *Variable Inspector*. -- This extension is not compatible with AdBlock. - -## Feedback - -Please use this [link](https://github.com/microsoft/pai/issues/new?title=[Jupyter%20Extension%20Feedback]) for feedback. diff --git a/contrib/notebook-extension/docs_img/job-button.png b/contrib/notebook-extension/docs_img/job-button.png deleted file mode 100644 index 2118ade8b..000000000 Binary files a/contrib/notebook-extension/docs_img/job-button.png and /dev/null differ diff --git a/contrib/notebook-extension/docs_img/recent-jobs.gif b/contrib/notebook-extension/docs_img/recent-jobs.gif deleted file mode 100644 index d7a32a6c8..000000000 Binary files a/contrib/notebook-extension/docs_img/recent-jobs.gif and /dev/null differ diff --git a/contrib/notebook-extension/docs_img/restart-kernel.png b/contrib/notebook-extension/docs_img/restart-kernel.png deleted file mode 100644 index 5653f1861..000000000 Binary files a/contrib/notebook-extension/docs_img/restart-kernel.png and /dev/null differ diff --git a/contrib/notebook-extension/docs_img/submit-button.png b/contrib/notebook-extension/docs_img/submit-button.png deleted file mode 100644 index 2a8d1c253..000000000 Binary files a/contrib/notebook-extension/docs_img/submit-button.png and /dev/null differ diff --git a/contrib/notebook-extension/docs_img/submit-form.png b/contrib/notebook-extension/docs_img/submit-form.png deleted file mode 100644 index 286f987f1..000000000 Binary files a/contrib/notebook-extension/docs_img/submit-form.png and /dev/null differ diff --git a/contrib/notebook-extension/docs_img/submitter-1.gif b/contrib/notebook-extension/docs_img/submitter-1.gif deleted file mode 100644 index c2c763061..000000000 Binary files a/contrib/notebook-extension/docs_img/submitter-1.gif and /dev/null differ diff --git a/contrib/notebook-extension/docs_img/submitter-2.gif b/contrib/notebook-extension/docs_img/submitter-2.gif deleted file mode 100644 index 90fe4c33d..000000000 Binary files a/contrib/notebook-extension/docs_img/submitter-2.gif and /dev/null differ diff --git a/contrib/notebook-extension/examples/hello-openpai.ipynb b/contrib/notebook-extension/examples/hello-openpai.ipynb deleted file mode 100644 index 1cdbef078..000000000 ---
a/contrib/notebook-extension/examples/hello-openpai.ipynb +++ /dev/null @@ -1,71 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "\"Hello, OpenPAI\"\n" - ] - } - ], - "source": [ - "! echo \"Hello, OpenPAI\"" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" - }, - "varInspector": { - "cols": { - "lenName": 16, - "lenType": 16, - "lenVar": 40 - }, - "kernels_config": { - "python": { - "delete_cmd_postfix": "", - "delete_cmd_prefix": "del ", - "library": "var_list.py", - "varRefreshCmd": "print(var_dic_list())" - }, - "r": { - "delete_cmd_postfix": ") ", - "delete_cmd_prefix": "rm(", - "library": "var_list.r", - "varRefreshCmd": "cat(var_dic_list()) " - } - }, - "types_to_exclude": [ - "module", - "function", - "builtin_function_or_method", - "instance", - "_Feature" - ], - "window_display": false - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/contrib/notebook-extension/lint.cmd b/contrib/notebook-extension/lint.cmd deleted file mode 100644 index d0a4a2a05..000000000 --- a/contrib/notebook-extension/lint.cmd +++ /dev/null @@ -1 +0,0 @@ -standard --env amd --env browser --env es6 --fix diff --git a/contrib/notebook-extension/openpai_submitter/README.md b/contrib/notebook-extension/openpai_submitter/README.md deleted file mode 100644 index 09a375409..000000000 --- a/contrib/notebook-extension/openpai_submitter/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# OpenPAI Submitter - -A jupyter notebook plugin for quick submission to OpenPAI cluster. \ No newline at end of file diff --git a/contrib/notebook-extension/openpai_submitter/data.py b/contrib/notebook-extension/openpai_submitter/data.py deleted file mode 100644 index 15a86bc2a..000000000 --- a/contrib/notebook-extension/openpai_submitter/data.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
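# Storage scheme implemented by openpai_ext_Storage below: job records are
# kept as a JSON list in a "data" file inside the SDK cache directory, and
# writers take a cooperative lock by creating an empty "data.lock" file next
# to it. acquire_lock() raises instead of blocking when the lock file already
# exists, so a stale lock left by a crashed process must be removed manually
# (as its error message says); add() trims the list to at most max_length
# records before appending.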
- - -import json as openpai_ext_json -import threading as openpai_ext_threading - -if 'openpai_ext_lock' not in vars(): - openpai_ext_buffer_lock = openpai_ext_threading.Lock() - - -class openpai_ext_Storage(object): - ''' - This class will not be run in multiple threads, - but it may be run in multiple processes. - It uses file system to store information and sync with each processes. - ''' - - def use_output(func): - - def func_wrapper(*args, **kwargs): - token = args[1] - args = args[0:1] + args[2:] - ret = func(*args, **kwargs) - openpai_ext_buffer_lock.acquire() - print("__openpai${}__".format(token) + openpai_ext_json.dumps( - { - 'code': 0, - 'message': ret, - } - ), flush=True) - openpai_ext_buffer_lock.release() - - return func_wrapper - - def __init__(self, max_length=100): - import os - from openpaisdk import __flags__ - self.os = os - self.max_length = max_length - self.dirname = os.path.join(os.path.expanduser('~'), __flags__.cache) - self.lock_path = os.path.join(self.dirname, "data.lock") - self.data_path = os.path.join(self.dirname, "data") - if not(os.path.exists(self.data_path)): - self.data = [] - self.write_to_file() - else: - self.read_file() - - def acquire_lock(self): - if self.os.path.exists(self.lock_path): - raise Exception( - 'Unexpected lock file: {}! Please refresh the page or remove it manually!'.format(self.lock_path)) - with open(self.lock_path, 'w'): - pass - - def release_lock(self): - if not(self.os.path.exists(self.lock_path)): - raise Exception('Missing lock file: {}! Please refresh the page.'.format(self.lock_path)) - self.os.remove(self.lock_path) - - def write_to_file(self): - self.acquire_lock() - try: - with open(self.data_path, 'w') as f: - openpai_ext_json.dump(self.data, f) - except Exception: - pass - finally: - self.release_lock() - - def read_file(self): - with open(self.data_path) as f: - self.data = openpai_ext_json.load(f) - - @use_output - def get(self): - self.read_file() - return self.data - - @use_output - def add(self, record): - self.read_file() - if len(self.data) == self.max_length: - self.data = self.data[1:] - self.data.append(record) - self.write_to_file() - return record - - @use_output - def clear(self): - self.data = [] - self.write_to_file() - return "" - - @use_output - def save(self, data): - self.data = data - self.write_to_file() - return "" - - -openpai_ext_storage = openpai_ext_Storage() diff --git a/contrib/notebook-extension/openpai_submitter/description.yaml b/contrib/notebook-extension/openpai_submitter/description.yaml deleted file mode 100644 index 9a185e503..000000000 --- a/contrib/notebook-extension/openpai_submitter/description.yaml +++ /dev/null @@ -1,6 +0,0 @@ -Type: IPython Notebook Extension -Name: openpai_submitter -Description: A jupyter notebook plugin for quick submission to OpenPAI cluster. -Link: README.md -Main: main.js -Compatibility: 3.x, 4.x, 5.x, 6.x \ No newline at end of file diff --git a/contrib/notebook-extension/openpai_submitter/main.js b/contrib/notebook-extension/openpai_submitter/main.js deleted file mode 100644 index c15d091f0..000000000 --- a/contrib/notebook-extension/openpai_submitter/main.js +++ /dev/null @@ -1,81 +0,0 @@ -// Copyright (c) Microsoft Corporation -// All rights reserved. 
-// -// MIT License -// -// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -// documentation files (the "Software"), to deal in the Software without restriction, including without limitation -// the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -// to permit persons to whom the Software is furnished to do so, subject to the following conditions: -// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -// BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -define([ - 'require', - 'jquery', - 'base/js/namespace', - 'base/js/events', - '//cdn.datatables.net/1.10.19/js/jquery.dataTables.min.js', - 'nbextensions/openpai_submitter/scripts/panel', - 'nbextensions/openpai_submitter/scripts/panel_recent' -], function (requirejs, $, Jupyter, events, _, Panel, PanelRecent) { - function loadCss (filename) { - var cssUrl = requirejs.toUrl(filename) - $('head').append( - $('') - .attr('href', cssUrl) - ) - } - - function registerButtonPanel () { - var handler = function () { - panel.send(panel.MSG.CLICK_BUTTON) - } - var action = { - icon: 'fa-rocket', // a font-awesome class used on buttons, etc - help: 'openpai-submitter', - help_index: 'zz', - handler: handler - } - var prefix = 'my_extension' - var actionName = 'show-panel' - var fullActionName = Jupyter.actions.register(action, actionName, prefix) - Jupyter.toolbar.add_buttons_group([fullActionName]) - } - function registerButtonPanelRecent () { - var handler = function () { - panelRecent.send(panelRecent.MSG.CLICK_BUTTON) - } - var action = { - icon: 'fa-list-alt', // a font-awesome class used on buttons, etc - help: 'openpai-submitter', - help_index: 'zz', - handler: handler - } - var prefix = 'my_extension' - var actionName = 'show-panel-recent' - var fullActionName = Jupyter.actions.register(action, actionName, prefix) - Jupyter.toolbar.add_buttons_group([fullActionName]) - } - var panel = Panel() - var panelRecent = PanelRecent() - - function loadIPythonExtension () { - loadCss('./misc/style.css') - loadCss('//cdn.datatables.net/1.10.19/css/jquery.dataTables.min.css') - panel.send(panel.MSG.PLEASE_INIT) - panelRecent.send(panelRecent.MSG.PLEASE_INIT) - registerButtonPanel() - registerButtonPanelRecent() - panel.bindPanelRecent(panelRecent) - panelRecent.bindPanel(panel) - } - return { - load_ipython_extension: loadIPythonExtension - } -}) diff --git a/contrib/notebook-extension/openpai_submitter/main.py b/contrib/notebook-extension/openpai_submitter/main.py deleted file mode 100644 index 6c4e52670..000000000 --- a/contrib/notebook-extension/openpai_submitter/main.py +++ /dev/null @@ -1,251 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -import threading as openpai_ext_threading -import json as openpai_ext_json -from openpaisdk import __flags__ as openpai_ext_flags - -openpai_ext_flags.disable_to_screen = True - -if 'openpai_ext_lock' not in vars(): - openpai_ext_buffer_lock = openpai_ext_threading.Lock() - - -class openpai_ext_Thread(openpai_ext_threading.Thread): - ''' - In Javascript: - Each time the code executed by Jupyter.notebook.kernel.execute gives output, - the callback function in callbacks.iopub.output will receive message. - - In Python: - We run python code in a new thread to avoid blocking the notebook. - The handler is set to print json messages, - thus the callback in javascript will get noticed. - ''' - - def success_handler(self, ret): - openpai_ext_buffer_lock.acquire() - print("__openpai${}__".format(self.token) + openpai_ext_json.dumps( - { - 'code': 0, - 'message': ret, - } - ), flush=True) - openpai_ext_buffer_lock.release() - - def err_handler(self, e): - openpai_ext_buffer_lock.acquire() - print("__openpai${}__".format(self.token) + openpai_ext_json.dumps( - { - 'code': -1, - 'message': str(e), - } - ), flush=True) - openpai_ext_buffer_lock.release() - - def __init__(self, target, token, args=[], kwargs={}): - super(openpai_ext_Thread, self).__init__() - self.target = target - self.token = token - self.args = args - self.kwargs = kwargs - - def run(self): - try: - ret = self.target(*self.args, **self.kwargs) - self.success_handler(ret) - except Exception as e: - import traceback - self.err_handler(traceback.format_exc()) - - -class openpai_ext_Interface(object): - - def __init__(self): - from openpaisdk import LayeredSettings, ClusterList - if LayeredSettings.get('container-sdk-branch') != 'master': - LayeredSettings.update('user_basic', 'container-sdk-branch', 'master') - self.cll = ClusterList().load() - - def execute(self, target, token, args=[], kwargs={}): - t = openpai_ext_Thread(target, token, args, kwargs) - t.start() - - def tell_resources(self, token): - self.execute(self.cll.tell, token) - - def available_resources(self, token): - self.execute(self.cll.available_resources, token) - - def read_defaults(self, token): - def _read_defaults_helper(): - from openpaisdk import LayeredSettings - from openpaisdk.job import JobResource - # add default settings - image_list = LayeredSettings.get('image-list') - if image_list is None or len(image_list) == 0: - # add default images here - default_images = [ - 'openpai/pytorch-py36-cu90', - 
'openpai/pytorch-py36-cpu', - 'openpai/tensorflow-py36-cu90', - 'openpai/tensorflow-py36-cpu', - ] - for image in default_images: - LayeredSettings.update('global_default', 'image-list', image) - image_list = LayeredSettings.get('image-list') - resource_list = JobResource.parse_list(LayeredSettings.get('resource-list')) - if resource_list is None or len(resource_list) == 0: - # add default resource here - default_resources = [ - '1,4,8g', - '1,8,16g', - '0,4,8g', - '2,8,16g', - '4,16,32g', - ] - for resource in default_resources: - LayeredSettings.update('global_default', 'resource-list', resource) - resource_list = JobResource.parse_list(LayeredSettings.get('resource-list')) - return { - 'image-list': image_list, - 'resource-list': resource_list, - 'web-default-form': LayeredSettings.get('web-default-form'), - 'web-default-image': LayeredSettings.get('web-default-image'), - 'web-default-resource': LayeredSettings.get('web-default-resource'), - } - self.execute(_read_defaults_helper, token) - - def __set_selected(self, ctx): - from openpaisdk import LayeredSettings - LayeredSettings.update('global_default', 'web-default-form', ctx['form']) - LayeredSettings.update('global_default', 'web-default-image', ctx['docker_image']) - LayeredSettings.update('global_default', 'web-default-resource', ','.join([str(ctx['gpu']), str(ctx['cpu']), str(ctx['memoryMB'])])) - - def __submit_job_helper(self, ctx): - import tempfile - from openpaisdk import Job - import os - import sys - from openpaisdk.notebook import get_notebook_path - from openpaisdk import LayeredSettings - import yaml - - # save settings - self.__set_selected(ctx) - - # setting layers description - # layer name | from : priority - # user_advanced | NotebookConfiguration.set : 0 - # user_basic | extension panel selection : 1 - # local_default | deaults in .openpai/defaults.yaml : 2 - # global_default | defaults in ~/.openpai/defaults.yaml : 3 - # - | predefined in flags.py : 4 - LayeredSettings.update("user_basic", "cluster-alias", ctx['cluster']) - LayeredSettings.update("user_basic", "virtual-cluster", ctx['vc']) - LayeredSettings.update("user_basic", "image", ctx['docker_image']) - LayeredSettings.update("user_basic", "cpu", ctx['cpu']), - LayeredSettings.update("user_basic", "gpu", ctx['gpu']), - LayeredSettings.update("user_basic", "memoryMB", ctx['memoryMB']) - - cfgs = LayeredSettings.as_dict() - - notebook_path = get_notebook_path() - _, _, sources = next(os.walk('.')) - - if ctx['form'] == 'file': - jobname = 'python_' + tempfile.mkdtemp()[-8:] - mode = 'script' - elif ctx['form'] == 'notebook': - jobname = 'jupyter_' + tempfile.mkdtemp()[-8:] - mode = 'interactive' - else: - jobname = 'silent_' + tempfile.mkdtemp()[-8:] - mode = 'silent' - - job = Job(jobname)\ - .from_notebook( - nb_file=get_notebook_path(), - cluster={ - 'cluster_alias': cfgs['cluster-alias'], - 'virtual_cluster': cfgs['virtual-cluster'], - 'workspace': cfgs['workspace'], - }, - mode=mode, - **{ - 'token': '', - 'image': cfgs["image"], - 'resources': { - 'cpu': cfgs["cpu"], - 'gpu': cfgs["gpu"], - 'memoryMB': cfgs["memoryMB"], - 'mem': cfgs['mem'] - }, - 'sources': sources + cfgs["sources"], - 'pip_installs': cfgs["pip-installs"], - } - ) - ctx['job_config'] = yaml.dump(job.get_config(), default_flow_style=False) - ctx['jobname'] = job.name - if ctx['type'] == 'quick': - ret = job.submit() - ctx['joblink'] = ret['job_link'] - ctx['jobname'] = ret['job_name'] - return ctx - - def submit_job(self, token, ctx): - self.execute(self.__submit_job_helper, token, 
args=[ctx]) - - def __wait_jupyter_helper(self, ctx): - from openpaisdk import Job - job = Job(ctx['jobname']).load(cluster_alias=ctx['cluster']) - ret = job.wait() - ret = job.connect_jupyter() # ret will be None if run in silent mode and without this - ctx['state'] = ret['state'] - if ret['notebook'] is None: - ctx['notebook_url'] = '-' - else: - ctx['notebook_url'] = ret['notebook'] - return ctx - - def wait_jupyter(self, token, ctx): - self.execute(self.__wait_jupyter_helper, token, args=[ctx]) - - def __detect_jobs_helper(self, jobs_ctx): - from openpaisdk import Job - ret = [] - for ctx in jobs_ctx: - try: - job = Job(ctx['jobname']).load(cluster_alias=ctx['cluster']) - job_info = job.connect_jupyter() - ctx['state'] = job_info['state'] - ctx['notebook_url'] = job_info['notebook'] - if ctx['notebook_url'] is None: - ctx['notebook_url'] = '-' - except Exception as e: - ctx['state'] = 'UNKNOWN'.format(e) - ctx['notebook_url'] = '-' - finally: - ret.append(ctx) - return ret - - def detect_jobs(self, token, jobs_ctx): - self.execute(self.__detect_jobs_helper, token, args=[jobs_ctx]) - - -openpai_ext_interface = openpai_ext_Interface() diff --git a/contrib/notebook-extension/openpai_submitter/misc/loading.gif b/contrib/notebook-extension/openpai_submitter/misc/loading.gif deleted file mode 100644 index e6b32df2b..000000000 Binary files a/contrib/notebook-extension/openpai_submitter/misc/loading.gif and /dev/null differ diff --git a/contrib/notebook-extension/openpai_submitter/misc/pailogo.jpg b/contrib/notebook-extension/openpai_submitter/misc/pailogo.jpg deleted file mode 100644 index cb3a2b9a0..000000000 Binary files a/contrib/notebook-extension/openpai_submitter/misc/pailogo.jpg and /dev/null differ diff --git a/contrib/notebook-extension/openpai_submitter/misc/style.css b/contrib/notebook-extension/openpai_submitter/misc/style.css deleted file mode 100644 index 67e024ce0..000000000 --- a/contrib/notebook-extension/openpai_submitter/misc/style.css +++ /dev/null @@ -1,161 +0,0 @@ -.openpai-wrapper{ - position: fixed !important; /* remove !important will cause problems */ - border: thin solid rgba(0, 0, 0, 0.38); - border-radius: 5px; - padding: 10px; - background-color: #fff; - opacity: .95; - z-index: 100; - overflow: hidden; -} - -#openpai-panel-wrapper{ - width: 629px; - height: 566px; - top: 10%; - left: 50%; -} - -#openpai-panel-recent-wrapper{ - width: 720px; - height: 400px; - top: 10%; - left: 30%; -} -#openpai-panel, #openpai-panel-recent{ - margin-left: 10px; - margin-right: 10px; - margin-top: 5px; -} - -.openpai-float-right{ - float: right; -} - -.openpai-inline{ - display: inline; -} - -.openpai-header-text{ - vertical-align: middle; - padding-left: 5px; -} - -.openpai-button{ - padding-top: 8px; -} - -.openpai-panel-header{ - margin-bottom: 10px; - margin-right: 5px; - margin-left: 5px; - margin-top: 2px; -} - -table, th{ - text-align: center; -} - -.openpai-fieldset{ - border: 1px solid #c0c0c0; - margin: 0 2px; - padding: 0.35em 0.625em 0.75em; -} - -.openpai-legend{ - font-size: 17px; - line-height: inherit; - border: 0; - padding: 2px; - width: auto; - margin-bottom: 0px; -} - -#basic-setting-fieldset{ - margin-bottom: 10px; - padding-bottom: 0.3em; - padding-top: 0.8em; - padding-left: 1em; - padding-right: 0.35em; -} - -.loading-img{ - width: 45px; - height: 45px; -} - -.loading-img-small{ - width: 19px; - height: 19px; -} - -.switch { - position: relative; - display: inline-block; - width: 36px; - height: 20px; - margin-bottom: 0px; -} - -.switch input { - opacity: 
0; - width: 0; - height: 0; -} - -.slider { - position: absolute; - cursor: pointer; - top: 0; - left: 0; - right: 0; - bottom: 0; - background-color: #ccc; - -webkit-transition: .3s; - transition: .3s; -} - -.slider:before { - position: absolute; - content: ""; - height: 15px; - width: 15px; - left: 4px; - bottom: 3px; - background-color: white; - -webkit-transition: .3s; - transition: .3s; -} - -input:checked + .slider { - background-color: #337ab7; -} - -input:focus + .slider { - box-shadow: 0 0 1px #2196F3; -} - -input:checked + .slider:before { - -webkit-transform: translateX(14px); - -ms-transform: translateX(14px); - transform: translateX(14px); -} - -.slider.round { - border-radius: 30px; -} - -.slider.round:before { - border-radius: 50%; -} - -#openpai-hide-jobs-toggle{ - margin-right: 1em; - margin-top: 0.2em; - margin-bottom: 0.2em; -} - -.openpai-fieldset .openpai-table-button{ - color: #337ab7; - line-height: 1; -} \ No newline at end of file diff --git a/contrib/notebook-extension/openpai_submitter/scripts/config.js b/contrib/notebook-extension/openpai_submitter/scripts/config.js deleted file mode 100644 index 2a3ab4127..000000000 --- a/contrib/notebook-extension/openpai_submitter/scripts/config.js +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright (c) Microsoft Corporation -// All rights reserved. -// -// MIT License -// -// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -// documentation files (the "Software"), to deal in the Software without restriction, including without limitation -// the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -// to permit persons to whom the Software is furnished to do so, subject to the following conditions: -// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -// BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -define([], function () { - return { - plugin_name: 'openpai_submitter', - panel_toggle_speed: 400 - } -}) diff --git a/contrib/notebook-extension/openpai_submitter/scripts/interface.js b/contrib/notebook-extension/openpai_submitter/scripts/interface.js deleted file mode 100644 index 68a93fa82..000000000 --- a/contrib/notebook-extension/openpai_submitter/scripts/interface.js +++ /dev/null @@ -1,175 +0,0 @@ -// Copyright (c) Microsoft Corporation -// All rights reserved. -// -// MIT License -// -// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -// documentation files (the "Software"), to deal in the Software without restriction, including without limitation -// the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -// to permit persons to whom the Software is furnished to do so, subject to the following conditions: -// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
-// -// THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -// BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -define([ - 'require', - 'jquery', - 'base/js/namespace', - 'base/js/events', - 'nbextensions/openpai_submitter/scripts/config' -], -function (requirejs, $, Jupyter, events, config) { - var panel - var codeMain - var codeStorage - var pool = [] // {token: "token", resolveFunc: resolveFunc, rejectFunc: rejectFunc} - - function getToken () { - return Math.random().toString(36).substring(2, 6) + Math.random().toString(36).substring(2, 6) - } - - function initiate (panelInstance, resolve, reject) { - /* save the python code to codeMain */ - panel = panelInstance - var mainUrl = requirejs.toUrl('../main.py') - var storageUrl = requirejs.toUrl('../data.py') - var loadMain = new Promise( - function (resolve, reject) { - $.get(mainUrl).done(function (data) { - codeMain = data - resolve() - }) - }) - var loadStorage = new Promise( - function (resolve, reject) { - $.get(storageUrl).done(function (data) { - codeStorage = data - resolve() - }) - }) - Promise.all([loadMain, loadStorage]).then( - () => resolve() - ).catch((e) => reject(e)) - } - - var getIOPub = function (resolve, reject) { - return { - output: function (msg) { - /* - A callback to handle python execution. - Note: This function will be executed multiple times, - if any stdout/stderr comes out. - */ - function parseSingleOutput (token, msgContent) { - /* - msgContent: parsed JSON, such as: {"code": 0, "message": ""} - */ - for (var pooledToken in pool) { - if (pooledToken === token) { - if (msgContent['code'] !== 0) { pool[token]['rejectFunc'](msgContent['message']) } else { pool[token]['resolveFunc'](msgContent['message']) } - delete pool[token] - return - } - } - console.error('[openpai submitter] Unknown token', token) - } - console.log('[openpai submitter] [code return]:', msg) - if (msg.msg_type === 'error') { - reject(msg.content.evalue) - } else if (msg.content.name !== 'stdout') { - // ignore any info which is not stdout - console.error(msg.content.text) - } else { - try { - var m = msg.content.text - var tokens = m.match(/__openpai\$(.{8})__/g) - if (tokens === null || tokens.length === 0) { - console.error(m) - return - } - var splittedMSG = m.split(/__openpai\$.{8}__/) - var i = 0 - for (var item of splittedMSG) { - item = $.trim(item) - if (item === '') continue - var jsonMSG = JSON.parse(item) - parseSingleOutput(tokens[i].substr(10, 8), jsonMSG) - i += 1 - } - } catch (e) { - console.error(e) - } - } - } - } - } - - // return a promise - function executePromise (initCode, code) { - return new Promise( - function (resolve, reject) { - if (!(Jupyter.notebook.kernel.is_connected())) { - console.error('Cannot find active kernel.') - throw new Error('Cannot find active kernel. 
Please wait until the kernel is ready and refresh.') - } - resolve() - } - ).then( - function () { - console.log('[openpai submitter] [code executed]:' + code) - return new Promise( - function (resolve, reject) { - /* replace with real token */ - var token = getToken() - code = code.replace('', token) - var codeMerged = initCode + '\n' + code - /* register final resolve / reject */ - pool[token] = { - resolveFunc: resolve, - rejectFunc: reject - } - /* execute */ - Jupyter.notebook.kernel.execute( - codeMerged, { - iopub: getIOPub(resolve, reject) - } - ) - }) - } - ) - } - - return { - initiate: initiate, - - // main api - read_defaults: - () => executePromise(codeMain, 'openpai_ext_interface.read_defaults("")'), - tell_resources: - () => executePromise(codeMain, 'openpai_ext_interface.tell_resources("")'), - available_resources: - () => executePromise(codeMain, 'openpai_ext_interface.available_resources("")'), - zip_and_upload: - (ctx) => executePromise(codeMain, 'openpai_ext_interface.zip_and_upload("",' + JSON.stringify(ctx) + ')'), - submit_job: - (ctx) => executePromise(codeMain, 'openpai_ext_interface.submit_job("",' + JSON.stringify(ctx) + ')'), - wait_jupyter: - (ctx) => executePromise(codeMain, 'openpai_ext_interface.wait_jupyter("",' + JSON.stringify(ctx) + ')'), - detect_jobs: - (jobsCtx) => executePromise(codeMain, 'openpai_ext_interface.detect_jobs("",' + JSON.stringify(jobsCtx) + ')'), - - // storage api - add_job: - (record) => executePromise(codeStorage, 'openpai_ext_storage.add("",' + JSON.stringify(record) + ')'), - get_jobs: - () => executePromise(codeStorage, 'openpai_ext_storage.get("")'), - save_jobs: - (data) => executePromise(codeStorage, 'openpai_ext_storage.save("", ' + JSON.stringify(data) + ')') - - } -} -) diff --git a/contrib/notebook-extension/openpai_submitter/scripts/panel.js b/contrib/notebook-extension/openpai_submitter/scripts/panel.js deleted file mode 100644 index 8b6b5a77a..000000000 --- a/contrib/notebook-extension/openpai_submitter/scripts/panel.js +++ /dev/null @@ -1,539 +0,0 @@ -// Copyright (c) Microsoft Corporation -// All rights reserved. -// -// MIT License -// -// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -// documentation files (the "Software"), to deal in the Software without restriction, including without limitation -// the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -// to permit persons to whom the Software is furnished to do so, subject to the following conditions: -// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -// BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
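// Structure of the code below: the panel is a small message-driven state
// machine. STATUS_R enumerates the UI states (NOT_READY, READY_LOADING,
// SHOWING_INFO, SUBMITTING_1..SUBMITTING_OK, ERROR, FATAL, ...), and MSG_R
// the messages accepted by send(); send() switches on the message and calls
// the matching handler (handleInit, handleRefresh, handleSubmitStart1, ...),
// which advances the status via set() and updates the DOM.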
- -define([ - 'require', - 'jquery', - 'base/js/namespace', - 'base/js/events', - 'nbextensions/openpai_submitter/scripts/config', - 'nbextensions/openpai_submitter/scripts/interface', - 'nbextensions/openpai_submitter/scripts/utils' -], -function (requirejs, $, Jupyter, events, config, Interface, Utils) { - function Panel () { - var STATUS_R = [ - 'NOT_READY', - 'READY_NOT_LOADING', - 'READY_LOADING', - 'SHOWING_INFO', - 'SUBMITTING_1', - 'SUBMITTING_2', - 'SUBMITTING_3', - 'SUBMITTING_OK', - 'CANCELLING', - 'ERROR', - 'FATAL' - ] - var MSG_R = [ - 'PLEASE_INIT', - 'INIT_OK', - 'CLICK_BUTTON', - 'CLICK_CLOSE', - 'CLICK_REFRESH', - 'SUBMIT_START_1', - 'SUBMIT_START_2', - 'SUBMIT_START_3', - 'SUBMIT_OK', - 'CANCEL', - 'ERROR', - 'FATAL_ERROR' - ] - - var STATUS = {} - for (var i = 0; i < STATUS_R.length; i += 1) { STATUS[STATUS_R[i]] = i } - var MSG = {} - for (var j = 0; j < MSG_R.length; j += 1) { MSG[MSG_R[j]] = j } - - var set = function (s) { - // console.log('[openpai submitter] set status', STATUS_R[s]) - status = s - } - - var status - var panelRecent - - set(STATUS.NOT_READY) - - var speed = config.panel_toggle_speed - - var showInformation = function (info) { - /* this function will hide table and show information for users. */ - $('#panel-table-wrapper').hide() - $('#panel-information').html(info) - $('#panel-information-wrapper').show() - } - - var appendInformation = function (info) { - $('#panel-information').append(info) - } - - var send = function (msg, value) { - // console.log('[openpai submitter]', 'status:', STATUS_R[status], 'msg', MSG_R[msg], 'value', value) - switch (msg) { - case MSG.PLEASE_INIT: - handleInit() - break - case MSG.INIT_OK: - handleInitOK() - break - case MSG.CLICK_BUTTON: - if (!($('#openpai-panel-wrapper').is(':visible'))) { - if ((status !== STATUS.READY_LOADING) && (status !== STATUS.SUBMITTING_1) && - (status !== STATUS.SUBMITTING_2) && (status !== STATUS.SUBMITTING_3) && - (status !== STATUS.SUBMITTING_4) && (status !== STATUS.SUBMITTING_OK) && - (status !== STATUS.FATAL)) { send(MSG.CLICK_REFRESH) } - } - togglePanel() - break - case MSG.CLICK_CLOSE: - closePanel() - break - case MSG.CLICK_REFRESH: - handleRefresh() - break - case MSG.SUBMIT_START_1: - handleSubmitStart1(value) - break - case MSG.ERROR: - handleError(value) - break - case MSG.FATAL_ERROR: - handleFatalError(value) - break - default: - send(MSG.ERROR, 'unknown message received by panel!') - } - } - - var handleInit = function () { - var panelUrl = requirejs.toUrl('../templates/panel.html') - var panel = $('
').load(panelUrl) - - Promise.all([ - /* Promise 1: add panel to html body and bind functions */ - panel.promise() - .then( - function () { - panel.draggable() - panel.toggle() - $('body').append(panel) - $('body').on('click', '#close-panel-button', function () { - send(MSG.CLICK_CLOSE) - }) - $('body').on('click', '#refresh-panel-button', function () { - send(MSG.CLICK_REFRESH) - }) - } - ) - .then( - () => Utils.set_timeout(600) - ).then(function () { - panel.resizable() - $('.openpai-logo').attr('src', requirejs.toUrl('../misc/pailogo.jpg')) - $('#cluster-data') - .DataTable({ - dom: 'rtip', - order: [ - [2, 'desc'] - ] - }) - }), - /* Promise 2: load python script */ - new Promise(function (resolve, reject) { - Interface.initiate(panel, resolve, reject) - }) - ]).then(function (value) { - send(MSG.INIT_OK, value) - }) - .catch(function (err) { - send(MSG.FATAL_ERROR, err) - }) - } - - var handleInitOK = function () { - if (status === STATUS.NOT_READY) { - if ($('#openpai-panel-wrapper').is(':visible')) { - /* if the panel has been shown, then load the cluster info */ - set(STATUS.READY_NOT_LOADING) - send(MSG.CLICK_REFRESH) - } else { - /* if the panel has not been shown, change the status to READY_NOT_LOADING and wait */ - showInformation('') - set(STATUS.READY_NOT_LOADING) - } - } - } - - var handleRefresh = function () { - if (status === STATUS.NOT_READY || status === STATUS.READY_LOADING) { return } - if (status === STATUS.SUBMITTING_1 || status === STATUS.SUBMITTING_2 || - status === STATUS.SUBMITTING_3 || status === STATUS.SUBMITTING_4) { - alert('Please do not refresh during submission.') - return - } - if (status === STATUS.FATAL) { - alert('Please refresh the whole page to reload this extension.') - return - } - if (status === STATUS.SUBMIT_OK) { - if (confirm('Are you sure to refresh? This will clear the current job!') === false) { - return - } - } - set(STATUS.READY_LOADING) - showInformation('Loading the cluster information, please wait...' 
+ Utils.getLoadingImg('loading-cluster-info')) - Interface.read_defaults().then(function (data) { - var resourceMenu = '' - for (var item of data['resource-list']) { - var memoryGB = parseInt(item['memoryMB'] / 1024) - var optionValue = item['gpu'] + ',' + item['cpu'] + ',' + item['memoryMB'] - resourceMenu += '\n' - } - resourceMenu = $('') - var imageAliasDict = { - 'openpai/pytorch-py36-cu90': 'PyTorch + Python3.6 with GPU, CUDA 9.0', - 'openpai/pytorch-py36-cpu': 'PyTorch + Python3.6 with CPU', - 'openpai/tensorflow-py36-cu90': 'TensorFlow + Python3.6 with GPU, CUDA 9.0', - 'openpai/tensorflow-py36-cpu': 'TensorFlow + Python3.6 with CPU' - } - var imageMenu = '' - var imageAlias - for (var image of data['image-list']) { - if (image in imageAliasDict) { imageAlias = imageAliasDict[image] } else { imageAlias = image } - imageMenu += '' - } - imageMenu = $('') - // select the first option - - // append to html - $('#resource-menu').remove() - $('#docker-image-menu').remove() - $('#resouce-menu-label').after(resourceMenu) - $('#docker-image-menu-label').after(imageMenu) - // select option - var formMenu = $('#submit-form-menu') - formMenu.find('option').removeAttr('selected') - resourceMenu.find('option').removeAttr('selected') - imageMenu.find('option').removeAttr('selected') - function selectOption (menu, value) { - if (value) { - var option = menu.find('option[value="' + value + '"]') - if (option.length > 0) { option.attr('selected', 'selected') } else { $(menu.find('option')[0]).attr('selected', 'selected') } - } else { $(menu.find('option')[0]).attr('selected', 'selected') } - } - selectOption(formMenu, data['web-default-form']) - selectOption(resourceMenu, data['web-default-resource']) - selectOption(imageMenu, data['web-default-image']) - }).then( - () => - Interface.tell_resources().then(function (data) { - var ret = [] - for (var cluster in data) { - for (var vc in data[cluster]) { - ret.push({ - cluster: cluster, - vc: vc, - gpu: { - display: Utils.getLoadingImgSmall(), - gpu_value: 0 - }, - button_sub: ``, - button_edit: `` - }) - } - } - $('#cluster-data') - .DataTable({ - dom: 'rtip', - order: [ - [2, 'desc'] - ], - destroy: true, - data: ret, - columns: [{ - data: 'cluster' - }, { - data: 'vc' - }, { - data: 'gpu', - type: 'num', - render: { - _: 'display', - sort: 'gpu_value' - } - }, { - data: 'button_sub' - }, { - data: 'button_edit' - }], - initComplete: function () { - set(STATUS.SHOWING_INFO) - Interface.available_resources().then(function (clusterData) { - var table = $('#cluster-data').DataTable() - table.rows().every(function (rowIdx, tableLoop, rowLoop) { - var tableData = this.data() - var info = clusterData[tableData['cluster']][tableData['vc']] - if (info === undefined) { - tableData['gpu']['gpu_value'] = -2 - tableData['gpu']['display'] = '?' - } else - if (info['GPUs'] === -1) { - tableData['gpu']['gpu_value'] = info['GPUs'] - tableData['gpu']['display'] = '?' 
- } else { - tableData['gpu']['gpu_value'] = info['GPUs'] - tableData['gpu']['display'] = info['GPUs'] - } - this.data(tableData) - }) - table.draw() - }) - }, - fnDrawCallback: function () { - $('.openpai-tooltip').tooltip({ - classes: { - 'ui-tooltip': 'highlight' - } - } - ) - $('.submit_button').on('click', function () { - var cluster = $(this).data('cluster') - var vc = $(this).data('vc') - var type = $(this).data('type') - send(MSG.SUBMIT_START_1, { - cluster: cluster, - vc: vc, - type: type - }) - }) - } - }) - $('#panel-information-wrapper').hide() - $('#panel-table-wrapper').show() - }) - ) - .catch(function (e) { - send(MSG.ERROR, e) - }) - } - - var handleSubmitStart1 = function (info) { - if (status !== STATUS.SHOWING_INFO) { - return - } - set(STATUS.SUBMITTING_1) - /* get some basic */ - var submittingCtx = { - form: $('#submit-form-menu').val(), // file | notebook | silent - type: info['type'], // quick | edit - cluster: info['cluster'], - vc: info['vc'], - gpu: parseInt($('#resource-menu option:selected').data('gpu')), - cpu: parseInt($('#resource-menu option:selected').data('cpu')), - memoryMB: parseInt($('#resource-menu option:selected').data('memory')), - docker_image: $('#docker-image-menu').val(), - notebook_name: Jupyter.notebook.notebook_name - } - if (submittingCtx['type'] === 'edit') { submittingCtx['stage_num'] = 1 } else { - if (submittingCtx['form'] === 'file') { submittingCtx['stage_num'] = 1 } else { submittingCtx['stage_num'] = 2 } - } - - console.log('[openpai submitter] submitting ctx:', submittingCtx) - showInformation('') - if (submittingCtx['type'] === 'edit') { appendInformation('Uploading files and generating config...' + Utils.getLoadingImg('loading-stage-1')) } else { - if (submittingCtx['stage_num'] === 1) { appendInformation('Uploading files and submitting the job...' + Utils.getLoadingImg('loading-stage-1')) } else { appendInformation('Stage 1 / 2 : Uploading files and submitting the job...' + Utils.getLoadingImg('loading-stage-1')) } - } - var promiseSubmitting = Jupyter.notebook.save_notebook() - .then( - function () { - appendInformation('



Click [here] to cancel this job.

') - var cancelThis - var promise = Promise.race([ - Interface.submit_job(submittingCtx), - new Promise(function (resolve, reject) { - cancelThis = reject - }) - ]) - $('body').off('click', '#openpai-clear-info-force').on('click', '#openpai-clear-info-force', function () { - if (confirm('Are you sure to start a new OpenPAI Submitter job (Your previous job will be saved in the Recent Jobs panel)?')) { - $('#openpai-clear-info-force').remove() - cancelThis('cancelled') - set(STATUS.NOT_READY) - send(MSG.INIT_OK) - } - }) - return promise - } - ) - .then( - function (ctx) { - set(STATUS.SUBMITTING_2) - $('#text-clear-info-force').remove() - $('#loading-stage-1').remove() - appendInformation('
') - submittingCtx = ctx - if (ctx['type'] === 'quick') { - var submissionTime = (function () { - var ts = new Date() - var mm = ts.getMonth() + 1 - var dd = ts.getDate() - var HH = ts.getHours() - var MM = ts.getMinutes() - var SS = ts.getSeconds() - if (mm < 10) mm = '0' + mm - if (dd < 10) dd = '0' + dd - if (HH < 10) HH = '0' + HH - if (MM < 10) MM = '0' + MM - if (SS < 10) SS = '0' + SS - return mm + '-' + dd + ' ' + HH + ':' + MM + ':' + SS - }()) - panelRecent.send( - panelRecent.MSG.ADD_JOB, { - cluster: ctx['cluster'], - vc: ctx['vc'], - user: ctx['user'], - time: submissionTime, - jobname: ctx['jobname'], - joblink: ctx['joblink'], - form: ctx['form'], - state: 'WAITING' - } - ) - appendInformation('The job name is: ' + ctx['jobname'] + '
') - appendInformation('The job link is: ' + ctx['joblink'] + '') - return new Promise((resolve, reject) => resolve(ctx)) - } else { - /* ctx["type"] === "edit" */ - var download = function (filename, text) { - var element = document.createElement('a') - element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text)) - element.setAttribute('download', filename) - element.style.display = 'none' - document.body.appendChild(element) - element.click() - document.body.removeChild(element) - } - download(ctx['jobname'] + '.yaml', ctx['job_config']) - } - } - ) - if (submittingCtx['stage_num'] === 2) { - promiseSubmitting = promiseSubmitting.then( - function (ctx) { - appendInformation('

') - if (ctx['form'] === 'notebook') { - appendInformation('Stage 2 / 2: Wait until the notebook is ready...' + - Utils.getLoadingImg('loading-stage-2')) - } else { appendInformation('Stage 2 / 2: Wait until the result is ready...' + Utils.getLoadingImg('loading-stage-2')) } - appendInformation('
') - if (ctx['form'] === 'notebook') { - appendInformation('

Note: This procedure may persist for several minutes. You can safely close' + - ' this submitter, and the notebook URL will be shown here once it is prepared.


') - } else { - appendInformation('

Note: The notebook will run in the background. You can safely close' + - ' this submitter, and the result file link will be shown here once it is prepared.


') - } - appendInformation('

You can also click [here] to start a new OpenPAI Submitter job. Your previous job will be saved in the Recent Jobs panel.

') - var cancelThis - var promise = Promise.race([ - Interface.wait_jupyter(ctx), - new Promise(function (resolve, reject) { - cancelThis = reject - }) - ]) - $('body').off('click', '#openpai-clear-info-force').on('click', '#openpai-clear-info-force', function () { - if (confirm('Are you sure to start a new OpenPAI Submitter job (Your previous job will be saved in the Recent Jobs panel)?')) { - $('#text-clear-info-force').remove() - cancelThis('cancelled') - set(STATUS.NOT_READY) - send(MSG.INIT_OK) - } - }) - return promise - } - ).then( - function (ctx) { - if (!($('#openpai-panel-wrapper').is(':visible'))) { - togglePanel() - } - $('#loading-stage-2').remove() - $('#text-notebook-show').hide() - $('#text-clear-info-force').hide() - if (ctx['form'] === 'notebook') { appendInformation('The notebook url is: ' + ctx['notebook_url'] + '') } else { appendInformation('The result file link is (please copy it to your clipboard and paste it to a new page) : ' + ctx['notebook_url'] + '') } - return new Promise((resolve, reject) => resolve(ctx)) - }) - } - promiseSubmitting = promiseSubmitting.then( - function (ctx) { - set(STATUS.SUBMITTING_OK) - appendInformation('

You can click [here] to start a new OpenPAI Submitter job. Your previous job will be saved in the Recent Jobs panel.') - $('body').off('click', '#openpai-clear-info').on('click', '#openpai-clear-info', function () { - set(STATUS.NOT_READY) - send(MSG.INIT_OK) - }) - } - ).catch(function (e) { - if (e !== 'cancelled') { send(MSG.ERROR, e) } - }) - } - - var handleError = function (err) { - showInformation( - '

An error happened. ' + - 'Please click [refresh] to retry.

' + - '

Error Information:' + err + '

' - ) - set(STATUS.ERROR) - } - - var handleFatalError = function (err) { - showInformation( - '

A fatal error happened and the OpenPAI Submitter has been terminated. ' + - 'Please refresh the page and click Kernel - Restart & Clear Output to retry.

' + - '

Error Information:' + err + '

' - ) - $('#refresh-panel-button').hide() - set(STATUS.FATAL) - } - - var togglePanel = function (callback = null) { - $('#openpai-panel-wrapper').toggle(speed, callback) - } - - var openPanel = function (callback = null) { - $('#openpai-panel-wrapper').show(speed, callback) - } - - var closePanel = function (callback = null) { - $('#openpai-panel-wrapper').hide(speed, callback) - } - - var bindPanelRecent = function (panelRecentInstance) { - panelRecent = panelRecentInstance - } - - return { - send: send, - STATUS: STATUS, - MSG: MSG, - bindPanelRecent: bindPanelRecent - } - } - - return Panel -}) diff --git a/contrib/notebook-extension/openpai_submitter/scripts/panel_recent.js b/contrib/notebook-extension/openpai_submitter/scripts/panel_recent.js deleted file mode 100644 index 20af209d9..000000000 --- a/contrib/notebook-extension/openpai_submitter/scripts/panel_recent.js +++ /dev/null @@ -1,367 +0,0 @@ -// Copyright (c) Microsoft Corporation -// All rights reserved. -// -// MIT License -// -// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -// documentation files (the "Software"), to deal in the Software without restriction, including without limitation -// the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -// to permit persons to whom the Software is furnished to do so, subject to the following conditions: -// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -// BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -define([ - 'require', - 'jquery', - 'base/js/namespace', - 'base/js/events', - 'nbextensions/openpai_submitter/scripts/config', - 'nbextensions/openpai_submitter/scripts/interface', - 'nbextensions/openpai_submitter/scripts/utils' -], -function (requirejs, $, Jupyter, events, config, Interface, Utils) { - function Panel () { - var STATUS_R = [ - 'NOT_READY', - 'READY_NOT_LOADING', - 'READY_LOADING', - 'SHOWING_INFO', - 'ERROR', - 'FATAL' - ] - var MSG_R = [ - 'PLEASE_INIT', - 'INIT_OK', - 'ADD_JOB', - 'CLICK_BUTTON', - 'CLICK_CLOSE', - 'CLICK_REFRESH', - 'ERROR', - 'FATAL_ERROR' - ] - - var STATUS = {} - for (var i = 0; i < STATUS_R.length; i += 1) { STATUS[STATUS_R[i]] = i } - var MSG = {} - for (var j = 0; j < MSG_R.length; j += 1) { MSG[MSG_R[j]] = j } - - var set = function (s) { - // console.log('[openpai submitter] [panel-recent] set status', STATUS_R[s]) - status = s - } - - var status - var panel // main panel - var jobStatusFinished = ['FAILED', 'STOPPED', 'SUCCEEDED'] - var hasAddFilter = false - - set(STATUS.NOT_READY) - - var speed = config.panel_toggle_speed - - var showInformation = function (info) { - /* this function will hide table and show information for users. 
*/ - $('#panel-recent-table-wrapper').hide() - $('#panel-recent-information-wrapper').show() - } - - var appendInformation = function (info) { - $('#panel-recent-information').append(info) - } - - var send = function (msg, value) { - // console.log('[openpai submitter] [panel-recent]', 'status:', STATUS_R[status], 'msg', MSG_R[msg], 'value', value) - switch (msg) { - case MSG.PLEASE_INIT: - handleInit() - break - case MSG.INIT_OK: - handleInitOK() - break - case MSG.ADD_JOB: - handleAddJob(value) - break - case MSG.CLICK_BUTTON: - if (!($('#openpai-panel-recent-wrapper').is(':visible'))) { - if ((status !== STATUS.READY_LOADING) && (status !== STATUS.FATAL)) { - Utils.set_timeout(config.panel_toggle_speed).then( - () => send(MSG.CLICK_REFRESH) - ) - } - } - togglePanel() - break - case MSG.CLICK_CLOSE: - closePanel() - break - case MSG.CLICK_REFRESH: - handleRefresh() - break - case MSG.ERROR: - handleError(value) - break - case MSG.FATAL_ERROR: - handleFatalError(value) - break - default: - send(MSG.ERROR, 'unknown message received by panel!') - } - } - - var turnOnFilter = function () { - hasAddFilter = true - $.fn.dataTable.ext.search.push( - function (settings, data, dataIndex) { - /* only show unfinished jobs */ - if (settings.nTable.getAttribute('id') !== 'recent-jobs') { return true } - return jobStatusFinished.indexOf(data[4]) < 0 - }) - } - - var turnOffFilter = function () { - hasAddFilter = false - $.fn.dataTable.ext.search.pop() - } - - var handleInit = function () { - var panelUrl = requirejs.toUrl('../templates/panel_recent.html') - var panel = $('
').load(panelUrl) - - Promise.all([ - /* Promise 1: add panel to html body and bind functions */ - panel.promise() - .then( - function () { - panel.draggable() - panel.toggle() - $('body').append(panel) - $('body').on('click', '#close-panel-recent-button', function () { - send(MSG.CLICK_CLOSE) - }) - $('body').on('click', '#refresh-panel-recent-button', function () { - send(MSG.CLICK_REFRESH) - }) - - turnOnFilter() - - $('body').on('click', '#openpai-if-hide-jobs', function () { - if ($('#openpai-if-hide-jobs').prop('checked') === true && - hasAddFilter === false) { - turnOnFilter() - $('#recent-jobs').DataTable().draw() - } else if ($('#openpai-if-hide-jobs').prop('checked') === false && - hasAddFilter === true) { - turnOffFilter() - $('#recent-jobs').DataTable().draw() - } - }) - } - ) - .then( - () => Utils.set_timeout(600) - ).then(function () { - panel.resizable() - $('.openpai-logo').attr('src', requirejs.toUrl('../misc/pailogo.jpg')) - $('#recent-jobs') - .DataTable({ - dom: 'rtip', - order: [ - [2, 'desc'] - ], - data: [] - }) - }), - /* Promise 2: load python script */ - new Promise(function (resolve, reject) { - Interface.initiate(panel, resolve, reject) - }) - ]).then(function (value) { - send(MSG.INIT_OK, value) - }) - .catch(function (err) { - send(MSG.FATAL_ERROR, err) - }) - } - - var handleInitOK = function () { - if (status === STATUS.NOT_READY) { - if ($('#openpai-panel-recent-wrapper').is(':visible')) { - /* if the panel has been shown, then load the cluster info */ - set(STATUS.READY_NOT_LOADING) - send(MSG.CLICK_REFRESH) - } else { - /* if the panel has not been shown, change the status to READY_NOT_LOADING and wait */ - showInformation('') - set(STATUS.READY_NOT_LOADING) - } - } - } - - var handleAddJob = function (record) { - Interface.add_job(record) - .catch((e) => send(MSG.ERROR, e)) - } - - var handleRefresh = function () { - if (status === STATUS.NOT_READY || status === STATUS.READY_LOADING) { return } - if (status === STATUS.FATAL) { - alert('Please refresh the whole page to reload this extension.') - return - } - set(STATUS.READY_LOADING) - var jobData - Interface.get_jobs().then( - function (data) { - var ret = [] - jobData = data - for (var i = 0; i < data.length; i += 1) { - var record = data[i] - var item = { - jobname: record['jobname'], - cluster: record['cluster'], - vc: record['vc'], - user: record['user'], - time: record['time'], - joblink: '' - } - if (jobStatusFinished.indexOf(record['state']) >= 0) { - item['state'] = record['state'] - if (record['form'] !== 'silent') { item['notebook_url'] = '-' } else { - if ((record['notebook_url'] === undefined) || (record['notebook_url'] === '-')) { item['notebook_url'] = '-' } else { item['notebook_url'] = '' } - } - } else { - item['state'] = '' + Utils.getLoadingImgSmall() + '' - item['notebook_url'] = '' + Utils.getLoadingImgSmall() + '' - } - ret.push(item) - } - $('#recent-jobs') - .DataTable({ - dom: 'rtip', - order: [ - [3, 'desc'] - ], - data: ret, - destroy: true, - rowId: rowData => 'openpai-job-' + rowData['jobname'], - columns: [{ - data: 'jobname', - width: '15%' - }, { - data: 'cluster', - width: '12%' - }, { - data: 'vc', - width: '12%' - }, { - data: 'time', - width: '25%' - }, { - data: 'state', - width: '12%' - }, { - data: 'joblink', - width: '12%' - }, { - data: 'notebook_url', - width: '12%' - }], - initComplete: function () { - set(STATUS.READY_LOADING) - $('body').off('click', '.silent-link').on('click', '.silent-link', function (e) { - var url = $(e.target).parent().data('path') - 
Utils.copy_to_clipboard(url).then( - () => alert('The result file link has been copied to your clipboard! Please paste it to a new page.') - ).catch( - () => alert('Failed to copy the file link. Please find the file manually. Location: ' + url) - ) - }) - var jobsFinished = [] - var jobsUnfinished = [] - for (var item of jobData) { - if (jobStatusFinished.indexOf(item['state']) >= 0) { jobsFinished.push(item) } else { jobsUnfinished.push(item) } - } - /* Only detect unfinished jobs */ - Interface.detect_jobs(jobsUnfinished) - .then(function (jobsUnfinished) { - Interface - .save_jobs(jobsUnfinished.concat(jobsFinished)) - .catch((e) => console.error(e)) // Although it is a promise, we don't care whether it succeeds or not. - for (var item of jobsUnfinished) { - var originalData = $('#recent-jobs').DataTable().row('#openpai-job-' + item['jobname']).data() - originalData['state'] = item['state'] - if (item['notebook_url'] !== undefined && item['notebook_url'] !== '-') { - if (item['form'] === 'notebook') { originalData['notebook_url'] = '' } else { originalData['notebook_url'] = '' } - } else { originalData['notebook_url'] = '-' } - $('#recent-jobs').DataTable().row('#openpai-job-' + item['jobname']).data(originalData) - } - set(STATUS.SHOWING_INFO) - }) - .catch( - function (e) { - console.error('[openpai submitter]', e) - set(STATUS.SHOWING_INFO) - } - ) - } - } - ) - $('#panel-recent-information-wrapper').hide() - $('#panel-recent-table-wrapper').show() - } - ).catch((e) => send(MSG.ERROR, e)) - } - - var handleError = function (err) { - showInformation( - '

<br><br>An error happened. ' + - 'Please click [refresh] to retry.<br><br>' + - '<br><br>Error Information:' + err + '<br><br>
' - ) - set(STATUS.ERROR) - } - - var handleFatalError = function (err) { - showInformation( - '

<br><br>A fatal error happened and the OpenPAI Submitter has been terminated. ' + - 'Please refresh the page and click Kernel - Restart & Clear Output to retry.<br><br>' + - '<br><br>Error Information:' + err + '<br><br>
' - ) - set(STATUS.FATAL) - } - - var togglePanel = function (callback = null) { - $('#openpai-panel-recent-wrapper').toggle(speed, callback) - } - - var openPanel = function (callback = null) { - $('#openpai-panel-recent-wrapper').show(speed, callback) - } - - var closePanel = function (callback = null) { - $('#openpai-panel-recent-wrapper').hide(speed, callback) - } - - var bindPanel = function (panelInstance) { - panel = panelInstance - } - - return { - send: send, - STATUS: STATUS, - MSG: MSG, - bindPanel: bindPanel - } - } - - return Panel -}) diff --git a/contrib/notebook-extension/openpai_submitter/scripts/utils.js b/contrib/notebook-extension/openpai_submitter/scripts/utils.js deleted file mode 100644 index a3d1345d7..000000000 --- a/contrib/notebook-extension/openpai_submitter/scripts/utils.js +++ /dev/null @@ -1,75 +0,0 @@ -// Copyright (c) Microsoft Corporation -// All rights reserved. -// -// MIT License -// -// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -// documentation files (the "Software"), to deal in the Software without restriction, including without limitation -// the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -// to permit persons to whom the Software is furnished to do so, subject to the following conditions: -// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -// BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -define(['require'], function (requirejs) { - return { - getLoadingImg: function (idName) { - var loadingImg - if (idName !== undefined) { loadingImg = '' } else { loadingImg = '' } - return loadingImg - }, - getLoadingImgSmall: function (idName) { - var loadingImg - if (idName !== undefined) { loadingImg = '' } else { loadingImg = '' } - return loadingImg - }, - copy_to_clipboard: function (text) { - return new Promise(function (resolve, reject) { - function fallbackCopyTextToClipboard (text) { - var textArea = document.createElement('textarea') - textArea.value = text - document.body.appendChild(textArea) - textArea.focus() - textArea.select() - try { - var successful = document.execCommand('copy') - var msg = successful ? 
'successful' : 'unsuccessful' - resolve() - } catch (err) { - reject(err) - } - document.body.removeChild(textArea) - } - function copyTextToClipboard (text) { - if (!navigator.clipboard) { - fallbackCopyTextToClipboard(text) - return - } - navigator.clipboard.writeText(text).then(function () { - resolve() - }, function (err) { - reject(err) - }) - } - copyTextToClipboard(text) - }) - }, - set_timeout: function timeout (ms, value) { - return new Promise((resolve, reject) => { - setTimeout(resolve, ms, value) - }) - }, - set_timeout_func: function timeoutFunc (ms, func, args) { - return new Promise((resolve, reject) => { - setTimeout(function () { - func.apply(args) - resolve() - }, ms) - }) - } - } -}) diff --git a/contrib/notebook-extension/openpai_submitter/templates/panel.html b/contrib/notebook-extension/openpai_submitter/templates/panel.html deleted file mode 100644 index 6af2c3d0b..000000000 --- a/contrib/notebook-extension/openpai_submitter/templates/panel.html +++ /dev/null @@ -1,63 +0,0 @@ -
-[The 63 deleted lines of panel.html were lost in extraction; recoverable content: an "OpenPAI Submitter" title bar with [close] / [refresh] links and the placeholder text "The panel is not ready. Please wait."]
\ No newline at end of file diff --git a/contrib/notebook-extension/openpai_submitter/templates/panel_recent.html b/contrib/notebook-extension/openpai_submitter/templates/panel_recent.html deleted file mode 100644 index 5646647b5..000000000 --- a/contrib/notebook-extension/openpai_submitter/templates/panel_recent.html +++ /dev/null @@ -1,41 +0,0 @@ -
-[The 41 deleted lines of panel_recent.html were lost in extraction; recoverable content: a "Recent Jobs" title bar with [close] / [refresh] links and the placeholder text "The panel is not ready. Please wait."]
\ No newline at end of file diff --git a/contrib/notebook-extension/setup.py b/contrib/notebook-extension/setup.py deleted file mode 100644 index e8cc11301..000000000 --- a/contrib/notebook-extension/setup.py +++ /dev/null @@ -1,50 +0,0 @@ -"""this is the setup (install) script for OpenPAI notebook extension -""" -import os -import sys -from argparse import ArgumentParser -from subprocess import check_output - - -def run(cmds: list, comment: str = None): - if comment: - print(comment, flush=True) - check_output(cmds, shell=True) - - -if __name__ == '__main__': - parser = ArgumentParser() - parser.add_argument('--user', action='store_true', default=False, help='pip install in user mode') - parser.add_argument('--ignore-sdk', '-i', action='store_true', default=False, - help='dont install python sdk, make sure you have a workable version instead') - args = parser.parse_args() - - pip_cmd = [sys.executable, '-m', 'pip', 'install'] - if args.user: - pip_cmd += ['--user'] - jupyter_cmd = [sys.executable, '-m', 'jupyter'] - - run( - pip_cmd + ['jupyter', 'jupyter_contrib_nbextensions'], - '==== install requirements ====' - ) - - run( - jupyter_cmd + ['contrib', 'nbextension', 'install', '--user'], - '==== install nbextension ====' - ) - - if not args.ignore_sdk: - run( - pip_cmd + ['--upgrade', os.path.join('..', 'python-sdk')], - '==== install sdk ====' - ) - - run( - jupyter_cmd + ['nbextension', 'install', 'openpai_submitter'], - '==== install openpai_submitter ====' - ) - run( - jupyter_cmd + ['nbextension', 'enable', 'openpai_submitter/main'], - '==== enable openpai_submitter ====' - ) diff --git a/contrib/python-sdk/README.md b/contrib/python-sdk/README.md deleted file mode 100644 index 98e2bf299..000000000 --- a/contrib/python-sdk/README.md +++ /dev/null @@ -1,457 +0,0 @@ -The `Python` SDK and CLI for `OpenPAI` ----- - -***Note: Python SDK is deprecated and will be removed in the future. New SDK and CLI support is available at [openpaisdk](https://github.com/microsoft/openpaisdk).*** - -This is a proof-of-concept SDK (Python) and CLI (command-line-interface) tool for the [OpenPAI](http://github.com/microsoft/pai). This project provides some facilities to make `OpenPAI` more easily accessible and usable for users. With it, - -- User can easily access `OpenPAI` resources in scripts (`Python` or `Shell`) and `Jupyter` notebooks -- User can easily submit and list jobs by simple commands, or snippets of code -- User can easily accomplish complicated operations with `OpenPAI` -- User can easily reuse local codes and notebooks -- User can easily manage and switch between multiple `OpenPAI` clusters - -Besides above benefits, this project also provides powerful runtime support, which bridges users' (local) working environments and jobs' running environments (inside the containers started by remote cluster). See more about[ the scenarios and user stories](docs/scenarios-and-user-stories.md). 
- -- [Get started](#get-started) - - [Installation](#installation) - - [Dependencies](#dependencies) - - [Define your clusters](#define-your-clusters) -- [How-to guide for the CLI tool](#how-to-guide-for-the-cli-tool) - - [Cluster and storage management](#cluster-and-storage-management) - - [How to list existing clusters](#how-to-list-existing-clusters) - - [How to open and edit the cluster configuration file](#how-to-open-and-edit-the-cluster-configuration-file) - - [How to check the available resources of clusters](#how-to-check-the-available-resources-of-clusters) - - [How to add a cluster](#how-to-add-a-cluster) - - [How to delete a cluster](#how-to-delete-a-cluster) - - [How to access storages of a cluster](#how-to-access-storages-of-a-cluster) - - [Job operations](#job-operations) - - [How to query my jobs in a cluster](#how-to-query-my-jobs-in-a-cluster) - - [How to submit a job from existing job config file](#how-to-submit-a-job-from-existing-job-config-file) - - [How to change the configuration before submitting](#how-to-change-the-configuration-before-submitting) - - [How to submit a job if I have no existing job config file](#how-to-submit-a-job-if-i-have-no-existing-job-config-file) - - [How to request (GPU) resources for the job](#how-to-request-gpu-resources-for-the-job) - - [How to reference a local file when submitting a job](#how-to-reference-a-local-file-when-submitting-a-job) - - [How to submit a job given a sequence of commands](#how-to-submit-a-job-given-a-sequence-of-commands) - - [How to add `pip install` packages](#how-to-add-pip-install-packages) - - [How to preview the generated job config but not submit it](#how-to-preview-the-generated-job-config-but-not-submit-it) - - [`Jupyter` notebook](#jupyter-notebook) - - [How to run a local notebook with remote resources](#how-to-run-a-local-notebook-with-remote-resources) - - [How to launch a remote `Jupyter` server and connect it](#how-to-launch-a-remote-jupyter-server-and-connect-it) - - [Other FAQ of CLI](#other-faq-of-cli) - - [How to select a cluster to use until I change it](#how-to-select-a-cluster-to-use-until-i-change-it) - - [How to simplify the command](#how-to-simplify-the-command) - - [How to install a different version of SDK](#how-to-install-a-different-version-of-sdk) - - [How to specify the `python` environment I want to use in the job container](#how-to-specify-the-python-environment-i-want-to-use-in-the-job-container) -- [Python binding](#python-binding) - - [Cluster management](#cluster-management) - - [Job management](#job-management) -- [Make contributions](#make-contributions) - - [Release plan](#release-plan) - - [Debug the SDK](#debug-the-sdk) - - [Unit tests](#unit-tests) - -# Get started - -This section will give guidance about installation, cluster management. User may find more details not covered in the [command line ref](docs/command-line-references.md). - -## Installation - -We provide installing method leveraging `pip install` - -```bash -python -m pip install --upgrade pip -pip install -U "git+https://github.com/Microsoft/pai@master#egg=openpaisdk&subdirectory=contrib/python-sdk" -``` - -Refer to [How to install a different version of SDK](#How-to-install-a-different-version-of-SDK) for more details about installing. After installing, please verify by CLI or python binding as below. 
- -```bash -opai -h -python -c "from openpaisdk import __version__; print(__version__)" -``` - -### Dependencies - -- The package requires Python 3 (mainly because of `type hinting`), and we have only tested it in `py3.5+` environments. _Only the commands `job sub` and `job notebook` require installing this project inside the container; the other commands place no constraints on the `python` version in the docker container._ -- [`Pylon`](https://github.com/microsoft/pai/tree/master/docs/pylon) is required to parse REST API paths like `/rest-server/`. - -## Define your clusters - -Please store the list of your clusters in `~/.openpai/clusters.yaml`. Every cluster has an alias to refer to it by, and you may save more than one cluster in the list. - -```YAML -- cluster_alias: <cluster-alias> - pai_uri: http://x.x.x.x - user: <user-name> - password: <password> - token: <token> # if Azure AD is enabled, must use token for authentication - pylon_enabled: true - aad_enabled: false - storages: # a cluster may have multiple storages - builtin: # storage alias, every cluster would always have a builtin storage - protocol: hdfs - uri: http://x.x.x.x # if not specified, use <pai_uri> - ports: - native: 9000 # used for hdfs-mount - webhdfs: webhdfs # used for webhdfs REST API wrapping - virtual_clusters: - - <vc1> - - <vc2> - - ... -``` - -Now the command below will display all your clusters. - -```bash -opai cluster list -``` - -# How-to guide for the CLI tool - -This section explains how to leverage the CLI tool (prefixed by `opai`) to improve the productivity of interacting with `OpenPAI`. Below is a summary of the functions provided. - -| Command | Description | -| -- | -- | -| `opai cluster list` | list clusters defined in `~/.openpai/clusters.yaml` | -| `opai cluster resources` | list available resources of every cluster (GPUs/vCores/Memory per virtual cluster) | -| `opai cluster edit` | open `~/.openpai/clusters.yaml` for your editing | -| `opai cluster add` | add a cluster | -| `opai job list` | list all jobs of a given user (in a given cluster) | -| `opai job status` | query the status of a job | -| `opai job stop` | stop a job | -| `opai job submit` | submit a given job config file to a cluster | -| `opai job sub` | shortcut to generate a job config and submit it from a given command | -| `opai job notebook` | shortcut to run a local notebook remotely | -| `opai storage <operation>` | execute `<operation>`* on the selected storage (of a given cluster) | - -_*: operations include `list`, `status`, `upload`, `download` and `delete`_ - -Before starting, we'd like to define some commonly used variables as below. - -| Variable name | CLI options | Description | -| -- | -- | -- | -| `<cluster-alias>` | `--cluster-alias, -a` | alias to specify a particular cluster | -| `<job-name>` | `--job-name, -j` | job name | -| `<image>` | `--image, -i` | image name (and tag) for the job | -| `<workspace>` | `--workspace, -w` | remote storage path to save files for a job * | - -_*: if specified, a directory `<workspace>/jobs/<job-name>` and subfolders (e.g. `source`, `output` ...)
will be created to store necessary files for the job named `<job-name>`_ - -## Cluster and storage management - -### How to list existing clusters - -To list all existing clusters in `~/.openpai/clusters.yaml`, execute the command below - -```bash -opai cluster list -``` - -### How to open and edit the cluster configuration file - -We add a convenient shortcut command to open the cluster configuration file with your editor directly by - -```bash -opai cluster edit [--editor <path/to/editor>] -``` - -The default editor is VS Code (`code`); users may switch to another editor (e.g. `--editor notepad`). - -### How to check the available resources of clusters - -To check the availability of each cluster, use the command -```bash -opai cluster resources -``` -It will return the available GPUs, vCores and memory of every virtual cluster in every cluster. - -User can also check it in a `Python` script as below -```python -from openpaisdk import __cluster_config_file__ -from openpaisdk.io_utils import from_file -from openpaisdk.cluster import ClusterList - -cfg = from_file(__cluster_config_file__, default=[]) -ClusterList(cfg).available_resources() -``` - -### How to add a cluster - -User can use the `add` and `delete` commands to add (or delete) a cluster in the clusters file. - -```bash -# for user/password authentication -opai cluster add --cluster-alias <cluster-alias> --pai-uri <pai-uri> --user <user> --password <password> -# for Azure AD authentication -opai cluster add --cluster-alias <cluster-alias> --pai-uri <pai-uri> --user <user> --token <token> -``` - -On receiving the add command, the CLI will try to connect to the cluster and fetch its basic configuration. - -User can also add it by the `python` binding (see [Python binding](#python-binding) below). - -### How to delete a cluster - -Delete a cluster by calling its alias. - -```bash -opai cluster delete <cluster-alias> -``` - -### How to access storages of a cluster - -Before accessing, user needs to attach storages to a specific cluster. - -```bash -opai cluster attach-hdfs --cluster-alias <cluster-alias> --storage-alias hdfs --web-hdfs-uri http://x.x.x.x:port --default -``` - -It is supported to attach multiple heterogeneous storages (e.g. `HDFS`, `NFS` ...*) to a cluster, and one of the storages will be set as default (to upload local codes). If not defined, the first storage added will be set as default. - -After attaching, basic operations (e.g. `list`, `upload`, `download` ...) are provided. - -```bash -opai storage list -a <cluster-alias> -s <storage-alias> <remote-path> -opai storage download -a <cluster-alias> -s <storage-alias> <remote-path> <local-path> -opai storage upload -a <cluster-alias> -s <storage-alias> <local-path> <remote-path> -``` - -## Job operations - -### How to query my jobs in a cluster - -User could retrieve the list of submitted jobs from a cluster. If more information is wanted, add the `<job-name>` in the command. - -```bash -opai job list -a <cluster-alias> [<job-name>] -``` - -### How to submit a job from existing job config file - -If you already have a job config file, you could submit a job based on it directly. The job config file could be in the format of `json` or `yaml`, and it must be compatible with [job configuration specification v1](https://github.com/microsoft/pai/blob/master/docs/job_tutorial.md) or [pai-job-protocol v2](https://github.com/microsoft/openpai-protocol/blob/master/schemas/v2/schema.yaml). - -```bash -opai job submit -a <cluster-alias> <config-file> -``` - -The CLI would judge whether it is a `v1` or `v2` job configuration and call the corresponding REST API to submit it. - -### How to change the configuration before submitting - -The CLI tool also provides the function to change some contents of an existing job config file before submitting it. For example, we need to change the job name to avoid duplicated names, and maybe want to switch to a virtual cluster with more available resources.
Of course, user could change the contents of `jobName` and `virtualCluster` (in `v1` format) or `name` and `virtualCluster` in `defaults` (in `v2` format) manually. But the CLI provides a more efficient and easy way to do the same thing. - -```bash -# compatible with v1 specification -opai job submit --update jobName=<new-job-name> -u virtualCluster=test <config-file> - -# compatible with v2 specification -opai job submit --update name=<new-job-name> -u defaults:virtualCluster=test <config-file> -``` - -### How to submit a job if I have no existing job config file - -It is not convenient to write a job config file (no matter whether according to the `v1` or `v2` specification). For users who just want to run a specific command (or a sequence of commands) with the resources of the cluster, the CLI provides a command `sub` (different from `submit`), which generates the job config file first and then `submit`s it. - -For example, if a user wants to run `mnist_cnn.py` in a docker container (the file is already contained in the docker image), the command would be - -```bash -opai job sub -a <cluster-alias> -i <image> -j <job-name> python mnist_cnn.py -``` - -### How to request (GPU) resources for the job - -User could apply for specific resources (CPUs, GPUs and memory) for the job, just by adding the below options to the above commands - -- `--cpu <#cpu>` -- `--gpu <#gpu>` -- `--memoryMB <#memory-in-unit-of-MB>` -- `--ports <label>=<port> [--ports <label>=<port> [...]]` - -### How to reference a local file when submitting a job - -If `mnist_cnn.py` is not copied into the docker image but is a file stored on your local disk, the above command would fail because the file cannot be accessed in the remote job container. To solve this problem, the option `--sources mnist_cnn.py` would be added to the command. Since the job container cannot access the local disk directly, we need to upload the file to somewhere (defined by `--workspace`) in [the default storage of the cluster](#How-to-access-storages-of-a-cluster). - -```bash -opai job sub -a <cluster-alias> -i <image> -j <job-name> -w <workspace> --sources mnist_cnn.py python mnist_cnn.py -``` - -### How to submit a job given a sequence of commands - -In some cases, user wants to run a sequence of commands in the job. The recommended way is to put your commands in a pair of quotes (like `"git clone ... && python ..."`) and combine them with `&&` if you have multiple commands to run. Here is an example of combining 3 commands. - -```bash -opai job sub [...] "git clone <repo-uri> && cd <repo-dir> && python run.py arg1 arg2 ..." -``` - -### How to add `pip install` packages - -Of course, you could write a sequence of commands like `pip install ... && python ...`. Another way is to use the `--pip-installs <package>` and `--pip-path <path/to/pip>` options in the commands; they add new commands to the `preCommands` in the `deployment`. - -### How to preview the generated job config but not submit it - -In some cases, user may want to preview the job config (in `v2` format) but not submit it directly. To fulfill this, just add the `--preview` option. The commands supporting this feature include `job submit`, `job sub` and `job notebook`. - -## `Jupyter` notebook - -### How to run a local notebook with remote resources - -Given a local `<notebook>` (e.g. `mnist_cnn.ipynb` stored on local disk), a user may want to run it remotely (on `OpenPAI`) and see the result. - -```bash -opai job notebook -a <cluster-alias> -i <image> -w <workspace> <notebook> -``` - -This command requires the same options as `opai job sub` does.
This command would - -- _Local_ - upload `<notebook>` to `<workspace>/jobs/<job-name>/source` and submit the job to the cluster (`<job-name>` is derived from the notebook name if not defined) -- _In job container_ - download `<notebook>` and execute it by `jupyter nbconvert --execute`; the result would be saved as an `*.html` file with the same name -- _In job container_ - upload the result to `<workspace>/jobs/<job-name>/output` -- _Local_ - wait and query the job state until its status becomes `SUCCEEDED` -- _Local_ - download the result to local disk and open it in a web browser - -### How to launch a remote `Jupyter` server and connect it - -Sometimes user may want to launch a remote `Jupyter` server and do some work on it interactively. To do this, just add `--interactive` to the `job notebook` command. After submitting the job, a link like `http://x.x.x.x:port/notebooks/<notebook>` will be opened in your browser. Since it takes a while to start the container, please wait and refresh the page until the notebook opens. Use the default token `abcd` (unless it is overridden by `--token <token>`) to log in to the notebook. - -## Other FAQ of CLI - -### How to select a cluster to use until I change it - -As shown in the above examples, `--cluster-alias, -a` is required by lots of commands, but it may not be changed frequently. So it is annoying to type it every time. The CLI tool provides a command to select a cluster to use by - -``` -opai cluster select [-g] <cluster-alias> -``` - -Commands after `opai cluster select` will have a default option (if necessary) `--cluster-alias <cluster-alias>`, which can be overwritten explicitly. The mechanism and priority sequence are the same as in the section below. - -### How to simplify the command - -The mechanism behind the `opai cluster select` command helps us simplify the command further. For example, we could set `--workspace, -w` with a default value by - -```bash -opai set [-g] workspace=<workspace> -``` - -The SDK will first load the global defaults (`~/.openpai/defaults.yaml`), and then update them with the contents of `.openpai/defaults.yaml` in your current working directory. Whenever a command requires a `--workspace, -w` option but no value is defined, the default value would be used. - -Some commonly used default variables include - -- `cluster-alias=<cluster-alias>` -- `image=<image>` -- `workspace=<workspace>` -- `container-sdk-branch=<branch>`, which branch to use when installing the SDK in the job container - -### How to install a different version of SDK - -User could easily switch to another version of the SDK both in the local environment and in the job container. In the local environment, just change `<your-branch>` to another branch (e.g. `pai-0.14.y` for the `OpenPAI` end-June release, or a feature branch for the canary version). - -```bash -pip install -U "git+https://github.com/Microsoft/pai@<your-branch>#egg=openpaisdk&subdirectory=contrib/python-sdk" -``` - -To debug a local update, just use `pip install -U <your-path-to>/pai/contrib/python-sdk`. - -For jobs submitted by the SDK or command line tool, the version specified by `opai set container-sdk-branch=<your-branch>` would be used first. If not specified, the `master` branch will be used. - -### How to specify the `python` environment I want to use in the job container - -In some cases, there is more than one `python` environment in a docker image. For example, there are both `python` and `python3` environments in `openpai/pai.example.keras.tensorflow`. User could add `--python <path/to/python>` (e.g. `--python python3`) to the command `job notebook` or `job sub` to use the specific `python` environment. Refer to the [notebook example](examples/1-submit-and-query-via-command-line.ipynb) for more details.
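The defaults lookup described above is easy to emulate when scripting around the CLI. Below is a minimal sketch of that resolution order, assuming only the two documented file locations; the helper name `resolve_default` is illustrative and not part of the SDK.

```python
import os
import yaml  # PyYAML, the format the SDK uses for its defaults files

def resolve_default(key, cli_value=None):
    """Sketch of the documented lookup order:
    explicit CLI option > ./.openpai/defaults.yaml > ~/.openpai/defaults.yaml."""
    if cli_value is not None:
        return cli_value
    defaults = {}
    # load global defaults first, then override with the per-project file
    for path in (os.path.expanduser('~/.openpai/defaults.yaml'),
                 os.path.join(os.getcwd(), '.openpai', 'defaults.yaml')):
        if os.path.isfile(path):
            with open(path) as f:
                defaults.update(yaml.safe_load(f) or {})
    return defaults.get(key)

# e.g. the workspace used when --workspace/-w is omitted
print(resolve_default('workspace'))
```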
- -# Python binding - -## Cluster management - -- [x] User can describe a cluster with `openpaisdk.core.ClusterList` class to describe multiple clusters - -```python -clusters = ClusterList().load() # defaultly loaded from "~/.openpai/clusters.yaml" -``` - -User `add`, `delete` methods to update clusters, `select` and `get_client` methods to select one from multiple clusters. - -To add a cluster: -```python -cluster_cfg = { - "cluster_alias": ..., # each cluster mush have an unique alias - "pai_uri": ..., - "user": ..., - # for user/password authentication - "password": ..., - # for Azure AD authentication - "token": ..., -} -ClusterList().load().add(cluster_cfg).save() -``` - -To delete a cluster: -```python -ClusterList().load().delete(cluster_alias).save() -``` - -- [x] the `Cluster` class has methods to query and submit jobs - -```python -client = clusters.get_client(alias) -client.jobs(name) -client.rest_api_submit(job_config) -``` - -- [x] the `Cluster` class has methods to access storage (through `WebHDFS` only for this version) - -```python -Cluster(...).storage.upload/download(...) -``` - -## Job management - -- [x] User can describe a job with `openpaisdk.core.Job` class, which is compatible with the v2 protocol - -```python -job = Job(name) -job.submit(cluster_alias) # submit current job to a cluster -``` - -- [x] provide some quick template of simple jobs - -```python -job.one_liner(...) # generate job config from a command -job.from_notebook(...) # turn notebook to job -``` - -# Make contributions - -User may open issues and feature requests on [Github](https://github.com/microsoft/pai). - -## Release plan - -If there are functions requests not included, please open an issue for feature request. - -## Debug the SDK - -For users those want to improve the functions themselves, you may create the branch of `OpenPAI` project, and make modifications locally. And then set your own branch to the SDK installation source by - -```bash -opai set container-sdk-branch= -``` - -Then the `pip install` command in the job container would use `` . User may check the generated job config to check. - -To set the internal logger to debug level, create an empty file `.openpai/debug_enable` to let sdk enable debugging logging. And remove the empty file make it work normally. - -## Unit tests - -Please execute below command under the `tests` directory to have a quick unit test. -```bash -python -m unittest discover -``` - -Since the unit tests will try to connect your cluster, we set a test environment instead of corrupting the practical settings. Please add a `ut_init.sh` file in `tests` as below -```bash -opai set clusters-in-local=yes # don't corrupt practical environment -opai cluster add -a --pai-uri http://x.x.x.x --user --password -opai cluster select -``` diff --git a/contrib/python-sdk/README_zh_CN.md b/contrib/python-sdk/README_zh_CN.md deleted file mode 100644 index 39cd5af54..000000000 --- a/contrib/python-sdk/README_zh_CN.md +++ /dev/null @@ -1,409 +0,0 @@ -## The `Python` SDK and CLI for `OpenPAI` - -This is a proof-of-concept SDK (Python) and CLI (command-line-interface) tool for the [OpenPAI](http://github.com/microsoft/pai). This project provides some facilities to make `OpenPAI` more easily accessible and usable for users. 
With it, - -- User can easily access `OpenPAI` resources in scripts (`Python` or `Shell`) and `Jupyter` notebooks -- User can easily submit and list jobs by simple commands, or snippets of code -- User can easily accomplish complicated operations with `OpenPAI` -- User can easily reuse local codes and notebooks -- User can easily manage and switch between multiple `OpenPAI` clusters - -Besides above benefits, this project also provides powerful runtime support, which bridges users' (local) working environments and jobs' running environments (inside the containers started by remote cluster). See more about[ the scenarios and user stories](docs/scenarios-and-user-stories.md). - -- [Get started](#get-started) - - [Installation](#installation) - - [Dependencies](#dependencies) - - [Define your clusters](#define-your-clusters) -- [How-to guide for the CLI tool](#how-to-guide-for-the-cli-tool) - - [Cluster and storage management](#cluster-and-storage-management) - - [How to list existing clusters](#how-to-list-existing-clusters) - - [How to open and edit the cluster configuration file](#how-to-open-and-edit-the-cluster-configuration-file) - - [How to check the available resources of clusters](#how-to-check-the-available-resources-of-clusters) - - [How to add / delete a cluster](#how-to-add--delete-a-cluster) - - [How to access storages of a cluster](#how-to-access-storages-of-a-cluster) - - [Job operations](#job-operations) - - [How to query my jobs in a cluster](#how-to-query-my-jobs-in-a-cluster) - - [How to submit a job from existing job config file](#how-to-submit-a-job-from-existing-job-config-file) - - [How to change the configuration before submitting](#how-to-change-the-configuration-before-submitting) - - [How to submit a job if I have no existing job config file](#how-to-submit-a-job-if-i-have-no-existing-job-config-file) - - [How to request (GPU) resources for the job](#how-to-request-gpu-resources-for-the-job) - - [How to reference a local file when submitting a job](#how-to-reference-a-local-file-when-submitting-a-job) - - [How to submit a job given a sequence of commands](#how-to-submit-a-job-given-a-sequence-of-commands) - - [How to add `pip install` packages](#how-to-add-pip-install-packages) - - [How to preview the generated job config but not submit it](#how-to-preview-the-generated-job-config-but-not-submit-it) - - [`Jupyter` notebook](#jupyter-notebook) - - [How to run a local notebook with remote resources](#how-to-run-a-local-notebook-with-remote-resources) - - [How to launch a remote `Jupyter` server and connect it](#how-to-launch-a-remote-jupyter-server-and-connect-it) - - [Other FAQ of CLI](#other-faq-of-cli) - - [How to select a cluster to use until I change it](#how-to-select-a-cluster-to-use-until-i-change-it) - - [How to simplify the command](#how-to-simplify-the-command) - - [How to install a different version of SDK](#how-to-install-a-different-version-of-sdk) - - [How to specify the `python` environment I want to use in the job container](#how-to-specify-the-python-environment-i-want-to-use-in-the-job-container) -- [Python binding](#python-binding) - - [Cluster management](#cluster-management) - - [Job management](#job-management) -- [Make contributions](#make-contributions) - - [Release plan](#release-plan) - - [Debug the SDK](#debug-the-sdk) - - [Unit tests](#unit-tests) - -# Get started - -This section will give guidance about installation, cluster management. User may find more details not covered in the [command line ref](docs/command-line-references.md). 
- -## Installation - -We provide installing method leveraging `pip install` - -```bash -python -m pip install --upgrade pip -pip install -U "git+https://github.com/Microsoft/pai@master#egg=openpaisdk&subdirectory=contrib/python-sdk" -``` - -Refer to [How to install a different version of SDK](#How-to-install-a-different-version-of-SDK) for more details about installing. After installing, please verify by CLI or python binding as below. - -```bash -opai -h -python -c "from openpaisdk import __version__; print(__version__)" -``` - -### Dependencies - -- The package requires python3 (mainly because of `type hinting`), and we only tested it on `py3.5+` environment. *Only commands `job sub` and `job notebook` require installing this project inside container, others don't make any constraints of `python` version in the docker container.* -- [`Pylon`](https://github.com/microsoft/pai/tree/master/docs/pylon) is required to parse the REST api path like `/reset-server/`. - -## Define your clusters - -Please store the list of your clusters in `~/.openpai/clusters.yaml`. Every cluster would have an alias for calling, and you may save more than one cluster in the list. - -```yaml -- cluster_alias: cluster-for-test - pai_uri: http://x.x.x.x - user: myuser - password: mypassword - default_storage_alias: hdfs - storages: - - protocol: webHDFS - storage_alias: hdfs - web_hdfs_uri: http://x.x.x.x:port - -``` - -Now below command shows all your clusters would be displayed. - -```bash -opai cluster list -``` - -# How-to guide for the CLI tool - -This section will brief you how to leverage the CLI tool (prefixed by `opai`) to improve the productivity of interacting with `OpenPAI`. Below is a summary of functions provided. - -| Command | Description | -| -------------------------------- | ---------------------------------------------------------------------------------- | -| `opai cluster list` | list clusters defined in `~/.openpai/clusters.yaml` | -| `opai cluster resources` | list available resources of every cluster (GPUs/vCores/Memory per virtual cluster) | -| `opai cluster edit` | open `~/.openpai/clusters.yaml` for your editing | -| `opai cluster add` | add a cluster | -| `opai cluster attach-hdfs` | attach a `hdfs` storage through `WebHDFS` | -| `opai job list` | list all jobs of current user (in a given cluster) | -| `opai job submit` | submit a given job config file to cluster | -| `opai job sub` | shortcut to generate job config and submit from a given command | -| `opai job notebook` | shortcut to run a local notebook remotely | -| `opai storage ` | execute ``* on selected storage (of a given cluster) | - -**: operations include `list`, `status`, `upload`, `download` and `delete`* - -Before starting, we'd like to define some commonly used variables as below. - -| Variable name | CLI options | Description | -| ----------------------- | --------------------- | --------------------------------------------- | -| `` | `--cluster-alias, -a` | alias to specify a particular cluster | -| `` | `--job-name, -j` | job name | -| `` | `--image, -i` | image name (and tag) for the job | -| `` | `--workspace, -w` | remote storage path to save files for a job * | - -**: if specified, a directory `/jobs/` and subfolders (e.g. `source`, `output` ...) 
will be created to store necessary files for the job named `<job-name>`* - -## Cluster and storage management - -### How to list existing clusters - -To list all existing clusters in `~/.openpai/clusters.yaml`, execute the command below - -```bash -opai cluster list -``` - -### How to open and edit the cluster configuration file - -We add a convenient shortcut command to open the cluster configuration file with your editor directly by - -```bash -opai cluster edit [--editor <path/to/editor>] -``` - -The default editor is VS Code (`code`); users may switch to another editor (e.g. `--editor notepad`). - -### How to check the available resources of clusters - -To check the availability of each cluster, use the command - -```bash -opai cluster resources -``` - -It will return the available GPUs, vCores and memory of every virtual cluster in every cluster. - -User can also check it in a `Python` script as below - -```python -from openpaisdk import __cluster_config_file__ -from openpaisdk.io_utils import from_file -from openpaisdk.cluster import ClusterList - -cfg = from_file(__cluster_config_file__, default=[]) -ClusterList(cfg).available_resources() -``` - -### How to add / delete a cluster - -User can use the `add` and `delete` commands to add (or delete) a cluster in the clusters file. - -```bash -opai cluster add --cluster-alias <cluster-alias> --pai-uri http://x.x.x.x --user myuser --password mypassword -opai cluster delete <cluster-alias> -``` - -After adding a cluster, user may add more information (such as storage info) to it. - -### How to access storages of a cluster - -Before accessing, user needs to attach storages to a specific cluster. - -```bash -opai cluster attach-hdfs --cluster-alias <cluster-alias> --storage-alias hdfs --web-hdfs-uri http://x.x.x.x:port --default -``` - -It is supported to attach multiple heterogeneous storages (e.g. `HDFS`, `NFS` ...*) to a cluster, and one of the storages will be set as default (to upload local codes). If not defined, the first storage added will be set as default. - -After attaching, basic operations (e.g. `list`, `upload`, `download` ...) are provided. - -```bash -opai storage list -a <cluster-alias> -s <storage-alias> <remote-path> -opai storage download -a <cluster-alias> -s <storage-alias> <remote-path> <local-path> -opai storage upload -a <cluster-alias> -s <storage-alias> <local-path> <remote-path> -``` - -## Job operations - -### How to query my jobs in a cluster - -User could retrieve the list of submitted jobs from a cluster. If more information is wanted, add the `<job-name>` in the command. - -```bash -opai job list -a <cluster-alias> [<job-name>] -``` - -### How to submit a job from existing job config file - -If you already have a job config file, you could submit a job based on it directly. The job config file could be in the format of `json` or `yaml`, and it must be compatible with [job configuration specification v1](https://github.com/microsoft/pai/blob/master/docs/job_tutorial.md) or [pai-job-protocol v2](https://github.com/microsoft/openpai-protocol/blob/master/schemas/v2/schema.yaml). - -```bash -opai job submit -a <cluster-alias> <config-file> -``` - -The CLI would judge whether it is a `v1` or `v2` job configuration and call the corresponding REST API to submit it. - -### How to change the configuration before submitting - -The CLI tool also provides the function to change some contents of an existing job config file before submitting it. For example, we need to change the job name to avoid duplicated names, and maybe want to switch to a virtual cluster with more available resources. Of course, user could change the contents of `jobName` and `virtualCluster` (in `v1` format) or `name` and `virtualCluster` in `defaults` (in `v2` format) manually. But the CLI provides a more efficient and easy way to do the same thing.
-```bash -# compatible with v1 specification -opai job submit --update jobName=<new-job-name> -u virtualCluster=test <config-file> - -# compatible with v2 specification -opai job submit --update name=<new-job-name> -u defaults:virtualCluster=test <config-file> -``` - -### How to submit a job if I have no existing job config file - -It is not convenient to write a job config file (no matter whether according to the `v1` or `v2` specification). For users who just want to run a specific command (or a sequence of commands) with the resources of the cluster, the CLI provides a command `sub` (different from `submit`), which generates the job config file first and then `submit`s it. - -For example, if a user wants to run `mnist_cnn.py` in a docker container (the file is already contained in the docker image), the command would be - -```bash -opai job sub -a <cluster-alias> -i <image> -j <job-name> python mnist_cnn.py -``` - -### How to request (GPU) resources for the job - -User could apply for specific resources (CPUs, GPUs and memory) for the job, just by adding the below options to the above commands - -- `--cpu <#cpu>` -- `--gpu <#gpu>` -- `--memoryMB <#memory-in-unit-of-MB>` -- `--ports <label>=<port> [--ports <label>=<port> [...]]` - -### How to reference a local file when submitting a job - -If `mnist_cnn.py` is not copied into the docker image but is a file stored on your local disk, the above command would fail because the file cannot be accessed in the remote job container. To solve this problem, the option `--sources mnist_cnn.py` would be added to the command. Since the job container cannot access the local disk directly, we need to upload the file to somewhere (defined by `--workspace`) in [the default storage of the cluster](#How-to-access-storages-of-a-cluster). - -```bash -opai job sub -a <cluster-alias> -i <image> -j <job-name> -w <workspace> --sources mnist_cnn.py python mnist_cnn.py -``` - -### How to submit a job given a sequence of commands - -In some cases, user wants to run a sequence of commands in the job. The recommended way is to put your commands in a pair of quotes (like `"git clone ... && python ..."`) and combine them with `&&` if you have multiple commands to run. Here is an example of combining 3 commands. - -```bash -opai job sub [...] "git clone <repo-uri> && cd <repo-dir> && python run.py arg1 arg2 ..." -``` - -### How to add `pip install` packages - -Of course, you could write a sequence of commands like `pip install ... && python ...`. Another way is to use the `--pip-installs <package>` and `--pip-path <path/to/pip>` options in the commands; they add new commands to the `preCommands` in the `deployment`. - -### How to preview the generated job config but not submit it - -In some cases, user may want to preview the job config (in `v2` format) but not submit it directly. To fulfill this, just add the `--preview` option. The commands supporting this feature include `job submit`, `job sub` and `job notebook`. - -## `Jupyter` notebook - -### How to run a local notebook with remote resources - -Given a local `<notebook>` (e.g. `mnist_cnn.ipynb` stored on local disk), a user may want to run it remotely (on `OpenPAI`) and see the result. - -```bash -opai job notebook -a <cluster-alias> -i <image> -w <workspace> <notebook> -``` - -This command requires the same options as `opai job sub` does.
This command would - -- *Local* - upload `` to `/jobs//source` and submit the job to cluster (`` is set to `_` if not defined) -- *In job container* - download `` and execute it by `jupyter nbconver --execute`, the result would be saved in `` with the same name (`*.html`) -- *In job container* - upload `` to `/jobs//output` -- *Local* - wait and query the job state until its status to be `SUCCEEDED` -- *Local* - download `` to local and open it with web browser - -### How to launch a remote `Jupyter` server and connect it - -Sometimes user may want to launch a remote `Jupyter` server and do some work on it interactively. To do this, just add `--interactive` in `job notebook` command. After submitting the job, a link like `http://x.x.x.x:port/notebooks/` will be opened in your browser. Since it takes a while to start the container, please wait and refresh the page until the notebook opens. Use the default token `abcd` (unless it is overridden by `--token `) to login the notebook. - -## Other FAQ of CLI - -### How to select a cluster to use until I change it - -As shown in above examples, `--cluster-alias, -a` is required by lots of commands, but it may not be changed frequently. So it is annoying to type it every time. The CLI tool provides a command to select a cluster to use by - - opai cluster select [-g] - - -Commands after `opai cluster select` will have a default option (if necessary) `--cluster-alias `, which can be overwritten explicitly. The mechanism and priority sequence is the same to below section. - -### How to simplify the command - -The mechanism behind `opai cluster select` command help us to simplify the command further. For example, we could set `--workspace, -w` with a default value by - -```bash -opai set [-g] workspace= -``` - -The SDK will first load (`~/.openpai/defaults.yaml`), and then update them with the contents in `.openpai/defaults.yaml` in your current working directory. In every command requires a `--workspace, -w` option but no value defined, the default value would be used. - -Some commonly used default variables includes - -- `cluster-alias=` -- `image=` -- `workspace=` -- `sdk-branch=` which branch to use when install the sdk in job container - -### How to install a different version of SDK - -User could easily switch to another version of SDK both in local environment and in job container. In local environment, user just change `` to another branch (e.g. `pai-0.14.y` for `OpenPAI` end-June release or a feature developing branch for the canary version). - -```bash -pip install -U "git+https://github.com/Microsoft/pai@#egg=openpaisdk&subdirectory=contrib/python-sdk" -``` - -To debug a local update, just use `pip install -U your/path/to/setup.py`. - -For jobs submitted by the SDK or command line tool, the version specified by `opai set sdk-branch=` would be used firstly. If not specified, `master` branch will be used. - -### How to specify the `python` environment I want to use in the job container - -In some cases, there are more than one `python` environments in a docker image. For example, there are both `python` and `python3` environments in `openpai/pai.example.keras.tensorflow`. User could add `--python ` (e.g. `--python python3`) in the command `job notebook` or `job sub` to use the specific `python` environment. Refer to [notebook example](examples/1-submit-and-query-via-command-line.ipynb) for more details. 
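For readers who prefer the Python binding over the CLI, here is a minimal sketch that submits an existing `v2` job config with the classes documented in the next section; the config file name `my_job.yaml` is a placeholder, and error handling is omitted.

```python
import yaml
from openpaisdk.cluster import ClusterList

# load the clusters defined in ~/.openpai/clusters.yaml and pick one by alias
clusters = ClusterList().load()
client = clusters.get_client('cluster-for-test')  # alias from the YAML example above

# read an existing pai-job-protocol v2 config and submit it through the REST API
with open('my_job.yaml') as f:  # placeholder file name
    job_config = yaml.safe_load(f)
client.rest_api_submit(job_config)
```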
- -# Python binding - -## Cluster management - -- [x] User can describe a cluster with the `openpaisdk.core.ClusterList` class to describe multiple clusters - -```python -clusters = ClusterList().load() # loaded from "~/.openpai/clusters.yaml" by default -``` - -Use the `add`, `delete` methods to update clusters, and the `select` and `get_client` methods to select one from multiple clusters - -- [x] the `Cluster` class has methods to query and submit jobs - -```python -client = clusters.get_client(alias) -client.jobs(name) -client.rest_api_submit(job_config) -``` - -- [x] the `Cluster` class has methods to access storage (through `WebHDFS` only for this version) - -```python -Cluster(...).storage.upload/download(...) -``` - -## Job management - -- [x] User can describe a job with the `openpaisdk.core.Job` class, which is compatible with the v2 protocol - -```python -job = Job(name) -job.submit(cluster_alias) # submit current job to a cluster -``` - -- [x] provides some quick templates for simple jobs - -```python -job.one_liner(...) # generate job config from a command -job.from_notebook(...) # turn a notebook into a job -``` - -# Make contributions - -User may open issues and feature requests on [Github](https://github.com/microsoft/pai). - -## Release plan - -If there are feature requests not included, please open an issue for them. - -## Debug the SDK - -For users who want to improve the functions themselves, you may create a branch of the `OpenPAI` project and make modifications locally, and then set your own branch as the SDK installation source by - -```bash -opai set sdk-branch=<your-branch> -``` - -Then the `pip install` command in the job container would use `<your-branch>`. User may inspect the generated job config to verify. - -To set the internal logger to debug level, create an empty file `.openpai/debug_enable` to let the SDK enable debug logging. Remove the empty file to make it work normally again. - -## Unit tests - -Please execute the command below under the `tests` directory to run a quick unit test. - -```bash -python -m unittest discover -``` \ No newline at end of file diff --git a/contrib/python-sdk/docs/command-line-references.md deleted file mode 100644 index 243836df4..000000000 --- a/contrib/python-sdk/docs/command-line-references.md +++ /dev/null @@ -1,149 +0,0 @@ -# 1. Get started - -This section will give guidance about installation, cluster management and setting up frequently used variables. Refer to the README for more details. - -## 1.1. Installation - -Refer to the [README](../README.md#21-Installation) for how to install the SDK and specify your cluster information. - -## 1.2. Set default values - -It is annoying to specify some arguments every time (e.g. `-a <cluster-alias>` or `-i <image>`). During the workflow, user may often reference some variables without changing them. For example, it is usual to use the same docker image for multiple jobs, and the storage root doesn't change either. To simplify, it is suggested to set them by the `default` command, which would be stored in `.openpai/defaults.json` in the current working directory. - -```bash -opai set <variable>=<value> [<variable>=<value> [...]] -opai unset <variable> [<variable> [...]] -``` - -Here are some frequently used variables.
- -| Variable | Description | -| -- | -- | -| `cluster-alias` | the alias to select which cluster to connect | -| `image` | docker image name (and tag) to use | -| `workspace` | the root path in remote storage to store job information (`/jobs/`) | - -_Note: some required arguments in below examples are set in defaults (and ignored in the examples), please refer to `help` information by `-h` or `--help`_ - -# 2. CLI tools - -The command line tool `opai` provides several useful subcommands. - -| Scene | Action | Description | -| -- | -- | -- | -| `cluster` | `list` | cluster configuration management | -| `storage` | `list`, `status`, `upload`, `download`, `delete` | remote storage access | -| `job` | `list`, `new`, `submit`, `sub` | query, create and summit a job | -| `task` | `add` | add a task role to a job | -| `require` | `pip`, `weblink` | add requirements (prerequisites) to a job or task role | -| `runtime` | `execute` | python SDK run as the runtime | - -## 2.1. Query your existing jobs - -By executing below commands, all your existing job names would be displayed. - -```bash -opai job list [-a ] [] [{config,ssh}] -``` - -## 2.2. Submit a job with an existing config file - -Of course, you could submit a job from a job config `Json` file by - -```bash -opai job submit [-a ] --config -``` - -## 2.3. Submit a job step by step from sketch up - -To submit a job from sketch, user need to `create` the job (it would be cached in `.openpai/jobs/`). Then task roles could be added by `task` command one by one, and `submit` commond would dump the job config to `.openpai/jobs//config.json` and submit it through `REST` API. - -```bash -opai job new [-a ] -j [-i ] [-s ] -opai task -t [-n ] [--gpu ] [--cpu ] [--mem ] python ... -opai task -t [-n ] [--gpu ] [--cpu ] [--mem ] python ... -opai job submit [--preview] -``` - -## 2.4. Add requirements (prerequisites) - -It is common scenarios that users would prepare their environments by add requirements, such as installing python packages, mapping data storages. The prerequisites can apply to a specific task role (if both `--job-name, -j` and `--task-role-name, -t` specified) or to all task roles in the job (if only `--job-name` specified). - -```bash -opai require pip ... -opai require weblink http://x.x.x.x/filename.zip /data -``` - -In the above command, user can specify `--job-name ` (required) and `--task-role-name ` (optional). If task role name is specified, the command only applies to the specific task role, otherwise, it is for the job (all task roles). - -Now we support - -- python `pip` packages -- data mapping with weblink - -## 2.5. Submit one-line job in command line - -For the jobs that are simple (e.g. with only one task role), the CLI tool provides a shortcut to combine create, task and submit into only one command `sub`. - -If your job only has one task role and its command looks like `python script.py arg1 arg2`, you may submit it in a simplest way like - -```bash -opai job sub -j [-a ] [-i ] python script.py arg1 arg2 -``` - -## 2.6. _InProgress_ Job management and fetching outputs - -The SDK provides simple job management based folder structure on _remote_ storage. It is recommended to upload user logging or results to the output directory. - - -```bash -workspace (remote storage) - └─jobs - └─job-name-1 - ├─code - └─output - └─job-name-2 - ├─code - └─output -``` -| -The `workspace` and output directory path would be passed to job container by `PAI_SDK_JOB_WORKSPACE` and `PAI_SDK_JOB_OUTPUT_DIR`. 
-
-Use the commands below to fetch the outputs.
-
-```bash
-opai output list [-j ]
-opai output download [-j ] [ [...]]
-opai output peek [-j ] [--stdout] [--stdin] [--save ]
-```
-
-## 2.7. Storage access
-
-```bash
-opai storage list 
-opai storage delete 
-opai storage status 
-opai storage upload [--overwrite] 
-opai storage download 
-```
-
-`HDFS` access is implemented with the `hdfs` package, which talks to the cluster through the `webHDFS` API.
-
-## 2.8. _InProgress_ Job cloning and batch submitting
-
-Advanced functions like job cloning have proven very useful. You can clone from a local job config file or from an existing job name, and change some parameters (addressed by a nested dictionary path joined with `::`) to new values.
-
-```bash
-opai job clone --from -j = [...]
-```
-
-It is then natural to submit multiple jobs that differ only in small config changes.
-
-```python
-from subprocess import check_call
-# base job
-check_call(f'opai job sub -j base_job --env LR=0.001 python train.py $LR'.split())
-# batch submit
-for lr in ["0.005", "0.01"]:
-    check_call(f'opai job clone --from base_job -j bj_lr_{lr} jobEnvs::LR={lr}'.split())
-```
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/command-line-references_zh_CN.md b/contrib/python-sdk/docs/command-line-references_zh_CN.md
deleted file mode 100644
index f4a8e1b4a..000000000
--- a/contrib/python-sdk/docs/command-line-references_zh_CN.md
+++ /dev/null
@@ -1,148 +0,0 @@
-# 1. Get started
-
-This section gives guidance on installation, cluster management, and setting frequently used variables. Refer to the README for more details.
-
-## 1.1. Installation
-
-Refer to [README](../README.md#21-Installation) for how to install the sdk and specify your cluster information.
-
-## 1.2. Set default values
-
-It is tedious to specify some arguments every time (e.g. `-a ` or `-i `), and during a workflow users often reuse the same values: the same docker image across multiple jobs, an unchanged storage root, and so on. To simplify this, set such values with the `set` command; they are stored in `.openpai/defaults.json` in the current working directory.
-
-```bash
-opai set [= [= [...]]]
-opai unset [ [...]]
-```
-
-Here are some frequently used variables.
-
-| Variable        | Description                                                                                            |
-| --------------- | ------------------------------------------------------------------------------------------------------ |
-| `cluster-alias` | the alias of the cluster to connect to                                                                  |
-| `image`         | docker image name (and tag) to use                                                                      |
-| `workspace`     | the root path in remote storage to store job information (`/jobs/`)                                     |
-
-_Note: some required arguments in the examples below are set in defaults (and therefore omitted from the examples); refer to the `help` information via `-h` or `--help`_
-
-# 2. CLI tools
-
-The command line tool `opai` provides several useful subcommands.
-
-| Scene     | Action                                           | Description                                             |
-| --------- | ------------------------------------------------ | ------------------------------------------------------- |
-| `cluster` | `list`                                           | cluster configuration management                        |
-| `storage` | `list`, `status`, `upload`, `download`, `delete` | remote storage access                                   |
-| `job`     | `list`, `new`, `submit`, `sub`                   | query, create and submit a job                          |
-| `task`    | `add`                                            | add a task role to a job                                |
-| `require` | `pip`, `weblink`                                 | add requirements (prerequisites) to a job or task role |
-| `runtime` | `execute`                                        | run the python SDK as the runtime                       |
-
-## 2.1. Query your existing jobs
-
-The command below lists all your existing job names.
-
-```bash
-opai job list [-a ] [] [{config,ssh}]
-```
-
-## 2.2. Submit a job with an existing config file
-
-You can submit a job from an existing `JSON` job config file by
-
-```bash
-opai job submit [-a ] --config 
-```
-
-## 2.3. Submit a job step by step from scratch
-
-To submit a job from scratch, first create the job (it is cached in `.openpai/jobs/`). Task roles can then be added one by one with the `task` command, and the `submit` command dumps the job config to `.openpai/jobs//config.json` and submits it through the `REST` API.
-
-```bash
-opai job new [-a ] -j [-i ] [-s ]
-opai task -t [-n ] [--gpu ] [--cpu ] [--mem ] python ...
-opai task -t [-n ] [--gpu ] [--cpu ] [--mem ] python ...
-opai job submit [--preview]
-```
-
-## 2.4. Add requirements (prerequisites)
-
-It is a common scenario that users prepare their environment by adding requirements, such as installing python packages or mapping data storage. The prerequisites can apply to a specific task role (if both `--job-name, -j` and `--task-role-name, -t` are specified) or to all task roles in the job (if only `--job-name` is specified).
-
-```bash
-opai require pip ...
-opai require weblink http://x.x.x.x/filename.zip /data
-```
-
-In the above commands, the user can specify `--job-name ` (required) and `--task-role-name ` (optional). If a task role name is specified, the command only applies to that task role; otherwise, it applies to the whole job (all task roles).
-
-Currently supported requirement types are
-
-- python `pip` packages
-- data mapping with a weblink
-
-## 2.5. Submit a one-line job in the command line
-
-For simple jobs (e.g. with only one task role), the CLI provides `sub`, a shortcut that combines create, task and submit into a single command.
-
-If your job has only one task role and its command looks like `python script.py arg1 arg2`, you can submit it as simply as
-
-```bash
-opai job sub -j [-a ] [-i ] python script.py arg1 arg2
-```
-
-## 2.6. *InProgress* Job management and fetching outputs
-
-The SDK provides simple job management based on a folder structure on *remote* storage. It is recommended to upload user logs or results to the output directory.
-
-```bash
-workspace (remote storage)
-  └─jobs
-      └─job-name-1
-          ├─code
-          └─output
-      └─job-name-2
-          ├─code
-          └─output
-```
-
-The `workspace` and output directory paths are passed to the job container through the environment variables `PAI_SDK_JOB_WORKSPACE` and `PAI_SDK_JOB_OUTPUT_DIR`.
-
-Use the commands below to fetch the outputs.
-
-```bash
-opai output list [-j ]
-opai output download [-j ] [ [...]]
-opai output peek [-j ] [--stdout] [--stdin] [--save ]
-```
-
-## 2.7. Storage access
-
-```bash
-opai storage list 
-opai storage delete 
-opai storage status 
-opai storage upload [--overwrite] 
-opai storage download 
-```
-
-`HDFS` access is implemented with the `hdfs` package, which talks to the cluster through the `webHDFS` API.
-
-## 2.8. *InProgress* Job cloning and batch submitting
-
-Advanced functions like job cloning have proven very useful. You can clone from a local job config file or from an existing job name, and change some parameters (addressed by a nested dictionary path joined with `::`) to new values.
-
-```bash
-opai job clone --from -j = [...]
-```
-
-It is then natural to submit multiple jobs that differ only in small config changes.
-
-```python
-from subprocess import check_call
-# base job
-check_call(f'opai job sub -j base_job --env LR=0.001 python train.py $LR'.split())
-# batch submit
-for lr in ["0.005", "0.01"]:
-    check_call(f'opai job clone --from base_job -j bj_lr_{lr} jobEnvs::LR={lr}'.split())
-```
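-
-How a `::`-joined path addresses a nested config can be sketched as below; this only illustrates the addressing scheme, and `set_by_path` is a hypothetical helper, not part of the SDK:
-
-```python
-def set_by_path(cfg: dict, path: str, value):
-    """Set a value in a nested dict addressed by a '::'-joined key path."""
-    keys = path.split('::')
-    for k in keys[:-1]:
-        cfg = cfg.setdefault(k, {})
-    cfg[keys[-1]] = value
-
-job_config = {'jobEnvs': {'LR': '0.001'}}
-set_by_path(job_config, 'jobEnvs::LR', '0.01')  # mirrors jobEnvs::LR=0.01 above
-assert job_config['jobEnvs']['LR'] == '0.01'
-```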
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/medias/programming_model.md b/contrib/python-sdk/docs/medias/programming_model.md
deleted file mode 100644
index 427b34e2e..000000000
--- a/contrib/python-sdk/docs/medias/programming_model.md
+++ /dev/null
@@ -1,18 +0,0 @@
-```mermaid
-sequenceDiagram
-    participant FE as Front End or Plugins
-    participant Launcher as OpenPAI Core
-    participant RT as Runtime (in container)
-    Note left of FE: User
-    FE->>FE: prepare data & codes *
-    FE->>Launcher: submit a job *
-    Launcher->>+RT: pass info through Protocol
-    Note right of RT: parse protocol *
-    Note over RT, Storage: access data (if any) *
-    Note right of RT: execute cmds *
-    Note right of RT: callbacks *
-    RT->>Storage: save annotated files *
-    RT->>-Launcher: exit container
-    FE->>Launcher: query job info *
-    FE->>Storage: fetch job outputs *
-```
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/medias/programming_model.svg b/contrib/python-sdk/docs/medias/programming_model.svg
deleted file mode 100644
index 651b0ee23..000000000
--- a/contrib/python-sdk/docs/medias/programming_model.svg
+++ /dev/null
@@ -1,360 +0,0 @@
-[SVG sequence diagram, rendered from programming_model.md above: Front End or Plugins / OpenPAI Core / Runtime (in container) / Storage]
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/medias/programming_model_zh_CN.md b/contrib/python-sdk/docs/medias/programming_model_zh_CN.md
deleted file mode 100644
index 427b34e2e..000000000
--- a/contrib/python-sdk/docs/medias/programming_model_zh_CN.md
+++ /dev/null
@@ -1,18 +0,0 @@
-```mermaid
-sequenceDiagram
-    participant FE as Front End or Plugins
-    participant Launcher as OpenPAI Core
-    participant RT as Runtime (in container)
-    Note left of FE: User
-    FE->>FE: prepare data & codes *
-    FE->>Launcher: submit a job *
-    Launcher->>+RT: pass info through Protocol
-    Note right of RT: parse protocol *
-    Note over RT, Storage: access data (if any) *
-    Note right of RT: execute cmds *
-    Note right of RT: callbacks *
-    RT->>Storage: save annotated files *
-    RT->>-Launcher: exit container
-    FE->>Launcher: query job info *
-    FE->>Storage: fetch job outputs *
-```
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/python-binding-references.md b/contrib/python-sdk/docs/python-binding-references.md
deleted file mode 100644
index d18c7bac9..000000000
--- a/contrib/python-sdk/docs/python-binding-references.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# 1. Python binding
-
-After installing the SDK, a package named `openpaisdk` can be imported in python code. Here are some frequently used classes.
-
-```python
-from openpaisdk.core import Client # OpenPAI client
-from openpaisdk.job import Job # job description
-from openpaisdk.command_line import Engine # command dispatcher
-```
-
-## 1.1. Detect your execution environment
-
-In your code, you may use `openpaisdk.core.in_job_container` to tell where you are running. This lets you do different things according to your environment.
-
-```python
-from openpaisdk.core import in_job_container
-# help(in_job_container) for more details
-if in_job_container():
-    pass
-else:
-    pass
-```
-
-This function is implemented by checking whether a certain environment variable (e.g. `PAI_CONTAINER_ID`) is set to a non-empty value.
-
-## 1.2. Do it the easy way
-
-To unify the interface and reduce learning cost, users can do whatever the CLI provides from python code in a similar way, by calling `Engine`. For example, the following lines query all existing jobs submitted by the current user on the cluster named `your-alias`.
-
-```python
-from openpaisdk.command_line import Engine
-
-job_name_list = Engine().process(['job', 'list', '--name', '-a', 'your-alias'])
-```
-
-The advantages of this approach over `os.system()` or `subprocess.check_call` are that it (a) avoids process overhead and (b) returns a structured result (no text output to parse). It also guarantees consistency between the CLI and the python binding.
-
-## 1.3. Do it in a more pythonic way
-
-If you prefer not to go through the CLI layer, you can use the code behind it directly. Here is the code that does the same thing.
-
-```python
-from openpaisdk.core import Client
-from openpaisdk import __cluster_config_file__
-
-client, _ = Client.from_json(__cluster_config_file__, 'your-alias')
-job_name_list = client.jobs(name_only=True)
-```
-
-## 1.4. Submit your working notebook running on a local server
-
-If you are working in a local `Jupyter` notebook, adding and executing the cell below will submit a job.
-
-```python
-from openpaisdk.notebook import submit_notebook
-from openpaisdk.core import in_job_container
-# help(submit_notebook) for more details
-if not in_job_container():
-    job_link = submit_notebook()
-    print(job_link)
-```
diff --git a/contrib/python-sdk/docs/python-binding-references_zh_CN.md b/contrib/python-sdk/docs/python-binding-references_zh_CN.md
deleted file mode 100644
index 6d9c59f89..000000000
--- a/contrib/python-sdk/docs/python-binding-references_zh_CN.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# 1. Python binding
-
-After installing the SDK, a package named `openpaisdk` can be imported in python code. Here are some frequently used classes.
-
-```python
-from openpaisdk.core import Client # OpenPAI client
-from openpaisdk.job import Job # job description
-from openpaisdk.command_line import Engine # command dispatcher
-```
-
-## 1.1. Detect your execution environment
-
-In your code, you may use `openpaisdk.core.in_job_container` to tell where you are running. This lets you do different things according to your environment.
-
-```python
-from openpaisdk.core import in_job_container
-# help(in_job_container) for more details
-if in_job_container():
-    pass
-else:
-    pass
-```
-
-This function is implemented by checking whether a certain environment variable (e.g. `PAI_CONTAINER_ID`) is set to a non-empty value.
-
-## 1.2. Do it the easy way
-
-To unify the interface and reduce learning cost, users can do whatever the CLI provides from python code in a similar way, by calling `Engine`. For example, the following lines query all existing jobs submitted by the current user on the cluster named `your-alias`.
-
-```python
-from openpaisdk.command_line import Engine
-
-job_name_list = Engine().process(['job', 'list', '--name', '-a', 'your-alias'])
-```
-
-The advantages of this approach over `os.system()` or `subprocess.check_call` are that it (a) avoids process overhead and (b) returns a structured result (no text output to parse). It also guarantees consistency between the CLI and the python binding.
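-
-The contrast can be seen in a small sketch (assuming `opai` is on the PATH; the alias is illustrative): the subprocess route spawns a new interpreter and yields raw text to parse, while `Engine` stays in-process and yields Python objects directly.
-
-```python
-import subprocess
-
-# via a subprocess: extra process, raw text that still needs parsing
-text = subprocess.check_output(['opai', 'job', 'list', '--name', '-a', 'your-alias']).decode()
-
-# via the binding: same tokens, structured result
-from openpaisdk.command_line import Engine
-job_name_list = Engine().process(['job', 'list', '--name', '-a', 'your-alias'])
-```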
-
-## 1.3. Do it in a more pythonic way
-
-If you prefer not to go through the CLI layer, you can use the code behind it directly. Here is the code that does the same thing.
-
-```python
-from openpaisdk.core import Client
-from openpaisdk import __cluster_config_file__
-
-client, _ = Client.from_json(__cluster_config_file__, 'your-alias')
-job_name_list = client.jobs(name_only=True)
-```
-
-## 1.4. Submit your working notebook running on a local server
-
-If you are working in a local `Jupyter` notebook, adding and executing the cell below will submit a job.
-
-```python
-from openpaisdk.notebook import submit_notebook
-from openpaisdk.core import in_job_container
-# help(submit_notebook) for more details
-if not in_job_container():
-    job_link = submit_notebook()
-    print(job_link)
-```
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/runtime-references.md b/contrib/python-sdk/docs/runtime-references.md
deleted file mode 100644
index 725d29af1..000000000
--- a/contrib/python-sdk/docs/runtime-references.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# 1. _ToDiscuss_ Python SDK as a runtime
-
-When submitting a job through the SDK (CLI or python binding), the SDK is installed inside the job container automatically by default (turn this off by adding `--disable-sdk-install` to `job create`).
-
-## 1.1. Reconstruct the client in the job container
-
-The SDK passes the necessary information to the job container through the `__clusters__` and `__defaults__` items of the `extras` part of the job config file, and the `runtime` command saves them to `~/.openpai/clusters.json` and `.openpai/defaults.json` respectively.
-
-## 1.2. User can customize callbacks before or after the command execution
-
-This is similar to the pre- or post- commands in protocol v2.
-
-## 1.3. User can customize callbacks when an exception is raised
-
-This is for debugging.
-
-## 1.4. Implementation
-
-Ideally, the SDK provides decorators for registering callbacks. Here is an example.
-
-```python
-# original codes
-...
-
-def main(args):
-    ...
-
-if __name__ == "__main__":
-    ...
-    result = main(args)
-    ...
-```
-
-After customizing callbacks, it may look like
-
-```python
-# for openpai
-
-from openpai.runtime import Runtime
-
-app = Runtime.from_env()
-
-@app.on('start')
-def pre_commands(*args, **kwargs): # if not defined, use that generated from job config
-    ...
-
-@app.on('end')
-def post_commands(*args, **kwargs): # if not defined, use that generated from job config
-    ...
-
-@app.on('main')
-def main(args):
-    ...
-
-if __name__ == "__main__":
-    ...
-    result = app.run(args)
-    ...
-
-```
-
-_Note: the Runtime may only be triggered when `in_job_container()` is true, or under some user-defined conditions_
diff --git a/contrib/python-sdk/docs/runtime-references_zh_CN.md b/contrib/python-sdk/docs/runtime-references_zh_CN.md
deleted file mode 100644
index 03ae3a38f..000000000
--- a/contrib/python-sdk/docs/runtime-references_zh_CN.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# 1. *ToDiscuss* Python SDK as a runtime
-
-When submitting a job through the SDK (CLI or python binding), the SDK is installed inside the job container automatically by default (turn this off by adding `--disable-sdk-install` to `job create`).
-
-## 1.1. Reconstruct the client in the job container
-
-The SDK passes the necessary information to the job container through the `__clusters__` and `__defaults__` items of the `extras` part of the job config file, and the `runtime` command saves them to `~/.openpai/clusters.json` and `.openpai/defaults.json` respectively.
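-
-What this restore step amounts to can be sketched as below; `restore_client_files` is a hypothetical helper for illustration, not the SDK's actual runtime code (the `extras` keys and file paths come from above):
-
-```python
-import json
-import os
-
-def restore_client_files(job_config: dict):
-    """Recreate the local cluster/defaults files from a job config's extras."""
-    extras = job_config.get('extras', {})
-    os.makedirs(os.path.expanduser('~/.openpai'), exist_ok=True)
-    with open(os.path.expanduser('~/.openpai/clusters.json'), 'w') as f:
-        json.dump(extras.get('__clusters__', []), f)
-    os.makedirs('.openpai', exist_ok=True)
-    with open('.openpai/defaults.json', 'w') as f:
-        json.dump(extras.get('__defaults__', {}), f)
-```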
-
-## 1.2. User can customize callbacks before or after the command execution
-
-This is similar to the pre- or post- commands in protocol v2.
-
-## 1.3. User can customize callbacks when an exception is raised
-
-This is for debugging.
-
-## 1.4. Implementation
-
-Ideally, the SDK provides decorators for registering callbacks. Here is an example.
-
-```python
-# original codes
-...
-
-def main(args):
-    ...
-
-if __name__ == "__main__":
-    ...
-    result = main(args)
-    ...
-```
-
-After customizing callbacks, it may look like
-
-```python
-# for openpai
-
-from openpai.runtime import Runtime
-
-app = Runtime.from_env()
-
-@app.on('start')
-def pre_commands(*args, **kwargs): # if not defined, use that generated from job config
-    ...
-
-@app.on('end')
-def post_commands(*args, **kwargs): # if not defined, use that generated from job config
-    ...
-
-@app.on('main')
-def main(args):
-    ...
-
-if __name__ == "__main__":
-    ...
-    result = app.run(args)
-    ...
-
-```
-
-*Note: the Runtime may only be triggered when `in_job_container()` is true, or under some user-defined conditions*
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/scenarios-and-user-stories.md b/contrib/python-sdk/docs/scenarios-and-user-stories.md
deleted file mode 100644
index 09d39c345..000000000
--- a/contrib/python-sdk/docs/scenarios-and-user-stories.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# 1. Benefits and scenarios
-
-## 1.1. Easily accessible `OpenPAI` interface
-
-- **User can easily access `OpenPAI` resources in scripts (`Python` or `Shell`) and `Jupyter` notebooks**
-
-The SDK provides classes to describe the clusters (`openpaisdk.core.Cluster`) and jobs (`openpaisdk.job.Job`). The Cluster class wraps the necessary REST APIs for convenient operations. The Job class is an implementation of the [protocol](https://github.com/microsoft/openpai-protocol/blob/master/schemas/v2/schema.yaml), with which users can easily organize (add or edit) the contents of `yaml` and `json` job configurations.
-
-Besides wrapping the APIs, the SDK also provides functions that help users make the most of `OpenPAI`. Such functions include *cluster management*, *storage access*, and *execution environment detection (local or in a job container)*.
-
-_Refer to [this doc]() for more details of the Python binding_
-
-- **User can submit and list jobs by simple commands**
-
-The SDK provides a command line interface prefixed with `opai`. Users can complete basic and advanced operations with simple commands, e.g.
-
-```bash
-# query jobs
-opai job list
-# submit an existing job config file
-opai job submit --config your/job/config/file
-# submit a job in one line
-opai job sub --image your/docker/image --gpu 1 some/commands
-# storage access
-opai storage upload/download/list ...
-```
-
-_Refer to [command-line-references.md](command-line-references.md) or execute `opai -h` for more details about the command line interface_
-
-- **User can easily accomplish complicated operations with `OpenPAI`**
-
-For advanced users or tools built on `OpenPAI` (e.g. [NNI]()), it is convenient to have a programmatic way to complete such operations. For example, a user may want to submit tens of jobs to sweep a parameter in a simple `for` loop, which would be inconvenient to do manually; see the sketch below.
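-
-A minimal sketch of such a loop, assuming the `Job` helpers described earlier (`one_liner` generates a job config from a command, `submit` sends it to a cluster); the names, command and alias are illustrative:
-
-```python
-from openpaisdk.core import Job
-
-# submit one job per candidate learning rate
-for lr in [0.001, 0.005, 0.01]:
-    job = Job(f'sweep-lr-{lr}')
-    job.one_liner(f'python train.py --lr {lr}')  # generate the job config from a command
-    job.submit('your-alias')
-```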
-
-- **User can easily reuse local codes**
-
-`OpenPAI` is quite efficient at utilizing powerful computing resources to run deep learning jobs. However, users have to get their code and environment ready first. A common approach is to start a long-running interactive job and write (debug) code in it before the real execution. This has two disadvantages: the inconvenience of remote debugging, and the waste of computing resources while idle.
-
-The SDK aims to solve this problem by letting users code locally and execute on `OpenPAI`. For example, a user can code and debug in a locally running notebook first, then use `openpaisdk.notebook.submit_notebook` to turn it into a job with only a few lines.
-
-## 1.2. Powerful runtime support
-
-By installing this package in the docker container, the SDK can run as part of the runtime
-
-- **It can provide more powerful built-in functions than `pre-commands` and `post-commands`**
-
-The current `OpenPAI` leverages pre-commands and post-commands to do necessary operations before or after user commands. However, this is limited by the expressive power of shell commands, and complicated behaviors are hard to specify. For example, some operations (e.g. storage mounting) require conditional logic depending on OS versions; that is hard to implement in pre-commands, yet easy to do with a function in the SDK.
-
-- **It provides basic job management based on the workspace and job folder structure**
-
-For jobs submitted by the SDK (or CLI), a storage structure is constructed for each of them. The SDK creates `code` and `output` (or other, if required) directories in `/jobs/`. The SDK or CLI also provides interfaces to access them.
-
-- **It can let users annotate output files to be saved before exiting the container**
-
-Users can annotate some files (or folders) to be uploaded when submitting the job.
-
-- **It can provide a mechanism to execute certain callbacks in specified scenarios**
-
-The current implementation provides pre- and post- commands; beyond that, the SDK will try to let users specify behaviors for other cases. For example, a user can specify what to do when the user commands exit with a non-zero return code.
-
-## 1.3. Unified workflow
-
-In the new implementation, the [job protocol]() bridges the user's specification and the real execution of the job. The SDK is one implementation of the protocol, and includes functions to organize, edit, parse and execute the protocol according to the user's expectations.
-
-![program model](medias/programming_model.svg)
-
-_*: the functions provided by the SDK or CLI_
\ No newline at end of file
diff --git a/contrib/python-sdk/docs/scenarios-and-user-stories_zh_CN.md b/contrib/python-sdk/docs/scenarios-and-user-stories_zh_CN.md
deleted file mode 100644
index ff935b51d..000000000
--- a/contrib/python-sdk/docs/scenarios-and-user-stories_zh_CN.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# 1. Benefits and scenarios
-
-## 1.1. Easily accessible `OpenPAI` interface
-
-- **User can easily access `OpenPAI` resources in scripts (`Python` or `Shell`) and `Jupyter` notebooks**
-
-The SDK provides classes to describe the clusters (`openpaisdk.core.Cluster`) and jobs (`openpaisdk.job.Job`). The Cluster class wraps the necessary REST APIs for convenient operations.
The Job class is an implementation of the [protocol](https://github.com/microsoft/openpai-protocol/blob/master/schemas/v2/schema.yaml), with which users can easily organize (add or edit) the contents of `yaml` and `json` job configurations.
-
-Besides wrapping the APIs, the SDK also provides functions that help users make the most of `OpenPAI`. Such functions include *cluster management*, *storage access*, and *execution environment detection (local or in a job container)*.
-
-*Refer to [this doc]() for more details of the Python binding*
-
-- **User can submit and list jobs by simple commands**
-
-The SDK provides a command line interface prefixed with `opai`. Users can complete basic and advanced operations with simple commands, e.g.
-
-```bash
-# query jobs
-opai job list
-# submit an existing job config file
-opai job submit --config your/job/config/file
-# submit a job in one line
-opai job sub --image your/docker/image --gpu 1 some/commands
-# storage access
-opai storage upload/download/list ...
-```
-
-*Refer to the command line references or execute `opai -h` for more details about the command line interface*
-
-- **User can easily accomplish complicated operations with `OpenPAI`**
-
-For advanced users or tools built on `OpenPAI` (e.g. [NNI]()), it is convenient to have a programmatic way to complete such operations. For example, a user may want to submit tens of jobs to sweep a parameter in a simple `for` loop, which would be inconvenient to do manually.
-
-- **User can easily reuse local codes**
-
-`OpenPAI` is quite efficient at utilizing powerful computing resources to run deep learning jobs. However, users have to get their code and environment ready first. A common approach is to start a long-running interactive job and write (debug) code in it before the real execution. This has two disadvantages: the inconvenience of remote debugging, and the waste of computing resources while idle.
-
-The SDK aims to solve this problem by letting users code locally and execute on `OpenPAI`. For example, a user can code and debug in a locally running notebook first, then use `openpaisdk.notebook.submit_notebook` to turn it into a job with only a few lines.
-
-## 1.2. Powerful runtime support
-
-By installing this package in the docker container, the SDK can run as part of the runtime
-
-- **It can provide more powerful built-in functions than `pre-commands` and `post-commands`**
-
-The current `OpenPAI` leverages pre-commands and post-commands to do necessary operations before or after user commands. However, this is limited by the expressive power of shell commands, and complicated behaviors are hard to specify. For example, some operations (e.g. storage mounting) require conditional logic depending on OS versions; that is hard to implement in pre-commands, yet easy to do with a function in the SDK.
-
-- **It provides basic job management based on the workspace and job folder structure**
-
-For jobs submitted by the SDK (or CLI), a storage structure is constructed for each of them. The SDK creates `code` and `output` (or other, if required) directories in `/jobs/`. The SDK or CLI also provides interfaces to access them.
-
-- **It can let users annotate output files to be saved before exiting the container**
-
-Users can annotate some files (or folders) to be uploaded when submitting the job.
-
-- **It can provide a mechanism to execute certain callbacks in specified scenarios**
-
-The current implementation provides pre- and post- commands; beyond that, the SDK will try to let users specify behaviors for other cases. For example, a user can specify what to do when the user commands exit with a non-zero return code.
-
-## 1.3. Unified workflow
-
-In the new implementation, the [job protocol]() bridges the user's specification and the real execution of the job. The SDK is one implementation of the protocol, and includes functions to organize, edit, parse and execute the protocol according to the user's expectations.
-
-![program model](medias/programming_model.svg)
-
-**: the functions provided by the SDK or CLI*
\ No newline at end of file
diff --git a/contrib/python-sdk/examples/0-install-sdk-specify-openpai-cluster.ipynb b/contrib/python-sdk/examples/0-install-sdk-specify-openpai-cluster.ipynb
deleted file mode 100644
index 03cffa2ce..000000000
--- a/contrib/python-sdk/examples/0-install-sdk-specify-openpai-cluster.ipynb
+++ /dev/null
@@ -1,113 +0,0 @@
-{
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# Install the SDK\n",
-    "Refer to the **Installation** part of [README](https://github.com/microsoft/pai/blob/sdk-release-v0.4.00/contrib/python-sdk/README.md)\n",
-    "\n",
-    "*Note: the code is now in a feature development branch; it will be merged to master once stable*\n",
-    "\n",
-    "*Note 2: Restarting the kernel may be required to let python load the newly installed package*\n",
-    "\n",
-    "After installation, check it."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "import openpaisdk\n",
-    "print(openpaisdk.__version__)\n",
-    "print(openpaisdk.__container_sdk_branch__)\n",
-    "print(openpaisdk.get_install_uri())"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "And also check the command line interface (CLI) tool `opai`"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "! opai -h"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "# Specify `OpenPAI` cluster information\n",
-    "Refer to the corresponding part of [README](https://github.com/microsoft/pai/blob/sdk-release-v0.4.00/contrib/python-sdk/README.md)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Add a cluster\n",
-    "User may add a new cluster by `opai cluster add` and attach an hdfs storage to it as below."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "! opai cluster add --cluster-alias cluster-for-test --pai-uri http://x.x.x.x --user myuser --password mypassword\n",
-    "! 
opai cluster attach-hdfs --default --cluster-alias cluster-for-test --storage-alias hdfs --web-hdfs-uri http://x.x.x.x:port" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## List your clusters\n", - "User may list all specified clusters by `opai cluster list`" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from openpaisdk.command_line import Engine\n", - "\n", - "cluster_cfg = Engine().process(['cluster', 'list'])[\"cluster-for-test\"]\n", - "cluster_cfg" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file diff --git a/contrib/python-sdk/examples/1-submit-and-query-via-command-line.ipynb b/contrib/python-sdk/examples/1-submit-and-query-via-command-line.ipynb deleted file mode 100644 index 21c9690f6..000000000 --- a/contrib/python-sdk/examples/1-submit-and-query-via-command-line.ipynb +++ /dev/null @@ -1,192 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Prerequisites\n", - "Install the `OpenPAI` sdk from `github` and specify your cluster information in `~/.openpai/clusters.yaml`. \n", - "\n", - "And for simplicity and security, we recommand user to setup necessary information in `.openpai/defaults.json` other than shown in the example notebook. (Refer to for [README](https://github.com/microsoft/pai/blob/sdk-release-v0.4.00/contrib/python-sdk/README.md) more details.)\n", - "\n", - "_Please make sure you have set default values for ***cluster-alias***. 
This notebook will not set them explicitly for security and privacy issue_\n", - "\n", - "If not, use below commands to set them\n", - "```bash\n", - "opai set cluster-alias=\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%load_ext autoreload\n", - "%autoreload 2\n", - "\n", - "from openpaisdk.command_line import Engine\n", - "from openpaisdk.core import ClusterList, in_job_container\n", - "from uuid import uuid4 as randstr\n", - "\n", - "clusters = Engine().process(['cluster', 'list'])\n", - "default_values = Engine().process(['set'])\n", - "print(default_values)\n", - "\n", - "cluster_alias = default_values[\"cluster-alias\"]\n", - "assert cluster_alias in clusters, \"please specify cluster-alias and workspace\"\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Submit jobs\n", - "\n", - "Now we submit jobs from \n", - "- an existing version 1 job config file\n", - "- an existing version 2 job config file\n", - "- a hello-world command line" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%writefile mnist_v1.json\n", - "{\n", - " \"jobName\": \"keras_tensorflow_backend_mnist\",\n", - " \"image\": \"openpai/pai.example.keras.tensorflow:stable\",\n", - " \"taskRoles\": [\n", - " {\n", - " \"name\": \"mnist\",\n", - " \"taskNumber\": 1,\n", - " \"cpuNumber\": 4,\n", - " \"memoryMB\": 8192,\n", - " \"gpuNumber\": 1,\n", - " \"command\": \"python mnist_cnn.py\"\n", - " }\n", - " ]\n", - "}" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%writefile mnist_v2.yaml\n", - "protocolVersion: 2\n", - "name: keras_tensorflow_mnist\n", - "type: job\n", - "version: 1.0\n", - "contributor: OpenPAI\n", - "description: |\n", - " # Keras Tensorflow Backend MNIST Digit Recognition Examples\n", - " Trains a simple convnet on the MNIST dataset.\n", - " Gets to 99.25% test accuracy after 12 epochs\n", - " (there is still a lot of margin for parameter tuning).\n", - " 16 seconds per epoch on a GRID K520 GPU.\n", - "\n", - " Reference https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py\n", - "\n", - "prerequisites:\n", - " - protocolVersion: 2\n", - " name: keras_tensorflow_example\n", - " type: dockerimage\n", - " version: 1.0\n", - " contributor : OpenPAI\n", - " description: |\n", - " This is an [example Keras with TensorFlow backend Docker image on OpenPAI](https://github.com/Microsoft/pai/tree/master/examples/keras).\n", - " uri : openpai/pai.example.keras.tensorflow\n", - "\n", - "taskRoles:\n", - " train:\n", - " instances: 1\n", - " completion:\n", - " minSucceededInstances: 1\n", - " dockerImage: keras_tensorflow_example\n", - " resourcePerInstance:\n", - " cpu: 4\n", - " memoryMB: 8192\n", - " gpu: 1\n", - " commands:\n", - " - python mnist_cnn.py" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "tests = [\"submit_v1\", \"submit_v2\", \"sub_oneliner\"]\n", - "jobnames = {k: k + '_' + randstr().hex for k in tests}\n", - "\n", - "options = \"\"\n", - "# options += \" --preview\"\n", - "\n", - "if not in_job_container():\n", - " jobs, cmds = [], []\n", - " \n", - " # submit v1\n", - " jobs.append(\"submit_v1_\" + randstr().hex)\n", - " cmds.append(f'opai job submit {options} --update jobName={jobs[-1]} mnist_v1.json')\n", - "\n", - " # submit v2\n", - " jobs.append(\"submit_v2_\" + 
randstr().hex)\n", - " cmds.append(f'opai job submit {options} --update name={jobs[-1]} mnist_v2.yaml')\n", - " \n", - " # sub\n", - " jobs.append(\"sub_\" + randstr().hex) \n", - " resource = '-i openpai/pai.example.keras.tensorflow --cpu 4 --memoryMB 8192 --gpu 1'\n", - " cmds.append(f'opai job sub {options} -j {jobs[-1]} {resource} python mnist_cnn.py')\n", - "\n", - " # notebook\n", - " jobs.append(\"notebook_\" + randstr().hex) \n", - " cmds.append(f'opai job notebook {options} -j {jobs[-1]} {resource} --python python3 --pip-installs keras 2-submit-job-from-local-notebook.ipynb')\n", - "\n", - " for cmd in cmds:\n", - " print(cmd, \"\\n\")\n", - " ! {cmd}\n", - " print(\"\\n\")\n", - " \n", - " states = ClusterList().load().get_client(cluster_alias).wait(jobs)\n", - " failed_jobs = [t for i, t in enumerate(jobs) if states[i] != \"SUCCEEDED\"]\n", - " assert not failed_jobs, \"some of jobs fails %s\" % failed_jobs" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/contrib/python-sdk/examples/2-submit-job-from-local-notebook.ipynb b/contrib/python-sdk/examples/2-submit-job-from-local-notebook.ipynb deleted file mode 100644 index bcfe08bfb..000000000 --- a/contrib/python-sdk/examples/2-submit-job-from-local-notebook.ipynb +++ /dev/null @@ -1,115 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Keras MNIST CNN example\n", - "\n", - "https://keras.io/examples/mnist_cnn/\n", - "\n", - "Trains a simple convnet on the MNIST dataset.\n", - "\n", - "Gets to 99.25% test accuracy after 12 epochs (there is still a lot of margin for parameter tuning). 
16 seconds per epoch on a GRID K520 GPU.\n", - "\n", - "Submit this notebook to openpai by \n", - "\n", - "```bash\n", - "opai job notebook -i openpai/pai.example.keras.tensorflow --cpu 4 --memoryMB 8192 --gpu 1 --python python3 --pip-installs keras 2-submit-job-from-local-notebook.ipynb\n", - " ```" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from __future__ import print_function\n", - "import keras\n", - "from keras.datasets import mnist\n", - "from keras.models import Sequential\n", - "from keras.layers import Dense, Dropout, Flatten\n", - "from keras.layers import Conv2D, MaxPooling2D\n", - "from keras import backend as K\n", - "\n", - "batch_size = 128\n", - "num_classes = 10\n", - "epochs = 12\n", - "\n", - "# input image dimensions\n", - "img_rows, img_cols = 28, 28\n", - "\n", - "# the data, split between train and test sets\n", - "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n", - "\n", - "if K.image_data_format() == 'channels_first':\n", - " x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)\n", - " x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)\n", - " input_shape = (1, img_rows, img_cols)\n", - "else:\n", - " x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)\n", - " x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)\n", - " input_shape = (img_rows, img_cols, 1)\n", - "\n", - "x_train = x_train.astype('float32')\n", - "x_test = x_test.astype('float32')\n", - "x_train /= 255\n", - "x_test /= 255\n", - "print('x_train shape:', x_train.shape)\n", - "print(x_train.shape[0], 'train samples')\n", - "print(x_test.shape[0], 'test samples')\n", - "\n", - "# convert class vectors to binary class matrices\n", - "y_train = keras.utils.to_categorical(y_train, num_classes)\n", - "y_test = keras.utils.to_categorical(y_test, num_classes)\n", - "\n", - "model = Sequential()\n", - "model.add(Conv2D(32, kernel_size=(3, 3),\n", - " activation='relu',\n", - " input_shape=input_shape))\n", - "model.add(Conv2D(64, (3, 3), activation='relu'))\n", - "model.add(MaxPooling2D(pool_size=(2, 2)))\n", - "model.add(Dropout(0.25))\n", - "model.add(Flatten())\n", - "model.add(Dense(128, activation='relu'))\n", - "model.add(Dropout(0.5))\n", - "model.add(Dense(num_classes, activation='softmax'))\n", - "\n", - "model.compile(loss=keras.losses.categorical_crossentropy,\n", - " optimizer=keras.optimizers.Adadelta(),\n", - " metrics=['accuracy'])\n", - "\n", - "model.fit(x_train, y_train,\n", - " batch_size=batch_size,\n", - " epochs=epochs,\n", - " verbose=1,\n", - " validation_data=(x_test, y_test))\n", - "score = model.evaluate(x_test, y_test, verbose=0)\n", - "print('Test loss:', score[0])\n", - "print('Test accuracy:', score[1])" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/contrib/python-sdk/examples/3-submit-local-notebook-python-binding.ipynb b/contrib/python-sdk/examples/3-submit-local-notebook-python-binding.ipynb deleted file mode 100644 index 3b45194a4..000000000 --- a/contrib/python-sdk/examples/3-submit-local-notebook-python-binding.ipynb +++ /dev/null @@ -1,185 +0,0 @@ -{ 
- "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%load_ext autoreload\n", - "%autoreload 2\n", - "\n", - "from hello import say_hello\n", - "say_hello()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from openpaisdk.notebook import parse_notebook_path, get_notebook_path\n", - "from openpaisdk.core import get_defaults, randstr\n", - "from openpaisdk.io_utils import to_screen\n", - "\n", - "cluster = {\n", - " \"cluster_alias\": get_defaults()[\"cluster-alias\"],\n", - " \"virtual_cluster\": None,\n", - " \"workspace\": get_defaults()[\"workspace\"],\n", - "}\n", - "\n", - "job_name = parse_notebook_path()[0] + '_' + randstr().hex\n", - "\n", - "to_screen(cluster)\n", - "to_screen(job_name)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from openpaisdk.core import Job\n", - "help(Job.from_notebook)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "__nb_ext_custom_cfg__ = {\n", - " \"token\": \"abcdef\", # not to set a int string like 1234\n", - " \"image\": 'ufoym/deepo:pytorch-py36-cu90',\n", - " \"resources\": {\n", - " \"cpu\": 4, \"memoryMB\": 8192, \"gpu\": 0,\n", - " },\n", - " \"sources\": [\"hello.py\"], \n", - " \"pip_installs\": [],\n", - "}\n", - "\n", - "job = Job(job_name).from_notebook(nb_file=get_notebook_path(), cluster=cluster, **__nb_ext_custom_cfg__)\n", - "# to_screen(job.get_config())" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "help(Job.submit)\n", - "job.submit(cluster[\"cluster_alias\"], cluster[\"virtual_cluster\"])" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# restore the job from a name and cluster\n", - "job2 = Job(job_name).load(cluster_alias=cluster[\"cluster_alias\"])" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# one time check, return {state:..., notebook:...}\n", - "job2.connect_jupyter()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# wait until notebook url is ready\n", - "help(Job.wait)\n", - "job2.wait(timeout=100)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# to_screen(job2.logs()[\"stderr\"])" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# job2.stop()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" - }, - "varInspector": { - "cols": { - "lenName": 16, - "lenType": 16, - "lenVar": 40 - }, - "kernels_config": { - "python": { - "delete_cmd_postfix": "", - "delete_cmd_prefix": "del ", - "library": "var_list.py", - "varRefreshCmd": "print(var_dic_list())" - }, - "r": { - "delete_cmd_postfix": ") ", - "delete_cmd_prefix": "rm(", - "library": 
"var_list.r", - "varRefreshCmd": "cat(var_dic_list()) " - } - }, - "types_to_exclude": [ - "module", - "function", - "builtin_function_or_method", - "instance", - "_Feature" - ], - "window_display": false - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file diff --git a/contrib/python-sdk/examples/hello.py b/contrib/python-sdk/examples/hello.py deleted file mode 100644 index 12a5d65d8..000000000 --- a/contrib/python-sdk/examples/hello.py +++ /dev/null @@ -1,2 +0,0 @@ -def say_hello(): - print("Hello, OpenPAI") diff --git a/contrib/python-sdk/examples/run_all_notebooks.py b/contrib/python-sdk/examples/run_all_notebooks.py deleted file mode 100644 index c6a035d78..000000000 --- a/contrib/python-sdk/examples/run_all_notebooks.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -import sys -import shutil -from openpaisdk.utils import run_command -from openpaisdk.io_utils import browser_open - - -try: - import nbmerge -except: - run_command([sys.executable, '-m pip install nbmerge']) - -test_notebooks = [ - '0-install-sdk-specify-openpai-cluster.ipynb', - '1-submit-and-query-via-command-line.ipynb', - # '2-submit-job-from-local-notebook.ipynb', -] - -merged_file = "integrated_tests.ipynb" -html_file = os.path.splitext(merged_file)[0] + '.html' -shutil.rmtree(merged_file, ignore_errors=True) -shutil.rmtree(html_file, ignore_errors=True) - -# clear output for committing -for f in test_notebooks: - os.system("jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace %s" % f) - -os.system('nbmerge %s -o %s' % (' '.join(test_notebooks), merged_file)) -os.system('jupyter nbconvert --ExecutePreprocessor.timeout=-1 --ExecutePreprocessor.allow_errors=True --to html --execute %s' % merged_file) - -browser_open(html_file) \ No newline at end of file diff --git a/contrib/python-sdk/openpaisdk/__init__.py b/contrib/python-sdk/openpaisdk/__init__.py deleted file mode 100644 index 82a648414..000000000 --- a/contrib/python-sdk/openpaisdk/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -from openpaisdk.flags import __flags__ -from openpaisdk.io_utils import to_screen -from openpaisdk.defaults import get_defaults, update_default, LayeredSettings -from openpaisdk.cluster import ClusterList, Cluster -from openpaisdk.job import Job, JobStatusParser - - -__version__ = '0.4.00' - - -def in_job_container(varname: str = 'PAI_CONTAINER_ID'): - """in_job_container check whether it is inside a job container (by checking environmental variables) - - - Keyword Arguments: - varname {str} -- the variable to test (default: {'PAI_CONTAINER_ID'}) - - Returns: - [bool] -- return True is os.environ[varname] is set - """ - if not os.environ.get(varname, ''): - return False - return True diff --git a/contrib/python-sdk/openpaisdk/cli_arguments.py b/contrib/python-sdk/openpaisdk/cli_arguments.py deleted file mode 100644 index fb57d1b39..000000000 --- a/contrib/python-sdk/openpaisdk/cli_arguments.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-
-"""This file provides a mechanism to couple an argparse Namespace with the pai protocol
-"""
-import argparse
-from openpaisdk.defaults import LayeredSettings
-
-
-class ArgumentFactory:
-
-    def __init__(self):
-        self.factory = dict()
-
-        # register arguments derived from the predefined defaults
-        for name, params in LayeredSettings.definitions.items():
-            args = ['--' + name]
-            abbr = params.get('abbreviation', None)
-            if abbr:  # args = ['--{name}', '-{abbr}' or '--{abbr}']
-                args += [('-' if len(abbr) == 1 else '--') + abbr]
-            kwargs = {k: v for k, v in params.items() if k not in ["name", "abbreviation"]}
-            kwargs["default"] = LayeredSettings.get(name)
-            self.add_argument(*args, **kwargs)
-
-        # cluster
-        self.add_argument('cluster_alias', help='cluster alias to select')
-
-        self.add_argument('--pai-uri', help="uri of openpai cluster, in format of http://x.x.x.x")
-        self.add_argument('--user', help='username')
-        self.add_argument('--password', help="password")
-        self.add_argument('--authen-token', '--token', dest='token', help="authentication token")
-
-        self.add_argument('--editor', default="code", help="path to your editor used to open files")
-
-        # job spec
-        self.add_argument('--job-name', '-j', help='job name')
-
-        self.add_argument('--is-global', '-g', action="store_true",
-                          help="set globally (not limited to current working folder)", default=False)
-        self.add_argument('--update', '-u', action='append',
-                          help='replace current key-value pairs with new key=value (key1:key2:...=value for nested objects)')
-        self.add_argument('--preview', action='store_true', help='preview result before doing action')
-        self.add_argument('--no-browser', action='store_true', help='does not open the job link in web browser')
-        self.add_argument('--interactive', action='store_true', help='enter the interactive mode after job starts')
-        self.add_argument('--notebook-token', '--token', dest='token', default="abcd",
-                          help='jupyter notebook authentication token')
-        self.add_argument("--python", default="python",
-                          help="command or path of python, default is {python}, may be {python3}")
-
-        # use a raw string so the regex escapes are not interpreted by python
-        self.add_argument('--cmd-sep', default=r"\s*&&\s*", help="command separator, default is (&&)")
-        self.add_argument('commands', nargs=argparse.REMAINDER, help='shell commands to execute')
-
-        # runtime
-        self.add_argument('config', nargs='?', help='job config file')
-        self.add_argument('notebook', nargs='?', help='Jupyter notebook file')
-
-        # storage
-        self.add_argument('--recursive',
action='store_true', default=False, help="recursive target operation") - self.add_argument('--overwrite', action='store_true', default=False, help="enable overwrite if exists") - self.add_argument('local_path', help="local path") - self.add_argument('remote_path', help="remote path") - - def add_argument(self, *args, **kwargs): - self.factory[args[0]] = dict(args=args, kwargs=kwargs) - - def get(self, key): - value = self.factory[key] - return value['args'], value['kwargs'] - - -__arguments_factory__ = ArgumentFactory() - - -def cli_add_arguments(parser: argparse.ArgumentParser, args: list): - for a in args: - args, kwargs = __arguments_factory__.get(a) - # assert parser.conflict_handler == 'resolve', "set conflict_handler to avoid duplicated" - parser.add_argument(*args, **kwargs) diff --git a/contrib/python-sdk/openpaisdk/cli_factory.py b/contrib/python-sdk/openpaisdk/cli_factory.py deleted file mode 100644 index ea36e1001..000000000 --- a/contrib/python-sdk/openpaisdk/cli_factory.py +++ /dev/null @@ -1,133 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- - -import argparse -from openpaisdk.io_utils import to_screen -from openpaisdk.job import Job -from openpaisdk.cluster import ClusterList - - -class ArgumentError(Exception): - - pass - - -class Action: - - def __init__(self, action: str, help_s: str): - self.action, self.help_s = action, help_s - - def define_arguments(self, parser: argparse.ArgumentParser): - pass - - def check_arguments(self, args): - pass - - def restore(self, args): - pass - - def store(self, args): - pass - - def do_action(self, args): - raise NotImplementedError - - -class ActionFactory(Action): - - def __init__(self, action: str, allowed_actions: dict): - assert action in allowed_actions, ("unsupported action of job", action) - super().__init__(action, allowed_actions[action]) - suffix = action.replace('-', '_') - for attr in ["define_arguments", "check_arguments", "do_action"]: - if hasattr(self, f"{attr}_{suffix}"): - setattr(self, attr, getattr(self, f"{attr}_{suffix}")) - else: - assert attr != "do_action", f"must specify a method named {attr}_{suffix} in {self.__class__.__name__}" - - self.__job__ = Job() - self.__clusters__ = ClusterList() - self.enable_svaing = dict(job=False, clusters=False) - - def restore(self, args): - if getattr(args, 'job_name', None): - self.__job__.load(job_name=args.job_name) - self.__clusters__.load() - return self - - def store(self, args): - if self.enable_svaing["job"]: - self.__job__.save() - if self.enable_svaing["clusters"]: - self.__clusters__.save() - return self - - -class Scene: - - def __init__(self, scene: str, help_s: str, parser: argparse.ArgumentParser, - action_list # type: list[Action] - ): - self.scene, self.help_s = scene, help_s - self.single_action = len(action_list) == 1 and scene == action_list[0].action - if self.single_action: - self.actor = action_list[0] - self.actor.define_arguments(parser) - else: - self.actions, subparsers = dict(), parser.add_subparsers(dest='action', help=help_s) - for a in action_list: - p = subparsers.add_parser(a.action, help=a.help_s) - a.define_arguments(p) - self.actions[a.action] = a - - def process(self, args): - actor = self.actor if self.single_action else self.actions[args.action] - actor.check_arguments(args) - actor.restore(args) - result = actor.do_action(args) - actor.store(args) - return result - - -class EngineFactory: - - def __init__(self, cli_structure): - self.parser = argparse.ArgumentParser( - description='command line interface for OpenPAI', - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - subparsers = self.parser.add_subparsers( - dest='scene', - help='openpai cli working scenarios', - ) - self.scenes = dict() - for k, v in cli_structure.items(): - p = subparsers.add_parser(k, help=v[0]) - self.scenes[k] = Scene(k, v[0], p, v[1]) - - def process(self, a: list): - to_screen(f'Received arguments {a}', _type="debug") - args = self.parser.parse_args(a) - return self.process_args(args) - - def process_args(self, args): - to_screen(f'Parsed arguments {args}', _type="debug") - if not args.scene: - self.parser.print_help() - return - return self.scenes[args.scene].process(args) diff --git a/contrib/python-sdk/openpaisdk/cluster.py b/contrib/python-sdk/openpaisdk/cluster.py deleted file mode 100644 index 1e74b1141..000000000 --- a/contrib/python-sdk/openpaisdk/cluster.py +++ /dev/null @@ -1,307 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -from openpaisdk.io_utils import from_file, to_file, to_screen -from openpaisdk.storage import Storage -from openpaisdk.utils import OrganizedList -from openpaisdk.utils import get_response, na, exception_free, RestSrvError, concurrent_map - - -def get_cluster(alias: str, fname: str = None, get_client: bool = True): - """the generalized function call to load cluster - return cluster client if assert get_client else return config""" - if get_client: - return ClusterList().load(fname).get_client(alias) - else: - return ClusterList().load(fname).select(alias) - - -class ClusterList: - """Data structure corresponding to the contents of ~/.openpai/clusters.yaml - We use an OrganizedList to handle the operations to this class - """ - - def __init__(self, clusters: list = None): - self.clusters = OrganizedList(clusters, _key="cluster_alias") if clusters else [] - - def load(self, fname: str = None): - fname = na(fname, self.default_config_file) - self.clusters = OrganizedList(from_file(fname, default=[]), _key="cluster_alias") - return self - - def save(self): - to_file(self.clusters.as_list, self.default_config_file) - - @property - def default_config_file(self): - from openpaisdk.flags import __flags__ - from openpaisdk.defaults import get_defaults - return __flags__.get_cluster_cfg_file(get_defaults()["clusters-in-local"]) - - def tell(self): - return { - a: { - v: dict(GPUs='-', memory='-', vCores='-', uri=cfg["pai_uri"], user=cfg["user"]) for v in cfg["virtual_clusters"] - } for a, cfg in self.clusters.as_dict.items() - } - - def add(self, cluster: dict): - cfg = Cluster().load(**cluster).check().config - self.clusters.add(cfg, replace=True) - return self - - def update_all(self): - for a in self.aliases: - self.add(self.clusters.first(a)) - - def delete(self, alias: str): - return self.clusters.remove(alias) - - def select(self, alias: str): - return self.clusters.first(alias) - - def get_client(self, alias: str): - return Cluster().load(**self.select(alias)) - - def available_resources(self): - """concurrent version to get available resources""" - aliases = self.aliases - ret = concurrent_map(Cluster.available_resources, (self.get_client(a) for a in aliases)) - return {a: r for a, r in zip(aliases, ret) if r is not None} - - @property - def aliases(self): - return [c["cluster_alias"] for c in self.clusters if "cluster_alias" in c] - - @property - def alias(self): - return self.config["cluster_alias"] - - -class Cluster: - """A 
wrapper of cluster to access the REST APIs""" - - def __init__(self, toke_expiration: int = 3600): - # ! currently sdk will not handle toke refreshing - self.config = {} - self.__token_expire = toke_expiration - self.__token = None - - def load(self, cluster_alias: str = None, pai_uri: str = None, user: str = None, password: str = None, token: str = None, **kwargs): - import re - self.config.update( - cluster_alias=cluster_alias, - pai_uri=pai_uri.strip("/"), - user=user, - password=password, - token=token, - ) - self.config.update( - {k: v for k, v in kwargs.items() if k in ["info", "storages", "virtual_clusters"]} - ) - # validate - assert self.alias, "cluster must have an alias" - assert self.user, "must specify a user name" - assert re.match("^(http|https)://(.*[^/])$", - self.pai_uri), "pai_uri should be a uri in the format of http(s)://x.x.x.x" - return self - - def check(self): - to_screen("try to connect cluster {}".format(self.alias)) - storages = self.rest_api_storages() - for i, s in enumerate(storages): - s.setdefault("storage_alias", s["protocol"] + f'-{i}') - cluster_info = na(self.rest_api_cluster_info(), {}) - if cluster_info.get("authnMethod", "basic") == "OIDC": - assert self.config["token"], "must use authentication token (instead of password) in OIDC mode" - self.config.update( - info=cluster_info, - storages=storages, - virtual_clusters=self.virtual_clusters(), - ) - # ! will check authentication types according to AAD enabled or not - return self - - @property - def alias(self): - return self.config["cluster_alias"] - - @property - def pai_uri(self): - return self.config["pai_uri"].strip("/") - - @property - def user(self): - return self.config["user"] - - @property - def password(self): - return str(self.config["password"]) - - @property - def token(self): - if self.config["token"]: - return str(self.config["token"]) - if not self.__token: - self.__token = self.rest_api_token(self.__token_expire) - return self.__token - - def get_storage(self, alias: str = None): - # ! every cluster should have a builtin storage - for sto in self.config.get("storages", []): - if alias is None or sto["storage_alias"] == alias: - if sto["protocol"] == 'hdfs': - return Storage(protocol='webHDFS', url=sto["webhdfs"], user=sto.get('user', self.user)) - - def get_job_link(self, job_name: str): - return '{}/job-detail.html?username={}&jobName={}'.format(self.pai_uri, self.user, job_name) - - @property - def rest_srv(self): - return '{}/rest-server/api'.format(self.pai_uri) - - # ! for some older version that does not support this API - @exception_free(Exception, None, "Cluster info API is not supported") - def rest_api_cluster_info(self): - "refer to https://github.com/microsoft/pai/pull/3281/" - return get_response('GET', [self.rest_srv, 'v1'], allowed_status=[200]).json() - - def rest_api_storages(self): - # ! 
currently this is a fake - return [ - { - "protocol": "hdfs", - "webhdfs": f"{self.pai_uri}/webhdfs" - }, - ] - - @exception_free(RestSrvError, None) - def rest_api_job_list(self, user: str = None): - return get_response( - 'GET', [self.rest_srv, 'v1', ('user', user), 'jobs'] - ).json() - - @exception_free(RestSrvError, None) - def rest_api_job_info(self, job_name: str = None, info: str = None, user: str = None): - import json - import yaml - user = self.user if user is None else user - assert info in [None, 'config', 'ssh'], ('unsupported query information', info) - response = get_response( - 'GET', [self.rest_srv, 'v1', 'user', user, 'jobs', job_name, info] - ) - try: - return response.json() - except json.decoder.JSONDecodeError: - return yaml.load(response.text, Loader=yaml.FullLoader) - else: - raise RestSrvError - - @exception_free(Exception, None) - def rest_api_token(self, expiration=3600): - return get_response( - 'POST', [self.rest_srv, 'v1', 'token'], - body={ - 'username': self.user, 'password': self.password, 'expiration': expiration - } - ).json()['token'] - - def rest_api_submit(self, job: dict): - use_v2 = str(job.get("protocolVersion", 1)) == "2" - if use_v2: - import yaml - return get_response( - 'POST', [self.rest_srv, 'v2', 'jobs'], - headers={ - 'Authorization': 'Bearer {}'.format(self.token), - 'Content-Type': 'text/yaml', - }, - body=yaml.dump(job), - allowed_status=[202, 201] - ) - else: - return get_response( - 'POST', [self.rest_srv, 'v1', 'user', self.user, 'jobs'], - headers={ - 'Authorization': 'Bearer {}'.format(self.token), - 'Content-Type': 'application/json', - }, - body=job, - allowed_status=[202, 201] - ) - - @exception_free(RestSrvError, None) - def rest_api_execute_job(self, job_name: str, e_type: str = "STOP"): - assert e_type in ["START", "STOP"], "unsupported execute type {}".format(e_type) - return get_response( - 'PUT', [self.rest_srv, 'v1', 'user', self.user, 'jobs', job_name, 'executionType'], - headers={ - 'Authorization': 'Bearer {}'.format(self.token), - }, - body={ - "value": e_type - }, - allowed_status=[200, 202], - ).json() - - @exception_free(RestSrvError, None) - def rest_api_virtual_clusters(self): - return get_response( - 'GET', [self.rest_srv, 'v1', 'virtual-clusters'], - headers={ - 'Authorization': 'Bearer {}'.format(self.token), - 'Content-Type': 'application/json', - }, - allowed_status=[200] - ).json() - - @exception_free(RestSrvError, None) - def rest_api_user(self, user: str = None): - return get_response( - 'GET', [self.rest_srv, 'v1', 'user', user if user else self.user], - headers={ - 'Authorization': 'Bearer {}'.format(self.token), - }, - ).json() - - def virtual_clusters(self, user_info: dict = None): - user_info = na(user_info, self.rest_api_user()) - assert user_info, f'failed to get user information from {self.alias}' - my_virtual_clusters = user_info["virtualCluster"] - if isinstance(my_virtual_clusters, str): - my_virtual_clusters = my_virtual_clusters.split(",") - return my_virtual_clusters - - def virtual_cluster_available_resources(self): - vc_info = self.rest_api_virtual_clusters() - dic = dict() - for key, vc in vc_info.items(): - if "resourcesTotal" in vc: - used, total = vc["resourcesUsed"], vc["resourcesTotal"] - dic[key] = { - k: max(0, int(total[k] - used[k])) for k in total - } - else: - # return -1 if the REST api not supported - dic[key] = dict(GPUs=-1, memory=-1, vCores=-1) - return dic - - @exception_free(Exception, None) - def available_resources(self): - resources = 
self.virtual_cluster_available_resources() - return {k: v for k, v in resources.items() if k in self.config["virtual_clusters"]} diff --git a/contrib/python-sdk/openpaisdk/command_line.py b/contrib/python-sdk/openpaisdk/command_line.py deleted file mode 100644 index d622caa01..000000000 --- a/contrib/python-sdk/openpaisdk/command_line.py +++ /dev/null @@ -1,428 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -import argparse -import os -import sys -from openpaisdk.cli_arguments import cli_add_arguments -from openpaisdk.cli_factory import ActionFactory, EngineFactory -from openpaisdk.defaults import get_defaults, update_default -from openpaisdk.io_utils import browser_open, to_screen -from openpaisdk.utils import Nested, run_command, na, randstr -from openpaisdk.defaults import __flags__ - - -def extract_args(args: argparse.Namespace, get_list: list = None, ignore_list: list = ["scene", "action"]): - if get_list: - return {k: getattr(args, k) for k in get_list} - return {k: v for k, v in vars(args).items() if k not in ignore_list} - - -class ActionFactoryForDefault(ActionFactory): - - def define_arguments(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['--is-global']) - parser.add_argument('contents', nargs='*', help='(variable=value) pair to be set as default') - - def do_action_set(self, args): - import re - if not args.contents: - return get_defaults(False, True, False) if args.is_global else get_defaults(True, True, False) - kv_pairs = [] - for content in args.contents: - m = re.match("^([^=]+?)([\+|\-]*=)([^=]*)$", content) - if m: - kv_pairs.append(m.groups()) - else: - kv_pairs.append((content, '', '')) - for kv_pair in kv_pairs: - assert kv_pair[0] and kv_pair[1] in ["=", "+=", "-="] and kv_pair[2], \ - f"must specify a key=value pair ({kv_pair[0]}, {kv_pair[2]})" - update_default(kv_pair[0], kv_pair[2], is_global=args.is_global) - - def do_action_unset(self, args): - for kv_pair in args.contents: - update_default(kv_pair[0], kv_pair[2], is_global=args.is_global, to_delete=True) - - -class ActionFactoryForCluster(ActionFactory): - - def define_arguments_edit(self, parser): - cli_add_arguments(parser, ["--editor"]) - - def check_arguments_edit(self, args): - assert args.editor, "cannot edit the file without an editor" - - def do_action_edit(self, args): - run_command([args.editor, cluster_cfg_file]) - - def define_arguments_update(self, parser): - pass - - def do_action_update(self, 
args): - self.enable_svaing["clusters"] = True - return self.__clusters__.update_all() - - def define_arguments_list(self, parser): - cli_add_arguments(parser, []) - - @staticmethod - def tabulate_resources(dic: dict): - to_screen([ - [c, i.get("uri", None), i.get("user", None), v, i["GPUs"], i["vCores"], i["memory"]] for c in dic.keys() for v, i in dic[c].items() - ], _type="table", headers=["cluster", "uri", "user", "virtual-cluster", "GPUs", "vCores", "memory"]) - return dic - - def do_action_list(self, args): - info = self.__clusters__.tell() - ActionFactoryForCluster.tabulate_resources(info) - - def define_arguments_resources(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, []) - - def do_action_resources(self, args): - r = self.__clusters__.available_resources() - ActionFactoryForCluster.tabulate_resources(r) - - def define_arguments_add(self, parser: argparse.ArgumentParser): - cli_add_arguments( - parser, ['--cluster-alias', '--pai-uri', '--user', '--password', '--authen-token']) - - def check_arguments_add(self, args): - assert args.cluster_alias or args.pai_uri or args.user, "must specify cluster-alias, pai-uri, user" - assert args.password or args.token, "please add an authentication credential, password or token" - - def do_action_add(self, args): - self.enable_svaing["clusters"] = True - self.__clusters__.add(extract_args(args)) - - def define_arguments_delete(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['cluster_alias']) - - def do_action_delete(self, args): - if self.__clusters__.delete(args.cluster_alias): - to_screen("cluster %s deleted" % args.cluster_alias) - return None - - def define_arguments_select(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['--is-global', 'cluster_alias']) - - def check_arguments_select(self, args): - assert args.cluster_alias, "must specify a valid cluster-alias" - - def do_action_select(self, args): - update_default('cluster-alias', args.cluster_alias, - is_global=args.is_global) - - -class ActionFactoryForJob(ActionFactory): - - # basic commands - def define_arguments_list(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['--cluster-alias', '--user']) - - def do_action_list(self, args): - client = self.__clusters__.get_client(args.cluster_alias) - if not args.user: - args.user = client.user - to_screen("if not set, only your job will be listed, user `--user __all__` to list jobs of all users") - if args.user == '__all__': - args.user = None - jobs = client.rest_api_job_list(user=args.user) - return ["%s [%s]" % (j["name"], j.get("state", "UNKNOWN")) for j in jobs] - - def define_arguments_status(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['--cluster-alias', '--user']) - parser.add_argument('job_name', help='job name') - parser.add_argument('query', nargs='?', choices=['config', 'ssh']) - - def check_arguments_status(self, args): - assert args.job_name, "must specify a job name" - - def do_action_status(self, args): - client = self.__clusters__.get_client(args.cluster_alias) - if not args.user: - args.user = client.user - return client.rest_api_job_info(args.job_name, args.query, user=args.user) - - def define_arguments_stop(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['--cluster-alias']) - parser.add_argument('job_names', nargs='+', help='job name') - - def check_arguments_stop(self, args): - assert args.job_names, "must specify a job name" - - def do_action_stop(self, args): - client = 
self.__clusters__.get_client(args.cluster_alias) - for job_name in args.job_names: - to_screen(client.rest_api_execute_job(job_name, "STOP")) - - def define_arguments_submit(self, parser: argparse.ArgumentParser): - cli_add_arguments( - parser, ['--cluster-alias', '--virtual-cluster', '--preview', '--update', 'config']) - - def check_arguments_submit(self, args): - assert args.config, "please specify a job config file (json or yaml format)" - assert os.path.isfile(args.config), "%s cannot be read" % args.config - - def submit_it(self, args): - if args.preview: - return self.__job__.validate().get_config() - result = self.__job__.submit(args.cluster_alias, args.virtual_cluster) - if "job_link" in result and not getattr(args, 'no_browser', False): - browser_open(result["job_link"]) - return result - - def do_action_submit(self, args): - # key-value pair in --update option would support nested key, e.g. defaults->virtualCluster= - self.__job__.load(fname=args.config) - if args.update: - for s in args.update: - key, value = s.split("=") - Nested(self.__job__.protocol).set(key, value) - return self.submit_it(args) - - def define_essentials(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, [ - '--job-name', - '--cluster-alias', '--virtual-cluster', '--workspace', # for cluster - '--sources', '--pip-installs', # for sdk_template - '--image', '--cpu', '--gpu', '--mem', "--memoryMB", - '--preview', '--no-browser', - '--python', - ]) - - def check_essentials(self, args): - assert args.cluster_alias, "must specify a cluster" - args.sources = [] if not args.sources else args.sources - args.pip_installs = [] if not args.pip_installs else args.pip_installs - if args.sources: - assert args.workspace, "must specify --workspace if --sources used" - for s in args.sources: - assert os.path.isfile(s), "file %s not found" % s - assert args.image, "must specify a docker image" - if args.job_name: - args.job_name = args.job_name.replace("$", randstr(10)) - - def define_arguments_sub(self, parser: argparse.ArgumentParser): - self.define_essentials(parser) - cli_add_arguments(parser, [ - 'commands' - ]) - - def check_arguments_sub(self, args): - self.check_essentials(args) - - def do_action_sub(self, args): - self.__job__.new(args.job_name).one_liner( - commands=" ".join(args.commands), - image=args.image, - resources=extract_args(args, ["gpu", "cpu", "memoryMB", "mem"]), - cluster=extract_args( - args, ["cluster_alias", "virtual_cluster", "workspace"]), - sources=args.sources, pip_installs=args.pip_installs, - ) - self.__job__.protocol["parameters"]["python_path"] = args.python - return self.submit_it(args) - - def define_arguments_notebook(self, parser: argparse.ArgumentParser): - self.define_essentials(parser) - cli_add_arguments(parser, [ - '--interactive', - '--notebook-token', - 'notebook' - ]) - - def check_arguments_notebook(self, args): - self.check_essentials(args) - assert args.notebook or args.interactive, "must specify a notebook name unless in interactive mode" - if not args.job_name: - assert args.notebook or args.interactive, "must specify a notebook if no job name defined" - args.job_name = os.path.splitext(os.path.basename(args.notebook))[ - 0] + "_" + randstr().hex if args.notebook else "jupyter_server_{}".format(randstr().hex) - if args.interactive and not args.token: - to_screen("no authentication token is set", _type="warn") - - def connect_notebook(self): - result = self.__job__.wait() - if result.get("notebook", None) is not None: - browser_open(result["notebook"]) - 
return result - - def do_action_notebook(self, args): - self.__job__.new(args.job_name).from_notebook( - nb_file=args.notebook, mode="interactive" if args.interactive else "silent", token=args.token, - image=args.image, - cluster=extract_args( - args, ["cluster_alias", "virtual_cluster", "workspace"]), - resources=extract_args(args, ["gpu", "cpu", "memoryMB", "mem"]), - sources=args.sources, pip_installs=args.pip_installs, - ) - self.__job__.protocol["parameters"]["python_path"] = args.python - result = self.submit_it(args) - if not args.preview: - result.update(na(self.connect_notebook(), {})) - return result - - def define_arguments_connect(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['--cluster-alias']) - parser.add_argument('job_name', help="job name to connect") - - def check_arguments_connect(self, args): - assert args.cluster_alias, "must specify a cluster" - assert args.job_name, "must specify a job name" - - def do_action_connect(self, args): - to_screen("retrieving job config from cluster") - self.__job__.load(job_name=args.job_name, cluster_alias=args.cluster_alias) - return self.connect_notebook() - - -class ActionFactoryForStorage(ActionFactory): - - def define_arguments_list_storage(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, ['--cluster-alias']) - - def do_action_list_storage(self, args): - return self.__clusters__.select(args.cluster_alias)['storages'] - - def define_arguments_list(self, parser: argparse.ArgumentParser): - cli_add_arguments( - parser, ['--cluster-alias', '--storage-alias', 'remote_path']) - - def do_action_list(self, args): - return self.__clusters__.get_client(args.cluster_alias).get_storage(args.storage_alias).list(args.remote_path) - - def define_arguments_status(self, parser: argparse.ArgumentParser): - cli_add_arguments( - parser, ['--cluster-alias', '--storage-alias', 'remote_path']) - - def do_action_status(self, args): - return self.__clusters__.get_client(args.cluster_alias).get_storage(args.storage_alias).status(args.remote_path) - - def define_arguments_delete(self, parser: argparse.ArgumentParser): - cli_add_arguments( - parser, ['--cluster-alias', '--storage-alias', '--recursive', 'remote_path']) - - def do_action_delete(self, args): - return self.__clusters__.get_client(args.cluster_alias).get_storage(args.storage_alias).delete(args.remote_path, recursive=args.recursive) - - def define_arguments_download(self, parser: argparse.ArgumentParser): - cli_add_arguments( - parser, ['--cluster-alias', '--storage-alias', 'remote_path', 'local_path']) - - def do_action_download(self, args): - return self.__clusters__.get_client(args.cluster_alias).get_storage(args.storage_alias).download(remote_path=args.remote_path, local_path=args.local_path) - - def define_arguments_upload(self, parser: argparse.ArgumentParser): - cli_add_arguments(parser, [ - '--cluster-alias', '--storage-alias', '--overwrite', 'local_path', 'remote_path']) - - def do_action_upload(self, args): - return self.__clusters__.get_client(args.cluster_alias).get_storage(args.storage_alias).upload(remote_path=args.remote_path, local_path=args.local_path, overwrite=getattr(args, "overwrite", False)) - - -cluster_cfg_file = __flags__.get_cluster_cfg_file(get_defaults()["clusters-in-local"]) - - -def generate_cli_structure(is_beta: bool): - cli_s = { - "cluster": { - "help": "cluster management", - "factory": ActionFactoryForCluster, - "actions": { - "list": "list clusters in config file %s" % cluster_cfg_file, - "resources": "report the 
(available, used, total) resources of the cluster", - "update": "check the healthness of clusters and update the information", - "edit": "edit the config file in your editor %s" % cluster_cfg_file, - "add": "add a cluster to config file %s" % cluster_cfg_file, - "delete": "delete a cluster from config file %s" % cluster_cfg_file, - "select": "select a cluster as default", - } - }, - "job": { - "help": "job operations", - "factory": ActionFactoryForJob, - "actions": { - "list": "list existing jobs", - "status": "query the status of a job", - "stop": "stop the job", - "submit": "submit the job from a config file", - "sub": "generate a config file from commands, and then `submit` it", - "notebook": "run a jupyter notebook remotely", - "connect": "connect to an existing job", - } - }, - "storage": { - "help": "storage operations", - "factory": ActionFactoryForStorage, - "actions": { - "list-storage": "list storage attached to the cluster", - "list": "list items about the remote path", - "status": "get detailed information about remote path", - "upload": "upload", - "download": "download", - "delete": "delete", - } - }, - } - dic = { - key: [ - value["help"], - [value["factory"](x, value["actions"]) - for x in value["actions"].keys()] - ] for key, value in cli_s.items() - } - dic.update({ - "set": [ - "set a (default) variable for cluster and job", [ - ActionFactoryForDefault("set", {"set": ["set"]})] - ], - "unset": [ - "un-set a (default) variable for cluster and job", [ - ActionFactoryForDefault("unset", {"unset": ["unset"]})] - ], - }) - return dic - - -class Engine(EngineFactory): - - def __init__(self): - super().__init__(generate_cli_structure(is_beta=False)) - - -def main(): - try: - eng = Engine() - result = eng.process(sys.argv[1:]) - if result: - to_screen(result) - return 0 - except AssertionError as identifier: - to_screen(f"Value error: {repr(identifier)}", _type="error") - return 1 - except Exception as identifier: - to_screen(f"Error: {repr(identifier)}", _type="error") - return 2 - else: - return -1 - - -if __name__ == '__main__': - main() diff --git a/contrib/python-sdk/openpaisdk/defaults.py b/contrib/python-sdk/openpaisdk/defaults.py deleted file mode 100644 index b55d8ff96..000000000 --- a/contrib/python-sdk/openpaisdk/defaults.py +++ /dev/null @@ -1,166 +0,0 @@ - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
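
> The `set` / `unset` scenes registered in `command_line.py` above accept `key=value`, `key+=value` (append) and `key-=value` (remove) forms, and `do_action_set` splits each positional argument with a single regex. A standalone sketch of that parse step, using the same pattern:

```python
import re

def parse_kv(content: str):
    # the pattern ActionFactoryForDefault.do_action_set applies to each
    # positional argument of `opai set`
    m = re.match(r"^([^=]+?)([\+|\-]*=)([^=]*)$", content)
    return m.groups() if m else (content, '', '')

print(parse_kv("image-list+=my.docker.image"))  # ('image-list', '+=', 'my.docker.image')
print(parse_kv("cluster-alias=my-cluster"))     # ('cluster-alias', '=', 'my-cluster')
print(parse_kv("resource-list-=1,4,3gb"))       # ('resource-list', '-=', '1,4,3gb')
```

> The `+=` / `-=` forms are only meaningful for list-valued ("append") defaults such as `image-list`; plain `=` overwrites a scalar default.
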
- - -""" this module is to set a way to control the predefined configurations -""" -from openpaisdk.flags import __flags__ -from openpaisdk.utils import na, OrganizedList -from openpaisdk.io_utils import from_file, to_file, to_screen - - -class CfgLayer: - - def __init__(self, name: str, include: list = None, exclude: list = None, file: str = None, values: dict = None, allow_unknown: bool = True): - self.name = name - self.file = file - self.values = from_file(file, {}, silent=True) if file else na(values, {}) - self.definitions = OrganizedList( - __flags__.default_var_definitions(), - _key="name" - ).filter(None, include, exclude) # type: OrganizedList - - def update(self, key: str, value=None, delete: bool = False): - if not self.allow(key): - to_screen(f"{key} is not a recognized default variable, ignored") - return - dic = self.values - if delete: - if key not in dic: - to_screen(f"key {key} not found in {self.name}, ignored") - elif not self.act_append(key) or not value: # delete the key when not append action - del dic[key] - to_screen(f"key {key} removed completely from {self.name} successfully") - else: - dic[key].remove(value) - to_screen(f"{value} removed in {key} under {self.name} successfully") - else: - if self.act_append(key): - def _append(dic, key, value): - dic.setdefault(key, []) - if value not in dic[key]: - dic[key].append(value) - _append(dic, key, value) - to_screen(f"{value} added to {key} under {self.name} successfully") - else: - dic[key] = value - to_screen(f"{key} set to {value} under {self.name} successfully") - if self.file: - to_file(self.values, self.file) - - def allow(self, key: str): - return self.definitions.first_index(key) is not None - - def act_append(self, key: str): - if self.allow(key): - return self.definitions.first(key).get("action", None) == "append" - return False - - -class LayeredSettings: - """key-value querying from a list of dicts, priority depends on list index - refer to [TestDefaults](../tests/test_utils.py) for more usage examples - """ - - layers = None - definitions = None - - @classmethod - def init(cls): - if cls.layers is None: - cls.reset() - - @classmethod - def reset(cls): - cls.definitions = OrganizedList(__flags__.default_var_definitions(), _key="name").as_dict - cls.layers = OrganizedList([ - CfgLayer( - name="user_advaced", - exclude=["clusters-in-local", "image-list", "resource-specs"] - ), - CfgLayer( - name="user_basic", - exclude=["clusters-in-local", "image-list", "resource-specs"] - ), - CfgLayer( - name="local_default", - exclude=[], file=__flags__.get_default_file(is_global=False) - ), - CfgLayer( - name="global_default", - exclude=[], file=__flags__.get_default_file(is_global=True) - ) - ], _key="name", _getter=getattr) - - @classmethod - def keys(cls): - dic = set() - for layer in cls.layers: - for key in layer.values.keys(): - dic.add(key) - dic = dic.union(cls.definitions.keys()) - return list(dic) - - @classmethod - def act_append(cls, key): - return cls.definitions.get(key, {}).get("action", None) == "append" - - @classmethod - def get(cls, key): - __not_found__ = "==Not-Found==" - lst = [layer.values.get(key, __not_found__) for layer in cls.layers] - lst.append(cls.definitions.get(key, {}).get("default", None)) - lst = [x for x in lst if x != __not_found__] - - if cls.act_append(key): - from openpaisdk.utils import flatten - return list(flatten(lst)) - else: - return lst[0] if lst else None - - @classmethod - def update(cls, layer: str, key: str, value=None, delete: bool = False): - 
cls.layers.first(layer).update(key, value, delete) - - @classmethod - def as_dict(cls): - return {key: cls.get(key) for key in cls.keys()} - - @classmethod - def print_supported_items(cls): - headers = ['name', 'default', 'help'] - return to_screen([ - [x.get(k, None) for k in headers] for x in __flags__.default_var_definitions() - ], _type="table", headers=headers) - - -LayeredSettings.init() - - -def get_defaults(en_local=True, en_global=True, en_predefined=True): - return LayeredSettings.as_dict() - - -def update_default(key: str, value: str = None, is_global: bool = False, to_delete: bool = False): - layer = "global_default" if is_global else "local_default" - LayeredSettings.update(layer, key, value, to_delete) - - -def get_install_uri(ver: str = None): - ver = get_defaults()["container-sdk-branch"] if not ver else ver - return '-e "git+https://github.com/Microsoft/pai@{}#egg=openpaisdk&subdirectory=contrib/python-sdk"'.format(ver) diff --git a/contrib/python-sdk/openpaisdk/flags.py b/contrib/python-sdk/openpaisdk/flags.py deleted file mode 100644 index 2feefa87e..000000000 --- a/contrib/python-sdk/openpaisdk/flags.py +++ /dev/null @@ -1,149 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -import os - - -class __flags__(object): - "store the flags and constants" - disable_to_screen = False # A flag to disable to_screen output - debug_mode = os.path.isfile('debug_enable') - - # ! below attributes should not be changed - cache = '.openpai' - cluster_cfg_file = 'clusters.yaml' - defaults_file = 'defaults.yaml' - container_sdk_branch = 'master' - resources_requirements = dict(cpu=2, gpu=0, memoryMB=4096, ports={}) - storage_root = '/openpai-sdk' - custom_predefined = [] - - @staticmethod - def default_var_definitions(): - return [ - { - "name": "clusters-in-local", - "default": "no", - "help": f"[yes / no], if yes, clusters configuration stored in {__flags__.get_cluster_cfg_file('yes')} other than ~/{__flags__.get_cluster_cfg_file('yes')}", - }, - { - "name": "cluster-alias", - "abbreviation": "a", - "help": "cluster alias", - }, - { - "name": "virtual-cluster", - "abbreviation": "vc", - "help": "virtual cluster name" - }, - { - "name": "storage-alias", - "abbreviation": "s", - "help": "alias of storage to use" - }, - { - "name": "workspace", - "default": None, - "abbreviation": "w", - "help": f"storage root for a job to store its codes / data / outputs ... 
(default is {__flags__.storage_root}/$user)" - }, - { - "name": "container-sdk-branch", - "default": __flags__.container_sdk_branch, - "help": "code branch to install sdk from (in a job container)" - }, - { - "name": "image", - "abbreviation": "i", - "help": "docker image" - }, - { - "name": "cpu", - "help": f"cpu number per instance (default is {__flags__.resources_requirements['cpu']})" - }, - { - "name": "gpu", - "help": f"gpu number per instance (default is {__flags__.resources_requirements['gpu']})" - }, - { - "name": "memoryMB", - "help": f"memory (MB) per instance (default is {__flags__.resources_requirements['memoryMB']}) (will be overridden by --mem)" - }, - { - "name": "mem", - "help": "memory (MB / GB) per instance (default is %.0fGB)" % (__flags__.resources_requirements["memoryMB"] / 1024.0) - }, - { - "name": "sources", - "default": [], - "abbreviation": "src", - "action": "append", - "help": "source files to upload (into container)" - }, - { - "name": "pip-installs", - "default": [], - "abbreviation": "pip", - "action": "append", - "help": "packages to install via pip" - }, - { - "name": "image-list", - "default": [], - "action": "append", - "help": "list of images that are frequently used" - }, - { - "name": "resource-list", - "default": [], - "action": "append", - "help": "list of resource specs that are frequently used" - }, - { - "name": "web-default-form", - "help": "web-default-form (in Submitter)" - }, - { - "name": "web-default-image", - "help": "web-default-image (in Submitter)" - }, - { - "name": "web-default-resource", - "help": "web-default-resource (in Submitter), format: ',,'" - }, - ] + __flags__.custom_predefined - - @staticmethod - def get_cluster_cfg_file(clusters_in_local: str = 'no') -> str: - assert clusters_in_local in ['no', 'yes'], f"only allow yes / no, but {clusters_in_local} received" - pth = [__flags__.cache, __flags__.cluster_cfg_file] - if clusters_in_local == 'no': - pth = [os.path.expanduser('~')] + pth - return os.path.join(*pth) - - @staticmethod - def get_default_file(is_global: bool) -> str: - pth = [__flags__.cache, __flags__.defaults_file] - pth = [os.path.expanduser('~')] + pth if is_global else pth - return os.path.join(*pth) - - @staticmethod - def print_predefined(exclude: list = None, include: list = None): - from tabulate import tabulate - citems = __flags__.predefined_defaults(exclude, include) - print(tabulate(citems, headers=citems[0]._asdict().keys()), flush=True) diff --git a/contrib/python-sdk/openpaisdk/io_utils.py b/contrib/python-sdk/openpaisdk/io_utils.py deleted file mode 100644 index f3d9dc857..000000000 --- a/contrib/python-sdk/openpaisdk/io_utils.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
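
> `get_cluster_cfg_file` / `get_default_file` in `flags.py` above implement the SDK's two-scope layout: global files live under `~/.openpai/`, while local (per-project) files live under `./.openpai/` relative to the working directory, with the `clusters-in-local` default selecting the scope for `clusters.yaml`. A condensed sketch of that path resolution:

```python
import os

CACHE, DEFAULTS_FILE, CLUSTER_CFG_FILE = '.openpai', 'defaults.yaml', 'clusters.yaml'

def get_default_file(is_global: bool) -> str:
    # global -> ~/.openpai/defaults.yaml, local -> ./.openpai/defaults.yaml
    pth = [CACHE, DEFAULTS_FILE]
    pth = [os.path.expanduser('~')] + pth if is_global else pth
    return os.path.join(*pth)

def get_cluster_cfg_file(clusters_in_local: str = 'no') -> str:
    # the "clusters-in-local" default decides whether clusters.yaml is
    # read from the home directory or from the working directory
    assert clusters_in_local in ['no', 'yes']
    pth = [CACHE, CLUSTER_CFG_FILE]
    if clusters_in_local == 'no':
        pth = [os.path.expanduser('~')] + pth
    return os.path.join(*pth)

print(get_default_file(is_global=True))  # e.g. /home/<user>/.openpai/defaults.yaml
print(get_cluster_cfg_file('yes'))       # .openpai/clusters.yaml
```
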
-# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -import os -import errno -import shutil -from webbrowser import open_new_tab -from contextlib import contextmanager -from functools import partial -import json -import yaml -import logging -from urllib.request import urlopen -from urllib.parse import urlsplit -from urllib.request import urlretrieve -import cgi -from openpaisdk.flags import __flags__ - -logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s') -__logger__ = logging.getLogger(name="openpai") -__logger__.setLevel(level=logging.DEBUG if __flags__.debug_mode else logging.INFO) - - -def to_screen(msg, _type: str = "normal", **kwargs): - """a general wrapping function to deal with interactive IO and logging - """ - def print_out(msg, **kwargs): - out = yaml.dump(msg, default_flow_style=False, **kwargs) if not isinstance(msg, str) else msg - if not __flags__.disable_to_screen: - print(out, flush=True) - return out - - def print_table(msg, **kwargs): - from tabulate import tabulate - out = tabulate(msg, **kwargs) - if not __flags__.disable_to_screen: - print(out, flush=True) - return out - - func_dict = { - "normal": print_out, - "table": print_table, - "warn": partial(__logger__.warn, exc_info=__flags__.debug_mode), - "debug": __logger__.debug, - "error": partial(__logger__.error, exc_info=True), - } - assert _type in func_dict, f"unsupported output type {_type}, only {list(func_dict.keys(()))} are valid" - ret = func_dict[_type](msg, **kwargs) - return ret if _type == "table" else msg - - -def listdir(path): - assert os.path.isdir(path), "{} is not a valid path of directory".format(path) - root, dirs, files = next(os.walk(path)) - return { - "root": root, - "dirs": dirs, - "files": files - } - - -def browser_open(url: str): - __logger__.info("open in browser: %s", url) - try: - open_new_tab(url) - except Exception as e: - to_screen(f"fail to open {url} due to {repx(e)}", _type="warn") - - -def from_file(fname: str, default=None, silent: bool = False, **kwargs): - """read yaml or json file; return default if (only when default is not None) - - file non existing - - empty file or contents in file is not valid - - loaded content is not expected type (type(default)) - """ - import yaml - assert os.path.splitext(fname)[1] in __json_exts__ + __yaml_exts__, f"unrecognized {fname}" - try: - with open(fname) as fp: - dic = dict(kwargs) - dic.setdefault('Loader', yaml.FullLoader) - ret = yaml.load(fp, **dic) - assert ret, f"read empty object ({ret}) from {fname}, return {default}" - assert default is None or isinstance( - ret, type(default)), f"read wrong type ({type(ret)}, expected {type(default)}) from {fname}, return {default}" - return ret - except Exception as identifier: - if default is None: - to_screen(f"{repr(identifier)} when reading {fname}", _type="error") - raise identifier - if not silent: - to_screen(f"{repr(identifier)} when reading {fname}", _type="warn") - return default - - -def get_url_filename_from_server(url): - try: - blah = urlopen(url).info()['Content-Disposition'] - _, params = cgi.parse_header(blah) - return params["filename"] - 
except Exception as e: - to_screen(f'Failed to get filename from server: {repr(e)}', _type="warn") - return None - - -def web_download_to_folder(url: str, folder: str, filename: str = None): - if not filename: - split = urlsplit(url) - filename = split.path.split("/")[-1] - filename = os.path.join(folder, filename) - os.makedirs(folder, exist_ok=True) - try: - urlretrieve(url, filename) - __logger__.info('download from %s to %s', url, filename) - return filename - except Exception: - __logger__.error("failed to download", exc_info=True) - - -def mkdir_for(pth: str): - d = os.path.dirname(pth) - if d: - os.makedirs(d, exist_ok=True) - return d - - -def file_func(kwargs: dict, func=shutil.copy2, tester: str = 'dst'): - try: - return func(**kwargs) - except IOError as identifier: - # ENOENT(2): file does not exist, raised also on missing dest parent dir - if identifier.errno != errno.ENOENT: - print(identifier.__dict__) - assert tester in kwargs.keys(), 'wrong parameter {}'.format(tester) - os.makedirs(os.path.dirname(kwargs[tester]), exist_ok=True) - return func(**kwargs) - except Exception as identifier: - print(identifier) - return None - - -@contextmanager -def safe_open(filename: str, mode: str = 'r', func=open, **kwargs): - "if directory of filename does not exist, create it first" - mkdir_for(filename) - fn = func(filename, mode=mode, **kwargs) - yield fn - fn.close() - - -@contextmanager -def safe_chdir(pth: str): - "safely change directory to pth, and then go back" - currdir = os.getcwd() - try: - if not pth: - pth = currdir - os.chdir(pth) - __logger__.info("changing directory to %s", pth) - yield pth - finally: - os.chdir(currdir) - __logger__.info("changing directory back to %s", currdir) - - -def safe_copy(src: str, dst: str): - "if directory of filename doesnot exist, create it first" - return file_func({'src': src, 'dst': dst}) - - -__yaml_exts__, __json_exts__ = ['.yaml', '.yml'], ['.json', '.jsn'] - - -def to_file(obj, fname: str, fmt=None, **kwargs): - if not fmt: - _, ext = os.path.splitext(fname) - if ext in __json_exts__: - fmt, dic = json, dict(indent=4) - elif ext in __yaml_exts__: - import yaml - fmt, dic = yaml, dict(default_flow_style=False) - else: - raise NotImplementedError - dic.update(kwargs) - else: - dic = kwargs - with safe_open(fname, 'w') as fp: - fmt.dump(obj, fp, **dic) - __logger__.debug("serialize object to file %s", fname) diff --git a/contrib/python-sdk/openpaisdk/job.py b/contrib/python-sdk/openpaisdk/job.py deleted file mode 100644 index faf0849c5..000000000 --- a/contrib/python-sdk/openpaisdk/job.py +++ /dev/null @@ -1,659 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -import json -import os -import re -import pathlib -from typing import Union, List -from copy import deepcopy -from html2text import html2text - -from openpaisdk.flags import __flags__ -from openpaisdk.defaults import get_install_uri, LayeredSettings -from openpaisdk.io_utils import from_file, safe_open, to_file, to_screen -from openpaisdk.utils import Retry, concurrent_map, exception_free, find, get_response, na, na_lazy -from openpaisdk.cluster import get_cluster - -__protocol_filename__ = "job_protocol.yaml" -__config_filename__ = "job_config.json" -__protocol_unit_types__ = ["job", "data", "script", "dockerimage", "output"] - - -class ProtocolUnit: - - @staticmethod - def validate(u: dict): - # assert u["protocolVersion"] in ["1", "2", 1, 2], "invalid protocolVersion (%s)" % u["protocolVersion"] - assert u["type"] in __protocol_unit_types__, "invalid type (%s)" % u["type"] - assert u["name"], "invalid name" - # uri: String or list, required # Only when the type is data can the uri be a list. - assert isinstance(u["uri"], str) or u["type"] == "data" and isinstance(u["uri"], list), "uri: String or list, required # Only when the type is data can the uri be a list. (Error: %s)" % u - - -class TaskRole: - - @staticmethod - def validate(t: dict): - assert t["dockerImage"], "unknown dockerImage" - assert t["resourcePerInstance"]["cpu"] > 0, "invalid cpu number (%d)" % t["resourcePerInstance"]["cpu"] - assert t["resourcePerInstance"]["gpu"] >= 0, "invalid gpu number (%d)" % t["resourcePerInstance"]["gpu"] - assert t["resourcePerInstance"]["memoryMB"] > 0, "invalid memoryMB number (%d)" % t["resourcePerInstance"]["memoryMB"] - for label, port in t["resourcePerInstance"].get("ports", {}).items(): - assert port >= 0, "invalid port (%s : %d)" % (label, port) - assert isinstance(t["commands"], list) and t["commands"], "empty commands" - - -class Deployment: - - @staticmethod - def validate(d: dict, task_role_names: list): - assert d["name"], "deployment should have a name" - for t, c in d["taskRoles"].items(): - assert t in task_role_names, "invalid taskrole name (%s)" % (t) - assert isinstance(["preCommands"], list), "preCommands should be a list" - assert isinstance(["postCommands"], list), "postCommands should be a list" - - -class JobResource: - - def __init__(self, r: dict = None): - from copy import deepcopy - - def gb2mb(m): - if not isinstance(m, str) or m.isnumeric(): - return int(m) - if m.lower().endswith('g'): - return int(m[:-1]) * 1024 - if m.lower().endswith('gb'): - return int(m[:-2]) * 1024 - raise ValueError(m) - - r = {} if not r else r - dic = deepcopy(__flags__.resources_requirements) - for key in ["cpu", "gpu", "memoryMB", "ports"]: - if r.get(key, None) is not None: - dic[key] = int(r[key]) if not key == "ports" else r[key] - if r.get("mem", None) is not None: - dic["memoryMB"] = gb2mb(r["mem"]) - self.req = dic - - def add_port(self, name: str, num: int = 1): - self.req.setdefault("ports", {})[name] = num - return self - - @property - def as_dict(self): - return self.req - - @staticmethod - def parse_list(lst: List[str]): - r = [] - for spec in lst: - s = spec.replace(" ", '').split(",") - r.append(JobResource({ - "gpu": s[0], "cpu": s[1], "mem": s[2], - }).as_dict) - return r - - -class Job: - """ - the data 
structure and methods to describe a job compatible with https://github.com/microsoft/openpai-protocol/blob/master/schemas/v2/schema.yaml - external methods: - - I/O - - save(...) / load(...): store and restore to the disk - - Job protocol wizard - - sdk_job_template(...): generate a job template with the sdk (embedding cluster / storage information) - - one_liner(...): generate a single-taskrole job protocol from commands and other essential information - - from_notebook(...): generate a job protocol from a jupyter notebook - - Interaction with clusters - - submit(...): submit to a cluster, including archiving and uploading local source files - - wait(...): wait a job until completed - - log(...): - - Parse logs - - connect_jupyter(...): wait job running and connected to jupyter server - """ - - def __init__(self, name: str=None, **kwargs): - self.protocol = dict() # follow the schema of https://github.com/microsoft/openpai-protocol/blob/master/schemas/v2/schema.yaml - self._client = None # cluster client - self.new(name, **kwargs) - - def new(self, name: str, **kwargs): - self.protocol = { - "name": name, - "protocolVersion": 2, - "type": "job", - "prerequisites": [], - "parameters": dict(), - "secrets": dict(), - "taskRoles": dict(), - "deployments": [], - "defaults": dict(), - "extras": dict(), - } - self.protocol.update(kwargs) - return self - - def load(self, fname: str = None, job_name: str = None, cluster_alias: str = None): - if cluster_alias: # load job config from cluster by REST api - job_name = na(job_name, self.name) - self.protocol = get_cluster(cluster_alias).rest_api_job_info(job_name, 'config') - else: # load from local file - if not fname: - fname = Job(job_name).protocol_file - if os.path.isfile(fname): - self.protocol = from_file(fname, default="==FATAL==") - self.protocol.setdefault('protocolVersion', '1') # v1 protocol (json) has no protocolVersion - return self - - def save(self): - if self.name: - to_file(self.protocol, self.protocol_file) - return self - - def validate(self): - assert self.protocolVersion in ["1", "2"], "unknown protocolVersion (%s)" % self.protocol["protocolVersion"] - assert self.name is not None, "job name is null %s" % self.protocol - if self.protocolVersion == "2": - assert self.protocol["type"] == "job", "type must be job (%s)" % self.protocol["type"] - for t in self.protocol.get("taskRoles", {}).values(): - TaskRole.validate(t) - for d in self.protocol.get("deployments", []): - Deployment.validate(d, list(self.protocol["taskRoles"].keys())) - for u in self.protocol.get("prerequisites", []): - ProtocolUnit.validate(u) - return self - - @property - def protocolVersion(self): - return str(self.protocol.get("protocolVersion", "1")) - - @property - def name(self): - return self.protocol.get("name" if self.protocolVersion == "2" else "jobName", None) - - @property - def cache_dir(self): - assert self.name, "cannot get cache directory for an empty job name" - return os.path.join(__flags__.cache, self.name) - - def cache_file(self, fname): - return os.path.join(self.cache_dir, fname) - - @property - def protocol_file(self): - return self.cache_file(__protocol_filename__) - - @property - def temp_archive(self): - return self.cache_file(self.name + ".tar.gz") - - @staticmethod - def get_config_file(job_name: str, v2: bool=True): - return Job(job_name).cache_file(__protocol_filename__ if v2 else __config_filename__) - - def param(self, key, default=None, field: str="parameters"): - return self.protocol.get(field, {}).get(key, default) - - def 
set_param(self, key, value, field: str="parameters"): - self.protocol.setdefault(field, {})[key] = value - - def secret(self, key, default=None): - return self.param(key, default, "secrets") - - def set_secret(self, key, value): - self.set_param(key, value, "secrets") - - def extra(self, key, default=None): - return self.param(key, default, "extras") - - def set_extra(self, key, value): - self.set_param(key, value, "extras") - - def tags(self): - return self.param("tags", [], "extras") - - def add_tag(self, tag: str): - lst = self.tags() - if tag not in lst: - lst.append(tag) - self.set_param("tags", lst, "extras") - return self - - def has_tag(self, tag: str): - return tag in self.tags() - - def get_config(self): - if self.protocolVersion == "2": - self.interpret_sdk_plugin() - for d in self.protocol.get("deployments", []): - r = d["taskRoles"] - t_lst = list(r.keys()) - for t in t_lst: - for k in ["preCommands", "postCommands"]: # pre- / post- - if k not in r[t]: - continue - if len(r[t][k]) == 0: - del r[t][k] - if len(r[t]) == 0: - del r[t] - for key in ["deployments", "parameters"]: - if key in self.protocol and len(self.protocol[key]) == 0: - del self.protocol[key] - for t in self.protocol["taskRoles"].values(): - if "ports" in t["resourcePerInstance"] and len(t["resourcePerInstance"]["ports"]) == 0: - del t["resourcePerInstance"]["ports"] - return self.protocol - else: - dic = deepcopy(self.protocol) - del dic["protocolVersion"] - return dic - - def sdk_job_template(self, cluster_alias_lst: str=[], workspace: str=None, sources: list=None, pip_installs: list=None): - "generate the job template for a sdk-submitted job" - # secrets - clusters = [get_cluster(alias, get_client=False) for alias in cluster_alias_lst] - workspace = na(workspace, LayeredSettings.get("workspace")) - workspace = na(workspace, f"{__flags__.storage_root}/{clusters[0]['user']}") - self.set_secret("clusters", json.dumps(clusters)) - self.set_param("cluster_alias", cluster_alias_lst[0] if cluster_alias_lst else None) - self.set_param("work_directory", '{}/jobs/{}'.format(workspace, self.name) if workspace else None) - - # parameters - self.set_param("python_path", "python") - - # signature - self.add_tag(__internal_tags__["sdk"]) - - # sdk.plugins - sdk_install_uri = "-U {}".format(get_install_uri()) - c_dir = '~/{}'.format(__flags__.cache) - c_file = '%s/%s' % (c_dir, __flags__.cluster_cfg_file) - - plugins = [] - if sources: - plugins.append({ - "plugin": "local.uploadFiles", - "parameters": { - "files": list(set([os.path.relpath(s) for s in sources])), - }, - }) - - plugins.extend([ - { - "plugin": "container.preCommands", # commands to install essential pip packages - "parameters": { - "commands": [ - "<% $parameters.python_path %> -m pip install {}".format(p) for p in [sdk_install_uri] + na(pip_installs, []) - ] - } - }, - { - "plugin": "container.preCommands", # copy cluster information - "parameters": { - "commands": [ - "mkdir %s" % c_dir, - "echo \"write config to {}\"".format(c_file), - "echo <% $secrets.clusters %> > {}".format(c_file), - "opai cluster select <% $parameters.cluster_alias %>", - ] - } - } - ]) - - if sources: - a_file = os.path.basename(self.temp_archive) - plugins.append({ - "plugin": "container.preCommands", - "parameters": { - "commands": [ - "opai storage download <% $parameters.work_directory %>/source/{} {}".format(a_file, a_file), - "tar xvfz {}".format(a_file) - ] - } - }) - self.set_extra("sdk.plugins", plugins) - return self - - def one_liner(self, - commands: Union[list, str], 
image: str, cluster: dict, resources: dict=None, - sources: list = None, pip_installs: list = None - ): - """generate the single-task-role job protocol from essentials such as commands, docker image... - :param cluster (dict): a dictionary includes {cluster_alias, virtual_cluster, workspace} - """ - self.sdk_job_template([cluster["cluster_alias"]], cluster.get("workspace", None), sources, pip_installs) - self.protocol["prerequisites"].append({ - "name": "docker_image", - "type": "dockerimage", - "protocolVersion": "2", - "uri": image, - }) - self.protocol.setdefault("taskRoles", {})["main"] = { - "dockerImage": "docker_image", - "resourcePerInstance": JobResource(resources).as_dict, - "commands": commands if isinstance(commands, list) else [commands] - } - self.add_tag(__internal_tags__["one_liner"]) - return self - - def from_notebook(self, - nb_file: str, mode: str="interactive", token: str="abcd", - image: str=None, cluster: dict=None, resources: dict=None, - sources: list = None, pip_installs: list = None - ): - """ - mode: interactive / silent / script - """ - assert mode in ["interactive", "silent", "script"], "unsupported mode %s" % mode - if not nb_file: - mode, nb_file = "interactive", "" - else: - assert os.path.isfile(nb_file), "cannot read the ipython notebook {}".format(nb_file) - sources = na(sources, []) - sources.append(nb_file) - self.set_param("notebook_file", os.path.splitext(os.path.basename(nb_file))[0] if nb_file else "") - resources = JobResource(resources) - if mode == "interactive": - resources.add_port("jupyter") - self.set_secret("token", token) - cmds = [ - " ".join([ - "jupyter notebook", - "--no-browser", "--ip 0.0.0.0", "--port $PAI_CONTAINER_HOST_jupyter_PORT_LIST", - "--NotebookApp.token=<% $secrets.token %>", - "--allow-root --NotebookApp.file_to_run=<% $parameters.notebook_file %>.ipynb", - ]), - ] - elif mode == "silent": - cmds = [ - " ".join([ - "jupyter nbconvert --ExecutePreprocessor.timeout=-1 --ExecutePreprocessor.allow_errors=True", - "--to html --execute <% $parameters.notebook_file %>.ipynb", - ]), - "opai storage upload <% $parameters.notebook_file %>.html <% $parameters.work_directory %>/output/<% $parameters.notebook_file %>.html", - ] - else: - cmds = [ - "jupyter nbconvert --to script <% $parameters.notebook_file %>.ipynb --output openpai_submitter_entry", - "echo ======================== Python Script Starts ========================", - # execute notebook by iPython. 
To remove color information, we use "--no-term-title" and sed below - """ipython --no-term-title openpai_submitter_entry.py | sed -r "s/\\x1B\\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g" | tr -dc '[[:print:]]\\n'""", - ] - self.one_liner(cmds, image, cluster, resources.as_dict, sources, na(pip_installs, []) + ["jupyter"]) - mode_to_tag = {"interactive": "interactive_nb", "silent": "batch_nb", "script": "script_nb"} - self.add_tag(__internal_tags__[mode_to_tag[mode]]) - return self - - def interpret_sdk_plugin(self): - plugins = self.extra("sdk.plugins", []) - # concatenate commands - if len(self.protocol.setdefault("deployments", [])) == 0: # will move to plugin fields when it is ready - # we could use a new deployments for every pre- / post- commands plugin - deployment_name, task_role_names = "sdk_deployment", list(self.protocol["taskRoles"]) - deployment = {key: dict(preCommands=[], postCommands=[]) for key in task_role_names} - plugins_to_remove = [] - for i, plugin in enumerate(plugins): - target = find("container.(\w+)", plugin["plugin"]) - if target not in ["preCommands", "postCommands"]: - continue - for t in plugin.get("taskRoles", task_role_names): - deployment[t][target].extend(plugin["parameters"]["commands"]) - plugins_to_remove.append(i) - if plugins_to_remove: - self.protocol["deployments"].append({ - "name": deployment_name, - "taskRoles": deployment, - }) - self.protocol.setdefault("defaults", {})["deployment"] = deployment_name - for i in reversed(plugins_to_remove): - del plugins[i] - return self - - @property - def client(self): - if self._client is None: - alias = self.param("cluster_alias") - if alias: - self._client = get_cluster(alias) - return self._client - - def select_cluster(self, cluster_alias: str=None, virtual_cluster: str=None): - self._client = get_cluster(cluster_alias) - if virtual_cluster: - if self.protocolVersion == "1": - self.protocol["virtualCluster"] = virtual_cluster - else: - self.set_param("virtualCluster", virtual_cluster, field="defaults") - return self - - # methods only for SDK-enabled jobs - def submit(self, cluster_alias: str = None, virtual_cluster: str = None): - cluster_alias = na(cluster_alias, self.param("cluster_alias", None)) - self.select_cluster(cluster_alias, virtual_cluster) - self.validate().local_process() - to_screen("submit job %s to cluster %s" % (self.name, cluster_alias)) - try: - self.client.rest_api_submit(self.get_config()) - job_link = self.client.get_job_link(self.name) - return {"job_link": job_link, "job_name": self.name} - except Exception as identifier: - to_screen(f"submit failed due to {repr(identifier)}", _type="error") - to_screen(self.get_config()) - raise identifier - - def stop(self): - return self.client.rest_api_execute_job(self.name) - - def get_status(self): - return self.client.rest_api_job_info(self.name) - - def wait(self, t_sleep: float = 10, timeout: float = 3600, silent: bool = False): - """for jupyter job, wait until ready to connect - for normal job, wait until completed""" - exit_states = __job_states__["completed"] - repeater = Retry(timeout=timeout, t_sleep=t_sleep, silent=silent) - interactive_nb = self.has_tag(__internal_tags__["interactive_nb"]) - batch_nb = self.has_tag(__internal_tags__["batch_nb"]) - if interactive_nb or batch_nb: - if interactive_nb: - to_screen("{} is recognized to be an interactive jupyter notebook job".format(self.name)) - to_screen("notebook job needs to be RUNNING state and the kernel started") - if batch_nb: - to_screen("{} is recognized to be a silent jupyter 
notebook job".format(self.name)) - to_screen("notebook job needs to be SUCCEEDED state and the output is ready") - return repeater.retry( - lambda x: x.get('state', None) in exit_states or x.get("notebook", None) is not None, - self.connect_jupyter - ) - to_screen("wait until job to be completed ({})".format(exit_states)) - return repeater.retry( - lambda x: JobStatusParser.state(x) in exit_states, # x: job status - self.get_status - ) - - def plugin_uploadFiles(self, plugin: dict): - import tarfile - to_screen("archiving and uploading ...") - work_directory = self.param("work_directory") - assert work_directory, "must specify a storage to upload" - with safe_open(self.temp_archive, "w:gz", func=tarfile.open) as fn: - for src in plugin["parameters"]["files"]: - src = os.path.relpath(src) - if os.path.dirname(src) != "": - to_screen("files not in current folder may cause wrong location when unarchived in the container, please check it {}".format(src), _type="warn") - fn.add(src) - to_screen("{} archived and wait to be uploaded".format(src)) - self.client.get_storage().upload( - local_path=self.temp_archive, - remote_path="{}/source/{}".format(work_directory, os.path.basename(self.temp_archive)), - overwrite=True - ) - - def local_process(self): - "pre-process the job protocol locally, including uploading files, deal with pre-/post- commands" - self.validate() - plugins = self.protocol.get("extras", {}).get("sdk.plugins", []) - for plugin in plugins: - s = find("local.(\w+)", plugin["plugin"]) - if not s: - continue - getattr(self, "plugin_" + s)(plugin) - return self - - def connect_jupyter(self): - if self.has_tag(__internal_tags__["script_nb"]): - return self.connect_jupyter_script() - if self.has_tag(__internal_tags__["batch_nb"]): - return self.connect_jupyter_batch() - if self.has_tag(__internal_tags__["interactive_nb"]): - return self.connect_jupyter_interactive() - - def connect_jupyter_batch(self): - "fetch the html result if ready" - status = self.get_status() - state = JobStatusParser.state(status) - url = None - if state in __job_states__["successful"]: - html_file = self.param("notebook_file") + ".html" - local_path = html_file - remote_path = '{}/output/{}'.format(self.param("work_directory"), html_file) - self.client.get_storage().download(remote_path=remote_path, local_path=local_path) - url = pathlib.Path(os.path.abspath(html_file)).as_uri() - return dict(state=state, notebook=url) - - def connect_jupyter_interactive(self): - "get the url of notebook if ready" - status = self.get_status() - nb_file = self.param("notebook_file") + ".ipynb" if self.param("notebook_file") else None - return JobStatusParser.interactive_jupyter_url(status, nb_file) - - def connect_jupyter_script(self): - status = self.get_status() - state = self.state(status) - return dict(state=state, notebook=None) - - -__internal_tags__ = { - "sdk": "py-sdk", - "one_liner": 'py-sdk-one-liner', - "interactive_nb": 'py-sdk-notebook-interactive', - "batch_nb": 'py-sdk-notebook-batch', - "script_nb": 'py-sdk-notebook-script', -} - - -__job_states__ = { - "successful": ["SUCCEEDED"], - "failed": ["FAILED", "STOPPED"], - "ongoing": ["WAITING", "RUNNING", "COMPLETING"], -} -__job_states__["completed"] = __job_states__["successful"] + __job_states__["failed"] -__job_states__["ready"] = __job_states__["completed"] + ["RUNNING"] -__job_states__["valid"] = [s for sub in __job_states__.values() for s in sub] - - -class JobStatusParser: - - @staticmethod - @exception_free(KeyError, None) - def state(status: dict): - return 
status["jobStatus"]["state"] - - @staticmethod - @exception_free(KeyError, None) - def single_task_logs(status: dict, task_role: str = 'main', index: int = 0, log_type: dict=None, return_urls: bool=False): - """change to use containerLog""" - log_type = na(log_type, { - "stdout": "user.pai.stdout/?start=0", - "stderr": "user.pai.stderr/?start=0" - }) - containers = status.get("taskRoles", {}).get(task_role, {}).get("taskStatuses", []) - if len(containers) < index + 1: - return None - containerLog = containers[index].get("containerLog", None) - if not containerLog: - return None - urls = { - k: "{}{}".format(containerLog, v) - for k, v in log_type.items() - } - if return_urls: - return urls - else: - html_contents = {k: get_response('GET', v).text for k, v in urls.items()} - try: - from html2text import html2text - return {k: html2text(v) for k, v in html_contents.items()} - except ImportError: - return html_contents - - @staticmethod - @exception_free(Exception, None) - def all_tasks_logs(status: dict): - """retrieve logs of all tasks""" - logs = { - 'stdout': {}, 'stderr': {} - } - for tr_name, tf_info in status['taskRoles'].items(): - for task_status in tf_info['taskStatuses']: - task_id = '{}[{}]'.format(tr_name, task_status['taskIndex']) - task_logs = JobStatusParser.single_task_logs(status, tr_name, task_status['taskIndex']) - for k, v in task_logs.items(): - logs.setdefault(k, {})[task_id] = v - return logs - - @staticmethod - @exception_free(Exception, dict(state=None, notebook=None)) - def interactive_jupyter_url(status: dict, nb_file: str=None, task_role: str='main', index: int= 0): - "get the url of notebook if ready" - state = JobStatusParser.state(status) - url = None - if state == "RUNNING": - job_log = JobStatusParser.single_task_logs( - status, task_role, index - )["stderr"].split('\n') - for line in job_log: - if re.search("The Jupyter Notebook is running at:", line): - from openpaisdk.utils import path_join - container = status["taskRoles"][task_role]["taskStatuses"][index] - ip, port = container["containerIp"], container["containerPorts"]["jupyter"] - url = path_join([f"http://{ip}:{port}", "notebooks", nb_file]) - break - return dict(state=state, notebook=url) - - -def job_spider(cluster, jobs: list = None): - jobs = na_lazy(jobs, cluster.rest_api_job_list) - to_screen("{} jobs to be captured in the cluster {}".format(len(jobs), cluster.alias)) - job_statuses = concurrent_map( - lambda j: cluster.rest_api_job_info(j['name'], info=None, user=j['username']), - jobs - ) - job_configs = concurrent_map( - lambda j: cluster.rest_api_job_info(j['name'], info='config', user=j['username']), - jobs - ) - job_logs = concurrent_map(JobStatusParser.all_tasks_logs, job_statuses) - for job, sta, cfg, logs in zip(jobs, job_statuses, job_configs, job_logs): - job['status'] = sta - job['config'] = cfg - job['logs'] = logs - return jobs diff --git a/contrib/python-sdk/openpaisdk/notebook.py b/contrib/python-sdk/openpaisdk/notebook.py deleted file mode 100644 index fee7ba110..000000000 --- a/contrib/python-sdk/openpaisdk/notebook.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -import json -import os.path -import re -from openpaisdk.defaults import LayeredSettings, __flags__ - - -def get_notebook_path(): - """ - Return the full path of the jupyter notebook. - Reference: https://github.com/jupyter/notebook/issues/1000#issuecomment-359875246 - """ - import requests - from requests.compat import urljoin - from notebook.notebookapp import list_running_servers - import ipykernel - - kernel_id = re.search('kernel-(.*).json', - ipykernel.connect.get_connection_file()).group(1) - servers = list_running_servers() - for ss in servers: - response = requests.get(urljoin(ss['url'], 'api/sessions'), - params={'token': ss.get('token', '')}) - info = json.loads(response.text) - if isinstance(info, dict) and info['message'] == 'Forbidden': - continue - for nn in info: - if nn['kernel']['id'] == kernel_id: - relative_path = nn['notebook']['path'] - return os.path.join(ss['notebook_dir'], relative_path) - - -def parse_notebook_path(): - "parse the running notebook path to name, folder, extension" - nb_file = get_notebook_path() - folder, fname = os.path.split(nb_file) - name, ext = os.path.splitext(fname) - return name, folder, ext - - -class NotebookConfiguration: - "wrapper of LayeredSettings" - - @staticmethod - def reset(): - LayeredSettings.reset() - - @staticmethod - def print_supported_items(): - ret = LayeredSettings.print_supported_items() - if __flags__.disable_to_screen: - print(ret) - - @staticmethod - def set(key, value): - LayeredSettings.update("user_advaced", key, value) - - @staticmethod - def get(*args): - dic = LayeredSettings.as_dict() - if not args: - return dic - elif len(args) == 1: - return dic[args[0]] - else: - return [dic[a] for a in args] diff --git a/contrib/python-sdk/openpaisdk/storage.py b/contrib/python-sdk/openpaisdk/storage.py deleted file mode 100644 index 4ad592f0e..000000000 --- a/contrib/python-sdk/openpaisdk/storage.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -""" -[summary] -""" -from openpaisdk.io_utils import mkdir_for, to_screen - - -class Storage: - - def __init__(self, protocol: str = 'webHDFS', *args, **kwargs): - self.protocol, self.client = protocol.lower(), None - if protocol.lower() == 'webHDFS'.lower(): - from hdfs import InsecureClient - self.client = InsecureClient(*args, **kwargs) - for f in 'upload download list status delete'.split(): - setattr(self, f, getattr(self, '%s_%s' % - (f, protocol.lower()))) - - def upload_webhdfs(self, local_path: str, remote_path: str, **kwargs): - to_screen("upload %s -> %s" % (local_path, remote_path)) - return self.client.upload(local_path=local_path, hdfs_path=remote_path, **kwargs) - - def download_webhdfs(self, remote_path: str, local_path: str, **kwargs): - mkdir_for(local_path) - to_screen("download %s -> %s" % (remote_path, local_path)) - return self.client.download(local_path=local_path, hdfs_path=remote_path, overwrite=True, **kwargs) - - def list_webhdfs(self, remote_path: str, **kwargs): - return self.client.list(hdfs_path=remote_path, **kwargs) - - def status_webhdfs(self, remote_path: str, **kwargs): - return self.client.status(hdfs_path=remote_path, **kwargs) - - def delete_webhdfs(self, remote_path: str, **kwargs): - return self.client.delete(hdfs_path=remote_path, **kwargs) diff --git a/contrib/python-sdk/openpaisdk/utils.py b/contrib/python-sdk/openpaisdk/utils.py deleted file mode 100644 index 9359b9121..000000000 --- a/contrib/python-sdk/openpaisdk/utils.py +++ /dev/null @@ -1,298 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-
-"""
-common utility functions for the SDK
-"""
-from openpaisdk.io_utils import safe_chdir, to_screen, __logger__
-import subprocess
-import importlib
-import os
-import time
-import requests
-from typing import Union
-from functools import wraps
-from collections.abc import Iterable
-from requests_toolbelt.utils import dump
-from urllib3.exceptions import InsecureRequestWarning
-
-# Suppress only the single warning from urllib3 needed.
-requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)
-
-
-def exception_free(err_type, default, err_msg: str = None):
-    "return the default value if the exception is caught"
-    def inner_func(fn):
-        @wraps(fn)
-        def wrapper(*args, **kwargs):
-            try:
-                return fn(*args, **kwargs)
-            except err_type as e:
-                if not err_msg:
-                    to_screen(repr(e), _type="warn")
-                else:
-                    to_screen(err_msg, _type="warn")
-                return default
-        return wrapper
-    return inner_func
-
-
-def concurrent_map(fn, it, max_workers=None):
-    "a wrapper of concurrent.futures.ThreadPoolExecutor.map that collects the results into a list"
-    from concurrent.futures import ThreadPoolExecutor
-    with ThreadPoolExecutor(max_workers=max_workers) as executor:
-        return list(executor.map(fn, it))
-
-
-class OrganizedList(list):
-
-    def __init__(self, lst: list, _key: str = None, _getter=dict.get):
-        super().__init__(lst)
-        self._getter = _getter
-        self._key = _key
-
-    @property
-    def _fn_get(self):
-        return lambda elem: self._getter(elem, self._key)
-
-    def first_index(self, target):
-        for i, elem in enumerate(self):
-            if self._fn_get(elem) == target:
-                return i
-        return None
-
-    def first(self, target):
-        i = self.first_index(target)
-        return self[i] if i is not None else None
-
-    def filter_index(self, target=None, include: list = None, exclude: list = None):
-        if include is not None:
-            return [i for i, elem in enumerate(self) if self._fn_get(elem) in include]
-        if exclude is not None:
-            return [i for i, elem in enumerate(self) if self._fn_get(elem) not in exclude]
-        return [i for i, elem in enumerate(self) if self._fn_get(elem) == target]
-
-    def filter(self, target=None, include=None, exclude=None):
-        return OrganizedList([self[i] for i in self.filter_index(target, include, exclude)], self._key, self._getter)
-
-    @property
-    def as_dict(self):
-        return {self._fn_get(elem): elem for elem in self}
-
-    @property
-    def as_list(self):
-        return [x for x in self]
-
-    def add(self, elem: dict, getter=dict.get, silent: bool = False, replace: bool = False):
-        for i in self.filter_index(self._fn_get(elem)):
-            if replace:
-                self[i] = elem
-                if not silent:
-                    to_screen(f"OrganizedList: {self._key} = {self._fn_get(elem)} already exists, replace it")
-            else:
-                self[i].update(elem)
-                if not silent:
-                    to_screen(f"OrganizedList: {self._key} = {self._fn_get(elem)} already exists, update it")
-            return self  # return after handling the first match
-        self.append(elem)
-        if not silent:
-            to_screen(f"OrganizedList: {self._key} = {self._fn_get(elem)} added")
-        return self
-
-    def remove(self, target):
-        indexes = self.filter_index(target)
-        if not indexes:
-            to_screen(f"OrganizedList: {self._key} = {target} cannot be deleted due to non-existence")
-            return self
-        for index in sorted(indexes, reverse=True):
-            del self[index]
-        to_screen(f"OrganizedList: {self._key} = {target} removed")
-        return self
-
-
-class Nested:
-
-    def __init__(self, t, sep: str = ":"):
-        self.__sep__ = sep
-        self.content = t
-
-    def get(self, keys: str):
-        return Nested.s_get(self.content, keys.split(self.__sep__))
-
-    def set(self, keys: str, value):
-        return Nested.s_set(self.content, keys.split(self.__sep__), value)
-
-    @staticmethod
-    def _validate(context: Union[list, dict], idx: Union[str, int]):
-        return int(idx) if isinstance(context, list) else idx
-
-    @staticmethod
-    def s_get(target, keys: list):
-        k = Nested._validate(target, keys[0])
-        if len(keys) == 1:
-            return target[k]
-        return Nested.s_get(target[k], keys[1:])
-
-    @staticmethod
-    def s_set(target, keys: list, value):
-        # ! creating a list on the fly is not allowed
-        k = Nested._validate(target, keys[0])
-        if len(keys) == 1:
-            target[k] = value
-            return
-        if isinstance(target, dict) and k not in target:
-            target[k] = dict()
-        return Nested.s_set(target[k], keys[1:], value)
-
-
-def getobj(name: str):
-    mod_name, func_name = name.rsplit('.', 1)
-    mod = importlib.import_module(mod_name)
-    return getattr(mod, func_name)
-
-
-class RestSrvError(Exception):
-    pass
-
-
-class NotReadyError(Exception):
-    pass
-
-
-class Retry:
-
-    def __init__(self, max_try: int = 10, t_sleep: float = 10, timeout: float = 600, silent: bool = True):
-        self.max_try = max_try
-        self.t_sleep = t_sleep
-        self.timeout = timeout
-        if self.timeout:
-            assert self.t_sleep, "must specify a period to sleep if timeout is set"
-        self.silent = silent
-
-    def retry(self, f_exit, func, *args, **kwargs):
-        t, i = 0, 0
-        while True:
-            try:
-                x = func(*args, **kwargs)
-                if f_exit(x):
-                    if not self.silent:
-                        to_screen("ready: {}".format(x))
-                    return x
-            except NotReadyError as identifier:
-                __logger__.debug("condition not satisfied: %s", identifier)
-                if not self.silent:
-                    to_screen("not ready yet: {}".format(identifier))
-            i, t = i + 1, t + self.t_sleep
-            if self.max_try and i >= self.max_try or self.timeout and t >= self.timeout:
-                return None
-            if self.t_sleep:
-                time.sleep(self.t_sleep)
-
-
-def path_join(path: Union[list, str], sep: str = '/'):
-    """ join path from list or str
-    - ['aaa', 'bbb', 'ccc'] -> 'aaa/bbb/ccc'
-    - ['aaa', 'bbb', ('xxx', None), 'ddd'] -> 'aaa/bbb/ddd'
-    - ['aaa', 'bbb', ('xxx', 'x-val'), 'ddd'] -> 'aaa/bbb/xxx/x-val/ddd'
-    """
-    def is_single_element(x):
-        return isinstance(x, str) or not isinstance(x, Iterable)
-    if is_single_element(path):
-        return str(path)
-    p_lst = []
-    for p in path:
-        if not p:
-            continue
-        if is_single_element(p):
-            p_lst.append(str(p))
-        elif all(p):
-            p_lst.extend([str(x) for x in p])
-    return sep.join(p_lst)
-
-
-def get_response(method: str, path: Union[list, str], headers: dict = None, body: dict = None, allowed_status: list = [200], **kwargs):
-    """an easy wrapper of requests, including:
-    - path accepts a list of strings and more complicated input
-    - checks the response status_code and raises RestSrvError if it is not in allowed_status
-    """
-    path = path_join(path)
-    headers = na(headers, {})
-    body = na(body, {})
-    application_json = 'Content-Type' not in headers or headers['Content-Type'] == 'application/json'
-    response = requests.request(method, path, headers=headers, **kwargs, **{
-        "json" if application_json else "data": body,
-        "verify": False,  # support https
-    })
-    __logger__.debug('----------Response-------------\n%s', dump.dump_all(response).decode('utf-8'))
-    if allowed_status and response.status_code not in allowed_status:
-        __logger__.warning("%s %s", response.status_code, response.json())
-        raise RestSrvError(response.status_code, response.json())
-    return response
-
-
-def run_command(commands,  # type: Union[list, str]
-                cwd=None,  # type: str
-                ):
-    command = commands if isinstance(commands, str) else " ".join(commands)
-    with safe_chdir(cwd):
-        rtn_code = os.system(command)
-        if rtn_code:
-            raise subprocess.CalledProcessError(rtn_code, commands)
-
-
-def sys_call(args, dec_mode: str = 'utf-8'):
-    p = subprocess.Popen(args, shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
-    out, err = p.communicate()
-    if dec_mode:
-        out, err = out.decode(dec_mode), err.decode(dec_mode)
-    if p.returncode:
-        raise subprocess.CalledProcessError(p.returncode, args, output=out, stderr=err)
-    return out, err
-
-
-def find(fmt: str, s: str, g: int = 1, func=None):
-    import re
-    func = na(func, re.match)
-    m = func(fmt, s)
-    return m.group(g) if m else None
-
-
-def na(a, default):
-    return a if a is not None else default
-
-
-def na_lazy(a, fn, *args, **kwargs):
-    return a if a is not None else fn(*args, **kwargs)
-
-
-def flatten(lst: list):
-    return sum(lst, [])
-
-
-def randstr(num: int = 10, letters=None):
-    "get a random string with given length"
-    import string
-    import random
-    letters = na(letters, string.ascii_letters)
-    return ''.join(random.choice(letters) for i in range(num))
diff --git a/contrib/python-sdk/setup.py b/contrib/python-sdk/setup.py
deleted file mode 100644
index ae7af9b9d..000000000
--- a/contrib/python-sdk/setup.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from setuptools import setup
-
-setup(name='openpaisdk',
-      version='0.4.00',
-      description='A simple SDK for OpenPAI',
-      url='https://github.com/microsoft/pai/contrib/python-sdk',
-      packages=['openpaisdk'],
-      install_requires=[
-          'requests', 'hdfs', 'PyYAML', 'requests-toolbelt', 'html2text', 'tabulate'
-      ],
-      entry_points={
-          'console_scripts': ['opai=openpaisdk.command_line:main'],
-      },
-      zip_safe=False
-      )
diff --git a/contrib/python-sdk/test/basic_test.py b/contrib/python-sdk/test/basic_test.py
deleted file mode 100644
index e948df0ec..000000000
--- a/contrib/python-sdk/test/basic_test.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
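The `Retry.retry` helper above implements a generic poll-until-ready loop that the job-waiting code relies on. A minimal, self-contained sketch of that pattern (simplified: no `max_try` bound and no `NotReadyError` handling; all names here are illustrative, not the SDK's API):

```python
import time

def retry(f_exit, func, t_sleep: float = 1.0, timeout: float = 10.0):
    """Poll `func` until `f_exit` accepts its result; give up after `timeout` seconds."""
    waited = 0.0
    while True:
        result = func()
        if f_exit(result):
            return result
        waited += t_sleep
        if waited >= timeout:
            return None  # timed out without satisfying the exit condition
        time.sleep(t_sleep)

# usage: poll a counter until it reaches 3
state = {"n": 0}
def poll():
    state["n"] += 1
    return state["n"]

assert retry(lambda n: n >= 3, poll) == 3
```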
-
-
-import os
-import unittest
-from typing import Union
-from openpaisdk.io_utils import to_screen, safe_chdir
-
-
-def separated(method):
-    "run each test in a separate directory"
-    def func(*args, **kwargs):
-        dir_name = 'utdir_' + method.__name__
-        os.makedirs(dir_name, exist_ok=True)
-        try:
-            with safe_chdir(dir_name):
-                method(*args, **kwargs)
-        finally:
-            to_screen(f"trying to remove {dir_name}")
-            # ! rmtree does not work on windows
-            os.system(f'rm -rf {dir_name}')
-    return func
-
-
-class OrderedUnitTestCase(unittest.TestCase):
-
-    def get_steps(self):
-        for name in dir(self):  # dir() result is implicitly sorted
-            if name.lower().startswith("step"):
-                yield name, getattr(self, name)
-
-    def run_steps(self):
-        for name, func in self.get_steps():
-            try:
-                to_screen(f"\n==== begin to test {name} ====")
-                func()
-            except Exception as identifier:
-                self.fail("test {} failed ({}: {})".format(name, type(identifier), repr(identifier)))
-
-    def cmd_exec(self, cmds: Union[list, str]):
-        if isinstance(cmds, list):
-            cmds = ' '.join(cmds)
-        print(cmds)
-        exit_code = os.system(cmds)
-        self.assertEqual(exit_code, 0, f"fail to run {cmds}")
diff --git a/contrib/python-sdk/test/test_command_line.py b/contrib/python-sdk/test/test_command_line.py
deleted file mode 100644
index 008ee62aa..000000000
--- a/contrib/python-sdk/test/test_command_line.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
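The `OrderedUnitTestCase` in `basic_test.py` above relies on `dir()` returning attribute names in sorted order, so methods named `step1_...`, `step2_...` run sequentially. A standalone sketch of that discovery pattern (class and method names here are illustrative):

```python
class Steps:
    def step1_prepare(self):
        print("prepare")

    def step2_run(self):
        print("run")

    def run_steps(self):
        # dir() sorts names alphabetically, so step1_... precedes step2_...;
        # note this scheme assumes single-digit steps (step10 would sort before step2)
        for name in dir(self):
            if name.startswith("step"):
                getattr(self, name)()

Steps().run_steps()  # prints "prepare" then "run"
```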
- - -import os -from openpaisdk import get_defaults, ClusterList, JobStatusParser -from openpaisdk.utils import run_command, randstr -from openpaisdk.io_utils import to_screen -from typing import Union -from basic_test import OrderedUnitTestCase, separated - - -def get_cmd(cmd: Union[str, list], flags: dict, args: Union[list, str] = None): - lst = [] - lst.extend(cmd if isinstance(cmd, list) else cmd.split()) - for flag, value in flags.items(): - lst.extend(["--" + flag, value.__str__()]) - if args: - lst.extend(args if isinstance(args, list) else args.split()) - return lst - - -def run_commands(*cmds, sep: str = '&&'): - lst = [] - for i, c in enumerate(cmds): - lst.extend(c) - if i != len(cmds) - 1: - lst.append(sep) - run_command(lst) - - -def run_test_command(cmd: Union[str, list], flags: dict, args: Union[list, str] = None): - run_command(get_cmd(cmd, flags, args)) - - -def gen_expected(dic: dict, **kwargs): - dic2 = {k.replace("-", "_"): v if k != "password" else "******" for k, v in dic.items()} - dic2.update(kwargs) - return dic2 - - -class TestCommandLineInterface(OrderedUnitTestCase): - - ut_init_shell = os.path.join('..', 'ut_init.sh') - - def step1_init_clusters(self): - to_screen("""\ -testing REST APIs related to retrieving cluster info, including -- rest_api_cluster_info -- rest_api_user -- rest_api_token -- rest_api_virtual_clusters - """) - with open(self.ut_init_shell) as fn: - for line in fn: - if line.startswith('#'): - continue - self.cmd_exec(line) - alias = get_defaults()["cluster-alias"] - self.assertTrue(alias, "not specify a cluster") - self.cmd_exec('opai cluster resources') - - def step2_submit_job(self): - import time - to_screen("""\ -testing REST APIs related to submitting a job, including -- rest_api_submit - """) - self.job_name = 'ut_test_' + randstr(10) - self.cmd_exec(['opai', 'job', 'sub', '-i', 'python:3', '-j', self.job_name, 'opai cluster resources']) - time.sleep(10) - - def step3_job_monitoring(self): - to_screen("""\ -testing REST APIs related to querying a job, including -- rest_api_job_list -- rest_api_job_info - """) - client = ClusterList().load().get_client(get_defaults()["cluster-alias"]) - self.cmd_exec(['opai', 'job', 'list']) - job_list = client.rest_api_job_list(client.user) # ! only jobs from current user to reduce time - job_list = [job['name'] for job in job_list] - assert self.job_name in job_list, job_list - to_screen(f"testing job monitoring with {self.job_name}") - status = client.rest_api_job_info(self.job_name) - to_screen(f"retrieving job status and get its state {JobStatusParser.state(status)}") - client.rest_api_job_info(self.job_name, 'config') - to_screen("retrieving job config") - logs = JobStatusParser.all_tasks_logs(status) - assert logs, f"failed to read logs from status \n{status}" - for k, v in logs.items(): - for t, content in v.items(): - to_screen(f"reading logs {k} for {t} and get {len(content)} Bytes") - - @separated - def test_commands_sequence(self): - self.run_steps() diff --git a/contrib/python-sdk/test/test_format.py b/contrib/python-sdk/test/test_format.py deleted file mode 100644 index f79698d39..000000000 --- a/contrib/python-sdk/test/test_format.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-
-import os
-import sys
-import unittest
-
-
-in_place_changing = False
-
-
-class TestFormat(unittest.TestCase):
-
-    folders = [os.path.join('..', 'openpaisdk'), '.']
-
-    def test_format(self):
-        for folder in self.folders:
-            root, dirs, files = next(os.walk(folder))
-            for src in [fn for fn in files if fn.endswith(".py")]:
-                os.system(' '.join([
-                    sys.executable, '-m', 'autoflake',
-                    '--remove-unused-variables',
-                    '--remove-all-unused-imports',
-                    '--remove-duplicate-keys',
-                    '--ignore-init-module-imports',
-                    '-i' if in_place_changing else '',
-                    os.path.join(folder, src)
-                ]))
-
-    def clear_notebook_output(self):
-        folders = [
-            os.path.join('..', 'examples'),
-            os.path.join('..', '..', 'notebook-extension', 'examples'),
-        ]
-        for folder in folders:
-            root, dirs, files = next(os.walk(folder))
-            for file in [fn for fn in files if fn.endswith('.ipynb')]:
-                src = os.path.join(folder, file)
-                print(src)
-                os.system(f"jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace {src}")
-                os.system(f"dos2unix {src}")
-
-
-if __name__ == '__main__':
-    in_place_changing = True
-    TestFormat().test_format()
-    TestFormat().clear_notebook_output()
diff --git a/contrib/python-sdk/test/test_job.py b/contrib/python-sdk/test/test_job.py
deleted file mode 100644
index 4ed26aa71..000000000
--- a/contrib/python-sdk/test/test_job.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -from basic_test import OrderedUnitTestCase, separated -from openpaisdk import to_screen - - -class TestJobResource(OrderedUnitTestCase): - - def test_job_resource_parser(self): - from openpaisdk.job import JobResource - from openpaisdk import __flags__ - self.assertDictEqual(__flags__.resources_requirements, JobResource(None).as_dict) - self.assertDictEqual(__flags__.resources_requirements, JobResource().as_dict) - self.assertDictEqual(__flags__.resources_requirements, JobResource({}).as_dict) - dic = dict(cpu=-1, gpu=-2, memoryMB=-1024) - for key, value in dic.items(): - self.assertEqual(value, JobResource(dic).as_dict[key]) - dic['mem'] = '-2gb' - self.assertEqual(-2048, JobResource(dic).as_dict["memoryMB"]) - dic['mem'] = '-3g' - self.assertEqual(-3072, JobResource(dic).as_dict["memoryMB"]) - dic['mem'] = 10240 - self.assertEqual(10240, JobResource(dic).as_dict["memoryMB"]) - self.assertEqual({"a": 1}, JobResource(dic).add_port("a").as_dict["ports"]) - - def test_job_resource_list(self): - from openpaisdk.job import JobResource - samples = { - "3,3,3g": dict(gpu=3, cpu=3, memoryMB=3072, ports={}), - "3,1, 2g": dict(gpu=3, cpu=1, memoryMB=2048, ports={}), - } - keys = list(samples.keys()) - rets = JobResource.parse_list(keys) - for k, r in zip(keys, rets): - self.assertDictEqual(r, samples[k]) diff --git a/contrib/python-sdk/test/test_notebook.py b/contrib/python-sdk/test/test_notebook.py deleted file mode 100644 index 90f14a016..000000000 --- a/contrib/python-sdk/test/test_notebook.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
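`TestJobResource` above exercises resource specs such as `"3,3,3g"`, i.e. the `"<#gpu>,<#cpu>,<#mem>"` format where memory is a plain MB number or a string like `2g`/`2gb`. A rough standalone sketch of that parsing rule (not the SDK's `JobResource` implementation; `parse_resource` is an illustrative name):

```python
import re

def parse_resource(spec: str) -> dict:
    """Parse '<#gpu>,<#cpu>,<#mem>'; mem is MB by default, or '3g'/'3gb' for GB."""
    gpu, cpu, mem = (part.strip() for part in spec.split(","))
    m = re.fullmatch(r"(-?\d+)\s*(g|gb)?", mem, flags=re.IGNORECASE)
    if not m:
        raise ValueError(f"bad memory spec: {mem!r}")
    memory_mb = int(m.group(1)) * (1024 if m.group(2) else 1)
    return {"gpu": int(gpu), "cpu": int(cpu), "memoryMB": memory_mb, "ports": {}}

# mirrors the expectations in the tests above
assert parse_resource("3,1, 2g") == {"gpu": 3, "cpu": 1, "memoryMB": 2048, "ports": {}}
assert parse_resource("3,3,3g")["memoryMB"] == 3072
```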
- - -from basic_test import OrderedUnitTestCase, separated -from openpaisdk import to_screen - - -class TestNbExtCfg(OrderedUnitTestCase): - - settings = dict(cpu=100, gpu=-2, mem='90g') - - def step1_init(self): - from openpaisdk.notebook import NotebookConfiguration - NotebookConfiguration.print_supported_items() - - def step2_setup(self): - from openpaisdk.notebook import NotebookConfiguration - from openpaisdk import LayeredSettings - NotebookConfiguration.set(**self.settings) - for key in self.settings.keys(): - LayeredSettings.update('user_basic', key, -1) - - def step3_check(self): - from openpaisdk.notebook import NotebookConfiguration - to_screen(NotebookConfiguration.get()) - dic = {k: NotebookConfiguration.get(k) for k in self.settings} - self.assertDictEqual(dic, self.settings) - - @separated - def test_nbext_configuration(self): - self.run_steps() diff --git a/contrib/python-sdk/test/test_utils.py b/contrib/python-sdk/test/test_utils.py deleted file mode 100644 index b40476447..000000000 --- a/contrib/python-sdk/test/test_utils.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
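`TestNbExtCfg` above checks that values set through `NotebookConfiguration` take precedence over the lower-priority `user_basic` layer. The lookup idea behind such layered settings, as a standalone sketch (layer names and the `get` helper are illustrative, not the SDK's `LayeredSettings` API):

```python
# layers listed from highest to lowest priority; the first non-None hit wins
layers = [
    {"name": "user_advanced", "values": {"cpu": 100, "mem": "90g"}},
    {"name": "user_basic", "values": {"cpu": -1, "gpu": -1}},
    {"name": "defaults", "values": {"cpu": 4, "gpu": 0, "mem": "8g"}},
]

def get(key: str):
    for layer in layers:
        value = layer["values"].get(key)
        if value is not None:
            return value
    return None

assert get("cpu") == 100  # shadowed: the top layer wins
assert get("gpu") == -1   # falls through to user_basic
```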
- - -import os -import unittest -from copy import deepcopy -from openpaisdk.utils import OrganizedList as ol -from openpaisdk.utils import Nested -from openpaisdk.utils import randstr -from openpaisdk.io_utils import __flags__, from_file, to_screen -from openpaisdk import get_defaults, update_default, LayeredSettings -from basic_test import separated - - -class TestIOUtils(unittest.TestCase): - - @separated - def test_reading_failures(self): - with self.assertRaises(Exception): # non existing file - from_file(randstr(8) + '.yaml') - with self.assertRaises(AssertionError): # unsupported file extension - from_file(randstr(10)) - with self.assertRaises(Exception): - fname = randstr(10) + '.json' - os.system(f"touch {fname}") - from_file(fname) - - @separated - def test_returning_default(self): - for dval in [[], ['a', 'b'], {}, {'a': 'b'}]: - ass_fn = self.assertListEqual if isinstance(dval, list) else self.assertDictEqual - with self.assertRaises(AssertionError): # unsupported file extension - from_file(randstr(10)) - fname = randstr(8) + '.yaml' - ass_fn(from_file(fname, dval), dval) # non existing - os.system(f"echo '' > {fname}") - ass_fn(from_file(fname, dval), dval) - os.system(f"echo 'abcd' > {fname}") - ass_fn(from_file(fname, dval), dval) - - -class TestDefaults(unittest.TestCase): - - global_default_file = __flags__.get_default_file(is_global=True) - local_default_file = __flags__.get_default_file(is_global=False) - - def get_random_var_name(self): - import random - from openpaisdk import LayeredSettings - lst = [x for x in LayeredSettings.keys() if not LayeredSettings.act_append(x)] - ret = lst[random.randint(0, len(lst) - 1)] - to_screen(f"random select {ret} in {lst}") - return ret - - @separated - def test_update_defaults(self): - # ! not test global defaults updating, test it in integration tests - test_key, test_value = self.get_random_var_name(), randstr(10) - # add a default key - update_default(test_key, test_value, is_global=False, to_delete=False) - self.assertEqual(get_defaults()[test_key], test_value, - msg=f"failed to check {test_key} in {LayeredSettings.as_dict()}") - # should appear in local - self.assertEqual(from_file(self.local_default_file)[test_key], test_value) - # delete - update_default(test_key, test_value, is_global=False, to_delete=True) - with self.assertRaises(KeyError): - os.system(f"cat {self.local_default_file}") - from_file(self.local_default_file, {})[test_key] - # add not allowed - test_key = randstr(10) - update_default(test_key, test_value, is_global=False, to_delete=False) - with self.assertRaises(KeyError): - from_file(self.local_default_file, {})[test_key] - - @separated - def test_layered_settings(self): - from openpaisdk import LayeredSettings, __flags__ - __flags__.custom_predefined = [ - { - 'name': 'test-key-1', - }, - { - 'name': 'test-key-2', - 'action': 'append', - 'default': [] - } - ] - LayeredSettings.reset() - # ? add / update append key - for test_key in ['test-key-1', 'test-key-2']: - for i, layer in enumerate(LayeredSettings.layers): - LayeredSettings.update(layer.name, test_key, i) - if layer.act_append(test_key): - self.assertTrue(isinstance(layer.values[test_key], list), msg=f"{layer.values}") - self.assertEqual(0, LayeredSettings.get('test-key-1')) - self.assertListEqual([0, 1, 2, 3], LayeredSettings.get('test-key-2')) - # ? delete - for test_key in ['test-key-1', 'test-key-2']: - for i, layer in enumerate(LayeredSettings.layers): - LayeredSettings.update(layer.name, test_key, None, delete=True) - # ? 
reset the predefined
-        __flags__.custom_predefined = []
-        LayeredSettings.reset()
-
-    @separated
-    def test_unknown_variable_defined(self):
-        from openpaisdk import LayeredSettings, __flags__
-        test_key, test_value = 'test-key-long-existing', randstr(10)
-        __flags__.custom_predefined = [
-            {
-                'name': test_key,
-            },
-        ]
-        LayeredSettings.reset()
-        # ? add / update append key
-        LayeredSettings.update('local_default', test_key, test_value)
-        # ? reset the predefined
-        __flags__.custom_predefined = []
-        LayeredSettings.reset()
-        self.assertEqual(test_value, LayeredSettings.get(test_key))
-        # cannot delete or change the unknown variable
-        LayeredSettings.update('local_default', test_key, randstr(10))
-        LayeredSettings.reset()
-        self.assertEqual(test_value, LayeredSettings.get(test_key))
-        LayeredSettings.update('local_default', test_key, delete=True)
-        LayeredSettings.reset()
-        self.assertEqual(test_value, LayeredSettings.get(test_key))
-
-
-class TestOrganizedList(unittest.TestCase):
-
-    class foo:
-
-        def __init__(self, a=None, b=None, c=None, d=None):
-            self.a, self.b, self.c, self.d = a, b, c, d
-
-        @property
-        def as_dict(self):
-            return {k: v for k, v in vars(self).items() if v is not None}
-
-        def update(self, other):
-            for key, value in other.as_dict.items():
-                setattr(self, key, value)
-
-    lst_objs = [foo("x", 0), foo("x", 1), foo("y", 2), foo("y", c=1), foo("z", 4)]
-    lst = [obj.as_dict for obj in lst_objs]
-
-    def ol_test_run(self, lst, getter):
-        def to_dict(obj):
-            return obj if isinstance(obj, dict) else obj.as_dict
-        dut = ol(lst[:3], "a", getter)
-        # find
-        self.assertEqual(2, dut.first_index("y"))
-        self.assertDictEqual(to_dict(lst[2]), to_dict(dut.first("y")))
-        # filter
-        self.assertListEqual([0, 1], dut.filter_index("x"))
-        self.assertListEqual(lst[:2], dut.filter("x").as_list)
-        # as_dict
-        self.assertDictEqual(dict(x=lst[1], y=lst[2]), dut.as_dict)
-        # add (update)
-        elem = lst[-2]
-        dut.add(elem)
-        self.assertEqual(2, getter(lst[2], "b"))
-        self.assertEqual(1, getter(lst[2], "c"))
-        # add (replace)
-        elem = lst[-2]
-        dut.add(elem, replace=True)
-        self.assertEqual(None, getter(dut[2], "b"))
-        # add (append)
-        elem = lst[-1]
-        dut.add(elem)
-        self.assertEqual(4, getter(dut[-1], "b"))
-        # delete
-        dut.remove("z")
-        self.assertEqual(3, len(dut))
-        dut.remove("z")
-        self.assertEqual(3, len(dut))
-
-    def test_dict(self):
-        self.ol_test_run(deepcopy(self.lst), dict.get)
-
-    def test_obj(self):
-        self.ol_test_run(deepcopy(self.lst_objs), getattr)
-
-
-class TestNested(unittest.TestCase):
-
-    def test_set(self):
-        nested_obj = {
-            "a": [
-                {
-                    "aa0": {
-                        "aaa": "val_aaa"
-                    },
-                },
-                {
-                    "aa1": {
-                        "aaa1": "val_aaa1"
-                    }
-                }
-
-            ],
-            "b": "haha"
-        }
-        n = Nested(nested_obj, sep="->")
-        self.assertEqual(n.get("a->0->aa0->aaa"), "val_aaa")
-        with self.assertRaises(KeyError):
-            nested_obj["a"][1]["aa2"]["aaa"]
-        n.set("a->1->aa2->aaa", "val_aaa2")
-        self.assertEqual(nested_obj["a"][1]["aa2"]["aaa"], "val_aaa2")
diff --git a/contrib/samba-aad-server/README.md b/contrib/samba-aad-server/README.md
deleted file mode 100644
index 902307ecb..000000000
--- a/contrib/samba-aad-server/README.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Samba server with AAD integration
-
-A Samba server integrated with AAD. It has a shared path and private paths for AD users, and creates a shared account.
-It also offers an API to query user groups by user name.
-This is an example of a Samba server with AAD integration; please change it to your own configuration before use.
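The group-query API mentioned above is the `GetUserId` endpoint served by `domaininfo.py` later in this diff. A quick client-side sketch (host and port are placeholders for your deployment):

```python
import requests

# hypothetical host/port; the service answers GET /GetUserId?userName=<name>
resp = requests.get("http://samba-host:80/GetUserId", params={"userName": "alice"})
info = resp.json()
print(info.get("uid"), info.get("gid"), info.get("groups"))
```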
-
-## Index
-- [Components](#Components)
-- [How to Use](#How_to_Use)
-
-### Components
-- Samba server
-Data Structure:
-```
-root
- -- data
- -- users
- -- user1
- -- user2
- -- user3
-```
-data: Shared folder for all users.
-users: Private user folders; a user's folder is created when the user first uses Samba.
-
-- Nginx service
-A service that queries user groups by domain user name.
-
-
-### How to Use
-- Replace with your own configs
-krb5.conf: Replace realms.
-smb.conf: Replace realm and id map.
-domaininfo.py: Replace corp domains.
-
-- Build docker image
-```
-./build.sh
-```
-
-- Start service
-```
-./start.sh
-```
-Variable|Spec
---|:--:
-DOMAIN|Domain to join, e.g. FAREAST
-DOMAINUSER|Existing domain user name. Will join domain using this account
-DOMAINPWD|Password for domain user
-PAISMBUSER|Create new local samba account for PAI to use
-PAISMBPWD|Password for new samba account
-
-- Access Samba from a domain-joined Windows system.
-In Windows File Explorer, input:
-```
-\\<server>
-```
-This will show two folders: data and home.
-The data folder is shared by all users.
-The home folder is the private folder of the current AD user.
-
-- Mount samba using personal account
-```
-mount -t cifs //<server>/<share> <mountpoint> -o username=<username>,password=<password>,domain=<domain>
-```
-
-- Mount samba using PAI account
-```
-mount -t cifs //<server>/<share> <mountpoint> -o username=<PAISMBUSER>,password=<PAISMBPWD>,domain=WORKGROUP
-```
-
-- Query user groups
-```
-http://<server>:<port>/GetUserId?userName=<username>
-```
diff --git a/contrib/samba-aad-server/README_zh_CN.md b/contrib/samba-aad-server/README_zh_CN.md
deleted file mode 100644
index b6f501944..000000000
--- a/contrib/samba-aad-server/README_zh_CN.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Samba server with AAD integration
-
-A Samba server integrated with AAD. It has a shared path and private paths for AD users, and creates a shared account.
-It also offers an API to query user groups by user name.
-This is an example of a Samba server with AAD integration; please change it to your own configuration before use.
- -- Mount samba using personal account - - mount -t cifs /// -o username=,password=,domain= - - -- Mount samba using PAI account - - mount -t cifs /// -o username=,password=,domain=WORKGROUP - - -- Query user groups - - http://:/GetUserId?userName= \ No newline at end of file diff --git a/contrib/samba-aad-server/build.sh b/contrib/samba-aad-server/build.sh deleted file mode 100644 index cabe47b02..000000000 --- a/contrib/samba-aad-server/build.sh +++ /dev/null @@ -1 +0,0 @@ -docker build -t paismb:stable build/ \ No newline at end of file diff --git a/contrib/samba-aad-server/build/Dockerfile b/contrib/samba-aad-server/build/Dockerfile deleted file mode 100644 index 390e9466c..000000000 --- a/contrib/samba-aad-server/build/Dockerfile +++ /dev/null @@ -1,39 +0,0 @@ -FROM ubuntu:16.04 - -COPY krb5.conf /etc/krb5.conf -COPY nsswitch.conf /etc/nsswitch.conf - -RUN apt-get update && \ - apt-get install -y \ - samba \ - attr \ - winbind \ - libpam-winbind \ - libnss-winbind \ - libpam-krb5 \ - krb5-config \ - krb5-user \ - cifs-utils \ - nginx \ - python-dev \ - python-pip - -RUN pip install flask \ - flask_restful \ - uwsgi - -COPY smb.conf /etc/samba/smb.conf -COPY default /etc/nginx/sites-available/default - -ENV SHARE_ROOT=/share/pai - -ADD infosrv /infosrv -RUN mkdir -p /infosrv/uwsgi -COPY run.sh /run.sh -RUN chmod +x /run.sh -COPY sambauserhomecreate /usr/bin/ -RUN chmod +x /usr/bin/sambauserhomecreate -COPY sambadatacreate /usr/bin/ -RUN chmod +x /usr/bin/sambadatacreate - -CMD /run.sh diff --git a/contrib/samba-aad-server/build/default b/contrib/samba-aad-server/build/default deleted file mode 100644 index 246dc8f29..000000000 --- a/contrib/samba-aad-server/build/default +++ /dev/null @@ -1,28 +0,0 @@ -## -# You should look at the following URL's in order to grasp a solid understanding -# of Nginx configuration files in order to fully unleash the power of Nginx. -# http://wiki.nginx.org/Pitfalls -# http://wiki.nginx.org/QuickStart -# http://wiki.nginx.org/Configuration -# -# Generally, you will want to move this file somewhere, and start with a clean -# file but keep this around for reference. Or just disable in sites-enabled. -# -# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. 
-## - -# Default server configuration -# -server { - listen 80 default_server; - listen [::]:80 default_server; - - server_name _domaininfo_; - - location / { - include uwsgi_params; - #uwsgi_pass unix:/infosrv/uwsgi/uwsgi.sock; - uwsgi_pass 127.0.0.1:8988; - } - -} diff --git a/contrib/samba-aad-server/build/infosrv/domaininfo.py b/contrib/samba-aad-server/build/infosrv/domaininfo.py deleted file mode 100644 index 1c1a500e6..000000000 --- a/contrib/samba-aad-server/build/infosrv/domaininfo.py +++ /dev/null @@ -1,62 +0,0 @@ -import sys -import json -import os - -from flask import Flask -from flask_restful import reqparse, abort, Api, Resource -from flask import request, jsonify -import base64 -import subprocess - - -app = Flask(__name__) -api = Api(app) - - - -parser = reqparse.RequestParser() - -def cmd_exec(cmdStr): - try: - output = subprocess.check_output(["bash","-c", cmdStr]).strip() - except Exception as e: - print(e) - output = "" - return output - -class GetUserId(Resource): - def get(self): - parser.add_argument('userName') - args = parser.parse_args() - ret = {} - - if args["userName"] is not None and len(args["userName"].strip()) > 0: - # Replace with your corp domains - corpDomains = ['ATHENA'] - ret["uid"] = "" - - for corpDomain in corpDomains: - if len(ret["uid"].strip())==0: - userName = str(args["userName"]).strip().split("@")[0] - uid = cmd_exec("id -u %s\\\\%s" % (corpDomain,userName)) - gid = cmd_exec("id -g %s\\\\%s" % (corpDomain,userName)) - groups = cmd_exec("id -Gnz %s\\\\%s" % (corpDomain,userName)).split("\0") - - ret["uid"] = uid - ret["gid"] = gid - ret["groups"] = groups - - - resp = jsonify(ret) - resp.headers["Access-Control-Allow-Origin"] = "*" - resp.headers["dataType"] = "json" - - return resp - -## -## Actually setup the Api resource routing here -## -api.add_resource(GetUserId, '/GetUserId') - -if __name__ == '__main__': - app.run(debug=False,host="0.0.0.0",threaded=True) diff --git a/contrib/samba-aad-server/build/infosrv/uwsgi.ini b/contrib/samba-aad-server/build/infosrv/uwsgi.ini deleted file mode 100644 index 85f7b806d..000000000 --- a/contrib/samba-aad-server/build/infosrv/uwsgi.ini +++ /dev/null @@ -1,17 +0,0 @@ -[uwsgi] -chdir=/infosrv -module=domaininfo -callable=app -master=true -processes=4 -chmod-socket=666 -logfile-chmod=644 -procname-prefix-spaced=DomainInfo -py-autoreload=1 -socket=127.0.0.1:8988 - -vacuum=true -socket=%(chdir)/uwsgi/uwsgi.sock -stats=%(chdir)/uwsgi/uwsgi.status -pidfile=%(chdir)/uwsgi/uwsgi.pid -daemonize=%(chdir)/uwsgi/uwsgi.log diff --git a/contrib/samba-aad-server/build/krb5.conf b/contrib/samba-aad-server/build/krb5.conf deleted file mode 100644 index d464aa2cf..000000000 --- a/contrib/samba-aad-server/build/krb5.conf +++ /dev/null @@ -1,39 +0,0 @@ -# This is a template configure file. Please change to your own settings before use. 
-[libdefaults] - ticket_lifetime = 24h -# Replace with your own default realm - default_realm = ATHENA.MIT.EDU - forwardable = true - -# Replace with your own realms -[realms] - ATHENA.MIT.EDU = { - kdc = kerberos.mit.edu - kdc = kerberos-1.mit.edu - kdc = kerberos-2.mit.edu:88 - admin_server = kerberos.mit.edu - default_domain = mit.edu - } - -# Replace with your own domain realms -[domain_realm] - .mit.edu = ATHENA.MIT.EDU - mit.edu = ATHENA.MIT.EDU - -#[kdc] -# profile = /etc/krb5kdc/kdc.conf - -[appdefaults] - pam = { - debug = false - ticket_lifetime = 36000 - renew_lifetime = 36000 - forwardable = true - krb4_convert = false - } - -[logging] - kdc = SYSLOG:INFO:DAEMON - kdc = FILE:/var/log/krb5kdc.log - admin_server = FILE:/var/log/kadmin.log - default = FILE:/var/log/krb5lib.log diff --git a/contrib/samba-aad-server/build/nsswitch.conf b/contrib/samba-aad-server/build/nsswitch.conf deleted file mode 100644 index de1ce8387..000000000 --- a/contrib/samba-aad-server/build/nsswitch.conf +++ /dev/null @@ -1,20 +0,0 @@ -# /etc/nsswitch.conf -# -# Example configuration of GNU Name Service Switch functionality. -# If you have the `glibc-doc-reference' and `info' packages installed, try: -# `info libc "Name Service Switch"' for information about this file. - -passwd: compat winbind -group: compat winbind -shadow: compat -gshadow: files - -hosts: files dns -networks: files - -protocols: db files -services: db files -ethers: db files -rpc: db files - -netgroup: compat winbind diff --git a/contrib/samba-aad-server/build/run.sh b/contrib/samba-aad-server/build/run.sh deleted file mode 100644 index c26417948..000000000 --- a/contrib/samba-aad-server/build/run.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/bin/bash -sed -i 's/%$(PAISMBUSER)/'$PAISMBUSER'/' /etc/samba/smb.conf -sed -i 's/%$(DOMAIN)/'$DOMAIN'/' /etc/samba/smb.conf - -net ads join -U "$DOMAINUSER"%"$DOMAINPWD" -service winbind restart -service smbd restart - -useradd "$PAISMBUSER" -(echo "$PAISMBPWD" && echo "$PAISMBPWD") | ./usr/bin/smbpasswd -a "$PAISMBUSER" - -uwsgi --ini /infosrv/uwsgi.ini -service nginx stop -nginx -g 'daemon off;' diff --git a/contrib/samba-aad-server/build/sambadatacreate b/contrib/samba-aad-server/build/sambadatacreate deleted file mode 100644 index adcc2e105..000000000 --- a/contrib/samba-aad-server/build/sambadatacreate +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash -paiuser=$1 - -datapath=/share/pai/data -umask 000 -if [ ! -d "$datapath" ];then - mkdir -p "$datapath" - chown "$paiuser":"$paiuser" "$datapath" -fi diff --git a/contrib/samba-aad-server/build/sambauserhomecreate b/contrib/samba-aad-server/build/sambauserhomecreate deleted file mode 100644 index 5b5000bd8..000000000 --- a/contrib/samba-aad-server/build/sambauserhomecreate +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash -user=$1 -domain=$2 -uname=$3 -paiuser=$4 - -userspath=/share/pai/users -umask 000 -if [ ! -d "$userspath" ];then - mkdir -p "$userspath" - chown "$paiuser":"$paiuser" "$userspath" -fi - -umask 007 -userpath="$userspath"/"$user" -if [ ! -d "$userpath" ];then - mkdir -p "$userpath" - if [ $user != $uname ] - then - chown "$domain\\$user":"$paiuser" $userpath - else - chown "$user":"$paiuser" $userpath - fi - setfacl -m u:"$paiuser":rwx $userpath -fi diff --git a/contrib/samba-aad-server/build/smb.conf b/contrib/samba-aad-server/build/smb.conf deleted file mode 100644 index 67ffed80f..000000000 --- a/contrib/samba-aad-server/build/smb.conf +++ /dev/null @@ -1,118 +0,0 @@ -# This is a template configure file. 
Please change to your own settings before use. -# Further doco is here -# https://www.samba.org/samba/docs/man/manpages/smb.conf.5.html -[global] - # No .tld - workgroup = %$(DOMAIN) - # Active Directory System - security = ADS - - # Replace with your own realm defined in krb5.conf - realm = ATHENA.MIT.EDU - -# map to guest = bad user -# guest account = guest - # Just a member server - domain master = No - local master = No - preferred master = No - # Works both in samba 3.2 and 3.6 and 4.1 - - # Replace with your own idmap config - idmap config * : backend = rid - idmap config * : range = 900000000-999999999 - idmap config ATHENA : backend = rid - idmap config ATHENA : range = 100000000-199999999 - - - # One week is the default - idmap cache time = 604800 - # If you set this to 0 winbind will get thrown into a loop and - # be stuck at 99% mem and cpu. - # 5m is the default - winbind cache time = 300 - winbind enum users = No - winbind enum groups = No - # This way users log in with username instead of username@example.org - winbind use default domain = No - # Do not recursively descend into groups, it kills performance - winbind nested groups = No - # This is what slows down logins, if we didn't care about resolving groups - # we could set this to 0 - winbind expand groups = 0 - winbind refresh tickets = Yes - # Using offline login = Yes forces max domain connections to 1 - winbind offline logon = No - winbind max clients = 1500 - winbind max domain connections = 50 - - # winbind separator = @ - winbind:ignore domains = 001D 064D 343I ADVENTUREWORKS9 AMALGA AMALGATEST BIGPARK BINGLAB CAE CCSSELFHOST CDV CERDP CETI CFDEV CLOUDLAB CONNECTED CONTOSO-01 CPEXEC CPMT CPMTPPE CRMDFIFDDOM CSLAB CTDEV DCLAB E14 E15 ERIDANUS EXCHANGE EXTRANET EXTRANETTEST FORNAX FULTONDOMAIN GME GMR HADEV HAVANATWO HEALTH HOSPITALA HVAADCS HYDRI HYPER-V IDCNETTEST ISLAND IT ITNEXTGENLAB LAB1BOISE LHWKSTA MASSIVEINCORPOR MEXEXCHANGEDC MGDNOK MMS MPSD-WI MR MSGENG MS-GMR MSLPA MSSTORE MSTT MTETCS MUTEST MYOPWV NEBCPS1 NEBCPS2 NEBCPS3 NEBCPS4 NEBCPS5 NLCPS1 NEBCPST NEBCPST NOE NOKIAEA NORTHWINDTEST NTDEV OBPPERF OCTANS OEXTRANET OFFICEDOG OFORNAX OSSCPUB OUALAB PARTNERS PARTTEST PCTS PDSTEAM PEOPLETEST PHX PIN PORTAL PROSUPPORT PRVFAB PYXIDIS RESOURCE REVOLUTION2 SAW SDITESTT SEDEV SEGROUP SENET SENTILLIONINC SLCLAB SPEECH SPWLAB SPXMAILDOMAIN STBTEST STODC01 SYS-SQLSVR SYS-WINGROUP TANGODOM1 TELECOMLAB TEQUILA Threshold TNT UKMCS UPGROUP VE VMLIBDOM VOMJUMPSTART WGIA WINDEPLOY WINSE WINSE-CTDEV WINSRVLAB WMD WPDEV XCORP XCORP XGROUP XGROUP XGROUPPPE XPORTAL XRED ZIPLINE - # Disable printer support - load printers = No - printing = bsd - printcap name = /dev/null - disable spoolss = yes - # Becomes /home/example/username - template homedir = /storage/users/%U - # shell access - template shell = /bin/bash - client use spnego = Yes - client ntlmv2 auth = Yes - encrypt passwords = Yes - restrict anonymous = 2 - log level = 2 - log file = /var/log/samba/samba.log - smb2 max read = 8388608 - smb2 max write = 8388608 - smb2 max trans = 8388608 - # This is fairly custom to Ubuntu - # See www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html#ADDMACHINESCRIPT - # and https://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/domain-member.html - add machine script = /usr/sbin/adduser --system --gecos %u --home /var/lib/nobody --shell /bin/false --uid 300 --no-create-home %u - - -[root] - comment = Samba share root - path = /share/pai - valid users = %$(PAISMBUSER) - writable = yes - browseable = no - #root preexec = 
/usr/bin/sambarootcreate %$(PAISMBUSER) - create mask = 0777 - directory mask = 0777 - -[users] - comment = Samba share users - path = /share/pai/users - valid users = %$(PAISMBUSER) - writable = yes - browseable = no - root preexec = /usr/bin/sambauserhomecreate %U %D %u %$(PAISMBUSER) - create mask = 0777 - directory mask = 0777 - -[home] - comment = Samba share user home - path = /share/pai/users/%U - writeable = yes - browseable = yes - valid users = %$(PAISMBUSER) %D\%U - root preexec = /usr/bin/sambauserhomecreate %U %D %u %$(PAISMBUSER) - create mask = 0777 - -[data] - comment = Samba share data - path = /share/pai/data - valid users = %$(PAISMBUSER) %D\%U - writable = yes - browseable = yes - root preexec = /usr/bin/sambadatacreate %$(PAISMBUSER) - directory mask = 0777 - force directory mode = 0777 - directory security mask = 0777 - force directory security mode = 0777 - create mask = 0777 - force create mode = 0777 - security mask = 0777 - force security mode = 0777 diff --git a/contrib/samba-aad-server/start.sh b/contrib/samba-aad-server/start.sh deleted file mode 100644 index 4d1a45bee..000000000 --- a/contrib/samba-aad-server/start.sh +++ /dev/null @@ -1,15 +0,0 @@ -#!/bin/bash -if [ -z "$5" ]; then - echo "usage: ./start.sh <DOMAIN> <DOMAINUSER> <DOMAINPWD> <PAISMBUSER> <PAISMBPWD>" -else - DOMAIN=$1 - DOMAINUSER=$2 - DOMAINPWD=$3 - PAISMBUSER=$4 - PAISMBPWD=$5 - - mkdir -p /share/pai - docker run -dit --privileged --restart=always -p 8079:80 -p 445:445 --mount type=bind,source=/share/pai,target=/share/pai \ - --name paismb -e DOMAIN="$DOMAIN" -e DOMAINUSER="$DOMAINUSER" -e DOMAINPWD="$DOMAINPWD" \ - -e PAISMBUSER="$PAISMBUSER" -e PAISMBPWD="$PAISMBPWD" paismb:stable -fi \ No newline at end of file diff --git a/contrib/storage_plugin/README.MD b/contrib/storage_plugin/README.MD deleted file mode 100644 index 47ab4e89d..000000000 --- a/contrib/storage_plugin/README.MD +++ /dev/null @@ -1,241 +0,0 @@ -# Team wise storage - -*NOTICE: This tool has been deprecated, please refer to [Setup Kubernetes Persistent Volumes as Storage on PAI](../../docs/setup-persistent-volumes-on-pai.md).* - - -A tool to manage external storage in PAI. - -## Index -- [ What is team wise storage](#Team_storage) -- [ Team wise storage usages ](#Usages) - - [ Setup server ](#Usages_setup_server) - - [ Create storage server in PAI ](#Usages_server) - - [ Create storage config in PAI ](#Usages_config) - - [ Set storage config access for group ](#Usages_groupsc) - - [ Use Storage in PAI ](#Usages_job) - - [ Example ](#Usages_example) -- [ Storage data structure ](#Data_structure) - - [ Server data structure ](#Server_data) - - [ Nfs Server data structure ](#Nfs_data) - - [ Samba Server data structure ](#Samba_data) - - [ Azurefile Server data structure ](#Azurefile_data) - - [ Azureblob Server data structure ](#Azureblob_data) - - [ Hdfs Server data structure ](#Hdfs_data) - - [ Config data structure ](#Config_data) - - [ Config in group data ](#Config_in_group_data) - -## What is team wise storage -Team wise storage is a solution that helps admins manage NAS (network attached storage) by team/group. After the admin configures team wise storage settings, users can easily use NAS in their jobs.
-Team wise storage solution offers: -- Multiple NAS support, including NFS, Samba, Azurefile, Azureblob and HDFS -- Configurable mount structure settings -- Mixed usage for different NAS -- Configuration for Team/Group scope - -## Team wise storage usages - -### Setup server -- NFS - -Edit /etc/exports, export /root/path/to/share -``` -/root/path/to/share (rw, sync, no_root_squash) -``` -no_root_squash is needed for the storage plugin to create folders. - -- Samba - -After creating the samba server, create a user for PAI to use samba. -``` -useradd paismb -smbpasswd -a paismb -#Input password for paismb -``` - -- Azurefile - -Create Azurefile share through azure web portal. - -- Azureblob - -Create Azureblob share through azure web portal. - - -### Create storage server in PAI -In PAI dev-box, switch to folder pai/contrib/storage-plugin - -Create server config using the command: -- NFS: -``` -python storagectl.py server set NAME nfs ADDRESS ROOTPATH -``` - -- Samba: -``` -python storagectl.py server set NAME samba ADDRESS ROOTPATH USERNAME PASSWORD DOMAIN -``` - -- Azurefile: -``` -python storagectl.py server set NAME azurefile DATASTORE FILESHARE ACCOUNTNAME KEY - ``` - -- Azureblob: -``` -python storagectl.py server set NAME azureblob DATASTORE CONTAINERNAME ACCOUNTNAME KEY -``` - -- HDFS: -``` -python storagectl.py server set NAME hdfs NAMENODE PORT -``` - -### Create storage config in PAI -In PAI dev-box, switch to folder pai/contrib/storage-plugin - -Create config using the command: -``` -python storagectl.py config set CONFIG_NAME [-s SERVER_NAME_1 SERVER_NAME_2 ...] [-m MOUNT_POINT SERVER PATH]... [-d] -``` - -### Set storage config access for group -In PAI dev-box, switch to folder pai/contrib/storage-plugin - -Set storage config access for group using the command: -``` -python storagectl.py groupsc add GROUP_NAME CONFIG_NAME -``` - -### Use Storage info in job container -Users can use team wise storage through the job submit page. Please refer to the related page for details. - -### Example -Suppose the admin has set up a new samba server "smbserver" on "10.0.0.1" and created PAI account "paismb" with password "paipwd". -The structure of the samba server is as follows: -``` --- root - -- data - -- users - -- user1 - -- user2 - ... -``` -Now we want all members of "paigroup" to mount the server's data folder to /data, and each user's own folder (e.g. user1) to /user by default. The admin should set up the storage config in PAI using: -```bash -python storagectl.py server set smbserver samba 10.0.0.1 root paismb paipwd local -python storagectl.py config set configsmb -s smbserver -m /data smbserver data -m /user smbserver 'users/${PAI_USER_NAME}' -d -python storagectl.py groupsc add paigroup configsmb -``` -Then when "paiuser" from "paigroup" uses the job submit page, configsmb will be shown and the user can choose whether to use it. A minimal sketch of the path expansion used here follows below.
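To make the path expansion in this example concrete, here is a minimal, hypothetical Python sketch (not part of the storage plugin itself) of how a mount path containing `${PAI_USER_NAME}` could be resolved at job time, assuming plain `${VAR}`-style substitution against the job's environment variables:

```python
import string

# Mount infos from the "configsmb" example above.
mount_infos = [
    {"mountPoint": "/data", "server": "smbserver", "path": "data"},
    {"mountPoint": "/user", "server": "smbserver", "path": "users/${PAI_USER_NAME}"},
]

# Hypothetical job environment; in a real job these values are provided
# by the PAI runtime (e.g. PAI_USER_NAME, PAI_JOB_NAME).
job_env = {"PAI_USER_NAME": "user1"}

for info in mount_infos:
    # string.Template uses the same ${VAR} syntax as the mount paths above.
    resolved = string.Template(info["path"]).safe_substitute(job_env)
    print("mount %s:%s -> %s" % (info["server"], resolved, info["mountPoint"]))

# Expected output:
#   mount smbserver:data -> /data
#   mount smbserver:users/user1 -> /user
```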
- - -## Team wise storage data structures - -### Server data structure -```json -{ - "spn": "servername", - "type": "nfs|samba|azurefile|azureblob|hdfs" -} -``` -#### Nfs Server data structure -```json -{ - "spn": "servername", - "type": "nfs", - "address": "server/address", - "rootPath": "server/root/path" -} -``` - -#### Samba Server data structure -```json -{ - "spn": "servername", - "type": "samba", - "address": "server/address", - "rootPath": "server/root/path", - "userName": "username", - "password": "password", - "domain": "userdomain" -} -``` - -#### Azurefile Server data structure -```json -{ - "spn": "servername", - "type": "azurefile", - "dataStore": "datastore", - "fileShare": "fileshare", - "accountName": "accountname", - "key": "key" -} -``` - -#### Azureblob Server data structure -```json -{ - "spn": "servername", - "type": "azureblob", - "dataStore": "datastore", - "containerName": "containername", - "accountName": "accountname", - "key": "key" -} -``` - -#### Hdfs Server data structure -```json -{ - "spn": "servername", - "type": "hdfs", - "namenode": "namenode", - "port": "port" -} -``` - -### Config data structure -```json -{ - "name": "configname", - "gpn": "groupname", - "default": false, - "servers": [ - "servername" - ], - "mountInfos": [ - { - "mountpoint": "local/mount/point", - "server": "servername", - "path": "server/sub/path" - } - ] -} -``` - -- MountInfo: how a server path is mounted locally for the user. -```json -{ - "mountpoint": "local/mount/point", - "server": "servername", - "path": "server/sub/path" -} -``` - -### Config in group data -- The storage configs that a group can access are stored in the group data's extension field. For example, a group that can access STORAGE_CONFIG looks like the following: -```json -{ - "groupname": "groupname", - "externalName": "externalName", - "description": "description", - "extension": { - "acls": { - "admin": false, - "virtualClusters": [], - "storageConfigs": ["STORAGE_CONFIG"] - } - } -} -``` diff --git a/contrib/storage_plugin/__init__.py b/contrib/storage_plugin/__init__.py deleted file mode 100644 index d647bb847..000000000 --- a/contrib/storage_plugin/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
\ No newline at end of file diff --git a/contrib/storage_plugin/examples/team-storage/servers/nfs_example.json b/contrib/storage_plugin/examples/team-storage/servers/nfs_example.json deleted file mode 100644 index 52c4ba1c8..000000000 --- a/contrib/storage_plugin/examples/team-storage/servers/nfs_example.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "type": "nfs", - "title": "nfs_example", - "address": "10.0.0.1", - "rootPath": "/share/nfs", - "sharedFolders": ["data"], - "privateFolders": ["users"] - } \ No newline at end of file diff --git a/contrib/storage_plugin/examples/team-storage/users/user_example.json b/contrib/storage_plugin/examples/team-storage/users/user_example.json deleted file mode 100644 index ccc6b62b6..000000000 --- a/contrib/storage_plugin/examples/team-storage/users/user_example.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "defaultStorage": "nfs_example.json", - "externalStorages": [ - "nfs_example.json" - ] -} \ No newline at end of file diff --git a/contrib/storage_plugin/schemas/storage_server.schema.json b/contrib/storage_plugin/schemas/storage_server.schema.json deleted file mode 100644 index 8398c9b5d..000000000 --- a/contrib/storage_plugin/schemas/storage_server.schema.json +++ /dev/null @@ -1,37 +0,0 @@ -{ - "type": "object", - "properties": { - "type": { - "type": "string", - "description": "The type of external storage" - }, - "title": { - "type": "string", - "description": "Shown name of external storage" - }, - "address": { - "type": "string", - "description": "The ip address of external storage" - }, - "rootPath": { - "type": "string", - "description": "The root path of external storage" - }, - "sharedFolders": { - "type": "array", - "description": "Shared folder under root path", - "items": { "type": "string" } - }, - "privateFolders": { - "type": "array", - "description": "The base of user private folder under root path, represent rootPath/$base/$username", - "items": { "type": "string" } - } - }, - "required": [ - "type", - "title", - "address", - "rootPath" - ] -} \ No newline at end of file diff --git a/contrib/storage_plugin/schemas/storage_user.schema.json b/contrib/storage_plugin/schemas/storage_user.schema.json deleted file mode 100644 index 5e4d7c4c9..000000000 --- a/contrib/storage_plugin/schemas/storage_user.schema.json +++ /dev/null @@ -1,18 +0,0 @@ -{ - "type": "object", - "properties": { - "defaultStorage": { - "type": "string", - "description": "User default external storage" - }, - "externalStorages": { - "type": "array", - "description": "All external storages that the user has permission to access", - "items": { "type": "string" } - } - }, - "required": [ - "defaultStorage", - "externalStorages" - ] -} \ No newline at end of file diff --git a/contrib/storage_plugin/storagectl.md b/contrib/storage_plugin/storagectl.md deleted file mode 100644 index 172340b43..000000000 --- a/contrib/storage_plugin/storagectl.md +++ /dev/null @@ -1,111 +0,0 @@ -# storagectl - -A tool to manage your storage config. 
- -## Index -- [ Manage server ](#Server_config) - - [ Set server ](#Server_set) - - [ Set nfs server ](#Server_set_nfs) - - [ Set samba server ](#Server_set_samba) - - [ Set azurefile server ](#Server_set_azurefile) - - [ Set azureblob server ](#Server_set_azureblob) - - [ Set hdfs server ](#Server_set_hdfs) - - [ List server ](#Server_list) - - [ Delete server ](#Server_delete) - -- [ Manage config ](#Config_config) - - [ Set config ](#Config_set) - - [ List config ](#Config_list) - - [ Delete config ](#Config_delete) - -- [ Manage group storage access ](#Groupsc_config) - - [ Add group storage config ](#Groupsc_add) - - [ List group storage configs ](#Groupsc_list) - - [ Delete group storage config ](#Groupsc_delete) - - -## Manage Server -Manage servers in PAI. A server entry defines how PAI accesses a NAS server. -### Set server - -#### Set nfs server -``` -python storagectl.py server set NAME nfs ADDRESS ROOTPATH -``` - -#### Set samba server -``` -python storagectl.py server set NAME samba ADDRESS ROOTPATH USERNAME PASSWORD DOMAIN -``` - -#### Set azurefile server -``` -python storagectl.py server set NAME azurefile DATASTORE FILESHARE ACCOUNTNAME KEY [-p PROXY_ADDRESS PROXY_PASSWORD] -``` - -#### Set azureblob server -``` -python storagectl.py server set NAME azureblob DATASTORE CONTAINERNAME ACCOUNTNAME KEY -``` - -#### Set hdfs server -``` -python storagectl.py server set NAME hdfs NAMENODE PORT -``` - -### List server -``` -python storagectl.py server list [-n SERVER_NAME_1, SERVER_NAME_2 ...] -``` -- If -n is specified, list only the named servers. Otherwise list all servers. - -### Delete server -``` -python storagectl.py server delete SERVER_NAME -``` - - -## Manage Config -Manage configs for groups in PAI. A config defines a set of mount infos. Every config belongs to a group; that is, one group may have 0 to n configs. -### Set config -``` -python storagectl.py config set CONFIG_NAME [-s SERVER_NAME_1 SERVER_NAME_2 ...] [-m MOUNT_POINT SERVER PATH]... [-d] -``` -- If -d is set, the config's storage is mounted by default. -- -m specifies a mount info for the config. If -m is specified, the PATH on SERVER will be mounted to MOUNT_POINT. - - [Job Environment Variables](https://github.com/microsoft/pai/blob/master/docs/job_tutorial.md#environment-variables) can be referenced in PATH. Please use '' to quote job environment variables to avoid expanding local variables in the dev-box. - -For example, suppose we have set a config using: -``` -python storagectl.py config set SAMPLE_CONFIG -m /mnt/job SAMPLE_SERVER 'users/${PAI_USER_NAME}/jobs/${PAI_JOB_NAME}' -``` -If the current user is 'paiuser' and the current job is 'job-TEST', this config will mount SAMPLE_SERVER/users/paiuser/jobs/job-TEST to /mnt/job. - -### List config -``` -python storagectl.py config list [-n CONFIG_NAME_1, CONFIG_NAME_2 ...] -``` -- If -n is specified, list only the named configs. Otherwise list all configs. - -### Delete config -``` -python storagectl.py config delete CONFIG_NAME -``` - - -## Manage group storage access -Manage a PAI group's storage config access; a sketch of the underlying data manipulation follows below.
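As a rough illustration of the semantics only (the actual tool, shown in storagectl.py later in this diff, keeps this data base64-encoded in a Kubernetes secret in the pai-group namespace), adding and deleting a group's storage configs amounts to maintaining the `storageConfigs` list inside the group extension described in the storage plugin README:

```python
# Group extension as described under "Config in group data" in the storage
# plugin README. This in-memory dict is illustrative only.
extension = {
    "acls": {
        "admin": False,
        "virtualClusters": [],
        "storageConfigs": [],
    }
}

def groupsc_add(extension, config_name):
    # Mirrors `storagectl.py groupsc add`: add the config at most once.
    configs = extension["acls"].setdefault("storageConfigs", [])
    if config_name not in configs:
        configs.append(config_name)

def groupsc_delete(extension, config_name):
    # Mirrors `storagectl.py groupsc delete`: remove the config if present.
    configs = extension["acls"].get("storageConfigs", [])
    if config_name in configs:
        configs.remove(config_name)

groupsc_add(extension, "configsmb")
print(extension["acls"]["storageConfigs"])   # ['configsmb']
groupsc_delete(extension, "configsmb")
print(extension["acls"]["storageConfigs"])   # []
```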
-### Add group storage config -``` -python storagectl.py groupsc add GROUP_NAME CONFIG_NAME -``` - -### List group storage config -``` -python storagectl.py groupsc list GROUP_NAME -``` - -### Delete group storage config -``` -python storagectl.py groupsc delete GROUP_NAME CONFIG_NAME -``` diff --git a/contrib/storage_plugin/storagectl.py b/contrib/storage_plugin/storagectl.py deleted file mode 100644 index 0b1cd0194..000000000 --- a/contrib/storage_plugin/storagectl.py +++ /dev/null @@ -1,273 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -from __future__ import absolute_import -from __future__ import print_function - -import os -import sys -import argparse -import datetime -import logging -import logging.config -import json -import base64 -import subprocess -import multiprocessing -import random,string - -from kubernetes import client, config, watch -from kubernetes.client.rest import ApiException - -from utils.storage_util import * - -import binascii - -logger = logging.getLogger(__name__) - -# Save server config to k8s secret -def save_secret(secret_name, name, content_dict): - secret_dict = dict() - secret_dict[name] = base64.b64encode(json.dumps(content_dict)) - patch_secret(secret_name, secret_dict, "pai-storage") - -def show_secret(args): - secret_data = get_secret(args.secret_name, "pai-storage") - if secret_data is None: - logger.error("No secret found.") - else: - for key, value in secret_data.iteritems(): - if args.name is None or key in args.name: - print(key) - print(base64.b64decode(value)) - -def delete_secret(args): - delete_secret_content(args.secret_name, args.name, "pai-storage") - - -def server_set(args): - content_dict = dict() - content_dict["spn"] = args.name - content_dict["type"] = args.server_type - if args.server_type == "nfs": - content_dict["address"] = args.address - content_dict["rootPath"] = args.root_path - elif args.server_type == "samba": - content_dict["address"] = args.address - content_dict["rootPath"] = args.root_path - content_dict["userName"] = args.user_name - content_dict["password"] = args.password - content_dict["domain"] = args.domain - elif args.server_type == "azurefile": - content_dict["dataStore"] = args.data_store - content_dict["fileShare"] = args.file_share - content_dict["accountName"] = args.account_name - content_dict["key"] = args.key - if args.proxy is not None: - content_dict["proxy"] = args.proxy - elif args.server_type == 
"azureblob": - content_dict["dataStore"] = args.data_store - content_dict["containerName"] = args.container_name - content_dict["accountName"] = args.account_name - content_dict["key"] = args.key - elif args.server_type == "hdfs": - content_dict["namenode"] = args.namenode - content_dict["port"] = args.port - else: - logger.error("Unknow storage type") - sys.exit(1) - save_secret("storage-server", args.name, content_dict) - - -def config_set(args): - try: - content_dict = dict() - content_dict["name"] = args.name - content_dict["servers"] = args.servers - content_dict["default"] = args.default - if args.mount_info is not None: - mount_infos = [] - for info_data in args.mount_info: - # Verify mount point, mountPoint should starts with "/" and path should not - if not info_data[0].startswith("/"): - raise NameError("MOUNT_POINT should be absolute path and starts with \'/\'") - elif info_data[2].startswith("/"): - raise NameError("PATH should be relative path and not starts with \'/\'") - else: - info = {"mountPoint" : info_data[0], "server" : info_data[1], "path" : info_data[2]} - mount_infos.append(info) - content_dict["mountInfos"] = mount_infos - except NameError as e: - logger.error(e) - else: - save_secret("storage-config", args.name, content_dict) - -def get_group_extension(group_name): - group_hex = binascii.hexlify(group_name) - secret_data = get_secret(group_hex, "pai-group") - if secret_data is None: - logger.error("No group found.") - return None - else: - extension = json.loads(base64.b64decode(secret_data["extension"])) - return extension - -def groupsc_add(args): - extension = get_group_extension(args.group_name) - if extension is not None: - if "storageConfigs" not in extension["acls"]: - extension["acls"]["storageConfigs"] = [] - storageConfigs = extension["acls"]["storageConfigs"] - if args.config_name not in storageConfigs: - storageConfigs.append(args.config_name) - secret_dict = dict() - secret_dict["extension"] = base64.b64encode(json.dumps(extension)) - patch_secret(binascii.hexlify(args.group_name), secret_dict, "pai-group") - logger.info("Successfully added storage config to group!") - -def groupsc_delete(args): - extension = get_group_extension(args.group_name) - if extension is not None: - storageConfigs = extension["acls"]["storageConfigs"] - if args.config_name in storageConfigs: - storageConfigs.remove(args.config_name) - secret_dict = dict() - secret_dict["extension"] = base64.b64encode(json.dumps(extension)) - patch_secret(binascii.hexlify(args.group_name), secret_dict, "pai-group") - logger.info("Successfully deleted storage config from group!") - -def groupsc_list(args): - extension = get_group_extension(args.group_name) - if extension is not None: - print(extension["acls"]["storageConfigs"]) - -def setup_logger_config(logger): - """ - Setup logging configuration. 
- """ - if len(logger.handlers) == 0: - logger.propagate = False - logger.setLevel(logging.DEBUG) - consoleHandler = logging.StreamHandler() - consoleHandler.setLevel(logging.DEBUG) - formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') - consoleHandler.setFormatter(formatter) - logger.addHandler(consoleHandler) - - -def main(): - scriptFolder=os.path.dirname(os.path.realpath(__file__)) - os.chdir(scriptFolder) - - parser = argparse.ArgumentParser(description="pai storage management tool") - subparsers = parser.add_subparsers(help='Storage management cli') - - # ./storagectl.py server set|list|delete - server_parser = subparsers.add_parser("server", description="Commands to manage servers.", formatter_class=argparse.RawDescriptionHelpFormatter) - server_subparsers = server_parser.add_subparsers(help="Add/modify, list or delete server") - # ./storgectl.py server set ... - server_set_parser = server_subparsers.add_parser("set") - server_set_parser.add_argument("name") - server_set_subparsers = server_set_parser.add_subparsers(help="Add/modify storage types, currently support nfs, samba, azurefile and azureblob") - # ./storagectl.py server set NAME nfs ADDRESS ROOTPATH - server_set_nfs_parser = server_set_subparsers.add_parser("nfs") - server_set_nfs_parser.add_argument("address", metavar="address", help="Nfs remote address") - server_set_nfs_parser.add_argument("root_path", metavar="rootpath", help="Nfs remote root path") - server_set_nfs_parser.set_defaults(func=server_set, server_type="nfs") - # ./storagectl.py server set NAME samba ADDRESS ROOTPATH USERNAME PASSWORD DOMAIN - server_set_samba_parser = server_set_subparsers.add_parser("samba") - server_set_samba_parser.add_argument("address", metavar="address", help="Samba remote address") - server_set_samba_parser.add_argument("root_path", metavar="rootpath", help="Samba remote root path") - server_set_samba_parser.add_argument("user_name", metavar="username", help="Samba PAI username") - server_set_samba_parser.add_argument("password", metavar="password", help="Samba PAI password") - server_set_samba_parser.add_argument("domain", metavar="domain", help="Samba PAI domain") - server_set_samba_parser.set_defaults(func=server_set, server_type="samba") - # ./storagectl.py server set NAME azurefile DATASTORE FILESHARE ACCOUNTNAME KEY [-p PROXY_ADDRESS PROXY_PASSWORD] - server_set_azurefile_parser = server_set_subparsers.add_parser("azurefile") - server_set_azurefile_parser.add_argument("data_store", metavar="datastore", help="Azurefile data store") - server_set_azurefile_parser.add_argument("file_share", metavar="fileshare", help="Azurefile file share") - server_set_azurefile_parser.add_argument("account_name", metavar="accountname", help="Azurefile account name") - server_set_azurefile_parser.add_argument("key", metavar="key", help="Azurefile share key") - server_set_azurefile_parser.add_argument("-p", "--proxy", dest="proxy", nargs=2, help="Proxy to mount azure file: PROXY_INFO PROXY_PASSWORD") - server_set_azurefile_parser.set_defaults(func=server_set, server_type="azurefile") - # ./storagectl.py server set NAME azureblob DATASTORE CONTAINERNAME ACCOUNTNAME KEY - server_set_azureblob_parser = server_set_subparsers.add_parser("azureblob") - server_set_azureblob_parser.add_argument("data_store", metavar="datastore", help="Azureblob data store") - server_set_azureblob_parser.add_argument("container_name", metavar="containername", help="Azureblob container name") - 
server_set_azureblob_parser.add_argument("account_name", metavar="accountname", help="Azureblob account name") - server_set_azureblob_parser.add_argument("key", metavar="key", help="Azureblob share key") - server_set_azureblob_parser.set_defaults(func=server_set, server_type="azureblob") - # ./storagectl.py server set NAME hdfs NAMENODE PORT - server_set_hdfs_parser = server_set_subparsers.add_parser("hdfs") - server_set_hdfs_parser.add_argument("namenode", metavar="namenode", help="HDFS name node") - server_set_hdfs_parser.add_argument("port", metavar="port", help="HDFS name node port") - server_set_hdfs_parser.set_defaults(func=server_set, server_type="hdfs") - # ./storagectl.py server list [-n SERVER_NAME_1, SERVER_NAME_2 ...] - server_list_parser = server_subparsers.add_parser("list") - server_list_parser.add_argument("-n", "--name", dest="name", nargs="+", help="filter result by names") - server_list_parser.set_defaults(func=show_secret, secret_name="storage-server") - # ./storagectl.py user delete SERVER_NAME - server_del_parser = server_subparsers.add_parser("delete") - server_del_parser.add_argument("name") - server_del_parser.set_defaults(func=delete_secret, secret_name="storage-server") - - # ./storagectl.py config ... - config_parser = subparsers.add_parser("config", description="Manage config", formatter_class=argparse.RawDescriptionHelpFormatter) - config_subparsers = config_parser.add_subparsers(help="Manage config") - # ./storagectl.py config set CONFIG_NAME GROUP_NAME [-s SERVER_NAME_1 SERVER_NAME_2 ...] [-m MOUNT_POINT SERVER PATH]... [-d] - config_set_parser = config_subparsers.add_parser("set") - config_set_parser.add_argument("name", help="Config name") - config_set_parser.add_argument("-s", "--server", dest="servers", nargs="+", help="-s SERVER_NAME_1 SERVER_NAME_2 ...") - config_set_parser.add_argument("-m", "--mountinfo", dest="mount_info", nargs=3, action="append", help="-m MOUNT_POINT SERVER SUB_PATH") - config_set_parser.add_argument("-d", "--default", action="store_true", help="Mount by default") - config_set_parser.set_defaults(func=config_set) - # ./storagectl.py config list [-n CONFIG_NAME_1, CONFIG_NAME_2 ...] [-g GROUP_NAME_1, GROUP_NAME_2 ...] 
- config_list_parser = config_subparsers.add_parser("list") - config_list_parser.add_argument("-n", "--name", dest="name", nargs="+", help="filter result by names") - config_list_parser.add_argument("-g", "--group", dest="group", nargs="+", help="filter result by groups") - config_list_parser.set_defaults(func=show_secret, secret_name="storage-config") - # ./storagectl.py config delete CONFIG_NAME - config_del_parser = config_subparsers.add_parser("delete") - config_del_parser.add_argument("name") - config_del_parser.set_defaults(func=delete_secret, secret_name="storage-config") - - # ./storagectl.py groupsc add|delete|list - groupsc_parser = subparsers.add_parser("groupsc", description="Manage group storage config", formatter_class=argparse.RawDescriptionHelpFormatter) - groupsc_subparsers = groupsc_parser.add_subparsers(help="Manage group storage config") - # ./storagectl.py groupsc add GROUP_NAME STORAGE_CONFIG_NAME - groupsc_add_parser = groupsc_subparsers.add_parser("add") - groupsc_add_parser.add_argument("group_name") - groupsc_add_parser.add_argument("config_name") - groupsc_add_parser.set_defaults(func=groupsc_add) - # ./storagectl.py groupsc delete GROUP_NAME STORAGE_CONFIG_NAME - groupsc_delete_parser = groupsc_subparsers.add_parser("delete") - groupsc_delete_parser.add_argument("group_name") - groupsc_delete_parser.add_argument("config_name") - groupsc_delete_parser.set_defaults(func=groupsc_delete) - # ./storagectl.py groupsc list GROUP_NAME - groupsc_list_parser = groupsc_subparsers.add_parser("list") - groupsc_list_parser.add_argument("group_name") - groupsc_list_parser.set_defaults(func=groupsc_list) - - args = parser.parse_args() - args.func(args) - - -if __name__ == "__main__": - setup_logger_config(logger) - main() diff --git a/contrib/storage_plugin/utils/__init__.py b/contrib/storage_plugin/utils/__init__.py deleted file mode 100644 index d647bb847..000000000 --- a/contrib/storage_plugin/utils/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/contrib/storage_plugin/utils/storage_util.py b/contrib/storage_plugin/utils/storage_util.py deleted file mode 100644 index bd6b2fa22..000000000 --- a/contrib/storage_plugin/utils/storage_util.py +++ /dev/null @@ -1,194 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import os -import sys -import time -import logging -import logging.config -import base64 - -from kubernetes import client, config, watch -from kubernetes.client.rest import ApiException - -logger = logging.getLogger(__name__) - -def confirm_namespace(namespace): - config.load_kube_config() - api_instance = client.CoreV1Api() - - try: - api_response = api_instance.read_namespace(namespace) - - except ApiException as e: - if e.status == 404: - logger.info("Couldn't find namespace {0}. Create new namespace".format(namespace)) - try: - meta_data = client.V1ObjectMeta(name=namespace) - body = client.V1Namespace(metadata=meta_data) - api_response = api_instance.create_namespace(body) - logger.info("Namespace {0} is created".format(namespace)) - except ApiException as ie: - logger.error("Exception when calling CoreV1Api->create_namespace: {0}".format(str(ie))) - sys.exit(1) - else: - logger.error("Exception when calling CoreV1Api->read_namespace: {0}".format(str(e))) - sys.exit(1) - - -# List usernames from pai-user secrets -def get_pai_users(): - users = [] - config.load_kube_config() - api_instance = client.CoreV1Api() - - try: - api_response = api_instance.list_namespaced_secret("pai-user") - for item in api_response.items: - users.append(base64.b64decode(item.data["username"])) - - except ApiException as e: - if e.status == 404: - logger.info("Couldn't find secret in namespace pai-user, exit") - sys.exit(1) - else: - logger.error("Exception when calling CoreV1Api->list_namespaced_secret: {0}".format(str(e))) - sys.exit(1) - - return users - - -def update_configmap(name, data_dict, namespace): - confirm_namespace(namespace) - - config.load_kube_config() - api_instance = client.CoreV1Api() - - meta_data = client.V1ObjectMeta() - meta_data.namespace = namespace - meta_data.name = name - body = client.V1ConfigMap( - metadata = meta_data, - data = data_dict) - - try: - api_response = api_instance.patch_namespaced_config_map(name, namespace, body) - logger.info("configmap named {0} is updated.".format(name)) - except ApiException as e: - if e.status == 404: - try: - logger.info("Couldn't find configmap named {0}. 
Create a new configmap".format(name)) - api_response = api_instance.create_namespaced_config_map(namespace, body) - logger.info("Configmap named {0} is created".format(name)) - except ApiException as ie: - logger.error("Exception when calling CoreV1Api->create_namespaced_config_map: {0}".format(str(ie))) - sys.exit(1) - else: - logger.error("Exception when calling CoreV1Api->patch_namespaced_config_map: {0}".format(str(e))) - sys.exit(1) - - -def get_storage_config(storage_config_name, namespace): - confirm_namespace(namespace) - - config.load_kube_config() - api_instance = client.CoreV1Api() - - try: - api_response = api_instance.read_namespaced_config_map(storage_config_name, namespace) - - except ApiException as e: - if e.status == 404: - logger.info("Couldn't find configmap named {0}.".format(storage_config_name)) - return None - else: - logger.error("Exception when calling CoreV1Api->read_namespaced_config_map: {0}".format(str(e))) - sys.exit(1) - - return api_response.data - - -def patch_secret(name, data_dict, namespace): - confirm_namespace(namespace) - - config.load_kube_config() - api_instance = client.CoreV1Api() - - meta_data = client.V1ObjectMeta() - meta_data.namespace = namespace - meta_data.name = name - body = client.V1Secret(metadata = meta_data, data = data_dict) - - try: - api_response = api_instance.patch_namespaced_secret(name, namespace, body) - logger.info("Secret named {0} is updated.".format(name)) - except ApiException as e: - logger.info(e) - if e.status == 404: - try: - logger.info("Couldn't find secret named {0}. Create a new secret".format(name)) - api_response = api_instance.create_namespaced_secret(namespace, body) - logger.info("Secret named {0} is created".format(name)) - except ApiException as ie: - logger.error("Exception when calling CoreV1Api->create_namespaced_secret: {0}".format(str(ie))) - sys.exit(1) - else: - logger.error("Exception when calling CoreV1Api->patch_namespaced_secret: {0}".format(str(e))) - sys.exit(1) - - -def get_secret(name, namespace): - confirm_namespace(namespace) - - config.load_kube_config() - api_instance = client.CoreV1Api() - - try: - api_response = api_instance.read_namespaced_secret(name, namespace) - except ApiException as e: - if e.status == 404: - logger.info("Couldn't find secret named {0}.".format(name)) - return None - else: - logger.error("Exception when calling CoreV1Api->read_namespaced_secret: {0}".format(str(e))) - sys.exit(1) - - return api_response.data - - -def delete_secret_content(name, key, namespace): - confirm_namespace(namespace) - - config.load_kube_config() - api_instance = client.CoreV1Api() - try: - api_response = api_instance.read_namespaced_secret(name, namespace) - if api_response is not None and type(api_response.data) is dict: - removed_content = api_response.data.pop(key, None) - if removed_content is not None: - meta_data = client.V1ObjectMeta() - meta_data.namespace = namespace - meta_data.name = name - body = client.V1Secret(metadata = meta_data, data = api_response.data) - api_instance.replace_namespaced_secret(name, namespace, body) - except ApiException as e: - if e.status == 404: - logger.info("Couldn't find secret named {0}.".format(name)) - else: - logger.error("Exception when trying to delete {0} from {1}, reason: {2}".format(key, name, str(e))) - sys.exit(1) diff --git a/contrib/submit-simple-job/.editorconfig b/contrib/submit-simple-job/.editorconfig deleted file mode 100644 index 68e74812a..000000000 --- a/contrib/submit-simple-job/.editorconfig +++ /dev/null @@ -1,14 +0,0 @@ 
-root = true - -[*] -indent_style = space -indent_size = 2 -end_of_line = lf -charset = utf-8 -trim_trailing_whitespace = true -insert_final_newline = true -max_line_length = 80 - -[*.md] -indent_size = 4 -trim_trailing_whitespace = false diff --git a/contrib/submit-simple-job/.gitignore b/contrib/submit-simple-job/.gitignore deleted file mode 100644 index d65e062ff..000000000 --- a/contrib/submit-simple-job/.gitignore +++ /dev/null @@ -1,89 +0,0 @@ - -# Created by https://www.gitignore.io/api/node -# Edit at https://www.gitignore.io/?templates=node - -### Node ### -# Logs -logs -*.log -npm-debug.log* -yarn-debug.log* -yarn-error.log* - -# Runtime data -pids -*.pid -*.seed -*.pid.lock - -# Directory for instrumented libs generated by jscoverage/JSCover -lib-cov - -# Coverage directory used by tools like istanbul -coverage - -# nyc test coverage -.nyc_output - -# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files) -.grunt - -# Bower dependency directory (https://bower.io/) -bower_components - -# node-waf configuration -.lock-wscript - -# Compiled binary addons (https://nodejs.org/api/addons.html) -build/Release - -# Dependency directories -node_modules/ -jspm_packages/ - -# TypeScript v1 declaration files -typings/ - -# Optional npm cache directory -.npm - -# Optional eslint cache -.eslintcache - -# Optional REPL history -.node_repl_history - -# Output of 'npm pack' -*.tgz - -# Yarn Integrity file -.yarn-integrity - -# dotenv environment variables file -.env -.env.test - -# parcel-bundler cache (https://parceljs.org/) -.cache - -# next.js build output -.next - -# nuxt.js build output -.nuxt - -# vuepress build output -.vuepress/dist - -# Serverless directories -.serverless/ - -# FuseBox cache -.fusebox/ - -# DynamoDB Local files -.dynamodb/ - -# End of https://www.gitignore.io/api/node - -dist/ diff --git a/contrib/submit-simple-job/App/Components/FormControls/CheckBox.tsx b/contrib/submit-simple-job/App/Components/FormControls/CheckBox.tsx deleted file mode 100644 index 73957f3d1..000000000 --- a/contrib/submit-simple-job/App/Components/FormControls/CheckBox.tsx +++ /dev/null @@ -1,52 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. 
- */ -import * as React from "react"; - -import { IFormControlProps } from "."; - -interface ICheckBoxProps extends IFormControlProps {} - -const CheckBox: React.FunctionComponent = (props) => { - const { children, className, onChange, value } = props; - - const onInputChange: React.ChangeEventHandler = (event) => { - if (onChange !== undefined) { - onChange(event.target.checked); - } - }; - - return ( -
-
- -
-
- ); -}; - -export default CheckBox; diff --git a/contrib/submit-simple-job/App/Components/FormControls/NumberInput.tsx b/contrib/submit-simple-job/App/Components/FormControls/NumberInput.tsx deleted file mode 100644 index 8771ed1b7..000000000 --- a/contrib/submit-simple-job/App/Components/FormControls/NumberInput.tsx +++ /dev/null @@ -1,59 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -import classNames from "classnames"; -import * as React from "react"; - -import { IFormControlProps } from "."; - -interface INumberInputProps extends IFormControlProps { - min?: number; - max?: number; -} - -const NumberInput: React.FunctionComponent = (props) => { - const { children, className, max, min, onChange, value } = props; - const onInputChange: React.ChangeEventHandler = (event) => { - if (onChange !== undefined) { onChange(event.target.valueAsNumber); } - }; - const UID = "U" + Math.floor(Math.random() * 0xFFFFFF).toString(16); - - return ( -
- - -
- ); -}; - -export default NumberInput; diff --git a/contrib/submit-simple-job/App/Components/FormControls/Select.tsx b/contrib/submit-simple-job/App/Components/FormControls/Select.tsx deleted file mode 100644 index 52b11dbde..000000000 --- a/contrib/submit-simple-job/App/Components/FormControls/Select.tsx +++ /dev/null @@ -1,75 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -import classNames from "classnames"; -import * as React from "react"; - -import { IFormControlProps } from "."; - -interface IOptionProps { - label: string; - value: string; -} - -const Option: React.FunctionComponent = ({ value, label }) => { - return ; -}; - -interface ISelectProps extends IFormControlProps { - options: Array; -} - -const Select: React.FunctionComponent = (props) => { - const { children, className, options, value, onChange } = props; - const onSelectChange: React.ChangeEventHandler = (event) => { - if (onChange !== undefined) { - onChange(event.target.value); - } - }; - const UID = "U" + Math.floor(Math.random() * 0xFFFFFF).toString(16); - return ( -
- - -
- ); -}; - -export default Select; diff --git a/contrib/submit-simple-job/App/Components/FormControls/TextArea.tsx b/contrib/submit-simple-job/App/Components/FormControls/TextArea.tsx deleted file mode 100644 index 8f3e8b6cd..000000000 --- a/contrib/submit-simple-job/App/Components/FormControls/TextArea.tsx +++ /dev/null @@ -1,59 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -import classNames from "classnames"; -import * as React from "react"; - -import { IFormControlProps } from "."; - -interface ITextAreaProps extends IFormControlProps { - cols?: number; - rows?: number; -} - -const TextArea: React.FunctionComponent = (props) => { - const { children, className, rows, cols, value, onChange } = props; - const onTextAreaChange: React.ChangeEventHandler = (event) => { - if (onChange !== undefined) { - onChange(event.target.value); - } - }; - const UID = "U" + Math.floor(Math.random() * 0xFFFFFF).toString(16); - return ( -
- - - - Virtual Cluster - - - Interactive Job - - - Enable Tensorboard - - - Launch container as root user. - - { - simpleJob.isInteractive ? ( - - Open ports for interactive job - - ) : null - } - { - simpleJob.enableTensorboard ? ( - - Tensorflow model path, used to enable tensorboard - - ) : null - } -
- -
- { - nfs ? ( - - - - ) : null - } - - - - - - - - - -
-
-
-
- - - -
- - ); - }} - -); - -export default SimpleJobForm; diff --git a/contrib/submit-simple-job/App/SimpleJob/index.ts b/contrib/submit-simple-job/App/SimpleJob/index.ts deleted file mode 100644 index f66264a1c..000000000 --- a/contrib/submit-simple-job/App/SimpleJob/index.ts +++ /dev/null @@ -1,262 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -export interface IEnvironmentVariable { - name: string; - value: string; -} - -export interface ISimpleJob { - readonly name: string; - readonly gpus: number; - - readonly image: string; - readonly command: string; - readonly root: boolean; - readonly virtualCluster: string; - - readonly isInteractive: boolean; - readonly interactivePorts: string; - - readonly enableTensorboard: boolean; - readonly tensorboardModelPath: string; - - readonly enableWorkMount: boolean; - readonly workPath: string; - readonly enableDataMount: boolean; - readonly dataPath: string; - readonly enableJobMount: boolean; - readonly jobPath: string; - - readonly hyperParameterName: string; - readonly hyperParameterStartValue: number; - readonly hyperParameterEndValue: number; - readonly hyperParameterStep: number; - - readonly environmentVariables: IEnvironmentVariable[]; - - readonly isPrivileged: boolean; - readonly cpus: number; - readonly memory: number; -} - -export default class SimpleJob implements ISimpleJob { - - public static fromLegacyJSON(json: string) { - const legacyObject = JSON.parse(json); - const simpleJob = new SimpleJob(); - - if (typeof legacyObject.jobName === "string") { - simpleJob.name = legacyObject.jobName; - } - if (typeof legacyObject.resourcegpu === "number") { - simpleJob.gpus = Number(legacyObject.resourcegpu) || 0; - } - - if (typeof legacyObject.image === "string") { - simpleJob.image = legacyObject.image; - } - if (typeof legacyObject.cmd === "string") { - simpleJob.command = legacyObject.cmd; - } - if (typeof legacyObject.runningasroot === "boolean") { - simpleJob.root = legacyObject.runningasroot; - } - - if (typeof legacyObject.is_interactive === "boolean") { - simpleJob.isInteractive = legacyObject.is_interactive; - } - if (simpleJob.isInteractive && typeof legacyObject.interactivePort === "string") { - simpleJob.interactivePorts = legacyObject.interactivePort; - } - - if (typeof legacyObject.do_log === "boolean") { - simpleJob.enableTensorboard = legacyObject.do_log; - } - if (simpleJob.enableTensorboard && typeof 
legacyObject.logDir === "string") { - simpleJob.tensorboardModelPath = legacyObject.logDir; - } - - if (typeof legacyObject.enableworkpath === "boolean") { - simpleJob.enableWorkMount = legacyObject.enableworkpath; - } - if (typeof legacyObject.workPath === "string") { - simpleJob.workPath = legacyObject.workPath; - } - if (typeof legacyObject.enabledatapath === "boolean") { - simpleJob.enableDataMount = legacyObject.enabledatapath; - } - if (typeof legacyObject.dataPath === "string") { - simpleJob.dataPath = legacyObject.dataPath; - } - if (typeof legacyObject.enablejobpath === "boolean") { - simpleJob.enableJobMount = legacyObject.enablejobpath; - } - if (typeof legacyObject.jobPath === "string") { - simpleJob.jobPath = legacyObject.jobPath; - } - - if (typeof legacyObject.hyperparametername === "string") { - simpleJob.hyperParameterName = legacyObject.hyperparametername; - } - if ( - typeof legacyObject.hyperparameterstartvalue === "number" || - typeof legacyObject.hyperparameterstartvalue === "string" - ) { - simpleJob.hyperParameterStartValue = Number(legacyObject.hyperparameterstartvalue) || 0; - } - if ( - typeof legacyObject.hyperparameterendvalue === "number" || - typeof legacyObject.hyperparameterendvalue === "string" - ) { - simpleJob.hyperParameterEndValue = Number(legacyObject.hyperparameterendvalue) || 0; - } - if ( - typeof legacyObject.hyperparameterstep === "number" || - typeof legacyObject.hyperparameterstep === "string" - ) { - simpleJob.hyperParameterStep = Number(legacyObject.hyperparameterstep) || 0; - } - - if (Array.isArray(legacyObject.env)) { - const { environmentVariables } = simpleJob; - for (const legacyEnv of legacyObject.env) { - const { name, value } = legacyEnv; - if (typeof name === "string" && typeof value === "string") { - environmentVariables.push({ name, value }); - } - } - } - - if (typeof legacyObject.isPrivileged === "boolean") { - simpleJob.isPrivileged = legacyObject.isPrivileged; - } - - if (simpleJob.isPrivileged) { - if ( - typeof legacyObject.cpurequest === "string" || - typeof legacyObject.cpurequest === "number" - ) { - simpleJob.cpus = Number(legacyObject.cpurequest) || 1; - } - if ( - typeof legacyObject.memoryrequest === "string" || - typeof legacyObject.memoryrequest === "number" - ) { - simpleJob.memory = Number(legacyObject.memoryrequest) || 256; - } - } - - return simpleJob; - } - - public static toLegacyJSON(simpleJob: ISimpleJob): string { - const legacyObject: any = {}; - legacyObject.jobName = simpleJob.name; - legacyObject.resourcegpu = simpleJob.gpus; - - legacyObject.image = simpleJob.image; - legacyObject.cmd = simpleJob.command; - legacyObject.runningasroot = simpleJob.root; - - if (simpleJob.isInteractive) { - legacyObject.is_interactive = simpleJob.isInteractive; - legacyObject.interactivePort = simpleJob.interactivePorts; - } - - if (simpleJob.enableTensorboard) { - legacyObject.do_log = simpleJob.enableTensorboard; - legacyObject.logDir = simpleJob.tensorboardModelPath; - } - - legacyObject.enableworkpath = simpleJob.enableWorkMount; - legacyObject.workPath = simpleJob.workPath; - legacyObject.enabledatapath = simpleJob.enableDataMount; - legacyObject.dataPath = simpleJob.dataPath; - legacyObject.enablejobpath = simpleJob.enableJobMount; - legacyObject.jobPath = simpleJob.jobPath; - - legacyObject.hyperparametername = simpleJob.hyperParameterName; - legacyObject.hyperparameterstartvalue = simpleJob.hyperParameterStartValue; - legacyObject.hyperparameterendvalue = simpleJob.hyperParameterEndValue; - 
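- // the hyper-parameter sweep range is serialized under the flat legacy field names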
legacyObject.hyperparameterstep = simpleJob.hyperParameterStep; - - if (simpleJob.isPrivileged) { - legacyObject.isPrivileged = simpleJob.isPrivileged; - legacyObject.cpurequest = simpleJob.cpus; - legacyObject.memoryrequest = simpleJob.memory; - } - - return JSON.stringify(legacyObject); - } - - public name: string = ""; - public gpus: number = 0; - - public image: string = ""; - public command: string = ""; - public root: boolean = true; - public virtualCluster: string = "default"; - - public isInteractive: boolean = false; - public interactivePorts: string = ""; - - public enableTensorboard: boolean = false; - public tensorboardModelPath: string = ""; - - public enableWorkMount: boolean = false; - public workPath: string = ""; - public enableDataMount: boolean = false; - public dataPath: string = ""; - public enableJobMount: boolean = false; - public jobPath: string = ""; - - public hyperParameterName: string = ""; - public hyperParameterStartValue: number = 0; - public hyperParameterEndValue: number = 0; - public hyperParameterStep: number = 0; - - public environmentVariables: IEnvironmentVariable[] = []; - - public isPrivileged: boolean = false; - public cpus: number = 1; - public memory: number = 30 * 1024; - - public constructor(that?: ISimpleJob) { - if (that !== undefined) { - Object.assign(this, that); - } - } - - public clone< - F extends keyof ISimpleJob, - V extends ISimpleJob[F] - >(field?: F, value?: V): SimpleJob { - const that = new SimpleJob(this); - if (field !== undefined && value !== undefined) { - that[field] = value; - } - return that; - } -} diff --git a/contrib/submit-simple-job/App/Templates/Context.ts b/contrib/submit-simple-job/App/Templates/Context.ts deleted file mode 100644 index 0c4f9e62f..000000000 --- a/contrib/submit-simple-job/App/Templates/Context.ts +++ /dev/null @@ -1,39 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. 
- */ -import { createContext } from "react"; - -import { templates } from "./data.json"; - -interface ITemplatesContext { - templates: typeof templates; - apply(json: string): void; -} - -const TemplatesContext = createContext({ - templates: [], - apply() { return; }, -}); - -export default TemplatesContext; diff --git a/contrib/submit-simple-job/App/Templates/Select.tsx b/contrib/submit-simple-job/App/Templates/Select.tsx deleted file mode 100644 index 286a90464..000000000 --- a/contrib/submit-simple-job/App/Templates/Select.tsx +++ /dev/null @@ -1,61 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -import * as React from "react"; - -import TemplatesContext from "./Context"; - -import Select from "../Components/FormControls/Select"; - -interface ITemplatesSelectProps { - className: string; - children: string; -} - -const TemplatesSelect: React.FunctionComponent = ({ className, children }) => ( - - { ({ templates, apply }) => { - const options = templates.map(({ - Name, - Json, - }) => ({ - label: Name, - value: Json, - })); - options.unshift({ label: "None", value: "" }); - - const onChange = (value: string) => { - if (value) { apply(value); } - }; - - return ( - - ); - }} - -); - -export default TemplatesSelect; diff --git a/contrib/submit-simple-job/App/Templates/data.json b/contrib/submit-simple-job/App/Templates/data.json deleted file mode 100644 index 020f22d49..000000000 --- a/contrib/submit-simple-job/App/Templates/data.json +++ /dev/null @@ -1,40 +0,0 @@ -{ - "templates": [ - { - "Name": "Caffe training example", - "Json": "{\"jobName\" : \"caffe training example - resnet18\", \"resourcegpu\" : 1, \"workPath\" : \"samples\", \"image\" : \"bvlc/caffe:gpu\", \"cmd\" : \"caffe train -solver /work/caffe/solver_resnet18.prototxt\", \"interactivePort\" : \"\"}\r\n" - }, - { - "Name": "Caffe-iPython-SSH-CPU", - "Json": "{\"jobName\":\"caffe-ssh-ipython-cpu\",\"resourcegpu\":0,\"workPath\":\"./\",\"dataPath\":\"imagenet\",\"jobPath\":\"\",\"image\":\"bvlc/caffe:cpu\",\"cmd\":\"apt-get update && apt-get install -y python-pip openssh-server sudo && pip install jupyter && addgroup --force-badname --gid $$gid$$ domainusers && adduser --force-badname --home /home/$$username$$ --shell /bin/bash --uid $$uid$$ -gecos '' $$username$$ --gid $$gid$$ --disabled-password && usermod -p $(echo tryme2017 | openssl passwd -1 -stdin) $$username$$ && adduser $$username$$ sudo && 
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && ( mkdir -p /root/.ssh && cat /work/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && mkdir -p /home/$$username$$/.ssh && cat /work/.ssh/id_rsa.pub >> /home/$$username$$/.ssh/authorized_keys && cp /work/.ssh/id_rsa* /home/$$username$$/.ssh/ ; chown -R $$username$$ /home/$$username$$/ || /bin/true ) && service ssh restart && env | while read line; do if [[ $line != HOME=* ]] && [[ $line != INTERACTIVE* ]] ; then echo \\\"export $line\\\" >> /home/$$username$$/.bashrc; fi; done && export HOME=/job && runuser -l $$username$$ -c \\\"jupyter notebook --no-browser --port=8888 --ip=0.0.0.0 --notebook-dir=/\\\"\",\"is_interactive\":true,\"do_log\":false,\"interactivePort\":\"22, 8888\",\"runningasroot\":true,\"mountpoints\":[],\"env\":[],\"jobtrainingtype\":\"RegularJob\",\"enableworkpath\":true,\"enabledatapath\":true,\"enablejobpath\":true}" - }, - { - "Name": "Tensorflow training example", - "Json": "{\"jobName\" : \"Tensorflow training example - inception\", \"resourcegpu\" : 1, \"workPath\" : \"samples\", \"image\" : \"tensorflow/tensorflow:0.12.1-gpu\", \"cmd\" : \"/work/tensorflow/models/inception/bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/job/model --data_dir=/data/tensor\", \"interactivePort\" : \"\"}\r\n" - }, - { - "Name": "Tensorflow-IPython-GPU", - "Json": "{\"jobName\":\"tensorflow-ipython\",\"resourcegpu\":1,\"image\":\"tensorflow/tensorflow:latest-gpu\",\"cmd\":\"export HOME=/job && jupyter notebook --no-browser --port=8888 --ip=0.0.0.0 --notebook-dir=/\",\"is_interactive\":true,\"mountpoints\":[{\"containerPath\":\"/home/jinl\",\"hostPath\":\"/dlwsdata/work/jinl\",\"description\":\"NFS (remote file share)\",\"enabled\":true,\"$$hashKey\":\"object:102\"},{\"containerPath\":\"/glusterfs/public\",\"hostPath\":\"/dlwsdata/glusterfs/public\",\"description\":\"GlusterFS (replicated distributed storage)\",\"enabled\":true,\"$$hashKey\":\"object:74\"}],\"do_log\":false,\"runningasroot\":false,\"env\":[],\"PSDistJob\":false,\"jobtrainingtype\":\"RegularJob\",\"enableworkpath\":true,\"enabledatapath\":true,\"enablejobpath\":true,\"interactivePort\":\"8888\"}" - }, - { - "Name": "Test Template", - "Json": "{\"image\":\"ubuntu:16.04\"}" - }, - { - "Name": "tutorial-caffe2-cpu", - "Json": "{\"jobName\":\"caffe2-detectron-cpu\",\"resourcegpu\":0,\"workPath\":\"./\",\"dataPath\":\"imagenet\",\"jobPath\":\"\",\"image\":\"dlws/tutorial-caffe2-cpu:1.5\",\"cmd\":\"apt-get update && apt-get install -y openssh-server sudo && addgroup --force-badname --gid $$gid$$ domainusers && adduser --force-badname --home /home/$$username$$ --shell /bin/bash --uid $$uid$$ -gecos '' $$username$$ --gid $$gid$$ --disabled-password && usermod -p $(echo tryme2017 | openssl passwd -1 -stdin) $$username$$ && usermod -p $(echo tryme2017 | openssl passwd -1 -stdin) root && adduser $$username$$ sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && mkdir -p /root/.ssh && cat /work/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && service ssh restart && env | while read line; do if [[ $line != HOME=* ]] && [[ $line != INTERACTIVE* ]] ; then echo \\\"export $line\\\" >> /home/$$username$$/.bashrc; fi; done && export HOME=/job && jupyter notebook --no-browser --port=8888 --ip=0.0.0.0 --notebook-dir=/ --allow-root\",\"is_interactive\":true,\"do_log\":false,\"interactivePort\":\"22, 8888, 
6006\",\"runningasroot\":true,\"mountpoints\":[],\"env\":[],\"jobtrainingtype\":\"RegularJob\",\"enableworkpath\":true,\"enabledatapath\":true,\"enablejobpath\":true}" - }, - { - "Name": "Tutorial-pytorch", - "Json": "{\"jobName\":\"tutorial-pytorch\",\"resourcegpu\":1,\"workPath\":\"./\",\"dataPath\":\"imagenet\",\"jobPath\":\"\",\"image\":\"dlws/tutorial-pytorch:1.5\",\"cmd\":\"apt-get update && apt-get install -y openssh-server sudo && addgroup --force-badname --gid $$gid$$ domainusers && adduser --force-badname --home /home/$$username$$ --shell /bin/bash --uid $$uid$$ -gecos '' $$username$$ --gid $$gid$$ --disabled-password && usermod -p $(echo tryme2017 | openssl passwd -1 -stdin) $$username$$ && usermod -p $(echo tryme2017 | openssl passwd -1 -stdin) root && adduser $$username$$ sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && mkdir -p /root/.ssh && cat /work/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && ( chown -R $$username$$ /home/$$username$$/ || /bin/true ) && service ssh restart && env | while read line; do if [[ $line != HOME=* ]] && [[ $line != INTERACTIVE* ]] ; then echo \\\"export $line\\\" >> /home/$$username$$/.bashrc; fi; done && export HOME=/job && export LD_LIBRARY_PATH=/usr/local/nvidia/lib64/ ; runuser -p -l $$username$$ -c \\\"source /home/$$username$$/.bashrc && export PATH=/opt/conda/envs/pytorch-py35/bin/:$PATH; jupyter notebook --no-browser --port=8888 --ip=0.0.0.0 --notebook-dir=/\\\"\",\"is_interactive\":true,\"do_log\":false,\"interactivePort\":\"22, 8888, 6006\",\"runningasroot\":true,\"mountpoints\":[],\"env\":[],\"jobtrainingtype\":\"RegularJob\",\"enableworkpath\":true,\"enabledatapath\":true,\"enablejobpath\":true,\"isPrivileged\":true,\"dnsPolicy\":\"Default\",\"hostIPC\":true}" - }, - { - "Name": "tutorial-tensorflow", - "Json": "{\"jobName\":\"tutorial-tensorflow\",\"resourcegpu\":1,\"workPath\":\"./\",\"dataPath\":\"imagenet\",\"jobPath\":\"\",\"image\":\"dlws/tutorial-tensorflow:1.5\",\"cmd\":\"apt-get update && apt-get install -y openssh-server sudo && addgroup --force-badname --gid $$gid$$ domainusers && adduser --force-badname --home /home/$$username$$ --shell /bin/bash --uid $$uid$$ -gecos '' $$username$$ --gid $$gid$$ --disabled-password && usermod -p $(echo tryme2017 | openssl passwd -1 -stdin) $$username$$ && adduser $$username$$ sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && ( mkdir -p /root/.ssh && cat /work/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && mkdir -p /home/$$username$$/.ssh && cat /work/.ssh/id_rsa.pub >> /home/$$username$$/.ssh/authorized_keys && cp /work/.ssh/id_rsa* /home/$$username$$/.ssh/ ; chown -R $$username$$ /home/$$username$$/ || /bin/true ) && service ssh restart && env | while read line; do if [[ $line != HOME=* ]] && [[ $line != INTERACTIVE* ]] ; then echo \\\"export $line\\\" >> /home/$$username$$/.bashrc; fi; done && export HOME=/job && runuser -l $$username$$ -c \\\"source /home/$$username$$/.bashrc && jupyter notebook --no-browser --port=8888 --ip=0.0.0.0 --notebook-dir=/\\\"\",\"is_interactive\":true,\"do_log\":false,\"interactivePort\":\"22, 8888, 6006\",\"runningasroot\":true,\"mountpoints\":[],\"env\":[{\"name\":\"LD_LIBRARY_PATH\",\"value\":\"/usr/local/nvidia/lib64/\",\"$$hashKey\":\"object:134\"}],\"jobtrainingtype\":\"RegularJob\",\"enableworkpath\":true,\"enabledatapath\":true,\"enablejobpath\":true}" - }, - { - "Name": "tutorial-tensorflow-cpu", - "Json": 
"{\"jobName\":\"tutorial-tensorflow-cpu\",\"resourcegpu\":0,\"workPath\":\"./\",\"dataPath\":\"imagenet\",\"jobPath\":\"\",\"image\":\"dlws/tutorial-tensorflow-cpu:1.5\",\"cmd\":\"apt-get update && apt-get install -y openssh-server sudo && addgroup --force-badname --gid $$gid$$ domainusers && adduser --force-badname --home /home/$$username$$ --shell /bin/bash --uid $$uid$$ -gecos '' $$username$$ --gid $$gid$$ --disabled-password && usermod -p $(echo tryme2017 | openssl passwd -1 -stdin) $$username$$ && adduser $$username$$ sudo && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers && ( mkdir -p /root/.ssh && cat /work/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys && mkdir -p /home/$$username$$/.ssh && cat /work/.ssh/id_rsa.pub >> /home/$$username$$/.ssh/authorized_keys && cp /work/.ssh/id_rsa* /home/$$username$$/.ssh/ ; chown -R $$username$$ /home/$$username$$/ || /bin/true ) && service ssh restart && env | while read line; do if [[ $line != HOME=* ]] && [[ $line != INTERACTIVE* ]] ; then echo \\\"export $line\\\" >> /home/$$username$$/.bashrc; fi; done && export HOME=/job && runuser -l $$username$$ -c \\\"jupyter notebook --no-browser --port=8888 --ip=0.0.0.0 --notebook-dir=/\\\"\",\"is_interactive\":true,\"do_log\":false,\"interactivePort\":\"22, 8888, 6006\",\"runningasroot\":true,\"mountpoints\":[],\"env\":[{\"name\":\"LD_LIBRARY_PATH\",\"value\":\"/usr/local/nvidia/lib64/\",\"$$hashKey\":\"object:134\"}],\"jobtrainingtype\":\"RegularJob\",\"enableworkpath\":true,\"enabledatapath\":true,\"enablejobpath\":true}" - } - ] -} diff --git a/contrib/submit-simple-job/App/convert.ts b/contrib/submit-simple-job/App/convert.ts deleted file mode 100644 index c99a5db46..000000000 --- a/contrib/submit-simple-job/App/convert.ts +++ /dev/null @@ -1,172 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. 
- */ -import Hashids from "hashids"; - -import SimpleJob from "./SimpleJob"; - -import { authFile, nfs } from "../config"; - -function convertCommand(simpleJob: SimpleJob, user: string, hyperParameterValue?: number) { - const depends: { [name: string]: boolean } = {}; - - const exportCommands = simpleJob.environmentVariables.map( - ({ name, value }) => `export ${name}=${value}`); - if (hyperParameterValue !== undefined) { - exportCommands.push(`export ${simpleJob.hyperParameterName}=${hyperParameterValue}`); - } - - const mountCommands = []; - if (nfs) { - if (simpleJob.enableWorkMount) { - mountCommands.push("mkdir --parents /work", - `mount -t nfs4 ${nfs}/${user}/${simpleJob.workPath} /work`); - depends["nfs-common"] = true; - } - - if (simpleJob.enableDataMount) { - mountCommands.push("mkdir --parents /data", - `mount -t nfs4 ${nfs}/${user}/${simpleJob.dataPath} /data`); - depends["nfs-common"] = true; - } - - if (simpleJob.enableJobMount) { - mountCommands.push("mkdir --parents /job", - `mount -t nfs4 ${nfs}/${user}/${simpleJob.jobPath} /job`); - depends["nfs-common"] = true; - } - } - - const hashids = new Hashids(user, 4, "0123456789ABCDEF"); - const uid = String(parseInt(hashids.encode(0), 16) + 10000); - const gid = uid; - const command = simpleJob.command.replace(/\$\$(username|uid|gid)\$\$/g, (_, key) => { - if (key === "username") { return user; } - if (key === "uid") { return uid; } - if (key === "gid") { return gid; } - throw Error("Bad replacement"); - }).split("\n").join(" && "); - - const userCommands = []; - if (simpleJob.root) { - userCommands.push(command); - } else { - // Create a user with group. - userCommands.push( - `groupadd --gid ${gid} ${user}`, - `useradd --gid ${gid} --groups sudo --uid ${uid} ${user}`, - "echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers", - `echo '${command.replace(/'/g, "'\\''")}' | su --preserve-environment ${user}`, - ); - depends.sudo = true; - } - - const dependencies = Object.keys(depends); - const installCommands = []; - if (dependencies.length > 0) { - installCommands.push( - "apt-get update", - `apt-get install --assume-yes ${dependencies.join(" ")}`, - ); - } - - const commands = installCommands - .concat(mountCommands) - .concat(exportCommands) - .concat(userCommands); - - return commands.join(" && "); -} - -function convertTaskRole(name: string, simpleJob: SimpleJob, user: string, hyperParameterValue?: number) { - const taskRole: any = { - command: convertCommand(simpleJob, user, hyperParameterValue), - gpuNumber: simpleJob.gpus, - name, - }; - - if (simpleJob.isPrivileged) { - taskRole.cpuNumber = simpleJob.cpus; - taskRole.memoryMB = simpleJob.memory; - } - - if (simpleJob.isInteractive) { - const portList = []; - const ports = simpleJob.interactivePorts.split(/[;,]/); - for (const portString of ports) { - const port = Number(portString); - if (isNaN(port)) { continue; } - portList.push({ - beginAt: port, - label: `port_${port}`, - portNumber: 1, - }); - } - taskRole.portList = portList; - } - - return taskRole; -} - -export default function convert(simpleJob: SimpleJob, user: string) { - const job: any = { - image: simpleJob.image, - jobName: simpleJob.name, - virtualCluster: simpleJob.virtualCluster, - }; - - if (authFile !== undefined) { - job.authFile = authFile; - } - - const taskRoles = []; - - if (simpleJob.hyperParameterName === "") { - taskRoles.push(convertTaskRole("master", simpleJob, user)); - } else { - for ( - let hyperParameterValue = simpleJob.hyperParameterStartValue; - hyperParameterValue < 
simpleJob.hyperParameterEndValue; - hyperParameterValue += simpleJob.hyperParameterStep - ) { - const taskName = `hyper_parameter_${hyperParameterValue}`; - const taskRole = convertTaskRole(taskName, simpleJob, user, hyperParameterValue); - taskRoles.push(taskRole); - } - } - - if (simpleJob.enableTensorboard) { - const tensorboardSimpleJob = new SimpleJob(simpleJob); - tensorboardSimpleJob.gpus = 0; - tensorboardSimpleJob.command = `tensorboard --logdir ${simpleJob.tensorboardModelPath} --host 0.0.0.0`; - tensorboardSimpleJob.isInteractive = true; - tensorboardSimpleJob.interactivePorts = "6006"; - - taskRoles.push(convertTaskRole("tensorboard", tensorboardSimpleJob, user)); - } - - job.taskRoles = taskRoles; - - return job; -} diff --git a/contrib/submit-simple-job/App/index.tsx b/contrib/submit-simple-job/App/index.tsx deleted file mode 100644 index fa4bbc672..000000000 --- a/contrib/submit-simple-job/App/index.tsx +++ /dev/null @@ -1,114 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -import * as React from "react"; - -import SimpleJob, { ISimpleJob } from "./SimpleJob"; -import SimpleJobContext from "./SimpleJob/Context"; -import SimpleJobForm from "./SimpleJob/Form"; - -import TemplatesContext from "./Templates/Context"; -import { templates } from "./Templates/data.json"; - -import convert from "./convert"; - -const AppContent: React.FunctionComponent = ({ children }) => ( -
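- {/* AppContent simply renders its children inside the plugin's content container */}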
- {children} -
-); - -interface IAppProps { - api: string; - user: string; - token: string; -} - -interface IAppState { - simpleJob: SimpleJob; -} - -export default class App extends React.Component { - constructor(props: IAppProps) { - super(props); - - this.state = { - simpleJob: new SimpleJob(), - }; - } - - public render() { - const { simpleJob } = this.state; - const { setSimpleJob, applyLegacyJSON } = this; - return ( - - - - - - - - ); - } - - private setSimpleJob = < - F extends keyof ISimpleJob, - V extends ISimpleJob[F], - >(field: F) => (value: V) => { - this.setState(({ - simpleJob, - }) => ({ - simpleJob: simpleJob.clone(field, value), - })); - } - - private applyLegacyJSON = (json: string) => { - this.setState({ simpleJob: SimpleJob.fromLegacyJSON(json) }); - } - - private submitSimpleJob = (simpleJob: SimpleJob) => { - const { api, user, token } = this.props; - const job = convert(simpleJob as SimpleJob, user); - - window.fetch(`${api}/api/v1/user/${user}/jobs`, { - body: JSON.stringify(job), - headers: { - "Authorization": `Bearer ${token}`, - "Content-Type": "application/json", - }, - method: "POST", - }).then((response) => { - if (response.status >= 400) { - return response.json().then((body) => { - throw Error(body.message); - }); - } else { - window.location.href = `/view.html?username=${user}&jobName=${job.jobName}`; - return Promise.resolve(); - } - }).catch((error) => { - window.alert(error.message); - }); - } -} diff --git a/contrib/submit-simple-job/LICENSE b/contrib/submit-simple-job/LICENSE deleted file mode 100644 index a8ddf64a9..000000000 --- a/contrib/submit-simple-job/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) Microsoft Corporation -All rights reserved. - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/contrib/submit-simple-job/README.md b/contrib/submit-simple-job/README.md deleted file mode 100644 index de514acb4..000000000 --- a/contrib/submit-simple-job/README.md +++ /dev/null @@ -1,127 +0,0 @@ -# Submit Simple Job # - -## Background ## - -"Submit Simple Job" is released to be a PAI web portal plugin with a UI similar to [DLWorkspace](https://github.com/Microsoft/DLWorkspace) that makes DLWorkspace users familiar with OpenPAI quickly. 
It includes the following features: - -- Caffe / Tensorflow / Pytorch job templates -- Multiline command support -- One-click tensorboard support -- Customize the running user (root / current user) -- NFS mount (extra configuration needed) -- HyperParameter training - -## User's Guide ## - -### Entrance ### - -The plugin can be accessed via the link in the "plugins" section of the sidebar menu, with a title customized by the system administrator. If you are not sure which link is the plugin, ask your system administrator. - -### Usage ### - -#### Main Section #### - -![Main Section](docs/main-section.png) - -- **Job Template**: a shortcut for customizing the job. - - ![Job Template](docs/job-template.png) - -- **Job Name**: the name of the job, as in PAI. -- **Job Type**: work in progress; currently only regular jobs are supported. -- **Number of GPUs**: how many GPUs are used in *each* task. -- **Docker Image**: the base Docker image of the job; a private docker registry is supported only if the system administrator has configured the authentication file. -- **Command**: the command to run the job; it runs as root if "Launch container as root user" below is checked. -- **Virtual Cluster**: which virtual cluster the job will run on. -- **Interactive Job**: check this if the job should expose network ports; the specific ports can then be customized. ![Interactive Job](docs/interactive-job.png) -- **Enable Tensorboard**: check this to run a tensorboard task; a model path can be customized. ![Tensorboard](docs/tensorboard.png) -- **Launch container as root user**: if checked, the *Command* runs as the root user; otherwise it runs as the current user. - -#### Advanced Section #### - -- **Mount Directories**: mount NFS directories into the job container. This option is displayed only if the NFS option of the plugin is fully configured. ![Mount Directories](docs/mount-directories.png) -- **HyperParameter Training**: enable hyperparameter training. ![Hyper Parameter](docs/hyper-parameter.png) -- **Environment Variables**: customize environment variables. ![Environment Variables](docs/environment-variables.png) -- **Privileged Docker**: customize the CPU and memory requirements of the job; other options are under development. ![Privileged Docker](docs/privileged-docker.png) - -#### Database Operation #### - -![Database Operation](docs/database-operation.png) - -- Download JSON: export the current form to a JSON file. - Upload JSON: import a JSON file exported from this plugin or from DLWorkspace into the form. - -## System Administrator's Guide ## - -### Build ### - - npm install - npm run build - -The build file is located at `./dist/plugin.js`. - -### Deploy ### - -Deploy the build file to any server accessible by web portal users. Write down the public URL of the file for configuration. - -### Install ### - -Configure your `service-configuration.yaml`: add or update the following fields in the `webportal` section. - -```YAML -webportal: - # ... other configs - plugins: - - title: Submit Simple Job - uri: "[plugin public url]?nfs=[NFS host]:[NFS root]&auth-file=hdfs:[hdfs uri]" -``` - -### Configure ### - -According to the YAML config in the [Install section](#install), two config fields are available, in query-string syntax appended to the plugin file URL **(don't forget to URL-encode the values)**: - -- `nfs` the NFS host and root directory, in `[host]:[root]` format, for example `nfs=10.0.0.1%3A%2Fusers`.
-- `auth-file` the docker registry authorization file path in HDFS, in `hdfs:[path]` format, for example `auth-file=hdfs%3A%2F%2F10.0.0.1%3A8020%2Fauth.txt`. - -## Developer's Guide ## - -### Contribute ### - -Start the local web portal server with .env settings: - - WEBPORTAL_PLUGINS=[{"title":"Submit Simple Job", "uri": "/scripts/plugins/submit-simple-job.js"}] - -And then run the builder within the plugin directory. - - npm install - npm run watch - -## License ## - - MIT License - - Copyright (c) Microsoft Corporation. All rights reserved. - - Permission is hereby granted, free of charge, to any person obtaining a copy - of this software and associated documentation files (the "Software"), to deal - in the Software without restriction, including without limitation the rights - to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - copies of the Software, and to permit persons to whom the Software is - furnished to do so, subject to the following conditions: - - The above copyright notice and this permission notice shall be included in all - copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - SOFTWARE diff --git a/contrib/submit-simple-job/config.ts b/contrib/submit-simple-job/config.ts deleted file mode 100644 index 1e0d03589..000000000 --- a/contrib/submit-simple-job/config.ts +++ /dev/null @@ -1,41 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. - * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. 
- */ -import * as qs from "qs"; - -const query = ((script) => { - if (script === null) { return {}; } - - const src = script.getAttribute("src"); - if (src === null) { return {}; } - - const search = src.slice(src.indexOf("?") + 1); - return qs.parse(search); -})(document.currentScript) as { - nfs?: string, - "auth-file"?: string, -}; - -export const nfs = query.nfs; -export const authFile = query["auth-file"]; diff --git a/contrib/submit-simple-job/docs/database-operation.png b/contrib/submit-simple-job/docs/database-operation.png deleted file mode 100644 index 4b852de78..000000000 Binary files a/contrib/submit-simple-job/docs/database-operation.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/environment-variables.png b/contrib/submit-simple-job/docs/environment-variables.png deleted file mode 100644 index c1988b14c..000000000 Binary files a/contrib/submit-simple-job/docs/environment-variables.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/hyper-parameter.png b/contrib/submit-simple-job/docs/hyper-parameter.png deleted file mode 100644 index 10788754e..000000000 Binary files a/contrib/submit-simple-job/docs/hyper-parameter.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/interactive-job.png b/contrib/submit-simple-job/docs/interactive-job.png deleted file mode 100644 index c4ee9c4a4..000000000 Binary files a/contrib/submit-simple-job/docs/interactive-job.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/job-template.png b/contrib/submit-simple-job/docs/job-template.png deleted file mode 100644 index e86a01fca..000000000 Binary files a/contrib/submit-simple-job/docs/job-template.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/main-section.png b/contrib/submit-simple-job/docs/main-section.png deleted file mode 100644 index 37c5542e3..000000000 Binary files a/contrib/submit-simple-job/docs/main-section.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/mount-directories.png b/contrib/submit-simple-job/docs/mount-directories.png deleted file mode 100644 index 41ad4f41d..000000000 Binary files a/contrib/submit-simple-job/docs/mount-directories.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/privileged-docker.png b/contrib/submit-simple-job/docs/privileged-docker.png deleted file mode 100644 index 8a003966e..000000000 Binary files a/contrib/submit-simple-job/docs/privileged-docker.png and /dev/null differ diff --git a/contrib/submit-simple-job/docs/tensorboard.png b/contrib/submit-simple-job/docs/tensorboard.png deleted file mode 100644 index ef09178ca..000000000 Binary files a/contrib/submit-simple-job/docs/tensorboard.png and /dev/null differ diff --git a/contrib/submit-simple-job/index.ts b/contrib/submit-simple-job/index.ts deleted file mode 100644 index 43f03c812..000000000 --- a/contrib/submit-simple-job/index.ts +++ /dev/null @@ -1,49 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. 
- * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -import * as React from "react"; -import * as ReactDOM from "react-dom"; - -import "whatwg-fetch"; - -import App from "./App"; - -class PAIPluginElement extends HTMLElement { - public connectedCallback() { - const api = this.getAttribute("pai-rest-server-uri") as string; - const user = this.getAttribute("pai-user"); - const token = this.getAttribute("pai-rest-server-token"); - if (user === null || token === null) { - window.location.href = "/login.html"; - return; - } - ReactDOM.render(React.createElement(App, { api, user, token }), this); - } - - public disconnectedCallback() { - ReactDOM.unmountComponentAtNode(this); - } -} - -window.customElements.define("pai-plugin", PAIPluginElement); diff --git a/contrib/submit-simple-job/package-lock.json b/contrib/submit-simple-job/package-lock.json deleted file mode 100644 index aa11bb09b..000000000 --- a/contrib/submit-simple-job/package-lock.json +++ /dev/null @@ -1,4259 +0,0 @@ -{ - "name": "submit-simple-job", - "version": "1.0.0", - "lockfileVersion": 1, - "requires": true, - "dependencies": { - "@types/anymatch": { - "version": "1.3.1", - "resolved": "https://registry.npmjs.org/@types/anymatch/-/anymatch-1.3.1.tgz", - "integrity": "sha512-/+CRPXpBDpo2RK9C68N3b2cOvO0Cf5B9aPijHsoDQTHivnGSObdOF2BRQOYjojWTDy6nQvMjmqRXIxH55VjxxA==", - "dev": true - }, - "@types/classnames": { - "version": "2.2.7", - "resolved": "https://registry.npmjs.org/@types/classnames/-/classnames-2.2.7.tgz", - "integrity": "sha512-rzOhiQ55WzAiFgXRtitP/ZUT8iVNyllEpylJ5zHzR4vArUvMB39GTk+Zon/uAM0JxEFAWnwsxC2gH8s+tZ3Myg==", - "dev": true - }, - "@types/hashids": { - "version": "1.0.30", - "resolved": "https://registry.npmjs.org/@types/hashids/-/hashids-1.0.30.tgz", - "integrity": "sha512-ESXJxz/GRY+SGa3n0WuWZLR0xM3k8LTr4+2xOoj/a0ZThglF/UfEeKVaMQNNFvFQ+mw3ng8LbC3gmJx/GmR5EQ==", - "dev": true, - "requires": { - "@types/node": "*" - } - }, - "@types/node": { - "version": "11.9.3", - "resolved": "https://registry.npmjs.org/@types/node/-/node-11.9.3.tgz", - "integrity": "sha512-DMiqG51GwES/c4ScBY0u5bDlH44+oY8AeYHjY1SGCWidD7h08o1dfHue/TGK7REmif2KiJzaUskO+Q0eaeZ2fQ==", - "dev": true - }, - "@types/prop-types": { - "version": "15.5.8", - "resolved": "https://registry.npmjs.org/@types/prop-types/-/prop-types-15.5.8.tgz", - "integrity": "sha512-3AQoUxQcQtLHsK25wtTWIoIpgYjH3vSDroZOUr7PpCHw/jLY1RB9z9E8dBT/OSmwStVgkRNvdh+ZHNiomRieaw==", - "dev": true - }, - "@types/qs": { - "version": "6.5.1", 
- "resolved": "https://registry.npmjs.org/@types/qs/-/qs-6.5.1.tgz", - "integrity": "sha512-mNhVdZHdtKHMMxbqzNK3RzkBcN1cux3AvuCYGTvjEIQT2uheH3eCAyYsbMbh2Bq8nXkeOWs1kyDiF7geWRFQ4Q==", - "dev": true - }, - "@types/react": { - "version": "16.7.20", - "resolved": "https://registry.npmjs.org/@types/react/-/react-16.7.20.tgz", - "integrity": "sha512-Qd5RWkwl6SL7R2XzLk/cicjVQm1Mhc6HqXY5Ei4pWd1Vi8Fkbd5O0sA398x8fRSTPAuHdDYD9nrWmJMYTJI0vQ==", - "dev": true, - "requires": { - "@types/prop-types": "*", - "csstype": "^2.2.0" - } - }, - "@types/react-dom": { - "version": "16.0.11", - "resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-16.0.11.tgz", - "integrity": "sha512-x6zUx9/42B5Kl2Vl9HlopV8JF64wLpX3c+Pst9kc1HgzrsH+mkehe/zmHMQTplIrR48H2gpU7ZqurQolYu8XBA==", - "dev": true, - "requires": { - "@types/react": "*" - } - }, - "@types/tapable": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/@types/tapable/-/tapable-1.0.4.tgz", - "integrity": "sha512-78AdXtlhpCHT0K3EytMpn4JNxaf5tbqbLcbIRoQIHzpTIyjpxLQKRoxU55ujBXAtg3Nl2h/XWvfDa9dsMOd0pQ==", - "dev": true - }, - "@types/uglify-js": { - "version": "3.0.4", - "resolved": "https://registry.npmjs.org/@types/uglify-js/-/uglify-js-3.0.4.tgz", - "integrity": "sha512-SudIN9TRJ+v8g5pTG8RRCqfqTMNqgWCKKd3vtynhGzkIIjxaicNAMuY5TRadJ6tzDu3Dotf3ngaMILtmOdmWEQ==", - "dev": true, - "requires": { - "source-map": "^0.6.1" - }, - "dependencies": { - "source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true - } - } - }, - "@types/webpack": { - "version": "4.4.24", - "resolved": "https://registry.npmjs.org/@types/webpack/-/webpack-4.4.24.tgz", - "integrity": "sha512-yg99CjvB7xZ/iuHrsZ7dkGKoq/FRDzqLzAxKh2EmTem6FWjzrty4FqCqBYuX5z+MFwSaaQGDAX4Q9HQkLjGLnQ==", - "dev": true, - "requires": { - "@types/anymatch": "*", - "@types/node": "*", - "@types/tapable": "*", - "@types/uglify-js": "*", - "source-map": "^0.6.0" - }, - "dependencies": { - "source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true - } - } - }, - "@types/whatwg-fetch": { - "version": "0.0.33", - "resolved": "https://registry.npmjs.org/@types/whatwg-fetch/-/whatwg-fetch-0.0.33.tgz", - "integrity": "sha1-GcDShjyMsjgPIaHHNrecv3iVuxM=", - "dev": true, - "requires": { - "@types/whatwg-streams": "*" - } - }, - "@types/whatwg-streams": { - "version": "0.0.7", - "resolved": "https://registry.npmjs.org/@types/whatwg-streams/-/whatwg-streams-0.0.7.tgz", - "integrity": "sha512-6sDiSEP6DWcY2ZolsJ2s39ZmsoGQ7KVwBDI3sESQsEm9P2dHTcqnDIHRZFRNtLCzWp7hCFGqYbw5GyfpQnJ01A==", - "dev": true - }, - "@webassemblyjs/ast": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.7.11.tgz", - "integrity": "sha512-ZEzy4vjvTzScC+SH8RBssQUawpaInUdMTYwYYLh54/s8TuT0gBLuyUnppKsVyZEi876VmmStKsUs28UxPgdvrA==", - "dev": true, - "requires": { - "@webassemblyjs/helper-module-context": "1.7.11", - "@webassemblyjs/helper-wasm-bytecode": "1.7.11", - "@webassemblyjs/wast-parser": "1.7.11" - } - }, - "@webassemblyjs/floating-point-hex-parser": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.7.11.tgz", - "integrity": 
"sha512-zY8dSNyYcgzNRNT666/zOoAyImshm3ycKdoLsyDw/Bwo6+/uktb7p4xyApuef1dwEBo/U/SYQzbGBvV+nru2Xg==", - "dev": true - }, - "@webassemblyjs/helper-api-error": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.7.11.tgz", - "integrity": "sha512-7r1qXLmiglC+wPNkGuXCvkmalyEstKVwcueZRP2GNC2PAvxbLYwLLPr14rcdJaE4UtHxQKfFkuDFuv91ipqvXg==", - "dev": true - }, - "@webassemblyjs/helper-buffer": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.7.11.tgz", - "integrity": "sha512-MynuervdylPPh3ix+mKZloTcL06P8tenNH3sx6s0qE8SLR6DdwnfgA7Hc9NSYeob2jrW5Vql6GVlsQzKQCa13w==", - "dev": true - }, - "@webassemblyjs/helper-code-frame": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-code-frame/-/helper-code-frame-1.7.11.tgz", - "integrity": "sha512-T8ESC9KMXFTXA5urJcyor5cn6qWeZ4/zLPyWeEXZ03hj/x9weSokGNkVCdnhSabKGYWxElSdgJ+sFa9G/RdHNw==", - "dev": true, - "requires": { - "@webassemblyjs/wast-printer": "1.7.11" - } - }, - "@webassemblyjs/helper-fsm": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-fsm/-/helper-fsm-1.7.11.tgz", - "integrity": "sha512-nsAQWNP1+8Z6tkzdYlXT0kxfa2Z1tRTARd8wYnc/e3Zv3VydVVnaeePgqUzFrpkGUyhUUxOl5ML7f1NuT+gC0A==", - "dev": true - }, - "@webassemblyjs/helper-module-context": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-module-context/-/helper-module-context-1.7.11.tgz", - "integrity": "sha512-JxfD5DX8Ygq4PvXDucq0M+sbUFA7BJAv/GGl9ITovqE+idGX+J3QSzJYz+LwQmL7fC3Rs+utvWoJxDb6pmC0qg==", - "dev": true - }, - "@webassemblyjs/helper-wasm-bytecode": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.7.11.tgz", - "integrity": "sha512-cMXeVS9rhoXsI9LLL4tJxBgVD/KMOKXuFqYb5oCJ/opScWpkCMEz9EJtkonaNcnLv2R3K5jIeS4TRj/drde1JQ==", - "dev": true - }, - "@webassemblyjs/helper-wasm-section": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.7.11.tgz", - "integrity": "sha512-8ZRY5iZbZdtNFE5UFunB8mmBEAbSI3guwbrsCl4fWdfRiAcvqQpeqd5KHhSWLL5wuxo53zcaGZDBU64qgn4I4Q==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/helper-buffer": "1.7.11", - "@webassemblyjs/helper-wasm-bytecode": "1.7.11", - "@webassemblyjs/wasm-gen": "1.7.11" - } - }, - "@webassemblyjs/ieee754": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.7.11.tgz", - "integrity": "sha512-Mmqx/cS68K1tSrvRLtaV/Lp3NZWzXtOHUW2IvDvl2sihAwJh4ACE0eL6A8FvMyDG9abes3saB6dMimLOs+HMoQ==", - "dev": true, - "requires": { - "@xtuc/ieee754": "^1.2.0" - } - }, - "@webassemblyjs/leb128": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.7.11.tgz", - "integrity": "sha512-vuGmgZjjp3zjcerQg+JA+tGOncOnJLWVkt8Aze5eWQLwTQGNgVLcyOTqgSCxWTR4J42ijHbBxnuRaL1Rv7XMdw==", - "dev": true, - "requires": { - "@xtuc/long": "4.2.1" - } - }, - "@webassemblyjs/utf8": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.7.11.tgz", - "integrity": "sha512-C6GFkc7aErQIAH+BMrIdVSmW+6HSe20wg57HEC1uqJP8E/xpMjXqQUxkQw07MhNDSDcGpxI9G5JSNOQCqJk4sA==", - "dev": true - }, - "@webassemblyjs/wasm-edit": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.7.11.tgz", - "integrity": 
"sha512-FUd97guNGsCZQgeTPKdgxJhBXkUbMTY6hFPf2Y4OedXd48H97J+sOY2Ltaq6WGVpIH8o/TGOVNiVz/SbpEMJGg==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/helper-buffer": "1.7.11", - "@webassemblyjs/helper-wasm-bytecode": "1.7.11", - "@webassemblyjs/helper-wasm-section": "1.7.11", - "@webassemblyjs/wasm-gen": "1.7.11", - "@webassemblyjs/wasm-opt": "1.7.11", - "@webassemblyjs/wasm-parser": "1.7.11", - "@webassemblyjs/wast-printer": "1.7.11" - } - }, - "@webassemblyjs/wasm-gen": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-gen/-/wasm-gen-1.7.11.tgz", - "integrity": "sha512-U/KDYp7fgAZX5KPfq4NOupK/BmhDc5Kjy2GIqstMhvvdJRcER/kUsMThpWeRP8BMn4LXaKhSTggIJPOeYHwISA==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/helper-wasm-bytecode": "1.7.11", - "@webassemblyjs/ieee754": "1.7.11", - "@webassemblyjs/leb128": "1.7.11", - "@webassemblyjs/utf8": "1.7.11" - } - }, - "@webassemblyjs/wasm-opt": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.7.11.tgz", - "integrity": "sha512-XynkOwQyiRidh0GLua7SkeHvAPXQV/RxsUeERILmAInZegApOUAIJfRuPYe2F7RcjOC9tW3Cb9juPvAC/sCqvg==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/helper-buffer": "1.7.11", - "@webassemblyjs/wasm-gen": "1.7.11", - "@webassemblyjs/wasm-parser": "1.7.11" - } - }, - "@webassemblyjs/wasm-parser": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.7.11.tgz", - "integrity": "sha512-6lmXRTrrZjYD8Ng8xRyvyXQJYUQKYSXhJqXOBLw24rdiXsHAOlvw5PhesjdcaMadU/pyPQOJ5dHreMjBxwnQKg==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/helper-api-error": "1.7.11", - "@webassemblyjs/helper-wasm-bytecode": "1.7.11", - "@webassemblyjs/ieee754": "1.7.11", - "@webassemblyjs/leb128": "1.7.11", - "@webassemblyjs/utf8": "1.7.11" - } - }, - "@webassemblyjs/wast-parser": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/wast-parser/-/wast-parser-1.7.11.tgz", - "integrity": "sha512-lEyVCg2np15tS+dm7+JJTNhNWq9yTZvi3qEhAIIOaofcYlUp0UR5/tVqOwa/gXYr3gjwSZqw+/lS9dscyLelbQ==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/floating-point-hex-parser": "1.7.11", - "@webassemblyjs/helper-api-error": "1.7.11", - "@webassemblyjs/helper-code-frame": "1.7.11", - "@webassemblyjs/helper-fsm": "1.7.11", - "@xtuc/long": "4.2.1" - } - }, - "@webassemblyjs/wast-printer": { - "version": "1.7.11", - "resolved": "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.7.11.tgz", - "integrity": "sha512-m5vkAsuJ32QpkdkDOUPGSltrg8Cuk3KBx4YrmAGQwCZPRdUHXxG4phIOuuycLemHFr74sWL9Wthqss4fzdzSwg==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/wast-parser": "1.7.11", - "@xtuc/long": "4.2.1" - } - }, - "@xtuc/ieee754": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/@xtuc/ieee754/-/ieee754-1.2.0.tgz", - "integrity": "sha512-DX8nKgqcGwsc0eJSqYt5lwP4DH5FlHnmuWWBRy7X0NcaGR0ZtuyeESgMwTYVEtxmsNGY+qit4QYT/MIYTOTPeA==", - "dev": true - }, - "@xtuc/long": { - "version": "4.2.1", - "resolved": "https://registry.npmjs.org/@xtuc/long/-/long-4.2.1.tgz", - "integrity": "sha512-FZdkNBDqBRHKQ2MEbSC17xnPFOhZxeJ2YGSfr2BKf3sujG49Qe3bB+rGCwQfIaA7WHnGeGkSijX4FuBCdrzW/g==", - "dev": true - }, - "abbrev": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/abbrev/-/abbrev-1.1.1.tgz", - 
"integrity": "sha512-nne9/IiQ/hzIhY6pdDnbBtz7DjPTKrY00P/zvPSm5pOFkl6xuGrGnXn/VtTNNfNtAfZ9/1RtehkszU9qcTii0Q==", - "dev": true, - "optional": true - }, - "acorn": { - "version": "5.7.4", - "resolved": "https://registry.npmjs.org/acorn/-/acorn-5.7.4.tgz", - "integrity": "sha512-1D++VG7BhrtvQpNbBzovKNc1FLGGEE/oGe7b9xJm/RFHMBeUaUGpluV9RLjZa47YFdPcDAenEYuq9pQPcMdLJg==", - "dev": true - }, - "acorn-dynamic-import": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/acorn-dynamic-import/-/acorn-dynamic-import-3.0.0.tgz", - "integrity": "sha512-zVWV8Z8lislJoOKKqdNMOB+s6+XV5WERty8MnKBeFgwA+19XJjJHs2RP5dzM57FftIs+jQnRToLiWazKr6sSWg==", - "dev": true, - "requires": { - "acorn": "^5.0.0" - } - }, - "ajv": { - "version": "6.7.0", - "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.7.0.tgz", - "integrity": "sha512-RZXPviBTtfmtka9n9sy1N5M5b82CbxWIR6HIis4s3WQTXDJamc/0gpCWNGz6EWdWp4DOfjzJfhz/AS9zVPjjWg==", - "dev": true, - "requires": { - "fast-deep-equal": "^2.0.1", - "fast-json-stable-stringify": "^2.0.0", - "json-schema-traverse": "^0.4.1", - "uri-js": "^4.2.2" - } - }, - "ajv-errors": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/ajv-errors/-/ajv-errors-1.0.1.tgz", - "integrity": "sha512-DCRfO/4nQ+89p/RK43i8Ezd41EqdGIU4ld7nGF8OQ14oc/we5rEntLCUa7+jrn3nn83BosfwZA0wb4pon2o8iQ==", - "dev": true - }, - "ajv-keywords": { - "version": "3.2.0", - "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.2.0.tgz", - "integrity": "sha1-6GuBnGAs+IIa1jdBNpjx3sAhhHo=", - "dev": true - }, - "ansi-regex": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-2.1.1.tgz", - "integrity": "sha1-w7M6te42DYbg5ijwRorn7yfWVN8=", - "dev": true - }, - "ansi-styles": { - "version": "3.2.1", - "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", - "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", - "dev": true, - "requires": { - "color-convert": "^1.9.0" - } - }, - "anymatch": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-2.0.0.tgz", - "integrity": "sha512-5teOsQWABXHHBFP9y3skS5P3d/WfWXpv3FUpy+LorMrNYaT9pI4oLMQX7jzQ2KklNpGpWHzdCXTDT2Y3XGlZBw==", - "dev": true, - "requires": { - "micromatch": "^3.1.4", - "normalize-path": "^2.1.1" - } - }, - "aproba": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/aproba/-/aproba-1.2.0.tgz", - "integrity": "sha512-Y9J6ZjXtoYh8RnXVCMOU/ttDmk1aBjunq9vO0ta5x85WDQiQfUF9sIPBITdbiiIVcBo03Hi3jMxigBtsddlXRw==", - "dev": true - }, - "are-we-there-yet": { - "version": "1.1.5", - "resolved": "https://registry.npmjs.org/are-we-there-yet/-/are-we-there-yet-1.1.5.tgz", - "integrity": "sha512-5hYdAkZlcG8tOLujVDTgCT+uPX0VnpAH28gWsLfzpXYm7wP6mp5Q/gYyR7YQ0cKVJcXJnl3j2kpBan13PtQf6w==", - "dev": true, - "optional": true, - "requires": { - "delegates": "^1.0.0", - "readable-stream": "^2.0.6" - } - }, - "argparse": { - "version": "1.0.10", - "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", - "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", - "dev": true, - "requires": { - "sprintf-js": "~1.0.2" - } - }, - "arr-diff": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/arr-diff/-/arr-diff-4.0.0.tgz", - "integrity": "sha1-1kYQdP6/7HHn4VI1dhoyml3HxSA=", - "dev": true - }, - "arr-flatten": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/arr-flatten/-/arr-flatten-1.1.0.tgz", - 
"integrity": "sha512-L3hKV5R/p5o81R7O02IGnwpDmkp6E982XhtbuwSe3O4qOtMMMtodicASA1Cny2U+aCXcNpml+m4dPsvsJ3jatg==", - "dev": true - }, - "arr-union": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/arr-union/-/arr-union-3.1.0.tgz", - "integrity": "sha1-45sJrqne+Gao8gbiiK9jkZuuOcQ=", - "dev": true - }, - "array-unique": { - "version": "0.3.2", - "resolved": "https://registry.npmjs.org/array-unique/-/array-unique-0.3.2.tgz", - "integrity": "sha1-qJS3XUvE9s1nnvMkSp/Y9Gri1Cg=", - "dev": true - }, - "asn1.js": { - "version": "4.10.1", - "resolved": "https://registry.npmjs.org/asn1.js/-/asn1.js-4.10.1.tgz", - "integrity": "sha512-p32cOF5q0Zqs9uBiONKYLm6BClCoBCM5O9JfeUSlnQLBTxYdTK+pW+nXflm8UkKd2UYlEbYz5qEi0JuZR9ckSw==", - "dev": true, - "requires": { - "bn.js": "^4.0.0", - "inherits": "^2.0.1", - "minimalistic-assert": "^1.0.0" - } - }, - "assert": { - "version": "1.4.1", - "resolved": "https://registry.npmjs.org/assert/-/assert-1.4.1.tgz", - "integrity": "sha1-mZEtWRg2tab1s0XA8H7vwI/GXZE=", - "dev": true, - "requires": { - "util": "0.10.3" - }, - "dependencies": { - "inherits": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.1.tgz", - "integrity": "sha1-sX0I0ya0Qj5Wjv9xn5GwscvfafE=", - "dev": true - }, - "util": { - "version": "0.10.3", - "resolved": "https://registry.npmjs.org/util/-/util-0.10.3.tgz", - "integrity": "sha1-evsa/lCAUkZInj23/g7TeTNqwPk=", - "dev": true, - "requires": { - "inherits": "2.0.1" - } - } - } - }, - "assign-symbols": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/assign-symbols/-/assign-symbols-1.0.0.tgz", - "integrity": "sha1-WWZ/QfrdTyDMvCu5a41Pf3jsA2c=", - "dev": true - }, - "async-each": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/async-each/-/async-each-1.0.1.tgz", - "integrity": "sha1-GdOGodntxufByF04iu28xW0zYC0=", - "dev": true - }, - "atob": { - "version": "2.1.2", - "resolved": "https://registry.npmjs.org/atob/-/atob-2.1.2.tgz", - "integrity": "sha512-Wm6ukoaOGJi/73p/cl2GvLjTI5JM1k/O14isD73YML8StrH/7/lRFgmg8nICZgD3bZZvjwCGxtMOD3wWNAu8cg==", - "dev": true - }, - "babel-code-frame": { - "version": "6.26.0", - "resolved": "https://registry.npmjs.org/babel-code-frame/-/babel-code-frame-6.26.0.tgz", - "integrity": "sha1-Y/1D99weO7fONZR9uP42mj9Yx0s=", - "dev": true, - "requires": { - "chalk": "^1.1.3", - "esutils": "^2.0.2", - "js-tokens": "^3.0.2" - }, - "dependencies": { - "ansi-styles": { - "version": "2.2.1", - "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-2.2.1.tgz", - "integrity": "sha1-tDLdM1i2NM914eRmQ2gkBTPB3b4=", - "dev": true - }, - "chalk": { - "version": "1.1.3", - "resolved": "https://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz", - "integrity": "sha1-qBFcVeSnAv5NFQq9OHKCKn4J/Jg=", - "dev": true, - "requires": { - "ansi-styles": "^2.2.1", - "escape-string-regexp": "^1.0.2", - "has-ansi": "^2.0.0", - "strip-ansi": "^3.0.0", - "supports-color": "^2.0.0" - } - }, - "supports-color": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz", - "integrity": "sha1-U10EXOa2Nj+kARcIRimZXp3zJMc=", - "dev": true - } - } - }, - "balanced-match": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.0.tgz", - "integrity": "sha1-ibTRmasr7kneFk6gK4nORi1xt2c=", - "dev": true - }, - "base": { - "version": "0.11.2", - "resolved": "https://registry.npmjs.org/base/-/base-0.11.2.tgz", - "integrity": 
"sha512-5T6P4xPgpp0YDFvSWwEZ4NoE3aM4QBQXDzmVbraCkFj8zHM+mba8SyqB5DbZWyR7mYHo6Y7BdQo3MoA4m0TeQg==", - "dev": true, - "requires": { - "cache-base": "^1.0.1", - "class-utils": "^0.3.5", - "component-emitter": "^1.2.1", - "define-property": "^1.0.0", - "isobject": "^3.0.1", - "mixin-deep": "^1.2.0", - "pascalcase": "^0.1.1" - }, - "dependencies": { - "define-property": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-1.0.0.tgz", - "integrity": "sha1-dp66rz9KY6rTr56NMEybvnm/sOY=", - "dev": true, - "requires": { - "is-descriptor": "^1.0.0" - } - }, - "is-accessor-descriptor": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", - "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-data-descriptor": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", - "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-descriptor": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", - "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", - "dev": true, - "requires": { - "is-accessor-descriptor": "^1.0.0", - "is-data-descriptor": "^1.0.0", - "kind-of": "^6.0.2" - } - } - } - }, - "base64-js": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.3.0.tgz", - "integrity": "sha512-ccav/yGvoa80BQDljCxsmmQ3Xvx60/UpBIij5QN21W3wBi/hhIC9OoO+KLpu9IJTS9j4DRVJ3aDDF9cMSoa2lw==", - "dev": true - }, - "big.js": { - "version": "5.2.2", - "resolved": "https://registry.npmjs.org/big.js/-/big.js-5.2.2.tgz", - "integrity": "sha512-vyL2OymJxmarO8gxMr0mhChsO9QGwhynfuu4+MHTAW6czfq9humCB7rKpUjDd9YUiDPU4mzpyupFSvOClAwbmQ==", - "dev": true - }, - "binary-extensions": { - "version": "1.12.0", - "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-1.12.0.tgz", - "integrity": "sha512-DYWGk01lDcxeS/K9IHPGWfT8PsJmbXRtRd2Sx72Tnb8pcYZQFF1oSDb8hJtS1vhp212q1Rzi5dUf9+nq0o9UIg==", - "dev": true - }, - "biskviit": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/biskviit/-/biskviit-1.0.1.tgz", - "integrity": "sha1-A3oM1LcbnjMf2QoRIt4X3EnkIKc=", - "requires": { - "psl": "^1.1.7" - } - }, - "bluebird": { - "version": "3.5.3", - "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.5.3.tgz", - "integrity": "sha512-/qKPUQlaW1OyR51WeCPBvRnAlnZFUJkCSG5HzGnuIqhgyJtF+T94lFnn33eiazjRm2LAHVy2guNnaq48X9SJuw==", - "dev": true - }, - "bn.js": { - "version": "4.11.8", - "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.8.tgz", - "integrity": "sha512-ItfYfPLkWHUjckQCk8xC+LwxgK8NYcXywGigJgSwOP8Y2iyWT4f2vsZnoOXTTbo+o5yXmIUJ4gn5538SO5S3gA==", - "dev": true - }, - "brace-expansion": { - "version": "1.1.11", - "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz", - "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==", - "dev": true, - "requires": { - "balanced-match": "^1.0.0", - "concat-map": "0.0.1" - } - }, - "braces": { - "version": "2.3.2", - "resolved": "https://registry.npmjs.org/braces/-/braces-2.3.2.tgz", - "integrity": 
"sha512-aNdbnj9P8PjdXU4ybaWLK2IF3jc/EoDYbC7AazW6to3TRsfXxscC9UXOB5iDiEQrkyIbWp2SLQda4+QAa7nc3w==", - "dev": true, - "requires": { - "arr-flatten": "^1.1.0", - "array-unique": "^0.3.2", - "extend-shallow": "^2.0.1", - "fill-range": "^4.0.0", - "isobject": "^3.0.1", - "repeat-element": "^1.1.2", - "snapdragon": "^0.8.1", - "snapdragon-node": "^2.0.1", - "split-string": "^3.0.2", - "to-regex": "^3.0.1" - }, - "dependencies": { - "extend-shallow": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", - "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", - "dev": true, - "requires": { - "is-extendable": "^0.1.0" - } - } - } - }, - "brorand": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/brorand/-/brorand-1.1.0.tgz", - "integrity": "sha1-EsJe/kCkXjwyPrhnWgoM5XsiNx8=", - "dev": true - }, - "browserify-aes": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/browserify-aes/-/browserify-aes-1.2.0.tgz", - "integrity": "sha512-+7CHXqGuspUn/Sl5aO7Ea0xWGAtETPXNSAjHo48JfLdPWcMng33Xe4znFvQweqc/uzk5zSOI3H52CYnjCfb5hA==", - "dev": true, - "requires": { - "buffer-xor": "^1.0.3", - "cipher-base": "^1.0.0", - "create-hash": "^1.1.0", - "evp_bytestokey": "^1.0.3", - "inherits": "^2.0.1", - "safe-buffer": "^5.0.1" - } - }, - "browserify-cipher": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/browserify-cipher/-/browserify-cipher-1.0.1.tgz", - "integrity": "sha512-sPhkz0ARKbf4rRQt2hTpAHqn47X3llLkUGn+xEJzLjwY8LRs2p0v7ljvI5EyoRO/mexrNunNECisZs+gw2zz1w==", - "dev": true, - "requires": { - "browserify-aes": "^1.0.4", - "browserify-des": "^1.0.0", - "evp_bytestokey": "^1.0.0" - } - }, - "browserify-des": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/browserify-des/-/browserify-des-1.0.2.tgz", - "integrity": "sha512-BioO1xf3hFwz4kc6iBhI3ieDFompMhrMlnDFC4/0/vd5MokpuAc3R+LYbwTA9A5Yc9pq9UYPqffKpW2ObuwX5A==", - "dev": true, - "requires": { - "cipher-base": "^1.0.1", - "des.js": "^1.0.0", - "inherits": "^2.0.1", - "safe-buffer": "^5.1.2" - } - }, - "browserify-rsa": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/browserify-rsa/-/browserify-rsa-4.0.1.tgz", - "integrity": "sha1-IeCr+vbyApzy+vsTNWenAdQTVSQ=", - "dev": true, - "requires": { - "bn.js": "^4.1.0", - "randombytes": "^2.0.1" - } - }, - "browserify-sign": { - "version": "4.0.4", - "resolved": "https://registry.npmjs.org/browserify-sign/-/browserify-sign-4.0.4.tgz", - "integrity": "sha1-qk62jl17ZYuqa/alfmMMvXqT0pg=", - "dev": true, - "requires": { - "bn.js": "^4.1.1", - "browserify-rsa": "^4.0.0", - "create-hash": "^1.1.0", - "create-hmac": "^1.1.2", - "elliptic": "^6.0.0", - "inherits": "^2.0.1", - "parse-asn1": "^5.0.0" - } - }, - "browserify-zlib": { - "version": "0.2.0", - "resolved": "https://registry.npmjs.org/browserify-zlib/-/browserify-zlib-0.2.0.tgz", - "integrity": "sha512-Z942RysHXmJrhqk88FmKBVq/v5tqmSkDz7p54G/MGyjMnCFFnC79XWNbg+Vta8W6Wb2qtSZTSxIGkJrRpCFEiA==", - "dev": true, - "requires": { - "pako": "~1.0.5" - } - }, - "buffer": { - "version": "4.9.1", - "resolved": "https://registry.npmjs.org/buffer/-/buffer-4.9.1.tgz", - "integrity": "sha1-bRu2AbB6TvztlwlBMgkwJ8lbwpg=", - "dev": true, - "requires": { - "base64-js": "^1.0.2", - "ieee754": "^1.1.4", - "isarray": "^1.0.0" - } - }, - "buffer-from": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.1.tgz", - "integrity": 
"sha512-MQcXEUbCKtEo7bhqEs6560Hyd4XaovZlO/k9V3hjVUF/zwW7KBVdSK4gIt/bzwS9MbR5qob+F5jusZsb0YQK2A==", - "dev": true - }, - "buffer-xor": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/buffer-xor/-/buffer-xor-1.0.3.tgz", - "integrity": "sha1-JuYe0UIvtw3ULm42cp7VHYVf6Nk=", - "dev": true - }, - "builtin-modules": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/builtin-modules/-/builtin-modules-1.1.1.tgz", - "integrity": "sha1-Jw8HbFpywC9bZaR9+Uxf46J4iS8=", - "dev": true - }, - "builtin-status-codes": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/builtin-status-codes/-/builtin-status-codes-3.0.0.tgz", - "integrity": "sha1-hZgoeOIbmOHGZCXgPQF0eI9Wnug=", - "dev": true - }, - "cacache": { - "version": "11.3.2", - "resolved": "https://registry.npmjs.org/cacache/-/cacache-11.3.2.tgz", - "integrity": "sha512-E0zP4EPGDOaT2chM08Als91eYnf8Z+eH1awwwVsngUmgppfM5jjJ8l3z5vO5p5w/I3LsiXawb1sW0VY65pQABg==", - "dev": true, - "requires": { - "bluebird": "^3.5.3", - "chownr": "^1.1.1", - "figgy-pudding": "^3.5.1", - "glob": "^7.1.3", - "graceful-fs": "^4.1.15", - "lru-cache": "^5.1.1", - "mississippi": "^3.0.0", - "mkdirp": "^0.5.1", - "move-concurrently": "^1.0.1", - "promise-inflight": "^1.0.1", - "rimraf": "^2.6.2", - "ssri": "^6.0.1", - "unique-filename": "^1.1.1", - "y18n": "^4.0.0" - } - }, - "cache-base": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/cache-base/-/cache-base-1.0.1.tgz", - "integrity": "sha512-AKcdTnFSWATd5/GCPRxr2ChwIJ85CeyrEyjRHlKxQ56d4XJMGym0uAiKn0xbLOGOl3+yRpOTi484dVCEc5AUzQ==", - "dev": true, - "requires": { - "collection-visit": "^1.0.0", - "component-emitter": "^1.2.1", - "get-value": "^2.0.6", - "has-value": "^1.0.0", - "isobject": "^3.0.1", - "set-value": "^2.0.0", - "to-object-path": "^0.3.0", - "union-value": "^1.0.0", - "unset-value": "^1.0.0" - } - }, - "camelcase": { - "version": "5.0.0", - "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.0.0.tgz", - "integrity": "sha512-faqwZqnWxbxn+F1d399ygeamQNy3lPp/H9H6rNrqYh4FSVCtcY+3cub1MxA8o9mDd55mM8Aghuu/kuyYA6VTsA==", - "dev": true - }, - "chalk": { - "version": "2.4.2", - "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", - "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", - "dev": true, - "requires": { - "ansi-styles": "^3.2.1", - "escape-string-regexp": "^1.0.5", - "supports-color": "^5.3.0" - } - }, - "chokidar": { - "version": "2.0.4", - "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-2.0.4.tgz", - "integrity": "sha512-z9n7yt9rOvIJrMhvDtDictKrkFHeihkNl6uWMmZlmL6tJtX9Cs+87oK+teBx+JIgzvbX3yZHT3eF8vpbDxHJXQ==", - "dev": true, - "requires": { - "anymatch": "^2.0.0", - "async-each": "^1.0.0", - "braces": "^2.3.0", - "fsevents": "^1.2.2", - "glob-parent": "^3.1.0", - "inherits": "^2.0.1", - "is-binary-path": "^1.0.0", - "is-glob": "^4.0.0", - "lodash.debounce": "^4.0.8", - "normalize-path": "^2.1.1", - "path-is-absolute": "^1.0.0", - "readdirp": "^2.0.0", - "upath": "^1.0.5" - } - }, - "chownr": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/chownr/-/chownr-1.1.1.tgz", - "integrity": "sha512-j38EvO5+LHX84jlo6h4UzmOwi0UgW61WRyPtJz4qaadK5eY3BTS5TY/S1Stc3Uk2lIM6TPevAlULiEJwie860g==", - "dev": true - }, - "chrome-trace-event": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.0.tgz", - "integrity": 
"sha512-xDbVgyfDTT2piup/h8dK/y4QZfJRSa73bw1WZ8b4XM1o7fsFubUVGYcE+1ANtOzJJELGpYoG2961z0Z6OAld9A==", - "dev": true, - "requires": { - "tslib": "^1.9.0" - } - }, - "cipher-base": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/cipher-base/-/cipher-base-1.0.4.tgz", - "integrity": "sha512-Kkht5ye6ZGmwv40uUDZztayT2ThLQGfnj/T71N/XzeZeo3nf8foyW7zGTsPYkEya3m5f3cAypH+qe7YOrM1U2Q==", - "dev": true, - "requires": { - "inherits": "^2.0.1", - "safe-buffer": "^5.0.1" - } - }, - "class-utils": { - "version": "0.3.6", - "resolved": "https://registry.npmjs.org/class-utils/-/class-utils-0.3.6.tgz", - "integrity": "sha512-qOhPa/Fj7s6TY8H8esGu5QNpMMQxz79h+urzrNYN6mn+9BnxlDGf5QZ+XeCDsxSjPqsSR56XOZOJmpeurnLMeg==", - "dev": true, - "requires": { - "arr-union": "^3.1.0", - "define-property": "^0.2.5", - "isobject": "^3.0.0", - "static-extend": "^0.1.1" - }, - "dependencies": { - "define-property": { - "version": "0.2.5", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", - "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", - "dev": true, - "requires": { - "is-descriptor": "^0.1.0" - } - } - } - }, - "classnames": { - "version": "2.2.6", - "resolved": "https://registry.npmjs.org/classnames/-/classnames-2.2.6.tgz", - "integrity": "sha512-JR/iSQOSt+LQIWwrwEzJ9uk0xfN3mTVYMwt1Ir5mUcSN6pU+V4zQFFaJsclJbPuAUQH+yfWef6tm7l1quW3C8Q==" - }, - "cliui": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/cliui/-/cliui-4.1.0.tgz", - "integrity": "sha512-4FG+RSG9DL7uEwRUZXZn3SS34DiDPfzP0VOiEwtUWlE+AR2EIg+hSyvrIgUUfhdgR/UkAeW2QHgeP+hWrXs7jQ==", - "dev": true, - "requires": { - "string-width": "^2.1.1", - "strip-ansi": "^4.0.0", - "wrap-ansi": "^2.0.0" - }, - "dependencies": { - "ansi-regex": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz", - "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg=", - "dev": true - }, - "is-fullwidth-code-point": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-2.0.0.tgz", - "integrity": "sha1-o7MKXE8ZkYMWeqq5O+764937ZU8=", - "dev": true - }, - "string-width": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz", - "integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVaTjAqvVwdfeZ7w7aCvJD7ugkw==", - "dev": true, - "requires": { - "is-fullwidth-code-point": "^2.0.0", - "strip-ansi": "^4.0.0" - } - }, - "strip-ansi": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz", - "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=", - "dev": true, - "requires": { - "ansi-regex": "^3.0.0" - } - } - } - }, - "code-point-at": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/code-point-at/-/code-point-at-1.1.0.tgz", - "integrity": "sha1-DQcLTQQ6W+ozovGkDi7bPZpMz3c=", - "dev": true - }, - "collection-visit": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/collection-visit/-/collection-visit-1.0.0.tgz", - "integrity": "sha1-S8A3PBZLwykbTTaMgpzxqApZ3KA=", - "dev": true, - "requires": { - "map-visit": "^1.0.0", - "object-visit": "^1.0.0" - } - }, - "color-convert": { - "version": "1.9.3", - "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", - "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", - "dev": true, - "requires": { - "color-name": "1.1.3" - } - }, - "color-name": { - "version": 
"1.1.3", - "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", - "integrity": "sha1-p9BVi9icQveV3UIyj3QIMcpTvCU=", - "dev": true - }, - "commander": { - "version": "2.19.0", - "resolved": "https://registry.npmjs.org/commander/-/commander-2.19.0.tgz", - "integrity": "sha512-6tvAOO+D6OENvRAh524Dh9jcfKTYDQAqvqezbCW82xj5X0pSrcpxtvRKHLG0yBY6SD7PSDrJaj+0AiOcKVd1Xg==", - "dev": true - }, - "commondir": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/commondir/-/commondir-1.0.1.tgz", - "integrity": "sha1-3dgA2gxmEnOTzKWVDqloo6rxJTs=", - "dev": true - }, - "component-emitter": { - "version": "1.2.1", - "resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.2.1.tgz", - "integrity": "sha1-E3kY1teCg/ffemt8WmPhQOaUJeY=", - "dev": true - }, - "concat-map": { - "version": "0.0.1", - "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz", - "integrity": "sha1-2Klr13/Wjfd5OnMDajug1UBdR3s=", - "dev": true - }, - "concat-stream": { - "version": "1.6.2", - "resolved": "https://registry.npmjs.org/concat-stream/-/concat-stream-1.6.2.tgz", - "integrity": "sha512-27HBghJxjiZtIk3Ycvn/4kbJk/1uZuJFfuPEns6LaEvpvG1f0hTea8lilrouyo9mVc2GWdcEZ8OLoGmSADlrCw==", - "dev": true, - "requires": { - "buffer-from": "^1.0.0", - "inherits": "^2.0.3", - "readable-stream": "^2.2.2", - "typedarray": "^0.0.6" - } - }, - "console-browserify": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/console-browserify/-/console-browserify-1.1.0.tgz", - "integrity": "sha1-8CQcRXMKn8YyOyBtvzjtx0HQuxA=", - "dev": true, - "requires": { - "date-now": "^0.1.4" - } - }, - "console-control-strings": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/console-control-strings/-/console-control-strings-1.1.0.tgz", - "integrity": "sha1-PXz0Rk22RG6mRL9LOVB/mFEAjo4=", - "dev": true, - "optional": true - }, - "constants-browserify": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/constants-browserify/-/constants-browserify-1.0.0.tgz", - "integrity": "sha1-wguW2MYXdIqvHBYCF2DNJ/y4y3U=", - "dev": true - }, - "copy-concurrently": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/copy-concurrently/-/copy-concurrently-1.0.5.tgz", - "integrity": "sha512-f2domd9fsVDFtaFcbaRZuYXwtdmnzqbADSwhSWYxYB/Q8zsdUUFMXVRwXGDMWmbEzAn1kdRrtI1T/KTFOL4X2A==", - "dev": true, - "requires": { - "aproba": "^1.1.1", - "fs-write-stream-atomic": "^1.0.8", - "iferr": "^0.1.5", - "mkdirp": "^0.5.1", - "rimraf": "^2.5.4", - "run-queue": "^1.0.0" - } - }, - "copy-descriptor": { - "version": "0.1.1", - "resolved": "https://registry.npmjs.org/copy-descriptor/-/copy-descriptor-0.1.1.tgz", - "integrity": "sha1-Z29us8OZl8LuGsOpJP1hJHSPV40=", - "dev": true - }, - "core-util-is": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.2.tgz", - "integrity": "sha1-tf1UIgqivFq1eqtxQMlAdUUDwac=", - "dev": true - }, - "create-ecdh": { - "version": "4.0.3", - "resolved": "https://registry.npmjs.org/create-ecdh/-/create-ecdh-4.0.3.tgz", - "integrity": "sha512-GbEHQPMOswGpKXM9kCWVrremUcBmjteUaQ01T9rkKCPDXfUHX0IoP9LpHYo2NPFampa4e+/pFDc3jQdxrxQLaw==", - "dev": true, - "requires": { - "bn.js": "^4.1.0", - "elliptic": "^6.0.0" - } - }, - "create-hash": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/create-hash/-/create-hash-1.2.0.tgz", - "integrity": "sha512-z00bCGNHDG8mHAkP7CtT1qVu+bFQUPjYq/4Iv3C3kWjTFV10zIjfSoeqXo9Asws8gwSHDGj/hl2u4OGIjapeCg==", - "dev": true, - "requires": { - "cipher-base": 
"^1.0.1", - "inherits": "^2.0.1", - "md5.js": "^1.3.4", - "ripemd160": "^2.0.1", - "sha.js": "^2.4.0" - } - }, - "create-hmac": { - "version": "1.1.7", - "resolved": "https://registry.npmjs.org/create-hmac/-/create-hmac-1.1.7.tgz", - "integrity": "sha512-MJG9liiZ+ogc4TzUwuvbER1JRdgvUFSB5+VR/g5h82fGaIRWMWddtKBHi7/sVhfjQZ6SehlyhvQYrcYkaUIpLg==", - "dev": true, - "requires": { - "cipher-base": "^1.0.3", - "create-hash": "^1.1.0", - "inherits": "^2.0.1", - "ripemd160": "^2.0.0", - "safe-buffer": "^5.0.1", - "sha.js": "^2.4.8" - } - }, - "cross-spawn": { - "version": "6.0.5", - "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.5.tgz", - "integrity": "sha512-eTVLrBSt7fjbDygz805pMnstIs2VTBNkRm0qxZd+M7A5XDdxVRWO5MxGBXZhjY4cqLYLdtrGqRf8mBPmzwSpWQ==", - "dev": true, - "requires": { - "nice-try": "^1.0.4", - "path-key": "^2.0.1", - "semver": "^5.5.0", - "shebang-command": "^1.2.0", - "which": "^1.2.9" - } - }, - "crypto-browserify": { - "version": "3.12.0", - "resolved": "https://registry.npmjs.org/crypto-browserify/-/crypto-browserify-3.12.0.tgz", - "integrity": "sha512-fz4spIh+znjO2VjL+IdhEpRJ3YN6sMzITSBijk6FK2UvTqruSQW+/cCZTSNsMiZNvUeq0CqurF+dAbyiGOY6Wg==", - "dev": true, - "requires": { - "browserify-cipher": "^1.0.0", - "browserify-sign": "^4.0.0", - "create-ecdh": "^4.0.0", - "create-hash": "^1.1.0", - "create-hmac": "^1.1.0", - "diffie-hellman": "^5.0.0", - "inherits": "^2.0.1", - "pbkdf2": "^3.0.3", - "public-encrypt": "^4.0.0", - "randombytes": "^2.0.0", - "randomfill": "^1.0.3" - } - }, - "csstype": { - "version": "2.6.0", - "resolved": "https://registry.npmjs.org/csstype/-/csstype-2.6.0.tgz", - "integrity": "sha512-by8hi8BlLbowQq0qtkx54d9aN73R9oUW20HISpka5kmgsR9F7nnxgfsemuR2sdCKZh+CDNf5egW9UZMm4mgJRg==", - "dev": true - }, - "cyclist": { - "version": "0.2.2", - "resolved": "https://registry.npmjs.org/cyclist/-/cyclist-0.2.2.tgz", - "integrity": "sha1-GzN5LhHpFKL9bW7WRHRkRE5fpkA=", - "dev": true - }, - "date-now": { - "version": "0.1.4", - "resolved": "https://registry.npmjs.org/date-now/-/date-now-0.1.4.tgz", - "integrity": "sha1-6vQ5/U1ISK105cx9vvIAZyueNFs=", - "dev": true - }, - "debug": { - "version": "2.6.9", - "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz", - "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==", - "dev": true, - "requires": { - "ms": "2.0.0" - } - }, - "decamelize": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz", - "integrity": "sha1-9lNNFRSCabIDUue+4m9QH5oZEpA=", - "dev": true - }, - "decode-uri-component": { - "version": "0.2.0", - "resolved": "https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz", - "integrity": "sha1-6zkTMzRYd1y4TNGh+uBiEGu4dUU=", - "dev": true - }, - "deep-extend": { - "version": "0.6.0", - "resolved": "https://registry.npmjs.org/deep-extend/-/deep-extend-0.6.0.tgz", - "integrity": "sha512-LOHxIOaPYdHlJRtCQfDIVZtfw/ufM8+rVj649RIHzcm/vGwQRXFt6OPqIFWsm2XEMrNIEtWR64sY1LEKD2vAOA==", - "dev": true, - "optional": true - }, - "define-property": { - "version": "2.0.2", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-2.0.2.tgz", - "integrity": "sha512-jwK2UV4cnPpbcG7+VRARKTZPUWowwXA8bzH5NP6ud0oeAxyYPuGZUAC7hMugpCdz4BeSZl2Dl9k66CHJ/46ZYQ==", - "dev": true, - "requires": { - "is-descriptor": "^1.0.2", - "isobject": "^3.0.1" - }, - "dependencies": { - "is-accessor-descriptor": { - "version": "1.0.0", - "resolved": 
"https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", - "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-data-descriptor": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", - "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-descriptor": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", - "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", - "dev": true, - "requires": { - "is-accessor-descriptor": "^1.0.0", - "is-data-descriptor": "^1.0.0", - "kind-of": "^6.0.2" - } - } - } - }, - "delegates": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/delegates/-/delegates-1.0.0.tgz", - "integrity": "sha1-hMbhWbgZBP3KWaDvRM2HDTElD5o=", - "dev": true, - "optional": true - }, - "des.js": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/des.js/-/des.js-1.0.0.tgz", - "integrity": "sha1-wHTS4qpqipoH29YfmhXCzYPsjsw=", - "dev": true, - "requires": { - "inherits": "^2.0.1", - "minimalistic-assert": "^1.0.0" - } - }, - "detect-file": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/detect-file/-/detect-file-1.0.0.tgz", - "integrity": "sha1-8NZtA2cqglyxtzvbP+YjEMjlUrc=", - "dev": true - }, - "detect-libc": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-1.0.3.tgz", - "integrity": "sha1-+hN8S9aY7fVc1c0CrFWfkaTEups=", - "dev": true, - "optional": true - }, - "diff": { - "version": "3.5.0", - "resolved": "https://registry.npmjs.org/diff/-/diff-3.5.0.tgz", - "integrity": "sha512-A46qtFgd+g7pDZinpnwiRJtxbC1hpgf0uzP3iG89scHk0AUC7A1TGxf5OiiOUv/JMZR8GOt8hL900hV0bOy5xA==", - "dev": true - }, - "diffie-hellman": { - "version": "5.0.3", - "resolved": "https://registry.npmjs.org/diffie-hellman/-/diffie-hellman-5.0.3.tgz", - "integrity": "sha512-kqag/Nl+f3GwyK25fhUMYj81BUOrZ9IuJsjIcDE5icNM9FJHAVm3VcUDxdLPoQtTuUylWm6ZIknYJwwaPxsUzg==", - "dev": true, - "requires": { - "bn.js": "^4.1.0", - "miller-rabin": "^4.0.0", - "randombytes": "^2.0.0" - } - }, - "domain-browser": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/domain-browser/-/domain-browser-1.2.0.tgz", - "integrity": "sha512-jnjyiM6eRyZl2H+W8Q/zLMA481hzi0eszAaBUzIVnmYVDBbnLxVNnfu1HgEBvCbL+71FrxMl3E6lpKH7Ge3OXA==", - "dev": true - }, - "duplexify": { - "version": "3.6.1", - "resolved": "https://registry.npmjs.org/duplexify/-/duplexify-3.6.1.tgz", - "integrity": "sha512-vM58DwdnKmty+FSPzT14K9JXb90H+j5emaR4KYbr2KTIz00WHGbWOe5ghQTx233ZCLZtrGDALzKwcjEtSt35mA==", - "dev": true, - "requires": { - "end-of-stream": "^1.0.0", - "inherits": "^2.0.1", - "readable-stream": "^2.0.0", - "stream-shift": "^1.0.0" - } - }, - "elliptic": { - "version": "6.5.4", - "resolved": "https://registry.npmjs.org/elliptic/-/elliptic-6.5.4.tgz", - "integrity": "sha512-iLhC6ULemrljPZb+QutR5TQGB+pdW6KGD5RSegS+8sorOZT+rdQFbsQFJgvN3eRqNALqJer4oQ16YvJHlU8hzQ==", - "dev": true, - "requires": { - "bn.js": "^4.11.9", - "brorand": "^1.1.0", - "hash.js": "^1.0.0", - "hmac-drbg": "^1.0.1", - "inherits": "^2.0.4", - "minimalistic-assert": "^1.0.1", - "minimalistic-crypto-utils": "^1.0.1" - }, - "dependencies": { - "bn.js": { - 
"version": "4.12.0", - "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz", - "integrity": "sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA==", - "dev": true - }, - "inherits": { - "version": "2.0.4", - "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", - "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", - "dev": true - } - } - }, - "emojis-list": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/emojis-list/-/emojis-list-2.1.0.tgz", - "integrity": "sha1-TapNnbAPmBmIDHn6RXrlsJof04k=", - "dev": true - }, - "encoding": { - "version": "0.1.12", - "resolved": "https://registry.npmjs.org/encoding/-/encoding-0.1.12.tgz", - "integrity": "sha1-U4tm8+5izRq1HsMjgp0flIDHS+s=", - "requires": { - "iconv-lite": "~0.4.13" - } - }, - "end-of-stream": { - "version": "1.4.1", - "resolved": "https://registry.npmjs.org/end-of-stream/-/end-of-stream-1.4.1.tgz", - "integrity": "sha512-1MkrZNvWTKCaigbn+W15elq2BB/L22nqrSY5DKlo3X6+vclJm8Bb5djXJBmEX6fS3+zCh/F4VBK5Z2KxJt4s2Q==", - "dev": true, - "requires": { - "once": "^1.4.0" - } - }, - "enhanced-resolve": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-4.1.0.tgz", - "integrity": "sha512-F/7vkyTtyc/llOIn8oWclcB25KdRaiPBpZYDgJHgh/UHtpgT2p2eldQgtQnLtUvfMKPKxbRaQM/hHkvLHt1Vng==", - "dev": true, - "requires": { - "graceful-fs": "^4.1.2", - "memory-fs": "^0.4.0", - "tapable": "^1.0.0" - } - }, - "errno": { - "version": "0.1.7", - "resolved": "https://registry.npmjs.org/errno/-/errno-0.1.7.tgz", - "integrity": "sha512-MfrRBDWzIWifgq6tJj60gkAwtLNb6sQPlcFrSOflcP1aFmmruKQ2wRnze/8V6kgyz7H3FF8Npzv78mZ7XLLflg==", - "dev": true, - "requires": { - "prr": "~1.0.1" - } - }, - "escape-string-regexp": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", - "integrity": "sha1-G2HAViGQqN/2rjuyzwIAyhMLhtQ=", - "dev": true - }, - "eslint-scope": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-4.0.0.tgz", - "integrity": "sha512-1G6UTDi7Jc1ELFwnR58HV4fK9OQK4S6N985f166xqXxpjU6plxFISJa2Ba9KCQuFa8RCnj/lSFJbHo7UFDBnUA==", - "dev": true, - "requires": { - "esrecurse": "^4.1.0", - "estraverse": "^4.1.1" - } - }, - "esprima": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/esprima/-/esprima-4.0.1.tgz", - "integrity": "sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==", - "dev": true - }, - "esrecurse": { - "version": "4.2.1", - "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.2.1.tgz", - "integrity": "sha512-64RBB++fIOAXPw3P9cy89qfMlvZEXZkqqJkjqqXIvzP5ezRZjW+lPWjw35UX/3EhUPFYbg5ER4JYgDw4007/DQ==", - "dev": true, - "requires": { - "estraverse": "^4.1.0" - } - }, - "estraverse": { - "version": "4.2.0", - "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.2.0.tgz", - "integrity": "sha1-De4/7TH81GlhjOc0IJn8GvoL2xM=", - "dev": true - }, - "esutils": { - "version": "2.0.2", - "resolved": "https://registry.npmjs.org/esutils/-/esutils-2.0.2.tgz", - "integrity": "sha1-Cr9PHKpbyx96nYrMbepPqqBLrJs=", - "dev": true - }, - "events": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/events/-/events-1.1.1.tgz", - "integrity": "sha1-nr23Y1rQmccNzEwqH1AEKI6L2SQ=", - "dev": true - }, - "evp_bytestokey": { - "version": "1.0.3", - "resolved": 
"https://registry.npmjs.org/evp_bytestokey/-/evp_bytestokey-1.0.3.tgz", - "integrity": "sha512-/f2Go4TognH/KvCISP7OUsHn85hT9nUkxxA9BEWxFn+Oj9o8ZNLm/40hdlgSLyuOimsrTKLUMEorQexp/aPQeA==", - "dev": true, - "requires": { - "md5.js": "^1.3.4", - "safe-buffer": "^5.1.1" - } - }, - "execa": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/execa/-/execa-1.0.0.tgz", - "integrity": "sha512-adbxcyWV46qiHyvSp50TKt05tB4tK3HcmF7/nxfAdhnox83seTDbwnaqKO4sXRy7roHAIFqJP/Rw/AuEbX61LA==", - "dev": true, - "requires": { - "cross-spawn": "^6.0.0", - "get-stream": "^4.0.0", - "is-stream": "^1.1.0", - "npm-run-path": "^2.0.0", - "p-finally": "^1.0.0", - "signal-exit": "^3.0.0", - "strip-eof": "^1.0.0" - } - }, - "expand-brackets": { - "version": "2.1.4", - "resolved": "https://registry.npmjs.org/expand-brackets/-/expand-brackets-2.1.4.tgz", - "integrity": "sha1-t3c14xXOMPa27/D4OwQVGiJEliI=", - "dev": true, - "requires": { - "debug": "^2.3.3", - "define-property": "^0.2.5", - "extend-shallow": "^2.0.1", - "posix-character-classes": "^0.1.0", - "regex-not": "^1.0.0", - "snapdragon": "^0.8.1", - "to-regex": "^3.0.1" - }, - "dependencies": { - "define-property": { - "version": "0.2.5", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", - "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", - "dev": true, - "requires": { - "is-descriptor": "^0.1.0" - } - }, - "extend-shallow": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", - "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", - "dev": true, - "requires": { - "is-extendable": "^0.1.0" - } - } - } - }, - "expand-tilde": { - "version": "2.0.2", - "resolved": "https://registry.npmjs.org/expand-tilde/-/expand-tilde-2.0.2.tgz", - "integrity": "sha1-l+gBqgUt8CRU3kawK/YhZCzchQI=", - "dev": true, - "requires": { - "homedir-polyfill": "^1.0.1" - } - }, - "extend-shallow": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-3.0.2.tgz", - "integrity": "sha1-Jqcarwc7OfshJxcnRhMcJwQCjbg=", - "dev": true, - "requires": { - "assign-symbols": "^1.0.0", - "is-extendable": "^1.0.1" - }, - "dependencies": { - "is-extendable": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-1.0.1.tgz", - "integrity": "sha512-arnXMxT1hhoKo9k1LZdmlNyJdDDfy2v0fXjFlmok4+i8ul/6WlbVge9bhM74OpNPQPMGUToDtz+KXa1PneJxOA==", - "dev": true, - "requires": { - "is-plain-object": "^2.0.4" - } - } - } - }, - "extglob": { - "version": "2.0.4", - "resolved": "https://registry.npmjs.org/extglob/-/extglob-2.0.4.tgz", - "integrity": "sha512-Nmb6QXkELsuBr24CJSkilo6UHHgbekK5UiZgfE6UHD3Eb27YC6oD+bhcT+tJ6cl8dmsgdQxnWlcry8ksBIBLpw==", - "dev": true, - "requires": { - "array-unique": "^0.3.2", - "define-property": "^1.0.0", - "expand-brackets": "^2.1.4", - "extend-shallow": "^2.0.1", - "fragment-cache": "^0.2.1", - "regex-not": "^1.0.0", - "snapdragon": "^0.8.1", - "to-regex": "^3.0.1" - }, - "dependencies": { - "define-property": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-1.0.0.tgz", - "integrity": "sha1-dp66rz9KY6rTr56NMEybvnm/sOY=", - "dev": true, - "requires": { - "is-descriptor": "^1.0.0" - } - }, - "extend-shallow": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", - "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", - "dev": true, - "requires": { - "is-extendable": "^0.1.0" - } - }, - 
"is-accessor-descriptor": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", - "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-data-descriptor": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", - "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-descriptor": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", - "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", - "dev": true, - "requires": { - "is-accessor-descriptor": "^1.0.0", - "is-data-descriptor": "^1.0.0", - "kind-of": "^6.0.2" - } - } - } - }, - "fast-deep-equal": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-2.0.1.tgz", - "integrity": "sha1-ewUhjd+WZ79/Nwv3/bLLFf3Qqkk=", - "dev": true - }, - "fast-json-stable-stringify": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.0.0.tgz", - "integrity": "sha1-1RQsDK7msRifh9OnYREGT4bIu/I=", - "dev": true - }, - "fetch": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/fetch/-/fetch-1.1.0.tgz", - "integrity": "sha1-CoJ58Gvjf58Ou1Z1YKMKSA2lmi4=", - "requires": { - "biskviit": "1.0.1", - "encoding": "0.1.12" - } - }, - "figgy-pudding": { - "version": "3.5.1", - "resolved": "https://registry.npmjs.org/figgy-pudding/-/figgy-pudding-3.5.1.tgz", - "integrity": "sha512-vNKxJHTEKNThjfrdJwHc7brvM6eVevuO5nTj6ez8ZQ1qbXTvGthucRF7S4vf2cr71QVnT70V34v0S1DyQsti0w==", - "dev": true - }, - "fill-range": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-4.0.0.tgz", - "integrity": "sha1-1USBHUKPmOsGpj3EAtJAPDKMOPc=", - "dev": true, - "requires": { - "extend-shallow": "^2.0.1", - "is-number": "^3.0.0", - "repeat-string": "^1.6.1", - "to-regex-range": "^2.1.0" - }, - "dependencies": { - "extend-shallow": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", - "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", - "dev": true, - "requires": { - "is-extendable": "^0.1.0" - } - } - } - }, - "find-cache-dir": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-2.0.0.tgz", - "integrity": "sha512-LDUY6V1Xs5eFskUVYtIwatojt6+9xC9Chnlk/jYOOvn3FAFfSaWddxahDGyNHh0b2dMXa6YW2m0tk8TdVaXHlA==", - "dev": true, - "requires": { - "commondir": "^1.0.1", - "make-dir": "^1.0.0", - "pkg-dir": "^3.0.0" - } - }, - "find-up": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz", - "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==", - "dev": true, - "requires": { - "locate-path": "^3.0.0" - } - }, - "findup-sync": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/findup-sync/-/findup-sync-2.0.0.tgz", - "integrity": "sha1-kyaxSIwi0aYIhlCoaQGy2akKLLw=", - "dev": true, - "requires": { - "detect-file": "^1.0.0", - "is-glob": "^3.1.0", - "micromatch": "^3.0.4", - "resolve-dir": "^1.0.1" - }, - "dependencies": { - "is-glob": { - 
"version": "3.1.0", - "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-3.1.0.tgz", - "integrity": "sha1-e6WuJCF4BKxwcHuWkiVnSGzD6Eo=", - "dev": true, - "requires": { - "is-extglob": "^2.1.0" - } - } - } - }, - "flush-write-stream": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/flush-write-stream/-/flush-write-stream-1.0.3.tgz", - "integrity": "sha512-calZMC10u0FMUqoiunI2AiGIIUtUIvifNwkHhNupZH4cbNnW1Itkoh/Nf5HFYmDrwWPjrUxpkZT0KhuCq0jmGw==", - "dev": true, - "requires": { - "inherits": "^2.0.1", - "readable-stream": "^2.0.4" - } - }, - "for-in": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/for-in/-/for-in-1.0.2.tgz", - "integrity": "sha1-gQaNKVqBQuwKxybG4iAMMPttXoA=", - "dev": true - }, - "fragment-cache": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/fragment-cache/-/fragment-cache-0.2.1.tgz", - "integrity": "sha1-QpD60n8T6Jvn8zeZxrxaCr//DRk=", - "dev": true, - "requires": { - "map-cache": "^0.2.2" - } - }, - "from2": { - "version": "2.3.0", - "resolved": "https://registry.npmjs.org/from2/-/from2-2.3.0.tgz", - "integrity": "sha1-i/tVAr3kpNNs/e6gB/zKIdfjgq8=", - "dev": true, - "requires": { - "inherits": "^2.0.1", - "readable-stream": "^2.0.0" - } - }, - "fs-minipass": { - "version": "1.2.5", - "resolved": "https://registry.npmjs.org/fs-minipass/-/fs-minipass-1.2.5.tgz", - "integrity": "sha512-JhBl0skXjUPCFH7x6x61gQxrKyXsxB5gcgePLZCwfyCGGsTISMoIeObbrvVeP6Xmyaudw4TT43qV2Gz+iyd2oQ==", - "dev": true, - "optional": true, - "requires": { - "minipass": "^2.2.1" - } - }, - "fs-write-stream-atomic": { - "version": "1.0.10", - "resolved": "https://registry.npmjs.org/fs-write-stream-atomic/-/fs-write-stream-atomic-1.0.10.tgz", - "integrity": "sha1-tH31NJPvkR33VzHnCp3tAYnbQMk=", - "dev": true, - "requires": { - "graceful-fs": "^4.1.2", - "iferr": "^0.1.5", - "imurmurhash": "^0.1.4", - "readable-stream": "1 || 2" - } - }, - "fs.realpath": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz", - "integrity": "sha1-FQStJSMVjKpA20onh8sBQRmU6k8=", - "dev": true - }, - "fsevents": { - "version": "1.2.6", - "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-1.2.6.tgz", - "integrity": "sha512-BalK54tfK0pMC0jQFb2oHn1nz7JNQD/2ex5pBnCHgBi2xG7VV0cAOGy2RS2VbCqUXx5/6obMrMcQTJ8yjcGzbg==", - "dev": true, - "optional": true, - "requires": { - "nan": "^2.9.2", - "node-pre-gyp": "^0.10.0" - } - }, - "gauge": { - "version": "2.7.4", - "resolved": "https://registry.npmjs.org/gauge/-/gauge-2.7.4.tgz", - "integrity": "sha1-LANAXHU4w51+s3sxcCLjJfsBi/c=", - "dev": true, - "optional": true, - "requires": { - "aproba": "^1.0.3", - "console-control-strings": "^1.0.0", - "has-unicode": "^2.0.0", - "object-assign": "^4.1.0", - "signal-exit": "^3.0.0", - "string-width": "^1.0.1", - "strip-ansi": "^3.0.1", - "wide-align": "^1.1.0" - } - }, - "get-caller-file": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-1.0.3.tgz", - "integrity": "sha512-3t6rVToeoZfYSGd8YoLFR2DJkiQrIiUrGcjvFX2mDw3bn6k2OtwHN0TNCLbBO+w8qTvimhDkv+LSscbJY1vE6w==", - "dev": true - }, - "get-stream": { - "version": "4.1.0", - "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-4.1.0.tgz", - "integrity": "sha512-GMat4EJ5161kIy2HevLlr4luNjBgvmj413KaQA7jt4V8B4RDsfpHk7WQ9GVqfYyyx8OS/L66Kox+rJRNklLK7w==", - "dev": true, - "requires": { - "pump": "^3.0.0" - } - }, - "get-value": { - "version": "2.0.6", - "resolved": "https://registry.npmjs.org/get-value/-/get-value-2.0.6.tgz", - 
"integrity": "sha1-3BXKHGcjh8p2vTesCjlbogQqLCg=", - "dev": true - }, - "glob": { - "version": "7.1.3", - "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.3.tgz", - "integrity": "sha512-vcfuiIxogLV4DlGBHIUOwI0IbrJ8HWPc4MU7HzviGeNho/UJDfi6B5p3sHeWIQ0KGIU0Jpxi5ZHxemQfLkkAwQ==", - "dev": true, - "requires": { - "fs.realpath": "^1.0.0", - "inflight": "^1.0.4", - "inherits": "2", - "minimatch": "^3.0.4", - "once": "^1.3.0", - "path-is-absolute": "^1.0.0" - } - }, - "glob-parent": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz", - "integrity": "sha1-nmr2KZ2NO9K9QEMIMr0RPfkGxa4=", - "dev": true, - "requires": { - "is-glob": "^3.1.0", - "path-dirname": "^1.0.0" - }, - "dependencies": { - "is-glob": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-3.1.0.tgz", - "integrity": "sha1-e6WuJCF4BKxwcHuWkiVnSGzD6Eo=", - "dev": true, - "requires": { - "is-extglob": "^2.1.0" - } - } - } - }, - "global-modules": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/global-modules/-/global-modules-1.0.0.tgz", - "integrity": "sha512-sKzpEkf11GpOFuw0Zzjzmt4B4UZwjOcG757PPvrfhxcLFbq0wpsgpOqxpxtxFiCG4DtG93M6XRVbF2oGdev7bg==", - "dev": true, - "requires": { - "global-prefix": "^1.0.1", - "is-windows": "^1.0.1", - "resolve-dir": "^1.0.0" - } - }, - "global-modules-path": { - "version": "2.3.1", - "resolved": "https://registry.npmjs.org/global-modules-path/-/global-modules-path-2.3.1.tgz", - "integrity": "sha512-y+shkf4InI7mPRHSo2b/k6ix6+NLDtyccYv86whhxrSGX9wjPX1VMITmrDbE1eh7zkzhiWtW2sHklJYoQ62Cxg==", - "dev": true - }, - "global-prefix": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/global-prefix/-/global-prefix-1.0.2.tgz", - "integrity": "sha1-2/dDxsFJklk8ZVVoy2btMsASLr4=", - "dev": true, - "requires": { - "expand-tilde": "^2.0.2", - "homedir-polyfill": "^1.0.1", - "ini": "^1.3.4", - "is-windows": "^1.0.1", - "which": "^1.2.14" - } - }, - "graceful-fs": { - "version": "4.1.15", - "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.1.15.tgz", - "integrity": "sha512-6uHUhOPEBgQ24HM+r6b/QwWfZq+yiFcipKFrOFiBEnWdy5sdzYoi+pJeQaPI5qOLRFqWmAXUPQNsielzdLoecA==", - "dev": true - }, - "has-ansi": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/has-ansi/-/has-ansi-2.0.0.tgz", - "integrity": "sha1-NPUEnOHs3ysGSa8+8k5F7TVBbZE=", - "dev": true, - "requires": { - "ansi-regex": "^2.0.0" - } - }, - "has-flag": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", - "integrity": "sha1-tdRU3CGZriJWmfNGfloH87lVuv0=", - "dev": true - }, - "has-unicode": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/has-unicode/-/has-unicode-2.0.1.tgz", - "integrity": "sha1-4Ob+aijPUROIVeCG0Wkedx3iqLk=", - "dev": true, - "optional": true - }, - "has-value": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/has-value/-/has-value-1.0.0.tgz", - "integrity": "sha1-GLKB2lhbHFxR3vJMkw7SmgvmsXc=", - "dev": true, - "requires": { - "get-value": "^2.0.6", - "has-values": "^1.0.0", - "isobject": "^3.0.0" - } - }, - "has-values": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/has-values/-/has-values-1.0.0.tgz", - "integrity": "sha1-lbC2P+whRmGab+V/51Yo1aOe/k8=", - "dev": true, - "requires": { - "is-number": "^3.0.0", - "kind-of": "^4.0.0" - }, - "dependencies": { - "kind-of": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-4.0.0.tgz", - "integrity": 
"sha1-IIE989cSkosgc3hpGkUGb65y3Vc=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } - }, - "hash-base": { - "version": "3.0.4", - "resolved": "https://registry.npmjs.org/hash-base/-/hash-base-3.0.4.tgz", - "integrity": "sha1-X8hoaEfs1zSZQDMZprCj8/auSRg=", - "dev": true, - "requires": { - "inherits": "^2.0.1", - "safe-buffer": "^5.0.1" - } - }, - "hash.js": { - "version": "1.1.7", - "resolved": "https://registry.npmjs.org/hash.js/-/hash.js-1.1.7.tgz", - "integrity": "sha512-taOaskGt4z4SOANNseOviYDvjEJinIkRgmp7LbKP2YTTmVxWBl87s/uzK9r+44BclBSp2X7K1hqeNfz9JbBeXA==", - "dev": true, - "requires": { - "inherits": "^2.0.3", - "minimalistic-assert": "^1.0.1" - } - }, - "hashids": { - "version": "1.2.2", - "resolved": "https://registry.npmjs.org/hashids/-/hashids-1.2.2.tgz", - "integrity": "sha512-dEHCG2LraR6PNvSGxosZHIRgxF5sNLOIBFEHbj8lfP9WWmu/PWPMzsip1drdVSOFi51N2pU7gZavrgn7sbGFuw==" - }, - "hmac-drbg": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/hmac-drbg/-/hmac-drbg-1.0.1.tgz", - "integrity": "sha1-0nRXAQJabHdabFRXk+1QL8DGSaE=", - "dev": true, - "requires": { - "hash.js": "^1.0.3", - "minimalistic-assert": "^1.0.0", - "minimalistic-crypto-utils": "^1.0.1" - } - }, - "homedir-polyfill": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/homedir-polyfill/-/homedir-polyfill-1.0.1.tgz", - "integrity": "sha1-TCu8inWJmP7r9e1oWA921GdotLw=", - "dev": true, - "requires": { - "parse-passwd": "^1.0.0" - } - }, - "https-browserify": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/https-browserify/-/https-browserify-1.0.0.tgz", - "integrity": "sha1-7AbBDgo0wPL68Zn3/X/Hj//QPHM=", - "dev": true - }, - "iconv-lite": { - "version": "0.4.24", - "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz", - "integrity": "sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==", - "requires": { - "safer-buffer": ">= 2.1.2 < 3" - } - }, - "ieee754": { - "version": "1.1.12", - "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.1.12.tgz", - "integrity": "sha512-GguP+DRY+pJ3soyIiGPTvdiVXjZ+DbXOxGpXn3eMvNW4x4irjqXm4wHKscC+TfxSJ0yw/S1F24tqdMNsMZTiLA==", - "dev": true - }, - "iferr": { - "version": "0.1.5", - "resolved": "https://registry.npmjs.org/iferr/-/iferr-0.1.5.tgz", - "integrity": "sha1-xg7taebY/bazEEofy8ocGS3FtQE=", - "dev": true - }, - "ignore-walk": { - "version": "3.0.1", - "resolved": "https://registry.npmjs.org/ignore-walk/-/ignore-walk-3.0.1.tgz", - "integrity": "sha512-DTVlMx3IYPe0/JJcYP7Gxg7ttZZu3IInhuEhbchuqneY9wWe5Ojy2mXLBaQFUQmo0AW2r3qG7m1mg86js+gnlQ==", - "dev": true, - "optional": true, - "requires": { - "minimatch": "^3.0.4" - } - }, - "import-local": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/import-local/-/import-local-2.0.0.tgz", - "integrity": "sha512-b6s04m3O+s3CGSbqDIyP4R6aAwAeYlVq9+WUWep6iHa8ETRf9yei1U48C5MmfJmV9AiLYYBKPMq/W+/WRpQmCQ==", - "dev": true, - "requires": { - "pkg-dir": "^3.0.0", - "resolve-cwd": "^2.0.0" - } - }, - "imurmurhash": { - "version": "0.1.4", - "resolved": "https://registry.npmjs.org/imurmurhash/-/imurmurhash-0.1.4.tgz", - "integrity": "sha1-khi5srkoojixPcT7a21XbyMUU+o=", - "dev": true - }, - "indexof": { - "version": "0.0.1", - "resolved": "https://registry.npmjs.org/indexof/-/indexof-0.0.1.tgz", - "integrity": "sha1-gtwzbSMrkGIXnQWrMpOmYFn9Q10=", - "dev": true - }, - "inflight": { - "version": "1.0.6", - "resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz", - "integrity": 
"sha1-Sb1jMdfQLQwJvJEKEHW6gWW1bfk=", - "dev": true, - "requires": { - "once": "^1.3.0", - "wrappy": "1" - } - }, - "inherits": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", - "integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4=", - "dev": true - }, - "ini": { - "version": "1.3.7", - "resolved": "https://registry.npmjs.org/ini/-/ini-1.3.7.tgz", - "integrity": "sha512-iKpRpXP+CrP2jyrxvg1kMUpXDyRUFDWurxbnVT1vQPx+Wz9uCYsMIqYuSBLV+PAaZG/d7kRLKRFc9oDMsH+mFQ==", - "dev": true - }, - "interpret": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/interpret/-/interpret-1.2.0.tgz", - "integrity": "sha512-mT34yGKMNceBQUoVn7iCDKDntA7SC6gycMAWzGx1z/CMCTV7b2AAtXlo3nRyHZ1FelRkQbQjprHSYGwzLtkVbw==", - "dev": true - }, - "invert-kv": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/invert-kv/-/invert-kv-2.0.0.tgz", - "integrity": "sha512-wPVv/y/QQ/Uiirj/vh3oP+1Ww+AWehmi1g5fFWGPF6IpCBCDVrhgHRMvrLfdYcwDh3QJbGXDW4JAuzxElLSqKA==", - "dev": true - }, - "is-accessor-descriptor": { - "version": "0.1.6", - "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-0.1.6.tgz", - "integrity": "sha1-qeEss66Nh2cn7u84Q/igiXtcmNY=", - "dev": true, - "requires": { - "kind-of": "^3.0.2" - }, - "dependencies": { - "kind-of": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", - "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } - }, - "is-binary-path": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-1.0.1.tgz", - "integrity": "sha1-dfFmQrSA8YenEcgUFh/TpKdlWJg=", - "dev": true, - "requires": { - "binary-extensions": "^1.0.0" - } - }, - "is-buffer": { - "version": "1.1.6", - "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-1.1.6.tgz", - "integrity": "sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w==", - "dev": true - }, - "is-data-descriptor": { - "version": "0.1.4", - "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-0.1.4.tgz", - "integrity": "sha1-C17mSDiOLIYCgueT8YVv7D8wG1Y=", - "dev": true, - "requires": { - "kind-of": "^3.0.2" - }, - "dependencies": { - "kind-of": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", - "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } - }, - "is-descriptor": { - "version": "0.1.6", - "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-0.1.6.tgz", - "integrity": "sha512-avDYr0SB3DwO9zsMov0gKCESFYqCnE4hq/4z3TdUlukEy5t9C0YRq7HLrsN52NAcqXKaepeCD0n+B0arnVG3Hg==", - "dev": true, - "requires": { - "is-accessor-descriptor": "^0.1.6", - "is-data-descriptor": "^0.1.4", - "kind-of": "^5.0.0" - }, - "dependencies": { - "kind-of": { - "version": "5.1.0", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-5.1.0.tgz", - "integrity": "sha512-NGEErnH6F2vUuXDh+OlbcKW7/wOcfdRHaZ7VWtqCztfHri/++YKmP51OdWeGPuqCOba6kk2OTe5d02VmTB80Pw==", - "dev": true - } - } - }, - "is-extendable": { - "version": "0.1.1", - "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-0.1.1.tgz", - "integrity": "sha1-YrEQ4omkcUGOPsNqYX1HLjAd/Ik=", - "dev": true - }, - "is-extglob": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", - "integrity": "sha1-qIwCU1eR8C7TfHahueqXc8gz+MI=", 
- "dev": true - }, - "is-fullwidth-code-point": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-1.0.0.tgz", - "integrity": "sha1-754xOG8DGn8NZDr4L95QxFfvAMs=", - "dev": true, - "requires": { - "number-is-nan": "^1.0.0" - } - }, - "is-glob": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.0.tgz", - "integrity": "sha1-lSHHaEXMJhCoUgPd8ICpWML/q8A=", - "dev": true, - "requires": { - "is-extglob": "^2.1.1" - } - }, - "is-number": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/is-number/-/is-number-3.0.0.tgz", - "integrity": "sha1-JP1iAaR4LPUFYcgQJ2r8fRLXEZU=", - "dev": true, - "requires": { - "kind-of": "^3.0.2" - }, - "dependencies": { - "kind-of": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", - "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } - }, - "is-plain-object": { - "version": "2.0.4", - "resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz", - "integrity": "sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og==", - "dev": true, - "requires": { - "isobject": "^3.0.1" - } - }, - "is-stream": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-1.1.0.tgz", - "integrity": "sha1-EtSj3U5o4Lec6428hBc66A2RykQ=", - "dev": true - }, - "is-windows": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/is-windows/-/is-windows-1.0.2.tgz", - "integrity": "sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA==", - "dev": true - }, - "isarray": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz", - "integrity": "sha1-u5NdSFgsuhaMBoNJV6VKPgcSTxE=", - "dev": true - }, - "isexe": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz", - "integrity": "sha1-6PvzdNxVb/iUehDcsFctYz8s+hA=", - "dev": true - }, - "isobject": { - "version": "3.0.1", - "resolved": "https://registry.npmjs.org/isobject/-/isobject-3.0.1.tgz", - "integrity": "sha1-TkMekrEalzFjaqH5yNHMvP2reN8=", - "dev": true - }, - "js-tokens": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-3.0.2.tgz", - "integrity": "sha1-mGbfOVECEw449/mWvOtlRDIJwls=" - }, - "js-yaml": { - "version": "3.13.1", - "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.13.1.tgz", - "integrity": "sha512-YfbcO7jXDdyj0DGxYVSlSeQNHbD7XPWvrVWeVUujrQEoZzWJIRrCPoyk6kL6IAjAG2IolMK4T0hNUe0HOUs5Jw==", - "dev": true, - "requires": { - "argparse": "^1.0.7", - "esprima": "^4.0.0" - } - }, - "json-parse-better-errors": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/json-parse-better-errors/-/json-parse-better-errors-1.0.2.tgz", - "integrity": "sha512-mrqyZKfX5EhL7hvqcV6WG1yYjnjeuYDzDhhcAAUrq8Po85NBQBJP+ZDUT75qZQ98IkUoBqdkExkukOU7Ts2wrw==", - "dev": true - }, - "json-schema-traverse": { - "version": "0.4.1", - "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", - "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", - "dev": true - }, - "json5": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz", - "integrity": 
"sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==", - "dev": true, - "requires": { - "minimist": "^1.2.0" - } - }, - "kind-of": { - "version": "6.0.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz", - "integrity": "sha512-s5kLOcnH0XqDO+FvuaLX8DDjZ18CGFk7VygH40QoKPUQhW4e2rvM0rwUq0t8IQDOwYSeLK01U90OjzBTme2QqA==", - "dev": true - }, - "lcid": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/lcid/-/lcid-2.0.0.tgz", - "integrity": "sha512-avPEb8P8EGnwXKClwsNUgryVjllcRqtMYa49NTsbQagYuT1DcXnl1915oxWjoyGrXR6zH/Y0Zc96xWsPcoDKeA==", - "dev": true, - "requires": { - "invert-kv": "^2.0.0" - } - }, - "lightercollective": { - "version": "0.1.0", - "resolved": "https://registry.npmjs.org/lightercollective/-/lightercollective-0.1.0.tgz", - "integrity": "sha512-J9tg5uraYoQKaWbmrzDDexbG6hHnMcWS1qLYgJSWE+mpA3U5OCSeMUhb+K55otgZJ34oFdR0ECvdIb3xuO5JOQ==", - "dev": true - }, - "loader-runner": { - "version": "2.4.0", - "resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-2.4.0.tgz", - "integrity": "sha512-Jsmr89RcXGIwivFY21FcRrisYZfvLMTWx5kOLc+JTxtpBOG6xML0vzbc6SEQG2FO9/4Fc3wW4LVcB5DmGflaRw==", - "dev": true - }, - "loader-utils": { - "version": "1.2.3", - "resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-1.2.3.tgz", - "integrity": "sha512-fkpz8ejdnEMG3s37wGL07iSBDg99O9D5yflE9RGNH3hRdx9SOwYfnGYdZOUIZitN8E+E2vkq3MUMYMvPYl5ZZA==", - "dev": true, - "requires": { - "big.js": "^5.2.2", - "emojis-list": "^2.0.0", - "json5": "^1.0.1" - } - }, - "locate-path": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz", - "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==", - "dev": true, - "requires": { - "p-locate": "^3.0.0", - "path-exists": "^3.0.0" - } - }, - "lodash.debounce": { - "version": "4.0.8", - "resolved": "https://registry.npmjs.org/lodash.debounce/-/lodash.debounce-4.0.8.tgz", - "integrity": "sha1-gteb/zCmfEAF/9XiUVMArZyk168=", - "dev": true - }, - "loose-envify": { - "version": "1.4.0", - "resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz", - "integrity": "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==", - "requires": { - "js-tokens": "^3.0.0 || ^4.0.0" - } - }, - "lru-cache": { - "version": "5.1.1", - "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz", - "integrity": "sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==", - "dev": true, - "requires": { - "yallist": "^3.0.2" - } - }, - "make-dir": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-1.3.0.tgz", - "integrity": "sha512-2w31R7SJtieJJnQtGc7RVL2StM2vGYVfqUOvUDxH6bC6aJTxPxTF0GnIgCyu7tjockiUWAYQRbxa7vKn34s5sQ==", - "dev": true, - "requires": { - "pify": "^3.0.0" - } - }, - "map-age-cleaner": { - "version": "0.1.3", - "resolved": "https://registry.npmjs.org/map-age-cleaner/-/map-age-cleaner-0.1.3.tgz", - "integrity": "sha512-bJzx6nMoP6PDLPBFmg7+xRKeFZvFboMrGlxmNj9ClvX53KrmvM5bXFXEWjbz4cz1AFn+jWJ9z/DJSz7hrs0w3w==", - "dev": true, - "requires": { - "p-defer": "^1.0.0" - } - }, - "map-cache": { - "version": "0.2.2", - "resolved": "https://registry.npmjs.org/map-cache/-/map-cache-0.2.2.tgz", - "integrity": "sha1-wyq9C9ZSXZsFFkW7TyasXcmKDb8=", - "dev": true - }, - "map-visit": { - "version": "1.0.0", - "resolved": 
"https://registry.npmjs.org/map-visit/-/map-visit-1.0.0.tgz", - "integrity": "sha1-7Nyo8TFE5mDxtb1B8S80edmN+48=", - "dev": true, - "requires": { - "object-visit": "^1.0.0" - } - }, - "md5.js": { - "version": "1.3.5", - "resolved": "https://registry.npmjs.org/md5.js/-/md5.js-1.3.5.tgz", - "integrity": "sha512-xitP+WxNPcTTOgnTJcrhM0xvdPepipPSf3I8EIpGKeFLjt3PlJLIDG3u8EX53ZIubkb+5U2+3rELYpEhHhzdkg==", - "dev": true, - "requires": { - "hash-base": "^3.0.0", - "inherits": "^2.0.1", - "safe-buffer": "^5.1.2" - } - }, - "mem": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/mem/-/mem-4.0.0.tgz", - "integrity": "sha512-WQxG/5xYc3tMbYLXoXPm81ET2WDULiU5FxbuIoNbJqLOOI8zehXFdZuiUEgfdrU2mVB1pxBZUGlYORSrpuJreA==", - "dev": true, - "requires": { - "map-age-cleaner": "^0.1.1", - "mimic-fn": "^1.0.0", - "p-is-promise": "^1.1.0" - } - }, - "memory-fs": { - "version": "0.4.1", - "resolved": "https://registry.npmjs.org/memory-fs/-/memory-fs-0.4.1.tgz", - "integrity": "sha1-OpoguEYlI+RHz7x+i7gO1me/xVI=", - "dev": true, - "requires": { - "errno": "^0.1.3", - "readable-stream": "^2.0.1" - } - }, - "micromatch": { - "version": "3.1.10", - "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-3.1.10.tgz", - "integrity": "sha512-MWikgl9n9M3w+bpsY3He8L+w9eF9338xRl8IAO5viDizwSzziFEyUzo2xrrloB64ADbTf8uA8vRqqttDTOmccg==", - "dev": true, - "requires": { - "arr-diff": "^4.0.0", - "array-unique": "^0.3.2", - "braces": "^2.3.1", - "define-property": "^2.0.2", - "extend-shallow": "^3.0.2", - "extglob": "^2.0.4", - "fragment-cache": "^0.2.1", - "kind-of": "^6.0.2", - "nanomatch": "^1.2.9", - "object.pick": "^1.3.0", - "regex-not": "^1.0.0", - "snapdragon": "^0.8.1", - "to-regex": "^3.0.2" - } - }, - "miller-rabin": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/miller-rabin/-/miller-rabin-4.0.1.tgz", - "integrity": "sha512-115fLhvZVqWwHPbClyntxEVfVDfl9DLLTuJvq3g2O/Oxi8AiNouAHvDSzHS0viUJc+V5vm3eq91Xwqn9dp4jRA==", - "dev": true, - "requires": { - "bn.js": "^4.0.0", - "brorand": "^1.0.1" - } - }, - "mimic-fn": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-1.2.0.tgz", - "integrity": "sha512-jf84uxzwiuiIVKiOLpfYk7N46TSy8ubTonmneY9vrpHNAnp0QBt2BxWV9dO3/j+BoVAb+a5G6YDPW3M5HOdMWQ==", - "dev": true - }, - "minimalistic-assert": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/minimalistic-assert/-/minimalistic-assert-1.0.1.tgz", - "integrity": "sha512-UtJcAD4yEaGtjPezWuO9wC4nwUnVH/8/Im3yEHQP4b67cXlD/Qr9hdITCU1xDbSEXg2XKNaP8jsReV7vQd00/A==", - "dev": true - }, - "minimalistic-crypto-utils": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/minimalistic-crypto-utils/-/minimalistic-crypto-utils-1.0.1.tgz", - "integrity": "sha1-9sAMHAsIIkblxNmd+4x8CDsrWCo=", - "dev": true - }, - "minimatch": { - "version": "3.0.4", - "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz", - "integrity": "sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==", - "dev": true, - "requires": { - "brace-expansion": "^1.1.7" - } - }, - "minimist": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz", - "integrity": "sha1-o1AIsg9BOD7sH7kU9M1d95omQoQ=", - "dev": true - }, - "minipass": { - "version": "2.3.5", - "resolved": "https://registry.npmjs.org/minipass/-/minipass-2.3.5.tgz", - "integrity": "sha512-Gi1W4k059gyRbyVUZQ4mEqLm0YIUiGYfvxhF6SIlk3ui1WVxMTGfGdQ2SInh3PDrRTVvPKgULkpJtT4RH10+VA==", - "dev": true, - "optional": true, - "requires": { 
- "safe-buffer": "^5.1.2", - "yallist": "^3.0.0" - } - }, - "minizlib": { - "version": "1.2.1", - "resolved": "https://registry.npmjs.org/minizlib/-/minizlib-1.2.1.tgz", - "integrity": "sha512-7+4oTUOWKg7AuL3vloEWekXY2/D20cevzsrNT2kGWm+39J9hGTCBv8VI5Pm5lXZ/o3/mdR4f8rflAPhnQb8mPA==", - "dev": true, - "optional": true, - "requires": { - "minipass": "^2.2.1" - } - }, - "mississippi": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/mississippi/-/mississippi-3.0.0.tgz", - "integrity": "sha512-x471SsVjUtBRtcvd4BzKE9kFC+/2TeWgKCgw0bZcw1b9l2X3QX5vCWgF+KaZaYm87Ss//rHnWryupDrgLvmSkA==", - "dev": true, - "requires": { - "concat-stream": "^1.5.0", - "duplexify": "^3.4.2", - "end-of-stream": "^1.1.0", - "flush-write-stream": "^1.0.0", - "from2": "^2.1.0", - "parallel-transform": "^1.1.0", - "pump": "^3.0.0", - "pumpify": "^1.3.3", - "stream-each": "^1.1.0", - "through2": "^2.0.0" - } - }, - "mixin-deep": { - "version": "1.3.2", - "resolved": "https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.2.tgz", - "integrity": "sha512-WRoDn//mXBiJ1H40rqa3vH0toePwSsGb45iInWlTySa+Uu4k3tYUSxa2v1KqAiLtvlrSzaExqS1gtk96A9zvEA==", - "dev": true, - "requires": { - "for-in": "^1.0.2", - "is-extendable": "^1.0.1" - }, - "dependencies": { - "is-extendable": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-1.0.1.tgz", - "integrity": "sha512-arnXMxT1hhoKo9k1LZdmlNyJdDDfy2v0fXjFlmok4+i8ul/6WlbVge9bhM74OpNPQPMGUToDtz+KXa1PneJxOA==", - "dev": true, - "requires": { - "is-plain-object": "^2.0.4" - } - } - } - }, - "mkdirp": { - "version": "0.5.1", - "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.1.tgz", - "integrity": "sha1-MAV0OOrGz3+MR2fzhkjWaX11yQM=", - "dev": true, - "requires": { - "minimist": "0.0.8" - }, - "dependencies": { - "minimist": { - "version": "0.0.8", - "resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz", - "integrity": "sha1-hX/Kv8M5fSYluCKCYuhqp6ARsF0=", - "dev": true - } - } - }, - "move-concurrently": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/move-concurrently/-/move-concurrently-1.0.1.tgz", - "integrity": "sha1-viwAX9oy4LKa8fBdfEszIUxwH5I=", - "dev": true, - "requires": { - "aproba": "^1.1.1", - "copy-concurrently": "^1.0.0", - "fs-write-stream-atomic": "^1.0.8", - "mkdirp": "^0.5.1", - "rimraf": "^2.5.4", - "run-queue": "^1.0.3" - } - }, - "ms": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", - "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g=", - "dev": true - }, - "nan": { - "version": "2.12.1", - "resolved": "https://registry.npmjs.org/nan/-/nan-2.12.1.tgz", - "integrity": "sha512-JY7V6lRkStKcKTvHO5NVSQRv+RV+FIL5pvDoLiAtSL9pKlC5x9PKQcZDsq7m4FO4d57mkhC6Z+QhAh3Jdk5JFw==", - "dev": true, - "optional": true - }, - "nanomatch": { - "version": "1.2.13", - "resolved": "https://registry.npmjs.org/nanomatch/-/nanomatch-1.2.13.tgz", - "integrity": "sha512-fpoe2T0RbHwBTBUOftAfBPaDEi06ufaUai0mE6Yn1kacc3SnTErfb/h+X94VXzI64rKFHYImXSvdwGGCmwOqCA==", - "dev": true, - "requires": { - "arr-diff": "^4.0.0", - "array-unique": "^0.3.2", - "define-property": "^2.0.2", - "extend-shallow": "^3.0.2", - "fragment-cache": "^0.2.1", - "is-windows": "^1.0.2", - "kind-of": "^6.0.2", - "object.pick": "^1.3.0", - "regex-not": "^1.0.0", - "snapdragon": "^0.8.1", - "to-regex": "^3.0.1" - } - }, - "needle": { - "version": "2.2.4", - "resolved": "https://registry.npmjs.org/needle/-/needle-2.2.4.tgz", - "integrity": 
"sha512-HyoqEb4wr/rsoaIDfTH2aVL9nWtQqba2/HvMv+++m8u0dz808MaagKILxtfeSN7QU7nvbQ79zk3vYOJp9zsNEA==", - "dev": true, - "optional": true, - "requires": { - "debug": "^2.1.2", - "iconv-lite": "^0.4.4", - "sax": "^1.2.4" - } - }, - "neo-async": { - "version": "2.6.0", - "resolved": "https://registry.npmjs.org/neo-async/-/neo-async-2.6.0.tgz", - "integrity": "sha512-MFh0d/Wa7vkKO3Y3LlacqAEeHK0mckVqzDieUKTT+KGxi+zIpeVsFxymkIiRpbpDziHc290Xr9A1O4Om7otoRA==", - "dev": true - }, - "nice-try": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/nice-try/-/nice-try-1.0.5.tgz", - "integrity": "sha512-1nh45deeb5olNY7eX82BkPO7SSxR5SSYJiPTrTdFUVYwAl8CKMA5N9PjTYkHiRjisVcxcQ1HXdLhx2qxxJzLNQ==", - "dev": true - }, - "node-libs-browser": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/node-libs-browser/-/node-libs-browser-2.1.0.tgz", - "integrity": "sha512-5AzFzdoIMb89hBGMZglEegffzgRg+ZFoUmisQ8HI4j1KDdpx13J0taNp2y9xPbur6W61gepGDDotGBVQ7mfUCg==", - "dev": true, - "requires": { - "assert": "^1.1.1", - "browserify-zlib": "^0.2.0", - "buffer": "^4.3.0", - "console-browserify": "^1.1.0", - "constants-browserify": "^1.0.0", - "crypto-browserify": "^3.11.0", - "domain-browser": "^1.1.1", - "events": "^1.0.0", - "https-browserify": "^1.0.0", - "os-browserify": "^0.3.0", - "path-browserify": "0.0.0", - "process": "^0.11.10", - "punycode": "^1.2.4", - "querystring-es3": "^0.2.0", - "readable-stream": "^2.3.3", - "stream-browserify": "^2.0.1", - "stream-http": "^2.7.2", - "string_decoder": "^1.0.0", - "timers-browserify": "^2.0.4", - "tty-browserify": "0.0.0", - "url": "^0.11.0", - "util": "^0.10.3", - "vm-browserify": "0.0.4" - }, - "dependencies": { - "punycode": { - "version": "1.4.1", - "resolved": "https://registry.npmjs.org/punycode/-/punycode-1.4.1.tgz", - "integrity": "sha1-wNWmOycYgArY4esPpSachN1BhF4=", - "dev": true - } - } - }, - "node-pre-gyp": { - "version": "0.10.3", - "resolved": "https://registry.npmjs.org/node-pre-gyp/-/node-pre-gyp-0.10.3.tgz", - "integrity": "sha512-d1xFs+C/IPS8Id0qPTZ4bUT8wWryfR/OzzAFxweG+uLN85oPzyo2Iw6bVlLQ/JOdgNonXLCoRyqDzDWq4iw72A==", - "dev": true, - "optional": true, - "requires": { - "detect-libc": "^1.0.2", - "mkdirp": "^0.5.1", - "needle": "^2.2.1", - "nopt": "^4.0.1", - "npm-packlist": "^1.1.6", - "npmlog": "^4.0.2", - "rc": "^1.2.7", - "rimraf": "^2.6.1", - "semver": "^5.3.0", - "tar": "^4" - } - }, - "nopt": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/nopt/-/nopt-4.0.1.tgz", - "integrity": "sha1-0NRoWv1UFRk8jHUFYC0NF81kR00=", - "dev": true, - "optional": true, - "requires": { - "abbrev": "1", - "osenv": "^0.1.4" - } - }, - "normalize-path": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-2.1.1.tgz", - "integrity": "sha1-GrKLVW4Zg2Oowab35vogE3/mrtk=", - "dev": true, - "requires": { - "remove-trailing-separator": "^1.0.1" - } - }, - "npm-bundled": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/npm-bundled/-/npm-bundled-1.0.5.tgz", - "integrity": "sha512-m/e6jgWu8/v5niCUKQi9qQl8QdeEduFA96xHDDzFGqly0OOjI7c+60KM/2sppfnUU9JJagf+zs+yGhqSOFj71g==", - "dev": true, - "optional": true - }, - "npm-packlist": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/npm-packlist/-/npm-packlist-1.2.0.tgz", - "integrity": "sha512-7Mni4Z8Xkx0/oegoqlcao/JpPCPEMtUvsmB0q7mgvlMinykJLSRTYuFqoQLYgGY8biuxIeiHO+QNJKbCfljewQ==", - "dev": true, - "optional": true, - "requires": { - "ignore-walk": "^3.0.1", - "npm-bundled": "^1.0.1" - } - }, - "npm-run-path": { - 
"version": "2.0.2", - "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-2.0.2.tgz", - "integrity": "sha1-NakjLfo11wZ7TLLd8jV7GHFTbF8=", - "dev": true, - "requires": { - "path-key": "^2.0.0" - } - }, - "npmlog": { - "version": "4.1.2", - "resolved": "https://registry.npmjs.org/npmlog/-/npmlog-4.1.2.tgz", - "integrity": "sha512-2uUqazuKlTaSI/dC8AzicUck7+IrEaOnN/e0jd3Xtt1KcGpwx30v50mL7oPyr/h9bL3E4aZccVwpwP+5W9Vjkg==", - "dev": true, - "optional": true, - "requires": { - "are-we-there-yet": "~1.1.2", - "console-control-strings": "~1.1.0", - "gauge": "~2.7.3", - "set-blocking": "~2.0.0" - } - }, - "number-is-nan": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/number-is-nan/-/number-is-nan-1.0.1.tgz", - "integrity": "sha1-CXtgK1NCKlIsGvuHkDGDNpQaAR0=", - "dev": true - }, - "object-assign": { - "version": "4.1.1", - "resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz", - "integrity": "sha1-IQmtx5ZYh8/AXLvUQsrIv7s2CGM=" - }, - "object-copy": { - "version": "0.1.0", - "resolved": "https://registry.npmjs.org/object-copy/-/object-copy-0.1.0.tgz", - "integrity": "sha1-fn2Fi3gb18mRpBupde04EnVOmYw=", - "dev": true, - "requires": { - "copy-descriptor": "^0.1.0", - "define-property": "^0.2.5", - "kind-of": "^3.0.3" - }, - "dependencies": { - "define-property": { - "version": "0.2.5", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", - "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", - "dev": true, - "requires": { - "is-descriptor": "^0.1.0" - } - }, - "kind-of": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", - "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } - }, - "object-visit": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/object-visit/-/object-visit-1.0.1.tgz", - "integrity": "sha1-95xEk68MU3e1n+OdOV5BBC3QRbs=", - "dev": true, - "requires": { - "isobject": "^3.0.0" - } - }, - "object.pick": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/object.pick/-/object.pick-1.3.0.tgz", - "integrity": "sha1-h6EKxMFpS9Lhy/U1kaZhQftd10c=", - "dev": true, - "requires": { - "isobject": "^3.0.1" - } - }, - "once": { - "version": "1.4.0", - "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", - "integrity": "sha1-WDsap3WWHUsROsF9nFC6753Xa9E=", - "dev": true, - "requires": { - "wrappy": "1" - } - }, - "os-browserify": { - "version": "0.3.0", - "resolved": "https://registry.npmjs.org/os-browserify/-/os-browserify-0.3.0.tgz", - "integrity": "sha1-hUNzx/XCMVkU/Jv8a9gjj92h7Cc=", - "dev": true - }, - "os-homedir": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/os-homedir/-/os-homedir-1.0.2.tgz", - "integrity": "sha1-/7xJiDNuDoM94MFox+8VISGqf7M=", - "dev": true, - "optional": true - }, - "os-locale": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/os-locale/-/os-locale-3.1.0.tgz", - "integrity": "sha512-Z8l3R4wYWM40/52Z+S265okfFj8Kt2cC2MKY+xNi3kFs+XGI7WXu/I309QQQYbRW4ijiZ+yxs9pqEhJh0DqW3Q==", - "dev": true, - "requires": { - "execa": "^1.0.0", - "lcid": "^2.0.0", - "mem": "^4.0.0" - } - }, - "os-tmpdir": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/os-tmpdir/-/os-tmpdir-1.0.2.tgz", - "integrity": "sha1-u+Z0BseaqFxc/sdm/lc0VV36EnQ=", - "dev": true, - "optional": true - }, - "osenv": { - "version": "0.1.5", - "resolved": "https://registry.npmjs.org/osenv/-/osenv-0.1.5.tgz", - "integrity": 
"sha512-0CWcCECdMVc2Rw3U5w9ZjqX6ga6ubk1xDVKxtBQPK7wis/0F2r9T6k4ydGYhecl7YUBxBVxhL5oisPsNxAPe2g==", - "dev": true, - "optional": true, - "requires": { - "os-homedir": "^1.0.0", - "os-tmpdir": "^1.0.0" - } - }, - "p-defer": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/p-defer/-/p-defer-1.0.0.tgz", - "integrity": "sha1-n26xgvbJqozXQwBKfU+WsZaw+ww=", - "dev": true - }, - "p-finally": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/p-finally/-/p-finally-1.0.0.tgz", - "integrity": "sha1-P7z7FbiZpEEjs0ttzBi3JDNqLK4=", - "dev": true - }, - "p-is-promise": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/p-is-promise/-/p-is-promise-1.1.0.tgz", - "integrity": "sha1-nJRWmJ6fZYgBewQ01WCXZ1w9oF4=", - "dev": true - }, - "p-limit": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.1.0.tgz", - "integrity": "sha512-NhURkNcrVB+8hNfLuysU8enY5xn2KXphsHBaC2YmRNTZRc7RWusw6apSpdEj3jo4CMb6W9nrF6tTnsJsJeyu6g==", - "dev": true, - "requires": { - "p-try": "^2.0.0" - } - }, - "p-locate": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz", - "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==", - "dev": true, - "requires": { - "p-limit": "^2.0.0" - } - }, - "p-try": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/p-try/-/p-try-2.0.0.tgz", - "integrity": "sha512-hMp0onDKIajHfIkdRk3P4CdCmErkYAxxDtP3Wx/4nZ3aGlau2VKh3mZpcuFkH27WQkL/3WBCPOktzA9ZOAnMQQ==", - "dev": true - }, - "pako": { - "version": "1.0.8", - "resolved": "https://registry.npmjs.org/pako/-/pako-1.0.8.tgz", - "integrity": "sha512-6i0HVbUfcKaTv+EG8ZTr75az7GFXcLYk9UyLEg7Notv/Ma+z/UG3TCoz6GiNeOrn1E/e63I0X/Hpw18jHOTUnA==", - "dev": true - }, - "parallel-transform": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/parallel-transform/-/parallel-transform-1.1.0.tgz", - "integrity": "sha1-1BDwZbBdojCB/NEPKIVMKb2jOwY=", - "dev": true, - "requires": { - "cyclist": "~0.2.2", - "inherits": "^2.0.3", - "readable-stream": "^2.1.5" - } - }, - "parse-asn1": { - "version": "5.1.1", - "resolved": "https://registry.npmjs.org/parse-asn1/-/parse-asn1-5.1.1.tgz", - "integrity": "sha512-KPx7flKXg775zZpnp9SxJlz00gTd4BmJ2yJufSc44gMCRrRQ7NSzAcSJQfifuOLgW6bEi+ftrALtsgALeB2Adw==", - "dev": true, - "requires": { - "asn1.js": "^4.0.0", - "browserify-aes": "^1.0.0", - "create-hash": "^1.1.0", - "evp_bytestokey": "^1.0.0", - "pbkdf2": "^3.0.3" - } - }, - "parse-passwd": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/parse-passwd/-/parse-passwd-1.0.0.tgz", - "integrity": "sha1-bVuTSkVpk7I9N/QKOC1vFmao5cY=", - "dev": true - }, - "pascalcase": { - "version": "0.1.1", - "resolved": "https://registry.npmjs.org/pascalcase/-/pascalcase-0.1.1.tgz", - "integrity": "sha1-s2PlXoAGym/iF4TS2yK9FdeRfxQ=", - "dev": true - }, - "path-browserify": { - "version": "0.0.0", - "resolved": "https://registry.npmjs.org/path-browserify/-/path-browserify-0.0.0.tgz", - "integrity": "sha1-oLhwcpquIUAFt9UDLsLLuw+0RRo=", - "dev": true - }, - "path-dirname": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/path-dirname/-/path-dirname-1.0.2.tgz", - "integrity": "sha1-zDPSTVJeCZpTiMAzbG4yuRYGCeA=", - "dev": true - }, - "path-exists": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-3.0.0.tgz", - "integrity": "sha1-zg6+ql94yxiSXqfYENe1mwEP1RU=", - "dev": true - }, - "path-is-absolute": { - "version": "1.0.1", - 
"resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", - "integrity": "sha1-F0uSaHNVNP+8es5r9TpanhtcX18=", - "dev": true - }, - "path-key": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/path-key/-/path-key-2.0.1.tgz", - "integrity": "sha1-QRyttXTFoUDTpLGRDUDYDMn0C0A=", - "dev": true - }, - "path-parse": { - "version": "1.0.6", - "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz", - "integrity": "sha512-GSmOT2EbHrINBf9SR7CDELwlJ8AENk3Qn7OikK4nFYAu3Ote2+JYNVvkpAEQm3/TLNEJFD/xZJjzyxg3KBWOzw==", - "dev": true - }, - "pbkdf2": { - "version": "3.0.17", - "resolved": "https://registry.npmjs.org/pbkdf2/-/pbkdf2-3.0.17.tgz", - "integrity": "sha512-U/il5MsrZp7mGg3mSQfn742na2T+1/vHDCG5/iTI3X9MKUuYUZVLQhyRsg06mCgDBTd57TxzgZt7P+fYfjRLtA==", - "dev": true, - "requires": { - "create-hash": "^1.1.2", - "create-hmac": "^1.1.4", - "ripemd160": "^2.0.1", - "safe-buffer": "^5.0.1", - "sha.js": "^2.4.8" - } - }, - "pify": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz", - "integrity": "sha1-5aSs0sEB/fPZpNB/DbxNtJ3SgXY=", - "dev": true - }, - "pkg-dir": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-3.0.0.tgz", - "integrity": "sha512-/E57AYkoeQ25qkxMj5PBOVgF8Kiu/h7cYS30Z5+R7WaiCCBfLq58ZI/dSeaEKb9WVJV5n/03QwrN3IeWIFllvw==", - "dev": true, - "requires": { - "find-up": "^3.0.0" - } - }, - "posix-character-classes": { - "version": "0.1.1", - "resolved": "https://registry.npmjs.org/posix-character-classes/-/posix-character-classes-0.1.1.tgz", - "integrity": "sha1-AerA/jta9xoqbAL+q7jB/vfgDqs=", - "dev": true - }, - "process": { - "version": "0.11.10", - "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz", - "integrity": "sha1-czIwDoQBYb2j5podHZGn1LwW8YI=", - "dev": true - }, - "process-nextick-args": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.0.tgz", - "integrity": "sha512-MtEC1TqN0EU5nephaJ4rAtThHtC86dNN9qCuEhtshvpVBkAW5ZO7BASN9REnF9eoXGcRub+pFuKEpOHE+HbEMw==", - "dev": true - }, - "promise-inflight": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/promise-inflight/-/promise-inflight-1.0.1.tgz", - "integrity": "sha1-mEcocL8igTL8vdhoEputEsPAKeM=", - "dev": true - }, - "prop-types": { - "version": "15.6.2", - "resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.6.2.tgz", - "integrity": "sha512-3pboPvLiWD7dkI3qf3KbUe6hKFKa52w+AE0VCqECtf+QHAKgOL37tTaNCnuX1nAAQ4ZhyP+kYVKf8rLmJ/feDQ==", - "requires": { - "loose-envify": "^1.3.1", - "object-assign": "^4.1.1" - } - }, - "prr": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/prr/-/prr-1.0.1.tgz", - "integrity": "sha1-0/wRS6BplaRexok/SEzrHXj19HY=", - "dev": true - }, - "psl": { - "version": "1.1.31", - "resolved": "https://registry.npmjs.org/psl/-/psl-1.1.31.tgz", - "integrity": "sha512-/6pt4+C+T+wZUieKR620OpzN/LlnNKuWjy1iFLQ/UG35JqHlR/89MP1d96dUfkf6Dne3TuLQzOYEYshJ+Hx8mw==" - }, - "public-encrypt": { - "version": "4.0.3", - "resolved": "https://registry.npmjs.org/public-encrypt/-/public-encrypt-4.0.3.tgz", - "integrity": "sha512-zVpa8oKZSz5bTMTFClc1fQOnyyEzpl5ozpi1B5YcvBrdohMjH2rfsBtyXcuNuwjsDIXmBYlF2N5FlJYhR29t8Q==", - "dev": true, - "requires": { - "bn.js": "^4.1.0", - "browserify-rsa": "^4.0.0", - "create-hash": "^1.1.0", - "parse-asn1": "^5.0.0", - "randombytes": "^2.0.1", - "safe-buffer": "^5.1.2" - } - }, - "pump": { - "version": "3.0.0", - "resolved": 
"https://registry.npmjs.org/pump/-/pump-3.0.0.tgz", - "integrity": "sha512-LwZy+p3SFs1Pytd/jYct4wpv49HiYCqd9Rlc5ZVdk0V+8Yzv6jR5Blk3TRmPL1ft69TxP0IMZGJ+WPFU2BFhww==", - "dev": true, - "requires": { - "end-of-stream": "^1.1.0", - "once": "^1.3.1" - } - }, - "pumpify": { - "version": "1.5.1", - "resolved": "https://registry.npmjs.org/pumpify/-/pumpify-1.5.1.tgz", - "integrity": "sha512-oClZI37HvuUJJxSKKrC17bZ9Cu0ZYhEAGPsPUy9KlMUmv9dKX2o77RUmq7f3XjIxbwyGwYzbzQ1L2Ks8sIradQ==", - "dev": true, - "requires": { - "duplexify": "^3.6.0", - "inherits": "^2.0.3", - "pump": "^2.0.0" - }, - "dependencies": { - "pump": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/pump/-/pump-2.0.1.tgz", - "integrity": "sha512-ruPMNRkN3MHP1cWJc9OWr+T/xDP0jhXYCLfJcBuX54hhfIBnaQmAUMfDcG4DM5UMWByBbJY69QSphm3jtDKIkA==", - "dev": true, - "requires": { - "end-of-stream": "^1.1.0", - "once": "^1.3.1" - } - } - } - }, - "punycode": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", - "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==", - "dev": true - }, - "qs": { - "version": "6.6.0", - "resolved": "https://registry.npmjs.org/qs/-/qs-6.6.0.tgz", - "integrity": "sha512-KIJqT9jQJDQx5h5uAVPimw6yVg2SekOKu959OCtktD3FjzbpvaPr8i4zzg07DOMz+igA4W/aNM7OV8H37pFYfA==" - }, - "querystring": { - "version": "0.2.0", - "resolved": "https://registry.npmjs.org/querystring/-/querystring-0.2.0.tgz", - "integrity": "sha1-sgmEkgO7Jd+CDadW50cAWHhSFiA=", - "dev": true - }, - "querystring-es3": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/querystring-es3/-/querystring-es3-0.2.1.tgz", - "integrity": "sha1-nsYfeQSYdXB9aUFFlv2Qek1xHnM=", - "dev": true - }, - "randombytes": { - "version": "2.0.6", - "resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.0.6.tgz", - "integrity": "sha512-CIQ5OFxf4Jou6uOKe9t1AOgqpeU5fd70A8NPdHSGeYXqXsPe6peOwI0cUl88RWZ6sP1vPMV3avd/R6cZ5/sP1A==", - "dev": true, - "requires": { - "safe-buffer": "^5.1.0" - } - }, - "randomfill": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/randomfill/-/randomfill-1.0.4.tgz", - "integrity": "sha512-87lcbR8+MhcWcUiQ+9e+Rwx8MyR2P7qnt15ynUlbm3TU/fjbgz4GsvfSUDTemtCCtVCqb4ZcEFlyPNTh9bBTLw==", - "dev": true, - "requires": { - "randombytes": "^2.0.5", - "safe-buffer": "^5.1.0" - } - }, - "rc": { - "version": "1.2.8", - "resolved": "https://registry.npmjs.org/rc/-/rc-1.2.8.tgz", - "integrity": "sha512-y3bGgqKj3QBdxLbLkomlohkvsA8gdAiUQlSBJnBhfn+BPxg4bc62d8TcBW15wavDfgexCgccckhcZvywyQYPOw==", - "dev": true, - "optional": true, - "requires": { - "deep-extend": "^0.6.0", - "ini": "~1.3.0", - "minimist": "^1.2.0", - "strip-json-comments": "~2.0.1" - } - }, - "react": { - "version": "16.7.0", - "resolved": "https://registry.npmjs.org/react/-/react-16.7.0.tgz", - "integrity": "sha512-StCz3QY8lxTb5cl2HJxjwLFOXPIFQp+p+hxQfc8WE0QiLfCtIlKj8/+5tjjKm8uSTlAW+fCPaavGFS06V9Ar3A==", - "requires": { - "loose-envify": "^1.1.0", - "object-assign": "^4.1.1", - "prop-types": "^15.6.2", - "scheduler": "^0.12.0" - } - }, - "react-dom": { - "version": "16.7.0", - "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-16.7.0.tgz", - "integrity": "sha512-D0Ufv1ExCAmF38P2Uh1lwpminZFRXEINJe53zRAbm4KPwSyd6DY/uDoS0Blj9jvPpn1+wivKpZYc8aAAN/nAkg==", - "requires": { - "loose-envify": "^1.1.0", - "object-assign": "^4.1.1", - "prop-types": "^15.6.2", - "scheduler": "^0.12.0" - } - }, - "readable-stream": { - "version": "2.3.6", - "resolved": 
"https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.6.tgz", - "integrity": "sha512-tQtKA9WIAhBF3+VLAseyMqZeBjW0AHJoxOtYqSUZNJxauErmLbVm2FW1y+J/YA9dUrAC39ITejlZWhVIwawkKw==", - "dev": true, - "requires": { - "core-util-is": "~1.0.0", - "inherits": "~2.0.3", - "isarray": "~1.0.0", - "process-nextick-args": "~2.0.0", - "safe-buffer": "~5.1.1", - "string_decoder": "~1.1.1", - "util-deprecate": "~1.0.1" - } - }, - "readdirp": { - "version": "2.2.1", - "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-2.2.1.tgz", - "integrity": "sha512-1JU/8q+VgFZyxwrJ+SVIOsh+KywWGpds3NTqikiKpDMZWScmAYyKIgqkO+ARvNWJfXeXR1zxz7aHF4u4CyH6vQ==", - "dev": true, - "requires": { - "graceful-fs": "^4.1.11", - "micromatch": "^3.1.10", - "readable-stream": "^2.0.2" - } - }, - "regex-not": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/regex-not/-/regex-not-1.0.2.tgz", - "integrity": "sha512-J6SDjUgDxQj5NusnOtdFxDwN/+HWykR8GELwctJ7mdqhcyy1xEc4SRFHUXvxTp661YaVKAjfRLZ9cCqS6tn32A==", - "dev": true, - "requires": { - "extend-shallow": "^3.0.2", - "safe-regex": "^1.1.0" - } - }, - "remove-trailing-separator": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/remove-trailing-separator/-/remove-trailing-separator-1.1.0.tgz", - "integrity": "sha1-wkvOKig62tW8P1jg1IJJuSN52O8=", - "dev": true - }, - "repeat-element": { - "version": "1.1.3", - "resolved": "https://registry.npmjs.org/repeat-element/-/repeat-element-1.1.3.tgz", - "integrity": "sha512-ahGq0ZnV5m5XtZLMb+vP76kcAM5nkLqk0lpqAuojSKGgQtn4eRi4ZZGm2olo2zKFH+sMsWaqOCW1dqAnOru72g==", - "dev": true - }, - "repeat-string": { - "version": "1.6.1", - "resolved": "https://registry.npmjs.org/repeat-string/-/repeat-string-1.6.1.tgz", - "integrity": "sha1-jcrkcOHIirwtYA//Sndihtp15jc=", - "dev": true - }, - "require-directory": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", - "integrity": "sha1-jGStX9MNqxyXbiNE/+f3kqam30I=", - "dev": true - }, - "require-main-filename": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-1.0.1.tgz", - "integrity": "sha1-l/cXtp1IeE9fUmpsWqj/3aBVpNE=", - "dev": true - }, - "resolve": { - "version": "1.9.0", - "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.9.0.tgz", - "integrity": "sha512-TZNye00tI67lwYvzxCxHGjwTNlUV70io54/Ed4j6PscB8xVfuBJpRenI/o6dVk0cY0PYTY27AgCoGGxRnYuItQ==", - "dev": true, - "requires": { - "path-parse": "^1.0.6" - } - }, - "resolve-cwd": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-2.0.0.tgz", - "integrity": "sha1-AKn3OHVW4nA46uIyyqNypqWbZlo=", - "dev": true, - "requires": { - "resolve-from": "^3.0.0" - } - }, - "resolve-dir": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/resolve-dir/-/resolve-dir-1.0.1.tgz", - "integrity": "sha1-eaQGRMNivoLybv/nOcm7U4IEb0M=", - "dev": true, - "requires": { - "expand-tilde": "^2.0.0", - "global-modules": "^1.0.0" - } - }, - "resolve-from": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-3.0.0.tgz", - "integrity": "sha1-six699nWiBvItuZTM17rywoYh0g=", - "dev": true - }, - "resolve-url": { - "version": "0.2.1", - "resolved": "https://registry.npmjs.org/resolve-url/-/resolve-url-0.2.1.tgz", - "integrity": "sha1-LGN/53yJOv0qZj/iGqkIAGjiBSo=", - "dev": true - }, - "ret": { - "version": "0.1.15", - "resolved": "https://registry.npmjs.org/ret/-/ret-0.1.15.tgz", - "integrity": 
"sha512-TTlYpa+OL+vMMNG24xSlQGEJ3B/RzEfUlLct7b5G/ytav+wPrplCpVMFuwzXbkecJrb6IYo1iFb0S9v37754mg==", - "dev": true - }, - "rimraf": { - "version": "2.6.3", - "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.6.3.tgz", - "integrity": "sha512-mwqeW5XsA2qAejG46gYdENaxXjx9onRNCfn7L0duuP4hCuTIi/QO7PDK07KJfp1d+izWPrzEJDcSqBa0OZQriA==", - "dev": true, - "requires": { - "glob": "^7.1.3" - } - }, - "ripemd160": { - "version": "2.0.2", - "resolved": "https://registry.npmjs.org/ripemd160/-/ripemd160-2.0.2.tgz", - "integrity": "sha512-ii4iagi25WusVoiC4B4lq7pbXfAp3D9v5CwfkY33vffw2+pkDjY1D8GaN7spsxvCSx8dkPqOZCEZyfxcmJG2IA==", - "dev": true, - "requires": { - "hash-base": "^3.0.0", - "inherits": "^2.0.1" - } - }, - "run-queue": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/run-queue/-/run-queue-1.0.3.tgz", - "integrity": "sha1-6Eg5bwV9Ij8kOGkkYY4laUFh7Ec=", - "dev": true, - "requires": { - "aproba": "^1.1.1" - } - }, - "safe-buffer": { - "version": "5.1.2", - "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz", - "integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==", - "dev": true - }, - "safe-regex": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/safe-regex/-/safe-regex-1.1.0.tgz", - "integrity": "sha1-QKNmnzsHfR6UPURinhV91IAjvy4=", - "dev": true, - "requires": { - "ret": "~0.1.10" - } - }, - "safer-buffer": { - "version": "2.1.2", - "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", - "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==" - }, - "sax": { - "version": "1.2.4", - "resolved": "https://registry.npmjs.org/sax/-/sax-1.2.4.tgz", - "integrity": "sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw==", - "dev": true, - "optional": true - }, - "scheduler": { - "version": "0.12.0", - "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.12.0.tgz", - "integrity": "sha512-t7MBR28Akcp4Jm+QoR63XgAi9YgCUmgvDHqf5otgAj4QvdoBE4ImCX0ffehefePPG+aitiYHp0g/mW6s4Tp+dw==", - "requires": { - "loose-envify": "^1.1.0", - "object-assign": "^4.1.1" - } - }, - "schema-utils": { - "version": "0.4.7", - "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-0.4.7.tgz", - "integrity": "sha512-v/iwU6wvwGK8HbU9yi3/nhGzP0yGSuhQMzL6ySiec1FSrZZDkhm4noOSWzrNFo/jEc+SJY6jRTwuwbSXJPDUnQ==", - "dev": true, - "requires": { - "ajv": "^6.1.0", - "ajv-keywords": "^3.1.0" - } - }, - "semver": { - "version": "5.6.0", - "resolved": "https://registry.npmjs.org/semver/-/semver-5.6.0.tgz", - "integrity": "sha512-RS9R6R35NYgQn++fkDWaOmqGoj4Ek9gGs+DPxNUZKuwE183xjJroKvyo1IzVFeXvUrvmALy6FWD5xrdJT25gMg==", - "dev": true - }, - "serialize-javascript": { - "version": "1.6.1", - "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.6.1.tgz", - "integrity": "sha512-A5MOagrPFga4YaKQSWHryl7AXvbQkEqpw4NNYMTNYUNV51bA8ABHgYFpqKx+YFFrw59xMV1qGH1R4AgoNIVgCw==", - "dev": true - }, - "set-blocking": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/set-blocking/-/set-blocking-2.0.0.tgz", - "integrity": "sha1-BF+XgtARrppoA93TgrJDkrPYkPc=", - "dev": true - }, - "set-value": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz", - "integrity": "sha512-JxHc1weCN68wRY0fhCoXpyK55m/XPHafOmK4UWD7m2CI14GMcFypt4w/0+NV5f/ZMby2F6S2wwA7fgynh9gWSw==", - "dev": true, - "requires": { - "extend-shallow": 
"^2.0.1", - "is-extendable": "^0.1.1", - "is-plain-object": "^2.0.3", - "split-string": "^3.0.1" - }, - "dependencies": { - "extend-shallow": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", - "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", - "dev": true, - "requires": { - "is-extendable": "^0.1.0" - } - } - } - }, - "setimmediate": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz", - "integrity": "sha1-KQy7Iy4waULX1+qbg3Mqt4VvgoU=", - "dev": true - }, - "sha.js": { - "version": "2.4.11", - "resolved": "https://registry.npmjs.org/sha.js/-/sha.js-2.4.11.tgz", - "integrity": "sha512-QMEp5B7cftE7APOjk5Y6xgrbWu+WkLVQwk8JNjZ8nKRciZaByEW6MubieAiToS7+dwvrjGhH8jRXz3MVd0AYqQ==", - "dev": true, - "requires": { - "inherits": "^2.0.1", - "safe-buffer": "^5.0.1" - } - }, - "shebang-command": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-1.2.0.tgz", - "integrity": "sha1-RKrGW2lbAzmJaMOfNj/uXer98eo=", - "dev": true, - "requires": { - "shebang-regex": "^1.0.0" - } - }, - "shebang-regex": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-1.0.0.tgz", - "integrity": "sha1-2kL0l0DAtC2yypcoVxyxkMmO/qM=", - "dev": true - }, - "signal-exit": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.2.tgz", - "integrity": "sha1-tf3AjxKH6hF4Yo5BXiUTK3NkbG0=", - "dev": true - }, - "snapdragon": { - "version": "0.8.2", - "resolved": "https://registry.npmjs.org/snapdragon/-/snapdragon-0.8.2.tgz", - "integrity": "sha512-FtyOnWN/wCHTVXOMwvSv26d+ko5vWlIDD6zoUJ7LW8vh+ZBC8QdljveRP+crNrtBwioEUWy/4dMtbBjA4ioNlg==", - "dev": true, - "requires": { - "base": "^0.11.1", - "debug": "^2.2.0", - "define-property": "^0.2.5", - "extend-shallow": "^2.0.1", - "map-cache": "^0.2.2", - "source-map": "^0.5.6", - "source-map-resolve": "^0.5.0", - "use": "^3.1.0" - }, - "dependencies": { - "define-property": { - "version": "0.2.5", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", - "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", - "dev": true, - "requires": { - "is-descriptor": "^0.1.0" - } - }, - "extend-shallow": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz", - "integrity": "sha1-Ua99YUrZqfYQ6huvu5idaxxWiQ8=", - "dev": true, - "requires": { - "is-extendable": "^0.1.0" - } - } - } - }, - "snapdragon-node": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/snapdragon-node/-/snapdragon-node-2.1.1.tgz", - "integrity": "sha512-O27l4xaMYt/RSQ5TR3vpWCAB5Kb/czIcqUFOM/C4fYcLnbZUc1PkjTAMjof2pBWaSTwOUd6qUHcFGVGj7aIwnw==", - "dev": true, - "requires": { - "define-property": "^1.0.0", - "isobject": "^3.0.0", - "snapdragon-util": "^3.0.1" - }, - "dependencies": { - "define-property": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-1.0.0.tgz", - "integrity": "sha1-dp66rz9KY6rTr56NMEybvnm/sOY=", - "dev": true, - "requires": { - "is-descriptor": "^1.0.0" - } - }, - "is-accessor-descriptor": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.0.tgz", - "integrity": "sha512-m5hnHTkcVsPfqx3AKlyttIPb7J+XykHvJP2B9bZDjlhLIoEq4XoK64Vg7boZlVWYK6LUY94dYPEE7Lh0ZkZKcQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-data-descriptor": { - "version": "1.0.0", 
- "resolved": "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.0.tgz", - "integrity": "sha512-jbRXy1FmtAoCjQkVmIVYwuuqDFUbaOeDjmed1tOGPrsMhtJA4rD9tkgA0F1qJ3gRFRXcHYVkdeaP50Q5rE/jLQ==", - "dev": true, - "requires": { - "kind-of": "^6.0.0" - } - }, - "is-descriptor": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.2.tgz", - "integrity": "sha512-2eis5WqQGV7peooDyLmNEPUrps9+SXX5c9pL3xEB+4e9HnGuDa7mB7kHxHw4CbqS9k1T2hOH3miL8n8WtiYVtg==", - "dev": true, - "requires": { - "is-accessor-descriptor": "^1.0.0", - "is-data-descriptor": "^1.0.0", - "kind-of": "^6.0.2" - } - } - } - }, - "snapdragon-util": { - "version": "3.0.1", - "resolved": "https://registry.npmjs.org/snapdragon-util/-/snapdragon-util-3.0.1.tgz", - "integrity": "sha512-mbKkMdQKsjX4BAL4bRYTj21edOf8cN7XHdYUJEe+Zn99hVEYcMvKPct1IqNe7+AZPirn8BCDOQBHQZknqmKlZQ==", - "dev": true, - "requires": { - "kind-of": "^3.2.0" - }, - "dependencies": { - "kind-of": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", - "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } - }, - "source-list-map": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/source-list-map/-/source-list-map-2.0.1.tgz", - "integrity": "sha512-qnQ7gVMxGNxsiL4lEuJwe/To8UnK7fAnmbGEEH8RpLouuKbeEm0lhbQVFIrNSuB+G7tVrAlVsZgETT5nljf+Iw==", - "dev": true - }, - "source-map": { - "version": "0.5.7", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.5.7.tgz", - "integrity": "sha1-igOdLRAh0i0eoUyA2OpGi6LvP8w=", - "dev": true - }, - "source-map-resolve": { - "version": "0.5.2", - "resolved": "https://registry.npmjs.org/source-map-resolve/-/source-map-resolve-0.5.2.tgz", - "integrity": "sha512-MjqsvNwyz1s0k81Goz/9vRBe9SZdB09Bdw+/zYyO+3CuPk6fouTaxscHkgtE8jKvf01kVfl8riHzERQ/kefaSA==", - "dev": true, - "requires": { - "atob": "^2.1.1", - "decode-uri-component": "^0.2.0", - "resolve-url": "^0.2.1", - "source-map-url": "^0.4.0", - "urix": "^0.1.0" - } - }, - "source-map-support": { - "version": "0.5.10", - "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.10.tgz", - "integrity": "sha512-YfQ3tQFTK/yzlGJuX8pTwa4tifQj4QS2Mj7UegOu8jAz59MqIiMGPXxQhVQiIMNzayuUSF/jEuVnfFF5JqybmQ==", - "dev": true, - "requires": { - "buffer-from": "^1.0.0", - "source-map": "^0.6.0" - }, - "dependencies": { - "source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true - } - } - }, - "source-map-url": { - "version": "0.4.0", - "resolved": "https://registry.npmjs.org/source-map-url/-/source-map-url-0.4.0.tgz", - "integrity": "sha1-PpNdfd1zYxuXZZlW1VEo6HtQhKM=", - "dev": true - }, - "split-string": { - "version": "3.1.0", - "resolved": "https://registry.npmjs.org/split-string/-/split-string-3.1.0.tgz", - "integrity": "sha512-NzNVhJDYpwceVVii8/Hu6DKfD2G+NrQHlS/V/qgv763EYudVwEcMQNxd2lh+0VrUByXN/oJkl5grOhYWvQUYiw==", - "dev": true, - "requires": { - "extend-shallow": "^3.0.0" - } - }, - "sprintf-js": { - "version": "1.0.3", - "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", - "integrity": "sha1-BOaSb2YolTVPPdAVIDYzuFcpfiw=", - "dev": true - }, - "ssri": { - "version": "6.0.2", - "resolved": "https://registry.npmjs.org/ssri/-/ssri-6.0.2.tgz", - "integrity": 
"sha512-cepbSq/neFK7xB6A50KHN0xHDotYzq58wWCa5LeWqnPrHG8GzfEjO/4O8kpmcGW+oaxkvhEJCWgbgNk4/ZV93Q==", - "dev": true, - "requires": { - "figgy-pudding": "^3.5.1" - } - }, - "static-extend": { - "version": "0.1.2", - "resolved": "https://registry.npmjs.org/static-extend/-/static-extend-0.1.2.tgz", - "integrity": "sha1-YICcOcv/VTNyJv1eC1IPNB8ftcY=", - "dev": true, - "requires": { - "define-property": "^0.2.5", - "object-copy": "^0.1.0" - }, - "dependencies": { - "define-property": { - "version": "0.2.5", - "resolved": "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz", - "integrity": "sha1-w1se+RjsPJkPmlvFe+BKrOxcgRY=", - "dev": true, - "requires": { - "is-descriptor": "^0.1.0" - } - } - } - }, - "stream-browserify": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/stream-browserify/-/stream-browserify-2.0.1.tgz", - "integrity": "sha1-ZiZu5fm9uZQKTkUUyvtDu3Hlyds=", - "dev": true, - "requires": { - "inherits": "~2.0.1", - "readable-stream": "^2.0.2" - } - }, - "stream-each": { - "version": "1.2.3", - "resolved": "https://registry.npmjs.org/stream-each/-/stream-each-1.2.3.tgz", - "integrity": "sha512-vlMC2f8I2u/bZGqkdfLQW/13Zihpej/7PmSiMQsbYddxuTsJp8vRe2x2FvVExZg7FaOds43ROAuFJwPR4MTZLw==", - "dev": true, - "requires": { - "end-of-stream": "^1.1.0", - "stream-shift": "^1.0.0" - } - }, - "stream-http": { - "version": "2.8.3", - "resolved": "https://registry.npmjs.org/stream-http/-/stream-http-2.8.3.tgz", - "integrity": "sha512-+TSkfINHDo4J+ZobQLWiMouQYB+UVYFttRA94FpEzzJ7ZdqcL4uUUQ7WkdkI4DSozGmgBUE/a47L+38PenXhUw==", - "dev": true, - "requires": { - "builtin-status-codes": "^3.0.0", - "inherits": "^2.0.1", - "readable-stream": "^2.3.6", - "to-arraybuffer": "^1.0.0", - "xtend": "^4.0.0" - } - }, - "stream-shift": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/stream-shift/-/stream-shift-1.0.0.tgz", - "integrity": "sha1-1cdSgl5TZ+eG944Y5EXqIjoVWVI=", - "dev": true - }, - "string-width": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/string-width/-/string-width-1.0.2.tgz", - "integrity": "sha1-EYvfW4zcUaKn5w0hHgfisLmxB9M=", - "dev": true, - "requires": { - "code-point-at": "^1.0.0", - "is-fullwidth-code-point": "^1.0.0", - "strip-ansi": "^3.0.0" - } - }, - "string_decoder": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz", - "integrity": "sha512-n/ShnvDi6FHbbVfviro+WojiFzv+s8MPMHBczVePfUpDJLwoLT0ht1l4YwBCbi8pJAveEEdnkHyPyTP/mzRfwg==", - "dev": true, - "requires": { - "safe-buffer": "~5.1.0" - } - }, - "strip-ansi": { - "version": "3.0.1", - "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-3.0.1.tgz", - "integrity": "sha1-ajhfuIU9lS1f8F0Oiq+UJ43GPc8=", - "dev": true, - "requires": { - "ansi-regex": "^2.0.0" - } - }, - "strip-eof": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/strip-eof/-/strip-eof-1.0.0.tgz", - "integrity": "sha1-u0P/VZim6wXYm1n80SnJgzE2Br8=", - "dev": true - }, - "strip-json-comments": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-2.0.1.tgz", - "integrity": "sha1-PFMZQukIwml8DsNEhYwobHygpgo=", - "dev": true, - "optional": true - }, - "supports-color": { - "version": "5.5.0", - "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", - "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", - "dev": true, - "requires": { - "has-flag": "^3.0.0" - } - }, - "tapable": { - 
"version": "1.1.1", - "resolved": "https://registry.npmjs.org/tapable/-/tapable-1.1.1.tgz", - "integrity": "sha512-9I2ydhj8Z9veORCw5PRm4u9uebCn0mcCa6scWoNcbZ6dAtoo2618u9UUzxgmsCOreJpqDDuv61LvwofW7hLcBA==", - "dev": true - }, - "tar": { - "version": "4.4.8", - "resolved": "https://registry.npmjs.org/tar/-/tar-4.4.8.tgz", - "integrity": "sha512-LzHF64s5chPQQS0IYBn9IN5h3i98c12bo4NCO7e0sGM2llXQ3p2FGC5sdENN4cTW48O915Sh+x+EXx7XW96xYQ==", - "dev": true, - "optional": true, - "requires": { - "chownr": "^1.1.1", - "fs-minipass": "^1.2.5", - "minipass": "^2.3.4", - "minizlib": "^1.1.1", - "mkdirp": "^0.5.0", - "safe-buffer": "^5.1.2", - "yallist": "^3.0.2" - } - }, - "terser": { - "version": "3.14.1", - "resolved": "https://registry.npmjs.org/terser/-/terser-3.14.1.tgz", - "integrity": "sha512-NSo3E99QDbYSMeJaEk9YW2lTg3qS9V0aKGlb+PlOrei1X02r1wSBHCNX/O+yeTRFSWPKPIGj6MqvvdqV4rnVGw==", - "dev": true, - "requires": { - "commander": "~2.17.1", - "source-map": "~0.6.1", - "source-map-support": "~0.5.6" - }, - "dependencies": { - "commander": { - "version": "2.17.1", - "resolved": "https://registry.npmjs.org/commander/-/commander-2.17.1.tgz", - "integrity": "sha512-wPMUt6FnH2yzG95SA6mzjQOEKUU3aLaDEmzs1ti+1E9h+CsrZghRlqEM/EJ4KscsQVG8uNN4uVreUeT8+drlgg==", - "dev": true - }, - "source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true - } - } - }, - "terser-webpack-plugin": { - "version": "1.2.1", - "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-1.2.1.tgz", - "integrity": "sha512-GGSt+gbT0oKcMDmPx4SRSfJPE1XaN3kQRWG4ghxKQw9cn5G9x6aCKSsgYdvyM0na9NJ4Drv0RG6jbBByZ5CMjw==", - "dev": true, - "requires": { - "cacache": "^11.0.2", - "find-cache-dir": "^2.0.0", - "schema-utils": "^1.0.0", - "serialize-javascript": "^1.4.0", - "source-map": "^0.6.1", - "terser": "^3.8.1", - "webpack-sources": "^1.1.0", - "worker-farm": "^1.5.2" - }, - "dependencies": { - "schema-utils": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz", - "integrity": "sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g==", - "dev": true, - "requires": { - "ajv": "^6.1.0", - "ajv-errors": "^1.0.0", - "ajv-keywords": "^3.1.0" - } - }, - "source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true - } - } - }, - "through2": { - "version": "2.0.5", - "resolved": "https://registry.npmjs.org/through2/-/through2-2.0.5.tgz", - "integrity": "sha512-/mrRod8xqpA+IHSLyGCQ2s8SPHiCDEeQJSep1jqLYeEUClOFG2Qsh+4FU6G9VeqpZnGW/Su8LQGc4YKni5rYSQ==", - "dev": true, - "requires": { - "readable-stream": "~2.3.6", - "xtend": "~4.0.1" - } - }, - "timers-browserify": { - "version": "2.0.10", - "resolved": "https://registry.npmjs.org/timers-browserify/-/timers-browserify-2.0.10.tgz", - "integrity": "sha512-YvC1SV1XdOUaL6gx5CoGroT3Gu49pK9+TZ38ErPldOWW4j49GI1HKs9DV+KGq/w6y+LZ72W1c8cKz2vzY+qpzg==", - "dev": true, - "requires": { - "setimmediate": "^1.0.4" - } - }, - "to-arraybuffer": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/to-arraybuffer/-/to-arraybuffer-1.0.1.tgz", - "integrity": "sha1-fSKbH8xjfkZsoIEYCDanqr/4P0M=", - "dev": true - }, - "to-object-path": { - 
"version": "0.3.0", - "resolved": "https://registry.npmjs.org/to-object-path/-/to-object-path-0.3.0.tgz", - "integrity": "sha1-KXWIt7Dn4KwI4E5nL4XB9JmeF68=", - "dev": true, - "requires": { - "kind-of": "^3.0.2" - }, - "dependencies": { - "kind-of": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz", - "integrity": "sha1-MeohpzS6ubuw8yRm2JOupR5KPGQ=", - "dev": true, - "requires": { - "is-buffer": "^1.1.5" - } - } - } - }, - "to-regex": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/to-regex/-/to-regex-3.0.2.tgz", - "integrity": "sha512-FWtleNAtZ/Ki2qtqej2CXTOayOH9bHDQF+Q48VpWyDXjbYxA4Yz8iDB31zXOBUlOHHKidDbqGVrTUvQMPmBGBw==", - "dev": true, - "requires": { - "define-property": "^2.0.2", - "extend-shallow": "^3.0.2", - "regex-not": "^1.0.2", - "safe-regex": "^1.1.0" - } - }, - "to-regex-range": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-2.1.1.tgz", - "integrity": "sha1-fIDBe53+vlmeJzZ+DU3VWQFB2zg=", - "dev": true, - "requires": { - "is-number": "^3.0.0", - "repeat-string": "^1.6.1" - } - }, - "ts-loader": { - "version": "5.3.3", - "resolved": "https://registry.npmjs.org/ts-loader/-/ts-loader-5.3.3.tgz", - "integrity": "sha512-KwF1SplmOJepnoZ4eRIloH/zXL195F51skt7reEsS6jvDqzgc/YSbz9b8E07GxIUwLXdcD4ssrJu6v8CwaTafA==", - "dev": true, - "requires": { - "chalk": "^2.3.0", - "enhanced-resolve": "^4.0.0", - "loader-utils": "^1.0.2", - "micromatch": "^3.1.4", - "semver": "^5.0.1" - } - }, - "tslib": { - "version": "1.9.3", - "resolved": "https://registry.npmjs.org/tslib/-/tslib-1.9.3.tgz", - "integrity": "sha512-4krF8scpejhaOgqzBEcGM7yDIEfi0/8+8zDRZhNZZ2kjmHJ4hv3zCbQWxoJGz1iw5U0Jl0nma13xzHXcncMavQ==", - "dev": true - }, - "tslint": { - "version": "5.12.1", - "resolved": "https://registry.npmjs.org/tslint/-/tslint-5.12.1.tgz", - "integrity": "sha512-sfodBHOucFg6egff8d1BvuofoOQ/nOeYNfbp7LDlKBcLNrL3lmS5zoiDGyOMdT7YsEXAwWpTdAHwOGOc8eRZAw==", - "dev": true, - "requires": { - "babel-code-frame": "^6.22.0", - "builtin-modules": "^1.1.1", - "chalk": "^2.3.0", - "commander": "^2.12.1", - "diff": "^3.2.0", - "glob": "^7.1.1", - "js-yaml": "^3.7.0", - "minimatch": "^3.0.4", - "resolve": "^1.3.2", - "semver": "^5.3.0", - "tslib": "^1.8.0", - "tsutils": "^2.27.2" - } - }, - "tslint-react": { - "version": "3.6.0", - "resolved": "https://registry.npmjs.org/tslint-react/-/tslint-react-3.6.0.tgz", - "integrity": "sha512-AIv1QcsSnj7e9pFir6cJ6vIncTqxfqeFF3Lzh8SuuBljueYzEAtByuB6zMaD27BL0xhMEqsZ9s5eHuCONydjBw==", - "dev": true, - "requires": { - "tsutils": "^2.13.1" - } - }, - "tsutils": { - "version": "2.29.0", - "resolved": "https://registry.npmjs.org/tsutils/-/tsutils-2.29.0.tgz", - "integrity": "sha512-g5JVHCIJwzfISaXpXE1qvNalca5Jwob6FjI4AoPlqMusJ6ftFE7IkkFoMhVLRgK+4Kx3gkzb8UZK5t5yTTvEmA==", - "dev": true, - "requires": { - "tslib": "^1.8.1" - } - }, - "tty-browserify": { - "version": "0.0.0", - "resolved": "https://registry.npmjs.org/tty-browserify/-/tty-browserify-0.0.0.tgz", - "integrity": "sha1-oVe6QC2iTpv5V/mqadUk7tQpAaY=", - "dev": true - }, - "typedarray": { - "version": "0.0.6", - "resolved": "https://registry.npmjs.org/typedarray/-/typedarray-0.0.6.tgz", - "integrity": "sha1-hnrHTjhkGHsdPUfZlqeOxciDB3c=", - "dev": true - }, - "typescript": { - "version": "3.2.2", - "resolved": "https://registry.npmjs.org/typescript/-/typescript-3.2.2.tgz", - "integrity": "sha512-VCj5UiSyHBjwfYacmDuc/NOk4QQixbE+Wn7MFJuS0nRuPQbof132Pw4u53dm264O8LPc2MVsc7RJNml5szurkg==", - "dev": true - }, - "union-value": { - 
"version": "1.0.1", - "resolved": "https://registry.npmjs.org/union-value/-/union-value-1.0.1.tgz", - "integrity": "sha512-tJfXmxMeWYnczCVs7XAEvIV7ieppALdyepWMkHkwciRpZraG/xwT+s2JN8+pr1+8jCRf80FFzvr+MpQeeoF4Xg==", - "dev": true, - "requires": { - "arr-union": "^3.1.0", - "get-value": "^2.0.6", - "is-extendable": "^0.1.1", - "set-value": "^2.0.1" - } - }, - "unique-filename": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/unique-filename/-/unique-filename-1.1.1.tgz", - "integrity": "sha512-Vmp0jIp2ln35UTXuryvjzkjGdRyf9b2lTXuSYUiPmzRcl3FDtYqAwOnTJkAngD9SWhnoJzDbTKwaOrZ+STtxNQ==", - "dev": true, - "requires": { - "unique-slug": "^2.0.0" - } - }, - "unique-slug": { - "version": "2.0.1", - "resolved": "https://registry.npmjs.org/unique-slug/-/unique-slug-2.0.1.tgz", - "integrity": "sha512-n9cU6+gITaVu7VGj1Z8feKMmfAjEAQGhwD9fE3zvpRRa0wEIx8ODYkVGfSc94M2OX00tUFV8wH3zYbm1I8mxFg==", - "dev": true, - "requires": { - "imurmurhash": "^0.1.4" - } - }, - "unset-value": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/unset-value/-/unset-value-1.0.0.tgz", - "integrity": "sha1-g3aHP30jNRef+x5vw6jtDfyKtVk=", - "dev": true, - "requires": { - "has-value": "^0.3.1", - "isobject": "^3.0.0" - }, - "dependencies": { - "has-value": { - "version": "0.3.1", - "resolved": "https://registry.npmjs.org/has-value/-/has-value-0.3.1.tgz", - "integrity": "sha1-ex9YutpiyoJ+wKIHgCVlSEWZXh8=", - "dev": true, - "requires": { - "get-value": "^2.0.3", - "has-values": "^0.1.4", - "isobject": "^2.0.0" - }, - "dependencies": { - "isobject": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/isobject/-/isobject-2.1.0.tgz", - "integrity": "sha1-8GVWEJaj8dou9GJy+BXIQNh+DIk=", - "dev": true, - "requires": { - "isarray": "1.0.0" - } - } - } - }, - "has-values": { - "version": "0.1.4", - "resolved": "https://registry.npmjs.org/has-values/-/has-values-0.1.4.tgz", - "integrity": "sha1-bWHeldkd/Km5oCCJrThL/49it3E=", - "dev": true - } - } - }, - "upath": { - "version": "1.1.0", - "resolved": "https://registry.npmjs.org/upath/-/upath-1.1.0.tgz", - "integrity": "sha512-bzpH/oBhoS/QI/YtbkqCg6VEiPYjSZtrHQM6/QnJS6OL9pKUFLqb3aFh4Scvwm45+7iAgiMkLhSbaZxUqmrprw==", - "dev": true - }, - "uri-js": { - "version": "4.2.2", - "resolved": "https://registry.npmjs.org/uri-js/-/uri-js-4.2.2.tgz", - "integrity": "sha512-KY9Frmirql91X2Qgjry0Wd4Y+YTdrdZheS8TFwvkbLWf/G5KNJDCh6pKL5OZctEW4+0Baa5idK2ZQuELRwPznQ==", - "dev": true, - "requires": { - "punycode": "^2.1.0" - } - }, - "urix": { - "version": "0.1.0", - "resolved": "https://registry.npmjs.org/urix/-/urix-0.1.0.tgz", - "integrity": "sha1-2pN/emLiH+wf0Y1Js1wpNQZ6bHI=", - "dev": true - }, - "url": { - "version": "0.11.0", - "resolved": "https://registry.npmjs.org/url/-/url-0.11.0.tgz", - "integrity": "sha1-ODjpfPxgUh63PFJajlW/3Z4uKPE=", - "dev": true, - "requires": { - "punycode": "1.3.2", - "querystring": "0.2.0" - }, - "dependencies": { - "punycode": { - "version": "1.3.2", - "resolved": "https://registry.npmjs.org/punycode/-/punycode-1.3.2.tgz", - "integrity": "sha1-llOgNvt8HuQjQvIyXM7v6jkmxI0=", - "dev": true - } - } - }, - "use": { - "version": "3.1.1", - "resolved": "https://registry.npmjs.org/use/-/use-3.1.1.tgz", - "integrity": "sha512-cwESVXlO3url9YWlFW/TA9cshCEhtu7IKJ/p5soJ/gGpj7vbvFrAY/eIioQ6Dw23KjZhYgiIo8HOs1nQ2vr/oQ==", - "dev": true - }, - "util": { - "version": "0.10.4", - "resolved": "https://registry.npmjs.org/util/-/util-0.10.4.tgz", - "integrity": 
"sha512-0Pm9hTQ3se5ll1XihRic3FDIku70C+iHUdT/W926rSgHV5QgXsYbKZN8MSC3tJtSkhuROzvsQjAaFENRXr+19A==", - "dev": true, - "requires": { - "inherits": "2.0.3" - } - }, - "util-deprecate": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", - "integrity": "sha1-RQ1Nyfpw3nMnYvvS1KKJgUGaDM8=", - "dev": true - }, - "v8-compile-cache": { - "version": "2.0.2", - "resolved": "https://registry.npmjs.org/v8-compile-cache/-/v8-compile-cache-2.0.2.tgz", - "integrity": "sha512-1wFuMUIM16MDJRCrpbpuEPTUGmM5QMUg0cr3KFwra2XgOgFcPGDQHDh3CszSCD2Zewc/dh/pamNEW8CbfDebUw==", - "dev": true - }, - "vm-browserify": { - "version": "0.0.4", - "resolved": "https://registry.npmjs.org/vm-browserify/-/vm-browserify-0.0.4.tgz", - "integrity": "sha1-XX6kW7755Kb/ZflUOOCofDV9WnM=", - "dev": true, - "requires": { - "indexof": "0.0.1" - } - }, - "watchpack": { - "version": "1.6.0", - "resolved": "https://registry.npmjs.org/watchpack/-/watchpack-1.6.0.tgz", - "integrity": "sha512-i6dHe3EyLjMmDlU1/bGQpEw25XSjkJULPuAVKCbNRefQVq48yXKUpwg538F7AZTf9kyr57zj++pQFltUa5H7yA==", - "dev": true, - "requires": { - "chokidar": "^2.0.2", - "graceful-fs": "^4.1.2", - "neo-async": "^2.5.0" - } - }, - "webpack": { - "version": "4.28.4", - "resolved": "https://registry.npmjs.org/webpack/-/webpack-4.28.4.tgz", - "integrity": "sha512-NxjD61WsK/a3JIdwWjtIpimmvE6UrRi3yG54/74Hk9rwNj5FPkA4DJCf1z4ByDWLkvZhTZE+P3C/eh6UD5lDcw==", - "dev": true, - "requires": { - "@webassemblyjs/ast": "1.7.11", - "@webassemblyjs/helper-module-context": "1.7.11", - "@webassemblyjs/wasm-edit": "1.7.11", - "@webassemblyjs/wasm-parser": "1.7.11", - "acorn": "^5.6.2", - "acorn-dynamic-import": "^3.0.0", - "ajv": "^6.1.0", - "ajv-keywords": "^3.1.0", - "chrome-trace-event": "^1.0.0", - "enhanced-resolve": "^4.1.0", - "eslint-scope": "^4.0.0", - "json-parse-better-errors": "^1.0.2", - "loader-runner": "^2.3.0", - "loader-utils": "^1.1.0", - "memory-fs": "~0.4.1", - "micromatch": "^3.1.8", - "mkdirp": "~0.5.0", - "neo-async": "^2.5.0", - "node-libs-browser": "^2.0.0", - "schema-utils": "^0.4.4", - "tapable": "^1.1.0", - "terser-webpack-plugin": "^1.1.0", - "watchpack": "^1.5.0", - "webpack-sources": "^1.3.0" - } - }, - "webpack-cli": { - "version": "3.2.1", - "resolved": "https://registry.npmjs.org/webpack-cli/-/webpack-cli-3.2.1.tgz", - "integrity": "sha512-jeJveHwz/vwpJ3B8bxEL5a/rVKIpRNJDsKggfKnxuYeohNDW4Y/wB9N/XHJA093qZyS0r6mYL+/crLsIol4WKA==", - "dev": true, - "requires": { - "chalk": "^2.4.1", - "cross-spawn": "^6.0.5", - "enhanced-resolve": "^4.1.0", - "findup-sync": "^2.0.0", - "global-modules": "^1.0.0", - "global-modules-path": "^2.3.0", - "import-local": "^2.0.0", - "interpret": "^1.1.0", - "lightercollective": "^0.1.0", - "loader-utils": "^1.1.0", - "supports-color": "^5.5.0", - "v8-compile-cache": "^2.0.2", - "yargs": "^12.0.4" - } - }, - "webpack-sources": { - "version": "1.3.0", - "resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-1.3.0.tgz", - "integrity": "sha512-OiVgSrbGu7NEnEvQJJgdSFPl2qWKkWq5lHMhgiToIiN9w34EBnjYzSYs+VbL5KoYiLNtFFa7BZIKxRED3I32pA==", - "dev": true, - "requires": { - "source-list-map": "^2.0.0", - "source-map": "~0.6.1" - }, - "dependencies": { - "source-map": { - "version": "0.6.1", - "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz", - "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==", - "dev": true - } - } - }, - "whatwg-fetch": { - "version": "3.0.0", - "resolved": 
"https://registry.npmjs.org/whatwg-fetch/-/whatwg-fetch-3.0.0.tgz", - "integrity": "sha512-9GSJUgz1D4MfyKU7KRqwOjXCXTqWdFNvEr7eUBYchQiVc744mqK/MzXPNR2WsPkmkOa4ywfg8C2n8h+13Bey1Q==" - }, - "which": { - "version": "1.3.1", - "resolved": "https://registry.npmjs.org/which/-/which-1.3.1.tgz", - "integrity": "sha512-HxJdYWq1MTIQbJ3nw0cqssHoTNU267KlrDuGZ1WYlxDStUtKUhOaJmh112/TZmHxxUfuJqPXSOm7tDyas0OSIQ==", - "dev": true, - "requires": { - "isexe": "^2.0.0" - } - }, - "which-module": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/which-module/-/which-module-2.0.0.tgz", - "integrity": "sha1-2e8H3Od7mQK4o6j6SzHD4/fm6Ho=", - "dev": true - }, - "wide-align": { - "version": "1.1.3", - "resolved": "https://registry.npmjs.org/wide-align/-/wide-align-1.1.3.tgz", - "integrity": "sha512-QGkOQc8XL6Bt5PwnsExKBPuMKBxnGxWWW3fU55Xt4feHozMUhdUMaBCk290qpm/wG5u/RSKzwdAC4i51YigihA==", - "dev": true, - "optional": true, - "requires": { - "string-width": "^1.0.2 || 2" - } - }, - "worker-farm": { - "version": "1.6.0", - "resolved": "https://registry.npmjs.org/worker-farm/-/worker-farm-1.6.0.tgz", - "integrity": "sha512-6w+3tHbM87WnSWnENBUvA2pxJPLhQUg5LKwUQHq3r+XPhIM+Gh2R5ycbwPCyuGbNg+lPgdcnQUhuC02kJCvffQ==", - "dev": true, - "requires": { - "errno": "~0.1.7" - } - }, - "wrap-ansi": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-2.1.0.tgz", - "integrity": "sha1-2Pw9KE3QV5T+hJc8rs3Rz4JP3YU=", - "dev": true, - "requires": { - "string-width": "^1.0.1", - "strip-ansi": "^3.0.1" - } - }, - "wrappy": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", - "integrity": "sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=", - "dev": true - }, - "xtend": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/xtend/-/xtend-4.0.1.tgz", - "integrity": "sha1-pcbVMr5lbiPbgg77lDofBJmNY68=", - "dev": true - }, - "y18n": { - "version": "4.0.1", - "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.1.tgz", - "integrity": "sha512-wNcy4NvjMYL8gogWWYAO7ZFWFfHcbdbE57tZO8e4cbpj8tfUcwrwqSl3ad8HxpYWCdXcJUCeKKZS62Av1affwQ==", - "dev": true - }, - "yallist": { - "version": "3.0.3", - "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.0.3.tgz", - "integrity": "sha512-S+Zk8DEWE6oKpV+vI3qWkaK+jSbIK86pCwe2IF/xwIpQ8jEuxpw9NyaGjmp9+BoJv5FV2piqCDcoCtStppiq2A==", - "dev": true - }, - "yargs": { - "version": "12.0.5", - "resolved": "https://registry.npmjs.org/yargs/-/yargs-12.0.5.tgz", - "integrity": "sha512-Lhz8TLaYnxq/2ObqHDql8dX8CJi97oHxrjUcYtzKbbykPtVW9WB+poxI+NM2UIzsMgNCZTIf0AQwsjK5yMAqZw==", - "dev": true, - "requires": { - "cliui": "^4.0.0", - "decamelize": "^1.2.0", - "find-up": "^3.0.0", - "get-caller-file": "^1.0.1", - "os-locale": "^3.0.0", - "require-directory": "^2.1.1", - "require-main-filename": "^1.0.1", - "set-blocking": "^2.0.0", - "string-width": "^2.0.0", - "which-module": "^2.0.0", - "y18n": "^3.2.1 || ^4.0.0", - "yargs-parser": "^11.1.1" - }, - "dependencies": { - "ansi-regex": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz", - "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg=", - "dev": true - }, - "is-fullwidth-code-point": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-2.0.0.tgz", - "integrity": "sha1-o7MKXE8ZkYMWeqq5O+764937ZU8=", - "dev": true - }, - "string-width": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz", - "integrity": 
"sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVaTjAqvVwdfeZ7w7aCvJD7ugkw==", - "dev": true, - "requires": { - "is-fullwidth-code-point": "^2.0.0", - "strip-ansi": "^4.0.0" - } - }, - "strip-ansi": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz", - "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=", - "dev": true, - "requires": { - "ansi-regex": "^3.0.0" - } - } - } - }, - "yargs-parser": { - "version": "11.1.1", - "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz", - "integrity": "sha512-C6kB/WJDiaxONLJQnF8ccx9SEeoTTLek8RVbaOIsrAUS8VrBEXfmeSnCZxygc+XC2sNMBIwOOnfcxiynjHsVSQ==", - "dev": true, - "requires": { - "camelcase": "^5.0.0", - "decamelize": "^1.2.0" - } - } - } -} diff --git a/contrib/submit-simple-job/package.json b/contrib/submit-simple-job/package.json deleted file mode 100644 index c147dd0b6..000000000 --- a/contrib/submit-simple-job/package.json +++ /dev/null @@ -1,38 +0,0 @@ -{ - "name": "submit-simple-job", - "version": "1.0.0", - "description": "PAI web portal plugin for submit simple job", - "main": "index.tsx", - "scripts": { - "watch": "webpack --env development", - "prebuild": "npm test", - "build": "webpack", - "test": "tslint --project ." - }, - "author": "Microsoft Corporation", - "license": "MIT", - "dependencies": { - "classnames": "^2.2.6", - "fetch": "^1.1.0", - "hashids": "1.2.2", - "qs": "^6.6.0", - "react": "^16.7.0", - "react-dom": "^16.7.0", - "whatwg-fetch": "^3.0.0" - }, - "devDependencies": { - "@types/classnames": "^2.2.7", - "@types/hashids": "^1.0.30", - "@types/qs": "^6.5.1", - "@types/react": "^16.7.20", - "@types/react-dom": "^16.0.11", - "@types/webpack": "^4.4.24", - "@types/whatwg-fetch": "0.0.33", - "ts-loader": "^5.3.3", - "tslint": "^5.12.1", - "tslint-react": "^3.6.0", - "typescript": "^3.2.2", - "webpack": "^4.28.4", - "webpack-cli": "^3.2.1" - } -} diff --git a/contrib/submit-simple-job/tsconfig.json b/contrib/submit-simple-job/tsconfig.json deleted file mode 100644 index cfec1a901..000000000 --- a/contrib/submit-simple-job/tsconfig.json +++ /dev/null @@ -1,12 +0,0 @@ -{ - "compilerOptions": { - "target": "es2015", - "module": "es2015", - "lib": ["dom", "es2015"], - "jsx": "react", - "sourceMap": true, - "moduleResolution": "node", - "resolveJsonModule": true, - "strict": true - } -} diff --git a/contrib/submit-simple-job/tslint.json b/contrib/submit-simple-job/tslint.json deleted file mode 100644 index d505e4aa2..000000000 --- a/contrib/submit-simple-job/tslint.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "defaultSeverity": "error", - "extends": [ - "tslint:recommended", - "tslint-react" - ], - "rules": { - "jsx-no-multiline-js": false - } -} diff --git a/contrib/submit-simple-job/webpack.config.js b/contrib/submit-simple-job/webpack.config.js deleted file mode 100644 index d46c3b43c..000000000 --- a/contrib/submit-simple-job/webpack.config.js +++ /dev/null @@ -1,87 +0,0 @@ -/*! - * Copyright (c) Microsoft Corporation - * All rights reserved. 
- * - * MIT License - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in all - * copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE - * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - */ -const {resolve} = require('path'); -const {BannerPlugin} = require('webpack'); - -module.exports = (env) => { - /** @type { import('webpack').Configuration } */ - const config = { - entry: './index.ts', - output: {}, - module: { - rules: [{ - test: /\.tsx?$/, - use: 'ts-loader', - exclude: /node_modules/, - }], - }, - resolve: { - extensions: ['.tsx', '.ts', '.js'], - }, - plugins: [], - }; - - if (env === 'development') { - config.mode = 'development'; - config.output.path = resolve(__dirname, '..', '..', - 'src', 'webportal', 'dist', 'scripts', 'plugins'); - config.output.filename = 'submit-simple-job.js'; - - config.watch = true; - config.watchOptions = {ignored: /node_modules/}; - } else { - config.mode = 'production'; - config.output.path = resolve(__dirname, 'dist'); - config.output.filename = 'plugin.js'; - config.plugins.push( - new BannerPlugin(` -Copyright (c) Microsoft Corporation -All rights reserved. - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - `.trim()) - ) - } - - return config; -}; diff --git a/src/cleaner/__init__.py b/src/cleaner/__init__.py deleted file mode 100644 index afedca73f..000000000 --- a/src/cleaner/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/src/cleaner/build/cleaner.common.dockerfile b/src/cleaner/build/cleaner.common.dockerfile deleted file mode 100644 index e2903376b..000000000 --- a/src/cleaner/build/cleaner.common.dockerfile +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -FROM python:2.7 - -RUN apt-get -y update && \ - apt-get -y install lsof gawk - -RUN pip install psutil - -RUN curl -SL https://download.docker.com/linux/static/stable/x86_64/docker-17.06.2-ce.tgz \ - | tar -xzvC /usr/local \ - && mv /usr/local/docker/* /usr/bin - -ENV PYTHONPATH "${PYTHONPATH}:/" -RUN mkdir -p /cleaner -WORKDIR /cleaner - -COPY scripts /cleaner/scripts -COPY utils /cleaner/utils -COPY ./*.py /cleaner/ - -ENTRYPOINT ["python", "/cleaner/cleaner_main.py"] diff --git a/src/cleaner/cleaner_main.py b/src/cleaner/cleaner_main.py deleted file mode 100644 index 2af78875f..000000000 --- a/src/cleaner/cleaner_main.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-import time
-import argparse
-import os
-from datetime import timedelta
-from cleaner.scripts.clean_docker import DockerCleaner
-from cleaner.scripts import clean_docker_cache
-from cleaner.worker import Worker
-from cleaner.utils.logger import LoggerMixin
-from cleaner.utils import common
-
-
-class Cleaner(LoggerMixin):
-
-    def __init__(self, liveness):
-        self.workers = {}
-        self.liveness = liveness
-
-    def add_worker(self, key, worker):
-        if key not in self.workers:
-            self.workers[key] = worker
-        else:
-            self.logger.warn("worker with key %s already exists.", key)
-
-    def start(self):
-        for k, w in self.workers.items():
-            w.start()
-            self.logger.info("worker %s started.", k)
-
-    def terminate(self):
-        for k, w in self.workers.items():
-            try:
-                # terminate the worker and all its subprocesses
-                common.kill_process_tree(w.pid, 5, self.logger)
-            except Exception as e:
-                self.logger.error("error occurred while terminating worker %s.", k)
-                self.logger.exception(e)
-
-    def update_liveness(self):
-        if self.liveness:
-            file_name = os.path.join("/tmp", self.liveness)
-            with open(file_name, "a"):
-                os.utime(file_name, None)
-
-    def sync(self):
-        try:
-            while True:
-                stopped_workers = [(k, w) for k, w in self.workers.items() if not w.is_alive()]
-                if len(stopped_workers) > 0:
-                    for k, w in stopped_workers:
-                        self.logger.error("worker %s exited with code %s", k, w.exitcode)
-                        self.workers.pop(k)
-                if len(self.workers) == 0:
-                    self.logger.info("all workers are stopped, exiting cleaner.")
-                    break
-                self.update_liveness()
-                time.sleep(2)
-        except Exception:
-            self.logger.exception("cleaner interrupted and will exit.")
-            self.terminate()
-            time.sleep(1)
-
-
-def get_worker(threshold):
-    # wrap the docker-cache cleaning check in a long-running worker;
-    # the module-level check_and_clean(threshold) lives in scripts/clean_docker_cache.py
-    worker = Worker(clean_docker_cache.check_and_clean, threshold, timeout=timedelta(minutes=10), cool_down_time=60)
-    return worker
-
-
-def main():
-    parser = argparse.ArgumentParser()
-    parser.add_argument("-t", "--threshold", help="the disk usage percentage that triggers cleaning")
-    parser.add_argument("-i", "--interval", help="the base interval to check disk usage")
-    args = parser.parse_args()
-
-    common.setup_logging()
-
-    cleaner = DockerCleaner(args.threshold, args.interval, timedelta(minutes=10))
-    cleaner.run()
-
-
-if __name__ == "__main__":
-    main()
diff --git a/src/cleaner/config/cleaner.md b/src/cleaner/config/cleaner.md
deleted file mode 100644
index e16043730..000000000
--- a/src/cleaner/config/cleaner.md
+++ /dev/null
@@ -1,54 +0,0 @@
-## Cleaner section parser
-
-- [Default Configuration](#D_Config)
-- [How to Configure](#HT_Config)
-- [Generated Configuration](#G_Config)
-- [Data Table](#T_config)
-
-#### Default configuration
-
-[cleaner default configuration](cleaner.yaml)
-
-#### How to configure the cleaner section in service-configuration.yaml
-
-All configurations in this section are optional. If you want to customize these values, configure them in service-configuration.yaml.
-
-For example, if you want to use a threshold different from the default value of 90, add the following to your service-configuration.yaml:
-```yaml
-cleaner:
-  threshold: new-value
-  interval: new-value
-```
-
-#### Generated Configuration
-
-After parsing, the object model looks like:
-```yaml
-cleaner:
-  threshold: 90
-  interval: 60
-```
-
-
-#### Data Table
-
-| Data in Configuration File | Data in Cluster Object Model | Data in Jinja2 Template | Data type |
-| --- | --- | --- | --- |
-| cleaner.threshold | com["cleaner"]["threshold"] | cluster_cfg["cleaner"]["threshold"] | Int |
-| cleaner.interval | com["cleaner"]["interval"] | cluster_cfg["cleaner"]["interval"] | Int |
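
To make the mapping above concrete, here is a minimal sketch (plain Python dicts, not the actual parser) of how an operator override in service-configuration.yaml merges with the defaults into the generated object model. The real implementation is the deep-copy-and-update in the `Cleaner.run()` method of cleaner.py, shown next:

```python
import copy

# defaults from cleaner.yaml
default_service_conf = {"threshold": 90, "interval": 60}
# operator override from service-configuration.yaml (example value)
service_conf = {"threshold": 94}

# deep-copy defaults, then overlay overrides
com = copy.deepcopy(default_service_conf)
com.update(service_conf)

print(com)  # {'threshold': 94, 'interval': 60} -> cluster_cfg["cleaner"] in templates
```
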
diff --git a/src/cleaner/config/cleaner.py b/src/cleaner/config/cleaner.py deleted file mode 100644 index 2eccd24f8..000000000 --- a/src/cleaner/config/cleaner.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -import logging -import logging.config -import copy - -class Cleaner(object): - - def __init__(self, cluster_conf, service_conf, default_service_conf): - self.logger = logging.getLogger(__name__) - self.cluster_conf = cluster_conf - self.service_conf = service_conf - self.default_service_conf = default_service_conf - - def validation_pre(self): - return True, None - - def run(self): - result = copy.deepcopy(self.default_service_conf) - result.update(self.service_conf) - return result - - def validation_post(self, conf): - threshold = conf["cleaner"].get("threshold") - if type(threshold) != int: - msg = "expect threshold in cleaner to be int but get %s with type %s" % \ - (threshold, type(threshold)) - return False, msg - else: - if threshold < 0 or threshold > 100: - msg = "expect threshold in [0, 100]" - return False, msg - - interval = conf["cleaner"].get("interval") - if type(interval) != int: - msg = "expect interval in cleaner to be int but get %s with type %s" % \ - (interval, type(interval)) - return False, msg - - return True, None - diff --git a/src/cleaner/config/cleaner.yaml b/src/cleaner/config/cleaner.yaml deleted file mode 100644 index 0901eae0e..000000000 --- a/src/cleaner/config/cleaner.yaml +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -service_type: "yarn" - -threshold: 90 -interval: 60 \ No newline at end of file diff --git a/src/cleaner/deploy/cleaner.yaml.template b/src/cleaner/deploy/cleaner.yaml.template deleted file mode 100644 index 30fe97487..000000000 --- a/src/cleaner/deploy/cleaner.yaml.template +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: cleaner-ds -spec: - selector: - matchLabels: - app: cleaner - template: - metadata: - labels: - app: cleaner - spec: - hostPID: true - hostNetwork: true - containers: - - name: docker-cleaner - image: {{ cluster_cfg["cluster"]["docker-registry"]["prefix"] }}cleaner:{{ cluster_cfg["cluster"]["docker-registry"]["tag"] }} - args: - - -t {{ cluster_cfg["cleaner"]["threshold"] }} - - -i {{ cluster_cfg["cleaner"]["interval"] }} - imagePullPolicy: Always - securityContext: - privileged: True - volumeMounts: - - mountPath: /var/run/docker.sock - name: docker-socket - - mountPath: /logs - name: cleaner-logs - {%- if cluster_cfg['cluster']['common']['qos-switch'] == "true" %} - resources: - limits: - memory: "1Gi" - {%- endif %} - imagePullSecrets: - - name: {{ cluster_cfg["cluster"]["docker-registry"]["secret-name"] }} - volumes: - - name: docker-socket - hostPath: - path: /var/run/docker.sock - - name: cleaner-logs - hostPath: - path: {{ cluster_cfg["cluster"]["common"]["data-path"] }}/yarn/node/userlogs - tolerations: - - key: node.kubernetes.io/memory-pressure - operator: "Exists" - - key: node.kubernetes.io/disk-pressure - operator: "Exists" \ No newline at end of file diff --git a/src/cleaner/deploy/delete.sh b/src/cleaner/deploy/delete.sh deleted file mode 100644 index 04fa853c7..000000000 --- a/src/cleaner/deploy/delete.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -echo "stop the cleaner service." - -pushd $(dirname "$0") > /dev/null - -/bin/bash stop.sh || exit $? - -popd > /dev/null diff --git a/src/cleaner/deploy/refresh.sh b/src/cleaner/deploy/refresh.sh deleted file mode 100644 index 5c0576f51..000000000 --- a/src/cleaner/deploy/refresh.sh +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - -# TODO will add when necessary - -popd > /dev/null diff --git a/src/cleaner/deploy/service.yaml b/src/cleaner/deploy/service.yaml deleted file mode 100644 index ae2fdeeb5..000000000 --- a/src/cleaner/deploy/service.yaml +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
-# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -cluster-type: - - yarn - -# to avoid possible race condition, start cleaner after all services are ready -prerequisite: - - cluster-configuration - - alert-manager - - drivers - - end-to-end-test - - grafana - - hadoop-batch-job - - hadoop-data-node - - hadoop-jobhistory - - hadoop-name-node - - hadoop-node-manager - - hadoop-resource-manager - - node-exporter - - prometheus - - pylon - - rest-server - - watchdog - - webportal - - yarn-exporter - - yarn-fremeworklauncher - - zookeeper - -template-list: - - cleaner.yaml - -start-script: start.sh -stop-script: stop.sh -delete-script: delete.sh -refresh-script: refresh.sh - - -deploy-rules: - - in: pai-worker diff --git a/src/cleaner/deploy/start.sh b/src/cleaner/deploy/start.sh deleted file mode 100644 index f42f8b219..000000000 --- a/src/cleaner/deploy/start.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -pushd $(dirname "$0") > /dev/null - -kubectl apply --overwrite=true -f cleaner.yaml || exit $? - -popd > /dev/null diff --git a/src/cleaner/deploy/stop.sh b/src/cleaner/deploy/stop.sh deleted file mode 100644 index ae36c9c22..000000000 --- a/src/cleaner/deploy/stop.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
-# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - -kubectl delete --ignore-not-found --now daemonset/cleaner-ds - -popd > /dev/null diff --git a/src/cleaner/run_unit_test.sh b/src/cleaner/run_unit_test.sh deleted file mode 100644 index 53fd49982..000000000 --- a/src/cleaner/run_unit_test.sh +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -#!/bin/bash - -set -x - -DIR="$(cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd)" -export PYTHONPATH=$PYTHONPATH:$DIR - -nose_args="--with-coverage \ - --cover-erase \ - --cover-html \ - --logging-level=DEBUG \ - -s \ - -v " - -nosetests $nose_args diff --git a/src/cleaner/scripts/__init__.py b/src/cleaner/scripts/__init__.py deleted file mode 100644 index afedca73f..000000000 --- a/src/cleaner/scripts/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
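
Before moving on to the cleaner scripts: the deploy step above works by rendering cleaner.yaml.template with the cluster configuration, so the daemonset's `-t`/`-i` args come from the same `cluster_cfg["cleaner"]` values listed in the data table earlier. A minimal sketch of that rendering step, assuming the `jinja2` package is available and using a hand-built `cluster_cfg` (the real object is assembled by the PAI deployment tooling):

```python
from jinja2 import Template

# a tiny excerpt of cleaner.yaml.template, for illustration only
template = Template(
    "args:\n"
    "- -t {{ cluster_cfg['cleaner']['threshold'] }}\n"
    "- -i {{ cluster_cfg['cleaner']['interval'] }}\n"
)

# hand-built stand-in for the cluster object model
cluster_cfg = {"cleaner": {"threshold": 90, "interval": 60}}

print(template.render(cluster_cfg=cluster_cfg))
# args:
# - -t 90
# - -i 60
```
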
diff --git a/src/cleaner/scripts/check_deleted_files.py b/src/cleaner/scripts/check_deleted_files.py
deleted file mode 100644
index 175f3231a..000000000
--- a/src/cleaner/scripts/check_deleted_files.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-import multiprocessing
-from cleaner.utils import common
-
-logger = multiprocessing.get_logger()
-
-# This command outputs files that have been deleted but are still held open by a process.
-# The output looks like:
-# COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
-# dhclient 1008 root txt REG 8,1 487248 0 12320783 /sbin/dhclient (deleted)
-# python 31848 root 3w REG 8,1 0 0 29362883 /tmp/tmp_out.txt (deleted)
-#
-# We only retrieve the PID (second column) and NAME (10th column).
-DELETED_FILES_CMD = "lsof +L1 2>/dev/null | awk '{print $2, $10}'"
-
-
-def list_and_check_files(arg, log=logger):
-    files = common.run_cmd(DELETED_FILES_CMD, log)
-    if len(files) <= 1:
-        log.info("no deleted files found.")
-        return
-    else:
-        # skip the field-name header line from the command output
-        files = files[1:]
-
-    for f in files:
-        f_fields = f.split(" ")
-        log.warning("process [%s] opened file [%s] but the file has been deleted.", f_fields[0], f_fields[1])
-
-
-def main():
-    common.setup_logging()
-    logger.info("start to check the deleted files opened by each running process.")
-    list_and_check_files(None)
-
-
-if __name__ == "__main__":
-    main()
diff --git a/src/cleaner/scripts/clean_docker.py b/src/cleaner/scripts/clean_docker.py
deleted file mode 100644
index e536d4703..000000000
--- a/src/cleaner/scripts/clean_docker.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-from cleaner.utils.logger import LoggerMixin
-from cleaner.utils.timer import CountdownTimer, Timeout
-from cleaner.utils import common
-from datetime import timedelta
-import subprocess
-import multiprocessing
-import re
-import time
-import os
-
-class DockerCleaner(LoggerMixin):
-    def __init__(self, threshold, interval, timeout=timedelta(hours=1)):
-        self.__threshold = int(threshold)
-        self.__interval = int(interval)
-        self.__timeout = timeout
-
-    def _exec(self):
-        exc = None
-        try:
-            with CountdownTimer(duration=self.__timeout):
-                self.check_and_clean()
-        except Timeout as e:
-            self.logger.error("Cleaner timeout.")
-            exc = e
-        except Exception as e:
-            self.logger.error("Unexpected error when running cleaner.")
-            exc = e
-
-        if exc is not None:
-            self.logger.exception(exc)
-
-    def run(self):
-        while True:
-            # allow a delay before the cleaning
-            time.sleep(self.__interval)
-            self._exec()
-
-    def check_disk_usage(self, partition):
-        df = subprocess.Popen(["df", "-h", partition], stdout=subprocess.PIPE)
-        # initialize all fields so they are defined even when no line matches the partition
-        sized = 0
-        used = 0
-        usep = 0
-        try:
-            for line in df.stdout:
-                splitline = line.decode().split()
-                if splitline[5] == partition:
-                    sized = splitline[1]
-                    used = splitline[2]
-                    usep = int(splitline[4][:-1])
-        except ValueError:
-            self.logger.error("cannot get disk size, reset size to 0")
-            sized = 0
-            used = 0
-            usep = 0
-        self.logger.info("Checking disk, disk usage = {0}%".format(usep))
-        return sized, used, usep
-
-    def check_and_clean(self):
-        sized, used, usep = self.check_disk_usage("/")
-        if usep >= self.__threshold:
-            self.logger.info("Disk usage is above {0}%, trying to remove containers".format(self.__threshold))
-            self.kill_largest_container(sized, used, usep)
-
-    # Clean logic v1: kill the largest container
-    white_list = ["k8s_POD", "k8s_kube", "k8s_pylon", "k8s_zookeeper", "k8s_rest-server", "k8s_yarn", "k8s_hadoop", "k8s_job-exporter", "k8s_watchdog", "k8s_grafana", "k8s_node-exporter", "k8s_webportal", "k8s_prometheus", "k8s_nvidia-drivers", "k8s_etcd-container", "k8s_apiserver-container", "k8s_docker-cleaner", "kubelet", "dev-box"]
-
-    def kill_largest_container(self, sized, used, usep):
-        containers = []
-        # Only try to stop PAI jobs and user-created containers
-        containers_source = subprocess.Popen(["docker", "ps", "-a", "--format", r'{{.ID}}\t{{.Image}}\t{{.Size}}\t{{.Names}}\t'], stdout=subprocess.PIPE)
-        for line in containers_source.stdout:
-            splitline = line.split("\t")
-            for prefix in self.white_list:
-                if splitline[3].startswith(prefix):
-                    break
-            else:
-                # Only check job containers
-                if re.search(r"container(_\w+)?_\d+_\d+_\d+_\d+$", splitline[3]) is not None:
-                    size_str = splitline[2].split()[0]
-                    size = common.calculate_size(size_str)
-                    containers.append([size, splitline[0], splitline[1], splitline[3], size_str])
-
-        containers.sort(key=lambda x: x[0], reverse=True)
-
-        if len(containers) > 0 and containers[0][0] > 1024**3:
-            self.logger.warning("Kill container {0} due to disk pressure. Container size: {1}".format(containers[0][3], containers[0][4]))
-
-            # Write error log
-            container_name = re.search(r"container(_\w+)?_\d+_\d+_\d+_\d+$", containers[0][3]).group()
-            application_name = "application{0}".format(re.search(r"^_\d+_\d+", re.search(r"_\d+_\d+_\d+_\d+$", container_name).group()).group())
-            full_path = "/logs/{0}/{1}".format(application_name, container_name)
-
-            if not os.path.isdir(full_path):
-                self.logger.error("Cannot find job log dir, creating path. Log may not be collected.")
-                try:
-                    os.makedirs(full_path)
-                except OSError as exc:
-                    self.logger.error("Failed to create path {0}.".format(full_path))
-
-            if os.path.isdir(full_path):
-                error_filename = "{0}/diskCleaner.pai.error".format(full_path)
-                timestamp = int(time.time())
-                try:
-                    fp = open(error_filename, "w")
-                except IOError:
-                    self.logger.error("Failed to write error log, skipped")
-                else:
-                    fp.writelines([
-                        "{0} ERROR ACTION \"KILL\"\n".format(timestamp),
-                        "{0} ERROR REASON \"{1} killed due to disk pressure. Disk size: {2}, Used: {3}, Cleaner threshold: {4}, Container cost: {5} \"\n".format(timestamp, container_name, sized, "{0}({1}%)".format(used, usep), "{0}%".format(self.__threshold), containers[0][4]),
-                        "{0} ERROR SOLUTION \"Node disk is full, please try another time. If your job needs large space, please use NAS to store data.\"\n".format(timestamp)
-                    ])
-                    fp.close()
-
-            subprocess.Popen(["docker", "kill", "--signal=10", containers[0][1]])
-
-            # Because docker stop does not immediately stop the container, we cannot remove the docker image right after stopping it
-            #container_image = subprocess.Popen(["docker", "inspect", containers[0][1], r"--format='{{.Image}}'"], stdout=subprocess.PIPE).stdout.readline()
-            #subprocess.Popen(["docker", "image", "rmi", container_image])
-            return True
-        else:
-            return False
-
diff --git a/src/cleaner/scripts/clean_docker_cache.py b/src/cleaner/scripts/clean_docker_cache.py
deleted file mode 100644
index 6667ffdba..000000000
--- a/src/cleaner/scripts/clean_docker_cache.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-from cleaner.utils import common
-import multiprocessing
-
-logger = multiprocessing.get_logger()
-
-
-def get_cache_size():
-    out = common.run_cmd("source ./scripts/reclaimable_docker_cache.sh 2> /dev/null", logger)
-    size = 0
-    if len(out) == 0:
-        logger.error("cannot retrieve cache size.")
-        return size
-    try:
-        size = float(out[0])
-    except ValueError:
-        logger.error("cannot convert cache size, reset size to 0")
-        size = 0
-    return size
-
-
-def check_and_clean(threshold):
-    if get_cache_size() > threshold:
-        # to avoid a possible race condition, only clean the containers, images and networks created more than 1h ago
-        common.run_cmd("docker system prune -af --filter until=1h", logger)
-
-
-if __name__ == "__main__":
-    common.setup_logging()
-    check_and_clean(10)
diff --git a/src/cleaner/scripts/reclaimable_docker_cache.sh b/src/cleaner/scripts/reclaimable_docker_cache.sh
deleted file mode 100644
index a907394d0..000000000
--- a/src/cleaner/scripts/reclaimable_docker_cache.sh
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-#!/bin/bash
-
-# This script parses the result of the "docker system df" command to get the total reclaimable disk space.
-# The command lists the reclaimable space of all docker objects, including images, local volumes and build caches.
-# The following is an example of the command's output:
-# TYPE TOTAL ACTIVE SIZE RECLAIMABLE
-# Images 38 18 16.13GB 11.26GB (69%)
-# Containers 42 42 95.3MB 0B (0%)
-# Local Volumes 13 1 3.553GB 3.28GB (92%)
-# Build Cache 0 0 0B 0B
-#
-# We sum up the values in column 5 (RECLAIMABLE) and return the size in gigabytes.
-
-docker system df --format "{{.Reclaimable}}" | \
-gawk 'BEGIN {s=0}
-      END {print s}
-      match($1, /([0-9]+\.?[0-9]*)(M|G|B|T)/, a) {
-        if(a[2] == "M")
-          s += a[1]/1024;
-        else if(a[2] == "B")
-          s += a[1]/1024/1024;
-        else if(a[2] == "T")
-          s += a[1]*1024;
-        else
-          s += a[1];
-      }'
diff --git a/src/cleaner/test/__init__.py b/src/cleaner/test/__init__.py
deleted file mode 100644
index afedca73f..000000000
--- a/src/cleaner/test/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/src/cleaner/test/job/cleaner-test-job.md b/src/cleaner/test/job/cleaner-test-job.md
deleted file mode 100644
index 336d68196..000000000
--- a/src/cleaner/test/job/cleaner-test-job.md
+++ /dev/null
@@ -1,32 +0,0 @@
-## Cleaner-Test-Job
-
-- [How to Use](#HT_Use)
-- [How to Configure](#HT_Config)
-- [How to Build Job Image](#HT_Image)
-
-#### How to use cleaner test job
-
-1. Go to the PAI web portal and enter the job submission page.
-2. Import the job yaml file (cleaner-test-job.yaml).
-3. Change parameters if needed.
-4. Click the Submit button to submit the job.
-5. Go to the Jobs page and keep monitoring the job you just submitted.
-6. If everything works as expected, the job will fail because it is killed by the cleaner. Click "Go to Tracking Page", and you will find the message "Docker container killed by cleaner due to disk pressure" at the end.
-
-#### How to configure variables in job
-
-In the job page, use the following command to run the cleaner test:
-```sh
-sh /cleaner-test/cleaner-test.sh <% $parameters.threshold %> <% $parameters.time %>
-```
-Set parameters:
- - threshold: The test job will fill the disk to (threshold + 1)%. Please adjust this value according to the cleaner threshold settings. The default value is 94.
- - time: The time the job takes to fill the disk to (threshold + 1)%. The default value is 30.
-
-#### How to build job docker image
-
-Run the following command under this folder; make sure you have docker installed.
-```sh
-docker build -f cleaner-test.df -t <image-name> .
-```
-Then tag the docker image and upload it to your docker repo.
We offer the default docker image on openpai/testcleaner diff --git a/src/cleaner/test/job/cleaner-test-job.yaml b/src/cleaner/test/job/cleaner-test-job.yaml deleted file mode 100644 index c2cadda52..000000000 --- a/src/cleaner/test/job/cleaner-test-job.yaml +++ /dev/null @@ -1,29 +0,0 @@ -protocolVersion: 2 -name: cleaner-test-job -type: job -jobRetryCount: 0 -prerequisites: - - type: dockerimage - uri: 'openpai/testcleaner:stable' - name: docker_image_1 -parameters: - threshold: '94' - time: '30' -taskRoles: - taskrole: - instances: 1 - completion: - minFailedInstances: 1 - minSucceededInstances: 1 - dockerImage: docker_image_1 - resourcePerInstance: - gpu: 1 - cpu: 4 - memoryMB: 8192 - commands: - - >- - sh /cleaner-test/cleaner-test.sh <% $parameters.threshold %> <% - $parameters.time %> - taskRetryCount: 0 -defaults: - virtualCluster: default diff --git a/src/cleaner/test/job/cleaner-test.df b/src/cleaner/test/job/cleaner-test.df deleted file mode 100644 index 89982f658..000000000 --- a/src/cleaner/test/job/cleaner-test.df +++ /dev/null @@ -1,11 +0,0 @@ -FROM alpine - -RUN apk update && \ - apk add lsof gawk bash - -RUN mkdir -p /cleaner-test -WORKDIR /cleaner-test - -COPY cleaner-test.sh /cleaner-test/ - -ENTRYPOINT ["sh", "/cleaner-test/cleaner-test.sh 94 60"] \ No newline at end of file diff --git a/src/cleaner/test/job/cleaner-test.sh b/src/cleaner/test/job/cleaner-test.sh deleted file mode 100644 index 031befd8d..000000000 --- a/src/cleaner/test/job/cleaner-test.sh +++ /dev/null @@ -1,32 +0,0 @@ -threshold="$1" -time="$2" -df / | gawk \ -'BEGIN {threshold=int("'"$threshold"'"); time=int("'"$time"'"); fb=0; chunk=0} - END { - if (chunk > 0) { - for (var=0; var= 100) threshold = 94; - if (time <= 0) time = 1; - chunk = $2 / 1024 / 100; - fb = int(chunk * (threshold + 1) - $3 / 1024); - if (fb < 0) fb = 0; - max = int($4 / 1024 - chunk); - if (fb > max) fb = max; - chunk = int(fb / time) - } -' -while true -do - echo "Waiting cleaner to kill job..." - sleep 5 -done \ No newline at end of file diff --git a/src/cleaner/test/test_scripts.py b/src/cleaner/test/test_scripts.py deleted file mode 100644 index d4426faf2..000000000 --- a/src/cleaner/test/test_scripts.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -from unittest import TestCase, main -import mock -import time -import psutil -import os -import multiprocessing -from cleaner.utils.common import setup_logging, run_cmd -from cleaner.scripts import clean_docker_cache, check_deleted_files - -CALLED_CMD = "docker system prune -af" -LOGGER = multiprocessing.get_logger() - - -class TestCacheClean(TestCase): - - def setUp(self): - setup_logging() - - @mock.patch("cleaner.utils.common.run_cmd", return_value=[]) - def testCacheEmpty(self, mock_cmd): - self.assertEqual(clean_docker_cache.get_cache_size(), 0) - - @mock.patch("cleaner.utils.common.run_cmd", return_value=["0"]) - def testCacheZero(self, mock_cmd): - self.assertEqual(clean_docker_cache.get_cache_size(), 0) - - @mock.patch("cleaner.utils.common.run_cmd", return_value=["error"]) - def testCacheError(self, mock_cmd): - self.assertEqual(clean_docker_cache.get_cache_size(), 0) - - @mock.patch("cleaner.scripts.clean_docker_cache.get_cache_size", return_value=1) - @mock.patch("cleaner.utils.common.run_cmd", return_value=["0"]) - def testCleanTrue(self, mock_cmd, mock_size): - clean_docker_cache.check_and_clean(0) - mock_cmd.assert_called_once_with(CALLED_CMD, LOGGER) - - @mock.patch("cleaner.scripts.clean_docker_cache.get_cache_size", return_value=0) - @mock.patch("cleaner.utils.common.run_cmd", return_value=["0"]) - def testCleanFalse(self, mock_cmd, mock_size): - clean_docker_cache.check_and_clean(0) - mock_cmd.assert_not_called() - - -class TestDeletedFiles(TestCase): - - def testDeletedCmd(self): - test_file = "/tmp/deleted_test.txt" - - def open_and_loop(): - with open(test_file, "w"): - while True: - pass - - proc = multiprocessing.Process(target=open_and_loop) - proc.start() - time.sleep(1) - os.remove("/tmp/deleted_test.txt") - time.sleep(1) - - mock_logger = mock.Mock() - cmd_out = run_cmd(check_deleted_files.DELETED_FILES_CMD, mock_logger) - files = [f.split(" ")[1] for f in cmd_out[1:]] - self.assertTrue(test_file in files) - - proc.terminate() - proc.join() - - @mock.patch("cleaner.utils.common.run_cmd", return_value=["PID NAME"]) - def testDeletedCheckEmpty(self, mock_cmd): - mock_log = mock.Mock() - check_deleted_files.list_and_check_files(None, mock_log) - mock_log.info.assert_called_once() - - @mock.patch("cleaner.utils.common.run_cmd", return_value=["PID NAME", "1, /test"]) - def testDeletedCheckNonEmpty(self, mock_cmd): - mock_log = mock.Mock() - check_deleted_files.list_and_check_files(None, mock_log) - mock_log.info.assert_not_called() - mock_log.warning.assert_called_once() - - -if __name__ == "__main__": - main() diff --git a/src/cleaner/test/test_utils.py b/src/cleaner/test/test_utils.py deleted file mode 100644 index 0ead28cc9..000000000 --- a/src/cleaner/test/test_utils.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
-# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -from cleaner.utils.logger import LoggerMixin -from cleaner.utils.timer import CountdownTimer, Timeout -from cleaner.utils.common import * -from datetime import timedelta -from unittest import TestCase, main -import time -import mock -import subprocess as sp -import signal -import os -import psutil - - -def ps_raise(procs, timeout, callback): - raise psutil.Error - - -def kill_process_list_mock(procs, sig, timeout, logger): - for p in procs: - kill_process(p, sig, logger) - time.sleep(timeout) - return procs - - -class UtilsTest(TestCase, LoggerMixin): - - def setUp(self): - setup_logging() - - def testLogger(self): - self.assertTrue(self.logger is not None, "logger cannot be None.") - - def testTimerException(self): - count = 0 - with self.assertRaises(Timeout): - with CountdownTimer(duration=timedelta(seconds=1)): - while count < 3: - time.sleep(1) - count += 1 - - def testTimerExceptionSleep(self): - with self.assertRaises(Timeout): - with CountdownTimer(duration=timedelta(seconds=1)): - time.sleep(10) - - def testTimerNoException(self): - no_timeout = True - try: - with CountdownTimer(duration=timedelta(seconds=3)): - time.sleep(1) - except Timeout: - no_timeout = False - self.assertTrue(no_timeout) - - def testNoTimer(self): - no_timer = True - try: - with CountdownTimer(duration=None): - time.sleep(1) - except Timeout: - no_timer = False - self.assertTrue(no_timer) - - def testRunCmdOneLine(self): - out = run_cmd("echo test", self.logger) - self.assertEqual(out[0], "test") - - def testRunCmdEmptyOut(self): - out = run_cmd("echo test > /dev/null", self.logger) - self.assertEqual(len(out), 0) - - def testTerminateProcess(self): - proc = sp.Popen(["/bin/bash", "-c", "sleep 3600"]) - kill_process(proc, signal.SIGTERM, self.logger) - time.sleep(1) - self.assertEqual(proc.poll(), -signal.SIGTERM) - - def testKillProcess(self): - proc = sp.Popen(["/bin/bash", "-c", "sleep 3600"]) - kill_process(proc, signal.SIGKILL, self.logger) - time.sleep(1) - self.assertEqual(proc.poll(), -signal.SIGKILL) - - def testKillProcessList(self): - procs = [] - procs.append(sp.Popen(["/bin/bash", "-c", "sleep 3600"])) - procs.append(sp.Popen(["/bin/bash", "-c", "sleep 3600"])) - - ps_procs = [psutil.Process(p.pid) for p in procs] - alive = kill_process_list(ps_procs, signal.SIGTERM, 1, self.logger) - self.assertEqual(len(alive), 0) - self.assertTrue(procs[0].poll() is not None) - self.assertTrue(procs[1].poll() is not None) - - @mock.patch("psutil.wait_procs", side_effect=ps_raise) - def testKillProcessListError(self, mock_wait): - proc = sp.Popen(["/bin/bash", "-c", "sleep 1200"]) - ps_procs = [psutil.Process(proc.pid)] - alive = kill_process_list(ps_procs, signal.SIGTERM, 1, self.logger) - mock_wait.assert_called_once() - self.assertEqual(ps_procs, alive) - self.assertTrue(proc.poll() is not None) - - def testKillProcessTree(self): - test_shell = "#!/bin/bash \n" \ - "# create a background process as child \n" \ - "sleep 1000 & \n" \ - "# wait to block the foreground process \n" \ - "sleep 1000 \n" - with open("/tmp/subprocess.sh", "w") 
as sh: - sh.write(test_shell) - proc = sp.Popen(["/bin/bash", "/tmp/subprocess.sh"]) - time.sleep(1) - subproc = psutil.Process(proc.pid).children(recursive=True) - self.assertTrue(len(subproc) == 2) - - kill_process_tree(proc.pid, 1, self.logger) - gone, alive = psutil.wait_procs(subproc, timeout=1) - self.assertTrue(len(alive) == 0) - self.assertTrue(proc.poll() is not None) - - @mock.patch("cleaner.utils.common.kill_process_list", side_effect=kill_process_list_mock) - def testKillProcessTreeError(self, mock_kill): - proc = sp.Popen(["/bin/bash", "-c", "sleep 1200"]) - kill_process_tree(proc.pid, 1, self.logger) - self.assertTrue(mock_kill.call_count == 2) - self.assertTrue(proc.poll() is not None) - - -if __name__ == "__main__": - main() diff --git a/src/cleaner/test/test_worker.py b/src/cleaner/test/test_worker.py deleted file mode 100644 index 5451f27a7..000000000 --- a/src/cleaner/test/test_worker.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -from cleaner.worker import Worker -from cleaner.utils import common -import datetime -import time -from multiprocessing import Queue -from unittest import TestCase, main - - -def called_by_worker(queue): - queue.put(1) - - -def timeout_worker(queue): - time.sleep(2) - queue.put(1) - - -class TestWorker(TestCase): - - def setUp(self): - common.setup_logging() - - def testWorkerRunOnce(self): - queue = Queue() - worker = Worker(called_by_worker, queue, long_run=False) - worker.start() - worker.join() - data = queue.get(timeout=2) - self.assertEqual(data, 1) - - def testWorkerLongRun(self): - queue = Queue() - worker = Worker(called_by_worker, queue, cool_down_time=0.1) - worker.start() - time.sleep(3) - worker.terminate() - worker.join() - self.assertTrue(queue.qsize() > 1) - - def testWorkerTimeout(self): - queue = Queue() - worker = Worker(timeout_worker, queue, long_run=False, timeout=datetime.timedelta(seconds=1)) - worker.start() - worker.join() - self.assertEqual(queue.qsize(), 0) - - -if __name__ == "__main__": - main() diff --git a/src/cleaner/utils/__init__.py b/src/cleaner/utils/__init__.py deleted file mode 100644 index afedca73f..000000000 --- a/src/cleaner/utils/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/src/cleaner/utils/common.py b/src/cleaner/utils/common.py
deleted file mode 100644
index 875d74c09..000000000
--- a/src/cleaner/utils/common.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-import subprocess
-import multiprocessing
-import logging
-import sys
-import os
-import psutil
-import signal
-import re
-
-def kill_process_tree(pid, time_to_die, logger):
-    """
-    Kills a process and all its subprocesses on a best-effort basis.
-    The processes are first sent SIGTERM; if they do not terminate
-    within time_to_die seconds, they are killed with SIGKILL.
- - :param pid: id of process to be killed - :param time_to_die: the time period in which the process should terminate - :param logger: logger handler - """ - if os.getpid() == pid: - logger.error("I refuse to kill myself.") - return - - try: - process = psutil.Process(pid) - processes = process.children(recursive=True) - processes.append(process) - except psutil.Error as e: - logger.error("cannot get process %s and its subprocesses.", pid) - logger.exception(e) - return - - alive = kill_process_list(processes, signal.SIGTERM, time_to_die, logger) - - if alive: - # the processes survive SIGTERM so try to kill them by SIGKILL - alive = kill_process_list(alive, signal.SIGKILL, time_to_die, logger) - if alive: - for p in alive: - logger.error("Process %s cannot be killed.", p.pid) - - -def kill_process_list(processes, sig, time_to_die, logger): - def on_kill(proc): - logger.info("process %s is killed, exit code %s", proc.pid, proc.returncode) - - for p in processes: - kill_process(p, sig, logger) - - try: - gone, alive = psutil.wait_procs(processes, timeout=time_to_die, callback=on_kill) - except psutil.Error as e: - logger.error("error to wait the processes to terminate.") - logger.exception(e) - alive = processes - return alive - - -def kill_process(process, sig, logger): - """ - kill a process by sending signal. - - :param process: process to kill - :param sig: the signal - :param logger: logger handler - """ - try: - logger.info("kill process %s by sending %s", process.pid, sig) - os.kill(process.pid, sig) - except Exception as e: - logger.error("error to send %s to process %s.", sig, process.pid) - logger.exception(e) - - -def run_cmd(cmd, logger): - """ - Runs a given command and returns its output. If exceptions occur and the command process is still running. - The command process and all its subprocesses will be terminated in best effort. 
- - :param cmd: the command to run - :param logger: logger handler - :return the output of the command - """ - proc = subprocess.Popen(["/bin/bash", "-c", cmd], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) - lines = [] - try: - while True: - line = proc.stdout.readline() - if not line: - break - line = line.encode("UTF-8").strip() - logger.info("output from command [%s] : %s", cmd, line) - lines.append(line) - proc.wait() - if proc.returncode: - logger.error("failed to run command %s, error code is %s", cmd, proc.returncode) - finally: - if proc.poll() is None: - # the process is till running and terminate it before exit - logger.error("process %s is not completed and will terminate it before exit.", proc.pid) - kill_process_tree(proc.pid, 2, logger) - - return lines - - -def setup_logging(): - logger = multiprocessing.get_logger() - if len(logger.handlers) == 0: - handler = logging.StreamHandler(sys.stdout) - formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(filename)s:%(lineno)s - %(message)s") - handler.setFormatter(formatter) - logger.addHandler(handler) - logger.setLevel(logging.INFO) - - -size_defs={'B':1, 'K':1024, 'M':1024**2, 'G':1024**3, 'T':1024**4, 'b':1, 'k':1024, 'm':1024**2, 'g':1024**3, 't':1024**4} -def calculate_size(size_str): - size_search = re.search(r"[BbKkMmGgTt]", size_str) - return float(size_str[0:size_search.start()]) * size_defs[size_search.group()] \ No newline at end of file diff --git a/src/cleaner/utils/logger.py b/src/cleaner/utils/logger.py deleted file mode 100644 index ba64b20a6..000000000 --- a/src/cleaner/utils/logger.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import multiprocessing - - -class LoggerMixin(object): - """ - This mixin is to add a logger property conveniently to classes derived from it. 
- The usage is like: - - class A(LoggerMixin): - def do_something(): - self.logger().info("log message") - """ - - @property - def logger(self): - try: - if self._logger is None: - self._logger = self._get_logger() - except AttributeError: - self._logger = self._get_logger() - return self._logger - - def _get_logger(self): - return multiprocessing.get_logger().getChild(".".join([self.__class__.__module__, self.__class__.__name__])) diff --git a/src/cleaner/utils/timer.py b/src/cleaner/utils/timer.py deleted file mode 100644 index 9f027cf81..000000000 --- a/src/cleaner/utils/timer.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import signal -import time -from datetime import timedelta -from cleaner.utils.logger import LoggerMixin - - -class Timeout(Exception): - pass - - -class CountdownTimer(LoggerMixin): - """ - This class is to set a countdown with the given time. It will raise exceptions when the time is out. - """ - - def __init__(self, duration=timedelta(hours=1), name="countdown_timer"): - self.duration_in_seconds = int(duration.total_seconds()) if duration else 0 - self.name = name - self.enter_time = 0 - - def __enter__(self): - if self.duration_in_seconds == 0: - return - - try: - signal.signal(signal.SIGALRM, self.on_alarm) - signal.alarm(self.duration_in_seconds) - self.logger.info("setup countdown timer %s with duration %d" % (self.name, self.duration_in_seconds)) - self.enter_time = time.time() - except ValueError as e: - self.logger.error("Failed to setup countdown timer %s", self.name) - self.logger.exception(e) - - def __exit__(self, type, value, traceback): - if self.duration_in_seconds == 0: - return - - try: - signal.alarm(0) - self.logger.info("exit the countdown timer %s after %d seconds" % (self.name, time.time() - self.enter_time)) - except ValueError as e: - self.logger.error("Failed to setup countdown time %s", self.name) - self.logger.exception(e) - - def on_alarm(self, signum, frame): - self.logger.error("%s : the maximum time duration %d reached and will exit.", self.name, self.duration_in_seconds) - raise Timeout() diff --git a/src/cleaner/worker.py b/src/cleaner/worker.py deleted file mode 100644 index 9d67cc48a..000000000 --- a/src/cleaner/worker.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -from cleaner.utils.logger import LoggerMixin -from cleaner.utils.timer import CountdownTimer, Timeout -import multiprocessing -import time -from datetime import timedelta - - -class Worker(LoggerMixin, multiprocessing.Process): - - def __init__(self, method, arg, timeout=timedelta(hours=1), long_run=True, cool_down_time=2): - super(Worker, self).__init__() - self.method = method - self.timeout = timeout - self.long_run = long_run - self.cool_down_time = cool_down_time - self.arg = arg - - def _exec(self): - exc = None - method_name = self.method.__name__ - try: - self.logger.info("start to execute method %s.", method_name) - with CountdownTimer(duration=self.timeout): - self.method(self.arg) - except Timeout as e: - self.logger.error("command %s timeout.", method_name) - exc = e - except Exception as e: - self.logger.error("unexpected error to run method %s.", method_name) - exc = e - - if exc is not None: - self.logger.exception(exc) - - def run(self): - if self.method is None: - self.logger.error("cannot start worker with empty method.") - return - - if self.long_run and self.cool_down_time <= 0: - self.cool_down_time = 1 - self.logger.warn("input cool down time should be positive, will use value %d.", self.cool_down_time) - - if self.long_run: - while True: - # allow a delay before the cleaning - time.sleep(self.cool_down_time) - self._exec() - else: - self._exec() diff --git a/src/drivers/build/clean.sh b/src/drivers/build/clean.sh deleted file mode 100755 index 66f1196ae..000000000 --- a/src/drivers/build/clean.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/bin/sh - -set -e - -if [ -f /etc/docker/daemon.json ] ; then - cat /etc/docker/daemon.json | jq 'del(."default-runtime")' | jq 'del(.runtimes.nvidia)' > tmp - mv tmp /etc/docker/daemon.json - pkill -SIGHUP dockerd -fi - -touch /finished - -while true; do sleep 3600; done diff --git a/src/drivers/build/config-docker-runtime.sh b/src/drivers/build/config-docker-runtime.sh deleted file mode 100755 index c71a4f707..000000000 --- a/src/drivers/build/config-docker-runtime.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash -x - -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -CONFIG_RUNTIME=false - -if [ "$#" -eq "1" -a "$1" == "--config-runtime" ] ; then - CONFIG_RUNTIME=true -fi - -echo CONFIG_RUNTIME is $CONFIG_RUNTIME - -function configDockerRuntime { - cp /etc/docker/daemon.json /etc/docker/daemon.json.before_config_runtime - - jq -s '.[0] * .[1]' docker-config-with-nvidia-runtime.json /etc/docker/daemon.json > tmp - mv tmp /etc/docker/daemon.json - - pkill -SIGHUP dockerd -} - -function dockerRuntimeConfigured { - cat /etc/docker/daemon.json | jq -e 'has("default-runtime")' &> /dev/null - return $? -} - -if test $CONFIG_RUNTIME == "true" && ! dockerRuntimeConfigured ; then - configDockerRuntime -fi diff --git a/src/drivers/build/docker-config-with-nvidia-runtime.json b/src/drivers/build/docker-config-with-nvidia-runtime.json deleted file mode 100644 index e5f5dc65e..000000000 --- a/src/drivers/build/docker-config-with-nvidia-runtime.json +++ /dev/null @@ -1,9 +0,0 @@ -{ - "default-runtime": "nvidia", - "runtimes": { - "nvidia": { - "path": "/usr/bin/nvidia-container-runtime", - "runtimeArgs": [] - } - } -} diff --git a/src/drivers/build/drivers-384.111.yarn.dockerfile b/src/drivers/build/drivers-384.111.yarn.dockerfile deleted file mode 100644 index a6f6de2fb..000000000 --- a/src/drivers/build/drivers-384.111.yarn.dockerfile +++ /dev/null @@ -1,142 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
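-
-# The image below stages everything needed to install the drivers on a YARN
-# node at runtime: kernel/module build tools, the NVIDIA 384.111 .run
-# installer, the MLNX OFED 4.2 user-mode packages, and the gdrcopy sources.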
- -FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu16.04 - -ENV STAGE_DIR=/root/drivers \ - PYTHONPATH=/modules - -RUN apt-get -y update && \ - apt-get -y install \ - build-essential \ - gcc \ - pciutils \ - bind9-host \ - bc \ - libssl-dev \ - sudo \ - dkms \ - net-tools \ - iproute2 \ - software-properties-common \ - git \ - vim \ - wget \ - curl \ - make \ - jq \ - psmisc \ - python \ - python-dev \ - python-yaml \ - python-jinja2 \ - python-urllib3 \ - python-tz \ - python-nose \ - python-prettytable \ - python-netifaces \ - python-pip \ - realpath \ - gawk \ - module-init-tools \ - # For MLNX OFED - ethtool \ - lsof \ - python-libxml2 \ - quilt \ - libltdl-dev \ - dpatch \ - autotools-dev \ - graphviz \ - autoconf \ - chrpath \ - swig \ - automake \ - tk8.4 \ - tcl8.4 \ - libgfortran3 \ - tcl \ - gfortran \ - libnl-3-200 \ - libnl-3-dev \ - libnl-route-3-200 \ - libnl-route-3-dev \ - libcr-dev \ - libcr0 \ - pkg-config \ - flex \ - debhelper \ - bison \ - tk \ - libelf-dev \ - libaudit-dev \ - libslang2-dev \ - libgtk2.0-dev \ - libperl-dev \ - liblzma-dev \ - libnuma-dev \ - libglib2.0-dev \ - libnuma1 \ - libtool \ - libdw-dev \ - libiberty-dev \ - libunwind8-dev \ - binutils-dev && \ - pip install subprocess32 && \ - add-apt-repository -y ppa:ubuntu-toolchain-r/test && \ - apt-get -y update && \ - apt-get -y install g++-4.9 && \ - mkdir -p $STAGE_DIR - -WORKDIR $STAGE_DIR - -ENV NVIDIA_VERSION=384.111 \ - OFED_VERSION=4.2-1.2.0.0 \ - OS_VERSION=ubuntu16.04 \ - ARCHITECTURE=x86_64 - -ENV MLNX_OFED_STRING=MLNX_OFED_LINUX-${OFED_VERSION}-${OS_VERSION}-${ARCHITECTURE} - -RUN wget --no-verbose http://us.download.nvidia.com/XFree86/Linux-x86_64/$NVIDIA_VERSION/NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - chmod 750 ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run --extract-only && \ - rm ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run - -RUN echo "wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf - && \ - echo "wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf - && \ - git clone https://github.com/NVIDIA/gdrcopy.git - -RUN cd $MLNX_OFED_STRING/DEBS && \ - for dep in libibverbs1 libibverbs-dev ibverbs-utils libmlx4-1 libmlx5-1 librdmacm1 librdmacm-dev libibumad libibumad-devel libibmad libibmad-devel libopensm infiniband-diags mlnx-ofed-kernel-utils; do \ - dpkg -i $dep\_*_amd64.deb && \ - dpkg --contents $dep\_*_amd64.deb | while read i; do \ - src="/$(echo $i | cut -f6 -d' ')" && \ - dst="$STAGE_DIR/$MLNX_OFED_STRING/usermode$(echo $src | sed -e 's/\.\/usr//' | sed -e 's/\.\//\//')" && \ - (([ -d $src ] && mkdir -p $dst) || \ - ([ -h $src ] && cd $(dirname $dst) && ln -s -f $(echo $i | cut -f8 -d' ') $(basename $dst) && cd $STAGE_DIR/$MLNX_OFED_STRING/DEBS) || \ - ([ -f $src ] && cp $src $dst) \ - ); \ - done; \ - done - - -COPY build/* $STAGE_DIR/ -RUN chmod a+x enable-nvidia-persistenced-mode.sh install-all-drivers install-gdr-drivers install-ib-drivers install-nvidia-drivers - -CMD /bin/bash install-all-drivers diff --git a/src/drivers/build/drivers-390.25.yarn.dockerfile b/src/drivers/build/drivers-390.25.yarn.dockerfile deleted file mode 100644 index 7d56887ad..000000000 --- a/src/drivers/build/drivers-390.25.yarn.dockerfile +++ 
/dev/null @@ -1,141 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -FROM nvidia/cuda:9.1-cudnn7-devel-ubuntu16.04 - -ENV STAGE_DIR=/root/drivers \ - PYTHONPATH=/modules - -RUN apt-get -y update && \ - apt-get -y install \ - build-essential \ - gcc \ - g++ \ - binutils \ - pciutils \ - bind9-host \ - bc \ - sudo \ - dkms \ - net-tools \ - iproute2 \ - libssl-dev \ - software-properties-common \ - git \ - vim \ - wget \ - curl \ - make \ - jq \ - psmisc \ - python \ - python-dev \ - python-yaml \ - python-jinja2 \ - python-urllib3 \ - python-tz \ - python-nose \ - python-prettytable \ - python-netifaces \ - python-pip \ - realpath \ - gawk \ - module-init-tools \ - # For MLNX OFED - ethtool \ - lsof \ - python-libxml2 \ - quilt \ - libltdl-dev \ - dpatch \ - autotools-dev \ - graphviz \ - autoconf \ - chrpath \ - swig \ - automake \ - tk8.4 \ - tcl8.4 \ - libgfortran3 \ - tcl \ - gfortran \ - libnl-3-200 \ - libnl-3-dev \ - libnl-route-3-200 \ - libnl-route-3-dev \ - libcr-dev \ - libcr0 \ - pkg-config \ - flex \ - debhelper \ - bison \ - tk \ - libelf-dev \ - libaudit-dev \ - libslang2-dev \ - libgtk2.0-dev \ - libperl-dev \ - liblzma-dev \ - libnuma-dev \ - libglib2.0-dev \ - libnuma1 \ - libtool \ - libdw-dev \ - libiberty-dev \ - libunwind8-dev \ - binutils-dev && \ - pip install subprocess32 && \ - add-apt-repository -y ppa:ubuntu-toolchain-r/test && \ - mkdir -p $STAGE_DIR - -WORKDIR $STAGE_DIR - -ENV NVIDIA_VERSION=390.25 \ - OFED_VERSION=4.2-1.2.0.0 \ - OS_VERSION=ubuntu16.04 \ - ARCHITECTURE=x86_64 - -ENV MLNX_OFED_STRING=MLNX_OFED_LINUX-${OFED_VERSION}-${OS_VERSION}-${ARCHITECTURE} - -RUN wget --no-verbose http://us.download.nvidia.com/XFree86/Linux-x86_64/$NVIDIA_VERSION/NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - chmod 750 ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run --extract-only && \ - rm ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run - -RUN echo "wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf - && \ - echo "wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf - && \ - git clone https://github.com/NVIDIA/gdrcopy.git - -RUN cd 
$MLNX_OFED_STRING/DEBS && \ - for dep in libibverbs1 libibverbs-dev ibverbs-utils libmlx4-1 libmlx5-1 librdmacm1 librdmacm-dev libibumad libibumad-devel libibmad libibmad-devel libopensm infiniband-diags mlnx-ofed-kernel-utils; do \ - dpkg -i $dep\_*_amd64.deb && \ - dpkg --contents $dep\_*_amd64.deb | while read i; do \ - src="/$(echo $i | cut -f6 -d' ')" && \ - dst="$STAGE_DIR/$MLNX_OFED_STRING/usermode$(echo $src | sed -e 's/\.\/usr//' | sed -e 's/\.\//\//')" && \ - (([ -d $src ] && mkdir -p $dst) || \ - ([ -h $src ] && cd $(dirname $dst) && ln -s -f $(echo $i | cut -f8 -d' ') $(basename $dst) && cd $STAGE_DIR/$MLNX_OFED_STRING/DEBS) || \ - ([ -f $src ] && cp $src $dst) \ - ); \ - done; \ - done - -COPY build/* $STAGE_DIR/ -RUN chmod a+x enable-nvidia-persistenced-mode.sh install-all-drivers install-gdr-drivers install-ib-drivers install-nvidia-drivers - -CMD /bin/bash install-all-drivers diff --git a/src/drivers/build/drivers-410.73.yarn.dockerfile b/src/drivers/build/drivers-410.73.yarn.dockerfile deleted file mode 100644 index 65901969e..000000000 --- a/src/drivers/build/drivers-410.73.yarn.dockerfile +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
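-
-# (The image below differs from drivers-384.111.yarn.dockerfile mainly in the
-# CUDA 9.1 base image, the NVIDIA_VERSION value, and the toolchain packages:
-# g++ comes from the distribution instead of g++-4.9 from the PPA.)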
- -FROM nvidia/cuda:9.1-cudnn7-devel-ubuntu16.04 - -ENV STAGE_DIR=/root/drivers \ - PYTHONPATH=/modules - -RUN apt-get -y update && \ - apt-get -y install \ - build-essential \ - gcc \ - g++ \ - binutils \ - pciutils \ - bind9-host \ - bc \ - libssl-dev \ - sudo \ - dkms \ - net-tools \ - iproute2 \ - software-properties-common \ - git \ - vim \ - wget \ - curl \ - make \ - jq \ - psmisc \ - python \ - python-dev \ - python-yaml \ - python-jinja2 \ - python-urllib3 \ - python-tz \ - python-nose \ - python-prettytable \ - python-netifaces \ - python-pip \ - realpath \ - gawk \ - module-init-tools \ - # For MLNX OFED - ethtool \ - lsof \ - python-libxml2 \ - quilt \ - libltdl-dev \ - dpatch \ - autotools-dev \ - graphviz \ - autoconf \ - chrpath \ - swig \ - automake \ - tk8.4 \ - tcl8.4 \ - libgfortran3 \ - tcl \ - gfortran \ - libnl-3-200 \ - libnl-3-dev \ - libnl-route-3-200 \ - libnl-route-3-dev \ - libcr-dev \ - libcr0 \ - pkg-config \ - flex \ - debhelper \ - bison \ - tk \ - libelf-dev \ - libaudit-dev \ - libslang2-dev \ - libgtk2.0-dev \ - libperl-dev \ - liblzma-dev \ - libnuma-dev \ - libglib2.0-dev \ - libnuma1 \ - libtool \ - libdw-dev \ - libiberty-dev \ - libunwind8-dev \ - binutils-dev && \ - pip install subprocess32 && \ - add-apt-repository -y ppa:ubuntu-toolchain-r/test && \ - mkdir -p $STAGE_DIR - -WORKDIR $STAGE_DIR - -ENV NVIDIA_VERSION=410.73 \ - OFED_VERSION=4.2-1.2.0.0 \ - OS_VERSION=ubuntu16.04 \ - ARCHITECTURE=x86_64 - -ENV MLNX_OFED_STRING=MLNX_OFED_LINUX-${OFED_VERSION}-${OS_VERSION}-${ARCHITECTURE} - -RUN wget --no-verbose http://us.download.nvidia.com/XFree86/Linux-x86_64/$NVIDIA_VERSION/NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - chmod 750 ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run --extract-only && \ - rm ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run - -RUN echo "wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf - && \ - echo "wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf - && \ - git clone https://github.com/NVIDIA/gdrcopy.git - -RUN cd $MLNX_OFED_STRING/DEBS && \ - for dep in libibverbs1 libibverbs-dev ibverbs-utils libmlx4-1 libmlx5-1 librdmacm1 librdmacm-dev libibumad libibumad-devel libibmad libibmad-devel libopensm infiniband-diags mlnx-ofed-kernel-utils; do \ - dpkg -i $dep\_*_amd64.deb && \ - dpkg --contents $dep\_*_amd64.deb | while read i; do \ - src="/$(echo $i | cut -f6 -d' ')" && \ - dst="$STAGE_DIR/$MLNX_OFED_STRING/usermode$(echo $src | sed -e 's/\.\/usr//' | sed -e 's/\.\//\//')" && \ - (([ -d $src ] && mkdir -p $dst) || \ - ([ -h $src ] && cd $(dirname $dst) && ln -s -f $(echo $i | cut -f8 -d' ') $(basename $dst) && cd $STAGE_DIR/$MLNX_OFED_STRING/DEBS) || \ - ([ -f $src ] && cp $src $dst) \ - ); \ - done; \ - done - -COPY build/* $STAGE_DIR/ -RUN chmod a+x enable-nvidia-persistenced-mode.sh install-all-drivers install-gdr-drivers install-ib-drivers install-nvidia-drivers - -CMD /bin/bash install-all-drivers diff --git a/src/drivers/build/drivers-418.56.yarn.dockerfile b/src/drivers/build/drivers-418.56.yarn.dockerfile deleted file mode 100644 index d47fc46d7..000000000 --- a/src/drivers/build/drivers-418.56.yarn.dockerfile +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) 
Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -FROM nvidia/cuda:9.1-cudnn7-devel-ubuntu16.04 - -ENV STAGE_DIR=/root/drivers \ - PYTHONPATH=/modules - -RUN apt-get -y update && \ - apt-get -y install \ - build-essential \ - gcc \ - g++ \ - binutils \ - pciutils \ - bind9-host \ - bc \ - libssl-dev \ - sudo \ - dkms \ - net-tools \ - iproute2 \ - software-properties-common \ - git \ - vim \ - wget \ - curl \ - make \ - jq \ - psmisc \ - python \ - python-dev \ - python-yaml \ - python-jinja2 \ - python-urllib3 \ - python-tz \ - python-nose \ - python-prettytable \ - python-netifaces \ - python-pip \ - realpath \ - gawk \ - module-init-tools \ - # For MLNX OFED - ethtool \ - lsof \ - python-libxml2 \ - quilt \ - libltdl-dev \ - dpatch \ - autotools-dev \ - graphviz \ - autoconf \ - chrpath \ - swig \ - automake \ - tk8.4 \ - tcl8.4 \ - libgfortran3 \ - tcl \ - gfortran \ - libnl-3-200 \ - libnl-3-dev \ - libnl-route-3-200 \ - libnl-route-3-dev \ - libcr-dev \ - libcr0 \ - pkg-config \ - flex \ - debhelper \ - bison \ - tk \ - libelf-dev \ - libaudit-dev \ - libslang2-dev \ - libgtk2.0-dev \ - libperl-dev \ - liblzma-dev \ - libnuma-dev \ - libglib2.0-dev \ - libnuma1 \ - libtool \ - libdw-dev \ - libiberty-dev \ - libunwind8-dev \ - binutils-dev && \ - pip install subprocess32 && \ - add-apt-repository -y ppa:ubuntu-toolchain-r/test && \ - mkdir -p $STAGE_DIR - -WORKDIR $STAGE_DIR - -ENV NVIDIA_VERSION=418.56 \ - OFED_VERSION=4.2-1.2.0.0 \ - OS_VERSION=ubuntu16.04 \ - ARCHITECTURE=x86_64 - -ENV MLNX_OFED_STRING=MLNX_OFED_LINUX-${OFED_VERSION}-${OS_VERSION}-${ARCHITECTURE} - -RUN wget --no-verbose http://us.download.nvidia.com/XFree86/Linux-x86_64/$NVIDIA_VERSION/NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - chmod 750 ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run && \ - ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run --extract-only && \ - rm ./NVIDIA-Linux-x86_64-$NVIDIA_VERSION.run - -RUN echo "wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/MLNX_OFED-$OFED_VERSION/$MLNX_OFED_STRING.tgz | tar xzf - && \ - echo "wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf -" && \ - wget -q -O - http://www.mellanox.com/downloads/ofed/nvidia-peer-memory_1.0.5.tar.gz | tar xzf - && \ - git clone https://github.com/NVIDIA/gdrcopy.git - -RUN cd $MLNX_OFED_STRING/DEBS && \ - for dep in libibverbs1 
libibverbs-dev ibverbs-utils libmlx4-1 libmlx5-1 librdmacm1 librdmacm-dev libibumad libibumad-devel libibmad libibmad-devel libopensm infiniband-diags mlnx-ofed-kernel-utils; do \ - dpkg -i $dep\_*_amd64.deb && \ - dpkg --contents $dep\_*_amd64.deb | while read i; do \ - src="/$(echo $i | cut -f6 -d' ')" && \ - dst="$STAGE_DIR/$MLNX_OFED_STRING/usermode$(echo $src | sed -e 's/\.\/usr//' | sed -e 's/\.\//\//')" && \ - (([ -d $src ] && mkdir -p $dst) || \ - ([ -h $src ] && cd $(dirname $dst) && ln -s -f $(echo $i | cut -f8 -d' ') $(basename $dst) && cd $STAGE_DIR/$MLNX_OFED_STRING/DEBS) || \ - ([ -f $src ] && cp $src $dst) \ - ); \ - done; \ - done - -COPY build/* $STAGE_DIR/ -RUN chmod a+x enable-nvidia-persistenced-mode.sh install-all-drivers install-gdr-drivers install-ib-drivers install-nvidia-drivers - -CMD /bin/bash install-all-drivers diff --git a/src/drivers/build/enable-nvidia-persistenced-mode.sh b/src/drivers/build/enable-nvidia-persistenced-mode.sh deleted file mode 100755 index 1f4dc8bab..000000000 --- a/src/drivers/build/enable-nvidia-persistenced-mode.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -echo === Enable nvidia persistenced mode -nvidia-persistenced --persistence-mode || exit $? - -echo === Recall nvidia-smi -nvidia-smi || exit $? - -echo === Persistence-mode enabled \ No newline at end of file diff --git a/src/drivers/build/install-all-drivers b/src/drivers/build/install-all-drivers deleted file mode 100755 index 14b750540..000000000 --- a/src/drivers/build/install-all-drivers +++ /dev/null @@ -1,64 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-export MLNX_PREFIX=/var/drivers/mellanox/$MLNX_OFED_STRING/usermode
-export NV_DRIVER=${DRIVER_PATH}/$NVIDIA_VERSION
-export LIBRARY_PATH=${LIBRARY_PATH:+$LIBRARY_PATH:}${MLNX_PREFIX}/lib
-export LD_LIBRARY_PATH=${MLNX_PREFIX}/lib:$LD_LIBRARY_PATH:$NV_DRIVER/lib:$NV_DRIVER/lib64:/usr/local/cuda/lib64
-export PATH=${MLNX_PREFIX}/bin:$PATH:$NV_DRIVER/bin
-export C_INCLUDE_PATH=${C_INCLUDE_PATH:+$C_INCLUDE_PATH:}${MLNX_PREFIX}/include:${MLNX_PREFIX}/include/infiniband
-export CPLUS_INCLUDE_PATH=${CPLUS_INCLUDE_PATH:+$CPLUS_INCLUDE_PATH:}${MLNX_PREFIX}/include:${MLNX_PREFIX}/include/infiniband
-
-if lspci | grep -qE "[0-9a-fA-F][0-9a-fA-F]:[0-9a-fA-F][0-9a-fA-F].[0-9] (3D|VGA compatible) controller: NVIDIA Corporation.*"; then
-    if [ -f "$PRE_INSTALLED_NV_DRIVER_PATH/bin/nvidia-smi" ]; then
-        ls -a $PRE_INSTALLED_NV_DRIVER_PATH
-        echo pre-installed nvidia driver detected, skipping driver installation
-        rm -f $DRIVER_PATH/current  # remove pre-existing link
-        mkdir -p $DRIVER_PATH
-        ln -s $PRE_INSTALLED_NV_DRIVER_PATH $DRIVER_PATH/current
-    else
-        /bin/bash -x install-nvidia-drivers || exit $?
-        echo NVIDIA gpu detected, drivers installed
-
-        /bin/bash enable-nvidia-persistenced-mode.sh || exit $?
-    fi
-
-    ./config-docker-runtime.sh "$@"
-else
-    echo NVIDIA gpu not detected, skipping driver installation
-fi
-
-if [[ -n ${ENABLE_IB} ]]; then
-    if lspci | grep -qE '(Network|Infiniband) controller.*Mellanox.*ConnectX'; then
-        echo Infiniband hardware detected
-        # Installing InfiniBand drivers and GPU direct RDMA drivers
-        ./install-ib-drivers || exit $?
-        echo Infiniband drivers installed successfully.
-    else
-        echo Infiniband hardware not detected, skipping driver installation
-    fi
-else
-    echo "The variable ENABLE_IB is not set in the configuration, so IB driver installation will be skipped."
-fi
-
-mkdir -p /jobstatus
-touch /jobstatus/jobok
-
-while true; do sleep 1000; done
diff --git a/src/drivers/build/install-gdr-drivers b/src/drivers/build/install-gdr-drivers
deleted file mode 100644
index 238f9ab0b..000000000
--- a/src/drivers/build/install-gdr-drivers
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash -x
-
-# Recognize InfiniBand supported network card
-lspci | grep -qE '(Network|Infiniband) controller.*Mellanox.*ConnectX' ||
-{
-    echo ======== No IB present, exit early =========
-    exit 1
-}
-
-[[ -L ${DRIVER_PATH}/current ]] ||
-{
-    echo ======== No NVIDIA drivers found =========
-    exit 1
-}
-
-# This script installs GPU direct RDMA drivers
-KERNEL_FULL_VERSION=`uname -r`
-export DESTDIR=/root/gdr-module/build
-export DEPMOD=depmod
-
-# Make sure that GPU driver kernel sources are available in /usr/src
-[[ -e /usr/src/nvidia-$NVIDIA_VERSION ]] ||
-{
-    cp -r /root/drivers/NVIDIA-Linux-x86_64-$NVIDIA_VERSION/kernel /usr/src/nvidia-$NVIDIA_VERSION || exit $?
-}
-
-# Install nv_peer_mem kernel module used in OpenMPI
-lsmod | grep -qE "^nv_peer_mem" ||
-{
-    cd nvidia-peer-memory-1.0 || exit $?
-    make clean || exit $?
-    make all install || exit $?
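-    # Load the freshly built nv_peer_mem module into the running kernel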
- insmod $DESTDIR/lib/modules/$KERNEL_FULL_VERSION/extra/nv_peer_mem.ko || exit $? - cd .. || exit $? -} - -# Install gdrcopy kernel module used in MVAPICH2 -lsmod | grep -qE "^gdrdrv" || -{ - cd gdrcopy || exit $? - make clean || exit $? - mkdir -p ${DRIVER_PATH}/current/include || exit $? - make PREFIX=${DRIVER_PATH}/current all install || exit $? - ./insmod.sh || exit $? - ./validate || exit $? - ./copybw || exit $? - cd .. || exit $? -} diff --git a/src/drivers/build/install-ib-drivers b/src/drivers/build/install-ib-drivers deleted file mode 100644 index 0b7232aeb..000000000 --- a/src/drivers/build/install-ib-drivers +++ /dev/null @@ -1,169 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -# Recognize InfiniBand supported network card -lspci | grep -qE '(Network|Infiniband) controller.*Mellanox.*ConnectX' || -{ - echo ======== No IB present, exit early ========= - exit 1 -} - -# This script is used for installation of InfiniBand drivers -KERNEL_FULL_VERSION=`uname -r` - -HOSTNAME=`hostname` -# HACK: using last octet of the host's IP -LAST_OCTET=`host $HOSTNAME | head -n1 | sed 's/^.*\.//'` -echo POD_IP: ${POD_IP} -OCT1=33 -echo OCT1: $OCT1 -OCT2=`echo ${POD_IP} | awk -F'.' '{ print $(NF) }'` -echo OCT1: $OCT2 -IP_ADDRESS="192.168.${OCT1}.${OCT2}" - -echo IB_ADDRESS: $IP_ADDRESS - -CURRENT_DRIVER=/var/drivers/mellanox/current - -if [[ ! -f /var/drivers/mellanox/$MLNX_OFED_STRING/mlnxofedinstall ]]; then - [[ -f /tmp/$MLNX_OFED_STRING-ext.tgz ]] || - { - ./$MLNX_OFED_STRING/mlnx_add_kernel_support.sh -y -m ./$MLNX_OFED_STRING --make-tgz || exit $? - } - mkdir -p /var/drivers/mellanox/$MLNX_OFED_STRING || exit $? - tar -xvf /tmp/$MLNX_OFED_STRING-ext.tgz -C /var/drivers/mellanox/$MLNX_OFED_STRING --strip 1 || exit $? - [[ -L $CURRENT_DRIVER ]] && - { - rm -f $CURRENT_DRIVER || exit $? - } - ln -s -f /var/drivers/mellanox/$MLNX_OFED_STRING $CURRENT_DRIVER || exit $? -fi - -function ibPresent { - # Check that at least one IP address is up for IB - ip a | grep "state UP" -A 2 | grep -q $IP_ADDRESS || return 1 - # Make sure that the devices are configured in connected mode. See below. 
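-    # (A device still in datagram mode indicates misconfiguration; the
-    # script sets "connected" mode and a 65520 MTU for each interface below.)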
-    cat /sys/class/net/ib*/mode | grep -q datagram && return 7
-    lsmod | grep -qE "^mlx[4-5]_ib" || return 2
-    [[ -e /dev/infiniband/rdma_cm ]] || return 3
-    lsmod | grep -qE "^nv_peer_mem" || return 4
-    lsmod | grep -qE "^gdrdrv" || return 5
-    grep -q "$HOSTNAME" /sys/class/infiniband/mlx?_?/node_desc || return 6
-    return 0
-}
-
-echo ======== If IB present exit early =========
-ibPresent &&
-{
-    # Install only user mode diagnostic components of Mellanox drivers
-    echo "====== Installing Infiniband drivers (diag components only) ======"
-    pushd $CURRENT_DRIVER || exit $?
-    echo "infiniband-diags=y" > /tmp/ibdiag.conf || exit $?
-    ./mlnxofedinstall --force --without-dkms --without-fw-update -c /tmp/ibdiag.conf || exit $?
-    popd
-    ibstat || exit $?
-    exit 0
-}
-
-# The following lines uninstall inbox network drivers and install
-# Mellanox OFED drivers, which are more reliable and have better tools.
-echo ====== Installing Infiniband drivers ======
-
-# If the last install was incorrect, first bring the ib interfaces down
-for iface in `ifconfig 2>/dev/null | grep -oE "ib[0-9]" | xargs`
-do
-    ifconfig $iface down || exit $?
-done
-
-# Then remove nv_peer_mem
-lsmod | grep -qE "^nv_peer_mem" &&
-{
-    rmmod nv_peer_mem || exit $?
-}
-
-# Then gdrdrv
-lsmod | grep -qE "^gdrdrv" &&
-{
-    rmmod gdrdrv || exit $?
-}
-
-# Since we already prepared kernel modules above we don't need the --add-kernel-support switch
-pushd $CURRENT_DRIVER || exit $?
-./mlnxofedinstall --force --kernel-only --without-dkms --without-fw-update --with-infiniband-diags || exit $?
-popd
-
-# Disable enhanced ipoib
-cat << EOF > /etc/modprobe.d/ib_ipoib.conf
-alias netdev-ib* ib_ipoib
-options ib_ipoib send_queue_size=128 recv_queue_size=128 ipoib_enhanced=0
-EOF
-
-/etc/init.d/openibd restart || exit $?
-
-# Installing GPU direct RDMA drivers
-# NOTE: do this here because it takes some time to install GDR drivers
-# and that's enough time for IB devices to come up so we can test them
-./install-gdr-drivers || exit $?
-
-IB_DEVICES=`ibstat -l | xargs`
-
-# The ib_ipoib module automatically creates network devices for ip assignment. We use the last 48 bits
-# of the hardware address to map the devices to their corresponding mellanox devices.
-declare -A ADDRESS_MAP
-for device_path in /sys/class/net/ib*;
-do
-    address=$(cat "$device_path/address" | sed "s/://g")
-    ADDRESS_MAP["${address: -12}"]=${device_path: -3}
-done
-
-for dev in $IB_DEVICES
-do
-    for port_path in /sys/class/infiniband/$dev/ports/*
-    do
-        if grep -q InfiniBand "$port_path/link_layer" && grep -q LinkUp "$port_path/phys_state"; then
-
-            grep -q "$HOSTNAME" /sys/class/infiniband/$dev/node_desc || echo "$HOSTNAME" > /sys/class/infiniband/$dev/node_desc || exit $?
-
-            # Configuring IP address for IP-over-IB interface
-            GID=$(cat "$port_path/gids/0" | sed "s/://g")
-            GID_ADDRESS=${GID: -12}
-            IB_INTERFACE=${ADDRESS_MAP[$GID_ADDRESS]}
-            IB_IP_ADDRESS="192.168.${OCT1}.${OCT2}"
-            echo "Assigning ip address $IB_IP_ADDRESS for $IB_INTERFACE interface"
-            ifconfig $IB_INTERFACE up $IB_IP_ADDRESS/24 || exit $?
-            grep -q "connected" /sys/class/net/$IB_INTERFACE/mode || echo "connected" > /sys/class/net/$IB_INTERFACE/mode || exit $?
-            grep -q "65520" /sys/class/net/$IB_INTERFACE/mtu || echo "65520" > /sys/class/net/$IB_INTERFACE/mtu || exit $?
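-            # Advance to the next 192.168.x.0/24 subnet for any further
-            # configured IB port on this host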
- OCT1=$((OCT1+1)) - fi - - if grep -q Ethernet "$port_path/link_layer" && grep -qE "LinkUp|Polling" "$port_path/phys_state"; then - ETH_INTERFACE=`ls /sys/class/infiniband/$dev/device/net | grep -v ib` - grep -q "1500" /sys/class/net/$ETH_INTERFACE/mtu || echo "1500" > /sys/class/net/$ETH_INTERFACE/mtu || exit $? - fi - done -done - -# Verifying whether IB is up and running by invoking ibstat. This will also print basic device/link information -ibstat || exit $? -ibdev2netdev || exit $? - -# Final check -ibPresent -echo ibPresent exit value: $? diff --git a/src/drivers/build/install-nvidia-drivers b/src/drivers/build/install-nvidia-drivers deleted file mode 100755 index 4ec5a60e2..000000000 --- a/src/drivers/build/install-nvidia-drivers +++ /dev/null @@ -1,133 +0,0 @@ -#!/bin/bash -x - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -KERNEL_FULL_VERSION=`uname -r` -CURRENT_DRIVER=${DRIVER_PATH}/current - -function nvidiaPresent { - [[ -f /proc/driver/nvidia/version ]] || return 1 - grep -q $NVIDIA_VERSION /proc/driver/nvidia/version || return 2 - lsmod | grep -qE "^nvidia" || return 3 - lsmod | grep -qE "^nvidia_uvm" || return 3 - [[ -e /dev/nvidia0 ]] || return 4 - [[ -e ${DRIVER_PATH}/$NVIDIA_VERSION/lib64/libnvidia-ml.so ]] || return 5 - - [[ -e /etc/ld.so.conf.d/nvidia-drivers.conf ]] || return 6 - return 0 -} - -echo ======== If NVIDIA present exit early ========= -nvidiaPresent -if [ $? == 0 ] ; then - if [[ ! -L $CURRENT_DRIVER ]]; then - mkdir -p `dirname $CURRENT_DRIVER` - ln -s $DRIVER_PATH/$NVIDIA_VERSION $CURRENT_DRIVER - fi - exit 0 -fi - -echo ======== If NVIDIA driver already running uninstall it ========= -lsmod | grep -qE "^nvidia" && -{ - DEP_MODS=`lsmod | tr -s " " | grep -E "^nvidia" | cut -f 4 -d " "` - for mod in ${DEP_MODS//,/ } - do - rmmod $mod || - { - echo "The driver $mod is still in use, can't unload it." - exit 1 - } - done - rmmod nvidia || - { - echo "The driver nvidia is still in use, can't unload it." - exit 1 - } -} - -echo === Building and installing NVIDIA modules -# Add new directories to ld.so to make sure that the dependencies -# are properly recorded and in the ld.so cache for later discovery. -echo $NV_DRIVER/lib > /etc/ld.so.conf.d/nvidia-drivers.conf -echo $NV_DRIVER/lib64 >> /etc/ld.so.conf.d/nvidia-drivers.conf -mkdir -p $NV_DRIVER/lib $NV_DRIVER/lib64 $NV_DRIVER/bin || exit $? 
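-# (lib will hold the 32-bit compatibility libraries and lib64 the native
-# 64-bit ones, matching the --compat32-libdir and --*-libdir flags below.)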
-
-# Install NVIDIA driver user mode components to a directory
-# that is mapped outside of the Docker filesystem, for easier mapping
-# into user job containers afterwards.
-./NVIDIA-Linux-x86_64-$NVIDIA_VERSION/nvidia-installer \
-    --utility-prefix=$NV_DRIVER \
-    --opengl-prefix=$NV_DRIVER \
-    --x-prefix=$NV_DRIVER \
-    --compat32-prefix=$NV_DRIVER \
-    --opengl-libdir=lib64 \
-    --utility-libdir=lib64 \
-    --x-library-path=lib64 \
-    --compat32-libdir=lib \
-    --dkms \
-    -a -s -N
-
-echo === Loading NVIDIA UVM module
-modprobe nvidia-uvm || exit $?
-
-echo === Creating /dev entries
-UVM_MAJOR=`grep nvidia-uvm /proc/devices | awk '{print $1}'`
-FRONTEND_MAJOR=`grep nvidia-frontend /proc/devices | awk '{print $1}'`
-rm -f /dev/nvidia* 2>/dev/null
-mknod -m 666 /dev/nvidia-uvm c $UVM_MAJOR 0 || exit $?
-mknod -m 666 /dev/nvidiactl c $FRONTEND_MAJOR 255 || exit $?
-GPU_COUNT=`ls /proc/driver/nvidia/gpus | wc -l`
-echo === Number of GPUs: $GPU_COUNT
-for ((GPU=0; GPU<$GPU_COUNT; GPU++)); do
-    mknod -m 666 /dev/nvidia$GPU c $FRONTEND_MAJOR $GPU || exit $?
-done
-
-ls -la /dev/nvidia*
-
-ldconfig
-
-echo === Check if everything is loaded
-nvidiaPresent || exit $?
-
-echo === Checking the driver
-nvidia-smi || exit $?
-
-echo === Updating current driver
-# Remove previous soft link for current driver
-[[ -L $CURRENT_DRIVER ]] &&
-{
-    rm -f $CURRENT_DRIVER || exit $?
-}
-
-# Remove benign issue where "current" exists as a directory
-[[ -d $CURRENT_DRIVER ]] &&
-{
-    echo === Removing current driver as directory, should be soft link
-    rm -rf $CURRENT_DRIVER || exit $?
-}
-
-ln -s -f $NV_DRIVER $CURRENT_DRIVER || exit $?
-
-[[ -L $CURRENT_DRIVER ]] ||
-{
-    echo ======== Current drivers link not updated =========
-    exit 1
-}
-
-echo NVIDIA driver installed successfully
diff --git a/src/drivers/config/drivers.md b/src/drivers/config/drivers.md
deleted file mode 100644
index 90223461a..000000000
--- a/src/drivers/config/drivers.md
+++ /dev/null
@@ -1,69 +0,0 @@
-## drivers section parser
-
-- [Default Configuration](#D_Config)
-- [How to Configure](#HT_Config)
-- [Generated Configuration](#G_Config)
-- [Data Table](#T_config)
-
-#### Default configuration
-
-[drivers default configuration](drivers.yaml)
-
-#### How to configure the drivers section in service-configuration.yaml
-
-All configuration items in this section are optional. If you want to customize these values, you can override them in service-configuration.yaml.
-
-For example, to reconfigure ```drivers.set-nvidia-runtime``` with a new value, configure it in [service-configuration.yaml](../../../examples/cluster-configuration/services-configuration.yaml) in the following YAML style:
-```yaml
-drivers:
-    set-nvidia-runtime: true
-```
-
-Or, if your cluster already has the nvidia driver installed and does not need PAI to install it again, you can provide this information in your service-configuration.yaml like:
-
-```yaml
-drivers:
-    pre-installed-nvidia-path: /path/to/your/drivers
-```
-
-#### Generated Configuration
-
-Generated configuration means the object model after parsing. The parsed data is presented in YAML format:
-```yaml
-drivers:
-    set-nvidia-runtime: false
-    version: "384.111"
-    pre-installed-nvidia-path: /usr/local/nvidia
-```
-
-#### Table
-
-| Data in Configuration File | Data in Cluster Object Model | Data in Jinja2 Template | Data type |
-| --- | --- | --- | --- |
-| drivers.set-nvidia-runtime | com["drivers"]["set-nvidia-runtime"] | cluster_cfg["drivers"]["set-nvidia-runtime"] | Bool |
-| drivers.version | com["drivers"]["version"] | cluster_cfg["drivers"]["version"] | String |
-| drivers.pre-installed-nvidia-path | com["drivers"]["pre-installed-nvidia-path"] | cluster_cfg["drivers"]["pre-installed-nvidia-path"] | Path string |
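The override semantics described above (user values win; nested maps are merged key by key) are easy to pin down with a small example. The sketch below is editorial, not part of the deleted sources; it mirrors the behavior of `merge_service_configuration` in `drivers.py` (next hunk), using the default values from `drivers.yaml`:

```python
# Editorial sketch of the drivers-section override merge. Defaults are taken
# from drivers.yaml; the override mimics a user's service-configuration.yaml.

def merge(override, default):
    """Overlay `override` on `default`; nested dicts are merged recursively."""
    if override is None:
        return default
    merged = dict(default)
    for key, value in override.items():
        if key in merged and isinstance(value, dict) and isinstance(merged[key], dict):
            merged[key] = merge(value, merged[key])
        else:
            merged[key] = value
    return merged

default_cfg = {
    "set-nvidia-runtime": False,
    "version": "384.111",
    "pre-installed-nvidia-path": "/usr/local/nvidia",
    "enable-ib-installation": False,
}
override_cfg = {"set-nvidia-runtime": True}

print(merge(override_cfg, default_cfg))
# -> {'set-nvidia-runtime': True, 'version': '384.111',
#     'pre-installed-nvidia-path': '/usr/local/nvidia',
#     'enable-ib-installation': False}
```

Note that non-dict values are replaced wholesale, so an override never deep-merges lists or scalars; validation then runs on the final merged keys.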
diff --git a/src/drivers/config/drivers.py b/src/drivers/config/drivers.py
deleted file mode 100644
index d9d751f23..000000000
--- a/src/drivers/config/drivers.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-#
-# MIT License
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
-# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
-# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and
-# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
-# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
-# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
-# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-
-import logging
-import logging.config
-
-
-class Drivers:
-
-    def __init__(self, cluster_configuration, service_configuration, default_service_configuration):
-        self.logger = logging.getLogger(__name__)
-
-        self.cluster_configuration = cluster_configuration
-        self.service_configuration = self.merge_service_configuration(service_configuration, default_service_configuration)
-
-    def merge_service_configuration(self, overwrite_srv_cfg, default_srv_cfg):
-        if overwrite_srv_cfg is None:
-            return default_srv_cfg
-        srv_cfg = default_srv_cfg.copy()
-        for k in overwrite_srv_cfg:
-            v = overwrite_srv_cfg[k]
-            if (k in srv_cfg and isinstance(overwrite_srv_cfg[k], dict) and isinstance(srv_cfg[k], dict)):
-                srv_cfg[k] = self.merge_service_configuration(overwrite_srv_cfg[k], srv_cfg[k])
-            else:
-                srv_cfg[k] = overwrite_srv_cfg[k]
-        return srv_cfg
-
-    def validation_pre(self):
-        if "set-nvidia-runtime" not in self.service_configuration:
-            return False, "set-nvidia-runtime is missing in service-configuration -> drivers."
-        if self.service_configuration["set-nvidia-runtime"] not in [False, True]:
-            return False, "Value of set-nvidia-runtime should be false or true."
-        if self.service_configuration["enable-ib-installation"] not in [False, True]:
-            return False, "Value of enable-ib-installation should be false or true."
-        if "version" not in self.service_configuration:
-            return False, "version is missing in service-configuration -> drivers."
-        if self.service_configuration["version"] not in ["384.111", "390.25", "410.73", "418.56"]:
-            return False, "Value of version in drivers should be one of [384.111, 390.25, 410.73, 418.56]."
-        return True, None
-
-    def run(self):
-        drivers_com = self.service_configuration
-        return drivers_com
-
-    def validation_post(self, cluster_object_model):
-        return True, None
diff --git a/src/drivers/config/drivers.yaml b/src/drivers/config/drivers.yaml
deleted file mode 100644
index 59c1d7e79..000000000
--- a/src/drivers/config/drivers.yaml
+++ /dev/null
@@ -1,34 +0,0 @@
-# Copyright (c) Microsoft Corporation
-# All rights reserved.
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -service_type: "yarn" - -set-nvidia-runtime: false - -# You can set the drivers version here. If this value is missing, the default value will be 384.111 -# Current supported version list -# 384.111 -# 390.25 -# 410.73 -# 418.56 -version: "384.111" - -pre-installed-nvidia-path: /usr/local/nvidia - -# IB driver installation will fail when the VM has IB kernel modules built into the vmlinux image. -# If IB installation is needed during deployment, you can set the following field to true. -enable-ib-installation: false diff --git a/src/drivers/deploy/clean.yaml.template b/src/drivers/deploy/clean.yaml.template deleted file mode 100644 index b8b6c967f..000000000 --- a/src/drivers/deploy/clean.yaml.template +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: rollback-nvidia-runtime -spec: - selector: - matchLabels: - app: rollback-nvidia-runtime - template: - metadata: - labels: - app: rollback-nvidia-runtime - spec: - hostNetwork: false - hostPID: true # this is required by pkill dockerd - containers: - - name: rollback-nvidia-runtime - image: {{ cluster_cfg["cluster"]["docker-registry"]["prefix"] }}drivers-{{ cluster_cfg["drivers"]["version"] }}:{{ cluster_cfg["cluster"]["docker-registry"]["tag"] }} - imagePullPolicy: Always - securityContext: - privileged: true # this is required by pkill dockerd - command: - - sh - - -x - - clean.sh - volumeMounts: - - mountPath: /etc/docker - name: docker-config - readinessProbe: - exec: - command: - - cat - - /finished - initialDelaySeconds: 5 - periodSeconds: 3 - imagePullSecrets: - - name: {{ cluster_cfg["cluster"]['docker-registry']['secret-name'] }} - volumes: - - name: docker-config - hostPath: - path: /etc/docker diff --git a/src/drivers/deploy/delete.sh.template b/src/drivers/deploy/delete.sh.template deleted file mode 100644 index 03b54133c..000000000 --- a/src/drivers/deploy/delete.sh.template +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - -{% if cluster_cfg["drivers"]['set-nvidia-runtime'] %} -kubectl apply --overwrite=true -f clean.yaml || exit $? - -export PYTHONPATH="../../../deployment" - -# wait until all cleaning has finished -python -m k8sPaiLibrary.monitorTool.check_pod_ready_status -w -k app -v rollback-nvidia-runtime - -sleep 30 # SIGHUP in the clean container may restart dockerd, which will lead to a restart of all docker containers, including the api server - -kubectl delete -f clean.yaml -{% endif %} - -/bin/bash stop.sh || exit $? - -popd > /dev/null diff --git a/src/drivers/deploy/drivers.yaml.template b/src/drivers/deploy/drivers.yaml.template deleted file mode 100644 index a46ae8365..000000000 --- a/src/drivers/deploy/drivers.yaml.template +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved.
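delete.sh.template above blocks on `k8sPaiLibrary.monitorTool.check_pod_ready_status` until the rollback DaemonSet's pods report ready. That module lives in PAI's deployment tree; a rough standalone equivalent of the wait (the 10-second polling interval is an arbitrary choice for this sketch) might look like:

```python
# Poll kubectl until every pod carrying the given label reports Ready.
# The label key/value ("app=rollback-nvidia-runtime") comes from clean.yaml
# above; this is a sketch, not PAI's actual check_pod_ready_status code.
import json
import subprocess
import time

def wait_pods_ready(label_key, label_value, interval=10):
    selector = "{}={}".format(label_key, label_value)
    while True:
        out = subprocess.check_output(
            ["kubectl", "get", "pods", "-l", selector, "-o", "json"])
        pods = json.loads(out)["items"]
        ready = [
            any(c["type"] == "Ready" and c["status"] == "True"
                for c in pod["status"].get("conditions", []))
            for pod in pods
        ]
        if pods and all(ready):
            return
        time.sleep(interval)

if __name__ == "__main__":
    wait_pods_ready("app", "rollback-nvidia-runtime")
```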
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: drivers-one-shot -spec: - selector: - matchLabels: - app: drivers-one-shot - template: - metadata: - labels: - app: drivers-one-shot - spec: - hostNetwork: true - hostPID: true - containers: - - name: nvidia-drivers - image: {{ cluster_cfg["cluster"]["docker-registry"]["prefix"] }}drivers-{{ cluster_cfg["drivers"]["version"] }}:{{ cluster_cfg["cluster"]["docker-registry"]["tag"] }} - imagePullPolicy: Always - securityContext: - privileged: true - capabilities: - add: - - ALL - volumeMounts: - - mountPath: /var/drivers - name: driver-path - - mountPath: /dev - name: device-path - - mountPath: /lib/modules - name: modules-path - - mountPath: /var/log - name: drivers-log - - mountPath: /usr/src - name: kernel-head - - mountPath: /etc/ld.so.conf.d - name: etc-path-ld - - mountPath: /etc/docker - name: etc-path-docker - - mountPath: {{ cluster_cfg["drivers"]["pre-installed-nvidia-path"] }} - name: pre-install-nv-driver-path - env: - {% if cluster_cfg['drivers']['enable-ib-installation'] %} - - name: ENABLE_IB - value: "true" - {%- endif %} - - name: DRIVER_PATH - value: /var/drivers/nvidia - - name: PRE_INSTALLED_NV_DRIVER_PATH - value: /usr/local/nvidia # the path where the user has pre-installed the nvidia driver - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP - readinessProbe: - exec: - command: - - cat - - /jobstatus/jobok - initialDelaySeconds: 5 - periodSeconds: 3 - {%- if cluster_cfg['cluster']['common']['qos-switch'] == "true" %} - resources: - limits: - memory: "2Gi" - requests: - memory: "256Mi" - {%- endif %} - command: ["bash", "-x", "./install-all-drivers"] - {% if cluster_cfg['drivers']['set-nvidia-runtime'] %} - args: - - "--config-runtime" - {% endif %} - imagePullSecrets: - - name: {{ cluster_cfg["cluster"]['docker-registry']['secret-name'] }} - volumes: - - name: driver-path - hostPath: - path: /var/drivers - - name: device-path - hostPath: - path: /dev - - name: modules-path - hostPath: - path: /lib/modules - - name: drivers-log - hostPath: - path: /var/log/drivers - - name: kernel-head - hostPath: - path: /usr/src - - name: etc-path-ld - hostPath: - path: /etc/ld.so.conf.d - - name: etc-path-docker - hostPath: - path: /etc/docker - - name: pre-install-nv-driver-path - hostPath: - path: /usr/local/nvidia # TODO: make it an argument \ No newline at end of file diff --git a/src/drivers/deploy/refresh.sh b/src/drivers/deploy/refresh.sh deleted file mode 100644
index 8c55d2f52..000000000 --- a/src/drivers/deploy/refresh.sh +++ /dev/null @@ -1,23 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - - -popd > /dev/null \ No newline at end of file diff --git a/src/drivers/deploy/service.yaml b/src/drivers/deploy/service.yaml deleted file mode 100644 index f9a1fc598..000000000 --- a/src/drivers/deploy/service.yaml +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -cluster-type: - - yarn - -prerequisite: - - cluster-configuration - -template-list: - - drivers.yaml - - clean.yaml - - delete.sh - -# Note: your script should start all your service dependencies. Make sure each service has completed the starting process. -start-script: start.sh -stop-script: stop.sh -delete-script: delete.sh -refresh-script: refresh.sh -upgraded-script: upgraded.sh - - -deploy-rules: - - notin: no-drivers diff --git a/src/drivers/deploy/start.sh b/src/drivers/deploy/start.sh deleted file mode 100755 index 9893dd270..000000000 --- a/src/drivers/deploy/start.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved.
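service.yaml above is the contract between a PAI service and the deployment tooling: each lifecycle action maps to a shell script in the service's deploy directory. A hedged sketch of how such a dispatcher could look; the file layout matches the diff, but `run_lifecycle` itself is illustrative, not PAI's actual paictl implementation:

```python
# Dispatch a lifecycle action (start/stop/delete/refresh) by reading the
# "<action>-script" key from the service's service.yaml and running it.
import subprocess
import yaml  # PyYAML, assumed available

def run_lifecycle(service_dir, action):
    with open("{}/service.yaml".format(service_dir)) as f:
        spec = yaml.safe_load(f)
    # e.g. action="start" resolves the "start-script: start.sh" entry
    script = spec["{}-script".format(action)]
    subprocess.check_call(["/bin/bash", script], cwd=service_dir)

# Hypothetical usage against the directory shown in the diff:
# run_lifecycle("src/drivers/deploy", "start")
```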
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - - -kubectl apply --overwrite=true -f drivers.yaml || exit $? - -sleep 10 - -# wait until all drivers are ready. -PYTHONPATH="../../../deployment" python -m k8sPaiLibrary.monitorTool.check_pod_ready_status -w -k app -v drivers-one-shot || exit $? - -popd > /dev/null - diff --git a/src/drivers/deploy/stop.sh b/src/drivers/deploy/stop.sh deleted file mode 100644 index a11471937..000000000 --- a/src/drivers/deploy/stop.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - -if kubectl get daemonset | grep -q "drivers-one-shot"; then - kubectl delete ds drivers-one-shot || exit $? -fi - - -popd > /dev/null \ No newline at end of file diff --git a/src/end-to-end-test/build/end-to-end-test.yarn.dockerfile b/src/end-to-end-test/build/end-to-end-test.yarn.dockerfile deleted file mode 100644 index 14c6061a9..000000000 --- a/src/end-to-end-test/build/end-to-end-test.yarn.dockerfile +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -FROM hadoop-run - -RUN apt-get -y update && \ - apt-get -y install python git jq && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -WORKDIR /root/end-to-end-test - -COPY etc /root/end-to-end-test/etc/ -COPY *.sh /root/end-to-end-test/ - - -RUN git clone https://github.com/sstephenson/bats.git && \ - cd bats && \ - ./install.sh /usr/local - -RUN wget http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz -RUN tar zxvf cifar-10-python.tar.gz -RUN rm cifar-10-python.tar.gz -RUN git clone -b tf_benchmark_stage https://github.com/tensorflow/benchmarks.git - -CMD ["/bin/bash", "/root/end-to-end-test/start.sh"] diff --git a/src/end-to-end-test/deploy/delete.sh b/src/end-to-end-test/deploy/delete.sh deleted file mode 100644 index d77ce6e7c..000000000 --- a/src/end-to-end-test/deploy/delete.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - -/bin/bash stop.sh || exit $? - -popd > /dev/null \ No newline at end of file diff --git a/src/end-to-end-test/deploy/end-to-end-test.yaml.template b/src/end-to-end-test/deploy/end-to-end-test.yaml.template deleted file mode 100644 index 40f8b49e7..000000000 --- a/src/end-to-end-test/deploy/end-to-end-test.yaml.template +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -apiVersion: apps/v1 -kind: Deployment -metadata: - name: end-to-end-test-deployment - labels: - app: end-to-end-test -spec: - replicas: 1 - selector: - matchLabels: - app: end-to-end-test - template: - metadata: - name: end-to-end-test - labels: - app: end-to-end-test - spec: - hostNetwork: false - hostPID: false - containers: - - name: end-to-end-test - image: {{ cluster_cfg["cluster"]["docker-registry"]["prefix"] }}end-to-end-test:{{ cluster_cfg["cluster"]["docker-registry"]["tag"] }} - imagePullPolicy: Always - env: - - name: HDFS_URI - value: hdfs://{{ cluster_cfg['hadoop-name-node']['master-ip'] }}:9000 - - name: WEBSERVICE_URI - value: {{ cluster_cfg['yarn-frameworklauncher']['webservice'] }} - - name: REST_SERVER_URI - value: {{ cluster_cfg['rest-server']['uri'] }} - - name: TEST_USERNAME - value: {{ cluster_cfg['rest-server']['default-pai-admin-username'] }} - - name: TEST_PASSWORD - value: {{ cluster_cfg['rest-server']['default-pai-admin-password'] }} - imagePullSecrets: - - name: {{ cluster_cfg["cluster"]["docker-registry"]["secret-name"] }} diff --git a/src/end-to-end-test/deploy/refresh.sh b/src/end-to-end-test/deploy/refresh.sh deleted file mode 100644 index e872d8f8a..000000000 --- a/src/end-to-end-test/deploy/refresh.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -pushd $(dirname "$0") > /dev/null - -echo "no job in the refresh script of end-to-end-test" - -popd > /dev/null \ No newline at end of file diff --git a/src/end-to-end-test/deploy/service.yaml b/src/end-to-end-test/deploy/service.yaml deleted file mode 100644 index d9d19a14e..000000000 --- a/src/end-to-end-test/deploy/service.yaml +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -cluster-type: - - yarn - -prerequisite: - - cluster-configuration - - yarn-frameworklauncher - - rest-server - -template-list: - - end-to-end-test.yaml - -start-script: start.sh -stop-script: stop.sh -delete-script: delete.sh -refresh-script: refresh.sh -upgraded-script: upgraded.sh \ No newline at end of file diff --git a/src/end-to-end-test/deploy/start.sh b/src/end-to-end-test/deploy/start.sh deleted file mode 100755 index 1f1e89cc1..000000000 --- a/src/end-to-end-test/deploy/start.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - -# kubectl apply --overwrite=true -f end-to-end-test.yaml || exit $? 
- -popd > /dev/null diff --git a/src/end-to-end-test/deploy/stop.sh b/src/end-to-end-test/deploy/stop.sh deleted file mode 100644 index 879889572..000000000 --- a/src/end-to-end-test/deploy/stop.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -pushd $(dirname "$0") > /dev/null - -if kubectl get deployments | grep -q "end-to-end-test-deployment"; then - kubectl delete deployment end-to-end-test-deployment || exit $? -fi - -popd > /dev/null diff --git a/src/end-to-end-test/etc/launcher.json b/src/end-to-end-test/etc/launcher.json deleted file mode 100644 index 9cd25a68c..000000000 --- a/src/end-to-end-test/etc/launcher.json +++ /dev/null @@ -1,42 +0,0 @@ -{ - "version": 10, - "user": { - "name": "test" - }, - "retryPolicy": { - "maxRetryCount": 0, - "fancyRetryPolicy": true - }, - "taskRoles": { - "Master": { - "taskNumber": 10, - "taskRetryPolicy": { - "maxRetryCount": 0, - "fancyRetryPolicy": true - }, - "taskService": { - "version": 23, - "entryPoint": "echo 'TEST'", - "sourceLocations": [ - "/Test/launcher" - ], - "resource": { - "cpuNumber": 1, - "memoryMB": 512, - "portDefinitions": { - "http": { - "start": 0, - "count": 1 - }, - "ssh": { - "start": 0, - "count": 1 - } - }, - "diskType": 0, - "diskMB": 0 - } - } - } - } -} \ No newline at end of file diff --git a/src/end-to-end-test/etc/tensorflow.json b/src/end-to-end-test/etc/tensorflow.json deleted file mode 100644 index 4dc4945ea..000000000 --- a/src/end-to-end-test/etc/tensorflow.json +++ /dev/null @@ -1,19 +0,0 @@ -{ - "jobName": "tensorflow-cifar10", - "image": "openpai/pai.example.tensorflow", - "dataDir": "HDFS_URI/Test/tensorflow/cifar-10-batches-py", - "outputDir": "HDFS_URI/Test/tensorflow/output", - "codeDir": "HDFS_URI/Test/tensorflow/benchmarks", - "taskRoles": [ - { - "name": "tf_benchmark", - "taskNumber": 1, - "cpuNumber": 2, - "memoryMB": 10240, - "gpuNumber": 1, - "command": "pip --quiet install scipy && ls . 
&& python benchmarks/scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py --num_batches=1 --local_parameter_device=gpu --batch_size=8 --model=alexnet --variable_update=parameter_server --data_dir=$PAI_DATA_DIR --data_name=cifar10 --train_dir=$PAI_OUTPUT_DIR", - "minSucceededTaskCount": 1 - } - ], - "retryCount": 0 -} diff --git a/src/end-to-end-test/start.sh b/src/end-to-end-test/start.sh deleted file mode 100644 index c92ddf657..000000000 --- a/src/end-to-end-test/start.sh +++ /dev/null @@ -1,58 +0,0 @@ -#!/bin/bash - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -account_file="./etc/account.config" -token_file="./etc/token.config" -expiration="$((7*24*60*60))" - -rest_server_uri=$REST_SERVER_URI -echo "$TEST_USERNAME:$TEST_PASSWORD" > $account_file - - -get_auth_token() { - account="$(cat $account_file)" - account=(${account//:/ }) - curl -X POST -d "username=${account[0]}" -d "password=${account[1]}" -d "expiration=$expiration" $rest_server_uri/api/v1/authn/basic/login | jq -r ".token" > $token_file -} - - -while true; do - printf "\nStarting end to end tests:\n" - - if [ ! -s $token_file ] || [ $(( $(date +%s) - $(stat -c %Y $token_file) )) -gt $expiration ]; then - get_auth_token - fi - - # printf "\nTesting service ...\n" - # bats test_service.sh - - printf "\nTesting hdfs ...\n" - bats test_hdfs.sh - - printf "\nTesting framework launcher ...\n" - bats test_launcher.sh - - printf "\nTesting rest server ...\n" - bats test_rest_server.sh - - printf "\n Sleeping ...\n" - sleep 1800 - -done diff --git a/src/end-to-end-test/test_hdfs.sh b/src/end-to-end-test/test_hdfs.sh deleted file mode 100644 index 1a7faea49..000000000 --- a/src/end-to-end-test/test_hdfs.sh +++ /dev/null @@ -1,56 +0,0 @@ -#!/usr/bin/env bats - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
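The token handling in the deleted start.sh above (log in against `/api/v1/authn/basic/login`, cache the bearer token on disk, refresh it once it is older than the expiration window) could be re-expressed in Python roughly as follows; the endpoint, form fields, and file path mirror the curl call in the script, while the `requests` dependency is an assumption of this sketch:

```python
# Fetch or reuse a cached REST-server bearer token, as start.sh does.
import os
import time
import requests  # assumed available

EXPIRATION = 7 * 24 * 60 * 60  # seconds, matching start.sh
TOKEN_FILE = "./etc/token.config"

def get_auth_token(rest_server_uri, username, password):
    # Reuse the cached token while it is younger than the expiration window.
    if (os.path.isfile(TOKEN_FILE)
            and time.time() - os.path.getmtime(TOKEN_FILE) <= EXPIRATION):
        with open(TOKEN_FILE) as f:
            return f.read().strip()
    resp = requests.post(
        rest_server_uri + "/api/v1/authn/basic/login",
        data={"username": username, "password": password,
              "expiration": EXPIRATION})
    resp.raise_for_status()
    token = resp.json()["token"]
    with open(TOKEN_FILE, "w") as f:
        f.write(token)
    return token
```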
-# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -hdfs_uri=$HDFS_URI - - -@test "list hdfs root dir" { - result="$(hdfs dfs -ls $hdfs_uri/)" - [[ $result == *Launcher* ]] -} - -@test "make hdfs test root dir" { - result="$(hdfs dfs -mkdir $hdfs_uri/Test)" - [[ ! $result == *mkdir* ]] - result="$(hdfs dfs -ls $hdfs_uri/)" - [[ $result == *Test* ]] -} - -@test "make hdfs test sub dir" { - result="$(hdfs dfs -mkdir $hdfs_uri/Test/launcher)" - [[ ! $result == *mkdir* ]] - result="$(hdfs dfs -mkdir $hdfs_uri/Test/tensorflow)" - [[ ! $result == *mkdir* ]] -} - -@test "upload cifar10 tensorflow test data to hdfs" { - result="$(hdfs dfs -put -f cifar-10-batches-py $hdfs_uri/Test/tensorflow/)" - [[ ! $result == *put* ]] -} - -@test "upload tensorflow script to hdfs" { - result="$(hdfs dfs -put -f benchmarks $hdfs_uri/Test/tensorflow/)" - [[ ! $result == *put* ]] -} - -@test "hdfs test root dir chmod" { - result="$(hdfs dfs -chmod -R 777 $hdfs_uri/Test)" - [[ ! $result == *chmod* ]] -} diff --git a/src/end-to-end-test/test_launcher.sh b/src/end-to-end-test/test_launcher.sh deleted file mode 100644 index 9d69e74da..000000000 --- a/src/end-to-end-test/test_launcher.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env bats - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -launcher_uri=$WEBSERVICE_URI - - -@test "check framework launcher health check" { - result="$(curl $launcher_uri)" - [[ $result == *Active* ]] -} - -@test "submit framework launcher test job" { - job_name="launcher-test-$RANDOM-$RANDOM" - result="$(cat ./etc/launcher.json | curl -H "Content-Type: application/json" -H "UserName: test" -X PUT -d @- $launcher_uri/v1/Frameworks/test~$job_name)" - [[ ! $result == *Error* ]] -} diff --git a/src/end-to-end-test/test_rest_server.sh b/src/end-to-end-test/test_rest_server.sh deleted file mode 100644 index a017bf60b..000000000 --- a/src/end-to-end-test/test_rest_server.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/usr/bin/env bats - -# Copyright (c) Microsoft Corporation -# All rights reserved. 
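For comparison, the "submit framework launcher test job" case in test_launcher.sh above amounts to a PUT of the framework description at `/v1/Frameworks/<user>~<name>` on the launcher webservice. A Python sketch of the same call, with the URL, headers, and `$RANDOM`-style job name copied from the bats test and `requests` assumed available:

```python
# Submit a framework spec (e.g. etc/launcher.json) to the launcher webservice.
import json
import random
import requests  # assumed available

def submit_framework(launcher_uri, user, spec_path):
    # Mimic the bash "launcher-test-$RANDOM-$RANDOM" naming scheme.
    job_name = "launcher-test-{}-{}".format(
        random.randint(0, 32767), random.randint(0, 32767))
    with open(spec_path) as f:
        spec = json.load(f)
    resp = requests.put(
        "{}/v1/Frameworks/{}~{}".format(launcher_uri, user, job_name),
        headers={"Content-Type": "application/json", "UserName": user},
        data=json.dumps(spec))
    resp.raise_for_status()
    return job_name

# Hypothetical usage:
# submit_framework("http://launcher:9086", "test", "./etc/launcher.json")
```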
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -hdfs_uri=$HDFS_URI -rest_server_uri=$REST_SERVER_URI - - -@test "check rest server health check" { - result="$(curl $rest_server_uri)" - [[ $result == *API* ]] -} - -@test "submit tensorflow test job" { - account="$(cat ./etc/account.config)" - account=(${account//:/ }) - job_name="tensorflow-test-$RANDOM-$RANDOM" - token="$(cat ./etc/token.config)" - result="$(cat ./etc/tensorflow.json | sed -e "s@tensorflow-cifar10@$job_name@g" -e "s@HDFS_URI@$hdfs_uri@g" | curl -H "Content-Type: application/json" -H "Authorization: Bearer $token" -X POST -d @- $rest_server_uri/api/v1/user/${account[0]}/jobs)" - [[ ! $result == *Error* ]] -} - -@test "clean up jobs" { - account="$(cat ./etc/account.config)" - account=(${account//:/ }) - token="$(cat ./etc/token.config)" - job_list="$(curl -H "Content-Type: application/json" -X GET $rest_server_uri/api/v1/user/${account[0]}/jobs | jq -r --arg username ${account[0]} --argjson timestamp $(( $(date +%s) * 1000 - 24 * 60 * 60 * 1000 )) '.[] | select((.username | match($username)) and (.state | match("SUCCEEDED")) and (.createdTime < $timestamp)) | .name')" - result="$(for job in $job_list; do curl -H "Content-Type: application/json" -H "Authorization: Bearer $token" -X DELETE $rest_server_uri/api/v1/user/${account[0]}/jobs/$job; done)" - [[ ! $result == *Error* ]] -} diff --git a/src/end-to-end-test/test_service.sh b/src/end-to-end-test/test_service.sh deleted file mode 100644 index 1fe3ba3ec..000000000 --- a/src/end-to-end-test/test_service.sh +++ /dev/null @@ -1,66 +0,0 @@ -#!/usr/bin/env bats - -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
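The "clean up jobs" case above packs its filtering into a jq one-liner. The same logic in Python, kept deliberately close to the bats test (the endpoints and millisecond timestamp arithmetic come from the script; `requests` is assumed to be available):

```python
# Delete the user's SUCCEEDED jobs that finished more than 24 hours ago.
import time
import requests  # assumed available

def clean_old_jobs(rest_server_uri, username, token):
    cutoff_ms = (time.time() - 24 * 60 * 60) * 1000  # jq uses ms timestamps
    jobs_url = "{}/api/v1/user/{}/jobs".format(rest_server_uri, username)
    jobs = requests.get(jobs_url).json()
    for job in jobs:
        if (job["username"] == username
                and job["state"] == "SUCCEEDED"
                and job["createdTime"] < cutoff_ms):
            requests.delete(
                "{}/{}".format(jobs_url, job["name"]),
                headers={"Authorization": "Bearer " + token})
```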
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -service_list="$(kubectl get pods)" - - -@test "check etcd server" { - [[ $service_list == *"etcd-server"!(*$'\n'*)"Running"* ]] -} - -@test "check drivers" { - [[ $service_list == *"drivers-one-shot"!(*$'\n'*)"Running"* ]] -} - -@test "check hadoop name node" { - [[ $service_list == *"hadoop-name-node-ds"!(*$'\n'*)"Running"* ]] -} - -@test "check hadoop data node" { - [[ $service_list == *"hadoop-data-node-ds"!(*$'\n'*)"Running"* ]] -} - -@test "check hadoop resource manager" { - [[ $service_list == *"hadoop-resource-manager-ds"!(*$'\n'*)"Running"* ]] -} - -@test "check hadoop node manager" { - [[ $service_list == *"hadoop-node-manager-ds"!(*$'\n'*)"Running"* ]] -} - -@test "check hadoop job history" { - [[ $service_list == *"hadoop-jobhistory-service"!(*$'\n'*)"Running"* ]] -} - -@test "check zookeeper" { - [[ $service_list == *"zookeeper-ds"!(*$'\n'*)"Running"* ]] -} - -@test "check yarn-frameworklauncher" { - [[ $service_list == *"yarn-frameworklauncher-ds"!(*$'\n'*)"Running"* ]] -} - -@test "check rest server" { - [[ $service_list == *"rest-server-ds"!(*$'\n'*)"Running"* ]] -} - -@test "check webportal" { - [[ $service_list == *"webportal-ds"!(*$'\n'*)"Running"* ]] -} diff --git a/src/etcd-upgrade/build/ds.yaml b/src/etcd-upgrade/build/ds.yaml deleted file mode 100644 index 1cc5b1c45..000000000 --- a/src/etcd-upgrade/build/ds.yaml +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
- -apiVersion: apps/v1 -kind: DaemonSet -metadata: - name: etcd-upgrade -spec: - selector: - matchLabels: - app: etcd-upgrade - template: - metadata: - labels: - app: etcd-upgrade - name: etcd-upgrade - spec: - containers: - - image: docker.io/openpai/etcd-upgrade:v0.11.0 - name: etcd-upgrade - imagePullPolicy: Always - readinessProbe: - exec: - command: - - cat - - /upgrade/done - initialDelaySeconds: 5 - periodSeconds: 3 - command: ["sh", "-x", "/upgrade/upgrade.sh"] - resources: - limits: - memory: "128Mi" - securityContext: - privileged: true - volumeMounts: - - mountPath: /etc/kubernetes/manifests - name: manifests-dir - volumes: - - name: manifests-dir - hostPath: - path: /etc/kubernetes/manifests diff --git a/src/etcd-upgrade/build/etcd-upgrade.dockerfile b/src/etcd-upgrade/build/etcd-upgrade.dockerfile deleted file mode 100644 index a233913a3..000000000 --- a/src/etcd-upgrade/build/etcd-upgrade.dockerfile +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -FROM python:3.7 - -RUN pip3 install PyYAML && \ - mkdir /upgrade - -COPY build/*.py build/*.sh /upgrade/ diff --git a/src/etcd-upgrade/build/upgrade.py b/src/etcd-upgrade/build/upgrade.py deleted file mode 100755 index e25faea32..000000000 --- a/src/etcd-upgrade/build/upgrade.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python - -import copy -import sys -import yaml - -labels = {"app": "etcd-server"} - -probe = {"httpGet": {"path": "/health", "port": 4001}, - "initialDelaySeconds": 10, - "periodSeconds": 30, - "timeoutSeconds": 10} - -def add_fields(obj): - obj = copy.deepcopy(obj) - assert obj["apiVersion"] == "v1" - assert obj["kind"] == "Pod" - assert obj["metadata"]["name"] == "etcd-server" - obj["metadata"]["labels"] = labels - obj["spec"]["containers"][0]["readinessProbe"] = probe - return obj - -if __name__ == '__main__': - print(yaml.dump(add_fields(yaml.load(sys.stdin)))) diff --git a/src/etcd-upgrade/build/upgrade.sh b/src/etcd-upgrade/build/upgrade.sh deleted file mode 100644 index d33a7616a..000000000 --- a/src/etcd-upgrade/build/upgrade.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/usr/bin/env sh - -MANIFESTS_DIR=/etc/kubernetes/manifests -ORIGIN=$MANIFESTS_DIR/etcd.yaml -UPGRADED_NAME=etcd-upgraded.yaml -TMP_DIR=/tmp - -if [ -f "$ORIGIN" ] ; then - cat $ORIGIN | /upgrade/upgrade.py > $TMP_DIR/$UPGRADED_NAME - rtn=$? 
- if [ $rtn -eq 0 ] ; then - rm $ORIGIN - sleep 10 # it seems k8s requires the yaml file disappear some time - mv $TMP_DIR/$UPGRADED_NAME $MANIFESTS_DIR - echo success - touch /upgrade/done - else - echo failed, nothing changed, return code is $rtn >&2 - fi -else - echo no etcd file found, nothing changed - touch /upgrade/done -fi - -sleep infinity diff --git a/src/job-exit-spec/config/architecture.png b/src/job-exit-spec/config/architecture.png deleted file mode 100644 index a69a8462f..000000000 Binary files a/src/job-exit-spec/config/architecture.png and /dev/null differ diff --git a/src/job-exit-spec/config/job-exit-spec.md b/src/job-exit-spec/config/job-exit-spec.md deleted file mode 100644 index 5c9c656a0..000000000 --- a/src/job-exit-spec/config/job-exit-spec.md +++ /dev/null @@ -1,82 +0,0 @@ -# PAI Job Exit Spec -1. See details in [job-exit-spec.yaml](job-exit-spec.yaml) -2. This markdown file is generated by [update_markdown.py](update_markdown.py) with [job-exit-spec.yaml](job-exit-spec.yaml) -3. See full doc in [PAI Job Exit Spec User Manual](user-manual.md) - -## Spec Schema -|field|description|required|unique|type|range| -|-----|-----------|--------|------|----|----| -| **code** | The PAI Job ExitCode | True | True | Integer | begin: -8000
end: 256
| -| **phrase** | The textual phrase representation of this ExitCode | True | True | String | Any | -| **issuer** | Who root issued this ExitCode in details | False | False | Enum | 1. USER_CONTAINER
2. PAI_OS
3. PAI_RUNTIME
4. PAI_YARN
5. PAI_LAUNCHER
| -| **causer** | Who root caused this ExitCode in details | False | False | Enum | 1. USER_SUBMISSION
2. USER_CONTAINER
3. USER_STOP
4. USER_DELETION
5. USER_RETRY
6. USER_UPGRADE
7. RESOURCE_ALLOCATION_TIMEOUT
8. PAI_HDFS
9. PAI_OS
10. PAI_DOCKER
11. PAI_RUNTIME
12. PAI_YARN
13. PAI_LAUNCHER
14. UNKNOWN
| -| **type** | The rough type of this ExitCode | False | False | Enum | 1. USER_SUCCESS
2. USER_STOP
3. USER_FAILURE
4. PLATFORM_FAILURE
5. RESOURCE_ALLOCATION_TIMEOUT
6. UNKNOWN_FAILURE
| -| **stage** | The user process stage just before this ExitCode issued | False | False | Enum | 1. SUBMITTING
2. ALLOCATING
3. LAUNCHING
4. RUNNING
5. COMPLETING
6. UNKNOWN
| -| **behavior** | The rerun behavior of this ExitCode | False | False | Enum | 1. TRANSIENT_NORMAL
2. TRANSIENT_CONFLICT
3. NON_TRANSIENT
4. UNKNOWN
| -| **reaction** | The reaction for this ExitCode will be executed by PAI automatically | False | False | Enum | 1. ALWAYS_RETRY
2. ALWAYS_BACKOFF_RETRY
3. RETRY_TO_MAX
4. NEVER_RETRY
| -| **reason** | Why this ExitCode is issued | False | False | String | Any | -| **repro** | One specific reproduce steps of this ExitCode | False | False | List\ | Any | -| **solution** | Some optional solutions to resolve this ExitCode if it indicates failure | False | False | List\ | Any | -| **pattern** | The pattern that PAI used to detect this ExitCode | False | False | String | Any, such as USER_EXITCODE=X && USER_LOG_PATTERN=Y \|\| OS_Signal=Z | - -## Spec Table -1. You may need to **scroll right side to see full table**. -2. The code **256** is just used to represent all **undefined positive** exitcodes in this spec, and the specific undefined exitcode will always override it to expose to user. -3. The code **-8000** is just used to represent all **undefined negative** exitcodes in this spec, and the specific undefined exitcode will always override it to expose to user. - -|code|phrase|issuer|causer|type|stage|behavior|reaction|reason|repro|solution|pattern| -|----|------|------|------|----|-----|--------|--------|------|-----|--------|-------| -| **154** | **CONTAINER_EXIT_CODE_FILE_LOST** | PAI_YARN | PAI_YARN | PLATFORM_FAILURE | COMPLETING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container exitcode file cannot be found by YARN NM, maybe node unexpected shutdown, disk cleaned up or disk failure | 1. Stop YARN NM
2. Kill container process
3. Delete container exitcode file
4. Start YARN NM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **130** | **CONTAINER_KILLED_BY_SIGINT** | PAI_OS | PAI_OS | PLATFORM_FAILURE | RUNNING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by OS Signal: SIGINT | 1. Kill container process by SIGINT
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **132** | **CONTAINER_KILLED_BY_SIGILL** | USER_CONTAINER | USER_CONTAINER | USER_FAILURE | RUNNING | NON_TRANSIENT | NEVER_RETRY | Container killed by OS Signal: SIGILL | 1. User program executes an illegal, malformed, unknown, or privileged machine instruction
| 1. Check container log and fix your program bug
| | -| **134** | **CONTAINER_KILLED_BY_SIGABRT** | USER_CONTAINER | UNKNOWN | UNKNOWN_FAILURE | RUNNING | UNKNOWN | RETRY_TO_MAX | Container killed by OS Signal: SIGABRT | 1. User program calls abort() by libc
| 1. Check container log and find root cause
2. Wait result from next retry
| | -| **135** | **CONTAINER_KILLED_BY_SIGBUS** | USER_CONTAINER | USER_CONTAINER | USER_FAILURE | RUNNING | NON_TRANSIENT | NEVER_RETRY | Container killed by OS Signal: SIGBUS | 1. User program accesses an unaligned memory address
| 1. Check container log and fix your program bug
| | -| **136** | **CONTAINER_KILLED_BY_SIGFPE** | USER_CONTAINER | USER_CONTAINER | USER_FAILURE | RUNNING | NON_TRANSIENT | NEVER_RETRY | Container killed by OS Signal: SIGFPE | 1. User program division by zero
| 1. Check container log and fix your program bug
| | -| **137** | **CONTAINER_KILLED_BY_SIGKILL** | PAI_OS | PAI_OS | PLATFORM_FAILURE | RUNNING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by OS Signal: SIGKILL | 1. Kill container process by SIGKILL
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **139** | **CONTAINER_KILLED_BY_SIGSEGV** | USER_CONTAINER | USER_CONTAINER | USER_FAILURE | RUNNING | NON_TRANSIENT | NEVER_RETRY | Container killed by OS Signal: SIGSEGV | 1. User program accesses an illegal memory address
| 1. Check container log and fix your program bug
| | -| **141** | **CONTAINER_KILLED_BY_SIGPIPE** | USER_CONTAINER | USER_CONTAINER | USER_FAILURE | RUNNING | NON_TRANSIENT | NEVER_RETRY | Container killed by OS Signal: SIGPIPE | 1. User program writes to a pipe without a process connected to the other end
| 1. Check container log and fix your program bug
| | -| **143** | **CONTAINER_KILLED_BY_SIGTERM** | PAI_OS | PAI_OS | PLATFORM_FAILURE | RUNNING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by OS Signal: SIGTERM | 1. Kill container process by SIGTERM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **193** | **CONTAINER_DOCKER_RUN_FAILED** | PAI_RUNTIME | PAI_DOCKER | PLATFORM_FAILURE | LAUNCHING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container cannot be launched by docker run | 1. PAI Runtime calls docker run with unknown flag
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **196** | **CONTAINER_OOM_KILLED_BY_DOCKER** | PAI_RUNTIME | USER_CONTAINER | USER_FAILURE | RUNNING | NON_TRANSIENT | NEVER_RETRY | Container killed by docker due to it exceeded the request memory | 1. User program uses more memory than its requested
| 1. Increase per task memory request
2. Decrease per task memory usage by such as increasing task number
| | -| **198** | **CONTAINER_OOD_KILLED_BY_DISKCLEANER** | PAI_RUNTIME | USER_CONTAINER | USER_FAILURE | RUNNING | NON_TRANSIENT | NEVER_RETRY | Container is killed by disk cleaner due to it used major disk space and all containers disk usage on the node exceeded platform limit | 1. User program uses almost all disk space of the node
| 1. Decrease per task disk space usage by such as increasing task number
| | -| **255** | **CONTAINER_RUNTIME_UNKNOWN_FAILURE** | PAI_RUNTIME | UNKNOWN | UNKNOWN_FAILURE | COMPLETING | UNKNOWN | RETRY_TO_MAX | Container failed but the failure cannot be recognized by PAI Runtime | 1. User program directly exits with exitcode 1
| 1. Check container log and find root cause
2. Wait result from next retry
| | -| **256** | **CONTAINER_RUNTIME_EXIT_ABNORMALLY** | PAI_RUNTIME | PAI_RUNTIME | PLATFORM_FAILURE | UNKNOWN | UNKNOWN | RETRY_TO_MAX | PAI Runtime exit abnormally with undefined exitcode, it may have bugs | 1. PAI Runtime exits with exitcode 1
| 1. Contact PAI Dev to fix PAI Runtime bugs
| | -| **0** | **SUCCEEDED** | USER_CONTAINER | USER_CONTAINER | USER_SUCCESS | COMPLETING | UNKNOWN | NEVER_RETRY | | 1. User program exits with exitcode 0
| | | -| **-7100** | **CONTAINER_INVALID_EXIT_STATUS** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | LAUNCHING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container exited with invalid exit status, maybe YARN failed to initialize container environment | 1. Disable write permission for YARN NM to access {yarn.nodemanager.local-dirs}
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7101** | **CONTAINER_NOT_AVAILABLE_EXIT_STATUS** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | LAUNCHING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container exited with not available exit status, maybe YARN failed to create container executor process | 1. Disable execute permission for YARN NM to access bash on *nix or winutils.exe on Windows
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7102** | **CONTAINER_NODE_DISKS_FAILED** | PAI_LAUNCHER | PAI_OS | PLATFORM_FAILURE | LAUNCHING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container cannot be launched by YARN due to local bad disk, maybe no disk space left | 1. Set zero disk space for {yarn.nodemanager.local-dirs}
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7103** | **CONTAINER_PORT_CONFLICT** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | LAUNCHING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container cannot be launched by YARN due to local port conflict | 1. After container allocated and before container started, stop the container's YARN NM
2. Occupy a container requested port on the container node
3. Start the container's YARN NM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7110** | **CONTAINER_ABORTED** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container aborted by YARN | 1. Corrupt the container entry in YARN NM state store
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7111** | **CONTAINER_NODE_LOST** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container lost due to node lost, maybe its YARN NM is down for a long time | 1. Stop the container's YARN NM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7112** | **CONTAINER_EXPIRED** | PAI_LAUNCHER | RESOURCE_ALLOCATION_TIMEOUT | RESOURCE_ALLOCATION_TIMEOUT | ALLOCATING | TRANSIENT_CONFLICT | ALWAYS_BACKOFF_RETRY | Previously allocated container expired because it was not launched on the YARN NM in time; maybe other containers cannot be allocated in time | 1. Disable virtual cluster bonus token&#13;
2. Set amGangAllocationTimeoutSec larger than yarn.resourcemanager.rm.container-allocation.expiry-interval-ms&#13;
3. Request more containers in a job than its virtual cluster current available resource
| 1. Wait result from next retry
2. Decrease task number
3. Decrease per task resource request
4. Contact Cluster Admin to increase your virtual cluster quota
| | -| **-7113** | **CONTAINER_ABORTED_ON_AM_RESTART** | PAI_LAUNCHER | RESOURCE_ALLOCATION_TIMEOUT | RESOURCE_ALLOCATION_TIMEOUT | ALLOCATING | TRANSIENT_CONFLICT | ALWAYS_BACKOFF_RETRY | Previously allocated container was aborted by YARN RM during Launcher AM restart; maybe other containers cannot be allocated in time | 1. Disable virtual cluster bonus token&#13;
2. Request more containers in a job than its virtual cluster current available resource
3. Kill Launcher AM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7120** | **CONTAINER_PREEMPTED** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container preempted by YARN RM; maybe its virtual cluster's overused resources were reclaimed | 1. Enable virtual cluster bonus token&#13;
2. Request more containers in a job than its virtual cluster current available resource
3. Use up all other virtual clusters available resource
| 1. Wait result from next retry
2. Decrease task number
3. Decrease per task resource request
4. Contact Cluster Admin to increase your virtual cluster quota
5. Contact Cluster Admin to disable your virtual cluster bonus token
| | -| **-7121** | **CONTAINER_RUNTIME_VIRTUAL_MEMORY_EXCEEDED** | PAI_LAUNCHER | PAI_RUNTIME | PLATFORM_FAILURE | UNKNOWN | NON_TRANSIENT | NEVER_RETRY | Container killed by YARN because its PAI Runtime exceeded the requested virtual memory | 1. PAI Runtime uses more virtual memory than its container requested&#13;
| 1. Increase per task virtual memory request
2. Contact PAI Dev to decrease PAI Runtime virtual memory usage
| | -| **-7122** | **CONTAINER_RUNTIME_PHYSICAL_MEMORY_EXCEEDED** | PAI_LAUNCHER | PAI_RUNTIME | PLATFORM_FAILURE | UNKNOWN | NON_TRANSIENT | NEVER_RETRY | Container killed by YARN because its PAI Runtime exceeded the requested physical memory | 1. PAI Runtime uses more physical memory than its container requested&#13;
| 1. Increase per task physical memory request
2. Contact PAI Dev to decrease PAI Runtime physical memory usage
| | -| **-7123** | **CONTAINER_KILLED_BY_AM** | PAI_LAUNCHER | PAI_LAUNCHER | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by Launcher AM; maybe the allocated container was rejected | 1. Set up a single node cluster&#13;
2. Submit job with two tasks and antiaffinityAllocation enabled
3. Launcher rejects allocated container whose node already allocated another container
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7124** | **CONTAINER_KILLED_BY_RM** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by YARN RM, maybe the container is not managed by YARN RM anymore | 1. Delete the container's app entry in YARN RM state store
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7125** | **CONTAINER_KILLED_ON_APP_COMPLETION** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | COMPLETING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by YARN RM because its app has already completed | 1. Stop Launcher AM container's YARN NM&#13;
2. Kill the container's app
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7126** | **CONTAINER_EXTERNAL_UTILIZATION_SPIKED** | PAI_LAUNCHER | PAI_OS | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by YARN because external utilization spiked | 1. Enable YARN external utilization check&#13;
2. Start raw process to use up almost all memory on the node
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7150** | **CONTAINER_NM_LAUNCH_FAILED** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | LAUNCHING | TRANSIENT_NORMAL | ALWAYS_RETRY | Container failed to launch on YARN NM | 1. After container allocated and before container started, stop the container's YARN NM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7151** | **CONTAINER_RM_RESYNC_LOST** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container lost after Launcher AM resynced with YARN RM | 1. Stop the container's YARN NM
2. Restart YARN RM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7152** | **CONTAINER_RM_RESYNC_EXCEEDED** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | NON_TRANSIENT | NEVER_RETRY | Container exceeded after Launcher AM resynced with YARN RM | 1. Stop the container's YARN NM
2. Restart YARN RM
3. Wait until AM releases container
4. Start the container's YARN NM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7153** | **CONTAINER_MIGRATE_TASK_REQUESTED** | PAI_LAUNCHER | USER_RETRY | USER_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by Launcher due to user MigrateTaskRequest | 1. Send MigrateTaskRequest for the container
| 1. Wait result from next retry
| | -| **-7154** | **CONTAINER_AGENT_EXPIRED** | PAI_LAUNCHER | PAI_OS | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Container killed by Launcher because no Launcher Agent heartbeat was received in time | 1. Enable Launcher Agent&#13;
2. Bring down the container's node
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7200** | **AM_RM_HEARTBEAT_YARN_EXCEPTION** | PAI_LAUNCHER | USER_SUBMISSION | USER_FAILURE | SUBMITTING | NON_TRANSIENT | NEVER_RETRY | Launcher AM failed to heartbeat with YARN RM due to YarnException, maybe App is non-compliant | 1. Submit a job with invalid node label
| 1. Check diagnostics and revise your job config
| | -| **-7201** | **AM_RM_HEARTBEAT_IO_EXCEPTION** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Launcher AM failed to heartbeat with YARN RM due to IOException, maybe YARN RM is down | 1. Stop YARN RM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7202** | **AM_RM_HEARTBEAT_UNKNOWN_EXCEPTION** | PAI_LAUNCHER | UNKNOWN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Launcher AM failed to heartbeat with YARN RM due to unknown Exception | 1. AM sends invalid message to YARN RM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7203** | **AM_RM_HEARTBEAT_SHUTDOWN_REQUESTED** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Launcher AM failed to heartbeat with YARN RM due to ShutdownRequest, maybe AM is not managed by YARN RM anymore | 1. Set small AM expiry time
2. Set network partition between AM and YARN RM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7250** | **AM_UNKNOWN_EXCEPTION** | PAI_LAUNCHER | PAI_LAUNCHER | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | Launcher AM failed due to unknown Exception | 1. Set network partition between AM and ZK
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7251** | **AM_NON_TRANSIENT_EXCEPTION** | PAI_LAUNCHER | USER_SUBMISSION | USER_FAILURE | SUBMITTING | NON_TRANSIENT | NEVER_RETRY | Launcher AM failed due to NonTransientException, maybe App is non-compliant | 1. Submit a job with invalid data dir
| 1. Check diagnostics and revise your job config
| | -| **-7252** | **AM_GANG_ALLOCATION_TIMEOUT** | PAI_LAUNCHER | RESOURCE_ALLOCATION_TIMEOUT | RESOURCE_ALLOCATION_TIMEOUT | ALLOCATING | TRANSIENT_CONFLICT | ALWAYS_BACKOFF_RETRY | Launcher AM failed because the requested resources cannot all be satisfied in time | 1. Disable virtual cluster bonus token&#13;
2. Request more containers in a job than its virtual cluster current available resource
| 1. Wait result from next retry
2. Decrease task number
3. Decrease per task resource request
4. Contact Cluster Admin to increase your virtual cluster quota
| | -| **-7300** | **APP_SUBMISSION_YARN_EXCEPTION** | PAI_LAUNCHER | USER_SUBMISSION | USER_FAILURE | SUBMITTING | NON_TRANSIENT | NEVER_RETRY | Failed to submit App to YARN RM due to YarnException, maybe App is non-compliant | 1. Submit a job to invalid virtual cluster
| 1. Check diagnostics and revise your job config
| | -| **-7301** | **APP_SUBMISSION_IO_EXCEPTION** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | SUBMITTING | TRANSIENT_NORMAL | ALWAYS_RETRY | Failed to submit App to YARN RM due to IOException, maybe YARN RM is down | 1. Stop YARN RM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7302** | **APP_SUBMISSION_UNKNOWN_EXCEPTION** | PAI_LAUNCHER | UNKNOWN | UNKNOWN_FAILURE | SUBMITTING | UNKNOWN | RETRY_TO_MAX | Failed to submit App to YARN RM due to unknown Exception | 1. Launcher Service sends invalid message to YARN RM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7303** | **APP_KILLED_UNEXPECTEDLY** | PAI_LAUNCHER | UNKNOWN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | App killed unexpectedly and directly through YARN RM | 1. Kill the app directly through YARN RM
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7350** | **APP_RM_RESYNC_LOST** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | UNKNOWN | TRANSIENT_NORMAL | ALWAYS_RETRY | App lost after Launcher Service resynced with YARN RM | 1. Delete the app entry in YARN RM state store
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7351** | **APP_STOP_FRAMEWORK_REQUESTED** | PAI_LAUNCHER | USER_STOP | USER_STOP | UNKNOWN | NON_TRANSIENT | NEVER_RETRY | App stopped by Launcher due to user StopFrameworkRequest | 1. Stop a job
| | | -| **-7352** | **APP_AM_DIAGNOSTICS_LOST** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | COMPLETING | TRANSIENT_NORMAL | ALWAYS_RETRY | Failed to retrieve AMDiagnostics from YARN, maybe the App is cleaned up in YARN | 1. App is in APPLICATION_RETRIEVING_DIAGNOSTICS state
2. Stop Launcher Service
3. Delete the app entry in YARN RM state store
4. Start Launcher Service
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7353** | **APP_AM_DIAGNOSTICS_DESERIALIZATION_FAILED** | PAI_LAUNCHER | PAI_YARN | PLATFORM_FAILURE | COMPLETING | TRANSIENT_NORMAL | ALWAYS_RETRY | Failed to deserialize AMDiagnostics from YARN, maybe it is corrupted or Launcher AM unexpectedly crashed frequently without generating AMDiagnostics | 1. Set yarn.app.attempt.diagnostics.limit.kc to 1B
| 1. Wait result from next retry
2. Contact Cluster Admin
| | -| **-7400** | **TASK_STOPPED_ON_APP_COMPLETION** | PAI_LAUNCHER | USER_STOP | USER_STOP | UNKNOWN | NON_TRANSIENT | NEVER_RETRY | Task stopped by Launcher because its app has already completed | 1. Stop a job with a long running container&#13;
| | | -| **-8000** | **CONTAINER_UNKNOWN_YARN_EXIT_STATUS** | PAI_YARN | UNKNOWN | UNKNOWN_FAILURE | UNKNOWN | UNKNOWN | RETRY_TO_MAX | Container exited with an unknown exitcode issued by YARN | 1. Change YARN code to make it return container exitcode -886&#13;
| 1. Contact PAI Dev to recognize this exitcode
| | - diff --git a/src/job-exit-spec/config/job-exit-spec.yaml b/src/job-exit-spec/config/job-exit-spec.yaml deleted file mode 100644 index 2bd8e13a1..000000000 --- a/src/job-exit-spec/config/job-exit-spec.yaml +++ /dev/null @@ -1,1015 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -service_type: "yarn" - -################################################################################ -# PAI Job ExitSpec Schema -################################################################################ -schema: - - field: code - description: "The PAI Job ExitCode" - required: True - unique: True - type: Integer - range: - begin: -8000 - end: 256 - - field: phrase - description: "The textual phrase representation of this ExitCode" - required: True - unique: True - type: String - range: "Any" - - field: issuer - description: "Who root issued this ExitCode in details" - required: False - unique: False - type: Enum - range: - - USER_CONTAINER - - PAI_OS - - PAI_RUNTIME - - PAI_YARN - - PAI_LAUNCHER - - field: causer - description: "Who root caused this ExitCode in details" - required: False - unique: False - type: Enum - range: - - USER_SUBMISSION - - USER_CONTAINER - - USER_STOP - - USER_DELETION - - USER_RETRY - - USER_UPGRADE - - RESOURCE_ALLOCATION_TIMEOUT - - PAI_HDFS - - PAI_OS - - PAI_DOCKER - - PAI_RUNTIME - - PAI_YARN - - PAI_LAUNCHER - - UNKNOWN - - field: type - description: "The rough type of this ExitCode" - required: False - unique: False - type: Enum - range: - - USER_SUCCESS - - USER_STOP - - USER_FAILURE - - PLATFORM_FAILURE - - RESOURCE_ALLOCATION_TIMEOUT - - UNKNOWN_FAILURE - - field: stage - description: "The user process stage just before this ExitCode issued" - required: False - unique: False - type: Enum - range: - - SUBMITTING - - ALLOCATING - - LAUNCHING - - RUNNING - - COMPLETING - - UNKNOWN - - field: behavior - description: "The rerun behavior of this ExitCode" - required: False - unique: False - type: Enum - range: - - TRANSIENT_NORMAL - - TRANSIENT_CONFLICT - - NON_TRANSIENT - - UNKNOWN - - field: reaction - description: "The reaction for this ExitCode will be executed by PAI automatically" - required: False - unique: False - type: Enum - range: - - ALWAYS_RETRY - - ALWAYS_BACKOFF_RETRY - - RETRY_TO_MAX - - NEVER_RETRY - - field: reason - description: "Why this ExitCode is issued" - required: False - unique: False - type: String - range: "Any" - - field: repro - description: "One specific 
reproduce steps of this ExitCode" - required: False - unique: False - type: List - range: "Any" - - field: solution - description: "Some optional solutions to resolve this ExitCode if it indicates failure" - required: False - unique: False - type: List - range: "Any" - - field: pattern - description: "The pattern that PAI used to detect this ExitCode" - required: False - unique: False - type: String - range: "Any, such as USER_EXITCODE=X && USER_LOG_PATTERN=Y || OS_Signal=Z" - - -################################################################################ -# PAI Job ExitSpec -################################################################################ -spec: -################################ -# Range: [129, 192] -# Owner: PAI_RUNTIME -# Description: Recognized From YARN / Signal -################################ -# Container Failed by YARN -- code: 154 - phrase: CONTAINER_EXIT_CODE_FILE_LOST - issuer: PAI_YARN - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: COMPLETING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container exitcode file cannot be found by YARN NM, maybe node unexpected shutdown, disk cleaned up or disk failure" - repro: - - "Stop YARN NM" - - "Kill container process" - - "Delete container exitcode file" - - "Start YARN NM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -# Container Failed by OS Signal -- code: 130 - phrase: CONTAINER_KILLED_BY_SIGINT - issuer: PAI_OS - causer: PAI_OS - type: PLATFORM_FAILURE - stage: RUNNING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by OS Signal: SIGINT" - repro: - - "Kill container process by SIGINT" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: 132 - phrase: CONTAINER_KILLED_BY_SIGILL - issuer: USER_CONTAINER - causer: USER_CONTAINER - type: USER_FAILURE - stage: RUNNING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by OS Signal: SIGILL" - repro: - - "User program executes an illegal, malformed, unknown, or privileged machine instruction" - solution: - - "Check container log and fix your program bug" - -- code: 134 - phrase: CONTAINER_KILLED_BY_SIGABRT - issuer: USER_CONTAINER - causer: UNKNOWN - type: UNKNOWN_FAILURE - stage: RUNNING - behavior: UNKNOWN - reaction: RETRY_TO_MAX - reason: "Container killed by OS Signal: SIGABRT" - repro: - - "User program calls abort() by libc" - solution: - - "Check container log and find root cause" - - "Wait result from next retry" - -- code: 135 - phrase: CONTAINER_KILLED_BY_SIGBUS - issuer: USER_CONTAINER - causer: USER_CONTAINER - type: USER_FAILURE - stage: RUNNING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by OS Signal: SIGBUS" - repro: - - "User program accesses an unaligned memory address" - solution: - - "Check container log and fix your program bug" - -- code: 136 - phrase: CONTAINER_KILLED_BY_SIGFPE - issuer: USER_CONTAINER - causer: USER_CONTAINER - type: USER_FAILURE - stage: RUNNING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by OS Signal: SIGFPE" - repro: - - "User program division by zero" - solution: - - "Check container log and fix your program bug" - -- code: 137 - phrase: CONTAINER_KILLED_BY_SIGKILL - issuer: PAI_OS - causer: PAI_OS - type: PLATFORM_FAILURE - stage: RUNNING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by OS Signal: SIGKILL" - repro: - - "Kill container process by SIGKILL" - solution: - - "Wait 
result from next retry" - - "Contact Cluster Admin" - -- code: 139 - phrase: CONTAINER_KILLED_BY_SIGSEGV - issuer: USER_CONTAINER - causer: USER_CONTAINER - type: USER_FAILURE - stage: RUNNING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by OS Signal: SIGSEGV" - repro: - - "User program accesses an illegal memory address" - solution: - - "Check container log and fix your program bug" - -- code: 141 - phrase: CONTAINER_KILLED_BY_SIGPIPE - issuer: USER_CONTAINER - causer: USER_CONTAINER - type: USER_FAILURE - stage: RUNNING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by OS Signal: SIGPIPE" - repro: - - "User program writes to a pipe without a process connected to the other end" - solution: - - "Check container log and fix your program bug" - -- code: 143 - phrase: CONTAINER_KILLED_BY_SIGTERM - issuer: PAI_OS - causer: PAI_OS - type: PLATFORM_FAILURE - stage: RUNNING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by OS Signal: SIGTERM" - repro: - - "Kill container process by SIGTERM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - - -################################ -# Range: [193, 254] -# Owner: PAI_RUNTIME -# Description: Recognized From Error Pattern -################################ -# Recognized during user process LAUNCHING stage -- code: 193 - phrase: CONTAINER_DOCKER_RUN_FAILED - issuer: PAI_RUNTIME - causer: PAI_DOCKER - type: PLATFORM_FAILURE - stage: LAUNCHING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container cannot be launched by docker run" - repro: - - "PAI Runtime calls docker run with unknown flag" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -# Recognized during user process COMPLETING stage -- code: 196 - phrase: CONTAINER_OOM_KILLED_BY_DOCKER - issuer: PAI_RUNTIME - causer: USER_CONTAINER - type: USER_FAILURE - stage: RUNNING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by docker due to it exceeded the request memory" - repro: - - "User program uses more memory than its requested" - solution: - - "Increase per task memory request" - - "Decrease per task memory usage by such as increasing task number" - -- code: 198 - phrase: CONTAINER_OOD_KILLED_BY_DISKCLEANER - issuer: PAI_RUNTIME - causer: USER_CONTAINER - type: USER_FAILURE - stage: RUNNING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container is killed by disk cleaner due to it used major disk space and all containers disk usage on the node exceeded platform limit" - repro: - - "User program uses almost all disk space of the node" - solution: - - "Decrease per task disk space usage by such as increasing task number" - -- code: 255 - phrase: CONTAINER_RUNTIME_UNKNOWN_FAILURE - issuer: PAI_RUNTIME - causer: UNKNOWN - type: UNKNOWN_FAILURE - stage: COMPLETING - behavior: UNKNOWN - reaction: RETRY_TO_MAX - reason: "Container failed but the failure cannot be recognized by PAI Runtime" - repro: - - "User program directly exits with exitcode 1" - solution: - - "Check container log and find root cause" - - "Wait result from next retry" - - -################################ -# Range: {Undefined Positive ExitCodes} -# Owner: PAI_RUNTIME -# Description: Shadow Fallback ExitCode -################################ -# Here the code 256 is just used to represent all undefined positive exitcodes in this spec, -# and the specific undefined exitcode should always override it to expose outside. 
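For illustration, a minimal sketch (not part of the spec files themselves) of how a consumer could resolve a raw exitcode against this spec, assuming PyYAML is available; `resolve_exit_code` is a hypothetical helper name, not a shipped API:

```python
# Hypothetical helper: resolve a raw exitcode against job-exit-spec.yaml.
# Undefined positive exitcodes fall back to the shadow code 256 below,
# and undefined negative ones to the shadow code -8000 later in this spec.
import yaml

def resolve_exit_code(spec_path, exitcode):
    with open(spec_path) as stream:
        entries = yaml.safe_load(stream)["spec"]
    by_code = {entry["code"]: entry for entry in entries}
    if exitcode in by_code:
        return by_code[exitcode]
    return by_code[256] if exitcode > 0 else by_code[-8000]
```

For example, `resolve_exit_code("job-exit-spec.yaml", 137)["phrase"]` would return `CONTAINER_KILLED_BY_SIGKILL`, while an undefined code such as 200 would resolve to the shadow entry `CONTAINER_RUNTIME_EXIT_ABNORMALLY` defined below.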
-- code: 256 - phrase: CONTAINER_RUNTIME_EXIT_ABNORMALLY - issuer: PAI_RUNTIME - causer: PAI_RUNTIME - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: UNKNOWN - reaction: RETRY_TO_MAX - reason: "PAI Runtime exit abnormally with undefined exitcode, it may have bugs" - repro: - - "PAI Runtime exits with exitcode 1" - solution: - - "Contact PAI Dev to fix PAI Runtime bugs" - - -################################ -# Range: [0, 0] -# Owner: PAI_LAUNCHER -# Description: Success ExitCode -################################ -- code: 0 - phrase: SUCCEEDED - issuer: USER_CONTAINER - causer: USER_CONTAINER - type: USER_SUCCESS - stage: COMPLETING - behavior: UNKNOWN - reaction: NEVER_RETRY - repro: - - "User program exits with exitcode 0" - - -################################ -# Range: [-7199, -7100] -# Owner: PAI_LAUNCHER AM -# Description: Container Failure -################################ -# Container Init Failed by YARN -- code: -7100 - phrase: CONTAINER_INVALID_EXIT_STATUS - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: LAUNCHING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container exited with invalid exit status, maybe YARN failed to initialize container environment" - repro: - - "Disable write permission for YARN NM to access {yarn.nodemanager.local-dirs}" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7101 - phrase: CONTAINER_NOT_AVAILABLE_EXIT_STATUS - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: LAUNCHING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container exited with not available exit status, maybe YARN failed to create container executor process" - repro: - - "Disable execute permission for YARN NM to access bash on *nix or winutils.exe on Windows" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7102 - phrase: CONTAINER_NODE_DISKS_FAILED - issuer: PAI_LAUNCHER - causer: PAI_OS - type: PLATFORM_FAILURE - stage: LAUNCHING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container cannot be launched by YARN due to local bad disk, maybe no disk space left" - repro: - - "Set zero disk space for {yarn.nodemanager.local-dirs}" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7103 - phrase: CONTAINER_PORT_CONFLICT - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: LAUNCHING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container cannot be launched by YARN due to local port conflict" - repro: - - "After container allocated and before container started, stop the container's YARN NM" - - "Occupy a container requested port on the container node" - - "Start the container's YARN NM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -# Container Aborted Failed by YARN -- code: -7110 - phrase: CONTAINER_ABORTED - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container aborted by YARN" - repro: - - "Corrupt the container entry in YARN NM state store" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7111 - phrase: CONTAINER_NODE_LOST - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container lost due to node lost, maybe its YARN NM is down for a long time" - repro: - - "Stop 
the container's YARN NM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7112 - phrase: CONTAINER_EXPIRED - issuer: PAI_LAUNCHER - causer: RESOURCE_ALLOCATION_TIMEOUT - type: RESOURCE_ALLOCATION_TIMEOUT - stage: ALLOCATING - behavior: TRANSIENT_CONFLICT - reaction: ALWAYS_BACKOFF_RETRY - reason: "Container previously allocated is expired due to it is not launched on YARN NM in time, maybe other containers cannot be allocated in time" - repro: - - "Disable virtual cluster bonus token" - - "Set amGangAllocationTimeoutSec large than yarn.resourcemanager.rm.container-allocation.expiry-interval-ms" - - "Request more containers in a job than its virtual cluster current available resource" - solution: - - "Wait result from next retry" - - "Decrease task number" - - "Decrease per task resource request" - - "Contact Cluster Admin to increase your virtual cluster quota" - -- code: -7113 - phrase: CONTAINER_ABORTED_ON_AM_RESTART - issuer: PAI_LAUNCHER - causer: RESOURCE_ALLOCATION_TIMEOUT - type: RESOURCE_ALLOCATION_TIMEOUT - stage: ALLOCATING - behavior: TRANSIENT_CONFLICT - reaction: ALWAYS_BACKOFF_RETRY - reason: "Container previously allocated is aborted by YARN RM during Launcher AM restart, maybe other containers cannot be allocated in time" - repro: - - "Disable virtual cluster bonus token" - - "Request more containers in a job than its virtual cluster current available resource" - - "Kill Launcher AM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -# Container Other Failed by YARN -- code: -7120 - phrase: CONTAINER_PREEMPTED - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container preempted by YARN RM, maybe its virtual cluster overused resource was reclaimed" - repro: - - "Enable virtual cluster bonus token" - - "Request more containers in a job than its virtual cluster current available resource" - - "Use up all other virtual clusters available resource" - solution: - - "Wait result from next retry" - - "Decrease task number" - - "Decrease per task resource request" - - "Contact Cluster Admin to increase your virtual cluster quota" - - "Contact Cluster Admin to disable your virtual cluster bonus token" - -- code: -7121 - phrase: CONTAINER_RUNTIME_VIRTUAL_MEMORY_EXCEEDED - issuer: PAI_LAUNCHER - causer: PAI_RUNTIME - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by YARN due to its PAI Runtime exceeded the request virtual memory" - repro: - - "PAI Runtime uses more virtual memory than its container requested" - solution: - - "Increase per task virtual memory request" - - "Contact PAI Dev to decrease PAI Runtime virtual memory usage" - -- code: -7122 - phrase: CONTAINER_RUNTIME_PHYSICAL_MEMORY_EXCEEDED - issuer: PAI_LAUNCHER - causer: PAI_RUNTIME - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container killed by YARN due to its PAI Runtime exceeded the request physical memory" - repro: - - "PAI Runtime uses more physical memory than its container requested" - solution: - - "Increase per task physical memory request" - - "Contact PAI Dev to decrease PAI Runtime physical memory usage" - -- code: -7123 - phrase: CONTAINER_KILLED_BY_AM - issuer: PAI_LAUNCHER - causer: PAI_LAUNCHER - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by 
Launcher AM, maybe allocated container is rejected" - repro: - - "Setup single node cluster" - - "Submit job with two tasks and antiaffinityAllocation enabled" - - "Launcher rejects allocated container whose node already allocated another container" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7124 - phrase: CONTAINER_KILLED_BY_RM - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by YARN RM, maybe the container is not managed by YARN RM anymore" - repro: - - "Delete the container's app entry in YARN RM state store" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7125 - phrase: CONTAINER_KILLED_ON_APP_COMPLETION - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: COMPLETING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by YARN RM due to its app is already completed" - repro: - - "Stop Launcher AM container's YARN NM" - - "Kill the container's app" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7126 - phrase: CONTAINER_EXTERNAL_UTILIZATION_SPIKED - issuer: PAI_LAUNCHER - causer: PAI_OS - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by YARN due to external utilization spiked" - repro: - - "Enable YARN external utilization check" - - "Start raw process to use up almost all memory on the node" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -# Container Failed by Launcher AM -- code: -7150 - phrase: CONTAINER_NM_LAUNCH_FAILED - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: LAUNCHING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container failed to launch on YARN NM" - repro: - - "After container allocated and before container started, stop the container's YARN NM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7151 - phrase: CONTAINER_RM_RESYNC_LOST - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container lost after Launcher AM resynced with YARN RM" - repro: - - "Stop the container's YARN NM" - - "Restart YARN RM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7152 - phrase: CONTAINER_RM_RESYNC_EXCEEDED - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Container exceeded after Launcher AM resynced with YARN RM" - repro: - - "Stop the container's YARN NM" - - "Restart YARN RM" - - "Wait until AM releases container" - - "Start the container's YARN NM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7153 - phrase: CONTAINER_MIGRATE_TASK_REQUESTED - issuer: PAI_LAUNCHER - causer: USER_RETRY - type: USER_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Container killed by Launcher due to user MigrateTaskRequest" - repro: - - "Send MigrateTaskRequest for the container" - solution: - - "Wait result from next retry" - -- code: -7154 - phrase: CONTAINER_AGENT_EXPIRED - issuer: PAI_LAUNCHER - causer: PAI_OS - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: 
"Container killed by Launcher due to no Launcher Agent heartbeat is received in time" - repro: - - "Enable Launcher Agent" - - "Bring down the container's node" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - - -################################ -# Range: [-7299, -7200] -# Owner: PAI_LAUNCHER AM -# Description: Job Failure -################################ -# App Failed by YARN -- code: -7200 - phrase: AM_RM_HEARTBEAT_YARN_EXCEPTION - issuer: PAI_LAUNCHER - causer: USER_SUBMISSION - type: USER_FAILURE - stage: SUBMITTING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Launcher AM failed to heartbeat with YARN RM due to YarnException, maybe App is non-compliant" - repro: - - "Submit a job with invalid node label" - solution: - - "Check diagnostics and revise your job config" - -- code: -7201 - phrase: AM_RM_HEARTBEAT_IO_EXCEPTION - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Launcher AM failed to heartbeat with YARN RM due to IOException, maybe YARN RM is down" - repro: - - "Stop YARN RM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7202 - phrase: AM_RM_HEARTBEAT_UNKNOWN_EXCEPTION - issuer: PAI_LAUNCHER - causer: UNKNOWN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Launcher AM failed to heartbeat with YARN RM due to unknown Exception" - repro: - - "AM sends invalid message to YARN RM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7203 - phrase: AM_RM_HEARTBEAT_SHUTDOWN_REQUESTED - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Launcher AM failed to heartbeat with YARN RM due to ShutdownRequest, maybe AM is not managed by YARN RM anymore" - repro: - - "Set small AM expiry time" - - "Set network partition between AM and YARN RM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -# App Failed by Launcher AM -- code: -7250 - phrase: AM_UNKNOWN_EXCEPTION - issuer: PAI_LAUNCHER - causer: PAI_LAUNCHER - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Launcher AM failed due to unknown Exception" - repro: - - "Set network partition between AM and ZK" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7251 - phrase: AM_NON_TRANSIENT_EXCEPTION - issuer: PAI_LAUNCHER - causer: USER_SUBMISSION - type: USER_FAILURE - stage: SUBMITTING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Launcher AM failed due to NonTransientException, maybe App is non-compliant" - repro: - - "Submit a job with invalid data dir" - solution: - - "Check diagnostics and revise your job config" - -- code: -7252 - phrase: AM_GANG_ALLOCATION_TIMEOUT - issuer: PAI_LAUNCHER - causer: RESOURCE_ALLOCATION_TIMEOUT - type: RESOURCE_ALLOCATION_TIMEOUT - stage: ALLOCATING - behavior: TRANSIENT_CONFLICT - reaction: ALWAYS_BACKOFF_RETRY - reason: "Launcher AM failed due to all the requested resource cannot be satisfied in time" - repro: - - "Disable virtual cluster bonus token" - - "Request more containers in a job than its virtual cluster current available resource" - solution: - - "Wait result from next retry" - - "Decrease task number" - - "Decrease per task resource request" - - "Contact Cluster Admin to increase your virtual 
cluster quota" - - -################################ -# Range: [-7399, -7300] -# Owner: PAI_LAUNCHER Service -# Description: Job Failure -################################ -# App Failed by YARN -- code: -7300 - phrase: APP_SUBMISSION_YARN_EXCEPTION - issuer: PAI_LAUNCHER - causer: USER_SUBMISSION - type: USER_FAILURE - stage: SUBMITTING - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Failed to submit App to YARN RM due to YarnException, maybe App is non-compliant" - repro: - - "Submit a job to invalid virtual cluster" - solution: - - "Check diagnostics and revise your job config" - -- code: -7301 - phrase: APP_SUBMISSION_IO_EXCEPTION - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: SUBMITTING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Failed to submit App to YARN RM due to IOException, maybe YARN RM is down" - repro: - - "Stop YARN RM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7302 - phrase: APP_SUBMISSION_UNKNOWN_EXCEPTION - issuer: PAI_LAUNCHER - causer: UNKNOWN - type: UNKNOWN_FAILURE - stage: SUBMITTING - behavior: UNKNOWN - reaction: RETRY_TO_MAX - reason: "Failed to submit App to YARN RM due to unknown Exception" - repro: - - "Launcher Service sends invalid message to YARN RM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7303 - phrase: APP_KILLED_UNEXPECTEDLY - issuer: PAI_LAUNCHER - causer: UNKNOWN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "App killed unexpectedly and directly through YARN RM" - repro: - - "Kill the app directly through YARN RM" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -# App Failed by Launcher Service -- code: -7350 - phrase: APP_RM_RESYNC_LOST - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: UNKNOWN - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "App lost after Launcher Service resynced with YARN RM" - repro: - - "Delete the app entry in YARN RM state store" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7351 - phrase: APP_STOP_FRAMEWORK_REQUESTED - issuer: PAI_LAUNCHER - causer: USER_STOP - type: USER_STOP - stage: UNKNOWN - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "App stopped by Launcher due to user StopFrameworkRequest" - repro: - - "Stop a job" - -- code: -7352 - phrase: APP_AM_DIAGNOSTICS_LOST - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: COMPLETING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Failed to retrieve AMDiagnostics from YARN, maybe the App is cleaned up in YARN" - repro: - - "App is in APPLICATION_RETRIEVING_DIAGNOSTICS state" - - "Stop Launcher Service" - - "Delete the app entry in YARN RM state store" - - "Start Launcher Service" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - -- code: -7353 - phrase: APP_AM_DIAGNOSTICS_DESERIALIZATION_FAILED - issuer: PAI_LAUNCHER - causer: PAI_YARN - type: PLATFORM_FAILURE - stage: COMPLETING - behavior: TRANSIENT_NORMAL - reaction: ALWAYS_RETRY - reason: "Failed to deserialize AMDiagnostics from YARN, maybe it is corrupted or Launcher AM unexpectedly crashed frequently without generating AMDiagnostics" - repro: - - "Set yarn.app.attempt.diagnostics.limit.kc to 1B" - solution: - - "Wait result from next retry" - - "Contact Cluster Admin" - - -################################ -# Range: [-7499, 
-7400] -# Owner: PAI_LAUNCHER WebServer -# Description: Task Failure -################################ -- code: -7400 - phrase: TASK_STOPPED_ON_APP_COMPLETION - issuer: PAI_LAUNCHER - causer: USER_STOP - type: USER_STOP - stage: UNKNOWN - behavior: NON_TRANSIENT - reaction: NEVER_RETRY - reason: "Task stopped by Launcher due to its app is already completed" - repro: - - "Stop a job with long running container" - - -################################ -# Range: {Undefined Negative ExitCodes} -# Owner: PAI_LAUNCHER -# Description: Container Failure:Shadow Fallback ExitCode -################################ -# Here the code -8000 is just used to represent all undefined negative exitcodes in this spec, -# and the specific undefined exitcode should always override it to expose outside. -- code: -8000 - phrase: CONTAINER_UNKNOWN_YARN_EXIT_STATUS - issuer: PAI_YARN - causer: UNKNOWN - type: UNKNOWN_FAILURE - stage: UNKNOWN - behavior: UNKNOWN - reaction: RETRY_TO_MAX - reason: "Container exited with unknown exitcode which is issued from YARN" - repro: - - "Change YARN code to make it return container exitcode -886" - solution: - - "Contact PAI Dev to recognize this exitcode" diff --git a/src/job-exit-spec/config/job_exit_spec.py b/src/job-exit-spec/config/job_exit_spec.py deleted file mode 100644 index ff602dd89..000000000 --- a/src/job-exit-spec/config/job_exit_spec.py +++ /dev/null @@ -1,20 +0,0 @@ -#!/usr/bin/env python - -import copy - -class JobExitSpec(object): - def __init__(self, cluster_conf, service_conf, default_service_conf): - self.cluster_conf = cluster_conf - self.service_conf = service_conf - self.default_service_conf = default_service_conf - - def validation_pre(self): - return True, None - - def run(self): - result = copy.deepcopy(self.default_service_conf) - result.update(self.service_conf) - return result - - def validation_post(self, conf): - return True, None diff --git a/src/job-exit-spec/config/update_markdown.py b/src/job-exit-spec/config/update_markdown.py deleted file mode 100644 index 32f45fd14..000000000 --- a/src/job-exit-spec/config/update_markdown.py +++ /dev/null @@ -1,103 +0,0 @@ -#!/usr/bin/env python3 - -import sys - -import yaml - - -def escape(s): - return str(s) \ - .replace('<', '\<') \ - .replace('>', '\>') \ - .replace('|', '\|') \ - .replace('\r\n', '
') \ - .replace('\r', '
') \ - .replace('\n', '
') - - -def bold(s): - return '**' + str(s) + '**' - - -def get(dic, key): - if key in dic: - value = dic[key] - - if type(value) == list: - rows = '' - row_id = 1 - for row in value: - rows += str(row_id) + '. ' + escape(row) + '
' - row_id += 1 - return rows - - if type(value) == dict: - rows = '' - for row_key, row_value in sorted(value.items()): - rows += escape(row_key) + ': ' + escape(row_value) + '
' - return rows - - return escape(value) - else: - return '' - - -def update_markdown(): - sys.stdout = open("job-exit-spec.md", "w") - with open('job-exit-spec.yaml', 'r') as stream: - data = yaml.safe_load(stream) - - schema = data['schema'] - spec = data['spec'] - - print('# PAI Job Exit Spec') - print('1. See details in [job-exit-spec.yaml](job-exit-spec.yaml)') - print('2. This markdown file is generated by [update_markdown.py](update_markdown.py) with [job-exit-spec.yaml](job-exit-spec.yaml)') - print('3. See full doc in [PAI Job Exit Spec User Manual](user-manual.md)') - print('') - - print('## Spec Schema') - print('|field|description|required|unique|type|range|') - print('|-----|-----------|--------|------|----|----|') - for field in schema: - print('|', bold(get(field, 'field')), '|', - get(field, 'description'), '|', - get(field, 'required'), '|', - get(field, 'unique'), '|', - get(field, 'type'), '|', - get(field, 'range'), '|') - print('') - - print('## Spec Table') - print('1. You may need to **scroll right side to see full table**.') - print('2. The code **256** is just used to represent all **undefined ' - 'positive** exitcodes in this spec, and the specific undefined exitcode ' - 'will always override it to expose to user.') - print('3. The code **-8000** is just used to represent all **undefined ' - 'negative** exitcodes in this spec, and the specific undefined exitcode ' - 'will always override it to expose to user.') - print('') - print('|code|phrase|issuer|causer|type|stage|behavior|reaction|reason|repro|solution|pattern|') - print('|----|------|------|------|----|-----|--------|--------|------|-----|--------|-------|') - for code in spec: - print('|', bold(get(code, 'code')), '|', - bold(get(code, 'phrase')), '|', - get(code, 'issuer'), '|', - get(code, 'causer'), '|', - get(code, 'type'), '|', - get(code, 'stage'), '|', - get(code, 'behavior'), '|', - get(code, 'reaction'), '|', - get(code, 'reason'), '|', - get(code, 'repro'), '|', - get(code, 'solution'), '|', - get(code, 'pattern'), '|') - print('') - - -def main(): - update_markdown() - - -if __name__ == "__main__": - main() diff --git a/src/job-exit-spec/config/user-manual.md b/src/job-exit-spec/config/user-manual.md deleted file mode 100644 index 1939d748e..000000000 --- a/src/job-exit-spec/config/user-manual.md +++ /dev/null @@ -1,26 +0,0 @@ -# PAI Job Exit Spec User Manual - -## Architecture -**PAI Exit Info Setup and Propagation** -

- Architecture -

- -## Spec -**PAI Static Exit Info**: [job-exit-spec.md](job-exit-spec.md) - -**PAI Dynamic Exit Info**: runtime-exit-spec.md - -## How to grow PAI Static Exit Info -### Add a new job exitcode -1. Add the spec of the exitcode into the spec section of [job-exit-spec.yaml](job-exit-spec.yaml) -2. Execute [update_markdown.py](update_markdown.py) to update [job-exit-spec.md](job-exit-spec.md) -3. Return the exitcode from Launcher or PAI Runtime -4. Redeploy PAI - -### Add a new spec field -1. Add the field info into the schema section of [job-exit-spec.yaml](job-exit-spec.yaml) -2. Add the field for all necessary exitcodes into the spec section of [job-exit-spec.yaml](job-exit-spec.yaml) -3. Add the field generator into [update_markdown.py](update_markdown.py) -4. Execute [update_markdown.py](update_markdown.py) to update [job-exit-spec.md](job-exit-spec.md) -5. Redeploy PAI diff --git a/src/tools/.gitignore b/src/tools/.gitignore deleted file mode 100644 index e830cd00f..000000000 --- a/src/tools/.gitignore +++ /dev/null @@ -1,3 +0,0 @@ -.hadoop/ -.config/ -.restserver/ diff --git a/src/tools/config/logging.yaml b/src/tools/config/logging.yaml deleted file mode 100644 index 8a04f4fed..000000000 --- a/src/tools/config/logging.yaml +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -version: 1 -disable_existing_loggers: False -formatters: - simple: - format: "[%(asctime)s] %(name)s:%(levelname)s: %(message)s" - -handlers: - console: - class: logging.StreamHandler - level: DEBUG - formatter: simple - stream: ext://sys.stdout - -# info_file_handler: -# class: logging.handlers.RotatingFileHandler -# level: INFO -# formatter: simple -# filename: info.log -# maxBytes: 10485760 # 10MB -# backupCount: 20 -# encoding: utf8 -# -# error_file_handler: -# class: logging.handlers.RotatingFileHandler -# level: ERROR -# formatter: simple -# filename: errors.log -# maxBytes: 10485760 # 10MB -# backupCount: 20 -# encoding: utf8 - -loggers: - my_module: - level: ERROR - handlers: [console] - propagate: no - -root: - level: DEBUG - handlers: [console] -# handlers: [console, info_file_handler, error_file_handler] \ No newline at end of file diff --git a/src/tools/node_maintain.py b/src/tools/node_maintain.py deleted file mode 100644 index 31ca5b939..000000000 --- a/src/tools/node_maintain.py +++ /dev/null @@ -1,447 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. 
-# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -from __future__ import print_function - -import argparse -import time -import re -import logging -import copy -import sys - -from utility import log -log.setup_logging() - -from operator_wrapper import AlertOperator, KubernetesOperator, YarnOperator, Resource, RestserverOperator - -logger = logging.getLogger(__name__) - -def get_unready_nodes(decommissioned_nodes, current_status): - unready_nodes = {} - for node, state in current_status.items(): - # should decommission but not - if state not in {"DECOMMISSIONED"} and node in decommissioned_nodes: - unready_nodes[node] = state - # should recommission but not - if state in {"DECOMMISSIONED", "DECOMMISSIONING"} and node not in decommissioned_nodes: - unready_nodes[node] = state - return unready_nodes - - -def validate_string_is_ip(validated_str): - ip_pattern = re.compile(r"^(1\d{2}|2[0-4]\d|25[0-5]|[1-9]\d|[1-9])(\.(1\d{2}|2[0-4]\d|25[0-5]|[1-9]\d|\d)){3}$") - found = ip_pattern.match(validated_str) is not None - return found - - -def get_gpu_alert(args): - alert_operator = AlertOperator(args.prometheus_ip, args.prometheus_port) - alerting_nodes = alert_operator.get_gpu_alert_nodes() - logger.info("Successfully aggregate gpu alerts.") - if len(alerting_nodes) > 0: - output_info = '\n'.join([node_name+': '+alert_type for node_name, alert_type in alerting_nodes.items()]) - else: - output_info = "No gpu alerting nodes" - print(output_info) - - -def get_decommission_nodes(args): - k8s_operator = KubernetesOperator(args.api_server_ip) - existing_nodes = k8s_operator.get_nodes() - logger.info("Successfully aggregate blacklist info.") - if len(existing_nodes) > 0: - output_info = ','.join(existing_nodes) - else: - output_info = "No blacklist nodes" - print(output_info) - return existing_nodes - - -def add_decommission_nodes(args): - k8s_operator = KubernetesOperator(args.api_server_ip) - existing_nodes = k8s_operator.get_nodes() - nodes = args.nodes - inter_list = existing_nodes & nodes - if len(inter_list) > 0: - logger.warning("Try to add existing blacklist nodes: {}".format(','.join(inter_list))) - full_list = existing_nodes | nodes - k8s_operator.set_nodes(full_list) - logger.info("Add node: {} to blacklist".format(','.join(args.nodes))) - return full_list - - -def remove_decommission_nodes(args): - k8s_operator = KubernetesOperator(args.api_server_ip) - existing_nodes = k8s_operator.get_nodes() - nodes = args.nodes - supplement_list = nodes - existing_nodes - if len(supplement_list) > 
0: - logger.warning("Try to remove non-existing blacklist nodes: {}".format(','.join(supplement_list))) - full_list = existing_nodes - nodes - k8s_operator.set_nodes(full_list) - logger.info("Remove node: {} from blacklist".format(','.join(args.nodes))) - return full_list - - -def update_decommission_nodes(args): - k8s_operator = KubernetesOperator(args.api_server_ip) - nodes = args.nodes - k8s_operator.set_nodes(nodes) - logger.info("Update blacklist nodes: {}".format(','.join(args.nodes))) - return nodes - - -def refresh_yarn_nodes(args): - k8s_operator = KubernetesOperator(args.api_server_ip) - yarn_operator = YarnOperator(args.resource_manager_ip) - while True: - yarn_operator.decommission_nodes() - node_info = yarn_operator.get_nodes_info() - current_status = {k: v["state"] for k, v in node_info.items()} - decommissioned_nodes = k8s_operator.get_nodes() - unready_nodes = get_unready_nodes(decommissioned_nodes, current_status) - if len(unready_nodes) == 0: - break - unready_info = ','.join([node_name+" in "+status for node_name, status in unready_nodes.items()]) - logger.info("Unready nodes: {}. Waiting...".format(unready_info)) - time.sleep(30) - logger.info("Successfully refresh nodes.") - - -def convert_nodes(nodes_str): - if isinstance(nodes_str, str): - nodes = set(nodes_str.split(',')) - for node in nodes: - if not validate_string_is_ip(node): - raise argparse.ArgumentTypeError("Value has to be a comma-delimited ip list, but found {}".format(node)) - return nodes - return set() - - -def validate_vc_name(vc_name_str): - if re.match(r"^[A-Za-z0-9_]+$", vc_name_str) is None: - raise argparse.ArgumentTypeError("invalid vc name: {}. Only alphanumeric and _ allowed".format(vc_name_str)) - return vc_name_str - - -def is_dedicated_vc(queue_name, queue_attr): - # print(json.dumps(queue_attr, indent=2)) - if queue_name == "" or queue_name == "*" or queue_attr["defaultNodeLabelExpression"] != queue_name: - return False - if queue_name not in queue_attr["capacities"] or queue_attr["capacities"][queue_name]["maxCapacity"] != 100: - return False - return True - - -def get_resource_by_label(nodes_info): - labels_dict = {} - default_resource = Resource(**{"cpus": 0, "memory": 0, "gpus": 0}) - for node, info in nodes_info.items(): - if info["nodeLabel"] not in labels_dict: - labels_dict[info["nodeLabel"]] = { - "resource": default_resource - } - labels_dict[info["nodeLabel"]]["resource"] += info["resource"] - return labels_dict - - -def get_dedicate_vc(args): - yarn_operator = YarnOperator(args.resource_manager_ip) - queues_info = yarn_operator.get_queues_info() - nodes_info = yarn_operator.get_nodes_info() - dedicate_queues = {queue_name: {"resource": Resource(**{"cpus": 0, "memory": 0, "gpus": 0}), "nodes": []} for queue_name, queue_info in queues_info.items() if - is_dedicated_vc(queue_name, queue_info)} - if len(dedicate_queues) == 0: - logger.info("No dedicated vc found") - return - - labeled_resources = get_resource_by_label(nodes_info) - for partition in labeled_resources: - if partition in dedicate_queues: - dedicate_queues[partition]["resource"] = labeled_resources[partition]["resource"] - - for node in nodes_info: - if nodes_info[node]["nodeLabel"] in dedicate_queues: - dedicate_queues[nodes_info[node]["nodeLabel"]]["nodes"].append(node) - for queue_name, queue_attr in dedicate_queues.items(): - print(queue_name + ":") - print("\tNodes: " + ",".join(queue_attr["nodes"])) - print("\tResource: ".format(queue_attr["resource"].cpus, queue_attr["resource"].memory, 
queue_attr["resource"].gpus)) - - -def convert_percentage_to_gpus(queues_info, partition_resource): - new_queues_info = copy.deepcopy(queues_info) - for queue, info in new_queues_info.items(): - p = info["capacity"] / float(100) - info["gpus"] = partition_resource.gpus * p - return new_queues_info - - -def convert_gpus_to_percentage(queues_info, partition_resource): - new_queues_info = copy.deepcopy(queues_info) - if partition_resource.gpus > 0: - for queue, info in new_queues_info.items(): - gpus = info["gpus"] - info["capacity"] = float(gpus) / partition_resource.gpus * 100 - return new_queues_info - - -def normalize_percentage(queues_info): - new_queues_info = copy.deepcopy(queues_info) - sum_percentage = 0 - for queue, info in new_queues_info.items(): - sum_percentage += info["capacity"] - - if sum_percentage != 100: - logger.warning("Renormalize percentage to 100%, current: {}%".format(sum_percentage)) - new_queues_info["default"]["capacity"] -= sum_percentage - 100 - - for queue, info in new_queues_info.items(): - if queue != "default": - info["maxCapacity"] = info["capacity"] - - return new_queues_info - - -def add_dedicate_vc(args): - yarn_operator = YarnOperator(args.resource_manager_ip) - restserver_operator = RestserverOperator(args.restserver_ip) - vc_name = args.vc_name - nodes = args.nodes - - logger.info("Adding cluster label...") - existing_labels = yarn_operator.get_cluster_labels() - if vc_name in existing_labels: - logger.warning("Label already exists: {}".format(vc_name)) - else: - yarn_operator.add_cluster_label(vc_name) - - logger.info("Adding dedicated vc...") - queues_info = yarn_operator.get_queues_info() - if vc_name in queues_info: - logger.warning("Virtual cluster already exists: {}. Adding node to it".format(vc_name)) - else: - restserver_operator.add_vc(vc_name) - yarn_operator.add_dedicated_queue(vc_name) - - nodes_info = yarn_operator.get_nodes_info() - if len(nodes) > 0: - logger.info("Labeling node...") - - if queues_info["default"]["maxCapacity"] == 100 or queues_info["default"]["maxCapacity"] > \ - queues_info["default"]["capacity"]: - queues_info["default"]["maxCapacity"] = 100.0 - - added_resource = Resource(**{"cpus": 0, "memory": 0, "gpus": 0}) - for node, info in nodes_info.items(): - if node in nodes and info["nodeLabel"] == "": - added_resource += info["resource"] - - default_partition_resource = get_resource_by_label(nodes_info)[""]["resource"] - default_vc_percentage = queues_info["default"]["capacity"] / 100.0 - default_vc_resource = default_partition_resource * default_vc_percentage - - if default_vc_resource.cpus < added_resource.cpus \ - or default_vc_resource.gpus < added_resource.gpus \ - or default_vc_resource.memory < added_resource.memory: - logger.error("Default vc resource isn't enough for the dedicated vc, please free some resource") - sys.exit(1) - - new_default_partition_resource = default_partition_resource - added_resource - new_default_vc_resource = default_vc_resource - added_resource - - queues_info_with_gpus = convert_percentage_to_gpus(queues_info, default_partition_resource) - queues_info_with_gpus["default"]["gpus"] = new_default_vc_resource.gpus - new_queues_percentage = convert_gpus_to_percentage(queues_info_with_gpus, new_default_partition_resource) - new_queues_percentage = normalize_percentage(new_queues_percentage) - updated_dict = {} - for queue, info in new_queues_percentage.items(): - updated_dict[queue] = { - "capacity": info["capacity"], - "maximum-capacity": info["maxCapacity"] - } - if queue != "default": - 
updated_dict[queue]["disable_preemption"] = True - - yarn_operator.label_nodes(nodes, vc_name) - yarn_operator.update_queue_capacity(updated_dict) - - -def remove_dedicate_vc(args): - yarn_operator = YarnOperator(args.resource_manager_ip) - restserver_operator = RestserverOperator(args.restserver_ip) - vc_name = args.vc_name - nodes = args.nodes - remove_queue_flag = nodes is None - - logger.info("Unlabeling node...") - nodes_info = yarn_operator.get_nodes_info() - queues_info = yarn_operator.get_queues_info() - if nodes is None: - nodes = set(nodes_info.keys()) - t_nodes = [node for node in nodes if nodes_info[node]["nodeLabel"] == vc_name] - if len(t_nodes) > 0: - - if queues_info["default"]["maxCapacity"] == 100 or queues_info["default"]["maxCapacity"] > \ - queues_info["default"]["capacity"]: - queues_info["default"]["maxCapacity"] = 100.0 - - removed_resource = Resource(**{"cpus": 0, "memory": 0, "gpus": 0}) - for node, info in nodes_info.items(): - if node in nodes and info["nodeLabel"] == vc_name: - removed_resource += info["resource"] - - default_partition_resource = get_resource_by_label(nodes_info)[""]["resource"] - default_vc_percentage = queues_info["default"]["capacity"] / 100.0 - default_vc_resource = default_partition_resource * default_vc_percentage - - new_default_partition_resource = default_partition_resource + removed_resource - new_default_vc_resource = default_vc_resource + removed_resource - - queues_info_with_gpus = convert_percentage_to_gpus(queues_info, default_partition_resource) - queues_info_with_gpus["default"]["gpus"] = new_default_vc_resource.gpus - new_queues_percentage = convert_gpus_to_percentage(queues_info_with_gpus, new_default_partition_resource) - new_queues_percentage = normalize_percentage(new_queues_percentage) - updated_dict = {} - for queue, info in new_queues_percentage.items(): - updated_dict[queue] = { - "capacity": info["capacity"], - "maximum-capacity": info["maxCapacity"] - } - - yarn_operator.label_nodes(t_nodes, "") - yarn_operator.update_queue_capacity(updated_dict) - - if remove_queue_flag: - logger.info("Removing dedicated vc...") - if vc_name not in queues_info: - logger.warning("Virtual cluster not found: {}.".format(vc_name)) - else: - yarn_operator.remove_dedicated_queue(vc_name) - restserver_operator.delete_vc(vc_name) - - logger.info("Removing cluster label...") - if vc_name not in yarn_operator.get_cluster_labels(): - logger.warning("Cluster label not found: {}".format(vc_name)) - else: - yarn_operator.remove_cluster_label(vc_name) - -def setup_user(args): - username = args.username - password = args.password - RestserverOperator.setup_user(username, password) - logger.info("Setup user done") - - -def setup_parser(): - top_parser = argparse.ArgumentParser() - sub_parser = top_parser.add_subparsers(dest="subcommands") - - # a parent parser to avoid repeatedly add arguments for all subcommands - parent_parser = argparse.ArgumentParser(add_help=False) - parent_parser.add_argument("-m", "--master", dest="master_ip", - help="master node ip", required=True) - parent_parser.add_argument("--resource-manager-ip", - help="specify yarn resource manager ip separately, by default it's master node ip") - parent_parser.add_argument("--api-server-ip", - help="specify kubernetes api-server ip separately, by default it's master node ip") - parent_parser.add_argument("--prometheus-ip", - help="specify prometheus ip separately, by default it's master node ip") - parent_parser.add_argument("--restserver-ip", - help="specify restserver ip 
separately, by default it's master node ip")
-    parent_parser.add_argument("--prometheus-port", default=9091,
-                               help="specify prometheus port, by default it's 9091")
-
-    # restserver user parser
-    user_parser = sub_parser.add_parser("user", help="restserver user operation")
-    user_subparsers = user_parser.add_subparsers(dest="action")
-
-    parser_set = user_subparsers.add_parser("set", parents=[parent_parser], help="set restserver admin credentials")
-    parser_set.add_argument("-u", "--username", required=True)
-    parser_set.add_argument("-p", "--password", required=True)
-    parser_set.set_defaults(func=setup_user)
-
-    # prometheus operator parser
-    prometheus_parser = sub_parser.add_parser("badgpus", help="query prometheus alerts")
-    prometheus_subparsers = prometheus_parser.add_subparsers(dest="action")
-
-    parser_get = prometheus_subparsers.add_parser("get", parents=[parent_parser], help="print current gpu alerts")
-    parser_get.set_defaults(func=get_gpu_alert)
-
-    # blacklist parser
-    blacklist_parser = sub_parser.add_parser("blacklist", help="blacklist operation")
-    blacklist_subparsers = blacklist_parser.add_subparsers(dest="action")
-
-    parser_get = blacklist_subparsers.add_parser("get", parents=[parent_parser], help="get blacklist nodes")
-    parser_get.set_defaults(func=get_decommission_nodes)
-
-    parser_add = blacklist_subparsers.add_parser("add", parents=[parent_parser], help="add nodes to blacklist")
-    parser_add.add_argument("-n", "--nodes", type=convert_nodes, help="comma-delimited node list", required=True)
-    parser_add.set_defaults(func=add_decommission_nodes)
-
-    parser_remove = blacklist_subparsers.add_parser("remove", parents=[parent_parser], help="remove nodes from blacklist")
-    parser_remove.add_argument("-n", "--nodes", type=convert_nodes, help="comma-delimited node list", required=True)
-    parser_remove.set_defaults(func=remove_decommission_nodes)
-
-    parser_update = blacklist_subparsers.add_parser("update", parents=[parent_parser], help="update blacklist")
-    parser_update.add_argument("-n", "--nodes", type=convert_nodes, help="comma-delimited node list")
-    parser_update.set_defaults(func=update_decommission_nodes)
-
-    parser_refresh = blacklist_subparsers.add_parser("enforce", parents=[parent_parser],
-                                                     help="make yarn gracefully decommission the nodes in the blacklist")
-    parser_refresh.set_defaults(func=refresh_yarn_nodes)
-
-    # dedicated vc parser
-    dedicated_vc_parser = sub_parser.add_parser("dedicated-vc", help="operate dedicated vc")
-    dedicated_vc_subparsers = dedicated_vc_parser.add_subparsers(dest="action")
-
-    parser_get = dedicated_vc_subparsers.add_parser("get", parents=[parent_parser], help="get dedicated vc info")
-    parser_get.set_defaults(func=get_dedicate_vc)
-
-    parser_add = dedicated_vc_subparsers.add_parser("add", parents=[parent_parser], help="add dedicated vc")
-    parser_add.add_argument("-n", "--nodes", type=convert_nodes, help="comma-delimited node list", default=set())
-    parser_add.add_argument("-v", "--vc-name", type=validate_vc_name, required=True)
-    parser_add.set_defaults(func=add_dedicate_vc)
-
-    parser_remove = dedicated_vc_subparsers.add_parser("remove", parents=[parent_parser], help="remove dedicated vc")
-    parser_remove.add_argument("-v", "--vc-name", type=validate_vc_name, required=True)
-    parser_remove.add_argument("-n", "--nodes", type=convert_nodes, help="comma-delimited node list")
-    parser_remove.set_defaults(func=remove_dedicate_vc)
-
-    return top_parser
-
-
-def main():
-    parser = setup_parser()
-    args = 
parser.parse_args() - args.resource_manager_ip = args.resource_manager_ip or args.master_ip - args.api_server_ip = args.api_server_ip or args.master_ip - args.prometheus_ip = args.prometheus_ip or args.master_ip - args.restserver_ip = args.restserver_ip or args.master_ip - try: - args.func(args) - except Exception as e: - from subprocess import CalledProcessError - if isinstance(e, CalledProcessError): - logger.error(e.output) - else: - logger.exception(e) - - -if __name__ == "__main__": - main() diff --git a/src/tools/operator_wrapper/__init__.py b/src/tools/operator_wrapper/__init__.py deleted file mode 100644 index 84e139b79..000000000 --- a/src/tools/operator_wrapper/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -from __future__ import absolute_import - -__all__ = ["AlertOperator", "KubernetesOperator", "YarnOperator", "Resource", "RestserverOperator"] - - -from .alert_operator import AlertOperator -from .kubernetes_operator import KubernetesOperator -from .yarn_operator import YarnOperator, Resource -from .base_operator import BaseOperator -from .restserver_operator import RestserverOperator diff --git a/src/tools/operator_wrapper/alert_operator.py b/src/tools/operator_wrapper/alert_operator.py deleted file mode 100644 index f8dae50ec..000000000 --- a/src/tools/operator_wrapper/alert_operator.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
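The CLI's dispatch hinges on two small idioms visible above: every subparser registers its handler via `set_defaults(func=...)`, and `main` fills any component IP left unset from `--master` before calling `args.func(args)`. A minimal, runnable sketch of the same pattern (the `greet` command and addresses are illustrative, not part of the tool):

```python
import argparse

def greet(args):
    print("hello, {}".format(args.master_ip))

def setup_parser():
    top = argparse.ArgumentParser()
    sub = top.add_subparsers(dest="subcommands")
    parent = argparse.ArgumentParser(add_help=False)
    parent.add_argument("-m", "--master", dest="master_ip", required=True)
    parent.add_argument("--api-server-ip")  # optional per-component override
    p = sub.add_parser("greet", parents=[parent])
    p.set_defaults(func=greet)  # bind the handler to the subcommand
    return top

if __name__ == "__main__":
    args = setup_parser().parse_args(["greet", "-m", "10.0.0.1"])
    # fall back to the master IP when a component IP isn't given explicitly
    args.api_server_ip = args.api_server_ip or args.master_ip
    args.func(args)
```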
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import logging -import sys - -from .base_operator import BaseOperator - -logger = logging.getLogger(__name__) - - -class AlertOperator(BaseOperator): - ALERT_TYPE = { - "gpu_related": {"NvidiaSmiLatencyTooLarge", "NvidiaSmiEccError", "NvidiaMemoryLeak", "NvidiaZombieProcess", "GpuUsedByExternalProcess", "GpuUsedByZombieContainer"}, - } - - def __init__(self, prometheus_ip, prometheus_port=9091): - super(AlertOperator, self).__init__(prometheus_ip, prometheus_port) - - def get_gpu_alert_nodes(self): - api_path = "/prometheus/api/v1/query?query=ALERTS" - alerts_info = self.request(api_path) - - if alerts_info["status"] != "success": - logger.error("Alert response error: {}".format(alerts_info["data"])) - sys.exit(1) - - alerts_info = alerts_info["data"]["result"] - gpu_alert_nodes = {} - for alert in alerts_info: - metric = alert["metric"] - if metric["alertname"] in self.ALERT_TYPE["gpu_related"] and metric["alertstate"] == "firing": - node_ip = metric["instance"].split(':')[0] - gpu_alert_nodes[node_ip] = metric["alertname"] - - return gpu_alert_nodes diff --git a/src/tools/operator_wrapper/base_operator.py b/src/tools/operator_wrapper/base_operator.py deleted file mode 100644 index fe725414b..000000000 --- a/src/tools/operator_wrapper/base_operator.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
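`get_gpu_alert_nodes` reduces a Prometheus `ALERTS` query to a map from node IP to alert name, keeping only firing, GPU-related alerts and stripping the exporter port from the `instance` label. A toy replay of that filtering against a hand-built response (the shape mirrors Prometheus's query API; the values are invented):

```python
GPU_ALERTS = {"NvidiaSmiEccError", "GpuUsedByExternalProcess"}

response = {
    "status": "success",
    "data": {"result": [
        {"metric": {"alertname": "NvidiaSmiEccError", "alertstate": "firing",
                    "instance": "10.151.40.132:9102"}},
        {"metric": {"alertname": "NodeDiskPressure", "alertstate": "firing",
                    "instance": "10.151.40.133:9102"}},
    ]},
}

gpu_alert_nodes = {}
for alert in response["data"]["result"]:
    metric = alert["metric"]
    if metric["alertname"] in GPU_ALERTS and metric["alertstate"] == "firing":
        node_ip = metric["instance"].split(":")[0]  # drop the exporter port
        gpu_alert_nodes[node_ip] = metric["alertname"]

print(gpu_alert_nodes)  # {'10.151.40.132': 'NvidiaSmiEccError'}
```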
- -import logging -import requests -import subprocess - -logger = logging.getLogger(__name__) - - -class BaseOperator(object): - def __init__(self, master_ip, port): - self.master_ip = master_ip - self.port = port - - def request(self, api_path, method="get", return_json=True, timeout=10, **kwargs): - - url = "http://{}:{}{}".format(self.master_ip, self.port, api_path) - - logger.debug("{}: {}".format(method, url)) - func = getattr(requests, method) - response = func(url, timeout=timeout, **kwargs) - response.raise_for_status() - if return_json: - return response.json() - else: - return response.text - - def execute(self, command, redirect_stderr=True, shell=True, **kwargs): - logger.debug(command) - stderr = subprocess.STDOUT if redirect_stderr else None - output = subprocess.check_output(command, stderr=stderr, shell=shell, **kwargs) - try: - output = output.decode("utf8") - except AttributeError: - pass - return output - - -if __name__ == "__main__": - pass - diff --git a/src/tools/operator_wrapper/kubernetes_operator.py b/src/tools/operator_wrapper/kubernetes_operator.py deleted file mode 100644 index d8e35a9c7..000000000 --- a/src/tools/operator_wrapper/kubernetes_operator.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
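`BaseOperator` is a deliberately thin layer: `request` builds `http://<ip>:<port><api_path>`, dispatches through `getattr(requests, method)`, raises on HTTP errors, and returns JSON or raw text; `execute` shells out with stderr folded into stdout. A minimal sketch of a consumer in the same style (host, port, and path are placeholders):

```python
import requests

class HealthOperator(object):
    """Minimal stand-in for a BaseOperator subclass."""

    def __init__(self, master_ip, port=8088):
        self.master_ip = master_ip
        self.port = port

    def request(self, api_path, method="get", return_json=True, timeout=10, **kwargs):
        url = "http://{}:{}{}".format(self.master_ip, self.port, api_path)
        response = getattr(requests, method)(url, timeout=timeout, **kwargs)
        response.raise_for_status()  # surface 4xx/5xx as exceptions
        return response.json() if return_json else response.text

# info = HealthOperator("10.0.0.1").request("/ws/v1/cluster/info")
```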
- -import sys -sys.path.append("../..") -from deployment.paiLibrary.common.kubernetes_handler import get_configmap, update_configmap -from deployment.k8sPaiLibrary.maintainlib import common - - -class KubernetesOperator(object): - kubernetes_template = "../../deployment/k8sPaiLibrary/template/config.template" - kube_config_path = "./.config" - configmap_name = "exclude-file" - configmap_data_key = "nodes" - - def __init__(self, master_ip): - self.master_ip = master_ip - self.setup_kubernetes_configfile(master_ip) - - def setup_kubernetes_configfile(self, api_servers_ip): - - template_data = common.read_template(self.kubernetes_template) - dict_map = { - "cluster_cfg": {"kubernetes": {"api-servers-ip": api_servers_ip}}, - } - generated_data = common.generate_from_template_dict(template_data, dict_map) - - common.write_generated_file(generated_data, self.kube_config_path) - - def get_nodes(self): - configmap_info = get_configmap(self.kube_config_path, self.configmap_name) - nodes_str = configmap_info["data"][self.configmap_data_key] - nodes = set(nodes_str.splitlines()) - return nodes - - def set_nodes(self, nodes): - nodes = set(nodes) - nodes_str = '\n'.join(nodes) - data_dict = {self.configmap_data_key: nodes_str} - update_configmap(self.kube_config_path, self.configmap_name, data_dict) diff --git a/src/tools/operator_wrapper/restserver_operator.py b/src/tools/operator_wrapper/restserver_operator.py deleted file mode 100644 index 0bf83ddd6..000000000 --- a/src/tools/operator_wrapper/restserver_operator.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
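The blacklist itself is nothing more than a newline-delimited node list stored under the `nodes` key of the `exclude-file` configmap; `get_nodes` and `set_nodes` are the two halves of that (de)serialization. The round trip, isolated from the Kubernetes client:

```python
def serialize(nodes):
    # order is not preserved: the source stores a set
    return "\n".join(set(nodes))

def deserialize(nodes_str):
    return set(nodes_str.splitlines())

data = serialize({"10.151.40.132", "10.151.40.133"})
assert deserialize(data) == {"10.151.40.132", "10.151.40.133"}
```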
- -import logging -import os -import json -import sys - -from base_operator import BaseOperator - -logger = logging.getLogger(__name__) - - -class RestserverOperator(BaseOperator): - - secret_file = ".restserver/user_info" - - def __init__(self, restserver_ip, restserver_port=9186): - super(RestserverOperator, self).__init__(restserver_ip, restserver_port) - self.token = "" - self.load_token() - - @classmethod - def setup_user(cls, username, password): - if not os.path.exists(os.path.dirname(cls.secret_file)): - os.mkdir(os.path.dirname(cls.secret_file)) - with open(cls.secret_file, "w") as f: - data = { - "username": username, - "password": password - } - json.dump(data, f) - - def load_token(self): - if not os.path.exists(self.secret_file): - return - with open(self.secret_file) as f: - data = json.load(f) - api_path = "/api/v1/token" - headers = { - "Content-Type": "application/x-www-form-urlencoded" - } - response = self.request(api_path, method="post", headers=headers, data=data) - self.token = response["token"] - - def get_vc(self): - api_path = "/api/v1/virtual-clusters" - response = self.request(api_path) - return response - - def add_vc(self, name, capacity=0, maxcapacity=0): - if self.token == "": - logger.error("Anonymous user can't add vc, please setup user firstly") - sys.exit(1) - api_path = "/api/v1/virtual-clusters/{}".format(name) - headers = { - "Authorization": "Bearer " + self.token - } - data = { - "vcCapacity": capacity, - "vcMaxCapacity": maxcapacity - } - response = self.request(api_path, method="put", headers=headers, data=data) - return response - - def delete_vc(self, name): - if self.token == "": - logger.error("Anonymous user can't delete vc, please setup user firstly") - sys.exit(1) - api_path = "/api/v1/virtual-clusters/{}".format(name) - headers = { - "Authorization": "Bearer " + self.token - } - response = self.request(api_path, method="delete", headers=headers) - return response - - def delete_group(self, name): - if self.token == "": - logger.error("Anonymous user can't delete group, please setup user firstly") - sys.exit(1) - api_path = "/api/v2/group/{}".format(name) - headers = { - "Authorization": "Bearer " + self.token - } - response = self.request(api_path, method="delete", headers=headers) - return response - - -if __name__ == '__main__': - pass - - - - - - diff --git a/src/tools/operator_wrapper/yarn_operator.py b/src/tools/operator_wrapper/yarn_operator.py deleted file mode 100644 index fbf55e5ac..000000000 --- a/src/tools/operator_wrapper/yarn_operator.py +++ /dev/null @@ -1,392 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
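`RestserverOperator` authenticates once at construction: `load_token` posts the cached credentials to `/api/v1/token`, and privileged calls such as `add_vc` and `delete_vc` then send the returned token as a `Bearer` header. A sketch of that flow with plain `requests` (host and credentials are placeholders):

```python
import requests

base = "http://10.0.0.1:9186"

# exchange username/password for a token
resp = requests.post(base + "/api/v1/token",
                     headers={"Content-Type": "application/x-www-form-urlencoded"},
                     data={"username": "admin", "password": "secret"},
                     timeout=10)
resp.raise_for_status()
token = resp.json()["token"]

# create a virtual cluster using the bearer token
resp = requests.put(base + "/api/v1/virtual-clusters/test_vc",
                    headers={"Authorization": "Bearer " + token},
                    data={"vcCapacity": 0, "vcMaxCapacity": 0},
                    timeout=10)
resp.raise_for_status()
```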
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import logging -import sys -import os -import re -import json -from bs4 import BeautifulSoup -import dicttoxml -dicttoxml.LOG.setLevel(logging.ERROR) -import time -import attr -from attr.validators import instance_of - - -from base_operator import BaseOperator - -logger = logging.getLogger(__name__) - -@attr.s -class Resource(object): - cpus = attr.ib(converter=float, validator=instance_of(float)) - gpus = attr.ib(converter=float, validator=instance_of(float)) - memory = attr.ib(converter=float, validator=instance_of(float)) - - def __add__(self, other): - if isinstance(other, Resource): - cpus = self.cpus + other.cpus - gpus = self.gpus + other.gpus - memory = self.memory + other.memory - return Resource(cpus=cpus, gpus=gpus, memory=memory) - else: - raise NotImplemented - - def __radd__(self, other): - return self + other - - def __sub__(self, other): - if isinstance(other, Resource): - cpus = self.cpus - other.cpus - gpus = self.gpus - other.gpus - memory = self.memory - other.memory - return Resource(cpus=cpus, gpus=gpus, memory=memory) - else: - raise NotImplemented - - def __mul__(self, other): - if isinstance(other, (int, float)): - cpus = self.cpus * other - gpus = self.gpus * other - memory = self.memory * other - return Resource(cpus=cpus, gpus=gpus, memory=memory) - else: - raise NotImplemented - - def __rmul__(self, other): - return self * other - - def __div__(self, other): - if isinstance(other, (int, float)): - cpus = self.cpus / other - gpus = self.gpus / other - memory = self.memory / other - return Resource(cpus=cpus, gpus=gpus, memory=memory) - else: - raise NotImplemented - - -class YarnOperator(BaseOperator): - yarn_config_path = "./.hadoop" - - def __init__(self, master_ip, port=8088): - super(YarnOperator, self).__init__(master_ip, port) - self.setup_yarn_configfile() - - def setup_yarn_configfile(self): - if not os.path.exists(self.yarn_config_path): - os.mkdir(self.yarn_config_path) - - yarn_config_str = \ - ''' - - yarn.resourcemanager.hostname - {} - - '''.format(self.master_ip) - - with open(os.path.join(self.yarn_config_path, "yarn-site.xml"), 'w') as f: - f.write(yarn_config_str) - - def get_nodes_info(self): - api_path = "/ws/v1/cluster/nodes" - nodes_info = self.request(api_path) - current_nodes = {} - for node in nodes_info["nodes"]["node"]: - host = node["nodeHostName"] - state = node["state"] - node_label = node.get("nodeLabels", [""])[0] - resource = Resource(**{ - "cpus": node["usedVirtualCores"] + node["availableVirtualCores"], - "memory": node["usedMemoryMB"] + node["availMemoryMB"], - "gpus": node["usedGPUs"] + node["availableGPUs"] - }) - current_nodes[host] = { - "state": state, - "nodeLabel": node_label, - "resource": resource - } - return current_nodes - - def decommission_nodes(self): - command = "yarn --config {} rmadmin -refreshNodes -g -server".format(self.yarn_config_path) - self.execute(command) - - def get_cluster_labels(self): - # Sample output: "Node Labels: ," - # Sample output: "Node Labels: " - command = "yarn --config {} cluster --list-node-labels".format(self.yarn_config_path) - - output = self.execute(command) - - lines = output.split("\n") - labels = dict() # key: label name, value: exclusivity - for line in lines: - if not line.startswith("Node Labels:"): - continue 
- line = line.lstrip("Node Labels:") - labels_str = line.split(",") - label_regex = r"<([a-zA-Z0-9][a-zA-Z0-9_\-]*):exclusivity=(true|false)>" - for label_str in labels_str: - match = re.search(label_regex, label_str) - if match: - label_name, exclusivity = match.groups() - exclusivity = exclusivity == "true" - labels[label_name] = {"exclusive": exclusivity} - - return labels - - def add_cluster_label(self, label, exclusivity=True): - - label_str = "{}(exclusive={})".format(label, "true" if exclusivity else "false") - - command = "yarn --config {} rmadmin -addToClusterNodeLabels \"{}\"".format(self.yarn_config_path, label_str) - self.execute(command) - - def remove_cluster_label(self, label): - - command = "yarn --config {} rmadmin -removeFromClusterNodeLabels {}".format(self.yarn_config_path, label) - self.execute(command) - - def label_nodes(self, nodes, label): - if isinstance(nodes, str): - nodes = [nodes] - - nodes_str_builder = [] - - for node in nodes: - node_str = "{}={}".format(node, label) - nodes_str_builder.append(node_str) - - nodes_str = " ".join(nodes_str_builder) - - # yarn rmadmin -replaceLabelsOnNode "node1[:port]=label1 node2=label2" [-failOnUnknownNodes] - command = "yarn --config {} rmadmin -replaceLabelsOnNode \"{}\" -failOnUnknownNodes"\ - .format(self.yarn_config_path, nodes_str) - - self.execute(command) - - def get_queues_info(self): - api_path = "/ws/v1/cluster/scheduler" - scheduler_info = self.request(api_path) - - def traverse(queue_info, result_dict): - if queue_info["type"] == "capacitySchedulerLeafQueueInfo": - result_dict[queue_info["queueName"]] = { - "capacity": queue_info["absoluteCapacity"], - "maxCapacity": queue_info["absoluteMaxCapacity"], - "usedCapacity": queue_info["absoluteUsedCapacity"], - "numActiveJobs": queue_info["numActiveApplications"], - "numJobs": queue_info["numApplications"], - "numPendingJobs": queue_info["numPendingApplications"], - "resourcesUsed": queue_info["resourcesUsed"], - "state": queue_info["state"], - "nodeLabels": queue_info["nodeLabels"], - "capacities": { - partitionCapacities["partitionName"]: { - "capacity": partitionCapacities["absoluteCapacity"], - "maxCapacity": partitionCapacities["absoluteMaxCapacity"], - "usedCapacity": partitionCapacities["absoluteUsedCapacity"], - } - for partitionCapacities in queue_info["capacities"]["queueCapacitiesByPartition"] - }, - "preemptionDisabled": queue_info.get("preemptionDisabled", False), - "defaultNodeLabelExpression": queue_info.get("defaultNodeLabelExpression", ""), - } - elif queue_info["type"] == "capacityScheduler": - for queue in queue_info["queues"]["queue"]: - traverse(queue, result_dict) - else: - logger.error("unsupported scheduler type: {}".format(queue_info["type"])) - return - - queues = {} - traverse(scheduler_info["scheduler"]["schedulerInfo"], queues) - return queues - - def get_resource_by_label(self): - api_path = "/cluster/nodelabels" - html_text = self.request(api_path, return_json=False) - - soup = BeautifulSoup(html_text) - result = soup.find("table", id="nodelabels") - tbody = result.find("tbody") - labels = tbody.find_all("tr") - labels_dict = {} - for label in labels: - label_dict = {} - - label_name_raw, exclusive_raw, active_nm_raw, resources_raw = label.find_all("td") - label_name = label_name_raw.string.strip() - if label_name == "": - label_name = "" - - exclusive = exclusive_raw.string.strip() - if exclusive == "Exclusive Partition": - label_dict["exclusive"] = True - elif exclusive == "Non Exclusive Partition": - label_dict["exclusive"] = False 
- else: - logger.error("unknown exclusivity: {}".format(exclusive)) - sys.exit(1) - - if active_nm_raw.find('a'): - active_nm = active_nm_raw.find('a').string.strip() - else: - active_nm = active_nm_raw.string.strip() - label_dict["active_nm"] = int(active_nm) - - resources = resources_raw.string.strip() - r_dict = {} - for resource in resources.strip("<>").split(","): - r_type, r_quota = resource.split(":") - r_dict[r_type.strip()] = int(r_quota) - label_dict["resource"] = Resource(**{ - "cpus": r_dict["vCores"], - "memory": r_dict["memory"], - "gpus": r_dict["GPUs"] - }) - labels_dict[label_name] = label_dict - return labels_dict - - def add_dedicated_queue(self, label_name): - - raw_dict = { - "update-queue": { - "queue-name": "root.{}".format(label_name), - "params": [ - { - "key": "capacity", - "value": 0 - }, - { - "key": "maximum-capacity", - "value": 0 - }, - { - "key": "default-node-label-expression", - "value": label_name - }, - { - "key": "accessible-node-labels", - "value": label_name - }, - { - "key": "disable_preemption", - "value": True - }, - { - "key": "maximum-applications", - "value": 10000 - }, - { - "key": "user-limit-factor", - "value": 100 - } - ] - - }, - "global-updates": [ - { - "key": "yarn.scheduler.capacity.root.accessible-node-labels.{}.capacity".format(label_name), - "value": 100 - }, - { - "key": "yarn.scheduler.capacity.root.{vc_name}.accessible-node-labels.{vc_name}.capacity".format(vc_name=label_name), - "value": 100 - } - ] - } - request_xml = self.generate_queue_update_xml(raw_dict) - - self.put_queue_update_xml(request_xml) - - def remove_dedicated_queue(self, label_name): - - raw_dict = { - "update-queue": { - "queue-name": "root.{}".format(label_name), - "params": [ - { - "key": "state", - "value": "STOPPED" - } - ] - - }, - } - request_xml = self.generate_queue_update_xml(raw_dict) - - self.put_queue_update_xml(request_xml) - while True: - current_state = self.get_queues_info()[label_name]["state"] - if current_state == "STOPPED": - break - logger.info("current vc status: {}. 
waiting...".format(current_state)) - time.sleep(5) - - raw_dict = { - # "remove-queue": "root.{}".format(label_name), - "global-updates": [ - { - "key": "yarn.scheduler.capacity.root.accessible-node-labels.{}.capacity".format(label_name), - "value": 0 - }, - { - "key": "yarn.scheduler.capacity.root.{vc_name}.accessible-node-labels.{vc_name}.capacity".format( - vc_name=label_name), - "value": 0 - }, - { - "key": "yarn.scheduler.capacity.root.{vc_name}.default-node-label-expression".format( - vc_name=label_name), - "value": None - } - ] - } - request_xml = self.generate_queue_update_xml(raw_dict) - - self.put_queue_update_xml(request_xml) - - def update_queue_capacity(self, update_dict): - # Todo: current we use global-updates to update capacity due to dicttoxml package limitation - # Todo: change it to update-queue after this pr: https://github.com/quandyfactory/dicttoxml/pull/64 - raw_dict = {"global-updates": []} - for queue, info in update_dict.items(): - for attribute, value in info.items(): - key = "yarn.scheduler.capacity.root.{}.{}".format(queue, attribute) - raw_dict["global-updates"].append({ - "key": key, - "value": value - }) - - request_xml = self.generate_queue_update_xml(raw_dict) - self.put_queue_update_xml(request_xml) - - def generate_queue_update_xml(self, g_dict): - return dicttoxml.dicttoxml(g_dict, attr_type=False, custom_root="sched-conf", item_func=lambda x: "entry") - - def put_queue_update_xml(self, update_xml): - api_path = "/ws/v1/cluster/scheduler-conf" - headers = {"Content-Type": "application/xml"} - self.request(api_path, method="put", return_json=False, headers=headers, data=update_xml) - - -if __name__ == "__main__": - pass - diff --git a/src/tools/reports.py b/src/tools/reports.py deleted file mode 100644 index fa349cbfe..000000000 --- a/src/tools/reports.py +++ /dev/null @@ -1,960 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
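All queue changes above funnel through YARN's `scheduler-conf` endpoint as a `<sched-conf>` XML document, which `generate_queue_update_xml` produces with `dicttoxml`, naming every repeated list element `entry`. A standalone sketch of the payload generation (the queue values are illustrative):

```python
import dicttoxml

raw_dict = {"global-updates": [
    {"key": "yarn.scheduler.capacity.root.default.capacity", "value": 95.0},
    {"key": "yarn.scheduler.capacity.root.default.maximum-capacity", "value": 100.0},
]}

xml = dicttoxml.dicttoxml(raw_dict, attr_type=False,
                          custom_root="sched-conf",
                          item_func=lambda parent: "entry")
print(xml.decode())
# <?xml version="1.0" ...?><sched-conf><global-updates><entry><key>...</key>...
```

The resulting bytes are what `put_queue_update_xml` sends as `Content-Type: application/xml` to `/ws/v1/cluster/scheduler-conf`.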
- -import urllib.parse -import argparse -import logging -import datetime -import json -import collections -import re -import sys -import math - -import sqlite3 -import requests - -import flask -from flask import Flask -from flask import request -from flask import Response - -logger = logging.getLogger(__name__) - - -def walk_json_field_safe(obj, *fields): - """ for example a=[{"a": {"b": 2}}] - walk_json_field_safe(a, 0, "a", "b") will get 2 - walk_json_field_safe(a, 0, "not_exist") will get None - """ - try: - for f in fields: - obj = obj[f] - return obj - except: - return None - - -def request_with_error_handling(url): - try: - response = requests.get(url, allow_redirects=True, timeout=15) - response.raise_for_status() - return response.json() - except Exception as e: - logger.exception(e) - return None - - -def format_time(timestamp): - d = datetime.datetime.fromtimestamp(timestamp) - return d.strftime("%Y/%m/%d-%H:%M:%S") - - -def get_ip(ip_port): - """ return 1.2.3.4 on 1.2.3.4:123 """ - m = re.match("([0-9]+[.][0-9]+[.][0-9]+[.][0-9]+):?.*", ip_port) - if m: - return m.groups()[0] - return ip_port - - -class JobInfo(object): - def __init__(self, job_count=0, elapsed_time=0, cpu_sec=0, mem_sec=0, gpu_sec=0, - user="unknown", vc="unknown", start_time=0, finished_time=0, retries=0, - status="unknown", exit_code="N/A", max_mem_usage="N/A"): - """ elapsed_time is seconds, cpu_sec is vcore-seconds, mem_sec is - megabyte-seconds, gpu_sec is card-seconds """ - self.job_count = job_count - self.elapsed_time = elapsed_time - self.cpu_sec = cpu_sec - self.mem_sec = mem_sec - self.gpu_sec = gpu_sec - - self.user = user - self.vc = vc - self.start_time = start_time - self.finished_time = finished_time - self.retries = retries - self.status = status - self.exit_code = exit_code - self.max_mem_usage = max_mem_usage - - def __iadd__(self, o): - self.job_count += o.job_count - self.elapsed_time += o.elapsed_time - self.cpu_sec += o.cpu_sec - self.mem_sec += o.mem_sec - self.gpu_sec += o.gpu_sec - return self - - def __add__(self, o): - return JobInfo( - job_count=self.job_count + o.job_count, - elapsed_time=self.elapsed_time + o.elapsed_time, - cpu_sec=self.cpu_sec + o.cpu_sec, - mem_sec=self.mem_sec + o.mem_sec, - gpu_sec=self.gpu_sec + o.gpu_sec) - - def values(self): - return [self.job_count, self.elapsed_time, - self.cpu_sec, self.mem_sec, self.gpu_sec] - - def __repr__(self): - # NOTE this is used to generate final report - return ",".join(map(str, self.values())) - - -class JobReportEntries(object): - def __init__(self, username, vc, total_job_info, success_job_info, - failed_job_info, stopped_job_info, running_job_info, waiting_job_info): - self.username = username - self.vc = vc - self.total_job_info = total_job_info - self.success_job_info = success_job_info - self.failed_job_info = failed_job_info - self.stopped_job_info = stopped_job_info - self.running_job_info = running_job_info - self.waiting_job_info = waiting_job_info - - def values(self): - result = [self.username, self.vc] - result.extend(self.total_job_info.values()) - result.extend(self.success_job_info.values()) - result.extend(self.failed_job_info.values()) - result.extend(self.stopped_job_info.values()) - result.extend(self.running_job_info.values()) - result.extend(self.waiting_job_info.values()) - return result - - def __repr__(self): - # NOTE this is used to generate final report - return ",".join(map(str, self.values())) - - -class RawJob(object): - def __init__(self, user, vc, job, - start_time, finish_time, waiting_time, 
run_time, - retries, status, exit_code, cpu, mem, max_mem, gpu): - self.user = user - self.vc = vc - self.job = job - self.start_time = start_time - self.finish_time = finish_time - self.waiting_time = waiting_time - self.run_time = run_time - self.retries = retries - self.status = status - self.exit_code = exit_code - self.cpu = cpu - self.mem = mem - self.max_mem = max_mem - self.gpu = gpu - - def values(self): - return [self.user, self.vc, self.job, - self.start_time, self.finish_time, self.waiting_time, self.run_time, - self.retries, self.status, self.exit_code, - self.cpu, self.mem, self.max_mem, self.gpu] - - def __repr__(self): - # NOTE this is used to generate final report - return ",".join(map(str, self.values())) - - -class Alert(object): - default_get_ip = lambda a: get_ip(a["instance"]) - host_ip_mapping = { - "NodeNotReady": lambda a: get_ip(a["name"]), - "k8sApiServerNotOk": lambda a: get_ip(a["host_ip"]), - "NodeDiskPressure": lambda a: get_ip(a["name"]), - "NodeNotReady": lambda a: get_ip(a["name"]), - "PaiServicePodNotRunning": lambda a: get_ip(a["host_ip"]), - "PaiServicePodNotReady": lambda a: get_ip(a["host_ip"]), - } - - src_mapping = { - "NvidiaSmiEccError": lambda a: a["minor_number"], - "NvidiaMemoryLeak": lambda a: a["minor_number"], - "GpuUsedByExternalProcess": lambda a: a["minor_number"], - "GpuUsedByZombieContainer": lambda a: a["minor_number"], - "k8sApiServerNotOk": lambda a: a["error"], - "k8sDockerDaemonNotOk": lambda a: a["error"], - "NodeFilesystemUsage": lambda a: a["device"], - "NodeDiskPressure": lambda a: get_ip(a["name"]), - "NodeNotReady": lambda a: get_ip(a["name"]), - "AzureAgentConsumeTooMuchMem": lambda a: a["cmd"], - "PaiServicePodNotRunning": lambda a: a["name"], - "PaiServicePodNotReady": lambda a: a["name"], - "PaiServiceNotUp": lambda a: a["pai_service_name"], - "JobExporterHangs": lambda a: a["name"], - } - - def __init__(self, alert_name, start, durtion, labels): - """ alert_name are derived from labels, start/durtion is timestamp - value """ - self.alert_name = alert_name - self.start = start - self.durtion = durtion - self.labels = labels - - #f.write("alert_name,host_ip,source,start,durtion,labels\n") - - @staticmethod - def get_info(alert_name, labels, mapping): - return mapping.get(alert_name, Alert.default_get_ip)(labels) - - def labels_repr(self): - r = [] - for k, v in self.labels.items(): - if k in {"__name__", "alertname", "alertstate", "job", "type"}: - continue - r.append("%s:%s" % (k, v)) - return "|".join(r) - - def values(self): - return [self.alert_name, - Alert.get_info(self.alert_name, self.labels, Alert.host_ip_mapping), - Alert.get_info(self.alert_name, self.labels, Alert.src_mapping), - format_time(self.start), - self.durtion, - self.labels_repr()] - - def __repr__(self): - # NOTE this is used to generate final report - return ",".join(map(str, self.values())) - - -class GPUEntry(object): - def __init__(self, node_ip, gpu_id, avg_util): - self.node_ip = node_ip - self.gpu_id = gpu_id - self.avg_util = avg_util - - def values(self): - return [self.node_ip, self.gpu_id, self.avg_util] - - def __repr__(self): - # NOTE this is used to generate final report - return ",".join(map(str, self.values())) - - -class DB(object): - # If app is running, the finished_time is 0, should not delete it in delete_old_data - CREATE_APPS_TABLE = """CREATE TABLE IF NOT EXISTS apps ( - app_id text NOT NULL, - finished_time integer NOT NULL, - content text NOT NULL - )""" - CREATE_APP_ID_INDEX = "CREATE INDEX IF NOT EXISTS app_id_index ON 
apps (app_id);" - CREATE_APP_TIME_INDEX = "CREATE INDEX IF NOT EXISTS app_time_index ON apps (finished_time);" - - # If job is running, the finished_time is 0, should not delete it in delete_old_data - CREATE_FRAMEWORKS_TABLE = """CREATE TABLE IF NOT EXISTS frameworks ( - name text NOT NULL, - start_time integer NOT NULL, - finished_time integer NOT NULL, - content text NOT NULL - )""" - CREATE_FRAMEWORK_NAME_INDEX = "CREATE INDEX IF NOT EXISTS framework_name_index ON frameworks (name);" - CREATE_FRAMEWORK_TIME_INDEX = "CREATE INDEX IF NOT EXISTS framework_time_index ON frameworks (start_time, finished_time);" - - def __init__(self, db_path): - self.db_path = db_path - self.conn = sqlite3.connect(self.db_path) - cursor = self.conn.cursor() - cursor.execute(DB.CREATE_APPS_TABLE) - cursor.execute(DB.CREATE_APP_ID_INDEX) - cursor.execute(DB.CREATE_APP_TIME_INDEX) - cursor.execute(DB.CREATE_FRAMEWORKS_TABLE) - cursor.execute(DB.CREATE_FRAMEWORK_NAME_INDEX) - cursor.execute(DB.CREATE_FRAMEWORK_TIME_INDEX) - self.conn.commit() - - -def get_yarn_apps(yarn_url): - apps_url = urllib.parse.urljoin(yarn_url, "/ws/v1/cluster/apps") - result = [] - - obj = request_with_error_handling(apps_url) - - apps = walk_json_field_safe(obj, "apps", "app") - - if apps is None: - return result - - for app in apps: - app_id = walk_json_field_safe(app, "id") - if app_id is None: - continue - - finished_time = walk_json_field_safe(app, "finishedTime") or 0 - finished_time = int(finished_time / 1000) # yarn's time is in millisecond - content = json.dumps(app) - result.append({"app_id": app_id, - "finished_time": finished_time, "content": content}) - - return result - - -def get_frameworks(launcher_url): - launcher_url = urllib.parse.urljoin(launcher_url, "/v1/Frameworks") - result = [] - - obj = request_with_error_handling(launcher_url) - - frameworks = walk_json_field_safe(obj, "summarizedFrameworkInfos") - - if frameworks is None: - return result - - for framework in frameworks: - name = walk_json_field_safe(framework, "frameworkName") - if name is None: - continue - - finished_time = walk_json_field_safe(framework, "frameworkCompletedTimestamp") or 0 - finished_time = int(finished_time / 1000) # yarn's time is in millisecond - start_time = walk_json_field_safe(framework, "firstRequestTimestamp") or 0 - start_time = int(start_time / 1000) # yarn's time is in millisecond - content = json.dumps(framework) - result.append({"name": name, "start_time": start_time, - "finished_time": finished_time, "content": content}) - - return result - - -def refresh_cache(database, yarn_url, launcher_url): - db = DB(database) - - apps = get_yarn_apps(yarn_url) - logger.info("get %d of apps from yarn", len(apps)) - - with db.conn: - cursor = db.conn.cursor() - - for app in apps: - cursor.execute("""SELECT COUNT(*) FROM apps - WHERE app_id=?""", - (app["app_id"],)) - result = cursor.fetchone() - - if result[0] > 0: - cursor.execute("""UPDATE apps SET finished_time=?, content=? 
- WHERE app_id=?""", - (app["finished_time"], app["content"], app["app_id"])) - else: - cursor.execute("""INSERT INTO apps(app_id,finished_time,content) - VALUES(?,?,?)""", - (app["app_id"], app["finished_time"], app["content"])) - - db.conn.commit() - - frameworks = get_frameworks(launcher_url) - logger.info("get %d of frameworks from launcher", len(frameworks)) - - with db.conn: - cursor = db.conn.cursor() - - for framework in frameworks: - cursor.execute("""SELECT COUNT(*) FROM frameworks - WHERE name=?""", - (framework["name"],)) - result = cursor.fetchone() - - if result[0] > 0: - cursor.execute("""UPDATE frameworks SET finished_time=?, content=? - WHERE name=?""", - (framework["finished_time"], framework["content"], framework["name"])) - else: - cursor.execute("""INSERT INTO frameworks(name,start_time,finished_time,content) - VALUES(?,?,?,?)""", - (framework["name"], - framework["start_time"], - framework["finished_time"], - framework["content"])) - - db.conn.commit() - - -# https://github.com/Microsoft/pai/blob/pai-0.9.y/src/rest-server/src/models/job.js#L45 -# https://github.com/microsoft/pai/blob/v0.13.0/src/job-exit-spec/config/job-exit-spec.md -def convert_job_state(framework_state, exit_code): - if framework_state in { - "FRAMEWORK_WAITING", - "APPLICATION_CREATED", - "APPLICATION_LAUNCHED", - "APPLICATION_WAITING"}: - return "WAITING" - elif framework_state in { - "APPLICATION_RUNNING", - "APPLICATION_RETRIEVING_DIAGNOSTICS", - "APPLICATION_COMPLETED"}: - return "RUNNING" - elif framework_state == "FRAMEWORK_COMPLETED": - if exit_code is not None: - if exit_code == 0: - return "SUCCEEDED" - elif exit_code == -7351: - return "STOPPED" - else: - return "FAILED" - else: - return "FAILED" - - return "UNKNOWN" - - -def get_job_report(database, since, until, max_mem_usage): - """ return two values, one is aggregated job info, the other is raw job status """ - db = DB(database) - - with db.conn: - # Select more apps, since framework may retry, and previous retry - # may not finished in since~until range. - # Assume no retry will happen 1 month before framework finish. - app_since = datetime.datetime.fromtimestamp(since) - datetime.timedelta(days=31) - app_since = int(datetime.datetime.timestamp(app_since)) - cursor = db.conn.cursor() - cursor.execute("""SELECT content FROM apps - WHERE (finished_time>? AND finished_time? AND finished_time 0: - start = end = values[0][0] - events = [] - - for i, value in enumerate(values): - if i == len(values) - 1: - events.append({"start": start, "end": value[0]}) - break - - if value[0] - end <= gap: - end = value[0] - continue - else: - events.append({"start": start, "end": end}) - start = end = value[0] - - for event in events: - # because the end is the last time alert still happening, if we - # treat end - start equals to be the durtion of the alert, - # the alert with start == end will have durtion of 0, which is - # quite confusing, so we set durtion to be end - start + gap - result.append(Alert(alert_name, int(event["start"]), - int(event["end"] - event["start"] + gap), - labels)) - else: - logger.warning("unexpected zero values in alert %s", alert_name) - - logger.info("get %d alert entries", len(result)) - - return result - - -def get_gpu_util(prometheus_url, since, until): - args = urllib.parse.urlencode({ - "query": "nvidiasmi_utilization_gpu", - "start": str(since), - "end": str(until), - "step": "10m", - }) - - url = urllib.parse.urljoin(prometheus_url, - "/prometheus/api/v1/query_range") + "?" 
+ args - - logger.debug("requesting %s", url) - result = [] - - obj = request_with_error_handling(url) - - if walk_json_field_safe(obj, "status") != "success": - logger.warning("requesting %s failed, body is %s", url, obj) - return result - - metrics = walk_json_field_safe(obj, "data", "result") - - for metric in metrics: - node_ip = get_ip(walk_json_field_safe(metric, "metric", "instance")) - gpu_id = walk_json_field_safe(metric, "metric", "minor_number") - - values = walk_json_field_safe(metric, "values") - sum_ = count = avg = 0 - if values is not None and len(values) > 0: - for val in values: - sum_ += float(val[1]) - count += 1 - avg = sum_ / count - else: - logger.warning("unexpected no values in gpu utils %s, %s, default avg to 0", - node_ip, - gpu_id) - - result.append(GPUEntry(node_ip, gpu_id, avg)) - - logger.info("get %d gpu entries", len(result)) - - return result - - -def delete_old_data(database, days): - db = DB(database) - now = datetime.datetime.now() - delta = datetime.timedelta(days=days) - - ago = int(datetime.datetime.timestamp(now - delta)) - - with db.conn: - cursor = db.conn.cursor() - - # should not delete entries if finished_time is 0, they are running apps - cursor.execute("""DELETE FROM apps WHERE finished_time", u"label_ex:", - u"Resource: "}, output_lines) - - @patch("node_maintain.YarnOperator.execute") - def test_add_dedicate_vc(self, execute_mock): - args = self.ArgsMock(resource_manager_ip="127.0.0.1", restserver_ip="127.0.0.1", vc_name="test_vc_2", nodes={"10.151.40.132"}) - - execute_mock.side_effect = [ - "Node Labels: ,,,,", - None - ] - with requests_mock.mock() as requests_get_mock: - requests_get_mock.post("http://127.0.0.1:9186/api/v1/token", text=json.dumps({"token": "test"})) - requests_get_mock.get("http://127.0.0.1:8088/ws/v1/cluster/scheduler", - text=self.capacity_scheduler_response) - requests_get_mock.get("http://127.0.0.1:8088/ws/v1/cluster/nodes", text=self.cluster_nodes_response) - requests_get_mock.put("http://127.0.0.1:8088/ws/v1/cluster/scheduler-conf") - requests_get_mock.delete("http://127.0.0.1:9186/api/v1/virtual-clusters/test_vc", text="{}") - - remove_dedicate_vc(args) - - yarn_command_call = [ - call("yarn --config ./.hadoop rmadmin -replaceLabelsOnNode \"10.151.40.132=\" -failOnUnknownNodes"), - call("yarn --config ./.hadoop cluster --list-node-labels"), - call("yarn --config ./.hadoop rmadmin -removeFromClusterNodeLabels test_vc") - ] - execute_mock.assert_has_calls(yarn_command_call, any_order=False) - - scheduler_conf_call = [request_object for request_object in requests_get_mock.request_history if request_object.path == "/ws/v1/cluster/scheduler-conf"] - self.assertEqual(len(scheduler_conf_call), 3) - update_capacity, stop_queue, remove_queue = [xmltodict.parse(request_object.text) for request_object in scheduler_conf_call] - update_capacity = {or_dict["key"]: or_dict["value"] for or_dict in update_capacity["sched-conf"]["global-updates"]["entry"]} - update_capacity_expect = { - u"yarn.scheduler.capacity.root.default.maximum-capacity": u"100.0", - u"yarn.scheduler.capacity.root.test_vc.capacity": u"0.0", - u"yarn.scheduler.capacity.root.label_ex.maximum-capacity": u"0.0", - u"yarn.scheduler.capacity.root.vc_a.maximum-capacity": u"5.0", - u"yarn.scheduler.capacity.root.vc_a.capacity": u"5.0", - u"yarn.scheduler.capacity.root.default.capacity": u"95.0", - u"yarn.scheduler.capacity.root.test_vc.maximum-capacity": u"0.0", - u"yarn.scheduler.capacity.root.label_ex.capacity": u"0.0" - } - self.assertDictEqual(update_capacity, 
update_capacity_expect) - - remove_queue = {or_dict["key"]: or_dict["value"] for or_dict in - remove_queue["sched-conf"]["global-updates"]["entry"]} - remove_queue_expect = { - u"yarn.scheduler.capacity.root.accessible-node-labels.test_vc.capacity": u"0", - u"yarn.scheduler.capacity.root.test_vc.accessible-node-labels.test_vc.capacity": u"0", - u"yarn.scheduler.capacity.root.test_vc.default-node-label-expression": None, - } - self.assertDictEqual(remove_queue, remove_queue_expect) - - -if __name__ == "__main__": - assert not hasattr(sys.stdout, "getvalue") - unittest.main(module=__name__, buffer=True, exit=False) diff --git a/src/tools/tests/test_yarn_operator.py b/src/tools/tests/test_yarn_operator.py deleted file mode 100644 index b7bc1163c..000000000 --- a/src/tools/tests/test_yarn_operator.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
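The expected capacities asserted above (default at 95%, `vc_a` at 5%, the dedicated queues at 0%) fall out of the percentage-to-GPU round trip in `node_maintain.py`: convert each queue's share of the default partition into GPUs, credit the freed GPUs back to the default queue, then convert against the enlarged partition. A toy recomputation, with GPU counts assumed purely to reproduce the test's numbers:

```python
partition_gpus = 16.0                        # default-partition GPUs while the node is dedicated
queues = {"default": 93.75, "vc_a": 6.25}    # capacity as % of the partition

# percentage -> GPUs
gpus = {q: partition_gpus * pct / 100.0 for q, pct in queues.items()}

# return a 4-GPU node from the dedicated VC to the default partition
freed = 4.0
gpus["default"] += freed
partition_gpus += freed

# GPUs -> percentage against the enlarged partition
new_pct = {q: g / partition_gpus * 100.0 for q, g in gpus.items()}
print(new_pct)  # {'default': 95.0, 'vc_a': 5.0}
```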
- -from __future__ import absolute_import - -from mock import patch -import unittest - -from operator_wrapper.yarn_operator import YarnOperator - - -class YarnOperatorTestCase(unittest.TestCase): - - def setUp(self): - with patch("operator_wrapper.yarn_operator.YarnOperator.setup_yarn_configfile"): - self.yarnOperator = YarnOperator("localhost") - - @patch("operator_wrapper.yarn_operator.YarnOperator.setup_yarn_configfile") - def test__init__(self, setup_yarn_configfile): - YarnOperator("127.0.0.1") - setup_yarn_configfile.assert_called_with() - - - def test_generate_queue_update_xml(self): - from collections import OrderedDict - from xml.dom.minidom import parseString - raw_dict = OrderedDict([ - ("global-updates", [ - OrderedDict([("key", "yarn.scheduler.capacity.root.default.default-node-label-expression"), - ("value", "label_non")]), - OrderedDict([("key", "yarn.scheduler.capacity.root.default.accessible-node-labels.label_ex.capacity"), - ("value", 0)]), - - ]) - ]) - dom = parseString(self.yarnOperator.generate_queue_update_xml(raw_dict)) - expect_output = ''' - - - - yarn.scheduler.capacity.root.default.default-node-label-expression - label_non - - - yarn.scheduler.capacity.root.default.accessible-node-labels.label_ex.capacity - 0 - - - -''' - self.assertEquals(dom.toprettyxml(), expect_output) - - -if __name__ == "__main__": - unittest.main() diff --git a/src/tools/utility/__init__.py b/src/tools/utility/__init__.py deleted file mode 100644 index afedca73f..000000000 --- a/src/tools/utility/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/src/tools/utility/common.py b/src/tools/utility/common.py deleted file mode 100644 index 8f271f237..000000000 --- a/src/tools/utility/common.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
-# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import logging - -logger = logging.getLogger(__name__) - -def safe_get(dct, *keys): - for key in keys: - try: - dct = dct[key] - except KeyError: - return None - return dct diff --git a/src/tools/utility/log.py b/src/tools/utility/log.py deleted file mode 100644 index a96f96ca7..000000000 --- a/src/tools/utility/log.py +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright (c) Microsoft Corporation -# All rights reserved. -# -# MIT License -# -# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated -# documentation files (the "Software"), to deal in the Software without restriction, including without limitation -# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and -# to permit persons to whom the Software is furnished to do so, subject to the following conditions: -# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING -# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, -# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - -import yaml -import logging -import logging.config -import os - -def setup_logging(default_path="config/logging.yaml", default_level=logging.INFO, env_key="LOG_CFG"): - path = default_path - value = os.getenv(env_key, None) - if value: - path = value - if os.path.exists(path): - with open(path, "rt") as f: - config = yaml.safe_load(f) - logging.config.dictConfig(config) - else: - logging.basicConfig(level=default_level) \ No newline at end of file
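`safe_get`, like `walk_json_field_safe` in `reports.py`, trades explicit `KeyError` handling for a `None` result when walking nested responses, which keeps call sites free of try/except noise. For example:

```python
def safe_get(dct, *keys):
    # walk nested dicts, returning None as soon as a key is missing
    for key in keys:
        try:
            dct = dct[key]
        except KeyError:
            return None
    return dct

payload = {"data": {"result": [{"value": 42}]}}
assert safe_get(payload, "data", "result") == [{"value": 42}]
assert safe_get(payload, "data", "missing") is None
```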