Mirror of https://github.com/microsoft/nni.git
[Doc] clean useless files (#4707)
This commit is contained in:
Parent 5a7c6eca74
Commit 611ed639bb
@@ -17,7 +17,7 @@ NNI automates feature engineering, neural architecture search, hyperparameter tuning
 * [Installation guide](https://nni.readthedocs.io/en/stable/installation.html)
 * [Tutorials](https://nni.readthedocs.io/en/stable/tutorials.html)
 * [Python API reference](https://nni.readthedocs.io/en/stable/reference/python_api.html)
-* [Releases](https://nni.readthedocs.io/en/stable/Release.html)
+* [Releases](https://nni.readthedocs.io/en/stable/release.html)

 ## What's NEW! <a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>
@@ -0,0 +1,10 @@
+:orphan:
+
+Python API Reference
+====================
+
+.. autosummary::
+   :toctree: _modules
+   :recursive:
+
+   nni
@@ -161,7 +161,6 @@ exclude_patterns = [
     '_build',
     'Thumbs.db',
     '.DS_Store',
-    'Release_v1.0.md',
     '**.ipynb_checkpoints',
     # Exclude translations. They will be added back via replacement later if language is set.
     '**_zh.rst',
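The `exclude_patterns` entries in the hunk above are glob-style. As a rough illustration of how such patterns match doc source paths, here is a stdlib `fnmatch` sketch. Sphinx's real matcher compiles its own regexes and normalizes paths, so treat this as an approximation; `is_excluded` is a hypothetical helper, not a Sphinx API.

```python
from fnmatch import fnmatch

# Patterns from the conf.py hunk above (fnmatch's '*' crosses path
# separators, which roughly mimics the intended '**' behavior here).
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store',
                    '**.ipynb_checkpoints', '**_zh.rst']

def is_excluded(relative_path, patterns=exclude_patterns):
    """Return True if a doc source path matches any exclude pattern."""
    return any(fnmatch(relative_path, pattern) for pattern in patterns)
```

For example, `installation_zh.rst` is excluded as a translation, while `installation.rst` is kept.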
@@ -11,6 +11,6 @@ For details, please refer to the following tutorials:
 .. toctree::
    :maxdepth: 2

-   Overview <FeatureEngineering/Overview>
-   GradientFeatureSelector <FeatureEngineering/GradientFeatureSelector>
-   GBDTSelector <FeatureEngineering/GBDTSelector>
+   Overview <overview>
+   GradientFeatureSelector <gradient_feature_selector>
+   GBDTSelector <gbdt_selector>
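Renames like the one above are a common source of dead toctree entries. A small hypothetical checker (not part of NNI or Sphinx) that flags toctree targets whose `.rst` source file no longer exists under the doc root:

```python
from pathlib import Path

def missing_toctree_targets(targets, docroot):
    """Return the toctree targets with no matching .rst file under docroot."""
    root = Path(docroot)
    return [t for t in targets if not (root / f"{t}.rst").is_file()]
```

Running it with the new entries (`overview`, `gradient_feature_selector`, `gbdt_selector`) against the renamed directory would catch any path that was missed during the move.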
@@ -1,4 +1,4 @@
-.. 0958703dcd6f8078a1ad1bcaef9c7199
+.. 74ffd973c9cc0edea8dc524ed9a86840

 ###################
 特征工程
@@ -13,6 +13,6 @@
 .. toctree::
    :maxdepth: 2

-   概述 <FeatureEngineering/Overview>
-   GradientFeatureSelector <FeatureEngineering/GradientFeatureSelector>
-   GBDTSelector <FeatureEngineering/GBDTSelector>
+   概述 <overview>
+   GradientFeatureSelector <gradient_feature_selector>
+   GBDTSelector <gbdt_selector>
@@ -6,8 +6,8 @@ We are glad to announce the alpha release for Feature Engineering toolkit on top of NNI
 For now, we support the following feature selector:

-* `GradientFeatureSelector <./GradientFeatureSelector.rst>`__
-* `GBDTSelector <./GBDTSelector.rst>`__
+* `GradientFeatureSelector <./gradient_feature_selector.rst>`__
+* `GBDTSelector <./gbdt_selector.rst>`__

 These selectors are suitable for tabular data(which means it doesn't include image, speech and text data).
@@ -108,4 +108,4 @@ These articles have compared built-in tuners' performance on some different tasks

 :doc:`hpo_benchmark_stats`

-:doc:`/misc/hpo_comparison`
+:doc:`/sharings/hpo_comparison`
@@ -18,7 +18,7 @@ Neural Network Intelligence
    Hyperparameter Optimization <hpo/index>
    Neural Architecture Search <nas/index>
    Model Compression <compression/index>
-   Feature Engineering <feature_engineering>
+   Feature Engineering <feature_engineering/index>
    Experiment <experiment/overview>

 .. toctree::
@@ -28,27 +28,25 @@ Neural Network Intelligence

    nnictl Commands <reference/nnictl>
    Experiment Configuration <reference/experiment_config>
-   Python API <reference/_modules/nni>
-   API Reference <reference/python_api_ref>
+   Python API <reference/python_api>

 .. toctree::
    :maxdepth: 2
    :caption: Misc
    :hidden:

-   Use Cases and Solutions <misc/community_sharings>
-   Research and Publications <misc/research_publications>
-   FAQ <misc/faq>
+   Use Cases and Solutions <sharings/community_sharings>
+   Research and Publications <notes/research_publications>
+   notes/build_from_source
+   Contribution Guide <notes/contributing>
-   Change Log <Release>
+   Change Log <release>

 **NNI (Neural Network Intelligence)** is a lightweight but powerful toolkit to help users **automate**:

 * :doc:`Hyperparameter Tuning </hpo/overview>`,
 * :doc:`Neural Architecture Search </nas/index>`,
 * :doc:`Model Compression </compression/index>`,
-* :doc:`Feature Engineering </FeatureEngineering/Overview>`.
+* :doc:`Feature Engineering </feature_engineering/overview>`.

 .. Can't use section title here due to the limitation of toc
@@ -83,7 +81,7 @@ Then, please read :doc:`quickstart` and :doc:`tutorials` to start your journey with NNI
 * **New demo available**: `Youtube entry <https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw>`_ | `Bilibili 入口 <https://space.bilibili.com/1649051673>`_ - *last updated on May-26-2021*
 * **New webinar**: `Introducing Retiarii, A deep learning exploratory-training framework on NNI <https://note.microsoft.com/MSR-Webinar-Retiarii-Registration-Live.html>`_ - *scheduled on June-24-2021*
 * **New community channel**: `Discussions <https://github.com/microsoft/nni/discussions>`_
-* **New emoticons release**: :doc:`nnSpider <nnSpider>`
+* **New emoticons release**: :doc:`nnSpider <sharings/nn_spider/index>`

 .. raw:: html
@@ -207,7 +205,7 @@ Then, please read :doc:`quickstart` and :doc:`tutorials` to start your journey with NNI
 .. codesnippetcard::
    :icon: ../img/thumbnails/feature-engineering-small.svg
    :title: Feature Engineering
-   :link: FeatureEngineering/Overview
+   :link: feature_engineering/overview

 .. code-block::
@@ -1,4 +1,4 @@
-.. c16ad1fb7782d3510f6a6fa8c931d8aa
+.. 6b958f21bd23025c81836e54a7f4fbe4

 ###########################
 Neural Network Intelligence
@@ -16,17 +16,18 @@ Neural Network Intelligence
    自动(超参数)调优 <hpo/index>
    神经网络架构搜索<nas/index>
    模型压缩<compression/index>
-   特征工程<feature_engineering>
+   特征工程<feature_engineering/index>
    NNI实验 <experiment/overview>
-   HPO API Reference <reference/hpo>
-   Experiment API Reference <reference/experiment>
-   参考<reference>
-   示例与解决方案<misc/community_sharings>
-   研究和出版物 <misc/research_publications>
-   常见问题 <misc/faq>
+   nnictl Commands <reference/nnictl>
+   Experiment Configuration <reference/experiment_config>
+   Python API <reference/python_api>
+   示例与解决方案<sharings/community_sharings>
+   研究和出版物 <notes/research_publications>
+   从源代码安装 <notes/build_from_source>
+   如何贡献 <notes/contributing>
-   更改日志 <Release>
+   更改日志 <release>

 .. raw:: html
@@ -37,15 +37,15 @@ Basically, an experiment runs as follows: Tuner receives search space and generates configurations

 For each experiment, the user only needs to define a search space and update a few lines of code, and then leverage NNI built-in Tuner/Assessor and training platforms to search the best hyperparameters and/or neural architecture. There are basically 3 steps:

-* Step 1: `Define search space <Tutorial/SearchSpaceSpec.rst>`__
+* Step 1: :doc:`Define search space <../hpo/search_space>`

-* Step 2: `Update model codes <TrialExample/Trials.rst>`__
+* Step 2: Update model codes

-* Step 3: `Define Experiment <reference/experiment_config.rst>`__
+* Step 3: :doc:`Define Experiment <../reference/experiment_config>`

 .. image:: https://user-images.githubusercontent.com/23273522/51816627-5d13db80-2302-11e9-8f3e-627e260203d5.jpg

-For more details about how to run an experiment, please refer to `Get Started <Tutorial/QuickStart.rst>`__.
+For more details about how to run an experiment, please refer to :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>`.

 Core Features
 -------------
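To make the search-space half of step 1 concrete, here is a toy sampler over an NNI-style search space. It handles only the `choice` and `quniform` types that appear later in this commit, and it approximates `quniform` as `round(uniform(low, high) / q) * q`; real NNI tuners such as TPE do far more than uniform sampling, so this is an illustration only.

```python
import random

def sample_config(search_space, rng=random):
    """Draw one configuration from a (subset of an) NNI-style search space."""
    config = {}
    for name, spec in search_space.items():
        kind, args = spec["_type"], spec["_value"]
        if kind == "choice":
            # pick one of the listed candidate values
            config[name] = rng.choice(args)
        elif kind == "quniform":
            # uniform sample rounded to a multiple of q
            low, high, q = args
            config[name] = round(rng.uniform(low, high) / q) * q
        else:
            raise ValueError(f"unsupported search space type: {kind}")
    return config

space = {
    "kernel": {"_type": "choice", "_value": ["linear", "rbf", "poly", "sigmoid"]},
    "C": {"_type": "quniform", "_value": [0.1, 1, 0.1]},
}
cfg = sample_config(space)
```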
@@ -57,12 +57,12 @@ NNI also provides algorithm toolkits for machine learning and deep learning, especially
 Hyperparameter Tuning
 ^^^^^^^^^^^^^^^^^^^^^

-This is a core and basic feature of NNI, we provide many popular `automatic tuning algorithms <Tuner/BuiltinTuner.rst>`__ (i.e., tuner) and `early stop algorithms <Assessor/BuiltinAssessor.rst>`__ (i.e., assessor). You can follow `Quick Start <Tutorial/QuickStart.rst>`__ to tune your model (or system). Basically, there are the above three steps and then starting an NNI experiment.
+This is a core and basic feature of NNI, we provide many popular :doc:`automatic tuning algorithms <../hpo/tuners>` (i.e., tuner) and :doc:`early stop algorithms <../hpo/assessors>` (i.e., assessor). You can follow :doc:`Quickstart <../tutorials/hpo_quickstart_pytorch/main>` to tune your model (or system). Basically, there are the above three steps and then starting an NNI experiment.

 General NAS Framework
 ^^^^^^^^^^^^^^^^^^^^^

-This NAS framework is for users to easily specify candidate neural architectures, for example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer, and specify possible skip connections. NNI will find the best candidate automatically. On the other hand, the NAS framework provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found `here <NAS/Overview.rst>`__.
+This NAS framework is for users to easily specify candidate neural architectures, for example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer, and specify possible skip connections. NNI will find the best candidate automatically. On the other hand, the NAS framework provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found :doc:`here <../nas/index>`.

 NNI has support for many one-shot NAS algorithms such as ENAS and DARTS through NNI trial SDK. To use these algorithms you do not have to start an NNI experiment. Instead, import an algorithm in your trial code and simply run your trial code. If you want to tune the hyperparameters in the algorithms or want to run multiple instances, you can choose a tuner and start an NNI experiment.
@@ -75,11 +75,11 @@ NNI provides an easy-to-use model compression framework to compress deep neural
 inference speed without losing performance significantlly. Model compression on NNI includes pruning algorithms and quantization algorithms. NNI provides many pruning and
 quantization algorithms through NNI trial SDK. Users can directly use them in their trial code and run the trial code without starting an NNI experiment. Users can also use NNI model compression framework to customize their own pruning and quantization algorithms.

-A detailed description of model compression and its usage can be found `here <Compression/Overview.rst>`__.
+A detailed description of model compression and its usage can be found :doc:`here <../compression/index>`.

 Automatic Feature Engineering
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Automatic feature engineering is for users to find the best features for their tasks. A detailed description of automatic feature engineering and its usage can be found `here <FeatureEngineering/Overview.rst>`__. It is supported through NNI trial SDK, which means you do not have to create an NNI experiment. Instead, simply import a built-in auto-feature-engineering algorithm in your trial code and directly run your trial code.
+Automatic feature engineering is for users to find the best features for their tasks. A detailed description of automatic feature engineering and its usage can be found :doc:`here <../feature_engineering/overview>`. It is supported through NNI trial SDK, which means you do not have to create an NNI experiment. Instead, simply import a built-in auto-feature-engineering algorithm in your trial code and directly run your trial code.

 The auto-feature-engineering algorithms usually have a bunch of hyperparameters themselves. If you want to automatically tune those hyperparameters, you can leverage hyperparameter tuning of NNI, that is, choose a tuning algorithm (i.e., tuner) and start an NNI experiment for it.
@@ -1,14 +0,0 @@
-:orphan:
-
-.. to be removed
-
-References
-==================
-
-.. toctree::
-   :maxdepth: 2
-
-   nnictl Commands <reference/nnictl>
-   Experiment Configuration <reference/experiment_config>
-   API References <reference/python_api_ref>
-   Supported Framework Library <SupportedFramework_Library>
@@ -1,10 +1,11 @@
 :orphan:

-API Reference
-=============
+Python API Reference
+====================

-.. toctree::
-   :maxdepth: 1
+.. autosummary::
+   :toctree: _modules
+   :recursive:

-   Hyperparameter Optimization <hpo>
-   Neural Architecture Search <nas>
-   Model Compression <compression>
-   Experiment <experiment>
-   Others <others>
+   nni
@@ -1,5 +0,0 @@
-Feature Engineering
-===================
-
-nni.algorithms.feature_engineering
-----------------------------------
@@ -1,12 +0,0 @@
-API Reference
-=============
-
-.. toctree::
-   :maxdepth: 1
-
-   Hyperparameter Optimization <hpo>
-   Neural Architecture Search <./python_api/nas>
-   Model Compression <compression>
-   Feature Engineering <./python_api/feature_engineering>
-   Experiment <experiment>
-   Others <others>
@@ -1,14 +0,0 @@
-.. e973987e22c5e2d43f325d6f29717ecb
-
-:orphan:
-
-参考
-==================
-
-.. toctree::
-   :maxdepth: 2
-
-   nnictl 命令 <reference/nnictl>
-   Experiment 配置 <reference/experiment_config>
-   API 参考 <reference/python_api_ref>
-   支持的框架和库 <SupportedFramework_Library>
@@ -7,7 +7,7 @@
     <ul class="emotion">
         <li class="first">
             <div>
-                <a href="{{ pathto('nnSpider/nobug') }}">
+                <a href="{{ pathto('nobug') }}">
                     <img src="_static/img/NoBug.png" alt="NoBug" />
                 </a>
             </div>
@@ -15,7 +15,7 @@
         </li>
         <li class="first">
             <div>
-                <a href="{{ pathto('nnSpider/holiday') }}">
+                <a href="{{ pathto('holiday') }}">
                     <img src="_static/img/Holiday.png" alt="Holiday" />
                 </a>
             </div>
@@ -23,7 +23,7 @@
         </li>
        <li class="first">
             <div>
-                <a href="{{ pathto('nnSpider/errorEmotion') }}">
+                <a href="{{ pathto('error_emotion') }}">
                     <img src="_static/img/Error.png" alt="Error" />
                 </a>
             </div>
@@ -31,7 +31,7 @@
         </li>
         <li class="second">
             <div>
-                <a href="{{ pathto('nnSpider/working') }}">
+                <a href="{{ pathto('working') }}">
                     <img class="working" src="_static/img/Working.png" alt="Working" />
                 </a>
             </div>
@@ -39,7 +39,7 @@
         </li>
         <li class="second">
             <div>
-                <a href="{{ pathto('nnSpider/sign') }}">
+                <a href="{{ pathto('sign') }}">
                     <img class="sign" src="_static/img/Sign.png" alt="Sign" />
                 </a>
             </div>
@@ -47,7 +47,7 @@
         </li>
         <li class="second">
             <div>
-                <a href="{{ pathto('nnSpider/crying') }}">
+                <a href="{{ pathto('crying') }}">
                     <img class="crying" src="_static/img/Crying.png" alt="Crying" />
                 </a>
             </div>
@@ -55,7 +55,7 @@
         </li>
         <li class="three">
             <div>
-                <a href="{{ pathto('nnSpider/cut') }}">
+                <a href="{{ pathto('cut') }}">
                     <img src="_static/img/Cut.png" alt="Crying" />
                 </a>
             </div>
@@ -63,7 +63,7 @@
         </li>
         <li class="three">
             <div>
-                <a href="{{ pathto('nnSpider/weaving') }}">
+                <a href="{{ pathto('weaving') }}">
                     <img class="weaving" src="_static/img/Weaving.png" alt="Weaving" />
                 </a>
             </div>
@@ -71,7 +71,7 @@
         </li>
         <li class="three">
             <div class="comfort">
-                <a href="{{ pathto('nnSpider/comfort') }}">
+                <a href="{{ pathto('comfort') }}">
                     <img src="_static/img/Comfort.png" alt="Weaving" />
                 </a>
             </div>
@@ -79,7 +79,7 @@
         </li>
         <li class="four">
             <div>
-                <a href="{{ pathto('nnSpider/sweat') }}">
+                <a href="{{ pathto('sweat') }}">
                     <img src="_static/img/Sweat.png" alt="Sweat" />
                 </a>
             </div>
@@ -7,7 +7,6 @@ Tutorials
    :maxdepth: 2
    :hidden:

-   tutorials/nni_experiment
    tutorials/hello_nas
    tutorials/nasbench_as_dataset
    tutorials/pruning_quick_start_mnist
@@ -17,13 +16,6 @@ Tutorials

 .. ----------------------

-.. cardlinkitem::
-   :header: Start and Manage a New Experiment
-   :description: Familiarize yourself with Pythonic API to manage a hyper-parameter tuning experiment
-   :link: tutorials/nni_experiment.html
-   :image: ../img/thumbnails/overview-31.png
-   :tags: Experiment/HPO
-
 .. cardlinkitem::
    :header: HPO Quickstart with PyTorch
    :description: Use HPO to tune a PyTorch FashionMNIST model
@@ -30,27 +30,6 @@ Tutorials

    /tutorials/pruning_speedup

-.. raw:: html
-
-    <div class="sphx-glr-thumbcontainer" tooltip="Start and Manage a New Experiment">
-
-.. only:: html
-
-    .. figure:: /tutorials/images/thumb/sphx_glr_nni_experiment_thumb.png
-        :alt: Start and Manage a New Experiment
-
-        :ref:`sphx_glr_tutorials_nni_experiment.py`
-
-.. raw:: html
-
-    </div>
-
-.. toctree::
-    :hidden:
-
-    /tutorials/nni_experiment
-
 .. raw:: html

     <div class="sphx-glr-thumbcontainer" tooltip="Quantization reduces model size and speeds up inference time by reducing the number of bits req...">
@@ -1,187 +0,0 @@
-{
-  "cells": [
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "%matplotlib inline"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "\n# Start and Manage a New Experiment\n"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "## Configure Search Space\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "search_space = {\n    \"C\": {\"_type\": \"quniform\", \"_value\": [0.1, 1, 0.1]},\n    \"kernel\": {\"_type\": \"choice\", \"_value\": [\"linear\", \"rbf\", \"poly\", \"sigmoid\"]},\n    \"degree\": {\"_type\": \"choice\", \"_value\": [1, 2, 3, 4]},\n    \"gamma\": {\"_type\": \"quniform\", \"_value\": [0.01, 0.1, 0.01]},\n    \"coef0\": {\"_type\": \"quniform\", \"_value\": [0.01, 0.1, 0.01]}\n}"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "## Configure Experiment\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "from nni.experiment import Experiment\nexperiment = Experiment('local')\nexperiment.config.experiment_name = 'Example'\nexperiment.config.trial_concurrency = 2\nexperiment.config.max_trial_number = 10\nexperiment.config.search_space = search_space\nexperiment.config.trial_command = 'python scripts/trial_sklearn.py'\nexperiment.config.trial_code_directory = './'\nexperiment.config.tuner.name = 'TPE'\nexperiment.config.tuner.class_args['optimize_mode'] = 'maximize'\nexperiment.config.training_service.use_active_gpu = True"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "## Start Experiment\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "experiment.start(8080)"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "## Experiment View & Control\n\nView the status of experiment.\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "experiment.get_status()"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "Wait until at least one trial finishes.\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "import time\n\nfor _ in range(10):\n    stats = experiment.get_job_statistics()\n    if any(stat['trialJobStatus'] == 'SUCCEEDED' for stat in stats):\n        break\n    time.sleep(10)"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "Export the experiment data.\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "experiment.export_data()"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "Get metric of jobs\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "experiment.get_job_metrics()"
-      ]
-    },
-    {
-      "cell_type": "markdown",
-      "metadata": {},
-      "source": [
-        "## Stop Experiment\n\n"
-      ]
-    },
-    {
-      "cell_type": "code",
-      "execution_count": null,
-      "metadata": {
-        "collapsed": false
-      },
-      "outputs": [],
-      "source": [
-        "experiment.stop()"
-      ]
-    }
-  ],
-  "metadata": {
-    "kernelspec": {
-      "display_name": "Python 3",
-      "language": "python",
-      "name": "python3"
-    },
-    "language_info": {
-      "codemirror_mode": {
-        "name": "ipython",
-        "version": 3
-      },
-      "file_extension": ".py",
-      "mimetype": "text/x-python",
-      "name": "python",
-      "nbconvert_exporter": "python",
-      "pygments_lexer": "ipython3",
-      "version": "3.8.8"
-    }
-  },
-  "nbformat": 4,
-  "nbformat_minor": 0
-}
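The notebook's fixed ten-iteration wait can be generalized into a timeout-based poller; `wait_until` here is a hypothetical helper, not an NNI API.

```python
import time

def wait_until(condition, timeout=100.0, interval=10.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll condition() until it returns True or timeout (seconds) elapses.

    Returns True if the condition became true, False on timeout.
    """
    deadline = clock() + timeout
    while True:
        if condition():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

With the tutorial's `experiment` object, the notebook's loop would become something like `wait_until(lambda: any(s['trialJobStatus'] == 'SUCCEEDED' for s in experiment.get_job_statistics()))`.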
@@ -1,67 +0,0 @@
-"""
-Start and Manage a New Experiment
-=================================
-"""
-
-# %%
-# Configure Search Space
-# ----------------------
-
-search_space = {
-    "C": {"_type": "quniform", "_value": [0.1, 1, 0.1]},
-    "kernel": {"_type": "choice", "_value": ["linear", "rbf", "poly", "sigmoid"]},
-    "degree": {"_type": "choice", "_value": [1, 2, 3, 4]},
-    "gamma": {"_type": "quniform", "_value": [0.01, 0.1, 0.01]},
-    "coef0": {"_type": "quniform", "_value": [0.01, 0.1, 0.01]}
-}
-
-# %%
-# Configure Experiment
-# --------------------
-
-from nni.experiment import Experiment
-experiment = Experiment('local')
-experiment.config.experiment_name = 'Example'
-experiment.config.trial_concurrency = 2
-experiment.config.max_trial_number = 10
-experiment.config.search_space = search_space
-experiment.config.trial_command = 'python scripts/trial_sklearn.py'
-experiment.config.trial_code_directory = './'
-experiment.config.tuner.name = 'TPE'
-experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
-experiment.config.training_service.use_active_gpu = True
-
-# %%
-# Start Experiment
-# ----------------
-experiment.start(8080)
-
-# %%
-# Experiment View & Control
-# -------------------------
-#
-# View the status of experiment.
-experiment.get_status()
-
-# %%
-# Wait until at least one trial finishes.
-import time
-
-for _ in range(10):
-    stats = experiment.get_job_statistics()
-    if any(stat['trialJobStatus'] == 'SUCCEEDED' for stat in stats):
-        break
-    time.sleep(10)
-
-# %%
-# Export the experiment data.
-experiment.export_data()
-
-# %%
-# Get metric of jobs
-experiment.get_job_metrics()
-
-# %%
-# Stop Experiment
-# ---------------
-experiment.stop()
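The deleted script ends after exporting trial data; a natural follow-up is picking the best configuration. A sketch over `export_data()`-style records, assuming dict-shaped records and `maximize` mode — the field names mirror the `TrialResult` repr shown elsewhere in this diff, but dict access is an assumption here:

```python
def best_trial(results):
    """Return the record with the highest reported metric (maximize mode)."""
    return max(results, key=lambda r: r["value"])

# Toy records shaped like the exported TrialResult entries in this diff.
results = [
    {"parameter": {"C": 0.9, "kernel": "rbf"}, "value": 0.9733, "trialJobId": "dNOZt"},
    {"parameter": {"C": 0.2, "kernel": "linear"}, "value": 0.9133, "trialJobId": "aaaaa"},
]
```

For a minimize-mode experiment one would use `min` instead of `max`.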
@@ -1 +0,0 @@
-9f822647d89f05264b70d1ae1c473be1
@ -1,265 +0,0 @@
|
|||
|
||||
.. DO NOT EDIT.
|
||||
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
|
||||
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
|
||||
.. "tutorials/nni_experiment.py"
|
||||
.. LINE NUMBERS ARE GIVEN BELOW.
|
||||
|
||||
.. only:: html
|
||||
|
||||
.. note::
|
||||
:class: sphx-glr-download-link-note
|
||||
|
||||
Click :ref:`here <sphx_glr_download_tutorials_nni_experiment.py>`
|
||||
to download the full example code
|
||||
|
||||
.. rst-class:: sphx-glr-example-title
|
||||
|
||||
.. _sphx_glr_tutorials_nni_experiment.py:
|
||||
|
||||
|
||||
Start and Manage a New Experiment
|
||||
=================================
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 7-9
|
||||
|
||||
Configure Search Space
|
||||
----------------------
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 9-18
|
||||
|
||||
.. code-block:: default
|
||||
|
||||
|
||||
search_space = {
|
||||
"C": {"_type": "quniform", "_value": [0.1, 1, 0.1]},
|
||||
"kernel": {"_type": "choice", "_value": ["linear", "rbf", "poly", "sigmoid"]},
|
||||
"degree": {"_type": "choice", "_value": [1, 2, 3, 4]},
|
||||
"gamma": {"_type": "quniform", "_value": [0.01, 0.1, 0.01]},
|
||||
"coef0": {"_type": "quniform", "_value": [0.01, 0.1, 0.01]}
|
||||
}
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 19-21
|
||||
|
||||
Configure Experiment
|
||||
--------------------
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 21-34
|
||||
|
||||
.. code-block:: default
|
||||
|
||||
|
||||
from nni.experiment import Experiment
|
||||
experiment = Experiment('local')
|
||||
experiment.config.experiment_name = 'Example'
|
||||
experiment.config.trial_concurrency = 2
|
||||
experiment.config.max_trial_number = 10
|
||||
experiment.config.search_space = search_space
|
||||
experiment.config.trial_command = 'python scripts/trial_sklearn.py'
|
||||
experiment.config.trial_code_directory = './'
|
||||
experiment.config.tuner.name = 'TPE'
|
||||
experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
|
||||
experiment.config.training_service.use_active_gpu = True
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 35-37
|
||||
|
||||
Start Experiment
|
||||
----------------
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 37-39
|
||||
|
||||
.. code-block:: default
|
||||
|
||||
experiment.start(8080)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.. rst-class:: sphx-glr-script-out
|
||||
|
||||
Out:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
[2022-02-07 18:56:04] Creating experiment, Experiment ID: fl9vu67z
|
||||
[2022-02-07 18:56:04] Starting web server...
|
||||
[2022-02-07 18:56:05] Setting up...
|
||||
[2022-02-07 18:56:05] Web UI URLs: http://127.0.0.1:8080 http://10.190.173.211:8080 http://172.17.0.1:8080 http://192.168.49.1:8080
|
||||
|
||||
|
||||
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 40-44
|
||||
|
||||
Experiment View & Control
|
||||
-------------------------
|
||||
|
||||
View the status of experiment.
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 44-46
|
||||
|
||||
.. code-block:: default
|
||||
|
||||
experiment.get_status()
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.. rst-class:: sphx-glr-script-out
|
||||
|
||||
Out:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
||||
'RUNNING'
|
||||
|
||||
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 47-48
|
||||
|
||||
Wait until at least one trial finishes.
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 48-56
|
||||
|
||||
.. code-block:: default
|
||||
|
||||
import time
|
||||
|
||||
for _ in range(10):
|
||||
stats = experiment.get_job_statistics()
|
||||
if any(stat['trialJobStatus'] == 'SUCCEEDED' for stat in stats):
|
||||
break
|
||||
time.sleep(10)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 57-58
|
||||
|
||||
Export the experiment data.
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 58-60
|
||||
|
||||
.. code-block:: default
|
||||
|
||||
experiment.export_data()
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
.. rst-class:: sphx-glr-script-out
|
||||
|
||||
Out:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
||||
[TrialResult(parameter={'C': 0.9, 'kernel': 'rbf', 'degree': 4, 'gamma': 0.07, 'coef0': 0.03}, value=0.9733333333333334, trialJobId='dNOZt'), TrialResult(parameter={'C': 0.8, 'kernel': 'sigmoid', 'degree': 2, 'gamma': 0.01, 'coef0': 0.01}, value=0.9733333333333334, trialJobId='okYSD')]
|
||||
|
||||
|
||||
|
||||
.. GENERATED FROM PYTHON SOURCE LINES 61-62

Get metric of jobs

.. GENERATED FROM PYTHON SOURCE LINES 62-64

.. code-block:: default

    experiment.get_job_metrics()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    {'okYSD': [TrialMetricData(timestamp=1644227777089, trialJobId='okYSD', parameterId='1', type='FINAL', sequence=0, data=0.9733333333333334)], 'dNOZt': [TrialMetricData(timestamp=1644227777357, trialJobId='dNOZt', parameterId='0', type='FINAL', sequence=0, data=0.9733333333333334)]}

.. GENERATED FROM PYTHON SOURCE LINES 65-67

Stop Experiment
---------------

.. GENERATED FROM PYTHON SOURCE LINES 67-68

.. code-block:: default

    experiment.stop()

.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    [2022-02-07 18:56:25] Stopping experiment, please wait...
    [2022-02-07 18:56:28] Experiment stopped

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** ( 0 minutes 24.662 seconds)

.. _sphx_glr_download_tutorials_nni_experiment.py:

.. only:: html

 .. container:: sphx-glr-footer
    :class: sphx-glr-footer-example

  .. container:: sphx-glr-download sphx-glr-download-python

     :download:`Download Python source code: nni_experiment.py <nni_experiment.py>`

  .. container:: sphx-glr-download sphx-glr-download-jupyter

     :download:`Download Jupyter notebook: nni_experiment.ipynb <nni_experiment.ipynb>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
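The ``get_job_metrics()`` output above maps trial job IDs to lists of ``TrialMetricData`` records. A hedged sketch of how such a dict could be post-processed to find the best trial — the ``namedtuple`` here is a plain-data stand-in for NNI's record type, not the real class:

```python
from collections import namedtuple

# stand-in for NNI's TrialMetricData record (field names taken from the output above)
TrialMetricData = namedtuple(
    'TrialMetricData',
    ['timestamp', 'trialJobId', 'parameterId', 'type', 'sequence', 'data'])

metrics = {
    'okYSD': [TrialMetricData(1644227777089, 'okYSD', '1', 'FINAL', 0, 0.973)],
    'dNOZt': [TrialMetricData(1644227777357, 'dNOZt', '0', 'FINAL', 0, 0.971)],
}

def best_final_trial(job_metrics):
    # keep only FINAL records and pick the trial with the highest metric value
    finals = {jid: m.data for jid, ms in job_metrics.items()
              for m in ms if m.type == 'FINAL'}
    return max(finals, key=finals.get)

print(best_final_trial(metrics))  # → okYSD
```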
Binary file not shown.
@ -18,8 +18,6 @@ Computation times
+-------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_nasbench_as_dataset.py` (``nasbench_as_dataset.py``)       | 00:00.000 | 0.0 MB |
+-------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_nni_experiment.py` (``nni_experiment.py``)                 | 00:00.000 | 0.0 MB |
+-------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_pruning_customize.py` (``pruning_customize.py``)           | 00:00.000 | 0.0 MB |
+-------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_quantization_customize.py` (``quantization_customize.py``) | 00:00.000 | 0.0 MB |
+-------------------------------------------------------------------------------------+-----------+--------+
@ -1,135 +0,0 @@
import argparse
import m2r
import os
import re
import shutil
from pathlib import Path


def single_line_process(line):
    if line == ' .. contents::':
        return '.. contents::'
    # https://github.com/sphinx-doc/sphinx/issues/3921
    line = re.sub(r'(`.*? <.*?>`)_', r'\1__', line)
    # inline emphasis
    line = re.sub(r'\*\*\\ (.*?)\\ \*\*', r' **\1** ', line)
    line = re.sub(r'\*(.*?)\\ \*', r'*\1*', line)
    line = re.sub(r'\*\*(.*?) \*\*', r'**\1** ', line)
    line = re.sub(r'\\\*\\\*(.*?)\*\*', r'**\1**', line)
    line = re.sub(r'\\\*\\\*(.*?)\*\*\\ ', r'**\1**', line)
    line = line.replace(r'\* - `\**', r'* - `**')
    line = re.sub(r'\\\* \*\*(.*?)\*\* \(\\\*\s*(.*?)\s*\*\\ \)', r'* \1 (\2)', line)
    line = re.sub(r'\<(.*)\.md(\>|#)', r'<\1.rst\2', line)
    line = re.sub(r'`\*\*(.*?)\*\* <#(.*?)>`__', r'`\1 <#\2>`__', line)
    line = re.sub(r'\*\* (classArgs|stop|FLOPS.*?|pruned.*?|large.*?|path|pythonPath|2D.*?|codeDirectory|ps|worker|Tuner|Assessor)\*\*',
                  r' **\1**', line)

    line = line.replace('.. code-block:::: bash', '.. code-block:: bash')
    line = line.replace('raw-html-m2r', 'raw-html')
    line = line.replace('[toc]', '.. toctree::')

    # image
    line = re.sub(r'\:raw\-html\:`\<img src\=\"(.*?)\" style\=\"zoom\: ?(\d+)\%\;\" \/\>`', r'\n.. image:: \1\n   :scale: \2%', line)

    # special case (per line handling)
    line = line.replace('Nb = |Db|', r'Nb = \|Db\|')
    line = line.replace(' Here is just a small list of libraries ', '\nHere is just a small list of libraries ')
    line = line.replace(' Find the data management region in job submission page.', 'Find the data management region in job submission page.')
    line = line.replace('Tuner/InstallCustomizedTuner.md', 'Tuner/InstallCustomizedTuner')
    line = line.replace('✓', ':raw-html:`✓`')
    line = line.replace(' **builtinTunerName** and** classArgs**', '**builtinTunerName** and **classArgs**')
    line = line.replace('`\ ``nnictl ss_gen`` <../Tutorial/Nnictl.rst>`__', '`nnictl ss_gen <../Tutorial/Nnictl.rst>`__')
    line = line.replace('**Step 1. Install NNI, follow the install guide `here <../Tutorial/QuickStart.rst>`__.**',
                        '**Step 1. Install NNI, follow the install guide** `here <../Tutorial/QuickStart.rst>`__.')
    line = line.replace('*Please refer to `here ', 'Please refer to `here ')
    # line = line.replace('\* **optimize_mode** ', '* **optimize_mode** ')
    if line == '~' * len(line):
        line = '^' * len(line)
    return line


def special_case_replace(full_text):
    replace_pairs = {}
    replace_pairs['PyTorch\n"""""""'] = '**PyTorch**'
    replace_pairs['Search Space\n============'] = '.. role:: raw-html(raw)\n   :format: html\n\nSearch Space\n============'
    for file in os.listdir(Path(__file__).parent / 'patches'):
        with open(Path(__file__).parent / 'patches' / file) as f:
            r, s = f.read().split('%%%%%%\n')
            replace_pairs[r] = s
    for r, s in replace_pairs.items():
        full_text = full_text.replace(r, s)
    return full_text


def process_table(content):
    content = content.replace('------ |', '------|')
    lines = []
    for line in content.split('\n'):
        if line.startswith(' |'):
            line = line[2:]
        lines.append(line)
    return '\n'.join(lines)


def process_github_link(line):
    line = re.sub(r'`(\\ ``)?([^`]*?)(``)? \<(.*?)(blob|tree)/v1.9/(.*?)\>`__', r':githublink:`\2 <\6>`', line)
    if 'githublink' in line:
        line = re.sub(r'\*Example: (.*)\*', r'*Example:* \1', line)
        line = line.replace('https://nni.readthedocs.io/en/latest', '')
    return line


for root, dirs, files in os.walk('en_US'):
    root = Path(root)
    for file in files:
        if not file.endswith('.md') or file == 'Release_v1.0.md':
            continue

        with open(root / file) as f:
            md_content = f.read()

        if file == 'Nnictl.md':
            md_content = process_table(md_content)

        out = m2r.convert(md_content)
        lines = out.split('\n')
        if lines[0] == '':
            lines = lines[1:]

        # remove code-block eval_rst
        i = 0
        while i < len(lines):
            line = lines[i]
            if line.strip() == '.. code-block:: eval_rst':
                space_count = line.index('.')
                lines[i] = lines[i + 1] = None
                if i > 0 and lines[i - 1]:
                    lines[i] = ''  # blank line
                i += 2
                while i < len(lines) and (lines[i].startswith(' ' * (space_count + 3)) or lines[i] == ''):
                    lines[i] = lines[i][space_count + 3:]
                    i += 1
            elif line.strip() == '.. code-block' or line.strip() == '.. code-block::':
                lines[i] += ':: bash'
                i += 1
            else:
                i += 1

        lines = [l for l in lines if l is not None]

        lines = list(map(single_line_process, lines))

        if file != 'Release.md':
            # githublink
            lines = list(map(process_github_link, lines))

        out = '\n'.join(lines)
        out = special_case_replace(out)

        with open(root / (Path(file).stem + '.rst'), 'w') as f:
            f.write(out)

        # back it up and remove
        moved_root = Path('archive_en_US') / root.relative_to('en_US')
        moved_root.mkdir(exist_ok=True)
        shutil.move(root / file, moved_root / file)
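The anonymous-reference rewrite near the top of this script (the workaround for sphinx issue 3921) can be reproduced in isolation. This is a minimal sketch using the same regex, with a hypothetical function name:

```python
import re

def anonymize_rst_links(line):
    # Turn named RST references `text <url>`_ into anonymous ones `text <url>`__,
    # so repeated link text does not produce "duplicate target" warnings.
    return re.sub(r'(`.*? <.*?>`)_', r'\1__', line)

print(anonymize_rst_links('See `NNI <https://github.com/microsoft/nni>`_ for details.'))
# → See `NNI <https://github.com/microsoft/nni>`__ for details.
```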
@ -1,24 +0,0 @@
* - GP Tuner
  - :raw-html:`✓`
  -
  - :raw-html:`✓`
  - :raw-html:`✓`
  - :raw-html:`✓`
  - :raw-html:`✓`
  - :raw-html:`✓`
  -
  -
  -
%%%%%%
* - GP Tuner
  - :raw-html:`✓`
  -
  - :raw-html:`✓`
  - :raw-html:`✓`
  - :raw-html:`✓`
  - :raw-html:`✓`
  - :raw-html:`✓`
  -
  -
  -
  -
@ -1,3 +0,0 @@
An SSH server needs a port; you need to expose Docker's SSH port to NNI as the connection port. For example, if you set your container's SSH port as **``A``** \ , you should map the container's port ** ``A``** to your remote host machine's other port ** ``B``** \ , NNI will connect port ** ``B``** as an SSH port, and your host machine will map the connection from port ** ``B``** to port ** ``A``** then NNI could connect to your Docker container.
%%%%%%
An SSH server needs a port; you need to expose Docker's SSH port to NNI as the connection port. For example, if you set your container's SSH port as ``A``, you should map the container's port ``A`` to your remote host machine's other port ``B``, NNI will connect port ``B`` as an SSH port, and your host machine will map the connection from port ``B`` to port ``A`` then NNI could connect to your Docker container.
@ -1,3 +0,0 @@
If the id ends with *, nnictl will stop all experiments whose ids matchs the regular.
%%%%%%
If the id ends with \*, nnictl will stop all experiments whose ids match the pattern.
@ -1,7 +0,0 @@
..

   make: *** [install-XXX] Segmentation fault (core dumped)
%%%%%%
.. code-block:: text

   make: *** [install-XXX] Segmentation fault (core dumped)
@ -1,3 +0,0 @@
Click ``Submit job`` button in web portal.
%%%%%%
Click the ``Submit job`` button in the web portal.
@ -1,5 +0,0 @@
:raw-html:`<div >
<img src="https://github.com/microsoft/Cream/blob/main/demo/intro.jpg" width="800"/>
</div>`
%%%%%%
:raw-html:`<div ><img src="https://github.com/microsoft/Cream/blob/main/demo/intro.jpg" width="800"/></div>`
@ -1,8 +0,0 @@
.. list-table::
   :header-rows: 1

%%%%%%
.. list-table::
   :header-rows: 1
   :widths: auto

@ -1,9 +0,0 @@
.. code-block:: bash

   1.1 Declare NNI API
   Include `import nni` in your trial code to use NNI APIs.
%%%%%%
..

   1.1 Declare NNI API
   Include `import nni` in your trial code to use NNI APIs.
@ -1,7 +0,0 @@
.. code-block:: bash

   from nni.compression.pytorch.utils.counter import count_flops_params
%%%%%%
.. code-block:: python

   from nni.compression.pytorch.utils.counter import count_flops_params
@ -1,7 +0,0 @@
.. code-block:: bash

   NNI's official image msranni/nni does not support SSH servers for the time being; you should build your own Docker image with an SSH configuration or use other images as a remote server.
%%%%%%
.. code-block:: text

   NNI's official image msranni/nni does not support SSH servers for the time being; you should build your own Docker image with an SSH configuration or use other images as a remote server.
@ -1,56 +0,0 @@
Code Styles & Naming Conventions
--------------------------------


* We follow `PEP8 <https://www.python.org/dev/peps/pep-0008/>`__ for Python code and naming conventions, do try to adhere to the same when making a pull request or making a change. One can also take the help of linters such as ``flake8`` or ``pylint``
* We also follow `NumPy Docstring Style <https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy>`__ for Python Docstring Conventions. During the `documentation building <Contributing.rst#documentation>`__\ , we use `sphinx.ext.napoleon <https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html>`__ to generate Python API documentation from Docstring.
* For docstrings, please refer to `numpydoc docstring guide <https://numpydoc.readthedocs.io/en/latest/format.html>`__ and `pandas docstring guide <https://python-sprints.github.io/pandas/guide/pandas_docstring.html>`__

  * For function docstring, **description** , **Parameters**\ , and** Returns**\ /** Yields** are mandatory.
  * For class docstring, **description**\ ,** Attributes** are mandatory.
  * For docstring to describe ``dict``\ , which is commonly used in our hyper-param format description, please refer to [RiboKit : Doc Standards

    * Internal Guideline on Writing Standards](https://ribokit.github.io/docs/text/)

Documentation
-------------

Our documentation is built with :githublink:`sphinx <docs>`.


*
  Before submitting the documentation change, please **build homepage locally**\ : ``cd docs/en_US && make html``\ , then you can see all the built documentation webpage under the folder ``docs/en_US/_build/html``. It's also highly recommended taking care of** every WARNING** during the build, which is very likely the signal of a** deadlink** and other annoying issues.

*
  For links, please consider using **relative paths** first. However, if the documentation is written in Markdown format, and:


  * It's an image link which needs to be formatted with embedded html grammar, please use global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``\ , which can be automatically generated by dragging picture onto `Github Issue <https://github.com/Microsoft/nni/issues/new>`__ Box.
  * It cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v1.9/`` (\ :githublink:`mnist.py <examples/trials/mnist-tfv1/mnist.py>` for example).
%%%%%%
Code Styles & Naming Conventions
--------------------------------

* We follow `PEP8 <https://www.python.org/dev/peps/pep-0008/>`__ for Python code and naming conventions, do try to adhere to the same when making a pull request or making a change. One can also take the help of linters such as ``flake8`` or ``pylint``
* We also follow `NumPy Docstring Style <https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy>`__ for Python Docstring Conventions. During the `documentation building <Contributing.rst#documentation>`__, we use `sphinx.ext.napoleon <https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html>`__ to generate Python API documentation from Docstring.
* For docstrings, please refer to `numpydoc docstring guide <https://numpydoc.readthedocs.io/en/latest/format.html>`__ and `pandas docstring guide <https://python-sprints.github.io/pandas/guide/pandas_docstring.html>`__

  * For function docstring, **description**, **Parameters**, and **Returns**/**Yields** are mandatory.
  * For class docstring, **description**, **Attributes** are mandatory.
  * For docstring to describe ``dict``, which is commonly used in our hyper-param format description, please refer to RiboKit Doc Standards

    * `Internal Guideline on Writing Standards <https://ribokit.github.io/docs/text/>`__

Documentation
-------------

Our documentation is built with :githublink:`sphinx <docs>`.

* Before submitting the documentation change, please **build homepage locally**: ``cd docs/en_US && make html``, then you can see all the built documentation webpage under the folder ``docs/en_US/_build/html``. It's also highly recommended taking care of **every WARNING** during the build, which is very likely the signal of a **deadlink** and other annoying issues.

*
  For links, please consider using **relative paths** first. However, if the documentation is written in Markdown format, and:


  * It's an image link which needs to be formatted with embedded html grammar, please use global URL like ``https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png``, which can be automatically generated by dragging picture onto `Github Issue <https://github.com/Microsoft/nni/issues/new>`__ Box.
  * It cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at ``https://github.com/Microsoft/nni/tree/v1.9/`` (:githublink:`mnist.py <examples/trials/mnist-tfv1/mnist.py>` for example).
@ -1,45 +0,0 @@
* -
  - Recommended
  - Minimum
* - **Operating System**
  - Ubuntu 16.04 or above
* - **CPU**
  - Intel® Core™ i5 or AMD Phenom™ II X3 or better
  - Intel® Core™ i3 or AMD Phenom™ X3 8650
* - **GPU**
  - NVIDIA® GeForce® GTX 660 or better
  - NVIDIA® GeForce® GTX 460
* - **Memory**
  - 6 GB RAM
  - 4 GB RAM
* - **Storage**
  - 30 GB available hare drive space
* - **Internet**
  - Boardband internet connection
* - **Resolution**
  - 1024 x 768 minimum display resolution
%%%%%%
* -
  - Recommended
  - Minimum
* - **Operating System**
  - Ubuntu 16.04 or above
  -
* - **CPU**
  - Intel® Core™ i5 or AMD Phenom™ II X3 or better
  - Intel® Core™ i3 or AMD Phenom™ X3 8650
* - **GPU**
  - NVIDIA® GeForce® GTX 660 or better
  - NVIDIA® GeForce® GTX 460
* - **Memory**
  - 6 GB RAM
  - 4 GB RAM
* - **Storage**
  - 30 GB available hard drive space
  -
* - **Internet**
  - Broadband internet connection
  -
* - **Resolution**
  - 1024 x 768 minimum display resolution
  -
@ -1,44 +0,0 @@
..

   1.1 Declare NNI API
   Include `import nni` in your trial code to use NNI APIs.

   1.2 Get predefined parameters
   Use the following code snippet:

   RECEIVED_PARAMS = nni.get_next_parameter()

   to get hyper-parameters' values assigned by tuner. `RECEIVED_PARAMS` is an object, for example:

   {"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}

   1.3 Report NNI results
   Use the API:

   `nni.report_intermediate_result(accuracy)`

   to send `accuracy` to assessor.

   Use the API:

   `nni.report_final_result(accuracy)`

   to send `accuracy` to tuner.
%%%%%%
* Declare NNI API: include ``import nni`` in your trial code to use NNI APIs.
* Get predefined parameters

  Use the following code snippet:

  .. code-block:: python

     RECEIVED_PARAMS = nni.get_next_parameter()

  to get hyper-parameters' values assigned by tuner. ``RECEIVED_PARAMS`` is an object, for example:

  .. code-block:: json

     {"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}

* Report NNI results: Use the API: ``nni.report_intermediate_result(accuracy)`` to send ``accuracy`` to assessor.
  Use the API: ``nni.report_final_result(accuracy)`` to send ``accuracy`` to tuner.
@ -1,46 +0,0 @@
* -
  - Recommended
  - Minimum
* - **Operating System**
  - macOS 10.14.1 or above
* - **CPU**
  - Intel® Core™ i7-4770 or better
  - Intel® Core™ i5-760 or better
* - **GPU**
  - AMD Radeon™ R9 M395X or better
  - NVIDIA® GeForce® GT 750M or AMD Radeon™ R9 M290 or better
* - **Memory**
  - 8 GB RAM
  - 4 GB RAM
* - **Storage**
  - 70GB available space SSD
  - 70GB available space 7200 RPM HDD
* - **Internet**
  - Boardband internet connection
* - **Resolution**
  - 1024 x 768 minimum display resolution
%%%%%%
* -
  - Recommended
  - Minimum
* - **Operating System**
  - macOS 10.14.1 or above
  -
* - **CPU**
  - Intel® Core™ i7-4770 or better
  - Intel® Core™ i5-760 or better
* - **GPU**
  - AMD Radeon™ R9 M395X or better
  - NVIDIA® GeForce® GT 750M or AMD Radeon™ R9 M290 or better
* - **Memory**
  - 8 GB RAM
  - 4 GB RAM
* - **Storage**
  - 70GB available space SSD
  - 70GB available space 7200 RPM HDD
* - **Internet**
  - Broadband internet connection
  -
* - **Resolution**
  - 1024 x 768 minimum display resolution
  -
@ -1,45 +0,0 @@
* -
  - Recommended
  - Minimum
* - **Operating System**
  - Windows 10 1809 or above
* - **CPU**
  - Intel® Core™ i5 or AMD Phenom™ II X3 or better
  - Intel® Core™ i3 or AMD Phenom™ X3 8650
* - **GPU**
  - NVIDIA® GeForce® GTX 660 or better
  - NVIDIA® GeForce® GTX 460
* - **Memory**
  - 6 GB RAM
  - 4 GB RAM
* - **Storage**
  - 30 GB available hare drive space
* - **Internet**
  - Boardband internet connection
* - **Resolution**
  - 1024 x 768 minimum display resolution
%%%%%%
* -
  - Recommended
  - Minimum
* - **Operating System**
  - Windows 10 1809 or above
  -
* - **CPU**
  - Intel® Core™ i5 or AMD Phenom™ II X3 or better
  - Intel® Core™ i3 or AMD Phenom™ X3 8650
* - **GPU**
  - NVIDIA® GeForce® GTX 660 or better
  - NVIDIA® GeForce® GTX 460
* - **Memory**
  - 6 GB RAM
  - 4 GB RAM
* - **Storage**
  - 30 GB available hard drive space
  -
* - **Internet**
  - Broadband internet connection
  -
* - **Resolution**
  - 1024 x 768 minimum display resolution
  -
@ -1,84 +0,0 @@
* -
  - s=4
  - s=3
  - s=2
  - s=1
  - s=0
* - i
  - n r
  - n r
  - n r
  - n r
  - n r
* - 0
  - 81 1
  - 27 3
  - 9 9
  - 6 27
  - 5 81
* - 1
  - 27 3
  - 9 9
  - 3 27
  - 2 81
  -
* - 2
  - 9 9
  - 3 27
  - 1 81
  -
  -
* - 3
  - 3 27
  - 1 81
  -
  -
  -
* - 4
  - 1 81
  -
  -
  -
%%%%%%
* -
  - s=4
  - s=3
  - s=2
  - s=1
  - s=0
* - i
  - n r
  - n r
  - n r
  - n r
  - n r
* - 0
  - 81 1
  - 27 3
  - 9 9
  - 6 27
  - 5 81
* - 1
  - 27 3
  - 9 9
  - 3 27
  - 2 81
  -
* - 2
  - 9 9
  - 3 27
  - 1 81
  -
  -
* - 3
  - 3 27
  - 1 81
  -
  -
  -
* - 4
  - 1 81
  -
  -
  -
  -
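The s=4 column of this Hyperband bracket table (81 1 → 27 3 → 9 9 → 3 27 → 1 81) follows plain successive halving with η=3: each round keeps the top 1/η configurations and multiplies their budget by η. A hedged sketch of that schedule — the function name and the integer-division rounding are assumptions for illustration, not NNI's implementation:

```python
def successive_halving_schedule(n, r, eta=3):
    """Rounds (n_i, r_i) inside one Hyperband bracket: start with n
    configurations at budget r, keep n//eta of them each round while
    multiplying the per-configuration budget by eta."""
    rounds = []
    while n >= 1:
        rounds.append((n, r))
        n //= eta
        r *= eta
    return rounds

print(successive_halving_schedule(81, 1))
# → [(81, 1), (27, 3), (9, 9), (3, 27), (1, 81)]
```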
@ -1,3 +0,0 @@
*Please refer to `here <https://nni.readthedocs.io/en/latest/sdk_reference.html>`__ for more APIs (e.g., ``nni.get_sequence_id()``\ ) provided by NNI.
%%%%%%
*Please refer to `here <https://nni.readthedocs.io/en/latest/sdk_reference.html>`__ for more APIs (e.g., ``nni.get_sequence_id()``\ ) provided by NNI.*
@ -1,44 +0,0 @@
#. For each filter
   .. image:: http://latex.codecogs.com/gif.latex?F_{i,j}
      :target: http://latex.codecogs.com/gif.latex?F_{i,j}
      :alt:
   , calculate the sum of its absolute kernel weights
   .. image:: http://latex.codecogs.com/gif.latex?s_j=\sum_{l=1}^{n_i}\sum|K_l|
      :target: http://latex.codecogs.com/gif.latex?s_j=\sum_{l=1}^{n_i}\sum|K_l|
      :alt:

#. Sort the filters by
   .. image:: http://latex.codecogs.com/gif.latex?s_j
      :target: http://latex.codecogs.com/gif.latex?s_j
      :alt:
   .
#. Prune
   .. image:: http://latex.codecogs.com/gif.latex?m
      :target: http://latex.codecogs.com/gif.latex?m
      :alt:
   filters with the smallest sum values and their corresponding feature maps. The
   kernels in the next convolutional layer corresponding to the pruned feature maps are also
   .. code-block:: bash

      removed.

#. A new kernel matrix is created for both the
   .. image:: http://latex.codecogs.com/gif.latex?i
      :target: http://latex.codecogs.com/gif.latex?i
      :alt:
   th and
   .. image:: http://latex.codecogs.com/gif.latex?i+1
      :target: http://latex.codecogs.com/gif.latex?i+1
      :alt:
   th layers, and the remaining kernel
   weights are copied to the new model.
%%%%%%
#. For each filter :math:`F_{i,j}`, calculate the sum of its absolute kernel weights :math:`s_j=\sum_{l=1}^{n_i}\sum|K_l|`.

#. Sort the filters by :math:`s_j`.

#. Prune :math:`m` filters with the smallest sum values and their corresponding feature maps. The
   kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.

#. A new kernel matrix is created for both the :math:`i`-th and :math:`i+1`-th layers, and the remaining kernel
   weights are copied to the new model.
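The first three steps of the recipe above (sum absolute kernel weights per filter, sort, drop the m smallest) can be sketched in plain Python. This is an illustrative reconstruction with assumed names and a flattened-kernel representation, not NNI's pruner implementation:

```python
def l1_filter_prune(filters, m):
    """Return the sorted indices of filters that survive pruning.

    filters: list of per-filter kernel weights (each a flat list of floats).
    m: number of filters to prune (those with the smallest s_j)."""
    # s_j = sum of absolute kernel weights for filter j
    sums = [sum(abs(w) for w in kernel) for kernel in filters]
    # sort filter indices by s_j and drop the m smallest
    order = sorted(range(len(filters)), key=lambda j: sums[j])
    return sorted(order[m:])

# toy example: four 3x3 kernels flattened; filters 1 and 3 carry the most weight
filters = [[0.1] * 9, [2.0] * 9, [0.5] * 9, [3.0] * 9]
print(l1_filter_prune(filters, 2))  # → [1, 3]
```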
@ -1,25 +0,0 @@
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager will use $(HOME)/.kube/config as kubeconfig file's path. You can also specify other kubeconfig files by setting the** KUBECONFIG** environment variable. Refer this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resource, you should follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure **Nvidia device plugin for Kubernetes**.
#. Prepare a **NFS server** and export a general purpose mount (we recommend to map your NFS server path in ``root_squash option``\ , otherwise permission issue may raise when NNI copies files to NFS. Refer this `page <https://linux.die.net/man/5/exports>`__ to learn what root_squash option is), or** Azure File Storage**.
#.
   Install **NFS client** on the machine where you install NNI and run nnictl to create experiment. Run this command to install NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#.
   Install **NNI**\ , follow the install guide `here <../Tutorial/QuickStart.rst>`__.
%%%%%%
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager will use $(HOME)/.kube/config as kubeconfig file's path. You can also specify other kubeconfig files by setting the **KUBECONFIG** environment variable. Refer this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resource, you should follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure **Nvidia device plugin for Kubernetes**.
#. Prepare a **NFS server** and export a general purpose mount (we recommend to map your NFS server path in ``root_squash option``\ , otherwise permission issue may raise when NNI copies files to NFS. Refer this `page <https://linux.die.net/man/5/exports>`__ to learn what root_squash option is), or **Azure File Storage**.
#. Install **NFS client** on the machine where you install NNI and run nnictl to create experiment. Run this command to install NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#. Install **NNI**\ , follow the install guide `here <../Tutorial/QuickStart>`__.
@ -1,27 +0,0 @@
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes
#. Download, set up, and deploy **Kubeflow** to your Kubernetes cluster. Follow this `guideline <https://www.kubeflow.org/docs/started/getting-started/>`__ to setup Kubeflow.
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager will use $(HOME)/.kube/config as kubeconfig file's path. You can also specify other kubeconfig files by setting the** KUBECONFIG** environment variable. Refer this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resource, you should follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure **Nvidia device plugin for Kubernetes**.
#. Prepare a **NFS server** and export a general purpose mount (we recommend to map your NFS server path in ``root_squash option``\ , otherwise permission issue may raise when NNI copy files to NFS. Refer this `page <https://linux.die.net/man/5/exports>`__ to learn what root_squash option is), or** Azure File Storage**.
#.
   Install **NFS client** on the machine where you install NNI and run nnictl to create experiment. Run this command to install NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#.
   Install **NNI**\ , follow the install guide `here <../Tutorial/QuickStart.rst>`__.
%%%%%%
#. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this `guideline <https://kubernetes.io/docs/setup/>`__ to set up Kubernetes
#. Download, set up, and deploy **Kubeflow** to your Kubernetes cluster. Follow this `guideline <https://www.kubeflow.org/docs/started/getting-started/>`__ to setup Kubeflow.
#. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager will use $(HOME)/.kube/config as kubeconfig file's path. You can also specify other kubeconfig files by setting the **KUBECONFIG** environment variable. Refer this `guideline <https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig>`__ to learn more about kubeconfig.
#. If your NNI trial job needs GPU resource, you should follow this `guideline <https://github.com/NVIDIA/k8s-device-plugin>`__ to configure **Nvidia device plugin for Kubernetes**.
#. Prepare a **NFS server** and export a general purpose mount (we recommend to map your NFS server path in ``root_squash option``\ , otherwise permission issue may raise when NNI copy files to NFS. Refer this `page <https://linux.die.net/man/5/exports>`__ to learn what root_squash option is), or **Azure File Storage**.
#. Install **NFS client** on the machine where you install NNI and run nnictl to create experiment. Run this command to install NFSv4 client:

   .. code-block:: bash

      apt-get install nfs-common

#. Install **NNI**\ , follow the install guide `here <../Tutorial/QuickStart>`__.
@ -1,67 +0,0 @@
"""
Start and Manage a New Experiment
=================================
"""

# %%
# Configure Search Space
# ----------------------

search_space = {
    "C": {"_type": "quniform", "_value": [0.1, 1, 0.1]},
    "kernel": {"_type": "choice", "_value": ["linear", "rbf", "poly", "sigmoid"]},
    "degree": {"_type": "choice", "_value": [1, 2, 3, 4]},
    "gamma": {"_type": "quniform", "_value": [0.01, 0.1, 0.01]},
    "coef0": {"_type": "quniform", "_value": [0.01, 0.1, 0.01]}
}

# %%
# Configure Experiment
# --------------------

from nni.experiment import Experiment
experiment = Experiment('local')
experiment.config.experiment_name = 'Example'
experiment.config.trial_concurrency = 2
experiment.config.max_trial_number = 10
experiment.config.search_space = search_space
experiment.config.trial_command = 'python scripts/trial_sklearn.py'
experiment.config.trial_code_directory = './'
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
experiment.config.training_service.use_active_gpu = True

# %%
# Start Experiment
# ----------------
experiment.start(8080)

# %%
# Experiment View & Control
# -------------------------
#
# View the status of experiment.
experiment.get_status()

# %%
# Wait until at least one trial finishes.
import time

for _ in range(10):
    stats = experiment.get_job_statistics()
    if any(stat['trialJobStatus'] == 'SUCCEEDED' for stat in stats):
        break
    time.sleep(10)

# %%
# Export the experiment data.
experiment.export_data()

# %%
# Get metric of jobs
experiment.get_job_metrics()

# %%
# Stop Experiment
# ---------------
experiment.stop()
@ -49,7 +49,7 @@ class TpeArguments(NamedTuple):
How each liar works is explained in paper's section 6.1.
In general "best" suit for small trial number and "worst" suit for large trial number.
(:doc:`experiment result </misc/parallelizing_tpe_search>`)
(:doc:`experiment result </sharings/parallelizing_tpe_search>`)

n_startup_jobs
    The first N hyperparameters are generated fully randomly for warming up.
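The "liar" strategies this docstring refers to come from the constant-liar trick for parallelizing sequential optimizers: while trials are still pending, pretend they returned a fixed value so the next suggestion avoids re-sampling the same region. A hedged sketch of the imputation rule (assuming minimization; NNI's actual TPE code may differ in details):

```python
def constant_liar(observed, liar='worst'):
    """Value to impute for each still-pending trial, given the metrics
    observed so far (smaller is better under the minimization assumption)."""
    if not observed:
        return 0.0  # nothing observed yet; neutral placeholder
    if liar == 'best':
        return min(observed)   # optimistic lie
    if liar == 'worst':
        return max(observed)   # pessimistic lie
    return sum(observed) / len(observed)  # 'mean'

print(constant_liar([0.3, 0.1, 0.5], 'worst'))  # → 0.5
```

Per the docstring above, "best" tends to suit small trial numbers and "worst" large ones.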