This commit is contained in:
Yuge Zhang 2022-04-01 17:31:16 +08:00 committed by GitHub
Parent 04ae3deea2
Commit 8499d63f98
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
16 changed files with 74 additions and 70 deletions

View File

@@ -180,10 +180,10 @@ Usage
# Given input tensor with size (1, 1, 28, 28) and switch to full mode
x = torch.randn(1, 1, 28, 28)
flops, params, results = count_flops_params(model, (x,) mode='full') # tuple of tensor as input
flops, params, results = count_flops_params(model, (x,), mode='full') # tuple of tensor as input
# Format output size to M (i.e., 10^6)
print(f'FLOPs: {flops/1e6:.3f}M, Params: {params/1e6:.3f}M)
print(f'FLOPs: {flops/1e6:.3f}M, Params: {params/1e6:.3f}M')
print(results)
{
'conv': {'flops': [60], 'params': [20], 'weight_size': [(5, 3, 1, 1)], 'input_size': [(1, 3, 2, 2)], 'output_size': [(1, 5, 2, 2)], 'module_type': ['Conv2d']},
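The ``results`` dictionary shown above can also be consumed programmatically. A minimal sketch (based only on the per-module structure printed above) that reports each module's FLOPs and parameter count:

.. code-block:: python

   # Sketch: iterate the per-module entries of ``results`` printed above.
   for name, info in results.items():
       flops_m = sum(info['flops']) / 1e6
       params_m = sum(info['params']) / 1e6
       print(f"{name} ({info['module_type'][0]}): {flops_m:.3f}M FLOPs, {params_m:.3f}M params")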

View File

@@ -140,11 +140,11 @@ Follow the log streaming of a certain trial:
.. code-block:: bash
nnictl log trial --trial_id=<trial_id>
nnictl log trial --trial_id=TRIAL_ID
.. code-block:: bash
nnictl log trial <experiment_id> --trial_id=<trial_id>
nnictl log trial EXPERIMENT_ID --trial_id=TRIAL_ID
Note that *after* a trial has finished and its pod has been deleted,
no logs can be retrieved via this command.
@@ -195,7 +195,7 @@ If having multiple experiments running at the same time, you may use
.. code-block:: bash
nnictl tensorboard start <experiment_id>
nnictl tensorboard start EXPERIMENT_ID
It will provide you with the web URL to access TensorBoard.

View File

@@ -24,7 +24,7 @@ Folder structure of code
NNI's folder structure is shown below:
.. code-block:: bash
.. code-block:: text
nni
|- deployment
@@ -59,7 +59,7 @@ NNI's folder structure is shown below:
Function annotation of TrainingService
--------------------------------------
.. code-block:: bash
.. code-block:: typescript
abstract class TrainingService {
public abstract listTrialJobs(): Promise<TrialJobDetail[]>;
@@ -82,7 +82,7 @@ The parent class of TrainingService has a few abstract functions, users need to
ClusterMetadata is the data related to platform details; for example, the ClusterMetadata defined in the remote machine server is:
.. code-block:: bash
.. code-block:: typescript
export class RemoteMachineMeta {
public readonly ip : string;
@@ -117,7 +117,7 @@ This function will return the metadata value according to the values, it could b
SubmitTrialJob is a function to submit new trial jobs; users should generate a job instance of the TrialJobDetail type. TrialJobDetail is defined as follows:
.. code-block:: bash
.. code-block:: typescript
interface TrialJobDetail {
readonly id: string;

View File

@@ -61,8 +61,8 @@ Here is an example:
from nni.feature_engineering.feature_selector import FeatureSelector
class CustomizedSelector(FeatureSelector):
def __init__(self, ...):
...
def __init__(self, *args, **kwargs):
...
**2. Implement fit and _get_selected_features Functions**
@@ -73,8 +73,8 @@ Here is an example:
from nni.feature_engineering.feature_selector import FeatureSelector
class CustomizedSelector(FeatureSelector):
def __init__(self, ...):
...
def __init__(self, *args, **kwargs):
...
def fit(self, X, y, **kwargs):
"""
@@ -126,16 +126,15 @@ Here is an example:
from nni.feature_engineering.feature_selector import FeatureSelector
class CustomizedSelector(FeatureSelector, BaseEstimator):
def __init__(self, ...):
...
def __init__(self, *args, **kwargs):
...
def get_params(self, ...):
def get_params(self, *args, **kwargs):
"""
Get parameters for this estimator.
"""
params = self.__dict__
params = {key: val for (key, val) in params.items()
if not key.endswith('_')}
params = {key: val for (key, val) in params.items() if not key.endswith('_')}
return params
def set_params(self, **params):
@@ -143,8 +142,8 @@ Here is an example:
Set the parameters of this estimator.
"""
for param in params:
if hasattr(self, param):
setattr(self, param, params[param])
if hasattr(self, param):
setattr(self, param, params[param])
return self
**2. Inherit the SelectorMixin Class and its Function**
@@ -157,10 +156,10 @@ Here is an example:
from nni.feature_engineering.feature_selector import FeatureSelector
class CustomizedSelector(FeatureSelector, BaseEstimator, SelectorMixin):
def __init__(self, ...):
def __init__(self, *args, **kwargs):
...
def get_params(self, ...):
def get_params(self, *args, **kwargs):
"""
Get parameters for this estimator.
"""
@@ -174,8 +173,8 @@ Here is an example:
Set the parameters of this estimator.
"""
for param in params:
if hasattr(self, param):
setattr(self, param, params[param])
if hasattr(self, param):
setattr(self, param, params[param])
return self
def get_support(self, indices=False):
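With ``BaseEstimator`` and ``SelectorMixin`` in place, the customized selector can be used like any scikit-learn transformer. A minimal usage sketch (the dataset and downstream estimator are assumptions, not part of the example above):

.. code-block:: python

   from sklearn.datasets import load_iris
   from sklearn.linear_model import LogisticRegression
   from sklearn.pipeline import Pipeline

   X, y = load_iris(return_X_y=True)

   # Feature selection runs as an ordinary pipeline step.
   pipeline = Pipeline([
       ('feature_selection', CustomizedSelector()),
       ('classification', LogisticRegression()),
   ])
   pipeline.fit(X, y)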

View File

@@ -19,7 +19,7 @@ Here is an example:
from nni.tuner import Tuner
class CustomizedTuner(Tuner):
def __init__(self, ...):
def __init__(self, *args, **kwargs):
...
**2. Implement receive_trial_result, generate_parameters and update_search_space functions**
@@ -29,7 +29,7 @@ Here is an example:
from nni.tuner import Tuner
class CustomizedTuner(Tuner):
def __init__(self, ...):
def __init__(self, *args, **kwargs):
...
def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
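To make the three functions concrete, here is a minimal random-sampling sketch (the sampling strategy and the flat ``choice``-only search space are assumptions for illustration, not the document's full example):

.. code-block:: python

   import random

   from nni.tuner import Tuner

   class CustomizedTuner(Tuner):
       def __init__(self, *args, **kwargs):
           self.search_space = {}

       def update_search_space(self, search_space):
           # Called by NNI with the search space from the experiment configuration.
           self.search_space = search_space

       def generate_parameters(self, parameter_id, **kwargs):
           # Sketch: pick a random value for each ``choice``-type parameter.
           return {name: random.choice(spec['_value'])
                   for name, spec in self.search_space.items()}

       def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
           # Sketch: a real tuner would use ``value`` to guide future sampling.
           pass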
@@ -143,7 +143,7 @@ If you want to implement a customized Assessor, there are three things to do:
from nni.assessor import Assessor
class CustomizedAssessor(Assessor):
def __init__(self, ...):
def __init__(self, *args, **kwargs):
...
**2. Implement the assess_trial function**
@@ -153,7 +153,7 @@ If you want to implement a customized Assessor, there are three things to do:
from nni.assessor import Assessor, AssessResult
class CustomizedAssessor(Assessor):
def __init__(self, ...):
def __init__(self, *args, **kwargs):
...
def assess_trial(self, trial_history):
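As an illustration, a minimal ``assess_trial`` might stop trials whose intermediate results have stopped improving. The early-stop rule below is an assumption chosen for the sketch, not a recommended strategy:

.. code-block:: python

   from nni.assessor import Assessor, AssessResult

   class CustomizedAssessor(Assessor):
       def __init__(self, *args, **kwargs):
           pass

       def assess_trial(self, trial_history):
           # Sketch: stop the trial if the latest intermediate result is no
           # better than the one reported three steps earlier.
           if len(trial_history) >= 4 and trial_history[-1] <= trial_history[-4]:
               return AssessResult.Bad
           return AssessResult.Good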

View File

@@ -28,7 +28,6 @@ You can follow below steps to build a customized tuner/assessor/advisor, and reg
Refer to the following instructions to create:
* `customized tuner <../Tuner/CustomizeTuner.rst>`_
* `customized assessor <../Assessor/CustomizeAssessor.rst>`_
* `customized advisor <../Tuner/CustomizeAdvisor.rst>`_
@@ -101,9 +100,9 @@ Run following command to register the customized algorithms as builtin algorithm
.. code-block:: bash
nnictl algo register --meta <path_to_meta_file>
nnictl algo register --meta PATH_TO_META_FILE
The ``<path_to_meta_file>`` is the path to the yaml file your created in above section.
The ``PATH_TO_META_FILE`` is the path to the yaml file you created in the above section.
Refer to the `customized tuner example <#example-register-a-customized-tuner-as-a-builtin-tuner>`_ for a full example.
@@ -128,7 +127,7 @@ List builtin algorithms
Run the following command to list the registered builtin algorithms:
.. code-block:: bash
.. code-block:: text
nnictl algo list
+-----------------+------------+-----------+----------------------+------------------------------------------+
@@ -213,7 +212,7 @@ Check the registered builtin algorithms
Then run the command ``nnictl algo list``\ , and you should be able to see that demotuner is installed:
.. code-block:: bash
.. code-block:: text
+-----------------+------------+-----------+----------------------+------------------------------------------+
| Name | Type | source | Class Name | Module Name |

View File

@@ -29,7 +29,7 @@ Following code snippet demonstrates a naive HPO process:
best_accuracy = accuracy
best_hyperparameters = (learning_rate, momentum, activation_type)
print('Best hyperparameters:', best_hyperparameters)
print('Best hyperparameters:', best_hyperparameters)
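For context, the snippet above is the tail of a nested grid search. A self-contained sketch of the whole naive loop (the concrete value grids and the ``train_and_evaluate`` helper are assumptions, sized only to match the 4×10×3 combinations discussed below):

.. code-block:: python

   # Sketch of the naive grid search; the grids are illustrative assumptions.
   best_accuracy = 0.0
   best_hyperparameters = None
   for learning_rate in [0.1, 0.01, 0.001, 0.0001]:              # 4 values
       for momentum in [i / 10 for i in range(10)]:              # 10 values
           for activation_type in ['relu', 'tanh', 'sigmoid']:   # 3 values
               # ``train_and_evaluate`` is an assumed helper returning accuracy.
               accuracy = train_and_evaluate(learning_rate, momentum, activation_type)
               if accuracy > best_accuracy:
                   best_accuracy = accuracy
                   best_hyperparameters = (learning_rate, momentum, activation_type)
   print('Best hyperparameters:', best_hyperparameters)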
You may have noticed that the example will train 4×10×3=120 models in total.
Since this consumes a lot of computing resources, you may want to:

View File

@@ -239,7 +239,7 @@ To run the tutorial, follow the steps below:
2. **Search**: Based on the architecture of the simplified PFLD, the multi-stage search space and the hyper-parameters for searching should first be configured to construct the supernet. For example,
.. code-block:: bash
.. code-block::
from lib.builder import search_space
from lib.ops import PRIMITIVES
@@ -249,13 +249,13 @@ To run the tutorial, follow the steps below:
# configuration of hyper-parameters
# search_space defines the multi-stage search space
nas_config = NASConfig(
model_dir="./ckpt_save",
nas_lr=0.01,
mode="mul",
alpha=0.25,
beta=0.6,
search_space=search_space,
)
model_dir="./ckpt_save",
nas_lr=0.01,
mode="mul",
alpha=0.25,
beta=0.6,
search_space=search_space,
)
# lookup table to manage the information
lookup_table = LookUpTable(config=nas_config, primitives=PRIMITIVES)
# created supernet

View File

@@ -63,7 +63,7 @@ Then run one-shot ProxylessNAS demo:
.. code-block:: bash
python ${NNI_ROOT}/examples/nas/oneshot/proxylessnas/main.py --applied_hardware <hardware> --reference_latency <reference latency (ms)>
python ${NNI_ROOT}/examples/nas/oneshot/proxylessnas/main.py --applied_hardware HARDWARE --reference_latency REFERENCE_LATENCY_MS
How the demo works
^^^^^^^^^^^^^^^^^^

View File

@@ -1506,7 +1506,7 @@ NNICTL new features and updates
Before v0.3, NNI only supported running a single experiment at a time. After this release, users are able to run multiple experiments simultaneously. Each experiment requires a unique port; the first experiment will use the default port as in previous versions. You can specify a unique port for the remaining experiments as below:
.. code-block:: bash
.. code-block:: text
nnictl create --port 8081 --config <config file path>

View File

@@ -20,9 +20,9 @@ Hyperparameter Optimization algorithms are listed below:
All algorithms run in the NNI local environment.
Machine Environment
Machine Environment:
.. code-block:: bash
.. code-block:: text
OS: Linux Ubuntu 16.04 LTS
CPU: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz 2600 MHz
@@ -215,7 +215,7 @@ The performance of ``DB_Bench`` is associated with the machine configuration and
Machine configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
.. code-block:: text
RocksDB: version 6.1
CPU: 6 * Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz

View File

@@ -37,9 +37,9 @@ PyTorch code
The complete code for fine-tuning the pruned model can be found :githublink:`here <examples/model_compress/pruning/finetune_kd_torch.py>`
.. code-block:: python
.. code-block:: bash
python finetune_kd_torch.py --model [model name] --teacher-model-dir [pretrained checkpoint path] --student-model-dir [pruned checkpoint path] --mask-path [mask file path]
python finetune_kd_torch.py --model [model name] --teacher-model-dir [pretrained checkpoint path] --student-model-dir [pruned checkpoint path] --mask-path [mask file path]
Note that for fine-tuning a pruned model, you should run :githublink:`basic_pruners_torch.py <examples/model_compress/pruning/basic_pruners_torch.py>` first to get the mask file, then pass the mask path as an argument to the script.

View File

@@ -6,40 +6,42 @@ NNI can easily run on Google Colab platform. However, Colab doesn't expose its p
How to Open NNI's Web UI on Google Colab
----------------------------------------
#. Install the required packages and software.
.. code-block:: bash
.. code-block:: bash
! pip install nni # install nni
! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip # download ngrok and unzip it
! unzip ngrok-stable-linux-amd64.zip
! mkdir -p nni_repo
! git clone https://github.com/microsoft/nni.git nni_repo/nni # clone NNI's offical repo to get examples
! pip install nni # install nni
! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip # download ngrok and unzip it
! unzip ngrok-stable-linux-amd64.zip
! mkdir -p nni_repo
! git clone https://github.com/microsoft/nni.git nni_repo/nni # clone NNI's official repo to get examples
#. Register a ngrok account `here <https://ngrok.com/>`__\ , then connect to your account using your authtoken.
#. Register a ngrok account `here <https://ngrok.com/>`__, then connect to your account using your authtoken.
.. code-block:: bash
.. code-block:: bash
! ./ngrok authtoken <your-authtoken>
! ./ngrok authtoken YOUR_AUTH_TOKEN
#. Start an NNI example on a port greater than 1024, then start ngrok with the same port. If you want to use a GPU, make sure ``gpuNum >= 1`` in config.yml. Use ``get_ipython()`` to start ngrok, since the cell will get stuck if you use ``! ngrok http 5000 &``.
.. code-block:: bash
.. code-block:: bash
! nnictl create --config nni_repo/nni/examples/trials/mnist-pytorch/config.yml --port 5000 &
get_ipython().system_raw('./ngrok http 5000 &')
! nnictl create --config nni_repo/nni/examples/trials/mnist-pytorch/config.yml --port 5000 &
.. code-block:: python
get_ipython().system_raw('./ngrok http 5000 &')
#. Check the public URL.
.. code-block:: bash
.. code-block:: bash
! curl -s http://localhost:4040/api/tunnels # don't change the port number 4040
! curl -s http://localhost:4040/api/tunnels # don't change the port number 4040
You will see an url like http://xxxx.ngrok.io after step 4, open this url and you will find NNI's Web UI. Have fun :)
You will see a URL like http://xxxx.ngrok.io after step 4; open this URL and you will find NNI's Web UI. Have fun :)
Access Web UI with frp
----------------------

View File

@@ -118,7 +118,7 @@ Citing OpEvo
If you feel OpEvo is helpful, please consider citing the paper as follows:
.. code-block:: bash
.. code-block:: bib
@misc{gao2020opevo,
title={OpEvo: An Evolutionary Method for Tensor Operator Optimization},

View File

@@ -146,9 +146,9 @@ Among those files, ``trial.py`` and ``graph_to_tf.py`` are special.
if topo_i == '|':
continue
if graph.layers[topo_i].graph_type == LayerType.input.value:
# ......
...
elif graph.layers[topo_i].graph_type == LayerType.attention.value:
# ......
...
# More layers to handle
As we can see, this function is actually a compiler that converts the internal model DAG configuration ``graph`` (which will be introduced in the ``Model configuration format`` section) to a TensorFlow computation graph.
@@ -162,6 +162,7 @@ performs topological sorting on the internal graph representation, and the code below
.. code-block:: python
for _, topo_i in enumerate(topology):
...
performs the actual conversion that maps each layer to a part of the TensorFlow computation graph.
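To make the dispatch pattern concrete, the following schematic sketch shows how such a conversion loop could look; the ``outputs`` dict and the ``build_*`` helpers are assumptions for illustration, not the actual ``graph_to_tf.py`` code:

.. code-block:: python

   # Schematic sketch: visit layers in topological order and build the
   # TensorFlow sub-graph that corresponds to each layer type.
   outputs = {}
   for _, topo_i in enumerate(topology):
       if topo_i == '|':
           continue
       layer = graph.layers[topo_i]
       if layer.graph_type == LayerType.input.value:
           outputs[topo_i] = build_input(layer)                # assumed helper
       elif layer.graph_type == LayerType.attention.value:
           outputs[topo_i] = build_attention(layer, outputs)   # assumed helper
       # ... further layer types are handled in the same way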

View File

@@ -20,8 +20,11 @@ stages:
- script: |
cd docs
# rstcheck -r source
displayName: rstcheck (disabled for now) # TODO: rstcheck
rstcheck -r source \
--ignore-directives automodule,autoclass,autofunction,cardlinkitem,codesnippetcard,argparse \
--ignore-roles githublink --ignore-substitutions release \
--report warning
displayName: rstcheck
- script: |
cd docs