i18n toolchain based on sphinx-intl (#4759)

This commit is contained in:
Yuge Zhang 2022-04-20 10:35:02 +08:00 committed by GitHub
Parent f5b89bb655
Commit fb0c273483
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
15 changed files: 1830 additions and 8 deletions

dependencies/develop.txt (vendored)

@@ -15,6 +15,7 @@ sphinx >= 4.5
sphinx-argparse-nni >= 0.4.0
sphinx-copybutton
sphinx-gallery
sphinx-intl
sphinx-tabs
sphinxcontrib-bibtex
git+https://github.com/bashtage/sphinx-material@6e0ef82#egg=sphinx_material

docs/.gitignore (vendored)

@@ -8,3 +8,6 @@ _build/
# auto-generated reference table
_modules/

# Compiled (machine object) translation files
*.mo


@@ -11,6 +11,11 @@ BUILDDIR = build
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

# Build message catalogs for translation
i18n:
	@$(SPHINXBUILD) -M getpartialtext "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
	sphinx-intl update -p "$(BUILDDIR)/getpartialtext" -d "$(SOURCEDIR)/locales" -l zh

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new


@@ -0,0 +1,30 @@
"""
Basically the same as the
`sphinx gettext builder <https://www.sphinx-doc.org/en/master/_modules/sphinx/builders/gettext.html>`_,
but only extracts texts from files in a whitelist.
"""
import re

from docutils import nodes
from sphinx.application import Sphinx
from sphinx.builders.gettext import MessageCatalogBuilder


class PartialMessageCatalogBuilder(MessageCatalogBuilder):
    name = 'getpartialtext'

    def init(self):
        super().init()
        self.whitelist_docs = [re.compile(x) for x in self.config.gettext_documents]

    def write_doc(self, docname: str, doctree: nodes.document) -> None:
        for doc_re in self.whitelist_docs:
            if doc_re.match(docname):
                return super().write_doc(docname, doctree)


def setup(app: Sphinx):
    app.add_builder(PartialMessageCatalogBuilder)
    app.add_config_value('gettext_documents', [], 'gettext')


@@ -61,6 +61,7 @@ extensions = [
    # Custom extensions in extension/ folder.
    'tutorial_links',  # this has to be after sphinx-gallery
    'getpartialtext',
    'inplace_translation',
    'cardlinkitem',
    'codesnippetcard',
@@ -186,6 +187,18 @@ master_doc = 'index'
# Usually you set "language" from the command line for these cases.
language = None

# Translation related settings
locale_dirs = ['locales']

# Documents that require translation: https://github.com/microsoft/nni/issues/4298
gettext_documents = [
    r'^index$',
    r'^quickstart$',
    r'^installation$',
    r'^(nas|hpo|compression)/overview$',
    r'^tutorials/(hello_nas|pruning_quick_start_mnist|hpo_quickstart_pytorch/main)$',
]
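These patterns are checked against docnames with `re.match` by the `getpartialtext` builder above. A small standalone sketch of which documents the whitelist selects (the sample docnames below are illustrative, not an exhaustive list of NNI documents):

```python
import re

# The whitelist from conf.py above; only matching docnames are extracted.
gettext_documents = [
    r'^index$',
    r'^quickstart$',
    r'^installation$',
    r'^(nas|hpo|compression)/overview$',
    r'^tutorials/(hello_nas|pruning_quick_start_mnist|hpo_quickstart_pytorch/main)$',
]

# Illustrative docnames only.
docnames = ['index', 'nas/overview', 'nas/tuners', 'tutorials/hello_nas', 'misc/faq']
selected = [d for d in docnames
            if any(re.match(p, d) for p in gettext_documents)]
print(selected)  # ['index', 'nas/overview', 'tutorials/hello_nas']
```

Because the patterns are anchored with `^...$`, a docname like `nas/tuners` is not matched by `^index$` or `^(nas|hpo|compression)/overview$`, so it is skipped by the builder.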
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.


@@ -0,0 +1,144 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/compression/overview.rst:2
msgid "Overview of NNI Model Compression"
msgstr ""
#: ../../source/compression/overview.rst:4
msgid ""
"Deep neural networks (DNNs) have achieved great success in many tasks "
"like computer vision, nature launguage processing, speech processing. "
"However, typical neural networks are both computationally expensive and "
"energy-intensive, which can be difficult to be deployed on devices with "
"low computation resources or with strict latency requirements. Therefore,"
" a natural thought is to perform model compression to reduce model size "
"and accelerate model training/inference without losing performance "
"significantly. Model compression techniques can be divided into two "
"categories: pruning and quantization. The pruning methods explore the "
"redundancy in the model weights and try to remove/prune the redundant and"
" uncritical weights. Quantization refers to compress models by reducing "
"the number of bits required to represent weights or activations. We "
"further elaborate on the two methods, pruning and quantization, in the "
"following chapters. Besides, the figure below visualizes the difference "
"between these two methods."
msgstr ""
#: ../../source/compression/overview.rst:19
msgid ""
"NNI provides an easy-to-use toolkit to help users design and use model "
"pruning and quantization algorithms. For users to compress their models, "
"they only need to add several lines in their code. There are some popular"
" model compression algorithms built-in in NNI. On the other hand, users "
"could easily customize their new compression algorithms using NNIs "
"interface."
msgstr ""
#: ../../source/compression/overview.rst:24
msgid "There are several core features supported by NNI model compression:"
msgstr ""
#: ../../source/compression/overview.rst:26
msgid "Support many popular pruning and quantization algorithms."
msgstr ""
#: ../../source/compression/overview.rst:27
msgid ""
"Automate model pruning and quantization process with state-of-the-art "
"strategies and NNI's auto tuning power."
msgstr ""
#: ../../source/compression/overview.rst:28
msgid ""
"Speedup a compressed model to make it have lower inference latency and "
"also make it smaller."
msgstr ""
#: ../../source/compression/overview.rst:29
msgid ""
"Provide friendly and easy-to-use compression utilities for users to dive "
"into the compression process and results."
msgstr ""
#: ../../source/compression/overview.rst:30
msgid "Concise interface for users to customize their own compression algorithms."
msgstr ""
#: ../../source/compression/overview.rst:34
msgid "Compression Pipeline"
msgstr ""
#: ../../source/compression/overview.rst:42
msgid ""
"The overall compression pipeline in NNI is shown above. For compressing a"
" pretrained model, pruning and quantization can be used alone or in "
"combination. If users want to apply both, a sequential mode is "
"recommended as common practise."
msgstr ""
#: ../../source/compression/overview.rst:46
msgid ""
"Note that NNI pruners or quantizers are not meant to physically compact "
"the model but for simulating the compression effect. Whereas NNI speedup "
"tool can truly compress model by changing the network architecture and "
"therefore reduce latency. To obtain a truly compact model, users should "
"conduct :doc:`pruning speedup <../tutorials/pruning_speedup>` or "
":doc:`quantization speedup <../tutorials/quantization_speedup>`. The "
"interface and APIs are unified for both PyTorch and TensorFlow. Currently "
"only the PyTorch version is supported; the TensorFlow version will be "
"supported in the future."
msgstr ""
#: ../../source/compression/overview.rst:52
msgid "Model Speedup"
msgstr ""
#: ../../source/compression/overview.rst:54
msgid ""
"The final goal of model compression is to reduce inference latency and "
"model size. However, existing model compression algorithms mainly use "
"simulation to check the performance (e.g., accuracy) of compressed model."
" For example, using masks for pruning algorithms, and storing quantized "
"values still in float32 for quantization algorithms. Given the output "
"masks and quantization bits produced by those algorithms, NNI can really "
"speedup the model."
msgstr ""
#: ../../source/compression/overview.rst:59
msgid "The following figure shows how NNI prunes and speeds up your models."
msgstr ""
#: ../../source/compression/overview.rst:67
msgid ""
"The detailed tutorial of Speedup Model with Mask can be found :doc:`here "
"<../tutorials/pruning_speedup>`. The detailed tutorial of Speedup Model "
"with Calibration Config can be found :doc:`here "
"<../tutorials/quantization_speedup>`."
msgstr ""
#: ../../source/compression/overview.rst:72
msgid ""
"NNI's model pruning framework has been upgraded to a more powerful "
"version (named pruning v2 before nni v2.6). The old version (`named "
"pruning before nni v2.6 "
"<https://nni.readthedocs.io/en/v2.6/Compression/pruning.html>`_) will be "
"out of maintenance. If for some reason you have to use the old pruning, "
"v2.6 is the last nni version to support old pruning version."
msgstr ""


@@ -0,0 +1,207 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/hpo/overview.rst:2
msgid "Hyperparameter Optimization Overview"
msgstr ""
#: ../../source/hpo/overview.rst:4
msgid ""
"Auto hyperparameter optimization (HPO), or auto tuning, is one of the key"
" features of NNI."
msgstr ""
#: ../../source/hpo/overview.rst:7
msgid "Introduction to HPO"
msgstr ""
#: ../../source/hpo/overview.rst:9
msgid ""
"In machine learning, a hyperparameter is a parameter whose value is used "
"to control learning process, and HPO is the problem of choosing a set of "
"optimal hyperparameters for a learning algorithm. (`From "
"<https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)>`__ "
"`Wikipedia "
"<https://en.wikipedia.org/wiki/Hyperparameter_optimization>`__)"
msgstr ""
#: ../../source/hpo/overview.rst:14
msgid "Following code snippet demonstrates a naive HPO process:"
msgstr ""
#: ../../source/hpo/overview.rst:34
msgid ""
"You may have noticed, the example will train 4×10×3=120 models in total. "
"Since it consumes so much computing resources, you may want to:"
msgstr ""
#: ../../source/hpo/overview.rst:37
msgid ""
":ref:`Find the best hyperparameter set with less iterations. <hpo-"
"overview-tuners>`"
msgstr ""
#: ../../source/hpo/overview.rst:38
msgid ":ref:`Train the models on distributed platforms. <hpo-overview-platforms>`"
msgstr ""
#: ../../source/hpo/overview.rst:39
msgid ""
":ref:`Have a portal to monitor and control the process. <hpo-overview-"
"portal>`"
msgstr ""
#: ../../source/hpo/overview.rst:41
msgid "NNI will do them for you."
msgstr ""
#: ../../source/hpo/overview.rst:44
msgid "Key Features of NNI HPO"
msgstr ""
#: ../../source/hpo/overview.rst:49
msgid "Tuning Algorithms"
msgstr ""
#: ../../source/hpo/overview.rst:51
msgid ""
"NNI provides *tuners* to speed up the process of finding best "
"hyperparameter set."
msgstr ""
#: ../../source/hpo/overview.rst:53
msgid ""
"A tuner, or a tuning algorithm, decides the order in which hyperparameter"
" sets are evaluated. Based on the results of historical hyperparameter "
"sets, an efficient tuner can predict where the best hyperparameters "
"locates around, and finds them in much fewer attempts."
msgstr ""
#: ../../source/hpo/overview.rst:57
msgid ""
"The naive example above evaluates all possible hyperparameter sets in "
"constant order, ignoring the historical results. This is the brute-force "
"tuning algorithm called *grid search*."
msgstr ""
#: ../../source/hpo/overview.rst:60
msgid ""
"NNI has out-of-the-box support for a variety of popular tuners. It "
"includes naive algorithms like random search and grid search, Bayesian-"
"based algorithms like TPE and SMAC, RL based algorithms like PPO, and "
"much more."
msgstr ""
#: ../../source/hpo/overview.rst:64
msgid "Main article: :doc:`tuners`"
msgstr ""
#: ../../source/hpo/overview.rst:69
msgid "Training Platforms"
msgstr ""
#: ../../source/hpo/overview.rst:71
msgid ""
"If you are not interested in distributed platforms, you can simply run "
"NNI HPO with current computer, just like any ordinary Python library."
msgstr ""
#: ../../source/hpo/overview.rst:74
msgid ""
"And when you want to leverage more computing resources, NNI provides "
"built-in integration for training platforms from simple on-premise "
"servers to scalable commercial clouds."
msgstr ""
#: ../../source/hpo/overview.rst:77
msgid ""
"With NNI you can write one piece of model code, and concurrently evaluate"
" hyperparameter sets on local machine, SSH servers, Kubernetes-based "
"clusters, AzureML service, and much more."
msgstr ""
#: ../../source/hpo/overview.rst:80
msgid "Main article: :doc:`/experiment/training_service/overview`"
msgstr ""
#: ../../source/hpo/overview.rst:85
msgid "Web Portal"
msgstr ""
#: ../../source/hpo/overview.rst:87
msgid ""
"NNI provides a web portal to monitor training progress, to visualize "
"hyperparameter performance, to manually customize hyperparameters, and to"
" manage multiple HPO experiments."
msgstr ""
#: ../../source/hpo/overview.rst:90
msgid "Main article: :doc:`/experiment/web_portal/web_portal`"
msgstr ""
#: ../../source/hpo/overview.rst:96
msgid "Tutorials"
msgstr ""
#: ../../source/hpo/overview.rst:98
msgid ""
"To start using NNI HPO, choose the quickstart tutorial of your favorite "
"framework:"
msgstr ""
#: ../../source/hpo/overview.rst:100
msgid ":doc:`PyTorch tutorial </tutorials/hpo_quickstart_pytorch/main>`"
msgstr ""
#: ../../source/hpo/overview.rst:101
msgid ":doc:`TensorFlow tutorial </tutorials/hpo_quickstart_tensorflow/main>`"
msgstr ""
#: ../../source/hpo/overview.rst:104
msgid "Extra Features"
msgstr ""
#: ../../source/hpo/overview.rst:106
msgid ""
"After you are familiar with basic usage, you can explore more HPO "
"features:"
msgstr ""
#: ../../source/hpo/overview.rst:108
msgid ""
":doc:`Use command line tool to create and manage experiments (nnictl) "
"</reference/nnictl>`"
msgstr ""
#: ../../source/hpo/overview.rst:109
msgid ":doc:`Early stop non-optimal models (assessor) <assessors>`"
msgstr ""
#: ../../source/hpo/overview.rst:110
msgid ":doc:`TensorBoard integration </experiment/web_portal/tensorboard>`"
msgstr ""
#: ../../source/hpo/overview.rst:111
msgid ":doc:`Implement your own algorithm <custom_algorithm>`"
msgstr ""
#: ../../source/hpo/overview.rst:112
msgid ":doc:`Benchmark tuners <hpo_benchmark>`"
msgstr ""


@@ -0,0 +1,206 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-12 17:35+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/index.rst:4 ../../source/index.rst:52
msgid "Get Started"
msgstr ""
#: ../../source/index.rst:12
msgid "Hyperparameter Optimization"
msgstr ""
#: ../../source/index.rst:12
msgid "Model Compression"
msgstr ""
#: ../../source/index.rst:12
msgid "User Guide"
msgstr ""
#: ../../source/index.rst:23
msgid "Python API"
msgstr ""
#: ../../source/index.rst:23
msgid "References"
msgstr ""
#: ../../source/index.rst:32
msgid "Misc"
msgstr ""
#: ../../source/index.rst:2
msgid "NNI Documentation"
msgstr ""
#: ../../source/index.rst:44
msgid ""
"**NNI (Neural Network Intelligence)** is a lightweight but powerful "
"toolkit to help users **automate**:"
msgstr ""
#: ../../source/index.rst:46
msgid ":doc:`Hyperparameter Optimization </hpo/overview>`"
msgstr ""
#: ../../source/index.rst:47
msgid ":doc:`Neural Architecture Search </nas/overview>`"
msgstr ""
#: ../../source/index.rst:48
msgid ":doc:`Model Compression </compression/overview>`"
msgstr ""
#: ../../source/index.rst:49
msgid ":doc:`Feature Engineering </feature_engineering/overview>`"
msgstr ""
#: ../../source/index.rst:54
msgid "To install the current release:"
msgstr ""
#: ../../source/index.rst:60
msgid ""
"See the :doc:`installation guide </installation>` if you need additional "
"help on installation."
msgstr ""
#: ../../source/index.rst:63
msgid "Try your first NNI experiment"
msgstr ""
#: ../../source/index.rst:65
msgid "To run your first NNI experiment:"
msgstr ""
#: ../../source/index.rst:71
msgid ""
"you need to have `PyTorch <https://pytorch.org/>`_ (as well as "
"`torchvision <https://pytorch.org/vision/stable/index.html>`_) installed "
"to run this experiment."
msgstr ""
#: ../../source/index.rst:73
msgid ""
"To start your journey now, please follow the :doc:`absolute quickstart of"
" NNI <quickstart>`!"
msgstr ""
#: ../../source/index.rst:76
msgid "Why choose NNI?"
msgstr ""
#: ../../source/index.rst:79
msgid "NNI makes AutoML techniques plug-and-play"
msgstr ""
#: ../../source/index.rst:223
msgid "NNI eases the effort to scale and manage AutoML experiments"
msgstr ""
#: ../../source/index.rst:231
msgid ""
"An AutoML experiment requires many trials to explore feasible and "
"potentially good-performing models. **Training service** aims to make the"
" tuning process easily scalable in a distributed platforms. It provides a"
" unified user experience for diverse computation resources (e.g., local "
"machine, remote servers, AKS). Currently, NNI supports **more than 9** "
"kinds of training services."
msgstr ""
#: ../../source/index.rst:242
msgid ""
"Web portal visualizes the tuning process, exposing the ability to "
"inspect, monitor and control the experiment."
msgstr ""
#: ../../source/index.rst:253
msgid ""
"The DNN model tuning often requires more than one experiment. Users might"
" try different tuning algorithms, fine-tune their search space, or switch"
" to another training service. **Experiment management** provides the "
"power to aggregate and compare tuning results from multiple experiments, "
"so that the tuning workflow becomes clean and organized."
msgstr ""
#: ../../source/index.rst:259
msgid "Get Support and Contribute Back"
msgstr ""
#: ../../source/index.rst:261
msgid ""
"NNI is maintained on the `NNI GitHub repository "
"<https://github.com/microsoft/nni>`_. We collect feedback and new "
"proposals/ideas on GitHub. You can:"
msgstr ""
#: ../../source/index.rst:263
msgid ""
"Open a `GitHub issue <https://github.com/microsoft/nni/issues>`_ for bugs"
" and feature requests."
msgstr ""
#: ../../source/index.rst:264
msgid ""
"Open a `pull request <https://github.com/microsoft/nni/pulls>`_ to "
"contribute code (make sure to read the `contribution guide "
"</contribution>` before doing this)."
msgstr ""
#: ../../source/index.rst:265
msgid ""
"Participate in `NNI Discussion "
"<https://github.com/microsoft/nni/discussions>`_ for general questions "
"and new ideas."
msgstr ""
#: ../../source/index.rst:266
msgid "Join the following IM groups."
msgstr ""
#: ../../source/index.rst:272
msgid "Gitter"
msgstr ""
#: ../../source/index.rst:273
msgid "WeChat"
msgstr ""
#: ../../source/index.rst:280
msgid "Citing NNI"
msgstr ""
#: ../../source/index.rst:282
msgid ""
"If you use NNI in a scientific publication, please consider citing NNI in"
" your references."
msgstr ""
#: ../../source/index.rst:284
msgid ""
"Microsoft. Neural Network Intelligence (version |release|). "
"https://github.com/microsoft/nni"
msgstr ""
#: ../../source/index.rst:286
msgid ""
"Bibtex entry (please replace the version with the particular version you "
"are using): ::"
msgstr ""


@@ -0,0 +1,130 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/installation.rst:2
msgid "Install NNI"
msgstr ""
#: ../../source/installation.rst:4
msgid ""
"NNI requires Python >= 3.7. It is tested and supported on Ubuntu >= "
"18.04, Windows 10 >= 21H2, and macOS >= 11."
msgstr ""
#: ../../source/installation.rst:8
msgid "There are 3 ways to install NNI:"
msgstr ""
#: ../../source/installation.rst:10
msgid ":ref:`Using pip <installation-pip>`"
msgstr ""
#: ../../source/installation.rst:11
msgid ":ref:`Build source code <installation-source>`"
msgstr ""
#: ../../source/installation.rst:12
msgid ":ref:`Using Docker <installation-docker>`"
msgstr ""
#: ../../source/installation.rst:17
msgid "Using pip"
msgstr ""
#: ../../source/installation.rst:19
msgid ""
"NNI provides official packages for x86-64 CPUs. They can be installed "
"with pip:"
msgstr ""
#: ../../source/installation.rst:25
msgid "Or to upgrade to latest version:"
msgstr ""
#: ../../source/installation.rst:31
msgid "You can check installation with:"
msgstr ""
#: ../../source/installation.rst:37
msgid ""
"On Linux systems without Conda, you may encounter ``bash: nnictl: command"
" not found`` error. In this case you need to add pip script directory to "
"``PATH``:"
msgstr ""
#: ../../source/installation.rst:48
msgid "Installing from Source Code"
msgstr ""
#: ../../source/installation.rst:50
msgid "NNI hosts source code on `GitHub <https://github.com/microsoft/nni>`__."
msgstr ""
#: ../../source/installation.rst:52
msgid ""
"NNI has experimental support for ARM64 CPUs, including Apple M1. It "
"requires to install from source code."
msgstr ""
#: ../../source/installation.rst:55
msgid "See :doc:`/notes/build_from_source`."
msgstr ""
#: ../../source/installation.rst:60
msgid "Using Docker"
msgstr ""
#: ../../source/installation.rst:62
msgid ""
"NNI provides official Docker image on `Docker Hub "
"<https://hub.docker.com/r/msranni/nni>`__."
msgstr ""
#: ../../source/installation.rst:69
msgid "Installing Extra Dependencies"
msgstr ""
#: ../../source/installation.rst:71
msgid ""
"Some built-in algorithms of NNI requires extra packages. Use ``nni"
"[<algorithm-name>]`` to install their dependencies."
msgstr ""
#: ../../source/installation.rst:74
msgid ""
"For example, to install dependencies of :class:`DNGO "
"tuner<nni.algorithms.hpo.dngo_tuner.DNGOTuner>` :"
msgstr ""
#: ../../source/installation.rst:80
msgid ""
"This command will not reinstall NNI itself, even if it was installed in "
"development mode."
msgstr ""
#: ../../source/installation.rst:82
msgid "Alternatively, you may install all extra dependencies at once:"
msgstr ""
#: ../../source/installation.rst:88
msgid ""
"**NOTE**: SMAC tuner depends on swig3, which requires a manual downgrade "
"on Ubuntu:"
msgstr ""


@@ -0,0 +1,270 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:17+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/nas/overview.rst:2
msgid "Overview"
msgstr ""
#: ../../source/nas/overview.rst:4
msgid ""
"NNI's latest NAS supports are all based on Retiarii Framework, users who "
"are still on `early version using NNI NAS v1.0 "
"<https://nni.readthedocs.io/en/v2.2/nas.html>`__ shall migrate your work "
"to Retiarii as soon as possible. We plan to remove the legacy NAS "
"framework in the next few releases."
msgstr ""
#: ../../source/nas/overview.rst:6
msgid ""
"PyTorch is the **only supported framework on Retiarii**. Inquiries of NAS"
" support on Tensorflow is in `this discussion "
"<https://github.com/microsoft/nni/discussions/4605>`__. If you intend to "
"run NAS with DL frameworks other than PyTorch and Tensorflow, please "
"`open new issues <https://github.com/microsoft/nni/issues>`__ to let us "
"know."
msgstr ""
#: ../../source/nas/overview.rst:9
msgid "Basics"
msgstr ""
#: ../../source/nas/overview.rst:11
msgid ""
"Automatic neural architecture search is playing an increasingly important"
" role in finding better models. Recent research has proven the "
"feasibility of automatic NAS and has led to models that beat many "
"manually designed and tuned models. Representative works include `NASNet "
"<https://arxiv.org/abs/1707.07012>`__, `ENAS "
"<https://arxiv.org/abs/1802.03268>`__, `DARTS "
"<https://arxiv.org/abs/1806.09055>`__, `Network Morphism "
"<https://arxiv.org/abs/1806.10282>`__, and `Evolution "
"<https://arxiv.org/abs/1703.01041>`__. In addition, new innovations "
"continue to emerge."
msgstr ""
#: ../../source/nas/overview.rst:13
msgid ""
"High-level speaking, aiming to solve any particular task with neural "
"architecture search typically requires: search space design, search "
"strategy selection, and performance evaluation. The three components work"
" together with the following loop (from the famous `NAS survey "
"<https://arxiv.org/abs/1808.05377>`__):"
msgstr ""
#: ../../source/nas/overview.rst:19
msgid "In this figure:"
msgstr ""
#: ../../source/nas/overview.rst:21
msgid ""
"*Model search space* means a set of models from which the best model is "
"explored/searched. Sometimes we use *search space* or *model space* in "
"short."
msgstr ""
#: ../../source/nas/overview.rst:22
msgid ""
"*Exploration strategy* is the algorithm that is used to explore a model "
"search space. Sometimes we also call it *search strategy*."
msgstr ""
#: ../../source/nas/overview.rst:23
msgid ""
"*Model evaluator* is responsible for training a model and evaluating its "
"performance."
msgstr ""
#: ../../source/nas/overview.rst:25
msgid ""
"The process is similar to :doc:`Hyperparameter Optimization "
"</hpo/index>`, except that the target is the best architecture rather "
"than hyperparameter. Concretely, an exploration strategy selects an "
"architecture from a predefined search space. The architecture is passed "
"to a performance evaluation to get a score, which represents how well "
"this architecture performs on a particular task. This process is repeated"
" until the search process is able to find the best architecture."
msgstr ""
#: ../../source/nas/overview.rst:28
msgid "Key Features"
msgstr ""
#: ../../source/nas/overview.rst:30
msgid ""
"The current NAS framework in NNI is powered by the research of `Retiarii:"
" A Deep Learning Exploratory-Training Framework "
"<https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, where "
"we highlight the following features:"
msgstr ""
#: ../../source/nas/overview.rst:32
msgid ":doc:`Simple APIs to construct search space easily <construct_space>`"
msgstr ""
#: ../../source/nas/overview.rst:33
msgid ":doc:`SOTA NAS algorithms to explore search space <exploration_strategy>`"
msgstr ""
#: ../../source/nas/overview.rst:34
msgid ""
":doc:`Experiment backend support to scale up experiments on large-scale "
"AI platforms </experiment/overview>`"
msgstr ""
#: ../../source/nas/overview.rst:37
msgid "Why NAS with NNI"
msgstr ""
#: ../../source/nas/overview.rst:39
msgid ""
"We list out the three perspectives where NAS can be particularly "
"challegning without NNI. NNI provides solutions to relieve users' "
"engineering effort when they want to try NAS techniques in their own "
"scenario."
msgstr ""
#: ../../source/nas/overview.rst:42
msgid "Search Space Design"
msgstr ""
#: ../../source/nas/overview.rst:44
msgid ""
"The search space defines which architectures can be represented in "
"principle. Incorporating prior knowledge about typical properties of "
"architectures well-suited for a task can reduce the size of the search "
"space and simplify the search. However, this also introduces a human "
"bias, which may prevent finding novel architectural building blocks that "
"go beyond the current human knowledge. Search space design can be very "
"challenging for beginners, who might not possess the experience to "
"balance the richness and simplicity."
msgstr ""
#: ../../source/nas/overview.rst:46
msgid ""
"In NNI, we provide a wide range of APIs to build the search space. There "
"are :doc:`high-level APIs <construct_space>`, that enables incorporating "
"human knowledge about what makes a good architecture or search space. "
"There are also :doc:`low-level APIs <mutator>`, that is a list of "
"primitives to construct a network from operator to operator."
msgstr ""
#: ../../source/nas/overview.rst:49
msgid "Exploration strategy"
msgstr ""
#: ../../source/nas/overview.rst:51
msgid ""
"The exploration strategy details how to explore the search space (which "
"is often exponentially large). It encompasses the classical exploration-"
"exploitation trade-off since, on the one hand, it is desirable to find "
"well-performing architectures quickly, while on the other hand, premature"
" convergence to a region of suboptimal architectures should be avoided. "
"The \"best\" exploration strategy for a particular scenario is usually "
"found via trial-and-error. As many state-of-the-art strategies are "
"implemented with their own code-base, it becomes very troublesome to "
"switch from one to another."
msgstr ""
#: ../../source/nas/overview.rst:53
msgid ""
"In NNI, we have also provided :doc:`a list of strategies "
"<exploration_strategy>`. Some of them are powerful yet time consuming, "
"while others might be suboptimal but really efficient. Given that all "
"strategies are implemented with a unified interface, users can always "
"find one that matches their need."
msgstr ""
#: ../../source/nas/overview.rst:56
msgid "Performance estimation"
msgstr ""
#: ../../source/nas/overview.rst:58
msgid ""
"The objective of NAS is typically to find architectures that achieve high"
" predictive performance on unseen data. Performance estimation refers to "
"the process of estimating this performance. The problem with performance "
"estimation is mostly its scalability, i.e., how can I run and manage "
"multiple trials simultaneously."
msgstr ""
#: ../../source/nas/overview.rst:60
msgid ""
"In NNI, we standardize this process is implemented with :doc:`evaluator "
"<evaluator>`, which is responsible of estimating a model's performance. "
"The choices of evaluators also range from the simplest option, e.g., to "
"perform a standard training and validation of the architecture on data, "
"to complex configurations and implementations. Evaluators are run in "
"*trials*, where trials can be spawn onto distributed platforms with our "
"powerful :doc:`training service </experiment/training_service/overview>`."
msgstr ""
#: ../../source/nas/overview.rst:63
msgid "Tutorials"
msgstr ""
#: ../../source/nas/overview.rst:65
msgid ""
"To start using NNI NAS framework, we recommend at least going through the"
" following tutorials:"
msgstr ""
#: ../../source/nas/overview.rst:67
msgid ":doc:`Quickstart </tutorials/hello_nas>`"
msgstr ""
#: ../../source/nas/overview.rst:68
msgid ":doc:`construct_space`"
msgstr ""
#: ../../source/nas/overview.rst:69
msgid ":doc:`exploration_strategy`"
msgstr ""
#: ../../source/nas/overview.rst:70
msgid ":doc:`evaluator`"
msgstr ""
#: ../../source/nas/overview.rst:73
msgid "Resources"
msgstr ""
#: ../../source/nas/overview.rst:75
msgid ""
"The following articles will help with a better understanding of the "
"current arts of NAS:"
msgstr ""
#: ../../source/nas/overview.rst:77
msgid ""
"`Neural Architecture Search: A Survey "
"<https://arxiv.org/abs/1808.05377>`__"
msgstr ""
#: ../../source/nas/overview.rst:78
msgid ""
"`A Comprehensive Survey of Neural Architecture Search: Challenges and "
"Solutions <https://arxiv.org/abs/2006.02903>`__"
msgstr ""
#~ msgid "Basics"
#~ msgstr ""
#~ msgid "Basic Concepts"
#~ msgstr ""


@@ -0,0 +1,23 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/quickstart.rst:2
msgid "Quickstart"
msgstr ""


@@ -0,0 +1,23 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-12 17:35+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../templates/globaltoc.html:6
msgid "Overview"
msgstr ""


@@ -0,0 +1,749 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/tutorials/hello_nas.rst:13
msgid ""
"Click :ref:`here <sphx_glr_download_tutorials_hello_nas.py>` to download "
"the full example code"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:22
msgid "Hello, NAS!"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:24
msgid ""
"This is the 101 tutorial of Neural Architecture Search (NAS) on NNI. In "
"this tutorial, we will search for a neural architecture on MNIST dataset "
"with the help of NAS framework of NNI, i.e., *Retiarii*. We use multi-"
"trial NAS as an example to show how to construct and explore a model "
"space."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:28
msgid ""
"There are mainly three crucial components for a neural architecture "
"search task, namely,"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:30
msgid "Model search space that defines a set of models to explore."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:31
msgid "A proper strategy as the method to explore this model space."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:32
msgid ""
"A model evaluator that reports the performance of every model in the "
"space."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:34
msgid ""
"Currently, PyTorch is the only supported framework by Retiarii, and we "
"have only tested **PyTorch 1.7 to 1.10**. This tutorial assumes PyTorch "
"context but it should also apply to other frameworks, which is in our "
"future plan."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:38
msgid "Define your Model Space"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:40
msgid ""
"Model space is defined by users to express a set of models that users "
"want to explore, which contains potentially good-performing models. In "
"this framework, a model space is defined with two parts: a base model and"
" possible mutations on the base model."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:46
msgid "Define Base Model"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:48
msgid ""
"Defining a base model is almost the same as defining a PyTorch (or "
"TensorFlow) model. Usually, you only need to replace the code ``import "
"torch.nn as nn`` with ``import nni.retiarii.nn.pytorch as nn`` to use our"
" wrapped PyTorch modules."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:52
msgid "Below is a very simple example of defining a base model."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:93
msgid ""
"Always keep in mind that you should use ``import nni.retiarii.nn.pytorch "
"as nn`` and :meth:`nni.retiarii.model_wrapper`. Many mistakes are a "
"result of forgetting one of those. Also, please use ``torch.nn`` for "
"submodules of ``nn.init``, e.g., ``torch.nn.init`` instead of "
"``nn.init``."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:98
msgid "Define Model Mutations"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:100
msgid ""
"A base model is only one concrete model not a model space. We provide "
":doc:`API and Primitives </nas/construct_space>` for users to express how"
" the base model can be mutated. That is, to build a model space which "
"includes many models."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:103
msgid "Based on the above base model, we can define a model space as below."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:134
msgid "This results in the following code:"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:189
#: ../../source/tutorials/hello_nas.rst:259
#: ../../source/tutorials/hello_nas.rst:551
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:247
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:284
#: ../../source/tutorials/pruning_quick_start_mnist.rst:65
#: ../../source/tutorials/pruning_quick_start_mnist.rst:107
#: ../../source/tutorials/pruning_quick_start_mnist.rst:172
#: ../../source/tutorials/pruning_quick_start_mnist.rst:218
#: ../../source/tutorials/pruning_quick_start_mnist.rst:255
#: ../../source/tutorials/pruning_quick_start_mnist.rst:283
msgid "Out:"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:210
msgid ""
"This example uses two mutation APIs, :class:`nn.LayerChoice "
"<nni.retiarii.nn.pytorch.LayerChoice>` and :class:`nn.InputChoice "
"<nni.retiarii.nn.pytorch.ValueChoice>`. :class:`nn.LayerChoice "
"<nni.retiarii.nn.pytorch.LayerChoice>` takes a list of candidate modules "
"(two in this example), one will be chosen for each sampled model. It can "
"be used like normal PyTorch module. :class:`nn.InputChoice "
"<nni.retiarii.nn.pytorch.ValueChoice>` takes a list of candidate values, "
"one will be chosen to take effect for each sampled model."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:219
msgid ""
"More detailed API description and usage can be found :doc:`here "
"</nas/construct_space>`."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:223
msgid ""
"We are actively enriching the mutation APIs, to facilitate easy "
"construction of model space. If the currently supported mutation APIs "
"cannot express your model space, please refer to :doc:`this doc "
"</nas/mutator>` for customizing mutators."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:228
msgid "Explore the Defined Model Space"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:230
msgid ""
"There are basically two exploration approaches: (1) search by evaluating "
"each sampled model independently, which is the search approach in :ref"
":`multi-trial NAS <multi-trial-nas>` and (2) one-shot weight-sharing "
"based search, which is used in one-shot NAS. We demonstrate the first "
"approach in this tutorial. Users can refer to :ref:`here <one-shot-nas>` "
"for the second approach."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:235
msgid ""
"First, users need to pick a proper exploration strategy to explore the "
"defined model space. Second, users need to pick or customize a model "
"evaluator to evaluate the performance of each explored model."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:239
msgid "Pick an exploration strategy"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:241
msgid ""
"Retiarii supports many :doc:`exploration strategies "
"</nas/exploration_strategy>`."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:243
msgid "Simply choosing (i.e., instantiate) an exploration strategy as below."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:273
msgid "Pick or customize a model evaluator"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:275
msgid ""
"In the exploration process, the exploration strategy repeatedly generates"
" new models. A model evaluator is for training and validating each "
"generated model to obtain the model's performance. The performance is "
"sent to the exploration strategy for the strategy to generate better "
"models."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:279
msgid ""
"Retiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, "
"but to start with, it is recommended to use :class:`FunctionalEvaluator "
"<nni.retiarii.evaluator.FunctionalEvaluator>`, that is, to wrap your own "
"training and evaluation code with one single function. This function "
"should receive one single model class and uses "
":func:`nni.report_final_result` to report the final score of this model."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:284
msgid ""
"An example here creates a simple evaluator that runs on MNIST dataset, "
"trains for 2 epochs, and reports its validation accuracy."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:367
msgid "Create the evaluator"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:386
msgid ""
"The ``train_epoch`` and ``test_epoch`` here can be any customized "
"function, where users can write their own training recipe."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:389
msgid ""
"It is recommended that the ``evaluate_model`` here accepts no additional "
"arguments other than ``model_cls``. However, in the :doc:`advanced "
"tutorial </nas/evaluator>`, we will show how to use additional arguments "
"in case you actually need those. In future, we will support mutation on "
"the arguments of evaluators, which is commonly called \"Hyper-parmeter "
"tuning\"."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:394
msgid "Launch an Experiment"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:396
msgid ""
"After all the above are prepared, it is time to start an experiment to do"
" the model search. An example is shown below."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:417
msgid ""
"The following configurations are useful to control how many trials to run"
" at most / at the same time."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:436
msgid ""
"Remember to set the following config if you want to GPU. "
"``use_active_gpu`` should be set true if you wish to use an occupied GPU "
"(possibly running a GUI)."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:456
msgid ""
"Launch the experiment. The experiment should take several minutes to "
"finish on a workstation with 2 GPUs."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:474
msgid ""
"Users can also run Retiarii Experiment with :doc:`different training "
"services </experiment/training_service/overview>` besides ``local`` "
"training service."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:478
msgid "Visualize the Experiment"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:480
msgid ""
"Users can visualize their experiment in the same way as visualizing a "
"normal hyper-parameter tuning experiment. For example, open "
"``localhost:8081`` in your browser, 8081 is the port that you set in "
"``exp.run``. Please refer to :doc:`here "
"</experiment/web_portal/web_portal>` for details."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:484
msgid ""
"We support visualizing models with 3rd-party visualization engines (like "
"`Netron <https://netron.app/>`__). This can be used by clicking "
"``Visualization`` in detail panel for each trial. Note that current "
"visualization is based on `onnx <https://onnx.ai/>`__ , thus "
"visualization is not feasible if the model cannot be exported into onnx."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:489
msgid ""
"Built-in evaluators (e.g., Classification) will automatically export the "
"model into a file. For your own evaluator, you need to save your file "
"into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work. For instance,"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:520
msgid "Relaunch the experiment, and a button is shown on Web portal."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:525
msgid "Export Top Models"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:527
msgid ""
"Users can export top models after the exploration is done using "
"``export_top_models``."
msgstr ""
#: ../../source/tutorials/hello_nas.rst:563
msgid "**Total running time of the script:** ( 2 minutes 15.810 seconds)"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:578
msgid ":download:`Download Python source code: hello_nas.py <hello_nas.py>`"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:584
msgid ":download:`Download Jupyter notebook: hello_nas.ipynb <hello_nas.ipynb>`"
msgstr ""
#: ../../source/tutorials/hello_nas.rst:591
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:338
#: ../../source/tutorials/pruning_quick_start_mnist.rst:357
msgid "`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:14
msgid ""
"Click :ref:`here "
"<sphx_glr_download_tutorials_hpo_quickstart_pytorch_main.py>` to download"
" the full example code"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:23
msgid "NNI HPO Quickstart with PyTorch"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:24
msgid ""
"This tutorial optimizes the model in `official PyTorch quickstart`_ with "
"auto-tuning."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:26
msgid ""
"There is also a :doc:`TensorFlow "
"version<../hpo_quickstart_tensorflow/main>` if you prefer it."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:28
msgid "The tutorial consists of 4 steps:"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:30
msgid "Modify the model for auto-tuning."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:31
msgid "Define hyperparameters' search space."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:32
msgid "Configure the experiment."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:33
msgid "Run the experiment."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:40
msgid "Step 1: Prepare the model"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:41
msgid "In first step, we need to prepare the model to be tuned."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:43
msgid ""
"The model should be put in a separate script. It will be evaluated many "
"times concurrently, and possibly will be trained on distributed "
"platforms."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:47
msgid "In this tutorial, the model is defined in :doc:`model.py <model>`."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:49
msgid "In short, it is a PyTorch model with 3 additional API calls:"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:51
msgid ""
"Use :func:`nni.get_next_parameter` to fetch the hyperparameters to be "
"evalutated."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:52
msgid ""
"Use :func:`nni.report_intermediate_result` to report per-epoch accuracy "
"metrics."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:53
msgid "Use :func:`nni.report_final_result` to report final accuracy."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:55
msgid "Please understand the model code before continue to next step."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:60
msgid "Step 2: Define search space"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:61
msgid ""
"In model code, we have prepared 3 hyperparameters to be tuned: "
"*features*, *lr*, and *momentum*."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:64
msgid ""
"Here we need to define their *search space* so the tuning algorithm can "
"sample them in desired range."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:66
msgid "Assuming we have following prior knowledge for these hyperparameters:"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:68
msgid "*features* should be one of 128, 256, 512, 1024."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:69
msgid ""
"*lr* should be a float between 0.0001 and 0.1, and it follows exponential"
" distribution."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:70
msgid "*momentum* should be a float between 0 and 1."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:72
msgid ""
"In NNI, the space of *features* is called ``choice``; the space of *lr* "
"is called ``loguniform``; and the space of *momentum* is called "
"``uniform``. You may have noticed, these names are derived from "
"``numpy.random``."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:77
msgid ""
"For full specification of search space, check :doc:`the reference "
"</hpo/search_space>`."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:79
msgid "Now we can define the search space as follow:"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:102
msgid "Step 3: Configure the experiment"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:103
msgid ""
"NNI uses an *experiment* to manage the HPO process. The *experiment "
"config* defines how to train the models and how to explore the search "
"space."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:106
msgid ""
"In this tutorial we use a *local* mode experiment, which means models "
"will be trained on local machine, without using any special training "
"platform."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:125
msgid "Now we start to configure the experiment."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:128
msgid "Configure trial code"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:129
msgid ""
"In NNI evaluation of each hyperparameter set is called a *trial*. So the "
"model script is called *trial code*."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:147
msgid ""
"When ``trial_code_directory`` is a relative path, it relates to current "
"working directory. To run ``main.py`` in a different path, you can set "
"trial code directory to ``Path(__file__).parent``. (`__file__ "
"<https://docs.python.org/3.10/reference/datamodel.html#index-43>`__ is "
"only available in standard Python, not in Jupyter Notebook.)"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:154
msgid ""
"If you are using Linux system without Conda, you may need to change "
"``\"python model.py\"`` to ``\"python3 model.py\"``."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:160
msgid "Configure search space"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:178
msgid "Configure tuning algorithm"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:179
msgid "Here we use :doc:`TPE tuner </hpo/tuners>`."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:198
msgid "Configure how many trials to run"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:199
msgid ""
"Here we evaluate 10 sets of hyperparameters in total, and concurrently "
"evaluate 2 sets at a time."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:218
msgid ""
"``max_trial_number`` is set to 10 here for a fast example. In real world "
"it should be set to a larger number. With default config TPE tuner "
"requires 20 trials to warm up."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:222
msgid "You may also set ``max_experiment_duration = '1h'`` to limit running time."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:224
msgid ""
"If neither ``max_trial_number`` nor ``max_experiment_duration`` are set, "
"the experiment will run forever until you press Ctrl-C."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:230
msgid "Step 4: Run the experiment"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:231
msgid ""
"Now the experiment is ready. Choose a port and launch it. (Here we use "
"port 8080.)"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:233
msgid ""
"You can use the web portal to view experiment status: "
"http://localhost:8080."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:263
msgid "After the experiment is done"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:264
msgid "Everything is done and it is safe to exit now. The following are optional."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:266
msgid ""
"If you are using standard Python instead of Jupyter Notebook, you can add"
" ``input()`` or ``signal.pause()`` to prevent Python from exiting, "
"allowing you to view the web portal after the experiment is done."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:296
msgid ""
":meth:`nni.experiment.Experiment.stop` is automatically invoked when "
"Python exits, so it can be omitted in your code."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:299
msgid ""
"After the experiment is stopped, you can run "
":meth:`nni.experiment.Experiment.view` to restart web portal."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:303
msgid ""
"This example uses :doc:`Python API </reference/experiment>` to create "
"experiment."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:305
msgid ""
"You can also create and manage experiments with :doc:`command line tool "
"</reference/nnictl>`."
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:310
msgid "**Total running time of the script:** ( 1 minutes 24.393 seconds)"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:325
msgid ":download:`Download Python source code: main.py <main.py>`"
msgstr ""
#: ../../source/tutorials/hpo_quickstart_pytorch/main.rst:331
msgid ":download:`Download Jupyter notebook: main.ipynb <main.ipynb>`"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:13
msgid ""
"Click :ref:`here "
"<sphx_glr_download_tutorials_pruning_quick_start_mnist.py>` to download "
"the full example code"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:22
msgid "Pruning Quickstart"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:24
msgid ""
"Model pruning is a technique to reduce the model size and computation by "
"reducing model weight size or intermediate state size. There are three "
"common practices for pruning a DNN model:"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:27
msgid "Pre-training a model -> Pruning the model -> Fine-tuning the pruned model"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:28
msgid ""
"Pruning a model during training (i.e., pruning aware training) -> Fine-"
"tuning the pruned model"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:29
msgid "Pruning a model -> Training the pruned model from scratch"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:31
msgid ""
"NNI supports all of the above pruning practices by working on the key "
"pruning stage. Following this tutorial for a quick look at how to use NNI"
" to prune a model in a common practice."
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:37
msgid "Preparation"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:39
msgid ""
"In this tutorial, we use a simple model and pre-trained on MNIST dataset."
" If you are familiar with defining a model and training in pytorch, you "
"can skip directly to `Pruning Model`_."
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:121
msgid "Pruning Model"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:123
msgid ""
"Using L1NormPruner to prune the model and generate the masks. Usually, a "
"pruner requires original model and ``config_list`` as its inputs. "
"Detailed about how to write ``config_list`` please refer "
":doc:`compression config specification "
"<../compression/compression_config_list>`."
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:127
msgid ""
"The following `config_list` means all layers whose type is `Linear` or "
"`Conv2d` will be pruned, except the layer named `fc3`, because `fc3` is "
"`exclude`. The final sparsity ratio for each layer is 50%. The layer "
"named `fc3` will not be pruned."
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:153
msgid "Pruners usually require `model` and `config_list` as input arguments."
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:232
msgid ""
"Speedup the original model with masks, note that `ModelSpeedup` requires "
"an unwrapped model. The model becomes smaller after speedup, and reaches "
"a higher sparsity ratio because `ModelSpeedup` will propagate the masks "
"across layers."
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:269
msgid "the model will become real smaller after speedup"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:307
msgid "Fine-tuning Compacted Model"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:308
msgid ""
"Note that if the model has been sped up, you need to re-initialize a new "
"optimizer for fine-tuning. Because speedup will replace the masked big "
"layers with dense small ones."
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:329
msgid "**Total running time of the script:** ( 0 minutes 58.337 seconds)"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:344
msgid ""
":download:`Download Python source code: pruning_quick_start_mnist.py "
"<pruning_quick_start_mnist.py>`"
msgstr ""
#: ../../source/tutorials/pruning_quick_start_mnist.rst:350
msgid ""
":download:`Download Jupyter notebook: pruning_quick_start_mnist.ipynb "
"<pruning_quick_start_mnist.ipynb>`"
msgstr ""


@@ -305,16 +305,31 @@ To contribute a new tutorial, here are the steps to follow:
* `How to add images to notebooks <https://sphinx-gallery.github.io/stable/configuration.html#adding-images-to-notebooks>`_.
* `How to reference a tutorial in documentation <https://sphinx-gallery.github.io/stable/advanced.html#cross-referencing>`_.
Chinese translation
^^^^^^^^^^^^^^^^^^^
Translation (i18n)
^^^^^^^^^^^^^^^^^^
We only maintain `a partial set of documents <https://github.com/microsoft/nni/issues/4298>`_ with Chinese translation. If you intend to contribute more, follow the steps:
We only maintain `a partial set of documents <https://github.com/microsoft/nni/issues/4298>`_ with translation. Currently, translation is provided in Simplified Chinese only.
1. Add a ``xxx_zh.rst`` in the same folder where ``xxx.rst`` exists.
2. Run ``python tools/chineselink.py`` under ``docs`` folder, to generate a hash string in your created ``xxx_zh.rst``.
3. Don't delete the hash string, add your translation after it.
* If you want to update the translation of an existing document, please update messages in ``docs/source/locales``.
* If you have updated a translated English document, we require the corresponding translated documents to be updated (at least, the update should be triggered). Please follow these steps:
In case you modify an English document whose Chinese translation already exists, you also need to run ``python tools/chineselink.py`` first to update the hash string, and then update the Chinese translation contents accordingly.
1. Run ``make i18n`` under ``docs`` folder.
2. Verify that there are new messages in ``docs/source/locales``.
3. Translate the messages.
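For reference, a translated entry in the message catalogs under ``docs/source/locales`` looks like the following (the ``msgid`` is taken from the real catalog above; the ``msgstr`` is an illustrative translation):

```po
#: ../../source/quickstart.rst:2
msgid "Quickstart"
msgstr "快速入门"
```

Leaving a ``msgstr`` empty is valid; the English source text is then shown as a fallback.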
* If you intend to translate a new document:
1. Update ``docs/source/conf.py`` to make ``gettext_documents`` include your document (probably adding a new regular expression).
2. See the steps above.
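``gettext_documents`` is a list of regular expressions matched against docnames (source paths without the ``.rst`` suffix). As a rough sketch of how the ``getpartialtext`` builder decides which documents contribute messages — the patterns and docnames below are hypothetical, not the actual contents of ``docs/source/conf.py``:

```python
import re

# Hypothetical whitelist; conf.py's real gettext_documents may differ.
gettext_documents = [
    r'^quickstart$',
    r'^tutorials/hello_nas$',
]

patterns = [re.compile(p) for p in gettext_documents]

def should_extract(docname: str) -> bool:
    """Return True if this document's messages go into the catalogs."""
    return any(p.match(docname) for p in patterns)
```

Adding a new translated document then amounts to appending a regular expression that matches its docname.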
To build the translated documentation (for example, the Chinese documentation), please run:

.. code-block:: bash

   make -e SPHINXOPTS="-D language='zh'" html
If you encounter problems with translation builds, try removing the previous build via ``rm -r docs/build/``.
.. _code-of-conduct:


@@ -38,8 +38,11 @@ stages:
  displayName: Sphinx sanity check (Chinese)
- script: |
    set -e
    cd docs
    python tools/chineselink.py check
    rm -rf build
    make i18n
    git diff --exit-code source/locales
  displayName: Translation up-to-date
- script: |