This commit is contained in:
Shital Shah 2020-06-12 11:56:54 -07:00 committed by Gustavo Rosa
Parent 2bcad3159b
Commit 30ed18c0f4
8 changed files: 119 additions and 105 deletions

1
.vscode/spellright.dict vendored Normal file

@@ -0,0 +1 @@
Archai

4
AUTHORS.md Normal file

@@ -0,0 +1,4 @@
# Authors
- [Shital Shah](http://www.shitalshah.com)
- [Debadeepta Dey](https://www.debadeepta.com/)


@@ -1,3 +1,5 @@
Archai
MIT License
Copyright (c) Microsoft Corporation.

131
README.md

@@ -1,56 +1,31 @@
# Welcome to Archai
Neural Architecture Search (NAS) aims to automate the process of searching for neural architectures.
Given a new dataset, manually trying out many different architectures and hyperparameters is often
a tedious task. Even the most skilled machine learning researchers and engineers have to resort to
the dark arts of finding good architectures and corresponding hyperparameters, guided by some
intuition and a lot of careful experimentation. The NAS community's dream is that this tedium be
taken over by algorithms, freeing up precious human time for more noble pursuits.
Archai is a platform for Neural Architecture Search (NAS) with the goal of unifying several recent advancements in research
and making them accessible to non-experts, so that anyone can leverage this research to generate efficient deep networks for their own applications. Archai hopes to accelerate NAS research by making it easy to mix and match different techniques rapidly while still ensuring reproducibility, documented hyper-parameters, and fair comparison across the spectrum of these techniques. Archai is extensible and modular to accommodate new algorithms easily, offering a clean and robust codebase.
Recently, NAS has made tremendous progress but is merely getting started. Many open problems remain.
One of the more immediate problems is fair comparison and reproducibility. To ameliorate these
issues we are releasing Archai, a performant platform for NAS algorithms. Archai has the following features:
[Extensive feature list](docs/features.md)
Archai has the following features:
## How to Get It
* NAS for non-experts
* Turnkey experimentation platform
* High performance PyTorch code base
* Ease of algorithm development
* Object-oriented model definition
* Unified abstractions for training and evaluation
* New algorithms can be written in a few lines of code
* Easily mix and match existing algorithm aspects
* Easily implement both forward and backward search
* Algorithm-agnostic Pareto-front generation
* Easily add hardware-specific constraints like memory, inference time, FLOPs, etc.
### Install as package
* Efficient experiment management for reproducibility and fair comparison
* Flexible configuration system
* Structured logging
* Metrics management and logging
* Declarative experimentation
* Declarative support for wide variety of datasets
* Custom dataset support
* Unified final training procedure for searched models
```
pip install archai
```
## Installation
### Install from source code
Currently we have tested Archai on Ubuntu 16.04 LTS 64-bit and Ubuntu 18.04 LTS 64-bit on
Python 3.6+ and PyTorch 1.3+.
We recommend installing from the source code:
* System prep:
  * CUDA-compatible GPU
  * Anaconda package manager
  * NVIDIA driver compatible with CUDA 9.2 or greater
* We provide two conda environments, for CUDA 9.2 and CUDA 10.1:
  * [archaicuda101.yaml](dockers/docker-cuda-10-1/archaicuda101.yml)
  * [archaicuda92.yaml](dockers/docker-cuda-9-2/archaicuda92.yml)
```
git clone https://github.com/microsoft/archai.git
cd archai
pip install -e .
```
Archai requires Python 3.6+ and is tested with PyTorch 1.3+. For network visualization, you may need to separately install [graphviz](https://graphviz.gitlab.io/download/). We recommend:
* `conda env create -f archaicuda101.yml`
* `conda activate archaicuda101`
* `pip install -r requirements.txt`
* `pip install -e .`
## Test installation
@@ -63,41 +38,7 @@ Python 3.6+ and PyTorch 1.3+.
provided in the [dockers](dockers) folder. These dockers are useful
for large scale experimentation on compute clusters.
## How to use it?
```
├── archai
│   ├── cifar10_models
│   ├── common
│   ├── darts
│   ├── data_aug
│   ├── nas
│   ├── networks
│   ├── petridish
│   ├── random
│   └── xnas
├── archived
├── confs
├── dockers
├── docs
├── scripts
├── setup.py
├── tests
└── tools
    ├── azure
```
Most of the functionality resides in the [`archai`](archai/) folder.
[`nas`](archai/nas) contains algorithm-agnostic infrastructure
that is commonly used in NAS algorithms. [`common`](archai/common) contains
infrastructure code that is not NAS-specific but is widely used
throughout the codebase.
Algorithm-specific code resides in appropriately named folders such as [`darts`](archai/darts),
[`petridish`](archai/petridish), [`random`](archai/random) and
[`xnas`](archai/xnas).
[`scripts`](scripts/) contains entry-point scripts for running all algorithms.
## How to Use It
### Quick start
@@ -133,40 +74,24 @@ See [Roadmap](#roadmap) for details on new algorithms coming soon.
See detailed [instructions](tools/azure/README.md).
## Roadmap
### Other References
We are striving to rapidly expand the list of algorithms, and we encourage pull requests from the
community contributing new algorithms.
Here is our current deck:
* [ProxyLess NAS](https://arxiv.org/abs/1812.00332)
* [SNAS](https://arxiv.org/abs/1812.09926)
* [DATA](http://papers.nips.cc/paper/8374-data-differentiable-architecture-approximation.pdf)
* [RandNAS](https://liamcli.com/assets/pdf/randnas_arxiv.pdf)
Please file an issue for any algorithms you would like to see implemented in Archai. We will try our best to accommodate.
## Paper
If you use Archai in your work, please cite...
* [Directory Structure](docs/dir_struct.md)
* [FAQ](docs/faq.md)
* [Roadmap](docs/roadmap.md)
## Contribute
We would love your contributions, feedback, questions, and feature requests! Please file a GitHub issue or send us a pull request. Please review the [Microsoft Code of Conduct](https://opensource.microsoft.com/codeofconduct/) and [learn more](CONTRIBUTING.md).
We would love your contributions, feedback, questions, and feature requests! Please [file a GitHub issue](https://github.com/microsoft/archai/issues/new) or send us a pull request. Please review the [Microsoft Code of Conduct](https://opensource.microsoft.com/codeofconduct/) and [learn more](https://github.com/microsoft/archai/blob/master/CONTRIBUTING.md).
## Contacts
## Contact
Shital Shah shitals@microsoft.com
Debadeepta Dey dedey@microsoft.com
Eric Horvitz horvitz@microsoft.com
Join the Archai group on [Facebook](https://www.facebook.com/groups/1133660130366735/) to stay up to date or ask any questions.
## Credits
Archai utilizes several open source libraries for many of its features. These include: [fastautoaugment](https://github.com/kakaobrain/fast-autoaugment), [tensorwatch](https://github.com/microsoft/tensorwatch), and many others.
Archai builds on several open source codebases. These include: [Fast AutoAugment](https://github.com/kakaobrain/fast-autoaugment), [pt.darts](https://github.com/khanrc/pt.darts), [DARTS-PyTorch](https://github.com/dragen1860/DARTS-PyTorch), [DARTS](https://github.com/quark0/darts), [petridishnn](https://github.com/microsoft/petridishnn), [PyTorch CIFAR-10 Models](https://github.com/huyvnphan/PyTorch-CIFAR10), [NVIDIA DeepLearning Examples](https://github.com/NVIDIA/DeepLearningExamples), and [PyTorch Warmup Scheduler](https://github.com/ildoonet/pytorch-gradual-warmup-lr). Please see the `install_requires` section in [setup.py](setup.py) for an up-to-date list of dependencies. If you believe credit for any material is missing, please let us know by filing a [GitHub issue](https://github.com/microsoft/archai/issues/new).
## License
This project is released under the MIT License. Please review the [License file](LICENSE.txt) for further details.
This project is released under the MIT License. Please review the [License file](LICENSE.txt) for more details.

34
docs/dir_struct.md Normal file

@@ -0,0 +1,34 @@
# Directory Structure
```
├── archai
│   ├── cifar10_models
│   ├── common
│   ├── darts
│   ├── data_aug
│   ├── nas
│   ├── networks
│   ├── petridish
│   ├── random
│   └── xnas
├── archived
├── confs
├── dockers
├── docs
├── scripts
├── setup.py
├── tests
└── tools
    ├── azure
```
Most of the functionality resides in the [`archai`](archai/) folder.
[`nas`](archai/nas) contains algorithm-agnostic infrastructure
that is commonly used in NAS algorithms. [`common`](archai/common) contains
infrastructure code that is not NAS-specific but is widely used
throughout the codebase.
Algorithm-specific code resides in appropriately named folders such as [`darts`](archai/darts),
[`petridish`](archai/petridish), [`random`](archai/random) and
[`xnas`](archai/xnas).
[`scripts`](scripts/) contains entry-point scripts for running all algorithms.

9
docs/faq.md Normal file

@@ -0,0 +1,9 @@
# Frequently Asked Questions (FAQs)
* **What is Neural Architecture Search (NAS)?**
Neural Architecture Search (NAS) aims to automate the process of searching for neural architectures.
Given a new dataset, manually trying out many different architectures and hyperparameters is often
a tedious task. Even the most skilled machine learning researchers and engineers have to resort to
the dark arts of finding good architectures and corresponding hyperparameters, guided by some
intuition and a lot of careful experimentation. The NAS community's dream is that this tedium be
taken over by algorithms, freeing up precious human time for more noble pursuits.


@@ -4,8 +4,34 @@ Archai is designed to unify several latest algorithms for Network Architecture S
## Features
* **Declarative Approach and Reproducibility**: Archai carefully abstracts away various hyperparameters, training details, model descriptions, etc. into a configuration. The goal is to make explicit several critical decisions that may otherwise get buried in the code, making experiments harder to reproduce. In addition, Archai's configuration system goes well beyond standard YAML with the ability to inherit the config from one experiment and make only a few changes, or to override configs from the command line. It is possible to perform several different experiments by merely changing the config where significant code changes might otherwise be needed.
* **Declarative Approach and Reproducibility**: Archai incorporates a YAML-based config system with the additional ability to inherit from a base YAML and share settings without needing to copy/paste. This design makes it easy to mix and match parts of different algorithms without having to write code. The resulting YAML becomes a self-documenting list of all hyper-parameters as well as of the bag-of-tricks enabled for the experiment. The goal is to make explicit critical decisions that may otherwise remain buried in the code, making the experiment harder for others to reproduce. It is then also possible to perform several different experiments by merely changing the config where significant code changes might otherwise be needed (see the configuration sketch after this list).
* **Plug-n-Play Datasets**: Archai provides infrastructure so that a new dataset can be added by simply adding a new config. This allows much faster experimentation on real-world datasets and lets actual products leverage the latest NAS research.
* **Fair Comparison**: A recent crisis in the field of NAS is the inability to fairly compare different techniques under the *same* bag-of-tricks, which often makes more difference than the NAS technique itself. Archai stipulates that all bag-of-tricks be config-driven and hence enforces the ability to run different algorithms on a level playing field. Further, instead of different algorithms using vastly different training and evaluation code, Archai provides common infrastructure, again enabling fair comparison.
* **Unification and Abstractions**: Archai is designed to provide abstractions for the various phases of NAS, including architecture trainers, finalizers, evaluators and so on. Also, the architecture is represented by a model description expressed as YAML that can be "compiled" into a PyTorch network. Unlike the "genotypes" used in traditional algorithms, these model descriptions are far more powerful, flexible and extensible. This allows unifying several different NAS techniques, including macro-search, into a single framework. The modular architecture allows for extensions and modifications in just a few lines of code.
* **Exhaustive Pareto Front Generation**: Archai allows sweeping over several macro parameters, such as the number of cells and the number of nodes, along with reports on model statistics such as the number of parameters, FLOPs, inference latency and model memory utilization, to help identify the optimal model for the desired constraints.
* **Differentiable Search vs. Growing Networks**: Archai offers a unified codebase for the two mainstream approaches to NAS: differentiable search and growing networks one iteration at a time (also called forward search). For growing networks, Archai also supports initializing weights from the previous iteration for faster training.
* **Clean Codebase with Aggregated Best Practices**: Archai has leveraged several popular codebases to aggregate best practices into one codebase with an extensible, modular, Pythonic design. Our hope is that Archai can serve as a great starting point for future NAS research.
* **NAS for Non-Experts**: Archai enables quick plug-n-play for custom datasets and the ability to run sweeps of standard algorithms. Our goal is to offer Archai as a turnkey NAS solution for the non-expert.
* **Efficient PyTorch Code**: Archai implements best practices for PyTorch as well as efficient versions of algorithms, such as bi-level optimizers that run as much as 2X faster than the original implementations.
* **Structured Logging**: Archai logs all information about a run in a machine-readable structured YAML file as well as a human-readable text file. This allows minute details of a run to be extracted for comparison and analysis.
* **Metrics and Reporting**: Archai collects all metrics, including timings, into machine-readable YAML that can easily be analyzed. One can also run multiple combinations of parameters, each with multiple seeds, and then compute the mean as well as the standard deviation over the seed runs. Archai includes a reporting component to generate metrics plots with standard-deviation envelopes and other details (see the metrics sketch after this list).
* **General Purpose Trainer**: Archai includes a general-purpose trainer that can be used to train any PyTorch model, including handcrafted models. This trainer incorporates several best practices and, along with all the other infrastructure, is useful even when NAS is not the primary focus. It also supports features such as multiple optimizers, warm-up schedules, chunking support, etc.
* **Mixed Precision and Distributed Runs**: Archai supports easy, config-driven, distributed multi-GPU runs, with or without mixed precision, which can make runs 2X-4X faster on Tensor Core GPUs such as the NVIDIA V100. Archai includes several best practices through the NVIDIA Apex library as well as its own components, such as a distributed stratified sampler.
* **Development Mode**: The so-called "toy" mode allows for quick end-to-end testing during development, so you can develop on an ordinary laptop and do full runs in the cloud. Archai also supports TensorWatch, TensorBoard and other debugging aids such as network visualization, timing logs, etc.
* **Enhanced Archai Model**: The Archai `Model` class derives from PyTorch `nn.Module` but adds features such as a clear separation of architecture and non-architecture differentiable parameters (see the model sketch after this list).
* **Cross Platform**: Archai runs on Linux as well as Windows; however, distributed runs are currently supported only on Linux.
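As a concrete illustration of the config inheritance described in the **Declarative Approach and Reproducibility** bullet, here is a minimal sketch in Python. The `deep_update` helper and the config keys are illustrative assumptions, not Archai's actual config API.
```
# Minimal sketch of YAML config inheritance: a derived experiment config
# states only what differs from the base. The deep_update helper and the
# keys below are illustrative assumptions, not Archai's actual config API.
import yaml

def deep_update(base: dict, overrides: dict) -> dict:
    """Recursively merge overrides into base, returning a new dict."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_update(merged[key], value)
        else:
            merged[key] = value
    return merged

base = yaml.safe_load("""
trainer:
  epochs: 600
  optimizer: sgd
dataset: cifar10
""")

# The derived config overrides only the epoch count; all else is inherited.
experiment = yaml.safe_load("""
trainer:
  epochs: 50
""")

conf = deep_update(base, experiment)
print(conf)  # {'trainer': {'epochs': 50, 'optimizer': 'sgd'}, 'dataset': 'cifar10'}
```
The command-line overrides mentioned in the same bullet would then amount to applying one more merge with values parsed from the command line.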
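Similarly, the machine-readable metrics described in the **Metrics and Reporting** bullet could be consumed as in the sketch below; the file name `metrics.yaml` and the `seed_runs` key are assumptions for illustration, not Archai's actual log schema.
```
# Sketch: compute a mean accuracy curve with a standard-deviation envelope
# across seeds from a hypothetical metrics YAML. The file name and keys
# are assumptions, not Archai's actual log schema.
import statistics
import yaml

with open('metrics.yaml') as f:
    metrics = yaml.safe_load(f)  # e.g. {'seed_runs': [[0.52, 0.61, ...], ...]}

curves = metrics['seed_runs']    # one accuracy-per-epoch list per seed
for epoch, accs in enumerate(zip(*curves)):
    mean = statistics.mean(accs)
    std = statistics.stdev(accs) if len(accs) > 1 else 0.0
    print(f'epoch {epoch}: {mean:.4f} +/- {std:.4f}')
```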
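Finally, the separation of architecture and non-architecture parameters in the **Enhanced Archai Model** bullet can be sketched roughly as follows; the `_is_arch_param` tag and the method names are hypothetical, not Archai's actual `Model` interface.
```
# Sketch: an nn.Module that separates architecture parameters (e.g. mixing
# weights over candidate ops) from regular weights so that bi-level
# optimizers can target each set. Names here are hypothetical, not
# Archai's actual API.
import torch
from torch import nn

class SketchModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.alphas = nn.Parameter(torch.zeros(4))  # architecture parameter
        self.alphas._is_arch_param = True           # hypothetical tag

    def arch_params(self):
        return [p for p in self.parameters()
                if getattr(p, '_is_arch_param', False)]

    def nonarch_params(self):
        return [p for p in self.parameters()
                if not getattr(p, '_is_arch_param', False)]

model = SketchModel()
# Bi-level optimization typically uses one optimizer per parameter set.
w_opt = torch.optim.SGD(model.nonarch_params(), lr=0.025, momentum=0.9)
a_opt = torch.optim.Adam(model.arch_params(), lr=3e-4)
```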

13
docs/roadmap.md Normal file

@@ -0,0 +1,13 @@
# Roadmap
We are striving to rapidly expand the list of algorithms, and we encourage pull requests from the
community contributing new algorithms.
Here is our current deck:
* [ProxyLess NAS](https://arxiv.org/abs/1812.00332)
* [SNAS](https://arxiv.org/abs/1812.09926)
* [DATA](http://papers.nips.cc/paper/8374-data-differentiable-architecture-approximation.pdf)
* [RandNAS](https://liamcli.com/assets/pdf/randnas_arxiv.pdf)
Please file an issue for any algorithms you would like to see implemented in Archai. We will try our best to accommodate.