Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.

Welcome to Archai

Archai is a platform for Neural Architecture Search (NAS) that aims to unify several recent advancements in research and make them accessible to non-experts, so that anyone can leverage this research to generate efficient deep networks for their own applications. Archai hopes to accelerate NAS research by making it easy to mix and match different techniques rapidly while still ensuring reproducibility, documented hyper-parameters, and fair comparison across the spectrum of these techniques. Archai is extensible and modular, accommodating new algorithms easily (often with only a few new lines of code) while offering a clean and robust codebase.

Extensive feature list

How to Get It

Install as package

pip install archai

Install from source code

We recommend installing from the source code:

git clone https://github.com/microsoft/archai.git
cd archai
pip install -e .

Archai requires Python 3.6+ and is tested with PyTorch 1.3+. For network visualization, you may need to separately install graphviz.
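For example, on Debian/Ubuntu systems, graphviz and its Python bindings can typically be installed as follows (package manager commands vary by platform):

sudo apt-get install graphviz
pip install graphviz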

Test installation

  • cd archai
  • Run python scripts/main.py. This command runs every algorithm through a few batches of cifar10, for both search and final training.
  • If all went well, you have a working installation! Yay!
  • Note that one can also build and use the CUDA 10.1 or 9.2 compatible dockers provided in the dockers folder (see the sketch after this list). These dockers are useful for large-scale experimentation on compute clusters.
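As a sketch only: the Dockerfile path and image tag below are illustrative placeholders, not file names confirmed by this README. A typical build-and-run sequence would look like:

docker build -t archai:latest -f dockers/<dockerfile> .
docker run --gpus all -it archai:latest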

How to Use It

Quick start

scripts/main.py is the main point of entry.
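Assuming scripts/main.py parses its arguments with Python's standard argparse (an assumption, not something this README states), the supported flags can be listed with:

python scripts/main.py --help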

Run all algorithms in toy mode

python scripts/main.py runs all implemented search algorithms and final training with a few minibatches of data from cifar10. This is designed to exercise all code paths and make sure that everything is working properly.

To run specific algorithms

python scripts/main.py --darts will run DARTS search and evaluation (final model training) using only a few minibatches of data from cifar10. python scripts/main.py --darts --full will run the full search.

Other algorithms can be run by specifying different algorithm names like petridish, xnas, random etc.
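Assuming these algorithm names follow the same flag pattern as --darts (this README does not spell out the exact flags for the other algorithms), the invocations would look like:

python scripts/main.py --petridish
python scripts/main.py --xnas
python scripts/main.py --random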

List of algorithms

Currently, the following algorithms are implemented:

  • Petridish
  • DARTS
  • Random search baseline
  • XNAS (currently experimental; not yet fully reproduced, as the authors had not released source code at the time of writing)
  • DATA (currently experimental; not yet fully reproduced, as the authors had not released source code at the time of writing)

See Roadmap for details on new algorithms coming soon.

Tutorials

Running experiments on Azure AML

See detailed instructions.

Other References

Contribute

We would love your contributions, feedback, questions, algorithm implementations, and feature requests! Please file a GitHub issue or send us a pull request. Please review the Microsoft Code of Conduct and see CONTRIBUTING.md to learn more.

Contact

Join the Archai group on Facebook to stay up to date or ask any questions.

Team

Archai has been created and is maintained by Shital Shah and Debadeepta Dey in the Reinforcement Learning Group at Microsoft Research AI, Redmond, USA. Archai has benefited immensely from discussions with John Langford, Rich Caruana, and Eric Horvitz.

They look forward to Archai becoming more community-driven, and to listing major contributors here.

Credits

Archai builds on several open source codebases. These include: Fast AutoAugment, pt.darts, DARTS-PyTorch, DARTS, petridishnn, PyTorch CIFAR-10 Models, NVIDIA DeepLearning Examples, PyTorch Warmup Scheduler, and NAS Evaluation is Frustratingly Hard. Please see the install_requires section in setup.py for an up-to-date list of dependencies. If you feel credit for any material is missing, please let us know by filing a GitHub issue.

License

This project is released under the MIT License. Please review the LICENSE.TXT file for more details.