This commit is contained in:
Debadeepta Dey 2020-06-14 20:21:58 -07:00 committed by Gustavo Rosa
Parent 03d1ae4e77
Commit c7b9e506e1
2 changed files with 13 additions and 7 deletions

View file

@@ -1,7 +1,7 @@
# Welcome to Archai
Archai is a platform for Neural Architecture Search (NAS) with the goal of unifying several recent advancements in research
and making them accessible to non-experts, so that anyone can leverage this research to generate efficient deep networks for their own applications. Archai hopes to accelerate NAS research by making it easy to rapidly mix and match different techniques while still ensuring reproducibility, documented hyper-parameters, and fair comparison across the spectrum of these techniques. Archai is extensible and modular to accommodate new algorithms easily, offering a clean and robust codebase.
and making them accessible to non-experts, so that anyone can leverage this research to generate efficient deep networks for their own applications. Archai hopes to accelerate NAS research by making it easy to rapidly mix and match different techniques while still ensuring reproducibility, documented hyper-parameters, and fair comparison across the spectrum of these techniques. Archai is extensible and modular to accommodate new algorithms easily (often with only a few new lines of code), offering a clean and robust codebase.
[Extensive feature list](docs/features.md)
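To make the modularity claim concrete, here is a minimal sketch of the kind of plug-in registry that lets a new algorithm ship in a few lines of code. Every name below (`SEARCHERS`, `register_searcher`, `MyRandomSearch`) is a hypothetical illustration, not Archai's actual API:

```python
# Hypothetical sketch -- NOT Archai's actual API. It only illustrates how
# a registry-based, modular design lets a new NAS algorithm plug in with
# a few lines of code.
from typing import Dict, Type

SEARCHERS: Dict[str, Type] = {}

def register_searcher(name: str):
    """Decorator that records a searcher class under a lookup name."""
    def wrap(cls):
        SEARCHERS[name] = cls
        return cls
    return wrap

@register_searcher('my_random_search')
class MyRandomSearch:
    """A toy searcher: sample a few architectures and keep the best."""
    def search(self, sample_arch, evaluate, n: int = 8):
        candidates = [sample_arch() for _ in range(n)]
        return max(candidates, key=evaluate)
```

A framework built this way can dispatch purely from a config string, e.g. `SEARCHERS['my_random_search']().search(...)`.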
@@ -32,8 +32,7 @@ Archai requires Python 3.6+ and is tested with PyTorch 1.3+. For network visualization
* `cd archai`
* The command below will run every algorithm through a few batches of CIFAR-10,
for both search and final training (a rough sketch of this flow appears after this list)
* `python scripts/main.py`
* If all went well, now you have a working installation!
* `python scripts/main.py`. If all went well, you have a working installation! Yay!
* Note that one can also build and use the CUDA 10.1 or 9.2 compatible Docker images
provided in the [dockers](dockers) folder. These images are useful
for large-scale experimentation on compute clusters.
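As promised above, here is a rough sketch of that smoke-test flow. It is not Archai's actual `scripts/main.py`, and the algorithm names are assumptions:

```python
# Illustrative sketch only -- not Archai's real scripts/main.py. It mimics
# the smoke test described above: every algorithm runs through a few
# CIFAR-10 batches in both search and final-training mode.
ALGOS = ['darts', 'petridish', 'xnas', 'random']  # assumed algorithm names

def run_smoke_test(max_batches: int = 4) -> None:
    for algo in ALGOS:
        for mode in ('search', 'final_training'):
            # A real run would build the search space / model here and
            # step through `max_batches` batches of CIFAR-10.
            print(f'{algo}: {mode} on {max_batches} cifar10 batches ... ok')

if __name__ == '__main__':
    run_smoke_test()
```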
@@ -64,7 +63,8 @@ Currently, the following algorithms are implemented:
* [Petridish](https://papers.nips.cc/paper/9202-efficient-forward-architecture-search.pdf)
* [DARTS](https://deepmind.com/research/publications/darts-differentiable-architecture-search)
* Random search baseline
* [XNAS](http://papers.nips.cc/paper/8472-xnas-neural-architecture-search-with-expert-advice.pdf) (currently experimental and not yet fully reproduced, as the XNAS authors had not released source code at the time of writing)
* [XNAS](http://papers.nips.cc/paper/8472-xnas-neural-architecture-search-with-expert-advice.pdf) (currently experimental and not yet fully reproduced, as the authors had not released source code at the time of writing)
* [DATA](https://papers.nips.cc/paper/8374-data-differentiable-architecture-approximation.pdf) (currently experimental and not yet fully reproduced, as the authors had not released source code at the time of writing)
See [Roadmap](#roadmap) for details on new algorithms coming soon.
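For a flavor of how one of the algorithms above works, below is a minimal PyTorch sketch of DARTS's core idea: each edge in the search cell computes a softmax-weighted mixture of candidate operations, and the mixture weights (alphas) are learned by gradient descent alongside the network weights. The three-op candidate set is a simplification for illustration, not the paper's full operation list:

```python
# Minimal sketch of DARTS's continuous relaxation (simplified; the real
# algorithm uses a larger operation set and bi-level optimization of
# architecture parameters vs. network weights).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One search-cell edge: a softmax-weighted mixture of candidate ops."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.AvgPool2d(3, stride=1, padding=1),         # 3x3 average pooling
        ])
        # One learnable architecture parameter (alpha) per candidate op.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

For example, `MixedOp(16)(torch.randn(2, 16, 32, 32))` returns a tensor of the same shape; after search, the op with the largest `alpha` on each edge is kept to form the discrete final architecture.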
@@ -82,12 +82,17 @@ See the detailed [instructions](tools/azure/README.md).
## Contribute
We would love your contributions, feedback, questions, and feature requests! Please [file a GitHub issue](https://github.com/microsoft/archai/issues/new) or send us a pull request. Please review the [Microsoft Code of Conduct](https://opensource.microsoft.com/codeofconduct/) and [learn more](https://github.com/microsoft/archai/blob/master/CONTRIBUTING.md).
We would love your contributions, feedback, questions, algorithm implementations, and feature requests! Please [file a GitHub issue](https://github.com/microsoft/archai/issues/new) or send us a pull request. Please review the [Microsoft Code of Conduct](https://opensource.microsoft.com/codeofconduct/) and [learn more](https://github.com/microsoft/archai/blob/master/CONTRIBUTING.md).
## Contact
Join the Archai group on [Facebook](https://www.facebook.com/groups/1133660130366735/) to stay up to date or ask any questions.
## Team
Archai was created and is maintained by [Shital Shah](https://shitalshah.com) and [Debadeepta Dey](https://www.debadeepta.com) in the [Reinforcement Learning Group](https://www.microsoft.com/en-us/research/group/reinforcement-learning-redmond/) at Microsoft Research AI, Redmond, USA.
They look forward to Archai becoming more community-driven, and to including major contributors here.
## Credits
Archai builds on several open-source codebases. These include: [Fast AutoAugment](https://github.com/kakaobrain/fast-autoaugment), [pt.darts](https://github.com/khanrc/pt.darts), [DARTS-PyTorch](https://github.com/dragen1860/DARTS-PyTorch), [DARTS](https://github.com/quark0/darts), [petridishnn](https://github.com/microsoft/petridishnn), [PyTorch CIFAR-10 Models](https://github.com/huyvnphan/PyTorch-CIFAR10), [NVidia DeepLearning Examples](https://github.com/NVIDIA/DeepLearningExamples), [PyTorch Warmup Scheduler](https://github.com/ildoonet/pytorch-gradual-warmup-lr). Please see the `install_requires` section in [setup.py](setup.py) for an up-to-date list of dependencies. If you feel credit for any material is missing, please let us know by filing a [GitHub issue](https://github.com/microsoft/archai/issues/new).

View file

@@ -4,10 +4,11 @@ We are striving to rapidly update the list of algorithms and encourage pull-requests
of new algorithms.
Here is our current deck:
* [PC-DARTS](https://arxiv.org/abs/1907.05737)
* [Geometric NAS](https://arxiv.org/pdf/2004.07802.pdf)
* [ProxyLess NAS](https://arxiv.org/abs/1812.00332)
* [SNAS](https://arxiv.org/abs/1812.09926)
* [DATA](http://papers.nips.cc/paper/8374-data-differentiable-architecture-approximation.pdf)
* [RandNAS](https://liamcli.com/assets/pdf/randnas_arxiv.pdf)
Please file issues for algorithms you would like to see implemented in Archai. We will try our best to accommodate.