Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.

Archai: Platform for Neural Architecture Search


Archai is a Neural Architecture Search (NAS) platform that allows you to generate efficient deep networks for your applications. It offers the following advantages:

  • 🔬 Easy mix-and-match between different algorithms;
  • 📈 Self-documented hyper-parameters and fair comparison;
  • Extensible and modular to allow rapid experimentation;
  • 📂 Powerful configuration system and easy-to-use tools.

Please refer to the documentation for more information.

Package compatibility: Python 3.7+ and PyTorch 1.2.0+.

OS compatibility: Windows, Linux and MacOS.

Quickstart

Installation

There are several ways to install Archai; whichever you choose, we recommend installing it within a virtual environment, such as conda or pyenv.

PyPI

Installing the released package from PyPI is the easiest way to get started:

pip install archai
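After installing, you can confirm that the package is visible to your environment. A minimal sketch using only the standard library (assumes Python 3.8+ for `importlib.metadata`; the `archai_version` helper is hypothetical, not part of Archai's API):

```python
# Check whether the "archai" distribution is installed, using only the stdlib.
from importlib import metadata
from typing import Optional


def archai_version() -> Optional[str]:
    """Return the installed Archai version string, or None if it is absent."""
    try:
        return metadata.version("archai")
    except metadata.PackageNotFoundError:
        return None


print(archai_version())  # e.g. "0.6.6" after a successful `pip install archai`
```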

Source (development)

Alternatively, one can clone this repository and install the bleeding-edge version:

git clone https://github.com/microsoft/archai.git
cd archai
./install.sh  # on Windows, use install.bat

Please refer to the installation guide for more information.

Running an Algorithm

To run a specific NAS algorithm, specify it by the --algos switch:

python scripts/main.py --algos darts --full

Please refer to running algorithms for more information on available switches and algorithms.
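When sweeping over several algorithms, the switch-based command above can also be assembled and launched from a Python driver script. A minimal sketch (the `build_nas_command` helper is hypothetical; the script path and switches are taken from the example above):

```python
import subprocess
import sys
from typing import List


def build_nas_command(algo: str, full: bool = False) -> List[str]:
    """Assemble the argv list for the scripts/main.py invocation shown above."""
    cmd = [sys.executable, "scripts/main.py", "--algos", algo]
    if full:
        cmd.append("--full")
    return cmd


cmd = build_nas_command("darts", full=True)
print(" ".join(cmd))
# Uncomment inside a cloned archai checkout to actually run the search:
# subprocess.run(cmd, check=True)
```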

Tutorials

The best way to familiarize yourself with Archai is to take a quick tour through our 30-minute tutorial. Additionally, one can dive into the Petridish tutorial, developed at Microsoft Research and included in Archai.

We highly recommend Visual Studio Code to take advantage of predefined run configurations and interactive debugging. From the archai directory, launch Visual Studio Code, select Run (Ctrl+Shift+D), choose the configuration and click on Play.

Alternatively, you can use Archai on Azure to run NAS experiments at scale.

Support

Contributions

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Team

Archai has been created and maintained by Shital Shah, Debadeepta Dey, Gustavo de Rosa, Caio Mendes, Piero Kauffmann, and Ofer Dekel at Microsoft Research.

Credits

Archai builds on several open-source codebases. These include: Fast AutoAugment, pt.darts, DARTS-PyTorch, DARTS, petridishnn, PyTorch CIFAR-10 Models, NVidia DeepLearning Examples, PyTorch Warmup Scheduler, NAS Evaluation is Frustratingly Hard, NASBench-PyTorch.

Please see the install_requires section in setup.py for an up-to-date list of dependencies. If you feel credit for any material is missing, please let us know by filing an issue.

License

This project is released under the MIT License. Please review the LICENSE file for more details.

Trademark

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.