Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.
Archai: Platform for Neural Architecture Search

Archai is a platform for Neural Architecture Search (NAS) that allows you to generate efficient deep networks for your applications. It offers the following advantages:

  • 🔬 Easy mix-and-match between different algorithms;
  • 📈 Self-documented hyper-parameters and fair comparison;
  • Extensible and modular to allow rapid experimentation;
  • 📂 Powerful configuration system and easy-to-use tools.

Please refer to the documentation for more information.

Archai is compatible with Python 3.6+ and PyTorch 1.2+.

Quickstart

Installation

There are several ways to install Archai, ranging from PyPI to Docker. Choose whichever you prefer, but note that regardless of the method, we recommend using Archai within a virtual environment, such as conda or pyenv (a minimal sketch follows below).

Archai is compatible with Windows, Linux, and macOS.
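
For example, a minimal conda-based setup might look like this (the environment name archai is purely illustrative):

conda create -n archai python=3.7
conda activate archai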

PyPI

PyPI provides ready-to-use packages and is the easiest way to install Archai:

pip install archai
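
Once installed, a quick sanity check is to inspect the package metadata and import it:

pip show archai           # prints the installed version and metadata
python -c "import archai" # verifies that the package imports cleanly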

Source (development)

Alternatively, one can clone this repository and install the bleeding-edge version:

git clone https://github.com/microsoft/archai.git
cd archai
bash install.sh # on Windows, use install.bat
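
If you prefer to drive the installation with pip directly, an editable install is the usual alternative for development work (a sketch, assuming the standard setup.py workflow):

pip install -e .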

Please refer to the installation guide for more information.

Running Experiments

To run a specific NAS algorithm, specify it with the --algos switch:

python scripts/main.py --algos darts --full
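
The same switch selects other algorithms; for instance, a run of Petridish (see the tutorials below) would follow the same pattern. This is a sketch that assumes the algorithm is registered under the name petridish:

python scripts/main.py --algos petridish --full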

Please refer to running algorithms for more information on available switches and algorithms.

Tutorials

The best way to familiarize yourself with Archai is to take a quick tour through our 30-minute tutorial. Additionally, you can dive into the Petridish tutorial, developed at Microsoft Research and available in Archai.

Visual Studio Code

We highly recommend Visual Studio Code to take advantage of predefined run configurations and interactive debugging.

From the archai directory, launch Visual Studio Code. Select the Run button (Ctrl+Shift+D), choose the run configuration you want, and click on the Play icon.
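
For example, from a terminal:

cd archai
code .  # opens the repository in Visual Studio Code (assumes the code CLI is on your PATH)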

AzureML Experiments

To run NAS experiments at scale, you can use Archai on Azure.

Support

Join Archai on Facebook to stay up to date or to ask questions.

Contributions

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Team

Archai was created and is maintained by Shital Shah, Debadeepta Dey, Gustavo de Rosa, Caio Cesar Teodoro Mendes, and Piero Kauffmann in the Reinforcement Learning Group at Microsoft Research, Redmond, USA. Archai has benefited immensely from discussions with Ofer Dekel, John Langford, Rich Caruana, Eric Horvitz, and Alekh Agarwal.

Credits

Archai builds on several open-source codebases. These include: Fast AutoAugment, pt.darts, DARTS-PyTorch, DARTS, petridishnn, PyTorch CIFAR-10 Models, NVidia DeepLearning Examples, PyTorch Warmup Scheduler, NAS Evaluation is Frustratingly Hard, and NASBench-PyTorch.

Please see the install_requires section in setup.py for an up-to-date list of dependencies. If you feel credit for any material is missing, please let us know by filing an issue.

License

This project is released under the MIT License. Please review the LICENSE file for more details.

Trademark

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.