
# SPACH

This repository contains PyTorch evaluation code, training code and pretrained models for the following projects:

- SPACH (A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP)
- sMLPNet (Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?)
- ShiftViT (When Shift Operation Meets Vision Transformer)

Other unofficial implementations:

## Main Results on ImageNet with Pretrained Models

| name | acc@1 | #params | FLOPs | url |
| --- | --- | --- | --- | --- |
| SPACH-Conv-MS-XXS | 73.1 | 5M | 0.7G | github |
| SPACH-Trans-MS-XXS | 65.4 | 2M | 0.5G | github |
| SPACH-MLP-MS-XXS | 74.5 | 6M | 0.9G | github |
| SPACH-Conv-MS-S | 81.6 | 44M | 7.2G | github |
| SPACH-Trans-MS-S | 82.9 | 40M | 7.6G | github |
| SPACH-MLP-MS-S | 82.1 | 46M | 8.2G | github |
| SPACH-Hybrid-MS-S | 83.7 | 63M | 11.2G | github |
| SPACH-Hybrid-MS-S+ | 83.9 | 63M | 12.3G | github |
| sMLPNet-T | 81.9 | 24M | 5.0G | |
| sMLPNet-S | 83.1 | 49M | 10.3G | github |
| sMLPNet-B | 83.4 | 66M | 14.0G | github |
| Shift-T / light | 79.4 | 20M | 3.0G | github |
| Shift-T | 81.7 | 29M | 4.5G | github |
| Shift-S / light | 81.6 | 34M | 5.7G | github |
| Shift-S | 82.8 | 50M | 8.8G | github |
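
The #params column can be sanity-checked directly from a model instance. Below is a minimal sketch in plain PyTorch; `build_model` is a hypothetical placeholder for however you construct a model from `models/registry.py`, not an API of this repo:

```python
import torch

def count_params_m(model: torch.nn.Module) -> float:
    """Number of trainable parameters, in millions, matching the #params column."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Hypothetical usage -- `build_model` is a placeholder:
# model = build_model("spach_ms_s_patch4_224_hybrid")
# print(f"{count_params_m(model):.0f}M")  # expect ~63M for SPACH-Hybrid-MS-S
```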

## Usage

### Install

First, clone the repo and install the requirements:

```
git clone https://github.com/microsoft/Spach
cd Spach
pip install -r requirements.txt
```

### Data preparation

Download and extract the ImageNet train and val images from http://image-net.org/. The directory structure is the standard layout expected by torchvision's `datasets.ImageFolder`: training images go in the `train/` folder and validation images in the `val/` folder:

```
/path/to/imagenet/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  val/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
```
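
Before launching a job, the layout can be verified quickly with torchvision itself; `ImageFolder` is the loader the paragraph above refers to (the path is a placeholder):

```python
from torchvision import datasets, transforms

# Standard ImageNet eval preprocessing; ImageFolder infers classes from folder names.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

for split in ("train", "val"):
    ds = datasets.ImageFolder(f"/path/to/imagenet/{split}", transform=transform)
    print(f"{split}: {len(ds.classes)} classes, {len(ds)} images")  # expect 1000 classes
```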

### Evaluation

To evaluate a pre-trained model on the ImageNet val set with a single GPU, run:

```
python main.py --eval --resume <checkpoint> --model <model-name> --data-path <imagenet-path>
```

For example, to evaluate the SPACH-Hybrid-MS-S model, run

```
python main.py --eval --resume spach_ms_hybrid_s.pth --model spach_ms_s_patch4_224_hybrid --data-path <imagenet-path>
```

giving

```
* Acc@1 83.658 Acc@5 96.762 loss 0.688
```

You can find all supported models in `models/registry.py`.
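
Since the code builds on DeiT (see Acknowledgement), the models are likely registered through timm's model registry; under that assumption, importing the local `models` package and filtering timm's registry would enumerate them:

```python
# A sketch assuming timm-style model registration (as in DeiT-derived code bases).
import timm
import models  # local package; importing it should register the model variants

print(timm.list_models("*spach*"))  # wildcard filtering is built into timm
```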

### Training

To train a model, run the script below. Distributed training is recommended even on a single-GPU node.

```
python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> --use_env main.py \
    --model <model-name> \
    --data-path <imagenet-path> \
    --output_dir <output-path> \
    --dist-eval
```
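
As a concrete instance of the command above, an 8-GPU run for the SPACH-Hybrid-MS-S model might look like this (the model name is taken from the evaluation example; the paths are placeholders):

```
python -m torch.distributed.launch --nproc_per_node 8 --use_env main.py \
    --model spach_ms_s_patch4_224_hybrid \
    --data-path /path/to/imagenet \
    --output_dir ./output/spach_hybrid_s \
    --dist-eval
```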

## Citation

```
@article{zhao2021battle,
  title={A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP},
  author={Zhao, Yucheng and Wang, Guangting and Tang, Chuanxin and Luo, Chong and Zeng, Wenjun and Zha, Zheng-Jun},
  journal={arXiv preprint arXiv:2108.13002},
  year={2021}
}
```

```
@article{tang2021sparse,
  title={Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?},
  author={Tang, Chuanxin and Zhao, Yucheng and Wang, Guangting and Luo, Chong and Xie, Wenxuan and Zeng, Wenjun},
  journal={arXiv preprint arXiv:2109.05422},
  year={2021}
}
```

## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

## Acknowledgement

Our code is built on top of DeiT. We measure throughput following Swin Transformer.