LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks

LQ-Nets

By Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, Gang Hua.

Microsoft Research Asia (MSRA).

Introduction

This repository contains the training code of LQ-Nets introduced in our ECCV 2018 paper:

D. Zhang*, J. Yang*, D. Ye* and G. Hua. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks. ECCV 2018 (*: Equal contribution) PDF
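
At its core, LQ-Nets quantize a full-precision value as the inner product of a small learnable basis vector and a binary encoding, and learn the basis jointly with the network by minimizing quantization error. The sketch below illustrates that idea in plain NumPy under my own naming; it is an illustrative toy, not the repository's implementation (which lives in learned_quantization.py):

```python
import itertools
import numpy as np

def quantize(x, basis, binary=(-1.0, 1.0)):
    """Snap each entry of x to the nearest level spanned by the basis.

    The quantization levels are all inner products <basis, e> with
    e in {binary}^K; the paper uses {-1, 1} for weights and {0, 1}
    for activations.
    """
    K = len(basis)
    codes = np.array(list(itertools.product(binary, repeat=K)))  # (2^K, K)
    levels = codes.dot(basis)                                    # (2^K,)
    idx = np.abs(x[..., None] - levels).argmin(-1)
    return levels[idx], codes[idx]

def update_basis(x, codes):
    """Closed-form least-squares refit of the basis for fixed encodings."""
    return np.linalg.lstsq(codes.reshape(-1, codes.shape[-1]),
                           x.ravel(), rcond=None)[0]

# Alternate encoding and basis updates, QEM-style, on toy data.
x = np.random.randn(1024)
basis = np.array([1.0, 0.5])  # K = 2 -> a 2-bit quantizer
for _ in range(10):
    _, codes = quantize(x, basis)
    basis = update_basis(x, codes)
xq, _ = quantize(x, basis)
print("quantization MSE: %.6f" % np.mean((x - xq) ** 2))
```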

Dependencies

  • Python 2.7 or 3.3+
  • Python bindings for OpenCV
  • TensorFlow >= 1.3.0
  • TensorPack
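
If these are missing from your environment, they can usually be installed with pip (the package names below are the common PyPI names, assumed rather than pinned by this repository):

pip install opencv-python "tensorflow>=1.3.0" tensorpack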

Usage

Download the ImageNet dataset and decompress it into a directory structure like the following:

dir/
  train/
    n01440764/
      n01440764_10026.JPEG
      ...
    ...
  val/
    ILSVRC2012_val_00000001.JPEG
    ...

To train a quantized "pre-activation" ResNet-18, simply run

python imagenet.py --gpu 0,1,2,3 --data /PATH/TO/IMAGENET --mode preact --depth 18 --qw 1 --qa 2 --logdir_id w1a2 

After training, the resulting model will be stored in ./train_log/w1a2.

For more options, please run python imagenet.py -h.
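
The same script handles other depths and bit-widths; for example, a 2-bit-weight/2-bit-activation ResNet-50 run would presumably look like the following (flag values extrapolated from the command above, so verify them against -h):

python imagenet.py --gpu 0,1,2,3 --data /PATH/TO/IMAGENET --mode preact --depth 50 --qw 2 --qa 2 --logdir_id w2a2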

Results

ImageNet Experiments

Quantizing both weights and activations

| Model | Bit-width (W/A) | Top-1 (%) | Top-5 (%) |
|---|---|---|---|
| ResNet-18 | 1/2 | 62.6 | 84.3 |
| ResNet-18 | 2/2 | 64.9 | 85.9 |
| ResNet-18 | 3/3 | 68.2 | 87.9 |
| ResNet-18 | 4/4 | 69.3 | 88.8 |
| ResNet-34 | 1/2 | 66.6 | 86.9 |
| ResNet-34 | 2/2 | 69.8 | 89.1 |
| ResNet-34 | 3/3 | 71.9 | 90.2 |
| ResNet-50 | 1/2 | 68.7 | 88.4 |
| ResNet-50 | 2/2 | 71.5 | 90.3 |
| ResNet-50 | 3/3 | 74.2 | 91.6 |
| ResNet-50 | 4/4 | 75.1 | 92.4 |
| AlexNet | 1/2 | 55.7 | 78.8 |
| AlexNet | 2/2 | 57.4 | 80.1 |
| DenseNet-121 | 2/2 | 69.6 | 89.1 |
| VGG-Variant | 1/2 | 67.1 | 87.6 |
| VGG-Variant | 2/2 | 68.8 | 88.6 |
| GoogLeNet-Variant | 1/2 | 65.6 | 86.4 |
| GoogLeNet-Variant | 2/2 | 68.2 | 88.1 |

Quantizing weights only

| Model | Bit-width (W/A) | Top-1 (%) | Top-5 (%) |
|---|---|---|---|
| ResNet-18 | 2/32 | 68.0 | 88.0 |
| ResNet-18 | 3/32 | 69.3 | 88.8 |
| ResNet-18 | 4/32 | 70.0 | 89.1 |
| ResNet-50 | 2/32 | 75.1 | 92.3 |
| ResNet-50 | 4/32 | 76.4 | 93.1 |
| AlexNet | 2/32 | 60.5 | 82.7 |

More results can be found in the paper.

Citation

If you use our code or models in your research, please cite our paper with

@inproceedings{ZhangYangYeECCV2018,
    author = {Zhang, Dongqing and Yang, Jiaolong and Ye, Dongqiangzi and Hua, Gang},
    title = {LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2018}
}