FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/
James Reed d4bfa96cda int8 specialization for AVX2 Quantize routine (#120)
Summary:
This adds a specialization for `int8` to the AVX2 `Quantize` routine.

I also tried adding a specialization for `int32` (the final datatype we support in PyTorch quantization), but it seemed to introduce numerical issues stemming from the difference between these implementations:

https://github.com/pytorch/FBGEMM/blob/master/include/fbgemm/QuantUtils.h#L63

vs

https://github.com/pytorch/FBGEMM/blob/master/src/QuantUtilsAvx2.cc#L82
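
One plausible source of such divergence, shown in a minimal sketch below (not FBGEMM code, arbitrary example values): a scalar path that divides by the scale and then rounds, versus a vectorized path that multiplies by a precomputed inverse scale before converting, can produce results that differ by one unit because the two float products are not always bit-identical.

// Minimal sketch (not FBGEMM code) of how a scalar "divide, then round"
// quantizer and a vectorized "multiply by 1/scale, then convert" quantizer
// can disagree for some inputs. The scale and input value are arbitrary.
#include <cmath>
#include <cstdio>
#include <immintrin.h>

int main() {
  const float scale = 0.057f; // hypothetical quantization scale
  const float x = 3.4215f;    // hypothetical input value

  // Scalar reference style: divide by the scale, then round to nearest.
  const int ref = static_cast<int>(std::nearbyint(x / scale));

  // Vectorized style: premultiply by the inverse scale, then convert.
  // x * (1/scale) is not always bit-identical to x / scale, so the
  // rounded results can differ by one.
  const float inv_scale = 1.0f / scale;
  const __m256 v = _mm256_set1_ps(x * inv_scale);
  const __m256i r = _mm256_cvtps_epi32(v);
  const int vec = _mm256_extract_epi32(r, 0);

  std::printf("scalar=%d vectorized=%d\n", ref, vec);
  return 0;
}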
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/120

Reviewed By: driazati

Differential Revision: D17115198

Pulled By: jamesr66a

fbshipit-source-id: 119145bb99235a7545389afa61483060200cc2b7
2019-08-29 11:26:05 -07:00
.circleci Fix CI indent error 2019-05-14 15:12:04 -07:00
bench Support pointwise with unified convolution interface as well (#108) 2019-07-18 16:03:41 -07:00
cmake/modules Use submodules instead of cmake downloads 2019-05-14 13:25:14 -07:00
include/fbgemm int8 specialization for AVX2 Quantize routine (#120) 2019-08-29 11:26:05 -07:00
src int8 specialization for AVX2 Quantize routine (#120) 2019-08-29 11:26:05 -07:00
test Per channel support in fbgemmConv (#119) 2019-08-20 16:58:08 -07:00
third_party Update asmjit to version that includes a bug fix (#118) 2019-08-14 15:52:54 -07:00
.gitignore FP16Benchmark: Allow fp32 comparison using cblas (#56) 2019-01-14 11:08:48 -08:00
.gitmodules Use submodules instead of cmake downloads 2019-05-14 13:25:14 -07:00
CMakeLists.txt Integrate VNNI into FBGEMM master branch (#114) 2019-08-09 11:33:13 -07:00
CODE_OF_CONDUCT.md Initial commit 2018-10-30 14:56:00 -07:00
CONTRIBUTING.md Initial commit 2018-10-30 14:56:00 -07:00
LICENSE Initial commit 2018-10-30 14:56:00 -07:00
README.md Update README.md with mentioning PyTorch (#116) 2019-08-12 09:25:22 -07:00

README.md

FBGEMM

Linux Build: CircleCI

FBGEMM (Facebook GEneral Matrix Multiplication) is a low-precision, high-performance matrix-matrix multiplication and convolution library for server-side inference.

The library provides efficient low-precision general matrix multiplication for small batch sizes and support for accuracy-loss minimizing techniques such as row-wise quantization and outlier-aware quantization. FBGEMM also exploits fusion opportunities in order to overcome the unique challenges of matrix multiplication at lower precision with bandwidth-bound operations.
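Row-wise quantization, for instance, gives each row of a matrix its own scale and zero point rather than a single pair for the whole matrix, which limits accuracy loss when value ranges differ widely between rows. A minimal sketch of that scheme (a hypothetical helper, not FBGEMM's API):

// Hypothetical helper illustrating row-wise (per-row) affine quantization:
// each row of a float matrix gets its own scale and zero point.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RowQParams { float scale; std::int32_t zero_point; };

void QuantizeRowwise(const std::vector<float>& src, int rows, int cols,
                     std::vector<std::uint8_t>& dst,
                     std::vector<RowQParams>& qparams) {
  dst.resize(src.size());
  qparams.resize(rows);
  for (int r = 0; r < rows; ++r) {
    const float* row = src.data() + r * cols;
    // Include 0 in the range so that zero is exactly representable.
    const float min_v = std::min(0.0f, *std::min_element(row, row + cols));
    const float max_v = std::max(0.0f, *std::max_element(row, row + cols));
    const float scale = (max_v - min_v) / 255.0f; // map onto the uint8 range
    const std::int32_t zp =
        scale > 0 ? static_cast<std::int32_t>(std::nearbyint(-min_v / scale)) : 0;
    qparams[r] = {scale > 0 ? scale : 1.0f, zp};
    for (int c = 0; c < cols; ++c) {
      const float q = std::nearbyint(row[c] / qparams[r].scale) + zp;
      dst[r * cols + c] =
          static_cast<std::uint8_t>(std::min(255.0f, std::max(0.0f, q)));
    }
  }
}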

FBGEMM is used as a backend of Caffe2 and PyTorch quantized operators for x86 machines.

Examples

The tests (in the test folder) and benchmarks (in the bench folder) are good examples of how to use FBGEMM. For instance, the SpMDMTest test in test/PackedRequantizeAcc16Test.cc shows how to combine row-offset calculation with packing of the A matrix (PackAWithRowOffset), how to pack the B matrix (PackBMatrix), and how to construct an output pipeline (sparse_matrix*dense_matrix --> requantization --> nop) that is fused with the inner GEMM macro kernel.
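Independently of FBGEMM's packing classes, the arithmetic such a pipeline performs is: accumulate the low-precision product in int32, then requantize the accumulator back to uint8 with a real-valued multiplier and an output zero point. A scalar reference sketch of that idea (hypothetical names, weight zero point assumed to be 0; not FBGEMM code):

// Scalar reference (not FBGEMM's implementation) of the
// "int8 GEMM -> int32 accumulate -> requantize to uint8" pipeline.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

void ReferenceGemmRequant(int M, int N, int K,
                          const std::vector<std::uint8_t>& A, // M x K activations
                          const std::vector<std::int8_t>& B,  // K x N weights
                          std::int32_t A_zero_point,
                          float C_multiplier,                  // e.g. Sa*Sb/Sc
                          std::int32_t C_zero_point,
                          std::vector<std::uint8_t>& C) {      // M x N output
  C.resize(static_cast<std::size_t>(M) * N);
  for (int i = 0; i < M; ++i) {
    for (int j = 0; j < N; ++j) {
      std::int32_t acc = 0;
      for (int k = 0; k < K; ++k) {
        // Subtract the activation zero point while accumulating in int32.
        acc += (static_cast<std::int32_t>(A[i * K + k]) - A_zero_point) *
               static_cast<std::int32_t>(B[k * N + j]);
      }
      // Requantize: scale the int32 accumulator back into the uint8 range.
      const float scaled = std::nearbyint(acc * C_multiplier) + C_zero_point;
      C[i * N + j] = static_cast<std::uint8_t>(
          std::min(255.0f, std::max(0.0f, scaled)));
    }
  }
}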

Build Notes

FBGEMM uses the standard CMake-based build flow.

Dependencies

FBGEMM requires gcc 4.9+ and a CPU with support for the AVX2 instruction set or higher. It has been tested on Mac OS X and Linux.

  • asmjit

For its inner kernels, FBGEMM takes a “one size doesn't fit all” approach, so the implementation dynamically generates efficient matrix-shape-specific vectorized code using a third-party library called asmjit. asmjit is required to build FBGEMM.

  • cpuinfo

FBGEMM detects CPU instruction-set support at runtime using the cpuinfo library and dispatches optimized kernels for the detected instruction set; cpuinfo is therefore required to detect the CPU type. A small dispatch sketch follows this dependency list.

  • googletest

googletest is required to build and run FBGEMM's tests, but it is not needed if you don't want to run them. Building of tests is on by default; turn it off by setting FBGEMM_BUILD_TESTS to OFF.
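As a rough illustration of the runtime dispatch described for cpuinfo above (a sketch with hypothetical kernel names, not FBGEMM's internal dispatch code):

// Rough sketch of runtime kernel dispatch with cpuinfo; the kernel
// functions here are hypothetical stand-ins, not FBGEMM internals.
#include <cpuinfo.h>
#include <cstdio>

void gemm_avx512() { std::printf("using AVX-512 kernel\n"); }
void gemm_avx2()   { std::printf("using AVX2 kernel\n"); }
void gemm_scalar() { std::printf("using scalar fallback\n"); }

int main() {
  if (!cpuinfo_initialize()) {
    std::printf("cpuinfo initialization failed\n");
    return 1;
  }
  if (cpuinfo_has_x86_avx512f()) {
    gemm_avx512();
  } else if (cpuinfo_has_x86_avx2()) {
    gemm_avx2();
  } else {
    gemm_scalar();
  }
  return 0;
}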

You can download asmjit, cpuinfo, and googletest yourself and set ASMJIT_SRC_DIR, CPUINFO_SRC_DIR, and GOOGLETEST_SOURCE_DIR, respectively, so that cmake can find these libraries. If any of these variables is not set, cmake builds the corresponding git submodule found in the third_party directory.

FBGEMM, in general, does not have any dependency on Intel MKL. However, for performance comparison, some benchmarks use MKL functions. If MKL is found, or an MKL path is provided with INTEL_MKL_DIR, the benchmarks are built with MKL and performance numbers are reported for MKL functions as well. However, if MKL is not found, the benchmarks are not built.

General build instructions are as follows:

git clone --recursive https://github.com/pytorch/FBGEMM.git
cd FBGEMM
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
mkdir build && cd build
cmake ..
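# Optional (a sketch using variables named in this README; paths are placeholders):
# cmake -DFBGEMM_BUILD_TESTS=OFF ..        # skip building the tests
# cmake -DINTEL_MKL_DIR=/path/to/mkl ..    # let the benchmarks find MKL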
make

To run the tests after building FBGEMM (if tests are built), use the following command:

make test

Installing FBGEMM

make install

How FBGEMM works

For a high-level overview, the design philosophy, and brief descriptions of the various parts of FBGEMM, please see our blog post.

Full documentation

We have extensively used comments in our source files. The best and most up-to-date documentation is available in the source files.

Join the FBGEMM community

See the CONTRIBUTING file for how to help out.

License

FBGEMM is BSD licensed, as found in the LICENSE file.