Mirror of https://github.com/mozilla/FBGEMM.git
d4bfa96cda
Summary: This adds a specialization for `int8` to the AVX2 `Quantize` routine. I also tried adding a specialization for `int32` (the final datatype we support in PyTorch quantization), but it seemed to introduce numerical issues stemming from the difference between the two implementations:
https://github.com/pytorch/FBGEMM/blob/master/include/fbgemm/QuantUtils.h#L63 vs.
https://github.com/pytorch/FBGEMM/blob/master/src/QuantUtilsAvx2.cc#L82

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/120

Reviewed By: driazati

Differential Revision: D17115198

Pulled By: jamesr66a

fbshipit-source-id: 119145bb99235a7545389afa61483060200cc2b7
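For context on the linked discrepancy: as I read the two files at the time of this commit, the scalar `Quantize` in QuantUtils.h divides the input by the scale, while the AVX2 kernel in QuantUtilsAvx2.cc multiplies by a precomputed reciprocal of the scale. The sketch below is a simplified illustration of how those two roundings can disagree by one quantized step on some inputs, which is one plausible source of the `int32` mismatches described above. The helper names are hypothetical and saturation/clamping is omitted; this is not FBGEMM's actual code.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for the scalar path (QuantUtils.h style):
// divide by the scale, then round to nearest.
int32_t quantize_div(float x, float scale, int32_t zero_point) {
  return zero_point + static_cast<int32_t>(std::nearbyintf(x / scale));
}

// Hypothetical stand-in for the vectorized path (QuantUtilsAvx2.cc style):
// multiply by a precomputed reciprocal of the scale, then round to nearest.
int32_t quantize_mul_inv(float x, float inv_scale, int32_t zero_point) {
  return zero_point + static_cast<int32_t>(std::nearbyintf(x * inv_scale));
}

int main() {
  const float scale = 0.123f;
  const float inv_scale = 1.0f / scale; // not exactly 1/scale in float

  // Scan a range of inputs; the two paths disagree whenever x / scale and
  // x * inv_scale land on opposite sides of a rounding boundary.
  int mismatches = 0;
  for (int i = 0; i < 1000000; ++i) {
    const float x = i * 0.001f;
    const int32_t a = quantize_div(x, scale, 0);
    const int32_t b = quantize_mul_inv(x, inv_scale, 0);
    if (a != b) {
      if (++mismatches <= 5) {
        std::printf("x=%.6f: div=%d, mul-inv=%d\n", x, a, b);
      }
    }
  }
  std::printf("%d mismatches in 1000000 samples\n", mismatches);
  return 0;
}
```

Under this assumption, the off-by-one differences exist for `int8` as well, but the much wider `int32` output range gives them far more opportunities to surface in comparisons against the scalar reference.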
ConvUtils.h
Fbgemm.h
FbgemmBuild.h
FbgemmFP16.h
FbgemmI8DepthwiseAvx2.h
FbgemmI8Spmdm.h
OutputProcessing-inl.h
PackingTraits-inl.h
QuantUtils.h
QuantUtilsAvx2.h
Types.h
Utils.h
UtilsAvx2.h