The Edge Machine Learning library

This repository provides code for machine learning algorithms for edge devices developed at Microsoft Research India.

Machine learning models for edge devices need a small footprint in terms of storage, prediction latency, and energy. One setting where such models are desirable is resource-scarce devices and sensors in the Internet of Things (IoT). Making real-time predictions locally on IoT devices, without connecting to the cloud, requires models that fit in a few kilobytes.

Contents

Algorithms that shine in this setting in terms of both model size and compute, namely:

  • Bonsai: Strong and shallow non-linear tree-based classifier.
  • ProtoNN: Prototype based k-nearest neighbors (kNN) classifier.
  • EMI-RNN: Training routine to recover the critical signature from time series data for faster and more accurate RNN predictions.
  • Shallow RNN: A meta-architecture for training RNNs that can be applied to streaming data.
  • FastRNN & FastGRNN - FastCells: Fast, Accurate, Stable and Tiny (Gated) RNN cells.
  • DROCC: Deep Robust One-Class Classification for training robust anomaly detectors.
  • RNNPool: An efficient non-linear pooling operator for RAM constrained inference.

These algorithms can train models for classical supervised learning problems with memory requirements that are orders of magnitude lower than other modern ML algorithms. The trained models can be loaded onto edge devices such as IoT devices/sensors, and used to make fast and accurate predictions completely offline.
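To give a concrete flavor of these models, below is a minimal sketch of the FastGRNN cell update from the NeurIPS '18 paper, written in plain PyTorch for illustration only. It is not the edgeml_pytorch implementation (which additionally supports low-rank, sparse, and quantized parameters); the class and variable names here are made up for the sketch.

# Minimal FastGRNN cell sketch (illustration only, not the edgeml_pytorch code).
import torch
import torch.nn as nn

class FastGRNNCellSketch(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # Full-rank W and U for simplicity; the library also supports low-rank factors.
        self.W = nn.Linear(input_size, hidden_size, bias=False)
        self.U = nn.Linear(hidden_size, hidden_size, bias=False)
        self.b_gate = nn.Parameter(torch.zeros(hidden_size))
        self.b_update = nn.Parameter(torch.zeros(hidden_size))
        # Scalar residual-gate parameters (zeta, nu), passed through a sigmoid.
        self.zeta = nn.Parameter(torch.tensor(1.0))
        self.nu = nn.Parameter(torch.tensor(-4.0))

    def forward(self, x_t, h_prev):
        pre = self.W(x_t) + self.U(h_prev)          # pre-activation shared by gate and update
        z = torch.sigmoid(pre + self.b_gate)        # gate
        h_tilde = torch.tanh(pre + self.b_update)   # candidate state
        zeta, nu = torch.sigmoid(self.zeta), torch.sigmoid(self.nu)
        # h_t = (zeta * (1 - z) + nu) * h_tilde + z * h_prev
        return (zeta * (1.0 - z) + nu) * h_tilde + z * h_prev

# Step the cell over a short random time series.
cell = FastGRNNCellSketch(input_size=16, hidden_size=32)
h = torch.zeros(1, 32)
for _ in range(10):
    h = cell(torch.randn(1, 16), h)
print(h.shape)  # torch.Size([1, 32])

The gate and the candidate state share the same W and U, which is one reason the cell needs far fewer parameters than a GRU or LSTM.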

A tool that adapts models trained by the above algorithms for inference with fixed-point arithmetic:

  • SeeDot: Floating-point to fixed-point quantization tool.
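To make the fixed-point idea concrete, here is a toy illustration of what such a tool automates; it is not SeeDot itself and does not use its API. Each tensor is stored as integers at a chosen scale 2^s, and floating-point arithmetic is replaced by integer arithmetic followed by a rescaling shift. SeeDot goes further: it infers suitable scales and generates the fixed-point inference code automatically.

# Toy float-to-fixed-point illustration (not SeeDot; scale chosen by hand).
import numpy as np

def to_fixed(x, scale_bits, dtype=np.int16):
    # Represent x approximately as round(x * 2**scale_bits) in an integer type.
    scaled = np.round(x * (1 << scale_bits))
    info = np.iinfo(dtype)
    return np.clip(scaled, info.min, info.max).astype(dtype)

def fixed_matmul(A_fx, B_fx, scale_bits):
    # int16 x int16 products accumulated in int32, then rescaled back
    # so the result stays in the same Q-format as the inputs.
    acc = A_fx.astype(np.int32) @ B_fx.astype(np.int32)
    return (acc >> scale_bits).astype(np.int16)

rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(4, 8))
x = rng.uniform(-1.0, 1.0, size=(8, 1))

s = 12                                  # scale 2**-12, picked by hand for this toy
y_fx = fixed_matmul(to_fixed(W, s), to_fixed(x, s), s)
print(y_fx / float(1 << s))             # close to the float result below
print(W @ x)

On microcontrollers without a floating-point unit, this kind of int16/int32 arithmetic is what makes inference fast enough to be practical.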

Applications demonstrating use cases of these algorithms:

  • GesturePod: Gesture recognition pipeline for microcontrollers.
  • MSC-RNN: Multi-scale cascaded RNN for analyzing Radar data.

Organization

  • The tf directory contains the edgeml_tf package which specifies these architectures in TensorFlow, and examples/tf contains sample training routines for these algorithms.
  • The pytorch directory contains the edgeml_pytorch package which specifies these architectures in PyTorch, and examples/pytorch contains sample training routines for these algorithms.
  • The cpp directory has training and inference code for Bonsai and ProtoNN algorithms in C++.
  • The applications directory has code/demonstrations of applications of the EdgeML algorithms.
  • The tools/SeeDot directory has the quantization tool to generate fixed-point inference code.
  • The c_reference directory contains the inference code (floating-point or quantized) for various algorithms in C.

Please see install/run instructions in the README pages within these directories.

Details and project pages

For details, please see our project page, our Microsoft Research page, the ICML '17 publications on the Bonsai and ProtoNN algorithms, the NeurIPS '18 publications on EMI-RNN and FastGRNN, the PLDI '19 publication on the SeeDot compiler, the UIST '19 publication on GesturePod, the BuildSys '19 publication on MSC-RNN, the NeurIPS '19 publication on Shallow RNNs, the ICML '20 publication on DROCC, and the NeurIPS '20 publication on RNNPool.

Also check out the ELL project, which can provide optimized binaries for some of the ONNX models trained by this library.

Contributors

Code for the algorithms, applications, and tools has been contributed by the contributors to this project. New contributors are welcome.

Please email us your comments, criticism, and questions.

If you use software from this library in your work, please use the BibTeX entry below for citation.

@misc{edgeml04,
   author = {{Dennis, Don Kurian and Gaurkar, Yash and Gopinath, Sridhar and Goyal, Sachin 
              and Gupta, Chirag and Jain, Moksh and Jaiswal, Shikhar and Kumar, Ashish and
              Kusupati, Aditya and  Lovett, Chris and Patil, Shishir G and Saha, Oindrila and
              Simhadri, Harsha Vardhan}},
   title = {{EdgeML: Machine Learning for resource-constrained edge devices}},
   url = {https://github.com/Microsoft/EdgeML},
   version = {0.4},
}

Microsoft Open Source Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.