Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
Tianqi Chen f7d9d7e87e Release 0.3 (#1171) 2018-05-20 21:18:26 -07:00
Open Deep Learning Compiler Stack


Installation | Documentation | Tutorials | Operator Inventory | FAQ | Contributors | Community | Release Notes

TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends. Check out the TVM stack homepage for more information.

License

© Contributors. Licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community.

Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: TVM uses HalideIR as the data structure for arithmetic simplification and low-level lowering. We also learned from and adapted parts of the lowering pipeline from Halide.
  • Loopy: the use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration for the symbolic scan operator for recurrence.