Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Latest commit: 36cf8698ee "Update patch" by KeDengMS, 2021-08-13 01:49:55 +00:00

| Path | Last commit | Date |
| --- | --- | --- |
| .github | Add link to the reviewers | 2018-10-22 23:08:32 -07:00 |
| 3rdparty | [IR] Update HalideIR (#2582) | 2019-02-10 15:21:50 -08:00 |
| apps | Bundled interpreter demo (#2297) | 2018-12-18 18:00:42 -08:00 |
| cmake | Update patch | 2021-08-13 01:49:55 +00:00 |
| conda | Conda packages with cuda support (#2577) | 2019-02-09 09:55:56 -08:00 |
| docker | [Relay][Frontend] Caffe2 Support (#2507) | 2019-02-01 22:31:20 -08:00 |
| docs | [Hybrid script] Backend support (#2477) | 2019-02-13 15:46:34 -08:00 |
| golang | [Golang] bugfix #2517 (#2558) | 2019-02-06 06:01:33 -06:00 |
| include/tvm | Clamp int64_t/uint64_t input to tvm::Integer to [INT32_MIN, INT32_MAX] | 2019-09-12 16:02:39 -07:00 |
| jvm | Vulkan TVM Android Support (#1571) | 2018-08-09 18:41:49 -07:00 |
| nnvm | fix get layout in to_relay (#2610) | 2019-02-18 14:42:45 -08:00 |
| python | Fix issue mutating if expressions (#2601) | 2019-02-18 14:42:45 -08:00 |
| rust | [RUST][FRONTEND] Add rust frontend v0.1 (#2292) | 2019-02-02 19:56:11 -08:00 |
| src | Fix build break in c++17 | 2021-08-12 01:33:58 +00:00 |
| tests | [TVM][Bugfix] fix storage_rewrite bug when input is big (#2580) | 2019-02-14 08:55:29 -08:00 |
| topi | [TOPI][CUDA] Add faster-rcnn proposal op (#2420) | 2019-02-14 19:50:59 +09:00 |
| tutorials | [DOCS] update titles to reflect tutorial content (nnvm vs. relay) (#2597) | 2019-02-14 11:31:59 -08:00 |
| verilog | Remove leading "./" from include paths (#1640) | 2018-08-22 22:11:12 -07:00 |
| vta | [TUTORIAL] Fix downloaded file path (#2590) | 2019-02-12 07:36:39 -08:00 |
| web | Fix Web Build after CMake transition. (#2407) | 2019-01-09 12:19:53 -08:00 |
| .clang-format | add .clang-format (#2395) | 2019-01-08 13:08:13 -08:00 |
| .gitignore | [Relay][RFC] Relay IR Text Format (#1781) | 2018-12-02 10:35:01 -08:00 |
| .gitmodules | [Relay] Add generic & informative Relay error reporting (#2408) | 2019-01-25 10:17:31 -08:00 |
| .travis.yml | Remove linux from travis (#156) | 2017-05-22 19:47:12 -07:00 |
| CMakeLists.txt | [Runtime] Enable option to use OpenMP thread pool (#4089) | 2019-11-26 12:57:50 -08:00 |
| CONTRIBUTORS.md | [Team] @merrymercy -> PMC (#2578) | 2019-02-09 09:52:39 -08:00 |
| Jenkinsfile | [TEST] Remove script that references previously removed content. (#2481) | 2019-01-24 14:20:08 -05:00 |
| LICENSE | [DOC/LICENSE] Make doc and license consistent, opensource repo when we get approval (#134) | 2017-05-09 20:36:23 -07:00 |
| Makefile | Fix Web Build after CMake transition. (#2407) | 2019-01-09 12:19:53 -08:00 |
| NEWS.md | Version 0.5 (#2604) | 2019-02-18 14:42:45 -08:00 |
| NOTICE | NOTICE (#2203) | 2018-11-29 23:43:41 -08:00 |
| README.md | Update README.md typo (#2132) | 2018-11-19 09:00:55 -08:00 |
| version.py | Version 0.5 (#2604) | 2019-02-18 14:42:45 -08:00 |

README.md

Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes

TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends. Check out the TVM stack homepage for more information.
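As a quick illustration of that flow, below is a minimal sketch (not part of the original README) of declaring, compiling, and running a simple element-wise operator with the Python API of the TVM version in this repo (around v0.5), assuming the `tvm` package is built with the LLVM backend enabled; the function name `fadd` and the array sizes are just illustrative.

```python
import numpy as np
import tvm

# Declare the computation B[i] = A[i] + 1.0 over a symbolic length n.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A", dtype="float32")
B = tvm.compute(A.shape, lambda i: A[i] + 1.0, name="B")

# Create a default schedule and compile it for the CPU via the LLVM backend.
s = tvm.create_schedule(B.op)
fadd = tvm.build(s, [A, B], target="llvm", name="fadd")

# Run the compiled function on concrete data.
ctx = tvm.cpu(0)
a = tvm.nd.array(np.arange(8, dtype="float32"), ctx)
b = tvm.nd.array(np.zeros(8, dtype="float32"), ctx)
fadd(a, b)
print(b.asnumpy())  # [1. 2. 3. ... 8.]
```

The same compute declaration and schedule can be compiled for a different backend by changing the target string (for example "cuda" or "opencl", when the corresponding runtime is enabled in the CMake build), which is the kind of retargetable, end-to-end compilation described above.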

License

© Contributors. Licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learnt a lot from the following projects when building TVM.

  • Halide: TVM uses HalideIR as the data structure for arithmetic simplification and low-level lowering. We also learnt from and adapted parts of Halide's lowering pipeline.
  • Loopy: use of integer set analysis and its loop transformation primitives.
  • Theano: the design inspiration for the symbolic scan operator for recurrence.