# TVM: Tensor IR Stack for Deep Learning Systems

Installation | Documentation | Tutorials | Operator Inventory | FAQ | Contributors | Release Notes

TVM is a tensor intermediate representation (IR) stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends. Check out our announcement for more details.
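
As a brief illustration, here is a minimal sketch of the Python API: declare a computation, create a default schedule, and compile it for a target. It assumes a TVM build with the LLVM backend enabled; the function and variable names here are illustrative, not part of this README.

```python
# Minimal sketch: declare, schedule, and compile a vector addition with TVM.
# Assumes TVM was built with LLVM support ("llvm" target).
import tvm
import numpy as np

n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
C = tvm.compute((n,), lambda i: A[i] + B[i], name="C")

s = tvm.create_schedule(C.op)                     # default schedule
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the compiled function on the CPU and verify the result.
ctx = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.random.rand(1024).astype(B.dtype), ctx)
c = tvm.nd.array(np.zeros(1024, dtype=C.dtype), ctx)
fadd(a, b, c)
np.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
```

The same declared computation can be rescheduled and rebuilt for other backends (for example CUDA, OpenCL, or Metal) without changing the algorithm description.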

## License

© Contributors, 2017. Licensed under the Apache-2.0 license.

## Contribute to TVM

TVM adopts the Apache committer model. We aim to create an open-source project that is maintained and owned by the community.