Open Deep Learning Compiler Stack


Documentation | Contributors | Community | Release Notes

TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends. Check out the TVM stack homepage for more information.

License

© Contributors. Licensed under the Apache-2.0 license.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: TVM uses HalideIR as the data structure for arithmetic simplification and low-level lowering. We also learned from and adapted parts of the lowering pipeline from Halide.
  • Loopy: use of integer set analysis and of its loop transformation primitives.
  • Theano: the design inspiration of symbolic scan operator for recurrence.