TVM: Tensor IR Stack for Deep Learning Systems
==============================================
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)
[![Build Status](http://mode-gpu.cs.washington.edu:8080/buildStatus/icon?job=dmlc/tvm/master)](http://mode-gpu.cs.washington.edu:8080/job/dmlc/job/tvm/job/master/)

[Installation](docs/how_to/install.md) |
[Documentation](http://docs.tvmlang.org) |
[Tutorials](http://tutorials.tvmlang.org) |
[Operator Inventory](topi) |
[FAQ](docs/faq.md) |
[Contributors](CONTRIBUTORS.md) |
[Release Notes](NEWS.md)

TVM is a tensor intermediate representation (IR) stack for deep learning systems. It is designed to close the gap between
productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends.
TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
Check out our [announcement](http://tvmlang.org/2017/08/17/tvm-release-announcement.html) for more details.
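To give a flavor of the stack, the sketch below declares a small tensor computation with the Python API, applies a default schedule, and compiles it for a CPU backend. It is a minimal sketch in the spirit of the getting-started tutorial, not a full reference; the function name `myadd` and the vector size are arbitrary, and the [Documentation](http://docs.tvmlang.org) and [Tutorials](http://tutorials.tvmlang.org) are the authoritative sources for the API.

```python
# Minimal sketch: declare a computation, schedule it, and compile it to LLVM.
# Follows the getting-started tutorial; see the documentation for the full API.
import numpy as np
import tvm

n = tvm.var("n")                                          # symbolic length
A = tvm.placeholder((n,), name="A")                       # input tensor
B = tvm.compute(A.shape, lambda i: A[i] + 1.0, name="B")  # element-wise add

s = tvm.create_schedule(B.op)                             # default schedule
fadd = tvm.build(s, [A, B], target="llvm", name="myadd")  # compile for CPU

# Run the compiled function and check the result.
ctx = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.zeros(1024, dtype=B.dtype), ctx)
fadd(a, b)
np.testing.assert_allclose(b.asnumpy(), a.asnumpy() + 1.0)
```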
License
-------
© Contributors, 2017. Licensed under an [Apache-2.0](https://github.com/dmlc/tvm/blob/master/LICENSE) license.

Contribute to TVM
-----------------
TVM adopts the Apache committer model. We aim to create an open-source project that is maintained and owned by the community.

- [Contributor Guide](docs/how_to/contribute.md)
- Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md)
- Please also update [NEWS.md](NEWS.md) with changes and improvements to the API and code.

Acknowledgement
---------------
We learnt a lot from the following projects when building TVM.

- [Halide](https://github.com/halide/Halide): TVM uses [HalideIR](https://github.com/dmlc/HalideIR) as the data structure for
  arithmetic simplification and low-level lowering. HalideIR is derived from Halide.
  We also learnt from Halide when implementing the lowering pipeline in TVM.
- [Loopy](https://github.com/inducer/loopy): use of integer set analysis and its loop transformation primitives.
- [Theano](https://github.com/Theano/Theano): the design inspiration for the symbolic scan operator for recurrence.