b10dda6945
This appears when linting with the docker scripts. I'm not sure why it isn't failing in the standard CI for TVM; it may be that the docker images in the CI system haven't been updated.

python3 -m pylint vta/python/vta --rcfile=/workspace/tests/lint/pylintrc
Using config file /workspace/tests/lint/pylintrc
************* Module vta.top.graphpack
C:131, 4: Missing method docstring (missing-docstring)
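The usual fix for a missing-docstring warning is simply to add a docstring to the flagged method. A minimal sketch of the kind of change involved (the class and method names below are illustrative only, not the actual code at line 131 of graphpack.py):

```python
# Hypothetical illustration of silencing pylint's missing-docstring check.
# The names here are made up; they do not reflect vta/python/vta/top/graphpack.py.
class GraphPacker:
    """Pack tensors in a graph into the layout expected by the VTA core."""

    def visit(self, node):
        """Return the packed equivalent of ``node``."""
        return node
```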
README.md
Open Deep Learning Compiler Stack
Documentation | Contributors | Community | Release Notes
TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends. Check out the TVM stack homepage for more information.
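As a rough illustration of the compilation flow described above, here is a minimal sketch using TVM's tensor expression API. The module paths shown (tvm.te, tvm.build) come from later TVM releases and may differ from the API available at this commit; treat it as an assumption-laden example, not the canonical usage.

```python
import numpy as np
import tvm
from tvm import te

# Declare a simple elementwise computation: B[i] = A[i] + 1
n = 1024
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

# Lower and compile for the local CPU via the LLVM backend
s = te.create_schedule(B.op)
func = tvm.build(s, [A, B], target="llvm")

# Run the compiled function and check the result
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
func(a, b)
np.testing.assert_allclose(b.numpy(), a.numpy() + 1.0)
```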
License
© Contributors. Licensed under an Apache-2.0 license.
Contribute to TVM
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.
Acknowledgement
We learnt a lot from the following projects when building TVM.
- Halide: TVM uses HalideIR as the data structure for arithmetic simplification and low-level lowering. We also learnt from and adapted parts of the lowering pipeline from Halide.
- Loopy: use of integer set analysis and its loop transformation primitives.
- Theano: the design inspiration for the symbolic scan operator for recurrence.