VTA: Open, Modular, Deep Learning Accelerator Stack

VTA (Versatile Tensor Accelerator) is an open-source deep learning accelerator, complemented by an end-to-end, TVM-based compiler stack.

The key features of VTA include:

  • Generic, modular, open-source hardware
    • Streamlined workflow to deploy to FPGAs.
    • Simulator support to prototype compilation passes on regular workstations.
  • Driver and JIT runtime for both the simulator and FPGA hardware back-ends.
  • End-to-end TVM stack integration
    • Direct optimization and deployment of models from deep learning frameworks via TVM.
    • Customized and extensible TVM compiler back-end.
    • Flexible RPC support to ease deployment and to program FPGAs with the convenience of Python.
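VTA's hardware is parameterized through a JSON configuration (see the config directory), where widths and tile sizes are stored as log2 values. The sketch below is a hypothetical illustration of how such log2-encoded parameters determine the shape of the GEMM tensor intrinsic; the field names mirror VTA's configuration style, but the values and the helper function are illustrative, not the actual VTA API.

```python
# Hypothetical sketch of VTA-style hardware configuration.
# Field names follow VTA's log2-encoded convention (e.g. LOG_BLOCK = 4
# means a channel block of 2^4 = 16); the values here are illustrative.
VTA_CONFIG = {
    "LOG_INP_WIDTH": 3,  # input data width  = 2^3 = 8 bits
    "LOG_WGT_WIDTH": 3,  # weight data width = 2^3 = 8 bits
    "LOG_ACC_WIDTH": 5,  # accumulator width = 2^5 = 32 bits
    "LOG_BATCH": 0,      # batch dimension of the GEMM intrinsic = 2^0 = 1
    "LOG_BLOCK": 4,      # input/output channel block = 2^4 = 16
}

def gemm_intrinsic_shape(cfg):
    """Return (batch, in_block, out_block) of the GEMM tensor intrinsic,
    decoded from the log2-encoded config fields."""
    batch = 1 << cfg["LOG_BATCH"]
    block = 1 << cfg["LOG_BLOCK"]
    return (batch, block, block)

print(gemm_intrinsic_shape(VTA_CONFIG))  # -> (1, 16, 16)
```

A compiler back-end targeting such an accelerator would read this configuration to tile workloads into matching (1, 16, 16) GEMM calls, which is why the same stack can retarget differently sized hardware variants.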

Learn more about VTA in the TVM documentation.