# VTA: Open, Modular, Deep Learning Accelerator Stack

VTA (Versatile Tensor Accelerator) is an open-source deep learning accelerator complemented with an end-to-end, TVM-based compiler stack.

The key features of VTA include:

- Generic, modular, open-source hardware.
  - Streamlined workflow to deploy to FPGAs.
  - Simulator support to prototype compilation passes on regular workstations.
- Driver and JIT runtime for both the simulator and FPGA hardware back-ends.
- End-to-end TVM stack integration.
  - Direct optimization and deployment of models from deep learning frameworks via TVM.
  - Customized and extensible TVM compiler back-end.
  - Flexible RPC support to ease deployment and to program FPGAs with the convenience of Python.

Learn more about VTA here.