Commit graph

3030 Commits

Author  SHA1  Message  Date
microsoft-github-policy-service[bot] 984dc1b111
Auto merge mandatory file PR
This PR is auto-merged as it contains a mandatory file and has been open for more than 10 days.
2022-11-28 19:09:42 +00:00
microsoft-github-policy-service[bot] 18879228cb
Microsoft mandatory file 2022-07-25 20:34:48 +00:00
Jared Roesch 4e2d707f2d [Relay][Module] Refactor the way we interface between different modules of Relay. (#3906)
* Module refactor

* Add load module

* Add support for idempotent import

* Tweak load paths

* Move path around

* Expose C++ import functions in Python

* Fix import

* Add doc string

* Fix

* Fix lint

* Fix lint

* Fix test failure

* Add type solver

* Fix lint
2019-09-11 20:39:56 -07:00
Lianmin Zheng c31e77718a
[Community] Add reviewer Balint Cristian (#3935) 2019-09-11 14:32:15 -07:00
Yizhi Liu eb3a7382d2 [Arm] parallel batch axis (#3931)
* support LLVM trunk

* guard with USE_LLVM in if condition for c++14

* GREATER_EQUAL -> GREATER

* [Arm] parallel batch axis
2019-09-11 11:10:47 -07:00
Zhao Wu 968ffef62b [TFLite] Support depthwise convolution multiplier greater than 1 (#3922) 2019-09-10 21:09:25 -07:00
雾雨魔理沙 54dbcc2872 [Relay] fix exponential blowup in interpreter (#3559) 2019-09-10 23:30:46 -04:00
Neo Chien 5bff6ccede [Relay][Frontend][Keras] Fix ReLU in Keras Converter missed the case (#3917)
* [Relay][Frontend][Keras] Fix ReLU in Keras Converter missed the case

* [Relay][Frontend][Keras] Add test case for ReLU in Keras Converter missed the case

* [Relay][Frontend][Keras] Add test case for ReLU in Keras Converter missed the case
2019-09-10 10:41:16 -07:00
Pratyush Patel 42195a48e0 [CODEGEN] Remove incorrect check for LLVM in C codegen test (#3921) 2019-09-10 08:43:01 +08:00
雾雨魔理沙 0f4c151f2a [Relay][Training] Add gradient for max. (#3915)
* save

* save
2019-09-09 12:48:04 -07:00
Luis Vega 83d2418a58 [VTA][Config] hotfix denano10 (#3918) 2019-09-09 10:31:31 -07:00
Xingjian Shi 63a91ebf45 Numpy compatible dtype inference for `tvm.convert` and `tvm.const` (#3861)
* numpy compatible type inference

* update

* try to fix

* fix

* try to fix

* fix lint

* Update nn.h

* cast to int32

* try to fix

* fix again

* retrigger ci
2019-09-10 01:26:34 +08:00
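
For context, a minimal sketch of what NumPy-compatible dtype inference means for these two helpers; the exact inference rules live in the PR itself, and the values here are only illustrative:

    import tvm

    # dtype argument omitted: inferred from the Python value (NumPy-style rules assumed)
    c_int = tvm.const(1)            # expected to infer an integer dtype such as "int32"
    c_float = tvm.const(1.0)        # expected to infer "float32"
    arr = tvm.convert([1, 2, 3])    # list elements converted with their dtype inferred the same way
    print(c_int.dtype, c_float.dtype)
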
Haichen Shen 2f5b155ab5 [Relay/TOPI][Op] Add erf intrinsic and op (#3702)
* add more ops

* stop vectorization for erf

* x

* cleanup

* fix

* add whitelist for vectorizable intrin

* add tf converter

* fix dense

* fix

* add missing intrin

* fix mxnet frontend

* fix nvptx
2019-09-09 22:54:15 +08:00
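
A hedged usage sketch of the new op from the Python side, assuming the Relay-level name is relay.erf as the title suggests:

    from tvm import relay

    x = relay.var("x", shape=(4,), dtype="float32")
    y = relay.erf(x)                # erf op added by this PR (name assumed from the title)
    func = relay.Function([x], y)
    print(func)
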
雾雨魔理沙 6a377f77e8 [Relay][Training] Add gradient for cast (#3894)
save

fix

fix grad
2019-09-07 23:11:47 -04:00
雾雨魔理沙 184fa484ca change docker install script (#3524) 2019-09-08 08:10:11 +08:00
Haichen Shen 7a15aedf1e [Fix] Fix blas cmake for mac os (#3898)
* fix cmake for mac os

* rename
2019-09-08 05:34:32 +08:00
Yizhi Liu 8c50469c6d
Support LLVM trunk (#3907)
* support LLVM trunk

* guard with USE_LLVM in if condition for c++14

* GREATER_EQUAL -> GREATER
2019-09-08 02:43:29 +08:00
noituIover 6604593bef Fix a typo (#3913) 2019-09-07 09:44:39 -07:00
Peter Yeh e8c6adc6fb Add .hsaco save/load for ROCm target (#3852)
fix lld
2019-09-07 12:41:35 +09:00
Haichen Shen 54150cd581 add luis as reviewer (#3909) 2019-09-07 08:12:56 +08:00
Hua Jiang 50c4546f59 [VTA] Support TLPP in function simulator. (#3555)
* [VTA] Support TLPP in function simulator.
Issue:
Currently the VTA function simulator only performs serialized instruction
execution, so the runtime ISA dependency logic used for task-level
pipeline parallelism (TLPP) cannot be verified by the function simulator.

Solution:
Make the simulator driver multi-threaded and support TLPP.

Benefit:
TLPP support in the VTA function simulator makes VTA logic testing,
debugging, and changes easier.

replace boost lockfree queue

add configuration control to enable or disable simulator TLPP.

change code style to Google style.

Wrap queue read/write and sync logic to make the function calls simpler.

Add some comments.

Remove MT logic; change to single-thread mode.

address review comments.

change code style to match Google code style and add comments.

add cmake macro to enable/disable simulator TLPP logic.

submodule update.

correct file name mentioned in comments.

* remove USE_VTA_FSIM_TLPP.
2019-09-06 17:03:51 -07:00
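
The message above describes task-level pipeline parallelism (TLPP): instead of executing instructions strictly in order, the simulator has to model per-stage instruction streams that hand dependency tokens to each other and only advance once the required token has arrived. A purely illustrative Python sketch of that handshaking, with hypothetical names that are not the VTA driver API:

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Insn:
        pop_dep: bool     # wait for a token from the other stage before executing
        push_dep: bool    # send a token to the other stage after executing

    load_to_compute = Queue()   # LOAD tells COMPUTE that data is ready
    compute_to_load = Queue()   # COMPUTE tells LOAD that a buffer is free

    def run_load(insn: Insn):
        if insn.pop_dep:
            compute_to_load.get()      # block until COMPUTE has freed the buffer
        # ... perform the load itself ...
        if insn.push_dep:
            load_to_compute.put(1)     # signal COMPUTE that its input is ready
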
Leyuan Wang 70042b78ee [TOPI] Intel graphics conv2d autotvm template added (#3839)
* update lint

* lint fixed

* lint updated

* lint fixed

* lint fixed

* lint fixed

* updates

* add intel graphics as a package

* remove print info

* depthwise conv2d schedule added for intel graphics

* asdf

* fix lint

* fix lint

* fix ci

* add channels
2019-09-06 17:01:29 -07:00
雾雨魔理沙 02ddb5a9c3 save (#3901) 2019-09-06 15:17:37 -07:00
雾雨魔理沙 19f8c123af [Relay][Op] Make Type Relation catch more errors (#3899)
* save

* init

* move type_relations
2019-09-06 11:51:27 -07:00
Logan Weber ca0292d8c5 [Relay] Add ADTs to text format (#3863)
* Getting closer to having ADT defs

* ADT defs working probly

* Match parsing basically done

* came to earth in a silver chrome UFO

* match finished?

* All tests but newest are passing

* ADT constructors work

now cleanup?

* Cleanup round 1

* Cleanup round 2

* Cleanup round 3

* Cleanup round 4

* Cleanup round 6

* Cleanup round 7

* Lil grammar fix

* Remove ANTLR Java files

* Lint roller

* Lint roller

* Address feedback

* Test completeness in match test

* Remove unused imports

* Lint roller

* Switch to Rust-style ADT syntax

* Lil fix

* Add dummy `extern type` handler

* Add type arg to test

* Update prelude semantic version

* Repair test

* Fix graph var handling in match

* Revert 's/graph_equal/is_unifiable' change
2019-09-06 11:04:34 -07:00
Yong Wu a103c4ee18 [bugfix] remove duplicate resize (#3902) 2019-09-06 11:30:04 -04:00
Jason Knight d464d2baa7 Add another MKL name alias for MKL (#3853)
Installed through pypi
2019-09-06 21:30:13 +08:00
Yizhi Liu 9b148f14a3 [schedule] Improve ceil_divide in tile/split (#3842) 2019-09-06 21:29:31 +08:00
Jon Soifer d9bbdbc8e9 [PYTHON/FFI] Search PATH for DLLs (#3888)
* Search PATH for DLLs

* Fix lint issue
2019-09-05 16:42:29 -07:00
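
A minimal sketch of the idea, not the actual FFI loader code: walk each directory on PATH and return the first matching DLL.

    import os

    def find_dll(name):
        """Search every directory on PATH for the given DLL name (illustrative)."""
        for directory in os.environ.get("PATH", "").split(os.pathsep):
            candidate = os.path.join(directory, name)
            if os.path.isfile(candidate):
                return candidate
        return None
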
雾雨魔理沙 08d92203f7 [Relay] add Tuple pattern (#3596)
* implement tuple pattern

* add tuple pattern

* lint;

* lint

* lint

* fix error

* fix

* add test
2019-09-05 16:41:44 -07:00
kice 98c9980500 Fix int32 range overflow by using int64 (#3870) 2019-09-06 07:21:54 +08:00
雾雨魔理沙 ca35277071 [Relay] Fix operator fusion for multiple output (#3871)
* save

* add test

* refactor

* fix indent

* save

* refactor
2019-09-06 06:39:13 +09:00
Haibin Lin 57cd27f163 [DOC] Fix doc rendering (#3897)
* Update from_source.rst

* Update deploy_ssd_gluoncv.py
2019-09-05 11:48:57 -07:00
黎明灰烬 e873a73abd [Test] enable NHWC of `relay.testing.mobilenet` (#3886)
* [Relay] enable NHWC of `relay.testing.mobilenet`

In this way, we can play around with NHWC inside TVM regardless of
the frontend.

* [Test] test for NHWC of relay.testing.mobilenet
2019-09-05 11:32:21 -07:00
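
A hedged sketch of what the test enables, assuming relay.testing.mobilenet.get_workload takes a layout argument as the titles suggest:

    from tvm.relay import testing

    # Build the mobilenet workload directly in NHWC, independent of any frontend.
    # The layout parameter is assumed from the commit; check the test for the exact signature.
    mod, params = testing.mobilenet.get_workload(batch_size=1, layout="NHWC")
    print(mod)
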
Thierry Moreau 23c22812b8 [VTA][TOPI] Conv2d transpose (deconvolution) operator support (#3777)
* initial conv2d_transpose

* correct select operator

* cleanup

* fix

* fix correctness check

* conv2d transpose declaration fix

* autotvm conv2d_transpose tuning script

* ir pass fix

* fix tuning script

* deriving params from env, adding bias

* removing bias comp from deconvolution

* lint

* fix

* lint

* lint

* turning off cpu

* lint, ops

* lint

* import fix

* removing hard coded values

* lint
2019-09-05 11:29:42 -07:00
Thierry Moreau 028f47ce65 [VTA][Relay] Extending Vision model coverage compilation for VTA (#3740)
* adding support for graphpack over multiply op

* increasing resnet model coverage

* fix indentation

* lint

* moving recursion limit fix into graphpack pass

* moving recursionlimit to relay init

* pooling on NCHWnc format

* adding more models

* deploy_resnet_on_vta.py

* trailing line

* generalizing to vision models

* merge conflicts

* fix, apply quantization to VTA only

* improving comments

* trimming models that have runtime issues for the moment

* lint

* lint

* lint
2019-09-05 11:17:09 -07:00
雾雨魔理沙 dee11b4198 [Relay][Training] Small refactoring (#3893)
* init

* fix
2019-09-05 11:13:07 -07:00
Animesh Jain a6bb84a834 [QNN] Add - Refactoring to C++ (#3736) 2019-09-05 10:22:45 -07:00
Liangfu Chen 734df8d59b [VTA] de10-nano driver (#3394)
* rework;

* `de10-nano` -> `de10nano`;

* fix compilation error;

* bug fix;

* Update install.md

* Update install.md

* Update install.md

* update with current runtime;

* add debug messages;

* bug fix in cma kernel module;
2019-09-05 09:52:10 -07:00
miheer vaidya 66235d1c37 Reveal hidden code snippets by inserting newline (#3892) 2019-09-04 21:24:00 -07:00
Luis Vega f07fe80aaf [VTA][Chisel] add ISA BitPat generation (#3891) 2019-09-04 10:36:21 -07:00
Animesh Jain 0d4870cc70 [QNN] Convolution 2D Implementation. (#3580)
Rebasing. Empty commit.

Clang-format styling.
2019-09-04 10:05:22 -07:00
lixiaoquan df7cc5db8b [TENSORFLOW] Convert scalar Const into tvm.relay.const (#3885)
* [TENSORFLOW] Convert scalar Const into tvm.relay.const

* use _get_num_param() and _get_list_param()
2019-09-04 09:57:20 -07:00
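
For illustration, the kind of Relay expression a scalar TensorFlow Const maps to after this change; a minimal sketch, not the converter code itself:

    from tvm import relay

    # A scalar Const such as 3.0 becomes an inline Relay constant
    # rather than a separate graph input.
    scalar = relay.const(3.0, dtype="float32")
    print(scalar)
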
SWu 5ed251a68c [Relay] Add grads (#3857)
* Add gradient implementations

* Add docstrings to fix lint errors
2019-09-04 00:07:39 -07:00
youluexx 360d26ddc1 [Relay][Frontend][darknet] Solve tvm parsing darknet resnext failure bug (#3778)
* test_darkent_bug

* test_darkent

* add resnext tests
2019-09-04 13:46:29 +08:00
Luis Vega 5fe61fd1b4 [VTA][Chisel] add scalafmt and format existing scala codebase (#3880)
* [VTA][Chisel] add scalafmt and format existing scala codebase

* change column width to 100

* add scalafmt conf file as a valid file type

* add asf header to scalafmt conf file and rerun formatter
2019-09-03 22:19:01 -07:00
Liangfu Chen f4a28c4bc5 [VTA] Fix TSIM compile error in Linux (add missing -fPIC flag) (#3876)
* [VTA] Fix TSIM compile error in Linux (add missing -fPIC flag);

* [VTA] Fix TSIM compile error in Linux (add missing -fPIC flag);

* fix indentation problem;
2019-09-03 09:31:31 -07:00
Tianqi Chen 6b0359b440
Revert "[Runtime] Allow parameter sharing between modules (#3489)" (#3884)
This reverts commit 224cc243b4.
2019-09-03 15:31:04 +08:00
Neo Chien 9e595b422f ONNX frontend operator support: And (#3878) 2019-09-02 21:02:52 -07:00
Yong Sun 224cc243b4 [Runtime] Allow parameter sharing between modules (#3489)
As GraphRuntime does not provide control-flow logic, we have to split
our model into two parts, while sharing parameters between them to save
memory usage.

Solution:
1) add "lazy_init_input" in graph's attributes
   "attrs": {
     ... ...
     "lazy_init_input": [
       "list_str",
       [
         "p0"
       ]
     ]
    }
2) allow un-allocated NDArray entry in SetupStorage
3) utilize "set_input_zero_copy" function to set parameters
2019-09-02 20:53:42 -07:00
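
A hedged sketch of the intended usage: allocate the shared parameter once and hand the same NDArray to both runtime modules built from the split model. The Python calls below are only illustrative; per the commit, the runtime-side mechanism is "lazy_init_input" plus set_input_zero_copy.

    import numpy as np
    import tvm

    # One parameter buffer shared by the two halves of the split model.
    shared_p0 = tvm.nd.array(np.zeros((1, 16), dtype="float32"))

    # mod_a and mod_b would be graph runtime modules built from the two sub-graphs;
    # "p0" is listed under "lazy_init_input", so its storage is not allocated
    # until it is provided here (hypothetical usage):
    # mod_a.set_input("p0", shared_p0)
    # mod_b.set_input("p0", shared_p0)
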