Commit Graph

1031 Commits

Author SHA1 Message Date
雾雨魔理沙 6577774de9 fix deprecation warning (#3446) 2019-06-28 11:17:47 +08:00
Altan Haan 1e9d014b3f [Relay] Fix reduce axis bug (#3422)
* fix relay reduce axis bug

* add tests for reduce bug
2019-06-27 10:03:29 -07:00
ttyang1018 7db5779fd7 [Relay][Frontend] Fix tensorflow frontend lstm forget bias adding order (#3410) 2019-06-27 10:01:17 -07:00
Yao Wang 6c43019b4c GraphTuner supports relay.module as input (#3434) 2019-06-27 10:00:08 -07:00
Alexander Pivovarov e1827173f3 Add mod support in relay.build (#3424) 2019-06-27 09:37:51 -07:00
Yong Wu cbec5b94b8 [Relay] Add ResizeNearestNeighbor and CropAndResize in tf converter (#3393) 2019-06-25 14:08:55 +05:30
Andrew Tulloch 32be34a07f [Runtime] Allow for parameter sharing in GraphRuntime (#3384)
Summary:

In multi-threaded applications where we have multiple inferences on the
same model in parallel (consider e.g. a TTS system handling multiple
requests), it can be useful to share the parameters of a model amongst
these multiple instances. This improves the cache utilization behaviour
of the system, as multiple cores can use the same set of weights instead
of evicting the identical copies of weights in a shared cache.

As the underlying `NDArray` instances in `data_entry_` implement a
ref-counted based sharing system, this is a simple modification of the
`GraphRuntime::LoadParams` logic to instead copy parameters from an
existing GraphRuntime instance. This is a little ugly in that we need
both the pre-existing GraphRuntime instance, as well as the 'serialized'
params (since we need to know the set of names we should copy), but
without imposing additional assumptions (i.e. storing the set of param
names in GraphRuntime, and enforcing that shared param names are
identical to the parameters set in the preceding `LoadParams` call),
this seems unavoidable.

Test Plan:

Unit test added.
2019-06-24 21:06:20 -07:00
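The sharing flow this commit describes can be sketched from Python roughly as follows. This is a minimal sketch, assuming the commit exposes a `share_params` packed function on the graph runtime module that takes the donor module and the serialized parameter blob; the name and signature are assumptions drawn from the description above, not a verified API.

```python
# Minimal sketch of parameter sharing between two GraphRuntime instances,
# based on the commit description above. The "share_params" packed-function
# name and its (donor module, serialized params) signature are assumptions.
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_runtime

# Build a tiny model so the example is self-contained.
x = relay.var("x", shape=(1, 8))
w = relay.var("w", shape=(8, 8))
func = relay.Function([x, w], relay.nn.dense(x, w))
params = {"w": tvm.nd.array(np.random.rand(8, 8).astype("float32"))}
graph_json, lib, params = relay.build(func, target="llvm", params=params)
param_bytes = relay.save_param_dict(params)

ctx = tvm.cpu(0)

# First instance owns the parameter storage.
mod_a = graph_runtime.create(graph_json, lib, ctx)
mod_a.load_params(param_bytes)

# Second instance reuses mod_a's NDArray storage instead of copying the
# weights, which is what improves shared-cache behaviour across threads.
mod_b = graph_runtime.create(graph_json, lib, ctx)
mod_b["share_params"](mod_a.module, bytearray(param_bytes))
```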
Sammy e97c01012d Fixing package path in tflite test (#3427) 2019-06-24 20:55:55 -07:00
Alexander Pivovarov 311434e881 Add Reduce operators to TFLite (#3421) 2019-06-23 20:39:43 -07:00
Haichen Shen 5629901033 [Frontend][MxNet] Support bidirectional RNN layer (#3397)
* Support bidirectional RNN layer

* tweak

* tweak
2019-06-22 18:36:07 -07:00
ziheng bfb4884e47 [QUANTIZE] Memorizing the quantize node mapping (#3233)
* [QUANTIZE] Support for clip operator

* [QUANTIZE] Memorizing the quantize node mapping.

* [QUANTIZE] Remove use_stop_fusion and skip_k_conv in qconfig

* update

* update

* update

* update
2019-06-22 14:59:41 -07:00
Wei Chen f2406eaeb2 Create closure object for GlobalVar (#3411) 2019-06-22 14:16:58 -07:00
Jessica Davies e9634eaddd Extend TensorComputeOp to allow scalar inputs (#2606). (#3300) 2019-06-21 21:22:54 -07:00
Wei Chen 1598e329b0 Add EtaExpand to transform API (#3406)
* Add EtaExpand to transform API

* Add test case
2019-06-20 13:41:41 -07:00
zhengdi 05c772801b [TEST][TENSORFLOW] clean up code (#3342) 2019-06-19 18:27:38 +05:30
Alexander Pivovarov 40d56b5d55 Add RESIZE operators to relay TFLite frontend (#3370) 2019-06-17 23:03:27 -07:00
Tianqi Chen 8703d9fb26 [ARITH] Bugfix min/max const canonicalize rule (#3386) 2019-06-17 21:51:33 -07:00
Zhi 563978264b hotfix for onnx (#3387) 2019-06-17 21:51:24 -07:00
Tianqi Chen 1119c40b36 Revert "[Relay][Frontend][ONNX] Fix reshape precompute, and type error (#3230)" (#3385)
This reverts commit df6957a5ea.
2019-06-17 16:27:53 -07:00
Alexander Pivovarov 5050ab5e16 TFLite: Add fused_activation_function for ADD, SUB, MUL, DIV (#3372) 2019-06-17 12:36:31 -07:00
Jared Roesch df6957a5ea [Relay][Frontend][ONNX] Fix reshape precompute, and type error (#3230) 2019-06-17 09:58:45 -07:00
Wuwei Lin 04e816241f [Relay][Pass] CanonicalizeCast (#3280) 2019-06-17 09:56:10 -07:00
Zhi fa351045e6 [relay][frontend] Return module from frontend parsers (#3353) 2019-06-17 09:55:08 -07:00
Tianqi Chen 07fbe5c87f [RELAY][PASS] Enable decorating python class as Pass (#3364) 2019-06-17 09:54:48 -07:00
Sheng Zha 133bb25001 add favicon in rtd (#3379) 2019-06-17 09:52:31 -07:00
雾雨魔理沙 df88c411f5 save (#3033)
save

save

save

upstream

lint

remove bad changes

fix build

save

save

please the ci god

Update src/relay/pass/partial_eval.cc

Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>

save

fix test

ci is ANGRY

fix rebase problem

fix rebase

add test

save

save

comment
2019-06-15 15:08:46 -07:00
Alexander Pivovarov 50dd03ca86 Fix typo in word explicitly (#3376) 2019-06-14 21:34:37 -07:00
Alexander Pivovarov 59d8ba8f5c Add test_forward_ssd_mobilenet_v1 to tflite/test_forward (#3350) 2019-06-14 13:34:17 -07:00
Haichen Shen 2b045c560a [TEST][FLAKY] Fix flaky test on topk and quantize pass (#3362)
* fix flaky test

* fix flaky quantize pass
2019-06-13 17:48:17 -07:00
Hua ede964ec10 [Relay] tflite frontend, keep underline with comments in same length. (#3363) 2019-06-13 15:01:42 -07:00
Tianqi Chen 153417a5e0 [ARITH] Revamp IntSet (#3272) 2019-06-13 13:09:58 -07:00
Yong Wu 9bb16872b6 [Relay][Frontend] Add a bunch of ops in tf converter (#3270) 2019-06-13 13:08:48 -07:00
Hua c9e96d9f2b [Relay] Add Elemwise operator Sub, Divide, Power, Max, Min to tflite frontend. (#3357) 2019-06-13 11:10:07 -07:00
Steven S. Lyubomirsky a698ad7f4c [Relay] Check match expressions for completeness (#3203) 2019-06-13 09:02:26 -07:00
Alexander Pivovarov 579e96da44 Update tflite schema version to 1.13 (#3356) 2019-06-13 08:52:25 -07:00
Wei Chen 713fc73bda Support export ADT value in Python (#3299)
* Support export ADT value in Python

* Cache original functions

* Cleanup

* Cleanup
2019-06-12 18:21:19 -07:00
Yong Wu b67afcd6b9 [Relay] add ClipByValue and Neg in tf frontend converter (#3211) 2019-06-12 14:56:18 -07:00
Haichen Shen 29ee8a237b [Relay][Frontend] Fix MxNet RNN without providing state initialization as input (#3326) 2019-06-12 14:23:15 -07:00
Jared Roesch d0c45648b5 [Relay][Backend] Fix interpreter argument conversion for tuples. (#3349)
* Support taking a tuple as an argument

* Add test
2019-06-12 09:58:15 -07:00
hlu1 2c41fd2f03 [Topi] Fast mode in take op (#3325) 2019-06-11 16:32:12 -07:00
Tianqi Chen d4ca627a5a [CI] separate out legacy as a stage (#3337) 2019-06-11 10:55:37 -07:00
Tianqi Chen c9a2f3da5b [RELAY] Pass infra cleanup (#3336) 2019-06-11 10:55:24 -07:00
Marcus Shawcroft d6c4aba837 [CI] Clarify RAT exclude patterns. (#3328) 2019-06-11 10:54:04 -07:00
Alexander Pivovarov 7c1c97d2d8 Add LOGISTIC operator to relay tflite frontend (#3313) 2019-06-10 21:26:14 -07:00
Jared Roesch c4245e3d05 [Relay][Prelude] Use the Relay parser to define the Relay prelude (#3043)
* Add ability to load Prelude from disk

* Port over id

* Define compose

* Linting errors and style changes

* Eliminate unnecessary parens

* Rename identType to typeIdent (makes more sense)

* Another unnecessary paren

* Bump the version number for the text format

* Ensure .rly (Relay text files) are permitted

* Correct release number and simplify grammar rule

* Correct load_prelude docstring

* Corrections to _parser

* Add Apache headers to prelude source file

* Remove test_prelude (redundant)

* Correct misleading error message

* Add check that parser is enabled in Prelude

* Commit pre-generated parser, ensure generated files are treated as binaries, and have parser tests always fire

* Permit parser files and git attributes files

* Exclude gitattributes and parser files from apache check

* Another attempt at appeasing Apache audit checker

* Corrections to rat-excludes

* Apache should be truly appeased now

* Ignore Relay parser files by name

* Mark parser files as generated so they don't show up on Github

* Add parsing helper function for tests

* Mark parser files as not detectable
2019-06-10 18:15:11 -07:00
Alexander Pivovarov 8f219b95bb Add PAD operator to relay tflite frontend (#3310) 2019-06-10 15:29:00 -07:00
Zhi 3294d72b24 [Relay][heterogeneous] Fix tuple annotation (#3311)
* [Relay][heterogeneous] Fix TupleGetItem

* retrigger ci

* retrigger ci
2019-06-10 15:02:41 -07:00
Alexander Pivovarov bfa966a86b Fix Error messages in tflite.py (#3320) 2019-06-10 11:34:52 -07:00
Marcus Shawcroft ce90f0d0ee [CI] Fix shell script exit codes (#3329)
The exit code of a POSIX compliant shell is 0..255.  Attempting to
return -1 will error in some shells and be implicitly cast to 255 in
others.  Fix it by returning a legal return value.
2019-06-10 11:01:58 -07:00
Marcus Shawcroft 474b56834e Drop trailing whitespace (#3331) 2019-06-10 11:01:10 -07:00
Alexander Pivovarov 084e338e12 Add MUL operator to relay tflite frontend (#3304) 2019-06-09 16:24:11 -07:00
Yao Wang 98a91af993 Improve non_max_suppression and get_valid_counts for CPU (#3305)
* Improve non_max_suppression for CPU

* Improve get_valid_counts

* Minor change

* Skip some unnecessary computes
2019-06-09 22:34:56 +02:00
Marcus Shawcroft ca017a38f3 [CI] Ensure rat ignores rust cargo lock files [CI] Ensure rat ignores emacs backup files [CI] Ensure rat ignores .egg-info (#3314) 2019-06-07 12:52:00 -07:00
Marcus Shawcroft a7af3ef441 [LINT] Improve robustness in task_lint.sh logic (#3315)
The existing RAT ASF license auditing logic ignores any failure in the
shell pipeline rather than just the exit code of the final grep.
Adjust the logic so that failures of the various tools in the
pipeline are not elided away.
2019-06-07 09:07:36 -07:00
Yao Wang d7bc4fdd47 Fix x86 depthwise conv2d alter_op_layout (#3264)
* Fix x86 depthwise conv2d alter_op_layout

* Small fix

* Add test case

* Fix test

* Assert kernel layout

* Minor fix

* Add get_shape function

* Minor change
2019-06-06 11:41:50 -07:00
Alexey Romanov 770ac84e74 [Relay][Frontend] Simplify parameter handling in Tensorflow frontend (#2993) 2019-06-06 11:00:19 -07:00
Ramana Radhakrishnan 29b0b4c11d Add support for overloading comparison operations in relay (#2910) (#3168) 2019-06-05 10:19:13 -07:00
Jared Roesch 95ab85d002 [Relay][VM] Fix code generation for packed functions + tuples (#3287) 2019-06-05 09:28:52 -07:00
ziheng befd8c1e48 [LANG] Comparison operators support for Imm expressions (#3283) 2019-06-04 16:56:38 -07:00
Haichen Shen 072f8cc75e [Relay/TOPI][Op] Add TopK operator (#3256)
* init impl for topk

* Fix cpu for topk

* init cuda impl for topk

* Add cuda for topk

* fix

* Add doc

* update doc

* lint

* lint

* lint

* x

* fix warning

* [Relay] Add TopK in tf converter

* Add frontend converter

* fix
2019-06-04 16:29:56 -07:00
Sergei Grechanik 4a81086684 [ARITH] Bugfix: int bound analysis for mod (#3288) 2019-06-04 08:42:27 -07:00
Zhi bb48a45bcf [RELAY][TRANSFORM] Migrate buildmodule to transform (#3251) 2019-06-03 10:40:38 -07:00
Sergei Grechanik 0faf7310d9 [ARITH] Bugfix: check arg positiveness for mod rules (#3279) 2019-06-03 08:52:31 -07:00
Zhi 887255a8c2 [relay][heterogeneous] annotate using visitor (#3261)
* annotate using visitor

* retrigger CI
2019-06-01 00:53:18 -07:00
Animesh Jain 1f4ec9e221 [Relay][Hashing] Structural hash - incorporate the var type into its hash (#3267)
Currently, the BindVar function does not take Var type into account. This causes
two identical graph structures with different var shapes to have the same hash.
Structural hash is used for keeping track of which operators we have
already compiled. Because of this, two operators with different shapes end up
pointing to the same compiled code. The failure is encountered at runtime, where the
expected input shape asserts are not met.
2019-05-31 01:29:54 -07:00
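A small sketch of the collision this commit fixes: two functions that are identical except for variable shape must not hash to the same value. The `structural_hash` import path below is an assumption (it has lived under `relay.ir_pass` and later `relay.analysis`, depending on the TVM revision).

```python
# Sketch of the hashing problem described above: structurally identical
# functions that differ only in var shape should produce different hashes
# once the var type is folded in. The structural_hash import path is an
# assumption and may differ for this revision.
from tvm import relay
from tvm.relay import ir_pass

def make_add(shape):
    x = relay.var("x", shape=shape, dtype="float32")
    y = relay.var("y", shape=shape, dtype="float32")
    return relay.Function([x, y], x + y)

f_small = make_add((1, 4))
f_large = make_add((1, 1024))

# Before the fix these hashes could collide, so the compile-engine cache
# reused code built for the wrong shape; after it they differ.
print(ir_pass.structural_hash(f_small))
print(ir_pass.structural_hash(f_large))
```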
Balint Cristian 584a32aebd [Relay] Handle float16 constants & fix BatchNorm (#3260) 2019-05-31 10:12:56 +08:00
Yao Wang c8a0f524d9 [AutoTVM]Core functionality for Graph tuner (#2184)
* Add graph tuning

* Add tests

* Fix tests

* Fix pylint

* Small fix for docstring

* Minor fix

* Support fetching workload from relay expr

* Simplify benchmark layout transformation

* Add relay support

* Fix infer layout func name

* Refactor internal data representation

* Fix issues

* Add PBQP solver

* Fix layout transform check

* Add PBQPTuner test

* Fix lint

* Update tutorial

* Fix tutorial

* Fix lint

* Add relay test

* Remove nnvm since nnvm graph can be converted to relay function

* Modify benchmark layout wrt new layout_transform api

* Fix lint

* Update docstring for DP tuner

* Refactor traverse graph

* Support graph tuning for multiple target operators

* Fix fetching workloads

* Add x86 depthwise_conv2d infer_layout

* Fix x86 depthwise_conv2d autotvm

* Fix PBQP tuner

* Fix DP tuner

* Generate dummy layout transform record

* Update tutorial

* Modify layout records name

* Add ASF header

* Add ASF header for testing files

* Fix test

* Fix topi fetching

* Some refactors

* Fix lint

* Fix tutorial

* Rename test files

* Fix doc typo

* Add test case note link
2019-05-29 16:36:05 -07:00
masahi a8275bdbfb [TOPI] Fix resize nearest with fractional scaling (#3244) 2019-05-28 15:20:58 -07:00
Nick Hynes a479432d90 [RUST] Rust DSO module (#2976) 2019-05-28 15:20:18 -07:00
Sergei Grechanik 8814adab8a [ARITH] Improve div/mod in rewrite simplifier (#3149)
* [ARITH] Improve div/mod in rewrite simplifier

* Fix lint error

* Fuller file name in src/arithmetic/modular_set.h

Co-Authored-By: Wei Chen <ipondering.weic@gmail.com>

* Generalize some rules

* Replace gcd factoring with specialized rules

* Mark rules that don't work for non-truncated division

* More tests
2019-05-27 09:33:13 -07:00
Haichen Shen c5fdb0003b [Relay][Frontend] Add Crop op converter (#3241)
* Add Crop op converter

* lint

* x
2019-05-25 17:40:02 -07:00
雾雨魔理沙 89a88c5747 [Relay] Start porting pass to the pass manager (#3191) 2019-05-24 16:43:03 -07:00
Siju d3958e114e [RELAY]Frontend darknet (#2773)
* [RELAY]Frontend darknet

* CI test file updated & CI error fixed

* avg_pool pad fix

* Changed repo_url and doc formatting
2019-05-25 06:38:08 +09:00
Zhi 138ec7be78 [Relay][Transform] merge PassContext and BuildConfig (#3234) 2019-05-24 12:05:00 -07:00
Tianqi Chen 415a270def [C++][API] Consistent RAII scoping API. (#3231) 2019-05-24 09:29:14 -07:00
eqy b2f8b96a02 [LINT] handle more file types in ASF header (#3235)
* Update add_asf_header.py

* Update add_asf_header.py
2019-05-23 17:52:44 -07:00
hlu1 e1e91f1f67 [GraphRuntime] Debug graph runtime (#3232) 2019-05-23 10:13:11 -07:00
Steven S. Lyubomirsky 95bfd4a242 [Relay][Prelude] Remove Peano nats from the prelude (#3045) 2019-05-22 13:57:53 -07:00
Zhi c93235d77f [relay][pass manager] Open transform namespace (#3226) 2019-05-22 13:52:52 -07:00
Zhao Wu b63267b92d [TFLite] Convert TFLite NCHW to NHWC (#3141)
* Convert TFLite NCHW to NHWC

* Minor comment fix
2019-05-22 08:14:47 +05:30
hlu1 3d1d17e390 [Contrib] cblas batch_matmul (#3210) 2019-05-21 16:05:28 -07:00
Zhi 21935dcbf5 [Relay][heterogeneous pass] remove on_device op after annotation (#3204)
* remove on_device op after annotation

* Update src/relay/pass/device_annotation.cc

Co-Authored-By: MORINAGA <34588258+imorinaga@users.noreply.github.com>
2019-05-21 12:53:58 -07:00
Yong Wu 9fd8e3c513 [Relay][TOPI] operator All (#3124)
* [Relay][TOPI] operator All

* Update tests/python/frontend/tensorflow/test_forward.py

Co-Authored-By: yongwww <55wuyong@163.com>

* fix comments

* change to level 4
2019-05-20 11:56:22 -07:00
lixiaoquan 24fe04f8dd [CODEGEN][CUDA][OPENCL] Handle INF and NAN (#3194) 2019-05-17 10:13:17 -07:00
Josh Fromm 246b410929 [Relay] Better shape inference in TensorFlow Frontend. (#3176)
* Some bug fixes in tensorflow graph converter and added DepthToSpace operator.

* Made DepthToSpace better comply with other function syntax.

* Added better shape inference for unusual situations.

* Lint fixes.

* Added depthtospace test.

* Added test cases for value inference and depthtospace.

* Added fill testing.

* Made comment changes and added BroadcastTo op and tests.

* Fixed underlining and unneeded opt_level forcing.

* Added _infer_value assertion that all values to infer are available in passed parameters.
2019-05-17 16:11:50 +05:30
Siva c4439a8046 [TENSORFLOW] PlaceholderWithDefault (limited) implementation. (#3184) 2019-05-15 20:55:38 -07:00
llyfacebook c7794564a5 Add the acc16 intrinsic support (#3081) 2019-05-15 20:21:35 -07:00
Zhi 0f2a3086fc [Relay][Compilation] replace relay.build_module with C++ BuildModule (#3174) 2019-05-15 17:28:18 -07:00
Gus Smith 7d845f0d98 [Datatypes] Custom datatypes (#2900)
* Register and use custom datatypes in TVM

This patch adds the ability to register and use a custom datatype from Python,
using the `register_datatype` call. The datatype can then be passed as the
`dtype` parameter using the syntax `dtype="custom[<type_name>]bitsxlanes"`.

* Removes extra file

* Register custom datatypes with TVM; specify Cast and Add lowering

This commit adds functionality for registering custom datatypes with TVM, and
furthermore adding custom lowering functions to lower those custom datatypes.
This commit only adds lowering for the Cast and Add ops; more ops will be added
soon.

Check out some custom datatype samples in my repository of samples:
https://github.com/gussmith23/tvm-custom-datatype-samples

* Register and lower casts from Python

* Formatting

* Fix include; was including too much

* Add comment

* Add DatatypeRegistered

* Add storage size field to custom datatypes

This field indicates the bitwidth of the opaque block of data into which
instances of the datatype will be stored, when TVM compiles. For example, if I
create a datatype with a storage size of 16, then
- Constants of that datatype will be created as unsigned 16-bit ints
- Calls to external functions taking that datatype will pass the data as
  unsigned 16-bit ints
- External functions returning that datatype will be assumed to return unsigned
  16-bit ints.

* Change how lowering funcs (Cast and other ops) are named in registry

tvm.datatypes.lower.<target>.cast.<dst-type>.<src-type>
becomes
tvm.datatypes.lower.<target>.Cast.<dst-type>.<src-type>

And fixes some sloppy code around how the other ops were being formatted.

* Update Python register_datatype to accept storage size

* Oops, left out one cast->Cast change

* Look up storage size when parsing `custom[typename]`

When we encounter this type string in Python, it will be parsed into a Halide
type object in C++. Some of my original code supported this parsing, but we now
have to attach the storage type to the type (by setting the bits field).

* Change how external calls for casting/other ops are done

Firstly, we now use the storage size of the custom type when determining
input/output types; e.g. a cast to a custom type with storage size 16 is seen as
a call to an external function returning an opaque uint of size 16.

Secondly, write a macro to handle the other ops. Originally I thought I could
handle these at runtime, with a single `_register_op` global. I transitioned
instead to using individual `_register_Add` etc. calls generated with a macro,
but I don't remember why.

* When encountering a custom type immediate, generate UIntImm

* Translate custom types to LLVM type

* Generate correct return type in Casts

Originally I was assuming that the result type from casts was always a custom
datatype, and so I was making the Call return a UInt type.

* Use TVM-idiomatic recursion style in DatatypesLowerer

This was actually a bug, I'm pretty sure; we wouldn't have recursed deep on any
complex programs. As a result of making this change, I also uncovered another
potential bug, where the datatypes lowering pass would attempt to lower a Load
of a custom type. By commenting out the `Mutate_` for Load, I was able to stop
the error from cropping up, but frankly, I'm not satisfied with the solution;
how is it that we are able to run codegen when Loads of custom datatypes are
present in the IR? I have not written any code, to my knowledge, that will
support this. Perhaps Load does not care about the underlying datatype?

* Use CHECK

* Add comment about which Mutate_s are needed

* Add comments

* Add GetCustomDatatypeRegistered as an extern C function

* Formatting, comments, casting

* Change how datatype string is formatted

* Use bits() instead of GetStorageSize

Use bits() instead of GetStorageSize

* Change comment

* Add datatype.py

* Change registered function name (datatypes->datatype)

* Remove GetStorageSize

* Format custom datatypes like any other datatype

Specifically, we now print the bits and lanes after the `custom[...]` string.

* Correctly implement datatype lowering in Python

* Remove unneeded include

* Make function naming consistent

* Use CHECK instead of internal_assert

* Rename macro

* Formatting

* Rename functions

* Implement Cast lowering

`_datatype_register_op` is now able to lower both binary ops and Casts.

* Formatting

* Formatting

* Clang format, google style

* Fix std::string/extern "C" warnings

* Formatting

* Formatting

* Lower Allocates and Loads during datatype lowering

This should ensure that there are no custom datatypes remaining once datatype
lowering is done. This will allow us to remove the code in the LLVM codegen
which deals with custom datatypes.

* Revert additions to codegen_llvm.cc which are now unneeded

* Pass cpplint on lower_datatypes.cc

* Add clarifying comment

* Remove datatype lowering registration funcs from C++

* Add CHECKs

* Remove TODO

* Remove all references to storage size

* Move and rename function

* Rename function

* Remove done TODOs and other handled comments

* Remove irrelevant Load code and comments

* Comment out the IR node types I'm not sure about yet

* Add bfloat16 datatype unittest

* Fix MakeConstScalar

MakeConstScalar for a custom datatype will now call out to a function which can
be registered on a per-datatype basis. The function will take a double and
return the equivalent value in the custom datatype format.

Note that these code paths are not actually used or tested at the moment. I have
not yet written an example which uses const scalars of a custom datatype.

* Formatting

* Change pass name

* Allow users to register whatever lowering function they want

Tianqi pointed out that users should be able to register whatever lowering
function they want, and should not be constrained to registering lowering
functions which just call out to external libraries.

I still provide a function for making lowering functions which call out to
external libraries, for convenience.

* Add clarifying comment

* Remove unneeded comment

* Remove unneeded function

* Rename file

* Undo unnecessary change

* Undo unnecessary change

* Make naming consistent

Rename "datatypes" to "custom datatypes" in most contexts.

* Revert an artifact of old code

* Fix build warnings, add TODO

* Lint

* Remove unnecessary use of extern C by separating decl and impl

* Error checking

* Remove TODO

* Missed a name change

* Lint

* Python lint

* Correctly format datatype

* Move bfloat16 to 3rdparty

* "custom_datatypes" --> "datatype" in most places

I left the pass as "LowerCustomDatatypes" to indicate that we're not lowering
anything other than custom datatypes. Otherwise, everything else has been
changed.

* Upgrade datatype unittest

I used a float calculator to generate some real testcases for the unittest.

* Separate public includes and private implementation

Specifically, create cleaner decoupling between datatypes stuff in packed_func
and the datatype registry implementation.

* Formatting

* Limit custom datatype codes to >128

* Add TODOs

* Fix comment

* Formatting

* Clean up datatype unittest

* Remove un-exported functions in public headers; UIntImm->FloatImm

More places where I accidentally was using implementation-only functions in
public headers.

Additionally, store custom datatype immediates as FloatImms. A later change will
add new lowering logic to lower these FloatImms to UIntImms.

Plus formatting change.

* Lint

* Use FloatImm (not UIntImm) to hold immediates of custom datatypes

This change switches from using UIntImm to FloatImm for storing immediates of
custom datatypes. The value of the number is stored in a double, which should be
enough precision for now, for most custom types we will explore in the immediate
future.

In line with this change, we change the datatype lowering so that FloatImms are
lowered to UInts of the appropriate size. Originally, this was going to be done
by allowing the user to register a double->uint_<storage size>_t conversion
which would be called at compile time to convert the value from the FloatImm to
a UInt and store it in a UIntImm. After discussions with Tianqi, we decided to
take the simpler route, and lower FloatImms just as we lower all other ops: by
replacing them with Call nodes. In this case, presumably the user will Call out
to a conversion function in their datatype library.

The justification for this decision is due to the functionality added in #1486.
This pull request adds the ability to load LLVM bytecode in at compile time.
This applies in our case as follows:
 1. The user writes their custom datatype programs and registers their lowering
    functions in the same way we've been doing it so far. All operations over
    custom datatypes are lowered to Calls to the datatype library.
 2. The user compiles their datatype library to LLVM bytecode.
 3. At TVM compile time, the user loads the LLVM bytecode. Depending on how the
    datatype library is written, Clang should be able to perform constant
    folding over the custom datatype immediates, even if their conversions are
    done with calls to the library.

Additionally adds test to test the FloatImm codepath.

* Re-add a change I removed accidentally during rebase

* Cleanup

* Remove unnecessary TVM_DLLs

* Add custom datatype utilities source file to Go runtime pack

* Revert "Remove unnecessary TVM_DLLs"

This reverts commit 4b742b99557fd3bf0ce6617f033c8b444b74eda4.

* Mark bfloat code as TVM_DLL

* Moves custom datatype runtime utilities to c_runtime_api.cc

* Revert "Add custom datatype utilities source file to Go runtime pack"

This reverts commit aecbcde0b2cc09a2693955b77037fe20f93b5bfd.

* Move datatype parsing to its own function

* Change comments

* Remove unneeded function

* Formatting

* Formatting

* Documentation

* Add kCustomBegin, use it for checking for custom types

* Documentation

* Formatting

* Move static definition to implementation

* Remove comment

* Decide toBeLowered before lowering arguments of Expr

In the past, e.g. when lowering custom datatypes for an Add, we would lower a
and b first, and then decide whether the resulting new Add needed to be lowered
based on the (new) types of a and b. Now, instead, we need to check the types of
a and b first (to see if they're custom types), and then lower them (so they'll
become non-custom types), and then lower the new Add.

* Revert "Move datatype parsing to its own function"

This reverts commit d554a5881afcf69af1c070d882a7651022703a09.

This broke parsing. Will figure this out later. There isn't a really clean way
to separate this out given how the rest of the function is written.

* Replace comment

* Documentation

* Remove comment and TVM_DLL

* Better error messages

* Remove artifact of rebase

* Separate datatypes parsing to its own function

* Add \returns

* Comment changes; add TODO

* Refactor tests
2019-05-15 13:34:30 -07:00
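A compact sketch of the registration flow described at the top of this commit message: register a type name with a code above 128, register per-target lowering functions for individual ops, and then refer to the type through the `custom[...]` dtype string. The `tvm.datatype` helper names and the extern-function names below follow the commit text and the bfloat16 unittest it mentions, so treat them as assumptions rather than a stable API.

```python
# Sketch of custom datatype registration as described in the commit above.
# The tvm.datatype helpers (register, register_op, create_lower_func) and
# the *_wrapper extern names are assumptions drawn from the commit text,
# not a verified, stable API.
import tvm
from tvm import datatype

# Custom type codes must be above 128 (kCustomBegin).
datatype.register("bfloat", 129)

# Lower Cast and Add on LLVM targets to calls into an external bfloat16
# library; lowering funcs are keyed as
#   tvm.datatypes.lower.<target>.<Op>.<dst-type>.<src-type>
datatype.register_op(datatype.create_lower_func("FloatToBFloat16_wrapper"),
                     "Cast", "llvm", "bfloat", "float")
datatype.register_op(datatype.create_lower_func("BFloat16Add_wrapper"),
                     "Add", "llvm", "bfloat")

# The type is then addressed through the dtype string, with the storage
# bits (and optionally lanes) appended after custom[<name>].
x = tvm.placeholder((8,), dtype="custom[bfloat]16", name="x")
```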
Yong Wu 93c8017096 [Relay][TensorFlow Frontend] SoftPlus Sqrt (#3187) 2019-05-14 22:42:34 -07:00
eqy 605b5e6074 [RELAY][PASS] detect depthwise conv2d in mac_count pass (#3083)
* check in

* use groups

* CHECK_EQ

* trigger CI

* Update mac_count.cc

* trigger CI

* trigger CI
2019-05-14 20:34:16 +08:00
Joshua Z. Zhang 134a2f25d6 add onnx elemwise greater/less (#3186) 2019-05-13 14:17:11 -07:00
Oldpan 25c91d34c4 Fix a bug of flatten in ONNX to Relay converter (#3180)
* fix onnx frontend flatten bug

* Update onnx.py

* Update onnx.py

* Update onnx.py
2019-05-13 14:03:40 -04:00
Jared Roesch 6a4d71ff40 [Relay][Runtime] Add VM compiler. (#3139)
* Implement the VM compiler

* Fix issues

* Fix ASF headers

* Fix test issue

* Apply typo fixes.

* Update src/relay/backend/vm/compiler.cc

Co-Authored-By: 雾雨魔理沙 <lolisa@marisa.moe>

* Refactor compiler

* Fix

* Fix

* Fix in benchmark

* Fix

* Address comments
2019-05-11 18:08:13 -04:00
lixiaoquan 8cb9ee47b9 [Relay][TensorFlow] Support tf.math.reduce_prod (#3166) 2019-05-11 10:26:01 +05:30
Lianmin Zheng ba15a729b0 [HybridScript] Capture constant external python variables (#3157) 2019-05-10 16:36:53 -07:00
lixiaoquan 654192de5c Fix a tensorflow test bug. (#3165)
Length of input_shape isn't always 4.
2019-05-10 10:14:38 -07:00
Zhi 95a323aaa1 [codegen] heterogeneous build for c++ (#3144)
* heterogeneous build for c++

* merge relay buildmodule to codegen build

* use module split

* use target_host

* remove sse3

* retrigger ci
2019-05-09 21:29:16 -07:00
Jared Roesch 4332b0aae3 [Relay][Runtime] Implementation of Relay VM (#2889)
* Implement the virtual machine

Co-Authored-By: wweic <ipondering.weic@gmail.com>

* Fix rebase build issues

* Reorganize vm.py and fix allocator bug

* Remove compiler

* Remove tests

* Remove backend/vm/vm.cc too

* Fix docs

* Fix doc

* Fix doc

* Add vm docs

* Remove change to dead_code.cc

* Remove Relay logging

* Remove reduce

* Update include/tvm/runtime/vm.h

Co-Authored-By: jroesch <roeschinc@gmail.com>

* Reformat

* Update include/tvm/runtime/vm.h

Co-Authored-By: jroesch <roeschinc@gmail.com>

* Address feedback

* Update include/tvm/runtime/vm.h

Co-Authored-By: jroesch <roeschinc@gmail.com>

* Apply suggestions from code review

Co-Authored-By: jroesch <roeschinc@gmail.com>

* Fix a couple outstanding comments

* Last couple comments

* Update include/tvm/runtime/vm.h

Co-Authored-By: jroesch <roeschinc@gmail.com>

* Address code review feedback

* Fix final comment

* Address comments

* Error reporting and example

* add Const

* Explicitly delete copy assignment operator

* Fix rebase

* Pass 3rd arg to fusion
2019-05-09 02:09:15 -04:00
Yao Wang 147ea3b0ca [Relay][Op] Adaptive pooling (#3085)
* Add topi adaptive_pool

* Use adaptive_pool to compute global_pool

* Add relay adaptive pool2d

* Fix lint

* Fix typo

* Minor change

* Change support level to 10

* Add contrib

* Remove global pool schedule

* Add contrib module

* Fix lint

* Update doc

* Update doc
2019-05-08 17:21:41 -07:00
Bing Xu b131d83687 Relay C++ Build Module (#3082)
* [Relay] C++ Build module

* asdf
2019-05-08 03:16:15 -04:00
Wei Chen 18dbfb1520 Handle vectorize for LE statement (#3137)
* Handle vectorize for LE statement

Fix a new case introduced by commit 7afbca5691

* Add test
2019-05-07 23:52:24 -04:00
Yong Wu 094fc68049 [Relay][Frontend] add log op in tf frontend (#3111)
* [Relay][Frontend] add log op in tf frontend

* address comment
2019-05-05 01:08:10 -07:00
Tianqi Chen 48c92376fb
[ARITH] Constraint-aware ConstIntBound, Enhance CanonicalSimplify (#3132) 2019-05-03 21:07:14 -04:00
Haichen Shen d39a4ea000 Add MXNet converter for RNN layer ops (#3125) 2019-05-02 11:59:22 -04:00
Tianqi Chen 2ed7f95a81 [LINT] Add more allowed file type 2019-05-02 11:52:13 -04:00
Zhao Wu 2e260938db Fix PRelu layout in Relay (#3013)
* Fix PRelu layout in Relay

* Fix cpplint

* Add PRelu test case
2019-05-01 11:18:15 -07:00
songqun 78e0871daa [FRONTEND][TFLITE] Add FULLY_CONNECTED op into tflite frontend, support Inception V4 (#3019)
* Add FULLY_CONNECTED op into tflite frontend, support Inception V4

* Fix comment style in TF Lite tests.
2019-05-01 11:03:52 -04:00
lixiaoquan e6ca91e196 [Relay][Tensorflow] Allow an op as loop var. (#3056) 2019-05-01 11:02:12 -04:00
Zhi f88f45805d [RELAY][FUSION] Enhance fusion rule that starts from elemwise and broadcast (#2932)
* [relay][bugfix] fuse injective to elemwise and broadcast

* enhance fusion for prarllel injectiveOD

* check if tensor in schedule

* fix codegen

* fix lint

* update

* lint
2019-05-01 12:42:27 +09:00
Haichen Shen 977896cbc9 [Bugfix] Fix type code error for StringImm (#3050) 2019-04-30 17:10:19 -07:00
Jared Roesch ba6f194b54 Fix bug in ONNX importer (#3084) 2019-04-29 12:54:16 -07:00
Leyuan Wang a706ad16f8 [Relay][TOPI] Gluoncv SSD support on the GPU (#2784)
* ssd gluoncv gpu op updated

* ssd gluoncv gpu op updated

* tutorials and testes modified

* tutorials and testes modified

* fix lint

* fix lint

* address comment

* multibox bug fixed

* space line added

* use less threads per block

* use less threads per block

* less threads per block for get valid count

* less threads per block for get valid count

* merge with master

* Revert "less threads per block for get valid count"

This reverts commit 08896cfccc34b0b2a1646d01d01ea4cad73941c4.

* Revert "less threads per block for get valid count"

This reverts commit 08896cfccc34b0b2a1646d01d01ea4cad73941c4.

* typo fixed

* elem length made to a variable

* fix lint error

* fix lint error

* lint fixed

* bug fixed

* bug fixed

* lint fixed

* error fixed

* error fixed

* test ci

* test ci

* seperate argsort to be an independent op

* seperate argsort to be an independent op

* fix lint

* fix lint

* remove unsupported models

* typo fixed

* argsort added to realy

* solve conflicts with master

* fix lint

* fix lint

* test push

* Revert "test push"

This reverts commit 6db00883fab6cc06bddf564c926bb27c874397d8.

* fix lint error

* fix more lint

* cpu test_sort udpated

* debug ci

* nms fixed

* expose argsort to relay frontend

* test ci

* fix lint

* sort register error fixed

* fix nnvm

* nms type fixed

* adaptive pooling added to relay

* Revert "adaptive pooling added to relay"

This reverts commit 1119f1f2c055753e0cc5611627597749134c5c8c.

* fix lint

* expose argsort op

* fix lint

* fix lint

* fix lint

* sort test updated

* sort bug fixed

* nnvm error fixed

* fix argsort default data type returned to be float insteaf of int

* fix lint

* fix lint

* test fixed

* fix valid count

* fix titanx bug

* tutorial add both targets

* titanx error fixed

* try to fix CI old gpu error

* try to solve CI GPU error

* get_valid_count added

* reverse get_valid_count

* get valid count optimized

* address comments

* fix ci error

* remove unessesary block sync

* add back one sync

* address comments

* address more comments

* more comments

* move sort to be indepent algorithm

* typo fixed

* more typos

* comments addressed

* doc updated

* fix pylint

* address final comments

* apache license added
2019-04-28 20:47:21 -07:00
Yizhi Liu 9d002e8eb2 [Lang] Fix undef BijectiveLayout and add scalar layout support (#3105) 2019-04-28 19:25:38 -07:00
Gemfield 73f87ae0b6 porting new upsample test case from nnvm to relay (#3115) 2019-04-28 19:24:28 -07:00
masahi f1fcbaf9b6 [Relay, OpFusion] Better tuple fusion implementation (#3092) 2019-04-28 19:18:41 -07:00
Tianqi Chen d0dca01a36 [LINT] recover lint error, add asf header check (#3117) 2019-04-28 13:21:08 -07:00
Tianqi Chen fbcb67afdc [CI] Add file type check (#3116) 2019-04-28 12:04:19 -07:00
Ruizhe Zhao (Vincent) 8f56949b34 Fixed issue #3069 by checking op tag (#3070)
* Fixed issue #3069 by adding in_channels

* Registerd group_conv2d_nchw as topi compute

* Improved by checking tag value

* Removed group_conv2d_nchw topi registration

* Added test for relay group_conv2d_nchw

* Added assertions to forbid small group size

* Removed hard-coded oc_block_factor

* Added explanatory comments to group_conv2d_nchw_cuda

* Updated group_conv2d_nchw_cuda schedule

Removed 'direct' CUDA tests

* Reverted an accidental change in a conv2d test

* Fixed indentation problems

* Fixed a mis-commented line

* Reverted change in group_conv2d_nchw tag

* Removed commented int8 group_conv2d test

* Fixed group size assertions in group_conv2d_nchw_cuda
2019-04-27 10:15:21 +08:00
Salem Derisavi 7e68d63f75 1) Fixed a functional bug in the loop partitioning algorithm that is exposed when double splitting with indivisible factors; 2) added a testcase (#2956) 2019-04-26 14:10:42 -07:00
Salem Derisavi 8b5b180af5 [TVM][ARITH] Teach BoundDeduce to handle the case in which target var can appear in rhs of expression (#2795)
* target variable can now appear in either lhs or rhs of the expression to be analyzed

* removed extra spaces
2019-04-26 09:49:29 -07:00
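A hedged illustration of the behaviour this commit describes, assuming the Python binding `tvm.arith.DeduceBound(var, cond, hint_map, relax_map)` used by the arithmetic unit tests of this era:

```python
# Illustration of BoundDeduce handling the target variable on either side
# of the comparison, per the commit above. The DeduceBound binding and its
# signature are assumptions based on the arithmetic unit tests of this era.
import tvm

x = tvm.var("x")
four = tvm.const(4, "int32")

# Target variable on the lhs: x < 4 gives an upper bound of 3.
print(tvm.arith.DeduceBound(x, x < four, {}, {}))
# Target variable on the rhs: 4 > x should now deduce the same bound.
print(tvm.arith.DeduceBound(x, four > x, {}, {}))
```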
Siva cb16cd445d [TEST][FLAKY] fix for #3099 (#3101) 2019-04-26 08:27:30 -07:00
lixiaoquan 036294c94d [Relay][TensorFlow] Remove 'input_0d_mismatch' special handling (#3087)
* [Relay][TensorFlow] Remove 'input_0d_mismatch' special handling

* Add more tests.

* Cover the case that strided_slice outputs a scalar
2019-04-25 22:57:37 -07:00
Hiroyuki Makino 9bfdc55c57 [Relay][TOPI] Add rsqrt operator (#2949) 2019-04-25 11:05:42 -07:00
Josh Pollock fed1c08e8d [Relay][Text Format] Fix Pretty Printing Annotations (#3041) 2019-04-25 10:56:45 -07:00
Yong Wu 51e2e31f99 [Frontend][TF] Fix Placeholder issue (#2834)
* [Frontend][TF] Fix Placeholder issue

* Add test cases
2019-04-21 15:59:21 +09:00
Yong Wu 85a3ea08d7 [Relay][Frontend] TF Tile Round Sign Pow Exp Reverse (#2960)
* [Relay][Frontend] TF Round Sign Pow Exp Reverse

* fix ci

* fix comments
2019-04-19 06:37:25 +05:30
Balint Cristian c91f714170 Support Deriving channels when it is not provided in AlterLayout. (#2972) 2019-04-17 22:20:41 +08:00
雾雨魔理沙 8d50312f74 [Relay] Fix Fuse (#3035)
* save

* fix

* Update fuse_ops.cc
2019-04-17 14:33:31 +09:00
Steven S. Lyubomirsky fcc5b42208 Ensure interpreted functions can take values that are not TensorValues (#3015) 2019-04-16 16:44:30 -04:00
hlu1 561e422b95 Add caffe2 nnvm frontend to CI (#3018) 2019-04-16 16:43:37 -04:00
Sergei Grechanik 4b487c0d09 [ARITH] Fix x||!x for comparisons in rewrite simplifier (#3029) 2019-04-16 12:37:45 -04:00
Ehsan M. Kermani 8d3b392da6 [RUST][FRONTEND] Fix resnet example (#3000)
Due to the previous changes, the frontend resnet example failed to build. So this patch

1) fixes it 
2) adds ~~a local `run_tests.sh` to remedy non-existence of MXNet CI (used in python build example)~~ the example build to CI with random weights and a flag for pretrained resnet weights

Please review: @tqchen @nhynes @kazimuth
2019-04-14 19:11:18 -07:00
MORINAGA 2bf666601d [Heterogeneous][Bugfix] Fix bug of wrongly generated device_map (#2990)
* fix bug of device_index

* cpplint

* nose

* Update test_pass_annotation.py

* fix name of testcase

* delete comment
2019-04-12 20:03:11 -07:00
Josh Pollock dc97e52789 [Relay][Text Format] Pretty Printer Smart Inlining (#2881) 2019-04-12 21:36:19 -04:00
Bing Xu 8b71a28289 [Relay] C++ GraphRuntimeCodegen, Deprecate Python2 (#2986)
* [Relay] C++ GraphRuntimeCodegen

* [Test] Deprecate Python2

* [Python3] Add Py2 check

* Update _pyversion.py

* [Python3] Update test
2019-04-12 16:13:45 -07:00
Alexey Romanov ab890d6e99 Support SpaceToBatchND/BatchToSpaceND in Tensorflow frontend (#2943)
Thanks @alexeyr . This is now merged.
2019-04-12 09:29:20 +05:30
Lianmin Zheng 5a27632e27 [AutoTVM] fix argument type for curve feature (#3004) 2019-04-11 10:58:54 +08:00
雾雨魔理沙 bb87f04409 add document (#2714)
lint

lint

save

save

add more case

save

error

lint

lint

commit

do

lint

save

fix lint

wrap it back as func

lint

save

remove dead comment

fix style

fix lint

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

address review feedback

pe now handle freevar. as a result preserving function is now trivial.

test

add basic test, implement pretty printing for generic function

test

lint

fix segfault

save

save

do

test

fix another error

address comment

commit

save

address review feedback

add test for invalidate, fix error in lookup

rename cont to boduy

fix error and add regression test

fix error, add test case

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

fix lint

remove extra line

save

save
2019-04-08 22:56:57 -07:00
雾雨魔理沙 28f354bf1e [Relay] Add expr_visitor, fix expr_functor exponential blowup problem (#2988)
* save

* lint
2019-04-08 22:51:00 -07:00
Wuwei Lin 5d70b00866 [Relay] InferCorrectLayout for strided_slice & min_num_branches option in CombineParallelConv2D (#2961)
* [Relay] InferCorrectLayout for strided_slice

* Add min_num_branches option to CombineParallelConv2D

* Return undef if original layout contains splitted axes
2019-04-08 22:20:56 -07:00
Tianqi Chen cffb4fba03 [HEADER] Add Header to Comply with ASF Release Policy (#2982)
* [HEADER] ASF header dir=include

* [HEADER] ASF Header dir=src

* [HEADER] ASF Header -dir=python

* [HEADER] ASF header dir=topi

* [HEADER] ASF Header dir=nnvm

* [HEADER] ASF Header -dir=tutorials

* [HEADER] ASF Header dir=tests

* [HEADER] ASF Header -dir=docker

* fix whitespace

* [HEADER] ASF Header -dir=jvm

* [HEADER] ASF Header -dir=web

* [HEADER] ASF Header --dir=apps

* [HEADER] ASF Header --dir=vta

* [HEADER] ASF Header -dir=go

* temp

* [HEADER] ASF Header --dir=rust

* [HEADER] Add ASF Header --dir=cmake

* [HEADER] ASF Header --dir=docs

* [HEADER] Header for Jenkinsfile

* [HEADER] ASF Header to toml and md

* [HEADER] ASF Header to gradle

* Finalize rat cleanup

* Fix permission

* Fix java test

* temporary remove nnvm onnx test
2019-04-07 21:14:02 -07:00
Tianqi Chen e38d00e2b0 [REFACTOR] Remove stale verilog generator (#2964) 2019-04-04 11:46:01 -07:00
Sunwoong Joo e68874d64d [Relay][Frontend] Adding ADD operator to tflite frontend for compiling the MobileNetV2 (#2919) 2019-04-03 14:28:11 -07:00
Yong Wu eb82e7b77a [Relay][Frontend] Support tf.where (#2936)
* [Relay][Frontend] Support tf.where

* fix comments
2019-04-03 16:40:10 +05:30
Yong Wu 38151abd72 [Relay][Frontend] Support TF Gather (#2935)
* [Relay][Frontend] Support TF Gather

* fix comments
2019-04-03 10:35:27 +05:30
Nick Hynes 4968279f87 [Rust] Unify types between bindings and pure Rust impl (#2616) 2019-04-02 17:24:21 -07:00
Leyuan Wang 1dab4dcce3 [Bugfix] Bilinear resize bug fix from PR #2777 (#2857)
* error fixed

* rename

* solve conlicts with master

* more test added

* fix error

* remove test

* comment addressed
2019-04-02 14:11:51 -07:00
Marcus Shawcroft a42ad8ed82 Add missing #!/bin/bash directive. (#2951) 2019-04-02 09:41:08 -07:00
Leyuan Wang ae21eddf5f [Relay][OP] Gather_nd exposed to relay (#2945)
* gather_nd added

* gather_nd test added

* more test added

* fix lint

* fix build error

* fix lint

* comments addressed
2019-04-01 23:17:31 -07:00
Haichen Shen 3746d9026a [Relay/TOPI][OP] Add clip and wrap mode support in take (#2858)
* Update take

* Add special case for canonical simplify and fix test cases

* Use lower case for wrap and clip

* remove unnecssary lower

* Fix mxnet converter for take

* fix
2019-04-02 06:40:11 +08:00
lixiaoquan 7cc9240ae8 [Relay] Add foldr1 (#2928) 2019-04-01 09:11:29 -07:00
MORITA Kazutaka 162eab44dd [DOCKER][FRONTEND] Run DarkNet tests (#2673)
* [DOCKER][FRONTEND] Run DarkNet tests

* update tests to pass CI
2019-04-01 09:00:23 -07:00
Mr You bbaee69b0e Update schedule_dataflow_rewrite.cc (#2934) 2019-03-31 21:32:34 -07:00
Tianqi Chen 7afbca5691 [ARITH] Analyzer CanonicalSimplifier (#2891) 2019-03-31 15:06:48 -07:00
Andrew Tulloch eb1ed1164e Fix vcvtph2ps codegen (#2925) 2019-03-31 12:30:25 -04:00
Siva 891c41177b [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay. (#2850)
* [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay.

* 	* test cases

* 	* ci error
2019-03-30 14:16:56 -04:00
Yutetsu TAKATSUKASA b590c4f225 Consistent result of DetectLinearEquation() when an empty vars is passed (#2860) 2019-03-30 14:13:54 -04:00
masahi c162e7d662 [Relay, OpFusion] Fix handling TupleGetItem for nested tuples (#2929) 2019-03-30 14:09:46 -04:00
Siyuan Feng 39c116f00f Fix intersect of modular set (#2904)
Fix comment bugs and code style
2019-03-30 12:45:06 -05:00
Haichen Shen c6339730ad [Tutorial] Cache the test data in tutorial (#2923) 2019-03-30 00:11:55 -04:00
Tianqi Chen 8eef1565b5 Revert "[Relay] add test for second order ad (#2754)" (#2926)
This reverts commit f5ca9915ab.
2019-03-29 21:57:24 -04:00
雾雨魔理沙 f5ca9915ab [Relay] add test for second order ad (#2754)
* do second order

* add comment

* better name

* use tvm assert all close

* refire ci
2019-03-29 17:02:32 -07:00
Andrew Tulloch c5a7b74332 TVM debugresult dump to Chrome Tracing (#2922) 2019-03-29 14:35:15 -04:00
Wuwei Lin 608cdeeb48 [Relay, TOPI] Deformable conv2d (#2908)
* [Relay, TOPI] Add deformable conv2d

* Moved to op level2

* Fix lint

* Moved to level2 & bug fix

* Update comments

* Disabled flaky test of conv2d
2019-03-29 21:38:56 +09:00
masahi 82e868a4be [Relay] Add support for TupleGetItem in op fusion (#2914) 2019-03-29 08:23:01 -04:00
Hao Jin a0537ecbf8 add support for mxnet smooth_l1 (#2905) 2019-03-29 11:59:05 +08:00
Haichen Shen e3206aa84d [TEST] Cache test data (#2921) 2019-03-28 22:12:25 -04:00
Nick Hynes dfe4c466d3 [Relay] Allow converting keras.layers.Sequential (#2842)
* Allow converting keras.layers.Sequential

* Use existing new_var function

* Only update expr when missing

* Add test
2019-03-28 05:49:05 +09:00
Marcus Shawcroft d63f6d36f7 [TESTS] Import script robustness (set -u) (#2896)
Adopt the "set -u" idiom from the docker scripts as a mechanism to
improve future robustness.
2019-03-26 11:17:11 -07:00
hlu1 a02916b5ae winograd_nnpack (#2721) 2019-03-25 20:12:32 -07:00
Marcus Shawcroft 23aa24cf17 [TESTS] Improve script robustness (#2893)
A number of test scripts use the '|| exit 1' idiom.  This has two
issues: first, process exit codes are defined to be in the range 0-255;
second, and more importantly, the idiom is fragile because it requires
that every possible failure point be explicitly coded.  This patch
removes the idiom in favour of "set -e" as used in the docker scripts,
a more robust mechanism to ensure that script failures are always
caught and propagated by default.
2019-03-25 13:44:35 -07:00
Zhi 2df3364b05 [RELAY][Frontend][TF] decompile tf control flow (#2830)
* decompile tf control flow

* Add docs

* remove import relay

* move tests under tensorflow frontend

* minor fix
2019-03-24 23:19:16 +08:00
Sergei Grechanik a610edee91 [ARITH] RewriteSimplifier: improved cmp simplification (#2851) 2019-03-22 18:25:16 -07:00
Wei Chen 2d5a072019 [Relay] Add list update to prelude (#2866) 2019-03-22 18:21:31 -07:00
hlu1 4692440605 [NNPACK] Modernize test (#2868) 2019-03-22 18:21:06 -07:00
Josh Pollock e23913f5eb [Relay][Text Format] Reverse CallNode Print Order (#2882) 2019-03-22 18:20:41 -07:00
Josh Pollock db5bfa3c61 [Relay][Text Format] Text Printer Refactor and Debug Printing (#2605) 2019-03-20 16:11:53 -07:00
Haichen Shen 89acfeb258 [Relay][Frontend] Add ops in mxnet converter (#2844)
* Add ops in mxnet converter

* trigger ci
2019-03-20 01:09:24 -07:00
Bing Xu f81e2873d1 [AlterLayout] NCHWc upsampling, fix depthwise conv (#2806)
* [AlterLayout] NCHW upsampling

* [Relay][Pass] Fix Depthwise AlterLayout
2019-03-19 20:51:23 -07:00
Leonardo lontra d5b3422099 [Relay][Frontend][keras] added interpolation method of Upsampling2D (#2854)
* [Relay][Frontend][keras] added interpolation method of Upsampling2D.

* added testcase

* small fixes
2019-03-19 19:17:09 -04:00
Siva bb3c815140 [FRONTEND][TENSORFLOW] Enhance with left over patches from NNVM. (#2757)
* [FRONTEND][TENSORFLOW] Enhance with left over patches from NNVM.

commit 76188a4
Author: Siva sivar.b@huawei.com
[NNVM][TENSORFLOW] bugfix. (#2444)

commit 6737739
Author: Ashutosh Parkhi ashutosh.parkhi@imgtec.com
[Tensorflow] Support for Crop (#2285)

commit f6c3f99
Author: Alexey Romanov alexey.v.romanov@gmail.com
[FRONTEND][TENSORFLOW] Use input shapes directly instead of 1-element lists (#2242)

commit e5d92e1
Author: Dominic Symes 36929632+dominicsymes@users.noreply.github.com
[FRONTEND][TENSORFLOW] Bugfix (#2326)

commit 00d509d
Author: Alexey Romanov alexey.v.romanov@gmail.com
[FRONTEND][TENSORFLOW] Support Unstack and Split (#2105)

commit df9d3ad
Author: Siva sivar.b@huawei.com
[FRONTEND][TENSORFLOW] Bugfix (#2267)

commit d1a0c90
Author: Zhebin Jin zhebin.jzb@alibaba-inc.com
[FRONTEND][TENSORFLOW]Add Split and realdiv op support (#2123)
* Add Split and realdiv op support
* Fix the pad calculation in the case of dilated convolution

* 	* review comments

* 	* resnet fix.

* 	* review comments
2019-03-19 12:25:07 +05:30
Tianqi Chen f63631fc73 [RUNTIME] Scaffold structured error handling. (#2838) 2019-03-18 23:05:02 -07:00
lixiaoquan fa709832f1 [CODEGEN][OPENCL] Fix compile error about ternary expression. (#2821)
Code like this can't be built with NV OpenCL, and it needs an explicit type
conversion for the ternary expression if the return type is uchar.

       uchar i = 0, j = 0;
       uchar t = max((uchar)j, ((i > 0) ? (uchar)1 : (uchar)0));
2019-03-19 05:43:48 +09:00
hlu1 0f6989f98a Fix typo (#2839) 2019-03-18 11:49:28 -07:00
Wuwei Lin baf7a729ec [TOPI, Relay] ROI Pool operator (#2811) 2019-03-15 08:06:01 +09:00
Wei Chen c0a5a9be2f [Relay] Add hd,tl,nth for list in Prelude (#2771) 2019-03-14 12:06:49 -07:00
Leyuan Wang 5f89a50e32 [Bugfix] Repeat and tile bug fixed, relay tests added (#2804) 2019-03-14 10:09:38 -07:00
Tianqi Chen 046e4ff078 [ARITH] RewriteSimplifier: min/max, logical, select (#2768) 2019-03-14 09:52:33 -07:00
hlu1 6c60b8d304 Fix caffe2 relay frontend (#2733) 2019-03-13 22:16:50 -07:00
lixiaoquan 7182201d89 Fix a bug in nnvm to relay converter. (#2756) 2019-03-13 22:15:36 -07:00
Ashutosh Parkhi cc112c10c5 Support for sign (#2775) 2019-03-13 22:14:26 -07:00
Haichen Shen ee8058069a [Relay/TOPI][Op] Add shape op in Relay and TOPI (#2749)
* Add shapeof op in topi

* Add relay shape_of op

* Add constant folding for shape_of

* Allow shape op to specify dtype

* Add mxnet converter for shape_array

* lint

* lint

* Add doc
2019-03-13 16:14:48 -07:00
Leyuan Wang 4d09fc4e48 [Relay][Frontend] Add reverse op to relay (#2800)
* start adding reverse

* reverse updated

* reverse uses topi::flip

* typo fixed

* comment addressed

* exp simplified
2019-03-13 14:24:46 -07:00
Salem Derisavi a2b45887aa Ensure loop count is a constant before trying to unroll. (#2797) 2019-03-12 16:12:10 -07:00
Tianqi Chen 5e3ceaa073 [DOCKER] Update docker protocol (#2793) 2019-03-12 14:34:34 -07:00
Tianqi Chen d8abc733a1 [TEST] recover tflite test (#2788) 2019-03-11 20:30:30 -07:00
Zhi abe6f77046 [Relay] Pass manager (#2546)
* initial commit

* add python frontend and module tests

* add unit tests for function pass and optimize interface

* add ExprPass

* remove PassState and pass context for run

* add required_passes

* return module

* remove move

* fix minor reviews

* remove optimizer, optimizer->pass_manager, make pass a the base class of all

* remove deleted files

* move resolvedependency to sequential pass, use ir_pass namespace

* add todo

* add disabled passes in sequetialpass

* fix minor

* fix currying doc

* remove pass_kind from passnode

* remove pass kind from test

* fix doc

* fix per @tqchen's comments

* remove pass_manager.py create separate classes

* simplify pass_func

* inline using passfunc

* update doc

* disable test_quantize_pass for now

* create PassInfo class to contain the meta data

* flatten passinfo for interface

* retrigger ci

* remove required method

* make Pass python class lighter

* create pass -> decorator

* make the api consistent for all classes
2019-03-11 18:19:39 -07:00
Tianqi Chen 7226c01065 [TEST] Hotfix CI outage after TF in docker update (#2781) 2019-03-11 17:52:10 -07:00
Andrew Tulloch 2919a3ee1e Implement flop support for int8 models (#2776) 2019-03-11 12:55:01 -07:00