Commit Graph

5411 Commits

Author SHA1 Message Date
Vadim Mazalov fea6e9dd47 Merge branch 'vadimma/bmlfex' 2019-01-12 00:01:46 +00:00
Vadim Mazalov 62fa8182d7 Enable frame mode for the binary MLF reader 2019-01-10 18:02:34 -08:00
liqfu fe939aa3cb support ONNX scan(9) 2019-01-08 12:16:32 -08:00
Vadim Mazalov 05435d0db2 Expose Adam in BS 2019-01-03 12:58:17 -08:00
Bowen Bao e2d79d7da0 Submodule onnxruntime, and remove previous drop.
* A few patches are required to build cntk_uwp.
* Use proto from onnxruntime/protobuf instead of from onnx.
* TODO: Some issues with onnx_op_test RNN and OptimizedRNNStack from shape inference.
2019-01-02 17:09:08 -08:00
Bowen Bao bc708633c8 Add MaxUnpooling export 2018-12-14 11:24:26 -08:00
Deyu Fu ef2f039b53 fix break for cub 1.8 API change 2018-12-12 16:14:15 -08:00
Bowen Bao f1781446d1 Support CUDA 10
* Move to support CUDA 10, cudnn 7.3, cub 1.8.
* Fixed a bug related to "pointer to pin pointer is disallowed" #3063,
which is exposed in newer versions of vctools.
* Added workaround for a potential vs2017 15.9 bug with cntk Debug
version.
2018-12-12 16:10:31 -08:00
liqfu 93e10096cb comment debug code 2018-12-11 19:17:16 -08:00
liqfu de15bda40d passed prod models and unfold test. handle delay op. 2018-12-11 17:49:23 -08:00
Spandan Tiwari 22e869ec42 Add ONNX support for zeros_like, ones_like, and eye_like. 2018-12-05 13:51:43 -08:00
Peyman Manikashani 75b0141b48 Adding support for exporting CNTK's Sequence::IsFirst and Sequence::IsLast nodes 2018-11-29 15:57:43 -08:00
Bowen Bao d1d113322c ConvTranspose asymmetric padding
* Temporarily reverse the extra padding location in case of
SAME_UPPER vs SAME_LOWER for convTranspose to match onnxruntime.
* When importing a convTranspose with asymmetric padding, use
symmetric pads by altering the output_shape and pads, and attach a slice
node afterwards to enable cudnn.
* Fix a bug in slice/squeeze attribute axes export.
2018-11-27 09:33:15 -08:00
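A minimal sketch of the symmetric-pads-plus-slice idea from the commit above, per spatial axis, using hypothetical helper names (not the actual importer code): run the convTranspose with the smaller symmetric pad, then slice off the leftover border so the output matches the originally requested extent.

```python
def split_asymmetric_convtranspose_pads(pad_begin, pad_end, out_dim):
    """Hypothetical helper: turn asymmetric convTranspose pads into a
    cudnn-friendly symmetric pad plus a slice window along one spatial axis."""
    sym_pad = min(pad_begin, pad_end)     # symmetric padding cudnn can handle
    extra_begin = pad_begin - sym_pad     # leftover asymmetric part at the start
    extra_end = pad_end - sym_pad         # leftover asymmetric part at the end
    # With less padding, convTranspose produces a larger output; slice it back.
    enlarged_out = out_dim + extra_begin + extra_end
    slice_window = (extra_begin, enlarged_out - extra_end)
    return sym_pad, slice_window

# Example: pads (1, 3) and a requested output extent of 10 become a symmetric
# pad of 1, an enlarged output of 12, and a slice window of (0, 10).
```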
Vadim Mazalov 8c156834f6 Merge branch 'vadimma/binmlf' 2018-11-24 09:12:41 +00:00
liqfu 0c1d283623 update latest onnx and onnxruntime, fix shape inference 2018-11-22 08:56:07 -08:00
Vadim Mazalov d7101a24cd Clean up bin mlf test 2018-11-12 12:34:56 -08:00
Vadim Mazalov 7ef57defd3 Add dim label to bin mlf reader 2018-11-10 21:32:28 -08:00
Aghagolzadeh 22c7c3cbf9 Update BlockMomentumDistributedLearner.h 2018-11-08 14:32:13 -08:00
Liqun Fu 69df29e43c Merge branch 'liqun/seqopsStage' 2018-11-08 22:14:44 +00:00
Aghagolzadeh 2405457fd3 Update BlockMomentumDistributedLearner.h 2018-11-08 12:27:55 -08:00
liqfu ab4bee2b7a Support RNN ops in a Scan loop
Update with latest ONNX
Update with latest ONNX graph IR
Support sequence ops - Sequence::Gather, Sequence::PastValue, Sequence::FutureValue, etc.
2018-11-07 18:36:20 -08:00
Vadim Mazalov b51e8c243a Expose binmlf in python 2018-11-05 16:32:59 -08:00
Bowen Bao 3f46cf0269 Updates on several ONNX exports.
* ConvTranspose outputShape: pad values are now always exported, even
when outputShape is given, because CNTK and ONNX have
different padding specs.
* Flatten: in CNTK, Flatten does not affect the batch axis; this should be
preserved in ONNX.
2018-11-02 17:18:07 -07:00
Bowen Bao fca139674c Add onnx_test_runner verification in CI.
* onnx_test_runner.exe will be called in win64 GPU tests to verify that
the output data produced by CNTK in onnx_op_test and
onnx_model_test (cntk_model_test) matches that of onnxruntime.
2018-10-31 15:46:56 -07:00
liqfu e940605f6b Support ONNX Scan op 2018-10-19 21:36:21 -07:00
Bowen Bao a55e871ec8 Fix InvStdDev.
* The issue was that AssignSqrOfDifferenceOf(beta, input, mean, alpha)
assigns the mean value to the gaps in the input. These values are then reduced
within the same function, leading to incorrect results. The fix is to
execute assign and reduce separately, and to mask the gaps back to zero before reducing.
* Update test baseline affected by this change (err is lowered by <1%).
2018-10-19 10:24:11 -07:00
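A tiny numpy illustration of the fix described above (the gap/validity semantics are assumed for illustration, not taken from CNTK internals): reducing right after the assign counts the gap cells as (0 - mean)^2, while re-masking the gaps to zero before reducing yields the intended sum.

```python
import numpy as np

x     = np.array([1.0, 3.0, 0.0, 0.0])   # last two entries are gap cells
valid = np.array([1.0, 1.0, 0.0, 0.0])   # validity mask for the minibatch
mean  = 2.0

# Buggy path: the assign writes (x - mean)^2 into every cell, gaps included,
# and reducing immediately counts the gaps as (0 - mean)^2.
sq = (x - mean) ** 2
wrong_sum = sq.sum()            # 1 + 1 + 4 + 4 = 10

# Fixed path: assign, mask the gaps back to zero, then reduce.
right_sum = (sq * valid).sum()  # 1 + 1 = 2
```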
Bowen Bao 0ffdcf7f1d Overhaul node name export & other fixes
* Overhaul node name export. Create static class UniqueNodeNameStorage
to manage ONNX node name generation with a maintained one-to-one mapping
to CNTK Uids, while preserving the original CNTK node names on a
best-effort basis (#3358).
* Update onnx_op_test to test the preservation of original CNTK node
names in exported/imported models.
* Update onnx_test_helper to support proper linking of test data and
onnx model input/output with unique names.
* Update onnx_test_helper to generate .bat file to run exported models
in further onnxruntime verification.
* Fix Sum import to support an arbitrary number of inputs. The Sum
implementation in the CNTK backend is a loop of Plus, which takes care of
potential broadcast issues.
2018-10-17 18:36:48 -07:00
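The "loop of Plus" Sum import mentioned above can be sketched as a simple fold over the operands; a minimal sketch assuming the CNTK Python API (cntk.plus), where each pairwise Plus applies the usual broadcasting rules:

```python
from functools import reduce
import cntk as C

def import_onnx_sum(operands):
    """Sketch: lower an ONNX Sum with an arbitrary number of inputs to a
    chain of CNTK Plus ops; each pairwise Plus handles broadcasting."""
    return reduce(C.plus, operands)
```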
Yang Chen da2e610c73 Replaced wchar/wstring with char/string in C interface 2018-10-12 14:12:23 -07:00
Spandan Tiwari 149d87bad3 Adding ONNX export support for OneHotOp. 2018-10-05 14:06:56 -07:00
Bowen Bao bf37aadc53 Fix pad offset computation for pooling
* Compute keyInterior according to the updated algorithm for computing
cell offset key.
* Update unittests of avg_pooling/max_pooling for cases that require
auto_padding = True. Previous test cases covered only those that do not
need padding.
2018-10-01 17:21:30 -07:00
Bowen Bao fcdeef63d0 Support crop_manual export & import. 2018-09-29 13:46:17 -07:00
Bowen Bao a36fae88bb Support logPlus(log_add_exp) export to ONNX
* ONNX supports the similar op ReduceLogSumExp; conversions are added when
exporting.
* Refactored CNTKToONNXHelper::BroadcastInputsIfNeeded to support more
general cases.
2018-09-28 15:59:55 -07:00
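The export described above rests on the identity logPlus(a, b) = ReduceLogSumExp over the two inputs stacked along a new axis; a quick numpy check of that identity with illustrative values:

```python
import numpy as np

a = np.array([0.5, -1.0])
b = np.array([2.0, 3.0])

# CNTK logPlus / log_add_exp: elementwise log(exp(a) + exp(b)).
log_plus = np.log(np.exp(a) + np.exp(b))

# ONNX ReduceLogSumExp over the stacking axis gives the same values.
reduce_lse = np.log(np.exp(np.stack([a, b])).sum(axis=0))

assert np.allclose(log_plus, reduce_lse)
```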
Spandan Tiwari c2072cc4ab Add support for ONNX export of StraightThrough op. 2018-09-27 10:25:11 -07:00
Spandan Tiwari 1aab76af99 Updating ONNX submodule hash to include defs for ConstantLike and EyeLike ops. 2018-09-26 18:01:34 -07:00
Peyman Manikashani ce503f8dd7 pooling export fix for backward compatibility 2018-09-25 17:09:39 -07:00
Ke Deng 9165fd06f8 Merge branch 'kedeng/fixCrash' 2018-09-25 00:26:34 +00:00
liqfu 58f810fed0 update with ONNX 1.3 and latest onnxruntime 2018-09-22 09:53:27 -07:00
KeDengMS 1489de8de8 Fix a crash in transpose_times simplification to element times 2018-09-21 22:33:41 -07:00
Bowen Bao da6b0bc71f GatherNode backward: add check for no dynamic axis
Previously, to resolve the issue of gather producing incorrect gradient
values, a validity mask check was added to ensure we don't count non-valid
cells as 0.
However, this check is needed only for inputs that have a dynamic axis, i.e.
inputs that have an MBLayout.
2018-09-20 14:54:39 -07:00
Bowen Bao deda94b67b Support pooling(cpu) where kernel center is on pads.
- The previous implementation assumed that (0 <= dk < width).
This assumption does not hold when lo > (kernel - 1) / 2.
The updated calculation supports arbitrary non-negative integer values of
lo & hi; dk is now in the range (0, width + hi + lo].
- Enables the onnx backend tests {averagepool_2d_pads, maxpool_2d_pads} to
pass.
2018-09-12 21:37:21 -07:00
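A short worked example of the boundary condition above, with hypothetical numbers: the kernel center sits (kernel - 1) / 2 cells from the start of the window, so once lo exceeds that offset the first window's center lands on a padding cell, which the old 0 <= dk < width assumption did not cover.

```python
# Hypothetical values showing when a pooling kernel's center lands on pads.
kernel, lo = 3, 2
center_offset = (kernel - 1) // 2   # 1: center's distance from the window start
first_center = center_offset - lo   # -1: index of the first window's center
print(first_center < 0)             # True -> the center is on a pad cell
```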
Bowen Bao 62e18f4854 Improve clarity in pads calculation for conv/pool
- Refactor function CalcPaddingForSameLowerOrUpperAutoPad in conv/pool import,
  changing parameter "const Variable& input" to "const NDShape& inputWithBatchAxisShape",
  to specify the required shape format as [N x C x H x W].
2018-09-12 21:37:21 -07:00
Sergii Dymchenko 35e370170a Merge branch 'sedymche/onnx-min-max' 2018-09-13 02:42:25 +00:00
Sergii Dymchenko 61d7dab912 Support more than 2 inputs for ONNX Min/Max import. 2018-09-12 15:12:14 -07:00
Peyman Manikashani b374e149b4 fixes on Batchnorm and Pooling for v1 pretrained models after removal of sequence axis from input 2018-09-12 10:02:52 -07:00
Bowen Bao 5897265366 small patch on conv/pooling export
- when pads are all zero, check whether autopad is true.
- when pads are all zero, check whether ceilOutDim is true and extra cells
are needed.
2018-09-11 08:52:22 -07:00
Bowen Bao 2f52f2219f update conv/convtranspose/pooling import.
pad values are explicitly computed based on ONNX spec equations during import in the following cases:
- case 1: when auto_pad is SAME_UPPER | SAME_LOWER for convolution, convolution transpose and pooling.
- case 2: when output_shape is explicitly set for convolution transpose.
	  note: output_shape in the ONNX spec can have either of the two formats below:
	  	1. [X1 * X2 * ... * Xn]
		2. [N * O * X1 * X2 * ... * Xn]
2018-09-09 13:41:04 -07:00
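A hedged sketch of those pad equations, following my reading of the ONNX operator spec for auto_pad = SAME_UPPER/SAME_LOWER and for ConvTranspose with an explicit output_shape (per spatial axis; names are illustrative, not the importer's):

```python
import math

def same_auto_pad(in_dim, kernel, stride, dilation=1, upper=True):
    """Pads implied by auto_pad = SAME_UPPER / SAME_LOWER (per axis)."""
    out_dim = math.ceil(in_dim / stride)
    total = max((out_dim - 1) * stride + (kernel - 1) * dilation + 1 - in_dim, 0)
    small, big = total // 2, total - total // 2
    # SAME_UPPER puts the extra pad cell at the end, SAME_LOWER at the beginning.
    return (small, big) if upper else (big, small)

def convtranspose_pads_from_output_shape(in_dim, kernel, stride, out_dim,
                                         dilation=1, output_padding=0):
    """Pads implied by an explicit ConvTranspose output_shape (SAME_UPPER-style split)."""
    total = stride * (in_dim - 1) + output_padding + (kernel - 1) * dilation + 1 - out_dim
    return total // 2, total - total // 2
```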
liqfu d877233979 Make broadcast ops compatible between CNTK and ONNX,
Enable ONNX export/import for optimizedRNN op,
More ONNX support for Sequence ops
2018-09-09 08:59:33 -07:00
Bowen Bao fcf9f48895 Overhaul conv/convTrans/pooling pads value export
- Update exporting of conv/pooling to always export pad values.
- Enable correct exporting of multiple pretrained models (ResNet50/ResNet101/ResNet152_ImageNet_Caffe, etc).
- Overhaul convtranspose pads exporting
- Support conv weight export with omitted out channel axis (LRN).
- Add tests in onnx_op_test to cover the above changes
2018-09-06 11:46:14 -07:00
Bowen Bao dc5e482d54 fix onnx average pooling export.
- this fix solves the issue where ceilOutDim == true forces exporting auto_pad as true, even if autoPadding is explicitly set to false.
2018-08-30 18:19:04 -07:00
liqfu 18d9f39afc skip dynamic axes wrapper, export onnx test cases, handle output op being a Combine op, work around a specific-case bug in ONNX bidirectional broadcast shape inference 2018-08-29 16:52:58 -07:00