Commit Graph

15960 Commits

Author SHA1 Message Date
Bowen Bao da6b0bc71f GatherNode backward: add check for no dynamic axis
Previously, to resolve an issue of gather producing incorrect gradient
values, a validity mask check was added to ensure we don't count non-valid
cells as 0.
However, this check is needed only for inputs that have a dynamic axis, i.e.
inputs that have an MBLayout.
2018-09-20 14:54:39 -07:00
Bowen Bao 0a3eb3b813 Update onnx_model_test with tests on cntk pretrained models 2018-09-19 11:27:55 -07:00
Liqun Fu 6f09c398b9 Merge branch 'release/2.6' 2018-09-18 02:33:30 +00:00
liqfu 4ed1896332 set public_build to "no"/false 2018-09-17 16:10:48 -07:00
liqfu 1be3b64195 update readme for .net support 2018-09-14 18:07:45 -07:00
TJ d355c1c700 Updated current_iteration with .net support 2018-09-14 13:07:24 -07:00
TJ 04caa9deaf Updated current_iteration with .net support 2018-09-14 12:29:51 -07:00
liqfu 7c1b0fadb6 update readme with current iteration 2018-09-13 17:01:45 -07:00
Bowen Bao da31ba04fa Update current_iteration.md 2018-09-13 16:57:44 -07:00
Sergii Dymchenko e4d708118d Update current_iteration.md. 2018-09-13 16:57:30 -07:00
liqfu 4f965aaaf7 update version # in cntk_common.cmake 2018-09-13 16:34:30 -07:00
liqfu 82d350d0ac bump up version number 2018-09-13 15:52:47 -07:00
Bowen Bao d264a26034 Update current_iteration.md 2018-09-13 13:44:42 -07:00
Sergii Dymchenko be28e864cc Update current_iteration.md. 2018-09-13 11:18:53 -07:00
Bowen Bao deda94b67b Support pooling(cpu) where kernel center is on pads.
- The previous implementation assumed that (0 <= dk < width).
This assumption doesn't hold when lo > (kernel - 1) / 2.
    The updated calculation supports arbitrary non-negative integer
    values for lo & hi. The new calculation has dk in range (0, width +
    hi + lo].
- Enables onnx backend tests {averagepool_2d_pads, maxpool_2d_pads} to
pass.
2018-09-12 21:37:21 -07:00
Bowen Bao 62e18f4854 Improve clarity in pads calculation for conv/pool
- Refactor function CalcPaddingForSameLowerOrUpperAutoPad in conv/pool import,
  changing parameter "const Variable& input" to "const NDShape& inputWithBatchAxisShape",
  to specify the required shape format as [N x C x H x W].
2018-09-12 21:37:21 -07:00
Sergii Dymchenko 35e370170a Merge branch 'sedymche/onnx-min-max' 2018-09-13 02:42:25 +00:00
liqfu d33b7b44e8 update iteration plan 2018-09-12 17:24:45 -07:00
Peyman Manikashani 6a4ec05b19 Merge branch 'peykash/batchnorm_and_pooling_fixes' 2018-09-13 00:13:46 +00:00
Sergii Dymchenko 61d7dab912 Support more than 2 inputs for ONNX Min/Max import. 2018-09-12 15:12:14 -07:00
Spandan Tiwari 8b48976bed Adding CNTK 2.6 release work summary to current_iteration.md 2018-09-12 11:12:10 -07:00
Peyman Manikashani b374e149b4 fixes on Batchnorm and Pooling for v1 pretrained models after removal of sequence axis from input 2018-09-12 10:02:52 -07:00
Bowen Bao 5897265366 small patch on conv/pooling export
- when pads are all zero, check if autopad is true.
- when pads are all zero, check if ceilOutDim is true, and extra cells
are needed.
2018-09-11 08:52:22 -07:00
Bowen Bao 61572e89f8 Update onnx_model_test skip list 2018-09-11 08:51:45 -07:00
Bowen Bao 0754b38e34 update onnx_model_test with tests from onnx backend test 2018-09-09 13:53:26 -07:00
Bowen Bao 2f52f2219f update conv/convtranspose/pooling import.
pad values are explicitly computed based on ONNX spec equations during import in the following cases:
- case 1: when auto_pad is SAME_UPPER | SAME_LOWER for convolution, convolution transpose and pooling.
- case 2: when output_shape is explicitly set for convolution transpose.
	  note: output_shape in the ONNX spec can have the two formats below:
	  	1. [X1 * X2 * ... * Xn]
		2. [N * O * X1 * X2 * ... * Xn]
2018-09-09 13:41:04 -07:00
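For reference, the per-dimension pad computation for auto_pad = SAME_UPPER | SAME_LOWER described in the ONNX spec can be sketched roughly as follows. This is illustrative Python under stated assumptions (dilation = 1), not the CNTK importer code:

```python
import math

def same_pads(in_dim, kernel, stride, upper=True):
    """Return (pad_begin, pad_end) so that output size == ceil(in_dim / stride).

    Follows the ONNX auto_pad convention: total padding is whatever is
    needed to reach the SAME output size; for odd totals, SAME_UPPER puts
    the extra cell at the end, SAME_LOWER at the beginning.
    """
    out_dim = math.ceil(in_dim / stride)
    total = max((out_dim - 1) * stride + kernel - in_dim, 0)
    small, big = total // 2, total - total // 2
    return (small, big) if upper else (big, small)

print(same_pads(7, 3, 2))   # (1, 1): output is ceil(7/2) = 4
print(same_pads(224, 7, 2)) # (2, 3): odd total, extra pad at the end
```

For convolution transpose with an explicit output_shape (case 2 above), the spec instead solves for the pads from the requested output size; the splitting of the total into begin/end halves works the same way.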
liqfu d877233979 Make broadcast ops compatible between CNTK and ONNX,
Enable ONNX export/import for optimizedRNN op,
More ONNX support for Sequence ops
2018-09-09 08:59:33 -07:00
Bowen Bao fcf9f48895 Overhaul conv/convTrans/pooling pads value export
- Update exporting of conv/pooling to always export pad values.
- Enable correct exporting of multiple pretrained models (ResNet50/ResNet101/ResNet152_ImageNet_Caffe, etc).
- Overhaul convtranspose pads exporting
- Support conv weight export with omitted out channel axis (LRN).
- Add tests in onnx_op_test to cover the above changes
2018-09-06 11:46:14 -07:00
Bowen Bao e3a1acfdf0 Resolve dependencies and build issues
- Temporarily add importorskip around import onnx
- bump up .yml matplotlib version
2018-09-05 15:02:23 -07:00
Bowen Bao dc5e482d54 fix onnx average pooling export.
- this fix solves the issue where ceilOutDim == true forces exporting auto_pad as true, even if autoPadding is explicitly set to false.
2018-08-30 18:19:04 -07:00
Bowen Bao 77a8c4992f Temporarily skip onnx_model_test if import onnx fail 2018-08-30 10:52:53 -07:00
Liqun Fu 94a43edd24 Merge branch 'liqun/liqun/RNN2.6.Stage' 2018-08-30 07:40:20 +00:00
Bowen Bao 73cd53e4f5 fix nightly issues related to onnx dependencies
- Windows OOBE (pip) tests & Linux OOBE tests: skip onnx_model_test. This test requires
onnx to be installed. Skip until we decide to add onnx dependencies to
the OOBE test environment.
2018-08-29 17:19:19 -07:00
liqfu 18d9f39afc skip dynamic axes wrapper, export onnx test cases, handle output op being Combine op, workaround a specific case bug of ONNX bidirectional broadcast shape inference 2018-08-29 16:52:58 -07:00
Peyman Manikashani 902f1a424d times export fix 2018-08-29 10:09:07 -07:00
Ke Deng b86fe1a0e2 Merge branch 'pull/3374' 2018-08-28 07:17:54 +00:00
Spandan Tiwari 1f2e42e649 Merge branch 'sptiwari/convtranspose_update7' 2018-08-27 07:32:33 +00:00
Yang Chen edc29f899e Packaging newly-added internal header files
Recently, we added a couple of new header files into API/Internals.
This patch includes them in our pre-built binaries.
2018-08-26 21:36:04 -07:00
Spandan Tiwari b3c0fa2c6b Overhaul ConvTranspose to match ONNX 1.2.2 spec. 2018-08-26 21:20:05 -07:00
Bowen Bao 7dd9638799 squash of the following changes:
- fix flatten onnx export.
- fix unsqueeze onnx export.
- add comments on temporarily skipped tests.
- adjust the importing of softmax, logsoftmax and hardmax with blockfunction
  - such that they can be exported as-is back to onnx.
- update reshape onnx export to pass mobilenet round trip test.
2018-08-26 13:11:44 -07:00
liqfu 0e208365be CNTK splice allows broadcast. This case is handled in the change. For noop (identity) ops, their input and output types shall be set according to upstream ops. ToBatch/ToSequence and Unpack Batch/Sequence ops added during model importing need to be skipped. Model import needs to handle ops with multiple outputs. 2018-08-26 08:41:20 -07:00
Peyman Manikashani 4a6238d979 reduction all axes export fix 2018-08-24 10:02:34 -07:00
Bowen Bao d2ff41272d temporarily disable 2 tests on Windows.
- This is due to an issue on Windows CI introduced by adding onnx dependencies. These tests are temporarily disabled to not block CI while we investigate.
- Disable CNTKv2Python/Tutorial/205
- Disable CNTKv2Python/Keras
2018-08-24 01:27:45 -07:00
Phoebe Ma (Beyondsoft Corporation) 88d88822de Fix issue #3228 and #3363, which were found when building with MSVC /permissive- 2018-08-23 22:56:36 -07:00
Bowen Bao a26e542c88 update conftest.py to resolve doctest issue.
- newer versions of numpy have a different print format for arrays and scalars that could potentially break the doctests.
2018-08-23 21:28:06 -07:00
Bowen Bao 28ada9657b update python doctest to handle newer numpy versions' print format. 2018-08-22 23:21:13 -07:00
Bowen Bao 56ef694c88 add onnx_model_test.py 2018-08-22 23:21:13 -07:00
Bowen Bao c0ff1da544 fix gemm, pooling export to onnx. 2018-08-22 23:19:18 -07:00
Peyman Manikashani 4244320eba Adding support for exporting CNTK TimesTranspose 2018-08-22 11:33:55 -07:00
Yang Chen 3d809bf54c Added several internal API header files
Since other projects may use these header files, we added
them to API/Internals.

* ComputationGraphAlgorithms.h was moved from Source/ComputationNetworkLib

* PrimitiveOpType.h and EvaluatorWrapper.h were moved from Source/CNTKv2Library

* PrimitiveFunctionAttribute.h was extracted from PrimitiveFunction.h. It contains
  a new class PrimitiveFunctionAttribute which is the collection of all attribute
  names for PrimitiveFunction.

  This change actually had a subtle side-effect. We had a global static variable
  s_stateAttributes that depended on PrimitiveFunction::AttributeNameRngSeed and
  PrimitiveFunction::AttributeNameRngOffset. After we moved those static
  attribute-variables into another translation unit, s_stateAttributes could be
  initialized with an empty wstring, because PrimitiveFunctionAttribute::AttributeNameRngSeed
  and PrimitiveFunctionAttribute::AttributeNameRngOffset were initialized after
  s_stateAttributes. Note that the initialization order of global static variables
  is not well-defined across translation units. To fix the issue, we also moved
  s_stateAttributes into the PrimitiveFunctionAttribute class and renamed it to
  s_rngStateAttributes. I think it's reasonable to consider s_rngStateAttributes
  to be part of the PrimitiveFunctionAttribute class.

* PrimitiveFunction.h was moved from Source/CNTKv2Library
2018-08-22 10:47:18 -07:00