Previously, to resolve an issue where gather produced incorrect gradient
values, a validity mask check was added to ensure that non-valid cells are
not counted as 0.
However, this check is needed only for inputs that have a dynamic axis, i.e.
inputs that have an MBLayout.
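A minimal sketch of the intended guard, assuming the mask pass can be gated on whether
the operand carries an MBLayout; the function names below are hypothetical stand-ins,
not the actual gather node code:

    // Illustrative only: the validity-mask pass runs solely for operands with an
    // MBLayout (a dynamic axis), because only those can contain invalid (gap) cells.
    void ApplyValidityMask() { /* zero out gradient contributions from gap cells (elided) */ }

    void BackpropGather(bool operandHasMBLayout)
    {
        if (operandHasMBLayout)      // static inputs have no gaps, so skip the extra pass
            ApplyValidityMask();
    }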
- The previous implementation assumed that (0 <= dk < width).
This assumption does not hold when lo > (kernel - 1) / 2.
The updated calculation supports arbitrary non-negative integer values for
lo and hi, with dk now in the range (0, width + hi + lo] (a sketch of the
bound arithmetic follows below).
- Enables the onnx backend tests {averagepool_2d_pads, maxpool_2d_pads} to
pass.
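As an illustration of the bound arithmetic only (this is not the actual convolution
geometry code, and dk is assumed to index into the padded row):

    // With explicit lower/upper pads lo and hi, the padded row has extent
    // width + lo + hi, which is what the new dk bound reflects; the old bound of
    // width only covered the unpadded row.
    #include <cassert>
    #include <cstddef>

    size_t PaddedExtent(size_t width, size_t lo, size_t hi) { return width + lo + hi; }

    int main()
    {
        const size_t width = 5, kernel = 3;
        const size_t symmetric = (kernel - 1) / 2;              // == 1
        assert(PaddedExtent(width, symmetric, symmetric) == 7); // symmetric pads
        assert(PaddedExtent(width, /*lo*/ 3, /*hi*/ 0) == 8);   // lo > (kernel - 1) / 2
        return 0;
    }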
- Refactor the function CalcPaddingForSameLowerOrUpperAutoPad in the conv/pool import,
changing the parameter "const Variable& input" to "const NDShape& inputWithBatchAxisShape"
to make the required shape format, [N x C x H x W], explicit.
Pad values are now explicitly computed from the ONNX spec equations during import in the
following cases (a sketch of the equations follows below):
- case 1: when auto_pad is SAME_UPPER | SAME_LOWER for convolution, convolution transpose and pooling.
- case 2: when output_shape is explicitly set for convolution transpose.
Note: per the ONNX spec, output_shape can come in either of the two formats below:
1. [X1 * X2 * ... * Xn]
2. [N * O * X1 * X2 * ... * Xn]
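A minimal sketch of those equations, assuming scalar per-dimension kernel/stride/dilation
values; the function names are illustrative and this is not the actual
CalcPaddingForSameLowerOrUpperAutoPad implementation:

    #include <algorithm>
    #include <cstdint>
    #include <utility>

    // Case 1 (per the ONNX spec): SAME_UPPER/SAME_LOWER for conv/pool.
    //   outDim   = ceil(inDim / stride)
    //   padTotal = (outDim - 1) * stride + effectiveKernel - inDim
    // SAME_UPPER places the extra cell at the end, SAME_LOWER at the beginning.
    std::pair<int64_t, int64_t> SamePads(int64_t inDim, int64_t kernel, int64_t stride,
                                         int64_t dilation, bool sameUpper)
    {
        int64_t effectiveKernel = (kernel - 1) * dilation + 1;  // kernel extent with dilation
        int64_t outDim = (inDim + stride - 1) / stride;         // ceil division
        int64_t padTotal = std::max<int64_t>(0, (outDim - 1) * stride + effectiveKernel - inDim);
        int64_t small = padTotal / 2, large = padTotal - small;
        return sameUpper ? std::make_pair(small, large)         // {head pad, tail pad}
                         : std::make_pair(large, small);
    }

    // Case 2 (per the ONNX spec): ConvTranspose with an explicit output_shape. outDim is
    // the spatial value X_i, i.e. format 1 above or the spatial tail of format 2.
    //   padTotal = stride * (inDim - 1) + outputPadding + effectiveKernel - outDim
    std::pair<int64_t, int64_t> ConvTransposePads(int64_t inDim, int64_t outDim, int64_t kernel,
                                                  int64_t stride, int64_t dilation,
                                                  int64_t outputPadding, bool sameUpper)
    {
        int64_t effectiveKernel = (kernel - 1) * dilation + 1;
        int64_t padTotal = stride * (inDim - 1) + outputPadding + effectiveKernel - outDim;
        int64_t small = padTotal / 2, large = padTotal - small;
        return sameUpper ? std::make_pair(small, large) : std::make_pair(large, small);
    }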
- Update exporting of conv/pooling to always export pad values.
- Enable correct exporting of multiple pretrained models (ResNet50/ResNet101/ResNet152_ImageNet_Caffe, etc.).
- Overhaul exporting of convtranspose pads.
- Support exporting conv weights with an omitted output channel axis (LRN).
- Add tests in onnx_op_test to cover the above changes.
- Windows OOBE (pip) tests & Linux OOBE tests: skip onnx_model_test. This test requires
onnx to be installed; skip it until we decide to add the onnx dependencies to the
OOBE test environment.
- Fix flatten onnx export.
- Fix unsqueeze onnx export.
- Add comments on temporarily skipped tests.
- Adjust the importing of softmax, logsoftmax and hardmax to wrap them in a block
function such that they can be exported as-is back to onnx (see the sketch below).
- Update reshape onnx export to pass the mobilenet round-trip test.
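A minimal sketch of the block wrapping, assuming the public CNTK C++ API
(PlaceholderVariable, Softmax, AsBlock); the placeholder and block names are illustrative
and this is not the actual ONNXToCNTK import code:

    #include "CNTKLibrary.h"
    #include <utility>
    using namespace CNTK;

    // Wrap the imported softmax computation in a single block function so the exporter
    // can recognize the block and re-emit it as one ONNX Softmax node on the round trip.
    FunctionPtr ImportSoftmaxAsBlock(const Variable& input)
    {
        auto placeholder = PlaceholderVariable(L"softmaxInput");   // illustrative name
        FunctionPtr body = Softmax(placeholder);                   // the block's composite
        return AsBlock(std::move(body),
                       { { placeholder, input } },   // map placeholder -> real input
                       L"Softmax",                   // block op name the exporter keys on
                       L"softmaxBlock");             // illustrative instance name
    }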
- Temporarily disable the tests below due to an issue on Windows CI introduced by
adding the onnx dependencies; they are disabled so as not to block CI while we investigate.
- Disable CNTKv2Python/Tutorial/205
- Disable CNTKv2Python/Keras
Since other projects may use these header files, we added
them to API/Internals.
* ComputationGraphAlgorithms.h was moved from Source/ComputationNetworkLib
* PrimitiveOpType.h and EvaluatorWrapper.h were moved from Source/CNTKv2Library
* PrimitiveFunctionAttribute.h was extracted from PrimitiveFunction.h. It contains
a new class, PrimitiveFunctionAttribute, which collects all attribute names for
PrimitiveFunction.
This change had a subtle side-effect. We had a global static variable
s_stateAttributes that depended on PrimitiveFunction::AttributeNameRngSeed and
PrimitiveFunction::AttributeNameRngOffset. After those static attribute variables
were moved into another translation unit, s_stateAttributes could end up
initialized with empty wstrings, because PrimitiveFunctionAttribute::AttributeNameRngSeed
and PrimitiveFunctionAttribute::AttributeNameRngOffset may be initialized after
s_stateAttributes: the initialization order of global static variables across
translation units is not well-defined. To fix the issue, we also moved
s_stateAttributes into the PrimitiveFunctionAttribute class and renamed it to
s_rngStateAttributes; I think it is reasonable to consider s_rngStateAttributes
part of the PrimitiveFunctionAttribute class. A minimal illustration of the
initialization-order hazard follows after this list.
* PrimitiveFunction.h was moved from Source/CNTKv2Library
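A minimal illustration of that initialization-order hazard, using made-up file names and
a simplified stand-in for s_stateAttributes rather than the actual CNTK sources:

    // --- attributes.cpp -------------------------------------------------------
    #include <string>
    // Dynamically initialized global, analogous to PrimitiveFunction::AttributeNameRngSeed.
    const std::wstring AttributeNameRngSeed = L"rngSeed";

    // --- state.cpp ------------------------------------------------------------
    #include <string>
    #include <unordered_set>
    extern const std::wstring AttributeNameRngSeed;
    // Whether this set captures L"rngSeed" or an empty wstring depends on which
    // translation unit's dynamic initializers happen to run first; that order is
    // unspecified by the standard.
    static const std::unordered_set<std::wstring> s_stateAttributes{ AttributeNameRngSeed };

    // The fix mirrors the change described above: keep the attribute-name constants and
    // the set that references them in the same class and translation unit (what became
    // PrimitiveFunctionAttribute::s_rngStateAttributes), where statics are initialized
    // in declaration order, so the dependency is always satisfied.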