Commit Graph

1195 Commits

Author SHA1 Message Date
Peyman Manikashani 2964f1fde5 batchnorm_without_batch_axis_fixes 2018-07-30 12:42:35 -07:00
Sergii Dymchenko 6783e9ce96 Fix Hardmax/Softmax/LogSoftmax ONNX import/export. 2018-07-27 14:37:16 -07:00
Bowen Bao 1eb51830a0 Fix arguments interface to respect pythonOperandOrder. 2018-07-20 15:52:01 -07:00
Spandan Tiwari fae5dc6c0d Merge branch 'master' into cntkteam/onnx_without_batch_axis 2018-07-20 14:55:56 -07:00
Peyman Manikashani ad631079b7 Reduction ops fixes: shape mismatch and default axes 2018-07-19 10:03:09 -07:00
Sergii Dymchenko 3806ec6e0e Change to more numerically stable LogSoftMax implementation. 2018-07-18 10:31:41 -07:00
Bowen Bao dca867c2cb add warning for convolution padding on channel axis. 2018-07-11 16:03:56 -07:00
Bowen Bao 5ca4bb3a93 Add Sequential Convolution.
adding tests for convolution over the sequential axis.

adding convolution over the sequential axis.
additional parameters currently supported:
  auto padding
  strides
  groups
support for dilation still needs to be tested on GPU.

updating the PrimitiveOpType SerializationTests that were missing from other commits.

convert tabs to spaces.

Refine cpp convolution unit tests. Add dilation tests to python convolution unit tests.

more detailed comments on the shape change for 1d seq conv with reduction rank 0, and other minor tweaks.

add EndToEndTests of sequential convolution on MNIST

add init_bias tests for seq conv

minor change in comments

rename ConvolutionOverSequenceAxisNode. Add a comment on the new test that fails under cuDNN.

add more comments, trim spaces

add more comments, remove magic number, add more boundary checks.

remove the last SetValue for outputSeqAxisDimValue as TensorView Unary Op has already updated the value.

fix bug in python seqconv default bias shape, and add related unit tests.

small tweak in seq conv to avoid additional gpu memory allocation and increase performance.

Example: seq MNIST, and profiling

adjust conv c++ value unit test channel size.

small update on python seq mnist

Sequential convolution v2.
* re-designed ConvolutionSequenceShapeNode: refactored to separate the computation of the output sequence length out of the v1 node design, and introduced ConvolutionNodeBaseExtended as their common base class (since "ConvolutionNodeBase" is the base class not only of ConvolutionNode but also of PoolingNode).
* Performance improvement over v1:
  - compute the sequence length from the MBLayout instead of the mask output from unpack, avoiding an unnecessary cpu/gpu memory copy.

not including the py sequence example for now; need to find a correct location for it.

add check for truncated sequences in sequential convolution

improve code style.

Moving sequential convolution in Python to a new high-level API, to maintain compatibility with the previous implementation (the special-case 1d sequential convolution); a usage sketch follows this commit entry.

Add ConvolutionSequenceShape OP.

nit

update conv_attribute test for the updated convolution parameter
move the sequential parameter to the last position
update the test short-circuit for CPU convolution dilation.

update the EndToEndTests unit-test baseline file for the new convolution unit tests.

update makefile to include new unittest file for linux

nit

Update ConvolutionNode initialization code to handle TransformerNode Initialization.

nit

nit
2018-07-10 21:10:33 -07:00
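The commit above adds convolution over the sequence axis and a Python high-level API for it. A minimal usage sketch, assuming the layer is exposed as C.layers.SequentialConvolution and that the first dimension of filter_shape is the window along the sequence axis (both of which are assumptions about the released API):

    import cntk as C

    # a variable-length sequence of 28x28 single-channel frames,
    # one frame per sequence step
    x = C.sequence.input_variable((1, 28, 28))

    # hypothetical shape convention: a window of 2 sequence steps, 3x3
    # spatially, 16 output filters, zero-padded
    seq_conv = C.layers.SequentialConvolution(filter_shape=(2, 3, 3),
                                              num_filters=16,
                                              pad=True)
    y = seq_conv(x)
    print(y.shape)   # static shape of each sequence element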
Bowen Bao 19ffa068bd Merge branch 'master' into bowbao/gather_grad 2018-07-06 10:03:13 -07:00
Bowen Bao 1e058cedcf fix Gather op's incorrect gradient value.
* the error was due to padding 0 as the default value for missing gaps; each of these then contributes 1 to the gradient of the reference at index 0. The fix is to mask missing values in the indices matrix to negative, and to check for and skip negative indices in the Matrix scatter implementation (the previous Matrix CPU implementation already checks for negative indices). An illustrative sketch follows this commit entry.
2018-07-03 18:34:11 -07:00
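An illustrative numpy-style sketch of the masking/skip idea described in the commit above (this is not the CNTK Matrix implementation; the function name and shapes are made up):

    import numpy as np

    def scatter_add_skip_negative(ref_grad, indices, out_grad):
        # accumulate out_grad rows into the ref_grad rows selected by indices,
        # skipping entries whose index is negative (padded/missing steps)
        for row, idx in enumerate(indices):
            if idx < 0:
                continue          # a masked gap contributes no gradient
            ref_grad[idx] += out_grad[row]
        return ref_grad

    ref_grad = np.zeros((4, 2))
    indices = np.array([2, -1, 0, -1, 2])   # -1 marks missing positions
    out_grad = np.ones((5, 2))
    print(scatter_add_skip_negative(ref_grad, indices, out_grad))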
Bowen Bao 81ced59adb add None check for python clone substitution.
* it seems None somehow escapes the type check when being converted to the class CNTK::Variable, causing the crash; a small guard sketch follows this commit entry.
2018-07-03 15:19:33 -07:00
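A small sketch of the guard described above, with hypothetical variable names (after this fix CNTK rejects a None replacement instead of crashing):

    import cntk as C

    x = C.input_variable(3)
    x_new = C.input_variable(3)
    model = C.layers.Dense(2)(x)

    substitutions = {x: x_new}
    # defensive check: a None replacement previously slipped past the type
    # check during conversion to CNTK::Variable and crashed
    if any(v is None for v in substitutions.values()):
        raise ValueError("clone substitution map contains None")
    cloned = model.clone(C.CloneMethod.share, substitutions)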
Sergii Dymchenko bece037f63 Merge branch 'sedymche/onnx-trig' 2018-06-29 01:52:40 +00:00
Spandan Tiwari 69938f2992 Adding full support for 'alpha' attribute in ELU op in ONNX and CNTK. 2018-06-28 15:34:30 -07:00
Sergii Dymchenko e95f92cd5e Add Tan/Atan ops to CNTK (with ONNX support). 2018-06-28 14:03:02 -07:00
Jie Zhu 8c1e5edcc4 fixing type mismatch in tensorops.h and changing name of python name for straightthrough 2018-06-27 11:44:49 -07:00
Jie Zhu 02be8a0d69 adding straight through unary op 2018-06-27 11:44:49 -07:00
Spandan Tiwari da60e4f8b6 Updating DepthToSpace and SpaceToDepth ops to match ONNX spec. 2018-06-26 13:39:21 -07:00
Yuqing Tang 199bc5c30b Fixed a cloning bug in placeholder shape information 2018-06-03 12:12:09 -07:00
Jaliya Ekanayake 796b59dad1 Adding a special op to proxy operands to optimized implementations such as Halide 2018-05-21 12:39:44 -07:00
Yuqing Tang 367a13d0ee Fix bugs in no-backprop gradient ops and add unit tests. 2018-05-07 17:55:50 -07:00
Yuqing Tang 5a587b376d Implemented eye_like op and the dependent SetDiagonalValue methods for CPU and GPU sparse matrices. 2018-05-05 21:24:59 -07:00
KeDengMS 5e0856e47e Fix bugs for fp16 in RNN and BMUF
Note that sparse embedding does not work yet.
2018-04-11 15:35:08 -07:00
Spandan Tiwari 09e25a47fa Moving group convolution implementation to use cuDNN7 and MKL2017 APIs. 2018-04-11 10:26:15 -07:00
Project Philly d3d26d096a Integrate v-rodemo/fix-batchnorm-freedimension into master 2018-04-09 23:39:20 +00:00
Igor Macedo Quintanilha 0b5b100513 Added test for batch norm spatial shape inference 2018-04-06 16:19:39 +00:00
Yuqing Tang 90a41f6a10 Enabled gather op indices to be computed from parameters. 2018-03-27 16:48:11 -07:00
Jaliya Ekanayake 73c2046e88 Adding recurrence support to user-defined functions. This enables UDFs to be called inside recurrent loops. 2018-03-22 11:44:09 -07:00
Yuqing Tang edbda9d7a5 Added a sanity check for the step function signature with appropriate error messages. 2018-03-13 14:38:57 -07:00
Spandan Tiwari 677cb422b4 Updated MeanVarianceNormalization and LayerNormalization to work with epsilon. 2018-03-02 22:37:13 -08:00
KeDengMS d7704195da Fix shape inference in step function for scalar to broadcast 2018-03-01 14:47:46 -08:00
Spandan Tiwari 4017a1664a Adding mean_variance_normalization CNTK and ONNX op, and LayerNormalization ONNX support. 2018-02-16 00:41:08 -08:00
KeDengMS 2237dd0988 Add support for FreeDimension in Pooling/Unpooling 2018-02-14 13:27:45 -08:00
KeDengMS 3660b7a36e Node timing and profile details in chrome://tracing format.
Working example in ./Examples/Image/Classification/MLP/Python/SimpleMNIST.py

Note that node timing is added to the profiler details when the profiler is enabled (a fuller runnable sketch follows this commit entry), i.e.

    import cntk as C
    C.debugging.debug.set_node_timing(True)
    C.debugging.start_profiler()
    C.debugging.enable_profiler()
    trainer|evaluator|function executions
    trainer|evaluator|function.print_node_timing()
    C.debugging.stop_profiler()
2018-02-01 21:53:46 -08:00
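A fuller runnable sketch of the pattern above (the model, learner, and data are made up; only the debugging and profiler calls come from the commit message):

    import numpy as np
    import cntk as C

    x = C.input_variable(2)
    y = C.input_variable(1)
    z = C.layers.Dense(1)(x)
    loss = C.squared_error(z, y)

    lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
    trainer = C.Trainer(z, (loss, loss), [C.sgd(z.parameters, lr)])

    C.debugging.debug.set_node_timing(True)
    C.debugging.start_profiler()
    C.debugging.enable_profiler()

    data = {x: np.random.rand(16, 2).astype(np.float32),
            y: np.random.rand(16, 1).astype(np.float32)}
    for _ in range(10):
        trainer.train_minibatch(data)

    trainer.print_node_timing()   # per-node timing summary
    C.debugging.stop_profiler()   # details go to the chrome://tracing file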
Liqun Fu 2a764a7941 add ImageScaler, fix ConvTranspose 2018-01-30 08:46:06 -08:00
KeDengMS 3cf3af5df6 CNTK support for CUDA 9
CNTK now supports CUDA 9/cuDNN 7. This requires updating the build environment to Ubuntu 16/GCC 5 for Linux, and Visual Studio 2017/VCTools 14.11 for Windows. With CUDA 9, CNTK also adds a preview of 16-bit floating point (a.k.a. FP16) computation.

Please check out the example of FP16 in ResNet50 at /Examples/Image/Classification/ResNet/Python/TrainResNet_ImageNet_Distributed.py

Notes on FP16 preview:
* The FP16 implementation on CPU is not optimized and is not intended for direct CPU inference. Users need to convert the model to 32-bit floating point before running it on CPU.
* The loss/criterion for FP16 training needs to be 32-bit so it can accumulate without overflow, using the cast function. Please check the example above and the sketch after these notes.
* Readers do not produce FP16 output; unless numpy is used to feed data, a cast from FP32 to FP16 is needed. Please check the example above.
* FP16 gradient aggregation is currently implemented only on GPU using NCCL2. Distributed FP16 training with MPI is not supported.
* FP16 math is a subset of the current FP32 implementation. Some models may get a Feature Not Implemented exception when using FP16.
* FP16 is currently not supported in BrainScript. Please use Python for FP16.
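A minimal sketch of the 32-bit criterion pattern from the notes above (the tiny model and the use of default_options(dtype=...) are assumptions patterned after the FP16 examples; see the ResNet script referenced above for the full recipe):

    import numpy as np
    import cntk as C

    features = C.input_variable((3, 32, 32), dtype=np.float16)
    labels = C.input_variable(10)                # labels stay in FP32

    # build the model itself in FP16 (assumes default_options accepts dtype)
    with C.default_options(dtype=np.float16):
        z = C.layers.Sequential([
            C.layers.Convolution2D((3, 3), 32, pad=True, activation=C.relu),
            C.layers.GlobalAveragePooling(),
            C.layers.Dense(10)
        ])(features)

    # cast the output back to FP32 so the loss/metric accumulate without overflow
    z32 = C.cast(z, np.float32)
    loss = C.cross_entropy_with_softmax(z32, labels)
    metric = C.classification_error(z32, labels)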

To set up the build and runtime environment on Windows:
* Install [Visual Studio 2017](https://www.visualstudio.com/downloads/) with the following workloads and components. From the command line (using the Community edition installer as an example):
    vs_community.exe --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.ManagedDesktop --add Microsoft.VisualStudio.Workload.Universal --add Microsoft.Component.PythonTools --add Microsoft.VisualStudio.Component.VC.Tools.14.11
* Install [NVidia CUDA 9](https://developer.nvidia.com/cuda-90-download-archive?target_os=Windows&target_arch=x86_64)
* From PowerShell, run:
    /Tools/devInstall/Windows/DevInstall.ps1
* Start VCTools 14.11 command line, run:
    cmd /k "%VS2017INSTALLDIR%\VC\Auxiliary\Build\vcvarsall.bat" x64 --vcvars_ver=14.11
* Open /CNTK.sln from the VCTools 14.11 command line. Note that starting CNTK.sln from anything other than the VCTools 14.11 command line causes a CUDA 9 [build error](https://developercommunity.visualstudio.com/content/problem/163758/vs-2017-155-doesnt-support-cuda-9.html).

To set up the build and runtime environment on Linux using docker, please build an Ubuntu 16.04 docker image using the Dockerfiles under /Tools/docker. For other Linux systems, please refer to the Dockerfiles to set up the dependent libraries for CNTK.
2018-01-22 16:58:56 -08:00
Project Philly 5bdaed77b4 Integrate yuqtang/TimesOnFreeAxes into master 2018-01-16 23:27:02 +00:00
KeDengMS 1a81d41ee0 Fix batch matmul test failures 2018-01-14 23:32:12 -08:00
Chengji Yao 15e705da8d add batch matmul 2018-01-12 18:41:34 -08:00
Yuqing Tang 71429951f6 Allow cntk.times operator over tensors: [shape, free_axis] x [free_axis, shape] and [shape, batch_axis] x [batch_axis, shape]. 2018-01-08 17:26:12 -08:00
Nikos Karampatziakis 6477d5b58f gather over axis 2018-01-03 10:07:17 -08:00
liqfu 348773e663 add missing ops (logical, reduceL1, reduceL2, etc.) with ONNX support 2017-12-31 11:47:50 -08:00
Nikos Karampatziakis c7d2502662 squeeze, expand_dims, zeros_like, ones_like
closes #2306
2017-12-13 09:58:51 -08:00
liqfu 3cee5a0f93 add LogSoftmax, HardSigmoid, Flatten, Mean ops and ONNX support 2017-12-12 08:03:19 -08:00
Chengji Yao d216454117 fix reshape bug in backprop 2017-12-11 19:12:39 -08:00
Spandan Tiwari 4bb4dbb5d0 Adding DepthToSpace and SpaceToDepth ops to CNTK and ONNX. 2017-12-08 22:45:28 -08:00
Nikos Karampatziakis 5c97bd02ab add top k operation. Closes #2468. 2017-12-06 18:19:02 -08:00
KeDengMS 1d3d1e8a27 Implements batch normalization forward and backward in MKL. CPU training with BN is now enabled on Intel CPUs. 2017-11-29 17:12:10 -08:00
Project Philly 77de951377 Integrate kedeng/fixKeras into master 2017-11-29 21:48:23 +00:00
KeDengMS 42587a8cf8 Fix MKL convolution for output with reduced ranks 2017-11-28 19:00:06 -08:00
Spandan Tiwari 3eecf587f6 Group conv updated to use same kernel for each group. 2017-11-28 10:33:51 -08:00