Commit Graph

302 Commits

Author SHA1 Message Date
Dong Yu 73e20eaac2 Allow named arguments to take non-literal values in macros. 2015-01-29 10:22:07 -08:00
Dong Yu ac50417b75 fix the build problem in the debug build of CNTKEval project. 2015-01-28 23:21:59 -08:00
Dong Yu 0cbab9b3e7 move TimerUtilities.h to common\include. Add TimerUtilities.cpp to CNTKEval project. 2015-01-28 15:59:31 -08:00
Dong Yu 97a317e861 force gradient to be sparse if input is sparse in the TimesNode. This is the only dense X sparse combination we support right now. 2015-01-28 11:18:16 -08:00
Dong Yu 5fd74fe9a3 fill constant node's value in second pass if the size of the constant is known by then. 2015-01-27 15:38:06 -08:00
Dong Yu 7baca24d75 changed timing function. 2015-01-27 14:35:20 -08:00
Mike Seltzer 8fbcf8f5b7 added functionality for asymmetric context windows 2015-01-27 09:53:42 -08:00
Mike Seltzer f9c555830b added functionality for asymmetric context windows 2015-01-27 09:47:26 -08:00
Dong Yu 8c61da007a Merge branch 'master' of https://git01.codeplex.com/cntk 2015-01-23 15:30:23 -08:00
Dong Yu 337033f65d Change FindSymbol to fix a MEL bug that occurs when using NDL command with dotted names. 2015-01-23 15:30:06 -08:00
yzhang87 8b8f20138b Fix the bug during loadCheckPointFile: it tried to putMarker in the old file. 2015-01-21 23:03:28 -05:00
Dong Yu 032a0e8ede Merge branch 'master' of https://git01.codeplex.com/cntk 2015-01-21 18:25:58 -08:00
Dong Yu 6e1876111c Change Sparse Matrix implementation (esp. GPU Sparse matrix) to make it work for LM training and other sparse features.
Note: many improvements are still needed for sparse matrices.
2015-01-21 18:25:41 -08:00
yzhang87 dc6164c06a Merge branch 'master' of https://git01.codeplex.com/cntk 2015-01-19 16:30:33 -05:00
yzhang87 0879cfef56 Fix the bug for MEL when adding a recurrent loop. 2015-01-19 16:30:25 -05:00
Jasha Droppo 08a9c5e946 Add plumbing for ExpNode in NDL 2015-01-19 09:27:41 -08:00
Dong Yu 43ea68b59e change SparseInputValue node and CPU Sparse matrix to make LM CPU training work. 2015-01-17 13:37:00 -08:00
yzhang87 459ff2ded1 Merge branch 'master' of https://git01.codeplex.com/cntk 2015-01-14 22:51:55 -05:00
yzhang87 97d5fb415a Fix the bug when the Dropout node is in a recurrent loop. Bug: the functionValues is uninitialized when the dropOutNode is in a recurrent loop. 2015-01-14 22:23:57 -05:00
Jasha Droppo eafc86daab MPI Model Averaging. Added check to short-circuit empty minibatch 2015-01-14 18:51:05 -08:00
Jasha Droppo 3e68d18ad8 Improvements to MPI Model Averaging.
DecimateMinibatch now checks that all input matrices have the same number of columns, and throws an exception if they do not.

DecimateMinibatch now checks for empty input matrices (after decimation).

The reduction/averaging code now weights each model by the relative number of samples in its effective minibatch.

TrainOneEpoch now returns the local value totalEpochSamples, so the proper weighting can be performed during model updating.
2015-01-14 18:09:39 -08:00
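
A minimal C++ sketch of the sample-weighted averaging idea described in the commit above (illustrative only, not CNTK's actual DecimateMinibatch/averaging code; the function and variable names are assumptions): each worker scales its local parameters by the number of samples it actually processed, the scaled parameters and the sample counts are summed across ranks, and the result is normalized by the total sample count.

    // Sample-weighted model averaging across MPI workers; names are illustrative.
    #include <mpi.h>
    #include <vector>

    void AverageModelWeighted(std::vector<double>& params, double localSamples)
    {
        // Weight the local model by its effective minibatch size.
        for (double& p : params)
            p *= localSamples;

        std::vector<double> summed(params.size());
        double totalSamples = 0.0;
        MPI_Allreduce(params.data(), summed.data(), (int)params.size(),
                      MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        MPI_Allreduce(&localSamples, &totalSamples, 1,
                      MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        // A worker whose decimated minibatch came out empty contributes zero
        // weight, which is why the empty-minibatch short-circuit matters.
        if (totalSamples > 0)
            for (size_t i = 0; i < params.size(); ++i)
                params[i] = summed[i] / totalSamples;
    }
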
Jasha Droppo 9e37c72050 Changed Deviceid=auto to be MPI model-averaging compatible, no more device contention. 2015-01-14 18:09:38 -08:00
Jasha Droppo 6f21c5b388 Added debug output 2015-01-14 18:09:37 -08:00
Dong Yu bd5524720d Added a check in HTKMLFReader that throws an exception if Truncated is false while nbrUttsInEachRecurrentIter is not 1. 2015-01-09 23:09:34 -08:00
Dong Yu 4ff3ab53f3 Add SetNZCount to CommonMatrix to support setting the number of non-zero values externally.
Change the Resize function in sparse matrices to make it clear that numNZ is used to reserve memory only, so that we can call Resize repeatedly without affecting the actual number of non-zero values.
Change the CPU sparse matrix's Resize to support keeping existing values when memory is reallocated.
Change the SetValue function in the CPU sparse matrix to support automatic resizing.
2015-01-09 00:19:17 -08:00
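
A hypothetical C++ sketch of the Resize/SetValue contract described in the commit above (the class and member names are illustrative, not CNTK's exact API): Resize only reserves capacity for numNZ non-zero elements and leaves the stored count alone, so it can be called repeatedly, while SetValue appends and grows the storage automatically when the reserved capacity is exceeded.

    // Illustrative COO-style sparse matrix; not CNTK's actual implementation.
    #include <cstddef>
    #include <vector>

    class SparseMatrixSketch
    {
    public:
        // numNZ is a capacity hint only: NzCount() is unchanged by Resize.
        void Resize(size_t rows, size_t cols, size_t numNZ, bool keepExistingValues = true)
        {
            m_rows = rows; m_cols = cols;
            if (!keepExistingValues)
            {
                m_rowIdx.clear(); m_colIdx.clear(); m_values.clear();
            }
            m_rowIdx.reserve(numNZ); m_colIdx.reserve(numNZ); m_values.reserve(numNZ);
        }

        // Grows automatically if the reserved capacity is exceeded.
        void SetValue(size_t i, size_t j, double v)
        {
            m_rowIdx.push_back(i); m_colIdx.push_back(j); m_values.push_back(v);
        }

        size_t NzCount() const { return m_values.size(); }

    private:
        size_t m_rows = 0, m_cols = 0;
        std::vector<size_t> m_rowIdx, m_colIdx;
        std::vector<double> m_values;
    };
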
Dong Yu 751b1cd744 Unify the dense and sparse matrix Resize surrogate at the Matrix class level.
Fix the bug in the sparse matrix CPU-GPU transfer flag.
Change SetValue(idx, j, v) so that it uses the CPU when both CPU and GPU copies are available.
2015-01-07 23:12:56 -08:00
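
A rough C++ sketch of the dispatch idea only (the backing types and member names here are made up, not CNTK's real fields): the Matrix front end forwards Resize to whichever dense or sparse backing store is active, and element-wise SetValue prefers a CPU copy when the data is resident on both CPU and GPU.

    // Illustrative dispatch sketch; the backing types and members are assumptions.
    #include <cstddef>
    #include <memory>

    struct CpuDense  { void Resize(size_t, size_t) {}         void SetValue(size_t, size_t, double) {} };
    struct GpuDense  { void Resize(size_t, size_t) {}         void SetValue(size_t, size_t, double) {} };
    struct CpuSparse { void Resize(size_t, size_t, size_t) {} void SetValue(size_t, size_t, double) {} };
    struct GpuSparse { void Resize(size_t, size_t, size_t) {} void SetValue(size_t, size_t, double) {} };

    class MatrixSketch
    {
    public:
        // One Resize surrogate for all backings; numNZ only matters for the
        // sparse ones (reserved capacity, see the previous commit).
        void Resize(size_t rows, size_t cols, size_t numNZ = 0)
        {
            if (m_cpuDense)  m_cpuDense->Resize(rows, cols);
            if (m_gpuDense)  m_gpuDense->Resize(rows, cols);
            if (m_cpuSparse) m_cpuSparse->Resize(rows, cols, numNZ);
            if (m_gpuSparse) m_gpuSparse->Resize(rows, cols, numNZ);
        }

        // Single-element writes prefer a CPU copy when both CPU and GPU copies
        // exist; a one-element GPU update is not worth the transfer overhead.
        void SetValue(size_t i, size_t j, double v)
        {
            if (m_cpuDense)       m_cpuDense->SetValue(i, j, v);
            else if (m_cpuSparse) m_cpuSparse->SetValue(i, j, v);
            else if (m_gpuDense)  m_gpuDense->SetValue(i, j, v);
            else if (m_gpuSparse) m_gpuSparse->SetValue(i, j, v);
        }

    private:
        std::unique_ptr<CpuDense>  m_cpuDense;
        std::unique_ptr<GpuDense>  m_gpuDense;
        std::unique_ptr<CpuSparse> m_cpuSparse;
        std::unique_ptr<GpuSparse> m_gpuSparse;
    };
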
Frank Seide 144db5b70d bug fix by AdamE in QueryNvmlData(), to skip a check that was found to sometimes fail 2015-01-06 15:27:29 -08:00
Dong Yu 6bd6dbeb47 Changes in the CNTK book:
1. corrected the parameter list for the Delay command in NDL.
2. added the options for the rmprop gradient update type.
3. added the ConvertDBN command.
2015-01-04 23:54:16 -08:00
yzhang87 e773dc6264 Fix the error during the validation stage. Bug: when the anchor node is not the root of a strongly connected component, the post-visit order may initialize the forward computation incorrectly. 2015-01-03 01:55:27 -05:00
fyc0624@gmail.com 12b8ec6e0f Adding GetFormatString() for char* and wchar_t* in fileutil 2014-12-21 19:48:55 +08:00
Dong Yu 2c29a69661 add the dumpFileName option to the NDLNetworkBuilder block so that the network structure information can be saved to the file specified by this option before validation happens. This will help debug NDL-related issues. 2014-12-20 02:15:09 -08:00
Dong Yu 909e8adfb0 minor typo fixes in the CNTK book. 2014-12-11 23:07:15 -08:00
Dong Yu 091a785d1f update the CNTK book to include the learnRateAdjustInterval option 2014-12-10 00:14:23 -08:00
Dong Yu c897f83f70 Added the learnRateAdjustInterval option to the automatic learning rate control block for SGD. It allows users to determine the learning rate adjustment frequency when the learning rate is controlled, for example, by a dev set. 2014-12-10 00:06:20 -08:00
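
A short illustrative C++ sketch of the adjustment-frequency logic described in the commit above (the function, parameters, and decrease factor are assumptions, not CNTK's SGD code): the dev-set-driven adjustment is only considered once every learnRateAdjustInterval epochs.

    // Illustrative only; not CNTK's SGD implementation.
    #include <cstdio>

    double MaybeAdjustLearnRate(double learnRate, int epoch, int learnRateAdjustInterval,
                                double devCriterion, double prevDevCriterion,
                                double decreaseFactor /* e.g. 0.618 */)
    {
        // Skip epochs that are not on the adjustment interval.
        if (learnRateAdjustInterval > 1 && epoch % learnRateAdjustInterval != 0)
            return learnRate;

        // Dev-set criterion got worse: reduce the learning rate.
        if (devCriterion > prevDevCriterion)
        {
            learnRate *= decreaseFactor;
            std::printf("epoch %d: reducing learning rate to %g\n", epoch, learnRate);
        }
        return learnRate;
    }
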
Dong Yu 3f92c98bfb changed the implementation of _matrixVectorColumnWiseAdd so that it uses more GPU threads but more memory access as well. Overall the speed is comparable with the original implementation for normal usage but significantly faster for CNN bias. 2014-12-09 01:52:37 -08:00
guhernan 7f8315c8aa "Correct Fix for bugs in SLU example in CNTKBook"
This reverts commit 5e26505d0d, which had been used to revert a fix committed under the wrong user name.
2014-12-04 16:23:14 -08:00
guhernan 5e26505d0d Revert "Fixed bugs in SLU example in CNTKBook (section 7.4)"
This reverts commit 435d6127d5.
2014-12-04 10:29:31 -08:00
fcb1899 435d6127d5 Fixed bugs in SLU example in CNTKBook (section 7.4)
1. Problem: RuntimeError if the full path of the output file doesn’t
exist. Fix: make the intermediate dirs for the path of the output file.
2. Problem: Output labels file not written (should be written according
to the documentation). Fix: change the rnnlu.config file.
3. Problem: mismatch between the feature and the output files when
building the evaluation file. Fix: skip empty lines in the feature file.
2014-12-04 10:18:52 -08:00
Dong Yu 1645818816 Merge branch 'master' of https://git01.codeplex.com/cntk 2014-12-02 20:03:09 -08:00
Dong Yu 7a9bb9a64d fixed the gradient computation bug in CosDistanceNode. 2014-12-02 20:02:49 -08:00
Frank Seide 61f37d4ca0 fixed an incorrect argument to a printf format string 2014-12-02 17:24:28 -08:00
Frank Seide 89d12a378e Merge branch 'master' of https://git01.codeplex.com/cntk 2014-12-02 17:01:02 -08:00
Frank Seide 1bcb5b7d8f ACML DLL is now delay-loaded, which is required to be able to control the number of threads of the ACML library (set OMP_NUM_THREADS before first use, which then seems to get picked up upon DLL load) 2014-12-02 17:00:50 -08:00
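
A small Windows-specific C++ sketch of the mechanism described in the commit above (the function name is made up, and it assumes the ACML DLL is delay-loaded via the linker's /DELAYLOAD option): setting OMP_NUM_THREADS before the first ACML call means the value is already in the environment when the delay-load stub finally maps the DLL.

    // Illustrative only; relies on the ACML DLL being delay-loaded so that it is
    // not mapped (and OpenMP does not read OMP_NUM_THREADS) until the first call.
    #include <cstdlib>
    #include <string>

    void SetMathLibThreads(int numThreads)
    {
        // Must run before any ACML function is called; the environment variable
        // is picked up when the delay-load stub loads the DLL on first use.
        _putenv_s("OMP_NUM_THREADS", std::to_string(numThreads).c_str());
    }
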
Mike Seltzer c157083384 Merge branch 'master' of https://git01.codeplex.com/cntk 2014-12-02 14:42:30 -08:00
Mike Seltzer 737fd2a964 fixed DumpNodeInfo bug 2014-12-02 14:41:39 -08:00
ychfan 5438ed416d Fix compilation errors when the NANCHECK flag is enabled 2014-12-02 02:05:07 -08:00
Dong Yu 286b109087 change Matrix::SetValue to include matrixFormat. 2014-11-29 21:17:27 -08:00
Dong Yu 25107d964f Merge branch 'master' of https://git01.codeplex.com/cntk into localdev 2014-11-29 19:19:51 -08:00
Dong Yu a56f8ef5aa Significant improvement to GPUSparseMatrix and CPUSparseMatrix. Also makes device id type consistent across classes. 2014-11-29 19:18:58 -08:00
Dong Yu cdf9cd6d24 remove temp lyx files for CNTK book. 2014-11-25 17:54:52 -08:00