also changed the namespace of everything inside ScriptableObjects to Microsoft::MSR::ScriptableObjects;
NetworkBuilderFromConfig.cpp (the one that creates objects from BrainScript) also moved to the ScriptableObjects namespace. It is independent of BrainScript now--yay! Python, F#, come on in!;
added a new base class ScriptableObjects::ScriptingError for catching and printing scripting exceptions
EvaluationError itself is now encapsulated inside BrainScriptEvaluator.cpp;
deleted IConfigRecord::operator(), as it was not really useful. Just use operator[]
cleaned up UsingBatchModeNodeMembers macro;
factored the 99% identical Max/AveragePoolingNode classes into shared PoolingNodeBase;
removed use of static eval/partial functions for convolution nodes, allowing us to eliminate the detour via ConvolutionParams and PoolParams altogether, saving more code;
removed redundant member copies in CopyTo() of the pooling node (now the base class), i.e. members that are already copied in ComputationNode::CopyTo()
Changed some scalar numerical values from ElemType to double, where this distinction did not add value, for example for objective values, frame-error rates, and learning rates.
Lots of minor cleanup such as reducing header dependencies (e.g. Matrix.h), consistency of template<typename/class ElemType>, and moved some misplaced code to more appropriate places (e.g. LearnableParameter initializations).
Merge branch 'master' into fseide/netlib
Conflicts:
MachineLearning/CNTK/CNTK.cpp
created an SGD.cpp that instantiates the exported classes of CNTKSGDLib;
does not build since git mixed up files during move, and it won't let me git-add and git-mv in one go
DistGradHeader no longer depending on <ElemType>;
all accesses to ComputationNode::TypeName are now done through the <float> variant instead of <ElemType>, for consistency in places where we don't have an <ElemType>
Added a completely new configuration language, which currently can be used in place of NDL, but eventually will power all configurations.
It supports infix expressions, recursive macros, arrays, and a few useful functions such as string replace.
It is called "BrainScript" (file extension .bs), where the name is meant to be reflective of our grand ambition
(whereas the file extension is reflective of where we stand today w.r.t. that grand ambition...).
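To give a flavor of the features listed above, here is a hypothetical BrainScript fragment; it is a sketch only, and the exact builtin names (e.g. the string-replace function) and syntax details may differ from the actual implementation:

```
# infix expressions
hiddenDim = 512
outputDim = hiddenDim * 2 + 1

# arrays
layerDims = (256 : 512 : 512)

# a recursive macro: apply f to x, n times
Repeat (n, f, x) = if n <= 0 then x else Repeat (n - 1, f, f (x))
```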
As of now, BrainScript can be accessed for configuring networks through the new ExperimentalNetworkBuilder option.
A few ComputationNodes are still missing, and MEL may not work as node naming is not sorted out yet.
The core classes were refactored with the aim of removing the pervasive template parameter <ElemType> (which selects float vs. double), making it feasible to wrap parts of CNTK as libraries.
ComputationNode has been disentangled, while consumers such as ComputationNetwork and SGD--which really should be agnostic to float/double--have been changed to use the agnostic interface (ComputationNodeBase) where possible; the full separation will require many more steps.
Theoretically, once this is completed, it would be possible to mix float and double nodes in a single graph (through the use of still-to-be-written typecast nodes).
The two variants each of Evaluate and ComputePartial have been unified across full-minibatch and per-frame operation by passing the range as a new FrameRange object, which encodes both whether it refers to the full minibatch or a single frame, and the number of slices in a minibatch.
The latter is still also passed through a member m_samplesInRecurrentStep, which can now be removed (it is kept for the moment only as a runtime check to verify that the change was done right).
The LSTM test case was modified to initialize its parameters with CPU code that, unlike the GPU code, honors random seeds, making it resilient to evaluation-order changes (which BrainScript implies, for example).
The test case now has a BrainScript implementation (it is not default though; default remains the NDL version).
Further minor code refactoring.