still not working with GPU since inputs[1] comes in on the CPU, for unknown reasons;
made latticesource.h compile without relying on an accidentally imported "using namespace std"
SimpleNetworkBuilder.cpp now never uses the LSTMNode (which is not functional at present--we should just remove it);
bug fix: LUBatchSequenceReader now sets the SequenceEnd flag as well. But I get NaNs, so something is still not complete;
...well, it was removed from mainline CNTK code. My old DBN.exe reader code inside still needs it, so it has just been moved under the rug (HTKMLFReader project). Unused code was deleted from the basetypes.h copy under the rug;
DataReader.h no longer includes lattice-related header files (keeping MS-proprietary stuff from being included in mainline CNTK code);
fileutil.h is no longer included by mainline CNTK code. Instead #include "File.h". Some day we will merge the two;
removed a 'using namespace std;' from fileutil.h, and dealt with the fallout
fixed the include paths of the MathPerformanceTests project (which were incomplete and even inconsistent between Release and Debug);
more attempts at ImageReader.cpp
changed BestGpu.h to not include "ScriptableObjects.h" and "commandArgUtil.h", since this header is also included by .cu files, which need it for the CPUONLY flag, but the CUDA compiler gets confused by the other two;
new method ConfigParameters::GetMemberIds();
new method ConfigArray::AsVector<>()
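
For orientation, a rough usage sketch of these two helpers (signatures and return types assumed from these notes, not checked against the header; InspectConfig() and "layerSizes" are just made-up examples):

    #include <cstdio>
    #include <vector>
    #include "commandArgUtil.h"          // ConfigParameters, ConfigArray
    using namespace Microsoft::MSR::CNTK;

    static void InspectConfig(const ConfigParameters& config)
    {
        // GetMemberIds(): enumerate the names of all members of this config record
        const auto memberIds = config.GetMemberIds();
        fprintf(stderr, "config has %d members\n", (int) memberIds.size());

        // AsVector<>(): convert a colon-separated config array into a typed std::vector
        // (exact signature assumed; "layerSizes" is only an example parameter name)
        ConfigArray layerSizesArray = config("layerSizes", "512:512");
        std::vector<size_t> layerSizes = layerSizesArray.AsVector<size_t>();
        fprintf(stderr, "first layer size: %d\n", (int) layerSizes[0]);
    }
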
deleted mDoRandomize flag and SetDoRandomize() method, since they were not used anywhere;
made DataReader::GetDataReader(), Init(), and data members private;
removed DataReader::m_configure since it just passed simple data from GetDataReader() to the only function that calls GetDataReader();
renamed m_dataReader[] to m_dataReaders[] (plural);
fixed rnnlm.gpu.config by moving modelPath out of the SGD block
added a dummy DataReader constructor from BrainScript (currently not implemented);
had to disable DelayLoadNofify() as that suddenly caused link errors (seemingly because DeviceFromConfig() is now called from a different place?);
further unified DoTrain() w.r.t. old CNTK config and BrainScript. Added a new BrainScriptBuilder (but the builders will go away next and get replaced by a lambda). DoTrain now accessible from BS (but not functional yet, too many pieces missing);
all builders now can be constructed from both old ConfigParameters and BrainScript IConfigRecord;
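
E.g., a sketch of the pattern with a made-up ExampleBuilder (the real builders differ in detail; the typed-default access below is the intent, not guaranteed current behavior):

    #include "commandArgUtil.h"       // old-style ConfigParameters
    #include "ScriptableObjects.h"    // BrainScript IConfigRecord
    using namespace Microsoft::MSR::CNTK;

    class ExampleBuilder              // hypothetical name, stands in for the real builders
    {
    public:
        ExampleBuilder(const ConfigParameters& config)                 { Init(config); }  // old CNTK config
        ExampleBuilder(const ScriptableObjects::IConfigRecord& config) { Init(config); }  // BrainScript record
    private:
        template <class ConfigRecordType>
        void Init(const ConfigRecordType& config)
        {
            // both config types are meant to support the same typed access with a typed
            // default (see the SGDParams note further down), so one template serves both
            size_t numLayers = config(L"numLayers", (size_t) 1);
            (void) numLayers;
        }
    };
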
suddenly gcc got very picky, had to fix several places
added pretty-printed expression names for infix operators;
the ":" operator (for forming arrays) now flattens the array in the parser rather than the evaluation, allowing evaluation to construct a ConfigArray where the elements are lazily evaluated, as needed for the top-level "actions";
added the parsing and evaluation of the (dict with dict) syntax, although its functionality is not yet implemented (just returns the first dict);
disabled the 'stopAtNewline' flag, since I don't see why it is even necessary, and it prevents us from writing the "with" operator on the next line;
ParseConfigExpression() now checks for junk at end;
wmainWithBS() now parses BS with ParseConfigExpression() rather than ParseConfigDictFromString(), so that we can construct a "with" expression with overrides given on the command line
added a temporary fix to ProgressTracing that allows not setting the correct #epochs upfront (since BS can't do that);
added a first BS version of main()
all ConfigParameters accesses in SGDParams() now use the default values with type, such that as a next step, the same code will hopefully compile for both old config and BrainScript
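
A sketch of what that pattern is meant to look like (made-up function and parameter names, not the actual SGDParams() code):

    // hypothetical excerpt; ConfigRecordType is ConfigParameters or BrainScript IConfigRecord
    template <class ConfigRecordType>
    static void ReadSGDParams(const ConfigRecordType& configSGD)
    {
        // old style:     size_t maxEpochs = configSGD("maxEpochs", "9999");   // string default
        // typed default: the same expression is well-defined for both config classes
        size_t maxEpochs    = configSGD(L"maxEpochs", (size_t) 9999);
        double learningRate = configSGD(L"learningRatePerSample", 0.001);
        (void) maxEpochs; (void) learningRate;
    }
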