Activate LibSparsePCReader on Linux build
Make file name consistent with Windows
Fix reader on Linux
added a dummy DataReader constructor from BrainScript (currently not implemented);
had to disable DelayLoadNofify() because it suddenly caused link errors (seemingly because DeviceFromConfig() is now called from a different place?);
further unified DoTrain() w.r.t. old CNTK config and BrainScript. Added a new BrainScriptBuilder (but the builders will go away next and get replaced by a lambda). DoTrain now accessible from BS (but not functional yet, too many pieces missing);
all builders now can be constructed from both old ConfigParameters and BrainScript IConfigRecord;
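A minimal sketch of that pattern, assuming a constructor templated on the config record type; the type and member names below are illustrative stand-ins, not the actual CNTK signatures:

    #include <map>
    #include <string>

    // Hypothetical stand-ins for the two config mechanisms; the real
    // ConfigParameters and IConfigRecord interfaces are richer than this.
    using LegacyConfig = std::map<std::string, std::string>;
    struct BrainScriptRecord { std::string deviceId; };

    class BuilderSketch
    {
    public:
        // One templated constructor serves both config flavors; the overloaded
        // Init() picks the right parsing path at compile time.
        template <class ConfigRecordType>
        explicit BuilderSketch(const ConfigRecordType& config) { Init(config); }

        const std::string& DeviceId() const { return m_deviceId; }

    private:
        void Init(const LegacyConfig& config)
        {
            auto it = config.find("deviceId");
            m_deviceId = (it != config.end()) ? it->second : "auto";
        }
        void Init(const BrainScriptRecord& config) { m_deviceId = config.deviceId; }

        std::string m_deviceId;
    };

    int main()
    {
        LegacyConfig legacy{{"deviceId", "0"}};
        BrainScriptRecord bs{"auto"};
        return BuilderSketch(legacy).DeviceId() == "0" &&
               BuilderSketch(bs).DeviceId() == "auto" ? 0 : 1;
    }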
gcc suddenly got very picky; had to fix several places
fixed Linux Makefile after the rename of GPUMatrixCUDAKernels.cu to GPUMatrixCUDAKernels.cuh; the GCC build used to compile this file explicitly even though it is only #included
new source file ComputationNetworkScripting.cpp;
changed ComputationNetwork::SetDeviceId(): it now just takes the input and sets that as the device. Before, it did something funky with m_deviceId, which looks like a bug;
in LSTM test config, removed remnants of old NDL, now completely based on BS
moved DisableLegacyTruncationSettings() away from being the very first piece of code in CNTK.cpp, hiding it close to where it is used. Also fixed its spelling and made it 'static'
renamed SimpleEvaluator::PreCompute() to EvaluateBatchModeNodes() because the original name was misleading;
Linux Makefile now defines _DEBUG in debug builds
Replace hash_map with unordered_map
Fix size_t formatting (%zd)
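A tiny self-contained illustration of the unordered_map replacement and the size_t format specifier; the map and its contents are made up, not taken from CNTK:

    #include <cstdio>
    #include <string>
    #include <unordered_map>

    int main()
    {
        // std::unordered_map is the standard replacement for the old
        // non-standard <hash_map> container.
        std::unordered_map<std::wstring, int> nodeIndex;
        nodeIndex[L"features"] = 0;
        nodeIndex[L"labels"]   = 1;

        // The 'z' length modifier (%zu for unsigned, %zd for signed) is the
        // C99/C++11 way to print a size_t portably on gcc/glibc.
        size_t numNodes = nodeIndex.size();
        std::printf("network has %zu nodes\n", numNodes);
        return 0;
    }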
Add some missing sprintf_s args
Some methods defined in a templated base class must be qualified with the base class name (or accessed via this->) when used from a derived class (see the sketch below)
Don't qualify method names with ClassName:: when defining them inside the class definition
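A minimal example of both gcc requirements, using toy types rather than the actual CNTK classes:

    #include <cstdio>

    template <class ElemType>
    struct Base
    {
        void Validate() { std::printf("validated\n"); }
        ElemType m_value = ElemType();
    };

    template <class ElemType>
    struct Derived : Base<ElemType>
    {
        // Writing "void Derived::Run()" here (a qualified name on an in-class
        // definition) is rejected by gcc, even though some MSVC versions allow it.
        void Run()
        {
            // Members of the dependent base are not found by unqualified lookup;
            // reach them via this-> or an explicit Base<ElemType>:: prefix.
            this->m_value = ElemType(1);
            Base<ElemType>::Validate();
        }
    };

    int main()
    {
        Derived<float> d;
        d.Run();
        return 0;
    }

gcc enforces standard two-phase name lookup here, whereas MSVC's delayed template parsing historically let the unqualified forms slip through.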
fixed a slew of dependency issues, mostly in SimpleNetworkBuilder.cpp;
ComputationNetwork no longer has to include the headers for all variants of ComputationNodes (see the sketch below);
a few ComputationNode derivatives were missing #includes
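A sketch of the forward-declaration approach that enables this; the class below is a stand-in, not the real ComputationNetwork:

    #include <memory>
    #include <vector>

    // A forward declaration is enough to hold pointers to nodes; the full node
    // headers are only needed in the .cpp files that actually create or use them.
    class ComputationNodeBase;
    typedef std::shared_ptr<ComputationNodeBase> ComputationNodeBasePtr;

    class NetworkSketch
    {
    public:
        void AddNode(const ComputationNodeBasePtr& node) { m_nodes.push_back(node); }
        size_t NodeCount() const { return m_nodes.size(); }

    private:
        std::vector<ComputationNodeBasePtr> m_nodes;  // no complete node type required here
    };

    int main()
    {
        NetworkSketch net;  // compiles and runs without any node definition in scope
        return (int)net.NodeCount();
    }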
Remove the no longer needed kaldi Makefiles.
Fix some pre-configure references in the README that were missed, add
a missing step to the build instructions, remove some obsolete
information, be more specific about which ACML library works, and
reword a few sentences.
Add math library dependencies to the kaldi plugins so that make -j
doesn't try to build them before the math library is finished.
Add a configure script for initializing build parameters, for either
in-source or out-of-source builds. The script generates a Config.make
in the build directory and, for out-of-source builds, a trampoline
Makefile.
Make the build-and-test script do an out-of-source build.
Add Config.make to .gitignore, as well as Emacs temporary file patterns.
Change the configuration to build into a specific PREFIX directory, which
contains a Config.make with build- and site-specific information.
This also makes it easy to check how an earlier build was configured.
Update the instructions in README.
Modularize the build specification so that each target adds what it
needs to the relevant paths.
Add rpath to cntkmath and plugins so they do not need LD_LIBRARY_PATH.
Remove object files from cntk that were already in cntkmath.
Organize build targets into UNIX-like bin and lib directories under a
configuration-specific directory. Have .gitignore ignore these
directories.
Make it easy to keep source file lists in alphabetical order, for easier
comparison with the Windows project definition.
This change implements a read-ahead (prefetch) of one minibatch on a separate thread, to ensure that the main thread always has enough work. This is done through coarse-grained parallelism: GetMinibatch is called from a separate thread and the result is cached for the main thread to consume. The synchronization is a simple producer/consumer scheme with a single mutex and a condition variable (see the sketch below).
This change also modifies how we measure the time it takes to read and compute a minibatch's worth of data.
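A minimal, self-contained sketch of that producer/consumer scheme, using a single-slot buffer guarded by one mutex and one condition variable; the class and method names are illustrative, not the actual CNTK reader interface:

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Single-slot producer/consumer: a reader thread prefetches the next
    // minibatch while the main thread consumes the previous one.
    class PrefetchingReader
    {
    public:
        explicit PrefetchingReader(int numMinibatches)
            : m_remaining(numMinibatches), m_worker(&PrefetchingReader::ReadLoop, this) {}

        ~PrefetchingReader() { m_worker.join(); }

        // Called by the main thread: blocks until a prefetched minibatch is available.
        bool GetMinibatch(std::vector<float>& out)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this] { return m_full || m_done; });
            if (!m_full && m_done)
                return false;               // reader finished and slot is empty
            out = std::move(m_slot);
            m_full = false;
            m_cv.notify_one();              // wake the reader to refill the slot
            return true;
        }

    private:
        // Runs on the prefetch thread: fills the slot whenever it is empty.
        void ReadLoop()
        {
            for (int i = 0; i < m_remaining; i++)
            {
                std::vector<float> mb(1024, float(i)); // stand-in for real reader I/O
                std::unique_lock<std::mutex> lock(m_mutex);
                m_cv.wait(lock, [this] { return !m_full; });
                m_slot = std::move(mb);
                m_full = true;
                m_cv.notify_one();          // wake the consumer
            }
            std::lock_guard<std::mutex> lock(m_mutex);
            m_done = true;
            m_cv.notify_one();
        }

        std::mutex m_mutex;
        std::condition_variable m_cv;
        std::vector<float> m_slot;
        bool m_full = false;
        bool m_done = false;
        int m_remaining;
        std::thread m_worker;
    };

    int main()
    {
        PrefetchingReader reader(4);
        std::vector<float> mb;
        while (reader.GetMinibatch(mb))
            std::printf("consumed minibatch of %zu values\n", mb.size());
        return 0;
    }

The single slot is enough here because only one minibatch is prefetched ahead; a deeper pipeline would replace the slot with a bounded queue.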