Note: baselines need to be fixed for
Tests/EndToEndTests/BatchNormalization and
Tests/EndToEndTests/Examples/Image/Miscellaneous/CIFAR-10/02_BatchNormConv.
For batch normalization, the running inverse standard deviation becomes the
running variance. We mirror this cuDNN v5 change in the CNTK batch
normalization engine. The model version is bumped. When old models are
loaded, this parameter is (approximately) converted.
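The relationship between the two parameterizations, and hence the conversion
applied at load time, can be sketched as follows (a minimal sketch; the
function name and the default epsilon are assumptions, not the actual CNTK
loading code):

    #include <cstddef>
    #include <vector>

    // invStd = 1 / sqrt(var + eps)  =>  var = 1 / invStd^2 - eps.
    // The epsilon used at training time is not available at load time, which is
    // one reason the conversion is only approximate; 1e-5 is an assumed default.
    std::vector<double> InvStdDevToVariance(const std::vector<double>& runInvStdDev,
                                            double epsilon = 1e-5)
    {
        std::vector<double> runVariance(runInvStdDev.size());
        for (std::size_t i = 0; i < runInvStdDev.size(); ++i)
            runVariance[i] = 1.0 / (runInvStdDev[i] * runInvStdDev[i]) - epsilon;
        return runVariance;
    }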
As part of the same model version change, batch normalization now counts
samples seen rather than minibatches (this fixes incorrect averaging
when the minibatch size varies across epochs).
Batch normalization averaging and blending now handle the initialization
cases explicitly and no longer rely on the initial mean and variance values
(set in NDL/BrainScript).
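A minimal sketch of the intent of both changes (hypothetical names, not the
actual BatchNormEngine code): weighting the running statistics by samples seen
makes the average independent of how samples are split into minibatches, and
gives the initial values zero weight on the first update.

    #include <cstddef>

    struct RunningStats
    {
        double runMean = 0.0;         // initial value never enters the estimate
        double runVariance = 0.0;     // ditto
        std::size_t samplesSeen = 0;  // replaces the former minibatch counter

        void Update(double mbMean, double mbVariance, std::size_t mbSize)
        {
            // The new minibatch is weighted by its sample count; on the first
            // call (samplesSeen == 0) the weight is exactly 1, so whatever
            // NDL/BrainScript put into runMean/runVariance is discarded.
            double w = static_cast<double>(mbSize) / static_cast<double>(samplesSeen + mbSize);
            runMean     = (1.0 - w) * runMean     + w * mbMean;
            runVariance = (1.0 - w) * runVariance + w * mbVariance;
            samplesSeen += mbSize;
        }
    };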
Update Windows / Linux / Docker build.
With this commit, cuDNN v4 is no longer supported.
Build unit tests only when Boost is available
Configure Boost path in configure
Address CR comments
Fix #678
This is a combination of 10 commits.
Adapt makefile to work with Docker container on Linux
This is a combination of 7 commits.
enable unit tests build
remove -liomp5 from math unit test
enable openmp
add -ldl
fix space error
add -ldl for reader test
change order of -l
remove gdk installation
use /usr/local/lib as boost library path
Adapt path in unit tests on Linux
adapt path for Linux
adapt path on other test projects for Linux
remove extra blank line
add BOOST_PATH, and build unit tests based on BOOST_PATH
configure boost path
install Boost 1.60.0; use version variable instead of hardcoding.
add comments about paths that differ between Linux and Windows.
restore installation of gdk, because the removal of gdk needs more changes and will be done in a separate check-in
use ifdef, fix typos
- Open MPI 1.10.3
- OpenBLAS 0.2.18, also compile with OpenMP and LAPACK
- OpenCV 3.1.0 (also support in ./configure)
- CNTK custom MKL, version 1
- MKL build (in /cntk/build-mkl/*/release)
- Add python-yaml package, so TestDriver.py can be run
* Add 'openblas' as mathlib option in configure. Not added to auto-search, so it
must be specified using --with-openblas
* configure script searches an empty tail so that libraries located at default_path_list
roots (i.e., /usr/local/ + include/openblas_config.h) are found
* Treat ACML as the odd library out in ifdefs, since it doesn't conform to the typical
BLAS standard. Other libraries like ATLAS should be able to share the
OpenBLAS/MKL variants. Add a default USE_ACML define in VS projects to match
* Fix 'max' macro define colliding with C++ std::max once openblas headers are included
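To illustrate the 'max' collision and one common workaround (the include and
the exact fix shown here are assumptions, not a description of the actual
change):

    #include <algorithm>

    #include <cblas.h>   // OpenBLAS headers can end up defining a function-like 'max' macro

    #ifdef max
    #undef max           // restore std::max; alternatively call it as (std::max)(a, b)
    #endif

    static double Largest(double a, double b)
    {
        return std::max(a, b); // would not compile while the 'max' macro is in effect
    }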
Usage Notes:
* For best performance, build OpenBLAS with USE_OPENMP=1. When running CNTK, set the
OPENBLAS_NUM_THREADS environment variable or the numCPUThreads CNTK config variable to the
physical core count, or performance will suffer
* OpenBLAS 2.16 (git HEAD) tested on Linux with GCC 4.8.4 and on Windows with
OpenBLAS 2.15 (pre-built binary release + MinGW 64-bit support DLLs)
* For Windows, in Math.vcxproj, replace libacml_mp_dll.lib with libopenblas.dll.a and change
the USE_ACML define to USE_OPENBLAS. Change the ACML_PATH environment variable to your OpenBLAS path.
Modify openblas_config.h as per https://github.com/xianyi/OpenBLAS/issues/708
* On current-generation Intel processors, OpenBLAS benchmarks a little faster than
AMD ACML and slower than Intel MKL on MNIST and other examples
Add a configure script for initializing build parameters, for either
in-source or out-of-source builds. The script generates a Config.make
in the build directory, and, for out-of-source builds, a trampoline
Makefile.
Make the build-and-test script do an out-of-source build.
Add Config.make to .gitignore, as well as Emacs temporary-file patterns.