Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

README

== To-do ==
Add descriptions to LSTMnode
Add descriptions to 0/1 mask segmentation in feature reader, delay node, and crossentropywithsoftmax node
Change criterion node to use the 0/1 mask, following example in crossentropywithsoftmax node
Add description of encoder-decoder simple network builder
Add description of time-reverse node, simple network builder and NDL builder for bi-directional models

== Authors of the README ==
    Kaisheng Yao
    Microsoft Research
    email: kaisheny@microsoft.com

    Wengong Jin
    Shanghai Jiao Tong University
    email: acmgokun@gmail.com

    Yu Zhang, Leo Liu, Scott Cyphers
    CSAIL, Massachusetts Institute of Technology
    email: yzhang87@csail.mit.edu
    email: leoliu_cu@sbcglobal.net
    email: cyphers@mit.edu

    Guoguo Chen
    CLSP, Johns Hopkins University
    email: guoguo@jhu.edu

== Preliminaries ==

To build the CPU version, you must first install either the Intel MKL BLAS
library or the ACML library. Note that ACML is free, whereas MKL may not be.

For MKL:
1. Download from https://software.intel.com/en-us/intel-mkl

For ACML:
1. Download from
http://developer.amd.com/tools-and-sdks/archive/amd-core-math-library-acml/acml-downloads-resources/
We have seen problems with some versions of the library on Intel
processors, but have had success with acml-5-3-1-ifort-64bit.tgz.

For Kaldi:
1. In kaldi-trunk/tools/Makefile, uncomment # OPENFST_VERSION = 1.4.1, and
   re-install OpenFst using the makefile.
2. In kaldi-trunk/src/, do ./configure --shared; make depend -j 8; make -j 8;
   and re-compile Kaldi (the -j option enables parallel builds). A consolidated
   command sketch follows below.
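As a rough shell sketch, assuming the standard kaldi-trunk layout (the OpenFst
version and paths may differ in your checkout):

  (edit kaldi-trunk/tools/Makefile and uncomment OPENFST_VERSION = 1.4.1)
  >cd kaldi-trunk/tools
  >make
  >cd ../src
  >./configure --shared
  >make depend -j 8
  >make -j 8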

To build the GPU version, you must first install NVIDIA CUDA.

== Build Preparation ==

You can do an out-of-source build in any directory, as well as an
in-source build.  Let $CNTK be the CNTK directory.  For an out-of-source
build in the directory "build", type:

  >mkdir build
  >cd build
  >$CNTK/configure -h

(For an in-source build, just run configure in the $CNTK directory.)

You will see various options for configure, as well as their default
values.  CNTK needs a CPU math directory, either acml or mkl.  If you
do not specify one and both are available, acml will be used.  For GPU
use, a cuda and a gdk directory are also required.  Similarly, to build
the kaldi plugin a kaldi directory is required.  You may also specify
whether you want a debug or release build, as well as add additional
path roots to use in searching for libraries.

Rerun configure with the desired options:

  >$CNTK/configure ...
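For example, a release build using ACML with CUDA support might be configured
roughly as follows (option names and paths are illustrative; verify them with
$CNTK/configure -h):

  >$CNTK/configure --with-buildtype=release --with-acml=/usr/local/acml --with-cuda=/usr/local/cuda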

This will create a Config.make and a Makefile (if you are doing an in-source
build, a Makefile will not be created).  The Config.make file
records the configuration parameters and the Makefile reinvokes the
$CNTK/Makefile, passing it the build directory where it can find the
Config.make.
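Then build from the build directory (a GNU make invocation; adjust the job
count to your machine):

  >make -j 8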

After make completes, you will have the following directories:

  .build will contain object files, and can be deleted
  bin contains the cntk program
  lib contains libraries and plugins

  The bin and lib directories can safely be moved as long as they
  remain siblings.

To clean:

  >make clean

== Run ==
All executables are in the bin directory:
	cntk: the main executable for CNTK
	*.so: shared libraries for the corresponding readers; these are linked and loaded dynamically at runtime.

	./cntk configFile=${your cntk config file}

== Kaldi Reader ==
This is an HTKMLF reader and Kaldi writer (for decoding).

To build, set --with-kaldi when you configure.

The writer section looks like this:

writer=[
    writerType=KaldiReader
    readMethod=blockRandomize
    frameMode=false
    miniBatchMode=Partial
    randomize=Auto
    verbosity=1
    ScaledLogLikelihood=[
        dim=$labelDim$
        Kaldicmd="ark:-" # will pipe to the Kaldi decoder latgen-faster-mapped
        scpFile=$outputSCP$ # the file key of the features
    ]
]

== Kaldi2 Reader ==
This is a Kaldi reader and Kaldi writer (for decoding).

To build, set --with-kaldi in your Config.make.

The features section is different:

features=[
    dim=
    rx=
    scpFile=
    featureTransform=
]

rx is a text file which contains:

    one Kaldi feature rxspecifier readable by RandomAccessBaseFloatMatrixReader.
    'ark:' specifiers don't work; only 'scp:' specifiers work.

scpFile is a text file generated by running:

    feat-to-len FEATURE_RXSPECIFIER_FROM_ABOVE ark,t:- > TEXT_FILE_NAME

    scpFile should contain one line per utterance.

    If you want to run with fewer utterances, just shorten this file.
    (It will load the feature rxspecifier but ignore utterances not present in scpFile).

featureTransform is the name of a Kaldi feature transform file:
    
    Kaldi feature transform files are used for stacking / applying transforms to features.

    An empty string (if the config file reader permits one) or the special string
    NO_FEATURE_TRANSFORM tells the reader to ignore this option.
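For illustration, a filled-in features section might look like this (the
dimension and file names are hypothetical placeholders):

features=[
    dim=40                                 # feature dimension
    rx=feats.rx                            # text file holding one 'scp:' rxspecifier
    scpFile=feats.len                      # output of feat-to-len, one line per utterance
    featureTransform=NO_FEATURE_TRANSFORM  # no transform applied
]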

== Labels ==

The labels section is also different.

labels=[
    mlfFile=
    labelDim=
    labelMappingFile=
]

The only difference is mlfFile, which now uses a different format. It is a text file which contains:

    one Kaldi label rxspecifier readable by Kaldi's copy-post binary.
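For illustration, a filled-in labels section might look like this (the file
names and label dimension are hypothetical placeholders):

labels=[
    mlfFile=labels.rx            # text file holding one rxspecifier readable by copy-post
    labelDim=1932                # number of output targets
    labelMappingFile=state.list  # maps label indices to names
]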