Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

README

== To-do ==
Add descriptions to LSTMnode
Add descriptions to 0/1 mask segmentation in feature reader, delay node, and crossentropywithsoftmax node
Change criterion node to use the 0/1 mask, following example in crossentropywithsoftmax node
Add description of encoder-decoder simple network builder
Add description of time-reverse node, simple network builder and NDL builder for bi-directional models

== Authors of the README ==
    Kaisheng Yao
    Microsoft Research
    email: kaisheny@microsoft.com

    Wengong Jin
    Shanghai Jiao Tong University
    email: acmgokun@gmail.com

    Yu Zhang, Leo Liu, Scott Cyphers
    CSAIL, Massachusetts Institute of Technology
    email: yzhang87@csail.mit.edu
    email: leoliu_cu@sbcglobal.net
    email: cyphers@mit.edu

    Guoguo Chen
    CLSP, Johns Hopkins University
    email: guoguo@jhu.edu

== Preliminaries ==

To build the CPU version, you must first install the Intel MKL BLAS
library or the ACML library. Note that ACML is free, whereas MKL may not be.

For MKL:
1. Download from https://software.intel.com/en-us/intel-mkl

For ACML:
1. Download from
   http://developer.amd.com/tools-and-sdks/archive/amd-core-math-library-acml/acml-downloads-resources/
   We have seen problems with some versions of the library on Intel
   processors, but have had success with acml-5-3-1-ifort-64bit.tgz.

For Kaldi:
1. In kaldi-trunk/tools/Makefile, uncomment # OPENFST_VERSION = 1.4.1, and
   re-install OpenFst using the makefile.
2. In kaldi-trunk/src/, run ./configure --shared; make depend -j 8; make -j 8
   to re-compile Kaldi (the -j option enables parallel compilation).

To build the GPU version, you must first install NVIDIA CUDA.

== Build Preparation ==

You can do an out-of-source build in any directory, or an in-source
build.  Let $CNTK be the CNTK directory.  For an out-of-source build
in the directory "build", type

  >mkdir build
  >cd build
  >$CNTK/configure -h

(For an in-source build, just run configure in the $CNTK directory.)

You will see the various options for configure, as well as their
default values.  CNTK needs a CPU math directory, either acml or mkl.
If you do not specify one and both are available, acml is used.  For
GPU use, cuda and gdk directories are also required.  Similarly, to
build the kaldi plugin a kaldi directory is required.  You may also
specify whether you want a debug or release build, and add additional
path roots to search for libraries.

Rerun configure with the desired options:

  >$CNTK/configure ...

This creates a Config.make and a Makefile (for an in-source build, no
Makefile is created).  The Config.make file records the configuration
parameters, and the Makefile re-invokes $CNTK/Makefile, passing it the
build directory where it can find Config.make.
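
As a concrete illustration, the generated Config.make is a set of make
variable assignments.  The variable names and paths below are
hypothetical and depend on your configure options and where the
libraries are installed on your system:

```makefile
# Hypothetical Config.make excerpt -- actual variable names and paths
# depend on your configure invocation and installation locations.
BUILDTYPE = release
MATHLIB = acml
ACML_PATH = /usr/local/acml5.3.1/gfortran64
CUDA_PATH = /usr/local/cuda-7.0
GDK_PATH = /usr
KALDI_PATH = /home/user/kaldi-trunk
```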

After make completes, you will have the following directories:

  .build will contain object files, and can be deleted
  bin contains the cntk program
  lib contains libraries and plugins

  The bin and lib directories can safely be moved as long as they
  remain siblings.

To clean:

  >make clean

== Run ==
All executables are in the bin directory:
	cntk: the main CNTK executable
	*.so: shared libraries for the corresponding readers; they are linked and loaded dynamically at runtime.

	./cntk configFile=${your cntk config file}

== Kaldi Reader ==
This is an HTKMLF reader and a Kaldi writer (for decoding).

To build, set --with-kaldi when you configure.

The writer section looks like this:

writer=[
    writerType=KaldiReader
    readMethod=blockRandomize
    frameMode=false
    miniBatchMode=Partial
    randomize=Auto
    verbosity=1
    ScaledLogLikelihood=[
        dim=$labelDim$
        Kaldicmd="ark:-" # will pipe to the Kaldi decoder latgen-faster-mapped
        scpFile=$outputSCP$ # the file key of the features
    ]
]

== Kaldi2 Reader ==
This is a Kaldi reader and a Kaldi writer (for decoding).

To build, set --with-kaldi in your Config.make

The features section is different:

features=[
    dim=
    rx=
    scpFile=
    featureTransform=
]
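
For example, a filled-in features section might look like the
following (the dimension and paths are hypothetical; the individual
keys are explained next):

```
features=[
    dim=40                          # hypothetical feature dimension
    rx=$workDir$/feats.rx           # hypothetical path
    scpFile=$workDir$/feats.len     # hypothetical path
    featureTransform=NO_FEATURE_TRANSFORM
]
```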

rx is a text file which contains:

    one Kaldi feature rxspecifier readable by RandomAccessBaseFloatMatrixReader.
    'ark:' specifiers don't work; only 'scp:' specifiers work.

scpFile is a text file generated by running:

    feat-to-len FEATURE_RXSPECIFIER_FROM_ABOVE ark,t:- > TEXT_FILE_NAME

    scpFile should contain one line per utterance.

    If you want to run with fewer utterances, just shorten this file.
    (It will load the feature rxspecifier but ignore utterances not present in scpFile).
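
    To make the format concrete: feat-to-len writes one line per utterance,
    "<utterance-id> <number-of-frames>".  The sketch below fabricates such a
    file (the ids and frame counts are made up) and keeps only the first two
    utterances:

```shell
# Fabricated feat-to-len output: "<utterance-id> <num-frames>" per line
printf 'utt_001 438\nutt_002 512\nutt_003 97\n' > feats.len.scp

# Run with fewer utterances by keeping only the first N lines
head -n 2 feats.len.scp > feats.len.small.scp
cat feats.len.small.scp
```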

featureTransform is the name of a Kaldi feature transform file:
    
    Kaldi feature transform files are used for stacking / applying transforms to features.

    An empty string (if the config file reader permits one) or the
    special string NO_FEATURE_TRANSFORM says to ignore this option.

== Labels ==

The labels section is also different.

labels=[
    mlfFile=
    labelDim=
    labelMappingFile=
]

The only difference is mlfFile, which is now in a different format: a text file which contains

    one Kaldi label rxspecifier readable by Kaldi's copy-post binary.
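
Putting it together, a filled-in labels section might look like this
(all values are hypothetical):

```
labels=[
    mlfFile=$workDir$/labels.rx   # hypothetical text file holding one rxspecifier
    labelDim=9304                 # hypothetical output dimension
    labelMappingFile=$workDir$/statelist
]
```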