Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

README

== To-do ==
Add descriptions to LSTMnode
Add descriptions to 0/1 mask segmentation in feature reader, delay node, and crossentropywithsoftmax node
Change criterion node to use the 0/1 mask, following example in crossentropywithsoftmax node
Add description of encoder-decoder simple network builder
Add description of time-reverse node, simple network builder and NDL builder for bi-directional models

== Authors of the README ==
    Kaisheng Yao
    Microsoft Research
    email: kaisheny@microsoft.com

    Wengong Jin
    Shanghai Jiao Tong University
    email: acmgokun@gmail.com

    Yu Zhang, Leo Liu, Scott Cyphers
    CSAIL, Massachusetts Institute of Technology
    email: yzhang87@csail.mit.edu
    email: leoliu_cu@sbcglobal.net
    email: cyphers@mit.edu

    Guoguo Chen
    CLSP, Johns Hopkins University
    email: guoguo@jhu.edu

== Prerequisites ==

To build the CPU version, you must first install either the Intel MKL
BLAS library or the ACML library. Note that ACML is free, whereas MKL
may not be.

For MKL:
1. Download from https://software.intel.com/en-us/intel-mkl

For ACML:
1. Download from
   http://developer.amd.com/tools-and-sdks/archive/amd-core-math-library-acml/acml-downloads-resources/
   We have seen problems with some versions of the library on Intel
   processors, but have had success with acml-5-3-1-ifort-64bit.tgz.

For Kaldi:
1. In kaldi-trunk/tools/Makefile, uncomment # OPENFST_VERSION = 1.4.1, and
   re-install OpenFst using the makefile.
2. In kaldi-trunk/src/, run ./configure --shared; make depend -j 8; make -j 8
   to re-compile Kaldi (the -j option enables parallel builds).
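Step 1 above can be scripted. The sketch below (paths assumed; shown on a stand-in Makefile so the edit is easy to verify) uses sed to uncomment the OPENFST_VERSION line:

```shell
# Stand-in for kaldi-trunk/tools/Makefile (hypothetical file name).
printf '# OPENFST_VERSION = 1.4.1\n' > Makefile.demo

# Uncomment the OPENFST_VERSION line in place.
sed -i 's/^# OPENFST_VERSION = 1.4.1/OPENFST_VERSION = 1.4.1/' Makefile.demo

cat Makefile.demo   # prints: OPENFST_VERSION = 1.4.1
```

Run the same sed command against kaldi-trunk/tools/Makefile, then rebuild OpenFst with the makefile as described above.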

To build the GPU version, you must first install NVIDIA CUDA.

== Build Preparation ==

You can do an out-of-source build in any directory, or an in-source
build.  Let $CNTK be the CNTK directory.  For an out-of-source build
in the directory "build", type

  >mkdir build
  >cd build
  >$CNTK/configure -h

(For an in-source build, just run configure in the $CNTK directory.)

You will see the various options for configure, as well as their
default values.  CNTK needs a CPU math directory, either acml or mkl.
If you do not specify one and both are available, acml is used.  For
GPU use, cuda and gdk directories are also required.  Similarly,
building the kaldi plugin requires a kaldi directory.  You may also
specify whether you want a debug or release build, and add additional
path roots to search when looking for libraries.

Rerun configure with the desired options:

  >$CNTK/configure ...

This will create a Config.make and a Makefile (for an in-source
build, no Makefile is created).  The Config.make file records the
configuration parameters, and the Makefile reinvokes $CNTK/Makefile,
passing it the build directory where it can find Config.make.

After make completes, you will have the following directories:

  .build will contain object files, and can be deleted
  bin contains the cntk program
  lib contains libraries and plugins

  The bin and lib directories can safely be moved as long as they
  remain siblings.

To clean:

  >make clean

== Run ==
All executables are in the bin directory:
	cntk: the main executable for CNTK
	*.so: shared libraries for the corresponding readers; these are
	linked and loaded dynamically at runtime.

	./cntk configFile=${your cntk config file}

== Kaldi Reader ==
This is an HTKMLF reader and Kaldi writer (for decoding).

To build it, pass --with-kaldi when you run configure.

The writer section looks like:

writer=[
    writerType=KaldiReader
    readMethod=blockRandomize
    frameMode=false
    miniBatchMode=Partial
    randomize=Auto
    verbosity=1
    ScaledLogLikelihood=[
        dim=$labelDim$
        Kaldicmd="ark:-" # will pipe to the Kaldi decoder latgen-faster-mapped
        scpFile=$outputSCP$ # the file key of the features
    ]
]

== Kaldi2 Reader ==
This is a Kaldi reader and Kaldi writer (for decoding).

To build it, set --with-kaldi in your Config.make.

The features section is different:

features=[
    dim=
    rx=
    scpFile=
    featureTransform=
]
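As an illustration, a filled-in features section might look like the
sketch below. The dimension and file names are hypothetical; only the
field names and the NO_FEATURE_TRANSFORM value come from this README
(rx, scpFile, and featureTransform are described in the following
paragraphs):

```
features=[
    dim=40                                 # hypothetical feature dimension
    rx=feats.rx                            # text file holding one scp: rxspecifier
    scpFile=feats.len                      # output of feat-to-len (see below)
    featureTransform=NO_FEATURE_TRANSFORM  # no feature transform applied
]
```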

rx is a text file which contains:

    one Kaldi feature rxspecifier readable by RandomAccessBaseFloatMatrixReader.
    'ark:' specifiers don't work; only 'scp:' specifiers work.

scpFile is a text file generated by running:

    feat-to-len FEATURE_RXSPECIFIER_FROM_ABOVE ark,t:- > TEXT_FILE_NAME

    scpFile should contain one line per utterance.

    If you want to run with fewer utterances, just shorten this file.
    (It will load the feature rxspecifier but ignore utterances not present in scpFile).
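A minimal sketch of trimming the utterance list (file names hypothetical): feat-to-len writes one "utterance-id length" line per utterance, so keeping the first N lines keeps the first N utterances:

```shell
# Stand-in for feat-to-len output: one "utterance-id length" line each.
printf 'utt1 310\nutt2 428\nutt3 275\n' > feats.len

# Keep only the first 2 utterances; the reader ignores the rest.
head -n 2 feats.len > feats.short.len

wc -l < feats.short.len   # prints: 2
```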

featureTransform is the name of a Kaldi feature transform file:

    Kaldi feature transform files are used for stacking / applying
    transforms to features.

    An empty string (if the config file reader permits one) or the
    special string NO_FEATURE_TRANSFORM says to ignore this option.

********** Labels **********

The labels section is also different.

labels=[
    mlfFile=
    labelDim=
    labelMappingFile=
]

The only difference is mlfFile, which now uses a different format. It is a text file which contains:

    one Kaldi label rxspecifier readable by Kaldi's copy-post binary.
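As an illustration only (directory, model, and archive names are all
hypothetical, and the exact specifier your setup needs depends on how
your alignments were produced), such a file might contain a single
piped rxspecifier in the style commonly used in Kaldi recipes:

```
ark:gunzip -c exp/tri3/ali.*.gz | ali-to-pdf exp/tri3/final.mdl ark:- ark:- | ali-to-post ark:- ark:- |
```

This converts alignments to pdf-ids and then to posteriors, which copy-post can read.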