Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

The CNTK Wiki has all information on CNTK including setup, examples, etc.

Effective January 25, 2017, the CNTK 1-bit Stochastic Gradient Descent (1bit-SGD) and BlockMomentumSGD code has been moved to a new repository on GitHub. Read this article for details.

Give us feedback through these channels.

Latest news

2017-02-28. V 2.0 Beta 12 Release available at Docker Hub
CNTK V 2.0 Beta 12 Runtime packages are now available as Public Images at Docker Hub.
See more on CNTK as Docker Images in this Wiki article.

2017-02-23. V 2.0 Beta 12 Release
Highlights of this Release:

  • New and updated features: new activation functions, support of Argmax and Argmin, improved performance of numpy interop, new functionality of existing operators, and more.
  • CNTK for CPU on Windows can now be installed via pip install on Anaconda 3. Other configurations will be enabled soon.
  • HTK deserializers are now exposed in Python. All deserializers are exposed in C++.
  • The memory pool implementation of CNTK has been updated with a new global optimization algorithm. Hyper memory compression has been removed.
  • New features in C++ API.
  • New Eval examples for RNN models.
  • New CNTK NuGet Packages with CNTK V2 C++ Library.
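The new pip-based install path mentioned above can be sketched as follows. Note this is an assumption for illustration: the exact package name or wheel URL for a given beta release is published on the CNTK Wiki, and `cntk` is used here as a placeholder package name.

```shell
# Install the CNTK CPU package into an Anaconda 3 (Python 3.5) environment.
# The bare package name "cntk" is an assumption; the beta releases may
# instead require the full wheel URL listed on the CNTK Wiki.
pip install cntk
```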

See more in the Release Notes.
Get the Release from the CNTK Releases page.

2017-02-13. V 2.0 Beta 11 Release available at Docker Hub
CNTK V 2.0 Beta 11 Runtime packages are now available as Public Images at Docker Hub.
See more on CNTK as Docker Images in this Wiki article.

2017-02-10. V 2.0 Beta 11 Release
Highlights of this Release:

See more in the Release Notes.
Get the Release from the CNTK Releases page.

2017-02-08. V 2.0 Beta 10 Release available at Docker Hub
CNTK V 2.0 Beta 10 Runtime packages are now available as Public Images at Docker Hub.
See more on CNTK as Docker Images in this Wiki article.

See all news.

What is the Microsoft Cognitive Toolkit

The Microsoft Cognitive Toolkit (https://www.microsoft.com/en-us/research/product/cognitive-toolkit/) is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations upon their inputs. CNTK makes it easy to realize and combine popular model types such as feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs). It implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. CNTK has been available under an open-source license since April 2015. It is our hope that the community will take advantage of CNTK to share ideas more quickly through the exchange of open-source working code.
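The directed-graph idea above can be sketched in a few lines of plain Python (this is an illustration of the concept, not the CNTK API): leaf nodes hold input values or parameters, interior nodes apply operations, and reverse-mode automatic differentiation walks the same graph backward to accumulate gradients.

```python
class Leaf:
    """A leaf node: an input value or a learnable parameter."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
    def forward(self):
        pass  # leaves already hold their value
    def backward(self):
        pass  # gradients are accumulated into leaves by their consumers

class Mul:
    """Interior node: elementwise product of two upstream nodes."""
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.value, self.grad = 0.0, 0.0
    def forward(self):
        self.value = self.a.value * self.b.value
    def backward(self):
        # d(a*b)/da = b, d(a*b)/db = a (chain rule with incoming grad)
        self.a.grad += self.grad * self.b.value
        self.b.grad += self.grad * self.a.value

class Add:
    """Interior node: sum of two upstream nodes."""
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.value, self.grad = 0.0, 0.0
    def forward(self):
        self.value = self.a.value + self.b.value
    def backward(self):
        # Addition passes the incoming gradient through unchanged
        self.a.grad += self.grad
        self.b.grad += self.grad

# Build y = w * x + b, with x an input and (w, b) parameters
x, w, b = Leaf(3.0), Leaf(2.0), Leaf(1.0)
y = Add(Mul(w, x), b)

# Forward pass in topological order, then backward from the output
graph = [y.a, y]
for node in graph:
    node.forward()
y.grad = 1.0
for node in reversed(graph):
    node.backward()

print(y.value)  # 7.0 = 2*3 + 1
print(w.grad)   # 3.0 = dy/dw = x
print(x.grad)   # 2.0 = dy/dx = w
```

A real CNTK model graph works the same way at a much larger scale, with tensors at the leaves and matrix operations at the interior nodes.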

Wiki: Go to the CNTK Wiki for all information on CNTK including setup, examples, etc.

License: See LICENSE.md in the root of this repository for the full license information.

Tutorial: Microsoft Computational Network Toolkit (CNTK) @ NIPS 2015 Workshops

Blogs:

Performance

The Cognitive Toolkit (CNTK) provides significant performance gains compared to other toolkits; click here for details. Here is a summary of findings by researchers at HKBU.

  • CNTK's LSTM performance is 5-10x faster than that of the other toolkits.
  • For convolution (image tasks), CNTK is comparable, but note that the authors were using CNTK 1.7.2, and the current CNTK 2.0 Beta 10 is over 30% faster than 1.7.2.
  • For all networks, CNTK's performance was superior to that of TensorFlow.

Historically, CNTK has been a pioneer in optimizing performance on multi-GPU systems. We continue to maintain that edge (NVidia news at SuperComputing 2016 and CRAY at NIPS 2016).

CNTK was a pioneer in introducing scalability across multi-server, multi-GPU systems. The figure below compares processing speed (frames processed per second) of CNTK to that of four other well-known toolkits. The configuration uses a fully connected 4-layer neural network (see our benchmark scripts) and an effective minibatch size of 8192. All results were obtained on the same hardware with the respective latest public software versions as of December 3, 2015.

Performance chart

Citation

If you used this toolkit or part of it to do your research, please cite the work as:

Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, T. Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Hari Parthasarathi, Baolin Peng, Marko Radmilac, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Huaming Wang, Yongqiang Wang, Kaisheng Yao, Dong Yu, Yu Zhang, Geoffrey Zweig (in alphabetical order), "An Introduction to Computational Networks and the Computational Network Toolkit", Microsoft Technical Report MSR-TR-2014-112, 2014.

Disclaimer

CNTK is in active use at Microsoft and constantly evolving. There will be bugs.

Microsoft Open Source Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.