More small revisions based on CR.

This commit is contained in:
Cha Zhang 2016-11-04 14:32:55 -07:00
Parent 7086e04892
Commit a6a513258c
4 changed files: 8 additions and 10 deletions

View file

@@ -24,13 +24,12 @@ model_path = os.path.join(abs_path, "Models")
 # Define the reader for both training and evaluation action.
 def create_reader(path, is_training, input_dim, label_dim):
     return MinibatchSource(CTFDeserializer(path, StreamDefs(
-        features = StreamDef(field='features', shape=input_dim, is_sparse=False),
-        labels   = StreamDef(field='labels', shape=label_dim, is_sparse=False)
+        features = StreamDef(field='features', shape=input_dim),
+        labels   = StreamDef(field='labels', shape=label_dim)
     )), randomize=is_training, epoch_size = INFINITELY_REPEAT if is_training else FULL_DATA_SWEEP)

 # Creates and trains a convolutional classification model for CIFAR-10 images
 def convnet_cifar10(debug_output=False):
     set_computation_network_trace_level(0)
@@ -56,7 +55,7 @@ def convnet_cifar10(debug_output=False):
         ]),
         LayerStack(2, lambda i: [
             Dense([256,128][i]),
-            Dropout(0.5) # dropout scheduling is not supported in Python yet
+            Dropout(0.5)
         ]),
         Dense(num_output_classes, activation=None)
     ])(scaled_input)
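As a side note on the reader change above: the commit drops `is_sparse=False` from the `StreamDef` calls, which is safe because that flag already defaults to `False` in CNTK's Python API, so the behavior is unchanged. A minimal plain-Python sketch (using a hypothetical stand-in for `StreamDef`; no CNTK required) illustrates why omitting a keyword argument that matches its default is a no-op:

```python
# Hypothetical stand-in for cntk.io.StreamDef, with the same default
# for is_sparse that the real API uses (False).
def StreamDef(field, shape, is_sparse=False):
    return {"field": field, "shape": shape, "is_sparse": is_sparse}

# Passing the default explicitly and omitting it produce identical results,
# which is why the diff can remove `is_sparse=False` without changing behavior.
explicit = StreamDef(field="features", shape=784, is_sparse=False)
implicit = StreamDef(field="features", shape=784)
assert explicit == implicit
```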

View file

@@ -69,7 +69,7 @@ def convnet_cifar10_dataaug(reader_train, reader_test):
         ]),
         LayerStack(2, lambda i: [
             Dense([256,128][i]),
-            Dropout(0.5) # dropout scheduling is not supported in Python yet
+            Dropout(0.5)
         ]),
         Dense(num_classes, activation=None)
     ])(scaled_input)

View file

@@ -22,13 +22,12 @@ model_path = os.path.join(abs_path, "Models")
 # Define the reader for both training and evaluation action.
 def create_reader(path, is_training, input_dim, label_dim):
     return MinibatchSource(CTFDeserializer(path, StreamDefs(
-        features = StreamDef(field='features', shape=input_dim, is_sparse=False),
-        labels   = StreamDef(field='labels', shape=label_dim, is_sparse=False)
+        features = StreamDef(field='features', shape=input_dim),
+        labels   = StreamDef(field='labels', shape=label_dim)
     )), randomize=is_training, epoch_size = INFINITELY_REPEAT if is_training else FULL_DATA_SWEEP)

 # Creates and trains a feedforward classification model for MNIST images
 def convnet_mnist(debug_output=False):
     image_height = 28
     image_width  = 28

View file

@@ -13,7 +13,7 @@
 ### Getting the data
-We use the MNIST and CIFAR-10 datasets to demonstrate how to train a `convolutional neural network (CNN)`. CNN has been one of the most popular neural networks for image-related tasks. A very well-known early work on CNN is the [LeNet](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf). In 2012 Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ILSVRC-2012 competition using a [CNN architecture](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). And most state-of-the-art neural networks on image classification tasks today adopts a modified CNN architecture, such as [VGG](../VGG), [GoogLeNet](../GoogLeNet), [ResNet](../ResNet), etc.
+We use the MNIST and CIFAR-10 datasets to demonstrate how to train a `convolutional neural network (CNN)`. CNN has been one of the most popular neural networks for image-related tasks. A very well-known early work on CNN is the [LeNet](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf). In 2012 Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ILSVRC-2012 competition using a CNN architecture, [AlexNet](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). And most state-of-the-art neural networks on image classification tasks today adopt a modified CNN architecture, such as [VGG](../VGG), [GoogLeNet](../GoogLeNet), [ResNet](../ResNet), etc.
 The MNIST and CIFAR-10 datasets are not included in the CNTK distribution, but they can be easily downloaded and converted by following the instructions in [DataSets/MNIST](../../DataSets/MNIST) and [DataSets/CIFAR-10](../../DataSets/CIFAR-10). We recommend keeping the downloaded data in the respective folders, as the configuration files in this folder assume that by default.