"This tutorial is targeted to individuals who are new to CNTK and to machine learning. We assume you have completed or are familiar with CNTK 101 and 102. In this tutorial, you will train a feed forward network based simple model to recognize handwritten digits. This is the first example, where we will train and evaluate a neural network based model on read real world data. \n",
"- [Part B](https://github.com/Microsoft/CNTK/blob/v2.0.beta10.0/Tutorials/CNTK_103B_MNIST_FeedForwardNetwork.ipynb): We will use the feedforward classifier used in CNTK 102 to classify digits in MNIST data set.\n",
"# Import the relevant modules to be used later\n",
"from __future__ import print_function\n",
"import gzip\n",
"import matplotlib.image as mpimg\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import os\n",
"import shutil\n",
"import struct\n",
"import sys\n",
"\n",
"try: \n",
" from urllib.request import urlretrieve \n",
"except ImportError: \n",
" from urllib import urlretrieve\n",
"\n",
"# Config matplotlib for inline plotting\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data download\n",
"\n",
"We will download the data into local machine. The MNIST database is a standard handwritten digits that has been widely used for training and testing of machine learning algorithms. It has a training set of 60,000 images and a test set of 10,000 images with each image being 28 x 28 pixels. This set is easy to use visualize and train on any computer."
"Save the images in a local directory. While saving the data we flatten the images to a vector (28x28 image pixels becomes an array of length 784 data points) and the labels are encoded as [1-hot][] encoding (label of 3 with 10 digits becomes `0010000000`.\n",
"One can do data manipulations to improve the performance of a machine learning system. I suggest you first use the data generated so far and run the classifier in CNTK 103 Part B. Once you have a baseline with classifying the data in its original form, now use the different data manipulation techniques to further improve the model.\n",
"\n",
"There are several ways data alterations can be performed. CNTK readers automate a lot of these actions for you. However, to get a feel for how these transforms can impact training and test accuracies, I strongly encourage individuals to try one or more of data perturbation.\n",
"\n",
"- Shuffle the training data (rows to create a different). Hint: Use `permute_indices = np.random.permutation(train.shape[0])`. Then run Part B of the tutorial with this newly permuted data.\n",
"- Adding noise to the data can often improves [generalization error][]. You can augment the training set by adding noise (generated with numpy, hint: use `numpy.random`) to the training images. \n",
"- Distort the images with [affine transformation][] (translations or rotations)\n",