Added timeseries with LSTM basics tutorial
This commit is contained in:
Parent: 777ac54fac
Commit: c6147735a9
@@ -0,0 +1,30 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE.md file in the project root
# for full license information.
# ==============================================================================

import os
import re
import numpy as np

abs_path = os.path.dirname(os.path.abspath(__file__))
notebook = os.path.join(abs_path, "..", "..", "..", "..", "Tutorials", "CNTK_206A_IOT_Example_with_LSTM.ipynb")


def test_cntk_206A_iot_example_with_LSTM_noErrors(nb):
    errors = [output for cell in nb.cells if 'outputs' in cell
              for output in cell['outputs'] if output.output_type == "error"]
    assert errors == []


expectedEvalErrorByDeviceId = {-1: 0.0001, 0: 0.0001}


def test_cntk_206A_iot_example_with_LSTM_evalCorrect(nb, device_id):
    testCell = [cell for cell in nb.cells
                if cell.cell_type == 'code' and re.search('# Print validate and test error', cell.source)]
    assert len(testCell) == 1
    print(testCell[0].outputs[0]['text'])
    m = re.match(r"msre for test: (?P<actualEvalError>\d+\.\d+)\r?$", testCell[0].outputs[0]['text'])
    print(m)
    print(float(m.group('actualEvalError')))
    # TODO tighten tolerances
    assert np.isclose(float(m.group('actualEvalError')), expectedEvalErrorByDeviceId[device_id], atol=0.000001)
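The `nb` and `device_id` arguments above are pytest fixtures defined elsewhere in the test tree; this commit does not include them. As a rough sketch of what such a fixture could look like (the fixture body, scope, and timeout are illustrative assumptions, not the repository's actual conftest), the notebook can be executed once with nbformat and nbconvert before the assertions run:

```python
# conftest.py -- illustrative sketch only, not part of this commit
import pytest
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor


@pytest.fixture(scope="module")
def nb(request):
    # 'notebook' is the module-level path defined in the test file
    path = getattr(request.module, "notebook")
    executed = nbformat.read(path, as_version=4)
    # run every cell; a generous timeout since training takes a while
    ep = ExecutePreprocessor(timeout=1800)
    ep.preprocess(executed, {"metadata": {"path": "."}})
    return executed
```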
@@ -199,6 +199,14 @@
        "language": ["Python", "BrainScript"],
        "type": ["Tutorial", "Recipe"]
    },
    {
        "category": ["Numeric"],
        "name": "Timeseries with LSTM",
        "url": "https://github.com/Microsoft/CNTK/blob/master/Tutorials/CNTK_206A_IOT_Example_with_LSTM.ipynb",
        "description": "Timeseries modeling with LSTMs using simulated data.",
        "language": ["Python"],
        "type": ["Tutorial"]
    },
    {
        "category": ["Numeric"],
        "name": "Reinforcement Learning",
@@ -0,0 +1,511 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# CNTK 206: Part A - Time series prediction with LSTM (Basics)\n",
"\n",
"This tutorial build on top of the CNTK_104 tutorial where the first introduced use of the Cognitive Toolkit for Time Series analysis. In that tutorial we used dense feedforward network for directional prediction (up and down) of movements in stock market data. In this tutorial demonstrates how to use CNTK to predict future values in a time series.\n",
|
||||
"\n",
|
||||
"**Goal**\n",
|
||||
"\n",
|
||||
"We use simulated data set of a continuous function (in our case a [sine wave](https://en.wikipedia.org/wiki/Sine)). From N previous values of this function, we will predict M future values of the function.\n",
|
||||
"\n",
|
||||
"<img src=\"https://upload.wikimedia.org/wikibooks/en/7/7d/Ch15-figure01.png\", width=300, height=300>\n",
|
||||
"\n",
|
||||
"In this tutorial we will use [LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory) to implement our model. LSTMs are well suited for this task because their ability to learn from experience. For details on how LSTMs work, see [this excellent post](http://colah.github.io/posts/2015-08-Understanding-LSTMs). \n",
|
||||
"\n",
|
||||
"In this tutorial we will have following sub-sections:\n",
|
||||
"- Simulated data generation\n",
|
||||
"- LSTM network modeling\n",
|
||||
"- Model training and evaluation\n",
|
||||
"\n",
|
||||
"This model works for lots real world data. In part A of this tutorial we use a simple sin(x) function and in part B of the tutorial (currently in development) we will use real data from IOT device and try to predict daily output of solar pannel. "
|
||||
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using CNTK we can easily express our model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import math\n",
"from matplotlib import pyplot as plt\n",
"import numpy as np\n",
"import os\n",
"import pandas as pd\n",
"import time\n",
"\n",
"import cntk as C\n",
"import cntk.axis\n",
"from cntk.blocks import Input\n",
"from cntk.layers import Dense, Dropout, Recurrence\n",
"\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Select the notebook runtime environment devices / settings\n",
"\n",
"Set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default we choose the best available device."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Select the right target device when this notebook is being tested:\n",
"if 'TEST_DEVICE' in os.environ:\n",
"    import cntk\n",
"    if os.environ['TEST_DEVICE'] == 'cpu':\n",
"        C.device.set_default_device(C.device.cpu())\n",
"    else:\n",
"        C.device.set_default_device(C.device.gpu(0))"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are two run modes:\n",
"- *Fast mode*: `isFast` is set to `True`. This is the default mode for the notebooks, which means we train for fewer iterations or train / test on limited data. This ensures functional correctness of the notebook, though the models produced are far from what a completed training would produce.\n",
"\n",
"- *Slow mode*: We recommend setting this flag to `False` once you are familiar with the notebook content and want to gain insight from running the notebook for a longer period with different parameters for training. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"isFast = True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data generation\n",
|
||||
"\n",
|
||||
"We need a few helper methods to generate the simulated sine wave data. Let `N` and `M` be a ordered set of past values and future (desired predicted values) of the sine wave, respectively. \n",
|
||||
"\n",
|
||||
"- **`generate_data()`**\n",
|
||||
"\n",
|
||||
"> Assuming N=5, M=5, generate_data() will generate the desired sin wave and returns numpy arrrays in the following shape:\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"# the input to the lstm\n",
|
||||
"X = [\n",
|
||||
" [[y0],[y1],[y2],[y3],[y4], ...],\n",
|
||||
" [[y1],[y2],[y3],[y4],[y5], ...],\n",
|
||||
" ...\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"# the desired output, M(=5) steps in the future\n",
|
||||
"Y = [\n",
|
||||
" [y4+5],\n",
|
||||
" [y5+5]\n",
|
||||
" ...\n",
|
||||
"]\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"- **`split_data()`**\n",
|
||||
"split_data() will split the data into training, validation and test sets.\n",
|
||||
"\n"
|
||||
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def split_data(data, val_size=0.1, test_size=0.1):\n",
"    \"\"\"\n",
"    splits np.array into training, validation and test\n",
"    \"\"\"\n",
"    pos_test = int(len(data) * (1 - test_size))\n",
"    pos_val = int(len(data[:pos_test]) * (1 - val_size))\n",
"\n",
"    train, val, test = data[:pos_val], data[pos_val:pos_test], data[pos_test:]\n",
"\n",
"    return {\"train\": train, \"val\": val, \"test\": test}\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def generate_data(fct, x, time_steps, time_shift):\n",
"    \"\"\"\n",
"    generate sequences to feed to rnn for fct(x)\n",
"    \"\"\"\n",
"    data = fct(x)\n",
"    if not isinstance(data, pd.DataFrame):\n",
"        data = pd.DataFrame(dict(a=data[0:len(data) - time_shift],\n",
"                                 b=data[time_shift:]))\n",
"    rnn_x = []\n",
"    for i in range(len(data) - time_steps):\n",
"        rnn_x.append(data['a'].iloc[i: i + time_steps].as_matrix())\n",
"    rnn_x = np.array(rnn_x)\n",
"    rnn_x = rnn_x.reshape(rnn_x.shape + (1,))\n",
"    rnn_y = data['b'].values\n",
"    rnn_y = rnn_y.reshape(rnn_y.shape + (1,))\n",
"\n",
"    return split_data(rnn_x), split_data(rnn_y)"
]
},
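{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the shapes described above (this cell and the `X_chk` / `Y_chk` names are purely illustrative), each input sample is a window of N one-element arrays and each target is the single value M steps ahead:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# illustrative check of the shapes produced by generate_data()\n",
"X_chk, Y_chk = generate_data(np.sin, np.linspace(0, 100, 10000, dtype=np.float32), 5, 5)\n",
"print(X_chk[\"train\"].shape)  # (samples, N, 1)\n",
"print(Y_chk[\"train\"].shape)  # (samples, 1)"
]
},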
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let us generate the data and visualize it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"N = 5 # input: N subsequent values\n",
"M = 5 # output: predict 1 value M steps ahead\n",
"X, Y = generate_data(np.sin, np.linspace(0, 100, 10000, dtype=np.float32), N, M)\n",
"\n",
"f, a = plt.subplots(3, 1, figsize=(12, 8))\n",
"for j, ds in enumerate([\"train\", \"val\", \"test\"]):\n",
"    a[j].plot(Y[ds], label=ds + ' raw');\n",
"[i.legend() for i in a];\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Network modeling\n",
"\n",
"We set up our network with 1 LSTM cell for each input. We have N inputs, and each input is a value in our continuous function. The N outputs from the LSTM are the input into a dense layer that produces a single output. \n",
"Between the LSTM and the dense layer we insert a dropout layer that randomly drops 20% of the values coming from the LSTM, to prevent overfitting the model to the training dataset. We want to use the dropout layer during training, but when using the model to make predictions we don't want to drop values.\n",
"![lstm](https://guschmueds.blob.core.windows.net/datasets/1.png)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def lstm_model(x):\n",
"    \"\"\"Create the model for time series prediction\"\"\"\n",
"    with C.layers.default_options(initial_state=0.1):\n",
"        m = C.layers.Recurrence(C.layers.LSTM(N))(x)\n",
"        m = C.ops.sequence.last(m)\n",
"        m = C.layers.Dropout(0.2)(m)\n",
"        m = C.layers.Dense(1)(m)\n",
"    return m"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training the network"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We define the next_batch() iterator that produces batches we can feed to the training function. \n",
"Note that because CNTK supports variable sequence length, we must feed the batches as a list of sequences. This is a convenience function to generate small batches of data, often referred to as a minibatch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def next_batch(x, y, ds):\n",
"    \"\"\"get the next batch to process\"\"\"\n",
"\n",
"    def as_batch(data, start, count):\n",
"        part = []\n",
"        for i in range(start, start + count):\n",
"            part.append(data[i])\n",
"        return np.array(part)\n",
"\n",
"    for i in range(0, len(x[ds])-BATCH_SIZE, BATCH_SIZE):\n",
"        yield as_batch(x[ds], i, BATCH_SIZE), as_batch(y[ds], i, BATCH_SIZE)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set up everything else we need for training the model: define user-specified training parameters, define inputs, outputs, the model and the optimizer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Training parameters\n",
"\n",
"TRAINING_STEPS = 10000\n",
"BATCH_SIZE = 100\n",
"EPOCHS = 20 if isFast else 100"
]
},
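{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration (this check and the `xb` / `yb` names are ours, not part of the training code), we can draw a single minibatch and confirm it holds BATCH_SIZE sequences of shape (N, 1) and BATCH_SIZE one-element targets; with about 8,100 training windows, each epoch then sees roughly 80 such minibatches:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# illustrative check: draw one minibatch and inspect its shapes\n",
"xb, yb = next(next_batch(X, Y, \"train\"))\n",
"print(xb.shape)  # (BATCH_SIZE, N, 1)\n",
"print(yb.shape)  # (BATCH_SIZE, 1)"
]
},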
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Key Insight**\n",
"\n",
"In CNTK the axis over which you run a recurrence is dynamic, and thus its dimensions are unknown at the time you define your variable. The input variable therefore only lists the shapes of the static axes. Since our inputs are a sequence of one-dimensional numbers, we specify the input as \n",
"\n",
"> `C.blocks.Input(1, dynamic_axes=x_axes, name=\"x\")`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# tell cntk that we use dynamic axes\n",
"x_axes = [C.Axis.default_batch_axis(), C.Axis.default_dynamic_axis()]\n",
"y_axes = [C.Axis.default_batch_axis()]\n",
"\n",
"# input sequences\n",
"x = C.blocks.Input(1, dynamic_axes=x_axes, name=\"x\")\n",
"\n",
"# expected output\n",
"y = C.blocks.Input(1, dynamic_axes=y_axes, name=\"y\")\n",
"\n",
"# create the model\n",
"z = lstm_model(x)\n",
"\n",
"# the learning rate\n",
"learning_rate = 0.001\n",
"lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch)\n",
"\n",
"# loss function\n",
"loss = C.ops.squared_error(z, y)\n",
"\n",
"# use msre to determine the error for now\n",
"error = C.ops.squared_error(z, y)\n",
"\n",
"# use the adam optimizer\n",
"momentum_time_constant = C.learner.momentum_as_time_constant_schedule(BATCH_SIZE / -math.log(0.9))\n",
"learner = C.learner.adam_sgd(z.parameters, lr=lr_schedule, momentum=momentum_time_constant, unit_gain=True)\n",
"trainer = C.Trainer(z, loss, error, [learner])"
]
},
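{
"cell_type": "markdown",
"metadata": {},
"source": [
"A note on the momentum time constant (our reading of the expression above): BATCH_SIZE / -log(0.9) = 100 / 0.10536 is roughly 949 samples, which corresponds to a per-sample decay of exp(-1/949); compounded over a 100-sample minibatch this gives exp(-100/949) = 0.9, i.e. the familiar momentum value of 0.9 per minibatch."
]
},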
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We are ready to train. 100 epochs should yield acceptable results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# train\n",
"loss_summary = []\n",
"start = time.time()\n",
"for epoch in range(0, EPOCHS):\n",
"    for x1, y1 in next_batch(X, Y, \"train\"):\n",
"        trainer.train_minibatch({x: x1, y: y1})\n",
"    if epoch % (EPOCHS / 10) == 0:\n",
"        training_loss = C.utils.get_train_loss(trainer)\n",
"        loss_summary.append(training_loss)\n",
"        print(\"epoch: {}, loss: {:.5f}\".format(epoch, training_loss))\n",
"\n",
"print(\"training took {0:.1f} sec\".format(time.time() - start))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# A look at the loss curve shows how well the model is converging\n",
"plt.plot(loss_summary, label='training loss');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Normally we would validate the training on the data that we set aside for validation, but since the input data is small we can run validation on all parts of the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# validate\n",
"def get_msre(X, Y, label):\n",
"    result = 0.0\n",
"    for x1, y1 in next_batch(X, Y, label):\n",
"        eval_error = trainer.test_minibatch({x: x1, y: y1})\n",
"        result += eval_error\n",
"    return result / len(X[label])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Print the train and validation errors\n",
"for label in [\"train\", \"val\"]:\n",
"    print(\"msre for {}: {:.6f}\".format(label, get_msre(X, Y, label)))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Print validate and test error\n",
"label = \"test\"\n",
"print(\"msre for {}: {:.6f}\".format(label, get_msre(X, Y, label)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since we used a simple sin(x) function, we should expect the errors to be about the same for the train, validation and test sets. For real datasets that will of course be different. We also plot the expected output (Y) and the prediction our model made, to show how well the simple LSTM approach worked."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# predict\n",
"f, a = plt.subplots(3, 1, figsize=(12, 8))\n",
"for j, ds in enumerate([\"train\", \"val\", \"test\"]):\n",
"    results = []\n",
"    for x1, y1 in next_batch(X, Y, ds):\n",
"        pred = z.eval({x: x1})\n",
"        results.extend(pred[:, 0])\n",
"    a[j].plot(Y[ds], label=ds + ' raw');\n",
"    a[j].plot(results, label=ds + ' predicted');\n",
"[i.legend() for i in a];"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"Not perfect, but close enough, considering the simplicity of the model.\n",
"\n",
"Here we used a simple sine wave, but you can experiment with other time series data yourself. generate_data() allows you to pass in a dataframe with 'a' and 'b' columns instead of a function.\n",
"\n",
"To improve results, we could train with more data points, let the model train for more epochs, or improve on the model itself.\n",
"\n",
"In the upcoming [part B of this tutorial](CNTK_206B_IOT_Example_with_LSTM.ipynb) we will use some real-world data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [conda env:cntk-py34]",
"language": "python",
"name": "conda-env-cntk-py34-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
@@ -26,6 +26,10 @@ Tutorials

#. CNTK 205: `Artistic Style Transfer`_

#. CNTK 206: Time Series with LSTMs

   * Part A: `Basic LSTMs using simulated data`_

For our Japanese users, you can find some of the `tutorials in Japanese`_.

.. _`Logistic Regression`: https://github.com/Microsoft/CNTK/tree/v2.0.beta8.0/Tutorials/CNTK_101_LogisticRegression.ipynb

@@ -41,5 +45,6 @@ For our Japanese users, you can find some of the `tutorials in Japanese`_.
.. _`Reinforcement learning basics`: https://github.com/Microsoft/CNTK/blob/v2.0.beta8.0/Tutorials/CNTK_203_Reinforcement_Learning_Basics.ipynb
.. _`Sequence to sequence basics`: https://github.com/Microsoft/CNTK/blob/v2.0.beta8.0/Tutorials/CNTK_204_Sequence_To_Sequence.ipynb
.. _`Artistic Style Transfer`: https://github.com/Microsoft/CNTK/blob/v2.0.beta8.0/Tutorials/CNTK_205_Artistic_Style_Transfer.ipynb
.. _`Basic LSTMs using simulated data`: https://github.com/Microsoft/CNTK/blob/v2.0.beta8.0/Tutorials/CNTK_206A_IOT_Example_with_LSTM.ipynb

.. _`tutorials in Japanese`: https://notebooks.azure.com/library/cntkbeta2_ja