Update Getting started section of the Python API docs
Parent: db0686110d
Commit: 1fa75dcfdb

@@ -32,15 +32,16 @@ Installing the Python module
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 #. Go to ``<cntkpath>/contrib/Python`` and run ``python setup.py install``
 #. Set up the environment variable ``CNTK_EXECUTABLE_PATH`` to point to the
-   CNTK executable
+   CNTK executable. Make sure the executable is also included
 #. Enjoy Python's ease of use with CNTK's speed::
 
-    >>> import cntk as cn
-    >>> cn.__version__
+    >>> import cntk as C
+    >>> C.__version__
     1.4
-    >>> with cn.Context('demo', clean_up=False) as ctx:
-    ...     a = cn.constant([[1,2], [3,4]])
-    ...     print(ctx.eval(a + [[10,20], [30, 40]]))
+    >>> with C.LocalExecutionContext('demo', clean_up=False) as ctx:
+    ...     a = C.constant([[1,2], [3,4]])
+    ...     i = C.input_numpy([[[10,20], [30, 40]]])
+    ...     print(ctx.eval(a + i))
     [[11.0, 22.0], [33.0, 44.0]]
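
For readability, the updated quick-start snippet assembled from the added lines above reads in full::

    >>> import cntk as C
    >>> C.__version__
    1.4
    >>> with C.LocalExecutionContext('demo', clean_up=False) as ctx:
    ...     a = C.constant([[1,2], [3,4]])
    ...     i = C.input_numpy([[[10,20], [30, 40]]])
    ...     print(ctx.eval(a + i))
    [[11.0, 22.0], [33.0, 44.0]]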
 
 In this case, we have set ``clean_up=False`` so that you can now peek into the

@@ -71,7 +72,6 @@ explained::
     import cntk as C
     import numpy as np
 
     def simple_network():
         # 500 samples, 250-dimensional data
         N = 500
         d = 250

@@ -82,16 +82,16 @@ explained::
         Y = np.hstack((Y, 1-Y))
 
         # set up the training data for CNTK
-        x = C.input_numpy(X, has_dynamic_axis=False)
-        y = C.input_numpy(Y, has_dynamic_axis=False)
+        x = C.input_numpy(X)
+        y = C.input_numpy(Y)
 
         # define our network parameters: a weight tensor and a bias
-        W = C.parameter((2, d))
-        b = C.parameter((2, 1))
+        W = C.parameter((d, 2))
+        b = C.parameter((1, 2))
 
         # create a dense 'layer' by multiplying the weight tensor and
         # the features and adding the bias
-        out = C.times(W, x) + b
+        out = C.times(x, W) + b
 
         # setup the criterion node using cross entropy with softmax
         ce = C.cross_entropy_with_softmax(y, out, name='loss')
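
The reordered shapes follow ordinary matrix algebra: with ``X`` holding one ``d``-dimensional
sample per row, ``times(x, W)`` yields an ``N x 2`` output that lines up with a ``(1, 2)`` bias
broadcast over the rows. A standalone NumPy sketch of the same shape arithmetic (illustrative
only; the variable names mirror the example above)::

    import numpy as np

    N, d = 500, 250                 # samples and feature dimension, as above
    X = np.random.randn(N, d)       # one sample per row
    W = np.random.randn(d, 2)       # maps d features to the 2 output classes
    b = np.random.randn(1, 2)       # one bias per class, broadcast over all rows

    out = X.dot(W) + b              # shape (N, 2), matching the two-column labels Y
    assert out.shape == (N, 2)
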
@@ -222,7 +222,7 @@ We can define this network as follows in the CNTK Python API::
     cell_dim = 128
 
     t = C.dynamic_axis(name='t')
-    # temporarily using cntk1 SpareInput because cntk2's input() will simply allow sparse as a parameter
+    # temporarily using cntk1 SparseInput because cntk2's input() will simply allow sparse as a parameter
     features = cntk1.SparseInput(vocab, dynamicAxis=t, name='features')
     labels = C.input(num_labels, name='labels')
 

@@ -251,6 +251,7 @@ We can define this network as follows in the CNTK Python API::
     ce = C.cross_entropy_with_softmax(labels, pred)
     ce.tag = "criterion"
 
 Let's go through some of the intricacies of the above network definition. First, we define
 some parameters of the data and the network. We have 5 possible classes for the sequences;
 we're working with a vocabulary of 2000 words; and our embedding vectors have a dimension of

@@ -282,7 +283,7 @@ the criterion node that adds a softmax and then implements the cross entropy loss. When
 we add the criterion node, however, we call :func:`cntk.ops.reconcile_dynamic_axis` which will ensure
 that the minibatch layout for the labels and the data with dynamic axes is compatible.
 
-For the full explanation of how ``lstm_layer()`` is defined, please see the full example in the
+For the full explanation of how ``lstm_layer()`` is defined, please see the full example (`seqcla.py <https://github.com/Microsoft/CNTK/blob/master/contrib/Python/cntk/examples/LSTM/seqcla.py>`_) in the
 Examples section.
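
As a rough illustration of the reconciliation step described above, the criterion wiring might
look like the sketch below. The argument order of ``reconcile_dynamic_axis`` is an assumption
here rather than something taken from this diff, so treat seqcla.py as the authoritative
reference::

    # Hedged sketch only: the argument order of reconcile_dynamic_axis is assumed,
    # not taken from this diff -- see seqcla.py for the real call.
    labels = C.reconcile_dynamic_axis(labels, pred)   # align the labels' minibatch layout
                                                      # with the dynamic axis carried by pred
    ce = C.cross_entropy_with_softmax(labels, pred)
    ce.tag = "criterion"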
 
 How to pass Python data as train/test data
|