Mirror of https://github.com/microsoft/caffe.git
[docs] draft tutorial subjects
This commit is contained in:
Parent
5d8c93c201
Commit
b256a7601a
Binary file not shown. After: Width | Height | Size: 103 KiB
Binary file not shown. After: Width | Height | Size: 70 KiB
Binary file not shown. After: Width | Height | Size: 56 KiB
Binary file not shown. After: Width | Height | Size: 54 KiB
Binary file not shown. After: Width | Height | Size: 42 KiB
|
@ -1,4 +1,37 @@
|
|||
---
|
||||
layout: default
|
||||
---
|
||||
# The Forward / Backward Passes
|
||||
# Forward and Backward
|
||||
|
||||
The forward and backward passes are the essential computations of a [Net](net_layer_blob.html).
|
||||
|
||||
<img src="fig/forward_backward.png" alt="Forward and Backward" width="480">
|
||||
|
||||
Let's consider a simple logistic regression classifier.
|
||||
|
||||
The **forward** pass computes the output given the input for inference.
|
||||
In forward Caffe composes the computation of each layer to compute the "function" represented by the model.
|
||||
This pass goes from bottom to top.
|
||||
|
||||
<img src="fig/forward.jpg" alt="Forward pass" width="320">
|
||||
|
||||
The data $x$ is passed through an inner product layer for $g(x)$ then through a softmax for $h(g(x))$ and softmax loss to give $f_W(x)$.
|
||||
|
||||
The **backward** pass computes the gradient given the loss for learning.
|
||||
In backward Caffe reverse-composes the gradient of each layer to compute the gradient of the whole model by automatic differentiation.
|
||||
This is back-propagation.
|
||||
This pass goes from top to bottom.
|
||||
|
||||
<img src="fig/backward.jpg" alt="Backward pass" width="320">
|
||||
|
||||
The backward pass begins with the loss and computes the gradient with respect to the output $\frac{\partial f_W}{\partial h}$. The gradient with respect to the rest of the model is computed layer-by-layer through the chain rule. Layers with parameters, like the `INNER_PRODUCT` layer, compute the gradient with respect to their parameters $\frac{\partial f_W}{\partial W_{\text{ip}}}$ during the backward step.
|
||||
|
||||
These computations follow immediately from defining the model: Caffe plans and carries out the forward and backward passes for you.
|
||||
|
||||
- The `Net::Forward()` and `Net::Backward()` methods carry out the respective passes while `Layer::Forward()` and `Layer::Backward()` compute each step.
|
||||
- Every layer type has `forward_{cpu,gpu}()` and `backward_{cpu,gpu}()` methods to compute its steps according to the mode of computation. A layer may only implement CPU or GPU mode due to constraints or convenience.
|
||||
|
||||
The [Solver](solver.html) optimizes a model by first calling forward to yield the output and loss, then calling backward to generate the gradient of the model, and then incorporating the gradient into a weight update that attempts to minimize the loss. Division of labor between the Solver, Net, and Layer keeps Caffe modular and open to development.
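From pycaffe (see [Interfaces](interfaces.html)) the two passes are single calls. Here is a minimal sketch, assuming a compiled pycaffe and placeholder file and layer names; the constructor arguments differ slightly between pycaffe versions:

```
import caffe

# the model definition and weights files are placeholders for your own
net = caffe.Net('net.prototxt', 'weights.caffemodel')

# forward: compute outputs (including the loss) from the current inputs
outputs = net.forward()

# backward: compute gradients of the loss w.r.t. data and parameters
net.backward()

# parameter gradients accumulate in the diff of each parameter blob,
# e.g. for a hypothetical inner product layer named "ip"
ip_weight_gradient = net.params['ip'][0].diff
```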
|
||||
|
||||
For the details of the forward and backward steps of Caffe's layer types, refer to the [layer catalogue](layers.html).
|
||||
|
||||
|
|
|
@ -0,0 +1,44 @@
|
|||
---
|
||||
layout: default
|
||||
---
|
||||
# Caffe Tutorial
|
||||
|
||||
Caffe is a deep learning framework and this tutorial explains its philosophy, architecture, and usage.
|
||||
This is a practical guide and framework introduction, so the full frontier, context, and history of deep learning cannot be covered here.
|
||||
While explanations will be given where possible, a background in machine learning and neural networks is helpful.
|
||||
|
||||
## Philosophy
|
||||
|
||||
In one sip, Caffe is brewed for
|
||||
|
||||
- Expression: models and optimizations are defined as plaintext schemas instead of code.
|
||||
- Speed: for research and industry alike speed is crucial for state-of-the-art models and massive data.
|
||||
- Modularity: new tasks and settings require flexibility and extension.
|
||||
- Openness: scientific and applied progress call for common code, reference models, and reproducibility.
|
||||
- Community: academic research, startup prototypes, and industrial applications all share strength by joint discussion and development in a BSD-2 project.
|
||||
|
||||
and these principles direct the project.
|
||||
|
||||
## Tour
|
||||
|
||||
- [Nets, Layers, and Blobs](net_layer_blob.html): the anatomy of a Caffe model.
|
||||
- [Forward / Backward](forward_backward.html): the essential computations of layered compositional models.
|
||||
- [Loss](loss.html): the task to be learned is defined by the loss.
|
||||
- [Solver Optimization](solver.html): the solver coordinates model optimization.
|
||||
- [Layer Catalogue](layers.html): the layer is the fundamental unit of modeling and computation -- Caffe's catalogue includes layers for state-of-the-art models.
|
||||
- [Interfaces](interfaces.html): command line, Python, and MATLAB Caffe.
|
||||
|
||||
## Deeper Learning
|
||||
|
||||
There are helpful references freely available online for deep learning that complement our hands-on tutorial.
|
||||
These cover introductory and advanced material, background and history, and the latest advances.
|
||||
|
||||
A broad introduction is given in the free online draft of [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/index.html) by Michael Nielsen. In particular the chapters on using neural nets and how backpropagation works are helpful if you are new to the subject.
|
||||
|
||||
These recent academic tutorials explain deep learning for researchers in machine learning and vision:
|
||||
|
||||
- [Deep Learning Tutorial](http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf) by Yann LeCun (NYU, Facebook) and Marc'Aurelio Ranzato (Facebook). ICML 2013 tutorial.
|
||||
- [Large-Scale Visual Recognition: Deep Learning Tutorial](https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxsc3ZydHV0b3JpYWxjdnByMTR8Z3g6Njg5MmZkZTM1MDhhZWNmZA) by Marc'Aurelio Ranzato (Facebook). CVPR 2014 tutorial.
|
||||
- [LISA Deep Learning Tutorial](http://deeplearning.net/tutorial/deeplearning.pdf) by the LISA Lab directed by Yoshua Bengio (U. Montréal).
|
||||
|
||||
For an exposition of neural networks in circuits and code, check out [Understanding Neural Networks from a Programmer's Perspective](http://karpathy.github.io/neuralnets/) by Andrej Karpathy (Stanford).
|
|
@ -0,0 +1,68 @@
|
|||
---
|
||||
layout: default
|
||||
---
|
||||
# Interfaces
|
||||
|
||||
Caffe has command line, Python, and MATLAB interfaces for day-to-day usage, interfacing with research code, and rapid prototyping. While Caffe is a C++ library at heart and it exposes a modular interface for development, not every occasion calls for custom compilation. The cmdcaffe, pycaffe, and matcaffe interfaces are here for you.
|
||||
|
||||
## Command Line
|
||||
|
||||
The command line interface -- cmdcaffe -- is the `caffe` tool for model training, scoring, and diagnostics. Run `caffe` without any arguments for help. This tool and others are found in caffe/build/tools. (The following example calls require completing the LeNet / MNIST example first.)
|
||||
|
||||
**Training**: `caffe train` learns models from scratch, resumes learning from saved snapshots, and fine-tunes models to new data and tasks. All training requires a solver configuration through the `-solver solver.prototxt` argument. Resuming requires the `-snapshot model_iter_1000.solverstate` argument to load the solver snapshot. Fine-tuning requires the `-weights model.caffemodel` argument for the model initialization.
|
||||
|
||||
# train LeNet
|
||||
caffe train -solver examples/mnist/lenet_solver.prototxt
|
||||
# train on GPU 2
|
||||
caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 2
|
||||
# resume training from the half-way point snapshot
|
||||
caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot examples/mnist/lenet_iter_5000.solverstate
|
||||
|
||||
For a full example of fine-tuning, see examples/finetuning_on_flickr_style, but the training call alone is
|
||||
|
||||
# fine-tune CaffeNet model weights for style recognition
|
||||
caffe train -solver examples/finetuning_on_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
|
||||
|
||||
**Testing**: `caffe test` scores models by running them in the test phase and reports the net output as its score. The net architecture must be properly defined to output an accuracy measure or a loss. The per-batch score is reported and then the grand average is reported last.
|
||||
|
||||
# score the learned LeNet model on the validation set as defined in the model architecture lenet_train_test.prototxt
|
||||
caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000 -gpu 0 -iterations 100
|
||||
|
||||
**Benchmarking**: `caffe time` benchmarks model execution layer-by-layer through timing and synchronization. This is useful to check system performance and measure relative execution times for models.
|
||||
|
||||
# (These example calls require you complete the LeNet / MNIST example first.)
|
||||
# time LeNet training on CPU for 10 iterations
|
||||
caffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10
|
||||
# time LeNet training on GPU for the default 50 iterations
|
||||
caffe time -model examples/mnist/lenet_train_test.prototxt -gpu 0
|
||||
|
||||
**Diagnostics**: `caffe device_query` reports GPU details for reference and checking device ordinals for running on a given device in multi-GPU machines.
|
||||
|
||||
# query the first device
|
||||
caffe device_query -gpu 0
|
||||
|
||||
## Python
|
||||
|
||||
The Python interface -- pycaffe -- is the `caffe` module and its scripts in caffe/python. `import caffe` to load models, do forward and backward, handle IO, visualize networks, and even instrument model solving. All model data, derivatives, and parameters are exposed for reading and writing.
|
||||
|
||||
- `caffe.Net` is the central interface for loading, configuring, and running models. `caffe.Classifier` and `caffe.Detector` provide convenience interfaces for common tasks.
|
||||
- `caffe.SGDSolver` exposes the solving interface.
|
||||
- `caffe.io` handles input / output with preprocessing and protocol buffers.
|
||||
- `caffe.draw` visualizes network architectures.
|
||||
- Caffe blobs are exposed as numpy ndarrays for ease-of-use and efficiency.
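A minimal sketch of these pieces together, assuming a compiled pycaffe and the LeNet definition from the MNIST example; the constructor arguments vary slightly across pycaffe versions:

```
import numpy as np
import caffe

# load the model definition and trained weights from the MNIST example
net = caffe.Net('examples/mnist/lenet_train_test.prototxt',
                'examples/mnist/lenet_iter_10000')

# blobs are numpy ndarrays: read or overwrite data and diffs in place
input_shape = net.blobs['data'].data.shape
net.blobs['data'].data[...] = np.random.rand(*input_shape)

# parameters are blobs too: [0] holds the weights, [1] the biases (if any)
ip1_weights = net.params['ip1'][0].data

# run the model on the current inputs
outputs = net.forward()
```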
|
||||
|
||||
Tutorial IPython notebooks are found in caffe/examples: do `ipython notebook caffe/examples` to try them. For developer reference, docstrings can be found throughout the code.
|
||||
|
||||
Compile pycaffe by `make pycaffe`. The module dir caffe/python/caffe should be installed in your PYTHONPATH for `import caffe`.
|
||||
|
||||
## MATLAB
|
||||
|
||||
The MATLAB interface -- matcaffe -- is the `caffe` mex and its helper m-files in caffe/matlab. Load models, do forward and backward, extract output and read-only model weights, and load the binaryproto format mean as a matrix.
|
||||
|
||||
A MATLAB demo is in caffe/matlab/caffe/matcaffe_demo.m
|
||||
|
||||
Note that MATLAB matrices and memory are in column-major layout counter to Caffe's row-major layout! Double-check your work accordingly.
|
||||
|
||||
Compile matcaffe by `make matcaffe`.
|
|
@ -1,4 +1,150 @@
|
|||
---
|
||||
layout: default
|
||||
---
|
||||
# Data: Ins and Outs
|
||||
# Layers
|
||||
|
||||
To create a Caffe model you need to define the model architecture in a protocol buffer definition file (prototxt).
|
||||
|
||||
Caffe layers and their parameters are defined in the protocol buffer definitions for the project in [caffe.proto](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto). The latest definitions are in the [dev caffe.proto](https://github.com/BVLC/caffe/blob/dev/src/caffe/proto/caffe.proto).
|
||||
|
||||
TODO complete list of layers linking to headings
|
||||
|
||||
### Vision Layers
|
||||
|
||||
#### Convolution
|
||||
|
||||
`CONVOLUTION`
|
||||
|
||||
#### Pooling
|
||||
|
||||
`POOLING`
|
||||
|
||||
#### Local Response Normalization
|
||||
|
||||
`LRN`
|
||||
|
||||
#### im2col
|
||||
|
||||
`IM2COL` is a helper for doing the image-to-column transformation that you most likely do not need to know about.
|
||||
|
||||
### Loss Layers
|
||||
|
||||
Loss drives learning by comparing an output to a target and assigning cost to minimize. The loss itself is computed by the forward pass and the gradient w.r.t. the loss is computed by the backward pass.
|
||||
|
||||
#### Softmax
|
||||
|
||||
`SOFTMAX_LOSS`
|
||||
|
||||
#### Sum-of-Squares / Euclidean
|
||||
|
||||
`EUCLIDEAN_LOSS`
|
||||
|
||||
#### Hinge / Margin
|
||||
|
||||
`HINGE_LOSS`
|
||||
|
||||
#### Sigmoid Cross-Entropy
|
||||
|
||||
`SIGMOID_CROSS_ENTROPY_LOSS`
|
||||
|
||||
#### Infogain
|
||||
|
||||
`INFOGAIN_LOSS`
|
||||
|
||||
#### Accuracy and Top-k
|
||||
|
||||
`ACCURACY` scores the output as the accuracy of output with respect to target -- it is not actually a loss and has no backward step.
|
||||
|
||||
### Activation / Neuron Layers
|
||||
|
||||
#### ReLU / Rectified-Linear and Leaky ReLU
|
||||
|
||||
`RELU`
|
||||
|
||||
#### Sigmoid
|
||||
|
||||
`SIGMOID`
|
||||
|
||||
#### TanH / Hyperbolic Tangent
|
||||
|
||||
`TANH`
|
||||
|
||||
#### Absolute Value
|
||||
|
||||
`ABSVAL`
|
||||
|
||||
#### Power
|
||||
|
||||
`POWER`
|
||||
|
||||
#### BNLL
|
||||
|
||||
`BNLL`
|
||||
|
||||
### Data Layers
|
||||
|
||||
#### Database
|
||||
|
||||
`DATA`
|
||||
|
||||
#### In-Memory
|
||||
|
||||
`MEMORY_DATA`
|
||||
|
||||
#### HDF5 Input
|
||||
|
||||
`HDF5_DATA`
|
||||
|
||||
#### HDF5 Output
|
||||
|
||||
`HDF5_OUTPUT`
|
||||
|
||||
#### Images
|
||||
|
||||
`IMAGE_DATA`
|
||||
|
||||
#### Windows
|
||||
|
||||
`WINDOW_DATA`
|
||||
|
||||
#### Dummy
|
||||
|
||||
`DUMMY_DATA` is for development and debugging. See `DummyDataParameter`.
|
||||
|
||||
### Common Layers
|
||||
|
||||
#### Inner Product
|
||||
|
||||
`INNER_PRODUCT`
|
||||
|
||||
#### Splitting
|
||||
|
||||
`SPLIT`
|
||||
|
||||
#### Flattening
|
||||
|
||||
`FLATTEN`
|
||||
|
||||
#### Concatenation
|
||||
|
||||
`CONCAT`
|
||||
|
||||
#### Slicing
|
||||
|
||||
`SLICE`
|
||||
|
||||
#### Elementwise Operations
|
||||
|
||||
`ELTWISE`
|
||||
|
||||
#### Argmax
|
||||
|
||||
`ARGMAX`
|
||||
|
||||
#### Softmax
|
||||
|
||||
`SOFTMAX`
|
||||
|
||||
#### Mean-Variance Normalization
|
||||
|
||||
`MVN`
|
||||
|
|
|
@ -2,3 +2,56 @@
|
|||
layout: default
|
||||
---
|
||||
# Loss
|
||||
|
||||
In Caffe, as in most of machine learning, learning is driven by a **loss** function (also known as an **error**, **cost**, or **objective** function).
|
||||
A loss function specifies the goal of learning by mapping parameter settings (i.e., the current network weights) to a scalar value specifying the "badness" of these parameter settings.
|
||||
Hence, the goal of learning is to find a setting of the weights that *minimizes* the loss function.
|
||||
|
||||
The loss in Caffe is computed by the Forward pass of the network.
|
||||
Each layer takes a set of input (`bottom`) blobs and produces a set of output (`top`) blobs.
|
||||
Some of these layers' outputs may be used in the loss function.
|
||||
A typical choice of loss function for one-versus-all classification tasks is the `SOFTMAX_LOSS` function, used in a network definition as follows, for example:
|
||||
|
||||
```
|
||||
layers {
|
||||
name: "loss"
|
||||
type: SOFTMAX_LOSS
|
||||
bottom: "pred"
|
||||
bottom: "label"
|
||||
top: "loss"
|
||||
}
|
||||
```
|
||||
|
||||
In a `SOFTMAX_LOSS` function, the `top` blob is a scalar (dimensions $1 \times 1 \times 1 \times 1$) which averages the loss (computed from predicted labels `pred` and actual labels `label`) over the entire mini-batch.
|
||||
|
||||
### Loss weights
|
||||
|
||||
For nets with multiple layers producing a loss (e.g., a network that both classifies the input using a `SOFTMAX_LOSS` layer and reconstructs it using an `EUCLIDEAN_LOSS` layer), *loss weights* can be used to specify their relative importance.
|
||||
|
||||
By convention, Caffe layer types with the suffix `_LOSS` contribute to the loss function, but other layers are assumed to be purely used for intermediate computations.
|
||||
However, any layer can be used as a loss by adding a field `loss_weight: <float>` to a layer definition for each `top` blob produced by the layer.
|
||||
Layers with the suffix `_LOSS` have an implicit `loss_weight: 1` for the first `top` blob (and `loss_weight: 0` for any additional `top`s); other layers have an implicit `loss_weight: 0` for all `top`s.
|
||||
So, the above `SOFTMAX_LOSS` layer could be equivalently written as:
|
||||
|
||||
```
|
||||
layers {
|
||||
name: "loss"
|
||||
type: SOFTMAX_LOSS
|
||||
bottom: "pred"
|
||||
bottom: "label"
|
||||
top: "loss"
|
||||
loss_weight: 1
|
||||
}
|
||||
```
|
||||
|
||||
However, *any* layer able to backpropagate may be given a non-zero `loss_weight`, allowing one to, for example, regularize the activations produced by some intermediate layer(s) of the network if desired.
|
||||
For non-singleton outputs with an associated non-zero loss, the loss is computed simply by summing over all entries of the blob.
|
||||
|
||||
The final loss in Caffe, then, is computed by summing the total weighted loss over the network, as in the following pseudo-code:
|
||||
|
||||
```
|
||||
loss := 0
|
||||
for layer in layers:
|
||||
for top, loss_weight in zip(layer.tops, layer.loss_weights):
|
||||
loss += loss_weight * sum(top)
|
||||
```
|
||||
|
|
|
@ -2,3 +2,136 @@
|
|||
layout: default
|
||||
---
|
||||
# Nets, Layers, and Blobs: anatomy of a Caffe model
|
||||
|
||||
Deep networks are compositional models that are naturally represented as a collection of inter-connected layers. Caffe defines a net layer-by-layer in its own model schema. The network defines the entire model bottom-to-top from input data to loss. As data and derivatives flow through the network in the [forward and backward passes](forward_backward.html) Caffe stores, communicates, and manipulates the information as *blobs*: the blob is the standard array and unified memory interface for the framework.
|
||||
|
||||
[Solving](solver.html) is configured separately to decouple modeling and optimization.
|
||||
|
||||
The layer comes first as the foundation of both model and computation. The net follows as the collection and connection of layers. The details of blob describe how information is stored and communicated in and across layers and nets.
|
||||
|
||||
## Layer computation and connections
|
||||
|
||||
The layer is the essence of a model and the fundamental unit of computation. Layers convolve filters, pool, take inner products, apply nonlinearities like rectified-linear and sigmoid and other elementwise transformations, normalize, load data, and compute losses like softmax and hinge. [See the layer catalogue](layers.html) for all operations. All the types needed for state-of-the-art deep learning tasks are there.
|
||||
|
||||
<img src="fig/layer.jpg" alt="A layer with bottom and top blob." width="256">
|
||||
|
||||
A layer takes input through *bottom* connections and makes output through *top* connections.
|
||||
|
||||
Each layer type defines three critical computations: *setup*, *forward*, and *backward*.
|
||||
|
||||
- Setup: initialize the layer and its connections once at model initialization.
|
||||
- Forward: given input from bottom compute the output and send to the top.
|
||||
- Backward: given the gradient w.r.t. the top output compute the gradient w.r.t. the input and send to the bottom. A layer with parameters computes the gradient w.r.t. its parameters and stores it internally.
|
||||
|
||||
Layers have two key responsibilities for the operation of the network as a whole: a *forward pass* that takes the inputs and produces the outputs, and a *backward pass* that takes the gradient with respect to the output, and computes the gradients with respect to the parameters and to the inputs, which are in turn back-propagated to earlier layers. These passes are simply the composition of each layer's forward and backward.
|
||||
|
||||
Developing custom layers requires minimal effort thanks to the compositionality of the network and the modularity of the code. Define the setup, forward, and backward for the layer and it is ready for inclusion in a net.
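As a schematic only (Caffe layers are written in C++; this is not the actual interface), the three responsibilities for an inner product layer might look like:

```
import numpy as np

class InnerProductSketch(object):
    # setup: allocate the parameters once
    def setup(self, num_input, num_output):
        self.W = 0.01 * np.random.randn(num_output, num_input)

    # forward: compute top from bottom
    def forward(self, bottom):
        self.bottom = bottom              # cache the input for backward
        return bottom.dot(self.W.T)       # top = bottom * W^T

    # backward: turn the top gradient into parameter and bottom gradients
    def backward(self, top_diff):
        self.W_diff = top_diff.T.dot(self.bottom)   # gradient w.r.t. W
        return top_diff.dot(self.W)                 # gradient w.r.t. bottom
```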
|
||||
|
||||
## Net definition and operation
|
||||
|
||||
The net jointly defines a function and its gradient by composition and auto-differentiation. The composition of every layer's output computes the function to do a given task, and the composition of every layer's backward computes the gradient from the loss to learn the task. Caffe models are end-to-end machine learning engines.
|
||||
|
||||
The net is a set of layers connected in a computation graph -- a DAG / directed acyclic graph to be exact. Caffe does all the bookkeeping for any DAG of layers to ensure correctness of the forward and backward passes. A typical net begins with a data layer that loads from disk and ends with a loss layer that computes the objective for a task such as classification or reconstruction.
|
||||
|
||||
The net is defined as a set of layers and their connections in a plaintext modeling language.
|
||||
A simple logistic regression classifier
|
||||
|
||||
<img src="fig/logreg.jpg" alt="Softmax Regression" width="256">
|
||||
|
||||
is defined by
|
||||
|
||||
name: "LogReg"
|
||||
layers {
|
||||
name: "mnist"
|
||||
type: DATA
|
||||
top: "data"
|
||||
top: "label"
|
||||
data_param {
|
||||
source: "input_leveldb"
|
||||
batch_size: 64
|
||||
}
|
||||
}
|
||||
layers {
|
||||
name: "ip"
|
||||
type: INNER_PRODUCT
|
||||
bottom: "data"
|
||||
top: "ip"
|
||||
inner_product_param {
|
||||
num_output: 2
|
||||
}
|
||||
}
|
||||
layers {
|
||||
name: "loss"
|
||||
type: SOFTMAX_LOSS
|
||||
bottom: "ip"
|
||||
bottom: "label"
|
||||
top: "loss"
|
||||
}
|
||||
|
||||
The Net explains its initialization as it goes:
|
||||
|
||||
I0902 22:52:17.931977 2079114000 net.cpp:39] Initializing net from parameters:
|
||||
name: "LogReg"
|
||||
[...model prototxt printout...]
|
||||
# construct the network layer-by-layer
|
||||
I0902 22:52:17.932152 2079114000 net.cpp:67] Creating Layer mnist
|
||||
I0902 22:52:17.932165 2079114000 net.cpp:356] mnist -> data
|
||||
I0902 22:52:17.932188 2079114000 net.cpp:356] mnist -> label
|
||||
I0902 22:52:17.932200 2079114000 net.cpp:96] Setting up mnist
|
||||
I0902 22:52:17.935807 2079114000 data_layer.cpp:135] Opening leveldb input_leveldb
|
||||
I0902 22:52:17.937155 2079114000 data_layer.cpp:195] output data size: 64,1,28,28
|
||||
I0902 22:52:17.938570 2079114000 net.cpp:103] Top shape: 64 1 28 28 (50176)
|
||||
I0902 22:52:17.938593 2079114000 net.cpp:103] Top shape: 64 1 1 1 (64)
|
||||
I0902 22:52:17.938611 2079114000 net.cpp:67] Creating Layer ip
|
||||
I0902 22:52:17.938617 2079114000 net.cpp:394] ip <- data
|
||||
I0902 22:52:17.939177 2079114000 net.cpp:356] ip -> ip
|
||||
I0902 22:52:17.939196 2079114000 net.cpp:96] Setting up ip
|
||||
I0902 22:52:17.940289 2079114000 net.cpp:103] Top shape: 64 2 1 1 (128)
|
||||
I0902 22:52:17.941270 2079114000 net.cpp:67] Creating Layer loss
|
||||
I0902 22:52:17.941305 2079114000 net.cpp:394] loss <- ip
|
||||
I0902 22:52:17.941314 2079114000 net.cpp:394] loss <- label
|
||||
I0902 22:52:17.941323 2079114000 net.cpp:356] loss -> loss
|
||||
# set up the loss and configure the backward pass
|
||||
I0902 22:52:17.941328 2079114000 net.cpp:96] Setting up loss
|
||||
I0902 22:52:17.941328 2079114000 net.cpp:103] Top shape: 1 1 1 1 (1)
|
||||
I0902 22:52:17.941329 2079114000 net.cpp:109] with loss weight 1
|
||||
I0902 22:52:17.941779 2079114000 net.cpp:170] loss needs backward computation.
|
||||
I0902 22:52:17.941787 2079114000 net.cpp:170] ip needs backward computation.
|
||||
I0902 22:52:17.941794 2079114000 net.cpp:172] mnist does not need backward computation.
|
||||
# determine outputs
|
||||
I0902 22:52:17.941800 2079114000 net.cpp:208] This network produces output loss
|
||||
# finish initialization and report memory usage
|
||||
I0902 22:52:17.941810 2079114000 net.cpp:467] Collecting Learning Rate and Weight Decay.
|
||||
I0902 22:52:17.941818 2079114000 net.cpp:219] Network initialization done.
|
||||
I0902 22:52:17.941824 2079114000 net.cpp:220] Memory required for data: 201476
|
||||
|
||||
Model initialization is handled by `Net::Init()`.
|
||||
|
||||
The network is run on CPU or GPU by setting a single switch. Layers come with corresponding CPU and GPU routines that produce identical results (with tests to prove it). The CPU / GPU switch is seamless and independent of the model definition. For research and deployment alike it is best to divide model and implementation.
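The same switch appears as the `solver_mode` field of the solver definition, the `-gpu` flag on the command line, and a single call from Python. A sketch of the latter, noting that older pycaffe builds expose these setters on the `Net` object rather than at module level:

```
import caffe

caffe.set_mode_gpu()   # or caffe.set_mode_cpu()
caffe.set_device(0)    # pick a GPU by ordinal on multi-GPU machines
```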
|
||||
|
||||
## Blob storage and communication
|
||||
|
||||
Caffe stores and communicates data in 4-dimensional arrays called blobs. Blobs provide a unified memory interface, holding data e.g. batches of images, model parameters, and derivatives for optimization.
|
||||
|
||||
Blobs conceal the computational and mental overhead of mixed CPU/GPU operation by synchronizing from the CPU host to the GPU device as needed. In practice, one loads data from the disk to a blob in CPU code, calls a device kernel to do GPU computation, and ferries the blob off to the next layer, ignoring low-level details while maintaining a high level of performance.
|
||||
|
||||
Memory on the host and device is allocated on demand (lazily) for efficient memory usage.
|
||||
|
||||
The conventional blob dimensions for data are number N x channel K x height H x width W. Blob memory is row-major in layout so the last / rightmost dimension changes fastest.
|
||||
|
||||
- Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images N = 256.
|
||||
- Channel / K is the feature dimension e.g. for RGB images K = 3.
|
||||
|
||||
Caffe operations are general with respect to the channel dimension / K. Grayscale and hyperspectral imagery are fine. Caffe can likewise model and process arbitrary vectors in blobs with singleton dimensions. That is, the shape of a blob holding 1000 vectors of 16 feature dimensions is 1000 x 16 x 1 x 1.
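Concretely, the value at index (n, k, h, w) is stored at flat offset ((n * K + k) * H + h) * W + w. A quick numpy check of the row-major layout (standalone, no Caffe required):

```
import numpy as np

N, K, H, W = 2, 3, 4, 5
blob = np.arange(N * K * H * W).reshape(N, K, H, W)  # row-major, like a blob

n, k, h, w = 1, 2, 3, 4
offset = ((n * K + k) * H + h) * W + w
assert blob[n, k, h, w] == blob.flat[offset]
```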
|
||||
|
||||
Parameter blob dimensions vary according to the type and configuration of the layer. For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 inputs the blob is 96 x 3 x 11 x 11. For an inner product / fully-connected layer with 1000 output channels and 1024 input channels the parameter blob is 1 x 1 x 1000 x 1024.
|
||||
|
||||
For custom data it may be necessary to hack your own input preparation tool or data layer. However once your data is in your job is done. The modularity of layers accomplishes the rest of the work for you.
|
||||
|
||||
### Model format
|
||||
|
||||
The models are defined in plaintext protocol buffer schema (prototxt) while the learned models are serialized as binary protocol buffer (binaryproto) .caffemodel files.
|
||||
|
||||
The model format is defined by the protobuf schema in [caffe.proto](https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto).
|
||||
|
||||
Caffe speaks [Google Protocol Buffer](https://code.google.com/p/protobuf/) for the following strengths: minimal-size binary strings when serialized, efficient serialization, a human-readable text format compatible with the binary version, and efficient interface implementations in multiple languages, most notably C++ and Python. This all contributes to the flexibility and extensibility of modeling in Caffe.
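As a small illustration of the text / binary compatibility, a sketch assuming the generated `caffe_pb2` module (produced from caffe.proto by the build) is on your Python path:

```
from google.protobuf import text_format
from caffe.proto import caffe_pb2

# parse a human-readable prototxt into the same message type used for binary models
net_param = caffe_pb2.NetParameter()
with open('examples/mnist/lenet_train_test.prototxt') as f:
    text_format.Merge(f.read(), net_param)

# the same message round-trips through the compact binary serialization
binary = net_param.SerializeToString()
assert caffe_pb2.NetParameter.FromString(binary) == net_param
```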
|
||||
|
|
|
@ -1,4 +1,179 @@
|
|||
---
|
||||
layout: default
|
||||
---
|
||||
# Solving
|
||||
# Solver Optimization
|
||||
|
||||
The solver orchestrates model optimization by coordinating the network's forward inference and backward gradients to form parameter updates that attempt to improve the loss.
|
||||
The responsibilities of learning are divided between the solver for overseeing the optimization and generating parameter updates and the network for yielding loss and gradients.
|
||||
|
||||
The Caffe solvers are Stochastic Gradient Descent (SGD), Adaptive Gradient (ADAGRAD), and Nesterov's Accelerated Gradient (NAG).
|
||||
|
||||
The solver
|
||||
|
||||
1. scaffolds the optimization bookkeeping and creates the training network for learning and test network(s) for evaluation.
|
||||
2. iteratively optimizes by calling forward / backward and updating parameters
|
||||
3. (periodically) evaluates the test networks
|
||||
4. snapshots the model and solver state throughout the optimization
|
||||
|
||||
where each iteration
|
||||
|
||||
1. calls network forward to make the output and loss
|
||||
2. calls network backward to make the gradients
|
||||
3. incorporates the gradients into parameter updates according to the solver method
|
||||
4. updates the solver state according to learning rate, history, and method
|
||||
|
||||
to take the weights all the way from initialization to learned model.
|
||||
|
||||
Like Caffe models, Caffe solvers run in CPU / GPU modes.
|
||||
|
||||
## Methods
|
||||
|
||||
The solver methods address the general optimization problem of loss minimization.
|
||||
The optimization objective is the average loss over instances
|
||||
|
||||
$L(W) = \frac{1}{N} \sum_i f_W^{(i)}\left(X^{(i)}\right) + \lambda r(W)$
|
||||
|
||||
where $f_W^{(i)}$ is the loss on instance $i$ with data $X^{(i)}$ in a mini-batch of size $N$ and $r(W)$ is a regularization term with weight $\lambda$.
|
||||
|
||||
The model computes $f_W$ in the forward pass and the gradient $\nabla f_W$ in the backward pass.
|
||||
|
||||
The gradient of the loss $\nabla L(W)$ is formed by the solver from the model gradient $\nabla f_W$, the regularization gradient $\nabla r(W)$, and other particulars of each method.
|
||||
The method then computes the parameter update $\Delta W$ to update the weights and iterate.
|
||||
|
||||
### SGD
|
||||
|
||||
Stochastic gradient descent (SGD)
|
||||
|
||||
TODO Bottou pointer
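Until the reference is filled in, a sketch of the standard momentum form that SGD solvers of this kind follow, with $\alpha$ the learning rate (`base_lr` under the `lr_policy` schedule) and $\mu$ the `momentum`:

$V_{t+1} = \mu V_t - \alpha \nabla L(W_t)$

$W_{t+1} = W_t + V_{t+1}$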
|
||||
|
||||
### ADAGRAD
|
||||
|
||||
The adaptive gradient (ADAGRAD)
|
||||
|
||||
TODO cite Duchi
|
||||
|
||||
### NAG
|
||||
|
||||
Nesterov's accelerated gradient (NAG)
|
||||
|
||||
TODO cite ???
|
||||
|
||||
## Scaffolding
|
||||
|
||||
The solver scaffolding prepares the optimization method and initializes the model to be learned in `Solver::Presolve()`.
|
||||
|
||||
> caffe train -solver examples/mnist/lenet_solver.prototxt
|
||||
I0902 13:35:56.474978 16020 caffe.cpp:90] Starting Optimization
|
||||
I0902 13:35:56.475190 16020 solver.cpp:32] Initializing solver from parameters:
|
||||
test_iter: 100
|
||||
test_interval: 500
|
||||
base_lr: 0.01
|
||||
display: 100
|
||||
max_iter: 10000
|
||||
lr_policy: "inv"
|
||||
gamma: 0.0001
|
||||
power: 0.75
|
||||
momentum: 0.9
|
||||
weight_decay: 0.0005
|
||||
snapshot: 5000
|
||||
snapshot_prefix: "examples/mnist/lenet"
|
||||
solver_mode: GPU
|
||||
net: "examples/mnist/lenet_train_test.prototxt"
|
||||
|
||||
Net initialization
|
||||
|
||||
I0902 13:35:56.655681 16020 solver.cpp:72] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
|
||||
[...]
|
||||
I0902 13:35:56.656740 16020 net.cpp:56] Memory required for data: 0
|
||||
I0902 13:35:56.656791 16020 net.cpp:67] Creating Layer mnist
|
||||
I0902 13:35:56.656811 16020 net.cpp:356] mnist -> data
|
||||
I0902 13:35:56.656846 16020 net.cpp:356] mnist -> label
|
||||
I0902 13:35:56.656874 16020 net.cpp:96] Setting up mnist
|
||||
I0902 13:35:56.694052 16020 data_layer.cpp:135] Opening lmdb examples/mnist/mnist_train_lmdb
|
||||
I0902 13:35:56.701062 16020 data_layer.cpp:195] output data size: 64,1,28,28
|
||||
I0902 13:35:56.701146 16020 data_layer.cpp:236] Initializing prefetch
|
||||
I0902 13:35:56.701196 16020 data_layer.cpp:238] Prefetch initialized.
|
||||
I0902 13:35:56.701212 16020 net.cpp:103] Top shape: 64 1 28 28 (50176)
|
||||
I0902 13:35:56.701230 16020 net.cpp:103] Top shape: 64 1 1 1 (64)
|
||||
[...]
|
||||
I0902 13:35:56.703737 16020 net.cpp:67] Creating Layer ip1
|
||||
I0902 13:35:56.703753 16020 net.cpp:394] ip1 <- pool2
|
||||
I0902 13:35:56.703778 16020 net.cpp:356] ip1 -> ip1
|
||||
I0902 13:35:56.703797 16020 net.cpp:96] Setting up ip1
|
||||
I0902 13:35:56.728127 16020 net.cpp:103] Top shape: 64 500 1 1 (32000)
|
||||
I0902 13:35:56.728142 16020 net.cpp:113] Memory required for data: 5039360
|
||||
I0902 13:35:56.728175 16020 net.cpp:67] Creating Layer relu1
|
||||
I0902 13:35:56.728194 16020 net.cpp:394] relu1 <- ip1
|
||||
I0902 13:35:56.728219 16020 net.cpp:345] relu1 -> ip1 (in-place)
|
||||
I0902 13:35:56.728240 16020 net.cpp:96] Setting up relu1
|
||||
I0902 13:35:56.728256 16020 net.cpp:103] Top shape: 64 500 1 1 (32000)
|
||||
I0902 13:35:56.728270 16020 net.cpp:113] Memory required for data: 5167360
|
||||
I0902 13:35:56.728287 16020 net.cpp:67] Creating Layer ip2
|
||||
I0902 13:35:56.728304 16020 net.cpp:394] ip2 <- ip1
|
||||
I0902 13:35:56.728333 16020 net.cpp:356] ip2 -> ip2
|
||||
I0902 13:35:56.728356 16020 net.cpp:96] Setting up ip2
|
||||
I0902 13:35:56.728690 16020 net.cpp:103] Top shape: 64 10 1 1 (640)
|
||||
I0902 13:35:56.728705 16020 net.cpp:113] Memory required for data: 5169920
|
||||
I0902 13:35:56.728734 16020 net.cpp:67] Creating Layer loss
|
||||
I0902 13:35:56.728747 16020 net.cpp:394] loss <- ip2
|
||||
I0902 13:35:56.728767 16020 net.cpp:394] loss <- label
|
||||
I0902 13:35:56.728786 16020 net.cpp:356] loss -> loss
|
||||
I0902 13:35:56.728811 16020 net.cpp:96] Setting up loss
|
||||
I0902 13:35:56.728837 16020 net.cpp:103] Top shape: 1 1 1 1 (1)
|
||||
I0902 13:35:56.728849 16020 net.cpp:109] with loss weight 1
|
||||
I0902 13:35:56.728878 16020 net.cpp:113] Memory required for data: 5169924
|
||||
|
||||
Loss
|
||||
|
||||
I0902 13:35:56.728893 16020 net.cpp:170] loss needs backward computation.
|
||||
I0902 13:35:56.728909 16020 net.cpp:170] ip2 needs backward computation.
|
||||
I0902 13:35:56.728924 16020 net.cpp:170] relu1 needs backward computation.
|
||||
I0902 13:35:56.728938 16020 net.cpp:170] ip1 needs backward computation.
|
||||
I0902 13:35:56.728953 16020 net.cpp:170] pool2 needs backward computation.
|
||||
I0902 13:35:56.728970 16020 net.cpp:170] conv2 needs backward computation.
|
||||
I0902 13:35:56.728984 16020 net.cpp:170] pool1 needs backward computation.
|
||||
I0902 13:35:56.728998 16020 net.cpp:170] conv1 needs backward computation.
|
||||
I0902 13:35:56.729014 16020 net.cpp:172] mnist does not need backward computation.
|
||||
I0902 13:35:56.729027 16020 net.cpp:208] This network produces output loss
|
||||
I0902 13:35:56.729053 16020 net.cpp:467] Collecting Learning Rate and Weight Decay.
|
||||
I0902 13:35:56.729071 16020 net.cpp:219] Network initialization done.
|
||||
I0902 13:35:56.729085 16020 net.cpp:220] Memory required for data: 5169924
|
||||
I0902 13:35:56.729277 16020 solver.cpp:156] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
|
||||
|
||||
Completion
|
||||
|
||||
I0902 13:35:56.806970 16020 solver.cpp:46] Solver scaffolding done.
|
||||
I0902 13:35:56.806984 16020 solver.cpp:165] Solving LeNet
|
||||
|
||||
|
||||
## Updating Parameters
|
||||
|
||||
The actual weight update is made by the solver then applied to the net parameters in `Solver::ComputeUpdateValue()`.
|
||||
|
||||
TODO
|
||||
|
||||
## Snapshotting and Resuming
|
||||
|
||||
The solver snapshots the weights and its own state during training in `Solver::Snapshot()` and `Solver::SnapshotSolverState()`.
|
||||
The weight snapshots export the learned model while the solver snapshots allow training to be resumed from a given point.
|
||||
Training is resumed by `Solver::Restore()` and `Solver::RestoreSolverState()`.
|
||||
|
||||
Weights are saved without extension while solver states are saved with `.solverstate` extension.
|
||||
Both files will have an `_iter_N` suffix for the snapshot iteration number.
|
||||
|
||||
Snapshotting is configured by:
|
||||
|
||||
# The snapshot interval in iterations.
|
||||
snapshot: 5000
|
||||
# File path prefix for snapshotting model weights and solver state.
|
||||
# Note: this is relative to the invocation of the `caffe` utility, not the
|
||||
# solver definition file.
|
||||
snapshot_prefix: "/path/to/model"
|
||||
# Snapshot the diff along with the weights. This can help debugging training
|
||||
# but takes more storage.
|
||||
snapshot_diff: false
|
||||
# A final snapshot is saved at the end of training unless
|
||||
# this flag is set to false. The default is true.
|
||||
snapshot_after_train: true
|
||||
|
||||
in the solver definition prototxt.
|
||||
|
|