back-merging [docs] changes and web demo [example] addition; updating net_surgery example to new format

Conflicts:
	docs/getting_pretrained_models.md
	docs/index.md
Sergey Karayev 2014-07-12 09:25:23 -07:00
Parents: e07a2c0eea dd292da248
Commit: 9c61462cac
28 changed files with 848 additions and 420 deletions

1
.gitignore Vendored
View File

@ -55,6 +55,7 @@ examples/*
# Generated documentation
docs/_site
docs/gathered
_site
# Sublime Text settings

View File

@ -4,6 +4,7 @@
# This script downloads the imagenet example auxiliary files including:
# - the ilsvrc12 image mean, binaryproto
# - synset ids and words
# - Python pickle-format data of ImageNet graph structure and relative infogain
# - the training splits with labels
DIR="$( cd "$(dirname "$0")" ; pwd -P )"

View File

@ -1,3 +1,5 @@
To generate stuff you can paste in an .md page from an IPython notebook, run
# Caffe Documentation
ipython nbconvert --to markdown <notebook_file>
To generate the documentation, run `$CAFFE_ROOT/scripts/build_docs.sh`.
To push your changes to the documentation to the gh-pages branch of your or the BVLC repo, run `$CAFFE_ROOT/scripts/deploy_docs.sh <repo_name>`.

View File

@ -7,10 +7,10 @@
Caffe {% if page contains 'title' %}| {{ page.title }}{% endif %}
</title>
<link rel="stylesheet" href="stylesheets/reset.css">
<link rel="stylesheet" href="stylesheets/styles.css">
<link rel="stylesheet" href="stylesheets/pygment_trac.css">
<script src="javascripts/scale.fix.js"></script>
<link rel="stylesheet" href="/stylesheets/reset.css">
<link rel="stylesheet" href="/stylesheets/styles.css">
<link rel="stylesheet" href="/stylesheets/pygment_trac.css">
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
<!--[if lt IE 9]>
<script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
@ -28,28 +28,21 @@
</script>
<div class="wrapper">
<header>
<h1 class="header"><a href="index.html">Caffe</a></h1>
<!-- <p class="header">Convolutional Architecture for Fast Feature Embedding</p> -->
<h1 class="header"><a href="/">Caffe</a></h1>
<p class="header">
Deep learning framework developed by <a class="header name" href="http://daggerfs.com/">Yangqing Jia</a> / <a class="header name" href="http://bvlc.eecs.berkeley.edu/">BVLC</a>
</p>
<ul>
<!--<li class="download"><a class="buttons" href="https://github.com/BVLC/caffe/zipball/master">Download ZIP</a></li>
<li class="download"><a class="buttons" href="https://github.com/BVLC/caffe/tarball/master">Download TAR</a></li>-->
<li><a class="buttons github" href="https://github.com/BVLC/caffe">View On GitHub</a></li>
<li>
<a class="buttons github" href="https://github.com/BVLC/caffe">View On GitHub</a>
</li>
</ul>
<p class="header">Maintained by<br><a class="header name" href="http://bvlc.eecs.berkeley.edu/">BVLC</a></p>
<p class="header">Created by<br><a class="header name" href="http://daggerfs.com/">Yangqing Jia</a></p>
</header>
<section>
{{ content }}
</section>
<!-- <footer>
<p><small>Hosted on <a href="http://pages.github.com">GitHub Pages</a>.</small></p>
</footer>
-->
</div>
<!--[if !IE]><script>fixScale(document);</script><![endif]-->
</body>
</html>

View File

@ -9,9 +9,14 @@ The [BVLC](http://bvlc.eecs.berkeley.edu/) maintainers welcome all contributions
### Documentation
Tutorials and general documentation -- including this website -- are written in Markdown format in the `docs/` folder.
While the format is quite easy to read directly, you may prefer to view the whole thing as a website.
To do so, simply run `jekyll serve -s docs` and view the documentation website at `http://0.0.0.0:4000` (for [jekyll](http://jekyllrb.com/), you must have ruby and do `gem install jekyll`).
This website, written with [Jekyll](http://jekyllrb.com/), functions as the official Caffe documentation -- simply run `scripts/build_docs.sh` and view the website at `http://0.0.0.0:4000`.
We prefer tutorials and examples to be documented close to where they live, in `readme.md` files.
The `build_docs.sh` script gathers all `examples/**/readme.md` and `examples/*.ipynb` files, and makes a table of contents.
To be included in the docs, the readme files must be annotated with [YAML front-matter](http://jekyllrb.com/docs/frontmatter/), including the flag `include_in_docs: true`.
Similarly for IPython notebooks: simply include `"include_in_docs": true` in the `"metadata"` JSON field.
Other docs, such as installation guides, are written in the `docs` directory and manually linked to from the `index.md` page.
We strive to provide lots of usage examples, and to document all code in docstrings.
We absolutely appreciate any contribution to this effort!
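
For example, here is a minimal sketch (the notebook path and title are hypothetical) of tagging a notebook so that the docs build picks it up; the metadata keys mirror the ones used by the example notebooks in this commit:

import json

# Hypothetical notebook; any examples/*.ipynb can be tagged the same way.
path = 'examples/my_example.ipynb'
with open(path) as f:
    nb = json.load(f)

# scripts/copy_notebook.py turns these metadata fields into YAML front-matter
# when the docs are gathered.
nb['metadata'].update({
    'name': 'My example',
    'description': 'One-line description shown in the docs index.',
    'include_in_docs': True,
})

with open(path, 'w') as f:
    json.dump(nb, f, indent=1)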

View File

@ -8,7 +8,8 @@ layout: default
Note that unlike Caffe itself, these models are licensed for **academic research / non-commercial use only**.
If you have any questions, please get in touch with us.
This page will be updated as more models become available.
*UPDATE* July 2014: we are actively working on a service for hosting user-uploaded model definition and trained weight files.
Soon, the community will be able to easily contribute different architectures!
### ImageNet
@ -28,4 +29,6 @@ This page will be updated as more models become available.
**R-CNN (ILSVRC13)**: The pure Caffe instantiation of the [R-CNN](https://github.com/rbgirshick/rcnn) model for ILSVRC13 detection. Download the model (230.8MB) by running `examples/imagenet/get_caffe_rcnn_imagenet_model.sh` from the Caffe root directory. This model was made by transplanting the R-CNN SVM classifiers into a `fc-rcnn` classification layer, provided here as an off-the-shelf Caffe detector. Try the [detection example](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/detection.ipynb) to see it in action. For the full details, refer to the R-CNN site. *N.B. For research purposes, make use of the official R-CNN package and not this example.*
### Auxiliary Data
Additionally, you will probably eventually need some auxiliary data (mean image, synset list, etc.): run `data/ilsvrc12/get_ilsvrc_aux.sh` from the root directory to obtain it.

View File

@ -7,14 +7,16 @@ Caffe is a deep learning framework developed with cleanliness, readability, and
It was created by [Yangqing Jia](http://daggerfs.com), and is in active development by the Berkeley Vision and Learning Center ([BVLC](http://bvlc.eecs.berkeley.edu)) and by community contributors.
Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).
## Why
Check out our web image classification [demo](http://demo.caffe.berkeleyvision.org)!
## Why use Caffe?
**Clean architecture** enables rapid deployment.
Networks are specified in simple config files, with no hard-coded parameters in the code.
Switching between CPU and GPU code is as simple as setting a flag -- so models can be trained on a GPU machine, and then used on commodity clusters.
Switching between CPU and GPU is as simple as setting a flag -- so models can be trained on a GPU machine, and then used on commodity clusters.
**Readable & modifiable implementation** fosters active development.
In Caffe's first six months, it has been forked by over 300 developers on Github, and many have contributed significant changes.
In Caffe's first six months, it has been forked by over 300 developers on Github, and many have pushed significant changes.
**Speed** makes Caffe perfect for industry use.
Caffe can process over **40M images per day** with a single NVIDIA K40 or Titan GPU\*.
@ -29,29 +31,34 @@ There is an active discussion and support community on [Github](https://github.c
Consult performance [details](/performance_hardware.html).
</p>
## How
## Documentation
* [Introductory slides](http://dl.caffe.berkeleyvision.org/caffe-presentation.pdf): slides about the Caffe architecture, *updated 03/14*.
* [ACM MM paper](http://ucb-icsi-vision-group.github.io/caffe-paper/caffe.pdf): a 4-page report for the ACM Multimedia Open Source competition.
* [Installation instructions](/installation.html): tested on Ubuntu, Red Hat, OS X.
* [Pre-trained models](/getting_pretrained_models.html): BVLC provides ready-to-use models for non-commercial use.
* [Development](/development.html): Guidelines for development and contributing to Caffe.
- [Introductory slides](http://dl.caffe.berkeleyvision.org/caffe-presentation.pdf)<br />
Slides about the Caffe architecture, *updated 03/14*.
- [ACM MM paper](http://ucb-icsi-vision-group.github.io/caffe-paper/caffe.pdf)<br />
A 4-page report for the ACM Multimedia Open Source competition.
- [Installation instructions](/installation.html)<br />
Tested on Ubuntu, Red Hat, OS X.
* [Pre-trained models](/getting_pretrained_models.html)<br />
BVLC provides ready-to-use models for non-commercial use.
* [Development](/development.html)<br />
Guidelines for development and contributing to Caffe.
### Tutorials and Examples
### Examples
* [Image Classification \[notebook\]][imagenet_classification]: classify images with the pretrained ImageNet model by the Python interface.
* [Detection \[notebook\]][detection]: run a pretrained model as a detector in Python.
* [Visualizing Features and Filters \[notebook\]][visualizing_filters]: extracting features and visualizing trained filters with an example image, viewed layer-by-layer.
* [Editing Model Parameters \[notebook\]][net_surgery]: how to do net surgery and manually change model parameters.
* [LeNet / MNIST Demo](/mnist.html): end-to-end training and testing of LeNet on MNIST.
* [CIFAR-10 Demo](/cifar10.html): training and testing on the CIFAR-10 data.
* [Training ImageNet](/imagenet_training.html): recipe for end-to-end training of an ImageNet classifier.
* [Feature extraction with C++](/feature_extraction.html): feature extraction using pre-trained model.
{% for page in site.pages %}
{% if page.category == 'example' %}
- <div><a href="{{page.url}}">{{page.title}}</a><br />{{page.description}}</div>
{% endif %}
{% endfor %}
[imagenet_classification]: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/imagenet_classification.ipynb
[detection]: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/detection.ipynb
[visualizing_filters]: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/filter_visualization.ipynb
[net_surgery]: http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/net_surgery.ipynb
### Notebook examples
{% for page in site.pages %}
{% if page.category == 'notebook' %}
- <div><a href="http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/{{page.original_path}}">{{page.title}}</a><br />{{page.description}}</div>
{% endif %}
{% endfor %}
## Citing Caffe

View File

@ -1,20 +0,0 @@
fixScale = function(doc) {
var addEvent = 'addEventListener',
type = 'gesturestart',
qsa = 'querySelectorAll',
scales = [1, 1],
meta = qsa in doc ? doc[qsa]('meta[name=viewport]') : [];
function fix() {
meta.content = 'width=device-width,minimum-scale=' + scales[0] + ',maximum-scale=' + scales[1];
doc.removeEventListener(type, fix, true);
}
if ((meta = meta[meta.length - 1]) && addEvent in doc) {
fix();
scales = [.25, 1.6];
doc[addEvent](type, fix, true);
}
};

View File

@ -1,91 +0,0 @@
---
layout: default
title: Caffe
---
Training MNIST with Caffe
================
We will assume that you have caffe successfully compiled. If not, please refer to the [Installation page](installation.html). In this tutorial, we will assume that your caffe installation is located at `CAFFE_ROOT`.
Prepare Datasets
----------------
You will first need to download and convert the data format from the MNIST website. To do this, simply run the following commands:
cd $CAFFE_ROOT/data/mnist
./get_mnist.sh
cd $CAFFE_ROOT/examples/mnist
./create_mnist.sh
If it complains that `wget` or `gunzip` are not installed, you need to install them respectively. After running the script there should be two datasets, `mnist-train-leveldb`, and `mnist-test-leveldb`.
LeNet: the MNIST Classification Model
-------------------------------------
Before we actually run the training program, let's explain what will happen. We will use the [LeNet](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) network, which is known to work well on digit classification tasks. We will use a slightly different version from the original LeNet implementation, replacing the sigmoid activations with Rectified Linear Unit (ReLU) activations for the neurons.
The design of LeNet contains the essence of CNNs that are still used in larger models such as the ones in ImageNet. In general, it consists of a convolutional layer followed by a pooling layer, another convolution layer followed by a pooling layer, and then two fully connected layers similar to the conventional multilayer perceptrons. We have defined the layers in `CAFFE_ROOT/data/lenet.prototxt`.
If you would like to read about step-by-step instruction on how the protobuf definitions are written, see [MNIST: Define the Network](mnist_prototxt.html) and [MNIST: Define the Solver](mnist_solver_prototxt.html)?.
Training and Testing the Model
------------------------------
Training the model is simple after you have written the network definition protobuf and solver protobuf files. Simply run `train_mnist.sh`, or the following command directly:
cd $CAFFE_ROOT/examples/mnist
./train_lenet.sh
`train_lenet.sh` is a simple script, but here are a few explanations: `GLOG_logtostderr=1` is the google logging flag that prints all the logging messages directly to stderr. The main tool for training is `train_net.bin`, with the solver protobuf text file as its argument.
When you run the code, you will see a lot of messages flying by like this:
I1203 net.cpp:66] Creating Layer conv1
I1203 net.cpp:76] conv1 <- data
I1203 net.cpp:101] conv1 -> conv1
I1203 net.cpp:116] Top shape: 20 24 24
I1203 net.cpp:127] conv1 needs backward computation.
These messages tell you the details about each layer, its connections and its output shape, which may be helpful in debugging. After the initialization, the training will start:
I1203 net.cpp:142] Network initialization done.
I1203 solver.cpp:36] Solver scaffolding done.
I1203 solver.cpp:44] Solving LeNet
Based on the solver setting, we will print the training loss function every 100 iterations, and test the network every 1000 iterations. You will see messages like this:
I1203 solver.cpp:204] Iteration 100, lr = 0.00992565
I1203 solver.cpp:66] Iteration 100, loss = 0.26044
...
I1203 solver.cpp:84] Testing net
I1203 solver.cpp:111] Test score #0: 0.9785
I1203 solver.cpp:111] Test score #1: 0.0606671
For each training iteration, `lr` is the learning rate of that iteration, and `loss` is the training function. For the output of the testing phase, score 0 is the accuracy, and score 1 is the testing loss function.
And after a few minutes, you are done!
I1203 solver.cpp:84] Testing net
I1203 solver.cpp:111] Test score #0: 0.9897
I1203 solver.cpp:111] Test score #1: 0.0324599
I1203 solver.cpp:126] Snapshotting to lenet_iter_10000
I1203 solver.cpp:133] Snapshotting solver state to lenet_iter_10000.solverstate
I1203 solver.cpp:78] Optimization Done.
The final model, stored as a binary protobuf file, is stored at
lenet_iter_10000
which you can deploy as a trained model in your application, if you are training on a real-world application dataset.
Um... How about GPU training?
-----------------------------
You just did! All the training was carried out on the GPU. In fact, if you would like to do training on CPU, you can simply change one line in `lenet_solver.prototxt`:
# solver mode: CPU or GPU
solver_mode: CPU
and you will be using CPU for training. Isn't that easy?
MNIST is a small dataset, so training with GPU does not really introduce too much benefit due to communication overheads. On larger datasets with more complex models, such as ImageNet, the computation speed difference will be more significant.

View File

@ -1,153 +0,0 @@
---
layout: default
title: Caffe
---
Define the MNIST Network
=========================
This page explains the prototxt file `lenet_train.prototxt` used in the MNIST demo. We assume that you are familiar with [Google Protobuf](https://developers.google.com/protocol-buffers/docs/overview), and assume that you have read the protobuf definitions used by Caffe, which can be found at [src/caffe/proto/caffe.proto](https://github.com/Yangqing/caffe/blob/master/src/caffe/proto/caffe.proto).
Specifically, we will write a `caffe::NetParameter` (or in python, `caffe.proto.caffe_pb2.NetParameter`) protobuf. We will start by giving the network a name:
name: "LeNet"
Writing the Data Layer
----------------------
Currently, we will read the MNIST data from the leveldb we created earlier in the demo. This is defined by a data layer:
layers {
name: "mnist"
type: DATA
data_param {
source: "mnist-train-leveldb"
batch_size: 64
scale: 0.00390625
}
top: "data"
top: "label"
}
Specifically, this layer has name `mnist`, type `data`, and it reads the data from the given leveldb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \[0,1\). Why 0.00390625? It is 1 divided by 256. And finally, this layer produces two blobs, one is the `data` blob, and one is the `label` blob.
Writing the Convolution Layer
--------------------------------------------
Let's define the first convolution layer:
layers {
name: "conv1"
type: CONVOLUTION
blobs_lr: 1.
blobs_lr: 2.
convolution_param {
num_output: 20
kernelsize: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
bottom: "data"
top: "conv1"
}
This layer takes the `data` blob (it is provided by the data layer), and produces the `conv1` layer. It produces outputs of 20 channels, with the convolutional kernel size 5 and carried out with stride 1.
The fillers allow us to randomly initialize the value of the weights and bias. For the weight filler, we will use the `xavier` algorithm that automatically determines the scale of initialization based on the number of input and output neurons. For the bias filler, we will simply initialize it as constant, with the default filling value 0.
`blobs_lr` are the learning rate adjustments for the layer's learnable parameters. In this case, we will set the weight learning rate to be the same as the learning rate given by the solver during runtime, and the bias learning rate to be twice as large as that - this usually leads to better convergence rates.
Writing the Pooling Layer
-------------------------
Phew. Pooling layers are actually much easier to define:
layers {
name: "pool1"
type: POOLING
pooling_param {
kernel_size: 2
stride: 2
pool: MAX
}
bottom: "conv1"
top: "pool1"
}
This says we will perform max pooling with a pool kernel size 2 and a stride of 2 (so no overlapping between neighboring pooling regions).
Similarly, you can write up the second convolution and pooling layers. Check `data/lenet.prototxt` for details.
Writing the Fully Connected Layer
----------------------------------
Writing a fully connected layer is also simple:
layers {
name: "ip1"
type: INNER_PRODUCT
blobs_lr: 1.
blobs_lr: 2.
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
bottom: "pool2"
top: "ip1"
}
This defines a fully connected layer (for some legacy reason, Caffe calls it an `innerproduct` layer) with 500 outputs. All other lines look familiar, right?
Writing the ReLU Layer
----------------------
A ReLU Layer is also simple:
layers {
name: "relu1"
type: RELU
bottom: "ip1"
top: "ip1"
}
Since ReLU is an element-wise operation, we can do *in-place* operations to save some memory. This is achieved by simply giving the same name to the bottom and top blobs. Of course, do NOT use duplicated blob names for other layer types!
After the ReLU layer, we will write another innerproduct layer:
layers {
name: "ip2"
type: INNER_PRODUCT
blobs_lr: 1.
blobs_lr: 2.
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
bottom: "ip1"
top: "ip2"
}
Writing the Loss Layer
-------------------------
Finally, we will write the loss!
layers {
name: "loss"
type: SOFTMAX_LOSS
bottom: "ip2"
bottom: "label"
}
The `softmax_loss` layer implements both the softmax and the multinomial logistic loss (that saves time and improves numerical stability). It takes two blobs, the first one being the prediction and the second one being the `label` provided by the data layer (remember it?). It does not produce any outputs - all it does is to compute the loss function value, report it when backpropagation starts, and initiates the gradient with respect to `ip2`. This is where all magic starts.
Now that we have demonstrated how to write the MNIST layer definition prototxt, maybe check out [how we write a solver prototxt](mnist_solver_prototxt.html)?

View File

@ -1,37 +0,0 @@
---
layout: default
title: Caffe
---
Define the MNIST Solver
=======================
The page is under construction. For now, check out the comments in the solver prototxt file, which explains each line in the prototxt:
# The training protocol buffer definition
train_net: "lenet_train.prototxt"
# The testing protocol buffer definition
test_net: "lenet_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "lenet"
# solver mode: 0 for CPU and 1 for GPU
solver_mode: 1

View File

@ -42,7 +42,7 @@ h3 {
}
h4, h5, h6 {
font-family: Times, serif;
font-family: 'PT Serif', serif;
font-weight: 700;
}
@ -68,12 +68,11 @@ strong {
}
ul {
list-style: inside;
padding-left: 25px;
}
ol {
list-style: decimal inside;
list-style: decimal;
padding-left: 20px;
}
@ -129,7 +128,6 @@ p img {
}
/* Code blocks */
code, pre {
font-family: monospace;
color:#000;
@ -149,7 +147,6 @@ pre {
/* Tables */
table {
width:100%;
}
@ -161,7 +158,7 @@ table {
}
th {
font-family: 'Arvo', Helvetica, Arial, sans-serif;
font-family: 'Open Sans', sans-serif;
font-size: 18px;
font-weight: normal;
padding: 10px;
@ -184,21 +181,11 @@ td {
/* Header */
header {
background-color: #171717;
color: #FDFDFB;
width:170px;
float:left;
position:fixed;
border: 1px solid #000;
-webkit-border-top-right-radius: 4px;
-webkit-border-bottom-right-radius: 4px;
-moz-border-radius-topright: 4px;
-moz-border-radius-bottomright: 4px;
border-top-right-radius: 4px;
border-bottom-right-radius: 4px;
padding: 12px 25px 22px 50px;
margin: 24px 25px 0 0;
-webkit-font-smoothing: antialiased;
}
p.header {
@ -206,23 +193,12 @@ p.header {
}
h1.header {
/*font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;*/
font-size: 30px;
font-weight: 300;
line-height: 1.3em;
border-bottom: none;
margin-top: 0;
}
h1.header, a.header, a.name, header a{
color: #fff;
}
a.header {
text-decoration: underline;
}
a.name {
white-space: nowrap;
}
@ -239,38 +215,19 @@ header li {
margin-bottom: 12px;
line-height: 1em;
padding: 6px 6px 6px 7px;
background: #AF0011;
background: -moz-linear-gradient(top, #AF0011 0%, #820011 100%);
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#f8f8f8), color-stop(100%,#dddddd));
background: -webkit-linear-gradient(top, #AF0011 0%,#820011 100%);
background: -o-linear-gradient(top, #AF0011 0%,#820011 100%);
background: -ms-linear-gradient(top, #AF0011 0%,#820011 100%);
background: linear-gradient(top, #AF0011 0%,#820011 100%);
background: #c30000;
border-radius:4px;
border:1px solid #0D0D0D;
-webkit-box-shadow: inset 0px 1px 1px 0 rgba(233,2,38, 1);
box-shadow: inset 0px 1px 1px 0 rgba(233,2,38, 1);
border:1px solid #555;
}
header li:hover {
background: #C3001D;
background: -moz-linear-gradient(top, #C3001D 0%, #950119 100%);
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#f8f8f8), color-stop(100%,#dddddd));
background: -webkit-linear-gradient(top, #C3001D 0%,#950119 100%);
background: -o-linear-gradient(top, #C3001D 0%,#950119 100%);
background: -ms-linear-gradient(top, #C3001D 0%,#950119 100%);
background: linear-gradient(top, #C3001D 0%,#950119 100%);
background: #dd0000;
}
a.buttons {
-webkit-font-smoothing: antialiased;
background: url(../images/arrow-down.png) no-repeat;
color: #fff;
text-decoration: none;
font-weight: normal;
text-shadow: rgba(0, 0, 0, 0.4) 0 -1px 0;
padding: 2px 2px 2px 22px;
height: 30px;
}
@ -280,12 +237,6 @@ a.github {
background-size: 15%;
}
a.buttons:hover {
color: #fff;
text-decoration: none;
}
/* Section - for main page content */
section {

View File

@ -1,6 +1,9 @@
---
title: CIFAR-10 tutorial
category: example
description: Train and test Caffe on CIFAR-10 data.
include_in_docs: true
layout: default
title: Caffe
---
Alex's CIFAR-10 tutorial, Caffe style

View File

@ -1,6 +1,8 @@
{
"metadata": {
"name": ""
"name": "ImageNet detection",
"description": "Run a pretrained model as a detector in Python.",
"include_in_docs": true
},
"nbformat": 3,
"nbformat_minor": 0,
@ -836,4 +838,4 @@
"metadata": {}
}
]
}
}

View File

@ -1,6 +1,9 @@
---
title: Feature extraction with Caffe C++ code.
description: Extract AlexNet features using the Caffe binary.
category: example
include_in_docs: true
layout: default
title: Caffe
---
Extracting Features
@ -57,7 +60,7 @@ The last parameter above is the number of data mini-batches.
The features are stored to LevelDB `examples/_temp/features`, ready for access by some other code.
If you meet with the error "Check failed: status.ok() Failed to open leveldb examples/_temp/features", it is because the directory examples/_temp/features has been created the last time you run the command. Remove it and run again.
rm -rf examples/_temp/features/

View File

@ -1,6 +1,8 @@
{
"metadata": {
"name": ""
"name": "Filter visualization",
"description": "Extracting features and visualizing trained filters with an example image, viewed layer-by-layer.",
"include_in_docs": true
},
"nbformat": 3,
"nbformat_minor": 0,

View File

@ -1,6 +1,9 @@
---
title: ImageNet tutorial
description: Train and test "CaffeNet" on ImageNet challenge data.
category: example
include_in_docs: true
layout: default
title: Caffe
---
Yangqing's Recipe on Brewing ImageNet

View File

@ -1,6 +1,8 @@
{
"metadata": {
"name": ""
"description": "Use the pre-trained ImageNet model to classify images with the Python interface.",
"name": "ImageNet Classification",
"include_in_docs": true
},
"nbformat": 3,
"nbformat_minor": 0,
@ -407,4 +409,4 @@
"metadata": {}
}
]
}
}

266
examples/mnist/readme.md Normal file
View File

@ -0,0 +1,266 @@
---
title: MNIST Tutorial
description: Train and test "LeNet" on MNIST data.
category: example
include_in_docs: true
layout: default
---
# Training MNIST with Caffe
We will assume that you have caffe successfully compiled. If not, please refer to the [Installation page](installation.html). In this tutorial, we will assume that your caffe installation is located at `CAFFE_ROOT`.
## Prepare Datasets
You will first need to download and convert the data format from the MNIST website. To do this, simply run the following commands:
cd $CAFFE_ROOT/data/mnist
./get_mnist.sh
cd $CAFFE_ROOT/examples/mnist
./create_mnist.sh
If it complains that `wget` or `gunzip` are not installed, you need to install them respectively. After running the script there should be two datasets, `mnist-train-leveldb`, and `mnist-test-leveldb`.
## LeNet: the MNIST Classification Model
Before we actually run the training program, let's explain what will happen. We will use the [LeNet](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf) network, which is known to work well on digit classification tasks. We will use a slightly different version from the original LeNet implementation, replacing the sigmoid activations with Rectified Linear Unit (ReLU) activations for the neurons.
The design of LeNet contains the essence of CNNs that are still used in larger models such as the ones in ImageNet. In general, it consists of a convolutional layer followed by a pooling layer, another convolution layer followed by a pooling layer, and then two fully connected layers similar to the conventional multilayer perceptrons. We have defined the layers in `CAFFE_ROOT/data/lenet.prototxt`.
## Define the MNIST Network
This section explains the prototxt file `lenet_train.prototxt` used in the MNIST demo. We assume that you are familiar with [Google Protobuf](https://developers.google.com/protocol-buffers/docs/overview), and assume that you have read the protobuf definitions used by Caffe, which can be found at [src/caffe/proto/caffe.proto](https://github.com/Yangqing/caffe/blob/master/src/caffe/proto/caffe.proto).
Specifically, we will write a `caffe::NetParameter` (or in python, `caffe.proto.caffe_pb2.NetParameter`) protobuf. We will start by giving the network a name:
name: "LeNet"
### Writing the Data Layer
Currently, we will read the MNIST data from the leveldb we created earlier in the demo. This is defined by a data layer:
layers {
name: "mnist"
type: DATA
data_param {
source: "mnist-train-leveldb"
batch_size: 64
scale: 0.00390625
}
top: "data"
top: "label"
}
Specifically, this layer has name `mnist`, type `data`, and it reads the data from the given leveldb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \[0,1\). Why 0.00390625? It is 1 divided by 256. And finally, this layer produces two blobs, one is the `data` blob, and one is the `label` blob.
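A one-line check of that scale factor (plain arithmetic, shown only for clarity):

print(1 / 256.0)  # 0.00390625, so raw pixel values 0..255 map into [0, 1)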
### Writing the Convolution Layer
Let's define the first convolution layer:
layers {
name: "conv1"
type: CONVOLUTION
blobs_lr: 1.
blobs_lr: 2.
convolution_param {
num_output: 20
kernelsize: 5
stride: 1
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
bottom: "data"
top: "conv1"
}
This layer takes the `data` blob (it is provided by the data layer), and produces the `conv1` layer. It produces outputs of 20 channels, with the convolutional kernel size 5 and carried out with stride 1.
The fillers allow us to randomly initialize the value of the weights and bias. For the weight filler, we will use the `xavier` algorithm that automatically determines the scale of initialization based on the number of input and output neurons. For the bias filler, we will simply initialize it as constant, with the default filling value 0.
`blobs_lr` are the learning rate adjustments for the layer's learnable parameters. In this case, we will set the weight learning rate to be the same as the learning rate given by the solver during runtime, and the bias learning rate to be twice as large as that - this usually leads to better convergence rates.
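As a quick sanity check on these settings, here is a small sketch (standard convolution output-size arithmetic, not Caffe code) that reproduces the conv1 output shape reported later in the training log (`Top shape: 20 24 24`):

# 28x28 MNIST input, kernel 5, stride 1, no padding.
input_size, kernel, stride, pad = 28, 5, 1, 0
out = (input_size + 2 * pad - kernel) // stride + 1
num_output = 20
print(num_output, out, out)  # 20 24 24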
### Writing the Pooling Layer
Phew. Pooling layers are actually much easier to define:
layers {
name: "pool1"
type: POOLING
pooling_param {
kernel_size: 2
stride: 2
pool: MAX
}
bottom: "conv1"
top: "pool1"
}
This says we will perform max pooling with a pool kernel size 2 and a stride of 2 (so no overlapping between neighboring pooling regions).
Similarly, you can write up the second convolution and pooling layers. Check `data/lenet.prototxt` for details.
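For intuition, a tiny numpy sketch (illustrative values only) of what non-overlapping 2x2 max pooling does to the 24x24 conv1 output:

import numpy as np

x = np.arange(24 * 24, dtype=float).reshape(24, 24)  # stand-in for one conv1 channel
pooled = x.reshape(12, 2, 12, 2).max(axis=(1, 3))    # kernel 2, stride 2
print(pooled.shape)  # (12, 12): each spatial dimension is halved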
### Writing the Fully Connected Layer
Writing a fully connected layer is also simple:
layers {
name: "ip1"
type: INNER_PRODUCT
blobs_lr: 1.
blobs_lr: 2.
inner_product_param {
num_output: 500
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
bottom: "pool2"
top: "ip1"
}
This defines a fully connected layer (for some legacy reason, Caffe calls it an `innerproduct` layer) with 500 outputs. All other lines look familiar, right?
### Writing the ReLU Layer
A ReLU Layer is also simple:
layers {
name: "relu1"
type: RELU
bottom: "ip1"
top: "ip1"
}
Since ReLU is an element-wise operation, we can do *in-place* operations to save some memory. This is achieved by simply giving the same name to the bottom and top blobs. Of course, do NOT use duplicated blob names for other layer types!
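A toy numpy sketch of the same idea (not Caffe internals): computing ReLU into the input buffer itself, which is what sharing the bottom and top blob name achieves:

import numpy as np

x = np.array([-1.5, 0.2, -0.3, 2.0])
np.maximum(x, 0, out=x)  # overwrite x in place instead of allocating a new array
print(x)  # [ 0.   0.2  0.   2. ]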
After the ReLU layer, we will write another innerproduct layer:
layers {
name: "ip2"
type: INNER_PRODUCT
blobs_lr: 1.
blobs_lr: 2.
inner_product_param {
num_output: 10
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
}
}
bottom: "ip1"
top: "ip2"
}
### Writing the Loss Layer
Finally, we will write the loss!
layers {
name: "loss"
type: SOFTMAX_LOSS
bottom: "ip2"
bottom: "label"
}
The `softmax_loss` layer implements both the softmax and the multinomial logistic loss (that saves time and improves numerical stability). It takes two blobs, the first one being the prediction and the second one being the `label` provided by the data layer (remember it?). It does not produce any outputs - all it does is to compute the loss function value, report it when backpropagation starts, and initiates the gradient with respect to `ip2`. This is where all magic starts.
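For concreteness, a toy sketch (made-up numbers, plain numpy rather than Caffe code) of the quantity this layer computes for a single example:

import numpy as np

ip2 = np.array([1.2, 0.3, -0.5, 2.0, 0.1, -1.0, 0.4, 0.0, 0.7, -0.2])  # 10 class scores
label = 3
prob = np.exp(ip2 - ip2.max())
prob /= prob.sum()              # softmax
loss = -np.log(prob[label])     # multinomial logistic loss for the true class
print(loss)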
## Define the MNIST Solver
Check out the comments explaining each line in the prototxt:
# The training protocol buffer definition
train_net: "lenet_train.prototxt"
# The testing protocol buffer definition
test_net: "lenet_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "lenet"
# solver mode: 0 for CPU and 1 for GPU
solver_mode: 1
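As a sanity check on the `inv` learning rate policy above, a short sketch (assuming the usual formula base_lr * (1 + gamma * iter) ^ (-power)) reproduces the learning rate that shows up in the training log below at iteration 100:

base_lr, gamma, power = 0.01, 0.0001, 0.75
lr = base_lr * (1 + gamma * 100) ** (-power)
print('%.8f' % lr)  # 0.00992565, matching "Iteration 100, lr = 0.00992565"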
## Training and Testing the Model
Training the model is simple after you have written the network definition protobuf and solver protobuf files. Simply run `train_mnist.sh`, or the following command directly:
cd $CAFFE_ROOT/examples/mnist
./train_lenet.sh
`train_lenet.sh` is a simple script, but here are a few explanations: `GLOG_logtostderr=1` is the google logging flag that prints all the logging messages directly to stderr. The main tool for training is `train_net.bin`, with the solver protobuf text file as its argument.
When you run the code, you will see a lot of messages flying by like this:
I1203 net.cpp:66] Creating Layer conv1
I1203 net.cpp:76] conv1 <- data
I1203 net.cpp:101] conv1 -> conv1
I1203 net.cpp:116] Top shape: 20 24 24
I1203 net.cpp:127] conv1 needs backward computation.
These messages tell you the details about each layer, its connections and its output shape, which may be helpful in debugging. After the initialization, the training will start:
I1203 net.cpp:142] Network initialization done.
I1203 solver.cpp:36] Solver scaffolding done.
I1203 solver.cpp:44] Solving LeNet
Based on the solver setting, we will print the training loss function every 100 iterations, and test the network every 1000 iterations. You will see messages like this:
I1203 solver.cpp:204] Iteration 100, lr = 0.00992565
I1203 solver.cpp:66] Iteration 100, loss = 0.26044
...
I1203 solver.cpp:84] Testing net
I1203 solver.cpp:111] Test score #0: 0.9785
I1203 solver.cpp:111] Test score #1: 0.0606671
For each training iteration, `lr` is the learning rate of that iteration, and `loss` is the training function. For the output of the testing phase, score 0 is the accuracy, and score 1 is the testing loss function.
And after a few minutes, you are done!
I1203 solver.cpp:84] Testing net
I1203 solver.cpp:111] Test score #0: 0.9897
I1203 solver.cpp:111] Test score #1: 0.0324599
I1203 solver.cpp:126] Snapshotting to lenet_iter_10000
I1203 solver.cpp:133] Snapshotting solver state to lenet_iter_10000.solverstate
I1203 solver.cpp:78] Optimization Done.
The final model, stored as a binary protobuf file, is stored at
lenet_iter_10000
which you can deploy as a trained model in your application, if you are training on a real-world application dataset.
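If you want to poke at the snapshot from Python, a rough sketch is below; it assumes the pycaffe constructor of this era, `caffe.Net(model_definition, trained_weights)`, and the test-net prototxt named in the solver:

import caffe

# Assumption: Net takes (model definition prototxt, trained weights file).
net = caffe.Net('lenet_test.prototxt', 'lenet_iter_10000')
print([(name, blob.data.shape) for name, blob in net.blobs.items()])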
### Um... How about GPU training?
You just did! All the training was carried out on the GPU. In fact, if you would like to do training on CPU, you can simply change one line in `lenet_solver.prototxt`:
# solver mode: CPU or GPU
solver_mode: CPU
and you will be using CPU for training. Isn't that easy?
MNIST is a small dataset, so training with GPU does not really introduce too much benefit due to communication overheads. On larger datasets with more complex models, such as ImageNet, the computation speed difference will be more significant.

View File

@ -1,6 +1,8 @@
{
"metadata": {
"name": ""
"name": "Editing model parameters",
"description": "How to do net surgery and manually change model parameters.",
"include_in_docs": true
},
"nbformat": 3,
"nbformat_minor": 0,
@ -324,4 +326,4 @@
"metadata": {}
}
]
}
}

215
examples/web_demo/app.py Normal file
View File

@ -0,0 +1,215 @@
import os
import time
import cPickle
import datetime
import logging
import flask
import werkzeug
import optparse
import tornado.wsgi
import tornado.httpserver
import numpy as np
import pandas as pd
from PIL import Image as PILImage
import cStringIO as StringIO
import urllib
import caffe
import exifutil

REPO_DIRNAME = os.path.abspath(os.path.dirname(__file__) + '/../..')
UPLOAD_FOLDER = '/tmp/caffe_demos_uploads'
ALLOWED_IMAGE_EXTENSIONS = set(['png', 'bmp', 'jpg', 'jpe', 'jpeg', 'gif'])

# Obtain the flask app object
app = flask.Flask(__name__)


@app.route('/')
def index():
    return flask.render_template('index.html', has_result=False)


@app.route('/classify_url', methods=['GET'])
def classify_url():
    imageurl = flask.request.args.get('imageurl', '')
    try:
        string_buffer = StringIO.StringIO(
            urllib.urlopen(imageurl).read())
        image = caffe.io.load_image(string_buffer)
    except Exception as err:
        # For any exception we encounter in reading the image, we will just
        # not continue.
        logging.info('URL Image open error: %s', err)
        return flask.render_template(
            'index.html', has_result=True,
            result=(False, 'Cannot open image from URL.')
        )

    logging.info('Image: %s', imageurl)
    result = app.clf.classify_image(image)
    return flask.render_template(
        'index.html', has_result=True, result=result, imagesrc=imageurl)


@app.route('/classify_upload', methods=['POST'])
def classify_upload():
    try:
        # We will save the file to disk for possible data collection.
        imagefile = flask.request.files['imagefile']
        filename_ = str(datetime.datetime.now()).replace(' ', '_') + \
            werkzeug.secure_filename(imagefile.filename)
        filename = os.path.join(UPLOAD_FOLDER, filename_)
        imagefile.save(filename)
        logging.info('Saving to %s.', filename)
        image = exifutil.open_oriented_im(filename)
    except Exception as err:
        logging.info('Uploaded image open error: %s', err)
        return flask.render_template(
            'index.html', has_result=True,
            result=(False, 'Cannot open uploaded image.')
        )

    result = app.clf.classify_image(image)
    return flask.render_template(
        'index.html', has_result=True, result=result,
        imagesrc=embed_image_html(image)
    )


def embed_image_html(image):
    """Creates an image embedded in HTML base64 format."""
    image_pil = PILImage.fromarray((255 * image).astype('uint8'))
    image_pil = image_pil.resize((256, 256))
    string_buf = StringIO.StringIO()
    image_pil.save(string_buf, format='png')
    data = string_buf.getvalue().encode('base64').replace('\n', '')
    return 'data:image/png;base64,' + data


def allowed_file(filename):
    return (
        '.' in filename and
        filename.rsplit('.', 1)[1] in ALLOWED_IMAGE_EXTENSIONS
    )


class ImagenetClassifier(object):
    default_args = {
        'model_def_file': (
            '{}/examples/imagenet/imagenet_deploy.prototxt'.format(REPO_DIRNAME)),
        'pretrained_model_file': (
            '{}/examples/imagenet/caffe_reference_imagenet_model'.format(REPO_DIRNAME)),
        'mean_file': (
            '{}/python/caffe/imagenet/ilsvrc_2012_mean.npy'.format(REPO_DIRNAME)),
        'class_labels_file': (
            '{}/data/ilsvrc12/synset_words.txt'.format(REPO_DIRNAME)),
        'bet_file': (
            '{}/data/ilsvrc12/imagenet.bet.pickle'.format(REPO_DIRNAME)),
    }
    for key, val in default_args.iteritems():
        if not os.path.exists(val):
            raise Exception(
                "File for {} is missing. Should be at: {}".format(key, val))
    default_args['image_dim'] = 227
    default_args['gpu_mode'] = True

    def __init__(self, model_def_file, pretrained_model_file, mean_file,
                 class_labels_file, bet_file, image_dim, gpu_mode=False):
        logging.info('Loading net and associated files...')
        self.net = caffe.Classifier(
            model_def_file, pretrained_model_file, input_scale=255,
            image_dims=(image_dim, image_dim), gpu=gpu_mode,
            mean_file=mean_file, channel_swap=(2, 1, 0)
        )

        with open(class_labels_file) as f:
            labels_df = pd.DataFrame([
                {
                    'synset_id': l.strip().split(' ')[0],
                    'name': ' '.join(l.strip().split(' ')[1:]).split(',')[0]
                }
                for l in f.readlines()
            ])
        self.labels = labels_df.sort('synset_id')['name'].values

        self.bet = cPickle.load(open(bet_file))
        # A bias to prefer children nodes in single-chain paths
        # I am setting the value to 0.1 as a quick, simple model.
        # We could use better psychological models here...
        self.bet['infogain'] -= np.array(self.bet['preferences']) * 0.1

    def classify_image(self, image):
        try:
            starttime = time.time()
            scores = self.net.predict([image], oversample=True).flatten()
            endtime = time.time()

            indices = (-scores).argsort()[:5]
            predictions = self.labels[indices]

            # In addition to the prediction text, we will also produce
            # the length for the progress bar visualization.
            meta = [
                (p, '%.5f' % scores[i])
                for i, p in zip(indices, predictions)
            ]
            logging.info('result: %s', str(meta))

            # Compute expected information gain
            expected_infogain = np.dot(
                self.bet['probmat'], scores[self.bet['idmapping']])
            expected_infogain *= self.bet['infogain']

            # sort the scores
            infogain_sort = expected_infogain.argsort()[::-1]
            bet_result = [(self.bet['words'][v], '%.5f' % expected_infogain[v])
                          for v in infogain_sort[:5]]
            logging.info('bet result: %s', str(bet_result))

            return (True, meta, bet_result, '%.3f' % (endtime - starttime))

        except Exception as err:
            logging.info('Classification error: %s', err)
            return (False, 'Something went wrong when classifying the '
                           'image. Maybe try another one?')


def start_tornado(app, port=5000):
    http_server = tornado.httpserver.HTTPServer(
        tornado.wsgi.WSGIContainer(app))
    http_server.listen(port)
    print("Tornado server starting on port {}".format(port))
    tornado.ioloop.IOLoop.instance().start()


def start_from_terminal(app):
    """
    Parse command line options and start the server.
    """
    parser = optparse.OptionParser()
    parser.add_option(
        '-d', '--debug',
        help="enable debug mode",
        action="store_true", default=False)
    parser.add_option(
        '-p', '--port',
        help="which port to serve content on",
        type='int', default=5000)

    opts, args = parser.parse_args()

    # Initialize classifier
    app.clf = ImagenetClassifier(**ImagenetClassifier.default_args)

    if opts.debug:
        app.run(debug=True, host='0.0.0.0', port=opts.port)
    else:
        start_tornado(app, opts.port)


if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    if not os.path.exists(UPLOAD_FOLDER):
        os.makedirs(UPLOAD_FOLDER)
    start_from_terminal(app)

View File

@ -0,0 +1,33 @@
"""
This script handles the skimage exif problem.
"""
from PIL import Image
import numpy as np
ORIENTATIONS = { # used in apply_orientation
2: (Image.FLIP_LEFT_RIGHT,),
3: (Image.ROTATE_180,),
4: (Image.FLIP_TOP_BOTTOM,),
5: (Image.FLIP_LEFT_RIGHT, Image.ROTATE_90),
6: (Image.ROTATE_270,),
7: (Image.FLIP_LEFT_RIGHT, Image.ROTATE_270),
8: (Image.ROTATE_90,)
}
def open_oriented_im(im_path):
im = Image.open(im_path)
if hasattr(im, '_getexif'):
exif = im._getexif()
if exif is not None and 274 in exif:
orientation = exif[274]
im = apply_orientation(im, orientation)
return np.asarray(im).astype(np.float32) / 255.
def apply_orientation(im, orientation):
if orientation in ORIENTATIONS:
for method in ORIENTATIONS[orientation]:
im = im.transpose(method)
return im
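# Minimal usage sketch (hypothetical path), matching how app.py calls this module:
#
#     import exifutil
#     image = exifutil.open_oriented_im('/tmp/caffe_demos_uploads/example.jpg')
#     # image is a float32 HxWxC array scaled to [0, 1], rotated per its EXIF tag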

View File

@ -0,0 +1,30 @@
---
title: Web demo
description: Image classification demo running as a Flask web server.
category: example
layout: default
include_in_docs: true
---
# Web Demo
## Requirements
The demo server requires Python with some dependencies.
To make sure you have the dependencies, please run `pip install -r examples/web_demo/requirements.txt`, and also make sure that you've compiled the Python Caffe interface and that it is on your `PYTHONPATH` (see [installation instructions](/installation.html)).
Make sure that you have obtained the Caffe Reference ImageNet Model and the ImageNet Auxiliary Data ([instructions](/getting_pretrained_models.html)).
NOTE: if you run into trouble, try re-downloading the auxiliary files.
## Run
Running `python examples/web_demo/app.py` will bring up the demo server, accessible at `http://0.0.0.0:5000`.
You can enable debug mode of the web server, or switch to a different port:
% python examples/web_demo/app.py -h
Usage: app.py [options]
Options:
-h, --help show this help message and exit
-d, --debug enable debug mode
-p PORT, --port=PORT which port to serve content on
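
Once the server is running, the `/classify_url` route defined in `app.py` can also be exercised programmatically; a rough sketch (the image URL is only a placeholder, and the response is the rendered HTML results page, not JSON):

import urllib

image_url = 'http://example.com/cat.jpg'  # placeholder
query = urllib.urlencode({'imageurl': image_url})
html = urllib.urlopen('http://0.0.0.0:5000/classify_url?' + query).read()
print(html[:200])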

View File

@ -0,0 +1,138 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Caffe demos">
<meta name="author" content="BVLC (http://bvlc.eecs.berkeley.edu/)">
<title>Caffe Demos</title>
<link href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet">
<script type="text/javascript" src="//code.jquery.com/jquery-2.1.1.js"></script>
<script src="//netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
<!-- Script to instantly classify an image once it is uploaded. -->
<script type="text/javascript">
$(document).ready(
function(){
$('#classifyfile').attr('disabled',true);
$('#imagefile').change(
function(){
if ($(this).val()){
$('#formupload').submit();
}
}
);
}
);
</script>
<style>
body {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
line-height:1.5em;
color: #232323;
-webkit-font-smoothing: antialiased;
}
h1, h2, h3 {
font-family: Times, serif;
line-height:1.5em;
border-bottom: 1px solid #ccc;
}
</style>
</head>
<body>
<!-- Begin page content -->
<div class="container">
<div class="page-header">
<h1><a href="/">Caffe Demos</a></h1>
<p>
The <a href="http://caffe.berkeleyvision.org">Caffe</a> neural network library makes implementing state-of-the-art computer vision systems easy.
</p>
</div>
<div>
<h2>Classification</h2>
<a href="/classify_url?imageurl=http%3A%2F%2Fi.telegraph.co.uk%2Fmultimedia%2Farchive%2F02351%2Fcross-eyed-cat_2351472k.jpg">Click for a Quick Example</a>
</div>
{% if has_result %}
{% if not result[0] %}
<!-- we have error in the result. -->
<div class="alert alert-danger">{{ result[1] }} Did you provide a valid URL or a valid image file? </div>
{% else %}
<div class="media">
<a class="pull-left" href="#"><img class="media-object" width="192" height="192" src={{ imagesrc }}></a>
<div class="media-body">
<div class="bs-example bs-example-tabs">
<ul id="myTab" class="nav nav-tabs">
<li class="active"><a href="#infopred" data-toggle="tab">Maximally accurate</a></li>
<li><a href="#flatpred" data-toggle="tab">Maximally specific</a></li>
</ul>
<div id="myTabContent" class="tab-content">
<div class="tab-pane fade in active" id="infopred">
<ul class="list-group">
{% for single_pred in result[2] %}
<li class="list-group-item">
<span class="badge">{{ single_pred[1] }}</span>
<h4 class="list-group-item-heading">
<a href="https://www.google.com/#q={{ single_pred[0] }}" target="_blank">{{ single_pred[0] }}</a>
</h4>
</li>
{% endfor %}
</ul>
</div>
<div class="tab-pane fade" id="flatpred">
<ul class="list-group">
{% for single_pred in result[1] %}
<li class="list-group-item">
<span class="badge">{{ single_pred[1] }}</span>
<h4 class="list-group-item-heading">
<a href="https://www.google.com/#q={{ single_pred[0] }}" target="_blank">{{ single_pred[0] }}</a>
</h4>
</li>
{% endfor %}
</ul>
</div>
</div>
</div>
</div>
</div>
<p> CNN took {{ result[3] }} seconds. </p>
{% endif %}
<hr>
{% endif %}
<form role="form" action="classify_url" method="get">
<div class="form-group">
<div class="input-group">
<input type="text" class="form-control" name="imageurl" id="imageurl" placeholder="Provide an image URL">
<span class="input-group-btn">
<input class="btn btn-primary" value="Classify URL" type="submit" id="classifyurl"></input>
</span>
</div><!-- /input-group -->
</div>
</form>
<form id="formupload" class="form-inline" role="form" action="classify_upload" method="post" enctype="multipart/form-data">
<div class="form-group">
<label for="imagefile">Or upload an image:</label>
<input type="file" name="imagefile" id="imagefile">
</div>
<!--<input type="submit" class="btn btn-primary" value="Classify File" id="classifyfile"></input>-->
</form>
</div>
<hr>
<div id="footer">
<div class="container">
<p>&copy; BVLC 2014</p>
</div>
</div>
</body>
</html>

View File

@ -1,11 +1,17 @@
#!/bin/bash
# Build documentation for display in web browser.
PORT=${1:-4000}
echo "usage: build_docs.sh [port]"
echo "usage: build.sh [port]"
# Find the docs dir, no matter where the script is called
DIR="$( cd "$(dirname "$0")" ; pwd -P )"
cd $DIR/../docs
ROOT_DIR="$( cd "$(dirname "$0")"/.. ; pwd -P )"
cd $ROOT_DIR
# Gather docs.
scripts/gather_examples.sh
# Display docs using web server.
cd docs
jekyll serve -w -s . -d _site --port=$PORT

32
scripts/copy_notebook.py Executable file
View File

@ -0,0 +1,32 @@
#!/usr/bin/env python
"""
Takes as arguments:
1. the path to a JSON file (such as an IPython notebook).
2. the path to output file

If 'metadata' dict in the JSON file contains 'include_in_docs': true,
then copies the file to output file, appending the 'metadata' property
as YAML front-matter, adding the field 'category' with value 'notebook'.
"""
import os
import sys
import json

filename = sys.argv[1]
output_filename = sys.argv[2]
content = json.load(open(filename))

if 'include_in_docs' in content['metadata'] and content['metadata']['include_in_docs']:
    yaml_frontmatter = ['---']
    for key, val in content['metadata'].iteritems():
        if key == 'name':
            key = 'title'
            if val == '':
                val = os.path.basename(filename)
        yaml_frontmatter.append('{}: {}'.format(key, val))
    yaml_frontmatter += ['category: notebook']
    yaml_frontmatter += ['original_path: ' + filename]

    with open(output_filename, 'w') as fo:
        fo.write('\n'.join(yaml_frontmatter + ['---']) + '\n')
        fo.write(open(filename).read())

View File

@ -1,5 +1,5 @@
#!/usr/bin/env sh
# Publish/ Pull-request documentation to the gh-pages site.
#!/bin/bash
# Publish documentation to the gh-pages site.
# The remote for pushing the docs (defaults to origin).
# This is where you will submit the PR to BVLC:gh-pages from.

29
scripts/gather_examples.sh Executable file
View File

@ -0,0 +1,29 @@
#!/bin/bash
# Assemble documentation for the project into one directory via symbolic links.
# Find the docs dir, no matter where the script is called
ROOT_DIR="$( cd "$(dirname "$0")"/.. ; pwd -P )"
cd $ROOT_DIR
# Gather docs from examples/**/readme.md
GATHERED_DIR=docs/gathered
rm -r $GATHERED_DIR
mkdir $GATHERED_DIR
for README_FILENAME in $(find examples -iname "readme.md"); do
# Only use file if it is to be included in docs.
if grep -Fxq "include_in_docs: true" $README_FILENAME; then
# Make link to readme.md in docs/gathered/.
# Since everything is called readme.md, rename it by its dirname.
README_DIRNAME=`dirname $README_FILENAME`
DOCS_FILENAME=$GATHERED_DIR/$README_DIRNAME.md
mkdir -p `dirname $DOCS_FILENAME`
ln -s $ROOT_DIR/$README_FILENAME $DOCS_FILENAME
fi
done
# Gather docs from examples/*.ipynb and add YAML front-matter.
for NOTEBOOK_FILENAME in $(find examples -d 1 -iname "*.ipynb"); do
DOCS_FILENAME=$GATHERED_DIR/$NOTEBOOK_FILENAME
mkdir -p `dirname $DOCS_FILENAME`
python scripts/copy_notebook.py $NOTEBOOK_FILENAME $DOCS_FILENAME
done