Mirror of https://github.com/microsoft/caffe.git
Commit 923a847b21
@@ -20,11 +20,11 @@ We'll make a temporary folder to store things into.
 Generate a list of the files to process.
 We're going to use the images that ship with caffe.
 
-    find `pwd`/examples/images -type f -exec echo {} \; > examples/_temp/file_list.txt
+    find `pwd`/examples/images -type f -exec echo {} \; > examples/_temp/temp.txt
 
 The `ImagesLayer` we'll use expects labels after each filename, so let's add a 0 to the end of each line.
 
-    sed "s/$/ 0/" examples/_temp/file_list.txt > examples/_temp/file_list.txt
+    sed "s/$/ 0/" examples/_temp/temp.txt > examples/_temp/file_list.txt
 
 Define the Feature Extraction Network Architecture
 --------------------------------------------------
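The `sed` change above matters: redirecting `sed`'s output to the same file it reads from truncates that file before `sed` ever runs, leaving an empty file list. A minimal sketch of the corrected two-file pipeline, using throwaway `/tmp` paths rather than the actual `examples/_temp` directory:

```shell
# Build a labeled file list the way the fixed commands do, using
# illustrative /tmp paths instead of the real examples/_temp directory.
mkdir -p /tmp/_temp_demo
printf '%s\n' img1.jpg img2.jpg > /tmp/_temp_demo/temp.txt

# Buggy form (don't do this): `sed ... f.txt > f.txt` — the shell
# truncates the output file before sed reads it, so f.txt ends up empty.

# Fixed form: read from temp.txt, write a separate file_list.txt.
sed "s/$/ 0/" /tmp/_temp_demo/temp.txt > /tmp/_temp_demo/file_list.txt
cat /tmp/_temp_demo/file_list.txt   # prints "img1.jpg 0" then "img2.jpg 0"
```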
@@ -48,7 +48,7 @@ Extract Features
 
 Now everything necessary is in place.
 
-    build/tools/extract_features.bin models/caffe_reference_imagenet_model examples/_temp/imagenet_val.prototxt fc7 examples/_temp/features 10
+    build/tools/extract_features.bin examples/imagenet/caffe_reference_imagenet_model examples/_temp/imagenet_val.prototxt fc7 examples/_temp/features 10
 
 The name of feature blob that you extract is `fc7`, which represents the highest level feature of the reference model.
 We can use any other layer, as well, such as `conv5` or `pool3`.
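The corrected command line is easier to read with its positional arguments named. The sketch below only echoes the invocation, so it does not require a built Caffe tree; the variable names are illustrative, while the values come straight from the diff:

```shell
# Name extract_features.bin's positional arguments for readability.
# This only prints the command; actually running it needs a built Caffe.
MODEL=examples/imagenet/caffe_reference_imagenet_model  # pretrained weights
PROTO=examples/_temp/imagenet_val.prototxt              # network definition
BLOB=fc7                                                # feature blob to extract
OUT=examples/_temp/features                             # output LevelDB directory
BATCHES=10                                              # number of data mini-batches
echo build/tools/extract_features.bin "$MODEL" "$PROTO" "$BLOB" "$OUT" "$BATCHES"
```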
@@ -57,6 +57,10 @@ The last parameter above is the number of data mini-batches.
 
 The features are stored to LevelDB examples/_temp/features, ready for access by some other code.
 
+If you meet with the error "Check failed: status.ok() Failed to open leveldb examples/_temp/features", it is because the directory examples/_temp/features was created the last time you ran the command. Remove it and run again.
+
+    rm -rf examples/_temp/features/
+
 If you'd like to use the Python wrapper for extracting features, check out the [layer visualization notebook](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/filter_visualization.ipynb).
 
 Clean Up
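The added `rm -rf` step can be folded into a small guard so repeated runs never hit the "Failed to open leveldb" check. A sketch, with an illustrative `/tmp` path standing in for `examples/_temp/features`:

```shell
# Remove a stale output directory before re-running feature extraction,
# since extract_features.bin refuses to open an existing LevelDB dir.
OUT=/tmp/features_demo              # stands in for examples/_temp/features
mkdir -p "$OUT"                     # simulate leftovers from a previous run
if [ -d "$OUT" ]; then
    rm -rf "$OUT"
fi
echo "output dir is clear: $OUT"
```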
|
@ -9,7 +9,7 @@ Yangqing's Recipe on Brewing ImageNet
|
|||
"All your braincells are belong to us."
|
||||
- Caffeine
|
||||
|
||||
We are going to describe a reference implementation for the approach first proposed by Krizhevsky, Sutskever, and Hinton in their [NIPS 2012 paper](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf). Since training the whole model takes some time and energy, we provide a model, trained in the same way as we describe here, to help fight global warming. If you would like to simply use the pretrained model, check out the [Pretrained ImageNet](imagenet_pretrained.html) page. *Note that the pretrained model is for academic research / non-commercial use only*.
|
||||
We are going to describe a reference implementation for the approach first proposed by Krizhevsky, Sutskever, and Hinton in their [NIPS 2012 paper](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf). Since training the whole model takes some time and energy, we provide a model, trained in the same way as we describe here, to help fight global warming. If you would like to simply use the pretrained model, check out the [Pretrained ImageNet](getting_pretrained_models.html) page. *Note that the pretrained model is for academic research / non-commercial use only*.
|
||||
|
||||
To clarify, by ImageNet we actually mean the ILSVRC12 challenge, but you can easily train on the whole of ImageNet as well, just with more disk space, and a little longer training time.
|
||||
|
||||
|
@@ -99,4 +99,4 @@ Parting Words
 
 Hope you liked this recipe! Many researchers have gone further since the ILSVRC 2012 challenge, changing the network architecture and/or finetuning the various parameters in the network. The recent ILSVRC 2013 challenge suggests that there are quite some room for improvement. **Caffe allows one to explore different network choices more easily, by simply writing different prototxt files** - isn't that exciting?
 
-And since now you have a trained network, check out how to use it: [Running Pretrained ImageNet](imagenet_pretrained.html). This time we will use Python, but if you have wrappers for other languages, please kindly send a pull request!
+And since now you have a trained network, check out how to use it: [Running Pretrained ImageNet](getting_pretrained_models.html). This time we will use Python, but if you have wrappers for other languages, please kindly send a pull request!
@@ -45,7 +45,11 @@ You will also need other packages, most of which can be installed via apt-get us
 
     sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev
 
-The only exception being the google logging library, which does not exist in the Ubuntu 12.04 repository. To install it, do:
+On CentOS or RHEL, you can install via yum using:
+
+    sudo yum install protobuf-devel leveldb-devel snappy-devel opencv-devel boost-devel hdf5-devel
+
+The only exception being the google logging library, which does not exist in the Ubuntu 12.04 or CentOS/RHEL repository. To install it, do:
 
     wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz
     tar zxvf glog-0.3.3.tar.gz
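The apt-get and yum package lists added above differ only in distro naming conventions (`-dev` vs. `-devel`). A hedged sketch that maps a package manager to its install command; the `deps_for` helper is illustrative, while the package names are taken from the diff:

```shell
# Print the dependency-install command for a given package manager.
# The helper function is illustrative; package names come from the docs.
deps_for() {
    case "$1" in
        apt-get) echo "sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev" ;;
        yum)     echo "sudo yum install protobuf-devel leveldb-devel snappy-devel opencv-devel boost-devel hdf5-devel" ;;
        *)       echo "unsupported package manager: $1" >&2; return 1 ;;
    esac
}

deps_for apt-get
deps_for yum
```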
@@ -62,7 +66,7 @@ After setting all the prerequisites, you should modify the `Makefile.config` fil
 
 ## Compilation
 
-After installing the prerequisites, simply do `make all` to compile Caffe. If you would like to compile the Python and Matlab wrappers, do
+After installing the prerequisites, simply do `make all -j10`, in which 10 is the number of parallel compilation threads, to compile Caffe. If you would like to compile the Python and Matlab wrappers, do
 
     make pycaffe
     make matcaffe
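Rather than hard-coding `-j10`, the job count can be derived from the machine's core count. A small sketch that assumes `nproc` (GNU coreutils) is available, falling back to 1 otherwise:

```shell
# Pick a parallel job count for compilation from the CPU count.
JOBS=$(nproc 2>/dev/null || echo 1)
echo "make all -j${JOBS}"   # e.g. "make all -j8" on an 8-core machine
```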