Caffe: Convolutional Architecture for Fast Feature Embedding

Created by Yangqing Jia, Department of EECS, University of California, Berkeley. Maintained by the Berkeley Vision and Learning Center (BVLC).

Introduction

Caffe aims to provide computer vision scientists with a clean, modifiable implementation of state-of-the-art deep learning algorithms. Network structure is easily specified in separate config files, with no mess of hard-coded parameters in the code. Python and Matlab wrappers are provided.
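
For illustration, a layer in such a config file is written in Google Protocol Buffer text format rather than in C++. The sketch below is only indicative; the exact field names and schema depend on the Caffe version, so consult the configs under examples/ for the authoritative layout.

```
# Hypothetical sketch of a convolution layer in a Caffe network config
# (protobuf text format). Field names here are illustrative only.
layers {
  layer {
    name: "conv1"
    type: "conv"
    num_output: 96    # number of output feature maps
    kernelsize: 11    # spatial extent of each filter
    stride: 4         # step between filter applications
  }
  bottom: "data"      # input blob
  top: "conv1"        # output blob
}
```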

At the same time, Caffe fits industry needs, with blazing fast C++/CUDA code for GPU computation. Caffe is currently the fastest GPU CNN implementation publicly available, and is able to process more than 20 million images per day on a single Tesla K20 machine *.

Caffe also provides seamless switching between CPU and GPU, which allows one to train models with fast GPUs and then deploy them on non-GPU clusters with one line of code: Caffe::set_mode(Caffe::CPU).
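
A minimal sketch of that switch in a host program (the header path and the surrounding comments are assumptions about this snapshot of the code, not a verbatim excerpt):

```cpp
#include "caffe/common.hpp"  // assumed header for the Caffe singleton; the exact path may differ

using caffe::Caffe;

int main() {
  // The one-line switch from the text: run all computation on the CPU.
  // On a GPU machine the same program would call Caffe::set_mode(Caffe::GPU)
  // and nothing else would change.
  Caffe::set_mode(Caffe::CPU);

  // ... construct the Net from its config file and run Forward() as usual.
  return 0;
}
```

Training on a GPU workstation and deploying to a CPU cluster thus requires no code changes beyond this call.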

Even in CPU mode, computing predictions takes only 20 ms per image when images are processed in batches.

* When measured with the SuperVision model that won the ImageNet Large Scale Visual Recognition Challenge 2012.

License

Caffe is BSD 2-Clause licensed (refer to the LICENSE for details).

The pretrained models published by the BVLC, such as the Caffe reference ImageNet model, are licensed for academic research / non-commercial use only. However, Caffe is a full toolkit for model training, so start brewing your own Caffe model today!

Citing Caffe

Please cite Caffe in your publications if it helps your research:

@misc{Jia13caffe,
  Author = {Yangqing Jia},
  Title = { {Caffe}: An Open Source Convolutional Architecture for Fast Feature Embedding},
  Year  = {2013},
  Howpublished = {\url{http://caffe.berkeleyvision.org/}},
}