---
title: Deep Learning Framework
---

# Caffe

Caffe is a deep learning framework developed with cleanliness, readability, and speed in mind. It was created by Yangqing Jia during his PhD at UC Berkeley, and is in active development by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Caffe is released under the BSD 2-Clause license.

Check out our web image classification demo!

## Why use Caffe?

**Clean architecture** enables rapid deployment. Networks are specified in simple config files, with no hard-coded parameters in the code. Switching between CPU and GPU is as simple as setting a flag -- so models can be trained on a GPU machine, and then used on commodity clusters.
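For example, here is a minimal sketch of that switch using the pycaffe Python interface; `deploy.prototxt` and `weights.caffemodel` are placeholder file names, not files shipped with this page:

```python
# Minimal sketch: run the same model on GPU or CPU with pycaffe.
# 'deploy.prototxt' and 'weights.caffemodel' are placeholder file names.
import caffe

caffe.set_mode_gpu()    # train or test on the GPU...
# caffe.set_mode_cpu()  # ...or switch to a CPU-only machine with one call

net = caffe.Net('deploy.prototxt',     # network architecture (config file)
                'weights.caffemodel',  # learned parameters
                caffe.TEST)            # instantiate in test (inference) phase
```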

**Readable & modifiable implementation** fosters active development. In its first six months, Caffe has been forked by over 300 developers on GitHub, and many have pushed significant changes.

**Speed** makes Caffe perfect for industry use. Caffe can process over 40M images per day with a single NVIDIA K40 or Titan GPU\*. That's 5 ms/image in training, and 2 ms/image in test. We believe that Caffe is the fastest CNN implementation available.
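To get a rough sense of throughput on your own hardware, a timing sketch with pycaffe might look like the following (placeholder model files and an arbitrary iteration count; this is not the official benchmark, which is linked in the footnote below):

```python
# Rough sketch of timing forward passes with pycaffe; not the official benchmark.
# 'deploy.prototxt' and 'weights.caffemodel' are placeholder file names.
import time
import caffe

caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

n_iter = 100
start = time.time()
for _ in range(n_iter):
    net.forward()                      # one batch through the whole network
elapsed_ms = 1000.0 * (time.time() - start) / n_iter
print('%.2f ms per forward pass (batch)' % elapsed_ms)
```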

**Community**: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. There is an active discussion and support community on GitHub.

\* When files are properly cached, and using the ILSVRC2012-winning [SuperVision](http://www.image-net.org/challenges/LSVRC/2012/supervision.pdf) model. Consult performance [details](/performance_hardware.html).

## Documentation

- **Model Zoo**
  BVLC suggests a standard distribution format for Caffe models, and provides trained models.
- **Developing & Contributing**
  Guidelines for development and contributing to Caffe.
- **API Documentation**
  Developer documentation automagically generated from code comments.

## Examples

{% assign examples = site.pages | where:'category','example' | sort: 'priority' %} {% for page in examples %}

{% endfor %}

## Notebook examples

{% assign notebooks = site.pages | where:'category','notebook' %} {% for page in notebooks %}

{% endfor %}

## Citing Caffe

Please cite Caffe in your publications if it helps your research:

```bibtex
@misc{Jia13caffe,
  Author = {Yangqing Jia},
  Title = { {Caffe}: An Open Source Convolutional Architecture for Fast Feature Embedding},
  Year  = {2013},
  Howpublished = {\url{http://caffe.berkeleyvision.org/}}
}
```

If you do publish a paper where Caffe helped your research, we encourage you to update the publications wiki. Citations are also tracked automatically by Google Scholar.

## Acknowledgements

Yangqing would like to thank the NVIDIA Academic program for providing GPUs, Oriol Vinyals for discussions along the journey, and BVLC PI Trevor Darrell for guidance.

A core set of BVLC members has contributed much new functionality and many fixes since the original release (alphabetical by first name): Eric Tzeng, Evan Shelhamer, Jeff Donahue, Jon Long, Ross Girshick, Sergey Karayev, Sergio Guadarrama.

Additionally, the open-source community plays a large and growing role in Caffe's development. Check out the GitHub project pulse for recent activity and the contributors page for a sorted list.

We sincerely appreciate your interest and contributions! If you'd like to contribute, please read the developing & contributing guide.

## Contacting us

All questions about usage, installation, code, and applications should be searched for and asked on the caffe-users mailing list.

All development discussion should be carried out on GitHub Issues.

If you have a proposal that may not be suited for public discussion and an ability to act on it, please email us directly. Requests for features, explanations, or personal help will be ignored; post such matters publicly as issues.

The core Caffe developers may be able to provide consulting services for appropriate projects.