Commit graph

198 Commits

Author SHA1 Message Date
Fisher Yu 708c1a122c remove extra space before + 2015-12-28 22:46:49 -05:00
Yangqing Jia 03a00e8290 Merge pull request #3090 from longjon/summarize-tool
A Python script for at-a-glance net summary
2015-12-08 17:38:00 -08:00
T.E.A de Souza 99571c471d Correct type of device_id; disambiguate shared_ptr 2015-12-04 13:59:55 +08:00
Evan Shelhamer 300f43f3ae dismantle layer headers
No more monolithic includes: split layers into their own headers for modular inclusion and build.
2015-12-01 21:13:43 -08:00
Tea b72b0318e2 replace snprintf with a C++98 equivalent 2015-11-26 09:54:12 +08:00
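The usual C++98-safe stand-in for snprintf-style formatting is std::ostringstream. A minimal sketch of the pattern (the function name is illustrative, not the commit's actual code):

    // Format a value into a std::string without snprintf (C++98-safe).
    #include <sstream>
    #include <string>

    std::string FormatIndex(const std::string& prefix, int index) {
      std::ostringstream stream;
      stream << prefix << index;  // no fixed-size buffer, so no truncation
      return stream.str();
    }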
Kai Li 5925fa8ed9 Update plot_training_log.py.example
There is no plot_log.sh file
2015-10-30 00:46:08 +08:00
Ronghang Hu c1f7fe1cff Add automatic upgrade for solver type 2015-10-16 22:32:33 -07:00
Ronghang Hu 0eea815ad6 Change solver type to string and provide solver registry 2015-10-16 22:32:32 -07:00
Dmytro Mishkin 200bd40391 Fix parse_log.sh against "prefetch queue empty" messages 2015-09-25 10:00:23 +03:00
Jonathan L Long 84eb44e6cf [tools] add Python script for at-a-glance prototxt summary 2015-09-19 21:14:56 -07:00
Ronghang Hu 2a585f78a0 Merge pull request #3074 from ronghanghu/show-use-cpu
Get back 'USE CPU' print for caffe train
2015-09-17 13:26:30 -07:00
Tea f3a933a620 Separate IO dependencies
OpenCV, LMDB, LevelDB and Snappy are made optional via switches
(USE_OPENCV, USE_LMDB, USE_LEVELDB) available for Make and CMake
builds. Since Snappy is a LevelDB dependency, its use is determined by
USE_LEVELDB. HDF5 is left bundled because it is used for serializing
weights and solverstates.
2015-09-17 15:08:29 +08:00
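Each switch becomes a preprocessor definition, so dependent code is fenced off at compile time. A hedged sketch of the guard pattern (the function below is illustrative; the real USE_OPENCV guards sit throughout Caffe's IO code):

    // Optional-dependency guard: the OpenCV path only compiles when the
    // build defines USE_OPENCV; otherwise the feature reports itself off.
    #ifdef USE_OPENCV
    #include <opencv2/core/core.hpp>
    #endif

    bool OpenCVAvailable() {
    #ifdef USE_OPENCV
      return true;   // OpenCV-backed decoding paths can be used
    #else
      return false;  // built with the IO switch off
    #endif
    }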
Ronghang Hu 3d3a8b2ca0 Get back 'USE CPU' print for caffe train 2015-09-16 12:06:16 -07:00
Lumin Zhou 1bdc18c5be Update extract_features.cpp 2015-09-04 04:38:43 +00:00
Luke Yeager 6ca0ab6607 Show output from convert_imageset tool 2015-09-01 17:20:37 -07:00
J Yegerlehner ff19d5f5c0 Add signal handler and early exit/snapshot to Solver.
Also check for exit and snapshot when testing.

Skip running test after early exit.
2015-08-22 12:51:55 -05:00
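The safe way to build this kind of feature is to do nothing in the handler except set a flag, then poll that flag between iterations. A minimal sketch under that assumption (names are illustrative, not Caffe's actual solver-action code):

    #include <csignal>

    // Async-signal-safe flag: the handler only records that a signal fired.
    static volatile std::sig_atomic_t stop_requested = 0;

    static void HandleStop(int) { stop_requested = 1; }

    int main() {
      std::signal(SIGINT, HandleStop);
      while (!stop_requested) {
        // ... run one solver iteration; also poll while testing ...
      }
      // ... snapshot solver state here before the early exit ...
      return 0;
    }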
Cyprien Noel e5575cf17a Multi-GPU
- Split batches among GPUs and tree-reduce the gradients (see the sketch after this entry)
- The effective batch size scales with the number of devices: batch size is multiplied by the device count
- Detect machine topology (twin-GPU boards, P2P connectivity)
- Track device in syncedmem (thanks @thatguymike)
- Insert a callback in the solver for minimal code change
- Accept list for gpu flag of caffe tool, e.g. '-gpu 0,1' or '-gpu all'.
  Run on default GPU if no ID given.
- Add multi-GPU solver test
- Deterministic architecture for reproducible runs
2015-08-09 15:16:00 -07:00
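The tree-reduce idea: pair devices each round and sum gradient buffers pairwise, halving the number of active devices until device 0 holds the total. A toy CPU sketch of that reduction (illustrative only, not Caffe's actual P2P sync implementation):

    #include <cstddef>
    #include <vector>

    // Sum per-device gradient buffers into grads[0] in log2(n) rounds.
    void TreeReduce(std::vector<std::vector<float> >& grads) {
      const std::size_t n = grads.size();  // one buffer per GPU
      for (std::size_t stride = 1; stride < n; stride *= 2) {
        for (std::size_t i = 0; i + stride < n; i += 2 * stride) {
          for (std::size_t k = 0; k < grads[i].size(); ++k) {
            grads[i][k] += grads[i + stride][k];  // pairwise partial sum
          }
        }
      }
      // grads[0] now holds the gradient summed over all devices.
    }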
Jeff Donahue 1d5f4e5491 Merge pull request #2634 from mlopezantequera/patch-2
Update parse_log.py
2015-08-07 11:58:19 -07:00
Evan Shelhamer d958b5a45c [pycaffe,build] include Python first in caffe tool 2015-08-06 13:03:50 -07:00
Evan Shelhamer ac6d4b67c2 Merge pull request #2462 from longjon/correct-python-exceptions
Handle Python layer exceptions correctly
2015-08-06 00:27:59 -07:00
Manuel e342e155c4 Update parse_log.py
Correct parsing (exponential notation learning rates were not being interpreted correctly)
2015-06-22 11:49:45 +02:00
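The bug class here is a number pattern that matches 0.001 but not 1e-05. A float pattern that also accepts scientific notation, sketched in C++ for consistency with the other examples in this log (the actual parse_log.py fix is in Python):

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
      // Accepts plain decimals and exponential notation (1e-05, 2.5E+3, ...).
      const std::regex lr_re("lr = ([-+]?[0-9]*\\.?[0-9]+(?:[eE][-+]?[0-9]+)?)");
      const std::string line = "Iteration 100, lr = 1e-05";
      std::smatch match;
      if (std::regex_search(line, match, lr_re)) {
        std::cout << match[1] << "\n";  // prints "1e-05", not just "1"
      }
      return 0;
    }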
Evan Shelhamer 50ab52cbbf Merge pull request #2350 from drdan14/log-parser-python-improved
Python log parser improvements
2015-05-30 00:26:27 -07:00
Mohammad Norouzi e1cc9d3c78 fix the bug with db_type when the number of features to be extracted is larger than 1 2015-05-27 10:33:19 -04:00
Mohammad Norouzi 9ea3da42fe add leading zeros to keys in feature DB files 2015-05-26 17:25:56 -04:00
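The reason for the padding: LevelDB and LMDB iterate keys in lexicographic order, so "10" sorts before "9" unless keys are fixed-width. A sketch of fixed-width key formatting (the helper name and width are illustrative):

    #include <iomanip>
    #include <sstream>
    #include <string>

    // Zero-pad so lexicographic key order matches numeric feature order.
    std::string MakeKey(int index) {
      std::ostringstream key;
      key << std::setw(10) << std::setfill('0') << index;  // 42 -> "0000000042"
      return key.str();
    }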
Ronghang Hu 4ceefaaf71 fix blob_loss_weights index in test() in caffe.cpp
Correct the index for blob_loss_weights during output. Previously it was mistakenly set to the test_score index.
2015-05-18 20:44:07 +08:00
Jonathan L Long cebce77130 print Python exceptions when using Python layer with the caffe tool 2015-05-14 22:17:19 -07:00
Jeff Donahue cadc42bfa0 Merge pull request #2165 from longjon/auto-reshape
Always call Layer::Reshape in Layer::Forward
2015-05-14 15:10:59 -07:00
Takuma Wakamori afa2d591b6 fix typo: swap the titles of xlabel and ylabel 2015-04-25 17:30:53 +09:00
Daniel Golden 23d28fdef8 Improvements to python log parser
Improves on the version introduced in https://github.com/BVLC/caffe/pull/1384

Highlights:
* Interface change: column order is now determined by using a list of `OrderedDict` objects instead of `dict` objects, which obviates the need to pass around a tuple with the column orders.
* The outputs are now named according to their names in the network protobuffer; e.g., if your top is named `loss`, then the corresponding column header will also be `loss`; we no longer rename it to, e.g., `TrainingLoss` or `TestLoss`.
* Fixed the bug/feature of the first version where the initial learning rate was always NaN.
* Add optional parameter to specify output table delimiter. It's still a comma by default.

You can use Matlab code from [this gist](https://gist.github.com/drdan14/d8b45999c4a1cbf7ad85) to verify that your results are the same before and after the changes introduced in this pull request. That code assumes that your `top` names are `accuracy` and `loss`, but you can modify the code if that's not true.
2015-04-22 07:57:18 -07:00
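The design point behind the OrderedDict change: keep each row's column order inside the row itself rather than in a separate ordering tuple. The script uses Python's OrderedDict; the same idea sketched in C++ for consistency with the other examples here:

    #include <string>
    #include <utility>
    #include <vector>

    // A row that carries its own column order: iterating the vector yields
    // columns in insertion order, so no separate order tuple is needed.
    typedef std::vector<std::pair<std::string, double> > Row;

    Row MakeRow() {
      Row row;
      row.push_back(std::make_pair("NumIters", 100.0));
      row.push_back(std::make_pair("loss", 0.42));  // header matches the top name
      return row;
    }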
Jonathan L Long f61c374983 always call Layer::Reshape in Layer::Forward
There are no cases where Forward is called without Reshape, so we can
simplify the call structure.
2015-03-19 21:30:22 -07:00
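A toy illustration of the simplified call structure: Forward reshapes before computing, so no caller needs to remember a separate Reshape step. Names here are stand-ins, not Caffe's real Layer classes:

    #include <cstddef>
    #include <vector>

    class ToyLayer {
     public:
      void Forward(const std::vector<float>& bottom, std::vector<float>* top) {
        Reshape(bottom, top);            // always called, as in the commit
        for (std::size_t i = 0; i < bottom.size(); ++i) {
          (*top)[i] = bottom[i] * 2.0f;  // stand-in computation
        }
      }

     private:
      void Reshape(const std::vector<float>& bottom, std::vector<float>* top) {
        top->resize(bottom.size());      // fit output to input shape
      }
    };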
J Yegerlehner 1cd6fcb88a extract_features preserves feature shape 2015-03-07 19:06:06 -08:00
J Yegerlehner a0087e4992 Load weights from multiple caffemodels. 2015-03-07 18:37:33 -08:00
Evan Shelhamer ccd837433d Repeal revert of #1878 2015-02-19 17:54:49 -08:00
Evan Shelhamer 6f4bdd88df Revert "Merge pull request #1878 from philkr/encoded"
This reverts the encoding cleanup since it breaks data processing for
existing inputs as discussed in #1901.
2015-02-19 11:03:23 -08:00
Evan Shelhamer 650b944509 Merge pull request #1899 from philkr/project_source_dir
[cmake] CMAKE_SOURCE/BINARY_DIR to PROJECT_SOURCE/BINARY_DIR
2015-02-19 00:37:53 -08:00
philkr d1238e14fa Changing CMAKE_SOURCE/BINARY_DIR to PROJECT_SOURCE/BINARY_DIR 2015-02-18 09:27:13 -08:00
Evan Shelhamer fe8fe5f3a1 tools make net with phase 2015-02-17 11:35:51 -08:00
Evan Shelhamer 2f5af3fb97 Merge pull request #1878 from philkr/encoded
Groom handling of encoded image inputs
2015-02-16 21:03:31 -08:00
Anatoly Baksheev 9358247014 improve CMake build 2015-02-16 20:48:16 -08:00
philkr 52465873d0 Cleaning up the encoded flag. Allowing any image (cropped or grayscale) to be encoded. Allowing a change of encoding (jpg -> png or vice versa) and cleaning up some unused functions. 2015-02-16 20:47:09 -08:00
Jeff Donahue 62d1d3add9 get rid of NetParameterPrettyPrint as layer is now after inputs
(whoohoo)
2015-02-05 14:49:22 -08:00
Jeff Donahue 2e6a82c844 automagic upgrade for v1->v2 2015-02-05 14:49:22 -08:00
Evan Shelhamer 92a66e36b5 Merge pull request #1748 from longjon/db-wrappers
Simple database wrappers
2015-01-29 20:58:58 -08:00
Evan Shelhamer 47d7cb7b20 drop dump_network tool
Nets are better serialized as a single binaryproto or saved however
desired through the Python and MATLAB interfaces.
2015-01-24 11:37:22 -08:00
Jonathan L Long 7dfe23963c use db wrappers 2015-01-19 15:15:04 -08:00
Jon Long c24c83ef0c Merge pull request #1686 from longjon/net-const
Improve const-ness of Net
2015-01-16 11:21:49 -08:00
Evan Shelhamer d612f4a35c check for enough args to convert_imageset
(this might better be handled by making all args flags...)
2015-01-15 21:47:57 -08:00
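A minimal sketch of the usual guard for this kind of fix: validate argc before indexing argv (the usage text is abbreviated here, not the tool's exact message):

    #include <cstdio>

    int main(int argc, char** argv) {
      if (argc < 4) {  // program name plus three positional arguments
        std::fprintf(stderr, "Usage: %s ROOTFOLDER LISTFILE DB_NAME\n", argv[0]);
        return 1;      // fail early instead of reading past argv
      }
      // ... proceed with argv[1], argv[2], argv[3] ...
      return 0;
    }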
Jonathan L Long 2377b68dcc improve const-ness of Net 2015-01-09 00:03:47 -08:00
Daniel Golden bae916f74b Store data in lists of dicts and use csv package
Output format is unchanged (except that csv.DictWriter insists on writing ints as 0.0 instead of 0)
2014-12-08 12:06:20 -08:00
Daniel Golden f954c6bea1 Take train loss from `Iteration N, loss = X` lines
Was previously using `Train net output #M: loss = X` lines, but there may not be exactly one of those (e.g., for GoogLeNet, which has multiple loss layers); I believe that `Iteration N, loss = X` is the aggregated loss.

If there's only one loss layer, these two values will be equal and it won't matter. Otherwise, we presumably want to report the aggregated loss.
2014-12-08 12:06:19 -08:00
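The aggregated line has a stable shape, so matching it is straightforward. A sketch of the match, in C++ for consistency with the other examples (the parser itself is Python):

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
      // Match the aggregated "Iteration N, loss = X" line rather than the
      // per-layer "Train net output #M: loss = X" lines.
      const std::regex iter_loss_re("Iteration (\\d+), loss = ([0-9.eE+-]+)");
      const std::string line = "Iteration 1000, loss = 0.0452";
      std::smatch match;
      if (std::regex_search(line, match, iter_loss_re)) {
        std::cout << "iter=" << match[1] << " loss=" << match[2] << "\n";
      }
      return 0;
    }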