| Name | Modified | Size |
| --- | --- | --- |
| README.md | 2016-01-30 | 3.5 kB |
| release candidate 3 source code.tar.gz | 2016-01-30 | 7.0 MB |
| release candidate 3 source code.zip | 2016-01-30 | 7.3 MB |
| Totals: 3 items | | 14.2 MB |

A lot has happened since the last release! This release packages up ~800 commits by 119 authors. Thanks all!

With every release, run `make clean && make superclean` to clear out stale build artifacts before compiling the new version.

  • layers:
      • batch normalization [#3229] [#3299]
      • scale + bias layers [#3591]
      • PReLU [#1940] [#2414], ELU [#3388], and log [#2090] non-linearities
      • tile layer [#2083], reduction layer [#2089]
      • embed layer [#2032]
      • spatial pyramid pooling [#2117]
      • batch reindex layer [#2966]
      • filter layer [#2054]
  • solvers: Adam [#2918], RMSProp [#2867], AdaDelta [#2782]
  • accumulate gradients to decouple the computational and learning batch sizes [#1977]
  • de-duplicate solver code [#2518]
  • make solver type a string and split classes [#3166] -- you should update your solver definitions
  • MSRA [#1946] and bilinear interpolation [#2213] weight fillers
  • N-D blobs [#1970] and convolution [#2049] for higher-dimensional data and filters
  • tools:
      • test caffe command line tool execution [#1926]
      • network summarization tool [#3090]
      • snapshot on signal / before quit [#2253]
      • report ignored layers when loading weights [#3305]
      • caffe command fine-tunes from multiple caffemodels [#1456]
  • pycaffe:
      • python net spec [#2086] [#2813] [#2959]
      • handle python exceptions [#2462]
      • python layer arguments [#2871]
      • python layer weights [#2944]
      • snapshot in pycaffe [#3082]
      • top + bottom names in pycaffe [#2865]
      • python3 compatibility improvements
  • matcaffe: totally new interface with examples and tests [#2505]
  • cuDNN: switch to v2 [#2038], switch to v3 [#3160], make v4 compatible [#3439]
  • separate IO dependencies for configurable build [#2523]
  • large model and solverstate serialization through hdf5 [#2836]
  • train by multi-GPU data parallelism [#2903] [#2921] [#2924] [#2931] [#2998]
  • dismantle layer headers so every layer has its own include [#3315]
  • workflow: adopt build versioning [#3311] [#3593], contributing guide [#2837], and badges for build status and license [#3133]
  • SoftmaxWithLoss normalization options [#3296]
  • dilated convolution [#3487]
  • expose Solver Restore() to C++ and Python [#2037]
  • set mode once and only once in testing [#2511]
  • turn off backprop by skip_propagate_down [#2095]
  • flatten layer learns axis [#2082]
  • trivial slice and concat [#3014]
  • hdf5 data layer: loads integer data [#2978], can shuffle [#2118]
  • cross platform adjustments [#3300] [#3320] [#3321] [#3362] [#3361] [#3378]
  • speed-ups for GPU solvers [#3519] and CPU im2col [#3536]
  • make and cmake build improvements
  • and more!
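
Several of these changes show up directly in solver definitions. As a sketch only -- net path, hyperparameter values, and snapshot prefix below are hypothetical, not from this release's examples -- a solver that uses the new string-valued type [#3166], gradient accumulation [#1977], and HDF5 snapshots [#2836] might look like:

```
# solver.prototxt (illustrative values; "train_val.prototxt" is a placeholder)
net: "train_val.prototxt"
type: "Adam"              # solver type is now a string [#3166]
base_lr: 0.001
momentum: 0.9             # Adam's first-moment decay
momentum2: 0.999          # Adam's second-moment decay
iter_size: 4              # accumulate gradients over 4 iterations [#1977]
max_iter: 10000
snapshot: 5000
snapshot_prefix: "snapshots/caffe_adam"
snapshot_format: HDF5     # large model/solverstate serialization via hdf5 [#2836]
```

With `iter_size: 4` and a per-iteration batch of, say, 32, the effective learning batch size is 128 while the per-pass memory footprint stays that of a batch of 32.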

Fixes

  • [#2866] fix weight sharing to (1) reduce memory usage and computation (2) correct momentum and other solver computations
  • [#2972] fix concat (broken in [#1970])
  • [#2964] [#3162] fix MVN layer
  • [#2321] fix contrastive loss layer to match Hadsell et al. 2006
  • fix deconv backward [#3095] and conv reshape [#3096] (broken in [#2049])
  • [#3393] fix in-place reshape and flatten
  • [#3152] fix silence layer to not zero bottom on backward
  • [#3574] disable cuDNN max pooling (incompatible with in-place)
  • make backward compatible with negative LR [#3007]
  • [#3332] fix pycaffe forward_backward_all()
  • [#1922] fix cross-channel LRN for large channel band
  • [#1457] fix shape of C++ feature extraction demo output
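
To make the contrastive loss fix [#2321] concrete: the layer now matches the Hadsell et al. 2006 formulation, where similar pairs are penalized by squared distance and dissimilar pairs only while closer than the margin. The function below is an illustrative re-derivation in plain Python (its name and signature are not the layer's API, and it omits the paper's 1/2 scaling factor):

```python
import math

def contrastive_loss(a, b, similar, margin=1.0):
    """Per-pair contrastive loss in the Hadsell et al. 2006 form:
    similar pairs are pulled together (loss = d^2), dissimilar pairs
    are pushed at least `margin` apart (loss = max(0, margin - d)^2).
    Illustrative sketch, not the caffe layer interface."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    if similar:
        return d ** 2
    return max(0.0, margin - d) ** 2

# A dissimilar pair already farther apart than the margin incurs no loss:
print(contrastive_loss([0, 0], [3, 4], similar=False))  # 0.0 (distance 5 > margin 1)
```

The pre-fix behavior squared the distance inside the hinge instead; the fixed layer keeps the old formula available behind a legacy option for networks trained against it.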

Dependencies:

  • hdf5 is required
  • cuDNN compatibility is now v3 and v4; cuDNN v1 and v2 are no longer supported
  • IO dependencies (lmdb, leveldb, opencv) are now optional [#2523]
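
To build without the optional IO backends, the corresponding flags can be switched off at configure time. A sketch, assuming the flag names from the `Makefile.config.example` that ships with this release ([#2523]):

```
# In Makefile.config -- uncomment to drop the optional IO dependencies:
USE_OPENCV := 0
USE_LEVELDB := 0
USE_LMDB := 0
```

With the CMake build, the equivalent should be passing the matching options (e.g. `-DUSE_OPENCV=OFF`) when configuring.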

:coffee:

Source: README.md, updated 2016-01-30