Activity for Triceps

  • Sergey Babkin committed [r1825] on Code

    Added the fast brake factor, to adjust the amount of braking.

  • Sergey Babkin committed [r1824] on Code

    Fixed a diagnostic that got broken in refactoring.

  • Sergey Babkin committed [r1823] on Code

    Added support for combining momentum with stochastic descent (a generic sketch of the technique appears after this list).

  • Sergey Babkin committed [r1822] on Code

    Updated the naming: the last-pass weights were actually pre-last-pass weights.

  • Sergey Babkin committed [r1821] on Code

    Merged the main and corner weights in autorate computation (this should

  • Sergey Babkin committed [r1820] on Code

    Fixed the miscomputation of level sizes, added a test.

  • Sergey Babkin committed [r1819] on Code

    Merged the main weights and corner weights into a single interleaved vector.

  • Sergey Babkin committed [r1818] on Code

    Weights are now always accessed in the high-level way.

  • Sergey Babkin committed [r1817] on Code

    A minor clean-up.

  • Sergey Babkin committed [r1816] on Code

    Made FloatNeuralNet::setLevel() and getLevel() use higher-level primitives.

  • Sergey Babkin committed [r1815] on Code

    Made FloatNeuralNet::randomize() use the high-level representation of weights.

  • Sergey Babkin committed [r1814] on Code

    Changed the demos to return a stable result, so that it can be

  • Sergey Babkin committed [r1813] on Code

    A minor clean-up of expression.

  • Sergey Babkin committed [r1812] on Code

    Converted the main case to use the Tweaker and updateVectorGradient().

  • Sergey Babkin committed [r1811] on Code

    Encapsulated the tweaker in a structure, used it in updateVectorGradient().

  • Sergey Babkin committed [r1810] on Code

    Generalized the update of FNN weights, so far only in by-level computation.

  • Sergey Babkin committed [r1809] on Code

    The code used to build the graphs in the realSEUDO paper

  • Sergey Babkin committed [r1808] on Code

    Converted the previously-ifdefed code for fast brake and fixed nu into FNN options.

  • Sergey Babkin committed [r1807] on Code

    Use exception references in XS code.

  • Sergey Babkin committed [r1806] on Code

    Docs for the Matlab build.

  • Sergey Babkin committed [r1805] on Code

    Added the Matlab-specific build

  • Sergey Babkin committed [r1804] on Code

    Fixed the build for GCC 11.3.

  • Sergey Babkin committed [r1803] on Code

    Added the FloatNeuralNet experiments in slow start and arrested bouncing.

  • Sergey Babkin committed [r1802] on Code

    More MNIST & FloatNeuralNet experiments:

  • Sergey Babkin committed [r1801] on Code

    For MNIST demo, added a trapeze mode with absolute X coordinates instead of relative widths (and it works even worse).

  • Sergey Babkin committed [r1800] on Code

    In FloatNeuralNet autoRate2_, made the upward rate bumps higher and the downward bumps lower to bring them closer together.

  • Sergey Babkin committed [r1799] on Code

    In FloatNeuralNet removed the reference to mdiff_* when momentum is disabled.

  • Sergey Babkin committed [r1798] on Code

    Fixed getting the first gradient in FloatNeuralNet.

  • Sergey Babkin committed [r1797] on Code

    Experiments with alternative encodings of MNIST input: via run-length and trapezes.

  • Sergey Babkin committed [r1796] on Code

    Added run-length encoding for the MNIST ZIP example.

  • Sergey Babkin committed [r1795] on Code

    Even more aggressive settings in FNN autoRate2, and debug printout of gradient

  • Sergey Babkin committed [r1794] on Code

    More examples of FloatNeuralNet with a high-ish tweak rate.

  • Sergey Babkin committed [r1793] on Code

    More experimentation with autoRate2 in FloatNeuralNet, adjusting the conditions,

  • Sergey Babkin committed [r1792] on Code

    In FloatNeuralNet added the stop of momentum on the dimensions that change gradient sign,

  • Sergey Babkin committed [r1791] on Code

    Add a small deterministic but unequal tweak to the gradients in FloatNeuralNet,

  • Sergey Babkin committed [r1790] on Code

    Changed the implementation of boosting the rare cases in FloatNeuralNet classifiers:

  • Sergey Babkin committed [r1789] on Code

    An MNIST version with 16x16 input and 256 first-layer neurons.

  • Sergey Babkin committed [r1788] on Code

    In MNIST, added an explicit label field to training data and added commented-out

  • Sergey Babkin committed [r1787] on Code

    Tried including the MNIST test vector in the training vector; this makes it recognized very well.

  • Sergey Babkin committed [r1786] on Code

    Measurements of MNIST example with momentum descent.

  • Sergey Babkin committed [r1785] on Code

    An optimization of FloatNeuralNet: skip the computation of new weights in train()

  • Sergey Babkin committed [r1784] on Code

    Added the momentum mode to FloatNeuralNet.

  • Sergey Babkin committed [r1783] on Code

    Added FNN pull-up for training cases where the output is correct but its value is still below 0.

  • Sergey Babkin committed [r1782] on Code

    Fixed a -Wall compilation failure.

  • Sergey Babkin committed [r1781] on Code

    Added (conditionally compiled) the printout of misrecognized characters in MNIST.

  • Sergey Babkin committed [r1780] on Code

    Bumped the correctMultiplier_ FNN option to 0.01.

  • Sergey Babkin committed [r1779] on Code

    Added to FloatNeuralNet an option for classifier training that gives preference

  • Sergey Babkin committed [r1778] on Code

    Added batching to the MNIST example. It didn't work well at all.

  • Sergey Babkin committed [r1777] on Code

    More recorded experiments with MNIST NN training.

  • Sergey Babkin committed [r1776] on Code

    Added printing of the error rate on training data every 10th time for the Zipcode NN example.

  • Sergey Babkin committed [r1775] on Code

    More experiments with Zipcode NN, added checkpointing to it.

  • Sergey Babkin committed [r1774] on Code

    Added the counting of training passes and checkpointing in FNN.

  • Sergey Babkin committed [r1773] on Code

    Fixed the training rate setting in FloatNN autoRate. In the zip code recognition demo

  • Sergey Babkin committed [r1772] on Code

    Added a demo of MNIST handwriting recognition.

  • Sergey Babkin committed [r1771] on Code

    Fixed a memory allocation error that showed up with RELU; print the auto-rate value with %g because %f runs out of precision.

  • Sergey Babkin committed [r1770]

    Added FNN option enableWeightFloor_ and friends.

  • Sergey Babkin committed [r1769]

    In FNN: added option trainingRateScale_, option scaleRatePerLayer_,

  • Sergey Babkin committed [r1768]

    Improved the FNN test logging: print the stats on a redo pass, print the correct difference

  • Sergey Babkin committed [r1767]

    In FNN auto-rate, limited the number of redoings to one in a row, and added a lower

  • Sergey Babkin committed [r1766]

    The FloatNeuralNet auto-rate seems to work more or less, at least not dissolving into NaNs,

  • Sergey Babkin committed [r1765]

    More experimentation with FloatNeuralNet option autoRate_ shows that it's

  • Sergey Babkin committed [r1764]

    The FloatNeuralNet option autoRate_ is in place but not tested yet.

  • Sergey Babkin committed [r1763]

    Renamed FloatNeuralNet options to be consistent with Triceps style.

  • Sergey Babkin committed [r1762]

    When computing the gradient for adjusting the breaking point of the CORNER

  • Sergey Babkin committed [r1761]

    Added a demo of XOR implemented with CORNER activation function in one neuron.

  • Sergey Babkin committed [r1760]

    Added to svn a forgotten header file.

  • Sergey Babkin committed [r1759]

    More experiments in mixing FloatNeuralNet activation functions.

  • Sergey Babkin committed [r1758]

    A couple of experiments with FloatNeuralNet:

  • Sergey Babkin committed [r1757]

    Added the full build dependency for mktest and mkdemo.

  • Sergey Babkin committed [r1756]

    Added options to FloatNeuralNet constructor.

  • Sergey Babkin committed [r1755]

    Added the demo targets at the top level.

  • Sergey Babkin committed [r1754]

    Split off the larger examples to be "demos" rather than "tests", because they

  • Sergey Babkin committed [r1753]

    Early work on adding an on-disk store.

  • Sergey Babkin committed [r1752]

    Put back the default limitation of weights as +-1 in FloatNeuralNet,

  • Sergey Babkin committed [r1751]

    A better way to pull the breaking point of CORNER activation function sideways.

  • Sergey Babkin committed [r1750]

    The CORNER activation function sort of works empirically, and shows a useful direction, but needs more thinking to get it more right.

  • Sergey Babkin committed [r1749]

    Fixed the bug in backpropagation of CORNER activation function, but now it started producing a straight line.

  • Sergey Babkin committed [r1748]

    The CORNER activation function basically works, but not with applyGradient() yet.

  • Sergey Babkin committed [r1747]

    Reorganized where the activation function derivatives get applied, made

  • Sergey Babkin committed [r1746]

    Started adding state for the CORNER activation function in FloatNeuralNet.

  • Sergey Babkin committed [r1745]

    Added a test for LEAKY_RELU with a random seed. It does get stuck as badly as basic RELU without reclaiming.

  • Sergey Babkin committed [r1744]

    Reuse the helper function throughout tests in t_FloatNeuralNet

  • Sergey Babkin committed [r1743]

    Extracted the t_FloatNeuralNet typical training and printing segments into functions.

  • Sergey Babkin committed [r1742]

    Don't try to return a negative value in size_t in FloatNeuralNet::reclaim().

  • Sergey Babkin committed [r1741]

    In FloatNeuralNet removed the counting of the last level's usage, and implemented the LEAKY_RELU activation function (sketched after this list).

  • Sergey Babkin committed [r1740]

    Made the weight saturation limit in FloatNeuralNet adjustable.

  • Sergey Babkin committed [r1739]

    Removed FloatNeuralNet::trainOld().

  • Sergey Babkin committed [r1738]

    Collect the neuron usage statistics only for RELU activation function.

  • Sergey Babkin committed [r1737]

    Made the last level in FloatNeuralNet pseudo-activated by copying the inactivated values.

  • Sergey Babkin committed [r1736]

    Extracted backpropagation by one level into FloatNeuralNet::backpropLevel().

  • Sergey Babkin committed [r1735]

    More experimentation with tests on random seeds in FloatNeuralNet. Used and documented a mixed way of applying gradients, both by case and cumulative.

  • Sergey Babkin committed [r1734]

    Added a constant that allows boosting the gradients for offsets in FloatNeuralNet, currently set to 1.

  • Sergey Babkin committed [r1733]

    Experimenting with FloatNeuralNet::applyGradient() and random inputs.

  • Sergey Babkin committed [r1732]

    Added FloatNeuralNet::applyGradient().

  • Sergey Babkin committed [r1731]

    More variants of gradient norms computed in FloatNeuralNet.

  • Sergey Babkin committed [r1730]

    Cleaned up the obsolete unused code (a separate loop for reclaiming neurons) from FloatNeuralNet test.

  • Sergey Babkin committed [r1729]

    Computation of total gradient in FloatNeuralNet.

  • Sergey Babkin committed [r1728]

    Test the FloatNeuralNet error computation with a non-moving "training pass".

  • Sergey Babkin committed [r1727]

    Computation of error during the training passes.

  • Sergey Babkin committed [r1726]

    Changed FloatNeuralNet::simpleDump() to use SubVector/SubMatrix.
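
As a side note on the momentum work mentioned in [r1823] above, here is a minimal, generic sketch of how momentum is typically combined with stochastic gradient descent. It is not FloatNeuralNet's actual code; the names (momentumSgdStep, Weights) and the suggested momentum value are illustrative assumptions only.

    // A minimal sketch, assuming hypothetical names: one momentum-SGD update
    // applied to a flat weight vector. "grad" is the gradient computed for a
    // single training case (stochastic descent); "velocity" persists between calls.
    #include <cstddef>
    #include <vector>

    using Weights = std::vector<double>;

    void momentumSgdStep(Weights &weights, Weights &velocity, const Weights &grad,
                         double rate /* training rate */, double momentum /* e.g. 0.9 */)
    {
        for (std::size_t i = 0; i < weights.size(); ++i) {
            // Blend the previous step direction with the new stochastic gradient...
            velocity[i] = momentum * velocity[i] - rate * grad[i];
            // ...and move the weight along the blended direction.
            weights[i] += velocity[i];
        }
    }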
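
Similarly, for the LEAKY_RELU activation implemented in [r1741] and tested in [r1745], this is a minimal sketch of the usual definition and its derivative as used in backpropagation; the slope constant and the function names are assumptions for illustration, not FloatNeuralNet's actual values or API.

    // A minimal sketch of leaky ReLU with an assumed negative-side slope.
    constexpr double kLeakySlope = 0.01; // hypothetical value

    inline double leakyRelu(double x)
    {
        return x >= 0. ? x : kLeakySlope * x;
    }

    inline double leakyReluDerivative(double x)
    {
        // Unlike plain RELU, the gradient on the negative side is small but
        // non-zero rather than exactly zero.
        return x >= 0. ? 1. : kLeakySlope;
    }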
