Added the fast brake factor, to adjust the amount of braking.
Fixed a diagnostic that got broken in refactoring.
Added support for combining momentum with stochastic descent.
Updated the naming: the last-pass weights were actually pre-last-pass weights.
Merged the main and corner weights in autorate computation (this should
Fixed the miscomputation of level sizes, added a test.
Merged the main weights and corner weights into a single interleaved vector.
Weights are now always accessed in the high-level way.
A minor clean-up.
Made FloatNeuralNet::setLevel() and getLevel() use higher-level primitives.
Made FloatNeuralNet::randomize() use the high-level representation of weights.
Changed the demos to return a stable result, so that it can be
A minor clean-up of an expression.
Converted the main case to use the Tweaker and updateVectorGradient().
Encapsulated tweaker in a structure, used it in updateVectorGradient().
Generalized the update of FNN weights, so far only in by-level computation.
The code used to build the graphs in the realSEUDO paper.
Converted the previously-ifdefed code for fast brake and fixed nu into FNN options.
Use exception references in XS code.
Docs for the Matlab build.
Added the Matlab-specific build.
Fixed the build for GCC 11.3.
Added the FloatNeuralNet experiments in slow start and arrested bouncing.
More MNIST & FloatNeuralNet experiments:
For the MNIST demo, added a trapeze mode with absolute X coordinates instead of relative widths (and it works even worse).
In FloatNeuralNet autoRate2_, made the rate bumps up higher and the bumps down lower to bring them closer together.
In FloatNeuralNet removed the reference to mdiff_* when momentum is disabled.
Fixed getting the first gradient in FloatNeuralNet.
Experiments with alternative encodings of MNIST input: via run-length and trapezes.
Added run-length encoding for the MNIST ZIP example.
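
The exact run-length format isn't spelled out in this entry; here is a minimal sketch of the general idea, assuming the 8-bit pixels of each image row get binarized by a threshold and replaced with the lengths of the alternating background/ink runs (the function name and threshold are made up for the illustration):

    #include <vector>
    #include <cstdint>

    // Encode one image row as lengths of alternating runs, starting with
    // a background run (which may have length 0 if the row starts with ink).
    std::vector<int> encodeRowRunLength(const std::vector<uint8_t> &row, uint8_t threshold = 128)
    {
        std::vector<int> runs;
        bool ink = false; // color of the current run: false = background, true = ink
        int len = 0;
        for (uint8_t px : row) {
            bool p = (px >= threshold);
            if (p == ink) {
                ++len;
            } else {
                runs.push_back(len); // close the current run and switch colors
                ink = p;
                len = 1;
            }
        }
        runs.push_back(len); // the final run
        return runs;
    }
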
Even more aggressive settings in FNN autoRate2, and a debug printout of the gradient.
More examples of FloatNeuralNet with a high-ish tweak rate.
More experimentation with autoRate2 in FloatNeuralNet, adjusting the conditions.
In FloatNeuralNet, added stopping of the momentum on the dimensions where the gradient changes sign.
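
A minimal sketch of that idea (not the actual FloatNeuralNet code; the names velocity, rate, and decay are hypothetical): keep a per-weight momentum term and zero it out on any dimension where the freshly computed gradient points opposite to the accumulated momentum, so the weight stops coasting in the old direction.

    #include <vector>
    #include <cstddef>

    // One momentum step with the sign-change stop: velocity keeps the
    // accumulated momentum per weight, and is dropped to 0 wherever the
    // new gradient has the opposite sign.
    void momentumStepWithStop(std::vector<double> &weights,
        const std::vector<double> &gradient,
        std::vector<double> &velocity, // same size as weights, starts at 0
        double rate, double decay)
    {
        for (size_t i = 0; i < weights.size(); i++) {
            if (velocity[i] * gradient[i] < 0.)
                velocity[i] = 0.; // the gradient flipped sign: stop the momentum on this dimension
            velocity[i] = decay * velocity[i] + gradient[i];
            weights[i] -= rate * velocity[i]; // descend along the (possibly reset) momentum
        }
    }
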
Added a small deterministic but unequal tweak to the gradients in FloatNeuralNet.
Changed the implementation of boosting the rare cases in FloatNeuralNet classifiers:
A MNIST version with 16x16 input and 256 first-layer neurons.
In MNIST, added an explicit label field to training data and added commented-out
Tried including the MNIST test vector into the training vector; this makes it recognized very well.
Measurements of MNIST example with momentum descent.
An optimization of FloatNeuralNet: skip the computation of new weights in train()
Added the momentum mode to FloatNeuralNet.
Added FNN pull-up for training cases where the output is correct but its value is still below 0.
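
A minimal sketch of how such a pull-up could look, under the assumption that the correct class is expected to produce a positive value (around +1) and that otherwise-correct cases get their error scaled down by something like the correctMultiplier_ option mentioned a few entries below; the function and argument names are hypothetical:

    // Compute the error to backpropagate for one output of a classifier case.
    // Hypothetical sketch: when the case is already classified right, the
    // error is normally de-emphasized, but the correct output keeps getting
    // pulled up at full strength while its value is still below 0.
    double classifierOutputError(double produced, double target,
        bool isCorrectClass, bool caseClassifiedRight, double correctMultiplier)
    {
        double err = produced - target;
        if (!caseClassifiedRight)
            return err; // misclassified cases train at full strength
        if (isCorrectClass && produced < 0.)
            return err; // pull-up: keep training hard until the correct output rises above 0
        return err * correctMultiplier; // otherwise de-emphasize the already-correct case
    }
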
Fixed a -Wall compilation failure.
Added (conditionally compiled) the printout of misrecognized characters in MNIST.
Bumped the correctMultiplier_ FNN option to 0.01.
Added to FloatNeuralNet an option for classifier training that gives preference
Added batching to the MNIST example. It didn't work well at all.
More recorded experiments with MNIST NN training.
Added printing of the error rate on training data every 10th time for the Zipcode NN example.
More experiments with Zipcode NN, added checkpointing to it.
Added the counting of training passes and checkpointing in FNN.
Fixed the training rate setting in FloatNN autoRate. In the zip code recognition demo
Added a demo of MNIST handwriting recognition.
Fixed a memory allocation error that showed up with RELU; print the auto-rate value with %g because %f runs out of precision.
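
The %f vs %g point in a nutshell: %f always prints 6 digits after the decimal point, so a tiny auto-rate collapses to 0.000000, while %g switches to scientific notation and keeps the value readable. For example:

    #include <cstdio>

    int main()
    {
        double rate = 1e-9;              // a hypothetically tiny auto-rate value
        std::printf("%f\n", rate);       // prints 0.000000 -- the digits are lost
        std::printf("%g\n", rate);       // prints 1e-09    -- still readable
        return 0;
    }
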
Added FNN option enableWeightFloor_ and friends.
In FNN: added option trainingRateScale_, option scaleRatePerLayer_,
Improved the FNN test logging: print the stats on a redo pass, print the correct difference
In FNN auto-rate, limited the number of redos to one in a row, and added a lower
The FloatNeuralNet auto-rate seems to work more or less, at least not dissolving into NaNs.
More experimentation with FloatNeuralNet option autoRate_ shows that it's
The FloatNeuralNet option autoRate_ is in place but not tested yet.
Renamed FloatNeuralNet options to be consistent with Triceps style.
When computing the gradient for adjusting the breaking point of the CORNER
Added a demo of XOR implemented with CORNER activation function in one neuron.
Added to svn a forgotten header file.
More experiments in mixing FloatNeuralNet activation functions.
A couple of experiments with FloatNeuralNet:
Added the full build dependency for mktest and mkdemo.
Added options to FloatNeuralNet constructor.
Added the demo targets at the top level.
Split off the larger examples to be "demos" rather than "tests", because they
Early work on adding an on-disk store.
Put back the default limitation of weights as +-1 in FloatNeuralNet.
A better way to pull the breaking point of CORNER activation function sideways.
The CORNER activation function sort of works empirically, and shows a useful direction, but needs more thinking to get it more right.
Fixed the bug in backpropagation of CORNER activation function, but now it started producing a straight line.
The CORNER activation function basically works, but not with applyGradient() yet.
Reorganized where the activation function derivatives get applied, made
Started adding state for the CORNER activation function in FloatNeuralNet.
Added a test for LEAKY_RELU with a random seed. It does get stuck as badly as basic RELU without reclaiming.
Reused the helper function throughout the tests in t_FloatNeuralNet.
Extracted the t_FloatNeuralNet typical training and printing segments into functions.
Don't try to return a negative value in size_t in FloatNeuralNet::reclaim().
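
The pitfall behind that fix is general C++: size_t is unsigned, so a subtraction that would go negative silently wraps around to a huge value instead. A small illustration (not the reclaim() code itself):

    #include <cstdio>
    #include <cstddef>

    int main()
    {
        size_t used = 3, limit = 5;
        size_t bad = used - limit;           // wraps around instead of becoming -2
        std::printf("%zu\n", bad);           // prints 18446744073709551614 on a 64-bit machine
        size_t good = (used > limit) ? used - limit : 0; // check before subtracting
        std::printf("%zu\n", good);          // prints 0
        return 0;
    }
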
In FloatNeuralNet removed the counting of the last level's usage, and implemented the LEAKY_RELU activation function.
Made the weight saturation limit in FloatNeuralNet adjustable.
Removed FloatNeuralNet::trainOld().
Collect the neuron usage statistics only for RELU activation function.
Made the last level in FloatNeuralNet pseudo-activated by copying the inactivated values.
Extracted backpropagation by one level into FloatNeuralNet::backpropLevel().
More experimentation with tests on random seeds in FloatNeuralNet. Used and documented a mixed way of applying gradients, both by case and cumulative.
Added a constant that allows boosting the gradients for offsets in FloatNeuralNet, currently set to 1.
Experimenting with FloatNeuralNet::applyGradient() and random inputs.
Added FloatNeuralNet::applyGradient().
More variants of gradient norms computed in FloatNeuralNet.
Cleaned up the obsolete unused code (a separate loop for reclaiming neurons) from FloatNeuralNet test.
Computation of total gradient in FloatNeuralNet.
Test the FloatNeuralNet error computation with a non-moving "training pass".
Computation of error during the training passes.
Changed FloatNeuralNet::simpleDump() to use SubVector/SubMatrix.