<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to Home</title><link>https://sourceforge.net/p/rnnl/wiki/Home/</link><description>Recent changes to Home</description><atom:link href="https://sourceforge.net/p/rnnl/wiki/Home/feed" rel="self"/><language>en</language><lastBuildDate>Tue, 20 Aug 2013 14:41:48 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/rnnl/wiki/Home/feed" rel="self" type="application/rss+xml"/><item><title>Home modified by Alex Graves</title><link>https://sourceforge.net/p/rnnl/wiki/Home/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v3
+++ v4
@@ -354,7 +354,7 @@
 recognition. It has matched the best recorded performance in phoneme
 recognition on the TIMIT database&lt;sup&gt;9&lt;/sup&gt;, and recently won three handwriting
 recognition competitions at the ICDAR 2009 conference, for offline
-French&lt;sup&gt;10^, offline Arabic^11&lt;/sup&gt; and offline Farsi character
+French&lt;sup&gt;10&lt;/sup&gt;, offline Arabic&lt;sup&gt;11&lt;/sup&gt; and offline Farsi character
 classification&lt;sup&gt;12&lt;/sup&gt;. Unlike the competing systems, RNNLIB worked entirely
 on raw inputs, and therefore did not require any preprocessing or
 alphabet-specific feature extraction. It also has among the best
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Alex Graves</dc:creator><pubDate>Tue, 20 Aug 2013 14:41:48 -0000</pubDate><guid>https://sourceforge.netdb3ccd315b87740a988d227ddaf77302d0de8d77</guid></item><item><title>Home modified by Alex Graves</title><link>https://sourceforge.net/p/rnnl/wiki/Home/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v2
+++ v3
@@ -262,16 +262,16 @@

 Variables:

--   float inputs[numTimesteps,inputPattSize] = array of input vectors
--   int seqDims[numSeqs,numDims] = array of sequence dimensions
--   ( R ) float targetPatterns[numTimesteps,targetPattSize] = array of
+-   float inputs\[numTimesteps,inputPattSize\] = array of input vectors
+-   int seqDims\[numSeqs,numDims\] = array of sequence dimensions
+-   ( R ) float targetPatterns\[numTimesteps,targetPattSize\] = array of
     regression target vectors
--   ( C ) int targetClasses[numTimesteps] = array of target classes
--   ( T, SC ) char targetStrings[numSeqs,maxTargStringLength] = array of
+-   ( C ) int targetClasses\[numTimesteps\] = array of target classes
+-   ( T, SC ) char targetStrings\[numSeqs,maxTargStringLength\] = array of
     target strings for transcription
--   ( T, C, SC ) char labels[numLabels, maxLabelLength] = class label
+-   ( T, C, SC ) char labels\[numLabels, maxLabelLength\] = class label
     names (can just be “1”,“2”…)
--   ( O ) char seqTags[numSeqs,maxSeqTagLength] = array of tags for
+-   ( O ) char seqTags\[numSeqs,maxSeqTagLength\] = array of tags for
     sequences (e.g. filename they were created from)

 [netCDF Operator](http://nco.sourceforge.net/) provides several tools
@@ -310,9 +310,7 @@
 the same scripts can be used to build realistic experiments, given more
 data.

-If you want to adapt the python scripts to create netcdf files for your
-own experiments, [here](http://gfesuite.noaa.gov/developer/netCDFPythonInterface.html) is
-a useful tutorial on using netcdf with python.
+If you want to adapt the python scripts to create netcdf files for your own experiments, [here](http://gfesuite.noaa.gov/developer/netCDFPythonInterface.html) is a useful tutorial on using netcdf with python.

 Utilities
 =========
@@ -342,9 +340,12 @@
 python libraries are required for some of the scripts:

 -   [SciPy](http:///www.scipy.org/) (for all scripts)
--   [matplotlib](http://matplotlib.sourceforge.net/) (for all plotting/visualisation scripts)
--   [PIL](http://www.pythonware.com/products/pil/) (for plot\_variables.py)
--   [ScientificPython](http://sourcesup.cru.fr/projects/scientific-py/) (for normalise\_inputs.sh)
+-   [matplotlib](http://matplotlib.sourceforge.net/) (for all
+    plotting/visualisation scripts)
+-   [PIL](http://www.pythonware.com/products/pil/) (for
+    plot\_variables.py)
+-   [ScientificPython](http://sourcesup.cru.fr/projects/scientific-py/)
+    (for normalise\_inputs.sh)

 Experimental Results
 ====================
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Alex Graves</dc:creator><pubDate>Tue, 20 Aug 2013 14:38:16 -0000</pubDate><guid>https://sourceforge.netdc8644b9919f1478f6d21c081216b1be35e8a073</guid></item><item><title>Home modified by Alex Graves</title><link>https://sourceforge.net/p/rnnl/wiki/Home/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v1
+++ v2
@@ -1,8 +1,442 @@
-Welcome to your wiki!
-
-This is the default page, edit it as you see fit. To add a new page simply reference it within brackets, e.g.: [SamplePage].
-
-The wiki uses [Markdown](/p/rnnl/wiki/markdown_syntax/) syntax.
-
-[[members limit=20]]
-[[download_button]]
+RNNLIB is a recurrent neural network library for sequence learning problems. Applicable to most types of spatiotemporal data, it has proven particularly effective for speech and handwriting recognition.
+
+Contents
+==
+
+[TOC]
+
+Introduction
+============
+
+RNNLIB is a recurrent neural network library for sequence labelling
+problems, such as speech and handwriting recognition. It implements the
+Long Short-Term Memory (LSTM) architecture&lt;sup&gt;1&lt;/sup&gt;, as well as more
+traditional neural network structures, such as Multilayer Perceptrons
+and standard recurrent networks with nonlinear hidden units. Its most
+important features are:
+
+-   Bidirectional Long Short-Term Memory&lt;sup&gt;2&lt;/sup&gt;, which provides access to
+    long range contextual information in all input directions
+-   Connectionist Temporal Classification&lt;sup&gt;3&lt;/sup&gt;, which allows the system to
+    transcribe unsegmented sequence data
+-   Multidimensional Recurrent Neural Networks&lt;sup&gt;4&lt;/sup&gt;, which extends the
+    system to data with more than one spatiotemporal dimension (images,
+    videos, fMRI scans etc.)
+
+All of these are explained in more detail in my Ph.D. thesis&lt;sup&gt;5&lt;/sup&gt;. The
+library also implements the multilayer, subsampling structure developed
+for offline Arabic handwriting recognition&lt;sup&gt;6&lt;/sup&gt;. This structure allows the
+network to efficiently label high resolution data such as raw images and
+speech waveforms.
+
+Taken together, the above components make RNNLIB a generic system for
+labelling and classifying data with one or more spatiotemporal
+dimensions. Perhaps its greatest strength is its flexibility: as well as
+speech and handwriting&lt;sup&gt;7&lt;/sup&gt; recognition, it has so far been applied (with
+varying degrees of success) to image classification, object recognition,
+facial expression recognition, EEG and fMRI classification, motion
+capture labelling, robot localisation, wind turbine energy prediction,
+signature verification, image compression and touch sensor
+classification. RNNLIB is also able to accept a wide variety of
+different input representations for the same task, e.g. raw sensor data
+or hand-crafted features (as shown for online handwriting&lt;sup&gt;8&lt;/sup&gt;). See my
+[homepage](http://www6.in.tum.de/Main/Graves) for more publications.
+
+RNNLIB also implements adaptive weight noise regularisation&lt;sup&gt;14&lt;/sup&gt;, which makes it possible to train an arbitrary neural network with stochastic variational inference (or equivalently, to minimise the two-part description length of the training data given the network weights, plus the weights themselves). This form of regularisation makes overfitting virtually impossible; however, it can lead to very long training times.
+
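+To make this concrete, here is a minimal numpy sketch of the weight
+noise idea (an illustration only, not RNNLIB's actual implementation:
+the toy quadratic loss stands in for a real network error, and the
+description-length penalty and the updates to the noise levels
+themselves are omitted):
+
+    # Sketch: adaptive weight noise with Monte Carlo gradient estimates.
+    # Each weight has a mean (mu) and a noise std. dev. (sigma); 0.075
+    # mirrors the default of the mdlInitStdDev parameter described below.
+    import numpy as np
+
+    np.random.seed(0)
+    mu = np.random.randn(5)             # weight means
+    sigma = 0.075 * np.ones(5)          # per-weight noise std. dev.
+
+    def loss_grad(w):
+        # toy quadratic loss; a real run would backpropagate through the net
+        return 0.5 * np.dot(w, w), w
+
+    num_samples = 2                     # cf. the mdlSamples parameter
+    grad_mu = np.zeros_like(mu)
+    for _ in range(num_samples):
+        eps = np.random.randn(5)        # with mdlSymmetricSampling, the
+        w = mu + sigma * eps            # pair (eps, -eps) would be used
+        _, g = loss_grad(w)
+        grad_mu += g / num_samples      # Monte Carlo estimate of dE/dmu
+    mu -= 1e-4 * grad_mu                # steepest descent step (cf. learnRate)
+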
+Installation
+============
+
+RNNLIB is written in C++ and should compile on any platform. However, it
+is currently only tested on Linux and OSX.
+
+Building it requires the following:
+
+-   A modern C++ compiler (e.g. gcc 3.0 or higher)
+-   [GNU Libtool](http://www.gnu.org/software/libtool/)
+-   [GNU automake version 1.9](http://www.gnu.org/software/automake/)
+    (NOTE: will not work with version 1.10)
+-   [NetCDF scientific data
+    library](http://www.unidata.ucar.edu/software/netcdf/)
+-   [Boost C++ Libraries](http://www.boost.org/) version 1.36 or higher
+    (headers only, no compilation needed.)
+
+In addition, the following python packages are needed for the auxiliary
+scripts in the ‘utils’ directory:
+
+-   [SciPy](http://www.scipy.org/)
+-   [matplotlib](http://matplotlib.sourceforge.net/)
+-   [PIL](http://www.pythonware.com/products/pil/)
+
+These packages are needed to create and manipulate netcdf data files
+with python, and to run the experiments in the ‘examples’ directory:
+
+-   [ScientificPython](http://sourcesup.cru.fr/projects/scientific-py/)
+    (NOT Scipy)
+-   [netCDF Operator](http://nco.sourceforge.net/)
+
+To build RNNLIB, first download the source, then enter the root
+directory and type
+
+    ./configure
+    make
+
+This should create the binary file ‘rnnlib’ in the ‘src’ directory. Note
+that on most linux systems the default installation directory for the
+Boost headers is ‘/usr/local/include/boost-VERSION\_NUMBER’ which is not
+on the standard include path. In this case type
+
+    CXXFLAGS=-I/usr/local/include/boost-VERSION_NUMBER/ ./configure
+    make
+
+If you wish to install the binary type:
+
+    make install
+
+By default this will use ‘/usr’ as the installation root (for which you
+will usually need administrator privileges). You can change the install
+path with the --prefix option of the configure script (use ./configure
+--help for other options).
+
+It is recommended that you add the directory containing the ‘rnnlib’
+binary to your path, as otherwise the tools in the ‘utilities’ directory
+will not work.
+
+Project files are provided for the following integrated development
+environments in the ‘ide’ directory:
+
+-   kdevelop (KDE, linux)
+-   xcode (OSX)
+
+Usage
+=====
+
+RNNLIB can be run from the command line as follows:
+
+    Usage: rnnlib [config_options] config_file
+    config_options syntax: --&lt;variable_name&gt;=&lt;variable_value&gt;
+    whitespace not allowed in variable names or values
+    all config_file variables overwritten by config_options
+    setting  = "" removes the variable from the config
+    repeated variables overwritten by last specified
+
+All the parameters determining the network structure, experimental setup
+etc. can be specified either in the config file or on the command line.
+
+The main parameters are as follows:
+
+  Parameter            | Type                  | Allowed Values                                           | Default                                 | Comment
+  ---------------------|-----------------------|----------------------------------------------------------|-----------------------------------------|--------
+  autosave             | boolean               | true,false                                               | false                                   | see below
+  batchLearn           | boolean               | true,false                                               | true if RPROP is used, false otherwise  | false =\&gt; gradient descent updates at the end of each sequence, true =\&gt; at the end of epochs only
+  dataFraction         | real                  | 0-1                                                      | 1                                       | determines fraction of the data to load
+  hiddenBlock          | list of integer lists | all \&gt;= 1                                             |                                         | Hidden layer block dimensions
+  hiddenSize           | integer list          | all \&gt;= 1                                             |                                         | Sizes of the hidden layers
+  hiddenType           | string                | tanh, linear, logistic, lstm, linear\_lstm, softsign     | lstm                                    | Type of units in the hidden layers
+  inputBlock           | integer list          | all \&gt;= 1                                             |                                         | Input layer block dimensions
+  maxTestsNoBest       | integer               | \&gt;= 0                                                 | 20                                      | Number of error tests without improvement on the validation set before early stopping
+  optimiser            | string                | steepest, rprop                                          | steepest                                | Weight update algorithm
+  learnRate            | real                  | 0-1                                                      | 1e-4                                    | Learning rate (steepest descent optimiser only)
+  momentum             | real                  | 0-1                                                      | 0.9                                     | Momentum (steepest descent optimiser only)
+  subsampleSize        | integer list          | all \&gt;= 1                                             |                                         | Sizes of hidden subsample layers
+  task                 | string                | classification, sequence\_classification, transcription  |                                         | Network task. sequence\_\* =\&gt; one target for whole sequence (not for each point in the sequence). transcription =\&gt; unsegmented sequence labelling with CTC.
+  trainFile            | string list           |                                                          |                                         | Netcdf files used for training. Note that all datasets can consist of multiple files. During each training epoch, the files will be cycled through in random order, with the sequences cycled randomly within each file
+  valFile              | string list           |                                                          |                                         | Netcdf files used for validation / early stopping
+  testFile             | string list           |                                                          |                                         | Netcdf files used for testing
+  verbose              | boolean               | true,false                                               | false                                   | Verbose console output
+  mdl                  | boolean               | true,false                                               | false                                   | Use adaptive weight noise (M)inimum (D)escription (L)ength regularisation
+  mdlWeight            | real                  | 0-1                                                      | 1                                       | weight for MDL regularisation (0 =\&gt; no regularisation; 1 =\&gt; *true* MDL)
+  mdlInitStdDev        | real                  | \&gt; 0                                                  | 0.075                                   | initial std. dev. for MDL adaptive weight noise
+  mdlSamples           | int                   | \&gt;= 1                                                 | 1                                       | number of Monte Carlo samples to pick for each sequence to get stochastic derivatives for MDL adaptive weight noise (more samples =\&gt; less noisy derivatives, more computational cost)
+  mdlSymmetricSampling | boolean               | true,false                                               | false                                   | if true, use symmetric (AKA antithetical) sampling to reduce variance in the derivatives
+
+Parameter names and values are separated by whitespace, and must
+themselves contain no whitespace. Lists are comma separated, e.g.:
+
+    trainFile a.nc,b.nc,c.nc
+
+and lists of lists are semicolon separated, e.g.:
+
+    hiddenBlock 3,3;4,4
+
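+In other words, a value like 3,3;4,4 denotes the two integer lists
+\[3,3\] and \[4,4\] (here, the dimensions of two hidden layer blocks).
+A one-line python sketch of how such a value could be parsed:
+
+    value = '3,3;4,4'
+    blocks = [[int(x) for x in group.split(',')] for group in value.split(';')]
+    print(blocks)   # prints [[3, 3], [4, 4]]
+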
+See the ‘examples’ directory for examples of config files.
+
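+As a further illustration, a small (hypothetical) config file for a
+transcription task might look as follows; the parameter names come from
+the table above, while the file names and sizes are made up:
+
+    task transcription
+    hiddenType lstm
+    hiddenSize 100
+    trainFile train.nc
+    valFile val.nc
+    testFile test.nc
+    learnRate 1e-4
+    momentum 0.9
+    autosave true
+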
+To override parameters at the command line, the syntax is:
+
+    rnnlib --OPTION_NAME=VALUE CONFIG_FILE
+
+so e.g.
+
+    rnnlib --learnRate=1e-5 CONFIG_FILE
+
+will override the learnRate set in the config file.
+
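+Following the usage notes above, setting an empty value should remove a
+variable from the config entirely, e.g. (exact shell quoting may vary):
+
+    rnnlib --momentum= CONFIG_FILE
+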
+Autosave
+--------
+
+If the 'autosave' option is true the system will store all dynamic
+information (e.g. network weights) as it runs. Without this there will
+be no way to resume an interrupted experiment (e.g. if a computer
+crashes) and the final trained system will not be saved. If saving is
+activated, timestamped config files with dynamic information appended
+will be saved after each training epoch, and whenever one of the error
+measures for the given task is improved on. In addition a timestamped
+log file will be saved, containing all the console output. For example,
+for a classification task, the command
+
+    rnnlib --autosave=true classification.config
+
+might create the following files
+
+-   classification@2009.07.17-13.08.40.712422.best\_classificationError.save
+-   classification@2009.07.17-13.08.40.712422.best\_crossEntropyError.save
+-   classification@2009.07.17-13.08.40.712422.last.save
+-   classification@2009.07.17-13.08.40.712422.log
+
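+Since the .save files are themselves config files (with the dynamic
+state appended), resuming an interrupted run should be as simple as
+passing the most recent save file back to rnnlib, e.g.:
+
+    rnnlib classification@2009.07.17-13.08.40.712422.last.save
+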
+Data File Format
+================
+
+All RNNLIB data files (for training, testing and validation) are in
+[netCDF](http://www.unidata.ucar.edu/software/netcdf/) format, a binary
+file format designed for large scientific datasets.
+
+A netCDF file has the following basic structure:
+
+-   Dimensions:
+    -   …
+-   Variables:
+    -   …
+-   Data:
+    -   …
+
+Following the statement ‘Variables’, the variables that will be listed in
+the ‘Data’ section are declared. For example
+
+    float foo[ 3 ]
+
+would declare an array of floats with size 3. To save a variable-sized
+array, the size can be declared after ‘Dimensions’. So the example would
+look like:
+
+    Dimensions:
+    fooSize= 3
+    Variables:
+    float foo[ fooSize ];
+
+Following ‘Data’ the actual values are stored:
+
+    Data:
+    foo = 1,2,3;
+
+The data format for RNNLIB is specified below. The codes at the start
+determine which tasks the dimension/variable is required for:
+
+-   R = regression (sum-of-squares error with linear outputs)
+-   T = transcription (sequence labelling with connectionist temporal
+    classification outputs)
+-   C = classification (cross-entropy error with softmax outputs)
+-   SC = sequence\_classification (as above, but only one target per
+    sequence)
+-   O = optional, not required for any task
+
+Dimensions:
+
+-   numSeqs = total number of data sequences
+-   numTimesteps = total number of timesteps (sum of lengths of all
+    sequences)
+-   inputPattSize = size of input vectors (e.g. 3 if input points are
+    RGB pixels)
+-   ( O ) maxSeqTagLength = length of longest sequence tag string
+    (including null terminator)
+-   ( R ) targetPattSize = size of target vectors
+-   ( T, SC ) maxTargStringLength = length of longest target string
+    (including null terminator)
+-   ( T, C, SC ) numLabels = number of distinct class labels
+-   ( T, C, SC ) maxLabelLength = length of longest label string
+    (including null terminator)
+
+Variables:
+
+-   float inputs[numTimesteps,inputPattSize] = array of input vectors
+-   int seqDims[numSeqs,numDims] = array of sequence dimensions
+-   ( R ) float targetPatterns[numTimesteps,targetPattSize] = array of
+    regression target vectors
+-   ( C ) int targetClasses[numTimesteps] = array of target classes
+-   ( T, SC ) char targetStrings[numSeqs,maxTargStringLength] = array of
+    target strings for transcription
+-   ( T, C, SC ) char labels[numLabels, maxLabelLength] = class label
+    names (can just be “1”,“2”…)
+-   ( O ) char seqTags[numSeqs,maxSeqTagLength] = array of tags for
+    sequences (e.g. filename they were created from)
+
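+Putting this together, here is a rough python sketch of writing a toy
+classification dataset in the above format using SciPy's netcdf module
+(the file name and values are invented, and a real classification
+dataset would also need the labels variable and its dimensions):
+
+    # Sketch: write a minimal RNNLIB-style netCDF file with scipy.
+    import numpy as np
+    from scipy.io import netcdf_file  # scipy.io.netcdf.netcdf_file on older SciPy
+
+    f = netcdf_file('toy.nc', 'w')
+    f.createDimension('numSeqs', 2)
+    f.createDimension('numTimesteps', 10)     # two sequences of 5 steps each
+    f.createDimension('inputPattSize', 3)
+    f.createDimension('numDims', 1)           # 1D (purely temporal) sequences
+
+    inputs = f.createVariable('inputs', 'f', ('numTimesteps', 'inputPattSize'))
+    inputs[:] = np.random.rand(10, 3)
+
+    seqDims = f.createVariable('seqDims', 'i', ('numSeqs', 'numDims'))
+    seqDims[:] = [[5], [5]]                   # length of each sequence
+
+    targets = f.createVariable('targetClasses', 'i', ('numTimesteps',))
+    targets[:] = np.zeros(10, dtype='i')      # one target class per timestep
+
+    f.close()
+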
+[netCDF Operator](http://nco.sourceforge.net/) provides several tools
+for creating, manipulating and displaying netCDF files, and is
+recommended for anyone wanting to make their own datasets. In particular
+the tools ncgen and ncdump convert ASCII text files to and from netcdf
+format.
+
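+For instance, assuming the ‘foo’ example above were saved as a text
+file foo.cdl, it could be converted to netcdf and inspected with:
+
+    ncgen -o foo.nc foo.cdl
+    ncdump foo.nc
+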
+Examples
+========
+
+The ‘examples’ directory provides example experiments that can be run
+with RNNLIB. To run the experiments, the ‘utilities’ directory must be
+added to your pythonpath, and the following python packages must be
+installed:
+
+-   [SciPy](http://www.scipy.org/)
+-   [ScientificPython](http://sourcesup.cru.fr/projects/scientific-py/)
+-   [PIL](http://www.pythonware.com/products/pil/)
+
+In each subdirectory type
+
+    ./build_netcdf
+
+to build the netcdf datasets, then
+
+    rnnlib SAMPLE_NAME.config
+
+to run the experiments. Note that some directories may contain more than
+one config file, since different tasks may be defined for the same data.
+
+The results of these experiments will not correspond to published
+results, because only a fraction of the complete dataset is used in each
+case (to keep the size of the distribution down). In addition, early
+stopping is not used, because no validation files are created. However
+the same scripts can be used to build realistic experiments, given more
+data.
+
+If you want to adapt the python scripts to create netcdf files for your
+own experiments, [here](http://gfesuite.noaa.gov/developer/netCDFPythonInterface.html) is
+a useful tutorial on using netcdf with python.
+
+Utilities
+=========
+
+The ‘utilities’ directory provides a range of auxiliary tools for
+RNNLIB. In order for these to work, the directory containing the
+‘rnnlib’ binary must be added to your path. The ‘utilities’ directory
+must be added to your pythonpath for the experiments in the ‘examples’
+directory to work. The most important utilities are:
+
+-   dump\_sequence\_variables.sh: writes to file all the internal
+    variables (activations, delta terms etc.) of the network while
+    processing a single sequence
+-   plot\_variables.py: plots a single variable file saved with
+    ‘dump\_sequence\_variables’
+-   plot\_errors.sh: plots the error curves written to a log file during
+    training
+-   normalise\_inputs.sh: adjusts the inputs of one or more netcdf files
+    to have mean 0, standard deviation 1 (relative to the first file,
+    which should be used for training)
+-   gradient\_check.sh: numerically checks the network’s gradient
+    calculation
+
+All files should provide a list of arguments if called with no
+arguments. The python scripts will give a list of optional arguments,
+defaults etc. if called with the single argument ‘-h’. The following
+python libraries are required for some of the scripts:
+
+-   [SciPy](http://www.scipy.org/) (for all scripts)
+-   [matplotlib](http://matplotlib.sourceforge.net/) (for all plotting/visualisation scripts)
+-   [PIL](http://www.pythonware.com/products/pil/) (for plot\_variables.py)
+-   [ScientificPython](http://sourcesup.cru.fr/projects/scientific-py/) (for normalise\_inputs.sh)
+
+Experimental Results
+====================
+
+RNNLIB’s best results so far have been in speech and handwriting
+recognition. It has matched the best recorded performance in phoneme
+recognition on the TIMIT database&lt;sup&gt;9&lt;/sup&gt;, and recently won three handwriting
+recognition competitions at the ICDAR 2009 conference, for offline
+French&lt;sup&gt;10&lt;/sup&gt;, offline Arabic&lt;sup&gt;11&lt;/sup&gt; and offline Farsi character
+classification&lt;sup&gt;12&lt;/sup&gt;. Unlike the competing systems, RNNLIB worked entirely
+on raw inputs, and therefore did not require any preprocessing or
+alphabet-specific feature extraction. It also has among the best
+published results on the IAM Online and IAM offline English handwriting
+databases&lt;sup&gt;13&lt;/sup&gt;.
+
+Citations
+=========
+
+If you use RNNLIB for your research, please cite it with the following
+reference:
+
+    @misc{rnnlib,
+      Author = {Alex Graves},
+      Title = {RNNLIB: A recurrent neural network library for sequence learning problems},
+      howpublished = {\url{http://sourceforge.net/projects/rnnl/}}
+    }
+
+References
+==========
+
+&lt;sup&gt;1&lt;/sup&gt; Sepp Hochreiter and Jürgen Schmidhuber. 
+[Long Short-Term Memory](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.7752&amp;amp;rep=rep1&amp;amp;type=pdf)
+Neural Computation, 9(8):1735-1780, 1997
+
+&lt;sup&gt;2&lt;/sup&gt; Alex Graves and Jürgen Schmidhuber. 
+[Framewise phoneme classification with bidirectional LSTM and other neural network architectures](http://www6.in.tum.de/pub/Main/Publications/Graves2005b.pdf)
+Neural Networks, 18(5-6):602-610, June 2005
+
+&lt;sup&gt;3&lt;/sup&gt; Alex Graves, Santiago Fernández, Faustino Gomez and Jürgen Schmidhuber. 
+[Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks](http://www6.in.tum.de/pub/Main/Publications/Graves2006a.pdf)
+International Conference on Machine Learning, June 2006, Pittsburgh
+
+&lt;sup&gt;4&lt;/sup&gt; Alex Graves, Santiago Fernández and Jürgen Schmidhuber.
+[Multidimensional recurrent neural networks](http://www6.in.tum.de/pub/Main/Publications/Graves2007a.pdf)
+International Conference on Artificial Neural Networks, September 2007,
+Porto
+
+&lt;sup&gt;5&lt;/sup&gt; Alex Graves. 
+[Supervised Sequence Labelling with Recurrent Neural Networks](http://www6.in.tum.de/pub/Main/Publications/Graves2008c.pdf)
+PhD thesis, July 2008, Technische Universität München
+
+&lt;sup&gt;6&lt;/sup&gt; Alex Graves and Jürgen Schmidhuber. 
+[Offline handwriting recognition with multidimensional recurrent neural networks](http://www6.in.tum.de/pub/Main/Publications/graves_nips_2009.pdf)
+Advances in Neural Information Processing Systems, December 2008,
+Vancouver
+
+&lt;sup&gt;7&lt;/sup&gt; Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. 
+[A novel connectionist system for unconstrained handwriting recognition](http://www6.in.tum.de/pub/Main/Publications/Graves2008b.pdf)
+IEEE Transactions on Pattern Analysis and Machine Intelligence,
+31(5):855-868, May 2009
+
+&lt;sup&gt;8&lt;/sup&gt; Alex Graves, Santiago Fernández, Marcus Liwicki, Horst Bunke, and Jürgen Schmidhuber. 
+[Unconstrained online handwriting recognition with recurrent neural networks](http://www6.in.tum.de/pub/Main/Publications/Graves2008a.pdf)
+Advances in Neural Information Processing Systems, December 2007,
+Vancouver
+
+&lt;sup&gt;9&lt;/sup&gt; Santiago Fernández, Alex Graves, and Jürgen Schmidhuber. 
+[Phoneme recognition in TIMIT with BLSTM-CTC](http://www6.in.tum.de/pub/Main/Publications/Fernandez2008a.pdf)
+Technical Report IDSIA-04-08, IDSIA, April 2008.
+
+&lt;sup&gt;10&lt;/sup&gt; E. Grosicki and H. El Abed. 
+[ICDAR 2009 Handwriting Recognition Competition](http://www.cvc.uab.es/icdar2009/papers/3725b398.pdf)
+International Conference on Document Analysis and Recognition, July
+2009, Barcelona
+
+&lt;sup&gt;11&lt;/sup&gt; V. Märgner and H. El Abed. 
+[ICDAR 2009 Arabic Handwriting Recognition Competition](http://www.cvc.uab.es/icdar2009/papers/3725b383.pdf)
+International Conference on Document Analysis and Recognition, July
+2009, Barcelona
+
+&lt;sup&gt;12&lt;/sup&gt; S. Mozaffari and H. Soltanizadeh. 
+[ICDAR 2009 Handwritten
+Farsi/Arabic Character Recognition Competition](http://www.cvc.uab.es/icdar2009/papers/3725b413.pdf)
+International Conference on Document Analysis and Recognition, July
+2009, Barcelona
+
+&lt;sup&gt;13&lt;/sup&gt; Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami,
+Horst Bunke, and Jürgen Schmidhuber. 
+[A novel connectionist system for unconstrained handwriting recognition](http://www6.in.tum.de/pub/Main/Publications/Graves2008b.pdf)
+IEEE Transactions on Pattern Analysis and Machine Intelligence,
+31(5):855-868, May 2009
+
+&lt;sup&gt;14&lt;/sup&gt; Alex Graves. 
+[Practical Variational Inference For Neural Networks](http://www.cs.toronto.edu/~graves/nips_2011.pdf)
+Advances in Neural Information Processing Systems, December 2011, Granada, Spain
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Alex Graves</dc:creator><pubDate>Tue, 20 Aug 2013 14:34:54 -0000</pubDate><guid>https://sourceforge.net6f5b37529e5745ccd01f8f5208e11fbb41197316</guid></item><item><title>Discussion for Home page</title><link>https://sourceforge.net/p/rnnl/wiki/Home/</link><description>&lt;div class="markdown_content"&gt;&lt;div class="toc"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#introduction"&gt;Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#installation"&gt;Installation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#usage"&gt;Usage&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#autosave"&gt;Autosave&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#data-file-format"&gt;Data File Format&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#examples"&gt;Examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#utilities"&gt;Utilities&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#experimental-results"&gt;Experimental Results&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#citations"&gt;Citations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#references"&gt;References&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h1 id="introduction"&gt;Introduction&lt;/h1&gt;
&lt;p&gt;RNNLIB is a recurrent neural network library for sequence labelling&lt;br /&gt;
problems, such as speech and handwriting recognition. It implements the&lt;br /&gt;
Long Short-Term Memory (LSTM) architecture^1^, as well as more&lt;br /&gt;
traditional neural network structures, such as Multilayer Perceptrons&lt;br /&gt;
and standard recurrent networks with nonlinear hidden units. Its most&lt;br /&gt;
important features are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Bidirectional Long Short-Term Memory^2^, which provides access to&lt;br /&gt;
    long range contextual information in all input directions&lt;/li&gt;
&lt;li&gt;Connectionist Temporal Classification^3^, which allows the system to&lt;br /&gt;
    transcribe unsegmented sequence data&lt;/li&gt;
&lt;li&gt;Multidimensional Recurrent Neural Networks^4^, which extends the&lt;br /&gt;
    system to data with more than one spatiotemporal dimension (images,&lt;br /&gt;
    videos, fMRI scans etc.)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All of which are explained in more detail in my Ph.D. thesis^5^. The&lt;br /&gt;
library also implements the multilayer, subsampling structure developed&lt;br /&gt;
for offline arabic handwriting recognition^6^. This structure allows the&lt;br /&gt;
network to efficiently label high resolution data such as raw images and&lt;br /&gt;
speech waveforms.&lt;/p&gt;
&lt;p&gt;Taken together, the above components make RNNLIB a generic system for&lt;br /&gt;
labelling and classifying data with one or more spatiotemporal&lt;br /&gt;
dimensions. Perhaps its greatest strength is its flexibility: as well as&lt;br /&gt;
speech and handwriting^7^ recognition, it has so far been applied (with&lt;br /&gt;
varying degrees of success) to image classification, object recognition,&lt;br /&gt;
facial expression recognition, EEG and fMRI classification, motion&lt;br /&gt;
capture labelling, robot localisation, wind turbine energy prediction,&lt;br /&gt;
signature verification, image compression and touch sensor&lt;br /&gt;
classification. RNNLIB is also able to accept a wide variety of&lt;br /&gt;
different input representations for the same task, e.g. raw sensor data&lt;br /&gt;
or hand-crafted features (as shown for online handwriting^8^). See my&lt;br /&gt;
&lt;a class="" href="http://www6.in.tum.de/Main/Graves" rel="nofollow"&gt;homepage&lt;/a&gt; for more publications.&lt;/p&gt;
&lt;h1 id="installation"&gt;Installation&lt;/h1&gt;
&lt;p&gt;RNNLIB is written in C++ and should compile on any platform. However it&lt;br /&gt;
is currently only tested for Linux and OSX.&lt;/p&gt;
&lt;p&gt;Building it requires the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A modern C++ compiler (e.g. gcc 3.0 or higher)&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://www.gnu.org/software/libtool/" rel="nofollow"&gt;GNU Libtool&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://www.gnu.org/software/automake/" rel="nofollow"&gt;GNU automake version 1.9&lt;/a&gt;&lt;br /&gt;
    (NOTE: will not work with version 1.10)&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://www.unidata.ucar.edu/software/netcdf/" rel="nofollow"&gt;NetCDF scientific data&lt;br /&gt;
    library&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://www.boost.org/" rel="nofollow"&gt;Boost C++ Libraries&lt;/a&gt; version 1.36 or higher&lt;br /&gt;
    (headers only, no compilation needed.)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In addition, the following python packages are needed for the auxiliary&lt;br /&gt;
scripts in the ‘utils’ directory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="" href="http://www.scipy.org/" rel="nofollow"&gt;SciPy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://matplotlib.sourceforge.net/"&gt;matplotlib&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://www.pythonware.com/products/pil/" rel="nofollow"&gt;PIL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And these packages are needed to create and manipulate netcdf data files&lt;br /&gt;
with python, and to run the experiments in the ‘examples’ directory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="" href="http://sourcesup.cru.fr/projects/scientific-py/" rel="nofollow"&gt;ScientificPython&lt;/a&gt;&lt;br /&gt;
    (NOT Scipy)&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://nco.sourceforge.net/"&gt;netCDF Operator&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To build RNNLIB, first download the source, then enter the root&lt;br /&gt;
directory and type&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;configure&lt;/span&gt;
&lt;span class="n"&gt;make&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This should create the binary file ‘rnnlib’ in the ‘src’ directory. Note&lt;br /&gt;
that on most linux systems the default installation directory for the&lt;br /&gt;
Boost headers is ‘/usr/local/include/boost-VERSION_NUMBER’ which is not&lt;br /&gt;
on the standard include path. In this case type&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;CXXFLAGS&lt;/span&gt;&lt;span class="o"&gt;=-&lt;/span&gt;&lt;span class="n"&gt;I&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;usr&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;local&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;include&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;boost&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;VERSION_NUMBER&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;configure&lt;/span&gt;
&lt;span class="n"&gt;make&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If you wish to install the binary type:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;make&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;By default this will use ‘/usr’ as the installation root (for which you&lt;br /&gt;
will usually need administrator privileges). You can change the install&lt;br /&gt;
path with the --prefix option of the configure script (use ./configure&lt;br /&gt;
--help for other options)&lt;/p&gt;
&lt;p&gt;It is recommended that you add the directory containing the ‘rnnlib’&lt;br /&gt;
binary to your path, as otherwise the tools in the ‘utilities’ directory&lt;br /&gt;
will not work.&lt;/p&gt;
&lt;p&gt;Project files are provided for the following integrated development&lt;br /&gt;
environments in the ‘ide’ directory:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;kdevelop (KDE, linux)&lt;/li&gt;
&lt;li&gt;xcode (OSX)&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id="usage"&gt;Usage&lt;/h1&gt;
&lt;p&gt;RNNLIB can be run from the command line as follows:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;Usage&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rnnlib&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="n"&gt;config_options&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="n"&gt;config_file&lt;/span&gt;
&lt;span class="n"&gt;config_options&lt;/span&gt; &lt;span class="n"&gt;syntax&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;--&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;variable_name&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;=&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;variable_value&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="n"&gt;whitespace&lt;/span&gt; &lt;span class="n"&gt;not&lt;/span&gt; &lt;span class="n"&gt;allowed&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;variable&lt;/span&gt; &lt;span class="n"&gt;names&lt;/span&gt; &lt;span class="n"&gt;or&lt;/span&gt; &lt;span class="n"&gt;values&lt;/span&gt;
&lt;span class="n"&gt;all&lt;/span&gt; &lt;span class="n"&gt;config_file&lt;/span&gt; &lt;span class="n"&gt;variables&lt;/span&gt; &lt;span class="n"&gt;overwritten&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;config_options&lt;/span&gt;
&lt;span class="n"&gt;setting&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;variable_value&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;quot;&amp;quot;&lt;/span&gt; &lt;span class="n"&gt;removes&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;variable&lt;/span&gt; &lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;
&lt;span class="n"&gt;repeated&lt;/span&gt; &lt;span class="n"&gt;variables&lt;/span&gt; &lt;span class="n"&gt;overwritten&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;last&lt;/span&gt; &lt;span class="n"&gt;specified&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;All the parameters determining the network structure, experimental setup&lt;br /&gt;
etc. can be specified either in the config file or on the command line.&lt;/p&gt;
&lt;p&gt;The main parameters are as follows:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Allowed Values&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Comment&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;autosave&lt;/td&gt;
&lt;td&gt;boolean&lt;/td&gt;
&lt;td&gt;true,false&lt;/td&gt;
&lt;td&gt;false&lt;/td&gt;
&lt;td&gt;see below&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;batchLearn&lt;/td&gt;
&lt;td&gt;boolean&lt;/td&gt;
&lt;td&gt;true,false&lt;/td&gt;
&lt;td&gt;true if RPROP is used, false otherwise&lt;/td&gt;
&lt;td&gt;false =&gt; gradient descent updates at the end of each sequence, true =&gt; at the end of epochs only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dataFraction&lt;/td&gt;
&lt;td&gt;real&lt;/td&gt;
&lt;td&gt;0-1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;determines fraction of the data to load&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hiddenBlock&lt;/td&gt;
&lt;td&gt;list of integer lists&lt;/td&gt;
&lt;td&gt;all &gt;=1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Hidden layer block dimensions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hiddenSize&lt;/td&gt;
&lt;td&gt;integer list&lt;/td&gt;
&lt;td&gt;all &gt;=1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Sizes of the hidden layers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;hiddenType&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;tanh, linear, logistic, lstm, linear_lstm, softsign&lt;/td&gt;
&lt;td&gt;lstm&lt;/td&gt;
&lt;td&gt;Type of units in the hidden layers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;inputBlock&lt;/td&gt;
&lt;td&gt;integer list&lt;/td&gt;
&lt;td&gt;all &gt;= 1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Input layer block dimensions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;maxTestsNoBest&lt;/td&gt;
&lt;td&gt;integer&lt;/td&gt;
&lt;td&gt;&gt;=0&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;Number of error tests without improvement on the validation set before early stopping&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;optimiser&lt;/td&gt;
&lt;td&gt;steepest, rprop&lt;/td&gt;
&lt;td&gt;steepest&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;learnRate&lt;/td&gt;
&lt;td&gt;real&lt;/td&gt;
&lt;td&gt;0-1&lt;/td&gt;
&lt;td&gt;1e-4&lt;/td&gt;
&lt;td&gt;Learning rate (steepest descent optimiser only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;momentum&lt;/td&gt;
&lt;td&gt;real&lt;/td&gt;
&lt;td&gt;0-1&lt;/td&gt;
&lt;td&gt;0.9&lt;/td&gt;
&lt;td&gt;Momentum (steepest descent optimiser only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;subsampleSize&lt;/td&gt;
&lt;td&gt;integer list&lt;/td&gt;
&lt;td&gt;all &gt;= 1&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Sizes of hidden subsample layers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;task&lt;/td&gt;
&lt;td&gt;string&lt;/td&gt;
&lt;td&gt;classification, sequence_classification, transcription&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Network task. sequence_* =&gt; one target for whole sequence (not for each point in the sequence). transcription =&gt; unsegmented sequence labelling with CTC.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;trainFile&lt;/td&gt;
&lt;td&gt;string list&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Netcdf files used for training. Note that all datasets can consist of multiple files. During each training epoch, the files will be cycled through in random order, with the sequences cycled randomly within each file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;valFile&lt;/td&gt;
&lt;td&gt;string list&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Netcdf files used for validation / early stopping&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;testFile&lt;/td&gt;
&lt;td&gt;string list&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Netcdf files used for testing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;verbose&lt;/td&gt;
&lt;td&gt;boolean&lt;/td&gt;
&lt;td&gt;true,false&lt;/td&gt;
&lt;td&gt;false&lt;/td&gt;
&lt;td&gt;Verbose console output&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Parameter names and values are separated by whitespace, and must&lt;br /&gt;
themselves contain no whitespace. Lists are comma separated, e.g.:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;trainFile&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nc&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;and lists of lists are semicolon separated, e.g.:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;hiddenBlock&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;See the ‘examples’ directory for examples of config files.&lt;/p&gt;
&lt;p&gt;To override parameters at the command line, the syntax is:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rnnlib&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;OPTION_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;VALUE&lt;/span&gt; &lt;span class="n"&gt;CONFIG_FILE&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;so e.g.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rnnlib&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;learnRate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;1e-5&lt;/span&gt; &lt;span class="n"&gt;CONFIG_FILE&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;will override the learnRate set in the config file.&lt;/p&gt;
&lt;h2 id="autosave"&gt;Autosave&lt;/h2&gt;
&lt;p&gt;If the 'autosave' option is true the system will store all dynamic&lt;br /&gt;
information (e.g. network weights) as it runs. Without this there will&lt;br /&gt;
be no way to to resume an interrupted experiment (e.g. if a computer&lt;br /&gt;
crashes) and the final trained system will not be saved. If saving is&lt;br /&gt;
activated, timestamped config files with dynamic information appended&lt;br /&gt;
will be saved after each training epoch, and whenever one of the error&lt;br /&gt;
measures for the given task is improved on. In addition a timestamped&lt;br /&gt;
log file will be saved, containing all the console output. For example,&lt;br /&gt;
for a classification task, the command&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rnnlib&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;autosave&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="n"&gt;classification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;might create the following files&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;classification@2009.07.17-13.08.40.712422.best_classificationError.save&lt;/li&gt;
&lt;li&gt;classification@2009.07.17-13.08.40.712422.best_crossEntropyError.save&lt;/li&gt;
&lt;li&gt;classification@2009.07.17-13.08.40.712422.last.save&lt;/li&gt;
&lt;li&gt;classification@2009.07.17-13.08.40.712422.log&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id="data-file-format"&gt;Data File Format&lt;/h1&gt;
&lt;p&gt;All RNNLIB data files (for training, testing and validation) are in&lt;br /&gt;
&lt;a class="" href="http://www.unidata.ucar.edu/software/netcdf/" rel="nofollow"&gt;netCDF&lt;/a&gt; format, a binary&lt;br /&gt;
file format designed for large scientific datasets.&lt;/p&gt;
&lt;p&gt;A netCDF file has the following basic structure:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dimensions:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;o …&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Variables:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;o …&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Data:&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;o …&lt;/p&gt;
&lt;p&gt;Following the statement ‘Variables’ the variables that will listed in&lt;br /&gt;
the ‘Data’ section are declared. For example&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="kt"&gt;float&lt;/span&gt; &lt;span class="n"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;would declare an array of floats with size 3. For saving variable sized&lt;br /&gt;
array the size can be declared after ‘Dimensions’. So the example would&lt;br /&gt;
look like:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;Dimensions&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="n"&gt;fooSize&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="n"&gt;Variables&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="n"&gt;float&lt;/span&gt; &lt;span class="n"&gt;foo&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="n"&gt;fooSize&lt;/span&gt; &lt;span class="o"&gt;];&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Following ‘Data’ the actual values are stored:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;Data&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
&lt;span class="n"&gt;foo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The data format for RNNLIB is specified below. The codes at the start&lt;br /&gt;
determine which tasks the dimension/variable is required for:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;R = regression (sum-of-squares error with linear outputs)&lt;/li&gt;
&lt;li&gt;T = transcription (sequence labelling with connectionist temporal&lt;br /&gt;
    classification outputs)&lt;/li&gt;
&lt;li&gt;C = classification (cross-entropy error with softmax outputs)&lt;/li&gt;
&lt;li&gt;SC = sequence_classification (as above, but only one target per&lt;br /&gt;
    sequence)&lt;/li&gt;
&lt;li&gt;O = optional, not required for any task&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Dimensions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;numSeqs = total number of data sequences&lt;/li&gt;
&lt;li&gt;numTimesteps = total number of timesteps (sum of lengths of all&lt;br /&gt;
    sequences)&lt;/li&gt;
&lt;li&gt;inputPattSize = size of input vectors (e.g. 3 if input points are&lt;br /&gt;
    RGB pixels)&lt;/li&gt;
&lt;li&gt;( O ) maxSeqTagLength = length of longest sequence tag string&lt;br /&gt;
    (including null terminator)&lt;/li&gt;
&lt;li&gt;( R ) targetPattSize = size of target vectors&lt;/li&gt;
&lt;li&gt;( T, SC ) maxTargStringLength = length of longest target string&lt;br /&gt;
    (including null terminator)&lt;/li&gt;
&lt;li&gt;( T, C, SC ) numLabels = number of distinct class labels&lt;/li&gt;
&lt;li&gt;( T, C, SC ) maxLabelLength = length of longest label string&lt;br /&gt;
    (including null terminator)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Variables:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;float inputs&lt;span&gt;[numTimesteps,inputPattSize]&lt;/span&gt; = array of input vectors&lt;/li&gt;
&lt;li&gt;int seqDims&lt;span&gt;[numSeqs,numDims]&lt;/span&gt; = array of sequence dimensions&lt;/li&gt;
&lt;li&gt;( R ) float targetPatterns&lt;span&gt;[numTimesteps,targetPattSize]&lt;/span&gt; = array of&lt;br /&gt;
    regression target vectors&lt;/li&gt;
&lt;li&gt;( C ) int targetClasses&lt;span&gt;[numTimesteps]&lt;/span&gt; = array of target classes&lt;/li&gt;
&lt;li&gt;( T, SC ) char targetStrings&lt;span&gt;[numSeqs,maxTargStringLength]&lt;/span&gt; = array of&lt;br /&gt;
    target strings for transcription&lt;/li&gt;
&lt;li&gt;( T, C, SC ) char labels&lt;span&gt;[numLabels, maxLabelLength]&lt;/span&gt; = class label&lt;br /&gt;
    names (can just be “1”,“2”…)&lt;/li&gt;
&lt;li&gt;( O ) char seqTags&lt;span&gt;[numSeqs,maxSeqTagLength]&lt;/span&gt; = array of tags for&lt;br /&gt;
    sequences (e.g. filename they were created from)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a class="" href="http://nco.sourceforge.net/"&gt;netCDF Operator&lt;/a&gt; provides several tools&lt;br /&gt;
for creating, manipulating and displaying netCDF files, and is&lt;br /&gt;
recommended for anyone wanting to make their own datasets. In particular&lt;br /&gt;
the toold ncgen and ncdump convert ASCII text files to and from netcdf&lt;br /&gt;
format.&lt;/p&gt;
&lt;h1 id="examples"&gt;Examples&lt;/h1&gt;
&lt;p&gt;The ‘examples’ directory provides example experiments that can be run&lt;br /&gt;
with RNNLIB. To run the experiments, the ‘utilities’ directory must be&lt;br /&gt;
added to your pythonpath, and the following python packages must be&lt;br /&gt;
installed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="" href="http://www.scipy.org/" rel="nofollow"&gt;SciPy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://sourcesup.cru.fr/projects/scientific-py/" rel="nofollow"&gt;ScientificPython&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://www.pythonware.com/products/pil/" rel="nofollow"&gt;PIL&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In each subdirectory type&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;build_netcdf&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;to build the netcdf datasets, then&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="n"&gt;rnnlib&lt;/span&gt; &lt;span class="n"&gt;SAMPLE_NAME&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;to run the experiments. Note that some directories may contain more than&lt;br /&gt;
1 config file, since different tasks may be defined for the same data.&lt;/p&gt;
&lt;p&gt;The results of these experiments will not correspond to published&lt;br /&gt;
results, because only a fraction of the complete dataset is used in each&lt;br /&gt;
case (to keep the size of the distribution down). In addition, early&lt;br /&gt;
stopping is not used, because no validation files are created. However&lt;br /&gt;
the same scripts can be used to build realistic experiments, given more&lt;br /&gt;
data.&lt;/p&gt;
&lt;p&gt;If you want to adapt the python scripts to create netcdf files for your&lt;br /&gt;
own experiments,&lt;br /&gt;
&lt;a class="" href="http://gfesuite.noaa.gov/developer/netCDFPythonInterface.html" rel="nofollow"&gt;here&lt;/a&gt; is&lt;br /&gt;
a useful tutorial on using netcdf with python.&lt;/p&gt;
&lt;h1 id="utilities"&gt;Utilities&lt;/h1&gt;
&lt;p&gt;The ‘utilities’ directory provides a range of auxiliary tools for&lt;br /&gt;
RNNLIB. In order for these to work, the directory containing the&lt;br /&gt;
‘rnnlib’ binary must be added to your path. The ‘utilities’ directory&lt;br /&gt;
must be added to your pythonpath for the experiments in the ‘examples’&lt;br /&gt;
directory to work. The most important utilities are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dump_sequence_variables.sh: writes to file all the internal&lt;br /&gt;
    variables (activations, delta terms etc.) of the network while&lt;br /&gt;
    processing a single sequence&lt;/li&gt;
&lt;li&gt;plot_variables.py: plots a single variable file saved with&lt;br /&gt;
    ‘dump_sequence_variables’&lt;/li&gt;
&lt;li&gt;plot_errors.sh: plots the error curves written to a log file during&lt;br /&gt;
    training&lt;/li&gt;
&lt;li&gt;normalise_inputs.sh: adjusts the inputs of one or more netcdf files&lt;br /&gt;
    to have mean 0, standard deviation 1 (relative to the first file,&lt;br /&gt;
    which should be used for training)&lt;/li&gt;
&lt;li&gt;gradient_check.sh: numerically checks the network’s gradient&lt;br /&gt;
    calculation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All files should provide a list of arguments if called with no&lt;br /&gt;
arguments. The python scripts will give a list of optional arguments,&lt;br /&gt;
defaults etc. if called with the single argument ‘-h’ (see the example&lt;br /&gt;
after the list below). The following python libraries are required for&lt;br /&gt;
some of the scripts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="" href="http://www.scipy.org/" rel="nofollow"&gt;SciPy&lt;/a&gt; (for all scripts)&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://matplotlib.sourceforge.net/"&gt;matplotlib&lt;/a&gt; (for all&lt;br /&gt;
    plotting/visualisation scripts)&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://www.pythonware.com/products/pil/" rel="nofollow"&gt;PIL&lt;/a&gt; (for&lt;br /&gt;
    plot_variables.py)&lt;/li&gt;
&lt;li&gt;&lt;a class="" href="http://sourcesup.cru.fr/projects/scientific-py/" rel="nofollow"&gt;ScientificPython&lt;/a&gt;&lt;br /&gt;
    (for normalise_inputs.sh)&lt;/li&gt;
&lt;/ul&gt;
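&lt;p&gt;Following the ‘-h’ convention above, the options and defaults of&lt;br /&gt;
plot_variables.py, for instance, can be listed with:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;plot_variables.py -h
&lt;/pre&gt;&lt;/div&gt;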
&lt;h1 id="experimental-results"&gt;Experimental Results&lt;/h1&gt;
&lt;p&gt;RNNLIB’s best results so far have been in speech and handwriting&lt;br /&gt;
recognition. It has matched the best recorded performance in phoneme&lt;br /&gt;
recognition on the TIMIT database&lt;sup&gt;9&lt;/sup&gt;, and recently won three handwriting&lt;br /&gt;
recognition competitions at the ICDAR 2009 conference, for offline&lt;br /&gt;
French&lt;sup&gt;10&lt;/sup&gt;, offline Arabic&lt;sup&gt;11&lt;/sup&gt; and offline Farsi character&lt;br /&gt;
classification&lt;sup&gt;12&lt;/sup&gt;. Unlike the competing systems, RNNLIB worked entirely&lt;br /&gt;
on raw inputs, and therefore did not require any preprocessing or&lt;br /&gt;
alphabet-specific feature extraction. It also has among the best&lt;br /&gt;
published results on the IAM Online and IAM Offline English handwriting&lt;br /&gt;
databases&lt;sup&gt;13&lt;/sup&gt;.&lt;/p&gt;
&lt;h1 id="citations"&gt;Citations&lt;/h1&gt;
&lt;p&gt;If you use RNNLIB for your research, please cite it with the following&lt;br /&gt;
reference:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;misc&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;rnnlib&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;\
&lt;span class="n"&gt;Author&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Alex&lt;/span&gt; &lt;span class="n"&gt;Graves&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;\
&lt;span class="n"&gt;Title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;RNNLIB&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;A&lt;/span&gt; &lt;span class="n"&gt;recurrent&lt;/span&gt; &lt;span class="n"&gt;neural&lt;/span&gt; &lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="n"&gt;library&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;sequence&lt;/span&gt;
&lt;span class="n"&gt;learning&lt;/span&gt; &lt;span class="n"&gt;problems&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;\
&lt;span class="n"&gt;howpublished&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;\\&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//sourceforge.net/projects/rnnl/&amp;gt;}}}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h1 id="references"&gt;References&lt;/h1&gt;
&lt;p&gt;&lt;sup&gt;1&lt;/sup&gt; Sepp Hochreiter and Jürgen Schmidhuber. &lt;a class="" href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.7752&amp;amp;rep=rep1&amp;amp;type=pdf" rel="nofollow"&gt;Long Short-Term&lt;br /&gt;
Memory&lt;/a&gt;&lt;br /&gt;
Neural Computation, 9(8):1735-1780, 1997&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;2&lt;/sup&gt; Alex Graves and Jürgen Schmidhuber. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Graves2005b.pdf" rel="nofollow"&gt;Framewise phoneme classification&lt;br /&gt;
with bidirectional LSTM and other neural network&lt;br /&gt;
architectures&lt;/a&gt;&lt;br /&gt;
Neural Networks, 18(5-6):602-610, June 2005&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;3&lt;/sup&gt; Alex Graves, Santiago Fernández, Faustino Gomez and Jürgen&lt;br /&gt;
Schmidhuber. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Graves2006a.pdf" rel="nofollow"&gt;Connectionist temporal classification: Labelling&lt;br /&gt;
unsegmented sequence data with recurrent neural&lt;br /&gt;
networks&lt;/a&gt;&lt;br /&gt;
International Conference on Machine Learning, June 2006, Pittsburgh&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;4&lt;/sup&gt; Alex Graves, Santiago Fernández and Jürgen Schmidhuber.&lt;br /&gt;
&lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Graves2007a.pdf" rel="nofollow"&gt;Multidimensional recurrent neural&lt;br /&gt;
networks&lt;/a&gt;&lt;br /&gt;
International Conference on Artificial Neural Networks, September 2007,&lt;br /&gt;
Porto&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;5&lt;/sup&gt; Alex Graves. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Graves2008c.pdf" rel="nofollow"&gt;Supervised Sequence Labelling with Recurrent Neural&lt;br /&gt;
Networks&lt;/a&gt;&lt;br /&gt;
PhD thesis, July 2008, Technische Universität München&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;6&lt;/sup&gt; Alex Graves and Jürgen Schmidhuber. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/graves_nips_2009.pdf" rel="nofollow"&gt;Offline handwriting recognition&lt;br /&gt;
with multidimensional recurrent neural&lt;br /&gt;
networks&lt;/a&gt;&lt;br /&gt;
Advances in Neural Information Processing Systems, December 2008,&lt;br /&gt;
Vancouver&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;7&lt;/sup&gt; Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami,&lt;br /&gt;
Horst Bunke, and Jürgen Schmidhuber. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Graves2008b.pdf" rel="nofollow"&gt;A novel connectionist system for&lt;br /&gt;
unconstrained handwriting&lt;br /&gt;
recognition&lt;/a&gt;&lt;br /&gt;
IEEE Transactions on Pattern Analysis and Machine Intelligence,&lt;br /&gt;
31(5):855-868, May 2009&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;8&lt;/sup&gt; Alex Graves, Santiago Fernández, Marcus Liwicki, Horst Bunke, and&lt;br /&gt;
Jürgen Schmidhuber. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Graves2008a.pdf" rel="nofollow"&gt;Unconstrained online handwriting recognition with&lt;br /&gt;
recurrent neural&lt;br /&gt;
networks&lt;/a&gt;&lt;br /&gt;
Advances in Neural Information Processing Systems, December 2007,&lt;br /&gt;
Vancouver&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;9&lt;/sup&gt; Santiago Fernández, Alex Graves, and Jürgen Schmidhuber. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Fernandez2008a.pdf" rel="nofollow"&gt;Phoneme&lt;br /&gt;
recognition in TIMIT with&lt;br /&gt;
BLSTM-CTC&lt;/a&gt;&lt;br /&gt;
Technical Report IDSIA-04-08, IDSIA, April 2008.&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;10&lt;/sup&gt; E. Grosicki and H. El Abed. &lt;a class="" href="http://www.cvc.uab.es/icdar2009/papers/3725b398.pdf" rel="nofollow"&gt;ICDAR 2009 Handwriting Recognition&lt;br /&gt;
Competition&lt;/a&gt;&lt;br /&gt;
International Conference on Document Analysis and Recognition, July&lt;br /&gt;
2009, Barcelona&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;11&lt;/sup&gt; V. Märgner and H. El Abed. &lt;a class="" href="http://www.cvc.uab.es/icdar2009/papers/3725b383.pdf" rel="nofollow"&gt;ICDAR 2009 Arabic Handwriting&lt;br /&gt;
Recognition&lt;br /&gt;
Competition&lt;/a&gt;&lt;br /&gt;
International Conference on Document Analysis and Recognition, July&lt;br /&gt;
2009, Barcelona&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;12&lt;/sup&gt; S. Mozaffari and H. Soltanizadeh. &lt;a class="" href="http://www.cvc.uab.es/icdar2009/papers/3725b413.pdf" rel="nofollow"&gt;ICDAR 2009 Handwritten&lt;br /&gt;
Farsi/Arabic Character Recognition&lt;br /&gt;
Competition&lt;/a&gt;&lt;br /&gt;
International Conference on Document Analysis and Recognition, July&lt;br /&gt;
2009, Barcelona&lt;/p&gt;
&lt;p&gt;&lt;sup&gt;13&lt;/sup&gt; Alex Graves, Marcus Liwicki, Santiago Fernández, Roman Bertolami,&lt;br /&gt;
Horst Bunke, and Jürgen Schmidhuber. &lt;a class="" href="http://www6.in.tum.de/pub/Main/Publications/Graves2008b.pdf" rel="nofollow"&gt;A novel connectionist system for&lt;br /&gt;
unconstrained handwriting&lt;br /&gt;
recognition&lt;/a&gt;&lt;br /&gt;
IEEE Transactions on Pattern Analysis and Machine Intelligence,&lt;br /&gt;
31(5):855-868, May 2009&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Alex Graves</dc:creator><pubDate>Tue, 20 Aug 2013 14:01:52 -0000</pubDate><guid>https://sourceforge.netadba464bc891c33fa81899f341c736653d3748cc</guid></item><item><title>Home modified by Alex Graves</title><link>https://sourceforge.net/p/rnnl/wiki/Home/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;Welcome to your wiki!&lt;/p&gt;
&lt;p&gt;This is the default page, edit it as you see fit. To add a new page simply reference it within brackets, e.g.: &lt;span&gt;[SamplePage]&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;The wiki uses &lt;a class="" href="/p/rnnl/wiki/markdown_syntax/"&gt;Markdown&lt;/a&gt; syntax.&lt;/p&gt;
&lt;p&gt;&lt;h6&gt;Project Members:&lt;/h6&gt;&lt;ul class="md-users-list"&gt;&lt;li&gt;&lt;a href="/u/alexgraves/"&gt;Alex Graves&lt;/a&gt; (admin)&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;
&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Alex Graves</dc:creator><pubDate>Tue, 23 Apr 2013 18:02:00 -0000</pubDate><guid>https://sourceforge.netdea208817c0b64cdf2908e659753d388bdd48287</guid></item></channel></rss>