<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to V1_LCA tutorial</title><link>https://sourceforge.net/p/petavision/wiki/V1_LCA%2520tutorial/</link><description>Recent changes to V1_LCA tutorial</description><atom:link href="https://sourceforge.net/p/petavision/wiki/V1_LCA%20tutorial/feed" rel="self"/><language>en</language><lastBuildDate>Wed, 17 Jun 2015 23:12:04 -0000</lastBuildDate><atom:link href="https://sourceforge.net/p/petavision/wiki/V1_LCA%20tutorial/feed" rel="self" type="application/rss+xml"/><item><title>V1_LCA tutorial modified by Brian Broom-Peltz</title><link>https://sourceforge.net/p/petavision/wiki/V1_LCA%2520tutorial/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v3
+++ v4
@@ -1 +1 @@
-[[include repo=code path=/trunk/docs/tutorial/basic/V1_LCA_tutorial.md rev=10102]]
+[[include repo=code path=/trunk/docs/tutorial/basic/V1_LCA_tutorial.md]]
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Brian Broom-Peltz</dc:creator><pubDate>Wed, 17 Jun 2015 23:12:04 -0000</pubDate><guid>https://sourceforge.net256163c5be98a1b63086c667334496b002a13120</guid></item><item><title>V1_LCA tutorial modified by Brian Broom-Peltz</title><link>https://sourceforge.net/p/petavision/wiki/V1_LCA%2520tutorial/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;--- v2
+++ v3
@@ -1 +1 @@
-[[include repo=code path=/trunk/docs/tutorial/basic/V1_LCA_tutorial.md]]
+[[include repo=code path=/trunk/docs/tutorial/basic/V1_LCA_tutorial.md rev=10102]]
&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Brian Broom-Peltz</dc:creator><pubDate>Tue, 26 May 2015 21:04:49 -0000</pubDate><guid>https://sourceforge.netd18fb0196b8c29ea507c632233a01a340d7b8a1d</guid></item><item><title>V1_LCA tutorial modified by Brian Broom-Peltz</title><link>https://sourceforge.net/p/petavision/wiki/V1_LCA%2520tutorial/</link><description>&lt;div class="markdown_content"&gt;&lt;pre&gt;&lt;/pre&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Brian Broom-Peltz</dc:creator><pubDate>Fri, 15 May 2015 19:36:16 -0000</pubDate><guid>https://sourceforge.net158432006533870cc87d7dde12e56bba7011eaff</guid></item><item><title>V1_LCA tutorial modified by Brian Broom-Peltz</title><link>https://sourceforge.net/p/petavision/wiki/V1_LCA%2520tutorial/</link><description>&lt;div class="markdown_content"&gt;&lt;div&gt;
&lt;div class="markdown_content"&gt;&lt;h1 id="v1-lca-tutorial"&gt;V1 LCA Tutorial&lt;/h1&gt;
&lt;p&gt;This basic tutorial is set up to walk you through downloading a dataset, performing unsupervised learning on a V1 dictionary of that dataset using the LCA algorithm, and finally looking at your output using one of our automated scripts.  In the next tutorial we will look at how to train an SLP classifier using the V1 dictionary you train here.&lt;/p&gt;
&lt;p&gt;Our intended audience is aware of some key ideas in computational neuroscience:&lt;br /&gt;
    Sparse coding with LCA (Olshausen and Field, 1996; Rozell et al., 2008)&lt;br /&gt;
    ...  &lt;br /&gt;
&lt;/p&gt;
&lt;p&gt;But honestly, you can get through this without reading any of those papers; you may just miss out on appreciating how cool what you are doing is :p&lt;/p&gt;
&lt;p&gt;Depending on your internet connection and the speed of your machine, you should be able to reach Step 3, "Running PetaVision", and start the experiment in about an hour. Step 3 itself could take minutes to days depending on the speed of your machine (many CPU threads + GPUs = faster) and how long you choose to train your network.&lt;/p&gt;
&lt;div class="toc"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="#v1-lca-tutorial"&gt;V1 LCA Tutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#0-pre-requisites"&gt;0. Pre-requisites:&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#1-get-your-dataset"&gt;1. Get your dataset&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#11-download-cifar-dataset"&gt;1.1. Download CIFAR dataset&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#12-extract-cifar-dataset"&gt;1.2. Extract CIFAR dataset&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#13-extract-images-using-octave-script"&gt;1.3.  Extract images using octave script&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#14-combine-the-data_batches-to-make-a-master-file"&gt;1.4. Combine the data_batches to make a master file&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#2-fix-up-your-params-file"&gt;2. Fix up your params file&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#21-get-your-params-file-v1_lcaparams"&gt;2.1. Get your params file: V1_LCA.params&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#22-inspect-the-params-file"&gt;2.2. Inspect the params file&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#221-hypercol"&gt;2.2.1. HyPerCol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#222-hyperlayer"&gt;2.2.2. HyPerLayer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#223-connections"&gt;2.2.3. Connections&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#23-customize-the-params-file-for-a-run-on-your-system"&gt;2.3. Customize the params file for a run on your system&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#231-hypercol-column"&gt;2.3.1. HyPerCol "column"&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#232-movie-image-update-image-path"&gt;2.3.2. Movie "Image" | Update Image path&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#234-optional-momentumconn-v1toerror-load-weights-from-file"&gt;2.3.4. [Optional] MomentumConn "V1ToError" | Load weights from file]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#3-running-petavision"&gt;3. Running PetaVision&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#31-build-petavision"&gt;3.1 Build PetaVision&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#32-start-a-screen"&gt;3.2 Start a Screen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#33-run-time-arguments"&gt;3.3 Run-time arguments&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#4-analyze-run"&gt;4. Analyze Run&lt;/a&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="#41-outputpath-directory-files"&gt;4.1 outputPath directory files&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#42-checkpoint-directory-files"&gt;4.2 Checkpoint directory files&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#43-run-automated-analysis-script"&gt;4.3 Run automated analysis script&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#44-cloud-aws-view-the-files-on-your-local-machine"&gt;4.4 [Cloud - AWS] - view the files on your local machine&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="#5-experiment"&gt;5. Experiment&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="#6-comments-questions"&gt;6. Comments / Questions?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;h1 id="0-pre-requisites"&gt;0. Pre-requisites:&lt;/h1&gt;
&lt;ol&gt;
&lt;li&gt;Successfully built PetaVision and ran a BasicSystemTest&lt;/li&gt;
&lt;li&gt;Grab a cup of coffee and hunker down. &lt;/li&gt;
&lt;/ol&gt;
&lt;h1 id="1-get-your-dataset"&gt;1. Get your dataset&lt;/h1&gt;
&lt;p&gt;You can run this tutorial using any dataset, but since we like the K.I.S.S. principle, we're going to walk through getting a popular and well-documented dataset: CIFAR.&lt;/p&gt;
&lt;p&gt;CIFAR consists of 50k training images in 10 categories (5k each) plus 10k test images for classification.  The images are 32x32 color images of the following categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.  The dataset is hosted by the CS department at the University of Toronto by Alex Krizhevsky (namesake of the award-winning 'AlexNet' implementation on Caffe).  This dataset is simple to work with (from a whole-image classification perspective), and since the images are small you will be able to run your experiments much faster than if you were to use full-frame pictures. &lt;/p&gt;
&lt;p&gt;For more information about the CIFAR dataset: &lt;a href="http://www.cs.toronto.edu/~kriz/cifar.html" rel="nofollow"&gt;http://www.cs.toronto.edu/~kriz/cifar.html&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="11-download-cifar-dataset"&gt;1.1. Download CIFAR dataset&lt;/h2&gt;
&lt;p&gt;Whether you are on the AWS server or on your local machine, you can use wget:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~
&lt;span class="nv"&gt;$ &lt;/span&gt;mkdir dataset
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;dataset
&lt;span class="nv"&gt;$ &lt;/span&gt;wget &lt;span class="s2"&gt;"http://www.cs.toronto.edu/~kriz/cifar-10-matlab.tar.gz"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;tar -zxvf cifar-10-matlab.tar.gz
&lt;/pre&gt;&lt;/div&gt;
&lt;h2 id="12-extract-cifar-dataset"&gt;1.2. Extract CIFAR dataset&lt;/h2&gt;
&lt;p&gt;You will be using PetaVision/mlab/HyPerLCA/extractImagesOctave.m, but first you'll need to modify extractImagesOctave.m to point to the correct local directories. Follow the instructions at the top of the script.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/path/to/PetaVision/mlab/HyPerLCA
&lt;span class="nv"&gt;$ &lt;/span&gt;vim extractImagesOctave.m
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Make sure you save your changes to the script before continuing.&lt;/p&gt;
&lt;h2 id="13-extract-images-using-octave-script"&gt;1.3.  Extract images using octave script&lt;/h2&gt;
&lt;p&gt;Navigate to where you unzipped the cifar-10-matlab.tar.gz file and extract the images in Octave:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/dataset/cifar-10-batches-mat/
&lt;span class="nv"&gt;$ &lt;/span&gt;octave

&amp;gt; addpath&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'path/to/PetaVision/mlab/HyPerLCA'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&amp;gt; extractImagesOctave&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'data_batch_1.mat'&lt;/span&gt;,1&lt;span class="o"&gt;)&lt;/span&gt;
&amp;gt; extractImagesOctave&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'data_batch_2.mat'&lt;/span&gt;,2&lt;span class="o"&gt;)&lt;/span&gt;
&amp;gt; extractImagesOctave&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'data_batch_3.mat'&lt;/span&gt;,3&lt;span class="o"&gt;)&lt;/span&gt;
&amp;gt; extractImagesOctave&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'data_batch_4.mat'&lt;/span&gt;,4&lt;span class="o"&gt;)&lt;/span&gt;
&amp;gt; extractImagesOctave&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'data_batch_5.mat'&lt;/span&gt;,5&lt;span class="o"&gt;)&lt;/span&gt;
&amp;gt; extractImagesOctave&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'test_batch.mat'&lt;/span&gt;,0&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Note: I recognize this is not an elegant method, but it works and is clear&lt;/li&gt;
&lt;/ul&gt;
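&lt;p&gt;If typing the six calls gets tedious, a shell loop can generate them for you. This is only a sketch that prints the commands (a dry run); to actually execute one, pass it to octave --eval along with the addpath line shown above.&lt;/p&gt;

```shell
# Dry run: print the six extractImagesOctave calls from above.
# To execute one, pass it to `octave --eval` along with the addpath line.
cmds=$(
  for i in 1 2 3 4 5; do
    echo "extractImagesOctave('data_batch_${i}.mat',${i})"
  done
  echo "extractImagesOctave('test_batch.mat',0)"
)
printf '%s\n' "$cmds"
```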
&lt;h2 id="14-combine-the-data_batches-to-make-a-master-file"&gt;1.4. Combine the data_batches to make a master file&lt;/h2&gt;
&lt;p&gt;Each run of extractImagesOctave produced a unique text file listing that batch's images in random order.  If you wish your training dataset to include all of the training images, you can concatenate the lists by copying them to a common directory under different names (an inelegant solution) and then running:  &lt;br /&gt;
&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;cat *.txt &amp;gt; mixed_cifar.txt
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Congratulations!  You now have a massive training dataset along with a test set that you will use in the next tutorial to create a classifier.  For now we are only concerned with using the dataset for unsupervised learning.&lt;/p&gt;
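&lt;p&gt;A quick way to sanity-check the concatenation is to count lines in the result. The sketch below is self-contained: it uses throwaway stand-in lists with hypothetical names in a temporary directory rather than the real CIFAR lists.&lt;/p&gt;

```shell
# Stand-in demo of the concatenation step (file names are hypothetical).
tmp=$(mktemp -d)
cd "$tmp"
printf 'img_%d.png\n' 1 2 3 > batch1.txt
printf 'img_%d.png\n' 4 5   > batch2.txt
cat *.txt > mixed_cifar.txt    # same command as above
wc -l < mixed_cifar.txt        # total lines = every image from every list
```

&lt;p&gt;On the real lists, the line count should equal the number of images you extracted (50,000 if you combined all five training batches).&lt;/p&gt;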
&lt;h1 id="2-fix-up-your-params-file"&gt;2. Fix up your params file&lt;/h1&gt;
&lt;p&gt;The params file is where each experiment is described in English for PetaVision. It details the different objects (i.e. the HyPerCol, layers, and connections) that are used by PetaVision.&lt;/p&gt;
&lt;p&gt;You'll be starting off with a params file that has already been tuned pretty well, but feel free to modify parameters as you experiment and see how the results change.&lt;br /&gt;
&lt;/p&gt;
&lt;h2 id="21-get-your-params-file-v1_lcaparams"&gt;2.1. Get your params file: V1_LCA.params&lt;/h2&gt;
&lt;p&gt;The params file lives in /PetaVision/docs/tutorial/basic/, where you will also see V1_LCA.png, a graphical rendition of this params file.  If you are on AWS, copy the params file to a directory or EBS volume you will be working from.&lt;br /&gt;
&lt;/p&gt;
&lt;p&gt;In the case of AWS, you may want to copy the params file to your EBS volume in the event that your instance gets outbid and shut down. &lt;/p&gt;
&lt;h2 id="22-inspect-the-params-file"&gt;2.2. Inspect the params file&lt;/h2&gt;
&lt;p&gt;First, just look over the params and see if you can understand the general structure of a params file.  It is organized into three categories: the 1. column, 2. layers, and 3. connections, to simulate a cortical column in the brain. The params file you will be using is commented to guide you along and highlight this structure. &lt;/p&gt;
&lt;h3 id="221-hypercol"&gt;2.2.1. HyPerCol&lt;/h3&gt;
&lt;p&gt;The column is what holds the whole experiment, and all the layers are sized relative to the column.  In PetaVision the column object is called HyPerCol, for 'High Performance Column'. The 'y' is there because it spells hyper (a.k.a. legacy naming scheme stuff).&lt;/p&gt;
&lt;p&gt;The column sets up a bunch of key experiment details such as how long to run, where to save files, how frequently and where to checkpoint, and adaptive time-step parameters. All of these parameters are fairly clearly identified, but let's look at a few of the very important ones:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;HyPerCol Parameter&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;startTime&lt;/td&gt;
&lt;td&gt;sets where experiment starts; usually 0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;stopTime&lt;/td&gt;
&lt;td&gt;sets how long to run experiment; (stopTime - startTime)/dt = number of timesteps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dt&lt;/td&gt;
&lt;td&gt;how long a timestep; modulations possible with adaptive timestep&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;outputPath&lt;/td&gt;
&lt;td&gt;sets directory path for experiment output&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nx&lt;/td&gt;
&lt;td&gt;x-dimensions of column; typically match to input image size&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ny&lt;/td&gt;
&lt;td&gt;y-dimensions of column; typically match to input image size&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;checkpointWriteDir&lt;/td&gt;
&lt;td&gt;sets directory path for experiment checkpoints; usually output/Checkpoints&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dtAdaptFlag&lt;/td&gt;
&lt;td&gt;tells PetaVision to use the adaptive timestep parameters for normalized error layers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;For more details on the HyPerCol please read the documentation:&lt;a class="" href="http://petavision.sourceforge.net/doxygen/html/classPV_1_1HyPerCol.html#member-group"&gt;HyPerCol Parameters&lt;/a&gt;&lt;/p&gt;
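&lt;p&gt;As a quick sanity check on the startTime/stopTime/dt relationship, the timestep count is simple arithmetic (the values here assume the defaults this tutorial uses later: startTime = 0, stopTime = 10,000,000, dt = 1):&lt;/p&gt;

```shell
# Number of timesteps = (stopTime - startTime) / dt
startTime=0
stopTime=10000000   # the stopTime suggested in section 2.3.1
dt=1
nsteps=$(( (stopTime - startTime) / dt ))
echo "$nsteps timesteps"
```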
&lt;h3 id="222-hyperlayer"&gt;2.2.2. HyPerLayer&lt;/h3&gt;
&lt;p&gt;The layers are where the neurons are contained and their dynamics described. You can set up layers that convolve inputs, have self-self interactions, or even just copy the layer properties or activities of one layer to another ... and more. All layers are subclassed from HyPerLayer, and you can read about their individual properties by following some of the doxygen documentation.&lt;/p&gt;
&lt;p&gt;Some important parameters to notice are nxScale, nyScale, and nf, since they set up the physical dimensions of the layer. phase and displayPeriod describe some of the temporal dynamics of the layer.  Most layers have their own unique properties that you can explore further on your own; for now this is a good snapshot. The table below summarizes the types of layers we use and their roles in this experiment:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer Class&lt;/th&gt;
&lt;th&gt;"Name"&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Movie&lt;/td&gt;
&lt;td&gt;"Image"&lt;/td&gt;
&lt;td&gt;loads image from imageListPath&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ANNNormalizedErrorLayer&lt;/td&gt;
&lt;td&gt;"Error"&lt;/td&gt;
&lt;td&gt;computes residual error between Image and V1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HyPerLCALayer&lt;/td&gt;
&lt;td&gt;"V1"&lt;/td&gt;
&lt;td&gt;makes a sparse representation of Image using LCA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ANNLayer&lt;/td&gt;
&lt;td&gt;"Recon"&lt;/td&gt;
&lt;td&gt;output for visualization&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Before moving on to Connections, we should make a note about displayPeriod, writeStep, triggerFlag, and phase. Movie has a parameter 'displayPeriod' that sets the number of timesteps an image is shown. We then typically set the writeStep and initialWriteTime to be some integer interval of displayPeriod, but this isn't necessary. For example, if you want to see what the sparse reconstruction looks like while the same image is being shown to Movie, you can change the writeStep for "Recon" to 1 (just note that your output file will get very large very quickly, so you may want to change the stopTime to a smaller value if you want this sort of visualization).&lt;/p&gt;
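&lt;p&gt;As a concrete check (with hypothetical values; consult your params file for the real ones), "writeStep is an integer interval of displayPeriod" just means the division leaves no remainder:&lt;/p&gt;

```shell
# Hypothetical values; check your params file for the actual ones.
displayPeriod=40
writeStep=200
remainder=$(( writeStep % displayPeriod ))
echo "$remainder"   # 0 means writes line up with image presentations
```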
&lt;p&gt;While writeStep has to do with how frequently PetaVision outputs to the .pvp file (the unique binary format used by PetaVision), the triggerFlag is more tied in with the dynamics of the layers.  Notice only the "Recon" layer has a trigger flag, and that its triggerLayerName = "Image".  This means that PetaVision will only process the convolution for "Recon" after a new image is shown.&lt;br /&gt;
&lt;/p&gt;
&lt;p&gt;But don't we want it to make the convolution using the sparse representation found at the end of the displayPeriod?  Keen observation. This is where phase comes in. Phase determines the order of layers to update at a given timestep.  To get the Recon from V1 before the new image makes its way to V1 and starts changing the sparse representation, we set phases as follows: &lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer Class&lt;/th&gt;
&lt;th&gt;"Name"&lt;/th&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Movie&lt;/td&gt;
&lt;td&gt;"Image"&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ANNNormalizedErrorLayer&lt;/td&gt;
&lt;td&gt;"Error"&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HyPerLCALayer&lt;/td&gt;
&lt;td&gt;"V1"&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ANNLayer&lt;/td&gt;
&lt;td&gt;"Recon"&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
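&lt;p&gt;One way to picture this: collect the (phase, layer) pairs from the table and sort by phase. That ascending order is the within-timestep update order, which is why Recon (phase 1) reads the V1 activity settled at the end of the previous display period before V1 (phase 2) starts reacting to the new image:&lt;/p&gt;

```shell
# Sort the (phase, layer) pairs from the table above by phase to see the
# within-timestep update order.
order=$(printf '%s\n' "0 Image" "1 Error" "2 V1" "1 Recon" | sort -n -k1,1)
echo "$order"
```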
&lt;p&gt;For more details on the HyPerLayer parameters please read the documentation:&lt;a class="" href="http://petavision.sourceforge.net/doxygen/html/classPV_1_1HyPerLayer.html#member-group"&gt;HyPerLayer Parameters&lt;/a&gt;&lt;/p&gt;
&lt;h3 id="223-connections"&gt;2.2.3. Connections&lt;/h3&gt;
&lt;p&gt;The connections connect neurons to other neurons in different layers. Similar to layers, connections are all subclassed from their base class HyPerConn. Connections are where the 'learning' of an artificial neural network happens.&lt;/p&gt;
&lt;p&gt;Connections in PetaVision are always described in terms of their pre and postLayerName, their channel code, and their patch size (or receptive field). Some connection parameters are inherited from another connection, such as patch size for a TransposeConn or CloneKernelConn from the originalConnName.  We use a naming convention of &lt;span&gt;[PreLayerName]&lt;/span&gt;To&lt;span&gt;[PostLayerName]&lt;/span&gt;, but it is not required if you explicitly define the pre and post layers. &lt;/p&gt;
&lt;p&gt;The channelCode value determines whether the connection is excitatory (0), inhibitory (1), or neither (-1).  'Neither' is useful when you are making a connection to an error layer to train the weights, but want the activity for the layer to come from a reconstruction layer. &lt;/p&gt;
&lt;p&gt;Patch size is determined by the nxp, nyp, and nfp parameters.  Restrictions on how you can set these values are explained in detail in &lt;span&gt;[Patch Size and Margin Width Requirements]&lt;/span&gt;.&lt;/p&gt;
&lt;p&gt;The following table summarizes the types of connections that are used and their&lt;br /&gt;
roles in this experiment:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Connection Class&lt;/th&gt;
&lt;th&gt;"Name"&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;HyPerConn&lt;/td&gt;
&lt;td&gt;"ImageToError"&lt;/td&gt;
&lt;td&gt;delivers the image to the Error layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MomentumConn&lt;/td&gt;
&lt;td&gt;"V1ToError"&lt;/td&gt;
&lt;td&gt;learns the dictionary; subtracts V1's reconstruction from the image at Error&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TransposeConn&lt;/td&gt;
&lt;td&gt;"ErrorToV1"&lt;/td&gt;
&lt;td&gt;transpose of V1ToError; carries the residual error back to drive V1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CloneKernelConn&lt;/td&gt;
&lt;td&gt;"V1ToRecon"&lt;/td&gt;
&lt;td&gt;clone V1ToError and convolve with V1 to make a reconstruction&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;For more details on the HyPerConn parameters please read the documentation: &lt;a class="" href="http://petavision.sourceforge.net/doxygen/html/classPV_1_1HyPerConn.html#member-group"&gt;HyPerConn Parameters&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="23-customize-the-params-file-for-a-run-on-your-system"&gt;2.3. Customize the params file for a run on your system&lt;/h2&gt;
&lt;p&gt;The params file is tagged to let you know where you have to edit parameters before you run. A parameter will have a ! symbol at the beginning of its line if you need to edit it, followed by a small commented instruction. Before you move on to running the experiment, make sure you delete every !. When done, save the file and you will be ready to start your run.  The sections below identify the objects to make sure to review; however, there are some extra ! comments you will want to look for (e.g. writeStep is commented in all the layers, since you may want to adjust it depending on how frequently you want to write output). &lt;/p&gt;
&lt;h3 id="231-hypercol-column"&gt;2.3.1. HyPerCol "column"&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;What to do&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;stopTime&lt;/td&gt;
&lt;td&gt;Currently at 10,000,000 = 1 pass through the dataset; multiply to run through the dataset more times&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;outputPath&lt;/td&gt;
&lt;td&gt;Change to where you want to save your output&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;checkpointWriteDir&lt;/td&gt;
&lt;td&gt;Change to where you want to save your checkpoints (usually outputPath/Checkpoints)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="232-movie-image-update-image-path"&gt;2.3.2. Movie "Image" | Update Image path&lt;/h3&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;What to do&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;imageListPath&lt;/td&gt;
&lt;td&gt;Change to point to your mixed_cifar.txt file created in step 1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3 id="234-optional-momentumconn-v1toerror-load-weights-from-file"&gt;2.3.4. &lt;span&gt;[Optional]&lt;/span&gt; MomentumConn "V1ToError" | Load weights from file]&lt;/h3&gt;
&lt;p&gt;This tutorial is designed to bring you from chaotic random weights to beautiful colorful gabors.  However, if you don't want to wait for your dictionary to mature and want to start off with well-trained weights, we have included a well-trained dictionary; point the connection at it as follows:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;What to do&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;weightInitType&lt;/td&gt;
&lt;td&gt;Uncomment the one ending with "FileWeight";&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;initWeightsFile&lt;/td&gt;
&lt;td&gt;Change to point to ~/path/to/PetaVision/docs/tutorial/V1_LCA/V1ToError_W.pvp&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Delete, or comment out with two slashes (//), the following parameters:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;weightInitType (the one ending with "UniformRandomWeight")&lt;/li&gt;
&lt;li&gt;wMinInit&lt;/li&gt;
&lt;li&gt;wMaxInit&lt;/li&gt;
&lt;li&gt;sparseFraction&lt;/li&gt;
&lt;/ul&gt;
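&lt;p&gt;Put together, the edited connection might contain lines along these lines (a hypothetical fragment; the parameter names come from the tables above, but the exact layout of your file may differ):&lt;/p&gt;

```
// Inside MomentumConn "V1ToError" (hypothetical fragment of V1_LCA.params)
weightInitType  = "FileWeight";                // uncommented
initWeightsFile = "~/path/to/PetaVision/docs/tutorial/V1_LCA/V1ToError_W.pvp";
// weightInitType = "UniformRandomWeight";     // now commented out
// wMinInit       = ...;
// wMaxInit       = ...;
// sparseFraction = ...;
```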
&lt;h1 id="3-running-petavision"&gt;3. Running PetaVision&lt;/h1&gt;
&lt;p&gt;You've arrived at the moment of truth.  These final steps should take a couple of minutes if everything else has been working smoothly.  PetaVision has a built-in params file error checker and will halt the run if something was misspelled.  You'll just want to pay attention to the runtime output to decipher where you have to fix the params file.  Frequently the error is a missing semi-colon ';' at the end of a line or a misspelled parameter.  PetaVision-specific errors (e.g. incompatible patch sizes or layers) produce unique error messages with detailed instructions about how to fix your params file. &lt;/p&gt;
&lt;h2 id="31-build-petavision"&gt;3.1 Build PetaVision&lt;/h2&gt;
&lt;p&gt;If you haven't already built PetaVision, build it now.  You can follow any of the installation instructions found in the doxygen documentation: &lt;a href="http://petavision.sourceforge.net/doxygen/html/md_src_install_aws.html"&gt;http://petavision.sourceforge.net/doxygen/html/md_src_install_aws.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You can do everything from BasicSystemTest, or if you prefer you can check out one of the sandboxes (e.g. HyPerHLCA).  Just make sure you add the path to the sandbox to the CMakeLists.txt in the parent directory, or PetaVision won't be able to build the executable file (the sandboxes are commented out at the bottom of the file; just scroll down and uncomment them to move on).&lt;/p&gt;
&lt;h2 id="32-start-a-screen"&gt;3.2 Start a Screen&lt;/h2&gt;
&lt;p&gt;Since you'll probably want to be able to use your computer while you are running your&lt;br /&gt;
experiment, we recommend using 'screen':&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;screen -S run
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;To detach the screen: 'control+a+d'&lt;br /&gt;
To reattach the screen: 'screen -r run'&lt;/p&gt;
&lt;h2 id="33-run-time-arguments"&gt;3.3 Run-time arguments&lt;/h2&gt;
&lt;p&gt;Make sure you are attached to your 'run' screen. I like to navigate to the location I'll be running from before launching. PetaVision accepts the following run-time flags: &lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Run-time flag&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;-p &lt;span&gt;[/path/to/pv.params]&lt;/span&gt;&lt;/td&gt;
&lt;td&gt;Point PetaVision to your desired params file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-t &lt;span&gt;[number]&lt;/span&gt;&lt;/td&gt;
&lt;td&gt;Declare number of CPU threads PetaVision should use&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-c &lt;span&gt;[/path/to/Checkpoint]&lt;/span&gt;&lt;/td&gt;
&lt;td&gt;Load weights and activities from the Checkpoint folder listed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-d &lt;span&gt;[number]&lt;/span&gt;&lt;/td&gt;
&lt;td&gt;Declare which GPU to use, by index; not essential&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-l &lt;span&gt;[/path/to/log.txt]&lt;/span&gt;&lt;/td&gt;
&lt;td&gt;PetaVision will write out a log file to the path listed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Combining all of these flags, you'll get a runtime command that looks similar to this:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;~/workspace/HyPerHLCA/Release/HyPerHLCA -p ~/workspace/input/params/V1_LCA.params -t 8 -l ~/workspace/output/txt.log
&lt;/pre&gt;&lt;/div&gt;
&lt;h1 id="4-analyze-run"&gt;4. Analyze Run&lt;/h1&gt;
&lt;p&gt;PetaVision has tools to review the run in progress and after the experiment is finished.  You will either be looking at files in the outputPath directory or in one of the Checkpoint directories.  This params file and the corresponding analysis tool kit is set up to have you look at the files in the outputPath directory.&lt;/p&gt;
&lt;p&gt;The main type of file you'll be examining are the '.pvp' files.  This is a PetaVision specific binary file type that saves space, can be read using python or matlab/octave, and can easily be loaded into PetaVision.&lt;br /&gt;
&lt;/p&gt;
&lt;h2 id="41-outputpath-directory-files"&gt;4.1 outputPath directory files&lt;/h2&gt;
&lt;p&gt;In your output directory you should see the following files:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File or Folder/&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;a0_Image.pvp&lt;/td&gt;
&lt;td&gt;Image layer activity written on Image writeStep frequency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;a1_Error.pvp&lt;/td&gt;
&lt;td&gt;Error layer activity written on Error writeStep frequency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;a2_V1.pvp&lt;/td&gt;
&lt;td&gt;V1 layer activity written on V1 writeStep frequency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;a3_Recon.pvp&lt;/td&gt;
&lt;td&gt;Recon layer activity written on Recon writeStep frequency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Checkpoints/&lt;/td&gt;
&lt;td&gt;Contains folders of Checkpoints written on checkpointWriteStepInterval&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error_timescales.txt&lt;/td&gt;
&lt;td&gt;Log of Error layer timescales&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HyPerCol_timescales.txt&lt;/td&gt;
&lt;td&gt;Log of aggregated Error timescales; adaptive time-step information&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;log.txt&lt;/td&gt;
&lt;td&gt;Generated from -l flag; saved stdout&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;pv.params&lt;/td&gt;
&lt;td&gt;PetaVision generated params file; removes all the comments; preferred for drawing diagrams&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;timestamps/&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;w1_V1ToError.pvp&lt;/td&gt;
&lt;td&gt;Weight values written on writeStep frequency&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Depending on how long you ran your experiment and how frequently you set writeStep, the size of your .pvp files can range from kilobytes to gigabytes.&lt;br /&gt;
&lt;/p&gt;
&lt;p&gt;If you are on the AWS PetaVision Public AMI, we have already installed our params file drawer.  If you want to test this out, type:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;draw pv.params
&lt;/pre&gt;&lt;/div&gt;
&lt;h2 id="42-checkpoint-directory-files"&gt;4.2 Checkpoint directory files&lt;/h2&gt;
&lt;p&gt;Navigate to one of the checkpoints and you'll see there are many more files that are saved than in the outputPath directory.  This is because a Checkpoint includes all the information PetaVision would need to initialize a run including timers, layer potentials and activities, weights, and more.&lt;br /&gt;
&lt;/p&gt;
&lt;p&gt;Without going into all of the files, one important file to notice is the V1ToError_W.pvp file. This is the file we will use in the next tutorial for classification.  It is a snapshot of the weights (dictionary elements) that V1 has learned up to that checkpoint. &lt;/p&gt;
&lt;h2 id="43-run-automated-analysis-script"&gt;4.3 Run automated analysis script&lt;/h2&gt;
&lt;p&gt;You can write your own analysis scripts (and should look at the one we are about to use for reference if you want to do that), but for now let's just use it as is.  The script we are working with is called 'analyze_network.m'.  All we have to do is navigate to the outputPath directory and type the following command:&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;octave ~/path/to/PetaVision/mlab/util/analyze_network.m
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This script looks in your current directory for pv.params to find the names and properties of the different objects, analyzes the .pvp files in the root output directory, and produces graphical&lt;br /&gt;
outputs in the newly created folder 'Analysis'.&lt;/p&gt;
&lt;h2 id="44-cloud-aws-view-the-files-on-your-local-machine"&gt;4.4 &lt;span&gt;[Cloud - AWS]&lt;/span&gt; - view the files on your local machine&lt;/h2&gt;
&lt;p&gt;One extra step for you AWS users: scp the files from the AWS instance to your local machine to view.&lt;/p&gt;
&lt;div class="codehilite"&gt;&lt;pre&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;scp -r -i ~/.ssh/cred ec2-user@&lt;span class="o"&gt;[&lt;/span&gt;000.000.000.000&lt;span class="o"&gt;]&lt;/span&gt;:/home/ec2-user/mountData/V1_LCA/output/Analysis .
&lt;/pre&gt;&lt;/div&gt;
&lt;h1 id="5-experiment"&gt;5. Experiment&lt;/h1&gt;
&lt;p&gt;Now is your chance to explore and experiment some with the different parameters.  Maybe you want to reduce the displayPeriod, modify the threshold, or change the learning rate of your connections.  Perhaps you want to use a totally different dataset.&lt;br /&gt;
&lt;/p&gt;
&lt;p&gt;Whatever you do, be sure to come back and tune in when we use the weights that you just trained to design an SLP classifier. &lt;/p&gt;
&lt;h1 id="6-comments-questions"&gt;6. Comments / Questions?&lt;/h1&gt;
&lt;p&gt;I hope you found this tutorial helpful. If you identify any errors or opportunities for improvement in this tutorial, please contact the developers of PetaVision using the e-mail listed on sourceforge with "V1_LCA_tutorial" in the subject line.&lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Brian Broom-Peltz</dc:creator><pubDate>Wed, 13 May 2015 20:30:06 -0000</pubDate><guid>https://sourceforge.net6415e9d90346ccaf481a6c7f7b26f38dabb07347</guid></item></channel></rss>