Learning Function

Kesh Rao
2012-03-19
2012-12-12
  • Kesh Rao

    Kesh Rao - 2012-03-19

    Hi guys,
          Hope this finds you well. I'm having some trouble with the technical aspects of Topographica's learning functions that I was hoping to get your advice on.
       I'm implementing learning algorithms into the neural network, and for now I'm just modeling things after the lateral inhibitory/excitatory connections I saw in the examples folder. My first question has to do with simply creating these connections, and the second with methods of monitoring changes.

    1) I've been creating the lateral connections using a weights generator that is a composite of a Gaussian and a uniform random distribution. As I understand it, a single unit will influence its surroundings with that pattern of activation, compounded with the unit's activity and strength (and possibly other factors). In a sheet of, e.g., 10 x 10 units, does this lateral connection randomly pick some percentage of neurons to have lateral excitatory connections with positive strength, and the remainder to have lateral inhibitory connections? Or can a single unit have both excitatory and inhibitory influences? And finally, is there a way of visualizing which neurons ended up being excitatory vs. inhibitory?

    2) Now that we've established the lateral connections, and we pick a learning algorithm (say simple Hebbian), we need to monitor its changes as the simulation progresses. I've just been putting a plot function inside the Hebbian function call. For example:

    import matplotlib.pyplot as plt

    # Define the learning function before referencing it below
    class SampleHebbian(LearningFn):

        def __init__(self,**params):
            super(SampleHebbian,self).__init__(**params)
            ## stuff ##

        def __call__(self,input_activity,unit_activity,weights,single_connection_learning_rate):
            weights += single_connection_learning_rate * unit_activity * input_activity
            ## plot weights here - but there is so much plotting going on for every iteration. Very inefficient.

    # Create lateral network in layer 3
    # (Composite combines its generators with the given operator, e.g. numpy's multiply)
    topo.sim.connect('V1','V1',delay=0.5,name='LateralExcitatory',
        connection_type=CFProjection,strength=0.9,
        weights_generator=Composite(generators=[Gaussian(),UniformRandom()],operator=multiply),
        nominal_bounds_template=BoundingBox(radius=0.166666666667),learning_rate=1.0,
        learning_fn=learningfn.projfn.CFPLF_Plugin(single_cf_fn=SampleHebbian(sample_param=0.5)))
    topo.sim.connect('V1','V1',delay=0.5,name='LateralInhibitory',
        connection_type=CFProjection,strength=-0.9,
        weights_generator=Composite(generators=[Gaussian(),UniformRandom()],operator=multiply),
        nominal_bounds_template=BoundingBox(radius=0.416666666667),learning_rate=1.0,
        learning_fn=learningfn.projfn.CFPLF_Plugin(single_cf_fn=SampleHebbian(sample_param=0.5)))
    

    2 cont) But it's tough to see the big picture when I'm just plotting the weights for each unit one at a time. I guess I'm not sure how Topographica is structured to call this learning function every time, or how/where the weights, strength, and so on are stored. For example, if I wanted to just run the simulation for one iteration and then plot the weights to see how things have changed, I'm not sure how to do that efficiently.

    I'm basically trying to wrap my head around how learning is structured in Topographica. In my experience this structure is quite vast, and for that reason it's really tough for me to learn it the way I have in the past, by continually dropping into pdb and stepping through subroutines for an hour or so.
    If you can help me through this, I would greatly appreciate it!
    Thanks very much!

    Kesh

     
  • Kesh Rao

    Kesh Rao - 2012-03-21

    I think I found a possible solution to my second question!
    I believe the weights that influence units in the destination sheet are in topo.sim.projections().cfs.weights
    I need to play around with the densities and such and see how that affects the coordinates, but I think it's progress. Now on to actually figuring out what to do with those weights ;)

     
  • James A. Bednar

    James A. Bednar - 2012-03-27

    Sorry for the delay in responding; it's very busy here right now!
    Responding to your two questions:

    1) Topographica allows you to set weights to anything you want, but in the .ty files in examples/ we always set up a Projection to contain either excitatory weights or inhibitory weights, but not both. Assuming you are using GaussianCloud to make your gaussian * uniform random, as in the example files in recent releases, you'll get a bunch of random strength values initially between 0 and 1, scaled down by a Gaussian.  For an inhibitory projection these will then all be multiplied by a negative strength parameter as they are used, e.g. -1.0; that's the only thing that makes a connection inhibitory. If you want both excitatory and inhibitory connections, you'd usually make two Projections, one with negative strength and one with positive.  Each neuron would then receive both excitatory and inhibitory connections (Projections are about incoming connections), and would also make both excitatory and inhibitory connections (if you trace everything backward).  If violating Dale's law (one neurotransmitter per neuron) alarms you, you can create separate sheets of inhibitory and excitatory cells, connected to each other in ways that respect Dale's law.  At the other extreme, you are welcome to generate both positive and negative weights in the same Projection, but the standard DivisiveNormalizeL1 and Hebbian learning functions will have a problem with that, as their behavior is undefined in those cases.
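    The sign convention above is easy to sketch outside Topographica. The toy NumPy fragment below (the 5x5 size, the Gaussian width, and the +/-0.9 strengths are made-up illustration values, not Topographica defaults) builds two non-negative Gaussian-times-uniform connection fields and shows that the excitatory/inhibitory distinction comes only from the sign of the strength:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 5x5 "connection field": Gaussian envelope * uniform random,
    # in the spirit of the GaussianCloud weights described above.
    size = 5
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    gaussian = np.exp(-(x**2 + y**2) / 0.5)
    weights_exc = gaussian * rng.uniform(0, 1, (size, size))  # all >= 0
    weights_inh = gaussian * rng.uniform(0, 1, (size, size))  # all >= 0

    # The sign lives in each Projection's strength, not in its weights:
    strength_exc, strength_inh = 0.9, -0.9
    input_activity = rng.uniform(0, 1, (size, size))

    # Net input to one unit: excitatory and inhibitory contributions summed
    net_input = (strength_exc * np.sum(weights_exc * input_activity) +
                 strength_inh * np.sum(weights_inh * input_activity))

    # Both weight arrays are non-negative; only the strengths differ in sign.
    assert (weights_exc >= 0).all() and (weights_inh >= 0).all()
    ```

    Visualizing which incoming influences are net-excitatory vs. net-inhibitory would then just be a matter of plotting `strength * weights` for each projection.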

    2) I'm not sure what you're getting at.  What we normally do to see how weights change over time is to use a Projection plot.  If you turn on auto-refresh in that window, put 10 or 100 or whatever iterations into the Run for box in the main window, and then hit Go a few times, you should build up a few Projection plots that you can then go back and forth between in the GUI to see how learning is progressing. Usually it's a good idea to make the learning rate really high while you are debugging things, so that any changes are obvious and dramatic.   I don't see why you would need any custom plotting code in the learning function, but maybe I'm missing something.  SampleHebbian isn't a real class, by the way - it's just a starting point for customizing if you want to do that; the regular Hebbian learning functions in learningfn/ are the ones we'd actually normally use in practice.
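    The "crank the learning rate while debugging" tip is easy to see in isolation. This is a standalone toy Hebbian step in plain NumPy, not Topographica's actual CFPLF_Plugin machinery (the sizes and rates are arbitrary illustration values):

    ```python
    import numpy as np

    def hebbian_step(weights, input_activity, unit_activity, learning_rate):
        """One plain Hebbian update: dw = rate * pre * post (no normalization)."""
        return weights + learning_rate * unit_activity * input_activity

    rng = np.random.default_rng(1)
    w0 = rng.uniform(0, 0.02, (3, 3))   # small initial weights
    inp = rng.uniform(0, 1, (3, 3))     # one presentation's input activity

    w_small = hebbian_step(w0, inp, unit_activity=1.0, learning_rate=0.01)
    w_big = hebbian_step(w0, inp, unit_activity=1.0, learning_rate=10.0)

    # A high rate makes the change obvious relative to the initial weights,
    # which is what makes successive Projection plots easy to compare by eye.
    assert np.abs(w_big - w0).max() > 100 * np.abs(w_small - w0).max()
    ```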

    In any case, as you noted in your second post the weights are stored in the Projection, which you can access like:

    > ./topographica -i examples/tiny.ty
    topo_t000000.00_c1>>> topo.sim["V1"].projections("Afferent").cfs[0][0].weights
                   Out[10]:
    array([[ 0.01041539,  0.00771981,  0.01576674,  0.00965847,  0.01035698,
             0.0007668 ,  0.01661078,  0.00240769,  0.01138386],
           [ 0.00651785,  0.01370444,  0.01156759,  0.00260807,  0.00882181,
             0.02365352,  0.02395667,  0.01928253,  0.01363588],
           [ 0.00426616,  0.02318755,  0.02292406,  0.00838136,  0.00528211,
             0.0134657 ,  0.01955791,  0.0046838 ,  0.00807812],
           [ 0.00270269,  0.00169533,  0.02227775,  0.00907297,  0.02387828,
             0.0005287 ,  0.022131  ,  0.00934117,  0.01431429],
           [ 0.02373384,  0.02067097,  0.00430907,  0.02371083,  0.01778501,
             0.02337287,  0.01223926,  0.00414159,  0.00455902],
           [ 0.02210954,  0.02221228,  0.00116286,  0.01822333,  0.00092893,
             0.02456963,  0.0081213 ,  0.02065333,  0.00514953],
           [ 0.00344047,  0.02358248,  0.00312791,  0.00854251,  0.00404132,
             0.00592183,  0.02432995,  0.0237791 ,  0.02073587],
           [ 0.02316161,  0.01687724,  0.00208869,  0.00660348,  0.00761381,
             0.01696292,  0.01616783,  0.01329055,  0.02117874],
           [ 0.01287446,  0.00401441,  0.00449339,  0.00856578,  0.00284608,
             0.01699862,  0.0051369 ,  0.01562238,  0.00574461]], dtype=float32)
    

    Here [0][0] is actually (row,col), not (x,y) as in your example, i.e., it's matrix coordinates starting at the upper left and going down to the lower right, specified by row and column and not Cartesian (x,y) coordinates.  But again, normally you wouldn't need to mess with this if you are using the GUI, as the usual Projection and Connection Fields plots display this information graphically.
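    The matrix-vs-Cartesian distinction is easy to get backwards, so here is a tiny standalone NumPy check (not Topographica-specific; the 3x4 array is just for illustration):

    ```python
    import numpy as np

    # 3x4 array: 3 rows (vertical), 4 columns (horizontal).
    a = np.arange(12).reshape(3, 4)

    # Matrix indexing is a[row][col], with the origin at the upper left:
    assert a[0][0] == 0    # upper-left element
    assert a[2][3] == 11   # lower-right element
    assert a[1][3] == 7    # row 1 (counting down), column 3 (counting right)

    # So a[r][c] is NOT "x=r, y=c": to reach the point 3 across and 1 down,
    # you index a[1][3]; a[3][1] would raise IndexError (only 3 rows here).
    ```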

     
  • Kesh Rao

    Kesh Rao - 2012-04-04

    Hey Jim,
       Thanks very much for your help! I think we'll stick with either making two separate sheets or just both connections on one sheet. We tried working a little with selecting specific neurons and making the weights negative but we were having some troubles with it. I think for now, we should be fine with the system we have.
      We regard to the second part, we are thinking about just saving the images of projections to a file and then viewing it later so that we don't have to use the GUI. It is just so that we can run simulation for long periods of time and then make movies of the way the projections change in time.
    Thanks again!

     
  • James A. Bednar

    James A. Bednar - 2012-04-04

    Sounds like you have part one in hand.  For part two, you can make a movie of how the weights change over time fairly easily.  E.g.
    save_plotgroup("Projection",projection=topo.sim['V1'].projections('Afferent')) will save a projection to a single .png file for later viewing, and doing that repeatedly will make multiple .png files that can easily be animated (e.g. using "animate *.png" on UNIX; not sure about the Mac).   The example below shows how to collect those plots from every iteration automatically.

    > ./topographica -i -a examples/tiny.ty
    topo_t000000.00_c1>>> from topo.base.simulation import PeriodicEventSequence,CommandEvent
    topo_t000000.00_c2>>> topo.sim.enqueue_event(PeriodicEventSequence(0.0,1.0,[CommandEvent(0.0,"save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))")]))
    topo_t000000.00_c3>>> topo.sim.run(10)
    Time: 000000.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000001.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000002.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000003.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000004.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000005.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000006.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000007.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000008.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000009.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    Time: 000010.00 CommandEvent: Running command save_plotgroup('Projection',projection=topo.sim['V1'].projections('Afferent'))
    topo_t000010.00_c4>>>
    > ls ~/Topographica/
    Output/
    tests/
    tiny_000000.00_V1_Afferent.png
    tiny_000001.00_V1_Afferent.png
    tiny_000002.00_V1_Afferent.png
    tiny_000003.00_V1_Afferent.png
    tiny_000004.00_V1_Afferent.png
    tiny_000005.00_V1_Afferent.png
    tiny_000006.00_V1_Afferent.png
    tiny_000007.00_V1_Afferent.png
    tiny_000008.00_V1_Afferent.png
    tiny_000009.00_V1_Afferent.png
    tiny_000010.00_V1_Afferent.png
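    One detail worth noting about the filenames above: because the simulation time is zero-padded, lexicographic order is already chronological order, so a plain sort puts the frames in playback sequence for whatever animation tool you use. A small stdlib-only sketch (the times below just mimic the listing; "tiny" and "V1_Afferent" are taken from the filenames above):

    ```python
    # Zero-padded times mean lexicographic order == chronological order,
    # so a simple sort is enough before handing frames to an animation tool.
    frames = ["tiny_%09.2f_V1_Afferent.png" % t for t in (10, 2, 0, 1)]
    frames.sort()
    # frames[0] is now the t=0 snapshot, frames[-1] the t=10 snapshot.
    ```

    In practice you would build the list with glob.glob("tiny_*.png") instead of constructing the names by hand.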
    
     
  • Kesh Rao

    Kesh Rao - 2012-04-04

    Oh yeah! That's fantastic. I completely forgot about that command. Chris had mentioned it in an earlier thread. Thanks for letting me know!

     
