Hi guys,
Hope this finds you well. I'm having some trouble with the technical aspects of Topographica's learning functions that I was hoping to get your advice on.
I'm implementing learning algorithms into the neural network, and for now I'm just modeling things after the lateral inhibitory/excitatory connections in the examples folder. My first question has to do with simply creating these connections, and the second with methods of monitoring changes.
1) I've been creating the lateral connections using a weights generator that is a composite of a Gaussian and a uniform random distribution. As I understand it, a single unit will influence its surroundings with that pattern of activation, compounded with the unit activity and strength (and possibly other factors). In a sheet, e.g. 10 x 10, does this lateral connection randomly pick some percentage of neurons to have lateral excitatory connections with positive strength, and the remainder to have lateral inhibitory connections? Or can a single unit have both excitatory and inhibitory influences? And finally, is there a way of visualizing which neurons ended up being excitatory vs. inhibitory?
2) Now that we've established the lateral connections and picked a learning algorithm (say simple Hebbian), we need to monitor its changes as the simulation progresses. I've just been putting a plot function inside the Hebbian function call. For example:
# Create lateral network in layer 3
topo.sim.connect('V1','V1',delay=0.5,name='LateralExcitatory',
    connection_type=CFProjection,strength=0.9,
    weights_generator=Composite(Gaussian,UniformRandom,how=multiply),
    nominal_bounds_template=BoundingBox(radius=0.166666666667),
    learning_rate=1.0,
    learning_fn=learningfn.projfn.CFPLF_Plugin(
        single_cf_fn=SampleHebbian(sample_param=0.5)))

topo.sim.connect('V1','V1',delay=0.5,name='LateralInhibitory',
    connection_type=CFProjection,strength=-0.9,
    weights_generator=Composite(Gaussian,UniformRandom,how=multiply),
    nominal_bounds_template=BoundingBox(radius=0.416666666667),
    learning_rate=1.0,
    learning_fn=learningfn.projfn.CFPLF_Plugin(
        single_cf_fn=SampleHebbian(sample_param=0.5)))

class SampleHebbian(LearningFn):
    import matplotlib.pyplot as plt

    def __init__(self,**params):
        ## stuff ##
        pass

    def __call__(self,input_activity,unit_activity,weights,
                 single_connection_learning_rate):
        weights += single_connection_learning_rate*unit_activity*input_activity
        ## plot weights here - but there is so much plotting going on
        ## every iteration. Very inefficient.
2 cont) But it's tough to see the big picture when I'm just plotting the weights for each unit one at a time. I guess I'm not sure how Topographica is structured to call this learning function each iteration, or how/where the weights, strength, and so on are stored. For example, if I wanted to run the simulation for just 1 iteration and then plot the weights to see how things have changed, I'm not sure how to do that efficiently.
I'm basically trying to wrap my head around how learning is structured in Topographica. In my experience this structure is quite vast, so it's really tough to do what I have in the past: just continually step into subroutines with pdb for an hour or so until I've learned it all.
If you can help me through this, I would greatly appreciate it!
Thanks very much!
Kesh
I think I found a possible solution to my second question!
I believe the weights that influence units in the destination sheet are in topo.sim.projections().cfs.weights
I need to play around with the densities and such and see how that affects the coordinates but I think it's progress. Now onto actually figuring out what I do with those weights ;)
Sorry for the delay in responding; it's very busy here right now!
Responding to your two questions:
1) Topographica allows you to set weights to anything you want, but in the examples/ .ty files we always set up a Projection to contain either excitatory weights or inhibitory weights, not both. Assuming you are using GaussianCloud to make your gaussian * uniform random, as in the example files in recent releases, you'll get a bunch of random strength values initially between 0 and 1, scaled down by a Gaussian. For an inhibitory projection these will then all be multiplied by a negative strength parameter as they are used, e.g. -1.0; that's the only thing that makes a connection inhibitory.

If you want both excitatory and inhibitory connections, you'd usually make two Projections, one with positive strength and one with negative. Each neuron would then receive both excitatory and inhibitory connections (Projections are about incoming connections), and would also make both excitatory and inhibitory connections (if you trace everything backward). If violating Dale's law (one neurotransmitter per neuron) alarms you, you can create separate sheets of inhibitory and excitatory cells, connected to each other in ways that respect Dale's law. At the other extreme, you are welcome to generate both positive and negative weights in the same Projection, but the standard DivisiveNormalizeL1 and Hebbian learning functions will have a problem with that, as their behavior is undefined in those cases.
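Not actual Topographica code, but here's a small NumPy sketch of that scheme (all names below are made up for illustration): uniform random values in [0,1] scaled down by a Gaussian envelope, with the sign of the strength parameter being the only difference between the excitatory and inhibitory projections:

```python
import numpy as np

def gaussian_cloud(size=7, sigma=1.5, seed=0):
    """Uniform random weights in [0,1], scaled down by a Gaussian envelope
    (a rough stand-in for Topographica's GaussianCloud)."""
    rng = np.random.default_rng(seed)
    r = np.arange(size) - size // 2
    gauss = np.exp(-(r[:, None]**2 + r[None, :]**2) / (2.0 * sigma**2))
    return rng.uniform(0.0, 1.0, (size, size)) * gauss

weights = gaussian_cloud()      # all values in [0, 1]
excitatory = +0.9 * weights     # positive strength -> excitatory projection
inhibitory = -0.9 * weights     # negative strength is the only difference
```

Note that the initial weight patterns themselves are identical; excitatory vs. inhibitory is decided entirely at the Projection level.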
2) I'm not sure what you're getting at. What we normally do to see how weights change over time is to use a Projection plot. If you turn on auto-refresh in that window, put 10 or 100 or whatever iterations into the "Run for" box in the main window, and then hit Go a few times, you'll build up a series of Projection plots that you can step back and forth between in the GUI to see how learning is progressing. Usually it's a good idea to make the learning rate really high while you are debugging, so that any changes are obvious and dramatic. I don't see why you would need any custom plotting code in the learning function, but maybe I'm missing something. SampleHebbian isn't a real class, by the way - it's just a starting point for customizing if you want to do that; the regular Hebbian learning functions in learningfn/ are the ones we'd actually use in practice.
In any case, as you noted in your second post, the weights are stored in the Projection's ConnectionFields, which you can access from the commandline.
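Assuming the usual layout (simulation -> sheet -> projection -> ConnectionField), the access would look something like the sketch below; the sheet and projection names are placeholders taken from the earlier example, and the small NumPy array just illustrates the [row,col] indexing convention:

```python
import numpy as np

# Hypothetical Topographica access (sheet/projection names are placeholders):
#   proj = topo.sim['V1'].projections()['LateralExcitatory']
#   w = proj.cfs[0, 0].weights  # CF of the unit at matrix coords [row, col]

# Matrix coordinates index from the upper left, with rows growing downward,
# unlike Cartesian (x, y) sheet coordinates:
grid = np.arange(16).reshape(4, 4)   # stand-in for a 4x4 grid of units
upper_left = grid[0, 0]              # row 0, col 0
lower_right = grid[3, 3]             # last row, last col
one_row_down = grid[1, 0]            # increasing row moves DOWN the sheet
```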
The indexing there is actually matrix coordinates, not as in your example: it starts at the upper left and goes down to the lower right, specified by [row,col] rather than Cartesian (x,y) coordinates. But again, normally you wouldn't need to mess with this if you are using the GUI, as the usual Projection and Connection Fields plots display this information graphically.
Hey Jim,
Thanks very much for your help! I think we'll stick with either making two separate sheets or keeping both connection types on one sheet. We tried working a little with selecting specific neurons and making their weights negative, but we were having some trouble with it. For now, we should be fine with the system we have.
With regard to the second part, we are thinking about just saving the images of the projections to files and viewing them later, so that we don't have to use the GUI. That way we can run the simulation for long periods of time and then make movies of the way the projections change over time.
Thanks again!
Sounds like you have part one in hand. For part two, you can make a movie of how the weights change over time fairly easily. E.g.
save_plotgroup("Projection",projection=topo.sim.projections('Afferent')) will save a projection to a single .png file for later viewing, and doing that repeatedly will make multiple .png files that can easily be animated (e.g. using "animate *.png" on UNIX; not sure on Mac). The example below shows how to collect those plots from every iteration automatically.
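In self-contained form (with a toy Hebbian update standing in for topo.sim.run(1), and matplotlib's savefig standing in for save_plotgroup; the sizes, rates, and filenames below are all made up for illustration), the per-iteration snapshot loop might look like:

```python
import os
import tempfile

import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend, so no GUI is needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, (10, 10))   # toy 10x10 weight matrix
outdir = tempfile.mkdtemp()

for t in range(5):
    # Toy Hebbian-style update, standing in for topo.sim.run(1):
    activity = rng.uniform(0.0, 1.0, weights.shape)
    weights += 0.1 * activity * weights
    weights /= weights.sum()                # simple L1 normalization

    # Snapshot the weights; numbered files animate easily afterwards
    # (e.g. "animate proj_*.png"):
    plt.imshow(weights, cmap="gray")
    plt.title("iteration %d" % (t + 1))
    plt.savefig(os.path.join(outdir, "proj_%04d.png" % t))
    plt.close()
```

In real use you would replace the toy update and savefig calls with topo.sim.run(1) and the save_plotgroup call above, inside the same kind of loop.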
Oh yeah! That's fantastic. I completely forgot about that command; Chris had mentioned it in an earlier thread. Thanks for letting me know!