From: Sanjay P. <san...@ya...> - 2006-07-12 11:46:29
Hi,

I've made some progress playing with Amygdala, and my next question is about documentation. I believe that the documentation I have (and what is on the site) is outdated and not consistent with the source code. For example, I was trying to follow the Spikeloop example in the PDF version of the Amygdala user handbook at http://www.amygdala.org/using_documentation.php. I was able to get it working with Visual, but not with Gnuplot. The instructions for Gnuplot say:

Quote:
First, we need to turn logging on. The StatisticsOutput class provides the code for this so we need to replace the default class:
StatisticsOutput *spikeOut = new StatisticsOutput();
Neuron::SetSpikeOutput(spikeOut);
Unquote.

After some poking around without much success, I grepped the entire source code and discovered that there is no SetSpikeOutput method in any of the files. This suggests that the documentation is wrong, or at least that the source has moved on from when the documentation was created. So, my questions are:

1) What is the best source of documentation, particularly examples?
2) How do other users/developers start learning about this? What is your recommendation for the best way of getting up to speed?

Thanks,
Sanjay
From: Sanjay P. <san...@ya...> - 2006-07-12 11:30:58
Matt,

Thanks for the prompt response. After playing around with it a bit, I find that both Anjuta and KDevelop are able to build the project fine, but KDevelop does a better job of showing the class structure. Anjuta did not show all the classes properly, particularly the classes in the amygdala directory, e.g. class AlphaNeuron.

Sanjay

--- Matt Grover <mg...@am...> wrote:
> Hi Sanjay,
> We have mostly used KDevelop as an IDE. The best way is to import it as an Automake-based C++ project. That said, KDevelop sometimes does weird stuff when it goes through the make process. So I often build from the command line even if I'm developing and debugging in KDevelop. I suggest you play with it and see what works best for you.
>
> Matt
>
> > Hi,
> > First of all, thanks for working on this and bringing it to this point. I'm just getting started looking at amygdala, but I am new to app development on Linux. I built it from the command line per the instructions, but was wondering whether you use any IDE tools and if so, which. I tried importing the project into Anjuta, but this did not work. I also tried KDevelop, and this seems to have worked better, but I'm still playing with that.
> >
> > I thought asking the experts would be a faster way to help me get started, hence this post.
> >
> > Appreciate any feedback you have. Thanks in advance.
> > Sanjay
From: Matt G. <mg...@am...> - 2006-07-08 16:21:22
Hi Sanjay,

We have mostly used KDevelop as an IDE. The best way is to import it as an Automake-based C++ project. That said, KDevelop sometimes does weird stuff when it goes through the make process, so I often build from the command line even if I'm developing and debugging in KDevelop. I suggest you play with it and see what works best for you.

Matt

> Hi,
> First of all, thanks for working on this and bringing it to this point. I'm just getting started looking at amygdala, but I am new to app development on Linux. I built it from the command line per the instructions, but was wondering whether you use any IDE tools and if so, which. I tried importing the project into Anjuta, but this did not work. I also tried KDevelop, and this seems to have worked better, but I'm still playing with that.
>
> I thought asking the experts would be a faster way to help me get started, hence this post.
>
> Appreciate any feedback you have. Thanks in advance.
> Sanjay
From: Sanjay P. <san...@ya...> - 2006-07-07 10:55:14
Hi,

First of all, thanks for working on this and bringing it to this point. I'm just getting started looking at amygdala, but I am new to app development on Linux. I built it from the command line per the instructions, but was wondering whether you use any IDE tools and, if so, which. I tried importing the project into Anjuta, but this did not work. I also tried KDevelop, and this seems to have worked better, but I'm still playing with that.

I thought asking the experts would be a faster way to help me get started, hence this post.

Appreciate any feedback you have. Thanks in advance.

Sanjay
From: Matt G. <mg...@am...> - 2006-04-17 13:46:13
That's a good paper. I've been wanting to see someone apply a Hebbian rule to an LSM, so it's nice that you've done that. It's also nice that someone has been able to make an LSM work properly with Amygdala. I didn't have a chance to look at your code this weekend, but I'll try to do that sometime this week.

Thanks for the patch and the acknowledgement!

Matt

> Hello,
>
> Sorry for the long delay in publishing the code, but it needed some cleanup and I was tight on schedule.
>
> What this is all about is an article I'll present at WCCI/IJCNN06 this July, entitled "Learning using Dynamical Regime Identification and Synchronization". I've given due credit to the Amygdala project in the Acknowledgment section. This article is available, together with the experiments' source code, random seeds and data files, at:
> http://nicolas.brodu.free.fr/en/recherche/publications/
>
> It consists of a methodology for defining learning rules for the recurrent layer of an LSM, by generalizing Hebbian learning to take into account more dynamical aspects. Experiments are run with the base LSM version (no learning on the recurrent layer), with the Hebbian rule applied on the recurrent layer, and with a rule based on multifractal analysis, just because it's so different. The goal is to demonstrate that it is not the Hebbian rule's particular choice of observable that leads to learning, but that synchronizing any not-too-bad dynamical regime identifier may lead to learning too.
>
> I've also included the final patch against Amygdala v0.4.0 in the source code for the experiments. We've discussed most of that patch on the list before.
>
> Happy hacking, and thanks again for creating Amygdala in the first place :)
>
> Nicolas
From: Nicolas B. <br...@us...> - 2006-04-14 23:32:43
Hello,

Sorry for the long delay in publishing the code, but it needed some cleanup and I was tight on schedule.

What this is all about is an article I'll present at WCCI/IJCNN06 this July, entitled "Learning using Dynamical Regime Identification and Synchronization". I've given due credit to the Amygdala project in the Acknowledgment section. This article is available, together with the experiments' source code, random seeds and data files, at: http://nicolas.brodu.free.fr/en/recherche/publications/

It consists of a methodology for defining learning rules for the recurrent layer of an LSM, by generalizing Hebbian learning to take into account more dynamical aspects. Experiments are run with the base LSM version (no learning on the recurrent layer), with the Hebbian rule applied on the recurrent layer, and with a rule based on multifractal analysis, just because it's so different. The goal is to demonstrate that it is not the Hebbian rule's particular choice of observable that leads to learning, but that synchronizing any not-too-bad dynamical regime identifier may lead to learning too.

I've also included the final patch against Amygdala v0.4.0 in the source code for the experiments. We've discussed most of that patch on the list before.

Happy hacking, and thanks again for creating Amygdala in the first place :)

Nicolas
From: Bill U. <ul...@ci...> - 2006-03-23 16:09:37
Hello all,

New Amygdala user here; I'm really impressed so far. It's been fun to play around with.

I'm looking to develop an LSM for computer vision applications, but I have a small question about how the input neurons are mapped to the LSM. I see that the LSM face to map to is selectable. Are the inputs then mapped to the nearest LSM neuron, assuming a grid layout (i.e. MxN inputs mapped to the corresponding MxN LSM neurons of the selected face)?

On a related note, are the outputs for the readout neurons mapped spatially as well, or do they connect to all LSM neurons, a la Maass et al.?

I'll be trying things out, but I thought someone on the list might have a quick answer.

TIA,
Bill U
From: Rüdiger K. <rk...@rk...> - 2006-01-31 10:44:35
--- Matt Grover <mg...@am...> wrote:
> In practice, I have never needed resolution finer than 0.1 ms. Some calculations use finer resolution, but the results end up getting rounded to the nearest 0.1 ms anyway. The most direct way to deal with this problem for now might be to make 1 AmTimeInt == 0.1 ms instead of 1 us.

For the time step resolution, 100 usec to 1 msec is probably enough for all purposes. For the resolution of firing times, this isn't always sufficient: to model the auditory system of a barn owl one probably needs a few usec, which is exactly what we can do. For that reason, we should probably come to regard a timestep as a time window for processing events within that window, rather than as an exact point in time. To keep things simple we should just stick with the microsecond resolution.

-Rudiger
From: Matt G. <mg...@am...> - 2006-01-31 08:07:18
First, a little background on AmTimeInt just in case anyone is interested. When I first started work on Amygdala, I had no idea how much time resolution is really needed to run a good SNN simulation. A lot of the literature assumes a 1 ms time step size, but for some applications that might not be small enough. On the other hand, an unsigned long time variable with a 1 ms step would allow simulations to run for over a month. Going to the other extreme, using a 1 us time resolution would only give around an hour of runtime, but I wouldn't have to worry about introducing errors because of an overly large step size. I knew that was probably overkill, but I also didn't think hour-long simulations would be an issue for a number of years. So I went with a ridiculously fine time resolution and decided to deal with the limited run time issue if anyone really needed to run long simulations at some point. Looks like that time has finally arrived.

In practice, I have never needed resolution finer than 0.1 ms. Some calculations use finer resolution, but the results end up getting rounded to the nearest 0.1 ms anyway. The most direct way to deal with this problem for now might be to make 1 AmTimeInt == 0.1 ms instead of 1 us. I suspect some recoding will have to be done to make it work correctly, but it shouldn't be too bad. That would give us 100 hours of run time. If you still need more, we can also implement one of Rudiger's suggestions.

More comments below:

> > 2. Once simTime exceeds a certain value (say, 4 billion) we subtract
> > one billion from all time variables
>
> Another consideration is: when you post a spike, currently this is done in absolute time. Why not change this to post a spike in relative time from "now", whatever that is? This is more logical, since neurons do not have a built-in clock; they just maintain their own discharge time.
> In the network.cpp code, when pushing the newly scheduled spike into the event list, maintain "now" as an additional field in the SpikeInputRequest structure, or convert to absolute time internally, whichever is more convenient.

We also have synapses and trainers to worry about. Some sort of relative working time might be good enough for those also, but I can't say without thinking about it some more.

> You'd still have to use one of the 2 tricks you mentioned (either wrap-around or subtracting a big value), but this time only the network has the notion of absolute time, so there is less opportunity for bugs. If you need the absolute time externally for any reason, provide Network::GetAbsoluteTime (for example returning a structure with second/usec, exactly what the POSIX time functions do).

Allowing the time variables to wrap makes me nervous. I would prefer to subtract a big value and then increment an epoch variable. You're right that users shouldn't have to deal with epochs, so I suggest that we have Network::GetAbsoluteTime return either a 64-bit int or a POSIX time structure (either is fine with me), but use epochs and AmTimeInt internally.

> Now, I'm new to Amygdala, so I don't have your background to analyze whether this is really a good idea or not.

You seem to be doing well enough so far :)

Matt
From: Nicolas B. <nic...@fr...> - 2006-01-30 15:44:20
On Monday, 30 January 2006, at 09:39, Rüdiger Koch wrote:
> I can think about 2 ways to work around using "typedef unsigned long long AmTimeInt":
>
> 1. We simply let the simTime run over. That requires using only operations on time variables that produce correct results when time has just run over. The common operation dt = t2 - t1 is safe as long as AmTimeInt is unsigned; pretty much everything else requires a case differentiation.
>
> 2. Once simTime exceeds a certain value (say, 4 billion) we subtract one billion from all time variables

Another consideration is: when you post a spike, currently this is done in absolute time. Why not change this to post a spike in relative time from "now", whatever that is? This is more logical, since neurons do not have a built-in clock; they just maintain their own discharge time.

In the network.cpp code, when pushing the newly scheduled spike into the event list, maintain "now" as an additional field in the SpikeInputRequest structure, or convert to absolute time internally, whichever is more convenient.

You'd still have to use one of the 2 tricks you mentioned (either wrap-around or subtracting a big value), but this time only the network has the notion of absolute time, so there is less opportunity for bugs. If you need the absolute time externally for any reason, provide Network::GetAbsoluteTime (for example returning a structure with second/usec, exactly what the POSIX time functions do).

Now, I'm new to Amygdala, so I don't have your background to analyze whether this is really a good idea or not.

> For both workarounds we'd increment an epoch variable.

The notion of epoch may be confusing to users. If all code referring to absolute time is isolated, there is no need for it.

> Both workarounds are excellent hiding places for bugs, of course. Some bugs of that type are probably already present. So what we can do is:
>
> 1. Initialize simTime to one second under the maximum value to make sure that there is at least one overrun for every decent simulation, so bugs are forced to come out of hiding. That's how Linux handles potential jiffie overruns after 28 days.

BTW, another unrelated problem I had is that currently Amygdala initializes the simulation at +1 timestep, but then checks for simTime < maxTime in the run loop. TimeStep() increments simTime after processing, for the next call. This leads to an asymmetry, quite important in my case, since the first and last "epochs" did not have the same number of time steps.

I had to modify the code so that:
- simulation time is initialized to 0;
- TimeStep() increases the current simTime before processing;
- the run loop checks for simTime <= maxTime.

Now each epoch has exactly the same number of time steps.

Regards,
Nicolas
From: Rüdiger K. <rk...@rk...> - 2006-01-30 15:36:52
Stupid me: In case we choose to let the simTime run over, it should certainly be a signed, since then both t2-t1 and t1-t2 give correct results even if t2 is on one side of the overrun and t1 on the other. -Rudiger --- Rüdiger Koch <rk...@rk...> wrote: > I can think about 2 ways to work around using "typedef unsigned long > long AmTimeInt": > > 1. We simply let the simTime run over. That requires to use only > operations on time variables that produce correct results when time > just ran over. The common operation dt = t2 - t1 is safe as long as > AmTimeInt is unsigned, pretty much everything else requires a case > differentiation. > > 2. Once simTime exceeds a certain value (say, 4 Billion) we subtract > one Billion from all time variables > > > > For both workarounds we'd increment an epoch variable. > > Both workarounds are excellent hiding places for bugs, of course. > Some > bugs of that type are probably already present. So what we can do is: > > 1. Initialize simTime to one second under the maximum value to make > sure that there is at least one overrun for every decent simulation > so > bugs are forced to come out of hiding. That's how Linux is handling > potential jiffie overruns after 28 days. > > 2. Have a ./configure --enable-long-timeint option so we can test the > exact version against the performance hack version. > > -Rudiger > > > --- Rüdiger Koch <rk...@rk...> wrote: > > > Nichlolas is right with the AmTimeInt issue - Amygdala must be able > > to > > run for more than an hour. But it's not as simple as changing the > > typedef. > > > > I just tried out what happens if I simply typedef AmTimeInt to a > > 64-bit > > integer: Execution time went up in the order of 30%. Another, > bigger > > problem is that no SIMD unit I am aware of can handle 64-bit > > integers. > > We don't want to stay without SIMD - we can do a spike on an axon > > with > > up to 128 static synapses with *two* SIMD commands! That's just a > few > > cycles. 
> > > > The good part is that we're actually only dealing with differences, > > such as the time difference between the last and the current spike. > > If > > that's more than one hour of simTime then the last spike is > > irrelevant, > > anyway. No idea how to exploit the fact that we don't need an > > absolute > > time for processing in order to stay on 32 bit. > > > > There should be an absolute time for reference, though. > > > > -Rudiger > > > > > > > > --- Matt Grover <mg...@am...> wrote: > > > > > Thanks for the patch. I'm happy to hear that you've been using > > > Amygdala for > > > something useful. > > > > > > I'll take a look at this tonight or tomorrow night and send you > any > > > questions > > > that I have. > > > > > > Matt > > > > > > > > > > Hello, > > > > > > > > First, let me thank you all for your efforts. As a user and > > > developper, > > > > Amygdala has also saved me a lot of time. > > > > > > > > I'm currently using Amygdala for a research project using > spiking > > > networks. > > > > Amygdala was specifically built for that, so it seemed logical > to > > > re-use > > > > it. I'll post the project online in a week or 2, as soon as > it's > > > cleaned up > > > > a bit. > > > > > > > > In the mean time, here are a few contributions to Amygdala. > I've > > > attached a > > > > diff file to this message against version 0.4.0, but didn't > look > > in > > > the > > > > latest CVS for dups. You'll find comments for the diff file > > below. > > > > > > > > Best wishes, > > > > Nicolas Brodu > > > > > > > > > > > > ------------------- > > > > In lsmtopology.h: > > > > > > > > Include should be <amygdala/topology.h> for consistency. > > > > > > > > ------------------- > > > > In netloader.h: > > > > > > > > Including "config.h" is wrong: it should not be provided with > the > > > library. > > > > Including in the CPP ensures correct code isolation. 
> > > > > > > > ------------------- > > > > In staticsynapse.cpp > > > > > > > > 00105 // static_cast can be used here because the > Trainer > > > was > > > > verified to be of the correct > > > > 00106 // type when this synapse was added to the > dendrite > > > > 00107 > > > > > static_cast<StaticTrainer*>(dendrite->GetTrainer())->Train(this, > > > > lastTransTime, trainingHistIdx); > > > > > > > > Not true!!! Dendrite.cpp says: > > > > 00061 // FIXME: Need to do checks here to make sure the trainer > > > type > > > > matches the trainable > > > > 00062 // neuron type. > > > > 00063 void Dendrite::SetTrainer(Trainer* t) > > > > > > > > As a result the program crashes. > > > > > > > > As a workaround I had to derive my class from StaticTrainer, > even > > > if it's > > > > not related > > > > > > > > ------------------- > > > > In network.h line 252 > > > > > > > > getTopology should take the string as const ref, as the other > > > methods do. > > > > > > > > ------------------- > > > > Empty function: > > > > > > > > void ConnectorRegistry::Register(std::string name, NConnector * > > > connector) > > > > {} should be: > > > > void ConnectorRegistry::Register(std::string name, NConnector * > > > connector) > > > > {connectorMap[name]=connector;} > > > > > > > > ------------------- > > > > Forever looping in alphaneuron.cpp integration: > > > > > > > > Here is a gdb trace: > > > > 232 while (iterate) { > > > > 233 currState = 0.0; > > > > 234 currDeriv = 0.0; > > > > 236 Utilities::RoundTime(calcTime, pspStepSize); > > > > 240 for (i=histBeginIdx; i<histSize; i++) { > > > > 241 tmpInput = inputHist[i]; > > > > 242 funcTime = calcTime - tmpInput.time; > > > > 243 funcWeight = tmpInput.weight; > > > > 247 tblIndex = (funcTime / pspStepSize); > > > > 249 if (tblIndex < pspLSize) { > > > > 250 if ( funcWeight > 0.0 ) { > > > > 255 currState = currState + (funcWeight > * > > > > (ipspLookup[tblIndex])); > > > > 256 currDeriv = currDeriv + (funcWeight > * > > > > 
(idPspLookup[tblIndex])); > > > > 240 for (i=histBeginIdx; i<histSize; i++) { > > > > 241 tmpInput = inputHist[i]; > > > > 242 funcTime = calcTime - tmpInput.time; > > > > 243 funcWeight = tmpInput.weight; > > > > 247 tblIndex = (funcTime / pspStepSize); > > > > 249 if (tblIndex < pspLSize) { > > > > 250 if ( funcWeight > 0.0 ) { > > > > 251 currState = currState + (funcWeight > * > > > > (epspLookup[tblIndex])); > > > > 252 currDeriv = currDeriv + (funcWeight > * > === message truncated === |
From: Rüdiger K. <rk...@rk...> - 2006-01-30 14:39:27
I can think about 2 ways to work around using "typedef unsigned long long AmTimeInt": 1. We simply let the simTime run over. That requires to use only operations on time variables that produce correct results when time just ran over. The common operation dt = t2 - t1 is safe as long as AmTimeInt is unsigned, pretty much everything else requires a case differentiation. 2. Once simTime exceeds a certain value (say, 4 Billion) we subtract one Billion from all time variables For both workarounds we'd increment an epoch variable. Both workarounds are excellent hiding places for bugs, of course. Some bugs of that type are probably already present. So what we can do is: 1. Initialize simTime to one second under the maximum value to make sure that there is at least one overrun for every decent simulation so bugs are forced to come out of hiding. That's how Linux is handling potential jiffie overruns after 28 days. 2. Have a ./configure --enable-long-timeint option so we can test the exact version against the performance hack version. -Rudiger --- Rüdiger Koch <rk...@rk...> wrote: > Nichlolas is right with the AmTimeInt issue - Amygdala must be able > to > run for more than an hour. But it's not as simple as changing the > typedef. > > I just tried out what happens if I simply typedef AmTimeInt to a > 64-bit > integer: Execution time went up in the order of 30%. Another, bigger > problem is that no SIMD unit I am aware of can handle 64-bit > integers. > We don't want to stay without SIMD - we can do a spike on an axon > with > up to 128 static synapses with *two* SIMD commands! That's just a few > cycles. > > The good part is that we're actually only dealing with differences, > such as the time difference between the last and the current spike. > If > that's more than one hour of simTime then the last spike is > irrelevant, > anyway. No idea how to exploit the fact that we don't need an > absolute > time for processing in order to stay on 32 bit. 
> > There should be an absolute time for reference, though. > > -Rudiger > > > > --- Matt Grover <mg...@am...> wrote: > > > Thanks for the patch. I'm happy to hear that you've been using > > Amygdala for > > something useful. > > > > I'll take a look at this tonight or tomorrow night and send you any > > questions > > that I have. > > > > Matt > > > > > > > Hello, > > > > > > First, let me thank you all for your efforts. As a user and > > developper, > > > Amygdala has also saved me a lot of time. > > > > > > I'm currently using Amygdala for a research project using spiking > > networks. > > > Amygdala was specifically built for that, so it seemed logical to > > re-use > > > it. I'll post the project online in a week or 2, as soon as it's > > cleaned up > > > a bit. > > > > > > In the mean time, here are a few contributions to Amygdala. I've > > attached a > > > diff file to this message against version 0.4.0, but didn't look > in > > the > > > latest CVS for dups. You'll find comments for the diff file > below. > > > > > > Best wishes, > > > Nicolas Brodu > > > > > > > > > ------------------- > > > In lsmtopology.h: > > > > > > Include should be <amygdala/topology.h> for consistency. > > > > > > ------------------- > > > In netloader.h: > > > > > > Including "config.h" is wrong: it should not be provided with the > > library. > > > Including in the CPP ensures correct code isolation. > > > > > > ------------------- > > > In staticsynapse.cpp > > > > > > 00105 // static_cast can be used here because the Trainer > > was > > > verified to be of the correct > > > 00106 // type when this synapse was added to the dendrite > > > 00107 > > > static_cast<StaticTrainer*>(dendrite->GetTrainer())->Train(this, > > > lastTransTime, trainingHistIdx); > > > > > > Not true!!! Dendrite.cpp says: > > > 00061 // FIXME: Need to do checks here to make sure the trainer > > type > > > matches the trainable > > > 00062 // neuron type. 
> > > 00063 void Dendrite::SetTrainer(Trainer* t) > > > > > > As a result the program crashes. > > > > > > As a workaround I had to derive my class from StaticTrainer, even > > if it's > > > not related > > > > > > ------------------- > > > In network.h line 252 > > > > > > getTopology should take the string as const ref, as the other > > methods do. > > > > > > ------------------- > > > Empty function: > > > > > > void ConnectorRegistry::Register(std::string name, NConnector * > > connector) > > > {} should be: > > > void ConnectorRegistry::Register(std::string name, NConnector * > > connector) > > > {connectorMap[name]=connector;} > > > > > > ------------------- > > > Forever looping in alphaneuron.cpp integration: > > > > > > Here is a gdb trace: > > > 232 while (iterate) { > > > 233 currState = 0.0; > > > 234 currDeriv = 0.0; > > > 236 Utilities::RoundTime(calcTime, pspStepSize); > > > 240 for (i=histBeginIdx; i<histSize; i++) { > > > 241 tmpInput = inputHist[i]; > > > 242 funcTime = calcTime - tmpInput.time; > > > 243 funcWeight = tmpInput.weight; > > > 247 tblIndex = (funcTime / pspStepSize); > > > 249 if (tblIndex < pspLSize) { > > > 250 if ( funcWeight > 0.0 ) { > > > 255 currState = currState + (funcWeight * > > > (ipspLookup[tblIndex])); > > > 256 currDeriv = currDeriv + (funcWeight * > > > (idPspLookup[tblIndex])); > > > 240 for (i=histBeginIdx; i<histSize; i++) { > > > 241 tmpInput = inputHist[i]; > > > 242 funcTime = calcTime - tmpInput.time; > > > 243 funcWeight = tmpInput.weight; > > > 247 tblIndex = (funcTime / pspStepSize); > > > 249 if (tblIndex < pspLSize) { > > > 250 if ( funcWeight > 0.0 ) { > > > 251 currState = currState + (funcWeight * > > > (epspLookup[tblIndex])); > > > 252 currDeriv = currDeriv + (funcWeight * > > > (edPspLookup[tblIndex])); > > > 240 for (i=histBeginIdx; i<histSize; i++) { > > > 267 if ( (currDeriv < 0.0) && (currState < 1.0) ) { > > > 276 else if (currState > 1.0) { > > > 281 stateDelta = fabs(1.0 - currState); > > > 
282 threshCrs = (stateDelta / currDeriv) + > > lstThreshCrs; > > > 284 if ((threshCrs - lstThreshCrs) < convergeRes) > { > > > 285 if (stateDelta < 1.0) { > > > 291 converged = 0; > > > 292 calcTime = int(threshCrs); > > > 293 lstThreshCrs = threshCrs; > > > 298 schedSpikeTime = 0; > > > 232 while (iterate) { > > > > > > Here is the culprit: line 292, unlike what is said in point "a" > of > > comment, > > > the old calcTime is equal to the new calcTime and there is no > > convergence. > > > Therefore the loop continues forever. > > > > > > Moreover, the value in threshCrs is dangerously close to 2^32, so > > > converting using an "int" to store in a "AmTimeInt" is > error-prone > > (lines > > > 292 and 307). This is of course assuming "AmTimeInt" was defined > to > > > unsigned long long and 64 bits, otherwise the whole simulation > > fails at > > > roughly 1h 11min 35seconds anyway. > > > I've used static_cast<AmTimeInt>(...) instead of int(...) and > > changed float > > > variables to double for better precision > > > This increases precision, but does not solve the fundamental > > problem. Hence > > > I've also added an explicit test: > > > > > > 292a AmTimeInt newCalcTime = > > > static_cast<AmTimeInt>(threshCrs); > > > 292b if (newCalcTime == calcTime) > iterate=0; > > > 292c calcTime = newCalcTime; > > > > > > I've also added these lines for the test line 307. > > > > > > ------------------- > > > network.h/cpp > > > > > > Changed GetMaxRunTime to return AmTimeInt instead of unsigned int > === message truncated === |
From: Rüdiger K. <rk...@rk...> - 2006-01-27 19:43:37
|
Nicolas is right about the AmTimeInt issue - Amygdala must be able to run for more than an hour. But it's not as simple as changing the typedef. I just tried out what happens if I simply typedef AmTimeInt to a 64-bit integer: Execution time went up on the order of 30%. Another, bigger problem is that no SIMD unit I am aware of can handle 64-bit integers. We don't want to stay without SIMD - we can do a spike on an axon with up to 128 static synapses with *two* SIMD commands! That's just a few cycles. The good part is that we're actually only dealing with differences, such as the time difference between the last and the current spike. If that's more than one hour of simTime then the last spike is irrelevant, anyway. I have no idea yet how to exploit the fact that we don't need an absolute time for processing in order to stay on 32 bits. There should be an absolute time for reference, though. -Rudiger --- Matt Grover <mg...@am...> wrote: > Thanks for the patch. I'm happy to hear that you've been using > Amygdala for > something useful. > > I'll take a look at this tonight or tomorrow night and send you any > questions > that I have. > > Matt > > > > Hello, > > > > First, let me thank you all for your efforts. As a user and > developper, > > Amygdala has also saved me a lot of time. > > > > I'm currently using Amygdala for a research project using spiking > networks. > > Amygdala was specifically built for that, so it seemed logical to > re-use > > it. I'll post the project online in a week or 2, as soon as it's > cleaned up > > a bit. > > > > In the mean time, here are a few contributions to Amygdala. I've > attached a > > diff file to this message against version 0.4.0, but didn't look in > the > > latest CVS for dups. You'll find comments for the diff file below. > > > > Best wishes, > > Nicolas Brodu > > > > > > ------------------- > > In lsmtopology.h: > > > > Include should be <amygdala/topology.h> for consistency. 
> > > > ------------------- > > In netloader.h: > > > > Including "config.h" is wrong: it should not be provided with the > library. > > Including in the CPP ensures correct code isolation. > > > > ------------------- > > In staticsynapse.cpp > > > > 00105 // static_cast can be used here because the Trainer > was > > verified to be of the correct > > 00106 // type when this synapse was added to the dendrite > > 00107 > > static_cast<StaticTrainer*>(dendrite->GetTrainer())->Train(this, > > lastTransTime, trainingHistIdx); > > > > Not true!!! Dendrite.cpp says: > > 00061 // FIXME: Need to do checks here to make sure the trainer > type > > matches the trainable > > 00062 // neuron type. > > 00063 void Dendrite::SetTrainer(Trainer* t) > > > > As a result the program crashes. > > > > As a workaround I had to derive my class from StaticTrainer, even > if it's > > not related > > > > ------------------- > > In network.h line 252 > > > > getTopology should take the string as const ref, as the other > methods do. 
> > > > ------------------- > > Empty function: > > > > void ConnectorRegistry::Register(std::string name, NConnector * > connector) > > {} should be: > > void ConnectorRegistry::Register(std::string name, NConnector * > connector) > > {connectorMap[name]=connector;} > > > > ------------------- > > Forever looping in alphaneuron.cpp integration: > > > > Here is a gdb trace: > > 232 while (iterate) { > > 233 currState = 0.0; > > 234 currDeriv = 0.0; > > 236 Utilities::RoundTime(calcTime, pspStepSize); > > 240 for (i=histBeginIdx; i<histSize; i++) { > > 241 tmpInput = inputHist[i]; > > 242 funcTime = calcTime - tmpInput.time; > > 243 funcWeight = tmpInput.weight; > > 247 tblIndex = (funcTime / pspStepSize); > > 249 if (tblIndex < pspLSize) { > > 250 if ( funcWeight > 0.0 ) { > > 255 currState = currState + (funcWeight * > > (ipspLookup[tblIndex])); > > 256 currDeriv = currDeriv + (funcWeight * > > (idPspLookup[tblIndex])); > > 240 for (i=histBeginIdx; i<histSize; i++) { > > 241 tmpInput = inputHist[i]; > > 242 funcTime = calcTime - tmpInput.time; > > 243 funcWeight = tmpInput.weight; > > 247 tblIndex = (funcTime / pspStepSize); > > 249 if (tblIndex < pspLSize) { > > 250 if ( funcWeight > 0.0 ) { > > 251 currState = currState + (funcWeight * > > (epspLookup[tblIndex])); > > 252 currDeriv = currDeriv + (funcWeight * > > (edPspLookup[tblIndex])); > > 240 for (i=histBeginIdx; i<histSize; i++) { > > 267 if ( (currDeriv < 0.0) && (currState < 1.0) ) { > > 276 else if (currState > 1.0) { > > 281 stateDelta = fabs(1.0 - currState); > > 282 threshCrs = (stateDelta / currDeriv) + > lstThreshCrs; > > 284 if ((threshCrs - lstThreshCrs) < convergeRes) { > > 285 if (stateDelta < 1.0) { > > 291 converged = 0; > > 292 calcTime = int(threshCrs); > > 293 lstThreshCrs = threshCrs; > > 298 schedSpikeTime = 0; > > 232 while (iterate) { > > > > Here is the culprit: line 292, unlike what is said in point "a" of > comment, > > the old calcTime is equal to the new calcTime and there is no > 
convergence. > > Therefore the loop continues forever. > > > > Moreover, the value in threshCrs is dangerously close to 2^32, so > > converting using an "int" to store in a "AmTimeInt" is error-prone > (lines > > 292 and 307). This is of course assuming "AmTimeInt" was defined to > > unsigned long long and 64 bits, otherwise the whole simulation > fails at > > roughly 1h 11min 35seconds anyway. > > I've used static_cast<AmTimeInt>(...) instead of int(...) and > changed float > > variables to double for better precision > > This increases precision, but does not solve the fundamental > problem. Hence > > I've also added an explicit test: > > > > 292a AmTimeInt newCalcTime = > > static_cast<AmTimeInt>(threshCrs); > > 292b if (newCalcTime == calcTime) iterate=0; > > 292c calcTime = newCalcTime; > > > > I've also added these lines for the test line 307. > > > > ------------------- > > network.h/cpp > > > > Changed GetMaxRunTime to return AmTimeInt instead of unsigned int > > > > ------------------- > > Utilities.h/cpp & files using rand() > > > > Added possibility to set user-defined RNG. This allows better > experiment > > reproduceability, and better randomness than system rand(). Ideally > my > > patch should have considered adding RNGs for both 0<=x<=1 and > 0<=x<1, but > > only one function is provided for now for simplicity. > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through > log files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD > SPLUNK! > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642 > _______________________________________________ > Amygdala-development mailing list > Amy...@li... > https://lists.sourceforge.net/lists/listinfo/amygdala-development > |
From: Matt G. <mg...@am...> - 2006-01-26 15:49:26
|
Thanks for the patch. I'm happy to hear that you've been using Amygdala for something useful. I'll take a look at this tonight or tomorrow night and send you any questions that I have. Matt > Hello, > > First, let me thank you all for your efforts. As a user and developper, > Amygdala has also saved me a lot of time. > > I'm currently using Amygdala for a research project using spiking networks. > Amygdala was specifically built for that, so it seemed logical to re-use > it. I'll post the project online in a week or 2, as soon as it's cleaned up > a bit. > > In the mean time, here are a few contributions to Amygdala. I've attached a > diff file to this message against version 0.4.0, but didn't look in the > latest CVS for dups. You'll find comments for the diff file below. > > Best wishes, > Nicolas Brodu > > > ------------------- > In lsmtopology.h: > > Include should be <amygdala/topology.h> for consistency. > > ------------------- > In netloader.h: > > Including "config.h" is wrong: it should not be provided with the library. > Including in the CPP ensures correct code isolation. > > ------------------- > In staticsynapse.cpp > > 00105 // static_cast can be used here because the Trainer was > verified to be of the correct > 00106 // type when this synapse was added to the dendrite > 00107 > static_cast<StaticTrainer*>(dendrite->GetTrainer())->Train(this, > lastTransTime, trainingHistIdx); > > Not true!!! Dendrite.cpp says: > 00061 // FIXME: Need to do checks here to make sure the trainer type > matches the trainable > 00062 // neuron type. > 00063 void Dendrite::SetTrainer(Trainer* t) > > As a result the program crashes. > > As a workaround I had to derive my class from StaticTrainer, even if it's > not related > > ------------------- > In network.h line 252 > > getTopology should take the string as const ref, as the other methods do. 
> > ------------------- > Empty function: > > void ConnectorRegistry::Register(std::string name, NConnector * connector) > {} should be: > void ConnectorRegistry::Register(std::string name, NConnector * connector) > {connectorMap[name]=connector;} > > ------------------- > Forever looping in alphaneuron.cpp integration: > > Here is a gdb trace: > 232 while (iterate) { > 233 currState = 0.0; > 234 currDeriv = 0.0; > 236 Utilities::RoundTime(calcTime, pspStepSize); > 240 for (i=histBeginIdx; i<histSize; i++) { > 241 tmpInput = inputHist[i]; > 242 funcTime = calcTime - tmpInput.time; > 243 funcWeight = tmpInput.weight; > 247 tblIndex = (funcTime / pspStepSize); > 249 if (tblIndex < pspLSize) { > 250 if ( funcWeight > 0.0 ) { > 255 currState = currState + (funcWeight * > (ipspLookup[tblIndex])); > 256 currDeriv = currDeriv + (funcWeight * > (idPspLookup[tblIndex])); > 240 for (i=histBeginIdx; i<histSize; i++) { > 241 tmpInput = inputHist[i]; > 242 funcTime = calcTime - tmpInput.time; > 243 funcWeight = tmpInput.weight; > 247 tblIndex = (funcTime / pspStepSize); > 249 if (tblIndex < pspLSize) { > 250 if ( funcWeight > 0.0 ) { > 251 currState = currState + (funcWeight * > (epspLookup[tblIndex])); > 252 currDeriv = currDeriv + (funcWeight * > (edPspLookup[tblIndex])); > 240 for (i=histBeginIdx; i<histSize; i++) { > 267 if ( (currDeriv < 0.0) && (currState < 1.0) ) { > 276 else if (currState > 1.0) { > 281 stateDelta = fabs(1.0 - currState); > 282 threshCrs = (stateDelta / currDeriv) + lstThreshCrs; > 284 if ((threshCrs - lstThreshCrs) < convergeRes) { > 285 if (stateDelta < 1.0) { > 291 converged = 0; > 292 calcTime = int(threshCrs); > 293 lstThreshCrs = threshCrs; > 298 schedSpikeTime = 0; > 232 while (iterate) { > > Here is the culprit: line 292, unlike what is said in point "a" of comment, > the old calcTime is equal to the new calcTime and there is no convergence. > Therefore the loop continues forever. 
> > Moreover, the value in threshCrs is dangerously close to 2^32, so > converting using an "int" to store in a "AmTimeInt" is error-prone (lines > 292 and 307). This is of course assuming "AmTimeInt" was defined to > unsigned long long and 64 bits, otherwise the whole simulation fails at > roughly 1h 11min 35seconds anyway. > I've used static_cast<AmTimeInt>(...) instead of int(...) and changed float > variables to double for better precision > This increases precision, but does not solve the fundamental problem. Hence > I've also added an explicit test: > > 292a AmTimeInt newCalcTime = > static_cast<AmTimeInt>(threshCrs); > 292b if (newCalcTime == calcTime) iterate=0; > 292c calcTime = newCalcTime; > > I've also added these lines for the test line 307. > > ------------------- > network.h/cpp > > Changed GetMaxRunTime to return AmTimeInt instead of unsigned int > > ------------------- > Utilities.h/cpp & files using rand() > > Added possibility to set user-defined RNG. This allows better experiment > reproduceability, and better randomness than system rand(). Ideally my > patch should have considered adding RNGs for both 0<=x<=1 and 0<=x<1, but > only one function is provided for now for simplicity. |
From: Nicolas B. <br...@us...> - 2006-01-26 05:34:37
|
Hello, First, let me thank you all for your efforts. As a user and developer, Amygdala has also saved me a lot of time. I'm currently using Amygdala for a research project using spiking networks. Amygdala was specifically built for that, so it seemed logical to re-use it. I'll post the project online in a week or two, as soon as it's cleaned up a bit. In the meantime, here are a few contributions to Amygdala. I've attached a diff file to this message against version 0.4.0, but didn't check the latest CVS for dups. You'll find comments for the diff file below. Best wishes, Nicolas Brodu ------------------- In lsmtopology.h: The include should be <amygdala/topology.h> for consistency. ------------------- In netloader.h: Including "config.h" is wrong: it should not be provided with the library. Including it in the CPP ensures correct code isolation. ------------------- In staticsynapse.cpp 00105 // static_cast can be used here because the Trainer was verified to be of the correct 00106 // type when this synapse was added to the dendrite 00107 static_cast<StaticTrainer*>(dendrite->GetTrainer())->Train(this, lastTransTime, trainingHistIdx); Not true!!! Dendrite.cpp says: 00061 // FIXME: Need to do checks here to make sure the trainer type matches the trainable 00062 // neuron type. 00063 void Dendrite::SetTrainer(Trainer* t) As a result the program crashes. As a workaround I had to derive my class from StaticTrainer, even though it's not related. ------------------- In network.h line 252 getTopology should take the string as a const ref, as the other methods do. 
------------------- Empty function: void ConnectorRegistry::Register(std::string name, NConnector * connector) {} should be: void ConnectorRegistry::Register(std::string name, NConnector * connector) {connectorMap[name]=connector;} ------------------- Forever looping in alphaneuron.cpp integration: Here is a gdb trace: 232 while (iterate) { 233 currState = 0.0; 234 currDeriv = 0.0; 236 Utilities::RoundTime(calcTime, pspStepSize); 240 for (i=histBeginIdx; i<histSize; i++) { 241 tmpInput = inputHist[i]; 242 funcTime = calcTime - tmpInput.time; 243 funcWeight = tmpInput.weight; 247 tblIndex = (funcTime / pspStepSize); 249 if (tblIndex < pspLSize) { 250 if ( funcWeight > 0.0 ) { 255 currState = currState + (funcWeight * (ipspLookup[tblIndex])); 256 currDeriv = currDeriv + (funcWeight * (idPspLookup[tblIndex])); 240 for (i=histBeginIdx; i<histSize; i++) { 241 tmpInput = inputHist[i]; 242 funcTime = calcTime - tmpInput.time; 243 funcWeight = tmpInput.weight; 247 tblIndex = (funcTime / pspStepSize); 249 if (tblIndex < pspLSize) { 250 if ( funcWeight > 0.0 ) { 251 currState = currState + (funcWeight * (epspLookup[tblIndex])); 252 currDeriv = currDeriv + (funcWeight * (edPspLookup[tblIndex])); 240 for (i=histBeginIdx; i<histSize; i++) { 267 if ( (currDeriv < 0.0) && (currState < 1.0) ) { 276 else if (currState > 1.0) { 281 stateDelta = fabs(1.0 - currState); 282 threshCrs = (stateDelta / currDeriv) + lstThreshCrs; 284 if ((threshCrs - lstThreshCrs) < convergeRes) { 285 if (stateDelta < 1.0) { 291 converged = 0; 292 calcTime = int(threshCrs); 293 lstThreshCrs = threshCrs; 298 schedSpikeTime = 0; 232 while (iterate) { Here is the culprit: line 292, unlike what is said in point "a" of comment, the old calcTime is equal to the new calcTime and there is no convergence. Therefore the loop continues forever. Moreover, the value in threshCrs is dangerously close to 2^32, so converting using an "int" to store in a "AmTimeInt" is error-prone (lines 292 and 307). 
This is of course assuming "AmTimeInt" was defined as unsigned long long (64 bits); otherwise the whole simulation fails at roughly 1h 11min 35s anyway. I've used static_cast<AmTimeInt>(...) instead of int(...) and changed float variables to double for better precision. This increases precision, but does not solve the fundamental problem. Hence I've also added an explicit test: 292a AmTimeInt newCalcTime = static_cast<AmTimeInt>(threshCrs); 292b if (newCalcTime == calcTime) iterate=0; 292c calcTime = newCalcTime; I've also added these lines for the test at line 307. ------------------- network.h/cpp Changed GetMaxRunTime to return AmTimeInt instead of unsigned int ------------------- Utilities.h/cpp & files using rand() Added the possibility to set a user-defined RNG. This allows better experiment reproducibility, and better randomness than the system rand(). Ideally my patch should have considered adding RNGs for both 0<=x<=1 and 0<=x<1, but only one function is provided for now for simplicity. |
From: <rud...@ya...> - 2005-12-16 21:11:17
|
I created the STI_CELL branch. I think first I'll try out how far I get with the IDL approach. IDL is somewhat similar to the old DCE. It allows for synchronized and asynchronous calls. I'll test whether context switching is quick enough if each spike is a synchronous call where the one SPE executes all the synapse stuff. Since the thread blocks there, it'll be replaced by a sleeping thread until that one sends out a spike. If that works, Cell will just be a general case of normal multicore CPUs with a SIMD instruction set. Except that Cell would require context switches because 8 threads have to keep 8 SPEs happy although the PPE can run only 2. So probably context switching will become the bottleneck on Cell. The XBox360 wouldn't have that problem because on this box we'd run 3 threads on 3 cores. If context switches really are the bottleneck I'll try asynchronous calls from 2 threads. Unless someone has a better idea, of course. -Rudiger |
From: Rüdiger K. <rk...@rk...> - 2005-12-13 12:41:03
|
Over the last few weeks I found some time to start playing with IBM's Cell simulator a bit. It's really funny to see Linux/PPC boot on an Intel box. There were a couple of surprises: - The PPE is presented as a 2-way SMP system. It has 2 register sets and can execute 2 threads simultaneously. - An SPE is not at all treated like a co-processor. One actually executes SPE programs from the PPE. This SPE program has its own main() and may even reside in a different binary executable, although it doesn't have to. - An SPE is not a specialized SIMD processor. It can also execute scalar instructions. In the very first example, the 8 SPEs are made to run printf("Hello Cell(0x%ll)\n", id); in parallel, which is obviously fairly complex scalar code. It has a Turing-complete instruction set which is incompatible with the PPE's PowerPC instruction set. If the real Cell delivers what the simulator promises then we should be able to run networks in real time that come close to actual insect brains in size. I expect interesting things to happen once we can run networks of that size. There are a number of programming models. I can't really assess yet which one is the best for us, but currently I lean toward the IDL - an interface definition language for RPC between the PPE and the SPEs. Ideally, the result of the port is such that the code for Cell is OK for recent multicore processors with synchronized vector units, such as the XBox's Xenon processor and recent Intel. I am not very confident that this is possible - the SPEs differ too much from older SIMD implementations and the thread/task model differs too much. It's a bad idea to require different versions of the Synapse and Spike trigger code, however, because it's extremely difficult to make sure this code really conforms to the underlying model. I'll open an STI_CELL branch in the repository next Friday. Please everybody commit your changes by then - that'll make the later merging easier. -Rudiger |
From: Rüdiger K. <rk...@rk...> - 2005-11-09 12:48:46
|
@ http://www.alphaworks.ibm.com/tech/cellsystemsim From what I gather they have everything we need - the Cell simulator, a subset of a Fedora distro for Power, 2 compilers (xlC and GCC) and a bunch of debugging and profiling tools. I'll toy around with it the next few days to see if Amygdala can utilize the SPEs. -Rudiger |
From: Uwe A. <uwe...@ro...> - 2005-11-04 14:44:31
|
Rüdiger Koch schrieb: > > --- Uwe Arzt <uwe...@ro...> wrote: > > >>Hi, >> >>see the attached screenshot :) > > > Wow!!! Great job! > Does the OpenGL part work, too? Yes, I also have spikeloop up and running (see screenshot; it looks better without the JPEG compression :). Next step is geneserver (porting the network code to Qt4). A little bit more work ;) :q! Uwe |
From: Rüdiger K. <rk...@rk...> - 2005-11-04 09:47:11
|
--- Uwe Arzt <uwe...@ro...> wrote: > Hi, > > see the attached screenshot :) Wow!!! Great job! Does the OpenGL part work, too? > now 2 questions: > > 1. How do i express the dependency to qt4 in configure.in (i haven't > worked with autoconf/automake yet)? The first thing i ported is ltdl > to QLibrary. You have to think of configure.in as a shell script. It consists of macros that expand to shell code and of literal shell code. At the end of configure.in you see the macros AC_SUBST(AMDIR) AC_SUBST(QTINCLUDES) AC_SUBST(QTLIBS) AC_SUBST(QTDEF) These macros produce shell code that substitutes the variables in Makefile.in (which in turn is generated from Makefile.am). You'll find these variables in Makefile.am and you will see that they are used to put the compiler and linker command lines together. > 2. Is it OK to drop the qt3 parts in visual? Amygdala will already > need qt4... Yes, for the main dev trunk we can chuck it out. -Rudiger |
From: Uwe A. <uwe...@ro...> - 2005-11-04 09:13:32
|
Hi, see the attached screenshot :) Now, two questions: 1. How do I express the dependency on Qt4 in configure.in (I haven't worked with autoconf/automake yet)? The first thing I ported was ltdl to QLibrary. 2. Is it OK to drop the qt3 parts in visual? Amygdala will already need qt4... Thanks in advance Uwe |
From: Uwe A. <uwe...@ro...> - 2005-11-04 06:41:00
|
Hi, >Date: Thu, 3 Nov 2005 02:43:54 -0800 (PST) >From: "Rüdiger" Koch <rk...@rk...> >Subject: Re: [Amygdala-development] Question about synapse >To: amy...@li... > > >--- Manjunath Sripadarao <man...@gm...> wrote: > > > >>Interesting, when you say few hundred to a few thousand neurons do >>you mean, on a single computer ? What are the specs of the machine ? >>Yes for faster than real-time I think SIMD and FPGA would be useful. >> >> > >Number of neurons is not a good meter for performance. Guess for an SNN >the number of InputSpikes is because updating the state of all >postsynaptic neurons after a spike is what eats all the CPU cycles. I >am getting in the order of 500,000 Inputspikes per second on an Athlon >2000+ when using StaticSynapse and BasicNeuron. This means you can run >a sparsely connected network of 500-1000 neurons in about real time. >This is without any particular optimizations other than improving >memory access patterns in 0.4. With Cell, that should increase a lot - >I'd expect up to 100 Million input spikes per second. > > It would be great to have a standard test scenario in the lib. >>I don't have access a big cluster, just a small one(5-10 nodes). I >>am interested in >>exploring the possibility of using GPUs for SNN simlation and >>possibly a cluster of GPUs. >> >> > >GPUs are just SIMD machines that are even more inconvenient to program >than usually - after all that's not what they've been built for. With >Cell just around the corner I can't see the point of that. SDKs for >Cell are already available. I'll go for Cell once I get hold of an SDK. >Why not help there? Playstations will certainly be cheap enough to >afford a cluster. > > > There are some libs which can help with GPU programming; as a starting point, go to http://www.gpgpu.org. I have not tried one yet :( >>If I get aroung to tinkering, am I free to post questions on this >>list regarding amygdala's internals ? >> >> > >This is the place for it. Ask away! > >-Rudiger > > :q! Uwe |
From: Rüdiger K. <rk...@rk...> - 2005-11-03 17:53:34
|
--- Uwe Arzt <uwe...@ro...> wrote: > Hi, > > i can compile amygdala (as dll without multithreading) and the > subdirectory test. Before i checkin my first changes, i want to > discuss > some points with you: > > 1. > For compiling as DLL, i had to add > > #ifdef _MSC_VER > #ifdef AMYGDALA_EXPORTS > #define AMYGDALADLL __declspec(dllexport) > #else > #define AMYGDALADLL __declspec(dllimport) > #endif > #else > #define AMYGDALADLL > #endif That's OK. We could use the GCC visibility feature (http://gcc.gnu.org/wiki/Visibility) but for small libraries like Amygdala / Visual this shouldn't be necessary. We can add that any time later. > to amygdalaclass.h > > also every single method needed outside the dll has to prefixed with > > [snip amygdalaclass.cpp] > AMYGDALADLL AmygdalaClass() {}; > AMYGDALADLL virtual ~AmygdalaClass() = 0; > [snip] > > in all header files (windows is sometimes really stupid!). Is that > OK for you? Yes, it's OK. Windows isn't that stupid here. For large / many libraries it can yield significant improvements in loading times. That's why the Visibility feature was added to GCC. > > 2. > The "std::string AmygdalaClass::GetClassName(bool fullyQualified)" > Method now looks like this: > > { > const std::type_info &ti = typeid(*this); > std::string dn; > > #if defined (__GNUC__) > int status; > char* demangledName; > > demangledName = abi::__cxa_demangle(ti.name(), 0, 0, &status); > dn = demangledName; > free(demangledName); > #elif defined (_MSC_VER) > // OK for VS.NET 2003 and VS.NET 2005 beta 2 > // TODO: check with nonbeta 2005 Compiler > dn = (ti.name() + 6); > #else > #pragma error "Undefined Compiler" > #endif > > [snip] That's OK - if I remember right, CL.EXE's name() function returns the demangled name. > do you have any rules to deal with preprocessor macros? No particular rules. They should be as few and simple as possible. No metaprogramming with macros. > > 3. 
> For the name mangling, i added to test/main.cpp: > > int main(int argc, char** argv) > { > cout << "Starting Amygdala API test program.\n"; > Network::Init(argc, argv); > > cout << "Network::GetClassName(true) " > << Network::GetNetworkRef()->GetClassName(true) > << std::endl; > cout << "Network::GetClassName(false) " > << Network::GetNetworkRef()->GetClassName(false) > << std::endl; > > [snip] Fine. > do you have any rules for testdrivers? Is it possible to checkin the > test output and diff a new run to this old checkedin output (no diff > would be OK)? We don't have a real test suite yet, unless you count the samples. Amygdala was (and to an extent still is) too experimental. Just consider the changes from 0.3 to 0.4. > 4. > How the webpage is updated? I would keep up a page for the Visual > Studio > port up, with tips to compile and so on. You've got to log on to the shell account, cd to the Amygdala directory and edit the PHP scripts in place or use scp. -Rudiger |
From: Uwe A. <uwe...@ro...> - 2005-11-03 16:00:57
|
Hi, I can compile amygdala (as a DLL, without multithreading) and the subdirectory test. Before I check in my first changes, I want to discuss some points with you: 1. For compiling as a DLL, I had to add #ifdef _MSC_VER #ifdef AMYGDALA_EXPORTS #define AMYGDALADLL __declspec(dllexport) #else #define AMYGDALADLL __declspec(dllimport) #endif #else #define AMYGDALADLL #endif to amygdalaclass.h also every single method needed outside the DLL has to be prefixed with [snip amygdalaclass.cpp] AMYGDALADLL AmygdalaClass() {}; AMYGDALADLL virtual ~AmygdalaClass() = 0; [snip] in all header files (Windows is sometimes really stupid!). Is that OK for you? 2. The "std::string AmygdalaClass::GetClassName(bool fullyQualified)" method now looks like this: { const std::type_info &ti = typeid(*this); std::string dn; #if defined (__GNUC__) int status; char* demangledName; demangledName = abi::__cxa_demangle(ti.name(), 0, 0, &status); dn = demangledName; free(demangledName); #elif defined (_MSC_VER) // OK for VS.NET 2003 and VS.NET 2005 beta 2 // TODO: check with nonbeta 2005 Compiler dn = (ti.name() + 6); #else #error "Undefined Compiler" #endif [snip] Do you have any rules for dealing with preprocessor macros? 3. For the name mangling, I added to test/main.cpp: int main(int argc, char** argv) { cout << "Starting Amygdala API test program.\n"; Network::Init(argc, argv); cout << "Network::GetClassName(true) " << Network::GetNetworkRef()->GetClassName(true) << std::endl; cout << "Network::GetClassName(false) " << Network::GetNetworkRef()->GetClassName(false) << std::endl; [snip] Do you have any rules for test drivers? Is it possible to check in the test output and diff a new run against this old checked-in output (no diff would be OK)? 4. How is the webpage updated? I would keep a page for the Visual Studio port up, with tips on compiling and so on. Thanks in advance Uwe |