I am currently looking at changing some of the "dumb" processing to event-driven processing, so that particular objects can signal interested objects when certain events take place. The hope is to remove some of the tight control and data coupling. I am looking to use the SDK-provided observer design pattern, and I would say the architecture of the library as a whole lends itself to this. All comments are welcome.
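As a rough sketch of what I mean by "signal interested objects", here is a minimal listener/observer arrangement in plain Java. The class and method names (`NeuralNet`, `TrainingListener`, `epochFinished`) are mine for illustration, not the library's actual API; the point is just that the net pushes events instead of callers polling it:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface: anyone interested in training events
// implements this and registers with the net.
interface TrainingListener {
    void epochFinished(int epoch, double error);
}

// Hypothetical subject: notifies its listeners as training progresses.
class NeuralNet {
    private final List<TrainingListener> listeners = new ArrayList<>();

    void addListener(TrainingListener l) {
        listeners.add(l);
    }

    void train(int epochs) {
        for (int epoch = 1; epoch <= epochs; epoch++) {
            double error = 1.0 / epoch;          // stand-in for real training work
            for (TrainingListener l : listeners) // push the event to observers
                l.epochFinished(epoch, error);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        NeuralNet net = new NeuralNet();
        net.addListener((epoch, error) ->
                System.out.println("epoch " + epoch + " error " + error));
        net.train(3);
    }
}
```

The JDK's own `java.util.Observable`/`Observer` pair would work the same way, though a small purpose-built interface like this keeps the event payload typed.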
If you would like to refer to this comment somewhere else in this project, copy and paste the following link:
What specifically are you on about? Which objects are you looking at?
For starters, I am talking mainly about the GA and NN APIs. I will use the NN API for this example:
Much of the processing is done linearly, which creates processing bottlenecks depending on the network architecture. However, an NN is by nature a graph (in most cases a directed graph), so a layer of neurons could be evaluated/trained in parallel, with neurons in connected layers simply waiting for the presence of a signal (the activation of a neuron). Much like the biological basis of NNs, except without the temporal nature... but this too could be implemented! ;>
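One way to sketch that "wait for the activation signal" idea with stock `java.util.concurrent`: evaluate each neuron of a layer on a thread pool and use a `CountDownLatch` as the signal the downstream layer waits on. This is a toy (one weight matrix, `tanh` squashing, no biases), not the library's implementation:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LayerParallelDemo {
    // Evaluate one layer's neurons concurrently; the caller (the next layer)
    // only proceeds once every neuron here has "fired" the latch.
    static double[] evaluateLayer(double[] inputs, double[][] weights,
                                  ExecutorService pool) throws InterruptedException {
        int n = weights.length;
        double[] out = new double[n];
        CountDownLatch fired = new CountDownLatch(n); // the "activation signal"
        for (int i = 0; i < n; i++) {
            final int neuron = i;
            pool.execute(() -> {
                double sum = 0;
                for (int j = 0; j < inputs.length; j++)
                    sum += weights[neuron][j] * inputs[j];
                out[neuron] = Math.tanh(sum); // squashing function
                fired.countDown();            // signal: this neuron is done
            });
        }
        fired.await(); // downstream layer waits for all activations
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        double[] hidden = evaluateLayer(new double[]{1.0, -1.0},
                new double[][]{{0.5, 0.5}, {-0.5, 0.5}}, pool);
        System.out.println(hidden[0] + " " + hidden[1]);
        pool.shutdown();
    }
}
```

The latch also gives the necessary happens-before edge, so the next layer safely sees the activations written by the worker threads.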
In the GA API we have a similar example to work with: a GA evaluates a population of genomes, all of which can be done in parallel because genomes are completely independent for this operation. So the whole population of genomes is evaluated in parallel, and the population (or the GA) waits for a signal of completion.
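That "evaluate in parallel, wait for completion" shape maps almost directly onto `ExecutorService.invokeAll`, which blocks until every task has finished. A toy sketch (the sum-of-squares fitness and the class name are mine, not the GA API's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFitnessDemo {
    // Evaluate every genome concurrently; returns once all fitnesses are in.
    static double[] evaluateAll(double[][] population)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Double>> tasks = new ArrayList<>();
        for (double[] genome : population)
            tasks.add(() -> {
                double fitness = 0;
                for (double gene : genome)
                    fitness += gene * gene; // toy fitness: sum of squares
                return fitness;
            });
        // invokeAll blocks until every genome is evaluated: the GA simply
        // waits for this completion signal instead of looping sequentially.
        List<Future<Double>> results = pool.invokeAll(tasks);
        double[] fitnesses = new double[results.size()];
        for (int i = 0; i < fitnesses.length; i++)
            fitnesses[i] = results.get(i).get();
        pool.shutdown();
        return fitnesses;
    }

    public static void main(String[] args) throws Exception {
        double[] f = evaluateAll(new double[][]{{1, 2}, {3, 4}});
        System.out.println(f[0] + " " + f[1]);
    }
}
```

An event-driven variant would have each task notify a population-level observer as it completes, rather than blocking in `invokeAll`, but the independence-between-genomes argument is the same either way.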
Hopefully this clears up what I meant. Any advice on how, and more specifically where in the API, this could happen is what I am currently considering; your input would be valuable!