Like it or not, we must prepare to reuse an already-allocated network, because in the context of an ensemble method we cannot afford to allocate a separate network, weights included, for every member.
What is the best way to do so?
Remembering our old computer architecture courses at school, we may notice that we have all the ingredients for a "RISC"-style pipelined execution. In a RISC architecture, the instruction set is, and must be, simple. For instructions taking multiple cycles, if there is enough similarity between the instructions, we can begin the first cycle of one instruction, then execute its second cycle on the next physical layer while the first cycle of the following instruction begins on the first physical layer. Next, the first instruction executes its third cycle on the third physical layer, the second instruction its second cycle on the second physical layer, and a new third instruction begins its first cycle on the first physical layer, and so on. Hence multiple instructions are processed at the same time, much like multiple cars are assembled at the same time on an assembly line, and the overall throughput increases.
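To make the analogy concrete before applying it to a bpnn, here is a minimal software-pipelining sketch in C. The names (`run_stage`, `NUM_STAGES`, `NUM_ITEMS`) are illustrative assumptions, not from the original text: at each tick, stage s works on the item that entered the pipeline s ticks earlier, so several items are in flight at once, exactly like several cars on an assembly line.

```c
#include <stdio.h>

#define NUM_STAGES 3   /* pipeline stages ("physical layers") */
#define NUM_ITEMS  5   /* instructions (or input patterns) pushed through */

/* Placeholder for the work one stage does on one item. */
static void run_stage(int stage, int item)
{
    printf("stage %d processes item %d\n", stage, item);
}

int main(void)
{
    /* One tick per outer iteration; once the pipeline fills,
     * every stage is busy on a different item during the same tick. */
    for (int tick = 0; tick < NUM_ITEMS + NUM_STAGES - 1; ++tick) {
        for (int stage = 0; stage < NUM_STAGES; ++stage) {
            int item = tick - stage;        /* item currently sitting at this stage */
            if (item >= 0 && item < NUM_ITEMS)
                run_stage(stage, item);     /* conceptually runs in parallel per tick */
        }
        printf("---- end of tick %d ----\n", tick);
    }
    return 0;
}
```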
A lot of things may be parallelized when "executing" a bpnn:
Weights are best viewed as labelling the incoming edges of a given neuron. If a neuron is on layer i, then the weights of its incoming connections from layer i-1 are given by the following array:
weight-from-Bias, weight-from-neuron-1, weight-from-neuron-2, ..., weight-from-neuron-N
where N is the number of actual (real) neurons on layer i-1.
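Here is a minimal sketch of that layout in C; the name `net_input` and the function itself are illustrative, not from the original. Index 0 of the array holds the weight from the bias, and indices 1..N hold the weights from the N real neurons of layer i-1.

```c
#include <stddef.h>

/* Net input of one neuron on layer i, given the outputs of layer i-1.
 * weights[0]     : weight from the bias (bias output is taken as 1)
 * weights[1..N]  : weights from the N real neurons of layer i-1 */
double net_input(const double *weights,
                 const double *prev_outputs, /* outputs of the N neurons of layer i-1 */
                 size_t N)
{
    double sum = weights[0] * 1.0;           /* bias contribution */
    for (size_t k = 1; k <= N; ++k)
        sum += weights[k] * prev_outputs[k - 1];
    return sum;
}
```

With this convention, every neuron of layer i iterates over the same contiguous block shape, which is exactly the kind of regularity the pipeline analogy above relies on.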