
Weights layout

For connections incoming into a given neuron

Weights are best viewed as labelling the incoming edges of a given neuron. If a neuron is on layer i, then the weights of its incoming connections from layer i-1 are given by the following array:

weight-from-Bias, weight-from-neuron-1, weight-from-neuron-2, ..., weight-from-neuron-N

where N is the number of actual (non-bias) neurons on layer i-1.
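As a minimal sketch of that per-neuron array (C++ is used here only for illustration; the name incoming and the concrete sizes are assumptions, not the library's actual API), the bias weight sits at index 0 and the weight from neuron k of layer i-1 at index k:

#include <cstddef>
#include <vector>

int main() {
    const std::size_t N = 3;               // real neurons on layer i-1 (example value)
    std::vector<double> incoming(N + 1);   // one neuron's incoming weights

    incoming[0] = 0.5;                     // weight from the bias neuron
    for (std::size_t k = 1; k <= N; ++k)
        incoming[k] = 0.0;                 // weight from neuron k of layer i-1
    return 0;
}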

For connections incoming into the neurons of a given layer i

All neurons of a given layer i (bias excluded, of course) have such an array of incoming connections from layer i-1. We can lay these arrays out one after the other as rows, like this:

w(B->1) w(1->1) w(2->1) w(3->1) ... w(N->1)
w(B->2) w(1->2) w(2->2) w(3->2) ... w(N->2)
...
w(B->M) w(1->M) w(2->M) w(3->M) ... w(N->M)

where w(x->y) denotes the weight of the link going from neuron x of layer i-1 into neuron y of layer i. The preceding assumes that there are N neurons (bias excluded) on layer i-1 and M neurons on layer i.

That makes (N+1)*M weights to store for the incoming connections of a given layer.
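Under that layout, the flat index of w(x->y) within a layer's (N+1)*M weight block follows directly. Here is a small sketch (the function name weight_index is illustrative, not part of any existing API), assuming the rows above are stored one after the other with the bias weight first in each row and x = 0 standing for the bias:

#include <cstddef>

// Flat index of w(x->y) in a (N+1)*M block, rows stored one after the other.
// x == 0 refers to the bias; x in 1..N refers to a neuron of layer i-1;
// y in 1..M refers to a neuron of layer i.
std::size_t weight_index(std::size_t x, std::size_t y, std::size_t N) {
    return (y - 1) * (N + 1) + x;
}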

For a complete network

For a complete network, we lay out the weights of the incoming connections of each layer one after the other, like this:

LAYER 1:

w(0,B->1,1)    w(0,1->1,1)    w(0,2->1,1)    w(0,3->1,1)    ... w(0,NB_0->1,1)
w(0,B->1,2)    w(0,1->1,2)    w(0,2->1,2)    w(0,3->1,2)    ... w(0,NB_0->1,2)
...
w(0,B->1,NB_1) w(0,1->1,NB_1) w(0,2->1,NB_1) w(0,3->1,NB_1) ... w(0,NB_0->1,NB_1)


LAYER 2:

w(1,B->2,1)    w(1,1->2,1)    w(1,2->2,1)    w(1,3->2,1)    ... w(1,NB_1->2,1)
w(1,B->2,2)    w(1,1->2,2)    w(1,2->2,2)    w(1,3->2,2)    ... w(1,NB_1->2,2)
...
w(1,B->2,NB_2) w(1,1->2,NB_2) w(1,2->2,NB_2) w(1,3->2,NB_2) ... w(1,NB_1->2,NB_2)

...

LAYER X:

w(X-1,B->X,1)    w(X-1,1->X,1)    w(X-1,2->X,1)    w(X-1,3->X,1)    ... w(X-1,NB_X-1->X,1)
w(X-1,B->X,2)    w(X-1,1->X,2)    w(X-1,2->X,2)    w(X-1,3->X,2)    ... w(X-1,NB_X-1->X,2)
...
w(X-1,B->X,NB_X) w(X-1,1->X,NB_X) w(X-1,2->X,NB_X) w(X-1,3->X,NB_X) ... w(X-1,NB_X-1->X,NB_X)

where a,b denotes the bth neuron on layer a, and B denotes the bias neuron. Layer 0 is the input layer. The expression NB_X denotes the number of neurons on layer X, bias excluded. Here it is assumed that there are X layers, input layer excluded.
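With this layout, the weight block of layer i starts right after the blocks of layers 1 to i-1. A sketch of computing each block's starting offset inside one contiguous array (layer_offsets and the vector of neuron counts are illustrative assumptions, not an existing interface):

#include <cstddef>
#include <vector>

// Given the neuron counts NB_0 .. NB_X (bias excluded, input layer at index 0),
// return the starting offset of each layer's weight block in the flat array.
std::vector<std::size_t> layer_offsets(const std::vector<std::size_t>& nb) {
    std::vector<std::size_t> offsets;
    std::size_t running = 0;
    for (std::size_t layer = 1; layer < nb.size(); ++layer) {
        offsets.push_back(running);                  // block of this layer starts here
        running += (nb[layer - 1] + 1) * nb[layer];  // (N+1)*M weights for this layer
    }
    return offsets;
}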

This layout has been chosen so that the so-called "sum" of incoming signals for a given neuron is the dot product of two vectors: the first is the vector of output activations of the neurons of the previous layer (with the bias coming first and set to 1), and the second is the vector of incoming weights, i.e. a row in the layout above.
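As a sketch of that dot product (again with purely illustrative names, assuming the flat row-major block described above), the weighted sum for neuron y of layer i could be computed as follows:

#include <cstddef>
#include <vector>

// "Sum" of incoming signals for neuron y (1..M) of layer i:
// dot product of the previous layer's activations (bias first, set to 1)
// with row y of the layer's (N+1)*M weight block.
double weighted_sum(const std::vector<double>& block,    // flat (N+1)*M weights
                    const std::vector<double>& prev_act, // N activations of layer i-1
                    std::size_t y)
{
    const std::size_t N = prev_act.size();
    const double* row = &block[(y - 1) * (N + 1)];
    double sum = row[0] * 1.0;                 // bias contribution, activation = 1
    for (std::size_t k = 1; k <= N; ++k)
        sum += row[k] * prev_act[k - 1];
    return sum;
}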

Posted by Francis Girard 2012-08-07