Hi all,
I've built a simple neural network to learn to multiply numbers.
The network doesn't converge (the error sits at 8.83333333333...), and during training it only decreases in the smallest decimal places.
What can I do to fix that?
Here is the code I used:
import numpy as np

# sigmoid activation; with deriv=True, x is assumed to already be a sigmoid output
def nonlin(x, deriv=False):
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))

X = np.array([[1,2,1],
              [5,2,2],
              [0,1,4],
              [3,3,1],
              [1,1,1],
              [5,5,1]])

y = np.array([[2],
              [20],
              [0],
              [9],
              [1],
              [25]])

np.random.seed(1)

# randomly initialize our weights with mean 0
syn0 = 2*np.random.random((3,2)) - 1
syn1 = 2*np.random.random((2,2)) - 1
syn2 = 2*np.random.random((2,1)) - 1

for counter in range(600000000):
    # feed forward through layers 0, 1, 2 and 3
    l0 = X
    l1 = nonlin(np.dot(l0,syn0))
    l2 = nonlin(np.dot(l1,syn1))
    l3 = nonlin(np.dot(l2,syn2))  # and this is the output layer

    # how much did we miss the target value?
    l3_error = y - l3

    l3_delta = l3_error*nonlin(l3,deriv=True)
    l2_error = l3_delta.dot(syn2.T)
    l2_delta = l2_error*nonlin(l2,deriv=True)
    l1_error = l2_delta.dot(syn1.T)
    l1_delta = l1_error*nonlin(l1,deriv=True)

    if (counter % 1000) == 0:
        print("Error:" + str(np.mean(np.abs(l3_error))))

    syn2 += l2.T.dot(l3_delta)
    syn1 += l1.T.dot(l2_delta)
    syn0 += l0.T.dot(l1_delta)

# solve
solve_1 = np.array([[2,3,2]])  # input
solve_2 = nonlin(np.dot(solve_1,syn0))
solve_3 = nonlin(np.dot(solve_2,syn1))
solve_4 = nonlin(np.dot(solve_3,syn2))  # output
print("solution output is:", solve_4)
First of all, use fanntool before doing any coding, to be sure your training data is correct.
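Another thing worth checking, though this is only a guess from the numbers in the question: the output layer uses a sigmoid, so it can only ever produce values in (0, 1), while the targets go up to 25. If the output saturates near 1, the mean absolute error over these targets is exactly 8.8333..., which matches the reported error. A minimal sketch of verifying this and rescaling the targets (the `y` values are taken from the question; the scale factor 25.0 is an assumed choice, the maximum target):

```python
import numpy as np

# targets from the question: the products the network should learn
y = np.array([[2], [20], [0], [9], [1], [25]], dtype=float)

# a sigmoid output layer is bounded in (0, 1); if it saturates near 1,
# the mean absolute error equals the reported 8.8333...
saturated_output = np.ones_like(y)
print(np.mean(np.abs(y - saturated_output)))  # 8.8333...

# one possible fix: rescale the targets into (0, 1) before training,
# then multiply predictions back by the same factor afterwards
scale = 25.0  # assumed: the largest target value
y_scaled = y / scale
assert y_scaled.min() >= 0.0 and y_scaled.max() <= 1.0
```

After training on `y_scaled`, the prediction would be read out as `solve_4 * scale`.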