I am new to neural networks. I was trying to write a simple 4-0-2 MLP (4 inputs, no hidden layer, 2 outputs) and learn the back-propagation algorithm in practice. But my back-propagation always diverges and the output is always [1, 1]. I searched for the possible cause, but neither setting the learning rate to a very small number (0.001) nor flipping the sign of the weight update solved the problem.
Here is the code for the back-propagation algorithm:
```python
def backward(self, trainingSamples):
    for i in range(len(trainingSamples)):
        curr_sample = trainingSamples[i]
        self.input = curr_sample[0]
        self.forward()
        print("output is " + str(self.output))
        curr_des_out = curr_sample[1]
        for i in range(len(self.outputs)):
            error = curr_des_out[i] - self.outputs[i].output
            der_act = self.outputs[i].activate(deriv=True)
            local_gradient = der_act * error
            for j in range(len(self.input)):
                self.weights[j][i] -= self.learning_rate * local_gradient * self.input[j]
```

`trainingSamples` is a tuple of (input array, desired output array) pairs:

```python
(([1, 1, 1, 1], [1, 0]), ([0, 0, 0, 0], [0, 1]), ([1, 0, 0, 0], [0, 1]), ([1, 0, 1, 0], [1, 0]))
```
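For comparison, here is a self-contained version of the single-layer update as I understand it from the textbook (the sigmoid activation and the names `delta_rule_step` and `lr` are my own assumptions; with the error defined as desired minus actual, the weight change is added, not subtracted):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def delta_rule_step(weights, x, target, lr=0.1):
    """One gradient-descent step for a single-layer sigmoid network.
    weights[j][i] connects input j to output i."""
    n_in, n_out = len(x), len(target)
    # Forward pass: a fresh accumulator on every call, nothing carried over.
    net = [sum(x[j] * weights[j][i] for j in range(n_in)) for i in range(n_out)]
    out = [sigmoid(n) for n in net]
    # Weight update: with error = target - output, the change is ADDED.
    for i in range(n_out):
        error = target[i] - out[i]
        local_gradient = out[i] * (1.0 - out[i]) * error  # sigmoid derivative at net[i]
        for j in range(n_in):
            weights[j][i] += lr * local_gradient * x[j]
    return out
```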
Here is the forward pass code:
```python
def forward(self):
    for i in range(len(self.outputs)):
        for j in range(len(self.input)):
            self.outputs[i].input += self.input[j] * self.weights[j][i]
        self.outputs[i].activate()
        self.output[i] = self.outputs[i].output
    return self.output
```
0) Do you reset `outputs[i].input` to 0 before you call `forward()`? Because here you keep adding up.
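A minimal sketch of that fix, keeping your structure (assuming `input` is meant to be the neuron's pre-activation accumulator):

```python
def forward(self):
    for i in range(len(self.outputs)):
        self.outputs[i].input = 0  # reset the accumulator; otherwise it grows across calls
        for j in range(len(self.input)):
            self.outputs[i].input += self.input[j] * self.weights[j][i]
        self.outputs[i].activate()
        self.output[i] = self.outputs[i].output
    return self.output
```

Without the reset, every call to `forward()` stacks new weighted sums on top of the old ones, so the pre-activations keep growing and a sigmoid output saturates at 1 — which would match the constant [1, 1] you are seeing, and would also explain why changing the learning rate or the sign of the update made no difference.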