I've been wanting to build my own neural network in Python in order to better understand how it works. I've been following this series of videos as a sort of guide, but it seems backpropagation gets much harder when you use a larger network, which I plan to do, and the author doesn't really explain how to scale it up.
Currently, my network feeds forward, but I don't have much of an idea of where to start with backpropagation. My code is posted below to show where I'm currently at (I'm not asking for coding help, just for pointers to good sources, and I figure knowing where I'm at might help):
import numpy

class NN:
    def __init__(self, input_length):
        self.layers = []
        self.input_length = input_length
        self.prediction = []

    def addLayer(self, layer):
        self.layers.append(layer)
        if len(self.layers) > 1:
            # size the new layer's weights by the previous layer's neuron count
            self.layers[-1].setWeights(len(self.layers[-2].neurons))
        else:
            self.layers[0].setWeights(self.input_length)

    def feedForward(self, inputs):
        _inputs = inputs
        for layer in self.layers:
            layer.process(_inputs)
            _inputs = layer.output
        self.prediction = _inputs

    def calculateErr(self, target):
        # squared error per output
        return [(p - t) ** 2 for p, t in zip(self.prediction, target)]

class Layer:
    def __init__(self, length, function):
        # instance attributes, not class attributes, so layers don't share state
        self.neurons = [Neuron(function) for _ in range(length)]
        self.biases = [numpy.random.randn() for _ in range(length)]
        self.weights = []
        self.output = []

    def setWeights(self, inlength):
        # one weight vector per neuron, one weight per input
        self.weights = [[numpy.random.randn() for _ in range(inlength)]
                        for _ in range(len(self.neurons))]

    def process(self, inputs):
        # rebuild the output list on every pass instead of appending forever
        self.output = [n.run(inputs, self.weights[i], self.biases[i])
                       for i, n in enumerate(self.neurons)]

class Neuron:
    def __init__(self, function):
        self.function = function
        self.output = 0

    def run(self, inputs, weights, bias):
        self.output = self.function(inputs, weights, bias)
        return self.output

def sigmoid(n):
    return 1 / (1 + numpy.exp(-n))  # note the minus sign

def inputlayer_func(inputs, weights, bias):
    # identity activation for an input layer (currently unused)
    return inputs

def l2_func(inputs, weights, bias):
    # weighted sum plus bias, squashed through the sigmoid
    out = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(out)

NNet = NN(2)
l2 = Layer(1, l2_func)
NNet.addLayer(l2)
NNet.feedForward([2.0, 1.0])
print(NNet.prediction)

So, is there any resource that explains how to implement the backpropagation algorithm step by step?
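For what it's worth, here's my rough understanding of what a single backprop step might look like for just one sigmoid neuron with squared error, using the chain rule: dE/dw_i = dE/dy * dy/dz * dz/dw_i, where z is the weighted sum and y = sigmoid(z). The weights, target, and learning rate below are made-up example values, and this only covers one neuron, not a full network:

```python
import numpy

def sigmoid(n):
    return 1 / (1 + numpy.exp(-n))

# forward pass for one neuron
x = [2.0, 1.0]    # inputs
w = [0.5, -0.3]   # example weights (made up)
b = 0.1           # example bias (made up)
z = sum(wi * xi for wi, xi in zip(w, x)) + b
y = sigmoid(z)

# backward pass: chain rule for E = (y - t)**2
t = 1.0                      # example target (made up)
dE_dy = 2 * (y - t)          # derivative of squared error w.r.t. output
dy_dz = y * (1 - y)          # derivative of the sigmoid
grad_w = [dE_dy * dy_dz * xi for xi in x]  # dz/dw_i = x_i
grad_b = dE_dy * dy_dz                     # dz/db = 1

# gradient-descent update
lr = 0.1
w = [wi - lr * gwi for wi, gwi in zip(w, grad_w)]
b = b - lr * grad_b
```

My understanding is that scaling this up means propagating dE/dz backwards through each layer's weights, but that's exactly the part I'd like a step-by-step resource for.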