$\begingroup$

I'm trying to implement a convolutional neural network at the moment. A simple feedforward network is not the problem, but I'm having some trouble with the weight adjustment in the convolutional layer.

Let's assume I have four layers: input, convolution, hidden, and output.

[Figure: an input matrix convolved with a filter to produce a feature map. Source: http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/]

In the picture above we see just the input and the convolution layer. The deltas of the convolution layer are calculated as in a normal feedforward network. But how do I update the weights/filter matrix between the input and the convolution layer?

$\endgroup$

1 Answer

$\begingroup$

To learn the kernel/filter matrix of a convolution layer, we take the partial derivative of the loss with respect to the filter matrix and use gradient descent to update the filters: $$ W = W - \alpha\frac{\partial L}{\partial W} $$

Convolutional neural networks also use the back-propagation algorithm to find the partial derivatives of the loss with respect to the filter matrix. Because the same filter is applied at every position of the input, the gradient contributions from all positions are accumulated into the single shared filter.
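As a minimal NumPy sketch (not the asker's code; single channel, stride 1, no padding, names like `x`, `w`, and `delta` are illustrative): the gradient of the loss with respect to the filter turns out to be a valid cross-correlation of the input with the deltas of the output feature map, with contributions from every sliding position summed into the same filter.

```python
import numpy as np

def conv2d(x, w):
    """Valid cross-correlation of input x with filter w (stride 1, no padding)."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def filter_grad(x, delta):
    """dL/dW: accumulate each output delta over the input patch it saw.

    Every position (i, j) of the feature map used the same filter, so its
    delta contributes to the one shared gradient dw.
    """
    kh = x.shape[0] - delta.shape[0] + 1
    kw = x.shape[1] - delta.shape[1] + 1
    dw = np.zeros((kh, kw))
    for i in range(delta.shape[0]):
        for j in range(delta.shape[1]):
            dw += delta[i, j] * x[i:i + kh, j:j + kw]
    return dw

# One gradient-descent step on the filter, as in the update rule above.
x = np.random.randn(5, 5)        # input
w = np.random.randn(3, 3)        # filter
delta = np.random.randn(3, 3)    # deltas of the conv layer's output (3x3 here)
alpha = 0.01                     # learning rate
w = w - alpha * filter_grad(x, delta)
```

Note that `filter_grad(x, delta)` computes the same thing as `conv2d(x, delta)`, which is the usual statement that the filter gradient is the valid cross-correlation of the input with the output deltas.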

$\endgroup$
  • $\begingroup$ This seems entirely correct to me, but it is not clear why the OP has not already understood this stage, since they have (also correctly) stated "the deltas of the convolution layer are calculated as in a normal feedforward network". I suspect as well as this answer, they are also missing the step of accumulating the deltas for each part of the feature output, so that they apply to the same weights each time. $\endgroup$ Commented Feb 17, 2017 at 8:00
  • $\begingroup$ @avaj can you share a link to your implementation? $\endgroup$ Commented Feb 17, 2017 at 8:31
  • $\begingroup$ I have a correct implementation now. If anybody is still interested, I can share it. $\endgroup$ Commented Apr 30, 2017 at 17:58
  • $\begingroup$ @avaj could you please share your implementation? $\endgroup$ Commented Jan 29, 2018 at 12:47
