Enhance Conv2d section with code and explanations (#695)
* Enhance Conv2d section with code and explanations

  Added explanation and code snippet for nn.Conv2d parameters, including in_channels, out_channels, kernel_size, stride, and padding. Included output size formula for Conv2d.

* Correct markdown formatting in conv-nets documentation

  Fixed formatting for explanation and output size equation.

Co-authored-by: Alexey Grigorev <alexeygrigorev@users.noreply.github.com>
08-deep-learning/04-conv-neural-nets.md
29 additions & 0 deletions
@@ -26,6 +26,35 @@ This is the first step in the process of extracting valuable features from an im
Consider a 5x5 black-and-white image whose pixel values are either 0 or 1, and a filter matrix of dimension 3x3. Slide the filter matrix over the image and compute the dot product at each position to get the convolved feature matrix.
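
A minimal sketch of this sliding dot product in plain NumPy (the image and filter values below are only illustrative, and the convolution is "valid": no padding, stride 1):

```python
import numpy as np

# 5x5 black-and-white image with pixel values 0 or 1 (example values)
image = np.array([
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
])

# 3x3 filter matrix (example values)
kernel = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 1],
])

# Slide the filter over the image and take the element-wise dot product
out_size = image.shape[0] - kernel.shape[0] + 1  # 5 - 3 + 1 = 3
feature_map = np.zeros((out_size, out_size))
for i in range(out_size):
    for j in range(out_size):
        patch = image[i:i + 3, j:j + 3]
        feature_map[i, j] = np.sum(patch * kernel)

print(feature_map)  # 3x3 convolved feature matrix
```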
In PyTorch, this convolution is implemented by the `nn.Conv2d` layer, whose main parameters are:

```python
nn.Conv2d(
    in_channels,   # number of channels in the input image
    out_channels,  # number of channels (filters) produced by the convolution
    kernel_size,   # size of the convolving kernel
    stride,        # stride of the convolution (default 1)
    padding,       # zero-padding added to all sides of the input (default 0)
)
```
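
Given these parameters, the output spatial size follows the standard formula (assuming a square input of size `W`, kernel size `K`, padding `P`, stride `S`, and no dilation): `output_size = (W - K + 2P) / S + 1`. A small check of this in PyTorch, using illustrative values that match the 5x5 image and 3x3 filter example above:

```python
import torch
import torch.nn as nn

# 1-channel 5x5 input, 3x3 kernel, stride 1, no padding (example values)
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=0)
x = torch.randn(1, 1, 5, 5)   # (batch, channels, height, width)
print(conv(x).shape)          # torch.Size([1, 1, 3, 3]) -> (5 - 3 + 2*0) / 1 + 1 = 3
```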
Once the feature maps are extracted, the next step is to move them to a ReLU layer. ReLU (Rectified Linear Unit) is an activation function that performs an element-wise operation, setting all negative pixels to 0. It introduces non-linearity to the network, and the generated output is a rectified feature map. The ReLU function is `f(x) = max(0, x)`.
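
A quick sketch of this element-wise operation on a small feature map (the tensor values are arbitrary examples):

```python
import torch
import torch.nn as nn

feature_map = torch.tensor([[-2.0, 1.5],
                            [ 0.5, -0.3]])

relu = nn.ReLU()
rectified = relu(feature_map)  # all negative values become 0
print(rectified)               # tensor([[0.0000, 1.5000], [0.5000, 0.0000]])
```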