Image Filtering

Take a look at the Spatial Filtering lecture in the Image Processing course, it's easier to follow.

The image is what matters here.


I'd like to share a strategy note I made for 2D Convolution. I noticed that whenever I try to perform a 2D convolution by hand on paper, I get completely lost in the numbers, so hopefully this helps someone =)
Performing 2D Convolution

By: Asser Ahmed
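As a sanity check for the by-hand procedure, here is a minimal sketch of a "valid" (no padding) 2D convolution. The function name and the tiny 3x3 example are my own, assuming NumPy is available; note that true convolution flips the kernel before sliding it.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: flip the kernel, then slide and sum."""
    kernel = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # element-wise multiply the patch with the flipped kernel, then sum
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=float)
kernel = np.array([[1, 0],
                   [0, -1]], dtype=float)
print(conv2d(image, kernel))  # [[4. 4.] [4. 4.]]
```

Working the same 3x3 example on paper and comparing against this output is a quick way to check whether you flipped the kernel and aligned the patches correctly.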


1. Machine Learning vs. Deep Learning

Deep Learning (DL) is a highly effective subfield of Machine Learning (ML) that uses multiple layers to learn data representations and find complex patterns.

The fundamental difference between the two lies in Feature Extraction: in classical ML, features are hand-engineered by domain experts before training, while in DL the network learns the features itself, directly from the raw data.


2. Convolutional Neural Networks (CNNs)

When dealing with images (like a 32x32x3 RGB image), standard neural networks struggle because they flatten the image, losing the spatial relationship between pixels.

CNNs are designed specifically to solve this. They preserve the spatial structure of the image and use far fewer weights by sharing them across the entire image.
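A rough back-of-the-envelope count makes the weight-sharing point concrete. The hidden-layer size (100) and the 5x5 filter size below are my own illustrative assumptions, not from the original note:

```python
# Parameter count: one fully connected layer on a flattened 32x32x3 image
# vs. a single shared 5x5 convolution filter (biases ignored).
H, W, C = 32, 32, 3
n_neurons = 100                      # hypothetical hidden-layer size

fc_weights = H * W * C * n_neurons   # every pixel connects to every neuron
conv_weights = 5 * 5 * C             # one 5x5 filter, reused across the image

print(fc_weights)    # 307200
print(conv_weights)  # 75
```

The filter is slid over the whole image, so the same 75 weights produce an entire activation map instead of needing a separate weight per pixel.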

[Figure: Convolutional Neural Network architecture (AI generated)]

A standard CNN is a sequence of distinct layers:

A. The Convolution Layer (CONV)

This is the core building block of a CNN. Instead of connecting every pixel to a neuron, a CONV layer uses filters (also called kernels).

The kernel must have the same depth as its input volume (the output of the previous layer).
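The depth-matching rule can be sketched as follows. This is a toy implementation with my own names, assuming NumPy; note that CNN libraries actually compute cross-correlation (no kernel flip), which is what this sketch does:

```python
import numpy as np

def conv_layer_output(volume, kernel):
    """One activation map from one filter.
    volume: (H, W, C); kernel: (kh, kw, C) -- depths must match."""
    assert volume.shape[2] == kernel.shape[2], "kernel depth must equal input depth"
    kh, kw, _ = kernel.shape
    oh = volume.shape[0] - kh + 1
    ow = volume.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # the 3D patch and the 3D kernel are multiplied element-wise
            # across all channels and summed to a single number
            out[i, j] = np.sum(volume[i:i+kh, j:j+kw, :] * kernel)
    return out

rgb = np.random.rand(32, 32, 3)   # toy 32x32x3 RGB input
filt = np.random.rand(3, 3, 3)    # 3x3 filter with depth 3, matching the input
print(conv_layer_output(rgb, filt).shape)  # (30, 30)
```

Each filter produces one 2D activation map; a CONV layer with N filters stacks N such maps into an output volume of depth N.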

B. Activation Functions (ReLU)

Between the Convolution layers, the network intersperses activation functions (like ReLU) to introduce non-linearity, allowing the network to learn complex, non-linear patterns.
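ReLU itself is a one-liner; a minimal sketch, assuming NumPy:

```python
import numpy as np

def relu(x):
    """ReLU: keep positive values, zero out negatives."""
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # negatives become 0
```

Because it is applied element-wise, ReLU changes values but never the shape of the activation map.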

C. The Pooling Layer (POOL)

Pooling layers are used to downsample the activation maps, reducing the spatial dimensions (width and height) while keeping the most important information.
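A minimal sketch of the most common variant, 2x2 max pooling with stride 2 (the function name and the 4x4 example are my own, assuming NumPy):

```python
import numpy as np

def max_pool(fmap, size=2, stride=2):
    """Downsample by keeping the largest value in each window."""
    oh = (fmap.shape[0] - size) // stride + 1
    ow = (fmap.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i*stride:i*stride+size,
                             j*stride:j*stride+size].max()
    return out

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 7, 8],
                 [9, 2, 1, 0],
                 [3, 4, 5, 6]], dtype=float)
print(max_pool(fmap))  # [[6. 8.] [9. 6.]] -- each 2x2 block keeps its maximum
```

Note that pooling has no weights to learn and shrinks only width and height; the depth of the volume is unchanged.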

D. The Fully Connected Layer (FC)

After the image has passed through multiple sequences of CONV, ReLU, and POOL layers, the high-level features are finally flattened and passed into Fully Connected layers. These act just like a standard neural network to make the final classification (e.g., predicting if the image is a car, truck, airplane, ship, or horse).
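The final flatten-and-classify step can be sketched like this. The 5x5x16 feature volume, the random weights, and the softmax at the end are all my own illustrative assumptions:

```python
import numpy as np

np.random.seed(0)
classes = ["car", "truck", "airplane", "ship", "horse"]

features = np.random.rand(5, 5, 16)          # toy high-level feature volume
flat = features.reshape(-1)                  # flatten: 5*5*16 = 400 values

W = np.random.rand(len(classes), flat.size)  # hypothetical FC weights
b = np.zeros(len(classes))
scores = W @ flat + b                        # one score per class

probs = np.exp(scores - scores.max())
probs /= probs.sum()                         # softmax turns scores into probabilities
print(classes[int(np.argmax(probs))])        # predicted class
```

In a real network W and b are learned during training; here they are random, so the prediction is meaningless, but the shapes and the flatten-score-softmax flow are the same.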
