Mechanics of Spatial Filtering

While point processing (intensity transformation) operates on a 1×1 neighborhood, spatial filtering uses a larger neighborhood (e.g., a 3×3 mask, kernel, or window).


Convolution Output Size Formula

When applying a filter, the spatial dimensions of the output image can be calculated using the following formula:

$$O = \frac{W - K + 2P}{S} + 1$$

where $W$ is the input size, $K$ is the kernel (mask) size, $P$ is the padding, and $S$ is the stride.
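As a quick sanity check, the formula can be computed directly (a minimal sketch; $W$, $K$, $P$, $S$ are the input size, kernel size, padding, and stride from the formula above):

```python
def conv_output_size(W, K, P=0, S=1):
    """Spatial output size: O = (W - K + 2P) / S + 1."""
    return (W - K + 2 * P) // S + 1

# A 5x5 kernel with padding 2 and stride 1 preserves the input size:
print(conv_output_size(224, 5, P=2, S=1))  # 224

# A 3x3 kernel with no padding shrinks a 7-pixel dimension to 5:
print(conv_output_size(7, 3))  # 5
```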


What Happens at the Borders?
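When the mask is centered on a border pixel, part of it falls outside the image, so the image is typically padded before filtering. A sketch of three common padding strategies using NumPy:

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])

# Three common strategies for handling mask overhang at the borders:
zero    = np.pad(img, 1, mode="constant")  # pad with 0s (zero padding)
replica = np.pad(img, 1, mode="edge")      # replicate the nearest border pixel
mirror  = np.pad(img, 1, mode="reflect")   # mirror about the border
```

Zero padding darkens the result near the borders; replicate and mirror padding avoid that artifact at the cost of slightly "inventing" image content.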


2. Smoothing Spatial Filters

Smoothing filters are used for blurring (a preprocessing step to remove small details or bridge gaps in lines) and for noise reduction.

A. Linear Smoothing Filters (Averaging)

These filters replace the value of every pixel with the average of the intensity levels in its neighborhood.

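A minimal sketch of a box (averaging) filter, assuming replicate padding at the borders:

```python
import numpy as np

def average_filter(img, size=3):
    """Replace each pixel with the mean of its size x size neighborhood.
    Borders are handled with replicate padding."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# A single bright pixel is spread over its neighborhood:
img = np.zeros((5, 5))
img[2, 2] = 9.0
print(average_filter(img)[2, 2])  # 1.0  (9 spread over 9 neighbors)
```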

B. Order-Statistic Filters (Nonlinear)

These filters replace the center pixel based on the ordering (ranking) of the pixels in the image area encompassed by the filter. The best-known example is the median filter, which is highly effective against salt-and-pepper (impulse) noise.
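A minimal sketch of the median filter, the classic order-statistic filter (replicate padding assumed at the borders):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighborhood
    (an order-statistic filter)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# An isolated noise spike is removed entirely, not just attenuated:
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0          # salt noise
print(median_filter(img)[2, 2])  # 10.0
```

Unlike the averaging filter, which would only attenuate the spike, the median filter discards it completely because the outlier never reaches the middle of the sorted neighborhood.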


3. Sharpening Spatial Filters

The objective of sharpening is to highlight transitions in intensity (edges, boundaries). Because smoothing is achieved by spatial averaging (integration), sharpening is achieved by spatial differentiation.

A. The Laplacian (Second Derivative)

The Laplacian is an isotropic (rotation-invariant) operator that calculates the second derivative of the image to find rapid changes in intensity.

Mathematical Definition:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$

Discrete Approximation Masks:

Common 3×3 masks for the Laplacian include a center value of −4 with 1s at the top, bottom, left, and right (and 0s in the corners), or a center of −8 with 1s in all eight surrounding cells. The sign-reversed versions (positive center with −1 neighbors) are equally common.
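A minimal NumPy sketch of the 4-neighbor discrete Laplacian (center coefficient −4, with 1s at the four edge-adjacent positions; replicate padding assumed):

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian using the 4-neighbor mask:
    f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +      # top + bottom neighbors
            p[1:-1, :-2] + p[1:-1, 2:] -       # left + right neighbors
            4.0 * p[1:-1, 1:-1])               # minus 4x the center

# The Laplacian is zero in flat regions and responds only at transitions:
flat = np.full((4, 4), 7.0)
print(laplacian(flat).max())  # 0.0
```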

Image Enhancement with the Laplacian:

Because the Laplacian highlights edges but zeroes out flat areas (losing the background), the sharpened image is usually added back to the original image to restore the background information.

$$g(x,y) = f(x,y) + c\left[\nabla^2 f(x,y)\right]$$

(Where $c$ depends on the center coefficient of the mask: a negative center corresponds to $c = -1$, and a positive center to $c = 1$, so that edge detail is added back with the correct sign.)

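Putting the two pieces together, a sketch of Laplacian sharpening, assuming the −4-center mask (hence $c = -1$) and replicate padding:

```python
import numpy as np

def sharpen_laplacian(img, c=-1.0):
    """g = f + c * Laplacian(f); c = -1 pairs with the -4-center mask."""
    p = np.pad(img, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] -
           4.0 * p[1:-1, 1:-1])
    return img + c * lap

# Sharpening exaggerates a step edge: the dark side dips below 0
# and the bright side overshoots above 1 (undershoot/overshoot).
img = np.array([[0., 0., 1., 1.]] * 3)
print(sharpen_laplacian(img)[1])  # row becomes 0, -1, 2, 1
```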

B. Unsharp Masking & Highboost Filtering

This is a classic technique used in the publishing industry, consisting of three steps:

  1. Blur the original image: $\bar{f}(x,y)$

  2. Subtract the blurred image from the original to create a mask:

    $g_{\text{mask}}(x,y) = f(x,y) - \bar{f}(x,y)$
  3. Add the mask back to the original image:

    $g(x,y) = f(x,y) + k \, g_{\text{mask}}(x,y)$

When $k = 1$, this is unsharp masking; when $k > 1$, it is highboost filtering.

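The three steps can be sketched directly, assuming a 3×3 box blur for the smoothing step and replicate padding:

```python
import numpy as np

def unsharp_mask(img, k=1.0):
    """Unsharp masking (k = 1) / highboost filtering (k > 1):
    blur f, form mask = f - blur, then return g = f + k * mask."""
    p = np.pad(img, 1, mode="edge")
    # Step 1: 3x3 box blur as the sum of the 9 shifted views.
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    # Step 2: the unsharp mask.
    mask = img - blur
    # Step 3: add the (weighted) mask back to the original.
    return img + k * mask

# A flat region is left untouched, since its mask is zero everywhere:
flat = np.full((4, 4), 5.0)
print(np.allclose(unsharp_mask(flat, k=2.0), flat))  # True
```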

C. The Gradient (First Derivative)

First derivatives are primarily used to extract edges (rather than just enhance the whole image). The gradient of an image f at (x,y) is a vector:

$$\nabla f = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f}{\partial x} \\[4pt] \dfrac{\partial f}{\partial y} \end{bmatrix}$$

Gradient Magnitude:

To calculate the strength of the edge, we find the magnitude. In image processing, this is often approximated using absolute values for computational speed:

$$M(x,y) \approx |G_x| + |G_y|$$

Common Gradient Operators:
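The most common choices are the Roberts cross-gradient operators (2×2) and the Sobel operators (3×3). A sketch using the Sobel masks together with the $|G_x| + |G_y|$ magnitude approximation (replicate padding assumed):

```python
import numpy as np

def gradient_magnitude(img):
    """Approximate |grad f| as |Gx| + |Gy| using the 3x3 Sobel masks."""
    p = np.pad(img, 1, mode="edge").astype(float)
    # Gx: (right column, weighted 1-2-1) - (left column, weighted 1-2-1)
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    # Gy: (bottom row, weighted 1-2-1) - (top row, weighted 1-2-1)
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.abs(gx) + np.abs(gy)

# The response is zero in flat regions and large across an edge:
edge = np.zeros((5, 5))
edge[:, 3:] = 1.0
print(gradient_magnitude(edge)[2, 2])  # 4.0 (on the edge)
print(gradient_magnitude(edge)[2, 0])  # 0.0 (flat region)
```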

To see the full comparison and summary for Ch3, see here.