Of course! Let's dive into using SIFT (Scale-Invariant Feature Transform) with Python and OpenCV.

SIFT is a powerful computer vision algorithm used for detecting and describing local features in images. These features are "invariant" to scale, rotation, and illumination changes, which makes them excellent for tasks like object recognition, image stitching, and 3D reconstruction.
Here's a complete guide, from the basics to a practical example.
Prerequisites: Installation
First, you need to install the necessary libraries. A historical note: SIFT was patented and for years was only available in OpenCV's contrib modules, but the patent expired in March 2020 and SIFT has shipped in the main module since OpenCV 4.4.0. Installing the contrib build still works and also gives you extra algorithms.
Step 1: Install OpenCV

```bash
# For pip (recommended for most users)
pip install opencv-contrib-python

# If you need the headless version (for servers, no GUI)
# pip install opencv-contrib-python-headless
```
Step 2: Install other dependencies
We'll use matplotlib for easy image display.

```bash
pip install matplotlib numpy
```
Understanding the SIFT Process
Using SIFT in OpenCV involves two main steps:
- Feature Detection: Find keypoints (interesting points) in the image. SIFT finds points that are stable across different scales and rotations.
- Feature Description: For each keypoint, compute a descriptor. A descriptor is a vector (a list of numbers) that describes the local region around the keypoint. This vector is what you'll use to compare keypoints between different images.
Basic Code: Detecting and Drawing Keypoints
This is the simplest example. We'll load an image, create a SIFT detector, find the keypoints, and draw them on the image.
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

# --- 1. Load the image ---
# SIFT operates on grayscale images
image_path = 'your_image.jpg'  # Replace with your image path
image = cv2.imread(image_path)
if image is None:
    print(f"Error: Could not load image at {image_path}")
    exit()
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# --- 2. Create a SIFT object ---
sift = cv2.SIFT_create()

# --- 3. Detect keypoints and compute descriptors ---
# detect() finds only the keypoints; detectAndCompute() finds keypoints
# AND computes their descriptors in one pass, which is more efficient.
keypoints, descriptors = sift.detectAndCompute(gray_image, None)

print(f"Number of keypoints detected: {len(keypoints)}")
# A single keypoint object has properties like:
# - .pt: (x, y) coordinates
# - .size: the diameter of the keypoint
# - .angle: the orientation of the keypoint
# - .response: the strength of the keypoint
print(f"First keypoint: pt={keypoints[0].pt}, "
      f"size={keypoints[0].size}, angle={keypoints[0].angle}")

# --- 4. Draw the keypoints on the original image ---
# DRAW_RICH_KEYPOINTS draws a circle with size proportional to the
# keypoint size, plus a line showing its orientation.
image_with_keypoints = cv2.drawKeypoints(
    image,
    keypoints,
    None,
    flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS
)

# --- 5. Display the result ---
# Convert BGR (OpenCV) to RGB (Matplotlib) for correct color display
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
keypoints_rgb = cv2.cvtColor(image_with_keypoints, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.imshow(image_rgb)
plt.title('Original Image')
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(keypoints_rgb)
plt.title('Image with SIFT Keypoints')
plt.axis('off')
plt.show()
```
To run this code, save it as sift_basic.py and replace 'your_image.jpg' with a path to an image. You should see two images side-by-side: the original and the same image with circles and lines marking the detected keypoints.

Practical Example: Feature Matching
The real power of SIFT shines when you match features between two images of the same object from different angles or viewpoints. This is the core of object recognition.
We'll use a Brute-Force matcher with a cross-check for reliable matches.
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

def sift_feature_matching(img_path1, img_path2):
    """Detects SIFT features in two images and matches them."""
    # --- 1. Load and prepare images ---
    img1 = cv2.imread(img_path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path2, cv2.IMREAD_GRAYSCALE)
    if img1 is None or img2 is None:
        print("Error: Could not load one or both images.")
        return

    # --- 2. Create SIFT object and detect features ---
    sift = cv2.SIFT_create()
    keypoints1, descriptors1 = sift.detectAndCompute(img1, None)
    keypoints2, descriptors2 = sift.detectAndCompute(img2, None)

    # --- 3. Match features using a Brute-Force matcher with cross-check ---
    # NORM_L2 is the right distance metric for SIFT's float descriptors
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = bf.match(descriptors1, descriptors2)

    # Sort matches by distance (lower distance = better match)
    matches = sorted(matches, key=lambda x: x.distance)

    # --- 4. Draw the matches ---
    # drawMatches places the two images side by side with lines connecting
    # the matched keypoints. Keep only the best matches, e.g. the top 50.
    good_matches = matches[:50]
    matched_image = cv2.drawMatches(
        img1,
        keypoints1,
        img2,
        keypoints2,
        good_matches,
        None,
        flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS
    )

    # --- 5. Display the result ---
    plt.figure(figsize=(15, 7))
    plt.imshow(matched_image)
    plt.title('SIFT Feature Matching')
    plt.axis('off')
    plt.show()

# --- Example Usage ---
# Make sure you have two images of the same object,
# e.g. 'box.png' and 'box_rotated.png'.
# sift_feature_matching('path/to/image1.jpg', 'path/to/image2.jpg')
```
How to use this:
- Save this code as sift_matching.py.
- Find two images of the same object taken from different angles. You can download sample images online (e.g., "opencv box" or "book cover dataset").
- Update the sift_feature_matching() call with your image paths.
- Run the script. You will see the two input images side by side, with lines connecting the matched keypoints.
Important Parameters and Tuning
When you create the SIFT object, you can pass parameters to tune its behavior.
```python
# Create SIFT object with custom parameters
sift = cv2.SIFT_create(
    nfeatures=0,             # Number of best features to retain; 0 means no limit
    nOctaveLayers=3,         # Octave layers in the scale pyramid; more layers = more scales
    contrastThreshold=0.03,  # Filters weak keypoints; higher value = fewer, stronger keypoints
    edgeThreshold=10,        # Filters edge-like keypoints; higher value = more keypoints retained
    sigma=1.6                # Sigma of the Gaussian applied to the input image at octave 0
)
```
- contrastThreshold: This is one of the most useful parameters for tuning. If you get too many keypoints, try increasing this value; if you miss important features, try decreasing it.
- nfeatures: Useful if you only care about the top N most salient features and want to speed up processing.
Summary
| Task | Code Snippet | Explanation |
|---|---|---|
| Install | `pip install opencv-contrib-python` | Installs OpenCV with the extra contrib modules (SIFT is also in plain opencv-python since 4.4.0). |
| Load Image | `gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)` | SIFT operates on a grayscale image. |
| Create SIFT | `sift = cv2.SIFT_create()` | Instantiates the SIFT detector. |
| Detect & Describe | `kp, des = sift.detectAndCompute(gray, None)` | The core call that finds keypoints and computes their descriptors. |
| Draw Keypoints | `img_kp = cv2.drawKeypoints(img, kp, None)` | Visualizes the detected keypoints on the image. |
| Match Features | `bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)` then `matches = bf.match(des1, des2)` | Creates a matcher and finds correspondences between descriptors of two images. |
SIFT is a foundational algorithm in feature-based computer vision. Understanding how to use it with OpenCV is a crucial skill for many applications.
