Of course! In Python, "parametrize" most commonly refers to the concept of parameterizing tests to run the same logic with multiple different inputs. This is a cornerstone of effective testing, as it prevents you from writing repetitive test code.

The most popular and powerful library for this is pytest. I'll cover that in detail, and also show how you can achieve a similar effect with Python's built-in unittest module.
The Core Idea: Why Parametrize?
Imagine you want to test a simple function that adds two numbers.
# my_app.py
def add(a, b):
    """Adds two numbers."""
    return a + b
Without parametrization, you might write tests like this:
# test_simple.py
from my_app import add

def test_add_positive_numbers():
    assert add(1, 2) == 3

def test_add_negative_numbers():
    assert add(-1, -2) == -3

def test_add_zero():
    assert add(0, 5) == 5

def test_add_mixed_numbers():
    assert add(10, -2) == 8
This works, but it's repetitive. If you find a bug in your test logic, you have to fix it in four different places. Parametrization solves this by defining the test logic once and providing a list of inputs to run it against.

Parametrization with pytest (The Modern & Recommended Way)
pytest's @pytest.mark.parametrize decorator is the standard for this task. It's clean, powerful, and integrates seamlessly with the rest of the pytest framework.
Basic Syntax
The decorator takes two arguments:
- A string of comma-separated parameter names (e.g., "a, b, expected").
- A list (or tuple) of tuples, where each inner tuple contains the arguments for one test run (e.g., [(1, 2, 3), (-1, -2, -3)]).
# test_parametrized.py
import pytest
from my_app import add

# The first argument is a string of parameter names.
# The second argument is a list of test data tuples.
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),        # Test case 1
    (-1, -2, -3),     # Test case 2
    (0, 5, 5),        # Test case 3
    (10, -2, 8),      # Test case 4
    (1.5, 2.5, 4.0),  # Test case 5 (works with floats too!)
])
def test_add(a, b, expected):
    """Tests the add function with various inputs."""
    assert add(a, b) == expected
When you run pytest, it automatically generates and runs 5 separate tests from this single function:
$ pytest
============================= test session starts ==============================
...
collected 5 items
test_parametrized.py ..... [100%]
============================== 5 passed in ...s ===============================
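One closely related trick, not shown in the example above but worth knowing, is that you can stack multiple parametrize decorators; pytest then runs the test once for every combination of values. A minimal sketch reusing the same add function:
# test_stacked.py
import pytest
from my_app import add

# Stacked decorators: pytest runs this test for every (a, b) combination,
# so 2 values for a x 3 values for b = 6 generated tests.
@pytest.mark.parametrize("a", [0, 1])
@pytest.mark.parametrize("b", [10, 20, 30])
def test_add_is_commutative(a, b):
    assert add(a, b) == add(b, a)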
Using pytest.param for Test IDs
The default test IDs are generated from the parameter values (for example test_add[1-2-3]), which quickly gets cryptic for more complex data. You can give each test case a more descriptive name using pytest.param.
# test_with_ids.py
import pytest
from my_app import add

@pytest.mark.parametrize("a, b, expected", [
    pytest.param(1, 2, 3, id="positive_numbers"),
    pytest.param(-1, -2, -3, id="negative_numbers"),
    pytest.param(0, 5, 5, id="first_zero"),
    pytest.param(10, -2, 8, id="mixed_numbers"),
    pytest.param(1.5, 2.5, 4.0, id="float_numbers"),
])
def test_add_with_ids(a, b, expected):
    assert add(a, b) == expected
Now, when you run pytest in verbose mode, each generated test shows its descriptive ID:
$ pytest -v
============================= test session starts ==============================
...
collected 5 items
test_with_ids.py::test_add_with_ids[positive_numbers] PASSED
test_with_ids.py::test_add_with_ids[negative_numbers] PASSED
test_with_ids.py::test_add_with_ids[first_zero] PASSED
test_with_ids.py::test_add_with_ids[mixed_numbers] PASSED
test_with_ids.py::test_add_with_ids[float_numbers] PASSED
============================== 5 passed in ...s ===============================
Parametrizing with Lists of Dictionaries
Sometimes your test data is better organized as a list of dictionaries. This is especially useful when you have more complex data or want to include descriptions for each case.
# test_with_dicts.py
import pytest
from my_app import add

test_data = [
    {"a": 1, "b": 2, "expected": 3, "description": "Positive numbers"},
    {"a": -1, "b": -2, "expected": -3, "description": "Negative numbers"},
    {"a": 0, "b": 5, "expected": 5, "description": "First is zero"},
]

@pytest.mark.parametrize("data", test_data)
def test_add_with_dicts(data):
    """Tests the add function using a list of dictionaries."""
    assert add(data["a"], data["b"]) == data["expected"]
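With a single dictionary parameter, pytest falls back to generic IDs such as data0, data1, and so on, since it cannot derive a readable ID from a dict. If you want the description field to appear in the test names, one option is to pass a callable as the decorator's ids argument. A short sketch continuing the file above (the test name here is just illustrative):
# Continuing test_with_dicts.py: use each case's "description" as its test ID.
@pytest.mark.parametrize("data", test_data, ids=lambda case: case["description"])
def test_add_with_descriptions(data):
    assert add(data["a"], data["b"]) == data["expected"]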
Indirect Parametrization
This is a more advanced feature where the values you list in the decorator are not passed to the test directly. Instead, each value is routed through a fixture of the same name, which receives it as request.param and can transform it, or wrap it in setup and teardown, before the test runs. This is useful for:
- Building expensive or stateful resources (connections, files, temporary directories) from simple parameter values.
- Sharing that construction logic across multiple parametrized tests.
# test_indirect.py
import pytest

# A fixture that builds richer test data from a simple parameter.
# With indirect parametrization, the value from the list below is not
# passed to the test directly; it arrives here as request.param.
@pytest.fixture
def user_data(request):
    print(f"Generating user data for {request.param}...")
    return {"name": request.param, "id": 123}

# indirect=["user_data"] tells pytest to route the "user_data" values
# through the fixture above; the "status" values are passed directly.
@pytest.mark.parametrize("user_data, status", [
    ("Alice", "active"),
    ("Bob", "inactive"),
], indirect=["user_data"])
def test_user_status(user_data, status):
    print(f"Testing user {user_data['name']} with status: {status}")
    assert user_data["id"] > 0
    assert status in ["active", "inactive", "pending"]
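Indirect parametrization is especially handy when the fixture needs to wrap each value in setup and teardown. Here is a minimal, self-contained sketch; the FakeConnection class and the test below are invented for illustration and are not part of the examples above:
# test_indirect_teardown.py
import pytest

class FakeConnection:
    """A stand-in resource so the example is self-contained."""
    def __init__(self, name):
        self.name = name
        self.open = True

    def close(self):
        self.open = False

@pytest.fixture
def connection(request):
    # request.param is the database name supplied by the parametrize list.
    conn = FakeConnection(request.param)
    yield conn      # the test runs while the resource is open
    conn.close()    # teardown runs after each parametrized case

# indirect=True routes every listed value through the connection fixture.
@pytest.mark.parametrize("connection", ["users_db", "orders_db"], indirect=True)
def test_connection_is_open(connection):
    assert connection.open
    assert connection.name in ("users_db", "orders_db")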
Parametrization with unittest (The Built-in Way)
If you prefer or are required to use Python's built-in unittest framework, you can get a similar effect with subTest. It is a bit more verbose, but it serves the same purpose.
The with self.subTest(...): block allows you to mark a block of code as a "sub-test". If an assertion fails inside it, unittest will report the failure for that specific sub-test, but continue running the others.
# test_unittest_parametrized.py
import unittest
from my_app import add

class TestAddFunction(unittest.TestCase):

    def test_add_various_inputs(self):
        test_cases = [
            (1, 2, 3),
            (-1, -2, -3),
            (0, 5, 5),
            (10, -2, 8),
            (1.5, 2.5, 4.0),
        ]
        for a, b, expected in test_cases:
            with self.subTest(a=a, b=b, expected=expected):
                # If this assert fails, it is reported as a failure for the
                # sub-test with these parameters, and the loop continues
                # with the remaining cases.
                self.assertEqual(add(a, b), expected)

if __name__ == '__main__':
    unittest.main()
When you run this, if one test case fails, you'll see a detailed report for just that one, not the whole test method.
Summary: pytest vs. unittest
| Feature | pytest | unittest |
|---|---|---|
| Syntax | Clean decorator: @pytest.mark.parametrize | Verbose: self.subTest() inside a loop |
| Test Naming | Automatic, customizable with pytest.param | Sub-tests are reported with their parameters, e.g. (a=1, b=2) |
| Failure Reporting | Excellent. Clearly shows which specific data set failed. | Good. Reports sub-test failures within the main test method. |
| Ecosystem | Huge ecosystem of plugins (e.g., pytest-cov, pytest-mock). | Part of the standard library, but fewer plugins. |
| Recommendation | Highly recommended for most new projects. It's the modern standard. | Perfect if you can't use external libraries or are in a strictly unittest environment. |
Final Recommendation
For any new Python project, use pytest. Its parametrization support is more elegant and powerful, and it leads to more readable and maintainable test code. The time saved by not writing repetitive tests and the clarity of the test output are significant advantages.
