Stop Writing Python Tests Without These 7 Pytest Plugins
Your pytest tests need an upgrade.
Even as a data scientist, writing tests for your code is not just a best practice - it is mandatory.
While you can use the standard unittest library for testing, pytest is my go-to choice.
It has a rich ecosystem of plugins to solve common pain points in testing.
Below, I will show you 7 essential pytest plugins that can transform how you write and run your test suite.
1. pytest-xdist: Parallelize Test Execution
Is your test suite taking forever to run? One reason may be that pytest runs your test cases sequentially.
Since unit tests in particular shouldn't depend on each other, why not run them in parallel?
pytest-xdist lets you run tests across multiple CPU cores, speeding up your test executions.
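As a minimal sketch (with hypothetical test functions), a suite of independent, CPU-bound tests like this is exactly what parallelizes well, since no test shares state with another:

```python
# test_independent.py - hypothetical CPU-bound tests; because they
# share no state, pytest-xdist can distribute them across cores
def fib(n):
    """Naive recursive Fibonacci, deliberately slow for larger n."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def test_fib_small():
    assert fib(10) == 55

def test_fib_larger():
    assert fib(25) == 75025
```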
You just need to install it and run pytest with the -n option to specify how many cores to use (pass auto to use all available cores):
pip install pytest-xdist
pytest -n auto
2. pytest-env: Simplify Environment Variables
Testing apps that rely on environment variables?
That can get nasty, since you need to keep your test environment separate from your local development environment.
Luckily, pytest-env lets you define test-only variables in pytest.ini, preventing conflicts with your local environment.
You just need to install it, and define a pytest.ini file with your environment variables:
pip install pytest-env
# pytest.ini
[pytest]
env =
DB_URL=sqlite:///test.db
DEBUG=False
And then just run your tests like always.
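As a hedged sketch of how a test might consume those variables (the names mirror the pytest.ini example above; the os.environ.get defaults just keep the snippet self-contained):

```python
# test_config.py - hypothetical test reading the variables that
# pytest-env injects from pytest.ini before the session starts
import os

def test_debug_is_disabled():
    # DEBUG comes from the [pytest] env section, not your shell
    assert os.environ.get("DEBUG", "False") == "False"

def test_db_url_is_test_database():
    db_url = os.environ.get("DB_URL", "sqlite:///test.db")
    assert db_url.startswith("sqlite://")
```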
3. pytest-cov: Measure Code Coverage
How much of your code is actually tested? pytest-cov generates coverage reports directly in your terminal.
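To see what a partial report looks like, consider this hypothetical module where one branch is never exercised by the test:

```python
# src/discounts.py - hypothetical module under test
def discounted_price(price, is_member):
    if is_member:
        return price * 0.9  # exercised by the test below
    return price  # never hit -> reported as a missed line

# test_discounts.py - only covers the member branch, so the
# coverage report flags the non-member return as uncovered
def test_member_discount():
    assert discounted_price(100, True) == 90.0
```

The terminal report then lists the module at less than 100% coverage; adding --cov-report=term-missing also prints the exact missed line numbers.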
Install it and run pytest with --cov pointing to your source code folder:
pip install pytest-cov
pytest --cov=src
4. pytest-mpl: Test Matplotlib Plots
When you are working with Matplotlib to generate plots, how do you test the output?
pytest-mpl does that for you.
For each figure under test, an image is generated and compared to a stored reference image. If the difference exceeds a user-defined threshold, the test fails.
Of course, you need to install it:
pip install pytest-mpl
Then you need to generate a baseline image. It is created from a plot generation function:
# test_plots.py
import pytest
import matplotlib.pyplot as plt

@pytest.mark.mpl_image_compare  # marks this test for pytest-mpl
def test_plot():
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3])
    return fig  # pytest-mpl compares this to a stored baseline
To generate your baseline image in the baseline folder, run:
pytest --mpl-generate-path=baseline
Then, run your tests as always, but pass --mpl to compare the baseline image against the image your test generates. If your plot generation logic changed, the test will fail.
5. pytest-instafail: Get Immediate Feedback
Why wait for all tests to finish when one fails?
pytest-instafail shows errors the moment they occur.
Of course, if pytest hits a failing test locally, you can just press CTRL-C to stop the run. But in CI pipelines that is harder to do, and you probably pay for compute time (e.g. in GitHub Actions).
Just install it and pass the --instafail option:
pip install pytest-instafail
pytest --instafail
6. pytest-randomly: Eliminate Test Dependencies
Sometimes your tests have inter-test dependencies, meaning one test depends on another - which is not what you want.
Test cases should be independent of each other, but such dependencies can be hard to detect.
pytest-randomly runs your tests in random order, exposing such hidden dependencies.
pip install pytest-randomly
pytest
No flags are needed; you get a randomized order by default.
7. pytest-recording: Auto-Mock APIs
When you work with many APIs, you will find yourself writing dozens of mocks for your tests.
This can become very cumbersome.
A different way to work with APIs in your tests is by using pytest-recording.
It records each API interaction once and replays it offline forever.
That means when you first run your tests, it automatically captures every request and response and saves them to a .yaml file.
It then uses the saved interaction for any subsequent test run.
Therefore you do not need to mock anything.
You need to install it, and mark your test function with its decorator:
pip install pytest-recording
# tests/test_function.py
import pytest
import requests

@pytest.mark.vcr
def test_api_call():
    response = requests.get("https://api.example.com/data")
    assert response.status_code == 200
The first time, run your test suite with --record-mode=once: it makes a real API request to your endpoint and saves the whole interaction, request and response, in a .yaml file.
Every subsequent run just replays the saved .yaml file, no network required.
Conclusion
I showed you 7 plugins that solve some of the biggest headaches in testing.
To adopt them incrementally, start with pytest-xdist and pytest-cov for immediate wins.
Thanks for reading!


