Pytest, A Practical Guide

At a local Python meetup, there were several attendees who weren't familiar with pytest, which gave me the idea to write this post as a primer on how I use it. This isn't going to be a comprehensive review of pytest and how to use it. For that, pick up Brian Okken's book.

This post is for someone with knowledge of Python who is maybe just starting to use pytest, or has used it for a while to run tests but hasn't dug into its features. I want to cover some of the features of pytest that can help improve your tests and how you run them. I also want to talk about how to think about pytest, because it has some features that are very powerful but can be confusing if you don't know where to look.

Fixtures

These are a great feature of pytest that can cause some initial confusion.

import pytest

@pytest.fixture
def first_entry():
    return "a"

def test_first_entry(first_entry):
    assert first_entry == "a"

@pytest.fixture is a decorator (more about markers below) that registers the function first_entry as a fixture. pytest then injects it into test_first_entry by matching the parameter name; no import is needed in the test.

This is an oversimplified example to illustrate how fixtures work. In a real-world project, you might find several fixtures injected into a single test, with some defined a few hundred lines above the function and some defined in another file entirely (see the next section). The power of fixtures comes when you need to define values that will be reused across a number of tests, either in a single file or across the application.
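For instance, one fixture can feed any number of tests in the same file. A minimal sketch, reusing first_entry from above:

import pytest

@pytest.fixture
def first_entry():
    return "a"

def test_string_starts_with_entry(first_entry):
    assert "abc".startswith(first_entry)

def test_list_contains_entry(first_entry):
    assert first_entry in ["a", "b", "c"]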

A lot more about fixtures is in the docs.

conftest.py

This is a common filename you will see in projects using pytest. It's a great place to store fixtures and other configuration that will be reused across an entire application (or a section of the app). It's a very helpful pattern, but I've found that it can lead to debugging trouble in larger projects. How so? Two ways:

  1. If you aren't aware that conftest.py exists, you might not be able to track down where something (usually a fixture) is defined.
  2. If you have more than one conftest.py (this is possible and can be a good idea in larger projects), it can also be hard to track down which file things are defined in.

These problems are compounded by the fact that not all IDEs can figure out where fixtures are defined, which means you can't always right-click and jump to their definition the way you can with most objects.

The way I work around this is to use a simple project-wide string search (ctrl-shift-f or cmd-shift-f in VS Code) when I'm looking for fixtures.
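To make the pattern concrete, here is a minimal sketch of a conftest.py that shares the earlier fixture with every test beneath it:

# conftest.py
# Fixtures defined here are discovered automatically by pytest and are
# available to every test in this directory and its subdirectories,
# with no import statement required in the test files.
import pytest

@pytest.fixture
def first_entry():
    return "a"

Any test file in the tree can now accept first_entry as a parameter, exactly as in the example above.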

Flags

Executing pytest on the command line will run your entire test suite, but I almost never use that command alone. Instead, I use a range of different flags depending on how I am running my tests. Here are the flags I use most often and when I use them.

-n auto

By default, pytest runs on a single core. If you install pytest-xdist alongside pytest and execute pytest -n auto, it spins up one worker process per core on your machine and distributes your tests across them. I use this almost every time I run pytest. The only exception is when I'm running a few tests, because the startup overhead isn't worth it in those cases. You can replace auto with a specific number of workers, but I've never had a good reason to do that. Another advantage of running tests this way is that it catches some flaky tests that don't clean up after themselves or aren't properly isolated.

-x

This stops the test run on the first failure. I use it when I'm running more than one test and suspect that one or more might fail (usually when working through a refactor), or when I'm having an issue with my setup and think a single error might be causing many tests to fail. In either case, I want the test suite to stop on the first failure so I can more easily read the output.

--lf

This re-runs only the tests that failed on the previous run. I use it when I'm fixing a series of tests and don't want to wait for the rest of the suite. If you've broken a handful of tests, it's also a handy way to work through them one by one until none fail.

-k name_of_test

This allows you to select a subset of tests by name. I often use it to run a single test or test file. Note that in larger test suites (thousands of tests, not hundreds), collection can take significant time. In that case, use -k to find the test once; then, if you're re-running it a bunch of times, use the full path to the test instead. You can find the full path by forcing a failure in the test, which I do sometimes when I forget the path syntax: path/to/test.py::test_name.

-s

This stops pytest from capturing standard output, which is useful when I need to print something from a test. I try to reach for a breakpoint instead, but sometimes a quick print is the faster option.

--pdb

This opens a debugger at the line where a test fails. I should probably use it more often than I do, but I tend to use -x and then introduce a breakpoint(). I did find it useful recently, though: it helps when you can't predict where a test will fail but know you'll want to poke around at some values when it does.
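For reference, the breakpoint() approach looks like this. A minimal sketch; build_value is a made-up stand-in for real application code:

def build_value():
    # Stand-in for the real code under test.
    return {"name": "a"}

def test_tricky_behavior():
    value = build_value()
    breakpoint()  # pytest suspends output capture and drops into pdb here
    assert value["name"] == "a"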

--reuse-db

This comes from pytest-django and keeps the test database around between runs, which is great for speeding up most test runs, but I've run into issues when new migrations are introduced. This is partially specific to my setup: I added the flag to my pytest.ini, forgot about it, then realized the errors I was getting were because the test database was in a bad state. Once you've done this a few times, you won't forget! It can also happen if you use the flag by default, so if you start getting unexpected test errors, try running without it. I tend to skip it the first time I run the test suite after pulling down changes, then use it for the rest of the day while I'm running my tests.

Adam Johnson wrote about his most used flags, which has some similarities and differences with my list.

Related Concepts

coverage.py - this is a great package to use with pytest that tells you how much of your code is covered by tests. It can be very helpful when getting started with a test suite, or when developing new code, to ensure you have tested all the possibilities. Get started by installing pytest-cov, then run with pytest --cov appname. On some projects, I add coverage to my defaults so I don't have to remember all of the flags. On larger projects, it can be a bit too much output, and I use this GitHub Action instead.

Markers - decorators unique to pytest, applied with @pytest.mark.* (strictly speaking, the @pytest.fixture decorator we saw above is not a marker, though it looks similar). There are some good built-in markers, and you can also define your own, which can be helpful for separating different types of tests (slower vs. faster, for example), as sketched below.
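A minimal sketch of a custom marker; "slow" is a name I've made up for illustration, and registering it keeps pytest from warning about an unknown marker:

# Register the marker in pytest.ini so pytest recognizes it:
#
#   [pytest]
#   markers =
#       slow: marks tests as slow
import time
import pytest

@pytest.mark.slow
def test_big_report():
    time.sleep(2)  # stand-in for a genuinely slow operation
    assert True

You can then deselect those tests with pytest -m "not slow", or run only them with pytest -m slow.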

@pytest.mark.xfail - A good marker for bugs you know you need to fix. I have also used it for failures we knew would be fixed in future versions of external libraries. That way, we'd see the test pass with the new version; with strict=True, that unexpected pass shows up as a failure in the test run, notifying us to remove the mark and make any necessary code changes.
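A minimal sketch of that pattern; parse_version is a made-up stand-in for buggy library code, and the bug description is invented:

import pytest

def parse_version(text):
    # Stand-in for library code with a known bug: it drops the patch number.
    return text.split(".")[:2]

@pytest.mark.xfail(reason="parse_version drops the patch number", strict=True)
def test_parse_version_keeps_patch():
    assert parse_version("1.2.3") == ["1", "2", "3"]

Today this is reported as xfailed. Once the bug is fixed, the test starts passing, and strict=True turns that unexpected pass into a failure that reminds you to remove the mark.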

@pytest.mark.skip - Another useful marker. This one is a bit like commenting out code, but can be useful for keeping tests around for a short time until they can be updated to no longer require skipping. As such, I use this sparingly.
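Passing a reason makes the skip self-documenting in the test report. A minimal sketch with an invented reason:

import pytest

@pytest.mark.skip(reason="needs updating for the new billing flow")
def test_invoice_totals():
    ...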

parametrization - using a marker, you can run a single test with a number of different input parameters, as shown below. This is very helpful for not repeating test code when the only difference is the input. More information is in the docs. Unfortunately, there is no built-in way to parametrize fixtures. Fortunately, I have a blog post about how to do it.
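A minimal sketch of the built-in test parametrization:

import pytest

@pytest.mark.parametrize(
    "word, expected",
    [
        ("apple", True),
        ("banana", False),
        ("avocado", True),
    ],
)
def test_starts_with_a(word, expected):
    # One test function, three cases in the report.
    assert word.startswith("a") == expected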

With Django

If you are developing a Django application, pytest-django has a lot of nice features for using pytest with Django.
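For example, its django_db marker is what grants a test access to the database. A minimal sketch, assuming a configured Django project with the default auth app installed:

import pytest
from django.contrib.auth.models import User

@pytest.mark.django_db
def test_create_user():
    User.objects.create_user(username="alice")
    assert User.objects.filter(username="alice").exists()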

Wrapping Up

I hope this has been helpful in highlighting some useful features of pytest, as well as some notable places where you might trip up while using it. Let me know if you learned something new!