Python comes with the unittest module that you can use for writing tests.

Unittest is ok, but it suffers from the same “problem” as the default Python REPL - it’s very basic (and overcomplicated at the same time - you need to remember a bunch of different assert versions).

I suggest you learn pytest instead - probably the most popular Python testing library nowadays. pytest is packed with features, but at the same time, it stays very beginner-friendly.

You don’t have to write classes that inherit from unittest.TestCase - you just write functions that start with test_. You don’t have to memorize assertEqual, assertTrue, assertIn, etc. - you use assert something == something_else, and that’s it. It has plenty of additional features and a vast ecosystem of pytest plugins.
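To see the difference, here is the same trivial check written both ways (a minimal sketch - the class and function names are made up for illustration):

```python
import unittest

# unittest style: a TestCase subclass and dedicated assert methods
class TestAddition(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)
        self.assertIn(2, [1, 2, 3])

# pytest style: a plain function and plain assert statements
def test_addition():
    assert 2 + 2 == 4
    assert 2 in [1, 2, 3]
```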


You can install pytest using pip:

$ pip install pytest

In your projects, you will usually add pytest to the requirements file.

How to use pytest?

The best way to start is to follow pytest conventions:

  • Create a tests directory in the root folder of your project
  • Create files starting with test_ in that folder
  • Write functions starting with test_ inside those files
# my_project/tests/

def test_adding_two_numbers():
    assert 2+2 == 5

If you follow this advice, pytest will work for you out of the box (you can customize all that, but then you have to tweak pytest a bit).

Once you write some tests, just run the pytest command - pytest will detect all the tests and run them for you:

$ pytest
=========================== test session starts ==============================
platform darwin -- Python 3.7.2, pytest-5.3.5, py-1.8.0, pluggy-0.12.0
rootdir: /Users/testuser/myproject, inifile: setup.cfg
collected 3 items

F                                                                      [ 33%]
..                                                                     [100%]

============================ FAILURES =========================================
______________________ test_adding_two_numbers ________________________________

    def test_adding_two_numbers():
>       assert 2+2 == 5
E       assert (2 + 2) == 5
E       AssertionError
====================== 1 failed, 2 passed in 0.14s ============================

If you have some existing unittest or nose tests that you don’t want to rewrite, pytest can run them out of the box!
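For example, an old unittest-style test like this one (the class and test names are made up) gets collected and run by a plain pytest invocation, with no changes needed:

```python
import unittest

class TestCart(unittest.TestCase):
    def setUp(self):
        # Runs before each test method
        self.cart = []

    def test_add_item(self):
        self.cart.append("book")
        self.assertEqual(len(self.cart), 1)
```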

pytest features


Let’s say that you have an e-commerce website where you sell some products. You want to make sure that everything works fine - you don’t want to give your products for free if there is a bug!

So you write some tests - you check if the user can log in, add some products to a cart, go through the checkout process, etc. For each test, you need to create a user. Instead of writing code that creates a new user inside every test, you decide to extract this user creation to a separate function and then run this function at the beginning of each test.

It turns out that a “need for some specific object to exist at the beginning of a test” is such a common scenario that the idea of fixtures was born. A fixture is a function (marked with the @pytest.fixture decorator) that returns an object (in our case - the user object) that can be used in our test:

@pytest.fixture
def user():
    # This would normally be a function that creates a user in the DB
    # But let's use a dictionary for illustration purposes
    user = dict(role="customer")
    return user

and then pass your fixture to a test function:

def test_checkout_process(user):
    assert user["role"] == "customer"
    # ... and so on

If you want to use the same fixture in multiple test files (we probably need our “user” in other tests as well), put it inside a file called conftest.py - pytest will automatically load fixtures from this file.
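A minimal conftest.py for our example could look like this (the user dictionary is just an illustration standing in for a real user object):

```python
# conftest.py - pytest discovers this file automatically and makes
# the fixtures defined here available to all tests in this directory
import pytest

@pytest.fixture
def user():
    # A dictionary standing in for a real user object
    return dict(role="customer")
```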

Mocking and monkeypatching

Let’s continue our example with the e-commerce website. You want to test that if a user buys something, you charge his credit card, and then you send him his order.

You can’t charge a real credit card each time you run tests. OK, technically, you can if you are very rich. But let’s assume you are not Jeff Bezos testing Amazon, and you want to avoid losing money.

That’s why you need a mock - a “fake” object that you can use in place of a real one. Instead of sending credit card details to Stripe (a payment processor), you replace Stripe with a mock object. This mock object returns a “success” message if you provide it with correct parameters.

It’s not your job to test that Stripe works fine. All you need to do is to make sure you send the correct data to Stripe and that you accept and process whatever information comes back from Stripe:

current_order_status = "pending"

def charge_customer(amount):
    global current_order_status
    response = Stripe.charge(amount)

    if response.get('status') == "success":
        current_order_status = "processing"

def test_payment(monkeypatch):
    # Patch the Stripe.charge() method and make it return a "success" status
    monkeypatch.setattr(Stripe, "charge", lambda amount: dict(status="success"))

    # This calls our monkeypatched charge() that always returns "success"
    charge_customer(100)
    assert current_order_status == "processing"
    # ... and so on
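The snippet above is schematic - Stripe and current_order_status are placeholders. A self-contained version of the same idea, using pytest.MonkeyPatch directly (available since pytest 6.2; FakeStripe and charge_customer are hypothetical), could look like this:

```python
import pytest

class FakeStripe:
    """A stand-in for a real payment client (hypothetical API)."""
    @staticmethod
    def charge(amount):
        raise RuntimeError("this would hit the real payment API!")

def charge_customer(amount):
    response = FakeStripe.charge(amount)
    return "processing" if response.get("status") == "success" else "failed"

# Inside a test you would receive `monkeypatch` as a fixture argument;
# MonkeyPatch.context() gives you the same object outside of a test
with pytest.MonkeyPatch.context() as mp:
    mp.setattr(FakeStripe, "charge", lambda amount: {"status": "success"})
    status = charge_customer(100)  # no real charge happens
```

When the `with` block exits, the patch is undone and FakeStripe.charge is restored - the same cleanup the monkeypatch fixture performs at the end of each test.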

Parametrized tests

You want to test that everything is working fine if a user buys 1 item in your store. But also if he buys 100 items. And if he places one order with 100 different items. And 50 different orders with 50 different items.

That’s a lot of similar tests to write. Luckily, you can parametrize your tests. You decide which parts of the test can change. In our example - it’s the number of orders and the number of items in one order. Then you write one test that accepts those “changing parts” as parameters, and you use pytest.mark.parametrize decorator to pass different values to those parameters.

It’s probably easier to understand with an example:

@pytest.mark.parametrize(
    "number_of_orders, order_size",
    [
        (1, 1),
        (100, 1),
        (1, 100),
        (50, 50),
    ],
)
def test_placing_order(number_of_orders, order_size):
    user = create_user()

    order = place_order(number_of_orders, order_size)

    assert order.payment == "success"
    assert order.status == "processing"
    # ... and so on

Pytest will turn the above code into four separate tests.
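The same decorator works for any function. Here is a self-contained variant you can run directly (total_price is a hypothetical helper standing in for your real order logic):

```python
import pytest

def total_price(quantity, unit_price):
    # Hypothetical helper - stands in for your real order logic
    return quantity * unit_price

@pytest.mark.parametrize(
    "quantity, unit_price, expected",
    [
        (1, 10, 10),      # single item
        (100, 10, 1000),  # large order
        (0, 10, 0),       # empty cart
    ],
)
def test_total_price(quantity, unit_price, expected):
    assert total_price(quantity, unit_price) == expected
```

pytest reports each tuple as a separate test with its own ID, e.g. test_total_price[1-10-10].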

Test your documentation

One of the things that we usually forget to keep up-to-date is documentation. Mostly, because we don’t have a tool that will tell us when it’s outdated and no longer valid.

pytest can solve part of this problem - if you put some code examples in the documentation, it can evaluate them and tell you if they no longer work.

Let’s see an example:

def add_two_numbers(a, b):
    """ Adds two numbers
    >>> add_two_numbers(2, 2)
    5
    """
    return a + b

If we run pytest with --doctest-modules parameter, it will check for parts of your documentation starting with >>>. If there are any, pytest will evaluate whatever is after the >>> sign and check if the result is equal to the next line of the documentation. If it’s not - it will report an error:

$ pytest --doctest-modules
============================= test session starts =============================
platform darwin -- Python 3.7.2, pytest-5.3.5, py-1.8.0, pluggy-0.12.0
rootdir: /Users/testuser/my_module
collected 1 item

F                                                                      [100%]

================================== FAILURES ===================================
_______________________ [doctest] test.add_two_numbers ________________________
002  Adds two numbers
003     >>> add_two_numbers(2, 2)
Expected:
    5
Got:
    4

/Users/testuser/my_module/test.py:3: DocTestFailure
============================== 1 failed in 0.04s ==============================

Other configuration options

Pytest offers a lot of configuration options:

  • You can stop running tests after the first failure (pytest -x)
  • Rerun only the failed tests from the previous run (pytest --lf)
  • Select a single test or a single file to run (e.g., pytest my_module/tests/)
  • Start a debugger on failure (pytest --pdb)
  • Run tests in parallel (pytest -n 4 - requires pytest-xdist plugin)
  • “Mark” tests - assign a label to a test and then run only the tests with a specific label. That way, you can split your tests into:
    • “unit” tests that you will run every time because they are fast
    • “end-to-end” tests that you will run only on the CI platform because they are slow
  • Use one of the predefined marks to:
    • Tell pytest to skip some tests (with @pytest.mark.skip) - for example, because you don’t have time to fix them today, but you still want your CI to work
    • Tell pytest that some tests are expected to fail: @pytest.mark.xfail
    • Skip tests if a specific condition is true: @pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows")
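Here is how those marks look in code (the e2e label and all test names are made up for illustration; custom labels should be registered in your pytest config to avoid warnings):

```python
import sys
import pytest

@pytest.mark.e2e  # custom label: run only these with "pytest -m e2e"
def test_full_checkout():
    assert True

@pytest.mark.skip(reason="No time to fix it today")
def test_broken_feature():
    assert True

@pytest.mark.xfail  # we know this fails; pytest reports it as "xfailed"
def test_known_bug():
    assert 2 + 2 == 5

@pytest.mark.skipif(sys.platform == "win32", reason="POSIX only")
def test_posix_only():
    assert True
```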


Once you get more comfortable with using pytest, you will probably want to add some plugins, like:

  • pytest-xdist to run your tests in parallel
  • pytest-cov to generate coverage reports, so you know which parts of the code are well tested and which aren’t.

There are different ways to approach writing tests. Some people swear by Test-Driven Development. Some people think writing tests is a waste of time. I believe that tests are useful, but I rarely use the TDD approach. What works for me is the following workflow:

1. Write your feature - just make it work, don’t worry about the ugly code.
2. Write tests.
3. Refactor your code into something that you are not ashamed of and make a Pull (Merge) Request.

I’m using this approach because I found myself tinkering too much with the code to make it “the best code possible” from the beginning, only to realize that I needed to change something and delete that code later. With this approach, I can quickly write ugly, undocumented code with a bunch of TODOs and then fix it once I have the tests, knowing that my fixes won’t break anything.