Python comes with the unittest module that you can use for writing tests.
unittest is fine, but it suffers from the same “problem” as the default Python REPL - it’s very basic (and overcomplicated at the same time - you need to remember a bunch of different assert methods).
I suggest you learn pytest instead - probably the most popular Python testing library nowadays.
pytest is packed with features, but at the same time, it stays very beginner-friendly.
You don’t have to write classes that inherit from unittest.TestCase - you just write functions that start with test_. You don’t have to memorize assertIn, etc. - you use assert something == something_else, and that’s it. It has plenty of additional features and a vast ecosystem of pytest plugins.
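To make the contrast concrete, here is the same trivial test written both ways (the add function is just a stand-in):

```python
import unittest

def add(a, b):
    return a + b

# unittest style: a class, inheritance, and a special assert method
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# pytest style: a plain function and a plain assert
def test_add():
    assert add(2, 3) == 5
```

Both tests check exactly the same thing - the pytest version just has less ceremony around it.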
You can install pytest using pip:
```
$ pip install pytest
```
In your projects, you will usually add pytest to the requirements file.
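For example (the file name is just a common convention, and the pinned version is illustrative):

```
# requirements-dev.txt
pytest==5.3.5
```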
The best way to start is to follow pytest conventions:

- put your tests in a tests directory in the root folder of your project
- name the test files in that folder with a test_ prefix
- name the test functions inside those files with a test_ prefix
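With those conventions in place, a minimal project might look like this (the file names are, of course, up to you):

```
my_project/
├── my_project/
│   └── ...
└── tests/
    ├── test_math_operations.py
    └── test_checkout.py
```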
```python
# my_project/tests/test_math_operations.py
def test_adding_two_numbers():
    assert 2+2 == 5
```
If you follow this advice, pytest will work for you out of the box (you can customize all that, but then you have to tweak pytest a bit).
Once you write some tests, just run the pytest command - pytest will detect all the tests and run them for you:
```
$ pytest
=========================== test session starts ===========================
platform darwin -- Python 3.7.2, pytest-5.3.5, py-1.8.0, pluggy-0.12.0
rootdir: /Users/testuser/myproject, inifile: setup.cfg
collected 3 items

test_math_operations.py F                                          [ 33%]
test_some_other_file.py ..                                         [100%]

================================ FAILURES =================================

    def test_adding_two_numbers():
>       assert 2+2 == 5
E       assert (2 + 2) == 5

test_math_operations.py:2: AssertionError
===================== 1 failed, 2 passed in 0.14s =========================
```
If you have some existing unittest or nose tests that you don’t want to rewrite, pytest can run them out of the box (https://docs.pytest.org/en/latest/unittest.html#unittest)!
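For example, a legacy unittest-style test like this one is collected and run by pytest without any changes (the class here is just an illustration):

```python
import unittest

class TestLegacyMath(unittest.TestCase):
    def test_adding_two_numbers(self):
        # pytest discovers unittest.TestCase subclasses too, so running
        # `pytest` picks this up alongside your plain test functions
        self.assertEqual(2 + 2, 4)
```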
Let’s say that you have an e-commerce website where you sell some products. You want to make sure that everything works fine - you don’t want to give your products for free if there is a bug!
So you write some tests - you check if the user can log in, add some products to a cart, go through the checkout process, etc. For each test, you need to create a user. Instead of writing code that creates a new user inside every test, you decide to extract this user creation to a separate function and then run this function at the beginning of each test.
It turns out that a “need for some specific object to exist at the beginning of a test” is such a common scenario that the idea of fixtures was born. A fixture is a function that returns an object (in our case - the user object) that can be used in our test:
```python
import pytest

@pytest.fixture
def user():
    # This would normally be a function that creates a user in the DB
    # But let's use a dictionary for illustration purposes
    user = dict(
        first_name="Sebastian",
        last_name="Witowski",
        email="email@example.com",
        role="customer",
    )
    return user
```
and then pass your fixture to a test function:
```python
def test_checkout_process(user):
    assert user["role"] == "customer"
    # ... and so on
```
If you want to use the same fixture in multiple test files (we probably need our “user” in other tests as well), put it inside a file called conftest.py - pytest will automatically load fixtures from this file.
Let’s continue our example with the e-commerce website. You want to test that if a user buys something, you charge his credit card, and then you send him his order.
You can’t charge a real credit card each time you run tests. Ok, technically, you can if you are very rich. But let’s assume you are not Jeff Bezos testing Amazon, and you want to avoid losing money.
That’s why you need a mock - a “fake” object that you can use in place of a real one. Instead of sending credit card details to Stripe (a payment processor), you replace Stripe with a mock object. This mock object returns a “success” message if you provide it with correct parameters.
It’s not your job to test that Stripe works fine. All you need to do is to make sure you send the correct data to Stripe and that you accept and process whatever information comes back from Stripe:
```python
def charge_customer(amount):
    global current_order_status
    response = Stripe.charge(amount)
    if response.get('status') == "success":
        current_order_status = "processing"
    else:
        display_error_message(response.get('error'))

def test_payment(monkeypatch):
    # Patch the Stripe.charge() method and make it return a "success" status
    monkeypatch.setattr(Stripe, "charge", lambda amount: dict(status="success"))
    # This calls our monkeypatched Stripe.charge() that always returns "success"
    charge_customer("199")
    assert current_order_status == "processing"
    # ... and so on
```
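If you are wondering what monkeypatch.setattr actually does: it boils down to swapping an attribute for the duration of the test and restoring the original afterwards. Here is a self-contained sketch of that idea (the Stripe class below is a made-up stand-in, not the real Stripe SDK):

```python
class Stripe:
    @staticmethod
    def charge(amount):
        raise RuntimeError("don't call the real payment API in tests!")

def fake_charge(amount):
    # Always "succeeds" - no real money involved
    return {"status": "success"}

# What monkeypatch.setattr does, minus the automatic cleanup:
original = Stripe.charge
Stripe.charge = fake_charge
try:
    response = Stripe.charge("199")
    assert response["status"] == "success"
finally:
    # monkeypatch does this restoration for you after each test
    Stripe.charge = original
```

The advantage of the monkeypatch fixture over doing this by hand is exactly that finally block - pytest guarantees the original attribute comes back even if the test fails.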
You want to test that everything is working fine if a user buys 1 item in your store. But also if he buys 100 items. And if he places one order with 100 different items. And 50 different orders with 50 different items.
That’s a lot of similar tests to write. Luckily, you can parametrize your tests. You decide which parts of the test can change. In our example - it’s the number of orders and the number of items in one order. Then you write one test that accepts those “changing parts” as parameters, and you use the pytest.mark.parametrize decorator to pass different values to those parameters.
It’s probably easier to understand with an example:
```python
@pytest.mark.parametrize(
    "number_of_orders,order_size",
    [
        (1, 1),
        (100, 1),
        (1, 100),
        (50, 50),
    ],
)
def test_placing_order(number_of_orders, order_size):
    user = create_user()
    log_in_user(user)
    order = place_order(number_of_orders, order_size)
    assert order.payment == "success"
    assert order.status == "processing"
    # ... and so on
```
Pytest will turn the above code into four separate tests.
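Conceptually, parametrization means “run the same test body once per tuple”. A plain-Python approximation of what happens, with a toy place_order stand-in instead of the real one:

```python
def place_order(number_of_orders, order_size):
    # Toy stand-in: just report how many items were "bought" in total
    return {"status": "processing", "total_items": number_of_orders * order_size}

cases = [(1, 1), (100, 1), (1, 100), (50, 50)]
results = []
for number_of_orders, order_size in cases:
    order = place_order(number_of_orders, order_size)
    assert order["status"] == "processing"
    results.append(order["total_items"])

print(results)  # [1, 100, 100, 2500]
```

The big difference: in a plain loop, the first failing case stops everything, while parametrize reports each of the four cases as a separate passing or failing test.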
One of the things that we usually forget to keep up-to-date is documentation. Mostly because we don’t have a tool that will tell us when it’s outdated and no longer valid.
pytest can solve part of this problem - if you put some code examples in the documentation, it can evaluate them and tell you if they no longer work.
Let’s see an example:
```python
def add_two_numbers(a, b):
    """
    Adds two numbers

    >>> add_two_numbers(2, 2)
    5
    """
    return a + b
```
If we run pytest with the --doctest-modules parameter, it will check your modules for pieces of documentation starting with >>>. If there are any, pytest will evaluate whatever comes after the >>> sign and check if the result is equal to the next line of the documentation. If it’s not, it will report an error:
```
$ pytest --doctest-modules
============================= test session starts =============================
platform darwin -- Python 3.7.2, pytest-5.3.5, py-1.8.0, pluggy-0.12.0
rootdir: /Users/testuser/my_module
collected 1 item

test.py F                                                              [100%]

================================== FAILURES ===================================
_______________________ [doctest] test.add_two_numbers ________________________
002 Adds two numbers
003 >>> add_two_numbers(2, 2)
Expected:
    5
Got:
    4

/Users/testuser/my_module/test_math_operations.py:3: DocTestFailure
============================== 1 failed in 0.04s ==============================
```
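The doctest machinery itself comes from the standard library, so you can check a docstring even without pytest. A minimal sketch, with the example fixed to the correct expected value:

```python
import doctest

def add_two_numbers(a, b):
    """
    Adds two numbers

    >>> add_two_numbers(2, 2)
    4
    """
    return a + b

# Find and run the doctests attached to the function
# (we pass globs explicitly so the example can call the function)
runner = doctest.DocTestRunner()
globs = {"add_two_numbers": add_two_numbers}
for test in doctest.DocTestFinder().find(add_two_numbers, "add_two_numbers", globs=globs):
    runner.run(test)

print(runner.failures, runner.tries)  # 0 failures, 1 example tried
```

Running pytest --doctest-modules does essentially this for every docstring in your modules and folds the results into the normal test report.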
Pytest offers a lot of configuration options. For example, you can:

- run tests in parallel (pytest -n 4 - requires the pytest-xdist plugin)
- skip some tests (@pytest.mark.skip) - for example, because you don’t have time to fix them today, but you still want your CI to work
- skip tests conditionally (@pytest.mark.skipif(sys.platform == "win32", reason="doesn't run on Windows"))
Once you get more comfortable with using pytest, you will probably want to add some plugins.
There are different ways to approach writing tests. Some people swear by Test-Driven Development (TDD). Some people think writing tests is a waste of time. I believe that tests are useful, but I rarely use the TDD approach. What works for me is the following workflow:
1. Write your feature - just make it work, don’t worry about the ugly code.
2. Write tests.
3. Refactor your code into something that you are not ashamed of and make a Pull (Merge) Request.
I’m using this approach because I found myself tinkering too much with the code to make it “the best code possible” from the beginning, only to realize later that I needed to change something and delete that code. With this approach, I can quickly write ugly, undocumented code with a bunch of TODOs and then fix it once I have the tests, knowing that my fixes won’t break anything.