pytest

Run with short tracebacks (--tb=short), without captured output (--show-capture=no), with warnings disabled, and stopping at the first failure (-x):

pytest --tb=short --show-capture=no --disable-warnings -x

caplog

pytest's built-in caplog fixture captures log records. To filter the captured records by level:

import logging

def test_handler(caplog):
    # your test code
    logs = [x.message for x in caplog.records if x.levelno == logging.ERROR]
    assert 2 == len(logs)
    assert [
        "Course '38' has no product",
        "Course '39' has no product",
    ] == logs
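Under the hood, caplog collects standard logging.LogRecord objects, so the same levelno filtering works with any handler. A minimal stdlib-only sketch (the logger name and messages are made up for illustration):

```python
import logging


class ListHandler(logging.Handler):
    """Collect log records in a list, much like caplog does."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)


logger = logging.getLogger("example")
logger.setLevel(logging.DEBUG)
handler = ListHandler()
logger.addHandler(handler)

logger.info("starting import")
logger.error("Course '38' has no product")
logger.error("Course '39' has no product")

# keep only the ERROR records, as in the caplog example above
errors = [r.getMessage() for r in handler.records if r.levelno == logging.ERROR]
assert errors == [
    "Course '38' has no product",
    "Course '39' has no product",
]
```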

caplog.record_tuples gives a simple (logger name, level, message) view of the captured records:

import logging

def test_handler(caplog):
    # use factories to create your objects... then:
    caplog.clear()
    # test
    response = handler(
        APIException("Patrick does not know..."), {"colour": "Green"}
    )
    assert [
        (
            "api.models",
            logging.ERROR,
            "Patrick does not know... {'colour': 'Green'}",
        )
    ] == caplog.record_tuples

Tip

Use caplog.clear() after creating your objects so caplog.record_tuples does not include output from factory-boy.

django_assert_num_queries

The pytest-django fixture django_assert_num_queries asserts that the wrapped block runs an exact number of database queries:

@pytest.mark.django_db
def test_dinner_detail(django_assert_num_queries):
    with django_assert_num_queries(3):
        # the code under test; it must execute exactly three queries
        ...
freezegun

To use freezegun directly as a context manager (rather than via the pytest-freezegun plugin):

pip uninstall pytest-freezegun
pip install freezegun
from freezegun import freeze_time

with freeze_time("2017-05-21"):
    UserConsent.objects.set_consent(consent, True, user_1)
    UserConsent.objects.set_consent(consent, False, user_2)

Or freeze to a specific timezone-aware datetime:

from datetime import datetime

import pytz
from freezegun import freeze_time

with freeze_time(datetime(2020, 1, 1, 1, 1, 1, tzinfo=pytz.utc)):
    # code that should see the frozen time
    ...
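On Python 3.2+ the pytz dependency is not needed for UTC; datetime.timezone.utc from the standard library builds an equivalent aware datetime. A stdlib-only sketch:

```python
from datetime import datetime, timezone

# same instant as the pytz.utc version, without the extra dependency
frozen = datetime(2020, 1, 1, 1, 1, 1, tzinfo=timezone.utc)
assert frozen.tzinfo is timezone.utc
assert frozen.isoformat() == "2020-01-01T01:01:01+00:00"
```

Pass frozen to freeze_time() exactly as with the pytz version.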

pytest-freezegun

Warning

I had some issues with pytest-freezegun, but I think the problem was that I had not included it in requirements/ci.txt. I prefer to use freezegun's context manager, so the following notes are for information only…

https://github.com/ktosiek/pytest-freezegun

@pytest.mark.freeze_time("2017-05-21")
def test_report():
    ...

Markers

To register a marker, e.g.:

@pytest.mark.elasticsearch

Update pyproject.toml:

[tool.pytest.ini_options]
markers = [
    "elasticsearch: is the elasticsearch service running on this workstation?"
]
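With the marker registered, apply it to a test (the test name and body here are purely illustrative) and select or deselect those tests with -m:

```python
import pytest


@pytest.mark.elasticsearch
def test_search_index():
    """Only meaningful when the elasticsearch service is running."""
    ...
```

Then run only the marked tests with pytest -m elasticsearch, or everything else with pytest -m "not elasticsearch".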

Skip

To find out why tests are skipped, use -r with s (skipped) and x (xfailed):

pytest -rsx app/tests/test_app.py
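A skipped test carries its reason, which -rs prints in the summary (the test name and reason below are illustrative):

```python
import pytest


@pytest.mark.skip(reason="elasticsearch service is not running")
def test_index_documents():
    ...
```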

Warnings

To suppress warnings, disable pytest's warnings plugin:

-p no:warnings
# e.g.
pytest --color=yes --tb=short --show-capture=no --html=report.html --fail-on-template-vars -p no:warnings

or add it to pytest.ini:

[pytest]
addopts = -p no:warnings