Basic patterns and examples

How to change command line options defaults

It can be tedious to type the same series of command line options every time you use pytest. For example, if you always want to see detailed info on skipped and xfailed tests, as well as have terser "dot" progress output, you can write it into a configuration file:

# content of pytest.ini
[pytest]
addopts = -ra -q

Alternatively, you can set a PYTEST_ADDOPTS environment variable to add command line options while the environment is in use:

export PYTEST_ADDOPTS="-v"

Here is how the command line is built up in the presence of addopts or the environment variable:

<pytest.ini:addopts> $PYTEST_ADDOPTS <extra command-line arguments>

So if the user executes in the command line:

pytest -m slow

the actual command line executed is:

pytest -ra -q -v -m slow

Note that, as usual for other command-line applications, in case of conflicting options the last one wins, so the example above will show verbose output because -v overwrites -q.
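
As a small aside, the "last one wins" behavior falls out of argparse-style option parsing, which pytest builds on. A minimal standalone sketch (the -q/-v flags are modeled after pytest's, but the parser itself is illustrative, not pytest's actual parser):

```python
# Illustrative sketch: conflicting flags that write to the same
# destination -- the later flag simply overwrites the earlier one,
# mirroring how a trailing -v overrides an earlier -q.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-q", dest="verbosity", action="store_const", const=-1)
parser.add_argument("-v", dest="verbosity", action="store_const", const=1)

# Simulates `-q` coming from addopts followed by `-v` from
# PYTEST_ADDOPTS: -q is applied first, then -v overwrites it.
args = parser.parse_args(["-q", "-v"])
print(args.verbosity)  # 1, i.e. verbose wins
```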

Pass different values to a test function, depending on command line options

Suppose we want to write a test that depends on a command line option. Here is a basic pattern to achieve this:

# content of test_sample.py
def test_answer(cmdopt):
    if cmdopt == "type1":
        print("first")
    elif cmdopt == "type2":
        print("second")
    assert 0  # to see what was printed

For this to work we need to add a command line option and provide the cmdopt through a fixture function:

# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt", action="store", default="type1", help="my option: type1 or type2"
    )


@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")

Let's run this without supplying our new option:

$ pytest -q test_sample.py
F                                                                    [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type1'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
>       assert 0  # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s

And now with supplying a command line option:

$ pytest -q --cmdopt=type2
F                                                                    [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type2'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
>       assert 0  # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s

You can see that the command line option arrived in our test.

We could add simple validation for the input by listing the choices:

# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt",
        action="store",
        default="type1",
        help="my option: type1 or type2",
        choices=("type1", "type2"),
    )

Now we'll get feedback on a bad argument:

$ pytest -q --cmdopt=type3
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: argument --cmdopt: invalid choice: 'type3' (choose from type1, type2)

If you need to provide more detailed error messages, you can use the type parameter and raise pytest.UsageError:

# content of conftest.py
import pytest


def type_checker(value):
    msg = "cmdopt must specify a numeric type as typeNNN"
    if not value.startswith("type"):
        raise pytest.UsageError(msg)
    try:
        int(value[4:])
    except ValueError:
        raise pytest.UsageError(msg)

    return value


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt",
        action="store",
        default="type1",
        help="my option: type1 or type2",
        type=type_checker,
    )

This completes the basic pattern. However, one often rather wants to process command line options outside of the test, and instead pass in different or more complex objects.
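
For instance, the cmdopt fixture could hand tests a ready-made object instead of the raw string. The config classes and make_config helper below are purely illustrative (not pytest API); a real fixture would call such a factory with request.config.getoption("--cmdopt"):

```python
# Hypothetical sketch: translate the option string into a richer object.
class FastConfig:
    timeout = 1


class SlowConfig:
    timeout = 30


_CONFIGS = {"type1": FastConfig, "type2": SlowConfig}


def make_config(name):
    # A cmdopt fixture could return make_config(...) so that tests
    # receive a configured object rather than a bare string.
    return _CONFIGS[name]()


print(make_config("type2").timeout)  # 30
```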

Dynamically adding command line options

Through addopts you can statically add command line options for your project. You can also dynamically modify the command line arguments before they get processed:

# installable external plugin
import sys


def pytest_load_initial_conftests(args):
    if "xdist" in sys.modules:  # pytest-xdist plugin
        import multiprocessing

        num = max(multiprocessing.cpu_count() // 2, 1)
        args[:] = ["-n", str(num)] + args

If you have the xdist plugin installed you will now always perform test runs using a number of subprocesses close to your CPU count. Running in an empty directory with the above conftest.py:

$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 0 items

========================== no tests ran in 0.12s ===========================

Control skipping of tests according to command line option

Here is a conftest.py file adding a --runslow command line option to control the skipping of pytest.mark.slow marked tests:

# content of conftest.py

import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )


def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)

We can now write a test module like this:

# content of test_module.py
import pytest


def test_func_fast():
    pass


@pytest.mark.slow
def test_func_slow():
    pass

and when running it will see a skipped "slow" test:

$ pytest -rs    # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items

test_module.py .s                                                    [100%]

========================= short test summary info ==========================
SKIPPED [1] test_module.py:8: need --runslow option to run
======================= 1 passed, 1 skipped in 0.12s =======================

Or run it including the slow marked test:

$ pytest --runslow
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items

test_module.py ..                                                    [100%]

============================ 2 passed in 0.12s =============================

Writing well integrated assertion helpers

If you have a test helper function called from a test you can use the pytest.fail marker to fail a test with a certain message. The test support function will not show up in the traceback if you set the __tracebackhide__ option somewhere in the helper function. Example:

# content of test_checkconfig.py
import pytest


def checkconfig(x):
    __tracebackhide__ = True
    if not hasattr(x, "config"):
        pytest.fail(f"not configured: {x}")


def test_something():
    checkconfig(42)

The __tracebackhide__ setting influences how pytest shows tracebacks: the checkconfig function will not be shown unless the --full-trace command line option is specified. Let's run our little function:

$ pytest -q test_checkconfig.py
F                                                                    [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________

    def test_something():
>       checkconfig(42)
E       Failed: not configured: 42

test_checkconfig.py:11: Failed
========================= short test summary info ==========================
FAILED test_checkconfig.py::test_something - Failed: not configured: 42
1 failed in 0.12s

If you only want to hide certain exceptions, you can set __tracebackhide__ to a callable which gets the ExceptionInfo object. You can for example use this to make sure unexpected exception types aren't hidden:

import operator

import pytest


class ConfigException(Exception):
    pass


def checkconfig(x):
    __tracebackhide__ = operator.methodcaller("errisinstance", ConfigException)
    if not hasattr(x, "config"):
        raise ConfigException(f"not configured: {x}")


def test_something():
    checkconfig(42)

This will avoid hiding the exception traceback on unrelated exceptions (i.e. bugs in the assertion helper itself).
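
For reference, operator.methodcaller builds a callable that invokes the named method, with any extra arguments, on whatever object it is later given. A quick standalone illustration (the strings here are arbitrary examples):

```python
# methodcaller("upper") returns a callable roughly equivalent to
# lambda obj: obj.upper()
import operator

upper = operator.methodcaller("upper")
print(upper("abc"))  # ABC

# With an extra argument, in the same shape as
# methodcaller("errisinstance", ConfigException): roughly
# lambda obj: obj.startswith("type")
starts_with_type = operator.methodcaller("startswith", "type")
print(starts_with_type("type1"))  # True
```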

Detect if running from within a pytest run

Usually it is a bad idea to make application code behave differently if called from a test. But if you absolutely must find out if your application code is running from a test, you can do this:

import os


if os.environ.get("PYTEST_VERSION") is not None:
    # Things you want to do if your code is called by pytest.
    ...
else:
    # Things you want to do if your code is not called by pytest.
    ...

Adding info to test report header

It's easy to present extra information in a pytest run:

# content of conftest.py


def pytest_report_header(config):
    return "project deps: mylib-1.1"

which will add the string to the test header accordingly:

$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
project deps: mylib-1.1
rootdir: /home/sweet/project
collected 0 items

========================== no tests ran in 0.12s ===========================

It is also possible to return a list of strings which will be considered as several lines of information. You may consider config.get_verbosity() in order to display more information if applicable:

# content of conftest.py


def pytest_report_header(config):
    if config.get_verbosity() > 0:
        return ["info1: did you know that ...", "did you?"]

which will add info only when run with "-v":

$ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: .pytest_cache
info1: did you know that ...
did you?
rootdir: /home/sweet/project
collecting ... collected 0 items

========================== no tests ran in 0.12s ===========================

and nothing when run plainly:

$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 0 items

========================== no tests ran in 0.12s ===========================

Profiling test duration

If you have a slow running large test suite you might want to find out which tests are the slowest. Let's make an artificial test suite:

# content of test_some_are_slow.py
import time


def test_funcfast():
    time.sleep(0.1)


def test_funcslow1():
    time.sleep(0.2)


def test_funcslow2():
    time.sleep(0.3)

Now we can profile which test functions execute the slowest:

$ pytest --durations=3
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 3 items

test_some_are_slow.py ...                                            [100%]

=========================== slowest 3 durations ============================
0.30s call     test_some_are_slow.py::test_funcslow2
0.20s call     test_some_are_slow.py::test_funcslow1
0.10s call     test_some_are_slow.py::test_funcfast
============================ 3 passed in 0.12s =============================

Incremental testing - test steps

Sometimes you may have a testing situation which consists of a series of test steps. If one step fails it makes no sense to execute further steps, as they are all expected to fail anyway and their tracebacks add no insight. Here is a simple conftest.py file which introduces an incremental marker which is to be used on classes:

# content of conftest.py

from typing import Dict, Tuple

import pytest

# store history of failures per test class name and per index in parametrize (if parametrize used)
_test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        # incremental marker is used
        if call.excinfo is not None:
            # the test has failed
            # retrieve the class name of the test
            cls_name = str(item.cls)
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # retrieve the name of the test function
            test_name = item.originalname or item.name
            # store in _test_failed_incremental the original name of the failed test
            _test_failed_incremental.setdefault(cls_name, {}).setdefault(
                parametrize_index, test_name
            )


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        # retrieve the class name of the test
        cls_name = str(item.cls)
        # check if a previous test has failed for this class
        if cls_name in _test_failed_incremental:
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # retrieve the name of the first test function to fail for this class name and index
            test_name = _test_failed_incremental[cls_name].get(parametrize_index, None)
            # if name found, test has failed for the combination of class name & test name
            if test_name is not None:
                pytest.xfail(f"previous test failed ({test_name})")

These two hook implementations work together to abort incremental-marked tests in a class. Here is a test module example:

# content of test_step.py

import pytest


@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass

    def test_modification(self):
        assert 0

    def test_deletion(self):
        pass


def test_normal():
    pass

If we run this:

$ pytest -rx
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 4 items

test_step.py .Fx.                                                    [100%]

================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef0001>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:11: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion - reason: previous test failed (test_modification)
================== 1 failed, 2 passed, 1 xfailed in 0.12s ==================

We'll see that test_deletion was not executed because test_modification failed. It is reported as an "expected failure".

Package/Directory-level fixtures (setups)

If you have nested test directories, you can have per-directory fixture scopes by placing fixture functions in a conftest.py file in that directory. You can use all types of fixtures, including autouse fixtures, which are the equivalent of xUnit's setup/teardown concept. It's however recommended to have explicit fixture references in your tests or test classes rather than relying on implicitly executing setup/teardown functions, especially if they are far away from the actual tests.

Here is an example for making a db fixture available in a directory:

# content of a/conftest.py
import pytest


class DB:
    pass


@pytest.fixture(scope="package")
def db():
    return DB()

and then a test module in that directory:

# content of a/test_db.py
def test_a1(db):
    assert 0, db  # to show value

another test module:

# content of a/test_db2.py
def test_a2(db):
    assert 0, db  # to show value

and then a module in a sister directory which will not see the db fixture:

# content of b/test_error.py
def test_root(db):  # no db here, will error out
    pass

We can run this:

$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 7 items

a/test_db.py F                                                       [ 14%]
a/test_db2.py F                                                      [ 28%]
b/test_error.py E                                                    [ 42%]
test_step.py .Fx.                                                    [100%]

================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file /home/sweet/project/b/test_error.py, line 1
  def test_root(db):  # no db here, will error out
E       fixture 'db' not found
>       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
>       use 'pytest --fixtures [testpath]' for help on them.

/home/sweet/project/b/test_error.py:1
================================= FAILURES =================================
_________________________________ test_a1 __________________________________

db = <conftest.DB object at 0xdeadbeef0002>

    def test_a1(db):
>       assert 0, db  # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef0002>
E       assert 0

a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________

db = <conftest.DB object at 0xdeadbeef0002>

    def test_a2(db):
>       assert 0, db  # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef0002>
E       assert 0

a/test_db2.py:2: AssertionError
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef0003>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:11: AssertionError
========================= short test summary info ==========================
FAILED a/test_db.py::test_a1 - AssertionError: <conftest.DB object at 0x7...
FAILED a/test_db2.py::test_a2 - AssertionError: <conftest.DB object at 0x...
FAILED test_step.py::TestUserHandling::test_modification - assert 0
ERROR b/test_error.py::test_root
============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============

The two test modules in the a directory see the same db fixture instance, while the one test in the sister directory b doesn't see it. We could of course also define a db fixture in that sister directory's conftest.py file. Note that each fixture is only instantiated if there is a test actually needing it (unless you use "autouse" fixtures, which are always instantiated ahead of the first test executing).

Post-process test reports / failures

If you want to postprocess test reports and need access to the executing environment, you can implement a hook that gets called when the test "report" object is about to be created. Here we write out all failing test calls and also access a fixture (if it was used by the test) in case you want to query/look at it during post processing. In our case we just write some information out to a failures file:

# content of conftest.py

import os.path

import pytest


@pytest.hookimpl(wrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    rep = yield

    # we only look at actual failing test calls, not setup/teardown
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode, encoding="utf-8") as f:
            # let's also access a fixture for the fun of it
            if "tmp_path" in item.fixturenames:
                extra = " ({})".format(item.funcargs["tmp_path"])
            else:
                extra = ""

            f.write(rep.nodeid + extra + "\n")

    return rep

If you then have failing tests:

# content of test_module.py
def test_fail1(tmp_path):
    assert 0


def test_fail2():
    assert 0

and run them:

$ pytest test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 2 items

test_module.py FF                                                    [100%]

================================= FAILURES =================================
________________________________ test_fail1 ________________________________

tmp_path = PosixPath('PYTEST_TMPDIR/test_fail10')

    def test_fail1(tmp_path):
>       assert 0
E       assert 0

test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
>       assert 0
E       assert 0

test_module.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_fail1 - assert 0
FAILED test_module.py::test_fail2 - assert 0
============================ 2 failed in 0.12s =============================

you will have a "failures" file which contains the failing test ids:

$ cat failures
test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10)
test_module.py::test_fail2

Making test result information available in fixtures

If you want to make test result reports available in fixture finalizers, here is a little example implemented via a local plugin:

# content of conftest.py
from typing import Dict
import pytest
from pytest import StashKey, CollectReport

phase_report_key = StashKey[Dict[str, CollectReport]]()


@pytest.hookimpl(wrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    rep = yield

    # store test results for each phase of a call, which can
    # be "setup", "call", "teardown"
    item.stash.setdefault(phase_report_key, {})[rep.when] = rep

    return rep


@pytest.fixture
def something(request):
    yield
    # request.node is an "item" because we use the default
    # "function" scope
    report = request.node.stash[phase_report_key]
    if report["setup"].failed:
        print("setting up a test failed or skipped", request.node.nodeid)
    elif ("call" not in report) or report["call"].failed:
        print("executing test failed or skipped", request.node.nodeid)

If you then have failing tests:

# content of test_module.py

import pytest


@pytest.fixture
def other():
    assert 0


def test_setup_fails(something, other):
    pass


def test_call_fails(something):
    assert 0


def test_fail2():
    assert 0

and run it:

$ pytest -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
rootdir: /home/sweet/project
collected 3 items

test_module.py Esetting up a test failed or skipped test_module.py::test_setup_fails
Fexecuting test failed or skipped test_module.py::test_call_fails
F

================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________

    @pytest.fixture
    def other():
>       assert 0
E       assert 0

test_module.py:7: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________

something = None

    def test_call_fails(something):
>       assert 0
E       assert 0

test_module.py:15: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
>       assert 0
E       assert 0

test_module.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_call_fails - assert 0
FAILED test_module.py::test_fail2 - assert 0
ERROR test_module.py::test_setup_fails - assert 0
======================== 2 failed, 1 error in 0.12s ========================

You'll see that the fixture finalizers could use the precise reporting information.

PYTEST_CURRENT_TEST environment variable

Sometimes a test session might get stuck and there might be no easy way to figure out which test got stuck, for example if pytest was run in quiet mode (-q) or you don't have access to the console output. This is particularly a problem if the problem happens only sporadically, in the famous "flaky" kind of tests.

pytest sets the PYTEST_CURRENT_TEST environment variable when running tests, which can be inspected by process monitoring utilities or libraries like psutil to discover which test got stuck if necessary:

import psutil

for pid in psutil.pids():
    environ = psutil.Process(pid).environ()
    if "PYTEST_CURRENT_TEST" in environ:
        print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')

During the test session pytest will set PYTEST_CURRENT_TEST to the current test nodeid and the current stage, which can be setup, call, or teardown.

For example, when running a single test function named test_foo from foo_module.py, PYTEST_CURRENT_TEST will be set to:

  1. foo_module.py::test_foo (setup)

  2. foo_module.py::test_foo (call)

  3. foo_module.py::test_foo (teardown)

in that order.

Note

The contents of PYTEST_CURRENT_TEST are meant to be human readable and the actual format can be changed between releases (even bug fixes), so it shouldn't be relied on for scripting or automation.
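
Merely reading the variable (without parsing its format) is fine, though. A minimal sketch of application code checking whether it is currently inside a pytest test:

```python
# PYTEST_CURRENT_TEST only exists while pytest is running a test;
# outside of a test run, os.environ.get returns the fallback value.
import os

current = os.environ.get("PYTEST_CURRENT_TEST", "<not under pytest>")
print(current)
```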

Freezing pytest

If you freeze your application using a tool like PyInstaller in order to distribute it to your end-users, it is a good idea to also package your test runner and run your tests using the frozen application. This way packaging errors such as dependencies not being included into the executable can be detected early, while also allowing you to send test files to users so they can run them on their machines, which can be useful to obtain more information about a hard-to-reproduce bug.

Fortunately recent PyInstaller releases already have a custom hook for pytest, but if you are using another tool to freeze executables, such as cx_freeze or py2exe, you can use pytest.freeze_includes() to obtain the full list of internal pytest modules. How to configure the tools to find the internal modules varies from tool to tool, however.

Instead of freezing the pytest runner as a separate executable, you can make your frozen program work as the pytest runner by some clever argument handling during program startup. This allows you to have a single executable, which is usually more convenient. Please note that the mechanism for plugin discovery used by pytest (entry points) doesn't work with frozen executables, so pytest can't find any third party plugins automatically. To include third party plugins like pytest-timeout they must be imported explicitly and passed on to pytest.main.

# contents of app_main.py
import sys

import pytest_timeout  # Third party plugin

if len(sys.argv) > 1 and sys.argv[1] == "--pytest":
    import pytest

    sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
else:
    # normal application execution: at this point argv can be parsed
    # by your argument-parsing library of choice as usual
    ...

This allows you to execute tests using the frozen application with standard pytest command-line options:

./app_main --pytest --verbose --tb=long --junit-xml=results.xml test-suite/