Python
The simplest way to integrate CodSpeed with your Python codebase is to use our pytest extension: pytest-codspeed. This extension automatically enables the CodSpeed engine on your benchmarks and reports the results to CodSpeed.
Creating benchmarks with pytest-codspeed is the same as with the pytest-benchmark API, so if you already have benchmarks written with it, you can start using CodSpeed right away.
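For example, a benchmark written with the pytest-benchmark fixture API runs as-is once pytest-codspeed is installed (a minimal sketch; the test name and data are illustrative):
def test_sort_performance(benchmark):
    # The familiar benchmark fixture call from pytest-benchmark works unchanged
    benchmark(lambda: sorted([5, 3, 1, 4, 2]))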
Installation
First, install pytest-codspeed as a development dependency:
- Poetry
- Pipenv
- Pip
poetry add -G dev pytest-codspeed
pipenv install -d pytest-codspeed
pip install pytest-codspeed && pip freeze | grep pytest-codspeed >> requirements-dev.txt
(assuming you store your development dependencies in ./requirements-dev.txt)
Usage
Creating benchmarks
Marking a whole test function as a benchmark with pytest.mark.benchmark
import pytest
from statistics import median

@pytest.mark.benchmark
def test_median_performance():
    return median([1, 2, 3, 4, 5])
Benchmarking selected lines of a test function with the benchmark fixture
from statistics import mean, median

def test_mean_performance(benchmark):
    # Precompute some data useful for the benchmark but that should not be
    # included in the benchmark time
    data = [1, 2, 3, 4, 5]

    # Benchmark the execution of the function
    benchmark(lambda: mean(data))

def test_mean_and_median_performance(benchmark):
    # Precompute some data useful for the benchmark but that should not be
    # included in the benchmark time
    data = [1, 2, 3, 4, 5]

    # Benchmark the execution of the function:
    # The `@benchmark` decorator will automatically call the function and
    # measure its execution
    @benchmark
    def bench():
        mean(data)
        median(data)
Testing the benchmarks locally
If you want to run the benchmark tests locally, you can use the --codspeed pytest flag:
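pytest tests/ --codspeed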
Running pytest-codspeed locally will not produce any performance reporting; it's only useful for making sure that your benchmarks are working as expected. If you want performance reporting, you should run the benchmarks in your CI.
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and configure itself properly.
If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and on every pull request:
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v3
        with:
          python-version: "3.9"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed
Recipes
Running benchmarks in parallel
If your benchmarks take too much time to run under the CodSpeed action, you can run them in parallel to speed up the execution.
pytest-codspeed is compatible with pytest-xdist, a pytest plugin that distributes test execution across multiple processes. Thus, to run your benchmarks in parallel, simply enable the pytest-xdist plugin on top of pytest-codspeed.
Distributing the execution of the benchmarks only works on a single machine. Distributing across multiple machines is not supported yet.
First, install pytest-xdist as a development dependency:
- Poetry
- Pipenv
- Pip
poetry add -G dev pytest-xdist
pipenv install -d pytest-xdist
pip install pytest-xdist && pip freeze | grep pytest-xdist >> requirements-dev.txt
(assuming you store your development dependencies in ./requirements-dev.txt)
Then, you can run your benchmarks in parallel with pytest-xdist's -n flag:
pytest tests/ --codspeed -n auto
The change in the CI workflow would look like this:
  - name: Run benchmarks
    uses: CodSpeedHQ/action@v3
    with:
      token: ${{ secrets.CODSPEED_TOKEN }}
-     run: pytest tests/ --codspeed
+     run: pytest tests/ --codspeed -n auto
Usage with Nox
It's possible to use pytest-codspeed with Nox, a Python automation tool for running tasks, such as tests, across multiple environments.
Here is an example configuration file to run benchmarks with pytest-codspeed using Nox:
import nox

@nox.session
def codspeed(session):
    session.install('pytest')
    session.install('pytest-codspeed')
    session.run('pytest', '--codspeed')
You can then run the benchmarks:
nox --sessions codspeed
To use it with GitHub Actions, you can use the following workflow:
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v3
        with:
          python-version: "3.12.0"
      - name: Install Nox
        run: pip install nox
      - name: Install dependencies
        run: nox --sessions codspeed --install-only
      - name: Run the action
        uses: CodSpeedHQ/action@v3
        with:
          run: nox --sessions codspeed --reuse-existing-virtualenvs --no-install
          token: ${{ secrets.CODSPEED_TOKEN }}
Splitting the virtualenv installation from the execution of the benchmarks is optional, but it speeds up the benchmark run itself, since the dependencies are installed and compiled without the instrumentation enabled.
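For reference, the split corresponds to the two Nox invocations used in the workflow above: the first only creates the virtualenvs and installs the dependencies, and the second reuses them to run the benchmarks without reinstalling anything:
nox --sessions codspeed --install-only
nox --sessions codspeed --reuse-existing-virtualenvs --no-install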