C++
To use CodSpeed in your C++ codebase, use CodSpeed's google_benchmark library, a compatibility layer that runs both instrumented and walltime CodSpeed benchmarks.
Writing benchmarks
CodSpeed integrates with the google_benchmark library. Here is a small example of how to declare benchmarks; otherwise, any existing benchmarks in your project can be reused.
#include <benchmark/benchmark.h>
#include <cstring>
#include <string>

// Define the function under test
static void BM_StringCopy(benchmark::State &state) {
  std::string x = "hello";
  // Google Benchmark relies on state.begin() and state.end() to run the benchmark and count iterations
  for (auto _ : state) {
    std::string copy(x);
  }
}
// Register the benchmark to be called by the executable
BENCHMARK(BM_StringCopy);

static void BM_memcpy(benchmark::State &state) {
  char *src = new char[state.range(0)];
  char *dst = new char[state.range(0)];
  memset(src, 'x', state.range(0));
  for (auto _ : state)
    memcpy(dst, src, state.range(0));
  delete[] src;
  delete[] dst;
}
BENCHMARK(BM_memcpy)->Range(8, 8 << 10);

// Entrypoint of the benchmark executable
BENCHMARK_MAIN();
Check out the Google benchmark user guide for more advanced usage of the library.
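For example, the upstream API also lets you register the same benchmark with several input sizes. Here is a minimal sketch using the standard Arg and DoNotOptimize helpers; the BM_SumVector function and its workload are purely illustrative:
#include <benchmark/benchmark.h>
#include <numeric>
#include <vector>

static void BM_SumVector(benchmark::State &state) {
  // Build an input whose size comes from the registered arguments below
  std::vector<int> data(state.range(0), 1);
  for (auto _ : state) {
    // DoNotOptimize prevents the compiler from discarding the unused result
    benchmark::DoNotOptimize(std::accumulate(data.begin(), data.end(), 0));
  }
}
// Run the same benchmark with several input sizes
BENCHMARK(BM_SumVector)->Arg(64)->Arg(1024)->Arg(8192);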
Building & Running benchmarks
To build and run benchmarks, CodSpeed officially supports using the google_benchmark library with both CMake and Bazel.
If you are using another build system, you may find guidelines in the custom build systems section.
CMake
To use CodSpeed's google_benchmark integration with CMake, you can declare a benchmark executable as follows:
cmake_minimum_required(VERSION 3.12)
include(FetchContent)

project(my_codspeed_project VERSION 0.0.0 LANGUAGES CXX)

# Enable release mode with debug symbols to display useful profiling data
set(CMAKE_BUILD_TYPE RelWithDebInfo)
set(BENCHMARK_DOWNLOAD_DEPENDENCIES ON)

FetchContent_Declare(
  google_benchmark
  GIT_REPOSITORY https://github.com/CodSpeedHQ/codspeed-cpp # Target the codspeed-cpp repository
  SOURCE_SUBDIR google_benchmark # Make sure to target the google_benchmark subdirectory
  GIT_TAG main # Or choose a specific version or git ref; check the releases page on the repository
)
FetchContent_MakeAvailable(google_benchmark)

# Declare your benchmark executable and its sources here
add_executable(my_benchmark_executable benches/bench.cpp)

# Link your executable against `benchmark::benchmark`, the google_benchmark library target
# Note: the first argument must match the first argument of the `add_executable` call
target_link_libraries(my_benchmark_executable benchmark::benchmark)
Check out the releases page if you want to target a specific version of the library.
This example is a dedicated CMakeLists.txt file for the benchmark executable. You can also add an executable target to your existing project's CMakeLists.txt. Make sure to link this target against the benchmark::benchmark library.
Building benchmarks
To build the benchmark executable, run:
mkdir build && cd build
cmake .. -DCODSPEED_MODE=instrumentation
make -j
The CODSPEED_MODE flag
Please note the -DCODSPEED_MODE=instrumentation flag in the cmake command. This enables the CodSpeed instrumentation mode for the benchmark executable, where each benchmark is run only once on a simulated CPU. You can also use -DCODSPEED_MODE=walltime if you are building for walltime CodSpeed reports; see the dedicated documentation for more information.
If you omit the CODSPEED_MODE CMake flag, CodSpeed will not be enabled in the benchmark executable, and it will run as a regular benchmark.
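For example, to build for walltime measurements instead, only the mode flag changes:
mkdir build && cd build
cmake .. -DCODSPEED_MODE=walltime
make -j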
Debug symbols
In order to get the most out of CodSpeed reports, debug symbols need to be enabled within your executable. In the example above, this is done by setting CMAKE_BUILD_TYPE to RelWithDebInfo.
Running the benchmarks locally
Simply execute the compiled binary to run the benchmarks.
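For instance, with the executable declared in the CMake example above (path relative to the repository root, as in the CI example below):
./build/my_benchmark_executable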
Congratulations! 🎉 You can now run these benchmarks in your CI to get actual performance measurements.
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and properly configure the instrumented environment.
If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main
branch and every
pull request:
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the benchmark target(s)
        run: |
          mkdir build
          cd build
          cmake -DCODSPEED_MODE=instrumentation ..
          make -j
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: ./build/my_benchmark_executable # Replace with the proper executable path
          token: ${{ secrets.CODSPEED_TOKEN }}
Running benchmarks in parallel CI jobs
If your benchmarks are taking too much time to run under the CodSpeed action, you can run them in parallel to speed up the execution.
To parallelize your benchmarks, first split them into multiple executables that each run a subset of your benchmarks.
# Create individual benchmark executables
set(BENCHMARKS first_bench second_bench third_bench)

# Add a `bench_name` target with `bench_name.cpp` source for each bench listed above
foreach(benchmark IN LISTS BENCHMARKS)
  add_executable(${benchmark} benches/${benchmark}.cpp)
  target_link_libraries(${benchmark}
    benchmark::benchmark
  )
endforeach()

# Create a custom target to run all benchmarks locally
add_custom_target(run_all_benchmarks
  COMMAND ${CMAKE_COMMAND} -E echo "Running all benchmarks..."
)

# Register each benchmark as a post-build step of the `run_all_benchmarks` target
foreach(benchmark IN LISTS BENCHMARKS)
  add_custom_command(
    TARGET run_all_benchmarks
    POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E echo "Running ${benchmark}..."
    COMMAND $<TARGET_FILE:${benchmark}>
    WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
  )
endforeach()
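For instance, assuming the Makefile-based build commands shown earlier, you could then run every benchmark locally with:
cd build
make -j                   # build all benchmark executables
make run_all_benchmarks   # triggers the post-build commands above, running each benchmark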
Then, update your CI workflow to run the benchmarks executable by executable:
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: [first_bench, second_bench, third_bench]
    steps:
      - uses: actions/checkout@v4
      - name: Build the benchmark target
        run: |
          mkdir build
          cd build
          cmake -DCODSPEED_MODE=instrumentation ..
          make -j ${{ matrix.target }}
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: ./build/${{ matrix.target }}
          token: ${{ secrets.CODSPEED_TOKEN }}
Bazel
You can also use CodSpeed's google_benchmark integration with Bazel.
Building benchmarks
Import the library by adding this to your WORKSPACE
file:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "codspeed_cpp", # Name the codspeed_cpp will be imported as
# Target the main branch automatically, or select a specific version
urls = ["https://github.com/CodSpeedHQ/codspeed-cpp/archive/refs/heads/main.zip"],
strip_prefix = "codspeed-cpp-main",
)
Then, define your benchmark target in your package's BUILD.bazel file:
cc_binary(
    name = "my_benchmark", # Name of your benchmark target
    srcs = glob(["*.cpp", "*.hpp"]), # Or define sources however you wish
    deps = [
        "@codspeed_cpp//google_benchmark:benchmark", # `codspeed_cpp` must match the name used to import the library in WORKSPACE
    ],
)
Finally, you can build the benchmarks by running:
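# The target label below is a placeholder (it matches the CI example further down); replace it with your own package and target name
bazel build //path/to/bench:my_benchmark \
  --@codspeed_cpp//core:codspeed_mode=instrumentation \
  --compilation_mode=dbg \
  --copt=-O2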
Build options
As you may have noticed in the example, a few key build options are essential for Bazel to make full use of the CodSpeed library:
--@codspeed_cpp//core:codspeed_mode=instrumentation enables the CodSpeed features of the library and can take the following values:
off: the default when the CLI flag is not provided; disables CodSpeed.
instrumentation: benchmarks are run only once on a simulated CPU.
walltime: used for walltime CodSpeed reports; see the dedicated documentation.
--compilation_mode=dbg: enables debug symbols in the compiled binary, used to generate meaningful CodSpeed reports.
--copt=-O2: sets the desired level of compiler optimizations in the benchmark binary.
If you do not want to specify these flags every time, you can create a .bazelrc file at the root of the Bazel workspace with the following content:
build --@codspeed_cpp//core:codspeed_mode=instrumentation
build --compilation_mode=dbg
build --copt=-O2
Running the benchmarks locally
You can then run your benchmarks by running:
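# If you created the .bazelrc above, the build flags are picked up automatically; otherwise pass them explicitly as shown
bazel run //path/to/bench:my_benchmark --@codspeed_cpp//core:codspeed_mode=instrumentation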
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and properly configure the instrumented environment.
If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and
reports the results to CodSpeed on every push to the main
branch and every
pull request:
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Bazel
        uses: bazelbuild/setup-bazelisk@v2
      - name: Build the benchmark target(s)
        run: |
          bazel build //path/to/bench:my_benchmark --@codspeed_cpp//core:codspeed_mode=instrumentation
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: |
            bazel run //path/to/bench:my_benchmark --@codspeed_cpp//core:codspeed_mode=instrumentation
          token: ${{ secrets.CODSPEED_TOKEN }}
Note that we separated the build and run steps in the CI workflow. This is important to speed up the workflow and to avoid instrumenting the build step.
Custom build systems
If you need full control over your build system, here are the guiding steps to use CodSpeed.
Get the sources
Sources are located in the codspeed-cpp repository. You can either clone the repository, add it as a submodule, or download the sources as a zip file.
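For example, to vendor the sources as a git submodule (the third_party/codspeed-cpp path is only an illustrative choice):
git submodule add https://github.com/CodSpeedHQ/codspeed-cpp third_party/codspeed-cpp
# or clone it directly
git clone https://github.com/CodSpeedHQ/codspeed-cpp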
Build the library
Sources of the google_benchmark CodSpeed integration library are located in the google_benchmark subdirectory.
Make sure the following pre-processor variables are defined when you build the library
When building the library, the tricky part is to make sure google_benchmark's fork has access to the codspeed-core library.
Additionally, the following pre-processor variables must be defined:
CODSPEED_ENABLED: if not defined, google_benchmark will behave the same as the upstream library, with no CodSpeed features.
CODSPEED_INSTRUMENTATION: if running in instrumentation mode.
CODSPEED_WALLTIME: if running in walltime mode.
CODSPEED_ROOT_DIR: absolute path to the root directory of your project. This is used in the report to display file paths relative to your project root.
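As a purely illustrative sketch, defining these variables in a plain compiler invocation could look like the following; the include path, libraries, and the exact form expected for CODSPEED_ROOT_DIR are placeholders and should be checked against the codspeed-cpp sources:
g++ -O2 -g \
    -DCODSPEED_ENABLED \
    -DCODSPEED_INSTRUMENTATION \
    -DCODSPEED_ROOT_DIR="/absolute/path/to/your/project" \
    -I<path-to-codspeed-cpp>/google_benchmark/include \
    benches/bench.cpp <google_benchmark-and-codspeed-core-libraries> \
    -o my_benchmark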
If you run into issues integrating CodSpeed's google_benchmark
library with
your project, please reach out and open an issue on the
codspeed-cpp repository.