Rust

The simplest way to integrate CodSpeed with your Rust codebase is to use our cargo tool: cargo-codspeed. It lets you run your CodSpeed benchmarks without modifying the behavior of the usual cargo bench command.

tip

Creating benchmarks with cargo-codspeed is the same as with the bencher and criterion APIs. So if you already have benchmarks written with one of those, only a small import change is required 🚀

Runner installation

To check your benchmarks with CodSpeed, you first need to install the cargo-codspeed CLI tool:

cargo install cargo-codspeed --locked

This tool can then be used directly within cargo:
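For instance, the two subcommands used throughout this guide are build and run:

cargo codspeed build   # build the benchmark targets for CodSpeed
cargo codspeed run     # run the previously built benchmarks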

Usage

With Criterion.rs

Let's start with the example from the Criterion.rs documentation, creating a benchmark suite for the Fibonacci function.

You need to install the Criterion.rs compatibility layer:

cargo add --dev codspeed-criterion-compat --rename criterion

Or directly change your Cargo.toml if you already have criterion installed:

[dev-dependencies]
- criterion = { version = "*" }
+ criterion = { package = "codspeed-criterion-compat", version = "*" }

This will install the codspeed-criterion-compat crate and rename it to criterion in your Cargo.toml. This way, you can keep your existing imports and the compatibility layer will take care of the rest.

tip

Using the compatibility layer won't change the behavior of your benchmark suite outside of the CodSpeed instrumentation environment and Criterion.rs will still run it as usual.

note

If you prefer, you can also install codspeed-criterion-compat as is and change your imports to use this new crate name.
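In that case, a minimal sketch of the non-renaming setup (your imports would then use codspeed_criterion_compat instead of criterion):

cargo add --dev codspeed-criterion-compat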

Then, you can create your benchmark suite:

benches/my_benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

pub fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);

The last step in creating the Criterion benchmark is to add the new benchmark target in your Cargo.toml:

Cargo.toml
[[bench]]
name = "my_benchmark"
harness = false

And that's it! You can now run your benchmark suite with CodSpeed:
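For example, build the benchmark target and then run it (both commands are detailed in the sections below):

cargo codspeed build
cargo codspeed run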

Congrats! 🎉 You can now run those benchmarks in your CI to get the actual performance measurements.

Cargo Workspaces

If you're using CodSpeed within a workspace, you can use the -p flag to specify the crate to run the build command on:

cargo codspeed build -p my_package

Build only specific benchmarks

By default, cargo codspeed build will build all the benchmarks in your project.

Sometimes you may want to build and run only a subset of your benchmarks. With the following folder structure:

benches/
├── bench1.rs
└── bench2.rs

To build only bench1, you can pass its name as an argument to the build and run commands:

cargo codspeed build bench1

Feature flags

If you're using feature flags in your benchmark suite, you can use the --features flag to specify the features to enable:

cargo codspeed build --features my_feature

Run the built benchmarks

By default, cargo codspeed run will run all the built benchmarks (i.e., those built by the latest cargo codspeed build command you ran).

To run only a subset of the built benchmarks, you can do the following:

cargo codspeed run -p my_package # Run all the benchmarks of the `my_package` crate
cargo codspeed run bench1 # Run only the `bench1` benchmark

With the test::bench API

Let's start with the example from the bencher documentation, creating a benchmark suite for two simple functions.

You need to install the bencher compatibility layer:

cargo add --dev codspeed-bencher-compat --rename bencher

Or directly change your Cargo.toml if you already have bencher installed:

[dev-dependencies]
- bencher = { version = "*" }
+ bencher = { package = "codspeed-bencher-compat", version = "*" }

note

If you prefer, you can also install codspeed-bencher-compat as is and change your imports to use this new crate name.

This will install the codspeed-bencher-compat crate and rename it to bencher in your Cargo.toml. This way, you can keep your existing imports and the compatibility layer will take care of the rest.
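If you prefer the non-renaming approach from the note above, a minimal sketch (your imports would then use codspeed_bencher_compat instead of bencher):

cargo add --dev codspeed-bencher-compat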

tip

Using the compatibility layer won't change the behavior of your existing benchmark suite outside of the CodSpeed instrumentation environment, and the benches will still run as usual.

Then, you can create your benchmark suite:

benches/example.rs
use bencher::{benchmark_group, benchmark_main, Bencher};

fn a(bench: &mut Bencher) {
    bench.iter(|| {
        (0..1000).fold(0, |x, y| x + y)
    })
}

fn b(bench: &mut Bencher) {
    const N: usize = 1024;
    bench.iter(|| {
        vec![0u8; N]
    });

    bench.bytes = N as u64;
}

benchmark_group!(benches, a, b);
benchmark_main!(benches);

The last step in creating the Bencher benchmark is to add the new benchmark target in your Cargo.toml:

Cargo.toml
[[bench]]
name = "example"
harness = false

And that's it! You can now run your benchmark suite with CodSpeed:
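As before, build the benchmark target and then run it:

cargo codspeed build
cargo codspeed run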

Congrats! 🎉 You can now run those benchmarks in your CI to get the actual performance measurements 👇.

Running the benchmarks in your CI

To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and properly configure the instrumented environment.

tip

If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.

Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every pull request:

.github/workflows/codspeed.yml
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup rust toolchain, cache and cargo-codspeed binary
        uses: moonrepo/setup-rust@v1
        with:
          channel: stable
          cache-target: release
          bins: cargo-codspeed

      - name: Build the benchmark target(s)
        run: cargo codspeed build

      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: cargo codspeed run
          token: ${{ secrets.CODSPEED_TOKEN }}