Divan ⭐
Installation
For all Rust integrations, you will need the cargo-codspeed command to build and run your CodSpeed benchmarks.
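If the binary is not installed yet, it can typically be installed from crates.io with cargo (a sketch; the CI workflow further down installs it through a setup action instead):
cargo install cargo-codspeed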
Install the divan compatibility layer:
cargo add --dev codspeed-divan-compat --rename divan
Or directly change your Cargo.toml if you already have divan installed:
[dev-dependencies]
- divan = { version = "*" }
+ divan = { package = "codspeed-divan-compat", version = "*" }
This will install the codspeed-divan-compat crate and rename it to divan in your Cargo.toml. This way, you can keep your existing imports and the compatibility layer will take care of the rest.
Using the compatibility layer won't change the behavior of your benchmark suite outside of the CodSpeed instrumentation environment, and divan will still run it as usual.
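For instance, once the benchmark target is set up (see below), you can still run the suite outside of CodSpeed with the standard Cargo command:
cargo bench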
If you prefer, you can also install codspeed-divan-compat as-is and change your imports to use this new crate name.
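As a rough sketch of that alternative, assuming the compat crate mirrors divan's public API as described above (black_box is only used here as an illustrative item):
// With the rename shown earlier, existing code keeps its imports unchanged:
use divan::black_box;
// Installed as-is, the same item is imported under the compat crate's own name:
// use codspeed_divan_compat::black_box;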
Usage
Creating benchmarks
As an example, let's follow the divan documentation and write a benchmark suite for the Fibonacci function:
fn main() {
    // Run registered benchmarks.
    divan::main();
}

// Register a `fibonacci` function and benchmark it over multiple cases.
#[divan::bench(args = [1, 2, 4, 8, 16, 32])]
fn fibonacci(n: u64) -> u64 {
    if n <= 1 {
        1
    } else {
        fibonacci(n - 2) + fibonacci(n - 1)
    }
}
The last step in creating the divan benchmark is to add the new benchmark target in your Cargo.toml:
[[bench]]
name = "my_benchmark"
harness = false
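By Cargo convention, this target name maps to a file under benches/. Assuming the snippet above is saved as benches/my_benchmark.rs (the name is just an illustration), the layout looks roughly like this:
Cargo.toml
benches/
    my_benchmark.rs    # the fn main() and #[divan::bench] code above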
And that's it! You can now run your benchmark suite with CodSpeed.
Testing the benchmarks locally
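Before moving to CI, you can check that the benchmarks build and execute locally with the same cargo-codspeed commands used in the workflow below:
cargo codspeed build
cargo codspeed run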
Congrats! 🎉 You can now run these benchmarks in your CI to get actual performance measurements.
Running the benchmarks in your CI
To generate performance reports, you need to run the benchmarks in your CI. This allows CodSpeed to detect the CI environment and properly configure the instrumented environment.
If you want more details on how to configure the CodSpeed action, you can check out the Continuous Reporting section.
Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every pull request:
name: CodSpeed
on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:
jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup rust toolchain, cache and cargo-codspeed binary
        uses: moonrepo/setup-rust@v1
        with:
          channel: stable
          cache-target: release
          bins: cargo-codspeed
      - name: Build the benchmark target(s)
        run: cargo codspeed build
      - name: Run the benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          run: cargo codspeed run
          token: ${{ secrets.CODSPEED_TOKEN }}
Advanced usage
Divan provides a lot of convenient features to help you write benchmarks. Below is a selection that can be useful in CodSpeed benchmarks; check out the divan documentation for an exhaustive list of features.
Type generics
#[divan::bench(types = [&str, String])]
fn from_str<'a, T>() -> T
where
    T: From<&'a str>,
{
    divan::black_box("hello world").into()
}
Combining type generics and arguments
use std::collections::{BTreeSet, HashSet};

#[divan::bench(
    types = [Vec<i32>, BTreeSet<i32>, HashSet<i32>],
    args = [0, 2, 4, 16, 256, 4096],
)]
fn from_range<T>(n: i32) -> T
where
    T: FromIterator<i32>,
{
    (0..n).collect()
}
Generating dynamic inputs
Time spent generating inputs is not measured in benchmarks.
#[divan::bench]
fn bench(bencher: divan::Bencher) {
    bencher
        .with_inputs(|| {
            // Generate input:
            String::from("...")
        })
        .bench_values(|s| {
            // Use input by-value:
            s + "123"
        });
}