This crate contains end-to-end performance and memory benchmarks for xsd-parser and the generated types.
It is intended for:
- comparing different generated backends (e.g. quick-xml vs serde-based implementations),
- tracking performance and memory regressions.
The goal is to benchmark realistic end-to-end flows.
Build and run in release mode:

```sh
cargo run -p benchmark --release
```

You can also pass CLI flags:

```sh
cargo run -p benchmark --release -- --help
```

If you are interested in benchmarks of the debug build, you can also run the following command. This can be relevant because if you execute xsd-parser inside the build script of your crate, it is usually compiled as a debug build:

```sh
cargo run -p benchmark
```

The benchmark runner prints a summary table to the terminal.
Typical metrics include:
- Runtime (min / max / avg / median across runs),
- Stack usage.
The reported stack usage is a high-water-mark estimate based on stack painting and scanning. It is Linux-only and should be treated as an approximation.
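
As a rough sketch of the paint-and-scan idea (not the crate's actual implementation — the real measurement is Linux-specific, and the constants, the dedicated worker thread, and the scanning strategy here are illustrative assumptions):

```rust
/// Illustrative sketch of a stack-painting high-water-mark estimate.
/// NOT the crate's actual code; sizes and details are assumptions.
fn stack_high_water<F: FnOnce() + Send + 'static>(workload: F) -> usize {
    const PAINT: u8 = 0xAA;       // byte pattern written into unused stack
    const PROBE: usize = 1 << 20; // paint 1 MiB below the current frame
    const GAP: usize = 4096;      // skip this function's own frame

    std::thread::Builder::new()
        .stack_size(8 << 20) // dedicated thread with a known, fully mapped stack
        .spawn(move || {
            let marker = 0u8;
            let top = &marker as *const u8 as usize - GAP;
            let bottom = top - PROBE;

            // Paint the region the workload's stack frames will grow into.
            unsafe { std::ptr::write_bytes(bottom as *mut u8, PAINT, PROBE) };

            workload();

            // Scan upwards from the deepest address: the first overwritten
            // byte approximates the workload's maximum stack depth.
            (bottom..top)
                .find(|&addr| unsafe { *(addr as *const u8) } != PAINT)
                .map(|addr| top - addr)
                .unwrap_or(0)
        })
        .expect("failed to spawn worker thread")
        .join()
        .expect("workload panicked")
}
```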
To add a benchmark for a new schema:

- Extend the build script to generate the code from the schema.
- Add a new module in `benchmark/src/schemas/`.
- Register it in `benchmark/src/schemas/mod.rs`.
- Implement the benchmark entrypoints used by the runner (`benchmark/src/main.rs`).
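
As a very rough sketch of these steps (the module name, the stand-in `Root` type, and the `run` entrypoint are hypothetical; the real signatures are whatever `benchmark/src/main.rs` expects):

```rust
// benchmark/src/schemas/mod.rs (hypothetical module name)
pub mod my_schema; // register the new schema module so the runner can reach it
```

```rust
// benchmark/src/schemas/my_schema.rs (hypothetical sketch)

/// Stand-in for a type generated from the schema by the build script.
#[derive(Debug, serde::Deserialize)]
pub struct Root {
    pub id: u32,
}

/// Entrypoint called by the benchmark runner: parse the payload into the
/// generated types so the runner can time the whole flow.
pub fn run(xml: &[u8]) -> Root {
    quick_xml::de::from_reader(xml).expect("benchmark payload must parse")
}
```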
Guidelines:
- Prefer end-to-end benchmarks that reflect real-world usage (a sketch follows this list):
- parse XML from bytes,
- deserialize into the generated structs,
- (optionally) re-serialize to XML.
- Ensure test cases are reproducible:
- fixed seeds,
- fixed input payloads,
- avoid reading from the network.
- Use `--release` when comparing results.
- Consider multiple payload sizes (small/medium/large) to get throughput curves.
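
For illustration, a minimal end-to-end roundtrip following these guidelines might look like the sketch below. The `Order` type is a stand-in for a struct generated by xsd-parser, and quick-xml's serde support (the `serialize` feature) is assumed:

```rust
use serde::{Deserialize, Serialize};

/// Stand-in for a struct generated by xsd-parser from a schema.
#[derive(Debug, Deserialize, Serialize)]
struct Order {
    id: u32,
    customer: String,
}

/// End-to-end flow: parse XML, deserialize into the generated struct,
/// and (optionally) re-serialize back to XML.
fn roundtrip(payload: &str) -> String {
    let order: Order = quick_xml::de::from_str(payload).expect("payload must parse");
    quick_xml::se::to_string(&order).expect("re-serialization must succeed")
}

fn main() {
    // Fixed input payload so runs stay reproducible (no network, no randomness).
    let xml = "<Order><id>1</id><customer>ACME</customer></Order>";
    println!("{}", roundtrip(xml));
}
```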