For string functions we use the following benchmark system:
- Each function has a benchmark in a separate directory.
- A benchmark directory contains the following relevant files (see the example layout below):
  - A file function_variant.c for each variant to be tested.
  - A bash script run that runs the benchmark.
  - A script benchmark that describes what to benchmark.
  - The benchmark programs.
  - A directory data with benchmark results in gnuplot format.
  - A directory html with the visualized data.
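For example, a strstr benchmark directory might be laid out as follows; the variant file names here are purely illustrative:

strstr/
  strstr_naive.c      (one file per variant)
  strstr_twoway.c
  run                 (runs the benchmark)
  benchmark           (describes what to benchmark)
  data/               (results in gnuplot format)
  html/               (visualized data)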
Running a benchmark
Invoke
./run
Progress can be viewed in the function.html file in the html directory.
Benchmark parameters
Parameters can be edited in the benchmark file. Each test is done by a standalone program.
Typically, in the visualization the vertical axis mirrors the benchmark file while the horizontal axis corresponds to the string size.
For example, a simple strstr benchmark file consists of:

try random $N 10
try random $N 20
try aaab $N 10
Here $N is expanded to the test sizes (1, 10, 100 in this example), which generates a table obtained from the calls:

./random 1 10
./random 10 10
./random 100 10
./random 1 20
./random 10 20
./random 100 20
./aaab 1 10
./aaab 10 10
./aaab 100 10
A test can be run standalone as
LD_LIBRARY_PATH=. ./test arguments
Adding a benchmark
Almost any program can be turned into a benchmark if you follow a few simple rules:
- At start call init_tester() and at exit call fini_tester().
- Include a header that contains:
  - A wrapper function function_wrapper that calls function2 surrounded by the functions bench_start() and bench_end(size). As size, specify a parameter that you want to relate to the running time.
  - A macro that changes the function called (see the sketch after this list):

#define function function_wrapper
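As a concrete illustration for strstr, such a header might look as follows. This is only a sketch: the prototypes of bench_start() and bench_end() are assumptions based on the description above, not the tester's actual interface.

#include <string.h>

/* Assumed prototypes; the real ones are provided by the tester.  */
void bench_start (void);
void bench_end (size_t size);

/* The variant under test, implemented in a strstr_variant.c file.  */
char *strstr2 (const char *haystack, const char *needle);

/* Wrapper that times strstr2, relating the running time to the
   haystack length.  */
static char *
strstr_wrapper (const char *haystack, const char *needle)
{
  char *result;
  bench_start ();
  result = strstr2 (haystack, needle);
  bench_end (strlen (haystack));
  return result;
}

/* Redirect every call to strstr in the benchmarked program to the
   wrapper.  */
#define strstr strstr_wrapper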
Adding a variant of a function
Adding a variant is simple. Create a file function_variant.c that implements the function and can be compiled standalone. You must use the name function2 instead of function.
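For example, a naive strstr variant might live in a file strstr_naive.c; the implementation here is only an illustration of the naming convention:

#include <string.h>

/* A naive quadratic strstr, exported under the required name
   strstr2 instead of strstr.  */
char *
strstr2 (const char *haystack, const char *needle)
{
  size_t nlen = strlen (needle);
  if (nlen == 0)
    return (char *) haystack;
  for (; *haystack != '\0'; haystack++)
    if (strncmp (haystack, needle, nlen) == 0)
      return (char *) haystack;
  return NULL;
}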
Data and visualization
Benchmark results are saved in the data directory in gnuplot format. A data file consists of two space-separated columns, where the first is the test size and the second is the result.
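For example, a data file might look like this; the numbers are made up purely to illustrate the format:

1 0.42
10 1.37
100 9.81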
From these data, graphs are generated; they can be viewed in the html directory.
TODO wildcards to show queries
The tester prints the results of the functions to an output file, and all implementations should produce the same output. When they don't, a diff is printed.
Specific functions
- benchmarking/strstr - temporary page with results
- benchmarking/strcasestr - same benchmark
- benchmarking/strmemmem - same benchmark