C++ class templates for graph construction and search
This document explains how to use the performance testing framework to quantitatively evaluate optimization improvements.
The performance testing suite measures baseline performance for the key bottlenecks identified in the TODO.md:
```sh
cd build
../scripts/run_performance_tests.sh
```
This will run the full benchmark suite and write timestamped results to the `performance_results/` directory.
Make your performance improvements to the codebase, then re-run the suite:

```sh
../scripts/run_performance_tests.sh
```

Compare the new results against the old baseline:

```sh
# Automatic comparison with detailed analysis
../scripts/compare_performance.py baseline_old.txt baseline_new.txt

# Manual comparison
diff -u baseline_old.txt baseline_new.txt
```
**Edge lookup**

What it measures: time to find edges from vertices using the current O(n) linear search.

Scenarios tested:

Optimization target: replace the linear search with an O(1) hash-based lookup.
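The targeted change can be sketched as follows. This is a minimal illustration, not the library's actual API: the `EdgeIndex` type and its `add_edge`/`targets_of` members are hypothetical names standing in for the real graph templates.

```cpp
#include <unordered_map>
#include <vector>

// Hypothetical sketch: replace an O(n) scan over an edge list with an
// O(1) average-case lookup keyed by source vertex.
struct EdgeIndex {
    // source vertex id -> target vertex ids
    std::unordered_map<int, std::vector<int>> adjacency;

    void add_edge(int from, int to) { adjacency[from].push_back(to); }

    // O(1) average lookup instead of scanning every edge
    const std::vector<int>& targets_of(int from) const {
        static const std::vector<int> empty;
        auto it = adjacency.find(from);
        return it == adjacency.end() ? empty : it->second;
    }
};
```

The trade-off is extra memory for the hash table; the benchmark quantifies whether the lookup speedup justifies it.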
**Vertex removal**

What it measures: time to remove vertices together with all their incoming and outgoing edges.

Scenarios tested:

Optimization target: reduce the operation from O(m²) to O(m) complexity.
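The complexity reduction can be illustrated with a stand-alone sketch (not the project's real code): erasing matching edges one at a time from a vector is O(m²) in the worst case, while a single `std::remove_if` pass over the edge list is O(m).

```cpp
#include <algorithm>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>; // (from, to)

// Remove every edge touching vertex v in one O(m) pass, instead of
// repeatedly searching and erasing (O(m^2) worst case).
void remove_vertex_edges(std::vector<Edge>& edges, int v) {
    edges.erase(std::remove_if(edges.begin(), edges.end(),
                               [v](const Edge& e) {
                                   return e.first == v || e.second == v;
                               }),
                edges.end());
}
```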
**Context creation and reuse**

What it measures: memory allocation overhead and the benefit of reusing contexts.

Scenarios tested:

Optimization target: memory pooling and context-reuse patterns.
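One way such a context-reuse pattern might look is sketched below. `SearchContext` and `ContextPool` are illustrative names, not the framework's real types: a small free list hands back recycled contexts so their internal buffers keep their capacity instead of being reallocated each time.

```cpp
#include <memory>
#include <vector>

struct SearchContext {
    std::vector<int> visited;          // scratch state reused across searches
    void reset() { visited.clear(); }  // keeps capacity, avoiding realloc
};

class ContextPool {
    std::vector<std::unique_ptr<SearchContext>> free_;
public:
    std::unique_ptr<SearchContext> acquire() {
        if (free_.empty()) return std::make_unique<SearchContext>();
        auto ctx = std::move(free_.back());
        free_.pop_back();
        ctx->reset();
        return ctx;
    }
    void release(std::unique_ptr<SearchContext> ctx) {
        free_.push_back(std::move(ctx));
    }
    std::size_t idle() const { return free_.size(); }
};
```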
**Concurrent throughput**

What it measures: throughput scaling with multiple threads.

Scenarios tested:

Optimization target: better concurrent performance patterns.
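A throughput-scaling measurement of this kind can be sketched as a small harness that runs the same workload from N threads and counts completed operations; the `run_threads` helper and its workload lambda are assumptions for illustration, not part of the framework.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Run `work` ops_per_thread times on each of n_threads threads and
// return the total number of completed operations.
template <typename Work>
long run_threads(int n_threads, int ops_per_thread, Work work) {
    std::atomic<long> done{0};
    std::vector<std::thread> threads;
    for (int t = 0; t < n_threads; ++t) {
        threads.emplace_back([&] {
            for (int i = 0; i < ops_per_thread; ++i) {
                work();
                done.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto& th : threads) th.join();
    return done.load();
}
```

Dividing the total by wall-clock time at 1, 2, 4, ... threads shows how throughput scales with concurrency.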
| Optimization | Metric | Expected Improvement |
|---|---|---|
| Hash-based edge lookup | Edge Lookups | 10-100x faster |
| Better vertex removal | Vertex Removal | 2-10x faster |
| Memory pooling | Context Creation | 20-50% faster |
| Context reuse | Context Reuse | 30-70% faster |
The benchmarks use realistic graph structures.
To add benchmarks for new optimizations:

1. Add a benchmark class to `test_performance_benchmarks.cpp`
2. Update the parsing patterns in `compare_performance.py`

Example structure:
```cpp
class NewOptimizationBenchmark {
public:
    static void RunBenchmarks() {
        // Test different scenarios
        // Measure performance with PerformanceTimer
        // Output results in a consistent format
    }
};
```
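The `PerformanceTimer` referenced above is part of the framework; its real interface may differ, but a minimal equivalent built on `std::chrono` looks like this:

```cpp
#include <chrono>

// Minimal stand-in for the framework's PerformanceTimer (assumed
// interface): starts on construction, reports elapsed milliseconds.
class PerformanceTimer {
    std::chrono::steady_clock::time_point start_ =
        std::chrono::steady_clock::now();
public:
    double elapsed_ms() const {
        return std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - start_).count();
    }
};
```

`steady_clock` is the right choice here because, unlike `system_clock`, it is monotonic and unaffected by wall-clock adjustments.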
The framework is designed for CI/CD integration.
- `test_performance_benchmarks.cpp` - Main benchmark implementation
- `run_performance_tests.sh` - Test runner script
- `compare_performance.py` - Result comparison tool
- `performance_results/` - Timestamped results directory
- `system_info_*.txt` - System configuration snapshots
- `baseline_*.txt` - Benchmark results

This framework provides the foundation for quantitative performance evaluation and ensures that optimizations deliver measurable improvements.