Performance Testing Guide

This document explains how to use the performance testing framework to quantitatively evaluate optimization improvements.

Overview

The performance testing suite measures baseline performance for the key bottlenecks identified in TODO.md:

  1. Edge Lookup Performance - Measures O(n) linear search times
  2. Vertex Removal Performance - Measures O(m²) removal complexity
  3. Search Context Performance - Measures allocation/context reuse overhead
  4. Concurrent Search Performance - Measures threading scalability

Quick Start

1. Run Baseline Measurements

cd build
../scripts/run_performance_tests.sh

This runs the full benchmark suite and records baseline measurements for later comparison.

2. Implement Optimizations

Make your performance improvements to the codebase.

3. Run Performance Tests Again

../scripts/run_performance_tests.sh

4. Compare Results

# Automatic comparison with detailed analysis
../scripts/compare_performance.py baseline_old.txt baseline_new.txt

# Manual comparison
diff -u baseline_old.txt baseline_new.txt

Benchmark Categories

Edge Lookup Benchmarks

What it measures: Time to find edges from vertices using the current O(n) linear search
Scenarios tested:

Optimization target: Replace with O(1) hash-based lookup
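
To make the measured gap concrete, here is a minimal sketch of the two lookup strategies. The Edge and Vertex layouts below are hypothetical stand-ins, not libgraph's actual types:

#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical edge/vertex layout, not libgraph's actual types.
struct Edge {
    int64_t dst_id;
    double cost;
};

struct Vertex {
    std::vector<Edge> edges_to;                          // current storage: O(n) scan
    std::unordered_map<int64_t, std::size_t> edge_index; // optimized: dst_id -> position

    // Baseline behavior the benchmark measures: linear search.
    const Edge* FindEdgeLinear(int64_t dst_id) const {
        for (const Edge& e : edges_to)
            if (e.dst_id == dst_id) return &e;
        return nullptr;
    }

    // Optimization target: O(1) average-case hash lookup.
    const Edge* FindEdgeHashed(int64_t dst_id) const {
        auto it = edge_index.find(dst_id);
        return it == edge_index.end() ? nullptr : &edges_to[it->second];
    }
};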

Vertex Removal Benchmarks

What it measures: Time to remove vertices together with all of their incoming and outgoing edges
Scenarios tested:

Optimization target: Reduce removal complexity from O(m²) to O(m)
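
A sketch of where the optimized path saves work, again with hypothetical types: the baseline erases matching edges one at a time from each affected neighbor list (a find-and-shift per edge, O(m²) in the worst case), whereas one compacting pass per list keeps the total linear:

#include <algorithm>
#include <cstdint>
#include <vector>

struct Edge { int64_t dst_id; };
struct Vertex { int64_t id; std::vector<Edge> edges_to; };

// Baseline shape of the cost: each individual erase scans and shifts a
// neighbor's edge list, so removing a vertex with m incident edges can
// touch O(m^2) elements. The optimized path visits each affected list
// once and compacts it in a single erase-remove pass:
void RemoveEdgesTo(Vertex& neighbor, int64_t removed_id) {
    auto& edges = neighbor.edges_to;
    edges.erase(std::remove_if(edges.begin(), edges.end(),
                               [removed_id](const Edge& e) {
                                   return e.dst_id == removed_id;
                               }),
                edges.end());
}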

Search Context Benchmarks

What it measures: Memory allocation overhead and the benefit of reusing search contexts
Scenarios tested:

Optimization target: Memory pooling and context reuse patterns
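
The reuse pattern being measured can be sketched as follows; SearchContext here is a hypothetical stand-in for the library's per-search state:

#include <cstdint>
#include <unordered_map>

// Hypothetical per-search bookkeeping; the real context differs.
struct SearchContext {
    std::unordered_map<int64_t, double> g_cost;
    std::unordered_map<int64_t, int64_t> parent;

    // clear() typically keeps the tables' bucket capacity, so a reused
    // context performs far fewer allocations on subsequent queries.
    void Reset() {
        g_cost.clear();
        parent.clear();
    }
};

void RunQueries(int n) {
    SearchContext ctx;   // allocate once...
    for (int i = 0; i < n; ++i) {
        ctx.Reset();     // ...reuse for every query
        // run one search using ctx
    }
}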

Concurrent Search Benchmarks

What it measures: Throughput scaling as the number of search threads increases
Scenarios tested:

Optimization target: Better concurrent performance patterns
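
A minimal sketch of the threading pattern the benchmark exercises, with the actual search body elided: each thread runs its own batch of queries against the shared, read-only graph using thread-private state.

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

std::atomic<std::size_t> completed{0};

// Hypothetical worker: each thread would run its queries with its own
// SearchContext so no mutable state is shared across threads.
void SearchWorker(std::size_t queries) {
    for (std::size_t i = 0; i < queries; ++i) {
        // ... one search with a per-thread context ...
        completed.fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(SearchWorker, std::size_t{1000});
    for (auto& t : pool) t.join();
    // Throughput = completed / wall-clock time; ideally it scales with n.
}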

Interpreting Results

Key Metrics to Track
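
At minimum, track the per-category numbers the suite reports: edge lookup time, vertex removal time, context creation and reuse time, and concurrent search throughput. The relative change between two baselines matters more than any absolute value.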

Expected Improvements

Optimization            Metric            Expected Improvement
Hash-based edge lookup  Edge Lookups      10-100x faster
Better vertex removal   Vertex Removal    2-10x faster
Memory pooling          Context Creation  20-50% faster
Context reuse           Context Reuse     30-70% faster

Performance Testing Best Practices

1. Consistent Environment
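
Use the same machine, the same compiler and build type (e.g. Release), and an otherwise idle system for every run, so that before/after numbers are actually comparable.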

2. Meaningful Workloads

The benchmarks use realistic graph structures rather than degenerate toy inputs.

3. Statistical Significance
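
Single runs are noisy: repeat each benchmark and compare means against the run-to-run spread before trusting a speedup. A generic sketch of that pattern (not part of the framework itself):

#include <chrono>
#include <cmath>
#include <cstdio>
#include <vector>

// Run `fn` several times, then report mean and standard deviation so a
// speedup can be judged against run-to-run noise.
template <typename Fn>
void TimeRepeated(const char* name, Fn fn, int iters = 10) {
    std::vector<double> samples_ms;
    for (int i = 0; i < iters; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        fn();
        auto t1 = std::chrono::steady_clock::now();
        samples_ms.push_back(
            std::chrono::duration<double, std::milli>(t1 - t0).count());
    }
    double mean = 0.0;
    for (double v : samples_ms) mean += v;
    mean /= samples_ms.size();
    double var = 0.0;
    for (double v : samples_ms) var += (v - mean) * (v - mean);
    var /= samples_ms.size();
    std::printf("%s: %.3f ms +/- %.3f ms over %d runs\n",
                name, mean, std::sqrt(var), iters);
}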

Adding New Benchmarks

To add benchmarks for new optimizations:

  1. Add test category to test_performance_benchmarks.cpp
  2. Update compare_performance.py parsing patterns
  3. Document expected improvements

Example structure:

class NewOptimizationBenchmark {
public:
    static void RunBenchmarks() {
        // Test different scenarios
        // Measure performance with PerformanceTimer
        // Output in consistent format
    }
};
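
Invoke the new class's RunBenchmarks() from the benchmark executable, and keep its output lines in the same label/value format the existing benchmarks emit; otherwise compare_performance.py will not pick the results up.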

Automated Performance Tracking

The framework is designed for CI/CD integration: run the benchmark script as a pipeline step and compare the resulting baseline file against a stored reference with compare_performance.py.

Files Overview
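
  test_performance_benchmarks.cpp - the benchmark implementations
  scripts/run_performance_tests.sh - runs the full suite and records a baseline
  scripts/compare_performance.py - compares two baseline result files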

This framework provides the foundation for quantitative performance evaluation and ensures optimizations deliver measurable improvements.