🔍 Code Extractor

class PerformanceLogger

Maturity: 56

A context manager class for measuring and logging execution time and custom performance metrics of code operations.

File:
/tf/active/vicechatdev/invoice_extraction/utils/logging_utils.py
Lines:
211 - 267
Complexity:
simple

Purpose

PerformanceLogger provides a convenient way to track execution time and custom metrics for code blocks using Python's context manager protocol. It automatically measures elapsed time when entering and exiting a context, logs performance data with structured metrics, and allows adding custom metrics during execution. This is useful for monitoring application performance, debugging slow operations, and collecting operational metrics.

Source Code

class PerformanceLogger:
    """
    Utility for logging performance metrics and execution time.
    
    Usage:
        with PerformanceLogger("extract_invoice") as perf:
            # Code to measure
            result = extract_invoice(data)
            perf.add_metric("invoice_pages", len(data["pages"]))
            perf.add_metric("extraction_status", "success")
    """
    
    def __init__(self, operation_name: str, logger=None):
        """
        Initialize performance logger.
        
        Args:
            operation_name: Name of the operation being measured
            logger: Logger instance (uses root logger if None)
        """
        self.operation_name = operation_name
        self.logger = logger or logging.getLogger()
        self.start_time = None
        self.metrics = {}
    
    def __enter__(self):
        """Start timing when entering context."""
        self.start_time = datetime.now()
        return self
    
    def __exit__(self, exc_type, exc_val, exc_tb):
        """Log metrics when exiting context."""
        execution_time = datetime.now() - self.start_time
        execution_ms = int(execution_time.total_seconds() * 1000)
        
        # Add execution time to metrics
        self.metrics["execution_time_ms"] = execution_ms
        
        # Log performance data
        log_message = f"Performance: {self.operation_name} completed in {execution_ms}ms"
        
        # Include metrics in log record
        extra = {"extra_fields": self.metrics}
        self.logger.info(log_message, extra=extra)
        
        # Don't suppress exceptions
        return False
    
    def add_metric(self, name: str, value: Any) -> None:
        """
        Add a custom metric to be logged.
        
        Args:
            name: Metric name
            value: Metric value
        """
        self.metrics[name] = value

Parameters

Name Type Default Kind
operation_name str - positional
logger logging.Logger | None None keyword

Parameter Details

operation_name: A string identifier for the operation being measured. This name appears in log messages to identify which operation's performance is being tracked. Should be descriptive and unique enough to distinguish different operations (e.g., 'extract_invoice', 'process_payment', 'database_query').

logger: Optional logging.Logger instance to use for outputting performance data. If None, uses the root logger obtained via logging.getLogger(). Allows integration with existing logging configurations and custom loggers.

Return Value

Instantiation returns a PerformanceLogger instance that acts as a context manager. When used with the 'with' statement, __enter__ returns self (the PerformanceLogger instance) allowing access to the add_metric method. __exit__ returns False, meaning exceptions are not suppressed. The add_metric method returns None.

Class Interface

Methods

__init__(self, operation_name: str, logger=None)

Purpose: Initialize the PerformanceLogger with an operation name and optional logger instance

Parameters:

  • operation_name: String identifier for the operation being measured
  • logger: Optional logging.Logger instance; uses root logger if None

Returns: None (constructor)

__enter__(self)

Purpose: Context manager entry point that starts timing the operation

Returns: Returns self (the PerformanceLogger instance) to allow method calls within the context

__exit__(self, exc_type, exc_val, exc_tb)

Purpose: Context manager exit point that calculates execution time, logs all metrics, and handles cleanup

Parameters:

  • exc_type: Exception type if an exception occurred, None otherwise
  • exc_val: Exception value if an exception occurred, None otherwise
  • exc_tb: Exception traceback if an exception occurred, None otherwise

Returns: Returns False to indicate exceptions should not be suppressed

add_metric(self, name: str, value: Any) -> None

Purpose: Add a custom metric to be included in the performance log when the context exits

Parameters:

  • name: String name/key for the metric
  • value: The metric value of any type (should be JSON-serializable for structured logging)

Returns: None

Attributes

  • operation_name (str, instance): The name of the operation being measured, used in log messages
  • logger (logging.Logger, instance): Logger instance used to output performance data
  • start_time (datetime | None, instance): Timestamp when the context was entered; None before __enter__ is called
  • metrics (Dict[str, Any], instance): Dictionary storing custom metrics added via add_metric(), plus execution_time_ms added automatically

Dependencies

  • logging
  • datetime

Required Imports

import logging
from datetime import datetime
from typing import Any

Usage Example

import logging
from datetime import datetime
from typing import Any

logging.basicConfig(level=logging.INFO)

# Basic usage with context manager
with PerformanceLogger("data_processing") as perf:
    # Simulate some work
    data = [i for i in range(1000)]
    processed = sum(data)
    
    # Add custom metrics
    perf.add_metric("items_processed", len(data))
    perf.add_metric("result", processed)
    perf.add_metric("status", "success")

# Using with custom logger
custom_logger = logging.getLogger("my_app")
with PerformanceLogger("database_query", logger=custom_logger) as perf:
    # Execute database query
    results = []
    perf.add_metric("rows_returned", len(results))
    perf.add_metric("query_type", "SELECT")

Best Practices

  • Always use PerformanceLogger as a context manager with the 'with' statement to ensure proper timing and logging
  • Choose descriptive operation_name values that clearly identify what is being measured
  • Add metrics using add_metric() only within the context (between __enter__ and __exit__)
  • Metrics are logged only when exiting the context, so all add_metric calls should complete before the with block ends
  • The logger should be configured before using PerformanceLogger, otherwise logs may not appear
  • Exceptions are not suppressed - if an exception occurs in the with block, it will propagate after logging
  • Metric values can be any type (Any), but should be JSON-serializable for structured logging systems
  • The execution_time_ms metric is automatically added and should not be manually set
  • For consistent logging format, use the same logger instance across related operations
  • Consider using different logger instances or operation names to separate concerns in large applications

Similar Components

AI-powered semantic similarity - components with related functionality:

  • class PerformanceLogger_v1 88.7% similar

    A context manager class that measures and logs the execution time of code blocks, with support for custom metrics and automatic error handling.

    From: /tf/active/vicechatdev/contract_validity_analyzer/utils/logging_utils.py
  • function log_performance 74.0% similar

    A context manager decorator that logs the performance metrics of an operation by wrapping it with a PerformanceLogger instance.

    From: /tf/active/vicechatdev/contract_validity_analyzer/utils/logging_utils.py
  • class ProgressLogger 59.4% similar

    A progress tracking logger that monitors and reports the progress of long-running operations with timing statistics, error counts, and estimated completion times.

    From: /tf/active/vicechatdev/contract_validity_analyzer/utils/logging_utils.py
  • class TestLoggingUtils 54.4% similar

    Unit test class for testing logging utilities including InvoiceExtractionLogger, PerformanceLogger, and get_logger function.

    From: /tf/active/vicechatdev/invoice_extraction/tests/test_utils.py
  • class InvoiceExtractionLogger 52.4% similar

    A comprehensive logging configuration class for invoice extraction systems that provides console and file logging with optional JSON formatting, request tracking via correlation IDs, and configurable log levels.

    From: /tf/active/vicechatdev/invoice_extraction/utils/logging_utils.py