class ProgressLogger
A progress tracking logger that monitors and reports the progress of long-running operations with timing statistics, error counts, and estimated completion times.
File: /tf/active/vicechatdev/contract_validity_analyzer/utils/logging_utils.py
Lines: 147 - 226
Complexity: moderate
Purpose
ProgressLogger provides comprehensive progress tracking for batch operations or long-running tasks. It automatically logs progress at configurable intervals, calculates processing rates, estimates time to completion (ETA), tracks errors, and provides summary statistics upon completion. It supports both manual usage and context manager patterns for automatic cleanup.
Source Code
class ProgressLogger:
    """Logger for tracking progress of long-running operations."""

    def __init__(self, total_items: int, operation_name: str = "Processing",
                 logger: Optional[logging.Logger] = None, log_interval: int = 10):
        """
        Initialize progress logger.

        Args:
            total_items: Total number of items to process
            operation_name: Name of the operation
            logger: Logger instance (optional)
            log_interval: Log progress every N items
        """
        self.total_items = total_items
        self.operation_name = operation_name
        self.logger = logger or get_logger(__name__)
        self.log_interval = log_interval
        self.processed_items = 0
        self.start_time = time.time()
        self.last_log_time = self.start_time
        self.errors = 0
        self.logger.info(f"Starting {operation_name}: {total_items} items to process")

    def update(self, increment: int = 1, error: bool = False):
        """
        Update progress.

        Args:
            increment: Number of items processed
            error: Whether an error occurred
        """
        self.processed_items += increment
        if error:
            self.errors += 1

        # Log progress at intervals
        if self.processed_items % self.log_interval == 0 or self.processed_items == self.total_items:
            self._log_progress()

    def _log_progress(self):
        """Log current progress."""
        current_time = time.time()
        elapsed = current_time - self.start_time

        if self.processed_items > 0:
            rate = self.processed_items / elapsed
            eta = (self.total_items - self.processed_items) / rate if rate > 0 else 0
            percentage = (self.processed_items / self.total_items) * 100

            self.logger.info(
                f"{self.operation_name} progress: {self.processed_items}/{self.total_items} "
                f"({percentage:.1f}%) - {rate:.1f} items/sec - ETA: {eta:.1f}s"
            )

            if self.errors > 0:
                self.logger.warning(f"Errors encountered: {self.errors}")

    def finish(self):
        """Log completion."""
        elapsed = time.time() - self.start_time
        rate = self.processed_items / elapsed if elapsed > 0 else 0

        self.logger.info(
            f"Completed {self.operation_name}: {self.processed_items} items processed "
            f"in {elapsed:.1f}s ({rate:.1f} items/sec)"
        )

        if self.errors > 0:
            self.logger.warning(f"Total errors: {self.errors}")

    def __enter__(self):
        """Context manager entry."""
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        """Context manager exit."""
        self.finish()
Parameters
| Name | Type | Default | Kind |
|---|---|---|---|
| bases | - | - | - |
Parameter Details
total_items: The total number of items that will be processed in the operation. This is used to calculate percentage completion and ETA. Must be a positive integer.
operation_name: A descriptive name for the operation being tracked (e.g., 'File Processing', 'Data Import'). Defaults to 'Processing'. This name appears in all log messages for identification.
logger: An optional logging.Logger instance to use for output. If not provided, a default logger is obtained via get_logger(__name__). Allows integration with existing logging infrastructure.
log_interval: The frequency of progress logging, specified as the number of items processed between log messages. Defaults to 10. Progress is always logged at completion regardless of this interval.
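Since log_interval is an absolute item count, a fixed value that suits a small batch can flood the log on a large one. A small helper (hypothetical, not part of the module) can derive an interval that logs roughly every N percent of progress instead:

```python
def percent_interval(total_items: int, percent: float = 1.0) -> int:
    """Return a log_interval that logs roughly every `percent` of progress.

    Hypothetical helper for choosing ProgressLogger's log_interval;
    max(1, ...) keeps tiny batches logging on every item.
    """
    return max(1, int(total_items * percent / 100))

interval = percent_interval(2500)       # 25: one message per 1% of 2500 items
small = percent_interval(7)             # 1: small totals log every item
coarse = percent_interval(200, 5.0)     # 10: one message per 5% of 200 items
```

The resulting value can be passed directly as the log_interval constructor argument.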
Return Value
Instantiation returns a ProgressLogger instance that tracks progress state. The update(), finish(), and __exit__() methods all return None; __enter__() returns self so the instance can be used in a with statement.
Class Interface
Methods
__init__(self, total_items: int, operation_name: str = 'Processing', logger: Optional[logging.Logger] = None, log_interval: int = 10)
Purpose: Initialize the progress logger with operation parameters and start timing
Parameters:
total_items: Total number of items to process
operation_name: Name of the operation being tracked
logger: Optional logger instance; uses get_logger(__name__) if not provided
log_interval: Number of items between progress log messages
Returns: None (constructor)
update(self, increment: int = 1, error: bool = False)
Purpose: Update the progress counter and optionally log progress if interval is reached
Parameters:
increment: Number of items processed in this update (default 1)
error: Whether an error occurred during processing of these items
Returns: None
_log_progress(self)
Purpose: Internal method to calculate and log current progress statistics including percentage, rate, and ETA
Returns: None
finish(self)
Purpose: Log completion summary with total items processed, elapsed time, processing rate, and error count
Returns: None
__enter__(self)
Purpose: Context manager entry point that returns self for use in with statements
Returns: self (ProgressLogger instance)
__exit__(self, exc_type, exc_val, exc_tb)
Purpose: Context manager exit point that automatically calls finish() to log completion
Parameters:
exc_type: Exception type if an exception occurred
exc_val: Exception value if an exception occurred
exc_tb: Exception traceback if an exception occurred
Returns: None (allows exceptions to propagate)
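Because update() only logs when processed_items is an exact multiple of log_interval (or equals total_items), calling it with increment greater than 1 can step over those multiples entirely. The check can be simulated in isolation to see which counts actually trigger a message:

```python
def logged_counts(total: int, increment: int, log_interval: int) -> list:
    """Simulate ProgressLogger.update()'s interval check and return the
    processed-item counts that would trigger a progress log message."""
    processed, hits = 0, []
    while processed < total:
        processed += increment
        # Same condition update() uses before calling _log_progress()
        if processed % log_interval == 0 or processed == total:
            hits.append(processed)
    return hits

logged_counts(100, 1, 10)  # [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
logged_counts(99, 3, 10)   # [30, 60, 90, 99]: most 10-item marks are skipped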
Attributes
| Name | Type | Description | Scope |
|---|---|---|---|
| total_items | int | Total number of items to be processed | instance |
| operation_name | str | Name of the operation being tracked | instance |
| logger | logging.Logger | Logger instance used for outputting progress messages | instance |
| log_interval | int | Number of items between progress log messages | instance |
| processed_items | int | Counter tracking the number of items processed so far | instance |
| start_time | float | Timestamp (from time.time()) when the logger was initialized | instance |
| last_log_time | float | Timestamp of the last progress log message (initialized to start_time) | instance |
| errors | int | Counter tracking the number of errors encountered during processing | instance |
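The rate, ETA, and percentage figures that _log_progress() reports are derived directly from these attributes. The arithmetic can be mirrored with fixed numbers (a deterministic stand-in for the live time.time() delta) to see how the three values relate:

```python
# Mirror of the arithmetic _log_progress() applies to the instance
# attributes, using a fixed elapsed time instead of time.time()
processed_items, total_items = 20, 100
elapsed = 4.0                                       # seconds since start_time

rate = processed_items / elapsed                    # 5.0 items/sec
eta = (total_items - processed_items) / rate        # 16.0 s remaining
percentage = (processed_items / total_items) * 100  # 20.0 %
```

The ETA is simply the remaining item count divided by the average rate so far, which is why it stabilizes as more items are processed.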
Dependencies
logging, time
Required Imports
import logging
import time
from typing import Optional
Usage Example
import logging
import time
from typing import Optional

# Assuming get_logger is defined
def get_logger(name):
    return logging.getLogger(name)

# Basic usage
logger = ProgressLogger(total_items=100, operation_name="Data Processing", log_interval=10)
for i in range(100):
    try:
        # Process item
        time.sleep(0.01)
        logger.update(increment=1, error=False)
    except Exception:
        logger.update(increment=1, error=True)
logger.finish()

# Context manager usage (automatically calls finish())
with ProgressLogger(total_items=50, operation_name="File Import") as progress:
    for i in range(50):
        # Process item
        progress.update()

# Custom logger and interval
custom_logger = logging.getLogger('my_app')
progress = ProgressLogger(
    total_items=1000,
    operation_name="Batch Processing",
    logger=custom_logger,
    log_interval=50
)
for i in range(1000):
    progress.update()
progress.finish()
Best Practices
- Always call finish() when done processing, or use the context manager pattern to ensure automatic cleanup
- Choose an appropriate log_interval based on total_items to avoid excessive logging (e.g., log_interval=max(1, total_items // 100) for roughly percentage-based logging)
- Call update() after each item is processed, not before, to maintain accurate counts; note that with increment > 1 the modulo-based interval check can skip log points, so batched updates may log less often than expected
- Set error=True in update() when an error occurs to track error rates separately from progress
- Use the context manager pattern (with statement) for automatic finish() call and cleaner code
- Provide a descriptive operation_name to distinguish between multiple concurrent progress loggers in logs
- The logger tracks elapsed time from instantiation, so create the instance immediately before starting the operation
- Progress is automatically logged at completion even if log_interval hasn't been reached
- The class assumes sequential processing; for parallel processing, consider thread-safety modifications
- ETA calculations become more accurate as more items are processed
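As noted above, the class assumes sequential processing: update() mutates processed_items and errors without any synchronization. A minimal sketch of the locking those mutations would need under parallel workers, using threading.Lock (ThreadSafeCounter is a hypothetical illustration, not part of the documented module):

```python
import threading

class ThreadSafeCounter:
    """Hypothetical sketch of the locking ProgressLogger.update() would
    need if multiple worker threads reported progress concurrently."""

    def __init__(self):
        self._lock = threading.Lock()
        self.processed_items = 0
        self.errors = 0

    def update(self, increment: int = 1, error: bool = False):
        # Serialize counter mutation so concurrent workers never race
        with self._lock:
            self.processed_items += increment
            if error:
                self.errors += 1

counter = ThreadSafeCounter()

def worker():
    for _ in range(1000):
        counter.update()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter.processed_items is now 4000 (4 workers x 1000 updates)
```

A subclass of ProgressLogger could apply the same pattern by wrapping the body of update() in a lock; the logging calls themselves are already thread-safe in the standard logging module.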
Similar Components
AI-powered semantic similarity - components with related functionality:
- class PerformanceLogger (65.0% similar)
- class ProgressIndicator (54.9% similar)
- function log_performance (52.6% similar)
- function watch_logs (50.2% similar)
- function update_task_progress (48.0% similar)