๐Ÿ” Code Extractor

function run_tests

Maturity: 50

Asynchronous test suite function that creates test images with various text prompts, processes them through an E-Ink LLM processor, and reports usage statistics and results.

File: /tf/active/vicechatdev/e-ink-llm/test.py
Lines: 46-111
Complexity: moderate

Purpose

This function serves as a comprehensive testing framework for the E-Ink LLM Assistant application. It validates the entire pipeline by creating synthetic test images containing questions, instructions, and math problems, processing them through the EInkLLMProcessor, and generating detailed reports on success rates, token usage, and costs. It's designed for development testing, CI/CD validation, and demonstrating the application's capabilities.

Source Code

async def run_tests():
    """Run comprehensive tests of the application"""
    print("🧪 E-Ink LLM Assistant - Test Suite")
    print("=" * 50)

    # Check if API key is available
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        print("❌ No OpenAI API key found. Set OPENAI_API_KEY environment variable.")
        return

    # Create test folder
    test_folder = Path("test_files")
    test_folder.mkdir(exist_ok=True)

    # Create test images
    test_cases = [
        {
            "filename": "test_question.png",
            "text": "What is photosynthesis?\n\nHow do plants convert sunlight into energy?\n\nPlease explain the process in detail."
        },
        {
            "filename": "test_instruction.png",
            "text": "Instructions:\n\n1. Explain how to bake bread\n2. Include all ingredients\n3. Provide step-by-step directions\n4. Add tips for beginners"
        },
        {
            "filename": "test_math.png",
            "text": "Math Problem:\n\nSolve for x:\n\n2x + 5 = 17\n\nShow all steps"
        }
    ]

    print(f"📁 Creating test files in {test_folder}...")
    for test_case in test_cases:
        file_path = test_folder / test_case["filename"]
        create_test_image(test_case["text"], str(file_path))

    # Initialize processor
    print(f"\n🤖 Initializing E-Ink LLM Processor...")
    processor = EInkLLMProcessor(api_key=api_key, watch_folder=str(test_folder))

    # Process each test file
    print(f"\n🚀 Processing test files...")
    for test_case in test_cases:
        file_path = test_folder / test_case["filename"]
        print(f"\n📄 Processing: {file_path.name}")

        try:
            result = await processor.process_file(file_path)
            if result:
                print(f"✅ Success: {result.name}")
            else:
                print(f"❌ Failed to process {file_path.name}")
        except Exception as e:
            print(f"❌ Error processing {file_path.name}: {e}")

    # Print usage summary
    usage_stats = processor.llm_handler.get_usage_summary()
    print(f"\n📊 TEST SUMMARY:")
    print(f"   • Files processed: {len(test_cases)}")
    print(f"   • Preprocessing calls: {usage_stats['preprocessing_calls']}")
    print(f"   • Main processing calls: {usage_stats['main_processing_calls']}")
    print(f"   • Total tokens used: {usage_stats['total_tokens_used']:,}")
    print(f"   • Estimated cost: ${usage_stats['total_cost_estimate']:.3f}")

    print(f"\n📁 Check the {test_folder} folder for generated response PDFs!")
    print(f"🧪 Test complete!")
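
The summary block at the end reads four keys from processor.llm_handler.get_usage_summary(). The shape sketched below is inferred from those keys alone; the authoritative definition lives in the local processor module, and the values here are placeholders.

# Assumed shape of get_usage_summary(), inferred from the keys read above;
# the values are illustrative, not real measurements.
usage_stats = {
    "preprocessing_calls": 3,      # image/preprocessing API calls
    "main_processing_calls": 3,    # main LLM completion calls
    "total_tokens_used": 12345,    # prompt + completion tokens across all calls
    "total_cost_estimate": 0.042,  # estimated cost in USD
}
print(f"Total tokens used: {usage_stats['total_tokens_used']:,}")
print(f"Estimated cost: ${usage_stats['total_cost_estimate']:.3f}")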

Return Value

Returns None. The function outputs results to stdout, including test progress, success/failure status for each test case, and a comprehensive summary with processing statistics (files processed, API calls made, tokens used, and estimated costs). Generated PDF responses are saved to the test_files directory.

Dependencies

  • asyncio
  • os
  • sys
  • pathlib
  • PIL
  • base64
  • io

Required Imports

import asyncio
import os
import sys
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont
import base64
import io
from processor import EInkLLMProcessor

Conditional/Optional Imports

These imports are only needed under specific conditions:

from processor import EInkLLMProcessor

Condition: requires the local 'processor' module, which provides the EInkLLMProcessor class.

Required (conditional on that module being importable).
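
If you want a clearer failure message when that module is missing, the import can be wrapped in a guard. This is a sketch, not part of test.py, which imports it unconditionally:

# Optional guard around the local import.
try:
    from processor import EInkLLMProcessor
except ImportError as exc:
    raise SystemExit(f"The local 'processor' module (EInkLLMProcessor) is required: {exc}")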

Usage Example

import asyncio
import os
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont
from processor import EInkLLMProcessor

# Set API key (run_tests exits early without it)
os.environ['OPENAI_API_KEY'] = 'your-api-key-here'

# Define the create_test_image helper that run_tests calls
def create_test_image(text, filepath):
    img = Image.new('RGB', (800, 600), color='white')
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), text, fill='black')
    img.save(filepath)

# run_tests itself must also be in scope: either paste the source
# shown above into this module or import it from test.py.

# Run the test suite
async def main():
    await run_tests()

if __name__ == '__main__':
    asyncio.run(main())
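
If an event loop is already running (for example in a Jupyter/IPython cell), asyncio.run() raises RuntimeError; in that case await the coroutine directly:

# Inside an already-running event loop (e.g. a notebook cell),
# asyncio.run() raises RuntimeError; await the coroutine instead:
await run_tests()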

Best Practices

  • Always set the OPENAI_API_KEY environment variable before calling this function to avoid early termination
  • Ensure sufficient disk space for creating test images and generated PDF responses in the test_files directory
  • Run this function in an async context using asyncio.run() or within an existing event loop
  • Monitor the console output for detailed progress and error messages during test execution
  • Review generated PDFs in the test_files folder after completion to validate output quality
  • Be aware that this function makes real API calls to OpenAI and will incur costs based on token usage
  • The function requires the create_test_image() helper function to be defined in the same scope
  • Consider implementing cleanup logic to remove test files after execution if running in CI/CD environments (see the sketch after this list)
  • The test cases are hardcoded; modify the test_cases list to add custom test scenarios
  • Ensure the EInkLLMProcessor class and its dependencies are properly configured before running tests
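
For the CI/CD cleanup point above, a minimal sketch: run_tests is assumed to be importable from test.py, and nothing in the function itself removes the test_files folder or its generated PDFs.

import asyncio
import shutil
from pathlib import Path

from test import run_tests  # assumes test.py is on the import path

async def run_tests_with_cleanup():
    try:
        await run_tests()
    finally:
        # Archive the generated PDFs first if the CI job needs them as artifacts.
        shutil.rmtree(Path("test_files"), ignore_errors=True)

if __name__ == "__main__":
    asyncio.run(run_tests_with_cleanup())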

Similar Components

Components with related functionality, found via AI-powered semantic similarity:

  • function main_v75 (78.7% similar)

    Asynchronous test runner function that executes a suite of tests for the E-Ink LLM Assistant application, including tests for compact formatting, session management, and improvement comparisons.

    From: /tf/active/vicechatdev/e-ink-llm/test_improvements.py
  • function run_demo (72.2% similar)

    Orchestrates a complete demonstration of an E-Ink LLM Assistant by creating three sample handwritten content files (question, instruction diagram, math problem) and processing each through an AI pipeline.

    From: /tf/active/vicechatdev/e-ink-llm/demo.py
  • function main_v59 (65.9% similar)

    Orchestrates a comprehensive demonstration of E-Ink LLM hybrid mode capabilities, running three sequential demos showcasing graphics generation, placeholder parsing, and complete hybrid response processing.

    From: /tf/active/vicechatdev/e-ink-llm/demo_hybrid_mode.py
  • function main_v54 (61.4% similar)

    Test orchestration function that executes a comprehensive test suite for DocChat's multi-LLM model selection feature and reports results.

    From: /tf/active/vicechatdev/docchat/test_model_selection.py
  • class TestLLMClient (61.1% similar)

    Unit test class for testing the LLMClient class, which provides comprehensive test coverage for initialization, text generation, structured data extraction, and error handling across multiple LLM providers (OpenAI, Anthropic, Azure, local).

    From: /tf/active/vicechatdev/invoice_extraction/tests/test_utils.py
โ† Back to Browse