🔍 Code Extractor

function test_config

Maturity: 44

A test function that validates the presence and correctness of all required configuration settings for a multi-model RAG (Retrieval-Augmented Generation) system.

File: /tf/active/vicechatdev/docchat/test_model_selection.py
Lines: 9 - 47
Complexity: simple

Purpose

This function performs comprehensive validation of the application's configuration module, ensuring that all necessary settings are present and correctly configured. It checks for the existence of four specific LLM model entries (GPT-4o, GPT-4o-mini, Claude Sonnet, and Gemini), validates that each carries the required 'name', 'provider', and 'description' metadata, verifies that API keys are defined for OpenAI, Gemini, and Anthropic, confirms the ChromaDB host and port settings, and checks that the documents directory path contains 'qa_docs'. The function is designed to run as part of a test suite so that configuration errors are caught early in development or deployment.
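
Each model entry is expected to carry at least the 'name', 'provider', and 'description' keys asserted in the source below. A minimal entry satisfying those metadata checks might look like this (the string values are placeholders; only the keys and the four model identifiers are fixed by the assertions):

AVAILABLE_MODELS = {
    'gpt-4o': {
        'name': 'GPT-4o',                         # display name
        'provider': 'openai',                     # 'openai', 'anthropic', or 'google'
        'description': 'Flagship OpenAI model',   # placeholder text
    },
    # ...plus entries for 'gpt-4o-mini', 'claude-sonnet-4-5-20250929',
    # and 'gemini-2.0-flash-exp'
}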

Source Code

def test_config():
    """Test configuration has all required settings"""
    print("=" * 60)
    print("TEST 1: Configuration")
    print("=" * 60)
    
    import config
    
    # Check AVAILABLE_MODELS
    assert hasattr(config, 'AVAILABLE_MODELS'), "Missing AVAILABLE_MODELS"
    assert len(config.AVAILABLE_MODELS) == 4, f"Expected 4 models, got {len(config.AVAILABLE_MODELS)}"
    
    expected_models = ['gpt-4o', 'gpt-4o-mini', 'claude-sonnet-4-5-20250929', 'gemini-2.0-flash-exp']
    for model in expected_models:
        assert model in config.AVAILABLE_MODELS, f"Missing model: {model}"
        assert 'name' in config.AVAILABLE_MODELS[model], f"Model {model} missing 'name'"
        assert 'provider' in config.AVAILABLE_MODELS[model], f"Model {model} missing 'provider'"
        assert 'description' in config.AVAILABLE_MODELS[model], f"Model {model} missing 'description'"
    
    # Check API keys
    assert hasattr(config, 'OPENAI_API_KEY'), "Missing OPENAI_API_KEY"
    assert hasattr(config, 'GEMINI_API_KEY'), "Missing GEMINI_API_KEY"
    assert hasattr(config, 'ANTHROPIC_API_KEY'), "Missing ANTHROPIC_API_KEY"
    
    # Check ChromaDB settings
    assert hasattr(config, 'CHROMA_HOST'), "Missing CHROMA_HOST"
    assert hasattr(config, 'CHROMA_PORT'), "Missing CHROMA_PORT"
    assert config.CHROMA_HOST == 'oneco_chroma', f"Expected CHROMA_HOST='oneco_chroma', got '{config.CHROMA_HOST}'"
    assert config.CHROMA_PORT == 8000, f"Expected CHROMA_PORT=8000, got {config.CHROMA_PORT}"
    
    # Check documents directory
    assert hasattr(config, 'DOCUMENTS_DIR'), "Missing DOCUMENTS_DIR"
    assert 'qa_docs' in str(config.DOCUMENTS_DIR), f"DOCUMENTS_DIR should contain 'qa_docs', got {config.DOCUMENTS_DIR}"
    
    print("✓ All configuration checks passed")
    print(f"  - 4 models configured: {list(config.AVAILABLE_MODELS.keys())}")
    print(f"  - ChromaDB: {config.CHROMA_HOST}:{config.CHROMA_PORT}")
    print(f"  - Documents: {config.DOCUMENTS_DIR}")
    print()

Return Value

This function does not return any value (implicitly returns None). It performs assertions and prints status messages to stdout. If any assertion fails, it raises an AssertionError with a descriptive message. On success, it prints a summary of validated configuration settings.
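
A caller that wants a failed check to surface as a nonzero exit status rather than a raw traceback might wrap the call like this (a sketch; the module name follows from the file path listed above):

import sys
from test_model_selection import test_config

try:
    test_config()
except AssertionError as exc:
    # The assertion message names the missing or misconfigured setting
    print(f"Configuration check failed: {exc}")
    sys.exit(1)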

Dependencies

  • config

Required Imports

import config
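
Note that import config resolves against sys.path (typically the current working directory), so the test must be launched from the directory containing config.py. If it is launched from elsewhere, the module's directory can be added first, for example:

import sys
# Assumes the docchat directory listed under "File" above
sys.path.insert(0, '/tf/active/vicechatdev/docchat')
import config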

Usage Example

# Ensure config.py exists with all required settings
# Example config.py structure:
# from pathlib import Path
# AVAILABLE_MODELS = {
#     'gpt-4o': {'name': 'GPT-4o', 'provider': 'openai', 'description': '...'},
#     'gpt-4o-mini': {'name': 'GPT-4o Mini', 'provider': 'openai', 'description': '...'},
#     'claude-sonnet-4-5-20250929': {'name': 'Claude Sonnet', 'provider': 'anthropic', 'description': '...'},
#     'gemini-2.0-flash-exp': {'name': 'Gemini 2.0', 'provider': 'google', 'description': '...'}
# }
# OPENAI_API_KEY = 'sk-...'
# GEMINI_API_KEY = 'AI...'
# ANTHROPIC_API_KEY = 'sk-ant-...'
# CHROMA_HOST = 'oneco_chroma'
# CHROMA_PORT = 8000
# DOCUMENTS_DIR = Path('/path/to/qa_docs')

from test_model_selection import test_config

# Run the test
test_config()

# Expected output on success:
# ============================================================
# TEST 1: Configuration
# ============================================================
# ✓ All configuration checks passed
#   - 4 models configured: ['gpt-4o', 'gpt-4o-mini', 'claude-sonnet-4-5-20250929', 'gemini-2.0-flash-exp']
#   - ChromaDB: oneco_chroma:8000
#   - Documents: /path/to/qa_docs

Best Practices

  • Run this test function before starting the application to ensure all configuration is properly set up
  • Include it in an automated test suite that runs in your CI/CD pipeline (see the pytest sketch after this list)
  • If any assertion fails, the function raises an AssertionError; handle this appropriately in your test runner
  • The function prints detailed output to stdout; make sure your test environment captures this output for debugging
  • Update the expected_models list if you add or remove models from your configuration
  • Rerun this test after any configuration change or environment update
  • The test asserts hardcoded expected values (such as CHROMA_HOST='oneco_chroma' and CHROMA_PORT=8000); modify these if your deployment differs
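
Because the function follows the test_* naming convention, one way to wire it into a CI pipeline is to let pytest collect and run it (a sketch, assuming pytest is installed; -s disables output capture so the printed summary stays visible in the CI log):

import sys
import pytest

# Run only this configuration check and propagate pytest's exit code,
# so a failed assertion fails the pipeline step.
exit_code = pytest.main(['-s', 'test_model_selection.py::test_config'])
sys.exit(exit_code)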

Similar Components

Components with related functionality, ranked by AI-powered semantic similarity:

  • function check_configuration (77.8% similar)

    A comprehensive configuration verification function that checks and displays the status of all DocChat system settings, including API keys, models, ChromaDB connection, directories, and LLM initialization.

    From: /tf/active/vicechatdev/docchat/verify_setup.py
  • function test_rag_engine (71.5% similar)

    A test function that validates the RAG engine's ability to correctly instantiate different LLM models (OpenAI, Anthropic, Gemini) based on configuration settings.

    From: /tf/active/vicechatdev/docchat/test_model_selection.py
  • function test_configuration_v1 (70.0% similar)

    Validates that all required configuration variables (Azure AD credentials, OpenAI API key, and domain) are properly set and not using placeholder values.

    From: /tf/active/vicechatdev/find_email/test_vendor_extractor.py
  • function test_config_loading (69.5% similar)

    A test function that validates configuration loading by instantiating a Config object and verifying access to key configuration parameters across FileCloud, LLM, and output settings.

    From: /tf/active/vicechatdev/contract_validity_analyzer/test_implementation.py
  • function test_configuration (69.3% similar)

    A test function that validates configuration settings by importing and calling the Config.validate_config() method, printing the result and returning a boolean status.

    From: /tf/active/vicechatdev/SPFCsync/test_connections.py