🔍 Code Extractor

function test_rag_engine

Maturity: 44

A test function that validates the RAG engine's ability to correctly instantiate different LLM models (OpenAI, Anthropic, Gemini) based on configuration settings.

File: /tf/active/vicechatdev/docchat/test_model_selection.py
Lines: 50–82
Complexity: moderate

Purpose

This function is an automated test that verifies the RAG engine's get_llm_instance() correctly creates and returns the appropriate LLM client for each configured model provider. It iterates through every model defined in config.AVAILABLE_MODELS, instantiates each one, and asserts that the returned object's class matches the expected type for its provider. This check is critical for ensuring that model switching works correctly in a multi-provider RAG system.
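
The implementation of get_llm_instance() is not shown on this page; the sketch below illustrates the provider-dispatch pattern the test implies. The function body, parameter names, and temperature default are assumptions inferred from the test and from config.AVAILABLE_MODELS, not the actual rag_engine code.

# Hypothetical sketch of a provider-dispatch factory (not the actual
# rag_engine implementation). Assumes config.AVAILABLE_MODELS maps each
# model name to a dict containing at least a 'provider' key.
import config

def get_llm_instance_sketch(model_name, temperature=0.0):
    provider = config.AVAILABLE_MODELS[model_name]['provider']
    if provider == 'openai':
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model=model_name, temperature=temperature)
    if provider == 'anthropic':
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(model=model_name, temperature=temperature)
    if provider == 'gemini':
        from langchain_google_genai import ChatGoogleGenerativeAI
        return ChatGoogleGenerativeAI(model=model_name, temperature=temperature)
    raise ValueError(f"Unknown provider '{provider}' for model '{model_name}'")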

Source Code

def test_rag_engine():
    """Test RAG engine model switching"""
    print("=" * 60)
    print("TEST 2: RAG Engine Model Selection")
    print("=" * 60)
    
    from rag_engine import get_llm_instance
    import config
    
    # Test get_llm_instance for each model
    for model_name in config.AVAILABLE_MODELS.keys():
        try:
            llm = get_llm_instance(model_name)
            provider = config.AVAILABLE_MODELS[model_name]['provider']
            
            expected_class_names = {
                'openai': 'ChatOpenAI',
                'anthropic': 'ChatAnthropic',
                'gemini': 'ChatGoogleGenerativeAI'
            }
            
            expected_class = expected_class_names.get(provider)
            actual_class = type(llm).__name__
            
            assert actual_class == expected_class, \
                f"Model {model_name}: expected {expected_class}, got {actual_class}"
            
            print(f"✓ {model_name}: {actual_class} (provider: {provider})")
        except Exception as e:
            print(f"✗ {model_name}: Failed - {e}")
            raise
    
    print()

Return Value

This function does not return a value (implicitly returns None); it performs assertions and prints test results to stdout. If an assertion fails or model instantiation raises any other error, the function prints a ✗ line for that model and re-raises the exception.
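
If a caller prefers a pass/fail result to a propagated exception, the call can be wrapped; a minimal sketch (run_model_selection_test is a hypothetical helper, not part of the module):

# Convert the exception-raising test into a boolean result.
def run_model_selection_test():
    try:
        test_rag_engine()
        return True
    except AssertionError as e:
        print(f"Assertion failed: {e}")
        return False
    except Exception as e:
        print(f"Test errored: {e}")
        return False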

Dependencies

  • config
  • rag_engine
  • langchain-openai
  • langchain-anthropic
  • langchain-google-genai

Required Imports

import config
from rag_engine import get_llm_instance

Conditional/Optional Imports

These imports are only needed under specific conditions:

from langchain_openai import ChatOpenAI
  Condition: Required by rag_engine when instantiating OpenAI models

from langchain_anthropic import ChatAnthropic
  Condition: Required by rag_engine when instantiating Anthropic models

from langchain_google_genai import ChatGoogleGenerativeAI
  Condition: Required by rag_engine when instantiating Gemini models
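
Because each provider package is imported only when its models are exercised, a preflight check can confirm the packages are installed before the test runs; a sketch using the standard library (the provider-to-package mapping mirrors the list above):

import importlib.util

# Import-time package names for each provider (pip names differ, e.g.
# langchain-google-genai installs langchain_google_genai).
provider_packages = {
    'openai': 'langchain_openai',
    'anthropic': 'langchain_anthropic',
    'gemini': 'langchain_google_genai',
}

for provider, package in provider_packages.items():
    if importlib.util.find_spec(package) is None:
        print(f"Missing package for provider '{provider}': {package}")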

Usage Example

# Ensure config.py has AVAILABLE_MODELS defined:
# AVAILABLE_MODELS = {
#     'gpt-4': {'provider': 'openai'},
#     'claude-3': {'provider': 'anthropic'},
#     'gemini-pro': {'provider': 'gemini'}
# }

# Set environment variables
import os
os.environ['OPENAI_API_KEY'] = 'your-key'
os.environ['ANTHROPIC_API_KEY'] = 'your-key'
os.environ['GOOGLE_API_KEY'] = 'your-key'

# Run the test
test_rag_engine()

# Expected output:
# ============================================================
# TEST 2: RAG Engine Model Selection
# ============================================================
# ✓ gpt-4: ChatOpenAI (provider: openai)
# ✓ claude-3: ChatAnthropic (provider: anthropic)
# ✓ gemini-pro: ChatGoogleGenerativeAI (provider: gemini)
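
Because the file name matches pytest's default test_*.py discovery pattern, the function can also be collected and run without an explicit call, assuming pytest is installed:

# From the repository root:
# pytest docchat/test_model_selection.py -v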

Best Practices

  • This function should be run as part of a test suite, not in production code
  • Ensure all required API keys are set before running this test to avoid authentication errors
  • The function will raise exceptions on failure, so wrap in try-except if you need graceful error handling
  • This test assumes config.AVAILABLE_MODELS is properly structured with 'provider' keys
  • The test validates class names, so any changes to langchain provider class names will require updating the expected_class_names dictionary
  • Run this test after any changes to model configuration or rag_engine implementation
  • Consider mocking API calls if you want to test without making actual API requests (see the sketch after this list)
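
A minimal sketch of that mocking approach: constructing these LangChain chat classes typically only validates that a key is present (no network call is made at instantiation), so patching dummy keys into the environment is often enough for this instantiation-only test. Whether dummy values satisfy each client's constructor is an assumption to verify per provider.

import os
from unittest import mock

# Inject dummy credentials for the duration of the test run only.
dummy_keys = {
    'OPENAI_API_KEY': 'test-key',
    'ANTHROPIC_API_KEY': 'test-key',
    'GOOGLE_API_KEY': 'test-key',
}

with mock.patch.dict(os.environ, dummy_keys):
    test_rag_engine()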

Similar Components

Components with related functionality, ranked by AI-powered semantic similarity:

  • function test_config (71.5% similar)

    A test function that validates the presence and correctness of all required configuration settings for a multi-model RAG (Retrieval-Augmented Generation) system.

    From: /tf/active/vicechatdev/docchat/test_model_selection.py

  • function check_rag_config (67.4% similar)

    Diagnostic function that inspects and reports configuration details of a hybrid RAG (Retrieval-Augmented Generation) engine module, including model settings and class attributes.

    From: /tf/active/vicechatdev/vice_ai/check_rag_config.py

  • function get_llm_instance (58.0% similar)

    Factory function that creates and returns an appropriate LLM (Large Language Model) instance based on the specified model name, automatically detecting the provider (OpenAI, Azure OpenAI, or Anthropic) and configuring it with the given parameters.

    From: /tf/active/vicechatdev/docchat/llm_factory.py

  • class DocChatRAG (56.9% similar)

    Main RAG engine with three operating modes: 1. Basic RAG (similarity search), 2. Extensive (full document retrieval with preprocessing), 3. Full Reading (process all documents).

    From: /tf/active/vicechatdev/docchat/rag_engine.py

  • function init_chat_engine_v1 (56.4% similar)

    Initializes a global chat engine instance using OneCo_hybrid_RAG and validates its configuration by checking for required attributes like available_collections and data_handles.

    From: /tf/active/vicechatdev/vice_ai/app.py