🔍 Code Extractor

function check_rag_config

Maturity: 34

Diagnostic function that inspects and reports configuration details of a hybrid RAG (Retrieval-Augmented Generation) engine module, including model settings and class attributes.

File: /tf/active/vicechatdev/vice_ai/check_rag_config.py
Lines: 8-44
Complexity: moderate

Purpose

This function performs runtime introspection of the hybrid_rag_engine.py module to discover and display its configuration without fully initializing it. It dynamically loads the module, searches for model-related configurations, examines the OneCo_hybrid_RAG class if present, and prints known server configuration details. This is useful for debugging, configuration verification, and understanding the RAG system setup without triggering full initialization.
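
The dynamic-loading pattern the function relies on can be isolated into a small helper. The sketch below is a minimal illustration of that pattern, assuming the same hard-coded module path as the source; load_module_from_path is a hypothetical helper name, not part of the original code.

import importlib.util
import types

def load_module_from_path(name: str, path: str) -> types.ModuleType:
    """Load a Python module directly from a file path, bypassing sys.path.

    Mirrors the pattern used in check_rag_config: build a spec from the
    file location, create a module object from it, then execute the module
    body. Any top-level code in the target file runs at this point.
    """
    spec = importlib.util.spec_from_file_location(name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"Could not build an import spec for {path}")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Hypothetical usage with the path from the source:
# module = load_module_from_path("hybrid_rag_engine",
#                                "/tf/active/vice_ai/hybrid_rag_engine.py")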

Source Code

def check_rag_config():
    try:
        # Minimal import without full initialization
        import importlib.util
        spec = importlib.util.spec_from_file_location("hybrid_rag_engine", "/tf/active/vice_ai/hybrid_rag_engine.py")
        if spec and spec.loader:
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            
            # Look for default configurations in the module
            print("Checking hybrid_rag_engine module for default configurations...")
            
            # Check if there are any default model configurations
            for attr_name in dir(module):
                attr = getattr(module, attr_name)
                if 'gpt' in str(attr).lower() or 'model' in str(attr).lower():
                    print(f"{attr_name}: {attr}")
                    
            # Try to look at the class definition
            if hasattr(module, 'OneCo_hybrid_RAG'):
                rag_class = getattr(module, 'OneCo_hybrid_RAG')
                print(f"RAG class found: {rag_class}")
                
                # Check class attributes and methods
                for attr in dir(rag_class):
                    if 'model' in attr.lower() or 'config' in attr.lower() or 'llm' in attr.lower():
                        print(f"  Class attribute: {attr}")
                        
    except Exception as e:
        print(f"Error checking RAG config: {e}")
        
    # Also check what the logs showed us from the server
    print("\nFrom server logs, we know:")
    print("- Main LLM: OpenAI gpt-4o (temp: 0)")
    print("- Small LLM: gpt-4o-mini")
    print("- Collections: 20 available")
    print("- Flow control settings exist")

Return Value

This function does not return any value (implicitly returns None). All output is printed to stdout, including discovered module attributes, class information, and hardcoded server configuration details.
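
Because everything is printed rather than returned, callers that want the report as a string can wrap the call with contextlib.redirect_stdout. A minimal sketch, assuming check_rag_config has been imported from check_rag_config.py:

import contextlib
import io

from check_rag_config import check_rag_config  # assumed import path

def capture_rag_config_report() -> str:
    """Run the diagnostic and return its printed output as a string."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        check_rag_config()
    return buffer.getvalue()

report = capture_rag_config_report()
print("captured", len(report), "characters of diagnostic output")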

Dependencies

  • importlib.util (Python standard library)

Required Imports

import importlib.util

This import is performed inside the function body and is required for dynamically loading hybrid_rag_engine.py from its file path via importlib.util.spec_from_file_location.

Usage Example

# Simple invocation - no parameters needed
check_rag_config()

# Expected output includes:
# - Module attributes containing 'gpt' or 'model' keywords
# - OneCo_hybrid_RAG class attributes related to models, config, or LLM
# - Hardcoded server configuration summary
# Example output:
# Checking hybrid_rag_engine module for default configurations...
# model_name: gpt-4o
# RAG class found: <class 'OneCo_hybrid_RAG'>
#   Class attribute: llm_model
#   Class attribute: config_path
# 
# From server logs, we know:
# - Main LLM: OpenAI gpt-4o (temp: 0)
# - Small LLM: gpt-4o-mini
# - Collections: 20 available
# - Flow control settings exist
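
As noted under Best Practices below, the target module path should exist before the diagnostic is run; otherwise the broad except clause swallows the failure and only prints an error message. A hedged sketch of a guarded call, assuming the same import path as above:

from pathlib import Path

from check_rag_config import check_rag_config  # assumed import path

ENGINE_PATH = Path("/tf/active/vice_ai/hybrid_rag_engine.py")

if ENGINE_PATH.is_file():
    check_rag_config()
else:
    print(f"hybrid_rag_engine.py not found at {ENGINE_PATH}; skipping diagnostic")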

Best Practices

  • This function is intended for diagnostic and debugging purposes only, not for production use
  • Ensure the target module path (/tf/active/vice_ai/hybrid_rag_engine.py) exists before calling
  • The function uses broad exception handling which may mask specific errors - review printed output carefully
  • Dynamic module loading can have side effects if the target module has top-level code execution
  • The hardcoded server configuration details may become outdated and should be verified against actual logs
  • Consider redirecting stdout if you need to capture the output programmatically
  • This function performs introspection by searching for keywords ('gpt', 'model', 'config', 'llm') which may produce false positives
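
Expanding on the last point above: if the keyword search surfaces too much noise, the same class introspection can be narrowed to non-callable attributes only. This is a sketch of one possible refinement, not part of the original function; OneCo_hybrid_RAG and the attribute names are assumptions carried over from the example output.

import inspect

def list_config_like_attributes(rag_class) -> list[str]:
    """Return non-callable class attributes whose names suggest configuration.

    Skips methods and dunder attributes so that only data attributes such as
    model names or config paths are reported.
    """
    keywords = ("model", "config", "llm")
    matches = []
    for name, value in inspect.getmembers(rag_class):
        if name.startswith("__") or callable(value):
            continue
        if any(keyword in name.lower() for keyword in keywords):
            matches.append(name)
    return matches

# Hypothetical usage, assuming the module has already been loaded:
# print(list_config_like_attributes(module.OneCo_hybrid_RAG))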

Similar Components

AI-powered semantic similarity - components with related functionality:

  • function test_rag_engine 67.4% similar

    A test function that validates the RAG engine's ability to correctly instantiate different LLM models (OpenAI, Anthropic, Gemini) based on configuration settings.

    From: /tf/active/vicechatdev/docchat/test_model_selection.py
  • function test_config 63.3% similar

    A test function that validates the presence and correctness of all required configuration settings for a multi-model RAG (Retrieval-Augmented Generation) system.

    From: /tf/active/vicechatdev/docchat/test_model_selection.py
  • function check_configuration 61.9% similar

    A comprehensive configuration verification function that checks and displays the status of all DocChat system settings, including API keys, models, ChromaDB connection, directories, and LLM initialization.

    From: /tf/active/vicechatdev/docchat/verify_setup.py
  • function init_chat_engine_v1 61.2% similar

    Initializes a global chat engine instance using OneCo_hybrid_RAG and validates its configuration by checking for required attributes like available_collections and data_handles.

    From: /tf/active/vicechatdev/vice_ai/app.py
  • function basic_rag_example 59.7% similar

    Demonstrates a basic RAG (Retrieval-Augmented Generation) workflow by initializing a DocChatRAG engine, executing a sample query about document topics, and displaying the response with metadata.

    From: /tf/active/vicechatdev/docchat/example_usage.py