🔍 Code Extractor

class TestLLMClient

Maturity: 47

Unit test class for LLMClient, providing comprehensive coverage of initialization, text generation, structured data extraction, and error handling across multiple LLM providers (OpenAI, Anthropic, Azure, and local endpoints).

File:
/tf/active/vicechatdev/invoice_extraction/tests/test_utils.py
Lines:
18 - 294
Complexity:
moderate

Purpose

This test class validates the LLMClient class: initialization with each supported provider, text generation, structured data extraction, retry behavior on API failures, and token usage tracking. It mocks API responses instead of making real API calls, keeping the tests fast and deterministic. Coverage spans OpenAI, Anthropic (Claude), Azure OpenAI, and local LLM endpoints.

Source Code

class TestLLMClient(unittest.TestCase):
    """Test cases for the LLMClient class."""
    
    def setUp(self):
        """Set up test environment before each test."""
        self.config = {
            'provider': 'openai',
            'model': 'gpt-4',
            'temperature': 0.0,
            'max_tokens': 1000,
            'api_key': 'test_api_key'
        }
    
    @patch('utils.llm_client.openai')
    def test_init_openai(self, mock_openai):
        """Test initialization with OpenAI provider."""
        # Create client
        client = LLMClient(self.config)
        
        # Check if OpenAI client was initialized
        self.assertEqual(client.provider, 'openai')
        self.assertEqual(client.model, 'gpt-4')
        self.assertEqual(client.temperature, 0.0)
        
        # Ensure OpenAI was initialized with the api key
        mock_openai.OpenAI.assert_called_once_with(api_key='test_api_key')
    
    @patch.dict(os.environ, {'OPENAI_API_KEY': 'env_api_key'})
    @patch('utils.llm_client.openai')
    def test_api_key_from_env(self, mock_openai):
        """Test getting API key from environment variables."""
        # Create client without api_key in config
        config_without_key = {
            'provider': 'openai',
            'model': 'gpt-4'
        }
        client = LLMClient(config_without_key)
        
        # Ensure api key was taken from environment variable
        mock_openai.OpenAI.assert_called_once_with(api_key='env_api_key')
    
    @patch('utils.llm_client.anthropic')
    def test_init_anthropic(self, mock_anthropic):
        """Test initialization with Anthropic provider."""
        # Create client with Anthropic provider
        config = {
            'provider': 'anthropic',
            'model': 'claude-3-opus-20240229',
            'api_key': 'test_claude_key'
        }
        client = LLMClient(config)
        
        # Check if Anthropic client was initialized
        self.assertEqual(client.provider, 'anthropic')
        self.assertEqual(client.model, 'claude-3-opus-20240229')
        
        # Ensure Anthropic was initialized with the api key
        mock_anthropic.Anthropic.assert_called_once_with(api_key='test_claude_key')
    
    @patch('utils.llm_client.AzureOpenAI')
    def test_init_azure(self, mock_azure):
        """Test initialization with Azure provider."""
        # Create client with Azure provider
        config = {
            'provider': 'azure',
            'model': 'gpt-4',
            'api_key': 'test_azure_key',
            'api_endpoint': 'https://test.openai.azure.com',
            'deployment': 'gpt4-deployment',
            'api_version': '2023-05-15'
        }
        client = LLMClient(config)
        
        # Check if Azure client was initialized
        self.assertEqual(client.provider, 'azure')
        self.assertEqual(client.azure_deployment, 'gpt4-deployment')
        
        # Ensure Azure was initialized with the correct parameters
        mock_azure.assert_called_once_with(
            api_key='test_azure_key',
            api_version='2023-05-15',
            azure_endpoint='https://test.openai.azure.com'
        )
    
    @patch('utils.llm_client.requests')
    def test_init_local(self, mock_requests):
        """Test initialization with local provider."""
        # Create client with local provider
        config = {
            'provider': 'local',
            'api_endpoint': 'http://localhost:8000/v1/completions',
            'model': 'llama2-7b'
        }
        client = LLMClient(config)
        
        # Check if local client was initialized
        self.assertEqual(client.provider, 'local')
        self.assertEqual(client._api_endpoint, 'http://localhost:8000/v1/completions')
        
        # Ensure client is using requests
        self.assertEqual(client.client, mock_requests)
    
    @patch('utils.llm_client.openai')
    def test_generate_openai(self, mock_openai):
        """Test generating text with OpenAI."""
        # Create mock response
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = "Test response"
        mock_response.usage.prompt_tokens = 10
        mock_response.usage.completion_tokens = 5
        
        # Setup OpenAI mock
        mock_client = MagicMock()
        mock_client.chat.completions.create.return_value = mock_response
        mock_openai.OpenAI.return_value = mock_client
        
        # Create client and generate text
        client = LLMClient(self.config)
        response = client.generate("Test prompt", "You are a helpful assistant")
        
        # Check that OpenAI was called correctly
        mock_client.chat.completions.create.assert_called_once_with(
            model='gpt-4',
            messages=[
                {"role": "system", "content": "You are a helpful assistant"},
                {"role": "user", "content": "Test prompt"}
            ],
            temperature=0.0,
            max_tokens=1000
        )
        
        # Check response
        self.assertEqual(response, "Test response")
        self.assertEqual(client.total_prompt_tokens, 10)
        self.assertEqual(client.total_completion_tokens, 5)
    
    @patch('utils.llm_client.anthropic')
    def test_generate_anthropic(self, mock_anthropic):
        """Test generating text with Anthropic."""
        # Create mock response
        mock_response = MagicMock()
        mock_response.content = [MagicMock()]
        mock_response.content[0].text = "Test Claude response"
        
        # Setup Anthropic mock
        mock_client = MagicMock()
        mock_client.messages.create.return_value = mock_response
        mock_anthropic.Anthropic.return_value = mock_client
        
        # Create client with Anthropic provider
        config = {
            'provider': 'anthropic',
            'model': 'claude-3-opus-20240229',
            'api_key': 'test_claude_key',
            'temperature': 0.2,
            'max_tokens': 2000
        }
        client = LLMClient(config)
        response = client.generate("Test prompt", "You are an invoice processor")
        
        # Check that Anthropic was called correctly
        mock_client.messages.create.assert_called_once_with(
            model='claude-3-opus-20240229',
            system="You are an invoice processor",
            messages=[{"role": "user", "content": "Test prompt"}],
            temperature=0.2,
            max_tokens=2000
        )
        
        # Check response
        self.assertEqual(response, "Test Claude response")
    
    @patch('utils.llm_client.requests')
    def test_generate_local(self, mock_requests):
        """Test generating text with local provider."""
        # Create mock response
        mock_response = MagicMock()
        mock_response.json.return_value = {
            "choices": [{"text": "Test local response"}]
        }
        mock_requests.post.return_value = mock_response
        
        # Create client with local provider
        config = {
            'provider': 'local',
            'api_endpoint': 'http://localhost:8000/v1/completions',
            'temperature': 0.5,
            'max_tokens': 500
        }
        client = LLMClient(config)
        response = client.generate("Test prompt")
        
        # Check that requests was called correctly
        mock_requests.post.assert_called_once_with(
            'http://localhost:8000/v1/completions',
            headers={"Content-Type": "application/json"},
            data=json.dumps({
                "prompt": "Test prompt",
                "temperature": 0.5,
                "max_tokens": 500
            }),
            timeout=60
        )
        
        # Check response
        self.assertEqual(response, "Test local response")
    
    @patch('utils.llm_client.openai')
    def test_extract_structured_data(self, mock_openai):
        """Test extracting structured data."""
        # Create mock response
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = '```json\n{"name": "Test Company", "invoice_number": "INV-123"}\n```'
        
        # Setup OpenAI mock
        mock_client = MagicMock()
        mock_client.chat.completions.create.return_value = mock_response
        mock_openai.OpenAI.return_value = mock_client
        
        # Create client and extract structured data
        client = LLMClient(self.config)
        schema = {
            "name": "string",
            "invoice_number": "string"
        }
        result = client.extract_structured_data("Sample invoice from Test Company #INV-123", schema)
        
        # Check result
        self.assertEqual(result, {"name": "Test Company", "invoice_number": "INV-123"})
    
    @patch('utils.llm_client.requests')
    def test_retrying_on_failure(self, mock_requests):
        """Test retry mechanism on API failure."""
        # First call raises an exception, second call succeeds
        mock_response_success = MagicMock()
        mock_response_success.json.return_value = {
            "choices": [{"text": "Test response after retry"}]
        }
        
        mock_requests.post.side_effect = [
            requests.exceptions.ConnectionError("API unavailable"),
            mock_response_success
        ]
        
        # Create client with local provider
        config = {
            'provider': 'local',
            'api_endpoint': 'http://localhost:8000/v1/completions',
            'max_retries': 3
        }
        client = LLMClient(config)
        response = client.generate("Test prompt")
        
        # Check that requests was called twice
        self.assertEqual(mock_requests.post.call_count, 2)
        
        # Check final response
        self.assertEqual(response, "Test response after retry")
    
    def test_get_usage_stats(self):
        """Test getting usage statistics."""
        # Create client and manually set token counts
        client = LLMClient(self.config)
        client.total_prompt_tokens = 100
        client.total_completion_tokens = 50
        
        # Get stats
        stats = client.get_usage_stats()
        
        # Check stats
        self.assertEqual(stats['prompt_tokens'], 100)
        self.assertEqual(stats['completion_tokens'], 50)
        self.assertEqual(stats['total_tokens'], 150)
        self.assertEqual(stats['provider'], 'openai')
        self.assertEqual(stats['model'], 'gpt-4')

Parameters

Name: bases
Type: unittest.TestCase
Default: -
Kind: -

Parameter Details

bases: Inherits from unittest.TestCase, which provides the testing framework infrastructure including assertion methods, test discovery, and test execution lifecycle management
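
For reference, the TestCase lifecycle this inheritance provides looks roughly like the following minimal, standalone sketch (not code from this repository):

import unittest

class ExampleTest(unittest.TestCase):
    def setUp(self):
        # Runs before each test method, so every test starts from fresh state
        self.value = 42

    def test_value(self):
        # Assertion helpers such as assertEqual come from unittest.TestCase
        self.assertEqual(self.value, 42)

if __name__ == '__main__':
    unittest.main()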

Return Value

As a test class, it does not return values directly. Each test method performs assertions that either pass (no exception) or fail (raises AssertionError). The unittest framework collects and reports test results.

Class Interface

Methods

setUp(self) -> None

Purpose: Initializes test environment before each test method runs, setting up a default configuration dictionary for OpenAI

Returns: None - sets up self.config instance variable

test_init_openai(self, mock_openai) -> None

Purpose: Tests that LLMClient correctly initializes with OpenAI provider, verifying client attributes and API key handling

Parameters:

  • mock_openai: Mocked openai module injected by @patch decorator

Returns: None - performs assertions
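
Taken together, the four initialization tests imply a provider dispatch in LLMClient.__init__ roughly like the sketch below. This is inferred from the tests' assertions, not taken from utils/llm_client.py; the class name SketchLLMClient and the default values are hypothetical:

import openai
import anthropic
import requests
from openai import AzureOpenAI

class SketchLLMClient:
    """Hypothetical skeleton inferred from the init tests; not the real class."""

    def __init__(self, config: dict):
        self.provider = config['provider']
        self.model = config.get('model')
        self.temperature = config.get('temperature', 0.0)
        self.max_tokens = config.get('max_tokens', 1000)
        api_key = config.get('api_key')  # env-var fallback sketched under test_api_key_from_env below
        if self.provider == 'openai':
            self.client = openai.OpenAI(api_key=api_key)
        elif self.provider == 'anthropic':
            self.client = anthropic.Anthropic(api_key=api_key)
        elif self.provider == 'azure':
            self.azure_deployment = config['deployment']
            self.client = AzureOpenAI(
                api_key=api_key,
                api_version=config['api_version'],
                azure_endpoint=config['api_endpoint'],
            )
        else:  # 'local' provider talks plain HTTP via requests
            self._api_endpoint = config['api_endpoint']
            self.client = requests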

test_api_key_from_env(self, mock_openai) -> None

Purpose: Tests that LLMClient retrieves API key from OPENAI_API_KEY environment variable when not provided in config

Parameters:

  • mock_openai: Mocked openai module injected by @patch decorator

Returns: None - performs assertions
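
The resolution order this test implies, explicit config value first and the OPENAI_API_KEY environment variable as fallback, can be expressed as a one-line helper. The function name is hypothetical; only the precedence is grounded in the test:

import os

def resolve_api_key(config: dict) -> str | None:
    # Config wins; fall back to the environment (inferred from test_api_key_from_env)
    return config.get('api_key') or os.environ.get('OPENAI_API_KEY')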

test_init_anthropic(self, mock_anthropic) -> None

Purpose: Tests that LLMClient correctly initializes with Anthropic (Claude) provider, verifying client attributes and API key handling

Parameters:

  • mock_anthropic: Mocked anthropic module injected by @patch decorator

Returns: None - performs assertions

test_init_azure(self, mock_azure) -> None

Purpose: Tests that LLMClient correctly initializes with Azure OpenAI provider, verifying deployment name, endpoint, and API version handling

Parameters:

  • mock_azure: Mocked AzureOpenAI class injected by @patch decorator

Returns: None - performs assertions

test_init_local(self, mock_requests) -> None

Purpose: Tests that LLMClient correctly initializes with local LLM provider, verifying endpoint configuration and requests library usage

Parameters:

  • mock_requests: Mocked requests module injected by @patch decorator

Returns: None - performs assertions

test_generate_openai(self, mock_openai) -> None

Purpose: Tests text generation with OpenAI provider, verifying API call parameters, response handling, and token usage tracking

Parameters:

  • mock_openai: Mocked openai module injected by @patch decorator

Returns: None - performs assertions
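
The mock assertions pin down the exact chat-completions call and the token bookkeeping, so the OpenAI path of generate() presumably looks something like this method-shaped sketch (imagine it on the skeleton class above; not the actual implementation):

def generate(self, prompt: str, system_prompt: str = "") -> str:
    # Call shape taken from the assertion in test_generate_openai
    response = self.client.chat.completions.create(
        model=self.model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        temperature=self.temperature,
        max_tokens=self.max_tokens,
    )
    # Counters checked by test_generate_openai and test_get_usage_stats
    self.total_prompt_tokens += response.usage.prompt_tokens
    self.total_completion_tokens += response.usage.completion_tokens
    return response.choices[0].message.content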

test_generate_anthropic(self, mock_anthropic) -> None

Purpose: Tests text generation with Anthropic provider, verifying API call parameters and response handling for Claude models

Parameters:

  • mock_anthropic: Mocked anthropic module injected by @patch decorator

Returns: None - performs assertions

test_generate_local(self, mock_requests) -> None

Purpose: Tests text generation with local LLM provider, verifying HTTP request parameters and response parsing

Parameters:

  • mock_requests: Mocked requests module injected by @patch decorator

Returns: None - performs assertions

test_extract_structured_data(self, mock_openai) -> None

Purpose: Tests structured data extraction from text, verifying JSON parsing from LLM response with code block formatting

Parameters:

  • mock_openai: Mocked openai module injected by @patch decorator

Returns: None - performs assertions
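
The mocked response wraps its JSON in a Markdown code fence, so extract_structured_data presumably strips the fence before parsing. A minimal sketch of that step (the helper name is hypothetical):

import json
import re

def parse_fenced_json(text: str) -> dict:
    # Strip an optional ```json ... ``` fence before json.loads;
    # inferred from the fenced mock response in test_extract_structured_data
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    return json.loads(match.group(1) if match else text)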

test_retrying_on_failure(self, mock_requests) -> None

Purpose: Tests retry mechanism when API calls fail, verifying that client retries on connection errors and eventually succeeds

Parameters:

  • mock_requests: Mocked requests module injected by @patch decorator

Returns: None - performs assertions
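
The two-call sequence the test asserts (one ConnectionError, then success) is consistent with a simple bounded retry loop like the sketch below. The request shape matches the call asserted in test_generate_local; the function name and re-raise behavior are assumptions:

import json
import requests

def post_with_retries(url: str, payload: dict, max_retries: int = 3):
    # Bounded retry on transient connection errors; behavior inferred
    # from test_retrying_on_failure (two calls: one failure, one success)
    last_error = None
    for _ in range(max_retries):
        try:
            return requests.post(
                url,
                headers={"Content-Type": "application/json"},
                data=json.dumps(payload),
                timeout=60,
            )
        except requests.exceptions.ConnectionError as exc:
            last_error = exc
    raise last_error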

test_get_usage_stats(self) -> None

Purpose: Tests retrieval of token usage statistics, verifying that prompt tokens, completion tokens, and total tokens are correctly tracked and reported

Returns: None - performs assertions
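
The assertions fully determine the shape of the returned dictionary, so get_usage_stats() can be sketched directly (standalone form for illustration):

def usage_stats(prompt_tokens: int, completion_tokens: int,
                provider: str, model: str) -> dict:
    # Dict shape taken from the assertions in test_get_usage_stats
    return {
        'prompt_tokens': prompt_tokens,
        'completion_tokens': completion_tokens,
        'total_tokens': prompt_tokens + completion_tokens,
        'provider': provider,
        'model': model,
    }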

Attributes

Name: config
Type: dict
Scope: instance
Description: Test configuration dictionary containing provider, model, temperature, max_tokens, and api_key settings for OpenAI. Initialized in setUp() before each test.

Dependencies

  • unittest
  • unittest.mock
  • logging
  • json
  • os
  • sys
  • io
  • datetime
  • tempfile
  • requests
  • utils.llm_client
  • utils.logging_utils

Required Imports

import unittest
from unittest.mock import patch, MagicMock, ANY
import logging
import json
import os
import sys
import io
from datetime import datetime, timedelta
import tempfile
import requests
from utils.llm_client import LLMClient
from utils.logging_utils import InvoiceExtractionLogger, PerformanceLogger, get_logger

Usage Example

import unittest
from test_utils import TestLLMClient  # module per the file path above (tests/test_utils.py)

# Option 1: run every test in the module (note: unittest.main() exits the process)
if __name__ == '__main__':
    unittest.main()

# Option 2: run a single test by name
suite = unittest.TestLoader().loadTestsFromName('test_utils.TestLLMClient.test_init_openai')
unittest.TextTestRunner().run(suite)

# Option 3: run the whole class with verbose output
suite = unittest.TestLoader().loadTestsFromTestCase(TestLLMClient)
runner = unittest.TextTestRunner(verbosity=2)
runner.run(suite)

Best Practices

  • Each test method is independent and uses setUp() to initialize fresh test configuration
  • Uses @patch decorator to mock external dependencies (API clients) to avoid actual API calls
  • Tests are isolated and do not depend on external services or network connectivity
  • Each test follows the Arrange-Act-Assert pattern for clarity (illustrated in the sketch after this list)
  • Mock objects are configured to return predictable responses for deterministic testing
  • Tests verify both successful operations and error handling scenarios
  • Token usage tracking is validated to ensure proper monitoring
  • Tests cover all supported LLM providers (OpenAI, Anthropic, Azure, local)
  • Environment variable handling is tested using @patch.dict
  • Retry mechanism is validated to ensure resilience to transient failures
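
For reference, the Arrange-Act-Assert and @patch techniques listed above, shown as one minimal standalone example (independent of this repository's code):

import unittest
import requests
from unittest.mock import patch, MagicMock

class ExamplePatchTest(unittest.TestCase):
    @patch('requests.get')  # replace the real HTTP call for this test only
    def test_fetch(self, mock_get):
        # Arrange: configure the mock to return a canned response
        mock_get.return_value = MagicMock(status_code=200)

        # Act: call the code under test
        response = requests.get('http://example.invalid')

        # Assert: verify both the result and the interaction
        self.assertEqual(response.status_code, 200)
        mock_get.assert_called_once_with('http://example.invalid')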

Similar Components

AI-powered semantic similarity - components with related functionality:

  • class LLMClient 80.2% similar

    A singleton client class for interacting with multiple LLM providers (OpenAI, Anthropic, Azure OpenAI, and local models) with unified interface for text generation and structured data extraction.

    From: /tf/active/vicechatdev/invoice_extraction/utils/llm_client.py
  • class MockLLMClient 80.0% similar

    A mock implementation of an LLM client designed for testing extractor components without making actual API calls to language models.

    From: /tf/active/vicechatdev/invoice_extraction/tests/test_extractors.py
  • class LLMClient_v2 78.7% similar

    Client for interacting with LLM providers (OpenAI, Anthropic, Azure, etc.)

    From: /tf/active/vicechatdev/contract_validity_analyzer/utils/llm_client.py
  • function test_llm_client 77.5% similar

    Tests the LLM client functionality by analyzing a sample contract text and verifying the extraction of key contract metadata such as third parties, dates, and status.

    From: /tf/active/vicechatdev/contract_validity_analyzer/test_implementation.py
  • class LLMClient_v1 74.4% similar

    A client class for interacting with Large Language Models (LLMs), specifically designed to work with OpenAI's chat completion API.

    From: /tf/active/vicechatdev/QA_updater/core/llm_client.py