class LLMClient_v1
A client class for interacting with Large Language Models (LLMs), specifically designed to work with OpenAI's Chat Completions API.
File: /tf/active/vicechatdev/QA_updater/core/llm_client.py
Lines: 7-53
Complexity: moderate
Purpose
LLMClient provides a simplified interface for making calls to OpenAI's language models. It handles configuration management, API key lookup, error handling, and logging. The class is designed to be instantiated once with configuration settings and then reused for multiple LLM calls over an application's lifetime. Model selection, token limits, and temperature can be customized to control response generation.
Source Code
class LLMClient:
    """Client for interacting with Large Language Models."""

    def __init__(self, config: ConfigParser):
        """
        Initializes the LLMClient with the model specified in the config.

        Args:
            config (ConfigParser): Configuration object containing model settings.
        """
        self.logger = logging.getLogger(__name__)
        self.model_name = config.get('llm', 'model_name', fallback='gpt-3.5-turbo')  # Default to gpt-3.5-turbo
        self.api_key = os.getenv("OPENAI_API_KEY")  # Assuming OpenAI for now
        if not self.api_key:
            self.logger.warning("OPENAI_API_KEY not found in environment variables. "
                                "LLM functionality will be limited.")
        self.logger.info(f"LLMClient initialized with model: {self.model_name}")

    def call_llm(self, prompt: str, max_tokens: int = 2000, temperature: float = 0.0) -> str:
        """
        Calls the LLM with the given prompt and returns the response.

        Args:
            prompt (str): The prompt to send to the LLM.
            max_tokens (int): Maximum number of tokens in the response.
            temperature (float): Controls the randomness of the response (0.0 is more deterministic).

        Returns:
            str: The LLM's response, or an empty string if there was an error.
        """
        if not self.api_key:
            self.logger.error("API key not configured. Cannot call LLM.")
            return ""
        try:
            response = openai.chat.completions.create(
                model=self.model_name,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=max_tokens,
                temperature=temperature,
            )
            return response.choices[0].message.content.strip()
        except Exception as e:
            self.logger.exception(f"Error calling LLM: {e}")
            return ""
Parameters
| Name | Type | Default | Kind |
|---|---|---|---|
| bases | - | - | - |
Parameter Details
config: A ConfigParser object containing LLM configuration settings. It may include an 'llm' section with a 'model_name' key; because the lookup uses a fallback, the model defaults to 'gpt-3.5-turbo' whether the key or the whole section is missing. This parameter is required for instantiation and determines which model is used for all subsequent calls.
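For illustration, the same settings can also be loaded from an INI file instead of being built in code. The file name settings.ini below is a hypothetical example, not something the class requires:

from configparser import ConfigParser

# Contents of a hypothetical settings.ini:
#   [llm]
#   model_name = gpt-4

config = ConfigParser()
read_files = config.read('settings.ini')  # returns the list of files successfully parsed
if not read_files:
    # No config file found; LLMClient will fall back to 'gpt-3.5-turbo'
    pass
client = LLMClient(config)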
Return Value
Instantiation returns an LLMClient object configured with the specified model and API key. The call_llm method returns a string containing the LLM's response text (stripped of leading/trailing whitespace), or an empty string if an error occurs (such as missing API key or API call failure).
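Because call_llm reports failures through its return value rather than by raising, callers should branch on the result. A minimal sketch of that pattern, assuming client is an LLMClient instance:

response = client.call_llm('Summarize the release notes.')
if response:
    print(response)
else:
    # Empty string means the API key was missing or the call failed;
    # the underlying error is recorded in the client's logs.
    print('LLM call failed; check the logs for details.')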
Class Interface
Methods
__init__(self, config: ConfigParser) -> None
Purpose: Initializes the LLMClient with model configuration and API key from environment variables
Parameters:
config: ConfigParser object containing LLM settings, specifically looking for 'llm' section with optional 'model_name' key
Returns: None - constructor initializes the instance
call_llm(self, prompt: str, max_tokens: int = 2000, temperature: float = 0.0) -> str
Purpose: Sends a prompt to the configured LLM and returns the generated response
Parameters:
prompt: The text prompt/question to send to the LLM
max_tokens: Maximum number of tokens in the response (default: 2000). Controls response length and API costs
temperature: Controls randomness of the response, range 0.0-2.0 (default: 0.0). Lower values are more deterministic, higher values more creative
Returns: String containing the LLM's response text (stripped), or empty string if API key is missing or an error occurs
Attributes
| Name | Type | Description | Scope |
|---|---|---|---|
| logger | logging.Logger | Logger instance for recording initialization, errors, and warnings related to LLM operations | instance |
| model_name | str | Name of the OpenAI model to use for completions (e.g., 'gpt-3.5-turbo', 'gpt-4'). Retrieved from config or defaults to 'gpt-3.5-turbo' | instance |
| api_key | str or None | OpenAI API key retrieved from the OPENAI_API_KEY environment variable. None if not set, which disables LLM functionality | instance |
Dependencies
os, openai, logging, configparser
Required Imports
import os
import openai
import logging
from configparser import ConfigParser
Usage Example
from configparser import ConfigParser
import os
import logging
import openai
# Setup logging
logging.basicConfig(level=logging.INFO)
# Set API key
os.environ['OPENAI_API_KEY'] = 'your-api-key-here'
# Create configuration
config = ConfigParser()
config.add_section('llm')
config.set('llm', 'model_name', 'gpt-4')
# Instantiate the client
client = LLMClient(config)
# Make a simple call
response = client.call_llm('What is the capital of France?')
print(response)
# Make a call with custom parameters
response = client.call_llm(
    prompt='Write a creative story about a robot.',
    max_tokens=500,
    temperature=0.7
)
print(response)
Best Practices
- Always set the OPENAI_API_KEY environment variable before instantiating the class to enable LLM functionality
- Instantiate LLMClient once and reuse the instance for multiple calls to avoid repeated initialization overhead
- Check the return value from call_llm - an empty string indicates an error occurred
- Use appropriate temperature values: 0.0 for deterministic responses, higher values (0.7-1.0) for creative outputs
- Set max_tokens appropriately based on expected response length to control costs and response time
- Monitor logs for warnings and errors, especially regarding API key configuration
- Handle empty string returns gracefully in your application logic
- Consider implementing retry logic for transient API failures in production environments (a sketch follows this list)
- The class uses synchronous API calls - consider implementing async versions for high-throughput applications
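The class itself performs no retries, so transient failures surface as empty strings. One way to add retries without modifying LLMClient is a small wrapper with exponential backoff; call_llm_with_retry below is a hypothetical helper, not part of the class, and note that under this contract a genuinely empty completion is indistinguishable from an error:

import time

def call_llm_with_retry(client, prompt, retries=3, backoff=1.0, **kwargs):
    """Retry call_llm on empty responses, doubling the delay between attempts."""
    for attempt in range(retries):
        response = client.call_llm(prompt, **kwargs)
        if response:
            return response
        if attempt < retries - 1:
            # An empty string signals an error; wait before the next attempt
            time.sleep(backoff * (2 ** attempt))
    return ""

answer = call_llm_with_retry(client, 'What is the capital of France?', retries=3)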
Similar Components
AI-powered semantic similarity - components with related functionality:
- class LLMClient_v2 (81.6% similar)
- class LLMClient (81.3% similar)
- class OpenAIChatLLM (77.0% similar)
- class TestLLMClient (74.4% similar)