
function get_available_models

Maturity: 39

Flask API endpoint that returns a JSON response containing the list of available LLM models and the default model configured in the application.

File:
/tf/active/vicechatdev/docchat/blueprint.py
Lines:
331 - 336
Complexity:
simple

Purpose

This endpoint is a configuration discovery mechanism for frontend clients: it exposes the language models available for document chat. The UI can use it to populate model selection dropdowns dynamically and to learn which model is the default, enabling model switching without hardcoding model names in the frontend.
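
As a sketch of how a client script might consume this endpoint (the host is hypothetical; the '/api/models' path and the authentication requirement match the Usage Example below):

import requests

# The endpoint is login-protected, so reuse a session that already
# carries the authentication cookie (hypothetical host and login flow).
session = requests.Session()
resp = session.get('https://app.example.com/api/models')
resp.raise_for_status()
data = resp.json()

# Populate a model picker, falling back to the server-side default.
models = data['models']
default = data.get('default', models[0] if models else None)
print(f'Available models: {models}; default: {default}')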

Source Code

def get_available_models():
    """Get list of available LLM models"""
    return jsonify({
        'models': config.AVAILABLE_MODELS,
        'default': config.DEFAULT_MODEL
    })

Return Value

Returns a Flask JSON response object containing a dictionary with two keys: 'models' (a list of available LLM model identifiers from config.AVAILABLE_MODELS) and 'default' (a string representing the default model identifier from config.DEFAULT_MODEL). The HTTP status code is 200 on success.
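
For illustration, with the sample configuration from the Usage Example below, the response body would be (values are illustrative, not fixed):

{
    "models": ["gpt-4", "gpt-3.5-turbo", "claude-2"],
    "default": "gpt-3.5-turbo"
}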

Dependencies

  • flask
  • flask-login

Required Imports

from flask import Blueprint, jsonify
from flask_login import login_required
from config import config

Usage Example

# In your Flask application setup
from flask import Flask, Blueprint, jsonify
from flask_login import LoginManager, login_required

# Configuration module
class Config:
    AVAILABLE_MODELS = ['gpt-4', 'gpt-3.5-turbo', 'claude-2']
    DEFAULT_MODEL = 'gpt-3.5-turbo'

config = Config()

# Create Flask app and blueprint
app = Flask(__name__)
app.secret_key = 'your-secret-key'
docchat_bp = Blueprint('docchat', __name__)

# Setup Flask-Login
login_manager = LoginManager()
login_manager.init_app(app)

# Flask-Login requires a user_loader callback; this stub is a placeholder.
# Replace it with your real user lookup.
@login_manager.user_loader
def load_user(user_id):
    return None

@docchat_bp.route('/api/models', methods=['GET'])
@login_required
def get_available_models():
    return jsonify({
        'models': config.AVAILABLE_MODELS,
        'default': config.DEFAULT_MODEL
    })

app.register_blueprint(docchat_bp)

# Client-side usage (JavaScript fetch example)
# fetch('/api/models')
#   .then(response => response.json())
#   .then(data => {
#     console.log('Available models:', data.models);
#     console.log('Default model:', data.default);
#   });

Best Practices

  • Ensure config.AVAILABLE_MODELS is populated before the application starts so the endpoint never returns an empty list or None
  • Validate that config.DEFAULT_MODEL appears in config.AVAILABLE_MODELS to keep the two settings consistent (see the sketch after this list)
  • The endpoint is protected by the @login_required decorator; clients must be authenticated before calling it
  • Consider caching the response on the client side, since the model configuration typically does not change at runtime
  • Guard against an AttributeError if the config module lacks the AVAILABLE_MODELS or DEFAULT_MODEL attributes
  • Consider returning an appropriate HTTP error status if the configuration is missing, rather than failing with an unhandled exception
  • Document the expected format of model identifiers so they stay consistent across the application
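
A minimal startup-time check along these lines can enforce the first two points (a sketch; validate_model_config is a hypothetical helper, and the attribute names follow the Required Imports above):

# Hypothetical helper: fail fast at startup if the model config is bad.
def validate_model_config(config):
    models = getattr(config, 'AVAILABLE_MODELS', None)
    default = getattr(config, 'DEFAULT_MODEL', None)
    if not models:
        raise RuntimeError('config.AVAILABLE_MODELS is missing or empty')
    if default not in models:
        raise RuntimeError(
            f'config.DEFAULT_MODEL {default!r} is not listed in '
            'config.AVAILABLE_MODELS'
        )

Calling validate_model_config(config) once during application startup surfaces configuration errors before the first request, instead of returning broken data to clients.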

Similar Components

Components with related functionality, identified by AI-powered semantic similarity:

  • function api_get_models 93.8% similar

    Flask API endpoint that returns a list of available LLM models and the default model configuration.

    From: /tf/active/vicechatdev/docchat/app.py
  • function chat_with_text_section 57.5% similar

    Flask API endpoint that enables AI-powered chat conversations about a specific text section, with support for multiple LLM models and document context.

    From: /tf/active/vicechatdev/vice_ai/new_app.py
  • function api_send_chat_message_v1 56.3% similar

    Flask API endpoint that handles sending messages in a chat session, processes them through a RAG (Retrieval-Augmented Generation) engine with configurable LLM models, and returns AI-generated responses with references.

    From: /tf/active/vicechatdev/vice_ai/new_app.py
  • class LLMClient_v1 55.4% similar

    A client class for interacting with Large Language Models (LLMs), specifically designed to work with OpenAI's chat completion API.

    From: /tf/active/vicechatdev/QA_updater/core/llm_client.py
  • function get_llm_instance 55.1% similar

    Factory function that creates and returns an appropriate LLM (Large Language Model) instance based on the specified model name, automatically detecting the provider (OpenAI, Azure OpenAI, or Anthropic) and configuring it with the given parameters.

    From: /tf/active/vicechatdev/docchat/llm_factory.py