
function chat_with_text_section

Maturity: 58

Flask API endpoint that enables AI-powered chat conversations about a specific text section, with support for multiple LLM models and document context.

File: /tf/active/vicechatdev/vice_ai/new_app.py
Lines: 758-872
Complexity: complex

Purpose

This endpoint allows authenticated users to have interactive conversations with AI assistants about their text sections. It maintains chat history, supports multiple AI models (OpenAI, Anthropic, Google), incorporates uploaded documents as context, and provides intelligent responses for content improvement, writing assistance, and document editing. The function verifies ownership, manages conversation state, and integrates with various LLM providers through a unified client interface.

Source Code

def chat_with_text_section(section_id):
    """Chat with AI about a specific text section"""
    user_email = get_current_user()
    data = request.get_json()
    
    # Verify ownership
    text_section = text_section_service.get_text_section(section_id)
    if not text_section or text_section.owner != user_email:
        return jsonify({'error': 'Text section not found or access denied'}), 404
    
    try:
        user_message = data.get('message', '')
        model = data.get('model', LLM_CONFIG['default_model'])  # Get model from request
        
        if not user_message:
            return jsonify({'error': 'Message is required'}), 400
        
        # Add user message to chat history
        text_section_service.add_chat_message(
            section_id=section_id,
            role='user',
            content=user_message
        )
        
        # Initialize LLM client with selected model
        llm_client = LLMClient(model=model)
        
        # Prepare context for AI
        context = {
            'section_title': text_section.title,
            'section_content': text_section.current_content,
            'section_type': text_section.section_type.value,
            'chat_history': [msg.to_dict() for msg in text_section.chat_messages[-10:]]  # Last 10 messages
        }
        
        # Get uploaded documents as context
        uploaded_docs = session.get('uploaded_documents', {})
        context_documents = data.get('context_documents', [])
        
        # Add uploaded documents to context
        document_context = ""
        if uploaded_docs:
            document_context += "\n\nAdditional Context Documents:\n"
            for doc_id, doc_info in uploaded_docs.items():
                if doc_id in context_documents or not context_documents:  # Include all if none specified
                    document_context += f"\n--- {doc_info['filename']} ---\n"
                    document_context += doc_info['text_content'][:1000]  # Limit size
                    if len(doc_info['text_content']) > 1000:
                        document_context += "... [content truncated]"
                    document_context += "\n"
        
        # Build messages for the LLM
        system_content = f"""You are an expert AI assistant helping with document creation and editing. 
                
Current section context:
- Title: {context['section_title']}
- Type: {context['section_type']}
- Content: {context['section_content'][:2000]}{'...' if len(context['section_content']) > 2000 else ''}

{document_context}

Please provide helpful, accurate responses about this section. Focus on:
1. Content improvement suggestions
2. Writing assistance
3. Factual information related to the topic
4. Structure and organization advice

If you reference external sources or make specific claims, please be clear about your confidence level."""

        messages = [
            {
                "role": "system",
                "content": system_content
            },
            {
                "role": "user", 
                "content": user_message
            }
        ]
        
        # Add recent chat history for context
        if context['chat_history']:
            # Insert recent history before the current user message
            for msg in context['chat_history'][-5:]:  # Last 5 messages for context
                if msg['role'] in ['user', 'assistant']:
                    messages.insert(-1, {
                        "role": msg['role'],
                        "content": msg['content'][:1000]  # Truncate long messages
                    })
        
        # Get AI response using LLM client
        ai_response = llm_client.generate_response(
            messages=messages,
            max_tokens=2000,
            temperature=0.7
        )
        
        # Add AI response to chat history
        text_section_service.add_chat_message(
            section_id=section_id,
            role='assistant',
            content=ai_response,
            references=[]  # No references for now, could be added later
        )
        
        return jsonify({
            'success': True,
            'response': ai_response,
            'references': [],
            'model_used': model
        })
        
    except Exception as e:
        logger.error(f"Error in chat: {e}")
        return jsonify({'error': str(e)}), 500
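
The extractor captures only the function body, so the route decorator is not shown. Judging from the URL in the usage example below, the registration presumably looks like the following sketch (the exact path and decorator stack are assumptions):

@app.route('/api/text-sections/<section_id>/chat', methods=['POST'])
def chat_with_text_section(section_id):
    ...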

Parameters

Name        Type  Default  Kind
section_id  -     -        positional_or_keyword

Parameter Details

section_id: String identifier for the text section to chat about. Must be a valid section ID that exists in the database and is owned by the authenticated user. Used to retrieve section content, title, type, and chat history.

Return Value

Returns a Flask JSON response. On success (200): {'success': True, 'response': <AI generated text>, 'references': [], 'model_used': <model name>}. On error: {'error': <error message>} with status codes 400 (missing message), 404 (section not found/access denied), or 500 (server error).
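
For reference, the concrete payloads look like this (the response text in the success case is illustrative; the error strings are taken verbatim from the source):

# Success (HTTP 200)
{"success": true, "response": "Here are three ways to tighten the introduction...", "references": [], "model_used": "gpt-4"}

# Missing message (HTTP 400)
{"error": "Message is required"}

# Unknown section or wrong owner (HTTP 404)
{"error": "Text section not found or access denied"}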

Dependencies

  • flask
  • openai
  • google-generativeai
  • anthropic
  • python-docx
  • reportlab
  • pandas
  • werkzeug

These are module-level dependencies of new_app.py; this particular endpoint only exercises flask plus whichever LLM SDK backs the selected model.

Required Imports

from flask import request, jsonify, session
import logging
from models import TextSection, ChatMessage
from services import TextSectionService

The function also relies on module-level names defined elsewhere in new_app.py: get_current_user, text_section_service, LLMClient, LLM_CONFIG, and logger.

Conditional/Optional Imports

These imports are only needed under specific conditions:

import openai
    Required (conditional) if using OpenAI models (GPT-3.5, GPT-4, etc.)

import google.generativeai as genai
    Required (conditional) if using Google Gemini models

import anthropic
    Required (conditional) if using Anthropic Claude models
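
The unified LLMClient wrapper is defined elsewhere in new_app.py and is not part of this extract. Below is a minimal sketch of how such a client might dispatch to the three providers, assuming the generate_response(messages, max_tokens, temperature) signature used above; the model-name prefixes and environment-variable configuration are assumptions:

class LLMClient:
    """Sketch only -- the real class in new_app.py may differ."""

    def __init__(self, model):
        self.model = model

    def generate_response(self, messages, max_tokens=2000, temperature=0.7):
        if self.model.startswith('gpt'):
            from openai import OpenAI
            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            resp = client.chat.completions.create(
                model=self.model, messages=messages,
                max_tokens=max_tokens, temperature=temperature)
            return resp.choices[0].message.content

        if self.model.startswith('claude'):
            import anthropic
            client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
            # Anthropic takes the system prompt separately from the turns.
            system = '\n'.join(m['content'] for m in messages if m['role'] == 'system')
            turns = [m for m in messages if m['role'] != 'system']
            resp = client.messages.create(
                model=self.model, system=system, messages=turns,
                max_tokens=max_tokens, temperature=temperature)
            return resp.content[0].text

        if self.model.startswith('gemini'):
            import google.generativeai as genai
            # Assumes genai.configure(api_key=...) ran at app startup.
            gemini = genai.GenerativeModel(self.model)
            # Gemini's simplest call takes a single prompt string.
            prompt = '\n\n'.join(f"{m['role']}: {m['content']}" for m in messages)
            return gemini.generate_content(prompt).text

        raise ValueError(f'Unsupported model: {self.model}')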

Usage Example

# Example POST request to the endpoint
import requests

# Assuming authentication is handled via session/cookies
url = 'http://localhost:5000/api/text-sections/abc123/chat'
headers = {'Content-Type': 'application/json'}
payload = {
    'message': 'Can you help me improve the introduction?',
    'model': 'gpt-4',
    'context_documents': ['doc_id_1', 'doc_id_2']
}

response = requests.post(url, json=payload, headers=headers)
if response.status_code == 200:
    data = response.json()
    print(f"AI Response: {data['response']}")
    print(f"Model Used: {data['model_used']}")
else:
    print(f"Error: {response.json()['error']}")

Best Practices

  • Always verify user ownership of the text section before processing chat requests to prevent unauthorized access
  • Limit chat history context to recent messages (5-10) to avoid token limit issues with LLM APIs
  • Truncate long content (section content to 2000 chars, messages to 1000 chars, documents to 1000 chars) to manage API costs and response times
  • Store both user and assistant messages in the database to maintain conversation continuity
  • Use try-except blocks to handle LLM API failures gracefully and return meaningful error messages
  • Include system prompts that clearly define the AI's role and capabilities for consistent responses
  • Allow model selection via request payload to support different LLM providers and capabilities
  • Log errors with sufficient detail for debugging while avoiding exposure of sensitive data
  • Consider implementing rate limiting to prevent abuse of the chat endpoint (see the sketch after this list)
  • Validate that the 'message' field is not empty before processing to avoid unnecessary API calls
  • Use session storage for uploaded documents but be mindful of session size limits
  • Include document context selectively based on context_documents parameter to optimize token usage
  • Set appropriate temperature (0.7) for balanced creativity and accuracy in responses
  • Return the model_used in the response for transparency and debugging purposes
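
On the rate-limiting point above, one possible approach uses Flask-Limiter; this is a sketch, not part of new_app.py, and the limits shown are arbitrary:

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app, default_limits=['200 per day'])

@app.route('/api/text-sections/<section_id>/chat', methods=['POST'])
@limiter.limit('10 per minute')  # cap chat traffic per client IP
def chat_with_text_section(section_id):
    ...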

Similar Components

Components with related functionality, ranked by AI-powered semantic similarity:

  • function api_send_chat_message_v1 76.1% similar

    Flask API endpoint that handles sending messages in a chat session, processes them through a RAG (Retrieval-Augmented Generation) engine with configurable LLM models, and returns AI-generated responses with references.

    From: /tf/active/vicechatdev/vice_ai/new_app.py
  • function api_chat 70.2% similar

    Flask API endpoint that handles chat requests asynchronously, processing user queries through a RAG (Retrieval-Augmented Generation) engine with support for multiple modes, memory, web search, and custom configurations.

    From: /tf/active/vicechatdev/docchat/app.py
  • function chat 68.5% similar

    Flask route handler that processes chat requests with RAG (Retrieval-Augmented Generation) capabilities, managing conversation sessions, chat history, and document-based question answering.

    From: /tf/active/vicechatdev/docchat/blueprint.py
  • function api_send_chat_message 67.7% similar

    Flask API endpoint that handles sending a message in a chat session, processes it through a hybrid RAG engine with configurable search and memory settings, and returns an AI-generated response with references.

    From: /tf/active/vicechatdev/vice_ai/complex_app.py
  • function data_section_analysis_chat 67.0% similar

    Async Flask route handler that processes chat messages for data section analysis, managing conversation history and integrating with a statistical analysis service.

    From: /tf/active/vicechatdev/vice_ai/new_app.py