🔍 Code Extractor

function conversation_example

Maturity: 46

Demonstrates a multi-turn conversational RAG system with chat history management, showing how follow-up questions are automatically optimized based on conversation context.

File:
/tf/active/vicechatdev/docchat/example_usage.py
Lines:
135 - 164
Complexity:
moderate

Purpose

This function serves as a demonstration/example of implementing a conversational RAG (Retrieval-Augmented Generation) system with memory. It illustrates how to maintain chat history across multiple turns, pass context between queries, and handle follow-up questions that reference previous conversation content. The example shows the pattern of appending user queries and assistant responses to a chat history list, which enables the RAG system to understand contextual references in subsequent questions.
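To make the "contextual references" idea concrete, here is a minimal, purely illustrative sketch of how a follow-up query can be folded together with recent history before retrieval. The function name `contextualize_query` is hypothetical and not part of DocChatRAG; real systems (presumably including DocChatRAG internally) typically use an LLM to rewrite the follow-up question rather than simple string concatenation.

```python
def contextualize_query(query: str, chat_history: list, max_context_turns: int = 2) -> str:
    """Fold the most recent conversation turns into the query so a
    retriever can resolve references like "the first topic you mentioned".

    chat_history is a list of {'role': ..., 'content': ...} dicts, as in
    the example below. One turn = one user message + one assistant message.
    """
    recent = chat_history[-(max_context_turns * 2):]  # last N turn pairs
    context = " ".join(msg["content"] for msg in recent)
    if not context:
        # No history yet: the query stands on its own.
        return query
    return f"Conversation so far: {context}\nFollow-up question: {query}"
```

This heuristic only prepends raw history; an LLM-based rewriter would instead produce a standalone question (e.g. "Tell me more about Topic A"), which usually retrieves better.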

Source Code

def conversation_example():
    """Example: Multi-turn conversation with history"""
    print("\n=== Conversation with History Example ===\n")
    
    rag = DocChatRAG()
    
    # Simulated conversation
    chat_history = []
    
    # First question
    query1 = "What topics are covered in the documents?"
    print(f"User: {query1}\n")
    
    result1 = rag.chat(query=query1, mode="basic", chat_history=chat_history)
    
    print(f"Assistant: {result1['response'][:200]}...\n")
    
    # Add to history
    chat_history.append({'role': 'user', 'content': query1})
    chat_history.append({'role': 'assistant', 'content': result1['response']})
    
    # Follow-up question (will be optimized based on history)
    query2 = "Tell me more about the first topic you mentioned"
    print(f"User: {query2}\n")
    
    result2 = rag.chat(query=query2, mode="basic", chat_history=chat_history)
    
    print(f"Assistant: {result2['response'][:200]}...\n")
    
    print("Note: The second query was automatically optimized based on conversation history!")

Return Value

This function does not return any value (implicitly returns None). It prints output to the console demonstrating the conversation flow, including user queries, assistant responses (truncated to 200 characters), and informational notes about query optimization.

Dependencies

  • pathlib
  • document_indexer
  • rag_engine
  • config

Required Imports

from pathlib import Path
from document_indexer import DocumentIndexer
from rag_engine import DocChatRAG
import config

Usage Example

from pathlib import Path
from document_indexer import DocumentIndexer
from rag_engine import DocChatRAG
import config

# Simply call the function to see the conversation example
conversation_example()

# The function will output:
# - Initial query about document topics
# - Assistant's response
# - Follow-up contextual query
# - Assistant's contextual response
# - Note about automatic query optimization

Best Practices

  • This is a demonstration function meant for learning and testing, not for production use
  • Chat history should be maintained as a list of dictionaries with 'role' and 'content' keys
  • Always append both user queries and assistant responses to maintain complete conversation context
  • The chat_history list grows with each turn, so implement truncation or summarization for long conversations in production
  • Follow-up questions benefit from conversation history as they can reference previous context
  • Consider implementing error handling when using this pattern in production code
  • The response truncation ([:200]) is only for display purposes in the example
  • Ensure the RAG system is properly initialized before starting a conversation
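The truncation advice above can be sketched as a small helper. This is an assumption-level example (the helper name `truncate_history` is not part of the DocChat codebase); it keeps only the most recent user/assistant turn pairs so the history passed to the model stays bounded.

```python
def truncate_history(chat_history: list, max_turns: int = 5) -> list:
    """Return only the most recent turn pairs from a chat history.

    chat_history is a list of {'role': ..., 'content': ...} dicts where
    each conversational turn contributes two messages (user + assistant).
    """
    max_messages = max_turns * 2  # one user + one assistant message per turn
    if len(chat_history) <= max_messages:
        return chat_history
    return chat_history[-max_messages:]
```

A sliding window like this loses older context entirely; production systems often summarize the dropped turns into a single synthetic message instead, trading precision for continuity.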

Similar Components

Components with related functionality, ranked by AI-powered semantic similarity:

  • function basic_rag_example 73.6% similar

    Demonstrates a basic RAG (Retrieval-Augmented Generation) workflow by initializing a DocChatRAG engine, executing a sample query about document topics, and displaying the response with metadata.

    From: /tf/active/vicechatdev/docchat/example_usage.py
  • function chat 68.5% similar

    Flask route handler that processes chat requests with RAG (Retrieval-Augmented Generation) capabilities, managing conversation sessions, chat history, and document-based question answering.

    From: /tf/active/vicechatdev/docchat/blueprint.py
  • function process_chat_background 67.0% similar

    Processes chat requests asynchronously in a background thread, managing RAG engine interactions, progress updates, and session state for various query modes including basic, extensive, full_reading, and deep_reflection.

    From: /tf/active/vicechatdev/docchat/app.py
  • function full_reading_example 65.3% similar

    Demonstrates the full reading mode of a RAG (Retrieval-Augmented Generation) system by processing all documents to answer a comprehensive query about key findings.

    From: /tf/active/vicechatdev/docchat/example_usage.py
  • function main_v45 63.0% similar

    Orchestrates and executes a series of example demonstrations for the DocChat system, including document indexing, RAG queries, and conversation modes.

    From: /tf/active/vicechatdev/docchat/example_usage.py