function conversation_example
Demonstrates a multi-turn conversational RAG system with chat history management, showing how follow-up questions are automatically optimized based on conversation context.
File: /tf/active/vicechatdev/docchat/example_usage.py
Lines: 135-164
Complexity: moderate
Purpose
This function demonstrates a conversational RAG (Retrieval-Augmented Generation) system with memory. It illustrates how to maintain chat history across multiple turns, pass context between queries, and handle follow-up questions that reference earlier conversation content. The example follows the pattern of appending each user query and assistant response to a chat history list, which lets the RAG system resolve contextual references in subsequent questions.
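For reference, each history entry is a plain dict with 'role' and 'content' keys. A hypothetical typed sketch of that shape (the TypedDict names below are illustrative, not part of the docchat codebase):

from typing import Literal, TypedDict

class ChatMessage(TypedDict):
    """One turn in the conversation; roles alternate between 'user' and 'assistant'."""
    role: Literal['user', 'assistant']
    content: str

# The history passed to DocChatRAG.chat() is a list of these dicts,
# ordered oldest to newest.
ChatHistory = list[ChatMessage]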
Source Code
def conversation_example():
    """Example: Multi-turn conversation with history"""
    print("\n=== Conversation with History Example ===\n")

    rag = DocChatRAG()

    # Simulated conversation
    chat_history = []

    # First question
    query1 = "What topics are covered in the documents?"
    print(f"User: {query1}\n")

    result1 = rag.chat(query=query1, mode="basic", chat_history=chat_history)
    print(f"Assistant: {result1['response'][:200]}...\n")

    # Add to history
    chat_history.append({'role': 'user', 'content': query1})
    chat_history.append({'role': 'assistant', 'content': result1['response']})

    # Follow-up question (will be optimized based on history)
    query2 = "Tell me more about the first topic you mentioned"
    print(f"User: {query2}\n")

    result2 = rag.chat(query=query2, mode="basic", chat_history=chat_history)
    print(f"Assistant: {result2['response'][:200]}...\n")

    print("Note: The second query was automatically optimized based on conversation history!")
Return Value
This function does not return any value (implicitly returns None). It prints output to the console demonstrating the conversation flow, including user queries, assistant responses (truncated to 200 characters), and informational notes about query optimization.
Dependencies
pathlib, document_indexer, rag_engine, config
Required Imports
from pathlib import Path
from document_indexer import DocumentIndexer
from rag_engine import DocChatRAG
import config
Usage Example
from example_usage import conversation_example

# Simply call the function to see the conversation example
conversation_example()
# The function will output:
# - Initial query about document topics
# - Assistant's response
# - Follow-up contextual query
# - Assistant's contextual response
# - Note about automatic query optimization
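The same pattern extends naturally to an interactive loop. A minimal sketch, assuming only the DocChatRAG.chat signature shown in the source code above:

from rag_engine import DocChatRAG

def interactive_chat():
    """Minimal REPL built on the same history-passing pattern."""
    rag = DocChatRAG()
    chat_history = []
    while True:
        query = input("You: ").strip()
        if query.lower() in {"quit", "exit"}:
            break
        result = rag.chat(query=query, mode="basic", chat_history=chat_history)
        print(f"Assistant: {result['response']}\n")
        # Record both sides of the turn so follow-ups stay contextual
        chat_history.append({'role': 'user', 'content': query})
        chat_history.append({'role': 'assistant', 'content': result['response']})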
Best Practices
- This is a demonstration function meant for learning and testing, not for production use
- Chat history should be maintained as a list of dictionaries with 'role' and 'content' keys
- Always append both user queries and assistant responses to maintain complete conversation context
- The chat_history list grows with each turn, so implement truncation or summarization for long conversations in production (see the truncation sketch after this list)
- Follow-up questions benefit from conversation history as they can reference previous context
- Consider implementing error handling when using this pattern in production code
- The response truncation ([:200]) is only for display purposes in the example
- Ensure the RAG system is properly initialized before starting a conversation
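As referenced above, a minimal sketch of history truncation. The helper name and keep_last_n parameter are illustrative, not part of the docchat API:

def truncate_history(chat_history, keep_last_n=6):
    """Keep only the most recent turns (one turn = user + assistant message).

    Hypothetical helper: docchat does not ship this; tune keep_last_n to
    balance context quality against prompt size.
    """
    max_messages = keep_last_n * 2  # each turn contributes two messages
    return chat_history[-max_messages:]

# Applied before each call so the prompt stays bounded:
# chat_history = truncate_history(chat_history)
# result = rag.chat(query=query, mode="basic", chat_history=chat_history)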
Similar Components
AI-powered semantic similarity - components with related functionality:
- function basic_rag_example (73.6% similar)
- function chat (68.5% similar)
- function process_chat_background (67.0% similar)
- function full_reading_example (65.3% similar)
- function main_v45 (63.0% similar)