🔍 Code Extractor

class ConversationContextManager

Maturity: 46

Advanced conversation context manager that analyzes conversation history, extracts topics, builds reference maps, and generates contextual intelligence for multi-turn conversations.

File: /tf/active/vicechatdev/e-ink-llm/conversation_context.py
Lines: 57 - 416
Complexity: complex

Purpose

This class manages conversation context by analyzing exchange history, identifying topics and patterns, tracking references between exchanges, building problem-solving chains, and generating enhanced context summaries. It provides comprehensive conversation intelligence including topic extraction, key insights, reference mapping, and contextual prompt enhancements to help maintain coherent multi-turn conversations with awareness of previous exchanges.

Source Code

class ConversationContextManager:
    """Advanced conversation management with contextual intelligence"""
    
    def __init__(self, session_manager: SessionManager):
        """Initialize conversation context manager"""
        self.session_manager = session_manager
        self.logger = logging.getLogger(__name__)
        
        # Context analysis patterns
        self.reference_patterns = {
            'step_reference': r'(?:in )?step (\d+)',
            'exchange_reference': r'(?:in )?(?:exchange|turn) (\d+)',
            'previous_reference': r'(?:previously|earlier|before)',
            'solution_reference': r'(?:the )?(?:solution|approach|method)',
            'question_reference': r'(?:the )?(?:question|problem|issue)',
            'data_reference': r'(?:the )?(?:data|results|findings)'
        }
        
        # Topic extraction keywords
        self.topic_keywords = {
            'analysis': ['analyze', 'analysis', 'examine', 'study', 'investigate'],
            'problem_solving': ['solve', 'solution', 'problem', 'issue', 'challenge'],
            'data_processing': ['data', 'process', 'calculate', 'compute', 'transform'],
            'visualization': ['chart', 'graph', 'plot', 'visualize', 'diagram'],
            'collaboration': ['review', 'feedback', 'collaborate', 'discuss', 'refine'],
            'documentation': ['document', 'record', 'notes', 'summary', 'report']
        }
    
    async def get_enhanced_conversation_context(self, 
                                              conversation_id: str,
                                              current_input: str = "") -> ConversationContext:
        """
        Get enhanced conversation context with timeline and references
        
        Args:
            conversation_id: Conversation ID
            current_input: Current input to analyze for references
            
        Returns:
            ConversationContext with comprehensive conversation intelligence
        """
        self.logger.info(f"Building enhanced context for conversation {conversation_id}")
        
        # Get basic conversation data
        conversation = self.session_manager.get_conversation(conversation_id)
        if not conversation:
            return self._create_empty_context(conversation_id)
        
        exchanges = self.session_manager.get_conversation_exchanges(conversation_id)
        
        # Build conversation turns
        conversation_turns = []
        for exchange in exchanges:
            turn = await self._build_conversation_turn(exchange)
            conversation_turns.append(turn)
        
        # Analyze conversation for topics and insights
        active_topics = self._extract_active_topics(conversation_turns)
        conversation_summary = await self._generate_conversation_summary(conversation_turns)
        key_insights = self._extract_key_insights(conversation_turns)
        problem_solving_chain = self._build_problem_solving_chain(conversation_turns)
        
        # Build reference map
        reference_map = self._build_reference_map(conversation_turns, current_input)
        
        # Generate recent context (last 3 exchanges)
        recent_context = self._generate_recent_context(conversation_turns[-3:])
        
        return ConversationContext(
            conversation_id=conversation_id,
            total_exchanges=len(conversation_turns),
            conversation_turns=conversation_turns,
            active_topics=active_topics,
            conversation_summary=conversation_summary,
            key_insights=key_insights,
            problem_solving_chain=problem_solving_chain,
            recent_context=recent_context,
            reference_map=reference_map
        )
    
    async def _build_conversation_turn(self, exchange: Dict[str, Any]) -> ConversationTurn:
        """Build a conversation turn from exchange data"""
        
        # Extract summaries from response text
        response_text = exchange.get('response_text', '')
        input_summary = self._summarize_input(exchange.get('input_file', ''))
        response_summary = self._summarize_response(response_text)
        
        # Extract topics and key points
        topics = self._extract_topics_from_text(response_text)
        key_points = self._extract_key_points(response_text)
        
        return ConversationTurn(
            exchange_id=exchange['exchange_id'],
            exchange_number=exchange['exchange_number'],
            timestamp=datetime.fromisoformat(exchange['timestamp']),
            input_summary=input_summary,
            response_summary=response_summary,
            input_file=exchange.get('input_file', ''),
            response_file=exchange.get('response_file', ''),
            topics=topics,
            key_points=key_points,
            processing_time=exchange.get('processing_time', 0.0),
            tokens_used=exchange.get('tokens_used', 0)
        )
    
    def _extract_active_topics(self, turns: List[ConversationTurn]) -> List[str]:
        """Extract active topics from conversation turns"""
        topic_counts = {}
        
        # Count topic occurrences across all turns
        for turn in turns:
            for topic in turn.topics:
                topic_counts[topic] = topic_counts.get(topic, 0) + 1
        
        # Return topics that appear in multiple turns or recent turns
        active_topics = []
        recent_topics = set()
        
        # Get topics from last 2 turns
        for turn in turns[-2:]:
            recent_topics.update(turn.topics)
        
        # Include frequent topics and recent topics
        for topic, count in topic_counts.items():
            if count > 1 or topic in recent_topics:
                active_topics.append(topic)
        
        return active_topics[:10]  # Limit to top 10
    
    async def _generate_conversation_summary(self, turns: List[ConversationTurn]) -> str:
        """Generate overall conversation summary"""
        if not turns:
            return "New conversation"
        
        # Simple summary based on turns
        total_exchanges = len(turns)
        main_topics = self._extract_active_topics(turns)
        
        if total_exchanges == 1:
            return f"Single exchange conversation focusing on {', '.join(main_topics[:3])}"
        else:
            return f"{total_exchanges}-turn conversation covering {', '.join(main_topics[:5])}"
    
    def _extract_key_insights(self, turns: List[ConversationTurn]) -> List[str]:
        """Extract key insights from conversation"""
        insights = []
        
        # Look for insights in key points
        for turn in turns:
            for point in turn.key_points:
                if any(keyword in point.lower() for keyword in ['insight', 'conclusion', 'finding', 'result']):
                    insights.append(f"Exchange {turn.exchange_number}: {point}")
        
        return insights[:5]  # Limit to top 5
    
    def _build_problem_solving_chain(self, turns: List[ConversationTurn]) -> List[Dict[str, Any]]:
        """Build problem-solving progression chain"""
        chain = []
        
        for turn in turns:
            # Identify problem-solving steps
            step_type = 'analysis'
            if any(keyword in turn.response_summary.lower() for keyword in ['solution', 'solve', 'fix']):
                step_type = 'solution'
            elif any(keyword in turn.response_summary.lower() for keyword in ['question', 'clarify', 'understand']):
                step_type = 'clarification'
            elif any(keyword in turn.response_summary.lower() for keyword in ['implement', 'apply', 'execute']):
                step_type = 'implementation'
            
            chain.append({
                'exchange_number': turn.exchange_number,
                'step_type': step_type,
                'description': turn.response_summary,
                'topics': turn.topics
            })
        
        return chain
    
    def _build_reference_map(self, 
                           turns: List[ConversationTurn], 
                           current_input: str) -> Dict[int, List[ConversationReference]]:
        """Build map of references between exchanges"""
        reference_map = {}
        
        # Analyze current input for references to previous exchanges
        if current_input:
            references = self._find_references_in_text(current_input, turns)
            if references:
                reference_map[len(turns) + 1] = references  # Next exchange number
        
        # Analyze turn responses for references to previous turns
        for i, turn in enumerate(turns[1:], 2):  # Start from second turn
            references = self._find_references_in_text(turn.response_summary, turns[:i-1])
            if references:
                reference_map[turn.exchange_number] = references
        
        return reference_map
    
    def _find_references_in_text(self, 
                                text: str, 
                                previous_turns: List[ConversationTurn]) -> List[ConversationReference]:
        """Find references to previous exchanges in text"""
        references = []
        text_lower = text.lower()
        
        # Look for explicit step/exchange references
        for pattern_name, pattern in self.reference_patterns.items():
            matches = re.finditer(pattern, text_lower)
            for match in matches:
                if pattern_name in ['step_reference', 'exchange_reference']:
                    try:
                        ref_number = int(match.group(1))
                        if 1 <= ref_number <= len(previous_turns):
                            referenced_turn = previous_turns[ref_number - 1]
                            references.append(ConversationReference(
                                exchange_number=ref_number,
                                exchange_id=referenced_turn.exchange_id,
                                reference_type=pattern_name,
                                referenced_content=referenced_turn.response_summary,
                                context_snippet=text[max(0, match.start()-20):match.end()+20],
                                relevance_score=0.9
                            ))
                    except (ValueError, IndexError):
                        continue
        
        # Look for topical references (lower confidence)
        for turn in previous_turns[-3:]:  # Only check recent turns for topical refs
            for topic in turn.topics:
                if topic.lower() in text_lower:
                    references.append(ConversationReference(
                        exchange_number=turn.exchange_number,
                        exchange_id=turn.exchange_id,
                        reference_type='topic',
                        referenced_content=topic,
                        context_snippet=f"Topic: {topic}",
                        relevance_score=0.6
                    ))
        
        return references
    
    def _generate_recent_context(self, recent_turns: List[ConversationTurn]) -> str:
        """Generate context string from recent turns"""
        if not recent_turns:
            return ""
        
        context_parts = []
        for turn in recent_turns:
            context_parts.append(
                f"Exchange {turn.exchange_number}: {turn.input_summary} → {turn.response_summary}"
            )
        
        return "\n".join(context_parts)
    
    def _summarize_input(self, input_file: str) -> str:
        """Create summary of input file"""
        if not input_file:
            return "No input file"
        
        filename = Path(input_file).name
        if filename.startswith('test_'):
            return f"Test document: {filename}"
        elif any(keyword in filename.lower() for keyword in ['question', 'problem']):
            return f"Question/Problem: {filename}"
        else:
            return f"Document: {filename}"
    
    def _summarize_response(self, response_text: str) -> str:
        """Create summary of response text"""
        if not response_text:
            return "No response"
        
        # Take first sentence or first 100 characters
        sentences = response_text.split('.')
        if sentences and len(sentences[0]) < 100:
            return sentences[0].strip() + "."
        else:
            return response_text[:100].strip() + "..."
    
    def _extract_topics_from_text(self, text: str) -> List[str]:
        """Extract topics from text using keyword matching"""
        topics = []
        text_lower = text.lower()
        
        for topic_category, keywords in self.topic_keywords.items():
            if any(keyword in text_lower for keyword in keywords):
                topics.append(topic_category)
        
        return topics
    
    def _extract_key_points(self, text: str) -> List[str]:
        """Extract key points from response text"""
        # Simple extraction of sentences with key indicators
        key_indicators = ['important', 'key', 'main', 'primary', 'essential', 'critical']
        
        sentences = text.split('.')
        key_points = []
        
        for sentence in sentences[:10]:  # Check first 10 sentences
            sentence = sentence.strip()
            if len(sentence) > 20 and any(indicator in sentence.lower() for indicator in key_indicators):
                key_points.append(sentence + ".")
        
        return key_points[:3]  # Return up to 3 key points
    
    def _create_empty_context(self, conversation_id: str) -> ConversationContext:
        """Create empty context for new conversations"""
        return ConversationContext(
            conversation_id=conversation_id,
            total_exchanges=0,
            conversation_turns=[],
            active_topics=[],
            conversation_summary="New conversation",
            key_insights=[],
            problem_solving_chain=[],
            recent_context="",
            reference_map={}
        )
    
    def generate_contextual_prompt_enhancement(self, 
                                             context: ConversationContext,
                                             current_input: str) -> str:
        """Generate prompt enhancement based on conversation context"""
        
        if context.total_exchanges == 0:
            return "This is the start of a new conversation."
        
        enhancements = []
        
        # Add conversation summary
        enhancements.append(f"CONVERSATION CONTEXT: {context.conversation_summary}")
        
        # Add recent context
        if context.recent_context:
            enhancements.append(f"RECENT EXCHANGES:\n{context.recent_context}")
        
        # Add active topics
        if context.active_topics:
            enhancements.append(f"ACTIVE TOPICS: {', '.join(context.active_topics)}")
        
        # Add references if found
        references = context.reference_map.get(context.total_exchanges + 1, [])
        if references:
            ref_text = []
            for ref in references:
                ref_text.append(f"References exchange {ref.exchange_number}: {ref.referenced_content}")
            enhancements.append(f"REFERENCES:\n" + "\n".join(ref_text))
        
        # Add key insights
        if context.key_insights:
            enhancements.append(f"KEY INSIGHTS:\n" + "\n".join(context.key_insights))
        
        # Add problem-solving progression
        if len(context.problem_solving_chain) > 1:
            chain_text = []
            for step in context.problem_solving_chain[-3:]:  # Last 3 steps
                chain_text.append(f"Step {step['exchange_number']} ({step['step_type']}): {step['description']}")
            enhancements.append(f"PROBLEM-SOLVING PROGRESSION:\n" + "\n".join(chain_text))
        
        return "\n\n".join(enhancements)

Parameters

Name Type Default Kind
session_manager SessionManager - positional

Parameter Details

session_manager: SessionManager instance that provides access to conversation data, exchanges, and session storage. Required for retrieving conversation history and exchange details.

Return Value

Instantiation returns a ConversationContextManager object. Key methods: get_enhanced_conversation_context() returns a ConversationContext dataclass with comprehensive conversation intelligence; generate_contextual_prompt_enhancement() returns a formatted context string for prompt enhancement.

Class Interface

Methods

__init__(self, session_manager: SessionManager)

Purpose: Initialize the conversation context manager with session manager and set up reference patterns and topic keywords

Parameters:

  • session_manager: SessionManager instance for accessing conversation data

Returns: None

async get_enhanced_conversation_context(self, conversation_id: str, current_input: str = '') -> ConversationContext

Purpose: Get comprehensive conversation context including timeline, references, topics, insights, and problem-solving chain

Parameters:

  • conversation_id: Unique identifier for the conversation
  • current_input: Optional current user input to analyze for references to previous exchanges

Returns: ConversationContext dataclass containing all conversation intelligence including turns, topics, summary, insights, and reference map

async _build_conversation_turn(self, exchange: Dict[str, Any]) -> ConversationTurn

Purpose: Build a ConversationTurn object from raw exchange data with summaries, topics, and key points

Parameters:

  • exchange: Dictionary containing exchange data with keys: exchange_id, exchange_number, timestamp, input_file, response_text, processing_time, tokens_used

Returns: ConversationTurn dataclass with processed exchange information

_extract_active_topics(self, turns: List[ConversationTurn]) -> List[str]

Purpose: Extract currently active topics from conversation turns based on frequency and recency

Parameters:

  • turns: List of ConversationTurn objects to analyze

Returns: List of up to 10 active topic strings

async _generate_conversation_summary(self, turns: List[ConversationTurn]) -> str

Purpose: Generate a high-level summary of the entire conversation

Parameters:

  • turns: List of ConversationTurn objects

Returns: String summary describing the conversation scope and main topics

_extract_key_insights(self, turns: List[ConversationTurn]) -> List[str]

Purpose: Extract key insights, conclusions, and findings from conversation turns

Parameters:

  • turns: List of ConversationTurn objects to analyze

Returns: List of up to 5 key insight strings with exchange numbers

_build_problem_solving_chain(self, turns: List[ConversationTurn]) -> List[Dict[str, Any]]

Purpose: Build a chain showing problem-solving progression through the conversation

Parameters:

  • turns: List of ConversationTurn objects

Returns: List of dictionaries with keys: exchange_number, step_type (analysis/solution/clarification/implementation), description, topics
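The step-type heuristic can be isolated as a small function. A minimal sketch mirroring the keyword checks in the source; classify_step is an illustrative stand-in, and note the check order matters (solution keywords win over implementation keywords):

```python
def classify_step(response_summary: str) -> str:
    """Mirror of the keyword heuristic in _build_problem_solving_chain."""
    summary = response_summary.lower()
    # Order matters: 'solution' keywords are checked before the others.
    if any(k in summary for k in ['solution', 'solve', 'fix']):
        return 'solution'
    if any(k in summary for k in ['question', 'clarify', 'understand']):
        return 'clarification'
    if any(k in summary for k in ['implement', 'apply', 'execute']):
        return 'implementation'
    return 'analysis'  # default when no keyword matches

step = classify_step("We apply the transformation to each row.")
```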

_build_reference_map(self, turns: List[ConversationTurn], current_input: str) -> Dict[int, List[ConversationReference]]

Purpose: Build a map of references between exchanges, identifying when exchanges reference previous ones

Parameters:

  • turns: List of ConversationTurn objects
  • current_input: Current user input to analyze for references

Returns: Dictionary mapping exchange numbers to lists of ConversationReference objects

_find_references_in_text(self, text: str, previous_turns: List[ConversationTurn]) -> List[ConversationReference]

Purpose: Find explicit and topical references to previous exchanges in given text

Parameters:

  • text: Text to analyze for references
  • previous_turns: List of previous ConversationTurn objects that could be referenced

Returns: List of ConversationReference objects with relevance scores
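The explicit-reference matching can be exercised on its own. A minimal sketch using the same step/exchange patterns as the class; find_explicit_references is an illustrative helper, not part of the source:

```python
import re

# The step/exchange patterns capture the referenced number in group 1.
reference_patterns = {
    'step_reference': r'(?:in )?step (\d+)',
    'exchange_reference': r'(?:in )?(?:exchange|turn) (\d+)',
}

def find_explicit_references(text, num_previous_turns):
    """Return (pattern_name, exchange_number, snippet) tuples for explicit refs."""
    text_lower = text.lower()
    found = []
    for name, pattern in reference_patterns.items():
        for match in re.finditer(pattern, text_lower):
            ref_number = int(match.group(1))
            if ref_number <= num_previous_turns:  # only refs to existing turns
                snippet = text[max(0, match.start() - 20):match.end() + 20]
                found.append((name, ref_number, snippet))
    return found

refs = find_explicit_references("Can you expand on step 2 and exchange 1?", 3)
```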

_generate_recent_context(self, recent_turns: List[ConversationTurn]) -> str

Purpose: Generate formatted context string from recent conversation turns

Parameters:

  • recent_turns: List of recent ConversationTurn objects (typically last 3)

Returns: Formatted string with exchange summaries

_summarize_input(self, input_file: str) -> str

Purpose: Create a brief summary of the input file based on filename

Parameters:

  • input_file: Path to input file

Returns: String summary of input file

_summarize_response(self, response_text: str) -> str

Purpose: Create a brief summary of response text (first sentence or 100 characters)

Parameters:

  • response_text: Full response text to summarize

Returns: String summary of response

_extract_topics_from_text(self, text: str) -> List[str]

Purpose: Extract topics from text using keyword matching against predefined topic categories

Parameters:

  • text: Text to analyze for topics

Returns: List of topic category strings found in text
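The keyword matching behind this method is easy to sketch in isolation. The two categories below are copied from the class's topic_keywords; extract_topics is an illustrative stand-in:

```python
# Abbreviated topic map (two of the six categories from the source).
topic_keywords = {
    'analysis': ['analyze', 'analysis', 'examine', 'study', 'investigate'],
    'visualization': ['chart', 'graph', 'plot', 'visualize', 'diagram'],
}

def extract_topics(text):
    """Return every category with at least one keyword present in the text."""
    text_lower = text.lower()
    return [topic for topic, keywords in topic_keywords.items()
            if any(keyword in text_lower for keyword in keywords)]

topics = extract_topics("Please plot the results and examine the outliers.")
```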

_extract_key_points(self, text: str) -> List[str]

Purpose: Extract key points from response text based on indicator keywords

Parameters:

  • text: Response text to analyze

Returns: List of up to 3 key point strings

_create_empty_context(self, conversation_id: str) -> ConversationContext

Purpose: Create an empty ConversationContext for new conversations with no history

Parameters:

  • conversation_id: Conversation identifier

Returns: Empty ConversationContext with default values

generate_contextual_prompt_enhancement(self, context: ConversationContext, current_input: str) -> str

Purpose: Generate formatted prompt enhancement text based on conversation context for inclusion in LLM prompts

Parameters:

  • context: ConversationContext object with conversation intelligence
  • current_input: Current user input

Returns: Formatted string with conversation context, recent exchanges, topics, references, insights, and problem-solving progression

Attributes

Name Type Description Scope
session_manager SessionManager Session manager instance for accessing conversation data and exchanges instance
logger logging.Logger Logger instance for logging context manager operations instance
reference_patterns Dict[str, str] Dictionary of regex patterns for detecting references to previous exchanges (step_reference, exchange_reference, previous_reference, solution_reference, question_reference, data_reference) instance
topic_keywords Dict[str, List[str]] Dictionary mapping topic categories to keyword lists for topic extraction (analysis, problem_solving, data_processing, visualization, collaboration, documentation) instance
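Because topic_keywords is a plain instance dict, domain-specific categories can be merged in after construction. A hypothetical sketch; the 'security' category and its keywords are illustrative, not part of the source:

```python
# Defaults as set up in __init__ (abbreviated to one category here).
default_topic_keywords = {
    'analysis': ['analyze', 'analysis', 'examine', 'study', 'investigate'],
}

# Hypothetical domain-specific additions; on a live instance these would be
# merged via context_mgr.topic_keywords.update(custom_topics).
custom_topics = {
    'security': ['encrypt', 'authenticate', 'vulnerability', 'audit'],
}

merged = {**default_topic_keywords, **custom_topics}
```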

Dependencies

  • asyncio
  • logging
  • pathlib
  • typing
  • dataclasses
  • datetime
  • json
  • re
  • session_manager

Required Imports

import asyncio
import logging
from pathlib import Path
from typing import List, Dict, Any, Optional, Tuple
from dataclasses import dataclass
from datetime import datetime
import json
import re
from session_manager import SessionManager

Usage Example

# Instantiate with session manager
from session_manager import SessionManager

session_mgr = SessionManager()
context_mgr = ConversationContextManager(session_mgr)

# Get enhanced context for a conversation
context = await context_mgr.get_enhanced_conversation_context(
    conversation_id="conv_123",
    current_input="Can you explain step 2 in more detail?"
)

# Access context information
print(f"Total exchanges: {context.total_exchanges}")
print(f"Active topics: {context.active_topics}")
print(f"Summary: {context.conversation_summary}")

# Generate prompt enhancement
enhancement = context_mgr.generate_contextual_prompt_enhancement(
    context=context,
    current_input="Can you explain step 2 in more detail?"
)
print(f"Context enhancement:\n{enhancement}")

# Access specific turns
for turn in context.conversation_turns:
    print(f"Exchange {turn.exchange_number}: {turn.response_summary}")

# Check references
if context.reference_map:
    for exchange_num, refs in context.reference_map.items():
        print(f"Exchange {exchange_num} references: {len(refs)} previous exchanges")

Best Practices

  • Always instantiate with a properly configured SessionManager instance
  • Use async/await when calling get_enhanced_conversation_context() as it's an async method
  • The class maintains no mutable state between calls - it's safe to reuse the same instance for multiple conversations
  • Reference patterns use regex matching - ensure conversation text is properly formatted
  • Context generation can be expensive for long conversations - consider caching results
  • The class analyzes up to the last 3 exchanges for recent context by default
  • Topic extraction is keyword-based - customize topic_keywords dict for domain-specific topics
  • Reference map building looks for explicit step/exchange references and topical references
  • Empty contexts are returned for non-existent conversations rather than raising errors
  • The class logs operations at INFO level - configure logging appropriately for debugging
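For the caching suggestion above, one workable scheme keys the cache on the conversation id, its exchange count, and the current input, so entries invalidate naturally when a new exchange is appended. A hypothetical sketch; CachedContextBuilder and the stub builder are illustrative, not part of the source:

```python
import asyncio

class CachedContextBuilder:
    """Cache expensive context builds, keyed by conversation state."""

    def __init__(self, build_fn):
        self._build_fn = build_fn  # e.g. manager.get_enhanced_conversation_context
        self._cache = {}

    async def get(self, conversation_id, exchange_count, current_input=""):
        # current_input is part of the key because the reference map depends on it.
        key = (conversation_id, exchange_count, current_input)
        if key not in self._cache:
            self._cache[key] = await self._build_fn(conversation_id, current_input)
        return self._cache[key]

# Demo with a stub builder standing in for the real async method.
build_calls = []

async def stub_build(conversation_id, current_input):
    build_calls.append(conversation_id)
    return f"context for {conversation_id}"

builder = CachedContextBuilder(stub_build)
first = asyncio.run(builder.get("conv_123", exchange_count=3))
second = asyncio.run(builder.get("conv_123", exchange_count=3))  # served from cache
```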

Similar Components

AI-powered semantic similarity - components with related functionality:

  • class ConversationContext 76.7% similar

    A dataclass that stores comprehensive conversation context including timeline, turns, topics, insights, and references for managing rich conversational state.

    From: /tf/active/vicechatdev/e-ink-llm/conversation_context.py
  • function conversation_example 64.3% similar

    Demonstrates a multi-turn conversational RAG system with chat history management, showing how follow-up questions are automatically optimized based on conversation context.

    From: /tf/active/vicechatdev/docchat/example_usage.py
  • class SessionManager_v1 64.3% similar

    SessionManager is a class that manages conversation sessions and tracking using SQLite database, storing conversations and their exchanges with metadata.

    From: /tf/active/vicechatdev/e-ink-llm/session_manager.py
  • class SimpleChatMemory 59.9% similar

    A simple chat memory manager that stores and retrieves conversation history between users and assistants with configurable history limits.

    From: /tf/active/vicechatdev/OneCo_hybrid_RAG copy.py
  • class ConversationReference 59.4% similar

    A dataclass that stores a reference to a previous conversation exchange, including metadata about the reference type, content, and relevance.

    From: /tf/active/vicechatdev/e-ink-llm/conversation_context.py