function _continue_to_analysis
Continues the statistical analysis workflow after data retrieval by configuring analysis parameters, executing the analysis via StatisticalAnalysisService, and updating the workflow progress status.
Location: /tf/active/vicechatdev/full_smartstat/app.py
Lines: 232-311
Complexity: complex
Purpose
This function serves as a continuation handler in a multi-phase workflow for statistical analysis. It is called after data retrieval is complete to perform the actual statistical analysis. It retrieves session information, configures analysis parameters (type, target variables, confidence levels), executes the analysis using the specified AI model, verifies result consistency, and updates the workflow status. It's designed to work within a Flask application context with enhanced workflow tracking.
Source Code
def _continue_to_analysis(session_id: str, analysis_service: StatisticalAnalysisService, parameters: Dict[str, Any]):
    """Continue with statistical analysis after data retrieval"""
    try:
        logger.info(f"Starting analysis phase for session {session_id}")

        # Get workflows from app context
        workflows = getattr(app, 'enhanced_workflows', {})

        # Get the session and analysis configuration
        session = analysis_service.database_manager.get_session(session_id)
        if not session:
            logger.error(f"Session {session_id} not found for analysis")
            return

        # Configure analysis based on parameters
        from models import AnalysisType, AnalysisConfiguration
        analysis_config = AnalysisConfiguration(
            analysis_type=AnalysisType.DESCRIPTIVE,  # Default to descriptive
            target_variables=parameters.get('analysis_variables', []),
            confidence_level=parameters.get('confidence_level', 0.95),
            significance_level=parameters.get('significance_level', 0.05)
        )

        # Update session with analysis config
        session.analysis_config = analysis_config
        session.status = session.status  # Keep current status
        analysis_service.database_manager.update_session(session)

        # Extract model parameter from enhanced workflow parameters
        ai_model = parameters.get('ai_model', 'gpt-4o')
        logger.info(f"Using AI model for statistical analysis: {ai_model}")

        # Start the actual statistical analysis
        result = analysis_service.generate_and_execute_analysis(
            session_id=session_id,
            analysis_config=analysis_config,
            user_query="Statistical analysis of laboratory numerical results",
            model=ai_model,  # Pass the model parameter from UI
            include_previous_context=False  # First analysis, no previous context needed
        )

        # Force refresh of results after analysis to match regular route behavior
        if result and result.get('success'):
            logger.info(f"Smart workflow analysis completed successfully, checking results consistency for session {session_id}")
            # Ensure analysis results are properly accessible
            consistency_ok = ensure_analysis_results_consistency(session_id)
            if consistency_ok:
                logger.info(f"Smart workflow analysis results verified for session {session_id}")
            else:
                logger.warning(f"Smart workflow analysis completed but results verification failed for session {session_id}")
        else:
            # Guard against result being None before calling .get()
            error_detail = result.get('error', 'Unknown error') if result else 'No result returned'
            logger.error(f"Smart workflow analysis failed for session {session_id}: {error_detail}")

        # Update progress to completed
        if session_id in workflows:
            if result and result.get('success'):
                workflows[session_id].update({
                    'status': 'completed',
                    'progress': 100,
                    'message': 'Analysis completed successfully',
                })
                logger.info(f"Analysis completed successfully for session {session_id}")
            else:
                workflows[session_id].update({
                    'status': 'error',
                    'progress': 100,
                    'message': 'Analysis failed - check logs for details',
                })
                logger.error(f"Analysis failed for session {session_id}: {result}")
    except Exception as e:
        logger.error(f"Error in analysis continuation for session {session_id}: {str(e)}")
        workflows = getattr(app, 'enhanced_workflows', {})
        if session_id in workflows:
            workflows[session_id].update({
                'status': 'error',
                'progress': 100,
                'message': f'Analysis error: {str(e)}',
            })
Parameters
| Name | Type | Default | Kind |
|---|---|---|---|
| session_id | str | - | positional_or_keyword |
| analysis_service | StatisticalAnalysisService | - | positional_or_keyword |
| parameters | Dict[str, Any] | - | positional_or_keyword |
Parameter Details
session_id: Unique identifier (string) for the analysis session. Used to retrieve session data, track workflow progress, and store analysis results. Must correspond to an existing session in the database.
analysis_service: Instance of StatisticalAnalysisService that provides methods for database management, session retrieval/update, and executing statistical analysis. Must be properly initialized with database connection and configuration.
parameters: Dictionary containing analysis configuration parameters. Expected keys include: 'analysis_variables' (list of variable names to analyze), 'confidence_level' (float, default 0.95), 'significance_level' (float, default 0.05), and 'ai_model' (string, default 'gpt-4o'). Additional parameters may be present from the workflow context.
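Because the configuration values are read with dict .get() lookups, a caller may pass only a subset of keys and the documented defaults fill in the rest. A minimal sketch (the variable name is illustrative):

```python
# Only 'analysis_variables' is supplied; the remaining keys fall back
# to the defaults used inside _continue_to_analysis.
parameters = {'analysis_variables': ['glucose']}

confidence_level = parameters.get('confidence_level', 0.95)
significance_level = parameters.get('significance_level', 0.05)
ai_model = parameters.get('ai_model', 'gpt-4o')

print(confidence_level, significance_level, ai_model)  # 0.95 0.05 gpt-4o
```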
Return Value
This function does not return any value (implicit None). It operates through side effects: updating the session in the database, modifying the global 'enhanced_workflows' dictionary in the Flask app context with status updates ('completed' or 'error'), and logging progress. Success or failure is communicated through the workflow status dictionary.
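Since success or failure is reported only through the workflow dictionary, callers typically poll it after launching the continuation. A minimal polling sketch (wait_for_analysis is a hypothetical helper; the timeout and interval values are illustrative):

```python
import time

def wait_for_analysis(app, session_id, timeout=300, poll_interval=2):
    """Poll app.enhanced_workflows until the workflow reaches a
    terminal status ('completed' or 'error') or the timeout expires.

    Returns the workflow dict on completion, or None on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        workflow = getattr(app, 'enhanced_workflows', {}).get(session_id)
        if workflow and workflow.get('status') in ('completed', 'error'):
            return workflow
        time.sleep(poll_interval)
    return None  # timed out without reaching a terminal status
```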
Dependencies
flask, logging, typing, models, services, pathlib
Required Imports
from flask import Flask
import logging
from typing import Dict, Any
from services import StatisticalAnalysisService
Conditional/Optional Imports
These imports are only needed under specific conditions:
from models import AnalysisType, AnalysisConfiguration
Condition: imported within the function body to configure analysis parameters
Required (conditional)
Usage Example
# Assuming Flask app setup with enhanced_workflows tracking
from services import StatisticalAnalysisService
from flask import Flask
import logging

app = Flask(__name__)
app.enhanced_workflows = {}
logger = logging.getLogger(__name__)

# Initialize analysis service (db_manager and config are assumed to be
# created elsewhere in the application)
analysis_service = StatisticalAnalysisService(
    database_manager=db_manager,
    config=config
)

# Define parameters for analysis
parameters = {
    'analysis_variables': ['hemoglobin', 'glucose', 'cholesterol'],
    'confidence_level': 0.95,
    'significance_level': 0.05,
    'ai_model': 'gpt-4o'
}

# Initialize workflow tracking
session_id = 'abc-123-def-456'
app.enhanced_workflows[session_id] = {
    'status': 'analyzing',
    'progress': 50,
    'message': 'Starting analysis'
}

# Execute analysis continuation (typically called in a background thread)
with app.app_context():
    _continue_to_analysis(
        session_id=session_id,
        analysis_service=analysis_service,
        parameters=parameters
    )

# Check workflow status after completion
status = app.enhanced_workflows[session_id]['status']
print(f"Analysis status: {status}")
Best Practices
- This function should be called in a background thread or async context to avoid blocking the main application thread during analysis
- Ensure the Flask app context is active before calling this function (use 'with app.app_context():')
- The session_id must exist in both the database and the enhanced_workflows dictionary before calling
- Always initialize app.enhanced_workflows as a dictionary before using this function
- Handle exceptions at the caller level as this function logs errors but doesn't raise them
- The 'ensure_analysis_results_consistency' function must be defined in the same module
- Monitor the enhanced_workflows dictionary for status updates to track progress
- Validate that analysis_service.database_manager is properly initialized with database connection
- Consider implementing timeout mechanisms for long-running analyses
- The AI model specified in parameters must be properly configured with valid API credentials
Similar Components
AI-powered semantic similarity - components with related functionality:
- function run_analysis_async (69.1% similar)
- function smartstat_run_analysis (66.7% similar)
- function analyze_data (66.4% similar)
- function get_analysis_progress (64.3% similar)
- function demo_analysis_workflow (63.5% similar)