function main_v23
Interactive CLI function that allows users to select and run document processing test scenarios with varying document counts, providing feedback on test success and next steps.
/tf/active/vicechatdev/contract_validity_analyzer/test_real_documents.py
166 - 209
moderate
Purpose
This function serves as an interactive entry point for testing a document processing system with real documents. It presents users with predefined test scenarios (3, 5, or 10 documents) or allows custom document counts, executes the selected test via test_with_real_documents(), and provides guidance on next steps based on test results. It's designed for validation before running full-scale document analysis.
Source Code
def main():
    """Main function to run real document tests."""
    print("Starting real document test...")

    # Test with different document counts
    test_scenarios = [
        ("Quick test", 3),
        ("Standard test", 5),
        ("Extended test", 10)
    ]

    print("\nAvailable test scenarios:")
    for i, (name, count) in enumerate(test_scenarios, 1):
        print(f"{i}. {name} - Process {count} documents")

    try:
        choice = input("\nSelect test scenario (1-3) or enter custom number of documents: ").strip()
        if choice in ['1', '2', '3']:
            scenario_idx = int(choice) - 1
            scenario_name, max_docs = test_scenarios[scenario_idx]
            print(f"\nRunning {scenario_name}...")
        else:
            max_docs = int(choice)
            scenario_name = f"Custom test with {max_docs} documents"
            print(f"\nRunning {scenario_name}...")
    except (ValueError, KeyboardInterrupt):
        print("\nUsing default: Quick test with 3 documents")
        max_docs = 3
        scenario_name = "Quick test"

    success = test_with_real_documents(max_docs)

    if success:
        print("\n🚀 Ready to process the full dataset!")
        print("To run the full analysis, use:")
        print("  python main.py")
        print("  or")
        print("  python main.py --config custom_config.yaml")
    else:
        print("\n🔧 Please review the logs and fix any issues before processing the full dataset")

    return success
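The scenario-selection logic above mixes interactive input with parsing, which makes it hard to unit-test. A minimal sketch of extracting it into a pure helper (the name parse_scenario_choice and this refactor are illustrative, not part of the original module):

```python
# Hypothetical refactor: scenario selection as a pure function,
# so it can be tested without interactive input.
TEST_SCENARIOS = [
    ("Quick test", 3),
    ("Standard test", 5),
    ("Extended test", 10),
]

def parse_scenario_choice(choice):
    """Map raw user input to (scenario_name, max_docs).

    Falls back to the Quick test on any unparseable input,
    mirroring the defaulting behavior of the documented function.
    """
    choice = choice.strip()
    if choice in ("1", "2", "3"):
        name, max_docs = TEST_SCENARIOS[int(choice) - 1]
        return name, max_docs
    try:
        max_docs = int(choice)
        return f"Custom test with {max_docs} documents", max_docs
    except ValueError:
        return "Quick test", 3

print(parse_scenario_choice("2"))    # ('Standard test', 5)
print(parse_scenario_choice("7"))    # ('Custom test with 7 documents', 7)
print(parse_scenario_choice("oops"))  # ('Quick test', 3)
```

With the parsing isolated, main() would only collect the input string and print the results, keeping the interactive surface thin.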
Return Value
Returns True if test_with_real_documents() completed successfully and False otherwise, signaling whether the system is ready for full dataset processing.
Dependencies
os, sys, logging, json, pathlib, datetime
Required Imports
import os
import sys
import logging
import json
from pathlib import Path
from datetime import datetime
from config.config import Config
from core.analyzer import ContractAnalyzer
Usage Example
if __name__ == '__main__':
    # Run the interactive test function
    success = main()
    # Exit with appropriate status code
    sys.exit(0 if success else 1)

# Alternative: call main() directly in a script
from your_module import main

# This will prompt the user for test scenario selection
test_passed = main()
if test_passed:
    print('Tests passed, ready for production')
else:
    print('Tests failed, review logs')
Best Practices
- This function should be called as the main entry point for testing, typically in an if __name__ == '__main__' block
- Ensure test_with_real_documents() function is properly implemented before calling main()
- The function handles KeyboardInterrupt gracefully, allowing users to cancel input
- Invalid input defaults to the 'Quick test' scenario with 3 documents for safety
- Review console output and logs after running to understand test results
- Use the return value to determine if the system is ready for full-scale processing
- The function assumes test_with_real_documents() is defined in the same scope or imported
- Consider redirecting output to a log file for record-keeping during testing
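Following the last point above, console output can be captured in a timestamped log file for record-keeping. A minimal sketch, assuming nothing about the real module: run_with_log and the stand-in callable are illustrative; in practice fn would be the documented main().

```python
import contextlib
from datetime import datetime
from pathlib import Path

def run_with_log(fn, log_dir="test_logs"):
    """Run fn() with its stdout redirected into a timestamped log file."""
    Path(log_dir).mkdir(exist_ok=True)
    log_path = Path(log_dir) / f"test_run_{datetime.now():%Y%m%d_%H%M%S}.log"
    with open(log_path, "w") as log_file, contextlib.redirect_stdout(log_file):
        result = fn()  # all print() output inside fn goes to the log file
    print(f"Log written to {log_path}")
    return result

# Stand-in for the interactive test function:
ok = run_with_log(lambda: print("Running Quick test...") or True)
```

Note that contextlib.redirect_stdout fully redirects output rather than mirroring it to both console and file; for interactive use you may prefer to keep prompts on the console and log only results.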
Tags
Similar Components
AI-powered semantic similarity - components with related functionality:
- function main_v21 (66.8% similar)
- function main_v39 (65.8% similar)
- function test_document_processing (64.6% similar)
- function main_v46 (64.5% similar)
- function test_with_real_documents (63.3% similar)