🔍 Code Extractor

Search Components

Full-Text: Fast keyword matching | Semantic: AI-powered understanding of intent (finds similar concepts)
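
A minimal sketch of the difference between the two modes, using toy pure-Python stand-ins (the extractor's actual index and embedding model are not shown; all names here are illustrative):

```python
# Illustrative only: toy stand-ins for the two search modes.
from math import sqrt

def full_text_search(components: dict[str, str], query: str) -> list[str]:
    """Full-text mode: case-insensitive keyword matching over indexed text."""
    q = query.lower()
    return [name for name, text in components.items() if q in text.lower()]

def semantic_search(vectors: dict[str, list[float]], query_vec: list[float], top_k: int = 5) -> list[str]:
    """Semantic mode: rank components by cosine similarity of embedding vectors."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    return sorted(vectors, key=lambda name: cosine(vectors[name], query_vec), reverse=True)[:top_k]
```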

Search Results for "langchain"

Found 17 matching component(s)

  • function validate_and_alternatives

    Validates whether a given keyword is a valid chemical compound, biochemical concept, or drug-related term using GPT-4, and returns alternative names/synonyms if valid.

    File: /tf/active/vicechatdev/offline_parser_docstore.py

    validation chemistry biochemistry drug-research llm
  • class MyEmbeddingFunction_v1

A custom embedding function class that generates embeddings for documents using OpenAI's API, with built-in text summarization for long documents and token management (a sketch of this pattern appears after the results list).

    File: /tf/active/vicechatdev/OneCo_hybrid_RAG copy.py

    embeddings openai chromadb vector-database text-summarization
  • class FixedProjectVictoriaGenerator

    Fixed Project Victoria Disclosure Generator that properly handles all warranty sections.

    File: /tf/active/vicechatdev/fixed_project_victoria_generator.py

    class fixedprojectvictoriagenerator
  • class MyEmbeddingFunction_v2

    A custom embedding function class that generates embeddings for text documents using OpenAI's embedding models, with automatic text summarization and token management for large documents.

    File: /tf/active/vicechatdev/offline_docstore_multi_vice.py

    embeddings openai chromadb text-processing summarization
  • function main_v62

    Entry point function that instantiates an ImprovedProjectVictoriaGenerator and executes its complete pipeline to generate disclosure documents.

    File: /tf/active/vicechatdev/improved_project_victoria_generator.py

    entry-point main-function disclosure-generation RAG document-generation
  • class QueryBasedExtractor_v2

A class that performs targeted information extraction from text using LLM-based query-guided extraction, with support for handling long documents through chunking and token management (see the extraction sketch after the results list).

    File: /tf/active/vicechatdev/OneCo_hybrid_RAG.py

    information-extraction text-processing llm openai query-based
  • class ExtensiveSearchManager

    Manages extensive search functionality including full document retrieval, summarization, and enhanced context gathering.

    File: /tf/active/vicechatdev/OneCo_hybrid_RAG.py

    class extensivesearchmanager
  • class MyEmbeddingFunction_v3

    A custom embedding function class that generates embeddings for text documents using OpenAI's embedding models, with automatic text summarization and token limit handling for large documents.

    File: /tf/active/vicechatdev/offline_docstore_multi.py

    embeddings openai vector-database chromadb text-processing
  • class DocChatRAG

Main RAG engine with three operating modes: (1) Basic RAG (similarity search), (2) Extensive (full document retrieval with preprocessing), and (3) Full Reading (process all documents).

    File: /tf/active/vicechatdev/docchat/rag_engine.py

    class docchatrag
  • function test_rag_engine

    A test function that validates the RAG engine's ability to correctly instantiate different LLM models (OpenAI, Anthropic, Gemini) based on configuration settings.

    File: /tf/active/vicechatdev/docchat/test_model_selection.py

    testing rag llm model-switching validation
  • function get_llm_instance

Factory function that creates and returns an appropriate LLM (Large Language Model) instance based on the specified model name, automatically detecting the provider (OpenAI, Azure OpenAI, or Anthropic) and configuring it with the given parameters (see the factory sketch after the results list).

    File: /tf/active/vicechatdev/docchat/llm_factory.py

    llm factory-pattern openai azure anthropic
  • function check_dependencies

    Validates the installation status of all required Python packages for the DocChat application by attempting to import each dependency and logging the results.

    File: /tf/active/vicechatdev/docchat/integration.py

    dependency-check validation installation package-management flask
  • class QueryBasedExtractor_v1

    A class that performs targeted information extraction from text using LLM-based query-guided extraction, with support for handling long documents through chunking and token management.

    File: /tf/active/vicechatdev/vice_ai/hybrid_rag_engine.py

    information-extraction llm openai text-processing query-based
  • class OneCo_hybrid_RAG_v3

    A class named OneCo_hybrid_RAG

    File: /tf/active/vicechatdev/vice_ai/hybrid_rag_engine.py

    class oneco_hybrid_rag
  • class ExtensiveSearchManager_v1

    Manages extensive search functionality including full document retrieval, summarization, and enhanced context gathering.

    File: /tf/active/vicechatdev/vice_ai/hybrid_rag_engine.py

    class extensivesearchmanager
  • class VersionComparisonService

    A service class that compares two versions of a document using LLM-based analysis, implementing smart segmentation and chunking for handling large documents efficiently.

    File: /tf/active/vicechatdev/CDocs/utils/version_comparison.py

    document-comparison version-control llm openai text-analysis
  • class QAUpdater

    Orchestrates a two-step Q&A document updating process that generates optimal search queries, retrieves information from internal and external sources, and uses an LLM to determine if updates are needed.

    File: /tf/active/vicechatdev/QA_updater/qa_engine/qa_updater.py

    qa-management document-updating llm-orchestration information-retrieval vector-search
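
The three MyEmbeddingFunction entries above describe the same pattern: embed documents with OpenAI while keeping each document under the embedding model's token limit. A minimal sketch of that pattern, assuming the openai and tiktoken packages and a ChromaDB-style __call__ interface; the model name, token limit, and truncation fallback are assumptions (the listed classes summarize long documents with an LLM rather than truncating them):

```python
# Sketch only: token-aware OpenAI embedding function (assumed model and limit).
import tiktoken
from openai import OpenAI

client = OpenAI()
MAX_TOKENS = 8000  # assumed per-document limit for the embedding model

def keep_under_limit(text: str) -> str:
    """Keep a document under the token limit (here by truncation; the listed
    classes summarize long documents instead)."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return text if len(tokens) <= MAX_TOKENS else enc.decode(tokens[:MAX_TOKENS])

class MyEmbeddingFunction:
    """ChromaDB-style embedding function: manage tokens, then embed via OpenAI."""
    def __call__(self, input: list[str]) -> list[list[float]]:
        prepared = [keep_under_limit(doc) for doc in input]
        response = client.embeddings.create(model="text-embedding-3-small", input=prepared)
        return [item.embedding for item in response.data]
```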
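
The QueryBasedExtractor entries describe query-guided extraction from long documents via chunking and token management. A hedged sketch of that flow; the chunk size, model name, and prompt wording are illustrative, not the project's actual values:

```python
# Sketch only: query-guided extraction over token-sized chunks.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("cl100k_base")

def chunk_by_tokens(text: str, max_tokens: int = 3000) -> list[str]:
    """Split text into chunks that each fit within the assumed token budget."""
    tokens = enc.encode(text)
    return [enc.decode(tokens[i:i + max_tokens]) for i in range(0, len(tokens), max_tokens)]

def extract(query: str, document: str) -> str:
    """Ask the LLM to pull only query-relevant passages from each chunk, then merge."""
    pieces = []
    for chunk in chunk_by_tokens(document):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Extract only the passages relevant to the query. Reply NONE if nothing is relevant."},
                {"role": "user", "content": f"Query: {query}\n\nText:\n{chunk}"},
            ],
        )
        answer = (resp.choices[0].message.content or "").strip()
        if answer and answer != "NONE":
            pieces.append(answer)
    return "\n\n".join(pieces)
```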
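
get_llm_instance (above) is a provider-detecting factory. A minimal sketch using LangChain chat-model classes, one common way to build such a factory; the name-matching rules and the azure: prefix convention are assumptions, not necessarily the project's actual logic:

```python
# Sketch only: pick a LangChain chat model based on the model name.
from langchain_openai import ChatOpenAI, AzureChatOpenAI
from langchain_anthropic import ChatAnthropic

def get_llm_instance(model_name: str, temperature: float = 0.0):
    """Return a chat model, detecting the provider from the model name."""
    if model_name.startswith("claude"):
        return ChatAnthropic(model=model_name, temperature=temperature)
    if model_name.startswith("azure:"):
        # assumed convention: "azure:<deployment-name>"; endpoint, key, and API
        # version come from the usual Azure OpenAI environment variables
        return AzureChatOpenAI(azure_deployment=model_name.split(":", 1)[1], temperature=temperature)
    return ChatOpenAI(model=model_name, temperature=temperature)

# Example: llm = get_llm_instance("gpt-4o", temperature=0.2)
```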

Search Examples