Chat with Resources

The Translator's Copilot should enable users to interact with their resources in a chat conversation, as well as through traditional means such as simply opening the various file types and working with them directly.

The overall aim is to provide powerful interaction opportunities through the simplest possible user interface.

Development Plan

What's involved in getting this into working order is the following (a minimal code sketch appears after the list):

  1. Some way of retrieving resources. This could be embedding functionality (i.e., vectorization):

    • a sentence_transformers model

    • global vector-based retrieval

    • We should have both a cloud instance (for faster connections) and a local instance of the same sentence_transformers model.

  2. Vectorstore - must be local.

  3. Chat UI - a VS Code webview.

  4. LLM - an API connection or a local model (ideally both: one running in the cloud on a powerful machine and the other running locally).
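
As a rough illustration of how these pieces could fit together, here is a minimal sketch in Python, assuming a locally loaded sentence_transformers model, an in-memory vector store, and a stubbed-out LLM call. The model name, the sample resources, and the `call_llm` helper are placeholders for illustration, not part of the Codex implementation.

```python
from sentence_transformers import SentenceTransformer, util

# Local instance of the embedding model; a cloud-hosted copy of the same
# model could sit behind the same interface for faster connections.
# "all-MiniLM-L6-v2" is only an illustrative choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Placeholder project resources; in practice these would be indexed from
# the translator's files and linked reference material.
resources = [
    "Lexicon entry: 'agape' denotes self-giving, other-centered love.",
    "Translation note: in John 3:16, 'world' refers to humanity broadly.",
    "Grammar note: the Greek aorist often presents an action as a whole.",
]

# The "vectorstore" here is simply the tensor of embeddings held in memory;
# the plan above calls for this store to live locally.
resource_embeddings = model.encode(resources, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k resources most similar to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, resource_embeddings, top_k=top_k)[0]
    return [resources[hit["corpus_id"]] for hit in hits]

def call_llm(prompt: str) -> str:
    """Stand-in for the real LLM connection (cloud API or local model)."""
    return f"[model response would be generated from]\n{prompt}"

def chat_with_resources(question: str) -> str:
    """Retrieve relevant resources and hand them to the LLM as context."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only these project resources:\n"
        f"{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(chat_with_resources("What does 'agape' mean in this passage?"))
```

In the editor itself, this retrieve-then-prompt flow would be driven from the VS Code webview chat UI rather than a script, but the shape of the pipeline stays the same.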
