
Translation Memory



To leverage translation memory effectively for in-context learning, we probably need to think through multiple mechanisms for memory retrieval. For example:

  1. Selective memory: the agent can pull specific lines from data files and logs

  2. Total abstracted memory: some kind of vectorized version of memory, reduced with SVD or a similar technique, to represent the total memory in abstract form

  3. Advanced RAG techniques
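As a rough sketch of how the first two mechanisms could fit together, the toy Python below compresses the whole memory matrix with SVD ("total abstracted memory") and then retrieves the single most relevant entry for a query ("selective memory"). The entries, the character-bigram embedding, and the `retrieve` helper are all hypothetical stand-ins; a real system would use a proper sentence-embedding model.

```python
import numpy as np

# Hypothetical translation-memory entries: (source, target) pairs.
memory = [
    ("In the beginning", "Au commencement"),
    ("God created the heavens", "Dieu créa les cieux"),
    ("and the earth", "et la terre"),
]

# Toy embedding: character-bigram counts. A real system would use a
# sentence-embedding model here instead.
def embed(text, dim=512):
    v = np.zeros(dim)
    for a, b in zip(text, text[1:]):
        v[(ord(a) * 31 + ord(b)) % dim] += 1.0
    return v

E = np.stack([embed(src) for src, _ in memory])

# "Total abstracted memory": compress the memory matrix with SVD,
# keeping only the top-k singular directions.
U, S, Vt = np.linalg.svd(E, full_matrices=False)
k = 2
E_reduced = U[:, :k] * S[:k]  # each row is one entry in the abstract space

# "Selective memory": pull the single most similar entry for a query.
def retrieve(query):
    q = embed(query) @ Vt[:k].T  # project the query into the same space
    sims = E_reduced @ q / (
        np.linalg.norm(E_reduced, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return memory[int(np.argmax(sims))]

print(retrieve("In the beginning"))
```

The point of the SVD step is that the abstract representation stays a fixed size no matter how large the memory grows, while the retrieval step still surfaces concrete entries for in-context use.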

Links to RAG resources

  1. [Academic survey of RAG techniques](https://x.com/omarsar0/status/1738354427759612222?s=46)

  2. [Good chart of RAG techniques](https://x.com/bindureddy/status/1738367792729264207?s=46)

  3. [Model each doc as structured metadata plus a summary for multi-doc RAG](https://x.com/llama_index/status/1737515390664872040?s=46)

  4. [Advanced RAG overview by the founder of LlamaIndex](https://www.youtube.com/watch?v=TRjq7t2Ms5I)

  5. [Overview of RAG practice](https://x.com/jerryjliu0/status/1736916360314458253?s=46)

  6. [No-code RAG](https://x.com/llama_index/status/1736141134345437668?s=46)

  7. [RAG tutorial](https://x.com/llama_index/status/1735364513535496201?s=46)

  8. [Multimodal RAG “walking” a document](https://x.com/hrishioa/status/1734935026201239800?s=46)

  9. [RAG plus agent](https://x.com/llama_index/status/1734250820487774264?s=46)

  10. [Application of an agent over a database](https://x.com/hrishioa/status/1733804354547966234?s=46)

  11. [Use “this is the most relevant sentence in the context:” for better RAG](https://x.com/helloiamleonie/status/1732676100495421537?s=46)

  12. [Rewrite the user query for RAG with a local LLM](https://x.com/langchainai/status/1731724086072779109?s=46)

Interesting early discussion about stateful bots.
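Resource 3 above (modeling each document as structured metadata plus a summary) can be sketched as a two-stage lookup: filter cheaply on metadata first, then do similarity matching only over summaries. The `DocRecord` shape and `route` helper below are illustrative assumptions, not an API from any of the linked resources.

```python
from dataclasses import dataclass

@dataclass
class DocRecord:
    # Structured metadata: cheap to filter on, no embeddings needed.
    title: str
    tags: list
    # Short summary: what gets embedded/matched for coarse retrieval.
    summary: str
    # Full text: fetched only after a metadata + summary match.
    full_text: str

def route(query_tags, docs):
    # Stage 1 of multi-doc RAG: narrow the candidate set by metadata
    # overlap before any (more expensive) summary-similarity scoring.
    return [d for d in docs if set(query_tags) & set(d.tags)]

docs = [
    DocRecord("Greek lexicon", ["greek", "lexicon"],
              "Entries for NT Greek vocabulary.", "..."),
    DocRecord("Style guide", ["style"],
              "Target-language drafting conventions.", "..."),
]
print([d.title for d in route({"greek"}, docs)])  # only the lexicon survives
```

Keeping metadata and summaries separate from full text means the router never loads whole documents into context until a document has already passed both stages.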