Multi-Agent Simulations

It ought to be possible, in some scenarios (depending on resource constraints and model availability), to run a continuous multi-agent simulation that iteratively improves a draft translation.

One example I've implemented involves the following basic loop (each step is not necessarily blocked by the prior step); a code sketch follows the list:

  • Main translation loop

    • Forward translation bot drafts a new translation instance using few-shot learning based on the available example translations drafted so far by humans

    • Back-translation bot provides a back-translation of the new instance

    • Evaluation bot provides feedback on how to improve, correct, or revise the instance

    • Repeat the process until the translation draft stabilizes (if it does!)

  • Linguist bot builds and updates language resources, such as a multilingual project lexicon, a working grammar, machine grammar rules for output token biasing with the LLM, etc.

  • Project manager bot brings stable drafts to the human translation team for input, feedback, and correction. Corrected drafts are added to the pool of examples available to the main translation loop.
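Below is a minimal sketch of the main translation loop only, with each bot modeled as a prompt sent to a generic `llm` callable. The prompt wording, the `stabilized` heuristic, and the data shapes are illustrative assumptions, not the actual Codex or GPTeam implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Example:
    source: str   # source-language segment
    target: str   # human-approved translation

@dataclass
class DraftState:
    draft: str = ""
    back_translation: str = ""
    feedback: str = ""
    history: List[str] = field(default_factory=list)

def forward_translate(llm: Callable[[str], str], source: str,
                      examples: List[Example], feedback: str) -> str:
    """Forward-translation bot: few-shot prompt built from human examples."""
    shots = "\n".join(f"SOURCE: {e.source}\nTARGET: {e.target}" for e in examples)
    prompt = (
        "Translate the source text, following the style of the examples.\n"
        f"{shots}\n"
        f"SOURCE: {source}\n"
        f"Reviewer feedback to address (may be empty): {feedback}\n"
        "TARGET:"
    )
    return llm(prompt).strip()

def back_translate(llm: Callable[[str], str], draft: str) -> str:
    """Back-translation bot: render the draft back into the source language."""
    return llm(f"Back-translate this text literally:\n{draft}").strip()

def evaluate(llm: Callable[[str], str], source: str, draft: str, back: str) -> str:
    """Evaluation bot: feedback on how to improve, correct, or revise the draft."""
    prompt = (
        "Compare the source with the back-translation of a draft and give "
        "concise feedback on how to improve the draft.\n"
        f"SOURCE: {source}\nDRAFT: {draft}\nBACK-TRANSLATION: {back}"
    )
    return llm(prompt).strip()

def stabilized(history: List[str], window: int = 2) -> bool:
    """Crude stability check: the draft has stopped changing across iterations."""
    return len(history) > window and len(set(history[-window:])) == 1

def translation_loop(llm: Callable[[str], str], source: str,
                     examples: List[Example], max_iters: int = 10) -> DraftState:
    state = DraftState()
    for _ in range(max_iters):
        state.draft = forward_translate(llm, source, examples, state.feedback)
        state.history.append(state.draft)
        state.back_translation = back_translate(llm, state.draft)
        state.feedback = evaluate(llm, source, state.draft, state.back_translation)
        if stabilized(state.history):
            break  # stable draft -> hand off to the human translation team
    return state
```

In this sketch, drafts corrected by the human team would simply be appended to `examples`, which is how the project manager bot's role feeds back into the few-shot pool used by the forward-translation bot.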

Early proof-of-concept sample using the GPTeam library