Translation Memory

To leverage translation memory effectively for in-context learning, we probably need to think through multiple mechanisms for memory retrieval. For example:

  1. Selective memory: the agent can pull specific lines from data files and logs (a minimal sketch follows this list)

  2. Total abstracted memory: a vectorized form of memory compressed with SVD or a similar dimensionality-reduction technique, so the whole memory can be represented in abstract form (see the SVD sketch after this list)

  3. Advanced RAG techniques:

     1. Model each doc as structured metadata plus a summary for multi-doc RAG: https://x.com/llama_index/status/1737515390664872040?s=46

     2. Advanced RAG overview by the founder of LlamaIndex: https://www.youtube.com/watch?v=TRjq7t2Ms5I

     3. Multimodal RAG “walking” a document: https://x.com/hrishioa/status/1734935026201239800?s=46

     4. Use “this is the most relevant sentence in the context:” for better RAG (prompt sketch after this list): https://x.com/helloiamleonie/status/1732676100495421537?s=46

     5. Rewrite the user query for RAG with a local LLM (sketch after this list): https://x.com/langchainai/status/1731724086072779109?s=46
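
A minimal sketch of item 1 (selective memory), assuming memory lives in plain-text data files and logs and that relevance is simple keyword matching; the file names and terms in the usage comment are hypothetical:

```python
from pathlib import Path

def pull_relevant_lines(paths, query_terms, max_lines=20):
    """Selective memory: return only the lines that mention a query term."""
    hits = []
    terms = [t.lower() for t in query_terms]
    for path in paths:
        for lineno, line in enumerate(Path(path).read_text().splitlines(), start=1):
            if any(t in line.lower() for t in terms):
                hits.append(f"{path}:{lineno}: {line.strip()}")
                if len(hits) >= max_lines:
                    return hits
    return hits

# Usage: inject only the matching lines into the prompt, not whole files.
# context = "\n".join(pull_relevant_lines(["agent.log", "data.csv"], ["error", "user_42"]))
```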
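
For item 2 (total abstracted memory), a sketch of the SVD idea with numpy, assuming memory entries have already been embedded as rows of a matrix; the embedding step and the rank k=32 are assumptions:

```python
import numpy as np

def compress_memory(embeddings: np.ndarray, k: int = 32):
    """Reduce an (n_entries, dim) embedding matrix to rank k via truncated SVD.

    Returns the compressed entries (n_entries, k) and the projection
    matrix (dim, k) needed to map new queries into the same space.
    """
    U, S, Vt = np.linalg.svd(embeddings, full_matrices=False)
    projection = Vt[:k].T                  # top-k right singular vectors, (dim, k)
    compressed = embeddings @ projection   # (n_entries, k)
    return compressed, projection

def query_memory(query_vec, compressed, projection, top_n=5):
    """Cosine-score all memory entries against a query in the reduced space."""
    q = query_vec @ projection
    scores = compressed @ q / (np.linalg.norm(compressed, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(scores)[::-1][:top_n]
```

The appeal is that the whole memory fits in one small matrix; the cost is lossy compression, and whether rank-k truncation keeps enough signal is the open question raised above.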
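
Sub-item 4 is a prompting trick rather than a retrieval change: ask the model to quote its evidence before answering. A sketch of the template; everything beyond the quoted anchor phrase is an assumption about phrasing:

```python
def build_anchored_prompt(question: str, context: str) -> str:
    """Ask the model to complete the quoted anchor phrase before answering,
    which grounds the answer in the retrieved context."""
    return (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        "First complete this line, then answer the question:\n"
        "This is the most relevant sentence in the context:"
    )
```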
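
Sub-item 5, query rewriting, could look like the sketch below; the ollama client and the llama3 model name stand in for whatever local LLM is available (assumptions, not from the linked thread), and the retriever in the usage comment is hypothetical:

```python
import ollama  # assumption: the `ollama` Python client and a local model are installed

def rewrite_query(user_query: str, model: str = "llama3") -> str:
    """Rewrite a conversational user query into a standalone search query
    before it reaches the retriever, using a small local model."""
    prompt = (
        "Rewrite the following question as a concise, self-contained search query. "
        f"Return only the rewritten query.\n\nQuestion: {user_query}"
    )
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"].strip()

# rewritten = rewrite_query("what about the second one we talked about?")
# docs = retriever.search(rewritten)  # hypothetical retriever
```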

Interesting early discussion about stateful bots.
