Translation Memory
To leverage translation memory effectively for in-context learning, we probably need multiple mechanisms for memory retrieval. For example:
Selective memory: the agent pulls specific lines from data files and logs on demand
Total abstracted memory: a vectorized representation of the full memory, reduced with SVD or a similar factorization, so the entire memory can be carried in a compact, abstract form
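A minimal sketch of both mechanisms, assuming a Python agent: `selective_memory` is the line-level pull from files and logs, and `abstracted_memory` compresses the whole memory with a truncated SVD. `embed()` is a toy stand-in for whatever embedding model would actually be used, and all file names and sample data are hypothetical.

```python
import numpy as np

def selective_memory(path: str, keyword: str, max_lines: int = 5) -> list[str]:
    """Selective memory: pull only the lines of a data file or log that
    mention the keyword, instead of loading the whole file into context."""
    hits = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if keyword.lower() in line.lower():
                hits.append(line.rstrip())
                if len(hits) >= max_lines:
                    break
    return hits

def embed(texts: list[str], dim: int = 64) -> np.ndarray:
    """Toy embedding: hash tokens into a fixed-size bag-of-words vector.
    A real agent would call an embedding model here."""
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    return vecs

def abstracted_memory(entries: list[str], rank: int = 4) -> np.ndarray:
    """Total abstracted memory: embed every memory entry, then keep only the
    top singular vectors (truncated SVD) as a compact basis that summarizes
    the whole memory and can be compared against new queries."""
    matrix = embed(entries)
    _, _, vt = np.linalg.svd(matrix, full_matrices=False)
    return vt[:rank]  # rank x dim basis spanning the memory

if __name__ == "__main__":
    memory = [
        "user prefers a formal tone",
        "translate 'cliente' as 'client', not 'customer'",
        "project glossary lives in glossary.csv",
    ]
    basis = abstracted_memory(memory, rank=2)
    query = embed(["how should cliente be translated?"])
    print("projection onto memory basis:", query @ basis.T)
```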
Advanced RAG techniques
Academic survey of RAG techniques
Good chart of RAG techniques
Model each document as structured metadata plus a summary for multi-doc RAG (sketched after this list)
Advanced RAG overview by the founder of LlamaIndex
Overview of RAG practice
No-code RAG
RAG tutorial
Multimodal RAG “walking” a document
RAG plus agent
Application of an agent over a database
Use “this is the most relevant sentence in the context:” for better RAG
Rewrite the user query for RAG with a local LLM (also sketched after this list)
Interesting early discussion about .
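A minimal sketch of the "metadata plus summary per document" pattern from the list above, in plain Python with a crude token-overlap score standing in for an embedding retriever. `DocRecord` and all sample data are hypothetical, not any library's API.

```python
from dataclasses import dataclass, field

@dataclass
class DocRecord:
    doc_id: str
    metadata: dict                      # e.g. {"language": "es", "domain": "sales"}
    summary: str                        # short abstract used to route across documents
    chunks: list[str] = field(default_factory=list)

def overlap_score(query: str, text: str) -> int:
    """Crude relevance score: count of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def multi_doc_retrieve(query: str, docs: list[DocRecord], top_chunks: int = 2) -> list[str]:
    """Stage 1: route to the document whose summary best matches the query.
    Stage 2: rank and return chunks from only that document."""
    best_doc = max(docs, key=lambda d: overlap_score(query, d.summary))
    ranked = sorted(best_doc.chunks, key=lambda c: overlap_score(query, c), reverse=True)
    return ranked[:top_chunks]

if __name__ == "__main__":
    docs = [
        DocRecord("tm-es", {"language": "es"},
                  "Spanish translation memory for the sales glossary",
                  ["cliente -> client (not customer)", "factura -> invoice"]),
        DocRecord("tm-de", {"language": "de"},
                  "German translation memory for UI strings",
                  ["Einstellungen -> settings"]),
    ]
    print(multi_doc_retrieve("how is cliente rendered in the sales glossary", docs))
```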
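And a minimal sketch of pre-retrieval query rewriting with a local model, as in the last link above: `llm` here is any callable that maps a prompt string to a completion string (for example a thin wrapper around a locally served model), and the prompt wording is my own assumption rather than anything quoted from the linked discussion.

```python
from typing import Callable

REWRITE_PROMPT = (
    "Rewrite the user question as a short, keyword-rich search query for a "
    "translation-memory index. Return only the rewritten query.\n\n"
    "User question: {question}\nSearch query:"
)

def rewrite_query(llm: Callable[[str], str], question: str) -> str:
    """Ask the local model for a retrieval-friendly version of the question,
    falling back to the original question if the model returns nothing."""
    rewritten = llm(REWRITE_PROMPT.format(question=question)).strip()
    return rewritten or question

def answer(llm: Callable[[str], str],
           retrieve: Callable[[str], list[str]],
           question: str) -> str:
    """Retrieve with the rewritten query, then answer with the original
    question plus the retrieved context."""
    context = "\n".join(retrieve(rewrite_query(llm, question)))
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```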