# Translation Memory

To leverage translation memory effectively for in-context learning, we probably need to think through multiple mechanisms for memory retrieval. For example:

1. Selective memory: the agent pulls specific lines from data files and logs (see the first sketch after this list)
2. Total abstracted memory: a vectorized form of the full memory, compressed with SVD or a similar dimensionality-reduction technique so the whole history is represented in abstract form (see the second sketch after this list)
3. Advanced RAG techniques
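
A minimal sketch of mechanism 1, selective memory. The file path, matching rule, and `limit` cutoff are all assumptions for illustration; in practice the agent would decide what to pull based on the current task.

```python
# Hypothetical helper: pull only the lines of a data file or log that
# match the current task, instead of loading the whole file into context.
from pathlib import Path

def pull_lines(path: str, needle: str, limit: int = 20) -> list[str]:
    """Return up to `limit` lines containing `needle` (case-insensitive)."""
    hits = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if needle.lower() in line.lower():
            hits.append(line)
            if len(hits) == limit:
                break
    return hits

# e.g. pull_lines("translation.log", "glossary")
```

And a minimal sketch of mechanism 2, total abstracted memory, assuming memory entries can be embedded as vectors. `embed()` is a placeholder for any sentence-embedding model (not a specific library API); truncated SVD then compresses the whole memory into `k` latent dimensions.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    # Placeholder: swap in a real embedding model here.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))

memories = [
    "user prefers formal register",
    "glossary: 'bank' -> financial sense in this document",
    "previous chunk was translated in past tense",
]
E = embed(memories)                        # (n_memories, dim)
U, S, Vt = np.linalg.svd(E - E.mean(axis=0), full_matrices=False)
k = 2                                      # latent size; tune as needed
latent = U[:, :k] * S[:k]                  # compressed coordinates per memory
basis = Vt[:k]                             # shared basis summarizing all memory
```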

### Links to RAG resources

1. Academic survey of RAG techniques <https://x.com/omarsar0/status/1738354427759612222?s=46>
2. Good chart of RAG techniques <https://x.com/bindureddy/status/1738367792729264207?s=46>
3. Model each doc as structured metadata plus summary for multi-doc RAG <https://x.com/llama_index/status/1737515390664872040?s=46>
4. Advanced RAG overview by the founder of LlamaIndex <https://www.youtube.com/watch?v=TRjq7t2Ms5I>
5. Overview of RAG practice <https://x.com/jerryjliu0/status/1736916360314458253?s=46>
6. No-code RAG <https://x.com/llama_index/status/1736141134345437668?s=46>
7. RAG tutorial <https://x.com/llama_index/status/1735364513535496201?s=46>
8. Multimodal RAG “walking” a document <https://x.com/hrishioa/status/1734935026201239800?s=46>
9. RAG plus agent <https://x.com/llama_index/status/1734250820487774264?s=46>
10. Application of an agent over a database <https://x.com/hrishioa/status/1733804354547966234?s=46>
11. Use “this is the most relevant sentence in the context:” for better RAG (sketched after this list) <https://x.com/helloiamleonie/status/1732676100495421537?s=46>
12. Rewrite the user query for RAG with a local LLM (sketched after this list) <https://x.com/langchainai/status/1731724086072779109?s=46>
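
A hedged sketch of the two prompt-side tweaks in items 11 and 12. `llm` is a hypothetical completion function standing in for whatever local model is used, and `best_sentence` is assumed to come from a reranker or similarity score; none of these names come from the linked posts.

```python
def rewrite_query(llm, user_query: str) -> str:
    """Item 12: have a local LLM rewrite a conversational question into a
    standalone search query before retrieval."""
    prompt = (
        "Rewrite the following question as a concise, self-contained "
        f"search query:\n\n{user_query}\n\nSearch query:"
    )
    return llm(prompt).strip()

def build_context(chunks: list[str], best_sentence: str) -> str:
    """Item 11: point the model at the most relevant sentence explicitly,
    ahead of the full retrieved context."""
    joined = "\n\n".join(chunks)
    return (
        f"This is the most relevant sentence in the context: {best_sentence}\n\n"
        f"Context:\n{joined}"
    )
```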

Interesting early discussion about [stateful bots](https://www.abrahamberg.com/blog/how-to-make-openai-stateful-text-generator-like-chatgpt-for-conversations/).
