Translation Memory
To leverage translation memory effectively for in-context learning, we probably need multiple mechanisms for memory retrieval. For example:
Selective memory: the agent pulls specific lines from data files and logs
Total abstracted memory: a vectorized form of the full memory, reduced with SVD or a similar dimensionality-reduction technique, so the whole history is represented in compressed, abstract form (a sketch of both follows)
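A minimal sketch of both mechanisms, assuming memories live in plain text files and that an embedding model has already produced an (n_memories, dim) matrix; every name below is illustrative, not an existing API:

```python
import numpy as np

def selective_memory(path: str, keyword: str, limit: int = 5) -> list[str]:
    """Selective memory: pull specific matching lines from a data file or log."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f if keyword in line][:limit]

class AbstractedMemory:
    """Total abstracted memory: compress a bank of memory embeddings with a
    truncated SVD so the whole history is held in a few vectors."""

    def __init__(self, embeddings: np.ndarray, k: int = 8):
        # embeddings: (n_memories, dim) matrix from some embedding model.
        U, S, Vt = np.linalg.svd(embeddings, full_matrices=False)
        self.coords = U[:, :k] * S[:k]  # each memory in k-dim reduced space
        self.basis = Vt[:k]             # top-k directions in embedding space

    def summary_vector(self) -> np.ndarray:
        # A single abstract vector standing in for "total memory".
        return self.coords.mean(axis=0) @ self.basis
```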
Advanced RAG techniques
Links to RAG resources
Academic survey of RAG techniques https://x.com/omarsar0/status/1738354427759612222?s=46
Good chart of RAG techniques https://x.com/bindureddy/status/1738367792729264207?s=46
Model each doc as structured metadata plus a summary for multi-doc RAG https://x.com/llama_index/status/1737515390664872040?s=46 (see the first sketch after this list)
Advanced RAG overview by the founder of LlamaIndex https://www.youtube.com/watch?v=TRjq7t2Ms5I
Overview of RAG practice https://x.com/jerryjliu0/status/1736916360314458253?s=46
Multimodal RAG “walking” a document https://x.com/hrishioa/status/1734935026201239800?s=46
RAG plus agent https://x.com/llama_index/status/1734250820487774264?s=46
Application of an agent over a db https://x.com/hrishioa/status/1733804354547966234?s=46
Use “this is the most relevant sentence in the context:” for better RAG https://x.com/helloiamleonie/status/1732676100495421537?s=46
Rewrite the user query for RAG with a local LLM https://x.com/langchainai/status/1731724086072779109?s=46 (the second sketch after this list combines this with the previous item's trick)
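A rough sketch of the metadata-plus-summary idea from the llama_index link above. The field names and the toy keyword scorer are assumptions for illustration, not LlamaIndex's actual schema or retriever:

```python
from dataclasses import dataclass, field

@dataclass
class DocRecord:
    """One document modeled as structured metadata plus a short summary.
    First-pass retrieval runs over the metadata and summary only; the full
    text is fetched just for the winning documents."""
    doc_id: str
    title: str
    source: str
    tags: list[str] = field(default_factory=list)
    summary: str = ""  # LLM-generated abstract used for retrieval

def retrieve(records: list[DocRecord], query: str, k: int = 3) -> list[DocRecord]:
    """Rank documents by toy keyword overlap with metadata + summary."""
    def score(r: DocRecord) -> int:
        text = f"{r.title} {' '.join(r.tags)} {r.summary}".lower()
        return sum(word in text for word in query.lower().split())
    return sorted(records, key=score, reverse=True)[:k]
```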
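And a sketch combining the last two tricks: rewrite the user query with the LLM, then prime the answer prompt with the most relevant sentence. It assumes only two caller-supplied functions, one for retrieval and one for completion, so any local model can slot in:

```python
from typing import Callable

def rag_answer(
    question: str,
    retrieve: Callable[[str], list[str]],  # search query -> context passages
    complete: Callable[[str], str],        # prompt -> model completion
) -> str:
    # 1. Rewrite the raw user question into a standalone search query.
    query = complete(
        "Rewrite this question as a standalone search query for a document "
        f"index. Return only the query.\n\nQuestion: {question}"
    ).strip()
    context = "\n".join(retrieve(query))

    # 2. Have the model quote the most relevant sentence, then prime the
    # final answer prompt with it.
    best = complete(
        f"Context:\n{context}\n\nQuote the single sentence from the context "
        f"most relevant to this question.\nQuestion: {question}"
    ).strip()
    return complete(
        f"Context:\n{context}\n\nThis is the most relevant sentence in the "
        f"context: {best}\n\nUsing the context, answer the question.\n"
        f"Question: {question}"
    )
```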
Interesting early discussion about stateful bots.