RAG ⚐
up: LLMs ⚐ Retrieval Augmented Generation. RAG is the best way to give an LLM an external knowledge base (as opposed to Finetuning, which bakes knowledge into the model weights); a minimal sketch of the idea follows the tool list below.
- LlamaCloud
- relevance.ai
- Hosted Vector Databases
- LlamaIndex
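A minimal, framework-free sketch of the core RAG idea, assuming nothing beyond numpy: a toy hashed bag-of-words function stands in for a real embedding model, and the document snippets are made-up examples. The point is that the knowledge lives outside the model and gets injected into the prompt at query time.

```python
# Framework-free RAG sketch: retrieve the most similar chunks from an
# external knowledge base and stuff them into the prompt.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    return v

# Made-up example knowledge base; in practice these are chunks of your source data.
knowledge_base = [
    "Refunds are processed within 14 days of the return request.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
    "Shipping to the EU is free for orders above 50 EUR.",
]
doc_vectors = np.stack([embed(chunk) for chunk in knowledge_base])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity between the query vector and every stored chunk.
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [knowledge_base[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    # The retrieved chunks become the knowledge base the LLM answers from.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```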
Set up RAG
- Vectorise (embed) the source data into a Vector Store (RAG)
    - LlamaIndex can do this with its CSV loader / `SimpleDirectoryReader`
- Do a similarity search against the vector store with the query embedding
- Get the top-k relevant docs
- Pass the retrieved docs to the LLM together with the prompt
- Execute the RAG query to generate a grounded answer (see the LlamaIndex sketch below)
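The same steps end-to-end with LlamaIndex's high-level API, as a hedged sketch: it assumes `pip install llama-index`, an `OPENAI_API_KEY` in the environment for the default embedding and LLM, and a `./data` folder with the source files (CSVs work via `SimpleDirectoryReader`); module paths may differ between versions.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# 1. Load the source data (SimpleDirectoryReader handles CSV and other file types).
documents = SimpleDirectoryReader("./data").load_data()

# 2. Vectorise into a vector store index (embeddings are computed here).
index = VectorStoreIndex.from_documents(documents)

# 3-4. Similarity search and fetching the relevant docs, shown explicitly:
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What does the source data say about X?")

# 5. Or let the query engine wire retrieval + prompt + LLM together and execute RAG.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What does the source data say about X?")
print(response)
```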