Fine-Tuning vs. Knowledge Base vs. Toolformer Approaches in LLMs
2 min read · Aug 5, 2023
- Fine-Tuning: Fine-tuning involves taking a pre-trained language model (such as a base GPT model) and adapting it to a specific task or domain. The model is trained further on a narrower dataset relevant to the desired task, which helps it specialize and perform better on that task. For example, you could take a pre-trained language model and fine-tune it on a dataset of medical texts to make it more proficient at understanding and generating medical content.
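A common first step in fine-tuning is preparing the domain dataset as JSONL prompt/completion records. Below is a minimal sketch using only the standard library; the medical examples are invented placeholders, and a real fine-tune would use thousands of records in whatever schema the chosen training API expects.

```python
import json

# Hypothetical toy domain dataset: prompt/completion pairs for a medical
# fine-tune. Real datasets would contain thousands of examples.
examples = [
    {"prompt": "Define tachycardia.",
     "completion": "A resting heart rate above 100 beats per minute."},
    {"prompt": "What does BP stand for?",
     "completion": "Blood pressure."},
]

def to_jsonl(records):
    """Serialize records into the JSONL format commonly used for fine-tuning data."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

Each line is an independent JSON object, so the file can be streamed record by record during training.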
- Knowledge Base: With a knowledge base, we inject retrieved information into the prompt based on the specific task or question. The search is powered by a vector (embedding) knowledge base: typically the K most similar stored cases are found and added to the prompt as few-shot examples, steering the answer toward the current query. This is commonly done with a vector database such as Pinecone or ChromaDB, which supports efficient search by embedding similarity.
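The retrieval step above can be sketched in a few lines: rank stored entries by cosine similarity to the query embedding and prepend the top K as examples. The three-dimensional embeddings and stored Q/A strings here are toy stand-ins; in practice the vectors come from an embedding model and live in a vector database like the ones named above.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "knowledge base": (embedding, text) pairs. Real embeddings would be
# produced by an embedding model and stored in a vector DB.
kb = [
    ([1.0, 0.0, 0.0], "Q: What is BP? A: Blood pressure."),
    ([0.9, 0.1, 0.0], "Q: Define hypertension. A: Persistently high blood pressure."),
    ([0.0, 1.0, 0.0], "Q: What is Python? A: A programming language."),
]

def top_k(query_vec, k=2):
    """Return the k stored examples most similar to the query embedding."""
    ranked = sorted(kb, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(question, query_vec):
    """Prepend the K retrieved examples as few-shot context for the new question."""
    shots = "\n".join(top_k(query_vec))
    return f"{shots}\nQ: {question} A:"

print(build_prompt("What does BP measure?", [1.0, 0.05, 0.0]))
```

A vector database replaces the linear `sorted` scan with an approximate nearest-neighbor index, which is what makes this workable at scale.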
- Toolformer: The Toolformer approach uses a modified version of knowledge-base searching. Instead of having one specific task, we have multiple tasks, and we let another agent decide which part of the input needs further clarification. Based on that decision, the prompt is modified to pull in information from external tools.
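The routing idea can be sketched as follows: a decision step picks which external tool a question needs, and the tool's output is spliced into the prompt before the model answers. The keyword-based `route` function is a crude stand-in for the agent (or the model itself, in Toolformer-style training) that makes this choice, and both tools are invented stubs.

```python
def calculator(expr):
    # Stand-in tool: evaluate a simple arithmetic expression.
    return str(eval(expr, {"__builtins__": {}}))

def wiki_lookup(topic):
    # Stand-in tool: a canned lookup instead of a real API call.
    facts = {"Toolformer": "A model trained to decide when to call external APIs."}
    return facts.get(topic, "No entry found.")

TOOLS = {"calc": calculator, "wiki": wiki_lookup}

def route(question):
    """Decide which tool the question needs (a real system would ask an LLM/agent)."""
    return "calc" if any(ch.isdigit() for ch in question) else "wiki"

def augment_prompt(question, tool_arg):
    """Call the chosen tool and inject its output into the prompt."""
    tool = route(question)
    result = TOOLS[tool](tool_arg)
    return f"Question: {question}\nTool [{tool}] output: {result}\nAnswer:"

print(augment_prompt("What is 17 * 3?", "17 * 3"))
```

The key difference from plain knowledge-base retrieval is this extra decision step: the system chooses among several tools per question rather than always querying one store.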