
AnythingLLM Fine-Tuning

Fine-tuning is now available in AnythingLLM as an additional opt-in feature to improve your baseline model responses. Fine-tuning can be a powerful way to "bake in" knowledge that is specific to your use case into an already great foundational model like Llama 3 8B.

Why fine-tune?

Fine-tuning an LLM means taking a solid foundational model and improving or "tuning" its behavior, responses, or inherent knowledge based on your AnythingLLM chats and documents. Combining RAG, agents, and a capable fine-tune gives you an LLM with superpowers.

‼️

Know before you tune!

Fine-tuning is not a "silver bullet" for bad responses. Fine-tuned LLMs do not guarantee better responses. Like any LLM, your data (chats and documents) is the key. While a fine-tune may "know" something it was trained on, it is not built for recall or citations.

What do I get when I order a fine-tune?

When you order a fine-tuned model via AnythingLLM, we will email you a link to download an 8-bit quantized .GGUF file that you can run anywhere you run LLMs locally. You can load this model directly into AnythingLLM, Ollama, LMStudio, LocalAI, and anywhere else you can run GGUF files.

This model is yours to keep, forever.
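For example, once the .GGUF file is downloaded you could run it locally with a library such as llama-cpp-python. This is a minimal sketch, not the official delivery or loading flow: the file name is hypothetical, and context size and thread count should be adjusted to your hardware.

```python
# Minimal sketch: loading an 8-bit quantized GGUF fine-tune locally with
# llama-cpp-python. The file name below is hypothetical -- point it at the
# file you downloaded from the delivery email.
from llama_cpp import Llama

llm = Llama(
    model_path="./my-anythingllm-finetune.Q8_0.gguf",  # hypothetical file name
    n_ctx=4096,    # context window; adjust to what your hardware allows
    n_threads=8,   # CPU threads to use for inference
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize our refund policy."}
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```

The same file can be pointed at directly from AnythingLLM, Ollama, LMStudio, or LocalAI with no conversion step.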

How do I get started?

We currently offer in-app fine-tuning where we handle everything for a single one-time fee. This process can deliver a full fine-tune in under an hour.