AnythingLLM Changelog v1.6.0

New Features:

  • Multimodal support - You can now upload text and images into the chat and use them with image-capable models.

You must use a multimodal model to chat with images. This model can be a local LLM or a cloud-hosted model like GPT-4o.

We added LLaVA-Llama3 as an option in our built-in LLM provider to make model selection easier for those unfamiliar with multimodal models.
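
For context, this is roughly what an image-capable chat request looks like when it reaches a multimodal provider such as GPT-4o. This is a minimal sketch against the OpenAI SDK, not AnythingLLM's own code; the model name and image URL are placeholders, and AnythingLLM builds the equivalent payload for you when you attach an image in chat.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// Send a text prompt plus an image to an image-capable model.
async function describeImage(imageUrl: string): Promise<string | null> {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // placeholder; any multimodal model works
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "What is in this image?" },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      },
    ],
  });
  return response.choices[0].message.content;
}

// Hypothetical usage with a placeholder URL:
describeImage("https://example.com/photo.png").then(console.log);
```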

  • Drag-and-drop files into the chat UI to upload and embed them in a single step.

Images you drag and drop into a chat window are used only for that specific chat. Uploaded document files are embedded into the workspace as usual and remain available until you un-embed them.

Fixes & Improvements:

  • Bumped known models for Perplexity & TogetherAI
  • Various small bugfixes

What's Next:

  • Custom @agent skill builder
  • More data connector integrations