TrueFoundry

TrueFoundry provides an enterprise-ready AI Gateway that integrates with applications like AnythingLLM, adding governance and observability to your AI applications. The TrueFoundry AI Gateway serves as a unified interface for LLM access, providing:

  • Unified API Access: Connect to 250+ LLMs (OpenAI, Claude, Gemini, Groq, Mistral) through one API (see the example call after this list)
  • Low Latency: Sub-3ms internal latency with intelligent routing and load balancing
  • Enterprise Security: SOC 2, HIPAA, GDPR compliance with RBAC and audit logging
  • Quota and cost management: Token-based quotas, rate limiting, and comprehensive usage tracking
  • Observability: Full request/response logging, metrics, and traces with customizable retention
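
Because the gateway exposes an OpenAI-compatible API, any OpenAI-style client can reach these models through a single endpoint. The short sketch below illustrates this with the openai Python package; the base URL is the example used later in this guide, and the token and second model ID are placeholders to replace with values from your own TrueFoundry playground.

    # Illustrative sketch: one OpenAI-compatible client, many models behind the gateway.
    # The base URL, token, and model IDs are placeholders/examples; use your own values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai",
        api_key="<truefoundry-personal-access-token>",
    )

    # Switching providers is just a different model string on the same client.
    for model in ["openai-main/gpt-4o", "<another-model-id-from-your-playground>"]:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Say hello in one sentence."}],
        )
        print(model, "->", reply.choices[0].message.content)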

Prerequisites

Before integrating AnythingLLM with TrueFoundry, ensure you have:

  1. TrueFoundry Account: Create a TrueFoundry account and follow our Quick Start Guide
  2. AnythingLLM Installation: Set up AnythingLLM using either the Desktop application or Docker deployment

Integration Steps

This guide assumes you have AnythingLLM installed and running, and have obtained your TrueFoundry AI Gateway base URL and authentication token.

Step 1: Access AnythingLLM LLM Settings

  1. Launch your AnythingLLM application (Desktop or Docker).

  2. Navigate to Settings and go to LLM Preference:

AnythingLLM settings page showing LLM provider selection interface

Step 2: Configure Generic OpenAI Provider

  1. In the LLM provider search box, type "Generic OpenAI" and select it from the available options.

  2. Configure the TrueFoundry connection with the following settings (a quick verification sketch follows this list):

    • Base URL: Enter your TrueFoundry Gateway base URL (e.g., https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai)
    • API Key: Enter your TrueFoundry Personal Access Token
    • Chat Model Name: Enter the model name from the unified code snippet (e.g., openai-main/gpt-4o)
    • Token Context Window: Set based on your model's limits (e.g., 16000, 128000)
    • Max Tokens: Configure according to your needs (e.g., 1024, 2048)
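
Before saving, you can optionally sanity-check these values outside AnythingLLM. The sketch below assumes the openai Python package; it reuses the same Base URL, API key, model name, and max-token values you plan to enter, so a typo surfaces here rather than inside AnythingLLM.

    # Optional sanity check of the values you are about to enter into AnythingLLM.
    # All placeholders and example values must be replaced with your own.
    from openai import OpenAI

    BASE_URL = "https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai"  # Base URL field
    API_KEY = "<truefoundry-personal-access-token>"                                      # API Key field
    MODEL = "openai-main/gpt-4o"                                                         # Chat Model Name field
    MAX_TOKENS = 1024                                                                    # Max Tokens field

    client = OpenAI(base_url=BASE_URL, api_key=API_KEY)
    reply = client.chat.completions.create(
        model=MODEL,
        max_tokens=MAX_TOKENS,
        messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
    )
    print(reply.choices[0].message.content)
    print("token usage:", reply.usage)  # rough check against your Token Context Window setting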

Step 3: Get Configuration from TrueFoundry

Get the API key, base URL, and model name from the unified code snippet in our playground (make sure to use the model name exactly as written):

Get API key, Base URL and Model Name from Unified Code Snippet

Copy the API key, base URL, and model name, and paste them into AnythingLLM's configuration fields.
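
If you want to confirm the exact model name spelling programmatically, and your gateway deployment exposes the standard OpenAI-compatible /models endpoint (an assumption; the playground snippet remains the authoritative source), a short listing sketch like the one below can help:

    # Hedged sketch: list model IDs via the OpenAI-compatible /models endpoint,
    # assuming your gateway exposes it. Replace placeholders with your own values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://internal.devtest.truefoundry.tech/api/llm/api/inference/openai",
        api_key="<truefoundry-personal-access-token>",
    )
    for model in client.models.list():
        print(model.id)  # copy the exact ID into AnythingLLM's Chat Model Name field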

Step 4: Test Your Integration

  1. Save your configuration in AnythingLLM.

  2. Create a new workspace or open an existing one to test the integration:

AnythingLLM chat interface showing successful test message with TrueFoundry integration

  3. Send a test message to verify that AnythingLLM is successfully communicating with TrueFoundry's AI Gateway.

Your AnythingLLM application is now integrated with TrueFoundry's AI Gateway and ready for AI chat, RAG, and agent operations.