
Tuesday, November 11, 2025

Offline AI LLM System (TrueNAS Hosted)

TrueNAS Hosted Offline LLM:
Dolphin, Ollama, and AnythingLLM

Introduction

Running AI models on your TrueNAS server provides centralized access to powerful language models for your entire network. This guide walks through deploying Ollama with Dolphin models and AnythingLLM on TrueNAS, making AI assistance available to all your devices while maintaining complete data privacy.

By the end of this guide, you'll have:

  • Ollama running as a Docker container on TrueNAS
  • Dolphin models stored efficiently on your ZFS pool
  • AnythingLLM accessible via web browser from any device on your network
  • Persistent storage that survives container restarts
  • Optional GPU passthrough for accelerated inference
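As a rough sketch of where this guide ends up, the stack above can be expressed as a Docker Compose file. Note that the dataset paths (`/mnt/tank/apps/…`), service names, and published ports here are illustrative assumptions; adjust them to your own pool layout, and keep in mind that the TrueNAS SCALE Apps UI generates its own equivalent configuration rather than consuming a compose file directly.

```yaml
# Illustrative sketch only — dataset paths and ports are assumptions,
# not values prescribed by this guide.
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"              # Ollama's default API port
    volumes:
      # Bind-mount a ZFS dataset so pulled models survive container restarts
      - /mnt/tank/apps/ollama:/root/.ollama
    restart: unless-stopped

  anythingllm:
    image: mintplexlabs/anythingllm:latest
    ports:
      - "3001:3001"                # AnythingLLM web UI
    volumes:
      # Persistent workspace data (chats, embeddings, settings)
      - /mnt/tank/apps/anythingllm:/app/server/storage
    restart: unless-stopped
    depends_on:
      - ollama
```

Once both containers are up, AnythingLLM is reached at `http://<truenas-ip>:3001` from any browser on the LAN, and is pointed at Ollama's API on port 11434 during its setup wizard. GPU passthrough, covered later, is layered on top of this same layout.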