Wednesday, November 26, 2025

Bitwarden Password Vault Cleanup Guide


Bitwarden CLI Vault Cleanup Guide

A comprehensive guide for cleaning up your Bitwarden vault using the CLI, based on real troubleshooting sessions.


Prerequisites

  • Bitwarden CLI installed in C:\Tools\Bitwarden (or your chosen directory)
  • Windows PowerShell (works with PowerShell 5.x)
  • Navigate to the Bitwarden directory in a Windows Terminal: cd C:\Tools\Bitwarden
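
Once you're in that directory, it's worth confirming the CLI can actually see your vault before deleting anything. The snippet below is a minimal PowerShell sanity check; the bw commands are standard Bitwarden CLI, while the search term is just a placeholder for whatever you're looking for.

  # Sign in once, then unlock the vault; bw unlock --raw prints a session
  # key that later commands can reuse.
  cd C:\Tools\Bitwarden
  .\bw login
  $env:BW_SESSION = (.\bw unlock --raw)

  # Pull the latest vault data and confirm items are visible.
  .\bw sync
  .\bw list items --search "example.com" | ConvertFrom-Json | Select-Object id, name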

What To Do If You Are Compromised By A Scam/Scammer

 


A Comprehensive Guide to Protecting Your Digital Life

Online security is becoming increasingly critical as more of our daily lives move onto the internet. Whether through hacking, phishing, tech support scams, or other malicious activities, account compromises can happen to anyone. This comprehensive guide provides detailed steps to help you respond effectively, recover your accounts, and protect yourself from future incidents.

IMPORTANT:
If you are in immediate danger or feel threatened, call 911 immediately.

QuickBooks Multi-User Hosting Troubleshooting


QuickBooks Multi-User Troubleshooting:
Workstation(s) & Server


Please follow these steps to make sure only your server is hosting QuickBooks, and that the QuickBooks Database Server is working correctly.
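
Before touching any workstations, it helps to confirm the server-side pieces from PowerShell. The checks below are a hedged starting point: the QuickBooksDB* service-name pattern and port 8019 are the usual defaults, but the exact service name and port vary by QuickBooks version, and QB-SERVER is a placeholder for your server's name.

  # On the server: the QuickBooks Database Server runs as a Windows service
  # named QuickBooksDBxx (xx depends on the QuickBooks year).
  Get-Service -Name "QuickBooksDB*" | Select-Object Name, Status, StartType

  # From a workstation: confirm the server answers on QuickBooks' default
  # network port (8019 on most recent versions).
  Test-NetConnection -ComputerName "QB-SERVER" -Port 8019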

Tuesday, November 11, 2025

Offline AI LLM System (TrueNAS Hosted)

 


TrueNAS Hosted Offline LLM:
Dolphin, Ollama and AnythingLLM

Introduction

Running AI models on your TrueNAS server provides centralized access to powerful language models for your entire network. This guide walks through deploying Ollama with Dolphin models and AnythingLLM on TrueNAS, making AI assistance available to all your devices while maintaining complete data privacy.

By the end of this guide, you'll have:

  • Ollama running as a Docker container on TrueNAS
  • Dolphin models stored efficiently on your ZFS pool
  • AnythingLLM accessible via web browser from any device on your network
  • Persistent storage that survives container restarts
  • Optional GPU passthrough for accelerated inference (see the deployment sketch below)
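
To make that list concrete, here is one minimal way the two containers can be launched from the TrueNAS shell with Docker. The dataset paths under /mnt/tank are placeholders for your own pool, and the image names, ports, and environment variable reflect current Ollama and AnythingLLM defaults, so double-check them against each project's documentation before relying on them.

  # Ollama: API on port 11434, models persisted to a ZFS dataset
  # (/mnt/tank/apps/ollama is a placeholder -- use your own pool/dataset).
  docker run -d --name ollama \
    -p 11434:11434 \
    -v /mnt/tank/apps/ollama:/root/.ollama \
    ollama/ollama

  # AnythingLLM: web UI on port 3001, workspace data on the same pool.
  docker run -d --name anythingllm \
    -p 3001:3001 \
    -v /mnt/tank/apps/anythingllm:/app/server/storage \
    -e STORAGE_DIR=/app/server/storage \
    mintplexlabs/anythingllm

  # Optional: add --gpus=all to the Ollama run command for accelerated
  # inference on a supported NVIDIA GPU.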

Offline AI LLM System


Create a Completely Offline LLM:
Using Dolphin, Ollama, and AnythingLLM

Running a Large Language Model (LLM) completely offline gives you privacy, control, and independence from cloud services. In this comprehensive guide, I'll walk you through setting up a fully functional offline AI assistant using three powerful tools:

  • Dolphin - Uncensored, instruction-tuned language models
  • Ollama - Simple, efficient local LLM runtime
  • AnythingLLM - User-friendly web interface for interacting with local models

By the end of this guide, you'll have a ChatGPT-like experience running entirely on your own hardware, with no internet connection required!
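
As a taste of how simple the finished setup feels, here is a minimal sketch of the Ollama side once it's installed. The dolphin-mistral tag is one of several Dolphin builds published in the Ollama model library; available tags change over time, so check the library for the one you want.

  # Download a Dolphin model into Ollama's local store (one-time step that
  # still needs internet access).
  ollama pull dolphin-mistral

  # Chat with it directly from the terminal -- no cloud services involved.
  ollama run dolphin-mistral

  # Ollama also exposes a local API at http://localhost:11434, which is what
  # AnythingLLM points at when you choose Ollama as the LLM provider.

Once the model has been pulled, both commands above keep working with the network disconnected.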