
Thursday, August 14, 2025

RAID Controller HBA Mode PLUS Proxmox ZFS Setup Tutorial


RAID HBA Mode & Proxmox ZFS Setup


Overview

This tutorial covers converting a RAID controller to HBA (Host Bus Adapter) mode and setting up ZFS storage in Proxmox. HBA mode passes individual drives straight through to the operating system, letting ZFS manage them directly for better error handling, better performance, and full access to its advanced features.

Prerequisites

  • Server with a compatible RAID controller
  • Basic understanding of server hardware
  • USB drive for Proxmox installation
  • IMPORTANT: Complete backup of all data (this process will destroy existing data)

Why Use HBA Mode with ZFS?

  • Direct Drive Access: ZFS can directly communicate with drives for better error handling
  • No RAID Overhead: Eliminates hardware RAID controller bottlenecks
  • ZFS Features: Full access to snapshots, compression, deduplication, and checksumming
  • Better Performance: Reduced latency and improved throughput

Part 1: Identifying Your RAID Controller

Check Current Controller

# Check PCI devices for RAID controllers
lspci | grep -i raid

# Check for LSI/Broadcom controllers specifically
lspci | grep -i lsi
lspci | grep -i broadcom

Common Controllers and HBA Compatibility

  • LSI/Broadcom 9xxx series: Usually support HBA mode
  • Dell PERC H310/H710: Can often be flashed to HBA mode
  • HP Smart Array: Limited HBA support (check specific models)
  • Intel RAID: Some models support AHCI mode
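
Before planning a firmware flash, it helps to confirm what firmware a controller is currently running. If Broadcom's Linux sas2flash utility is installed (downloadable from the vendor's support site), the sketch below reports the firmware variant; the "Firmware Product ID" line indicates IT (HBA) or IR (RAID).

# List all detected LSI/Broadcom SAS2 controllers and their firmware
sas2flash -listall

# Show full details for controller 0
sas2flash -list -c 0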

Part 2: Converting RAID Controller to HBA Mode

Method 1: Firmware Flashing (LSI/Broadcom Controllers)

⚠️ WARNING: Flashing firmware can brick your controller. Proceed with caution.

Step 1: Create Bootable DOS Environment

  1. Download FreeDOS or use a DOS boot disk (a USB preparation sketch follows this list)
  2. Download the appropriate firmware and flashing tools from the vendor
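
As a rough sketch of these two steps, a FreeDOS USB image can be written to a flash drive with dd and the flashing tools copied onto it; the image filename, device, and mount point below are examples only.

# Write a FreeDOS USB image to the flash drive (replace /dev/sdX with your USB device)
dd if=FD13-LiteUSB.img of=/dev/sdX bs=4M status=progress

# Mount the FAT partition and copy the firmware plus flashing tool onto it
mount /dev/sdX1 /mnt
cp sas2flsh.exe 2118it.bin mptsas2.rom /mnt/
umount /mnt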

Step 2: Flash to HBA Firmware

# Example for LSI 9211-8i
sas2flsh -o -f 2118it.bin -b mptsas2.rom

# Verify the flash
sas2flsh -list

Step 3: Clear Configuration

# Clear existing RAID configuration
sas2flsh -o -c 0

Method 2: BIOS/UEFI Configuration

For Controllers Supporting Mode Switch:

  1. Boot into system BIOS/UEFI
  2. Navigate to storage/RAID controller settings
  3. Look for options like:
    • "HBA Mode"
    • "IT Mode"
    • "AHCI Mode"
    • "Non-RAID Mode"
  4. Select HBA/IT mode
  5. Save and exit

Dell PERC Controllers:

  1. Press Ctrl+R during boot to enter RAID BIOS
  2. Go to "Controller Management"
  3. Select "Switch to HBA Mode" (if available)
  4. Confirm the change

Method 3: Software Tools (Dell Servers)

# Using Dell's perccli tool
perccli /c0 set mode=hba

# Verify mode change
perccli /c0 show

Part 3: Verifying HBA Mode

Check Drive Visibility

After conversion, individual drives should be visible:

# Check for individual drives
lsblk
fdisk -l

# Verify no RAID arrays are present
cat /proc/mdstat
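
It is also worth noting each drive's persistent identifier now, since /dev/disk/by-id paths are the safest names to use when building ZFS pools later.

# List drives by their stable IDs (partition entries filtered out)
ls -l /dev/disk/by-id/ | grep -v part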

Verify Controller Mode

# For LSI controllers, inspect the controller's PCI details and kernel driver
lspci -vvv | grep -A 20 "LSI"

# In HBA/IT mode the OS sees individual disks (check lsblk), not logical RAID volumes

Part 4: Installing Proxmox

Download and Create Installation Media

  1. Download Proxmox VE ISO from official website
  2. Create a bootable USB using a tool like Rufus or dd (a dd sketch follows this list)
  3. Boot from USB installation media
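
If you prefer dd over a graphical tool for step 2, the command below is a minimal sketch; the ISO filename is an example, and /dev/sdX must be the USB stick itself, not a partition.

# Write the Proxmox VE ISO to the USB drive
dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync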

Installation Considerations for ZFS

  1. Boot Drive: Use separate SSD for Proxmox OS (recommended)
  2. ZFS Pool: Plan which drives will be used for ZFS storage
  3. Network: Configure management interface

Basic Proxmox Installation

  1. Boot from installation media
  2. Select "Install Proxmox VE"
  3. Accept license agreement
  4. Select target disk for Proxmox OS
  5. Configure location and keyboard
  6. Set root password and email
  7. Configure network settings
  8. Review and install


Part 5: Setting Up ZFS in Proxmox

Access Proxmox Web Interface

  1. Note the IP address shown after installation
  2. Browse to https://your-server-ip:8006
  3. Login with root credentials

Create ZFS Pool via Web Interface

Navigate to Storage Configuration:

  1. Go to "Datacenter" → "Storage"
  2. Click "Add" → "ZFS"

Configure ZFS Pool:

  1. ID: Give your pool a name (e.g., "zfs-pool")
  2. Pool: Click "Create" to make a new pool
  3. Pool Name: Enter pool name (e.g., "rpool-data")
  4. Devices: Select drives for the pool
  5. RAID Level: Choose appropriate level:
    • RAID0 (stripe): No redundancy, maximum space
    • RAID1 (mirror): 50% usable space, can lose 1 drive
    • RAID10: Striped mirrors, can lose 1 drive per mirror
    • RAIDZ1: Can lose 1 drive, minimum 3 drives
    • RAIDZ2: Can lose 2 drives, minimum 4 drives

Create ZFS Pool via Command Line

Basic Pool Creation:

# Simple mirror pool
zpool create tank mirror /dev/sdb /dev/sdc

# RAIDZ1 pool (RAID5-like)
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# RAIDZ2 pool (RAID6-like)
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

Advanced Pool Configuration:

# Create pool with specific features
zpool create tank \
  -o ashift=12 \
  -O compression=lz4 \
  -O atime=off \
  -O xattr=sa \
  -O recordsize=64k \
  raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

Add ZFS Pool to Proxmox Storage

# Add existing pool to Proxmox configuration
pvesm add zfspool local-zfs --pool tank
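
To confirm the storage was registered, the standard Proxmox tools can be checked afterwards.

# Confirm the new storage entry is listed and active
pvesm status

# Inspect the resulting storage definition
cat /etc/pve/storage.cfg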

Part 6: ZFS Configuration Best Practices

Optimal ZFS Settings

# Set compression (saves space, often improves performance)
zfs set compression=lz4 tank

# Disable access time updates (performance)
zfs set atime=off tank

# Set an appropriate record size for your use case (pick one; ideally set it
# per dataset, since a later setting on the same dataset overrides the earlier one)
zfs set recordsize=64k tank  # Good for VMs
zfs set recordsize=1M tank   # Good for large files

# Enable deduplication (use carefully - RAM intensive)
zfs set dedup=on tank  # Only if you have adequate RAM
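
After applying these settings, the current values can be verified in one pass:

# Show the properties set above for the pool's root dataset
zfs get compression,atime,recordsize,dedup tank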

Create Datasets for Different Use Cases

# Dataset for VM storage
zfs create tank/vms
zfs set recordsize=64k tank/vms

# Dataset for backups
zfs create tank/backups
zfs set recordsize=1M tank/backups
zfs set compression=gzip tank/backups

# Dataset for ISO images
zfs create tank/isos
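
These datasets can then be exposed to Proxmox as separate storage entries. The storage IDs below are examples, and the directory path assumes the default mountpoint under /tank.

# VM disks on the dedicated dataset
pvesm add zfspool vm-storage --pool tank/vms --content images,rootdir

# ISO images via a directory storage pointing at the dataset's mountpoint
pvesm add dir iso-storage --path /tank/isos --content iso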

Part 7: Monitoring and Maintenance

Check ZFS Pool Status

# Pool status
zpool status

# Pool I/O statistics
zpool iostat

# Detailed pool information
zpool list -v

Regular Maintenance Tasks

# Scrub pool monthly
zpool scrub tank

# Check scrub progress
zpool status

# Set up automatic scrubbing (add to cron)
echo "0 2 1 * * root zpool scrub tank" >> /etc/crontab

ZFS Snapshots

# Create snapshot
zfs snapshot tank/vms@backup-$(date +%Y%m%d)

# List snapshots
zfs list -t snapshot

# Restore from snapshot
zfs rollback tank/vms@backup-20240815
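
Snapshots hold on to the blocks they reference, so old ones should be pruned periodically; the date below matches the naming scheme above.

# Show snapshots with the space they hold
zfs list -t snapshot -o name,used,creation

# Remove a snapshot that is no longer needed
zfs destroy tank/vms@backup-20240815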

Part 8: Troubleshooting

Common Issues and Solutions

Drives Not Visible After HBA Conversion

# Check if drives are detected
dmesg | grep sd
lsblk

# Rescan the SCSI bus (one write per host adapter; a bare glob after ">" fails)
for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done

ZFS Pool Import Issues

# Force import pool
zpool import -f tank

# Import the pool under a different name (old name first, new name second)
zpool import tank tank-backup
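
If the pool name is unknown or device names changed after the HBA conversion, scanning for importable pools first usually helps.

# List pools that are available for import
zpool import

# Scan using persistent device IDs instead of /dev/sdX names
zpool import -d /dev/disk/by-id tank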

Performance Issues

# Check ARC usage
arc_summary

# Adjust ARC size (in /etc/modprobe.d/zfs.conf)
options zfs zfs_arc_max=17179869184  # 16GB
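
On Proxmox this module option only takes effect at boot, so the initramfs has to be rebuilt; the value can also be changed on the fly for testing.

# Rebuild the initramfs so the new limit applies at boot, then reboot
update-initramfs -u -k all

# Or apply the limit immediately without a reboot (not persistent)
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max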

Hardware Considerations

  • RAM: ZFS needs adequate RAM (recommended: 1GB per TB of storage)
  • Boot Drive: Use separate SSD for Proxmox OS
  • Network: Ensure adequate bandwidth for your storage needs

Part 9: Advanced Configuration

L2ARC (SSD Cache)

# Add SSD as L2ARC cache device
zpool add tank cache /dev/nvme0n1

ZFS Intent Log (SLOG)

# Add a dedicated fast SSD as a separate log (SLOG) device for synchronous writes
# (use a device separate from the L2ARC cache above)
zpool add tank log /dev/nvme1n1
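
A failed log device at the wrong moment can cost the most recent synchronous writes, so the SLOG is often mirrored; the device names below are examples only.

# Add a mirrored log (SLOG) instead of a single device
zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1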

Tuning for Virtualization

# Settings commonly used for VM storage
zfs set primarycache=metadata tank/vms  # cache only metadata if guests do their own caching
zfs set recordsize=64k tank/vms         # match typical VM I/O sizes
zfs set sync=always tank/vms            # force every write to stable storage; pair with a fast SLOG

Security Considerations

  • Change default Proxmox passwords
  • Configure firewall rules
  • Enable two-factor authentication
  • Regular security updates
  • Monitor access logs
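
As a rough sketch of the firewall item, a datacenter-level rule set in /etc/pve/firewall/cluster.fw restricting the web UI and SSH to a management subnet might look like the lines below; the subnet is a placeholder.

[OPTIONS]
enable: 1

[RULES]
# Allow the web UI (8006) and SSH (22) only from the management network
IN ACCEPT -source 192.168.10.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.168.10.0/24 -p tcp -dport 22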

Conclusion

You now have a server with RAID controller in HBA mode running Proxmox with ZFS storage. This setup provides excellent performance, data integrity, and advanced storage features while maintaining the flexibility of a virtualization platform.


Remember:

  • Regular monitoring of pool health
  • Scheduled scrubs and backups
  • Keep Proxmox and ZFS updated
  • Document your configuration for future reference

Created & Maintained by Pacific Northwest Computers