Replacing a PERC6 RAID controller with an LSI HBA card in a Dell R710

After running a PERC6/i for some years, I grew tired of its limitations and wanted to switch to ZFS, with the disks passed through directly. The two most annoying limitations were the 2TB cap on disk size and the inability to mix SAS and SATA drives.

The process involved a few steps: first, getting some kind of JBOD/HBA card to replace the PERC6/i; then figuring out how I wanted to configure the new array; and lastly transferring the data.

# Choosing and acquiring an HBA

Many HBA cards are available, but some are quite expensive, and some are easier to use for this purpose than others. In the ZFS community, a lot of people seem to love LSI cards, so I ended up acquiring one for around USD 20 that would be more or less a drop-in replacement for the PERC card, plus new mini-SAS cables for another USD 20. The total came to USD 47 including shipping and taxes, which is less than a RAID card upgrade would have cost.

It is not a complete drop-in replacement, as it needs to go in a different PCIe slot than the original RAID card used, meaning I gave up one expansion slot for this card. But since I don't currently have any other cards in the chassis, that was worthwhile.

# Booting with the new card

The system is running Proxmox. Before stopping it, I made extra backups to another Proxmox node. But I forgot to deal with the mount that lived on the old array; the machine also runs an NFS server for other node(s) to connect to (which eases backups). So I had to log in in maintenance mode and run:

systemctl mask mnt-percraid.mount
systemctl disable nfs-server

This allowed me to boot the server with the new card and start configuring ZFS on it.
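
Before creating a pool, it is worth checking that the HBA actually presents all the drives, and noting their stable /dev/disk/by-id names for the zpool command later. A couple of standard commands for this (output will of course vary per system):

lsblk -o NAME,SIZE,MODEL,SERIAL
ls -l /dev/disk/by-id/ | grep scsi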

# Deciding on ZFS pool configuration

The configuration with the old RAID card was RAID 6 with 6x2TB disks, giving around 8TB of usable space. This was split into two partitions: one 5.5TB partition with LVM, and one 2.5TB partition with ext4 for daily snapshots, so I could roll back changes for up to 24 hours.

For a while I have been using around 2.5TB of the main partition, and I don't see that growing very rapidly. To allow simple expansion in the future, I decided on a single raidz2 vdev, which gives me around 8TB of usable space, similar to my previous RAID. I ran commands like the ones below to create the pool:

/sbin/zpool create -o ashift=12 local-zfs raidz2 /dev/disk/by-id/scsi-<diskid> ...(all disks listed here)
/sbin/zfs set compression=on local-zfs
systemctl enable 'zfs-import@local\x2dzfs.service'
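
After creating the pool, a quick sanity check confirms the layout, the compression setting, and that the import unit is enabled. These are standard ZFS and systemctl commands, using the pool name from above:

/sbin/zpool status local-zfs
/sbin/zfs get compression,used,available local-zfs
systemctl is-enabled 'zfs-import@local\x2dzfs.service'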

# Finishing

The last part is quite simple: add the new storage to /etc/pve/storage.cfg, then start restoring all the VMs from the backups made earlier. In my case, I also changed /etc/exports to use the new paths, and then enabled nfs-kernel-server again.
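
For reference, a sketch of what these additions can look like. The storage entry uses Proxmox's zfspool type with the pool name from above; the export path and subnet are placeholders for illustration, not my actual values.

In /etc/pve/storage.cfg:

zfspool: local-zfs
        pool local-zfs
        content images,rootdir
        sparse 1

In /etc/exports (placeholder dataset path and network):

/local-zfs/backups 192.168.1.0/24(rw,sync,no_subtree_check)

Then re-enable and start the NFS server, and re-export the shares:

systemctl enable --now nfs-server
exportfs -ra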