The Story.
I have a machine, set up as a NAS, that lives in my closet, running TrueNAS Scale. I also use it to run various Virtual Machines, because why not, it’s relatively beefy hardware for a NAS (Ryzen 5 5600G, 64GB of RAM). When I built the thing, it had 4 x WD Red 2TB drives in a RAIDZ1, plus a spare 256GB SATA SSD from an old machine that I used for the boot-pool.
It worked fine, but as you may or may not know, TrueNAS Scale only requires 64GB of storage for its boot image. This meant I had basically 192GB of SSD storage that was unusable in the stock TrueNAS configuration. Additionally, all of my Virtual Machines were running off of the RAIDZ1 storage pool, which, this being a NAS, was made of suuuuuuuuuuper fast 5400 RPM NAS drives, so performance wasn’t exactly what I would call “zippy”.
So a few months later, I replaced the 512GB NVMe drive in my Thinkpad T16, which left me with a spare NVMe drive sitting around, and I thought to myself: “Why don’t you toss this in the NAS, it’s got two NVMe/M.2 slots that aren’t doing anything, and use it as the storage for your Virtual Machines?”
Great idea. Problem is, it doesn’t work. I take the system down, plug the M.2 drive in, and... wait a minute, why in the hell is it bringing up the openSUSE GRUB2 bootloader and trying to boot into the old openSUSE Kalpa installation that’s on the M.2 drive?
Well, apparently, with the motherboard/chipset that’s in that machine, two things are happening:
- When an M.2 Drive is present on the motherboard, it’s going to boot from that drive, no matter what you do.
- When an M.2 Drive is present on the motherboard, it disables some of the SATA ports. Which SATA Ports? Who Knows.
Ok. I can live with that. Just disconnect the SATA SSD and install the TrueNAS appliance boot image to the M.2 drive, which I can probably get into a partitioner afterwards and do what I need to do. Except you can’t. That’s not functionality that’s supported by TrueNAS Scale. And honestly, I get it. It’s meant to be an appliance; you just install the image. This isn’t a knock on the folks at iXsystems: they’re a commercial entity that wants to sell NAS hardware, and they just happen to be nice enough to offer their software to the community to install on their own hardware. So what to do? Off to the internets to search, that’s what to do.
DISCLAIMER
This is not supported by the TrueNAS folks. In fact, they’ll probably get pretty cranky with you, if you do this, and then try to go ask them for support, if this breaks something. If you do this, and it blows up in your face, that’s on you. Period. I make no claim that this will work for you, or that there is zero risk in doing it. You’ve been warned.
What I did.
So I found a reddit post that did what I wanted, which I’m going to copy here so I don’t lose it in the future, in case I need to do it again. That being said, I despise reddit, and I’m not going to link back to them. Credit goes to u/heren_istarion; if you do the websearching for [Scale][Howto] split ssd during installation you will find their original post.
- Create yourself the USB install media, with whatever version of TrueNAS Scale suits your preference.
- Boot it.
- When the installer starts, choose the Shell option.
- Find and open the installer script:

```shell
find / -name truenas-install
# Your most likely candidate will be /usr/sbin/truenas-install
# It's just a shell script
# Yes, it's basic vi, suck it up.
vi /usr/sbin/truenas-install
```
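Rather than scrolling around in vi, you could grep for the line number first and jump straight to it; this is just a convenience sketch, assuming grep is available in the installer environment (it is on the Debian-based SCALE image):

```shell
# Find the line number of the "# Create boot pool" comment, so you can
# jump to it in vi with :<line-number>
grep -n "Create boot pool" /usr/sbin/truenas-install
```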
Now, the function that we’re interested in is “create_partition”, which generally appears right around line ~300-ish in every version of the script I’ve looked at, and the line you want to change will generally be preceded by the comment # Create boot pool.
- Existing code block:

```shell
# Create boot pool
if ! sgdisk -n3:0:0 -t3:BF01 /dev/${_disk}; then
    return 1
fi
```

- Change it to:

```shell
if ! sgdisk -n3:0:+64GiB -t3:BF01 /dev/${_disk}; then
    return 1
fi
```
Just in case you can’t see the difference, I’ll give you a hint. +64GiB
What is this doing? It’s telling the installer to create a 64GiB partition for the boot-pool, rather than using the entire disk. That’s all.
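If you want to sanity-check the size math: the partition this creates (p3 in the fdisk output further down) spans 134217728 sectors of 512 bytes, which works out to exactly 64 GiB:

```shell
# 134217728 sectors * 512 bytes/sector = 68719476736 bytes = 64 GiB
echo $(( 134217728 * 512 / 1024 / 1024 / 1024 ))
# prints 64
```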
Now you’re just going to want to run /usr/sbin/truenas-install at this point, which will put you back into the installer, and install as you normally would. It’s pretty basic, and the installation instructions on their website are really quite good if you’re stuck. Finish the install, and reboot.
- Create the storage pool on the remaining space:

Once you’re booted back up after install, go ahead and proceed as normal: go to the web interface, set up your admin account/password, etc. At this point, you’re going to want to either enable ssh access on the server, or access the Shell via the System -> Settings menu. Your call.

We need to figure out which disks are in boot-pool; the following two commands should do this for you.
```shell
admin@truenas[~]$ sudo zpool status boot-pool
[sudo] password for admin:
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Tue Feb 27 03:45:03 2024
config:

        NAME         STATE     READ WRITE CKSUM
        boot-pool    ONLINE       0     0     0
          nvme0n1p3  ONLINE       0     0     0

errors: No known data errors

admin@truenas[~]$ sudo fdisk -l
…
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SKHynix_HFS512GEJ9X102N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1645DD88-9D5C-4EFB-8821-50F49287F473

Device             Start        End   Sectors   Size Type
/dev/nvme0n1p1      4096       6143      2048     1M BIOS boot
/dev/nvme0n1p2      6144    1054719   1048576   512M EFI System
/dev/nvme0n1p3   1054720  135272447 134217728    64G Solaris /usr & Apple ZFS
/dev/nvme0n1p4 135272448 1000215182 864942735 412.4G Solaris /usr & Apple ZFS
…
```
I had my RAIDZ1 disks disconnected at this point, but if yours aren’t, the boot-pool disk(s) will have 3-4 partitions each, versus the 2 partitions each of your storage disks will have.
**Note:** Please keep in mind, the results of fdisk -l shown here are from my already-running system; /dev/nvme0n1p4 wouldn’t be showing as a partition at this point.
/dev/nvme0n1 is obviously going to be my boot device, so now we want to create the partition and update the Linux kernel’s partition table:
```shell
admin@truenas[~]$ sudo sgdisk -n4:0:0 -t4:BF01 /dev/nvme0n1
admin@truenas[~]$ sudo partprobe
# Now we need to get the ID for the new partition
admin@truenas[~]$ sudo fdisk -lx /dev/nvme0n1
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SKHynix_HFS512GEJ9X102N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1645DD88-9D5C-4EFB-8821-50F49287F473
First usable LBA: 34
Last usable LBA: 1000215182
Alternative LBA: 1000215215
Partition entries starting LBA: 2
Allocated partition entries: 128
Partition entries ending LBA: 33

Device             Start        End   Sectors Type-UUID                            UUID                                 Name Attrs
/dev/nvme0n1p1      4096       6143      2048 21686148-6449-6E6F-744E-656564454649 2C6EF97C-144B-4A84-AE56-FE387B58EA05      LegacyBIOSBootable
/dev/nvme0n1p2      6144    1054719   1048576 C12A7328-F81F-11D2-BA4B-00A0C93EC93B 1CA9D2E6-ADB3-4487-B978-0A745E067AC7
/dev/nvme0n1p3   1054720  135272447 134217728 6A898CC3-1DD2-11B2-99A6-080020736631 A033A85D-83EA-4C94-AB93-88690F7258C6
/dev/nvme0n1p4 135272448 1000215182 864942735 6A898CC3-1DD2-11B2-99A6-080020736631 10A95E72-8E30-4251-8B9E-B7B98EB73BCD
```
In my case, the ID I’m looking for is that big ugly string in the UUID column for /dev/nvme0n1p4.
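If copying that string by hand feels error-prone, you can pull the UUID field out with awk. A small sketch, using the p4 line from the transcript above as sample input; on the live system you’d feed in the real fdisk output instead (or skip the parsing entirely with `lsblk -no PARTUUID /dev/nvme0n1p4`):

```shell
# Grab field 6 (the UUID column) from the fdisk -lx line for p4.
# Sample line copied from the output above.
line='/dev/nvme0n1p4 135272448 1000215182 864942735 6A898CC3-1DD2-11B2-99A6-080020736631 10A95E72-8E30-4251-8B9E-B7B98EB73BCD'
echo "$line" | awk '{print $6}'
# prints 10A95E72-8E30-4251-8B9E-B7B98EB73BCD
```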
- Create and export the storage pool:

```shell
admin@truenas[~]$ sudo zpool create -f ssd-storage /dev/disk/by-partuuid/10A95E72-8E30-4251-8B9E-B7B98EB73BCD
admin@truenas[~]$ sudo zpool export ssd-storage
```
I named mine ssd-storage, you can name yours whatever you wish.
- Go back to the web interface, and import the new storage pool in the Storage tab.
-
- Be gay, do crimes?