Some History
Now that I am coming off of my winter break, I’ll finally have a chance to write about some of the things I was able to do over it. This post will be the first in a series of two detailing the two big projects I worked on over the past four weeks. First, I’d like to talk about the bigger of the two: renovating my home media server. Around the summer of 2018, my dad and I started the process of digitizing all of our media. Legal gray area aside, this was a success, and after almost a year of work we finally had most things digitized. The goal was to eventually make everything available to the entire house. We weren’t sure how to go about this, but I’d heard of an application named “Kodi”, so I installed “Kodibuntu”, an Ubuntu distribution with Kodi built in, onto a spare computer. This was easy enough to set up, and soon enough we had all of our songs and most of our movies stored on this machine, ready to be viewed at a moment’s notice.
About six months after this was set up, and after numerous crashes from Kodi, I started looking for a different solution. I was very passive in my search since I had schoolwork to focus on as well, so this project sat on the back burner for a while. However, one day I was scrolling through my Google feed and saw a GitHub page for a project named “Jellyfin” claiming to be useful for media servers. I took a look at the project and it ticked all of our boxes: self-hosted, auto-indexing software for all of our media, all while providing a central interface to get everything from. So, one weekend I tore out Kodi and replaced it with Jellyfin; after all, Kodibuntu was just Ubuntu with Kodi on top of it, so I had access to all of the Ubuntu repositories. After that, it just sat there, slowly collecting any new media we got, waiting for us to watch stuff from it (which we do all the time on our Android TV with the Jellyfin app!).
A Growing Problem
Around mid-2020, our USB DVD reader died while ripping a DVD. This was unfortunate, but it was going on three years old at this point, so it was probably its time. We decided it was time to upgrade and ended up buying a USB Blu-ray drive so we could begin to rip HD and UHD content to our media server; I mean, the TV we had downstairs was a UHD TV, and it was time to stop watching 480p DVDs on it. We had a 1TB drive installed from the beginning, and we were only using about 150GB thus far, so we figured we still had a while before it ran out. That was before we ripped our first Blu-ray disc. It ended up being around 30GB after we got done ripping, and after compressing the video further in HandBrake we got it down to around 20GB. The Ultra HD Blu-ray discs were closer to 50GB! So, our estimate of how long the server would last quickly dropped.
While we were using more space quickly, we still never surpassed 1TB; that wouldn’t happen for another year or so. Around the fall of 2020, I left for college, and as things began to open up more permanently, buying and ripping movies started to fall by the wayside once again. We had spurts of buying and ripping discs, but nothing too often. However, we were starting to run into issues streaming our UHD content: it would buffer quite often. We figured this was a symptom of our Wi-Fi link being slow, but in hindsight this may not have been the case. We had been wanting to upgrade the machine for a while, but never had any reason (or the time) to do so just yet. My dad brought home a decommissioned server from his office that had six 1TB drives in it. One day we’d move everything over, but now wasn’t the time.
A Breaking Point
Just this past Christmas (2021), we hit a breaking point. I was gifted a box set containing the full series of The Office (which was taken off of Netflix for the sole reason of promoting Peacock, so I refuse to use Peacock), and I wanted to put it on the server before I left for college again. I got through about three seasons before I noticed the size of the folder: 250GB! This was huge, and I still had six seasons to go; the total was probably going to be around 850-900GB of content. That was never going to fit on the current server, which had maybe 90GB still free on it. We finally had to bring in the new server. However, we didn’t want to do it haphazardly. We were going to do this right (or at least as right as we know how).
Planning the Solution
So, we had this server with a total of 6TB of storage and 32GB of RAM. This was more than enough for our needs. Since we were also looking to overhaul our home network, we decided to work on that as well. Ideally, we wanted to move all of our networking equipment into another room so it was out of our downstairs family room, making it look a lot nicer (and most likely improving performance, since the modem would be closer to where the coaxial cabling enters the house; we’ve historically had issues with a noisy signal where it is currently set up).
We took some time to jerry-rig a mount for the server (since we don’t own a rack, nor have a place for an entire rack) and started preparing to put the server there. While we were coming up with a mounting solution, I got started setting up the server.
The Build
As mentioned previously, this machine was a decommissioned server from my dad’s office; it was a backup server for their infrastructure in a past life. So, the first thing I did was wipe everything off of it to start with a clean slate. We wanted full redundancy with this new setup, so I had to come up with a solution. Since this machine happened to support EFI booting, I used that (the approach I took would have worked with BIOS booting as well, just with some small differences in setup). We had six 1TB drives that I split into two partitions each: one ~500MB partition for the EFI boot files and one partition for the rest of the space, which would go into the main storage pool.
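If you want to recreate a layout like this, something along these lines with sgdisk should get you close. This is just a sketch, not a record of the exact commands I ran; the device names are placeholders and the type codes are a common convention, so adjust for your own drives:

```
# Repeat for each of the six drives (placeholder device names).
sgdisk --zap-all /dev/sda                  # wipe the old partition table
sgdisk -n 1:0:+500M -t 1:ef00 /dev/sda     # ~500MB EFI system partition
sgdisk -n 2:0:0     -t 2:bf00 /dev/sda     # rest of the disk for the ZFS pool
```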
Now, for full redundancy, I had to make sure everything got replicated; if I just relied on one drive for booting, that would be a single point of failure. Since every drive was the same size and partitioned exactly the same way, I was able to set up two RAID arrays to solve this issue.
The RAID Setup
The method in which we set up our server is a bit unconventional (at least I think so; I’ve never heard of a setup like this before). I used the beginning 500MB partitions as the EFI boot partition, duplicated among all six drives using mdadm’s RAID1 with metadata version 1.0. The metadata version is the most important part of this setup: with version 1.0, the mdadm superblock goes at the end of the partition instead of the beginning, meaning the filesystem headers can sit at the start, which is important if the motherboard firmware is to recognize these as proper boot partitions. Since the metadata is not at the front, I simply took the /dev/md0 device that was created after adding all the partitions to an array and wrote a FAT32 filesystem to it with the proper flags set. Lo and behold, when I went back to look at the separate partitions, they all appeared to be “formatted” with the same FAT32 filesystem containing the exact same data. Perfect! Our bootloader is fully redundant. I then added an entry for each drive in the motherboard’s firmware so it will run through the drives and boot from the first one that’s available, falling back if necessary.
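For reference, the array and filesystem creation boil down to something like this. Again, the device names are placeholders and the exact flags may differ slightly from what I typed at the time:

```
# Mirror the six ~500MB partitions; metadata 1.0 keeps the superblock
# at the end, so the firmware still sees a plain FAT32 partition.
mdadm --create /dev/md0 --level=1 --raid-devices=6 --metadata=1.0 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Format the array once; every member partition now carries the same ESP.
mkfs.fat -F 32 /dev/md0
```

If your firmware’s setup screens are less cooperative, efibootmgr can be used from Linux to add the per-drive boot entries instead.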
For the actual operating system, I used ZFS in a raidz1 setup. This is akin to a RAID5 array: it can survive one drive failure before you begin to lose any data. Since I was basing this install on Arch (I wanted to install the least number of packages possible; this is simply a base OS for KVM to manage virtual machines), I installed the zfs-dkms package from the AUR and let it compile for my kernel. Once it was fully installed, I created the pool using the second partition from each of the drives I had available. We were just going to make one big pool that contained the system and the VM images all in one. Once I made the volume the system would reside on, I followed the Arch install guide just like any other regular install. The only deviations were in generating the fstab entries and generating the initcpio image; I followed the Root on ZFS guide from the Arch wiki to get those right. Once my system was fully installed, I rebooted and… nothing :slight_frown:. I booted back into the live USB I created a few more times, changing various settings to no avail. For whatever reason, the Arch initcpio wouldn’t automatically mount our root partition and pivot into it. I could mount it manually just fine, but automounting did not work. In the end, after a few hours, the fix was simple: just add zfs=bootfs to your GRUB boot entry. That solved our issue and we booted right into the OS.
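For anyone trying to reproduce this, the ZFS side looked roughly like the following. Treat it as a sketch: the pool and dataset names (tank, tank/root) and device paths are just examples, not necessarily what I used:

```
# Build the raidz1 pool out of the big second partitions (placeholder names).
zpool create -o ashift=12 -O mountpoint=none tank raidz1 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2

# One dataset for the root filesystem, marked as the pool's boot filesystem.
zfs create -o mountpoint=/ -o canmount=noauto tank/root
zpool set bootfs=tank/root tank
```

The final piece was that kernel parameter: adding zfs=bootfs to GRUB_CMDLINE_LINUX in /etc/default/grub and regenerating the config with grub-mkconfig -o /boot/grub/grub.cfg tells the zfs initcpio hook to mount whatever the pool’s bootfs property points at as the root filesystem.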
Conclusion
This was a really fun project for me to work on over the break. It got me out of my comfort zone and into a territory of Linux I’d never explored before. Coming up with a solution completely on my own, figuring out how to execute it properly, and watching it slowly come to fruition was very rewarding, to say the least. I’m sure at some point we will expand this server again, and I’ll probably have to re-learn a lot of things, but hopefully this article taught you (and me, when I inevitably have to read through it to figure out what I did) something new :slight_smile:. At the very least, I hope it was an enjoyable read and really put you in my shoes, if only for a little bit.