Just got all the hardware set up and working today, super stoked!
In the pic:
- Raspberry Pi 5
- Radxa Penta SATA hat for Pi
- 5x WD Blue 8TB HDD
- Noctua 140mm fan
- 12V -> 5V buck converter
- 12V (red), 5V (white), and GND (black) distribution blocks
I went with the Raspberry Pi to save some money and keep my power consumption low. I’m planning to use the NAS for streaming TV shows and movies (probably with Jellyfin), replacing my Google Photos account (probably with Immich), and maybe streaming music (not sure what I might use for that yet). The Pi is running the desktop version of Raspberry Pi OS; I might switch to the Lite version. I’ve got all 5 drives set up and I’ve tested out streaming some stuff locally, including some 4K movies. So far so good!
For those wondering, I added the 5V buck converter because some people online said the SATA hat doesn’t do a great job of supplying power to the Pi if you’re only providing 12V to the barrel jack, so I’m going to run a USB-C cable to the Pi. I’m also using it to send 5V to the PWM pin on the fan. Might add some LEDs too, fuck it.
Next steps:
- Set up RAID 5 or ZFS RAIDz1?
- 3D print an enclosure with panel mount connectors
Any tips/suggestions are welcome! Will post again once I get the enclosure set up.
My understanding is that the only issues were the write hole on power loss for raid 5/6, and rebuild failures due to unseen damage on the surviving drives.
Issues with single-drive rebuild failures should be largely mitigated by regular drive surface checks and scrubbing, if the filesystem supports it. This should ensure that any single-drive errors that might have been masked by raid are corrected and all drives contain the correct data.
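If the OP goes the ZFS route, a periodic scrub can be scheduled with a systemd timer. A sketch, assuming a pool named `tank` (the pool name, unit names, and monthly schedule are all placeholders):

```ini
# /etc/systemd/system/zfs-scrub.service  (hypothetical unit name)
[Unit]
Description=Scrub the ZFS pool

[Service]
Type=oneshot
ExecStart=/usr/sbin/zpool scrub tank

# /etc/systemd/system/zfs-scrub.timer
[Unit]
Description=Monthly ZFS scrub

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now zfs-scrub.timer`; `zpool status` shows the result of the last scrub.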
The write hole itself could be entirely mitigated, since the OP is building their own system. What I mean by that is that they could include a “mini UPS” to keep 12V/5V up long enough to shut down gracefully in a power-loss scenario (using a GPIO pin as a “power good” signal). Back in the day we had raid controllers with battery backup to hold the cache memory contents and flush them to disk on regaining power, but those became super rare quite some time ago. Also, hardware raid always had the problem of finding a compatible replacement if the actual controller died.
Is there another issue with raid 5/6 that I’m not aware of?
That’s a fuckin great idea.
I was looking at doing something similar with my Asustor NAS. That is, supply the voltage, battery, charging circuit myself, and add one of those CH347 USB boards to provide I2C/GPIO etc and just have the charging circuit also provide a voltage good signal that software on the NAS could poll and use to shut down.
Nice. For the Pi 5 running Pi OS, do you think using a GPIO pin to trigger a sudo shutdown command would be graceful enough to prevent issues?
I think so. I would consider allowing a short time without power before doing that, to handle brief power cuts and brownouts.
So perhaps poll once per minute, and if there’s no power for more than 5 polls, trigger a shutdown. Make sure you can provide power for at least twice as long as the grace period. You could be a bit more flash and measure the battery voltage, and if it drops below a certain threshold, send a more urgent shutdown signal on another GPIO. But really, if the batteries are good for 20+ minutes then it should be quite safe to do it on a timer.
The logic could be a bit more nuanced, shortening the grace period after multiple short power cuts in succession (since the batteries could be somewhat drained). But this is all icing on the cake, I would say.
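That nuance could be a small helper that shortens the allowed number of missed polls based on how many outages were logged recently. A rough Python sketch; the function name, the one-hour window, and the floor of 1 poll are all made up for illustration:

```python
def adjusted_grace(base_polls, outage_log, now, window=3600, floor=1):
    """Return how many missed polls to tolerate before shutting down.

    base_polls: the normal grace period, in polls
    outage_log: timestamps (seconds) of recent power-loss events
    now:        current timestamp in seconds
    Each outage inside the window shaves one poll off the grace
    period, on the assumption the batteries are partly drained,
    but never below `floor`.
    """
    recent = sum(1 for t in outage_log if now - t < window)
    return max(floor, base_polls - recent)
```

So with a base of 5 polls, two outages in the last hour would leave a grace period of 3 polls.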
“sudo shutdown” gives you 60 seconds; “sudo shutdown now” does not, which is what I usually use. I’m thinking I could launch a script on startup that will check a pin every x seconds and run a shutdown command once it gets pulled low.
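That script could look something like this. A minimal sketch only: the poll interval and grace count come from the suggestion above, and the actual GPIO read is left as a callable you’d wire up with a real GPIO library (the pin number in the comment is a placeholder):

```python
import subprocess
import time

def watch_power(read_pin, do_shutdown, interval=60, grace_polls=5):
    """Poll a "power good" input; after `grace_polls` consecutive
    low readings, run the shutdown hook once and return.

    read_pin:    callable returning True while mains power is present
    do_shutdown: callable invoked when the grace period expires
    """
    misses = 0
    while True:
        misses = misses + 1 if not read_pin() else 0
        if misses >= grace_polls:
            do_shutdown()
            return
        time.sleep(interval)

# On the Pi itself you might wire it up roughly like this
# (pin 17 is hypothetical; needs a GPIO library for the read):
#   watch_power(
#       read_pin=lambda: GPIO.input(17) == GPIO.HIGH,
#       do_shutdown=lambda: subprocess.run(["sudo", "shutdown", "now"]),
#   )
```

Keeping the pin read and the shutdown as injected callables also makes the loop easy to test without hardware.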
For me, raid 5 has always been great, but zfs is just… better. Snapshots, scrubs, datasets… I also like how you can export/import a pool super easily, and stuff. It’s just better overall.