

Any reason why that board? I'm not 100% sure what you're trying to do, but it seems like an expensive board for a home NAS, and I feel like you could get more value out of other hardware. Again, you don't need a RAID controller these days; they're a pain to deal with and offer less protection than software RAID. It looks like the x16 slot on that board can be split 8/8, so if needed you can add an adapter to fit 2 NVMe drives.
You can just get an HBA card and add a bunch of drives to that as well if you need more data ports.
I would recommend doing a bit more research on hardware and trying to figure out what you need ahead of time. Something like an ASRock server board might be a better fit in this case. The EPYC CPU is fine, but maybe get something that takes RDIMM memory. I would just make sure it has a management port like the IPMI on the Supermicro.
You will get different answers here. Some people like Proxmox with ZFS; you can run VMs and LXC containers pretty easily. Some people like running everything in containers with Podman or Docker. Some people like to raw dog it and just install everything on bare metal (I don't recommend that approach, though).
My current setup is three servers: one for compute, which is where I run all my services, one for storage, and one for backup storage.
The compute server mounts an NFS share from the storage server. They all have 10GbE NICs connected to a 10GbE switch.
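If it helps to picture it, the compute side is just a normal NFS mount; a minimal sketch like the one below, where the hostname, export path, and mount point are made up for the example:

```
# one-off mount for testing
sudo mount -t nfs4 storage.lan:/mnt/tank/shared /mnt/shared

# or make it permanent with a line in /etc/fstab
storage.lan:/mnt/tank/shared  /mnt/shared  nfs4  rw,hard,_netdev  0  0
```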
If I could go back and redo this setup, I would make a few changes. I do have a few NVMe drives in my storage server backing the NFS share. The compute server keeps the user home directories on that share, as well as the persistent data for the containers that have volumes, which makes it easy to back that data up to the other server as well.
With that said, I kinda wish I had gone with less raw storage and built the server out with mostly NVMe drives. My mobo doesn't do bifurcation on its x16 slots, so I can only get 1 NVMe per slot, which is a waste. NVMe drives can run somewhat hot, but they're smaller and easier to cool than platters. Plus rebuilds are much faster if a drive fails, so you could probably get away with a single parity drive.
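For reference, a single-parity ZFS pool across four NVMe drives is basically a one-liner. A rough sketch, with a made-up pool name and device paths (in practice you'd want the /dev/disk/by-id paths):

```
# raidz1 = single parity; ashift=12 for 4K sectors
sudo zpool create -o ashift=12 fastpool raidz1 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```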
I would still need a few big spinning drives for my media, but that data isn't as critical to me if I were to lose something there.
What I would look for in a storage system is the following:
- A management port like IPMI, so you can run it headless
- Support for RDIMM (ECC) memory
- Enough PCIe slots and lanes for an HBA and a 10GbE NIC
- Bifurcation on the x16 slots, so one slot can host multiple NVMe drives
With those requirements in mind, something like an ASRock server motherboard with an AMD EPYC would normally fit the bill. I have seen bundles go for about $600-700 on AliExpress.
As far as the OS goes, I treat the storage server as an appliance and run TrueNAS on it. This is also the reason I have a separate compute server: it makes it easier for me to manage services the way I want without trying to hack the TrueNAS box. It also makes it easy to replicate to my backup server, since that runs TrueNAS too. I take snapshots every hour and those get replicated, and I also push a cloud backup of the critical data every hour.
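Under the hood, that replication is just ZFS snapshots plus incremental send/receive, which TrueNAS schedules for you. A rough sketch of the equivalent commands, with made-up dataset, snapshot, and host names:

```
# take an hourly snapshot on the storage box
zfs snapshot tank/data@latest

# send only what changed since the previous snapshot to the backup box
zfs send -i tank/data@prev tank/data@latest | ssh backup.lan zfs recv backuppool/data
```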
Last but not least, I have a VPS so I can access my services from the internet. It uses a WireGuard tunnel and forwards traffic from the VPS to the compute server.
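The WireGuard side of that is pretty small. A minimal sketch of the VPS config, with placeholder keys, addresses, and port, and a single iptables rule standing in for whatever forwarding you actually need (the VPS also needs net.ipv4.ip_forward=1):

```
# /etc/wireguard/wg0.conf on the VPS (keys and addresses are placeholders)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# example: forward HTTPS from the VPS's public IP down the tunnel
PostUp = iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2:443
PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE

[Peer]
# the compute server at home
PublicKey = <compute-public-key>
AllowedIPs = 10.8.0.2/32
```

The compute server's config just points its Endpoint at the VPS and sets PersistentKeepalive so the tunnel stays up from behind NAT.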
For the compute server, I am managing mostly everything with Saltbox, which uses Ansible and Docker containers for most services.
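Even if you don't use Saltbox, the pattern per service is the same: a container whose persistent data sits on a path backed by the NFS share, so it gets snapshotted and replicated with everything else. A generic Docker Compose sketch (the image, ports, and paths are just examples, not Saltbox's actual layout):

```
services:
  someapp:
    image: example/someapp:latest   # placeholder image
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      # config lives on the NFS-backed path so it rides along with
      # the hourly snapshots and replication
      - /mnt/shared/appdata/someapp:/config
```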
No matter what you choose, I highly recommend ZFS for your data. Good luck!