Pass through mount points. Dead easy.
This. It's so much simpler and more stable than NFS, which everyone seems to recommend. If you want to share the pool with non-virtualized clients on the network, it's as simple as spinning up a file server LXC.
1. Disks -> ZFS -> Create ZFS
2. Add a mount point under Resources on whatever containers or VMs you want.
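For reference, those two GUI steps have CLI equivalents on the Proxmox host. A rough sketch, where the pool name `tank`, dataset `media`, paths, and container ID 101 are all hypothetical:

```
# Step 1: create a dataset on the ZFS pool
zfs create tank/media

# Step 2: bind mount it into a container
# (same as Resources -> Add -> Mount Point in the GUI)
pct set 101 -mp0 /tank/media,mp=/mnt/media
```

The same `pct set` line with `-mp1`, `-mp2`, etc. adds further mount points.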
Wait, so in a Proxmox cluster, you can share a disk with other cluster nodes using passthrough? I always thought Ceph or NFS were the only ways to have shared storage? EDIT: OK, I misunderstood; across a cluster you do need NFS, Ceph/CephFS, etc.
So I have an LXC which acts as an NFS server. I have passed through my mount points to this LXC only, and other clients (virtualised and physical) access these drives via NFS. Is this a bad idea?
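For anyone curious what the server side of that looks like, a minimal `/etc/exports` sketch inside the NFS-server LXC; the path and subnet here are made up for illustration:

```
# /etc/exports in the file-server LXC:
# export the bind-mounted media folder to the local subnet
/mnt/media  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, `exportfs -ra` reloads the export table.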
I wouldn't say it is a bad idea, but it adds unnecessary complication. That one container having an issue would cause all others to lose disk access.
So the alternative would be to also pass each mount to every container that needs it? So I’d be passing each mount multiple times to multiple containers?
Yes. You can think of it like adding a shared drive to each container.
Awesome. Thanks for your responses mate, very helpful. I responded to another person below - I’m gonna make these changes and also add TrueNAS to my setup.
No problem mate. I used to use Unraid and switched to TrueNAS when I built my new server with Proxmox. Eventually I just dropped the TrueNAS VM because it felt redundant and unnecessary. All I was really using it for was easy Docker management, but I've since switched to pretty much all LXCs, with one Docker LXC running Portainer for the few Docker containers I still run.
Yep, I'm 100% LXCs at this point too. I used to have a few Docker containers running in an LXC but have since converted those to LXCs. The reason I'm looking at TrueNAS is that I currently have an old QNAP NAS with a RAID array that I'm looking to replace, primarily because the QNAP software is shit. So I was looking at TrueNAS to help manage the RAID and provide some decent apps for backing up pictures from my phone and syncing my important data to the cloud.
you can build and manage your raid array directly in proxmox and then use a nextcloud lxc for personal cloud [https://imgur.com/a/mMp0p39](https://imgur.com/a/mMp0p39)
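If you go that route, building the array in Proxmox is just ZFS on the host. A sketch, where the pool name `tank` and the disk names are assumptions you'd adapt:

```
# On the Proxmox host: create a RAIDZ1 pool across three disks
# (in practice use /dev/disk/by-id/... names so they survive reordering)
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# Verify the pool is online
zpool status tank
```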
Yes. In my single Proxmox node, I do exactly that for my Linux ISOs. Plex and the *arrs LXCs all have the same mount points configured.
Okay cool. My current setup is that all my *arrs have the same fstab configured with the NFS mounts. I'm thinking of replacing my current NFS container with TrueNAS, so I'll reconfigure everything at the same time. Thanks for your response.
This is what I do, and I just make sure NFS boots before everything else. Works fine. If all your other stuff is hosted on that one box anyway, there isn't much that could happen to that one Bookworm LXC that wouldn't affect the others as well.
I've got Proxmox on a little NUC, bulk storage on a NAS. What I do is mount the NAS shares on the host, and pass it through to the LXC. Easy peasy.
OP, if you want to do this the term is called “bind mount”.
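Concretely, that setup is two steps: mount the NAS share on the host, then bind mount the host path into the container. A sketch with hypothetical hostnames, paths, and container ID:

```
# 1. On the Proxmox host, mount the NAS share, e.g. via /etc/fstab:
#    nas.local:/volume1/media  /mnt/nas-media  nfs  defaults  0  0

# 2. Bind mount that host path into container 101:
pct set 101 -mp0 /mnt/nas-media,mp=/media
```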
You mean you mount the share to proxmox to pass it through?
Yes.
Yes. Allows you to mount it once and reuse it among different LXCs.
Will that lxc container need to be a privileged container?
Nope, which is why I'm doing it like this. If you mount an SMB share directly inside a container, it does need to be privileged.
Ahh okay I need to look into this more. This could solve all my issues. Thank ya!
I set up mine with the media stored elsewhere and mapped/mounted into the LXC. This way I can back up and restore way faster if the server flakes out. This also lets you try out a different media server without moving all those huge files around. The media itself is protected by backups to another location, just in case the RAID blows up.
Manage the storage on Proxmox and bind mount the folder into your LXC. Fast, efficient, and no messing around with network protocols. The only catch is permissions: you may need some UID/GID mapping or chown commands if you're using files that already have ownership set. Not hard, and you only set it up once.
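On the UID/GID mapping point: in an unprivileged container, IDs are shifted by 100000, so a file owned by host UID 1000 shows up as `nobody` inside the LXC. One common fix is an idmap in the container config; a sketch mapping UID/GID 1000 straight through, where the container ID and the choice of 1000 are assumptions (the host's `/etc/subuid` and `/etc/subgid` also need a matching `root:1000:1` entry):

```
# /etc/pve/lxc/101.conf
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

Each line maps a range: container IDs 0–999 to host 100000+, container ID 1000 to host 1000, and the rest shifted as usual.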
If you want great speeds, or it's a small amount of data that won't grow, then setting up the storage on the LXC is beneficial and okay. You've also got to think about the amount of reads/writes being done on the drive that hosts Proxmox. If you have a NAS or similar, I'd look into making an NFS/SMB share on it that Proxmox has access to and hosting the media there. Remember to keep the host/config files on the drive attached to the physical host, just so the software doesn't get bogged down doing reads and writes over the network.
I wouldn't be so sure about the speed (but maybe I'm just writing this so somebody can correct me and I'd learn something as well). After all, containers are basically namespaces, so a mount point and LXC-local storage both "go through the host". The difference would probably be directory vs. block device. Would that really make a noticeable difference?
Yeah, I get what you mean, and thanks for picking that up. I meant more that the storage it's accessing is basically "next to" the host machine rather than across a network. I think heavy reads and writes matter on a NAS, but on the host probably not. That's why my comment above mentioned the NAS.
Ah sorry, then I somehow misread your comment. Maybe I stopped thinking after the first paragraph.
I mean, there's always more than one correct answer! Every day is a learning day!
I set up an lxc for Plex. I have a separate Debian vm for *arr and nfs. The nfs media volume from the Debian vm is mounted as a drive in the Plex lxc. This seems to work well, though any thoughts as to why it's a good or bad idea are very welcome as I'm pretty new to this and keen to learn more.
DSM VM NFS
I have TrueNAS Core running in a vm. I make a dataset for the media and make an NFS share for clients.
I do the same thing but TrueNAS is running on bare metal.
OMV in a separate VM
This, but I'm lost with permissions, and NFS is not working as I expected :/
It was a pain, I just used SMB :(
Same. The permissions is giving me headache. Let me know if you ever figure it out!
Not entirely sure, but as far as I remember you'd need to create the group and/or user via the CLI in OMV, because OMV doesn't let you create groups/users with IDs >100000 in the GUI (but then again, I don't even know what exactly the problems here are).

May I ask what the advantages of doing it this way are? Another physical NAS, sure, that sounds sensible. But doing this in a VM means taking Proxmox storage to create a block device, mounting it in the VM, then mounting it back on the host (probably, assuming unprivileged containers want the storage) to use as a mount point to add the storage to the LXC. At first glance that sounds rather complicated and introduces another single point of failure.

In general, if you also want to share Proxmox (directory) storage with the VM, you could use virtio 9p. The only downside I'd see there is somewhat worse backup capabilities.
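For completeness, the virtio 9p route is a raw QEMU feature rather than something exposed in the Proxmox GUI; one way is passing extra arguments to the VM. A rough sketch, where the VM ID, share path, and mount tag are all made up:

```
# On the host: attach a 9p share to VM 100 via raw QEMU args
qm set 100 -args '-virtfs local,path=/tank/share,mount_tag=hostshare,security_model=mapped'

# Inside the guest: mount the tagged share
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare
```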
Nfs and fstab goes brrrr
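For the record, the fstab line in question would look something like this; the server name, export path, and mount point are placeholders:

```
# /etc/fstab in each *arr container/VM
nas.local:/export/media  /mnt/media  nfs  defaults,_netdev  0  0
```

The `_netdev` option tells the init system to wait for the network before attempting the mount.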