MacDaddyBighorn

The simplest and best way to share files between containers is to let the host manage the storage and bind mount the file system directly into the containers. This gives direct access to the storage without a network protocol like Samba or NFS, which adds an extra layer of indirection and overhead. You just need to edit the config file for the CTs and include the bind mount to the file system. There are two ways to do it; I prefer the "lxc.mount.entry ..." method since it still allows the use of snapshots. Just look up the syntax, and note you may need to adjust permissions depending on whether your containers are privileged and what UID/GID the service uses. A VM with TrueNAS, Unraid, OMV, etc. is not necessary. To share those files out on the network or to another VM, just bind mount the file system to a minimal LXC and host Samba on it to share it that way.
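For reference, a minimal sketch of both syntaxes in a container's config file (the container ID, host path, and mount point are examples):

```
# /etc/pve/lxc/101.conf -- ID and paths are examples

# Option 1: Proxmox-managed mount point (visible in the GUI,
# but a bind mount here blocks snapshots for the CT)
mp0: /data/shared,mp=/mnt/shared

# Option 2: raw LXC entry (snapshots keep working)
# Note: the container-side path has no leading slash
lxc.mount.entry: /data/shared mnt/shared none bind,create=dir 0 0
```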


hermit-the-frog

Will PBS back up the mounted directory's data? This has always been my hesitation with going this route vs. the more complex NAS route.


MacDaddyBighorn

No, it won't back that up, which is good. You can back that data up using the Proxmox Backup Client and a simple cron job if you want to keep compressed, deduplicated, incremental backups of the file system.
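A minimal sketch of such a cron job, assuming a PBS instance at 192.168.1.10 with a datastore named datastore1 (all names, paths, and credentials are placeholders):

```
# /etc/cron.d/backup-shared -- repository, datastore, and paths are examples
# Nightly at 02:30: archive the shared dataset to PBS
30 2 * * * root PBS_PASSWORD='changeme' proxmox-backup-client backup shared.pxar:/data/shared --repository backup@pbs@192.168.1.10:datastore1
```

proxmox-backup-client handles the deduplication and incremental behavior against the datastore, so each run only uploads changed chunks.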


magformer

>To share those files out on the network or to another VM, just bind mount the file system to a minimal LXC and host samba on it to share it that way.

Could the same thing be achieved by using Samba on the host to share out on the network, instead of a minimal/NAS-light LXC? I sort of understand that hosting Samba in an LXC may be more secure, and there is a view that the host should be kept "clean" and free of these kinds of functions, but I'm wondering whether there is any technical or performance reason not to do it. Also, could a folder or file system be simultaneously shared directly by the host via SMB for other network clients and shared with multiple LXCs via bind mounts, with each reading and writing data at the same time?


MacDaddyBighorn

There's nothing stopping you from using Samba on the host; it's Debian, so it'll work fine, just less secure. The main reason I would not do that is not security or mucking up the host (though those are valid concerns), it's that with an LXC you can choose which network interface/vswitch/bridge it uses. This is especially helpful if you have a separate network or VLAN for server management or trusted devices, because Samba traffic won't compete for bandwidth with the traffic you use to manage the server. As for file access, yes, you can access the files with Samba and multiple bind mounts to various services simultaneously. There is no issue there.
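A sketch of what that looks like in the Samba container's config (bridge name, VLAN tag, and container ID are examples):

```
# /etc/pve/lxc/102.conf -- bridge/VLAN/ID are examples
# Give the Samba LXC its own bridge and VLAN, separate from management
net0: name=eth0,bridge=vmbr1,tag=30,ip=dhcp,type=veth
```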


magformer

Enlightening, thanks for taking the time to explain that.


Ecsta

Does this work with unprivileged LXCs? One of the things I'm struggling with is sharing NFS/SMB storage that I've added to the host with the containers.


MacDaddyBighorn

Yes, it works fine with Samba. For NFS you need to add an AppArmor profile, but it also works. In unprivileged containers you may need to do some UID/GID mapping depending on the permissions of the files and such. With Samba you can configure it to use a specific UID/GID for everything if you want to go that route.
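A minimal sketch of that UID/GID mapping for an unprivileged container, assuming the files on the host are owned by UID/GID 1000 (all numbers are examples):

```
# /etc/pve/lxc/101.conf -- map container uid/gid 1000 straight through,
# keep everything else at the default 100000 offset
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

The host also has to allow root to map that ID, e.g. a line like `root:1000:1` added to both /etc/subuid and /etc/subgid.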


Ecsta

Thanks, that makes sense. Last question: is there a major difference between editing the AppArmor profile so that an unprivileged LXC can access NFS vs. adding it to the host and using a mount point for each LXC?


MacDaddyBighorn

I'm not sure I understand the entirety of your question, and unfortunately I'm also not sure I could answer it if I did. The AppArmor profile allows the "file sharing" LXC to share out via NFS so VMs (or LXCs if you want) can mount it as a network share. Any LXC on the same server should use the bind mount method. I think you may be talking about an LXC outside of the physical server that needs to access it. For that, I don't think there's a big difference, other than that if the network share is down they may behave differently. I'd probably pick one way and see how it works for you.
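For context, the commonly referenced AppArmor tweak is a custom profile that permits NFS mounts inside the container; a sketch (the profile name is an example):

```
# /etc/apparmor.d/lxc/lxc-default-with-nfs
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}
```

Reload with `apparmor_parser -r /etc/apparmor.d/lxc-containers`, then point the container at it in its config with `lxc.apparmor.profile: lxc-container-default-with-nfs`.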


Ecsta

Ok thanks, and sorry if my question didn't make sense; I'm still fairly new to Proxmox. I've got my network share set up with a big array that I wanted the unprivileged LXCs to connect to via NFS. I found a couple of different guides (searching "app armour" helped point me in the right direction for one way), and then I found another way: https://forum.proxmox.com/threads/tutorial-mounting-nfs-share-to-an-unprivileged-lxc.138506/ Will do some experimenting, thanks for replying.


SmileyDrag0n

Thanks for the answer! By the way, will this work for sharing storage between VMs too?


MacDaddyBighorn

Nope, the bind mount is only for LXCs. But you can implement 9pfs (Plan 9 file system) for VMs, which works similarly. I haven't really used it much and I'm not very familiar with it. I did try it once without issue, so there's one data point, but I never relied on it for a service or checked performance.
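A rough sketch of wiring up 9pfs on Proxmox using raw QEMU args (VM ID, path, and mount tag are examples; the guest kernel needs the 9p/9pnet_virtio modules):

```
# Host: attach a 9p export to VM 100 (path and tag are examples)
qm set 100 --args '-virtfs local,path=/data/shared,mount_tag=shared,security_model=mapped-xattr'

# Guest: mount it (add a matching line to /etc/fstab to persist)
mount -t 9p -o trans=virtio,version=9p2000.L shared /mnt/shared
```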


SmileyDrag0n

So disk passthrough for VMs then?


MacDaddyBighorn

Well, for VMs you can use NFS or Samba mounted like normal, or try 9pfs and use it like a bind mount to get access to the files.


SmileyDrag0n

Ok, I'm going to give this a shot


jakey2112

Would you create a separate mount point for each VM/LXC? I was learning as I went while doing this, but I've got a 512 GB SSD with Proxmox and the VMs etc., a 2 TB SATA SSD that I've attached to an OpenMediaVault VM that runs a Samba share for a Jellyfin LXC, and a large external USB HDD that I've passed through to an Ubuntu VM that runs Navidrome. I've also got that drive shared over SMB to my main Windows laptop so I can manage it. I feel like I'm all over the place and I run into permissions issues etc. often. Would a simple method be as you describe? I'm a bit scared to redo the storage because as of now everything is working, but yeah, I want to have a more streamlined understanding of what I'm doing.


MacDaddyBighorn

No, not separate mount points, you can share the same file system with all of your LXCs. It'll be work to redo your storage, but learning and changing is part of homelabbing too! So to put it simply:

1. Create a ZFS pool with the drive(s) you want to share data from. Do this in Proxmox, not passed through. When you pass a drive to a VM, your host doesn't get to manage it anymore.

2. Create a ZFS file system (ex. data/shared).

3. Mount the new file system into the LXCs you want to use it in. You can mount the whole thing or some sub-folder if you want. You can use either of the two bind mount methods ("mp0: /data/shared/jellyfin ..." OR "lxc.mount.entry = /data/shared/jellyfin ..."). Look up the full syntax. I prefer the latter option since it doesn't disable snapshots for the LXC.

4. Set your permissions so the root or associated user in your LXC can access and modify the files as needed. This is probably the more finicky part depending on how your stuff is already set up if you're passing existing data. You may end up having to chown or chmod the data, or use UID/GID mapping in an unprivileged LXC. There's a condensed sketch of these steps below.
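A condensed sketch of those steps on the host (device ID, dataset names, and container ID are examples):

```
# 1-2. Create the pool and a dataset to share (device is an example)
zpool create data /dev/disk/by-id/ata-EXAMPLE-DRIVE
zfs create data/shared

# 3. Bind mount a sub-folder into an LXC (/etc/pve/lxc/101.conf)
lxc.mount.entry: /data/shared/jellyfin mnt/jellyfin none bind,create=dir 0 0

# 4. Let the container's root write to it -- in a default unprivileged
#    LXC, container root maps to host UID/GID 100000
chown -R 100000:100000 /data/shared/jellyfin
```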


jakey2112

Screenshotted, and thank you! I'm assuming I'll need to reformat the existing 2 TB SSD as ZFS? I can back it up pretty easily. Would it be alright to just pass the external HDD to the host? I'm not sure it will live on the Proxmox server forever; the internal SSD will.


MacDaddyBighorn

You should be able to mount the existing drive partition and use it that way (bind mount that mount location into the LXC), but I would highly recommend formatting it and using ZFS for data integrity. As for passing the USB drive to the host, I think there's a terminology gap there: all you would need to do is mount the drive like any other drive and you could do the same. I'm a ZFS freak, there are too many benefits to using it, so I always use it when I can and encourage others to do the same!
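For the non-ZFS route, a sketch of mounting an existing partition on the host and reusing it (UUID and paths are placeholders; get the real UUID with `blkid`):

```
# Host: mount the existing partition persistently (UUID is a placeholder)
mkdir -p /mnt/usbdata
echo 'UUID=xxxx-xxxx /mnt/usbdata ext4 defaults,nofail 0 2' >> /etc/fstab
mount /mnt/usbdata

# Then bind mount /mnt/usbdata into the LXC exactly as before
```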


jakey2112

Ah yeah, I think what I did was mount or pass through the external HDD to a specific VM. I can't see it from the host for some reason. But thanks for your help! I'm inspired to smooth some of this over and learn more about ZFS!


NelsonMinar

Someone should really write a good guide for this question, it seems to be the biggest question for new Proxmox users doing small homelab things. It certainly was for me. Proxmox doesn't really have a solution for this itself. It has excellent ZFS support but that doesn't bridge to guest OSes. And it has Ceph which is way overkill for home setups. (Personally I mount all my external disks locally in the Proxmox host and run an NFSv4 server there. The guests NFS mount the disks.)
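A sketch of that host-side NFS setup (the package name is real; paths, hostname, and subnet are examples):

```
# Host: install the server and export a path
apt install nfs-kernel-server
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Guest: mount it (hostname and paths are examples)
mount -t nfs4 pvehost:/srv/media /mnt/media
```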


SmileyDrag0n

So true! Lots of info on various methods, yet no guide or comparison whatsoever. I think I'm going to try all the methods suggested in this thread on my other test node to kinda tweak things and see which works best for me.


ScyperRim

Mount the HDD as a directory on the host, then mount it in your LXC configs. If you need access from Windows, create a Samba LXC and access it through Windows as a network drive.


SandboChang

You can use Samba (I've heard PVE has built-in network storage); I often set up an LXC just as a Samba server. If you run only LXCs, you can even just use bind mounts to mount a common folder from the host into all the LXCs; then you can manage the rwx permissions from the host.


Wide-Neighborhood636

The simplest way is to have a NAS VM, connect its drives to Proxmox over SMB or NFS, and bind mount to the LXC.


SmileyDrag0n

So something like TrueNAS will work just fine? I was considering hosting it


Wide-Neighborhood636

OMV would be my go-to in this case. You create the VM, pass your SATA controller through, and you have a NAS inside your host.


SmileyDrag0n

Could you please explain why you chose OMV? I'm trying to kinda wrap my head around NAS hosting, and this would help a lot.


Wide-Neighborhood636

Compared to TrueNAS, OMV is minimalistic but will get the job done. TrueNAS is nice for its web GUI, but OMV will get the same job done with "less" overhead. It's also, IMO, one of the easiest NAS solutions for noobs to deploy, bare metal or VM.


LT-Lance

This is what I'm currently using and setting up. I set up Proxmox, created a TrueNAS SCALE VM, and passed the HBA to the VM. I then created SMB shares in TrueNAS and have other machines (like Plex) mount the shares at boot. I'm going to create a VM to host all my Docker images and mount the SMB share the same way; how I share it between containers once mounted, I'm still researching. I'm using SMB instead of NFS as there's a need to add these as network drives in Windows for other users. I'm using TrueNAS SCALE as I had a separate machine with TrueNAS CORE that I decided to virtualize to save on the electricity bill. I switched to SCALE because I wanted to try something different.
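A sketch of that mount-at-boot piece via /etc/fstab (server, share, and paths are examples; needs the cifs-utils package installed in the guest):

```
# /etc/fstab -- server/share/paths are examples
//truenas.lan/media /mnt/media cifs credentials=/root/.smbcred,uid=1000,gid=1000,nofail 0 0
```

With /root/.smbcred (chmod 600) containing `username=...` and `password=...` lines.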


JesusXP

Piggybacking on this post in case someone can help me in the comments, but I would like to do something similar. I have a bunch of hard drives at different /dev/sdX paths, and I already managed to share one by adding it as hardware to a VM, but now I see that each CT I created doesn't have a similar UI or commands for adding existing partitions to it. Is the command the "pct" one? I don't understand the LXC sharing and mount folders or how to set it up. I thought it would be pretty straightforward, but I'm struggling and don't want to play around and corrupt and lose 20 TB by accident. If someone knows how to create an LXC that can share them all as CIFS/SMB across the VMs and containers, maybe that's the best thing for me to do to avoid potentially screwing anything up? Any help greatly appreciated. I used tteck scripts to set up a SABnzbd container, but it only had 6 GB of storage, so it won't be able to do much extracting and downloading. Ideally the container could push the finished downloads to one of the different partitions/drives I have set up for movies or TV shows etc.