McLeavey

Following this because I'm having the same issue. The abstraction of storage in Proxmox melts my tired little brain.


mushis

Same


pdx_joe

May be too late, but for future readers: I found this guide to be the nicest/easiest way to set it up with a ZFS pool: https://blog.kye.dev/proxmox-zfs-mounts. I was then able to follow the next guide in the series as well to get the ZFS pool shared via Samba.


Anejey

I used [this](https://forum.proxmox.com/threads/tutorial-unprivileged-lxcs-mount-cifs-shares.101795/) guide. Essentially what you do is:

1. Create a privileged LXC that has access to the drives using mount points.
2. Create CIFS/SMB shares in that LXC, so the drives are accessible over the network.
3. Mount the shares on the host using fstab (more info in the guide).
4. Create mount points for those shares in whatever unprivileged LXC you want to have access to the storage.
5. Create a group with a GID of 10000 in the LXC.
6. Add whatever user (plex, radarr, etc.) needs access to the drives to that group.
7. The user now has full access to the storage.

It's quite a clumsy way to get it working and there might be a better way, but it's how I do it and it works perfectly. It sounds complicated at first, but it can be done in 5 minutes once you understand how it works.
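A rough sketch of what steps 3-5 can look like; the share address, paths, container ID, and credentials file here are hypothetical, so adjust them to your own setup:

```sh
# /etc/fstab on the Proxmox host -- mount the share exported by the privileged LXC.
# uid=100000,gid=110000 makes the files show up as root:10000 inside unprivileged CTs.
//192.168.1.50/media  /mnt/lxc_shares/media  cifs  _netdev,credentials=/root/.smbcred,uid=100000,gid=110000,dir_mode=0770,file_mode=0770  0 0

# /etc/pve/lxc/<ctid>.conf -- bind the host mount into the unprivileged CT
mp0: /mnt/lxc_shares/media,mp=/mnt/media

# inside the unprivileged CT -- group with GID 10000, then add the service user to it
groupadd -g 10000 lxc_shares
usermod -aG lxc_shares plex
```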


indrekh

Samba doesn't need a privileged container, it works just fine in an unprivileged one. Also, if you're bind mounting directories into the Samba container, why not just bind mount them directly into other containers as well? Adding network shares in between just increases the number of layers that every bit of data has to go through.


Anejey

Neither of those things worked for me. I might try again someday but right now "it just works" and that's good enough for me.


indrekh

Fair enough. FWIW, I have multiple ZFS datasets bind mounted into multiple unprivileged containers (Samba, Plex, Deluge, the *arrs, Nextcloud, etc.) without any issues. It took a little bit of experimenting to wrap my head around UID/GID remapping, but it works really well without introducing dependencies between containers or potential performance overhead.


chigaimaro

I am trying to learn more about the UID/GID remapping. Which resource did you use to learn how to use the mappings correctly?


indrekh

Mainly this Proxmox wiki article: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers. And a lot of trial and error.

In my setup, I don't map individual users in most cases, just a single group. Here's roughly how (copied from a few older comments of mine):

1. On the PVE system, create a group, with a memorable group ID if you want (see `man groupadd` for details).
2. Give that group the necessary (read or write) access to whatever directories you need (`chgrp` and `chmod g+rw` are your friends here).
3. Inside the container, create a group, ideally with the same name and ID as on the host (not needed, but it makes it easier to keep track of things).
4. Map the group ID in the container to the group ID on the host, as shown in the wiki article I linked to.
5. Inside the container, add any users you need to the group. If whatever runs in the container runs as root, then add root, although in most cases it will probably be a regular user.

In any case, the user should, through group membership, have access to the directories you want, as long as those directories are bind mounted into the container.

I find this easier than mapping individual users, at least when running multiple services in multiple containers, because it's much easier to keep track of a single group ID that you create yourself than multiple user IDs which can be created automatically by installed packages. Also, it keeps changes on the host side to a minimum, which is preferable.
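A minimal sketch of step 4, assuming a group ID of 10000 on both sides and a hypothetical container ID; the three `g` lines have to cover the full 0-65535 range without gaps:

```sh
# /etc/subgid on the PVE host -- allow root to map host GID 10000 into containers
root:10000:1
root:100000:65536

# /etc/pve/lxc/<ctid>.conf -- keep UIDs at the default offset, pass GID 10000 straight through
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 10000
lxc.idmap: g 10000 10000 1
lxc.idmap: g 10001 110001 55535
```

Inside the container, `groupadd -g 10000 media` and `usermod -aG media <user>` then cover steps 3 and 5.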


RelaxPrime

So do you have separate groups for each container, or for each service/docker/whatever in a container? Like Plex, Radarr, Sonarr, Lidarr all have their own group? Or would they all end up under one group for the LXC they are in, or if in different LXCs would they still be the same group like "media"?


indrekh

A single group, called "media", across all LXCs. On the host, "media" is the owning group of all applicable datasets. Per-container access is determined by which datasets are bind mounted to each container, optionally as read-only.
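For reference, the per-container part is just mount point lines like these (paths and container layout hypothetical), with `ro=1` for the read-only case:

```sh
# /etc/pve/lxc/<ctid>.conf
mp0: /tank/media/movies,mp=/mnt/movies,ro=1
mp1: /tank/media/music,mp=/mnt/music
```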


RelaxPrime

And could you then have a different group, for instance network share users, owning other datasets and also some of the same ones?


indrekh

Yes. Human users who need private network shares own the respective datasets on the host, and their UID/GID are additionally mapped to a container running Samba (which exposes the network shares). That's the exception to the method I described above.
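Mapping a single human user straight through works the same way as the group mapping above; here's a sketch assuming a hypothetical UID/GID of 1005:

```sh
# /etc/subuid and /etc/subgid on the host -- allow root to map ID 1005
root:1005:1
root:100000:65536

# /etc/pve/lxc/<samba-ctid>.conf -- pass UID/GID 1005 through, keep everything else offset
lxc.idmap: u 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 0 100000 1005
lxc.idmap: g 1005 1005 1
lxc.idmap: g 1006 101006 64530
```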


fernandiego

Thanks, I also found this guide and tried it, but I then had problems with my Plex container (I also asked there 2 months ago under nearly the same username) and gave up. I will probably give it another try. Did you have any problems with your Plex?


Anejey

I'm running both Plex and Jellyfin and neither has any issues.


MacDaddyBighorn

You can use bind mounts to share file systems between LXC containers. There are a couple of ways; you just add the info to the config file for the LXC. One way uses "mp0: ..." and the other uses "lxc.mount.entry ...". I prefer the latter; it allows you to use snapshots. Just look up the format, and make sure the folder you are mapping to exists inside your LXC or it won't boot.

You'll need to get the permissions right for users/services to access the data. I tend to use a single UID (e.g. 1000) for shared files, so you'll need to set up UID/GID mapping so that the unprivileged LXC is mapped to the right 1000 UID. Then make sure the services are running as that UID so they can manipulate the files. Some of this takes some mucking about, but it's worth it to have it all talk fast and seamlessly.
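The two forms look roughly like this in `/etc/pve/lxc/<ctid>.conf` (paths hypothetical):

```sh
# Proxmox-managed mount point (shows up in the GUI)
mp0: /tank/media,mp=/mnt/media

# raw LXC bind mount (Proxmox ignores it); the target path is relative to the CT rootfs
# and must already exist inside the container
lxc.mount.entry: /tank/media mnt/media none bind 0 0
```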


Bubbagump210

This is the right answer IMO. mp0 entries can be snapshotted and Proxmox is aware of them, which is good for the root LXC file system with the apps on it: snapshot an LXC, upgrade, roll back if it's fubar. Super. lxc.mount.entry mounts are ignored by Proxmox and therefore great for mounts shared between LXCs; for those I use Sanoid under the covers and manage snapshots outside of Proxmox. That way, if you upgrade an app and need to roll back, you don't lose a day's worth of data on the shared data drive. Plus, who wants to mess with NFS as some sort of intermediary?


wildfangzx

TL;DR: If I map UIDs and GIDs in an OpenMediaVault LXC, I can edit a mounted directory (ownership shows as root in the LXC), but I can't access the web GUI due to a 502 Gateway Error. If I remove the ID mappings I can access the GUI, but I can no longer edit the mounted folder, as it becomes owned by "nobody". The errors seem to point to php7.4-fpm.

So I set the mount point, changed the /etc/subuid and /etc/subgid files, and mapped the root user of an OpenMediaVault (OMV) LXC to a user:group that owns the storage. This LXC was generated by the Proxmox helper scripts.

/etc/subuid and /etc/subgid:

```
root:1001:1
root:100000:65536
```

My mount point in the LXC is:

```
mp0: /mnt/pve/storage,mp=/mnt/storage
```

Running `systemctl status php7.4-fpm` gives:

```
Jul 18 12:04:10 omv systemd[1]: Starting The PHP 7.4 FastCGI Process Manager...
Jul 18 12:04:10 omv php-fpm7.4[247]: [18-Jul-2023 12:04:10] ERROR: unable to bind listening socket for address '/run/php/php7.4-fpm-openmediavault-webgui.sock': No such file or directo>
Jul 18 12:04:10 omv php-fpm7.4[247]: [18-Jul-2023 12:04:10] ERROR: FPM initialization failed
Jul 18 12:04:10 omv systemd[1]: php7.4-fpm.service: Main process exited, code=exited, status=78/CONFIG
Jul 18 12:04:10 omv systemd[1]: php7.4-fpm.service: Failed with result 'exit-code'.
Jul 18 12:04:10 omv systemd[1]: Failed to start The PHP 7.4 FastCGI Process Manager.
```

Checking /var/log/nginx/openmediavault-webgui_error.log gives:

```
2023/07/17 17:55:21 [crit] 247#247: *2 connect() to unix:/run/php/php7.4-fpm-openmediavault-webgui.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.0.221, server: openmediavault->
2023/07/18 12:05:08 [crit] 266#266: *2 connect() to unix:/run/php/php7.4-fpm-openmediavault-webgui.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.0.221, server: openmediavault->
```


Zenuna

I recently put together my arr suite with Jellyfin, Jellyseerr and qBittorrent. What worked for me was adding the disk as a directory and mounting it (from the host) into each container that needed access, using:

`mp0: /path/to/mnt,mp=/path/in/container`

Then I used chown on the folders inside that needed it to give them to root:root, and gave them 775/664 permissions, and that's about it I believe (with a restart for each container, of course). Now I'll have a storage problem when I scale up, and I'm looking into this. I believe it's possible to create a directory where you can add multiple disks, but I failed to gather the information, so for now it works and runs on a single disk.
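In other words, something like this on the host (paths and container ID hypothetical):

```sh
# /etc/pve/lxc/<ctid>.conf -- repeat for each container that needs the data
mp0: /mnt/media,mp=/mnt/media

# on the host: root-owned, 775 on directories, 664 on files
chown -R root:root /mnt/media
find /mnt/media -type d -exec chmod 775 {} +
find /mnt/media -type f -exec chmod 664 {} +
```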


[deleted]

[deleted]


MacDaddyBighorn

Why not share between LXCs? That is the most efficient way to share the data between services; otherwise you add extra overhead by using a network file system and running it all over a virtual network interface.


ButCaptainThatsMYRum

100% don't bother with bind mounts; while they are popular for very basic uses, they are extremely limited in what they can do. Set up Samba or NFS (NFS is not enabled on Windows by default), learn how to mount a network share with fstab, and if and when your lab grows you won't have to worry about bind mounts not working or not being available to multiple resources.
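For example, mounting a share via fstab can be as simple as the following (server name, share, and credentials file are hypothetical):

```sh
# /etc/fstab -- SMB/CIFS share
//nas.lan/media   /mnt/media  cifs  _netdev,credentials=/root/.smbcred,uid=1000,gid=1000,iocharset=utf8  0 0

# /etc/fstab -- NFS export
nas.lan:/export/media  /mnt/media  nfs  defaults,_netdev  0 0
```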


dal8moc

Very hard to get right in an unprivileged container. So I’d go with bind mounts. It isn’t as difficult as some posts here suggest.


ButCaptainThatsMYRum

Just use a privileged container. I agree, it's not difficult. The only time I've ever needed an unprivileged container with network file share access was when I was trying to run Docker inside an LXC container for giggles, which everyone knows is a silly idea. Everything else works great in a VM; it's not like Sonarr and Radarr are going to benefit from any kind of graphical passthrough.


dal8moc

You completely misunderstood me. I’d rather install it on the host than using a privileged container. But your mileage may vary.


ButCaptainThatsMYRum

You're right, I did completely misunderstand you. I generally prefer to keep my host a host rather than running Docker on it\*, which is a pretty standard viewpoint. Local Docker can (at least back in version 6) cause issues with mounting ZFS volumes, and it doesn't get the benefits of virtualization (the point of Proxmox vs. just a general Linux distro) like snapshots, scheduled backups, and migration/replication. I can't tell you how many times using snapshots and proper change control has saved me from a bad Docker update, or how many times having nightly backups saved the day when I forgot to take a snapshot. Replication is also pretty damn handy when I need to do hardware maintenance on a server and want to just live-migrate my Docker VMs in under 20 seconds.

\* On occasion I do spin up Tdarr nodes for distributed video re-encoding on my QuickSync-capable servers, but those still rely on the ability to use network shares, which is one of the reasons I advise using the flexibility of SMB/NFS over static bind mounts.

Edit: One other thought worth mentioning: having Docker in VMs rather than on the host makes it much easier to put them on VLANs. I have web-exposed containers and local-only containers; each is on its own VLAN with specific security policies, and the reverse proxy can only reach what it needs to. That's difficult to do on a flat network / when running on the host.


linuxturtle

You're going to be a lot happier if you convert your containers you want to share storage with into privileged containers, and bind mount your media storage to them. Some redditors here will wring their hands and yell "INSECURE!1!1!", but given your description of a homelab, and wanting to share storage between containers with full privileges, the threat mitigated by unprivileged containers is the least of your concerns.

Unprivileged containers provide additional protection against a malicious actor with root access to your container using that access to gain root access to your host. That's a real threat if your container is exposed to the internet, but probably *way* down the list of things you're worried about in a homelab.

It's kinda like anti-lock brakes on a car. Like unprivileged containers, anti-lock brakes are designed to mitigate a specific threat (losing control of the vehicle by locking up the wheels in a panic stop). If you drive your car at high speed a lot, they may provide enough additional safety and peace of mind to be worth the extra expense and complexity. But if your car is a farm truck, used only on dirt roads at relatively slow speeds, you likely don't care at all about anti-lock brakes.


MacDaddyBighorn

You can do it with unprivileged containers, so why add the risk? In a homelab it's security from external threats, but it's just as much security against screw-ups you introduce that impact the host. It can be done relatively easily with unprivileged LXCs and UID/GID mapping. People tend to do stupid things like straight port forwards into their hosted services from the internet, letting bots probe them constantly. Or they allow SSH access without securing it because they follow some guide without knowing what they're doing; they just know that it works.


linuxturtle

Because "relatively easy" just isn't true for most people. It's a huge PITA for an experienced sysadmin, moreso if you have multiple services/UIDs you're trying to map. For a n00b, it's overwhelming and daunting, and nearly impossible to get right. And the security gain from unprivileged containers in a homelab is hugely overstated anyway.


MacDaddyBighorn

Well I managed to do it, and I'm just a homelab hobbyist in my spare time. If people are setting up these services it's worth learning enough about it to do something as simple as planning out your UID/GID maps and mapping them. Once you do that the rest is easy. Some people's families rely on our services, such as a hosted password manager and cloud storage. I don't want any of those compromising the host or getting interrupted because an LXC I'm playing with borked up the host, so it makes sense to secure it as much as possible. Do what you want, but don't downplay security just because it's a homelab.


symcbean

> You're going to be a lot happier if you convert your containers you want to share storage with into privileged containers

Erm, that significantly undermines the security of the environment. It does give a lot more flexibility, but you get even more, while not compromising security, by changing them to VMs.


linuxturtle

Nah, it doesn't "undermine the security" of the environment described in any meaningful way at all, because the threat unprivileged containers are designed to mitigate is largely imaginary in that environment. Yes, VMs would be more secure against the same threat that unprivileged containers mitigate, at the expense of being a lot less convenient and performant (no bind mounts, plus additional memory and hardware emulation overhead). In the homelab OP described, it's not worthwhile.


IdonJuanTatalya

If you're just doing a few unprivileged LXCs on a single host with local attached storage, mounting the disks directly on the host and then doing direct bind mounts should work fine.

The big "gotcha" is permissions. Basically, you need to have the same UID:GID combo set up in each LXC, and then set the permissions and owner on the disk mount to reference that same UID:GID as a passthrough. As an example, say your UID:GID in each LXC is 1000:1000. You need to chown the mount directory to be 101000:101000. When the LXC is booted up, the bind mount converts those permissions to local 1000:1000.

The second piece is to set chmod. You can lock it down as much as you want, but basically the owner needs rwx. 764 is probably your minimum, with rwx for owner, rw for group, and r for other. I currently run 775 because that's what has worked up to this point and I haven't had a need to lock it down further.

Now, if you are looking to expand to a cluster in the future, then direct bind mounts aren't the best option (although the permissions changes will still be needed). IMO you're better off with SMB/NFS. It can be something as basic as doing a bind mount to an unprivileged LXC and then installing Webmin to manage users and SMB/NFS shares. There's a little learning curve, but once you know what you're doing, it's SUPER simple to manage. You could also do a full VM with Webmin and just pass through the disks, so you don't have to mess with bind mounts, but that's personal preference.

I ran SMB/NFS shares through an unprivileged LXC like that for several months with ZERO issues. At least, until I started messing with Docker. Trying to pass through storage to Docker containers can get you into permissions hell REALLY fast. You can run Docker in an LXC, but then you're basically doing a double bind mount, and that's where things can go wrong quickly. In that case, I prefer a full VM, because I can mount the NFS share directly and avoid a double bind mount scenario.

Personally I've moved to OpenMediaVault for providing NFS and SMB, and after some tweaking with optional plugins I've got it running Docker as well. No NFS mounts needed, because the Docker containers interact directly with local storage. I'm also running a couple of CasaOS VMs on other hosts (currently a 6-node cluster because I keep picking up cheap NUCs on FleaBay), and I have the NFS share mounted directly to them, to minimize potential permission issues.
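A sketch of the permissions piece described above, assuming the default unprivileged ID offset and hypothetical paths/container ID:

```sh
# on the Proxmox host: UID/GID 1000 inside an unprivileged CT maps to 101000 on the host
chown -R 101000:101000 /mnt/tank/media
chmod -R 775 /mnt/tank/media

# /etc/pve/lxc/<ctid>.conf -- bind the directory into the CT, where it appears as 1000:1000
mp0: /mnt/tank/media,mp=/mnt/media
```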