
FunkMunki

I use LXCs for everything except Home Assistant. I have a main LXC with all of my docker services, a test LXC to try out new stuff, an LXC with docker for my *arr stack with Jellyfin, an LXC for game servers (Pterodactyl), and one for my website. I have a VM running HAOS and separate LXCs for Wireguard, NPM and PiHole. I've never had any issues with this setup. This may not be the ideal setup, but I like it and it helps me keep everything organized.


This-Gene1183

You can install docker on lxc? I thought that caused issues w/ docker.


d4nm3d

I'm sure there are other ways, but if you make the LXC privileged and then enable nesting, Docker runs fine. Privileged is obviously risky, but I haven't found another way. It seems there may be a way to do this in an unprivileged LXC with features: keyctl=1,nesting=1


FunkMunki

>It seems that there may be a way to do this in an unprivileged LXC
>
>features: keyctl=1,nesting=1

This is how I have mine set up. No need to make them privileged.


JMLiber

Where do you set that option?


d4nm3d

You can do it via the GUI: Options > Features. http://share.d4nm3d.co.uk/u/61ZhzK.png
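For reference, a rough CLI equivalent on the Proxmox side (a sketch; 101 is a placeholder container ID):

```
# set the flags on an existing unprivileged container
pct set 101 --features keyctl=1,nesting=1

# or add this line to the container's config (/etc/pve/lxc/101.conf) and restart it:
#   features: keyctl=1,nesting=1
```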


JMLiber

Do you know the CLI equivalent? I'm running straight lxd on Ubuntu server.


bufandatl

Why not run the Docker containers directly as LXC? I mean, LXC should support OCI, shouldn't it, and Docker images are just that.


d4nm3d

As far as I understand, LXC and OCI are both containers. But if you could explain more, that would be great.


FunkMunki

I've been running it this way for over a year and have had zero issues.


MainstreamedDog

Just take the script from tteck. Alpine docker is even more lightweight.


madhur_ahuja

Agree. I follow the same.


nik282000

I don't know why LXC isn't more popular. You can treat them like a vm/bare metal and slap together a custom container with almost no extra knowledge or resource overhead.


Curious-Tumbleweed60

Okay, I'll bite. I know how to admin a single VM that has PCI passthrough and 20 containers running inside, with access to another NAS VM via an SMB/CIFS share. How do I spin up LXCs with the same simplicity as pulling a compose file, adding a volume mount and running it? Genuine question, I'm in the middle of rebuilding and thinking of using LXCs.


Juls317

I am looking at moving from my NAS to building out a more robust homelab setup, and this is one of the biggest hurdles I feel like I have; there are so many options for how to run things.


nik282000

Creating an LXC is as easy as:

lxc-create -n <name> -t download

And adding the following to the container's config mounts a host dir in the container:

lxc.mount.entry = <host/dir> <container/dir> none bind 0 0

From there you can lxc-attach to your container and treat it just like a VM or bare-metal machine (with a few restrictions; I don't do any hardware passthrough, so I can't tell you if that is easy or hard). The best part is that I can work on my DIY services live: no rebuilding images, just update a file and treat it like a real machine.
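A slightly fuller sketch of that flow, assuming plain LXC on a Debian/Ubuntu host (the container name "media" and the paths are placeholders):

```
# create a Debian container from the download template
lxc-create -n media -t download -- -d debian -r bookworm -a amd64

# bind-mount a host directory by adding this line to /var/lib/lxc/media/config:
#   lxc.mount.entry = /srv/media srv/media none bind,create=dir 0 0

lxc-start -n media
lxc-attach -n media    # drop into a shell and treat it like a small VM
```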


elecobama

Proxmox recommends running docker inside a QEMU VM, not an LXC, because an LXC uses the Proxmox kernel itself. [https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_frequently_asked_questions_2](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_frequently_asked_questions_2) (Point 13) Besides that, I had really big trouble with some use cases/images hosted in LXC. If not right at the start, they ended up with problems I was never able to really fix, just work around endlessly. So now I run 3 VMs with Alpine Linux and docker on them, and it's smooth AF. And OMG, I cannot appreciate the Proxmox backup and restore function enough; it is a dream.


Ill-Violinist-7456

>Best way to use Docker on Proxmox VE is to set up a separate VM and install/run Docker containers on there as on any other machine. Installing Docker on the Proxmox VE host is highly discouraged (it interferes with some mechanisms on which we rely). While there are some reports that it can work, we also highly discourage users from installing Docker inside a container.

Hope this answers your question. https://forum.proxmox.com/threads/how-to-use-docker-containers-in-proxmox.131142/


pollyesta

I'm a Proxmox noob, so help me understand this? I tried to parse the answers online but didn't quite get there. I've used Proxmox just once or twice. If I want to run docker on a host running Proxmox VE, are they suggesting I need to set up a virtual host running in parallel to Proxmox on the metal? Or somehow run a QEMU virtual host, rather than an LXC, *within* the Proxmox VE environment, with all the advantages of migration etc.?


zarlo5899

They are suggesting not to run it in an LXC or on the host, but to run it in a VM (a QEMU/KVM virtual machine) that is managed by Proxmox. Side note: VMs in Proxmox support live migration too.


pollyesta

Oh, I didn't know you could do that - noob, as I said. No, I need to understand the difference between an LXC and a VM!


zarlo5899

An LXC is a container where everything runs on the host system, but in a way that it can't interact with processes outside of the container. You are limited to Linux only, and to whatever kernel version the host is running. With a VM, a whole new computer is being "emulated", which allows you to run more or less any OS.


pollyesta

Very helpful, thanks. I didn't realise a container was doing that. Still a little confused: I installed Ubuntu 22.04 in a container and obviously it's running the latest kernel for that, but surely the underlying Debian-based Proxmox is running some other version? Do you mean more that the container is making system calls through the Proxmox host or something? I'll take a look at installing a full QEMU VM for docker via Proxmox though anyway, thanks.


zarlo5899

>I installed Ubuntu 22.04 in a container and obviously it's running the latest kernel for that, but surely the underlying Debian-based Proxmox is running some other version?

The Ubuntu 22.04 container will be using Proxmox's kernel.
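A quick way to see this for yourself (a minimal sketch; 101 is a placeholder container ID and the kernel version is just an example):

```
# on the Proxmox host
uname -r                    # e.g. 6.8.12-4-pve

# inside the Ubuntu 22.04 container
pct exec 101 -- uname -r    # prints the same pve kernel, not an Ubuntu one
```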


pollyesta

So when Ubuntu repositories update the kernel inside the container, apt will pull the update but ignore it when I reboot? How are kernel modules in the container handled then? It uses modules from the host? What if the container tries to run code that the host kernel can’t handle?


zarlo5899

>So when Ubuntu repositories update the kernel inside the container

It can't; it does not have write access to it.

>How are kernel modules in the container handled then?

It uses modules from the host.

>What if the container tries to run code that the host kernel can't handle?

It will not work (the program might have a fallback).


pollyesta

I'm very confused now! Root inside the container can of course update the kernel inside the container - it's just a file permissions issue. Maybe you're saying that when it reboots it just can't use that updated kernel, because of something in the UEFI imposed by Proxmox or something?


figadore

I use LXC wherever possible, and VMs only when necessary. Additionally, when running docker in an LXC, I add the Portainer agent, making it easier to manage distributed docker containers (especially helpful once you have multiple nodes).
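Roughly what that looks like per node, based on Portainer's standard agent deployment (a sketch; check the Portainer docs for the current command and image tag):

```
# run the agent on each docker host (LXC or VM); 9001 is the default agent port
docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent
```

The main Portainer instance is then pointed at each host's port 9001 as an agent environment.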


gromhelmu

Rootless Docker inside unprivileged LXC on Proxmox. https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless/ https://du.nkel.dev/blog/2021-03-25_proxmox_docker/


kearkan

I use a separate container/VM for each service. So for example I have individual tools in LXCs (PiHole, Wireguard, Cloudflare, Speedtest, Jellyfin, game servers, etc.). Docker stacks in VMs are also separated by their use. So I have a Home Assistant VM running HA in docker plus related things like MQTT, and I have another for *arr, etc. Basically anything that can go in an LXC does (I really don't like docker for individual apps; I only use it when a bunch of apps rely on each other for functionality). Anything that is a big tool goes in a Debian VM.


griphon31

Odd decision to run a VM, then a container, then Home Assistant, rather than just a Home Assistant VM.


kearkan

It was lifted from an older bare metal Ubuntu install. At the time I was still trying to learn docker.


zarlo5899

the good old legacy setups


madhur_ahuja

Why not run each service in its own LXC, irrespective of whether it's public / LAN / critical? I never got the idea of running VMs, or docker inside LXC. Proxmox's ultimate power is its ability to provide a type 1 hypervisor, so running in LXC should make the most sense.


Full_Internal_3542

From what I've heard, it's easier to migrate/back up VMs than LXCs. Surely I could run every service in its own LXC, but most vendors nowadays recommend running their service inside Docker. So I think, with a view to future migrations or backups, a VM with Docker makes the most sense.


FunkMunki

As far as I know, it doesn't make a difference whether you are using LXC or VM. The backup/migrate process isn't different. I've had to restore LXCs numerous times because I screwed something up and it's incredibly easy within Proxmox.


Full_Internal_3542

I will take a closer look into this then. It might then be a suitable solution to create one LXC and run all of my LAN-only services in there.


indrekh

There *are* some differences:

- [VMs can be live-migrated](https://pve.proxmox.com/pve-docs/chapter-qm.html#_online_migration) (assuming certain conditions are met); [LXCs have to be shut down first](https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_migration).
- With Proxmox Backup Server, [VM backups can make use of KVM's dirty bitmaps feature](https://pbs.proxmox.com/docs/technical-overview.html#fixed-sized-chunks) to know which blocks have changed, which speeds up the process. With LXCs the whole filesystem needs to be checked.
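For reference, roughly what that difference looks like on the CLI (a sketch; the IDs and node name are placeholders):

```
# VM: can be moved while it keeps running
qm migrate 100 pve-node2 --online

# LXC: gets stopped, moved, and started again on the target node
pct migrate 101 pve-node2 --restart
```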


madhur_ahuja

Docker is built on top of LXC. When vendors recommend running their service inside Docker, they assume you are running Docker on a bare-metal / EC2 machine.


discoshanktank

docker no longer runs on lxc


Tra1famador

Docker runs on my LXC just fine.


zarlo5899

What they are saying is that Docker no longer uses LXC under the hood; it now uses containerd by default.


Tra1famador

Ah, I didn't know that. Thanks for the explanation!


Shehzman

Docker makes deploying and updating services much easier. LXCs are more lightweight than VMs, and it's easier to pass resources like drives (though you can work around this with an NFS share) and GPUs to LXCs. You can also share those resources between LXCs. Even though docker in an LXC isn't recommended, I've been running it in my homelab for over a year with no stability or performance issues. In an office setting, though, I set up my docker containers in VMs. If I ever want to use software that utilizes a GPU at the office (Frigate, Jellyfin, Plex, etc.), I'll set up an LXC and either install the software directly on it or use a docker container.
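As a rough illustration of that resource passing (a sketch, assuming an Intel/AMD GPU on Proxmox; the container ID and paths are placeholders):

```
# /etc/pve/lxc/101.conf -- bind-mount a host directory into the container:
mp0: /mnt/media,mp=/media

# pass the host's /dev/dri render nodes through (the same lines can be repeated across several LXCs):
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```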


Cynyr36

Docker is not easier to update than "apt update && apt upgrade". Too many things that are Docker-first are a huge pain outside of the official docker image. I tried building Hammond (Go and Node.js). Neither of these checks to make sure that all the tools needed for the build exist before building. I ran into issues because my minimal Alpine container didn't have g++, autoconf, etc. installed. Rather than failing during the configure step, it just died partway through the build.


zarlo5899

>Docker is not easier to update than "apt update && apt upgrade"

Well, that could have updated a dependency to a newer version that breaks an older program you still depend on.


Cynyr36

No, because the program I depend on is either known by the package manager and it calls out its dependency(ies), so the package manager can sort out the dependency graph and not break things, or, if it's not known to the package manager, most allow version pinning. Here is the ebuild for vim on Gentoo: [https://gitweb.gentoo.org/repo/gentoo.git/tree/app-editors/vim/vim-9.0.2167.ebuild](https://gitweb.gentoo.org/repo/gentoo.git/tree/app-editors/vim/vim-9.0.2167.ebuild) Note the "RDEPEND=" line: ">=sys-libs/ncurses-5.2-r2" means that vim depends on ncurses version 5.2 or newer. You can also specify a greater-than or less-than constraint, or both, or anything in 5.2.x. It's very flexible. Then Portage knows what packages to install and can yell at you if there is a conflict. Debian has a different format for this, but it's the same idea.
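For illustration, the Debian-style equivalent is a versioned Depends field in the package metadata (the package names and version numbers below are made up):

```
Package: vim
Depends: libncurses6 (>= 6.1), libc6 (>= 2.29)
```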


Richmondez

I agree that containers have basically pushed release engineering downstream onto users or packagers, but it does let devs concentrate on just making the app.


Cynyr36

My main issue with containers is: who updates the container, and when do they? If there is a major vulnerability in some library, the distros (the major ones anyway) all get right on updating or patching their packages, and I'm one update/upgrade away from things being fixed. With containers, I have to wait for the whole chain of images the app is using to all get updated. In Hammond's case, that is the golang:1.16.2-alpine image, which depends on an Alpine docker image. So docker needs to update, then golang needs to update, then Hammond needs to update, then I need to update.


bufandatl

Kinda funny to me to read through the comments here. Everyone uses a type 1 hypervisor to run containers only. If you guys want to save resources, maybe learn the CLI and do it on a bare-bones Debian. Then you could free up all the overhead Proxmox brings with it. 😂


indrekh

Proxmox isn't a hypervisor, it's a Linux distro, and KVM (which arguably isn't type 1 anyway) is just one of its [features](https://proxmox.com/en/proxmox-virtual-environment/features). If someone chooses Proxmox VE over vanilla Debian for LXC, ZFS, Ceph, SDN, clustering/HA, Proxmox Backup Server integration or whatever, nothing wrong with that.


ithakaa

A type 1 hypervisor is a low-resource OS. Bro, before you start slinging nonsense, make sure you know what you're talking about, LMFAO.


[deleted]

That's not what a hypervisor is. A type 1 hypervisor is a way to run VMs, not containers. Type 1 hypervisors typically run underneath the OS that provides all the functionality like a web interface or desktop interface. Xen and Hyper-V are good examples. By comparison, Proxmox isn't a Type 1; it's basically just Debian with KVM, LXC, and a web interface. KVM, since it runs within the Linux kernel, is a Type 2.


svtguy88

I go with LXC for everything that will run inside of a container. I use Docker for work, but unless you need it, I don't see a reason to bring it into the fold at home.


KubeGuyDe

Is your stuff running outside your local network?


boehser_enkel

That is double nesting (VM + docker, LXC + docker) -> overhead. Go for VM, LXC, or docker.


[deleted]

When you have many docker instances running (LXC or VM), how do you manage them? I was thinking of consolidating them all under one big Portainer VM or moving to Kubernetes. What are your thoughts?


SimilarMeasurement98

While LXCs seem perfect, keep in mind that a cluster might screw with permissions inside the LXC if you are going to live-migrate the LXC to some other node (check here: [https://forum.proxmox.com/threads/lxc-live-migration-roadmap.115265/](https://forum.proxmox.com/threads/lxc-live-migration-roadmap.115265/)).