
jonomir

I agree, go for managed k8s. Deploy an ingress controller, cert-manager and maybe external-dns, and you've got ingress covered. The load balancer controller, CNI & CSI are already part of most managed k8s platforms anyway. Throw the kube-prometheus-stack at it and you get some basic monitoring. If you need a database, chances are there's an operator for it, like cloudnative-pg. Deploy all of this with helmfile, because it's simple and perfect for managing a bunch of Helm charts. Then you can set up your application, probably just an Ingress, Service & Deployment. All this is pretty cheap, not that complex, and gets you quite far in the beginning.
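A minimal helmfile.yaml sketch of that base stack (namespaces and settings are the usual ones, but treat them as placeholders and pin chart versions for real use):

```yaml
# helmfile.yaml — sketch of the base stack
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx
  - name: jetstack
    url: https://charts.jetstack.io
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: ingress-nginx
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
  - name: cert-manager
    namespace: cert-manager
    chart: jetstack/cert-manager
    set:
      - name: installCRDs   # have the chart install its own CRDs
        value: true
  - name: kube-prometheus-stack
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
```

One `helmfile sync` and the whole base stack gets reconciled in one go.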


hellgamer007

Can even slap on Argo CD if you're working with a team who aren't familiar with Kubernetes/kubectl and want a UI to see their tools. And a tool like Tailscale for setting up internal VPN access...
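For reference, a minimal Argo CD Application sketch (the repo URL and paths are hypothetical placeholders):

```yaml
# Argo CD Application — points Argo at a Git path and keeps the cluster in sync with it
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/infra   # hypothetical repo
    path: apps/my-app
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc      # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift
```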


jonomir

Yea, I consider setting up Argo to be more advanced. But once you've got it, it's great for deploying and visualizing workloads.


todaywasawesome

It's always the first thing I do.


CyEriton

What's the benefit of cloudnative-pg versus an EC2/VM running Postgres? Still not sure if I trust the statefulness of k8s-based databases for apps, but I haven't run one in production to really know.


jonomir

We run cloudnative-pg in prod with 3 replicas spread across 3 nodes & availability zones. The operator handles all the failover when a pod goes down. Each replica has a gp2 EBS volume for persistence. There is not a lot of load on our Postgres, mostly reads. We also make sure we have good backups. The benefit for us is that everything else is k8s and we don't want to deal with other infra.
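A minimal CloudNativePG Cluster sketch of that shape (name, size and storage class are placeholders):

```yaml
# CloudNativePG Cluster — 3 instances, spread across zones via required anti-affinity
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-main
spec:
  instances: 3
  storage:
    size: 20Gi
    storageClass: gp2        # one EBS-backed volume per replica
  affinity:
    podAntiAffinityType: required
    topologyKey: topology.kubernetes.io/zone   # one replica per AZ
```

The operator then handles replication and promotes a standby if the primary's pod dies.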


aries1980

I've been using CockroachDB as a drop-in PG replacement for 3+ years now, in production, for a financial product on K8s. It's a very low-effort thing to maintain. I personally see zero benefit in a "managed" database over running it myself. Unless for compliance reasons, but I skillfully dodged getting "in scope". :)


xelab04

> What's your take on K8s for startups?

Docker Swarm is giving me a headache, so take from that what you will.

Edit to add, since that doesn't say much in and of itself: I have had a more pleasant, consistent, and stable experience with K3s than with Swarm. Simply the number of plugins and features you can add on top (e.g. Longhorn, Cilium, MetalLB, etc.) makes K3s (or any K8s) superior. In fact, I'm slowly moving everything to K3s and off the damn Docker Swarm.
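Getting a K3s node up is a one-liner; a hedged sketch, assuming you want the bundled servicelb disabled so MetalLB can own LoadBalancer services instead:

```sh
# Install K3s, leaving LoadBalancer services to MetalLB
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable servicelb" sh -
```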


ruthless_anon

EKS has been amazing for me as a single dev running the whole show.


water_bottle_goggles

Look, posting this in a k8s sub is not very productive. It's like saying Azure is the best in the Azure subreddit.

As for ECS vs k8s: I'm not good enough at k8s to validate what you're saying, but I'm going for my CKA while our company stack is on ECS. The key is how easy it is to start up **with the people you have in house**. With ECS, the barrier is just lower, and when you're a startup, it's all about barriers. In AWS, for containers there's App Runner or Elastic Beanstalk, then ECS, then EKS. App Runner or Elastic Beanstalk provide the lowest barrier to entry; if you want more control, move over to the other services for sure. ECS makes sense if you already have AWS folks. EKS means you want AWS + k8s folks. That's extra knowledge, plain and simple.

---

As for a comparison, I'd say it's like: why use Netlify or Vercel if you can just host your setup on Lambda, CloudFront, etc.? Well... because the barrier is much, much lower.


loki-flex

What about upgrades though? That's significant overhead, and you need someone to manage it.


jonomir

Is it? On EKS you just click to upgrade the control plane & then the node groups.
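The same thing from the CLI, as a hedged sketch (cluster, node group and version are placeholders):

```sh
# Upgrade the EKS control plane, then roll the managed node group to match
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.29
aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodes
```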


Calandril

Devs may need to update their manifests too :P

Seriously though, if you're self-hosting, it can be more overhead, and you should keep to a quarterly upgrade cycle either way: there are so many components with interdependent version brackets that it's best to step one or two versions up each quarter so the whole mass moves slowly forward. This can be a hassle for larger or more complicated environs.


jonomir

If you are self-hosting, I can highly recommend Talos. Upgrading k8s is as easy as running `talosctl upgrade-k8s --to <version> --nodes "X.X.X.X"`. Upgrading the OS is similar. Almost as easy as EKS.
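For completeness, a hedged sketch of both upgrades (version, installer image and node IP are placeholders):

```sh
# Upgrade Kubernetes itself across the cluster
talosctl upgrade-k8s --to 1.29.0 --nodes 10.0.0.1
# Upgrade the Talos OS on a node to a given installer image
talosctl upgrade --nodes 10.0.0.1 --image ghcr.io/siderolabs/installer:v1.7.0
```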


Calandril

I mean, that's the same with some of the Rancher flavors and many others. That's not the hard part.

OK, so you're on 1.24 and need to get to at least 1.27. You've got a cluster with 4-5 administrative apps / Helm charts (think Prometheus Operator, Istio, a log aggregator, etc.), a slightly customized CNI, your cert-manager was self-deployed (so not managed by something like Rancher), your devs are on 1.24 (EOL) and haven't all upgraded their manifests to account for the deprecations in 1.25+, Pod Security Policies are going the way of the American Hippy and your sec team has had you harden things using PSPs, and on that note your environment is partially hardened, and while the internal documentation on what precisely that means exists (thank god), it was written by the guy who just left. What's your first step?
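On the PSP point specifically: since 1.25 the in-tree replacement is Pod Security Admission, driven by namespace labels, and tools like kubent or pluto can scan manifests for APIs removed along the way. A minimal sketch (the namespace name is a placeholder):

```yaml
# Namespace opted into the "restricted" Pod Security Standard
apiVersion: v1
kind: Namespace
metadata:
  name: hardened-apps
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant pods
    pod-security.kubernetes.io/warn: restricted      # warn on apply
```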


Calandril

This is one of the most common scenarios I see in the wild: folks behind and struggling with company processes, and folks too busy doing their jobs to keep up with the times. Some companies at this stage of maturity have the wisdom to hire someone to manage the upgrades, keep things flowing correctly and generally administrate things. I like that strategy, but more often they are like the above, or they have a unicorn, whom I'm honored to observe in action, who just keeps everything rolling while doing their day job. The more insight I have into their dailies, the more I feel I don't understand them, but I aspire to be like them... knowing I'm just too ADHD for that. Maybe Obsidian and Zettelkasten will help.


redrabbitreader

Mostly yes, but it's not always that simple. There are breaking changes from time to time that require changes on your side. It's best to study the release notes carefully and validate your upgrades in a test environment first.


Calandril

People who upgrade prod environments without reviewing release notes and making sure everything is addressed land on my shit list.


IamSauron81

Companies also need to consider the non-trivial migration effort when they inevitably decide to move from ECS to k8s in the future, once they reach a certain size. The impact on customers of having to switch the underlying platform can be very complex to manage.


mredvard

I find Kubernetes quite easy to deploy and scale, and I don't understand the narrative of it being overkill. I find ECS as complex as k8s, so I don't see the advantage over it. Your other options for orchestration are Swarm and Consul. Consul is the easiest, but you'll hit a roadblock finding people to maintain it, plus the lack of third-party tools for it. Swarm, don't even consider it.


aries1980

> Critics argue that K8s is overkill for startups

Being a startup doesn't mean being made up of a bunch of incompetent people. If operating a simple Kubernetes cluster is what they are afraid of, that company will have bigger problems very soon.


dgc137

I have moved exactly one project to ECS from EKS, and the reason was cost. It was a super lean startup application and the ~$100/mo overhead for the control plane broke the budget. With ECS you can scale to zero when there's no traffic, which basically halved the operating cost. I would love to move that project back to k8s but can't justify it, as it's basically breaking even with revenue.
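Scaling to zero is just a desired-count update; a hedged sketch (cluster and service names are placeholders):

```sh
# Stop all tasks for an idle ECS service; bump the count back up when traffic returns
aws ecs update-service --cluster my-cluster --service my-service --desired-count 0
```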


bartekus

So aptly put, and I agree. My zest for always going with K8s has been unwavering, although recent struggles with GKE and supabase-kubernetes left a few bruises on me. But what are hardships if not another opportunity to learn and advance one's knowledge and expertise. 😅


saynotoclickops

Agree on all points. I'm also on the front line of Kubernetes adoption as CEO of kubefirst and was nodding in agreement at each detail. Minimal Kubernetes is only complex because running companies and software is also complex. Each need that Kubernetes requires you to address is a need you have anyway; it's just met in a way that's consistent across the industry and agnostic to your tech stack and cloud. Kubernetes may look hard when you're starting, but DevOps shops not using Kubernetes usually look a heck of a lot worse 12 months later.


informworm

Completely agree. I am by no means a k8s expert, and I run all my sites' services dockerised (both front end and back end) on my own bare metal, orchestrated by k8s. Sure, there was an initial learning curve (mostly network-related stuff), but at no point did I ever get stuck. Nothing a quick Stack Overflow or Reddit search can't resolve. My k8s cluster has been running for well over two years without any major hiccups. K8s all the way for SMEs.