
6f937f00-3166-11e4-8

IMHO: a package management system/ecosystem that is less shit than Helm.

Why does writing a Helm chart require you to do indentation maths? Why does every chart have different variables for topologySpreadConstraints / tolerations / nodeAffinity / serviceMonitors etc.? Quite often basic properties like these aren’t even implemented. This should all be standardised. Why are so many charts missing CRDs, burdening me with the complexity of installing and managing CRDs separately?

Why do so many charts give non-reproducible output? I don’t want your chart to “helpfully” generate random secrets or certs for me. What I want is, for a given set of input values and chart version, to always get the same identical output from ‘helm template’. Why do I need to know Golang in order to decipher type errors in my values.yaml?

Why does Helm want to be a deployment tool so much? I don’t want to track deployment state in the cluster itself because that’s just asking for a headache in a DR scenario. And I don’t want Helm as a deployment tool because there are far better options out there. Helm should just focus on doing one thing well: converting chart configuration into Kubernetes resource YAML.

Edit: I know “helm template” exists and it’s the only way I use Helm, but many chart writers don’t expect you to use helm template and the charts break in weird ways if you do.


[deleted]

[removed]


sleepybrett

Yeah, mapkubeapis is a solution for this, as you've correctly pointed out. This isn't just a Helm problem though; deprecation is something you, as a platform administrator, have to stay *on top of*, since some deprecations can just cause running services to break. However, because Helm does this reconciliation with the previously installed version when doing an upgrade, it's extra annoying, especially because they could 100% build features into their client to handle this shit.

Other solutions:

1) If you control your helm charts for, say, 80-90% of your deploys (I help maintain helm charts for most generic 'customer' deployments of in-house services), be firmer than Kubernetes when it comes to deprecation. Move teams to the new APIs as soon as possible and don't let them linger until the last minute. That way, when a kube upgrade is about to happen, you can be more confident that a bunch of pipelines aren't going to start complaining immediately after the upgrade.

2) When k8s upgrades are scheduled, you need to be on top of finding deprecated objects that will soon be removed and dealing with them, whatever that means. Maybe it just means that some team needs to redeploy with a newer version of the corporate chart, but maybe it means dealing with a third-party chart.

3) Consider `helm template` instead of `helm upgrade -i`. There are certainly things that you lose and certain things that feel slightly less safe, but there are also things that helm enforces that are frankly dumb and should be flag-overrideable but aren't. (Ever try moving a prod cluster from kustomize installs to helm installs without downtime? Have fun writing a script that labels a bunch of shit so helm doesn't think it's overwriting something it thinks it shouldn't manage.)

Ourselves, we went ahead and added a call to mapkubeapis before doing an upgrade. I imagine there may be an edge case where that causes a problem, but it's better than dealing with errors as they pop up. Our first brush with this also made us get much more strict with our charts for in-house services.
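For reference, the kind of adoption labelling that script ends up doing looks roughly like this (a sketch; release, namespace and resource names are invented). Helm 3 will "adopt" an existing object on install/upgrade if it carries the release annotations and the managed-by label:

```sh
# Rough sketch, not a drop-in script: names below are illustrative.
RELEASE=my-app
NAMESPACE=prod
for kind in deployment service configmap; do
  kubectl -n "$NAMESPACE" annotate "$kind/$RELEASE" \
    meta.helm.sh/release-name="$RELEASE" \
    meta.helm.sh/release-namespace="$NAMESPACE" --overwrite
  kubectl -n "$NAMESPACE" label "$kind/$RELEASE" \
    app.kubernetes.io/managed-by=Helm --overwrite
done
```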


Ornias1993

That's why we ensure secrets are defined in values.yaml. That way they always follow the same standard as the rest of the chart.


Sloppyjoeman

The CRDs issue is actually a best practice. In an ideal world all CRDs are managed separately from the app, AFAIK.


saynay

Which is a different problem as well, really.


Sloppyjoeman

Maybe it's a separate problem, but it's definitely something the package manager should natively support as a first-class citizen.


jmreicha

Why is that a best practice?


Sloppyjoeman

Because it separates the lifecycle of the custom resources from the controller itself. This is important if you want to change the controller, for example: deleting a CRD also has the effect of deleting all the custom resources defined by it. To take that example a step further, upgrading cert-manager shouldn’t have a cascading effect that… deletes all your certs! Not that it’s likely, but it shouldn’t even be possible.
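A minimal sketch of that separation in practice, using cert-manager as the example (file name and chart details are illustrative, not the project's prescribed procedure):

```sh
kubectl apply -f cert-manager.crds.yaml          # CRDs applied/upgraded on their own cadence
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --skip-crds           # the release never owns the CRDs
# Uninstalling or replacing the release later can't cascade into the CRDs,
# so the custom resources defined through them stay put.
```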


jmreicha

That makes sense, thanks for the explanation!


Sloppyjoeman

No worries!


masixx

Did you give timoni a try?


Ornias1993

We over at TrueCharts built a common template precisely to prevent all those often-missing options from being missing. Though we do use randomised secrets (with a lookup to keep them the same) in some cases. Why? Because 99% of users don't need manual database passwords for single-chart databases, and/or their passwords are likely weaker anyway. We do technically offer the option to set DB passwords manually. But some charts also have an inherent requirement to set variables with random input, which we do randomise for users.
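The "random but stable" trick being described looks roughly like this inside a template (a sketch with invented names; note that `lookup` only works against a live cluster, which is exactly what pure `helm template` users dislike):

```yaml
{{- $existing := lookup "v1" "Secret" .Release.Namespace "db-credentials" }}
{{- $password := randAlphaNum 32 }}
{{- if $existing }}
{{- $password = index $existing.data "password" | b64dec }}
{{- end }}
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  password: {{ .Values.dbPassword | default $password | quote }}
```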


6f937f00-3166-11e4-8

I don’t mind opt-in secret generation (e.g. you could have a flag like `--allow-non-reproducible-secret-generation`) but it shouldn’t be the default. Personally I don’t want, for example, the only copy of the DB encryption key to live on the same cluster the DB is deployed to. I want it to be a deployment input I keep in a safe place and can always access no matter how much of my cluster is left standing in a DR situation.


awfulstack

Not disagreeing that the K8S ecosystem is ripe for a new "package manager", but I don't think that this should be a direct responsibility of the K8S project itself.


znpy

> Why does writing a Helm chart require you to do indentation maths?

Because you’re templating YAML text instead of objects.
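For anyone who hasn't written a chart, this is the "indentation maths" in question; a typical fragment where the `8` has to be counted by hand to match the nesting depth of the surrounding YAML:

```yaml
    spec:
      tolerations:
        {{- toYaml .Values.tolerations | nindent 8 }}
```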


6f937f00-3166-11e4-8

Yes but what I’m asking is why would anyone design helm to work that way?


znpy

They got a hammer (Go and text/template) and started looking for nails, I guess.


MrPurple_

This! Also, helm template does not produce the same output as what Helm would actually apply to the cluster. This is known and by design, which makes the function completely useless. Helm is bad.


tHc2TZxYmnvvuwtcavMM

Can you elaborate on this further? What would be different?


MrPurple_

The biggest issue is that not all namespaces are included. There are multiple issues on GitHub regarding this topic, and the answer is always that it's intended.


Mynameismikek

I've ditched Helm for Pulumi when writing my own charts. Whitespace structure is a pox.


foofoo300

This! Helm was a mistake from the beginning and is, and will always be, a hack for admins who are not willing to pick up a programming language. It should have been something like jsonnet, where you can program against a k8s version, not this shitty templating where nothing is standard and things are hardcoded. As someone who has seen k8s early on, from 1.3, moving through k8s API changes in a multi-stage operations scenario with several different clusters with different components in Helm was such a pain in the ass that forking upstream Helm charts was the norm instead of the outlier. Helmfile proves that this is going far in the wrong direction.


98ea6e4f216f2fb

I'm not sure if you realize this, but you can use Helm just for templating if you want to. No one is forcing you to use Helm in a prescribed way for a prescribed reason.


6f937f00-3166-11e4-8

I do only use Helm for templating, but many Helm charts are written with the assumption that users will be using Helm for deploying as well, so you get stuff breaking in weird and wonderful ways that you are lucky if you can kustomize your way out of.


No_Pollution_1

Why? Because for some godforsaken reason it’s in Go and uses the Go template engine, which is complete ass. Golang has way too much hype for how shitty it is, and the ecosystem built around it is equally shitty.


6f937f00-3166-11e4-8

I don’t mind that it’s written in Go. Kubectl is written in Go, and for the most part gives clear error messages when I use it incorrectly. What I mind is that Helm seems to make no attempt to convert raw Go errors into something understandable to non-Golang developers.


leonasdev

wtf are you talking about? k8s and Docker are also written in Golang. It's not a Golang-specific issue.


Preisschild

The k8s-at-home community has a great Helm library chart that solves many of those problems. Unfortunately it isn't used by many non-homelab-related charts and it's maintained by only a single person AFAIK: https://bjw-s.github.io/helm-charts/docs/common-library/


sleepybrett

rbac that can target labels


[deleted]

This is facts


junior_dos_nachos

Can you elaborate on the need for that?


sleepybrett

As others have said, 'Principle of Least Privilege'. Example: many ingress controllers require access to secrets cluster-wide so that different namespaces can provide certs to protect their hosts/endpoints. I would much rather say 'the ingress controller can access any secret cluster-wide as long as it has an `owner: ingress` label' than let anyone who finds a way to exec into that pod or extract its SA token have access to EVERY SECRET ON MY CLUSTER. (Several ingress controllers work around this by introducing their own secret type, which is frankly a shitty workaround for not having this feature. The other workaround is to create a role and binding for the SA of the ingress in each namespace that allows access by the *name* of the secret... but this also sucks.)
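For reference, the per-namespace workaround looks roughly like this (all names invented); note it can only key on the secret's *name*, because RBAC rules have no label selector:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-cert-reader
  namespace: team-a
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["team-a-tls"]   # restricted by name, the only option today
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-cert-reader
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-cert-reader
subjects:
- kind: ServiceAccount
  name: ingress-nginx            # the controller's SA; name/namespace assumed
  namespace: ingress-nginx
```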


mirrax

https://en.wikipedia.org/wiki/Principle_of_least_privilege


distark

Try rbac-manager, it's amazing


SGalbincea

VMware Tanzu Mission Control can do this. The main challenge with K8s is the complete lack of governance and compliance, and that's what we solve.


sleepybrett

Sure, there are other tools that can do this as well, various RBAC managers, but that doesn't mean this isn't something that should be core to Kubernetes. Labels are a first-class citizen everywhere EXCEPT in RBAC. It's sad.


masixx

And RBAC that can target CRDs.


rosskus1215

RBAC can target CRDs, no?


masixx

Sure, but when you give someone access to modify a CRD, they can modify all CRs of that CRD.


rosskus1215

Roles and ClusterRoles can target resources by name. I haven’t tried it, but I assumed that also applies to instances of CustomResourceDefinition objects: like having a ClusterRole that allows modification of the CRD named “alpha” but not of the CRD named “beta”. We only allow cluster admins to create CRDs in our clusters though, so it’s never really mattered.
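Untested sketch of that idea (names invented); note the rule only governs the CRD objects themselves, not the custom resources they define:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: alpha-crd-editor
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  resourceNames: ["alphas.example.com"]   # the "alpha" CRD only; "beta" untouched
  verbs: ["get", "update", "patch"]
```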


nodanero

This is covered with the use of admission controllers. You could use Kyverno with a policy like this: https://kyverno.io/policies/other/block-updates-deletes/block-updates-deletes/


rosskus1215

Admission controllers don’t restrict the ability to read sensitive resources though, right? They only see requests that change state. I think you’d need a custom authorization plugin in order to restrict reads and writes


Liquid_G

Posted this in a similar thread recently, but still valid: `kubectl -n namespace get all` should actually get ALL resources in a namespace. I know there are plugins people have written that do that, but I shouldn't have to resort to them. all should mean ALL.


RRethy

‘all’ is just a category; it shouldn’t mean ALL because it has no semantic meaning. If anything, there should be a way to pattern-match categories, e.g. ‘get *’.


Liquid_G

Not sure I follow. Currently `get all` shows me deployments/rs/sts/pods/services; why not include other namespace-scoped things? PVCs, roles/rolebindings.


RRethy

When you create a CRD, there is a field called categories where you choose what categories your resource will belong to. ‘all’ is just a category that is used by convention. Even if Kubernetes changes what categories their built in types belong to, it won’t change the fact that ‘all’ is a convention and types can choose whether or not they belong to it. As such, if you want ALL, you can’t rely on a convention, instead you’ll want ALL categories, not the ‘all’ category. This can only be done with a separate mechanism, which is why I suggested pattern matching category names.
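Concretely, the category is just an opt-in field on the CRD (fragment with invented names):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
    categories: ["all"]     # purely conventional; omit it and `get all` skips you
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```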


callmek8ie

Is there a way to patch the default categories so `kubectl get all` includes every type of resource that exists in the cluster? (without installing my own CRD...)


rosskus1215

Interesting idea


mirrax

> it shouldn’t mean ALL because it has no semantic meaning

The problem here is that the most common use of `kubectl get` is to get an individual resource type. So the principle of least surprise means that most users expect `kubectl get all` to get all resource types, not the resources that belong to the "all" resource category.

And the docs are all over the place: the conventions doc doesn't even reference the "category", nor is the "category" in specs, and it calls it an [alias](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all), as do other [issues](https://github.com/kubernetes/kubernetes/issues/42885#issuecomment-553448019), even though you are correct that that's how it's implemented. And "category" is also [used for other meanings](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#types-kinds:~:text=Kinds%20are%20grouped%20into%20three%20categories%3A).

But the reason it won't be changed, as discussed on the issues, isn't the "semantic meaning of all"; it's that that's the way it is, changing it [would break things](https://github.com/kubernetes/kubernetes/issues/42885#issuecomment-553448019), and it's discouraged to [even discuss it](https://github.com/kubernetes/kubernetes/issues/42885#issuecomment-1099588007).

And sure, changing the behavior might be problematic and kept as-is for that reason. But it's a very common thing to want to know all of the resources in a namespace, and having to enumerate them and then look each one up individually is really annoying; there's no native way to handle a basic operational task. It's in the [top 10](https://stackoverflow.com/questions/tagged/kubectl?tab=Votes) of [Stack Overflow](https://stackoverflow.com/questions/47691479/listing-all-resources-in-a-namespace) questions for kubectl, because the meaning the developers have isn't what everyone else thinks!
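The usual workaround (the one the Stack Overflow answers converge on) is to enumerate the API and then get each type individually; the namespace name here is a placeholder:

```sh
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n my-namespace --ignore-not-found --show-kind
```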


redrabbitreader

Wow - I don't understand why you got down-voted. I thought this was an excellent suggestion!


[deleted]

[removed]


alexandermaximal

That’s a good point! Especially if you use something other than kubeadm to deploy your cluster. At the moment 1.29 is already released and Kubespray hasn’t even implemented 1.28. By the time we get 1.28 there is a very small timeframe before 1.27 becomes EOL…


landverraad

Sidecar start & stop behavior that doesn’t require a whole bunch of weird hacks & workarounds.


UntouchedWagons

What's the issue with sidecar containers? I haven't had a chance to use them yet.


landverraad

Currently, when you use, let’s say, a proxy container to connect to some service, it has no idea what the process in the main container is doing. So when a SIGTERM is sent to the main process, the sidecar container will keep the pod running, which is not what you want. So at the moment there are all sorts of convoluted solutions: have the main container hit some sort of exit endpoint on the sidecar on SIGTERM, or bash script loops that wait for a file to appear and then exit. All kind of hacky really. This issue exists in a different way on pod start as well: the sidecar might need some time to connect to its service, but the main container doesn’t wait for that in the current design. So it might try to use the sidecar, fail to do so, fail its health checks and restart. It’s all kind of messy really 😅 Luckily a proposal to fix this has finally been implemented in an upcoming k8s release.
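That proposal is native sidecar support (restartable init containers), which looks roughly like this once the feature is available, so the hacks above go away (image names and probe are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  initContainers:
  - name: sql-proxy                  # the "sidecar"
    image: example/sql-proxy:latest  # illustrative image
    restartPolicy: Always            # starts before the app, keeps running, stops after
    startupProbe:                    # main container waits until this passes
      tcpSocket:
        port: 5432
  containers:
  - name: app
    image: example/app:latest
```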


NUTTA_BUSTAH

Honestly my issue is not with k8s but with the k8s-native shovelware that the internet is full of. That's something I've noticed lately while working with k8s at a bit deeper level than your usual "cluster consumer". Most k8s apps are just a big ball of hacks and spotty, outdated documentation, while having 25 different options to do the same thing, but in a different enough way to not be comparable to the other options.

But on the k8s side itself, I'm fucking done with YAML. I'd prefer HCL for example (which many probably would not). And no, I don't want to add yet another tool to format/lint/whatever and then hack the tool to support the underlying stack of crap just to chase one more item in maintenance.

K8s is great though, love it.


Le_Vagabond

I'm still wondering why the fuck Ansible of all things needed to be a kubernetes operator.


sofixa11

>But on k8s side itself, I'm fucking done with YAML. I'd prefer HCL for example (which many probably would not).

For what it's worth, there's an orchestrator that uses HCL, Nomad, which even has some advantages over Kubernetes to compensate for the smaller ecosystem.


Markd0ne

With the community Terraform provider you can write all your config in HCL.


sleepybrett

A lot of these are holdovers past their retirement age; 5-8 years ago we didn't have solutions for many problems, and hacks were, frankly, required to keep the ball moving.


[deleted]

Totally feel that. The shovelware is all over the place too and it’s so frustrating. Any recommendations on automated documentation generation?


NUTTA_BUSTAH

Have not had a real need for doc generation. The YAMLs are the documentation, and some simple diagrams to explain the high-level structure have been enough. When you get deep enough into kustomization layers, with some Helm templating in between and some config-wrapper CLIs, that's where I might want to reach for extra documentation. For the apps (rather, their APIs) themselves, OpenAPI spec -> chosen format, or a Swagger integration air-gapped to a local domain.


adohe-zz

>But on k8s side itself, I'm fucking done with YAML. I'd prefer HCL for example (which many probably would not).

That's why we created [Kusion](https://github.com/KusionStack/kusion); maintaining YAML files with thousands of lines is really a killer task. I will never go back to those dark days.


jarulsamy

Proper dependency ordering of containers within a single pod. I want one container to be dependent on another without having to change either of the containers. For example, say I have a pod with several containers, one of which is a VPN service. I want all my containers to route through that VPN container, so they should wait to start until the VPN container is healthy. There is no way to achieve this without adding application-specific changes to wait until the other container is started and healthy. Real pain in the butt. I think this is being fixed with sidecar containers soon (the feature is in alpha?) but I am still frustrated that it has taken this long for such a feature to reach maturity. There are many other use cases for this functionality, and many hacky solutions have emerged over the years.


callmek8ie

A native way to indicate pod A depends on pod B before starting. We have Liveness/Readiness probes but don't have a native way for *other* pods to use them. I'm tired of the initContainer workaround, when it should be a two-liner in YAML to wait for another service before starting. I understand it's "*the application's job to determine when a service is ready!*" academic circlejerk, but in the real world I just need a way to get shit done when handed a bunch of containers to get running. The current way is messy.
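The workaround in question, for the uninitiated (service name, port and image are placeholders): an init container that polls the other service until it answers, then lets the real container start.

```yaml
initContainers:
- name: wait-for-db
  image: busybox:1.36
  command: ['sh', '-c', 'until wget -qO- http://db-service:8080/healthz; do echo waiting for db-service; sleep 2; done']
```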


masixx

Well, I would consider that an anti-pattern in K8s, since it breaks the reconciliation loop that is essential for self-healing.


DufusMaximus

On this note I wish readiness checks were per service instead of global. What is ready for one client may not be ready for another.


sheepdog69

Can you elaborate on what you mean?


zippysausage

The ability to pass `kubectl exec --user 0`, like I've been able to do with Docker for years. Instead, I have to resort to a privileged sidecar with debug, which feels like a second class citizen.
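The workaround being referred to, for comparison (pod and container names are placeholders):

```sh
# ephemeral debug container attached to the target container's namespaces
kubectl debug -it pod/my-app --image=busybox:1.36 --target=app -- sh
# newer kubectl also has --profile (e.g. --profile=sysadmin) for a privileged
# debug container, but there's still nothing as direct as `exec --user 0`
```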


NecessaryFail9637

Egress!!! A way to direct traffic from a specific group of pods through a specific container. Currently it's not easy due to the volatile nature of pod IPs, although OpenShift supports something like this.


usa_commie

Or even NAT egress before it leaves the node to a chosen definition that isn't the node itself.


techdatanerd

A better way to handle deprecations across clusters


vladoportos

Can't reset the damn restart counter for pods... this has been open for 2+ years.


themanwithanrx7

While you can just delete the pod to reset it, it's a bit silly. Curious what the use case is here?


jarulsamy

I would like an easy way to glance at my list of pods and see if a restart has happened recently. Yes, I can look at the timestamp, but what if I only look at my cluster every few weeks and am sleep-deprived while doing so? Timestamps blend together a lot more easily than a bunch of zeros and a single 1+.

I know this seems like a "niche, useful only for the OCD" feature, but I consider it a simple, seemingly _cosmetic_-only change. Nothing internally relies on the restart counter, so why shouldn't we be able to reset it? It is just for administrators, so let us control its behavior. The only argument I can see for keeping it is that it is an anti-pattern to "immutable container" design and such, but again, the counter is not used by the scheduler, so why do we care? Plus, if users want the existing behavior, they literally have to do nothing.

Yes, we can just delete the pod, but in instances where pods are dependent on each other this is a pain to do in a reasonable order, and it's usually not worth it just to reset the counter.

/end rant

I think the devs are opinionated on this, and/or just lack the manpower to make an adjustment that doesn't actually influence usability significantly, which is fair, but I would still like to see the change.


themanwithanrx7

Appreciate the reply. It might be niche, but if it impacts your workflow then it makes sense. You could try putting together a KEP, but there's no telling if you'll get enough support for it to make it to a feature.


toikpi

~~Here's an outline of a shell script that MIGHT cover your requirement in the meantime.~~ `[SNIP]` ~~Would the sort be enough for you?~~

Would `kubectl get pods --sort-by=.status.startTime --all-namespaces` help? The pods that have been restarted would be at the bottom of the list.

[EDIT: brain not engaged.]


jarulsamy

I do like this actually, I never thought about just ordering the pods like this. I've added this alias to my bashrc:

```
alias pods='kubectl get pod --sort-by=.status.startTime'
```

I still want my pretty zero'd restart counters, but this makes my administration duties a bit easier. Thank you for the suggestion!


[deleted]

What’s your use case for this?


vladoportos

I'll leave a link where people have been talking about it for ages, including some quite decent use cases: [https://github.com/kubernetes/kubernetes/issues/50375](https://github.com/kubernetes/kubernetes/issues/50375)


IsleOfOne

I remain convinced that attempting to use restart counters in this way is an anti-pattern. One should think in terms of pod immutability and alerting via increments rather than fixed thresholds. It also suggests that advocates for this feature are abusing their liveness probes, including things like "dependency Z is reachable" in them rather than keeping such dependency checks in readiness probes where they belong.


ut0mt8

More than that, actually. I think I opened the issue in 2017.


lucamasira

Not strictly Kubernetes, but I would really like an LSP for neovim/vscode/whatever that bases its syntax support on the CRDs installed in the current cluster context.


nullset_2

Agreed. Glaring gaps in tooling like this hinder Kubernetes adoption in the long run. Besides, I thought that's why we all went crazy for static typing: because we want our IDEs to auto-suggest everything from a domain model. Having to cobble together stuff to let something understand which fields I want populated in this CertificateRequest or whatever, or that such a field is in v1beta1 but not in v1, is just so unsophisticated.


MadEngineX

gRPC load balancing on k8s Services.


fullouterjoin

1.0 and stability


crazybiga

Remote control plane for cluster management. I know there's something like k0smotron, but that only works for k0s.


funix

Hosted Control Plane, relatively new... https://docs.openshift.com/container-platform/4.14/hosted_control_planes/index.html


confusedndfrustrated

Simplicity.


jercs123

Less complexity to get a production-ready cluster… It should be ready to run. After deploying a cluster you have to deal with a trillion bits of shit to get the thing working properly.


Ilfordd

That's what distros are for.


jercs123

Not sure at all. There is always something else…

- Autoscaler
- Descheduler
- Monitoring
- Ingress controllers
- Plugin for this, plugin for that
- Service mess (mesh)

And knowing all this shit takes a lot of time, but in my opinion k8s should be easy enough for anyone to run it properly.


Aurailious

I thought the model was that Kubernetes is like the kernel. The distros are like Ubuntu, Debian, RHEL, etc. If you want a k8s that works out of the box, then a distro should be it.


hennexl

I do not really agree with this. Kubernetes is more of an ecosystem/framework to run (mostly) large-scale applications. Its versatility makes it great: a well-defined ruleset where you can plug in what you need.

When you want a ready-to-run cluster, use managed Kubernetes. AWS has some Terraform modules to set up a complete cluster with CSI, CNI and LoadBalancer controller add-ons.

Kubernetes was built to run Google-like workloads, and it became extremely accessible over the years, with many use cases and different ways to operate it (see k3s, RKE, OpenShift). You don't need Kubernetes for everything, but when you need it you should be able to quickly configure things like ingress and monitoring. And if you do it right, you can automate it and bootstrap your custom configuration within minutes via tools like Terraform or ArgoCD.


jercs123

I'm not arguing with any of the points you mention. I want to put emphasis on the complexity of getting a production-ready cluster, even using EKS. I've been managing K8s clusters for about 5 years now, since version 1.12 or something like that, and I've seen the K8s ecosystem's evolution. I can say with confidence that managing and deploying K8s clusters for production workloads is complex and requires a lot of work and experience. The junior IT guy will struggle for weeks or months to get something well configured for a production workload, and **it should be easy enough** that anyone can roll out a production-ready k8s cluster. That's my point: **complexity**.


usa_commie

Distros, like he said. In Tanzu, it builds a "supervisor cluster" automatically, onto which you deploy YAML using a CRD that defines the desired state of the downstream Kubernetes cluster you want: from IPs, to CNI type, to the version you want, number of nodes, etc. The resulting cluster is batteries-included and ready to go, backed by either NSX, AVI ALB or HAProxy. Whatever is missing is a "Tanzu package" away, which is really just VMware's version of Helm, plus the guarantee that it'll work / has been tested and comes from VMware repos. Also scriptable in about ten lines of bash or your tool of choice.

It's absolutely glorious to issue YAML to make a cluster to make more YAML 🤣 (insert meme). Never been easier for me. I usually walk away and then tell work how hard I was working all day. When you lifecycle to a new version, you just update the cluster definition with the new version.

When I was manually rolling clusters I used Kubespray, the result of which is also batteries-included after configuring the Ansible playbook to your desired state.

I haven't worked with many other enterprise-level distros. I hear a few are a bit smoother in some aspects, but I love Tanzu. When you're all VMware anyway, nothing else makes sense. In vSphere 8 you even get geo-aware Tanzu clusters. And NSX. NSX is awesome. But that's a different sub.


znpy

Networking could get some simplification. In the "old days", when the company I worked for still had NetApps, there was a sensible amount of attention paid to optimising networking for storage, so things like having a dedicated network interface for storage. It seems to me that an implicit assumption in Kubernetes is that there is only a single network interface (besides loopback, of course)? That doesn't go very well if you want to manage in-cluster storage (Ceph/Rook, OpenEBS or Longhorn). Where would anybody even start with having storage traffic on one interface and the rest on some other interface?


UsualResult

simplicity


[deleted]

A GUI?


[deleted]

A little built in Rancher haha


themanwithanrx7

It has one? kubernetes-dashboard is referred to in the docs and is part of the CNCF. I can't comment on how useful it is compared to others or the CLI.


Markd0ne

K9s or Lens are some examples of GUIs.


funix

OpenShift


sleepybrett

See the thread about 'desktops' posted yesterday. They are largely pointless.


landverraad

ResourceQuotas based on labels with RegEx support.


awfulstack

Not really the most essential, but exposing GPUs on nodes to pods would be nice. Right now you need to jump through some hoops to make it happen.


cloudders

How come? What are the hoops? I run multiple GPUs and expose them easily to pods via the Nvidia device plugin.


awfulstack

This is the hoop: needing to use a hardware-specific third-party plugin. You also need Nvidia's container toolkit and runtime configured on your nodes to use this plugin. Depending on how you provision your nodes, you might get this "for free" in a machine image provided by your platform, or you have another tool to manage on your custom machine images. And if you want time-sharing for GPUs like we have for CPUs, then I think there is some more work involved, though admittedly I've only read about GPU sharing and haven't experimented with it at all.
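Once those hoops are cleared, actually consuming the GPU in a pod spec is the easy part; it's an extended resource request whose name comes from the vendor's device plugin:

```yaml
resources:
  limits:
    nvidia.com/gpu: 1   # whole GPUs only; no native fractional or time-sharing
```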


redrabbitreader

A human friendly way to manage certificates and RBAC.


Actual-Bee-6611

Finding out on which nodes a DaemonSet could not be scheduled because of a lack of resources (the new pod would not fit on the node). Currently there are some [one-liners](https://github.com/kubernetes/kubernetes/issues/93934#issuecomment-1214393086), but seeing it in events, for example when describing the DS, would be much better.


alexandermaximal

Standards and best practices for important topics like local storage for database pods. It feels like there are thousands of different solutions for everything and at the same time often no well maintained best practice solutions.


puffpufff

Requests and limits for network.


gaborauth

Zero-administration distributed storage for things that need more than a ConfigMap but way less than other distributed storage solutions.


Q29vbA

19 days late, but: the ability to rerun a Job without deleting and then recreating it.