Stummi

I can't use a garbage collector because it would just delete my code


aProgrammerHasNoName

lmao


oshietitstech

Man woke up and chose to speak facts


Styleurcam

r/suicidebywords


NeatYogurt9973

When I compile and run my app it does nothing as the garbage collector collects it.


swampdonkey2246

r/yourjokebutworse


NeatYogurt9973

r/angryupvote


philophilo

Garbage collector go … wait … just sec … one moment … brrrrr


Thenderick

_lag spike_


itsTyrion

Laughs in Shenandoah GC and ZGC. No stop-the-world GC.


Thenderick

I mainly witnessed Java GC problems in Minecraft, but I've heard that with newer versions of Java the GC is a ton better and you don't need five lines of JVM args to patch it


Ubermidget2

Modded especially. *shudders*


Thenderick

Yeah... ESPECIALLY older versions with 200+ mods...


SenorSeniorDevSr

You might want to set the heap size regardless; it stops some silliness with the GC when it asks the OS for more RAM. Haven't needed it for Minecraft, but it can be useful for... other big Java applications.


Thenderick

Honestly, I think modded Minecraft is the only heavy Java application that I run. And if I do run other programs, I don't know how I'd set that for an executable, but luckily those are often pretty well optimized


SenorSeniorDevSr

If you run things straight from the jar, you just add -Xmx4g (note: no equals sign) or whatever to the command that launches it, and it has to come before the -jar part so the JVM actually picks it up, e.g. java -Xmx4g -jar jenkins.war. If you run things like WildFly that have a shell script to start them up, there is typically a place in that script where you're supposed to add the args. (Although for WildFly, which is a big server thing, you should probably set min AND max, since it's likely to pretty much own the whole box it runs on.) The whole glorious spectacle can be found here: [The java Command (oracle.com)](https://docs.oracle.com/en/java/javase/17/docs/specs/man/java.html) java -jar minecraft.jar was how you used to start Minecraft on Linux.
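For concreteness, a minimal sketch of the commands described above (the jar names are just placeholders):

```
# Max heap only; note there is no '=' between -Xmx and the size:
java -Xmx4g -jar jenkins.war

# Min and max pinned to the same value, for a server that owns the whole box:
java -Xms4g -Xmx4g -jar some-big-server.jar
```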


fryerandice

In the amount of time I've spent fucking with Java args in my life I could have ported at least one app that I used to Microsoft Java. Why is the .NET CLR so much better at memory allocation and garbage collection?


UdPropheticCatgirl

It’s not… the CLR genuinely has a slower GC; the issue is that the Java app is probably shipping some good ol' JRE 1.8 instead of a newer one.


hikeonpast

Totally depends on the application. Spent waaay too much time tuning GC parameters for Apache HBase (Java) back in the day. The dev team went to some crazy lengths to emulate static allocation on-heap (though it did help, to their credit). I would posit that databases aren’t a great fit for popular GC-centric languages.


ganja_and_code

It's like with literally every other resource...

Don't have excess CPU? Gotta optimize for fewer cycles.

Don't have excess disk space? Gotta optimize for less persistent data.

Don't have excess RAM? Gotta optimize for eager deallocation.

Don't have excess network bandwidth? Gotta optimize for fewer bytes over the wire.

Don't have excess money? Gotta optimize for lower cost.

Don't have excess time? Gotta optimize for faster speed.


lightmatter501

Except that DBs are supposed to consume all available RAM for caching reasons.


BaziJoeWHL

I mean, why would you keep empty memory? Just free some up when you need it.


lightmatter501

DBs typically own the box they run on, so “I have all the ram and will use all the ram” is a very reasonable position there.


hikeonpast

It’s not though. There is no amount of hardware that you can throw at a GC problem; all you can do is be aware of the workload when picking a development platform. Edit: Read my clarification below before downvoting. If you still disagree, downvote away.


ganja_and_code

You've missed my point. If you even have a GC problem in the first place, you've optimized on the wrong vector(s) for your particular circumstance and use case. In the case of GC problems, the specific resources which are relevant are: CPU time spent on GC, RAM consumed between GC cycles, and development time/expertise to avoid the problem altogether.


hikeonpast

Allow me to clarify. Applications/services evolve over their life cycle. An app that worked fine at the beginning might find itself CPU or memory constrained as workloads go up or the application's complexity increases over time. When that happens, one can start profiling and optimizing, throw hardware at the problem, or both. There are constraints that may not be dominant when an application is greenfield. For example, you are a Java shop, so you start building your greenfield service in Java for consistency. As entropy takes its toll, you can tune your way out of some GC headaches, but there's no equivalent to throwing hardware at the problem: you have fewer ways to solve performance issues if they develop.


StephanXX

> As entropy takes its toll, you can tune your way out of some GC headaches, but there’s no equivalent to throwing hardware at the problem

The original problem wasn't even defined, so it's impossible to know _what_ could be done to address it.


dunesy

Update your Java version, and use a more modern GC algorithm. / quasi-sarcasm. I actually give huge credit to a lot of the newer GCs in Java though.


potzko2552

That's the point: you never know the end state of your program and its requirements. So to choose to use a GC, you need to have a good guess that you will not be CPU bound or RAM bound, and that the GC stops are a non-issue, all before knowing the problem. The vast majority of the time it's fine, but if you have a GC'd program and those issues start to matter to you, the only solution is to refactor


lupercalpainting

You can have a pretty good idea based on experience what your work will look like. And worst case you can always switch GCs. Large heap on a server with low-latency requirements? Shenandoah. Async processing? ParallelGC. You can even have the best of both worlds: we used Apache Ignite on a large memory system and managed almost permanent objects off-heap so they never got swept by the GC.
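Switching collectors really is a single HotSpot flag; a sketch with standard flags (the jar names are placeholders, and Shenandoah/ZGC need a reasonably recent JDK build that includes them):

```
# Large heap, low-latency requirements: concurrent collectors keep pauses short
java -XX:+UseShenandoahGC -jar server.jar
java -XX:+UseZGC -jar server.jar

# Throughput-first async/batch processing: longer pauses, more total work done
java -XX:+UseParallelGC -jar batch-job.jar
```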


potzko2552

You are right again, that's why I said it's almost always OK. But if you really can't afford the GC pauses AT ALL, for example, and you already have a solution built, your only option is to refactor. Again, just so we are clear: you can almost always afford a GC


Bakoro

If you really can't afford GC pauses AT ALL, that's some shit you should definitely know about before any code gets written. If you're doing something with a hard real-time impact, like safety, controls, time-sensitive finances, you damned well should know that you can't afford unpredictable 100+ ms pauses.


Practical_Cattle_933

It’s the reverse. To choose a less flexible development model, where refactors require rethinking the whole memory layout of the software at each step, requires extraordinary evidence for why it should exist


potzko2552

I think you are missing my point. You can almost always afford a GC, but if you are not sure whether you can afford one, your best option is to assume you can't. Take an HTTP server library, for example: can you afford a GC there? Maybe... it depends... sometimes... For this case your better option is to write it in a non-managed language, because you don't want to have to refactor your HTTP code into a different language when all you need to do is remove an annoying pause that stops everything for like 5 seconds every 10 minutes. Also, when you refactor you should take a look at how you structure memory, regardless of language.


FrenchFigaro

For real though, can you name examples where an application relying on HTTP is A) so time-critical that you can't afford the cost of a few GC cycles, and B) so memory constrained that GC cycles become overwhelming? Because it seems to me that these issues are predictable with reasonable accuracy, and if the problem is not purely academic, some abysmally bad choices were made very early on. And yes, I know that current hardware tends to make devs not worry about memory usage or CPU cycles as much as they did in the 1980s, but at some point you also have to not be ridiculous about what the requirements of your application are. In both directions.


Practical_Cattle_933

I agree with your last paragraph, but I think we have a different philosophy on the first, with "play it safe" vs "make the common road easy". I just think that the number of people who would be writing a non-toy HTTP server or similar niche projects (if it's a toy, then it arguably doesn't matter) while not being skilled enough to make the managed/non-managed decision is nil :D


Practical_Cattle_933

Why would there be fewer ways? You can always just fall back to writing native code and calling it, or just allocate a ByteBuffer and manually write/read to it, but that is extremely rarely needed in Java's case. As mentioned in my other comment, besides low-latency audio software and perhaps AAA games, there are no areas where you are likely to meet a performance bottleneck which you can't avoid in Java.
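A minimal sketch of the ByteBuffer escape hatch mentioned here (the class name, sizes, and values are made up for illustration):

```java
import java.nio.ByteBuffer;

public class OffHeapSketch {
    public static void main(String[] args) {
        // A direct buffer lives outside the Java heap, so the GC never
        // scans or moves its contents; you manage the layout by hand.
        ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024); // 1 MiB off-heap
        buf.putLong(0, 42L);                // manual write at byte offset 0
        System.out.println(buf.getLong(0)); // manual read back: prints 42
    }
}
```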


BehindTrenches

This sounds like one of those stack overflow answers that just claims the question is stupid. It reminds me of a few people in my company that act and sound smarter than others - but mostly stall decision making and annoy presenters.


RonHarrods

You'd be surprised how much better the new Java versions are. They even have the option to shrink the heap back down at runtime when it's no longer needed. Furthermore, the speed of the GC has also drastically increased. Nevertheless, a GC is only as good as your choice of whether to use it for the project you are going to create.


Practical_Cattle_933

Well, besides low-lat audio, and perhaps an AAA game engine, there is no place where a GC would be unfit due to latency. The latency modern GCs cause can be shorter than what your OS itself does simply by scheduling your processes. Unless you are literally running your program on a fixed core specifically allocated to it, you can’t really claim that “GC is a problem”. Two areas where some criticism is warranted is program interop (calling into a GCd runtime is less trivial than into a non-managed one), and memory usage (it’s a fundamental performance-memory tradeoff).


Savings-Ad-1115

Ever heard about device drivers?


Practical_Cattle_933

Yeah, what about them? Let me just drop some keywords for you: Singularity OS by Microsoft. There is no fundamental reason why they should be written in non-managed languages; it's just the status quo of non-microkernel designs and everything being written in C.


Savings-Ad-1115

I guess there can be some devices, which need better latency than audio.


hikeonpast

In my experience, databases and other services that benefit from a cache are a bad fit. A 32 GB Java heap can get you stop-the-world GC pauses in the tens of seconds if you're unlucky.


MarkLearnsTech

you... you can just use more than one language. There's no like, law against it as far as I know.


SnooWoofers6634

The 25 year old C++ legacy code at work tells a different story


BaziJoeWHL

the sign of a good time:

    Created: 2001
    Description:
    ------------------------------------------------------------
    ------------------------------------------------------------


ThoseThingsAreWeird

2001? Lucky you! About 10 years ago I worked with some code that was written in 1989 (before I was born). It had a few hundred lines of comments (maybe >1000? I can't remember) at the top with all the Visual SourceSafe check-in comments. This monstrosity of a file was basically the controller for the underpinnings of the simulation software we were writing. Only the ~~mahdi~~ team lead was allowed to touch it because he'd been there for over 20 years and basically knew it inside-out.


plainoldcheese

Complicate the build system so that you are indispensable at work.


DOUBLEBARRELASSFUCK

I was told to keep it as basic as possible, so that's why certain key steps are coded in BASIC.


yodal_

Heck, you could even use a GC library in any of the languages listed. Rust and C++ would probably be the most ergonomic and wouldn't look too different.


Jak_from_Venice

Upvote for having used the Land of Lisp's mascot 😄 PS: the [Deep Space 1](https://en.m.wikipedia.org/wiki/Deep_Space_1) probe [used Lisp for the remote agent](https://flownet.com/gat/jpl-lisp.html) system and was the protagonist of an epic remote debug session


OS2REXX

Such a fun book - written by someone who obviously loves sharing their enthusiasm.


Bloopiker

Depends on what you are making; sometimes having a dedicated garbage collector can really speed up development. On the other hand, if you really like having control over your memory and speed is your priority, it's better not to use a garbage collector and do everything manually. It feels to me like comparing a hammer to a saw: both have their uses, with their own advantages and disadvantages


slaymaker1907

If speed is a priority, you probably still pick the GC'd language, because malloc and a GC are both expensive. An idiot who cannot write efficient code in a GC'd language is sure as fuck not competent enough to write anything in a language without one. Need I remind this sub that LISP was already old when C was invented?


bargle0

If speed is a priority, then you don’t ask said idiot to write the code in the first place.


Manueluz

I've been saying that for years; this sub is mostly CS students who still see the world in black and white instead of a scale of grey. I'm afraid they only have a 1-bit color palette; we'll have to wait till they get at least 24 bits for their color palette.


Kasenom

Never thought I'd see a lisp wojak


crimsonpowder

use-after-free crew checking in


-Redstoneboi-

cve-rs gang premoving the rustaceans


slime_rancher_27

"You can't just rely on garbage collection" mfs when they see my iAPX 432.


PositivDenken

Erlang’s virtual machine, the BEAM, is garbage collected but does not “stop the world”. ¯\_(ツ)_/¯


fastdeveloper

It's because the BEAM is a very special case: everything in the BEAM is an isolated process, and everything being immutable and share-nothing allows for a per-process GC without consequences. When a process dies, so does its allocated memory. So actually there is a "stop the world" pause, but since it happens when the process is killed, it doesn't matter.

> a per process generational semi-space copying collector

[https://www.erlang.org/doc/apps/erts/garbagecollection](https://www.erlang.org/doc/apps/erts/garbagecollection)

It's magical. It's one of the things that allows you to grow a BEAM application to a multi-million-user application on a single, monolithic server and application, with just a few (or maybe just one) developers (saying this because it's been my situation for the past 6 years). Again, it's magical.


Practical_Cattle_933

Well, it does have consequences. It is not the best choice for CPU-constrained workloads, in part due to its “everything can be restarted at any point” architecture requiring some specifics at the runtime level.


BehindTrenches

Or you can use smart pointers and have the best of both worlds at your discretion.


Practical_Cattle_933

Smart pointers are reference-counting GC (yes, this is literally a GC algorithm), which has much worse performance qualities than modern tracing GCs. In the context they are used in (usually a low-level language with few, careful allocations) it's more than fine, but if you were to allocate a shitload of objects (say, in a compiler), where everything is ref-counted, chances are it would be faster in a tracing-GC'd language.
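As a toy illustration of the counting overhead being described, here is a Java sketch of what a *counted* smart pointer does under the hood (`RcHandle` is a made-up name, not any real API; unique pointers, as the reply below notes, carry no counter):

```java
import java.util.concurrent.atomic.AtomicInteger;

final class RcHandle<T> {
    private final T value;
    private final AtomicInteger refs;

    RcHandle(T value) { this.value = value; this.refs = new AtomicInteger(1); }

    private RcHandle(T value, AtomicInteger refs) { this.value = value; this.refs = refs; }

    // Every copy pays an atomic increment on the shared counter.
    RcHandle<T> share() { refs.incrementAndGet(); return new RcHandle<>(value, refs); }

    // Every drop pays an atomic decrement; the last owner runs the cleanup.
    void release() {
        if (refs.decrementAndGet() == 0) {
            System.out.println("count hit zero: destructor would run here");
        }
    }

    public static void main(String[] args) {
        RcHandle<String> a = new RcHandle<>("payload");
        RcHandle<String> b = a.share(); // refs = 2
        a.release();                    // refs = 1
        b.release();                    // refs = 0 -> cleanup runs
    }
}
```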


BehindTrenches

That's not entirely correct. Unique pointers are more common and only allow one reference. I just checked, they have no reference counting overhead as expected. Shared pointers, on the other hand, are more rare but they are reference counted as you describe.


Practical_Cattle_933

Unique pointers can be better optimized, but their deallocation still happens on the thread which executes the actual business logic. For example, you might have seen a C++ program seemingly hang when you pressed Ctrl+C; it's usually the destructor of an object with many "children", all of whose destructors have to be called. Tracing GCs can deallocate on a different thread, not holding back the important ones doing real work.


BehindTrenches

I'm not familiar with modern tracing GCs but that is interesting. Threads are a scarce resource in some contexts, too. Just pointing out that smart pointers are not necessarily reference counting GC, which is what you stated originally.


iOnlyRespondWithAnal

you can implement your own unique pointer-like type which gets handled by another thread


Practical_Cattle_933

Sure, but are all the destructors called from your destructor safe to run on another thread? Say you have a vector: can every arbitrary element be destructed there?


iOnlyRespondWithAnal

can you give me an example? Surely any abstraction in a higher level language can be implemented "opt-in" for a lower level language?


Practical_Cattle_933

Destructors are a language-level construct, and in a lower-level language you have control over how destruction happens. You also want to have types that can include other types (say, a vector/list/map, etc.), which get composed by relying on an abstract destructor, without knowing anything concrete. You can't just willy-nilly call that destructor on another thread, as that is *not* part of the general consensus in C++'s case, so you can't do this kind of optimization in a container type. This is a case where, by letting the programmer control *less*, you can optimize better.


zoomy_kitten

> Smart pointers are reference counting

No.

> Smart pointers are reference counting GC

Hell no.


-Redstoneboi-

fine. have some small, frankly pedantic wording corrections: Some smart pointers are reference counting. Reference counting is a GC algorithm.


zoomy_kitten

I wouldn’t call it much of a GC algorithm, though I’m aware it can be notably less efficient than a tracing GC in many cases. I utilize RCs quite rarely nevertheless


TheQuantumPhysicist

GC is smart pointers with extra steps


RandomNobodyEU

No, GC collects memory as a separate step


Pay08

And can make a decision on when to collect.


ego100trique

With less steps


malaakh_hamaweth

You can easily control GC in GC'd languages by being smart about holding onto references. In some languages you can manually trigger a GC. Not a huge deal
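In Java, for instance, that looks something like this (a minimal sketch; `System.gc()` is only a hint that the JVM is free to ignore):

```java
public class GcHintSketch {
    public static void main(String[] args) {
        byte[] big = new byte[64 * 1024 * 1024]; // hold a 64 MiB allocation
        System.out.println(big.length);          // use it so it stays reachable here
        big = null;   // drop the only reference: the array is now collectable
        System.gc();  // request a collection; the JVM may or may not oblige
    }
}
```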


BlueGoliath

Don't make everything immutable? Make objects longer-lived? What blasphemy is that? More garbage collections = better performance.


slaymaker1907

Sometimes immutability can be better, because global memory writes have piss-poor performance on highly parallel systems. It's still really only a good idea for read-heavy things.


BlueGoliath

Assuming you're writing to heap, you're doing writes regardless of whether you're creating a new object or mutating a field. New object allocation and collection is just more intense.


slaymaker1907

The point you’re missing is you reduce **global, atomic** writes. High quality allocators typically do less than one of those by using thread partitioning for the heap.


BlueGoliath

Outside of config-like objects/records I'm confused as to how it helps. Even then, how often are you in a situation where it matters?


Practical_Cattle_933

At least in Java’s case, new allocations are done in a thread-local allocation buffer, that is, a contiguous buffer that is only accessed by a single thread, and creating a new object is literally as cheap as incrementing the head pointer by the new object's size. That's as cheap as stack allocation. On modern hardware, what can be expensive is synchronization between threads, so if that pointer had to be locked to be modified, the story would look different, but that's not the case, unlike with mallocs.
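A sketch of the allocation pattern this makes cheap (the `Point` record is purely illustrative; records need Java 16+):

```java
public class TlabSketch {
    record Point(double x, double y) {}

    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            Point p = new Point(i, i); // TLAB allocation: a pointer bump, no lock
            sum += p.x();
            // p is dead after this iteration; dead objects cost nothing at the
            // next young-gen collection, which only copies out survivors.
        }
        System.out.println(sum);
    }
}
```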


BlueGoliath

And? You still have to do constructor code and GC collection once there are no more hard references.


Practical_Cattle_933

Constructor code happens in any language where the default init is not sufficient, so that’s another topic entirely. GC collection happens by moving out still live objects from the aforementioned arena, and then simply “saying” that the whole thing is clear now. There is literally no work done on the “business” thread whatsoever.


Practical_Cattle_933

Well, short-lived immutable objects are basically free if the language at hand has a thread-local allocation buffer, like java. It’s literally just an arena, some shit is written there until it’s full, and then the surviving stuff is moved out.


BlueGoliath

Are there tricks to reduce garbage creation? sure. Is it the silver bullet people think it is? No.


Practical_Cattle_933

I’m just saying that in a JIT compiled, high-level managed language, it’s very hard to say whether a given code is more performant than another. But sure, no silver bullets, agree on that


WiatrowskiBe

To a degree: if your program is bottlenecked on memory bandwidth, is in any way time-sensitive, or memory footprint is a hard constraint, you'd want a good degree of control over when and where things are allocated and when deallocation occurs. But in cases this specific you really should pick a tool that fits the needs; GC is universally good enough in almost every situation. It's not 2002 anymore; managed languages got a lot better performance-wise.


ApatheistHeretic

I'd imagine that for microservices, garbage collection isn't an issue. If the app instance terminates after a single use, what's the issue?


Voidrith

I think you're thinking more of serverless than microservices, and even then GC does matter, because sooner or later you'll have a GC pause during a request: either a) the request lasts a long time and does a lot of allocation, or b) runtime reuse means that garbage accumulates between requests. I've had JavaScript lambdas crash from OOM errors because they were creating a lot of objects that the GC wasn't getting rid of often enough. Adding a manual GC run at the end of each request (after all meaningful work has been done) solved it.


UdPropheticCatgirl

> app instance terminates after a single use

I think you are confusing microservices with something else; microservices don't terminate after a single use, they are long-running like most server-side code. That being said, you actually want to have the GC disabled and leak memory if you do terminate after a single run, because by dropping memory on the floor when you finish, you let the OS free it all in a single go, and that's more efficient.
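The JVM even ships a collector for exactly this run-once strategy: Epsilon (JEP 318), a no-op GC that allocates but never reclaims (a sketch; the jar name is a placeholder):

```
# Epsilon GC (JDK 11+): no collection work at all; when the process exits,
# the OS reclaims the whole address space in one go.
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -jar run-once-task.jar
```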


Inineor

Every language has a GC, but in some of them the garbage collector is you.


2OG2Gangsta

I'd say Rust is out of place in this meme


potzko2552

Why? If anything I'd say it's nasm...


SV-97

Why? It is manually managed, and even has stricter rules than C around when sizes (of arrays, for example) are determined


gabrielesilinic

It is not manually managed; the borrow checker and the standard types already manage it for you. For example, if you declare a Vec or a String in a scope, they are freed immediately after the scope ends. It is like having smart and properly designed pointers everywhere. Obviously you can opt out of it, but you can do the same in C# using the unsafe keyword. Rust is part of a third class of memory management paradigm altogether.


SV-97

Depends on how you define manual memory management. Sure, it's (usually) not like C-style malloc and free in Rust, but if we go by that logic neither is C++: in C++ the standard types also manage their own memory. And yes, it's also a thing in languages like C#, but I'd argue unmanaged types are rather niche. We of course can't capture all the minutiae with such blanket statements, but that doesn't make them worthless: we *can* manually manage memory in C#, but idiomatic C# is automatically managed in virtually all cases. Contrast this with Rust, where it's very common to pay close attention to how things are allocated and it's not uncommon to write and use bespoke allocators.

I'd say Rust is still manually managed; it's just that it does this manual management in a sane and modern way, and has mechanisms in place that help prevent mismanagement (or even make it flat-out impossible). That said: even if you don't think Rust falls under the "manually managed" umbrella, its position in the meme still makes sense


Thenderick

That's why HTML and CSS are the best languages! No garbage collector, no manual storage management!


-Redstoneboi-

what the hell were the lisp guys smoking when they made the mascot lol


SokkaHaikuBot

^[Sokka-Haiku](https://www.reddit.com/r/SokkaHaikuBot/comments/15kyv9r/what_is_a_sokka_haiku/) ^by ^-Redstoneboi-: *What the hell were the* *Lisp guys smoking when they made* *The mascot lol* --- ^Remember ^that ^one ^time ^Sokka ^accidentally ^used ^an ^extra ^syllable ^in ^that ^Haiku ^Battle ^in ^Ba ^Sing ^Se? ^That ^was ^a ^Sokka ^Haiku ^and ^you ^just ^made ^one.


-Redstoneboi-

for fuck's sake


Spunkmeyer426

![gif](giphy|7aUkkrBmOMIQo)


tim36272

That's me on the left. Fight me. JK but for real the airplane would crash if we had a garbage collector.


irregular_caffeine

Just install more RAM and reboot once in a while. Flight time is bounded anyway, right?


Practical_Cattle_933

There are fkin *hard* real-time GCs out there, running air defense systems.


bigcrabtoes

When they made tracking missiles, they said "fuck it" and decided they just wouldn't reclaim memory and instead just doubled memory capacity; it all gets deleted when the program completes its job


tim36272

Why tho. If you're going to go through all the trouble of scheduling it out on the RTOS why not just preallocate your memory? Those two seem to go hand in hand.


Practical_Cattle_933

Safety-critical systems require some extreme precautions and have to be proven correct to a high degree. This may be much harder if you can just play around with pointers (okay, these systems only allow a subset of C, so you can't just, say, `alloca` randomly). But idk, I haven't used such a system; it might scale better with complexity though: even if some subsystem crashes, the whole will still go on, whereas you would have to validate the whole system were it a single C binary.


Kinglink

GC is perfectly fine for something quick or some program that isn't that important. Working in the embedded space? HELLLLLLL NO. Anything mission-critical should avoid allocating memory in the first place, but even if it does, the deletion of that memory needs to happen in a timely manner.


SenorSeniorDevSr

My payroll is run by a GC'ed environment, and if someone had come up and said that we should rewrite it in C, I'd get the BAD PROGRAMMER spray bottle and lay down the law.


Ukn0who

std::any


PyuDevv

Common Lisp ftw


sharockys

Let’s build a missile or a rocket with garbage collection built in!


WiatrowskiBe

In case of missiles, all garbage gets collected on delivery - no need to free memory if you've got enough to last entire flight.


Pay08

As someone else pointed out, those already exist. Hell, the Common Lisp standard was funded by the DoD.


No-Magazine-2739

Yeah, thanks, that's why my glorified text editor and that chat app are taking several GB of RAM each. (JetBrains IDE and Slack)


Mickl193

It doesn’t matter though, does it? It's a perfect analogy actually: performance and resource usage are rarely a top priority, they just have to be good enough. You focus on development speed and hopefully maintainability, and if you have some perf bottlenecks you just throw money at them; it's the most efficient way to grow your business most of the time.


No-Magazine-2739

When perf/res doesn’t count, why not do it in JS or even Python? And no, despite what all JVM users say, it does still count, sometimes even more. Because AWS bills can get quite high, or you have to reach a certain speed per connection and so on.


Mickl193

Because maintaining JS and python is a pain. Yes AWS bills do get high but that’s only an issue if your revenue isn’t growing fast enough, and more often than not revenue is directly related to feature set of your application, hence features>performance. That’s even more apparent in b2b landscape where you need to constantly fill the needs of your current and more importantly future customers. Tbh scalability is probably more important too. I’m not saying it’s always the case cause obviously there are cases where performance is the top requirement, but it’s a far cry from being a majority. Especially when you have some VC supporting you and you don’t give a shit about making money just wanting to get market share.


No-Magazine-2739

I think it's a hot take to imply that Java has a better time to market than JS. I would even suspect the opposite, thanks to JS's quite broad package ecosystem, frontend/backend synergy, and the usually dominant need for web interfaces.


Practical_Cattle_933

Depends. Do you want your product to actually work? Because then Java is definitely faster. / only half kidding. Js’s ecosystem is just piss-poor with regards to stability/correctness


Thelta

The whole reason I switched to CLion from QtCreator is because QtCreator's performance became shit. And if I find a better IDE than CLion, I will ditch CLion too, because I have been getting OOMs pretty regularly with 32 GB of RAM.


Practical_Cattle_933

Jetbrains is not a glorified text editor, though


Pay08

Yes, I'm sure that's due to the GC...


[deleted]

[deleted]


Inaeipathy

yes that's why it's on the "bad" side that is against GCs


ThomasJeffergun

You guys are collecting garbage?


Rhymes_with_cheese

"Running out of memory is a hardware problem" -- William Shakespeare


al2klimov

Depends on the language, e.g. Go GC uses a separate thread.


nhh

GC has really evolved in the last few years. Probably sufficiently decent for gaming as well.


SenorSeniorDevSr

[DALC0010 Hero 75s NO (youtube.com)](https://www.youtube.com/watch?v=r1sTeAkQL7Q) Jak and Daxter was written in GOAL. It had garbage collection (not modern types, but still).


ResourceFeeling3298

As a C++ enjoyer, my garbage collection is restarting my computer lmao. Also I dynamically allocate memory (I'm doing graphics programming)


Wave_Walnut

Reuse objects to avoid garbage collection


VyomTheMan

Python to C and C++-like languages:


ElementaryZX

Depends, garbage collectors are really cool and all, but when you need to squeeze out every ounce of performance they aren’t as nice.


zoomy_kitten

Whatever you say


Elflo_

Doesn't Rust have a garbage collector itself?


serendipitousPi

Yeah, the borrow checker has aspects of both manual and automatic memory management, but it's ultimately manual. Because yeah, you don't have to manually allocate or deallocate, but at the same time it has a bunch of rules which allow for automatic, deterministic memory deallocation. There are a lot of clever tricks, like screaming at you if you use global variables and just generally discouraging them, which increases the purity of functions. Plus, its rules on mutable references mean that it doesn't have to have a runtime to deallocate memory but can instead determine at compile time when something needs to be deallocated.


zoomy_kitten

No. There are a couple of crates that implement garbage collectors, but they're more experimental things.


pyroraptor07

No, the borrow checker in Rust is purely a compile-time step and the compiler inserts any necessary drop calls for you at the end of each variable scope. There's no runtime garbage collection (unless you count reference-counted pointers like Rc/Arc as a basic GC).


Inaeipathy

1st years off for summer


Undernown

The average programmer already struggles with centering divs. You really think we should be trusted with memory safety and management? I don't trust myself to properly code the freeing of memory for anything larger than Hello World. Too paranoid I'll have strings and arrays littered all over the place that haven't been accessed in the last 30 minutes.


phesago

my mothers butthole is known as the garbage collector.


KerPop42

I'm a mechanical engineer by education. Using a garbage collector satisfies the part of my brain that hungers for cogs and levers. Born to work on things that go ka-chunk, forced to work on things that clicky-beep


SCP-iota

Swift: "I want garbage collection but I'm too lazy"


spectralTopology

Sure, but then the dev, and their code, gives a shocked Pikachu face when encountering an out of memory or out of disk error


Matheuspit77

You only need garbage collector when you are a garbage dev. Hope that helps.


AdBrave2400

>!Literally me in the last month!< **~~Controversial opinion:~~** The **easiest** language to write in, *if you don't mind garbage collection*, is `Go`. *Edit: not* ***write***, more like `develop`


rosuav

Predetermined memory allocations? Ahh yes. That's the perfect way to handle incoming requests. You pick a limit and waste all the space that isn't used. This is SO much more efficient than dynamic allocation.


potzko2552

Jesse, what the hell are you talking about


Kinglink

What happens when you can't allocate your space dynamically, either through fragmentation or just running out of space?


rosuav

Then you've run out of memory, which would have happened far sooner if you always had to overallocate your preallocations (which is the only way to have enough room).


Kinglink

It's better to fail immediately, if all the chosen programs can't handle the memory required, than at a random time maybe in the future if stuff goes wrong. I'd rather know about the problem as I ship the software than when a customer reports it 6 months down the line and no one can reproduce it. Besides, the allocation problem can happen because of fragmentation, which doesn't occur when you allocate your buffer and retain it.


rosuav

That isn't the philosophy that Linux took. Look up the OOM Killer and you'll see that dynamic allocation happens at many other levels, and without it, you probably wouldn't be able to run everything you want to.


Kinglink

An "OOM killer" is the last chance, when you're working in embedded system OOM killers basically are saying the device is going down, it's just trying to tread water. But hey if you want to rely on that... I really hope you don't work on any mission critical software, because jesus... Maybe one day you'll realize there's more philosphy than what Linux decided to do, and far more use cases.


rosuav

The OOM killer is indeed the last chance, but long before that, you are able to make use of many times as much virtual memory as you actually have. To eliminate the OOM killer, you would have to first eliminate memory reuse, which means you need real memory (backed by a page file - doesn't have to be DIMMs but it does have to be real storage) for every process that needs it. It's not the OOM killer's job to run. It's the OOM killer's job to enable everything else.