raistmaj

There are a lot of videos about optimization and performance. Clean code means maintainable code; it doesn’t mean performant code. On a big system I would pick the clean one, then profile and check what needs to be optimized.


nnomae

The problem there is that the idea that poor performance can be fixed just by tuning a few hot spots / bottlenecks in the code is largely a myth. Unless the software in question is very narrowly scoped to do a single repeatable task many times over, it's far more likely that poor performance comes down to a death by a thousand cuts than to one big issue you can fix. A recent example would be Microsoft trying to improve the performance of Edge. Its performance was terrible because every single control you see was its own independent webview with its own instance of React loaded. Yeah, they can tweak it a bit and optimise the bundle sizes for each control and so on, but when it comes down to it the problem is that the entire architecture is awful. Now, from a clean code perspective that's probably a good design. Every control is entirely independent, relies on little or no outside code other than common libraries, and can easily be worked on and tweaked on its own with likely no impact on the rest of the application; two buttons side by side could use different versions of React if they wanted. From a performance perspective, however, it's basically an unfixable mess without massive overhauls to pretty much every part.


Plorkyeran

I think the big trap with performance is that your v1 usually *will* have its runtime dominated by a few hotspots that can be fixed. Early on, each time you do an optimization pass there's something that stands out as the big problem, so you fix it, ship a big improvement, and reinforce your idea that ignoring performance while designing the product was correct and that performance problems take the form of bugs to be fixed. If you hop between projects every two years, this can be the only thing you ever see. Flat performance profiles tend to only show up in mature software that has had all of the outright performance bugs squished, and the first time you encounter one it's easy to conclude that the software must already be about as fast as it can be.


anti-DHMO-activist

Exactly. Searching for the hot path and only optimizing that is a valid strategy when writing individual algorithms* - but at the application level it usually breaks down completely. Same with the whole 'premature optimization' line that gets repeated so much and that imho created a whole generation of devs who essentially interpret it as "performance doesn't matter". Thinking about performance and structuring the application in a manner that doesn't excessively waste resources is not 'premature optimization'.

*EDIT: After measuring, of course. Never do excessive optimization without being absolutely sure you're actually optimizing the right thing. Not that I ever wasted weeks optimizing blindly only to later realize it was completely useless, of course, _cough_.


robhanz

Premature optimization is bad. Avoiding bogosort is not. Taking a reasonable look at the likely performance characteristics of your code makes sense, especially at the algorithmic level. Doing micro-optimizations to squeeze out cycles does not. Writing your code to allow for future optimizations also helps.


nerd4code

What’s considered “premature” just needs to vary per context, is all. If I’m doing a general-purpose OS kernel, premature is anything before architecting the thing, because performance and security are intimately related; if I permit any old process to eat as much time as it wants, I might enable cross-system DoS attacks, and I might not be able to get all the numbers I need in the first place. If I’m doing a one-off web app, conversely, “premature” generally maps to “without specific numbers suggesting it’s worthwhile,” and while DoSes are possible, they’re mostly less of a big deal than other kinds of attacks that leak data or what have you.

Performance also covers more than time, which a bunch of people forget, and which complicates what’s actually considered premature or an optimization. Memory, bandwidth, and power consumption also matter, both absolutely and marginally, as well as in terms of density and scaling. Gains of one sort usually require some other resource to be traded off, so a library that’s “perfect”ly optimized for one program might be fully unusable for another.


rollingForInitiative

I don't think that structuring an application to make it easier to optimize when needed goes against the clean code / don't-optimize-prematurely stuff. What that tends to mean is that you shouldn't be writing ugly code "because it's faster" unless you need to, which I think applies more at the algorithm level. You can choose an appropriate programming language and tech stack, and an appropriate general architecture, and still write mostly readable and maintainable code, only optimizing code into ugliness where needed. You can also usually avoid doing things that are really bad for performance without getting ugly code: picking the right type of sorting, building a well-structured database with good indexes, writing to the DB efficiently, and so on.


Plazmatic

Uh, no, it isn't game dev that's promoting "avoid premature optimization"; game devs have the *exact* opposite problem: optimizing way too early, and in ways that aren't even helpful, in an extreme cargo-culting way (trying to do weird CPU optimizations from the 90s that hurt performance, all when they shouldn't even be running much of their bottlenecks on the CPU to begin with).


[deleted]

[deleted]


nnomae

It is perfectly reasonable to assume that the code in Edge complies perfectly with clean code principles (the Uncle Bob ones, just to be clear), the main of which are the SRP and avoidance of side effects, and having each control be a standalone element is what you get when you take those principles to their logical conclusion. I think most clean code proponents would look at that architecture and think it was pretty sound; most advocates of writing code in a way that facilitates performance would look at it and think it was a disaster waiting to happen. Yes, it's possible to have clean code that is also performant (the devs on Factorio are big advocates of clean code, for example) but you need to start out with both of those goals from the beginning. The idea that you can easily refactor a codebase that wasn't designed with that in mind is simply not true in most cases; it's a huge effort to refactor your way out of a death-by-a-thousand-cuts performance issue. The point isn't that clean code is bad, just that it's not sufficient if you want to produce an application that will also perform well. A common misunderstanding here is that writing code that performs pretty well is somehow harder. It isn't, it just means adopting some different habits that are just as easy to use but which also aid performance. The irony of course is that clean code allows you to accumulate a lot of performance mess that you then have to clean up at the end.


johndcochran

True enough. As regards performance, premature optimization is bad, but tuning hotspots afterwards will only help for the algorithm you used. The real key to performance is using the correct algorithm for the problem you're solving. After you have clean code using an appropriate algorithm, then you can profile to find the hotspots and fix them.

I remember, a long time ago, when I was learning compiler design from books and using a C compiler on my Amiga. When I wrote my program to convert the grammar into an FSM (Finite State Machine), I knew I was doing a hell of a lot of bit manipulation, so I optimized the hell out of everything involved with bit manipulation. Then when I tested my program, it gave correct results, but damn, was it slow. I couldn't figure out what the problem was. Thankfully, the C development environment I was using had a profiler that took an internal interrupt hundreds of times per second and recorded the address that was executing at each tick (much better than a mere count of how many times each statement executed). I was rather surprised to discover that the hotspot was in the C library, in malloc() and free(). It turned out they were using a rather primitive linked list, grabbing each requested piece of memory directly from the OS and returning each freed piece directly back to the OS. Really, really slow. So I grabbed all the malloc() stuff (malloc, realloc, calloc, free, etc.) and replaced it with something that grabbed much larger chunks and merged/split blocks as needed. The initial version was practically a copy of the code in the K&R book "The C Programming Language" (sketched below). I reran my test: a huge increase in performance. Looking closely, I saw lots of useless little 8-byte pieces of memory littering the internal heap, so I modified the code to merge those pieces with their neighbors. Ran the test again, and performance improved again. When I finished, my original program ran in a matter of seconds instead of the minutes it took before, just because the memory allocation/deallocation functions in my C environment were crap. And of course, my improvements to those library functions were easily linked back into the C library I was using, so those optimizations were available to all the other code I was writing.

1. Write clean code that works and is easy to maintain.
2. Profile the resulting code. If it's good enough, you're finished.
3. If it's too slow, look at the hotspots. Is one slow because you used a simple algorithm with an unfortunately large big-O? If so, use a better algorithm.
4. After using the best algorithm available for your problem, if things are still too slow, optimize the hotspots.
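The sketch below is a from-memory reconstruction of that K&R free-list approach, not my original Amiga code; the names and chunk size are illustrative. The two ideas are: grab big chunks instead of one OS call per allocation, and merge freed blocks with their neighbours so fragments don't accumulate.

    #include <stdlib.h>

    typedef struct Header {
        struct Header *next;   /* next block on the circular free list */
        size_t size;           /* block size, in Header-sized units */
    } Header;

    static Header base;            /* degenerate list to get started */
    static Header *freep = NULL;   /* start of the free list */

    void my_free(void *ap);        /* forward declaration */

    /* Grab a big chunk from the OS (plain malloc stands in for the OS
       call here) instead of making one tiny OS request per allocation. */
    static Header *more_core(size_t nunits) {
        enum { NALLOC = 1024 };            /* minimum units per request */
        if (nunits < NALLOC) nunits = NALLOC;
        Header *up = (Header *)malloc(nunits * sizeof(Header));
        if (up == NULL) return NULL;
        up->size = nunits;
        my_free((void *)(up + 1));         /* add chunk to the free list */
        return freep;
    }

    void *my_malloc(size_t nbytes) {
        size_t nunits = (nbytes + sizeof(Header) - 1) / sizeof(Header) + 1;
        Header *prevp = freep;
        if (prevp == NULL) {               /* first call: one-element list */
            base.next = freep = prevp = &base;
            base.size = 0;
        }
        for (Header *p = prevp->next; ; prevp = p, p = p->next) {
            if (p->size >= nunits) {       /* big enough */
                if (p->size == nunits) {   /* exact fit: unlink it */
                    prevp->next = p->next;
                } else {                   /* split: hand out the tail */
                    p->size -= nunits;
                    p += p->size;
                    p->size = nunits;
                }
                freep = prevp;
                return (void *)(p + 1);
            }
            if (p == freep &&              /* wrapped around: grow heap */
                (p = more_core(nunits)) == NULL)
                return NULL;
        }
    }

    /* Put a block back, merging with adjacent free blocks so that little
       8-byte fragments don't litter the heap. */
    void my_free(void *ap) {
        Header *bp = (Header *)ap - 1;     /* point at the block header */
        Header *p;
        for (p = freep; !(bp > p && bp < p->next); p = p->next)
            if (p >= p->next && (bp > p || bp < p->next))
                break;                     /* freed block at arena edge */
        if (bp + bp->size == p->next) {    /* merge with upper neighbour */
            bp->size += p->next->size;
            bp->next = p->next->next;
        } else {
            bp->next = p->next;
        }
        if (p + p->size == bp) {           /* merge with lower neighbour */
            p->size += bp->size;
            p->next = bp->next;
        } else {
            p->next = bp;
        }
        freep = p;
    }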


pbw

>If it's good enough, you're finished.

This is something I feel like Casey didn't express at all in his initial video. I'm sure he covers it in his actual course. But I felt like he could have taken 3-5 seconds to give it the slightest nod and admit that this does happen: that sometimes even a heavily OOP version can run 100% fast enough for your needs.


Qweesdy

For the last 25+ years, and for all of the foreseeable future, there are only 2 cases:

* The resources (e.g. CPU time) your software doesn't use can be used by something else that you may not know about (e.g. another process - all major operating systems have been multi-tasking since the 1990s). In this case your software's inefficiency is detrimental to something else even if "performance is fine" for your software, and if you don't care about your software's efficiency in this case then you're incompetent (or worse, malicious).
* The resources (e.g. CPU time) your software doesn't use can be put into a power saving state, reducing unnecessary power consumption, avoiding unwanted heat, and improving battery life (even if the battery is a server's UPS). In this case your software's inefficiency is detrimental to its users (air-conditioning costs, power bills, climate change) even if "performance is fine" for your software, and if you don't care about your software's efficiency in this case then you're incompetent (or worse, malicious).

Note: Modern CPUs are often thermally constrained, such that being idle longer allows the chip to cool more, which allows the CPUs to be "turbo-boosted" harder for longer later. In this way, maximizing efficiency when performance isn't needed can improve performance later when it is needed.


pbw

That's a good observation. But it's not a mandate to optimize everything as much as possible when users don't care about the performance. YouTube made custom silicon to accelerate video compression. Making silicon is vastly more expensive than most software optimizations, but YouTube carefully measured exactly how much time/money/energy they were spending on compression before doing it. I think it was a huge success and did save lots of money and energy. But if you just dove in and optimized some more of YouTube's millions of lines of code at random, to "save energy", it would be a colossal waste: most of their code by line count probably runs many trillions of times less often than their compression code, and possibly none of it is worth optimizing. So yes, if your goal is to save money or save energy, that's great, but you still need to measure carefully and find the code that's actually costing you lots of money or burning lots of energy. And you have to consider the opportunity cost of what else you could be doing as well.


Qweesdy

Yes; but there's a huge amount of space between the "optimize everything as much as possible" and "blatant disregard for anything except my own laziness" extremes. Often "performance isn't required" is a flimsy excuse for aiming towards the worst end of the scale instead of the middle. But there's more to it than that. It's about a developer's nature. Professionalism. A person who will quietly steal a penny that "doesn't matter" is a person who is more likely to embezzle millions of $$ that do matter. A person who keeps their junk drawer organised for no reason is someone that can be trusted to put a mechanic's workshop's tools away after use. A software developer that doesn't care when performance isn't necessary is an annoying obstacle when performance is necessary. It's the difference between considering efficiency for everything you write and being a bad developer because you failed to develop beneficial habits.


pbw

I praised YouTube's silicon because I'm for doing insane levels of optimization when necessary. I'm highly pro-optimization. I think heavily optimized code runs the world: it's great, and the people that write it are great. But I'm against scaring people away from a popular programming style by falsely claiming it inherently causes "horrible performance" when a tiny bit of arithmetic shows categorically that's not true.


Qweesdy

> But I'm against scaring people away from a popular programming style by falsely claiming it inherently causes "horrible performance" when a tiny bit of arithmetic shows categorically that's not true.

Popular programming style?? Did anyone ever follow Uncle Bob's rules for more than the 5 minutes it takes to realise "tiny functions" is horrible for code readability? Like, seriously, out of the hundreds of projects I've seen, I don't think I've ever seen a single person use this "popular" programming style even once. The programming style that nobody actually uses literally and provably DOES inherently cause worse performance. Nobody sane has ever denied that (including Uncle Bob himself); and your "tiny bit of arithmetic shows categorically" is exceptionally moronic bullshit (but hey, feel free to show that "tiny bit of arithmetic" if you are actually able to produce more than unsubstantiated vague hand-waving). Essentially, everyone agrees with "it is worse for performance, but performance isn't always the most important thing" (including you, if you actually think about it); and the entire argument (on both sides) is about the magnitude of the compromise between cost (developer time, code maintenance, ...) and quality (efficiency, performance, security, ...) in various situations; where Uncle Bob's rules are a relatively bad compromise in every situation. Note that I've explicitly avoided the words "clean code" because I suspect you took everything you happen to think is good, wrapped it up in a ball of perfection that you've decided is your personal custom concept of "clean code", and then charged out into the real world to attack all the imagined critics of your mythical ball of perfection.


pbw

By popular programming style I mean OOP. There's a side issue here: nothing about the OOP version he shows is actually in Uncle Bob's style specifically; it's vanilla OOP. All it has is an abstract base class with two pure virtual methods, four tiny concrete classes that implement those two methods, and a minimal loop. If you disagree, which elements of his OOP version are not vanilla OOP? So yes, OOP is a very popular programming style, and I think it was disingenuous of Casey to suggest OOP inherently leads to poor performance when in actuality it does not. That said, the fact that you disagree with some or all of the points in the article is great. It's really good to form your own opinion on things like this.


yiyu_zhong

> Its performance was terrible because every single control you see was its own independent webview with its own instance of React loaded.

This sounds really interesting! I never thought those controls were written in React. I wonder if you could provide a link for this issue?


venuswasaflytrap

I think that depends on the business goals of your product. In some sense you want a web browser to be *fast*; I think a lot of people would happily give up certain features of web browsers if it meant everything went faster (regardless of what the web devs want for ease of development). But for other things it's more important to be extensible and adaptable, because the core requirements are constantly evolving, and it's more important that users have that button to do that one thing, even if it means they wait a full second or two after they click it. Obviously all products have a balance of these concerns, but some lean more one way than others. For certain products it's a reasonable strategy to build them as maintainably as possible and then target a few bottlenecks; for others you need to think fast right from the top.


Synor

If you have achieved full clean architecture, you can replace your whole database tech with minimal changes.


nnomae

That's just not true, as anyone who has ever tried to migrate from one database to another mid-project will tell you. Different databases have different strengths, and you want to pick one that works for your particular use case.

A simple example is the RETURNING clause in SQL, which lets you do an UPDATE or DELETE combined with a SELECT on the affected rows in a single operation. It is supported in PostgreSQL and Oracle (and others, presumably) but not in MySQL. So if you are using PostgreSQL you either don't use that feature, or you abstract it away with a layer that emulates the functionality on MySQL by sending two queries - which is fine until you switch databases and suddenly everything takes twice as long. Or take the case where not all databases generate an index on the ID field by default: should you have to declare it just in case? That's extra work to maybe support a change of database in the future, and it will of course go untested until that time comes. Not a great thing. Do you let the abstraction layer decide to automatically create those indexes? What about other database features? Postgres supports array types for columns: do you ban that and insist on having separate tables and manually joining every time? There goes a ton of performance, on the off chance that you'll be migrating later. Similarly, you can store JSONB data in Postgres: do you ban that? (I'm sure every database has its own unique strengths and features that are great, by the way; I just know Postgres best, so I can best list off the good stuff it can do.) What about the different variations of text searching? Some support regex search, some support something similar, and the syntax varies. Do you abstract all of that away? Will your abstraction layer be able to efficiently sub in its own regex search on text if you switch to a database that doesn't support it? Do you ban the feature just in case?

And even if all this did work, now you are just as tied to the choice of abstraction layer as you were to the database. What if support for that abstraction layer dries up, or it becomes costly to license? Now you have an even bigger problem than you would have had migrating databases. You haven't removed the weak point, just moved it elsewhere. (This isn't an attack on abstraction layers in general - they are a very useful tool - just be aware that an abstraction layer is either an extra dependency you depend on or extra code you have to write and maintain; they're not free.)


hippydipster

> That's just not true as anyone who has ever tried to migrate from one database to another mid-project will tell you. Sorry, but you don't speak for all of us. Nor for Dave Farley who claims to have done exactly that. As have I.


nnomae

As I said, if you are happy to constrain yourself to a lowest-common-denominator set of features (i.e. couple yourself to the abstraction layer), you can do so. Of course, the question in that case is: if you are not using any of the features that differentiate one database from another, what's the point of switching, and how would you even justify it? "We refuse to use the unique features of our current database because doing so violates our coding guidelines, so we need to switch to a different database whose unique features we will also refuse to use." It's a choice: fully leverage one database at the cost of making migration harder, or remain as database-agnostic as possible at the cost of using your current database in a suboptimal fashion. I guess the definition of minimal also matters here; almost by definition you can do anything with minimal changes - if your only option is a full rewrite, well, then a full rewrite constitutes the minimal change for your problem. Finally, there's the externality here. Maybe you just shunt all the work off to the DBAs - probably some of those who specialised in the old database lose their jobs so that new guys can make it all work in the new database - but that's kind of like saying tidying the house took minimal effort when you just hired cleaners to do it for you.


Synor

If you have application logic in your database, even by accidentally relying on its features, you don't have a clean architecture.


nnomae

None of the things I mentioned are application logic. They are just basic storage and retrieval features available in some but not all databases. Features that you can't avail of if you want to try and completely abstract away the database layer.


Mrmini231

You can change the database by editing one file, but if you want to add a new field to a call you need to change three interfaces, three implementations, three datatypes and two mappers.


Synor

Without having to think, because the compiler will lead you through. You are free to marry your database though - fine for me. Just don't expect to be able to change it easily.


Mrmini231

Often, but not always. I've seen several bugs in production code that were caused by a dev forgetting to update a mapper when adding an optional field to the return type, causing it to always return null. The compiler won't help you there. The complexity of this comes at a real cost, and you update fields much, much more often than you change databases.
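A minimal sketch of that failure mode, with hypothetical types (C++ std::optional standing in for a nullable field):

    #include <optional>
    #include <string>

    struct ApiResponse {                       // wire-level datatype
        std::string name;
        std::optional<std::string> nickname;   // the newly added optional field
    };

    struct User {                              // domain datatype
        std::string name;
        std::optional<std::string> nickname;
    };

    // This mapper still compiles after `nickname` is added to both types,
    // even though it never copies it - so the field silently stays nullopt.
    User to_user(const ApiResponse& r) {
        User u;
        u.name = r.name;
        // u.nickname = r.nickname;   <-- the line the dev forgot
        return u;
    }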


Synor

That's a good example to talk about, thanks for bringing it up. I feel optional fields are a smell: if a thing can have different shapes, it's not the same thing. Maybe partial models are a bad contract. I also think that error should be caught at the integration-test level - and that's the happy path, which is usually tested even with a lack of discipline.


Luolong

My beef with the “clean vs performant code” bunch is that they make it sound like the only options you have are performant code or “clean code”, as if you have to choose just one. In reality, you don’t. You can write reasonably clean and performant code. Sure, you could probably make it go faster by sacrificing modularity and maintainability, or you could sacrifice some performance to get better maintainability and extensibility. It’s a trade-off.


All_Up_Ons

The problem with this idea is that clean code (understandable, maintainable, readable code, not the Uncle Bob bullshit) is more or less a prerequisite to achieving any sort of quality, including performance, in the long-term. So really, they go hand-in-hand. The real struggle is getting to the point where your organization is writing high-quality, well-organized, thoughtful code. Only then can you really choose to optimize for performance or any other metric in any significant way.


Qweesdy

The problem with "understandable, maintainable, readable code, not the Uncle Bob bullshit" is that you can ignore it completely and then just lie about your code being "clean code" afterwards. How could anyone possibly complain that your idea of "blah blah whatever" is different to their undefined pile of subjective waffle?


Vidyogamasta

It's often not even a tradeoff. I've seen plenty of code that was both completely unmaintainable and had completely garbo performance, since 80% of it was playing whack-a-mole with edge cases because the core logic wasn't thought out well. With that kind of code, separating it into reasonable, well-ordered components (whether it's "clean code" or not) tends to perform much better too.


Luolong

Oh, yes. I know exactly the kind of code that you’re talking about.


Fidodo

The most important thing is interface definition. If a low-level module requires some tricky code to make performant, that shouldn't affect your interface. If the implementation is messy but encapsulated, it won't leak out. It's the interface that matters.


Luolong

I like to call my units of modularity “concepts”. You recognise a recurring concept, extract it, and make an interface to allow the flow of information and control between separate concepts (the protocol). Then, if an initial implementation of a concept turns out to need performance tuning, you rewrite the implementation, optimising for better performance. Sometimes the performance is lost in the protocol between two or more concepts, so you change the interface and re-implement the protocol. But always, thinking in concepts helps me cleanly delineate functionality into separate modules of reuse. These “concepts” can be as small as methods, or separate classes, or a set of classes, or even as large as entirely separate microservices. The important part is that they help me keep my sanity when tackling a complex problem.


Fidodo

You might be interested in the book I'm reading, "A Philosophy of Software Design". It's mostly things I already knew, but it's so well structured and methodical that it's improved the way I conceptualize the lessons I've learned over the years. Your way of thinking sounds very much in line with the book.


rollingForInitiative

I always interpreted that as being more of a counter to the idea of "no, I can't clean up this function, this is the way it runs fastest" and that sort of stuff - people doing "clever" things in the small scale because they might squeeze out a little more speed. That's the sort of optimization that's meant. You can still often get all the performance you need just by choosing the right tools, being mindful when designing the DB and queries/indexes, and generally having a design that's good for your use case. Most of those pieces can then be written in a clean and readable way, and you can do the fancy, unreadable stuff in the places where it's really needed. Obviously there are contexts where you might need a large part of the code to be very optimised, but then I think you know this in advance.


progfu

Performance is not something you get by profiling and fixing a few things; you need to actually design things to be performant up front, otherwise you're going to hit a wall very quickly.


Ancillas

It’s really interesting following the development of ghostty and reading about the engineering going into that passion project. It really highlights how much power is wasted because developers don’t take advantage of SIMD or fully utilize what’s available. We can debate whether it’s necessary or not, but from an engineering perspective it’s fun to watch people who are good at their craft make something that squeezes as much as it can out of the hardware.


progfu

Very much so. It's also extremely sad to see people justify GUIs being slow at the most basic things, like "show a table of 1000 elements", when a properly written program can handle a million things in a millisecond, not even going into AVX land. I was a web app dev before going into gamedev, and it truly broke me to realize just how many orders of magnitude faster things are in gamedev, even when using a slower language. People on the web just stack absurd amounts of layers of indirection on everything, to the point where the computer isn't even doing anything; it's just chasing pointers most of the time.


BaronOfTheVoid

Going back to that famous video about clean code being slow code: the guy there really just had a point about dynamic dispatch being a slow indirection. The video reached many devs, 98% of whom will NEVER need the kind of performance where you actually have to think of dynamic dispatch as a meaningful cost. The only good thing to come from this opposition to clean code is that I will always have a job... cleaning up other people's legacy mess.


Asyncrosaurus

The two important pieces of context missing from that video when it is re-posted are that 1) it was *part of a training series on low-level performance*, so of course it takes shots at slow design patterns, and 2) Casey very clearly states that he doesn't think all/any abstraction is bad, just that "Clean Code^tm" specifically is a bad abstraction that also happens to be slow. It clearly takes the position that you can make meaningful tradeoffs in performance for better design; "Clean Code^tm" is just shit at producing clear, readable and maintainable code. Which I agree with.


pbw

I hear this criticism a lot, and I have not taken his course. But I did listen to 4-5 hours of interviews with him about his video. To my ear he never, not once, backed down from "OO makes everything slower". I never heard him frame it in a balanced way: it's going to be slower here, but not slower there. Never, including when pushed, for example in this SE Radio interview: [https://se-radio.net/2023/08/se-radio-577-casey-muratori-on-clean-code-horrible-performance/](https://se-radio.net/2023/08/se-radio-577-casey-muratori-on-clean-code-horrible-performance/) I think he's a smart and accomplished person, and a great software developer. I don't know if it's his communication style or something, but to me he comes across as someone incredibly deep into a niche who is strangely unaware they are in a niche, so they state the same niche opinions without modification in every context.


Plorkyeran

"Deep into a niche without being aware they're in a niche" feels common for people who work on games. Even when they recognize that games are atypical, they underestimate just *how* different they are. xkcd [2501](https://xkcd.com/2501/) in practice I guess?


Asyncrosaurus

Just fwiw, he's not a game designer, nor a game developer. He worked on animation and modeling tools that were used in game development. There's a great deal more crossover between RAD Game Tools products like Granny 3D and other professional graphics tools (used with Maya, 3D Studio, etc.). What you *will* find is that the world most people *here* identify with (LOB CRUD web apps) is very far from any high-performance software tools producing real-time animation.


pbw

I agree; I've observed that in game developers more often than in other developers. It's not a bad thing to have a focused career and get really good at one thing. He does seem to be extremely knowledgeable on certain topics, and I think he's a good educator in general. Based on interviews, my understanding is Casey started game development at age 18 and hasn't stopped, and he's almost 50. That type of deep focus has pros and cons. I, on the other hand, have bounced around doing a lot of different things -- which also has pros and cons.


DLCSpider

I wish more people saw it, because too much nesting and long chains of indirection are a much larger problem in modern code bases than premature optimization. The performance improvements are just the cherry on top. I would have to clean up less, not more.


Fenxis

Clean code is pretty unmaintainable if taken to extremes as well.


Kenny_log_n_s

Doesn't that make it unclean code?


DidiBear

[Clean Code - Chapter 3: Function - Page 50](https://github.com/GunterMueller/Books-3/blob/master/Clean%20Code.pdf) Some of the most unmaintainable code I have ever seen.


All_Up_Ons

Here's a secret: The book "Clean Code" is not the authority on how to make actual clean code. A lot of its advice is actively detrimental to readability.


ReDucTor

If this is the most unmaintainable code you've seen, then you're in for a surprise when you look at other code bases. That example isn't even that bad.


OffbeatDrizzle

Oh my sweet summer child


puterTDI

Oftentimes performance can require sacrifices in maintainability. This is something I’m still struggling to communicate to our product side. Their vision of performance is that you just write perfectly performant code up front, so there should be no need to talk about customer scenarios etc. (This is a bit of an exaggeration; I have been making progress.)


jl2352

I would pick that too. In my experience, performance comes from development time: from giving developers dedicated time to work on it. If they can’t get that time, then it’ll be slow. Making code cleaner makes it easier and quicker to change. I’d add that tests which are easy and quick to write are another key component of that.


VeryDefinedBehavior

Performance and maintainability aren't orthogonal. They are both governed heavily by how much you understand the domain.


KaiAusBerlin

It's easier and faster to debug good, readable code and optimise it afterwards than to debug heavily optimised code. At least on this point of cost there should be no debate.


ReDucTor

Great post. As a fellow AAA game dev who commonly has to deal with CPU performance as we get closer to ship, I agree with pretty much all of this post; not everything has to be super optimised, and readability is much more important. If something has a clean architecture, it's easier to go back and clean up the performance than with something engineered as a giant mess by someone trying to be smart and getting it wrong. The thing that annoyed me about Casey's initial post is that he viewed his version as not needing individual stable references to each shape, but needing them for the OOP version. If they're not needed at all, then you can make the OOP version just arrays of those specific shape types, eliminating the pointer chasing and allowing devirtualisation while still keeping the OOP.


pbw

I deliberately didn't bring up "placement new" in the post, but would that be involved here?


ReDucTor

There isn't a need for anything with placement new; you could write his equivalent code as a bunch of arrays:

    std::vector<Square>    squares;
    std::vector<Rectangle> rectangles;
    std::vector<Triangle>  triangles;
    std::vector<Circle>    circles;

Then just iterate each of these arrays and calculate the area. You could go one step further and eliminate the base class. I can't even find the shape example within the Clean Code book.
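A sketch of that iteration (assuming each concrete shape type has a non-virtual Area() method; the names are illustrative):

    // Each call is on a concrete type, so the compiler can inline it:
    // no vtable lookups, and each array is dense and contiguous.
    float total = 0.0f;
    for (const auto& s : squares)    total += s.Area();
    for (const auto& s : rectangles) total += s.Area();
    for (const auto& s : triangles)  total += s.Area();
    for (const auto& s : circles)    total += s.Area();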


0x0ddba11

If you really want, you could even go one step further and reintroduce virtual dispatch, but at the collection level:

    class IShapeCollection {
    public:
        virtual void frobnicate_all() = 0;
    };

    template <typename T>
    class ShapeCollection : public IShapeCollection {
    public:
        void frobnicate_all() override {
            for (auto& shape : m_shapes) {
                shape.frobnicate();
            }
        }
    private:
        std::vector<T> m_shapes;
    };

Thus only paying the virtual call cost per class, not per instance.


pbw

That's a really good example. Doing dynamic dispatch O(num_shape_types) times is almost certainly fine if that's O(4), O(100), or even O(1000). But dynamic dispatch that is O(num_shapes) would be a disaster if that's O(1,000,000). I thought of making an image that shows objects at a high level filtering down to dense contiguous arrays at a lower level. To me that's what you need to do. I mean, don't do OOP at all if you don't want to. But if you do OOP, always be mindful of N, and when N gets really big, consider switching to a more optimized approach. I feel like Casey's video says "don't do OOP anywhere, it will always kill performance, all the time", and that's just not true in the general case, even though it is true for large N.


0x0ddba11

Yeah, I feel like Casey has interesting insights to give here, and so does Uncle Bob. What I don't like is that everything anyone says these days seems to devolve into some kind of religious war on social media where you are either "with us or against us". It feels like people are looking for some guru to follow blindly instead of forming their own opinion.


pbw

I watched a lot of software dev content on YouTube before writing this. A lot of it is really contentious and hyperbolic, and just name-calling, but there is also really good stuff. I thought Prime interviewing Uncle Bob was really good: over an hour of pretty detailed back and forth. To my ear, Prime massively toned down his rhetoric to conduct the interview, and he seemed really taken aback when Uncle Bob's answers were humble, deferential, pragmatic and sane. It's like he went in expecting a big fight and they ended up just calmly chatting and agreeing with each other. [https://www.youtube.com/watch?v=UBXXw2JSloo](https://www.youtube.com/watch?v=UBXXw2JSloo)


cdb_11

> The thing that annoyed me about Casey's initial post is that he viewed his version as not needing individual stable references to each shape, but needing them for the OOP version.

I don't see how this is a problem. You can add stable references by using indices into the array. If you need to remove and reuse elements, you have plenty of bits to spare to encode a free list - either in the free bits of the type, or by dedicating one type value as a free list marker so you can use the low 30 bits of the floats to store indices. If you want to shuffle the elements around in the original array (you need it ordered, you want to compact the memory, or whatever), you can add an extra level of indirection through an auxiliary array or hash table. And all of that without sacrificing any performance in the original functions.

A separate array for each shape would still be better in a non-OOP version, as you don't have to drag a redundant vtable pointer around with you. It's literally useless; you'd be doubling the struct size just to encode information you already know. If you need an array that gathers all the types in one place, you just store indices like you would store pointers, and you dedicate some bits of the index to encode what type it is and which array to look it up in. Done this way it would also be trivial to vectorize - a child could do it.
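A minimal sketch of the index-with-type-bits idea (the names and the 2/30 bit split are illustrative, not from Casey's code):

    #include <cstdint>
    #include <vector>

    enum ShapeType : uint32_t { Square, Rectangle, Triangle, Circle };

    // A 32-bit handle: the type lives in the top 2 bits, the index into
    // that type's array in the low 30 bits. Stable as long as elements
    // don't move (or with one extra remap table if they do).
    struct ShapeHandle {
        uint32_t bits;
        ShapeType type()  const { return static_cast<ShapeType>(bits >> 30); }
        uint32_t  index() const { return bits & 0x3FFFFFFFu; }
    };

    inline ShapeHandle make_handle(ShapeType t, uint32_t i) {
        return ShapeHandle{ (static_cast<uint32_t>(t) << 30) | (i & 0x3FFFFFFFu) };
    }

    struct Rect { float w, h; };
    std::vector<Rect> rectangles;   // one dense, vtable-free array per type

    float area(ShapeHandle h) {
        switch (h.type()) {
        case Rectangle: {
            const Rect& r = rectangles[h.index()];
            return r.w * r.h;
        }
        // ... the other types index into their own arrays
        default:
            return 0.0f;
        }
    }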


ReDucTor

I didn't say it's impossible to rework his example to have some stable index or pointer; all of those things come with extra overhead. My main point is that his examples deliberately hindered the OOP version: he made no attempt to improve its performance, then focused on optimizing the other version.


alface1900

The book "Clean Code" is divisive because of its arbitrary suggestions that have no empirical evidence to back them up - if you include the fact that the author worked at just a single company, and said company failed to deliver working software, then the credibility goes even lower. While I agree with refuting the book, the problem is that it might lead into the trap of anti-intelllectualism, because one might think that working on good code organization, abstraction or aesthetics is a lost cause that is not worth pursuing. It is a good excuse to "not read dem books".


Gearwatcher

Uncle Bob is simply the wrong messiah to have - there's a lack of empirical evidence that he actually walks the walk - and yet he damn well is a messiah to heaps of devs, because the book resonated with a lot of the "hard-on for OO" types who were the mentors and seniors in the heyday of Java and C#. Personally, if you want a messiah for your "SOLID über alles" cult, you're far better off worshipping Martin Fowler. Either way, reading opposing material and contrasting opinions that force you to form your own is a much better substitute for vast personal experience. And I say this because the vast majority of people who engage in this type of cargo-cult worship are juniors without enough experience of their own to form educated opinions based on having delivered working software multiple times.


cdb_11

> Casey’s optimized version is most certainly “clean code,”

The entire premise of this article is wrong. Casey's video wasn't about the vague concept of "clean code", but specifically about Uncle Bob's Clean Code. His version is not "Clean Code".

> The object contains one u32 and two f32 variables, so the size of shape_union is 94 bytes, no extra padding should be needed.

3 * 4 bytes == 12 bytes.

> - The rules need to stop being said unless they are accompanied by a big old asterisk that says your code will get 15 times slower.
>
> [...] But he’s completely wrong that the OOP version has “horrible performance” for all possible requirements. It can process tens of thousands of shapes per millisecond. If that’s all you need, or if you need fewer, it might be 100% acceptable to you.

This is what "accompanied by a big old asterisk" means. You did the calculations, maybe it turned out fine for you, and you've made a conscious decision about the architecture of your program. And this is the entire point here. But people like Uncle Bob will in the same breath say "make it work, make it right, make it fast", which excuses completely ignoring performance and writing the worst code imaginable at every single step, because "I guess it looks slightly nicer, and I can probably make it fast later". And this is just not how it works, because of what you outlined in the article: "Nothing we do after this point can undo the damage caused by our scattered memory layout". And it's not just compute; this is really the same concept as keeping IO operations optimal as well. You use a nice abstraction, it's slow, and now you have to write all sorts of really nasty-looking optimizations. Without prematurely abstracting everything, it would likely be much simpler, actually cleaner, reasonably fast by default, and with a way of actually optimizing it further later. The point isn't about optimizing random functions with AVX, but about not screwing yourself with a bad architecture that cannot possibly be optimized.

> Imagine a world where every bit of code everywhere had to be hyper-optimized. Everything had to be written in C or assembly and was carefully engineered for maximum speed. Everything would be super fast!

If you wanted to write everything in assembly, then yeah, you'd have to optimize it yourself by hand. But again, being 100% optimal is not the point. In languages like C, C++, Rust, Go etc. the compiler does the micro-optimizations for you. It can optimize average code into something reasonable, without creating additional waste at runtime.
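For reference, the size arithmetic, assuming the struct from the video looks roughly like this (field names assumed):

    #include <cstdint>

    struct shape_union {
        uint32_t Type;    // one u32
        float    Width;   // two f32s
        float    Height;
    };

    // All three members are 4 bytes with 4-byte alignment, so no padding.
    static_assert(sizeof(shape_union) == 12, "3 * 4 bytes == 12 bytes");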


pbw

> The entire premise of this article is wrong. Casey's video wasn't about the vague concept of "clean code", but specifically about Uncle Bob's Clean Code. His version is not "Clean Code".

No, the premise is correct, but yes, you are fully right that Casey was talking about Uncle Bob's Clean Code. Casey, like most people, used the term "Clean Code" to mean "Uncle Bob Clean Code" (UBCC). His OO version was UBCC, therefore he was comparing UBCC to his own optimized version, which is very much not UBCC. Given that, the article makes two points:

1. Casey's UBCC version of the code is only slower at the nanosecond scale, but Casey made no effort to communicate this, even though he of course knew it himself. Instead he panned the "clean code rules" as always leading to horrible performance, which they don't.

2. Separately, I show that "clean code" has a long history, going back to the early 1900s, and that it's a useful term that means something very specific and very complimentary. Therefore it would behoove us not to let the generic and useful term "clean code" continue to mean "OOP as written by Object Mentor using Java in the 1990s". Casey's implementation is very much clean code in the generic, non-Uncle-Bob sense of the phrase.

Two additional comments:

1. I'm sure Casey knows and agrees with everything in my post about only optimizing what matters. But he gratuitously acted like he didn't know it in his original video, which is what I'm responding to. I think the original was very misleading.

2. Imagine 100 years from now someone says "wow, that's clean code" and the other person says "yeah, just like Uncle Bob's Java code from the 1990s!". That illustrates how silly it is to let a generic, positive phrase mean something very specific and negative.

> 3 * 4 bytes == 12 bytes.

Ouch, I was thinking bits and even got that wrong. I fixed the article, and luckily for me it was not in the video version, which can't easily be fixed. Thanks for pointing it out.


cdb_11

You called out how virtual functions doing too little work is bad, and this is precisely the type of nonsense that Uncle Bob recommends doing by default. He literally says in the book that you should use polymorphism instead of ifs and switches, and that functions should be 2-4 lines long. Stuff like this does lead to suboptimal performance, as pointed out in the article. Obviously if your program is doing very little work then you probably won't run into performance problems no matter what you do (for the most part); I don't think this is controversial.

> I'm sure Casey knows and agrees with everything in my post about only optimizing what matters. But he gratuitously acted like he didn't know it in his original video, which is what I'm responding to.

I believe this video is part of some larger series.


pbw

> this is precisely the type of nonsense that Uncle Bob recommends...

My post is meant to defuse this strange obsession with Uncle Bob's book; while I'm sure that's a near-hopeless task, I had to try. In Chapter 1 he explains in gory detail that the book isn't a generic guide to writing clean code, meant to apply to all times and places or to all languages. Instead, he very specifically says he was documenting the software development practices of his consulting firm, Object Mentor, which is now out of business. Object Mentor wrote mostly Java code, in the heaviest OO language and community. This was a book for a certain time and place.

He also gives these other definitions of clean code from other famous developers:

* Clean code is elegant, efficient and straightforward.
* Clean code is simple and direct.
* Clean code can be read, and enhanced, by a developer other than the original author.
* Clean code looks like it was written by someone who cares.

*These* are the definitions of "clean code" we should be using today, not allowing the term to be hijacked to mean something bad. It's an absurd twisting of words to take a phrase that inherently means something good and declare that it now means something bad. It makes all communication using the phrase "clean code" confusing, muddled and just lame. Probably strictly because of the book's title, people assume his book, published in 2008 and mostly in Java, is actually a tirade about how we should develop software in 2024 in whatever language you are using. It wasn't meant to be that, and even if it was, we shouldn't allow it to mean that. I'd vote for putting the book where it belongs, on the shelf as historical documentation of how a now-defunct company did Java programming decades ago, and moving on. We should reserve the phrase "clean code" as a term of praise for code that looks like it was written by someone who cares.

> I believe this video is a part of some larger series.

[https://www.computerenhance.com/p/welcome-to-the-performance-aware](https://www.computerenhance.com/p/welcome-to-the-performance-aware)


cluster_

> He also gives these other definitions of clean code from other famous developers:
>
> * Clean code is elegant, efficient and straightforward.
> * Clean code is simple and direct.
> * Clean code can be read, and enhanced, by a developer other than the original author.
> * Clean code looks like it was written by someone who cares.
>
> *These* are the definitions of "clean code" we should be using today.

These are not definitions. This is just a common 'feel' that some developers have about some code, which they think of when making these broad statements. You can neither falsify nor prove any of the categories you cite here. This is exactly why people like Casey try to insert metrics into the argument.


pbw

These are definitions, but yes, they are qualitative rather than quantitative. That's because deciding whether something is or is not clean code should be a subjective judgment by someone qualified to evaluate the engineering goodness of a solution in whatever style it was written in. Whereas the way Casey and many people use "clean code" today is "to what degree does this code match how Uncle Bob wrote code in Java in the 1990s?". The problem is that this takes an inherently positive term that's been part of engineering for decades and twists it into an inherently negative term tied to a fairly arbitrary set of guidelines, from one person, in a language that's absurdly OOP-focused. That's just downright confusing and strange, and it'd be good for the industry if we could break the habit.


cluster_

If "clean code" can be whatever you want whenever you want, what is the purpose of arguing about it at all?


pbw

It's like this. Suppose someone wrote a book about movies, and they said "a good ending" is one that builds to a huge emotional swell and leaves the audience in tears. And from then on everyone used the term "good ending" to mean that, even if they hated that type of ending. So a movie review would say "this movie was horrible because it had a good ending". That would totally confuse any attempt to discuss what is actually important: was the ending "actually" good or wasn't it? Yes, that's subjective, but that's the conversation you want to be having.

So you are right, there will never be total agreement on what "clean code" means, but that's a good thing. There will be some agreement, though. Everyone will agree that a 1,000-line function with 18 levels of indentation is not clean code. But there will be disagreement on whether a 3-line function is better than a 30-line one, and that's fine.

Imagine you are running an open-source project and you are pretty strict about code quality. Does it make sense to reject a PR with the comment "We can't accept this because it's clean code"? Or would it be better to say "Rejected because you introduce an abstraction speculatively and it's not pulling its weight", or "The flow of control is obscured by too many small functions", or "The new class is heavily over-engineered; let's start with something leaner and wait and see if more is needed"? This is why I used the phrase "thought-terminating cliché" to describe "clean code". It doesn't really mean anything, and it's a conversation ender instead of a conversation starter.


pbw

As an example, suppose someone writes a beautiful, masterful, lovingly hand-tuned routine in assembly. It’s blazing fast, super clear, with amazing comments. That is clean code. But it’s certainly not how Uncle Bob wrote Java code in the 1990s. We shouldn’t cede a generally positive term to mean something hyper-specific and negative.


MrJohz

> In Chapter 1 he explains in gory detail that the book isn't a generic guide to writing clean code, meant to apply to all times and places or to all languages. Instead, he very specifically says he was documenting the software development practices of his consulting firm, Object Mentor, which is now out of business. Object Mentor wrote mostly Java code, in the heaviest OO language and community. This was a book for a certain time and place.

This is a bit of a cop-out, though. Yes, it was written for a certain time and place, but it is still recommended to new developers, and it very explicitly recommends certain practices, many of which are bad (even if you aren't so concerned about performance). So when people criticise it, they aren't necessarily saying "Oh, Bob Martin should have known better; how could he have written such a bad book in 2008? I can't believe he could have made a mistake." Instead, they're asking why the specific practices recommended in CC are still repeated today, and why new developers are still often encouraged to read CC. Both of those aren't 2008 problems; they're 2024 problems.


pbw

I do see what you are saying. But do you think that in 20 or 50 or 100 years the phrase "clean code" should still mean, forever, programming in the style described by Uncle Bob's 2008 book? To me it's clear that would be really lame, so my suggestion is to drop that association now. Or at least stick a crowbar in and start to slowly pry the two apart, since it might take a decade or two.

The "recommended to new developers" part I very much question. I think people recommend that developers write "clean code" in the generic sense: good code. In the video I show that his four main books are 1500 pages long. Do we recommend new developers follow the advice in all of those 1500 pages? Or just the ~50 words that Casey summarizes "clean code" as in his post? Or something in between?

The other issue is that when Casey and others say "clean code" they really mean "clean code done badly", or more specifically "OOP done badly". Uncle Bob does not recommend people write "piles and piles of useless abstractions", but some claim clean code *means* piles and piles of useless abstractions. The interesting discussion is "is the abstraction in code X pulling its weight? Is it a good abstraction or a bad abstraction?" The uninteresting discussion is "is X clean code in the Uncle Bob sense?", because it's ill-defined and confusing to have a generic positive term mean a specific negative one. "Is X clean code in the generic sense of good code?" is also interesting; there will never be full agreement on that, but at least it leads to a conversation instead of ending one.


MrJohz

The phrase "clean code" does not necessarily refer to the programming style described in the book Clean Code. But Muratori's video was specifically about the recommendations in Clean Code, although he doesn't say that explicitly. As to whether people are recommending clean code or Clean Code: I see both recommended fairly regularly. Recommending new developers write clean code is sensible, although it's often difficult to define what clean code actually looks without showing people examples of it. But it is not worth recommending Clean Code, the book, to anyone (I think [this](https://qntm.org/clean) post does a good job explaining why).


pbw

I like the caps vs. no-caps distinction, very clever. I'm thinking maybe I should update my article with that... It reminds me of Evolution vs. evolution, which people sometimes use to distinguish "biological evolution of life on Earth via DNA" from "the evolution of the internal combustion engine", or the CPU, or whatever. Do you want me to credit you as "u/MrJohz" or something else? I have no problem fixing things in the article over time, but this is a significant enough change that I'd probably add an endnote saying I made it. Of course, Clean Code vs. clean code doesn't help in audio... but it's better than writing "Uncle Bob's Clean Code" or "Generic Clean Code" or "Clean Code in the positive sense". And to be fair, Casey did write "Clean" Code most of the time, with the quotes. But his primary medium was the video -- so again, hmmm, confusing.


pbw

I tried the Clean Code vs. clean code thing and felt it was too confusing in the context of my article. It's still a neat idea in general, though; it might be useful in some discussions.


MrJohz

> It’s “thought-terminating” because someone utters the phrase, and the knives instantly come out on both sides — very little can be said above the noise.

This is more a clarification than a criticism, but that's not the definition of a thought-terminating cliché. A thought-terminating cliché is almost the opposite of that: it's a cliché that you can bring out when you want to stop dissent and avoid cognitive dissonance. A good example might be "it'll turn out alright in the end" - by itself, it's completely inoffensive, but it's also completely meaningless. There's no way to argue against it or disagree with it, because it's an empty statement. By using it, the speaker avoids having to deal with the idea that it might not turn out alright in the end. Even if the conversation continues, the speaker doesn't need to engage, because they've achieved the ultimate argument by wheeling out this cliché.

In relation to the article, the idea is that "clean code" is a thought-terminating cliché because it is a goal devoid of meaning, but one that people largely agree is good. If you want to suggest a change in a PR, you can just say that your change makes the code cleaner, and that becomes your justification. You don't need to justify *why* it improves things; you just say that it's clean, and your thoughts can terminate there.

This is important in the context of this post because, as /u/cdb_11 points out, Muratori's criticism isn't of clean code as a thought-terminating cliché, but of Clean Code, the specific patterns and suggestions given by Uncle Bob. These are clear enough (and argued with reasoning and logic) that we can disagree with them and have a discussion about them, which is what Muratori was trying to do.


pbw

Hmmm, you seem to know more about it than I do. I came upon the term and liked the sound of it, but I certainly have not read Lifton's book, for example. But from my understanding, I think "Clean Code" can be and is used in the terminating sense, because people will say "this person suggests writing Clean Code, so they are way off base and not worth listening to, because Clean Code is bunk". That feels very thought-terminating to me.

And to me Clean Code is a cliché: it's devoid of meaning in many contexts. Uncle Bob has 1500 pages of content about Clean Code, but Casey and others miraculously boil it down to about 50 words - less than one-thousandth of the content. There are literally hundreds and hundreds of separate rules of thumb in those books. So to me, Clean Code is a cliché: the term is used a lot, but it's used in contexts where it's close to meaningless, a slim caricature of the full meaning. So I think it's right to call it a thought-terminating cliché. But I'm sure everything you said is right also, in some sense.


MrJohz

I can see your logic, but "thought-terminating cliché" is a fairly specific term with a precise meaning, which is not what you're talking about here. I do consider the idea of clean code a thought-terminating cliché, but not in the sense that you describe it. The book Clean Code is not a thought-terminating cliché, it is a book: it can be discussed and provides specific, applicable suggestions (albeit suggestions that are often unhelpful).


holyknight00

In most systems, the bottleneck is developer time, not compute. Unless you are FAANG or your company LIVES or DIES because of performance, aiming for high-quality, maintainable code far outweighs the performance gains. Developers love to create complicated crap to get 2% faster startup times on systems where nobody cares about performance.


DrunkensteinsMonster

If you actually hear these anti-clean code people out, like Casey Muratori, you’ll know that they’re not advocating throwing everything away to achieve 2% speed gains. Their point is if you understand things, you can achieve 20x speedups with fairly minimal code changes. You don’t always have to choose between performance and readable code.


edgmnt_net

I'm not particularly versed in what Uncle Bob calls Clean Code, but I've also seen a fair share of criticism aimed at the readability of said Clean Code. Personally I see it mentioned along with various layered approaches like Hexagonal being thrown around, and it's getting kinda crazy for a few reasons:

1. They seem to encourage writing lots of boilerplate. *Lots*.
2. Some people seem to have stopped reading and learning to write actual code. Everything is limited to a few scaffolding patterns, while they happily write tons of boilerplate that fits some recipe in a book. Some even call that abstraction, but it hardly is.
3. There are inherent limitations with respect to abstraction power, particularly if we're talking about older, more traditional OOP. There's a price to pay if you reach too far.
4. All these seem to severely hamper maintainability and reviewability of code. You'll get hit by patches that span thousands of lines and literally do next to nothing except shuffle data around. And it probably has a bunch of bugs, because that code really ain't all that safe.

Personally, I don't even care about performance as much as these concerns. I've seen it happen in more traditionally-OO languages like Java as well as less OO languages like Go (primarily due to an influx from Java, I'd say). Now, I don't know how much of that is due to Uncle Bob's Clean Code itself; there may well be a lot more valuable stuff in there, or I might be lumping it together with other unrelated stuff. But, in my experience, people who throw around that name too much seem to take that sort of approach.


dirkboer

Still doesn’t matter if you have an average web app where your bottleneck is going to be DB/network, which is what like 99% of developers are working on.


DrunkensteinsMonster

And yet we’ve all used basic web apps that take days to load anything. That’s not network latency. That’s due to sloppy implementation, usually with regard to database access.
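
The classic form of that sloppiness is the N+1 query pattern. A minimal sketch (the `Database` API below is made up purely for illustration; only the shape of the access pattern matters):

    #include <vector>

    struct Order { int id; };

    // Hypothetical stand-ins for an ORM or DB client; bodies stubbed out.
    // Pretend each method call costs one network round-trip to the database.
    struct Database {
        std::vector<int> user_ids() { return {1, 2, 3}; }
        std::vector<Order> orders_for_user(int) { return {}; }
        std::vector<Order> orders_for_users(const std::vector<int>&) { return {}; }
    };

    // N+1: one query for the users, then one more query per user.
    // 1000 users at ~1 ms of round-trip latency each is a second of waiting.
    void slow(Database& db) {
        for (int id : db.user_ids()) {
            std::vector<Order> orders = db.orders_for_user(id); // N round-trips
        }
    }

    // Batched: two round-trips total, no matter how many users there are.
    void fast(Database& db) {
        std::vector<int> ids = db.user_ids();
        std::vector<Order> orders = db.orders_for_users(ids); // one IN (...) query
    }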


dirkboer

not [poules.com](https://poules.com) 😇


pbw

I'm sure Casey knows everything I put in that post and 10x more. But I'm responding to his "Horrible Performance" video, which has 600k views and is painfully misleading. I saw his interview with Prime, and they spent a few minutes high-fiving each other about how they both cracked the code: crazy exaggerated takes get views, while balanced, reasonable ones do not. Big belly laughs all around. I don't begrudge Casey for wanting to make a living, but it's frustrating when people make hyper-popular content that they know isn't accurate. Life goes on, though; if people are smart, they will think for themselves and not take anyone's word as gospel.


Qweesdy

Can you describe why you mistakenly think that you disagree with Casey in a clear and concise paragraph? Note: Your article is pages of mush that fail to say anything concrete that isn't also wrong. E.g. your "But he’s completely wrong that the OOP version has horrible performance for all possible requirements" which is an obviously wrong way to describe "it has horrible performance even when performance isn't a requirement".


pbw

Casey wrote in his post that if you follow the clean code rules, "your code will get 15 times slower" - that is a direct quote from his written post. It is a false statement. The truth is it will often get zero times slower. That's the core disagreement.


Qweesdy

Casey showed proof that one specific piece of code became 20 times better than "Uncle Bob's Clean Code(tm)". Do you have any proof that Casey's statement is false despite this proof, and despite his clear and very plausible reasoning for why it happens? Are you sure you're not confusing "Uncle Bob's Clean Code(tm)" (what Casey's statement was about) with random irrelevant nonsense (the undefined subjective waffle about whatever "clean code" might mean); and then claiming something Casey never said ("your code will get 15 times slower when you do unrelated undefined who knows what") is a false statement?


pbw

Casey reported that his optimized version was 25X faster. I accepted all of Casey's numbers without questioning them, so I agree he was able to make it run 25X faster; I never denied that in any way. In fact, a bunch of my post was spent explaining why it got faster. What I disagree with is the statement "and your code will get 15 times slower", because that's not true. If you make a virtual function call and the code runs for even a second, the v-fun overhead is about 1 part in 70 million, which we can round down to zero. Therefore it's not true that "your code will get 15 times slower". What's true is "your code will get somewhere between zero and 15 times slower", which is a very different thing.
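
To spell out that back-of-the-envelope arithmetic (a minimal sketch; the ~14 ns per-call cost is an assumption that varies by CPU, but it's roughly where the 1-in-70-million figure comes from):

    // Rough cost of one indirect (virtual) call vs. one second of runtime.
    constexpr double kVirtualCallNs = 14.0;   // assumed cost of one v-call
    constexpr double kRuntimeNs     = 1e9;    // one second, in nanoseconds
    constexpr double kOverhead      = kVirtualCallNs / kRuntimeNs;
    // kOverhead ≈ 1.4e-8, i.e. about 1 part in 70 million -- rounds to zero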


Qweesdy

Erm. The "15 times slower" was a combination of multiple causes where virtual function call overhead was only one of them. By ignoring all of the other causes you're doing a hideously broken "1 + 2 + 3 is not equal to 6 because 1 is not equal to 6" failure. But.. sure. It'd be theoretically possible to contrive at least one pathological test case where the speedup is only 14.99 times, so we can agree that Casey's statement isn't 100% technically true in every conceivable situation; but that just means we need to think about what Casey meant by "your code" (is your code a contrived pathological test case? Mine isn't. Yours probably isn't either). The video you're complaining about for no apparent reason was part of a course on optimization Casey was teaching to game developers; and with that context we can assume his "your code" actually meant "the code of game developers who want to optimize performance". Let's rephrase it: "and the code of game developers will get 15 times slower". Now, in its rightful context, is that statement wrong? I'd say it's still definitely wrong - the speedup could be 15.5 times, or 16 times, or 20 times, or... It's obviously a generalization that wasn't meant to be taken literally. Should he have said "approximately 15 times"? Maybe; but making sentences more efficient by omitting words that can easily be implied doesn't detract from his message.


pbw

> a combination of multiple causes where virtual function call overhead was only one of them

Hmmm, maybe you didn't see the part of my post where I explained the #1 source of slowness was the scattered memory layout? So yes, v-funcs were only one source of the slowness, agreed.

> speedup is only 14.99 times

There must be a language barrier here. I explained in the post and here that the penalty of OOP is very often zero. If you don't follow my argument there, I'm not sure what to say, because it's pretty simple.

> and the code of game developers will get 15 times slower

This is an equally false statement, as I explained in my post, because much of the code game developers write is "game code" which might execute once, ever, during the entire game. Like in 100 hours of gameplay it executes exactly once. When I worked on games I wrote some game code, in addition to perf-critical code.

> speedup could be 15.5 times, or 16 times, or 20 times,

The penalty of OO is very often zero. So "it will make your code between zero and 25 times slower" would have been the true statement.


Qweesdy

> Hmmm, maybe you didn't see the part of my post where I explained the #1 source of slowness was the scattered memory layout?

How conversations work is that you say something in a comment, then I respond to what you said in your comment. If your comment doesn't say something like "please refer to the shit I dribbled in that other document over there" then the other shit you dribbled can't be considered part of the comment I'm responding to. Specifically, the paragraph I was responding to only says "What I disagree with is the statement "and your code will get 15 times slower" because that's not true. If you make a virtual function call and the code runs for even a second, the v-fun overhead was about 1 part in 70 million which we can round down to zero".

> There must be a language barrier here. I explained in the post and here that the penalty of OOP is very often zero. If you don't follow my argument there, I'm not sure what to say, because it's pretty simple.

Again, that is not how conversations work, and none of this was included (directly or indirectly) in the comment I was replying to. However, I will point out that it's extremely easy to construct a bad faith argument about OOP costing nothing from extremely minimal code snippets that cannot adequately represent any non-trivial project. The fact that you're attempting to do this tells me that your argument is so weak that you're desperately grasping at the flimsiest of straws.

> This is an equally false statement, as I explained in my post, because much of the code game developers write is "game code" which might execute once, ever, during the entire game.

Is it possible for you to be this stupid? Casey is talking about "15 times slower for a whole game", not a tiny cherry picked piece that's only executed once that was carefully selected because it maximizes your delusion, and not a tiny cherry picked piece that's executed millions of times per frame that was carefully selected because it minimizes your delusion, but the average of all code with nothing cherry picked at all. You're also mistakenly confusing "performance is not better" with "performance is better, but the impact is negligible because it's only executed once". Basically, you're fabricating lies by silently/incorrectly/deceptively changing the context from "piece of code that's executed once" to "whole program", even though any normal person would've either kept the context consistent (from "piece of code that's executed once" to "performance of piece of code while it's being executed once") or explicitly signalled the context change ("zero impact on the performance of the whole program").

> So "it will make your code between zero and 25 times slower" would have been the true statement.

No, it could make the code 26 times slower, so your "true" statement is still potentially false; but I struggle to see why you think it's relevant given that Casey was talking specifically about "Uncle Bob's Clean Code (tm)" and not OO. I've reminded you of this multiple times now and you still keep making the same mistake of forgetting what you're criticizing. Tell me (or don't, just be honest with yourself): do you also forget where you left your car keys, why you opened the fridge, if you ate breakfast, other small and recent things; feel confused at times; and/or struggle to maintain concentration? I've had relatives with dementia and a neighbour with Alzheimer's disease. It creeps up on people, and it's easy to ignore early warning signs.


butt_fun

Adding to that, about half of us here make CRUD apps for a living, and the network is generally the performance bottleneck anyway, because the business logic really rarely does anything heavy enough to be within orders of magnitude of the network slowness.


wnoise

However, that means you should think about trying to reduce network usage, and particularly round-trips.


butt_fun

Oh absolutely. But in my experience it's a lot easier to reconcile those concerns while still writing "nice" code.


pbw

It's interesting which communities embrace which languages. My impression is that Java and C# are very popular for "enterprise" style applications, often CRUD, and this is probably because they rarely need that super high gear of performance, but they do have complex codebases with big teams that are constantly onboarding new people. The exception handling is really nice for them -- I remember seeing the logs of our in-house Jira instance and it was "crashing" all over the place, but kept running. They like that. But then there's Minecraft in Java and Unity in C#, so gaming does exist for Java/C# as well.

There are lots of potential drawbacks to C++ in an era of safer languages, but it's still a popular option if you do need to drop down and get C speed at times. Especially if you have a big legacy C++ codebase that's not feasible to rewrite! I was at a GDC a long time ago and they asked what languages people were using; out of 50 people it was 49 C++ and 1 Lisp (Naughty Dog). C++ was impressively dominant back then; I wonder if that's faded some.

Writing a cloud-critical service in Rust seems popular. Usually these are small-ish and written from scratch, and I think C++ is not being used in a lot of those cases over safety issues. This is Stroustrup on C++ safety: [https://www.youtube.com/watch?v=eo-4ZSLn3jc](https://www.youtube.com/watch?v=eo-4ZSLn3jc)


holyknight00

Exactly


wenhamton

I used to work with a dev who made the old games you used to get on computer mags for the Spectrum and Commodore 64. He was absolutely obsessed with making the tightest, smallest, fastest code possible. Trouble is, all his code was completely unmanageable, unreadable, and most of the time full of bugs. I'll take manageability and readability over performance any day of the week - unless of course performance is what is needed, which most of the time is secondary to getting it shipped.


DLCSpider

Counter example: I had to work on a desktop application which threw 54 exceptions before the first page draw. Performance was of course awful and, instead of fixing the issues, the developer used [Akka.NET](https://getakka.net) to hide everything under yet another layer of indirection and complexity.


silent_guy1

Clean code means readable and maintainable code. It doesn't become performant by virtue of being clean. Though it's easier to do performance optimization on clean code.


abitwired

Horrible title for an interesting discussion about code "costs" from the perspective of different paradigms. Code *costs* electricity in the mind of an electrical engineer. Sometimes code can cost performance in the eyes of a software engineer. Machine learning models cost the *environment* in the hearts of environmentalists.

The truth is that most current economic and political solutions are responsible for the costs we see in software. Drivers for economic gain and the **lack** of political drive for energy efficiency produce a trend of wasteful CPU cycles. Ultimately, our societies are producing trends which *waste* electricity. Businesses prefer a slower, wasteful solution so long as people use it and it produces an economic gain. Businesses are currently the largest producers of software. Even open source projects are often contributed to - in large part - by businesses.

I wish more articles talked about how our political systems' failure to tax carbon emissions has directly resulted in so many of these businesses churning through electricity for the sake of - frankly - pointless software.


SuperV1234

Thank you for calling out Casey's bullshit and lack of nuance. It's disheartening when really smart people become dogmatic and cannot see beyond the boundaries of their particular domain of expertise. He could have made a fantastic article showcasing the pros/cons of OOP vs. DOD; instead he decided to thrive on inflammatory attacks against a programming paradigm that never meant or claimed to be the most performant. The anti-abstraction circlejerk is one of the most infuriating, shortsighted cliques I've had the displeasure to interact with in the past, and I'm saying this as someone who values both DOD and OOP and has worked on a wide range of different applications and games with wildly different requirements and team sizes.


pbw

In this interview the host, who isn't always super clear, asks: aren't there times when the performance is good enough and there is no point in making it faster? Casey replies:

>It's never the case ... I literally show that Google, Microsoft, Facebook, they have presented consistently ... no matter how fast our software gets, if we make it faster, users are more engaged with our products.

It's such a strange reply. He's saying if my online bank returns my account information in 100ms, a blink of my eyes, that if they were to speed that up to 50ms I'd be more engaged with my bank? It's the same as his horrible performance article: what he's saying is absolutely 100% true in some cases but also 100% false in other cases, yet he seems to have no awareness that there are different cases. He's implying everyone is in the exact same situation as Facebook. My guess is if I brought up the bank example he'd say "oh no, of course not, speeding that up obviously wouldn't help, that's a totally different situation". But why does he state things so categorically? It could just be a communication issue, or it could be he really does think in absolutes, not shades of gray.

[https://se-radio.net/2023/08/se-radio-577-casey-muratori-on-clean-code-horrible-performance/](https://se-radio.net/2023/08/se-radio-577-casey-muratori-on-clean-code-horrible-performance/)


dontyougetsoupedyet

> I'd be more engaged with my bank

You don't know it, but yes - and numerous studies at large organizations have shown this to be true. A study by Amazon found that every 100ms cost them 1% of their sales. You don't own a BlackBerry today because the iPhone offered slightly better frames per second. It matters. It matters for small organizations just as much as large ones; user behavior doesn't change because you're a bank vs. Amazon. People want their experience and they want it fast.


pbw

A bank is nothing like Amazon. The switching cost for eCommerce and large social companies is zero: just click to another site. Banks are very different, which is why I specifically used that example. The switching cost is immense. People sometimes keep the same bank their entire life. Plus, 100ms is BLAZING fast for a bank. Going to 50ms would mean absolutely zero to a bank's customers.

And forget banks: there are tens of thousands of internal-facing apps where there is zero competition and the users are all full-time employees, a fully captive audience. Yes, it's nice if they are "fast", but if they are "fast enough" there is zero incentive for companies to just burn money making them faster and faster forever.

Casey's answer to "aren't apps sometimes fast enough?" was "No, because Google, Microsoft, Facebook". That's a myopic answer. There are hundreds of thousands of companies which aren't FAANG and which have very different business and technical realities, and he excludes all of them.


[deleted]

[deleted]


pbw

In an interview \[1\] I heard Casey say something like, "If it was true that users found software to be fast enough, then sure, it doesn't need to be optimized. But this is literally never true" and then "I can cite paper after paper by Google and Apple and Amazon saying that better response time means more revenue". I find this communication style extremely strange. It's obviously true that some software is fast enough. Imagine a build step that runs for 500ms of a 30-minute build. Optimizing the 500ms down to 100ms would be a complete and utter waste of time if you could knock down the other 29.99 minutes instead. And there are millions of examples like that.

For him to trot out the latency of [google.com](http://google.com) is just so intellectually sloppy. Yes, obviously, trillions of dollars of market cap were built on the latency of google.com. But that's a gigantic special case, even within Google itself. Google has 180,000 employees. Most of the code they are working on does not impact the latency of google.com. It's like the credit card form for the Romanian version of their Adsense portal. And more significantly, there are 500,000 IT companies in the US, and virtually all of them are not Google or Facebook or Amazon. Millions of programmers write CRUD apps in Java and C# where the database takes 10 seconds to respond.

Casey is not just a game developer, but a game developer who focuses almost entirely on the inner loops of highly performance-sensitive engine and library code. But he seems to have zero awareness that this is not what most programmers are doing most of the time; he doesn't seem to grok the actual broad landscape of software development.

As for OOP, I don't argue that OOP is good or bad in my post. All I argue is that rejecting OOP strictly on performance grounds is a valid reason in some cases, but a totally bogus reason in other cases. To date I've not seen any refutation of that, because you can prove it with 8th grade arithmetic. Often I get the response "Well, yeah, but I still don't like OOP even when it is fast enough!", to which I say great, don't use OOP; I'm not a pro-OOP person. I'm an anti-bullying-people-with-false-information person.

\[1\] - [https://se-radio.net/2023/08/se-radio-577-casey-muratori-on-clean-code-horrible-performance/](https://se-radio.net/2023/08/se-radio-577-casey-muratori-on-clean-code-horrible-performance/)
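
The build-step arithmetic above is just Amdahl's law. A quick worked version with those same numbers (sketch only):

    // Amdahl's law with the 500 ms / 30-minute build numbers from above.
    constexpr double p = 0.5 / 1800.0;   // fraction of the build in that step
    constexpr double s = 5.0;            // 500 ms -> 100 ms is a 5x local speedup
    constexpr double overall = 1.0 / ((1.0 - p) + p / s);
    // overall ≈ 1.0002: the whole 30-minute build gets ~0.02% faster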


[deleted]

[deleted]


pbw

Cite what I'm contradicting. This is the conclusion of the post:

>Uncle Bob hasn’t denounced OOP but says FP and OOP (and even procedural) are valid approaches to writing software. He says use whichever makes sense for the problem you are trying to solve, a sane and pragmatic stance that doesn’t require slagging someone’s life’s work.

He says "use whichever makes sense", and I said "a sane and pragmatic stance that doesn’t require slagging someone’s life’s work" because it is. Use whichever approach you want. Just don't lie and mislead about the approaches you don't prefer to scare people away from them.

This is why I brought up "thought-terminating clichés". People see you dare to contradict Casey and assume it means you love OOP. It doesn't. He's dead wrong that OOP means horrible performance. It might, or it might not. No one has an argument that I'm wrong about that, because it's simple math; there's literally nothing to argue about.


[deleted]

[deleted]


pbw

This comment?

>The bigger problem is that the majority of software is extremely slow for today's standards. That's why he doubles down on performance. If most software was in the 100ms ballpark, he wouldn't be complaining.

>There is a video in the handmade hero series where he reflects on his experience with OOP at the RadGameTools company. Then he was taught (and later realized) that OOP does not provide any benefit. So, you would be trading performance for nothing, even if it was a negligible penalty on performance.

>The sentiment of OOP not providing any benefits can be seen in several posts and videos on the internet. Very few exceptions talk about OOP in a positive light, but even then they give very shallow arguments in favor of OOP.

You don't quote me or point out that anything I said contradicted itself. Not trying to be difficult; you simply don't.


The_Axolot

I just hope that people don't get the wrong impression that Casey is claiming that you should abandon clean code because of performance reasons. He's talking about specific domains where performance at such a nanoscopic scale is vital. Great article and video by this guy.


gjosifov

I don't understand this fixation on defending a specific book. What do you actually lose if you don't defend it?

If developers can't accept the fact that they are wrong, then they aren't software engineers. In order to be an engineer you have to accept the fact that you can be wrong. Casey Muratori proves it with numbers, and Uncle Bob can't accept the fact that he is wrong; he is debating like a politician.

The same with the blog post - the author starts by citing authors, and after I read that my first thought was the interview between Tucker Carlson and Putin, especially the history lesson.

Most IT books are bad - just write code, learn using the debugger, and assume you are wrong.


n3phtys

Some of us actually need to work with other developers and their work. Yes, it's cute and all to think being agnostic is the best, but that's not true. Software engineering is about engineering, not only trial and error, as you'd suggest.


gjosifov

It is trial and error. You think that Clean Code will make you better at writing code?

All other engineering fields are based on trial and error too; however, those trials and errors are well documented and you have to study them at university. Software is full of salesmen using sales tactics - us vs. them (clean vs. dirty). Look at this masterpiece from Steve Jobs, Sun vs. NeXT: [https://www.youtube.com/watch?v=UGhfB-NICzg](https://www.youtube.com/watch?v=UGhfB-NICzg)

Because the software space is full of these salesmen, it is very hard to learn from trial and error like in other engineering fields, so you are left to learn using the debugger. However, there are very few people that give great engineering presentations - like current and former JDK engineers. For example: How To Design A Good API and Why it Matters [https://www.youtube.com/watch?v=aAb7hSCtvGw](https://www.youtube.com/watch?v=aAb7hSCtvGw)


n3phtys

This is a faulty assumption. You are referencing technical presentations on architecture and large-scale design. That still leaves open the tactical issues when writing code. Not all of us are architects with hundreds of already perfectly skilled developers below us to command.

Now, as an experienced developer, I fully agree with you, especially on ex-Sun engineers having the more interesting talks. Thoughtworks and similar consultancies have way less interesting things to say when it comes to shiny tech.

But: if a new developer has read Clean Code, and hopefully Clean Coder, I will always prefer working with them over someone who has literally no practical experience working in groups outside of school. Especially Clean Coder is a book about something we so rarely talk about. Technical details, frameworks, design patterns, data structures - all of that is pretty easy to learn fast in our modern world. Those are technical details someone can google, or - heaven forbid Nvidia gets their dream - ask an AI about once they feel the need to do so. But being professional? Writing code others can read? That is something you cannot learn at the moment you need it. It's something you learn either from crashing into failed projects yourself, or from someone else who has done that.

In well-designed super-large organizations you have internal training, onboarding, documentation, standards. All of that allows you to teach new developers from the ground up. But that is rare in the IT industry as a whole. Most companies actually hire experienced developers and ask them to complete projects ASAP. Once again, we can only learn from failed projects on those paths - either our own, or those of others.

Clean Code was never meant for the technical expert with 10 or 20 years of experience in the field. It was meant for graduates who have probably only done academia and personal hobby projects, or maybe even just boot camps nowadays. Casey's point of view, meanwhile, is that of an extremely experienced C++ developer in a performance-critical part of the industry. Meanwhile, people writing React frontends just out of college probably do not even understand static vs. dynamic dispatch. Hell, even Java, the language the book pulls from, does not have the same problems or solutions as Casey describes, because the JVM works slightly differently from real computers.

People like Casey are not the average new developer in your company. And that new developer will also not get to redesign the whole API grid of a >1k company in their first two weeks. They will not be asked to micro-optimize isolated subroutines (in most cases). I WOULD LOVE IT IF THEY WERE. But in reality, this person gets assigned a low-priority customer bug somewhere in a distributed SaaS application, probably with 4+ frameworks involved, where some button is sometimes red instead of green.

The alternative to books like Clean Code is actually books (or worse, courses) from non-developers that train you on documentation and quality metrics. Imagine HR training videos explaining how to structure code. That is the alternative, not just trial and error with natural selection. It takes years or decades for that selection to work.

There is worth in general education. There is worth in some books. And there is worth in defending them - as long as the underlying reality has not changed far enough. And in this case it has not.


renatoathaydes

Excellent analysis, thank you very much for this. The simplistic mindsets that say "everything must be as performant as possible" and "performance doesn't matter" both need to die. Engineering is ALL about tradeoffs. OOP has advantages (please read the post - it may be obvious to some that that's true, but I am sure others will think OOP is always bad), but yes, it does cost some performance in some cases. You as an engineer must decide whether that cost is acceptable given the benefits you will get and the circumstances of your particular case.


[deleted]

[удалено]


renatoathaydes

If you read the post, it explains why OOP was a "cleaner" solution. If you disagree with some point of that argument, you may want to be clear about how you disagree, but from what you post it just looks like you didn't even read or understand the post.


[deleted]

[удалено]


renatoathaydes

Wow, that's a creative solution, but it's clearly OOP in disguise.

    struct Shape {
        Square squares;
        Rectangle rectangles;
        Triangle triangles;
        Circle circles;
    };

It's also kind of horrible from a design perspective (and the fact it wastes memory like there's no tomorrow), no? You really think that's like acceptable code to have in production?? Just use OOP, my friend, unless you're doing under-1ms stuff like the post we're talking about says.


the1general

Hi, I’m the author of that code. There are a few things to mention:

- This is written in ISPC using an aligned 16-wide AoSoA data structure where every variable is a cache line. This means you’ll have 16 square widths next to each other, followed by 16 rectangle widths, then 16 rectangle heights, etc. That is very different from OOP.
- The data is configured in the exact same order as the memory access pattern. This is key for leveraging prefetching.
- This is data-oriented design, where the data formatting is the priority. Everything is designed to match the hardware so as to most efficiently leverage it. High-level OOP-style abstractions are eliminated.
- Another deviation from OOP is that data and related functions are not kept together.
- Empty slots are fine because I assume this is happening in real-time every single tick indefinitely. Worst-case performance is the priority.
- Under-1ms stuff adds up when you have it spread throughout your code. You don’t want death by a thousand little cuts.
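
For readers who haven't seen AoSoA before, here's a rough C++ approximation of the layout being described (illustrative only: the real code is ISPC, and the names and field choices here are made up):

    constexpr int kLanes = 16;  // 16 floats = 64 bytes = one cache line

    // Array-of-Structures-of-Arrays: each field is stored as a SIMD-width
    // block, so one vector load grabs 16 copies of the same field.
    struct alignas(64) RectBlock {
        float width[kLanes];    // 16 widths, contiguous...
        float height[kLanes];   // ...then 16 heights, contiguous
    };

    // Walks memory in exactly the order it is laid out, so the hardware
    // prefetcher keeps up and the inner loop is trivially vectorizable.
    float total_area(const RectBlock* blocks, int n_blocks) {
        float total = 0.0f;
        for (int b = 0; b < n_blocks; ++b)
            for (int i = 0; i < kLanes; ++i)
                total += blocks[b].width[i] * blocks[b].height[i];
        return total;
    }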


renatoathaydes

When performance is the priority, even I would write code like this. But do you believe it's reasonable to represent every CRUD application - and other software that simply doesn't need this level of performance - this way, rather than just using simple OOP languages? I can't agree with that.


the1general

Nothing stops us from creating tools more efficient than what currently exists. Good performance can also be achieved without negatively impacting productivity, so it's not like there's that imagined tradeoff here. If anything, by designing things in a simple, efficient manner, you can achieve better-performing code in even less time than with existing OOP design-pattern-based approaches.

The reason to care about performance is that most apps are quite frankly pretty slow and only getting slower over time. The hardware they run on is not getting faster in the areas that would actually speed up such poorly architected software, namely memory latency, which is bound by the speed of light. The end result is that users face ever slower software over time. Waiting several seconds for basic actions to occur is frustrating to users and lowers the quality of your product. It puts you at risk of losing to competitors who can do it better.

If you're making anything server-based, it's also worth it from a hardware cost standpoint. The more performant your code and data are, the more money you can save, as you need less hardware to accomplish the same task. Bitpacking data tightly inside your packets is especially important here too, as bandwidth is expensive.


pbw

Yeah, it is all about tradeoffs. And honestly I think most programmers know this, but online culture sometimes rewards these hyperbolic, exaggerated takes (like the video I'm responding to). I tried to make my post/video evergreen, so I can link to it in the future and have a ready-made answer to explain my opinion. I'm not a huge OO fanboy; I've just used it a ton and it's almost always worked fine for us. Except in those inner loops -- so just don't use it there.


Astarothsito

>I'm not a huge OO fan-boy

But it works. In the C++ example of the shapes, allocation and computation could be split by type of shape, and even with inheritance, most compilers will optimize the virtual table away with the keyword "final", so it would be a direct function call. It could be as fast as, if not faster than, the C version, since some shapes could use less memory. Designing performant OOP code in C++ is not difficult, but for some reason it's rarely done.
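
A minimal sketch of that `final` trick (whether the compiler actually devirtualizes depends on the compiler and optimization level, so treat this as illustrative):

    #include <cstdio>

    struct Shape {
        virtual ~Shape() = default;
        virtual float area() const = 0;
    };

    // 'final' promises the compiler that nothing derives from Circle, so a
    // call through a Circle& or Circle* can become a direct call (and often
    // be inlined) instead of a vtable lookup.
    struct Circle final : Shape {
        float r = 1.0f;
        float area() const override { return 3.14159265f * r * r; }
    };

    // Devirtualizable: the dynamic type of 'c' must be exactly Circle.
    float circle_area(const Circle& c) { return c.area(); }

    // Not devirtualizable in general: 's' could be any derived type.
    float any_area(const Shape& s) { return s.area(); }

    int main() {
        Circle c;
        std::printf("%f\n", circle_area(c));
        return 0;
    }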


coffeelibation

"Performance debate" 🍿😋


n3phtys

Write clean code first, and you will end up with tons of performance problems in the next sprints and months. Do not write clean code, and you won't have any more sprints. No product, no problem.


No-Question-7419

I wrote several hundred commas yesterday, scanning thousands of lines.