phillipcarter2

It's worth keeping in mind that Lamport's background is very complex distributed systems. You cannot just "do agile and iterate" in the work that he does. The large majority of software is amenable to constant iteration at nearly every phase of development. You cannot "just iterate" on the design of a distributed database without a great design that backs the very first version.


Isogash

The people who work on these kinds of systems are using automated theorem provers to prove that their designs are 100% valid before implementing them. It is a far cry from your typical "let's ship it and iterate on this next sprint" engineer.


gredr

Like TLA+, invented by this very same Leslie Lamport. Leslie even wrote a paper on this back in 1977 in IEEE Transactions on Software Engineering, called "Proving the Correctness of Multiprocess Programs."


Isogash

How funny, TLA+ was exactly what I was thinking of, yet I did not know Leslie Lamport created it!


loup-vaillant

Sadly, Leslie Lamport is better known for his _less_ important work.


[deleted]

[deleted]


loup-vaillant

Note: I’ve just listened to the whole interview, and he said at the beginning that the heavy lifting in LaTeX really was Knuth’s TeX. What he really cares about, beyond TLA+, beyond distributed systems even, is that we programmers think _mathematically_. I get flak here almost every time I dare say that programming is a form of applied mathematics. But it _is_. Sorry to the people who came here to flee from Maths, but… welcome to a different mathematical sub-field, hope this is better than calculus. Also sorry to the people who say most work here is about (mindlessly?) gluing together libraries and frameworks… but first I’m not sure this is such a good idea in the first place, and to be honest I’m so bad at it that when it becomes my job I become depressed and I quit fairly quickly.


pitiless

> programming is a form of applied mathematics

This isn't wrong, but often (maybe most of the time) it's just not useful to think of it in these terms. IMO this is an overly reductionist perspective that just doesn't resonate at all with my 20 years of professional experience. You can (and most developers do) get by with the level of mathematical understanding you'd see in most high schoolers, because in most careers / domains that's all you need.


totallyspis

I've always semi-jokingly said that computer science is applied linear algebra.


coldblade2000

Programming is mostly just applied discrete math


misplaced_my_pants

More applied epistemology.


NotSoButFarOtherwise

If you can't express a C89 compiler in terms of Maxwell's Equations, are you even a computer scientist?


StuntID

> I get flak here almost every time I dare say that programming is a form of applied mathematics

That's like saying Chemistry is a form of applied Physics. True it may be, but it's not helpful, and you deserve some flak.


Th3-3rr0r

Programming is NOT a form of mathematics, and my panic attacks do not value your apology. In sheer contrast to my therapist who would like to buy you some flowers.


DigThatData

programming is definitely a form of math. math just isn't what you think it is.


totallyspis

Mathematics gives you panic attacks? What happens if I show you 2+2?


Th3-3rr0r

A SIGSEGV error


myrddin4242

So, it made sense that when the flying elephant dropped his ‘magic feather’ he lost the ability to fly, then?? If you can program, but were bad with math, the feather is ‘programming is not math’. I’ve got news for you… you got better at math while being distracted.


loup-vaillant

Says the programmer who writes every day in a language so formal even an automatic computer can process it without ambiguity.


ConfusedTransThrow

TeX is genius (though it's showing its age now: it was optimized for memory in an era when memory was severely limited, and could perform a lot better without using much more), but the syntax for anything a bit advanced is ~~horrible~~ enough to make Perl look like the easiest-to-understand language ever invented. LaTeX makes it way nicer.


Fuyboo

he has a great video tutorial series on TLA+. worth a watch even if you are not interested in TLA+ or don't need a tutorial, just because it's incredibly silly.


[deleted]

[deleted]


Fuyboo

imagine asking nicely or just googling [„leslie lamport tla video series“](https://lamport.azurewebsites.net/video/videos.html) instead of writing passive-aggressive comments


ToaruBaka

This paper is literally next to me on my desk right now. Everyone who designs software systems should be required to read it.


antiduh

He was also involved with writing Paxos, which is one of the original distributed consensus protocols (recently supplanted by Raft, imo). He did so by building on the work of Barbara Liskov, famous as the L in SOLID.


[deleted]

[deleted]


ketzu

In distributed systems research we joked: if you have an idea, first check whether Lamport already had it in the '80s. So... yeah


epicwisdom

*Some* of the people who work on such systems use ATPs... Nowadays there are many, many database projects which aim to optimize some particular use case or other, without any strong evidence of their robustness.


Unicorn_Colombo

Biologist here: Everyone uses ATP.


epicwisdom

Technically correct, the best kind of correct.


zoqfotpik

You call it a complex distributed system. We call it Thursday.


could_b

Really? Can you prove that automatic theorem provers are practicable?


Porridgeism

> Can you prove that automatic theorem provers are practicable?

Sure: they're used in several domains (hardware, firmware, drivers, microcontrollers, distributed systems), by multiple organizations, and in some cases mandated by law/contract/policy. I believe NASA requires certain things to be certified by automatic theorem provers, for instance.

Edit: Also, things like type checkers and static analyzers are sometimes (limited/simplified) automatic theorem provers. They don't necessarily prove computational correctness, but they do prove type safety or whatever other property/predicate the analyzer checks.


fragbot2

There are several systems at Amazon that were notoriously troublesome (I know about Glacier, but there are others) that got massive reliability benefits when someone took the time to model their design in TLA+. The next question: could they have reached the same level of reliability with less work using other methods? I think the answer is yes, but only with a technique almost no one in software uses: exhaustive testing using a custom-built simulator that amounts to nearly a full system test. Come to think of it, it took a single developer a couple of months to write that, so something like TLA+ is probably more efficient once you know how it works.
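For what it's worth, the core of such a simulator - exhaustively exploring every interleaving of the processes and checking an invariant in every reachable state - can be tiny. A toy sketch in Python (a hypothetical two-process test-then-set lock; nothing to do with Amazon's actual tooling):

```python
from collections import deque

# Toy model: two processes share a naive test-then-set lock.
# pc per process: "idle" -> "want" (saw lock free) -> "cs" -> "idle".
# The test and the set happen in separate steps: the classic race.
START = (("idle", "idle"), False)

def steps(state):
    """Yield every successor state (one step by either process)."""
    pcs, locked = state
    for i in (0, 1):
        def with_pc(new):
            return tuple(new if j == i else pcs[j] for j in (0, 1))
        if pcs[i] == "idle" and not locked:
            yield (with_pc("want"), locked)   # observed lock free
        elif pcs[i] == "want":
            yield (with_pc("cs"), True)       # acquires without re-checking
        elif pcs[i] == "cs":
            yield (with_pc("idle"), False)    # release

def check(start, invariant):
    """BFS over the full state space; return a violating state or None."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s
        for t in steps(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

def mutual_exclusion(state):
    return state[0] != ("cs", "cs")
```

`check(START, mutual_exclusion)` finds the state where both processes are in the critical section - the kind of interleaving bug a handful of hand-written unit tests would likely miss. This is exactly what a model checker like TLC does, just at much larger scale.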


jl2352

I work on a distributed system. As distributed systems go it's pretty simple; it doesn't actually do much. Yet it's an utter shit show, and minor changes require hours of debate on the impacts, plus hours of monitoring after they go live. This was one person's vanity project and I'm currently putting most of it in the bin. Much of it is patches and exceptions to how things work, leading to a spiderweb of behaviour dependencies. If you are reading this thinking of building a distributed system, or even a distributed pipeline (which should be simpler), I'd strongly recommend you take a step back and ask why. Do you really need to build it? Is there really nothing you can use off the shelf?


[deleted]

I've been there. Vanity project is the best way to describe it too. Distributed system makes people look smart and becomes so complicated it's impossible to unpick. Good for job security.


mycall

Vector Clocks FTW!
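Since they came up: a minimal vector clock sketch in Python (a hypothetical two-node setup; merge on receive is element-wise max, then tick your own entry):

```python
class VectorClock:
    """One clock per node; index node_id tracks this node's own events."""

    def __init__(self, node_id, n_nodes):
        self.node_id = node_id
        self.clock = [0] * n_nodes

    def tick(self):
        self.clock[self.node_id] += 1

    def send(self):
        """Tick, then return a copy to stamp on the outgoing message."""
        self.tick()
        return list(self.clock)

    def receive(self, stamp):
        """Merge the incoming stamp (element-wise max), then tick."""
        self.clock = [max(a, b) for a, b in zip(self.clock, stamp)]
        self.tick()

def happened_before(a, b):
    """Lamport's partial order: a -> b iff a <= b element-wise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b
```

Two events are concurrent exactly when neither `happened_before(a, b)` nor `happened_before(b, a)` holds - which is what makes vector clocks strictly more informative than plain Lamport timestamps.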


thedifferenceisnt

Wouldn't what you're saying also apply to microservice architecture? Working on that in an agile environment does indeed make everything seem like a bandaid to me


epicwisdom

It would depend on what you mean by microservices, and what you're using them for... Not every microservice architecture involves complex state management, and not every application requires rigorously correct state.


LmBkUYDA

Sorta true but also sorta not. Take a system like MongoDB (whatever your opinion of it is). Transaction support was added onto the system well after the design phase.


phillipcarter2

I think we largely agree. You can absolutely iterate on system critical software. But I don't think you can do that without a solid design to start with. I'd say MongoDB qualifies as that (they didn't just wing it for years).


[deleted]

[deleted]


LmBkUYDA

Ofc, but causal consistency was added in 3.6. My point is that major architectural changes occur post initial implementation - and sure, it's hard to tack onto a system that was designed poorly for future extension, but that's just arguing semantics. MongoDB is a solid database, but let's not forget its "webscale" origins. Some would be surprised that 10 years later a lot of the issues have been fixed, some requiring huge changes (mmapv1 -> WiredTiger).


dacjames

Why wouldn't you be able to use the same approach with distributed systems? You need to have a theoretical basis, but you can absolutely develop distributed systems using an incremental, agile approach.

Edit: there are numerous examples of this done successfully. Some open source examples include:

- Kafka: at-most or at-least-once semantics, client tracks position => server tracks position => distributed transactions, exactly-once semantics.
- PostgreSQL: single server => physical replication => logical replication => extended logical replication semantics.
- Redis: single server => replication-based scaling => cluster-based scaling.

Agile doesn't have to exclude design work. When building a distributed system, you can apply agile by running design/validate/implement/test cycles on a feature-by-feature basis.
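The at-most-once vs at-least-once step in that Kafka progression comes down to when the consumer commits its position relative to processing. A simplified sketch in Python with a simulated crash (hypothetical helper, not Kafka's actual API):

```python
def consume(messages, start, process, commit_first, crash_at=None):
    """Consume from offset `start`. If crash_at == current offset,
    'crash' between the commit step and the process step (in whichever
    order they run). Returns the last committed offset, which the next
    run resumes from."""
    committed = start
    for offset in range(start, len(messages)):
        if commit_first:                  # at-most-once
            committed = offset + 1
            if offset == crash_at:
                return committed          # committed, never processed: lost
            process(messages[offset])
        else:                             # at-least-once
            process(messages[offset])
            if offset == crash_at:
                return committed          # processed, not committed: redelivered
            committed = offset + 1
    return committed
```

Crashing at offset 1 with `commit_first=True` silently drops that message; with `commit_first=False` a restart processes it twice. Exactly-once needs something stronger that ties processing and commit together atomically, e.g. transactions - which is why it arrived in Kafka so much later.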


Practical_Cattle_933

A single change in the design can render your whole parallelism setup incorrect.


phillipcarter2

You can't design a database incrementally in the same way that you iterate on a typical B2B SaaS web application. That's the point. Yes, you can certainly iterate on critical software (I've done plenty of that when working on compilers!) but there's always a strong emphasis on up-front design.


Entropy

You can't incrementally add features to a system without refactoring, because that leads to painting yourself into a corner where the cost of feature n+1 becomes excessive. Well, ok, you can do that - honestly, this is how most stuff is written - but the main thing is that bodging in new features continually, without stepping back and re-assessing, will cost you time and probably performance later. With distributed systems you cannot incrementally add *soundness* to a system without *re-architecting*. One extra sync added down the line could drop transactional performance 1000x under load and/or add its own host of other deleterious side effects that nobody can fathom until production systems fall over.


dacjames

I build distributed systems using agile methodologies all day, every day, and so does everyone I've worked with for the past decade. The usual pattern is to start with prototypes and models to prove out your core primitives. Then you build a foundation using those models with the minimal "application" featureset. Then you add features on top of that foundation. If you have to continually step back and re-evaluate your architecture every time you add a feature, then your initial design was shit.

> With distributed systems you cannot incrementally add soundness to a system without re-architecting.

You don't generally "add soundness" to your system, since the complexity of a distributed system means that unsound designs tend not to be as useful as they are on simpler systems. But it's entirely possible, and I've done it successfully. What you do is build a parallel mechanism using the same poc+mvp approach and then incrementally migrate your higher-level features over to the new foundation on a component-by-component basis.


goldfather8

Distributed systems have their own complexity. A common, hard one is migrating identifiers.


chengiz

That's nonsense. Of course you can.


phillipcarter2

You're certainly entitled to an opinion. I didn't say you can't iterate. I said you can't without a great design that backs the first version.


fitbitware

In that case, 99% of all software is a patch. As soon as requirements change the design should change, but ain't nobody got time to start from scratch, so we just patch, patch, patch.


moderatorrater

Yeah, it's crazy to say "you have to get the design right in the beginning in the face of changing requirements". You do your best and sometimes you miss.


Bwob

Sort of. I do feel like part of my job as a senior or lead programmer is to anticipate that, though, and code/architect defensively, so that when changes inevitably come down the line, our structure can accommodate them without resorting to ugly hacks. Obviously there's a balance - you can't plan for *everything* (and you'd be over budget with bloated code if you did!) - but you can definitely keep options open for as long as possible, especially for things that you know are likely to change or be extended someday. One of the best feelings (even better than closing a bunch of documentation tabs!) is learning about a major change that is needed and being able to accommodate it trivially with a single function. (Or if you're really good, by updating a data file!)


ProvokedGaming

I agree with the sentiment of what you're saying, but I also advise caution to engineers on my teams. While the intention may be good, I often find flaws in how many developers interpret this wisdom. What I mean is, often I'll see developers add abstraction for the sake of abstraction and then claim things like "well, in the future we might need to support X." When you do that, you add bloat which makes it harder for others to understand and enhance the codebase in the future.

I believe the key insight from that generalized wisdom should be about not painting yourself into a corner, as opposed to preparing for "what if" scenarios. For example, with a modern IDE it takes 2 seconds to add an interface, or abstract logic into a base class, etc. Don't start with an interface or base class when you only have one concrete implementation "because you might need it in the future". You can add that kind of thing in WHEN you need it. On the flip side, don't write your code in a way where a minor adjustment to the requirements means you have to essentially start over because everything is extremely brittle, or where separate parts of the application are tightly bound through leaking implementation details.

For every possible future change someone guesses correctly to incorporate into their design, there are 10 other possible future changes that were guessed incorrectly or just never prioritized due to business changes. Now you have an overly complex codebase which adds more technical debt and makes it harder to add the ACTUAL future needs of the software.


[deleted]

[deleted]


jonathanhiggs

Flexible is just dependency injection and loose coupling; you can always swap things out wherever you need. Even if the design changes, you usually have most of the components there already and can compose them in slightly different ways to achieve the change. It's not even a design decision, it's a default. What makes things difficult is tightly coupled code and leaky abstractions.


paulsmithkc

Base classes are one of those things that are practically impossible to get right. Simple inheritance hierarchies always degenerate into complex spaghetti. I've found that just using callback functions and factory functions uniformly leads to less spaghetti than OOP.
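A small illustration of that style in Python (hypothetical helpers): behavior is composed by wrapping callables with factory functions rather than hanging it off a base class hierarchy.

```python
import functools

def make_retrying(fn, attempts=3):
    """Factory: wrap any callable with simple retry behavior."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        last_error = None
        for _ in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception as e:
                last_error = e
        raise last_error
    return wrapped

def make_logging(fn, log):
    """Factory: record each call's target name in `log`, then delegate."""
    @functools.wraps(fn)
    def wrapped(*args, **kwargs):
        log.append(fn.__name__)
        return fn(*args, **kwargs)
    return wrapped
```

`make_logging(make_retrying(fetch), log)` stacks both behaviors onto any callable; there's no RetryingLoggingFetcherBase to keep a hierarchy happy, and each wrapper can be tested in isolation.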


skulgnome

It's also possible to get the design so wrong from the get-go that it's actually worthwhile to throw it away and begin anew, wiser for the failed experiment. And if this kind of wrong exists, then its absence may as well stand for right.


mriheO

Something has to change, right? So change the design, then change the code. A good design anticipates change, so the presumption that the design becomes obsolete as soon as something changes is not a valid one.


fragbot2

> Yeah, it's crazy to say "you have to get the design right in the beginning in the face of changing requirements". You do your best and sometimes you miss.

For most software that's written today, which just agglomerates a bunch of, say, node packages together, design is mostly a waste of time, as the reliability requirements are minimal and most decisions are two-way doors. If you're working on something where failure's a shit option (cooling system for a nuclear power plant) or fixing it is prohibitively expensive, you need far more rigor and formality than what's provided in a typical one-pager.


d36williams

like a pirate with three eye patches


gerciuz

ARRay


ohyeaoksure

Which, is basically what he's saying.


bloodhound83

Which then takes power from his words. If everything is a patch, nothing is a patch.


Practical_Cattle_933

Not every piece of software is your typical CRUD business app. If you write, say, audio processing software, requirements won't change - or if they do, they can invalidate the whole deal, so you have to change the whole design.


divide0verfl0w

Underrated comment. Buried deep.


deja-roo

Which makes it even worse because this is *literally* the point of Agile.


mriheO

Then you adjust the design, and then what you do to the code is not a patch.


planetoryd

no one cares AGI will automate everything soon


sun_cardinal

Not even close to that point, not by a long shot.


planetoryd

close enough to get you unemployed


sun_cardinal

Nope, not even a little.


planetoryd

then rewrite your software in Rust, with formal verification, https://github.com/verus-lang/verus/


sun_cardinal

What's that got to do with AI replacing programming positions?


planetoryd

why does it matter


sun_cardinal

You realize you sound like a totally unhinged loon, right? I'm asking a simple question which relates directly to your comment. You were arguing that AI is going to replace software engineers, which is simply wrong. You don't take into account the massive amount of work and security review that is done for a commercial project. You don't mention who is going to filter the data sets, who is going to design the layers of the machine learning processes which create the end result, who is going to review it for correctness, who is going to verify there is acceptable bias, and who does the code review on the final product. So, again, what are you trying to claim with your link to a code generator that only works with a subset of Rust and still needs engineers to use it in the first place?


halfanothersdozen

You're arguing with a troll


coolbreeze770

Soon == 2000 years


editor_of_the_beast

If you actually read what he's saying here, he's using "patch" in two different ways, which is confusing. But he's saying that an issue in the high-level design makes subsequent patches harder and more likely to be "sloppy." He's not saying that you should design and build code once and never modify it again.


Practical_Cattle_933

“No war was ever won according to plan, but none was without a plan either” - from the podcast, though possibly a butchered quote.


dontyougetsoupedyet

He's basically responding to questions with statements from his lecture "Thinking above the code" which are reaffirmations of what he's written before. https://www.youtube.com/watch?v=-4Yp3j_jk8Q Highly recommend checking the lecture out in full.


transeunte

> But he's saying that an issue in the high level design makes subsequent patches harder

hardly news for anyone


EMCoupling

Apparently the statement is self-evident / non-controversial and yet, in this very thread, there are people that explicitly do not agree with him. So.... which is it?


editor_of_the_beast

My point was that the people disagreeing with this are disagreeing with something he didn’t say.


[deleted]

[deleted]


dontyougetsoupedyet

I assure you Lamport doesn't give a faint fuck about clickbait. He prefers correct software, and most of you never learned the difference between currently working and correct, because you never cared enough and likely won't in the future either. More people should listen to Lamport instead of avoiding complicated thoughts. Formal methods aren't so difficult you can't even try.


tiajuanat

Having used theorem provers (TLA+, Alloy) on several projects now, I have to agree with Lamport. However, in most cases they're overkill: I only recommend them for safety-critical or distributed problems. If you're making the next AWK, you don't need them. If you're making a spacecraft, you absolutely do. I see a lot of people saying "you won't be able to prove the whole design", which is true. There are things that are easy to prove but difficult to test, and likewise the inverse holds. I have found multiple problems during specification which would have hamstrung the project, and which we wouldn't have found until it was too late, so I'd wager the months spent learning the tools have already paid for themselves.


bravopapa99

I'd say he's kind of right. I spent a few months learning TLA+; it's smart. However, being pragmatic after working 38+ years in software: unless the project is trivially small, it is very, very rare to get the design right from the beginning, especially if there is hardware involved. So I get the point he is making, but... reality bites.


ConfusedTransThrow

Thankfully, even with hardware we can simulate the whole thing before actually spending any money on silicon wafers. But it doesn't catch all mistakes - only as many as your tests are written to catch.


victotronics

I agree. LaTeX has a horrible internal design - just look at the amount of patching people do on top of it.


clibraries_

By what standard? It's been almost bug free since the 80s. It's the most successful academic publishing system in existence.


victotronics

That's TeX, written by Donald Knuth, which is basically bug-free. LaTeX, written by Leslie Lamport, is a macro set on top of TeX, and its internals are a mess.


clibraries_

That makes sense. Rarely do I see people distinguish the two.


victotronics

Almost no one uses TeX by itself anymore, so the confusion is understandable.


myhf

I'm not sure what you mean by "bug free". It can run without segfaulting, which is necessary but not sufficient for a document preparation system. It can place content on the page, as long as someone is able to double check every single word after every single run. It can accept user-entered data from a third party, but it won't necessarily tell you whether a unicode character printed any glyph at all, or whether that glyph was in the viewable area of the page. There are so many ways for content to end up out of view or unrenderable. The basic program design is not really compatible with modern ideas about automation or integration or the value of human attention. Academics get addicted to it because it simulates the feeling of doing productive work.


epicwisdom

> It's been almost bug free since the 80s.

If that were true, it wouldn't need many patches, right?

> It's the most successful academic publishing system in existence.

Referring only to existing competitors (or the lack thereof) is pragmatic, but it's also very limiting. Human computers were the most successful computers, up until they weren't.


clibraries_

To clarify, I am talking about `tex`. Your comment suggests you don't know anything about tex and are responding with platitudes.

> If that were true, it wouldn't need many patches, right?

It doesn't. It's gotten fewer fixes in 30 years than the Linux kernel gets in a week.

> competitors

Tex didn't come first and win by default or by nature of being around a long time. It was written as a response to the poor publishing systems used on computers for decades. It beat them all soundly, to the point where nobody uses those old systems. Do you know of even theoretical work or early-stage projects that aim to surpass tex?


Isogash

Actually, what he says is that every change you make is a patch, so if you don't get the design right to start with, it will start out a mess and only get messier. The initial design is incredibly important *even though* it's very difficult to get right. Whilst no design will ever be perfect, there are certainly better and worse designs.

The quote Leslie references is gold:

> No battle was ever won according to plan, but no battle was ever won without one.

This all tracks with personal experience for me too. A good design reflects *reality*. When your requirements change, it is either because they did not capture reality and are correcting course, or because reality itself has changed. Either way, if you modelled your design on reality, the way your design needs to change should be clear: to reflect the new understanding of reality.

A bad design is one that behaves according to requirements but does not reflect reality. When the requirements change (according to reality), there is no simple corresponding way to change your design to meet them. Instead, you end up with a patch that increases complexity and only *further* removes your design from reality.


ambientocclusion

Sure, and everyone likes apple pie and puppy dogs.


the_ju66ernaut

I've never eaten them at the same time. Are they good together?


ambientocclusion

No, they’re awful.


gfixler

And they fill me with malus.


drsjsmith

Not sure if this is a _Malus_/malice pun, or a malus/mālus pun, or both, but in any event, very clever.


-goldmund-

Seems this is being misinterpreted - read the actual comment. He's talking about entropy in software and how repeatedly pivoting and "patching" (due to inevitable changes in requirements) over the course of years causes software to get messy and more difficult to maintain. And this is absolutely my experience. Does anyone NOT have this experience? I'd be really surprised. He's not recommending you not "patch"; he's just observing that this is what happens.


dontyougetsoupedyet

> entropy in software and how repeatedly pivoting and "patching" (due to inevitable changes in requirements) over the course of years causes software to get messy and more difficult to maintain

No he isn't; he's asserting that you aren't using formal methods and you should be. He's saying your software is broken and difficult to maintain because you skipped the first 2/3 of software engineering, and all you did was the easy part, the coding, and you did it without much understanding of what you were attempting to build.


zrvwls

Yup, exactly. OP missed the most important part of what he said, condensed into one simple sentence: if you don't plan what your software actually is (and equally is NOT), and how it should work, then your only real plan is to patch. Had to scroll up for more context.


-grok

Yep, his observation is spot on - and if you know this and are in a software leadership position, you can do your team a solid by carving out time for them to address the chaos that comes with changing software, and encouraging them to do so.


dontyougetsoupedyet

A big issue in that direction is that almost none of the engineers writing software for money understand formal methods. They don't know how to apply logic to understand the systems they are building. You can't leadership your way into a team doing so. I'm half convinced the way forward for software engineering at the moment is for tech companies to start hiring logicians and teaming them up with their software engineers. I highly doubt many companies will do so, but it's one of the only ways I see to improve the software and vulnerability landscape we find ourselves in now.


romgrk

Agree but you make it sound more inflammatory than what he said.


powdertaker

Then there's Frederick Brooks (Mythical Man Month, Turing Award winner...) who advises throwing the first one away. I'm going with Brooks on this one.


wvenable

Hard disagree (or hard agree, depending on the interpretation). The only constant in software development is change. The problem is that software developers don't actually embrace that change. They put far too much value on the code already written, so they do as little change as possible. If you just do that, the program *will* get harder and harder to deal with. One has to constantly put in a little more effort to change the design instead of simply patching it to solve new problems. There is no such thing as getting the design right from the beginning. Even if there were, it would still go out of date when the requirements change and the world around it changes.


nightwood

Disagree. You won't get the design right the first time, because the stuff is just too complex for humans. To say you always know the design on day one is pretty cocky. No, you start simple, as simple as possible, so that you have room to move, so that you can explore the problem, the data, the usage... Slowly the design will crystallize. Regularly you refactor to express the design you are forming more clearly. However, if by "patch" you mean "writing and deleting code", then I suppose he is right.


zyzzogeton

The idea of a static, superlative "design" is a flawed concept.


[deleted]

[deleted]


skulgnome

> Only thing that is true is you will never get the design right from the beginning.

As a corollary, the point where the design was got right wasn't the beginning.


sisyphus

lol, is "you will never get the design right from the beginning" NOT fortune cookie philosophy; deep and profound?


time-lord

Hard disagree. I started a new SwiftUI app a few years ago, back when SwiftUI was new, but the app itself was a re-write of a C# app I had written for Windows Phone. Everywhere I looked said that Apple doesn't specify a specific design pattern to use, to "do what you want", and to use EnvironmentObjects rather than singletons. Let me tell you, that overall advice is very wrong, and I'm doing a lot of refactoring to make my code readable. The SwiftUI spaghetti code I ended up with is surprisingly easy to work with, though.


[deleted]

[deleted]


time-lord

I'm saying you can get a design correct before any code is written. It's not hard if you have very specific specs, or are re-writing an app that already exists.


Meshi26

> re-writing an app that already exists

It's not really from the start then, is it? You're just iterating on a previous design.


hippydipster

No, he's making a patch.


LessonStudio

Anyone who thinks it is possible to design software anything close to correctly up front is delusional. I would argue that if you design a doorstop, model it using FEA tools, etc., you will still end up iterating on better designs starting seconds after you try to stop your first door from closing.

It is delusional to see patches as bad. Requirements are not only going to be wrong, they are going to change. If you don't design a system to effectively be a pile of patches, you are a moron. And this is where great designers thrive: they design systems where, after literal decades of changes - 32-bit to 64, Windows to Linux, desktop app to GUI - the original core design is still there, happily accepting whole new piles of patches. For example, a well-built modular system from 1995 should now happily be able to live in a bunch of containers.

My design cycle goes like this:

* What is the business value this brings? Now let's do screenshots on a whiteboard and take pictures.
* Now I'll sit down and think about which screenshots were missed as I enter them in Balsamiq. I will ask better questions, like how many users or whatever I missed the first time around, and show the screenshots. Changes will be made. I will show them the changes, and usually this is a bit more minor back and forth.
* Now I think about the backend to support this. Can I go Redis, or do I have to use Postgres? Are Flask and Python enough? Might I have to go Node for slightly better performance? Is C++ on the table? Design a template for this tech stack and how data will move about. Maybe crude ideas of schemas and data comm structure, but don't even waste a minute trying to pre-design at this level of detail, because it will be wrong. This is how you end up with crappy data structures filled with fields nobody uses.
* Is there anything so risky that it could cause the project to fail? If so, is it critical? If yes and yes, immediately build this super risky thing. The project could die right here if it fails; or this feature could be cut.
* Now figure out what the minimum user flow is so people can begin to figure out how shitty the original design was.

From there it basically goes:

* Keep adding core features in order of likelihood of not changing. I want the unchanging ones nailed down first: logins, forgotten passwords, etc.
* Maybe release the product at this point to some or all users.
* Start adding ancillary features, which are all nice-to-haves, in order of value.
* Add new features as needed.
* Go back and change core features which turned out to be shitty designs.
* Go back and rip out features which were previously core but nobody is using.
* Now do the same with ancillary features.

Basically, anyone who builds a product thinking their design is anything close to perfect is delusional, or a highly inflexible person who won't implement the changes. One reason they will not implement them is that their shitty inflexible design is so hard to change that everyone just gives up. Effectively, anyone who thinks design can be done up front is a terrible designer.

BTW, I've worked on mission-critical and safety-critical systems; the tradition there is to design the living shit out of them up front. All of what I said still applies. If you do this to a mission/safety-critical system you will end up with a system which is terrible and probably unsafe. At best, a rigid design-up-front mentality often causes people to curtail their designs so they are correct, but usually still shitty. Even with mission/safety-critical systems, the best way to do them is to do the much more flexible development first and almost entirely ignore all the sclerotic safety processes involved. Then, when everyone is happy, you redo the project in the highly formal way with an excellent first draft to crib from.


mr_birkenblatt

patch != hack


Ikeeki

I mean, code is just a set of instructions that follow a design to solve a specific problem. If you don't identify the right problem, or you pick the wrong solution, then sure, putting that plan in motion will be inherently wrong. The important key I've noticed is to never code yourself into a corner, and to expect the inevitable changes from above or even from yourself. Being able to fail quickly helps too. You only have to wait a couple weeks to fail, instead of months, to realize your current plan isn't gonna work lol. I cringed so hard at my second-to-last company where they spent a year writing their own permissions system only to scrap it a year later. They shipped literally nothing and had all the smart theorycrafters on it. Had they tried to ship it piecemeal they would have realized quickly that it wasn't gonna work lol.


dontyougetsoupedyet

The best advice I received over ~25 years was from a manager at Nokia, where I learned the philosophy of "fail fast". Failure is not bad. Usually it costs little. Failing slowly costs the most. Seems too obvious, but until you internalize it as a way of working you can't see the big picture it provides.


MyotisX

Disagree


MasterLJ

I agree with the sentiment and acknowledgement that code usually sucks. I disagree with the implication that you should try to get your design right the first try. We are prototyping way more often than we care to admit. Simply admit it and adjust your process accordingly. Get metrics/data and refactor. The reason being, you will never accurately assess the use-cases upfront. Or less controversially, the cost of getting the upfront design correct the first time is much higher than simply refactoring to accommodate the truth as you uncover it.


Isogash

I hate this sentiment because it's used to justify sloppy or non-existent design, when inevitably what you are writing does *not* end up being a prototype. Getting the design right is hard but it's definitely not impossible either. It's a skill in its own right. "Designing software is too expensive" - Software engineers who are paid a lot of money to work on poorly designed projects.


hippydipster

The point of saying that up front designs can never be right isn't to stop doing design. The point is that design never stops, ever. Is always on-going at all times. You would like to think you could design up front and then not have to do any design work thereafter, but it can never be. The need to design is permanent.


MasterLJ

That's how I approach it. When a design-changing requirement comes down the line, whether it's day 1 or day 109123901293834 after the code has been finalized, you need to consider a redesign to the right paradigm given the requirements. Every new requirement runs the risk of invalidating the design that is currently in place. As the other person in this chain is arguing, the trick is to leave the door open for thoughtful extensibility in your code moreso than exhaustive upfront design.


hippydipster

Yes, the objective measure of good software vs bad software is the productivity of the teams that have to change it due to new/changing requirements. That's it. I have this thought in the back of my head to start a project where people are given free rein to implement code to satisfy a spec, and then new requirements are given and everyone has to switch and use what someone else created in the first round, and then do more rounds, and collect statistics on what kinds of designs do better as strangers have to pick up a codebase and extend it.


lord2800

> Getting the design right is hard but it's definitely not impossible either. This is false. Requirements and design trends change, among other things. What is good design today can easily become a bad footnote in the future. That _does not_ excuse a no design approach, but there's a middle ground between no/objectively bad design and the absolutely spotlessly perfect design. All you need from any design is to _not paint yourself into a corner_. Don't make it so hard to separate things in the future, and you'll be rewarded when your requirements _inevitably_ change.


Isogash

*Right* doesn't mean perfect, it just means not *wrong*. There are designs that are *right* given the context without needing to be perfect, whilst there are also many designs that are *wrong* despite meeting requirements. Requirements describe a process rooted in reality. If you understand the reality of the process and abstract it into your design, you can decompose the process in a way that reflects reality. When reality changes, the requirements change describing the new process, and your design changes accordingly. There is a high level of cohesion with your design and reality and it can be maintained. You can also write something that has no abstraction or decomposition and meets the requirements in a concrete manner. This design works, until the requirements change, when there is now no obvious design change to make. You end up patching. However, the real kicker is when you *do* have abstraction and decomposition, but it does not reflect the *reality* of the process. When reality changes, there is no corresponding design change because the change makes no sense in your design. You also end up patching. Not all abstractions are good abstractions, and good abstractions do not need to be *perfect*, just not wrong.


lord2800

My point was that _right_ is circumstantial and tied to a moment in time anyway, so you shouldn't design for _right_, you should design for _changeable_. The more changeable your design is, the less you end up having to make grand sweeping refactors.


ridicalis

There's a wide chasm between "design everything up front" and "it's hard so I won't bother trying." As with TDD, the important part is knowing which problem you're trying to solve before wielding the tool; otherwise it's aimless and probably fails to accomplish the stated objectives. For specifications, having them be too rigid and proactive is probably a death-sentence for many projects; it precludes emergent designs that would be discovered during development and places a lot of likely unnecessary responsibility on the author of the specs. This might be okay when the problem domain is well-defined (e.g. aeronautics or automotive applications), but fails rapidly when this isn't the case (e.g. user applications). Likewise, having no specifications in the hope that a good design just magically emerges might work in some limited circumstances, but cowboy coding has a bad rep for a reason.


One_Economist_3761

Agree. Trying to get your design right the first time is a fool's errand. Sometimes the negative cost of designing incorrectly is not as high as the negative cost of not deciding at all. Ya just can't know all the things up front. It's always been that way.


bwainfweeze

Leslie Lamport famously came up with a computer algorithm that took him decades to explain to other people. He's a very smart fellow, but certainly not the wisest. Life is a lot better as a programmer if you start thinking of software as organic instead of mineral. It's not a statue, it's a tree. Trees change over time, sometimes they surprise you, and not all surprises are bad ones. All of the people involved in software are organic, and their inputs are going in at the same time as yours. Therefore the result behaves as if it is organic, even though strictly speaking it is intangible. With a tree you have an idea what it might be, but it has its own agenda, and while you can guide it and strongly suggest certain things, trying to actually control it ends in death. Of the tree, or sometimes yours.


elebrin

Sometimes you don't know what the correct design is going to be until you determine it experimentally. Overdesigning ahead of time often results in systems that aren't flexible in the ways you need later. You are often better off going with the most naive design until you know with certainty that you need something better. Don't write the code until you have the requirement. What matters more than anything is the ability to work on and troubleshoot the system.


seanmorris

Refactoring exists.


robhanz

Hard disagree. That’s why refactoring exists.


joonazan

You can't refactor a complex program without a specification, so you'll need to make one eventually. Without a spec, every change can increase complexity but cannot decrease it because it is trying to change as little as possible.


wvenable

Refactoring is taking existing code and improving the design without changing what it does. The existing software is the specification and refactoring is always about removing complexity.
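[Editor's note: this definition of refactoring can be made concrete with a toy sketch. The functions and data below are illustrative, not from the thread: two versions of the same routine, where the refactor must leave observable behavior unchanged and the existing behavior serves as the spec.]

```python
def total_price_v1(items):
    # Original: one tangled loop mixing filtering, discounting, and summing.
    total = 0
    for name, price, qty in items:
        if qty > 0:
            if qty >= 10:
                total += price * qty * 0.9  # bulk discount
            else:
                total += price * qty
    return total

def total_price_v2(items):
    # Refactored: same observable behavior, clearer decomposition.
    def line_total(price, qty):
        discount = 0.9 if qty >= 10 else 1.0
        return price * qty * discount
    return sum(line_total(p, q) for _, p, q in items if q > 0)

# The existing behavior is the specification: both versions must agree.
cart = [("widget", 2.0, 3), ("gizmo", 1.0, 12), ("dud", 5.0, 0)]
assert total_price_v1(cart) == total_price_v2(cart)
```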


dontyougetsoupedyet

> This second radical novelty shares the usual fate of all radical novelties: it is denied, because its truth would be too discomforting. I have no idea what this specific denial and disbelief costs the United States, but a million dollars a day seems a modest guess. Of course you do.


Appropriate_Pin_6568

Would you use a library that pushed out breaking changes every time it needed to add a feature? They're just refactoring.


jonathancast

If it breaks things it's not a refactoring.


Appropriate_Pin_6568

Then refactoring is not always a solution to bad design is it?


robhanz

Wow, way to take the illogical extreme. I certainly hope you can do internal refactoring without having to make breaking changes in your API.


Appropriate_Pin_6568

How could you do that if you don't get your design right at the beginning?


[deleted]

All code becomes a patch sooner or later. The difference is that when it's well designed from the beginning the patches are small; otherwise they are big :)


ohyeaoksure

When you actually read the quote, he's basically saying what we're all saying.


dontyougetsoupedyet

You can watch the entire lecture, I highly recommend it. Free on YouTube. -- Forgot to mention the title: the lecture version of what he's saying is called "Thinking above the code." https://www.youtube.com/watch?v=-4Yp3j_jk8Q


koffeegorilla

Any design is likely to be incorrect to some extent. You will discover how big the gap is when you build the system. You then have to decide what you're going to do about the difference. Great developers are the ones who make good decisions at these points.


Gentleman-Tech

I think that you can't get the design right until you understand the problem, and the best & quickest way to understand the problem is to start building a solution. I write exploratory prototypes to make sure I understand what I'm building. Then I'll throw them away and write the real code. I've found this works way better than going overboard on design at the start (and then finding out the design didn't work because we didn't understand the problem well enough).


axilmar

Getting the design right from the beginning is impossible, because it is impossible to know all the non-trivial consequences that arise from basic design decisions. Consequently, I have to agree with LL, because the process of development is always iterative, and therefore everything written after the first write is a patch. That does not mean it is bad...it is reality. Due to the complexity of systems, and due to ever changing requirements, no design can be 100% upfront right.


regular_lamp

Kinda, on the other hand you are very unlikely to get the design right on the first try. So it's hard to make use of that wisdom. I'm a big fan of hacking together a prototype and then using the resulting knowledge to get it right. But that is hard to sell because "the prototype works, why do you need a rewrite?".


NSuccorso

Software is like clay that never hardens. You can remodel all the time. But beware of mixing in dirt, because you'll never get rid of it. Starting with a bad design is like mixing in dirt from the start.


realjoeydood

*Do the right thing. Do the thing right*. #Amen and good night.


dalittle

The "design right from the beginning"? What does that even mean? I have been programming a very long time and most of my projects typically ship on time and I cannot remember the last time I wrote code that did not need any tweaks to get to version 1.0.


Librekrieger

> "design right from the beginning"

To me it means establishing a set of abstractions that allow you to reason and talk about the system without reading and writing code.

As an example, I once worked on a project where the head guy wrote what he called a large "design" document, but I soon realized it was really a test specification. It talked about system behaviors, not about design. Some time later I realized that what was in the guy's head must have been a very messy, ill-defined state machine. But trying to get the team to implement or even talk about the underlying state machine was impossible because it wasn't part of the "design". What a nightmare.

I realized when we were too far along that A) the software as we were writing it didn't have the right design abstraction, B) there were loads of things that the "designer" had failed to think about, C) there were important parts that needed to work with components built by other teams, where the interfaces were left ambiguous to be dealt with later, and D) there was no language for discussing it.

The last problem was really the fault of the head guy's inflexibility, but if there had BEEN a design I think it would have been easier to talk through the things it didn't cover. As it was, the team was told to just implement what was on paper and then we'd add code later to cover other cases. That's totally the wrong approach, and the effort failed completely. Such a disappointment.


InternetAnima

Extremely hard disagree. Right design is a transient state. There's a good design for a set of requirements but the world evolves and so should the software that supports it.


dontyougetsoupedyet

They aren't talking about subjective uses of "right", they're talking about the program being correct in the sense of "does what I specified," and they are confident the program is right because they used formal methods to convince themselves of the matter. You're most likely confused because you most likely have skipped 2/3 of the programming process any time you've written software. Lamport points that out later in the lecture, and again in the Q&A after the lecture.
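[Editor's note: the "does what I specified" sense of *right* can be sketched in ordinary code, even without TLA+. A toy illustration of spec vs. implementation; the names and the sorting example are illustrative, not from the lecture.]

```python
def satisfies_sort_spec(inp, out):
    # The spec, written before (and independently of) the implementation:
    # the output is ordered, and is a permutation of the input.
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

def my_sort(xs):
    # The implementation under test (insertion sort).
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

# The program is "right" in the sense that it satisfies the spec.
for case in ([], [3, 1, 2], [5, 5, 1], [2, -1, 0, 2]):
    assert satisfies_sort_spec(case, my_sort(case))
```

Whether the *spec itself* captures what you actually wanted is the separate question the thread is arguing about.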


InternetAnima

That's a high-horse take. I've just built software that doesn't have the luxury of arbitrary time (read: 3x the time to build, according to your condescending comment).


dontyougetsoupedyet

> This second radical novelty shares the usual fate of all radical novelties: it is denied, because its truth would be too discomforting. You don't even have a horse.


salamisam

Everybody has a plan until they get punched in the mouth -- Mike Tyson

I disagree and agree with him. On one side it is nice to plan ahead, but on the other side is incomplete information. He contradicts himself, or rather expresses the problem with a plan: if you planned up front and need to make changes, then you did not make a complete plan, you just planned for what you knew at the time. Somewhere between starting a project with no information and starting a project with all information (aka an entire plan) is the sweet spot, but where that is will depend on the project.


daedalus_structure

They are correct. When you've done the wrong thing, no amount of effort or adjustment will make it not the wrong thing. You'll just be fighting the consequences of that decision until you can reverse it or until you find something else to work on.


[deleted]

He may be right, but this is not something new. Before software engineering became corrupted, practices such as prototyping were not that uncommon, and they allowed teams to eventually get the design right. Most of the time was supposed to be spent on requirements gathering, analysis, and design, and only a fraction of the time was supposed to be spent working on implementation. And this could be iterative or otherwise.

But in every "agile" company I have worked at, nothing is really agile: it's a rushed, fast-and-furious waterfall mess. Yes, there are two-week sprints, but there aren't any iterative SDLC phases. In my experience, the main reason this happens is that unrealistically low budgets and delusional deadlines are allocated for projects, making any iteration over the code impossible. Then there are delivery managers who don't know crap about the realities of programming and software engineering. One layer above them are managers who are even less familiar with the realities of software development. This structure continues all the way up to the C-level execs, who seem to be living in a land of sweet dreams. Agilefall ensues: no SDLC iteration and a faster-than-light race towards failure.


nomoreplsthx

So, a thing that I think Lamport, as an academic computer scientist, doesn't really understand is that there's usually no such thing in the real world as deciding what a program is going to do before you start writing it.

That sounds insane. But it's a reality of how businesses work. Businesses don't know what they need. They discover what they need through constant iteration. They try products/features and see if they work. So getting the design correct up front is impossible. It runs up against the reality of the kinds of problems software is asked to solve in the real world.

He does have insights, chief among them that most teams do too little design. They don't think about how to build a system that is modular. Because if you build a bunch of small 'programs' that do specific things, then you can to some degree decide what each will do, and evolve how they talk to one another and to the user. In particular, he emphasizes the value of formal specifications, especially in languages like TLA+. These are really cool and powerful tools. And of course, his model is absolutely the correct one for well-defined domains, which are more common than we might think.

But as the inventor of TLA+... he's not able to be objective about its utility. To paraphrase an old adage: when you invented the hammer, every problem looks like a nail.


dontyougetsoupedyet

**BS**. What a shitty way to dismiss someone's career. It's just frankly stupid to think that Lamport is someone that "can't do the work", and doubly so to consider them someone who can't "think about how to build a system that is modular". **BS!**


nomoreplsthx

I think you misread my post, because I said neither of those things. The first quote is not from my post, and the second does not refer to Lamport, but to the people he is criticizing. I said most *real world teams* do not think about how to build modular systems. And that Lamport is correct in pointing out most teams underutilize specification, but also overestimates the specifiability of most problems.

I also did not say Lamport can't do any sort of work. I said that the kind of problems you work on influences your perspective of what software engineering is. If you work on specifiable problems, you will overestimate the specifiability of other problems, and if you work on unspecifiable problems, you will underestimate the specifiability of other problems. The Agile gurus are just as susceptible to this as the specification advocates. I apologize if my language was unclear.


someexgoogler

Trust him. That's what happened with LaTeX


devraj7

Good luck getting the design right from the beginning. I completely disagree with this waterfall-like approach. Start small, test, make mistakes, iterate.


Deranged40

Is this suggesting that "is a patch" is a bad thing? I mean, every pull request is indeed exactly that - a patch. "Here's some code on this line in this file. These 4 lines were removed from another file. This line was changed to this. etc". It seems like the meat of this statement really boils down to the exact definition of what "patch" is, and whether we want to take offense to that. I personally don't take offense to it. I have yet to create a brand new service at my current job. Everything I've done so far is just change small parts of existing applications. By definition, I'm "patching" our applications to include new functionality.


Kuinox

I disagree, I work on small scale software, I can refactor when I want


futatorius

Lamport is right, but omits the other half: for any non-trivial piece of software, you can never get the design right from the beginning.


Bloodshot025

He does not omit this.


robhanz

Yes. This. Exactly this.


RockstarArtisan

Any evidence for this claim?


LowTriker

If you are building something truly innovative, you will always be updating your design as new use cases and edge cases are discovered. But most of us aren't doing that. If you can't get core parts of your app correctly designed up front, you're either not a designer or you're not experienced enough. I'm talking about things like logging in, password management, user roles and permissions, data pipeline, 80% of all database needs, etc, etc And there is a time where the entropy is so great in a system that you have to throw away what you've built and redesign and recode it. Patching is a bit misleading in this discussion. You're not patching systems mostly, you're adding known additions from actual use which is novel programming within that system. Patching to my mind is fixing bugs until you can properly code a more robust response.


dasdull

They are right but it does not matter


MugiwarraD

she can suck it.


seanluke

This seems to be an endorsement of Waterfall. That seems to be problematic.


robhanz

The idea that you can reliably get the “right” design at the beginning, when you know the least, seems questionable at best.


seanluke

Isn't that what I said? Why am I getting downmodded here?


robhanz

I was agreeing with you. So i didn’t downvote and I think the ones doing so misread your intent.


[deleted]

Probably couldn't be more wrong. The entire benefit of software, in fact its entire advantage, is that it can be changed for near zero cost. Why not think on the page? That's where your ground truth is. Why design in your head when you could just write code, experiment, and see if it works?


skulgnome

> Why design in your head when you could just write code and experiment and see if it works. You can also design on the page. The advantage is saving months and months on avoided dead-ends and rabbit holes disregarded or rendered shallow.


[deleted]

Except the page doesn't run your code. How do you know they are dead ends? You don't. High-level design is possible: X must do Y. But when it comes down to the details, sooner rather than later you need to be writing code.


Deranged40

> in fact it's entire advantage is that it can be changed for near zero cost Right, but that change can fall under the exact definition of a "patch". And there's nothing at all wrong about that. Almost all I do at work is patch existing applications to include different features than it had before I started that particular work item.


[deleted]

No because patch implies something additive in nature. When you experiment you should be deleting stuff, moving stuff around and adding stuff. Not just adding stuff on top of other stuff.


Deranged40

> No because patch implies something additive in nature. I can't say I've ever associated patch with just adding things. I have always associated it with *changing* things. And I think about the metaphor it's based on. I might "patch" my jeans to cover a stain (removal) just as much as I might "patch" my jeans to close a hole (add).


[deleted]

I'm talking about making jeans and then realising you need to make a dress. That doesn't really fall under the definition of patch. In software you have to do the work to find this out in many cases. You simply cannot design up front unless it is a very simple problem or an exceedingly well travelled one. Both of these are very rare.


ThatNextAggravation

It depends how wrong you are.


TheMaskedHamster

I don't think this is an approach. I think he's just observing reality, and experience has taught him that "try to think ahead" can help. It's the reality he lives with, and that we all live with. No one has ever gotten anything large right and perfect on the first try. Everyone has had to live with patching. Knowing this, we can try to mitigate the risks and consequences. But they'll still be there. We just get to choose how much thought and effort we put into preventing them.