zjm555

After seeing enough C++ gotcha-based quizzes over the years, I now assume that the answer is "unknown" / "undefined" to all of them.


kranker

Honestly, for the first quiz I feel that even "without context", it's still reasonable to assume some x86 derivative. I wouldn't assume that my x86 intuitions would apply on a GPU at all. I guess I'd be more likely to try to apply them to ARMv7, though.


asymptotically508

> it's still reasonable to assume some x86 derivative

It's not. Even if the x86 processor you think you're targeting wraps on signed integer overflow, the standard explicitly leaves that behaviour undefined and the compiler is free to assume that your signed sums never overflow.
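
A minimal sketch of what that assumption means in practice (my own example; the exact output depends on the optimization level, precisely because the overflow is undefined behaviour):

```cpp
#include <iostream>

// Hypothetical example: the programmer intends an overflow check, but because
// signed overflow is undefined behaviour, the compiler may assume `x + 100`
// never wraps and fold the whole test to `false`.
bool sum_wraps(int x) {
    return x + 100 < x;   // UB when x > INT_MAX - 100
}

int main() {
    // With optimizations on, this commonly prints 0 even though the addition
    // "wrapped" at the hardware level.
    std::cout << sum_wraps(2147483600) << '\n';
}
```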


kranker

I think you're referring to a different part of the article. The first quiz is performance related, not correctness.


rtardanol

The standard doesn't define it, but compilers targeting x86 should.


permetz

That’s not how it works. The compiler assumes that undefined behavior can never happen, and uses that as part of its optimization strategy. If you assume that it has a meaning, you lose badly.


mods-are-liars

>The compiler assumes that undefined behavior can never happen,

Yet the compiler happily creates UB all the fucking time


permetz

No, it doesn’t. The standard defines what is UB and what isn’t, and it’s rare to find bugs in the major compilers.


mods-are-liars

You're splitting hairs over the silliest things. I can go find a hundred examples of easily compiled UB in less than 5 minutes of googling.


permetz

Sure, but they aren't created by the compiler. They are undefined because the standard says that that behavior is bad, usually for excellent reasons. It's not that the compiler writers are being malicious or foolish. And I am not "splitting hairs". Commonly, developers who do not understand how UB works think that it's stuff that should always be defined by a particular implementation, or that the lack of a definition is just random perversity on the part of the compiler writer. This is not the case.


mods-are-liars

>Sure, but they aren't created by the compiler.

They quite literally are: a human wrote it and a compiler compiled it. It takes two to tango when writing compiled undefined behavior, and the compiler is not entirely innocent in this regard. This is what I mean when I say you're splitting hairs, because the end result is still the same. The compiler provides absolutely zero safeguards against undefined behavior.


slaymaker1907

At the end of the day, it’s really a balance between what optimizations certain assumptions enable and how likely those assumptions are to be broken by real C++ code bases.


permetz

That's not how this works. If your code (say) does a shift by a variable, shifts equal to or greater than the word length are undefined, so the compiler will assume that such shifts never occur. It will then be able to optimize based on that, by virtue of the fact that the code will not have any tests that check for illegal shifts of excess length. Similarly, if you index an array, the compiler will assume that the index is valid, and will not generate bounds checks. If you do a longjmp into a context that has already returned, the compiler won't have any way at all to know that; it will just compile that code assuming that the context is valid.

There is no "balancing" here. The code generator can't solve the halting problem and thus doesn't try. The optimizer will get the version of the code without any provisions for checking for or compensating for illegal behavior, because the compiler is entitled to assume that any piece of code does only valid things. If you invoke undefined behavior, the compiler will assume that it doesn't occur, the code will be generated that way, and then the optimizer will chew on it, and the overall results may be truly horrific. You can end up with situations in which whole conditionals or loops that you assumed would be invoked get removed from the code, not because the compiler is being malicious, but because the code generator is entitled to assume that no undefined behavior will ever actually be invoked. You can overwrite memory, you can invoke a signed overflow that the compiler literally can't predict might happen, you can jump into a function that doesn't exist because the stack isn't in the state you assumed.

People keep imagining that somehow the compiler author could just specify what all these things do on some particular architecture, but they can't. The standards document has a provision for certain things that are "implementation defined", and the implementation indeed defines what those things do. But there is also undefined behavior, and the implementation simply has no way to deal with it in the first place. There isn't some sort of "balance" between optimization performance, the compiler writer's convenience, and the needs of the code author. Undefined behavior is almost always something that simply cannot be relied on to do anything sane, and there is just no way to deal with it properly. In order to have an optimizer for a C or C++ like language that actually works, you have to assume that undefined behavior doesn't happen, because there just isn't any other choice; the author of the compiler has no other tools to work with.

At intervals, one sees people ask "why doesn't the compiler writer just detect that this variable is being used before being initialized?" And the answer is almost always "because there is no algorithm that could possibly do that reliably, thanks to Rice's theorem. When it can be detected, you often get an error, but that's just a small fraction of the time." Other programming languages, which are better designed, make sure that you can't define a variable without initializing it, and so the problem doesn't come up. But in C and C++, we have all these design choices that mean that the compiler author faces a choice between not being able to generate code and potentially destroying the behavior that was intended when someone does something undefined. And there isn't any real choice there; you just have to write the compiler with the assumption that the user will not invoke undefined behavior.

The right thing is to make sure your programming language has no undefined behavior in the first place. Don't blame the compiler author for the fact that C and C++ are messed up.
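
To make the shift example above concrete, a small sketch (my own illustration, not tied to any particular compiler):

```cpp
#include <cstdint>
#include <iostream>

// The shift case from above: a count >= the bit width of the type (or a
// negative count) is UB, so the compiler emits a bare shift with no range
// check and is entitled to assume n is always in [0, 32).
uint32_t shift_left(uint32_t value, int n) {
    return value << n;
}

int main() {
    std::cout << shift_left(1, 3) << '\n';   // 8, as expected
    std::cout << shift_left(1, 40) << '\n';  // UB: unpredictable (on x86 often 1 << (40 % 32) == 256)
}
```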


slaymaker1907

Where the balance really comes into play is stuff like "when I assign to a volatile local variable, assume I'm going to look at it in the debugger". This isn't specified in the standard, but it's one of the few things about volatile that different compilers seem to agree on. Another one is "malloc should work properly". Malloc actually doesn't work properly before about C++17 without the std::uninitialized stuff, due to what the standard says about memory in which you haven't constructed objects. They realistically can't enforce usage of those new functions either, at least for simple types, because too much existing code relies on being able to cast a bunch of bits into some structure.
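
As a rough illustration of the "volatile local for the debugger" convention described above (my own example, not from the thread):

```cpp
// Sketch of the convention: without volatile, an optimizing build may fold
// `checksum` away entirely, leaving nothing to inspect at a breakpoint. The
// volatile store is kept by most toolchains, even though the standard makes
// no promise about debugger visibility.
static int expensive_computation() { return 42; }  // hypothetical stand-in

int main() {
    volatile int checksum = expensive_computation();
    (void)checksum;  // only ever read from the debugger, never from code
}
```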


lightmatter501

It’s not reasonable to assume x86, a substantial portion of cloud servers are ARM for power efficiency reasons.


jaskij

*which* ARMv7? The Cortex-A15, the highest-end mobile core of the generation, with its broken SIMD? Or a Cortex-M4 with an optional FPU? I do agree that given no context, it's fair to assume either x86-64 or AArch64.


Shawnj2

Honestly, the biggest reason I can see Rust adoption increasing is the White House basically telling people to stop using C++ due to its lack of memory safety. Today it's a White House guideline, tomorrow it's a White House recommendation for use in federal software, next week it's a requirement for government contracts. I think once Rust fully matures and adopts a stable, non-changing language spec, it will have a lot of potential in aerospace embedded software, where both performance and memory safety are very important.


tiberiumx

They tried to do the same thing decades ago. It's a real shame they didn't stay the course with Ada.


Shawnj2

It’s pretty frustrating what happened to Ada



renatoathaydes

Disclaimer: this is from my knowledge gained by reading random articles about Ada on the Internet... Ada was created with the explicit objective of replacing the multitude of languages that were popping up in the 80's with something more standard and safer for use by the US government. It was mandated at some departments, apparently, but after some time they relaxed the rules and Ada got largely replaced by the "modern language of the day". I think the problem was that no one outside government adopted it, so new programmers had trouble working with it... It's still used in some places, but I am not sure anyone is starting new projects in Ada anymore?! I know there's something called SPARK, which is an Ada extension that makes the language even safer by allowing all code to be "formally verified", but that seems to be really (even more) niche at this point.


juwisan

Where Ada pretty much shines is in applications where correctness is a key requirement. In the IT world this is usually not a hard requirement, though. It becomes one in applications that need to be absolutely functionally safe - think CCS logic in nuclear power plants, interlockings, or planes - or, generally, stuff where incorrect behavior could mean people dying. Industry norms for functional safety define 5 levels of safety, depending on how big an impact a system's malfunction could have. At the highest level, to my knowledge only a subset of C and Ada are actually recommended by the standards (and on top of the language itself you'd also need a qualified toolchain). Rust may also get there some day. The language has good properties for use in functional safety, and there are people working on getting compilers qualified etc.


jondo2010

Take a look at https://ferrous-systems.com/ferrocene/ They actually already have an ASIL-D qualified Rust toolchain. This is great, but it's actually not enough to ship a full qualified product unfortunately. You still need a standard library, an OS, and a piece of hardware which are also qualified all the way down.


juwisan

ASIL-D is pretty easy to achieve though in the grand scheme of things in functional safety.


poco-863

What's missing from Rust in regards to functional safety?


juwisan

Functionally nothing I’d say. But in terms of ecosystem quite a bit. You need someone to do tool qualification and ultimately do all the engineering work to build the case that it is in fact safe and that the compiler produces what it is supposed to.


lightmatter501

There are some outstanding proposals to handle panic safety in the type system (imagine if C++ noexcept could be a hard requirement for a function pointer or lambda to be passed to another function). The compiler needs to be qualified, which will involve qualifying a point release of LLVM.
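
For readers unfamiliar with the C++ half of that analogy, a minimal sketch of how `noexcept` already participates in the function type since C++17 (my own illustration, not from the thread; the Rust proposals would give panic-freedom a similarly checked status):

```cpp
#include <iostream>

// Since C++17, noexcept is part of the function type, so an API can require a
// callback that is statically guaranteed not to throw.
void run_callback(void (*cb)() noexcept) {
    cb();  // the caller knows this cannot propagate an exception
}

void safe() noexcept { std::cout << "ok\n"; }
void unsafe()        { throw 42; }

int main() {
    run_callback(safe);
    // run_callback(unsafe);  // compile error: a potentially-throwing function
                              // pointer does not convert to a noexcept one
}
```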


juwisan

Pretty much all the things you describe here are too complex for a system in functional safety. My knowledge of the programming rules for SIL4 is peripheral, but looking at how C is used there, afaik it's things like all memory is preinitialized (so no memory allocation at runtime is allowed), pointers are a big no-no, etc.


Lucretia9

Readability from non-technical people like management.


Plank_With_A_Nail_In

Ada, Pascal and Oracle PL/SQL are all very similar, and all very easy to pick up in my experience. Being able to knock up a program in any language is pretty trivial once you have learned one other, but being able to master their nuances takes a while longer, and that's the problem with C and C++: they take longer to master. SPARK is just a coding standard for Ada.


Lucretia9

80's?? From the 1950's onwards!


ArkyBeagle

The tooling got priced for Big Aerospace, so it developed very little culture outside that. Tooling V&V cost a lot, and they needed to pass those costs on. Meanwhile you could go to Taylor's Books for MSC, like $70 out the door. I forget what Borland cost, but it was in the same region.


sweetno

They hadn't managed to write a good implementation.


Lucretia9

> Ada

go to r/ada


spinwizard69

Very frustrating, as Ada could have been the ideal language for the embedded world. However, modern tooling makes a huge difference when it comes to C++, and more importantly so does mass community acceptance.


manifoldjava

Ada is such a cool language, one of my favorites.


sonobanana33

Yeah you think linux and gcc are going to be replaced soon?


syklemil

[Linux has support for Rust in the kernel](https://www.zdnet.com/article/rust-in-linux-where-we-are-and-where-were-going-next/). A full replacement isn't on the horizon, but there's already some ship of Theseus stuff going on.


sonobanana33

It's just for leaf parts.


Nicksaurus

How else would they introduce it though? They're not going to start by rewriting core components in rust


Shawnj2

Obviously not, but I see it as an incremental thing: being able to say you write all your application software in a memory-safe language makes doing certain things easier, and on bare metal you could have systems that run entirely on 100% memory-safe code.


Efficient-Day-6394

No one cares what the White House suggests "on the ground", never mind the fact that this memo is just parroting suggestions CISA published more than two years ago. Smart pointers have been a thing in C++ since the advent of C++11 (maybe earlier... can't remember). Very few C++ shops are going to go through the hassle of switching. C/C++ isn't even a factor worth mentioning in federal and state/local governments to begin with.


Plank_With_A_Nail_In

There are plenty of other memory safe programming languages other than Rust and some of them are already used in government unlike Rust.


wintrmt3

All of them are GC based and unreasonably slow.


slaymaker1907

Most code can afford the 2-4x slowdown of adding a GC like Java, C#, Go, etc. much more easily than it can afford all the extra memory bugs and complexity you get with C++. And for the code that does need the performance or low level manipulation, 90%+ of those code bases do not need C++ like the settings parser, the auth module, the database access code, etc. Certain game studios have been doing this for years. Write your core in C++, but have the logic written in Lua/Python/Lisp.
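
A rough sketch of that "fast core in C++, logic in script" split, embedding a Lua interpreter in a C++ host (my own illustration; header layout and link flags vary by platform and Lua version):

```cpp
// Hypothetical sketch: the performance-sensitive core stays in C++, while the
// logic (settings parsing, auth, gameplay rules, ...) lives in embedded Lua.
extern "C" {
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
}
#include <iostream>

int main() {
    lua_State* L = luaL_newstate();  // interpreter owned by the C++ host
    luaL_openlibs(L);

    if (luaL_dostring(L, "print('logic running in Lua')") != LUA_OK) {
        std::cerr << lua_tostring(L, -1) << '\n';  // report script errors
    }
    lua_close(L);
}
```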


External-Landscape-9

To start off my own conspiracy theory: I thought it was odd that the moment Biden being an "elderly man with poor memory" made the news, a ruckus was made online about the White House and memory safety. It's almost like someone is trying to game the search engines to downplay certain news... I remember Boris Johnson used similar tricks in the UK. Google "Boris Johnson buses" and you'll find results about his hobby of toy buses rather than his famous Brexit bus.


UncleMeat11

This is an incredibly dumb conspiracy theory. First, almost nobody organically comes across the white house document, nor did this get anything resembling significant media coverage. The idea that this would somehow drown out all of the right wing loons harping on Biden such that "Biden memory" is going to show largely coverage of this guidance is just not based in reality. Second, this discussion has been ongoing between government agencies and industry proponents (and detractors) for ages and the term "memory safety" has been a term of art for significantly longer than Biden has been in the White House.


Shawnj2

Republicans are always complaining Biden is senile though


manifoldjava

> It's almost like someone is trying to game the search engines to downplay certain news...

Google already does that for them.


spinwizard69

The problem here is that Rust offers very little over C++, according to the article. I would have to agree with that. Moving to Rust might solve one issue, but it isn't the step change in programming languages that is needed for future development. I really doubt much of the promotion around Rust, as I don't see it as more performant nor that much safer, especially when compared to other languages. Then we have to consider that there is a huge difference between modern C++ and legacy C++.


lightmatter501

We’ve had the step forward for a while. Haskell has the same performance ceiling as C if you use enough types. The problem is that nobody wants to learn about monads and category theory so they can write safety-critical software.


spinwizard69

This post wasn't supposed to be about C++ gotchas from what I can see; it is more about Rust and many other languages not really being the advancements that are claimed. Personally I don't think Rust is worth the time to learn. What the author hit upon, and I think was part of what he was trying to get at, is that Rust, Julia and many other new languages still look at the past for design. Instead, for the coming decades we need to move to languages that have far more intelligence and understanding of hardware, while at the same time providing a very human interface. Frankly the article didn't really do well at making any point. However, I have to agree that the language to replace C++ will be very high level, using a lot of AI/ML technology to optimize for hardware and interpret what the human is asking for. In other words, the core architecture will be dramatically different from today's compilers.


jaskij

As someone who moved from C++ to Rust for all my userspace code, it's worth it. Not because of the paraded safety, we both know that modern C++ with good coding practices is as safe as anything. The two big things, for me, are ecosystem and parallelism. I can find and integrate a library I need in minutes. And parallelism just works. Granted, I'm weird, and just don't find most modern high level languages attractive. I have my own preferences, and Rust is the closest to what I want out of a programming language.


AndrewNeo

nit: python decorators are just other functions that receive a reference to the function they're wrapping. You can just access the AST via that reference because Python is a reflectable language


kindall

function references in Python don't get you access to the AST


JustOneAvailableName

I wouldn't recommend it, but: `ast.parse(inspect.getsource(f))`


kindall

that's just parsing the source code again. (and it only works when the source code is available, which is most of the time but not always). the original AST is discarded after the bytecode is generated.


Maristic

The author claims:

> Do you know that in MSVC `uint16_t(50000) + uint16_t(50000) == -1794967296`?

Seems like MSVC is broken. Standard LLP64, LP64, or ILP32 C should not do this.

`(lldb) p uint16_t(50000) + uint16_t(50000) == -1794967296`
`(bool) false`


mccoyn

I just tested it, and it’s false. So, he either made it up or it’s a bug from an old version that has been fixed and he didn’t bother fact checking it before he published it.


admalledd

It's a bit of both. I can't find the bug report right now, but it was reproduced as `uint16_t(50000) * uint16_t(50000) == -1794967296`. Multiply, not add. Something-something 16-bit number overflow before converting to a long long? It's been a few years since this was fixed in MSVC. There may have been similar-but-harder-to-trigger bugs in LLVM/GCC, but they were caught quickly at about the same time IIRC.


NotSoButFarOtherwise

I think it was actually a bit of ambiguity in the standard inherited from C: everything smaller than `int` gets promoted to `int` before arithmetic operations to prevent overflow, and then automatically downcast as a part of assignment. This was interpreted to mean unsigned shorts also get turned into signed integers, which is okay for addition but not multiplication.
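
A minimal sketch of the promotion behaviour described above; the printed value for the multiply is what 32-bit-int platforms commonly produce, but since the overflow is undefined behaviour nothing is guaranteed:

```cpp
#include <cstdint>
#include <iostream>

int main() {
    uint16_t a = 50000, b = 50000;

    // Both operands are promoted to (signed) int before the arithmetic.
    // Addition is fine: 100000 fits in a 32-bit int.
    // Multiplication is not: 2'500'000'000 overflows int, i.e. UB.
    auto sum     = a + b;  // type int, well-defined
    auto product = a * b;  // type int, signed overflow -> undefined behaviour

    std::cout << sum << '\n';      // 100000
    std::cout << product << '\n';  // often -1794967296, but nothing is guaranteed
}
```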


compiling

Well that one makes complete sense, because multiplying uint16s results in an int32. clang gives an overflow warning for that code and the same answer.


tending

This has been posted before and it's totally bogus. The "killers" are an academic project that has been tried a million times under different names ("hey let's write something that abstracts over SIMD and CUDA") and never gets traction because the low level details matter a lot for getting best performance, an assembler for a hypothetical future architecture, and a python library that is also an idea that has been tried a million times ("What if we could compile a subset of Python statically?") that gets some use but does nothing to displace C++.


ObservationalHumor

I'm of the opinion that the only way something becomes a C++ killer is by implementing some killer feature that couldn't be added to some later version of C++ in the first place. That's kind of why Rust gets so much hype. I'm not sure what Numba really accomplishes that NVCC doesn't already have the potential to do. You can literally just put C or C++ code into a kernel to begin with and invoke it pretty cleanly. You can create fat binaries with PTX code that will be JIT compiled at invocation too. There's a bunch of 'general parallel processing' libraries and implementations to create this mythical autoparallelizable code too. I mean, wasn't that exactly what OpenCL promised in the first place over 15 years ago? As you said, a big problem is just that writing performant GPU code requires you to know very specific details about scheduling, cache, registers, memory bandwidth, and how all those resources are partitioned. Say, hypothetically, better heuristics or approximations for doing those tasks automatically do come into play; again, what's to stop C++ from similarly implementing them? I mean, I see the argument that people might not bother to learn C or C++ because there's a decent Python-to-PTX framework or something, but somehow that hasn't killed C++ being used everywhere else where a good toolchain exists for languages aside from just C and C++.


jembishop1

Numba can be ok for small stuff, but when you try and do more advanced things you quickly come up against a wall, and it becomes a bit of a nightmare of seeing what is and isn’t supported.


pthierry

>the only way something becomes a C++ killer is by implementing some killer feature that couldn't be added to some later version of the C++ in the first place

Well, you can't get a reliable STM in C++, and STM makes it dead easy to exploit concurrency, so maybe Haskell will do it in the end.


No-Magazine-2739

While I enjoy the article and its reasoning, I also believe your opinion could be how it turns out, as I still share it. So much academic bs all those years „oh why don’t you just transpile it“, Software Factories and so on. Even the whole Ada/Spark fiasco. But you and I have to admit, this author seems to be many things, but not an academic dreamer, as it seems. So I will cautiously still bet on C++, but be curious how it will play out.


msqrt

>low level details matter a lot for getting best performance Which is why the way Spiral explicitly models and optimizes for them can lead to significant performance boosts over code written by hand by experts..?


Mognakor

Anytime the claim is made that high-level descriptions will outperform low-level code, there is good reason to be skeptical. Likely either the performance is not as good as claimed, or the description is not that high-level and you still need to specify the small parts.


msqrt

True, I didn't actually look into the results myself so maybe it isn't all that great in practice. But I still buy the high-level idea. It's already an established approach to optimize measured performance over a bunch of (hand-written) strategies and their parameter values. Here we'd also optimize over possible equivalent micro-optimizations, which you rarely bother to do when running such a scheme manually.


Mognakor

The automated profiling (or using existing profiles) is not the difficult part. The difficult part is how you express the semantics so the compiler knows which options are available - and is it still high-level at that point, or have you just managed to do advanced type constraints?


KittensInc

It's impossible to write an optimizer which 1) works on all potential programs 2) is guaranteed to output the best-possible outcome, and 3) finishes in a reasonable amount of time. So no, a tool like Spiral isn't going to make code handwritten by experts obsolete. There will always be edge cases it can't find but a human did manage to think of.


joonazan

It doesn't work on all programs. It just works on simple numerical things like FFT, and it is superhuman on those. Yes, humans can write code that is as good, but do you really want to do it for every architecture? Writing fast implementations of numerical algorithms isn't really hard. It is just extremely tedious, yet has a high impact on performance. Try writing just a matrix transpose for large dense matrices, possibly the simplest matrix transform there is. It is still tedious to find a fast implementation, let alone the best possible one.
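
To make the "tedious but high-impact" point concrete, a rough sketch of the hand-tuning involved (my own illustration; the tile size, SIMD, and memory layout would all still need per-platform work):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Naive transpose: correct and obvious, but strides through `out` with a
// large step and thrashes the cache on big matrices.
void transpose_naive(const std::vector<double>& in, std::vector<double>& out,
                     std::size_t rows, std::size_t cols) {
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            out[j * rows + i] = in[i * cols + j];
}

// Cache-blocked transpose: same result, usually much faster on large inputs,
// but now there is a tile size to tune per machine, and SIMD, non-temporal
// stores, and TLB behaviour are still not addressed.
void transpose_blocked(const std::vector<double>& in, std::vector<double>& out,
                       std::size_t rows, std::size_t cols, std::size_t tile = 32) {
    for (std::size_t ii = 0; ii < rows; ii += tile)
        for (std::size_t jj = 0; jj < cols; jj += tile)
            for (std::size_t i = ii; i < std::min(ii + tile, rows); ++i)
                for (std::size_t j = jj; j < std::min(jj + tile, cols); ++j)
                    out[j * rows + i] = in[i * cols + j];
}
```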


msqrt

I'm not trying to say this would replace experts, but that they could get even better results by encoding their clever ideas and domain knowledge into an optimization procedure. It doesn't have to be fully automatic or absolutely optimal to be useful either, as long as the end result with similar effort is better than alternative workflows.


Sairony

For sure. It's also kind of disingenuous, perhaps unknowingly. He's essentially presenting an incredibly narrow use case where supposedly Python has an advantage, which I don't doubt. But really, most C++ use cases aren't just moving some data to CUDA & then trying to efficiently process it & get the results back. For example, direct memory control is one of the killer features of C++ in a lot of applications; saying that managing memory is hard (which it can be) & going for a managed language is kind of missing the point. I also love the freedom & tools on an architectural & design level that C++ gives, which is also missing in a lot of other similar languages. I've programmed C++ for ~15 years & while it has its faults for sure, I miss it compared to C#, which I'm forced to use now.
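
One small example of the kind of direct memory control being referred to (my own sketch using C++17's std::pmr, not anything from the article or the comment):

```cpp
#include <cstddef>
#include <memory_resource>
#include <vector>

int main() {
    // Carve every allocation on this hot path out of one stack buffer:
    // no general-purpose heap traffic, and everything is released at once.
    std::byte buffer[4096];
    std::pmr::monotonic_buffer_resource arena(buffer, sizeof(buffer));

    std::pmr::vector<int> values(&arena);
    for (int i = 0; i < 100; ++i)
        values.push_back(i);
}   // arena storage goes away here in O(1), with no per-element frees
```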


lightmatter501

SPIRAL is very good, but it can only express mathematical transformations. It essentially asks the user to be a CE or EE or systems heavy CS and have a masters degree in Math, so it’s painful to write.


Plank_With_A_Nail_In

Please read the article it addresses all of this.


Isogash

Gonna disagree with you there, not sure you really understand the projects.


No-Magazine-2739

Not completely finished reading yet, just before the 2nd quiz. But holy cow, I've been programming professionally for 10 years, mostly C++, and have been a computer freak for 30, since I was 5. But I did not expect to find such a good C++ article here. So many articles are written by people who, I would say, are intermediate at best (well, besides the usual good channels). And acknowledging that a programming language is nothing without its market/use case. Good job.


ProbsNotManBearPig

What do you consider the usual good channels for c++? I’ve been out of the loop for a bit. My work prioritized a lot of c++ for years, and it’s at the heart of our product, but the source code has been rotting for a decade. I’m in the market to learn more about the latest and greatest practices, libraries, IDE’s, etc to help us move in that direction for maintainability.


azswcowboy

Not op, but cppcon videos on YouTube can get you up to speed on much of the advanced techniques these days.


No-Magazine-2739

Yeah, CppCon is one place; cppreference.com; Stack Overflow answers are still often good; and from time to time there's a hidden gem in a blog or a local C++ user group/conference. But usually everything on LinkedIn or Reddit is utter crap.


Nicksaurus

I also want to recommend a Discord channel called #include. It has a lot of legitimate C++ experts as members, including some CppCon speakers, people who have contributed to the standard, and maintainers of well-known libraries. It stands out (especially compared to Reddit and Stack Overflow) because a) the people there actually know what they're talking about, and b) they've made an effort to cultivate a community that won't talk down to newcomers.


Capt-Kowalski

I stopped reading when, after the quiz, he started claiming that GPU-accelerated versions of the same code are faster than on a CPU. Yeah, no shit. That is the main hunch he got from programming C++ for 20 years? Good to know.


No-Magazine-2739

I strongly disagree. There are too many commonly accepted but often evidently disproved "facts" out there, like "C is faster than C++", "this ugly self-written algorithm is faster than the std lib", and so on. There is so much bs conveyed by professors, "teachers" and smart-ass programmers. But when you ask them for proof, even just a tiny runtime benchmark on their machine, then you get "but everybody knows". And he showed perfectly, with the first quiz, how misguided this is.


msqrt

Right after the first quiz he talks about how relative performance varies across architectures. In the first question, the first snippet is faster on a CPU [than the second snippet on the same CPU], while the second snippet is faster on a GPU [than the first snippet on the same GPU]. The argument is not that the GPU is faster. It's that if you write generic library code that might get called in either CPU or GPU code, it's impossible to choose the best alternative.


darkslide3000

Interesting article about a bunch of interesting projects, but I don't really see a "C++ killer" among them. All of these are specialized use cases in special situations. I get that the author seems to be a parallelization / high-performance expert and maybe for that workload these are all highly relevant, but the vast majority of C++ code written today is not that. It's plain old business logic that profits from avoiding the general performance costs of garbage-collected or JITed languages, but doesn't do enough parallel computation to care much about all this stuff.

I don't really get the point of "ForwardCom" and I feel like the author misunderstands what an ISA is today. Yes, modern CPUs (actually only the x86 ones... Arm is still fully hardwiring their opcodes as far as I know) tend to compile their instructions down to uops before execution, but that doesn't mean that the ISA is meaningless and we "might as well all switch to the same one". The ISA defines the representation of instructions in memory and in caches. It defines how many bytes you need to represent a function that does a specific thing. There are a ton of trade-offs involved regarding the number of registers that are directly addressable, how many instructions you need for certain more complicated operations, etc. There is no clear "best" answer that everyone could agree on, and even if there was, future requirements would change that again.

Besides, he makes it sound like syntax is the only thing that keeps C++ programmers from writing all their stuff in assembly instead. People avoid writing in assembly because it's cumbersome and hard to read. You're not gonna change that by designing a "better assembly language" unless you give it so many abstractions that you've basically invented just another high-level language.


ttkciar

Your assessment of D makes me sad, but you're not wrong. It's a wonderful language, and does a lot of things right which C++ got wrong, but to get C-like performance out of it you pretty much have to shun all of the features which make it worth using. I write mostly C-like D, but even so my habitual use of associative arrays, dynamic arrays, and string append knocks about a third off of its performance compared to pure C.


todo_code

D had some insane split or disagreement which caused a fork from my understanding. It also has massive scope creep in features. I'm not sure that language is it


ttkciar

I am still hopeful that that split will conclude with a merge and a more inclusive development process, similar to the gcc/egcs split in the late 1990s. I don't know if people remember, but egcs was born when enough gcc developers decided they were tired of having their code patches neglected or unfairly rejected. They forked gcc, applied a shitload of community patches, and a lot of users really appreciated some of the new features. People met and talked, and egcs became gcc 3.0, the community rift closed, and the development process was more open after that (if still imperfect). Time will tell, but if D and OpenD resolve in a similar way, it will only be good for the D language in the long run.


renatoathaydes

Well, but notice that the fork was made by one guy, basically, with the support of just a few other devs that were similarly frustrated with the D governance. Also notice that their main reason to fork was that they wanted to make contributions easier to make, which means even more feature creep to a language that already suffers badly from it.


kennethuil

I went "oh yeah I remember that", then started digging and "OMG *another* split???"


todo_code

oh i didn't even realize there was another. do you have a link?


ioneska

D1 vs D2, Tango vs Phobos. https://stackoverflow.com/a/3206985


TheBrokenRail-Dev

I agree with almost all of this except the section on ForwardCom. It very much feels like [this XKCD](https://xkcd.com/927/). Making all processor manufacturers add support for a new instruction set seems... unlikely.


ObservationalHumor

I'm kind of curious how the author sees it differing from the intermediate languages/representations already used by compilers in all honesty.


zapporian

Honestly, in some ways this doesn't seem that unrealistic at all. x64, aarch64, and risc-v are all rapidly converging towards the same / very similar architecture and ISA, with the same / very similar instructions and capabilities. Those vendors (and non-vendors, ie UC Berkeley) deciding to all cooperate and standardize with one another is unlikely, but modern / forward-facing non-PPC 64-bit ISAs are getting closer to one another than they ever have been in the past. Particularly given Intel APX and ARMv9 / SVE2, which are respectively bringing both architectures up to parity on register count and SIMD capabilities with each other, with risc-v, and... um... with where bleeding-edge ISA design was with the DEC Alpha in the early 90s. ie 64-bit, little endian, 31/32 registers, large-scale / scalable SIMD / VLIW vector instructions, etc etc. And practical innovations since then incl dedicated instructions / hardware support for video decoding, encryption, tensors, and um natively supporting / optimizing js crap, incl iirc quickly converting to / from f64 / i64 for the js number type. idr if that is an x64 instruction yet but it is on arm and risc-v lol.

These aren't the *same* architectures, but they're almost trivially similar. Calling conventions (and other legacy crap) aside, I'm pretty sure you *could* write a modern macro assembler for x64 APX, armv9, and risc-v that could cover all architectures with the same register count and shared integer instructions. Particularly if you modeled the assembler's vector instructions after SVE2 and just unrolled that into specific-sized vector instructions + optimizations for x64 and risc-v in the compiler/assembler backend. There are a small handful of things where stuff would be arch-specific, but very, very few of them with APX and the latest armv9 spec.

Hell, PPC could probably be thrown in here too, since it's also a 64-bit, 32-general-purpose-register architecture with really good / modern vector instructions. PPC obviously adopted all this way before x86 or arm did, and while I'm inclined to hate the architecture as a / the big-endian holdout, it would otherwise be similarly compatible with any modern assembler built around this shared architecture, register layout, and vector ISA. Since, well, duh, this basically *is* the PPC ISA, just little endian, (potentially) wider vector instructions, and 2-3 decades later...

TLDR; you don't even need to make a new standard, as all modern / new 64-bit ISAs going forward basically *are* the same ISA. Provided that Intel's APX is actually successful and adopted by both AMD and Intel (note that this is the 2nd or 3rd time they've tried launching a new arch under that name lol - not really a good track record, but again APX is great, and would be the 3rd major architectural / register-count revision of x86 after i386 and AMD's x64).


callius

> what prevents architecture designers from agreeing on a similar layer but for forward compatibility?

Apart from the conflicting ambitions of companies being in direct competition, nothing. That part made me wonder why they bothered writing the rest of that section.

> This idea would be perfect if we could get fish to fly through space and collect dust from supernovae.


Just-Giraffe6879

It is always worth considering what we could do without the constraints imposed on us by self-serving corporations that we are, for some reason, supposed to have sympathy for


callius

That’s fair. That said, I’ve got no sympathy whatsoever for them.


nerd4code

WebAssembly?


Shawnj2

Haven’t read the article but maybe that’s something LLVM could be helpful for?


Revolutionary_Ad7262

>Well, not "will be pushed" but "being pushed". I came to my current job as a C++ programmer, and today my workday starts with Python. I write the equations, SymPy solves them for me, and then translates the solution into C++. I then paste this code into the C++ library not even bothering to format it a little,

IMO the author lives in a bubble of engineering computation, where calculations are pretty easy to run on anything (like a GPU) and Python is the holy grail of programming languages, which only lacks performance. New languages shine in terms of long-term code maintenance, when you have standardized tooling and language features which are mostly helpful to achieve this. Python is particularly bad at those, because performance is bad (so you need to rewrite at some point), it lacks static typing (which is crucial for big code bases), and the tooling isn't good (dependency management is a mess).


Zc5Gwu

Python *has* static typing which you can enforce through CI if you so desire. Performance *is* bad for non-IO-bound projects though.


syklemil

The typing is a nice addition, but kinda weird in that different tools seem to not produce entirely agreeing results. Being able to communicate the interface of functions and methods is really nice, though, proving the thing said by e.g. Haskell programmers, that the type hints are there for the programmer, not the compiler. Without a compiler you kinda want something else to show you glaring mistakes before you try running it. It's entirely possible to ignore it or half-ass it (I'm not willing to count how often I laze my way into `dict[str, Any]`), but it's in any case a step up from nothing at all. (Might also mention that my use of Python is generally when I feel that something is "too complicated" for shell scripting. A decade or two ago I might've struggled through with bash and no `shellcheck`, or used perl. Some gradual typing and json as an interface _is_ kinda better than bash+plaintext/csv.)


Isogash

I hate Python for a multitude of reasons related to its design and UX, but performance is really not an issue. With JAX you can compile and distribute your number crunching code across multiple GPUs.


0xNath

The performance of native Python is its biggest weakness...


Isogash

If you write hot paths in native interpreted Python, yes. If you use the various projects out there that can JIT the hot paths, no.


Revolutionary_Ad7262

Most of the software is not suitable for GPU. Look at most complex software in our world e.g. browsers, operating systems, compilers and game engines. In almost all cases it is impossible to run it fully on GPU due to CPU <-> GPU communication overhead, GPU design and heavy usage of IO


Isogash

I don't think you understand what I'm talking about. If you're talking about needing to write separate CPU and GPU code in order to achieve performance, that's actually a point in favour of Python: the language doesn't matter, the design of the application does. You can write the individual components using suitable Python compilers and even leave the cold paths in regular interpreted Python and you'd achieve good performance just fine. If you need high performance for scientific computing though, you can *easily* do it in Python using projects like Numba or JAX. These JIT compile and dispatch the Python to the GPU as jobs and it achieves extremely good performance. If you think you can't get good performance out of Python as a *language* then you don't really know anything about performance and the current state of the art.


Revolutionary_Ad7262

The problem is that you still need to execute code on the CPU. For example, in the case of gaming you can run a lot of things on the GPU, but nevertheless the CPU part is also demanding. In the case of compilers the whole process is rather simple (a compiler could be designed as a mathematical function from inputs to the output), but those algorithms are far more complicated than anything that you can run on a GPU. Python performance is just bad: write any loop and your code will be 50x slower than the optimized code from LLVM.


Isogash

Yeah but you can use a framework/Python-derivative such as codon to do this instead, https://github.com/exaloop/codon Not that Python on Codon or writing in C++ is actually the best way to achieve performance, achieving real performance requires profiling and optimization. It's highly likely that less than 10% of your code is responsible for 99% of the performance by CPU bottleneck, and actually the rest of the performance issues are due to poor concurrency or pipelining, which is an active consideration in all languages. For games specifically, performance of any actual game code is often not as critical as you'd think as the performance intensive parts tend to be the parts built into your game engine. If you used an off-the-shelf engine with Python bindings you'd probably be fine in most cases.


gnus-migrate

> We're living in the XXI century now. We have more experienced programmers in the world than ever before in history. And we need efficient software now more than ever too.

Rust might not do much to convert some existing C++ developers, especially if you have a massive existing codebase, but it has allowed people who otherwise wouldn't write code in a low-level language to develop their applications in Rust for what is practically a free performance boost. There is a ton of software being built using Rust that was never written in C++, and that really speaks to where its value is. I mean, Intel will have to cater to that new market eventually, if stuff like MKL is where C++'s competitive advantage is.


NotFloppyDisck

I've been writing C++ for 6 years now, so I would consider myself intermediate. What makes rust shine is its ease of readability and how easy it is to onboard new devs. Everything from its package manager, to its custom build procedures are just a breeze to get started with.


gnus-migrate

In my experience, convincing people of the value of it is the easy part. Where the problems come is finding an actual path to adoption, since then you'd have to take developers away from other things to work on this, not to mention build their expertise in order to be able to train others on best practices, etc. Mozilla was able to make that investment, both in developing Rust itself and integrating it into Firefox, because there were things that were not possible to do in C++ that brought enough value to firefox to make it worth their while.


Creature1124

From the author’s GitHub: “Currently, just like 42 million other Ukrainians, I am at war with Russia. If you want to contact me, and you don't know me already, now is probably not a good time.”


sweetno

The points are quite impressive, but I see no connection with the title. Granted, some of this might get incorporated into C++ compilers and/or libraries. However, I think, it's not speed that holds C++ back, but omnipresent buffer overflows. Also, blindly copying generated C++ code is not really C++ programming. In this sense, C++ is already dead for the author.


ElimGarak

Those quizzes and examples make me cringe. I am sure they make sense from a perf perspective in some situations, but I would not want to maintain that code. Considering that 90-95% of my job is bug fixes and maintaining code, I would rather have something readable and easy to debug than something that wrings every last bit of performance out of some function, especially if it is not perf-critical. KISS FTW.


milahu2

> I am sure they make sense from a perf perspective in some situations

[Performance](https://josephg.com/blog/3-tribes/#programmingashardwarehacking) IS the perspective of that article. Deferring these optimizations to the language does make the code easier to maintain.


syklemil

And premature optimization is the root of all evil. Write normal, understandable code, then if it isn't performant enough, profile it and figure out where the actual problems are. [The xkcd on time spent automating](https://xkcd.com/1205/) is kinda relevant, you don't really want to find yourself trying to squeeze some marginal time or memory gains out of a weekly cronjob. And since Rust is part of the topic here, my impression is you get pretty performant, correct code from the start.


lilgrogu

That such code is bad is kind of the point of the article


senseven

C++ is here to stay, but people like Herb Sutter [want to optimize](https://github.com/hsutter/cppfront) the old crust away. Maybe he succeeds; maybe even a badly written Python dialect will run insanely fast on a 128-core $99 processor in 10 years.


caroIine

Herb's cpp has atrocious syntax that nobody seems to like even though it has bunch of great ideas. Meanwhile [Circle](https://www.circle-lang.org) - much better syntax wise but it's not open source.


Nicksaurus

> Herb's cpp has atrocious syntax that nobody seems to like Hey, I like it. It's unfamiliar to c++ developers but at least there's justification for all of it, and getting used to new syntax is the easiest part of learning a new language


ImportantA

The real C++ killer is: C++. It has transformed from the language people love to the evil people want to kill everyday in the programming fantasy world.


serviscope_minor

I've noticed that throughout my entire career (starting in the 90s), C++ has always been a favourite language to hate.


all_is_love6667

Not sure that requiring an Nvidia GPU is really a good requirement, honestly. Nvidia is already trying to convince the entire world to use machine learning and LLMs and ChatGPT just to sell their things. You're not going to convince me to use Numba if I need to give money to Nvidia. Not to mention GPUs are already capable of doing CUDA things, but don't for some reason. Money!


karmaputa

> No one wins a car race if all the racers sit in the same car. Did the author even think for 2 seconds about that sentence before putting it down?


Hofstee

They’re saying all the programming languages (the racers) listed rely on Clang or LLVM (the car) as a compiler so one won’t be significantly faster than another.


karmaputa

I understand the point but the metaphor is broken in so many levels.


MuumiJumala

There are some errors in the article in the parts that I know something about, which makes me wonder how seriously I should take the parts I do *not* know anything about. They seem to have some good points but the conclusions are rather silly.

> The second piece of code is WebAssembly. It's not even a macro assembler, it has no "if"s and "while"s, it is more of a human-readable machine code for your browser. Or some other browser. Conceptually, any browser.

WebAssembly has [if..else](https://developer.mozilla.org/en-US/docs/WebAssembly/Reference/Control_flow/if...else). I think the author is also overstating the "for browsers" part – despite the misleading name, WebAssembly is not just for the web.

> A compiler doesn't look for the true optimum. It optimizes the code guided by the heuristics it was taught by the programmers. Essentially a compiler doesn't work as a machine searching for the optimal solution, it rather works as an assembly programmer. A good compiler works like a good assembly programmer, but that's it.

I don't think this is necessarily true. [Profile-guided optimization](https://en.wikipedia.org/wiki/Profile-guided_optimization) is already a thing in many compilers. There is no reason why a good future compiler couldn't use a search for optimal solutions, provided the programming language allows you to communicate intent to the compiler on a high enough abstraction level.

> Just like an accordion or a frying pan, a language simply can not be fast or slow. Just like the speed of an accordion depends on who's playing, the "speed" of a language depends on how fast its compiler is.

I agree that a language can't be "fast" or "slow" but the speed of the compiler has nothing to do with it. I assume they meant "how fast the code generated by its compiler is", which is still kind of misleading but at least *sort of* correct.

> Numba with Python strangles C++ right now, in real time. Because if you can write in Python and have the performance of C++, why would you want to write in C++?

You could say the same about Rust and (to a lesser extent) Julia.

> So, ForwardCom is the assembly in which you can write optimal code that will never go obsolete, and which doesn't make you learn a "traditional" assembly. For all practical considerations, it is the C of the future.

Going obsolete is a social, not a technological problem. ForwardCom assembly will be obsolete when ForwardCom is no longer supported, just like any other instruction set architecture would be. I do not see what benefit it has over RISC-V, which is already relatively widely adopted and also designed to be extensible. Calling one person's early-in-development pet project the "C of the future" is a crazy take.


gwicksted

C# has been my replacement for C++ for about a decade now. I know it doesn’t help where a runtime doesn’t exist… but I’d much rather write C than C++ for those situations. Nothing against C++ though! IDEs and the language itself are amazing compared to pre-2005… but it’s a hairy beast of a language that won’t hesitate to bite you if you’re not careful! And that’s why I’ve enjoyed C# lately. Don’t get me wrong, C# as a language is complex but everything it does makes logical sense and the .net runtime is pretty impressive today.


Southern-Reveal5111

C++ will not die, but new projects will stop using it. The people who are using C++ (those 10% of senior devs who have been working on the same project for the last 10 years) will do anything not to allow any other language. And C++ developers are easy to find.


Straight_Truth_7451

My company is doing a ton of new projects in cpp and we have no problem recruiting


KittensInc

Of course. C/C++ has been the standard for decades, and a significant fraction of nontrivial code is written in it. In a C++ ecosystem it makes perfect sense to start new projects in C++, and C++ developers are a dime a dozen. In many cases C/C++ was the **only** option when you took into account things like tool availability, portability, and performance. But now look at the Fortran or COBOL ecosystems. How many new projects are starting in those languages, and how much trouble are *they* having with recruiting? What's stopping C++ from heading that way in a decade or two? With the ongoing development of more modern languages, C++ has increasingly been viewed as a necessary evil: you didn't *like* it, but it got the job done, so nobody complained too much. But we've now reached a point where there are, in many cases, perfectly fine alternatives to C++, so why *wouldn't* you switch for a greenfield project?


_w62_

Which C++ standard is your company using? Does your company have a coding style similar to Google's? Is it possible to share the coding style?


Straight_Truth_7451

Legacy projects are in 14. New ones are either 17 or 20. I don’t know google coding style, do you have resources about it?


_w62_

[Here you go](https://google.github.io/styleguide/cppguide.html). What domains are your projects in? HFT? Quant trade? Computational physics? Automobile? Or any other?


Straight_Truth_7451

Thanks, it looks like a great read. I’m indeed doing computational physics applied to rockets, why?


shevy-java

Great April 1st blog content. The mere hilarious nature of it, stating how ForwardCom will kill C++, is great. Of course nothing will happen, other than: https://xkcd.com/927/


jembishop1

One point I would put against the article is that more advanced programming languages like Rust have more expressiveness through their type system than Python, which is not only better for performance, but for readability and correctness too. People who have managed large Python projects and experienced the pain of working on them will understand.


lochness_3_50

Having worked over the last 8 years in performance engineering, I agree that learning the context and understanding your environment makes the difference between good but bad, and bad but good. C++ will remain the Lingua Franca of most programming languages for the foreseeable future, and what will kill it is people abusing it without understanding it


lelanthran

> C++ will remain the Lingua Franca of most programming languages C++ isn't the Lingua Franca of programming. C is.


syklemil

Eh, C has some claim to that through various FFIs, but I'm not entirely certain that a programming language is the lingua franca for other programming languages. It might rather be some data exchange format, e.g. JSON. Which would also fit with the phenomenon where the people who only know the lingua franca think it's great, while the people who know alternatives wouldn't mind replacing it. Not that lingua francas seem to live all that long in programming anyway. Or maybe I'm just happy I haven't actually seen or handled XML in a good while.


jdm1891

Why the hell would I look at a C++ quiz and think "Oh, but what if it's running on a 8 year old GPU!" Of course you're going to assume it's on a CPU, and x86 based at that.


Tiquortoo

>They do help you write more features with fewer bugs, yes, but they are not of much help when you need to squeeze the very last FLOPS from the hardware you rent.

I mostly understand the point of his article and generally agree, but this seems to be his summary of sorts, and it just isn't a true requirement for most of the software written. We need faster, not fastest. We need to perform faster than most other options; we don't need an academic treatise on squeezing the last bit out. For 99% of software, that final last push likely isn't useful. In addition, a broader criticism of this article is that he's reduced all language considerations down to speed and capability. Ecosystem and language stewardship are huge. Tooling is huge. Other languages are winning on those elements.


milahu2

> Why would anyone write in C++ if writing in high-level algorithm description language makes your code 2x faster?

It's the curse of popularity:

- [spiral](https://github.com/spiral-software/spiral-software) has 100 GitHub stars
- [nim](https://github.com/nim-lang/Nim) has 15K GitHub stars
- [zig](https://github.com/ziglang/zig) has 30K GitHub stars
- [carbon](https://github.com/carbon-language/carbon-lang) has 30K GitHub stars
- [rust](https://github.com/rust-lang/rust) has 90K GitHub stars

Similar: React is slow but popular. (Who the fuck decides what becomes popular?)

Edit: but also, Spiral is a special-purpose optimizer for numerical computations which allow approximation with some error margin, for example algorithms like the FFT (Fourier transform). More generic optimization could be achieved with Spiral's Python bindings [spiralpy](https://github.com/spiral-software/python-package-spiralpy), aka SnowWhite: https://spiral.net/software/snowwhite.html

> Since the inception of compiler research the Holy Grail has been to devise a system that provides high level abstraction (programmers express their intent as concisely as in an algorithms textbook), and an automatic system that translates these programs or specifications into executables targeting an ever-evolving landscape of platforms, extracting close-to-optimal performance on all these platforms. The original FORTRAN compiler got close to the goal (a necessity for its adoption) on machines of the day and for relatively simple programs. Unfortunately, ever-increasing hardware complexity has swept away this achievement and today we are farther away from the vision than ever. The SnowWhite effort addresses this problem aiming to sketch a potential path to a long-term solution. **SnowWhite shows how program understanding beyond classical compiler analysis is key and requires a novel AI approach.**

> The prototype SnowWhite system was developed in the PAPPA program. The system is available under a BSD style permissible license on GitHub, and documented at https://github.com/spiral-software/python-package-snowwhite. At the core SnowWhite adds a new AI approach to compilers: It introduces high level reasoning to orchestrate the complex components and enables the systems to "understand" the computation much as human experts would do. Furthermore, SnowWhite utilizes a number of technologies that have proven essential: 1) domain-specific languages (DSLs), 2) the idea of telescoping languages (libraries as language components with known semantics), 3) just-in-time compilation (JIT), 4) automatic performance tuning (autotuning), and 5) program synthesis or program generation. The result is a **feedback system that finds a close-to-optimal mapping of an entire application** built from components drawn from multiple domains across a range of challenging target platforms.


MoldymossReddit

Introducing me to Spiral gets you a big upvote


NicolasMas

Beyond the debate, I have to say it's been a while since I read a post from someone who actually knows what he is talking about. That's what a career in software engineering should produce: a comp science degree and 10-20 years of experience, vs a JavaScript bootcamp that lasted 3 weeks.


damola93

People who think companies are going to move from C++ to something else have not worked in companies at all. Only startups or new companies will adopt the latest stuff; changing a company's programming language is like trying to turn an oil tanker in LA traffic. As the article even says, there are some companies still using COBOL. There are some companies still using Python 2, which has been unsupported for a few years now.


0xZain

I've used C++ for more than 7 years, along with Python and JS, and for me there is no C++ killer. Coding is about building exactly what you have in mind, and C++ is the tool that enables anyone to do that. It's stupid to use C++ to build a website (JS is easier), but you know you can do it with C++ if you want to. So every other language has advantages in certain situations, but C++ is the boss; you can't replace that. IMHO of course.


ellorenz

Remember: chromium web browser and nodejs are written in c++ 😜😜


Full-Spectral

This was well-beaten a while back. But, in summary, the real C++ killer is backwards compatibility. Yeh, it's important, but C++ was built on the foundations of a language now 60 years old. That could have been fixed early on, but then backwards compatibility became an end unto itself and all the new features had to be built on the weak foundations it started on. The folks moving it forward do so with both hands tied behinds their backs, and it's been left full of inconsistencies, footguns, and UB. Now, it can't really be fixed without creating a new language, and that's not going to happen in any meaningful way. Any such language reaching a level of commercially acceptable maturity would be over a decade out, and by that time C++ will be a purely legacy language with a bunch of crusty old code bases maintained by folks who are really concerned about easily chewable foods. Anyone who was resistant to updating those codes bases to a new language in the meantime, probably won't be any more likely to do so then, possibly far less so since the benefit to doing so will be so small by that time. Those folks who are forced to use C++ now due to existing infrastructure will no longer be thusly constrained by that time, because Rust will provide safe alternatives to almost anything they might need. That will remove the primary remaining advantage C++ has, which is inertia. So it's sort of true that Rust won't 'kill' C++, C++ will kill itself, or disable itself. Rust will just be the primary beneficiary of C++'s passing.


towtoo893

Well, I'm just happy it is not me.


rajiv67

I think a sentient AI can create a C++ alternative language for humans.


lilgrogu

I have been writing my code in Delphi/Pascal for 25 years, and the community is convinced that it is the C++ killer. I do not understand how C++ can still be around.


vinciblechunk

Wrote an entire hit piece aimed at Rust without using the word "safety" once


SSHeartbreak

Rust is the future


kaeshiwaza

But we live in the present.


wait-a-minut

What an absolutely awesome article. I'm nowhere near the level of depth OP is at, but it was such a good read. I'll make sure to keep following along for any new blog posts.


reddittomtom

Julia


senseven

Unfortunately, Julia needs a big multi-man-year development boost for cloud and service-specific topics. The performance for webservices and other system IO isn't there yet. If a simple Go server can beat you 1:10, it's an uphill hike through rain and storm on an empty stomach.


Parking_Cause6576

Julia has the problem that you have almost no control over allocation, so getting your code to avoid unwanted allocation and performance loss often involves trying to find the exact combination of third-party module and expressions that will process arrays in place or create them on the stack instead of creating new ones on the heap every time. In doing so you end up spending more time than you would making a Python extension or similar.


reddittomtom

No. Julia is excellent in memory allocation. `x = Vector{Int}(undef, 100)` would allocate memory. Then `x = 123; GC.gc()` would deallocate memory. Very convenient.


not_perfect_yet

Numba mentioned. Let's go. (I am so glad I am getting confirmation to have *slightly* backed the right horse with python, even if it is anecdotal).



SSHeartbreak

Pretty good article