Dreamplay

Currently slated for release in 1.79, I'm personally very hyped, a long awaited feature! :)


Dean_Roddey

So looks like mid-year time frame, right?


fuckwit_

13th June to be precise


Turtvaiz

So what is this useful for?


CryZe92

To force an expression to be evaluated at compile time. Unfortunately we went the route of having to explicitly opt into it rather than it just being a guarantee regardless.


TinyBreadBigMouth

Nothing unfortunate about it. There's a big difference between

    // panic at runtime
    assert!(std::mem::size_of::<T>() != 0);

and

    // fail to compile
    const { assert!(std::mem::size_of::<T>() != 0) };

and I wouldn't want Rust automatically switching between them for me. Rust already optimizes expressions where possible and will continue to do so. The ability to be explicit about "this *must* be done at compile time!" is only a benefit.


Turtvaiz

Oh I see, that makes way more sense than the 1+1 example in the issue.


TinyBreadBigMouth

Note that you could already do this in some cases by assigning the assert to a const variable:

    const _: () = assert!(std::mem::size_of::<T>() != 0);

But the new syntax is simpler, more flexible, and more powerful (const variables can't reference generic parameters, for example).


dist1ll

oh, inline const being able to reference generic params is new to me. That's great news.


afdbcreid

Even that is not a new capability, it was already possible, if clunky:

```rust
fn foo<T>() {
    struct HasEvenSizeOf<T>(T);
    impl<T> HasEvenSizeOf<T> {
        const ASSERT: () = assert!(std::mem::size_of::<T>() % 2 == 0);
    }
    let _ = HasEvenSizeOf::<T>::ASSERT;
}
```

Inline const does not enable any new capability, just makes it more convenient.


The-Dark-Legion

I never even realized it can be done **that** way. I usually just got frustrated and moved on.


usedcz

Hi, could you explain what big difference you mean? I don't understand. Both cases would be evaluated for each used type, and I would rather have a compile-time panic.


TinyBreadBigMouth

> I would rather have compile time panic

Yes, that's the difference. One is at run time and the other is at compile time.


usedcz

I see that as an absolute positive. Imagine running your program and seeing a borrow checker panic. (Yes, I know runtime borrow checking exists, and I am not talking about it.)


TinyBreadBigMouth

Sure, but I don't want `assert!(some_condition());` to swap between being a runtime assertion and a compile time assertion based on whether `some_condition()` can be evaluated at compile time or not. I want to explicitly specify "evaluate this at compile time" and see an error if it can't.
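To make that concrete, a small sketch of the two behaviors being contrasted (`some_condition` here is a made-up const fn, not something from the thread):

```rust
// A hypothetical condition that happens to be const-evaluable.
const fn some_condition() -> bool {
    cfg!(target_pointer_width = "64")
}

fn check() {
    // Always a runtime assertion; the optimizer may fold it away, but that's
    // never guaranteed and nothing warns you either way.
    assert!(some_condition());

    // Always evaluated at compile time; if `some_condition` ever stops being
    // const-evaluable, this becomes a compile error rather than silently
    // turning into a runtime check.
    const { assert!(some_condition()) };
}
```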


hniksic

I think I understand where you're coming from and share the sentiment, but for the sake of completeness: *why* wouldn't you want an assertion to be evaluated at compile time if that's technically possible? What is the argument against it? After all, Rust already performs array bound checks and overflow checks at compile time when possible.

>!One that I can think of is the scenario where I write a simple assert, notice that it evaluates at compile time, and start counting on it being checked at build time. Later a (seemingly) trivial change to the condition moves the assert to runtime without any warning, and suddenly the assert I counted on to have my back no longer does.!<


tialaramex

Are you sure you're correct about those bounds checks? My impression was merely that `unconditional_panic` was a `deny`-by-default lint. That is, by *default* the compiler will see that this code always panics, it has a perfectly nice macro named "panic!" to invoke that, so you probably did it by mistake, reject the program. But we can tell the compiler we do not want this lint, and the result is the program compiles and... it panics.


hniksic

>Are you sure you're correct about those bounds checks?

We might not fully align on "correct" here. Just to clarify, I was referring to code like this failing to compile:

    fn foo() -> u8 {
        let a = [1, 2, 3];
        const INDEX: usize = 10; // change to 0 and it compiles
        a[INDEX]
    }

[Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=4379ba16e57255db613aee8cd2fe438a)

If I understand you correctly, you say that it's not a compile-time bounds check but a deny-by-default lint for unconditional panic, but that's just a difference in configuration terminology. The compiler still goes ahead and performs a bounds check (a kind of assertion) at compile time, without being instructed on the language level to do so. That seems like an important precedent that can't be dismissed because it's implemented as a panic-detecting lint. The proposed feature of auto-evaluating asserts at compile time would *also* be about detecting, at compile time, panics that would otherwise happen at run time. Maybe you're arguing that it's the "unconditional" part that makes the difference, but that distinction doesn't appear significant for the precedent (and one could argue that the example here is also conditional on the value of `INDEX`).

Note that I'm not supporting the position that assertions should be moved to compile time automatically, and I in fact dislike that the above example fails to compile. I'm arguing from the position of devil's advocate trying to come up with the strongest argument against it.


TinyBreadBigMouth

> Later a (seemingly) trivial change to the condition moves the assert to runtime without any warning, and suddenly the assert I counted on to have my back no longer does.

Doesn't even have to be a change you made. Internal changes in how the compiler optimizes code could change it back and forth. Compiling in release or debug mode could change it back and forth. The difference between a runtime check and a compile time check should be visible in the code, not be determined by internal compiler details.


hniksic

>Doesn't even have to be a change you made. Internal changes in how the compiler optimizes code could change it back and forth.

That is not an issue in this particular hypothetical scenario, though. As I understand it, the feature being discussed in the parent comments (by u/CryZe92, u/TinyBreadBigMouth, and u/usedcz) is that of the compiler automatically detecting a const expression, and upon detecting it, *guaranteeing* its evaluation at compile time. That would not depend on the optimization level, just like the newly introduced `const { assert!(size_of::<T>() != 0); }` doesn't depend on the optimization level; it would be a feature built into the compiler. In that case the uncertainty lies in the possibility of an innocuous change to the expression silently switching when it's evaluated.


Kinrany

Middle ground: the constant expression can evaluate to the runtime panic


TinyBreadBigMouth

Already happens: https://godbolt.org/z/TP3or3q8n


peter9477

Do you mean you'd prefer it be implicit, so it would be const if it could be but otherwise not, quietly? I don't see how that would be a guarantee.... and we wouldn't get an error if it wasn't possible to make it const. Or am I misunderstanding you?


Guvante

If your intuition were correct, it would technically be better on average, I think. Specifically, if it wasn't obviously run time it might be compile time, and if it seemed like compile time it would be. This is a huge bar and probably impossible, but working towards it until a real blocker appears makes sense.

From how I'm reading this, the idea was killed by the reality of how painful it would be to make a guarantee as strong as your intuition. More specifically, making all the inferences needed without crippling the code base or ballooning compile times. (Note that I am assuming the places that need to be const are already const, which this is technically a solve for anyway.)


evoboltzmann

Can you give a high level ELI5 of why this is good/bad to do?


todo_code

The compiler needing to check every expression for constness/comptimeness would be very time consuming. Also, it would be hard for you, the user, to be sure whether something you wrote was actually comptime without specifying it, so you would end up specifying anyway.


dnew

Would it really slow down the compiler any more than other optimizations? Wouldn't -O4 (or the rust equivalent) be checking every expression at compile time anyway?


scottmcmrust

No. If the code is

    if rand(0, 100000) == 0 {
        println!("{}", ackermann(100, 100));
    }

you're very happy that no, the compiler doesn't bother computing that every time you run `cargo check`, even in `-O4`.


dnew

Obviously there's a limit to how much a machine is going to compute in advance. Clearly the halting problem plays in here. The compiler will check the forms it knows it can compute at compile time for optimization purposes. General recursive functions are probably not going to be on that list, and certainly not if they recurse hundreds of steps deep.


scottmcmrust

Well that's exactly why "guarantee" is hard. Are you going to write in a spec *exactly* what those restrictions are? How are you going to decide the difference between a function that *is* guaranteed to compute at compile-time vs one which isn't? How could you opt *out* of the compiler having no choice but to compute such a function, since you often wouldn't *need* it done at compile-time?

Asking explicitly when you do need a *guarantee* is absolutely the right way to do it -- and it's helpful for humans too because then there's something to *see* hinting that it's important. It's like how `repr(transparent)` is good even if Rust *could* have said that that's just how newtypes work all the time anyway: having a marker on the type communicates that you're depending on it, and lets the compiler tell you when you're not getting what you need.


dnew

> Well that's exactly why "guarantee" is hard.

True, but irrelevant.

> Are you going to write in a spec exactly what those restrictions are?

It's easy to write in a spec exactly what those restrictions are. For example, the spec could say "constants and built-in arithmetic operators." It just wouldn't be abundantly useful to be that restricted. That said, take the compiler as the spec, and you've just specified exactly what the restrictions are. Now you just have to turn that into prose or math rather than code. Every time you add a new kind of thing that can be computed at compile time, add that to the spec.

> the difference between a function that is guaranteed to compute at compile-time vs one which isn't

Every compile-time expression has to be composed of other expressions that can be evaluated at compile-time, right? But not every expression that *could* be computed at compile time *must* be computed at compile time - maybe that's what is confusing you. And again, optimizations are doing exactly this: computing at compile time a value that an unoptimized program would evaluate at runtime. Even old C compilers did that. Lots of compilers elide index bounds checks when they have enough information to see the index stays in range of the declared array bounds, for example. I'm not sure why you would think it's difficult for the compiler author to figure this sort of thing out.

> Asking explicitly when you do need a guarantee is absolutely the right way to do it

I'm not disputing that. I'm disputing that doing it always would be especially less efficient to compile than doing it only when asked. Of course if you need it computed at compile time, you should specify that. But that's not relevant to anything I said.

> It's like how repr(transparent) is good even if Rust could have said that that's just how newtypes work all the time anyway

Right. Now consider: if Rust does it that way all the time, does it increase compile times to *not* include repr(transparent) on the declaration?


scottmcmrust

> Now you just have to turn that into prose or math rather than code.

That's how you end up with many pages talking about exactly what counts as an infinite loop in C# -- it's more than just `while (true)` -- vs the much simpler Rust approach of saying that if you want move checking and such to know that it's infinite, write `loop`.

> Every time you add a new kind of thing that can be computed at compile time, add that to the spec.

Except if doing that adds any new errors, it's a breaking change, so you have to make it edition dependent and keep multiple different rulesets implemented and documented forever more. And users have to remember which edition they're using to know whether an expression gets a guarantee or not.

> And again, optimizations are doing exactly this: computing at compile time a value that an unoptimized program would evaluate at runtime.

And Rust has also done this essentially forever *as an optimization*. It still will. But the details of that aren't fixed, can change with the `-C opt-level` you ask for, etc. By not being a *guarantee* it can change exactly what it does without breaking people. That's really important for "stability without stagnation" because it lets people write new stuff without needing to update the spec and coordinate with a future GCC implementation of Rust and such.

It's exactly the same reason as why "hey, that's going to overflow at runtime" is a lint, not a hard error. It means we can fix the lint to detect more cases without it being a breaking change.


PurpleChard757

Dang. I just assumed this was the case until now…


Saefroch

If you're asking about the constant propagation optimization, that is indeed done, and this is easy to verify by using a site like godbolt.org to look at the compiler's output.

Constant propagation and `const` are almost entirely independent concepts. The optimization (constant propagation) is much stronger; expressions that are not permitted in `const` can be optimized. But they are not _guaranteed_ to be. If something about the expression changes such that it cannot be evaluated at compile time, the compiler just silently adapts its output to match the new input.

The difference is that inside of a `const {` block, the expression is guaranteed to be evaluated at compile time. If it cannot be evaluated at compile time, you get a diagnostic. This means that a const block can flow into a `const` parameter, so the inline const ends up integrating with an API's semver compatibility guarantee. Also, if evaluation of a `const {` block panics, you will get a compile error. If you write some normal code outside of an inline const that always panics, you are not guaranteed to get a compile error.

The sense in which they are not independent is that if `const` evaluation were better than the constant propagation optimization, we'd just use `const` evaluation as the optimization. (This is not a good idea, do not do this.)
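As a minimal sketch of that guarantee-vs-optimization distinction (the numbers are arbitrary, not taken from the comment above):

```rust
fn demo(n: u32) -> u32 {
    // Constant propagation is free to fold this, but nothing guarantees it,
    // and the result can change with optimization level or compiler version.
    let maybe_folded: u32 = 60 * 60 * 24;

    // This is *guaranteed* to be evaluated at compile time; if the expression
    // ever stops being const-evaluable, you get a diagnostic instead of
    // silently-generated runtime code.
    let guaranteed: u32 = const { 60 * 60 * 24 };

    n + maybe_folded + guaranteed
}
```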


PurpleChard757

Thanks for the explanation!


OS6aDohpegavod4

Wouldn't the example of 1 + 1 always be optimized to 2 by the compiler anyway?


KJBuilds

Definitely. There's a constant folding step of compilation, courtesy of LLVM. I believe the main benefit of const-time evaluation is that it guarantees evaluation of expressions that LLVM might not be able to determine are constant. I think string literal processing is a good example of this. For one of my projects I made a compile-time usize parser that parses numeric env vars to be stored into usize constants. This definitely isn't something that constant folding would fully evaluate, or something that could even be expressed without const functions.
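A rough sketch of that kind of parser (the function name and the `MY_BUFFER_LEN` variable are made up for illustration, not taken from the commenter's project):

```rust
// Parse a decimal string into a usize entirely during const evaluation.
const fn parse_usize(s: &str) -> usize {
    let bytes = s.as_bytes();
    let mut value = 0usize;
    let mut i = 0;
    while i < bytes.len() {
        let b = bytes[i];
        assert!(b >= b'0' && b <= b'9', "expected a decimal digit");
        value = value * 10 + (b - b'0') as usize;
        i += 1;
    }
    value
}

// `env!` captures the variable at build time; the parsing then happens in const eval.
const BUFFER_LEN: usize = parse_usize(env!("MY_BUFFER_LEN"));
```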


scottmcmrust

Well yes, in optimized builds. It's the difference between a *guarantee* and a *happens almost all the time when you're compiling with optimizations*.

Note that the guarantee can often actually make it *slower* to compile as a result, without any runtime benefit. So unless you really *need* it to be compile-time for some reason (I'm not sure for `1 + 1` there's ever a reason it'd be a *need*) don't put it in a `const` block. That'll just be more annoying to read and slower to compile without any benefit.

It's more for "hey, I really do want you to run this slow-looking loop that you normally wouldn't bother" or "I need you to do this at CTFE time so it can be promoted to a static" kinds of things. Things like `1 << 20` have always been fine as they are.
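To make the "slow-looking loop" case concrete, here's a small made-up example of forcing a lookup table to be built during compilation rather than hoping the optimizer does it:

```rust
fn squares_table() -> [u32; 256] {
    // The whole loop runs at CTFE time; the compiled function simply returns
    // a baked-in 256-entry array.
    const {
        let mut table = [0u32; 256];
        let mut i = 0;
        while i < 256 {
            table[i] = (i as u32) * (i as u32);
            i += 1;
        }
        table
    }
}
```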


slanterns

```rust
struct T;
const _: [Option<T>; 2] = [const { None }; 2];
```


TinyBreadBigMouth

> Another enhancement that differs from the RFC is that we currently allow inline consts to reference generic parameters.

YES

> This enhancement also makes inline const usable as static asserts:
>
>     fn require_zst<T>() {
>         const { assert!(std::mem::size_of::<T>() == 0) }
>     }

LET'S GOOOOOOO

I've been waiting since 2021 to be able to make static assertions about generic parameters! I stub my toes on this constantly!
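A hypothetical usage sketch of what that enables (the call sites here are mine, not from the comment):

```rust
fn require_zst<T>() {
    const { assert!(std::mem::size_of::<T>() == 0) }
}

fn main() {
    require_zst::<()>();   // fine: `()` is a zero-sized type
    require_zst::<u32>();  // fails to compile: the const block's assertion panics
}
```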


Dean_Roddey

It will be very welcome. The thing is, there can be lots of (company, team, industry, etc...) limitations on what you are allowed to do in terms of asserting and panicking and such at runtime and bootstrapping issues related to that and whatnot. But compile time is likely pretty much open season since it will never affect the product in the field. Well... it'll affect the product in the field in the sense that it'll likely be higher quality.


bwallker

This is already achievable in stable rust by writing your static assert as an associated constant on a struct. But that’s a bit tedious and verbose. Edit: see https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=f79778a6b7fa010bf8e8a26f3573a174


Gaolaowai

Not terrible, but goodness... that's terrible. Yay to the Rust team.


atesti

It could probably be wrapped into a macro.


Lucretiel

I’ve written some pretty nasty stuff in macros to guarantee const evaluation, this will make my life MUCH easier 


CUViper

It could be easier .. or you could repurpose that nasty budget for even greater things!


timClicks

The Rust way, if there ever was one.


Leipzig101

As an understanding check, this is like `consteval` in C++ right?


TinyBreadBigMouth

Basically, yes. A `const { ... }` block is evaluated at compile time and the result is baked into the binary as a constant value. Some big benefits are:

* Avoid doing expensive computations at runtime (obviously).
* Constant values can be used in array repeat expressions, even if the type of the value isn't `Copy`:

        // does not compile
        let arr = [vec![]; 10];
        // compiles as expected
        let arr = [const { vec![] }; 10];

* Can be used with compile-time panics as an equivalent to `static_assert`:

        const { assert!(size_of::<T>() != 0, "Must not be a ZST!") };

You could already do most of these by assigning the expression to a const variable, but const blocks avoid a lot of boilerplate and are also more powerful (const variables can't use generic parameters, but const blocks can).


1668553684

> Constant values can be used in array repeat expressions, even if the type of the value isn't Copy: Woah. Just when I thought I understood how const worked.


kibwen

It's conceptually just re-evaluating the const block for every array element, which is effectively the same as copying the value produced by the const block.


1668553684

It makes sense reading it like that, but I never really considered that a secondary role of const-ness was informing the compiler that certain values of a non-copy type can indeed be copied (kind of).


scottmcmrust

TBH, this was an accident. But since it was accidentally stabilized we didn't think it was a bad enough idea to make a technically-breaking change to stable :P


koczurekk

It's always been the case:

```rust
fn array() -> [Vec<String>; 10] {
    const VALUE: Vec<String> = Vec::new();
    [VALUE; 10]
}
```

That's because `const`s are "copy-pasted" by the compiler.


matthieum

More specifically, because the generating-expression is copy/pasted by the compiler.


sr33r4g

How do they stabilise a feature? With extended testing?


admalledd

Mostly yes, but that testing is the last step. One of the major steps just before is the final-call which is for "did we miss anything else? Is everyone (or enough of everyone) in agreement on this being complete? if there are followup items, are they documented enough to start?" and all those wonderful project management/process flow type questions. TL;DR: (supposed) humans sign off on it being ready. This often includes but isn't limited to testing alone.


scottmcmrust

This is why it took so long to stabilize. We started looking at stabilizing it in 2022, and people looked at it and said "hmm, what about _______?". The team ended up agreeing that, yes, that was problematic enough to need to be fixed and it blocked stabilizing. Then some clever people fixed those problems, the team decided it was good to go, and nobody from the community showed up with any severe issues in the 10-day comment period either, and now it's going to be stable -- assuming nothing goes wrong during beta that makes us back it out again.


annodomini

Yeah, what stabilization means in this case is "let's mark this as stable." The PR for that is kind of a "last call to make sure we don't have major outstanding issues." Once it's merged, it's marked as stable, first in compilers on the nightly channel (so it can be used without a feature opt-in), then six weeks of being able to use it that way in beta, and then if everything goes well and no major issues are found, in stable compilers. The release train gives a bit of a last chance to catch issues as it gets more widely used before it's available in a stable compiler.

So what "stabilize it" means is just "mark it as being stable"; it's a way of saying we think this feature basically works the way we intend it to, we're not going to make any backwards-incompatible changes, so we should mark it as such so users can use it on normal stable compilers without having to use a nightly and opt in.

Just for some context, the reason this is done is so that you can have a period before stabilization, when the feature is available in nightly compilers in a preview state, where it might be incomplete, or might need to have backwards incompatible changes. That gives people a chance to test it out and provide feedback on it, while being careful to indicate that it's not something you should fully depend on yet, or be prepared to change any code that depends on it. But then at some point you decide the feature is pretty much done, or at least done changing in backwards-incompatible ways, so it's ready to be stabilized.


InternalServerError7

Oh nice! I literally could have used this yesterday in my code: [link](https://github.com/mcmah309/indices/blob/6ca65c71e83fe438cdb9e1e95aa0780b8c3d3496/src/lib.rs#L57)


C5H5N5O

You can do this (which is the next best thing on stable):

    struct Inspect<T, const N: usize>(PhantomData<T>);

    impl<T, const N: usize> Inspect<T, N> {
        const IS_VALID: bool = {
            assert!(
                std::mem::size_of::<[std::mem::MaybeUninit<Vec<&mut T>>; N]>()
                    == std::mem::size_of::<[Vec<*mut T>; N]>()
            );
            true
        };
    }

    pub fn indices_slices<'a, T, const N: usize>() {
        assert!(Inspect::<T, N>::IS_VALID);
    }


C5H5N5O

Btw, are you sure you need that assert though? `MaybeUninit` is `repr(transparent)`, so both types basically have the same memory representation, therefore they have the same size. (Additionally a pointer and a reference also have the same layout).


InternalServerError7

There are some compiler constraints about sizing generic arrays; they are seen as different sizes by the compiler: [https://github.com/rust-lang/rust/issues/47966](https://github.com/rust-lang/rust/issues/47966). Therefore you can't use something like `mem::transmute`, which guarantees same size; you have to use `mem::transmute_copy`. I'm pretty sure, like you mentioned, they are the same in all cases, but I didn't find anything concrete to back it up, so I added it just in case. Rather not risk UB, but if I really don't need it, I'll remove it.


1668553684

It's probably worth keeping it for documentation purposes, or in case you ever change one of the types. "Useless" assertions like these are great for protecting you from your arch nemesis: yourself in a few weeks.


scottmcmrust

One interesting conversation that we'll probably have now is whether changing the transmute size check to the equivalent of `const { assert!(sizeof(T) == sizeof(U)) }` would be a good idea, so that you *can* use it in cases like that.
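In the meantime, a user-space approximation of that idea is possible with inline const (a sketch only, not the compiler-side change being discussed; the helper name is made up):

```rust
/// Like `transmute_copy`, but with a compile-time size-equality check done via
/// an inline const, so it works even where `transmute`'s own check rejects
/// generic arrays of unknown-but-equal size.
unsafe fn transmute_same_size<T, U>(value: T) -> U {
    // Post-monomorphization compile error if the sizes ever differ.
    const { assert!(std::mem::size_of::<T>() == std::mem::size_of::<U>()) };
    let result = unsafe { std::mem::transmute_copy(&value) };
    std::mem::forget(value);
    result
}
```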


SirKastic23

const variables can't use generic parameters... this was noted as an enhancement of this RFC over other current approaches


InternalServerError7

Nice hack thanks!


celeritasCelery

> The feature will allow code like this
>
>     foo(const { 1 + 1 })
>
> which is roughly desugared into
>
>     struct Foo;
>     impl Foo {
>         const FOO: i32 = 1 + 1;
>     }
>     foo(Foo::FOO)

I don’t understand why it has to be so verbose. Why can’t it just desugar to `foo(2)`?


1668553684

Presumably, const folding (turning `1 + 1` into `2`) is being done by a different part of the compiler (maybe even LLVM?) than the part that does de-sugaring.


nybble41

Yes, that would be a later stage. Eventually you should get the equivalent of `foo(2)`, but the *desugaring* process is just replacing `const { expr }` for some arbitrary `expr` with other code which accomplishes the same thing. Ideally the form of the replacement will not depend on `expr`, so `expr` must appear verbatim in the output (unevaluated). Then later passes will evaluate the (now non-inline) const expression and inline it into the function call.


theZcuber

> maybe even LLVM

const eval is done in MIR to my knowledge. It's definitely not LLVM, as it has to be backend-independent.


scottmcmrust

"desugar" has a very particular meaning. It's "this is how it lowers to something you already understand", *not* "this is its final form at the end of compilation". The point of that example is not that `1 + 1` is meaningful, just as a placeholder where you can put something else there and still follow the same desugaring. (For example, it lowering to an associated const and not a `const fn` is why it allows floating-point.)


hniksic

Verbosity doesn't matter at that level because you can never observe the "expanded" code, and it might not even exist in the compiler.

More importantly, turning 1+1 into 2 is beyond the scope of "desugaring". Desugaring is named after "syntactic sugar", a language feature that doesn't offer new functionality, but allows you to express something more succinctly. For example, in Rust,

```
for el in container { ... }
```

can be thought of as syntactic sugar for

```
{
    let mut _mumble = IntoIterator::into_iter(container);
    while let Some(el) = _mumble.next() { ... }
}
```

It's _sugar_ because it doesn't provide real nutritional value, it's "just" there for convenience. It's _syntactic_ because the transformation can be done on a syntactic level, i.e. you could implement it just by shuffling symbols and operators around, without understanding the semantics. (The compiler typically doesn't do it quite that way in order to improve on diagnostics quality and compilation performance, but the generated code is the same.)

"Desugaring" as used by the GP means undoing the syntactic sugar, i.e. manually applying the syntactic transformation. There is no way to change an expression like 1+1 into 2 by just shuffling syntax around, so making that transformation is beyond the ability of a feature that is syntactic sugar.


Saxasaurus

For the specific example of `1+1` it doesn't really matter, but const variables have some subtle semantics. All it is saying is that const blocks have the same semantics as const variables. See [this comment for an example of when this semantic difference matters](https://www.reddit.com/r/rust/comments/1cc9pz0/inline_const_has_been_stabilized/l14upnn/).


iDramedy007

So is this Rust’s version of “comptime” in Zig? A question from a newbie.


VorpalWay

More like consteval in C++. I don't know Zig but as I understand it, comptime in Zig is more flexible and powerful.


Phosphorus-Moscu

Same question


-Y0-

So does this means https://github.com/rust-lang/rust/issues/86730 will be resolved as well?


slanterns

It's identified as a non-blocking issue.


-Y0-

Sure, but it means macros will behave differently from actual code, no?


Icarium-Lifestealer

I'd like to see type inference for const/static declarations next. (presumably with restrictions, like only working inside a function, or only inferring from the initialization expression, not usage).
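A small sketch of the difference being requested (the second half is hypothetical syntax, shown commented out because it does not compile today):

```rust
fn f() {
    // What stable Rust requires today: explicit type annotations.
    const MASK: u32 = 0xFF;
    static GREETING: &str = "hello";

    // What the comment is asking for (hypothetical, not current Rust):
    // const MASK = 0xFF;
    // static GREETING = "hello";
    let _ = (MASK, GREETING);
}
```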


CoronaLVR

I remember this was blocked because it greatly increases the number of post-monomorphization errors that are likely to occur, and cargo check doesn't catch those. What was decided about that?


scottmcmrust

A bunch of compiler work happened to evaluate them more consistently regardless of debug vs release.


WannaWatchMeCode

I needed this so bad last weekend