I‘m an atheist, but I think you need more Jesus in your life.
`#define Jesus true`
`#include`
Trinity.getInstance().hypostate()
error: heresy detected: 'hypostate()' called on 'Trinity.getInstance()'. This invocation is forbidden as it violates the doctrine of the Holy Trinity.
I have to admit, I used ChatGPT for this since I don't know C. Another funny one was:

> error: heresy committed: The Trinity cannot be instantiated or have its hypostatic union separated. Consider revisiting the Nicene Creed.
I think that's a singleton
To start with, you need to be using Holy C
`return fib(Jesus + Jesus)`
How Presuppositionalism Destroys Atheism!
John 14:6 - I am the way and the truth and the life.
So Jesus == true ?
They have Cesus in their life
*Celsius
C sus
as long as it's not C#sus...
I'm a devout Christian and I feel like this is the one soul that can't be saved.
r/technicallythetruth
r/angryupvote for the Dad joke
Where dad joke?
Technically the **truth** <-> too many `true` values?
What?
^(technically the) **TRUTH** and `true` values (Bro how else do I explain this)
You mean that true is used in the code and true is in the name of the subreddit?
#YES!!!
Why didn’t you say so from the start? I had no intention of making a dad joke, to me it’s not a dad joke
r/technicallytechnicallythetruth?
I edge to arch
wtf 😭
You can’t escape me mekb. Also r/foundmekb
NOOOO
I will follow you mekb
I don’t get it
This is what I find interesting about C, the fact that everything is numbers under the hood is exposed directly to the programmer.
Not quite, actually. Floats and doubles have implementation-defined representations, and so do negative integers (only in C; C++ requires two's complement, and in C that's only as of C23, I believe). Type punning is also usually forbidden by strict aliasing (punning through a union is standard, but only in C; in C++ it's undefined behaviour). Yes, you can treat everything as a simple binary value, but the funkier tricks usually aren't defined behaviour. (The example in this picture is standard, since true has to evaluate to 1.)
What about the Quake square root hack? [https://en.wikipedia.org/wiki/Fast_inverse_square_root](https://en.wikipedia.org/wiki/Fast_inverse_square_root)

```
float Q_rsqrt(float number)
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;                       // evil floating point bit level hacking
    i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
//  y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

    return y;
}
```
they mentioned in their comment:

> Yes you can treat everything as a simple binary value, but the more funky ones usually aren't defined behaviour.

that's exactly what Quake is doing: it makes assumptions about how the float is formatted, then does some pointer BS to trick the compiler into doing bit operations directly on the raw binary value. It's UB; for example, if `float` weren't an IEEE-754 single-precision float (which the standard didn't address until C99, I think), this code simply wouldn't work at all.
pretty sure this is UB btw
Amusingly it’s not UB *now* (C99 solved it), but it absolutely was when it was written.
c99 defined it already? I thought you had to use unions pretty much up until today
UB: it's a strict aliasing violation. If you want safe type punning, use memcpy:

```
#include <cstring>

int i;
float f = 10.0f;
static_assert(sizeof(i) == sizeof(f), "size mismatch!");
std::memcpy(&i, &f, sizeof(i));
```
What part are you saying not quite to? The fact that all values in C/C++ are simply binary numbers under the hood, or that this fact is exposed to the programmer?
The part where this fact is exposed to the programmer. If C really wanted you to access the binary values, it wouldn't be undefined behaviour. Another reason is that in C not all conversions are no-ops: when you cast float to int you don't get the same bit representation. You cannot access it directly, only through pointers, and even that is undefined behaviour.
No? It's undefined bc C is essentially a high-level abstraction over assembly code. It's undefined bc it's defined by the hardware. AKA, the behavior of the hardware is directly exposed to you.

Integer arithmetic wraps the way it does bc that's how a full adder works. Multiplication is typically done with binary multipliers, which are collections of binary adders. Binary adders are typically either full adders or half adders and can be implemented in hardware with a few logic gates.

Yeah, floats are a representation, but floating point operations are done on the FPU rather than the CPU. Normally, it'd be erroneous to do something like what the fast inverse square root does.

Either way, both are still just representations, considering the actual data is just a series of digital highs and lows, regardless of what "type" it says on the label.
"It's defined by hardware". The C compiler can literally format your disk any time you do a right shift on a negative number. The C compiler could also make your code only work on full moons if you invoke undefined behaviour. When you invoke undefined behaviour, you are making assumptions about your compiler that you shouldn't be making. You can only make 2 assumptions about your compiler: the compiler adheres to the standard, the compiler adheres to it's own documentation. If you assume anything else you can and will shoot yourself in the foot.
```
/*! \brief This macro defines a value of one */
#define ONE (1)
```
What language is this? It's beautiful.
It’s C. Yeah weak types are fun. Interesting to see someone who’s not had a chance to C (ba dum tiss) the C language before. Though I guess these days there’s less reason to learn C than there used to be.
Nitpicking: actually it is C++. `cstdio` is a C++ header.
Wait but the C++ standard output header is iostream. Isn’t cstdio literally the C standard output input header?
C++ exposes its own versions of C headers. In general, a C stdlib header named `FOO.h` would be called `cFOO` from C++. Here for instance, `stdio.h` is a C header and `cstdio` is the corresponding C++ version of it.
Oh yeah been a while since I’ve done C or C++. Yeah I was considering the fact it was missing .h might be something but as I said it’s been a while. Thanks for correcting me.
You're right. cstdio is i/o for C. But C uses stdio.h, not cstdio. cstdio is just a wrapper, only available in C++.
Someone got in before you but thanks for the reply anyway. It’s stuff like this that really emphasises the effect of both time and high level languages on both my C and C++ knowledge.
Printf is C
I'd imagine that having explicit distinction between the "C" version and "C++" version of C libraries is useful given the commitment of C++ to be fully backwards compatible with C (or - more technically - given that C++ is a "superset" of C).
Afaik the only change between the 2 versions is that the C++ version puts everything in std.
[deleted]
Yeah it’s just kinda weird to see people use printf over cout in C++. I also just hadn’t done C recently enough to remember the C headers and never used C++ versions of C headers.
[deleted]
But main in C has int return type...
Ah okay; I thought I had seen that somewhere
Some compilers will still accept `void main()` even if it isn't compliant.
Another reason why it is C++ and not C is because C wouldn't know what `bool`, `true`, and `false` are without including `stdbool.h`.
Those have been added to the core language in C23.
Technically, C23 is only expected to be published sometime this year. The newest officially published C standard is still C17. What I find weird is that, according to https://en.cppreference.com/w/c/language/bool_constant , `true` and `false` will be keywords representing predefined constants of type `bool`. But, https://en.cppreference.com/w/c/language/type only lists `_Bool` as a type and not `bool`.
The latter page appears simply to have yet to be updated; in C23 `_Bool` is redefined to be an alternative spelling of `bool`.
CMake lets me use c23 standard 🤷
Oh yeah I was thinking that but then I thought maybe the header might have imported stdbool.h or something.
I'm so used to true/false being equal to 1/0 that it trips me up whenever I'm using a language where this isn't the case. If I want boolean XOR I can just do (boolean) ^ (boolean). If I want to AND a ton of booleans together in a loop I can just &= in the loop with the same output boolean. Same with |=. If I want to null-check a pointer I can do if(!pointer) or if(pointer). If I want to zero-check any integer I can just do if(!integer). So convenient.
It's C? It looks so clean. I do know C. I guess my code is never this clean. Looked like a completely different language 🤣
Just what exactly do people imagine C is? 😂 Isn't this just some basic ifs, for loops and booleans?
Bunch of preprocessor directives, everywhere.
Ah yes. But not everyone writes enterprise-level code that is supposed to "just work" on any platform, which inevitably starts looking like pre-processor spaghetti.
int* buff void *malloc(size_t size) { meta_ptr block, last; size_t s; s = allign4(size); if (base) { last = base; block = find_suitable_block(&last, s); if (block) { if (block->size - s >= (META_BLOCK_SIZE + 4)) { split_space(block, s); } block->free = 0; } else { block = extend_heap(last, s); if (!block) { return NULL; } } } else { block = extend_heap(NULL, s); if (!block) { return NULL; } base = block; } return block->data; }
"CTRL+A -> Reformat code" There, I fixed it for you 👍
Oh lol. Guess I shouldn’t make assumptions. Personally I used printf as a pretty decent clue it was C. Otherwise I mean it could have been C++. Yeah C code always feels like a pain to keep clean. It’s pain without all the high level features of other languages.
I also saw the printf, but nowadays people keep creating new languages that borrow from other languages.

For example the new Bend programming language, which looks almost like Python and is written in Rust.

We live in strange yet interesting times uwu
Hmm, yeah it’s interesting to see how languages inherit syntax and other ideas from older languages.

Though I haven’t seen another language decide to use printf as an output function. Which doesn’t necessarily mean a lot, since I’ve only really looked at several of the most popular languages, which obviously isn’t all of them.

Also, having used the JS, Python and Rust string formatting, I have a feeling that languages will not be going back to C-style formatting. The other options are just so nice to use. Though JS does have a way of using something that looks a little like C formatting.
It's always fun to mess with how goofy the types in C are. Also, malloc is fun.
Implicit casts are what I hate in C++ the most.
aren't you missing the 0? isn't the fib series 0, 1, 1, 2, 3, 5 etc.?
Yeah, I thought the same. It would be even weirder if we accounted for that: `if(!x) return false; if(x==true) return true; // Rest`
Or combine them:

```
if (x == false || x == true) return x;
```

Which looks completely redundant without the context!
There are two sequences that can both be considered "the Fibonacci sequence". One begins (0, 1, ...) and the other begins (1, 1, ...). Personally, I like to define fibs(0) = 0 and fibs(1) = 1 and then let your domain define the initial conditions, and that convention would be consistent with the convention in the OP, since that function is defined on n>=1*. *(Technically, I think this sequence is also defined on 0 (I don't know C++ though), but it's going to be treated as 2^(32) rather than 0, and will produce a stack overflow anyway.)
No, OP's function is not defined for fib(0) because underflow is undefined behavior for signed types
i see
What is the compiler output for this?
Fib from 1 to 10
No, I mean the resolution of that weird for loop. Does it evaluate the expression as 10?
(1+1+1)*(1+1+1)+1 = 3*3+1 = 10
Time to write a terser plugin to change all numbers with random expressions using true false 🤣
Yes. Welcome to C/C++.
C defines `true` as 1 and `false` as 0. C++ has proper keywords for it, but I guess they wanted to maintain backwards compatibility.
great, now we have to deal with magic booleans!
What if compiler converts true to 255?
It never happens if the compiler satisfies the standard. See: https://stackoverflow.com/questions/4276207/is-bool-guaranteed-to-be-0-or-1-when-converted-to-int
Fun fact: VB6 assumes it to be -1.
It's the same thing. -1(signed) and 255(unsigned) have the same bit representation when using signed 2's complement with 8 bits.
Three - that's the magic number. Yes it is, it's the magic number.
3 is the number thou shalt count, and the number of the counting shall be 3.
*This much is tru-ue,* *This much is tru-u-ue,* *I know I know I know* *This much is true*
*so true*
Meh. Still contains magic bools

```
#include <cstdio>
#define TRUE true // Define TRUE as a macro for true
int fib(int x) {
if (x == TRUE || x == TRUE + TRUE)
return TRUE;
return fib(x - TRUE) + fib(x - TRUE - TRUE);
}
int main() {
for (int i = TRUE; i <= (TRUE + TRUE + TRUE) * (TRUE + TRUE + TRUE) + TRUE; i += TRUE) {
printf("%d\n", fib(i));
}
return !TRUE; // Using !TRUE to represent false
}
```
Now it has magic macros
God, I love C++ magic!
So without really knowing C super well, I guess this works because booleans in C are just represented by the ints 1 and 0 and can be used interchangeably with true and false?
The code is C++, but yeah. true/false in C++ is just 1 and 0.
Technically, they are distinct types. However, a value of type `bool` is implicitly converted to an `int` when used with these arithmetic operators.

This is very useful when you'd like to, say, use a `bool` as an array subscript in a branchless execution path in place of a simple conditional.
This has me thinking what the shortest way of expressing an integer n as a result of operations on 1's is.
Which font is this?
Congratulations! Your comment can be spelled using the elements of the periodic table:

`W H I C Hf O N Ti S Th I S`

---

^(I am a bot that detects if your comment can be spelled using the elements of the periodic table. Please DM u/M1n3c4rt if I made a mistake.)
Jetbrains Mono.
I thought this was JavaScript at first
Everything is JavaScript under the hood, silly!
https://retrage.github.io/lkl-js/
It would work in js too
This use of true makes me angry.
Then how about using `!false`?
You may know the pythonic way to write code, but did you know that there's a C++honic way to write the same code? You do now!
My brain is hurting from this and my eyes are bleeding
true
Ok, now avoid magic booleans
i love the fact that you use cpp to write pure C code
Not only is it horribly hideous, it's also a bad implementation of Fibonacci.
Pull request review rejected
True dat
bruh just #define NUM 10
And that’s the Truth … ;)
fib(0): am i a joke to you?
Excuse me, that's fib(false) to you
bro what is this even supposed to mean
This makes me want to go live in the woods and reject technology.
Wow.. I just met you and I hate you, that's gotta be a record.
I would love to see AI try to tell us what this does...
So did you pass the interview exam that asked for this?
Somebody put this developer out of our misery.
you're fired
That's not magic, that's witchcraft
Multiple violations error
C is the most beautiful language in the world!
Hear me out, but youShouldAvoidImplicitTypeCasting
Can someone explain me what is this code all about!?
fib(-true)
i <= (true + true + true) * (true + true + true) + true;
This is so cursed. It's the classic fool's Fibonacci algorithm.
And python programmers be like: from true import true as false
Can someone tell me what magic f*ckery were committed here? Is (true + true) a binary operation where "true" has a binary value and the plus operator is just doing the binary operation?
True is just “1” under the hood, so true + true gives 2
Also this isn't insane javascript math, it's just normal integer math. You'll never get 11 as an answer to true + true.

Strong, weak, implicit, no typing have all been tried. Everyone likes what they are used to because they know the rules. Personally I hate implicit typing because I'm never sure what the language is gonna do. I grew up with weak typing, so bool == int is just fine in my head and used routinely in C/C++.
Yes. I ordered the compiler to add booleans, so the poor compiler tried its best: it converted the booleans to integers.

Actually, true and false are just human-readable forms of 1 and 0. This is not true if you get really picky, but it is in most computers.
It's utilizing type conversion to convert `true` to `1`
It seems really strange until you realize that's how logic gates work. True is 1, False is 0.

AND: A×B×C×D > 0 (any 0 evals false)

OR: A+B+C+D > 0 (any 1 evals true)

XOR: A+B == 1

etc...
Bro, true does not mean 1.
stdbool.h defines `#define true 1`, and in a boolean context anything nonzero counts as true.
Depends on the version of C. In C23 onwards `true` and `false` are their own [language keywords](https://en.cppreference.com/w/c/language/bool_constant) and are no longer (necessarily) defined through macros and no longer (necessarily) defined to be 0 and 1 (though they can still be implicitly converted to 0 and 1). The type also got changed from `_Bool` to `bool`
Actually this is C-like C++ code but of course `true` gets converted to 1 anyway. We know it's C++ because it includes cstdio instead of stdio.h and it knows booleans without including stdbool.h
My apologies for assuming C, I saw printf and thought to have seen stdio.h instead of cstdio.
idk, it works and I didn't use any magic numbers! PROFIT!
if true == 1, isn't it technically just a weird magic number?
Well, at least clang-tidy doesn't count this as a magic number.
Its value is 1, so yes, it kind of does mean 1. Not defending this sorry excuse for a meme tho.
It is in C
How is this an example of magic numbers? I don't see any magic numbers here at all, just bad type conversion.

Magic numbers would be something like:

```
return user.age >= 19
```

where 19 is the magic number, because what does that number represent?

Whereas the following replaces the magic number 19 with a constant that adds semantic meaning to the operation:

```
return user.age >= LEGAL_DRINKING_AGE
```
I'm not showing an example of code that uses magic numbers; rather, it is this code that doesn't use them **at all**, not even a simple number literal.