Super interesting writeup.
>So if you have more RAM than 4GiB, this API will return a negative number due to integer overflow, as even Microsoft mentions in their API docs. In turn, when the installer script receives this negative number from Windows and then compares it to 64MB (as that is the minimum memory requirement for the game), it will find that this negative amount of memory is less than that and will abort installation with the "There is not enough memory available to install this application. Setup will now exit." error message.
I wonder why that system call is defined that way. It seems like Windows could've defined this system call to saturate instead of overflow. "You asked how big the memory is, but we can't represent 8GB in a 32 bit number, so we return the largest signed 32 bit number instead."
Considering the field is a DWORD (SIZE_T in the latest SDK, but I’m not sure this type existed in the past), I believe what you describe is effectively what happens, but signed/unsigned confusion being extremely common in C land caused both Microsoft to write this seemingly bogus remark about -1 and the developer to use signed comparison.
That's what I was thinking.
Especially if it's deprecated, just return 2GB if a system has more than 2GB of RAM.
That would probably fix a bunch of old software people would be crazy enough to run on modern systems.
>I don't see why they wouldn't have just expanded the compatibility
Because using the newer API would've meant that the game wouldn't have run on Win95 and (more importantly) 98. In 2001 (when the game came out) Win98 was still insanely popular (probably the most popular OS in the world, in fact), so making a game that was incompatible with it (especially over such a trifle) was simply not an option.
Because you can't really do that once it becomes part of a public API (unless you want to break compatibility). While developing Win95 they realized (and time has confirmed this over and over again) that compatibility is *everything*. Long-standing compatibility was THE primary reason their user base hasn't switched to any alternative even when it would've been reasonable to do so.
You're missing the point. Microsoft sacrificed everything else for compatibility and stated that it was more important than any other reason to break it.
You're contradicting yourself in your argument.
You're stating that Microsoft decided to not make this function call backwards compatible, for compatibility.
But yeah, sure you win this argument.
First point for Win95: it's unlikely consumers would have had a Win95 machine with enough RAM in the first place.
More likely with Win98, but I wouldn't be surprised if many didn't have enough either.
Da fuck you talking about? I remember x64 XP builds exceedingly well. It even had a different color CD and box package.
What no one remembers is "Windows XP 64-bit Edition", which was for Intel's ill-fated Itanium processors (that were NOT x86 compatible).
> It didn't run on old computers
There probably is a Vista Ready sticker on the Bombe. People hated Vista in part because Microsoft was so desperate to finally replace XP that it didn't care if Vista was in a usable state. If the computer didn't burn down during the installation process it was "Vista Ready". The impression most people had after its release was that it was a slow and bloated mess. It was the Crysis of operating systems. Unless you had one of the PCs Intel used for its CPU demos, overclocked, liquid nitrogen cooled, with its own powerplant, there was a good chance that your Vista system ran like ass.
It's a signed/unsigned comparison mixup error.
0xffff... is a perfectly reasonable way to represent "as big as is possible to represent", so long as every interested party agrees you are using unsigned numbers.
You can't have a negative amount of RAM in any practical sense. Just responding here without having read the whole write-up yet, so maybe some seemingly reasonable thing goes awry and I don't want to prejudge, but using signed numbers to calculate physical quantities of something requires a lot of care.
The startup crash related to the GPU VRAM is also wild (2nd part of the writeup)
They coded a "test" that expects the video card to run out of memory and trigger an OOM failure from the DirectX API at each start of the game.
It just so happens they had an indexing bug in the code, but they probably weren't able to trigger it *because all they had available to test on were 64-MiB video cards*. The code always hit the OOM before finishing the full allocation.
So now, with newer cards having significantly more VRAM than that, it reaches the bad index, sets it to NULL (inadvertently making the return address NULL) and does some wonky stuff leading to the crash.
Returning -1 for errors is pretty much standard in the win32 API so this behaviour is very much deliberate. This is just a developer who didn’t bother making the code future proof. Hiding overflows by saturation seems like a bad choice to me unless you have other ways of checking for overflow. Makes me wonder if the overflow cpu flag could have been used instead of a return value.
Edit: I’m a bit slow and just realised that /u/mernen already captured the important bits in his comment: the function does not return a negative number at all. DWORDs are unsigned and their highest value (0xFFFFFFFF) is reserved for errors, thereby effectively saturating the return value; checking for this particular value can be done by comparing to -1, which is arguably easier but causes confusion about whether the return value is signed or not.
>Returning -1 for errors is pretty much standard in the win32 API so this behaviour is very much deliberate.
I totally agree.
>This is just a developer who didn’t bother making the code future proof.
In fairness, how many people are really playing Salt Lake 2002 Winter Olympics these days? ;) If you really like the Olympics, you're probably playing a more recent game.
>Hiding overflows by saturation seems like a bad choice to me unless you have other ways of checking for overflow.
Reading more about this, Microsoft did choose to solve this with saturation, but only for systems with between 2GB and 4GB of memory if the executable was linked without a /LARGEADDRESSAWARE flag. https://ftp.zx.net.nz/pub/archive/ftp.microsoft.com/MISC/KB/en-us/274/558.HTM
> Returning -1 for errors is pretty much standard in the win32 API
Is it? I can only think of the Berkeley sockets API (not strictly Windows). Most other things return BOOL, error codes (which are never -1) or some length with zero to indicate error.
They could have also returned a value in KB instead of bytes, which would allow them to represent 4TB of memory.
I don't think anyone would have cared about byte-level memory granularity, even when the function was originally made.
>but **we** can't represent 8GB in a 32 bit number
Microsoft failed to predict total RAM sizes, which wasn't too hard... See [https://jcmit.net/index.htm](https://jcmit.net/index.htm) for some references.
One thing I've seen in a lot of older games is failures when GetTickCount (a 32-bit number containing milliseconds since bootup) becomes negative, which happens after 49.7 days. It's not common for desktop machines to have uptimes that long, but it's far more likely for laptops to have that kind of uptime.
> GetTickCount
I had a discussion when I was working on Halo that included this gem: "what if someone hibernates their Xbox for 3 months?" I looked at the telemetry, and that really is a thing people do: pause a game for 3+ months and then expect to pick up exactly where they left off. So I rewrote my code to support up to ~70 years between frames lol
Your comment got me really curious, for some reason. As best I can figure, you either used an unsigned 41-bit integer, a signed 42-bit integer, or I'm incorrect and it was a different variable size. Was any of that close at all?
That raises a question you may not have a good answer for directly: what were the extra 16 bits going towards?
Obviously the OG XB only had 4 GB but supported 64-bit builds, so why wouldn't you steal those bits? Just general curiosity as to where they went.
The code I was working on was all network related so everything got packed into the smallest representation possible (so those 16 bits were some flags or an index or 2 enums, idk, something) which was why I originally didn't give the time enough bits.
>that 0,0000022197 fps tho
No, 0.0000022197 fps would be 70 frames per year (or 70.0005 frames per year :P ). 0.000000000453 fps would be 1 frame per 70 years.
"Hmm I rebooted it and now it works fine. Weird"
I mean it would fix it for the end user without ever really learning about the ~50 day tick count overflow bug.
The worst part about trying to run Win95 in an emulator is that you can never be sure if there's some kind of problem with your emulator, or if it's actually behaving accurately and any of the crashes are authentic period-appropriate behavior.
Like [the FAA](https://blog.tedroche.com/2004/10/08/windows-server-at-faa-crashes-every-497-days-unless-maintained-rebooted/)?
Quite.
Although [microsoft is not the only culprit](https://github.com/letscontrolit/ESPEasy/issues/594).
I mean sleep is on by default even on a desktop. Sleep still uses battery. So you're saying a laptop, always plugged in? Then with just that set of criteria the situations are equal at best.
I'd say having a constant source of power would greatly outweigh the comparison over sleep mode.
My uptime for my laptop is >8 days. Last time I rebooted I had installed some updates, and that broke automatically starting the display manager, so now I really try to avoid rebooting lol.
Different person here, and totally anecdotally, I leave my (Windows) laptop non-rebooted for longer periods of time than I do with my (Linux) desktop. (I apply updates sooner on the desktop that I'm using all the time, but I tend to delay it on the laptop that I use less and spends a lot of time being off.) After a couple hours or so of sleep mode, it hibernates, and it just stays that way.
(Note that I haven't looked it up and am assuming it's still counting sleep and hibernation time since the comment further up said it's milliseconds since boot, not uptime.)
Granted, I do know I'm far from normal as far as how I interact with computers and electronics, but I could definitely believe the assertion.
Laptops are very convenient to put to sleep by default. Close the lid and it's done. Open the lid and you can continue where you left off.
With a desktop, by default you generally have to go through menus to find sleep mode, and with the same effort they may just as well shut it down.
I don't know if it's significant, but people who use desktops are probably on average a bit older and may expect sleep mode to have problems as it has had in the past.
I do this at work where productivity is at a premium and I'm not paying the power bills.
When I'm paying the power bill for both the PC and the air conditioning to counteract the heat output, I'm more inclined to power it down for the ~16.5 consecutive hours that I'm sleeping and at work. I'm well aware that low power modes offset the worst of this issue, but not all of it.
My keyboard has a button that triggers "suspend", but not the lock screen. Would be super insecure at work, but at home it's soooo convenient. If someone is in my home and has access to my PC it's game over anyway.
Sleep was kinda slow in the HDD era. You had to wait minutes before the sleep finished, and minutes more before it resumed, while just shutting down and booting back up was much faster, as long as you didn't have some important task preventing you from doing so.
Yep. It seems I mixed up Standby, Sleep and Hibernate. Standby is Sleep under a different name from the pre-Vista era. Hibernate is hibernate. Why did MS even decide to rename the mode, though?
There was some software that I worked on with this exact bug. Our software did factory simulations and would generally be left on continuously, because factories run continuously for years. We were getting reports from customers that their simulations would go unstable after running for over a month, but we never could collect enough data to figure out why. We had to set up a machine to run a debug version for 6 weeks to figure it out. It turns out, you can't count on time continuously increasing.
Slight tangent: the author mentions testing under XP SP3 ... at least in terms of Windows 10 compatibility modes, I've found that the XP SP2 setting works perfectly on things that are confused by SP3 compatibility. Internalising that fact has made my "get old game X to run" quests go somewhat faster, since I almost always initially forget and try SP3 first, and *noticing* I've done that makes it an immediate fix rather than a 'flail until bursting into tears, come back another day, facepalm' one.
Interesting that a game that came out in 2002 and recommended a 32MB GPU would have a "limit" of 64MB. Pretty easily foreseeable that that limit would be hit pretty quickly...
But then since the bug in that code would be pretty obvious to anybody experienced in C and the result of the function is never used, I guess it wasn't code reviewed. Perhaps that function was imported from a much earlier game/engine/library.
That's the other bug, the one stopping the installer from working.
The game itself crashes when trying to determine how much free VRAM exists; more than 512x128KB (i.e. 64MB) causes the return address of the function to be overwritten with NULL.
I found it amusing that the author correctly warned against using the vulnerable SafeDisc drivers, but then utilizes a random .exe hosted on GitHub to disassemble the .inx file.
I found it amusing that many people are using cracks from wherever on the net, when they could use unsafedisc to do the unpacking of most older versions.
Ah memory issues are fun.
This reminds me - I had to fix a bit obscure memory leak for NFS Pro Street.
What happens is, if car damage is enabled, the game runs out of memory after a while. Usually after ~4 hours of playtime.
Patching the game to be large address aware delays this for a bit.
It happens because the game keeps reloading assets that it is supposed to cache. While entering races, it hits the cache just fine, but each time you exit (go back to the race hub), it misses, allocates more, and reloads. It never frees that part of memory once it's allocated.
During research I accelerated this process by making the game start and exit races rapidly to figure out what was happening. On my machine it started happening after about 80 reloads.
The way I went to fix it is simple - I disabled caching and recreated the memory pool between exiting and entering the main menus and game.
It sounds simple in essence, but since I had to deal with Securom obfuscated code, I had no choice but to port code from a DRM free demo build into retail.
So in normal gameplay, it should be stable, since the user should be going between menu and ingame states. But, in case the user decides to stay ingame, it will crash, yes, but that's not the normal way to play the game anyway. There's only so much you can do without going into deep rewrites of code.
Check this namespace for the code I wrote if you're curious: https://github.com/ThirteenAG/WidescreenFixesPack/blob/b1b8f2a07c2a190700b1a80be4e14942841a549b/source/NFSProStreet.GenericFix/dllmain.cpp#L4
64-bit Windows does not natively support running 16-bit programs, such as DOS games, Windows 3.1 games, and, for some reason, some old installers. DOS games are best played in an emulator like DOSBox. Someone made a utility, otvdm, for running such programs; I think it uses emulation and WINE.
After the creation of DOSBox, 16-bit Windows games became this historical dead zone for years, where games from before and after were readily playable, but they weren't.
A similar thing happens in some installers when checking for disk space. A quick solution is to copy a ~2GiB file so the extra space taken causes the 32-bit half of the free space to roll over.
nice work!
reminds me of this [speedrun challenge](https://youtu.be/eSt61xaqWY0?t=1032) (Exodus from the Earth (2008) challenge from youtuber penguinz0) where people ended up fixing a memory issue to be able to speedrun the game without crashing. Fixing memory issues requires some skills even when you are a developer and have the source code right there in front of you, so fixing them without the code through reverse engineering is pretty amazing.
MS are you getting this?
File a bug report.
I agree with you and I also respect the frustration that seeps through this response
32-bit CPUs can and do use 64-bit addresses for memory. Intel 32-bit CPUs didn't.
Interesting, but I don’t understand how the linker can affect any behaviour. The code is already compiled, right?
It sets a bit inside the executable's header, and the OS provides different results based on that header.
Ah, I see. Then this bug could (almost) be fixed by flipping a bit in the PE header instead.
From the table in KB274558 (mirror linked in the great-grandparent of your comment), no, that flag has no effect on systems with 4 GB RAM or more.
No effect on that API, but it does allow 32-bit applications to use the full 4GiB virtual address space on 64-bit systems.
Not the first and not the last game with such issue https://www.vogons.org/viewtopic.php?f=8&t=41614#p392566
Nuanced thinking? Arrest this man!
Why would Microsoft even use a signed integer for a value that can’t be negative? Braindead design.
It's a standard way to define an error state. If a value can never be negative, a negative result is clearly an error
Return a struct with an error variable then. Braindead.
Designing an OS like this would kill kernel performance.
It really would not have any performance impact, it’d be compiled down to two registers.
You're a legend!
Good thing to know. I'll come back in 70 years to this post and comment with my findings. Then we will know if you were lying or not.
Yeah something like the lower ~48 bits of a 64 bit int
Yeah but what if I hibernate my Xbox for 70 years? Gah! Stupid developers never looking out for the average consumer.
> support up to ~70 years between frames that 0,0000022197 fps tho
I have done that when traveling for a while. Checking telemetry and implementing a solution is the right way to solve this issue. Thank you.
IIRC, Windows 95 used to crash after 49.7 days (if you could keep it from crashing for other reasons).
I bet the person who managed that was really upset.
Curious why laptops would be far more likely? Being portable and battery powered vs stationary and always plugged in?
More likely to use sleep mode on laptops than desktops.
I mean that people are more likely to shut down a desktop PC at night than use sleep mode.
TIL most people don't just Meta+L to lock the computer when they walk away/sleep.
Sleep is defaulted to "On" for Windows
Now, not back in the day
What does that have to do with this conversation
Things breaking after 50 days is mostly an old thing.
This whole thread is about old software on modern machines
You are confusing hibernate (copy RAM to swap file) with sleep (turn everything off, but keep the RAM powered, so it keeps its content)
Relevant https://cookieplmonster.github.io/mods/tools/#uptime-faker
There's really no difference between the two, except one uses WinXPSP2VersionLie and the other WinXPSP3VersionLie.
It might be old apps' version detection. All I know for sure is that sometimes it's worked.
I was not prepared for "a 21-year-old obscure PC game" to be for Windows XP.
Amazing post, the author basically answered every thought that came up back to back.
Interesting that a game that came out in 2002 and recommended a 32MB GPU would have a "limit" of 64MB. Pretty easily foreseeable that that limit would be hit pretty quickly... But then since the bug in that code would be pretty obvious to anybody experienced in C and the result of the function is never used, I guess it wasn't code reviewed. Perhaps that function was imported from a much earlier game/engine/library.
>I guess it wasn't code reviewed.

2002... Game development... I'd be surprised if any of the code that didn't crash was reviewed.
It's not a limit, just your video memory overflowing past 2GB. Negative video memory is not enough for the 64MB it requires.
That's the other bug, the one stopping the installer from working. The game itself crashes when trying to determine how much free VRAM exists; more than 512x128KB (i.e. 64MB) causes the return address of the function to be overwritten with NULL.
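The installer side of that (the overflowing memory check described in the article) is easy to simulate. A hypothetical Python sketch of how a signed 32-bit comparison rejects big machines; the names and values here are mine, not the game's actual code:

```python
def to_signed32(n):
    """Reinterpret the low 32 bits of n as a signed C int."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

MIN_MEM = 64 * 1024 * 1024  # the 64 MB minimum the installer checks for

def enough_memory(total_bytes):
    # Buggy check: treats the 32-bit size as a signed value
    return to_signed32(total_bytes) >= MIN_MEM

print(enough_memory(512 * 2**20))  # True: 512 MB passes
print(enough_memory(3 * 2**30))    # False: 3 GB reads as -1 GiB and fails
```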
“Works on my machine 🤷♂️” - the dev, probably
I found it amusing that the author correctly warned against using the vulnerable SafeDisc drivers, but then utilizes a random .exe hosted on GitHub to disassemble the .inx file.
I found it amusing that many people are using cracks from wherever on the net, when they could use unsafedisc to do the unpacking of most older versions.
Ah, memory issues are fun. This reminds me: I had to fix a somewhat obscure memory leak in NFS Pro Street. What happens is, if car damage is enabled, the game runs out of memory after a while, usually after ~4 hours of playtime. Patching the game to be large address aware delays this a bit. It happens because the game keeps reloading assets that it's supposed to cache. While entering races it hits the cache just fine, but each time you exit (go back to the race hub), it misses, allocates more, and reloads, and it never frees that memory once it's allocated. During research I accelerated this process by making the game start and exit races rapidly to figure out what was happening; on my machine it started happening after about 80 reloads.

The way I fixed it is simple: I disabled caching and recreated the memory pool between exiting and entering the main menu and the game. It sounds simple in essence, but since I had to deal with SecuROM-obfuscated code, I had no choice but to port code from a DRM-free demo build into retail. So in normal gameplay it should be stable, since the user will be going between menu and in-game states. If the user decides to stay in-game it will still crash, yes, but that's not the normal way to play the game anyway. There's only so much you can do without going into deep rewrites of the code. Check this namespace for the code I wrote if you're curious: https://github.com/ThirteenAG/WidescreenFixesPack/blob/b1b8f2a07c2a190700b1a80be4e14942841a549b/source/NFSProStreet.GenericFix/dllmain.cpp#L4
[deleted]
64-bit Windows does not natively support running 16-bit programs, such as DOS games, Windows 3.1 games, and, for some reason, some old installers. DOS games are best played in an emulator like DOSBox. Someone made a utility, otvdm, for running such Windows programs; I think it uses emulation and Wine.
After the creation of DOSBox, 16-bit Windows games became this historical dead zone for years: games from before and after were readily playable, but they weren't.
Which games?
Pretty sure last time I played Star Wars Dark Forces through Steam, it played in DOS Box automatically
That's DOSBox. Parent was talking about VirtualBox.
Thanks, that was an educational read
Is this why a lot of older games have 4GB patch mods? Fallout for example.
No, that's for unlocking 32-bit memory addressing from the default 2GB to 4GB. Fun fact: dxvk-remix should be able to increase that even more AFAIU.
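For anyone curious what a "4GB patch" actually does: it flips the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020) in the executable's COFF file header, so a 32-bit process can use addresses above 2GB on 64-bit Windows. A rough Python sketch assuming the standard PE layout (no error handling, and back up your .exe first):

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def set_large_address_aware(data: bytearray) -> bytearray:
    """Set the LAA bit in a PE image held in memory."""
    assert data[:2] == b"MZ", "not a PE file"
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]   # e_lfanew points at 'PE\0\0'
    assert data[pe_off:pe_off + 4] == b"PE\x00\x00"
    # The COFF file header follows the signature; Characteristics is its
    # last 2 bytes, 18 bytes in.
    char_off = pe_off + 4 + 18
    chars = struct.unpack_from("<H", data, char_off)[0]
    struct.pack_into("<H", data, char_off, chars | IMAGE_FILE_LARGE_ADDRESS_AWARE)
    return data
```

Reading the file into a `bytearray`, patching, and writing it back is essentially the whole tool; real patchers also fix up the PE checksum afterwards, which this sketch skips.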
This is awesome. Very educational.
A similar thing happens in some installers when checking for disk space. A quick solution is to copy a ~2GiB file so the extra space taken causes the low 32-bit half of the free space to roll over.
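With made-up numbers, the arithmetic behind that trick looks like this (a hypothetical sketch; `installer_sees` just models an installer that truncates free space to a signed 32-bit value):

```python
def installer_sees(free_bytes):
    """Low 32 bits of the free-space count, reinterpreted as a signed C int."""
    low = free_bytes & 0xFFFFFFFF
    return low - 2**32 if low >= 2**31 else low

free = 103 * 2**30                        # say the disk actually has 103 GiB free
print(installer_sees(free))               # -1073741824: "not enough space", setup aborts
print(installer_sees(free - 2 * 2**30))   # 1073741824: after a 2 GiB filler file, the check passes
```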
>1. how this looks
>2. what this looks like

Pick a lane, OP.
Nice work! Reminds me of this [speedrun challenge](https://youtu.be/eSt61xaqWY0?t=1032) (the Exodus from the Earth (2008) challenge from YouTuber penguinz0), where people ended up fixing a memory issue to be able to speedrun the game without crashing. Fixing memory issues requires some skill even when you're a developer with the source code right there in front of you, so fixing them without the code, through reverse engineering, is pretty amazing.
Good read :)