It must have been difficult and frustrating to work as part of the Windows team back in those days.
You see all the wacky software that doesn't follow the rules properly, does whatever it wants, breaks things. And you have to figure out how Windows can accommodate all that software, keep it from breaking, and also prevent it from messing up the computer, or undo the damage when it does.
They did not have the option of saying "this app developer wrote shitty software, sucks to be them, not my problem."
I wonder how much of this problem was caused by lack of adequate documentation describing how an installer should behave, and how much was developers not reading that documentation and being content when it works on their machine.
> back in those days.

> You see all the wacky software that doesn't follow the rules properly, does whatever it wants, breaks things.

Just like today. Software is hard, software engineering even harder.
> I wonder how much of this problem was caused by lack of adequate documentation describing how an installer should behave, and how much was developers not reading that documentation and being content when it works on their machine.
There is a third option: the developers knew the rules and chose to ignore them for some reason. A modern example of this is the Zig language’s decision to reverse engineer and use undocumented APIs in Windows in preference to documented APIs.
https://codeberg.org/ziglang/zig/issues/31131
> In addition to what @The-King-of-Toasters said, the worst case scenario is really mild: A new version of windows comes out, breaking ntdll compatibility. Zig project adds a fix to the std lib. Application developer recompiles their zig project from source, and ships an update to their users.
Uh so what if the application developer isn't around any more?
The fact that they consider the worst case to be one where the application is still actively supported and the developer is willing to put up with this nonsense is pretty surprising. Not sure how anyone could believe that.
> ignore them for some reason

The reasons are clearly stated in the issue you have linked.

https://codeberg.org/ziglang/zig/src/commit/6193470ceea89a98...

All this to save a little CPU and memory usage? The juice does not seem worth the squeeze.

If we needed an example of why we should avoid using passive voice, this is it.
I understand the stated reasons and disagree with their conclusions. It seems like a lot of extra work on the part of the Zig developers to have to reverse engineer undocumented interfaces. There is potentially extra work for the Windows developers if they want to change the private implementation details that Zig programs now rely on. And in the case where the lower-level APIs are missing features implemented in the higher-level APIs, Zig has to either reimplement the functionality themselves or not have the same level of support for Windows features that normal Win32 apps have. A concrete example: it looks like Zig programs can't open serial devices.
My excitement for Zig dropped the longer they stayed at 0.x (and they really have meant 0.x, given the breaking changes they were making). This decision from them completely killed it for me.
I understood not using the C Runtime and instead creating direct wrappers over the Win32 API, but going a level lower to APIs that are not guaranteed to be stable is nutty.
Part of this was the special treatment given to Zig at certain sites, which allowed the false impression that it's production-ready and/or far more stable than it actually is, often intentionally overlooking or glossing over the fact that it's still in beta, making breaking changes, and has thousands of open issues (over 3,000 on GitHub).
They were allowed to have it both ways, pre-1.0 and yet somehow (near, like, just about) production-ready. Almost there, for years. Strangely given a free pass to get away with this, for what looked to be undisclosed financial and other fuzzy reasons.
> documentation for the WDK and Windows SDK recommends that application developers avoid calling undocumented Nt entry points

So it's safe to call documented ntdll functions, but calling undocumented functions is more risky.
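To make the layering concrete, here's roughly what calling into ntdll from C looks like: exports are resolved at runtime with GetProcAddress rather than linked through an import library. Just a sketch; RtlGetVersion happens to be WDK-documented, but a truly undocumented Nt* entry point is reached the same way, with no guarantee it keeps its name, signature, or behavior across Windows builds.

    #include <windows.h>
    #include <stdio.h>

    /* RtlGetVersion is an ntdll export (WDK-documented). Resolving it
       at runtime is the standard way to reach below Win32; the same
       GetProcAddress dance works for undocumented Nt* functions, which
       is where the stability risk comes in. */
    typedef LONG (WINAPI *RtlGetVersionFn)(PRTL_OSVERSIONINFOW);

    int main(void)
    {
        HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");  /* always loaded */
        RtlGetVersionFn pRtlGetVersion =
            (RtlGetVersionFn)GetProcAddress(ntdll, "RtlGetVersion");
        if (!pRtlGetVersion)
            return 1;  /* not found: fall back to a documented Win32 API */

        RTL_OSVERSIONINFOW info = {0};
        info.dwOSVersionInfoSize = sizeof info;
        if (pRtlGetVersion(&info) == 0)  /* 0 == STATUS_SUCCESS */
            printf("%lu.%lu (build %lu)\n", info.dwMajorVersion,
                   info.dwMinorVersion, info.dwBuildNumber);
        return 0;
    }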
One of the craziest Raymond Chen stories is one where a Windows API call would return a pointer to a data structure the OS had allocated for the operation. The programmers at Microsoft made the data structure bigger than they needed, for future expansion. But some third party devs noticed the extra space, and started to use it to store data for their program. Then when Windows tried to start using the extra space, those applications would crash.
Reasonable people can disagree on a lot of things in programming. But I still do not understand how one can consider writing to memory the OS owns to be ok. It's sheer professional malpractice to do that kind of thing. With stuff like that, I don't think that any amount of documentation would have helped. The issue was that those programmers simply did not care about anything except getting their own program working, and did whatever the most expedient method was to get there.
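As a concrete sketch of the malpractice in question (the struct name and layout below are invented for illustration, not the actual structure from the story):

    #include <string.h>

    /* Hypothetical OS-returned structure: only the first two fields are
       documented, but the OS allocated spare room for future expansion. */
    typedef struct {
        unsigned long handle;       /* documented */
        unsigned long flags;        /* documented */
        unsigned char reserved[8];  /* OS-owned spare space */
    } OsThing;

    void bad_idea(OsThing *t, const char *tag)
    {
        /* The third-party "trick": stash your own data in the spare bytes.
           This works right up until a new OS version starts using
           reserved[] for real, at which point the app corrupts OS state
           (or reads back garbage) and crashes. */
        strncpy((char *)t->reserved, tag, sizeof t->reserved);
    }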
> But I still do not understand how one can consider writing to memory the OS owns to be ok.
Go to Vogons and look at all of the memory tricks people will use to get various games running on MS-DOS. This kind of juggling of exactly which drivers to load is why Microsoft added the CONFIG.SYS boot menu in MS-DOS 6.0.
I'm not necessarily saying that this was the case here, but it smells like that to me.
Back then, many programmers originally learned the ropes in the 8-bit home computer era (or earlier), where it was completely normal and even necessary to use whatever memory region you could get away with.
For example, on the C64, you would get away with using the memory locations $02, $2A, $52, $FB to $FE, $02A7 to $02FF, and $0313 as scratch space for your own programs. Memory was incredibly scarce. I can’t blame programmers for sticking with their habits and for taking several years to unlearn and adjust their misconceptions about who owns what if they came from a home computer era where that pattern used to be the only way to get stuff done.
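In C terms (think a C64-targeting compiler like cc65), the habit looked roughly like this; which addresses counted as "free" was folklore, not a contract:

    /* Scratch space at fixed, "known unused" addresses, C64-style.
       $FB-$FE are the classic free zero-page bytes. Nothing enforces
       this ownership: it only works as long as nothing else on the
       machine (BASIC, the ROM, another program) touches those bytes. */
    #define SCRATCH0 (*(volatile unsigned char *)0xFBu)
    #define SCRATCH1 (*(volatile unsigned char *)0xFCu)

    void stash_temporaries(void)
    {
        SCRATCH0 = 42;             /* no allocator, no permission bits */
        SCRATCH1 = SCRATCH0 + 1;   /* just a gentleman's agreement */
    }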
> I still do not understand how one can consider writing to memory the OS owns to be ok.

Things were different back then. People did a lot of hacky stuff to fit their programs into memory, because you were genuinely constrained by hardware limitations.
Not to mention, the idea of the OS owning the machine was not as well developed as it is today. Windows 3.11 was just another program, it didn't have special permissions like modern OSes, and you would routinely bypass it to talk to the hardware directly.
"Not to mention, the idea of the OS owning the machine "
I agree--back then when computers had <=4MB of RAM I would've called hogging unused memory for some selfish speculative future use "professional malpractice".
When an OS uses any memory that's otherwise unused as a file cache, which is instantly available if an application wants more memory, but isn't shown as "unused": "This OS is terrible, I have 16GB of RAM but all of it is being used!"
When an OS doesn't do this: "This OS is terrible, I bought all this RAM and the OS doesn't use it!"
> Things were different back then. People did a lot of hacky stuff to fit their programs into memory, because you were genuinely constrained by hardware limitations.
Are you going to tell them what "32-bit Clean" meant for Mac developers, or will we let them find out that particular horror movie for themselves?
TBH I think a more likely explanation is that they needed to somehow identify separate instances of that data structure, so they stored some ID in the spare space; that way, when they encountered the structure again, they could recognize it without keeping copies of all the data in it and comparing their data with the system's.
Or you desperately need to tag some system object and the system provides no legitimate means to do so. That can be invaluable when troubleshooting things, or even just understanding how things work when the system fails to document behavior or unreasonably conceals things.
I've been there and done it, and I offer no apologies. The platform preferred and the requirements demanded by The Powers That Be were not my fault.
> But I still do not understand how one can consider writing to memory the OS owns to be ok.

Your manager tells you to reduce memory usage of the program "or else".
Ask the Zig people, who just started relying on undocumented unstable Windows behaviour intentionally due to some kind of religious belief: https://codeberg.org/ziglang/zig/issues/31131
There were no rules in DOS, nor rwx permissions like Unix has.
The DOS kernel itself didn't really impose any structure on the filesystem. All that mattered was:
- The two files that comprised DOS itself (MSDOS.SYS, IO.SYS) had to be "inode" 0 and 1 on the disk in early versions,
- the kernel parsed \CONFIG.SYS on boot, and looked for \COMMAND.COM unless you specified a different shell with SHELL= in CONFIG.SYS. There were defaults if \CONFIG.SYS didn't exist, but of course none of your DEVICE= drivers would load, so you'd probably not have a working mouse, CD-ROM, etc.
\AUTOEXEC.BAT was optional. That's it. Any other files could be anywhere else. I think the MS-DOS installer disk put files in C:\DOS by convention, but that was just a convention. As long as COMMAND.COM was findable, DOS would boot and be usable, and if you mucked something up you just grabbed your DOS boot floppy with A:\COMMAND.COM on it and fixed it.
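For the flavor of it, a minimal but typical pair of boot files looked something like this (contents illustrative; the CD-ROM driver name and options varied by vendor):

    REM --- C:\CONFIG.SYS ---
    DEVICE=C:\DOS\HIMEM.SYS
    DEVICE=C:\DRIVERS\OAKCDROM.SYS /D:MSCD001
    FILES=30
    BUFFERS=20
    SHELL=C:\COMMAND.COM /P

    REM --- C:\AUTOEXEC.BAT ---
    @ECHO OFF
    PATH=C:\DOS
    C:\DOS\MSCDEX.EXE /D:MSCD001
    C:\DOS\MOUSE.COM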
From what I recall, most installers (if provided) made a directory in \ and put all their files there, mixing executables with read-write data. There was no central registry of programs or anything, unless you were using a third-party front-end.
Windows 3.x and 95 inherited the DOS legacy there.
> I wonder how much of this problem was caused by lack of adequate documentation describing how an installer should behave, and how much was developers not reading that documentation and being content when it works on their machine.
It was mostly the latter. And when Windows broke, people would blame Microsoft, not the software they had installed; the same if the software broke. And you didn’t have online updates at the time that could retroactively ship fixes. So Microsoft had to do everything they could to ensure broken software would still work, while also keeping Windows itself working.
One workaround Microsoft has done for use-after-free is detecting when an application is prone to this and using an allocator that doesn't actually free RAM immediately. I believe that lovely bit of fun is a function of "Heap Quarantine".
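The concept (Microsoft's actual implementation isn't public, so this is just the general quarantine idea) is to hold freed blocks back for a while, so that stale pointers still point at memory that hasn't been recycled yet:

    #include <stdlib.h>

    /* Toy quarantine allocator shim: delay the real free() by up to
       QUARANTINE_SLOTS calls, so a use-after-free in the app touches
       memory that hasn't been handed out again yet. A real version
       would also poison blocks and cap the total bytes held. */
    #define QUARANTINE_SLOTS 256

    static void *quarantine[QUARANTINE_SLOTS];
    static size_t q_next;

    void quarantined_free(void *p)
    {
        if (!p)
            return;
        void *oldest = quarantine[q_next];
        quarantine[q_next] = p;                    /* hold this block */
        q_next = (q_next + 1) % QUARANTINE_SLOTS;
        free(oldest);                              /* release the oldest */
    }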
Yes, the real, can't-say-no world of system software is not what one might wish.
Also note that Microsoft Office has a long history of not following Windows rules. Microsoft didn't even set a good example.
The biggest cause of this problem isn't lack of docs, but poor OS design. Like, why would you let apps change anything without restrictions to begin with? Of course, then you have to have some dumb hidden folder wasting space to restore the changes, and this "waste space for no good reason because we can't architect properly" is still a common issue.
When I was a kid, I released a small GUI program online that I made with either VB6 or VB.NET. The program used the standard open-file dialog. When I created the installer for my program through VB's release wizard, there was a page where it pointed out that my program depended on a certain system library (because of the open-file dialog) and asked me if I wanted to include that library in the installer. I think the default answer was yes, or maybe it wasn't, but it sounded like an obvious thing to enable, so I did it. Apparently this broke open-file dialogs globally across Windows for everyone who wasn't on the same version of Windows as me. Whoops! It's too bad that VB had such a foot-gun in it, and that the article's workaround didn't save those users.
>Windows 95 worked around this by keeping a backup copy of commonly-overwritten files in a hidden C:\Windows\SYSBCKUP directory. Whenever an installer finished, Windows went and checked whether any of these commonly-overwritten files had indeed been overwritten.
This is truly unhinged. I wonder if running an installer under Wine in Win95 mode will do this.
> Whenever an installer finished, Windows went and checked whether any of these commonly-overwritten files had indeed been overwritten.

> Basically, Windows 95 waited for each installer to finish

How could it tell that a particular process was an installer? Just anything that writes to the PROGRA~1 or WINDOWS folders?
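Mechanically, the pass the article describes boils down to something like this sketch. The file list, paths, and the notion of "changed" are all simplified (the real code presumably compared version resources, and how Windows decided that a process was an installer isn't shown here):

    #include <stdio.h>
    #include <stdbool.h>
    #include <windows.h>

    /* Sketch of the post-installer pass: for each protected file, if an
       installer overwrote it, copy the known-good version back from the
       backup directory. "Changed" here is just size + timestamp. */
    static const char *protected_files[] = { "COMDLG32.DLL", "COMCTL32.DLL" };

    static bool file_changed(const char *live, const char *backup)
    {
        WIN32_FILE_ATTRIBUTE_DATA a, b;
        if (!GetFileAttributesExA(live, GetFileExInfoStandard, &a) ||
            !GetFileAttributesExA(backup, GetFileExInfoStandard, &b))
            return false;
        return a.nFileSizeLow != b.nFileSizeLow ||
               CompareFileTime(&a.ftLastWriteTime, &b.ftLastWriteTime) != 0;
    }

    void restore_overwritten(void)
    {
        char live[MAX_PATH], backup[MAX_PATH];
        for (size_t i = 0; i < sizeof protected_files / sizeof *protected_files; i++) {
            snprintf(live,   sizeof live,   "C:\\WINDOWS\\SYSTEM\\%s",  protected_files[i]);
            snprintf(backup, sizeof backup, "C:\\WINDOWS\\SYSBCKUP\\%s", protected_files[i]);
            if (file_changed(live, backup))
                CopyFileA(backup, live, FALSE);  /* FALSE = overwrite existing */
        }
    }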
Thus the WinSxS directory was born, and these days it is tens of gigabytes.

Dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

in an administrator command prompt. You can thank me when it's finished ;-)
I find articles like this a good counter to the idea that typical software used to be better in the past (usually with an appeal to the idea that people were “real programmers” in those days and anything other than C as used in the 90s is a modern extravagance).
In my 25 years of using Windows I've grown so much disdain towards annoying, broken, slow installers that I started to instead extract them like zip archives, using various tools: 7-Zip, UniExtract, Observer plugin for Far Manager, sometimes even manual carving.
Most things just worked after being extracted like that. Some things needed a few registry entries, or running regsvr32 on some DLL files.
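For example (file names hypothetical):

    7z x FooSetup.exe -oC:\Apps\Foo
    regsvr32 C:\Apps\Foo\FooCtl.dll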
The sad lesson is to be both proactive and reactive if you want a clean environment. Trust, verify, and stick around to clean up someone else's mess after the fact.
It's so easy to reason about. I wish this kind of approach was still viable.
Windows, especially the old versions, was beautifully pragmatic. Think about the things that would need to exist on an open-source OS to match this functionality. You'd need to:
1. Convince people to distribute programs via installers.
2. Provide some way that installers can tell the OS that they're an installer (and not invent 5 different ways to do this!)
3. Convince the creators of installers to actually use that function.
4. Convince library creators to maintain backward compatibility (big ask).
5. Convince people to not fork said libraries, creating ambiguous upgrade paths.
6. If there are multiple distros, convince them all to use the same backup/restore format for libraries (and not treat their own favorite libraries as "special").
> Some components addressed this problem by providing their own installer for the component, and telling installers, “You are not allowed to install these component files directly. Instead, you must run our custom installer.”
Something like this existed until Windows XP, but with a different name and behavior. Only from Vista onward did they add permissions restricting users and third-party programs from easily modifying system files.
Aha, that’s why they do that.