One of the fun features that I developed for Warcraft (the RTS) was to fade the screen to grayscale when the game is paused.
Since the game uses a 256 color palette, it was only necessary to update a few bytes of data (3x256) instead of redrawing the whole screen, so the effect was quick.
I also used this trick when the game stalled due to missing network packets from other players. Initially the game would still be responsive when no messages were received, so you could still interact and send commands. After a few seconds the game would go into a paused state with the grayscale screen to signal to the player that things were stuck. Then several seconds after that, a dialog box would appear allowing the player to quit the game.
This was much less disruptive than displaying a dialog box immediately on network stall.
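The palette trick above can be sketched like this (a minimal Python sketch; the Rec.601 luma weights are my assumption, the original comment doesn't say which ratios Warcraft used):

```python
# Sketch of the palette trick: convert a 256-entry RGB palette to
# grayscale. Only 3*256 bytes of palette data change; the framebuffer's
# pixel indices stay untouched, so no redraw is needed.

def grayscale_palette(palette):
    """palette: list of 256 (r, g, b) tuples, components 0-255."""
    gray = []
    for r, g, b in palette:
        # Rec.601 luma weights; the weights the original game used are a guess
        y = (299 * r + 587 * g + 114 * b) // 1000
        gray.append((y, y, y))
    return gray

vga = [(255, 0, 0), (0, 255, 0), (0, 0, 255)] + [(0, 0, 0)] * 253
gray = grayscale_palette(vga)
print(gray[0])  # pure red maps to (76, 76, 76)
```

Unpausing is just writing the saved color palette back, which is equally cheap.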
One of my favourite things about being on HN is reading comments like this: devs who worked on games I played growing up. I absolutely love hearing stories from their past about little technical nuances like this comment. The more technical / specific, the better.
I'd honestly love to compile a book of "war stories" told by devs like netcoyote.
This is a great idea, but respectfully, if you're going to get traction you need to be the one instigating getting people to talk to you. Have a pitch, have an explicit ask, and be willing to put effort into making it happen.
Oh, and I forgot to mention that pause had to be synchronized across the network, so the pause button would pause for all players.
And in the "this is why we can't have nice things", that also introduced problems, because we didn't want a player who was losing to keep pausing the game until the winning player quit out of frustration, so I think we kept a per-player pause counter, which would only be restored if other players also paused? (I don't quite remember all the details, just that we had to prevent yet another abuse vector).
Omg I love this! I have been finding excuses to do little animation engine features that aren't on the critical path of development, for the sake of creative self-indulgence. One such feature that shipped was alpha-channel-based fading using OpenGL's fundamental alpha parameter (under the hood it's a linear interpolation of alpha values over 256 levels, pieced together over a provided pair of timestamps).
I tell you what I'll do today on my dev time: I'll try implementing grayscale-on-pause without any research and then compare notes (I'm assuming this WC code is available somewhere, which may be a bad assumption).
So I was able to create all the bits necessary to introduce the palette change in a similar manner (3x256 changes) on the triggers, and at the moment of truth, instead of grey I got a GREEN and PURPLE fadeout (I wasn't sure if you meant RBG or RGB for the ratios so I tried both).
I also tried 128 across the board for grey, and it just made a dull fade which may be the best I can do with my method.
I think it may simply be because, rather than having palettes controlled by RGB, I load predrawn sprites using SFML's sprite and texture classes. So the default RGBA is 255,255,255,255, and I have a sidequest to figure out the RIGHT WAY of applying RGB changes to predrawn sprites.
It may very well be a simple matter of "sfml does it differently" or perhaps having grey variants of all sprites and toggling. I feel there has to be a way to accomplish the fade to grey programmatically. Fun little dive tho! I'll have to post an update when I figure it out.
Hardware floating point was rare before the 486 DX and Pentiums. Not to mention that Integer<->FP conversion was slow. And division of any kind has always been slow. So you'd see a lot of fixed-point math approximations with power-of-two divisors so that you can shift-right.
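A minimal sketch of the fixed-point idiom described above, in Python for readability (real code of the era did this with 32-bit integer registers):

```python
# 16.16 fixed-point sketch: one integer holds value * 65536.
# Division by a power of two becomes a right shift, which was far
# cheaper than integer division on pre-Pentium CPUs.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 65536 represents 1.0

def to_fixed(x):
    return int(x * ONE)

def fixed_mul(a, b):
    # the raw product has 32 fractional bits; shift back down to 16
    return (a * b) >> FRAC_BITS

def fixed_div_pow2(a, k):
    # divide by 2**k with a single shift (rounds toward -infinity)
    return a >> k

half = fixed_div_pow2(to_fixed(3.0), 1)  # 3.0 / 2
print(half / ONE)  # 1.5
```

Choosing power-of-two divisors up front is what makes the shift substitution possible; an arbitrary divisor would still need a real divide.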
I haven't gotten behind the console, but that's, like, exactly what I was gonna do, except precompute like 5 or 6 tween values for r,g,b between 255 and the target for greyscale.
But rather than do that and cache them for timing triggers, I kind of like the scaling down by multiplication approach.
Edit: manipulate the RGB values, that is. I wouldn't have converged on those hard values on my own.
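The precomputed-tween idea can be sketched like this (Python; the luma weights and step count are illustrative assumptions, not anyone's actual numbers):

```python
# Precompute a handful of tween colors between a color and its
# grayscale target, so the fade is just stepping through cached tables.

STEPS = 6  # roughly the "5 or 6 tween values" mentioned above

def gray_of(rgb):
    r, g, b = rgb
    y = (299 * r + 587 * g + 114 * b) // 1000  # assumed Rec.601 weights
    return (y, y, y)

def fade_steps(rgb, steps=STEPS):
    target = gray_of(rgb)
    frames = []
    for i in range(1, steps + 1):
        t = i / steps  # 0 -> fully colored, 1 -> fully gray
        frames.append(tuple(
            round(c + (gc - c) * t) for c, gc in zip(rgb, target)))
    return frames

print(fade_steps((255, 0, 0))[-1])  # last frame is the gray target
```

The "scaling by multiplication" alternative computes the same interpolation on the fly each frame instead of caching the tables; with only 3x256 values per step, either is cheap.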
One of the things that impressed me in Quake (the first one) was the demo recording system. The system was deterministic enough that it could record your inputs/the game state and just play them back to get a gameplay video. Especially given that Quake had state-of-the-art graphics, while video playback on computers was otherwise a low-res, resource-intensive affair at the time, it was way cool.
It always surprised me how few games had that feature - though a few important ones, like StarCraft, did - and it only became rarer over the years.
It wasn't really that much to do with determinism. Quake uses a client-server network model all the time, even when you're only playing a local single-player game. What the demo recording system does is capture all of the network packets that are being sent from the server to the client. When playing back a demo, all the game has to do is run a client and replay the packets that it originally received from the server. It's a very elegant system that naturally flows out of the rather forward-looking decision to build the entire engine around a robust networking model.
I don't see why it makes a difference for this purpose that you're replaying network packets or controller inputs or any other interface to the game engine.
The important thing is that there is some well-defined interface. I guess designing for networked multiplayer does probably necessitate that, but if the engine isn't deterministic it still isn't going to work.
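The point about a well-defined interface plus determinism can be shown in a toy model (Python; the "game logic" here is invented purely for illustration):

```python
# Minimal sketch: if the engine consumes nothing but an input stream
# (plus a seed) and its step function is deterministic, a replay is
# just the recorded stream fed back in.

import random

def step(state, move, rng):
    # toy "game logic": position moves by input, damage rolls use the rng
    x, hp = state
    return (x + move, hp - rng.randrange(3))

def run(seed, input_log):
    rng = random.Random(seed)   # the seeded RNG is part of the interface
    state = (0, 100)
    for move in input_log:
        state = step(state, move, rng)
    return state

log = [1, 0, -1, 1, 1]
live = run(42, log)
replay = run(42, log)           # same seed + same inputs
print(live == replay)           # True: the replay reproduces the game
```

Whether the recorded stream is controller inputs (Doom-style) or server-to-client packets (Quake-style) only changes where the interface sits; the replay property comes from the interface being complete and the code behind it being deterministic.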
There was a twitter thread years ago (which appears to be long gone) about how the SNES Pilot Wings pre-game demo was just a recording of controller inputs. For cartridges manufactured later in the game's life, a plane in the demo crashes rather than landing gracefully, due to a revised version of a chip in the cartridge. The inputs for the demo were never re-recorded, so the behaviour was off.
Carmack wrote a really interesting .plan about this. It seems to be written between Q2 and Q3A, and cites the Windows message queue as a big inspiration:
Checking in as a random indie developer who still prioritises determinism in my engine. I don't understand why so many games/engines sacrifice it when it has so much utility.
Bungie's Marathon series (1994) had the same recording system, which, as other commenters mentioned, was due to networked multiplayer.
What's totally insane is that the modern engine rewrite Aleph One can also play back such old recordings, for M2 Durandal (1995) and Infinity (1996) at least.
I'm pretty sure it's because it's in fact 'just' a cool side effect of a common network-architecture optimisation from a time when you couldn't send the entire game state, even with only delta modifiers, so you make the game deterministic and synchronize only the inputs :) An example article I remember: https://www.gamedeveloper.com/programming/1500-archers-on-a-...
The main downside, which probably caused the disappearance, is that any patch to the game will make the replay file unusable.
Also, at the time (not sure about Quake) there was often a fixed framerate; today the upsides of delta-time-based frame calculation AND multithreading/multi-platform targets probably make it harder to stay deterministic (especially for games where you want to optimize input latency).
In some games - most famously Doom - entire multiplayer is based on exchanging just the inputs and the games on all connected computers are deterministic enough to provide same outcome on all of them.
I am one of the authors of Fire Fight game (1996-ish) and we pulled the same stunt. It was actually easy, we just had to build our own "random number generator" and fix all bugs with uninitialized memory :-)
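Rolling your own RNG for determinism usually means something tiny like a linear congruential generator (a sketch, not Fire Fight's actual generator; the constants are the well-known Numerical Recipes pair):

```python
# A hand-rolled PRNG keeps every build and platform bit-identical,
# unlike a libc rand() whose sequence can differ between compilers.

class Lcg:
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        # classic 32-bit LCG step (Numerical Recipes constants)
        self.state = (self.state * 1664525 + 1013904223) & 0xFFFFFFFF
        return self.state

    def randint(self, n):
        # modulo has a small bias for non-power-of-two n; fine for game logic
        return self.next() % n

a, b = Lcg(7), Lcg(7)
print(all(a.next() == b.next() for _ in range(1000)))  # True
```

The uninitialized-memory bugs matter for the same reason: any read of garbage memory injects platform-dependent state into an otherwise deterministic simulation.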
Age of Empires 4 also does this. It's very cool and saves a lot of space, but it does have some significant downsides, at least the way it's implemented there: you can't rewind replays, and they become unwatchable when the game updates significantly.
Hey I am actually working on a browser game that is fully-deterministic (except for player inputs) and so I can basically replay games entirely. Think like a chess engine game.
I’ve been obsessing over this determinism and replayability for months, to the point where any game played is fully replay-able to the exact same events as the original game.
So you can play, then watch a recording and spectate your played games from different actors perspective (enemy perspective etc).
My rendering and game logic are fully decoupled.
I wrote the “engine” for the game from scratch. I think I only use one third party library currently.
The alternative to this was the first WORMS game where, if I remember properly, replays were nondeterministic and the next turn picked up from the replay, not from the initial action.
Quake1 was my first love. From the old DOS version to the GLQuake to grappling hooks on Trinicom. I was amazed not only by said demo system but by QuakeC, and how heavily it was used, especially in college communities. I remember MIT and UWisc both being unreasonably productive modders in said language.
As a kid, I couldn't wait to see what came next. Sadly, Q1 was rather one of a kind, and it was many years until anything else like it showed up.
Demos were really useful for helping validate competitive play too. While certain anti-cheat programs were available, such as PunkBuster (Quake 3), having gaming ladders request that everyone record a demo from their POV and upload it was a very low-friction way to deter cheating. The idea being, no one looked at them unless there was suspicion, so it wasn't even a time sink for administrators.
No fancy kernel level anti-cheats. Just ensure matches were played on legitimate servers and demos were recorded.
Also, back then live streaming while playing was usually too much of a computational and network burden (56k modems), but casting was just coming around as being a thing and certain Quake 3 mods had spectator modes that let someone streaming spectate you from the first person live which also helped deter cheating. There was even split screen spectating modes so you can follow the action (useful for 4v4 games, etc.).
Carmack and team really made something special back then. The ideas they had and what they did with their tech on relatively low end hardware was remarkable.
You could play back multiplayer Halo 3 matches in 3D, with a free camera. Was really interesting to see how matches played out, how you got killed and so on, and for taking cool screenshots.
If memory serves, that worked by replaying network packets, which is what some other games do as well. The problem with that approach is that for live-service games, unlike old games that were often "set in stone", the protocol always changes, so it's a huge maintenance burden. You either need to add conversion tools, keep maintaining backwards compatibility with older protocol versions, or accept that replays quickly become outdated.
It wasn't deterministic. It didn't record the inputs. It recorded the basic state of the objects you could see.
Deterministic game sync is a completely different approach more often used in RTS games. Quake had non-deterministic authoritative central server + clients getting an incomplete view of the world.
I worked on this for a pretty big game. We recorded the network traffic and played it back and simulated the game - so same problem with patches. It also has the awkward side effect of exposing a metric crap ton of “join in progress” style bugs because our game didn’t support JiP.
I had a puzzle game where all of the solutions it would show were playbacks of my keypresses as I solved it myself. As the puzzles got more difficult it got harder and harder to record a solution without pauses to think about what to do next.
I always wondered how NES games, which were notoriously low memory, could have game simulation on the start screens. Think Super Mario Bros, but there are many others. If no input is received at the start menu, the game starts playing a demo run. You always see videos and posts about how developers were dissecting sprites and swapping color palettes to work around the small memory, so how in the heck did they manage the gameplay demos?
I wrote about it here many times over the years, but in 1991 I wrote a little DOS game (I had a publisher and a deal but it never came out, and yet it's how my career started, but that's another story) and at some point I had an "impossible to find" bug because it was so hard to reproduce.
So I modified my game engine to be entirely deterministic: I'd record "random seed + player input + frame at which user(s) [two players but non-networked] input was happening". With that I could make tiny save files and replay (and I did find my "impossible to find" bug thanks to that).
The first time I remember someone talking about it was a Gamasutra article by an Age of Empires dev (an article another poster already mentioned here in this thread): they had a 100% deterministic engine. FWIW I wrote an email to the author of that article back then and we discussed deterministic game engines.
Warcraft 3 definitely had a deterministic game engine: save files, even for 8 players (networked) games were tiny. But then you had another issue: when units, over different patches, would be "nerfed" to balance the game (or any other engine change really), your replay files wouldn't play correctly anymore. The game wouldn't bother shipping with older engines: no backward compatibility for replay files.
I had a fully deterministic game engine in 1991 and, funnily enough, a few days ago, with the help of Claude Code CLI / Sonnet 4.6, I compiled that old game of mine again (I may put it on a public repo one day). I still had the source files and assets after all those years, but not the tooling anymore (no more MASM, no more linker), so I had to "fight" a bit to be able to compile it again (for example, I had not one but two macros that now clashed with macros/functions used by the assembler: "incbin" and another one I forgot), now using UASM to compile for DOS, but from Linux.
Another fun sidenote... A very good friend of mine wrote "World Rally Fever" (published by Team 17) and I was a beta tester of the game. Endless discussions with my friend, because I was pissed off that his engine was so "non-deterministic" that hitting the Turbo button on my 486 (I think it was a 486) while I was playing would change the behavior of the (computer) opponents.
To me a deterministic game engine, unless you're a massively networked multi-player game, just makes sense.
Blizzard could do it for Warcraft 3 in 2002 for up to 8 players and hundreds of units. Several games had it already in the nineties.
It simplifies everything, and I'd guesstimate that something like 99% of all the games out there that don't do it could actually do it.
But it touches on something much more profound: state, and how programmers think about state and reproducibility. Hint: most don't think about that at all.
Some do though: I was watching a Clojure conf vid the other day and they kept hammering that "view is a function of state". And it is. That's how things are. It was true in 1991 when I wrote my DOS game, it was true for Age of Empires, Warcraft 3 and many other games. And it is still true today.
But we're in 2026 and there are still many devs insisting that "functional programming sucks" and that we should bow to the mutability gods, for that is the only way, and they'll fight you to the death if you dare to say that "view <- fn(state)".
Super Smash Bros Brawl does this too for replays. I remember being a child and just learning about how computers worked and being very confused at how such a long video (which I knew to be "big") could possibly fit in such a small number of "blocks" on the Wii while screenshots were larger. I think the newer games do this too but they have issues because the game can be updated and then the replays no longer work.
I’d love to hear about the 2020 release of Microsoft Flight Simulator, which had an “active pause” feature that they hyped as a big innovation for that release. You could pause and switch camera angles and see what was going on, then quickly resume. Pretty much the whole game was still interact-able, but with your plane’s position paused. It was supposed to be a nice user-friendly way to pause while you checked gauges or fiddled with cockpit settings or whatever.
It never worked. You’d pause, and the plane was frozen in place yes, but the instrument cluster would still animate and show your altitude/speed changing as if you never paused. But you couldn’t control anything until unpaused. So you’d resume, and your momentum would suddenly leap to where the accumulated deltas ended up. So if you active-paused at full throttle, you’d unpause and start going way too fast… if you active paused while stalling, you’d unpause and your speed would be near zero… you’d even consume fuel while paused.
It’s like they literally just froze the plane’s position, left every other aspect of the physics engine untouched, never tested it, shipped it, and even did a bunch of marketing about how great the feature was, when it was so obviously broken.
I came back to the game after a year or so of updates, and not a thing had improved, it was every bit as broken as when they shipped it.
The 2024 release seems to have largely fixed it though from what I can see. It’s just nuts they had such a clearly broken feature for that long.
The strangest pause bug I know is in Mario Sunshine: pausing will misalign the collision logic (which runs 4 times per frame) and the main game loop. So certain specific physics interactions will behave differently depending on how many times the game has been paused modulo 4.
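A toy model of that misalignment (Python; the real mechanism in Sunshine is surely more involved, this just shows how a substep phase can drift):

```python
# Collision runs SUBSTEPS times per frame, phased off a global counter.
# If pausing advances the counter without running the substeps, the
# phase (counter % SUBSTEPS) shifts after every unpause.

SUBSTEPS = 4

def simulate(frames, pauses_at=()):
    counter = 0
    phases = []
    for frame in range(frames):
        if frame in pauses_at:
            counter += 1    # the bug: counter still ticks while paused
            continue
        phases.append(counter % SUBSTEPS)  # phase collision starts on
        counter += SUBSTEPS
    return phases

print(simulate(3))                 # [0, 0, 0]: always aligned
print(simulate(3, pauses_at={1}))  # [0, 1]: phase shifted by the pause
```

Physics that is sensitive to which substep it lands on then behaves differently depending on the pause count modulo 4.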
Too bad they didn't ask any VR developers. It's truly another beast, especially if you're developing with Unity for the Quest platform, since setting TimeScale to 0 in Unity effectively disables the physics engine, which means things like hands/controllers no longer work, which then breaks one of the requirements to even be able to release in the Meta store (handle pause state). The workaround used by Half-Life: Alyx (as told to me by the Hurricane VR developer when I asked years ago how to deal with pausing) is to clone your hands and disable/delete all physics-related stuff (e.g. Rigidbodies) on the new "paused" hands. If you are using laser pointers, then you'll also have to switch those out as well. If you have any active effects, particles or objects that obstruct the player's vision and/or visibility of the pause/resume UI, then you'll want to either disable those or at least dim them substantially so the player can interact with the resume button, e.g. with a laser pointer. You might also want to adjust the lighting to indicate that the user is paused.
Outside of VR, Unity offers a nice "AudioListener.pause" to pause all audio, but if you have any sound effects in your pause menu like when a user changes a setting, those no longer work, further requiring more hacky fun (or just set it to true, and ignore user-complaints about no audio on menus when paused).
On top of that, you have things like Animators for characters, which also have to be paused manually by setting speed to 0 (TimeScale = 0 doesn't seem to do it). Some particle systems also don't seem to be affected by TimeScale. If you have a networked game then you also have to (obviously) send the pause command to clients. If you have things in motion, pausing/restarting the physics engine might cause weird issues with (at least angular) velocity so you might have to save/restore those on any moving objects.
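The save/restore-velocities point can be sketched engine-agnostically (Python; this is a generic pattern, not Unity's actual API, and the field names are made up):

```python
# Stash each body's velocity when pausing and restore it on resume,
# so stopping the physics engine doesn't zero or corrupt in-flight
# (especially angular) motion.

class Body:
    def __init__(self, velocity, angular):
        self.velocity = velocity
        self.angular = angular

def pause_physics(bodies):
    stash = [(b.velocity, b.angular) for b in bodies]
    for b in bodies:
        b.velocity = b.angular = 0.0  # engine sees everything at rest
    return stash

def resume_physics(bodies, stash):
    for b, (v, w) in zip(bodies, stash):
        b.velocity, b.angular = v, w

ball = Body(velocity=3.5, angular=0.2)
saved = pause_physics([ball])
print(ball.velocity)            # 0.0 while paused
resume_physics([ball], saved)
print(ball.velocity)            # 3.5 again
```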
Like a lot of issues in gamedev, pausing the game is a surprisingly difficult problem to solve.
It's especially difficult to provide a one size fits all solution which would obviously be desirable for the popular modern engines that try to be a general solution.
I see a lot of comments here saying something along the lines of "isn't it just a state in the state machine?" which isn't wrong, but is an extremely simplistic way of thinking about it.
In, say, 1983, you could get away with something like that:
- pause the game: write "PAUSED" to the tilemap
- paused main loop: check input to unpause
- unpause the game: erase the "PAUSED" / restore the previous tiles
But at that time you could already see the same sort of issues as today.
Something somewhat common in Famicom/NES games is the sprites disappearing when the game is paused.
Perhaps deliberate/desirable in some cases (e.g. Tetris) but a lot of the time, probably just a result of the 'is paused' conditional branch in the main loop skipping the sprite building code[0].
There's an extremely large problem space and ultimately, each game has its own way to define what "paused" actually means.
You might be interested in the features Godot provides[1] for this. Particularly, the thing that makes it interesting is the 'process mode' that each node in the scene tree has.
This gives the developer quite a lot of control over what pausing actually means for a given game.
It's not a complete solution, but a useful tool to help solve the various problems.
[0] Simplified description of course. Also, the sprite building code often ended up distributed throughout the various gameplay logic routines, which you don't want to run in the paused state.
[ed] Just adding that Tetris is only an example of a game where you might want that behaviour, not a comment about how any of the Tetris games were actually made.
Pausing is unintuitive in Unity because you don't control the main loop - all active objects get updated every frame. The recommended way to do it is to set the "time scale" to zero and have menu animations use special timers that ignore time scale. If you control the game loop, you can usually just get away with an "if (paused)" [0].
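The time-scale idea can be sketched engine-agnostically (a minimal Python sketch, not Unity's actual API): the loop computes one raw delta, gameplay consumes the scaled delta, and menu animations consume the unscaled one.

```python
# Two clocks fed from one raw delta: the gameplay clock multiplies by
# time_scale (zero while paused), the menu clock ignores the scale.

class Clocks:
    def __init__(self):
        self.time_scale = 1.0
        self.game_time = 0.0   # freezes when time_scale == 0
        self.menu_time = 0.0   # keeps running regardless

    def tick(self, raw_dt):
        self.game_time += raw_dt * self.time_scale
        self.menu_time += raw_dt

c = Clocks()
c.tick(0.016)
c.time_scale = 0.0             # pause
c.tick(0.016)
c.tick(0.016)
print(round(c.game_time, 3), round(c.menu_time, 3))  # 0.016 0.048
```

Slow motion falls out for free: set time_scale to 0.5 instead of 0.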
When I present TLA+ [0], I reference game pauses (the pause-buffer / item-duplication exploit in The Legend of Zelda, Dark Souls menuing to cancel animations) and deliberate crashes as mechanics used to exploit games, since those are still valid actions that can be modeled, and failing to model them is what allows such exploits to happen.
A system is only correct relative to the transition system you wrote down. If the real system admits extra transitions that you care about (pause, crash, re-entry, partial commits), and you didn't model them, then you proved correctness of the wrong system.
This is silly reporting with a couple of interesting stories. Forget about the technical ways of doing it. Doing it at all changes the game experience.
Pausing a game has a massive impact on the game experience. It lets you break the fourth wall experientially. Not wrong, but it changes the dynamic of the game.
Same as saving at any time does. As losing your loot or your life permanently does. Not wrong, but a hard choice that appeals to some players and not to others.
I used to pause pacman on my Atari 800 so I could run to church and sing in the choir or be an altar boy. Then I ran home and unpaused to continue. Sometimes in summer the computer over-heated and I lost everything while I was at church.
While the game is paused, if a player were to click on the "level up" buttons for their skills, each click actually advanced the game by 1 frame - so it was possible for people to die etc. during a pause screen.
I find the notion odd that this is even a problem to be solved.
It suggests a level of control way below what I would ordinarily consider required for game development.
I have made maybe around 50 games, and I think the level of control of time has only ever gone up. Starting at move one step when I say, to move a non-integer amount when I say, to (when network stuff comes into play) return to time X and then move forward y amount.
When I first played the NES the pause feature impressed me even more than did the graphics. Apparently Atari already had the feature on the 5200 console, but even as late as 1988 it felt like magic to hit a button, go and eat dinner, and an hour later resume my game with another press of the button.
So the simple case is using some sort of state variable:
switch (game_state):
    case paused:
        # draw the pause overlay; only poll for the unpause input
    case gameplay:
        # run input handling, simulation, and rendering as usual
You still have to be careful about how you implement "gameplay", though. For example if at any point you read the 'system clock' to do time-based stuff like animations or physics, then when you unpause you suddenly will have a couple minutes of advance in a place where you expect fractions of a second.
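One common fix for the system-clock problem is a pause-aware game clock (a sketch under the assumption that you can read a monotonic wall time each frame):

```python
# Instead of handing wall time to the simulation directly, subtract
# the total time spent paused, so unpausing after dinner doesn't hand
# the physics an hour-long delta.

class GameClock:
    def __init__(self, wall_now):
        self.start = wall_now
        self.paused_total = 0.0
        self.pause_began = None

    def pause(self, wall_now):
        self.pause_began = wall_now

    def resume(self, wall_now):
        self.paused_total += wall_now - self.pause_began
        self.pause_began = None

    def now(self, wall_now):
        # game time = wall time elapsed minus all paused intervals
        return wall_now - self.start - self.paused_total

clk = GameClock(wall_now=0.0)
clk.pause(wall_now=10.0)
clk.resume(wall_now=130.0)      # two minutes in the pause menu
print(clk.now(wall_now=131.0))  # 11.0: the pause time is excluded
```

Animations and physics then query `clk.now(...)` instead of the system clock, and the deltas across an unpause stay at normal frame size.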
Seems like a solved problem for consoles, at least. On the Nintendo Switch you can "pause" any game, regardless of whether the devs implemented it, by pressing the home button, which suspends the entire game at the OS level.
Early versions of Unreal Engine had these animated procedural textures that would produce sparks, fire effects, etc. The odd part is that when you paused the game, the animated textures would still animate. Presumably, the game would pause its physics engine or set the timestep to 0, but the texture updater didn't pause. I suspect it was part of the core render loop and each new iteration of the texture was some sort of filtered version of the previous frame's texture. Arguably a very early version of GPU physics.
Modern games can have the same issue. Even taking a capture of the exact graphics commands and repeating them, you'll sometimes see animated physics effects like smoke and raindrops. They're doing the work on the GPU where it's not necessarily tied to any traditional physics timestep.
One of the things I was thinking about with regard to pause and/or save games is that needing to control all aspects of real-time logic, with the possibility of stopping/resuming and saving it to disk, shows how incredibly lacking our current ways of doing async are.
Unity has introduced the idea of coroutines (which were essentially yield based generators), and people started using them, and immediately encountered problems with pausing/saves.
Internally these coroutines compile down to state machines with opaque internals that are not necessarily consistent across compilers/code changes, and it's very difficult to accommodate needs like pausing when using them.
From what I've seen, the usual answer is that people go back to hand-written state machines, and go through the pain of essentially goto programming to fix these issues.
I only know pausing games is funky because the loudest my PlayStation's fans ever get is when pausing some games. It's quite weird that pausing is not just a feature of the game engine or runtime, especially as the menu and settings systems seem to be totally separate in most cases anyway.
I would prefer to understand why a paused or backgrounded game still manages to consume a ton of CPU or GPU
Like, you're still just churning away at the main game loop while literally nothing else is happening except for you waiting for me to unpause it?
Because THAT would be an actual achievement. Hell, I can suspend any process from the Unixy side of things by sending a SIGSTOP signal, for a far more perfect "pause".
If I was a game dev, I would not settle for "looks like paused, but still burning down the environment and heating up my home as a side effect"
The ability to pause is extremely important in games (at least single-player ones).
I hate when games run in multiplayer mode even when you're playing the single-player campaign (e.g. Generation Zero) and thus cannot be paused.
Another thing that I hate in this regard is unpausable cutscenes. I remember when I was playing The Witcher 3 that at one point there was some cutscene advancing some plot point, and right in the middle of it The Wife™ barged in telling me something important that required my attention... but I couldn't pause that scene, so I had to miss it while I listened to her. Why, oh why, do devs hate pausing cutscenes so much?
The console cert ones are interesting, but all the others are just Unity/GameMaker/Unreal not allowing the developers to write normal code? The nonzero timestep thing is very strange.
I would expect pausing to bring a game’s CPU/GPU usage down to near-zero, which won’t happen if the game keeps redundantly rendering the exact same frame. A game engine can optimize this by special casing time scale zero to simply render a single textured quad under the pause menu (which is probably what one of the commenters in TFA referred to).
You also often need several tiers of pausing. For example, when paused you want game time to stop advancing, but you only want to ignore some user inputs: you don't want to disable the menu or pause/unpause actions along with player actions, and you might want to pause dialog but not music (although you might alter it for the menu).
Then there are others such as vfx that can have their own tiers, you might have something in the background that is difficult to pause, stop and/or resume.
Then there are other things such as timers etc. that will need special handling, graphics card interactions etc. When done with those you also need to be able to resume and ensure that all of the previous will continue as if nothing have ever happened.
In the same vein some other "simple" things like saving/loading the game is often anything but.
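The tiers described above can be sketched as bit flags that each system subscribes to (a Python sketch; the tier names are made up for illustration):

```python
# Each pause tier is a bit flag. Every system declares a mask of the
# tiers that freeze it; menu input uses an empty mask, so it always
# runs, while the world simulation stops for any active tier.

GAMEPLAY, DIALOG, VFX = 1, 2, 4  # illustrative tier names

def is_running(system_mask, active_tiers):
    # a system runs unless one of its tiers is currently paused
    return not (system_mask & active_tiers)

active = GAMEPLAY | DIALOG       # menu open: gameplay + dialog frozen

print(is_running(GAMEPLAY, active))  # False: world sim frozen
print(is_running(VFX, active))       # True: background vfx keep going
print(is_running(0, active))         # True: menu ignores all tiers
```

Saving then becomes the question of serializing which tiers were active along with everything else, so a load resumes in the same mixed state.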
Total self-brag: one of the key foundations of my game engine is that every single instance of any object has an anchor to a timing system, and pausing can be propagated on the same cycle as the input capture, at as granular a level as desired.
For my game (custom engine) I had a way to stop the game clock from advancing, while the input and draw loop kept running. It would also put the game into the "pause" input state during which only the resume button would be active.
I've recently been working on a "realtime with pause" style grand strategy game using my own engine (think Europa Universalis, Crusader Kings, Hearts of Iron).
The trick is to separate the logic simulation from other game loops (rendering, UI, input, sound, etc). So when a player pauses the game, everything else still more or less works. And the logic simulation should be able to take user "command" while being paused.
Most commands should mutate the game state and reflect in the UI immediately. A few commands that have to wait until the next tick should at least acknowledge the action result.
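That split can be sketched as a simulation owning its state plus a queue of deferred commands (Python; the command names and state fields are invented for illustration, not from any real engine):

```python
# The simulation owns the state and a queue of deferred commands.
# UI-issued commands either apply immediately or wait for the next
# tick, and both kinds work while the tick loop is paused.

class Sim:
    def __init__(self):
        self.state = {"tick": 0, "orders": []}
        self.pending = []

    def command_now(self, key, value):
        self.state[key] = value       # reflected in the UI immediately

    def command_next_tick(self, order):
        self.pending.append(order)    # acknowledged now, applied later
        return "queued"

    def tick(self):
        self.state["orders"].extend(self.pending)
        self.pending.clear()
        self.state["tick"] += 1

sim = Sim()               # game is "paused": tick() is not being called
print(sim.command_next_tick("move army"))  # queued
sim.command_now("speed", 2)
sim.tick()                # player unpauses
print(sim.state["orders"])
```

Because rendering and input never touch the tick loop directly, pausing is simply not calling `tick()`, while the rest of the game keeps responding.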
for our game dev project in undergrad we built an rts game as our capstone, and we used the "slow stuff down" trick - except different systems had different clock/time systems! and so some things were still running while it was paused, which led to some weird side effects (eg money glitch)
i did the thing you're "not supposed to do" and attached everything to a world clock so everything ran at like 60fps in terms of events, so it was a real-time "turn based" system
Maybe I will.
Net, if you're interested, hit me up.
Fantastic idea though, you should do it.
Ars Technica has a war stories feature on game development.
https://arstechnica.com/video/series/war-stories
For Apple II games, John Romero did a podcast. It's decent, but he seems to have stopped doing them.
https://appletimewarp.libsyn.com/ Or YouTube
The Ted Dabney Experience has a lot of interesting interviews with older arcade game designers:
https://www.teddabneyexperience.com/episodes
I'll tell you what I'll do today on my dev time: I'll try implementing grayscale-on-pause without any research first, and then compare notes (I'm assuming this Warcraft code is available somewhere, which may be a bad assumption).
Code for those is available.
I also tried 128 across the board for grey, and it just made a dull fade which may be the best I can do with my method.
I think it may simply be because, rather than having palettes controlled by RGB, I load predrawn sprites using SFML's sprite and texture classes. So the default RGBA is 255,255,255,255 - so I have a side quest to figure out the RIGHT WAY of applying RGB changes to predrawn sprites.
It may very well be a simple matter of "SFML does it differently", or perhaps having grey variants of all sprites and toggling between them. I feel there has to be a way to accomplish the fade to grey programmatically. Fun little dive though! I'll have to post an update when I figure it out.
But rather than do that and cache them for timing triggers, I kind of like the scaling down by multiplication approach.
Edit: manipulate the RGB values, that is - I wouldn't have converged on those hard values on my own.
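For what it's worth, the palette version of the trick boils down to replacing each RGB entry with its luma. A sketch, assuming a palette stored as (r, g, b) tuples and the common Rec. 601 weights (the exact weights Warcraft used aren't stated here):

```python
def grayscale_palette(palette):
    """Map each RGB palette entry to its gray equivalent.

    With a 256-color palette this touches only 3x256 bytes of
    palette data; the framebuffer itself is never redrawn.
    """
    gray = []
    for r, g, b in palette:
        # integer Rec. 601 luma approximation
        y = (299 * r + 587 * g + 114 * b) // 1000
        gray.append((y, y, y))
    return gray

print(grayscale_palette([(255, 0, 0), (255, 255, 255)]))
# -> [(76, 76, 76), (255, 255, 255)]
```

This is why flat 128,128,128 looks dull: pure red should map to a fairly dark gray (~76) and white should stay white, which a single hard value can't capture.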
And also that my “sound card works perfectly!”
It always surprised me how few games had that feature - though a few important ones, like StarCraft, did - and it only became rarer over the years.
There was a twitter thread years ago (which appears to be long gone) about how the SNES Pilot Wings pre-game demo was just a recording of controller inputs. For cartridges manufactured later in the game's life, a plane in the demo crashes rather than landing gracefully, due to a revised version of a chip in the cartridge. The inputs for the demo were never re-recorded, so the behaviour was off.
https://github.com/ESWAT/john-carmack-plan-archive/blob/mast...
> The system was deterministic enough that it could record your inputs/the game state and just play them back to get a gameplay video.
That's NOT how demos work in Quake. Quake uses a client/server architecture, and a demo is a capture of the messages sent from the server to the client.
https://www.gamers.org/dEngine/quake/Qdem/dem-1.0.2.html
This used to be a promoted feature in CS, with "HLTV/GOTV", but sadly disappeared when they moved to CS2.
Spectating in-client is such a powerful way to learn what people are doing - things you can't always see even in a recording from their perspective.
https://news.ycombinator.com/item?id=21920508
What's totally insane is that the modern engine rewrite Aleph One can also play back such old recordings, for M2 Durandal (1995) and Infinity (1996) at least.
The main downside, which probably caused the disappearance, is that any patch to the game makes old replay files unusable. Also, at the time (not sure about Quake), games often ran at a fixed framerate; today, delta-time-based frame calculation plus multithreading and multi-platform targets probably make it harder to stay deterministic (especially for games where you want to optimize input latency).
I am one of the authors of the Fire Fight game (1996-ish), and we pulled the same stunt. It was actually easy: we just had to build our own "random number generator" and fix all the bugs involving uninitialized memory :-)
I've been obsessing over determinism and replayability for months, to the point where any game played is fully replayable, with the exact same events as the original game. So you can play, then watch a recording and spectate your played games from different actors' perspectives (the enemy's perspective, etc.).
My rendering and game logic are fully decoupled.
I wrote the “engine” for the game from scratch. I think I only use one third party library currently.
Cool to see this discussion
As a kid, I couldn't wait to see what came next. Sadly, Q1 was rather one of a kind, and it was many years until anything else like it showed up.
No fancy kernel level anti-cheats. Just ensure matches were played on legitimate servers and demos were recorded.
Also, back then live streaming while playing was usually too much of a computational and network burden (56k modems), but casting was just coming around as being a thing and certain Quake 3 mods had spectator modes that let someone streaming spectate you from the first person live which also helped deter cheating. There was even split screen spectating modes so you can follow the action (useful for 4v4 games, etc.).
Carmack and team really made something special back then. The ideas they had and what they did with their tech on relatively low end hardware was remarkable.
It’s one of my favourites
Deterministic game sync is a completely different approach more often used in RTS games. Quake had non-deterministic authoritative central server + clients getting an incomplete view of the world.
Warcraft 3 replays couldn't jump in time, just forward very fast. HoN could do that. It was amazing.
For a few months they even made ALL replays searchable on a website. Every game of HoN played globally.
> The system was deterministic enough ...
I wrote about it here many times over the years but in 1991 I wrote a little DOS game (and I had a publisher and a deal but it never came out and yet it's how my career started but that's another story) and at some point I had an "impossible to find" bug because it was so hard to reproduce.
So I modified my game engine to be entirely deterministic: I'd record "random seed + player input + frame at which user(s) [two players but non-networked] input was happening". With that I could make tiny save files and replay (and I did find my "impossible to find" bug thanks to that).
The first time I remember someone talking about it was a Gamasutra article by an Age of Empires dev (an article another poster already mentioned in this thread): they had a 100% deterministic engine. FWIW, I emailed the author of that article back then and we discussed deterministic game engines.
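The core of such an engine is small: all randomness comes from a seeded generator, nothing reads the wall clock, and the replay file is just the seed plus the input log. A toy sketch (the state fields and commands are made up for illustration):

```python
import random

def simulate(seed, input_log):
    """Deterministic toy sim: same seed + same inputs => same final state."""
    rng = random.Random(seed)          # never the global/system RNG
    state = {"x": 0, "hp": 100}
    for frame, command in input_log:   # inputs tagged with the frame they occurred on
        if command == "move":
            state["x"] += 1
        elif command == "attack":
            state["hp"] -= rng.randint(1, 6)
    return state

# A "replay file" is just this pair -- which is why such files are tiny.
seed, log = 1991, [(10, "move"), (25, "attack"), (31, "move")]
assert simulate(seed, log) == simulate(seed, log)  # replay matches exactly
```

The fragility mentioned elsewhere in the thread follows directly: change any rule inside `simulate` (a patch, a balance nerf) and old logs replay into a different state.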
Warcraft 3 definitely had a deterministic game engine: save files, even for 8-player (networked) games, were tiny. But then you had another issue: when units were "nerfed" over different patches to balance the game (or after any other engine change, really), your replay files wouldn't play correctly anymore. The game didn't bother shipping with older engines: no backward compatibility for replay files.
I had a fully deterministic game engine in 1991 and, funnily enough, a few days ago, with the help of Claude Code CLI / Sonnet 4.6, I compiled that old game of mine again (I may put it on a public repo one day). I still had the source files and assets after all those years, but not the tooling (no more MASM, no more linker), so I had to "fight" a bit to build it again - for example, I had not one but two macros that now clashed with macros/functions used by the assembler: "incbin" and another one I forgot - now using UASM, compiling for DOS but from Linux.
Another fun sidenote... A very good friend of mine wrote "World Rally Fever" (published by Team 17) and I was a beta tester of the game. Endless discussions with my friend, because I was pissed off that his engine was so "non-deterministic" that hitting the Turbo button on my 486 (I think it was a 486) while playing would change the behavior of the (computer) opponents.
https://youtu.be/NhRQWNqbvTk
To me a deterministic game engine just makes sense, unless you're a massively networked multiplayer game.
Blizzard could do it for Warcraft 3 in 2002 for up to 8 players and hundreds of units. Several games had it already in the nineties.
It simplifies everything, and I'd guesstimate that something like 99% of the games out there that don't do it actually could.
But it touches on something much more profound: state, and how programmers think about state and reproducibility. Hint: most don't think about that at all.
Some do, though: I was watching a Clojure conf video the other day, and they kept hammering that "the view is a function of the state". And it is. That's how things are. It was true in 1991 when I wrote my DOS game, it was true for Age of Empires, Warcraft 3 and many other games. And it is still true today.
But we're in 2026 and there are still many devs insisting that "functional programming sucks" and that we should bow to the mutability gods, for that is the only way, and they'll fight you to the death if you dare to say that "view <- fn(state)".
This explains that.
It never worked. You’d pause, and the plane was frozen in place yes, but the instrument cluster would still animate and show your altitude/speed changing as if you never paused. But you couldn’t control anything until unpaused. So you’d resume, and your momentum would suddenly leap to where the accumulated deltas ended up. So if you active-paused at full throttle, you’d unpause and start going way too fast… if you active paused while stalling, you’d unpause and your speed would be near zero… you’d even consume fuel while paused.
It's like they literally just froze the plane's position, left every other aspect of the physics engine untouched, never tested it, shipped it, and even did a bunch of marketing about how great the feature was - when it was so obviously broken.
I came back to the game after a year or so of updates, and not a thing had improved, it was every bit as broken as when they shipped it.
The 2024 release seems to have largely fixed it though from what I can see. It’s just nuts they had such a clearly broken feature for that long.
Outside of VR, Unity offers a nice "AudioListener.pause" to pause all audio, but if you have any sound effects in your pause menu like when a user changes a setting, those no longer work, further requiring more hacky fun (or just set it to true, and ignore user-complaints about no audio on menus when paused).
On top of that, you have things like Animators for characters, which also have to be paused manually by setting speed to 0 (TimeScale = 0 doesn't seem to do it). Some particle systems also don't seem to be affected by TimeScale. If you have a networked game then you also have to (obviously) send the pause command to clients. If you have things in motion, pausing/restarting the physics engine might cause weird issues with (at least angular) velocity so you might have to save/restore those on any moving objects.
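In practice "pause" ends up being a bundle of per-system pauses. One generic way to keep that manageable (a sketch, not a Unity API - the subsystem hooks are made up) is a registry where each system contributes its own pause/resume pair:

```python
class PauseManager:
    """Each subsystem registers its own pause/resume hooks, since no
    single switch (timescale, audio pause, ...) covers everything."""

    def __init__(self):
        self._hooks = []
        self.paused = False

    def register(self, on_pause, on_resume):
        self._hooks.append((on_pause, on_resume))

    def set_paused(self, paused):
        if paused == self.paused:
            return                      # idempotent: no double pause/resume
        self.paused = paused
        for on_pause, on_resume in self._hooks:
            (on_pause if paused else on_resume)()

pm = PauseManager()
log = []  # stand-in for real subsystem calls
pm.register(lambda: log.append("audio off"), lambda: log.append("audio on"))
pm.register(lambda: log.append("anim speed 0"), lambda: log.append("anim speed 1"))
pm.set_paused(True)
pm.set_paused(True)    # no-op, already paused
pm.set_paused(False)
print(log)  # -> ['audio off', 'anim speed 0', 'audio on', 'anim speed 1']
```

Saving and restoring rigidbody velocities, as described above, fits naturally as one more registered pause/resume pair.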
I see a lot of comments here saying something along the lines of "isn't it just a state in the state machine?" which isn't wrong, but is an extremely simplistic way of thinking about it. In, say, 1983, you could get away with something like that:
- pause the game: write "PAUSED" to the tilemap
- paused main loop: check input to unpause
- unpause the game: erase the "PAUSED" / restore the previous tiles
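That 1983 version really is just one flag and one branch. A sketch in Python rather than period assembly (the tilemap writes are elided as comments, and the input source is a stand-in):

```python
def run(frames, start_pressed_on):
    """Minimal pause loop: poll input every frame, skip everything
    else while paused. `start_pressed_on` stands in for reading the pad."""
    paused = False
    world_ticks = 0
    for frame in range(frames):
        if frame in start_pressed_on:
            paused = not paused   # write or erase "PAUSED" in the tilemap here
        if paused:
            continue              # skip sprite building and game logic
        world_ticks += 1          # ...the rest of the game update
    return world_ticks

print(run(10, {3, 7}))  # paused during frames 3-6 -> 6 ticks
```

Note that the `continue` is exactly the conditional branch described below: if sprite building sits under it, the sprites vanish while paused.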
But at that time you could already see the same sort of issues as today. Something somewhat common in Famicom/NES games is the sprites disappearing when the game is paused. Perhaps deliberate/desirable in some cases (e.g. Tetris) but a lot of the time, probably just a result of the 'is paused' conditional branch in the main loop skipping the sprite building code[0].
There's an extremely large problem space and ultimately, each game has its own way to define what "paused" actually means.
You might be interested in the features Godot provides[1] for this. Particularly, the thing that makes it interesting is the 'process mode' that each node in the scene tree has. This gives the developer quite a lot of control over what pausing actually means for a given game. It's not a complete solution, but a useful tool to help solve the various problems.
[0] Simplified description of course. Also, the sprite building code often ended up distributed throughout the various gameplay logic routines, which you don't want to run in the paused state.
[1] https://docs.godotengine.org/en/stable/tutorials/scripting/p...
[ed] Just adding that Tetris is only an example of a game where you might want that behaviour, not a comment about how any of the Tetris games were actually made.
Like torch flames and trees swaying in the wind.
[0] https://github.com/rameshvarun/marble-mouse/blob/8b25684a815...
A system is only correct relative to the transition system you wrote down. If the real system admits extra transitions that you care about (pause, crash, re-entry, partial commits), and you didn't model them, then you proved correctness of the wrong system.
[0] https://lamport.azurewebsites.net/video/videos.html
Pausing a game has a massive impact on the game experience. It lets you break the fourth wall experientially. Not wrong, but it changes the dynamic of the game.
Same as saving at any time does. As losing your loot or your life permanently does. Not wrong, but a hard choice that appeals to some players and not to others.
I used to pause pacman on my Atari 800 so I could run to church and sing in the choir or be an altar boy. Then I ran home and unpaused to continue. Sometimes in summer the computer over-heated and I lost everything while I was at church.
Lessons learnt? None, I think :)
While the game is paused, if a player were to click on the "level up" buttons for their skills, each click actually advanced the game by 1 frame - so it was possible for people to die etc. during a pause screen.
It suggests a level of control way below what I would ordinarily consider required for game development.
I have made maybe around 50 games, and I think the level of control of time has only ever gone up. Starting at move one step when I say, to move a non-integer amount when I say, to (when network stuff comes into play) return to time X and then move forward y amount.
switch(game_state):
You still have to be careful about how you implement gameplay, though. For example, if at any point you read the system clock to do time-based stuff like animations or physics, then when you unpause you'll suddenly have a couple of minutes of advance in a place where you expect fractions of a second. Modern games can have the same issue. Even when capturing the exact graphics commands and repeating them, you'll sometimes see animated physics effects like smoke and raindrops: they do the work on the GPU, where it's not necessarily tied to any traditional physics timestep.
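The usual fix is to read game time from a clock that stops accumulating while paused, instead of the system clock directly. A sketch (injecting `now` is just to make it testable; by default it reads `time.monotonic`):

```python
import time

class GameClock:
    """Game-time clock: real time always advances, game time freezes
    while paused, so nothing sees a huge delta on unpause."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._last = now()
        self._game_time = 0.0
        self.paused = False

    def tick(self):
        real = self._now()
        dt, self._last = real - self._last, real
        if not self.paused:
            # animations/physics read this, never time.monotonic() directly
            self._game_time += dt
        return self._game_time
```

Because `_last` keeps advancing even while paused, a two-minute pause contributes exactly zero game time on the first tick after unpausing.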
Unity introduced the idea of coroutines (essentially yield-based generators), people started using them, and they immediately encountered problems with pausing/saves.
Internally these coroutines compile down to state machines with opaque internals that are not necessarily consistent across compilers/code changes, and it's very difficult to accommodate needs like pausing when using them.
From what I've seen, the usual answer is that people go back to hand-written state machines and go through the pain of essentially goto-style programming to fix these issues.
Like, you're still just churning away at the main game loop while literally nothing else is happening except for you waiting for me to unpause it?
Because THAT would be an actual achievement. Hell, I can suspend any process from the Unixy side of things by sending a SIGSTOP signal, for a far more perfect "pause".
If I was a game dev, I would not settle for "looks like paused, but still burning down the environment and heating up my home as a side effect"
I hate when games run in multiplayer mode even during the single-player campaign (e.g. Generation Zero) and thus cannot be paused.
Another thing that I hate in this regard is unpausable cutscenes. I remember when I was playing The Witcher 3: there was some cutscene advancing a plot point, and right in the middle of it The Wife™ barged in telling me something important that required my attention... but I couldn't pause the scene, so I had to miss it while I listened to her. Why, oh why, do devs hate pausing cutscenes so much?
Then there are other things, such as VFX, that can have their own tiers; you might have something in the background that is difficult to pause, stop and/or resume.
Then there are things like timers, graphics card interactions, etc. that need special handling. When you're done with those, you also need to be able to resume and ensure that all of the above continues as if nothing ever happened.
In the same vein some other "simple" things like saving/loading the game is often anything but.
I really need to start blogging my notebook
A damn blurred screenshot should not make the GPU consume hundreds of Watts.