The Voodoo cards had no right to look as good as they did for their time. Someone rebuilding one from scratch is exactly the kind of project HN was made for.
Yes, but the Nvidia NV-1, which preceded the Voodoo, was much more impressive. Using NURBS you could display perfectly round objects. It also had forward texture mapping, which significantly improves cache utilization and would be beneficial even today.
It was just way harder to program for; triangles are much simpler to understand than Bézier curves, after all. And once Microsoft declared that DirectX would only support triangles, the NV-1 was immediately dead.
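For anyone unfamiliar with why curved primitives are harder to work with: the NV-1 rendered quadratic surface patches rather than triangles. As a rough illustration (toy code, nothing to do with actual NV-1 programming), here is how a single quadratic Bézier curve is evaluated with de Casteljau's algorithm; extending this to full patches, and then rasterizing them, is where the difficulty comes in:

```python
def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t via de Casteljau:
    repeated linear interpolation between control points."""
    lerp = lambda a, b, s: tuple(a[i] + (b[i] - a[i]) * s for i in range(len(a)))
    return lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)

# Midpoint of the arc from (0,0) through control point (1,2) to (2,0):
quad_bezier((0.0, 0.0), (1.0, 2.0), (2.0, 0.0), 0.5)  # -> (1.0, 1.0)
```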
> Also it had forward texture mapping which significantly improves cache utilization and would be beneficial even today.
Not really. Forward texture mapping simplifies texture access by making framebuffer access non-linear; reverse texture mapping has the opposite tradeoff. But that assumes rectangular textures without UV mapping, like the Sega Saturn used; the moment you use UV mapping, texture access will be non-linear no matter what. Besides that, forward texture mapping has serious difficulties the moment texture and screen sampling ratios don't match, which is pretty much always.
There is a reason why only the Saturn and the NV-1 used forward texture mapping, and the technology was abandoned afterwards.
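To make the tradeoff above concrete, here is a toy sketch (illustrative only, not actual Saturn or NV-1 behavior) contrasting the two iteration orders. Forward mapping walks the texture linearly but scatters framebuffer writes, and when `to_screen` isn't one-to-one it leaves holes or piles texels onto the same pixel, which is the sampling-ratio problem mentioned above:

```python
def reverse_map(screen_w, screen_h, tex, to_uv):
    """Reverse (inverse) mapping: iterate over screen pixels, fetch texels.
    Framebuffer writes are linear; texture reads can jump around."""
    fb = [[None] * screen_w for _ in range(screen_h)]
    for y in range(screen_h):
        for x in range(screen_w):
            u, v = to_uv(x, y)              # potentially non-linear texture access
            fb[y][x] = tex[v][u]
    return fb

def forward_map(tex_w, tex_h, tex, to_screen, screen_w, screen_h):
    """Forward mapping: iterate over texels, project each onto the screen.
    Texture reads are linear; framebuffer writes scatter, and texels can
    miss pixels (holes) or overwrite each other when ratios don't match."""
    fb = [[None] * screen_w for _ in range(screen_h)]
    for v in range(tex_h):
        for u in range(tex_w):
            x, y = to_screen(u, v)          # potentially non-linear framebuffer access
            if 0 <= x < screen_w and 0 <= y < screen_h:
                fb[y][x] = tex[v][u]
    return fb
```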
Thank you! These things do pack in a ridiculous amount of functionality for what they do. Probably why they look so good but also why it took 30 years for a hardware re-implementation.
I had a Voodoo3, can't remember the model number anymore, but my friend who had a TNT2 would often comment about how much worse the 3dfx's 16-bit color looked vs the TNT2's 32-bit. I could never tell a difference.
Voodoo 16-bit was nicely dithered. TNT 16-bit was ugly, while 32-bit looked good at the cost of ~30-50% of performance.
Nvidia was very smart to advertise 16-bit performance _and_ 32-bit quality at the same time :)
3dfx were stupid not to include a token 32-bit output option on the Avenger chip (Voodoo3). Every Voodoo chip since the first one has performed blending calculations at full precision, only dropping to dithered 16-bit output to save framebuffer RAM, but that RAM saving was meaningless by the time the 16MB V3 was released.
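For reference, a minimal sketch of the output-stage dithering being described: blend at full precision, then quantize each 8-bit channel down to RGB565 with an ordered-dither threshold so the quantization error varies per pixel instead of banding. This uses a generic 2x2 Bayer matrix, not the actual (reportedly much nicer) 3dfx dither pattern:

```python
BAYER_2X2 = [[0, 2],
             [3, 1]]  # classic 2x2 ordered-dither threshold matrix

def to_rgb565_dithered(r, g, b, x, y):
    """Quantize one 8-bit RGB pixel to packed RGB565 with ordered dithering.
    The per-pixel threshold nudges each channel before truncation."""
    t = BAYER_2X2[y % 2][x % 2]
    r5 = min(31, (r + t * 2) >> 3)   # 8 -> 5 bits
    g6 = min(63, (g + t) >> 2)       # 8 -> 6 bits
    b5 = min(31, (b + t * 2) >> 3)   # 8 -> 5 bits
    return (r5 << 11) | (g6 << 5) | b5
```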
I love the software look so much though! I never did like the blurring of textures :)
They're both beautiful in their own way: the darkness and glow of the hardware versions, a certain pixellated charm and roughness in the software version.
A 3dfx Voodoo Banshee was the first graphics card I ever bought. I bought it to play the EverQuest beta, which also would have been around 1999. I remember logging into that game for the first time and it felt like a life-changing experience. And it kind of was.
I remember really liking the 3dfx splash screen[1] for some reason. Maybe because it was the only thing that actually ran smoothly on that card. But still, I was a loyal 3dfx user - probably because of their marketing which someone else mentioned in the comments - and was sad when it went out of business a couple years later.
I exhausted my teenage savings to buy the Voodoo 1 due to the Linux support. Granted, I was running Red Hat at the time so the installation consisted of installing what, two RPMs? Played a lot of Q3 and Unreal on that card.
Same here. I remember some kernel module or video driver named tdfx, and then struggling to make X11 work with the DRI (Direct Rendering Infrastructure, or something like that) setting on. It was very rewarding to see it enabled in glxinfo's output after days of compiling half of your system and trying to figure out what was wrong, especially when access to the internet was limited, and then being able to launch GLtron with hardware acceleration. I also remember playing Quake 3 and America's Army around that time.
Fun times, now everything is straightforward on Linux but I somehow miss that era when you actually had to do everything by yourself.
I find your (and my!) reaction to LLM generated text fascinating. It has a distinct smell, and I honestly can't really put words to why I find it repellent, I just know that I do.
Are you sure this is AI? Normally when I read AI written stuff I zone out because it can go entire paragraphs without saying anything. The sentences here seem short and to the point.
Their previous posts, published before ChatGPT, seem similar enough. Although they have way more em dashes and this one has none, almost like they were removed on purpose... lol
I tend to feel the same way, although I'm actively trying to move past it. I'm OK at writing, but thanks to a combination of educational background and natural aptitude, I'm darned near illiterate at higher math. That puts me behind the 8-ball as an engineer, even though I've been reasonably successful at both hardware and software work. I tend to miss tricks that are obvious to my peers, but when I do manage to come up with something useful, I'm able to communicate with my peers and connect with my customers. While I don't need or want LLM assistance with writing, I can't deny that recent models have been a godsend for getting me out of trouble in the math department.
Now, here's somebody who's clearly strong on the quantitative side of engineering, but presumably bad at communicating the results in English. I consider both skill sets to be of equal importance, so what right do I have to call them out for using AI to "cheat" at English when I rely on it myself to cover my own lack of math-fu? Is it just that I can conceal my use of leading-edge tools for research and reasoning, while they can't hide their own verbal handicap?
That doesn't sound fair. I would like to adopt a more progressive outlook with regard to this sort of thing, and would encourage others to do the same. This particular article isn't mindless slop and it shouldn't be rejected as such.
Besides all that, before long it won't be possible to call AI writing out anyway. We can get over it now or later. Either way, we'll have to get over it.
I find it odd that the author adds all these extra semantics to their input registers, rather than keeping the FIFOs, the "drain + FIFO" logic, the float-to-fixed-point converting registers, etc. as separate components, distinct from the task of being memory-mapped registers. The central problem they ran into was that they let the external controller asynchronously change state in the middle of the compute unit using it.
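A toy sketch of the alternative being suggested: buffer host register writes in a FIFO and only commit them at a safe boundary, so the external controller can never mutate compute-unit state mid-operation. All names here are illustrative, not taken from the article:

```python
from collections import deque

class RegisterFile:
    """Illustrative sketch: memory-mapped register writes are queued in a
    FIFO and only committed between operations, so the host cannot change
    state while the compute unit is in the middle of using it."""
    def __init__(self):
        self.regs = {}
        self.pending = deque()
        self.busy = False            # set while the compute unit is mid-operation

    def host_write(self, addr, value):
        """Host side only ever enqueues; it never touches live state."""
        self.pending.append((addr, value))

    def drain(self):
        """Called by the compute unit at a safe boundary between operations."""
        assert not self.busy
        while self.pending:
            addr, value = self.pending.popleft()
            self.regs[addr] = value
```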
I'm noting down this conetrace for the future though, seems like a useful tool, and they seem to be doing a closed beta of sorts.
Tangentially related, that screenshot of Screamer 2 caught me off guard completely, I loved that game to death, and I feel I was the only one of my friends to have played it. Tremendous handling model and superb music.
I guess it's cool because it could possibly produce a single-board design able to emulate many designs with a flash update, including SLI, which required two Voodoo cards plus a host 2D card, all placed onto that one board. I don't know how one engineers the analog DAC bandwidth to render SVGA faithfully at 1600x1200 @ 60 Hz from an FPGA framebuffer, though.
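The DAC-bandwidth question has a concrete lower bound: the pixel clock. For 1600x1200 @ 60 Hz, the standard VESA timing uses a 2160x1250 total raster (visible plus blanking), which works out to a 162 MHz pixel rate that the DAC and framebuffer readout have to sustain:

```python
# Pixel-clock estimate for 1600x1200 @ 60 Hz. The totals below (2160x1250)
# are the standard VESA timing for this mode; the DAC must run at the full
# pixel clock, not just visible_pixels * refresh_rate.
h_total, v_total, refresh_hz = 2160, 1250, 60
pixel_clock_hz = h_total * v_total * refresh_hz
print(pixel_clock_hz / 1e6)   # -> 162.0 (MHz)
```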
Btw, most 8 MiB vintage Voodoo 2 cards can be upgraded to 12 MiB by simply soldering on more RAM. I managed to snag a bunch of legit 125 MHz chips that work with every card produced.
Very cool! I am wondering one thing: how fast is it? Much of the "secret sauce" of the Voodoo is its high speed: a first-gen Verite or (God forbid) any ViRGE takes many more cycles for common operations like, say, Z-buffered pixels.
I'm guessing this isn't fully cycle-accurate, but is it at least somewhat "IPC-accurate"? I'm guessing yes? But much of that was also derived from Voodoo's (for the time) crazy high memory bandwidth AFAIK.
It’s been a while since I’ve struggled with Xilinx tools, but I imagine there are real hardware requirements these days. Does this run on a Spartan-6, or do you need the latest UltraScale for it?
I have such fond memories of my old Voodoo card. Surprised how much nostalgia those pictures evoked - its rendering really had a unique look that this (LLM-generated?) FPGA implementation captured quite well.
IIRC, it was a gigantic (for the time) beast that barely fit in my chassis - BUT it had great driver support for ppc32/macos9 (which was already on its way out), and actually kept my machine going for longer than it had any right to.
And then, like a month after I bought it, NVidia bought 3dfx and immediately stopped supporting the drivers, leaving me with an extremely performant paperweight when I finally upgraded my machine. Thanks Jensen.
If you want to see what it's supposed to look like, copy the screenshot into GIMP, go to "Colors, Levels", and in the "Input Levels" section there should be a textbox+spinner showing "1.00". Set that to 0.45.
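Equivalently, outside GIMP: that "1.00" spinner is the Levels gamma value, and GIMP maps each channel as in ** (1/gamma), so setting it to 0.45 darkens the image (roughly re-applying a 2.2 gamma encode). A per-channel sketch:

```python
def levels_gamma(value, gamma):
    """Apply GIMP-style Levels gamma to one 8-bit channel value.
    GIMP maps in -> in ** (1/gamma), so gamma < 1 darkens and
    gamma > 1 brightens; 0.45 is roughly a 2.2 power curve."""
    return round(255 * (value / 255) ** (1.0 / gamma))

levels_gamma(128, 0.45)   # mid-grey drops to a much darker value
```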
> The Voodoo cards had no right to look as good as they did for their time.

Nor did their marketing:
https://news.ycombinator.com/item?id=35027437
Getting it working in linux in ~1999 was really not easy, especially for a teenager with no linux experience.
My networking card wasn't working either, so I had to run to a friend's house for dial-up internet access, searching for help on Altavista.
Very cool project. Way above my head, still!
[1] https://www.youtube.com/watch?v=LanTZ_AnAso
I believe I tried redhat, but had issues with that as well. I never went back to it--moved to debian and never looked back.
I also had an issue with the modem; paging through the manual, I figured out the initialisation string:
AT&FX1
I don't know what is real anymore.
https://lockbooks.net/pages/overclocked-launch
Or does this only run in simulation anyway?