I find the first version https://github.com/petrmikheev/endeavour much more impressive. Dude somehow managed to get 100 MHz DDR1 RAM working on a two-layer board with no ground reference :o It's one of those things you only attempt when you're crazy or don't know any better. Anyone with EE experience will tell you it's impossible, like flying commercial-grade SoCs in satellites :) Mad lad.
I can't imagine why a 100 MHz digital signal at 2.5 V would be even particularly challenging on a small two-layer PCB. A lot of signal integrity lore has to do with passing EMI compliance (this almost certainly wouldn't), as well as with people extrapolating from Rick Hartley videos without pausing to think if it really applies to hobby stuff.
When PCBWay stopped accepting PayPal last month I tried JLC, and I have to say their quality was impressive and the price was a little over half the PCBWay quote. Certainly worth shopping around with your Gerbers. It might make an interesting write-up comparing quality and price from the leading Chinese fab houses for the same board.
Only done very small 2-layer boards for hobby projects, but it's crazy that you can get custom PCBs made and shipped from China for <£10
It seemed like it might be coming to an end when PCBWay suddenly had no reasonable payment options, but JLC has been great so far (and I believe PCBWay has credit card payments sorted now?)
(Just wish I had far more free time to spend on hobbies, there's so many possibilities with 3D printing, microcontrollers, and custom PCBs now all so readily available)
I've been ordering from PCBWay since 2016 for my company, hundreds of projects, but now because of the PayPal issue we are forced to look for other suppliers. JLCPCB is good but not very professional: you cannot order large quantities, specifying a custom panel is a bit difficult, etc.
This is very impressive. How did you learn to design a real computer, not the toy ones a lot of people make? I read part 1 and part 2, and it looks like you just "threw in" Ethernet and other stuff and it was done. I really hope to learn from the process, thanks!
URL: https://blog.mikhe.ch/quake2-on-fpga/part6.md

404 File not found

The site configured at this address does not contain the requested file.
If this is your site, make sure that the filename case matches the URL as well as any file permissions.
For root URLs (like http://example.com/) you must provide an index.html file.
Read the full documentation for more information about using GitHub Pages.
I found the project on YouTube[1] and wanted to share it, but decided to find something text-based for HN, and in the rush to post I failed to check whether the post was even complete. I should've posted the video instead.

[1]: https://youtu.be/sioLAkNQC_I
I wanted to submit the GitHub link as "Show HN". It seems that the guidelines don't forbid a "Show HN" if somebody else posted a link to my project prematurely.
However, it seems that I cannot. Rejected with "We're temporarily restricting Show HNs because of a massive influx, mostly by users who aren't yet familiar with the site or its culture." :-(
This is really cool and impressive... but relatedly...
Has anyone figured out what the minimum specs for Quake are?
I feel like the first thing everyone does with a computer is determine whether or not it can run Quake, and I'm just wondering: what's the simplest computer that could exist that could still run Quake?
You can find a lot of discussion about what the minimum specs for Quake are. Famously, it needs a decent FPU, and the Pentium was a convenient early CPU with a decent built-in FPU. It was significantly faster than a 486.
…But people have managed to run Quake on the 486.
And the myth people tell about Quake is that it killed Cyrix, because Quake performance on Cyrix was subpar. But was that true? And if it was true, was that because the Cyrix was slower than a Pentium, or was it because the Quake code had assembly that was hand-optimized for the Pentium FPU pipeline?
Anyway. “Most simple computer that could run Quake” is probably going to include a decent FPU. If you are implementing something on an FPGA, you can probably get somewhere around 200 MHz clock anyway. At which point you can run Quake II.
My perspective from being a teen doing lan party stuff at the time: Quake ran slow on them, but it was far from the only thing that ran slow. Cyrix was well understood to be the value brand for general office apps and such, but not up to it for more demanding computing, and for having random compatibility issues here and there.
Ultimately what killed Cyrix is they just couldn't offer enough of a discount vs intel to matter, especially with all the lock in stuff intel was doing with Dell, Gateway, etc.
Intel Inside was a successful marketing campaign as well. If you were around back then I bet you can imagine the jingle/chord immediately.
I had a Cyrix 6x86 when Quake first came out. My disappointment at how poorly Quake ran on it was significant, especially because pretty much every other game at the time ran well on the Cyrix. The FPU performance in Quake was doubly handicapped on the Cyrix: not only was its FPU slower than the Pentium's to begin with, Quake's code was indeed hand-optimized for the Pentium's FPU pipeline. Fabien Sanglard's writeup of Michael Abrash's optimizations for Quake goes into great detail: https://fabiensanglard.net/quake_asm_optimizations/
Cyrix chips were physically incapable of pipelining FPU instructions. Without the Pentium, Quake would have had to wait two more years for the commoditization of CPUs delivering similar floating-point performance.
https://thandor.net/benchmark/33

Quake needed a March 1994 Pentium 90-100 to deliver a ~smooth 25 fps. Cyrix released the similarly performing 6x86MX PR200 in May 1997, and AMD the K5-PR166 in January 1997. Selling Quake as a playable game wouldn't have been feasible until ~1998 at the earliest.
Yes but also no. The problem with fixed point arithmetic is a lack of dynamic range compared to floating point. Floats are great at representing both large numbers with limited precision and small numbers with high precision, but with fixed point you have to make a choice based on which kind of number you're trying to represent. Meaning you need to use a mixture of 8.24, 16.16 and 24.8 fixed point types (and appropriate conversions) depending on the context of the calculations that you're doing.
It's possible to write a game engine with that limitation, but there's no easy natural conversion from Quake's judicious use of floats to a fully fixed-point codebase. You'd have to redesign and rewrite the entire engine from scratch, basically.
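The trade-off described above can be sketched in C. This is purely illustrative (the type names and helpers are invented for this example, not taken from Quake's codebase): the same 32-bit word gives you either integer range or fractional precision, never both, which is exactly what a float's sliding exponent avoids.

```c
#include <stdint.h>

/* Minimal fixed-point sketch. Qm.n: m integer bits, n fraction bits. */
typedef int32_t q16_16;   /* 16.16: range ~±32768, resolution ~1.5e-5 */
typedef int32_t q8_24;    /* 8.24:  range ~±128,   resolution ~6e-8   */

#define Q16_ONE (1 << 16)
#define Q24_ONE (1 << 24)

/* Multiply two 16.16 numbers. The 64-bit intermediate is essential:
 * the raw product has 32 fraction bits and would overflow 32 bits. */
static q16_16 q16_mul(q16_16 a, q16_16 b) {
    return (q16_16)(((int64_t)a * b) >> 16);
}

/* Convert 16.16 -> 8.24: gains fraction bits, loses integer range.
 * Any value >= 128 silently overflows here; with floats this
 * rescaling bookkeeping simply doesn't exist. */
static q8_24 q16_to_q24(q16_16 a) {
    return a << 8;
}
```

A codebase like Quake's would need conversions like `q16_to_q24` sprinkled through every calculation whose operands live at different scales, which is why a mechanical float-to-fixed translation doesn't work.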
The PS1 doesn't have an FPU but got a version of Quake 2, so it's possible. That said, it was somewhat different from the PC version, so it could be argued that it's not the same game.
I can't speak on Quake, but I was a level designer on the failed effort to port Unreal to PSX.
My understanding from talking to the coders at the time was that Unreal's software renderer was a huge advantage as a starting point. They were able to reuse a lot of the portal rendering stuff as setup on the R3K CPU, but none of the rasterization. That had to go to the graphics core, which was a post-setup 2D engine that, in addition to the usual sprites, could do tris and quads.
We had a budget of about 3k polygons post-clipping, and having two enemies on screen would burn about half of that. The other huge limit was that the texture cache was tiny, so we couldn't do lightmaps. Our lighting was baked in at the vertex level and it just was what it was.
There's a bit more info here: https://www.terrygreer.com/unrealpsx.html
I imagine the situation with Quake was comparable. The BSP stuff would carry right over, but I can't imagine they got proper lightmapping working at the time. They'd also need some sort of solution for overdraw, as Quake's PVS was a lot looser than Unreal's portal clipping.
The PS1 version uses a custom engine based on technology built for the game Shadow Master, the previous title by Hammerhead Studios. It was a technical tour de force for the original PlayStation.
I want to look at this from a different perspective… a single-precision floating-point multiply is pretty simple, no? 24x24 bit multiply, which is about half as many gates as a 32x32 bit multiply.
Maybe I would prefer to rip out the integer multiplication unit first, before ripping out the FPU.
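To make that datapath concrete, here's a toy C model of a single-precision multiply (my own sketch, not any real FPU's logic): normal numbers only, truncating rounding, no NaN/infinity/subnormal handling. The core of it really is just a 24x24-bit significand multiply plus exponent addition.

```c
#include <stdint.h>
#include <string.h>

/* Toy binary32 multiply: normals only, truncating rounding.
 * Illustrates that the main datapath is a 24x24 -> 48 bit multiply. */
float toy_fmul(float a, float b) {
    uint32_t ua, ub;
    memcpy(&ua, &a, 4);
    memcpy(&ub, &b, 4);

    uint32_t sign = (ua ^ ub) & 0x80000000u;
    int32_t  ea = (int32_t)((ua >> 23) & 0xFF) - 127;
    int32_t  eb = (int32_t)((ub >> 23) & 0xFF) - 127;
    uint32_t ma = (ua & 0x7FFFFFu) | 0x800000u;   /* implicit leading 1 */
    uint32_t mb = (ub & 0x7FFFFFu) | 0x800000u;

    uint64_t prod = (uint64_t)ma * mb;            /* 24x24 -> 48 bits */
    int32_t  e = ea + eb;
    if (prod & (1ull << 47)) {                    /* product in [2,4): renormalize */
        prod >>= 1;
        e += 1;
    }
    uint32_t mant = (uint32_t)(prod >> 23) & 0x7FFFFFu; /* drop implicit 1 */
    uint32_t r = sign | ((uint32_t)(e + 127) << 23) | mant;

    float out;
    memcpy(&out, &r, 4);
    return out;
}
```

Everything outside the `(uint64_t)ma * mb` line is cheap glue logic, which supports the point that a single-precision multiplier is roughly half the partial-product array of a 32x32 integer multiplier (576 vs 1024 bit-products).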
Quake III was the one with the clever approximate inverse square root code, right? I wonder (especially since there's an instruction nowadays to draw inspiration from): can you implement it "in hardware," so to speak?
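For reference, the trick in question (popularized as Q_rsqrt in Quake III Arena's source) looks like this, rewritten here with memcpy instead of the original pointer type-punning so it's well-defined C; the function name is mine, the magic constant is the famous one:

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x) via the Quake III bit-level trick,
 * plus one Newton-Raphson refinement step (error well under 1%). */
float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);       /* reinterpret float bits as integer */
    i = 0x5f3759df - (i >> 1);      /* initial guess: halve/negate the exponent */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - half * y * y);  /* one Newton iteration */
    return y;
}
```

In hardware terms the initial guess is just a shift and a subtract on the raw float bits, which is presumably why it maps cheaply onto an FPGA; modern x86 exposes a similar fast approximation as the rsqrtss instruction.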
"More pictures in the next part.
Next part: coming soon"
I suppose the link came to HN a bit too early.
Part5: https://blog.mikhe.ch/quake2-on-fpga/part5.html
Part6: https://blog.mikhe.ch/quake2-on-fpga/part6.html
Nonetheless, impressive project!
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
https://news.ycombinator.com/newsguidelines.html
Your entire comment history seems to be AI generated.