Not only is this an insanely cool project, the writeup is great. I was hooked the whole way through. I particularly love this part:
> At this point, the system was trying to find a framebuffer driver so that the Mac OS X GUI could be shown. As indicated in the logs, WindowServer was not happy - to fix this, I’d need to write my own framebuffer driver.
I'm surprised by how well abstracted MacOS is (was). The I/O Kit abstraction layers seemed to actually do what they said. A little kudos to the NeXT developers for that.
I felt similarly. The learning curve was a tad steep, especially since I had never written a driver before, but once I figured out how to structure things and saw the system come alive, I grew to appreciate the approach IOKit takes.
With that said, I haven't developed drivers for any other platforms, so I really can't say if the abstraction is good compared to what's used by modern systems.
IOKit was actually built from the ground up for OS X! NeXT had a different driver model called DriverKit. I've never coded against either, but my understanding was they're pretty different beasts. (I could be wrong)
That said, indeed, the abstraction layer here is delightful! I know that some NetBSD devs managed to get PPC Darwin running under a Mach/IOKit compatibility layer back in the day, up to running XQuartz on NetBSD, with NetBSD translating IOKit calls. :-)
There’s a great video of a NeXT-era Steve Jobs keynote floating around—I think the one where he announces the x86 port as NeXT was transitioning to a software-only company—where he specifically calls out DriverKit and how great it is.
Steve was not a developer but he made it his business to care about what they cared about.
Yeah - even from the start, I remember NeXT marketing spending a disproportionate amount of its time selling NeXT's "object technology": AppKit and Interface Builder, DPS as an advanced graphics model. It was a good hunch from Steve, given how modern NeXTSTEP feels in retrospect.
For some reason, though, it means that people overlook how NeXT’s hardware was _very_ far from fast. You weren’t going to get SGI level oomph from m68k and MO disks.
Yes, you're right! I'm just a dolt who's never checked what a .kext on OS X actually is.
I had been under the impression that DriverKit drivers were quite a different beast, but they're really not. Here's the layout of a NS ".config" bundle:
The driver itself is a Mach-O MH_OBJECT image, flagged with MH_NOUNDEFS (except for the _reloc images, which are MH_PRELOAD; no clue how these two files relate/interact!)
OS X added a dedicated image type (MH_KEXT_BUNDLE) and they cleaned up a bit, standardized on plists instead of the "INI-esque" .table files, but yeah, basically the same.
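For anyone curious what actually distinguishes these image types on disk: it's just the filetype field in the Mach-O header. A minimal sketch of reading it, assuming a 32-bit big-endian PPC image (constants are the ones from `<mach-o/loader.h>`; the parser itself is mine):

```python
import struct

# Constants from <mach-o/loader.h>
MH_MAGIC = 0xFEEDFACE        # 32-bit Mach-O magic
MH_OBJECT = 0x1              # relocatable object file (the DriverKit drivers)
MH_PRELOAD = 0x5             # preloaded executable (the _reloc images)
MH_KEXT_BUNDLE = 0xB         # kext bundle, added later in OS X
MH_NOUNDEFS = 0x1            # flag: image has no undefined references

def mach_header_info(data: bytes) -> dict:
    """Parse the 28-byte header of a 32-bit big-endian Mach-O image."""
    magic, cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags = \
        struct.unpack(">7I", data[:28])
    if magic != MH_MAGIC:
        raise ValueError("not a 32-bit big-endian Mach-O image")
    return {"filetype": filetype, "no_undefs": bool(flags & MH_NOUNDEFS)}
```

So a DriverKit driver would report filetype MH_OBJECT with no_undefs set, while a modern kext reports MH_KEXT_BUNDLE.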
IOKit was almost done in Java; C++ was the engineering plan to stop that from happening.
Remember: there was a short window of time where everyone thought Java was the future and Java support was featured heavily in some of the early OS X announcements.
Also DriverKit's Objective-C model was not the same as userspace. As I recall, the compiler resolved all message sends at compile time. It was much less dynamic. But that's a hazy, 20 year old memory.
Mostly because they thought Objective-C wasn't going to land well with the Object Pascal / C++ communities, given those were the languages on Mac OS previously.
Worth noting that Android Things did indeed use Java for writing drivers. And on Android since Project Treble and the new userspace driver model in Android 8, drivers are a mix of C++, Rust, and some Java, all talking to the kernel via Android IPC.
Yes, and it's the same reason Java was originally introduced: Apple was afraid that a developer community educated in Object Pascal / C++ wouldn't be keen on learning Objective-C.
When those fears proved unfounded and devs actually welcomed Objective-C, that's when they dropped Java and the whole Java/Objective-C runtime interop.
And there are enough parallels to Linux's stack that I'm thinking about looking through the Linux on Wii project and comparing how it handles framebuffer issues. I loved reading this whole post; crazy how many OSes have now been run on the humble Wii!
Excellent project! This is one of the topics that keeps Hacker News ever refreshing: seeing work get done in a way that feels like real hacking, in a positive way.
The author has mentioned earlier attempts to port other OSes to the Wii but it appears these works didn't get much traction here on HN except for Windows:
Lastly, since we are in the context of turning the Wii into a computer, I'd like to give an honorable mention to: Hosting a blog on the Wii (622pts, 104cmts): (https://news.ycombinator.com/item?id=43754953)
Back in the day I was a hardcore Mac nerd and became a professional at it too. My best reverse-engineering trophy was building one of the first "iOS" apps when there was not an official appstore for the iPhone.
But man, this is way ahead of what I could do. What this dude accomplished blew my mind. Not only the output (running MacOS on a Wii), but the detailed post itself. A-MA-ZING.
As the author of the NetBSD Wii and Wii U ports, congrats! I’m looking forward to seeing how you solved some of the problems that I faced along the way.
This reminds me of the 2008-2009 era when Mac OS X Leopard ran as a Hackintosh on the Dell Mini 9 and some other netbooks.
At $349, it was almost a fully functional laptop running Mac OS X (compared to $1000+ MacBooks or $1599 MacBook Pros).
Two friends of mine were literally working remotely on a trip in Africa with Dell Mini 9s and mobile hotspots, doing video conferencing with Skype (on Wi-Fi).
Debugging kernel panics on a Wii in an economy seat is a level of focus I can't even imagine. Most people can't read a book on a plane without losing their place every 5 minutes.
Before figuring out how to tackle this project, I needed to know whether it would even be possible. According to a 2021 Reddit comment:
There is a zero percent chance of this ever happening.
Feeling encouraged, I started with the basics: what hardware is in the Wii, and how does it compare to the hardware used in real Macs from the era.
I almost think such projects are worth it just to immortalize comments like these. There's a whole psychology of wrongness that centers on declaring that not-quite-impossible things will definitely never happen, because it feels like principled skepticism.
That used to be my thing: whenever our ops manager declared something was impossible, I'd put my mind to proving her wrong. Even though we both knew she might declare something impossible prematurely just to motivate me.
My favorite was "it's impossible to know which DB is failing from a stack trace". I created STAIN (stack traces and instance names): a Ruby library that would wrap an object in a viral proxy (all returns from all methods are themselves proxies) that would intercept all exceptions and annotate the call stack with the "stain"ed tag.
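The original STAIN was Ruby and I don't know its actual API, but the viral-proxy idea can be sketched in a few lines of Python (the class and names here are mine, purely illustrative):

```python
class Stained:
    """Wrap an object so any exception raised through it carries a tag."""
    def __init__(self, target, tag):
        self._target = target
        self._tag = tag

    def __getattr__(self, name):
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr
        def wrapped(*args, **kwargs):
            try:
                result = attr(*args, **kwargs)
            except Exception as exc:
                # annotate the exception with the "stain" tag
                raise type(exc)(f"[{self._tag}] {exc}") from exc
            # returned objects become proxies too, so the stain spreads
            if hasattr(result, "__dict__"):
                return Stained(result, self._tag)
            return result
        return wrapped

# Hypothetical usage: a DB handle that fails somewhere deep in a call chain.
class OrdersDB:
    def query(self, sql):
        raise RuntimeError("connection refused")

db = Stained(OrdersDB(), "orders-db")
```

Any exception now arrives prefixed with the tag of the instance it came through, which is all you need to tell the failing DB apart.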
I've seen more than one half-joke-half-serious chunk of code that would "encode" arbitrary info into stack traces simply by recursively calling fn_a, then fn_s, fn_d, and fn_f before continuing with the actual intended call, giving you a stack trace with (effectively) "asdf" in it.
They've also been useful more than once, e.g. you can do that to know what iteration of a loop failed. There are of course other ways to do this, but it's hard to beat "stupid, simple, and works everywhere" when normal options (e.g. logs) stop working.
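A rough Python sketch of that trick (helper names are mine; note that CPython tracebacks print code-object names, so the link functions have to be created with exec rather than just renamed):

```python
import functools
import traceback

def _make_link(ch):
    # exec so the *code object* is named fn_<ch>; that's what tracebacks show
    ns = {}
    exec(f"def fn_{ch}(f):\n    return f()", ns)
    return ns[f"fn_{ch}"]

def with_trace_tag(tag, fn):
    """Run fn through a chain of frames whose names spell out `tag`."""
    for ch in reversed(tag):       # tag must be identifier characters
        fn = functools.partial(_make_link(ch), fn)
    return fn()
```

Calling `with_trace_tag("asdf", boom)` leaves `fn_a`, `fn_s`, `fn_d`, `fn_f` in any traceback that `boom` produces, visible even when logging is broken.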
Well, you're doing god's work as far as I'm concerned. Conflating difficulty in practice with impossibility in principle is, to my mind, a source of so much unnecessary cognitive error.
Similarly, one of the great things about Python (less so JS, with the ecosystem's habit of shipping minified bundles) is that you can just edit source files in your site-packages once you know where they are. I've done things like add print statements around obscure Django errors as a poor imitation of instrumentation. Gets the job done!
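For anyone who hasn't tried this: the interpreter will happily tell you where a module's source lives. A stdlib module is used here as a stand-in for any installed site-packages one:

```python
import importlib
import inspect

# Ask Python where a module's source file is, then go edit it.
mod = importlib.import_module("json")   # stand-in for any installed package
path = inspect.getsourcefile(mod)
print(path)   # something like .../lib/python3.x/json/__init__.py
```

From there it's a short step to sprinkling print statements around the line a traceback points at.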
I'm reminded of my favorite immortalized comment: "No wireless. Less space than a Nomad. Lame." Rob Malda of Slashdot, 2001, dunking on the iPod when it debuted.
They're kinda like high-effort shitposts. Which are my absolute favorite kind. The worse the effort/reward payoff, and the more it makes you ask "WHY??!!?", the better.
Wasn't the old Linux joke, don't ask "how do I do X with Linux" (because you'd get ridiculed for not reading the docs) but instead, just state "X isn't possible with Linux" and then someone would show you how it's done?
It's a great motivator; it happened with me too. I once asked a question about getting the original camera working on a custom ROM and got this as a response [1].
This led to a 2-year-long project [2] and an awesome time full of learning and collaboration.
I got the idea of writing an emulator in JavaScript in the pre-Chrome era, circa 2007. I remember searching around trying to find whether somebody had done it before. It seemed not, and somebody on a forum declared “that’s not possible”.
To me, it was obviously possible, and I was determined to prove them wrong.
Neat, and kudos! Reminds me of my young hobbyist days. I wish low level dev work was that approachable now.
Back in the old days, it was REALLY easy to initialize VGA and throw pixels around in ASM, C, or C++. The 6502 and related chips were relatively easy chips to build stuff for, even though tooling was non-existent. Shoot, you could do some really awesome things on a Tandy CoCo2 and BASIC of all things.
It feels like engineering has made this type of thing inaccessible. Most systems require a ton of knowledge and expertise. There is no easy 'in' for someone with a special interest in development. Even worse, AI is artificially dumbing things down, while making things even more inaccessible.
As someone who's been trying to do something VERY similar (port Mac OS 9 to the Nintendo Wii U), all I can say is I'm 1) absolutely impressed, and 2) absolutely encouraged, as my project keeps telling me "this is impossible" at every opportunity.
What stood out to me is how much of this worked because of strong abstraction boundaries.
It’s interesting because we don’t often think about OS-level abstractions in the same way anymore — but projects like this really show how powerful they are when they’re done right.
Makes me wonder how feasible something like this would be with modern systems, where things feel more tightly coupled and security constraints are much stricter.
I hope OP is still reading comments. I noticed that the project was written in Xcode (the repo even has the xcodeproj folder) but in some screenshots I see CLion. Did you switch at some point or were you using both throughout the development simultaneously?
Amazing writeup, love these types of blog posts, and hope the Hawaii trip was enjoyable.
> In the end, I learned (and accomplished) far more than I ever expected - and perhaps more importantly, I was reminded that the projects that seem just out of reach are exactly the ones worth pursuing.
Couldn't agree more. I've had my own experience porting something that seemed like an intractable problem (https://news.ycombinator.com/item?id=31251004), and when it finally comes together the feeling of accomplishment (and relief!) is great.
Exceptional work. While it may not mean much, I am truly impressed. I like to toy with reverse engineering here and there, but such a port like this would take me multiple lifetimes.
Not to distract too much from the main topic, but what do you think about the Hopper disassembler? I have only used Radare2, IDA Pro, and Ghidra. Though, I haven't used the latter two on MacOS. What do you prefer about Hopper? I have been hesitant to purchase a license because I was never sure if it was worth the money compared to the alternatives.
They are successfully porting Mac OS onto every kind of modern computer over at the hackintosh subreddit, and I can't understand why there is so little interest for this stuff in the "hacker" sphere.
Surely, it must be a better option than Linux if you want to get the most out of a PC? At least for 10 more years.
The one that really bugs me is the Apple TV. It would be a great little box to use for terminals/thin client style work and there are a ton of old cheap ones. Having a $50 used box that was low power and could run OSX would be great.
Had a very similar issue porting a hypervisor to ARM S-EL2. Writes would succeed, there were no faults, and everything looked reasonable in GDB, but the other side never saw the data. The root cause was that Secure and Non-Secure physical address spaces were backed by different memory even at the same address, and a single PTE bit selected between them. That took me much longer to understand than I’d like to admit.
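In other words, the NS bit is effectively an extra physical address bit. A toy model of why the write "vanished" (illustrative only, not real TrustZone code):

```python
class TrustZonePhysMem:
    """Secure and Non-Secure worlds may be backed by different memory."""
    def __init__(self):
        self.cells = {}          # keyed by (ns_bit, physical_address)

    def write(self, ns, addr, value):
        self.cells[(ns, addr)] = value

    def read(self, ns, addr):
        return self.cells.get((ns, addr))

mem = TrustZonePhysMem()
mem.write(ns=0, addr=0x8000_0000, value=0xDEADBEEF)   # Secure-world write
# A Non-Secure observer at the *same* address sees nothing:
print(mem.read(ns=1, addr=0x8000_0000))
```

The write succeeds and faults nothing; the data simply lands in the other world's backing store, which is exactly why it looks fine in the debugger.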
This was an incredible read! Especially for what looks like the first post to this blog too? I wanted to subscribe to the RSS feed but unfortunately it gives a 404 error.
I wonder what, if anything significant, has changed architecturally from OS X to modern macOS, and how this post could be used as a guide for future porting efforts (aside from the obvious two CPU ISA changes over the last 20 years).
I wonder if the YUV conversion could be offloaded somehow to the ARM inside the Hollywood or somehow using a shader (or equivalent) if the graphics were accelerated - though maybe this is way way too much.
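For scale, the per-pixel work being offloaded is roughly the standard BT.601-style conversion (full-range coefficients shown here; the article's exact variant may differ):

```python
def yuv_to_rgb(y, u, v):
    """One pixel of YCbCr -> RGB, BT.601 full-range approximation."""
    d, e = u - 128, v - 128
    clamp = lambda x: max(0, min(255, int(round(x))))
    return (clamp(y + 1.402 * e),                    # R
            clamp(y - 0.344136 * d - 0.714136 * e),  # G
            clamp(y + 1.772 * d))                    # B
```

A few multiplies and a clamp per pixel, times 640x480 at 60 fps, is exactly the kind of embarrassingly parallel loop a shader (or the ARM core) would be a natural fit for.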
This is extraordinary, not only pushing the limit but documenting everything so clearly to show people what can be accomplished with time and dedication. Thank you for such thorough documentation, and congrats on getting it done!
hand-rolled iokit drivers and a bootloader to get xnu running on 88mb of ram with cpu-bound yuv-to-rgb conversion at 60fps, all because the wii's powerpc 750cl is close enough to a g3 imac that darwin mostly just worked. solid systems work and a genuinely useful writeup but might try on a dreamcast personally. rom burns
YUV appears to be a PAL-specific color space. I wonder how off an NTSC Wii would be. Presumably it would have the wrong color space until an equivalent conversion scheme was devised for NTSC.
I was surprised to see regional color spaces leak into the project, but I presume that Nintendo's IOS (the coincidentally-named system this is replacing) could handle that abstraction for game developers.
The PowerPC-to-Intel transition still has the cleanest binary-format story in mainstream consumer OS history — Rosetta 1 was better engineering than people remember. Wild to see the Wii hardware resurrected for this.
This is incredible. I wonder when an LLM will pull this knowledge out to help someone down the line who would never have had the craft to pull this off, as it requires so much depth and broad skill. Admirable.
The Wii is very moddable. I've modded my Wii in the past just for playing modded versions of Super Smash Bros. Melee (mainly training packs, like flashing a red light when I miss an L-cancel).
To each his own and all that but sitting in a hotel room hacking on a computer while on vacation (in Hawaii!) is PTSD trigger-warning territory for me.
What's not to love? A small and beautiful PowerPC Unix workstation, something IBM hasn't done in a long, long time. How far does MacPorts go with a PPC?
So freaking cool and I really loved the writing style. One thing that surprised me was working on this while on a plane. I find it incredibly difficult to do normal or even limited work while on a plane (thankfully I fly rarely) but working on a hardware project like this on a plane feels like playing on hard mode.
Kudos to the author for being able to make real progress in such a hostile (IMHO) environment.
https://news.ycombinator.com/item?id=10006411
"At some stage in the future we may be able to move IOKit over to a good programming language"
> there was a short window of time where everyone thought Java was the future
Makes me think of how plists in macOS are XML, because back then XML was the future.
> I'm surprised by how well abstracted MacOS is (was).
Usually the difference between something being well-abstracted vs poorly-abstracted is how well it's explained.
You might also be interested in this similar work: Installing Mac OS on the Nintendo Wii [video] (123pts, 37cmts): (https://news.ycombinator.com/item?id=37306018)
> As for RAM, the Wii has a unique configuration: 88 MB total
TIL the Wii has only 88 MB of RAM. Fortunately games weren't Electron-based.
[1] https://en.wikipedia.org/wiki/Dell_Inspiron_Mini_Series
[2] https://en.wikipedia.org/wiki/Hackintosh
[1] https://xdaforums.com/t/how-do-i-port-pocos-miui-camera-to-c...
[2] https://xdaforums.com/t/anxcamera-closed-on-xda-only-16th-fe...
Anyway, this now exists because of that: https://github.com/bfirsh/jsnes
> Readers with a keen eye might notice some issues:
> - Everything is magenta.
That part was fun too.
A side note: you embedded .mov videos inside <video> tags. This is not compatible with all browsers (notably Chrome and Firefox), which won't load the videos.
Now that the MacBook Neo has an A18, I wonder if you could get MacOS running on an iPhone? :)
If you like this story, you might also like the story of how Mac OS X was ported to Intel as well.
https://news.ycombinator.com/item?id=4091216
https://www.reddit.com/r/hackintosh/
> There is a zero percent chance of this ever happening.
Honestly, I would have said the same. Great work!
Much easier to do, because of the superior, more modern architecture of Windows NT. (It's not based on Apollo-era OS like OSX is.)
Always great when your debugging feedback is via an LED xD