Will RISC-V end up with the same (or even worse) platform fragmentation as ARM? Because of the absence of any common platform standard, we have phones that are only good for landfill once their support lifetime is up, and drivers that never get upstreamed to the Linux kernel (or cannot be upstreamed at all because of the completely idiosyncratic platforms and boot protocols each manufacturer creates). RISC-V allows even greater fragmentation in which portions of the instruction set each CPU supports, e.g. one manufacturer might decide that MUL/DIV (the "M" extension) are not needed for their CPU.
RISC-V is addressing this issue quite directly. For things like desktops, laptops, SBCs and servers we have the RVA23 profile which defines quite specifically what features a chip must support to ensure code portability.
On top of this, there are platform specifications. For example, the server spec is about to be finalized next month. It extends RVA23 with things like UEFI, SBI, and ACPI to ensure that you can take something like a Linux distro and easily install it on any RISC-V server, just as you can in the world of x86-64.
> we have phones that are only good for landfill once their support lifetime is up
RISC-V will probably not solve that problem in general.
First, the ISA cannot really demand that your phone avoid, say, a Broadcom wireless chip that requires proprietary firmware.
Also, the phone vendor can still lock down the device to prevent running arbitrary code.
Thankfully, the RISC-V world is developing a culture of openness. If a company wants to create a fully “open” phone, they are quite likely to adopt RISC-V. And, because of RISC-V, even the SoC itself could be fully Open Source.
But your typical Android phone is not going to get more open just because it contains a RISC-V CPU.
The answer is unequivocally yes: RISC-V is designed to be customizable, and a vendor can put whatever they like into a given CPU. That being said, profiles and platform specs are designed to limit fragmentation. The modular design and the essential base ISA also make fat binaries much more straightforward to implement than on other ISAs.
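To make the fat-binary point concrete, here is a minimal C sketch (not anyone's actual implementation) of selecting a per-extension code path. The __riscv_v macro is defined by GCC/Clang when the build targets the V extension; the runtime check assumes a Linux kernel that exposes the single-letter base extensions via AT_HWCAP, one bit per letter, which I believe recent kernels do. Everything else (function names, the bit trick) is just illustrative.

    #include <stdio.h>
    #if defined(__linux__)
    #include <sys/auxv.h>          /* getauxval, AT_HWCAP */
    #endif

    /* Scalar fallback: runs on any core that implements the base profile. */
    static long sum_scalar(const int *p, long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += p[i];
        return s;
    }

    #if defined(__riscv_v)
    /* Compiled only when the toolchain targets the V extension (e.g. -march=rv64gcv).
     * The real RVV loop is omitted; the point here is the selection mechanism. */
    static long sum_vector(const int *p, long n) { return sum_scalar(p, n); }
    #endif

    /* A "fat binary" carries both variants and picks one at startup. */
    static long (*pick_sum(void))(const int *, long) {
    #if defined(__riscv_v) && defined(__linux__)
        /* Assumption: Linux reports base extensions in AT_HWCAP, bit ('v' - 'a'). */
        if (getauxval(AT_HWCAP) & (1UL << ('v' - 'a')))
            return sum_vector;
    #endif
        return sum_scalar;
    }

    int main(void) {
        int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        printf("%ld\n", pick_sum()(data, 8));
        return 0;
    }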
You can choose to develop proprietary extensions, but who’s going to use them?
A great case study is the companies that implemented the pre-release vector standard in their chips.
The final version is different in a few key ways. Despite substantial similarities to the ratified version, very few people are coding SIMD for those chips.
If a proprietary extension does something actually useful to everyone, it’ll either be turned into an open standard or a new open standard will be created to replace it. In either case, it isn’t an issue.
The only place I see proprietary extensions surviving is in the embedded space, where they already do this kind of thing, but even that seems to be the exception with the RISC-V chips I've seen. Using standard compilers and tooling instead of a crappy custom toolchain (probably built on an old version of Eclipse) is just nicer (and cheaper for chip makers).
Yes, extensions are perfect for embedded. But not just there.
Extensions allow you to address specific customer needs, evolve specific use cases, and experiment. AI is another perfect fit. And the hyperscaler market is another one where the hardware and software may come from the same party and be designed to work together. Compatibility with the standard is great for toolchains and off-the-shelf software but there is no need for a hyperscaler or AI specific extension to be implemented by anybody else. If something more universally useful is discovered by one party, it can be added to a future standard profile.
RVA23 is the standard target for compilers now. If you support newer stuff, it’ll take a while before software catches up (just like SVE in ARM or AVX in x86).
If you try to make your own extensions, the standard compiler flags won't support them, and they'll probably be limited to your own software. If an extension is actually good, you'll have to get everyone on board with a shared, open design and then get it added to a future RVA standard.
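To illustrate what "limited to your own software" means in practice: an upstream compiler will never emit a vendor-only instruction, so about the best you can do is hand-encode it yourself, for example with the GNU assembler's .insn directive. The instruction below is entirely made up; 0x0b is the custom-0 major opcode that RISC-V reserves for vendor experiments, and this only assembles with a RISC-V toolchain. A sketch, not a recommendation:

    /* Hypothetical vendor instruction in the custom-0 opcode space (0x0b).
     * No real chip or extension is implied. */
    static inline long vendor_op(long a, long b) {
        long rd;
        /* .insn r <opcode>, <funct3>, <funct7>, rd, rs1, rs2 emits a raw R-type
         * encoding. Only our own binaries will ever contain it, and on any other
         * RISC-V core it simply traps as an illegal instruction. */
        __asm__ volatile (".insn r 0x0b, 0x0, 0x00, %0, %1, %2"
                          : "=r"(rd)
                          : "r"(a), "r"(b));
        return rd;
    }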
Compiling the code is not the issue. The hard part is the system integration, most notably the boot process and peripherals. It's not actually hard to compile code for any given ARM or x86 target; even much less open ecosystems like IBM mainframes have free and open source compilers (e.g. GCC). The ISA is just how computation happens. But you have to boot the system and get data in and out for the system to be actually useful, and pretty much all of that contains vendor-specific quirks. It's really only the x86 world where that got so standardized across manufacturers, and that was mostly because people were initially trying to make compatible clones of the IBM PC.
Thanks; that, however, addresses only a part of the problem. ARM also suffers from having no boot/initialization standard: each manufacturer does it their own way instead of what the PC had with BIOS or UEFI, making ARM devices incompatible with each other. I believe the same holds for RISC-V.
There is a RISC-V Server Platform Spec [0] on the way that is supposed to standardise SBI, UEFI, and ACPI for server chips, and it is expected to be ratified next month. (I have not read it myself yet.)
[0]: https://github.com/riscv-non-isa/riscv-server-platform
Some stuff, like the BRS (Boot and Runtime Services Specification) and SBI (Supervisor Binary Interface), already exists.
PC/x86 was an extreme outlier, sadly, and it was because of the Microsoft/Intel business model. The architecture details were historically mostly decided by Wintel, yet the system integration was done by many vendors, whose best interest was to stay as compatible as possible. It's unlikely that another platform would be able to reach this state; the PC architecture work was subsidized by the M$ software monopoly, which nobody would want to suffer through again.
> Its unlikely that another platform would be able to reach this state...
Is this really true? The computer ecosystem is more open now than ever. The original PC BIOS (which PC-compatible manufacturers needed to implement) was never an open, documented standard. It was a proprietary, closed system made by IBM. It's pretty fair to say that IBM didn't anticipate a PC/x86 ecosystem developing around their product. They even sued companies who made their own compatible BIOSes (like Corona). Intel didn't really have much to do with the success of the product at that point in time either, much less Microsoft.
In contrast, every widely used modern system for hardware abstraction (UEFI/ACPI/DeviceTree/OpenSBI/etc.) is an open, royalty-free standard that anyone can use. Their implementation in the ARM world is newer and inconsistent, but that's only because of how hugely diverse the ARM ecosystem is.
I think the issue is that desktop and server computing are “open” in the sense that you have full control over the software you run on them. So people interpret the dominant desktop and server platform architecture (the world of x86-64) as being open.
The embedded world is mostly closed; you are meant to run the software your hardware comes with. The platforms popular there (ARM and RISC-V) are considered less open.
Mobile devices like phones and tablets are historically closed devices, regardless of ISA. They are generally getting more closed in the name of security.
It is not the ISA that is “open” but the industry.
That said, in RISC-V, there is a sub-current of openness. I do not think that will overcome the industry tendencies in general, but there will be a small cadre of folks trying to create an open presence in every niche. The good news is that there is nothing to stop them. They will succeed eventually.
The early PC era was a mess, and that's not the period I'm talking about. IBM was clearly not up to the task and Intel didn't care much yet, but Microsoft certainly did a lot for compatibility from the start (i.e. DOS abstracted away a lot of BIOS routines, so it would be easy to port MS-DOS to a non-IBM x86). But after IBM revealed MCA to show exactly how much they cared about compatibility and platform openness, Intel realized they were missing out and cleaned up the MCA/EISA/VLB mess with PCI. Then Microsoft and Intel jointly released APM in 1992 (which was clearly not enough), and then ACPI in 1996 (which is a total dumpster fire, but a sufficiently functional dumpster fire). I.e. ACPI and UEFI are exactly the product of the monopoly. M$/Intel profited from the abundance of cheapo white boxes, so it was in their best interest to come up with a standard that even Dell could implement. The fact that AMD was going to implement ACPI too wasn't much of a bother for Intel - they were so dominant that they could afford not to care.
On the other hand, ARM sells the cores to SoC vendors (and doesn't care much what becomes of them), the SoC vendors duct-tape the ARM cores to a bunch of Synopsys peripherals and sell the resulting SoCs to smartphone and car makers (and don't care much about the product), and the system integrators throw Android on top and sell it to the customers. Then Google, who gets all the cream via Play, hides all the mess behind a thousand layers of Java abstractions.
DeviceTree is an offshoot of Sun's OpenFirmware (and it leaves out all the hard stuff - OpenFirmware had Forth, DeviceTree expects the kernel to support every single brand of fan switch). OpenSBI is a disaster. I'm sorry, but what kind of bright mind came up with the idea of hiding the damn *timer* behind a privilege switch? Timers were enough of a pain point on x86 already, until it settled on the userspace-accessible RDTSC. RISC-V SBI? Reproducing x86 one stupid decision at a time.
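For context on the timer complaint: reading the counter is unprivileged (rdtime, roughly RDTSC's role), but arming the next timer interrupt has historically meant an ecall from the kernel into M-mode firmware. A rough sketch of both, assuming I have the SBI calling convention and the TIME extension ID (0x54494D45, FID 0 for set_timer) right; the Sstc extension later added a supervisor-writable stimecmp to avoid the round trip:

    #include <stdint.h>

    /* Kernel-side: ask the M-mode firmware (e.g. OpenSBI) to fire a timer
     * interrupt at stime_value. a7 = extension ID, a6 = function ID. */
    static inline long sbi_set_timer(uint64_t stime_value) {
        register uintptr_t a0 __asm__("a0") = (uintptr_t)stime_value;
        register uintptr_t a6 __asm__("a6") = 0;            /* FID: set_timer */
        register uintptr_t a7 __asm__("a7") = 0x54494D45;   /* EID: "TIME"    */
        __asm__ volatile ("ecall" : "+r"(a0) : "r"(a6), "r"(a7) : "memory", "a1");
        return (long)a0;                                    /* SBI error code */
    }

    /* By contrast, reading the time counter is a plain unprivileged CSR read. */
    static inline uint64_t read_time(void) {
        uint64_t t;
        __asm__ volatile ("rdtime %0" : "=r"(t));
        return t;
    }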
Just like everything else outside PC thanks to clones becoming a thing.
One reason UNIX became widely adopted, besides being freely available versus the other OSes, was that it allowed companies to abstract their hardware differences, offering some market differentiation while keeping some common ground.
Those phones' common ground is called Android, with a Java/Kotlin/C/C++ userspace; folks should stop seeing them as GNU/Linux.
> Enabling new business models
This is true, but only for the bigger players. The nature of hardware still fundamentally favors scale and centralization. Every hyperscaler eventually gets to a size where developing in-house CPU talent is just straight-up better (Qualcomm and Ventana + Nuvia, Meta and Rivos, Google has been building their own team, Nvidia and Vera-Rubin; God help Microsoft though). This does not bode well for RISC-V companies, who are just being used as a stepping stone. See Anthropic, who currently licenses but is rumored to be developing their own in-house talent [1].
> Extensibility powers technology innovation
>> While this flexibility could cause problems for the software ecosystem...
"While" is doing some incredible heavy lifting. It is not enough to be able to run Ubuntu, as may be sufficient for embedded applications, but to also be fast. Thusly, there are many hardcoded software optimizations just for a CPU, let alone ARM or x86. For RISC-V? Good luck coding up every permutation of an extension that exists, and even if it's lumped as RVA23, good luck parsing through 100 different "performance optimization manuals" from 100 different companies.
> How mature is the software ecosystem?
10 years ago, when RISC-V was invented, the founders said 20 years. 10 years later, I say 30 years.
The nature of hardware, as well, is that the competition (ARM) is not standing still. The reason for ARM's dominance now is the failure of Intel, and the strong-arming of Apple.
I have worked in and on RISC-V chips for a number of years, and while I am still a believer that it is the theoretical end state, my estimates just feel like they're getting longer and longer.
[1]: https://www.reuters.com/business/anthropic-weighs-building-i...
> good luck parsing through 100 different "performance optimization manuals" from 100 different companies.
IMO this is pretty misguided. If you're writing above the assembly level, you can read the performance optimization manual for Intel, and that code will also be really fast on AMD (or even Apple/Graviton). At the assembly level, compilers need to know a little bit more, but mostly those are small details, and if they get roughly the right metrics, the code they produce is pretty good.
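One concrete reason this holds for RISC-V too: the vector extension is vector-length agnostic, so the same code runs correctly on cores with different vector register widths, and per-vendor tuning affects speed rather than correctness. A minimal sketch using the RVV C intrinsics (the __riscv_-prefixed v1.0 API; assumes a recent GCC/Clang and a -march including "v" - the function name and LMUL choice are just illustrative):

    #include <stddef.h>
    #include <riscv_vector.h>

    /* y[i] += a * x[i], written once against the ISA rather than a specific chip.
     * vsetvl asks the hardware how many elements it will handle this iteration,
     * so the loop is unchanged whether VLEN is 128 or 1024 bits. */
    void saxpy_rvv(size_t n, float a, const float *x, float *y) {
        size_t i = 0;
        while (i < n) {
            size_t vl = __riscv_vsetvl_e32m8(n - i);
            vfloat32m8_t vx = __riscv_vle32_v_f32m8(x + i, vl);
            vfloat32m8_t vy = __riscv_vle32_v_f32m8(y + i, vl);
            vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);   /* vy += a * vx */
            __riscv_vse32_v_f32m8(y + i, vy, vl);
            i += vl;
        }
    }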
I stopped listening to what Canonical says. They often get involved in things and disturb the ecosystem then abandon stuff or dig a "not invented here" hole.
Unity, Bazaar, Mir, Upstart, Snap, etc.
All of them had existing, well-established projects that Canonical attempted to uproot for no purpose other than wanting more control, yet they can't actually operate or maintain that control.
Ubuntu Touch... I was so excited about it that I bought one of the phones with it preloaded. I even used it as my sole daily driver for months, until I learned that I was not receiving all calls made to me. Even after that I kept hoping it would keep developing so that I could pick it up again one day. But then Canonical abandoned it instead. That's when they became as good as dead to me.
Sadly, KDE and Gnome each spent a lot of time on the same things. Plasma Mobile has eaten time that could have gone into making Plasma a better desktop.
That's a strange argument. Open source software including Plasma Mobile is developed by volunteers who choose to spend their time on a given project. I am quite happy with the pace of Plasma Desktop and the progress made in the past 3 years on its 6th iteration.
As a KDE developer I can say that there are times that we have done things differently or taken the long road because we wanted to support Plasma Mobile.
It would have just been better to continue doing the desktop specific things and let the Plasma Mobile enthusiasts make those changes.
Correct, and I used bzr quite a bit during that time. It was interesting in some ways, but Canonical pushed it for many years after git was obviously the better choice.
Even to this day there is a complex and archaic process of using Launchpad where git is tacked on because they stuck with Bazaar for so long.
Similarly, upstart, from 2006, was widely deployed before Red Hat brought in systemd, and got dropped when Debian decided to go with systemd. It's surprising how this gets misremembered, given the hate systemd initially received.
I remember it well. At least Canonical also jumped on the systemd bandwagon when upstream (Debian) made a choice, instead of dragging upstart on, like it has done with countless other projects that are past their time (juju looking at you)
Not sure on the timelines, but Snap, upstart and Mir were all attempts at evolving the Linux ecosystem that lost to Red Hat-backed systems. Unity was legit abandoned, and Bazaar... not sure what they were trying to solve there with git and forges already existing.
> bazaar... Not sure what they were trying to solve there with git and forges already existing.
You are mistaken here. Bazaar, Mercurial, and Git appeared at about the same time, and I think Bazaar was released first.
IIRC, Bazaar tried to distinguish itself by handling renames better than other version control systems. In practice, this turned out not to be very important to most people.
(Tangent: It wasn't clear at the time whether Mercurial or Git was the better pick. Their internal design was very similar. Mercurial offered a more pleasant user interface, superior cross-platform support, and a third advantage that I'm forgetting at the moment. Git had unbeatable author recognition. Eventually, Git's improved Windows support and the arrival of GitHub sealed its victory in the popularity contest. But all of that came to pass well after Bazaar was released.)
Lightweight branch model of git mapped so much better to the way that actual development processes of medium to large projects really work(ed).
Named branches vs bookmarks in hg just means bike shedding about branching strategy. Bookmarks ultimately work more like lightweight git style branches, but they came later, and originally couldn't even be shared (literally just local bookmarks). Named branches on the other hand permanently accumulate as part of the repository history.
Git came out with 1 cohesive branch design from day 1.
That's a fair criticism for some workflows, and I like the lightweight model, but we should keep in mind the context of the time:
When these DVCS appeared, Git's branch design departed from what "branch" meant practically everywhere else. That added to its already significant learning curve, creating more friction for people trying (or being asked) to adopt it.
Meanwhile, Mercurial's "branch" was closer to well-established norms. This was one of several factors that made it the easier of the two to learn, which was important when already asking people to uproot from their familiar centralized systems and learn the ins and outs of distributed version control. I suspect it also made repository migrations more straightforward, avoiding the impedance mismatch presented by Git's branches.
I work on a mercurial hosted project right now. What ticks me off is all those unnamed heads you need to handle every time you pull other people's changes. Yes they're more flexible. Most of the time that just means extra operations for no good reason.
Yeah, agreed. I liked the idea of Mercurial branches better than git's — in principle I prefer more rather than less metadata in history — but they genuinely had a scaling problem. I can't recall the numbers, this being more than a decade ago, but I tested with a realistic number of branches for a team of developers using short-lived branches for a while and you could easily see Mercurial slowing down.
Back when I was testing bookmarks were available, but Bitbucket was pretty much the only forge that supported Mercurial and their tooling didn't support bookmarks, so that made them a non-starter for many users.
That is very different from my experience with git. I know that the kernel uses branches a lot, but that's probably because of git's history with the project. At every company I worked at, git is used exactly the same way as CVS or SVN was used many years ago: you make some local changes, you push these local changes to the central store, you forget about it. Branches make local switching between tasks easier, but apart from that nobody cares about branches and they're definitely not treated as an important part of the repo. In fact, they're usually deleted immediately after the change is merged.
I think you have it swapped around. This is exactly the kind of workflow that git provided better support for - lightweight branches, not integral part of master history, deleted after merge.
Wayland was created in 2008. Mir was created in 2013.
Bazaar and Git were created around the exact same time.
Unity was abandoned after a failed attempt to circumvent Gnome 3. I was actually involved with the development of Compiz and they hired Sam to work on Unity, as he was one of the masterminds behind Compiz, but again they just didn't have the vision or execution to make it work.
Unity was great. After it was abandoned I tried GNOME 3 yet again; I had collaborated on Gtkmm in the past, but I ended up moving to XFCE, and nowadays I am fully on macOS/Windows anyway.
If I ever go back to GNU/Linux full time, GNOME certainly won't be it.
Things improved a lot with Gnome over the years, but as a fellow Gnome 2 user, I found the initial release of 3 and the following years a real kick in the teeth.
Things have improved, but the overall Gnome Foundation attitude hasn't. They are still very stubborn and remove basic features. This seemed to start when they did their infamous "focus groups", where they claimed users can't understand basic things.
I get the desire to provide a cohesive experience, but I think you can do that while also giving people control.
KDE is shaping up to be much better and it's likely because Valve is providing commercial support and exposing it to a larger audience.
Cosmic is the new kid, backed by System76, and it's pretty nice too; it may rescue Gnome in some ways in due time.
I'm fine with a company getting things wrong from time to time. What I don't like is the attitude where they walk into the room and start moving the furniture around while smugly dismissing or ignoring talented and established people. Then after a bit of milling around they just give up and leave the room and everyone has to clean up the mess.
I miss Ubuntu One, their Dropbox alternative which came with a wee integrated Linux client. IIRC, their free tier was also more generous in comparison.
Snap is terrible. It's the reason why I stopped using Debian-based distros for desktop usage after decades.
Lying to users and turning apt install commands into shims for a barely functional replacement was disrespectful. Flatpak was and still is better, but even then, if I say I want a system package, give me a system package. If you have infrastructural reasons why you cannot continue to provide that package, then remove it; Debian-based systems have many ways to provide such things.
Canonical did it because they wanted to boost Snap usage, and it failed while sending a clear message that they don't respect their user base.
That is half the problem. They often introduce neat ideas, but then fail or refuse to integrate them with the rest of the FOSS ecosystem. Then anyone who subscribed to their experiment is left cleaning up the mess and trying to migrate the features or ideas they like to the remaining projects that should have been extended in the first place.
Not sure how you can say that about upstart. It was pretty much the accepted successor to shell scripts for an init system for a few years, until Red Hat started pushing systemd. You would probably be using it now if Debian hadn't gone with Red Hat's systemd over OpenRC and upstart.
I've played with a bunch of RISC-V platforms, mostly SBCs in the Raspberry Pi class.
Beyond the potential platform fragmentation due to the variability of the ISA (a very unfortunate design choice IMO), mentioned elsewhere in this thread, what I find most frustrating is the boot process / equivalent of the BIOS in that world.
My impression: complete lack of standardization, a ton of ad-hoc tools native to each vendor, a complete mess, especially when it comes to getting the board to boot from devices the vendor didn't target (e.g. SSDs).
Until two things happen:
1. a CPU with somewhat competitive compute power appears (so far, all the SBCs I've tried are way behind ARM and x86)
2. a unified BOOT environment which supports a broad standard of devices to boot from (SSD, network, SD-Card, hard-drives, etc...)
the whole RISC-V thing will remain a tiny niche thing, especially because when a vendor loses interest in the platform, all of the SW that is native to the platform goes to rot immediately (not that it was particularly good quality in the first place).
There are several RISC-V Linux distros where essentially all the software available for the x86-64 platform is also available on the RISC-V edition. Let’s use Ubuntu as an example.
> when a vendor loses interest in the platform
> the platform goes to rot immediately
Ubuntu will provide updates for 15 years. That does not seem very immediate.
For RVA23 hardware, I expect even new Ubuntu releases to support it up to around 2030 at least. 15 years from then will be 2045. I cannot say that I am picking up what you are laying down here.
> 2. a unified BOOT environment which supports a broad standard of devices to boot from (SSD, network, SD-Card, hard-drives, etc...)
I had the same experience tinkering with ARM devices. It soured me so much that I have decided that until ARM offers a unified boot mechanism like x86 PCs do, I will ignore it, no matter the supposed benefits.
The RISC-V server spec mandates UEFI, ACPI, and SBI. Here is a RISC-V “desktop” motherboard that has the same:
https://milkv.io/titan
I have touched some PC-98 and FM Towns, which are x86 but not IBM PC compatible.
But I understand your point: ARM has its roots in embedded systems and it shows. I really hope that RISC-V learns from that mistake and focuses on standardization; the board you linked looks very promising.
Not my area of expertise but what exactly is the difference between RISC-V and Power PC? Didn't Power-PC get a good run in the 90s and 2000s? Just wondering why there's renewed interest in RISC-like architectures when industry already had a good exploration of that area.
The interest is BECAUSE it's well explored territory. The concept is proven and works fine.
On the low end where RISC-V currently lives, simplicity is a virtue.
On the high end, RISC isn't inherently bad; it just couldn't keep up with the massive R&D investment on the x86 side. It can go fast if you sink some money into it, like Apple, Qualcomm, etc. have done with ARM.
In 2026, RISC-V is not what I would call “low end”. Look up the P870-D, Ascalon, or the C950.
Do you think Apple spends more money than Intel on chip design?
> Do you think Apple spends more money than Intel on chip design?
Absolutely. Apple's R&D budget for 2025 was $34 billion to Intel's ~$18 billion (and the majority of Intel's R&D budget goes to architecture, while for Apple that is all TSMC R&D; Apple pays TSMC another ~$20 billion a year, of which something like $8 billion is probably TSMC R&D that goes into Apple's chips).
Sure, not all of Apple's $34B is CPU R&D, but on a like-for-like basis Apple probably has at least 50% more chip design budget (and they only make ~10-20 different chips a year, compared to Intel, who make ~100-200).
Correct, ARM does not dominate x86 in desktop and servers. Just everywhere else.
Apple's business is vertical integration; they have zero presence in the chip market.
Apple is top 5 for laptop and desktop market share. So, pretty sure Apple RISC Silicon has a presence in those markets. Very recently, Qualcomm has entered as well. And of course Chromebooks are primarily ARM.
ARM has only recently entered the server market. Already it is having some success, especially with hyperscalers.
RISC-V is about to enter all those markets. I mean, RISC-V silicon is in use in the cloud. But it is still an experiment at this stage. And you can buy a RISC-V laptop. But they are only for devs.
x86-64 machines are RISC under the hood and have been for ages, I believe; the front end translates your x86-64 instructions into RISC-like micro-ops that run on the real CPU, or something akin to that. RISC never died; CISC did, but it is still presented as the front-facing ISA because of compatibility.
Ah, PowerPC. For a RISC processor it surely had a lot of instructions, most of them quite peculiar. But hey, it had fixed-length instruction encoding and couldn't address memory in instructions other than "explicit memory load/store", so it was RISC, right?
> Will RISC-V end up with the same (or even worse) platform fragmentation as ARM?
Sadly, yes. RISC-V vendors are repeating literally every single mistake that the ARM ecosystem made and then making even dumber ones.
> good luck parsing through 100 different "performance optimization manuals" from 100 different companies
This would be a problem for any ISA with multiple/many vendors.
> Not sure what they were trying to solve there with git and forges already existing.
What?
Bzr predates git (by a few days, but still). Launchpad predated GitHub by a lot. Canonical just played those cards horribly and lost.
> Snap is definitely not abandoned.
You seem to say it like it's a good thing?
Can't wait for that thing to explode and die.
> the whole RISC-V thing will remain a tiny niche
I think this is going to be embarrassingly wrong.
Pretty much every new ISA introduced since the '80s has been RISC.
PowerPC was adopted by Apple (RISC), they went back to Intel (CISC), and then they went back to RISC (Apple Silicon).
ARM, which powers pretty much all phones, tablets, and Chromebooks, is RISC.
Windows runs on ARM now as well (Qualcomm X Elite).
The interest around RISC-V is that anybody can use it in their chips without having to ask permission.