Ah, that explains this patchset that was submitted to the Linux kernel today
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on
s390 architecture, we aim to expand the platform's software ecosystem. This
initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU
virtualization on s390....."
From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe multiplexer out to however many peripheral cards you have.
Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.
I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high end GPUs being only 137mm wide but up to 300mm in length.
Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons? If we could make it work, it would be really cool to have a system that lets you put in arbitrary processors, e.g., a box with 1 GPU and 2 CPU cards plugged in.
I believe PCIe is a leader/follower system (one root complex, with everything else as endpoints), so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know that they shouldn't try to control the bus.
If every one of n devices is directly connected to every other with Thunderbolt cables, each link with its own dedicated set of PCIe lanes, you'd be limited to 1/(n-1) of the theoretical maximum bandwidth between any two devices, since each device has to split its lanes across its n-1 links.
What you really want is for every device to be connected through a massive PCIe switch that allows PCIe lanes to be connected arbitrarily, so, e.g., a pair of EPYCs could communicate over 96 lanes with 32 lanes free to connect to peripheral devices. Then each device can act as a host and a client at the same time, at full bandwidth.
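To make the mesh-versus-switch contrast concrete, here's a back-of-the-envelope Python sketch; the 128-lane device count and the ~4 GB/s-per-Gen5-lane figure are assumptions taken from this thread, not measured numbers:

    # Rough comparison: full point-to-point mesh vs. one big PCIe switch.
    LANES_PER_DEVICE = 128   # assumed EPYC-like lane count
    GBPS_PER_LANE = 4        # ~GB/s per direction per PCIe Gen5 lane

    def mesh_pair_bw(n_devices: int) -> int:
        # Full mesh: each device must split its lanes across n-1 links,
        # so any single pair only ever gets that fraction.
        return (LANES_PER_DEVICE // (n_devices - 1)) * GBPS_PER_LANE

    def switch_pair_bw(lanes_for_pair: int) -> int:
        # Switch: allocate lanes between any two devices on demand.
        return min(lanes_for_pair, LANES_PER_DEVICE) * GBPS_PER_LANE

    print(mesh_pair_bw(5))      # 128 // 4 lanes per link -> 128 GB/s per pair
    print(switch_pair_bw(96))   # 96 switched lanes       -> 384 GB/s per pair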
There were also PC compatible systems based around ISA backplanes. This was especially common for industrial computers, but Zenith/Heathkit made ISA backplane based systems for the business and consumer markets. I own a Zenith Z-160 luggable computer from 1984 which uses an 8-slot 8-bit ISA backplane. 1 slot is occupied by a CPU card which also has the keyboard connector. My system has 2 memory cards which each provide up to 320k along with a serial and parallel port. Zenith sold a desktop version of this as the Z-150. They later released models based upon 16-bit ISA backplanes. I think, but am not sure off the top of my head, that the last CPU they produced a 16-bit card for was the 486.
This was (is?) done - in some strange industrial computers for sure, and I think others - where the "motherboard" was just the first board on the backplane.
The transputer B008 series was also somewhat similar.
The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be an x16 link direct from one to the other.
For cases where there are other cards, yes, there would be more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen5 is a whopping 4 GB/s in each direction, so that theoretically handles a dual-port 10GbE NIC on its own.
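A quick sanity check on that arithmetic (using Gen5's 32 GT/s per lane and 128b/130b encoding; TLP/protocol overhead is ignored):

    # Does one PCIe Gen5 lane really cover a dual-port 10 GbE NIC?
    GEN5_GTS = 32.0              # GT/s per lane
    ENCODING = 128 / 130         # 128b/130b line coding
    lane_GBs = GEN5_GTS * ENCODING / 8   # ~3.94 GB/s each direction

    nic_GBs = 2 * 10 / 8         # two 10 Gb/s ports -> 2.5 GB/s

    print(f"Gen5 x1: {lane_GBs:.2f} GB/s, dual 10GbE: {nic_GBs:.2f} GB/s")
    # -> Gen5 x1: 3.94 GB/s, dual 10GbE: 2.50 GB/s -- it fits, on paper.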
That's what I was hoping Apple was going to do with a refreshed Mac Pro.
I had envisioned a smaller tower design with PCIe slots, with Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCIe edge connector and power connector.
The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have a much longer usable lifespan and where so many features are integrated into the motherboard.
https://news.ycombinator.com/item?id=46248644
https://512pixels.net/2024/03/apple-jonathan-modular-concept...
Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.
This is a serious question. What does IBM, in fact, do? I'm surprised they are still around and apparently relevant. Are they more or less a services and consulting company now?
> dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture and that ARM64 is starting to catch up in performance for a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
Once you parse the marketing speak, it looks like there may be ARM ISA silicon in future System Z.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
I wonder if we end up with z series running on arm long term.
The value in z series is in the system design and ecosystem; IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but IBM would likely be able to reduce investment in silicon and supporting software.
I think the #1 use case here is allowing AI/cloud workloads the ability to execute against the mainframe's data without ever leaving the secure bubble. I.e., bring the applications to the data rather than the data to the applications.
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
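For illustration, a minimal sketch of what such a callback might look like, assuming a Python service on the hypothetical in-frame ARM cores using the ibm_db Db2 driver; the connection string, TXN_HISTORY table, and scoring heuristic are all invented:

    # Hypothetical fraud-check callback co-located with the mainframe,
    # hitting core banking data synchronously instead of over a WAN hop.
    import ibm_db  # IBM's Python driver for Db2

    conn = ibm_db.connect(
        "DATABASE=CORE;HOSTNAME=localhost;PORT=50000;"
        "UID=appuser;PWD=example;", "", "")

    def fraud_score(acct_id: str, amount: float) -> float:
        # TXN_HISTORY and its columns are made up, not a real schema.
        stmt = ibm_db.prepare(
            conn, "SELECT AVG(AMOUNT) FROM TXN_HISTORY WHERE ACCT_ID = ?")
        ibm_db.execute(stmt, (acct_id,))
        row = ibm_db.fetch_tuple(stmt)
        avg = row[0] if row and row[0] else None
        # Toy heuristic: score transactions far above the account average.
        return min(1.0, amount / (10 * avg)) if avg else 0.5

    if fraud_score("ACCT123", 9999.0) > 0.8:
        print("hold transaction for review")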
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
IBM is desperate to keep the mainframe relevant. The typical transactional workloads are going to stay on the mainframe, and by bolting on ARM “for AI” they’re giving their customer CIOs a reason to defend their decision to stick with the mainframe.
It is wild how ARM - which was kind of a niche company and ISA - has taken the world by storm since the modern smartphone was born. Now their designs make their way upwards to big iron and AI datacenters.
Maybe I don't know enough technical details about these CPU architectures or IP agreements, but I don't see why IBM couldn't have done what Arm did but with PowerPC.
My gut feeling says to lean more on the bad side. I am very skeptical when corporations announce "this is for the win". Then I slowly walk over to the Google Graveyard and nod my head wisely in sadness ... https://killedbygoogle.com/
I’ve been running VM/370 and MVS on my RPi cluster for a long time now.
Is there really SW that's limited to (Linux) ARM and not x86?
I never would have expected such, but now I'm getting used to it.
I'm waiting for Apple and Microsoft to announce a collaboration. They probably already do collaborate, but Apple knows it's bad for marketing.
I'm not sure I can be surprised anymore.
https://en.wikipedia.org/wiki/Linaro