I set up my own home network with a Vertiv Liebert Li-ion UPS a few years ago and got to thinking about how inefficient the whole process is, power-wise. The power goes from AC to DC, back to AC, and back to DC again. Taking DC straight from the UPS would work much better, and as I was teaching myself more about networking equipment, I was surprised to learn that most of it isn't DC-input by default (i.e., each piece of equipment tends to come with built-in AC-DC conversion).
Then I started routing ethernet with PoE throughout my house and observed that other than a few large appliances, the majority of powered devices in a typical home in 2026 could be supplied via PoE DC current as well! Lighting, laptops, small/medium televisions. The current PoE spec allows up to 100 W, which covers like 80% of the powered devices in most homes. I think it would make more sense to have fewer AC outlets around the modern house and many more terminals for PoE instead (maybe with a more robust connector than RJ45). I wonder what sort of energy efficiency improvements this would yield. No more power bricks all over the place either.
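A rough tally of what I mean, with ballpark wattages (my guesses, not measurements), against the ~71W that 802.3bt actually guarantees at the powered-device end:

    # Which common household loads fit in one PoE port's budget?
    # Wattages are ballpark guesses; 71.3 W is the 802.3bt Type 4
    # power guaranteed at the powered device end.
    BUDGET_W = 71.3
    loads_w = {"LED bulb": 9, "phone charger": 20, "small TV": 45,
               "laptop": 65, "desktop PC": 300, "kettle": 1500,
               "hair dryer": 1800, "clothes dryer": 5000}
    for name, watts in sorted(loads_w.items(), key=lambda kv: kv[1]):
        print(f"{name:>13}: {watts:5d} W -> "
              + ("fits on PoE" if watts <= BUDGET_W else "needs AC"))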
"... throughout my house and observed that other than a few large appliances, the majority of powered devices in a typical home in 2026 could be supplied via PoE DC current as well!"
We installed 120 LED ceiling lights in our home circa 2020, all of which were run with high voltage (romex) and accompanied by 120 little transformer boxes that mount inside the ceiling next to them.
Later ...
We installed outdoor lighting with low voltage, outdoor-rated wiring, powered by a 12V transformer[1], and I felt the same way you did: why did we use a mile of romex and install all of those little mini transformers when we could have powered the same lights with 12V and low-voltage wire?
I then learned that the energy draw of running the low-volt transformer all the time - especially one large enough to supply an entire house of lighting - would more than cancel out energy savings from powering lower voltage fixtures.
You don't have this problem with outdoor lighting because the entire transformer is on a switch leg and is off most of the time.
So ... I like the idea of removing a lot of unnecessary high voltage wire but it's not as simple as "just put all of your lights behind a transformer".
[1] https://residential.vistapro.com/lex-cms/product/262396-es-s...
> I then learned that the energy draw of running the low-volt transformer all the time - especially one large enough to supply an entire house of lighting - would more than cancel out energy savings from powering lower voltage fixtures.
That's not a constraint of physics; you can absolutely build a DC power supply that is efficient across a wide load range. (Worst case it might involve paralleling and switching between multiple PSUs that target different load ranges.) But of course something like that is more expensive...
> But of course something like that is more expensive...
More expensive than an inefficient unit, but it should still be a lot cheaper than 120 separate units, right?
And I expect one big fat unit to do a better job of smoothing out voltage and avoiding flicker than a bunch of single-light units. Especially because the output capacitors are sized for the entire system, but you'll rarely have all the lights on at the same time.
Though for efficiency I'd think you'd want 48v and not 12v.
These days, you should not be using transformers to power small loads at all; you should be using switching power supplies. They have negligible power draw when there's no load attached.
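To put rough numbers on the idle-draw point (both wattages assumed purely for illustration, not measured):

    # Annual energy burned just keeping a conversion stage energized 24/7.
    HOURS_PER_YEAR = 24 * 365
    def annual_kwh(idle_watts):
        return idle_watts * HOURS_PER_YEAR / 1000.0
    print(annual_kwh(15.0))  # big iron transformer, ~15 W core loss -> ~131 kWh/yr
    print(annual_kwh(1.0))   # decent switching supply, ~1 W no-load -> ~9 kWh/yr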
I think we're slowly, slowly coming around to the idea of domestic DC distribution. The vast majority of consumer electronics would be perfectly happy to consume 12v. It's cheaper, safer, more efficient. Less design work and certification on inbuilt AC adapters.
I think it's highly unlikely we'll see mass scale retrofits, but if enough momentum builds up, I can see it as a great bonus feature for new builds.
I got lucky with my house and every room has a dedicated phone line meeting at a distribution panel (a couple of 2x4s with screw terminals) built in the 50s. I'm in the process of converting it to light duty DC power. The wiring is only good for an amp or two, but at 48v that's still significant power transmission.
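The back-of-envelope I'm going by, assuming 24AWG copper and a 15m run (I haven't verified the actual gauge or lengths):

    # Feasibility sketch for 48 V over old phone wiring.
    V_SUPPLY = 48.0
    I_MAX = 1.5                 # amps, assumed safe for the old wire
    LOOP_R = 2 * 15.0 * 0.084   # ohms: out-and-back, ~0.084 ohm/m for 24 AWG
    v_drop = I_MAX * LOOP_R                    # ~3.8 V lost in the wire
    p_loss = I_MAX ** 2 * LOOP_R               # ~5.7 W heating the walls
    p_delivered = (V_SUPPLY - v_drop) * I_MAX  # ~66 W at the far end
    print(f"delivered {p_delivered:.0f} W, wasted {p_loss:.1f} W")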
> I set up my own home network with a Vertiv Liebert Li-ion UPS a few years ago and got to thinking about how inefficient the whole process is, power-wise. The power goes from AC to DC, back to AC, and back to DC again.
With double-conversion, generally yes.
I recently ran across the (patented?) concept of a delta conversion/transformer UPS that seems to eliminate/reduce the inefficiencies:
* https://dc.mynetworkinsights.com/what-are-the-different-type...
* a bit technical: https://www.youtube.com/watch?v=nn_ydJemqCk
* Figures 6 to 8 [pdf]: https://www.totalpowersolutions.ie/wp-content/uploads/WP1-Di...
The double-conversion only occurs when there's a 'hiccup' from utility power; otherwise, if the power is clean, the double conversion isn't done at all, so the inefficiencies don't kick in.
One of the main problems is conductor size. I wish we could get 22AWG copper in cheap and cheerful cat5e/cat6-format cable. 24AWG cat5e (sometimes CCA, copper-clad aluminum) is not great for carrying large amounts of PoE.
> Lighting, laptops, small/medium televisions. The current PoE spec allows up to 100 W, which covers like 80% of the powered devices in most homes.
I find it a little hard to imagine that those devices outnumber things like stoves, dishwashers, washers/dryers, kettles, hair dryers... by 4:1.
Unsure why PoE would be better for LED lighting than the standard approach of screwing a bulb directly into AC, either. How many lumens do you get out of strip lights these days? And you still have AC-DC conversion for whatever's sourcing power onto the Ethernet link.
I think Ubiquiti (makers of the UniFi wifi products, as well as some of the most popular managed PoE switches) also makes a ton of other PoE products: the usual stuff like cameras, IP phones, network switches, access card readers, and door locks, and now ceiling lights (presumably thanks to the latest PoE standards delivering significant wattage).
It's super nice because you only need to put the UPS/ATS at the PoE switch and then you get power redundancy everywhere you have ethernet running (i.e. the phones don't go down).
> maybe with a more robust connector than RJ45
USB-C could be that connector, using USB-PD instead of PoE. Though I'm not sure I'd want to need that much smarts for every single power outlet.
The problem is that all of those DC devices don't operate on 48V either. The vast majority of chips require a 5V or lower input, so with a 48V DC supply you're still going to need a per-device PSU to do DC-DC conversion. In other words: no getting rid of power bricks.
Efficiency isn't as straightforward either. You're still being fed by 120V/230V AC, so you're going to need some kind of centralized rectifier and down converter. It'll need to be specced for peak use, but in practice it'll usually operate at a fraction of that load - which means it'll have a pretty poor efficiency. A per-device PSU can be designed exactly for the expected load, which means it'll operate at its peak efficiency.
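A toy model of that partial-load penalty (the loss constants are invented for illustration; real PSU curves vary):

    # Crude PSU model: fixed overhead (fans, controller, magnetizing losses)
    # plus losses proportional to load, both as fractions of rated power.
    def efficiency(load_frac, fixed=0.02, proportional=0.05):
        return load_frac / (load_frac + fixed + proportional * load_frac)
    for frac in (0.05, 0.20, 0.50, 1.00):
        print(f"{frac:.0%} load -> {efficiency(frac):.0%} efficient")
    # 5% load -> ~69%; 100% load -> ~93%. A supply sized for peak
    # spends most of its life at the ugly end of this curve.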
We also don't use 5V DC grids because the wire losses would be horrible, so a domestic DC grid should probably operate at pretty close to regular AC voltage as well. In practice this means the most sensible option would be to have a centralized rectifier and a grid operating at whatever voltage it outputs - but what would be the point?
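And the wire-loss point in numbers: the same power over the same assumed wire at three voltages:

    # 100 W over 10 m of 14 AWG copper (~0.0083 ohm/m per conductor).
    P_W = 100.0
    LOOP_R = 2 * 10.0 * 0.0083   # ~0.166 ohm round trip
    for volts in (5, 48, 120):
        amps = P_W / volts
        loss_w = amps ** 2 * LOOP_R
        print(f"{volts:3d} V: {amps:5.1f} A, {loss_w:6.2f} W lost in the wire")
    # 5 V loses ~66 W of the 100 W; 48 V ~0.7 W; 120 V ~0.1 W.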
As to PoE: I personally really like the idea, but I don't believe it'll have a bright future. For its traditional use the main issue is that there doesn't seem to be a future for twisted-pair beyond 10Gbps. 25GBASE-T might exist as a standard on paper, but the hardware never took off due to complete disinterest from the datacenter market, and it is too limited to be of use in offices and homes. I fully expect that 25G will arrive in the home and office as some form of fiber-optic interconnect - with fiber+copper hybrid for things like access points.
On the other hand, for a lot of IoT applications PoE seems to be too complicated and too expensive. It makes sense for things like cameras, but individual lights, or things like smoke sensors are probably better served in office/industrial applications by either a regular AC supply or a local DC one, plus something like KNX, X10, CAN, or Modbus for comms: just being able to be wired as a bus rather than a star topology is already a massive advantage. And for domestic use the whole "has a wire" thing is of course a massive drawback - most consumers strongly prefer using Wifi over running a dedicated wire to every single little doodad.
It's fun to think about. There are advantages both ways, but I think the balance leans heavily towards keeping AC.
1. One of these is simplicity. With AC, one single home run of cabling (e.g., Romex) can feed a whole room full of stuff, like a bedroom or a living room. At one end of the run is a circuit breaker (a fairly simple electromechanical device) and at the other end is a series of outlets (which are physically daisy-chained, but are functionally just wired in parallel with each other).
Since one single run of cable can feed many devices, the whole room is easy to wire.
2. Another advantage is that it is universal. Anything can plug into these outlets. Whatever a person brings into the home to use, they can plug it into an outlet and it works. It works this same way in every home.
3. And there's quite a lot of power available: A common 20A 120V branch circuit cabled up with 12AWG Romex is rated to supply up to 16A continuously, or 1920W. For intermittent loads, it can supply 20A -- or 2400W. That's tiny by European standards, but it's still quite a lot of power. It's plenty to run a space heater when Grandma visits and she complains about the guest room being cold (even as you start to sweat when you cross the threshold to investigate) and a big TV and a whole world of table lamps, all at once. And you can plug this stuff into any outlet in the room, and it Just Works.
4. But, sure: Lots of devices want DC, not AC. So there's a necessary conversion step that is either integral to the device being plugged in, or in the form of the external wall warts we all know very well.
So let's compare to power-over-ethernet.
1. It's also simple, but only tangentially so. One home-run cable per outlet, whether that outlet is used or not, can be rationalized as a simple topology. A PoE switch at the head-end instead of a central box with circuit breakers is a simple-enough thing to transition to. And a lot more individual cables are required, but they're relatively small and generally easier to install.
2. It's standardized, but it's not universal at all. I've got a few PoE widgets around the house, but I'm pretty friggin' weird when it comes to what I do with electricity. I can't go to Wal-Mart and buy more PoE widgets to use at home, and when people visit they aren't bringing PoE adapters to charge their phones and other electronics. My computer monitor doesn't have a PoE input. I can easily imagine a table lamp or a fan that connects to PoE, and also uses it as a network connection for automation, and that sounds pretty sweet in ways that tickle my automation bones in the most filthy of fashions... but that's getting even further into the weeds compared to how regular people expect to do regular things.
3. There isn't a lot of power available. 802.3bt Type 4 is the highest spec. And within that spec: while switch ports can output up to 90W, a device being powered is limited to drawing no more than 71.3W (the sketch after this list shows where that number comes from). Now, sure, that's 71.3W per port, but in a room with 10 ports that's still only ~700W -- at most -- in that room. And Grandma's space heater won't run on 71.3W, nor her electric blanket. My laptop wants more than this. The list of useful, portable things that we casually plug into a wall and that draw less than 71.3W is pretty short, and most of them don't benefit from the main advantage of PoE, which is a combination of [some] power alongside high-speed Ethernet data.
4. We still need wall warts, since PoE is nominally ~48VDC. For example: phones use less than 71.3W while charging, but they don't run on 48V. That means 120V AC comes in from the grid, gets shifted to 48VDC for distribution within the dwelling, and then gets shifted yet again to produce the voltages that devices actually want (5, 9, 15, and 20V are common enough in the USB-PD world). That's more lossy conversion steps, not fewer -- and we still get to keep the extra conversion (wall warts) as punishment for our great ideas. This is not the path towards increased energy efficiency.
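For the curious, here's where that oddly specific 71.3W seems to come from; this is my reading of the 802.3bt arithmetic, so treat the constants as assumptions:

    # 802.3bt Type 4: 90 W leaves the PSE at no less than 52 V over all
    # four pairs (two parallel pairs each way). A worst-case 100 m channel
    # is ~12.5 ohm per pair loop, halved by paralleling pairs.
    P_PSE, V_MIN, R_EFF = 90.0, 52.0, 12.5 / 2.0
    i_total = P_PSE / V_MIN          # ~1.73 A
    p_cable = i_total ** 2 * R_EFF   # ~18.7 W burned in the cable
    print(f"at the device: {P_PSE - p_cable:.1f} W")  # -> 71.3 W

Shorter runs waste less, but the spec has to budget for the full 100m.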
---
PoE is great for the things we use it for today. A camera, a wireless access point -- you know, fixed-location stuff that uses networked data as its primary function and also requires power.
Installed PoE light fixtures (like, say, task lights in a kitchen) also sounds neat -- unless they die prematurely and no PoE replacements are to be found. (Now, you have not just one or two problems, but many: The lights aren't working in that space and they can't be replaced with a trip to Lowes because the Romex that would normally have been installed was deliberately deleted from the plan. It could have been a 20-minute DIY fix that costs less than $100, but now it involves drywall and paint and retrofitting new cabling. Or maybe PoE replacements do exist, but it's now 2035 and the new ones don't talk the same network protocols as the old ones did.)
But there are other upsides: I've got an 8-port PoE-powered network switch that works a treat. It's a dandy little thing. And it sure would be neat to plug my streaming box in with PoE and kill two birds with one cable; I would like that very much.
But most people? Most people don't give a damn about ethernet (PoE, or not!) these days, or streaming boxes, and that trend is increasing. They just plug their lamp into the regular outlet on the wall like they always have, and deal with whatever terrible UI is built into their smart TV, and use wifi for anything that needs data.
And when they buy a home that is filled with someone else's smart infrastructure, their first task (more often than not) is to figure out who to call to erase those parts completely and put it back to being normal and boring.
90% of the power in our academic data center goes 13.8kV three-phase -> 400V three-phase, and then the machines run directly from one leg to neutral (400V / √3 ≈ 230V). One transformer step, no UPS losses, and the server power supplies are more efficient at EU voltages.
But what about availability? If you ask most of our users whether they’d prefer 4 9s of availability or 10% more money to spend on CPUs, they choose the CPUs. We asked them.
There are a lot of availability-insensitive workloads in the commercial world, as well, like AI training. What matters in those cases is how much computing you get done by the end of the month, and for a fixed budget a UPS reduces this number.
I've been hearing this line for over a decade, now. "Immersion cooling will make data centers scale!" "Converting to DC at the perimeter increases density!"
Yes, of course both of those things are true, and yes, some data centers do engage in those processes for their unique advantages. The issue is that aside from specialty kit designed for that use (like the AWS Outposts with their DC conversion), the rank-and-file kit is still predominantly AC-driven, and that doesn't seem to be changing just yet.
While I'd love to see more DC-flavored kit accessible to the mainstream, it's a chicken-and-egg problem that neither the power vendors (APC, Eaton, etc.) nor the kit makers (Dell, Cisco, HP, Supermicro, etc.) seem to want to take the plunge on first. Until then, this remains a niche feature for niche users, I wager.
How is DC better than three-phase delta at 800Vrms, 400Hz?
- Three conductors vs two, but they can be the next gauge up since the current flows on three conductors
- no significant skin effect at 400Hz -> use speaker wire, lol.
- large voltage/current DC breakers are... gnarly, and expensive. DC does not like to stop flowing
- The 400Hz distribution industry is massive; the entire aerospace industry runs on it. No need for niche or custom parts.
- 3-phase @ 400Hz full-wave rectifies at 6x the line frequency = 2.4kHz. Six diodes will rectify it with almost no relevant amount of ripple (Vmin is ~87% of Vmax) and very small caps will smooth it (see the sketch after this list).
As an aside, with three (or more) phase you can use multi-tap transformers and get an arbitrary number of poles. 7 phases at 400Hz -> 5.6kHz. Your PSU is now 14 diodes and a ceramic cap.
- you still get to use step up/down transformers, but at 400Hz they're very small.
- merging power sources is a lot easier (but for the phase angle)
- DC-DC converters are great, but you're not going to beat a transformer in efficiency or reliability
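A quick sketch of the math behind the ripple figures above, assuming ideal diodes:

    # Ripple floor and frequency of an ideal m-phase full-wave rectifier.
    import math
    def ripple(phases, line_hz):
        pulses = 2 * phases   # full-wave: 2 pulses per phase per cycle
        return pulses * line_hz, math.cos(math.pi / pulses)
    for n in (3, 7):
        freq_hz, floor = ripple(n, 400)
        print(f"{n} phases: {freq_hz:.0f} Hz ripple, Vmin = {floor:.1%} of Vmax")
    # 3 phases: 2400 Hz, 86.6%.  7 phases: 5600 Hz, 97.5%.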
800 volts DC, at the megawatt power supply levels, implies fault impulses of more than a megajoule. Google tells me that's about 2 hand grenades worth of boom. That's an optimistic lower bound.
The resulting copper plasma cloud is a burn and inhalation hazard, along with the overpressure.
Let's say you get a 10 kiloamp fault current; this will then induce voltages everywhere you don't want them. If all the interconnects are fiber, that's really not a problem, but you have to have everything EMP-shielded if you don't want boards popping randomly after such an event.
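An order-of-magnitude check on the megajoule figure above; the clearing time is my assumption:

    # Energy dumped into an 800 V DC fault before protection clears it.
    V_BUS = 800.0
    I_FAULT = 10_000.0   # amps, the fault current assumed above
    T_CLEAR = 0.125      # seconds, assumed (DC faults are slow to interrupt)
    print(f"{V_BUS * I_FAULT * T_CLEAR / 1e6:.1f} MJ")  # -> 1.0 MJ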
The "efficiency" of removing the extra power conversions also removes filtering and surge suppression. It's entirely possible that one power supply over-voltage takes out half of your racks. The MOSFETs used tend to fail closed instead of open, making failures far worse than a simple outage.
Very smart people are making very smart mistakes.
Many datacenters I'd been to at that point were already DC.
I didn't think this was that new of a trend in 2026, but I'll also acknowledge I haven't visited more than a handful of datacenters since 2007.
It just seemed like an undeniably logical thing to do.