Every year or so there's a new article about some new spectacular storage medium. Crystals, graphene, lasers, quartz, holograms, whatever. It never materializes.
Demonstrating this stuff is possible isn't the hard part, it seems. Productionizing it is. You need exceedingly fast read and write speeds: who cares if it can store an exabyte if it takes all month to read it, or if you produce data faster than you can write it? It has to be durable under adverse conditions. It has to be practical to manufacture the medium and the drives. You probably don't want to need a separate device to read and another to write. By the time most of these problems are worked out, most of these technologies aren't a whole lot better than existing tech.
Stick this on the "Wouldn't it be nice if graphene..." pile.
It took 15, if not 20, years to commercialize even such an obvious, low-tech thing as the radio telegraph, which can literally be built from common household supplies. And that happened about 60 years after Maxwell predicted electromagnetic waves theoretically.
Red LEDs were invented / discovered in the 1920s and became commercially successful as indicators in the 1960s. Optical fibers were invented around the 1920s and became a commercial success in the 1980s.
Certain things just take time. Do not dismiss a good physical effect; they are much rarer than so-called good ideas.
It's a good physical effect that doesn't obviously solve any real problems. Consider: 5D optical storage is thirty years old, and the SOTA transfer speed is about as many megabits per second. By the time it's fast enough to even approach a commercially useful speed, all of the other tech we have will have continued to progress. That's not to mention how fragile the quartz disks are. It's a real physical effect that doesn't solve problems.
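To put a number on how far that speed is from useful (a rough sketch; the ~360 TB/disc capacity and ~30 Mb/s transfer rate are figures from press coverage of 5D storage, not from this paper):

```python
# Back-of-envelope: time to read one full 5D optical disc at the quoted
# state-of-the-art speed. Capacity and rate are assumed press figures.
capacity_bits = 360e12 * 8        # ~360 TB per disc, in bits
speed_bps = 30e6                  # ~30 megabits per second
seconds = capacity_bits / speed_bps
years = seconds / (365 * 24 * 3600)
print(f"{years:.1f} years to read one disc")  # ≈ 3.0 years
```

Three years per disc is the kind of gap that general progress elsewhere easily outruns.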
We already have zero-retention-energy storage. The phenomenon in this paper isn't even all that new, by the author's own admission: that's how it got to the fifty-third revision. The tier 2 setup described here is purely speculative. Producing a single square centimeter of pure "fluorographane" (sic?) is still a task that would be exceedingly challenging for a research lab. And it's not clear how much energy it would take to read and write the data, or to support the hardware necessary to do it at a speed that makes it uniquely useful. Even if all of these problems are solved, and the cost is made reasonable, it's still completely unclear whether it would be substantially better than what we have today.
> all of the other tech we have will have continued to progress
These other techs also came from somewhere, often brewing slowly for decades until the conditions align for a commercial success. This one may be, or may not be, one of these slow-brewing techs of the future, suddenly displacing the SOTA of the (future) day.
Hell, the steam turbine was in principle invented around the first century AD, and first commercialized (or, rather, militarized) by the end of the 19th century. Leonardo da Vinci invented the helicopter, the submarine, and the tank, in principle; it took about 500 years for them to become large commercial and military successes.
It feels a little disjointed to compare old tech. Computing tech iteration cycles and adoption rates seem more interesting than things at the dawn of communications technology.
It doesn't take long to commercialize feasible new tech in this day and age. If someone invented an electromagnetic hovercar tomorrow, it would be available for sale next week, and regulations would follow after.
Well, yeah. It takes a heck of a long time to pull something out of the lab, let alone out of theory, into the real world, and there are a ton of ways it can die along the way. But you do need people pursuing these things to actually get something into production, or else there really would never be any progress. To me this reaction feels like a bit of a misunderstanding about why it's worth discussing these ideas at all: it's not meant to be a forecast of where technology is definitely going, it's a potential direction that some people think is worth pursuing, and even if the odds are low for any given idea, that doesn't make them worthless. (I've worked for nearly 10 years to turn something that 'worked in the lab' when I joined into an actual product, for example, and it's still not quite standing on its own feet in production yet.)
I'm not familiar enough with the space to know how this idea rates compared to alternative options at similar levels of development: the density is obviously extreme (but probably not the biggest advantage), and it makes sense to me that the underlying physics could work robustly, but the practicalities of how you read and write seem pretty difficult. I think the paper kind of glosses over this: read caching and defect mapping could be trickier than implied, and accessing the tape from both sides also seems like it will make the engineering more difficult.
The concept is interesting, but I'm getting a lot of red flags from this - there's no experimental data or proof-of-concept work at all, which makes this feel more like a blue-sky "Look what we could do if we could arrange atoms however we wanted!" pipe dream in the Drexlerian mode. Something about the writing style is also pinging my LLM radar, which, while not disqualifying in and of itself, is very discouraging in combination with the other funkiness. The chemistry and manufacturability strike me as questionable in particular, and I'm not convinced the physics of reading and writing are nearly as clean as the author seems to think.
(I'm also unclear how the bit is supposed to actually flip under the applied electric charge without the fluorine and carbon having to pass through each other.)
This is a pipe dream and I’m almost tempted to say a fever dream. The chemistry part seems somewhat sound, even though that’s outside of my field of expertise. But the entire readout process is questionable, and has clear signs of heavy AI writing.
The AFM mechanism described as “tier 1” (very strong LLMism, btw) is somewhat optimistic but realistic. The fields needed are large compared to usual values in solid state devices, but I’d guess achievable with an AFM. But “tier 2” is vague and completely speculative. Some random things I noted:
- handwaving that (not an exact quote) "the read controller is cached. No need to read the same bit twice." Cached with what?? If this miraculous technology can achieve 25 PB/s, what could possibly hope to cache it? More generally, it's a strange thing to point out.
- some magic, completely handwaved MEMS array that converts an 8 µm spot-size laser beam into atomic-resolution 2D addressing? In my opinion this is the biggest sin of the manuscript. What I understood to be depicted is just fundamentally physically impossible.
- a general misunderstanding of integrated electronics, and dishonest benchmarking, comparing real memory technologies being sold at scale right now, vs theoretical physical bounds on an untested idea. Also no mention of existing magnetic tape as far as I can tell.
- constantly pulling out specific numbers or estimates with no citation and insufficient justification. Too many examples to even count.
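On the caching point, the mismatch is easy to quantify (a rough sketch; the 25 PB/s figure is the manuscript's claim, and the 1 TB cache size and ~1 TB/s memory bandwidth are illustrative assumptions of mine, not numbers from the paper):

```python
# How absurd is caching a 25 PB/s stream? Assumed: a generous 1 TB cache
# and ~1 TB/s per high-end memory interface (illustrative round numbers).
stream_bps = 25e15        # claimed read rate, bytes per second
cache_bytes = 1e12        # a generous 1 TB DRAM cache
mem_bw = 1e12             # ~1 TB/s per memory interface
fill_time_us = cache_bytes / stream_bps * 1e6
interfaces_needed = stream_bps / mem_bw
print(f"cache fills in {fill_time_us:.0f} µs")        # 40 µs
print(f"{interfaces_needed:.0f} memory interfaces")   # 25000
```

A cache that overflows in tens of microseconds, fed by tens of thousands of parallel memory interfaces, is not a cache in any practical sense.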
I’m sorry for the harsh language; I wouldn’t use it for a usual review. But in my opinion this needs very heavy toning down and a complete rewrite, and is unfit for proper review. Final remark: electronics is, and will always fundamentally be, intrinsically denser than optics. Some techniques “described” here, if they were possible, would have been applied to existing optical tech (e.g. phase-change materials in Blu-ray).
"A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude."
Does that mean a scanning tunneling microscope is the I/O mechanism?
That's been demoed for atom-level storage in the past. But it's too slow for use.
Sniff test: a paper with a single author and 53 revisions, listing a Gmail address as contact information, despite the author (after a brief internet search) appearing to have affiliations with CSU Global, (maybe) the University of Central Florida, and the San Jose State University Department of Aerospace.
I don’t understand the comments here. They say in the last paragraph:
> A scanning-probe prototype already constitutes a functional non-volatile memory device with areal density exceeding all existing technologies by more than five orders of magnitude.
Are we supposed to read all these stories as lies?
Now it doesn’t say that this is easy to produce, but if those claims are true, it doesn’t really matter if it is very expensive.
It doesn’t say either whether the stuff can withstand real-world conditions.
It’s annoying not to be able to trust whether solutions like these are viable or not.
Pardon my ignorance, but why does the "447 TB/cm^2" density value use square centimeters instead of a volume unit? Does the information capacity of this material really scale in proportion to area? How? Or is it just a typo?
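For what it's worth, area does seem to be the right unit if the scheme stores one bit per surface atom of a 2D sheet. A back-of-envelope using standard graphene lattice constants (whether the paper's scheme actually achieves one bit per atom is its claim, not something I've verified) lands in the same ballpark:

```python
# Areal bit density of a graphene sheet at one bit per carbon atom.
# Lattice constant is the standard graphene value; everything else follows.
import math

a = 2.46e-8                               # graphene lattice constant, cm
cell_area = (math.sqrt(3) / 2) * a**2     # area of the 2-atom unit cell, cm^2
atoms_per_cm2 = 2 / cell_area             # ≈ 3.8e15 atoms per cm^2
tb_per_cm2 = atoms_per_cm2 / 8 / 1e12     # one bit per atom, in TB per cm^2
print(f"{tb_per_cm2:.0f} TB/cm^2")        # ≈ 477
```

That's the same order as the paper's 447 TB/cm^2, so the number looks like a per-area, single-layer figure rather than a typo.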
Looks interesting, but I can't take it seriously when there are so many red flags of LLM-style writing. The author continues to use AI even to reply to comments in this thread ("it's not X, it's Y", em dashes, etc.).
This would just make the big companies lose their profits, since they still have their old products in their warehouses. Why would they want this tech to arrive? That's why it never materializes.
Yeah, I've been baited by "breakthroughs" in storage technology for almost 40 years at this point [1]. I'll believe it when it's in Best Buy. Battery "breakthroughs" have really taken up the mantle of headline-grabbing research fund-raising articles so it's nice to see a throwback to the OG: storage.
> who cares if it can store an exabyte if it takes all month to read it
To be fair, if I'm reading an exabyte in a month, my hardware's pushing >3 Tbps, which I'd be very happy with.
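Quick sanity check on that figure:

```python
# Throughput needed to read 1 EB (decimal) in a 30-day month.
exabyte_bits = 1e18 * 8
month_s = 30 * 24 * 3600
tbps = exabyte_bits / month_s / 1e12
print(f"{tbps:.2f} Tbps")  # ≈ 3.09 Tbps, so ">3 Tbps" checks out
```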
fluorographane -> Fluorographene
Can't find a single page about fluorographane
https://en.wikipedia.org/w/index.php?search=fluorographane&t...
But this
https://en.wikipedia.org/wiki/Fluorographene
https://nowigetit.us/pages/d7f94fd0-e608-47f9-8805-429898105...
[1]: https://www.tampabay.com/archive/1991/06/23/holograms-the-ne...