Paper Computing (great name!) is something I've been thinking about a lot as a way to help my kids benefit from tech without exposing them to the brain-melting addiction of screens. I sacrificed a few crazy nights of sleep building a Paper Computer Agent prototype for a recent Gemini hackathon (only to hit submission issues right before the deadline, disappointingly). My kids loved it and keep asking me to set it up permanently for them.
It's essentially a poor man's hacked-up DynamicLand - projector, camera, live agent. There are so many things you could do if you had a strong working baseline for this. My kids used it to create stories, learn how to draw various things, and watch safe videos they could hold in their hands.
There's something weirdly compelling and delightfully physical about holding a piece of paper that shows a live rocket launch, with the flames streaming down the page. It could also project targeted pieces of text, such as inline homework advice, or graphs next to data. It doesn't take long to imagine any number of other fun use cases, and it feels a lot more freeing and inspiring than keeping everything bound to a screen.
R.I.P. to the Amazon Glow video calling device, killed before AI went mainstream. I'd love to hear how to get root on one... it's exactly the hardware your project could use most effectively, and an amazing interface for playing games remotely with the grandparents.
I love how creatively ai is integrated in here. Amazing.
The Folk Computer people have some incredible work they've been doing too, that's definitely worth looking at for anyone interested. Their integration of a novel display technology is really sweet too, allowing for good visibility in a variety of conditions, which I love. https://folkcomputer.substack.com/ https://folk.computer/ https://news.ycombinator.com/item?id=39241472 (165 points, 2 years ago, 53 comments)
Thanks for all the kind words. I would love to work on this more, but the hackathon sprint was really all I had time for (note the newborn in the video...) without more backing / support. I was really bummed that the hackathon rejected the submission, because winning would have come with some Google support.
If anyone knows of a way to develop this... the code is on GitHub, and I have a roadmap in mind, but as we all know there's a huge gap between a hacky prototype and "works smoothly for other users".
This is really cool; I'd love to use something like this for my kids too. Maybe I'll try your project when I have some more free time. I would love to contribute, but I'm not very skilled in Python.
If you don't mind me asking, what hardware did you use? Especially for the projector - I'm guessing it needs quite a strong bulb to be visible in broad daylight?
I was pretty excited when I saw the premise behind what Apple was doing with VisionPro because I figured they were steering towards this, but it seems they’ve looked away and don’t really care about going deeper into this direction.
I asked at some point if I could theoretically develop an application that could literally be controlled by a Fisher-Price toy, like a little plastic car console or something. Or even potentially have a real keyboard that isn't connected to anything, where the VisionPro just sees my keypresses and applies them as if I were actually pressing something. The former is possible but surprisingly difficult, while the latter isn't really there yet (it requires too much precision, and the latency is worse than just using a Bluetooth keyboard).
Either way, the idea of a computing environment that meshes with and directly interacts with the real, physical objects around you is an interesting premise I’d like to see taken further with “Spatial Computing”/AR. Scanning and recording things I’m writing on a whiteboard or in a notebook by recognizing that I’ve picked up a pen and am writing something down would just be getting started.
Of course, if we’re ambiently recording everything you’re doing there will need to be some kind of regular process/interface to “sift” everything at the end of the day. This is the core of the Getting Things Done methodology. Everything goes into a big “intake list” and then you do periodic check-ins throughout the day where you review the list and decide whether to move those to a series of sub-lists to “do this now,” “do this soon,” or “do this someday.”
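That sift pass can be sketched as a tiny triage routine. This is a minimal sketch, not GTD canon: the keyword rules are placeholder heuristics standing in for the human (or model) decision made per item during the review.

```python
def sift(intake):
    """Route ambient-capture items into GTD-style sub-lists.

    The keyword matching below is a placeholder heuristic; in a real
    daily review, a person decides where each item goes.
    """
    buckets = {"now": [], "soon": [], "someday": []}
    for item in intake:
        text = item.lower()
        if "urgent" in text or "today" in text:
            buckets["now"].append(item)      # do this now
        elif "this week" in text:
            buckets["soon"].append(item)     # do this soon
        else:
            buckets["someday"].append(item)  # do this someday
    return buckets
```

The point is only the shape of the flow: one undifferentiated intake list going in, a small fixed set of sub-lists coming out.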
> Now that we have actually good AI, I have this vision of a form of computing that doesn’t involve me using a computer so much. Imagine you had the day’s emails to go through. It would be nice if the ones that required a simple decision could be dispatched with a few pen-strokes: I could write down a date that would work for that meeting; check a box to accept that invitation; etc.
This reminds me of those predictions from 1900 about the year 2000, when they thought we'd all live in enormous skyscrapers and get around by flying cars. Instead we moved out to suburbs because improved logistics systems meant we could buy things from suburban shopping centres rather than having to go into city centres. Evolution, not revolution.
Surely the real advantage of an 'actually good AI' would be getting the AI to do the work itself, rather than just allowing the work to be done in a format with which the human is more comfortable. The underlying problem is that there are too many things vying for our attention.
> At least then you could mimic in software that thing you get from physical objects—which is that they are usually built to do one, and only one, thing well. My alarm clock, for instance, is just an alarm clock; and that's what I like about it!
The UNIX philosophy, anyone? Do one thing, and do it well. It seems like in this 'age of AI' the industry is rediscovering decades-old best practices, by detour, all over again.
But otherwise, having 'interfaces' printed out to you and a multi-modal LLM layer working from your notes on them sounds really interesting and less stressful than modern 'computing'.
The Office's Michael Scott would be proud - Paper may just be the future of Digital after all!
1. Receive an email; render a page with the email, a reply section, and a unique ID; print it out physically
2. The human picks up the sheets from the printer and writes out replies with a pen
3. The human puts the stack of answered email sheets in a multi-page scanner
4. The scanner scans them; an agent transcribes the replies, matches them back to the incoming emails via the unique ID on each sheet, and sends the replies
You could adapt this flow for anything where human input is just one part of a larger sequence: insert print -> write -> scan wherever you'd normally have a human type. It's kind of a rebirth of faxing.
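The ID-matching step is the crux of that loop. Here's a minimal sketch of it in Python, under the assumption that pages are plain text; `render_email_sheet` and `match_scans_to_emails` are hypothetical names, and a real version would drive a printer, a scanner, and an OCR/transcription model rather than passing strings around.

```python
import uuid

REPLY_MARKER = "--- write your reply below this line ---"

def render_email_sheet(email):
    """Render one incoming email as a printable page carrying a unique ID."""
    sheet_id = uuid.uuid4().hex[:8]
    page = (
        f"SHEET-ID: {sheet_id}\n"
        f"From: {email['from']}\nSubject: {email['subject']}\n\n"
        f"{email['body']}\n\n{REPLY_MARKER}\n"
    )
    return sheet_id, page

def match_scans_to_emails(scanned_pages, id_to_email):
    """Pair transcribed scans back to their source emails via the sheet ID."""
    replies = []
    for page_text in scanned_pages:
        # The first line of each page carries the ID we printed on it.
        sheet_id = page_text.splitlines()[0].removeprefix("SHEET-ID: ").strip()
        # Everything after the marker is the handwritten (transcribed) reply.
        reply_body = page_text.split(REPLY_MARKER)[-1].strip()
        if sheet_id in id_to_email:
            replies.append((id_to_email[sheet_id], reply_body))
    return replies
```

In practice you'd likely print the ID as a QR code so the match survives messy handwriting, but the round-trip logic stays the same.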
If projects like this and DynamicLand interest you, it's worth checking out https://folk.computer/ - they've been working on this much more recently than DynamicLand and share their code as open source.
Just the other day, I noticed my thinking was so hijacked by distractions while building something (with AI help) that I started writing in a notebook to stay on track. The last time I'd written in the notebook was 3 years ago; in this case writing stuff down in it really helped to get me unstuck.
I'm excited to imagine workflows that could make computing a more physical activity. Thanks for writing and sharing this.
The author is basically advocating that they want to be an executive with a secretary, but they want the secretary to be AI. I don't use secretary in a pejorative sense, just meaning that the author seems to want someone/something that does simple tasks but lets them make decisions, as opposed to an executive assistant that has a little more self-agency to do things on their own.
They just want OpenClaw with printing and scanning privileges. Every morning OpenClaw prints out a task list or items that need action, the author writes notes/responses, and places it on the scanner. This is basically how my program director worked at my last job. Every morning the secretary would have his schedule printed out, he'd go to meetings and write notes, and would pass by his secretary and stick a note or two on her desk saying "set up a meeting with XYZ org/team within the next few days on ABC topic." The secretary would also print documents/presentations and he'd mark them up throughout the day with changes he wanted made, and he'd drop the documents off when he was done going through them, and the secretary would distribute the documents to their respective POCs to make the changes.
Basically the only thing the author hasn't mentioned that the secretary did is that the secretary also acted as a gatekeeper for access to the program director, either in real-time ("no, you can't go in, they are meeting with a higher level director") or would take a request for a meeting and have enough personal context on whether the director would want the meeting themself or want to see it go through a division chief first. Not sure if OpenClaw can do that, but just about everything else is totally do-able. Not sure if I really want to see someone wasting this much paper just to "feel analog" but I suppose it probably isn't a big deal since most people won't do it this way, and will stick to digital forms of communication with their OpenClaw secretary.
I've been following someone on X building a "Screenless Phone" that can scan to get inputs and print on receipt paper to provide output - very interested in how these types of experiments evolve!
Emacs, and technologies built on it, such as org-mode, come somewhat close to ideas expressed here by having plain text in a buffer be the unifying data format. You can organize stuff by just moving snippets of text around.
I think it's difficult in practice to design data manipulation interfaces based on real-world objects because atoms are heavy and bits are not. Data is just much more malleable and transformable than real world objects, at least at the pre-Diamond Age tech level we're at. But maybe ML will help make this easier by allowing computers to track and scan the objects more easily.
>If nothing else I'd like it if operating systems and web browsers helped me be less distracted and frenetic, instead of encouraging exactly that multi-tasking freneticism. When I opened my phone or computer, it'd be nice if it was constrained to operate in a mode purpose-built for whatever task I intended to use it for.
I've often had this thought. The short answer is that it isn't yours.
As a joke I proposed a sausage shaped phone that vibrates. That way if you happen to need or want a vibrator you have it in your hand already. It sounds stupid I know but phones are porn mags and slot machines too. For those with a gambling problem the usability of a work device that is also a casino should probably be expressed as a negative score. When I first held a smart phone I asked how to use it as an mp3 player. Today it still can't compete with a cheap mp3 player.
----
Long ago I read a story about a family that published and printed a weekly newspaper [sick] just for the 5 of them. It had articles looking back on the previous week and others about things scheduled for the coming week. Members of the family would submit articles and the mum stitched everything together into the printed issue.
I think with LLMs we could print a daily report. You could scan it too, but having to boot up a computer and launch applications defeats the point.
Maybe the trick is to have big buttons that connect a bootable medium (with a single-purpose OS) and start the computer.
Doing the sort of things the author wants to do simply wouldn't work for me. All I end up with is a pile of screwed up paper and nothing to show for it. Drafting and rewriting is so much better when you don't have to worry about making a mess.
The idea of writing a draft on paper, or cutting out squares to prototype layouts on a table, sounds like a nightmare to me. But I never did like pen and paper much, and I have lived and breathed computers since I was young. My ideal method of writing is a full-screen monospaced terminal.
That said, I do much prefer reading on paper, or at least on e-ink, for many of the same reasons outlined in the post. Computers and phones are just too distracting, and too dynamic.
And I'd love some way to write down shopping lists or appointments, and have them available wherever, without having to pull out the phone. Our current method is a whiteboard + a photo whenever we need it, which doesn't quite cut it.
Since I have a laptop, I threw away all paper support, focusing on the keyboard as my primary information interface.
Using paper and space to organize ideas is nice, but that's a niche use case. And in any case, you'll have to digitize it afterwards anyway, so better to start on the digital version immediately and get good at it. Every time I start a new project, I'm tempted to take pencil and paper, but then I refrain and use draw.io or the like, because I know it will win out in the long run.
For the rest, you can easily customize your phone / browser / anything to be less distracting.
As for using AI just for convenience, this looks very expensive in terms of resources.
There’s a strong argument for the paper computer, in the sense that we have evolved to think in space and with our bodies (Barbara Tversky’s work springs to mind). The cognitive load of parsing our thoughts and collaborating on ideas through digital interfaces is not insignificant, and it changes the nature of the combinatorial thinking required to externalise and socialise ideas, organise thoughts, and structure work. I think AI has created a huge opportunity for this kind of ambient association with computational power that over time can make the interface recede into the analogue rather than require us to engage with the digital.
I question the idea of pastoralism though; I would argue this is another kind of construct. Laurel Thatcher Ulrich’s ‘The Age of Homespun’ talks about this in detail, and how handcraft revivals were an expression of fear or anxiety about the radical changes brought about by industrialisation, and became a sort of myth-making device for the rejection of technological overlords.
In any case, Paper Computer charts a neat reformulation of the personal computer into something more interesting. If all individual computing tasks become distributed back into real spaces, objects, and physically manipulable media, it becomes more of an interpersonal computer, and distributed computing power can be pushed to things that don’t ordinarily engage with computational tasks, such as wind or plants or anything within the shared working environment.
I've read the other article about the 3D-scanning cat. I kind of get the feeling this guy has lived a privileged life and gets things done by other people before he even asks. How many times does he have to mention he has his phone loaded up with social media apps and allows notifications? It must be really hard for him to turn these off, considering his constant assault of conveniences and people wanting to do things for him.
from a ux perspective, all of this is quite exciting and is made possible with our growing capabilities with computer vision and language understanding.
however, it is quite interesting, the way these things are named and branded. calling it "computerless" or "screenless" is quite funny semantically. of course we need computing (often through the cloud even) to get it to run. and the word "screen" was originally meant to describe a surface to be projected on, which a lot of these solutions do!!
i'd wager that along the same lines, pretty much any smart voice assistant in your room could fit into this bucket, albeit with different capability set.
this is still at a stage where it seems like a lot of work to make it do what we take for granted on a daily basis. very far from reality for those who wanted to get rid of screens altogether. for that, the focus should more be on what one does on a screen than the hardware itself.
This article tries so very hard to avoid confronting reality - going back to analog proves its inherent advantage over AI. There's a boatload of research proving that mind-hand-writing-tool engagement is superior to voice recording or typing notes. I’m going to cite this in the future as a testament against AI, because that’s exactly what it is when seen through an academic lens.
I've been thinking about this too. I think it would be cool if you could put the interface in a pen, somehow. Like embed a tiny projector and camera in a pen. Then, if you want to read your emails, you can project them on a surface. If you want to reply, you write with the pen, then point the camera at it.
> they have the problem that they make it difficult to just use your calendar, todo list, or map—or even just respond to a friend's message—without encountering something else along the way, like a social network, short-form video, Slack, the news, or some other notification.
I see this seemingly everywhere. People are looking for these extreme solutions to solve the problem of getting distracted by an app like Instagram or TikTok on their phone. Wouldn’t uninstalling the app, and going a step further, deleting the account, be the more pragmatic solution here? We control what is installed on our devices, what accounts we have, and which notifications we receive. If someone has enough agency to move to a pen and paper, surely they can uninstall some apps?
While I like the idea of having a magic paper notebook that would somehow interact with computer systems, that idea seems like mostly science fiction without having significant levels of technology all around you (cameras, projectors, etc) which would kind of defeat the purpose imo.
I watched the first video on Dynamic Land and I think I’d feel very uncomfortable in a room like that. Look the wrong way and catch a projector’s light in the eye, and once big tech gets into the game, who knows what happens with all the data from the cameras. I’ve grown rather paranoid.
A phone with just utilities installed, no social media, or going a step further to something like an e-ink tablet (something like Remarkable), seems like it would get most of the way there and actually work today. The biggest concern then becomes the web browser, but the big tech companies do most of the work for us by making sites insufferable to use while logged out and without an app.
Something might be able to get rigged up with RocketBook as well, for an actual pen on paper experience, but having to take a picture of the pages is kind of a pain. I have one and the novelty wore off very quickly; it has sat in a drawer for years now.
I’ve struggled with this idea a bit myself, as I sometimes romanticize the idea of using analog tools, but when they exist alone on an island, that seems to come with some considerable downsides in the modern world.
Apple Notes can be good for some of this too. Instead of using ChatGPT, Apple Notes can use the phone camera to do live OCR on text and add it into a note. I’ve used it a couple times and it’s pretty handy, when I remember it.
I've been thinking along these lines too! My idea is to use a receipt printer + scanner. In the morning, the system prints a receipt with various widgets like weather, calendar, etc. At the end of the day, the scanner takes in the marked-up receipt to update the digital data and prepare for tomorrow's receipt.
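The morning-print half of that loop is easy to sketch. This is a minimal, hypothetical version that just composes the narrow-column text; a real setup would pipe the result to an ESC/POS receipt printer, and the widget inputs here are stand-ins for actual weather/calendar feeds.

```python
import datetime

def compose_morning_receipt(weather, events, todos, width=32):
    """Build the morning receipt as plain text, one widget per section.

    width=32 approximates the column count of common 58mm receipt paper.
    """
    lines = [
        datetime.date.today().isoformat().center(width),
        "-" * width,
        f"WEATHER: {weather}",
        "-" * width,
        "TODAY:",
    ]
    lines += [f"  {event}" for event in events]
    lines += ["-" * width, "TODO (tick when done):"]
    # Checkboxes give the EOD scan pass something unambiguous to detect.
    lines += [f"  [ ] {todo}" for todo in todos]
    return "\n".join(lines)
```

The return half (detecting ticked boxes and handwriting on the scan) is the harder part, but printed checkboxes at known positions make that detection much more tractable than free-form marks.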
The best way to predict the future is to look at the past. Humans have been living and working in the 3-D world since the dawn of time, we’ve worked with paper for thousands of years, we’ve only been working at screens for about 40 years. Technology to remove technology, such as this, is brilliant.
We are doing something related: taking the TipToi tech and pairing it with our own pen to turn paper into interfaces that can control remote systems. See https://papiro.press (the pages are still being redesigned, but we needed some placeholders to be able to talk to Chinese factories).
Unfortunately, I don’t think this will work until we have robot secretaries that can automate updating the paper wall calendars, documents, and books scattered around a room.
The only compromise would be a limited area like a physical desktop that had affordances like an overhead camera and some form of paper output.
The problem with screens is you can't get good at them, even after 18 years of them. Not like you could a sewing machine, a stick shift car, or a loom.
There is something incredibly valuable about forcing yourself to trace execution logic on physical paper. It builds a mental model of state changes and memory that you just don't fully develop when a modern IDE's debugger is doing all the heavy lifting for you.
Github - https://github.com/Pugio/Orly (hacky minimal prototype that did the thing)
Video Pitch - https://youtu.be/-9l1x7GnmxU (filmed an hour before the deadline on an old phone with no sleep)
https://www.theverge.com/2022/10/20/23415167/amazon-glow-sup...
[0] https://www.youtube.com/watch?v=7wa3nm0qcfM [1] https://dynamicland.org/
https://daily.tinyprojects.dev/paper_website
https://news.ycombinator.com/item?id=29550812
https://x.com/daviddorg/status/2037050583274954882
https://x.com/daviddorg/status/2033937383012635065
https://yearunplugged.com/newsletter
(My blog post btw if you’re curious https://bhave.sh/make-humans-analog-again/)
That said, I do much prefer reading on paper, or at least on e-ink, for many of the same reasons outlined in the post. Computers and phones are just too distracting, and too dynamic.
And I'd love some way to write down shopping lists or appointments, and have them available wherever, without having to pull out the phone. Our current method is a whiteboard + a photo whenever we need it, which doesn't quite cut it.
Using paper and space to organize ideas is nice, but that's a niche use-case. And in any case, you'll have to digitalize it anyway afterwards, so better start on the digital version immediately, and be good at it. Everytime I start a new project, I'm tempted to take a pencil and paper, but then I refrain and use draw.io or the like because I know it will be winning on the longer run.
For the rest, you can easily customize your phone / browser / anything to be less distracting.
As for using AI just for convenience, this looks very expensive in terms of resources.
I question the idea of pastoralism though; I would argue this is another kind of construct. Laurel Thatcher Ulrich's 'The Age of Homespun' talks about this in detail: how handcraft revivals were an expression of fear or anxiety about the radical changes brought about by industrialisation, and became a sort of myth-making device for the rejection of technological overlords.
In any case, Paper Computer charts a neat reformulation of the personal computer into something more interesting. If all individual computing tasks become distributed back into real spaces, objects, and physically manipulable media, it becomes more of an interpersonal computer, and distributed computing power can be pushed to things that don't ordinarily engage with computational tasks, such as wind or plants or anything within the shared working environment.
You'd be able to tell if you had a message by visually scanning for one or more balls.
To listen to a message, you'd take a ball and place it at a certain position to trigger playback.
I found the coming together of physical and digital in this way incredibly alluring, and still do today.
The article's focus reminded me of similar outcomes.
however, it is quite interesting, the way these things are named and branded. calling it "computerless" or "screenless" is quite funny semantically. of course we need computing (often through the cloud even) to get it to run. and the word "screen" was originally meant to describe a surface to be projected on, which a lot of these solutions do!!
i'd wager that along the same lines, pretty much any smart voice assistant in your room could fit into this bucket, albeit with a different capability set.
this is still at a stage where it seems like a lot of work to make it do what we take for granted on a daily basis. very far from reality for those who want to get rid of screens altogether. for that, the focus should be more on what one does on a screen than on the hardware itself.
> they have the problem that they make it difficult to just use your calendar, todo list, or map—or even just respond to a friend's message—without encountering something else along the way, like a social network, short-form video, Slack, the news, or some other notification.
I see this seemingly everywhere. People are looking for these extreme solutions to solve the problem of getting distracted by an app like Instagram or TikTok on their phone. Wouldn’t uninstalling the app, and going a step further, deleting the account, be the more pragmatic solution here? We control what is installed on our devices, what accounts we have, and which notifications we receive. If someone has enough agency to move to a pen and paper, surely they can uninstall some apps?
While I like the idea of having a magic paper notebook that would somehow interact with computer systems, that idea seems like mostly science fiction without having significant levels of technology all around you (cameras, projectors, etc) which would kind of defeat the purpose imo.
I watched the first video on Dynamic Land and I think I’d feel very uncomfortable in a room like that. Look the wrong way and catch a projector’s light in the eye, and once big tech gets into the game, who knows what happens with all the data from the cameras. I’ve grown rather paranoid.
A phone with just utilities installed, no social media, or going a step further to something like an e-ink tablet (something like Remarkable), seems like it would get most of the way there and actually work today. The biggest concern then becomes the web browser, but the big tech companies do most of the work for us by making sites insufferable to use while logged out and without an app.
Something might be able to get rigged up with RocketBook as well, for an actual pen on paper experience, but having to take a picture of the pages is kind of a pain. I have one and the novelty wore off very quickly; it has sat in a drawer for years now.
I’ve struggled with this idea a bit myself, as I sometimes romanticize the idea of using analog tools, but when they exist alone on an island, that seems to come with some considerable downsides in the modern world.
Apple Notes can be good for some of this too. Instead of using ChatGPT, Apple Notes can use the phone camera to do live OCR on text and add it into a note. I’ve used it a couple times and it’s pretty handy, when I remember it.
The only compromise would be a limited area like a physical desktop that had affordances like an overhead camera and some form of paper output.
Also check out the spirograph, along with the slide rule and the abacus.
(On HN 2017, 138 comments: https://news.ycombinator.com/item?id=15960056)