I haven't used it, I didn't know it existed until now, but I'm happy it exists and has been providing service to those who need it. There should be more of this.
Thanks. Most of these came out of restrictions: we can't afford to throw money at horizontal scaling (adding more servers, load balancers, etc.), so we were kind of forced to try out new things to keep costs affordable. There are many things left out of the doc above: IIRC we started with OpenVZ, and even today our security relies on SELinux; how we remapped user account creation with pre-existing templates for ext4 quotas; how we moved to XFS for its flexibility; MySQL DB quotas/limits; fork bombs from college/school students bringing down the Docker environment. "Old school internet" is the right term.
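On the fork-bomb problem: a standard mitigation on shared shell hosts (a general technique, not necessarily what webminal actually does; the account name and numbers below are made up) is per-user resource limits, via PAM's limits.conf for logins or `ulimit` for a single session:

```shell
# Hypothetical /etc/security/limits.conf entries for a guest account:
#   guest  hard  nproc  100      # cap total processes, so a fork bomb stalls out
#   guest  hard  as     262144   # cap address space at ~256 MB (value is in KB)

# The same caps applied in a subshell (the parent shell stays unaffected):
( ulimit -u 100; ulimit -v 262144; echo "limits set: $(ulimit -u) procs" )
```

With `nproc` capped, a `:(){ :|:& };:` fork bomb hits the process ceiling almost immediately instead of taking the whole box down with it.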
That's wonderful, and I can see why it's an Indian founder: it was so hard to get a remote shell back then, Indian debit cards didn't work reliably online, and so on. So what's the hardware underneath? Cloud servers or on-prem?
These days the world is amazing. Oracle Cloud gives you a ton for free. But perhaps there's some niche where this is useful. I have to say that this shared screen comms system is outrageously crazy, hahaha.
It began as on-prem: Freston hosted it in his house (we shared the server cost; some people called it crazy, because I sent money to someone I'd met on Linuxforums.org and had never seen, not even via the internet, but I trusted him because I'd known him for a few years on that forum). After 3 years or so we moved to cloud servers, mostly switching from one infra to another whenever we got some credits :D For a couple of years Linode sponsored those nodes, until its acquisition.
>shared screen comms system is outrageously crazy,
That was Freston's idea. I remember our typical chat began with something like
"Hey Laks, can you see me typing!" ;)
To be fair, 8GB of RAM is huge. I don't know, maybe I'm stuck in the early '00s, but even 2GB of RAM still seems extravagant; I remember when that was an exotic amount of RAM for dedicated gamers playing extremely high-fidelity games, so for a mere web server 8GB almost seems like absurd overkill. I still feel a tinge of shame whenever I see any software of my own using more than a few hundred megabytes. What a waste.
The major difference here is that this is intended for multiple users, not one person. Imagine 5,000 users all using the machine at the same time. The memory, open file handles, network connections, etc. for many users at once add up.
Depends entirely on what you're doing. 8GB of RAM is very insufficient for 3D texturing workflows, for example, where you can have many different 4k textures cached in memory. For other things, 8GB is probably a lot.
I barely used (or remember) the ZX81 my folks had, with its amazing 1KB of memory. It had a 16K expansion module you could plug into the back, which apparently made a big difference, but it also didn't have the greatest connection: you could easily dislodge it just by typing on the keyboard. I do remember my father coming up with various ways to try to secure it.
The ZX Spectrum that followed, with its huge 48K of RAM was night and day. The programs were so much more complicated.
Even echo on linux these days takes 38K of disk space and a baseline of 13K of memory to execute, before whatever is required to hold the message you're repeating.
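Easy to check on your own box (exact numbers vary by distro, coreutils version, and libc; note also that most shells use a builtin echo, so the standalone binary only runs when invoked by path):

```shell
# On-disk size of the standalone echo binary (path may differ per distro)
stat -c '%s bytes on disk' /bin/echo

# Peak resident memory of one run, if GNU time is installed:
# look for the "Maximum resident set size" line in its output
if command -v /usr/bin/time >/dev/null; then
    /usr/bin/time -v /bin/echo hello
fi
```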
RAM was so tight on those 8-bit machines that many games used tricks like hiding things inside the viewable area of the screen to eke out just a little bit more.
Quite often clients were more powerful than servers. Hell, at one point a CPU embedded in a printer could be faster than, say, an 8088. An X server (running on the client side) often required a more powerful machine than one running X clients (i.e. a server). A web browser is no exception.
Well... the best days were just putting hardware in a 2U box, racking it, and paying a bit for power and networking. It was such an easier time, and a handful of Core 2 Duos were fully capable of streaming 1080p video to around a million DAUs.
Of course, there's far more money in really fancy shared hosting that wastes resources, so that's the current model. Then you market to C-level folks that "real companies" host on AWS or Azure, and that all other options are "unserious." If your opex for compute isn't a million, you're wrong.
Oh man, what a blast from the past. I have fond memories of learning linux networking with netkit (based on UML).
UML was a really really cool piece of technology.
If anybody is wondering: User Mode Linux lets you boot a Linux kernel as a normal Linux process, and then run a userspace, still in a Linux process. This is from 2001. Super cool.
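For the curious, a sketch of what booting a UML guest looks like (assumes a kernel built with `make ARCH=um` and a root filesystem image; the file names here are illustrative):

```shell
# The UML kernel is just an executable; run it like any other program.
# ubd0= attaches a host file as the guest's block device,
# con0=fd:0,fd:1 wires the guest console to this terminal's stdin/stdout.
./linux ubd0=rootfs.img root=/dev/ubda mem=256M con0=fd:0,fd:1

# On the host, `ps aux | grep linux` then shows the guest kernel
# as an ordinary unprivileged process.
```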
I was trying to remember what this was called the other day, for some reason.
It turns out that if you run a uml kernel and point its root at the root of the disk the host Linux is running on, there's a hell of a turf war between the two and no-one wins.
Actually, I opened up GitHub Sponsors [0] just a few weeks ago. A few times I received enquiries from users (professors) who wanted to contribute back; only now do I have a proper channel to redirect such requests.
It takes a lot of guts to run something like this for years on end, kudos to you for setting this up and running it for all these years. I am wondering if you'd ever come across pubnixes or tilde servers when you first started up webminal?
Great work giis.
https://shell.cloud.google.com
Sometimes the "wrong" / "old" tool for a job is exactly right for you, if you really understand it. UML is old, but it fits here.
15 years is long enough to blur one's memory of a lot of things.
https://youtu.be/XXBxV6-zamM?t=1694
Hell, the RAM size was so important that they named machines after it.
[0] https://github.com/sponsors/Lakshmipathi
How many users can this support simultaneously? It says 256MB of RAM per user and 8GB total on the server, but it can presumably handle more than 32 simultaneous users?
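The arithmetic behind that question (using the numbers from the comment; real capacity depends on overcommit, since most idle shell sessions use far less than their quota):

```python
total_ram_mb = 8 * 1024   # 8 GB server, per the comment
per_user_quota_mb = 256   # stated per-user cap

# Hard floor on capacity: every user simultaneously pinning the full quota
print(total_ram_mb // per_user_quota_mb)   # → 32 users

# In practice an idle shell uses only a few MB; at e.g. ~10 MB average,
# the same server stretches much further (illustrative, not a measured figure)
print(total_ram_mb // 10)                  # → 819 users
```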
Very easy to use. Almost instant.