This is a blast from the past. This was my first venture into Unix systems. I installed it on my parents' Mac from a Jaz drive over SCSI. I was 16 years old.
Somehow I was able to get it to work, but I have memories of trying to compile programs and having to dive into the source and make changes with the blind confidence only a teenager could have. I'm surprised the system was as stable as it was.
> but I have memories of trying to compile programs and having to dive into the source and make changes with the blind confidence only a teenager could have.
That is an awesome sentence. I sometimes miss this blind confidence I had as a teenager.
Some months ago, something snapped and I decided to finally let myself start working on the impossible projects that I have dreamed about for a while. I got back that feeling I had as a teenager, of coding without any idea of the limits of my own abilities.
Specifically, I want to make some devops tools that I need and don't exist yet:
- A simple HTTP file server that uses mutual-TLS for auth and all configuration comes from a single TOML file (a config-loading sketch follows this list). Passwords are poisonous to security. Stateful config APIs are poisonous to maintainability.
- A lightweight Dockerd replacement that uses mutual-TLS for auth, fetches binaries from the file server, actually verifies the SHA-256 digest of binaries (dockerd doesn't), and receives all configuration by HTTP PUT of a single config.toml file. It will store VM stdout logs locally and make them available via mutual-TLS HTTP.
- A metrics server that fetches logs via mutual-TLS HTTP, stores them in the file server, parses them (as JSON), calculates metrics and alarm states, caches derived data on the file server, serves a dashboard, re-exports specific metrics and alarm states, and notifies third-party services (Pagerduty/Opsgenie) on alarm state changes. All config comes from a single config.toml file.
- A simple monitoring daemon that performs repeated website and API requests and emits metrics to logs. These get picked up by the metrics server.
- An infrastructure management tool without the problems of Terraform. Specifically, it must support creating resources which contain other resources. And it shouldn't require the maintenance nightmare of multi-stage Terraform deployments.
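To make the "single TOML file" idea concrete, here is a minimal Rust sketch of how such a tool might load its configuration at startup, assuming the serde and toml crates. The struct fields and the file name are hypothetical, just to illustrate the shape, not the actual schema of the tools described above.

    // Hypothetical config loader: every field is illustrative, not taken
    // from the real tools above. The entire configuration lives in one
    // TOML file that is read once at startup; there is no config API.
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Config {
        listen_addr: String,      // e.g. "0.0.0.0:8443"
        file_dir: String,         // directory of files to serve
        server_cert_path: String, // PEM certificate presented to clients
        server_key_path: String,  // matching private key
        client_ca_path: String,   // CA used to verify client certs (mutual TLS)
    }

    fn load_config(path: &str) -> Result<Config, Box<dyn std::error::Error>> {
        let text = std::fs::read_to_string(path)?;
        let config: Config = toml::from_str(&text)?; // fail fast on a bad file
        Ok(config)
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let config = load_config("config.toml")?;
        println!("serving {} on {}", config.file_dir, config.listen_addr);
        Ok(())
    }

Reading one flat struct at startup and refusing to run on a parse error keeps the process stateless with respect to configuration, which is the point of avoiding stateful config APIs.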
I started writing this stuff in Golang. But I quickly became frustrated with Golang's incomplete libraries. There are stupid things missing like min(int,int) and basic synchronization structs (WaitableBool). Golang's http library doesn't support dynamic server-side request timeouts or mutual-TLS. I guess they want you to run Golang servers behind nginx or expensive Google Cloud load balancers, probably on Kubernetes. No thanks.
I learned Rust and started writing the tools above. I fell in love with forbid(unsafe_code). All of Rust's HTTP server libraries contain copious amounts of unsafe code. And none support mutual-TLS. So I started writing a safe Rust HTTP client & server library. Halfway through, I realized how much unsafe code is in tokio & async-std. So I wrote and released a small safe Rust async runtime, called "safina". Amazingly, it works. I resumed working on the HTTP library, adding TLS support. Then I realized that rustls has a lot of unsafe Rust/C/assembly code. So I started writing a safe Rust TLS 1.3 library. Now I'm deep into that project. It's satisfying.
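For anyone who hasn't used it, that attribute goes at the top of the crate and turns any use of unsafe into a hard compile error that inner code can't re-allow. A tiny sketch:

    // Crate-level attribute, placed at the top of lib.rs or main.rs.
    // With `forbid`, any `unsafe` block or function anywhere in this
    // crate is a compile error, and inner `#[allow(unsafe_code)]`
    // attributes cannot override it.
    #![forbid(unsafe_code)]

    fn main() {
        // Uncommenting the next line would make the build fail:
        // unsafe { std::hint::unreachable_unchecked() };
        println!("this crate contains no unsafe code");
    }

The catch, as the comment above describes, is that the attribute only covers your own crate; dependencies like tokio, async-std, or rustls can still pull in unsafe code, which is what led to this chain of sub-projects.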
I hope you, too, will forget your limitations and try new things without reservation.
I love this comment. I love how one project can (and often does) lead to other subprojects that can all have their own subprojects. In the end it all comes together... and in my case I usually look back on days/weeks/months of work and am impressed with what I learned along the way. I also love the feeling that comes along with it.
> I guess they want you to run Golang servers behind nginx or expensive Google Cloud load balancers, probably on Kubernetes.
If you're using shared hosting, or your data has any value, why would you use anything besides a reverse proxy/Wireguard to access your hosted application server?
I read about MkLinux online at 14. I asked my mother to take me to Barnes & Noble to see if I could find a book on Linux, as suggested by the webpage that introduced me to it. There were no books on Linux in my Midwestern town at that time. I didn't know what to do when it booted my Mac to a CLI, so I erased it.
I didn’t really get to a command line again until Mac OS X some years later.
I’m so jealous of today’s budding computer nerds. The world is their oyster. Info about how to use new things is very accessible. Damn kids /s.
Also jealous of today's little nerd seedlings... but I feel like we older nerds still have an understanding that can only be obtained by being there from the beginning (not the real beginning, but I was born in '73 and got my first computer in 4th grade) and seeing the layers of technology stack up over the years... if that makes sense.
It was even worse in Indonesia. Once I realized how far behind we were in general computer news & information, I literally mailed US$100 to Ziff-Davis' office.
That's how I got my monthly PC Magazine subscription (this was back when it was the best magazine about PCs).
Also, good technical books were practically non-existent.
I got my first assembly-language and Norton books from a bookstore specializing in bootleg copies.
Only years later was I able to buy good books.
Yeah, it's too much to do in one human lifetime, but we still discover lots of new stuff every day and thanks to SE, online docs and wikis of varying quality, it's quite easy to get things to run in no time, even if you've never heard of them before.
Me too! Down to the Jaz drive over SCSI. On a Performa 6115. Mostly I did it because everyone on IRC was talking about Linux and BSD all the time. After a few months with it, I finally got a dedicated computer for FreeBSD (with that super hot Adaptec 2940UW to keep the SCSI going).
Ah yes. There was a time when SCSI was the hottest hardware for workstations and geeks. When my classmates in the late '90s bought 3D cards for their computers, I was counting every penny to afford an Adaptec UltraSCSI controller and disk system. It was really fast and knocked the socks off ATA. My last controller was a 19160U, which ran like a dream on Linux and FreeBSD. It was very noisy, though, with a set of 10k rpm drives in my little teenage room.
I think this was also the first "Linux" I ever installed, on one of those Tanzania clones, a Motorola StarMax 3000. I seem to recall that it never booted through OF at that point. I can't remember which version it was, probably the early DR3.
Some of them were rare even when they were new; I have a Radius 81/110 in my basement, built by a company that was in the Mac clone business for about three and a half minutes. Not even sure it could run MkLinux.
The first computer I bought was a Power Computing Power 120(?). I think it's still in a box in my garage.
I went through a few Linux distributions for it: MkLinux first, then LinuxPPC, and finally Yellow Dog Linux. The driver for the Ethernet card needed to be patched for big-endian machines and the kernel needed a patch to allow the ADB keyboard CapsLock to be remapped to Control. Fun times.
I found a StarMax in the bin behind a local university department a few months ago. I thought those days were long over, but there are still strokes of luck, apparently.
These PPC Macs are a bit overlooked, in my mind, but of course this is exactly the era when I really started to get into computers. It just seems like everyone else is attracted either to the compacts or to the colourful iMac-era ones.
Unfortunately, it went to the thrift shop many years ago, along with all the vintage UNIX workstations I wish I still had today. Haha. Hopefully, someone picked them up and they didn't just get recycled.
We actually used it in a semi-production configuration. We had an Apple 8100 sitting in the corner. We needed a machine to FTP all the weather sites every 10 minutes and download current conditions to input into a running near-realtime ocean current model. Kinda like Windy, except this was the mid-1990s.
The code was a custom Perl script written by a programmer that later died with his family on the plane that crashed into the Pentagon on 9/11.
It ran very reliably for weeks, except when somebody would log in and run X Windows. Then it would die a horrible death within hours.
One oddity about the version we had: it had to be installed from MacOS (System 7, I believe). Some of the pathnames in the default install were longer than what was allowed in MacOS (256 characters, I think), so not everything would be installed correctly!
I also had an MkLinux machine that I used as a semi-production machine from ~96-99. I was able to get a cable modem in early '96. Back then ISPs only supported single machines directly connected to the modem, so I found an old Power 6100/60, installed MkLinux on it, and used it as a dedicated NAT box.
There wasn't free list hosting or chat back then either (ISPs used to charge you to host a listserv!), so I got sendmail, majordomo, and ircd running on it. I even provided POP3 and SMTP services for friends who did not have their own internet accounts. As I recall, getting ircd working on the platform was pretty challenging for me as a teenager.
My friends kept in touch using the chat and mail servers on that machine for years.
Just curious, are you the regular "geezer" who stopped posting a couple years ago, and if so why the new account? If not, and just a lurker who finally created an account to comment with, welcome to hn!
This was the first Linux I ever used. I still have the official MkLinux book somewhere at my parents' house. Great memories!
I used KDE and it was rock solid and worked well. I think I had it on my beige G3. The HD eventually died and was replaced, which sent me back to MacOS once the machine was repaired.
It's odd that I turned out to be a huge GNOME fan in my later years. I can't stand KDE now.
Wow, this still exists?! I remember trying to install MkLinux on a Performa 6116 that my parents no longer had a use for. I seem to recall it not having enough RAM to run X Windows, and the command line mystifying me. This, together with the release of OS X 10.0 around the same time, got me going on Unix-like systems for a long time.
In about 1997, after a pile of Mac OS 7 machines reached end of life (replaced with Windows PCs), I put MkLinux on a couple and ran one as our company firewall for quite a few years and the other as some kind of file server. Rock solid, with uptimes of several years when they were finally shut down for good. Quite a contrast to the Mac OS of the time, which had trouble running for a whole working day.
Mach's IPC model actually got worse on newer machines, as the number of cycles wasted by some badly thought-out decisions kept growing. Fortunately, other resources made it less visible, so the cost was partly mitigated (that's how OSF/1 offshoots like Digital Unix and OS X didn't die under its weight), but you could always find a benchmark showing its problems.
What Mach really did was strangle the microkernel revolution in the crib by becoming the face of it and then turning out to be hilariously slow, giving that reputation to microkernels in general.