That's definitely true, and I think it's good you realize it so young.
It's inherent because of the diminishing returns for aggressive healthcare intervention. We're all going to die. We could ramp up the costs of intervention to arbitrary levels in a final death spasm, but we will still die. So, we have to strike some kind of balance.
The harder part, I think, is thinking about the decades leading up to that final end game. How do you trade off quality of life across different decades by saving and time-shifting some of your spending power into the future? It's not just medical costs but all the other aspects of life, which carry a mix of predictable and unpredictable costs.
In the big picture, I think this is just the recurring story of capitalism. The big players can seize the market. Nearly every industry or medium offers economies of scale that favor large investors. And everything facing the public turns into this advertising and analytics game. So, yes, it's driven by VC money that can buy user attention and drown out the small hobbyists who cannot invest as much in marketing or features.
I think the answer to your "dismantling" question would be similar to the antitrust actions against railroads, the steel industry, etc. a century ago. It takes political will and sensible regulation. Economics favors capital, not democracy or other social values. As with other mass consumer markets, I think consumers also enable this in a tragedy-of-the-commons scenario. Each can make self-serving compromises for convenience and enjoyment and ignore the externalities.
By the way, before the internet protocols dominated, there were bulletin board systems (BBSs) and unix-to-unix copy protocol (UUCP) networks. These had some kind of grassroots community federation but also saw more commercial consolidation over time. Handwaving a bit, this included systems like CompuServe and AOL. In some ways, USENET was the biggest social media that made the transition from UUCP to the internet. It too eventually suffered from the same erosion of its userbase, plus commercial consolidation and neglect, before the web.
Me too, if being a bit hyperbolic. I also use the external ThinkPad-branded keyboards with TrackPoint for desktop machines, and even for our "media PC" hooked up to our TV screen. I feel vaguely disabled if put in front of a computer with only a touchpad, in the "I have no mouth, and I must scream" sense.
I am keyboard-focused, and mostly use the TrackPoint to change window focus or place a text cursor. I would prefer a desktop mouse for any precision pointer movement, such as photo editing or vector drawing.
The biggest frustration I've had is the string of regressions in how Linux, Xorg, and/or Wayland handle TrackPoint input. The calibration, acceleration, etc. can go completely crazy after a software update, and it is basically impossible to bring them back the way they were using the available GUI desktop settings managers. Worse, this happens differently for different TrackPoint hardware instances, so in a household with multiple ThinkPads and external keyboards, muscle memory doesn't carry from one machine to another.
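When the GUI settings manager can't bring it back, one workaround is to re-pin the values by hand after login. A minimal sketch, assuming an Xorg session where the xf86-input-libinput driver exposes the "libinput Accel Speed" property through xinput; the "TrackPoint" name match and the 0.3 speed are placeholder assumptions, not settings from any particular machine:

    #!/usr/bin/env python3
    # Re-apply a known-good acceleration value to every TrackPoint-like
    # device, overriding whatever a desktop settings daemon left behind.
    import subprocess

    DESIRED_ACCEL = "0.3"  # assumed value; tune per keyboard

    def trackpoint_devices():
        # List X input device names and keep the ones mentioning "TrackPoint".
        out = subprocess.run(["xinput", "list", "--name-only"],
                             capture_output=True, text=True, check=True)
        return [name for name in out.stdout.splitlines() if "TrackPoint" in name]

    def pin_accel(name):
        # "libinput Accel Speed" accepts values from -1.0 (slowest) to 1.0 (fastest).
        subprocess.run(["xinput", "set-prop", name,
                        "libinput Accel Speed", DESIRED_ACCEL], check=True)

    if __name__ == "__main__":
        for device in trackpoint_devices():
            pin_accel(device)
            print("pinned accel", DESIRED_ACCEL, "on", device)

This doesn't help under Wayland, where each compositor applies its own libinput settings instead.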
I also like getting scroll wheel emulation via touchpad gestures, but the drivers seem to get into weird states of ignoring inputs from one or both devices. I don't know if this is some ham-fisted attempt at "ignore input while typing" that generalized into "ignore input during other inputs" and gets it wrong. I'll see moments where scroll gestures stop working, and others where the TrackPoint seems to be unresponsive.
Also, have they sorted out the 12V battery management for this low-use scenario?
Our family has an older hybrid Camry from around 2010, and it will destroy 12V batteries with too much local driving and parking. A typical ICE car does better in the same conditions.
From the behavior, I assume it's because they only charge the 12V battery via a weak alternator when the ICE runs, rather than also keeping it charged via DC-DC conversion from the larger electric traction power system.
I just returned from a 21-day trip and my 12V was dead because I left the Prius off the charger. It's fine as long as it's on the charger (my guess is that it trickle charges the 12V), but if you leave it off the charger, the traction battery doesn't maintain the 12V.
That's too bad. It shouldn't necessarily even need to charge the battery when parked, so much as have smarter battery management logic: get the car into lower power consumption states when parked, and more aggressively maintain the 12V charge when operating.
We resorted to installing a big master cutoff switch on the negative battery terminal in the trunk of the Camry. So if there is no plan to use the car again in the next few days, we electrically isolate the battery.
Then, we have to go through a longer "boot up" process to reactivate the car for its next use. Use the mechanical key to open the trunk, restore power, then enter the cabin and go through multiple cycles with the START button to let all the computers power back up...
Considering the IBM "Big Bertha" LCD was an obtainable product in 2001, it doesn't seem too far-fetched that high-resolution LCDs existed in some R&D labs years earlier.
It's not quite a 4K monitor, but I'll tell you it was pretty amazing to those of us who saw it demonstrated back then. This was a qualitatively different thing than we were familiar with. And, as I recall, it took 2 or 4 DVI inputs to drive it from typical graphics cards of the era. A single display output could not drive these kinds of pixel counts.
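For a rough sense of the arithmetic (the figures below are my assumptions, not numbers from the demo): the Big Bertha panels were 3840x2400, and single-link DVI tops out around a 165 MHz pixel clock, so one link can't even manage a 20 Hz refresh at that resolution:

    # Back-of-the-envelope bandwidth check (assumed figures).
    pixels_per_frame = 3840 * 2400   # ~9.2 million pixels on the T220/T221
    single_link_dvi = 165e6          # ~165 MHz pixel clock limit per link

    print(single_link_dvi / pixels_per_frame)        # ~17.9 Hz from one link
    print(41 * pixels_per_frame / single_link_dvi)   # ~2.3 links for ~41 Hz

So even a modest ~40 Hz refresh called for the bandwidth of two or three single-link DVI connections, which lines up with the memory of ganging two or four inputs together.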
IBM had a CRT version of this before then. It worked the same way, splitting the display into four quadrants. A Windows desktop would only show on one quadrant, and you needed special software to use the whole display. The one we had at work was monochrome grayscale, so it avoided the issues of dealing with a fine shadow mask.
LCD panels in that era were still being hand-buffed, and the defect rate would be high when attempting higher resolutions.
They started that project under the name Roentgen, as I recall, and while they did exist, they were eye-wateringly expensive.
Around the time Apple started delivering HiDPI displays, there was still a bit of scrambling by everyone to get software to play nice on OS X and Windows. Always fun when a game doesn’t realize you’re on a 270 DPI screen and makes the main menu so small you can barely read it well enough to change the settings.
I imagine they're rejecting the word "forest" to describe the landscape there. Locals would reserve the word "forest" for the coniferous zone of much higher elevation mountains. For example, the fire that destroyed Paradise, California some years ago was what we would all consider a forest fire.
The wild areas near Malibu and Pacific Palisades are more a mixture of chaparral and hilly grassland. There may be some oak trees scattered about, but it feels like more trees exist in the private home landscaping than in the actual wild areas.
My experience is vaguely similar, but it started a decade earlier, ran longer, and involved less distro hopping. I touched SLS and Slackware first, but settled on Red Hat by the mid-1990s for consistency across my i386 and DEC Alpha hardware. Then I just followed through with Fedora and some CentOS.
For the longest time, my workflow has been almost all XTerm and whatever X11-enabled emacs came with the distro. I've reluctantly used other terminal programs pushed by the distros. For work: autotools, make, and gcc, before shifting mostly to Python. Plus BSD Mail or Mutt, until enterprise login forced me to Thunderbird. And Netscape and Firefox.
I used to have to run Windows in a VM for office tools like PowerPoint and MS Word, but over time I have been able to just use OpenOffice/LibreOffice, partly because they got better at opening MS files, and partly because my career shifts and the changing world around me reduced the need for full MS compatibility.
I've developed a strong "data orientation" and a feeling for the short half-life of most software. My important artifacts are data files that I carry forward over years/decades, moving from system to system and tool to tool. I have a strong distaste for proprietary file formats and other data silos where the content is tightly bound to particular software. Consequently, I also dislike or distrust software with a premise of having such silos.
While I have quite a bit of skill and practice at building complex, distributed systems from my mostly academic CS career, I'm sort of an outsider to many popular end user practices. I dislike things like integrated IDEs, mobile phone apps, and cloud SaaS that all feel like the antithesis of my interests. Ironically, I have more understanding of how to build these things than I do for why anybody wants to embrace them. I don't actually want to eat the dog food, no matter how well I think we made it...
I feel like you're ignoring the amount of training that an expert musician does to learn a specific piece of music and maintain their proficiency in it. For the most part, they don't go to professional gigs and play something novel.
Traditional musicians have a whole live or real-time performance aspect as do athletes, dancers, etc. I think the amount of time they spend preparing for this can be similar to the time we spend working on one programming task. Bigger problems take more preparation. The difference is we don't have to then do a live performance after we've figured out how to program it. We just accumulate a recorded artifact and ship it, rather than doing a live recital after we've figured out all the difficult bits.
So it's difficult to draw parallels. Programmers have more in common with writers, painters, and sculptors who all work on a tangible artifact that is delivered after the fact and which acts as an accumulator of time-shifted work product. Some crafts, like glass blowing, are more like live music in that you develop a skill but then have to make a real-time performance each time you produce the artifact.
Actually, no. When you're learning a piece from scratch, you start at 60 bpm or slower and gradually polish your performance until you reach the normal speed of the piece. If you're going to perform with an orchestra, you also rehearse a lot. We started 14 weeks before the actual concert date (I used to play double bass in a symphony).
Learning the instrument is akin to learning the programming language. Music theory is the same thing as programming languages / intro-to-computation courses. You pass through them once and revisit as needed. Not every day.
However, when you finish a piece, 95% of the skill required to play it again is permanent. You just rehearse it a couple of times and, voilà, the performance is there.
The constant exercise part is very much on par with what programmers do every day. You either code (work) or play-fight/practice (hobby projects). Also, composers and genres have similar structures in their pieces, so once you get used to them, you can just fly through them, even if you're playing them for the first time.
There are a couple of comments which say that we're talking about senior programmers here. Senior musicians can play what they see on the first pass, or just improvise/remix what they hear for the first time (see [0]).
So, in most cases, the sheet music in front of the musicians is a cheat sheet. I remember just glancing at a section and then playing half (sometimes most) of it without even looking at it.
The live/improvised performance is akin to "hacking" in programming. My mentor, who taught me to play double bass, once had to improvise a bridge section of a piece because he forgot that specific part. He said that since he knew the motifs, he bridged the part on the fly, in a solo performance at the conservatory, and he got a pass because of how he handled it. This is how we hack something together when we're in a rough spot and dig our way out of it by knowing what we're doing, but improvising and trusting the process.
So, I draw these conclusions from 10+ years of live concerts and 15+ years of professional sysadmin/programming/research work.
Thanks for the perspective. I'm not a musician but have some among my friends and family where I've observed their general approach over decades.
I think your point about similarities among some composers or genres is akin to the point about similarities within a genre of programming. You can easily produce your Nth version of a CRUD app, audio or video filter, standard statistical analysis, etc. But changing to a different genre can require significant practice and learning. I think musicians face this just like programmers do, without having to change instruments.
I think that musicians must do a lot more regular work, like athletes, to retain or recover their physical form. But, I don't really believe there are "programmer's hands" that can get clumsy after mere months of downtime. After all, even if our typing slows down, it isn't really a gating factor in being able to produce complex products. It might even be a benefit, when it biases one towards more compact solutions rather than churning out baroque monstrosities!
> Traditional musicians have a whole live or real-time performance aspect as do athletes, dancers, etc. I think the amount of time they spend preparing for this can be similar to the time we spend working on one programming task.
Using the word “one” here is a very subtle, disingenuous way of trying to make your point; it’s almost clever.
Clearly said, yet the general sentiment awakens in me a feeling more of gothic horror than of bright futurism. I am struck with wonder and worry at how rapidly this stuff will infiltrate the global tech supply chain, and at the eventual consequences of misguided trust.
To my eye, too much of current AI and related tech is just an exaggerated version of magic 8-balls, Ouija boards, horoscopes, or Weizenbaum's ELIZA. The fundamental problem is people personifying these toys and letting their guard down. Human instincts take over and people effectively social-engineer themselves, putting trust in plausible fictions.
It's not just LLMs, though. It's been a long time coming, the way modern tech platforms have been exaggerating their capabilities with smoke-and-mirrors UX tricks, where a gleaming facade promises more reality and truth than it actually delivers. Individual users and user populations are left to soak up the errors and omissions and convince themselves everything is working as it should.
Someday, maybe, anthropologists will look back on us and recognize something like cargo cults. When we kept going through the motions of Search and Retrieval even though real information was no longer coming in for a landing.
We might blame ee cummings, but it's a bit like blaming William Gibson for every cyberpunk affectation that appears today...