Both of which demonstrate that our species is much better at understanding how to scale madness and destruction than how to scale sustainable activity.
Falsehood, Cruelty, and Pollution are results of the 7 sins. Cruelty is typically caused by Anger or Envy, Pollution by Gluttony and Sloth, Falsehood by Pride and Envy, etc.
The magazine in question is a science-aligned publication. Given the current public discourse, it's no surprise that science-aligned opinions will be attacked. The current public discourse is (gleefully, tribalistically) misinformed, misguided, and hell-bent on social fragmentation.
Watch the bonds between citizens and reality dissolve in real time.
“We didn’t design them to last 30 years or 40 years, we designed them not to fail,” John Casani, Voyager project manager from 1975 to 1977, says in a NASA statement.
> this is HN, a bunch of computer programmers think they know more than <figure of authority>
And they are correct
At least on the programming part, bearing in mind the huge advances in computers since Voyager was built.
Any professional computer programmer here knows more in their field than a programmer from the 70's. A Voyager built today with similar resources would be much better, 100% guaranteed.
Voyager 1 was a fantastic machine built by a terrific team, but let's not pretend that the state of the art hasn't changed. Anybody with computer skills polished towards building a machine in 1977 would be basically unemployable for building a machine in 2024.
You might be surprised to read the papers written by those early software developers. They were writing in the late 1960's and early 1970's about fundamental issues most developers of today don't fully grasp.
You might think, for example, "waterfall, ewwww", but if you go back and re-read the first paper on waterfall development, it makes clear that waterfall development is in fact an anti-pattern. How many here are stuck on "modern" teams that think waterfall is a good idea? Yet those "clueless old folks" had figured out it was a dead end 50+ years ago.
One of the most critical aspects of managing software development is Conway's law. For distributed scalable systems, if you aren't thinking about Amdahl's law you're just a hacker and not actually an engineer. Check the dates on those papers.
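For anyone who hasn't looked at it since school, here's a minimal sketch of Amdahl's law (my own illustration, not from those papers), where p is the fraction of the work that can be parallelized and n is the number of workers:

    # Amdahl's law: the serial fraction (1 - p) caps the speedup,
    # no matter how many workers you throw at the problem.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    # Even at 95% parallelizable, 1000 workers buy only ~19.6x,
    # and the limit as n grows is 1 / (1 - p) = 20x.
    print(amdahl_speedup(0.95, 1000))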
They built incredibly sophisticated systems using incredibly primitive building blocks. If you honestly think they couldn't ramp up on PHP or Python or k8s, you need to spend a bit more time around some actual badasses.
I seriously doubt that the average dev today knows more than the average dev in the 70s. In fact, I would happily wager money on it, if it were somehow provable one way or the other.
There are so many abstractions today that you don’t _have_ to know how computers work. They did.
Re: state of the art, you don’t need or want state of the art for building things that go into space, or underwater (closest analogue I can think of that I have experience in [operating, not coding for]). You want them to be rock-steady, with as close to zero bugs as possible, and to never, ever surprise you.
The average dev knows far far less today. Just how deeply people had to know hardware back then is a massive difference.
And if we look at the average dev today, most code is in the framework, or a node package, or composer package, and the average dev gets by via stack overflow or AI.
There are certainly devs that actually understand coding, but the average dev barely does. Most devs don't understand the underlying OS, the hardware, or externals such as databases at all. It's all abstracted away.
And coders back then had almost nothing abstracted away.
> Any professional computer programmer here knows more in their field than a programmer from 70’s
Programming for regimes with different constraints is…very different. In a very real sense, “their field” for most modern programmers isn’t even the same field as developing software for 1970s deep space probes. Plus, the issue wasn’t even about software but about end-to-end building and launching space probes (that was both what the quote was about and the field of the Voyager project manager).
But thanks for demonstrating the kind of hubris that the post you were responding to described.
But this is about the overall engineering of Voyager, not just the programming. Also, I'm skeptical how much better modern hardware would fare in deep space conditions, considering the use of finer and more fragile electronics. Since you're talking about people in general instead of specialists, also consider how the median software developer seems to focus less on correctness, reliability, and optimization, compared to the standards of spacecraft engineering.
1) It was sophisticated indeed, top of its game, but let's not lie to ourselves. We have better engineers and better programmers today. Just to put things in context: in that time we moved from "Pong" to "World of Warcraft".
And it's not just software. Reducing an entire computer room to the palm of your hand, but with better storage, graphics, and computing power, is basically black magic. I can't imagine what Voyager could do with a current Nvidia chip.
2) Just because people aren't trained in some specific domain does not mean that they couldn't be motivated to do it. I bet the people that built Voyager weren't born with the instructions engraved in their brains. And if they learned, other people can too.
If I've learned anything after lurking HN for many years, it's to never, ever underestimate this community. This place still keeps surprising me in good ways.
> Also, I'm skeptical how much better modern hardware will fare in deep space conditions, considering the use of finer and more fragile electronics.
Since then, we've made massive advances in manufacturing. Maybe COTS parts aren't as usable in space as they were back then, but we can now easily manufacture something more resilient or, as a fallback, simply use those old parts. Also, basically all current electronics are designed for and used on Earth ~100% of the time. Over-engineering them for use in space would just be a waste.
> skeptical how much better modern hardware will fare in deep space conditions
Why? Deep space radiation is only 4x the dosage compared to LEO. Starlink satellites use modern tech and they've spent >10,000 collective years in space since we launched more than 2 of them. The whole "modern electronics are more fragile" issue is overblown. The CPUs are tiny and easy to shield. The MMICs use huge features that you can see with a normal microscope.
Where did you get the number of 4x from? It seems different than what I understand, but I don't have any sources handy.
"Modern electronics are more fragile" issue really is not overblown. One of my peers have tested different types of non volatile memory in LEO and the TLC NAND sample gets totally wiped by ionizing radiation within the first week. CPUs, while being mostly logic and less susceptible to low energy events, can still be easily destroyed especially if radiation causes latchup. MMICs and discrete devices have huge features in comparison yes, but the junctions still degrade notably under radiation.
In my opinion as someone working on LEO satellite hardware, it's easy to have opinions about stuff like correctness and reliability because it is not naturally intuitive: it usually requires observing many samples over such a long time that it doesn't touch most engineers' day-to-day work. However, I've definitely seen a strong correlation between the effort spent on correctness and reliability and the success of missions.
What is Starlink’s failure rate? Genuinely asking; I don’t know. My point is that if it’s > 0, that’s a problem for something designed to go billions of miles away.
The longevity of Voyager has only a little to do with software engineering, and latest software engineering has even less to do with building spacecraft like Voyager.
If anything I expect modern software engineering to have significantly higher risks of failure than programmers of ye olde days.
Why? The first step today would be installing Discord[1], the second step would be updating code live 420 no scope[2], and the third step would be figuring out how many JavaScript abstractions are desired.
I think the pushback is b/c this falls on common stereotypes about modern software being bloated and unnecessarily fragile. Those are justified stereotypes often enough, but spacecraft software is such a different animal that it just doesn't really apply very often.
That's an argument from authority, even if in this case a strong one. What you seem to be overlooking is that, from this one quote alone, we cannot conclude whether that was really all of the methodology that went into building Voyager 1 and 2. So while it is a witty quote, it doesn't actually tell us much without the additional statement that we need not look any further for other methods that were applied.
>Nothing lasts forever, and if you don't figure out when it's going to fail, it's going to be sooner rather than later.
You might be surprised about the reality of the situation.
I had a professor who worked on the design and fabrication of the Apollo Guidance Computers, which likely was a somewhat similar process to the one being discussed here. It's been quite a few years since his lecture on it, but the process went something like this:
They started with an analysis of the predicted lifetime/reliability of every chip type/component available to potentially include in the design.
The design was constrained to only use components with the top x% of predicted life.
Then they surveyed each manufacturer of each of those component types to find the manufacturer with the highest lifetime components for each of the highest lifetime component types.
Then they surveyed the manufacturing batches of that manufacturer, to identify the batches with the highest lifetimes from that manufacturer.
Then they selected components from the highest lifetime batches from the highest lifetime manufacturers of the highest lifetime components.
Using those components, they assembled a series of guidance computers, in batches.
They tested those batches, pushing units from each batch to failure.
They then selected the highest quality manufacturing batch as the production units.
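To make the shape of that process concrete, here's a rough sketch of the selection cascade in code (entirely my own illustration; the field names and data are hypothetical, not NASA's):

    # Hypothetical sketch of the cascade described above: keep only the
    # longest-lived component types, then for each pick the best
    # manufacturer, then that manufacturer's best batch.
    def select_parts(component_types, top_fraction=0.1):
        ranked = sorted(component_types,
                        key=lambda c: c["predicted_life"], reverse=True)
        best_types = ranked[:max(1, int(len(ranked) * top_fraction))]

        selected = []
        for ctype in best_types:
            best_mfr = max(ctype["manufacturers"],
                           key=lambda m: m["measured_life"])
            best_batch = max(best_mfr["batches"],
                             key=lambda b: b["measured_life"])
            selected.append((ctype["name"], best_mfr["name"], best_batch["id"]))
        return selected

The real work, of course, was in generating the lifetime data (and then destructively testing whole assembled batches), not in the selection step itself.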
When he gave this talk, decades after the Apollo era, NASA had been continuing to run lifetime failure analyses on other units from the production batch, to try to understand the ultimate failure rate for theoretical purposes.
Several decades after the Apollo program ended, they had still never seen any failure events in these systems, and shortly before the time of his lecture, I believe NASA had finally shut off the failure testing of these systems, as they were so remote from then "modern" technology (this was decades ago, hence the quotes around "modern").
This is what happens when you have the best minds committed to designing systems that don't fail. Yes, the systems probably will fail before the heat death of the universe. No, we don't have any idea when that failure time will be. Yes, it's likely to be a very long time in the future.
(And, of course, this is typed from memory about a lecture decades ago on events happening decades before that. This being HN, someone here probably worked on those systems, in which case hopefully they can add color and fix any defects in the narrative above).
Incidentally, you have the question backwards: no one really cares when it's going to fail. We care when it's not going to fail: will the spacecraft make it to its destination or not? It doesn't really matter what happens after that.
This might seem like a nitpick, but changes in approach and mindset like this are often the difference between success and failure with "impossible" problems like this. So it's critical to get your approach right!
You don’t do it that way. You figure out how it’s going to fail, when that failure is likely, and then engineer it not to do that in the relevant timeframe.
FWIW his quote also applies to a lotta devices here on Earth. For example, guns are not designed to last forever, but they are designed not to fail. You don't want to hear a click when you expect a bang, or vice versa. As a side effect, they last forever. It's fairly common for a 100-year-old gun to work perfectly in 2024.
There are also expectations about maintenance and some notion of a “standard” environment. For example, unmaintained firearms work less well (or fail) when exposed to humid conditions.
Given the usual vainglorious soup of mythical names, claims, swains and dames, I think the one film to recommend remains... Monty Python and the Holy Grail.
“I mean, if I went 'round saying I was an emperor, just because some moistened bint had lobbed a scimitar at me, they'd put me away!”
It is painful to see Monty Python's anti-story of comic relief held up to stand alone without understanding. The psychological and historical themes in Arthur speak to worlds that are gone today. Comic relief specifically mocks and deflates many important aspects of the myth. By saying "oh, that is the ONE for me" it basically flushes a lot of content into ludicrous, and yes funny, cheap theater.
Credentialing. I frequently find myself reading Tweets from people I’ve never heard of because someone who I know to be an expert in a particular topic has liked or retweeted them. This kind of signaling helps surface more obscure content and make it available to people who wouldn’t have found it on their own. This is a huge deal.
One important one is reposting, which shows a post to your followers who might not see the original. It's an important way to surface other content.
Also, liking signals that other people were interested in the post. I don't think global likes are useful, but likes from people you follow are important.
Finally, replies mean you can see interaction from people you follow. If you follow interesting people, you see interesting discussion.
With social media, it isn't possible to read everything; I know, I used to try to read my whole Twitter feed. There needs to be some way to filter other than just the time since you last looked. I think the current algorithmic feed is bad because it tries to show other stuff instead of ordering the things you want to see.
But all those features allow for optimization and create competition.
If you want likes, or views, or reposts, then you will have to "engineer" your post in such a way that it gets more attention. Not sure if that's always beneficial.
There is not much point in chasing attention when a post is only seen by followers and reposts. It's just an indication that you wrote a good post. The only currency is followers, and it was hard to get those without outside fame.
The problem with Twitter and others is that they now have an algorithmic feed. That means posts get seen globally, and clout metrics become valuable for reach. Comments also earn clout, so you get lots of drive-by comments and less discussion.
"social" means people interacting - replies, likes, etc.
If someone has an RSS reader with feeds from some news sources, official channels issuing announcements, etc - that's great, but does anyone consider that "social media"?
(Of course, you can believe that social media is bad and you don't want it, but that's a different question)
For things like a missing person alert, it provides an instant feedback mechanism and the ability to share things with people you might know in the affected area.
Otherwise, there’s absolutely utility to interacting over social media. We’re doing it right now!
How does a mixin compare to role or interface in languages that do not have multiple inheritance?