Hacker News | heresie-dabord's comments

It seems messy and even the author of TFA is unconvinced.

How does a mixin compare to role or interface in languages that do not have multiple inheritance?
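To make the comparison concrete (my own illustrative sketch, not from TFA; the names `Serializable` and `JsonMixin` are hypothetical): in Python, an abstract base class plays the interface/role part by only declaring what must be implemented, while a mixin supplies reusable behavior that a host class inherits.

```python
import json
from abc import ABC, abstractmethod


# Interface/role: declares what a class must provide, supplies no behavior.
class Serializable(ABC):
    @abstractmethod
    def to_dict(self):
        ...


# Mixin: supplies reusable behavior, assuming the host defines to_dict().
class JsonMixin:
    def to_json(self):
        return json.dumps(self.to_dict())


class Point(JsonMixin, Serializable):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def to_dict(self):
        return {"x": self.x, "y": self.y}


print(Point(1, 2).to_json())  # {"x": 1, "y": 2}
```

The practical difference: the interface constrains the host, the mixin extends it, and neither requires multiple inheritance of concrete state.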


> Kessler Syndrome and climate hell

Both of which demonstrate that our species is much better at understanding how to scale madness and destruction than how to scale sustainable activity.


> Probably not the greatest idea for production environments

Nor for any system where one takes care to not needlessly increase the threat surface.


> Pride, Greed, Lust, Anger, Gluttony, Envy, Sloth

The greatest popular innovation of our time appears to be to have extended the above list with Falsehood, Cruelty, and Pollution.


Falsehood, Cruelty, and Pollution are results of the seven sins: Cruelty typically stems from Anger or Envy, Pollution from Gluttony and Sloth, Falsehood from Pride and Envy, and so on.

I assure you falsehood, cruelty, and pollution have existed long before our time.

The magazine in question is a science-aligned publication. Given the current public discourse, it's no surprise that science-aligned opinions will be attacked. The current public discourse is (gleefully, tribalistically) misinformed, misguided, and hell-bent on social fragmentation.

Watch the bonds between citizens and reality dissolve in real time.


> Watch the bonds between citizens and reality dissolve in real time.

I never thought our generation would need to fight this war again... Big brains and opposable thumbs are overrated.


> scientists and engineers back on Earth have increasingly had to deal with age-related maintenance issues

There is perhaps unintended irony in that sentence, but it does evoke some Asimov stories in which human characters age while supporting technology.


I like this quote:

“We didn’t design them to last 30 years or 40 years, we designed them not to fail,” John Casani, Voyager project manager from 1975 to 1977, says in a NASA statement.


I would expect a mindful trade-off against total system mass and cost.

That doesn't seem helpful. Nothing lasts forever, and if you don't figure out when it's going to fail, it's going to be sooner rather than later.

Maybe the guy who helped launch the two man-made objects farthest from Earth knows more about how to build space probes than you do.

Shhhh, this is HN, where just like on reddit, a bunch of computer programmers think they know more than a professional in said professional's field.

Imagine what would happen if a dot-com billionaire started a space company.

Said dot-com billionaire being a physicist who only went into dot-com to earn money to start a space company.

> this is HN, a bunch of computer programmers think they know more than <figure of authority>

And they are correct

At least on the programming part, bearing in mind the huge advances in computers since Voyager was built.

Any professional computer programmer here knows more in their field than a programmer from the '70s. A Voyager built today with similar resources would be much better, 100% guaranteed.

Voyager 1 was a fantastic machine built by a terrific team, but let's not pretend that the state of the art hasn't changed. Anybody with computer skills polished toward building a machine in 1977 would be basically unemployable for building a machine in 2024.


You might be surprised to read the papers written by those early software developers. They were writing in the late 1960's and early 1970's about fundamental issues most developers of today don't fully grasp.

You might think, for example "waterfall, ewwww", but if you go back and re-read the first paper on waterfall development, it makes clear that waterfall development is in fact an anti-pattern. How many here are stuck on "modern" teams that think waterfall is a good idea, and yet those clueless old folks had figured out it was a dead end 50+ years ago.

One of the most critical aspects of managing software development is Conway's law. For distributed scalable systems, if you aren't thinking about Amdahl's law you're just a hacker and not actually an engineer. Check the dates on those papers.

They built incredibly sophisticated systems using incredibly primitive building blocks. If you honestly think they couldn't ramp up on PHP or Python or k8s, you need to spend a bit more time around some actual badasses.
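For readers who haven't seen it, Amdahl's law mentioned above fits in one line (a minimal sketch of the formula, not code from any of the papers): the speedup from parallelizing a fraction p of the work across n workers is bounded by 1 / ((1 - p) + p/n).

```python
def amdahl_speedup(p, n):
    """Upper bound on speedup when a fraction p of the work
    is parallelized across n workers (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with a million workers, a 5% serial portion caps speedup near 20x.
print(amdahl_speedup(0.95, 1_000_000))
```

The point the commenter is making: the serial fraction, not the worker count, dominates scalability, and that was understood in the literature decades ago.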


> if you aren't thinking about Amdahl's law you're just a hacker and not actually an engineer.

This is really funny stuff, thank you!


I seriously doubt that the average dev today knows more than the average dev in the 70s. In fact, I would happily wager money on it, if it were somehow provable one way or the other.

There are so many abstractions today that you don’t _have_ to know how computers work. They did.

Re: state of the art, you don’t need or want state of the art for building things that go into space, or underwater (closest analogue I can think of that I have experience in [operating, not coding for]). You want them to be rock-steady, with as close to zero bugs as possible, and to never, ever surprise you.


The average dev knows far far less today. Just how deeply people had to know hardware back then is a massive difference.

And if we look at the average dev today, most code is in the framework, or a node package, or composer package, and the average dev gets by via stack overflow or AI.

There are certainly devs that actually understand coding, but the average dev barely does. Most devs don't understand the underlying OS, the hardware, or externals such as databases at all. It's all abstracted away.

And coders back then had almost nothing abstracted away.


> Any professional computer programmer here knows more in their field than a programmer from 70’s

Programming for regimes with different constraints is…very different. In a very real sense, “their field” for most modern programmers isn’t even the same field as developing software for 1970s deep space probes. Plus, the issue wasn’t even about software but about end-to-end building and launching space probes (that was both what the quote was about and the field of the Voyager project manager).

But thanks for demonstrating the kind of hubris that the post you were responding to described.


But this is about the overall engineering of Voyager, not just the programming. Also, I'm skeptical about how much better modern hardware would fare in deep space conditions, considering the use of finer and more fragile electronics. Since you're talking about people in general rather than specialists, also consider how the median software developer seems to focus less on correctness, reliability, and optimization, compared to the standards of spacecraft engineering.

1) It was sophisticated indeed, top of its game, but let's not lie to ourselves. We have better engineers and better programmers today. Just to put things in context: in that time we moved from "Pong" to "World of Warcraft".

And it's not just software. Reducing an entire computer room to the palm of your hand, but with better storage, graphics, and computing power, is basically black magic. I can't imagine what Voyager could do with a current Nvidia chip.

2) Just because people are not trained in some specific domain doesn't mean they couldn't be motivated to learn it. I bet the people who built Voyager weren't born with the instructions engraved in their brains. And if they learned, other people can too.

If I've learned anything after lurking on HN for many years, it's to never, ever underestimate this community. This place still keeps surprising me in good ways.


> Also, I'm skeptical how much better modern hardware will fare in deep space conditions, considering the use of finer and more fragile electronics.

Since then, we have made massive advances in manufacturing. Maybe COTS parts aren't as usable in space as they were back then, but we can now easily manufacture something more resilient or, as a fallback, simply use those old parts. Also, basically all current electronics are designed to be, and are, used on Earth ~100% of the time. Over-engineering them for use in space is just a waste.


> skeptical how much better modern hardware will fare in deep space conditions

Why? Deep space radiation is only about 4x the dosage of LEO. Starlink satellites use modern tech, and they've collectively spent more than 10,000 years in space, since we've launched far more than two of them. The whole "modern electronics are more fragile" issue is overblown. The CPUs are tiny and easy to shield. The MMICs use huge features that you can see with a normal microscope.


Where did you get the number of 4x from? It seems different than what I understand, but I don't have any sources handy.

The "modern electronics are more fragile" issue really is not overblown. One of my peers has tested different types of non-volatile memory in LEO, and the TLC NAND sample gets totally wiped by ionizing radiation within the first week. CPUs, while being mostly logic and less susceptible to low-energy events, can still be easily destroyed, especially if radiation causes latch-up. MMICs and discrete devices have huge features in comparison, yes, but the junctions still degrade notably under radiation.

In my opinion, as someone working on LEO satellite hardware, it's easy to have opinions about things like correctness and reliability because they aren't naturally intuitive: demonstrating them usually requires observing many samples over a long time, so the consequences never touch most engineers. However, I've definitely seen a strong correlation between the effort spent on correctness and reliability and the success of missions.


What is Starlink’s failure rate? Genuinely asking; I don’t know. My point is that if it’s > 0, that’s a problem for something designed to go billions of miles away.

The longevity of Voyager has only a little to do with software engineering, and latest software engineering has even less to do with building spacecraft like Voyager.

If anything, I expect modern software engineering to carry significantly higher risks of failure than the engineering of ye olde days.

Why? The first step today would be installing Discord[1], the second step would be updating code live 420 no scope[2], and the third step would be figuring out how many JavaScript abstractions are desired.

[1]: https://news.ycombinator.com/item?id=42162380

[2]: https://news.ycombinator.com/item?id=41217037


I think the pushback is b/c this plays into common stereotypes about modern software being bloated and unnecessarily fragile. Those stereotypes are justified often enough, but spacecraft software is such a different animal that they just don't really apply here.

I mean most developers today are js or python code monkeys.

Developers in the '80s invented every algorithm we can now use with a simple import statement and a single-line function call.

Your statement is probably true for a minority of developers today, but not "Any professional programmer here"


That's an argument from authority, even if in this case a strong one. What you seem to be overlooking is that, from this one quote alone, we cannot conclude whether that was really all of the methodology that went into building Voyager 1 and 2. So while it is a witty quote, it doesn't actually tell us much without an additional statement that we don't need to look any further for other methods that were applied.

>Nothing lasts forever, and if you don't figure out when it's going to fail, it's going to be sooner rather than later.

You might be surprised about the reality of the situation.

I had a professor who worked on the design and fabrication of the Apollo Guidance Computers, which likely was a somewhat similar process to the one being discussed here. It's been quite a few years since his lecture on it, but the process went something like this:

They started with an analysis of the predicted lifetime/reliability of every chip type/component available to potentially include in the design.

The design was constrained to only use components with the top x% of predicted life.

Then they surveyed each manufacturer of each of those component types to find the manufacturer with the highest lifetime components for each of the highest lifetime component types.

Then they surveyed the manufacturing batches of that manufacturer, to identify the batches with the highest lifetimes from that manufacturer.

Then they selected components from the highest-lifetime batches from the highest-lifetime manufacturers of the highest-lifetime component types.

Using those components, they assembled a series of guidance computers, in batches.

They tested those batches, pushing units from each batch to failure.

They then selected the highest quality manufacturing batch as the production units.

When he gave this talk, decades after the Apollo era, NASA had been continuing to run lifetime failure analyses on other units from the production batch, to try to understand the ultimate failure rate for theoretical purposes.

Several decades after the Apollo program ended, they had still never seen any failure events in these systems, and shortly before the time of his lecture, I believe NASA had finally shut off the failure testing of these systems, as they were so remote from then "modern" technology (this was decades ago, hence the quotes around "modern").

This is what happens when you have the best minds committed to designing systems that don't fail. Yes, the systems probably will fail before the heat death of the universe. No, we don't have any idea when that failure time will be. Yes, it's likely to be a very long time in the future.

(And, of course, this is typed from memory about a lecture decades ago on events happening decades before that. This being HN, someone here probably worked on those systems, in which case hopefully they can add color and fix any defects in the narrative above).


A testament to requirements and quality. Thanks for sharing this insight, cheers!

> This is what happens when you have the best minds committed to designing systems that don't fail.

Given how times have changed, perhaps it is also valuable to note that other major-yet-unwritten factor: confidence in the supply chain.


>This is what happens when you have the best minds committed to designing systems that don't fail.

And a budget to support them.


Does this approach scale with complexity? At some point, you're better off planning for failures and engineering for redundancy.

Incidentally, you have the question backwards: no one really cares when it's going to fail. We care when it's not going to fail: will the spacecraft make it to its destination or not? It doesn't really matter what happens after that.

This might seem like a nitpick, but changes in approach and mindset like this are often the difference between success and failure with "impossible" problems like this. So it's critical to get your approach right!


You don’t do it that way. You figure out how it’s going to fail and when that failure is likely, and then you engineer it not to do that in the relevant timeframe.

The original engineer was right and you are not.


FWIW his quote also applies to a lotta devices here on Earth. For example guns are not designed to last forever, but they are designed not to fail. You don't want to hear a click when you expect a bang or vice versa. As a side effect, they last forever. It's fairly common for a 100 year old gun to work perfectly in 2024.

There are also expectations about maintenance and some notion of a “standard” environment. For example, unmaintained firearms work less well (or fail) when exposed to humid conditions.

See also: https://www.nps.gov/subjects/museums/upload/10-01_508.pdf


I imagine there are cases which exemplify your point, but this does not look like one of them.

Now with extra oppression of the masses by supreme executive power derived from a farcical electoral ceremony.

Given the usual vainglorious soup of mythical names, claims, swains and dames, I think the one film to recommend remains... Monty Python and the Holy Grail.

“I mean, if I went 'round saying I was an emperor, just because some moistened bint had lobbed a scimitar at me, they'd put me away!”

https://www.imdb.com/title/tt0071853/

https://en.wikipedia.org/wiki/Monty_Python_and_the_Holy_Grai...


It is painful to see Monty Python's anti-story of comic relief held up to stand alone without understanding... The psychological and historical themes in Arthur speak to worlds that are gone today. Comic relief specifically mocks and deflates many important aspects of the myth. By saying "oh, that is the ONE for me," it basically flushes a lot of content into ludicrous, and yes funny, cheap theater.

> many important aspects of the myth

I am of course open to hear what these aspects are. But let's be clear that lucid understanding and humour are the best parts of civilisation.


What information gain is there in most of the "interactivity" that is afforded by social media?

Credentialing. I frequently find myself reading Tweets from people I’ve never heard of because someone who I know to be an expert in a particular topic has liked or retweeted them. This kind of signaling helps surface more obscure content and make it available to people who wouldn’t have found it on their own. This is a huge deal.

There used to be RSS readers that allowed you to create and share feeds with your friends, actually.

Now we just need a site where we can browse a bunch of people’s feeds and find interesting ones. Sounds like Twitter.

Except that RSS is an open standard.

One important one is reposting, which shows a post to your followers who might not have seen the original. It is an important way to discover other content.

Also, likes signal that other people were interested in the post. I don't think global likes are useful, but likes from people you follow are important.

Finally, replies mean you can see interaction from people you follow. If you follow interesting people, you see interesting discussion.

With social media it isn't possible to read everything; I know, I used to try to read my whole Twitter feed. There needs to be some way to filter beyond just the time since you last looked. I think the current algorithmic feed is bad because it tries to show other stuff instead of ordering the things you want to see.


But all those features allow for optimization and create competition.

If you want likes, or views, or reposts, then you will have to "engineer" your post in such a way that it gets more attention. Not sure if that's always beneficial.


There is not much point in chasing attention when a post is only seen by followers and reposts; attention is just an indication that you wrote a good post. The only currency is followers, and it was hard to get those without outside fame.

The problem with Twitter and the others is that they now have an algorithmic feed. That means posts get seen globally, and clout metrics become valuable for reach. Comments also earn clout, so you get lots of drive-by ones and less discussion.


"social" means people interacting - replies, likes, etc.

If someone has an RSS reader with feeds from some news sources, official channels issuing announcements, etc - that's great, but does anyone consider that "social media"?

(Of course, you can believe that social media is bad and you don't want it, but that's a different question)


For things like a missing person alert, it provides an instant feedback mechanism and the ability to share things with people you might know in the affected area.

Otherwise, there’s absolutely utility to interacting over social media. We’re doing it right now!


People seem to like it?

Why is information gain the correct metric?

We're talking about marketing here. Shouldn't it be conversions or awareness or something?


filter out unimportant stuff

The pronunciation is shown here: https://en.m.wikipedia.org/wiki/Brian_Kernighan

Also note BK's preference for his Desert-Island Programming Language:

"He has said that if stranded on an island with only one programming language it would have to be C."

I trust that the air-gap would keep him safe. ^_^


>if stranded on an island with only one programming language it would have to be C

evil plotting raccoon meme: strand Kernighan on an island with just a LISP machine

https://i.kym-cdn.com/entries/icons/original/000/028/727/Scr...


He then writes a C compiler in Lisp. Checkmate.

Kernighan would probably first write the simplest possible C interpreter in Lisp, so he could then write the C compiler in C and get it bootstrapped.

Mainly how the air gap would keep him safe would be by preventing updates of ISO C and GCC.
