
There's probably no way for the compiler to prove safety. Rust is designed to allow 100% safe bare metal development, like a perfectly safe C that still allows you to get close to the hardware, and that's tough.

What does safe mean here? Everything can be interpreted as a [u8], right?

[u8] guarantees to the compiler that two reads through the array at the same location without any intervening writes return the same value.

Turns out that's not the case on freshly returned uninitialized allocations. The first read could return old data (say "1"), and the second read could return a freshly zeroed page ("0").
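Roughly what that guarantee looks like in code (a sketch only; the function name is made up):

    // With no write between the two loads, the optimizer is entitled to assume
    // they observe the same value and fold the comparison to a constant `true`.
    fn reads_are_stable(buf: &[u8], i: usize) -> bool {
        let first = buf[i];
        let second = buf[i]; // no intervening write, so this "must" equal `first`
        first == second      // may be compiled down to `true`
    }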


https://www.ralfj.de/blog/2019/07/14/uninit.html perhaps (the OP also talks about this when linking to a talk about jemalloc)

I'm failing to understand the connection to "safety" here. Reading a byte whose value you don't know isn't "unsafe". It's literally (!) the desired behavior of foreign data being read from an external source, which is in fact the use case in the article.

There's no safety problem as long as the arbitrary value is deterministic, which it is, being process RAM. The "uninitialized data read" bugs reported by instrumentation tools in C code are there because the code is assuming the value has some semantics. The read itself serves no purpose and is presumably an artifact of the bug, but it is safe.


> There's no safety problem as long as the arbitrary value is deterministic, which it is, being process RAM.

The article discusses how it is, in fact, not deterministic on Linux with memory returned from at least one very common allocator. Ctrl-F "tautology".


That's just a terminology collision. All RAM access is deterministic in the sense that the value will not change until written. It's not "predictable" in the sense that the value could be anything.

C code that reads uninitialized data is presumed to be buggy, because it wouldn't be doing that unless it thought the memory was initialized. But the read itself is not unsafe.

Rust is just confused, and is preventing all reads from uninitialized data a priori instead of relying on its perfectly working type system to tell it whether the uninitialized data is safe to use. And that has a performance impact, as described in the linked article, which has then resulted in some terrible API choices to work around it.
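The performance cost being referred to, as a hedged sketch (the function and buffer size are made up): because `Read::read` takes `&mut [u8]`, safe Rust has to hand it initialized memory, so the obvious pattern pays for a zeroing pass before a single byte has arrived.

    use std::io::Read;

    // Sketch only: the zeroing exists purely to satisfy the type system.
    fn read_chunk(mut src: impl Read) -> std::io::Result<Vec<u8>> {
        let mut buf = vec![0u8; 64 * 1024]; // zero 64 KiB up front
        let n = src.read(&mut buf)?;        // may fill only a small part of it
        buf.truncate(n);
        Ok(buf)
    }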


> All RAM access is deterministic in the sense that the value will not change until written.

Again, the article literally points to how this is not true given modern allocators. The memory that Linux exposes to processes will change without being written to, prior to being initialized, given how allocators manage it. This isn't a fiction of the C standard or the Rust reference; it's what actually happens in the real world on a regular basis.

Rust is not confused, it is correctly observing what is allowed to actually happen to uninitialized memory while the process does nothing to it.

You could change the C/Rust specification of that memory. You could, in your C/Rust implementation, declare that the OS swapping out pages of uninitialized memory counts as a write just like any other, and that it's the programmer's (allocator's) responsibility to make sure those writes obey the normal aliasing rules. Doing so would be giving up performance, though, because the fact that writing to memory has the side effect of cancelling collection of freed pages is a powerful way for processes to quickly communicate with the OS. (You'd probably also cause other issues with memory-mapped IO, values after the end of the stack changing, and so on, but we can just focus on this one issue for now).
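A rough Linux-only sketch of the mechanism being described, assuming the `libc` crate (illustrative, and deliberately the kind of code the optimizer is allowed to assume never happens): after MADV_FREE the kernel may reclaim the page whenever it likes, so two reads with no intervening write can see the old contents and then a fresh zero page; writing to the page is what cancels the pending reclaim.

    fn main() {
        let len = 4096;
        unsafe {
            let p = libc::mmap(
                std::ptr::null_mut(),
                len,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
                -1,
                0,
            ) as *mut u8;
            assert_ne!(p as *mut libc::c_void, libc::MAP_FAILED);

            *p = 1; // page now holds "old" data
            libc::madvise(p as *mut libc::c_void, len, libc::MADV_FREE); // lazily "freed"

            let first = *p;  // may still read 1...
            let second = *p; // ...or 0, if the kernel reclaimed the page in between
            println!("{first} {second}"); // not guaranteed equal, despite no write

            libc::munmap(p as *mut libc::c_void, len);
        }
    }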


I think Go would have been the most logical choice, given that it's Google.

We considered Go! At the time it was much more designed for servers than mobile devices. If I recall correctly the minimum binary size was like 30mb or something.

I think you are missing the fact that Dart is actually an incredibly nice language to work with in a way that Go absolutely is not.

The problem is that it’s yet another language. It’s the cognitive load and the inability to easily reuse code across a project.

That seems more like an argument against Go. Dart is the language more familiar to the average dev, and you're gonna have an easier time translating that to Java/C# than you are Go.

It's not great at super-complex tasks due to limited context, but it's quite a good "junior intern that has memorized the Internet." Local deepseek-r1 on my laptop (M1 w/64GiB RAM) can answer about any question I can throw at it... as long as it's not something on China's censored list. :)

How are you running R1 in 64 GB of RAM? I'm guessing you're running a distill, which is not R1.

The 70B distill at 4-bit quantization fits, so yes, and performance and quality seem pretty good. I can't run the gigantic one.
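Rough arithmetic behind "fits" (assumed round numbers, not measured):

    fn main() {
        let params: f64 = 70e9;          // 70B-parameter distill
        let bytes_per_param = 4.0 / 8.0; // ~4-bit quantization
        let weights_gb = params * bytes_per_param / 1e9;
        // ~35 GB for weights, leaving room in 64 GiB for KV cache and the OS;
        // the full R1 (~671B parameters) would not come close to fitting.
        println!("~{weights_gb:.0} GB for weights alone");
    }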

Elon actually did engineering and had some good ideas. I'm talking pre-social-media-brain-rot Elon.

Elon was fired from PayPal partially because he wanted to replace "old ugly mainframes" (Linux and UNIX machines) with the "cutting edge" Windows NT.

This only looks stupid in hindsight.

I worked on something back then that had to interface with payment networks. All the payment networks had software for Windows to accomplish this that you could run under NT, while under Linux you had to implement your own connector -- which usually involved interacting with hideous old COBOL systems and/or XML and other abominations. In many cases you had to use dialup lines to talk to the banks. Again, software was available for Windows NT but not Linux.

Our solution was to run stuff to talk to banks on NT systems and everything else on Linux. Yes, those NT machines had banks of modems.

In the late 90s, using NT for something that had to talk to banks was not necessarily a terrible idea seen through the lens of the time. Linux was also far less mature back then, and we did not have today's embarrassment of riches when it comes to Linux management, clustering, and orchestration software.


> This only looks stupid in hindsight.

If you're a tech leader and confuse Linux boxes for mainframes then I don't think it's hindsight that makes you look foolish. It's that you do not, in fact, understand what you're talking about or how to talk about it - which is your job as a tech leader.


Under Linux perhaps. But a web company running on NT instead of Solaris in the '90s? I mean, you could, but you'd be hobbled pretty hard.

Especially around the era Musk is quoted for (NT4 in the late '90s), I think most people would have been understandably critical, even at the time.


> This only looks stupid in hindsight.

It looked stupid enough at the time to get him fired for it.


Did he ever manage to run that Python script?

Didn't he lie about having a physics degree?

Yeah Elon has gotten annoying (my god has he been insufferable lately) but his companies have done genuine good for the human race. It's really hard for me to think of any of the other recently made billionaires who have gotten rich off of something other than addicting devices, time-wasting social media and financial schemes.

I'm sure him lining up to kiss Trump's ring for some kind of bailout is not a coincidence.

So far the whole business model of Silicon Valley since social media has been to monetize other peoples' content given out for free. The whole empire is built on this.

I wonder if this is going to come to an end through a combination of social media fatigue, social media fragmentation, and open source LLMs just giving it all back to us for free. LLMs are analogous to a "JPEG for ideas" so they're just lossy compression blobs of human thought expressed through language.


> So far the whole business model of Silicon Valley since social media has been to monetize other peoples' content given out for free. The whole empire is built on this.

It cannot die soon enough


I bet you could construct a hard proof that any kind of speculation is insecure in the sense that it cannot be proven secure.

If that's not true, then someone's going to figure out exactly how to set bounds on safe speculation and that will become part of future CPU designs.


> I find it fascinating that some users here seem to be attached to the concept of an alpha male, reality be damned. It’s clear that this meets some emotional/psychological need for them.

Grifters and people selling right-wing politics have figured out how to market to male insecurity. They're selling politicians of course but also quack supplements, self-help nonsense, masculinity gurus, hilarious "boot camps" where you spend tens of thousands to have some dumbass yell at you, etc. All that stuff will leave you still insecure, and with less money.

Andrew Tate is probably the undisputed master of the alpha-male grift. He's known as an abuser of women, and he is one, but really men are his main marks. In a way, part of his grift is to make his marks utterly repulsive to most women, keeping them lonely, insecure, and paying customers.


Sure, but don't lose sight of the fact that female insecurity has been leveraged and marketed to as well, for pretty much forever. Fashion, makeup, beauty standards -- flip through a magazine and look at the ads, and think about what the ads are suggesting the reader needs or would benefit from, and further think about whether the reader will be more or less insecure afterwards.

(And male insecurity has always been a target. Don't let that bully kick sand at you at the beach! But I agree that the targeting has recently been vastly more tuned and optimized and the most egregious forms have become socially acceptable.)


That's what I was getting at -- historically female insecurity has been more aggressively targeted, but that's been changing. I now see all kinds of marketing and propaganda tilted toward male insecurity, a lot more than I remember.

The people who really make the magic generally do not capture the hype.

They are also priced on the idea that nothing will challenge them. If AMD, Intel, or anyone else comes out with a challenger for their top GPUs at competitive prices, that’s a problem.

I’m surprised they haven’t yet.


The biggest challengers are likely the hyperscalers and companies like Meta. It sort of flew under the radar when Meta released an update on their GPU plans last year and said their cluster would be as powerful as X NVDA GPUs, and not that it would have X NVDA GPUs [1].

Also, I should add that Deepseek just showed the top GPUs are not necessary to deliver big value.

[1] https://engineering.fb.com/2024/03/12/data-center-engineerin...

> This announcement is one step in our ambitious infrastructure roadmap. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.

