There's probably no way for the compiler to prove safety. Rust is designed to allow 100% safe bare metal development, like a perfectly safe C that still allows you to get close to the hardware, and that's tough.
The type [u8] guarantees to the compiler that two reads of the same array location, with no intervening writes, return the same value.
Turns out that's not the case for freshly returned uninitialized allocations. The first read could return old data (say "1"), and the second read could return a freshly zeroed page ("0").
I'm failing to understand the connection to "safety" here. Reading a byte whose value you don't know isn't "unsafe". It's literally (!) the desired behavior when reading foreign data from an external source, which is in fact the use case in the article.
There's no safety problem as long as the arbitrary value is deterministic, which it is, being process RAM. The "uninitialized data read" bugs reported from instrumentation tools in C code are because the code is assuming the value has some semantics. The read itself has no value and is presumably an artifact of the bug, but it is safe.
That's just a terminology collision. All RAM access is deterministic in the sense that the value will not change until written. But it's not "predictable": the value could be anything.
C code that reads uninitialized data is presumed to be buggy, because it wouldn't be doing that unless it thought the memory was initialized. But the read itself is not unsafe.
Rust is just confused, and is preventing all reads from uninitialized data a priori instead of relying on its perfectly working type system to tell it whether the uninitialized data is safe to use. And that has a performance impact, as described in the linked article, which has in turn resulted in some terrible API choices to evade it.
> All RAM access is deterministic in the sense that the value will not change until written.
Again, the article literally points out how this is not true given modern allocators. The memory that Linux exposes to processes can change without being written to, prior to being initialized, given how allocators manage it. This isn't a fiction of the C standard or the Rust reference; it's what actually happens in the real world on a regular basis.
Rust is not confused, it is correctly observing what is allowed to actually happen to uninitialized memory while the process does nothing to it.
You could change the C/Rust specification of that memory. You could, in your C/Rust implementation, declare that the OS swapping out pages of uninitialized memory counts as a write like any other, and that it's the programmer's (allocator's) responsibility to make sure those writes obey the normal aliasing rules. Doing so would give up performance, though, because the fact that writing to memory has the side effect of cancelling collection of freed pages is a powerful way for processes to communicate quickly with the OS. (You'd probably also cause other issues with memory-mapped IO, values past the end of the stack changing, and so on, but we can focus on this one issue for now.)
We considered Go! At the time it was much more designed for servers than mobile devices. If I recall correctly the minimum binary size was like 30mb or something.
That seems more like an argument against Go. Dart is the language more familiar to the average dev, and you're gonna have an easier time translating that to Java/C# than you are Go.
It's not great at super-complex tasks due to limited context, but it's quite a good "junior intern that has memorized the Internet." Local deepseek-r1 on my laptop (M1 w/64GiB RAM) can answer about any question I can throw at it... as long as it's not something on China's censored list. :)
I worked on something back then that had to interface with payment networks. All the payment networks had software for Windows to accomplish this that you could run under NT, while under Linux you had to implement your own connector -- which usually involved interacting with hideous old COBOL systems and/or XML and other abominations. In many cases you had to use dialup lines to talk to the banks. Again, software was available for Windows NT but not Linux.
Our solution was to run stuff to talk to banks on NT systems and everything else on Linux. Yes, those NT machines had banks of modems.
In the late 90s using NT for something to talk to banks is not necessarily a terrible idea seen through the lens of the time. Linux was also far less mature back then, and we did not have today's embarrassment of riches when it comes to Linux management and clustering and orchestration software.
If you're a tech leader and confuse Linux boxes for mainframes then I don't think it's hindsight that makes you look foolish. It's that you do not, in fact, understand what you're talking about or how to talk about it - which is your job as a tech leader.
Yeah Elon has gotten annoying (my god has he been insufferable lately) but his companies have done genuine good for the human race. It's really hard for me to think of any of the other recently made billionaires who have gotten rich off of something other than addicting devices, time-wasting social media and financial schemes.
So far the whole business model of Silicon Valley since social media has been to monetize other peoples' content given out for free. The whole empire is built on this.
I wonder if this is going to come to an end through a combination of social media fatigue, social media fragmentation, and open source LLMs just giving it all back to us for free. LLMs are analogous to a "JPEG for ideas" so they're just lossy compression blobs of human thought expressed through language.
> I find it fascinating that some users here seem to be attached to the concept of an alpha male, reality be damned. It’s clear that this meets some emotional/psychological need for them.
Grifters and people selling right-wing politics have figured out how to market to male insecurity. They're selling politicians of course but also quack supplements, self-help nonsense, masculinity gurus, hilarious "boot camps" where you spend tens of thousands to have some dumbass yell at you, etc. All that stuff will leave you still insecure, and with less money.
Andrew Tate is probably the undisputed master of the alpha male grift. He's known as an abuser of women, and he is, but really men are his main marks. In a way, part of his grift is to make his marks utterly repulsive to most women, keeping them lonely, insecure, and paying customers.
Sure, but don't lose sight of the fact that female insecurity has been leveraged and marketed to as well, for pretty much forever. Fashion, makeup, beauty standards -- flip through a magazine and look at the ads, and think about what the ads are suggesting the reader needs or would benefit from, and further think about whether the reader will be more or less insecure afterwards.
(And male insecurity has always been a target. Don't let that bully kick sand at you at the beach! But I agree that the targeting has recently been vastly more tuned and optimized and the most egregious forms have become socially acceptable.)
That's what I was getting at -- historically female insecurity has been more aggressively targeted, but that's been changing. I now see all kinds of marketing and propaganda tilted toward male insecurity, a lot more than I remember.
They are also priced on the idea that nothing will challenge them. If AMD, Intel, or anyone else comes out with a challenger for their top GPUs at competitive prices, that’s a problem.
The biggest challengers are likely the hyperscalers and companies like Meta. It sort of flew under the radar when Meta released an update on their GPU plans last year and said their cluster would be as powerful as X NVDA GPUs, and not that it would have X NVDA GPUs [1].
Also, I should add that Deepseek just showed the top GPUs are not necessary to deliver big value.
> This announcement is one step in our ambitious infrastructure roadmap. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.