Used server hardware is generally much more expensive in the EU, because the market is much smaller (fewer data centers to begin with, longer refresh cycles to reduce costs, and EU WEEE rules mandating scrapping instead of reuse).
The size of a byte is implementation-defined, not unspecified. Why is that a problem for writing robust code? It is okay to rely on implementation-defined behavior as long as you are targeting a subset of systems where your assumptions hold, and you check them at build time (e.g., a static assert on CHAR_BIT).
First, a system that forwards traffic behind its own IP address is called a proxy if it works at the application level, and NAT if it works at the IP level. So we have SOCKS proxies, but home routers with NAT.
Second, a VPN is just a fancy name for an overlay network over a WAN. An overlay network is still an overlay network even if it only contains two nodes: your host and a remote router providing internet access.
I do not have direct experience with these VPN services, but I would guess they work at the IP level and not at the application level. So they are just ISPs providing service through an overlay network (VPN) instead of an access network or a physical network.
There is no reason to assume that, say, a C compiler generates the same machine code for the same source code. AFAIK, a C compiler that chooses randomly between multiple semantically equivalent instruction sequences is still a valid C compiler.
I also browse with JS disabled by default, enabling it on selected sites for selected JS sources. It has several advantages:
1) The web is much faster.
2) JS often causes continuous CPU load, spinning the CPU fan up to noisy levels.
3) Sometimes JS is used for animations; I hate animations on web pages.
4) Sometimes JS is used to auto-play videos (although recent Firefox with the proper settings can block that in most cases even with JS enabled); I hate auto-playing videos. That was my original reason for disabling JS.
5) Cookie banners and other pop-ups are often implemented in JS and do not show up when JS is disabled (while the site still works).
There is still out-of-network healthcare (i.e. specific services or entire healthcare providers not covered by the single payer) in many countries with universal healthcare. But it is usually clear which is which.
> There is still out-of-network healthcare in many countries with universal healthcare
Can you provide links?
I've personally used the healthcare systems in Australia and Canada for two decades each, and also for a short time in the UK. I've never heard of this.
Link: https://www.reginamaria.ro/ - one of the biggest networks in the country. I have to use it for most of the regular stuff, and I pay a subscription plus out-of-pocket fees for some consultations. This is on top of paying 10% of my gross income into the socialized-healthcare money-stealing scheme.
The treatment provided will be similar to the NHS, but with less waiting (if relevant) and nicer facilities, such as private rooms rather than shared wards in hospital.
There is a small handful of clinics in Japan that do not accept the universal health insurance, such as specialist ones targeting English-speaking expats.
> Debian could package every version variant ... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.
That would work for distributions that just provide builds. But one major advantage of Debian is that it is committed to providing security fixes regardless of upstream availability, so they essentially stand in for the maintainers. And maintaining many different versions instead of just the latest one is a lot of redundant work that nobody wants to do.
>... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.
That's not reasonable for library packages, because they may have to interact with each other. You're also proposing a scheme that would cause an explosion in resource usage when it comes to compilation and distribution of packages. Packages should be granular unless you are packaging something that is just too difficult to handle at a granular level, and you just want to get it over with. I don't even know if Debian accepts monolithic packages cobbled together that way. I suspect they do, but it certainly isn't ideal.
>And to maintain many different versions instead of just latest one is plenty of redundant work that nobody would want to do.
When this is done, it is likely because updating is riskier and more work than maintaining a few old versions. Library authors that constantly break stuff for no good reason make this work much harder. Some of them only want to use bleeding edge features and have zero interest in supporting any stable version of anything. Package systems that let anyone publish easily lead to a proliferation of unstable dependencies like that. App authors don't necessarily know what trouble they're buying into with any given dependency choice.
This is unrelated to whether linking is static or dynamic; it is about maintaining API compatibility in libraries.
In C, it is generally assumed that libraries maintain compatibility within one major version, so programs rarely have tight version intervals and maintainers can just use the newest available version (in each major series) for all packages that depend on it.
If the build system (for Rust and some other languages) makes it easy to depend on specific minor/patch versions (or on upper-bounded version intervals), it encourages developers to do that instead of working to fix the mess in the ecosystem.
This is an inaccurate generalization of both the C and Rust ecosystems, and an inflammatory framing.
openssl has made several rounds of incompatible changes. ffmpeg and the related av* libs make major releases frequently. libx264 is famous for its "just pin a commit" approach to releases.
It's common for distros to carry multiple versions of major libraries. Sometimes it's frequent churn (like llvm); sometimes it's a multi-year migration (gtk, ncurses, python, libpng).
C libraries aren't magically virtuous in their versioning. The language doesn't even help with stability; it's all manual, painstaking work. You often don't see these pains, because the distro maintainers do the heroic work of testing upgrades, reporting issues upstream, holding back breaking changes, and patching everything to work together.
----
Cargo packages almost never pin specific dependency versions (i.e. they don't depend on exact minor/patch versions). Pinning is discouraged, because it works very poorly in Cargo and causes hard dependency-resolution conflicts. The only place where pinning is used regularly is for pairs of packages from the same project that are expected to be used together (when it's essentially one version of one package that had to be split into two crates for technical reasons, like derive macros and their helper functions).
By default, and this is the universally used default, Cargo allows dependency upgrades within the same semver major (comparable to a shared-library soversion). `dep = "1.2.3"` is not exact; it means >=1.2.3 && <2.0.0. The ecosystem is quite serious about semver compatibility, and there is tooling (such as cargo-semver-checks) to test and enforce it. Note that in Cargo, the first non-zero number in the version acts as the semver major.
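To make that concrete, here is a minimal sketch using the `semver` crate (which implements Cargo's flavor of version matching; assuming `semver = "1"` as a dependency) showing what a plain `dep = "1.2.3"` requirement accepts:

```rust
// Minimal sketch of what `dep = "1.2.3"` in Cargo.toml accepts.
// Assumes the `semver` crate (v1.x), which implements Cargo's matching rules.
use semver::{Version, VersionReq};

fn main() {
    // A bare requirement like "1.2.3" is a caret requirement: >=1.2.3, <2.0.0.
    let req = VersionReq::parse("1.2.3").unwrap();

    assert!(req.matches(&Version::parse("1.2.3").unwrap()));  // the listed version
    assert!(req.matches(&Version::parse("1.9.0").unwrap()));  // semver-compatible upgrade
    assert!(!req.matches(&Version::parse("1.2.2").unwrap())); // below the minimum
    assert!(!req.matches(&Version::parse("2.0.0").unwrap())); // semver-major bump: rejected
}
```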
> ffmpeg and related av* libs make major releases frequently.
True, not every lib keeps a long-term stable API.
> It's common for distros to carry multiple versions of major libraries. Sometimes it's a frequent churn (like llvm), sometimes it's a multi-year migration (gtk, ncurses, python, libpng).
This is consistent with what I wrote, as these are changes in major versions.
> C libraries aren't magically virtuous in their versioning.
My statement was not a categorical statement about C libraries; it was about the incentives shaping the ecosystem. Using libraries has costs and benefits. Frequent churn means higher costs, so it is only acceptable when it brings higher benefits. In the C ecosystem, the cost of major-version churn (or of library API incompatibility in general) is high, so people avoid it when possible. If sophisticated tooling like Cargo makes building against specific versions easy, it lowers the cost of churn, so everyone is less concerned about it.
It is true that in the C ecosystem, keeping things stable is not effortless; for software developers it means running CI against tens of distros / distro versions to test their software against a diversity of library versions. Not 'just use my Cargo.lock' (I am not sure how widespread this approach is in the Rust ecosystem, but at least some people argue for it in this discussion).
A Rust program depends on various libraries. It releases a specific version. Not pinning the specific dependencies for that specific release is weird for everyone who gets the program outside of their OS distribution's package manager!
Say I release foo 1.2.3 as a package, building it against bar 4.5.6. Perhaps foo still compiles with bar 4.5.2 or 4.6.8... but that's _not_ foo 1.2.3, right? And if there are bug fixes in some of those bar versions but not others, and my deps were set up to just package with "bar 4.x", suddenly I'm getting bug reports for "1.2.3" without a real association to "the" 1.2.3.
What I'm saying is, perhaps this is as easy as having two sets of deps (deps for repackagers, and deps for people wanting to reproduce what I built)... but if a binary is being shipped, _not_ being explicit about the exact dependencies used is upstream of a whole lot of problems! Is there a great answer from OS distro package managers for this?
Is the thing that should happen here a configure step in Rust builds that adjusts the deps based on what you have?
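For what it's worth, Cargo already draws roughly this distinction: Cargo.toml carries the compatible range (what a repackager can satisfy), while Cargo.lock records the exact versions a given release was built with, and `cargo build --locked` reproduces that set. A small sketch of the difference, again with the `semver` crate and the hypothetical foo/bar versions from the comment above:

```rust
// Sketch contrasting a range requirement (Cargo.toml) with an exact pin
// (analogous to what a Cargo.lock entry records). `bar 4.5.6` / `4.6.8` are
// the hypothetical versions from the comment above.
use semver::{Version, VersionReq};

fn main() {
    let range = VersionReq::parse("4.5.6").unwrap();   // `bar = "4.5.6"`: >=4.5.6, <5.0.0
    let pinned = VersionReq::parse("=4.5.6").unwrap(); // exact pin, lockfile-style

    let newer = Version::parse("4.6.8").unwrap();
    assert!(range.matches(&newer));   // a repackager may substitute this bar
    assert!(!pinned.matches(&newer)); // a locked, reproducible build will not
}
```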
If your library changes API all the time, do you think it's a good library?
Do you rewrite every API call in your software every time you bump dependencies?
Do you think that a library that changes its API very often, and thus stays insecure because bumping it breaks the program, should be running on people's computers?
Why do you think libraries get any releases at all?
Software A and B depend on C (version V). A has a bug that gets fixed by shipping with C version V+1. B unfortunately has a latent bug that appears when shipping with C (version V+1).
You’re now going to have to do a bunch of work, outright not ship stuff, or even introduce bugs because of this policy!
Nobody wants buggy libraries, and in this example it’s not the library’s fault (B might just be misusing C, or in fact compensating for a bug).
Reading through these threads I'm much more sympathetic than before to the software-bill-of-materials argument as a justification for this policy. I do not think there's an intrinsic quality argument for the global single-version policy.
Bug fix releases are proof that sometimes you do really just need to put out new things to fix a bug.
> If your library changes API all the time, do you think it's a good library?
If it has a 0.x.y version number? Yes, that's what 0.x means to begin with. In fact, Rust/Cargo uses a stricter interpretation of semver than the original, which lets you express API-stability guarantees (by keeping the minor version number fixed) even in 0.x projects.
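A quick sketch of that stricter 0.x interpretation (again assuming the `semver` crate, which follows Cargo's rules): the leftmost non-zero component is treated as the breaking-change marker.

```rust
// Sketch of Cargo's 0.x caret semantics: `dep = "0.2.3"` means >=0.2.3, <0.3.0.
use semver::{Version, VersionReq};

fn main() {
    let req = VersionReq::parse("0.2.3").unwrap();
    assert!(req.matches(&Version::parse("0.2.9").unwrap()));  // compatible within 0.2.x
    assert!(!req.matches(&Version::parse("0.3.0").unwrap())); // treated as a breaking release
}
```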
Is this really that cheap? Looking at several local (CZ) e-shops, I cannot find a 32 GB DDR4 ECC RDIMM cheaper than $75, which works out to $1200 for 512 GB (16 x $75).