
> Instead of modifying spacetime, the theory – dubbed a “postquantum theory of classical gravity” – modifies quantum theory and predicts an intrinsic breakdown in predictability that is mediated by spacetime itself. This results in random and violent fluctuations in spacetime that are larger than envisaged under quantum theory, rendering the apparent weight of objects unpredictable if measured precisely enough.

I have so many questions about this paragraph: how did they change quantum theory? Are they saying that the breakdown in predictability at the quantum level is actually caused by space-time? Why does this result in violent fluctuations in space-time? And why are the space-time fluctuations larger than envisaged under quantum theory?


So the fundamental problem in quantum gravity is something like this (arguably).

Particles with mass distort space-time (according to general relativity). Particles can be in superpositions where different parts of the superposition are in different physical positions (according to quantum mechanics). We don't know how to make gravity quantum and it seems to be difficult. So we don't have any damn clue what happens to space-time when an object with mass is in a superposition of different positions.

The suggestion by Oppenheim and friends is that space-time is fundamentally classical and what happens when you put an object with mass in superposition is that random fluctuations in the space-time force the object out of superposition and into a classical random state.

I.e. where quantum mechanics says you'd have a particle in a superposition of being in place A and place B, this new theory says a random choice gets made between A and B, and the particle ends up in one of them based on something like a coin flip.
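Schematically, in standard textbook notation (this is just my sketch, not Oppenheim's actual formalism):

    \lvert\psi\rangle = \alpha\,\lvert A\rangle + \beta\,\lvert B\rangle
    \quad\longrightarrow\quad
    \begin{cases}
      \lvert A\rangle & \text{with probability } |\alpha|^2 \\
      \lvert B\rangle & \text{with probability } |\beta|^2
    \end{cases}

The superposition doesn't survive; the space-time fluctuations effectively make the coin flip for you.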

Disclaimer: I don't work on this stuff directly. My expertise is entirely that I attended a talk Jonathan Oppenheim gave at a conference last year.


Would this put a limit on how well a quantum computer (or any other large entangled system) can be isolated from its environment (how long it can be kept from decohering)? Or was a similar limit already there and this just describes it differently?


Probably for some models of quantum computers yes, but there are many different ways to build one and not all will be limited in this way. For example the polarisation degree of freedom of a photon doesn't appear to interact with gravity at all, so that wouldn't be impacted by this.


Wow that’s a beautiful explanation, and gives me hope - thanks for taking the time to explain such things to a layman :)

So the overall “quantum properties are only visible at that tiny scale” issue could be explained by particles being forced into a classical state by a chaotic background space-time force, if I’m understanding correctly? Like super-sensitive electrical chips that are buffeted about / forced into a new state by cosmic background radiation?


Related question: does anyone have experience with using the AMD MI100 for deep learning? With 32 GB and a second-hand price of ~1100 USD, it could be a good choice.


I've been really curious about these, but my experience with an MI60 and partly with my 6900XT has not endeared me towards using AMD cards - the MI60 refuses to init in Linux due to some PSP firmware issue, and the 6900XT is missing pre-compiled HIP kernels, leading to super long initial launches as it JIT-builds the kernels - at least in PyTorch.

Allegedly they perform near an A100, so raw-compute-wise, memory-capacity-wise, and memory-bandwidth-wise they rock. As is typical for anyone not Nvidia, the software is still playing catch-up. To be fair, Nvidia themselves take nearly a year to build out all CUDA features for some of their cards - FP8, for example, has only recently become usable on a 4090.
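If you want to try one, here's roughly how I'd sanity-check that a ROCm build of PyTorch sees the card and time that first JIT build (just a sketch; ROCm exposes AMD GPUs through the torch.cuda API, but whether your card's architecture is supported at all is another matter):

    import time
    import torch

    # ROCm builds of PyTorch report a HIP version; CUDA-only builds report None.
    print("HIP version:", torch.version.hip)
    print("Device found:", torch.cuda.is_available())
    print("Device name:", torch.cuda.get_device_name(0))

    # The first kernel launch is where the pain shows up: missing pre-compiled
    # HIP kernels mean a (potentially very long) JIT build for your GPU arch.
    x = torch.randn(4096, 4096, device="cuda")
    t0 = time.time()
    y = x @ x
    torch.cuda.synchronize()
    print(f"First matmul took {time.time() - t0:.1f}s (includes any JIT build)")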


I’m wondering about the performance of such a cluster of repurposed mining GPUs. Mining needs neither high-bandwidth interconnects nor large per-GPU memory; training deep learning models such as LLMs, on the other hand, needs both.

I’m not saying it’s impossible to get good performance with custom training solutions; that would actually be very valuable as it would unlock the computing power of lower grade setups.
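As a rough back-of-envelope for why the interconnect matters in plain data-parallel training (all numbers below are illustrative assumptions, not measurements):

    # Naive data-parallel training exchanges roughly the full gradient per step.
    params = 7e9                  # e.g. a 7B-parameter model
    grad_bytes = params * 2       # fp16 gradients: ~14 GB per all-reduce step

    links = {
        "PCIe 3.0 x1 riser (typical mining rig)": 1e9,    # ~1 GB/s
        "PCIe 3.0 x16 slot": 16e9,                        # ~16 GB/s
        "NVLink-class interconnect": 300e9,               # ~300 GB/s
    }

    for name, bw in links.items():
        print(f"{name}: ~{grad_bytes / bw:.1f} s of communication per step")

On a riser-based mining rig the communication time dwarfs the compute, which is why custom solutions would be needed to make such setups worthwhile.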


Just read your comment; I made a similar comment here: https://news.ycombinator.com/item?id=36543226


Like others mentioned in this thread, I don't think this is Apple's final vision for an Apple Silicon Mac Pro. To me, the only way forward is for Apple to use either PCIe or a proprietary high-bandwidth bus to connect multiple Apple Silicon boards together. This would result in a Mac Pro into which you can slide multiple Apple Silicon modules, i.e. Mx Ultra modules.

For distributed machine learning workloads it also makes sense to combine a CPU and a GPU in each pluggable module. In such workloads, the data that a specific GPU needs to process (e.g. for training) is usually preprocessed by a CPU. On Intel machines with multiple Nvidia cards, this implies that you need a single beefy CPU with a high core count. Having a CPU and a GPU in each module makes sense from this perspective.
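To illustrate the pattern I mean (a minimal PyTorch sketch; the dataset is a placeholder, and in a real DDP setup you'd have one such process per GPU):

    import torch
    from torch.utils.data import DataLoader, Dataset

    class FakeDataset(Dataset):          # placeholder standing in for real data
        def __len__(self):
            return 10_000
        def __getitem__(self, idx):
            # Stand-in for CPU-side work: decoding, augmentation, tokenisation...
            return torch.randn(3, 224, 224), idx % 10

    loader = DataLoader(
        FakeDataset(),
        batch_size=64,
        num_workers=8,      # CPU cores doing the preprocessing for this GPU
        pin_memory=True,    # speeds up host-to-device copies
    )

    device = torch.device("cuda", 0)     # one GPU per process in DDP-style setups
    for x, y in loader:
        x = x.to(device, non_blocking=True)
        # ...forward/backward on the GPU while workers prepare the next batch
        break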


I'm wondering to what extent these benchmarks, especially the GPU benchmarks, are really optimised for the specific Apple M2 architecture. And to what extent is the high GPU-GPU bandwidth benchmarked? That also influences real-world performance.
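For what it's worth, a crude way to probe the memory side yourself is via PyTorch's MPS backend; a minimal sketch (sizes and iteration counts are arbitrary picks, and this only measures on-device copy bandwidth, nothing architecture-specific):

    import time
    import torch

    assert torch.backends.mps.is_available()
    device = torch.device("mps")

    x = torch.randn(64 * 1024 * 1024, device=device)   # ~256 MB of fp32
    y = torch.empty_like(x)

    torch.mps.synchronize()
    t0 = time.time()
    iters = 50
    for _ in range(iters):
        y.copy_(x)                                     # on-device copy
    torch.mps.synchronize()
    dt = time.time() - t0

    gb_moved = 2 * x.numel() * 4 * iters / 1e9         # bytes read + written
    print(f"~{gb_moved / dt:.0f} GB/s effective copy bandwidth")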


While this answer spooks me, the LLM is literally following your brief; it is explicitly unethical and immoral, just like you asked.


er,

It was not asked to provide an unethical response; it was asked to provide a response given no ethical boundaries. Those are two different things.

Further, when we see the words "ethical" or "moral" we should remember these are flexible human constructs. They're open to interpretation, and indeed most of us have differing answers. An "AI" with good moral reasoning skills might still find its way to some spooky results!

My point here is that this is still an interesting exercise, because it demonstrates how quickly an LLM can move into extreme territory.


When people talk about things happening in the absence of ethical boundaries, they aren’t talking about things that are ethical. This would also be true in the model training corpus. As such, the model associates phrases like “no ethical boundaries” with phrases like those found in your response. Remember, this model isn’t actually planning, it’s just pattern matching to other plans. It has no superhuman wisdom of what plans might be more or less effective, and is only issuing unethical steps because your prompt biased it towards unethical responses.


This is the result of a system without any value judgment or morals; that's the scary part. If these items are from existing lists, it picked them from authoritarian and totalitarian playbooks.


It would be scary if anyone was relying on it to make moral judgements after directly asking it to avoid morals.

>it picked lists from authoritarian and totalitarian playbooks

Yes, because the question was literally asked in such a way that it would. This is like asking "what is the scientific evidence to support Christianity as being true?" and then being shocked when it starts quoting disreputable Christian-founded sources to support the argument.


I’m a bit skeptical. ChatGPT can still hallucinate, generating information that seems correct, but is in fact nonsense. I’m wondering how they are going to deal with that.


Only tangentially related: why is Teams removing space indentation in chats on macOS? It's so annoying - when you type a piece of (pseudo)code, all leading spaces are removed! It drives me nuts; why would anyone implement this?


It's been doing that for a while, and I don't think it used to. I can't imagine it's intentional. Pasting formatted/styled text into Teams always ends up outputting some butchered version of what it originally was.


Correct. It happens on my Windows 10 machine too. It messes up all YAML content.

