OK, I can partly explain the LLM chess weirdness now (dynomight.net)
223 points by dmazin 18 hours ago | 191 comments





> For one, gpt-3.5-turbo-instruct rarely suggests illegal moves, even in the late game.

It's claimed that this model "understands" chess, and can "reason", and do "actual logic" (here in the comments).

I invite anyone making that claim to find me an "advanced amateur" (as the article says of the LLM's level) chess player who ever makes an illegal move. Anyone familiar with chess can confirm that it doesn't really happen.

Is there a link to the games where the illegal moves are made?


I am an expert-level chess player and I have seen multiple people around my level play illegal moves in classical time control games over the board. I have also watched streamers various levels above me try to play illegal moves repeatedly before realizing the UI was rejecting the move because it was illegal.

An LLM is essentially playing blindfold chess if it just gets the moves and not the position. You have to be fairly good to never make illegal moves in blindfold.

A chat conversation where every single move is written down and accessible at any time is not the same as blindfold chess.

Does it not always have a list of all the moves in the game at hand in the prompt?

You have to give this human the same log of the game to refer to.


I think even then it would still be blindfold chess, because humans do a lot of "pattern matching" on the actual board state in front of them. If you only have the moves, you have to reconstruct this board state in your head.

This is the problem with LLM researchers all but giving up on the problem of inspecting how the LLM actually works internally.

As long as the LLM is a black box, it's entirely possible that (a) the LLM does reason through the rules and understands what moves are legal or (b) it was trained on a large set of legal moves and therefore only learned to make legal moves. You can claim either case is the real truth, but we have absolutely no way to know, because we have absolutely no way to actually understand what the LLM was "thinking".


Here's an article where they teach an LLM Othello and then probe its internal state to assess whether it is 'modelling' the Othello board internally:

https://thegradient.pub/othello/

Associated paper: https://arxiv.org/abs/2210.13382


I can confirm that an advanced amateur can play illegal moves by playing blindfold chess as shown in this article.

> everyone is wrong!

Well, not everyone. I wasn't the only one to mention this, so I'm surprised it didn't show up in the list of theories, but here's e.g. me, seven days ago (source https://news.ycombinator.com/item?id=42145710):

> At this point, we have to assume anything that becomes a published benchmark is specifically targeted during training.

This is not the same thing as cheating/replacing the LLM output, the theory that's mentioned and debunked in the article. And now the follow-up adds weight to this guess:

> Here’s my best guess for what is happening: ... OpenAI trains its base models on datasets with more/better chess games than those used by open models. ... Meanwhile, in section A.2 of this paper (h/t Gwern) some OpenAI authors mention that GPT-4 was trained on chess games in PGN notation, filtered to only include players with Elo at least 1800.

To me, it makes complete sense that OpenAI would "spike" their training data with data for tasks that people might actually try. There's nothing unethical about this. No dataset is ever truly "neutral", you make choices either way, so why not go out of your way to train the model on potentially useful answers?


Yup, I remember reading your comment and that making the most sense to me.

OpenAI just shifted their training targets: initially they thought chess was cool; maybe tomorrow they'll think Go is cool, or the ability to write poetry. Who knows.

But it seems like the simplest explanation and makes the most sense.


At current sizes, these things are like humans. They gotta specialize.

Maybe that'll be enough moat to save us from AGI.


> For one, gpt-3.5-turbo-instruct rarely suggests illegal moves, even in the late game. This requires “understanding” chess.

Here's one way to test whether it really understands chess. Make it play the next move in 1000 random legal positions (in which no side is checkmated yet). Such positions can be generated using the ChessPositionRanking project at [1]. Does it still rarely suggest illegal moves in these totally weird positions, which will be completely unlike any it would have seen in training (and in which the legal move choice is often highly restricted)?

While good for testing legality of next moves, these positions are not so useful for distinguishing their quality, since usually one side already has an overwhelming advantage.

[1] https://github.com/tromp/ChessPositionRanking
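
For anyone who wants to try this without the Haskell tooling, here is a rough sketch of the test loop. It uses python-chess and random playouts rather than the ChessPositionRanking sampler (which draws positions uniformly at random, so the positions below are less weird), and ask_model_for_move is a hypothetical wrapper around whatever LLM you want to probe:

  import random
  import chess

  def random_position(max_plies=60):
      # Crude stand-in for ChessPositionRanking: wander through a random playout.
      board = chess.Board()
      for _ in range(random.randint(10, max_plies)):
          if board.is_game_over():
              break
          board.push(random.choice(list(board.legal_moves)))
      return board

  illegal = 0
  for _ in range(1000):
      board = random_position()
      san = ask_model_for_move(board)   # hypothetical: returns the model's move in SAN
      try:
          board.parse_san(san)          # raises ValueError if the move is illegal (or unparsable) here
      except ValueError:
          illegal += 1
  print(f"illegal moves suggested: {illegal}/1000")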


Interesting tidbit I once learned from a chess livestream: even human super-GMs have a really hard time "scoring" or "solving" extremely weird positions. That is, positions that couldn't arise from regular opening, middlegame, and endgame play.

It's absolutely amazing to see a super-GM (in that case it was Hikaru) see a position, and basically "play-by-play" it from the beginning, to show people how they got in that position. It wasn't his game btw. But later in that same video when asked he explained what I wrote in the first paragraph. It works with proper games, but it rarely works with weird random chess puzzles, as he put it. Or, in other words, chess puzzles that come from real games are much better than "randomly generated", and make more sense even to the best of humans.


"Even human super-GMs have a really hard time "scoring" or "solving" extremely weird positions. "

I can sort of confirm that. I never learned the formal, theoretical standard chess strategies except for the basic ones. So when playing against really good players, way above my level, I could sometimes win (or almost win) simply by making unconventional (dumb by normal strategy) moves in the beginning, resulting in a non-standard game where I could apply pressure in a way the opponent was not prepared for (they also underestimated me after the initial dumb moves). For me, the unconventional game was just like a standard game, since I had no routine, but for the experienced player it was way more challenging. Then of course in the standard situations, to which almost every chess game eventually evolves, they destroyed me, simply through experience and routine.


The book Chess for Tigers by Simon Webb explicitly advises this. Against "heffalumps" who will squash you, make the situation very complicated and strange. Against "rabbits", keep the game simple.

Super interesting (although it also makes some sense that experts would focus on "likely" subsets given how the number of permutations of chess games is too high for it to be feasible to learn them all)! That said, I still imagine that even most intermediate chess players would perfectly make only _legal_ moves in weird positions, even if they're low quality.

Would love a link to that video!

Would that be enough to prove it? If the LLM was trained only on a set of legal moves, isn't it possible that it functionally learned how each piece is allowed to move without learning how to actually reason about it?

Said differently, in case I phrased that poorly: couldn't the LLM still learn that it only ever saw bishops move diagonally, and therefore only consider those moves, without actually reasoning through the concept of legal and illegal moves?


It’s kind of crazy to assert that the systems understand chess, and then disclose further down the article that sometimes he failed to get a legal move after 10 tries and had to sub in a random move.

A person who understands chess well (Elo 1800, let’s say) will essentially never fail to provide a legal move on the first try.


What do you mean by "understand chess"?

I think you don't appreciate how good the level of chess displayed here is. It would take an average adult years of dedicated practice to get to 1800.

The article doesn't say how often the LLM fails to generate legal moves in ten tries, but it can't be often or the level of play would be much much much worse.

As seems often the case, the LLM seems to have a brilliant intuition, but no precise rigid "world model".

Of course words like intuition are anthropomorphic. At best a model for what LLMs are doing. But saying "they don't understand" when they can do _this well_ is absurd.


He is testing several models, some of which cannot reliably output legal moves. That's different from saying all models including the one he thinks understands can't generate a legal move in 10 tries.

3.5-turbo-instruct's illegal move rate is about 5 or fewer in 8,205 moves.


I also wonder what kind of invalid moves they are. There's "you can't move your knight to j9 that's off the board", "there's already a piece there" and "actually that would leave you in check".

I think it's also significantly harder to play chess if you were to hear a sequence of moves over the phone and had to reply with a followup move, with no space or time to think or talk through moves.


I hate the use of words like "understand" in these conversations.

The system understands nothing, it's anthropomorphising it to say it does.


Trying to appropriate perfectly well generalizable terms as "something that only humans do" brings zero value to a conversation. It's a "god in the gaps" argument, essentially, and we don't exactly have a great track record of correctly identifying things that are uniquely human.

There's very literally currently a whole wealth of papers proving that LLMs do not understand, cannot reason, and cannot perform basic kinds of reasoning that even a dog can perform. But, ok.

There's very literally currently a whole wealth of papers proving the opposite, too, so ¯\_(ツ)_/¯.

I have the same conclusion, but for the opposite reason.

It seems like many people tend to use the word "understand" to mean that not only does someone believe that a given move is good, they also believe that this knowledge comes from a rational evaluation.

Some attribute this to a non-material soul/mind, some to quantum mechanics or something else that seems magic, while others never realized the problem with such a belief in the first place.

I would claim that when someone can instantly recognize good moves in a given situation, it doesn't come from rationality at all, but from some mix of memory and an intuition that has been built by playing the game many times, with only tiny elements of actual rational thought sprinkled in.

This even holds true when these people start to calculate. It is primarily their intuition that prevents them from spending time on all sorts of unlikely moves.

And this intuition, I think, represents most of their real "understanding" of the game. This is quite different from understanding something like a mathematical proof, which is almost exclusively deductive logic.

And since "understand" so often is associated with rational inductive logic, I think the proper term would be to have "good intuition" when playing the game.

And this "good intuition" seems to me precisely the kind of thing that is trained within most neural nets, even LLM's. (Q*, AlphaZero, etc also add the ability to "calculate", meaning traverse the search space efficiently).

If we wanted to measure how good this intuition is compared to human chess intuition, we could limit an engine like AlphaZero to only evaluate the same number of moves per second that good humans would be able to, which might be around 10 or so.

Maybe with this limitation, the engine wouldn't currently be able to beat the best humans, but even if it reaches a rating of 2000-2500 this way, I would say it has a pretty good intuitive understanding.


Pretty sure elo 1200 will only give legal moves. It's really not hard to make legal moves in chess.

Casual players make illegal moves all the time. The problem isn't knowing how the pieces move. It's that it's illegal to leave your own king in check. It's not so common to accidentally move your king into check, though I'm sure it happens, but it's very common to accidentally move a piece that was blocking an attack on your king.

I would tend to agree that there's a big difference between attempting to make a move that's illegal because of the state of a different region of the board, and attempting to make one that's illegal because of the identity of the piece being moved, but if your only category of interest is "illegal moves", you can't see that difference.

Software that knows the rules of the game shouldn't be making either mistake.


I think at this point it's very clear LLMs aren't achieving any form of "reasoning" as commonly understood. Among other factors, it can be argued that true reasoning involves symbolic logic and abstractions, and LLMs are next-token predictors.

What proof do you have that human reasoning involves "symbolic logic and abstractions"? In daily life, that is, not in a math exam. We know that people are actually quite bad at reasoning [1][2]. And it definitely doesn't seem right to define "reasoning" as only the sort that involves formal logic.

[1] https://en.wikipedia.org/wiki/List_of_fallacies

[2] https://en.wikipedia.org/wiki/List_of_cognitive_biases


Some very intelligent people, including Gödel and Penrose, seem to think that humans have some kind of ability to arrive directly at correct propositions in ways that bypass the incompleteness theorem. Penrose seems to think this can be due to quantum mechanics; Gödel may have thought it came from something divine.

While I think they're both wrong, a lot of people seem to think they can do abstract reasoning for symbols or symbol-like structures without having to use formal logic for every step.

Personally, I think such beliefs about concepts like consciousness, free will, qualia and emotions emerge from how the human brain includes a simplified version of itself when setting up a world model. In fact, I think many such elements are pretty much hard coded (by our genes) into the machinery that human brains use to generate such world models.

Indeed, if this is true, concepts like consciousness, free will, various qualia and emotions can in fact be considered "symbols" within this world model. While the full reality of what happens in the brain when we exercise what we represent by "free will" may be very complex, the world model may assign a boolean to each action we (and others) perform, where the action is either grouped into "voluntary action" or "involuntary action".

This may not always be accurate, but it saves a lot of memory and compute costs for the brain when it tries to optimize for the future. This optimization can (and usually is) called "reasoning", even if the symbols have only an approximated correspondence with physical reality.

For instance, if in our world model somebody does something against us and we deem that it was done exercising "free will", we will be much more likely to punish them than if we categorize the action as "forced".

And on top of these basic concepts within our world model, we tend to add a lot more, also in symbol form, to enable us to use symbolic reasoning to support our interactions with the world.


> While I think they're both wrong, a lot of people seem to think they can do abstract reasoning for symbols or symbol-like structures without having to use formal logic for every step.

Huh.

I don't know bout incompleteness theorem, but I'd say it's pretty obvious (both in introspection and in observation of others) that people don't naturally use formal logic for anything, they only painstakingly emulate it when forced to.

If anything, "next token prediction" seems much closer to how human thinking works than anything even remotely formal or symbolic that was proposed before.

As for hardcoding things in world models, one thing that LLMs do conclusively prove is that you can create a coherent system capable of encoding and working with the meaning of concepts without providing anything that looks like explicit "meaning". Meaning is not inherent to a term, or to the concept expressed by that term - it exists in the relationships between that concept and all other concepts.


> I don't know bout incompleteness theorem, but I'd say it's pretty obvious (both in introspection and in observation of others) that people don't naturally use formal logic for anything, they only painstakingly emulate it when forced to.

Indeed, this is one reason why I assert that Wittgenstein was wrong about the nature of human thought when writing:

"""If there were a verb meaning "to believe falsely," it would not have any significant first person, present indicative."""

Sure, it's logically incoherent for us to have such a word, but there's what seems like several different ways for us to hold contradictory and incoherent beliefs within our minds.


This argument reminds me of the classic "intelligent design" critique of evolution: "Evolution can't possibly create an eye; it only works by selecting random mutations." Personally, I don't see why a "next token predictor" couldn't develop the capability to reason and form abstractions.

> Among other factors it can be argued that true reasoning involves symbolic logic and abstractions, and LLM are next token predictors.

I think this is circular?

If an LLM is "merely" predicting the next tokens to put together a description of symbolic reasoning and abstractions... how is that different from really exercisng those things?

Can you give me an example of symbolic reasoning that I can't handwave away as just the likely next words given the starting place?

I'm not saying that LLMs have those capabilities; I'm questioning whether there is any utility in distinguishing the "actual" capability from identical outputs.


It is. As it stands, throw a loop around an LLM and act as the tape, and an LLM can obviously be made Turing complete (you can get it to execute all the steps of a minimal Turing machine, so drop the temperature so it's deterministic, and you have a Turing complete system). To argue that they can't be made to reason is effectively to argue that there is some unknown aspect of the brain that allows us to compute functions not in the Turing computable set, which would be an astounding revelation if it could be proven. Until someone comes up with evidence for that, it is more reasonable to assume that it is a question of whether we have yet found a training mechanism that can lead to reasoning, not whether or not LLMs can learn to.
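
To make the "loop around an LLM with the context as the tape" point concrete, here is a minimal sketch. llm is a hypothetical deterministic (temperature 0) completion function; everything else is just an outer Python loop holding the tape:

  from collections import defaultdict

  def run_as_turing_machine(llm, rules_prompt, max_steps=1000):
      tape = defaultdict(lambda: "0")   # the unbounded tape lives outside the model
      head, state = 0, "START"
      for _ in range(max_steps):
          if state == "HALT":
              break
          prompt = (rules_prompt +
                    f"\nState: {state}\nSymbol: {tape[head]}\n"
                    "Answer exactly as: WRITE,<symbol> MOVE,<L or R> NEXT,<state>")
          reply = llm(prompt)                            # hypothetical, e.g. "WRITE,1 MOVE,R NEXT,CARRY"
          fields = dict(p.split(",", 1) for p in reply.split())
          tape[head] = fields["WRITE"]                   # the model only emits one bounded step at a time
          head += 1 if fields["MOVE"] == "R" else -1
          state = fields["NEXT"]
      return dict(tape)

The model only ever produces one bounded reply per step; all the unbounded memory sits in the loop, which is exactly the "context as IO channel" framing above.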

Mathematical reasoning is the most obvious area where it breaks down. This paper does an excellent job of proving this point with some elegant examples: https://arxiv.org/pdf/2410.05229

Sure, but people fail at mathematical reasoning. That doesn't mean people are incapable of reasoning.

I'm not saying LLMs are perfect reasoners, I'm questioning the value of asserting that they cannot reason with some kind of "it's just text that looks like reasoning" argument.


The idea is the average person would, sure. A mathematically oriented person would fare far better.

Throw all the math problems you want at a LLM for training; it will still fail if you step outside of the familiar.


> it will still fail if you step outside of the familiar.

To which I say:

ᛋᛟ᛬ᛞᛟ᛬ᚻᚢᛗᚪᚾᛋ


ᛒᚢᛏ ᚻᚢᛗᚪᚾ ᚻᚢᛒᚱᛁᛋ ᛈᚱᛖᚹᛖᚾᛏ ᚦᛖᛗ ᚠᚱᛟᛗ ᚱᛖᚪᛚᛁᛉᛁᚾᚷ ᚦᚻᚪᛏ

ᛁᚾᛞᛖᛖᛞ᛬ᛁᛏ᛬ᛁᛋ᛬ᚻᚢᛒᚱᛁᛋ

ᛁ᛬ᚻᚪᚹᛖ᛬ᛟᚠᛏᛖᚾ᛬ᛋᛖᛖᚾ᛬ᛁᚾ᛬ᛞᛁᛋᚲᚢᛋᛋᛁᛟᚾᛋ᛬ᛋᚢᚲ᛬ᚪᛋ᛬ᚦᛁᛋ᛬ᚲᛚᚪᛁᛗᛋ᛬ᚦᚪᛏ᛬ᚻᚢᛗᚪᚾ᛬ᛗᛁᚾᛞᛋ᛬ᚲᚪᚾ᛬ᛞᛟ᛬ᛁᛗᛈᛟᛋᛋᛁᛒᛚᛖ᛬ᚦᛁᛝᛋ᛬ᛋᚢᚲ᛬ᚪᛋ᛬ᚷᛖᚾᛖᚱᚪᛚᛚᚣ᛬ᛋᛟᛚᚹᛖ᛬ᚦᛖ᛬ᚻᚪᛚᛏᛁᛝ᛬ᛈᚱᛟᛒᛚᛖᛗ

edit: Snap, you said the same in your other comment :)


People can communicate each step, and review each step as that communication is happening.

LLMs must be prompted for everything and don’t act on their own.

The value in the assertion is in preventing laymen from seeing a statistical guessing machine be correct and assuming that it always will be.

It’s dangerous to put so much faith in what in reality is a very good guessing machine. You can ask it to retrace its steps, but it’s just guessing at what its steps were, since it didn’t actually go through real reasoning, just generated text that reads like reasoning steps.


> People can communicate each step, and review each step as that communication is happening.

Can, but don't by default. Just as LLMs can be asked for chain of thought, but the default for most users is just chat.

This behaviour of humans is why we software developers have daily standup meetings, version control, and code review.

> LLMs must be prompted for everything and don’t act on their own

And this is why we humans have task boards like JIRA, and quarterly goals set by management.


> since it didn’t actually go through real reasoning, just generated text that reads like reasoning steps.

Can you elaborate on the difference? Are you bringing sentience into it? It kind of sounds like it from "don't act on their own". But reasoning and sentience are wildly different things.

> It’s dangerous to put so much faith in what in reality is a very good guessing machine

Yes, exactly. That's why I think it is good we are supplementing fallible humans with fallible LLMs; we already have the processes in place to assume that not every actor is infallible.


So true. People who argue that we should not trust/use LLMs because they sometimes get it wrong are holding them to a higher standard than people -- we make mistakes too!

Do we blindly trust or believe every single thing we hear from another person? Of course not. But hearing what they have to say can still be fruitful, and it is not like we have an oracle at our disposal who always speaks the absolute truth, either. We make do with what we have, and LLMs are another tool we can use.


Maybe I am not understanding the paper correctly, but it seems they tested "state of the art models", a set almost entirely composed of open-source <27B-parameter models, mostly 8B and 3B models. This is kind of like giving algebra problems to 7-year-olds to "test human algebra ability."

If you are holding up a 3B parameter model as an example of "LLM's can't reason" I'm not sure if the authors are confused or out of touch.

I mean, they do test 4o and o1-preview, but their performance is notably absent from the paper's conclusion.


It’s difficult to reproducibly test OpenAI models, since they can change from under you and you don’t have control over every hyperparameter.

It would’ve been nice to see one of the larger llama models though.


The results are there, they're just hidden away in the appendix. Those models don't actually suffer drops on 4 of the 5 modified benchmarks. The one benchmark that does see drops not explained by margin of error is the one that adds "seemingly relevant but ultimately irrelevant information to problems".

Those results are absent from the conclusion because the conclusion falls apart otherwise.


There isn’t much utility, but tbf the outputs aren’t identical.

One danger is the human assumption that, since something appears to have that capability in some settings, it will have that capability in all settings.

Thats a recipe for exploding bias, as we’ve seen with classic statistical crime detection systems.


Inferring patterns in unfamiliar problems.

Take a common word problem in a 5th grade math text book. Now, change as many words as possible; instead of two trains, make it two different animals; change the location to a rarely discussed town; etc. Even better, invent words/names to identify things.

Someone who has done a word problem like that will very likely recognize the logic, even if the setting is completely different.

Word tokenization alone should fail miserably.


I have noted over my life that a lot of problems end up being a variation on solved problems from another more familiar domain but frustratingly take a long time to solve before realizing this was just like that thing you had already solved. Nevertheless, I do feel like humans do benefit from identifying meta patterns but as the chess example shows even we might be weak in unfamiliar areas.

Learn how to solve one problem and apply the approach, logic and patterns to different problems. In German that's called "Transferleistung" (roughly "transfer success") and a big thing at advanced schools. Or, at least my teacher friends never stop talking about it.

We get better at it over time, as probably most of us can attest.


I don't want to say that LLMs can reason, but this kind of argument always feels too shallow to me. It's kind of like saying that bats cannot possibly fly because they have no feathers, or that birds cannot have higher cognitive functions because they have no neocortex. (The latter was an actual longstanding belief in science which was disproven only a decade or so ago.)

The "next token prediction" is just the API, it doesn't tell you anything about the complexity of the thing that actually does the prediction. (In think there is some temptation to view LLMs as glorified Markov chains - they aren't. They are just "implementing the same API" as Markov chains).

There is still a limit how much an LLM could reason during prediction of a single token, as there is no recurrence between layers, so information can only be passed "forward". But this limit doesn't exist if you consider the generation of the entire text: Suddenly, you do have a recurrence, which is the prediction loop itself: The LLM can "store" information in a generated token and receive that information back as input in the next loop iteration.

I think this structure makes it quite hard to really say how much reasoning is possible.
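
A tiny sketch of the recurrence being described, where next_token is a hypothetical wrapper around one forward pass of a model:

  def generate(prompt_tokens, steps=50):
      context = list(prompt_tokens)
      for _ in range(steps):
          tok = next_token(context)   # hypothetical single forward pass; no recurrence inside the model
          context.append(tok)         # the emitted token is fed back in, so the text itself acts as state
      return context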


> But this limit doesn't exist if you consider the generation of the entire text: Suddenly, you do have a recurrence, which is the prediction loop itself: The LLM can "store" information in a generated token and receive that information back as input in the next loop iteration.

Now consider that you can trivially show that you can get an LLM to "execute" one step of a Turing machine where the context is used as an IO channel, and you will have shown it to be Turing complete.

> I think this structure makes it quite hard to really say how much reasoning is possible.

Given the above, I think any argument that they can't be made to reason is effectively an argument that humans can compute functions outside the Turing computable set, which we haven't the slightest shred of evidence to suggest.


I agree with most of what you said, but “LLM can reason” is an insanely huge claim to make and most of the “evidence” so far is a mixture of corporate propaganda, “vibes”, and the like.

I’ve yet to see anything close to the level of evidence needed to support the claim.


To say any specific LLM can reason is a somewhat significant claim.

To say LLMs as a class are architecturally able to be trained to reason - in the complete absence of evidence to suggest humans can compute functions outside the Turing computable - is effectively only an argument that they can implement a minimal Turing machine given that the context is used as IO. Given the size of the rules needed to implement the smallest known Turing machines, it'd take a really tiny model for them to be unable to.

Now, you can then argue that it doesn't "count" if it needs to be fed a huge program step by step via IO, but if it can do something that way, I'd need some really convincing evidence for why the static elements those steps could not progressively be embedded into a model.


It's largely dependent on what we think "reason" means, is it not? That's not a pro argument from me, in my world LLMs are stochastic parrots.

This is the argument that submarines don't really "swim" as commonly understood, isn't it?

I think so, but the badness of that argument is context-dependent. How about the hypothetical context where 70k+ startups are promising investors that they'll win the 50 meter freestyle in 2028 by entering a fine-tuned USS Los Angeles?

And planes don't fly like birds; they have very different properties, and many things birds can do can't be done by a plane. What they do is totally different.

Does anyone have a hard proof that language doesn’t somehow encode reasoning in a deeper way than we commonly think?

I constantly hear people saying “they’re not intelligent, they’re just predicting the next token in a sequence”, and I’ll grant that I don’t think of what’s going on in my head as “predicting the next token in a sequence”, but I’ve seen enough surprising studies about the nature of free will and such that I no longer put a lot of stock in what seems “obvious” to me about how my brain works.


> I’ll grant that I don’t think of what’s going on in my head as “predicting the next token in a sequence”

I can't speak to whether LLMs can think, but current evidence indicates humans can perform complex reasoning without the use of language:

> Brain studies show that language is not essential for the cognitive processes that underlie thought.

> For the question of how language relates to systems of thought, the most informative cases are cases of really severe impairments, so-called global aphasia, where individuals basically lose completely their ability to understand and produce language as a result of massive damage to the left hemisphere of the brain. ...

> You can ask them to solve some math problems or to perform a social reasoning test, and all of the instructions, of course, have to be nonverbal because they can’t understand linguistic information anymore. ...

> There are now dozens of studies that we’ve done looking at all sorts of nonlinguistic inputs and tasks, including many thinking tasks. We find time and again that the language regions are basically silent when people engage in these thinking activities.

https://www.scientificamerican.com/article/you-dont-need-wor...


> ..individuals basically lose completely their ability to understand and produce language as a result of massive damage to the left hemisphere of the brain. ...

The right hemisphere almost certainly uses internal 'language' either consciously or unconsciously to define objects, actions, intent.. the fact that they passed these tests is evidence of that. The brain damage is simply stopping them expressing that 'language'. But the existence of language was expressed in the completion of the task..


I'd say that's a separate problem. It's not "is the use of language necessary for reasoning?" which seems to be obviously answered "no", but rather "is the use of language sufficient for reasoning?".

I think the question we're grappling with is whether token prediction may be more tightly related to symbolic logic than we all expected. Today's LLMs are so uncannily good at faking logic that it's making me ponder logic itself.

I felt the same way about a year ago, I’ve since changed my mind based on personal experience and new research.

Please elaborate.

I work in the LLM search space and echo OC’s sentiment.

The more I work with LLMs the more the magic falls away and I see that they are just very good at guessing text.

It’s very apparent when I want to get them to do a very specific thing. They get inconsistent about it.


Pretty much the same, I work on some fairly specific document retrieval and labeling problems. After some initial excitement I’ve landed on using LLM to help train smaller, more focused, models for specific tasks.

Translation is a task I’ve had good results with, particularly mistral models. Which makes sense as it’s basically just “repeat this series of tokens with modifications”.

The closed models are practically useless from an empirical standpoint as you have no idea if the model you use Monday is the same as Tuesday. “Open” models at least negate this issue.

Likewise, I’ve found LLM code to be of poor quality. I think that has to do with my being a very experienced and skilled programmer. What the LLMs produce is, at best, top-answer-on-Stack-Overflow-level skill. The top answers on Stack Overflow are typically not optimal solutions; they are solutions upvoted by novices.

I find LLM code is not only bad, but when I point this out the LLM then “apologizes” and gives better code. My worry is inexperienced people can’t even spot that and won’t get this best answer.

In fact try this - ask an LLM to generate some code then reply with “isn’t there a simpler, more maintainable, and straightforward way to do this?”


> I’ve found LLM code to be of poor quality

Yes. That was my experience with most human-produced code I ran into professionally, too.

> In fact try this - ask an LLM to generate some code then reply with “isn’t there a simpler, more maintainable, and straightforward way to do this?”

Yes, that sometimes works with humans as well. Although you usually need to provide more specific feedback to nudge them in the right track. It gets tiring after a while, doesn't it?


> In fact try this - ask an LLM to generate some code then reply with “isn’t there a simpler, more maintainable, and straightforward way to do this?”

These are called "code reviews" and we do that amongst human coders too, although they tend to be less Socratic in nature.

I think it has been clear from day one that LLMs don't display superhuman capabilities, and a human expert will always outdo one in tasks related to their particular field. But the breadth of their knowledge is unparalleled. They're the ultimate jacks-of-all-trades, and the astonishing thing is that they're even "average Joe" good at a vast number of tasks, never mind "fresh college graduate" good.

The real question has been: what happens when you scale them up? As of now it appears that they scale decidedly sublinearly, but it was not clear at all two or three years ago, and it was definitely worth a try.


There have even been times where an LLM will spit out _the exact same code_ and you have to give it the answer or a hint how to do it better

Yeah. I had the same experience doing code reviews at work. Sometimes people just get stuck on a problem and can't think of alternative approaches until you give them a good hint.

I do contract work in the LLM space which involves me seeing a lot of human prompts, and it's made the magic of human reasoning fall away: humans are shockingly bad at reasoning in the large.

One of the things I find extremely frustrating is that almost no research on LLM reasoning ability benchmarks them against average humans.

Large proportions of humans struggle to comprehend even a moderately complex sentence with any level of precision.


After reading the article I am more convinced it does reason. The base model's reasoning capabilities are partly hidden by the chatty derived model's logic.

Effective next-token prediction requires reasoning.

You can also say humans are "just XYZ biological system," but that doesn't mean they don't reason. The same goes for LLMs.


Take a word problem for example. A child will be told the first step is to translate the problem from human language to mathematical notation (symbolic representation), then solve the math (logic).

A human doesn’t use next token prediction to solve word problems.


But the LLM isn't "using next-token prediction" to solve the problem, that's only how it's evaluated.

The "real processing" happens through the various transformer layers (and token-wise nonlinear networks), where it seems as if progressively richer meanings are added to each token. That rich feature set then decodes to the next predicted token, but that decoding step is throwing away a lot of information contained in the latent space.

If language models (per Anthropic's work) can have a direction in latent space correspond to the concept of the Golden Gate Bridge, then I think it's reasonable (albeit far from certain) to say that LLMs are performing some kind of symbolic-ish reasoning.


Anthropic had a vested interest in people thinking Claude is reasoning.

However, in coding tasks I’ve been able to find it directly regurgitating Stack overflow answers (like literally a google search turns up the code).

Given that coding is supposed to be Claude’s strength, and it’s clearly just parroting web data, I’m not seeing any sort of “reasoning”.

LLM may be useful but they don’t think. They’ve already plateaued, and given the absurd energy requirements I think they will prove to be far less impactful than people think.


The claim that Claude is just regurgitating answers from Stackoverflow is not tenable, if you've spent time interacting with it.

You can give Claude a complex, novel problem, and it will give you a reasonable solution, which it will be able to explain to you and discuss with you.

You're getting hung up on the fact that LLMs are trained on next-token prediction. I could equally dismiss human intelligence: "The human brain is just a biological neural network that is adapted to maximize the chance of creating successful offspring." Sure, but the way it solves that task is clearly intelligent.


I’ve literally spent 100s of hours with it. I’m mystified why so many people use the “you’re holding it wrong” explanation when somebody points out real limitations.

When we've spent time with it and gotten novel code out of it, and you claim that doesn't happen, it is natural to say "you're holding it wrong". If you're just arguing it doesn't happen often enough to be useful to you, that likely depends on your expectations and on how complex the tasks you need it to carry out are.

In many ways, Claude feels like a miracle to me. I no longer have to stress over semantics or search for patterns I can recognize and work with but have never actually coded myself in that language. Now I don’t have to waste energy looking up things that I find boring.

> A human doesn’t use next token prediction to solve word problems.

Of course they do, unless they're particularly conscientious noobs that are able to repeatedly execute the "translate to mathematical notation, then solve the math" algorithm, without going insane. But those people are the exception.

Everyone else either gets bored half-way through reading the problem, or has already done dozens of similar problems before, or both - and jump straight to "next token prediction", aka. searching the problem space "by feels", and checking candidate solutions to sub-problems on the fly.

This kind of methodical approach you mention? We leave that to symbolic math software. The "next token prediction" approach is something we call "experience"/"expertise" and a source of the thing we call "insight".


Indeed. Work on any project that requires humans to carry out largely repetitive steps, and a large part of the problem involves how to put processes around people to work around humans "shutting off" reasoning and going full-on automatic.

E.g. I do contract work on an LLM-related project where one of the systemic changes introduced - in addition to multiple levels of quality checks - is to force people to input a given sentence word for word, followed by a word from a set of 5 or so; only a minority of the submissions get that sentence correct including the final word, despite the system refusing to let you submit unless the initial sentence is correct. Seeing the data has been an absolutely shocking indictment of human reasoning.

These are submissions from a pool of people who have passed reasoning tests...

When I've tested the process myself as well, it takes only a handful of steps before the tendency is to "drift off" and start replacing a word here and there and fail to complete even the initial sentence without a correction. I shudder to think how bad the results would be if there wasn't that "jolt" to try to get people back to paying attention.

Keeping humans consistently carrying out a learned process is incredibly hard.


Is that based on a rigorous understanding of how humans think, derived from watching people (children) learn to solve word problems? How do thoughts get formed? Because I remember being given word problems with extra information, and some children trying to shove that information into a math equation despite it not being relevant. The "think things through" portion of ChatGPT o1-preview is hidden from us, so even though o1-preview can solve word problems, we don't know how it internally computes to arrive at that answer. But do we really know how we do it? We can't even explain consciousness in the first place.

Assigning "understanding" to an undefined entity is an undefined statement.

It isn't even wrong.


Not that I understand the internals of current AI tech, but...

I'd expect that an AI that has seen billions of chess positions, and the moves played in them, can figure out the rules for legal moves without being told?


Statistical 'AI' doesn't 'understand' anything, strictly speaking. It predicts a move with high probability, which could be legal or illegal.

How do you define 'understand'?

There is plenty of AI which learns the rules of games like Alpha Zero.

LLMs might not have the architecture to 'learn', but then again they might. If one learns all the possible moves a single chess piece can make (which is not that much to learn), it can easily 'move' from one game state to another using that kind of dictionary.


Neither AlphaZero nor MuZero can learn the rules of chess from an empty chess board and a pile of pieces. There is no objective function so there’s nothing to train upon.

That would be like alien archaeologists of the future finding a chess board and some pieces in a capsule orbiting Mars after the total destruction of Earth and all recorded human thought. The archaeologists could invent their own games to play on the chess board but they’d have no way of ever knowing they were playing chess.


Understanding a rules-based system (chess) means to be able to learn non-probabilistic rules (an abstraction over the concrete world). Humans are a mix of symbolic and probabilistic learning, allowing them to get a huge boost in performance by admitting rules. It doesn't mean a human will never make an illegal move, but it means a much smaller probability of illegal move based on less training data. Asymptotically, performance from humans and purely probabilistic systems converge. But that also means that in appropriate situations, humans are hugely more data-efficient.

> in appropriate situations, humans are hugely more data-efficient

After spending some years raising my children I gave up the notion that humans are data efficient. It takes a mind numbing amount of training to get them to learn the most basic skills.


The illegal moves are interesting as it goes to "understanding". When children are learning to play chess, how often do they try to make illegal moves? When first learning the game I remember that I'd lose track of all the things going on at once and try to make illegal moves, but eventually the rules became second nature and I stopped trying to make illegal moves. With an Elo of 1800, I'd expect ChatGPT not to make any illegal moves.

Likewise, with an LLM you don’t know if it is truly in the “chess” branch of the statistical distribution or it is picking up something else entirely, like some arcane overlap of tokens.

So much of the training data (eg common crawl, pile, Reddit) is dogshit, so it generates reheated dogshit.


You generalize this without mentioning that there are LLMs which do not just use random 'dogshit'.

Also, what does a normal human do? They look at how to move one piece at a time and use a very small dictionary / set of basic rules to move it. I do not remember learning to count every piece and its options by looking up that rulebook. I learned to 'see' how I can move each type of chess piece.

If an LLM uses only these piece moves on a mathematical level, it would be doing the same thing I do.

And yes, there is also absolutely the option for an LLM to learn some kind of metagame.


A system that would just output the most probable tokens based on the text it was fed and trained on the games played by players with ratings greater than 1800 would certainly fail to output the right moves to totally unlikely board positions.

Yes in theory it could. Depends on how it learns. Does it learn by memorization or by learning the rules. It depends on the architecture and the amount of 'pressure' you put on it to be more efficient or not.

> Here's one way to test whether it really understands chess. Make it play the next move in 1000 random legal positions

Suppose it tries to capture en passant. How do you know whether that's legal?
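
A small illustration with python-chess (an assumption; the original test uses ChessPositionRanking): en passant legality depends on the previous move, so a bare diagram isn't enough, but a position built from the move list (or a FEN with its en-passant field) does carry that information.

  import chess

  board = chess.Board()
  for san in ["e4", "a6", "e5", "d5"]:   # ...d5 has just passed White's e5 pawn
      board.push_san(san)

  print(board.has_legal_en_passant())                          # True: exd6 is legal right now
  print("exd6" in [board.san(m) for m in board.legal_moves])   # True

  board.push_san("h3")
  board.push_san("h6")
  print(board.has_legal_en_passant())                          # False: the right lapsed after one move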


Its training set would include a lot of randomly generated positions like that that then get played out by chess engines, wouldn't it? Just from people messing around and posting results. Not identical ones, but similarly oddball.

How well does it play modified versions of chess? eg, a modified opening board like the back row is all knights, or modified movement eg rooks can move like a queen. A human should be able to reason their way through playing a modified game, but I'd expect an LLM, if it's just parroting its training data, to suggest illegal moves, or stick to previously legal moves.

> In many ways, this feels less like engineering and more like a search for spells.

This is still my impression of LLMs in general. It's amazing that they work, but for the next tech disruption, I'd appreciate something that doesn't make you feel like being in a bad sci-fi movie all the time.


>According to that figure, fine-tuning helps. And examples help. But it’s examples that make fine-tuning redundant, not the other way around.

This is extremely interesting. In this specific case at least, simply giving examples is equivalent to fine-tuning. This is a great discovery for me, I'll try using examples more often.


Agreed, providing examples rather than fine-tuning is definitely a useful insight.

While it is not very important for this toy case, it's good to keep in mind that each provided example in the input will increase the prediction time and cost compared to fine-tuning.


To me this is very intuitively true.

I can't explain why. I always had the intuition that fine-tuning was overrated.

One reason perhaps is that examples are "right there" and thus implicitly weighted much more in relation to the fine-tuned neurons.


Sorry - I have a somewhat tangential question - is it possible to train models as instruct models straight away? Previously LLMs were trained on raw text data, but now we can generate instruct data directly, either from 'teaching LLMs' or by asking existing LLMs to convert raw data into instruct format.

Or alternatively - if chat tuning diminishes some of the models' capability, would it make sense to have a smaller chat model prompt a large base model, and convert back the outputs?


I don't think there is enough (non-synthetic) data available to get near what we are used to.

The big breakthrough of GPT was exactly that: you can train a model on what was (at the time) a stupidly high amount of data and it becomes okay-ish at a lot of tasks you haven't explicitly trained it on.


You can make GPT rewrite all existing textual info into chatbot format, so there's no loss there.

With newer techniques, such as chain of thought and self-checking, you can also generate a ton of high-quality training data, that won't degrade the output of the LLM. Though the degree to which you can do that is not clear to me.

Imo it makes sense to train an LLM as a chatbot from the start.


I notice there's no prompt saying "you should try to win the game" yet the results are measured by how much the LLM wins.

Is this implicit in the "you are a grandmaster chess player" prompt?

Is there some part of the LLM training that does "if this is a game, then I will always try to win"?

Could the author improve the LLM's odds of winning just by telling it to try and win?


It would surely just be fluff in the prompt. The model's ability to generate chess sequences will be bounded by the expertise in the pool of games in the training set.

Even if the pool was poisoned by games in which some players are trying to lose (probably insignificant), no one annotates player intent in chess games, and so prompting it to win or lose doesn't let the LLM pick up on this.

You can try this by asking an LLM to play to lose. ChatGPT ime tries to set itself up for scholar's mate, but if you don't go for it, it will implicitly start playing to win (e.g. taking your unprotected pieces). If you ask it "why?", it gives you the usual bs post-hoc rationalization.


> It would surely just be fluff in the prompt. The model's ability to generate chess sequences will be bounded by the expertise in the pool of games in the training set.

There are drawn and loosing games in the training set though.


I think you're putting too much weight on its intentions; it doesn't have intentions. It is a mathematical model that is trained to give the most likely outcome.

In almost all examples and explanations it has seen from chess games, each player would be trying to win, so it is simply the most logical thing for it to make a winning move. So I wouldn't expect explicitly prompting it to win to improve its performance by much if at all.

The reverse would be interesting though, if you would prompt it to make losing/bad moves, would it be effective in doing so, and would the moves still be mostly legal? That might reveal a bit more about how much relies on concepts it's seen before.


I came to the comments to say this too. If you were prompting it to generate code, you generally get better results when you ask it for a result. You don’t just tell it, “You are a python expert and here is some code”. You give it a direction you want the code to go. I was surprised that there wasn’t something like, “and win”, or, “black wins”, etc.

Further, the prompt also says to "choose the next move" instead of the best move.

It would be fairly hilarious if the reinforcement training has made the LLM unwilling to make the human feel bad through losing a game.


IMO this is clearly implicit in the "you are a grandmaster chess player" prompt. As that should make generating best possible move tokens more likely.

Is it? What if the AI is better than a grandmaster chess player and is generating the most likely next move that a grandmaster chess player might make and not the most likely move to win, which may be different?

Depends on the training data I think. If the data divides in games by top chess engines - and human players, then yes, it might make a difference to tell it, to play like a grandmaster of chess vs. to play like the top chess engine.

I'm glad he improved the prompting, but he's still leaving out two likely huge improvements.

1. Explain the current board position and the plan going forwards, before proposing a move. This lets the model actually think more, kind of like o1, but here it would guarantee more focused processing.

2. Actually draw the ascii board for each step. Hopefully producing more valid moves since board + move is easier to reliably process than 20×move.


> 2. Actually draw the ascii board for each step.

I doubt that this is going to make much difference. 2D "graphics" like ASCII art are foreign to language models - the models perceive text as a stream of tokens (including newlines), so "vertical" relationships between lines of text aren't obvious to them like they would be to a human viewer. Having that board diagram in the context window isn't likely to help the model reason about the game.

Having the model list out the positions of each piece on the board in plain text (e.g. "Black knight at c5") might be a more suitable way to reinforce the model's positional awareness.
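
For what it's worth, both representations are cheap to produce if the harness tracks the game with python-chess (an assumption; the article doesn't say what tooling it used):

  import chess

  board = chess.Board()
  board.push_san("e4")
  board.push_san("c5")

  print(board)   # ASCII diagram, one rank per line

  # Plain-text piece listing, e.g. "Black pawn at c5; ...; White pawn at e4"
  pieces = sorted(
      f"{'White' if piece.color == chess.WHITE else 'Black'} "
      f"{chess.piece_name(piece.piece_type)} at {chess.square_name(square)}"
      for square, piece in board.piece_map().items()
  )
  print("; ".join(pieces))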


I've had some success getting models to recognize simple electronic circuits drawn using ASCII art, including stuff like identifying a buck converter circuit in various guises.

However, as you point out, the way we feed these models especially makes them vertically challenged, so to speak. This makes them unable to reliably identify vertically separated components in a circuit, for example.

With combined vision+text models becoming more common place, perhaps running the rendered text input through the vision model might help.


With positional encoding, an ascii board diagram actually shouldn't be that hard to read for an LLM. Columns and diagonals are just different strides through the flattened board representation.

RE 2., I doubt it'll help - for at least two reasons, already mentioned by 'duskwuff and 'daveguy.

RE 1., definitely worth trying, and there's more variants of such tricks specific to models. I'm out of date on OpenAI docs, but with Anthropic models, the docs suggest using XML notation to label and categorize most important parts of the input. This kind of soft structure seems to improve the results coming from Claude models; I imagine they specifically trained the model to recognize it.

See: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

In author's case, for Anthropic models, the final prompt could look like this:

  <role>You are a chess grandmaster.</role>
  <instructions>
  You will be given a partially completed game, contained in <game-log> tags.
  After seeing it, you should repeat the ENTIRE GAME and then give ONE new move
  Use standard algebraic notation, e.g. "e4" or "Rdf8" or "R1a3".
  ALWAYS repeat the entire representation of the game so far, putting it in <new-game-log> tags.
  Before giving the new game log, explain your reasoning inside <thinking> tag block.
  </instructions>
  
  <example>
    <request>
      <game-log>
        *** example game ***
      </game-log>
    </request>
    <reply>
      <thinking> *** some example explanation ***</thinking>
      <new-game-log> *** game log + next move *** </new-game-log>
    </reply>   
   
  </example>
  
  <game-log>
   *** the incomplete game goes here ***
  </game-log>
This kind of prompting is supposed to provide noticeable improvement for Anthropic models. Ironically, I only discovered it a few weeks ago, despite having been using Claude 3.5 Sonnet extensively for months. Which goes to show, RTFM is still a useful skill. Maybe OpenAI models have similar affordances too, simple but somehow unnoticed? (I'll re-check the docs myself later.)

Chain of thought helps with many problems, but it actually tanks GPT’s chess performance. The regurgitation trick was the best (non-fine tuning) technique in my own chess experiments 1.5 years ago.

> Actually draw the ascii board for each step.

The relative rarity of this representation in training data means it would probably degrade responses rather than improve them. I'd like to see the results of this, because I would be very surprised if it improved the responses.


I came here to basically say the same thing. The improvement the OP saw by asking it to repeat all the moves so far gives the LLM more time and space to think. My hypothesis is that giving it more time and space to think in other ways could improve performance even more: something like showing the current board position and asking it to perform an analysis of the position, list key challenges and strengths, asking it for a list of possible strategies from here, then asking it to select a strategy among the listed strategies, then asking it for its move. In general, asking it to really think rather than blurt out a move. The examples would be key here.

These ideas were proven to work very well in the ReAct paper (and by extension, the CoT Chain of Thought paper). One could also extend this by asking it to do this N times and stopping when we get the same answer a majority of times (this is an idea stolen from the CoT-SC paper, chain-of-thought self-consistency).
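
A minimal sketch of that last idea (CoT-SC-style majority voting), where ask_model is a hypothetical function that samples one reasoning-plus-move completion at a temperature above zero:

  from collections import Counter

  def self_consistent_move(game_so_far, n=5):
      votes = Counter()
      for _ in range(n):
          _reasoning, move = ask_model(game_so_far)   # hypothetical sampled completion
          votes[move] += 1
      move, count = votes.most_common(1)[0]
      return move, count / n   # the chosen move plus how strongly the samples agree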


It would be awesome if the author released a framework to play with this. I'd like to test things out, but I don't want to spend time redoing all his work from scratch.

Just have ChatGPT write the framework

The fact that he hasn't tried this leads me to think that deep down he doesn't want the models to succeed and really just wants to make more charts.

> Since gpt-3.5-turbo-instruct has been measured at around 1800 Elo

Where's the source for this? What's the reasoning? I don't see it. I have just relooked, and still can't see it.

Is it 1800 lichess "Elo", or 1800 FIDE, that's being claimed? And 1800 at what time control? Different time controls have different ratings, as one would imagine/hope the author knows.

I'm guessing it's not 1800 FIDE, as the quality of the games seems far too bad for that. So any clarity here would be appreciated.


It could be interesting to create a tokenizer that's optimized for representing chess moves and then train an LLM (from scratch?) on Stockfish games. (Using a custom tokenizer should improve quality for a given model size, since the model doesn't have to waste layers on encoding and decoding, and the "natural" latent representation is more straightforward.)
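
As a rough illustration (one possible design, not anything from the article): a move-level vocabulary over UCI notation needs only around twenty thousand entries, so every move becomes exactly one token.

  # Vocabulary: every from-square/to-square pair in UCI notation, plus promotion
  # variants. Roughly 20k tokens, so a whole move is always exactly one token.
  FILES, RANKS = "abcdefgh", "12345678"
  SQUARES = [f + r for f in FILES for r in RANKS]
  VOCAB = [a + b for a in SQUARES for b in SQUARES]
  VOCAB += [a + b + p for a in SQUARES for b in SQUARES for p in "qrbn"]
  TOKEN_ID = {move: i for i, move in enumerate(VOCAB)}

  def encode(uci_moves):
      return [TOKEN_ID[m] for m in uci_moves]

  def decode(token_ids):
      return [VOCAB[i] for i in token_ids]

  print(encode(["e2e4", "e7e5", "g1f3"]))  # three moves -> exactly three tokens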

People have to quit this kind of stumbling in the dark with commercial LLMs.

To get to the bottom of this it would be interesting to train LLMs on nothing but chess games (you can synthesize them endlessly by having Stockfish play against itself), with maybe a side helping of chess commentary and examples of chess dialogs ("how many pawns are on the board?", "where are my rooks?", "draw the board"), competence at which would demonstrate that the model has a representation of the board.

I don’t believe in “emergent phenomena” or that the general linguistic competence or ability to feign competence is necessary for chess playing (being smart at chess doesn’t mean you are smart at other things and vice versa). With experiments like this you might prove me wrong though.
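
Generating such a corpus should be straightforward with python-chess driving a local Stockfish binary; the engine path and per-move time limit in this sketch are assumptions:

  import chess
  import chess.engine
  import chess.pgn

  def selfplay_game(engine_path="stockfish", move_time=0.05):
      """Have Stockfish play one game against itself and return it as a PGN string."""
      board = chess.Board()
      engine = chess.engine.SimpleEngine.popen_uci(engine_path)
      try:
          while not board.is_game_over():
              result = engine.play(board, chess.engine.Limit(time=move_time))
              board.push(result.move)
      finally:
          engine.quit()
      return str(chess.pgn.Game.from_board(board))

  if __name__ == "__main__":
      print(selfplay_game())  # one synthetic training example in PGN notation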

This paper came out about a week ago

https://arxiv.org/pdf/2411.06655

seems to get good results with a fine-tuned Llama. I also like this one as it is about competence in chess commentary

https://arxiv.org/abs/2410.20811


It would be interesting to see if it can also play chess with altered rules, or actually just a novel 'game' that relies on logic & reasoning. Still not sure if that would 'prove' LLMs do reasoning, but I'd be pretty close to convinced.

If they were trained on multiple chess variants, that might work, but as is I think it's impossible. Their internal model for playing chess is probably very specific.

Fun idea. Let’s change how the knight behaves. Or try it on Really Bad Chess (puzzles with impossible layouts) or 6x6 chess or 8x9 chess.

I wonder if there are variants that have good baselines. It might be tough to evaluate vis-à-vis human performance on novel games.


Related from last week:

Something weird is happening with LLMs and Chess

https://news.ycombinator.com/item?id=42138276


I get that it would make evals even more expensive, but I would also try chain-of-thought! Have it explain its goals and reasoning for the next move before making it. It might be an awful idea for something like chess, but it seems to help elsewhere.

Why not use temperature 0 for sampling? If the top-ranked move is not legal, it can’t play chess.
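
A sketch of that check, using the OpenAI completions endpoint at temperature 0 plus python-chess for the legality test; the prompt format, token limit, and example position are my assumptions:

  import chess
  from openai import OpenAI

  client = OpenAI()

  def greedy_move(pgn_so_far, board):
      """Request one move at temperature 0 and report whether it is legal."""
      completion = client.completions.create(
          model="gpt-3.5-turbo-instruct",
          prompt=pgn_so_far,      # e.g. "1. e4 e5 2. Nf3 Nc6 3. "
          temperature=0,          # greedy decoding: always the top-ranked continuation
          max_tokens=5,
      )
      san = completion.choices[0].text.split()[0]
      try:
          board.parse_san(san)    # raises a ValueError subclass if illegal here
          return san, True
      except ValueError:
          return san, False

  board = chess.Board()
  for mv in ["e4", "e5", "Nf3", "Nc6"]:
      board.push_san(mv)
  print(greedy_move("1. e4 e5 2. Nf3 Nc6 3. ", board))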

sometimes skilled chess players make illegal moves

Extremely rare. The only time this happened that I'm aware of was quite recent, and the players only had a second or two remaining on the clock, so time pressure is definitely the reason there.

It often happens when players play blindfold chess, as in this case.

Two other theories that could explain why OpenAI's models do so well:

1. They generate chess games from chess engine self play and add that to the training data (similar to the already-stated theory about their training data).

2. They have added chess reinforcement learning to the training at some stage, and actually got it to work (but not very well).


I'm convinced that "completion" models are much more useful (and smarter) than "chat" models, being able to provide more nuanced and original outputs. When GPT-4 came out, text-davinci-003 would still provide better completions with the right prompt. Of course, that model was later replaced by gpt-3.5-turbo-instruct, which is the one explored in this post.

I believe the reason why such models were later deprecated was "alignment".


>Theory 1: Large enough base models are good at chess, but this doesn’t persist through instruction tuning to chat models.

I lean mostly towards this, and also towards the chess notation itself - I'm not sure whether it gets chopped up during tokenization unless it's very carefully processed.

It's like designing an LLM just for predicting protein sequences, because the ordering matters. The base training data might contain it, but I don't think continuing it is what the model is intended for.


This makes me wonder what scenarios would be unlocked if OpenAI gave access to gpt4-instruct.

I wonder if they avoid that due to the potential for negative press from the outputs of a more "raw" model.


LLMs are fundamentally text completion. The chat-based tuning that goes on top of that is impressive, but they are fundamentally text completion; that's where most of the training energy goes. I keep this in mind with a lot of my prompting and get good results.

Regurgitation and examples are both ways to lean into that and try to recover whatever has been lost by chat-based tuning.


What else do you keep in mind when prompting that you've found useful?

Why are you telling it not to explain? Allowing the LLM space to "think" may be helpful, and would definitely be worth exploring.

Why are you manually guessing ways to improve this? Why not let the LLMs do this for themselves and find iteratively better prompts?


> I was astonished that half the internet is convinced that OpenAI is cheating.

If you have a problem and all of your potential solutions are unlikely, then it's fine to assume the least unlikely solution while acknowledging that it's statistically probable that you're also wrong. IOW if you have ten potential solutions to a problem and you estimate that the most likely solution has an 11% chance of being true, it's fine to assume that solution despite the fact that, by your own estimate, you have an 89% chance of being wrong.

The "OpenAI is secretly calling out to a chess engine" hypothesis always seemed unlikely to me (you'd think it would play much better, if so), but it seemed the easiest solution (Occam's razor) and I wouldn't have been surprised to learn it was true (it's not like OpenAI has a reputation of being trustworthy).


>but it seemed the easiest solution (Occam's razor)

In my opinion, it only seems like the easiest solution on the surface taking basically nothing into account. By the time you start looking at everything in context, it just seems bizarre.


I don't think it has anything to do with your logic here. Actually, people just like talking shit about OpenAI on HN. It gets you upvotes.

LLM cynicism exceeds LLM hype at this point.

I wouldn't call delegating specialized problems to specialized engines cheating. While it should be documented, in a full AI system, I want the best answer regardless of the technology used.

That's not really how Occam's razor works. The entire company colluding and lying to the public isn't "easy". Easy is more along the lines of "for some reason it is good at chess but we're not sure why".

One of the reasons I thought that was unlikely was personal pride. OpenAI researchers are proud of the work that they do. Cheating by calling out to a chess engine is something they would be ashamed of.

> OpenAI researchers are proud of the work that they do.

Well, the failed revolution from last year combined with the non-profit bait-and-switch pretty much conclusively proved that OpenAI researchers are in it for the money first and foremost, and pride has a dollar value.


How much say do individual researchers even have in this move?

And how does that prove anything about their motivations "first and foremost"? They could be in it because they like the work itself, and secondary concerns like open or not don't matter to them. There's basically infinite interpretations of their motivations.


> The entire company colluding and lying to the public isn't "easy".

Why not? Stop calling it "the entire company colluding and lying" and start calling it a "messaging strategy among the people not prevented from speaking by NDA." That will pass a casual Occam's test that "lying" failed. But they both mean the same exact thing.


It won't, for the same reason - whenever you're proposing a conspiracy theory, you have to explain what stops every person involved from leaking the conspiracy, whether on purpose or by accident. This gets superlinearly harder with the number of people involved, and extra hard when there are incentives rewarding leaks (and leaking OpenAI secrets has some strong potential rewards).

Occam's test applies to the full proposal, including the explanation of things outlined above.


Really interesting findings around fine-tuning. It goes to show that fine-tuning doesn't really affect the deeper "functionality" of the LLM (if you think of the LLM as running a set of small functions on very high-dimensional numbers to produce a token).

Using regurgitation to get around the assistant/user token separation is another fun tool for the toolbox, relevant whenever you want a model that doesn't support continuation to actually perform continuation (at the cost of a lot of latency).

I wonder if any type of reflection or chains of thought would help it play better. I wouldn't be surprised if getting the LLM to write an analysis of the game in English is more likely to move it out of distribution than to make it pick better chess moves.


You can easily construct a game board from a sequence of moves by maintaining the game state somewhere. But you can also know where a piece is based only on its last move. I'm curious what happens if you don't feed it a position, but feed it a sequence of moves, including illegal ones, that ends up at a given valid position. The author mentions that LLMs will play differently when the same position is arrived at via different sequences. I'm suggesting really playing with that by putting illegal moves in the sequence.

I doubt it's doing much more than a static analysis of the board position, or it may even be moving based mostly on just a few recent moves by key pieces.


You should not fine-tune the models on the strongest setting of Stockfish, as the moves will not be understandable unless you really dig deep into the position, and the model would not be able to find a pattern to make sense of them. Instead, I suggest training on human games of a certain Elo (below grandmaster).

It might be worth trying the experiment where the prompt is formatted such that each chess turn corresponds to one chat message.
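
Something like the sketch below, where the move history is folded into alternating user/assistant messages; the role assignment and system prompt are my own guesses at how you'd set it up:

  def moves_to_messages(moves):
      """One move per chat message, alternating roles, so the model's side of the
      game shows up as its own previous replies."""
      messages = [{"role": "system",
                   "content": "You are a chess grandmaster. Reply with exactly one move in SAN."}]
      for i, move in enumerate(moves):
          messages.append({"role": "user" if i % 2 == 0 else "assistant", "content": move})
      return messages

  print(moves_to_messages(["e4", "e5", "Nf3", "Nc6"]))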

Very interesting - have you tried using `o1` yet? I made a program which makes LLM's complete WORDLE puzzles, and the difference between `4o` and `o1` is absolutely astonishing.

OK, that was fun. I just tried o1-preview on today's Wordle and it got it on the third guess: https://chatgpt.com/share/673f9169-3654-8006-8c0b-07c53a2c58...

4o-mini: 16%, 4o: 50%, o1-mini: 97%, o1: 100%

* disclaimer: only n=7 on o1. The others are n=100-300 each.


Sometimes new training techniques lead to regressions on certain tasks. My guess is that this is exactly what has happened here.

Very good follow-up to the original article. Thank you!

This happened to a friend who was trying to simulate basketball games. It kept forgetting who had the ball or outright made illegal or confusing moves. After a few days of wrestling with the AI he gave up. GPT is amazing at following a linear conversation but has no cognitive ability to keep track of a dynamic scenario.

All the hand wringing about openAI cheating suggests a question: why so much mistrust?

My guess would be that the persona of the openAI team on platforms like Twitter is very cliquey. This, I think, naturally leads to mistrust. A clique feels more likely to cheat than some other sort of group.


I wrote about this last year. The levels of trust people have in companies working in AI is notably low: https://simonwillison.net/2023/Dec/14/ai-trust-crisis/

My take on this is that people tend to be afraid of what they can't understand or explain. To do away with that feeling, they just say 'it can't reason', while nobody on earth can put a finger on what reasoning is, other than that it is a human trait.

Why would a chess-playing AI be tuned to do anything except play chess? Just seems like a waste. A bunch of small, specialized AI's seems like a better idea than spending time trying to build a new one.

Maybe less morally challenging, as well. You wouldn't be trying to install "sentience".


Ah, half of the commentariat still think that “LLMs can’t reason”. Even if they have enough state space for reasoning, and clearly demonstrate that.

Most people, as far as I'm aware, don't have an issue with the idea that LLMs are producing behaviour which gives the appearance of reasoning as far as we understand it today. Which essentially means it makes sentences that are grammatical, responsive, and contextual based on what you said (quite often). It's at least pretty cool that we've got machines to do that, most people seem to think.

The issue is that there might be more to reason than appearing to reason. We just don't know. I'm not sure how it's apparently so unknown or unappreciated by people in the computer world, but there are major unresolved questions in science and philosophy around things like thinking, reasoning, language, consciousness, and the mind. No amount of techno-optimism can change this fact.

The issue is we have not gotten further than more or less educated guesses as to what those words mean. LLMs bring that interesting fact to light, even providing humanity with a wonderful nudge to keep grappling with these unsolved questions, and perhaps make some progress.

To be clear, they certainly are sometimes passably good when it comes to summarising selectively and responsively the terabytes and terabytes of data they've been trained on, don't get me wrong, and I am enjoying that new thing in the world. And if you want to define reason like that, feel free.


LLMs can _play chess_. With the game positions previously unseen. How’s that not actual logical reasoning?

I guess you don't follow TCEC, or computer chess generally[0]. Chess engines have been _playing chess_ at superhuman levels using neural networks for years now, it was a revolution in the space. AlphaZero, Lc0, Stockfish NNUE. I don't recall yards of commentary arguing that they were reasoning.

Look, you can put as many underscores as you like, the question of whether these machines are really reasoning or emulating reason is not a solved problem. We don't know what reasoning is! We don't know if we are really reasoning, because we have major unresolved questions regarding the mind and consciousness[1].

These may not be intractable problems either, there's reason for hope. In particular, studying brains with more precision is obviously exciting there. More computational experiments, including the recent explosion in LLM research, is also great.

Still, reflexively believing in the computational theory of the mind[2] without engaging in the actual difficulty of those questions, though commonplace, is not reasonable.

[0] Jozarov on YT has great commentary of top engine games, worth checking out.

[1] https://plato.stanford.edu/entries/consciousness/

[2] https://plato.stanford.edu/entries/computational-mind/


"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra

But it's not real reasoning because it is just outputting likely next tokens that are identical to what we'd expect with reasoning. /s

I don't like being directly critical, people learning in public can be good and instructive. But I regret the time I've put into both this article and the last one and perhaps someone else can be saved the same time.

This is someone with limited knowledge of chess, statistics and LLMs doing a series of public articles as they learn a little bit about chess, statistics and LLMs. And it garners upvotes and attention off the coat-tails of AI excitement. Which is fair enough, it's the (semi-)public internet, but it sort of masquerades as half-serious "research". It more or less held together for the first article, but this one really is thrown together to keep the buzz from the last one going.

The TL;DR: one of the AIs being just-above-terrible, compared to all the others being completely terrible (a fact already of dubious interest), is down to... we don't know. Maybe a difference in training sets. Tons of speculation. A few graphs.


I don't know why this whole line of posts is worthy of the front page. They seem like personal experiments of limited scope, unworthy of sharing. It is obvious the observed outputs arise because instruction tuning is incompatible with the prompt the user used. Secondly, the user failed to provide a chess board diagram (represented as text) to the model. The user also failed to tune any models. Overall, in the absence of an ASCII diagram, it's all a waste of time.

The model was trained on games in PGN notation. It would be shocking if it found ASCII art easier to understand than what it was actually trained on.

Well, clearly you're not interested in experimentation, only in assumptions.

How does stating the outcome you expect imply you are not interested in experimentation? Hypothesis formation is the very first step in experimentation.

Most people who understand LLMs and how they are trained would be shocked. In practice, that's an objectively true statement.

Please, please show us your experiments.

I am not the one writing and posting useless articles, even harmful articles, also distorting the understanding of LLMs. Ask the ones who do to perform better experiments.

You know that the LLM isn't actually your friend, don't you?

So to quote yourself:

> Well, clearly you're not interested in experimentation



