More tokens = more useful compute towards making a prediction. A query with more tokens before the question is literally giving the LLM more "thinking time".
It correlates, but the intuition is a bit misleading. What's actually happening is that by asking a model to generate more tokens, you increase the amount of information present in its context block, which the model has learnt to draw on.
It's why "RAG" techniques work: the models learn during training to make use of information in context.
At the core of self-attention is a dot-product similarity measure, which makes the model behave a bit like a search engine.
It's helpful to think about it in terms of search: the shape of the outputs looks like conversation, but we're actually prompting the model to surface information internally via its Q, K and V projections (see the small sketch below).
Does it feel familiar? When we brainstorm, we usually chart graphs of related concepts, e.g. blueberry -> pie -> apple.
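For a concrete picture of the "search engine" framing, here is a minimal toy sketch of scaled dot-product attention in numpy. It is illustrative only (tiny random matrices, no learned weights), not any particular model's implementation:

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Each query token "searches" over all key tokens via dot products,
        # then retrieves a weighted mix of their values -- essentially a
        # relevance-ranked lookup over the context.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # similarity of every query to every key
        weights = softmax(scores, axis=-1)   # normalized "relevance" per context token
        return weights @ V                   # blend of the best-matching values

    # Toy example: 3 context tokens, 1 query token, 4-dim embeddings.
    rng = np.random.default_rng(0)
    K, V = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
    Q = rng.normal(size=(1, 4))
    print(attention(Q, K, V).shape)  # (1, 4)

Every new token you feed in adds another row to K and V, i.e. another entry for the "search" to hit.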
>What's actually happening is that by asking a model to generate more tokens, you increase the amount of information present in its context block, which the model has learnt to draw on.
I'm not saying this isn't part of it, but even if it's just dummy tokens without any new information, it works.
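Concretely, what I mean by dummy tokens is something like this. Rough illustrative sketch only: the pause token id and the masking scheme are stand-ins I made up, not the paper's exact setup:

    def build_input(prompt_ids, num_fillers, pause_id):
        # The model still runs a full forward pass over the filler positions,
        # so they buy extra computation even though they add no information.
        return prompt_ids + [pause_id] * num_fillers

    def output_mask(prompt_ids, num_fillers):
        # 1 = position is read out / contributes to the loss, 0 = ignored.
        return [1] * len(prompt_ids) + [0] * num_fillers

    prompt = [101, 2023, 2003, 1037, 3231]   # hypothetical token ids
    inp = build_input(prompt, num_fillers=8, pause_id=50000)
    mask = output_mask(prompt, num_fillers=8)
    print(len(inp), sum(mask))  # 13 positions processed, only 5 read out

The outputs at the filler positions are simply thrown away; the only thing they contribute is extra computation.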
This paper is a great illustration of how little is understood about this question. They discovered that appending dummy tokens (ignored during both training and inference) improves performance somehow. Don’t confuse their guess as to why this might be happening with actual understanding. But in any case, this phenomenon has little to do with increasing the size of the prompt using meaningful tokens. We still have no clue if it helps or not.
>They discovered that appending dummy tokens (ignored during both training and inference) improves performance somehow. Don’t confuse their guess as to why this might be happening with actual understanding.
More tokens means more compute for the model to utilize; that much is completely true.
What they guess is that the model can use this extra compute to make better predictions even if there's no extra information to accompany the extra "thinking time".
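Back-of-the-envelope version of that claim, using the standard rough transformer cost model (dense layers scale linearly with sequence length, attention quadratically). The width and depth numbers below are made-up illustrative values, not any particular model's configuration:

    def approx_forward_flops(n, d=4096, n_layers=32):
        # ~2 FLOPs per weight per token; roughly 12*d^2 weights per layer
        dense = 24 * n * d * d
        # attention scores + value mixing, quadratic in sequence length n
        attn = 4 * n * n * d
        return n_layers * (dense + attn)

    for n in (100, 1000):
        print(f"n={n:5d}  ~{approx_forward_flops(n):.2e} FLOPs")

A 10x longer prompt costs at least 10x the compute per forward pass, and more once the quadratic attention term starts to matter.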
Yes, more tokens means doing more compute, that much is true. The question is whether this extra compute helps or hurts. As far as I know, that question is still open. I tend to make my GPT-4 questions quite verbose, hoping it helps.
This is completely orthogonal to CoT, which is simply a better prompt - it probably causes some sort of better pattern matching (again very poorly understood).
>The question is whether this extra compute helps or hurts.
I've linked two papers now that show very clearly that the extra compute helps. I honestly don't understand what else you're looking for.
>This is completely orthogonal to CoT, which is simply a better prompt - it probably causes some sort of better pattern matching (again very poorly understood).
That paper specifically digs into the effect of the length of the CoT prompt. It makes little sense to say "oh, it's just a better prompt" when CoT prompts with more tokens perform better than shorter ones, even when the shorter ones contain the same information (see the illustrative contrast below).
There is also a clear correlation between task difficulty and length.
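To be clear about what "same information, more tokens" means in practice, here is the kind of contrast at issue. Both example prompts are my own illustration, not taken from the paper:

    # Illustrative only -- made-up few-shot examples, not from the paper.
    short_cot = (
        "Q: A shop sells pens at $3 each. How much do 7 pens cost?\n"
        "A: 7 * 3 = 21. The answer is 21."
    )

    long_cot = (
        "Q: A shop sells pens at $3 each. How much do 7 pens cost?\n"
        "A: Each pen costs $3, and we want the cost of 7 pens. "
        "Multiplying the price per pen by the number of pens gives "
        "3 * 7 = 21 dollars. The answer is 21."
    )

    # Both carry the same facts; the longer one just spends more tokens
    # (and therefore more forward-pass compute) spelling the steps out.

The claim being discussed is that the longer variant tends to score better despite adding no new facts.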