Everyone, every last one of us, does this every day, all day; only occasionally do we pause to check ourselves, and even then it's often to save face.
Daniel Kahneman was awarded a Nobel Prize for related research.
If you think it doesn't apply to you, you're definitely wrong.
Do not lose the original point: some systems have the goal of sounding plausible, while others have the goal of telling the truth. Some systems, when asked "where have you been?", will reply "at the baker's" because it makes a nice narrative in their "novel-writing, re-writing of reality"; others will check memory and say "at the butcher's", where they have actually been.
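A minimal toy sketch of that distinction (the names, the fake "memory log", and the candidate answers are all invented for illustration; this is not how any particular LLM works):

```python
import random

# Hypothetical memory of where the system actually was.
MEMORY_LOG = ["at the butcher's"]
# Answers that merely sound like a nice narrative.
PLAUSIBLE_PLACES = ["at the baker's", "at the park", "at the library"]

def plausible_answer(question: str) -> str:
    """Picks whatever makes a pleasant story, ignoring the record."""
    return random.choice(PLAUSIBLE_PLACES)

def grounded_answer(question: str) -> str:
    """Checks the memory log first and only answers from what is recorded."""
    if MEMORY_LOG:
        return MEMORY_LOG[-1]
    return "I don't have a record of that."

if __name__ == "__main__":
    q = "Where have you been?"
    print("plausibility-first:", plausible_answer(q))  # e.g. "at the baker's"
    print("memory-checked:", grounded_answer(q))       # "at the butcher's"
```

Same question, two very different failure modes: the first system can only ever be accidentally right, the second can only be wrong if its record is wrong.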
When people invent explicit reasons for why they turned left or right, those reasons remain hypotheses. The clumsy will promote those hypotheses to beliefs. The apt will keep the spontaneous ideas as hypotheses until they are able to assess them.
Ideally, yes: LLMs are tools that we expect to work, while people are inherently fallible and (even unintentionally) deceptive. LLMs being human-like in this specific way is not desirable.
I have no illusions about LLMs; I have been working with them since the original BERT, always with these same issues and more. I'm just stating what would be needed in the future to make them reliably useful outside of creative writing & (human-guided & checked) search.
If an LLM produces incorrect or orthogonal rhetoric without a way to reliably fix or debug it, it's just not as useful as it theoretically could be, given the data contained in its parameters.
That difference is total, in both humans and automated processes.