I don't need to fully understand my own thought process in order to understand (or, if you like, simulate understanding) that what the machine is doing is orders of magnitude less advanced.
I say that the machine is "simulating understanding" because it does an obviously bad job of it.
We only need to look at obvious cases of prompt attacks, or at cases where the AI goes off the rails and produces garbage, or worse - answers that look plausible but are incorrect. The system is blatantly unsophisticated compared to ordinary human-level understanding.
Those errors make it clear that we are dealing with "smoke and mirrors" - a matching algorithm that is relatively simple compared to our own mental processes.
Once (or if) it starts behaving like a human, I admit it will be much harder for me not to anthropomorphize it myself.