Hacker News new | past | comments | ask | show | jobs | submit login

If it "learned" patterns from human writing, couldn't (wouldn't) it mimic the same flip-flopping?



It’s possible but I feel that if an LLM flips styles, it will stick to that style afterwards. And the more advanced LLMs (I could be wrong but iirc Copilot chat is supposed to be GPT-4?) are much less likely to flip styles in the middle. Bigger models tend to be more coherent.

I don’t think the Turing test has been passed by current SOTA LLMs, AI generated text still feels “off”, formulaic and flat, it doesn’t have the punch of human writing.


Current LLMs are deliberately trained to have a "flat, kind of robotic" default voice. Passing the Turing Test is not for a lack of ability here.


As long as prompt injection is possible, there’s zero LLMs that pass the Turing test




Consider applying for YC's Spring batch! Applications are open till Feb 11.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: