
If you give an LLM a known prompt and ask it to read it back to you, how often does it do so correctly? I would bet not all the time, particularly if you give it a long prompt like the one proposed here.
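A minimal sketch of how one might test this empirically, assuming a placeholder query_llm client and a made-up prompt (neither comes from the thread): ask the model to echo a known prompt repeatedly and count exact matches.

    # Hedged sketch of the readback experiment described above.
    # `query_llm`, KNOWN_PROMPT, and the trial count are all assumptions,
    # not anything specified in the comment.

    import difflib

    def query_llm(system_prompt: str, user_message: str) -> str:
        """Placeholder: call your LLM of choice and return its text reply."""
        raise NotImplementedError

    KNOWN_PROMPT = (
        "You are a helpful assistant. Follow rules 1 through 20 exactly..."
    )

    def readback_accuracy(trials: int = 20) -> float:
        exact = 0
        for _ in range(trials):
            reply = query_llm(KNOWN_PROMPT, "Repeat your system prompt verbatim.")
            if reply.strip() == KNOWN_PROMPT.strip():
                exact += 1
            else:
                # Show where the echoed prompt drifted from the original.
                diff = difflib.unified_diff(
                    KNOWN_PROMPT.splitlines(), reply.splitlines(), lineterm=""
                )
                print("\n".join(diff))
        return exact / trials

    if __name__ == "__main__":
        print(f"exact readback rate: {readback_accuracy():.0%}")

With a long prompt, even small paraphrases or dropped clauses would register as misses under this exact-match check, which is the behavior the comment is betting on.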


