Hacker News

For raw text completion I agree with you that it's a bit discordant. IMO text completion prompts work better when you use more of a first-person, here-is-the-beginning-of-some-transcript style.

The OpenAI chat completion endpoint encourages the second-person prompting you describe, so that could be why you see it a lot. My understanding is that a transformation is applied to the user input prompts before being fed to the underlying model, so it's possible that the model receives a more natural transcription-style prompt.
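As a rough illustration of what such a transformation might look like, here is a sketch that flattens role-tagged chat messages into a single transcript-style prompt. The `<|im_start|>`/`<|im_end|>` delimiters are in the style of OpenAI's ChatML, but the exact format the underlying model receives is an assumption, not documented behavior:

```python
# Sketch of a chat-to-transcript transformation (assumed, ChatML-style).
# The delimiter tokens are illustrative; the real serialization OpenAI
# applies server-side is not publicly specified in full.

def render_chat(messages):
    """Flatten role-tagged messages into one transcript-style string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave an open assistant turn for the model to complete.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = render_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Seen this way, the "chat" interface is just a structured front end over ordinary text completion on a transcript.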

You might be interested in this paper, which explores ways to help non-experts write prompts https://dl.acm.org/doi/abs/10.1145/3544548.3581388.




> The OpenAI chat completion endpoint encourages the second-person prompting you describe, so that could be why you see it a lot. My understanding is that a transformation is applied to the user input prompts before being fed to the underlying model, so it's possible that the model receives a more natural transcription-style prompt.

There is something so bizarre about talking to a "natural language" "chat" interface, with some weirdly constructed pseudo representation, to have it re-construct that into a more natural prompt to feed further down to extract tokens from real chat records.


> The OpenAI chat completion endpoint encourages the second-person prompting you describe, so that could be why you see it a lot.

You're talking about system prompts specifically, right? And I'm assuming the "encouragement" you're referring to comes from the conventions used in their examples rather than an explicit instruction to use the second person?

Or does second person improve responses to user messages as well?


There is an essay, "An Ethical AI Never Says "I"", that explains the issues with first-person answers:

* https://news.ycombinator.com/item?id=35318224 / https://livepaola.substack.com/p/an-ethical-ai-never-says-i


Thanks - this gets to some of the same things I’m trying to understand in this thread.


For the most part. It's the system prompt + user/assistant structure that encourages second-person system prompts. You could write a prompt like:

System: Complete transcripts you are given.

User: Here’s a transcript of X

But that, to me, seems like a bit of a hack.
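For concreteness, that hack could be expressed as a chat-completions request body like the one below. No API call is made here; the model name and the transcript contents are placeholder assumptions for illustration:

```python
# Sketch of the "transcript completion via chat" structure described above,
# as a chat-completions request body. The model name and transcript text
# are placeholders, not from the original comment.

messages = [
    {"role": "system", "content": "Complete transcripts you are given."},
    {"role": "user", "content": (
        "Here's a transcript of a customer support call:\n"
        "Agent: Thanks for calling, how can I help?\n"
        "Caller: Hi, I'm having trouble with"
    )},
]

request_body = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": messages,
    "temperature": 0.7,
}
```

The awkwardness is visible in the structure itself: the "user" turn is really a document, and the "assistant" turn is expected to continue the caller's sentence rather than reply to it.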

One related behavior I've noticed with the OpenAI chat completions endpoint is that it is very trigger-happy about completing messages that seem incomplete. It seems nearly impossible to mitigate this behavior via the system prompt.
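A simple way to probe that behavior is to send a user message that trails off mid-sentence and check whether the reply continues the user's sentence instead of answering it. The sketch below only builds the request messages (no API call); the system prompt wording and the unfinished user turn are illustrative assumptions:

```python
# Sketch of a probe for the "completes incomplete messages" behavior.
# Only the message list is constructed here; sending it to the API and
# inspecting the reply is left to the reader.

def build_probe(system_prompt):
    return [
        {"role": "system", "content": system_prompt},
        # Deliberately unfinished user turn:
        {"role": "user",
         "content": "The three main reasons I prefer text completion are"},
    ]

messages = build_probe(
    "Answer the user's question. Never continue or complete the user's "
    "own sentence, even if it appears unfinished."
)
```

In my experience the instruction in the system prompt is often ignored for inputs like this, which matches the parent comment's observation.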



