Hacker News

Maybe I'm being too skeptical, and certainly I'm only a layman in this field, but the amount of ANN-based post-processing it takes to produce the final image casts doubt on the meaning of the result.

At what point do you reduce the signal to the equivalent of an LLM prompt, with most of the resulting image being explained by the training data?

Yeah, I know that modern phone cameras are also heavily post-processed, but the hardware is at least producing a reasonable optical image to begin with. There's some correspondence between input and output; at least they're comparable.

I've seen someone on this site comment to the effect that if they could use a tool like DALL-E to generate a picture of "their dog" that looked better than a photo they could take themselves, they would happily take it over a photo.

The future is going to be difficult for people who find value in creative activities beyond the raw audio/visual/textual signal at the output. I think most people who really care about a creative medium would say there's value in the process and the human intentionality behind a work, both for the creator who engages in it and for the audience who is aware of it.

In my opinion, most AI creative tools don't actually benefit serious creators; they just give companies a competitive edge in selling new products and enable more dilettantes to enter the scene and flood us with mediocrity.
