OP here. The CNET thing is actually pretty egregious, and not the kinds of errors a human would make. These are the original investigations, if you'll excuse the tone:
https://futurism.com/cnet-ai-errors
https://futurism.com/cnet-ai-plagiarism
https://futurism.com/cnet-bankrate-restarts-ai-articles
I don't really agree that a junior writer would never make some of those money-related errors. (And AIs seem particularly unreliable with respect to that sort of thing.) But I would certainly hope that any halfway careful editor qualified to be editing that section of the site would catch them without a second look.
The point wasn't that a junior writer would never make a mistake; it's that a junior writer would be trying their best for accuracy, whereas an AI will happily hallucinate errors and keep right on going with no shame.
Call it AI or ChatGPT: if you build a system that uses it to extract an outline of facts from 10 different articles, then use an embedding database to merge semantically similar facts into a single list, and finally generate the article from that list, you'll get a far more factually accurate article.
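Roughly what that looks like as a sketch, assuming the openai and sentence-transformers packages; the model names, prompts, and similarity threshold here are placeholders, and in-memory cosine similarity stands in for a real embedding database:

    # Sketch of the pipeline: extract facts per article, drop near-duplicate
    # facts via embedding similarity, then draft only from the merged list.
    from openai import OpenAI
    from sentence_transformers import SentenceTransformer, util

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    def extract_facts(article: str) -> list[str]:
        """Ask the LLM for a bare list of factual claims, one per line."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content":
                       "List the verifiable facts in this article, one per "
                       "line, with no commentary:\n\n" + article}])
        return [ln.strip("- ").strip()
                for ln in resp.choices[0].message.content.splitlines()
                if ln.strip()]

    def merge_facts(facts: list[str], threshold: float = 0.85) -> list[str]:
        """Keep a fact only if it isn't semantically close to one already kept."""
        kept, kept_vecs = [], []
        for fact, vec in zip(facts, embedder.encode(facts, convert_to_tensor=True)):
            if all(util.cos_sim(vec, kv).item() < threshold for kv in kept_vecs):
                kept.append(fact)
                kept_vecs.append(vec)
        return kept

    def write_article(facts: list[str]) -> str:
        """Draft an article constrained to the merged fact list."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                       "Write a short article using ONLY these facts:\n" +
                       "\n".join("- " + f for f in facts)}])
        return resp.choices[0].message.content

    def pipeline(articles: list[str]) -> str:
        return write_article(merge_facts(
            [f for a in articles for f in extract_facts(a)]))

The threshold is the main knob: set it too high and near-duplicate facts slip through as "distinct"; set it too low and genuinely different facts get merged away.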
A junior writer would absolutely plagiarize or write things like, "For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you'll earn $10,300 at the end of the first year."
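To spell out the error in that quoted sentence: $10,300 would be the account balance after the first year, not what you earn. The interest earned is $300:

    principal, rate = 10_000, 0.03
    balance = principal * (1 + rate)  # 10300.0 -- balance after year one
    earned = balance - principal      # 300.0   -- interest actually earned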
But if you're saving so much money from not having junior writers, why would you want to spend it on editors? The AIs in question are great at producing perfectly grammatical nonsense.
Your first article pretty much sums up the problem of using LLMs to generate articles: random hallucination.
> For an editor, that's bound to pose an issue. It's one thing to work with a writer who does their best to produce accurate work, but another entirely if they pepper their drafts with casual mistakes and embellishments.
There's a strong temptation for non-technical people to use LLMs to generate text about subjects they don't understand. For technical reviewers, it can take longer to review that text (and detect and eliminate the misinformation) than it would to write it properly in the first place. Assuming the goal is to create accurate, informative articles, there's often simply no productivity gain.
This is not a new problem, incidentally. ChatGPT and other tools just make the generation capability a lot more accessible.