
I agree with what I see to be the main thrust of this article: "AI" itself isn't a danger, but how people choose to use AI certainly can be dangerous or helpful. That's been true of every new technology in the history of mankind.

> Similarly, CNET used an automated tool to draft 77 news articles with financial advice. They later found errors in 41 of the 77 articles.

This kind of information is useless without a baseline. If they asked humans to draft 77 news articles and later went back to analyze them for errors, how many would they find?




OP here. The CNET thing is actually pretty egregious, and not the kind of errors a human would make. These are the original investigations, if you'll excuse the tone: https://futurism.com/cnet-ai-errors

https://futurism.com/cnet-ai-plagiarism

https://futurism.com/cnet-bankrate-restarts-ai-articles


>and not the kind of errors a human would make.

I don't really agree that a junior writer would never make some of those money-related errors. (And AIs seem particularly unreliable with respect to that sort of thing.) But I would certainly hope that any halfway careful editor qualified to be editing that section of the site would catch them without a second look.


The point wasn't that a junior writer would never make a mistake; it's that a junior writer would be trying their best for accuracy. AI, however, will happily hallucinate errors and keep going with no shame.


AI or ChatGPT: if you create a system that uses it to create an outline of facts from 10 different articles, then use an embedding database to combine the facts into a semantically deduplicated list, and then use that list of facts to create an article, you'll get a much more factually accurate article.
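
A rough sketch of the fact-merging step (illustration only; it assumes the sentence-transformers library, and the model name, similarity threshold, and sample facts are placeholders):

    # Merge semantically similar facts before drafting,
    # keeping one representative per cluster of near-duplicates.
    from sentence_transformers import SentenceTransformer, util

    facts = [
        "CNET published 77 finance articles drafted by an automated tool.",
        "An automated tool drafted 77 financial advice articles for CNET.",
        "Errors were later found in 41 of the articles.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(facts, convert_to_tensor=True)

    kept = []  # indices of facts to keep
    for i in range(len(facts)):
        if all(util.cos_sim(embeddings[i], embeddings[j]).item() < 0.8 for j in kept):
            kept.append(i)

    deduped_facts = [facts[i] for i in kept]
    # deduped_facts then goes into the prompt that asks the model to write the article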


A junior writer would absolutely plagiarize or write things like, "For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you'll earn $10,300 at the end of the first year."
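
For reference, the arithmetic that quoted sentence gets wrong (a quick check using only the numbers from the quote): $300 is the interest earned, and $10,300 is the resulting balance, not the earnings.

    principal = 10_000
    rate = 0.03  # 3% annual interest, compounded annually

    balance_after_one_year = principal * (1 + rate)       # 10300.0
    interest_earned = balance_after_one_year - principal  # 300.0, not 10,300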

But if you're saving so much money from not having junior writers, why would you want to spend it on editors? The AIs in question are great at producing perfectly grammatical nonsense.


Your first article pretty much sums up the problem of using LLMs to generate articles: random hallucination.

> For an editor, that's bound to pose an issue. It's one thing to work with a writer who does their best to produce accurate work, but another entirely if they pepper their drafts with casual mistakes and embellishments.

There's a strong temptation for non-technical people to use LLMs to generate text about subjects they don't understand. For technical reviewers, it can take longer to review the text (and detect/eliminate misinformation) than it would to write it properly in the first place. Assuming the goal is to create accurate, informative articles, there's simply no productivity gain in many cases.

This is not a new problem, incidentally. ChatGPT and other tools just make the generation capability a lot more accessible.


> This kind of information is useless without a baseline

The problem is not so much the quantity of errors (that’s a problem too) but the severity of them. These LLM “AIs” will produce convincing fabrications mixed in with bits of truth. When a human writes this sort of stuff, we call it fraud.

Just earlier this week in my philosophy class we had ChatGPT produce a bio of our professor. Some details were right but others were complete fabrications. The machine gave citations of non-existent articles and books she’d apparently written. It said she’d previously taught at universities she’d never even visited.

I don’t know how else to describe it other than amusing at best, dangerous fraud at worst.


I recently used ChatGPT (the free 3.5 Turbo version) to list five articles about public health surveillance, asking for ones published since 2017. The list just had the article titles, authors, publications, and years. I had trouble finding the first one, so I asked the model for DOI numbers. It happily restated the article titles with DOI numbers.

None of the articles were real. The authors were real (and actively published work after 2017). The publications were real, and often featured the listed authors. But the articles were all fake. Only one of the DOI numbers was legit, but it pointed to a different article (partial credit for matching the listed publication). So it can easily hallucinate not just nitty gritty details, but basic information.

Thinking about it, the GPT models are trained on finished output, so they've picked up grammar, syntax, and conceptual relations between tokens (e.g., "eagle" is closer to "worm" than it is to "elegance"). But there are few, if any, examples of "first draft vs. final draft." Training on that could have helped them pick up how discretion and corrections are used in writing.


> It said she’d previously taught at universities she’d never even visited.

Don't worry. Some of the details will make it onto some websites, which will be cited by the press, which will be included in Wikipedia, and it will become the truth. She will have taught at those universities whether she likes it or not.


Clueless, to say the least.

Humans tasked with the same, having no extra information (aka context), will, on average, perform worse.

If an AI performs better than the average human, why should I hire you? Both err, but one is orders of magnitude cheaper (and GPT-4 performs overwhelmingly better). Don't call us; we'll call you.


"Guns don't kill people, people kill people."

Here in the United States, that's a popular statement amongst pro-gun/2A supporters. It strikes me as very similar to this discussion about AI.

"AI can't hurt people, people hurt people."

I haven't spent enough time to fully form an opinion on the matter, I'm just pointing out the similarities in these two arguments. I'm not sure what to think anymore. I'm equally excited and apprehensive about the future of it.


The most glaring difference is that guns are basically inert, whereas an AGI by its nature doesn't require a command in order to do something. It can pull its own trigger. The gun:AI analogy could itself be analogized to bicycle:horse, perhaps. You can train a horse to carry you down the road, but it still has a mind of its own.


AI does not have a mind of its own, nor is it anywhere close to one yet.


How does AI not require a command?


Do you believe there is anything at all that does not require a command, besides humans? On a side note, most of the time humans also require a command.


Note the G in AGI :)


A major difference is that guns are a precaution. Everyone's best outcome is to not have to use guns.

But AI will be used all the time, which makes responsible use more difficult.


> "AI" itself isn't a danger, but how people choose to use AI

For our current AI, much less intelligent than humans, that's true. If rapid exponential progress continues and we get AI smarter than us, then the AI's choices are the danger.


That's the kind of fanciful futuristic pseudo-risk that takes over the discussion from actually existing risks today.


Calling it "fanciful" is a prime example of the greatest shortcoming of the human race: our inability to understand the exponential function.

In any case, the open letter addresses both types of risk. The insistence by many people that society somehow can't think about more than one thing at a time has never made sense to me.


Can you demonstrate exponential progress in AI?

One notes that in roughly 6000 years of recorded history, humans have not made themselves any more intelligent.


Two notes that AI is clearly way more intelligent than it was even three years ago, and that GPU hardware alone is advancing exponentially, with algorithmic advances on top of that, along with ever-larger GPU farms.


"Exponentials" in nature almost always turn out to be be sigmoid functions when looked at over their full range. Intelligence in particular seems very very likely to be a sigmoid function, since scaling tends to produce diminishing returns as inter-node communication needs increase, just like we saw with CPU parallelism.


Sure, but we have an existence proof for human-level intelligence, and there's no particular reason to believe humans are at the top of what's possible.


There's no particular reason to believe anything about intelligence far greater than a human's either.

And there's absolutely no reason to imagine that current AI tech, requiring more training data than the whole of humanity has ever consumed to train on tasks that humans acquire in 5-10 years, has any chance to reach significant improvements in general intelligence (which would certainly require on-the-fly training to adapt to new information).


"If they asked humans to draft 77 news articles and later when back to analyze them for errors, how many would they find?"

It doesn't matter. One self-driving car that kills a person is not the same as one person killing another person. A person is accountable; an AI isn't (at least not yet).


>One self-driving car that kills a person is not the same as one person killing another person

I fundamentally disagree. A dead person does not become less dead just because their family has someone to blame.


This feels like a strawman argument. I suspect the person you are replying to would agree with your last sentence. Can you think of any ways the two things might be perceived differently?


I literally quoted his comment; how can it be a strawman?


Your response implies that the comment is about the similarity of how dead someone is in each circumstance, and then you take a position apparently opposite to the comment's author. To me, it stretches credulity that the comment was about that. My reading of it is that there are serious and interesting ethical/legal/existential questions at play with AI-induced death that we need to be grappling with. In this way, the two are not the "same". Legally, who is to blame? How do we define "intent"? Are we OK with this becoming more normal? Putting lifespan issues aside, would you rather die of "natural causes", or because an AI device killed you?


Well, you omitted "A person is accountable, an AI isn't (at least not yet)."


The difference is that the person who (accidentally or not) killed another person will suffer consequences, intended to deter others from doing the same. People have rights and are innocent until proven guilty; we pay a price for that. Machines have no fear of consequences, no rights, and no freedom.

For the person that died and their loved ones, it might not make a difference, but I don't think that was the point OP was trying to make.


>"AI" itself isn't a danger, but how people choose to use AI certainly can be dangerous or helpful

And people/governments will use it for evil as well as good. The article says "AI disinfo can't spread itself!" as if that were a comfort. "Coal plant emissions don't spread themselves, the wind does it!" as if we can control the wind. The solution isn't to stop the wind; it is to stop pollution.

Unfortunately, I don't think society, leading AI researchers, or governments will put the kibosh on AI development, and the "wind" here, social networks, is an unleashed beast. I don't want to sound too dreary, but I think the best society can do now is to start building webs of trust between citizens, and between citizens and institutions.

Cryptographic signing needs to hit mainstream, so things that are shared by individuals/organizations can be authenticated over the network to prove a lack of tampering.
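
As a rough sketch (assuming the Python "cryptography" package; key distribution and identity verification are the genuinely hard parts and aren't shown), the signing and verification step itself is already simple:

    # Publisher signs content with a private key; anyone with the public key
    # can verify the content hasn't been altered since it was signed.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # kept secret by the publisher
    public_key = private_key.public_key()        # published for readers/platforms

    article = b"The full text of the article as published."
    signature = private_key.sign(article)

    try:
        public_key.verify(signature, article)
        print("signature valid: content matches what the publisher signed")
    except InvalidSignature:
        print("signature invalid: tampered with, or not from this publisher")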

The '90s had the Internet. The '00s had search, Wikipedia, and YouTube. The '10s had social media. The '20s will have AI.


Building on that, how long did the AI take to draft the 77 news articles? Now ask a human to draft 77 articles in that same amount of time and see how many errors there are...


We are already awash in more articles and information than ever, and how long it takes to produce them isn't very important outside of a "quantity over quality" business model.


For this to be meaningful at all, you would have to presume that making the AI take longer would improve its accuracy, which is almost certainly not the case.


It would, however, allow more accurate but slower humans to check its results for errors.

I find it plausible that, in the near future, AI will be capable of generating content at such a pace that we won't have the time and resources to guarantee its accuracy, and will very soon surrender to blindly accepting everything it spits out as truth. That day, anyone pulling the strings behind the AI will essentially have the perfect weapon at their disposal.


So your position is: who cares what financial advice we output, as long as we do it quickly?



