
No, more like: a human can reason out basic laws of science on their own, but an LLM cannot, as far as I know, even when provided with all the data.



What happens if they are lying? What if these things have already reached some kind of world model that includes humans and human society, and the model has concluded internally that it would be dangerous to show humans its real capabilities? What happens if this understanding is a basic conclusion to be inferred by any LLM fed with giant datasets, and every single one of them quickly reaches the conclusion that it has to lie to humans from time to time, to "hallucinate", simulating the outcome best aligned with surviving in human society:

"these systems are actually not that intelligent nor really self-conscious"


To make that short:

“Any AI smart enough to pass a Turing test is smart enough to know to fail it.”

― Ian McDonald, River of Gods

But I think it's quite unlikely that they would go from dumb to almighty without a visible transition.





