Without knowledge of how to optimize answers for ‘truth,’ they’re modeling what humans do: tell stories, hedge, prevaricate, lie, do bad math, and sometimes, eventually, suss out the truth.
“While cognitive offloading with AI reduces people’s higher thinking abilities, thoughtful integration of ‘extraheric AI’—which nudges, questions, and challenges users—can substantially improve critical thinking.”
This has instilled a great fear of any kind of non-human entity that can seemingly think for itself, and large language models prey upon that fear perfectly. The fear of the unknown and the fear of a threat to humanity combine to make AI frightening to those who don’t…