"I don't think that there's any model today that doesn't suffer from some hallucination," said co-founder and president of Anthropic, Daniela Amodei.
"They're really just sort of designed to predict the next word, and so there will be some rate at which the model does that inaccurately."
...
"This isn't fixable," said Emily Bender, a linguistics professor and director of the University of Washington's Computational Linguistics Laboratory.
"It's inherent in the mismatch between the technology and the proposed use cases."
...
When used to generate text, language models "are designed to make things up. That's all they do," Ms Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets.
"But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance," Ms Bender said.