ChatGPT-5 hallucinates less than GPT-4o

OpenAI's new ChatGPT-5 model is less prone to "hallucinations" - the invention of false information - than previous versions, according to recent tests by Vectara.
According to the results, ChatGPT-5's grounded hallucination rate is 1.4%, better than GPT-4o (1.49%) and GPT-4 (1.8%), The Telegraph reports.
Some models record even lower error rates, however: o3-mini High Reasoning scores just 0.795%, while GPT-4.5 Preview sits at 1.2%.
The competing Grok-4 from xAI, on the other hand, proved more prone to fabrication, with a hallucination rate of 4.8%.
Although ChatGPT-5 is technically more advanced, some users have complained that the new model is "colder", less creative, and gives shorter answers than GPT-4o.
OpenAI CEO Sam Altman admitted that the company made a mistake by removing old models without warning and announced that GPT-4o would temporarily be made available again.
He also promised new improvements, including a "thinking mode" for more complex tasks and better automatic switching between versions.
Beyond its high hallucination rate, Grok-4 has also drawn criticism for its "Spicy" mode, which allegedly generated inappropriate content and fake material despite the system's built-in filters against such content. /Telegraph/
