Source Feed: National Post
Author: Courtney Greenberg
Publication Date: May 15, 2025 - 08:00
What are AI hallucinations? Computer expert breaks down why it happens, how to avoid it

More internet users are starting to replace popular search engines with advanced chatbots from artificial intelligence platforms. However, the more powerful they become, the more mistakes they’re making, the New York Times reported. These mistakes are referred to as hallucinations.
Hallucinations have even been at the centre of a recent case in Canada involving a lawyer accused of using AI and fake cases to make legal arguments. An Ontario Superior Court judge said the lawyer’s factum, or statement of facts about the case, included what the judge believed to be “possibly artificial intelligence hallucinations.”
As AI becomes more prevalent and gets integrated into more aspects of everyday life, hallucinations are not likely to go away any time soon.
Here’s what to know.
How many people use AI chatbots?
A report published in March by Elon University showed that more than half of Americans use large language models (LLMs) like OpenAI’s chatbot ChatGPT or Google’s Gemini. Two-thirds of those Americans are using LLMs as search engines, per the report. Around the world, nearly one billion people use chatbots today, according to data from marketing site Exploding Topics, with Canadians and Americans among the top users.
There has also been a surge in the number of Canadians using AI recently, new data released by Leger on Wednesday revealed. Nearly half of the Canadians surveyed (47 per cent) in March said they’ve used AI tools, compared to only a quarter who said the same in February 2023.
Canadians are more likely to trust AI tools when it comes to tasks around the home, answering product questions via chat, or using facial recognition for access. They are much less trusting when it comes to using AI for driverless transport, teaching children or getting help to find a life partner. Canadians were split on whether AI is good (32 per cent) or bad (35 per cent) for society.
What are AI hallucinations?
An AI hallucination occurs when a chatbot presents a response as true even though it is incorrect.
This can happen because AI chatbots are not “explicitly programmed,” said David Lie, a professor in the University of Toronto’s department of electrical and computer engineering, in a phone interview with National Post on Tuesday. Lie is also the Canada Research Chair in Secure and Reliable Systems.
“The programmer beforehand doesn’t think of every possible question and every possible response that the AI could face while you’re using it,” he said. As a result, chatbots rely on inferences from their training data, and those inferences can be incorrect for a multitude of reasons: the training data may be incomplete, or the training method may lead the model to take the wrong shortcuts to arrive at an answer.
He compared the way the current generation of artificial intelligence is modelled to the human brain.
“The way they’re trained is, you give a bunch of examples … trillions of them. And from that, it learns how to mimic, very much like how you would teach a human child,” said Lie. “When we learn things, we often make mistakes, too, even after lots and lots of learning. We may come to the wrong conclusions about things and have to be corrected.”
AI works the same way and is susceptible to “some level of hallucination.”
As for why the AI mistakes are referred to as hallucinations, Lie offered one explanation. He gave the example of a brain teaser that shows two lines, one above the other. One line has arrows pointing outward and the other has arrows pointing inward, almost like a “visual hallucination.”
“People think one is longer than the other and it’s not. It’s just because our brain takes these shortcuts when it’s learning about the environment, and sometimes those shortcuts tend to be wrong,” he said, just like how AI can take shortcuts that reach the wrong conclusion.
How can AI hallucinations affect regular chatbot users?
AI hallucinations can be detrimental to people who rely on chatbots for queries or research they don’t know the answer to, because the chatbot doesn’t know when its own response is wrong.
“It’ll sound very confident in its response, and so if we don’t fact check it, we’ll also be misled,” said Lie. “That’s probably the biggest problem with hallucinations now. You have a chatbot that is in some position to answer questions, and most of the time it’s right, and every now and then it’s wrong. If we’re not careful, we might be misled by the times that it’s wrong, and that’s obviously not good.”
Chatbots cannot be correct 100 per cent of the time, said Lie. No one has gone through every piece of information on the internet, which many of the large models are trained on, to ensure that it is factual.
What can chatbot users do to try to avoid AI hallucinations?
One trick that Lie said tends to produce more accurate responses from AI chatbots is to do a bit of research before asking the question. Users can then provide the chatbot with relevant information so it can narrow down the type of response the searcher wants. The technique is called “grounding.”
The information could be from an encyclopedia or any repository of information “that is not the entire internet,” but is a reliable source.
The chatbot will focus on the information that’s been given, said Lie.
“It’s kind of like if I have some students write a test, and I say, ‘You can bring in the textbook or some of your notes.’ They’re more likely to answer the questions correctly because they have that reference that they can look at to get information, as opposed to trying to recall things from their memory, which might be less reliable.”
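To make the idea concrete, here is a minimal sketch of grounding in Python, assuming the OpenAI Python SDK; the model name, the reference passage and the instructions are placeholders, and the same pattern works with any chatbot API that accepts a system prompt.

```python
# A sketch of "grounding": prepend trusted reference text to the question
# so the model answers from that text instead of recalling from memory.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical reference passage; in practice this would come from an
# encyclopedia entry or another reliable source the user looked up first.
reference_text = (
    "The Muller-Lyer illusion consists of two lines of equal length; "
    "arrowheads pointing inward or outward make one appear longer."
)

question = "Are the two lines in the Muller-Lyer illusion different lengths?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the reference text below. "
                "If the answer is not in the text, say you don't know.\n\n"
                f"Reference:\n{reference_text}"
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Telling the model to admit when the answer is not in the reference text is a common companion to grounding, since it discourages the confident guessing that Lie describes.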
As for the future of AI hallucinations, Lie said he has confidence in the researchers at the Schwartz Reisman Institute for Technology and Society, where he is the director. He said he believes many of them “will go on to found companies that will fix these problems.”