While all chatbots struggle with hallucinations, does it matter which one you use? Absolutely.
Free versions aren’t just outdated; they’re also more prone to hallucinations. Think of them as older models carrying the same name: they lack the improvements and refinements found in paid versions, which are generally more accurate and up to date.
Why are paid versions better?
It’s not just about having the newest version. Paid models are often fine-tuned more frequently, addressing known issues like generating false information. They also have access to the latest data and enhancements, which can drastically reduce hallucinations.
How to stay safe: 3 quick checks
If you’re using AI in your work, especially for anything public-facing or client-related, follow these three rules:
1. Ask for sources
Some tools, like Perplexity or ChatGPT with web access, can cite URLs or studies. If a tool doesn't show a source, don't take its claims at face value.
2. Cross-check key facts
If it provides stats, laws, or quotes, look them up; even basic facts can be wrong. Confirm figures with Perplexity or Google, and click every link to verify that it actually supports the claim (a small link-checking script appears after this list). Then double-check with the AI itself: if you used ChatGPT, ask whether the sources are real, whether the quotes are verbatim, and whether the URLs go directly to the cited source or just to a homepage.
3. Feed it your own material
Want more accurate answers? Upload or paste in your real documents with verified quotes and data. Let the AI work from facts you provide rather than guessing.
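If you'd rather not click every link by hand, a few lines of Python can at least confirm that each cited URL actually loads. This is a minimal sketch, assuming the requests library is installed; the URLs are placeholders for whatever links the chatbot gave you.

```python
import requests

# Links the chatbot cited; replace these placeholders with the real URLs.
cited_urls = [
    "https://example.com/study",
    "https://example.org/report",
]

for url in cited_urls:
    try:
        # A HEAD request only confirms the page exists; it doesn't prove
        # the page says what the chatbot claims it says.
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = "reachable" if response.ok else f"HTTP {response.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({exc.__class__.__name__})"
    print(f"{url}: {status}")
```

A live link only clears the lowest bar; you still need to read the page and confirm it says what the chatbot claims.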
Train your chatbot to lie less
Here’s a vital hack: you can train the best chatbots to behave better. Paid versions, especially those with customization options, can learn from the accurate data you feed them. The more high-quality, reliable information you give them, the more likely they are to respond accurately.
Combine that with better prompting, such as instructing the chatbot to cite only verifiable data, and you can significantly reduce the mistruths you encounter.
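For readers who work through the API rather than the chat window, here is one way to combine those two ideas: your own vetted material plus a standing instruction to cite only what it can verify. This is a minimal sketch assuming the OpenAI Python SDK; the file name, model name, and question are hypothetical placeholders, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

# The standing instruction: stick to the supplied material and admit gaps.
instructions = (
    "Answer using only the source material provided below. "
    "Quote or cite the exact passage you relied on. "
    "If the answer is not in the material, say you don't know instead of guessing."
)

# Your own vetted document (hypothetical file name).
with open("verified_report.txt", encoding="utf-8") as f:
    source_material = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": f"{source_material}\n\nQuestion: What were last year's revenue figures?"},
    ],
)
print(response.choices[0].message.content)
```

The same instruction works in a chat window's custom-instructions box; the point is to make "only use what I gave you, and say so if it isn't there" a rule rather than a one-off request.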
Push back
If something looks off, call it out. Push back with a prompt like “How accurate is this statistic?” or “The link you provided doesn’t go to that information.” The AI often tries again, and sometimes fails again, but forcing it to explain itself can surface better results or more reliable sources.
Be careful out there
AI is an incredible tool, but it’s not infallible. Treat AI-generated content as a starting point, not a final answer. Question it, cross-check it, and make it work for you, not the other way around. (-Kevin)