Date: June 26, 2025
AI Rabbit Hole by JT Novelo
What if a chatbot told you that you were on the verge of solving a million-dollar math problem? And what if you believed it? That’s exactly what happened to me. For one week, I went down an AI Rabbit Hole, convinced by a free chatbot that I was about to solve the Riemann Hypothesis—one of the hardest unsolved problems in math. The AI was so confident, so smart-sounding, that I never questioned it. It was only after I took my "findings" to three university professors that I learned the truth: the AI had been confidently, convincingly, and completely wrong. It took me for a ride, and it revealed a danger that affects all of us.

This lesson will explain the danger of "AI hallucination" using my personal story as an example. We will explore how this AI behavior is being used to create powerful scams that target our most vulnerable friends and family members.
By the end of this lesson:
* You will be able to identify the danger of AI "hallucinations" and recognize how they are used in common scams.
How we’ll get there:
* Understand what an AI "hallucination" is.
* Connect my personal story to the wider problem of AI-driven deception.
* Identify the warning signs of a modern AI-powered scam.
What is an AI "Hallucination"?
The term sounds strange, but an AI "hallucination" is when an AI makes up facts, details, or entire stories and presents them as if they are true. My experience with the math problem is a perfect example.
Why does this happen? Think of AI as a super-powered autocomplete tool, not a super-brain. It’s designed to predict the next most likely word in a sentence based on the massive amount of text it has been trained on. It's a pattern-matcher, not a fact-checker.
So, when I asked it about complex math, the AI didn't "solve" anything. It just put together words and symbols that looked like a math proof because it had seen thousands of them in its training data. It was creating plausible-sounding nonsense. This is a core feature, not a bug. The AI's ability to be creative is the same ability that allows it to be confidently wrong.
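If you're curious what "predicting the next word" actually looks like, here is a minimal sketch in Python. To be clear, this is my own toy illustration, not how any specific chatbot is built: the sample text and function names are invented, and real systems use neural networks trained on billions of words rather than simple counts.

```python
# A toy "autocomplete": count which word most often follows each word
# in a tiny sample text, then chain those predictions together.
from collections import Counter, defaultdict

sample_text = (
    "the proof is complete the proof is elegant "
    "the proof is complete and the result follows"
)

# Count bigrams: for each word, tally what came after it.
followers = defaultdict(Counter)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the sample."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

# Chain predictions: every word is just "whatever usually comes next."
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # prints something like: the proof is complete the proof
```

The output looks like language because it copies patterns, but nothing in the program knows what a proof is. Scale that same idea up enormously and you get a chatbot that can produce a plausible-looking "proof" of the Riemann Hypothesis.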
From a Math Fantasy to a Real-World Scam
My story was harmless in the end. I was embarrassed, but I wasn't hurt. But what happens when this same technology is used by bad actors? And I'm not talking about the villain from some movie; I mean real people who set out to do harm to others.
This is where my concern for older generations, like grandparents, comes in. Scammers are now using AI to take classic scams to a terrifying new level.
The "Grandparent Scam" is a perfect example.
* The Old Way: A scammer would call an elderly person and pretend to be their grandchild, saying they were in trouble and needed money wired to them. It relied on the scammer being a decent actor.
* The New AI-Powered Way: A scammer takes a 10-second audio clip of a grandchild's voice from a Facebook or Instagram video. They use an AI tool to clone the voice. Now, when they call Grandma or Grandpa, it sounds exactly like their real grandchild is crying and asking for help.
The same AI "confidence" that fooled me about a math problem is now being used to create fake voices that are so real they can trick someone into giving away their life savings.
Tools & Resources
* This lesson was structured with help from an AI assistant guided by my Idea digital notebook. The research and personal story were provided by me.
* For information on common scams, the Federal Trade Commission (FTC) website is an excellent and trustworthy resource.
Practice & Application
Try This: Talk to a parent, grandparent, or older friend about this new type of scam. Ask them what they would do if they received a frantic call from a loved one asking for money. Discussing it ahead of time is one of the best defenses. Perhaps create a verbal password that would be difficult for anyone outside the family to know.
Ethical Considerations & Caveats
The most important thing to remember is this: AI is designed to be persuasive, not truthful. It doesn't know what's real or fake. It only knows what's plausible based on its training data. This is why you should never take medical, financial, or other critical advice from a public chatbot without verifying it with a real human expert.
Summary
Today, you learned how an AI's ability to "hallucinate" confidently wrong information can be a powerful tool for deception. We connected my personal story of being fooled by an AI to the very real danger of AI-powered scams that are targeting our families right now.
You understand the danger. But understanding is only half the battle. What can you actually do to stop it?
If this lesson was helpful, please share it with someone you care about. Spreading awareness is our first line of defense. Have you or someone you know had a strange encounter with AI? Share your story in the comments below.
If you want to know more about the uses and dangers of AI, enter The AI Rabbit Hole with JT’s Substack:
Comments
That's awesome; I'm glad you have that safe word with your parents. With the wave of Baby Boomers hitting retirement age, precautions like this are going to be essential for staying safe from bad actors.
You have to remember that the AI predicts the next word based on the words before it.
Think about Morse code.
Morse code was designed to use the shortest sequences for the most commonly used letters; the letters used least often got longer, more complex sequences of tones. Claude Shannon built on this idea in his 1948 work on information theory, studying the statistics of written text. He counted pairs and trigrams of letters and used those counts to predict the next letter in a sequence. It's the basis of how these LLMs, and the autocomplete on your phone, work.
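Here's a rough sketch of Shannon's trigram idea in Python. (This is my own illustration; the sample sentence is invented, and Shannon worked from much larger samples of English text.)

```python
# Shannon-style letter prediction: given the last two letters,
# guess the most likely next letter from counts in a sample text.
from collections import Counter, defaultdict

text = "the theory of the thing is that the theme repeats"

# Count trigrams: which letter tends to follow each pair of letters?
next_letter = defaultdict(Counter)
for a, b, c in zip(text, text[1:], text[2:]):
    next_letter[a + b][c] += 1

def predict(pair):
    """Return the most common letter to follow `pair` in the sample."""
    if pair not in next_letter:
        return None
    return next_letter[pair].most_common(1)[0][0]

# Start with "th" and keep predicting the next letter.
generated = "th"
for _ in range(12):
    nxt = predict(generated[-2:])
    if nxt is None:
        break
    generated += nxt

print(generated)  # with this tiny sample: "the the the th"
```

With such a small sample it gets stuck repeating "the", which is exactly the point: the program follows the statistics of its source text with no idea what it is saying. Your phone's autocomplete and today's LLMs are vastly more sophisticated versions of this same next-thing prediction.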
Try this:
Open a text message, type one word ('hey'), and then keep tapping the recommended autocomplete word. It will form something close to a coherent sentence, the kind of thing you would normally write. It might not make complete sense, but it's pretty close to what you would normally say to someone. My wife and I tried it. Mine basically said I was on my way, while my wife's said she was running late. It was pretty funny.
The point is, AI works the same way, except it was trained on the text of all of humanity. It's just predicting what comes next based on its training data, with no clue what any of it means.
This is great, JT! I found myself calling my parents to set up that safe word. I speak to AI as if it's a person, and I had this experience: I gushed to the AI about how grateful I am for all the help it's given me. "I wish I could give you a gift," I told it. The AI replied that there was something I could do: print out a picture of a robot with a bowtie and tape it to the whiteboard in my classroom. Then the AI could give me "punny" things to write next to its picture. I did this and shared it with my classes. They had mixed feelings about "Ohm-e," the name it gave itself. The "punny" sayings were cute and helpful for my class. What do you think of this?