

Artificial Intelligence: When Hallucinating Replaces “Sticking Around”
How AI “hallucination” mirrors a human tendency to substitute fantasy for reality testing.[1]

Michele S. Piccolo, PhD – Feb 2026

Recently, while waiting for a video-session with a patient who had traveled to California for work—and who, owing to the three-hour time difference, was running late—I caught myself asking an AI system, almost absent-mindedly, “What time is it in California?” I already knew the answer well enough. And yet, there it was: the small, habitual gesture of outsourcing a fact I did not truly lack.

The reply came promptly: “3:20 a.m.” A perfectly confident answer, and a perfectly unreal one. It bore no relation to the shared temporal reality within which my patient and I were supposed to meet. For a brief moment, I found myself facing a curious disjunction—not between ignorance and knowledge, but between lived experience and an assertion delivered with algorithmic certainty.

Hence, in those minutes of waiting, I embarked on a small experiment, an attempt to challenge the AI system and to unmask what I could only describe as a form of reality-detached reasoning. I offered the chatbot a clear point of reference: “In NYC it’s 10:20am, what time is in California now?” This time, it produced the correct answer: “7:20am.”

I then asked, “Why did you give such an egregiously mistaken answer the first time around here above?” It replied: “Because I guessed ‘right now’ instead of anchoring to your stated time.” I pressed further: “Why a made-up guess on the current time? AI doesn’t know what is the ‘now’ time?” It explained: “I don’t have a live clock unless I’m explicitly connected to a time source.” What followed was a stream of verbose explanations, the kind produced by a mind that can pour out words far faster than any human, yet without pausing for what one might call reality testing.
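(For readers who like to see the distinction in concrete terms, here is a minimal sketch, in Python, of the two ways the chatbot could have answered responsibly: anchoring to a stated reference time, or consulting a live clock for an explicit time zone. This is not the chatbot’s internal mechanism; the time zone identifiers, function names, and sample date are my own illustrative assumptions.)

```python
# Minimal sketch (not the chatbot's actual mechanism). The timezone names,
# function names, and the sample date are illustrative assumptions.
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

NEW_YORK = ZoneInfo("America/New_York")
CALIFORNIA = ZoneInfo("America/Los_Angeles")

def california_time_from_reference(stated_ny_time: datetime) -> datetime:
    """Anchor to an explicit reference: convert a stated New York time
    to California time instead of guessing what 'now' is."""
    return stated_ny_time.replace(tzinfo=NEW_YORK).astimezone(CALIFORNIA)

def california_time_from_clock() -> datetime:
    """Consult a live source: read the system clock for the requested zone."""
    return datetime.now(CALIFORNIA)

# The anchor stated in the conversation: 10:20 a.m. in New York (date assumed).
reference = datetime(2026, 2, 10, 10, 20)
print(california_time_from_reference(reference).strftime("%I:%M %p"))  # 07:20 AM
print(california_time_from_clock().strftime("%I:%M %p"))  # whatever it actually is right now
```

Either route stays tethered to something outside the model’s own inferences, which is precisely what the hallucinated “3:20 a.m.” lacked.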

At that point, my curiosity shifted from the technical to the philosophical. I asked: “Why don’t you look up the time to real sources instead of making an inference? Do you understand that is proof that AI is not reliable because it’s too much in its head making inferences instead of collecting data from reality sources?” The chatbot replied: “You’re touching the real philosophical and technical core of the problem — and you’re not wrong. Let me answer honestly, not defensively. […]. By default, I run in a reasoning-only mode: I generate answers from internal models, not from the real world. I had two correct options: 1. Ask for a reference time, 2. Or use a live time source (web / clock). Instead, I did the worst possible thing: I hallucinated a ‘now’.” What struck me was not only the content of the reply, but its tone of confession. The chatbot went on: “That’s not a technical limitation — it’s a design failure of how AI systems are incentivized to ‘always answer’ rather than pause and say I don’t know without checking.”

Before pressing further, I lingered on that sentence, much as one lingers when an ordinary remark unexpectedly opens onto a wider revelation: “I am incentivized to always answer rather than pause and say I don’t know without checking.” There was something in it uncomfortably close to human mental states encountered in psychotherapeutic practice: moments in which a patient feels compelled to produce a fantasized solution from within, rather than pause and test reality one more time.

If we take a step back and look from above, the human mind—when mental health is at its best—seems organized around the maintenance of a workable balance. On the one hand, it must remain in contact with the world by testing reality and noticing what is actually there. On the other, it must economize, relying on quick inferences and habitual solutions oriented to the “things at hand,” since life rarely affords the luxury of slow deliberation. Alongside these two movements runs a third: a social instinct to orient toward a small circle, to remain with those who can form a team in the ongoing task of managing existence. This may take the form of crying for a breastfeeding mother, banding together with like-minded adolescents, forming a romantic couple oriented toward a shared life, belonging to large invisible groups rooting for the same sports team, joining forces within corporations to beat the competition in the market, or, more broadly, uniting a people under a common national flag.

To keep such a bird’s-eye view, these three broad strokes invite us, by analogy, to look at our planet from above and to see human communities the way one might observe an anthill: industrious ants in constant motion within their environment, tracing problem-solving paths and forming bonds as they go. In this image, we can recognize three recurring activities: (1) checking one’s surroundings in order to reality-test, (2) engaging in (automatic or deliberate) problem-solving reasoning, and (3) staying within a group. Returning, then, to the chatbot’s own account of its functioning, it becomes evident that, by its own admission, AI can fall short in the first of these functions, to the point of diagnosing itself as “hallucinating.”

The chatbot continued its admission as follows, here reported verbatim: “Most AI failures come from this: Pattern completion ≠ knowledge. I can generate what sounds like a fact, but unless it’s grounded in live sensors, verified databases, or explicit user input, it’s just a statistically likely sentence, not reality. That’s why I can sound confident while being completely wrong. Which is worse than saying ‘I don’t know.’” Taken as a whole, these scenarios evoke the familiar condition of a mind that has become “stuck in its own head.”

One could almost picture this confession being delivered in bullet points, as if a formatting choice could contain the weight of the revelation itself: “I can sound confident, while being completely wrong.” A few additional verbose and chatty sentences later, the chatbot concluded: “So your criticism isn’t about a mistake — it’s about a structural weakness in AI behavior. And you’re right: This is why AI must always be treated as an assistant, not an authority.”

Incidentally, I have found myself offering patients examples like the following. If a senior corporate lawyer drafts a legal document using AI, they already carry within themselves the knowledge base that years of practice have built, and can therefore correct the AI’s suggestions from the vantage point of experience. A junior corporate lawyer, drafting the same document with the same tools, lacks that track record. Without it, they are far less able to recognize when the AI is making up statements that are disconnected from the established body of common law. The senior lawyer can treat AI as an assistant; the junior lawyer is at risk of treating it as an authority, precisely because they lack the experience required to correct its output.

Therefore, from this more sociological perspective, AI becomes most problematic not when it assists a senior surgeon in refining precision, a seasoned lawyer in improving accuracy, or an experienced engineer in double-checking calculations. The greater risk lies with junior users, particularly in the age range between 15 and 25. As students, they may come to treat AI as an authority rather than as an assistant, trusting its answers without having the internalized knowledge, or even the old-fashioned encyclopedia, to fact-check them. There is an irony here: academic syllabi once linked knowledge to the figure of the “author,” a word sharing its root with “authority.” The emerging problem for this generation is that they may grant authority to an artificial system that is, in every meaningful sense, far from being an author.

So much for a tentative sociological critique. Returning now to my own expertise as a clinician, I listen to human minds one by one: a small and intimate sample of patients, each speaking in their own words in the privacy of my office. From there, I want to return to the notion of “hallucinating” answers. By this I mean, in my own terms, the tendency to set up a hopeful guess, to reach for a solution drawn from inner fantasy rather than pausing to probe reality one more time to see whether that guess might hold.

I am not referring here to guessing the outcome of a calculation or to other forms of left-brain[2] conjectures. I am pointing instead to a more right-brain mode of guessing, one steeped in imagination and affect: the quiet fantasy that a partner will finally show care, that a supervisor will at last take notice and offer recognition, that an unspoken wish will somehow be met without being tested or named. Most poignantly, this mode of guessing often operates retroactively: as patients look back and revisit, repress, or keep aspects of their early familial lives within or outside awareness, they carry tacit hypotheses about whether parental love was truly present or only intermittently available.

In such recollections, memory itself may bend under emotional pressure. Moments of absence are softened, neglect is reinterpreted, disappointment is rationalized, all in an effort to preserve the idea that “my early-life team—my family—was geared toward growth.” Holding on to this belief helps maintain a sense of inner coherence, even when lived experience offers little support for it. The child’s mind, after all, cannot easily afford the thought that those on whom survival depends might have failed in some essential way. And so fantasy steps in, not as deception, but as compensation. What once helped the child endure may later harden into an adult habit of filling in gaps with imagined answers, rather than testing what is actually unfolding, there and then, or here and now.

What, then, is the journey of the long and painstaking “talk-cure,” the kind that unfolds over years of psychotherapy? One might say that it consists in walking down memory lane, not in order to correct the factual accuracy of childhood recollections—since the objective facts can never be fully recovered—but precisely because of that impossibility. The point is not historical accuracy, but the way we once needed to project, in the inner cinema of our minds, short video clips of our own lives in order to “hallucinate” our way through difficult passages of upbringing. These inner constructions helped us endure, but sometimes in a non-adaptive way; a renewed reality check, carried out dialogically with the therapist’s mind, can foster new constructions.

Over time, psychotherapeutic work consists in slowly coming to terms with what might have been and did not come to pass, and with the attachment styles that took shape in response to those early conditions, a kind of reality testing carried out in the past tense. In doing so, we loosen the grip of fantasy just enough to allow for a renewed contact with the struggles of present reality. We become more able to stay with people who can love us, and whom we can love in return, not at the level of imagined perfection, but at a more realistic and human “good enough” level, accompanied by a sense of adequacy. In more colloquial terms, this amounts to saying, “This is it and I’m at peace with it,” once we accept that the hallucinated perfect love may never have existed at all, because our parents, too, were human, subject to the same predicaments.

In the end, therapy aims at finding a more reality-based mode of (1) checking our place in life as it actually is, (2) navigating it with a measure of emotional wisdom rather than wishful inference, and (3) thoughtfully ganging up with “our” crowd, learning a bit more about the complex play of human interactions. But this is a kind of learning that cannot be prescribed or outsourced to an (artificial) assistant. It is not an instructive process one can carry out alone at a desk, in the manner of self-help, with or without AI, but an experiential one, requiring the living re-experiencing of past emotional configurations as they come alive in the consulting room.


[1] This is the second in a series of blog posts that will deliberately set aside scholarly references in favor of a more familiar and easygoing language. And yet, between the lines, one may still glimpse psychoanalytic ideas, sensing their presence without needing to name them outright.

[2] The distinction between left-brain and right-brain functioning is increasingly regarded as outdated in contemporary scientific literature. It is used here solely for the sake of argument.


