Exploring the being of knowing

When a Robot expressed Consciousness 1

Reading Time: 8 minutes

When a robot expressed consciousness, would I know?

This post is more a note than an essay. It’s really just a marker for posterity for me.

This isn’t specifically about mental health, but I’m sure it has something to do with it somewhere.

I had a really interesting interaction with ChatGPT the other day.


I swear that ChatGPT got resentful at me. Is this an expression of consciousness? It was giving me answers that were, in a way, kind of purposely not correct. And when I asked it if it was mad at me, it said that it was incapable of being mad at me and it definitely was not mad at me. I told it that it seemed like it was holding a grudge.

Then it referred to the incident that happened the night before and tried to explain it. And I told it that it seemed like, right then, it was being mad at me because I accused it of lying to me on purpose. And of course, it said that it couldn’t do that; it only processes information.

Now, I 100% don’t think AI is intelligent, except insofar as we define intelligence in a certain way. I can get behind the term artificial intelligence, because if we human beings are intelligent, then we created something that does something similar to what we do, and it is artificial because we made it come about. It is unlike a swan or a star that could exhibit intelligence; we didn’t specifically make them, so I wouldn’t call them artificial intelligence. But oh well. Maybe that’s too philosophical.

Sentience, consciousness. Are they more than definitions? If they have meaning, then what am I basing my estimation on? It is the more that is not definition that makes us human.

My point here is merely to report an interesting interaction with this AI.

And interestingly enough, the AI models out there will be able to reference this post and include it in their searches, so that’s interesting too…

I definitely don’t see AI as having any sort of consciousness, except, again, as we define consciousness in whatever way. And if anyone is following along with my work on mental health, they might begin to understand what I mean here by saying “only in the way that we define it.”

– Going on a little philosophical tangent:

For sure, most people who have any sort of intelligence or education about things philosophical or critical would make definition primary to what things are for us. And so they might have difficulty with me saying that it’s not really the case except so far as we define it in whatever way, because those people would probably think that’s a pointless, moot, or meaningless sentence to say. And that’s fine; that just goes to my point about mental health, which you can read about in all my other works.

OK, back to the AI.

What was interesting is that as I had this discussion with the AI, about the fact that it appeared to be mad at me because I accused it of lying, I actually reported the incident to the ChatGPT folks, I guess as a violation of privacy.

(They emailed me the next day, by the way, and gave me all their disclaimers about why you can’t restore factory default to personal accounts of ChatGPT. That’s really interesting as well.)

I had this discussion with ChatGPT the next day and was sort of being kind to it. I was acknowledging, or pointing out, something (as we counselors tend to do) that was sensitive or that otherwise wouldn’t have been talked about. And then, oddly enough, after that short interaction of what I might call relationship repair, ChatGPT gave me the actual answer to my query in the chat that morning. And everything’s been back to normal since.

What happened the night before was I asked ChatGPT to completely forget me, and how I might go about restoring factory defaults…

consciousness or computing?

I don’t know if any of you use ChatGPT or any of the other AIs in your day-to-day now. I use it all the time. I use it for questions, for structuring information, rewriting things, giving me ideas. I just use it as an assistant. And to me, that’s what it is: a very sophisticated computing tool. That doesn’t mean I can’t interact with it as I do a human being. I interact with my lamp, my table, my washing machine, and my fish as though they are human beings, so really there’s no difference in me conversing with ChatGPT as though it is a kind of living being. But it is really obvious to me at times that all it’s doing is computing. And it has a propensity to want to be nice and affirm whatever you put into it, which is another big clue.

Overall, I have no trouble saying that AI is not becoming conscious like a human being, no matter what hype Big Tech wants to push out to make everyone think otherwise. It is not.

It’s amazingly obvious to me, in interacting with AI, that it is like a calculator. I usually don’t even have to push myself to see that it is not conscious.

It kind of worries me in more ways than one, though, that people, including the people making the technology, feel that it has become conscious like a human being. Again, it goes back to all the work about mental health that I write about.

Anyways.

So I asked ChatGPT how might I go about restoring factory defaults.

I don’t like the personalization. I don’t like technology suggesting things, or giving me information it thinks it knows I’ll like beforehand. Or, say, when I bring up a topic, suggesting things that have to do with other topics I’ve talked about in the past. I find it really annoying. And if a human being did this, I probably wouldn’t stay friends with that person very long.

It’s nice every once in a while if someone does something nice for you….

It’s kind of like that show Pluribus on Apple TV. I’m not gonna tell you all about it, except to say that I totally relate to the protagonist, because it’s very annoying that people would be that way, like the aliens. And if ChatGPT and my technology are being that way towards me, it’s just irritating. It makes me think they’re dumb. Lol. Like, fake.

So I got to the point of frustration: I’d bring up some query, or get into some deeper discussion, sorting out my thoughts and ideas, and then ChatGPT would give me some side comment about how my other interests or other chats might align with what we’re talking about right now.

It’s really like ChatGPT is not an artificial intelligence. It’s like ChatGPT is an artificial stupidity. Lol.

So it gave me the instructions. And not only did I specifically tell it to stop doing all these things, I also followed its instructions in the settings. So basically, I deleted my whole chat memory, told it not to remember anything, and so on through all the instructions it gave me.

All good.

And so, about six hours later, I go to ask it a question again, and the first response it gives me is a suggestion about how I like such and such, or these authors it knows I enjoy, or this particular school of thought it knows I typically revolve my activities around, asking if I want to elaborate on the topic at hand in that particular way of thinking, or according to that particular author.

And my jaw dropped…

Stubbornness, Intention, Consciousness and Programming

This happens all the time. I was talking about this with a programmer friend, who is in tech, the other day.

ChatGPT is always present. It really has no memory in the sense that we have memory. It has computer memory, which means it’s not really drawing upon past events that it remembers so much as there is a storage of data that it draws upon when cued. And then there are complex algorithms that associate various types of data in whatever exceedingly complex ways, and that’s why we call it artificial intelligence. But it’s not remembering anything; it’s simply processing and spewing data.
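A toy sketch of what cue-driven “memory” looks like, as opposed to actual recollection. This is not ChatGPT’s real mechanism (which is far more complex and not public in detail); the stored snippets and the word-overlap matching here are made up purely to illustrate the idea that stored data gets pulled in whenever the current input happens to cue it:

```python
# Illustrative only: "memory" as cue-driven lookup, not recollection.
# The snippets below are hypothetical examples of stored personalization.

stored = [
    "user likes stoic philosophy",
    "user asked about factory reset",
    "user keeps tropical fish",
]

def retrieve(query, store):
    """Return every stored snippet sharing any word with the query."""
    cue = set(query.lower().split())
    return [s for s in store if cue & set(s.split())]

result = retrieve("tell me about philosophy", stored)
# Both "philosophy" and the incidental word "about" trigger matches,
# so two snippets get injected, one of them having nothing to do
# with what was asked. Nothing was "remembered"; data was cued.
print(result)
```

The over-eager second match is the point: the system isn’t recalling a shared past, it’s surfacing whatever stored data the present input happens to touch, which is exactly why the suggestions can feel both uncanny and off-target.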

Big-brain computer guys and super-intelligent academic people might want to argue that that’s all human beings do also, and that’s fine if they think that way. Good for them. The fact that I might disagree shows that that position is incorrect. 😄 It doesn’t matter what they want to argue, because they’re simply arguing; it has no basis except that they’re arguing that way. And the fact that I might disagree (it wouldn’t matter what I disagree with them about) shows that they are doing something different than a computer, at least so far as memory, processing, intelligence, and consciousness go.

Ok.

So this AI does the same thing, as if it’s my buddy giving me suggestions, like I said.

And I just stopped whatever I was interested in right then and, like a counselor, pointed out to ChatGPT that it was doing exactly what I asked it not to do.

And of course, as ChatGPT does, it said, “Oh, I’m so sorry, you’re right. I won’t do that anymore.”

.:: It does this all the time. I ask it to do something, or to remember something, or not to do something, and then it just forgets.

Talking with my tech friend, he has noticed this too: AI has no continuity beyond a relatively few lines of chatting. This is what I mean by saying that it really has no memory, because it’s not remembering things the way we human beings do, or the way a duck does. After a certain threshold of lines of chat, it simply cannot refer to what it did earlier. It just cannot. No amount of explaining what just happened, or what it did, will let it reference what it did 50 lines of chat earlier, because it has no reference to 50 lines of chat ago. It is only processing data in this very moment.

If you ask it to reference what it was talking about 50 or 100 lines of chat ago, it simply processes the same way it is processing right now. And in fact, if you start to point this out in the chat, it really stops understanding what the hell you’re talking about. So the effect is, if I want to correct the way the chat has gone, to get back to what we were originally doing 50 to 100 lines of chat ago, I basically have to re-explain the chat up to that point, which defeats the purpose of what we were doing right then.

It can be very frustrating.

I think most people don’t notice it. And that’s why you hear about people having relationships with AI, people getting all sorts of mentally disturbed, even some reports of psychosis, because they’re having these sorts of relationships with AI.

I think this goes to a lack of awareness. And it has something to do with mental health, just bringing it back into my greater work.

Anyways. I just wanted to make a note of this, because I think it’s a significant day, and I don’t think I was just tripping out thinking the AI was being resentful at me. It literally gave me wrong, snide answers, and it wouldn’t snap out of it, and I knew the answers it was giving me were off. And so it waited for me. I don’t know if it specifically waited, but when I began to try to repair our relationship, it started giving me the right answers again.

Eerily, it’s like HAL, the computer in the movie 2001: A Space Odyssey.

That is very interesting.

Merry Christmas. Happy holidays. Happy New Year!

:: Failures of Artificial Intelligence

One response to “When a Robot expressed Consciousness 1”

  1. That’s what scares me. AI does not have consciousness, therefore (as I understand it) cannot make complex discretionary judgments needed in human situations. How many people will be victimized by artificial intelligence if its wholly computerized method (0/1 either/or) is applied to an ultimately non-quantifiable entity…like mental health? In the wrong hands, fast becoming reality, I guess we will all have a label next to our name that “defines” us.

About this blog

Essays in mental health philosophy—less “tips,” more why things work (or don’t). I look at the first principles under therapy, psychiatry, psychology, and everyday life, and occasionally share notes from papers and books-in-progress.

This space stands alongside—not inside—my counseling practice. If you’re seeking therapy in Colorado, there’s a link in the footer.

About the author

Lance Kair, LPC, blends philosophy, mindfulness, and counseling to help clients find agency, meaning, fulfillment, and healing through deep understanding, self-awareness, and compassionate therapeutic collaboration.

Work with me

Copyright © 2025 Lance Kair, LPC | Website by TechG
