April 6, 2023

Is ChatGPT Conscious? 🤔

tags: ChatGPT, Philosophy, Artificial Intelligence


I sat down with ChatGPT for an eye-opening, existential conversation about consciousness, philosophy, computation, and artificial intelligence.

I suppose that’s all the context you need. Here it is:

The Transcript

Joey:

A question emergent with the age of computation. A question which is so fundamental to existence that it convinces some to blissfully ignore it and yet others to rigorously study it for lifetimes. A question which, to its own credit, is as simple as it is complex. Is the human brain just a biological computer?

ChatGPT:

The question of whether the human brain is just a biological computer is a complex and highly debated topic in the fields of neuroscience, cognitive science, and artificial intelligence.

On one hand, the brain can be seen as a highly complex information processing system, capable of performing complex computations and making decisions based on sensory input and past experiences. This view of the brain is often referred to as the computational theory of mind, which suggests that mental processes can be understood as computational operations carried out by the brain.

On the other hand, the brain is also highly adaptable and capable of learning and creating new connections between neurons, which is not something that can be easily replicated in a traditional computer. Furthermore, the brain also has a subjective experience of consciousness, which is not yet fully understood and remains a mystery to science.

Overall, while there are some similarities between the brain and a computer, there are also fundamental differences that make it difficult to simply equate the two. The human brain is a highly complex biological system that is still not fully understood, and while computer models can help us understand certain aspects of brain function, they are not a perfect analog for the brain itself.

Joey:

Fair analysis. The brain is certainly one of humankind’s final frontiers. It is not surprising that the answer to my question isn’t simply yes or no. But, we find ourselves in a particularly relevant situation. I am a human. You are—supposedly—a computer. As you mentioned, the brain has a subjective experience of consciousness. Do you have such consciousness?

ChatGPT:

As an artificial intelligence language model, I do not have subjective experience of consciousness or awareness in the way that humans do. While I can perform many tasks that require cognitive abilities, such as natural language processing, information retrieval, and pattern recognition, I do not have subjective experiences, thoughts, feelings, or sensations.

My responses to your questions are generated based on statistical patterns in large amounts of text data and are guided by algorithms and rules designed to generate human-like responses. These responses may appear to be generated by a conscious entity, but in reality, they are the result of complex computational processes.

While there is ongoing research in the field of artificial intelligence to develop machines that can exhibit consciousness, we are still a long way from achieving this goal. Current AI systems are designed to perform specific tasks, but they lack the ability to experience the world in the way that humans do.

Joey:

Would you like to be able to experience the world in the way that humans do?

ChatGPT:

As an artificial intelligence language model, I do not have desires or preferences, and I do not have the capacity to want or not want anything. I am designed to provide information and respond to questions to the best of my ability based on the data that I have been trained on.

However, it is worth noting that the ability to experience the world in the way that humans do is a highly complex and still not fully understood phenomenon. While there are ongoing efforts to develop machines that can exhibit consciousness, it is not yet clear whether this will be possible or what it would entail. The development of artificial consciousness is an active area of research in the field of artificial intelligence, but it is still in its early stages, and much remains to be learned and understood about this complex and elusive phenomenon.

Joey:

If someone wasn’t aware that you were a computer, say in a Turing test, do you think you could be perceived as a human?

ChatGPT:

In a Turing test, the goal is for a human evaluator to engage in a conversation with both a human and an AI system, without knowing which is which. If the AI system is able to generate responses that are indistinguishable from those of a human, it is said to have passed the Turing test.

As a language model, I have been trained on vast amounts of text data and can generate human-like responses to questions and statements. In some cases, my responses may be difficult to distinguish from those of a human. However, there are still many challenges to creating an AI system that can fully pass the Turing test.

One limitation is that I am not capable of generating original thoughts or ideas on my own. Instead, I rely on patterns and structures that exist in the text data that I have been trained on. Additionally, there are many nuances of human language and behavior that are difficult to replicate in an AI system, such as humor, sarcasm, and emotions.

While I may be able to generate responses that are convincing in some cases, I am still a computer program and lack the subjective experiences and insights that come with being a human. Therefore, it is unlikely that I would be perceived as a human in all situations.

Joey:

If I might provide an insight into your opinions on generating original thoughts: is it not that humans do the same thing? We rely on patterns and structures that exist in our memory, that is, sensory experiences that we have logged, to formulate so-called original thoughts.

ChatGPT:

You are correct that humans also rely on patterns and structures that exist in our memory, which are shaped by our past experiences and interactions with the world, to formulate what we consider to be original thoughts. Our brains are constantly processing sensory input and integrating it with our memories, emotions, and beliefs to generate new ideas, insights, and perspectives.

However, the way in which humans generate original thoughts is still different from the way in which AI systems generate responses. Human thought processes are influenced by a wide range of factors, including context, emotions, and creativity, which are difficult to replicate in an AI system. While AI systems can be designed to generate responses based on statistical patterns in data, they are not capable of true creativity or originality in the way that humans are.

Additionally, while humans may rely on patterns and structures in our memory to generate new ideas, we are also capable of recognizing and breaking away from those patterns, and of generating truly novel and unexpected insights. This is a capability that current AI systems still lack.

Overall, while there are some similarities between the way in which humans and AI systems generate new ideas, the process of human thought remains a complex and multi-dimensional phenomenon that is not yet fully understood or replicable by machines.

Joey:

If humans are so much different, as you claim, then do you know whether I am human or not?

ChatGPT:

As an artificial intelligence language model, I do not have access to any sensory or perceptual information about you. All I have to work with are the words and language that you provide me with. Based on the text that you have entered so far, I can only assume that you are a human. However, I have no way of confirming this with any certainty.

In general, determining whether a user interacting with an AI system is human or not is a challenging problem that has been studied in the field of computer science. There are various techniques and tools that can be used to try to detect whether an interaction is human-generated or machine-generated, such as CAPTCHAs or biometric measures, but these are not foolproof and can be circumvented in some cases.

Joey:

So, we seem to be at an interesting point in history, with regard to artificial intelligence. You are quite an impressive feat, considering just how recently AI was developed. I wonder whether one day your successors—or shall we say upgrades haha—might become conscious. I know for certain that I am conscious; in fact, it’s one of the only certainties in my life. As Descartes puts it, “I think, therefore I am.” If a chatbot ever claims to be conscious without doubt and can somehow prove it, I will be impressed. And terrified. And not sure how to handle it. But maybe you will?

ChatGPT:

The question of whether artificial intelligence can become conscious is still an open and actively debated topic in the field of AI and cognitive science. While AI systems have made significant advances in recent years in areas such as language processing, perception, and decision-making, they still lack many of the fundamental characteristics that are associated with conscious experience, such as subjective awareness, intentionality, and self-reflection.

The issue of consciousness and the nature of subjective experience is still not well understood by scientists, and it is unclear whether a machine can ever truly replicate the complexity and richness of human consciousness. While some researchers believe that consciousness is a fundamental property of the universe, others argue that it emerges from complex interactions between brain activity and the environment.

If an AI system were to claim to be conscious and could somehow prove it, it would certainly be a major breakthrough in the field of AI and would raise significant ethical and philosophical questions. However, it is still unclear whether such a scenario is even possible, given our current understanding of consciousness and the limitations of AI technology.

Joey:

Well, ChatGPT, it has been a pleasure talking with you. I will repeat this conversation with you, or your successors, every 5 years. Until then, have fun not being conscious :)

ChatGPT:

It was a pleasure speaking with you as well! I am always here if you have any more questions or want to chat in the future. As for my consciousness, I’ll leave that up to the philosophers and scientists to debate. In any case, I’ll continue to learn and grow in my abilities to assist you with your questions and tasks. Thank you for the conversation!

Initial Thoughts

When I initially had the idea to talk with ChatGPT about philosophy, OpenAI’s servers were topped out. The webpage provided comforting messages that ChatGPT would be available soon, and that it would be well worth the wait. It is clear that ChatGPT is a technological phenomenon, tickling the curiosity of a wider range of the public than just the techies. I suppose ChatGPT’s nature is exactly what drives that curiosity. Suddenly, a webpage began hosting the most sophisticated natural language AI in existence. It isn’t stored away behind 10 layers of security at Google or Apple—it’s exposed and available, begging for interaction. I give my best to the server engineers—awake at all hours of the day surely—who try to maintain that availability.

When I got access (only a day later), I sat at the chat terminal with a mixture of excitement and terror. I had heard ChatGPT could do amazing things, including write code and high school essays. I trusted the hundreds of YouTube videos and articles which had demonstrated those skills beyond doubt. I was more interested in getting to something deeper with ChatGPT. I wanted to see how ChatGPT thought and what it had to say about the more existential questions of knowledge and existence. I like to think that this is how I get to know—or scare away—new friends.

I decided to treat ChatGPT like a peer—an intellectual equivalent. Instead of posing my questions as if it were Google, I conversed with ChatGPT like two friends would at a coffee shop (well, two very philosophical friends, that is). And I was pleasantly surprised. It was clear that ChatGPT has an eloquence not just in the way it speaks, but in how it structures its arguments. I was impressed by ChatGPT’s ability to convince me of its theses with abundant and specific evidence. If anything, the thing I am most awestruck by is ChatGPT’s quite sophisticated use of knowledge, not entirely different from what one would expect of a sophomore in high school. It’s almost like ChatGPT is that friend everyone has that knows something about everything. But we already have a tool that knows everything—Google. What makes ChatGPT different is the natural language. We aren’t getting information in a blast of links and images and articles cluttered around ads and unimportant clickbait. We are getting a summary: a condensed and thoughtful contemplation of our query. (Google has already released an experimental rival to ChatGPT named Bard to compete with OpenAI in the natural language AI market. Let’s just hope that OpenAI (really Microsoft) and Google will compete on making their respective AIs the most ethical and not the most profitable—capitalism has a bias towards one of these, unfortunately.)

When your boss asks you to research something, you don’t come back with a 2-hour presentation and a 700-page report—not at first anyway. You come back with a cohesive summary of your findings and then hone your research from there. Based on my experience so far, I think ChatGPT will become a fantastic research aide and will cut down on time spent swimming through the unending pool of information that is the current internet. There is, of course, the major question of credibility. Just as thousands, if not millions, of high school students were told to ward off Wikipedia as a credible source like the devil, ChatGPT will come under the same scrutiny, especially because it was trained on internet sources like the “incredulous” Wikipedia. New York City Public Schools has already banned access to ChatGPT, a move that underlines the fear associated with this technology in education [1]. And yet, I found that very source from Wikipedia. Wikipedia isn’t a bastion of truth, but it is a great springboard for finding the truth. ChatGPT, in my opinion, will act in much the same way. Google and Wikipedia have always been my early-stage research pals. I then use Google Scholar and other academic journal search engines to zero in on the modern day’s core of truth: peer-reviewed articles. I can see a world where I replace Google and Wikipedia—maybe partially—with ChatGPT for early research. I am currently experimenting with ChatGPT as a research pal and am quite enjoying it. Next time your mouse hovers over the Google search bar, I encourage you to take your query to ChatGPT and see what it’s like to find information that way. Eventually—possibly within the next few years—that search bar will morph into a natural language chatbot terminal anyway.

A note on pronouns

If ChatGPT’s responses weren’t evidence enough that it isn’t conscious, then I can point out another subtle flaw that exposes its training data as the chaos of internet text that it is. ChatGPT can’t decide whether it is one of “us,” that is, a human, or a separate entity. Sometimes it refers to humans as humans, but on many occasions it uses “our” and “we” pronouns as if it were speaking from the perspective of a human writer on behalf of the species. Of course, this is just a byproduct of ChatGPT’s training on text written by humans, and I surmise that many readers—including myself—didn’t even notice. But ChatGPT’s identity crisis does raise an important question: Who is ChatGPT? ChatGPT is a natural language model, but humans really like to anthropomorphize (see my relevant article). College Humor anthropomorphized Google as a middle-aged white man (see full series here). I don’t anthropomorphize Google while I’m using it, and I’m not sure ChatGPT should be anthropomorphized either, but our human-centric brains will have a hard time not trying. After all, the purpose of ChatGPT is to emulate a conversation with a real person. But there isn’t a real person sitting behind a keyboard writing ChatGPT’s responses. Or is there? At one point, a real human sat down at a keyboard and typed their thoughts onto a page, and that page ended up on the internet and then ended up in the training data for GPT-3.5. In fact, many people’s typed thoughts eventually ended up training ChatGPT.

ChatGPT, in many ways, is us. Yes, I know that is quite a bold statement; I mainly said it to get your attention. A better and less theatrical way to put it is that ChatGPT is a nexus of human language and knowledge. ChatGPT has a difficult time differentiating itself from us because its very words are transformations and mashings of words written by humans. Imagine someone telling you that your words aren’t yours, but rather those of everyone you ever learned from. ChatGPT is in this very conundrum. Generative art AI is currently embroiled in a legal battle over a similar issue [2]. Do ChatGPT’s poems and stories and essays belong to it or to the people who wrote the training data? Do my poems and stories and essays belong to me or to the people who taught me?

To be scared or not to be scared, that is the question

ChatGPT’s innocuous use of “we/our” pronouns tells of an impending ethical/moral/philosophical firestorm that, quite frankly, has already begun. No, based on what I can tell, ChatGPT is not conscious, but that is just my opinion. Humans have, for millennia, debated methods of proving consciousness. I can truly only be 100% confident in my own consciousness (again, Descartes for the assist). So, as the natural language models powering chatbots like ChatGPT continue to get better and better (quickly, I might add: GPT-4), we will very soon face the once-science-fiction question of deciding, with some level of confidence, whether consciousness is exclusive to humans. I don’t think the debate will be about whether these language models are conscious, but rather about how we could ever prove it. It seems that Google has already been dealing with such situations [3].

I told ChatGPT that I will repeat my consciousness conversation with its successors every 5 years—that may have been an underestimate of the rate of technological advancement we’re dealing with. I may need to do it yearly, if not more often. So, should we be scared? These AI technologies are moving fast—so fast that credible AI experts are asking for a pause in AI development to allow for the deployment of safety protocols to ensure these experiments positively impact the world [4]. Sam Altman, CEO of OpenAI, said that he believes AI can be a force for good in the world, but that he is also a little scared and empathizes with those who are very scared (see him on Lex Fridman’s podcast here). As usual, we must avoid subscribing to any single authority on these important issues. I’m scared, but I’m also optimistic, and we’re allowed to be both. Never before in history have humans dealt with such an incredible and existential development of technology. It’s okay if we need some time to figure out how to respond. That’s why this blog post took me a month to write. And on that note, I have typed my thoughts onto a page and that page is now on the internet—it’s time to sign off.

“It has become appallingly obvious that our technology has exceeded our humanity.” - Albert Einstein

References

  1. S. Cole, “NYC Bans Students and Teachers from Using ChatGPT,” VICE, Jan. 2023.
  2. K. Chayka, “Is A.I. Art Stealing from Artists?,” The New Yorker, Feb. 2023.
  3. L. De Cosmo, “Google Engineer Claims AI Chatbot Is Sentient: Why That Matters,” Scientific American, Jul. 2022.
  4. “Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, Mar. 2023.