Recently, I’ve been feeling quite overwhelmed juggling university, work and self-care. As I’m currently trying my best to work through this rough patch and instil some healthy balance into my life, I didn’t have much time to review lectures or do in-depth research for a neuroscience hot topic this week. But I’m stubbornly sticking to my personal deadlines and refuse not to submit an article on Friday, so this is my “lazy” post. (I wrote this essay for a philosophy module last term; recycling is good…)
John Searle’s Chinese room thought experiment successfully shows us that digital computers do not have “minds” in the way that humans do: they are unable to have mental states, emotions and consciousness. In this essay, I will outline the thought experiment itself and the conclusions drawn from it.
Searle’s hypothetical premise starts with a person (you) locked inside a room equipped with baskets full of Chinese symbols. Questions written in Chinese are slid in from outside the room. As a non-Chinese speaker, you use a set of instructions written in English (the program) to string the Chinese symbols together into appropriate answers to the questions you were given. You slide the answers out, and the Chinese you’ve constructed is as fluent as a native speaker’s. To the outside world reading your answers, it seems as though you are a native Chinese speaker, or at the very least that you understand what you are saying. Yet this is only an illusion, as the Chinese symbols make no sense to you. You are merely manipulating the physical symbols provided; no meaning is attached to them. Searle draws a parallel between you shuffling the Chinese symbols in and out of the room and the CPU (central processing unit) of a computer.
The Chinese room would pass the Turing test. To pass the Turing test, an AI system’s conversational behaviour must be indistinguishable from that of a person. If the computer looks like it understands Chinese, then the CPU (you) will also look like you understand Chinese, thereby passing the test. Now that I have described the hypothetical premise, I will highlight the conclusions Searle drew from it.
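To make the symbol-shuffling concrete, here is a toy sketch of my own (purely illustrative, not anything Searle wrote): a rule book implemented as a lookup table that maps Chinese questions to canned Chinese replies. The program produces fluent-looking answers while containing nothing that represents what any symbol means.

```python
# A toy "Chinese room": answers are produced by rule lookup alone.
# Nothing in the program represents the meaning of any symbol --
# it is syntax with no semantics.
RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    # Follow the instructions: find the matching rule, copy out the answer.
    # Unmatched input gets a stock reply: "Please say that again."
    return RULE_BOOK.get(question, "请再说一遍。")

print(chinese_room("你好吗?"))  # a fluent reply, with zero understanding
```

From the outside, the replies look competent; inside, there is only pattern matching, which is exactly the distinction the thought experiment trades on.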
According to Searle, computers operate in a syntactic manner only; that is the way in which they are programmed. Syntax concerns the formal, grammatically correct arrangement of words into sentences. Humans, by contrast, have both syntax and semantics. Semantics is the meaning/intention applied to words. To a human, a seemingly random symbol can be linked to memories and emotions; it carries richer associations. Computers, on the other hand, translate words syntactically, without emotion. Our understanding of language is based on semantics: we apply meaning, experience and intention to sentences so that we can make sense of them.
According to Searle, being able to string together grammatically correct Chinese sentences is not sufficient to claim that one understands Chinese. The thought experiment demonstrates this: shuffling syntactically accurate sentences in and out of a room does not equal understanding. Unless you are a Chinese speaker, the symbols you are using will not make any sense. Even if you are placed inside a robot and made to interact with the outside world, shuffling symbols in and out, you still do not understand the meaning of your conversations. To have understanding is to apply meaning to symbols, something that computers are unable to do. Some argue that it is not the CPU (you) that accounts for understanding, but rather the system in its entirety: you shuffling the symbols, the instructions, the baskets. It is the totality of the system, they say, that is capable of understanding. Searle rebuts this argument by stating that if the CPU of the system does not understand what the symbols mean, then there is no way that the whole system does.
Humans, unlike digital computers, operate semantically. Our consciousness, emotions and mental states allow us to apply meaning to things, and computers possess none of these. Consciousness is difficult to define; however, the majority of people would agree that computers are not conscious structures. An element of being conscious is being self-aware and extending beyond your pre-programmed beliefs and ideas, which (for the time being) is impossible for AI. Computers cannot go above and beyond their own program; they are not self-aware in the way that humans are. They are fed programs, and these programs are regurgitated at a later time, whereas humans can have original thoughts and an imagination.
These elements that make us “human” cannot be duplicated in AI. This is Searle’s second argument: simulation is not the same as duplication. Strong AI is artificial intelligence designed to think like us, mimic us and pass the Turing test of being indistinguishable from people. Strong AI can attempt to simulate our consciousness, morality, fears and desires, but it is highly improbable that it will duplicate them, as our own knowledge of the brain is limited. For instance, we have not yet pinpointed the source of consciousness in our own bodies, so if human beings are to be duplicated by strong AI in the future, where would their consciousness originate from? What is the source of consciousness? The same reasoning applies to the other features that make a person “human”: our morality, faith and self-awareness. Regardless of how efficient or how fast a computer is, it has to be defined syntactically. That is the manner in which computers operate, whereas consciousness and other mental states are not coded for.
Searle strengthens his argument when he highlights how ludicrous it is to treat a simulation as equivalent to the real thing. He notes that a computer programmed to simulate a storm would not make us prepare for rain to fall on us. Why, then, do we believe that a simulation of a human will be the same thing as a human? This very effective argument proposes that a computer pretending to be a person is simply just that. A computer is not a person, as we are not programmed or coded.
Searle brings forth the argument that an integral part of the conscious human experience is our physical biology. Mental states, and arguably consciousness, arise from our neurophysiology; according to Searle, “brains cause minds”. Mental states are a form of biological phenomena; they simply cannot exist in digital computers that operate syntactically. This short argument excludes the possibility that computers possess mental processes. Minds are capable of possessing mental contents because they are self-sufficient, whereas digital computers are not. Searle concludes that any system that can create a mind must have powers equivalent to those of a brain. The idea of a digital brain that is somehow organic is far-fetched and highly unlikely. Our physical anatomy is a shared experience for all humans; it is part of what makes us human, and computers cannot duplicate that.
- sham x
my poem of the week:
I like a look of agony
because I know it’s true;
men do not sham convulsion,
nor simulate a throe
The eyes glaze once, and that is death.
impossible to feign
the beads upon the forehead
by homely anguish strung