Back in 1980 John Searle wrote an essay on Artificial Intelligence introducing a concept he called “The Chinese Room”. The essay was written as a criticism of the Turing test as the de facto test for machine intelligence, and it questions whether something can truly be considered intelligent just because it appears so to the outside world.
The concept goes like this: imagine that you, or another person with no knowledge of the Chinese language, are placed in a windowless room whose walls are covered in papers with Chinese writing on them (appearing as meaningless squiggles to anyone who doesn’t know better). You’re given a book that explains, in English, that papers with Chinese characters written on them will be posted under the door; by consulting the rules laid out in the book and the papers on the wall, you write some more Chinese characters on a paper and post it back through. To anyone on the outside it would seem like the person inside the room understands Chinese perfectly well, despite them not knowing a single word.
Taking the analogy further, if time were somehow sped up inside the room then the responses could come back instantly, perhaps even being returned verbally rather than physically. In the field of Artificial Intelligence this would be a machine, perhaps one human in appearance, that responds to interaction verbally, physically and emotionally in a way that would not be distinguishable from a real living human. But ultimately it’s still a machine that only responds according to the way it’s programmed. Even if some of the ways it responds are randomised to give it more of a personality, perhaps responding more aggressively or even hesitating on certain subjects, it’s still just a machine. Even if it were to further develop its own programming to handle new situations, it would be doing so based on the algorithms it was originally designed with. So the question becomes: where do you draw the line between a very well programmed machine and actual intelligence?
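The room’s mechanism can be sketched in a few lines of code: pure symbol lookup with no comprehension anywhere in the loop. This is only a toy illustration; the `RULE_BOOK` entries below are invented phrase pairings, standing in for Searle’s vastly larger hypothetical rule book.

```python
# A toy "Chinese Room": the operator knows no Chinese, only a rule book
# mapping incoming symbol strings to outgoing symbol strings.
# The entries here are invented for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

# Fallback reply for slips the rule book doesn't cover:
# "Sorry, I don't understand."
FALLBACK = "对不起，我不明白。"


def chinese_room(slip: str) -> str:
    """Return the reply the rule book prescribes for an incoming slip.

    No meaning is attached to the symbols at any point; the function
    (like the person in the room) only matches shapes to shapes.
    """
    return RULE_BOOK.get(slip, FALLBACK)


if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # looks fluent from outside the room
```

From outside, the function appears to converse in Chinese; inside, there is nothing but a dictionary lookup. Searle’s point is that scaling the rule book up doesn’t obviously change that.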
Let’s try another angle. Suppose someone suffers an accident in later life and has to relearn how to perform basic actions and social interactions based on a set of rules to ensure they’re responding correctly. Can they no longer be considered intelligent, or even human, just because they’re following a set of rules on how to behave? When you think about it, apart from the accident, this isn’t too different from how a lot of us behave anyway. We all decide how to act based on the situation we’re in and who we’re with at the time, and we generally act differently at work compared to when we’re at home or with a close group of friends. The line may be hard to pin down, but when it gets to the point where something appears intelligent to every form of perception, I’d say there’s little reason why it shouldn’t be placed on the same side of the line as ourselves.
This is a subject that’s popped up fairly often in fiction, although it’s rarely referred to by name. The idea of a machine that wants to be human has become quite the popular trope, appearing in Isaac Asimov’s novelette “The Bicentennial Man”, which tells the story of a robot that fights for the right to be recognised as human. More recently there’s the reimagining of Battlestar Galactica and its constant prejudice against the Cylons, who aren’t considered human by the humans no matter how human they appear. On a slightly different note there’s Peter Watts’ novel “Blindsight”, a hard sci-fi story about aliens, spaceships, vampires and intelligence, told from the perspective of a man who had half his brain removed and is himself a sort of Chinese room.
More details on the Chinese Room argument can be found here:
Blindsight can be read online for free or downloaded in a number of different ebook formats from the following links: