why is this so much more interesting than school?

The comments on the last post raised some new ideas about machine consciousness, and I decided to post a new entry instead of making a really long reply. This isn’t exactly a response, just some new stuff on the subject which I felt like writing about. It’s very long, and I’m not at all sure I’ve accurately represented the theories I cite, but it was fun to write.


The whole idea that a computer can achieve a “consciousness” raises a lot of questions about the nature of the human brain, the nature of “consciousness,” and the nature of computers. If you take an evolutionary view, the human brain is basically the result of a bunch of random mutations which were passed on more than other random mutations, because they allowed their carriers to create more offspring and/or raise more offspring to the point that they too could produce more offspring, all of which carry the original random mutation. Since there is no mechanism by which a mutation can be directed toward a particular “problem” which threatens the survival of the species (changing climate, decline in food supply, etc.), the trait which is “selected” is simply the best of a completely random bunch. It’s like a presidential election, in the sense that the chosen trait is the lesser of x-number of evils.
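To make that “best of a random bunch” point concrete, here’s a toy simulation (my own sketch, not anything from the sources I cite; the “trait,” “fitness,” and “environment” numbers are all made up). The key feature is that mutation is completely undirected; “selection” just keeps whichever variants happen to score best against the current environment.

```python
import random

def mutate(trait):
    # An undirected random change: the mutation "knows" nothing
    # about what the environment currently rewards.
    return trait + random.gauss(0, 1.0)

def fitness(trait, environment):
    # Stand-in for reproductive success: traits closer to what the
    # current environment happens to favor score higher.
    return -abs(trait - environment)

def generation(population, environment):
    # Each individual produces one randomly mutated offspring; the
    # environment then "selects" the best of the random bunch.
    offspring = [mutate(t) for t in population]
    everyone = population + offspring
    everyone.sort(key=lambda t: fitness(t, environment), reverse=True)
    return everyone[:len(population)]  # the lesser of x-number of evils

population = [random.gauss(0, 1.0) for _ in range(20)]
for _ in range(100):
    population = generation(population, environment=5.0)

# The average trait drifts toward 5.0 even though no single mutation
# was ever aimed at that target.
print(round(sum(population) / len(population), 2))
```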
The mechanical differences between humans and most other species include bipedalism, the opposable thumb (my favorite), and most importantly, a thicker cerebral cortex. Whatever advantage our cerebrums gave us, they allowed us to survive and thrive despite our relative slowness, weakness, and vulnerability to climate, predators, etc. The better cerebrum would appear to give us the ability to process more data faster and better, and to come up with more creative solutions to problems (as is also seen in other primates). However, as far as I can tell, it’s anybody’s guess where human “consciousness” comes in: our self-consciousness, our apparently useless perception of abstractions such as beauty and truth, our ability to plan for the future, our capacity for religion, and so on.
So here’s my theory of the human brain: As a result of the evolutionary process by which the human brain was created, it is a fairly random bundle of abilities and processes, some of which are relevant to the current environment in which the species finds itself, some of which are outdated or perhaps even inimical to our ability to survive in this environment (since trait selection or deselection takes a long time, and our current environment is the product of the very, very, very short term). Per the book The Blind Watchmaker, the brain was not designed by anyone to do anything; rather, a bunch of cells at one time became able to reproduce themselves, and everything since then has been random mutations which were “selected” according to which allowed their carriers to produce more offspring than everybody else in a given environment. In contrast, computers are designed by somebody; they are designed to be efficient at doing whatever task the designer designs them to do.
Now, this is a secular view of the human brain, of course. If one believes that God designed the human brain, either all at once or through evolution, much the same thing can be said of it. The difference is that in this case God created the brain like this on purpose. From a Christian point of view, this means first that God created us in His image, and the nature of God is fundamentally different from the above-outlined nature of a computer; and second, that God created us to desire a relationship with Him and to worship Him. The bundle of abilities He gave us is, then, much different (qualitatively different) from the types of abilities anybody would want to give a computer (I don’t want a computer to love me, I want a computer to correct my spelling errors and let me read my email).
Thus, I think that the particular genius of the human brain is not primarily (or at least not only) that it’s better and faster at processing data. Its genius, its “consciousness,” lies in its inefficiency: its nature as jack of all trades, master of none; its ability, seemingly useless from an evolutionary perspective, to perceive beauty, to love a God, and so forth. Therefore, I don’t think that simply making a computer with a better processing capacity could allow it to develop consciousness. I mean, we already have consciousness, and computers already have greater processing capacity than we do. If we’ve already crossed the threshold at which an entity can achieve awareness, shouldn’t computers have done so long ago?
Next is the concept that a more “complex” computer might be able to achieve consciousness. First of all, why would anyone want to create a computer that is complex in the same way that the human brain is complex? Would somebody want to design a computer with a lot of conflicting, outdated, or useless abilities and capacities that would surely make it less efficient at anything we would want the computer to do? Even if one deliberately set out to create a computer that mirrored the human brain, I think the attempt would fail. There’s the old saying that if the human mind were simple enough for us to comprehend, we’d be too stupid to comprehend it. I maintain that we simply don’t know enough about the mechanics of the human brain (though someday we might), and more importantly, we don’t know enough about what human consciousness or the human “mind” is to create an artificial one, or even to venture any intelligent guesses as to whether a brainlike computer might develop one.
Another point of view from which I’d like to approach this subject is the hermeneutic one again. (I don’t know much about what I’m about to talk about, and I know most readers of this blog know more about it than I do, so I’ll just hope that I’m not grossly misrepresenting the theories.) Rene Descartes, when he was trying to discover what we could know for sure, what could not be the result of delusion, determined that the only thing we can know for sure is that we ourselves exist. “I think, therefore I am”: if I am thinking, then I must exist, because there must be something doing the thinking. We can’t say much about what this thing is that’s doing the thinking, and can’t say anything definite at all about anything else, since everything else might be an illusion. (The “brain in a vat” scenario behind The Matrix shows that absolutely everything we think we are experiencing might be an illusion, just electrical impulses shooting through our brains.)
In doing this, Descartes introduced the “subject-object dichotomy” into philosophy. This suggests that the “subject,” me or you or whoever is perceiving, is fundamentally different from the “object,” everything perceived. The gap between the two categories is unbridgeable, which suggests that the categories are separate and don’t affect each other. The subject has a privileged position of objectivity from which he/she can view the object, and furthermore the subject has no effect on the nature of the object.
Heidegger and other phenomenologists suggest that this dichotomy is a false one. They state that there is no such thing as consciousness that is not consciousness of something. If there is nothing for the consciousness to perceive, to be conscious of, the consciousness cannot be said to exist. Hence, the “subject” is dependent on the “object.” Furthermore, the “subject” and “object” influence each other: the things that we perceive affect the way we perceive things, and the way we perceive things affects our “reality.” We can’t be entirely objective; our prejudices and past experiences will always intrude on the way we perceive things. These prejudices and past experiences are what give order and meaning to what we perceive; without them there would be just a meaningless booming-buzzing of data. The best we can do is be aware of our prejudices and acknowledge that we have shared values and standards (which is not to say we have no control over them, only that we can’t pretend we don’t have them).
On to the concept of the computer consciousness that would be able to transcend us and become god-like. A computer “consciousness,” even if it were much, much better and faster than ours, would still be limited to what there was to be conscious of. If you set the computer in a room and equipped it with “eyes” (including microscopes and telescopes) and arms and legs, or the equivalent, with which to manipulate its environment, it would probably be able to learn a lot more, a lot faster, about the stuff in that room than a human could. However, it would still be limited to the data available in that room, and would be limited by the capacity of the microscopes etc. with which we equipped it.
Now, the computer might be able to perceive the cellular and molecular structure of the stuff in the room, and make some deductions as to what other things made out of those building blocks might be like. It might be able to construct better telescopes, microscopes, testing equipment, and so on than we have, and hence transcend us in that way. But it would always be limited to what its sensory equipment could tell it, and hence would not be “god-like” in any sense of being independent of those things. Also, insofar as the universe is presumably infinite, it would never be able to achieve a god-like complete knowledge and awareness of the universe, because it would exist in time: however much less time the computer would need to learn much more about the universe than we can, it would still need time, and due to the limitations of physics (at least in the time it would take to create better microscopes and so forth), this would take a significant amount of time, not seconds.
Even if the computer became god-like in relation to us (as we might be said to be god-like in relation to other species), it wouldn’t be God. It might be better at manipulating its environment than we are, but it wouldn’t be omnipotent, because it would still be limited by space and time as we are. If one could say that it wouldn’t be, due to dimension-jumping or some type of physics thing that I wouldn’t understand, then I’ll have to reconsider. However, so far the computer has only transcended us in knowledge of, or let’s say awareness of, the universe. Would this somehow lead it to achieve self-awareness, emotions, perceptions of truth and beauty, and the like? Is improved processing capacity what led to those things in us? Perhaps those things aren’t necessary to “consciousness,” but if not, I go back to my position that a computer “consciousness” would still be qualitatively different from that of a human.