A friend and I once took an evening class with an anthropology professor who had some pretty unique theories. I had some difficulty staying alert in that class, it being a long evening lecture and all, but these theories were enough to wake me up and make me wonder whether I’d just fallen asleep and dreamed he’d said them.
One of the less wacky theories was that children are optimal foragers (the class was about hunter-gatherers). He said that because of our hunter-gatherer origins we tend to prefer foods with the highest fat, protein, and carbohydrate levels, because those are the three nutrients that keep us from starving to death–vitamins & minerals are of secondary importance. Children are the best at this, which is why they like stuff like macaroni & cheese and fruit juice. So when we try to eat healthy and lose weight, we’re going against our natural tendency to seek out the foods richest in those nutrients, which I guess is why it’s difficult to choose salad over a hamburger. Okay, so maybe it’s not that interesting, but it has stuck with me all these years.
A somewhat weirder theory was that the plants are going to kill us. The prof maintained that because we focus on only a few food species (wheat, corn, and what have you), these species will ultimately evolve defense mechanisms that make them poisonous to humans, to stop us from interfering with their reproduction by eating or confiscating their seeds. Now “the plants are going to kill us” is prima facie wacky, and more difficult to maintain from an evolutionary point of view, since domestication has actually expanded the habitat of these species, probably several thousand-fold, and they are so heavily engineered that a natural mutation doesn’t have much hope of working its way into the general population. But maybe I didn’t fully understand his argument. Either way, the important thing is that I got to use the phrase “the plants are going to kill us” in my blog.
Yesterday Andy and I were discussing the concept of artificial intelligence and how it always seems to be perceived as hostile to humanity. I maintain (ha, you thought you were going to get out of this without one of my own personal wacky theories) that if an artificial “intelligence” or “awareness” could be created, it would be very different from human intelligence. First of all, I don’t think you can create a “consciousness” that has nothing to be conscious of–it would have to have sensory equipment and the equivalent of arms and legs with which to move around and manipulate its environment (based on the theories of Husserl, Heidegger, and other hermeneutical theorists, which I am incapable of reproducing here). Second, while I can imagine a computer becoming aware of its surroundings and able to act and solve some problems better than a human, I can’t imagine it writing a poem or painting a picture or feeling altruism or love or hate. It might be able to mimic such things very well, but it would not be having the experience that a human has. Computers now can’t do anything but what a human tells them to do, and they have only the data that a human gives them or equips them to gather. My idea of an “aware” computer is one that doesn’t need a programmer–it recognizes problems that threaten it or its mission, and programs itself to solve them. I don’t see how we can program a computer to have an emotion, or to have a human-like consciousness–largely because we ourselves have only a very sketchy notion of what human consciousness is. I mean, that has been one of the big debates among philosophers over the millennia, and we haven’t come to any kind of consensus yet. Science doesn’t seem to be able to throw much light on the phenomenon either, and the humanities and social sciences only study the artifacts of consciousness. In any case, I’m not too worried about the “machines are going to kill us” hypothesis–I mean, if it’s not the machines or the plants, it’s bound to be something.
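To make the “programs itself” idea slightly more concrete, here’s a toy sketch in Python. Every name in it (the class, the fake sensor readings, the rule table) is invented for illustration; it’s a cartoon of the idea, not a claim about how real self-programming software would work.

```python
import random

# A cartoon of the "aware" computer described above: it has a mission,
# notices problems on its own, and writes new rules for itself instead
# of waiting for a programmer. All names here are invented.

class AwareMachine:
    def __init__(self, mission):
        self.mission = mission
        self.rules = {}  # the machine's self-written "program": problem -> response

    def sense(self):
        # Stand-in for real sensors: sometimes observe a problem, sometimes nothing.
        return random.choice([None, "low power", "blocked path", "overheating"])

    def solve(self, problem):
        if problem not in self.rules:
            # The key move: no human wrote this rule. The machine adds it
            # itself (trivially here; doing it non-trivially is the hard part).
            self.rules[problem] = f"mitigate {problem!r} to protect mission {self.mission!r}"
        return self.rules[problem]

    def run(self, steps=5):
        for _ in range(steps):
            problem = self.sense()
            if problem is not None:
                print(self.solve(problem))

AwareMachine(mission="stay running").run()
```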
There is a lot of good science fiction that touches on these topics. The theory about the plants evolving to kill us is a new one to me, though. The closest thing to it in sci-fi that I’m aware of is Frank Herbert’s The Green Brain. I think the premise is that sometime in the future, mankind’s proficiency at killing off crop-eating insects finally forces the insects to evolve a form of collective consciousness that lets them fight back in their own defense. It sounds interesting, but Frank Herbert’s early works were often fairly mediocre, so I haven’t read it out of fear that it might suck.
The machine-consciousness thing is so interesting. Based on your observations above, would you define a “conscious” machine as one that could independently select and process data of its own choosing, rather than being limited to the data provided to it by humans?
Frank Herbert and Vernor Vinge seem to have a pretty mechanistic view of consciousness. One theory they seem to share is that consciousness is limited only by available processing power: humans are limited to a certain level of consciousness by the processing power of their brains, and computers by the processing power of their CPUs. The idea is that if you created a machine or being with the ability to continually add to its own processing power, its awareness would increase exponentially. Both writers have penned sci-fi stories built on this idea; in Herbert’s Destination: Void and Vinge’s A Fire Upon the Deep, humans manage to create a mechanical artificial intelligence only to watch helplessly as it expands, usually within just a few seconds, until it has attained godlike levels of awareness.
Anyway, maybe some fun reading for you. These books are all out on the living room bookshelf if you ever want to take a peek at them 🙂
It’s been a while since my Philosophy of Mind and AI classes at Calvin, but one theory that seems popular among some philosophers is that of an emergent consciousness. That is, humans can’t *create* consciousness in a machine, but if we can build a sufficiently complex system (one as complex as, say, the human brain), then perhaps a consciousness will emerge on its own (there’s a toy sketch of this flavor of “emergence” at the end of this comment). Of course, for that to be the case, you would have to buy into the whole emergent-consciousness theory, which would mean rejecting the idea that we are imbued with a consciousness by some external influence. Unless you think that this external influence finds sufficiently complex systems, and then imbues them with consciousness.
So, the point is that you need to make a positronic brain that is attractive enough to God that he decides to bestow consciousness on it. Hopefully, it won’t then kill all humans and take over the world.
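As for the promised sketch: Conway’s Game of Life is the stock example of emergence. Three trivial local rules, and structures nobody programmed in (gliders, oscillators) show up on their own. This is strictly an analogy for “emergence,” not a claim that consciousness works this way.

```python
from collections import Counter

# Conway's Game of Life: lifelike behavior emerging from trivial local
# rules. Offered purely as an analogy for "emergence".

def step(live):
    """live is a set of (x, y) cells; returns the next generation."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has exactly 3 live neighbors,
    # or 2 live neighbors and is already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that, under the rules above, crawl diagonally
# across the grid forever, a behavior no individual rule mentions.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider shape, shifted one cell diagonally
```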