Published on June 29th, 2009 | by Harmonist staff
As many have observed, modern science has become a religion, at least for Western man. Like other religions, it has a priesthood, roughly organized on hierarchical lines. It has temples, shrines, and rituals, and it has a body of canons. And, like other religions, it has its own mythology. One myth in particular states that if, say, by experiment a scientific theory is confronted in reality with a single contradiction, one piece of disconfirming evidence, then that theory is automatically set aside and a new theory that takes the contradiction into account is adopted. This is not the way science actually works.
In fact, some people have the same type of very deep faith in modern science that others do in their respective religions. This faith in science, grounded in its own dogma, leads to a defense of scientific theories far beyond the time any disconfirming evidence is unearthed. Moreover, disconfirming evidence is generally not incorporated into the body of science in an open-minded way but by an elaboration of the already existing edifice (as, for example, by adding epicycles) and generally in a way in which the resulting structure of science and its procedures excludes the possibility of putting the enterprise itself in jeopardy. In other words, modern science has made itself immune to falsification in any terms the true believer will admit into argument.
Perhaps modern science’s most devastating effect is that it leads its believers to think it to be the only legitimate source of knowledge about the world. Being a high priest, if not a bishop, in the cathedral of modern science — my university, the Massachusetts Institute of Technology — I can testify that a great many of what we sometimes like to call “the MIT family,” faculty and students, believe that there is indeed no legitimate source of knowledge about the world other than modern science. This is as mistaken a belief as the belief that one cannot gain legitimate knowledge from anything other than religion. Both are equally false.
Until recently, modern science, seen as a religion, lacked a deity suitable as an object of worship. The machine, which is generally pictured as something that has gears, moving parts, and so on, has existed for a long time now. To modern man the machine certainly represents power, control, mastery over nature — in other words, attributes a worshipable deity should have. But the machine lacks mystery. In fact, it often demystifies, in the sense that people believe that almost anything can be transformed, metaphorically at least, into the form of a machine and then understood as such. The machine has become an almost universally applicable metaphor that demystifies both itself and the thing to which it refers. This holds true for intellectuals of all persuasions as well as for ordinary people. Perhaps most people today think a thing is not understood until it has been reduced to a mechanical process.
I think that this phenomenon has contributed to science’s inability to provide an idol which the faithful can worship as truly representative of their common faith. Now recently, within my lifetime, the computer has appeared, and it seems to me that the computer fills that need. Modern man has seen that machines which physically destroy and reconstruct his environment — the steam-shovel, for example — are made in his own image. The steam-shovel has an arm and a hand, and it digs into the ground, picks up objects and so forth. Clearly, it is a kind of imitation of a certain aspect of man. But the computer takes things a step farther. When instructing a computer to think (if I may use that term for a moment) in imitation of human thought, we cross a subtle line.
Generally speaking, before writing a computer program, one believes that one knows how to solve the presenting problem and how to instruct the computer in such a way as to cause it to do what one has in mind. This is not always an easy task. Programs often don’t work properly and have to be debugged. That is, errors have to be removed — usually a long process. It’s a process of writing, and while writing, one learns. One sits down, believing one knows just what it is one wants to write, just how to program the computer, and in the act of attempting to give instructions, one discovers that one lacks understanding. In this way, one’s knowledge may be improved just by the attempt to program a computer. In any case, once the computer is properly instructed, there is certainly a feeling — and I think it has some solidity — that the computer behaves in the image of man in the sense that one has taught it “to think” (again I use that word) like a human being and to do what a human being would do to solve that particular problem.
But, as I said, this leads to the crossing of a very subtle line, and after running over that line during programming, the first impression many people get is that the person is inferior to the computer — that the programmer is in some way a defective imitation. And in certain ways the computer is better than human beings. This is what gives rise to the feeling, not that the computer is made in the imitation of man, but, quite the other way around, that in a certain sense man is made in the image of the computer. So we may start out by thinking that the computer is modeled after the brain or human thought, but then we turn around and say instead that the brain itself is a kind of computer. For example, yesterday someone pointed to his head and said, “the computer up here.” Perhaps it was intended as an amusing gesture, but at the same time, it was an almost universally recognized comment, one which is, I think, quite serious and, under the circumstances, dangerous.
Artificial intelligence is a sub-discipline of computer science that has grown up in the United States. At this stage, and I would say mainly at my own institution, it is seen as a purer form of intelligence than that embodied in human beings. The computer is considered less likely to be misled by mere judgments and other matters arising from the biological constitution of the human being. I am thinking here of some of my colleagues’ views. For example, Forrester, of great model-making fame, said in print that mental models are always defective and that we can think better and more reliably through a computer.
Obviously, then, the conclusion we must come to is that while sentimental people argue that God is love, the tough modern man, or at least the tough modern Western man, knows that God is really intelligence. I hope it is very clear that I totally disagree with this position. It is, however, the dogma of a for-the-moment-victorious “religion” that worships intelligence and its embodiment in the computer. This “religion” pronounces an apocalyptic prophecy. According to this prophecy — which certainly has a basis in reality — the earth’s people will one day destroy themselves and their gene pool.
Of course the whole human race is in an extremely dangerous situation. The likelihood that we will in fact destroy ourselves is much too large to ignore. It is very, very real. Some of us — I hope most of us — who have struggled against it certainly don’t believe that it is an inevitable or desirable end to the human story. But when one accepts, as many of my colleagues do, that intelligence is in some sense the purpose of the universe, that God is intelligence, not love, that, to put it another way, the purpose (if one may use that term at all) of organic evolution is not the perfection and adaptation of living organisms to their changing environment but rather the perfection and growth of intelligence in the universe, then the extinction of the human race also becomes an acceptable end.
Strange as it may sound, I emphasize again that this view is very widely held among scientists and intellectuals in the United States. Accepting the thesis means that one accepts that the destruction of the human gene pool is not a catastrophe at all, provided, of course, that we, the human race, have assured the continuation of intelligence beyond the human level. In fact, according to some of my colleagues, we have already accomplished this. Even if the earth blows up in an atomic holocaust, we have now sent computers into space which will continue to orbit, to make their computations and so on. Soon, according to this apocalyptic vision, these computers will be able to reproduce themselves, and when they do, the human race will have accomplished its purpose.
This is a satanic vision. In that new Utopia, God will have eliminated the source and power of evil from the universe, and what remains will be a mechanical kingdom in which truth with a capital “T” and righteousness, or pure intelligence, can reign undisturbed forever. This reasoning, which, as I said, is more or less explicitly gaining dominance amongst scientists, technologists and many intellectuals, is a philosophical foundation on the basis of which the destruction of the human species, a very realistic threat, becomes defensible. In a certain sense, it provides a philosophically tilled soil in which the idea of an absolute genocide becomes thinkable. It argues that the purpose of the universe is the evolution of ever higher forms of intelligence. At the moment we happen to be carriers. As perhaps the most highly developed intelligence in the universe, we’ve now succeeded in creating our truly worthy successors: computers. We have the tools of destruction in our hands, but we’ve sent computers into timeless, endless space, and thus, having fulfilled our destiny, we have no reason to grieve over the probable death of our species.
At precisely this time, this murderous theology invades the human mind and spirit. Those who propagate this idolatry — and that’s what it is, idolatry — and who themselves venerate the machine in the sense that I have described, who themselves can’t see what seems to me so perfectly obvious —that there is a difference between humans and machines, and between human thought and machine thought — risk in my view becoming full conspirators in the murder of God.
One of the most distinguished computer scientists in the world, Prof. J. Weizenbaum was known for his major contributions to the field of Artificial Intelligence. He authored the famous ELIZA program (best known through its DOCTOR script and similar successors), which startlingly demonstrated the possibilities for building ‘intelligent effects’ into a computer through programming. Weizenbaum was also the author of Computer Power and Human Reason: From Judgment to Calculation, in which he critically examines the far-reaching social implications of research and philosophical assumptions regarding artificial intelligence. Weizenbaum died in March 2008 at age 85.
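ELIZA’s ‘intelligent effects’ came from simple keyword matching: a user’s sentence was decomposed by a pattern, first- and second-person words were “reflected,” and the fragments were reassembled into a canned response. The following is a minimal sketch of that technique in Python; the specific patterns and replies are invented for illustration and are far simpler than Weizenbaum’s original rules.

```python
import re

# Pronoun "reflection": echo the user's words back from the
# program's point of view (ELIZA's transformation of "my" -> "your", etc.).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Decomposition patterns paired with reassembly templates, in the spirit
# of the DOCTOR script. These particular rules are illustrative only.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.I), "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Apply the first matching rule; fall back to a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I need my sleep"))  # -> Why do you need your sleep?
print(respond("I am tired"))       # -> How long have you been tired?
```

The illusion of understanding arises entirely from the reflection step: the program returns the speaker’s own words, rearranged, which is precisely the gap between ‘intelligent effects’ and intelligence that Weizenbaum spent his later career pointing out.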