Archive for July, 2011

More on intelligence, computers, and humanity

July 30, 2011

In my last post, I critiqued the notion of artificial intelligence, arguing that human intelligence is not actually what is being imitated by AI programs. Here I’d like to offer some further thoughts, and point to what I think is a pretty basic philosophical issue at stake in these sorts of discussions.

It seems to me that those who produce AI programs have a very specific goal in mind most of the time. What they want to do is “make computers do something that they’ve never done before.” When this is accomplished, the media then takes it upon itself to interpret this achievement as a sign that computers are becoming more like human beings. For example, consider the following case study, entitled “Computer learns language by playing games.” Based on this title, one might get the impression that a computer learned a language somehow. I’ll leave it to you to read the article and discover that nothing of the sort happened.

More recently, I listened to a wonderful episode of Radiolab on these topics (Radiolab, by the way, should be commended for dealing with these and many other questions in a way that transcends the popular narratives about science). It featured a segment on programmers’ efforts to make computers that can simulate a human conversation.
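To make concrete what the simplest of these conversation programs actually do, here is a minimal, hypothetical sketch in the tradition of Joseph Weizenbaum’s ELIZA. The keywords and replies below are my own illustrative inventions, not any program discussed on the show. Notice what is happening: keywords are matched and canned templates are emitted. A reply is selected, not meant.

```python
# Hypothetical ELIZA-style responder: keyword matching plus
# canned replies. No language is learned or understood.

RULES = [
    ("i feel", "Why do you feel that way?"),
    ("my mother", "Tell me more about your family."),
    ("computer", "Do machines worry you?"),
]
DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    """Return the reply for the first matching keyword, else a stock phrase."""
    lowered = utterance.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    return DEFAULT

print(respond("I feel uncertain about all of this."))
print(respond("Nice weather today."))
```

Real conversational systems are vastly more elaborate than this, but the sketch illustrates the point at issue: complexity of machinery, not some new kind of understanding, is what separates this toy from its successors.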

The episode emphasized how complex this task, like other artificial intelligence tasks, actually is. It seems to me that the idea underlying how people tend to think about these sorts of questions–questions of computers achieving human intelligence–is the idea of complexity. I talked about this in my last post, though the focus there was more on computing power. The idea is that it is difficult to make a computer mimic a human being because human beings are very complex. We are the product of billions of years of evolution, so surely that makes us pretty complex, right?

In this argument, examples are cited: think of how complicated even the most basic tasks are. Picking up a coffee cup, typing a sentence on the computer, smiling at your significant other–think of all the different things going on simultaneously in each of these. Pretty complicated, right? So simulating it is a difficult task. Thousands of little decisions and judgments have to be made in an instant.

And yet, I insist, complexity is not really the issue. The issue is philosophical, and I think I can illustrate this. People who deal in artificial intelligence like to talk about the “Turing test,” which was mentioned on the Radiolab program. The test, proposed by mathematician Alan Turing, holds that we can consider a computer to be intelligent, to be on the same level as a human, when it can hold a conversation with a human being (via instant messaging) and leave that human being unable to tell whether he is talking to a computer.

I don’t doubt that accomplishing such a feat would be difficult, perhaps impossible, and that achieving it would be a tremendous accomplishment in the field of artificial intelligence. It would require a computer whose programming is complex and nuanced in a way that has never yet been realized. At the same time, the fact that anyone would believe that passing this test constitutes evidence of having created something on the same level as a human being simply reveals, I think, the philosophical bias underlying the confusion in this whole affair.

Consider this: computers have long been able to simulate basic spatial realities. A computer can contain in its programming, and project to the world, a convincingly realistic 3-dimensional environment. In the movies, special effects can simulate reality convincingly enough that we don’t notice–we can’t always tell whether an effect is real (that is, something recorded by the actual cameras).

There is no doubt that creating these special effects is a complicated task–as is creating the complex environments of popular video games like Halo. And yet no one ever wonders whether such realities, created on computers, are real in the same way the universe is. No one wonders this, and no one wonders if they ever will be. There is no “Turing test” for simulating nature. And this is not because simulating nature is a complex task–though it is. We readily admit that computer graphics do not capture all of the nuances of nature, but we also understand that the complexity is beside the point. The real issue is basic ontology: no one thinks that, if the simulation becomes complex enough, it will suddenly be “on the same level” as material reality, existing in the same way. Why not? One could try to give many sophisticated answers, but the easy answer is the correct one: just because it is not. It is still a simulation, and it will never stop being that.

The question I want you to think about is this: why do we accept it as obvious that a computer simulation of material reality would never actually exist in the way material reality does, and yet find the idea of simulating a mind to raise all sorts of complicated philosophical conundrums about what makes a mind really a mind? We believe that even if a computer could simulate physical reality so well as to fool us completely into believing it is real, it would not follow that it is real. Yet we think, or at least give consideration to the idea, that if a computer could ever fool us into thinking it had consciousness, that it had a mind, then we would have to accept that it does in fact possess such things in whatever sense is meaningful. Why do we go one way in one case (matter), and the opposite way in the other (mind)?

My answer is the following: our unconscious commitment to materialism. If we believed in the mind as strongly as we believed in matter, then we would have no concerns that artificial minds, after reaching a certain level of complexity, would suddenly become real minds in every meaningful sense. We’d recognize this as an ontological impossibility. This is the idea I want you to take seriously, to internalize. Consider believing in the mind in this way.


My Bro’s Stories

July 3, 2011

My brother has a new webpage featuring some of his short stories. Check it out here.
